Chapter 1: Theory of Markets and Privacy

Published on NTIA (http://www.ntia.doc.gov)

A. Markets, Self-regulation, and Government Enforcement in the Protection of Personal Information
B. Privacy and Self-regulation: Markets for Electronic Privacy
C. Economic Aspects of Personal Privacy
D. Extensions to the Theory of Markets and Privacy: Mechanics of Pricing Information
E. Self-regulation on the Electronic Frontier: Implications for Public Policy
F. "Whatever Works"--The American Public's Attitudes Toward Regulation and Self-regulation on Consumer Privacy Issues
G. The Limits and the Necessity of Self-regulation: The Case for Both
H. Children's Privacy and the GII

Markets, Self-Regulation, and Government Enforcement in the Protection of Personal Information
Peter P. Swire(1)

Let's begin with a sense of the problem. Imagine that one day your bank or telephone company puts all of your
transaction or phone records up on a Web site for the world to see. Imagine, more realistically, that the
company without your permission simply sells your records to another company, for use in the latter's
marketing efforts. A broad consensus would agree that posting to the Web site is undesirable. Many people
would also object to the sale of personal information without the customer's permission.
Assuming that there can be significant problems in the protection of personal information, the next question is
which institutions in society should be relied upon to address such problems. This paper examines the chief
institutions for protecting personal information. One institutional solution is to rely on the market. The basic idea
is that the reputation and sales of companies will suffer if they offend customers' desires about protecting
privacy. An opposite institutional approach would rely on government enforcement. The basic idea is that
enforcement of mandatory legal rules would deter companies from abusing people's privacy.
A significant element of current thinking about privacy, however, stresses "self-regulation" rather than market or
government mechanisms for protecting personal information. Numerous companies and industry groups have
promulgated self-regulatory codes or guidelines for the use of personal information. This article is part of a
broader study by the National Telecommunications and Information Administration (NTIA) about the uses and
limitations of self-regulation. The NTIA has already given (somewhat qualified) support for a self-regulatory
approach for the control of personal information in telecommunications. 1
Today we face a special urgency in deciding how to use markets, self-regulation, and government enforcement
to protect personal information. There is a widespread and accurate sense that a greater amount of personal
information is being assembled in databases, and that more and more people have the computer and
telecommunications resources to access and manipulate that personal information. The economics and
technologies underlying use of personal information are fundamentally changing. These changes, in turn, make
it quite likely that we will need to change the institutional arrangements governing use of personal information.
The protection of personal information arises in a wide and growing range of industries. A partial listing might
include: health records; credit history; banking transactions; local and long-distance telephone calls; pay-per-view, VCR rental, cable, and other video records; records of an Internet service provider; and purchases made
through direct mail or telephone ordering. This paper cannot hope to determine the best mix of markets, self-regulation, and government for protecting privacy in all of these diverse industries. This paper instead provides an
analytic framework for understanding privacy issues in a wide range of industries. Armed with the analytic
framework, we will not only understand more clearly what is meant by "self-regulation," but we will identify the
empirical issues that are likely to be crucial in deciding when self-regulation should be preferred over market or
government approaches.
The structure of the paper is as follows. Throughout the paper, in order to make the analysis easier to follow,
examples will be drawn from a hypothetical "Internet Commerce Association" (ICA), whose members sell
products over the Internet. Part I lays out the pure market and pure government enforcement models for
protecting privacy, showing how either markets or government could in theory assure the desirable level of
protection for personal information. Part II highlights the important market failures and government failures that
make it unlikely that either markets or government, acting alone, will do as good a job as we would like of
achieving both privacy and other social goals such as efficiency.


If markets and government are unsatisfactory, then we become more tempted to explore self-regulatory
approaches to privacy. Part III defines "self-regulation," stressing how industry regulation has the same
separation-of-powers structure as government regulation: industry can have a special role in legislation (drafting
the rules), enforcement, or adjudication. It is not enough to be for or against self-regulation; instead, one must
be clear about whether self-regulation is desirable at each stage of the process. Once self-regulation is defined,
Part IV makes the case for why it may be better than either markets or government. Notably, self-regulation
might take advantage of industry expertise and the possibility of community norms. Self-regulation can produce
certain sorts of collective goods, such as technical standards or an enhanced industry reputation for protecting
privacy. Self-regulation can also prove useful when the alternative is mandatory and perhaps less desirable
government regulation. Part V then provides the key criticisms of self-regulation. It critiques the rationales
offered in Part IV, and examines the longstanding worry that self-regulation will promote cartel behavior and other
possible bad effects on third parties. Finally, the Conclusion summarizes the discussion and highlights the key
empirical issues for comparing markets, self-regulation, and government in the protection of personal
information.
THE PURE MARKET AND PURE ENFORCEMENT MODELS FOR PROTECTING PRIVACY
The overall task of this paper is to understand the roles of markets, self-regulation, and government in protecting
personal information. An initial step is to see how well privacy might be protected by a system based entirely on
the market--the pure market model--or entirely on the government--the pure enforcement model. 2
Under the pure market model, the incentives for industry to protect privacy are entirely financial. The
assumption, for now, is that there is no legal enforcement against a company that discloses personal
information about its customers. Customers can be directly attracted by a strong privacy protection policy or
repelled by breaches of privacy. In at least some instances, privacy may be a salient enough marketing point to
induce consumers to switch from one company to another. For example, AT&T has advertised nationally that it
will not use customer calling records to contact potential new customers, the way that MCI apparently has done
under its "Friends and Family" program. As such, a company's privacy policy may become part of its overall
marketing effort to develop brand equity and an image of quality service. Bad customer experience or bad
publicity about the company's privacy practices can detract from the company's total reputation for quality. Even
more broadly, an entire industry might be able to gain sales by developing a reputation for protecting privacy. To
take a famous example, Swiss banks as an industry undoubtedly benefitted historically from a strong reputation
for guarding customers' privacy.
In the pure market model so far described, there are two important constraints on companies' privacy policies.
The first constraint comes from consumer preferences. The more that some or all consumers are willing to change
their purchasing decisions based on privacy policies, the greater the market discipline on companies. The
second constraint comes from publicity about companies' privacy practices. Publicity affects customers' choices
by making them better informed about which companies are meeting their preferences. The prospect of such
publicity encourages companies to conform to customers' preferences. Publicity over time may also shape
consumers' preferences, such as by making them more concerned as a group about possible privacy problems.
The pure market model thus has a dynamic component, in which both customer preferences and company
practices can evolve over time as awareness and concern about privacy themselves evolve. The effectiveness of
publicity as a constraint on companies will depend on factors such as how well the media can detect privacy
problems, how widespread reporting on the issue becomes, and how strongly customers will react to the
stories.
At the opposite extreme from the pure market model is the pure enforcement model. The assumption here is
that market discipline is largely or entirely ineffective at protecting individuals' privacy. Instead, vindication of
individuals' privacy rights occurs through legal enforcement. Privacy rules are defined by the government,
whether by statute, agency regulation, or decision of the courts. Designated parties, such as a government
agency or the citizen who has been wronged, are allowed to sue to enforce those rules. The suits seek to
achieve the twin goals of compensation and deterrence. Compensation takes place when the individual whose
privacy is violated is paid to the extent of the violation. Deterrence is focused on the incentives of the
corporation--the corporation that violates privacy should face an expected cost for violating privacy (in the form of
compensatory payments plus fines) that exceeds its expected benefit from its bad privacy practices.
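The deterrence condition can be stated compactly. The following is a minimal formalization; the notation is ours, for illustration, and does not appear in the original paper:

```latex
% Deterrence under the pure enforcement model. Let B be a company's
% benefit from a privacy violation, p the probability the violation is
% detected and successfully pursued, D the compensatory damages paid to
% the victim, and F any fine. Deterrence requires
\[
  p\,(D + F) \;>\; B .
\]
% When detection is imperfect (p < 1), damages plus fines must exceed the
% benefit by a factor of 1/p for the expected cost to outweigh the gain.
```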

LIMITATIONS OF THE PURE MARKET AND PURE ENFORCEMENT MODELS


In theory, either the pure market or the pure enforcement approach could lead to optimal protection of privacy. If
market discipline is strong enough, then companies will find it unprofitable to use personal information in ways
that customers find objectionable. If the legal rules are correctly defined, and enforcement is effective enough,
then companies will similarly be deterred from violating customers' privacy. In practice, there are important
limitations upon the extent to which either markets or legal enforcement will protect privacy. This section of the
paper discusses some key market failures and government failures that arise in the protection of privacy. Once
the nature of these failures is appreciated, we will be in a better position to explore the uses of self-regulation.

Market Failures
The extent of market imperfection is measured against the goals of privacy protection--how much do the actual
workings of the market differ from the ideal? The privacy literature to date has emphasized individuals' personal
or human rights to control information about themselves. This human rights approach is especially prominent in
the regime of data protection in Europe. The approach was developed primarily with respect to data collection by
governments, where individuals are subject to the coercive power of the state and forced to reveal sensitive data.
The topic of self-regulation, by contrast, arises with respect to data collection and use by non-governmental
enterprises. 3 A thesis of my own ongoing research is that data collection by private enterprises should be
examined in terms of the contractual relationship between the company and the customer. 4 Examples include
the deposit contract a customer has with a bank, or terms affecting privacy in a contract for sale with a member
of the Internet Commerce Association. For reasons that will be explained more fully in my forthcoming work,
there are important advantages to analyzing the privacy issues of private companies as a matter of contract. Not
least of these is the simple fact that the legal relationship between consumer and company has historically
been treated under the law of contracts. Any rules protecting customers' privacy will need to be integrated with
that body of law.
Market failure can be defined with respect to either the human rights or contractual approaches to the protection
of personal information. Under the human rights approach, the goal is to protect individuals' right to privacy
according to the moral theory that defines the right. A pure market model will fail to the extent that it protects
privacy less well than is desirable under the moral theory. Under the contractual approach, the primary goal is to
understand what well-informed parties would agree to, if there were no costly hurdles to their reaching an
agreement. A pure market model will fail to the extent that it protects privacy less well than these parties would
have agreed to, if they were fully informed and had some equality of bargaining power. The focus of the
discussion here will be on market failure under the contractual approach. 5
The key market failures with respect to privacy concern information and bargaining costs. The information costs
arise because of the information asymmetry between the company and the customer--the company typically
knows far more than the customer about how the information will be used by the company. A member of the
ICA, for instance, would have ready access to details about how customer information will be generated,
combined with other databases, or sold to third parties. The customer may face significant costs simply in
trying to learn and understand the nature of a company's privacy policies.
The costs of learning about companies' policies are magnified by the difficulty customers face in detecting
whether companies in fact are complying with those policies. Customers can try to adopt strategies for
monitoring whether companies have complied. For instance, if a person contracted with several companies that
promised not to sell her name to third parties, she could report a different middle initial to each company. She
could then identify the company that broke the agreement by noticing the middle initial that later appeared on an
unsolicited letter or e-mail. These sorts of strategies, however, are both costly (in time and effort) and likely to
be ineffective. A member of the ICA, for instance, could use existing technology to cross-check her address with
her real name, and thereby insert her correct middle initial.
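To make the monitoring strategy concrete, here is a minimal sketch in code. The company names, the marker scheme, and the matching step are hypothetical illustrations, not anything from the paper:

```python
# Sketch of the "different middle initial" strategy described above:
# give each company a distinct marker at sign-up, then see which marker
# appears on unsolicited mail. All names and companies are invented.
import string

companies = ["AcmeBooks", "WebWidgets", "NetGrocer"]

# Assign each company its own middle initial when registering with it.
markers = {company: string.ascii_uppercase[i]
           for i, company in enumerate(companies)}
# -> {"AcmeBooks": "A", "WebWidgets": "B", "NetGrocer": "C"}

def trace_leak(initial_on_unsolicited_mail: str) -> list:
    """Return the companies whose marker matches the middle initial
    that later appeared on an unsolicited letter or e-mail."""
    return [c for c, m in markers.items()
            if m == initial_on_unsolicited_mail]

# A mailing from an unknown marketer arrives addressed to "Jane B. Doe":
print(trace_leak("B"))  # ['WebWidgets'] -- the apparent seller of the name

# The paper's caveat: a buyer of the list can cross-check the address
# against other databases and restore the real initial, in which case
# the marker disappears and trace_leak() implicates no one.
```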
The cost and ineffectiveness of monitoring logically leads to over-disclosure of private information. Consider the
incentives facing a company that acquires private information. That company gains the full benefit of using the
information, notably in its own marketing efforts or in the fee it receives when it sells the information to third
parties. The company, however, does not suffer the full losses from disclosure of private information. Because of
imperfect monitoring, customers often will not learn of that use. They will not be able to discipline the company
efficiently in the marketplace for its less-than-optimal privacy practices. Because the company internalizes the
gains from using the information, but can externalize a significant share of the losses, it will have a systematic
incentive to over-use private information. In terms of the contract approach, companies will have an incentive to
use private information even where the customers would not have freely bargained for such use.
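The over-use incentive can be made explicit with a little notation (again ours, added for illustration):

```latex
% Let g be the company's gain from using or selling a customer record,
% h the harm to the customer, q the probability the customer detects the
% use, and L the sales the company loses when it is caught. The company
% uses the information whenever its private gain exceeds its private cost:
\[
  g \;>\; q\,L .
\]
% Efficiency, by contrast, would permit the use only when g > h. With
% imperfect monitoring (q small), the private cost qL understates the true
% harm h, so uses the customer would never have bargained for still occur.
```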
Imperfections in consumers' ability to learn about and monitor a company's privacy policies are not the only
problem. The difficulties are exacerbated by the costs of bargaining for the desired level of privacy. It is a daunting
prospect for an individual consumer to imagine bargaining with a distant Internet marketing company or a huge
telephone company about a desired privacy regime. To be successful, bargaining would likely require a
considerable degree of expertise in privacy issues, as well as a substantial commitment of time and effort. The
cost of this elaborate bargaining process is likely to exceed the incremental benefit in privacy to that citizen. 6
The temptation for the ordinary consumer will be to free ride, and hope that someone else will negotiate a more
favorable privacy regime. In addition, the benefits of the bargain would be undermined by the cost and difficulty,
already discussed, of monitoring the company's compliance with its announced privacy policies.

Government Failures
These substantial market failures must be considered together with substantial governmental failures. The pure
enforcement model above posits a rosy picture of government regulation in which optimal rules are enforced with
perfect accuracy, all at minimal cost. Even for government's greatest supporters, the real world of government
regulation is likely to appear considerably different. Once we better understand both market and government
failures, we will see more clearly the attraction of a self-regulatory approach to privacy protection.
Government failures where officials seek the public interest. In order to understand the most important
types of governmental failure, assume for the moment that the government actors are public spirited. That is,
assume that the people drafting and enforcing the rules are competent, well-informed, and wish to achieve the
public good in the area of privacy protection. 7 Even under these optimistic assumptions, government privacy
regulation will lead to administrative costs on government and taxpayers, and compliance costs on industry.
Administrative costs include the expense to the government of drafting privacy rules, administering the rules,
and enforcing the rules in particular cases. In the modern state, all of these functions might take place within a
particular government agency. For instance, a rule might be promulgated by the agency under the Administrative
Procedure Act, administered by agency personnel, and adjudicated by an Administrative Law Judge. It is also
possible for mandatory government rules to take place outside of an agency, such as when rules are drafted in
the legislature and enforced in a court. No matter how these functions are allocated between the branches of
government, taxpayer funds are usually needed to pay for the government regulatory activities. The amount of
funding can clearly be substantial.
Industry will incur a variety of costs in complying with the government regulation. It would not be accurate,
however, to say that all costs incurred by industry are a measure of governmental failure. Where privacy rules
are well drafted, the government regulatory system will have net benefits compared to a system without
regulation. That is, the gains resulting from compliance with the regulations will outweigh the costs incurred by
the company in molding its behavior to the regulation. For regulation of ICA members, a particular disclosure
rule might have relatively small costs to industry, such as the cost of placing the privacy disclosure forms on
their Web site. The rule might also have relatively large benefits to consumers, such as if the disclosure enables
a significant number of customers to choose a level of privacy protection that they prefer. In considering this sort
of net-beneficial rule, governmental failure arises to the extent that a different rule would have even lower
compliance costs for industry or even greater benefits for consumers. 8
Although the range of possible compliance costs is wide, it is helpful to mention a few that may be especially
relevant to the privacy discussion. One important factor in determining the size and type of compliance costs is
the degree of precision in the regulation. 9 Enforcement by the government can be based on fairly precise rules,
stated in advance. These sorts of rules give clear notice to industry of what is expected, and it is relatively
inexpensive to determine whether industry has violated a precisely-stated rule. The chief problem with precise
rules is that they tend to be both over- and under-broad. They are over-broad whenever there are net benefits
from using the information, but the rule prohibits such use. A rule, for instance, might prohibit uses of personal
information that consumers, if asked, would approve. Rules are under-broad whenever there are net costs from
using the information, but the use is nonetheless allowed. The rule, for example, might instead let a company
use information in ways that a customer would find highly objectionable. One way to avoid the over- and under-breadth problem is by using vague standards instead, such as the injunction to "act reasonably under the
circumstances." These vague standards create their own compliance costs, however. Industry lacks clear notice
of what is expected, and expensive trials may be needed after the fact to determine what was reasonable in a
particular case. In short, where rules are either precise or vague, there are likely to be significant costs to
industry in complying.
Another compliance cost to industry arises from the inflexibility of government rules. Simply put, it is often
difficult to change government rules, even when there is a consensus in the agency and policy community that
such change is appropriate. Anyone experienced in Washington is likely to have favorite examples of this
inflexibility. 10 The problem of inflexibility is likely to be particularly acute during a period of rapid technological
and market change--rules promulgated under one set of assumptions will make less sense when the technical
and economic realities change. Today, the uses of personal data seem to be undergoing just this sort of rapid
change. Vast amounts of public records are coming on-line, new industries are arising to mine for public and
private data, and advances in computers and telecommunications are distributing the ability to create customer
profiles to an unprecedented array of users.
Today's rapid changes present a dilemma for those interested in creating legal rules to protect privacy. On the
one hand, the inflexibility of government rules suggests that rules passed today may create substantial
compliance costs, because the rules will not adapt smoothly enough to changing market and technical realities.
On the other hand, the heightened risks to privacy lead many to conclude that the need for mandatory rules is
greater than before. In assessing the degree of government failure, an important question will thus be the degree
to which legal rules can keep up with changes in markets, technology, and the protection of privacy.
Government failures and public choice problems. The discussion of government failures to this point has
assumed that the government officials are competent and seek to achieve the public good in the area of privacy
protection. If government officials are incompetent, then it follows that the costs of regulation will likely be
greater and the benefits smaller. Perhaps of even greater importance, government officials may not faithfully
follow the public good. Instead, as emphasized by public choice theory, officials may be influenced by powerful
interest groups, or may themselves seek other goals, such as an increase in their agency's turf. 11
In considering the effects of interest groups on privacy law, it is not necessarily clear whether the political
process will tilt toward either the industry or consumer position. First consider what will occur when the industry
position dominates politically. We will expect less thorough regulation to protect privacy than would be
promulgated in the public interest. The industry might succeed, for instance, in having government enact the
precise rules that the industry itself would write under self-regulation. Indeed, government rules could be even
more protective of industry than self-regulatory ones. Industry has an incentive to use government rules as a
shield to preempt any contrary laws. An example is the ability of the tobacco industry to preempt many lawsuits
by complying with the warning requirements of a 1969 federal statute. 12 If the federal statute did not exist, the
tobacco industry would have been under greater pressure to regulate itself, and would have faced greater liability
under evolving state statute and tort law. For privacy advocates, the tobacco story can serve as a warning
against a too-ready conclusion that some mandatory regulation is better than none. At a minimum, such
advocates should consider the effect that passage of mandatory regulation will have on how the field of law
would otherwise develop.
In the alternative, consider if the forces favoring regulation dominate politically. Although some observers might
find this possibility remote, the debates about regulatory reform show a wide range of parties who claim that the
costs of regulations often exceed their benefits. One way such regulations might be passed is by a coalition of
regulatory advocates and government officials (legislators and regulators) who do not themselves incur the costs
of complying with the regulation. 13 Another possibility is that some companies or industries might succeed in
pressing for regulations that impose costs on their competitors. A third possibility is that the government agency
may systematically over-estimate the benefits of regulation, whether out of sincere mistake or a less honorable
desire to increase the agency's turf.
Without seeking to take a general position on whether there is under-regulation or over-regulation, this
discussion of possible public choice problems identifies a series of possible governmental failures. To the extent
these government failures affect the nature of privacy regulation, there will be greater reason to seek non-governmental approaches for guarding privacy.

DEFINING "SELF-REGULATION:" LEGISLATION, ENFORCEMENT, ADJUDICATION


The pure market and pure enforcement models make no mention of self-regulation, and need not rely on self-regulation in order to reach the desired privacy protection. Examination of market failures and government
failures, however, shows that the pure models bear little resemblance to reality. Because both market and
government efforts to protect privacy are subject to significant limitations, the question arises whether a different
approach, such as self-regulation, might create the reasonable protection of privacy without excessive cost.
Before further examining the rationales for self-regulation, we must first be more specific about the meaning of
the term "self-regulation." Self-regulation, like government regulation, can occur in the three traditional
components of the separation of powers: legislation, enforcement, and adjudication. Legislation refers to the
question of who should define appropriate rules for protecting privacy. Enforcement refers to the question of who
should initiate enforcement actions. Adjudication refers to the question of who should decide whether a
company has violated the privacy rules.
An industry-organized process can "regulate" at one or more of the three stages. Probably the greatest amount
of self-regulation occurs at the legislative stage. Industry groups often create and issue codes on privacy and
many other topics. The Direct Marketing Association and Consumer Bankers Association, among many others,
have issued guidelines for good privacy practices. These guidelines often provide for no legal enforcement, but
instead are simply made available to industry members, government agencies, and the general public. In other
instances, industry-drafted rules are enforceable. For example, building codes adopted by local and state
governments routinely incorporate technical industry standards by reference--a violation of the "self-regulatory"
code is itself a violation of law.
Enforcement and adjudication can also be undertaken by industry organizations. Prominent examples include
state bar associations, medical boards, and the National Association of Securities Dealers. These organizations
can typically both bring enforcement actions against their members and judge that professionals should be fined
or stripped of their license to practice. In situations such as these, government regulation and self-regulation can
be mixed together in almost endlessly complex ways. For instance, the rules that govern a lawyer's conduct
may be a mixture of government-defined law (statutes) and self-regulatory law (bar association rules).
Enforcement might be by an individual complainant, the bar association itself, or a government prosecutor.
Adjudication might be by the organization itself, members of the profession officially appointed to a state board,
or state agency personnel. Even when adjudication initially includes the self-regulatory organization, there may
be an appeal to a government agency or to the courts.
These examples of "self-regulation" should make the basic point clear: Industry can be involved at one or any
number of points in the process of legislating, enforcing, or adjudicating the rules. In the privacy context, one
can imagine the Internet Commerce Association in the multiple roles of defining privacy rules, taking
enforcement action against those who violate the rules, or deciding that a member has violated industry
standards. In the latter instance, for example, the member might no longer be permitted to use the "ICA Seal of
Good Privacy Practices." One should not speak too freely about the advantages or disadvantages of "self-regulation" generally. Instead, one should see whether and under what conditions industry has a particular,
positive role to play at each stage of creating and enforcing the applicable regime.

THE CASE FOR WHY SELF-REGULATION MAY IMPROVE ON MARKET OR GOVERNMENT APPROACHES
Now that we have defined self-regulation, we are in a position to explore why it might be better than pure market
or government approaches to the protection of personal information. First, self-regulation may provide benefits to
society compared with an otherwise-unregulated market. Self-regulation can build on the collective expertise of
industry. An industry might help instill ethics in members of the industry about the importance of protecting
personal information, and community norms might reduce the amount that privacy is invaded. Members of an
industry acting together might also be able to supply collective goods that they would not be able to supply
acting alone. For instance, self-regulation might promote the reputation of the industry as a whole, and it might
facilitate the creation of technical standards that will benefit the industry itself and society more generally. In
addition, self-regulation may be better than a pure government solution. The same factors that can make self-regulation better than the market may also make it better than government. Self-regulation may also be adopted
in order to stave off mandatory government regulation, and may thereby gain some of the good attributes of both
government regulation and industry participation.

Reasons Why Self-Regulation May Benefit Society Compared with the Market
To explore these possible benefits, we will first build the case for self-regulation, and then explore reasons that
might make the case less persuasive. The argument for industry expertise is intuitive and straightforward.
Members of the industry have a great deal of knowledge about how customer information is used and sold. In
assessing the cost-effectiveness of privacy practices, industry will have special insights about the costs of
complying with rules. Industry will also understand the rules' effectiveness in preventing the dissemination of
customer information. If any sort of regulation is indicated, then accurate information from the industry will be
vital to making the rules as cost-effective as possible.
A different argument for self-regulation focuses on the role of an industry or profession in creating and enforcing
norms of behavior. 14 These norms are not legally enforceable, but may be taught or absorbed as part of
professional training. The ICA, for instance, might require companies to have their personnel trained in the ICA
privacy guidelines. 15 Once a person enters an industry or profession, the norms can be enforced both internally
and externally. The internal enforcement takes shape in what we call a person's ethics, scruples, or just plain
unwillingness to do certain things. There will be situations where a person or firm can profit from disclosing client
information, but scruples about privacy prevent the disclosure from occurring. The nature and empirical effect of
these scruples are difficult to determine; the stronger the societal norms against disclosure, however, the more
likely that companies will at least sometimes protect privacy rather than maximizing profits. 16 The external
enforcement occurs when members of a community monitor and discipline those who violate the community
norms. An every-day example is when a group of people refuse to speak with someone because he or she is a
gossip; i.e., the gossip is disciplined for disclosing private information. For the ICA, a company that violates
privacy norms may find itself punished in a variety of informal ways, such as by having company personnel
shunned at conferences. Once again, the empirical effect of community norms may be difficult to determine, but
in theory strong norms can be an effective complement to market discipline and government enforcement.
Members of the industry might engage in self-regulation on a disinterested basis--they may wish to get the rules
right, or may have ethical beliefs that certain sorts of private information should not be disclosed. Members of
the industry may also find it in their collective self-interest to promulgate and enforce regulations. An important
example is where self-regulation can enhance the overall reputation of the industry. Consider how this
reputational issue might arise for the new Internet Commerce Association. Consumers will have concerns that
Internet commerce will not be secure (i.e., hackers will steal their credit card numbers) and private (i.e.,
merchants will disseminate personal information widely). In order to allay these concerns, members of the ICA
may find it useful to promulgate a Code for Internet Commerce. The ICA might educate consumers about the
Code, and individual members could let purchasers know that they adhere to the Code. The ICA might even
expel members that violated the Code, or sue companies in court for falsely claiming to adhere to it.
Notice how this hypothetical Code builds on the previous discussion. Drafting and enforcement of the Code
relies on industry expertise. The Code might be enforced in part by individual ethics and community norms. And
individual firms may find it highly profitable to pay dues to the ICA in order to subsidize a collective good--the
Code enhances the overall industry reputation and reduces the risk that consumers otherwise perceive in doing
business on the Internet.
Technical standards are another prominent example of a collective good that may be beneficial to both industry
and society at large. A great number of standard-setting organizations foster self-regulation--the American
National Standards Institute, the Institute of Electrical and Electronics Engineers, and many more. 17 A key role
of technical standards is to provide what economists call "network externalities." 18 The most familiar example of
a network externality is the telephone system--if everyone is hooked up to the same system, the value of
telephones rises for everyone. When new people hook into the telephone network, the new members benefit
from being part of the network. Additional benefits--external to the new members--are realized by existing
members of the network, who can now communicate with a larger number of people.
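The externality is easy to quantify in a stylized way (illustrative notation, not from the paper):

```latex
% Suppose each member of a network gains value v from each other member
% it can reach. With n members, total network value is
\[
  V(n) \;=\; v\,n\,(n-1).
\]
% When one more member joins,
\[
  V(n+1) - V(n) \;=\; 2\,v\,n ,
\]
% of which the newcomer captures only vn. The remaining vn accrues to the
% existing members -- the external benefit described in the text.
```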
The creation of technical standards can lower costs and increase competition in numerous ways. 19 The case for
such self-regulation is especially strong, however, where there are important network externalities. In such
instances individual companies working alone cannot create the same amount of benefits to all users. For
instance, in order to lower the cost of processing transactions, the Internet Commerce Association might
develop a technical protocol for transmitting information among participating companies. No one company could
similarly save costs by adopting the protocol--the benefits arise from the fact that many different companies
adopt it. To take another example, the ICA might develop a standard form for consumers who wish to opt out of
uses of their personal information. It might also act as a clearinghouse for forwarding the forms to all ICA
members. Consumers could thereby express their privacy preferences once, and benefit from having those
preferences recognized by the full network of ICA members.
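A short sketch may make the clearinghouse idea concrete. Everything here is hypothetical: the ICA, the form fields, and the member interface are inventions for illustration only:

```python
# Hypothetical sketch of the ICA opt-out clearinghouse described above:
# a consumer files one standard form, and the clearinghouse forwards it
# to every member. The record format and member API are invented.
from dataclasses import dataclass, field

@dataclass
class OptOutForm:
    consumer_name: str
    consumer_email: str
    opt_out_of: list  # e.g. ["third-party-sale", "marketing-mail"]

@dataclass
class Member:
    name: str
    suppression_list: list = field(default_factory=list)

    def receive(self, form):
        # Each member records the preference on its own suppression list.
        self.suppression_list.append(form)

class Clearinghouse:
    def __init__(self, members):
        self.members = members

    def file(self, form):
        # A single filing is forwarded to the full network of members.
        for member in self.members:
            member.receive(form)

ica = Clearinghouse([Member("AcmeBooks"), Member("NetGrocer")])
ica.file(OptOutForm("Jane Doe", "jane@example.com", ["third-party-sale"]))
assert all(m.suppression_list for m in ica.members)
```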

Reasons Why Self-Regulation May Benefit Society Compared with Government Enforcement
Many of the points that make self-regulation potentially better than the market also make it potentially better
than government enforcement. Industry expertise might not be given its full effect in a government-controlled
system. Individual ethics and community norms might be more effective when arising from the community itself
than when mandated by government agencies. Poorly-considered government rules might also interfere with the
ability of industry to create collective goods such as technical standards or a strong industry reputation. For
instance, the ICA might not be able to implement certain technical standards, which would improve the industry
reputation, if mandatory government rules prevent sensible and cost-effective standards from being adopted.
There is an additional, powerful reason that it might be in industry's interest to self-regulate--in order to stave off
mandatory government regulation. Consider how members of the ICA might rationally prefer an unregulated
market to a market with self-regulation. As discussed above, companies can profit from using and selling
personal information in an unregulated market, in large part because customers have difficulty in monitoring
which companies have bad information practices. Members of the ICA might thus prefer no regulation to self-regulation, at least until a credible threat of government regulation arises. At that point, the calculus for industry
changes. Adopting self-regulation will tend to reduce the likelihood of government regulation. The expected cost
to the industry of self-regulation may thus be lower than the expected cost of complying with government
regulation.
Industry is often quite explicit that the threat of government regulation is what spurs the adoption of self-regulation. 20 If one is an optimist, it is possible to believe that this sort of self-regulation is the best possible
solution. The self-regulation can draw on industry expertise and on the legitimacy of community-based norms.
We might expect the self-regulation to be strict about protecting privacy, on the theory that only a reasonably
strict rule will persuade government not to step in. We can thus hope for the advantages of self-regulation and of
strict government regulation, but without some of the disadvantages of government regulation, such as inflexible
rules and costly, formal enforcement processes.

THE LIMITS OF SELF-REGULATION: CARTELS AND CRITIQUING THE BENEFITS OF SELF-REGULATION
We have now seen the case for how self-regulation may be better than the market because of industry
expertise, community norms, and the provision of collective goods such as industry reputation and technical
standards. Self-regulation may be better than government regulation for the same reasons, and also because of
the possibility that the threat of government regulation will produce effective self-regulation at lower cost than a
mandatory government regime.
In making the case for self-regulation, the emphasis was on situations where self-regulation would benefit the
industry as a whole, such as by enhancing the industry's reputation or establishing technical standards that
would profit the industry. An implicit assumption was that persons outside of the industry would not be
significantly harmed by the industry's efforts. Now we shall relax that assumption, and examine the principal
ways in which industry regulation may benefit the industry but harm outsiders. The traditional concern about
self-regulation has been that the industry would harm outsiders by creating a cartel or otherwise exercising
market power. In the privacy setting, an additional important concern is that self-regulation might be designed by
industry for its own benefit, but that the privacy concerns of customers will not be effectively considered within
the industry process. The discussion here will briefly examine the antitrust issues, and then critique each of the
reasons given so far for why self-regulation should be the preferred institutional approach.

Cartels and the Possibility that Self-Regulation Will Be Used to Wield Market Power
Other papers in this NTIA report address the connection between antitrust law and self-regulation, and the
comments here on the topic will be relatively brief. A first observation is that it is easy to see how self-regulation
can lead to the risk of cartels--a cartel agreement, after all, is precisely an agreement by members of an
industry to regulate their own sales. According to standard economic theory, cartels tend to increase industry
profits by raising prices. Cartels are difficult to administer, however. Members are tempted to cheat to gain
market share, such as by secretly lowering the price or raising the quality of the goods sold. In order to help
cartel members police each other, cartels work best with standardized products at clearly-stated prices. Cartels
thus do more than raise price. They also tend to stifle innovation and reduce the range of quality and choice
available to customers.
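The cheating incentive is the familiar prisoner's dilemma. With invented per-firm profits for illustration:

```latex
% Illustrative two-firm cartel payoffs (numbers are invented). Each cell
% shows (Firm 1 profit, Firm 2 profit).
\[
\begin{array}{r|cc}
                     & \text{Firm 2 abides} & \text{Firm 2 cheats} \\ \hline
\text{Firm 1 abides} & (10,\ 10)            & (4,\ 14) \\
\text{Firm 1 cheats} & (14,\ 4)             & (6,\ 6)
\end{array}
\]
% Whatever the rival does, cheating pays more (14 > 10 and 6 > 4), so the
% cartel unravels unless members can detect and punish defection -- hence
% the value of standardized products and clearly stated prices.
```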
The asserted benefits of self-regulation in any setting, therefore, must be weighed against the risk that industry
members are acting together to exercise market power. The extent of the risk will depend heavily on the
structure of the underlying market. At one extreme are cases where the antitrust risks are low, such as where
there are low barriers to entry and the self-regulation does not increase barriers to entry. If starting a business
on the Internet, for instance, primarily involves the low cost of writing a Web page, then rules of the Internet
Commerce Association are unlikely to have major antitrust implications. At the other extreme are cases where
self-regulation moves a market from competition toward monopoly--the regulation might reduce competition
among members of the industry association, and also block entry by new competitors. An example might be a
rule that somehow prevented sellers from using the Internet unless they agreed to join the cartel. In such an
instance, the benefits of self-regulation would seem more doubtful when weighed against the likelihood of higher
prices and lower quality for consumers.
The earlier discussion of market failures focused on the inability of customers to detect abuses of private
information. Where customers cannot easily monitor privacy practices, a company's reputation does not suffer
fully for bad data protection practices, and the company has an incentive to over-use private information. The
existence of monopoly power provides an additional way that the market will not discipline a company's use of
private information. Even if customers know that the monopoly has bad information practices, they may have no
ready way to avoid doing business with the monopoly. The traditional policy response to the existence of such
monopolies has been either to seek to end the monopoly or else to regulate it as a public utility. How to regulate
the use of private information by utilities is, itself, a complicated inquiry within antitrust law. For purposes of this
paper, the important point is that the existence of monopoly power can be the sort of market failure that can
justify government regulation of the use of private information.

Critiquing the Asserted Benefits of Self-Regulation


The next task is to scrutinize the arguments that have been put forward to justify self-regulation: industry
expertise; community norms and ethical values; enhancing industry reputation; technical standards; and self-regulation as an alternative to threatened government regulation. The discussion here will seek to highlight the
analytical and empirical issues that will be important to determining the role of markets, self-regulation, and
government regulation.
Industry Expertise. There is wide consensus that industry expertise should be brought to bear in designing
rules for protecting personal information. As with other regulatory issues, industry will have unique access to
information about the underlying technology and market conditions, and about the costs of complying with
alternative regimes.
It is less clear, however, that our belief in industry expertise also means that we should favor self-regulation over
market or government approaches. In a market approach, each company has the usual incentive to apply its
expertise in order to maximize profits. All of the company's efforts to use its expertise will ordinarily inure to the
profit of that company itself. By contrast, it will only sometimes be in the self-interest of a company to employ
its expertise as part of an industry-wide effort to develop self-regulation. The industry-wide regulation will be a
collective good to the industry. The individual companies will have the usual incentives to free-ride and let other
companies suffer the expense of organizing the effort. Companies engaged in the process may also suffer by
letting competitors learn about their business operations, or by undergoing special scrutiny of their privacy
practices. In short, a member of the ICA might rationally "lie low" and fail to share its expertise, especially if it
wished to continue profiting from the use of personal information.
In comparison to the market, then, the case for self-regulation will depend on there being an explanation of why
the expertise is provided, and how that expertise will take shape in the form of well-drafted and effective self-regulation. In comparison to government regulation, the case for self-regulation must take account of the ways
industry expertise is mobilized in the government setting. Industry representatives are deeply involved in the
process of drafting statutes and regulations. As a formal matter, industry representatives are almost always
included as witnesses at legislative hearings and as sources of information for agency efforts such as this
Report by the NTIA. Under the Administrative Procedure Act, interested parties have the right to comment on
proposed rulemakings, and the agency is required to respond to those comments. On a less formal level,
industry expertise is made available to government in a wide range of lobbying and educational contexts.
It is no simple task to compare how well industry expertise is included in self-regulation and government
regulation. The case for self-regulation will stress how industry might be more forthcoming to an industry group
than to a formal government process. Discussing industry issues with the government often means disclosing
that information to the world, in light of the requirements of the Freedom of Information Act, the Federal Advisory
Committee Act, and other government-in-the-sunshine laws. The formality and publicity of sharing information
with government thus might favor a self-regulatory approach. On the other hand, an advantage of government
regulation is that it systematically takes account of the views of those outside of the industry. An obvious worry
about self-regulation is that the rules will be drafted to favor industry, such as by allowing greater industry use of
personal information than a more inclusive process would have permitted. The effect on other parties highlights
the possibility that those outside of the industry will have relevant expertise.
Community norms and ethical values. The next argument for self-regulation focuses on the role of an
industry or profession in creating and enforcing norms of behavior. The idea is that individuals may feel ethical
constraints against misusing customers' personal information. In addition, those who do disclose such
information may be subject to non-legal sanctions from the community. This sort of self-regulation naturally
complements a market approach. Individuals and companies in the industry will protect their reputation, not only
in the eyes of consumers (the market approach), but also in the eyes of their own professional community (the
self-regulatory approach). Compared to government regulation, it is plausible that self-regulation will do a better
job of inducing voluntary compliance with norms--a sense of honor or ethical pride in adhering to high standards
might be diluted if enforcement is done through bureaucratic rules and procedures.
That said, there is serious doubt about how well community norms will protect personal information in the
modern settings relevant to protecting personal information. First, our usual intuition is that informal norms work
most effectively in small groups. In these settings, the members interact with each other repeatedly, information
about disreputable acts spreads widely, and each person has reason to care about his or her reputation with
each of the others. By contrast, the modern issues about protecting personal information tend to occur in
nationwide or even global settings. If an individual or company acts in a disreputable way and profits from the
sale of personal information, it is quite possible that no one outside of the companies using the information will
know of the act. Even if some other persons learn of the distasteful act, those persons might be geographically
distant or otherwise outside of the social circles that would express outrage upon learning of the act.
A related reason to doubt the effectiveness of norms is that many decisions about uses of personal information
are done as a matter of corporate policy rather than individual decision. An individual professional might decide
to accept lower profits for the sake of upholding ethical principles. That ethical decision might be bolstered by
the individual's awareness that his or her personal reputation would be on the line if any unethical behavior
became known. By contrast, a similarly ethical person acting within a corporation might be required to justify a
policy in terms of how it will increase the company's profits. That person would also know that blame for the bad
act would fall on the company as a whole, rather than on him or her personally. When decisions about the
protection of personal information are diffused widely across a large corporation, it seems unlikely that
community norms will be a powerful constraint on the company's incentive to maximize profits.
Enhancing Industry Reputation. We next turn to the argument that the industry might promulgate and enforce
regulations in order to enhance the overall reputation of the industry. In the discussion above, we considered
how the ICA might create a Code for Internet Commerce in order to allay customer concerns about security and
privacy. The idea is that it will be in the interest of individual ICA members to comply with this Code. This self-
interest will exist precisely when the profits from the improved industry reputation outweigh the losses from the
company not being able to use personal information. A familiar example might be the history of banks in
Switzerland (or other countries), where all members of the industry benefited from a strong reputation for keeping
bank records private. 21
The chief task with respect to the industry reputation argument is to specify the conditions under which the
industry would actually provide the collective good. A first thing to notice is how much maintaining the industry
reputation for protecting privacy resembles the task of maintaining a company's reputation for protecting privacy.
A concern in both instances is that the market does an imperfect job of policing the reputations--it is difficult for
consumers to detect when a company or an industry has misused personal information, and so the companies
and industry have incentives to over-use that information.
The incentive for industry to create the collective good is especially great when customers can tell that someone
in the industry has misused personal information, but cannot tell which company in the industry has done so.
For instance, one might imagine a circumstance in which a customer could tell that some problem has arisen in
connection with an Internet purchase, perhaps because the personal information was linked to that person's e-mail information. The customer does not know, however, which Internet company misused the information. In
such a case, the customer might become less willing to use the Internet generally for purchases. Members of
the Internet Commerce Association would then have a collective interest in enhancing the reputation of Internet
purchasing, and might act together as an industry to promulgate an effective Code for Internet Commerce.
A different way to create the collective good is where the reputation of a dominant company or a small set of
companies overlaps substantially with the reputation of the entire industry. In such a case, the leading company
or companies may find it in their self-interest to lead the way to an industry-wide Code of privacy practices. 22
This "leading company" scenario may help in protecting privacy, if the result of promulgation of the Code is to
spread better data protection practices more widely in industry. 23 The Code might help reduce the likelihood of
companies seeking competitive advantage by cutting corners on data protection policies.
In general, it would seem that such efforts to enhance the industry's reputation for privacy would be a helpful,
although perhaps modest, supplement to market competition. The main objections to the argument are not that
enhancing industry reputation is a bad thing. The main concern instead is that the collective good simply will not
be created that often--as explained here, industry members will only promulgate a Code and enforce it under
fairly restrictive conditions. In addition, an industry member might still find it in its self-interest to break a privacy
rule, when the loss of reputation is spread across the entire industry.
Technical standards. As discussed above, a prominent form of self-regulation involves a different sort of collective
good: the creation of industry technical standards. Such standards might provide a variety of benefits compared
to a market lacking such standards. One can imagine the ICA developing a standard electronic form, for
instance, that would lower the costs to members of sharing personal information. The same form might also
provide an inexpensive way to let customers opt out of having their information shared.
Intricate antitrust issues can arise over when the benefits of standard-setting procedures are outweighed by
possible anticompetitive harms. In the Internet context, Professor Mark Lemley has recently argued that joint
standard-setting activity raises the most acute antitrust risks in two settings: (1) where the standards are
"closed" rather than "open;" i.e., where access to the standards is limited to members of the organization; and
(2) where a particular participant "captures" the standard-setting process and uses the process to its
competitive advantage. 24 An example of the latter is if the standard requires use of intellectual property owned
by one participant. In the privacy context, it is not immediately apparent that either of these situations is likely
to occur. If not, then the antitrust concerns about standard-setting are not likely to be a prominent argument
against self-regulation.
A more pressing privacy problem is likely to result from the relatively small role that customers and others
outside of industry often play in the creation of industry standards. For many technical standards, where
negative effects on outsiders are small, the standards should indeed be drafted by the industry experts who are
most affected by the rule. For instance, if the ICA creates standard forms that simply reduce the cost of doing
business, then it seems unlikely that the government could do a better job. In other instances, however, effects
on outsiders may be substantial. Imagine, for instance, if the ICA standards made it much easier for merchants
to discover highly sensitive personal information, such as by opening up previously-inaccessible databases. This
ICA regime might create profits for industry, but at a substantial privacy cost to customers.
Where the burden on outsiders is substantial, then the argument for government regulation becomes stronger.
The case for government regulation will be stronger to the extent that the government rules are more rigorously
enforced and better incorporate the interests of those outside of industry. Any such benefits of government
regulation will be weighed against the usual costs of government intervention, including the possible inflexibility
of government rules and the likely higher administrative and compliance costs.
Self-Regulation as an Alternative to Threatened Government Regulation. The last argument for self-regulation is that it might be desirable in order to stave off the threat of mandatory government regulation. In
order to forestall government regulation, the self-regulation may need to be fairly strict. If the self-regulatory rules
are indeed strict, then it is possible that the protection of privacy would be comparable under either self-regulation or government regulation. At the same time, a self-regulatory approach might be able to avoid some of
the substantial costs of having a formal government regime.
On the other hand, there are grounds for believing that this sort of self-regulation will be less protective of
personal information than government regulation would be. First, there is the question of how non-binding
enforcement of industry codes compares with legally-binding enforcement of government rules. 25 Second, we
again face the general question of how the concerns of persons outside of industry, such as consumers, will be
included within the industry regulation. If self-regulation is indeed more flexible, it may be more flexible for
industry than for others. Third, this sort of self-regulation is premised on the existence of a credible threat of
government regulation. Self-regulation is more likely to be adopted when the legislative or executive branches
are very concerned about privacy issues. Over time, however, the legislative threat might ease. 26 Agency
attention may be directed elsewhere. As the threat of government action subsides, we might expect that self-regulatory efforts would also become more lax. After all, by hypothesis, the industry is spurred to regulate itself
because of the threat of government regulation. Unless someone outside of the industry has the ongoing ability
to punish privacy lapses, whether by market action or legal enforcement action, we should expect the
effectiveness of self-regulation to be uneven over time.
In conclusion, there are significant reasons to believe that government regulation will be stricter in enforcing the
protection of personal information than this sort of self-regulation. The difficult question will be to balance these
gains in privacy protection against the likely higher administrative and compliance costs of government
regulation.

CONCLUSION
Economists sometimes warn against the "Nirvana fallacy"--against the idea that there is some perfect
institutional arrangement that will solve all problems. Markets, self-regulation, and government each have
potential strengths for protecting privacy and achieving other social goals. One task here has been to identify the
ways that each might do so. The pure market model shows how a company might effectively protect privacy in
order to enhance its reputation and sales. The pure enforcement model shows how government rules might deter
improper disclosure of personal information. The discussion of self-regulation shows how that approach may
protect privacy by drawing on industry expertise, community norms, and the ability of industry to provide
collective goods such as technical standards and an enhanced industry reputation.
Markets, self-regulation, and government inevitably also have their own limitations in the protection of privacy. A
chief failure of the market approach is that customers find it costly or impossible to monitor how companies use
personal information. When consumers cannot monitor effectively, companies have an incentive to over-use
personal information: the companies get the full benefit of the use (in terms of their own marketing or the fee
they receive from third parties), but do not suffer for the costs of disclosure (the privacy loss to consumers).
Government regulation is subject to the well-known possible failures of rigid, costly, and/or ineffective rules. Self-regulation is subject to the possibility that the industry is using the self-regulation for cartel purposes. The
claimed advantages of industry expertise, community norms, and collective goods may also, on inspection, be
less substantial than advocates of self-regulation would hope.
Even when Nirvana cannot be achieved, we must do the best we can with the available, imperfect institutions.
As mentioned in the introduction, the issue of how to protect personal information arises in a large and rapidly-growing number of settings. A chief goal of this paper has been to supply an analytic framework for examining
the role of markets, self-regulation, and government in the protection of personal information. An important
benefit of this framework is that it supplies a list of empirical questions that will be helpful in choosing among the
alternative institutional approaches.
Based on the analytic framework developed in this paper, the following empirical questions provide a useful
checklist for choosing institutions to help protect personal information in a given setting:

Key Questions about a Market Approach


How difficult is it for consumers to discover companies' policies for use of private information and monitor the
companies' compliance with those policies? How much do such difficulties lead to over-use of private information
by companies?
How difficult is it for customers who wish to do so to bargain with companies for different privacy practices?

Key Questions about a Government Approach


How great are government's administrative costs and industry's compliance costs under a mandatory
government regime?
How do the costs of drafting, enforcing, and adjudicating a government privacy regime compare to those costs in
the private sector?
What sort of public choice or other political problems would we expect in the government process?
What are the key benefits of a government approach, which notably may include stricter enforcement of privacy
rules and greater concern for the interests of those not in the industry?

Key Questions about a Self-Regulatory Approach

Monopoly Power
Do the self-regulatory processes offer significant opportunities to create cartels or otherwise enhance market
power?
To what extent do sellers already have monopoly power, so that a bad reputation concerning use of personal
information will not reduce their profits?
Industry Expertise
To what extent will self-regulation result in the use of industry expertise more than would occur in an unregulated
market?
How do the incentives of industry to provide expertise compare for self-regulatory and government efforts? How
significant are the disincentives to disclose information to the government, such as from government-in-the-sunshine laws?
How likely is there to be important expertise from outside of industry? How well will that expertise be
incorporated into either self-regulatory or government efforts?

Community Norms and Ethical Values


Can we identify ways in which companies or the industry will instill ethics or enforce community norms against
excessive disclosure of personal information? Will compliance with norms be greater in some way than would be
compliance with legally enforceable rules?
Can we identify a small enough community or a direct enough effect on individuals' reputations that we can
expect the ethical rules or community norms to substantially constrain the incentive of companies to maximize
profits?

Enhancing Industry Reputation for Protecting Privacy


What incentives will the industry have to enhance its reputation beyond the incentives that companies have in an
unregulated market?
We would expect such incentives to be strongest where: (a) customers can tell that someone in the industry has
misused personal information, but cannot tell which company in the industry has done so; or (b) the reputation
of a dominant company or a small set of companies overlaps substantially with the reputation of an entire
industry.

Technical Standards
How much are network externalities or other benefits realized through industry standard-setting processes in the
privacy realm?
Do antitrust problems argue that a governmental process would be preferable to an industry process?
How well are the concerns of those outside of industry, including consumers, included within the industry
standard-setting process? How much better, if at all, are such concerns incorporated into a government
process?
What other costs would the governmental process have that would be less in the industry standard-setting
process?
Self-regulation and the Threat of Government Regulation


If self-regulation does not stave off government regulation, what sort of additional costs would arise from
government-mandated rules, such as greater inflexibility and other administrative and compliance costs?
How do these costs of government regulation compare to any benefits in improved protection of personal
information? How likely is there to continue to be a credible threat of government regulation, in order to keep
self-regulation effective? How well are the interests of those outside of the industry protected in self-regulation
compared with government regulation?
In conclusion, the empirical magnitude of these various costs and benefits will vary considerably across
industries. The best mix of markets, self-regulation, and government regulation will often vary for the distinct
stages of defining, enforcing, and adjudicating the rules for protecting personal information. At each stage, we
can examine how self-regulation may be better or worse than a purer market or government approach.
At heart, the attraction of self-regulation is that the industry generally has the greatest expertise and the most at
stake in the regulatory process. We might readily imagine that a measure of industry cooperation and self-regulation will protect private information more fully than would a pure market approach. The corresponding worry
about self-regulation is that it may harm those outside of the industry--those who are not part of the "self."
Where the likely harm to those outside of industry is greatest, the argument for government regulation becomes
stronger.
________________________________
ENDNOTES
1 A recent NTIA study concluded: "Uniform privacy requirements will further benefit the private sector by
eliminating a potential source of competitive advantage or disadvantage among rival providers of
telecommunications and information services. At the same time, NTIA's recommended approach gives private
firms considerable flexibility to discharge their privacy obligations in a way that minimizes costs to the firms and
to society. For all of these reasons, NTIA believes that both consumers and the private sector will benefit
substantially from voluntary implementation of that approach. If, however, industry self-regulation does not
produce adequate notice and customer consent procedures, government action will be needed to safeguard the
legitimate privacy interests of American consumers." NTIA, Privacy and the NII: Safeguarding
Telecommunications-Related Personal Information, www.ntia.doc.gov/ntiahome/privwhitepaper.html.
2 For a somewhat similar discussion of market and government enforcement systems, see David Charny,
Nonlegal Sanctions in Commercial Relationships, 104 Harv. L. Rev. 375, 397-403 (1991).
3 "Self-regulation" of government's use of data is handled by separate law. In the United States, the Privacy Act
and the Freedom of Information Act are the primary "self-regulation" for how the federal government treats
personal data.
4 I am currently at work on a longer article tentatively entitled "Cyberbanking and Privacy: The Contracts Model."
5 A full description of the human rights and contracts approaches must be left to a different paper. For present
purposes, it is not necessary to choose between the two approaches, which differ somewhat as to the overall
goals of privacy protection. The focus here is on which institutional arrangements, including self-regulation, will
tend to achieve those goals, however defined.
6 An exception would be if the person involved in the bargaining gained some other sort of benefit from his or her
effort. For instance, the person might be an employee of a citizen's group devoted to privacy issues. The
individual and the group might gain in various ways, including professional satisfaction and favorable publicity, by
reaching agreement with a major company. While acknowledging the substantial effects that citizen groups
often have, there remains a strong suspicion in the academic literature that public goods, such as bargaining for
effective privacy protection, will be provided less than people's actual preferences would warrant. For the classic
treatment, see Mancur Olson, The Logic Of Collective Action (1965); see also Peter P. Swire, The Race to
Laxity and the Race to Undesirability: Explaining Failures in Competition Among Jurisdictions in Environmental
Law, Yale Journal on Regulation/Yale Law and Policy Review, Symposium: Constructing a New Federalism, at
67, 98-105 (1996) (discussing likely underprovision of public goods).
7 There can obviously be endless debate about what constitutes the "public good" and whether the term even
has any coherent meaning. The point for now is that the government personnel are sincerely seeking what they
believe to be the best policy, rather than being governed by motives such as personal corruption, interest-group
politics, or desires for increased agency turf.
8 The discussion here focuses on the efficiency of the rule, rather than its distributional effects. The question of
who actually pays for the cost of regulation is often extremely difficult to answer, and depends on empirical
issues such as the existence of close substitutes or complements for the product, and on the ability of the
industry to pass on added costs to its customers. ICA members, for instance, might or might not be able to
charge more for their products if a burdensome privacy rule were imposed. If their purchasers readily switched to
mail-order, the ICA members and their stockholders might suffer a loss. If purchasers instead were mostly
choosing among ICA members, then the purchasers would be more likely to absorb the higher prices.
9 For one discussion of the issue, see Louis Kaplow, Rules versus Standards: An Economic Analysis, 42 Duke
L.J. 557 (1992).
10 In the area of information privacy, a good example of slowness-to-amend may be the longstanding
controversy about how to update the 1966 Freedom of Information Act to take account of computerized records.
Such important amendments were included in the Electronic Freedom of Information Act Amendments of 1996.
11 For two general introductions to public choice theory, see Daniel A. Farber & Philip P. Frickey, Law and
Public Choice: A Critical Introduction (1991), and Dennis Mueller, Public Choice II (1989).
12 Cipollone v. Liggett Group, Inc., 505 U.S. 504 (1992) (federal statute preempts state laws requiring additional
warnings, as well as state failure-to-warn and fraudulent misrepresentation claims).
13 For one academic treatment of this sort of possible over-regulation, see Henry Butler & Jonathan Macey,
Externalities and the Matching Principle: The Case for Reallocating Environmental Regulatory Authority, in Yale
Journal on Regulation/Yale Law and Policy Review, Symposium: Constructing a New Federalism, at 23 (1996).
14 For extensive discussion of the role of norms in supplementing markets and legal rules, see Symposium,
Law, Economics, and Norms, 144 U. Pa. L. Rev. 1643-2339 (1996).
15 Current examples of such ethical training include law students, who are required to study the rules of
professional responsibility, including the ban on disclosing a client's secrets. Similarly, bankers are trained in a
distinct culture that has generally frowned on disclosing client financial information.
16 For an analogous argument about the importance of norms and ethics in environmental law, see Carol M.
Rose, Rethinking Environmental Controls: Management Strategies for Common Resources, 1991 Duke L.J. 1.
17 Web links to many of these organizations are provided by the National Standards Systems Network, at
www.nssn.org.
18 For a clear discussion of the role of network externalities and other factors favoring promotion of
standardization on the Internet, see Mark A. Lemley, Antitrust and the Internet Standardization Problem, 28
Conn. L. Rev. 1041, 1043-54 (1996).
19 According to ANSI: "Implementing standards can: Increase market access and acceptance; Reduce time
and costs in product development; Attain a competitive advantage and faster time to market; Cut costs in
component and materials acquisition; Reduce administrative and material expenses. Participating in standards
development can: Help develop new markets and strengthen existing ones; Ensure foreign market access to
your company technology or processes; Help you gain a competitive edge by influencing the content of
domestic and international standards; Minimize your time to market, strengthen your market presence, and
allow you to realize new revenue through the licensing of technology on reasonable terms." See
www.ansi.org/broch1.html.
20 A clear example comes from the recent announcement by the Consumer Bankers Association of their new
privacy guidelines. The trade press report on the guidelines stated: "The Consumer Bankers issued the privacy
guidelines to show the federal government that the banking industry is policing itself and no new regulations are
needed." Barbara A. Rehm, "Bank Group Issues Guidelines for Protecting Consumer Privacy," Am. Banker,
Nov. 22, 1996.
21 In at least some countries whose banks have a reputation for secrecy, the industry efforts to keep records
secret are bolstered by laws that prohibit disclosure of information.
22 In particular, the self-interest of leading companies can explain how the self-regulatory efforts of industry
might be funded. The expected value of the Code to the company might be great enough to reduce any incentive
to free ride on the efforts of other companies. The leading companies may also benefit by tinkering with the
Code so that, at the margins, it provides a good fit for their own privacy practices.
23 The "leading company" creation of the Code might also be an anti-competitive effort to raise rivals' costs or
increase barriers to entry to the industry. In the privacy setting, however, the risk of net harm to consumers does
not seem especially great. The harm to consumers would result only if a specific sort of supplier stopped
competing--those suppliers who can survive in the market only by using more personal information than self-regulation would allow.
24 Mark A. Lemley, Antitrust and the Internet Standardization Problem, 28 Conn. L. Rev. 1041, 1083-88 (1996).
25 It is possible that self-regulation will be more protective of privacy than government regulation, such as when
expertise is better applied in industry regulation, or when ethical beliefs and community norms work better under
an industry system than a government system. I am inclined to be cautious about such an optimistic view of the
effectiveness of self-regulation. For a highly critical assessment of the effectiveness of self-regulation by the
Direct Marketing Association, see Paul M. Schwartz & Joel R. Reidenberg, Data Privacy Law: A Study of United
States Data Protection 307-48 (1996). As discussed in the text, a more likely scenario is that government
regulation will result in stricter protection of personal privacy, but will also impose higher administrative and
compliance costs.
26 In a well-known article, economist Anthony Downs discussed the "issue-attention cycle," in which issues
predictably would rise and fall in the level of attention they received in the legislature and the general public.
Anthony Downs, Up and Down with Ecology: The Issue Attention Cycle, 28 Pub. Interest 38 (1972).

Privacy and Self-Regulation: Markets for Electronic Privacy


Eli M. Noam(2)
Professor of Finance and Economics
Director, Columbia Institute of Tele-information
tel: (212) 854-4596
fax: (212) 932-7816
e-mail: enoam@research.gsb.columbia.edu
or www.ctr.columbia.edu/citi/

INTRODUCTION

For a long time, the conventional wisdom was that electronic communications constituted a major threat to
individual privacy. Wiretapping, eavesdropping, and data banks were part of the Big Brother and Nosy Sister
scenario. This fear for personal privacy is justified in the short term. But in the long term, the opposite is more
likely to happen, because the electronic tools that permit privacy invasion are even more powerful as tools for
individuals to control their own informational autonomy. In the process, still another revolution is upon us, the revolution of
access control. By gaining such control individuals achieve bargaining strength over those who seek information
about them. They can establish a perimeter over the inflow and outflow of information. They can create property
rights in personal information. Transactions become possible, and markets in private information can emerge.
No problem is ever new. Jeopardies to privacy have been associated with electronic media from the beginning.
Gossipy manual operators, 1 party lines with participatory neighbors, 2 and the absence of a warrant requirement
for wiretapping 3 all created privacy problems. 4 The first American patent for a voice scrambling device was
issued only five years after the invention of the telephone.
The New York Police Department, always on the technology frontier, had listened in on telephones since at least
1895. In 1916 this led to a public controversy about eavesdropping on a Catholic priest as well as on a law firm
involved with competitors to J.P. Morgan & Co. for World War I munitions contracts. 5
Today, a new generation of electronic privacy problems has emerged, for several reasons:
An increasing number of transactions are conducted electronically. 6

It has become easier and cheaper to collect, store, access, match, and redistribute information about
transactions and individuals. 7
Wireless transmission conduits include unsecured portions.
The number of communications carriers and service providers has grown enormously, leading to an
increasingly open network system in which information about use and user is exchanged as part of
network interoperability.
The Internet computer network system is wide open.
In consequence, new electronic privacy problems keep emerging. Recent controversies include:
Intrusive telemarketing.
Data collection about transactions.
The ability of governments to control encryption.
The ability to determine an incoming caller's phone number and use of such information.
The monitoring of wireless mobile communications.
Employers' monitoring of their employees.
The ability to use e-cash for illegal transactions.
The difficulty law enforcement agencies face in keeping up with transmission technology.
The unsecured nature of the Internet, and the ability to track the sites which an individual visits.
And more is coming our way. For example, tiny mobile communication transceivers, together with number
portability, will enable telephone subscribers to be continuously connected. Their locational whereabouts, their
comings and goings, and the identity of other persons in the same location could, therefore, be continuously
ascertained.
Given that privacy is important to so many people, and given that information technology keeps raising new
questions, what approach should be adopted to deal with privacy problems?
In the past, if remedies were considered, the primary strategy was to resort to regulation. The call for the state
to control and protect privacy is a natural response especially in the field of electronic communications, given
their history around the world as either a state-controlled telephone or broadcast monopoly or tightly regulated
sector. This has led to a view of electronic privacy problems largely as an issue of rights versus the state or its
regulated monopoly firms--and to the question of how to create such rights in the political, regulatory and legal
sphere. But such a view is static: having a right is often believed to be the end of the story. Yet in most parts of
society, the allocation of rights is only the beginning of a much more complex interaction.
Privacy is an interaction, in which the rights of different parties collide. A has a certain preference about the
information he receives and lets out. B, on the other hand, may want to learn more about A, perhaps in order to
protect herself. The controversies about caller-identification, or about AIDS disclosure by medical personnel,
illustrate that privacy is an issue of control over information flows, with a much greater inherent complexity than
a conventional "consumers versus business," or "citizens versus the state" analyses suggests. In this case,
different parties have different preferences on "information permeability" and need a way to synchronize these
preferences or be at tension with each other. This would suggest that interactive negotiation over privacy would
have a place in establishing and protecting privacy.
While this article will not suggest that markets can provide a solution to every privacy issue, it will argue that
they can be utilized much more than in the past.

WHAT IS PRIVACY?
In the information sector, privacy consists of two distinguishable but related aspects: 8
The protection against intrusion by unwanted information. This is sometimes termed "the right to be left alone," 9
and it is an analogue to the constitutional protection to be secure in one's home against intrusion.
The ability to control information about oneself and one's activities; this is related in some ways to proprietary
protection accorded to other forms of information through copyright laws, 10 and security of information about
oneself from tampering by others.
The common aspect of both these elements is that they establish a barrier to information flows between the
individual and society at large. In the first case, it is a barrier against information inflows; in the second instance,
against information outflows.
The concept of privacy is not without its detractors. Among the major criticisms are:
"Privacy protects anti-social behavior." In this view, privacy is a smoke-screen used to hide activities that
should be discouraged. This may be true at times; yet it is also the price of personal freedom. Authoritarian or
backward societies do not value a private sphere, since they tend not to respect individuality, subordinating
it instead to the demands of rulers or societal groups. 11 The recognition of a private sphere is hence one of the touchstones of a civilized and free society. 12
"Privacy is costly to the economy." Privacy protection raises the cost of an information search. For example,
potential employers and buyers have to spend more effort (and money) to find out who they are dealing with if
access to personal information is restricted. Deception becomes easier and transaction costs rise.
But there are economic arguments on the other side. Privacy affects the ability of companies and organizations
to hold on to their trade secrets and details of their operations, and to protect themselves from leaks of insider
information and against governmental intrusion. Information has value, and where it has no protection through
property rights it must be protected through confidentiality or secrecy. 13 To permit its easy breach 14 would lead
to a lesser production of such information.
The loss of privacy leads to inefficiency in information flows, just as excessive privacy protection may. One of
the predictable results of third party monitoring of telephone calls is to force speakers to disguise or modify their
communications in order to keep them secret.
Partly in response to economic and social needs, many transactions have been specifically accorded special
common-law informational protection known as "privileges," e.g., between attorney-client, patient-doctor, citizen-census taker, penitent-clergy, etc. The idea in each case is that the protection of information leads to an
economically and socially superior result even if it is inconvenient to others in an individual instance.
"There is no demand for Privacy." This objection views privacy as an issue of concern only to a small elite
group. But to the contrary, attention to privacy is widely shared. For example, according to information from the
New York Telephone Co. from a few years ago, 34% of all residential households in Manhattan, and 24% of all
residential households in the State, had unpublished telephone numbers at the subscriber's request. Most
policemen, doctors, or judges, to name but a few professions, have unlisted numbers. On the West Coast, the
spread of unlisting is still further advanced, reaching 55% in California! It should be noted that it costs extra to
be unlisted. In other words, a large number of customers are willing to pay in order to increase their privacy. With
more than half of the population willing to do so, it becomes impossible to keep denying that privacy is an
important issue.

POLICY APPROACHES
As the new technological options emerge they create new opportunities but also new privacy problems. How
can such problems be dealt with?
As was mentioned, the primary policy response has been regulatory. Within that position there were two major
directions--centralized general protection and decentralized ad-hoc protection. West European countries, in
particular, have pursued the former, and passed comprehensive (omnibus) data protection laws and established
institutionalized boards with fairly rigorous rules, and coordinated internationally on information collection and
data flows. 15 The United States, in contrast, has dealt with specific problems, one at a time, and with different
approaches across the country.
In Europe, advances in data processing led in the 1970s to fears about the abuse of information storage and the
potential for a "1984"-like surveillance state. Many of these fears were based on the technological notion of
computers as vast centralized mainframes, a notion which corresponded to the state of computer technology of
the 1960s. But since then, this technology has moved steadily toward a decentralized system, with millions of
small computers in people's offices and homes.
Though the origin of concern over privacy was the potential for abuse of data by government agencies, the
focus of remedial action shifted quickly to data collection activities by private business. Rules against the
government's collection of data were also set, but with less severity. At the same time that Germany
promulgated the first data protection laws against private data abuse, its federal and state governments took a
quantum leap in the use of data-processing technology for the surveillance of its citizenry. During the 1970s, a
handful of terrorists prompted the German police to institute a chillingly efficient system of border checks,
citizen registration, data access, and domestic road blocks, all of which were interconnected by data banks and
communication links. Although the terrorism was quickly stopped, many control mechanisms were not.
Additionally, the rules had a tendency to spread. A loophole was soon recognized in privacy laws: international
data transfers permitted the evasion of data protection laws. In Sweden, for example, a data file on any
employee is subject to protection from disclosure to third persons. However, if a Swede works for a foreign firm,
it would be possible that the data would be transmitted to the headquarters of the firm, where it would be less
protected. Conceivably, therefore, some countries could set themselves up as "data havens" in order to attract
businesses determined to circumvent privacy laws. Although these threats were more theoretical than real, they
led to a movement to "harmonize" data protection practices or to restrict the flow of sensitive data in the
absence of such harmonization.
The Organization for Economic Cooperation and Development (OECD) was instrumental. In 1979, the OECD
drafted a first set of guidelines for its member states: Data collection should be limited to necessary information
obtained lawfully, and, where appropriate, with consent; data should be accurate, complete, up-to-date, and
relevant to the needs of the collector; use of the data ought to be specified at the time of collection, and its
disclosure should be in conformity with the purpose of collection; assurances must be made against
unauthorized access, use, and disclosure; and data should be open to inspection and correction by the
individual to whom it refers. 16
The Council of Europe incorporated the OECD guidelines in the 1980 Convention on the Protection of Individuals
with Regard to Automatic Processing of Personal Data. The convention affected all transborder data flows
among European countries and with other countries, such as the United States. This made American firms with
international business activities nervous, since the convention provided that any country could restrict the
transmission of data to another country that did not have data protection legislation comparable to its own.
Since firms conducting international transactions generally prefer to have uniform procedures for transactions in
various countries, procedures were likely to conform to the strictest of national rules.
In 1992, the European Commission adopted a directive establishing basic telecommunications privacy rights for
its member states. The draft included restrictions on unsolicited calls, calling number identification, and use and
storage of data collected by telephone carriers for electronic profiles. 17 It mandates that holders of data pay for
security measures in order to bar unauthorized access. It also prohibits the creation of electronic profiles of
individuals utilizing data concerning their purchases or other actions, and it bars transfers of data to non-EC
member countries unless those countries have adequate data protection rules. 18
Among Third World countries, Brazil has been particularly active in data and telematics issues. The thrust of
Brazil's policy, instituted during the years of military dictatorship, was evident in the statement of its top information
officer, who combined both the civilian and military functions of that term.
The administration [i.e., the restriction] of TDF [transborder data flows] appears to be an effective government
instrument for the creation of an environment that makes the emergence of an internationally viable national
data-service industry possible. By itself, such an industry would have had great difficulties in overcoming the
obstacles of a completely "laissez-faire" environment. The country's TDF policy altered that situation. 19
A license had to be obtained before establishing international data links. Applications for foreign processing,
software import, and database access were rejected if domestic capability existed. The policy was strongly
embraced by the Brazilian military dictatorship and its business and industry allies, and it was admired around
the world as an assertion of national sovereignty by many observers who would otherwise feel no kindness
toward right-wing juntas.
In the United States a generally more pragmatic approach to legislation, and a case oriented decision process
administered through the judiciary and the regulatory agencies, have led to the tackling of specific data abuses
when they became apparent rather than to comprehensive laws. This has led to a less systematic approach than
in Europe, and to a variety of ad hoc federal and state legislation. Typically, they addressed a narrow and
specific issue of concern. 20 Most such statutes were either aimed at particular industries (for example, credit
rating bureaus), or at the conduct of governmental agencies, or they dealt with flagrant abuse such as computer
break-ins. 21
Thus, contrary to often-held views in other countries, numerous laws protecting data and privacy exist in the
United States, and some of them are quite far-reaching, especially in terms of access to state files, and limits
on such files.
Nevertheless, U.S. privacy legislation remains considerably less strict than European law in the regulation of
private databases, and coverage of U.S. governmental organizations by privacy law is not comprehensive.
Although the Privacy Act of 1974 restricts collection and disclosure by the federal government, and vests some
responsibility in the Office of Management and Budget, only a few states and local governments have passed
similar fair information practices laws for their agencies. The U.S. has no government agency specifically
charged with data protection similar to the centralized data protection commissions or authorities established in
European countries, though proposals have been advanced in Congress.
A synthesis of the comprehensive European and the ad-hoc American approaches is to formulate a set of broad
rules or principles applicable to a sector of the economy, or to a set of issues. This was the direction taken by
the New York Public Service Commission on the issue of telecommunications privacy.
The New York Public Service Commission's approach in 1991 went well beyond the problem-specific approach.
It issued, after a proceeding initiated by the author, a set of broad privacy principles applicable to the whole
range of telecommunications services under its jurisdiction. 22
A similar approach, that of privacy principles, was recently taken by the Federal Government's high visibility
Information Infrastructure Task Force, in the report by its Privacy Working Group, which issued a set of
Principles for Providing and Using Personal Information. But that report is virtually devoid of any discussion of
market mechanisms for protecting privacy, and it does not integrate such mechanisms into its privacy principles.
MARKETS IN PRIVACY
The reflexive approach to privacy problems has been regulation, or denial. Are there other options?
First, there is the possibility of self-regulation, where an industry agrees to restrict some of its practices.
Realistically, though, self-regulation is rarely voluntary (unless serving an anti-competitive purpose): it usually
occurs only under the threat of state regulation, and it can therefore be considered a variant of direct regulation.
Looking to the state to control and protect privacy is a natural response in the telecommunications field,
given its history as state-controlled monopoly. It has led to a view of privacy problems largely as an issue of
rights, and the question is how to create such rights in the political, regulatory and legal sphere. Such a view is
appropriate in the context of privacy rights of the individual against the state. But the same cannot be said for
the privacy claims of individuals against other individuals. The allocation of rights is only the beginning of a much
more complex interaction. Some people may want and need more privacy than others. Privacy, by definition, is
an interaction in which the informational rights of different parties collide. Different parties have different
preferences on "information permeability" and need a way to synchronize these preferences or be at tension
with each other. This would suggest that interactive negotiation over privacy would have a place in establishing
and protecting privacy.
How should one analyze the role of bargaining over privacy? It is useful to consider as a framework for
discussion the economic theorem of Nobel laureate Ronald Coase, a Chicago economist. Coase 23 argues that
in a conflict between the preferences of two people the final outcome will be determined by economic calculus
and (assuming reasonably low transaction costs) will be the same regardless of the allocation of
rights. 24 If the final result is the same, who then should have the rights? According to Coase, it should be the
"least cost avoider," i.e., the party who can resolve the conflict at the lowest possible cost.
Let us apply this discussion to privacy, using the example of telemarketing. Both of the parties to a telephone
solicitation call attribute a certain utility to their preference. For example, it may be worth $3 to the telemarketer
to have an opportunity to talk to the consumer. If necessary, she would be willing to pay a potential customer up
to that amount.
Conversely, assume that the consumer would be willing to pay--grudgingly for sure--up to $4 to the telemarketer
to keep her off the phone. The $4 is the value he places on his privacy in this instance. Thus, if the telemarketer
has a legal right to call the consumer at home, the latter would "bribe" her not to call in order to keep his peace
and quiet.
The basic decision on regulatory rights is either to prohibit unsolicited telemarketing calls, or to permit them.
But regardless of which rule is adopted, the call will not take place, because under our numerical example the
value of privacy to the consumer is greater than its interruption is to the telemarketer. But if for some reason the
value to the telemarketer should rise, say to $6, the consumer could not pay her enough not to call; and
conversely, if the telemarketer would have no initial right to make unsolicited calls, she would pay for the
consumer's cooperation by a payment of $4 or more, so that the call is accepted.
In other words, the distribution of the legal rights involved may largely determine who has to pay whom, not
whether something will happen. Thus the law does not necessarily determine whether telemarketing calls
actually take place, it only affects the final wealth distribution. This interactive concept is often difficult to grasp if
one is used to thinking in the absolutes of black-letter law. Common law, in contrast, has recognized transactions from
the beginning. Indeed, the original legal cases which established the tort of privacy were not based on a finding
that the plaintiff had a right to privacy, but instead that the plaintiff had a right to be adequately compensated. 25
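A minimal Python sketch of the bargaining logic above may be useful; the function and the dollar figures are illustrative assumptions drawn from the example, not a description of any actual negotiation procedure.

    # Coasean bargaining over a telemarketing call, assuming costless
    # bargaining. The dollar values are the hypothetical figures from
    # the example above.

    def coase_outcome(value_to_caller, value_of_privacy, caller_has_right):
        """Predict whether the call occurs and who pays whom."""
        if value_to_caller > value_of_privacy:
            # The call is worth more to the telemarketer than quiet is to
            # the consumer, so the call occurs under either rights rule.
            payment = "no payment" if caller_has_right else "telemarketer pays consumer"
            return "call occurs; " + payment
        # Privacy is worth more, so the call is averted under either rule.
        payment = "consumer pays telemarketer" if caller_has_right else "no payment"
        return "no call; " + payment

    print(coase_outcome(3, 4, caller_has_right=True))   # no call; consumer pays telemarketer
    print(coase_outcome(3, 4, caller_has_right=False))  # no call; no payment
    print(coase_outcome(6, 4, caller_has_right=False))  # call occurs; telemarketer pays consumer

As the sketch shows, the rights allocation changes only the direction of payment, never whether the call occurs.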
For privacy transactions to occur, however, there are several prerequisites. They include:
Sufficiently low transaction costs.
A legal environment that permits transactions to be carried out.
An industry structure which permits transactions to occur.
Symmetry of information among the transacting parties.
No "market failure," i.e., no growing instability in the market. 26
The ability to create property rights, or to exclude.
Courts have been reluctant to grant property rights to personal information outside of the case of luminaries. In
one case, 27 Avrahami vs. U.S. News & World Report, a gutless court 28 managed to hold for two organizations
that exchanged subscriber name lists without permission, even though Virginia Code 8.01-40 (Michie 1999)
clearly provided that "Any person whose name, portrait, or picture is used without having first obtained written
consent of such person for advertising purposes or for the purposes of trade, such person may maintain a suit to
prevent and restrain the use thereof." The statute also permitted the aggrieved party to recover actual and
punitive damages. 29 The court held that the inclusion of a name was "too fleeting and incidental," and that a
person's name was not personal property. An appeal may be brought before the Virginia Supreme Court.
This reluctance of courts (and probably of legislatures) to recognize property rights in residual information is not
surprising in light of the role of direct marketing in the economy. However, property is not established only from
above by formal statutes or court decisions, but also from below, by the simple mechanism of an individual's
ability to exclude others. Good fences create good neighbors, and good transactions as well. Electronics makes
this increasingly possible. Such access control creates the possibility of bargaining, by transforming information
from a "public good" (like a light house's flashing) to a private good (like a flashlight).

EXAMPLES FOR THE MARKET APPROACH

Telemarketing
As we discussed, because privacy and access are of value to parties in a telemarketing transaction, exchange
transactions will emerge once they become technically feasible. How could this happen on a practical level?
Signaling technology and telecommunications equipment now provide the capability to select among incoming
calls electronically. This creates the precondition for access control by individuals, namely information about the
calling party, which until now enjoyed the stealth of anonymity. Information is power, or rather it is worth money.
Once this choice of avoiding calls is available to the called party without loss of important incoming calls, callers
must offer incentives to be admitted: friendship, family ties, reciprocity, useful information, business--or a financial
payment. What will therefore inevitably emerge is a system of individualized access charges.
Such a system might be described as Personal-900 Service, analogous to 900-service in which the caller pays
a fee to the called. The caller would be automatically informed that the customer charges telemarketers for his
time and attention.
Individual customers could set different price schedules for themselves based on their privacy value, time
constraints, and even the time of day. They would establish a "personal access charge" account with their
phone or an enhanced services provider, or a credit card company. By proceeding, the telemarketer enters into
a contractual agreement. The billing service provider would then automatically credit and debit the accounts in
question.
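A rough Python sketch of how such screening and billing might work follows; the schedule, the account names, and the billing logic are hypothetical assumptions for exposition, not features of any existing service.

    # Hypothetical "personal-900" call screening: the subscriber posts a
    # personal access charge, and the billing provider connects the call
    # only if the telemarketer accepts that charge.

    from datetime import datetime

    schedule = {"subscriber-555-0100": {"daytime": 2.00, "evening": 5.00}}

    def access_charge(called, when):
        period = "daytime" if 8 <= when.hour < 18 else "evening"
        return schedule[called][period]

    def place_call(caller_willingness_to_pay, called, when):
        charge = access_charge(called, when)
        if caller_willingness_to_pay >= charge:
            # Debit the telemarketer's account, credit the subscriber's.
            return "connected", charge
        return "refused", 0.0  # the charge exceeds the call's value to the caller

    print(place_call(3.00, "subscriber-555-0100", datetime(1997, 6, 2, 10, 0)))
    # ('connected', 2.0) -- the daytime charge is within the caller's valuation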
Such a system will probably have a negative impact on the business of telemarketers. Currently, they
"externalize" some of their costs by accessing customers at home at no charge to themselves other than their
operating cost. Right now, consumers do not yet have the means to make the telemarketer compensate them
for their attention. (In television, the audience gets at least to view an entertainment, sports, or news program.)
Under personal-900, telemarketers will be forced to pay more for consumer access.
Consumers will benefit from the payment they receive for accepting calls. Some might even become "professional
call-receivers," though telemarketers will no doubt refine ways to select the most likely buyers. Telemarketers
will become more selective in who they try to reach, and spend more money on "fine tuning" their customer list.
Technological tools to refine their search are intelligent agents sent out to find interested and affordable targets
for solicitation.
Markets in access will develop. Consumers will adjust the payment they demand in response to the number of
telemarketer calls competing for their limited attention span. If a consumer charges more than telemarketers are
willing to pay, he can either lower his access charge or will not be called anymore. Prices could vary by time of day.
Consumers will bear some portion of these costs, first by way of higher prices for telemarketed
products. The extent to which these costs can be shifted will vary: where telemarketers are in strong competition with other
forms of marketing, and where consumers are price-elastic, telemarketers will bear most of the added cost. 30

Wireless Transmission
Market forces may also be able to resolve the problem of unauthorized eavesdropping on wireless communication systems
such as cellular and cordless telephones. True, such monitoring is illegal for cellular calls (though not for
cordless phones), but it is widely practiced by scanning hobbyists as well as investigators. Just ask Prince
Charles.
Eavesdropping is inefficient because it forces the participants in a communication to disguise the content of
their transmissions, or to seek other ways of communicating. Thus, there are incentives for cellular service
providers or equipment firms to offer scrambling devices. 31
Encryption systems require extra equipment and may increase the amount of spectrum required for a given
quality and information content of a signal. Customers who value privacy sufficiently will be willing to pay for the
increased resource cost. 32
Data Banks
Companies often sell or pass along information about their customers to others, for a variety of purposes.
Insurance companies want to know the accident and medical history of new applicants; stores, whether new
customers are credit-worthy; employers, whether job applicants have criminal histories; doctors, whether a
patient has brought a malpractice suit in the past; and so on. 33
In America individuals, firms, and governments have a substantial right to collect and redistribute personal and
financial data about individuals. One could conceive of a market transaction system by which consumers offer
companies payments to delete such information or refrain from distributing it. But could such a system work? In
any transaction, both parties are left holding information about it. The problem is not usually that a party saves that
information, but rather that it disseminates it to others. The regulatory approach restricts some of these
transfers. Could a market work instead?
The answer is usually "no" today. And only "maybe," in the future.
The reason for this can be found in the logic of reselling information. In many cases the holder of information
about a second party could share that information with a third party at a higher price than the resulting reduction
in value to him. Take, for example, a piece of credit history information on individual A that is worth $5 to B so
long as B retains the information exclusively. If B distributes the data to another party, C, the direct value of the
data to B may not be diminished at all, or may drop a bit to, say, $4. (It is one of the peculiar economic
properties of information that it can usually be shared with little or no loss of usefulness to its holder.
The exceptions are business and trade secrets.) Suppose C, too, is willing to pay up to $4 for the same
information because it is of similar usefulness to him. Then the total value to B of not destroying the information
is $8. And why stop at two beneficiaries? B could resell the information also to D, E, etc. So could C. In each
case, the reduction in value of the information to one of its holders may be less than what another party will gain
by obtaining it.
Hence the information will spread. Accordingly, the subject of the information, individual A, might have to expend
a significant amount of money to prevent B from spreading the information. If it is of use to a hundred firms, each
valuing it at say $4, it would take a $396 "bribe" for A to keep B from reselling it. If a resale of information is
possible, B and C would market the same information about A, and they will drive down its price to the marginal
costs of distribution. In that case, the information would spread greatly, but it would also be cheaper for A to
bribe B at the outset. Yet all B would have to do is to bar resale contractually in its transaction with C.
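The arithmetic behind the $396 figure can be made explicit in a short Python sketch; the per-firm value and the number of firms are simply the illustrative assumptions of the example above.

    # Why suppressing widely useful information is expensive for A.
    value_per_firm = 4.00  # what each other firm would pay B for the data
    firms = 100            # firms to whom the data is useful, including B

    resale_revenue = value_per_firm * (firms - 1)
    print(resale_revenue)  # 396.0 -- the minimum "bribe" A must offer B

    # If the data interests only a few firms, the bribe becomes affordable,
    # which is why privacy transactions are plausible in such settings.
    print(value_per_firm * (3 - 1))  # 8.0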
A could attempt to stop personal data from getting released to a third party by preferring to do business only
with firms that agree to destroy such data. But companies would charge customers higher prices to
compensate for the lost information resale. Furthermore, once many companies start refusing to sell
information, each will have less information than before and hence a greater business risk, which would be
reflected in the price. In effect, firms would charge for withholding the information through their product or service
prices.
At the same time, any effort by A to pay a high price to B for non-revelation will likely raise the value of the
information to B, C, etc--what is A trying to hide, anyway? And, wouldn't A have to pay a similar bribe to C, too,
if the information reaches it? Thus, the more important the information is to more parties, the less affordable is a
market transaction to purchase privacy. Only where information is of little use to others, or only to a very few,
are privacy transactions likely.
An example is a video store. Such a business could advertise that its policy is to guarantee privacy. It would
gain customers, and since the information is not usually very important to many other parties, it would lose little
(the interest in political figures and celebrities is an exception). In contrast, it is hard to imagine a credit card
company willing to be compensated for non-disclosure to other credit-extending firms. The value of preventing
credit-fraud is so great to so many firms that any payment to undermine the reporting system would have to be
quite high. Yet video-store disclosure is prohibited by law, while credit-reporting is legal. The reason is probably
that the loss of information-value was low for video-viewing and nobody therefore mounted a fight against such
legislation, while politicians running for election were particularly sensitive about the issue.
Even if A were willing to pay B to withhold the information, doing so may not be possible in practical terms. One of the
characteristics of information is that its exclusivity is almost impossible to acquire once multiple parties have
access to it.
Any negotiating approach will only work for transactions between individuals and businesses. If the information
is obtained by government, fewer market-based incentives exist to prevent transfer of the data. This is one
reason why government agencies are becoming so active in selling information to others. They have little to lose.
Where else could one go to get a driver's license?34
Currently, there is a right to collect, distribute and utilize personal data. What, then, if the rights were reversed
and one had to get a person's permission before retaining, transferring or utilizing personal data about
him? If the information is of value to a bank and other credit institutions, they would acquire it by compensating
the customer. Given the collective value of the information, such transactions would be likely. Hence, the
information would be circulating. Consumers would be richer than before, but the information would be, in effect,
still in the public domain. 35
In conclusion, for personal data banks containing information about individuals, market transactions are either
unlikely (where the information is of use to many others) or the information will be acquired by those users
anyway. In either case the personal information, if valuable, becomes public information. For the future, one
possibility that may help alleviate this problem is the emergence of encryption.

Encryption
For markets in personal information to exist, it is necessary to protect that information from appropriation by
others.
With digital technology, methods of protecting information with encryption have become powerful and
convenient. Encryption goes back thousands of years; its modern form emerged from national security work,
with the first electronic computers as the impetus, and then spread to civilian computer applications. Encryption
became popular with the release of the Data Encryption Standard (DES) to the public in 1977. DES is a 56-bit
single-key algorithm: to send a message to B using DES, A encrypts it with a secret key that B must also hold.
This leaves open the risk that the key is intercepted in transit, and anyone knowing the key can decrypt the
transaction.
Dual-key systems solved this problem. In this system, anyone who wants to receive a message has a "public"
key. If A wants to send information to B in a secure way, he can encrypt it using B's public key. But the
encrypted message can be decrypted only by using B's "private" key. Thus, there is never a need for the risk-laden transmission of private keys.
Dual-key encryption software has appeared with the spread of the Internet: Pretty Good Privacy (PGP) employs
dual-key cryptography and is distributed free of charge for private use; business users pay. Privacy Enhanced
Mail (PEM) uses DES encryption along with a dual-key algorithm to secure mail transmission.
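
As an illustration only, the dual-key exchange described above can be sketched in a few lines of Python with the open-source cryptography package (this is a generic RSA example, not the internals of PGP or PEM):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # B generates a key pair and publishes only the public half.
    b_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    b_public = b_private.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # A encrypts with B's public key; no secret key ever has to travel.
    ciphertext = b_public.encrypt(b"message from A to B", oaep)

    # Only B's private key, which never left B, can recover the message.
    assert b_private.decrypt(ciphertext, oaep) == b"message from A to B"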
According to International Resource Development, the U.S. data encryption market has grown from $384 million
in 1991 to an estimated $946 million in 1996. 36
Where information is protected by encryption it is more marketable. Ironically, the U.S. government, for reasons
of law enforcement and national security, has opposed easy and fully secure encryption, thus reducing the
ability of individuals to control access to their information, to establish property rights, and to create the
foundation for markets.
Present encryption, however, does not solve the problem of information resale to a third party C, once decrypted
by the second party B. Solving that problem in the future would be a godsend to every owner of information and
copyright, but it is hard to conceive how it might be done securely. A buyer of information cannot be stopped
from memorizing or photographing the decrypted information on his screen and then reselling it.
Even so, giving A protection vis-a-vis B already goes a long way. It permits, for example, property rights in
information about transactions between A and B to be held jointly: both A and B hold keys to it, and therefore
need each other's permission for its release. This would enable, for example, A (a consumer) to require
compensation from B (a credit card company) for releasing transaction information. It is true that B could copy
information, once it accessed it for one purpose, in other ways that were not authorized. But to do this in a
systematic way to thousands of customers would be a foolish business practice.
Dual-key systems would also permit individuals to sell information about themselves directly, instead of
letting various market researchers and credit checkers snoop in their demographics, personal history, and
garbage cans. Individuals would define a set of access rights: only their doctor would be allowed to view medical
records, some categories of information would be freely accessible, while others would be costly. Presumably, the
more valuable the information is to the buyer, and the more negative it is to the seller, the higher the price. Some
information would be priced too high for voluntary exchange. This system would also allow an individual to keep
track of who asked for the information. And the reselling of the information would be authorized only by
agreement of both key holders.
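
One hypothetical way to realize such joint control is to encrypt the shared record under a symmetric key and split that key into two shares, one per party, so that release requires both. A sketch in Python (the record text and the split-key construction are illustrative assumptions, not a description of any deployed system):

    import base64
    import secrets
    from cryptography.fernet import Fernet

    # Encrypt the shared transaction record under one symmetric key.
    key = Fernet.generate_key()
    record = Fernet(key).encrypt(b"A bought item X from B")

    # Split the key into two random-looking shares: A holds one, B the other.
    raw = base64.urlsafe_b64decode(key)
    share_a = secrets.token_bytes(len(raw))                 # consumer A's share
    share_b = bytes(x ^ y for x, y in zip(raw, share_a))    # company B's share
    del key, raw    # neither party retains the whole key

    # Releasing the record requires both shares (XOR reconstructs the key).
    joint = base64.urlsafe_b64encode(bytes(x ^ y for x, y in zip(share_a, share_b)))
    print(Fernet(joint).decrypt(record))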

SELLING THE RIGHT OF PRIVACY


So far we have analyzed the role of markets in the provision of privacy in a largely pragmatic way--will it work?
Yes, in some cases. No, in other cases. But at least as important is the normative question--should privacy be
part of a market? While the market approach could be in many instances efficient on economic grounds and
would differentiate according to needs, efficiency is not the only value to be concerned about. Just as there are
economic trade-offs, so are there non-economic ones.
A distribution of privacy rights on a free-market basis would provide no protection for citizens against
encroachment by the state. The only effective limits on government are those established through constitutional
and statutory means. Therefore there would have to be two types of privacy rules, one for transactions among
private parties, the other for transactions between private parties and the state. The former would be left, in part,
to the market to allocate, the latter would involve a constitutionally protected right. Yet the question may be
asked whether such a bifurcation in the treatment of the most mobile of resources--information--is sustainable
and practical.
Perhaps the most prevalent argument against markets in privacy is that efficiency is not the only societal goal.
Thus, some resources, such as privacy allocations, might be in the category of inalienable rights that are
protected from encroachment and "commodification" by the market system.
This position leads to several responses to the notion of transaction-generated privacy:
Privacy is a basic human right, and not subject to exchange transactions.
Consumers cannot correctly assess the market value of giving up personal information.
A transaction system in privacy will disproportionately burden the poor.
To state that privacy is a basic human right is a noble sentiment with which I am in accord, but it does not follow
that privacy therefore is outside the mechanism of transactions. As mentioned, a right is merely an initial
allocation. It may be acquired without a charge and be universally distributed regardless of wealth, but it is in the
nature of humans to have varying preferences and needs, and to exchange what they have for what they want.
Thus, whether we like it or not, people continuously trade in rights. In doing so they exercise a fundamental
right, the right of free choice.
In most cases, a person does not so much transfer his right to another but chooses not to exercise it, in return
for some other benefit. An accused has the right to a jury trial, but he can waive it for the promise of a lenient
sentence. A person has freedom of religion, but may reconsider in order to make his spouse's parents
happy. One can be paid to assemble or not to assemble, to forgo bearing arms, travel, petition, or speak.
Voluntary temporary servitude in exchange for oceanic passage peopled early America. Students have the
right to read faculty letters of recommendation written on their behalf, but they usually waive that right in return for
letters they hope will have greater credibility. 37
These departures from textbook civics are socially undesirable if the rights in question are given up under
some form of duress, for example if in a single-employer town workers must agree not to assemble as a
condition of employment. But when an informed, lucid, sober, and solvent citizen makes a choice freely, the
objections are much harder to make. They then boil down to a transaction being against public policy, often
because it affects others outside the transaction (i.e., "negative externalities"). To make these transactions
illegal, however, does not stop many of them, if there are willing buyers and sellers, but it makes them more
difficult and hence costly. The extent of the success of such a ban depends, among other factors, on the ability
of the state to insert itself into the transaction. In the case of privacy, which by its nature involves an interactive
use of information, such insertion is difficult: all it usually takes to evade it is to make the information transaction
consensual.
And if it becomes illegal to offer compensation to obtain consent, one can expect imaginative schemes to
circumvent such a prohibition. After all, we now have over 3.0 lawyers per thousand population, up from 1.3 in
1970. 38 Indeed, the success of government enforcement would then depend on intrusive actions by the state
into private transactions. As important as privacy is, it will not necessarily override other values, such as free
choice, the right to know, and the right to be left alone.
A second objection is that consumers have asymmetric knowledge relative to business about the value of their
personal information, and that they consequently would be exploited (Gandy, 1996). The holders of this view
discount the information-revealing process of competition. They must assume chronic oligopolistic behavior by
business firms. Because such asymmetry in information would extend to all other dimensions of transactions as
well, this view, to be consistent, must be deeply skeptical of informed consent in consumer transactions
generally.
The third objection to transactions in privacy is that they disproportionately harm the poor. Here, it is believed
that it is especially those suffering from financial pressures and ignorance who will sell their privacy rights to rich
individuals and institutions. It is, of course, true that a poor person's priorities may often not include privacy
protection. (In other cases, however, the opposite may hold and poor people need privacy more than those who
can afford to create protective physical and organizational walls for themselves.) On the other hand, the same
poverty condition may also make a poor person an unattractive target for a commercial intrusion. Telemarketers
will prefer to make a pitch to individuals who can afford their products. The poor are best helped by money; to
micromanage their condition through restricting their right to transact may well end up a patronizing social policy
and an inefficient economic policy. This leads to the conclusion that privacy, being a broad umbrella for a variety of
issues, cannot be dealt with in a single fashion. Where transactions are not forthcoming, indicating a structural
market failure (perhaps due to monopoly or high transaction costs), or where negative externalities are large,
regulations can be appropriate that reflect the policy preferences of the community for privacy as well as for
other values. But it must be recognized that, given the underlying logic of exchange, transactions will find a
way to assert themselves in other ways, undercutting the actual effect of the restrictions and leaving them
more in the nature of a societal statement of intent.
But where the level of privacy protection can be readily set by free exchanges among individuals there is no
reason for state intervention, and one should instead strive to eliminate constraints against such transactions.
Those who believe that the market approach to privacy protection is overly generous to business violators of
personal privacy might find themselves pleasantly surprised because the tools of access control will have shifted
the balance of power to individuals and to the protection of privacy. Indeed, it will be the business users of
personal information who will end up objecting to transactions. They are, of course, worried that while they
(together with politicians and parties) have today relatively free access to individuals or to data about them, a
system where they might have to pay compensation in return for consent might become expensive. They are
correct, but what can they do about it? Access to an individual, even if sanctioned by law, will require the latter's
cooperation. Right now, individuals do not yet have effective means to make those desiring personal information
compensate them. But the tools to change this, such as encryption or caller identification, are here or near.
Soon, equipment makers and communications service providers will enable consumers to conveniently sell
access. And when this happens, those marketers who claim to live by the free market will also have to play (and
pay) by its rules.
__________________________
ENDNOTES
1 Recall the TV series "Petticoat Junction."
2 Recall the movie "Pillow talk."
3 Olmstead v. United States, 277 U.S. 438 (1928).
4 See Westin (1967).
5 Seipp (1981).
6 For example, in 1962, the U.S. federal government had 1030 computer central processing units; in 1972,
6,731; in 1982, 18,747; and in 1985, over 100,000. (Linowes, 1989). Today, their equivalent is probably beyond
counting.
7 In the past twenty years the cost of access to a name on a computer-based mailing list has come down to
about one thousandth of its earlier cost.
8 See, e.g., Richard Posner, (1981).
9 Warren and Brandeis, (1890).
10 The common-law copyright protection provided primarily that if one had not published information in one's
possession, no one else could take and publish it. This was similar to a trespass and conversion.
11 On the history of privacy, see Posner (1981); Simmel (1906); Westin (1965); Seipp (1978). In the United
States, privacy is a non-partisan issue. The Privacy Act of 1974 was co-sponsored by Senators Edward
Kennedy and Barry Goldwater.
12 Justice Louis Brandeis, in a famous dissent, wrote of "the right to be let alone--the most comprehensive of
rights and the right most valued by civilized men." Olmstead at 478.
13 In the extreme, private information is so valuable to an individual as to make him a target for blackmail. See
also Brown and Gordon (1980) for an economic perspective from the FCC.
14 See Richard A. Posner, The Economics of Justice, Harvard University Press, Cambridge, Massachusetts
(1981, pp. 231-347).
15 See Noam, Eli. M., Telecommunications in Europe, Oxford University Press, 1993.
16 Organization of Economic Cooperation and Development, 1979.
17 Gilhooly, 1990, p. 1; CEC, 1990, p. 5.


18 Oster, Patrick; Galen, Michele; Schwartz, Evan, "Privacy vs. Marketing: Europe Draws the Line," Business
Week, June 3, 1991.
19 Pipe, 1984b.
20 A 1990 example is a Congressional bill requiring monitoring of computer bulletin boards by the host system
operators in order to prevent use for illegal activities.
21 Shaffer, David, Ban on Recording Telemarketing Upheld, St. Paul Pioneer Press, March 29, 1993. (For
example, the state of Minnesota banned the use of automatic dialing equipment. The United States Supreme
Court let stand a Minnesota Supreme Court decision upholding the ban despite arguments that such a law
violates constitutional free speech protection.)
22 See Proceeding on Motion of the Commission to Review Issues Concerning Privacy in Telecommunications,
Case 90-C-0075, State of New York Public Service Commission, March 22, 1991.
23 Ronald Coase, The Problem of Social Cost, The Journal of Law and Economics 3 (October 1960) 1-44.
24 If the final result is the same, who then should have the rights? According to Coase it should be the "least
cost avoider," i.e., the party who can resolve the conflict at the lowest possible cost.
25 Posner, at 225. The early cases developing the tort of privacy often involved the use of a person's likeness in
commercial advertising without permission or offer of monetary compensation. E.g., Pavesich v. New England
Life Ins. Co., 50 S.E. 68 (Ga. 1905) (the unauthorized use of a man's photo in an insurance advertisement).
26 For a discussion of the limitations, see Noam, Eli M., Privacy in Telecommunications: Markets, Rights, and
Regulations, Office of Communication, United Church of Christ, April 1994, 5M.
27 Commonwealth of Virginia, Circuit Court of Arlington County, At Law No. 95-1318, June 13, 1996.
28 The court found that direct marketing accounted in 1995 for approximately one billion dollars in revenues.
29 In New York, property rights in one's likeness and name go back to the turn of the century. See New York
Civil Rights Law §§ 50, 51, enacted 1903.
30 One might argue that telemarketers will attempt to avoid absorbing this added cost by increasing their prices
and then advertising a "fictitious" discount in return for a customer giving access rights. But such a practice will
not succeed in a competitive environment where the initial price increase cannot be sustained.
31 For example, GTE has offered since 1991 an encryption system for the cellular-consumer market. GTE
Mobilnet developed the system because some customers--mostly government accounts and defense
contractors--were concerned about the use of scanners that can monitor the radio waves over which
mobile-telephone signals travel.
32 A special problem of privacy in mobile communications is that the person initiating the call to a mobile
customer does not pick its privacy level, and may be entirely unaware of any jeopardy. This "negative
externality" suggests the need for some form of signal that alerts such a caller to the presence of radio segments
in the transmission path.
33 The consumer information business is a multi-billion dollar a year business, centered around credit bureaus
such as Equifax, TRW, and Trans Union. It has been estimated that the average American is on 100 mailing
lists and 50 databases. Fisher, Susan E., What do computers know about you? Personal information too readily
available, PC Week, Information Access Company; Vol. 8; No. 6; Pg. 156, February 11, 1991.
34 Additionally, data bank activities involve several negative externalities. See Rothfeder, Jeffrey, Privacy for Sale:
How Computerization Has Made Everyone's Private Life an Open Secret, Simon & Schuster, New York, 1992.
One example is incorrect information contained in data banks. For the database providers, such inaccuracies,
while bothersome and somewhat reducing the database's value, may not justify the cost of attaining great
accuracy. Yet for the data subject, the cost of an inaccuracy can be very high. Thus, some data transfer
transactions between two parties take place more often than is truly efficient, taking all costs and benefits into
account.
35 One obstacle is that consumers will have to police companies to make certain that they do not utilize
information without first making compensation. This difficulty could be dealt with through the assistance of a
service provider who would run "key word" searches to determine if a person's name and personal data are
utilized for any uncompensated purpose. This, however, would also raise a new type of privacy concern.
36 Hoffman, Lance J., Ali, Faraz A., Heckler, Steven L., "Cryptography Policy," Communications of the ACM,
September 1994, Vol. 37, No. 9, p. 109.
37 Votes are not formally for sale, but candidates and parties vie with each other in making promises to benefit
voters and interest groups, and if they renege on their part of the bargain, they may be punished at the next
election. That is the theory.
38 Epstein, Richard, Simple Rules for a Complex World. Harvard University Press, 1995.

Economic Aspects of Personal Privacy


Hal R. Varian(3)
University of California at Berkeley

INTRODUCTION

The advent of low-cost technology for manipulating and communicating information has raised significant
concerns about personal privacy. Privacy is a complex issue and can be treated from many perspectives; this
whitepaper provides an overview of some of the economic issues surrounding privacy. 1
In particular, I first describe the role of privacy in economic transactions and argue that consumers will rationally
want certain kinds of information about themselves to be available to producers and want other kinds of
information to be secret. I then go on to consider how one might define property rights in private information in
ways that allow consumers to retain control over how information about them is used.

A SIMPLE EXAMPLE
The most fundamental economic transaction is that of exchange: two individuals engage in a trade. For
example, one person, "the seller" gives another person, "the buyer," an apple; in exchange, the buyer gives the
seller some money.
Let us think about how privacy concerns enter this very basic transaction. Suppose the seller has many different
kinds of apples (Jonathan, Macintosh, Red Delicious, etc.). The buyer is willing to pay at most r to purchase a
Jonathan, and 0 to purchase any other kind of apple.
In this transaction the buyer would want the seller to know certain things about him, but not others. In particular,
the buyer would like the seller to know what it is he wants--namely a Jonathan apple. This helps the buyer
reduce his search costs since the seller can immediately offer him the appropriate product. The transaction is
made more efficient if detailed information about the consumer's tastes is available to the seller.
On the other hand, the buyer will in general not want the seller to know r, the maximum price that he is willing to
pay for the item being sold. If this information were available to the seller, the seller would price the product at
the buyer's maximum willingness to pay, and the buyer would receive no surplus from the transaction.
Roughly speaking the buyer wants the seller to know his tastes about which products he may be interested in
buying; but he doesn't want the seller to know how much he is willing to pay for those products.
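
In code, the asymmetry is one line: the buyer's surplus is r minus the price, so a seller who learns r can price it away entirely. A toy sketch in Python (the numbers are arbitrary):

    r = 1.00       # buyer's maximum willingness to pay, known only to him
    price = 0.60   # price posted by a seller who does not know r

    def buyer_surplus(p, r):
        """Gain the buyer keeps from trading at price p (no trade if p > r)."""
        return r - p if p <= r else 0.0

    print(buyer_surplus(price, r))   # 0.40 kept by the buyer
    print(buyer_surplus(r, r))       # 0.00: a seller who knows r captures it all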
Armed with this simple insight, let us investigate some more realistic examples.

Search Costs
When many people talk about "privacy rights" they are really talking about the "right not to be annoyed." I don't
really care if someone has my telephone number as long as they don't call me during dinner and try to sell me
insurance. Similarly, I don't care if someone has my address, as long as they don't send me lots of officiallooking letters offering to refinance my house or sell me mortgage insurance. In this case, the annoyance is in
the form of a distraction--the seller uses more of my "attention" than I would like.
In the "information age" attention is becoming a more and more valuable commodity, and ways to economize on
attention may be quite valuable. Junk mail, junk phone calls, and junk email are annoying and costly to
consumers.
In the context of the apple example described above, it is as though the seller of apples has to tell me about
each of the different kinds of apples that he has to sell before I am able to purchase one.
It is important to recognize that this form of annoyance--essentially excess search costs--arises because the
seller has too little information about the buyer. If the seller knew precisely whether or not I was interested in
buying insurance or refinancing my mortgage, he could make a much better decision about whether or not to
provide me with information about his product.
In the context of the apple example: it is in both parties' interest to know that the buyer will only purchase a
certain kind of apple. The buyer has every incentive to present this information to the seller, and the seller has
every incentive to solicit this information from the buyer.
This is, in fact, how the direct mail market works. If I subscribe to a computer magazine, I will end up on a
mailing list that is sold to companies that want to sell me computer hardware and software. If I refinance my
house, I am deluged with letters offering me mortgage insurance. In these cases the seller is using information
about me that is correlated with my likelihood of purchasing certain products. (See [1] for a discussion of some
current trends in direct marketing.)
In this context the more the seller knows about my preferences the better. If, for example, I am interested in
buying a computer printer, it may well be in my interest and the seller's interest for this fact to be known. If I am
only interested in a laser printer, this is even more valuable information since it further reduces search costs for
both the buyer and the seller. If I already have a laser printer that I am happy with, the seller may find it valuable
to know that since he will not have to incur costs trying in vain to sell me a new printer.

Secondary Users of Information


When a mailing list is sold to a third party, the relationship between the buyer's original interests and the seller's
interest may become more tenuous. For example, suppose the list of computer magazine subscribers is sold to
an office furniture supplier. Some of the people on this mailing list may have no interest in office furniture.
Even though the first two parties in the transaction--the individual who may want to buy something, and the
seller who may want to sell him something--have incentives that are more or less aligned, the transactions
between the original owner of the mailing list and those to whom it is sold do not have such well-aligned
incentives.
Economists would say that an externality is present. The actions of the party who buys the mailing list will
potentially impose costs on the individuals on that list, but the seller of the mailing list ignores those costs when
selling it.
These costs could be mitigated, to some degree, if the individual who is on the mailing list has a voice in the
transaction. For example, the individual could forbid all secondary transactions in his personal information. Or,
more generally, the individual could allow his information to be distributed to companies who would send him
information about laser printers, but not about office furniture.
These considerations suggest that the "annoyance" component of privacy could be significantly
reduced if the communications channels between the buyers and the sellers were clearer, the information
conveyed were more accurate, and third-party transactions were restricted only to those transactions that the
original consumers authorized.

Incentives Involving Payment


Let us now consider a more difficult case, the case where the buyer's revealing information about himself is
detrimental. Suppose that the buyer wishes to purchase life insurance but knows information about his health
that would adversely influence the terms under which the seller would offer insurance. In this case, the buyer
does not want information released that would influence the price at which the insurance would be offered.
Suppose for example that the potential buyer of insurance is a smoker, and knowledge of this information would
result in a higher life insurance premium. Should the buyer be required to truthfully release the information? Since
the information here concerns the price at which the service (insurance) is offered, the incentives are perfectly
opposed: the buyer would not want to reveal that he is a smoker, while the seller would want to know this
information.
Note, however, that a nonsmoker would want this particular information about himself revealed. Hence the
insurance company has an easy solution to this problem: they offer insurance at a particular rate appropriate for
smokers, and then offer a discount for non-smokers. This would succeed in aligning information incentives for
the buyer and seller.
More generally, suppose that the price that the seller would like to charge is higher for people with some
characteristic C. Then people who have that characteristic have bad incentives to reveal it, but people who don't
have that characteristic have good incentives to reveal it. It is in the interest of the seller to construct the
transaction in such a way that the information is revealed.
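
A sketch of that separating menu in Python (the premium and discount figures are invented for illustration; in practice the discount claim would be verified, e.g., by a medical exam):

    SMOKER_RATE = 900.00         # assumed default annual premium, priced for smokers
    NONSMOKER_DISCOUNT = 300.00  # assumed discount for revealing nonsmoker status

    def quoted_premium(reveals_nonsmoker: bool) -> float:
        """Default rate assumes the costly characteristic; only those
        without it gain by disclosure, so disclosure is self-selecting."""
        return SMOKER_RATE - (NONSMOKER_DISCOUNT if reveals_nonsmoker else 0.0)

    print(quoted_premium(False))  # 900.0: smokers stay silent, pay the default
    print(quoted_premium(True))   # 600.0: nonsmokers volunteer the information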

CONTRACTS AND MARKETS FOR INFORMATION


We have seen that several of the problems with personal privacy arise because of the lack of information
available between concerned parties. Perhaps some of these problems could be mitigated by allowing for more
explicit ways to convey information between buyers and sellers.
For example, it is common to see boxes on subscription cards that say "check here if you do not want your
name and address redistributed to other parties." This is a very primitive form of contract. A more interesting
contract might be something like: "Check here if you would like your name distributed to other parties who will
provide you with information about computer peripherals until 12/31/98. After that, your name and address will be
destroyed. In exchange you will be paid $5.00 for each list to which your name and address is distributed."
Although it would be hard to fit this sort of contract on a subscription response card, it would be easy to fit it on
a Web page. The contract that is being offered implicitly assigns property rights in an individual's name and
address to him or herself, unless the individual chooses to sell, or more properly, rent, that information.
This particular legal policy seems quite attractive: assign a property right in information about an individual to
that individual, but then allow contracts to be written that would allow that information to be used for limited
times and specified purposes. In particular, information about an individual could not be resold, or provided to
third parties, without that individual's explicit agreement.
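
Such a limited-use lease is easy to represent as a data structure. The following Python sketch is purely illustrative; the field names and terms are assumptions modeled on the subscription-card contract above:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class InformationLease:
        """A rental of personal information for a stated purpose -- never a sale."""
        subject: str            # the individual the information describes
        purpose: str            # permitted use, e.g. "computer peripheral offers"
        expires: date           # the data must be destroyed after this date
        fee_per_list: float     # paid to the subject per list distribution
        resale_allowed: bool = False   # third-party transfer needs fresh consent

    lease = InformationLease(subject="A. Reader",
                             purpose="computer peripheral offers",
                             expires=date(1998, 12, 31),
                             fee_per_list=5.00)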
This idea appears to have been most thoroughly explored by [Laudon, 1996]. He goes further than simple
contracting and suggests that one might sell property rights in personal information on markets. As Laudon
points out, there is already a large market in personal information. But the property rights are held by those who
collect and compile information about individuals--not by the individuals themselves. These third parties buy and
sell information that can impose costs on those individuals, without the individuals being directly involved in the
transactions. In economic terminology, there is an externality.
The personal information industry in the US is primarily self-regulated, based on the so-called Fair Information
Practices. 2
There shall be no personal record systems whose existence is secret;
Individuals have rights of access, inspection, review, and amendment to systems containing information about
them;
There must be a way for individuals to prevent the use of information about themselves gathered for one purpose
for another purpose without their consent;
Organizations and managers of systems are responsible for the damage done by systems, and for their reliability
and security;
Governments have the right to intervene in the information relationships among private parties.
The European Community has more explicit privacy regulation; for more on international regulations, see the
Electronic Privacy Information Center's 3 web page on International Privacy Standards. 4
It is worth observing that the Fair Information Practices Principles would automatically be implemented if the
property rights in individual information resided solely with those individuals: secret information archives would be
illegal; individuals could demand the right of review before allowing information about themselves to be used; and
those who wanted to utilize individual information would have to explicitly request that right from the individual in
question or an agent acting on his behalf.
Laudon goes on to propose that pieces of individual information could be aggregated into bundles that would be
leased on a public market he refers to as the National Information Market. For example, an individual might
provide information about himself to a company that aggregates it with 999 other individuals with similar
demographic and marketing characteristics. Such groups could be described by titles such as "20-30 year old
males in California who are interested in computers," or "20-30 year old married couples interested in home
purchase."
Those who wanted to sell to such groups could purchase rights to use these mailing lists for limited periods of
time. The payments they made would flow back to the individual users as "dividends." Individuals who found the
annoyance cost of being on such lists greater than the financial compensation could remove their names.
Individuals who felt appropriately compensated could remain on the lists.
Although there are many practical details of implementation that would need to be solved to implement Laudon's
market, it is important to recognize that information about individuals is commonly bought and sold today by
third parties in market-like environments. The National Information Market simply gives individuals an economic
stake in those transactions that they currently do not have.
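
A back-of-the-envelope sketch of how lease revenue might flow back as dividends under such a market (every figure below is a hypothetical assumption, not a number from Laudon):

    bundle_size = 1000         # individuals aggregated into one tradable bundle
    lease_price = 2000.00      # assumed price a marketer pays per bundle lease
    intermediary_cut = 0.20    # assumed fee kept by the aggregating institution

    dividend = lease_price * (1 - intermediary_cut) / bundle_size
    print(f"dividend per person per lease: ${dividend:.2f}")   # $1.60

    # An individual stays on the list only if dividends beat annoyance costs.
    annoyance_cost_per_year, leases_per_year = 10.00, 12
    print(dividend * leases_per_year >= annoyance_cost_per_year)   # True here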

Personal information
There may be information about me that I don't want revealed just because I don't want people to know it. For
example, many people are very touchy about personal financial information being revealed. They don't want other
people to know how much income they make, or how much they paid for their house or car.
In some cases there is a social interest to making such information public. Consider the following two
examples.
A computer consultant in Oregon paid the state $222 for its complete motor vehicles database, which he then
posted to a Web site, prompting complaints from people that he had invaded their privacy. The database allows
anyone with an Oregon license plate number to look up the vehicle owner's
name, address, birthdate, driver's license number, and title information. The consultant's motive in posting the
information, which anyone can obtain for a fee by going to a state office, was to improve public safety by
allowing identification of reckless drivers. Oregon Governor John Kitzhaber says that instant access to motor
vehicle records over the Internet is different from information access obtained by physically going to state offices
and making a formal request for information: "I am concerned that this ease of access to people's addresses
could be abused and present a threat to an individual's safety." (Associated Press 8 Aug 96)
Victoria, the first city in Canada to put its tax-assessment rolls on the Internet, has pulled the plug after British
Columbia's Information Commissioner announced an investigation into the practice, believing it violates privacy
laws. (Toronto Globe and Mail 27 Sep 96 A3)
In each of these cases there is a public interest in having this information publicly available. Making information
available about owners of motor vehicles may help ensure safer operations. Making sales prices of houses
available may help ensure the accuracy of tax assessments. My neighbors may care about the assessment of
my house, not because they particularly care about my tax assessment, but because they care about their tax
assessment.
Whether or not such information should be publicly available would ideally depend on an individual benefit-cost
analysis. If I am willing to pay more to keep my assessment private than my neighbors would be willing to pay
to see it, we have a potential way to make everyone better off: I pay my neighbors for the right to keep my
assessment private. If they value seeing my information more than I value keeping it private, then they pay me
for the right to see it.
This sort of transaction is not really practical for a variety of reasons, but the same principle should apply in
aggregate: one has to weigh the "average" potential benefits of making this sort of information public against the
potential costs of doing so. The presence of a market where individuals can sell information about
themselves helps provide a benchmark for such benefit-cost calculations.
Certain kinds of information can be collected and distributed without revealing the identity of individuals.
[Froomkin, 1996] explores some of the legal issues involving anonymity and pseudonymity; see [Camp,
Harkavey, Yee, & Tygar, 1996] for a computer science view. [Karnow, 1994] proposes the interesting idea of "e-persons," or "epers," which serve to provide privacy while conveying a relevant description of the individual.

Costs of acquiring public information


Many sorts of public information have been available at some transactions cost: in order to find housing
assessments, it has typically been necessary to travel to a city or county office and look up the information.
Now that increasing numbers of consumers are computerized it is possible to acquire this information much
more inexpensively. Information that was previously deemed appropriate to make publicly available under the old
transactions technology may now be deemed too available.
This, it seems to me, has a reasonably simple solution. The information could be made available in digital form,
but at a price that reflected the transactions costs implicit in acquiring the information using the old technology.
The price paid for the information could then be used to defray the cost of making it publicly available.
For example, suppose that, on average, it took a citizen one hour to go to the county records department, look
up a tax assessment and photocopy the relevant material. Then a reasonable charge for accessing this
information online might be on the order of $25 plus 20 cents or so per assessment requested.
This sort of charging schedule essentially restores the status quo, provides some funds for local government,
and offers an additional choice to individuals. People who didn't want to pay the $25 could make the trip to the
county records office and access the same information there "for free" (i.e., paying no monetary cost).
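
The schedule described is just a flat fee plus a per-record charge, calibrated to the old in-person transaction cost. A minimal sketch (the $25 and 20-cent figures are the illustrative values from the example above):

    def online_access_charge(records: int,
                             flat_fee: float = 25.00,
                             per_record: float = 0.20) -> float:
        """Price online access at roughly the old cost of an in-person visit:
        about an hour of a citizen's time plus copying."""
        return flat_fee + per_record * records

    print(online_access_charge(1))    # 25.2
    print(online_access_charge(50))   # 35.0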

ASSIGNMENT OF RIGHTS
I have argued that an appropriate way to deal with privacy issues is to determine a baseline assignment of
rights, but allow individuals to trade those rights if they desire to do so. If there are no transactions costs in
trading or negotiation, the initial assignment of privacy rights is arbitrary from the viewpoint of economic
efficiency. 5
To see this, suppose that it is worth 50 cents a week to me to have my name omitted from a junk email list, and
that it is worth 20 cents a week to the owner of the junk email list to have my name on it. If the owner of the
email list has the right to put my name on it without consulting me, then I would have to pay him some amount
between 20 and 50 cents to have him remove it. On the other hand, if he has to seek my permission to use my
name, it would not be forthcoming, since the value to him of having my name on the list is less than the value to
me of having it off. Either way the property rights are assigned, my name would end up off the list.
If there are significant transactions costs to making contracts such as these, the standard Coasian arguments
suggest that an efficient allocation of rights would be one in which the transactions and negotiation costs are
minimized. In this case, the appropriate comparison is between the transactions cost to the individual of having his
or her name removed from the list and the cost to the mailing list owner of soliciting permission from individuals to
add them to the list.
When phrased in this way, it appears that the current practice of adding someone's name to a list unless they
specifically request removal probably minimizes transactions costs. However, the rapid advances in information
and communications technology may change this conclusion. The development of social institutions such as
Laudon's market would also have a significant impact on transactions costs.

SUMMARY
Privacy is becoming a very contentious public policy issue. The danger, in my opinion, is that Congress will rush
into legislation without due consideration of the options. In particular, a poorly-thought-out legislative solution
would likely result in a very rigid framework that assigned individuals additional rights with respect to information
about themselves, but did not allow for ways to sell such property rights in exchange for other considerations.
In my view, legislation about rights individuals have in information about themselves should explicitly recognize
that those rights can be "leased" to others for specific uses, but cannot be resold without explicit permission.
This simple reform would lay the foundation for a more flexible, and more useful policy about individual privacy.
In addition it would enable business models that would potentially allow for reduced transactions costs and
better matches between buyers and sellers.
__________________________
ENDNOTES
1 There are many other aspects of privacy that we do not cover. For example, there are issues involving
misrepresentation, unauthorized publicity, and so on that we omit due to lack of space.
2 Certain sorts of behavior have legislative protection; e.g., lists of rental videos.
3 http://www.epic.org
4 http://www.epic.org/privacy/intl/default.html
5 Economic efficiency is, of course, only one concern involved in assignment of property rights. Considerations
of fairness, natural rights, etc. are also relevant.

REFERENCES
[1] Blattberg, R.C., & Deighton, J., Interactive marketing: Exploiting the age of addressability. 33(1) Sloan
Management Review 5, 14 (1991).
[2] Camp, L. J., Harkavey, M., Yee, B., & Tygar, J.D., Anonymous atomic transactions, Tech. Rep., Carnegie
Mellon U (1996). http://www.cs.cmu.edu/afs/cs/user/jeanc/www/home.html.
[3] Froomkin, A.M., Flood control on the information ocean: Living with anonymity, digital cash, and distributed
databases, Tech. Rep., U. Miami Sch. L. (1996). http://www.law.miami.edu/froomkin/.
[4] Karnow, C.E.A., The encrypted self: Fleshing out the rights of electronic personalities. In Conference on
Computers, Freedom, and Privacy (1994).
[5] Laudon, K.C., Markets and privacy. 39(9) Communications of the ACM 92, 104 (1996).

Extensions to the Theory of Markets and Privacy: Mechanics


of Pricing Information
Kenneth C. Laudon
Stern School of Business
New York University
klaudon@stern.nyu.edu

The theory of markets and privacy begins with the understanding that the current crisis in the privacy of personal
information is a result of market failure and not "technological progress" alone. The market failure has occurred
because of a poor social choice in the allocation of property rights. Under current law, the ownership right to
personal information is given to the collector of that information, and not to the individual to whom the information
refers. Individuals have no property rights in their own personal information. As a result, they cannot participate
in the flourishing market for personal information, i.e., they receive no compensation for the uses of their
personal information. As a further consequence, the price of personal information is so low that information-intense
industries become inefficient in its use. The price is low because it does not reflect the true social costs of coping
with personal information. The market is dominated by privacy-invading
institutions. And as a further result, there is a disturbing growth in privacy invasion, an excessive and abusive
disregard for the interests of many in keeping elements of their life private, or at least under their control.
These abuses of personal information are reflected in attitude surveys which over the last decade have recorded
a growing public distrust in how major institutions use personal information, a widespread feeling of frustration
and hopelessness, and the belief that "individuals have lost all control over their personal information." (Equifax,
1996). There is a growing anger in American public opinion over the loss of control over personal information.
Like other market failures, the personal information market failure results in enormous asymmetries in power
and information. For many Fortune 500 firms, personal information is a strategic asset. As it turns out, privacy
invasion pays handsome rewards. There is already today a lucrative market in personal information, but ordinary
individuals cannot participate in the market (because they have no property interest), they are completely
mystified about how their personal information is used in the market, and they have almost no tools to influence
how major institutions use their information. The transaction costs for obtaining information by large institutions
are small and falling, while the transaction costs which individuals incur in obtaining even a copy of their, say,
medical record, are very high. In other words, the tools available to citizens to protect their information rights--as
few as they are--are too costly to use! Contrast this situation to the average person's understanding of the
automobile market in the U.S.: trading locations are known, seller and buyer rights are fairly clear, information
about quality can be obtained, transaction costs for buyer and seller are more equal, and disposition of the
asset is finally decided by the individual.

THE LIMITS OF PRIVACY LEGISLATION


A second element to the crisis in privacy is the inability of the society to elaborate a set of concepts and
policies to rectify the market failure. Over the last twenty years, since the landmark report of the Privacy
Commission in 1972, the societal response to privacy invasion has been a regulatory response driven from
Washington and State capitals. In an effort to correct the market failure, political executives and legislators have
passed more than twenty pieces of federal legislation, and hundreds of state statutes, which attempt to provide
individuals with due process rights to their personal information, without at the same time granting individuals
ownership rights.
The regulatory efforts of the last twenty years have attempted to reduce the asymmetries in information and
power which the market failure creates. Regulatory efforts have tried to define due process rights for individuals
vis-a-vis personal record systems. These efforts are informed by a doctrine called "fair information practices"
developed in the late 1960s, an era when only a few large scale national institutions possessed national
information databases.
There are two kinds of government regulation in response to market failure situations. One type affirms a "natural
monopoly" and tries to regulate price and access. Public utilities and common carriers fall into this situation.
The other kind of government regulation attempts to introduce competition, create a marketplace, and reduce
the market power of large institutions. Unfortunately, privacy legislation of the last twenty years falls into the
former camp: it reaffirms the market failure by securing the property interest in personal information for the
gatherer, and denying ownership to individuals.
The current privacy legislation perpetuates a central dilemma of the information age: how can we live in a society
where individuals can have as much information privacy as they want, and yet where the economic benefits of
using personal information in commerce are optimized?

THE LEGAL AND ECONOMIC FOUNDATIONS FOR INDIVIDUAL OWNERSHIP OF PERSONAL


INFORMATION
An earlier paper attempted to lay the legal and economic foundation for a true marketplace for personal
information [Laudon, 1996]. In this marketplace, individuals would retain the ownership in their personal
information and have the right, but not the obligation, to sell this information either to institutional users directly,
or more likely, to information intermediaries who would aggregate the information into useful tranches (e.g.
blocks of one thousand individuals with known demographic characteristics) and sell these information baskets
on a National Information Exchange.
Individual ownership of personal information can be anchored within British and American common law. The
common law tort of appropriation protects the right of celebrities to own their images, likenesses, voices, and
other elements of their persona. To appropriate personal images of celebrities for commercial purposes without
consent or payment is recognized by the courts as an actionable appropriation. Likewise, it is conceivable that courts and
juries could be convinced to protect the personal "data images" of ordinary citizens. These data images have
somewhat less resolution than a photographic image, but they are increasingly and profoundly descriptive and
predictive of human behavior. As computers extend their powers, these data images will approach photographic
resolutions.
The economic foundation for individual ownership of personal information can be found in the theory of markets
(and related theories of governance) and the theory of externalities. Markets are likely the most efficient
mechanisms for allocating scarce resources. Governments should intervene in markets only if markets fail.
Markets do fail under conditions of monopoly, asymmetries in power and information, and in the case of public
goods, e.g., clean air. Governments should either seek to restore markets or regulate the activity. In the case of
personal information, the market has failed because of asymmetries in power and information brought about by
poor social choice in the allocation of property rights to information. The price of personal information is far too
low, and therefore its abuse in the form of privacy invasion is far too cost-beneficial to those institutions that
dominate the market. The function of government here should be to restore the power of one class of
participants in the market, namely individuals, by vesting ownership of personal information in the individual. A
second function of government is to ensure the orderly functioning of a personal information marketplace.
The failure of the marketplace results in significant negative externalities for individuals. These externalities are
experienced as excessive indirect and direct costs involved in "coping" with information. Coping costs include
tangible costs like excessively large mail handling facilities (public and private), and loss of attention, as well as
intangible costs like loss of serenity, privacy, and solitude. These negative externalities must be balanced
against the positive externalities of nearly unlimited exploitation of personal information which results in
enormous amounts of marketing information being delivered to consumers (whether they want it or not).
However, it can no longer be argued that these positive externalities fully compensate individuals or society for
the negative costs of unlimited exploitation of personal information.

ADDING VALUE DOES NOT LEGITIMIZE APPROPRIATION


Information gathering institutions often argue that a personal name and address has zero value. In fact, they
argue, personal medical, credit, and related information also has no value per se. The large institutions, for whom
personal information is a strategic asset--or so they claim in their annual reports--argue that by collecting
information on individuals from a variety of sources, and mixing this information with other information, they
create the value in personal information, and therefore this value belongs all to them. In this argument, property
results from the "sweat of the brow" expended by gathering institutions.
"Sweat of the brow" is only one element in the theory of property. Actually, the largest portion of wealth in
America is inherited, not created. Surely "sweat of the brow" is a weak theory when it comes to personal
information. For instance, if a thief steals your car, fixes the car, paints it, and mixes it with a fleet of stolen
cars, then indeed the thief has added value to the car and the collection. But these actions by the thief do not
therefore transfer ownership to the thief. To argue that information gathering institutions add value to my personal
information by compiling, collating and mixing in a database, does not solve the question of ownership. To say
information gathering institutions have exclusive property rights to my personal information because they have
added value to the information simply begs the question of who owns my personal information. Whether or not
my personal information appears in a collection, or was mixed with other information, is not decisive for the
question of ownership.

Research on the Mechanics: How Will Personal Information be Priced?


The theory of markets and privacy raises many mechanical questions of implementation. Currently, with
colleagues at New York University's Stern School of Business, we are planning research in a number of areas.
Here are some interesting researchable questions raised by readers of an earlier theoretical paper:
What would individual citizens deposit in local depository institutions? Their "information" or their "information
rights?" How would these rights be transferred?
What would be included in these rights--the right to use only certain information? All information? For what
period of time could these rights be sold?
How would depository institutions or traders on the National Information Market price individual personal
information?
How could people be compensated for the use of their information? How could any mechanism keep track of the
uses of all this personal information over a period of a year?
I believe each of these questions has a sensible and practical answer. We are exploring answers in a forthcoming book called Privacy and Markets. In this paper, we sketch out two lines of ongoing research which address the question of information pricing.

FINDING THE PRICE OF PERSONAL INFORMATION BASKETS


It is amazing how little is known about the economics of personal information in an age when the trade in
personal information has become so vital for the conduct of efficient markets and transaction systems. Currently
we are pursuing two lines of research: (1) the economics of existing personal information markets, and (2)
experimental simulation of market pricing mechanisms to test various formal models of pricing.

The Economics of Existing Personal Information Markets


In this "information economy" about 65% of the GDP is generated in the "information sector" and about 70% of
the labor force is engaged in "information processing" activities (which does not include lower level service
activities). The precise role of personal information in the information sector--as opposed to all other kinds of
information on things and places--is not known but it can be assumed to play an important role. The FIRE
(Finance, Insurance, Real-estate) industry is one of the largest generators and users of personal information,
accounting for 1.1 trillion dollars in GDP, over 500,000 establishments and seven million employees. Even here,
there is no accounting of the dollar amount of personal information trade. The Statistical Abstract of the United
States does not have an index entry for "personal information," or for "information." How odd this all seems as
we enter the "Age of Information."
Every day trained professionals buy and sell enormous baskets of information on millions of individuals in the
form of mailing lists, computer data files, demographic information, and locational information. We know that
governments, credit granting institutions, insurance firms, and credit reporting agencies are the major sellers of
personal information, as well as the major purchasers. We know that this trade in personal information involves
billions of dollars in trade. And yet we don't know the total size of this trade, how traders decide the purchase and selling prices, or even how much a driving record, medical insurance record, or credit record is really "worth."
One line of our research, therefore, is a series of interviews and questionnaires aimed at professional information
brokers in the FIRE and marketing industries. The aim of this research will be to understand the size, structure
and operation of the existing marketplace in personal information, and to understand the underlying pricing
strategies of market participants.

Experimenting with the Economics of Future Personal Information Markets: Finding the Price of Information

On one Internet site, people are paid to read advertisements and to reveal their personal preferences. More sites like this can be expected; they are harbingers of future information markets in which individuals are paid for revealing information about themselves. In fact, personal information markets are springing up all around us in
response to the reticence which individuals feel about giving away personal information. In another unobtrusive
information market, customers at supermarkets are given "discount cards" scanned at every purchase. The
scanned information contains their personal name, as well as all purchase information. The information is then
sold to marketers and manufacturers. Customers receive payments in the form of store discounts on selected
items (which people truly want) and other "payments" in the form of product promotions sent to their home, or
unsolicited phone calls to their home (which most people do not want).
How do ordinary people decide the purchase and selling prices of their own or others' personal information? We will be pursuing answers to this question at the Economics Laboratory at NYU's Department of Economics. Using student subjects, we will create a marketplace in which baskets of personal information having variable attributes of demography, accuracy, and currency can be traded (see the article by Hal Varian, 1996, in this collection). At the end of the experiment, subjects will be allowed to keep their trading profits, a nominal reward for participation.
Using these data we hope to test various formal models of information pricing. The pricing of personal information is probably no different from the pricing of other kinds of information. Students must answer this question every day: how much is the basket of information called a "college degree" really worth? My MBA students continually worry about this question: will I ever earn back the $50,000 which an MBA degree costs? What formal models do we have to answer this question, and to test in the laboratory or against real-world data?
Discounted cash flow methods with no learning. Students and experts tend to see the value of college
degrees, and perhaps all baskets of information, as a rather linear return on investment problem, in which the
worth of information today is equal to the discounted cash flows which the information will produce over a period
of time:

NPV = -C_0 + \sum_{t=1}^{T} A_t/(1+r)^t

where
NPV = the net present value of the information;
C_0 = the cost of the information at the start (t = 0);
A_t = the cash flow at the end of period t;
T = the number of years over which returns are calculated; and
r = the risk-based discount rate, based on the rate of return for investments with similar risk.
This model assumes that the possession of information today does not really influence the possession of
information in the future, and therefore it assumes a linear view of the information valuation and acquisition
process (See Figure 1).

Whenever people value a college degree in terms of its future income-producing potential, they typically are using a discounted linear cash flow model. Similarly, information brokers in a marketplace might price information baskets based entirely on the expected future cash flows associated with the basket of information.
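To make the no-learning model concrete, here is a minimal sketch in Python; the tuition figure, annual cash flows, and discount rate are invented for illustration only, not drawn from any survey or market data.

    # A minimal sketch of the no-learning model: the worth of an information
    # basket today is the discounted sum of the cash flows it is expected to
    # produce. All figures below are hypothetical.

    def npv(c0, cash_flows, r):
        """Net present value: -C0 + sum of A_t / (1 + r)^t for t = 1..T."""
        return -c0 + sum(a / (1 + r) ** t
                         for t, a in enumerate(cash_flows, start=1))

    # Example: a $50,000 MBA assumed to add $12,000 a year for ten years,
    # discounted at 8% (both numbers invented).
    print(round(npv(50_000, [12_000] * 10, 0.08), 2))  # about 30,521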
Discounted cash flow with "learning" effects. While the acquisition of knowledge and information can be serendipitous, abrupt, and unexpected, it is often cumulative. That is, learning a piece of information now will help you learn more information in the future. See Figure 2. People and organizations do sometimes learn: they accumulate information, store it in the form of learned routines, and occasionally act on what they have learned. This suggests a branching problem in which the value of information today is actually much greater than the cash flows produced by that single basket of information. Instead, we need to increment the discounted value of a single basket by an amount equal to the value of future information that might be learned (and which could not be learned or acquired if the initial basket of information were not purchased).

Dos Santos has suggested a two-stage discounted cash flow model that could be useful in predicting how individuals value information baskets:

NPV = -C_0 + \sum_{t=1}^{T} A_t/(1+r)^t + E(V_ssp)/(1+r)^T

where E(V_ssp) = the expected value of second-stage learning, given by

E(V_ssp) = \sum_i p_i (b_i - C_s)

where:
p_i = the probability that state i occurs;
b_i = the expected revenues generated from the information in state i; and
C_s = the cost of obtaining the information in the second stage.
In this instance, when students say "the finance course I take today will help me make a killing on stock
options after I graduate" they are using some sort of two-stage discounted cash flow model to establish the
value of information. So also information brokers in a marketplace might pay a great deal more than net present
value for information baskets based on returns from a single basket. Instead, brokers might pay a "learning
premium" in the belief that what personal information they buy today about individuals will allow them to learn
even more about these same individuals in the future.
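As a rough illustration of this branching logic, the sketch below (Python again) adds a discounted second-stage learning term to the simple net present value. The states, probabilities, and costs are hypothetical, and the choice to discount the learning term back from period T is an assumption of this reconstruction, not part of Dos Santos's exposition.

    # Hypothetical two-stage valuation: first-stage NPV plus the discounted
    # expected value of second-stage learning, E(Vssp).

    def expected_second_stage(states, cs):
        """E(Vssp) = sum over states i of p_i * (b_i - Cs)."""
        return sum(p * (b - cs) for p, b in states)

    def two_stage_value(c0, cash_flows, r, states, cs, t_learn):
        first_stage = -c0 + sum(a / (1 + r) ** t
                                for t, a in enumerate(cash_flows, start=1))
        return first_stage + expected_second_stage(states, cs) / (1 + r) ** t_learn

    states = [(0.6, 40_000), (0.4, 5_000)]  # (probability p_i, expected revenue b_i)
    print(round(two_stage_value(10_000, [4_000] * 5, 0.08, states, 8_000, 5), 2))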

Discounted cash flow with "learning" effects and "loss avoidance." So far we have assumed that
individuals are forced to learn in the second stage. In fact, once people know a little information, they can decide
whether or not to learn more and how much to spend on learning it. So, for instance, students will say "I don't
know if I should seek a position in Europe or the U.S. I will take some courses first, and then I'll decide that
when I 'get there.' If I need additional courses to meet some requirement, I will take the courses when needed."
When students "get there," they may find that one option is worth 0, and another option is worth some positive
number. They will choose the option with the highest benefits and avoid losses or zero benefits. See Figure 3.

Learning poses many hazards. One might learn the wrong thing, or put learning to the wrong ends. In teaching a course on business ethics, the faculty often point out to students that what they learn in business school could indeed lead to being in a position to make just the right decision at the right time, and to make a great deal of money. Or, alternatively, what they learn today could lead directly to Club Fed, that federal penitentiary system established for white collar felons. Sometimes, people learn the wrong things, or learn the right things but apply them to the wrong ends. The point of learning is in part to avoid future loss. People may pay a considerable amount to avoid a loss.
Dos Santos provided a modified version of the two-stage discounted cash flow model by setting the revenues of the second-stage learning to:

E(V_ssp) = \sum_i p_i max(b_i - C_s, 0)

In this formulation, the value of information learned now is greater than merely the summation of discounted cash flows in the first two time periods. The value of information is greater by an amount equal to the value of avoiding a loss or a zero return on the cost of investing in new information. Information brokers in an information marketplace might pay a "loss avoidance" premium for information which they thought could help them avoid future losses. For instance, knowing the DNA information for a basket of individuals would be very useful and valuable for employers and insurers because it could greatly decrease future losses, or at least permit adjusting the cost of insurance to personal risk.
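The difference between the two formulations can be shown in a few lines. Continuing the hypothetical numbers from the previous sketch, the right to decline the second-stage investment turns every losing state into a zero, and the gap between the two expectations is the "loss avoidance" premium.

    # Forced second-stage investment vs. the option to decline it.
    # All numbers are invented for illustration.

    def e_vssp_forced(states, cs):
        return sum(p * (b - cs) for p, b in states)        # must invest

    def e_vssp_optional(states, cs):
        return sum(p * max(b - cs, 0) for p, b in states)  # may decline

    states = [(0.6, 40_000), (0.4, 5_000)]
    cs = 8_000
    print(e_vssp_forced(states, cs))    # 18000.0
    print(e_vssp_optional(states, cs))  # 19200.0; loss-avoidance premium = 1200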
Options models. Discounted cash flow models, no matter how modified, have limitations. All possible outcomes are assumed known, the probability of each outcome is known, and the value of each outcome is known. Most troublesome of all is the selection of the discount rate, which has a powerful effect on estimated economic benefits. One is supposed to choose a discount rate of roughly equivalent market risk, but the risk of buying various kinds of information is not well understood.
One possibility is to consider the price of a basket of information today as an option on future revenue streams which will come from the use of that information in the future. A call option gives the holder the right to purchase a share of stock for a set exercise or strike price in the future. The option has value because the market price of the stock may rise well above the exercise price, whether because the stock pays dividends or simply because its current value is greater than or equal to the exercise price. Of course, there's always the risk that the stock's value will fall below the strike price, and hence the option will be worthless.
Likewise, buying a basket of information today is like buying an option on future uses of that information. Just
like a stock option, the owner of an information basket may not exercise the option if, at the time of the exercise
date, the expected revenues are less than the exercise price. So, for instance, when students say, "I don't know
what will be 'hot' five years from now--investment banking or corporate finance--so I plan to take a wide variety of
courses and be ready to move in either direction when the time comes," they are unconsciously invoking an
options model to evaluate the "worth" of today's information. They are in effect buying an option (paying $50,000
to purchase MBA courses) on an underlying asset (their future possible careers) in the belief that at some future
point they will be able to make a judgment or decision which will produce returns of some sort.
Options models are useful to consider because (a) the price of an option reflects the riskiness of the underlying security, and (b) they reflect the ability of individuals to make decisions about future information gathering costs over time. A model for pricing options has been developed by Black and Scholes, and a modification of that work by Margrabe offers the possibility of extending the original model to a situation where one risky asset (information gathering costs) is exchanged for another risky asset (future revenues from baskets of information). These models are beyond the scope of this paper but are described in Dos Santos. In options models, the price of the option is related to the underlying variance of the security or stock: the greater the variance, the greater the price of the option, because the potential rewards are greater. This is the opposite of the case with discounted cash flow models, where variance in future revenues decreases today's value. Options models will probably overestimate the price of some personal information, but they may be quite accurate for other kinds of information. For instance, for certain types of information which do not change much--say, name and social security information--market brokers would probably pay little using an options model. However, for information which has great variance and where currency is important--like, say, one's medical condition--brokers might pay a lot.
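For readers who want to see the variance effect directly, here is a hedged sketch of the standard Black-Scholes call price in Python. Treating an information basket as the underlying asset (and choosing S, K, and sigma for it) is an assumption of this illustration, not something the Black-Scholes or Margrabe papers address.

    # Black-Scholes price of a European call; higher volatility (variance)
    # of the underlying raises the option price, other things equal.
    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call(s, k, r, sigma, t):
        d1 = (log(s / k) + (r + sigma ** 2 / 2) * t) / (sigma * sqrt(t))
        d2 = d1 - sigma * sqrt(t)
        n = NormalDist().cdf
        return s * n(d1) - k * exp(-r * t) * n(d2)

    # Stable information (low variance) vs. fast-changing information:
    print(round(bs_call(100, 100, 0.05, 0.10, 1), 2))  # low sigma, cheaper option
    print(round(bs_call(100, 100, 0.05, 0.60, 1), 2))  # high sigma, dearer option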
Information and co-specialized information assets. The value of a basket of information is probably a nonlinear function of the number of information dimensions which characterize the basket. This seems to defy the law of diminishing returns, and few things do. Networks may be one such phenomenon: it costs close to zero to add another person to a computer network or a telephone network, but the marginal revenues or other gains are substantially greater than zero. Brands may follow a similar pattern: the more people who use Microsoft Windows, the more valuable the operating system (and Microsoft) become.
Personal information baskets may have network features. We might think of information baskets as interconnected network nodes: the more nodes we have, the more valuable the whole package. A name and address, by itself, has little value. A name, address, and occupation has considerably more value; a name, address, occupation, zip code, medical history, driver history, and credit record probably has a much higher market value. The more dimensions included in the basket, the greater the value, perhaps by some exponent. At some point, of course, the cost of finding out more information exceeds the expected returns.
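A toy calculation makes the point. The functional form value = k * n^alpha and both parameter values are invented here purely to illustrate superlinear basket pricing.

    # Hypothetical superlinear basket value: k * n**alpha with alpha > 1.

    def basket_value(n_dimensions, k=2.0, alpha=1.6):
        return k * n_dimensions ** alpha

    pieces = sum(basket_value(1) for _ in range(7))  # seven fields sold one by one
    persona = basket_value(7)                        # the same seven fields, collated
    print(round(pieces, 2), round(persona, 2))       # 14.0 vs about 45.0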
There are several implications. An individual who sells rights to his or her information should charge considerably more for the "complete persona" than for, say, just a medical record or credit history. A buyer of the complete persona should be willing to pay a great deal more than for small bits and pieces of the person, especially given the likely high gathering costs for small chunks of personal information which later must be collated with other information.
An interesting situation arises for holders of information assets. For instance, if an employer already holds job performance and psychological evaluation information on an employee, along with demographic data, how much would the employer be willing to pay for additional dimensions of the employee's persona, like medical records, DNA records, credit history, and so forth? Obviously the value of the information on hand (co-specialized assets) would be greatly enhanced by the purchase of a few more pieces of information, but the sellers of personal information would not know this. In other words, holders of information are advantaged in the marketplace: buyers are better off than sellers because the buyers' total information value will expand greatly when a few pieces of information are purchased and mixed with information already on hand. The best strategy for sellers will therefore be to sell the "whole persona" rather than bits and pieces; the best strategy for buyers will be to avoid buying the "whole persona" and instead to buy bits and pieces.
CONCLUSION
It is unclear at this time which of the four models described above is most useful in characterizing how people evaluate the price of information, or which strategies will emerge in future information markets. As Tversky and Kahneman have shown, people overestimate the risks of extremely rare events (shark attacks) and greatly underestimate the risks of likely events (having an accident on the way to the beach). Likewise, people may underestimate the future value of information gathered today relative to the value of its future uses, and overestimate the cost of information gathered today relative to its future value. We simply don't know how people in fact behave in information markets. If we hope to understand privacy in our time, we will have to understand the market mechanisms which shape the flow of personal information.
_________________________
REFERENCES

Laudon, Kenneth, Markets and Privacy, 39 (9) Communications of the ACM (September 1996).
Varian, Hal R., Economic Aspects of Personal Privacy, Paper submitted to NTIA Panel, http://alfred.sims.berkeley.edu/privacy.htm
Dos Santos, Brian L., Justifying Investments in New Information Technologies, (7) 4 Journal of Management Information Systems 71-90 (Spring 1991).
Black, F. and Scholes, M., The Pricing of Options and Corporate Liabilities, (81) 3 Journal of Political Economy 637-654 (May/June 1973).
Margrabe, W., The Value of an Option to Exchange One Risky Asset for Another, (33) 1 Journal of Finance 177-186 (March 1978).

Self-Regulation on the Electronic Frontier: Implications for Public Policy
Mary J. Culnan
School of Business
Georgetown University
Washington, D.C. 20057
culnanm@gunet.georgetown.edu


INTRODUCTION

I am new to this group, but the first thing I saw was about Ptrax (sic). I just want to know if anyone else has
called other database corporations to get their information removed from their database. What are the other
companies? Ever since this Ptrax incident companies that I have called are giving me the run around saying
that another company is responsible for the database storage. They've been uncooperative about telling me
anything. (Posting to FTC Privacy Discussion Group, September 23, 1996)
The Internet can support self-regulation in two ways. First, it provides an effective vehicle for firms to implement
fair information practices. Web sites allow firms to disclose their practices and provide a name removal or "opt-out" opportunity to a global audience at a nominal cost. The Internet also serves as an effective vehicle to
mobilize public opinion and collectively to seek corrective action for situations where industry practices are
perceived as being at odds with social norms for acceptable use of personal information. This paper will use
three case studies to demonstrate the latter use of the Internet.
The paper will be organized as follows. First, the three case studies--Lotus MarketPlace: Households, Marketry, and P-TRAK--will be described. Next, the three cases will be analyzed and common themes will be identified. The paper will conclude with the lessons these cases provide for assessing the effectiveness of self-regulation.

CASE 1: LOTUS MARKETPLACE: HOUSEHOLDS


In April 1990, Equifax and Lotus Development Corporation jointly announced the Lotus MarketPlace:
Households, a CD-ROM mailing list. After delaying the launch date, Lotus and Equifax announced on January
23, 1991 that they were cancelling the product. The press release issued jointly by the two firms said the
decision to cancel the product "came after an assessment of the public concerns and misunderstanding of the
product, and the substantial, unexpected additional costs required to fully address consumer privacy issues." 1
Six months later, Equifax announced that it would discontinue sales of direct marketing lists derived from its
consumer credit file.
The database contained both actual and inferred information on 120 million individuals in 80 million households.
Names and addresses in MarketPlace: Households came from Equifax's credit report database. The remaining
fields were taken from the Equifax Consumer Marketing Database (ECMD). These fields included: geographic
information (SMSA, dwelling type and zip were derived from the U.S. Postal Service); gender (inferred from a
name table which categorized common first names as male, female or "unknown"); age in ranges (from public
records); marital status (inferred from type of credit cards held--a credit report showing only individual accounts
would be listed as "single"); income (modeled from self-reported incomes in a computer survey and extrapolated
across the entire population with the same zip+4 area); and "neighborhood lifestyle" (each record was assigned
to one of 50 Microvision clusters). Individuals could opt out of the database by contacting either Lotus or
Equifax, or by registering for the Direct Marketing Association's Mail Preference Service.
The public first became widely aware of MarketPlace: Households after the Wall Street Journal ran a story in
November 1990. 2 The discussion of the product on the Internet began after this article was posted to at least
one public computer conference. Subsequently, the e-mail address for Jim Manzi, the CEO of Lotus, was
circulated on the Internet with a call for people seeing the message to e-mail Manzi and express their concerns.
It was reported that by mid-January 1991, Lotus had been barraged by some 30,000 individuals who felt the
product invaded their privacy and did not want to be included in the database.
CASE 2: MARKETRY
In September 1995, Marketry Inc., a list broker in Bellevue, Washington, announced the availability of a list of
250,000 e-mail addresses compiled from Internet newsgroups and web sites. In October 1995, just one month
later, Marketry resigned from managing the controversial file, citing "vociferous" industry reaction. 3
The list, which was owned by Response America of Altoona, Pennsylvania, was segmented using eleven
"interest" categories: adult, computer, sports, science, education, news, investor, games, entertainment,
religion and pets. It was not possible to opt out of the list.
Like MarketPlace: Households, news of the list was posted to a number of public discussion groups on the
Internet. Marketry's e-mail address was published in the Electronic Privacy Information Center's EPIC Alert and
readers were encouraged to make their views known to the company. The Washington Post also ran a story on
the list highlighting the privacy concerns. The launch of the Marketry list coincided with the Direct Marketing
Association's annual conference where it was the subject of mostly negative discussion at a session on legal
issues related to the information superhighway.
CASE 3: P-TRAK
On June 1, 1996, Lexis-Nexis released P-TRAK, a product for use by attorneys and other Lexis-Nexis
subscribers to locate individuals. 4 The new file was publicized by a direct marketing campaign to existing Lexis-Nexis subscribers. Today, the product remains on the market in essentially the same form as it was first
released.
The file was based on header information from the Trans Union credit report database. At a minimum, each
record contained name and current address. Records could also contain alias names (e.g., maiden name), up to
two prior addresses, telephone number, date-of-birth (month, year) and social security number. All available
fields, including social security number, were displayed for a particular record.
Within a week after releasing P-TRAK, Lexis-Nexis received approximately 200 telephone calls from subscribers
expressing concern because P-TRAK displayed the individual's social security number as part of the search
results. On June 11, Lexis-Nexis discontinued the display of the social security number and instituted an opt-out for P-TRAK; however, a social security number could still be used to retrieve an individual's name and
address. By the end of July 1996, telephone calls about P-TRAK had dwindled to nearly zero. In early
September however, a message about P-TRAX (sic) was posted to RISKS, a computer discussion group about
the risks of computer technology to society. By mid-September, Lexis-Nexis was receiving thousands of
telephone calls about P-TRAK as word spread across the Internet. Both the Washington Post and the Wall
Street Journal ran articles about the protest. The Washington Post also published an editorial focusing on the public unease caused by "having your every public or commercial transaction on file--retrievable at the touch of a button." 5
The conclusion to the P-TRAK case differed from Lotus MarketPlace and Marketry in two ways. First, unlike the
first two cases, Lexis-Nexis did not withdraw P-TRAK from the market, nor did they make any additional
modifications to the product. Instead, they increased their communications facilities to respond to the public
outcry, including accepting opt-out requests from the Internet, either by e-mail or by completing a form on their
web site. Second, P-TRAK attracted the attention of Congress. On October 8, 1996, Senators Bryan, Pressler
and Hollings wrote to FTC Chairman Robert Pitofsky asking the FTC to conduct an investigation of the
compilation, sale and use of electronically transmitted databases that include personal information about private
citizens without their knowledge, and to propose any legislation the Commission deemed appropriate given the
results of the study.

COMMON THEMES FROM THE THREE CASES


The three case studies share common themes related to privacy:
The three products represented third party use of personal information. The information in the three
products was not collected as the result of a customer relationship between the individuals who
complained about the product and any of the three firms. Instead, the information was acquired from
another firm, and the customers of the three firms were other companies;
Consumers were not aware of the opt-out procedures for the products. While two of the three firms
(Equifax/Lotus and Lexis-Nexis) offered an opt-out, there was no feasible method to notify the individuals
in the files that their personal information was to be used in this way and to inform them that they could
remove their names from the files if they objected. This point is related to the first point. Absent a
customer relationship, neither firm was in direct communication with the individuals whose personal
information they were marketing; and


All three products represented incompatible use of personal information, where information collected for
one purpose was used for a different purpose without the individual's knowledge or consent. In the case of
Lotus MarketPlace and P-TRAK, the information was collected for credit reporting purposes. In the case
of Marketry, individuals disclosed their identities and their interests for the purpose of participating in a
public discussion group.
It is also interesting to note that as word about the products spread over the Internet, misinformation about the
products was also transmitted in the same way that miscommunication occurs in a parlor game where people
whisper a message from person to person in a large circle. The message is invariably distorted when it is
whispered again to the originator after making the rounds of the group.

DISCUSSION: IMPLICATIONS FOR PUBLIC POLICY


The title of this paper, "Self-regulation on the Electronic Frontier," was chosen intentionally to highlight the ad hoc way that public concerns with these three products were addressed. The outcomes, which were viewed at the time by consumers as victories, yielded solutions which in retrospect turned out to be temporary and which also resulted in a competitive disadvantage for the three firms. In June 1996, a division of Experian (formerly TRW Marketing Services, Inc.) announced a Lotus MarketPlace: Households lookalike product, Neighborhood ConnX,
to be available in local computer stores beginning third quarter 1996. Other CD-ROM databases are also
commercially available. The DM Group of Aurora, Ohio picked up the list of e-mail addresses after Marketry
resigned the list. Lexis-Nexis has a number of competitors to P-TRAK, none of which appear to offer a
comparable opt-out. Lexis-Nexis is no longer competing on a level playing field because its competitors offer a
more complete set of names in their databases. This suggests that the current approach to self-regulation,
particularly for information resold by third parties rather than being acquired directly from consumers, is not
effective for either consumers or business. Table 1 (which follows the endnotes) summarizes the results of the
three case studies.
There are several lessons that can be drawn from the three case studies. Black's Law Dictionary does not contain a definition for self-regulation; it defines regulation broadly as governing according to rule. The lessons of the three case studies suggest, first, that for self-regulation to be effective, it needs to be fair--meaning the same rules should apply to all firms in an industry. 6 Absent a uniform set of standards (e.g., fair information practices) and effective enforcement mechanisms, there are no incentives for firms to self-regulate if advantages accrue to those who do not play by the rules. Currently, neither exists.
Second, information acquired from data compilers or third party resellers instead of being collected directly from
individuals appears to present a special challenge for self-regulation. There are no market pressures for the
reseller or the compiler to observe fair information practices, given an increasing demand for these types of
information services. Because other firms, not consumers, are the reseller's customers, the individuals whose
information is being resold cannot vote with their wallets and take their business elsewhere as a means of
pressuring the reseller to observe fair information practices. Most information resellers or compilers typically
have not established processes for handling communications from people who are not their direct customers but
whose personal information is the basis for the resellers' or compilers' products. If they have, public awareness
of these procedures is likely to be low. For this type of information use to be fair, consumers should be informed
about the firms who resell their personal information, the legitimate uses to which the information will be put, and
the procedures the reseller or compiler has implemented to ensure the information is only acquired for legitimate
purposes. Consumers should be provided an opportunity to opt out of the file if appropriate, for example, if the
information is to be reused for incompatible purposes. Research has shown repeatedly that people are willing to
disclose personal information if the benefits exceed the risks of disclosure and the information will subsequently
be used fairly. 7
Third, the Internet has the potential to serve as an effective vehicle for educating individuals about the ways their
personal information is used so they can make informed choices about the types of use they find acceptable.
Trade associations or public interest organizations could post comparative information about information
resellers and compilers on the World Wide Web, with a link to these firms' home pages. The table comparing
the privacy policies of the four major online service providers which the Center for Democracy and Technology
developed as part of its privacy demonstration project (http://www.cdt.org) could serve as a model. Lexis-Nexis has already demonstrated the feasibility of using an online form for opting out of a database with its opt-out and automatic acknowledgment for P-TRAK.
Finally, the Internet is likely to continue to serve as a vehicle for consumers to mobilize when a large segment of
the public perceives that industry practices are at variance with social norms. The Internet discussion about Lotus MarketPlace was confined primarily to the computer and privacy communities. Subsequently, the
Internet has gone mainstream and Congress has become wired. Messages about P-TRAK appeared in
discussion groups ranging from consumer issues to music to African folklore in addition to electronic
communities concerned with privacy. As was described above, the Marketry discussion reached at least one
discussion group in Europe. 8 In the future we are also likely to see discussion expand greatly beyond our own
borders, given the global interest in privacy and the global reach of the Internet.
____________________________________
ENDNOTES
1 Mary J. Culnan and H. Jeff Smith, Lotus MarketPlace: Households--Managing Information Privacy Concerns (A) & (B) (1992) (unpublished teaching case, Georgetown University School of Business).
2 John R. Wilkie, Lotus Product Spurs Fears About Privacy, The Wall Street Journal, November 13, 1990, at B1.
3 Larry Jaffee, List Company Resigns E-Mail File, DM News, October 23, 1995, at 1.
4 Much of the information in this section is based on a presentation by Steven Emmert, Corporate Counsel for Lexis-Nexis, Cyberspace Policy Institute, George Washington University, November 19, 1996.
5 Awash in Information, The Washington Post, September 28, 1996, at A16.
6 It is interesting to note that Marketry was not a member of the Direct Marketing Association when it managed the list of e-mail addresses, and was therefore technically exempt from observing its ethical guidelines.
7 Mary J. Culnan and Pamela K. Armstrong, Information Privacy Concerns & Procedural Fairness: An Empirical Investigation (1996) (unpublished working paper, Georgetown University School of Business).
8 The online discussion of P-TRAK and Marketry may be tracked using DEJANEWS (http://www.dejanews.com).

Table 1: Summary of the Three Case Studies

|                                    | Lotus MarketPlace: Households | Marketry | P-TRAK |
|------------------------------------|-------------------------------|----------|--------|
| Product                            | CD-ROM marketing database | E-mail addresses segmented by interest | Locator service on LEXIS-NEXIS |
| Date                               | 1990-1991 | 1995 | 1996 |
| Data Source                        | Credit reports and public records | Harvested from the Internet | Credit report headers |
| Direct Relationship with Consumer? | No | No | No |
| Privacy Problems                   | 1) Third-party acquisition of data; 2) "Stealth" opt-out; 3) Incompatible use | 4) Third-party acquisition of data; 5) No opt-out; 6) Incompatible use | 7) Third-party acquisition of data; 8) "Stealth" opt-out; 9) Incompatible use |
| Outcome                            | Product canceled | Resigned management of list | Congress requested FTC study |
| Competitive Responses              | Lookalike products | Competitor picked up list | Competitors' files are more complete (no opt-out) |
| Public Pressure via                | E-mail, negative press | E-mail, informal pressure from industry, negative press | E-mail, negative press |

"Whatever Works" The American Public's Attitudes Toward


Regulation
and Self-Regulation on Consumer Privacy Issues
Alan F. Westin
Professor of Public Law and Government, Columbia University,
and Publisher, Privacy & American Business

BACKGROUND

NTIA invited Privacy & American Business and Dr. Alan F. Westin to submit a paper on "The Public's View of When Privacy Self-Regulation Is Appropriate." This is part of NTIA's solicitation of expert commentaries on the pros and cons of industry self-regulation in the uses of "telecommunications-related personal information," particularly in the online and Internet environments.

IS "PUBLIC OPINION" A FEASIBLE AND USEFUL INPUT?


Discerning and interpreting "the public's view" on the regulation-to-self-regulation continuum in the privacy area is
not an easy task. For example, many interest-group spokespersons claim to know "instinctively" how the public
feels about privacy issues; they generally assume that public majorities share their own values and policy
preferences, or would do so if only properly informed. Still other experts believe that the public really has no
coherent or consistent "views" on how regulation or self-regulation does or should function in the privacy area;
they feel that trying to map public views on this matter is at best a waste of time and energy, and at worst a
diversion from good policy development. Finally, some commentators dismiss the body of survey research that
has been done over the past decade on privacy protection as either unsophisticated or self-serving, and would
expect NTIA to give such input little credence.
Recognizing that we start in a contentious setting, we do believe that a careful exploration of public views on
privacy self-regulation is a useful input to the NTIA inquiry, if used prudently and with a sensible awareness of
both the strengths and the limitations of public-opinion analysis.
OVERALL PUBLIC ATTITUDES TOWARD PRIVACY

The American public continues in the 1990's to register strong concerns about threats to their personal privacy from both government and business, and this concern is still rising.
89% of the public said in late 1996 that they are concerned about threats to their personal privacy in America
today, up from 82% in 1995. Of the concerned, 55.5% said in 1996 they were now "very concerned," up 8% from
1995.
72% say they have read or heard a great deal or a moderate amount about invasion of privacy in the past year
(1995). However, only a quarter of the public (25% in 1991 and 1995) say that they have personally been the
victim of what they felt was an invasion of their privacy.
While a narrow majority of Americans worry primarily about government invasions of privacy (52% in 1994 and
51% in 1995), a substantial minority express primary concern about activities of business (40% in 1994 and
43% in 1995). Approximately two-thirds of the public disagree with the statement that the Federal Government
since Watergate has not been seriously invading people's privacy (64% in 1990 and 62% in 1995).
A rising large percentage of the public feels that consumers have "lost all control over how personal information
about them is circulated and used by companies," (from 71% in 1990 to 80% in 1995 and 83% in 1996). There
has also been a major increase in the percentage of people who say they have refused to give information to a
business or company because they thought it wasn't needed or was too personal (from 42% in 1990 to 59% in
1995).
When people nominate the most important issues facing the nation or that the Federal Government should be
addressing, no survey in the 1990's (by Harris or any other survey organization) has found privacy to be
mentioned in the top ten. In 1995, when the Equifax survey gave respondents a list of nine consumer issues to
rate in importance, privacy finished exactly in the middle (fifth) in terms of being "very important," at 61%. Rated
higher in being very important were controlling the cost of medical insurance (84%); staying out of excessive
debt (83%); reducing insurance fraud (74%); and controlling false advertising (71%).
Throughout the 1990's, a majority of the public (54% in 1995) has said that they do not believe their rights to
privacy are adequately protected today by law or business practice.
Optimism is not the outlook that most Americans hold for privacy protection. Eighty-three percent in 1995 said protection of privacy in the year 2000 will either remain the same (42%), which they said was weak, or get worse (41%).
When it comes to specific areas of consumer privacy, a majority of the public looks at business information practices to make two judgments: Is the information sought or collected relevant and socially acceptable to use for this purpose? And are basic fair information practices being observed?
In each of six Equifax-Harris surveys, detailed questions were presented asking consumers how they felt about
information practices in major consumer areas. These have included credit reporting, financial services,
underwriting for life and property insurance, employment, telecommunications, medical records and health care,
direct marketing, and similar matters.
The general pattern of responses was striking and consistent:
When the Equifax surveys give respondents lists of types of information that businesses or employers could ask
for--to decide whether consumers or job applicants receive benefits or opportunities--the public applies pretty
sophisticated notions of decisional relevance. For example, strong majorities accept the relevance of payment
histories, bankruptcy status, litigation pending, and similar matters when credit grantors are asked to make
loans or issue credit cards. Job-relevant criteria are approved and non-job-relevant ones are disapproved, and so on.
The public also holds clear notions as to the social propriety of asking for certain information even if it may be highly relevant. This often reflects changing national standards as to what kinds of personal information the public in the 1990's thinks it is acceptable for businesses or employers to ask about and use in making particular decisions, such as gender, race, lifestyle, political activity, associational affiliations, sexual preference, etc.
If information is considered relevant and proper to use, the public then looks to see whether what have come to
be known as fair information practices standards and procedures are being observed by businesses in specific
contexts. If they are, a majority of the public generally expresses comfort with the organization's information
practices.
The key elements the survey respondents consistently rate as essential are:
notice to the consumer of what information is being collected and how it will be used;
limitation of uses to the broad area of consumer activity the individual is knowingly involved in;
choices offered to consumers as to any additional uses to be made of their personal information by the
collecting organization or by furnishing it to others;
opportunities to examine and, where needed, contest and correct the contents of records compiled about
the individual; and
adoption of effective rules and procedures for assuring the confidentiality and security of the personal data
entrusted to the collecting organization.
In a process shown to operate in all these Equifax surveys, a strong majority concern or even disapproval that is
registered by respondents over specific business or government information practices presented to them will
shift to strong majority approval when the survey presents key fair information practices and asks--if these were
observed (and sometimes, if these were written into law)--whether the information practices would then be
acceptable to respondents.
The movement of majorities from initial registered concern to majority approval--if safeguards are adopted--shows
that privacy is not seen by the large majority of the American public as an absolute, in the sense of expecting
businesses that provide services to consumers or government's social programs to operate without access to
relevant and socially-appropriate personal information. Rather, the judgment process tested in the Equifax
surveys demonstrates that, to most Americans, the key issue is almost always a matter of defining, adopting,
and observing reasonable safeguards to avoid or limit present or potential abuses.

Surveys establish that the driving factors behind privacy attitudes, both in general and in specific
consumer areas, are the individual's level of distrust in institutions and fears of technology abuse.
The Harris Distrust Index combines measurement of distrust in institutions (government, voting, and business)
with fear that technology is almost out of control.
Harris Privacy Surveys since 1978, and throughout the Equifax series in the 90's, have found that a respondent's
score on the Distrust Index correlates with a majority of that respondent's positions on privacy in general and the
industry-specific questions on each survey. The higher the Distrust Score, the more a respondent will express
concern about threats to privacy, believe that consumers have lost all control over uses of their information by
business, reject the relevance and propriety of information sought in particular situations, call for legislation to
forbid various information practices, etc.
In 1995, the American public divided as follows on the Distrust Index:
High (distrustful on 3-4 questions) 29%
Medium (distrustful on 2 questions) 42%
Low (distrustful on 1 question) 23%
Not (no distrustful answers) 6%
The 1995 score of 71% of the American public registering High or Medium Distrust is the highest those levels have ever been. And, in 13 of the 1995 survey's 16 questions asking about general privacy concerns and measuring specific privacy attitudes, the strongest privacy positions were registered by the High Distrustful
respondents; the next strongest by the Medium Distrustful; and so on through the Low to Not Distrustful. In
survey terms, this is confirmation of the direct relationship between the Distrust orientation and positions on
privacy issues.
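Purely to make the banding explicit, here is a hypothetical Python sketch of how a respondent's count of distrustful answers on the four index questions maps to the levels reported above; the thresholds follow the 1995 breakdown, while the function name and everything else is invented.

    # Banding a respondent's distrustful answers (0-4) into index levels.

    def distrust_level(distrustful_answers: int) -> str:
        if distrustful_answers >= 3:
            return "High"    # distrustful on 3-4 questions
        if distrustful_answers == 2:
            return "Medium"
        if distrustful_answers == 1:
            return "Low"
        return "Not"

    print(distrust_level(4))  # "High"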
Since not just the Equifax surveys but a barrage of national surveys over the past two decades confirm that a
majority of the American public has deep institutional distrust and technology fears, it is highly likely that the
public's strong privacy concerns will carry forward into the late 1990's. This means that it will take firm, explicit,
and publicly-credible privacy-protection measures--not just promises or good intentions by business and
government--to counter the fallout in the privacy field from the American public's larger negative social attitudes.

The division of the American public into three basic clusters--Privacy Fundamentalists, Privacy
Pragmatists, and the Privacy Unconcerned--continues to be a stable segmentation of public
approaches to privacy.
In 1990, 1991, and 1995, analysis of the samples' responses to a set of representative privacy questions
showed that the American public divides into three cohesive camps on privacy issues as a whole and consumer
privacy issues in particular.
The three divisions the survey data identify are as follows:
Privacy Fundamentalists (about 25%). This group sees privacy as an especially high value, rejects the claims
of many organizations to need or be entitled to get personal information for their business or governmental
programs, thinks more individuals should refuse to give out information they are asked for, and favors enactment
of strong federal and state laws to secure privacy rights and control organizational discretion. Privacy
Fundamentalists score at the High level in the Harris Distrust Index.
Privacy Pragmatists (about 55%). This group weighs the value, both to them and to society, of various
business or government programs calling for personal information, examines the relevance and social propriety
of the information that is sought, looks to see whether fair information practices are being widely enough
observed, and then decides whether they will agree or disagree with specific information activities--with their
trust in the particular industry or company involved a critical decisional factor. The Pragmatists favor voluntary
standards over legislation and government enforcement but they will back legislation when they think not enough
is being done--or meaningfully done--by voluntary means. Privacy Pragmatists generally score at the Medium
and some High levels in Distrust.
Privacy Unconcerned (about 20%). This group doesn't know what the "privacy fuss" is all about, supports the
benefits of most organizational programs over warnings about privacy abuse, has little problem with supplying
their personal information to government authorities or businesses, and sees no need for creating another
government bureaucracy to protect someone's privacy. Not surprisingly, the Privacy Unconcerned score at the Low to No Distrust levels on the Distrust Index.
In the 1995 Equifax-Harris survey, this three-fold division was present in the public's positions on 13 of the 16
privacy-attitude questions. On each of those questions, the strongest privacy concerns and orientations were
held by the Privacy Fundamentalists, middle positions were taken by the Pragmatists, and the least concern
or disapproval was registered by the Privacy Unconcerned.

VOLUNTARY STANDARDS, GOVERNMENT REGULATION, AND INDIVIDUAL CHOICES


Though the American public clearly wants more effective policies and practices to strengthen consumer privacy
rights, just what does the public think should be done and by whom? To test those issues, Equifax surveys over
the past seven years have asked a variety of questions:
Does the public favor passing state or federal legislation to define and enforce consumer privacy rights in
particular sectors, when the public feels that organizational practices and any existing laws are presently
inadequate? The answer is Yes. In the case of medical records and health information, for example, the public in
1993 strongly supported new health privacy legislation at the federal level. On the other hand, majorities rejected
the need to forbid employers to conduct monitoring of workers for quality and observance of rules and laws.
Does the public favor creating a government commission to study how technologies are being applied in a
particular area of personal information collection, and make recommendations to protect privacy and due
process in that setting? Definitely. A 1995 Harris study for the Center for Social and Legal Research found that
85% of the public felt it was "important" for a government commission to be appointed "to examine how genetic
tests are developing and to make recommendations for policies or laws to protect privacy of genetic test
information and control the uses of genetic test results." Forty-eight percent felt this was "very" important.
Does the public feel that a government privacy agency should be created to enforce privacy protections in a particular sector where no such agency exists today? The answer is, again, Yes. For example, a federal agency with regulatory authority was strongly favored by the public in 1993 to help protect health-information privacy.
In general, do consumers prefer good voluntary privacy policies by business (if those are provided) over the
enactment of government regulation? Yes, says the public. Majorities at 72% in 1995 said they favor the
voluntary approach when this is actively pursued by the private sector.
Finally, does the public feel that some sort of federal privacy commission or agency, either with regulatory
powers or as an advisory body, should be added to existing industry-based federal agencies, such as the
Federal Communications Commission and Federal Trade Commission? We did not have a clear answer on that
prior to 1996. When such a question was posed on the Equifax 1990 survey, it offered three choices--a federal
regulatory agency, a federal advisory agency, and improving existing policies--and the public in 1990 divided
about equally among those three alternatives.
Given the debate over whether the U.S. would be considered by the European Union to have an "adequate" data protection regime unless it created a federal regulatory agency with powers over the entire private sector, on the European model, the Equifax-Harris 1996 survey posed the following question:
The present system in the U.S. for protecting the confidentiality of consumer information used by business
combines three main controls: voluntary privacy practices adopted by companies, individual lawsuits and court
decisions, and federal and state laws in specific industries.
Some experts feel that Congress should create a permanent federal government privacy commission, as some
European countries have done. This commission would examine new technology development and could issue
and enforce privacy regulations governing all businesses in the U.S.
Other experts believe the present system is flexible enough to apply those consumer privacy rights that the
American public wants to have protected, and that creating a federal commission gives too much authority to
the federal government.
Which of these choices do you think is best for the U.S.?
The two answers, rotated to avoid bias, were: "creating a federal government privacy commission" or "using the
present system to protect consumer privacy rights."
In addition, to make sure that posing all the specific survey questions to respondents about business
information practices, privacy issues, and potential privacy policies or laws did not exert an imprinting effect on
respondents, the question about a federal privacy agency was presented to half the sample near the beginning
of the survey and to the other half at the end of the survey, just before the demographic queries.
A solid two-thirds (67%) of the combined total sample chose "staying with the present system of privacy
protection" over "creating a federal government privacy commission." And, respondents asked the question at
the end of the survey--after being sensitized to privacy issues by about 18 minutes of questioning--chose staying
with the present system by 70%, compared to the front-end respondents' 64% choice of that position.
Why do less than one in three Americans (28%) think that a federal privacy agency "is best for the U.S." today?
And, why is this true among the groups that regularly show up in the Equifax surveys as the most concerned
about threats to privacy, in general and in particular sectors--such as liberals; computer and on-line service
users; persons with low incomes and education; minorities; etc.? Even among these groups, only 37% or less
of their members believe a federal privacy agency with regulatory power would be the right step for the nation to
take.
The prime explanation for this population-wide view seems to be the application to the consumer privacy area of
a set of powerful political and cultural attitudes of Americans that go back to the Founding Fathers but are still
deeply held in the computer age. Many of these underlying attitudes have been tested and reflected in the
Equifax privacy surveys since 1990. Nine basic orientations are involved:
The basic distrust of government and emphasis on individual rights and choices, a theme that resonates
loudly in the 1990's;
The importance of the First, Fourth, and Fifth Amendments in our Bill of Rights in framing the ways we
define and implement privacy rights and information-handling, and how our courts balance those rights
with other compelling social interests;
The strong states-rights component of our federal system, which sees state experimentation as a prime
social-policy instrument and usually views federal supervision as justified only when most states fail to
act and national uniformity is absolutely necessary;
The high status given to individually-generated litigation and court rulings as a powerful remedy for
perceived harm, especially through interest-group sponsorship and the financing of "rights litigation";
Our selection of industry sectors for concrete and tailored legislative action and area-expert regulatory
agency supervision (on privacy and many other matters), rather than adopting multi-industry or nationwide
interventions;
A major popular preference for voluntary privacy policies and actions by business and non-profit
organizations over legislative solutions, unless such voluntary actions are seen as insufficient, or in need
of legal reinforcement;
The powerful role of the media in the U.S. in exposing misconduct or abuses by businesses or
government, and the actions that business and government leaders take to either avoid media
thunderbolts or to correct mistakes when they become public embarrassments;
The general preference of Americans for market-based solutions and private choices by individual
consumers about alternative privacy policies, wherever these are meaningfully afforded; and
A general sense that if technological methods can be found to let people set their own different
boundaries and balances of privacy, this would be a preferred solution, given the wide variation in privacy
concerns and preferences from individual to individual that survey research shows to be the continuing
situation in the U.S.
It should be noted that the 1996 question did not ask respondents whether a temporary federal privacy study
commission (as we had in 1975-77) or an executive-branch federal privacy body limited to research and advisory
missions would be viewed as a desirable or necessary step in the late 1990's. Those represent potential
measures that it would be useful for future privacy surveys to test.

THE 1997 PRIVACY & AMERICAN BUSINESS SURVEY ON ONLINE PRIVACY


The latest and most important addition to our public opinion resources is "Commerce, Communication and
Privacy Online: A National Survey of Computer Users." This detailed, 25-minute telephone survey of 1,000 adults
using computers at home, work, school, or any other place was conducted in April, 1997 by Louis Harris &
Associates and Dr. Alan F. Westin for Privacy & American Business. Its results were released publicly on June
11, 1997 at the Federal Trade Commission's Public Workshop on Consumer Information Privacy, in Washington,
D.C. The survey was sponsored by 10 companies and industry associations, with advisory input from consumer
and public interest groups, academic experts, and federal government staffs concerned with consumer
protection, telecommunications, and commerce.
The survey's respondents were a representative national sample of Americans 18 and over who currently use
computers; this represents about 100 million persons out of the approximately 190 million adults nationwide. Of
these computer users, 42% (representing about 42 million adults) say they are accessing
the Internet once a month or more; 33% (representing about 33 million adults) say they use online services; and
49% (representing about 49 million adults using computers) say they are neither online nor using the Internet.
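To make the projection arithmetic concrete, the following minimal sketch (ours, not the survey's) shows how the reported shares map onto population counts, assuming the paper's base of roughly 100 million adult computer users; the subgroups evidently overlap, so the shares need not sum to 100 percent.

    # Minimal sketch (not from the survey): projecting subgroup shares onto
    # population counts, using ~100 million adult computer users as the base.
    ADULT_COMPUTER_USERS = 100_000_000

    shares = {
        "use the Internet monthly or more": 0.42,
        "use online services": 0.33,
        "neither online nor on the Internet": 0.49,  # overlapping categories
    }

    for group, share in shares.items():
        millions = share * ADULT_COMPUTER_USERS / 1_000_000
        print(f"{group}: about {millions:.0f} million adults")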
Among the major findings of the P&AB study were:
Only a very small percentage (5% of Net users and 7% of online service users) experienced what they
regard as an invasion of their privacy online, as compared to 25-35% of the general public who have
experienced privacy invasions in the offline world.
More than half (53% of Internet users and 57% of online users) are concerned that information about
which sites they visit will be available without their consent, and 59% of Internet users who send and
receive email are concerned that the content of what they communicate will be obtained by a third party.
Computer users are especially concerned about protecting children's personal information online. Fifty-nine percent of computer users say it is not acceptable to collect personally-identified information from
children for site-use statistics; 58% not acceptable for product improvement; 73% not acceptable to
acquire at the time of purchase or registration at a site; and 90% not acceptable to sell to other
marketers.
A majority (52%) of computer users not yet online or using the Internet say that privacy protection is the
most important factor that would influence their decision to go online.
The entire respondent group was asked several questions relating to laws and government regulation. One
question asked whether companies that collect information from children should be legally liable if they violate
their stated policies as to how that information will be used. Ninety-four percent of respondents felt they should
be held legally liable.
When asked to agree or disagree with the statement used earlier on Harris-Westin privacy surveys--"if
companies and industry associations adopt good voluntary privacy policies, that would be better than enacting
government regulation in this country"--70% of these survey respondents agreed.
Turning specifically to online privacy, the survey asked:
"Here are three ways that the government could approach Internet privacy issues. Which ONE of these three do
you think would be best at this stage of Internet development?" The three statements, which were rotated to
prevent any bias in order of presentation, were:
Government should let groups develop voluntary privacy standards, but not take any action now unless
real problems arise.
Government should recommend privacy standards for the Internet but not pass laws at this time.
Government should pass laws now for how personal information can be collected and used on the
Internet.
Fifty-eight percent of respondents favored government passing laws now; 24% supported government
recommending privacy standards but not legislating them; and 15% favored letting groups develop voluntary
privacy standards and government taking action only if real problems arise.
Among adults actually using the Internet, only 47% favored government passing laws now. Those using only
online services favored legislation at 56% and computer users who are neither online nor on the Net supported
legislation at 65%.
Several important points about the Internet-privacy question deserve mention.
It was placed at the end of the survey, after almost 100 items had explored various aspects of respondents'
knowledge, experiences, attitudes, and policy preferences about online and Internet uses and privacy matters,
including the questions about protecting children using the Net. This process clearly sensitized respondents to
privacy concerns.
The question gave no specifics as to what kind of laws would be passed and what their operative definitions
would be; whether this would be federal or state legislation; what kind of penalties or damages would be
provided; whether any agency supervision or regulatory process would be designated or individual litigation
would be the remedy; and various similar matters which could be expected to have major effects on how the
public would evaluate any actual legislation introduced.
The survey found that 71% of respondents online were not aware of their services' information policies; that
majorities of online and Net users were not aware of software tools to control unwanted advertising email or
access to disfavored web sites; and that most visitors to web sites were not aware of the policies those sites
followed in collecting visitors' personal information.
The survey found low confidence by the total respondents in the way that online services, direct Internet access
providers, and companies offering products on the Internet handle "the personal or confidential information
people give them," with the lowest confidence expressed for companies marketing to children on the Net. Much
higher confidence was expressed in banks, employers, and hospitals, for example.
The 58% support for government legislation on Internet privacy can be seen as an example of the support for
"sectoral privacy legislation" that we documented earlier in this paper, and is not inconsistent with the overall
public's lack of support for creating a federal regulatory agency with authority over the entire business world.
The finding also can be seen as expressing a judgment that--as of 1997--computer users do not see the kinds of
"good voluntary privacy policies" being promulgated as yet by companies and industry associations on the
Internet, or perhaps a feeling that it might take a structure of legal rules to make it possible for good privacy
policies to be followed by companies managing or operating on the Net.
In terms of the historic regulation versus self-regulation tension in American social policy, I read this 1997 result
favoring regulation for the Internet as a very early--and accurate--perception by computer users (and probably the
public not using computers) that solid information-policy notices by online services and website operators,
consumer consent mechanisms, and software tools for exercising user controls over spamming and unwanted
communications are simply not yet in place. Nor does the level of confidence felt in online firms lead a majority of
computer users to feel today that industry and its public-interest-group colleagues crafting voluntary privacy
policies should be given deference in advance of action.
At the same time, it seems clear that the leading federal regulatory and executive agencies, properly assessing
both the concerns and the realities of online privacy operations, do not have a template at hand for regulating
privacy online. Indeed, most have expressed a desire to see industry, public-interest groups, and technologists
develop standards and procedures which--some time later, after experience accumulates--might become an
empirical basis for legislative action, if that became needed to control non-compliant or "outlaw" behavior.
It will therefore be important to revisit the feelings of the nation's wired population--and the general public--in the
next 2-3 years, when many major industry actions, user-control programs, and individual-choice mechanisms
have had a chance to be installed and tested, and their scope and impact felt.

CONCLUSIONS
Given the elements of our social, political, and legal culture noted above, and the basically pragmatic orientation
of a majority of the American public on the privacy issues per se, it seems fair to conclude that the American
attitude toward self-regulation or regulation along the Information Superhighway and its Internet high wire is
basically--"whatever works!"
The 1997 Privacy & American Business online privacy survey shows that computer users want meaningful
privacy protection for their personal information as they communicate in the online world, that they do not see
this being provided as yet voluntarily, and that this leads them--at this moment--to support the enactment of
laws. If the forces supporting self-regulation are to change that orientation, it will take powerful deeds of policy
and technology, amply publicized and demonstrated to be effective.
________________________________
SOURCES
The sources for this paper are representative opinion surveys of the United States adult general public. The
citations are to the published reports on each survey, with the year given as of the report's publication. While
most of these had the interviews done during the same year, several had their field work conducted one or even
two years before publication of the report. Except where noted below, all of these surveys were conducted by
telephone.
1 The Dimensions of Privacy, Louis Harris & Associates and Dr. Alan F. Westin, for Sentry Insurance; national
public sample, 2,131 adults; 1979. (In-person interviews).
2 The Equifax Report on Consumers in the Information Age, Louis Harris & Associates and Dr. Alan F. Westin,
for Equifax Inc.; national public sample, 2,254 adults; 1990.
3 Harris-Equifax Consumer Privacy Survey, 1991, Louis Harris & Associates and Dr. Alan F. Westin, for Equifax
Inc.; national public sample, 1,255 adults; 1991.
4 Harris-Equifax Consumer Privacy Survey, 1992, Louis Harris & Associates and Dr. Alan F. Westin, for Equifax
Inc.; national public sample, 1,254 adults; 1992.
5 Health Information Privacy Survey, 1993, Louis Harris & Associates and Dr. Alan F. Westin, for Equifax Inc.;
national public sample; 1,000 adults; 1993.


6 Equifax-Harris Consumer Privacy Survey, 1994, Louis Harris & Associates and Dr. Alan F. Westin, 1,005
adults; 1994.
7 Workplace Health and Privacy Issues: A Survey of Private Sector Employees and Leaders, Louis Harris &
Associates and Dr. Alan F. Westin, for the Educational Film Center; sample of 1,000 employees working in
private-sector companies with 15 or more employees; 1994.
8 Consumers and Credit Reporting, 1994, Louis Harris & Associates and Dr. Alan F. Westin, for MasterCard
International and Visa U.S.A.; national public sample, 1,001 adults; 1994.
9 Interactive Services, Consumers, and Privacy, Louis Harris & Associates and Dr. Alan F. Westin, for Privacy
& American Business, sponsored by Bell Atlantic, Citicorp, and U.S. WEST; national public sample; 1,000
adults; 1994.
10 Equifax-Harris Mid-Decade Consumer Privacy Survey, Louis Harris & Associates and Dr. Alan F. Westin, for
Equifax Inc.; national public sample, 1,006 adults; 1995.
11 Consumer Privacy Issues, Louis Harris & Associates and Dr. Alan F. Westin, for Privacy & American
Business; national public sample, 1,000 adults; 1995.
12 Genetic Testing and Privacy, Louis Harris & Associates and Dr. Alan F. Westin, for the Center for Social and
Legal Research; national public sample, 1,002 adults; 1995.
13 1996 Equifax-Harris Consumer Privacy Survey, Louis Harris & Associates and Dr. Alan F. Westin, for Equifax
Inc.; national public sample, 1,005 adults; 1996.
14 Public Attitudes Toward Local Telephone Company Use of CPNI, Opinion Research Corporation and Dr. Alan
F. Westin, for Pacific Telesis; national public sample, 1,000 adults; 1996.
15 Commerce, Communications, and Privacy Online, Louis Harris & Associates and Dr. Alan F. Westin, for
Privacy & American Business; national sample of computer users, 1,009 respondents; 1997.

The Limits and the Necessity of Self-Regulation: The Case


for Both

Deidre K. Mulligan
Staff Counsel
The Center for Democracy and Technology
Janlori Goldman
Deputy Director
The Center for Democracy and Technology

The Center for Democracy and Technology (CDT) is pleased to respond to the National Telecommunications and
Information Administration's letter of November 14, 1996, requesting that CDT submit a paper addressing the
"Limits and the Necessity of Self-regulation" in the privacy policy arena. The Center for Democracy and
Technology is a nonprofit, 501(c)3 organization working to preserve and enhance civil liberties on the Internet
and in other interactive communications media. Having commented upon the Information Infrastructure Task
Force's Principles for Providing and Using Personal Information and the NTIA's report, "Privacy and the National
Information Infrastructure: Safeguarding Telecommunications-Related Personal Information," CDT is pleased to
have the opportunity to continue this important dialogue about designing and implementing privacy safeguards
for interactive communications media.

OVERVIEW
There is growing consensus that interactive communications media such as the Internet hold great potential for
enhancing democratic values and supporting the full realization of individual freedoms. But this potential will only
be realized if policies are put in place that support and encourage the development of technologies that give
individuals control over both the ideas and beliefs to which they are exposed, and the collection, use and
disclosure of their personal information. It is unclear whether these two policy goals will require similar or
disparate actions by the government and private sectors. While supporting the First Amendment on the Internet
demands a hands-off policy of no regulation, advancing privacy on the Internet requires the establishment of
affirmative public policy that gives people notice of information practices and the ability to make meaningful
decisions regarding the flow of personal information. The regulatory model that will best move us toward privacy
solutions for the Internet is debatable.
It is broadly recognized that the First Amendment potential of the Internet will be best actualized through
policies that limit government control over content. Civil libertarians, industry, and most recently a panel of three
federal judges in Philadelphia, believe that the decentralized, open nature of the Internet, coupled with tools that
allow users to control information, will best achieve the First Amendment goals of abundance and diversity while
allowing individuals to limit their, and their children's, exposure to unacceptable material. Through the
development of filtering and blocking devices that empower individuals to control the inflow of information, the
Internet can give true meaning to the core First Amendment principle that individuals should determine the ideas
and beliefs deserving of expression, consideration, and adherence. 1 For as Judge Dalzell concluded in his
opinion finding the Communications Decency Act unconstitutional:
It is no exaggeration to conclude that the Internet has achieved, and continues to achieve, the most
participatory marketplace of mass speech that this country--and indeed the world--has yet seen. . . . [T]he Internet
deserves the broadest possible protection from government-imposed, content-based regulation.
Unlike the First Amendment area, the question of how best to achieve a strong, effective privacy regime on the
Internet remains unanswered. In fact, at this moment the impact of the Digital Age on individual privacy is
uncertain and presents a number of questions: How can we ensure that the Digital Age offers a renewed
opportunity for privacy? What policies and actions will best support individual privacy on-line? Can self-regulation
adequately protect privacy on the Internet? Will the development of technologies that empower individuals to
control the collection and use of personal information and communications--such as encryption, anonymous
remailers, web browsers, and payment systems--coupled with self-regulatory policies that mandate notice and
choice be adequate? Or will the protection of privacy online require a new regulatory regime for the Internet? How
we can best ensure that the architecture of the Internet is designed to advance individual privacy by facilitating
individual control over personal information is yet to be decided.
The goal of this paper is to examine both the necessity and the limits of self-regulation in the privacy arena.
Through an examination of industry-supported privacy policy and the strengths and shortcomings of existing
privacy laws and self-regulatory systems, this paper will attempt to illustrate the necessity of self-regulation--as
both a model for future policy and a pragmatic solution in the absence of legal safeguards--and its very real
limitations.
SELF-REGULATION
The Debate
Debate over the capacity of self-regulation and market forces to adequately address privacy concerns continues
to rage in the privacy and consumer protection arenas. Advocates often take the position that self-regulation is
inadequate due to both a lack of enforcement and the absence of legal redress to harmed individuals. Industry
tends to strongly favor self-regulation, stating that it results in workable, market-based solutions that respond
directly to consumers' needs while placing minimal burdens on affected companies. 2 These positions, while in
tension, are not mutually exclusive, and in the past both have accurately described the self-regulatory process.
A closer look at the enactment of federal privacy legislation over the years reveals a story much more complex
and sophisticated than these standard position statements portray.
Historical Relation Between Self-regulation and Legislation
Industry positions on the desirability of legislative or regulatory privacy solutions have varied. While industry has
frequently opposed legislative efforts, at times it has vigorously supported, and even actively pursued, privacy
legislation where it believed a law was necessary to build public trust and confidence in a particular industry or
technology. The majority of industry-supported privacy efforts have resulted in legislation that limits the ability of
government--particularly law enforcement--to gain access to information about individuals. However, a number of
industry-supported privacy laws have actually placed limits on the private sector's use of personal information. In
such instances, good industry actors have led the way, crafting self-regulatory policies. These policies are the
prototype for subsequent legislation supported by self-regulated players who, for reasons of public trust, liability,
and/or government concern, want to bind bad industry actors.
It is instructive to examine the factors that have led industry and the public interest community to join together in
support of privacy legislation aimed at regulating both government and private sector use of personal information.
The Electronic Communications Privacy Act of 1986 (ECPA), which updated the 1968 Wiretap Act, was the
result of a collaborative public interest/private sector effort. 3 Industry feared that without legal protection against
eavesdropping and interception, consumers would be reluctant to use emerging electronic media, such as
cellular phones and email, to communicate. The resulting law extended legal protection akin to that provided
First Class mail, and was developed and supported by a diverse coalition of business, civil liberties, and
consumer advocates who understood that consumers would be unwilling to fully embrace electronic mail and
other new technologies without strong privacy protections.
Similarly, 1994 amendments to ECPA crafted privacy protections for transactional information that was content-like in its ability to reveal facts about a person's life. 4 In these instances, developing and enacting a legislative
privacy regime was viewed by the business community as a necessary component of creating and supporting a
flourishing market for their products. The nexus between privacy protection and business necessity resulted in a
diverse public interest/industry coalition supporting increased protections for transactional data.

The Cable Communications Policy Act of 1984 and the Video Privacy Protection Act of 1988 reflect a similar
coalescing of interests. 5 Enacted within a couple of years of each other, both laws resulted from the affected
industry's realization that a lack of assurance that viewing preferences were protected from prying eyes would
have a chilling effect on consumers' viewing and renting habits. The revelation in a Washington, DC weekly paper
that a reporter--or anyone for that matter--could walk in off the street and discover Supreme Court nominee
Judge Bork's taste in movies provided privacy advocates with the perfect story to gain Congress' attention.
Privacy advocates arrived on the Hill with Erols, the Video Software Dealer's Association, the Direct Marketing
Association, and others who realized that the viability of their businesses depended on consumer trust and
confidence that video rental lists were safeguarded by strong legal restrictions on government and private sector
access.
In other instances, industry has been moved to support privacy legislation in the wake of public revelations of
bad practices or a particularly compelling horror story. The Fair Credit Reporting Act of 1970 (FCRA) was initially
drafted and supported by the credit reporting industry in response to congressional hearings which revealed
widespread misuse of credit information and an alarming rate of inaccuracies in credit reports. An enraged
Congress, with the support of privacy and consumer organizations, indicated a commitment to passing a law
regulating the use of consumer credit information. Realizing that legislation was inevitable, the industry set
about crafting a policy that they could support. The Driver's Privacy Protection Act of 1994 was largely triggered
by the murder of actress Rebecca Shaffer, and eventually garnered the support of the majority of the affected
industries. 6 Through information in her driver's license file at the department of motor vehicles, Shaffer's stalker
was able to learn her whereabouts.
A recent example of the relation between self-regulation and legislation comes to us from Canada. In May, two
Ministers in the Canadian government announced their intention to enact a national law to protect privacy online
based upon self-regulatory principles that were adopted by the Canadian Standards Association early in the
year. John Gustavson, president and CEO of the Canadian Direct Marketing Association (CDMA) stated that,
"CDMA believes legislation is the most effective means of ensuring all private sector organizations adhere to the
same basic set of rules in handling information." 7 The interest in creating uniformity and engendering consumer
trust and confidence in an emerging technology brought industry to craft self-regulatory policy and eventually
back legislation based upon its framework.
This brief review of federal privacy legislation indicates that historically, for privacy legislation to garner the
support of industry, it must either implement accepted practices within industry--binding bad actors to the rules
being followed by industry leaders--or be critically tied to the viability of a business service or product as with the
Video Privacy Protection Act and the Electronic Communications Privacy Act.

The Limits of Self-regulation


Recognizing the role that self-regulation has played--both as a test-bed for policy and a gap-filler in the absence
of law--it is important also to examine its limitations. Historically, the two primary shortcomings of industry
self-regulation in the privacy arena are: 1) the lack of oversight and enforcement; and 2) the absence of legal
redress to harmed individuals. 8
Oversight and Enforcement. Oversight and enforcement often have been missing components of industry
self-regulation. Without a strong commitment to ensuring adherence to policies, self-regulation is doomed to be
inadequate--and will appear to the public, policy-makers, and advocates as window-dressing designed to
squelch needed regulatory activity. 9 For example, while many trade associations put forth model policies, few
have the ability to enforce member companies' adherence. Often it is precisely this lack of oversight and
enforcement power that eventually drives good industry actors to seek legislation codifying self-regulatory
principles in an effort to bind bad industry actors who are tarnishing the reputation of the industry as a whole.
Without a strong commitment to oversight and enforcement of industry-supported policy, self-regulation will
ultimately fail.
Legal Redress. Similarly, the absence of effective and responsive legal redress for harmed individuals is a
recurring problem with self-regulatory solutions. Industry-generated policies rarely offer consumers meaningful
relief in instances of abuse. Without a meaningful opportunity to have grievances addressed, and to be
compensated for breaches of policy, consumers will find policies--whether they be self-regulatory or
legislative--sorely lacking.
The Limits of Existing Law
Even acknowledging the number of instances where diverse interests have successfully supported legislative
privacy solutions, the fact remains that large quantities of personal information are unprotected by privacy rules
and vulnerable to misuse and abuse. The patchwork quilt of sector-by-sector privacy protection in the U.S. is
composed of horror-story-inspired legislation that often provides inadequate privacy protections and rarely gives
individuals any direct say in how information about them is collected, used, and disclosed. For example, the
Right to Financial Privacy Act 10 (RFPA)--enacted the year following a Supreme Court ruling that one has no
constitutionally protected privacy interest in personal records held by a bank 11--only limits government access
to personal bank records. Under the RFPA, the private sector's use of personal financial information is unfettered.
The Fair Credit Reporting Act, while establishing some limits on the private sector's use of credit information,
does so by legislating so-called "permissible purposes" for which industry can use and disclose credit
information rather than crafting a consent mechanism that would allow the individual to be the arbiter of access
to her own information. 12 Finally, there are crucial areas of personal information that have little if any federal
privacy protection, such as personal health and genetic information. Even where there has been an attempt to
codify fair information practices through federal statutes, the results have generally fallen far short of the desired
goal of privacy advocates, which is to have individuals control the collection, use, and disclosure of personal
information. 13
This is not to underestimate the importance of hard fought battles to craft statutory privacy protections for
personal information. Existing privacy laws in areas such as banking, cable, credit, educational and video
records set important limits on the use and disclosure of personal information. However, there is not a statute on
the books that gives the individual simple, meaningful, up-front control over personal information. The
sector-by-sector approach of existing U.S. law makes analytic sense, but progress has been slow and many gaps remain.
As a result, efforts to preserve information privacy can be characterized as a constant struggle to set limits on
the invasions of privacy-- the misuse, unauthorized collection, and unauthorized disclosure of personal
information--made possible and practical through technology.

THE CHALLENGE AND PROMISE OF THE INTERNET


It is in this context that we must ask the question: What roles can self-regulation and legislation play in
protecting individual privacy in the Digital Age?
Historically, legislation has been the product either of a consensus coming out of successful self-regulatory
efforts, or of a consensus surrounding both the necessity and parameters of legislation in a specific sector. In the
absence of consensus, legislation has routinely failed--the inability to enact legislation to protect personal health
information over the past 25 years is a painful example--or has resulted in weak legislative solutions. If history is
an accurate predictor, then a self-regulatory effort has a role to play in developing a privacy regime for the
Internet. In addition, there is an added need to proceed cautiously in this area--whether in legislation or self-regulation--due to the evolving nature, and our evolving understanding, of this new medium. 14
The nature and architecture of the Internet, as well as its users and industry participants, give CDT reason to
believe that self-regulation must play a role in preserving privacy online. At this juncture, the decentralized, open,
and global characteristics of the Internet may make it resistant to traditional nation-by-nation, state-by-state,
top-down regulatory regimes. 15 Similar to the characteristics of the medium itself, the social and political nature
of the Internet is distinct from more traditional media in ways that may be important to privacy considerations
and key to industry's motivation to self-regulate effectively. In addition, many pioneers of the electronic frontier
are self-described libertarians. And it must be noted that most of the personal information generated, collected,
used and disclosed online is not governed by existing privacy policy.

An Opportunity to Forge Consensus on Policy

Past battles over privacy policy have largely centered on the mechanism for measuring individual consent. In the
traditional information privacy realm, various interests have wrestled with awkward, mechanistic, and largely
unsuccessful approaches to allowing people some say over how and whether their personal information should
be used by others. Industry has staunchly defended the merits of an "opt-out" approach which presupposes
permission to use and disclose personal information unless the consumer lodges an objection. Privacy and
consumer advocates have engaged in a largely unsuccessful effort to move industry towards a more privacy-protective "opt-in" standard that would require individual consent prior to the use or disclosure of personal
information for unrelated purposes.
The Internet may offer us the opportunity to shift this debate. The transaction costs that attach to the
"opt-out"/"opt-in" tug of war in the traditional paper and database worlds may be negligible in an interactive
environment. And more importantly, by giving individuals the ability to make up-front decisions regarding the use
and disclosure of personal information, the interactive world may offer a policy option that simultaneously
supports stronger privacy policy, more secure communications and transactions, and vigorous commerce.
For instance, interactivity makes possible real-time notice and choice options that allow individuals to establish
a direct relationship with companies with which they do business. Notice and choice options can be presented
at varying levels of granularity, depending on the individual's desire at the time of a particular transaction.
Interactions can be tailored by and for the individual. The need for a default standard of either "use and disclose"
or "don't use and disclose" may be absent in a world of swift and simple two-way communication.

A New Medium
Even in its nascent stage, the Internet has shown itself to be responsive to those who populate it. Users have
sent powerful responses to those who have "abused" the Internet. Mass, unsolicited emails typically result in
"flaming"--tons of angry messages deluging the original sender. Users sent Lexis-Nexis a quick and decisive
message rejecting their locator service "P-trak" on privacy grounds. 16 In response to users' concerns, Deja
News quickly added a mechanism to allow users to flag usenet and newsgroup postings that they did not want
archived and searchable. 17 Similarly, most "look-up" services on the Internet--unlike those in traditional media--give individuals the opportunity to opt out, revealing a sensitivity to Internet users' heightened privacy concerns. In
addition, a variety of privacy tools have appeared on the Internet designed to block the collection of
information. 18
The Internet is alive with people engaged in a host of activities that many consider sensitive. 19 Both the people
engaged in these activities and those establishing the areas where they take place have a strong interest in
developing an environment that engenders trust and confidence in its users. This may bode well for privacy. The
combination of a medium that has been responsive to its users, early users who are known privacy
fundamentalists, and a tradition of people engaging in activities that they want to keep private, may prove a
powerful tonic for individual privacy.

Moving Forward
Acknowledging the existing tensions surrounding privacy and self-regulation, CDT and others such as America
Online, Microsoft, National Consumers League, and the Electronic Frontier Foundation have created a unique,
collaborative model for addressing and resolving these tensions in the privacy area. The Internet Privacy Working
Group (IPWG) seeks to be the latest demonstration of the power of bringing diverse stakeholders from the public
interest and private sectors together (which is usually what must happen before privacy policy becomes law).
While optimistic about the privacy enhancing potential of the Digital Age, CDT believes that the core privacy
principles of notice and individual control over personal information will only be realized if they inform both the
policies and the technologies that are the backbone of the information infrastructure. There is a window of
opportunity offering the chance to put privacy-enhancing technologies into the hands of individuals. To realize
this promise, all members of the Internet community must come together to build an infrastructure that supports
privacy policies and applications.
IPWG grows out of a shared interest in examining the possibility of harnessing and expanding the capacity of
individual empowerment technologies, such as the Platform for Internet Content Selection (PICS), 20 to enhance
user privacy. By expanding upon existing technical specifications built to facilitate individual control over content
coming into the home, it seems possible that technology can facilitate the communication of Web site
operators' information practices to users, and the communication of individuals' privacy preferences to Web site
operators. While increased consumer demand and increased industry focus have resulted in some forward
progress on privacy issues, there is growing agreement that a collaborative effort to identify and craft workable
privacy policies to guide technical developments is needed. At both the November 1995 and June 1996 Federal
Trade Commission Privacy Workshops, there were discussions about formalizing a process for moving forward.
IPWG is forming to house this process.
During the June, 1996 Federal Trade Commission Privacy workshop many industry and public interest panelists
expressed a commitment to pursuing the development of privacy-enhancing technical applications and
specifications to meet the public policy goal of providing individuals with notice of information practices and the
ability to make decisions about the flow of personal information. 21 Working collaboratively with the World Wide
Web Consortium (W3C), IPWG hopes to craft a descriptive vocabulary that will allow individuals--and, where
dictated by law, countries--to make a wide range of normative decisions regarding the collection, use, and
disclosure of personal information. Through its work with the W3C, IPWG will support technical specifications
that allow these privacy decisions to become a seamless and ubiquitous part of the Internet experience.
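As a rough illustration of what such a vocabulary could enable, consider this sketch (ours; the IPWG/W3C work was still in draft when this paper was written, so every field name below is hypothetical). A site publishes a machine-readable statement of its practices, the user's browser holds his or her preferences, and a simple comparison determines whether information flows automatically or the user is first warned and prompted.

    # Minimal sketch of a shared privacy vocabulary; all field names are
    # hypothetical, not the IPWG/W3C specification (then still in draft).
    site_practices = {
        "collects": {"email", "clickstream"},
        "purposes": {"site_statistics", "marketing"},
        "discloses_to_third_parties": True,
    }

    user_preferences = {
        "may_collect": {"email", "clickstream"},
        "allowed_purposes": {"site_statistics"},
        "allow_third_party_disclosure": False,
    }

    def practices_acceptable(site: dict, prefs: dict) -> bool:
        """True only if every declared practice fits the user's preferences."""
        return (
            site["collects"] <= prefs["may_collect"]
            and site["purposes"] <= prefs["allowed_purposes"]
            and (prefs["allow_third_party_disclosure"]
                 or not site["discloses_to_third_parties"])
        )

    # The site's "marketing" purpose and its third-party disclosure both fall
    # outside this user's preferences, so a browser implementing the check
    # would prompt rather than transmit personal information silently.
    print(practices_acceptable(site_practices, user_preferences))  # False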
At this point IPWG members have crafted a draft mission and principles statement to guide its efforts.

MISSION
IPWG is dedicated to developing policies and technologies that support individual privacy by implementing fair
information practice principles. IPWG believes that for the Internet to achieve its full potential for speech, civic
and political participation, and commerce, individual privacy must be respected. To realize the benefits of
interactive media, individuals must be fully informed about information policies and practices, and able to make
informed choices about the use and transfer of personal information. IPWG will craft policies and technical tools
that give users the ability to make decisions about the flow of personal information at the front-end while serving
multiple interests, including seamlessness, the free flow of information, and the development of global
commerce. Through the development of model policies, technical specifications, and public policy guidelines,
IPWG will seek to outline a framework for privacy on the Internet.

GOALS
To achieve its mission, the IPWG's goals are to:
Identify a set of fair information practice principles to guide its efforts.
Develop a set of scenarios that illustrate the implementation of fair information practice principles on the
Internet.
Develop a common vocabulary to enable seamless communications about fair information practice
principles between users, and content and service providers. 22
Foster the development of individual empowerment technologies that facilitate both the seamless
communication of service and content provider's information practices to users, and the seamless
communication of individuals' privacy preferences to service and content providers.
Foster the development of policies, practices, and tools to ensure adherence to fair information practices.
Conduct public and policy-maker education on policies and privacy-enhancing tools that will advance fair
information practices on the Internet.

PRINCIPLES
The following fair information practice principles will guide IPWG's efforts.

Fair Information Practice


Individuals must be provided timely and effective notice regarding the information practices of all entities
operating on the Internet.
Individuals using the Internet must be given the ability to make choices about the collection, use, and
disclosure of personal information during interactions on the Internet. User empowerment technologies
should enable individuals to make meaningful decisions about the collection, use and disclosure of
personal information through a set of individually-chosen preferences.
Individuals must be afforded the ability to access personal information that they have affirmatively provided
(such as subscription information), and the means to challenge and correct it. Individuals must be
afforded the ability to access personally identifiable transactional data (such as clickstream data) where
it will be disclosed in personally identifiable form for purposes other than supporting the transaction for
which it was collected.
Internet users must be educated about the personal information generated during Internet use, including:
the potential privacy concerns raised; the system purpose served; and the customization of content and
service enabled by its authorized use.

Operating Guidelines
The privacy potential of interactive communications media will be realized only through the collaborative efforts of
policy makers, the public interest community, and the communications and computer industries. IPWG is
dedicated to ensuring the participation of all affected users of the Internet. The vocabulary and specifications
developed by IPWG must be available to and flexible enough to address the concerns of users, and content and
service providers. If embraced and implemented, we believe that technology tools, coupled with fair information
practices and policies, can provide an effective method of making individual privacy a reality on the Internet.

CONCLUSION
At this early stage in the process it is difficult to predict whether this particular collaborative endeavor will
succeed. Historically, self-regulation has played a necessary role in creating model policies that provide policy-makers practical blueprints for later legislation, and give industry leaders the opportunity to establish best
practices. CDT strongly believes that the privacy potential of interactive communications media will be realized
only through the concerted efforts of policy makers, the public interest community and the communications and
computer industries. At this point, we believe that if embraced and implemented, technology tools coupled with
fair information practices 23 can provide an effective method of making individual privacy a reality on the Internet
and serve as a test bed for later legislative or regulatory activity in this area.
By building privacy in at the front-end we have the opportunity to craft an environment where each individual may
exercise control over personal information. As the 1995 NTIA report aptly noted, we may have an opportunity to
move beyond the current debate over the intrusive nature of technology and seize the opportunity to ensure that
privacy protection is a core element of this new communications medium:
The promised interactivity of the NII may diminish the need to make a policy choice between opt-in and opt-out.
Such interactivity would make it possible for service providers to obtain consent to use [transaction-related
personal information] from subscribers electronically before any services were rendered. 24
At this point, such a regime is preferable to our current lack of policy, a weak privacy law, or a law that doesn't
respect or reap the full privacy potential of this unique new communications medium. Through a combination of
policy and technical tools we can take a first step down the road toward effective protections for individual
privacy online--if history repeats itself, legislation will surely follow.
_____________________________
ENDNOTES
1 See Turner Broadcasting Sys., Inc. v. FCC, 114 S.Ct. 2445, 2458 (1994).
2 For example, from the Direct Marketing Association's Personal Information Protection Guidelines: "These
Guidelines are also part of the DMA's general philosophy that self-regulatory measures are more desirable than
government mandates whenever possible."
3 The Electronic Communications Privacy Act of 1986, 18 U.S.C. 2510 et seq. (1995).
4 See, Section 207 of the Communications Assistance and Law Enforcement Act of 1994 (providing heightened
protections for transactional data). Pub. L. No. 103-414, 108 Stat. 4279 (1994). There is dispute over whether
other sections of CALEA solve or create privacy problems.
5 The Cable Communications Policy Act of 1984, Pub. L. No. 98-549, 98 Stat. 2779 (1984) (codified as amended in
scattered sections of 47 U.S.C.); and Video Privacy Protection Act of 1988, 18 U.S.C. 2710 (1995).
6 The newspaper and journalism industry did not endorse the bill.
7 See 16 Privacy Times No. 11 (May 30, 1996).
8 It's worth noting that advocates have voiced similar concern with the lack of effective oversight and enforcement
provisions in existing legislative privacy solutions, which often lack private rights of action, significant penalties,
and/or require the individual to show actual harm or damages to seek redress.
9 Recent activities in the area of children's online privacy offer an early warning to industry of the backlash that
can result from a failure to buttress industry policy with oversight and enforcement. At the Federal Trade
Commission's (FTC) workshop on Privacy in Cyberspace (June 4-5, 1996) the DMA in conjunction with the
Interactive Services Association, and the Children's Advertising Review Unit of the Council of Better Business Bureaus
released similar policy statements on the collection of information from children online. Both advocated: 1)
providing notice of information collection and the marketing purpose behind it; 2) limiting the collection of data
from children; and 3) supporting parents' ability to limit data collection on children. Unfortunately, few content and
service providers operating on the Internet have heeded these guidelines. As the Center for Media Education
(CME) and the Consumer Federation of America (CFA) aptly point out in a joint letter to FTC Chairman Pitofsky
(November 25, 1996), "five months later companies are continuing to collect personally identifiable information
from children at their Web sites without disclosing how the information will be used or who will have access to it."
In their letter, CME and CFA provide a long list of Web sites aimed at children that fail to meet basic notice
standards--a long-standing DMA principle, and a core component of the draft guidelines DMA and ISA released
at the FTC workshop. (See attached letter)
10 12 U.S.C. 3401 (1978).
11 United States v. Miller, 425 U.S. 435 (1976). The Miller decision ultimately turned on the fact that the bank
customer could not assert ownership of his documents. The Court held that because Miller's documents were
the bank's business records, the expectation of privacy that he asserted was not reasonable. The Court reached
this conclusion even though most bank customers probably do have an actual expectation of privacy in those
records.
12 15 U.S.C. 1681 et seq. (1970).
13 Section 702 of the Telecommunications Act of 1996, "Privacy of Customer Information," is an
important exception to this generalization. Under the new CPNI provisions, information available to a
carrier through an individual's use of a telecommunications service may not be used to market other services
that the carrier may offer--for example, credit or financial services--nor may the carrier provide the information to
another company for such marketing, unless the individual makes an affirmative written request for disclosure--placing the individual in control of the flow of personal information.
14 In the First Amendment area, the rush to address the issue of protecting minors from objectionable content
on the Internet led Congress to enact the unworkable, over-broad, and unconstitutional Communications
Decency Act. A number of first attempts at crafting privacy legislation for the Internet have met with concerns
from privacy and First Amendment advocates as well as communications, computer, and information
companies. (See letter to Representative Bob Franks (R-NJ) from the Center for Democracy and Technology, the
Electronic Frontier Foundation, People for the American Way, and Voters Telecommunications Watch, raising
concerns with the "Children's Privacy Protection and Parental Empowerment Act," June 4, 1996.)
15 As countries are discovering in the First Amendment area, enacting a law limiting or criminalizing specific
content domestically has little effect on citizens' ability to access the objectionable material.
16 Internet users' outrage over P-trak resulted in action being taken by both the FTC and Congress.
17 DejaNews is a service that organizes all usenet postings into a searchable index by author's name. After
several press stories about DejaNews, the company stated that they were instituting a flag that would allow
people to notify them that they did not want a particular posting to be archived. This is an example of a limit on
subsequent use of information; it is particularly interesting because many users of the Internet would state that
usenet postings are public and subject to no reasonable privacy expectation.
18 For example, the Anonymizer offered by Community Connexion allows users to surf the Web anonymously,
while PGPcookie.cutter lets users block attempts to access "cookie" files of personal information.
19 Support groups on topics ranging from sexual abuse to drug addiction, discussions on political topics from
anarchy to Cuba to Newt Gingrich, and pictures and stories of sexual and other fantasies abound on the
Internet.
20 The values-neutral infrastructure initially developed to assist parents in limiting their children's access to
inappropriate material.
21 In addition, Commissioner Christine Varney confirmed the FTC's interest in this approach, emphasizing that
action must be taken quickly to address this issue. Commissioner Varney followed her workshop statement
with a letter reiterating the FTC's intention to revisit the issue of privacy in six months and requesting a report on
the progress and feasibility of individual empowerment technologies at that time.
22 The IPWG reviewed various privacy models in crafting its principles: "Privacy and Security Related Principles
for the NII," National Information Infrastructure Advisory Council, adopted March 1995; privacy policies developed
by various entities operating online such as America Online, Compuserve, Microsoft, Prodigy, and others;
"Principles for Providing and Using Personal Information: A Report of the Privacy Working Group," Information
Policy Committee of the Information Infrastructure Task Force, finalized October 1995; "The Directive of the
European Parliament and of the Council on the protection of individuals with regard to the processing of personal
data and on the free movement of such data," adopted in July, 1995; the recommendations of various public
interest organizations such as CDT, CPSR, EPIC, EFF, and CME; and the recommendations made by various
trade associations such as the DMA and the ISA.
23 In addition, CDT believes that this model will provide a framework for the FTC to pursue unfair and deceptive
practices actions.
24 U.S. Dep't of Commerce, Privacy and the NII: Safeguarding Telecommunications-Related Personal Information (October
1995) at 26.

Children's Privacy and the GII


Submitted by
Center for Media Education and Consumer Federation of America(4)
Shelley Pasnik
Mary Ellen R. Fise

Armed with sophisticated new research, advertisers and marketers have begun to target the rapidly growing
numbers of children online. World Wide Web (Web) sites and other interactive online services are being
designed to capture the loyalty and spending power of the "lucrative cybertot category." A variety of interactive
advertising and marketing techniques have been developed specifically for this new medium. Many of them
threaten to manipulate children and rob them of their privacy. If allowed to develop without any intervention, these
practices will become widespread and even more egregious.
In March, 1996, the Center for Media Education (CME) released the results of a major investigation of online
advertising and marketing practices directed at children. This study uncovered a number of disturbing new
practices posing two kinds of threats: 1) invasion of children's privacy through solicitation of personal information
and tracking of online computer use; and 2) exploitation of vulnerable, young computer users through new unfair
and deceptive forms of advertising.
This report was an early warning to parents, child advocates, health professionals, and policy makers unaware
of the new practices for targeting children online. There is now a window of opportunity to develop safeguards to
protect children. The Center for Media Education (CME), along with the Consumer Federation of America (CFA),
urges the National Telecommunications and Information Administration to adopt safeguards that will help assure
that children fully benefit from the tremendous resources offered via electronic media, without having their privacy
rights infringed upon. This paper discusses: the problem of protecting children's privacy online; the limitations of
self-regulation in addressing this problem; the limitations of blocking and labeling technologies; and proposed
guidelines to protect children.

INVADING CHILDREN'S PRIVACY ONLINE


The Web is a powerful tool for reaching young people, with children and adolescents constituting a significant
and growing percentage of Internet users. According to Jupiter Communications, nearly five million youth
between the ages of two and seventeen used the Internet or an online service from school or home in 1996. 1
America Online--the largest proprietary provider of online service--found that 46 percent of its roughly 3.5 million
member households include children, and that in those households 54 percent of children aged six to 17 use the service. 2 Children, like adults, find the
interactive nature of online networks extremely compelling. They can also easily communicate with others
online, wherever they are located. They can make new friends, and exchange e-mail with old ones. They can
post messages on bulletin boards, and chat in real-time. Unlike TV, which is a prepackaged, one-way medium,
online media are dynamic and two-way. 3 They give children the power to converse one-to-one, and to display
their creations for anyone online to see. Significantly, many children are choosing to spend time online rather
than watch television. 4 They are logging on rather than tuning in.
The interactive nature of online networks gives them the potential to become the most important medium for
children, even more significant than television. Although still in their early stages, online technologies are
evolving rapidly. Several recent technological breakthroughs will make online media even more appealing to
children. Real-time audio technologies will give children access to music and news from around the world,
allowing them to listen to live broadcasts of events or to sample songs from new bands. Shockwave and Java--applications that enhance the multi-media capabilities of the Web--will permit a new level of interactivity online,
enabling children to manipulate 3-D objects or remix musical numbers. Real-time video technologies will permit
children to see cartoons, music videos, and film clips, and will eventually enable them to watch full-length
movies and TV shows online. VRML (Virtual Reality Modeling Language) will be used to create 3-D worlds in
which children will immerse themselves, and where they will interact in real-time with other visitors. These new
technologies will make children's areas online more and more captivating and more and more lucrative for
marketers.

Targeting Children Online


Evolving online networks are golden opportunities for advertisers and marketers, who are using them to gain
direct access to children of all ages, from preschoolers to teens. The sooner marketers can turn these young users into obliging
consumers, the better.
In the past decade, children have become an extremely valuable market. In 1995, children under twelve spent
$14 billion, teenagers another $67 billion, and together they influenced $160 billion of their parents' annual
spending. 5 In addition to having unprecedented spending power, children are early adopters of high-tech
products, making them a disproportionately important market for the new interactive media.
Marketers see online networks as a fertile new frontier for tapping into the children's market. "This is a medium
for advertisers that is unprecedented...there's probably no other product or service that we can think of that is like
it in terms of capturing kids' interest," remarked Erica Gruen, director of Saatchi & Saatchi Interactive. 6
Ad agencies have begun to devote major resources to using online media to give their clients unprecedented
access to children. A perfect example is Saatchi & Saatchi, which has set up special units to carefully study
children online and to develop sophisticated marketing strategies to target them. 7 Cultural anthropologists have
been hired to examine the nature of "kids' culture;" researchers have studied how children process information
and respond to advertising; and psychologists have conducted one-on-one sessions with sample groups of
children. These experts found that children, whose "learning skills are at their peak," can easily master the new
media's learning curve, which is often daunting for adults. 8 They also determined that the online world
corresponds to the "four themes of childhoodattachment/separation, attainment of power, social interaction, and
mastery/learning." 9 And, perhaps most important, they found that when children go online, they quickly enter
the "flow state," that "highly pleasurable experience of total absorption in a challenging activity." All of these
factors make online media a perfect vehicle for advertising to children. Says Gruen: "There is nothing else that
exists like it for advertisers to build relationships with kids." 10
While children are grappling with the four fundamental trials of growing up--gaining independence, developing
strength, getting along with others, and mastering new skills--their vulnerabilities are exposed. Having identified
how children use their online experience to meet developmental needs, the advertising industry is learning how
to exploit young computer users more effectively. The practices advertisers are using to build relationships with
children are calculated to make children believe that many of their needs can be met through their online
experiences. Once children are totally absorbed in an online advertising environment, they are at their most
defenseless, and are perfect targets for pitches of all sorts.
Online media are ideal for one-to-one marketing that involves "selling to customers one at a time and getting
each one to buy as many products as possible over a lifetime." 11 The sooner marketers can reach children, the
more products they can sell to them over the years. Online advertisers are now targeting children as young as
four. 12
One-to-one marketing gives advertisers unprecedented power over children. By capturing their attention online,
marketers are able to circumvent their normal guardians. Rather than being mediated by parents and teachers,
advertising reaches children directly, enabling companies to establish individual relationships with vulnerable
young computer users. Using the personal information actively and passively disclosed by each child, it is
possible for companies to craft individualized messages and ads. Whether a child receives a personalized
message from the Power Rangers or a special offer to buy a product he or she really wants, it will be hard to
resist. If advertisers can create and nurture relationships through microtargeting, they will be able to develop
unique and long lasting brand loyalty.

New Techniques for Captivating Children


The Center for Media Education's research uncovered a number of marketing and advertising practices that are
potentially very harmful to children. These practices have been grouped into two categories: 1) invasion of
children's privacy through solicitation of detailed personal information and tracking of online computer use; and
2) exploitation of vulnerable young computer users through new manipulative forms of advertising. The first set of
practices includes:
eliciting personal information from children through the use of prizes, games, and surveys;
monitoring children's online activities and compiling detailed personal profiles; and
designing personalized advertising aimed at individual children.
The second set of practices includes:
designing advertising environments to capture children's attention for extended periods of time;
seamlessly integrating advertising and content; and
creating product "spokescharacters" to develop interactive relationships with children.

Collection of Information from Children


The interactive nature of the Internet gives marketers unprecedented power to gather detailed personal
information from children. This information can be collected in two ways: 1) overtly, using sophisticated
techniques to elicit data from children, and 2) covertly, using state-of-the-art software to track children's online
behavior. These complementary approaches enable marketers to compile profiles of each child, and then to
"microtarget" to them individually with personalized advertising. Information collected from children can also be
sold to third parties.

Using Prizes, Games and Surveys


A growing number of children's areas are now eliciting personal information. Some use incentives, promising free
gifts such as T-shirts, mousepads, and screensavers, in exchange for such personal data as e-mail address,
street address, purchasing behavior and preferences, and information about other family members. Disclosures
of personal information often are mandatory when a child wants to play a game, join a club, or enter a contest.
Other Web sites require children to complete registration forms and questionnaires in order to proceed into the
site.
Children are not aware of the potential consequences of disclosing information about themselves and their
families. Youngsters are less capable than adults of discerning the motives behind such giveaways, contests
and surveys, and easily fall prey to such marketing techniques. They also are still quite egocentric and love to
answer questions about themselves. Children also imbue their personal computers with a certain level of trust.
One online children's service recently published results from a survey that asked children who they trusted
more--their parents or their computers. The majority of respondents said they put more trust in their
computers. 13
Even when children are not required to supply information, it is often compelled by the use of hard-to-resist
incentives--prizes, club memberships, or the opportunity to role-play in a superhero's town. Left with few
defenses, children willingly answer probing questionnaires. The following are examples of the techniques
currently being used to elicit personal information from children:
The KidsCom communications playground, aimed at children 4 to 15, uses a forceful approach. In order to enter
the site, each child is required to disclose his/her name, age, sex and e-mail address. The mandatory
questionnaire also requests his/her favorite TV show, commercial and musical groups, as well as the name of
the child who referred him/her to KidsCom. Once children have entered the playground, they are encouraged to
supply additional personal information in order to win "KidsCash," a form of virtual money that can be used to
purchase conspicuously-placed products that are highly popular with children. (http://www.kidscom.com/) 14
Taking advantage of children's desire to belong to a group, the Splash Kids area on the Microsoft Network
promises to make children "Splash Kids" if they cooperate and answer questions about themselves. To ensure
that children comply, a Sony Discman is offered as a prize. A prominently-placed icon, which reads "Sign-up
and Win," is linked to the questionnaire that children are asked to complete in order to be eligible for the
monthly give-away. (Splash Kids is also a Web site: http://www.splash.com/) 15
At the Batman Forever Web site, supplying personal information becomes a test of loyalty. "Good citizens of
the Web, help Commissioner Gordon with the Gotham Census," children are urged. Although the survey uses
the guise of a virtual city's census, much of the information sought pertains to purchasing habits and video
preferences. For example, respondents are asked how likely they are to buy Batman Forever and Apollo 13 on
video. (http://www.batmanforever.com/) 16
The manipulative forms of these information requests secure children's cooperation without their parents'
intervention. These surveys are skillfully blended into children's online environments so that children do not
perceive them as any kind of threat. Marketers can circumvent parents very successfully. Even in rare cases
when children are told that some information is optional and to "check with your parents first," such disclaimers
are likely to be ineffective. Burying such comments in the middle of a survey can hardly be construed as an
adequate warning, and it is certainly no match for a chance of winning a Sony Discman. 17

Tracking Children
One of the unique features of online communications is the ability to collect what is known as usage information
or navigational data by auditing Web sites and content areas. Computer technologies make it possible to track
all interactions users have online, often referred to as clickstream data or "mouse droppings." Such covert data
collection is becoming an essential tool for online advertisers. Unlike TV ratings, which generally use
anonymous aggregate numbers to reveal the viewing behavior of key demographic groups, online usage data
can track how individuals respond to and interact with advertising. A burgeoning industry has developed to
provide such online tracking services.
To attract advertisers to online areas, companies need assurance that their ads will be seen by a significant
number of people. To meet this need, corporations, such as Netscape and I/PRO, have developed elaborate
systems for collecting visitor information. These two companies have devised some of the most popular tracking
methods. 18 Netscape Communications Corp., maker of the most widely used Web browser, utilizes "cookies"
to track computer users' online activities. Cookies are small files stored on the hard drives of Netscape users;
a Web site can write an identifying code and other data into a cookie and read it back whenever the user returns.
Companies using Netscape software can thus recognize a returning user and retrieve the detailed logs they keep
of that user's previous visits to their site, page by page. 19 Using a different covert
measurement tool, I/PRO assigns an identification code to Web users so that they can be tracked on any
I/PRO customer's site. 20 The information collected by these systems and others like them can be used to
compile detailed individual profiles.
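
To make the tracking mechanics concrete, the following is a minimal sketch, in Python, of how a site's server could use an identifying cookie to accumulate a per-visitor clickstream. Every name in it (visitor_id, VISIT_LOG, handle_request) is a hypothetical illustration standing in for, not reproducing, any vendor's actual system.

import uuid

VISIT_LOG = {}  # visitor_id -> ordered list of pages viewed

def handle_request(page, cookies):
    # The browser presents whatever cookie it was given earlier.
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:
        # First visit: assign an identifier and send it back as a cookie.
        visitor_id = uuid.uuid4().hex
        cookies["visitor_id"] = visitor_id
    # Every later request carries the cookie, so page views silently
    # accumulate under one profile.
    VISIT_LOG.setdefault(visitor_id, []).append(page)

cookie_jar = {}  # stands in for the cookie file on the user's hard drive
handle_request("/clubhouse", cookie_jar)
handle_request("/games/contest-entry", cookie_jar)
print(VISIT_LOG)  # one identifier, linked to a clickstream of pages

The essential point is that once the identifier is stored in the browser, it links every subsequent page request into a single dossier without the user taking any visible action.
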
A variety of other companies are also refining software to surreptitiously monitor the behavior of users online.
Such covert tools are being used to track children as well as adults:
Though presented as a playground for children, the entire KidsCom site is really a sophisticated market
research tool. Operated by the SpectraCom Company, the primary purpose of KidsCom is to collect information
about children for the company's clients. SpectraCom boasts of its "proprietary data verification and usage
report system that runs periodical, statistical usage programs which enable tracking of all Web server
connections and usage statistics for each server, page, and item a user may select."
(http://www.spectracom.com/description.html [9]) 21

By marrying tracking data with personal information elicited from users, advertisers are able to compile detailed
profiles of individuals--a key to what industry insiders call "one-to-one marketing." 22 Hotwired--the online sibling
of Wired magazine--is experimenting with customized ads. Hotwired's "smart messaging" allows marketers to
customize their advertising messages based on the location of the computer being used to view their Web
site. 23 As Hotwired's Louis Rossetto recently noted, "What we do well is deepen relationships and create more
immediate one-to-one connections with a customer base." 24
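
The profile-building step itself is a simple join. The sketch below, again in Python with invented names and data, shows how a registration survey and a clickstream log can be merged on a shared visitor ID; it illustrates the technique, not any company's code.

# Invented sample data: answers a child typed into a registration
# survey, and the pages the tracking system logged for the same ID.
surveys = {"abc123": {"name": "Sam", "age": 9, "favorite_show": "Batman"}}
clickstream = {"abc123": ["/toys", "/cereal/contest", "/videos/batman"]}

profiles = {}
for visitor_id, answers in surveys.items():
    profile = dict(answers)  # self-reported data
    # Attach the tracked behavior to the self-reported data.
    profile["pages_viewed"] = clickstream.get(visitor_id, [])
    profiles[visitor_id] = profile

print(profiles["abc123"])
# {'name': 'Sam', 'age': 9, 'favorite_show': 'Batman',
#  'pages_viewed': ['/toys', '/cereal/contest', '/videos/batman']}
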

Microtargeting Children
This practice of tailoring ads to individuals, known as "microtargeting," is expected to become the predominant
mode of marketing online. It is at the heart of current plans for a number of children's online services. In this first
stage of microtargeting children, many sites have begun responding to new users with personalized messages.
Once a child visits a site, she begins receiving unsolicited e-mail messages, urging her to return, and promising
exciting gifts and new activities.
Registering at the Kellogg's Clubhouse Web site prompted the following e-mail message:

From: kellogg@magnet.com
Date: Thu, 8 Feb 1996 20:33:35 -0500
To: sp@cme.org
Subject: Happy Valentine's Day!

Check out the Interactive Valentines on Kellogg's homepage on the World Wide Web. Design
one! e-mail it to your cyberspace friends! or print it for personal delivery! Go to the Rec Room right
now (http://www.kelloggs.com/ec.html), and Crackle (TM) will help you send a special Kellogg's
(R) Valentine card!
Happy Valentines Day!
Love,
Snap! Crackle! Pop! (TM)
http://www.kelloggs.com
In an effort to test how the Web site in the above example dealt with very young children, the age of five was
listed when a marketing survey was completed for the site. Kellogg's seized the opportunity to cultivate a one-to-one relationship with a five-year-old.
If left unchecked, these techniques quickly will evolve into even more sophisticated efforts to target children.
Individualized advertising, based on intimate knowledge of each child's interests, behavior, and socioeconomic status, will give online marketers unprecedented power to tap each child's unique vulnerabilities.
BroadVision, a business software company, has developed what it refers to as a "One-to-One" systems
application. Central to BroadVision's software is the concept of building advertising relationships one customer
at a time. As a recent article explains, "This goes beyond simple transaction processing and secure payment
systems: it's about building relationships with customers online--knowing each customer by name, knowing
their preferences and buying patterns, observing the customers over time, and using this data to sell more
effectively to them." 25
The use of electronic surveys, such as the one found on the KidsCom's Web site, may foster a new wave of
direct marketing. The collection of children's personal information using more traditional methods has already
generated much public concern. Marc Klaas, father of Polly Klaas and founder of the Klaas Foundation for
Children, is attempting to stop the collection and selling of children's personalized market research data. Klaas
is focusing on one company in particular, R. R. Donnelley & Sons and its Metromail subsidiary, to bring
attention to the way in which children's information has become a valuable commodity among marketers. As the
Klaas Foundation's Web site claims, Metromail adds information on 67,000 children to its database each week.
Much of the information in Metromail's library on 6.5 million Californians came from consumer-completed
surveys and response cards. 26
Currently, there are no regulations that prevent personal information on children from being collected or sold to
third parties. Sophisticated data collection and microtargeting techniques could be used to prey on children,
exploiting their sense of trust, and manipulating both their preferences and their behavior.

LIMITATIONS OF SELF-REGULATION
Representatives of online service providers and advertisers have argued that consumer education and industry
self-regulation will be sufficient to protect the privacy of online users. 27 However, nearly a year after the release
of CME's report, we find that companies are continuing to collect personally identifiable information from children
at their Web sites without disclosing how the information will be used or who will have access to it, and without
requesting parental consent. It is clear that industry self-regulation does not provide adequate protection for
children's privacy.
While industry leaders may argue that they can police themselves through self-regulation, past experience
demonstrates that effective self-regulation is highly unlikely, and will not develop at all without some government
intervention. There is little evidence that industry leaders even recognize the need to set limits on the marketing
onslaught aimed at vulnerable youngsters. Instead, the emphasis is on continuing to refine techniques for
creating loyal, lifetime consumers.
CME/CFA believe that children's privacy can be effectively protected only by a combination of comprehensive,
understandable disclosure and verifiable parental consent. Currently, many sites offer no disclosure regarding
the collection of information, whether it is aggregate and anonymous or personally identifiable. 28 Where sites do
offer some form of disclosure, it is usually insufficient to permit meaningful consent. Where sites disclose that
information is being collected, they almost universally fail to ask children to obtain parental consent before
providing personally identifiable information. At the handful of sites that actually request parental consent, none
can actually verify whether consent has been granted.


Nor is it reasonable to expect parents to effectively protect their children from these practices. While many
parents may try to monitor their children's use of online services, it is not an easy task. Unlike television, which
the entire family may watch together, many children use their computers alone. 29 Children also tend to have
greater computer skills than their parents, which makes periodic monitoring more difficult. And because of the
"halo effect," arising out of the educational uses of computers, many parents implicitly trust computers,
preferring that their children go online instead of watching television. 30 They are unaware that children's Web
sites can be more intrusive and manipulative than the worst children's television.

LIMITATIONS OF BLOCKING AND LABELING TECHNOLOGIES


Given children's limited power to protect themselves on the information superhighway, the Direct Marketing
Association (DMA) and other industry representatives have argued that the best defense from exploitative people
and practices encountered online may be well-placed traffic lights. In other words, let problems caused by the
development of new technologies be solved by even newer technological solutions. DMA has asserted in past
FTC proceedings that parents can "take advantage of available software tools and parental access controls to
restrict their children's access to particular sites if they so desire."
Not surprisingly, a number of companies have begun developing software tools to aid parents in their efforts to
screen material. The new technological screening software services--e.g., SurfWatch, 31 Cyber Patrol, 32 Net
Nanny, SafeSurf, 33 CYBERsitter 34--are also not likely to adequately address the problems presented by online
advertising and marketing to children. Most of the software programs were developed to protect children from
sexual materials, rather than manipulative advertising and intrusive marketing practices. 35 Even if software were
created that could effectively screen out such practices, its value would be limited to those parents who could
afford it, learn how to use it, and devote the time needed to install and regularly update it. Although CME/CFA
strongly support the use of technology to customize the information that is released by children to comport with
their parents' judgment and preferences--we recently joined the Internet Privacy Working Group's efforts to
create a PICS-based system--these technology-based tools have significant limitations when trying to protect
children's privacy overall. 36

Blocking Technology
While technology alone will not solve the problem, it could be used to reinforce future regulatory safeguards
setting tight limits on online marketing. Several technological approaches might be employed to help protect
children from commercial excesses online. The first four of these involve parental control software, which
enables parents to prevent children from having access to "objectionable" material. Developers of parental
control software have focused on sexual material, and paid little attention to advertising. But there are several
elements of such software which might be utilized to support regulations shielding children from the coming
marketing onslaught.
Blocking access to particular sites. A number of parental control programs allow parents to block access to
any specific sites they choose. Given the huge and rapidly growing number of sites online, this may not seem
like a very practical way to screen out offensive advertising. Some of the programs streamline this process by
generating a list of sites a child has visited, allowing parents the opportunity to easily examine and restrict
access to sites that concern them. If parents learn that their children are captivated by Chester CheetahTM or
spending hours in the Kellogg Clubhouse with Snap! Crackle! and Pop!, they can block access to these sites.
One drawback with this approach is that it achieves blockage only after the child has already visited the site or
the parent has learned of it on his/her own.
Blocking access to certain types of material. Some of the screening software also can block out entire
categories of content, using key words to identify objectionable sites. While this type of filtering program is able
to screen out advertising sites, it only blocks advertisements for pre-designated products. For example, children
would not be exposed to banner ads for beer and cigarettes if their parents had selected "alcohol and tobacco"
as a restricted category. Given the abundance of products being pushed in cyberspace, it would be next-to-impossible to use a key-word mechanism to combat online commercials. In fact, some of the parental control
programs may actually be contributing to the problem of online advertising targeted at children. Cyber Patrol is a
case in point. On the one hand, Cyber Patrol has added some types of advertising (e.g., alcohol and tobacco) to
its list of indecent, sexual or otherwise offensive content sites. But Cyber Patrol is itself a vehicle for advertising
targeted at children. Whenever children try to view a blocked site, they are subjected instead to an ad for one of
Cyber Patrol's home edition sponsors, which is "hotlinked" directly to that advertiser's Web site.
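
A minimal sketch of such key-word category blocking, with an invented category list, makes the limitation plain: only ads whose text happens to mention a pre-designated word are caught, and everything else passes.

# Invented category list; a real product would ship much longer ones.
RESTRICTED = {"alcohol and tobacco": ("beer", "cigarette", "whiskey")}

def blocked(page_text, selected_categories):
    text = page_text.lower()
    return any(word in text
               for category in selected_categories
               for word in RESTRICTED[category])

print(blocked("Try CoolBrand beer today!", ["alcohol and tobacco"]))  # True
# A toy ad mentions no restricted word, so it sails through:
print(blocked("Collect all five SuperHero figures!",
              ["alcohol and tobacco"]))                               # False
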
Preventing children from disclosing personal information. Software also may help protect children from
invasions of privacy by marketers. Although developed for a much different purpose (to protect children from
potential child molesters), CYBERsitter allows parents to prevent children from disclosing their addresses or
phone numbers when they are online. It may be possible to develop software that will further limit the amount of
personal information online advertisers request from children, whether it is requested to join a club or be eligible
for a prize.
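
Such outbound screening can be sketched as follows; the patterns and block list are invented simplifications of what a parent might configure, not CYBERsitter's actual rules.

import re

# Simplified stand-ins for a family's configuration.
PHONE_PATTERN = re.compile(r"\b\d{3}[-.\s]?\d{4}\b")
BLOCKED_PHRASES = ["123 maple street"]  # the family's street address

def safe_to_send(message):
    text = message.lower()
    if PHONE_PATTERN.search(text):
        return False  # looks like a phone number
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

print(safe_to_send("My address is 123 Maple Street"))  # False
print(safe_to_send("Call me at 555-1234 tonight"))     # False
print(safe_to_send("See you at soccer practice!"))     # True
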
Expanding parents' choice of rating systems. Today none of the parental control programs are geared to
advertising (with the limited exception of Cyber Patrol), and none are expected to focus on the problem of online
commercial manipulation of children in the near future. However, a new set of online protocols has been
developed that may enable parents to select from a variety of rating systems, and use whichever combination is
best suited to their children.
Blocking access to ad banners. So far only one program, WebFilter, has been created for the sole purpose of
blocking ads online. It illustrates the limits of technological solutions to the problem of online children's
advertising.
Unlike parental control software, which has other goals, WebFilter blocks access to ad banners on the Web.
Developed by Axel Boldt, a math graduate student at the University of California, Santa Barbara, WebFilter is
freeware that uses a library of filter scripts to prevent ad banners from being seen. Boldt maintains a "Black List
of Internet Advertisers," and is promoting a "No-Ads" icon that serves as a seal of approval for sites completely
free of advertising (http://emile.math.ucsb.edu:8000/~boldt/NoShit).
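
In outline, a filter of this kind rewrites each page before display, deleting images whose addresses match known ad-server patterns. The sketch below uses invented patterns and is an illustration of the approach, not WebFilter's actual filter scripts.

import re

# Invented ad-server patterns; real filter scripts maintain long lists.
AD_PATTERNS = [re.compile(p) for p in (r"/ads?/", r"banner", r"sponsor")]
IMG_TAG = re.compile(r'<img[^>]*src="([^"]+)"[^>]*>', re.IGNORECASE)

def strip_banners(html):
    def keep_or_drop(match):
        src = match.group(1)
        if any(pattern.search(src) for pattern in AD_PATTERNS):
            return ""  # delete the banner image
        return match.group(0)  # keep ordinary images
    return IMG_TAG.sub(keep_or_drop, html)

page = ('<img src="/images/logo.gif">'
        '<img src="http://adserver.example.com/banner123.gif">')
print(strip_banners(page))  # only the logo survives
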
WebFilter is powerless against banner-free corporate Web sites where commercial messages are merged with
content. Boldt warns that without some advances in artificial intelligence programming, WebFilter will be
ineffective against the upcoming melange of interactive advertising using animation, audio clips, and video. Some
technological breakthroughs (such as Sun Microsystems' Java programming language, which will make the Web
more interactive) raise as many problems as they solve, Boldt maintains.
While the blocking approach has its merits when applied to potentially objectionable material, this same
approach is fundamentally flawed when applied to children's personal information. 37 Objectionable content is
discrete and can be identified by each individual according to his or her values. A person can point to certain
content areas on the GII, declaring "the pictures on this Web site and the topic of that chat room are something
I don't want my daughter to see." That same daughter's personal information, on the other hand, is much more
malleable, changing from one context to the next. For example, a father certainly may wish to prevent his son
from giving his name and street address to an online stranger or to an online salesperson, but he may want his
son to give the same information to his baseball coach and this week's carpool driver.
Arguably, the father could override the blocking software in the last two cases and permit his son to provide the
necessary information, allowing the coach and carpool driver to use e-mail to communicate and organize. But
what about all of the other instances when the boy has legitimate and desirable reasons to share his name and
street address? What about all those instances when his son wants to sign his name to an e-mail message
sent to a keypal? Or telnets to the local library to research a class project and is asked to supply his name and
library card number? Or wants to compete in an online soccer tournament sponsored by his church youth group,
which needs his name in order to place him on a team?
There are countless online situations in which the boy would want to use his name. By installing
blocking technologies onto the family computer, the father not only thwarts nefarious requests for his son's
information, but the legitimate ones as well. If children's privacy is to be protected, then it is not enough to allow
all kinds of information solicitation to take place with the excuse that such requests can be blocked. This would be
analogous to decriminalizing all criminal acts and telling citizens they can buy a gun and fend for themselves:
the result is ineffectual vigilantism. It is unreasonable to expect that parents must block all requests for
information--the noncommercial ones included--just so they can prevent their children from being bombarded
with commercial solicitations for personal data.
Only when parents and children are armed with adequate explanations of collection practices can they
determine when to release information. Blocking software alone is not sufficient, as it does not create the
policies necessary to ensure that marketers fully and effectively disclose their collection and tracking policies.
As the Center for Democracy & Technology (CDT) explained: "To have privacy in the Digital Age one must be
able to both enjoy solitude and to make decisions about what, if any, personal information to divulge, to whom
and for what purpose. In the Digital Age technology can be harnessed to advance privacy by empowering
individuals to control the flow of information on a case by case, setting by setting basis, by expressing his or her
privacy desire." 38 Full and effective disclosure affords young computer users and their parents the opportunity to
make decisions about the release of information on this much-needed case-by-case basis.
The strength of this new medium is its ability to communicate, educate and inform. Overly broad technological
"fixes" miss the goal of providing children with access to a multitude of ideas and interactive experiences. As
Jon Katz elucidates in his recent article "The Rights of Kids in the Digital Age": "Some of these programs have
thousands of potentially forbidden categories, going far beyond sex and violence. Once applied, censoring and
restrictions inevitably will spread into other areas that adults want to place off-limits: political topics that differ
from their own values, music and movie forums that don't conform to their adult tastes, online friends that don't
meet their approval, Darwinian theory." 39
CME/CFA wholeheartedly agree with CDT's "strong interest in developing an environment that engenders trust and confidence" 40 and believe that the surest way to accomplish this goal in the commercial arena is to educate consumers through effective disclosure and to implement fair information practices.

A Ratings System is an Inadequate Way to Safeguard Children's Privacy


A ratings system, principally the Platform for Internet Content Selection (PICS), is another technological
approach that has been suggested as a viable solution to privacy concerns. Once implemented, PICS is
intended to be "a viewpoint-neutral technology platform that will empower organizations and individuals to
categorize and selectively access information according to their own needs." 41 PICS was developed by the MIT-based World Wide Web Consortium, 42 whose membership includes Netscape, Microsoft, SafeSurf, and a host of other major
online and computer firms. DMA and others contend that PICS can be used to label Web sites according to the
privacy preferences of individual users. CDT's statement presents three scenarios demonstrating how the PICS
system works in an attempt to further the notion that PICS will help safeguard privacy.
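
In outline, PICS-style selection works by comparing a site's label against limits a parent has set. The following sketch uses an invented rating vocabulary and invented labels; actual PICS labels use their own textual format, but the selection logic is of this general shape.

# Invented labels and rating vocabulary (0 = none, higher = more).
labels = {
    "http://fun.example.com/clubhouse": {"ads": 3, "data-collection": 2},
    "http://museum.example.org/dinos":  {"ads": 0, "data-collection": 0},
}
parental_limits = {"ads": 1, "data-collection": 0}

def admissible(url):
    label = labels.get(url)
    if label is None:
        return False  # this household blocks unrated sites
    return all(label.get(category, 0) <= limit
               for category, limit in parental_limits.items())

print(admissible("http://museum.example.org/dinos"))   # True
print(admissible("http://fun.example.com/clubhouse"))  # False
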
While CME/CFA acknowledge the potential benefit of a ratings system--parents will have a shorthand way of
identifying appropriate content for their children--CME/CFA also are fully cognizant of the inadequacies of the
PICS system. Most importantly, the PICS system cannot be applied to e-mail, chat rooms, news groups or
listservs. This is a grave omission given children's proclivity for these applications within the GII. Basic
communication--exchanging e-mail letters, posting messages to bulletin boards, comparing thoughts about
shared interests--is the biggest attraction that computers have for children. 43 Apart from the many
implementation problems that PICS must overcome, this rating system should not be used as a substitute for a
comprehensive set of privacy guidelines.
But while PICS will enable public interest groups to develop and distribute their own rating systems to parents,
such services will require substantial resources to establish and maintain. Organizations providing rating
systems will have the labor-intensive tasks of regularly evaluating all new sites and continually reevaluating
existing ones. Most groups will not have the resources to develop and market a workable rating system. 44
It is unlikely that a technological fix will be found to address the complex concerns raised by online advertising
to children. The seamless interweaving of content and advertising online makes it very difficult to determine
where ads begin and end. The resources that advertisers can devote to developing new technologies for
marketing to children dwarf those available to public interest organizations concerned with protecting children.
So far remarkably little effort has been made to develop software that will shield children from commercial
manipulation.

PROPOSED GUIDELINES TO PROTECT CHILDREN ONLINE


To address online violations of children's privacy, CME/CFA propose swift adoption of guidelines pertaining to
the collection and tracking of information from children for commercial marketing purposes. 45 The guidelines,
described below, are intended to protect children from deceptive and unfair practices on the Global Information
Infrastructure and in other interactive media.
Commercial marketing purposes46 include, but are not limited to, practices that:
Promote, sell or deliver goods and services through direct sales pitches, brand awareness-building campaigns,
and other similar marketing strategies;
Perform market research;
Foster the promotion, sale or delivery of goods and services through the sale, rental, compilation, or exchange of
lists; or
Add individual children, members of their families, other household members, and other persons the child
knows to lists, or delete them from such lists.
Commercial marketing is achieved through information collection and tracking. Information may be collected
through surveys, registration forms, questionnaires, chat rooms, clubs, contests, and other means. Information
may be tracked, analyzed or audited (hereinafter "tracked") by using navigational tools. These tools reveal
information such as pages visited, length of stay, images and information downloaded, referring URL, 47 and
content viewed.
These guidelines require all persons who collect or track information48 [hereinafter "information
collectors/trackers"] including, but not limited to Web masters, online content providers, Internet service
providers, Web measurement companies, and commercial online service providers to fully and effectively
disclose all of their collection and tracking practices. In addition, these guidelines prohibit the collection or
tracking of personally identifiable information from children unless valid parental consent is obtained.
These guidelines apply to the collection and tracking of both personally identifiable information and information
that is aggregate and anonymous.
Personally identifiable information includes:
any information that identifies individual children, their families, household members, or other individuals the
child knows, or that allows their identities to be determined. This information includes, but is not limited to, a child's
name, address, e-mail address, telephone number, and social security number;
other information such as physical description, psychological description, health, school, date of birth, family
income, and information regarding a child's likes, dislikes, habits, and opinions when used in conjunction with
identifying information as described above; or
any information collected from a child about that child's family, household members or other individuals the child
knows when used in conjunction with identifying information as described above.
Aggregate and anonymous information includes:
only data that provide demographic characterizations and information that cannot be traced to an individual child,
their families, household members or other individuals the child knows; or
any information about the likes, dislikes, habits, opinions and preferences that may not be traced to a particular
child, family, household members, or other individuals the child knows.
These guidelines are intended to balance children's interest in receiving diverse information with the interest in
protecting children from deceptive and unfair commercial marketing practices that take advantage of their unique
vulnerabilities in these new and evolving media.

General Requirements
First, anyone who uses the Global Information Infrastructure or other interactive media to collect or track
information from children under the age of 16 for commercial marketing purposes must provide full and effective
disclosure. Second, all information collectors/trackers must obtain valid parental consent before collecting
and/or tracking personally identifiable information from children. Third, where children and parents have chosen
to disclose personal information, information collectors/trackers must provide a procedure through which
information that has changed over time may be corrected. Finally, where a parent has consented to the release
of the child's personal information, but later decides not to permit its use, information collectors/trackers must
provide a process for preventing the further use of information previously disclosed.
Disclosure: In order to be effective, disclosure notices must be sufficiently clear and prominent to prevent
deception. All disclosures must comply with the requirements listed below:
Information collectors/trackers must disclose all information necessary to permit a child/parent to
make an informed decision, including, but not limited to:
A description of what information is being collected or tracked. Collectors/ trackers must disclose
whether the information they are collecting is personally identifiable information, such as an individual's name,
street address, phone number, social security number; or aggregate and anonymous information, such as
product preferences and hobbies. Collectors/trackers must also disclose what information they are
collecting/tracking through navigational software including information pertaining to sites visited, length of stay,
and the images downloaded.
An explanation of the mechanism(s) through which the information is collected and/or tracked.
Collectors/trackers must disclose the means through which they collect the information. For example,
collectors/trackers must disclose whether information is collected through on-screen surveys, questionnaires,
contests, and sweepstakes entries, or through server-based navigational data tracking and browser files such as
Netscape's cookies.
A summary of how the information will be used. Collectors/trackers must disclose whether the information
will be used to promote products through e-mail messages, to refine internal marketing strategies, or for re-sale
to interested third parties. The collector/tracker must also disclose whether the information will be used to re-contact a child online or through other media including, but not limited to, fax, phone, and postal mail. This must
be disclosed if the collector/tracker plans to re-contact the child itself or if the collector/tracker plans to rent, sell
or otherwise release the information to a third party who may re-contact the child.
Identification of who is collecting/tracking the information, their relationship to the information and
how they can be contacted.
Identification of all other persons that will have access to the information and their commercial
interest in the information. No persons other than those originally identified in the disclosure notice may be
provided with this information until a new disclosure notice is issued and parental consent is again obtained.
Notice that valid parental consent must be obtained prior to the collection of personally identifiable
information.
Notice of the procedure to correct previously collected personally identifiable information.
Notice of the process for preventing further use of personally identifiable information previously
collected.
Effective disclosure also requires that the disclosure notice described above be easy to understand,
compelling, and prominently displayed from the perspective of a child.
Language must be appropriate for children. For example, the same level of vocabulary that a company
uses to describe a game or other activity on its Web site must be used when disclosing information practices.
The language must be easily read, and if possible, also audible to children in the online medium.
Print which by its small size, placement, or other visual characteristic is likely to substantially affect the
legibility or clarity of the offer or exceptions to it should not be used. 49 A site that places its disclosure notice
behind an unidentified hyperlink at the bottom of the page also is not in compliance with this requirement.
The disclosure must directly precede and be on the same page as the collection or tracking practice.
Contrary claims will undercut the effectiveness of the disclosure and must not be present. Many
children are incapable of wading through facetious or mixed messages. For example, if a young visitor to a site
is told to "ignore the fine print and click here to get to the good stuff" or a disclosure notice is marked "boring
adult area" the child might be disinclined to continue reading.
Parental Consent: Information collectors/ trackers must obtain valid parental consent whenever personally
identifiable information is collected from children. Consent must be obtained at points of promotion, points of
transaction, and all other points where information is collected or tracked. Consent is only valid for the
information practices described in the disclosure. Parental consent must be received from a person authorized
to consent on the child's behalf, such as a parent or legal guardian.
To obtain valid parental consent, the following steps must be followed:
It must be made clear to the child that parental consent must be obtained before proceeding and the
parent must receive complete disclosure as described above;
Access to those areas of the site where information is collected or tracked must be conditioned on
receipt of valid parental consent; and
The burden is on the collector/tracker to obtain valid parental consent, which may be obtained:
in writing, received via postal mail; or
through effective electronic mechanism(s) once developed. 50
Correction Procedures: Information collectors/trackers must provide a procedure through which personally
identifiable information previously collected may be corrected if that information has changed since its original
collection.
Preventing the Use of Information: Information collectors/trackers must provide a process for preventing
further use of personally identifiable information previously collected.
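
A minimal sketch of the gate these requirements describe appears below: collection is refused unless disclosure has been shown and parental consent verified, and the correction and revocation procedures operate on whatever has been stored. All names are hypothetical, and actually verifying consent electronically remains the unsolved step noted in endnote 50.

records = {}      # child_id -> personally identifiable record
consents = set()  # child_ids whose parent's consent has been verified

def collect(child_id, info, disclosure_shown, parental_consent_verified):
    if not disclosure_shown:
        raise PermissionError("full disclosure must precede collection")
    if not parental_consent_verified:
        raise PermissionError("valid parental consent is required")
    consents.add(child_id)
    records[child_id] = dict(info)

def correct(child_id, field, new_value):
    records[child_id][field] = new_value  # the correction procedure

def revoke(child_id):
    consents.discard(child_id)  # parent withdraws consent
    records.pop(child_id, None)  # no further use permitted

collect("c1", {"name": "Pat", "age": 10},
        disclosure_shown=True, parental_consent_verified=True)
correct("c1", "age", 11)
revoke("c1")
print(records)  # {} -- nothing remains for further use
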

CONCLUSION
Given the unique concerns surrounding children and online technologies, safeguards prohibiting the collection of
personally identifiable information from children should be adopted. Online advertisers are developing invasive
information collection and tracking practices, which deceive and treat children unfairly. Industry self-regulation
will not adequately protect young people, nor will blocking technology and selection software. Not only should
disclosure be required when a company collects data from children online, but valid parental consent for the
release of personal information should also be obtained. A clear set of enforceable privacy guidelines will help assure that
children fully benefit from the wealth of resources available online.
__________________________________
ENDNOTES
1 "Children using Internet/online services, 1995-2002." Digital Kids, February, 1997.
2 Randolph, E. AOL: Kids only channel, Digital Kids Report, 24, January 1997.
3 P. Friedman, Vice President and General Manager, eWorld, referred to the Internet as a "participatory
medium."
4 According to Nielsen Media Research, CommerceNet, and WebTrack, three companies that regularly conduct
Internet demographic surveys, time spent online is starting to cut into time spent watching television for both
children and adults. "What Did Kids Do Before the Internet, Grandpa?" Marketing Tools, March/April 1996. J.
Dibbell, "Nielsen Rates the Net," Time, November 13, 1995. "The Net Vs. The Tube," Variety, November 13,
1995.
5 "Big Allowance," Interactive Mark eting News, November 10, 1995. "Teens Spend Money--Their Family's and
Their Own," Youth Mark ets ALERT, March 1996.
6 E. Gruen, "Defining the Digital Consumer IV Agenda: Digital Kids Pre-Conference Seminar," New York, NY,
October 25, 1995.
7 "Children Get Growing Online Attention," Interactive Mark eting News, November 10, 1995.
8 E. Gruen, "Defining the Digital Consumer IV Agenda: Digital Kids Pre-Conference Seminar," New York, NY,
October 25, 1995.
9 "Children Get Growing Online Attention," Interactive Mark eting News, November 10, 1995.
10 E. Gruen, "Defining the Digital Consumer IV Agenda: Digital Kids Pre-Conference Seminar," New York, NY,
October 25, 1995.
11 M. Perkins, "Mining the Internet," The Red Herring, March 1996.
12 The KidsCom Web site describes itself as a "communications playground just for kids ages 4 to 15."
(http://www.kidscom.com/)
13 The survey was on the Nickelodeon area of America Online, January 1996. Similarly, explaining the
techniques used by Kid2Kid, a self-described "kids marketing research, design and consultation resource,"
Whiton S. Paine and Dr. Mitch Meyers explained how their company uses children's comfort with computers to
elicit data from young users. "TECHNO Kids Conference," Chicago, IL, September 13, 1995.
14 February 29, 1996 contents of Web site.
15 November 29, 1996 contents of Web site.
16 November 10, 1995 contents of Web site.
17 The survey on the Splash Kids Web site and content area on the Microsoft Network uses this approach.
(http://www.splash.com/)
18 Among the log analysis tools resources listed in a recent Marketing Tools article are: Yahoo!
(http://www.yahoo.com/Computers_and_Internet/Internet/World_Wide_Web/HTTP/Servers/Log_Analysis_Tools/);
BrowserCounter 1.0 (http://www.netimages.com/~snowhare/utilities/browsercounter.html);
wwwstat (http://www.ics.uci.edu/stats/gwstat.html); AccessWatch (http://www.eg.bucknell.edu/~dmaher/accesswatch/);
Open Market WebReporter V1.0 (http://www.openmarket.com/products/Webreport.html); Web Audit
(http://www.wishing.com/Webaudit). In addition to I/PRO, the measurement and tracking firms listed are: The
Delahaye Group (http://www.delahaye.com/); WebTrack (http://www.webtrack.com/); Audit Bureau of
Circulations (http://www.accessabl.com/); Nielsen Media Research (http://www.nielsenmedia.com/). K. Bayne,
"Is Your Site a Success?" Marketing Tools, March/April 1996.
19 Netscape 2.0 includes cookies as one of its features. The company has indicated that upcoming versions
may allow users the right of refusal. In the meantime, Netscape has been asked by the Internet Engineering
Task Force to propose cookies as a standard for the Internet. J. Rigdon, "Internet Users Say They'd Rather Not
Share Their 'Cookies,'" The Wall Street Journal, February, 14, 1996. Netscape: (http://home.netscape.com/)
20 K. Murphy, "Net.Genesis Tool to Track Site Usage," WebWeek, February 1996. Internet Profiles
Corporation's (I/PRO): (http://www.ipro.com/)
21 March 8, 1996 contents of Web site. Company information about SpectraCom is provided in the "Parents and
Teachers Place" on the Web site, but it is hidden behind an area that gives tips on how to use computers and
the Internet. Only if one clicks on the "hyperlink" to the SpectraCom Web site will the true intent of the KidsCom
site be revealed.
22 D. Peppers and M. Rogers, The One to One Future: Building Relationships One Customer at a Time (New York:
Currency Doubleday, 1993).
23 Hotwired's promotion was in conjunction with a four-week sponsorship with Argos P.L.C.
24 S. Elliott, "How to Focus A Sales Pitch in Cyberspace," New York Times, March 4, 1996.
25 M. Perkins, "Mining the Internet," The Red Herring, March 1996.
26 The Klaas Foundation for Children's Web site: (http://www.klaaskids.Inter.net/) "Data Firms Sell Personal
Information on Nation's Children," Business Wire, March 14, 1996.
27 See generally DMA Commentary on FTC Workshop on Privacy and Cyberspace. See also CME/CFA Post
Hearing Comments, June 19, 1996. (Details the weaknesses of the proposed DMA guidelines.)
28 Personally identifiable information includes any information that allows the individual child to be identified.
Aggregate and anonymous information includes all data that cannot be traced to an individual child.
29 How children use computers was a topic that was discussed at the "TECHNO Kids" and "Digital Kids"
conferences. Representatives from the industry presented their market research and focus group findings.
"TECHNO Kids Conference," Chicago, IL, September 13, 1995.


30 D. Britt, "Defining the Digital Consumer IV Agenda: Digital Kids Pre-Conference Seminar," New York, NY,
October 25, 1995.
31 http://www.surfwatch.com/
32 http://www.microsys.com/CYBER/
33 http://www.safesurf.com/
34 http://www.solidoak.com/
35 A recent article in The Washington Post reiterates that exploitative advertising and privacy invasions still are
not the focus of selection software. Linton Weeks, "A Safety Net for Children, New Software Blocks What Kids
Can Access," February 27, 1997.
36 CME/CFA are concerned that a browser-based privacy protection system may lead to more exploitative
marketing behavior. It is possible that the system will include a negotiation process, thereby allowing companies
the opportunity to bribe children with incentives to change their privacy preferences.
37 Ironically, in order to obtain a "free" copy of one of the software packages, users must provide information to
the company. As the Communications Decency Act decision recently noted, parents can download a seven day
demo of the full version of Cyber Patrol from the Microsystems Internet World Wide Web Server. At the end of
the seven day trial period, users are offered the opportunity to purchase the complete version of Cyber Patrol or
provide Microsystems some basic demographic information in exchange for unlimited use of the Home Edition.
The demographic information is used for "marketing and research purposes." (paragraph 60, CDA.)
38 CDT filing, "The Empowered User: Implementing Privacy Policy in the Digital Age," submitted to the FTC,
p.11.
39 J. Katz, Wired, July 1996.
40 CDT, p.15.
41 According to MIT's Albert Vezza, a spokesman for the PICS working group.
42 http://www.w3.org/hypertext/WWW/PICS/
43 E-mail is often referred to as the "killer app" (application) for children. How children use computers was a topic
that was discussed at the "TECHNO Kids" and "Digital Kids" conferences. Representatives from the industry
presented their market research and focus group findings. "TECHNO Kids Conference," Chicago, IL, September
13, 1995.
44 NewView, a blocking software company, recently announced that it will be using advertising as one of 15
labeling criteria. According to NewView, "iscreen" can be used to block pages with ad banners and ordering
information.
45 See CME/CFA's "Guidelines for the Collection and Tracking of Information from Children on the Global
Information Infrastructure and in Interactive Media," submitted to the Federal Trade Commission on June 5,
1996, for detailed examples illustrating how these proposed guidelines work in practice.
46 The definition of commercial marketing purposes generally adopts the Direct Marketing Association's
definition of "direct marketing purposes" in its Guidelines for Personal Information Protection (Aug. 1995).
47 Some sites have the ability to know which site a computer user has just visited. The previous site is called
the "referring URL."
http://www.ntia.doc.gov/print/page/chapter-1-theory-markets-and-privacy

85/86

12/8/2014

Chapter 1: Theory of Markets and Privacy

48 Persons herein includes individuals and incorporated groups.


49 See Direct Marketing Association's Guidelines for Ethical Business Practices, Article #3 (Apr. 1995).
50 Future electronic mechanisms might include digital signature systems or personal privacy software
applications. However, to ensure valid parental consent is obtained, digital signature systems would have to
accurately identify the computer user to assure that only a qualified parent or guardian consented to the
disclosure and use of the child's information. Similarly, to constitute valid consent, personal privacy software
applications must have a default of "no release of personal information." This "opt-in" approach would require
parents to reconfigure the default if they want their children's personal information to be released.

1. Associate Professor, Ohio State University College of Law. My thanks to a number of colleagues for helpful
conversations, and especially to Ted Janger for comments on a previous draft. Phone: (614) 292-2547; e-mail:
swire.1@osu.edu; Web: www.osu.edu/units/law/swire.htm.
2. Assistance by Thomas Aust, Bruce Olcott and Jerome Wagner is gratefully acknowledged.
3. Research supported in part by NSF Grant SES-93-20481. Thanks to Pam Samuelson for providing useful
comments on an earlier draft. Author's homepage and contact information available at
http://www.sims.berkeley.edu/hal.
4. The Center for Media Education (CME) is a nonprofit research and advocacy organization founded in 1991 to
educate the public and policymakers about critical media policy issues. CME's Action for Children in
Cyberspace project is designed to help ensure that children's needs are met in the new media environment. The
Consumer Federation of America is the nation's largest consumer advocacy organization, composed of over 250
state and local groups with some 50 million members. Founded in 1968, CFA's mission is to represent the
consumer interest before Congress, in courts, and at federal agencies.

Links:
[1] http://www.ntia.doc.gov/reports/privacy/selfreg1.htm#N_2_
[2] mailto:enoam@research.gsb.columbia.edu
[3] http://www.ctr.columbia.edu/citi/
[4] http://www.ntia.doc.gov/reports/privacy/selfreg1.htm#N_3_
[5] mailto:klaudon@stern.nyu.edu
[6] mailto:culnanm@gunet.georgetown.edu
[7] http://www.cdt.org/
[8] http://www.ntia.doc.gov/reports/privacy/selfreg1.htm#N_4_
[9] http://www.spectracom.com/description.html
