By
Vaibhav Kumar
Arun Unni
Tasneen Padiath
Anshu Anand
Rupali Bhandari
TABLE OF CONTENTS
INTRODUCTION
National backbone
International bandwidth
Economic Bottlenecks
PRICING BASICS
Background on Network Pricing
Pricing Alternatives
Flat Pricing Scheme
Usage based
Priority
Tiered usage
Congestion pricing
Two-part tariff
CONGESTION
CASE STUDIES
Conclusion
In the past decade, we have all witnessed the Internet's rapid expansion, which has
outpaced the growth of any other industry. We have entered an era dominated by
network technology. Advances in networking are bringing about the convergence of
computing and communication technologies. This convergence, spanning television,
telephony, and computers, has in turn extended the reach of the Internet's
innovations. Digital video, audio, and
interactive multimedia are growing in popularity and increasing the demand for
Internet bandwidth. However, there has been no convergence on the economics of
the Internet. While advanced information and communication technologies make the
network work, economic issues must also be addressed to sustain the growth cited
above and expand the scope of the network.
In the 1960s, in response to the nuclear threat during the Cold War, the Advanced
Research Projects Agency (ARPA) of the USA engaged in a project to build a
reliable communication network. The network deployed as a result of this research,
ARPANet, was based on packet-switching protocols, which could dynamically reroute
messages in such a way that messages could be delivered even if parts of the
network were destroyed. ARPANet demonstrated the advantages of packet-
switching protocols, and it facilitated communication among the research
institutes involved in the project. As more universities were connected to the network,
ARPANet grew quickly and soon spanned the United States. In the mid-1970s the
existing protocols were replaced by the TCP/IP protocols, a transition facilitated
by their integration into Berkeley UNIX.
In the 1980s the American National Science Foundation (NSF) created several
supercomputer centers around the country. The NSF also deployed a high-speed
network based on Internet protocols to provide universities with remote access to the
supercomputer centers. Since connection to the NSFNet was not restricted to
universities with Department of Defense (DoD) contracts, the network grew
dramatically as all kinds of non-profit entities, as well as universities and research
groups, connected to it. A nonprofit Michigan-based consortium, the Michigan
Educational Research and Industrial Triad (MERIT), managed NSFNet [1]. Since Internet
access was subsidized by the NSF and by the non-profit entities connected to the
network, economic issues such as accounting, pricing and settlements were for the
most part ignored.

[1] http://www-inst.eecs.berkeley.edu/~eecsba1/s97/reports/eecsba1b/Final/final.html
As NSFNet grew and its potential became obvious, many for-profit entities wanted
access to the Internet. Since the NSF did not want to subsidize Internet access for
these private groups, it gave control of NSFNet to the nonprofit corporation Advanced
Networks and Services (ANS). ANS was created from resources provided by MERIT,
MCI and IBM, and was free to sell Internet access to all users, both nonprofit and for-
profit. Meanwhile, some for-profit backbone providers such as PSI and UUNET
started selling Internet interconnection services. As the Internet became more
commercialized, people began studying and experimenting with Internet
economics.
In 1995, ANSNet was sold to America Online, and a new commercial Internet
replaced the NSFNet-based Internet. The new Internet consists of a series of
network backbones interconnected at Network Access Points (NAPs). The NSF is
phasing out its subsidies of the backbones but still subsidizes four NAPs: San
Francisco (PacBell), Chicago (Ameritech), Washington DC (MFS) and New Jersey
(Sprint). The popularization of the Internet and the perception of an imminent
convergence of voice, video and data networks provided impetus for the
telecommunications deregulation of 1996. At the same time, it became even more
obvious that such a network convergence would require a coherent system of
settlements and pricing. With the different networks able to provide the same
(or similar) services, the old telephone and cable pricing structures may become
inadequate, and new structures must be created to replace the old ones [2].

[2] http://www-inst.eecs.berkeley.edu/~eecsba1/s97/reports/eecsba1b/Final/final.html
Users are defined as the individuals and organizations that consume information and
services on the network, and Services as the companies that provide information and
services via the network. Network Access Providers (ISPs) are defined as the
companies that provide network access to Users and Services so that they can
communicate. Finally, Infrastructure is defined as the physical network infrastructure
and its protocols that allow information exchange in the network.
Therefore the four main components in the network services market are the Users,
Network Access Providers, Infrastructure, and Services.
[3] Electronic Commerce -- An Introduction, May 1996.
Another important set of interactions affecting the Network Access Providers is that
between them and the Infrastructure providers.
1. Network Access Providers and Users
ISPs usually suffer from diseconomies of scale when dealing with users. Customer
support, accounting, billing and hardware maintenance all increase disproportionately
with the number of users. Furthermore, anything that inconveniences the user will not
be tolerated [4]. Pricing, therefore, must recover the fixed and growing marginal costs
without inconveniencing the users. A pricing scheme should also provide
incentives for both the Network Access Provider and the Users to act in a socially
responsible way.
Costs that ISPs incur are:
Hardware and software: An ISP must recover the costs of hardware, software and
customer support. The hardware and software costs will vary depending upon the
type of access the ISP supports (which in turn depends largely upon the
customers' preferences). Customers can choose between dialup or leased line
access. Dialup service requires that the ISP purchase a terminal server, modem
pool and dial-up lines. The software support costs of providing dialup service are
negligible. Occasionally, the hardware must be upgraded. These upgrade costs
tend to be incurred in large chunks rather than incrementally over time. ISPs providing
leased line access are required to provide a router at either end of the leased line
(one at the ISP site and one at the customer site), but terminal servers and
modems are not necessary [5]. The software required for leased line service is
more complicated than that required for dialup service, as configuration in the
former case may take considerably more time.
Customer support: Customer support costs can be categorised into three support
types that occur over the life of the ISP/customer relationship: costs of acquiring a
customer, costs for supporting an ongoing customer, and costs of terminating a
customer relationship.
[4] Hal Varian, "Economic Issues Facing the Internet", June 1996.
[5] Padmanabhan Srinagesh, "Internet Cost Structures and Interconnection Agreements", presented at the MIT Workshop on Internet Economics, March 1995.
Services resemble the Users described above in that they also need to purchase
Internet access. Hence, the pricing schemes for Users can also be targeted at
Services in their capacity as network users. The "advertising alternative" to pricing
mentioned above would not be applicable, however, since the Services are the
targets of that cost recovery model rather than its beneficiaries.
Like most developing countries, India faces the problem of being able to
provide only limited access to Internet services. The main cause is the limited
infrastructure currently available in terms of telephone access. While phenomenal
growth is expected, ISPs may face challenges in getting enough telephone lines in
the four big Indian cities - Bombay, Delhi, Bangalore and Madras - from where up to
70 per cent of new ISP connection demand is expected to come.
India has fewer than 25 million telephones and 0.7 million Internet connections for
1,000 million people, while it needs 150 to 200 million telecom and Internet
connections to meet the expected demand. With the new ISP policy not permitting
last-mile connectivity for dial-up access, this requirement will need to be met by the
operators of basic telecommunications services. Experience over the last three years
has shown the availability and quality of access lines to be the most significant
limiting factor for the growth of the Internet in India. It remains to be seen how this
will be overcome under the new liberalised scenario by merely increasing the number
of ISPs.
The forecast of online access (source: Financial Times) shows that dial-up access
will remain by far the most common method of access, outnumbering cable or ISDN
access by a factor of over 10:1. Under these circumstances, the policy announcement
of opening up Internet access via cable TV is not believed to provide a radical
solution to the issue of access to Internet services. A concerted and planned effort
will be required to meet this demand over a short time frame of less than six months.
National backbone [6]
Another major issue in the provision of Internet services is that of providing a
national backbone for India-wide connectivity as well as inter-connection
between the multiple ISPs. Under the ISP policy dispensation, statewide
access has been provided under the "17222" dialling scheme, which
connects a subscriber to the nearest ISP node. With more than 800 cities in
India being available on STD/ISD and having a high potential for growth of
Internet services, the above type of access is likely to load the Indian trunk
network, which is designed for "high tariff, low holding time" traffic, with
"low paying, high holding time" traffic. This will undoubtedly put
considerable load on already scarce resources and is not a long-term
solution. Moreover, merely providing access to the nearest ISP node does not
solve the issue of connecting to the more than 50 ISPs that may exist within a
State. There is, thus, an urgent need to provide a common access backbone to
which customers from any part of the State can dial to access any ISP Internet
node. The need is, therefore, to isolate the access service from the ISP service
and the content services. A national backbone can be provided by private
operators in addition to those provided by DOT and VSNL, and this should
form an indispensable part of the National Information Infrastructure. No ISP
policy would be complete without defining a national infrastructure for
India-wide Internet access by multiple ISPs.
International bandwidth
With the operation of six gateways by VSNL, and in addition the use of optical
fibre submarine cables, the bandwidth already provided by VSNL is over 80
Mbps and is adequate for meeting India's requirements at the current level
of subscribers. India is also well connected with optical fibre cable systems,
with FLAG (5 Gbps per fibre), SEA-ME-WE-3 (10 Gbps per fibre) and other
cables in the pipeline. Complemented by multiple satellite connectivity, no
shortage is envisaged, and the requirements of all ISPs can be fully met from
"day one" so far as international Internet connectivity is concerned. VSNL has
already prepared itself for this scenario, whereby its leased lines to the ISPs
can be increased to accommodate all the new ISPs requiring connectivity via
VSNL. It may be mentioned that VSNL today has over 350 leased line circuits
operating for the Internet alone and is, thus, well versed in this business.
[6] http://www.bnetindia.com
Internet traffic worldwide is growing explosively, overtaking the telephone network
bandwidth by a significant margin. On the Internet backbone, traffic is today doubling
every 100 days, and major backbones at peak times are suffering packet losses that
can go up to 40%. The question is: how will such large bandwidths be provided for
India?
Fortunately, VSNL's advanced planning in cable systems comes to the rescue on this
urgent and pressing issue. VSNL, through its acquisition of capacity in FLAG and
SEA-ME-WE-3, can ensure that the country's requirements are met for the next five
years, with over 30 Gbps of capacity being available. VSNL has already signed an
MOU for Project Oxygen, which will be a 300 Gbps system operational in the year
2000-01. The increasing number of ISPs in India will drive up bandwidth demand.
This, coupled with larger bandwidth per user through the use of bandwidth-hungry
applications, will make it possible to order bandwidth in bulk. This will then provide
economies of scale and make bandwidth available at lower and lower costs.
VSNL is already negotiating for much larger capacities on the optical fibre systems
for the Internet. India-US connectivity for a DS3 (45 Mbps capacity) can today be had
at US$ 150,000 per month. This works out to roughly US$ 7,000 per month per
2 Mbps, as against an average figure of US$ 21,000 per month payable for satellite
circuits to the USA. It is, thus, evident that even at relatively low levels of bandwidth,
e.g., 45 Mbps, the cable systems provide a price advantage of 3:1 over satellite
circuits. VSNL believes that this will be a significant factor in gradually lowering the
cost of international Internet connectivity. As the bandwidth requirements increase
beyond DS3 to ATM levels, e.g., 155 Mbps, the cost could come down further by a
factor of 2 or 3, thus reaching levels at which such bandwidths are available in
developed countries.
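To make the arithmetic above explicit, the following sketch (Python) computes the per-2-Mbps cost of the DS3 cable circuit from the dollar figures quoted in the text; the normalisation to a 2 Mbps slice and the assumption that all prices are monthly are ours, for illustration only:

```python
# Cost comparison of cable vs. satellite circuits, using the figures
# quoted in the text (assumed here to be monthly prices).

DS3_CAPACITY_MBPS = 45             # one DS3 circuit
DS3_COST_USD = 150_000             # India-US cable, per month
SATELLITE_COST_PER_2MBPS = 21_000  # average satellite figure, per month

# Normalise the cable price to a 2 Mbps (E1-sized) slice.
cable_cost_per_2mbps = DS3_COST_USD / DS3_CAPACITY_MBPS * 2

print(f"Cable cost per 2 Mbps: US$ {cable_cost_per_2mbps:,.0f}/month")
print(f"Satellite per 2 Mbps:  US$ {SATELLITE_COST_PER_2MBPS:,}/month")
print(f"Price advantage:       {SATELLITE_COST_PER_2MBPS / cable_cost_per_2mbps:.1f} : 1")
# Roughly US$ 6,700 vs US$ 21,000 per month, i.e. about a 3:1 advantage.
```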
With the arena now clear for a deregulated and open playing field for ISPs, the
stage is set for the rapid growth of the Internet in India. However, this growth will
depend critically on how factors such as the availability of access lines and national
backbone connectivity are addressed. Given the wisdom that has gone into the
formulation of the new ISP policy, it is believed that these issues will be addressed
on priority and resolved, opening up a path for multi-fold growth in Internet services
in India.
Economic Bottlenecks [7]
The economic bottlenecks that limit access in developing countries relate to service
affordability. The average expenditure per month on communications in the USA is
$30, which almost 90% of households can afford.
Affordability data for Indian households show that even at the lower US price of $30
per month, only 1.6% of households could afford the service; this acts as a severe
constraint on improving access to these services in the country.
Also, in the USA, revenue of $360 per year per subscriber can justify a network cost
of $1,000 per line, and the service is affordable to almost 90% of households. There
is consequently little incentive to reduce costs: the focus of R&D is not on cost
reduction but on enhancing the basket of services and features while keeping costs
constant.
In developing countries, even at $125 per year of revenue, the service is affordable
to only the top 30% of households. Here the emphasis must therefore remain on
reducing network costs, given a market size of hundreds of millions.
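A rough back-of-the-envelope sketch (Python) shows why the two markets pull R&D in opposite directions; the revenue and line-cost figures are those quoted above, while the payback framing itself is an illustrative assumption:

```python
# Simple payback comparison using the figures quoted in the text.
# The payback-period framing is an illustrative assumption.

def payback_years(line_cost_usd: float, revenue_per_year_usd: float) -> float:
    """Years of subscriber revenue needed to recover the per-line cost."""
    return line_cost_usd / revenue_per_year_usd

print(f"USA:   {payback_years(1000, 360):.1f} years")   # ~2.8 years
print(f"India: {payback_years(1000, 125):.1f} years")   # ~8.0 years
# At $125/year of revenue, a $1,000 line takes about eight years to
# pay back, which is why cutting network cost per line dominates in India.
```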
[7] From a paper discussed at Commsphere 2000, IIT Madras, by Prof. Ashok Jhunjhunwala.
[Figure: Network Effect - % of households (cumulative) versus network cost per line]
The above graph shows the importance of improving service affordability. Access
would increase if affordability could be increased; as more households gain access,
the network cost per line comes down, and this virtuous circle continues. It is thus
imperative that service costs be brought down.
Given low affordability, even providing access may not be enough, as the user would
still have to pay call charges, which might be prohibitive. Some steps to rectify this
situation are being considered: for example, the government telecom service provider
and soon-to-be ISP Mahanagar Telephone Nigam Ltd. (MTNL) is considering waiving
local phone call charges for Internet users, or bundling calling charges into its own
Internet service.
Legal and regulatory issues
Legal and regulatory challenges still arise in areas like setting of access tariffs for
private ISPs, international gateways, Internet telephony, and opening up of the last-
mile telecom market.
The most revolutionary aspect of India's Internet policy is letting ISPs provide the
last-mile connection, and this could well be the source of litigation from basic service
licence holders who worry about voice over IP.
On 6th November 1998 a new ISP policy was unveiled. The policy permitted an
unlimited number of Internet players with no licence fees for the first five years, thus
setting the stage for a completely deregulated operating environment.
A deregulated environment requires the most disciplined set of regulations to
oversee growth and to ensure and protect the interests of the customers and the
country. It will be important to ensure that no anti-competitive practices are indulged
in by any of the operators, particularly those responsible for providing infrastructural
facilities.
Demand for Internet Access and Usage
The first issue that needs to be highlighted is the difference between demand for
Internet access and demand for usage. This is an important distinction since "the
important characteristic of Internet demand for access is that it is binary": an end user
either has access or he does not. By contrast, Internet usage refers to an individual's
utilisation of Internet resources once access has been obtained. The rate of
data/traffic transferred in a given period will be used as the measure of Internet
usage. Other methods of measuring usage could be the total amount of data
transferred or the hours of time connected to a service.
In the context of the Internet, network size could refer to the number of users
connected to the Internet or the number of WWW sites accessible by Internet users,
the choice depending on the particular problem one wishes to analyse. It is important
to remember, then, that in this analysis network size does not refer to the physical
capacity of the network. The model to be developed in this essay also assumes that
increases in network size do not result in changes in the physical capacity of the
network.
For all network sizes up to N*, the marginal benefit of increased size is positive. This
reflects the earlier explanation of larger network size creating more 'goods'.
Furthermore, the MSB curve is drawn above the MPB curve for all network sizes less
than N* due to the presence of the external benefits which existing network users
receive from new users joining the network [8].

[8] http://users.hunterlink.net.au/~ddhrg/econ/honours/demand3.html
Examining the shape of the marginal benefit curves, it can be seen that initially the
marginal benefit is positive and increasing, due to the increasing benefits from
expanding the network. Eventually, however, the marginal benefit reaches a
maximum, after which the marginal benefit of increased size begins to diminish. The
diminishing marginal benefit observed in Figure 1 may be due to a number of
influences. Although some activities may require a certain critical mass to exist,
increased size beyond this level may not contribute significant gains. Internet
shopping is a possible example: although a certain number of Internet users may be
necessary for on-line shopping to become viable, the gains once this critical point
has been reached may begin to fall.
Increased network size may also lead to higher levels of 'spamming' and other anti-
social behaviour, which may reduce the marginal benefit of increased size. Similarly,
increased network size can produce the Internet equivalent of highway traffic jams,
again acting to reduce the benefits of increased size. The problem of congestion is
significant enough that it will be examined in more detail later. Eventually the effects
of these negative factors may actually change the network externality from being a
positive to a negative.
[9] http://users.hunterlink.net.au/~ddhrg/econ/honours/demand3.html
At network size N+, the corresponding demand curve is DN+. At a price of P* the
quantity demanded is Q+ megabytes per hour. As the network size increases to N*,
the demand curve shifts outwards to the right and is represented in Figure 2 by the
demand curve DN*. At price P* a greater quantity, Q* megabytes per hour is
demanded. However as the network grows past N* and moves to N- , the marginal
benefit becomes negative due to the previously mentioned influences. The demand
curve thus shifts inwards to the left. At price P*, quantity Q- megabytes per hour is
demanded, with Q- < Q*. Demand for Internet usage is therefore maximised at the
network size where the marginal benefit of increased size is zero.
Pricing Basics [10]

First-degree price discrimination: Each consumer is charged according to his or her
individual willingness to pay, allowing the firm to extract the entire
surplus from each individual consumer. However, this scheme is usually very
difficult or impossible to implement, and sometimes its implementation may be
illegal.

[10] Michael L. Katz and Harvey S. Rosen, "Microeconomics", 2nd Edition, IRWIN, Inc.
Second-degree price discrimination: Consumers are divided into segments
based on some attribute that the consumers are induced to reveal. An example
of second-degree price discrimination in a network context would be the
versioning of a network service. For example, regular e-mail may be free, but
with no delivery time guarantees, while urgent e-mail may have an attached
fee, with certain delivery guarantees attached.
Third-degree price discrimination: Consumers are divided into segments based
on some verifiable attribute, such as students or senior citizens.
In price discrimination schemes, profit-seeking firms try to extract as much consumer
surplus from each segment as possible. Each segment is charged an optimal price
based on the estimated willingness to pay of that segment. For example, businesses
that rely on telecommunication services are willing to pay (and therefore are
charged) higher rates than individuals and households.
The established algorithm for determining prices is:
1. If the firm can use a two-part tariff, then the per-unit price is found by intersecting
the demand and marginal cost curves.
2. An entry fee is set at the individual consumer's surplus when she can buy as
many units as wanted at the set price (a numerical sketch of steps 1 and 2 is
given after this list).
3. If consumers cannot be divided on the basis of a verifiable attribute, then
second-degree price discrimination is used (e.g. versioning or quantity
discounts).
4. Otherwise, the firm has to segment the market and price each individual segment.
Lastly, when considering a network pricing structure, it is important to
distinguish between the four types of charges that the above (and other) pricing
schemes can apply [11].
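As an illustration of steps 1 and 2, here is a minimal sketch (Python); the linear demand curve, its coefficients, and the constant marginal cost are invented for the example, not taken from the text:

```python
# Two-part tariff sketch under linear demand P(q) = a - b*q and
# constant marginal cost c (all numbers are illustrative assumptions).

a, b = 10.0, 0.5   # demand intercept and slope
c = 2.0            # constant marginal cost

# Step 1: per-unit price where demand meets marginal cost, P(q*) = c.
q_star = (a - c) / b            # quantity bought at that price
per_unit_price = c

# Step 2: entry fee = consumer surplus at that price, i.e. the area
# of the triangle between the demand curve and the price line.
entry_fee = 0.5 * (a - c) * q_star

print(f"Per-unit price: {per_unit_price:.2f}")   # 2.00
print(f"Quantity:       {q_star:.1f}")           # 16.0
print(f"Entry fee:      {entry_fee:.2f}")        # 64.00
```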
Pricing Alternatives
Given below are some pricing schemes that may be implemented:
The case for public subsidy [12]

[11] J. Walrand and P. Varaiya, "High Performance Communication Networks", chapter 8.
[12] Sandra Schickele, "The Internet and the Market System: Externalities, Marginal Cost and Public Interest", August 1993.
Before considering any one pricing scheme, it is useful to ask whether it is
technically, economically and socially feasible to charge for Internet service at all.
Some schools of thought believe that the answer is "no". Some units of
pricing, such as the number of packets (units of communication) sent, require more
computing resources to do the packet accounting than to send the packet, thus
rendering those pricing schemes infeasible. As far as economic and social
feasibility is concerned, there is a very strong argument that the Internet access
market cannot succeed and, therefore, that the prices charged will be neither
economically nor socially optimal. From the trend of ISP insecurity and price
flexibility, one can conclude that the Internet access market is currently competitive.
There is a strong belief, however, that a market "shakeout" will occur from which only
a few ISPs will survive. If this is the case, then those firms will be able to charge
prices much higher than marginal cost. Market failure is said to occur at that
point - when the market is incapable of producing an economically efficient and
socially optimal allocation of resources. When a market fails, economic theory says
that government intervention is required, especially for a quasi-public good
such as the Internet.
Flat Pricing Scheme [13]

Flat-rate pricing in the current context of the Internet is likely to run into severe
problems: the continuance of flat-rate pricing is likely to severely impair the current
discursive nature of the Internet.

[13] Loretta Anania and Richard J. Solomon, "Flat: The Minimalist B-ISDN Rate", presented at the MIT Workshop on Internet Economics, March 1995.
The basic role of a pricing mechanism is to lead to an optimal allocation of scarce
resources, and to give proper signals for future investments. The mechanism in place
should lead to the optimization of social benefits by ensuring that scarce resources
are utilized in such a manner as to maximize productivity in ways society thinks fit.
One critical issue, however, is the basis on which an appropriate pricing scheme can
be designed.
Given that the marginal cost of sending an additional packet of information over the
network is virtually zero once the transmission and switching infrastructures are in
place, marginal cost pricing in its simplistic form is inapplicable. Cost-based return on
investment (ROI) pricing is both not feasible, given the multiplicity of providers who
would have to chip in to bring about an end-to-end service, and inefficient, given the
chronic problem of allocating joint costs. A "what the market can bear" policy would
be likely to have unforeseen implications, especially if the markets are not
competitive in each and every segment of the network.
The principle that is most likely to be effective in this scenario is a modified version of
the marginal cost approach, where the social costs imposed by the scarcity of
bandwidth - the bottleneck resource - are taken into consideration. Since bandwidth
is the rate at which data is transmitted through the network, its scarcity implies
delays due to network congestion. This, then, is the social cost that needs to be
incorporated into any efficient pricing scheme.
The major fear in some quarters is that the present system of flat-rate, predictable
pricing for a fixed-bandwidth connection will be replaced by some form of vendor-
preferred, usage-based metered pricing. Users feel that the Internet should continue
to function primarily as a vast, on-line public library from which they can retrieve
virtually any kind of information at minimal cost.
In addition to the fear that a popular discussion list would have to pay enormous
amounts to send messages to its members, it is feared that usage-based pricing
would introduce a wide range of problems regarding the use of ftp, gopher, and
Mosaic servers, since the providers of the "free" information would be liable to pay, at
a metered rate, the costs of sending the data to those who request it. This would
have a negative effect on such information sites, and would eliminate many
sources of free information.
In essence, the argument is that usage-based pricing would imply severe economic
disincentives for both users and providers of "free" information, and would therefore
destroy the essentially democratic nature of the Internet.
Usage based
Usage-based charges are determined by the quantity of use and can theoretically be
measured in a number of different ways: the speed of the connection (i.e. the modem
speed), connection time, the number of packets sent, the length of the connection to
the ISP in minutes, and so on. Pricing based on the number of packets actually sent
has the advantage of being fair, in the sense that users are charged for exactly what
they use. Pricing based on the number of minutes of connection is unfair, however,
because it does not distinguish the length of the connection from the number of
packets actually downloaded, although there may be no correlation between the two.
It is entirely possible, for example, that one user spends an hour reading information
downloaded at the beginning of a session, while another user downloads a new page
of information every five minutes.
Usage-based pricing does provide a disincentive for users to be wasteful of network
resources, since they must pay for the resources they use. In practice, however,
setting rates and measuring usage is very difficult. It could take more computing
power to account for the resources used in sending a packet than to actually send
the packet; usage pricing based on the number of packets is therefore economically
infeasible. When other accounting measures such as connection speed or
connection time are used, users will complain that these are unfair, because people
with the same connection speed or connection time who receive and send different
amounts of traffic would be charged the same. Also, there can be many "idle" periods
in which no network traffic flows but the connection is still maintained. Finally,
usage-based pricing is controversial because it endangers the vitality of the Internet.
Users would undoubtedly not "surf" the web as freely with a virtual meter ticking in
the background. Where usage-based pricing has been tried, growth has slowed.
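To make the fairness argument concrete, the following sketch (Python) bills the two users described above under per-minute and per-packet schemes; the traffic profiles and both tariffs are invented for illustration:

```python
# Two users with identical 60-minute sessions but very different
# traffic volumes (profiles and rates are illustrative assumptions).

PER_MINUTE_RATE = 0.02    # $ per connected minute
PER_PACKET_RATE = 0.0001  # $ per packet sent or received

users = {
    # downloads once, then reads for the rest of the hour
    "reader":     {"minutes": 60, "packets": 2_000},
    # fetches a new page every five minutes
    "downloader": {"minutes": 60, "packets": 24_000},
}

for name, u in users.items():
    by_minute = u["minutes"] * PER_MINUTE_RATE
    by_packet = u["packets"] * PER_PACKET_RATE
    print(f"{name:10s} per-minute: ${by_minute:.2f}  per-packet: ${by_packet:.2f}")

# Per-minute billing charges both users $1.20 despite a 12x difference
# in traffic; per-packet billing tracks actual resource use.
```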
The Telephone Pricing Model
One form of usage-based pricing would be to use the system of posted prices as in
telephony. One way to do this would be to adopt the telephone model, where the cost
of Internet usage is based on the distance between the sender and the receiver, and
on the number of nodes through which data need to travel before they reach their
destination. This, however, would be difficult to implement given the inherent nature
of the connectionless net technology, which is based on redundancy and reliability,
where packets are routed by a dynamic process through an algorithm that balances
load on the network, while giving each packet alternative routes should some links
fail [14]. The associated accounting problems are also enormous. In addition, the
sender would prefer that packets are routed through a minimum number of nodes in
order to minimize costs, while the routing algorithm in the Internet bases its
calculations on redundancy and reliability, and not necessarily on the fewest links
or the lowest costs.
The telephone model of pricing is not likely to work for another reason. Posted prices
are not flexible enough to indicate the state of congestion of the network at any given
moment [15]. As we have seen earlier, congestion in the network can peak from an
average load very quickly depending on the kind of application being used. Also,
time-of-day pricing means that unused capacity at any given moment cannot be
made available at a lower price at which it would be beneficial to some other users.
Conversely, at moments of congestion, the network stands to lose revenue because
users who are willing to pay more than the posted rates are being crowded out
of the network through the randomized first-in-first-out (FIFO) process of network
resource allocation.
In essence, the system of posted fixed prices implies multiple problems: while it does
not allow for revenue maximization or lead to optimal capacity utilization, it also does
not address the social costs of congestion because it cannot allow for prioritization of
packets. It is thus clear that the answer to the Internet's pricing problem does not lie
at either end of the pricing spectrum defined by flat-rate pricing and pure usage-
based pricing, but possibly in an innovative approach.
Priority
In a priority scheme (also sometimes called Quality of Service scheme), the user
chooses the quality of service that they want and pay a flat fee for that quality of
service. A user could choose between high or low priority connections, for example.
Another example of priority pricing is to allow the user to actually choose the priority
of their packets (both sending and receiving) in the Internet. This latter type of priority
pricing is not currently available because the underlying infrastructure does not
differentiate between different packets' priorities. However, this type of pricing might
provide better quality of service than a faster line because although the faster line
could provide better service at the endpoint of the user's connection, it does not
provide the end-to-end guarantee that packet priorities would. The idea behind
priority pricing is that the user pays for what they get but does not have to deal with
14
(Varian & MacKie-Mason, 1993, p. 3)
that "ticking meter" feeling. Priority charges also have the advantage that they allow
the ISPs to charge for "luxury items" and, therefore, attempt to charge a price closer
to the user's willingness to pay. However, priority based schemes may not provide
enough granularity to allow ISPs to charge at the highest level possible for each
customer.
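A minimal sketch of such priority-based scheduling (Python); the two tiers, the packets, and their arrival order are invented for illustration:

```python
import heapq

# Minimal priority scheduler: packets from the higher-paying tier are
# transmitted first whenever the link is busy. Tiers and packets are
# illustrative assumptions.

PRIORITY = {"premium": 0, "standard": 1}  # lower value = served first

queue = []
for seq, (tier, payload) in enumerate([
    ("standard", "bulk-ftp chunk"),
    ("premium",  "video frame"),
    ("standard", "email"),
    ("premium",  "voice sample"),
]):
    # seq breaks ties so equal-priority packets keep arrival order
    heapq.heappush(queue, (PRIORITY[tier], seq, payload))

while queue:
    prio, seq, payload = heapq.heappop(queue)
    print(f"sending (tier {prio}): {payload}")
# Both premium packets go out before any standard packet.
```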
Tiered usage
In a tiered usage pricing scheme, the user is charged a certain amount for the first X
units of use, then a higher amount for the next Y units of use, and so on. The
advantage of tiered pricing is that it might allow whimsical browsing without
encouraging excessive use. The disadvantage is that the user would be
inconvenienced by having to keep track of their usage; as stated previously, user
inconvenience is not acceptable. This could be remedied in a number of ways,
however: perhaps by sending a message to the user once they have crossed the
threshold of a new tier, or by allowing the user access to their account records thus
far.
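A small sketch of tiered billing (Python); the tier boundaries and rates are invented for the example:

```python
# Tiered usage billing: a cheaper rate for the first block of traffic,
# progressively higher rates beyond it (tiers are illustrative).

TIERS = [  # (upper bound in MB, $ per MB within the tier)
    (100, 0.01),           # first 100 MB
    (500, 0.02),           # next 400 MB
    (float("inf"), 0.05),  # everything beyond 500 MB
]

def monthly_charge(usage_mb: float) -> float:
    total, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if usage_mb <= lower:
            break
        billed = min(usage_mb, upper) - lower
        total += billed * rate
        lower = upper
    return total

print(monthly_charge(50))    # 0.50  (all in tier 1)
print(monthly_charge(300))   # 1.00 + 4.00 = 5.00
print(monthly_charge(800))   # 1.00 + 8.00 + 15.00 = 24.00
```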
Congestion pricing
One reason to introduce pricing schemes into the Internet is to make users
understand the value of what they are gaining (an ability to communicate and to
access information) and to give them an incentive to act in a socially conscious way
which reduces the harm to others. For example, everyone is accustomed to higher
daytime rates for long-distance telephone service. The rates are higher during the
day because phone lines are congested during that time. Higher prices serve to
inform the customer of the extra value of calling during periods of congestion. The
customer, then, will meter their daytime use according to their willingness-to-pay for
that telephone call: if the call is relatively urgent, they will phone during the daytime; if
not, they will wait until the evening. In the Internet, something similar can be done by
charging according to the state of congestion of the network. However the drawback
of a congestion-pricing scheme is that it provides an incentive for the ISP to cause
congestion by restricting its capacity. There are several ways in which congestion
can be spuriously introduced. For example, an ISP can:
Withhold capacity: An ISP can purposefully not build capacity to match the
demand. Besides introducing congestion, this would save in management
costs since smaller systems are cheaper to maintain than larger systems.
15
(Varian & MacKie-Mason, 1993, p . 19)
Hide capacity: An ISP can simply shut down a portion of their modems for
dial-up service. Another benefit is, if and when they turn the modems back on,
they can take credit for innovation in upgrading.
Augment Demand: An ISP can cause congestion by "demand pseudo
augmentation," in which the apparent demand is increase by some kind of
supplier self-dealing. The ISP, or its collaborator, can use the bandwidth
wastefully just to drive up the price.
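The congestion-dependent charging idea mentioned above could look roughly as follows (Python); the price curve, base rate, and steepness are invented for illustration and are not a scheme proposed in the text:

```python
# Congestion-dependent pricing sketch: the per-MB price is a function
# of current link utilisation (parameters are assumptions).

BASE_PRICE = 0.01   # $ per MB on an uncongested link
STEEPNESS = 4       # how sharply price rises as the link fills up

def price_per_mb(utilisation: float) -> float:
    """Price as a smoothly increasing function of utilisation in [0, 1)."""
    utilisation = min(max(utilisation, 0.0), 0.99)
    return BASE_PRICE / (1.0 - utilisation) ** STEEPNESS

for u in (0.1, 0.5, 0.8, 0.95):
    print(f"utilisation {u:.0%}: ${price_per_mb(u):.3f}/MB")
# Off-peak traffic is nearly free; near saturation the price rises
# steeply, signalling users to defer non-urgent transfers.
```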
Two-part tariff
A two-part tariff comprises a fixed (f) portion and a variable (v) portion. The fixed
portion includes charges for network access and capacity (a capacity charge is
based on the network's maximum possible bandwidth); it is determined by the
fixed costs, the willingness to pay of the customer population, and the size of that
population. The variable portion is based on the actual usage of the users and
the priority of their service. Because the scheme extracts consumer surplus through
both the fixed fee and the usage-sensitive portion, the two-part tariff maximizes the
surplus extracted from customers, and therefore provides the ISP with a disincentive
to induce congestion, which would reduce both the number of connections and the
network usage.
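In symbols (a standard way of writing it, using the f and v of the text; the usage quantity q and the priority index k are notation introduced here for illustration), the total charge for a customer is:

```latex
% Two-part tariff: fixed access/capacity portion plus a usage part
% that scales with traffic volume and priority class.
\[
  T(q) \;=\; f \;+\; v_k \, q
\]
% f   : fixed portion (access + capacity charge)
% v_k : per-unit rate for priority class k
% q   : usage (e.g. megabytes transferred)
```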
[16] Varian & MacKie-Mason (1994).
Under such a scheme, transmission priority goes to the packets with the highest bids.
A great deal of consensus will be required along the network for smooth functioning
and to ensure that priority packets are not held up.
Users will be billed the lowest price acceptable under the routing "auction", and not
necessarily the price that they have indicated as their bid. A user would thus pay the
lower amount between his bid and the bid of the marginal user, which will
necessarily be lower than the bids of all admitted packets. As a result, the Varian and
MacKie-Mason model ensures that while everyone has the incentive to reveal
his or her true willingness to pay, there are systemic incentives to conserve
scarce bandwidth while simultaneously allowing effectively free services to continue.
Smart Market proposal
The Smart Market proposal provides an intelligent way to price the variable portion
(v) of the two-part tariff mentioned above based on network congestion. In an ideal
world, the price charged for network use would be a continuous function of the
congestion: the congestion level would determine the price charged to the user at
the time the packet was transmitted. However, this would be inconvenient for both
the user and the ISP, as the ISP would constantly have to monitor congestion and
the user would constantly have to monitor the price to determine whether it had
surpassed the user's willingness to pay.
The Smart Market proposal suggests instead that users specify a bid for each packet
sent. That bid should reflect the user's willingness to pay. In times of congestion,
packets are prioritised according to their bids. Packets are charged at the bid of the
highest-priority packet that is dropped, not at the bid on each packet. This provides
an incentive for users to bid their true willingness to pay.
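A minimal simulation of this auction (Python; the link capacity and the bids are invented for illustration): the highest bids are admitted, and every admitted packet pays the best rejected bid, so truthful bidding is the sensible strategy.

```python
# Smart Market sketch: admit the highest-bidding packets up to link
# capacity; charge all admitted packets the cutoff (highest rejected)
# bid. Capacity and bids are illustrative assumptions.

def smart_market(bids: list[float], capacity: int):
    ranked = sorted(bids, reverse=True)
    admitted = ranked[:capacity]
    # The market-clearing price is the best bid that did NOT get in;
    # with spare capacity the link is uncongested and usage is free.
    price = ranked[capacity] if len(ranked) > capacity else 0.0
    return admitted, price

bids = [0.9, 0.1, 0.5, 0.7, 0.3, 0.8]   # one bid per packet, in cents
admitted, price = smart_market(bids, capacity=4)
print(f"admitted bids: {admitted}")      # [0.9, 0.8, 0.7, 0.5]
print(f"all pay:       {price}")         # 0.3, the marginal (dropped) bid
# No admitted user pays their own bid, so overstating or understating
# one's willingness to pay cannot lower the price actually paid.
```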
Usage-based
Considerations for the implementation of usage-based pricing at the ISP level also
apply to the infrastructure. In addition, usage-based pricing for the infrastructure
can provide extra revenue for the development of more efficient and higher-capacity
networks. Provided that an environment exists which makes the adoption of
usage-based pricing attainable, charging based on the volume of traffic is a relatively
simple and cost-effective scheme. Costs for providing this service include accounting
hardware, software, and a business unit to bill the users. In an environment that is
hooked on flat rates, such as the United States, attractive features of usage-based
pricing must exist before the customers (ISPs) will accept the switch.
Figure 3 illustrates the congestion externality for Internet data/traffic. In Figure 3 there
is an Internet connection with a fixed capacity of Q# megabytes per hour. For all
quantities of traffic up to Q#, the marginal cost of additional traffic is constant at P0
and no delays are experienced. Indeed, it has been suggested that on an
uncongested network the marginal cost of additional traffic is close to zero. However,
once the quantity of traffic demanded exceeds Q#, all traffic is delayed. Thus, as a
user generates additional traffic past Q#, they experience delays. The Marginal
Private Cost (MPC) increases, since "time spent by users waiting for a file transfer is
a social cost, and should be recognised as such in any economic accounting".
Furthermore, since the additional traffic generated creates delays for all users, not
just the user generating the marginal traffic, the Marginal Social Cost (MSC) lies
above the MPC. In Figure 3, this is represented by both the MPC and MSC rising
once the capacity of the link Q# is exceeded.
The negative congestion externality arises because of this divergence between MPC
and MSC. Consider the demand curve D0 in Figure 3, and let us assume that the
consumer pays a price equal to the marginal private cost of their additional traffic.
With demand curve D0, the quantity demanded is Q0 and the consumer pays a price
of P0. Now, let us assume that there is a positive relationship between network size
and network traffic. As the network size grows, the quantity of data/traffic demanded
will increase and eventually exceed the capacity of the network. In Figure 3 this is
represented by an increase in demand from D0 to D1.
With demand curve D1, consumers now demand quantity Q1 megabytes per hour,
where the price P2 equals the MPC of that quantity of traffic. However, at Q1 the
traffic of all users has been slowed and the MSC is equal to P1. The socially optimal
level of traffic occurs where price equals marginal social cost; in Figure 3 this is at
price P3 and quantity Q2. Therefore, at quantity Q1 a negative congestion externality
of P1-P2 exists.
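The divergence between the two cost curves can be written compactly (a standard formulation consistent with the figure's labels; D(Q), the delay cost borne by each user at traffic level Q, is notation introduced here for illustration):

```latex
% Each extra unit of traffic raises every user's delay cost, so the
% marginal social cost exceeds the marginal private cost once the
% link capacity Q# is passed.
\[
  MSC(Q) \;=\; MPC(Q) \;+\; Q \, \frac{\partial D(Q)}{\partial Q},
  \qquad
  MSC(Q_1) - MPC(Q_1) \;=\; P_1 - P_2 .
\]
```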
It is important to remember, however, that the increase in demand from D0 to D1
could also be brought about by factors other than the network effect. For example,
the increase in demand could be the result of changes in the prices of substitutes and
complements, changes in income, or changes in consumer preferences. These
factors can produce variations in demand even though the network size may be
constant.
The existence of the congestion externality has been demonstrated. Once the
capacity of the network is exceeded, all users experience delays, not just the user
creating the additional traffic. A divergence between MPC and MSC thus occurs,
resulting in a negative congestion externality.
Since the congestion externality is equal to the gap between the MPC and MSC, the
size of the congestion externality will thus depend on the application mix of Internet
users when the congestion occurs. Where the application mix is dominated by larger
ADU applications, the congestion externality is greater.
Now let us consider the impact of a decision to reduce the congestion externality by
increasing the capacity of the network. In Figure 5, this is represented by extending
the horizontal portion of the marginal cost curves from Q# to Q##. The MSC and
MPC curves are thus shifted outwards to the right by the amount of the capacity
expansion. The new marginal cost curves MSC1 and MPC1 are drawn parallel to the
original curves. With the new curves MSC1 and MPC1 , a higher quantity of traffic Q0
is demanded at the lower price P3. The congestion externality at this quantity of
traffic is now a lower amount, P2-P3.
However, it has been shown previously that, because lowering congestion raises the
marginal benefit of usage at a given network size, lowering congestion leads to
an increase in demand for usage. Another explanation which supports this
hypothesis is based on the argument that time is complementary to Internet usage
and that congestion reduces the quality of service of the Internet. Thus a reduction in
congestion will increase demand because it reduces the time taken to complete a
given task and is seen by most Internet users as a quality improvement.
Reducing the congestion at the network size associated with demand curve D0
will increase the marginal benefit of network size at that particular network size.
As a consequence, assuming no other changes, demand will increase until a
price/quantity equilibrium is reached which restores the original congestion
externality. The new demand curve is given by D1, with a new price/quantity
equilibrium at price P2 and quantity Q1. At this combination the congestion
externality is again equal to P1-P2. Thus, whilst a temporary reduction in the
congestion externality was achieved by expanding the physical capacity of the
network, the original congestion externality will be restored, ceteris paribus.
This situation, where a lowering of congestion leads to an increase of demand and a
return to the original problem of congestion, is similar to the situation with respect to
highways and motor vehicle traffic. Adding more roads to alleviate traffic congestion
may not provide relief from congestion as suppressed demand soon soaks up the
expanded capacity.
However, let us now consider the impact of the policy if the reduction in congestion
leads to more companies and individuals deciding to connect to the Internet,
increasing the network size. In Figure 5, the capacity expansion initially reduced the
congestion externality; however, the externality was eventually restored at the
original network size. Now, if the reduction in congestion produces an increase in
network size, what will be the impact on the congestion externality?
In Figure 6, demand curve D1 represents the demand curve for the original network
size with expanded capacity. (i.e. The final position of the demand curve after the
expansion and subsequent increase in demand.) However if the reduction in
congestion increases the network size, then so long as the marginal private benefit of
increased network size is positive, the demand curve for the larger network size will
sit to the right of the original network size. In Figure 6 this is represented by the
demand curve D2. The higher level of demand D2 produces a new equilibrium at a
higher quantity of traffic Q2 and a higher price P4. What is more significant, however,
is that P5-P4 > P1-P2; that is, the congestion externality is larger than under the
original network size. Thus capacity expansion, where it leads to increased network
size, may increase the size of the congestion externality.
In a competitive environment with excess capacity, there is a tension between the
large sunk costs of physical networks and very low incremental costs of usage. On
the one hand, the need to recover sunk costs suggests using price structures with
high up-front charges and low (or zero) usage rates. On the other hand, with
significant excess capacity present, short-run profits can be increased by selling at
any price above incremental cost. Economic theory would suggest that the pricing
outcome in this situation might be unstable, unless regulatory forces or other
influences inhibiting competition were present.
From its inception, AOL used a two-part tariff scheme, with a monthly access charge
of $9.95 and a usage charge of zero for up to five hours, and $3.50 per hour
thereafter. AOL was a self-contained network, and users had a high willingness to
pay for the unique services it offered.
At that time AOL had no interconnection with the Internet, which was still unknown to
most users, nor with other online services. This lack of interconnection and limited
competition allowed the other online services, such as CompuServe and Prodigy, to
use similar two-part tariff schemes.
AOL had its own proprietary network access and content technologies. Similarly,
other online services had developed their own proprietary technologies. Because the
content technologies of each online service were different, independent content
providers usually were forced to provide their content through only one of these
online services, thus limiting their audience to the users of that service.
GNN was eventually absorbed into AOL. The adoption of the $19.95 flat fee is
important because it signaled the absorption of AOL into the Internet. AOL had been
transformed from a service company, whose main product was its content and for
which the network was just a necessary means to access the service, into a network
company, whose main product was network access. The multitude of pricing
schemes may indicate signs of desperation and loss of focus: AOL was apparently
trying to match every existing pricing scheme from competing networks (ISPs, MSN),
and in doing so disregarded what had been one of its main core competencies: its
content.
A flat rate pricing scheme does not extract as much consumer surplus as a multiple-
part tariff scheme does. In fact, a flat rate pricing scheme may barely cover the
services' huge fixed costs. Therefore AOL's new emphasis is on expanding its
customer base and on developing alternative sources of income. Given the
knowledge that an ISP like AOL has about its customers (e.g. address, online
navigation habits), advertising and sales are obvious choices for alternative sources
of income.
However, the aggressive acquisition methods that AOL has used have had major
economic consequences - acquisition costs are from $50 to $300 per new user
(depending on the sources), and churn rates are very high. Acquisition costs are
deferred over several months, so the actual profitability of the company may not be
what is indicated by its financial statements.
The flat rate pricing scheme, together with the aggressive acquisition campaign,
attracted a huge number of customers, who remained connected for extended
periods of time. As a result, AOL's infrastructure became congested - users had a
very hard time accessing the system, and when they were successful, the system
was painfully slow. AOL miscalculated the impact of the introduction of a flat rate,
and as a result it alienated thousands of customers and faced many lawsuits. Since
one of the main features that differentiated AOL from other ISPs was the ease of
installation and connection, this lack of sufficient infrastructure put AOL in a very
dangerous position. AOL reacted by investing millions of dollars in additional
infrastructure.
Lessons learned
America Online (and other online services) initially positioned itself as a service
provider, and limited access to its services to users of its proprietary network. It did
not license its content technologies, so they remained proprietary and incompatible
with those of the competition. When an alternative technology (WWW) emerged in
the public domain, people had a big economic incentive to use the open technology.
As often happens, by the time the company took notice of the new technology, there
was already a critical mass of people who had adopted it, so AOL had to abandon its
proprietary technology in favour of the open one. A flat rate scheme encourages
network congestion, because users are not conscious of the resources that they are
consuming or of the cost of those resources. As a result, the quality of the service
provided by the network is degraded. Investing more in infrastructure may alleviate
the problem somewhat, but only temporarily. Furthermore, companies may eventually
stop further investments in infrastructure whose costs the flat rate will not be able to
recover.
Multiple-part tariff schemes such as the access+usage scheme used originally by
AOL and other online services are easy to implement under monopolistic conditions.
However, under intense competition, services seem to gravitate toward flat-rate
schemes. Part of this phenomenon may be due to the characteristics of the TCP/IP
protocols, which were designed when the Internet was a subsidized, not-for-profit
network. New protocols that allow the implementation of different types of services,
such as those based on quality of service or congestion, may allow services to
implement differential pricing strategies. Meanwhile, services may be forced to
subsidize their flat-rate pricing plans through other means of revenue, such as selling
marketing information or advertising.
Background
New Zealand: The development of the New Zealand network (NZGate) began in
1990 when six New Zealand universities and NASA established a 9600 bps analog
cable link from New Zealand to Hawaii. In April 1991, the network expanded to link all
of the seven New Zealand universities to form the Kawaihiko network. Later, the Tuia
network was established. It linked Kawaihiko to two pre-existing government
managed networks - the Department of Scientific and Industrial Research (DSIR) and
Ministry of Agriculture and Fisheries (MAF) - on an informal basis.
In July 1992, the Tuia Society was created, which consisted of three major
management groups, i.e. Kawaihiko representing the universities; Industrial
Research Limited (IRL) which was the old DSIR; and AgResearch which was the old
MAF. Two smaller groups, the National Library and Ministry of Research, Science
and Technology (MoRST), also joined the Tuia Society. At that time, a Frame Relay
backbone was also set up to provide connectivity between the groups. The Frame
Relay backbone was provided by a private organization, Netway Communications,
which was a subsidiary of Telecom New Zealand.
Figure 5 and Figure 6 summarize the interconnections and the configuration of the
management groups and sites within the Tuia Society and Kawaihiko up to 1992,
respectively.
Pricing schemes
New Zealand: The general principles followed by the New Zealand institutions for the
establishment, maintenance, and development of their network were:
(1) initially, share the traffic costs and, if possible, have each site pay for its own
access costs; and
(2) once a proper accounting system was established, "pay for what you use" (both
access and traffic costs).
For the initial establishment of New Zealand's connection to the U.S. in 1990, NASA
provided the majority of the support for the costs of the U.S. end of the link, but no
subsidy was provided by the New Zealand government for the New Zealand end of
the link. As a result, all the costs had to be recovered by charging the users. An
agreement was made between the six universities that each site would pay for 1/6 of
the start-up and ongoing costs to get the project established. A similar pricing
scheme was used to establish the Kawaihiko network in 1991, where costs were
divided in fixed proportions with Lincoln University paying for 1/13 and each of the
other six sites paying 2/13 of the costs. (There are seven universities in the
Kawaihiko network.)
In April 1992, when the entire Tuia network underwent re-engineering, sites within
Kawaihiko were given the opportunity to pay for their own access costs.
Netway Communications (an infrastructure provider), which provided the Frame
Relay, charged a monthly fee for both the access and traffic costs. Sites within the
Kawaihiko management groups could select their own access rates (i.e. speed) at
different prices. Since some sites had more costly access fees than others, they
agreed that each site would pay its own access charges. Moreover, access costs for
sites providing common access for other sites were divided using a set of
percentages agreed locally at each site. Traffic costs were still shared among
participants as they had been initially, since an accounting system had not yet been
implemented to monitor traffic volumes between sites.
The past success of usage-based pricing for international Internet traffic helped to
encourage the sites to share the start-up costs initially. They knew that once an
accounting system was established, users would eventually only have to "pay for
what they used".
Usage-based pricing was first implemented for international traffic, just after the
NZGate connection was made in 1990. They adopted a volume-charge pricing
scheme, with the following characteristics:
Measure traffic in both directions through NZGate for each participating site and
charge for it by volume, i.e. by the number of megabytes moved in and out each
month
Charge enough to cover actual costs, plus a percentage for development
Use the resulting development funds to buy more capacity as demand grows
"Committed traffic volume" pricing methodology
The notion of "committed traffic volume" provided users with predictability as to how
much they would be charged per month. The pricing method was as follows: Each
site made an initial choice of their committed volume, and thus their monthly charge.
If a site's traffic fell into a different "charging step" for more than one month, that site's
committed volume was updated to reflect the actual traffic. However, for that unusual
month, the site would still be responsible to pay for their previous committed volume,
whether their actual usage had changed or not. This provided a site at least a
month's warning of a change in the monthly fees. Committed volumes were updated
automatically by the NZGate Management, which simplified the administrative work.
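A minimal sketch of the committed-volume rule (Python); the charging steps and prices are invented, while the "more than one month" trigger is modelled directly from the description above:

```python
# Committed-traffic-volume billing sketch. A site is billed for its
# committed step each month; once its traffic sits in a different
# step for more than one month, the commitment is updated. Steps
# and prices are illustrative assumptions.

STEPS = [(100, 300), (500, 1_000), (2_000, 3_000)]  # (MB ceiling, $/month)

def step_for(volume_mb: float) -> int:
    """Index of the smallest charging step that covers the volume."""
    for i, (ceiling, _) in enumerate(STEPS):
        if volume_mb <= ceiling:
            return i
    return len(STEPS) - 1

def bill(monthly_traffic: list[float], committed: int = 0) -> None:
    streak = 0  # consecutive months outside the committed step
    for month, volume in enumerate(monthly_traffic, start=1):
        actual = step_for(volume)
        # Bill the committed step, even in a month that deviates, so
        # the site gets a month's warning before its fees change.
        print(f"month {month}: {volume:6.0f} MB -> pay ${STEPS[committed][1]}")
        streak = streak + 1 if actual != committed else 0
        if streak > 1:          # deviated for more than one month
            committed, streak = actual, 0

bill([80, 90, 450, 480, 470])   # commitment steps up from month 5
```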
Because of the success of this volume-based pricing, sites within the Kawaihiko
group in particular were willing to divide the costs of the initial establishment of the
network, in the expectation that a fair pricing scheme would later be implemented.
In summary, the key factors that brought about the success of usage-based pricing in
New Zealand were:
A unified group of major organizations which agreed on and encouraged the
implementation of a usage-based pricing scheme
A single, dominant infrastructure provider
A cost-effective accounting system
"Fair" and attractive pricing methodologies
The common pricing philosophy and mutual trust between and within the
management groups were essential for both the initial establishment and eventual
adoption of the usage-based system. The availability of a cost-effective accounting
system, as well as a simple and "predictable" pricing system, further encouraged the
implementation of a cost-effective "pay-what-you-use" system. Moreover, the
existence of a single, dominant infrastructure provider significantly simplified and
reduced the accounting costs that would otherwise most likely have made
usage-based pricing cost-ineffective.
Chile: After the establishment of both the REUNA and Unired networks in 1992, both
organizations were facing the problem of finding a proper pricing scheme to cover
both the maintenance and development costs. It proved quite difficult for the groups
to agree on a solution; in fact, this difficulty led REUNA to adopt a very
unreasonable one. The heads of the member institutions of REUNA decided that
all the network costs were to be split in proportion to the budgets of the institutions,
with the exception that the international traffic would be charged at a per-megabyte
rate. This of course brought about serious disapproval, and eventually forced REUNA
to implement a flat rate with unlimited access for national traffic. However, REUNA
still kept a usage-based pricing scheme for international traffic. Unired, on the other
hand, implemented a flat rate pricing scheme for both national and international
usage for their academic and non-profit customers. To recover some costs,
commercial customers were charged heavily for international traffic, but were still
provided the option of flat fees for national traffic.
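For contrast with the New Zealand scheme, REUNA's original formula can be sketched
as follows, with entirely hypothetical members and figures: national costs are split in
proportion to member budgets, while international traffic is billed per megabyte. The
sketch also shows why members objected: the national share ignores actual usage.

    # Hypothetical sketch of REUNA's original (and unpopular) scheme.
    def reuna_bill(budgets, national_cost, intl_mb, intl_rate_per_mb):
        total_budget = sum(budgets.values())
        return {m: round(national_cost * b / total_budget      # budget-proportional share
                         + intl_mb[m] * intl_rate_per_mb, 2)   # per-MB international charge
                for m, b in budgets.items()}

    # Two invented members: the big-budget one pays 2/3 of the national costs
    # no matter how much national traffic it actually generates.
    print(reuna_bill({"u_big": 2e6, "u_small": 1e6},
                     national_cost=30000,
                     intl_mb={"u_big": 500, "u_small": 1500},
                     intl_rate_per_mb=2.0))
    # {'u_big': 21000.0, 'u_small': 13000.0}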
In contrast to the New Zealand experience, the network in Chile found it difficult to
implement usage-based pricing. The political competition and unreasonable pricing
solutions in the past left both REUNA and Unired with no reasonable alternative but
to charge flat fees with unlimited access. Any pricing besides flat-rate pricing was
discouraged, for fear that an "unfair" and expensive usage-based scheme would be
implemented. It has been argued that it would be difficult for REUNA even
to implement a volume-based charging system for international traffic, especially
since their competitor, Unired, had implemented a flat-rate system for its non-profit
customers.
If, however, by reducing costs to the users, REUNA or Unired could gain complete
market share, then they could implement a usage-based pricing scheme more easily.
Alternatively, within a competitive market, a possible situation that would encourage
usage-based pricing would be if the congestion was so heavy that people desired to
have improved quality of service for real-time applications, for example, video
conferencing.
Conclusion
In conclusion, implementing a usage-based pricing methodology is not especially
difficult in a monopolistic and cooperative environment that desires usage-based
pricing. In a competitive environment where disjointed services and infrastructures
exist, a usage-based pricing system could be implemented by:
- Obtaining a monopoly (i.e. by reducing costs to users to gain complete market
share), OR
- Consolidating the disjointed organizations so that they agree to implement a
usage-based pricing scheme, OR
- Convincing the users to demand it, i.e. that they should not pay for others' access
and traffic costs and should receive a better quality of service (though an
inexpensive accounting system must be available, or it may cost users the same or
even more to use the Internet), OR
- Having the government enforce it.
Human Factors
While pricing schemes should help to control network flow, they should not cause
too much inconvenience to users. Clearly, any pricing scheme other than flat-rate
pricing will bring some form of "extra" inconvenience. For instance, if usage-based
pricing is adopted, users might find it inconvenient to keep track of the amount of
their traffic; and if congestion pricing is used, they might find it inconvenient to
distinguish congested hours from non-congested hours and to learn the prices
charged at each point. Unless these procedures can be made transparent to users, it
is arguable that many will be reluctant to adopt a new pricing scheme, even if they
would pay a little less than under flat-rate pricing.
The human factor is also involved in getting used to new pricing schemes and
settlement models when they are implemented. During this adjustment, productivity
can be expected to decrease while costs increase, which may hinder the pace at
which new pricing schemes and settlement models are adopted.
Collaborative Design
Collaborative technologies have brought exciting new network applications like "white
board" and video conferencing. Along these lines, it is possible that the development
of collaborative technologies could eventually enhance the successful
implementation of different pricing schemes and settlement models. For example,
successfully developing networks with symmetric links (today's network links are
mostly asymmetric, that is, with high-speed downstream but low-speed upstream)
may make the accounting of packets much easier and more cost-effective.
Industrial Organization
While people are discussing which pricing scheme and settlement model should be
adopted, an interesting question arises: should a new industrial organization be
formed to facilitate the adoption of these new pricing schemes and settlement
models?
The following are the possibilities. First, an "external" organization could be formed to
explicitly deal with all the network settlements among network access providers and
infrastructure providers. Second, some form of alliance between network access
providers and infrastructure providers could be formed. For example, an alliance of
infrastructure providers can be formed to standardize the next generation of
infrastructure to facilitate new pricing schemes, or alliances of infrastructure providers
and network access providers can be formed to ensure the smoothest possible
transition.
Inter-Organizational Design
The issues related to Inter-Organizational Design are very similar to those related to
Industrial Organization. The difference is in the way in which we identify the
organizations, and the way in which companies will form the organizations.
The related issue here is how, when network access providers and infrastructure
providers see a need to collaborate so that settlement issues can be resolved more
easily and properly, they will come together and form an organization or alliance: a
standards body, a consortium, a joint venture or a technology web.
Standards
An infrastructure standard that fixes the degree of content awareness, should one be
adopted either by legislation or de facto, would affect the feasibility of implementing
different pricing schemes. Conversely, if different infrastructure standards arise, it
would be difficult to unify pricing schemes and settlement models.
Similarly, standards for pricing users and services could evolve in the future;
whether and when they evolve would depend on government decisions and on the
economics of the network. For instance, since flat-rate pricing makes it difficult for
network access providers to recover their costs, usage-based or congestion pricing
may become standard through the driving force of economics.
Finally, one last interesting issue to be addressed here is that the de facto standard
of TCP/IP (IPv4) actually hinders the development of some pricing schemes, such as
priority pricing, because it has no widely deployed mechanism for differentiating
packet priorities. With ATM and IPv6 under development, it remains an open
question whether TCP/IP (IPv4) is the best protocol for network transport.
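Strictly speaking, IPv4 does define a Type-of-Service byte (later redefined as the
DSCP field), but it was rarely honored end to end, which is why priority pricing
cannot rely on it. As a minimal sketch, on platforms that expose IP_TOS the byte can
be set on an outgoing socket:

    # Minimal sketch: mark outgoing UDP packets with a high-priority ToS
    # value. Whether any router on the path honors the marking is exactly
    # the problem noted above.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)  # "Expedited Forwarding"
    sock.sendto(b"priority traffic", ("127.0.0.1", 9999))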
The ISP scenario in India
ISPs are frantically searching for new sources of revenue in India.17 Indian ISPs are
increasingly pursuing the market for internet-related corporate services, within which
the largest source of revenue could come from running virtual private networks
(VPNs). VPNs link together a company's offices (corporate intranets) as well as its
suppliers and distributors (corporate extranets).
Other sources of revenue involve activities like web hosting and e-commerce.
Access revenues are still the major source as of now, but in the long run ISPs will
have to look for newer revenue sources.
Even existing ISPs like Satyam Infoway are trying to figure out viable revenue
models.
New entrants to the market include HCL Infinet, the ISP arm of infotech major HCL
Infosystems, and new ISP ventures from the Reliance and the Tata group. Caltiger,
which has already shaken up the market with its free access model, claims a
subscriber base of two lakh, less than a year after its launch. Other major new
players could be Wipronet, Wipro's ISP arm, and BPL.net.
These new entrants will pose a major challenge to what can be described as the
original or old set of private ISPs. These include Sify, the Bharti group-controlled
Mantra Online, the C Sivasankaran-controlled Dishnet DSL and, of course, the
granddaddy among ISPs, Videsh Sanchar Nigam Limited (VSNL).
The subscriber base numbers do not as yet add up to any sort of plausible revenue
model. Consider advertising revenue, for instance. In calendar year 1999, adspend
on the internet was between Rs 25 and 30 crore. This is expected to double in 2000,
to all of Rs 60 crore. Advertising-based revenue models are thus not plausible in
India right now.
Adspend on the internet is bound to increase as the number of subscribers and users
increases, but the base is as yet too small. Nasscom estimates that adspend on the
internet will reach Rs 250 crore by 2003.
The search for a viable revenue model has, therefore, become an imperative for
Indian ISPs. This is all the more so since access charges have dropped dramatically.
Dishnet DSL, for instance, offers unlimited access for Rs 250 per month, while those
wishing to access the internet for one hour per day need pay only Rs 99 per month.
Satyam charges Rs 299 per month, and Mantra Online's tariffs are similar. In the
context of falling access charges, much of the interest has focused on Caltiger, the
Calcutta-based ISP, which was the first to introduce free access.
17 __________, Drawing The Bottomline, CORPORATE DOSSIER, Economic Times, Aug 04 - Aug 09 2000
Whether free access is viable is still a matter of debate. According to Merrill Lynch:
"We believe that Caltiger will encounter challenges to its free access business
model. That is because there is limited advertising in India... in our opinion only
revenue sharing between ISPs and telcos will bring about the true advent of free
ISPs in India."
Other debates rage as to whether free ISPs will be able to offer high quality access.
Caltiger officials, however, maintain that their revenue model is not primarily
advertising-based.
"Only 20 per cent of our revenues will come from advertising. 40 per cent will be from
running corporate networks while 30 per cent will be from e-commerce. We will also
get 10 per cent from transferring our proprietary technology for delivering
advertisements to viewers," says R Vishnu Kumar, vice-president North,
Caltiger.com. Caltiger's e-commerce-related revenues will come from integrating
customer relationship management and supply chain management systems. Caltiger
also has a proprietary technology for delivering advertisements to viewers: an ad bar
which stays in place for the whole duration during which a customer is logged on to
Caltiger's web site. Caltiger has transferred this technology to a Hungarian and a Sri
Lankan company for $1 million, and hopes to earn between $3-4 million per annum
by way of royalty payments on this technology. "Because of
our unique ad model we expect to capture a proportion of advertising revenues on
the internet," says Kumar.
However, it is the battle to grab a chunk of the market for internet-related corporate
services which could decide who emerges the winner in the internet sweepstakes.
Corporate services include access provision, VPNs, and application service
provision. Of these, the provision of VPN services could be the most lucrative part of
the corporate services market. According to the Merrill Lynch report, as many as
10,000 corporates in India may wish to set up VPNs. Merrill Lynch assumes five
locations per corporate and a cost of Rs 5,00,000 per location. Given these
assumptions, total spending on VPNs could be as much as Rs 2,500 crore, or $582
million at an exchange rate of Rs 43 to a dollar. Even if the cost per location is
halved to Rs 2.5 lakh, the potential VPN pie adds up to Rs 1,250 crore. These are
potential numbers, because Merrill Lynch estimates that only 500 corporates have
initiated the process of setting up VPN networks.
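The arithmetic behind these figures checks out (1 crore = 10^7):

    # Reproducing the Merrill Lynch estimate quoted above.
    corporates = 10_000           # corporates that may set up VPNs
    locations = 5                 # assumed locations per corporate
    cost_per_location = 500_000   # Rs 5,00,000 (5 lakh) per location
    rs_per_dollar = 43

    total_rs = corporates * locations * cost_per_location
    print(total_rs / 1e7)                            # 2500.0 Rs crore
    print(round(total_rs / rs_per_dollar / 1e6, 1))  # 581.4, i.e. ~$582 million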
The major contenders for the VPN market are likely to be Satyam Infoway, HCL
Infinet, Wipro and BPL. Dishnet DSL, which has been a pioneer in slashing access
rates, is also pitching for the corporate market. Right now pure access provision
accounts for a major portion of its revenue (the company is unwilling to discuss
exact numbers), but corporate services should gain in importance.
"We are well positioned to serve the corporate market by providing value added
services such as VPN, collocation and web hosting. DSL technology provides access
to broadband", says Bill Crawley, senior vice-president sales and marketing, Dishnet
DSL. Digital Subscriber Line technology helps boost the capacity of ordinary copper
lines.
Mumbai-based ISP Pacific Internet expects 60 to 70 per cent of its revenues to
originate from access fees, running VPNs, advertising and web hosting. Despite
falling rates, access continues to be the predominant source of revenue for most
ISPs. Mantra Online, a joint venture between the Bharti group and British Telecom,
presently derives 80 per cent of its revenues from access; the remaining 20 per cent
comes from advertising on its portal. Like all ISPs, Bharti-BT also plans to offer
corporate services.
VSNL, the country's largest ISP, currently derives all its revenues from pure access.
In fiscal 1999-2000, the revenue accruing to its internet business was Rs 275 crore.
VSNL makes a lot of money from providing leased lines and bandwidth to other
ISPs. Among the new entrants, Reliance plans to emerge as an ISP's ISP by leasing
or selling bandwidth to ISPs. While RIL declined to comment for this story, sources
familiar with the company's plans say that its revenues would come mainly from
hiring out bandwidth.
Conclusion
The pricing model in India does seem to be following the Chilean path, if only in the
way prices are crashing. Very soon, pure access pricing is unlikely to bring in viable
revenues. Pricing is going to focus more on usage and the quality of usage. As of
now, precedence and smart market pricing seem to be far away.
Impact on Indian Infrastructure development
We live in a time in which the flow of and access to information is as important as
the flow of and access to goods and services. This project tries to show how access
can be increased in a country like India through appropriate pricing of Internet
services.
In the wake of the increased demand for bandwidth, which is imminent as users
graduate to higher-level, bandwidth-hungry applications, proper pricing will act as an
allocating mechanism. It is this issue that we have explored in this project, by
looking at various pricing models and the contexts in which their use may be
warranted. Given the comfortable position with respect to bandwidth in which we
currently find ourselves, thanks to VSNL's planning, it makes sense from a social
point of view to go in for free access models in order to increase access levels in
the country. But as the project has discussed, this model has its drawbacks,
especially in India, where alternative sources of revenue for ISPs are few.
When bandwidth requirements do catch up with supply, ensuring access may not be
good enough, as access alone would not guarantee usage. As more
bandwidth-hungry applications like internet telephony take off, users may find it
difficult to log on to the net. In such a scenario, the role of pricing becomes all the
more important in ensuring proper usage of the service. It is in this situation that
usage-based systems could be explored; these would ensure efficient and fair
allocation of charges to users.