1. Information Technology in the 21st Century

INTRODUCTION

The basic motivations behind all scientific and technological inventions and discoveries are two: (1) man's inherent desire to live by the principle of least action and (2) man's inherent desire to be a master like nature, for which he quests to know what is there in nature's actions and designs. All the discoveries, from fire to computers, conform to the principle of least action. Man's aim of becoming the creator or master of all has led him to design or redesign himself or herself, which has been manifested in the recent development of clones in the laboratory, in continuing research on high-speed computing, autonomic computing and quantum computing, and in the possible design of intelligent or brainy computers in the near future.
In the field of communication engineering, the trends of development duly conform to these two basic motivations of discoveries and inventions. To achieve all sorts of communication with least action, the developmental phases of communication have proceeded as: connecting geographically separated but location-fixed machines (conventional wired telephones/fax), to connecting geographically separated but movable machines (cordless/mobile phones), to connecting people rather than machines (communication that supports both man and machine mobility, which is personal communication). This is how total wireless communication has become the thrust of tomorrow's communication. In order to achieve nature-like communication, the communication we do in our day-to-day life, the PTN/UTN (Personal Telecommunication Number / Universal Telecommunication Number) has evolved. In existing communication the connection number changes from location to location and from service to service. We have a separate telephone number while at Calcutta from the one we have while at Delhi. This is not the case in natural communication. A person is called by his name whether he is in Calcutta or in Delhi. A person is called or addressed by his unique name whether it is voice communication or letter communication. The basic motivations behind scientific and technological development have thus moved communication research and development onto the footings of TOTAL WIRELESS COMMUNICATION and PTN/UTN, in the combined form of the Personal Communication Network/Service (PCN/PCS). There are several other parameters, including techno-economic and socio-economic aspects, that have caused total wireless communication to become the pillar of tomorrow's communication; to name a few: the lower maintenance cost of wireless, easier upgradation and reconfiguration of wireless networks, easier installation of wireless networks over difficult regions like hills and seas, and avoidance of the threat of theft of the costly copper wire used in wired communication. The only existing disadvantages of wireless communication are the higher initial deployment cost of wireless networks over wired networks and the high error rate probability of wireless links. But over time, once the maturity of wireless technology and its systems is attained, these disadvantages will undoubtedly be past issues. High-speed communication and integrated services are two other important directions of communication technology. High bit rate carriers like SONET and the integrated transport technology ATM are the future power of communication technology.
In the same conformity with the principle of least action and man's earnest desire to be a master of nature, the knowledge age is believed to follow the current information age. The technical capability and the technology are readily available to transform data into knowledge, and that is how the challenge of expanding vision from data to knowledge emerges. Actually, the knowledge age is the next natural consequence of the networked age. In the knowledge age, knowledge workers, knowledge factories, knowledge organizations and the knowledge economy will be the rule of law. The main wealth of the knowledge age will be knowledge rather than any physical wealth. The subject of knowledge management (KM) will therefore be a key issue in the 21st century.
This chapter reviews the growth of computer and communication technologies along with knowledge management, which are all trying to merge with the human axis (Fig. 1) [1], critically analyzes the problems thereon, attempts possible solutions and predicts what is there after the knowledge age.

2. RECENT PROGRESS OF COMPUTER TECHNOLOGIES


Since the inception of the electronic classical computer in the year 1946 under the brand name ENIAC, the computer has undergone four generations. The present age is that of the fifth generation. Hectic research is going on to make brainy computers [2,3]. Worldwide research on optical, chemical and quantum technology is being reported [4-6]. The classical computer was the brainchild of Von Neumann. The classical computer is also known as the serial computer. The problems of the classical computer were twofold:
How to use its power for general-purpose small computing jobs, thereby having a cost-effective solution and raising system productivity.
How to raise its power, performance and capacity to tackle extensive, complex numerical jobs (for example the design of supersonic aircraft, the modeling of global weather, etc.) where, if a serial computer is used, it may take a year to many years to solve the problem.
The solution to the first problem came in the year 1960 with the introduction of the time-sharing multi-user concept. This was based on the philosophy of utilizing the slowness of humans as compared to computers, so that while one user is thinking, the computer can be used by other users (resource sharing by time slice). This provided a means of distributing the cost of computation over many users. The other early solution to the first problem was the batch system, which remained dominant where large amounts of data were processed with minimum human interaction (one operator). But as it was not of the interactive type, it lost out to the time-sharing system. One of the answers to the second problem gave birth to parallel computers, which are the ultimate aim of the fifth generation computing system. A few parallel computers are in operation in the world. Parallel computing is meant to speed up operation. With this in mind the concept
of the optical computer was developed. In optical computers it is light that carries the signals, and in the universe it is light that has the ultimate speed. Accordingly, non-linear optics emerged as the new frontier of science and technology. The other important deviation from the classical computer, which emerged due to technological growth and demand, was the design of brainy computers. The chemical computer is a bold step in formulating the brainy computer. Optical and chemical computers are now merged under a new field of electronics known as molecular electronics.
There are several empirical laws that correlate, govern and predict the technological progress and growth of the last few decades [7-9]. These are:
1. Joy's law, which states that the computing power, expressed in MIPS (Millions of Instructions Per Second), doubles every 2 years,
2. Ruge's law, which estimates that the communication capacity necessary for each MIPS is 0.3-1 Mbps (Millions of Bits Per Second),
3. Metcalfe's law, which states that if there are n computers in a network, the power of the computers in a network like the Internet is multiplied by n square times. The law has been applied in Table (1), which lists the growth of Internet users over several years, assuming the year 1988 as the reference year and assuming that in that year the power of a computer was one unit (used for normalization). In that case the power of a computer over different years would be as shown in the table. Assume that each user on average uses only one computer for world access through the Internet. Applying Metcalfe's law to the lowest extent, i.e. that the power of an individual computer in the Internet is multiplied by the square of the number of users in the Internet, the power of a computer would be as shown in the last column of Table (1); the arithmetic is reproduced in the sketch after the table. From a figure of 0.25 × 10¹² in 1988 to 2433600 × 10¹² in 2000: a 9734400 (about 10⁷) times increase over a gap of only 12 years! What a future is ahead! Super information power or infinite information power! Due to this power, the flexible transport technology ATM and very high rate carriers like SONET/SDH (Table 2), the requirement of any service at any time, anywhere, with a single device and with a single communication number may be possible even through the modest Internet, which was basically designed to carry data only.
Table 1: Trend in Internet/Computer power

Year | Internet users in 10⁶ | Computer power normalized to year 1988 on standalone condition | Computer power on networking in 10¹²
1988 | 0.5 | 1 | 0.25
1989 | 1.3 | 1.5 | 2.535
1990 | 2.4 | 2 | 11.52
1991 | 4.4 | 3 | 58.08
1992 | 8.7 | 4 | 302.76
1993 | 14.8 | 6 | 1314.24
1994 | 26.1 | 8 | 5449.68
1995 | 49.2 | 12 | 29047.68
2000 | 195 | 64 | 2433600
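The arithmetic behind Table 1 can be reproduced in a few lines of Python: the networked power of a computer in a given year is its normalized standalone power multiplied by the square of the number of users. This is a minimal sketch of the computation described in item 3 above, using the user counts and standalone powers of Table 1.

    # Metcalfe's law applied to Table 1: networked power = standalone power x users^2.
    users_in_millions = {1988: 0.5, 1989: 1.3, 1990: 2.4, 1991: 4.4, 1992: 8.7,
                         1993: 14.8, 1994: 26.1, 1995: 49.2, 2000: 195}
    standalone_power = {1988: 1, 1989: 1.5, 1990: 2, 1991: 3, 1992: 4,
                        1993: 6, 1994: 8, 1995: 12, 2000: 64}

    for year, users in users_in_millions.items():
        networked = standalone_power[year] * (users * 1e6) ** 2  # absolute units
        print(year, f"{networked / 1e12:>12.2f} x 10^12")        # matches the last column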


[Figure: computer technology progress (toward the brainy computer) and communication technology progress (toward personal communication) both converge on the human axis over the years.]
Fig. 1: Trends of Computer and Communication Technology.

Table 2: Bit Rates of Digital Hierarchy

North American type | Bit rates | European type | Bit rates | Other type (used predominantly by Japan) | Bit rates
DS0 | 64 Kbps | DS0 | 64 Kbps | |
T1 or DS1 | 1.544 Mbps | E1 | 2.048 Mbps or 2 Mbps | J1 | 1.544 Mbps
T2 or DS2 | 6.312 Mbps (or 4 × 1.5 Mbps) | E2 | 8.448 Mbps or 8 Mbps (4 × 2 Mbps) | J2 | 6.312 Mbps (4 × 1.5 Mbps)
T3 or DS3 | 44.736 Mbps (or 7 × 6 Mbps), sometimes referred to as 45 Mbps | E3 | 34.368 Mbps or 34 Mbps (4 × 8 Mbps) | J3 | 32.064 Mbps (5 × 6 Mbps)
T4 or DS4 | (1) 139.264 Mbps (or 3 × 45 Mbps); (2) 278.176 Mbps (or 6 × 45 Mbps) | E4 | 139.264 Mbps or 140 Mbps (4 × 34 Mbps) | J4 | 97.728 Mbps (3 × 32 Mbps)
 | | E5 | 564.992 Mbps or 565 Mbps (4 × 140 Mbps) | |

4. Moore's laws, which state that (a) the number of components on an IC will double every year (this is the original Moore's law, predicted in 1965 for the then next ten years), (b) the circuit complexity on an IC will double every 18 months (this is known as the revised Moore's law; a numerical sketch of this doubling follows this list), and (c) the processing power of computers will double every year and a half (Moore's second law, which closely resembles Joy's law).
5. The law of Price and Power, which states that over the years the computing, processing, storage and speed-up power of computers will continue to increase whereas the price of computers will continue to fall.
6. For a new law of communication, readers may refer to Appendix-A.
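As a quick numerical illustration of the revised Moore's law (item 4(b) above), the sketch below doubles a transistor count every 18 months; the starting point of 2,300 transistors in 1971 (the first Intel microprocessor) is chosen only for illustration, not taken from the text.

    # Revised Moore's law: circuit complexity on an IC doubles every 18 months.
    def transistors(year, base_year=1971, base_count=2300):
        months = (year - base_year) * 12
        return base_count * 2 ** (months / 18)   # one doubling per 18 months

    for year in (1971, 1978, 1985, 1993, 2000):
        print(year, f"{transistors(year):,.0f}")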
In Table (3), a list of computer generations with power in terms of information processing, storage and speed-up factor is given. It is seen that the first three laws fit well into the list. In pace with the increased processing power in terms of volume and speed, and the wide and flexible use of computers, communication transport technology and transmission media have been developed.
Table 3: Computer power over years
Generation of Intel processors

Processor | Number of transistors in the chip | Word length in bits | Internal bus size in bits | External bus size in bits
8080 | | | |
8088 | | 16 | 16 |
8086 | | 16 | 16 | 16
80286 | 134,000 | 16 | 16 | 16
i386 | 275,000 | 32 | 32 | 32
i486 | 1,600,000 | 32 | 32 | 32
P24T | | 32 | 64 | 32
Pentium | 3,300,000 | 32 | 64 | 64
Celeron | 4,000,000 | 64 | 64 | 64
Pentium Pro | 5,500,000 | 64 | 64 | 64
Pentium with MMX (multimedia) Technology | 4,500,000 | 64 | 32 | 64
Pentium II | 7,500,000 | 64 | 64 | 64

In chip-level integration till date, Moore's laws say the last word. From SSI to ULSI, the trend set by Moore's law (Table 4) has been followed. But beyond ULSI, what is there? Extrapolation of the trend predicts that the future will be the age of molecular dimensions, inherited by the already established subject of molecular electronics, which is based on organic materials rather than inorganic semiconductors. Beyond ULSI, further integration on a chip will face serious problems from physical constraints like the quantum effect. This may lead to the death of Moore's law. But another interesting dimension may be added to the cause of the death of Moore's law. This is based on the law of Price and Power. It is said that: "The price per transistor will bottom out sometime between 2003 and 2005. From that point on, there will be no economic point in making transistors smaller. So Moore's law ends in a few years." In fact, economics may constrain Moore's law before physics does.
Table 4: Generation of IC integration

Generation | Number of components
Small Scale Integration (SSI) | 2-64
Medium Scale Integration (MSI) | 64-2,000
Large Scale Integration (LSI) | 2,000-64,000
Very Large Scale Integration (VLSI) | 64,000-2,000,000
Ultra Large Scale Integration (ULSI) | 2,000,000-100,000,000


2.1 Newer Technologies


To go beyond the conventional laws mentioned above, computer technology has taken a few new directions: (1) the recently reported Intel Terahertz Transistor, (2) molecular electronics, (3) autonomic computers and (4) quantum computers.
2.1.1 Terahertz Transistor
The Intel terahertz transistor is reported to be a new method of making transistors with a new class of materials, to overcome the problems of heat dissipation and the quantum effect. This transistor will save power and enable miniature chips. The transistors will stay cooler and be smaller in size but with faster operating speed. This is made possible as the new method is based on an innovative design that eliminates leakage. It is believed that by 2007 this transistor will be available in the market.
2.1.2 Molecular Electronics
The subject of molecular electronics emerged as an important area of research and application during the 1980s [10]. The definition of molecular electronics is not unique and simple. Even within a country, scientists differ. According to a leading scientist of the field [11], molecular electronics can be divided into two main themes: molecular materials for electronics (MME) and molecular scale electronics (MSE). The topic of molecular materials for electronics deals with the use of macroscopic properties of organic materials in devices, and includes current and near-term applications. In the near term it seems likely that conductive polymers will offer the prospect of novel electronic devices and that organic materials with pronounced non-linear optical properties will find application in opto-electronics. A simplistic extrapolation of the reduction of device size leads eventually to the molecular scale, i.e. molecular scale electronics. Prof. Bloor further observed [12] that many regard the quest for molecular scale devices as true molecular electronics. However, it can be argued that the distinction between MME and MSE is somewhat arbitrary and that both need to be considered as constituent parts of molecular electronics if the topic is to grow and prosper. Ashwell, Sage and Trundle [13] noted that its definition has broadened from electronics at the molecular level to include molecular materials with potential electronic and photonic applications. Peterson defined that, in the most general sense, molecular electronics covers the use of molecular (and hence essentially organic) materials to perform signal processing or transformation functions. However, the famous Link programme of Britain defines molecular electronics as [14] the systematic exploitation of molecular, including macromolecular, materials in electronics and related areas such as opto-electronics.
Molecular electronics is therefore to explore the potential applications of organic materials and non-linear optics in the field of electronics. It is a highly interdisciplinary field, and its prospects lie in the successful interaction and co-operation of scientists of different fields like biology, chemistry, computing, physics and electronics.
2.1.2.1 History of molecular electronics
Historically, the concept of molecular electronics dates back to the last century. The familiar example is the use of organic materials in displays. The liquid crystal display found in watches, calculators and TV sets was patented over fifty years ago [3]. As Prof. Bloor pointed out [15], molecules exhibit great variety in their structure and properties, from simple diatomic species through to very large synthetic and bio-macro-molecules. It is not surprising, therefore, that molecules can be found that possess unique combinations of properties which find application in the fields of electronics and opto-electronics. This idea has stimulated work on MME since the 1950s. The reduction of the size of active electronic devices compounds problems in regard to quantum effects. At this juncture, molecular electronics, the application of molecular materials in electronics, started exploiting some of the new advanced technologies that may be beyond the scope of the silicon chip. Prof. Bloor explained [16-17] that the continuing development of silicon micro-electronic devices of smaller size and greater complexity has brought more compact and powerful instrumentation and computing facilities into the laboratory and office. Though silicon technology holds a dominant position, the continuing reduction in the dimensions of an individual device creates problems at both the fundamental and systems levels. On the one hand, quantum effects must ultimately come into play; on the other, problems of power dissipation and the design of testable architectures are already with us. These pressures lead inevitably to a search for alternatives to current technology that can offer prospects for the realization of devices with even higher densities of active components. MSE is one avenue which is being explored with these targets in mind.
The research and interest in molecular electronics were mainly initiated by the late Forrest Carter, who conducted a series of international conferences on molecular electronics [18-20] in the 1980s. Prof. Bloor wrote [21] that organic solids have attracted the interest of materials scientists and solid-state physicists since the 1950s, both as alternative semiconductors and because of their optical properties. Strong research groups grew up in the USA, Russia, Germany and France at this time.
Although the progress of molecular electronics has not always been smooth, the prospects for the future are good. In this article, we shall review the present position and future aspects of molecular electronics.
2.1.2.2 Molecular Materials for Electronics (MME/M2E)
The study of MME is to see the use of molecular materials in key and active roles in electronic and opto-electronic devices and systems. It is based on the understanding and use of macroscopic properties of bulk molecular materials, i.e. of organic materials. The main categories of MME are [22]:
Organic semiconductors and metals
Liquid crystalline materials
Piezo/pyro-electric materials
Photo/Electro-chromic materials
Non-linear optical materials/photonics.
Organic Semiconductors and Applications
Organic semiconductors and metals have been much less studied than their inorganic counterparts, but under MME a good body of study is gradually emerging. The major applications of organic semiconductors are in (1) electronic active devices and (2) xerography.
Before going to organic semiconductors, therefore, the processes in amorphous materials need to be studied. What are amorphous materials? In a crystal, atoms or molecules are arranged in a regular structure with periodicity. In amorphous materials there is no such ordered structure.
The development of electronic devices in the last few decades was tremendous because the electrical conductivity of crystalline semiconductors such as silicon can be controlled over many orders of magnitude by doping. But [23] there are a number of areas where the expense of preparing these crystals, and the limited size to which they can be grown (at present about 25 cm in diameter), have prevented any very large-area applications. For example, crystalline silicon solar cells are widely used in space vehicles for converting sunlight into electrical power, but the economics of their production is such that their use here on earth is relatively limited. Silicon can be prepared very cheaply in large areas by vacuum evaporation or by sputtering, but the material is then amorphous rather than crystalline. Since the work on doping amorphous silicon (a-Si) was published, there has been considerable research into and development of this material, leading to a number of commercial products. Table 1 [46] shows a progress list.
MME makes a study of electronic processes, as distinct from ionic processes, in organic crystals. What are organic crystals? By organic we usually mean a compound containing carbon. Almost 90% of the 2 million compounds known to us are organic. But for MME, there are choices and limitations that need careful study.
Till today organic materials have not proved to be a real competitor to silicon/inorganic materials in terms of active electronic devices. However, during the last five years the progress in the synthesis of high-purity semiconducting polymers and oligomers has been noteworthy. Experiments showed that conductive polymers could be employed as either the metallic or the semiconducting component of metal-semiconductor junction devices [14]. Semiconducting polymers can be used to produce Schottky diodes [6], in which temperature-dependent properties of the polymer have been observed, with rectifying behavior at room temperature changing to ohmic behavior above 100°C [15].
Burroughes et al. first reported an active polymer transistor in 1988 [16,17]. The important characteristics of this device were: (1) no chemical doping or side reactions, and (2) the characteristics of the polymer device were insensitive to disorder. But the major disadvantage of the device was that its maximum operating frequency was limited. This is because the carrier mobility in the amorphous polyacetylene layer is very low. The mobilities of electrons in semiconducting polymers, amorphous silicon and crystalline silicon are of the order of 10⁻⁴, 1 and 10³ cm²/Vs respectively. One can see the large gap between the properties of polymers and silicon. However, a dramatic lead was taken by Francis Garnier and co-workers [18-19]. They reported a totally organic transistor. This transistor is known as the thin film transistor (TFT) or organic FET. This transistor is a metal-insulator-semiconductor structure comprising an oxidized silicon substrate and a semiconducting polymer layer. It has greater flexibility and can even function when it is bent (disorder is acceptable). The operating speed is still poor. The problem of the low carrier mobility of the insulating polymer is under active research.
Diodes made of organic semiconductors with rectification ratios in excess of 10³ have been reported in [23]; light emitting diodes made of organic semiconductors with external quantum efficiencies in excess of 1% photons per electron are reported in [16-22]; and organic photovoltaic cells are reported in [19-22]. However, within a short period, rapid progress has been observed in the use of semiconducting polymers and oligomers in electronic devices. If this progress is maintained, in the near future they could be competitive with silicon.
The field of optical computation started with the search for a bi-stable optical switch based on the non-linear optical properties of materials. Non-linearity can be exploited in devices basically by two techniques: frequency conversion and refractive index modulation. The frequency conversion technique, which is due to second order non-linearity, may be used for second harmonic generation, frequency mixing, parametric amplification, etc. Refractive index modulation, particularly the Kerr effect, which is due to third order non-linearity, may be used for optical bi-stable switches and parallel processing. Till date a few optical gates and all-optical bi-stable switches have been reported, but the field is still confined to the laboratories. Yet optical computation is a promising field.
Optical computing and processing of information are important applications of photonics. The gain in photonic switching speed (of the order of femtoseconds, 10⁻¹⁵ s) is many orders of magnitude over that of electronic switching. Optical processing is free from interference from electrical or magnetic sources. Based on the prospect of three-dimensional interconnectivity between sources and receptors of light, concepts of optical neural networks that mimic the fuzzy algorithms by which learning takes place in the brain have been proposed, and experimentation has begun. Integrated optical circuits, which are the photonic counterparts of electrical circuits, can provide for various logic, memory and multiplexing operations. Utilizing non-linear optical effects, analogs of transistors, or optical bistable devices with which light controls light, have also been demonstrated [23]. So far as NLO (non-linear optical) materials are concerned, all materials, in the form of gases, liquids or solids, exhibit NLO phenomena. However, broadly we can define two classes of NLO materials: (1) molecular materials or organic materials, which consist of chemically bonded molecular units that interact in the bulk through weak van der Waals interactions, and (2) bulk materials and traditional inorganic materials. Today, rapid progress and research in organic NLO materials has proved attractive. NLO devices utilize two different techniques: frequency conversion and refractive index modulation. Based on the latter effects, the development of frequency converters and light modulators has been reported in [23]. However, organic materials are seen to be quite attractive for electro-optic light modulation, as their low-frequency dielectric constant is quite low, leading to a small RC time constant and thus permitting a higher bandwidth for light modulation compared to that achievable using inorganic materials.
The application of second order non-linearity requires that the crystal must not have a centrosymmetric structure. In a centrosymmetric structure the non-linearities, which are vectorial, cancel each other to give zero macroscopic effect. This is a stumbling block in the progress of applications of second order non-linearity. To solve the problem, two approaches are being examined:
1. Use of LB films with either alternating layers of a polar molecule or molecules which inherently form polar multi-layers,
2. Inclusion of non-linear optically active molecules in polymer films which are poled with an applied electric field.
Put simply, a bulk material whose molecules are non-centrosymmetric in nature may be defined as anisotropically oriented over volumes measured in cm³. These conditions are best achieved by growing a crystal. The Langmuir-Blodgett (LB) technique is a comparably high-tech organic fabrication method, appropriate when the implementation of the function requires a high degree of molecular anisotropy in an extremely thin layer of uniform thickness. For OICs, particularly for signal processing, the L-B technique offers the possibility of orienting molecules within a thin layer of highly precise thickness. It has thus become an attraction. However, films are not the final answer. There are many drawbacks with films, namely mechanical softness, limited high temperature range and extremely slow rate of deposition. But rapid research is going on in L-B film technology and its application to molecular electronics materials, both for MME and MSE.


2.1.2.3 Molecular Scale Electronics (MSE)
The quest for ever-decreasing size but more complex electronic components with high-speed capability gave birth to MSE. The concept that molecules may be designed to operate as self-contained devices was put forward by Carter, and he proposed some molecular analogues of conventional electronic switches, gates and connections [9]. Accordingly, Aviram and Ratner first advanced the idea of a molecular P-N junction. MSE is a simple extrapolation of IC scaling.
Scaling is an attractive technology. The scaling of FET and MOS transistors is more rigorous and well defined than that of bipolar transistors. But there are problems in the scaling of silicon technology. In scaling, on the one hand propagation delay should be minimum and packing density should be high; on the other hand, these should not come at the expense of the power dissipated. With these scaling rules in mind, the scaling technology of silicon is set to reach a limit. Another problem is that scaling is constrained by the quantum nature of physics. At this juncture, molecular scale electronics offers an alternative route to scaling.
Dr. Barker reported in [9] that charge, spin, conformation, color, reactivity and lock-and-key recognition are just a few examples of molecular properties which might be useful for representing and transforming logical information. To be useful, molecular scale logic will have to function close to the information theoretic limit of one bit on one carrier. Experimental practicalities suggest that it will be easiest to construct regular molecular arrays, preferably by chemical and physical self-organization. This suggests that the natural logic architectures should be cellular automata: regular arrays of locally connected finite state machines, where the state of each molecule might be represented by color or by conformation. Schemes such as spectral hole burning already exist for storing and retrieving information in molecular arrays using light. The general problem of interfacing to a molecular system remains problematic. Molecular structures may be the first to take practical advantage of novel logic concepts such as emergent computation and floating architecture, in which computation is viewed as a self-organizing process in a fluid-like medium.
MSE spans several disciplines and requires the co-ordination of scientists of different groups if the subject is to grow and prosper, based on the cross-fertilization of ideas from different subjects.
But the problem is: how can the properties of individual molecules and/or small aggregates be studied? Fortunately, day by day we are evolving new techniques and methods to tackle this problem. At present we have technologies like the STM (scanning tunneling microscope), AFM (atomic force microscope) and NFOM (near field optical microscope). In addition, sub-micron lithography, L-B films and adsorption/reaction in 2D/3D are also there. The L-B technique is particularly important because it provides one of the few ways of making separate electrical connections to the two ends of a molecule. A very good illustration of molecular electronics logic and architecture can be seen in [10].
2.1.2.4 Bio/Chemical Computer
A radically new information processing system is being thought of, in which organic cells or bacteria act as the basic elements. Living organisms are made of organic compounds; as such, thinking functions can be easily realized in such a system. As scaling will be at the biological level, very high-density circuits can be achieved. Our average brain comprises 10¹¹ neurons, ranging in size from 0.2 mm linear dimension to about 100 mm, each with an average connectivity of 10⁴, giving a crude bit-count of 10¹¹ to 10¹⁵. An equivalent artificial brain may therefore be of similarly dense circuitry. Enzymes and proteins are being studied. We should not forget that an example of a natural molecular device is the bacterial photo-reaction center. Recent research to produce analogues has been successful through the synthesis of single and complex molecules which release charge on photo-excitation.
The subject of molecular electronics has moved from conjecture to experimental study and scientific development. With the rapid growth of research and development on liquid crystals, polymers, L-B films and NLO materials, molecular electronics is now with us. With advances in Physics, Chemistry, Materials Science, Biology and Engineering, our understanding of molecular materials at both macroscopic and microscopic levels will grow, and the field of molecular electronics will prosper. A better understanding of natural systems, processes and living organisms will enhance the capability and potential of molecular electronics, particularly in terms of its application in radically new computational machines and engineering. Much more work remains to be done. It needs scientific, intellectual and technological effort on the one hand, and Government and industrial support on the other. The progress of all these will determine whether molecular electronics will actually succeed and, if so, when. But research in molecular electronics and its device technology will emerge as an exciting frontier field of science and technology in the current century.
Molecular electronics is a revolutionary idea. To attain maximum miniaturization, it is proposed that instead of using transistor states, namely ON and OFF, to implement 1s and 0s, the characteristics of electrons may be used for the same purpose. For example, the positive and the negative spin may respectively be used to implement 1s and 0s. The idea is new. It will take a lot of time to mature and to develop the technology. This will be the last resort of miniaturization. Molecular electronics is believed to be based on the new organic material technology that may lead to the bio or chemical computer discussed in section 2.1.2.4 above.
However, while the above new technologies aim to attain miniaturization in line with and/or beyond Moore's law, autonomic computing technology aims at the economic aspect of technology.
2.1.2.5 Autonomic Computing
Consider the computing paradigms of the Internet. Fig. 2 and Fig. 3 show the exponential growth of Internet users and of Information Technology. The need for a huge number of technologists to keep the Internet running without much disruption of services is therefore understood. A statistic says: At current rates of expansion, there will not be enough skilled IT people to keep the world's computing systems running. Even in uncertain economic times, demand for skilled IT workers is expected to increase by over 100 percent in the next six years.


[Figure: Growth Rate of Internet Users. A chart of Internet users (in thousands, up to 250,000) for India, the USA and the UK over the years 1997, 2002 and 2004.]
Fig. 2: Growth of Internet Users.


[Figure: IT as % share of GDP in India (source: NASSCOM): 1.22 (1997-98), 1.45 (1998-99), 1.87 (1999-00), 2.66 (2000-01), 2.87 (2001-02), 3.15 (2002-03).]
Fig. 3: IT growth related to economy.

Under such a scenario, it is not difficult to believe that there might be an exponential relationship between the growing complexity and power of computing systems and the technical manpower required to manage and administer them. A new paradigm to relieve humans of the burden of managing, administering and maintaining computer systems, thereby passing these tasks back to computers, is to design computers that help themselves, now known as Autonomic Computers. Consider how we humans act when we face problems. When we are physically attacked, we protect ourselves. This solution uses a biological metaphor. Just as the autonomic nervous system of our bodies monitors, regulates, controls, repairs and responds to hazardous conditions without any conscious effort on our part, so should autonomic computer systems. Autonomic computers are to self-control, self-monitor, self-regulate, self-repair and respond to problematic conditions, again without any conscious effort by humans.


Autonomic computing technology is therefore a major deviation from conventional rules like Moore's law. The aim is not to attain more complex, more integrated, more powerful computers, but self-healing computers that will be economic in terms of maintenance and operation.
The key characteristics of an autonomic computer system are:
They should be able to fix failures, and be able to configure and reconfigure themselves under varying, undefined and unpredictable conditions, so that they prevent system freezes and crashes
The systems should know themselves fully and comprise components with proper identity
The systems should always work in optimized condition and adapt themselves to varying conditions
The systems should be self-healing, self-correcting and capable of recovering from common, routine and extraordinary, known and unknown events that might cause some of their parts to malfunction or crash
The systems should be self-protective against unwanted intrusion
The systems should be expert in knowing their environment and the surrounding activity, and act accordingly in order to ease recovery from crashes and aid interoperation
The systems should adhere to open standards to ensure interoperability among myriad devices
The systems should, better still, prevent failures in the first place
The systems should optimize resources in anticipation while keeping their operation hidden from users.
The self-managed computers will have four major components (Fig. 4):
Self optimized: components and devices of the system will automatically and continually check their performance and seek to improve it
Self configurable: components and systems will automatically configure and reconfigure to make required adjustments seamlessly
Self healing: the system will automatically detect and repair localized problems
Self protected: the system will automatically protect itself from intentional attacks

[Figure: the self-managed/autonomous computer at the center, surrounded by its four properties: self healing, self optimized, self configurable and self protected.]
Fig. 4: Autonomous Computer.
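The behavior above amounts to a closed monitoring-and-repair loop around each managed resource. A minimal Python sketch of such a loop follows; the resource class, its health check and its restart action are hypothetical placeholders chosen for illustration, not part of any real autonomic product.

    import time

    # A toy autonomic control loop: monitor a resource and, when its health
    # check fails, heal it automatically without human intervention.
    class ManagedResource:
        def __init__(self):
            self.healthy = True

        def check_health(self):   # self-monitoring
            return self.healthy

        def restart(self):        # self-healing action
            self.healthy = True

    def autonomic_loop(resource, cycles=3, period=1.0):
        for _ in range(cycles):
            if not resource.check_health():   # monitor and analyze
                resource.restart()            # plan and execute a repair
                print("fault detected: resource restarted")
            else:
                print("resource healthy")
            time.sleep(period)

    r = ManagedResource()
    r.healthy = False    # inject a fault to show the loop healing it
    autonomic_loop(r)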

2.1.2.6 Quantum Computing


Conventional computing is based on the concept of bits. A bit in classical computation may have two possible states, 0 and 1. The fundamental concept of quantum computing is the quantum bit, referred to as the qubit. The two basis states of a qubit are |0> and |1>. Unlike the binary bits of classical computing, all possible superpositions of qubit states are allowed. A two-qubit system therefore has four computational basis states, namely |00>, |01>, |10> and |11>. With Moore's law saturating, it is expected that quantum computers will be one of the future solutions for high-speed and high-power computing. Some theoretical work has been reported, but practical implementation is yet to be achieved.
However, an important milestone in the application of quantum techniques has already been achieved in the area of data security, due to the pioneering work of Bennett et al. on quantum cryptography.
BOX 1
Quantum Computing: a bit review
QUANTUM GATES
Information processing in quantum computing has a component of qubit manipulation. Qubit manipulation is performed by unitary operations. A quantum logic gate is a device that performs a particular unitary operation on the selected qubits at a given time. There are infinitely many single-qubit quantum gates, unlike the only two (the identity and the logical NOT) of classical information. The quantum NOT gate takes |0> to |1> and vice versa, analogous to the classical NOT. Two-qubit quantum gates perform many possible unitary operations, an interesting subset of which is |0><0| ⊗ I + |1><1| ⊗ U, where I is the single-qubit identity operation and U is some other single-qubit gate. Such gates are called controlled gates, as the action of I or U on the second qubit is controlled by whether the first qubit is in state |0> or |1>. This defines the controlled NOT, or CNOT, gate as:
|00> → |00>
|01> → |01>
|10> → |11>
|11> → |10>
This shows that: (a) the second qubit undergoes NOT if and only if the first qubit is in state |1> (Fig. 1); (b) the effect of CNOT on the state |x>|y> may be written as x → x, y → x ⊕ y, for which reason this gate is also called the XOR gate (Fig. 1).
[Figure: two horizontal qubit lines; the control (o-wire) input X passes through unchanged (Xo = X), while the target (x-wire) output is Yo.]
Notes: (a) The x-wire means NOT, controlled by the o-wire.
(b) Each horizontal line represents a single qubit evolving in time from left to right. A symbol on a line represents a single-qubit gate.
(c) A vertical line connects two or more qubits. Symbols on two qubits connected by a vertical line represent a two-qubit gate on those two qubits.
CNOT GATE: The output Yo at the x-wire is controlled by the input X of the o-wire. When the input to the o-wire is |1>, the output Yo of the x-wire is the NOT of its input state Y.
XOR GATE: Whatever the first qubit, the output of the second qubit at the x-wire is always the XOR of the two input qubits.
Fig. 1: CNOT/XOR gate.
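To make the gate action concrete, the following sketch builds the 4 × 4 CNOT matrix and applies it to the four computational basis states. It is a plain linear-algebra illustration of the mapping above, not tied to any particular quantum library.

    import numpy as np

    # CNOT as a 4x4 unitary on the basis |00>, |01>, |10>, |11>:
    # the second (target) qubit is flipped exactly when the first (control) qubit is 1.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    basis = {"|00>": [1, 0, 0, 0], "|01>": [0, 1, 0, 0],
             "|10>": [0, 0, 1, 0], "|11>": [0, 0, 0, 1]}
    labels = list(basis)

    for name, vec in basis.items():
        out = CNOT @ np.array(vec)
        print(name, "->", labels[int(np.argmax(out))])
    # Prints |00> -> |00>, |01> -> |01>, |10> -> |11>, |11> -> |10>, as in the text.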


Other logical operations require additional qubits. The most popular three-qubit gate is the Controlled-Controlled NOT gate, CCN or C²NOT gate (Fig. 2). This gate is also known as the Toffoli gate; Toffoli demonstrated that its classical version is universal for classical reversible computation. A gate is reversible when, for a given output, one can reconstruct the input(s). The output of the gate on the target wire can be described as follows: (a) if the third qubit is in state |0>, the output is the AND of the two other qubits; the effect on the input state |x>|y>|0> is x → x, y → y, with output x.y; (b) in general, the effect on the input state |x>|y>|z> is that the third output is z ⊕ (x.y); (c) the effect on |1>|1>|z> is that the output is the NOT of z.
[Figure: three horizontal qubit lines; the two control wires carry X and Y through unchanged (Xo = X, Yo = Y), and the target output is Zo = Z ⊕ (X.Y).]
Note: the toggle symbol on the target wire means a toggle control; other conventions as in Fig. 1.
Fig. 2: Controlled-Controlled NOT/CCN gate.

It has been argued that any logic circuit can be made of CN and CCN gates alone. For example, Fig. 3 illustrates a half adder circuit.

[Figure: inputs X and Y with an ancilla; a CCN gate generates the carry and a CN gate generates the sum. The outputs are Xo = X, the sum X ⊕ Y (sum generation) and the carry X.Y (carry generation).]
Fig. 3: Half adder using CN and CCN gates.
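A classical, bitwise simulation of this reversible half adder takes only a few lines of Python; the sketch below applies a Toffoli (CCN) and then a CNOT (CN) to every input combination and reads off sum and carry. It illustrates the reversible-logic construction only, not any particular hardware.

    # Reversible half adder on bits (x, y, ancilla z = 0):
    # the Toffoli writes the carry x.y into z, then the CNOT writes the sum x XOR y into y.
    def toffoli(x, y, z):
        return x, y, z ^ (x & y)   # CCN: target toggled when both controls are 1

    def cnot(x, y):
        return x, x ^ y            # CN: target toggled when the control is 1

    for x in (0, 1):
        for y in (0, 1):
            a, b, carry = toffoli(x, y, 0)
            a, s = cnot(a, b)
            print(f"x={x} y={y}  sum={s} carry={carry}")
    # Output matches binary addition: 1 + 1 gives sum 0, carry 1.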

Table 2
Superposition: In general this means that two things can overlap with each other without interfering with each other. In quantum mechanics two electrons can overlap with each other, making a combined waveform that is a set of amplitude probabilities.
Principal ideas of quantum physics: Energy comes in discrete units. Photons are each a discrete bundle of energy. A photon of characteristic frequency ν carries a quantum of energy equal to h·ν, where h is Planck's constant. Particles in quantum physics behave both as particles and as waves. The state vector of a particle obeys the Schrodinger wave equation.
Uncertainty principle: It is impossible to measure both the position and the momentum of a particle at the same time. The more accurately one is measured, the less precisely the other is known.
Entanglement: With entanglement, systems are correlated in a way that does not involve force, and the restriction of the speed of light is not applicable.

QUANTUM TELEPORTATION
Teleportation is a process by which an object or person, while physically remaining present in one place, is made to appear as a perfect replica somewhere else. The classical or conventional approach to teleportation is illustrated in Fig. 4. The fax machine is an example of a teleportation machine. Till recently, quantum teleportation was assumed impossible, as it would violate the uncertainty principle of quantum mechanics. The uncertainty principle prohibits any scanning or measuring process from extracting all the information in an atom or similar object: the more accurately an object is scanned, the more it is disturbed, which may ultimately lead to a complete change of the original state of the object even before the whole of the information is extracted to make a perfect replica of the original. But quantum mechanics has an aspect known as entanglement. If an outside force is applied to two atoms, entanglement can occur, whereby the second atom takes on properties of the first atom. Thus, if left alone, an atom will spin in all directions; but the instant it is disturbed it chooses one spin, or one value, and at the same time the second entangled atom will choose the opposite spin, or value. This allows learning the value of qubits without actually looking at them, which would collapse them back into 1s or 0s.
[Figure: Sending Station: the original object A, physically present at location P, is scanned or processed and the data is sent; the original A remains intact at the sending location. Receiving Station: at a location Q away from P, treatment is applied to raw material and a replica of the original object A is generated/received.]
Fig. 4: Classical Teleportation/FAX.

The property of EPR (Einstein-Podolsky-Rosen) correlation, or entanglement, has made quantum teleportation possible, overcoming the hurdle of the uncertainty principle. Fig. (5) illustrates quantum teleportation. In the process, part of the information of the original object is scanned out. The unscanned part of the information is passed via the EPR effect into another object C. The object C was never in contact with the original object A. The intermediary object, or delivery vehicle, B conveyed the unscanned part of the information from A to C. It is now possible to apply treatment to C to make it exactly as A was before A was disrupted by the scanning process. So a real transportation is achieved in C, rather than a replica.
[Figure: Sending Station: the original object A, physically present at location P, is scanned or processed; the unscanned information passes via the entangled pair B and C, with the intermediary object B acting as the delivery vehicle; the data is sent and the original A becomes completely disrupted. Receiving Station: at a location Q away from P, treatment is applied and the object A is recreated at C.]
Fig. 5: Quantum teleportation.

QUANTUM CRYPTOGRAPHY
The disadvantage of key distribution in secret key cryptography can be removed with the aid of quantum technology. If the key distribution problem is solved, the Vernam technique will be the best technique for security. In order to solve the distribution problem, the use of a quantum channel for sending information about the key is being explored. In quantum mechanics one cannot measure something without causing noise to another related parameter. For example, Heisenberg's uncertainty principle states that Δx·Δp is bounded by a constant; thus if Δx is changed, Δp is bound to change. An ideal quantum channel supports the transportation of single photons. Thus a single photon can represent a bit 0 (zero) or 1 (one). The phase or state of polarization of the photon may be used for identifying the 0 or 1. For example, photons with 0° and 90° polarization may be treated as bit 0, and photons with 45° and 135° (also known as -45°) polarization may be assumed to be bit 1. Data security through quantum channels is under active research in the UK and USA. Some positive breakthroughs have been made by Charles Bennett of IBM Research at Yorktown Heights, New York, and by Gilles Brassard at the University of Montreal.
If, in the example discussed earlier, Alice wants to send Bob the secret key as required in the Vernam cipher, she can send the key, say of N bits, through quantum channels. Bob will be instructed by Alice to detect the photons (bits) from the quantum channel starting from a given time. There may be some transmission loss, and Bob may be able to detect only some fraction of the photons or bits. Bob will have to inform Alice over a telephone as to which photons he has seen. In this way, they may share a common yet variable key. For instance, if Alice sends 11110000 as the key, and Bob replies that he has seen the first, seventh and eighth photons (counting from the leftmost bit), then their common key shall be 100.
Alice can send data haphazardly using differently polarized photons. Alice can do so (Fig. 6) either on the rectilinear basis, where a horizontally polarized photon represents a 0 and a vertically polarized photon represents a 1, or on the diagonal basis, where a -45° polarized photon represents a 0 and a +45° polarized photon represents a 1.

[Figure: example polarization directions assigned to bit 1 and bit 0.]
Fig. 6: Use of polarization for representing 1s and 0s, typically.

Alice haphazardly uses both bases to send qubits (Fig. 7). Bob will haphazardly try to filter out the qubits. For the purpose of qubit detection, Bob will use a polarization beam splitter. The polarization beam splitter is a device that allows photons of orthogonal polarization to pass through but shunts photons of other polarizations. The quantum nature dictates that: (a) a beam splitter of the same basis will pass the received same-basis polarized photons, but (b) the rectilinear beam splitter will pass received diagonally polarized photons as either vertically or horizontally polarized photons with equal probability, and the diagonal beam splitter will pass received rectilinearly polarized photons as either +45° or -45° polarized photons with equal probability. This produces different combinations of Alice's sent photons and Bob's detected photons. Therefore, when both Alice and Bob use the splitter on the same basis they will correctly communicate qubits, but when they use different bases, the chance of matching between sent and received qubits is 50%. Bob now tells Alice (over a conventional method, say the telephone, as there is no need to keep this secret) how he used the beam splitter to detect the received qubits. Assume Bob's choices were rectilinear, rectilinear, diagonal, rectilinear, diagonal (Fig. 7). Bob does not announce the results of his detection. Alice replies publicly (meaning over a conventional method, as there is no need to keep this secret) to Bob, indicating at which times her choice of basis matched Bob's. Then they use the qubits of those instants when they used the same basis (at those instants they correctly communicated the bits), and ignore the bits of the other instants. The matching bits (Fig. 7) generate the secret key for the session.

[Figure, in four parts:
(a) Alice sends qubits to Bob randomly (only 5 qubits are taken for illustration), using the bases: Rectilinear, Diagonal, Rectilinear, Rectilinear, Diagonal.
(b) Bob measures the received photons using random polarization bases: Rectilinear, Rectilinear, Diagonal, Rectilinear, Diagonal. Positions 1, 4 and 5 use the same basis (correctly detected by Bob); positions 2 and 3 use different bases (uncertain).
(c) Alice and Bob communicate and identify the locations where they used the same polarization basis, comparing (a) with (b); but they keep secret the polarizations of the sent or received photons.
(d) The correct bits are taken for the key; bits at the other positions are ignored. So the key in this example is 111.]
Fig. 7: Key exchange between Alice and Bob.
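The basis-sifting step of this exchange is easy to simulate classically. The short Python sketch below generates random bits and bases for Alice and random measurement bases for Bob, then keeps only the positions where the bases match; it models the sifting logic only, not real photon transmission.

    import random

    # Simulate the sifting step of the quantum key exchange described above.
    n = 16
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("RD") for _ in range(n)]   # R: rectilinear, D: diagonal
    bob_bases   = [random.choice("RD") for _ in range(n)]

    bob_bits = []
    for bit, a_base, b_base in zip(alice_bits, alice_bases, bob_bases):
        if a_base == b_base:
            bob_bits.append(bit)                   # same basis: detected correctly
        else:
            bob_bits.append(random.randint(0, 1))  # different basis: 50% random result

    # Publicly compare bases only (never the bits) and keep the matching positions.
    key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
    print("sifted key:", "".join(map(str, key)))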

Should any eavesdropper attempt to intercept the photon transmission, there shall be garbage in the key accepted by Alice and Bob. This is because quantum theory ensures that an intercepted photon cannot be retransmitted without changing the phase of the photon. Therefore, a change in the polarity of the photons will let Alice and Bob immediately know of an interception. In the scheme of sending information at the one-photon-per-bit level proposed by the IBM and University of Montreal researchers, to send the key the transmitter (Alice) tells the receiver (Bob) that she plans to send n bits (photons) starting at a given time. Alice then sends the bits by randomly switching the phase in the transmitter between 0° and 180°; this switches the output in the receiver between 0 and 1. Although transmission and detection losses mean that Bob will only see a small fraction of the photons, he can use a classical communication channel (the telephone, for example) to tell Alice which photons he has seen, but not which detector he has seen them in. This allows Alice and Bob to share the same random number. For example, Alice uses ten photons to send the random number 1001011101; Bob replies that he only received the second, fifth and last photons; therefore they have shared the random number 001.
However, it is conceivable that an eavesdropper could intercept the signal, copy Alice's message, and send it on to Bob without either Alice or Bob realizing. One way to overcome this, and ensure absolute security, is for both the transmitter and receiver to use non-orthogonal measurement bases. In other words, Alice sends part of the message by switching the transmitter phase between 90° and 270°, say, and another part by switching between 0° and 180°. When Bob and Alice are using the same basis, the system works as before. However, if Alice is using 0°/180° and Bob is using 90°/270° (or vice versa), the message is meaningless: a photon that Alice sends as a 0 has a 50% chance of being received as a 1, and vice versa. Therefore, when Bob tells Alice which photons he has received, he now also says which basis he was using, and Alice must tell him if that is a valid photon (i.e. one which was sent and received when they were both using the same basis). Paul Townsend of British Telecom, working with the Malvern group, recently demonstrated self-interference of short light pulses, containing on average 0.1 photons, down 10 km of standard communications fiber using this technique.
There is another technique to minimize hacking by Eve. The technique is known as the privacy amplification protocol. In the protocol, Alice randomly chooses pairs of bits from the key they have obtained over the quantum channel. Then she performs XOR on the pairs. She then tells Bob publicly on which bits the XOR operation was made, but not the results. Bob then performs the XOR operation on the bits that Alice informed him of. Alice and Bob then replace the pairs with the XOR results to design the new key. This is illustrated below:
(a) Alice and Bob have the secret key 111, as in Fig. 7.
(b) Alice chooses the first and second bits as a pair and informs Bob of these publicly. She gets the XOR result 1 ⊕ 1 = 0 and keeps it secret.
(c) Bob performs XOR on the informed bits and gets the result 1 ⊕ 1 = 0.
(d) Alice and Bob both replace the pair by the XOR result. So their new key = 01.
(e) Note that even if Eve definitely knows one bit of the chosen pair, unless and until she gets the result of the XOR (which Alice and Bob never communicate), she cannot replace the pair and hack the key.
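The bookkeeping of this protocol is made explicit by a few lines of Python; the pair chosen below is fixed for illustration rather than picked at random.

    # Privacy amplification on a sifted key: a publicly announced pair of bits
    # is replaced by their XOR. Eve, missing either bit of the pair, cannot
    # infer the XOR result that Alice and Bob never transmit.
    key = [1, 1, 1]     # the key from Fig. 7
    pair = (0, 1)       # indices Alice announces publicly (illustrative choice)

    xor = key[pair[0]] ^ key[pair[1]]    # 1 XOR 1 = 0, computed locally and kept secret
    new_key = [xor] + [b for i, b in enumerate(key) if i not in pair]
    print(new_key)      # [0, 1], i.e. the new key 01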
The quantum computer is very promising. It has numerous advantages over classical computers, namely in terms of speed (parallelism is inherent in the quantum computer), power consumption (nearly half that of the classical computer, due to superposition), and the tackling of computational problems hitherto impossible with conventional computers. The quantum computer will be based on quantum logic gates and quantum circuits, and the technology for these is not yet even at the infancy stage. On the other hand, two problems of quantum computers have been identified. First, it is estimated that quantum error correction will generate more power than the chips can dissipate, so the technology of the quantum computer may not be easy to develop. Second is the problem of decoherence: the decoherence interval measures how long a qubit can maintain a synchronized waveform to represent 1 and 0 simultaneously. The decoherence time is estimated on average to be less than 1 microsecond. The challenge remains how to increase this interval. Yet there is no stop, and there shall be no stop, in the development of the quantum computer. We would be wrong to think that quantum computers will replace classical computers. Quantum physics has not replaced classical physics; they coexist, each within its own domain.

2.2 Quantum Security


The disadvantage of key distribution can be removed with the aid of quantum technology. If the key distribution problem is solved, the Vernam technique becomes the best technique for security. In order to solve the distribution problem, the use of a quantum channel for sending information about the key is being explored. In quantum mechanics, one cannot measure something without causing a disturbance to a related parameter. For example, Heisenberg's uncertainty principle states that Δx·Δp ≈ constant; thus if Δx is changed, Δp is bound to change. An ideal quantum channel supports the transportation of single photons. Thus a single photon can represent a bit 0 (zero) or 1 (one). The phase or state of polarization of the photon may be used for identifying the 0 or 1. For example, photons with 0° and 90° of polarization may be treated as bit 0, and photons with 45° and 135° of polarization may be taken as bit 1. Data security through quantum channels is under active research in the UK and USA. Some positive breakthroughs have been made by Charles Bennett of IBM Research at Yorktown Heights, New York, and by Gilles Brassard at the University of Montreal.
If, in the example discussed earlier, Alice wants to send Bob the secret key required by the Vernam cipher, she can send the key, say of N bits, through a quantum channel. Bob will be instructed by Alice to detect the photons (bits) from the quantum channel starting from a given time. There may be some transmission loss, and Bob may be able to detect only some fraction of the photons or bits. Bob will have to inform Alice over a telephone as to which photons he has seen. In this way, they can share a common, variable key. For instance, if Alice sends 11110000 as the key, and Bob replies that he has seen the first, seventh and eighth photons (starting from the leftmost bit), then their common key shall be 100. Eavesdropping can be tackled by sending photons with different phases. For example, the bit 0 may be represented by a photon having a phase of 0° or 180°, and the bit 1 can be denoted by a photon with a 90° or 270° phase. When Bob uses the matching phase base, he will be able to detect the bits correctly.
Alice can send data randomly using differently polarized photons, and Bob will randomly try to filter out the bits. After the operation, Bob will inform Alice over the telephone of the timings and the states of the filters used by him. Alice can then inform him at which instances they have used the same filter states. Based on this exchange of information, Bob and Alice will get to know their key. Should any eavesdropper attempt to intercept the photon transmission, the key accepted by Alice and Bob will be garbled. This is because quantum theory ensures that an intercepted photon cannot be retransmitted without changing its phase. Therefore, a change in the polarity of the photon will let Alice and Bob immediately know of an interception. In the scheme of sending information at the one-photon-per-bit level, as proposed by the IBM and University of Montreal researchers, the transmitter (Alice) tells the receiver (Bob) that she plans to send n bits (photons) starting at a given time. Alice then sends the bits by randomly switching the phase in the transmitter between 0° and 180°; this switches the output in the receiver between 0 and 1. Transmission and detection losses mean that Bob will only see a small fraction of the photons; he then uses a classical communication channel (the telephone, for example) to tell Alice which photons he has seen, but not which detector he has seen them in. This allows Alice and Bob to share the same random number. For example, Alice uses ten photons to send the random number 1001011101; Bob replies that he only received the second, fifth and last photon; therefore they have shared the random number 001.
However, it is conceivable that an eavesdropper could intercept the signal, copy Alice's message, and send it on to Bob without either Alice or Bob realizing. One way to overcome this, and ensure absolute security, is for both the transmitter and the receiver to use non-orthogonal measurement bases. In other words, Alice sends parts of the message by switching the transmitter phase between 90° and 270°, say, and other parts by switching between 0° and 180°. When Bob and Alice are using the same base, the system works as before. However, if Alice is using 0°/180° and Bob is using 90°/270° (or vice versa), the message is meaningless: a photon that Alice sends as a 0 has a 50% chance of being received as a 1, and vice versa.
Therefore, when Bob tells Alice which photons he has received, he now also says which base he was using, and Alice must tell him whether that is a valid photon (i.e., one which was sent and received when they were both using the same base). Paul Townsend of British Telecom, working with the Malvern group, recently demonstrated self-interference of short light pulses, containing on average 0.1 photons, down 10 km of standard communications fiber using this technique.
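A toy simulation of this base-sifting procedure is sketched below in Python. The two bases, the loss probability and all names are illustrative assumptions; this models only the classical bookkeeping of the exchange, not the quantum physics.

import random

def sift(n=10, loss=0.7):
    """Simulate sending n photons over a lossy channel and keeping only
    the bits where Alice's and Bob's bases match."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("AB") for _ in range(n)]   # A = 0/180 deg, B = 90/270 deg
    key = []
    for bit, basis in zip(alice_bits, alice_bases):
        if random.random() < loss:
            continue                      # photon lost in transmission/detection
        bob_basis = random.choice("AB")
        bob_bit = bit if bob_basis == basis else random.randint(0, 1)
        # Public discussion: Bob announces which photons arrived and which base
        # he used; Alice keeps only those where the bases matched.
        if bob_basis == basis:
            key.append(bob_bit)
    return key

print(sift())   # the surviving bits form the shared random key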
But remember: Moore's laws are here to stay for at least another decade!
BOX 2
BILLION-TRANSISTOR IC: Hope or Hype?
Since the inception of digital electronics in the form of ENIAC in 1946, the computer has gone through a number of generations, and it is now in the fifth generation. The vast and rapid changes across five generations of computer technology, over a period of just 50 years, have resulted on the one hand in the reduction of the size and cost of computers and, on the other, in a tremendous increase in their processing power and capacity. The credit for this is due to IC (Integrated Circuit) technology. Among others, the famous empirical laws known as Moore's Laws basically govern the pattern of growth of computers and of IC technology. Gordon Moore, Head of Research & Development at Fairchild, coined these laws around 1965. Moore's laws state that (a) the number of components on an IC will double every year (the original Moore's law), (b) circuit complexity on an IC will double every 18 months (the revised Moore's law), and (c) the processing power of computers will double every year and a half (Moore's second law).
Presently, ICs are made of around 250 million transistors. If Moore's law continues to hold good, it is predicted that by 2010 ICs will be made of a billion transistors. The threats to the survival of Moore's laws are heat dissipation and the quantum effect, which set a physical limit to IC integration. Several predictions of the imminent death of Moore's laws were therefore made earlier. Contrary to these predictions, Moore's laws are surviving and continue to hold true for IC integration. Two recent research reports have further strengthened confidence that Moore's laws will survive for at least another few years.
A survey conducted jointly by the IEEE (Institute of Electrical and Electronics Engineers) and the Response Center Inc. of the USA (a market research firm) over the fellows of the IEEE showed that 17%, 52% and 31% of respondents respectively predict the continuation of Moore's laws for more than 10 years, for 5-10 years and for less than 5 years. The average predicted lifetime of the laws is then about 6 years. Moore's laws are then expected to hold up to about 2009, by which time, following the laws, the billion-transistor IC will be a reality.
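The doubling arithmetic behind this prediction can be checked in a few lines of Python; the starting year below is an assumed illustration, not a figure from the text.

transistors = 250e6    # ~250 million transistors today
year = 2003.0          # assumed starting year, for illustration only
while transistors < 1e9:
    transistors *= 2   # one doubling of circuit complexity...
    year += 1.5        # ...every 18 months, per the revised Moore's law
print(f"~{transistors/1e9:.1f} billion transistors by about {year:.0f}")
# One billion is crossed after two doublings, i.e. within a handful of
# years of the starting point, consistent with the 2009-2010 prediction.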
The expectation of realizing the billion-transistor IC by 2010 has been further brightened by current research at Intel extending Moore's laws. Pat Gelsinger's vision of extending Moore's laws includes Intel's 90-nanometer fabrication process. Although several alternative technologies, namely quantum computing, bio-computing, molecular electronics and chemical computing, are under investigation as possible replacements for digital computing, the year 2010 may well achieve the landmark of the billion-transistor IC, another leap forward in IC technology: truly a high hope and not a hype.

3. CURRENT AND FUTURE COMMUNICATION TECHNOLOGIES


3.1 Personal Communication
Personal Communication is poised to bring a revolution in communication. Personal Communication shall be wireless, service independent and akin to natural communication. It shall support all sorts of mobility. Active research is going on in this field all over the world. As of today, personal communication is seen as the sum total of existing wireless communications like cellular communication, paging, mobile satellite services, VSAT (Very Small Aperture Terminal), wireless LAN, wireless Internet etc., although ultimately personal communication shall be a UTN (Universal Telecommunication Number) service. Personal communication is conceived as a total wireless communication. It is aimed to provide global coverage and to serve any sort of information, like voice, data and messaging, anywhere and at any time[24]. Any location, or anywhere, could imply home, office, in-transit or any other place. Personal communication has two different attractions. First, it is totally wireless and thereby supports both man mobility and machine mobility[25]. Personal (or man) mobility and terminal (machine) mobility have distinct and separate characteristics[26]. For personal mobility, a person need not carry a terminal but needs to have a personal communication number, typically a UTN (UTN is discussed later). For terminal mobility, a person needs to carry a terminal and needs to be within its radio coverage. With personal mobility, all sorts of communication can be made through the personal number. A caller reaching a callee through the callee's personal number may opt for a particular terminal, like telephone or fax, for the session. In terminal mobility, different types of communication need different numbers and different call sessions. For example, for mobile fax we need one number, and for mobile telephone we need another, separate number. For personal communication, any device, like a conventional home phone, cellular phone, key phone, fax or pager, can be used. Service-wise, therefore, personal communication is much more flexible, portable, accessible and reachable compared to wired communication. The philosophy behind personal communication is unique: personal communication is for connecting people rather than machines. Personal communication is expected to provide a single Universal Telephone Number (UTN) or Universal Personal Telecommunication (UPT) number to a subscriber for all sorts of communication, at any time, anywhere. Today, we cannot reach many people most of the time, at most places, even though they have a number of telecommunication devices like telephone, fax, telex, e-mail, etc. With a single UTN, a subscriber can communicate all over the world. In today's communication scenario, a person has many different numbers for communication at different locations (one's telephone number in Kolkata is different from that in Delhi) and for different uses (the fax number is different from the home telephone number). With the two above-stated characteristics, personal communication will seem almost like natural communication. In our day-to-day natural communications (acoustic communication), we use the wireless mode and a single name for addressing the caller (or callee). A person's name does not change whether he/she is in Kolkata or in New York. Personal communication is thus postulated as universal communication. Technology development, standardization, system development, performance analysis and spectrum allocation etc. for personal communication are actively underway.
Experts and scientists view personal communication from different angles. One group views personal communication as a distinct and separate total mobile communication solution. Others see personal communication as a migration of the existing conventional wireless communication with enhanced features. The latter view is quite a balanced one. Therefore, as of today, personal communication can be viewed as a combination of various existing wireless services and newly proposed services like UTN[27-28]. Personal communication includes the platforms of each of the existing services like cellular, wireless PBX, Centrex, cordless (both home and public), CT2 and wireless LAN etc. Partial application of existing services by personal communication includes paging, SLMR (Special Land Mobile Radio), PSTN (Public Switched Telephone Network), VSAT, Common Channel Signaling System No. 7 and ISDN (Integrated Services Digital Network)[29-30]. Personal communication shall fully include future services like next generation cellular or TGMS (Third Generation Cellular System) and UTN services. As personal communication is expected to operate globally using the concept of UTN, the switching and processing systems required for personal communication shall be huge and complex. Intelligent capabilities of switches and nodes are a must. On this basis, we can define personal communication as an intelligence-based and natural-like communication. Wireless transmission can take place using different frequency bands. An overview of the different frequency bands is given in Table 5. The frequency allocations of some of the wireless communication systems are given in Table 6.
Table 5: Different Frequency Bands and their Applications

Frequency Band | Wavelength | Name of the Band | Usual Transmission Line Covering the Band | Application
<30 kHz | >10 km | Very Low Frequency (VLF) | Twisted pair | -
30-300 kHz | 10-1 km | Low Frequency (LF) | Twisted pair/coaxial cable | Long radio waves; used in submarines because these waves can penetrate water and follow the earth's surface
300 kHz-3 MHz | 1 km-100 m | Medium Frequency (MF) | Twisted pair/coaxial cable | Radio waves; AM between 520 kHz and 1605.5 kHz
3-30 MHz | 100-10 m | High Frequency (HF) | Twisted pair/coaxial cable/radio waves | Short radio waves; AM with 5.9 MHz to 26.1 MHz
30-300 MHz | 10-1 m | Very High Frequency (VHF) | Twisted pair/coaxial cable/radio waves | FM between 87.5-108 MHz; TV between 174-230 MHz
300 MHz-3 GHz | 1 m-10 cm | Ultra High Frequency (UHF) | Coaxial cable/radio waves/microwaves | TV between 470-790 MHz; analog mobile phone (450-465 MHz); digital GSM (890-960 MHz); DECT at 1880-1900 MHz; digital TV planned at 470-862 MHz
3-30 GHz | 10 cm-1 cm | Super High Frequency (SHF) | Microwaves | Fixed Satellite Service in C band (4/6 GHz), Ku band (11/14 GHz) and Ka band (19/29 GHz)
>30 GHz | <1 cm | Extra High Frequency (EHF) | Optical fiber/infrared links | -


Table 6: Frequency Bands in some of the important wireless applications

Mobile phones
US: AMPS, TDMA, CDMA: 824-849 MHz / 869-894 MHz; GSM, TDMA, CDMA: 1850-1910 MHz / 1930-1990 MHz
Europe: GSM: 890-915 MHz / 935-960 MHz and 1710-1785 MHz / 1805-1880 MHz
Japan: PDC: 810-826 MHz / 940-956 MHz and 1429-1465 MHz / 1477-1513 MHz

Cordless telephones
US: PACS: 1850-1910 MHz / 1930-1990 MHz and 1910-1930 MHz
Europe: CT1+: 885-887 MHz / 930-932 MHz; CT2: 864-868 MHz; DECT: 1880-1900 MHz
Japan: PHS: 1895-1918 MHz; JCT: 254-380 MHz

Wireless LAN
US: IEEE 802.11: 2400-2483 MHz
Europe: IEEE 802.11: 2400-2483 MHz; HIPERLAN 1: 5176-5270 MHz
Japan: IEEE 802.11: 2471-2497 MHz

3.2 Cellular Communication


The world of wireless communication actually began in the USA around the 1930s, when the American police started using radiotelephones for communicating with field offices. Public radio applications like PLMR (Public Land Mobile Radio) and SLMR (Special Land Mobile Radio) gradually developed. In the early 1980s, four more wireless services were introduced: AMPS (Advanced Mobile Phone System), developed at Bell Laboratories in 1980, airphone services, cordless services and telepoint. AMPS is the earliest example of cellular communication. AMPS, and for that matter cellular communication in general, migrated from the wide-area radio communication system. Wide-area radio transmission is the earliest form of mobile communication. Its objective in the early days was to cover as large an area as possible with a single base station. The single base station of the wide-area cell is equipped with a high tower antenna, and its transmitter is very high powered. The configuration of the system, once designed, is fixed. In the cellular concept, on the other hand, smaller cells, each with a low-powered base station, are used. In the cellular concept the objective is to increase the number of customers (specifically, the subscriber density per MHz of allocated spectrum) rather than the coverage area. This objective is met by the concepts of cell splitting and frequency re-use, which have been illustrated with examples in the references. The cell-splitting and frequency re-use plans are changeable; hence the configuration is flexible and changeable. In a cellular system many cells are used, each covering a relatively small radius of the order of 0.5 km to 10 km, compared to the 50 km to 100 km of early-day mobile communication systems. The small cells of cellular communication are formed by splitting the large cells of previous mobile systems. For frequency re-use, the cells are clustered into groups with, say, k cells per cluster. The allocated cellular band may be divided into k sub-bands, and each sub-band may be allocated to one cell for communication of a mobile-base pair. Frequency re-use may be defined as the use of the same carrier frequency to cover different cells separated by a distance such that co-channel interference does not cause problems. Carrier re-use follows well-defined rules described in the standard literature.

3.3 First Generation Cellular


The first generation cellular is analog. AMPS is the standard analog cellular system used in the USA, Canada, Australia etc. Other first generation cellular standards are TACS (Total Access Communication System), used in the UK, Austria, Spain and Italy; C-450 of Germany; and RTMS (Radio Telephone Mobile System) of Italy. All the first generation cellular systems use frequency modulation for speech and the frequency shift keying technique for signaling. Band sharing among users is done by the frequency division multiple access (FDMA) technique.
In the USA a total of 50 MHz is allocated, in the bands of 824-849 MHz and 869-894 MHz, for analog cellular communication. In the AMPS system, each channel is 30 kHz wide; hence 832 channels are provided in AMPS. Frequency modulation with 8 kHz deviation is used for speech, and frequency shift keying at 10 kbps is used for signaling. In AMPS, the cluster size is either 12 with omnidirectional antennas or 7 with directional antennas per cell. In Japan, a total of 56 MHz is allocated for analog cellular communications, in the bands of 860-885/915-940 MHz and 843-846/898-901 MHz. NTT (Nippon Telephone and Telegraph) deployed a system in 1979 using the bands of 925-940 MHz and 870-885 MHz for uplink and downlink respectively. 25 kHz channel spacing was used, and 600 duplex channels were provided; the signaling rate was 300 bps. This system was upgraded in 1988 with a reduced channel spacing of 12.5 kHz and an increased signaling rate of 2400 bps. A frequency interleaving technique was used, and the number of channels increased to 2400. For further information, the references may be consulted.
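The AMPS channel arithmetic above can be reproduced directly:

total_spectrum_khz = 25_000        # one direction, e.g. 824-849 MHz
channel_width_khz = 30             # AMPS channel width
channels = total_spectrum_khz // channel_width_khz
print(channels)                    # 833 by pure division; AMPS defines 832 usable channels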

3.4 Second Generation Cellular


Second generation cellular systems evolved with digitization, digital technology and digital signal processing. With digital techniques in hand, it was seen that TDMA (Time Division Multiple Access) and CDMA (Code Division Multiple Access) could be viable and potential alternatives to FDMA. Digital techniques offer a number of advantages over analog techniques, namely flexibility (digital systems can support mixed and/or integrated communication and a wide range of services), reliability (digital systems are less noise/error prone and can support security easily), cost effectiveness (one transceiver can serve a number of users at a base station in digital systems, whereas in FDMA the number of transceivers increases with the number of users) and reduced complexity. Digital cellular is known as second generation cellular.
Digital cellular technology and techniques are well standardized. GSM (Global System for Mobile Communication), ADC (American Digital Cellular) IS-54 (developed by the Electronic Industries Association/TIA of America) and JDC (Japanese Digital Cellular) are examples of second generation cellular standards. They are used respectively in Europe (and some parts of Asia, including India), the USA and Japan. GSM was actually standardized in 1982 as Group Special Mobile by CEPT (Conference of European Posts & Telecommunications). In GSM, a 50 MHz band is allocated for cellular communication, in the bands of 890-915 MHz (mobile transmit) and 935-960 MHz (base transmit). Each radio channel is allocated 200 kHz. Thus there can be a maximum of 25 MHz/200 kHz = 125 carriers; as a convention, only 124 carriers are used, the first 200 kHz in the uplink and the last 200 kHz in the downlink being unused. The minimum and maximum numbers of carriers per cell are 1 and 15 respectively. TDMA is used with 8 slots per radio channel. Each mobile transmits periodically in its slot and receives in the corresponding slot. Each slot is of 0.577 msec duration, so each frame duration is 0.577 × 8 = 4.615 msec. GSM supports full rate operation at 22.8 kbps with 8 slots per frame, as well as half rate operation at 11.4 kbps with 16 slots per frame. For voice communication, speech coders compatible with both rates are available. For data communication, various asynchronous and synchronous services at rates of 9600, 4800 and 2400 bps are specified for both full and half rate operation. These data services interface to audio modems (like V.22 bis or V.32) and ISDN (Integrated Services Digital Network). GSM can also support the connectionless packet switched network X.25, the Internet and Group 3 fax (facsimile).
GSM has recently been extended to include group calls and push-to-talk services. The extension bands of GSM, which are yet to be exploited, are 880-890 MHz for uplink communication and 925-935 MHz for downlink communication.
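The GSM numerology quoted above can be checked as follows; the exact slot duration 15/26 ms is a standard GSM figure, used here to avoid rounding error.

band_mhz = 25.0            # one direction of the GSM allocation
carrier_khz = 200          # channel spacing
carriers = int(band_mhz * 1000 / carrier_khz)   # 125, of which 124 are used
slot_ms = 15 / 26          # = 0.577 ms, the exact GSM slot duration
frame_ms = slot_ms * 8     # 8 TDMA slots per frame
print(carriers, f"{frame_ms:.3f} ms")           # -> 125 4.615 ms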

3.5 DCS 1800


DCS 1800 is an extension of GSM. In the DCS 1800 standard, the uplink and downlink bands are 1710-1785 MHz and 1805-1880 MHz respectively. It works at around 1800 MHz, which is higher than the GSM band. Higher frequencies have more penetration power; therefore, compared to GSM, the DCS 1800 system is better in terms of interference and fading. DCS 1800, besides third generation cellular, is preferred for personal communication.

3.6 CDMA Cellular


Code Division Multiple Access (CDMA) cellular is another example of second generation cellular, and a good competitor of TDMA cellular. In CDMA cellular, different users' signals use the same frequency band but are distinguished by different codes. The codes are spread codes, produced basically by two techniques: frequency hopping and direct sequence. In frequency hopping, the transmitter jumps from one narrowband frequency to another according to a sequence mutually known to transmitter and receiver; thus successive data bits may be sent at different frequencies. In direct sequence, each bit ("yes" or "no") of data is represented by a sequence of bits transmitted in the same time. The length of the sequence is known as the chip ratio. For example, one user may code the "yes" and "no" states as 0000 and 1111 respectively, whereas another user may do so with the codes 0101 and 1010 respectively (the chip ratio is four). With such a spread code, the signal is distributed over a wide band; spread in this way, it effectively looks like noise and becomes indistinguishable from noise. The CDMA system is thus a secure communication system, and this makes it advantageous over TDMA cellular. Another plus point of CDMA is capacity: the capacity of CDMA is more than that of TDMA cellular. Other aspects of CDMA cellular parallel those of TDMA cellular and, for that matter, GSM. In [14] it was shown how the narrowband propagation path loss (the path loss of, say, TDMA) should be applied to the wideband path loss (the path loss of CDMA). IS-95 is the EIA/TIA standard of the CDMA cellular system used in America. The basic user channel rate is 9.6 kbps; it is spread by a factor of 128, and the channel chip rate becomes 1.2288 Mchips/sec.
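The IS-95 spreading arithmetic, together with a toy direct-sequence coding using the chip patterns from the text, looks like this in Python:

user_rate_kbps = 9.6
spreading_factor = 128
chip_rate_kcps = user_rate_kbps * spreading_factor     # 1228.8 kchips/s
print(f"{chip_rate_kcps/1000:.4f} Mchips/s")           # -> 1.2288 Mchips/s

# Toy direct-sequence coding from the text (chip ratio 4):
code = {0: "0000", 1: "1111"}                          # one user's spreading code
data = [1, 0, 1]
print("".join(code[b] for b in data))                  # 111100001111 on the channel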

3.7 Wide Area Connectivity


In cellular, wide-area or worldwide coverage is achieved through a basic connectivity scheme. In this scheme, a group of base stations is connected to an MSC (Mobile Switching Center). The MSC is connected to other public, national or international networks. Through the base stations, the mobiles access the network over radio links. The base stations provide overall management and control of switching between radio channels and TDMA time slots in order to connect the mobile to the MSC. Through the MSC, a mobile can connect to mobiles of other cells as well as to subscribers of all the public national or international networks connected to the MSC. A mobile of one MSC can connect to any mobile of another MSC via MSC-to-MSC switching.

3.8 Continuous Operation


Originally, a mobile belongs to a base station and is assigned a number for communication. This original number is kept stored in the HLR (Home Location Register) of the MSC. When the mobile is within the coverage area of its original base station, this permanent number is used for communication. A mobile can cross its base region and enter other (foreign) base regions while talking or communicating. In such situations, to maintain continuous operation, the foreign base station must take control of the visiting mobile; that is, control of the mobile shall pass from the original base station to the visited foreign base station. This pass-over technique is called the hand-off operation. Hand-off is decided upon by comparing the signal strengths received by the mobile from the original base station and the foreign base station. As the mobile proceeds to cross the area of the original base, the received signal strength from the original base gradually diminishes, while the received signal strength from the foreign base station gradually increases. The cross-over instant of the signals may be taken as the time of the hand-off operation. However, to avoid false decisions due to noise, some hysteresis is often applied to the cross-over decision. On hand-off, a visiting mobile is assigned a temporary number for communication, and this information is kept stored in the VLR (Visitor Location Register) of the MSC for further and future management and control. The hand-off operation and technique are equally applicable when a mobile roams from one foreign location to another, i.e., when the mobile crosses boundary after boundary.
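A minimal sketch of such a hysteresis-based hand-off decision is given below; the 3 dB margin and the signal-strength figures are assumed illustrative values, not from the text.

HYSTERESIS_DB = 3.0   # assumed margin around the cross-over point

def should_hand_off(rss_serving_dbm, rss_foreign_dbm):
    """Hand off only when the foreign base exceeds the serving base by the
    hysteresis margin, so noise around the cross-over point does not cause
    repeated (false) hand-offs."""
    return rss_foreign_dbm > rss_serving_dbm + HYSTERESIS_DB

print(should_hand_off(-95, -94))   # False: still inside the hysteresis band
print(should_hand_off(-95, -91))   # True: foreign base is clearly stronger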

3.9 Cordless Telephone


First generation cordless is analog. In the USA, analog cordless communication is allocated 46.6-47.0 MHz (base transmit) and 49.6-50 MHz (handset transmit); ten frequency pairs are used in these bands, with frequency modulation for voice. In Europe, the first standard used for cordless telephones is known as CT0. Eight channel pairs are used in this standard, near 1.7 MHz (base transmit) and 47.5 MHz (handset transmit). CEPT developed a standard for analog cordless known as CT1. The bands used in the CT1 standard are 914-915 MHz (base transmit) and 959-960 MHz (handset transmit); forty 25 kHz duplex channel pairs are used in these bands. The CT1+ standard was later developed with the bands 885-887 and 930-932 MHz, with provision for 80 channel pairs. It may be noted that the CT1+ bands are chosen to avoid overlapping with the GSM bands. In Japan, for analog cordless telephones using FM, 89 duplex channels are provided near 254 MHz (handset transmit) and 380 MHz (base transmit).
Digital cordless is known as second generation cordless, and is like digital cellular to some extent. Cordless is usually for walking-speed users, indoors and outdoors. Naturally, the cell size, antenna height, mobile speed, handset design complexity and handset transmitter power in cordless are less than those of cellular.
CT2 is the first standard of digital cordless in Europe. CT2 is allotted the band 864-868 MHz and can support 40 FDMA channels with 100 kHz spacing. In CT2, voice is digitized with a 32 kbps ADPCM (Adaptive Differential Pulse Code Modulation) encoder. CT2 can also support data: up to 2.4 kbps through the speech codec, up to 4.8 kbps with increased error rates, and higher data rates using the 32 kbps voice channel. The telepoint concept, a wireless payphone service, is a migration of the CT2 technique.
Another standard of digital cordless is DECT (Digital European Cordless Telecommunications). It uses TDMA with 12 slots per carrier for each of the upward and downward directions of communication. As we saw earlier with TDMA in cellular communication, multiple users can simultaneously communicate with a single transceiver, and the same is true for DECT. It uses the 32 kbps ADPCM technique for voice digitization. In addition, DECT can support telepoint, wireless PBX and RLL (Radio Local Loop).
In Japan, PHS (Personal Handyphone System) is the main standard for digital cordless. PHS uses TDMA. Each channel has a width of 300 kHz, and 77 channels are permitted in the band of 1895-1918.1 MHz: 37 carriers within the band 1895-1906.1 MHz are allocated for home and office cordless, and 40 carriers within the band 1906.1-1918.1 MHz are allocated to public cordless.
Digital cordless in the USA was developed by Bellcore (Bell Communications Research) under the title WACS (Wireless Access Communication System). Actually, PACS (Personal Access Communication Service), a combination of WACS and PHS, is now in use. In North America, the ISM (Industrial, Scientific and Medical) bands, namely 902-928 MHz, 2400-2483.5 MHz and 5725-5850 MHz, are also in use for digital cordless.

3.10 Wireless Data


The trend is towards wireless. Wireless communication offers a number of advantages, including a high performance-to-cost ratio. Cellular communication and cordless communication are basically for voice, although they can also be used for data communication and messaging. Wireless data networks, in contrast, are designed basically for packet mode communication. Cordless is for synchronous services (it uses circuit switched techniques), whereas wireless data communication is asynchronous in nature (it uses packet switched techniques). But, as we shall see in the IEEE 802.11 standard, wireless LAN has been proposed to provide both asynchronous and synchronous services.
BOX 3
Know thy Elegant Ethernet

1. INTRODUCTION
One of the hottest topics of IT is the Local Area Network (LAN). The LAN plays an indispensable role in serving the information community. A LAN basically provides shared data access for an organisation that has several systems and nodes distributed geographically, logically and physically. The three main physical attributes: limited geographic scope (in the range of 0.1-10 km) [1], low delay or very high data rate (over 1 Mbps) [2], and user ownership, make LANs substantially different from conventional computer networks. Moreover, while Wide Area Networks (WANs) and Metropolitan Area Networks (MANs) allow users in the network to access shared databases, LANs go a step further and allow users to have shared access to many common hardware and software resources [3], such as storage, I/O peripherals and communication devices. For example, a costly high resolution laser printer is usually shared by users in a LAN; all users in a LAN use an inexpensive single transmission medium in a multidrop environment, as well as, whenever required, a single bridge or gateway to communicate with other homogeneous or heterogeneous networks respectively. A LAN is hence a resource-sharing data communication network that is usually used to connect computers, printers, terminal controllers (servers), terminals (keyboard/VDU), plotters, mass storage units (hard disks) and any other piece of equipment (for example, a word-processing machine) that has some form of computer connectivity. The LAN solves the 'MY problem' [4] of the 80/20 rule [5] of communication in a cost-effective manner in an office, factory, university and similar environments.


However, a PABX (Private Automatic Branch Exchange) differs from a LAN in that, unlike a LAN, a PABX uses a separate pair of wires (transmission medium) to connect each device (or extension), has low bandwidth (limited to that of a telephone line) and uses rugged hardware switching for interconnection. Communication in LANs is peer to peer, and not via intermediaries as with WANs and MANs. A MAN's coverage is from a few miles to 100 miles, and a WAN's coverage is from hundreds to thousands of miles [6]. All three of these networks follow a layered architectural standard protocol, like the 7-layer ISO-OSI protocol or SNA, for their interconnection strategies [6]. LANs continue to be the driving force for implementing the future's white hope of the digital wall socket [7], which will act like today's electricity socket and telephone socket. The digital wall socket is to be used for handling both low and high data rate devices like copying machines, word processing machines, facsimile displays, VDUs, keyboards, microcomputers/PCs, large computers etc. This may ultimately lead to the 100 percent paperless office-of-the-future and the 100 percent automated factory-of-the-future, with diskless managers, administrators and engineers.
One of the most successful LANs is Ethernet. Ethernet was the most popular LAN in 1987. As per Forrester Research Inc. [5], in the USA Ethernet covered 33 percent of the LAN market, with the IBM token ring lagging behind at 22 percent. Dataquest estimated that Ethernet had covered 52 percent of installed LANs in the USA. Is Ethernet hottest now? Whatever the answer to this question may be, it is a fact that Ethernet is still very popular today and will continue to be so, at least for some time to come.
This paper makes a thorough review of Ethernet.

2. Ethernet
Historically, Ethernet was developed by the Xerox Corporation on an experimental basis [8] around 1972. Based on this experimental experience, a second-generation system was soon developed by the Xerox Corporation in the late 1970s [9]. Around 1980-81, under a joint effort of DEC (Digital Equipment Corporation), Intel and Xerox, an updated version of the Ethernet specifications (Table 1) [8] was designed. This historically led to the development of the IEEE (Institute of Electrical and Electronics Engineers Inc.) 802 standards (Table 2) [4,6] for LANs, with reference to the 7-layer OSI-ISO (Open System Interconnection of the International Standards Organisation) model: the LLC (Logical Link Control) is covered by the IEEE 802.2 standard; the MAC (Medium Access Control), covered by the IEEE 802.3 standard, actually specifies the accessing mechanism; and the physical level covers the electromechanical connectivity to the network medium. The LLC and MAC of a LAN jointly form the data link layer of the OSI-ISO protocol standard. Nowadays, Ethernet is available from many vendors [10]. Such Ethernets are as per the IEEE 802.3 standard; these are actually Ethernet-like [11] networks. All LANs covering the IEEE 802.3 standard are not Ethernet, but all Ethernets cover the IEEE 802.3 standard.
Table 1: Specification of Ethernet

Parameter | Experimental Ethernet | Industrial/Commercial Ethernet
1. Data rate | 2.94 Mbps | 10 Mbps
2. Maximum end-to-end length using repeaters/bridges | 1 km | 2.5 km
3. Maximum segment length | 1 km | 500 m
4. Data encoding technique | Manchester | Manchester
5. Co-axial cable impedance | 75 ohm | 50 ohm
6. Co-axial cable signal level | 0 to +3 volts | 0 to -2 volts
7. Transceiver cable connector size | 25 and 15 pin D series | Only 15 pin D series
8. Preamble | 1 byte of the pattern 10101010 | 1 byte of the pattern 10101010
9. Size of CRC (Cyclic Redundancy Check) | 2 bytes | 4 bytes
10. Size of address field | 1 byte | 6 bytes

Table 2: IEEE 802 Standards

Standard (MAC & physical layer) | Access technique and topology | Transmission medium with allowed data rates | Basic application area
802.3 | CSMA/CD with bus topology | Broadband: co-axial cable with 1/5/10/20 Mbps. Baseband: co-axial cable with 1 Mbps | Office Automation (OA)
802.4 | Token passing with bus topology | Broadband: co-axial cable with 1.5444/5/10/20 Mbps. Baseband: co-axial cable with 1/5/10 Mbps | Manufacturing Automation (MA)
802.5 | Token passing with ring topology | Baseband: shielded twisted wire pair with 1.4 Mbps; co-axial cable with 4/20/40 Mbps | Real-time process application
802.6 | Yet to be finalized | Yet to be finalized | MAN
802.7 | -do- | -do- | Broadband LAN
802.8 | -do- | -do- | LAN with optical fiber
802.9 | -do- | -do- | LAN in ISDN (Integrated Services Digital Network)

The 802.2 standard covers the LLC of a LAN, and 802.10 covers network security.


2.1 Features of Ethernet


Why is Ethernet so popular? This is due to some of its important features. The most appealing features of Ethernet are its protocol simplicity and the relatively low-cost and elegant implementation of a LAN system that meets the following desirable characteristics [6,7] of a local networking facility:
- High flexibility, i.e., easy adaptability when devices or systems are to be added or removed. This is due to the bus topology and the cable-tapping facility of Ethernet.
- The transmission medium and access control are easily extensible, with minimum service disruption.
- High reliability, which assures continued operation of the network on failure of one or more active elements (nodes) like PCs, terminals or workstations. This is due to the passive nature of the Ethernet cable; moreover, there is no centralized control but distributed control in Ethernet.
- Suitability for bursty traffic. In office and engineering environments the data traffic is infrequent and bursty [10], and Ethernet was specially made for office automation.

2.2 Components and Operation of Ethernet


Ethernet is itself a hardware system. Ethernet can typically connect a maximum of 100 nodes per segment [5] and 1024 nodes per total Ethernet [10]. An Ethernet LAN must have the Ethernet cable, transceivers, interface units, control units, the user systems (Fig. 1) and terminals. Two types [12] of co-axial cable, popularly known as thick Ethernet and thin Ethernet, are used, mainly as the backbone Ethernet. Onto this backbone cable the communicating systems and peripherals are attached (tapped). Taps may be intrusive, where the cable is cut for tapping, or non-intrusive, where the cable is drilled and a tap added without hampering the operation of the network. The most common Ethernet, the baseband Ethernet, is tapped non-intrusively, whereas broadband Ethernets use intrusive tapping with a T-junction scheme. Baseband Ethernet is an implementation in which the entire bandwidth of the backbone cable is used only for Ethernet communications; signals on the cable are not modulated. Thick Ethernet cable carries a marking every 2.5 meters, usually a black ring around the cable, to show where the taps go. However, thick-wire co-axial cable has a maximum length limitation of 500 meters, and thin-wire co-axial cable has a limitation in the range of 185 meters to 1 km, depending upon the vendors of the transceivers and controllers. Ethernet may also run on twisted pair, under certain restrictions, and on fiber. The length of the twisted pair may range from 20 meters to 100 meters; Ethernets on fiber optic media have a length restriction in the range of 30 meters to 5 km. In some cases, a thin-wire Ethernet may be required to be connected to a thick-wire Ethernet. Thin-wire cable may be connected to thick wire through a barrel connector. In such a case, the restriction on segment length follows the formula [5]:
(3.28 × thin-wire length) + thick-wire length ≤ 500 meters.
Thus, if the thin wire is 100 meters in length, the length of the thick wire must be below 172 meters. Thin-wire cable has a higher signal loss.
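The mixed-segment rule can be checked directly:

def max_thick_length(thin_m):
    # From the rule (3.28 * thin) + thick <= 500 meters.
    return 500 - 3.28 * thin_m

print(max_thick_length(100))   # -> 172.0 meters, as stated in the text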
However, Ethernet is a passive system. This means that the system is powered by the connected nodes only. The Ethernet cable is also passive, which makes the system more reliable.
The Ethernet is terminated at both ends with special 50-ohm terminators, and is grounded to earth at one end only. The terminators prevent signals being reflected back down the cable and causing interference. Ethernet, like all other LANs of the IEEE 802.3 standard, uses straight Manchester coding, which ensures simple synchronization and a constant dc value. At any instant, the cable can be in any one of three states: transmitting a 1 bit (high followed by low), transmitting a 0 bit (low followed by high) or idle (0 volts). The high and low levels are represented by +0.85 volts and -0.85 volts respectively. However, Ethernet using differential Manchester coding also exists [6]. A 10 Mbps baseband Ethernet actually uses a signaling rate of 20 MHz due to the adoption of Manchester encoding: the encoding uses two signal elements per bit time to transfer 1 bit of information together with a clock signal.
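A toy encoder for the straight Manchester convention described above (1 = high followed by low, 0 = low followed by high, levels ±0.85 V):

def manchester(bits):
    # Each bit becomes two signal levels, which is why a 10 Mbps Ethernet
    # signals at 20 MHz.
    return [level for b in bits
            for level in ((+0.85, -0.85) if b else (-0.85, +0.85))]

print(manchester([1, 0]))   # [0.85, -0.85, -0.85, 0.85]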
By this time, you may be wondering why Ethernet is called Ethernet. It was once thought that 'ether', a hypothetical passive universal element, bound together the entire universe and all its parts. As you have seen, this LAN's transmission medium is a passive cable binding the smart devices into a net; this is why the name Ethernet was adopted.
Ethernet is a broadcast LAN: all nodes can listen to each and every message transmitted on the net.
The transceiver is another important component of any LAN. It is clamped securely onto the Ethernet cable so that its tap makes contact with the inner core. Transceivers are available in many different shapes, sizes and price ranges, but they all allow user devices to communicate with the cable. They also contain the electronic circuits that handle carrier detection and collision detection. A transceiver is so named because it allows simultaneous transmission and reception. A transceiver is a fairly dumb system: it transmits data, receives data, detects collisions and, if a collision occurs, notifies the controller.
The transceiver cable (maximum length 50 meters) usually contains five individually shielded twisted pairs. Two of these pairs are used for data in and data out, and two more are similarly used for control signals in and out. The fifth pair, which is not always used, allows the node to power the transceiver. Some transceivers allow up to eight nearby computers/workstations/user terminals to be attached to them, to reduce the number of transceivers needed. For example, DEC has developed a special box (DELNI: Digital Ethernet Local Network Interconnect) that allows up to eight systems to connect to it, with a single Ethernet transceiver tapping the eight systems onto the main cable. The DELNI also has the ability to work stand-alone and emulate an eight-node Ethernet cable. When the systems are no more than 50 meters away from the DELNI, or there are no more than eight co-located systems that require to be on an Ethernet, the DELNI is more cost-effective than eight transceivers and cables. The disadvantage is that the DELNI is itself powered, so failure of the DELNI will cause all eight nodes to lose access to the network.
The interfacing unit detects data and accepts the data if it is meant for its address. It also creates and checks the CRC for error detection and recovery.
The controller unit (a firmware or software device) transmits data frames to, and receives data frames from, the transceiver via the interfacing unit. It also buffers the data and retransmits it when a collision occurs, and determines the retransmission interval (which varies with load etc.) and other aspects of network management.
For a complete network, one has to procure the components of the LAN, the network software and hardware, and the communication software (e.g. NetWare 2.2, Super LAN, MS Net).
Now that the basic components of Ethernet have been discussed, the next thing is how Ethernet operates. The heart of Ethernet is the accessing technique known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection). There are many different types of CSMA technique [12]; the technique adopted in Ethernet is 1-persistent CSMA/CD. The problem of the non-persistent strategy is that the line may remain idle after the current transmission even though stations have data waiting. The alternative is the 1-persistent technique, where
the nodes continuously sense the line and transmit data as soon as it is free. CSMA/CD is a simple and straightforward way of giving every user a chance to transmit whenever it has something to send. The concept behind CSMA/CD may appear to be derived from the technique used when people talk in a mass gathering or meeting: if no one is talking, a person may start talking; if two or more people start talking at the same time, a collision occurs, and both stop and wait for some random time before starting to talk again. In Ethernet, if any node wishes to send data to another node on the network, the source listens to see if the line is free (quiet/idle). This is called carrier sensing. If the cable is idle, the source node starts transmission. Sometimes it may so happen that two or more stations accidentally start transmission at the same time. Collision is also possible in other cases: for example, if two nodes separated by a propagation time t both start transmission within an interval of time t, there will be a collision. When a collision occurs, the transmitted data is corrupted. A mechanism to detect collisions is provided by the technique of listen-while-transmitting. In this scheme, at the source node, while the transceiver's transmitting unit is sending the data, its receiving unit listens to the data that is being sent. If the transceiver detects that the data received by the receiving circuitry does not match that transmitted by the transmitting circuitry, it senses the occurrence of a collision and accordingly sends a message to the controller of the node; if there is a match, the transmission process is allowed to go on. On receiving a collision-detection signal, the controller stops sending data and sends a burst of noise on the line (jamming) to ensure that the other nodes sending data hear a collision. All collision-detecting stations back off on detection of the collision. The controller then waits for a random time before attempting retransmission; for this, a random number generator is used. The mean wait is initially equivalent to an end-to-end round-trip delay on the cable (of the order of a few microseconds for a 500-meter co-axial cable). In case of a second collision, the controller doubles the previously generated random number, thereby ensuring double the mean delay of the first collision, and so on (the doubling operation) on repeated collisions. Usually the random generator counts down from the assigned number to zero as the measure of the delay. The doubling operation is allowed for a prescribed number of times, usually 16; after that the controller sends an error message to the host (system manager) notifying the occurrence of multiple collisions. Due to this collision and retransmission scheme, 100 percent channel utilization is not achieved. Ethernet, however, comes close to 100 percent with the CSMA/CD technique, which polling and other techniques cannot achieve. The minimum Ethernet packet size (64 bytes) and the maximum Ethernet cable segment length and propagation time, taken together, guarantee that by the time the last bit of information is transmitted, the source node can accurately detect a collision if any other node attempts transmission at the same time.
However, if the utilization rate of the cable is low (i.e., the load on the network is low), collision is rare, and the mean delay time rarely exceeds its minimum value of one end-to-end round-trip delay. When utilization is high (i.e., the traffic load becomes heavy), collisions become more common. For this reason the controller dynamically changes the retransmission interval; this is why the doubling operation is in use.
When data is being transmitted, all nodes hear the data. On examining the first 6 bytes (the address field, after the preamble) of the data packet, a node may determine whether the data is destined for itself or not. If the message is for itself, it passes the message to the user's device through the controller; otherwise, it usually ignores the message. But why CSMA for Ethernet? Because of the distributed nature of random accessing techniques, they are well suited to LANs, where simplicity of operation and flexibility are most important. Besides, since a large bandwidth is available in a LAN, a LAN under such an accessing technique can be operated at a relatively low loading, avoiding unstable [13] conditions. However, the performance of CSMA/CD is inversely proportional to the end-to-end propagation delay [14]. Thus Ethernet for OA can use CSMA/CD most appropriately.

2.3 Ethernet IC (Integrated Circuit) Chips

The following is a brief list of Ethernet IC chips [6, 15] that may be used to design an Ethernet:

Vendor | Controller/Interface chips
Intel | 82586 (controller), 82501 (interface)
National Semiconductor | DP8390, DP8790, DP8341, DP8342
Seeq Tech | 8003, 8023
AMD/Mostek/Motorola | 7996, 7990 (LANCE)

2.4 Application of Ethernet


Ethernet historically and traditionally is used in office automation (OA). Today some organizations are experimenting with video on Ethernets, as well as high-resolution graphics and access techniques by which diskless workstations may access a shared disk structure. Ethernets are also used, though in rare cases, in laboratories and industries, robotics applications, factory automation, process control and many other non-office applications. One consideration favouring the adoption of Ethernet in non-office applications is its tolerance of interference from electrical motors, electromagnetic radiation and other sources of distortion. But because of its CSMA/CD accessing technique (which is probabilistic in nature), a node in the network may have to wait an arbitrarily long period to send a message. Moreover, the IEEE 802.3 standard does not have priorities in its accessing scheme. This makes it unsuitable for applications in which important messages must not be delayed while unimportant frames pass. These two factors restrict the application of Ethernet in manufacturing automation (MA) and in real-time process control systems.
However, while in the office the typical required response time is 2 to 10 seconds, in the factory and in process control it is in the ranges of 0.5 to 2 sec and 0.1 to 0.5 sec respectively. Ethernet can meet a response time of 2 to 10 sec; Ethernet is hence best for OA. For MA, LANs covering the IEEE 802.4 standard are suitable.

2.5 Limitation of Ethernet


The limitation of Ethernet, from the application point of view, due to its non-deterministic (probabilistic) accessing has already been discussed. Next, Ethernet does not perform well under heavy load conditions. Due to the randomness both in data arrival and in service, tests have shown that Ethernets can utilize only 90 to 95 percent of the available resources under a full load condition. The maximum throughput of a 10 Mbps, 500-meter Ethernet with a propagation speed of 2 × 10⁸ m/sec is only 9.96 Mbps [6, 16]. Ethernet also does not guarantee delivery of messages, as there is no scheme of sequence-number checking, retransmission requests for missing messages, or other such facilities.

3. MODIFICATION OF ETHERNET
3.1 Improving Ethernet for MA
The problem of load balancing [17] in the CSMA/CD technique can be solved to a large extent if each station, on getting transmission access, is restricted to transmit only a fixed, pre-assigned number (say, P) of packets (non-exhaustive mode) [16]. After transmitting P packets, the station has to back off for a time which must not be less than the end-to-end round-trip time of a bit on the bus. After this time passes, the station can sense the carrier again, and the process repeats.
Priority in CSMA/CD can be achieved by assigning each station a priority number. Any station, when transmitting data, may transmit its priority byte after, say, every q packets (q > P). Any other station that wishes to send an urgent message and sees that a transmission is going on may check the transmitted priority; if that priority is less than its own, it will distort the priority byte. The ongoing transmitting station, not getting back its proper priority byte, will immediately stop transmission to allow the higher-priority station access. However, if the checked priority is greater than its own priority, the station has to wait for a free carrier. A modified, deterministic Ethernet already exists in the French defence department [5]; this is, of course, a proprietary item.

3.2 Ethernet for Data and Voice


ISDN (Integrated Services Digital Network) is becoming more and more attractive to communication engineers. In the spirit of the goals of ISDN, the concept of the ISLN (Integrated Service Local Network) [17] was introduced. But why? Statistics show that about 15 percent of senior managers' office time is spent on the telephone, and not more than 3 percent is used in handling data-oriented jobs. Besides, it used to be said that real managers don't use terminals [19]; but today, of course, they do. Therefore, in a complete and cost-effective OA system, the integration of voice and data is an essential requirement. In any organization, why should there be one PABX (for telephone) and one LAN (for data)? The early problem of LAN design was to communicate data, but the real problem is to meet users' requirements for both data and voice communication. The pioneer vendors of Ethernet can examine whether Ethernet can be extended to cover the ISLN requirement by either of two techniques [19]: (i) conventional voice + data up to facsimile, or (ii) up to full moving video.

4. EXTENDED ETHERNET
A number of Ethernet segments may be connected together (Fig. 1) via repeaters or bridges [5,12,20]. A repeater consists of some sort of microprocessor (like the Intel 8088 or Motorola MC 68000), memory etc. Repeaters are standalone units: they repeat everything received from one segment onto the other segment, and vice versa, and connect two Ethernet segments via transceivers. A bridge, on the other hand, stores and forwards only the intended data from a source segment to a destination segment. A bridge is made of some sort of processor, storage, buffers and a set of software.


Fig. 1: An extended Ethernet. Ethernet segments (maximum 500 m each) are joined by a repeater and a bridge; user devices, terminals, PCs, printers and plotters attach to each segment through transceivers (TR) and controller interfaces (CI) over transceiver cables (maximum 50 m); a terminal server (like a DELNI) concentrates several terminals onto one tap; and a gateway connects the Ethernet to a WAN or MAN.

5. CONCLUSION
A number of important considerations regarding Ethernet have been highlighted. Ethernet is seen to be very effective for OA. If a next generation of Ethernet is to be developed, it must be developed in a direction that extends its application to MA, utilizing the suggestions proposed in this paper.


References
1. C. David Tsao, "A local area network architecture review", IEEE Communications Magazine, Vol. 22, No. 8, p. 7, Aug. 1984.
2. D.D. Clark, K.T. Pogran and D.P. Reed, "An introduction to local area networks", Proc. IEEE, Vol. 66, No. 11, pp. 1497-1517, Nov. 1978.
3. John E. McNamara, Local Area Networks, Prentice Hall of India, Ch. 1, 1991.
4. Stephen P.M. Bridge, Low Cost Local Area Networks, Galgotia Pub. Pvt. Ltd., Ch. 1, 1990.
5. Bill Hancock, Designing and Implementing Ethernet Networks, QED Information Science, Inc., 1989.
6. Paul J. Fortier, Handbook of LAN Technology, McGraw Hill Inc., NY, 1989.
7. James Martin, Computer Networks and Distributed Processing, Prentice Hall, Inc., Ch. 26, 1981.
8. John F. Shoch, Yogen K. Dalal, David D. Redell and Ronald C. Crane, "Ethernet", Advances in Local Area Networks, IEEE Press, NY, pp. 29-48, 1987.
9. Timothy A. Gonsalves, "Measured performance of the Ethernet", Advances in Local Area Networks, IEEE Press, pp. 383-387, 1987.
10. William L. Schweber, Data Communication, McGraw Hill Intl., Ch. 11, 1988.
11. Neil Willis, Computer Architecture and Communications, Paradigm Pub. Ltd., U.K., Ch. 14, 1988.

3.11 Wireless LAN


Wireless LAN offers wireless data communication over a limited geographical area. A wireless LAN is like a wireless PBX for data: it solves the local communication problem of an organization, exploiting the 80/20 rule of communication (most traffic stays local). Wireless LANs are meant for private or organizational use where wired communication is impossible, impractical, undesirable or expensive (examples are historic buildings, trading floors, manufacturing floors, conventions etc.) and/or where some sort of mobility is required (examples are university environments, conference rooms, hospital environments). Wireless LANs are aimed at data rates of 1 Mbps or more. Basically there are two forms of wireless LAN: radio LAN and infrared LAN. IRLAN is less popular than radio LAN: IRLAN can cover a wide area, but then it costs almost twice as much as an equivalent radio LAN. Radio LAN uses the ISM spectrum, for which a license may not be required; unlicensed radio LANs are available in the USA in the bands 902-928 MHz, 2.4-2.4835 GHz and 5.725-5.85 GHz. The IEEE committee for standardization of radio LAN has proposed to use the 2.4 GHz band (discussed later). IRLAN supports only point-to-point communication, while radio LAN is much more flexible and can be used in multi-user communication. IRLAN is short-ranged (if it is made wide-ranged, it becomes costly); radio LAN is long-ranged.
Architecture-wise, both radio LAN and IRLAN assume one of two basic topologies: infrastructure or ad hoc. The infrastructure topology is the most common in radio LAN. In an infrastructure network, stations (computers fitted with adapters/transceivers) communicate with each other within the coverage area of an access point, and communicate with any other station in the network through a backbone wired network. The backbone network is accessed via access points. In the IEEE 802.11 standard, access points are known as base points, and the backbone network is known as the distribution system. An access point is a combination of a transceiver and a data bridge. Each access point provides a certain coverage area; the number of access points required for an infrastructure network thus depends on the required coverage area. The infrastructure topology is useful in covering a building, campus or institute under radio communication.
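Since the number of access points follows from the required coverage area, a back-of-envelope estimate can be written down directly. The Python sketch below is an assumption-laden illustration (circular cells, no overlap, no capacity planning); the 25 m indoor range is our arbitrary example figure.

    import math

    def access_points_needed(area_m2, ap_range_m):
        """Rough lower bound: required area / circular coverage of one AP."""
        coverage = math.pi * ap_range_m ** 2
        return math.ceil(area_m2 / coverage)

    # e.g. a 100 m x 60 m office floor with roughly 25 m indoor radio range
    print(access_points_needed(100 * 60, 25))   # -> 4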


In an ad hoc network, stations independently communicate with each other; there is nothing like an access point for communication through a backbone network. An ad hoc network can be either temporary or semi-permanent. Semi-permanent networks are used for a few months and are useful for companies that move frequently; field construction companies and military camps in wartime may use semi-permanent ad hoc networks. Temporary networks are used for a day or a few hours of business; they may be used for sharing files and databases in a company meeting or convention.
There are two important standards for wireless LAN. IEEE is developing the IEEE 802.11 standard, which is proposed for use in the USA. HIPERLAN (High Performance Radio LAN) is the standard developed by the European Telecommunications Standards Institute and is for use in Europe. The HIPERLAN standard has already been ratified by CEPT. The IEEE 802.11 draft standard defines three different physical layers: (a) 2.4 GHz ISM band with frequency hopping spread spectrum radio, (b) 2.4 GHz ISM band with direct sequence spread spectrum radio and (c) infrared light.
The 2.4 GHz ISM band has been allowed both in the USA and Europe for the IEEE 802.11 version LAN, whereas Japan has allocated the band 2.471-2.497 GHz for IEEE 802.11 LAN. Japan has allowed such a narrow band in the 2.4 GHz ISM band in order to provide radio LAN at medium data rates of 256 kbps to 2 Mbps, where the spread spectrum technique is used. Japan has allocated another band near 18 GHz for high-rate (10 Mbps or more) radio LAN, where QAM (Quadrature Amplitude Modulation) and QPSK (Quadrature Phase Shift Keying) are used. In the frequency hopping system, 79 and 23 different frequencies are used for data transmission under the IEEE 802.11 scheme in the USA/Europe and Japan respectively. In direct sequence, the processing gain is proposed to be 10.4 dB in the IEEE 802.11 draft standard. A frequency hopping system can support a large number of channels compared to a direct sequence scheme, and frequency hopping also has superior performance when interference is high. However, direct sequence is simpler in design and implementation. Service-wise, IEEE 802.11 proposes to serve asynchronous and time-sensitive (synchronous/isochronous) services. In a radio LAN, when an access point is shared by all stations, all stations use the same hopping/sequence pattern; as such, there is always a fair chance of interference and collision.
The hidden node problem of radio networks, on the other hand, has a tendency to increase collisions. When two transmitters send data to a single receiver, the receiver can hear both transmissions, but the transmitters cannot hear each other. This is known as the hidden node problem of radio systems that depend on physical sensing of the carrier. Thus a good medium access control (MAC) strategy is essential in a radio system. In the IEEE 802.11 standard, the MAC is CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) rather than the CSMA/CD (Collision Detection) used in Ethernet; radio technique does not allow the collision detection mechanism. In the CSMA/CA technique, when a station senses a free carrier it still backs off transmission for a random amount of time. Thus if more than one station detects the free carrier at the same time, collisions may be avoided due to the differing random back-off periods. For tackling the hidden node problem, the IEEE 802.11 scheme uses two control frames, RTS (Request to Send) and CTS (Clear to Send). These are like the RTS/CTS handshake of the RS-232-C transfer protocol.
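The effect of the random back-off on collision avoidance can be illustrated with a toy Monte Carlo simulation. The sketch below is a simplification (one contention round, a fixed contention window, no hidden nodes or retransmissions); the station counts and window sizes are arbitrary example values.

    import random

    def collision_rate(stations=5, cw=16, trials=100_000):
        """Fraction of contention rounds in which the smallest random
        back-off slot is chosen by more than one ready station."""
        collisions = 0
        for _ in range(trials):
            slots = [random.randrange(cw) for _ in range(stations)]
            if slots.count(min(slots)) > 1:
                collisions += 1
        return collisions / trials

    print(collision_rate(stations=5, cw=16))   # around 0.15 for these values
    print(collision_rate(stations=5, cw=64))   # larger window -> around 0.04

Widening the contention window trades idle time for a lower collision probability, which is exactly the dial a CSMA/CA MAC tunes.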
HIPERLAN differs from IEEE 802.11 on a number of accounts. IEEE 802.11 does not support multi-hop communication: no access point or station can act as a data router or relay point. HIPERLAN, by contrast, supports multi-hop communication without a cellular architecture. It is targeted at higher data rates than IEEE 802.11 and may support 23.5294 Mbps. That is why a large, dedicated band of the order of 150 MHz (5.150-5.300 GHz) near 5 GHz and another band of 17.1-17.2 GHz near 17 GHz are allocated to HIPERLAN. HIPERLAN also aims to be indistinguishable from wired Ethernet LANs and to support some sort of isochronous services. For modulation, Gaussian minimum shift keying is used. A (31,26) BCH code is used for error control. It aims to achieve a BER (Bit Error Rate) of 10^-3 or less for fair service. The MAC in


HIPERLAN is different from both the CSMA/CD of Ethernet and the CSMA/CA of IEEE 802.11. In the HIPERLAN accessing scheme, if a station senses a free medium for 1700 bit-times, it can transmit immediately. If not, channel access is done through three phases of prioritization, elimination and yield. The HIPERLAN MAC can reduce the chance of collision to less than 3 per cent. IRLAN works on the IEEE 802.3 and IEEE 802.5 protocols. IRLAN is based on line-of-sight technology, and hence it can support high data rates: up to 10 Mbps for the Ethernet configuration and up to 16 Mbps for the token ring configuration. IRLAN is costly, and hence only some vendors are hopeful about the technology. An association of vendors has made its own standards for IRLAN.
The IEEE standards for the different LANs are 802.3 for the CSMA/CD bus LAN, 802.4 for the token passing bus LAN, 802.5 for the token passing ring LAN and finally 802.11 for the WLAN (Wireless LAN). In general the IEEE standards 802.3, 802.4 and 802.5 are commonly known as 802.x; these standards are for wired LANs.
As of today, the two basic transmission technologies in use to set up a WLAN (Wireless Local Area Network) are infrared light at THz frequencies and radio waves at GHz frequencies (2.4 GHz in the license-free ISM, i.e. Industrial, Scientific and Medical, band). Infrared technology uses either diffuse light, reflected off obstacles like furniture and walls, or directed light if a line-of-sight path exists between the sender and the receiver. A simple transmitter may be a light emitting diode or a laser diode, and the receiver can be a photodiode. But most wireless systems use radio waves. An IEEE 802.11 LAN can use both infrared and radio waves, but HIPERLAN 1 uses only radio waves. A comparison of infrared and radio wave transmission technology is given in Table (7).
Table 7: Comparison of Infrared and Radio Waves

                         Infrared Technology                     Radio Wave
Transmitter/Receiver     Very simple                             Not as simple as infrared
Data rate                Very low, due to the low bandwidth      Higher than infrared
                         of infrared; 115 kbps to 4 Mbps
Shielding                Easily shielded; cannot penetrate       Shielding is not so simple
                         obstacles like walls

Like the other 802.x standards, the 802.11 standard covers only the physical layer and the MAC sublayer. IEEE 802.11 supports three different physical layers: one using infrared, and another two basically using the 2.4 GHz ISM band, available license-free worldwide. The ISM bands are 902 to 928 MHz, 2.4000 to 2.4835 GHz and 5.7250 to 5.8250 GHz. Radio LANs operate in the high UHF and low microwave range; infrared LANs transmit just below visible light. At the physical level, the three different wireless specifications are: infrared LANs, Frequency Hopping Spread Spectrum (FHSS) LANs and Direct Sequence Spread Spectrum (DSSS) LANs. FHSS and DSSS LANs belong to the radio LANs. FHSS LANs are specified to support a data rate of 1 Mbps with a faster specification of 2 Mbps; DSSS LANs are likewise specified for 1 Mbps and 2 Mbps. FHSS is a spread spectrum technique that allows the coexistence of multiple networks in the same area by assigning different networks different hopping sequences. Under the IEEE 802.11 standard, 79 hopping channels for North America and Europe, and 23 hopping channels for Japan, are specified, each with a bandwidth of 1 MHz in the 2.4 GHz ISM band. A particular channel is identified by a


pseudo-random hopping pattern. The maximum transmitter power is 1 watt EIRP (Equivalent Isotropic Radiated Power) in the US and 100 mW EIRP in Europe. In DSSS, the separation is done by codes rather than by frequency; except for this, all other parameters, like bit rate and transmission power, remain the same as in FHSS. The frame formats of the 802.11 physical layer are shown in Fig. (5); the figures in brackets in the fields refer to the size of the fields in bits. In the FHSS frame, the synchronization field is a bit pattern of 010101.... The start frame delimiter (SFD) is 0000110010111101. PLW refers to the PDU Length Word, i.e. the length of the payload including the 32-bit error control CRC at the end of the payload; it ranges from 0 to 4,095. PSF is for signaling: of its 4 bits, only one bit is specified, to indicate either 1 or 2 Mbps. HEC is a 16-bit header error check field, for which the ITU-T CRC-16 standard is used. In the DSSS frame, the 128-bit synchronization field is made of scrambled 1 bits only. The 16-bit start frame delimiter is 1111001110100000. Signal refers to the bit rate. The service field is reserved for future use. Length indicates the payload size including the CRC field. HEC is used to check errors on the header, with the ITU-T CRC-16 standard.
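The 16-bit header error check mentioned above is based on the ITU-T CRC-16 generator polynomial x^16 + x^12 + x^5 + 1 (0x1021). A minimal bit-by-bit sketch in Python follows; note that the initial register value and any final inversion differ between specifications, so the 0xFFFF preset used here is an illustrative assumption.

    def crc16_itut(data: bytes, init: int = 0xFFFF) -> int:
        """Bitwise CRC-16 with the ITU-T polynomial 0x1021 (illustrative)."""
        crc = init
        for byte in data:
            crc ^= byte << 8                  # bring the next byte into the register
            for _ in range(8):
                if crc & 0x8000:              # top bit set: shift and apply polynomial
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    print(hex(crc16_itut(b"123456789")))   # -> 0x29b1, the well-known check value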
The MAC data frame of IEEE 802.11 is as shown in Fig. (6); the figures in brackets in each field refer to the size of the field in bytes. Frame control is used for several purposes, such as the protocol version and the type of the frame. Duration ID serves the virtual reservation mechanism. Addresses 1 to 4, with 48 bits each, are used as in the other 802.x LANs. Sequence control is used for acknowledgement and for error and flow control. The CRC is used as in the other 802.x LANs.
| Synchronization (80) | SFD (16) | PLW (12) | PSF (4) | HEC (16) | Payload (variable) |

(a) For FHSS

| Synchronization (128) | SFD (16) | Signal (8) | Service (8) | Length (16) | HEC (16) | Payload (variable) |

(b) For DSSS

Fig. 5: Physical frame format of IEEE 802.11 radio WLAN
| Frame Control (2) | Duration ID (2) | Address 1 (6) | Address 2 (6) | Address 3 (6) | Sequence Control (2) | Address 4 (6) | Data (0-2312) | CRC (4) |

Fig. 6: Data frame format of IEEE 802.11
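To make the byte counts of Fig. 6 concrete, the sketch below packs the fixed 30-byte part of such a MAC header with Python's struct module. The field values are placeholders, and the little-endian ('<') byte order reflects the usual 802.11 transmission order for multi-byte control fields.

    import struct

    def mac_header(frame_control, duration, addr1, addr2, addr3, seq_ctrl, addr4):
        """Pack the 30-byte fixed IEEE 802.11 MAC data-frame header of Fig. 6."""
        return struct.pack("<HH6s6s6sH6s",
                           frame_control, duration,
                           addr1, addr2, addr3, seq_ctrl, addr4)

    hdr = mac_header(0x0208, 0,                     # placeholder control fields
                     bytes(6), bytes(6), bytes(6),  # addresses 1-3 (all-zero here)
                     0, bytes(6))                   # sequence control, address 4
    print(len(hdr))   # -> 30 bytes: 2+2+6+6+6+2+6, matching Fig. 6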

The world is rapidly shifting towards wireless and faster networks. In such a rapidly changing scenario, let us see how one of the oldest local area networks, namely Ethernet, is keeping pace with the changes. Ethernet dominates as a LAN (Local Area Network), as it is a time-tested, highly reliable, scalable, elegant and low-cost network. IEEE 802.3 Ethernet is the established corporate LAN technology, and most of its implementations are IEEE 802.3u or 100Base-T, which defines a 100 Mbps data rate using four pairs of twisted-pair wiring or Ethernet cable. The family tree of Ethernet is shown in Fig. (7).
The Ethernet was originally a wired network. It follows the IEEE 802.3 standard for logical link control, by which several nodes can share a single physical medium. The physical layer implementation is made with wires.


Ethernet
  Wired (802.3)
    Conventional Ethernet:
      10 Base5 (thick coaxial)
      10 Base2 (thin coaxial)
      10 Base T (UTP)
    Fast Ethernet:
      100 Base T4 (CAT 3 UTP)
      100 Base TX (CAT 5 UTP)
    Gigabit Ethernet:
      1000 Base LX
      1000 Base SX
      1000 Base CX
      1000 Base T (CAT 5+)
    10 Gigabit Ethernet (under IEEE 802.3ae)
  Wireless (802.11)
    802.11b (11 Mbps)
    802.11a (125 Kbps-54 Mbps)
    802.11g (54 Mbps)

Fig. 7: Ethernet as it grows.

IEEE 802.11 mainly provides connectivity to the corporate LAN; it is still very costly for the home LAN.

3.12 IEEE 802.11 Architectures


In the IEEE 802.11 LAN standard, there are two different configurations of a network: ad hoc and infrastructure. In the ad hoc network, there is no fixed structure to the network and no fixed access point: the computers are brought together to form a network on the fly, as shown in Fig. (8), and usually every node is able to communicate with every other node. A good example of this configuration is the unscheduled meeting where officials bring laptop computers together to communicate and share information to arrive at a decision. In this type of configuration it is difficult to fix the roles of the nodes, but algorithms such as the spokesman election algorithm (SEA) may be used to elect one machine as the master of the network with the others as slaves. To know who's who in the ad hoc network, a broadcast-and-flooding method may be used. The infrastructure network (Fig. 9) uses fixed network access points with which mobile nodes can communicate. These network access points may also be connected to landlines to widen the LAN's capability by bridging wireless nodes to other wired nodes. As and when service areas overlap, handoffs can occur. The structure is very similar to the current cellular networks around the world.
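The election idea can be illustrated with a much-simplified stand-in for SEA: every node floods its identifier, and after the exchange all nodes agree that the highest identifier is the master. The Python sketch below is a hypothetical toy, not the actual SEA algorithm.

    def elect_master(node_ids):
        """Toy election: after flooding IDs, the highest ID becomes master."""
        heard = set(node_ids)          # stands in for the broadcast-and-flood exchange
        master = max(heard)
        slaves = [n for n in heard if n != master]
        return master, slaves

    master, slaves = elect_master([17, 4, 42, 23])
    print(master, sorted(slaves))      # -> 42 [4, 17, 23]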


Fig. 8: Ad hoc network: computers communicating directly with one another, with no access point.

Fig. 9: Infrastructure network: computers communicating through fixed access points.

Wireless computing, wireless communication and wireless networks shall be the rule of the future. In such a scenario, WLAN will play a major role. In the last few decades, two important wireless technologies that emerged as viable and promising are LEO (Low Earth Orbit satellites) and 3G (Third Generation) cell phones, but both technologies have failed to meet the expected aspirations. Here WLAN has come out as an alternative. Presently, under IEEE 802.11, two major WLAN standards are operating: 802.11a and 802.11b (Table 8). The first 802.11 standard is 802.11b, which was approved by the IEEE in 1999; it is the first standard that broke the wired brethren of the 802.3 wired Ethernets. The 802.11b standard transports data at 11 Mbps using CCK (Complementary Code Keying) in the 2.4 GHz band. The 802.11b has a very successful track record: it is learnt that sales of IEEE 802.11b wireless LANs have increased dramatically from 5,000 to 70,000 units per month since early 2000. It is also reported that "The growing popularity and ubiquity of WLANs will likely cause wireless carriers to lose nearly a third of 3G revenue as more corporate users begin using WLANs to connect to the Internet and office networks". Many analysts feel that the ease of installing and using WLANs is making them an alternative to mobile 3G. In contrast to the reported $650 billion spent worldwide by carriers to get ready for 3G, setting up a WLAN hotspot requires only an inexpensive base station, a broadband connection and one of many interface cards using 802.11b. But the speed of 802.11b is one-tenth that of wired Ethernet. Therefore the IEEE, to have high-speed wireless access, approved the 802.11a standard concurrently. The IEEE 802.11a standard provides scalable data rates from 125 Kbps to 54 Mbps, in increments of 125 Kbps, with OFDM (Orthogonal Frequency Division Multiplexing) in the 5 GHz band; the 54 Mbps rate is known as the turbo rate. The IEEE 802.11b standard defines only the two lower levels of the OSI (Open Systems Interconnection) reference model: the physical layer and the Data


Link Layer Medium Access Control (MAC) sublayer. IEEE 802.11b uses two pieces of equipment: a wireless station, which is usually a PC or a laptop with a wireless network interface card (NIC), and an Access Point (AP), which acts as a bridge between the wireless stations and the Distribution System (DS) or wired networks. There are two operation modes in IEEE 802.11b, Infrastructure Mode and Ad Hoc Mode, as discussed earlier for the IEEE 802.11 standard. The physical layer covers the physical interface between devices and is concerned with transmitting raw bits over the communication channel. IEEE 802.11b supports different data rates (Table 9).
The problems of 802.11a are many. It does not support different devices with different speeds, designs and complexities. The standards 802.11a and 802.11b are not interoperable. 802.11a is presently used only in North America, while 802.11b is used in the whole of Europe and Asia.
The IEEE 802.11e is tasked with a new protocol to improve on the non-guaranteed quality of service of ad hoc connectivity. The IEEE task group G for 802.11 is now deliberating on the next-generation standard for 802.11 that would transmit data at the speed of wired Ethernet. The new standard will be 802.11g. The mission of the 802.11g standard is to have wireless access at the turbo speed of 54 Mbps while maintaining interoperability.

3.13 GIGABIT ETHERNET

Over the decades the speed of Ethernet has grown each time by a factor of 10, from 10 Mbps to 100 Mbps to 1000 Mbps (1 Gbps). An Ethernet that carries data at the rate of 1 Gbps or more is known as Gigabit Ethernet. The physical medium initially recommended for Gigabit Ethernet is fiber (Table 10), but another IEEE committee is considering the use of UTP cable for Gigabit Ethernet, called 1000 Base-T.
Keeping with the growth of speed, the next Ethernet will be 10 Gbps. The IEEE 802.3ae standardization is going on for 10 Gbps Ethernet. The goal is to achieve very high speed transport while keeping maximum compatibility with the already installed base of Ethernet 802.3. The 10 Gbps Ethernet will provide almost zero-latency service to users; thus even when the coverage area is increased, remote applications and services will appear as local.
Table 8: IEEE Standards for LAN

IEEE Standard   Definition
802.0           Sponsor Executive Committee
802.1           Higher Layer LAN Protocols
802.2           Logical Link Control
802.3           Medium Access Control (MAC) of CSMA/CD Bus LAN (example: Ethernet)
802.3ae         10 Gbps Ethernet
802.3af         Data Terminal Equipment power via balanced cabling for the 802.3 interface
802.4           MAC of Token Passing Bus LAN
802.5           MAC of Token Passing Ring LAN
802.5t          100 Mbps Token Ring LAN
802.5v          Gigabit Token Ring LAN
802.5z          Link Aggregation
802.6           MAN Working Group
802.7           Broadband Technical Advisory Group
802.8           Fiber Optic Technical Advisory Group
802.9           Isochronous or Integrated Services LAN (ISLAN)
802.10          Interoperable LAN Security Working Group
802.11          Wireless LAN (WLAN) Working Group
802.11a         54 Mbps WLAN
802.11b         11 Mbps WLAN
802.11g         Next Generation WLAN
802.12          Demand Priority Working Group
802.13          Inactive
802.14          Cable Modem or Cable TV
802.15          Wireless Personal Area Network (WPAN)
802.16          Broadband Wireless Access (BBWA)

Table 9: Different data rates of IEEE 802.11b

IEEE 802.11b Data Rate Specifications
Data Rate (Mbps)   Code Length             Modulation   Symbol Rate (MSps)   Bits/Symbol
1                  11 (Barker sequence)    BPSK         1                    1
2                  11 (Barker sequence)    QPSK         1                    2
5.5                8 (CCK)                 QPSK         1.375                4
11                 8 (CCK)                 QPSK         1.375                8

Table 10: Gigabit Ethernet

Gigabit Ethernet                          Mode supported     Fiber diameter    Maximum distance
                                                             (micron)          per segment
1000 Base LX (long-wave laser over        Single             -                 10 km
single-mode and multimode fiber)          Multi              50                550 m
                                          Single             50                3 km
                                          Multi              62.5              440 m
1000 Base SX (short-wave laser            Multi              50                550 m
over multimode fiber)                     Multi              62.5              260 m
1000 Base CX (balanced shielded           Balanced shielded cable              25 m
150 ohm copper cable)
1000 Base T (UTP cable)                   UTP cable                            100 m


3.14 Trade Off

In the speed jargon, wired Ethernet still outplays the wireless Ethernets, but wireless LANs are also picking up speed. The trade-off lies in the wired LAN's speed versus the wireless LAN's flexibility, reliability and low maintenance cost; or, looking at today, IEEE 802.3ae versus 802.11g. Alan McAdams, chair of the IEEE-USA Committee on Communications and Information Policy (CCIP), said "Gigabit Ethernet over fiber will allow the transfer of Ethernet technology, concepts and benefits from Local Area Networks to Metropolitan Area Networks and Regional Area Networks". It is reported in the IEEE's The Institute of Oct. 2002 that "The June approval of the IEEE 802.3ae standard for 10 gigabit per second Ethernet has the potential to allow Gigabit Ethernet over fiber (GEF) technology to supplant current telecommunications infrastructures with its cost, speed and distance advantages".

3.15 Integrating Wireless Protocols

In nature, only one thing is permanent, and that is change. This law of nature appears to be equally applicable to Ethernet, which is ever changing and growing. The present scenario is that of cellular data and of wireless LAN data, respectively under the protocols of TIA/EIA (Telecommunications Industry Association/Electronics Industry Alliance) IS-856 and IEEE 802.11. Whereas IEEE 802.11 serves short-range high-speed data networks, IS-856 is meant for wireless voice and data. The two are complementary: they may take advantage of each other and integrate to provide a typical bridge (Fig. 10) to satisfy the demand for access to the wireless Internet. On the other hand, IEEE 802.11 will take an immense part in the seamless integration of total wireless access and networking in the next-G era (Figs. 10 and 11).

Fig. 10: Typical integration for a corporate connection: computers and independent devices reach the corporate network through wireless stations and an IEEE 802.11 access point, with an IS-856 link on the cellular side.


Fig. 11: Seamless integration of wireless access and networking with IEEE 802.11: 1G/cellular with conventional WLAN, 2G/digital with WLAN 802.11b and 3G/data with WLAN 802.11a converge towards seamless next-G integrated mobile wireless.

3.16 IEEE 802.15.4 Standard: Low Data Rate, Low Cost Wireless Home Networking Solution
Due to the application of networking almost everywhere, several attempts are being made to offer solutions that aim to be flexible, cost effective and reliable, and to consume less power, features particularly important for home or residential networking. In wired communication, DSL (digital subscriber loop) technology (discussed later) is one important driver; its cost effectiveness is achieved by utilizing the existing copper line in the local loop. But wireless communication and networking have an edge over wired technologies, for which a wireless local loop solution is needed. The wireless networking and communication technologies that have appeal for voice and data applications in residential or home services include cellular, cordless and IEEE 802.11b. The considerations of cost effectiveness and low power consumption have motivated the development of a new standard, IEEE 802.15.4, for home networking with a low data rate wireless solution.
The initiative to develop a standard for low-powered and low-cost home wireless networking was taken by IEEE working group 15 in 2000. Besides home automation, the standard is poised to be applied in different industrial services like industrial control, automotive and field sensing (monitoring tire pressure, sensing soil moisture, pesticide and pH levels) and disaster management (sensing and determining the location of a disaster). In home applications, the services of the standard will be PC peripherals (keyboard, PDA, mouse), consumer electronics (TV, radio, VCR, CD etc.), automation (heating, air conditioning, ventilation, window and door locks), remote control, health monitoring, security and PC-enabled


services. These applications need data rates ranging from a few kilobits per second (kbps) to 115.2 kbps. The acceptable delay or latency for these services ranges from 15 ms to 100 ms.
The major features of the proposed IEEE 802.15.4 standard are:
- Like all other IEEE 802 standards, IEEE 802.15.4 refers to the lower layer specification. In reference to the OSI ISO 7-layer protocol, IEEE 802.15.4 covers the DLL (Data Link Layer). The DLL is split into two sublayers: the LLC (Logical Link Control) and the MAC (Medium Access Control) sublayer. The LLC is as per the other specifications of 802.3 etc.; IEEE 802.15.4 defines a separate MAC sublayer.
- IEEE 802.15.4 recommends two versions of the physical layer: (1) 868/915 MHz and (2) 2400 MHz.
- IEEE 802.15.4 supports both star and peer-to-peer networks, including ad hoc networks.
- The generic frame format of IEEE 802.15.4 is made of frame control and sequence number fields that are respectively 2 and 1 bytes. The address field is variable from 0 to 20 bytes. The payload is variable, but the full MAC frame is limited to 127 bytes. The frame check sequence is 2 bytes and uses a 16-bit CRC. In the physical layer frame, the total header length is 6 bytes, with a preamble of 4 bytes, a start-of-packet delimiter of 1 byte and a physical header of 1 byte. The payload, being the MAC frame, is limited to 127 bytes (see the sketch following this list).
- The header fields of the physical layer frame format are: a 4-byte preamble used for synchronization, a 1-byte start-of-packet delimiter used to indicate the end of the preamble, and a 1-byte physical header used to specify the length of the physical service data unit.
- The physical layers in IEEE 802.15.4 use DSSS (direct sequence spread spectrum) methods with different channel frequencies and modulation parameters. The DSSS method is chosen in order to use low-cost ICs for implementation, by which the cost of the system is kept low.
- IEEE 802.15.4 aims to provide excellent battery life and low transmit power; devices aim to sleep for as much as 99.9 percent of the time.
- Simplicity is another attraction of IEEE 802.15.4.
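As promised in the list above, here is a minimal Python sketch of the physical-layer framing: a 4-byte preamble, a 1-byte start-of-packet delimiter (the value 0xA7 is taken from common descriptions of the standard) and a 1-byte physical header carrying the payload length, with the 127-byte MAC frame limit enforced.

    def phy_frame(mac_frame: bytes) -> bytes:
        """Build an IEEE 802.15.4-style physical frame (illustrative sketch)."""
        if len(mac_frame) > 127:                  # payload limited to 127 bytes
            raise ValueError("MAC frame exceeds the 127-byte limit")
        preamble = bytes(4)                       # 4-byte preamble (all zeros)
        sfd = bytes([0xA7])                       # 1-byte start-of-packet delimiter
        phy_header = bytes([len(mac_frame)])      # 1-byte length field
        return preamble + sfd + phy_header + mac_frame

    frame = phy_frame(b"\x01\x02" + bytes(10))
    print(len(frame))   # -> 18: 6-byte physical header section + 12-byte MAC frame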

3.17 IEEE 1394 for Home Network

A recognized definition of a home network is: "A home network interconnects electronic products and systems, enabling remote access to and control of those products and systems, and any available content such as music, video or data". Several standards are in prospect for application in home networks. For example, IEEE 802.11 is the most talked-of standard for the wireless interface. The standard has got several modifications in order to meet the high speed requirements as well as the other different requirements of home networking. Unfortunately the standard is still costly for home networks, and it is yet to overcome many obstacles to widespread deployment. Several modifications are proposed in the IEEE 802.11 standard for different requirements; for example, task group G is proposing 802.11g, which would transmit data at the speed of wired Ethernet. The IEEE 802.3ae standardization is going on for 10 Gbps Ethernet, whose goal is to achieve very high speed transport while keeping maximum compatibility with the already installed base of Ethernet 802.3; the 10 Gbps Ethernet will provide almost zero-latency service to users, so that even when the coverage area is increased, remote applications and services will appear as local. But the cost factor is not considered in such modifications.


The IEEE 1394 working group defined a standard known as IEEE 1394. Truly speaking, the standard originated with Apple Computer for desktop LANs. IEEE 1394 is a low-cost digital interface that can work over existing copper, fiber and coaxial cables too. The Broadband Home Company has used coaxial cable to extend the IEEE 1394 interface beyond the local audio and video cluster; the solution so provided looks like a virtual IEEE 1394 wire connection to other IEEE 1394 networks. IEEE 1394 supports hot plugging, thereby allowing users to add and/or remove devices while the interface bus is active. It provides both hardware and software specifications for peer-to-peer connection at different operating speeds of 100, 200 or 400 Mbps; speed enhancements may go on to support 800, 1600 and 3200 Mbps. It supports a scalable architecture to meet the different speeds of different requirements, thereby providing a cost-effective solution. The standard integrates communication, entertainment and computing to provide a single digital interface for consumer multimedia. It supports both the asynchronous and the isochronous types of data transfer required in home networks. Asynchronous transfer relates to conventional data/computer file transfer; but for multimedia applications of voice and video, where delay is the most sensitive issue, transport at guaranteed delay is done by the synchronous or isochronous technique, which is duly supported by IEEE 1394. It supports high-speed communication at a low-cost interface. IEEE 1394 has been recognized as a digital interface by many organizations for different purposes, including entertainment, consumer applications, digital TV, home multimedia, conventional file transfer and digital video conferencing.
A typical integration of several networks, including IEEE 1394, in a single cable is shown in Fig. (12).

Fig. 12: Typical several networks in a cable: phones, TV channels, Ethernet and IEEE 1394 occupy separate frequency bands (roughly 2.5 MHz, 5-55 MHz, 1000 MHz and 1500 MHz respectively) on a single cable, with infrared for remote control.

3.18 Paging
Paging is a one-way message system, unlike the two-way interactive mode of cellular communication. Pagers transfer messages over a wireless network and thereby support mobility: a person having a pager can be contacted anywhere at any time. Pagers are quite useful for doctors, journalists etc. Paging is basically a back-up system to telephones, and it enhances the productivity of telephones. Pagers work on a simple technique. The caller dials the paging center through the usual telephone and leaves the message with the operator, along with the callee's pager number. The operator then sends the message to the callee's pager, where it is flashed out along with an activating signal. There are two basic paging transmission standards: POCSAG (Post Office Code Standard Advisory Group) and RDS (Radio Data System). In India the frequency allocation for POCSAG is 134-168 MHz, whereas the idle band of AIR's (All India Radio's) existing FM network is used for RDS; AIR is operating RDS paging, while POCSAG is under the DoT (Department of Telecommunication). Different types of pagers are available in the market, like numeric, alphanumeric and the recently introduced English type. In advanced countries, two-way paging is being developed.


3.19 VSAT
Satellite communication started with the pioneering work of Arthur C. Clarke. He showed that using just three satellites, placed 120° apart from each other at a height of about 36,000 km from the earth's surface, worldwide communication is possible. Satellites placed in orbits about 36,000 km from the earth's surface are known as geostationary satellites, as they rotate in their orbits once in 24 hours; therefore, from any point on earth, these satellites appear stationary.
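The 24-hour figure can be checked from Kepler's third law, T = 2π sqrt(a^3/μ), where a is the orbital radius and μ the earth's gravitational parameter. A quick Python check, using standard values:

    import math

    MU = 3.986004418e14        # earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000        # mean earth radius, m

    def orbital_period_hours(altitude_km):
        a = R_EARTH + altitude_km * 1000      # radius of the circular orbit
        return 2 * math.pi * math.sqrt(a ** 3 / MU) / 3600

    print(round(orbital_period_hours(35_786), 2))   # -> 23.93 h (one sidereal day)
    print(round(orbital_period_hours(780), 2))      # Iridium-like LEO: about 1.67 h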
Due to two big advantages of satellite communication over other means of communication, satellite communication has a big appeal to users. It is said that going for satellite means going for wireless communication. Wireless communication is more reliable, flexible and adaptable than wired communication; indeed, our natural day-to-day communication by acoustics is itself wireless. Another saying is that going for satellite means going for wide area coverage, and wide area coverage has a natural attraction.
Over the years, hence, satellite communication has diversified its areas of application and technologies. One of the major technologies is VSAT.
VSAT (Very Small Aperture Terminal) is a cost-effective technology meant for networking computers and terminals, mainly for the purpose of data communication. A VSAT network may be a wide network and may be extended to any remote location easily.
The basic components of VSAT networks are:
1. a geostationary satellite,
2. a master earth station or hub,
3. micro earth stations (VSAT stations or nodes).
A VSAT node is made of VSAT ports, a VSAT controller and a VSAT antenna. The size of the antenna is 1.2 m to 1.8 m.
Basically two types of topology are used in VSAT communication: star and mesh. Star topology uses TDM-TDMA (Time Division Multiplexing/Time Division Multiple Access) or CDMA (Code Division Multiple Access), and mesh topology uses DAMA-SCPC (Demand Assigned Multiple Access-Single Channel Per Carrier). VSAT communication is broadcast communication. VSAT nodes cannot communicate directly with each other; they communicate via the master earth station or hub. Naturally, VSAT communication is known as two-hop communication. This means a VSAT signal from one node has to travel at least 36,000 × 2 × 2 = 144,000 km to reach another node, for which the delay shall be around 480 msec at least. Quality voice and video communication do not allow more than about 80 msec of delay between transmitter and receiver, but delay is not an issue for data transport. VSAT communication is, hence, most suitable for data communication.
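The 480 msec figure is simply distance over the speed of light. A one-line Python check, assuming the 36,000 km slant distance used above:

    C = 3e8                                   # speed of light, m/s

    def two_hop_delay_ms(slant_km=36_000):
        """VSAT node -> satellite -> hub -> satellite -> node: four legs."""
        return 4 * slant_km * 1000 / C * 1000

    print(round(two_hop_delay_ms()))   # -> 480 ms, matching the figure in the text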
The characteristics of VSAT are:
1. Cost effectiveness is a big advantage of VSAT communication. An STD call between Delhi and Mumbai can cost around Rs. 40, whereas a VSAT call may be about Rs. 10.
2. Reliability and flexibility are always present in VSAT communication, as it is wireless communication. A leased telephone line can have at most 90% up time; a VSAT link shall have around 99.5% up time. Being wireless, a VSAT network is also easy to expand; this is how flexible a VSAT network is.
VSAT communication is useful for huge organizations like DVC, ONGC, IOC, BHEL, etc., as a means of a cost-effective data communication system within the organization. A VSAT is a small dish antenna of 60 cm to 120 cm which communicates


with central hubs and terminals via satellite. VSAT is cheaper than conventional earth station communication using satellites. Power budget calculation [26] shows that, in order to meet the required bit energy to noise ratio, a large antenna is essential for covering a wide area, and cost increases with antenna size; for intra-organizational communication a small antenna is justified. DoT has allocated the extended C band for VSAT communication in India. Nowadays VSAT can also provide cost-effective telephony and fax services. VSAT is a low speed 1200 bps data communication system and employs TDMA for access.

3.20 Mobile Satellite Service

MSS (Mobile Satellite Service) is a form of cellular-like wireless communication. With terrestrial wireless networks, wireless communication is either not economically viable (examples are remote areas, semi-hill areas etc.) or not physically possible (overseas, over large mountains). In such cases, wireless communication via satellites is an alternative proposition. Satellites can be of three types: GEOs (Geo-stationary Satellites), MEOs (Medium Earth Orbit Satellites) and LEOs (Low Earth Orbit Satellites). The orbital altitude of GEOs is 35,786 km, whereas those of MEOs and LEOs are respectively of the order of 10,000 km and 1,000 km. INMARSAT, INTELSAT-5 and the INSATs are examples of GEOs. One example of MEOs is Odyssey, with 12 satellites at an altitude of 10,600 km; Odyssey is proposed to use CDMA technology and ground-based switching for voice communication. Project 21 is another example of MEOs; in Project 21, 10 satellites are used, placed at an altitude of 10,500 km, and the project proposes to use TDMA technology to support both voice and data. Examples of LEOs are Iridium with 66 satellites at an altitude of 765 km, Globalstar with 48 satellites at an altitude of 1,389 km, and Ellipso with 24 satellites at altitudes of 429-2,903 km; they are respectively proposed to use FDMA/TDMA, CDMA and FDMA/CDMA technology. Iridium and Globalstar are to support voice and paging. A good account of GEOs, MEOs and LEOs can be found in [29,34]. GEOs have two major disadvantages: they are costly systems (due to high transmitter power and large antenna size), and the round-trip propagation delay is about 270 ms. A large round-trip propagation delay is unwarranted in voice and in real-time interactive communication. LEOs and MEOs can overcome the problem, but in these systems many satellites are required to be placed in orbit, and as the satellites are not geostationary, handoff operations among satellites are required for continuous communication. MSS can therefore support mobility. In MSS, the MEOs and LEOs are the base stations, and they are in motion; here lies the difference between cellular communication and MSS: in MSS it is actually the base stations that are mobile. Another disadvantage of LEOs and MEOs is their short life span. HEOs (Highly Elliptical Orbit Satellites) may also be used in MSS for wireless communication. A good account of MSS in personal communication is found in [30,31].
Satellite Communication (GEOs/MEOs/LEOs)
In general, going for satellites means going for two important philosophies: going wireless and going for large-area, even whole-world, coverage. These have greatly influenced the use of satellites in communication. To date, the major satellites involved in communication are GEOs (Geostationary Satellites). GEOs are placed in an orbit about 35,800 km above the earth's surface, and therefore they move around the earth once in 24 hours. Thus GEOs look stationary relative to the earth from anywhere, and communication using a fixed antenna is possible. Worldwide coverage with just three GEOs is possible if they are placed equidistantly, 120° apart. But there are a number of disadvantages with GEO-based communication: (1) GEO has poor elevation at higher latitudes and no coverage of the polar regions; (2) mobile communication using GEO under INMARSAT (International Maritime Satellite Organization) can


provide voice, data, telex and facsimile services to ships, but this requires very high power for both the terminal and the spacecraft; (3) the large distance between earth and GEO causes a high propagation loss of about 200 dB and a one-way time delay of about 350 msec. Such a long delay is not acceptable to 80% of users.
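The quoted "about 200 dB" can be verified with the free-space path loss formula L = 20 log10(4πd/λ). The Python sketch below assumes a 6 GHz C-band carrier; the frequency choice is ours, for illustration only.

    import math

    C = 3e8   # speed of light, m/s

    def fspl_db(distance_m, freq_hz):
        """Free-space path loss: 20*log10(4*pi*d/lambda)."""
        wavelength = C / freq_hz
        return 20 * math.log10(4 * math.pi * distance_m / wavelength)

    print(round(fspl_db(35_786_000, 6e9), 1))   # -> about 199 dB, i.e. "about 200 dB"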
MEOs are placed at orbital heights of about 10,000 km or above, whereas LEOs are placed at orbital heights of 2,000 km or less. By this, LEOs overcome the disadvantages of GEO-based communication pointed out above. Besides, there are some major differences between LEO-based and GEO-based communication.
# Communication in LEO is done through a constantly moving and tracking switching network and antenna, rather than the fixed system of GEO. Mobile communication in LEO is based on relative mobility: the LEO systems move, and the moving users appear stationary. For example, in the Iridium system the LEO speed relative to earth is 26,676 km/hr, whereas the average mobile speed is around 90 km/hr.
# GEO-based communication is single-hop (earth-satellite-earth) communication, while LEO-based communication is multi-hop communication.
# Under LEO, communication across the world is low cost. For example, while a typical GEO can provide about 10,000 channels for global services, a LEO constellation can provide 7,000 channels for regional services and 35,000 to 70,000 channels for global services. This typically means the cost per channel for global services in LEO is about one half of that for global services in GEO. LEO is effective for global services rather than regional services.
# LEOs are smaller than GEOs. The mass of a LEO satellite ranges from 50 to 700 kg (whereas that of a GEO ranges from 1,800 to 2,000 kg). Therefore economic multiple launching of LEOs is possible.
The variety of services offered by satellites was divided into three groups by the ITU (International Telecommunication Union). These are: (1) Fixed Satellite Service (FSS), which offers radio communication services between fixed locations on earth through one or more satellites; (2) Broadcast Satellite Service (BSS), which provides direct reception of satellite broadcasts by the public and/or community; and (3) Mobile Satellite Service (MSS), which provides communication between mobiles through one or more satellites. As in the past twenty-five to thirty years, FSS and BSS shall continue to be served mainly by GEOs; LEOs shall dominate MSS. It is said, "The LEO and MEO systems offer an innovative approach to providing service to a country, a region, or to the whole world. Instead of transmission to and from a fixed point in the sky (as for geostationary satellite systems), the user transmits to and receives from a network of lower altitude satellites that move overhead, with some satellites disappearing from view as others come over the horizon. The system can provide service to all parts of the world as the low altitude satellites pass over different parts of the earth."
LEO Systems
LEOs are classified into two groups: Little-LEOs and Big-LEOs. The Little-LEO group consists of satellites which are small in size and low in weight. Little-LEOs are expected to provide services at only low bit rates, of the order of 1 kbps (kilobits per second), and they are placed near an orbital height of around 1,000 km; naturally, they are used for non-voice services. The frequency bands allocated for mobile satellite services (MSS) under the Little-LEO group are 148-150.50 MHz (uplink) and 137-138 MHz (downlink).
The Big-LEO group of satellites is expected to provide near-toll-quality voice service and other related services like paging, data communication, facsimile and position location. The Big-LEO group also contains ICO (Intermediate Circular Orbit, i.e. MEO) satellites. The three important Big-LEO systems are Globalstar, Odyssey and Iridium.


Iridium
The Iridium system was proposed by Motorola to provide global services of voice, data, fax, paging and RDSS, and was scheduled to operate in 1998. The cost of the system is about US$ 3.4 billion. The system is composed of 66 (originally 77) satellites, with 11 satellites in each of 6 (originally 7) polar orbits placed at an orbital height of 780 km above the earth's surface. The satellites shall provide 3,168 cells, of which only 2,150 cells shall remain simultaneously active to provide global coverage of mobile/cellular telephone service. In the system, the same frequency band, 1616-1626.5 MHz, is used for both uplink and downlink communication on a time-shared basis. A message from one telephone to another is transmitted from mobile to satellite and then over 23 GHz (22.55-23.55 GHz) intersatellite links until the satellite viewing the destination mobile is reached. The system uses FDMA (Frequency Division Multiple Access) and TDMA (Time Division Multiple Access) on the uplink and downlink respectively. The connection to the terrestrial network is done via earth station gateways. There are 1,100 voice circuits per satellite. The voice service rate is 2.4 kbps, and the data service rate is 7.2 kbps. The modulation technique used in the system is QPSK (Quadrature Phase Shift Keying). The footprint diameter of each satellite is 4,700 km, and therefore satellite visibility is 11.1 minutes. The satellite life span is rather short, at 5 years. The satellite antenna type is fixed, six feet in size. There are 48 beams per satellite, and therefore 3,168 beams in the system in total. The feeder uplink and downlink frequencies are 27.5-30 GHz and 18.8-20.2 GHz. The minimum and maximum one-way propagation delays are respectively 2.6 msec and 8.22 msec. The airtime charge per minute is US$ 3.0.
The Iridium system is working, but not to the level of satisfaction expected before launch.
Globalstar
Qualcomm proposed the Globalstar LEO system to provide services of voice, data, facsimile and RDSS. The Globalstar system uses 48 satellites in 8 polar orbits at an orbital height of 1,400 km. It provides global coverage and can work with the existing PSTN (Public Switched Telephone Network). Calls are granted through satellites only when access is available to the terrestrial network; the PSTN can be used via gateways for long distance communication. The system does not support intersatellite links. The gateways to the PSTN use 6.5 GHz and 5.2 GHz respectively for uplink and downlink communication. The access technology for MSS is CDMA (Code Division Multiple Access), via L-band (1610.0-1626.6 MHz) and S-band (2483.5-2500.0 MHz) for uplink and downlink communication. The modulation technique used in the system is QPSK. The system can support 2,000-3,000 voice circuits per satellite. The voice and data service rates in the system are 2.4 to 9.6 kbps. The minimum and maximum one-way propagation delays are respectively 4.63 msec and 11.5 msec. The mobile terminal cost is about US$ 750, and the airtime charge per minute is 30 cents.
The satellite footprint diameter is 5,850 km. Satellite visibility is 16.4 minutes and the lifespan is 7.5 years. The satellite on-orbit mass is 450 kg. The system cost is US$ 1.8 billion.
The satellite antenna is fixed, 3 feet in size. The feeder uplink and downlink frequencies are respectively 5.091-5.250 GHz and 6.875-7.875 GHz. The satellite output power is 1,000 watt. There are 16 beams per satellite and 768 beams in total.
Comparing Iridium with Globalstar, a report says Globalstar has capital costs (at $1 billion) one-half Iridium's, circuit costs one-third Iridium's and terminal costs (at $750 each) one-fourth Iridium's. With no intelligence in space, Globalstar relies entirely on the advance of intelligent phones and portable computer devices on the ground; it is the Ethernet of satellite architectures. Costing one-half as much as Iridium, it will handle nearly 20 times more calls.


The advantages of Globalstar stem only partly from its avoidance of complex intersatellite connections and its use of infrastructure already in place on the ground. More important is its avoidance of exclusive spectrum assignments. Originating several years before spread-spectrum technology was thoroughly tested for cellular phones, Iridium employs time division multiple access, an obsolescent system that requires exclusive command of spectrum but offers far less capacity than code division multiple access. It is said that Iridium's voice service cannot compete with Globalstar's cheaper and more robust CDMA system. It is also reported that the Iridium satellites together use 80% more power than Globalstar's, yet employ antennas nearly twice as large and offer 18.2 times less capacity per unit area.
Odyssey
TRW proposed a system known as Odyssey to provide voice, data, facsimile and RDSS services on a global basis. In the system, 12 satellites in 3 polar orbits are used. The orbital height is 10,370 km above the earth's surface, and therefore this system is better known as a MEO system. The orbital period of the satellites is 359.5 minutes and the visibility is 94.5 minutes. The satellite mass is 2,207 kg. The footprint diameter of each satellite is 10,540 km.
The access technology of the system is CDMA, and the modulation technique is QPSK. The system operates in the L and S bands: the mobile uplink and downlink frequencies are respectively 1610.0-1626.5 MHz (L-band) and 2483.5-2500.0 MHz (S-band). The system supports 3,000 to 9,500 voice circuits per satellite. Voice and data services are provided at 4.8 kbps and 2.4 kbps respectively. The minimum and maximum delays are respectively 34.6 msec and 44.3 msec. The airtime charge in the system is US$ 0.65 per minute.
The satellite antenna type is steerable. The uplink and downlink feeder frequencies are 29.1-29.4 GHz (Ka-band) and 19.3-19.6 GHz (Ka-band) respectively. The system supports 61 beams per satellite, thereby supporting 732 beams in total. The satellite output power is 6,177 watt.
Ellipso
Ellipsat proposed a LEO satellite system known as Ellipso to provide voice, data, facsimile and RDSS, using 15 (9) satellites placed in 3 (1) orbits. The orbital height is 7,800 km above the earth's surface, providing coverage over the entire northern hemisphere and the southern hemisphere up to 50° south latitude. It uses the L and C bands for communication. The on-orbit mass of each satellite is 300 kg. The system supports voice and data at 4.2 kbps and 0.3 to 9.6 kbps respectively. The satellite life span is 5 years. The air call charge per minute is US$ 0.50. The access technology is CDMA.
ICO
Hughes proposed the ICO system to provide services of voice, data, fax, paging, messaging and position location, employing 10 satellites in 2 orbits placed at 10,355 km. The system is MEO rather than LEO. The satellite lifetime is 10 years. The system covers services all over the world. The orbital period is 358.9 minutes and satellite visibility is 115.6 minutes.
The uplink and downlink frequencies for MSS are 1980.0-2010.0 MHz and 2170.0-2200.0 MHz respectively. The satellite antenna type is fixed. The feeder link frequencies are 5.091-5.250 GHz (C-band). The system supports voice and data services at the rates of 4.8 kbps and 2.4 to 9.6 kbps respectively. The minimum and maximum one-way propagation delays are respectively 34.6 msec and 48 msec. The airtime charge per minute is US$ 1 to US$ 2.


Teledesic
The Teledesic system of LEOs is in a different class; the difference stems from the application point of view. The system is aimed at providing wireless broadband access and computer networking. Little-LEOs are the wireless equivalent of paging; Big-LEOs like Iridium, Globalstar and ICO are the equivalent of cellular; Teledesic is the equivalent of fiber. The system comprises 840 small satellites in 21 proposed orbital planes and 20,000 supercells on the earth, in order to provide broadband-on-demand service by 2002 for 99% of the earth. The orbital height is 700 km. The Teledesic system is expected to use the Ka-band of frequencies, between 17 GHz and 30 GHz, and antennas of size 66 cm. The Teledesic system is a giga-band system. A comparative study says that in the long run Iridium could be trumped by Teledesic. Although Teledesic has no such plans, the incremental cost of incorporating an L-band transceiver in Teledesic, to perform the Iridium functions for voice, would be just 10% of Teledesic's total outlays, or less than $1 billion (compared with the $3.4 billion initial capital cost of Iridium). And 840 linked satellites could offer far more cost-effective service than Iridium's 66.
Iridium's dilemma is that the complexities and costs of its ingenious mesh of intersatellite links and switches can be justified only by offering broadband computer services. Yet Iridium is a doggedly narrowband system focused on voice.
The evolutionary process of development of personal communication shall go on using the existing cellular, cordless, satellite, wireless data networks, WLL (Wireless Local Loop), VSAT (Very Small Aperture Terminal), wireless centrex/PBX, and other 3GMS (Third Generation Mobile System/Cellular) and MSS (Mobile Satellite Service) systems. But whether the use of MSS in personal communication will be revolutionary or evolutionary remains to be seen. The MSS under the different LEO projects, both Big-LEOs and Little-LEOs, is believed to be a high hope for implementing personal communication.

3.21 Wireless ATM

ATM (Asynchronous Transfer Mode) technology is believed to be the only presently available technology suitable for integrated services, present and future, time-sensitive and time-insensitive: voice, video and data. Thus ATM can support multimedia and can be a service- and application-independent technology for transport and switching. Therefore, a future direction for the development of wireless ATM has been initiated. WAND (Wireless ATM Network Demonstrator) is such a project of the European Union. The HIPERLAN standards at 5 GHz shall aim towards wireless ATM. In the USA, a high-speed multimedia network using ATM is under development; it is targeted to operate at 25 Mbps using a 25 MHz channel in the 5 GHz band (5.15-5.35 and 5.725-5.875 GHz). On the other hand, work has started to provide an air interface to the Internet.

3.22 Changing Scenario of Internet

The Internet is going to see several changes and versions, like IPv4 to IPv6, VoIP, Internet2, wireless Internet and undersea/cable Internet. But the most immediate are the moves from IPv4 to IPv6 and to VoIP.
3.22.1 IPv6
Presently the Internet works on IPv4 (Internet Protocol Version 4) as defined in RFC 791. By the middle of the 1990s, by which time IPv4 had become about 15 years old, it was recognized that there are several limitations in IPv4. Table (XI) lists the major studies on the run-up of IPv4. Two important limitations are the inadequate address space available with the 32-bit address of IPv4, and the inability of IPv4 to support real-time or time-sensitive services. The 32-bit address space is not sufficient to cope with the growing number of Internet users.


Since it is estimated that the Internet has been growing by a factor of two every year, the
underlying principles and assumptions based on which IPv4 was designed are becoming
invalid. What was sufficient for a few million users or a few thousand networks can no
longer support a world with tens of billions of nodes and hundreds of millions of networks.
The inability of IPv4 to support real-time services was the stumbling block to realizing Internet
telephony. The IPng (Internet Protocol Next Generation) initiative (RFC 1752) was then started
by the Internet Engineering Task Force (IETF). By 1996, the IETF had proposed IPv6 (Internet
Protocol Version 6) under the IPng initiative, which is supposed to solve the problems of IPv4
including the two major limitations mentioned above. IPv6 is therefore the future replacement
of IPv4. From the experience with IPv4, it was felt that the new version should take care of: more
addresses, reduced overhead, better routing, support for address renumbering, improved header
processing, reasonable security and support for mobility. Under the IPng initiative the main
techniques investigated were:
TUBA, which refers to TCP (Transmission Control Protocol) and UDP (User Datagram
Protocol) with bigger addresses
CATNIP, which means a common architecture for the Internet. The main idea is to define
a common packet format that will be compatible with IP, CLNP (Connectionless Network Protocol) and IPX (Internetwork Packet Exchange). CLNP had been proposed
by OSI (Open Systems Interconnection) as a new protocol to replace IP, but was never
adopted because of its inefficiency
SIPP (Simple Internet Protocol Plus), which proposed to increase the number of address bits from 32 to 64, and to get rid of the unused fields of the IPv4 header
None of the above three was seen to be suitable on its own. As such, a mixture of all three
along with other modifications was suggested in RFC 1883. The RFC 1883 suggested the modifications as below:
Expanded addressing, suggesting 128 bits for the address, which allows more levels
of address hierarchy, increased address space and simpler auto-configurable addressing
Improved IP header format, dropping the least used options
Improved support for extensions, which brings flexibility in operations
A flow label, which makes real-time services possible over the Internet
Based on the experience gained in operating IPv4 over about 20 years, the design of
IPv6 has considered four major simplifications:
assigning a fixed format to each header. This allows the removal of the header length field
that is essential in IPv4
removing the header checksum. The main advantage of removing the header checksum is to
reduce the cost and the time delay of header processing. This may cause data to
get misrouted. But experience has shown that the risk is minimal, as most data
packets are encapsulated by checksums at other layers, for example in the MAC (Media Access
Control) procedures of IEEE 802.X and in the adaptation layer of ATM (Asynchronous Transfer
Mode)
removing the hop-by-hop segmentation procedure
removing the TOS (Type Of Service) field that IPv4 provides, since experience has shown
that this field has hardly ever been set by applications.
On the other hand, IPv6 has considered two new fields, flow label and priority. These
are included to facilitate the handling of real time services like voice, video and high quality
multimedia etc.

Thus IPv6 finally came up with the packet format shown in Fig. (13). The final
specifications of IPv6 were produced in RFC 1883. The new features of IPv6 are:

Version (4 bits) | Priority (4 bits) | Flow Label (24 bits)
Payload Length (16 bits) | Next Header (8 bits) | Hop Limit (8 bits)
Source Address (128 bits)
Destination Address (128 bits)
Variable-length TCP pack (TCP header + payload)

Fig. 13: IPv6 Packet Format
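The fixed layout of Fig. (13) lends itself to simple machine construction. The following Python sketch (illustrative only, not a full IPv6 stack; the field values are arbitrary examples) packs a 40-byte base header according to this layout, using the standard struct and ipaddress modules:

import struct
import ipaddress

def pack_ipv6_header(priority, flow_label, payload_len, next_header,
                     hop_limit, src, dst):
    # First 32-bit word: 4-bit version (6), 4-bit priority, 24-bit flow label
    word0 = (6 << 28) | ((priority & 0xF) << 24) | (flow_label & 0xFFFFFF)
    return struct.pack("!IHBB16s16s",
                       word0,
                       payload_len,   # Payload Length (16 bits)
                       next_header,   # Next Header (8 bits)
                       hop_limit,     # Hop Limit (8 bits)
                       ipaddress.IPv6Address(src).packed,   # 128-bit source
                       ipaddress.IPv6Address(dst).packed)   # 128-bit destination

hdr = pack_ipv6_header(priority=7, flow_label=0x00ABCD, payload_len=20,
                       next_header=6, hop_limit=64,
                       src="2456:AC67::67:D4E5:A456:A678", dst="::1")
assert len(hdr) == 40   # the fixed, streamlined 40-byte header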

A fixed and streamlined 40-byte header: IPv6 has a fixed-size header, like the
ATM (Asynchronous Transfer Mode) cell. This minimizes node processing delay, and thereby makes IPv6 more suitable for real-time services like voice,
video and multimedia.
Expanded addressing capabilities: The 128-bit address space in IPv6, instead of 32
bits as in IPv4, is believed to ensure that the world won't run out of IP addresses. The
128-bit address size gives rise to a total of 2^128 (about 3.4 x 10^38) different addresses. The Internet under IPv6 is expected
to support 10^15 (a quadrillion) hosts and 10^12 (a trillion) networks.
The Internet under IPv4 can support a maximum of 2^32 hosts. The IPv6 address
space is therefore about 8 x 10^28 times that of IPv4. This is why it is expected that the
future, exponentially growing demand for Internet connections can be met with IPv6.
New address class: Besides unicast and multicast, IPv6 has the provision of anycast
addressing. An anycast address allows a packet addressed to it to be
delivered to any one of a group of hosts.
A single address associated with multiple interfaces
Address auto-configuration and CIDR (Classless Inter-Domain Routing) addressing
Provision of extension headers by which special needs like checksum and security options
may be introduced.
Flow labeling and priority: The flow label and priority fields are used to comfortably support real-time services. By assigning higher priority to real-time packets, the requirement of time-sensitivity is met. Data packets, and for that matter all
time-insensitive packets, are assigned low priority and serviced by the best-effort approach. As per RFC 1752 and RFC 2460, this new feature allows labeling of packets
belonging to particular flows for which the sender requests special handling, such as
a non-default quality of service or real-time service. Hence video and audio may be
treated as flows whereas traditional data, file transfer and e-mail may not be treated
as flows.
Support for real time services
Security support, which could eventually be seen as the biggest advantage of IPv6. Today,
billions of dollars of business is done over the Internet. To keep this business secure, public-key
cryptosystems have emerged as one of the important tools. IPv6 with its ancillary
security protocols provides a better communication tool for transacting business
over the Internet
Enhanced routing capability, including support for mobile hosts.
IPv6 as such is not a simple extension of IPv4, but a definite improvement over IPv4 in
order to meet the growing demand for Internet connectivity and the services of real-time communication via the Internet.

The functions of the IPv6 base header fields (the fixed 40 bytes) are:
Version field (4 bits). It contains the version number. For
version 6, this field is 6 (i.e. 0110). The various assigned values of the IP version label
are shown in Table (12). But it must be remembered that just putting the number 6 or
4 in this field does not make the packet a valid IP packet of that version; the packet must also
be in the proper format.
Priority (4 bits). The bits in this field indicate the priority of the datagram. There
are 16 priority levels, from 0 to 15. The first 8 priority levels (0 to 7) are for
services that provide congestion control: if congestion occurs, the traffic is backed
off. These are suitable for non-real-time services like data. The different priority levels within this first group are: 0, no priority; 1, background
traffic like Netnews; 2, unattended transfer like e-mail; 3, reserved; 4, attended bulk transfer like FTP (File Transfer Protocol) and NFS;
5, reserved; 6, interactive traffic such as Telnet and X-windows; and
7, control traffic such as SNMP (Simple Network Management Protocol)
and routing protocols. The higher 8 priority levels (8 to 15) are used for services
that will not back off in response to congestion; real-time traffic is an example of
such services. The lowest priority level of this group, 8, refers to traffic most willing to be discarded on congestion, and the highest priority level, 15, is for traffic
least willing to be discarded.
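A small Python illustration of this split, assuming the RFC 1883-era priority values listed above (levels 0 to 7 back off under congestion, levels 8 to 15 do not):

CONGESTION_CONTROLLED = {
    0: "no priority", 1: "background (e.g. Netnews)",
    2: "unattended transfer (e.g. e-mail)", 3: "reserved",
    4: "attended bulk transfer (e.g. FTP, NFS)", 5: "reserved",
    6: "interactive (e.g. Telnet, X-windows)",
    7: "control (e.g. SNMP, routing protocols)",
}

def backs_off_on_congestion(priority):
    # Data traffic (0-7) backs off; real-time traffic (8-15) keeps sending.
    return 0 <= priority <= 7

print(backs_off_on_congestion(2))    # True  -> e-mail backs off
print(backs_off_on_congestion(15))   # False -> least-discardable real-time traffic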
Flow label (24 bits). It is proposed to be used to identify different data-flow characteristics; it is assigned by the source and can be used to label packets. The
packet labels may be required for special handling of packets by IPv6 routers,
such as a defined quality of service (QoS) or real-time service. The combination of the
sender's IP address and the flow label creates a unique path identifier that can be used
to route the datagrams more efficiently. The field is still being experimented with. A flow is
actually a sequence of packets coming from a particular source and destined for a
particular destination, and a flow may require special handling by routers. Each flow is
uniquely defined by the combination of the source address and a non-zero flow label.
The flow label can run from (000001)H to (FFFFFF)H in hex. Packets having no
flow label are given a zero label. All packets in the same flow must have the same flow
label, the same source and destination addresses and the same priority level. The initial flow
label is obtained by the source from a pseudo-random generator, and subsequent flow
labels are obtained sequentially.
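A sketch of the idea that a flow is identified by the pair (source address, non-zero flow label): a router could compute the special handling decision once per flow and reuse it for every later packet. The cache structure here is illustrative, not from any standard:

import random

flow_table = {}   # (source address, flow label) -> handling decision

def new_flow_label():
    # Initial label from a pseudo-random generator, as described above.
    return random.randint(0x000001, 0xFFFFFF)

def handle(packet):
    key = (packet["src"], packet["flow_label"])
    if packet["flow_label"] == 0:
        return "best effort"               # packets carrying no flow label
    if key not in flow_table:
        flow_table[key] = "reserved QoS"   # decision computed once per flow
    return flow_table[key]

pkt = {"src": "2456:AC67::1", "flow_label": new_flow_label()}
print(handle(pkt))   # "reserved QoS"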
Payload length (16 bits): The field indicates the total size of the payload of the IP datagram, excluding the header fields. It can define up to 65,535 bytes of payload.
Next header (8 bits): The field indicates which header follows the IP header. The next
header can be either one of the optional extension headers used by IP or the header
of an upper-layer protocol such as UDP or TCP. The field defines the type of the following header: for example, 0 defines the hop-by-hop options header, 1 defines ICMP (Internet Control
Message Protocol) information, 6 defines TCP information, 44 defines a fragmentation
header, 51 defines an authentication header and 80 defines ISO (International Standards
Organization)/IP information. Each extension header in turn contains a Next Header
field and a Header Length field (Fig. 14). When there is no other extension
header, the next header will be TCP and hence the next header field will contain
6. The length of the base header is fixed at 40 bytes. The extension headers give
functional flexibility to the IPv6 datagram. A maximum of six extension headers can be
used. IPv6 currently defines six extension headers: (1) hop-by-hop
options header, (2) routing header, (3) fragment header, (4) authentication header,
(5) encrypted security payload header and (6) destination options header. If one or
more extension headers are used, they must follow the order in which they are presented
above. For example, if an authentication header and a routing extension header are to be
used, the headers must follow as: (1) main IPv6 header, (2) routing
extension header, (3) authentication header and (4) TCP header with data. Each
extension header must have one 8-bit next header field. For all extension headers
except the fragment header (in which the flags and offset occupy a fixed 16
bits), the next header field is immediately followed by an 8-bit extension header
length that indicates the length of the current extension header in multiples of 8 bytes.
When nothing follows the last extension header, its next header field contains the value 59. In the example
considered above, the next header field in the main IPv6 header will indicate the routing extension header, the next header field in the routing header will indicate the authentication extension header, and the next header field of the authentication header
will contain 6, indicating the TCP segment.
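A short Python walk-through of the example just given: main header, then routing extension header, then authentication header, then TCP. Each header's next header field names the header that follows. The routing header's protocol number 43 is the standard assignment; the values 51 (authentication) and 6 (TCP) are those listed in the text:

ROUTING, AUTH, TCP = 43, 51, 6

# (header name, next-header value carried in that header)
chain = [
    ("IPv6 base header", ROUTING),     # points to the routing header
    ("Routing header", AUTH),          # points to the authentication header
    ("Authentication header", TCP),    # points to the TCP segment with data
]

for name, nxt in chain:
    print(f"{name:24s} next header = {nxt}")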
Hop limit (8 bits): This field indicates the maximum number of hops that the datagram
is allowed to traverse in the network before it reaches its destination. If after traversing this maximum number of hops the datagram has not reached the destination, it
is discarded from the network. The field is used to avoid the congestion that
could be caused by endlessly circulating datagrams. Each router decreases the hop limit by 1 while forwarding the datagram; when the hop limit reaches 0, the datagram is deleted. The
hop limit of IPv6 is exactly what is called Time To Live in IPv4. The new name of hop
limit has been given as it suits the function better.
Source address and destination address (each 128 bits): Both addresses are IP
addresses and are described in RFC 2373. The IP address that defines the original
source of the datagram is called the source address. The IP address that defines the final
destination of the datagram is called the destination address. The three main groups
of IP addresses are: unicast, multicast and anycast. A unicast address defines a particular host. A unicast packet is identified by its unique single address for a single
interface NIC (Network Interface Card), and is transmitted point-to-point. A multicast
address defines all the hosts of a particular group as receivers of the datagram. An anycast
address is assigned to a number of interfaces; an
anycast packet goes to the closest of these interfaces and does not attempt to reach the
other interfaces with the same address. A multicast packet, like an anycast packet, has a
destination address that is associated with a number of interfaces, but unlike the anycast
packet, it is destined to every interface with that address. Unlike IPv4, IPv6 addresses do not have classes. But the address space of IPv6 is subdivided in various
ways for the purpose of use. The subdivision is done based on the leading bits of addresses. The present division of the IPv6 address space is shown in Table (13). The
IPv6 address space is huge; a portion of it is even reserved for computer
systems using Novell's Internetwork Packet Exchange (IPX) network layer protocol, as well
as the Connectionless Network Protocol (CLNP).

Several fields present in IPv4 are no longer present in IPv6; notable
among them are:
Checksum field. A main goal in designing IPv6 was fast processing of packets. This resulted in a design with fixed header fields and removal of redundant
fields. Error checking is done at the upper layers, namely TCP/UDP. As such, a further checksum
at the IP layer was deemed redundant and accordingly it was removed from IPv6. Moreover, with a checksum in the IPv4 packet, error checking at every
node was essential; that was time-consuming, costly and unwanted
in IPv6.
Options field. Dropping the options field has made IPv6 a fixed-header packet. Of
course, if required, the IPv6 packet may use the next header field for the purpose of header
extension.
Fragmentation. IPv6 has dropped fragmentation and reassembly
at intermediate routers. The data is fragmented for packetization at the source
only, and reassembly is done at the destination only. If an IP packet received by an
intermediate router is found too large to be forwarded on the outgoing link, the
router simply drops the packet and in turn sends an ICMP error message of Packet Too
Big to the sender. The sender, on receiving the ICMP Packet Too Big error message,
retransmits the data with a smaller packet size. Fragmentation and
reassembly of datagrams at routers is time-consuming; moving
these functions from the routers to the end hosts speeds up the network.
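A toy illustration (no real sockets; the link MTU values are hypothetical) of the end-to-end behaviour just described: routers no longer fragment, so a too-large packet is dropped and an ICMP Packet Too Big error tells the sender to retransmit with a smaller size:

PATH_MTUS = [1500, 1400, 1280]   # assumed MTUs of the links along the path

def try_to_deliver(packet_size):
    for mtu in PATH_MTUS:
        if packet_size > mtu:
            return ("Packet Too Big", mtu)   # the ICMP error reports the link MTU
    return ("delivered", None)

size = 1500
while True:
    status, mtu = try_to_deliver(size)
    if status == "delivered":
        print(f"delivered with packet size {size}")
        break
    size = mtu                        # sender retransmits with smaller packets
    print(f"ICMP Packet Too Big -> retrying with size {size}")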
ICMP (Internet Control Message Protocol)
ICMP for IPv4 is used by hosts, nodes, routers and gateways to communicate network-layer
information to each other. ICMP is specified in RFC 792. ICMP information is carried as
IP payload, like TCP or UDP information. ICMP messages are used mainly for error reporting,
among other purposes (Table 14). An ICMP message is made of a type field and a code field, plus
the first eight bytes of the IP datagram for which the ICMP message is generated in the
first place, so that the sender can know which packet caused the error. A new version of ICMP
was defined for IPv6 in RFC 2463. The new ICMP has reorganized the existing types and codes as
well as added new ones. The added new ICMP types include Packet Too Big and
unrecognized IPv6 options, among others.
Auto-configuration and multiple IP addresses
The IPv4 address structure is a stateful address structure, which means that if a node moves from
one subnet to another the user has either to reconfigure the IP address or to request a new
IP address from DHCP (Dynamic Host Configuration Protocol). With DHCP, an IP address is
leased to a particular host or computer for a defined period of time. But IPv6 supports stateless
auto-configuration, whereby on moving from one subnet to another a host can construct its
own IP address. This is done by the host by appending its MAC (Media Access Control) address to the
subnet prefix. IPv6 also supports multiple addresses for each host. The addresses can be
valid, deprecated or invalid. With a valid address, new and existing communication may be done.
With a deprecated address, only existing communication may be done. With an invalid address, no
communication is done.
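A sketch of the construction just described: the host builds its own address from the subnet prefix plus its MAC address. The usual modified EUI-64 construction inserts FF:FE into the middle of the 48-bit MAC and flips the universal/local bit of the first octet; the prefix and MAC below are example values:

import ipaddress

def address_from_mac(prefix, mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    iid = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])  # 64-bit interface ID
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

print(address_from_mac("2001:db8:1:2::/64", "00:1A:2B:3C:4D:5E"))
# -> 2001:db8:1:2:21a:2bff:fe3c:4d5e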
Address Notation
Like IPv4, IPv6 has a special notation for representing IP addresses. The IPv6
address is represented in hexadecimal colon notation. The 128 bits are divided into eight sections,
each of two bytes in length. Each of the eight sections is represented by four hex digits (a
pair of hex digits represents a byte) and the sections are separated
by colons. One example is:
AB12:0978:CF56:00FE:1234:127E:CB65:7890
The notation allows leading zeros to be dropped. This means, for example, that 0045 can be
represented as just 45, 0A45 similarly as A45, and 0000 as simply 0. The
notation also allows a run of zero sections to be removed, leaving a double colon; therefore, for example,
2456:AC67:0:0:67:D4E5:A456:A678 can be written as 2456:AC67::67:D4E5:A456:A678. The
stated double-colon notation can be used at the beginning or at the end of an address, but only
once in an address. A double colon at the start indicates leading zeros and one at the end indicates
contiguous zeros at the end. If double colons were used at more than one location, it would not be possible
to know how many zeros are at each location. This is why the double-colon
notation is used only once; by counting the other bytes, the number of zeros at the single
double-colon location can be found out.
IPv6 and IPv4 address compatibility
For a long interim period, IPv6 and IPv4 have to coexist. During this period, an IPv4
address can be converted to an IPv6 address by prepending 12 bytes of zeros. For example, the
IPv4 address 126.34.67.10 will be converted to the IPv6 address
0:0:0:0:0:0:126.34.67.10, or ::126.34.67.10. Similarly, a host having the IPv4 address
128.67.56.9 may be mapped (read as an IPv4-mapped IPv6 address) to the IPv6 address
::FFFF:128.67.56.9. The different special notations of version 4 and version 6 keep them
separable.
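The notation rules above can be checked with Python's standard ipaddress module: leading zeros drop, one double colon compresses a zero run, and an IPv4 address can be embedded in the low-order 32 bits:

import ipaddress

a = ipaddress.IPv6Address("2456:AC67:0:0:67:D4E5:A456:A678")
print(a.compressed)   # 2456:ac67::67:d4e5:a456:a678
print(a.exploded)     # 2456:ac67:0000:0000:0067:d4e5:a456:a678

compat = ipaddress.IPv6Address("::126.34.67.10")     # IPv4-compatible form
mapped = ipaddress.IPv6Address("::ffff:128.67.56.9") # IPv4-mapped form
print(compat.exploded)      # 0000:0000:0000:0000:0000:0000:7e22:430a
print(mapped.ipv4_mapped)   # 128.67.56.9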
Version (4 bits) | Priority (4 bits) | Flow Label (24 bits)
Payload Length (16 bits) | Next Header (8 bits) | Hop Limit (8 bits)
Source Address (128 bits)
Destination Address (128 bits)
Next Header | Header Length | Variable header fields
Next Header | Header Length | Variable header fields
...
Variable-length TCP pack (TCP header + payload)

Fig. 14: Illustration of the use of Next Header fields

Table 11: Reports of different studies on IPv4 address space run-up

Study group | Recommendation
Two leaders of IETF Address Lifetime Expectations (ALE)'s recommendation | IPv4 address space would be exhausted in 2008 and 2018 respectively
Final recommendation of ALE in 1994 | IPv4 address space will be exhausted at some time between 2005 and 2011
American Registry for Internet Numbers (ARIN)'s report in 1996 | All class A addresses have been assigned; 62% of class B and 37% of class C addresses have been assigned

Table 12: Different IP version labels

Value | Key | Description
0 | - | Reserved
4 | IP | Internet Protocol (RFC 791)
5 | ST | ST Datagram Mode (RFC 1190)
6 | SIP | Simple Internet Protocol (IPv6)
7 | TP/IX | TP/IX: The Next Internet
8 | PIP | The P Internet Protocol
9 | TUBA | TUBA
10-14 | - | Unassigned
15 | - | Reserved

Table 13: IPv6 address space subdivision based on prefix assignments of bits

Prefixed bits | Use of address space
0000 0000 | Reserved
0000 0001 | Unassigned
0000 001 | Reserved for NSAP application
0000 010 | Reserved for IPX application
0000 011 | Unassigned
0000 1 | Unassigned
0001 | Unassigned
001 | Aggregatable Global Unicast addresses
010 | Unassigned
011 | Unassigned
100 | Unassigned
101 | Unassigned
110 | Unassigned
1110 | Unassigned
1111 0 | Unassigned
1111 10 | Unassigned
1111 110 | Unassigned
1111 1110 0 | Unassigned
1111 1110 10 | Addresses for Link-Local use
1111 1110 11 | Addresses for Site-Local use
1111 1111 | Multicast addresses

Table 14: Selected ICMP messages

ICMP type | Code | Remarks
0 | 0 | Echo reply (to ping)
3 | 0 | Destination network unreachable
3 | 1 | Destination host unreachable
3 | 2 | Destination protocol unreachable
3 | 3 | Destination port unreachable
3 | 6 | Destination network unknown
3 | 7 | Destination host unknown
4 | 0 | Source quench (congestion control)
8 | 0 | Echo request
9 | 0 | Router advertisement
10 | 0 | Router discovery
11 | 0 | TTL expired
12 | 0 | IP header bad

3.22.2 Voice over Internet or Internet Telephony


The Internet has established itself as the single most important tool of the global information age. It was developed for transporting packet data, a non-real-time service. But today,
Internet telephony has emerged as an important technology. Internet telephony is supposed to
carry real-time, jitter-free voice over the Internet. Active and hectic research is being carried out on the subject of VoIP (Voice over Internet Protocol). Generally speaking, the use of the Internet
for all real-time services, like voice, video and multimedia, is being explored. Table (15)[12]
shows a growth estimate for VoIP traffic. There are several motivations[11] for transmitting
voice over IP. These are: (1) long-distance calls at low cost, possibly of lower quality, (2) a
cheaper two-in-one service, (3) use of the PC as a true multimedia terminal, (4) one connection for
all services, (5) local exchanges can support telephony with the Internet as backbone and without
high investment in expensive backbone infrastructure, and (6) packetized voice allows
voice compression, which in turn decreases transmission time and cost. Earlier, telecommunication traffic or telephony connections outnumbered data traffic. The future will see an
explosion of data traffic. When the crossover will occur is debatable, but sooner or later data
traffic will dominate telecommunication traffic. Consequently, now should be the time for
datacom to act as a carrier for telecom. But the Internet as such cannot be used to carry real-time
services, as it was designed to carry data, and the characteristics of real-time services like
voice and video are different from those of data. Table (16) shows the different characteristics and
requirements of voice, video and data. The need to deploy the Internet for real-time
services like voice and video has led to the redesign of some features of the Internet. The
two important features related to this emerging issue are: (i) the redesigned IP datagram format, and (ii)
the use of RTP (Real-time Transport Protocol) together with IP for carrying voice over conventional IP
datagrams and the Internet. It is believed that with the deployment of IPv6, VoIP will be realized.
Table 15: Projected growth of IP telephony
(A) As per [12]

Year | Voice-over-IP traffic
1998 | 310 million minutes
1999 | 2.7 billion minutes
2004 (expected) | 135 billion minutes

(B) As per [16]

Year | Average units (millions per year) | Unit growth rate (%) | Yearly revenues (millions) | Yearly revenue growth rate (%)
2000 | 3,987.2 | 256 | 388.75 | 209
2002 | 22,386.2 | 162 | 1,511.07 | 136
2004 | 167,896.2 | 114 | 8,814.55 | 88
2006 | 587,636.9 | 75 | 22,036.38 | 46

Table 16: Characteristics of different services

Characteristic | Voice | LAN data | Transactional data | Video
Predictability | Constant/On-Off | Bursty | Highly bursty | Constant/Bursty
Bandwidth/Bit rate | Very low to low | Medium to high | Low to medium | High
Delay/Jitter | Sensitive | Tolerant | Tolerant | Sensitive
Loss | Sensitive/No recovery | Sensitive but can recover | Sensitive but can recover | Very sensitive/No recovery
Error/Integrity | Can tolerate | Cannot tolerate | Cannot tolerate | May tolerate

Technical problems of voice packet transmission over Internet


The PSTN (Public Switched Telephone Network), based on circuit switching, provides voice service
with guaranteed quality of service. This is not the case for voice service provided by the
Internet, which works on packet switching. The voice packet faces many technical challenges while
in transit over a packet-switching network like the Internet. These include packet loss, packet
transfer delay and jitter. Voice communication involves human interaction.
As such, a few lost voice packets can be tolerated, since human intelligence and
perception are involved in recovery. But too much loss of voice packets may seriously degrade
the voice quality. Moreover, the PSTN is a reliable voice service provider whereas the Internet is not,
because the Internet is datagram-based.
Table 17: End-to-end voice packet latency

Delay source | Typical value (end to end, or phone to phone) in ms
Recording | 10-40
Encoding/Decoding (CODEC) | 5-10 each / 10-20 both together
Compression/Decompression (speech) | 5-10 each / 10-20 both together
Internet delivery | 70-120
Jitter buffer | 50-200
Average total | 150-400

Delay is the more serious issue for real-time interactive services like voice. Delay means
the time difference between the instant the sender releases a packet to the network
and the instant the receiver receives that packet from the network. Delay refers to: (1) the
total transfer delay of a packet, which includes coding/decoding delay, propagation delay,
transmission delay, node processing and queueing delay, and switching and routing delay; and (2)
jitter, which refers to the variation of the phase delay between two successive packets. Typical delays
from different sources are given in Table (17)[12]. If the total delay exceeds a certain value, customers
may get irritated with the service. A statistic says that a delay of up to 80 ms between the caller
and the callee is acceptable, but beyond that it causes irritation to the users. The total delay is a variable
quantity, and it varies from packet to packet. Jitter is a very serious issue: if the
phase lag between the voice packets differs between the source and the destination, the service quality
degrades. The phase lag between packets differs from the source end to the destination end
because the total transfer delay varies from packet to packet. Due to the jitter problem, a transmitted
voice message "I shall go home" may be received as "I shallgo home": compared to the transmitter, the
phase delay between "I" and "shall" has increased and that between "shall" and "go" has been reduced
to zero at the receiver. While the total delay can be limited by increasing the bit-rate capacities
of the links and by adopting efficient routing techniques, among others, the jitter effect cannot
be solved so simply. There are several techniques to reduce the effect of the jitter problem.
One such technique is known as accelerating and de-accelerating. In fact the jitter problem
is due to (D(i+1) - D(i)) being finite and variable. Here, D(i+1) and D(i) are both variable quantities
and represent respectively the total transfer delay of the (i+1)th packet and the ith packet. To avoid the
jitter effect, it is required that D(i+1) - D(i) = 0. In the accelerating and de-accelerating technique,
at the receiver end a variable delay (say W(i) for the ith packet) is applied to each packet such that
D(i) + W(i) = K, a constant, for all packets (i.e. for i = 0, 1, 2, 3, ...) before delivery of the packets to the
terminal equipment for playback. By this process, the variable delay introduced by the network
between two successive packets is made zero, since (D(i+1) + W(i+1)) - (D(i) + W(i)) = 0. This ensures that
the phase delay between packets at the transmitter remains the same at the receiver. The scheme is
illustrated in Table (18). As illustrated in the table, the success of the technique depends on
the choice of K.


Table 18: Illustration of the accelerating and de-accelerating technique to cope with jitter (K chosen as 100 ms)

Packet | Instant x(i) at which the packet is released at the transmitter (ms) | Variable network delay D(i) with which the packet reaches the receiving node (ms) | Buffer delay W(i) = 100 - D(i) applied at the receiver (ms) | Instant at which the packet is delivered to the terminal device, x(i) + 100 (ms)
Packet-1 | 0 | 80 | 20 | 100
Packet-2 | 10 | 70 | 30 | 110
Packet-3 | 15 | 85 | 15 | 115
Packet-4 | 25 | 100 | 0 | 125
Packet-5 | 30 | 110 | -10 (not realizable) | 130

(Packet-4 is the marginal case and Packet-5 is the failed case. Both could have been avoided had
the constant K been chosen greater than 110 ms in this example. So the success of the technique
depends on the choice of K.)
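Table 18 can be reproduced in a few lines of Python: each packet i is released at x(i), suffers a variable network delay D(i), and is held in the jitter buffer for W(i) = K - D(i) so that every packet is played out at x(i) + K:

K = 100   # constant playout delay in ms (the choice the table discusses)

packets = [(0, 80), (10, 70), (15, 85), (25, 100), (30, 110)]  # (x_i, D_i)

for i, (x, d) in enumerate(packets, start=1):
    w = K - d                      # buffer wait; negative means a late packet
    playout = x + K
    status = "ok" if w > 0 else ("marginal" if w == 0 else "FAILED (late)")
    print(f"Packet-{i}: arrives {x+d} ms, buffered {w} ms, playout {playout} ms [{status}]")

# Choosing K > 110 ms would have saved Packet-4 and Packet-5, at the cost
# of a larger end-to-end delay.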
VoIP is going to be a dominant service issue of IP. VoIP has several motivations, as
discussed earlier. The PSTN supports only toll-quality sound (4 kHz sound) and is not suitable for
high-fidelity sound; VoIP can support higher grades of sound. This will be another major driving
factor for VoIP. But there are several issues that need to be resolved before VoIP is widely used.
Standards are still not finalized, although H.323 of the ITU is being projected as a possible standard.
H.323, under its new version 2, may be used for interoperability between different service networks,
like the PSTN and the Internet, to support voice. The H.323 standard is meant for multimedia or
videoconferencing. The audio standards of H.323 are the G.7xx family; the
choice of x defines the intelligibility of the voice service provided.
3.22.3 IPv6 for real time services
Conventional packet switching is not appropriate for carrying real-time services. There are
many reasons for this. For example, HDLC or SDLC packets are variable in size: to synchronize and identify a packet, flags have to be located, and to avoid occurrence of the flag byte in
the payload, stuffing and de-stuffing are done. These cause large node processing delays, and
hence packet transfer delays. ATM was proposed as the replacement of packet switching to
support real-time services. The problems of conventional packet switching were solved in ATM
by making the ATM packet, called a cell, simpler. The simplicity of ATM is in two respects: (1) a shorter
cell and (2) a fixed-size cell. This philosophy was extended in designing the IPv6 datagram to replace
the IPv4 datagram, so that IP can carry real-time services.
IPv6 has a simple and basically fixed header format. The overhead bits of IPv6 are proportionally fewer
than those of IPv4: the overhead in IPv4 is 12 bytes out of a 20-byte header format (8
bytes are for addresses), whereas the overhead in IPv6 is 8 bytes out of a 40-byte header format (32 bytes are for addresses). IPv6 proposes to provide QoS (Quality of Service)
support to real-time services like voice and video. The flow label and priority in the header of
IPv6 facilitate the support of real-time data. IPv6 thus has an efficient header format compared to
IPv4.


3.22.4 Wireless Internet


Two proposals for further development of the Internet are: (i) undersea super-speed Internet
and (ii) wireless Internet. A proposal for a global optical-fiber undersea cable network called
Project Oxygen has significant industry support and financial backing. This project is called
the best "bandwidth on demand" project as per the company release. Experts say Project
Oxygen is the most ambitious communication project of the 20th century. The Internet and
video transmission are the major drivers for the expansion; a global optical fiber network
could erase the boundaries between the Internet and traditional communications, and shift
the profit model from voice service to data and video. Construction of the undersea network
began in September 1998. In the first phase, the cable shall stretch over 158,000 km across 74
countries, with three major network management centers in the USA, Spain and Singapore. The
major transatlantic and transpacific links are likely to be operational by 2000. Phase two shall
start in 2002 and cover the whole of the world. The speed of the cable is projected at 1920 Gbps
with a minimum capacity of 640 Gbps. It is reported that with the undersea Internet, a video-based
Internet shall come with over 10,000 video channels.
The growth of wireless technology is immense. At the same time, Internet traffic is growing exponentially. These trends have motivated research and development on wireless Internet access. The proliferation of Internet-enabled wireless devices[56] has also stimulated the wireless Internet project. Wireless Internet access for mobile subscribers of UMTS (Universal
Mobile Telecommunication Systems), GSM and other 3G and 2G technologies, and even with
wireless ATM, has been studied in the literature[46,54,57]. The modification of RSVP (Resource Reservation Protocol)[58] is directed at implementing a wireless Internet that can support different
broadband services. The wireless Internet will provide a scalable global mobile system for
different services.

4 LOCAL LOOP TRANSPORT TECHNOLOGY


4.1 Fiber-free optical or Optical Wireless Communication and Networks
It is often said that science goes back to basics. The old science is the science of light, and modern science and
technology are now finding and exploring viable and potential applications of light. The
two pillars of information technology, namely computing and communication, see their brighter
future in the use of light. Future computer technology is heavily dependent upon optical storage
because of its higher density, lower error rate and higher speed. Even the secure
transportation of computer data sees a high hope in quantum transportation and
computing, which use nothing but the characteristics of a single photon. Communication technology
prefers fiber optics as the best choice of transmission medium because of fiber's several
advantages, like low noise contamination and very high bandwidth. But the cost of fiber
optics is considerable. So what if optical communication could be made possible without fiber? Grab the
technology and use it. That is what fiber-free optical communication is, known in industry as free-space optics
(FSO) or optical wireless (OW).
FSO is nothing new, but a clever and intelligent application of the ancient basic
technology of using light for communication. Remember that early men used
light signals or smoke signals for messaging at no cost, as light travels through air for
free. FSO technology has the attraction that it is cheaper to beam data through the air than to
build infrastructure with wires. Light through free space or air provides high-speed transport
over short distances, and that too at no cost for the medium. This transmission medium may be used with a proper
transmitter and receiver to realize FSO. Thus, for economic advantages, free-space optics
technology may be used for Gbps (gigabits per second) transport over metropolitan or city
distances. The other appealing advantages of FSO are: no cable cost; no cable installation,
trenching and digging cost; no cable maintenance cost; and virtually no link failure (link
availability is almost 100%!). It is said that "Free space optics really only provides a very
limited application when you consider five 9s of reliability. Some of the free space optics
companies will tell you that the five 9s are outdated and that they actually have trials with
alternative operators that are just going to three 9s and four 9s", and that "Five 9s is probably the
greatest myth that exists today in the world of telecom". Free-space optics is a hybrid of
optical and wireless technology, presently the two most important carrier technologies of
communication. FSO offers a free-for-all transmission medium. A study says "FSOs also offer
lower deployment costs and reduced installation time compared with metro fiber builds. Business
cases we have seen start at one-fifth the cost of metro fiber and can be six months faster to
install in some metro areas". As the name implies, FSO uses optical laser technology to transmit
data across open spaces, exploiting the straight-line propagation of the light beam.
Low-power infrared beams that do not harm the eyes are used in FSO technology to transmit
data through the open space between transceivers. The transceivers are mounted on rooftops
or behind windows (Fig. 15), in line of sight with each other over distances of a
few hundred meters to a few kilometers. The part of the electromagnetic spectrum above 300
GHz, which includes infrared, is unlicensed and available free of cost. The FSO technology then has
only to ensure that the radiated power does not exceed the standards defined by the international
committees. Usually the equipment works with either an 850 nm or a 1550 nm laser. Lasers
of 850 nm are much cheaper than those of 1550 nm, but the safety regulations permit
1550 nm lasers to operate at a higher power level than 850 nm lasers. FSO with an 850
nm laser is thus suitable for moderate distances, whereas FSO with 1550 nm is favored for distances
in the kilometer range. Actually, 1550 nm has a two-fold power advantage and a five-fold distance
advantage over the 850 nm laser, but about a ten-fold cost disadvantage compared to 850 nm.
Table 19 gives a comparative study.
A few major applications of FSO are in the areas of metro network extension, last-mile
access, enterprise connectivity, dense wavelength division multiplexing services, SONET ring closures,
wireless backhaul, backup, disaster recovery, service acceleration, storage-area networks and
LAN interconnectivity. FSO may be deployed to extend the existing fiber ring of a MAN
(Metropolitan Area Network) by connecting it with other networks; this may compete with SONET
(Synchronous Optical Network) networks. FSO may be deployed in last-mile access in the
sense that it may be used in high-speed links that connect Internet service providers or other
networks with end users. It is reported that "domestic service providers and foreign carriers
are using FSO not only as a broadband backup but also as a viable last-mile technology. For a
technology that depends on straight lines, free space optics is taking a circuitous route to
respectability". FSO may be used as a redundant backup in lieu of a second fiber link, particularly
over short-distance communication. This has a clear advantage. Consider the Sept. 11 disaster:
had there been FSO, some means of alternative communication could have been available in
case of fiber failure. A report goes on to say: "While FSO will never defy the laws of physics, it
can provide a valuable last link between the fiber network and the end user-including as a
backup to more conventional methods. A key example was the Sept. 11 tragedy, when carriers
learned that having a backup fiber optic network was of little use if both fibers went dark." As
a backhaul, FSO may be used to carry cellular telephone traffic from towers back to the fixed-wire
PSTN (Public Switched Telephone Network). FSO may further be used to provide immediate or
instant service to customers while their fiber link is being laid.


FSO or OW has another important application as the last-mile solution for broadband
services; this application is otherwise known as a bridging technology. For supporting broadband
services to residential customers, the problem of the last mile, made of twisted wire pairs, exists.
Clever utilization of the last mile has made access rates vary from 128 Kbps to 2.3 Mbps.
One important technology of this clever utilization is DSL (Digital Subscriber Line) technology, which provides an access rate of 144 Kbps. With OW technology, the access rate is believed
to increase to Mbps levels. This is a great promise of OW technology.
FSO technology is believed to change optical communication: "Optical networking technology is radically changing the foundation of carrier backbones, boosting Internet
bandwidth exponentially while slashing costs dramatically." But FSO is not free from disadvantages. An FSO link may suffer from weather conditions; for example, fog may hamper the
link operation. Till date, no standard is available for FSO operation. The vendors have to do a
lot to establish the technology's viability and the consequent products' marketability. Let us hope for
the best for this old technology. We conclude with a few observations of some industrialists
and members of academia:
1. "To have alternate paths using free space optics is getting much more interest from carriers," said Steve Mecherle, chair of the Free Space Optics Alliance and chief technical officer of vendor fSONA.
2. "People are realizing that if they have two fibers, they're not necessarily protected if it's a correlated event and they both go out," Mecherle added.
3. Michael Sabo, senior vice president of sales and marketing for vendor AirFiber, said FSO is earning a place as more than a fiber backup: "Billions of dollars have been spent on long-haul fiber builds out on the trunks. This technology fits the last-mile kinds of applications to fill in all the leaves of those networks."
4. Qwest uses FSO in commercial deployments because they serve the vast majority of the users of Qwest's broadband network. "We're pleased with the technology, but we cannot [speculate] about its future deployment in the Qwest market," said Qwest Communications.
5. "Nevertheless, fiber doesn't go everywhere, and it can't always be deployed quickly. In all those cases, FSO is a superb alternative," said Werne, CEO of Utfors, a Swedish broadband carrier.
6. Ken Corriveau, Tribal's IT director: "You could rent dark fiber, but that would take forever to figure out in the city. You could rent a T-1 or DS-3, but both of those are 30 to 90 days out."
7. "In Madrid, 80% of business users are within 500 meters of fiber," said Paul Kearney, Alua's (a carrier in Spain) chief technical officer. He further said: "We plan our [FSO] network by using very short ranges to be within the weather limitations."
8. "In general, the technology has a lot of future for the carrier networks, if it's marketed well," said Gartner's Tratz-Ryan. And therefore, to many, FSO has at least cleared the first hurdle on its circuitous obstacle course.
Table 19: Comparison of lasers used in FSO

Laser in FSO | Typical cost | Typical data rate | Typical coverage distance
850 nm | US$ 5,000 | 10-100 Mbps | A few hundred meters
1550 nm | US$ 50,000 | Up to Gbps | 1-2 kilometers


[Figure: terminals (T) on a MAN/LAN are connected by fiber links to rooftop transceivers (TR); pairs of transceivers in line of sight exchange the data beam over free space (air).]
TR = Transceivers

Fig. 15: FSO operation.

4.2 DSL Technologies: ADSL and VDSL


Over more than a century, a vast analog telephone network has come to exist the world over. The telephone
line in general, and the last mile in particular, is made of twisted copper wire pairs
suitable for voice communication. With information technology, the need arose to transport
many diverse types of information. Information relates to many different applications and
services, viz. voice, video, data, image, facsimile etc. The information age is motivated by a new
culture of value-added communication, where communication of video, data, image, facsimile
and graphics etc. has become imperative besides basic communication of voice or speech.
The characteristics of the diverse services, voice, data, video, image, graphics, facsimile etc., are
quite different from each other. Therefore, for each of the services there is logically a requirement
for a communication system tailored to its nature. Obviously, such a proposition is not
techno-economically viable and sound. The only economically viable alternative left was
to find techniques to use the existing vast copper cable plant of the telephone network for the value-added
services. Even after the evolution of fiber optics, a vast copper plant still exists, as can be
guessed from the following statistics of 1997[35,36]:
1. In the USA, unloaded twisted wire pairs up to 18,000 ft (between central office and customer, the local loop) account for around 70% of all loops.
2. In the USA, loaded loops (>18,000 ft) account for only around 15% of all loops.
3. In the USA, derived loops up to 12,000 ft of unloaded twisted pair connected with
FTTC/DLC (Fiber-To-The-Curb or Digital-Loop-Carrier) account for only around
15%.


4. The world picture in this respect is 600 million unloaded twisted copper wire pairs versus
6 million hybrid fiber/coaxial lines, i.e. a ratio of 100:1.
5. The annual growth of the telephone network in 1990-95 in Africa, the Arab States, Latin America
and Asia Pacific was 8%, 9%, 10% and 27% respectively.
6. Around 1000 million telephone subscribers existed in the world in 2003.
Actually, varied services like video conferencing, video on demand, fast access to the
Internet and interactive multimedia services require higher bandwidth than voice.
Therefore, new technology and signal processing are prime needs if copper is to be used to carry
these services in the last mile.
xDSL (Digital Subscriber Line) is the unique technology that supports more than one
service, like voice, video and data, simultaneously over a shared copper access line. DSL
is established as a scalable service that provides quality service delivery and at the same time
a cost-effective local loop infrastructure. DSL appears to be an efficient solution
for providing multimedia services.
In order to provide value-added services, broadband services and multimedia services
using existing unloaded telephone lines, communication engineers have developed a number of techniques over the last few years. These are the modem culture and xDSL technology. xDSL technology[38-40] includes: HDSL (High-bit-rate Digital Subscriber Line), ADSL (Asymmetric
Digital Subscriber Line), G.lite (splitterless ADSL, also called UDSL, Universal DSL),
SDSL (Symmetric DSL), VDSL (Very-high-rate DSL), IDSL (ISDN DSL), RADSL (Rate-Adaptive DSL) etc.
4.2.1 Modem Versus xDSL
Using a modem, the copper wire provides data services; for example, Internet access with
dial-up facility is done through the modem. As of today the modem speed is 56 Kbps. The speed
of 56 Kbps is not sufficient to support high-quality broadband services. Moreover, modems
occupy the entire 0-4 kHz bandwidth allocated to voice, thereby preventing simultaneous voice
and data services over the copper of the local loop. Within the last few years, the slogan of communication
technology has become "Speed is the ultimate". Technology is being developed apace to serve
the demand for more and more data rate, namely from bits per second (bps) to Kbps to
Mbps to Gbps and finally to Tbps, with WDM, fiber amplifiers, solitons and fiber-optic
communication in hand. High-bit-rate communication is not possible with the copper twisted wire
pair. An alternative may be to use optical fiber links in the loops. This may be the long-run solution, but
xDSL technology was developed out of this race for faster and faster data communication while
still using the copper cable. The oldest technology for communicating digital data through the long twisted
pair cable of the telephone loop is modem technology. Modem speeds have grown from 300 bps (the oldest
modems, for example V.21/Bell 103) to as high as 33.6 Kbps (as provided in the V.34 extended standard). With the standardization of
the V.34 modem at 28.8 Kbps, it was postulated to provide low-grade multimedia service
to customers through POTS. But there is a big bug in modem technology. A 3 kHz voice line
(local analog loop) with 30 dB signal-to-noise ratio can have a maximum bit rate of about 30
Kbps as per Shannon theory. Thus, using modem technology to carry data over the analog telephone
line is handicapped by the above speed constraint. This is the reason that modems sometimes
do not work at the vendor's advertised speed. There are also reports of 56 Kbps modem technology,
which could well fit carrying multimedia and Internet services to customer premises using
telephone lines. But the 56 Kbps technology does not communicate data between two modems:
it communicates data between a modem and a digital ISP (Internet Service Provider) system,
which creates a reduced-noise-like environment. Therefore, for using 56 Kbps modem technology to
transport high-bit-rate services to customer premises, this may be the limit.
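A quick check in Python of the Shannon limit quoted above for a 3 kHz voice line with a 30 dB signal-to-noise ratio:

from math import log2

B = 3000                      # bandwidth in Hz
snr = 10 ** (30 / 10)         # 30 dB -> power ratio of 1000
C = B * log2(1 + snr)         # Shannon capacity in bits per second
print(f"C = {C/1000:.1f} kbps")   # about 29.9 kbps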


It was already mentioned that local loops of copper twisted pairs, designed for carrying
voice signals, are not suitable for carrying high-speed digital data. Local copper loops are primarily
designed to carry voice traffic. Voice traffic is of relatively short duration, on average about 3
minutes; Internet traffic is on average of 30 minutes duration. The impulse noise and pulse
dispersion of copper loops are the main obstacles to carrying data at high speed. But with the growing
World Wide Web culture and the demand for multimedia services like video-on-demand, boosting
the capacity of copper twisted pair local loops by some technology alternative to the modem was
felt essential. This gave birth to xDSL technology in general and ADSL technology in
particular. It is often said that ADSL is for boosting the capacity of the installed copper and fiber-optic
links.
In xDSL technology, special circuits and software called transceivers are used. The transceiver software performs the function of encoding/decoding or modulation/demodulation, by which
serial binary digital data streams are converted into signals suitable for transmission through
the analog copper twisted pair link. The transceiver also performs other functions like equalization, signal shaping and processing, and amplification to compensate for signal attenuation
and phase distortion. The other important function performed by the transceiver is error detection
and correction of data.
4.2.2 ISDN versus xDSL technology
ISDN (Integrated Services Digital Network) was developed to provide integrated and
simultaneous services of voice, data and low-speed video at a basic-rate signal of 144 Kbps. The
payload of 144 Kbps consists of two B channels of 64 Kbps each and one D channel of 16 Kbps. The
term DSL was first coined for carrying the 144 Kbps of ISDN over copper loops of 18,000 ft or less.
This was done with the 2B1Q four-level line code. The 2B1Q code provides a baseband signal spanning
from zero through the voice frequency band. In this mode of ISDN, voice is served in digital mode using
PCM (Pulse Code Modulation) and a B channel at the rate of 64 Kbps; but ISDN does not support
POTS (Plain Old Telephone Service). Data at the B-channel rate of 64 Kbps (which is much
higher than the maximum permissible rate in the modem culture, about two-fold) is served in
ISDN. Why then go for xDSL? The reasons behind going for xDSL technology are two.
First, xDSL technology provides a much higher data rate than ISDN. With the growing web culture
and demand for multimedia services, bit rates of the order of a few Mbps have become common. Services
like video on demand cannot be met with 64 Kbps, or even with the 64 x 2 = 128 Kbps of ISDN
technology. Second, ADSL and VDSL differ from ISDN in the respect that, unlike ISDN,
they retain the POTS service while providing high-rate data service.
4.2.3 ADSL technology
ADSL technology has become the most appropriate of all the xDSL technologies. HDSL
is a variant of ISDN technology which provides data communication at a bit rate of about
784 Kbps (T1 carrier) over a twisted copper pair loop of up to 12,000 ft. Like ISDN, HDSL uses the 2B1Q
line code.
ADSL technology was developed mainly to provide multimedia services like the video-on-demand service and the growing Web service. The characteristics of these two services are quite
asymmetric in nature. For Web accessing and/or interactive video, two-way communication is
essential, and of the two directions, downstream (towards the subscriber)
communication requires much higher bandwidth than upstream (towards the central exchange/office) communication. This is because, typically, the Web surfer is more interested in downloading
than in the uplink request. ADSL technology[37-43] offers a higher data rate of, say, 6 Mbps for downstream
data and a lower data payload of, say, 640 Kbps for uplink data using the installed copper loop of the
telephone. In addition, ADSL provides POTS or conventional voice service. As the service nature
is asymmetric, SDSL technology lost out to ADSL technology.
Due to the asymmetric nature of ADSL technology, it provides an interesting technological benefit. When many wires are squeezed together in a cable, crosstalk is inevitable due
to signal overlap. In the case of downstream data, signal amplitudes are the same because they all
originate from the exchange; due to the equal amplitudes, there is no destruction of a
weak signal by a strong one. Uplink data, however, may originate from different customer
premises at different locations, so the signals reaching through the wire pairs of a
cable may vary greatly in amplitude. But as crosstalk increases with frequency, the problem
is tackled by limiting the upstream data and keeping it at the low end of the spectrum. This is exactly
what is done in ADSL.
ADSL technology increases the capacity of the installed copper link of the telephone to 6 Mbps. In
this technology, data traffic and voice are carried simultaneously. It carries data in digital form
and voice in analog form, unlike ISDN which carries both in digital form.
ADSL System
A POTS splitter/filter preserves the 4 kHz spectrum for the POTS service, and prevents hampering
of the POTS service due to any fault of the ADSL equipment. The rest of the available bandwidth, above
about 10 kHz, is used for ADSL data communication at a rate of about 6 bits per second for every hertz of available bandwidth.
Fig. (16) portrays the operation of the ADSL system. The transceiver software of ADSL uses an
advanced modulation technique known as discrete multitone (DMT) modulation. ANSI
T1E1.4 has standardized DMT as the line code for ADSL. DMT divides the bandwidth from 10 kHz to 1
MHz into 256 independent subchannels, each of about 4 kHz width. Each of the subchannels, referred to
as a tone, is QAM-modulated on a separate carrier. The carrier frequencies are multiples of a
basic frequency of 4.3125 kHz. DMT is used in ADSL technology because it has the
unique ability to overcome the typical noise and interference in the local loop twisted wire pair
cable.
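A rough Python sketch of the DMT arithmetic just described: 256 tones on carriers at multiples of 4.3125 kHz, each QAM-modulated in an approximately 4 kHz-wide subchannel. The uniform per-tone bit loading used here (6 bits per hertz) is an assumption taken from the text's figure of about 6 bps per hertz, not a measured line profile:

SPACING_KHZ = 4.3125
TONES = 256

carriers = [n * SPACING_KHZ for n in range(1, TONES + 1)]
print(f"first carrier {carriers[0]:.4f} kHz, last {carriers[-1]:.1f} kHz")
# -> first carrier 4.3125 kHz, last 1104.0 kHz (the 1.104 MHz band edge)

aggregate_kbps = TONES * 4 * 6   # tones x 4 kHz x 6 bits per Hz
print(f"theoretical aggregate about {aggregate_kbps/1000:.1f} Mbps")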
The ADSL frequency spectrum is shown in Fig. (17). The available spectrum ranges
from about 20 kHz to 1.1 MHz. The lowest band, up to about 20 kHz, is reserved for voice services under normal
POTS. To perform bidirectional communication, ADSL modems divide the bandwidth in one of
two ways: (1) FDM, where non-overlapping bands are used separately for the upstream and
downstream links; (2) echo cancellation, where overlapping bands are used for both the upstream and the downstream, and separation is made by a local echo cancellation technique. The echo
cancellation technique is bandwidth efficient. Advanced forward error correction techniques
are used to tolerate error bursts as long as 500 ms.
A comparison of different DSL technologies is given in Table (20). ADSL is about 400
times faster than the most sophisticated modem and 60 or more times faster than ISDN.
However, ADSL downstream speeds depend on the loop distance, as shown in Table (21).
The typical coverage distance is about 4 km; over longer distances, natural degradation of the data
rate occurs. To provide services to customers beyond 4 km, an embedded rate-adaptive mechanism may be used.



[Figure: at the customer premises, a POTS splitter feeds the telephone while an ADSL modem serves the computer; both share the copper local loop to the local exchange, where a second POTS splitter passes voice to the switching network and data, via the processing circuit, to the data network.]

Fig. 16: ADSL System

POTS band (0-4 kHz) | Guard band | Upstream band (about 30-138 kHz) | Downstream band (up to 1.104 MHz)

Fig. 17: Frequency Spectrum of the ADSL

The arrangement may be coupled with the growing ATM (Asynchronous Transfer Mode)
network, which is predicted to be the network for multimedia services. Recent advances in ADSL
technology promise to transfer data at rates as high as 50 Mbps to the customers over a
short distance of twisted copper pair from the FTTC. This advancement is termed VDSL.
ADSL technology and WDM technology support the predicted cyclic nature of analog and
digital transmission.
Table 20: Comparison of DSL Technologies

Service / Network                            Data Rate
POTS (Plain Old Telephone System) w/ modem   28.8-56 kbps
ISDN                                         64-128 kbps
ADSL                                         1.544-8.448 Mbps downstream; 16-640 kbps upstream
VDSL                                         12.96-55.2 Mbps
HDSL                                         784, 1544, 2048 kbps
IDSL                                         128 kbps
SDSL                                         800-2000 kbps downstream; 64-200 kbps upstream
RADSL                                        1.544-8 Mbps downstream; 64 kbps-1.544 Mbps upstream


Table 21: Downstream speed versus distance of ADSL technology

Distance in feet    Speed in Mbps
18,000              1.544 (T-1 carrier)
16,000              2.048 (E-1 carrier)
12,000              6.312 (DS-2)
 9,000              8.448
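The rate-adaptive idea can be illustrated with the figures of Table (21). The sketch below (ours, not a DSL algorithm) simply picks the best tabulated downstream rate supported at a given loop length; a real modem measures the line rather than consulting a static table.

    # Downstream rate lookup from the Table (21) figures.
    RATE_BY_MAX_FEET = [        # (maximum loop length in feet, downstream Mbps)
        (9_000, 8.448),
        (12_000, 6.312),        # DS-2
        (16_000, 2.048),        # E-1 carrier
        (18_000, 1.544),        # T-1 carrier
    ]

    def downstream_mbps(loop_feet):
        """Best tabulated rate at the given loop length (None beyond reach)."""
        for max_feet, mbps in RATE_BY_MAX_FEET:
            if loop_feet <= max_feet:
                return mbps
        return None             # beyond about 18,000 ft the table offers no rate

    print(downstream_mbps(10_000))    # -> 6.312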

The major applications of ADSL technology are: (1) information highway to the wide community, (2) high-speed Internet access, (3) distance learning through videoconferencing etc., (4) video on demand, and (5) video telephony.
ADSL was standardized by the ITU-T in Recommendation G.992.1 in 1999. The splitterless ADSL, known as ADSL Lite, was recommended in G.992.2. In ADSL Lite, the splitter at the customer's premises is avoided, at the cost of lower transfer capacities of 1.5 Mbps downstream and 512 kbps upstream.
4.2.4 VDSL Technology
Very high speed or very high rate DSL technology is the most recent and important addition to the DSL technologies. The technology is believed to provide the bridge between today's existing copper infrastructure and the near future's all-fiber infrastructure. VDSL modems [140-43] are placed in the customer's premises and at the end of the fiber installation, the neighborhood or exchange point where the fiber link terminates. With this technology, very high speeds are possible on the copper link spanning about 1.5 km between the fiber end and the customer's premises: as high as 15 Mbps total in both directions, and 52 Mbps over a short distance of 300 m or less. VDSL is about 100 times faster than normal modems. The proposed VDSL can use up to 30 MHz of bandwidth, compared to 1.104 MHz for ADSL and 300, 580 or 1100 kHz for HDSL. VDSL supports two service classes: asymmetric, known as Class I service, and symmetric, known as Class II service. The asymmetric service type is compatible with ADSL technology and primarily aims at residential customers; the symmetric service aims to serve business purposes. VDSL is expected to provide broadband services to both business and residential communities on the existing copper infrastructure. VDSL data rates are given in Table (22).
VDSL System
VDSL is aimed to be coupled with FTTC (Fiber To The Curb) and FTTB (Fiber To The Building)/FTTH (Fiber To The Home), technologies that use fiber in part of the local loop. In that context, the VDSL reference model is shown in Fig. (18).
Table 22: Typical VDSL data rates

Service Class   Upstream rate (Mbps)   Downstream rate (Mbps)   Spanning distance (m)
Asymmetric      6.4                    52                       300
                3.2                    26                       900
                1.3                    13                       1500
Symmetric       26                     26                       300
                13                     13                       900
                6.5                    6.5                      1500


Fig. 18: VDSL System/Model. (a) System reference: a VDSL transceiver at the NT (Network Termination) in the customer premises connects over the copper link to a VDSL transceiver at the ONU (Optical Network Unit), which connects over the fiber link to the central office/exchange. (b) VDSL reference model: splitters at both ends of the copper wire separate the PSTN/ISDN path from the VDSL path between the network interface and the LT (Line Termination).

Attenuation and Cross Talk

The subscriber loop is made of copper wire of different gauges, and a number of pairs are grouped together in cable bundles. The attenuation of the signal in a copper wire depends on the dielectric used, the gauge, the type of twisting, and the length; attenuation usually increases with both frequency and length. That is why the data rates of ADSL and VDSL fall with length, as pointed out earlier, and why the distance coverage of VDSL is lower than that of ADSL. It may be noted that NEXT (near-end crosstalk) is not attenuated by the line transfer function; that is why NEXT is more harmful than FEXT (far-end crosstalk). In both ADSL and VDSL, the FDM technique lowers the effect of NEXT, but both NEXT and FEXT cause the data rates to fall with length.
Both technologies, ADSL and VDSL, are believed to provide wider broadband services to residential and business users over the existing copper links of the last mile. However, future research will aim to tackle the issue of falling rates with length.
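The two trends can be put into a toy model: loss grows with both length and frequency. The square-root frequency dependence (typical of skin-effect-dominated copper) and the constant k below are illustrative assumptions, not cable data; the point is only that the much wider VDSL band pays far more attenuation over the same loop, which is why its reach is shorter.

    # Toy attenuation model: dB loss ~ k * length(km) * sqrt(frequency in MHz).
    import math

    def attenuation_db(length_km, freq_hz, k=2.0):
        return k * length_km * math.sqrt(freq_hz / 1e6)

    print(attenuation_db(1.0, 1.104e6))   # top of the ADSL band over 1 km
    print(attenuation_db(1.0, 30e6))      # top of the proposed VDSL band, same loop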

5. MULTIMEDIA COMMUNICATION AND CONFERENCING STANDARDS


It is said that in the future, multimedia shall be the rule and monomedia shall be the exception. Multimedia is a tele-service concept that provides integrated and simultaneous services of more than one telecommunication service, namely the voice world, the video world and the data world. Truly, multimedia is supposed to provide such services in real time and in interactive mode. Typical examples of multimedia applications are teleconferencing, videoconferencing, telemedicine, telemarketing, teleshopping etc.
Multimedia is fast emerging as an important tool of information technology and as a basic tool of tomorrow's life. Multimedia proposes to simulate human-like communication and services in an environment of "you see as I see" and "you feel as I feel". Virtual reality is envisaged in multimedia services. Multimedia transfers your message in your way. Multimedia


is believed to prosper with the general human trend from "nice to have" to "value to have" to "essential to have". With multimedia, a society of "plug and play", "look and feel", "point and feel" and "point and click" shall emerge; in the near future we shall have multimedia cities and centres. Interactive multimedia is a service which provides simultaneous access, dissemination, transportation and processing of more than one information service, like voice, video and data, in interactive mode and in a real-time environment. Multimedia is to integrate three communication worlds, namely the telephone world, the data world and the video/TV world, into a single communication world. A multimedia application shall comprise more than one information type, namely the non-real-time services of data, images, text and graphics, and the real-time services of voice and video. The future world of information and communication shall converge on multimedia applications and shall provide comfort, competition, mobility, efficiency and flexibility. As per Fred T. Hofstetter, "Multimedia is the use of a computer to present and combine text, graphics, audio and video with links and tools that let the user navigate, interact, create and communicate." Technologically, multimedia shall be a service of services, and non-technically a community of communities. Multimedia shall enable people to communicate and access anything at any time, anywhere, at reasonable cost, with acceptable quality and with manageability. The location of man, material and machine resources shall be irrelevant to business in the era of multimedia; it is said that "It makes no sense to ship atoms when you can ship bits." Virtual reality with virtual presence in virtual worlds, virtual cities, business centres, virtual schools and virtual rooms will emerge in the near future. For example, virtual reality at short notice allows collaboration between changing partners on specific tasks, sitting at virtual writing tables without real offices or addresses other than the network. Transactions in this enhanced telecooperative working environment would be electronic analogies of the normal world. Faster work flow, comprehensive 24-hour service, remote operation and maintenance, easier troubleshooting, lifelong learning and leisure-time activities, less travel, less cost and more fun shall be the important attractions of the multimedia world. Multimedia communications provide chicken-and-egg benefits to the information world, and need acceptance at all levels: (1) contact acceptance, viz. service availability and user interface; (2) economic acceptance, viz. less cost, more benefits; (3) content acceptance, viz. quality; and (4) social acceptance, viz. desirability and privacy.

5.1 Standards
A great challenge is to standardize broadband services and systems for the purpose of deployment. In fact, the deployment of seamless integrated mobile broadband services will benefit greatly from the standardization process [48]. In order to define any standard, the International Telecommunication Union (ITU) usually forms a study group, which submits recommendations for standards pertaining to its assigned functions. A list of the different study groups made by the ITU for 1997-2000, along with their assigned functions, is given in Table (23). SG9 and SG16 respectively deal with television and sound transmission, and with multimedia services and systems.
The low bit-rate (kbps) audio coding standards specified by the ITU for multimedia applications are listed in Table (24). The G.71x and G.72x standards are mainly used in different multimedia applications. MPEG-1 (Moving Picture Experts Group) audio coding/decoding is applied in the H.310 multimedia conferencing standard.


Table 23: ITU Study Groups

SG1     Service definition
SG2     Network and service operation
SG3     Tariff and accounting principles, economic and policy issues
SG4     Telecommunication management network (TMN) and network maintenance issues
SG5     Protection and policies against electromagnetic environmental effects
SG6     Outside plant
SG7     Data networks / open system interconnections
SG8     Features and characteristics of telematic systems
SG9     TV and sound transmission
SG10    Software aspects of telecommunication systems
SG11    Signalling and protocols
SG12    End-to-end transmission performance of networks and terminals
SG13    Network aspects in general
SG14    Modems and transmission techniques
SG15    Transport network systems and equipment
SG16    Multimedia services and systems

Table 24: Standards of low bit rate audio coding for multimedia communication

Standard                      Bit rate    Frame size   Algorithmic    RAM size
                              (kbps)      (ms)         delay (ms)     (16-bit words)
G.723.1                       5.3         30           37.5           2.2k
G.723.1                       6.3         30           37.5           2.2k
G.729A                        8           10           15             2k
G.729                         8           10           15             2.7k
G.711 (PCM/POTS)              56          -            -              -
G.722 (broadcast quality)     48-64       -            -              -
G.723 (low bit rate POTS)     5-6         -            -              -
G.726                         32          -            -              -
G.728                         16          -            -              -
MPEG-1 layers (CD audio)      32-256      -            -              -
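A quick sanity check on the table: the payload of a coded frame follows directly from bit rate times frame size (kbps multiplied by ms gives bits). The function name below is ours.

    # Bits per coded audio frame = bit rate (kbps) x frame size (ms).
    def bits_per_frame(bit_rate_kbps, frame_ms):
        return bit_rate_kbps * frame_ms

    print(bits_per_frame(6.3, 30))    # G.723.1 high rate -> 189 bits per frame
    print(bits_per_frame(8.0, 10))    # G.729 -> 80 bits per frame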

Different video coding standards for multimedia services are listed in Table (25), along with bit rates and applications. H.26x standards are used for videoconferencing and MPEG-1 is used for video-on-demand. H.26x standards are mostly used in multimedia videoconferencing standards like H.320, H.324, H.323 and H.310.


Table 25: Video-coding standards for multimedia applications

Standard   Bit rate             Typical Multimedia Application
H.261      64 kbps-1.92 Mbps    Videoconferencing (N-ISDN, n x 64)
H.263      15-34 kbps           Low rate videoconferencing
MPEG-1     1.2-2 Mbps           Video on demand
MPEG-2     3-15 Mbps            Diagnostic-quality video on demand

Table 26: Multimedia conferencing and terminal standards

H.320 (1990): network N-ISDN; video H.261; audio G.711, G.722, G.728; data T.120; multiplexing H.221; control H.242; application: multimedia conferencing with G.711.
H.324 (1996): network PSTN/GSTN/POTS; video H.263, H.261; audio G.723.1, G.729; data T.120; multiplexing H.223; control H.245; application: multimedia conferencing with H.263 and G.723.1.
H.323 (1996): network LAN/Internet (packet switching); video H.261, H.263; audio G.711, G.722, G.728, G.723.1, G.729; data T.120; multiplexing H.225.0; control H.245; application: multimedia conferencing with H.261 and G.711.
H.322: network Isoethernet; video H.261; audio G.711, G.722, G.728; data T.120; multiplexing H.221; control H.242.
H.321: network B-ISDN/ATM; video H.261; audio G.711, G.722, G.728; data T.120; multiplexing H.221; control H.242.
H.310: network B-ISDN/ATM; video H.261, H.262, MPEG-1, MPEG-2; audio G.711, G.722, G.728, MPEG-1; data T.120; multiplexing H.222.0, H.222.1; control H.245; application: multimedia conferencing with H.262, MPEG-1 and H.222.0.

Table (26) is a comprehensive list of the different multimedia standards, their network platforms, video coding, audio coding, data standard, multiplexing standard, control standard and applications. The standard H.324 may be used to provide videoconferencing over the existing telephone network; H.323 may be used for the same over a LAN (local area network); H.320 may be used over N-ISDN using n x 64 kbps channels; and H.310 may be used over B-ISDN/ATM. The table also lists the user terminal requirements for the different multimedia standards.


5.1.1 H.320 multimedia conferencing standard


H.320 is the narrowband (< 2 Mbps) conferencing standard meant for conferencing over telephone networks such as ISDN, with bandwidth typically in the range of 384 kbps. The H.320 family of standards serves video conferencing with any H.320-compatible terminal, irrespective of whether it is a stand-alone videoconferencing unit, a video telephone or a PC-based system. H.320 is often treated as a de facto standard of video conferencing.
H.261 is the video coding standard of H.320. The H.261 standard has a lot of similarity with the MPEG technique, and uses the DCT transformation with motion compensation and Huffman coding [see Box 4] to achieve compression. But unlike MPEG, it has rate control to cope with variable video bandwidth within the range of 40 kbps to 2 Mbps. H.261 supports two picture sizes: the larger is called CIF, with a pixel size of 352 x 288, and the smaller is called QCIF, with a size of 176 x 144. H.320 terminals carry an H.261 video codec.
The audio coding of the H.320 standard can be any one of three: G.711, G.722 and G.728. G.711 is A-law and u-law PCM coding; it supports 3.1 kHz audio at 64, 56 or 48 kbps. G.722 provides higher quality audio with 7 kHz bandwidth using 64 kbps. G.728 supports 3.1 kHz audio at 16 kbps.
H.221 is the framing standard. The audio and video bit streams are multiplexed together to create the frame that is sent; H.221 defines how the frame is formed. Each frame carries 80 bytes of information and creates 8 sub-channels, with bit i within each byte allocated to sub-channel i. The sub-channels are numbered 1 to 8. The first seven carry video and audio data. The 8th sub-channel is used not only to carry data but also to carry other codes:
FAS: Frame alignment signal.
BAS: Bit-rate allocation signal.
ECS: Encryption control signal.
These code-carrying bits form what is called the service channel.
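The bit-interleaved framing can be sketched as follows. This is an illustrative decomposition (ours, not the normative H.221 procedure) of an 80-octet frame into its eight bit-subchannels; the bit ordering within each octet is an assumption of the sketch.

    # Split an 80-octet H.221-style frame into 8 bit-subchannels:
    # bit position i of every octet contributes one bit to subchannel i.
    def split_subchannels(frame: bytes):
        assert len(frame) == 80, "an H.221 frame carries 80 octets"
        channels = [[] for _ in range(8)]
        for octet in frame:
            for i in range(8):                   # assumed LSB-first ordering
                channels[i].append((octet >> i) & 1)
        return channels

    subs = split_subchannels(bytes(range(80)))
    print(len(subs), len(subs[0]))               # -> 8 80
    # subs[0..6] carry audio/video payload; the bits of subs[7] form the
    # service channel carrying the FAS, BAS and ECS codes along with data.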
The standard H.230 provides frame synchronization control and audio-video signal indication control. H.242 is for achieving capability exchange, mode switching and frame reinstatement.
H.243 is the multipoint control standard. Videoconferencing is not only point-to-point but multipoint too; to control multipoint conferencing, a multipoint control unit (MCU) is required, and H.243 is a standard for the MCU.
T.120 is for the communication of all forms of data between two or more multimedia terminals.
H.233 is the security standard; it defines a method of encrypting data.
Different wireless combinations have been investigated in Japan [45] to define multimedia terminals: PDC (Personal Digital Cellular) + PHS (Personal Handyphone System), PDC + 3G, 3G + 4G, 4G + MMAC (Multimedia Mobile Access Communication), and 3G + 4G + MMAC. Such investigations will definitely lead to the coexistence and integration of wireless systems that implement true personal communication.


BOX 4
The Huffman code is a compression code designed by David A. Huffman in 1952. It is a simple improvement over the Shannon-Fano code. To illustrate Huffman coding, suppose an original body of data uses only the source triples in the table below to represent some message; the probability of occurrence of each source triple in the message is also shown, along with the corresponding compressed codes under Huffman coding. The average size of the compressed code becomes: 2 x 0.25 + 2 x 0.25 + 3 x 0.125 + 3 x 0.125 + 4 x (4 x 0.0625) = 2.75 bits per triple, whereas the code size of the original source code is 3 bits per triple.
Source Triple   Probability of Occurrence   Corresponding Compressed Word
000             0.25                        11
001             0.25                        10
010             0.125                       011
011             0.125                       010
100             0.0625                      0011
101             0.0625                      0010
110             0.0625                      0001
111             0.0625                      0000

There are several disadvantages to Huffman coding. First, to design the code one must know the probability of occurrence of every code word in the original block of data. What happens if the probabilities are not known a priori? And what happens if the probability pattern changes over time? Second, the Huffman code is not unique in nature. The code is also a block code. But the redundancy under this code is minimized or optimized.
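A compact implementation of the construction described in the box can be written in a few lines of Python. The sketch below (ours, for illustration) repeatedly merges the two least probable nodes using the probabilities from the table; the particular 0/1 assignments may differ from those shown above, but the code lengths and the 2.75-bit average are the same.

    # Huffman coding sketch: merge the two least probable nodes until one remains.
    import heapq
    from itertools import count

    def huffman_code(probs):
        """Return {symbol: codeword} built from a probability table."""
        tiebreak = count()                      # keeps tuple comparisons well-defined
        heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)     # least probable subtree
            p2, _, c2 = heapq.heappop(heap)     # second least probable subtree
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
        return heap[0][2]

    probs = {"000": 0.25, "001": 0.25, "010": 0.125, "011": 0.125,
             "100": 0.0625, "101": 0.0625, "110": 0.0625, "111": 0.0625}
    code = huffman_code(probs)
    avg = sum(probs[s] * len(w) for s, w in code.items())
    print(code)
    print(avg)    # -> 2.75 bits per triple, against 3 bits uncoded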

6. UTN PERSONAL COMMUNICATION


To offer UTN services in the USA, a band of 160 MHz near 2 GHz has been allotted. Personal communication shall mature with UTN. Frequency band allocations for the different services of personal communication are shown in Table (27). Cellular communication is the early form of personal communication. Personal communication shall converge to and merge with totally wireless, totally service-independent and totally UTN-based communication. If we consider the growth and development of wireless communication at the present rate, total wireless (to an extent) may be achieved within the next 5 years; total UTN service may need another 5-10 years. Integrated and application-oriented communication needs new technology and a new integrated terminal affordable to mass customers. The technology is with us in ATM; the integrated terminal is under development, the computer-integrated telephone being the first such device. It may require another 5-10 years to develop commercially an integrated terminal for voice, video and data. Therefore a matured PCN (Personal Communication Network) is expected within the next 5-10 years. As per forecasts made in the literature, narrowband personal communication service may cover 750,000 km2 and 1,500,000 km2 in the USA in the next 5 and 10 years respectively. Indian communication is lagging behind international communication by several years; by some accounts this lag has been a uniform 4 years since 1986. GSM started in Europe in 1988; India adopted it in 1994. ISDN started in Europe in 1990; India adopted it only in 1996. On the ATM, B-ISDN and UTN aspects, India is yet to open its chapter. Hence it is evident that the Indian lag is more


than 5 years, and may even be 7-10 years, in PCS/PCN. India is also lagging behind its neighbors like Singapore, Taiwan and Hong Kong.
Table 27: Frequency bands of different PCN services

Service                  Frequency Band
Cellular                 800-900 MHz (e.g., GSM: 890-915 MHz and 935-960 MHz)
CT-2                     864/944 MHz
Cordless                 46/49 MHz
Satellite/VSAT/MSS       C band
Narrowband PCS (FCC)     900-940 MHz
Broadband PCS (FCC)      1850-1890 MHz, 1930-1970 MHz, 2130-2200 MHz

7. FROM 2G TO 3G
2G (second generation) technology for mobile connections started around the early 1990s and revolved around GSM cellular communication, which is mainly for voice. 3G was then expected to be deployed around 2000 and was targeted towards:
(1) implementing anywhere, anytime mobile connection with low-cost and flexible handheld devices;
(2) implementing wireless data access, particularly wireless Internet connection; this was motivated by the exponential growth of Internet access, users being keen to get Internet access anywhere and anytime with handheld devices;
(3) implementing high data rates of 2 Mbps, whereas the earlier GSM or 2G offered 10 to 50 kbps;
(4) implementing high-speed multimedia or broadband services, causing a shift from voice-oriented services to Internet access (both data and voice, particularly with VoIP technology), video, music, graphics and other multimedia services;
(5) use of spectrum around 2 GHz, whereas the spectrum allocation for 2G was 800/900 MHz;
(6) global roaming to support global communication;
(7) a flexible network to support existing and future changing requirements;
(8) a mobile multimedia service able to transmit data, voice, video, images etc. over a variety of network types: point-to-point, point-to-multipoint, broadcast, symmetric, asymmetric etc.
The key benefits of 3G will be the delivery of broadband information direct to users, and global access with a unified single radio interface.
Several major challenges must be overcome to implement 3G: wireless Internet for an exponentially growing user base will be difficult to implement until IPv6 is deployed; global roaming with a single number, as proposed in PCN, is yet to be achieved; fixed access with technologies like ADSL, at high data rates up to 12 Mbps, has become a competitor, as has IEEE 802.11b WLAN for the wireless local data interface; and low-cost flexible devices are yet to mature.


7.1 Beyond 3G
Mobile comprehensive broadband integrated communication will step forward into 4G (fourth generation) all-mobile services and communication. The 4G technologies will be a migration from the earlier generations of mobile services, aiming to overcome the limits of boundaries and to achieve total integration. The evolutionary approach towards a wireless information age proceeds as in Fig. (19) [44,47,59], in comparison with other technologies progressing pace by pace. The key characteristics of 4G systems will be higher transmission capacity per user, larger frequency bands, higher traffic densities, and integrated services. The technical challenges behind the expected technology lie with the associated technologies discussed earlier.

Fig. 19: PCN evolution/migration and other technologies progressing pace by pace: 1G (analog cellular) to 2G (GSM, PDC, IS-95) to 3G (UMTS, CDMA), towards a wireless communication society; in parallel, circuit-switched networks to the wired Internet to broadband Internet/DSL and broadband FTTH (Fiber To The Home)/fiber to business; and 802.11b WLAN to 802.11a WLAN to wireless/mobile local area integration; all converging, with multimedia content, high bit rates and IP transport, on 4G: total wireless, seamless coverage and integration, anytime and anywhere communication.

The motivations behind aiming at a 4G information society are many: high-speed transmission; next-generation Internet support (IPv6, VoIP, Mobile IP); high capacity; seamless integrated services and coverage; utilization of higher frequencies; lower system cost; seamless personal mobility (LEO); adoption and integration of fixed and wireless support (ADSL/VDSL/WLL/FSO); mobile multimedia (standards); efficient spectrum use; QoS; flexible and reconfigurable networks; and end-to-end IP systems.
The convergence of local fixed wired networks, including wireless home or local networks, with broadband fixed networks and the coming ad hoc wireless networks will shape how we communicate in the next decades, which may include [49,60]: complete unification and integration of each and every service; a single communication number for each and every service; and freedom to communicate anytime, anywhere. All these provisions must be met with simplicity, cost effectiveness, reliability and flexibility. The problems to be solved in achieving the expected results are: lack of bandwidth, lack of standardization, high error probability of wireless links,


the multiplicity of different systems and operators, and cost reduction. These problems are being addressed. Research on tackling the high error probability of wireless links has progressed in the expected directions [50-53] with the BEC (Backward Error Control) technique; research [55] on optimizing Internet access over IEEE 802.11b has demonstrated a frame-level FEC (Forward Error Control) technique.

8. e-BUSINESS AND e-COMMERCE: A DURABLE APPLICATION OF IT


Nobel Laureate in Chemistry Ilya Prigogine once said: "We can't predict the future, but we can prepare it." Certainly the future will be what we make of it with today's single most important technology, IT (Information Technology): the technology of networks, telecommunications and computers. IT will make an impact on all aspects of our life and is believed to bring about a profound change. The success and effectiveness of the changes will be measured in proper perspective in the future, but IT has undoubtedly brought an unprecedented change to business and commerce by giving them time, space and volume continuity. Today business and commerce have no geographic boundary, no volume restriction and no time limitation; they provide just-in-time and just-on-scale solutions. On the scale of effectiveness, a business is measured by how low the loss (W) of process cost is. If P is the process cost and OPE (Overall Process Efficiency) is the efficiency factor of the business, then W = P - (P x OPE). With the application of IT and its derivatives like KM (Knowledge Management), OPE increases, causing W to fall.
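A numeric reading of the formula: with the process cost P held fixed, raising OPE shrinks the waste W. The figures below are hypothetical.

    # Waste formula W = P - (P * OPE): higher efficiency, lower waste.
    def waste(process_cost, ope):
        return process_cost - process_cost * ope

    for ope in (0.60, 0.75, 0.90):            # hypothetical efficiency levels
        print(ope, waste(1_000_000, ope))     # -> 400000.0, 250000.0, 100000.0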
Besides, IT gives competitive advantage to business activities. Information technology believes that: Investment + Web technology + Users = Big Profits. In such a scenario, e-business and e-commerce have eventually emerged as sound strategies for business and commerce. E-business and e-commerce will be facilitated by different technologies: the global-reach Internet, the WWW (World Wide Web), e-mail, electronic publishing, multimedia systems and communications, interactive video, image recognition and processing, voice recognition, MSS (Mobile Satellite Services) and personal communication, among others. The use of the borderless Internet is increasing, following Moore's law, which estimates the doubling of the performance of silicon every 18 months. Internet growth may even go well beyond Moore's law; Gilder's law may be a more accurate estimate of the growth of Internet traffic. Gilder's law predicts the doubling of packets on the network every few months, where "few" may be in the range of 4 to 9. It is estimated that Internet traffic will increase 1000-fold in the next ten years.
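These growth figures can be cross-checked quickly: doubling every d months over ten years multiplies traffic by 2 to the power 120/d, so the 1000-fold estimate corresponds to a doubling time of about a year, sitting between the Moore-like and Gilder-like extremes.

    # Ten-year growth factor for a given doubling period in months.
    for d_months in (18, 12, 9, 4):
        factor = 2 ** (120 / d_months)
        print(d_months, round(factor))
    # 18 months -> ~102x; 12 -> 1024x (the '1000-fold' figure);
    # 9 -> ~10,321x; 4 -> ~10^9x (the aggressive Gilder end).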
As of today, about 50 million users use the Internet, with about 16 million servers in more than 140 countries. The Internet is the best facilitator of the electronic mode of business and commerce. Today, e-business and e-commerce refer to business transactions over the Internet; they mean doing business over wires, over the Internet, or using information technology. They are changing the rules of the traditional business pattern, and making new rules and means for fast and borderless business. Confusion about the difference between e-business and e-commerce persists: e-business as defined in most of the literature suggests there is actually no difference between the two, yet the two appear to be somewhat different.

8.1 E-Business
E-business refers to the pursuit of business objectives through and using IT. It may also be defined as business activity over a digital infrastructure, or doing business over wires. As per Colin, director of the integration division of CNS, UK, e-business refers to the issue of supply chain integration: an ideal scenario is when a customer places an order and all of the


suppliers and agents involved in the transaction are contacted electronically. Every system involved in the supply and delivery of that product is linked to every other system, hence the talk of "zero latency" transactions, whereby there is no waiting for someone to do something because everything happens at the speed of light. A report says: "E-business relates to how you and your customers place orders and ensure efficient delivery. E-commerce is the financial aspect of doing business. Both aspects will affect your operations sooner or later."
Economists usually identify four types of e-business:
Business-to-business (B2B). This refers to transactions between one business house and another, for example between a large organization and its suppliers. B2B is the most common business model; one example of B2B e-business is MetalSite.com.
Business-to-customer (B2C). This refers to online retail activities, for example software, journals and books sold over the Internet using web sites.
Customer-to-business (C2B). An example is the booking of railway or air tickets on an agent's computer that has a network or Internet connection. C2B is just the reverse of B2C.
Customer-to-customer (C2C). Online auction is the best example of this type of transaction; one instance is eBay.com.
Currently e-business is mostly confined to B2B; other areas of business are of course coming up.

8.2 E-Commerce
E-commerce is basically financial transaction via computer networks, between people and organizations; it is the financial part of e-business. The Harvard academic Jeffrey Rayport defined e-commerce as "selling real products for real money". Eddie Rabinovitch observed: "Not surprisingly, the expected pay-off of e-commerce projects is, of course, the bottom line: money. However, despite the prevailing notion of access to global markets as the most important competitive advantage enabled by e-commerce, most companies expect of e-commerce ways to reduce spending rather than increase profits. Let's for a moment think about the rationale of the previous statement, which is also going to answer another e-commerce question: why is the business-to-business (B2B) market considered by many experts several magnitudes more important than business-to-consumer (B2C)? Well, it's probably easier to convince a CEO to spend $100,000 on a solution that will demonstrably save $1 million than to spend the same amount on a solution that might make $1 million... Making money on the Internet is still quite dicey. But it's not too difficult to demonstrate that B2B e-commerce will save money by improving efficiency and therefore reducing expenses for transactions between companies."

8.3 Problems for e-Commerce and e-Business


Money is the ultimate motive, if not the sole motive, of business; thus there will be no compromise on financial transactions. E-business has to deal with the flexibility, interoperability, scalability, performance and security of business; e-commerce has to deal firmly with the security of the transaction while improving quality of service and performance. Security of e-commerce is required at two levels: confidentiality and authenticity. Failure of e-commerce security virtually means the failure of e-commerce itself. Financial transactions are made in several modes: electronic cash, electronic cheque, electronic transfer and payment advice, etc. Security of the transaction in the electronic payment system is the key to e-commerce. Public key techniques and digital certificates are useful security measures for e-commerce.
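As a minimal illustration of the authenticity requirement, the sketch below uses a shared-secret HMAC from Python's standard library to check that a payment message was not altered in transit. Real electronic payment systems rely on the public key techniques and digital certificates just mentioned; this stand-in, with hypothetical names and values, only shows the idea of message authentication.

    # Message authentication sketch with a shared secret (illustrative only;
    # real e-commerce uses public-key certificates and signatures).
    import hmac, hashlib

    secret = b"shared-secret-key"             # hypothetical key
    message = b"pay 100 to account 12345"     # hypothetical payment advice

    tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

    def verify(msg, received_tag):
        expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_tag)

    print(verify(message, tag))                          # -> True
    print(verify(b"pay 999 to account 12345", tag))      # -> False (tampered)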


9. KNOWLEDGE AGE AND MANAGEMENT


As we move forward and as more and more human-IT interaction plays a role in shaping society, an all-inclusive knowledge society is taking shape. Knowledge Management (KM) has become a central issue of the knowledge age. What, then, is KM? In one theory, KM is seen as a logical extension of the information society, in that its purpose is to cope with the explosion of information and to capitalize on increased knowledge in the workplace. According to Peter, "The successful companies, in knowledge management terms, are the ones that have looked at the business processes rather than seeing the solution revolving round the company intranet." According to his research, the main reason for using knowledge management techniques is to be competitive: through globalization, a lot more competitors come into markets quickly, so you need to do more in order to appear different. Another layer of knowledge is how to integrate things in the organization so that this process makes the organization look different. His research suggests that content management is important; he mentioned a figure that, out of 1000 pages of a marketing intranet, 873 pages were not used, the reason being that they were out of date. KM is not dumping data on the intranet; it is for the sharing of knowledge and information. British Telecom (BT) is on record giving the following reasons for sharing knowledge: (1) knowledge is the basis of services, (2) knowledge helps to cope with changes, (3) knowledge sharing is the natural next step to information sharing.
Alain J. Godbout [61] analyzed the concept of KM from the views of Peter Drucker, Nonaka, Tom Davenport and the American Productivity and Quality Center, among others. Following Peter Drucker, he viewed the KM process as a question of proper vision, organizational networks, educated decisions and the best use of lessons learned as the key to organizational learning. He further said: "In a sense, knowledge management is a form of application of sound management practices to an object: human resources, which are the carrying vector of knowledge." Tom Davenport and the American Productivity and Quality Center are believed to emphasize explicit knowledge more, their focus being on means of optimizing these holdings (the explicit knowledge of organizations is contained in information holdings), improving the methods of formalization, and increasing the use or usability of the available knowledge. Referring to Nonaka's model of mental process, Alain said: "knowledge management is a form of sound management practices to another object: information resources with a different carrying vector of knowledge."
Taylor [62] views knowledge management as a process of ensuring that the organization's knowledge needs are met and of exploiting the organization's existing knowledge assets. DiMattia and Oder [63] defined it thus: "KM involves blending a company's internal and external information and turning it into actionable knowledge via a technology platform."
It is to be noted that in these definitions, knowledge and information are sometimes used interchangeably.
In our Eastern philosophy, knowledge management can be seen in unique terms. As per the Bhagavad Gita, any action has two components: Karmayoga, which proceeds along the path of action, and Sankhyayoga, which proceeds along the path of knowledge. In our philosophy, nature is the best manager, and the first law of nature's management is the principle of least action and least time, which aims to accomplish the most with the least effort in the least time. In nature, everything follows the least path of action: an apple falls from tree to ground in a straight line, water falls in a straight line, and light travels along a straight path. Business or organizational management being an action of man, and man being a part of nature, any manager desires to accomplish any action along the least path and in the least time, for which information technology, our own creation, is with us today. One can see in Fig. (4) how information technology, born of the marriage of computer with communication, is tending to be a nature-like technology.


Over time, the gap between the human axis and technology is reducing. KM, therefore, is an action to achieve goals along the path of knowledge with least action, both mental and physical; or, put otherwise, it is to do by technology the management so far done entirely by man, in order to follow the path of least action, the path of nature, by expanding intelligent technologies like brainy computers and personal communications.
Swamiji made the following few comments on nature, man and knowledge:
Nature with its infinite power is only a machine.
All our knowledge is based upon experience. All human knowledge proceeds out of experience; we cannot know anything except by experience.
Man is man so long as he is struggling to rise above nature, and this nature is both internal and external.
These observations of Swami Vivekananda imply that man earns knowledge from experience, and applies his knowledge to become a creator like nature, which is not impossible so long as nature is assumed to be a machine. It is pertinent to mention here that Tagore said that everything in nature follows a rule. This supplements my view that KM is a step of human effort in which man attempts to be his own creator.

9.1 KM-Conflicts and Confusion


Knowledge management appears to be a collection of organizational knowledge in machines, whereby the collected knowledge can be shared instantly, anywhere, at any time and by anybody, for managerial purposes, be it for policy decisions or for routine work. But can the collected knowledge ever be creative? Or can human knowledge, which is ever creative, be collected? Human knowledge has an ever-alive and changing creative dimension. The creativity of human knowledge brings invention and innovation to the framing and solving of organizational problems. Human and computer have their own different merits, and are not merely supplements to each other. Therefore, how far the goal of KM, to capture and keep the knowledge of employees leaving the organization, can succeed is not without doubt.
The objective of KM is to share knowledge at the intra-organizational and inter-organizational levels for arriving at decisions. The sharing provides a number of solutions and practices, already tried and/or in operation in organizations, so that some of them can be deployed for the current need. All such sharable solutions and practices are necessarily pre-programmed and heuristic in nature. A pre-programmed solution fits a stable environment; a hostile environment may be of innumerable types caused by the wicked environment. How far is a pre-arranged solution applicable to the processes and problems of a wicked environment? One recent example is the fight between the ICC (International Cricket Council) and the BCCI (Board of Control for Cricket in India) over the match referee's report against six Indian cricketers during their second cricket test against South Africa. No pre-programmed knowledge was available to provide a solution; the problem was the first of its kind and was dragged into a wicked environment. Only human creative empowerment saved the situation. Yogesh Malhotra [64], authoritatively analyzing KM in inquiring systems, duly highlighted the stated limitation of KM. He showed that, of the four inquiry systems, namely:
1. Leibnizian inquiry systems, which are closed systems without access to the external environment: they operate on given axioms and may fall into competency traps based on diminishing returns from the tried and tested heuristics embedded in the inquiry processes. Example: the best team and players of test or one-day cricket as per some mathematical models (ESPN ratings, Pepsi ratings, etc.).


2. Lockean inquiry systems, which are based on consensual agreement and aim to reduce the equivocality embedded in diverse interpretations of the world view. Example: a selection board meeting for a cricket team.
3. Kantian inquiry systems, which attempt to give multiple explicit views of a complementary nature and are best suited for moderately ill-structured problems. Example: the result of a final match.
4. Hegelian inquiry systems, which are based on a synthesis of multiple, completely antithetical representations characterized by intense conflict because of contrary underlying assumptions. Example: which party is to form the government when no party has a majority in an Indian parliamentary election!
KM may have a significant role in Lockean and Leibnizian systems, as they are suited to stable and predictable organizational environments, but it will have limitations when applied to the other two systems, which are better suited to wicked environments. Wicked environments are characterized by discontinuous change; since information technology has a trend of creating wicked environments, it is not yet clear how KM will suit the information-technology-driven present and future world.
5. One of the main features of KM is the sharing of knowledge for improving business processes and activities. The expectations and the results of knowledge sharing, particularly in a competitive environment, can however cause havoc. In one final examination, the topper of the class and the second topper sat side by side. The topper wanted to check the answer to a problem, which he had correctly worked out as, say, 60. When the topper asked the second topper, the second topper, although he had also got 60, told him the answer was 50, just to confuse him. The topper, being confused, scrapped his answer and tried another approach, but time ran out before he could finish. Consequently, in the results the topper went down to second position and the second topper moved up to first. This shows the possible counterproductive consequence of knowledge sharing, particularly in a competitive business environment; the phenomenon may be called the calamity of knowledge sharing. The calamity may also occur when substandard knowledge is shared.
6. The more serious conflict of knowledge sharing lies in its very definition. If knowledge is power, if knowledge is saleable, and if knowledge brings prestige, power and authority, why should one share one's knowledge? The very basics of knowledge do not support knowledge sharing; this being the case, KM itself lies under a cover of confusion. Thomas H. Davenport described [49] this phenomenon: sharing and using knowledge are often "unnatural acts". He felt that sharing and usage have to be motivated through time-honoured techniques, performance evaluation and compensation for example: Lotus Development, now a division of IBM, devotes 25% of the total performance evaluation of its customer support workers to knowledge sharing; Buckman Laboratories recognizes its 100 top knowledge sharers with an annual conference at a resort; ABB evaluates managers based not only on the results of their decisions but also on the knowledge and information applied in the decision-making process. Another problem of the same nature also exists in organizations: an employee who is an expert in an obsolete technology may not like to share the knowledge of an expert of the new generation, for several reasons such as ego, inferiority complex, and fear of being outclassed. This phenomenon can be compared by analogy with the electric circuit illustrated in Fig. (20). The organization wants to attain a knowledge level K, and it has a storage capacity C; but the organization offers a resistance.


This resistance delays the organization in attaining the knowledge level K. Until and unless the offered resistance is removed by an organizational process of transformation, the conflict will exist and resist the implementation of KM. The organizational resistance (R) restricts the flow of knowledge.

Fig. 20: An analogy of a conflict: a charging circuit in which C is the storage capacity of the organization and R is the organizational resistance (physical, mental and cultural) that restricts the flow of knowledge.

7. KM involves two words: knowledge and management. Which is for which, or which will rule the other, is a big question. Does KM mean the management of the organization by knowledge, or the management of the knowledge of the organization, or a hybrid? This confusion is pictorially illustrated in Fig. (21).
8. Lester C. Thurow documented some factual conflicts existing in the USA. Information technology has been projected as high-productivity in nature, but Thurow's studies claimed that "Financial services in the United States have had negative productivity growth for the last ten years. Every year productivity is falling about 1 percent." His studies on office automation show that offices still use paper in the same ways as in the last 500 years; the paperless or automated office remains a far cry.

Fig. 21: A conflict in picture: does KM mean management by knowledge, management of knowledge, or both?

Knowledge management is technology-based management; therefore its impact and consequences will change with technology and technological trends over time. However, it will not be wrong to define KM as a management using computer and communication, or, for that purpose, to write:
KM = MC2
Technology in general, and information technology in particular, follow a few empirical laws. In that light, we can analyze and predict the future technologies and hence the future KM.

9.2 What is there after Knowledge Management?


The knowledge age has emerged in step with the information technology that inaugurated the information age a few decades ago. The rapid transition is unprecedented in the history of technology


applications. The logical speculation is thus: what comes next, after the knowledge age? One incident reported in Indian history may throw some light on it. The great Akbar once asked his nava-ratnas (nine jewels): what moves fastest? When eight of the nine ratnas pointed towards the royal horse, the ninth, Birbal, got an edge over the others by saying "Our mind, Sir." We find at least one technology area where the trend is to achieve something like the speed of mind, and that is communication. From the trend of communication we have no hesitation (and I am sure all will agree) in concluding that it is the speed of communication that is growing by leaps and bounds. We have seen the ages of kilobits per second and megabits per second, are presently in the age of gigabits per second, and are seeing a tomorrow of terabits per second. This is an indication that after the knowledge age, the next age may be the age of mind, or the age of consciousness. The universe is made of non-living and living things; their comparison in terms of level of intelligence, consciousness and communication power is made in Table (28). S. Ranade, a great admirer of Aurobindo, said [65]: "Knowledge by identity will change current science completely. Particularly physics and biology will see radical changes. The wave-particle duality and the mass-energy equivalence will be seen in the light of the more basic substance of consciousness", and then he defined [65]: "consciousness is awareness, awareness of yourself and of others. In the human being both exist. In the animal, there is only awareness of others, not awareness of itself; it is a more limited awareness. In plants the awareness is even less. In the crystal it is still less, but nevertheless it is there." If the crystal has awareness, it is surely possible that the next century will be the century of consciousness, in which "you can focus your body consciousness on a point outside the body". Will the will power or mind power of Iswar Patuli, depicted by the great Bengali novelist Sarat Chandra, prevail upon society, organization, culture and economy at the fragile end of the knowledge age?
The Mother, in the historical declaration [66] made on April 24, 1956, said: "The manifestation of the supramental upon earth is no more a promise but a living fact, a reality. It is at work here, and one day will come when the most blind, the most unconscious and even the most unwilling shall be obliged to recognize it." Perhaps that will be in the age of consciousness that follows the knowledge age. Collaborative views on this prediction form one important piece of research found in [68].
Table 28: Comparison of different entities in the universe in terms of sense and communication

Entity              Sense and communication
Non-living things   Apparently no sense and no communication (Dr Ranade sees otherwise)
Plants              Limited sense and no communication
Animals             Low level sense and communication
Human beings        High level sense and communication

10. AGE OF DIGITAL DIVIDE


Tagore once said we have only one country in this universe, and that is the world. Rabindranath Tagore's powerful philosophy may ultimately be realized if today's tenet of "one world, one village" is implemented in the true sense in future. To achieve this, a trend has already been initiated the world over: privatization, liberalization and globalization are replacing liberty, fraternity and equality all over the world, including the countries of the third world. This does not mean that liberty and fraternity have no relevance in today's society. They are ever alive and


their universal appeal shall ever remain for a noble human society, but today they are not all in all; privatization and universalization shall be their social partners. This is a wave brought forward by different emerging technologies, which are often interactive, interdependent and diffusive: information technology, computers, communication, microelectronics, genetic engineering, biotechnology and space technology, to name a few. The developing world in general lags far behind the modern technological evolutions and revolutions. Besides, the developing countries hardly have the capital to deal with such fast, rapid and perpetual changes; the developing world in general is labor-intensive rather than capital-intensive. Therefore, debate on the ability, suitability and acceptability of liberalization is going on, and will continue for some more time, in the developing countries. Initial mismatch and inertia are parts of life, and the fact is that society never denies mobility. Society ultimately accepts technological changes which might have been out of touch with society even a few years back. The irony is that such delayed acceptance happens in quite haphazard and irregular ways; what has happened to the deployment of computers in government sectors in India today is anybody's guess. This is a lesson the third world always forgets. Consequently the third world continues to lag behind the international trend, and loses money, as there is hardly any planning for technological upgradation and application. We can cite a figure to justify this point: telecommunication lines in India are 66% digitized, whereas the figures for Brazil and Hungary are 35.7% and 41% respectively; but the fault figures are 218 faults per 100 lines in India against 2 faults per 100 lines in the USA and Japan. In Table (29), the percentage shares of information technology for America, Europe and Asia, and those of e-commerce buyers, are shown. It is noticed that in both terms the position of Asia is very poor.
Table 29: % share of IT and e-commerce buyers

Region             % share of information    % share of e-commerce
                   technology in 1995        buyers in 1998
America            45.5                      72.57
Europe             30.9                      22.8
Asia and Pacific   23.7                      4.6

Better is not the sole dimension of competitive advantage; faster is another equally important dimension. Thus it will be a sound strategy for the developing country to take part in globalization without any further loss of time, but with intelligent, selective, judicious and strategic application of the globalization process and with the use of, and innovation with, a few technologies. Analyzing the problems of the third world in depth, Dr. Colombo observed: "The ability of developing countries to derive all the benefits of the new technologies faces one stumbling block right from the start. Although rapidly and seemingly effortlessly permeating the economic and production systems of the world, these technologies are not available off the peg. They have to be absorbed, metabolized, mastered and controlled. Their application calls for a pre-existing capability to insert new ideas, new practices, and new elements into a flexible system. This does not simply exist in the vast majority of the developing countries. Furthermore, it is essential that as the new technologies are introduced into the socio-economic fabric of the third world, they do not impair or destroy existing local cultures... we must equally concern ourselves with safeguarding the richness of the world cultures, mankind's cultural genoma." Despite these problems, it is strongly believed that the intelligent application of the new technologies in the developing countries can indeed speed up the process of economic growth.


10.1 Gap Studies


In the history of social studies, one important component of research deals with finding the reasons and causes of the growing gap between rich and poor, and with suggesting measures and steps to reduce that gap. But the fact remains that the gap has not been reduced, even after thousands of such studies and the implementation of their recommendations, including those of some Nobel laureates.
A few research findings report [67]:
1. If the present growth trends in world population, industrialization, pollution, food
production, and resource depletion continue unchanged, the limits to growth on this
planet will be reached sometime within the next 100 years. The most probable result
will be a sudden and uncontrollable decline in both population and industrial capacity.
2. It is possible to alter these growth trends and to establish a condition of ecological
and economic stability that is sustainable far into the future. The state of global
equilibrium could be designed so that the basic material needs of each person on
earth are satisfied and each person has an equal opportunity to realize his or her
individual human potential.
3. If the world's people decide to strive for this second outcome rather than the first, the sooner they begin working to attain it, the greater will be their chances of success.
To us those conclusions spelled out not doom but a challenge: how to bring about a society that is materially sufficient, socially equitable and ecologically sustainable, and one that is more satisfying in human terms than the growth-obsessed society of today. Whatever gap and whatever challenge is to be met revolves around three factors: (i) the economic and social gap, (ii) the education gap and (iii) the status gap between agriculture and industry.

10.2 Problems of Agriculture Sector


The existing economic and social gap between rich and poor is primarily due to two avalanche effects:
(a) in the agriculture sector, the negative avalanche of produce and perish, and
(b) in the business sector, the positive avalanche of produce and flourish.
The only solution to restore the balance is that the prices of agricultural produce must be raised to those of business products by strict government control. Education is an investment not only of money but also of time and human resources. Parents have noticed that boys and girls, after getting a school-level education, become useless, worthless or resourceless rather than resourceful in terms of earnings for the family: they neither get jobs nor are, by that time, skilled for laborious jobs, including agricultural jobs. Had these children not been sent to school but instead been engaged from childhood in agriculture-related sectors, they would have been more useful earners for the family. This clearly demonstrates that until education comes with a guaranteed minimum income for the family, a poor family does not like to take the risk of spending time and money on education.
The MacBride Commission reported that farmers and agricultural producers must have direct market knowledge to get the actual price of their produce. This is believed to be possible only with IT.

10.3 Case of Industries


The state of West Bengal in India has achieved a considerable amount of rural economic growth in the last two decades. The average income of rural people has increased, and the social


security of rural people has been established on the solid footings. The disparity in income among
the rural people has decreased considerably. An all around development of rural people and society has been noticed. However this development is due to land reforms and barga system sincerely implemented by the Left-front government of W. B. in their 25 years of rule.
By the process of land reforms and the barga system, the agricultural workers or farmers
are given confidence that they will never be thrown out of the work and land they cultivate.
This confidence has generated among farmers a greater sense of belonging and sincerity in
their work. It has reduced the victimization and injustice earlier meted out to them by the
landlords in terms of payment or non-payment, which in turn has caused agricultural
productivity to increase, the loss of agricultural working days to decrease, and agricultural
disputes between labour and owner to lessen. The barga solution is our own and is not something
copied from the developed nations.
The economic and productivity failures in all sectors, namely agriculture, industry and
banking, are mainly due to disputes between labour and owners. Thus if such disputes in the
agriculture sector are overcome by the barga system, the system is logically extensible to other
sectors like industry and banking too. In this paper we propose an industrial barga system for
Indian industries. We have achieved something unique through our own system of barga in the
agriculture sector; similarly, the fact that the industrial barga does not prevail elsewhere does
not mean it is inappropriate in India. In the Indian environment, where economic disparity is
huge and labour is cheap, so that victimization of labour is easy, the industrial barga will be
the right solution.
The proposed industrial barga aims to share the production and profit of industries among
labour, management and owner, as in the agricultural barga. There may be several means of
implementation, and the industrial barga will not be easy to implement. In the IT age, however,
this gap is easier to bridge. What is needed now is the strategy and goodwill to apply it in the
right perspective.

11. CONCLUSIONS
The goals of both the near and the far future of IT are depicted in Fig. 22. In the field of
computers, the major challenge of the 21st century will be the design of the bio/brainy computer.
Basic science has been searching, since the days of its journey, for the design, if any, behind the
universe as well as a theory of the birth of the universe, and possibly for a new Theory of
Everything, since Prof. Hawking's prediction, made in 1980, of achieving his famous Theory of
Everything by the end of the 20th century has proved wrong. The debate on the deterministic
vs. probabilistic nature of the universe, or on whether nature is a machine or not, is still
oscillating. In such a scenario the debate on the possibility of designing a brainy computer can
only be a logical extrapolation, and it will definitely take a long time to answer. On the other
hand, future all-wireless, anywhere and anytime communication is a relatively non-debatable
issue and is expected to be achieved, although not without overcoming many obstacles. Even a
small deployment like the IEEE 802.11 based WLAN faces many obstacles[69]. Other than
systems and standards, two inherent problems of future communication need to be properly
addressed: the higher error probability of all wireless links, and information security. Whereas
error control is basically a technical issue, the security of information has several dimensions.
The requirement of security for a durable application of IT, namely e-commerce and e-business,
was illustrated earlier. It is reported[70] that "the increasing frequency of malicious computer
attacks on government agencies and Internet business has caused severe economic waste and
unique social threats." As per the second law of thermodynamics, open systems cannot bring
order without making their surroundings disordered.
The security measures meant to bring order to information processing inevitably bring disorder
to the surroundings, which may itself again become the source of hackers or security breakers.
This is a manifestation of chaos and complexity. Of course, man creates problems only to solve
them afterwards. Does nature like to see man dancing between problems and solutions? Are we
then leading ourselves to a state of chaos and complexity[71]? This is compounded by the fact
that computer system and network security is increasingly limited by the quality and security
of the software running on the constituent machines. Researchers estimate that more than half
of all vulnerabilities are due to buffer overruns, an embarrassingly elementary class of
bugs[72,73]. The steps to get out of chaos and complexity will be a major challenge for
investigation in the 21st century.
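To make the cited bug class concrete, the following minimal C sketch (our own illustration,
not drawn from refs. [72,73]) shows the classic unchecked-copy pattern behind most buffer
overruns, together with a bounded alternative.

    #include <stdio.h>
    #include <string.h>

    /* Classic buffer overrun: strcpy() writes past the end of buf
       whenever the input holds 16 or more characters. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);               /* BUG: no length check */
        printf("Hello, %s\n", buf);
    }

    /* Bounded copy: snprintf() never writes more than sizeof buf
       bytes, truncating over-long input instead of corrupting memory. */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("a deliberately over-long, attacker-controlled string");
        return 0;
    }

The point of the sketch is that the unsafe and the safe versions differ by a single call; this is
why the class of bugs is called embarrassingly elementary, and why it nevertheless persists.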
[Fig. 22: IT in 21st century. In the near future: high speed computing, autonomous computing,
optical computing, quantum computing, and chemical/bio/intelligent computing (seamless power
plus intelligence); 3G to 4G mobile, cellular, GSM, PDC, PHS, paging, UMTS, FSO, xDSL, next
generation IP and VoIP, wireless Ethernet (IEEE 802.11), wireless home networking (IEEE
802.15.4), wireless Internet, LEO, multimedia standards, wireless ATM and PCN (seamless
mobility, coverage and total integration); and the passage from the information age to the
knowledge age: knowledge society, knowledge factory, knowledge workers, knowledge as wealth.
In the far future: the age of consciousness.]

Entering the knowledge age is the inevitable consequence of the application of networks
in business, organizations, government, society and the economy. The entry needs to break
several hurdles. The acceptability of a knowledge economy with a non-material form of wealth,
knowledge, along with the new status of human resources as knowledge workers, and the concept
of sharing knowledge for organizational benefit, are a few of the areas to be addressed.
The quantification of knowledge and the exchange rules of knowledge, for the purpose
of sale of and business with knowledge, are technical challenges and need serious
investigation in this century. Consciousness, as Penrose put it, is the phenomenon whereby
the Universe's very existence is made known. Thus in the age of consciousness, man's desire
to be the master of nature, with which this paper started, may be realized. Will it really be?
The constructive and judicious application of IT may lead to overcoming the consequences
of the digital divide. Several studies[74,75] have suggested the application of IT in Education
and Training, Telemedicine and Diagnosis, E-Government, Rural Information Sharing for food
conservation and sale, and Entertainment, among others, for deriving maximum benefit in the
developing countries. Besides the digital divide, another negative face of IT is what happened
on 11th September in the USA. Analyzing the 11th September issue, a noted research work[76]
has examined it with a view to developing a system dynamics for the positive application of
technology. This is a new direction of research in the application of technology, and the same
direction may be extended to removing the digital divide.

REFERENCES
1. C.T. Bhunia, Introduction to Knowledge Management, Everest Publishing House, Pune, 2003.
2. C.T. Bhunia, Modern Computer Architecture: Synthesis and Future, Information Technology, June 1992, pp. 80-81.
3. C.T. Bhunia, Trends of Modern Computer, CSI Communication, Aug.-Sept. 1997, pp. 11-14 & 6-7.
4. C.T. Bhunia, Molecular Electronics, J. IETE Tech. Review, Vol. 13, No. 1, Jan.-Feb. 1996, pp. 11-15.
5. Michael A. Nielsen et al., Quantum Computation and Quantum Information, Cambridge University Press, 2000.
6. Charles H. Bennett et al., Quantum Information Theory, IEEE Trans. on Information Theory, Vol. 44, No. 6, Oct. 1998.
7. C.T. Bhunia, Tomorrow's Computers, Science & Knowledge, Jan. 1995, pp. 7-9.
8. Vivek S. Nittoor, A Brief Introduction to Quantum Computing and Quantum Information, Proc. National CSI Convention, 2002, pp. 6-11.
9. C.T. Bhunia, On Way to Autonomous Computers, Electronics For You, Jan. 2003, pp. 42-44.
10. J.H. Burroughes, C.A. Jones and R.H. Friend, New Semiconductor Device Physics in Polymer Diodes and Transistors, Nature, Vol. 335, No. 6186, 1988, pp. 137-141.
11. D.A. Fraser, The Physics of Semiconductor Devices, Oxford Physics Series, 1977, Ch. 2 and 7.
12. R.W. Whatmore, in: L.S. Miller and Mullin (eds.), Electronic Materials, Plenum Press, 1991, Ch. 19.
13. Y. Hirshberg, Reversible Formation and Eradication of Colors by Irradiation at Low Temperature: A Photochemical Memory Model, J. Am. Chem. Soc., Vol. 78, 1956, p. 2304.
14. H. Brown, Photochromism, Techniques of Chemistry, Vol. 3, Wiley Interscience, N.Y., 1971.
15. Robert R. Birge, Protein-Based Three-Dimensional Memory, American Scientist, Vol. 82, 1994, pp. 348-354.
16. C.T. Bhunia, Molecular Electronics & Chemical Computing Technology, CSI Communication, Nov. 1995, pp. 13-26.
17. R.W. Munn and C.N. Ironside, Nonlinear Optical Materials, Blackie Academic & Professional, 1993.
18. Geoffrey J. Ashwell, Molecular Electronics, John Wiley & Sons Inc., 1992.
19. Prasad and Williams, Introduction to Nonlinear Optical Effects in Molecules & Polymers, John Wiley & Sons Inc., pp. 1-273.
20. John Fulenwider, The Future Looks Bright for Fiber Optics, Laser Focus World, Dec. 1990, pp. 141-145.
21. Alastair M. Glass, Fiber Optics, Physics Today, Oct. 1993, pp. 34-38.
22. M.N. Islam, Ultrafast Switching with Non-linear Optics, Physics Today, May 1994, pp. 34-40.
23. Burland et al., Second Order Non-linearity in Poled Polymer Systems, Chem. Rev., Vol. 94, 1994, pp. 31-75.
24. C.T. Bhunia, Personal Communication, J. IETE Edu., Vol. 38, No. 2, April-June 1998, pp. 109-118.
25. Jay E. Padgett et al., Overview of Wireless Personal Communications, IEEE Communications Magazine, Jan. 1995, pp. 28-41.
26. Ashoke Chatterjee et al., Personal Communication: New Challenges for Digital Services, Proc. IEEE Tencon, New Delhi, 1997, pp. 146-148.
27. Guy Cayla, Wireless Local Loop: A Gateway to the Global Information Society, Proc. IEEE Tencon, Asia, 1997, p. T.5.
28. M.V. Pitke, Wireless Technology in Developing Countries: Issues and Alternatives, Proc. Telecom Asia, 1997, p. T.5.
29. Arup Ganz et al., Performance Study of Low Earth Orbit Satellite Systems, IEEE Trans. Comm., Vol. 42, No. 2/3/4, Feb./March/April 1994, pp. 1866-1871.
30. William W. Wu et al., Mobile Satellite Communications, Proc. IEEE, Vol. 82, No. 9, pp. 1431-1444.
31. Markus Werner et al., Analysis of System Parameters for LEO/ICO Satellite Communication Networks, IEEE J. on Selected Areas in Communications, Vol. 13, No. 2, Feb. 1995, pp. 371-379.
32. Enrico Del Re et al., Efficient Dynamic Channel Allocation Techniques with Handover Queuing for Mobile Satellite Networks, IEEE J. on Selected Areas in Communications, Vol. 13, No. 2, Feb. 1995, pp. 397-405.
33. Abbas Jamalipour et al., Traffic Characteristics of LEOs-based Global Personal Communication Networks, IEEE Communications Magazine, Feb. 1997, pp. 118-122.
34. C.T. Bhunia, LEO Systems and Communications, J. IETE Edn., Vol. 40, No. 3 & 4, July-Dec. 1999, pp. 109-120.
35. Dan Arazi, Fast Access to the Internet and Interactive Multimedia Using DSL Technologies, ITU Asia Telecom, 1997, pp. 1-10.
36. Stefano Bregni et al., Local Loop Unbundling in the Italian Network, IEEE Communications Magazine, Oct. 2002, pp. 86-93.
37. Ahsan Habib, Channelized Voice Over Digital Subscriber Line, IEEE Communications Magazine, Oct. 2002, pp. 94-100.
38. Mario Diaz Nava, A Short Overview of the VDSL System Requirements, IEEE Communications Magazine, Dec. 2002, pp. 82-90.
39. Asymmetric Digital Subscriber Line, ANSI T1.413.
40. Bell Atlantic to Test Home Video over Copper, Intelligent Network News, 1992.
41. Digital Subscriber Line (HDSL and ADSL) Capacity of the Outside Loop Plant, IEEE Journal on Selected Areas in Communications, 1995.
42. C.T. Bhunia, Asymmetric Digital Subscriber Line, EFY, Jan. 1999, pp. 43-46.
43. C.T. Bhunia, An Insight into xDSL Technology, EFY, Sept. 2001, pp. 73-76.
44. Manuel Dinis et al., Provision of Sufficient Transmission Capacity for Broadband Mobile Multimedia: A Step Toward 4G, IEEE Communications Magazine, Vol. 39, No. 8, Aug. 2001, p. 54.
45. Nobuo Nakajima et al., Research and Developments of Software-Defined Radio Technologies in Japan, IEEE Communications Magazine, Vol. 39, No. 8, Aug. 2001, pp. 146-154.
46. Jeong Hyun Park, Wireless Internet Access for Mobile Subscribers Based on the GPRS/UMTS Network, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 38-49.
47. Johan De Vriendt et al., Mobile Network Evolution: A Revolution on the Move, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 104-110.
48. Fernando J. Velez et al., Mobile Broadband Services, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 142-150.
49. William Webb, Broadband Fixed Wireless Access as a Key Component of the Future Integrated Communications Environment, IEEE Communications Magazine, Vol. 39, No. 9, Sept. 2001, pp. 115-121.
50. Shyam S. Chakraborty et al., An Adaptive ARQ Scheme with Packet Combining for Time Varying Channels, IEEE Comm. Letters, Vol. 3, No. 2, Feb. 1999, pp. 52-54.
51. Shyam S. Chakraborty et al., An ARQ Scheme with Packet Combining, IEEE Comm. Letters, Vol. 2, No. 7, July 1995, pp. 200-202.
52. C.T. Bhunia, ARQ Techniques: Review and Modifications, Journal IETE Technical Review, Vol. 18, No. 5, Sept.-Oct. 2001, pp. 381-401.
53. C.T. Bhunia, A Few Modified ARQ Techniques, Proc. International Conference on Communications, Computers & Devices (ICCCD-2000), 14-16 December 2000, IIT Kharagpur, India, Vol. II, pp. 705-708.
54. Hossein Izadpanah, A Millimeter Wave Broadband Wireless Access Technology Demonstrator for the Next Generation Internet Network Reach Extension, IEEE Communications Magazine, Vol. 39, No. 9, Sept. 2001, pp. 140-145.
55. Luis Munoz et al., Optimizing Internet Flows over IEEE 802.11b Wireless Local Area Networks, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 60-66.
56. Vipul Gupta and Sumit Gupta, Securing the Wireless Internet, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 68-73.
57. Jeyhan Karaogue, High Rate Wireless Personal Area Networks, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 96-102.
58. Geng Sheng Kuo et al., Dynamic RSVP Protocol, IEEE Communications Magazine, Vol. 41, No. 5, May 2003, pp. 130-135.
59. Shidong Zhou et al., Distributed Wireless Communication System, IEEE Communications Magazine, Vol. 41, No. 3, March 2003, pp. 108-113.
60. Yungsoo Kim et al., Beyond 3G: Vision, Requirements, and Enabling Technologies, IEEE Communications Magazine, Vol. 41, No. 3, March 2003, pp. 120-123.
61. Alain J. Godbout, Information vs Knowledge, <http://dir.yahoo.com>.
62. Robert Taylor, Knowledge Management, <mailto:Taylor@gb.unisys.com>.
63. S. DiMattia et al., Hope or Hype, Managing Knowledge, Macmillan Business, UK, 2002.
64. Yogesh Malhotra, Knowledge in Inquiring Organizations, Proc. 3rd Americas Conference on Information Systems, August 1997.
65. S. Ranade, The Technology of Consciousness, Dipti Publications, Sri Aurobindo Ashram, Pondicherry, 2000.
66. Sisir Kumar Mitra, Sri Aurobindo, Orient Paperbacks, 1976.
67. R. Sadananda, The Limits to Growth: A Revisit, in Knowledge Networks and Sustainable Development, Proc. 37th National Convention of CSI 2002, Tata McGraw-Hill, 2002, pp. 23-31.
68. Sushil Mukhopadhyaya, Whither Bio-Science?, J. IETE Tech. Review, Vol. 19, No. 6, Nov.-Dec. 2002, pp. 381-386.
69. Upkar Varshney, The Status and Future of 802.11-based WLANs, IEEE Computer, Vol. 36, No. 6, June 2003, pp. 102-104.
70. Hassan Aljifri, IP Traceback: A New Denial of Service Deterrent, IEEE Security & Privacy, Vol. 1, No. 3, May-June 2003, pp. 24-31.
71. C.T. Bhunia, Cryptography: From Classical to Quantum Age, IT Seminar, Dept. of ETC, BEC (Deemed University), Shibpur, 2001.
72. Nancy R. Mead et al., From the Ground Up, IEEE Computer, Vol. 1, No. 2, March 2003, pp. 59-63.
73. D. Wagner et al., A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities, Proc. 7th Network and Distributed System Security Symposium, 2000.
74. Michael Gurstein, Rural Development and Food Security..., SD Dimensions, FAO, November 2000.
75. A.K. Roy, The Dawn of an Information Age, Thought, Vol. V, Issue IV, April 2001, pp. 4-7.
76. Erica Vonderheid, Answering a Wake Up Call, IEEE, The Institute, June 2003, pp. 1 & 12.
77. Arun N. Netravali, When Networking Becomes... and Beyond, IETE Technical Review, Vol. 19, No. 6, Nov.-Dec. 2002, pp. 353-362.
78. P.C. Mabon, Mission Communications: The Story of Bell Laboratories, Bell Telephone Laboratories, Inc., Murray Hill, NJ, 1975, p. iv.
79. Lester C. Thurow, The Wealth of Knowledge, HarperCollins Publishers, USA, 2002.
80. R. McGinn, A Revolution in Networking: Toward a Network of Networks, Network + Interop, Atlanta, Georgia, Oct. 21, 1998.

APPENDIX-A
Edholm's Law
The following table depicts the growth of data rates under different communication/network
technologies. The data rates follow Edholm's law, which states that the data rates of all three
classes of communication, namely wired, nomadic and wireless, are as predictable as Moore's
law: the rates increase exponentially, and the slower rates trail the faster ones with a predictable
time gap.
Table: Data rate growth of different communication/network technologies
Year        Wired                     Nomadic                    Wireless
            (Technology: Data rate)   (Technology: Data rate)    (Technology: Data rate)

1975-1984   Ethernet: 2.94 Mbps       Hayes modem: 110 bps       Wide-area paging: a few hundred bps

1985-1994   Ethernet: 10 Mbps         Modem: 9800 bps            Alphanumeric paging: a few Kbps

1995-2004   Ethernet: 100 Mbps        Modem: 28.8 Kbps           Cellular/GSM: 50 Kbps
            Ethernet: 1 Gbps          Modem: 56.6 Kbps           PCN/UMTS: > 2 Mbps
                                      IEEE 802.11b: 11 Mbps      B3G (Beyond 3G): 12 Mbps
                                      IEEE 802.11g: 108 Mbps     MIMO: 200 Mbps
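As a rough illustration of the exponential trend, the short C sketch below estimates the
doubling time implied by the wired column of the table; the era midpoints (1980 and 2000) are
our own assumption for the elapsed time, not values given in the table.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Wired-column figures from the Edholm table: 2.94 Mbps in the
           1975-1984 era and 1 Gbps in the 1995-2004 era. The midpoints
           1980 and 2000 are assumed here for the elapsed time. */
        double r0 = 2.94e6;                 /* bit/s, circa 1980 */
        double r1 = 1.0e9;                  /* bit/s, circa 2000 */
        double years = 2000.0 - 1980.0;
        double doublings = log2(r1 / r0);   /* about 8.4 doublings */
        printf("Doubling time: %.1f years\n", years / doublings);
        return 0;                           /* prints roughly 2.4 years */
    }

Under these assumed midpoints, wired data rates doubled roughly every two and a half years,
which is the kind of Moore's-law-like regularity that Edholm's law asserts for all three classes.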
