1. Information Technology in the 21st Century
INTRODUCTION
The basic motivations behind all scientific and technological inventions and discoveries are
two: (1) man's inherent desire to live by the principle of least action and (2) man's inherent
desire to be a master like nature, for which he quests to know what there is in nature's
actions and designs. All discoveries, from fire to computers, conform to the principle of least
action. Man's aim of becoming the creator or master of all has led him to design or redesign
himself or herself, which has been manifested in the recent development of clones in the
laboratory, in continuing research on high-speed computing, autonomic computing and quantum
computing, and in the possible design of intelligent or brainy computers in the near future.
In the field of communication engineering, the trends of development duly conform to these
two basic motivations of discoveries and inventions. To achieve all sorts of communication
with least action, the developmental phases of communication have proceeded as: connecting
geographically separated but location-fixed machines (conventional wired telephones/fax), then
connecting geographically separated but movable machines (cordless/mobile phones), then
connecting people rather than machines (communication that supports both man and machine
mobility, which is personal communication). This is how total wireless communication has become
the thrust of tomorrow's communication. In order to achieve nature-like communication, the
communication we do in our day-to-day life, the PTN/UTN (Personal Telecommunication Number/
Universal Telecommunication Number) has evolved. In existing communication the
connection number changes from location to location and from service to service. We have a
different telephone number while at Calcutta than while at Delhi. This is not the
case in natural communication. A person is called by his name whether he is in Calcutta or
in Delhi. A person is called or addressed by his unique name whether it is voice communication
or letter communication. The basic motivations behind scientific and technological development
have thus moved communication research and development onto the footings of TOTAL WIRELESS
COMMUNICATION and PTN/UTN, in the combined form of the Personal Communication
Network/Service (PCN/PCS). There are several other parameters, including techno-economic
and socio-economic aspects, that have made total wireless communication a pillar
of tomorrow's communication; to name a few: the lower maintenance cost of wireless,
easier upgrading and reconfiguration of wireless networks, easier installation of wireless
networks over difficult regions such as hills and seas, and avoidance of the theft of costly
copper wire used in wired communication. The only existing disadvantages of wireless
communication are the higher initial deployment cost of wireless networks over wired networks
and the high error rate probability of wireless links. But over time, once the wireless
technology and its systems attain maturity, these disadvantages will undoubtedly become
past issues. High-speed communication and integrated services are two other important
directions of communication technology. High bit rate carriers like SONET and the integrated
transport technology ATM are the future power of communication technology.
In the same conformity with the principle of least action and man's earnest desire to be a
master of nature, the knowledge age is believed to follow the current information age. The
technical capability and the technology are readily available to transform data into knowledge,
and that is how the challenge emerges of expanding our vision from data to knowledge.
Actually, the knowledge age is the next natural consequence of the networked age. In the knowledge
age, knowledge workers, knowledge factories, knowledge organizations and the knowledge economy
will be the rule of law. The main wealth of the knowledge age will be knowledge rather than
any physical wealth. The subject of knowledge management (KM) will therefore be a key issue
in the 21st century.
This chapter reviews the growth of computer and communication technologies, along
with knowledge management, which are all trying to merge with the human axis (Fig. 1) [1]; it
critically analyzes the problems thereon, attempts possible solutions, and predicts what lies
beyond the knowledge age.
of optical computer was developed. In optical computers it is light that carries the
signals; and in the universe it is light that has the ultimate speed. Accordingly, non-linear
optics emerged as the new frontier of science and technology. The other important deviation
from the classical computer, which emerged due to technological growth and demand, was the design
of brainy computers. The chemical computer is a bold step in formulating the brainy
computer. Optical and chemical computers are now merged under a new field of electronics
known as molecular electronics.
There are several empirical laws that correlate, govern and predict the technological
progress and growth of the last few decades [7-9]. These are:
1. Joy's law, which states that computing power, expressed in MIPS (Millions of
Instructions Per Second), doubles every 2 years,
2. Ruge's law, which estimates that the communication capacity necessary for each MIPS is
0.3-1 Mbps (Millions of Bits Per Second),
3. Metcalfe's law, which states that if there are n computers in a network, the power of
the computers in a network like the Internet is multiplied n^2 times. The law
has been applied to Table (1), which lists the growth of Internet users over several
years, taking 1988 as the reference year and assuming that in that year the
power of a computer was one unit (used for normalization). In that case the power of
a computer over different years would be as shown in the table. Assume that each
user on average uses only one computer for world access through the Internet. Applying
Metcalfe's law to the lowest extent, that the power of an individual computer in the
Internet is multiplied by the square of the number of users in the Internet, the power of a
computer would be as shown in the last column of Table (1). From a figure of
0.25 x 10^12 in 1988 to 2433600 x 10^12 in 2000: a 9734400 (about 10^7) times increase over a
gap of only 12 years! What a future is ahead! Super information power or infinite
information power! Due to this power, the flexible transport technology ATM and
very high rate carriers like SONET/SDH (Table 2), the requirement of any service
at any time, anywhere, with a single device and with a single communication number
may be possible even through the modest Internet, which was basically designed to carry
data only.
Table 1: Trend in Internet/Computer power

Year    Internet users    Normalized         Computer power on
        (millions)        computer power     networking (in 10^12)
1988       0.5               1                      0.25
1989       1.3               1.5                    2.535
1990       2.4               2                     11.52
1991       4.4               3                     58.08
1992       8.7               4                    302.76
1993      14.8               6                   1314.24
1994      26.1               8                   5449.68
1995      49.2              12                  29047.68
2000     195                64                2433600
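The arithmetic behind the last column of Table 1 can be checked with a short sketch. The helper name `networked_power` is my own, and the normalized standalone powers for the intermediate years are inferred from the tabulated products:

```python
# Metcalfe's law as applied in Table 1: networked power = standalone power x n^2,
# where n is the number of users (each user assumed to use one computer).
def networked_power(users_millions, standalone_power):
    n = users_millions * 1e6
    return standalone_power * n ** 2

table = [  # (year, users in millions, normalized standalone power)
    (1988, 0.5, 1), (1995, 49.2, 12), (2000, 195.0, 64),
]
for year, users, power in table:
    print(year, networked_power(users, power) / 1e12)  # in units of 10^12
```

Running it reproduces the 0.25, 29047.68 and 2433600 figures of the table.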
[Fig. 1: Trends of brainy computer and personal communication technologies over the years, converging toward the human axis]
Table 2: Digital carrier hierarchies

European                North American          Japanese
Type   Bit rates        Type   Bit rates        Type   Bit rates
E1     2.048 Mbps       DS0    64 Kbps          J1     1.544 Mbps
E2     8.448 Mbps       DS1    1.544 Mbps       J2     6.312 Mbps
E3     34.368 Mbps      DS2    6.312 Mbps       J3     32.064 Mbps (5 x 6.312 Mbps)
E4     139.264 Mbps     DS3    44.736 Mbps      J4     97.728 Mbps (3 x 32.064 Mbps)
E5     565.148 Mbps     DS4    274.176 Mbps
4. Moore's laws, which state that (a) the number of components on an IC doubles every
year (this is the original Moore's law, predicted in 1965 for the following ten years),
(b) circuit complexity on an IC doubles every 18 months (this is known as the
revised Moore's law), (c) the processing power of a computer doubles every year and
a half (Moore's second law, which closely resembles Joy's law).
5. The law of Price and Power, which states that over the years the computing, processing,
storage and speed-up power of computers will continue to increase, whereas the price
of computers will continue to fall.
6. For a new law of communication, readers may refer to Appendix-A.
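All the doubling laws above share one closed form, value(t) = value(t0) x 2^((t - t0)/T), where T is the doubling period. The sketch below (function name mine, figures purely illustrative) applies it to Joy's law (T = 2 years) and the revised Moore's law (T = 18 months):

```python
# Generic doubling law: value after `years` years = start * 2 ** (years / period).
def doubling(start, years, period_years):
    return start * 2 ** (years / period_years)

# Joy's law (doubles every 2 years): 8x growth over 6 years.
print(doubling(1, 6, 2))    # 8.0
# Revised Moore's law (doubles every 18 months): 16x growth over 6 years.
print(doubling(1, 6, 1.5))  # 16.0
```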
In Table (3), a list of computer generations with power in terms of information processing,
storage and speed-up factor is given. It is seen that the first three laws fit well into the list. In
pace with increased processing power in terms of volume and speed, and the wide and flexible
use of computers, the communication transport technology and transmission media have been
developed.
Table 3: Computer power over years
Generations of Intel processors

Processor      Number of Transistors   Word length   Internal bus   External bus
               in the chip             in bits       size in bits   size in bits
8080           n/a                      8             8              8
8088           n/a                     16            16              8
8086           n/a                     16            16             16
80286          134,000                 16            16             16
i386           275,000                 32            32             32
i486           1,600,000               32            32             32
P24T           4,500,000               32            64             32
Pentium        3,300,000               32            64             64
Celeron        4,000,000               64            64             64
Pentium Pro    5,500,000               64            64             64
Pentium II     7,500,000               64            64             64
In chip-level integration to date, Moore's laws say the last word. From SSI to ULSI,
the trend set (Table 4) by Moore's law has been followed. But beyond ULSI, what is there?
Extrapolation of the trend predicts that the future will be the age of molecular dimensions,
inherited by the already established subject of molecular electronics, which is based on organic
materials rather than inorganic semiconductors. Beyond ULSI, further integration on a
chip will face serious problems from physical constraints like the quantum effect. This may lead
to the death of Moore's law. But another interesting dimension may be added to the cause of
the death of Moore's law, based on the law of Price and Power. It is said that the
price per transistor will bottom out sometime between 2003 and 2005; from that point on,
there will be no economic point in making transistors smaller, so Moore's law ends in a few
years. In fact, economics may constrain Moore's law before physics does.
Table 4: Generations of IC integration

Generation    Number of components
SSI           2-64
MSI           64-2,000
LSI           2,000-64,000
VLSI          64,000-2,000,000
ULSI          2,000,000-100,000,000
regard to quantum effects. At this juncture molecular electronics, the application of molecular
materials in electronics, started exploiting some of the new advanced technologies that may be
beyond the scope of the silicon chip. Prof. Bloor explained [16-17] that the continuing
development of silicon micro-electronic devices of smaller size and greater complexity has brought
more compact and powerful instrumentation and computing facilities into the laboratory and
office. Though silicon technology holds a dominant position, the continuing reduction in the
dimensions of an individual device creates problems at both the fundamental and systems
levels. On the one hand, quantum effects must ultimately come into play; on the other, problems
of dissipation and the design of testable architectures are already with us. These pressures lead
inevitably to a search for alternatives to current technology that can offer prospects for the
realization of devices with even higher densities of active components. MSE is one avenue
being explored with these targets in mind.
Research and interest in molecular electronics were mainly initiated by the late
Forrest Carter, who conducted a series of international conferences on molecular electronics
[18-20] in the 1980s. Prof. Bloor wrote [21] that organic solids have attracted the interest of
materials scientists and solid-state physicists since the 1950s, both as alternative semiconductors
and because of their optical properties. Strong research groups grew up in the USA, Russia,
Germany and France at this time.
Although the progress of molecular electronics has not always been smooth, the
prospects for the future are good. In this article, we shall review the present position and
future aspects of molecular electronics.
2.1.2.2 Molecular Materials for Electronics (MME/M2E)
The study of MME looks at the use of molecular materials in key and active roles in electronic
and opto-electronic devices and systems. It is based on understanding and using the macroscopic
properties of bulk molecular materials, i.e. of organic materials. The main categories of
MME are [22]:
Organic semiconductors and metals
Liquid crystalline materials
Piezo/pyro-electric materials
Photo/Electro-chromic materials
Non-linear optical materials/photonics.
Organic Semiconductors and Applications
Organic semiconductors and metals have been much less studied than their inorganic
counterparts. Under MME, a good body of study is gradually emerging. The major applications
of organic semiconductors are in (1) electronic active devices and (2) xerography.
Before turning to organic semiconductors, the processes in amorphous materials
need to be studied. What are amorphous materials? In a crystal, atoms or molecules are
arranged in a regular structure with periodicity. In amorphous materials there is no such
ordered structure.
The development of electronic devices in the last few decades was tremendous because
the electrical conductivity of crystalline semiconductors such as silicon can be controlled over
many orders of magnitude by doping. But [23] there are a number of areas where the expense
of preparing these crystals, and the limited size to which they can be grown (at present
about 25 cm in diameter), have prevented any very large-area applications. For example,
crystalline silicon solar cells are widely used in space vehicles for converting sunlight into
electrical power, but the economics of their production is such that their use here on earth is
relatively limited. Silicon can be prepared very cheaply in large areas by vacuum evaporation
or by sputtering, but the material is then amorphous rather than crystalline. Since the
work on doping amorphous silicon (a-Si) was published, there has been considerable research
into and development of this material, leading to a number of commercial products. Table 1
[46] shows a progress list.
MME studies electronic processes, as distinct from ionic processes, in organic
crystals. What are organic crystals? By organic we usually mean a compound containing carbon.
Almost 90% of the two million compounds known to us are organic. But for MME there are
choices and limitations that need careful study.
To date organic materials have not proved to be a real competitor to silicon/
inorganic materials in terms of active electronic devices. However, during the last five years the
progress in the synthesis of high-purity semiconducting polymers and oligomers has been
noteworthy. Experiments showed that conductive polymers could be employed as either the
metallic or the semiconducting component of metal-semiconductor junction devices [14].
Semiconducting polymers can be used to produce Schottky diodes [6]. Temperature-dependent
properties of such polymer devices have been observed, with rectifying behavior at room
temperature changing to ohmic behavior above 100 C [15].
Burroughes et al. first reported an active polymer transistor in 1988 [16,17]. The
important characteristics of this device were: (1) no chemical doping or side reactions and (2)
the characteristics of the polymer device were insensitive to disorder. But the major disadvantage
of the device was that its maximum operating frequency was limited, because the
carrier mobility in the amorphous polyacetylene layer is very low. The mobilities of electrons
in semiconducting polymers, amorphous silicon and crystalline silicon are of the order of
10^-4, 1 and 10^3 cm^2/Vs respectively. One can see the large gap between the properties of
polymers and silicon. However, a dramatic advance was made by Francis Garnier and co-workers
[18-19]. They reported a totally organic transistor, known as the thin film transistor
(TFT) or organic FET. This transistor is a metal-insulator-semiconductor structure comprising
an oxidized silicon substrate and a semiconducting polymer layer. It has greater flexibility and
can even function when it is bent (disorder is acceptable). The operating speed is still poor. The
problem of the low carrier mobility of insulating polymers is under active research.
Diodes made of organic semiconductors with rectification ratios in excess of 10^3 have been
reported in [23]; light emitting diodes made of organic semiconductors with external
quantum efficiencies in excess of 1% (photons per electron) are reported in [16-22]; and organic
photovoltaic cells are reported in [19-22]. Within a short period, rapid progress has
been observed in the use of semiconducting polymers and oligomers in electronic devices. If this
progress is maintained, in the near future they could be competitive with silicon.
The field of optical computation starts with the search for a bi-stable optical switch based
on the non-linear optical properties of materials. Non-linearity can be exploited for devices by
basically two techniques: frequency conversion and refractive index modulation. The frequency
conversion technique, which is due to second order non-linearity, may be used for second harmonic
generation, frequency mixing, parametric amplification, etc. Refractive index modulation,
particularly the Kerr effect, which is due to third order non-linearity, may be used for optical
bi-stable switches and parallel processing. To date a few optical gates and all-optical bi-stable
switches have been reported, but the field is still confined to the laboratories. Yet optical
computation is a promising field.
Optical computing and processing of information are the important applications of
photonics. The switching speed of photonics (of the order of a femtosecond, 10^-15 s) is many
orders of magnitude beyond that of electronic switching. Optical processing is free from
interference from electrical or magnetic sources. Based on the prospect of three-dimensional
interconnectivity between sources and receptors of light, concepts of optical neural networks
that mimic the fuzzy algorithms by which learning takes place in the brain have been proposed,
and experimentation has begun. Integrated optical circuits, the photonic counterparts of electrical
circuits, can provide various logic, memory, and multiplexing operations. Utilizing
non-linear optical effects, analogs of transistors, or optical bistable devices with which light
controls light, have also been demonstrated [23]. So far as NLO (non-linear optical) materials
are concerned, all materials, in the form of gases, liquids or solids, exhibit NLO phenomena.
However, broadly we can define two classes of NLO materials: (1) molecular materials or organic
materials, which consist of chemically bonded molecular units that interact in the bulk through
weak van der Waals interactions, and (2) bulk materials and traditional inorganic materials.
Today, rapid progress and research have proved organic NLO materials attractive. NLO devices
utilize two different techniques: frequency conversion and refractive index modulation. Based on
these effects, the development of frequency converters and light modulators has been reported
in [23]. Organic materials are seen to be quite attractive for electro-optic light modulation, as
their low-frequency dielectric constant is quite low, leading to a small RC time constant and thus
permitting a higher bandwidth for light modulation than is achievable using inorganic materials.
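The bandwidth claim can be made concrete with the usual first-order RC relation, f = 1/(2*pi*RC): a lower dielectric constant means a smaller capacitance C, and halving C doubles the achievable modulation bandwidth. The sketch below uses illustrative R and C values of my own choosing, not figures from the text:

```python
# Sketch: 3 dB bandwidth of a lumped electro-optic modulator, f = 1/(2*pi*R*C).
# The 50-ohm and picofarad figures are illustrative assumptions only.
from math import pi

def bandwidth_hz(r_ohms, c_farads):
    """First-order RC cutoff frequency."""
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

f_inorganic = bandwidth_hz(50, 2e-12)  # higher dielectric constant -> larger C
f_organic   = bandwidth_hz(50, 1e-12)  # lower dielectric constant -> smaller C
print(f_organic / f_inorganic)         # halving C doubles the bandwidth
```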
The application of second order non-linearity requires that the crystal not have a
centrosymmetric structure. In a centrosymmetric structure the non-linearities, which are
vectorial, cancel each other to give zero macroscopic effect. This is a stumbling block in the
progress of applications of second order non-linearity. To solve the problem, two approaches are
being examined:
1. Use of LB films with either alternating layers of a polar molecule, or molecules which
inherently form polar multi-layers,
2. Inclusion of non-linear optically active molecules in polymer films which are poled
with an applied electrical field.
Put simply, a material in whose bulk the molecules are of non-centrosymmetric
nature may be defined as anisotropically oriented over volumes measured in cm^3. These
conditions are best achieved by growing a crystal. The Langmuir-Blodgett (LB) technique is a
comparably high-tech organic fabrication method, appropriate when the implementation of
the function requires a high degree of molecular anisotropy in an extremely thin layer of uniform
thickness. For OICs, particularly for signal processing, the LB technique offers the possibility of
orienting molecules within a thin layer of highly precise thickness. It has thus become an
attraction. However, films are not the final answer. There are many drawbacks with films,
namely mechanical softness, a limited high-temperature range and an extremely slow rate of
deposition. But rapid research is going on in LB film technology and its application to molecular
electronics materials, both for ME and MSE.
The subject of molecular electronics has moved from conjecture to experimental study
and scientific development. With the rapid growth of research and development in liquid
crystals, polymers, LB films and NLO materials, molecular electronics is now with us. With
advances in Physics, Chemistry, Materials Science, Biology and Engineering, our
understanding of molecular materials at both the microscopic and macroscopic levels will grow,
and the field of molecular electronics will prosper. A better understanding of natural systems,
processes and living organisms will enhance the capability and potential of molecular
electronics, particularly in terms of its application in radically new computational machines and
engineering. Much more work remains to be done. It needs scientific, intellectual and
technological challenges on one hand, and Government and Industrial support on the other.
The progress of all these will actually determine whether, and if so when, molecular electronics
and its device technology will emerge as exciting frontier fields of science and technology in the
current century.
Molecular electronics is a revolutionary idea. To attain maximum miniaturization,
it is proposed that instead of using transistor states, namely ON and OFF, to implement 1s
and 0s, the characteristics of electrons may be used for the same purpose. For example, the
positive and the negative spin may respectively be used to implement 1s and 0s. The idea is new.
It will take a long time to mature and to develop the technology. This will be the last resort of
miniaturization. Molecular electronics is believed to be based on new organic material
technology that may lead to bio or chemical computers. A radical new information processing
system is being thought of, where organic cells or bacteria act as the basic elements.
Living organisms are made of organic compounds, so a thinking function can more easily be
realized in such a system. As scaling will be at the biological level, very high density circuits can
be achieved. Our average brain comprises 10^11 neurons, ranging in size from 0.2 mm linear
dimension to about 100 mm, each with an average connectivity of 10^4, giving a crude bit-count
of 10^11 to 10^15. An equivalent artificial brain may therefore be of similarly dense circuitry.
Enzymes and proteins are being studied. We should not forget that an example of a natural
molecular device is the bacterial photo-reaction center. Recent research to produce analogues
has been successful through the synthesis of single and complex molecules which release charge
on photo-excitation.
However, while the above new technologies aim to attain miniaturization in line with
and/or beyond Moore's law, autonomic computing technology aims at the economic aspect of
technology.
2.1.2.5 Autonomic Computing
Consider the computing paradigms of the Internet. Fig. 2 and Fig. 3 show the exponential
growth of Internet users and Information Technology. The need for a huge number of
technologists to keep the Internet running without much disruption of services is therefore
understood. One statistic says: at current rates of expansion, there will not be enough skilled IT
people to keep the world's computing systems running. Even in uncertain economic times,
demand for skilled IT workers is expected to increase by over 100 percent in the next six years.
[Fig. 2: Growth of Internet users in India, the USA and the UK, 1997-2004]

[Fig. 3: Growth of Information Technology, 1997-98 to 2002-03 (values rising from 1.22 to 2.87)]
Under such a scenario, it is not unreasonable to believe that there might be an exponential
relationship between the growing complexity and power of computing systems and the
technical manpower required to manage and administer them. A new paradigm to relieve
humans of the burden of managing, administering and maintaining computer systems,
thereby passing these tasks back to the computers, is to design computers that help themselves,
now known as Autonomic Computers. Consider how we humans act when we face
problems: when we are physically attacked, we protect ourselves. This solution uses a biological
metaphor. Just as the autonomic nervous system of our bodies monitors, regulates, controls,
repairs and responds to hazardous conditions without any conscious effort on our part, so will
autonomic computer systems. Autonomic computers are to self-control, self-monitor, self-regulate,
self-repair and respond to problematic conditions, again without any conscious effort from
humans.
Autonomic computing technology is therefore a major deviation from conventional rules
like Moore's law. The aim is not to attain more complex, more integrated, more powerful
computers, but self-healing computers that will be economic in terms of maintenance
and operation.
The key characteristics of an autonomic computer system are:
They should be able to fix failures, and to configure and reconfigure themselves
under varying, undefined and unpredictable conditions, so that they prevent system
freezes and crashes
The systems should know themselves fully and comprise components with proper
identity
The systems should always work in optimized conditions and adapt themselves
to varying conditions
The systems should be self-healing, self-correcting and capable of recovering from
common, routine and extraordinary, known and unknown events that might cause
some of their parts to malfunction or crash
The systems should be self-protective against unwanted intrusion
The systems should be expert in knowing their environment and the surrounding activity,
and act accordingly to ease recovery from crashes and interoperation
The systems should adhere to open standards to ensure interoperability among myriad
devices
The systems should, better still, prevent failures in the first place
The systems should optimize resources in anticipation while keeping their operation
hidden from users.
The self-managed computers will have four major components (Fig. 4):
Self-optimized: components and devices of the system will automatically and continually
check their performance and seek to improve it
Self-configurable: components and systems will automatically configure and reconfigure
to make required adjustments seamlessly
Self-healing: the system will automatically detect and repair localized problems
Self-protected: the system automatically protects itself from intentional attacks
[Fig. 4: The self-managed/autonomic computer and its four properties: self-optimized, self-configurable, self-healing, self-protected]
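As a toy illustration only (the class, method names and thresholds below are my own inventions, not from any autonomic-computing product), the four self-managing behaviours can be sketched as methods of a component that monitors and acts on its own state:

```python
# Toy sketch of a self-managed component; all names and thresholds are
# illustrative assumptions, not from the text.
class AutonomicComponent:
    def __init__(self):
        self.healthy = True
        self.config = "default"
        self.load = 0.0

    def self_optimize(self):
        # Continually check performance and seek to improve it.
        if self.load > 0.8:
            self.load = 0.5  # e.g. shed or rebalance work

    def self_configure(self, environment):
        # Adjust configuration seamlessly to the current environment.
        self.config = environment

    def self_heal(self):
        # Detect and repair a localized problem.
        if not self.healthy:
            self.healthy = True

    def self_protect(self, request_is_intrusive):
        # Reject intrusive requests.
        return not request_is_intrusive

c = AutonomicComponent()
c.load, c.healthy = 0.95, False
c.self_optimize()
c.self_heal()
print(c.load, c.healthy, c.self_protect(True))  # 0.5 True False
```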
the quantum bits, referred to as qubits. Two possible states of a qubit are |0> and |1>, like
the binary bits of classical computing; but in addition, all possible superpositions of qubits are
allowed. Therefore a two-qubit system has four computational basis states, namely |00>, |01>,
|10> and |11>. With Moore's law becoming saturated, it is expected that quantum computers
will be one of the future solutions for high-speed and high-power computing. A few theoretical
works have been reported, but practical implementation is yet to be reached.
However, an important milestone in the application of quantum computing has already been
achieved in the area of data security, due to the pioneering work of Bennett et al. in quantum
cryptography.
BOX 1
Quantum Computing: a bit review
QUANTUM GATES
Information processing in quantum computing involves qubit manipulation, which is
performed by unitary operations. A quantum logic gate is a device that performs a particular
unitary operation on selected qubits at a given time. There are infinitely many single-qubit
quantum gates, unlike the only two (identity and logical NOT) in classical information. The
quantum NOT gate takes |0> to |1> and vice versa, analogous to the classical NOT. Two-qubit
quantum gates perform many possible unitary operations, an interesting subset of which is
|0><0| (x) I + |1><1| (x) U, where I is the single-qubit identity operation and U is some other
single-qubit gate. Such gates are called controlled gates, as the action of I or U on the second
qubit is controlled by whether the first qubit is in state |0> or |1>. This leads to the definition
of the controlled NOT, or CNOT gate, as:
|00> -> |00>
|01> -> |01>
|10> -> |11>
|11> -> |10>
This shows that: (a) the second qubit undergoes NOT if and only if the first qubit is in state |1>
(Fig. 1); (b) the effect of CNOT on states |x> |y> may be written as x -> x, y -> x (+) y, for which
reason this gate is also called the XOR gate (Fig. 1).
[Fig. 1: The CNOT/XOR gate: Xo = X, Yo = X (+) Y]
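The truth table above can be checked in a couple of lines: on computational basis states, the classical analogue of CNOT is simply (x, y) -> (x, x XOR y). The function name below is mine:

```python
# Classical action of CNOT on computational basis states:
# the control x passes through; the target y flips iff x == 1.
def cnot(x, y):
    return x, x ^ y

for x in (0, 1):
    for y in (0, 1):
        out = cnot(x, y)
        print(f"|{x}{y}> -> |{out[0]}{out[1]}>")
```

The loop prints exactly the four mappings of the CNOT table.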
Other logical operations require additional qubits. The most popular three-qubit
gate is the Controlled-Controlled-NOT gate (CCN or C2NOT, Fig. 2). This gate is also known
as the Toffoli gate; Toffoli demonstrated that its classical version is universal for classical
reversible computation. A gate is reversible when, for a given output, one can reconstruct the
input(s). The output of the gate on the output wire can be described as:
(a) if the third qubit is in state |0>, then the output is the AND of the two other qubits: the
effect on the input states |x> |y> |0> is x -> x, y -> y and output x.y; (b) the effect on the input
state |x> |1> |z> is that the output is the XOR of x and z; (c) the effect on |1> |1> |z> is that
the output is the NOT of z.
[Fig. 2: The Toffoli (CCN) gate: Xo = X, Yo = Y, Zo = Z (+) (X.Y)]
It has been argued that any logic circuit can be made of CN and CCN gates only.
For example, Fig. 3 illustrates a half adder circuit.
[Fig. 3: A half adder built from CN and CCN gates]
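Assuming the usual wiring for such a circuit (a CCN computing the carry into an ancilla qubit, then a CN computing the sum), the half adder can be sketched on classical basis states as follows; the wiring order is my assumption about Fig. 3, not taken from it:

```python
# Reversible half adder from CCN (Toffoli) and CN (CNOT) gates, acting on
# classical basis states.
def ccn(x, y, z):
    return x, y, z ^ (x & y)  # third wire flips iff x AND y

def cn(x, y):
    return x, y ^ x           # second wire flips iff x

def half_adder(x, y):
    x, y, carry = ccn(x, y, 0)  # carry = x AND y
    x, s = cn(x, y)             # sum = x XOR y
    return s, carry

for x in (0, 1):
    for y in (0, 1):
        print(x, y, half_adder(x, y))
```

Both gates are reversible, so the whole circuit is reversible, as the text requires.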
Table 2

Superposition          In general this means that two things can overlap with
                       each other without interfering with each other. In quantum
                       mechanics two electrons can overlap with each other, making
                       a combined waveform that is a set of amplitude probabilities.

Uncertainty principle  No scanning or measuring process can extract all the
                       information in an atom or similar object; the more accurately
                       an object is scanned, the more it is disturbed.

Entanglement           When an outside force is applied to two atoms, the second
                       atom can take on the properties of the first; once disturbed,
                       the two atoms choose opposite spins or values.
QUANTUM TELEPORTATION
Teleportation is a process by which an object or person, while physically remaining present in
one place, is made to appear as a perfect replica somewhere else. The classical or conventional
approach to teleportation is illustrated in Fig. 4; the fax machine is an example of a teleportation
machine. Until recently quantum teleportation was assumed impossible, as it would violate the
uncertainty principle of quantum mechanics. The uncertainty principle prohibits any scanning
or measuring process from extracting all the information in an atom or similar object: the more
accurately an object is scanned, the more it is disturbed, which may ultimately lead to a
complete change of the original state of the object even before the whole of the information has
been extracted to make a perfect replica of the original. But quantum mechanics has an aspect
known as entanglement. If an outside force is applied to two atoms, entanglement occurs,
whereby the second atom can take on the properties of the first atom. Thus, if left alone, an
atom will spin in all directions; but the instant it is disturbed it chooses one spin, or one value,
and at the same time the second entangled atom will choose the opposite spin or value. This
allows learning the value of qubits without actually looking at them, which would collapse
them back into 1s or 0s.
[Fig. 4: Classical teleportation. At the sending station the original object A, physically present
at location P, is scanned or processed and the data are sent; at the receiving station the received
data are applied as treatment to raw material to generate a replica of A at a location Q away
from P.]
The property of EPR (Einstein Podolsky Rosen) pairs, or entanglement, has made
quantum teleportation possible, overcoming the hurdle of the uncertainty principle. Fig. 5
illustrates quantum teleportation. In the process, part of the information of the original object
A is scanned out. The unscanned part of the information is passed, via the EPR effect, into
another object C. The object C was never in contact with the original object A; the intermediary
object, or delivery vehicle, B conveyed the unscanned part of the information from A to C. It is
now possible to apply treatment to C to make it exactly as A was before A was disrupted by the
scanning process. So a real transportation is achieved in C, rather than a replica.
[Fig. 5: Quantum teleportation. The original object A at location P is scanned at the sending
station; the scanned data are sent classically, while the delivery vehicle B conveys the unscanned
information to the receiving station, where treatment is applied to produce the object at a
location Q away from P.]
QUANTUM CRYPTOGRAPHY
The disadvantage of key distribution in secret key cryptography can be removed with the aid
of quantum technology. If the key distribution problem is solved, the Vernam technique will
be the best technique of security. In order to solve the distribution problem, the use of a quantum
channel for sending information about the key is being explored. In quantum mechanics one
cannot measure something without causing a disturbance to a related parameter. For example,
Heisenberg's uncertainty principle states that Δx.Δp = constant; thus if Δx is changed, Δp is
bound to change. An ideal quantum channel supports the transportation of single photons.
Thus a single photon can represent a bit 0 (zero) or 1 (one). The phase or state of polarization
of the photon may be used for identifying the 0 or 1. For example, photons with 0° or 90° of
polarization may be treated as bit 0; and photons with 45° or 135° of polarization (also known
as ±45°) may be treated as bit 1. Data security through quantum channels is under active
research in the UK and USA. Some positive breakthroughs have been made by Charles Bennett
of IBM Research at Yorktown Heights, New York, and by Gilles Brassard at the University of
Montreal.
If, in the example discussed earlier, Alice wants to send Bob the secret key required by
the Vernam cipher, she can send the key, say of N bits, through a quantum channel. Bob will
be instructed by Alice to detect the photons (bits) from the quantum channel starting from a
given time. There may be some transmission loss, and Bob may be able to detect only some
fraction of the photons or bits. Bob will have to inform Alice over a telephone which photons he
has seen. In this way, they may share a common yet variable key. For instance, if Alice sends
11110000 as the key, and Bob replies that he has seen the first, seventh and eighth photons
(starting from the leftmost bit), then their common key shall be 100.
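The sifting step just described can be sketched as follows (a toy illustration of the bookkeeping, not the full protocol):

```python
def sift_key(sent_bits, seen_positions):
    # Keep only the bits whose photons Bob reports having detected;
    # positions are counted from 1, starting at the leftmost bit
    return "".join(sent_bits[p - 1] for p in seen_positions)

# Alice sends 11110000; Bob sees the 1st, 7th and 8th photons
print(sift_key("11110000", [1, 7, 8]))  # -> 100
```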
Alice can send data haphazardly using differently polarized photons. Alice can do so (Fig. 6)
either on a rectilinear basis, where a horizontally polarized photon represents a 0 and a
vertically polarized photon represents a 1, or on a diagonal basis, where a −45° polarized
photon represents a 0 and a +45° polarized photon represents a 1.
Alice haphazardly uses both bases to send qubits (Fig. 7). Bob will haphazardly try to filter out
the qubits. For the purpose of qubit detection Bob will use a polarization beam splitter. The
polarization beam splitter is a device that allows photons of one polarization to pass through
but shunts photons of the orthogonal polarization. The quantum nature dictates that: (a) a
beam splitter set to the same basis will correctly pass the received photons polarized in that
basis, but (b) a rectilinear beam splitter will pass received diagonally polarized photons as
either vertical or horizontal polarization with equal probability, and a diagonal beam splitter
will pass received rectilinearly polarized photons as either diagonal polarization with equal
probability. This gives the different combinations of Alice's sent photons and Bob's detected
photons. Therefore, when Alice and Bob use the same basis they will correctly communicate
qubits, but when they use different bases, the chance of a match between the sent and received
qubits is 50%. Bob now tells Alice (over a conventional method, say telephone, as there is no
need to keep this secret) how he set the beam splitter to detect the received qubits. Assume
Bob's choices were rectilinear, rectilinear, diagonal, rectilinear, diagonal (Fig. 7). Bob does not
announce the results of detection. Alice replies publicly (again over a conventional method, as
there is no need to keep this secret), telling Bob at which times her choice of basis matched his.
Then they use the qubits of those instants when they used the same basis (at those instants
they correctly communicated the bits), and ignore the bits of the other instants. The matching
bits (Fig. 7) generate the secret key for the session.
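The basis-matching step can be sketched as a minimal simulation, assuming ideal detectors: where the bases agree the bit is kept, and where they differ the result would be random, so the bit is discarded ('+' and 'x' are our shorthand for the rectilinear and diagonal bases):

```python
def bb84_sift(bits, alice_bases, bob_bases):
    """Keep only the bits where Alice's and Bob's randomly chosen bases agree."""
    key = []
    for bit, a, b in zip(bits, alice_bases, bob_bases):
        if a == b:
            key.append(bit)   # same basis: correctly detected by Bob
        # different basis: Bob's result is 50/50, so the bit is ignored
    return key

# Five qubits as in Fig. 7; Bob's choices: rectilinear, rectilinear,
# diagonal, rectilinear, diagonal
bits        = [1, 1, 0, 1, 1]
alice_bases = ['+', 'x', '+', '+', 'x']
bob_bases   = ['+', '+', 'x', '+', 'x']
print(bb84_sift(bits, alice_bases, bob_bases))  # -> [1, 1, 1]
```

Only the first, fourth and fifth positions match, so the session key is 111, as in Fig. 7.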
Fig. 7: Key exchange between Alice and Bob.
(a) Alice sends qubits to Bob randomly (we have taken only 5 qubits for illustration).
(b) Bob measures the received photons using randomly chosen polarization bases. Where the
bases are the same the bit is correctly detected by Bob; where the bases differ the result is
uncertain.
(c) Alice and Bob communicate and identify the positions where they correctly used the same
polarization basis (compare (a) with (b)), but they keep secret the polarization of the sent or
received photons.
(d) The correctly detected bits are taken for the key; bits at the other positions are ignored.
So the key in this example is 111.
There is another technique to minimize hacking by Eve. The technique is known as the
privacy amplification protocol. In the protocol, Alice randomly chooses pairs of bits from the key
they have obtained over the quantum channel. Then she performs XOR on the pairs. She then
tells Bob publicly on which bits the XOR operation was made, but not the results. Bob then
performs the XOR operation on the bits that Alice indicated. Alice and Bob then replace each
pair with its XOR result to form the new key. This is illustrated below:
(a) Alice and Bob have the secret key 111 as in Fig. 7.
(b) Alice chooses the first and second bits as a pair and informs Bob of these positions
publicly. She gets the XOR result 1 XOR 1 = 0 and keeps it secret.
(c) Bob performs XOR on the indicated bits and gets the result 1 XOR 1 = 0.
(d) Alice and Bob both replace the pair by the XOR result. So their new key = 01.
(e) Note that even if Eve definitely knows one bit of the chosen pair, unless she gets the
result of the XOR (which Alice and Bob never communicate) she cannot replace the
pair, and so cannot hack the key.
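The steps above can be sketched as follows (a minimal illustration of one round, with 1-indexed positions as in the text):

```python
def privacy_amplify(key, i, j):
    # Replace the publicly announced pair of positions (i, j), 1-indexed,
    # with the XOR of the two bits; the XOR result itself is never announced
    a, b = int(key[i - 1]), int(key[j - 1])
    rest = [c for k, c in enumerate(key, 1) if k not in (i, j)]
    return str(a ^ b) + "".join(rest)

# Alice and Bob hold 111; they announce positions 1 and 2: 1 XOR 1 = 0
print(privacy_amplify("111", 1, 2))  # -> 01
```

Even if Eve knows one bit of the announced pair, she cannot compute the replacement without the XOR result, which is kept secret.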
The quantum computer is very promising. It has numerous advantages over classical
computers, namely in terms of speed (parallelism is inherent in a quantum computer), power
consumption (nearly half that of a classical computer, due to superposition), and the tackling
of computational problems hitherto impossible with conventional computers. The quantum
computer will be based on quantum logic gates and quantum circuits, and the technology for
these is not yet even in its infancy. On the other hand, two problems of quantum computers
have been identified. It is estimated that quantum error correction will generate more power
than the chips can dissipate, so the technology of the quantum computer may not be easy to
develop. The second problem is decoherence: the decoherence interval measures how long a
qubit can maintain the synchronized waveform needed to represent 1 and 0 simultaneously,
and the decoherence time is estimated on average to be less than 1 microsecond. The challenge
remains how to increase this interval. Yet there is no stop, and there shall be no stop, in the
development of the quantum computer. We would be wrong, though, to think that quantum
computers will replace classical computers: quantum physics has not replaced classical physics;
they coexist, each within its own domain.
Eavesdropping can be tackled
tackled by sending photons with different phases. For example, the bit 0 may be represented
by a photon having a phase of 0 or 180, and the bit 1 can be denoted by a photon with a 90 or
270 phase. When Bob uses, he will be able to detect the bits correctly.
Alice can send data haphazardly using differently phased photons. Bob will haphazardly
try to filter out the bits. After the operation, Bob will inform Alice over the telephone of the
timings and the state of the filter used by him. Alice can then inform him at which instants
they used the same state of filter. Based on this exchange of information, Bob and Alice will
get to know their keys. Should any eavesdropper attempt to intercept the photon transmission,
the key accepted by Alice and Bob will be garbage. This is because quantum theory ensures
that an intercepted photon cannot be retransmitted without changing its phase. Therefore, a
change in the polarity of a photon will let Alice and Bob immediately know of an interception.
In the scheme of sending information at the one-photon-per-bit level, as proposed by the IBM
and University of Montreal researchers, to send the key the transmitter (Alice) tells the receiver
(Bob) that she plans to send n bits (photons) starting at a given time. Alice then sends the bits
by randomly switching the phase in the transmitter between 0° and 180°; this switches the
output in the receiver between 0 and 1. Transmission and detection losses mean that Bob will
only see a small fraction of the photons; he then uses a classical communication channel (the
telephone, for example) to tell Alice which photons he has seen, but not which detector he has
seen them in. This allows Alice and Bob to share the same random number. For example, Alice
uses ten photons to send the random number 1001011101; Bob replies that he only received
the second, fifth and last photons; therefore they have shared the random number 001.
However, it is conceivable that an eavesdropper could intercept the signal, copy Alice's
message, and send it on to Bob without either Alice or Bob realizing. One way to overcome
this, and ensure absolute security, is for both the transmitter and receiver to use non-orthogonal
measurement bases. In other words, Alice sends part of the message by switching the
transmitter phase between 90° and 270°, say, and the other part by switching between 0° and
180°. When Bob and Alice are using the same basis, the system works as before. However, if
Alice is using 0°/180° and Bob is using 90°/270° (or vice versa), the message is meaningless:
a photon that Alice sends as a 0 has a 50% chance of being received as a 1 and vice versa.
Therefore, when Bob tells Alice which photons he has received, he now also says which
basis he was using, and Alice must tell him if that is a valid photon (i.e. one which was sent and
received when they were both using the same basis). Paul Townsend of British Telecom, working
with the Malvern group, recently demonstrated self-interference of short light pulses, containing
on average 0.1 photons, down 10 km of standard communications fiber using the technique.
But remember, Moore's laws are here to stay for at least another decade!
BOX 2
BILLION-TRANSISTOR IC: Hope or Hype
Since the inception of digital electronics with ENIAC in 1946, the computer has
gone through a number of generations, and it is now in the fifth generation. The vast and
rapid changes of five generations of computer technology over a period of just 50 years have
resulted on the one hand in the reduction of the size and cost of computers, and on the other
hand in a tremendous increase in their processing power and capacity. The credit for these is
due to IC (Integrated Circuit) technology. Among others, the famous empirical laws known
as Moore's Laws basically govern the pattern of growth of computers and of IC technology.
Gordon Moore, Head of Research & Development of Fairchild, coined these laws around
1965. Moore's laws state that (a) the number of components on an IC will double every year
(this is the original Moore's law), (b) circuit complexity on an IC will double every 18
months (this is known as the revised Moore's law), and (c) the processing power of computers
will double every year and a half (Moore's second law).
Presently ICs are made of around 250 million transistors. If Moore's law continues to
hold good, it is predicted that by 2010 ICs will be made of a billion transistors. The threats to
the survival of Moore's laws are heat dissipation and the quantum effect, which set a physical
limit to IC integration. Several predictions of the imminent death of Moore's laws were therefore
made earlier. Contrary to these predictions, Moore's laws are surviving and continue to hold
true for IC integration. Two recent research reports have further strengthened confidence
that Moore's laws will survive for at least another few years.
A survey conducted jointly by the IEEE (Institute of Electrical and Electronics Engineers)
and the Response Center Inc of USA (a market research firm) over the fellows of the IEEE
showed that 17%, 52% and 31% of respondents respectively predicted the continuation of
Moore's laws for more than 10 years, for 5-10 years and for less than 5 years. The average
predicted lifetime of the laws is then about 6 years. Moore's laws' existence is then guaranteed
up to about 2009, by which time, following the laws, the billion-transistor IC will be a reality.
The expectation of realizing the billion-transistor IC by 2010 has been further brightened
by the current research of Intel extending Moore's laws. Pat Gelsinger's vision of extending
Moore's laws includes Intel's 90-nanometer fabrication process. Although several alternative
technologies, namely quantum computing, bio computing, molecular electronics and chemical
computing, are under investigation as possible replacements for digital computing, the year
2010 may achieve the landmark of the billion-transistor IC, another leap forward in IC
technology: really a high hope and not a hype.
like next generation cellular or TGMS (Third Generation Cellular System) and UTN services.
As it is expected that personal communication shall operate globally using the concept of UTN,
the required switching and processing systems for personal communication shall be huge and
complex. Intelligent capabilities of switches and nodes are a must. On this basis, we can define
personal communication as an intelligence-based and natural-like communication. Wireless
transmission can take place using different frequency bands. An overview of the different
frequency bands is given in Table 5. The frequency allocation of some wireless communication
systems is given in Table 6.
Table 5: Different Frequency Bands and their Applications

Frequency Band    Wavelength     Name of the Band              Usual Transmission Line Covering the Band
<30 kHz           >10 km         Very Low Frequency (VLF)      Twisted pair
30-300 kHz        10-1 km        Low Frequency (LF)
300 kHz-3 MHz     1 km-100 m     Medium Frequency (MF)
3-30 MHz          100-10 m       High Frequency (HF)
30-300 MHz        10-1 m         Very High Frequency (VHF)
300 MHz-3 GHz     1 m-10 cm      Ultra High Frequency (UHF)    Coaxial cable/Radio waves/Microwaves
3-30 GHz          10 cm-1 cm     Super High Frequency (SHF)    Microwaves
>30 GHz           <1 cm          Extra High Frequency (EHF)    Optical fiber/Infrared links
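The wavelength column of Table 5 follows from the relation wavelength = c/f; a quick check of the band edges (taking c as 3 x 10^8 m/s):

```python
C = 3e8  # speed of light in metres per second

def wavelength_m(freq_hz):
    # Wavelength in metres for a given frequency in hertz
    return C / freq_hz

print(wavelength_m(30e3))   # 30 kHz  -> 10000.0 m, i.e. 10 km
print(wavelength_m(300e6))  # 300 MHz -> 1.0 m
print(wavelength_m(3e9))    # 3 GHz   -> 0.1 m, i.e. 10 cm
print(wavelength_m(30e9))   # 30 GHz  -> 0.01 m, i.e. 1 cm
```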
Table 6: Frequency allocation of some wireless communication systems

Cellular phones:
  USA:    AMPS, TDMA, CDMA: 824-849 MHz / 869-894 MHz
          GSM, TDMA, CDMA: 1850-1910 MHz / 1930-1990 MHz
  Europe: GSM: 890-915 MHz / 935-960 MHz and 1710-1785 MHz / 1805-1880 MHz
  Japan:  PDC: 810-826 MHz / 940-956 MHz and 1429-1465 MHz / 1477-1513 MHz

Cordless telephones:
  USA:    PACS: 1850-1910 MHz / 1930-1990 MHz; PACS-UB: 1910-1930 MHz
  Europe: CT1+: 885-887 MHz / 930-932 MHz; CT2: 864-868 MHz; DECT: 1880-1900 MHz
  Japan:  PHS: 1895-1918 MHz; JCT: 254-380 MHz

Wireless LANs:
  USA:    IEEE 802.11: 2400-2483 MHz
  Europe: IEEE 802.11: 2400-2483 MHz; HIPERLAN 1: 5176-5270 MHz
  Japan:  IEEE 802.11: 2471-2497 MHz
so that co-channel interference does not cause problems. Carrier re-use follows well-defined
rules described in the standard literature.
at 22.8 kbps with 8 slots per frame, as well as half-rate operation at 11.4 kbps with 16 slots per
frame. For voice communication, speech coders compatible with both rates are available.
For data communication, various asynchronous and synchronous services at rates of
9600, 4800 and 2400 bps are specified for both full- and half-rate operation. These data
services interface to audio modems (like V.22 bis or V.32) and ISDN (Integrated Services Digital
Network). GSM can also support the connectionless packet-switched network X.25, the Internet
and group 3 FAX (facsimile).
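The full- and half-rate figures quoted above are consistent with each other: halving the per-user rate doubles the number of slots, leaving the gross per-carrier traffic rate unchanged. A simple check:

```python
# GSM per-carrier gross traffic rate, full-rate vs half-rate mode
full_rate_kbps, full_slots = 22.8, 8     # full rate: 8 slots per frame
half_rate_kbps, half_slots = 11.4, 16    # half rate: 16 slots per frame

# Both products give the same gross rate (about 182.4 kbps per carrier)
print(full_rate_kbps * full_slots == half_rate_kbps * half_slots)  # -> True
```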
GSM has recently been extended to include group calls and push-to-talk services.
Extension bands of GSM which are yet to be explored are 880-890 MHz for uplink
communication and 925-935 MHz for downlink communication.
connected to the MSC. A mobile of one MSC can connect to any mobile of another MSC via
MSC-to-MSC switching.
communication. With TDMA, as we have seen earlier in the case of cellular communication,
multiple users can simultaneously communicate with a single transceiver. The same is true
for DECT. It uses the 32 kbps ADPCM technique for voice digitization. In addition, DECT can
support telepoint, wireless PBX and RLL (Radio Local Loop) services.
In Japan, PHS (Personal Handyphone System) is the main standard for digital cordless.
PHS uses TDMA. Each channel has a width of 300 kHz. 77 channels are permitted in the band
of 1895-1918.1 MHz: 37 carriers within 1895-1906.1 MHz are allocated for home and office
cordless, and 40 carriers within 1906.1-1918.1 MHz are allocated to public cordless.
Digital cordless in the USA was developed by Bellcore (Bell Communications Research)
with the title WACS (Wireless Access Communication System). Actually, PACS (Personal
Access Communication Service), a combination of WACS and PHS, is now in use. In North
America, the ISM (Industrial, Scientific and Medical) bands, namely 902-928 MHz,
2400-2483.5 MHz and 5725-5850 MHz, are in use for digital cordless.
1. INTRODUCTION
One of the hottest topics of IT is the Local Area Network (LAN). The LAN plays an indispensable
role in serving the information community. A LAN basically provides shared data access for an
organisation that has several systems and nodes distributed geographically, logically and
physically. Three main physical attributes, namely limited geographic scope (in the range of
0.1-10 km [1]), low delay or very high data rate (over 1 MBPS [2]), and user ownership, make
LANs substantially different from conventional computer networks. Moreover, while a Wide
Area Network (WAN) and a Metropolitan Area Network (MAN) allow users in the network to
access shared databases, LANs go a step further and allow users to have shared access to many
common hardware and software resources [3] such as storage, I/O peripherals and
communication devices. For example, a costly high-resolution laser printer is usually shared by
users in a LAN; all users in a LAN use an inexpensive single transmission medium in a multidrop
environment; and, whenever required, they use a single bridge or gateway to communicate
with other homogeneous or heterogeneous networks respectively. A LAN is hence a resource-sharing
data communication network that is usually used to connect computers, printers, terminal
controllers (servers), terminals (keyboard/VDU), plotters, mass storage units (hard disks)
and any other piece of equipment (e.g. a word-processing machine) that has some form of
computer connectivity. The LAN solves the MY problem [4] of the 80/20 rule [5] of communication
in a cost-effective way in an office, factory, university and similar environments.
However, a PABX (Private Automatic Branch Exchange) differs from a LAN in that, unlike a
LAN, a PABX uses a separate pair of wires (transmission medium) to connect each device
(or extension), a low bandwidth (limited to that of a telephone line) and rugged hardware
switching for interconnection. Communication in LANs is peer to peer and not via
intermediaries as with WANs and MANs. A MAN's coverage is from a few miles to 100 miles
and a WAN's coverage is from hundreds of miles to thousands of miles [6]. All three networks
follow a layered architectural standard protocol, like the 7-layer ISO-OSI protocol or the SNA
protocol, for interconnection strategies [6]. LANs continue to be the driving force towards
implementing the future's white hope of the digital wall socket [7], which will act like today's
electricity socket and telephone socket. The digital wall socket is to be used for handling both
low and high data rate devices like copying machines, word-processing machines, facsimile
machines, displays/VDUs, keyboards, microcomputers/PCs, large computers etc. This may
ultimately lead to a 100 percent paperless office-of-the-future and a 100 percent automated
factory-of-the-future with diskless managers, administrators and engineers.
One of the most successful LANs is Ethernet. Ethernet was the most popular LAN in
1987. As per Forrester Research Inc [5], in the U.S.A. Ethernet covers 33 percent of the LAN
market, with the IBM token ring lagging behind at 22 percent. Dataquest estimated that
Ethernet had covered 52 percent of installed LANs in the U.S.A. Is Ethernet hottest now?
Whatever the answer to this question, it is a fact that Ethernet is still very popular today and
will continue to be so at least for some time to come.
This paper will make a thorough review of Ethernet.
2. Ethernet
Historically, Ethernet was developed by the Xerox Corporation on an experimental basis [8]
around 1972. Based on this experimental experience, the second-generation system was soon
developed by the Xerox Corporation in the late 1970s [9]. Around 1980-81, under a joint effort
of DEC (Digital Equipment Corporation), Intel and Xerox, an updated version of the Ethernet
specifications (Table 1) [8] was designed. This historically led to the development of the IEEE
(Institute of Electrical and Electronics Engineers Inc) 802 standards (Table 2) [4, 6] for LANs
in reference to the 7-layer OSI-ISO (Open System Interconnection of the International
Standards Organisation) model. The MAC (Medium Access Control) sublayer, covered by the
IEEE 802.3 standard, actually specifies the accessing mechanism; the physical level covers
the electromechanical connectivity to the network medium; and the LLC (Logical Link Control)
and MAC of a LAN jointly form the data link layer of the OSI-ISO protocol standard. Nowadays
Ethernet is available from many vendors [10]. Such Ethernets are as per the IEEE 802.3
standard; these are actually Ethernet-like [11] networks. However, not all LANs covering the
IEEE 802.3 standard are Ethernet, but all Ethernets cover the IEEE 802.3 standard.
Table 1: Specification of Ethernet

Parameters                Experimental Ethernet          Industrial/Commercial Ethernet
Data rate                 2.94 MBPS                      10 MBPS
Maximum length            1 km                           2.5 km
Segment length            1 km                           500 m
Encoding                  Manchester                     Manchester
Cable impedance (ohm)     75                             50
Signal levels             0 to +3 volts                  0 to 2 volts
Preamble                  1 byte of pattern 10101010     1 byte of pattern 10101010
CRC length                2 bytes                        4 bytes
Address length            1 byte                         6 bytes
Table 2: IEEE 802 standards of LAN

Standard   Access Technique and Topology      Transmission medium with allowed data rate   Basic application area
802.3      CSMA/CD with BUS topology                                                       Office Automation (OA)
802.4      Token passing with BUS topology                                                 Manufacturing Automation (MA)
802.5      Token passing with RING topology
802.6      Yet to be finalized                Yet to be finalized                          MAN
802.7      -do-                               -do-                                         Broadband LAN
802.8      -do-                               -do-                                         LAN with optical fiber
802.9      -do-                               -do-                                         LAN in ISDN (Integrated Services Digital Network)
straight Manchester coding ensures simple synchronization and no dc component. At any
instant the cable can be in any one of three states: transmitting a 1 bit (high followed by low),
transmitting a 0 bit (low followed by high), or idle (0 volts). The high and low levels are
represented by +0.85 volts and -0.85 volts respectively. However, Ethernet using differential
Manchester coding also exists [6]. Such 10 MBPS baseband Ethernet actually uses a signaling
rate of 20 MHz due to the adoption of Manchester encoding, which uses two half-bit intervals
to transfer 1 bit of information together with a clock signal.
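The straight Manchester coding described above can be sketched as follows, using the ±0.85 V levels from the text (a toy encoder for illustration, not the actual PHY circuitry):

```python
def manchester(bits):
    # Straight Manchester coding: each bit occupies two half-bit intervals,
    # which is why a 10 MBPS stream needs a 20 MHz signalling rate.
    HI, LO = 0.85, -0.85          # signal levels in volts
    out = []
    for b in bits:
        # 1 bit: high followed by low; 0 bit: low followed by high
        out += [HI, LO] if b == 1 else [LO, HI]
    return out

print(manchester([1, 0]))  # -> [0.85, -0.85, -0.85, 0.85]
```

Note that every bit period contains a mid-bit transition, which is what gives the receiver its clock and keeps the average line voltage at zero.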
By this time, you may probably be wondering why Ethernet is called Ethernet. It was
once thought that the ether, a hypothetical passive universal element, bound together the
entire universe and all its parts. And as you see, this LAN's transmission medium is a passive
ether binding the smart devices into a net. This is why the name Ethernet was adopted.
Ethernet is a broadcast LAN. All nodes can listen to each and every message transmitted
on the net.
The transceiver is another important component of any LAN. It is clamped securely onto
the Ethernet cable so that its tap makes contact with the inner core. Transceivers are available
in many different shapes, sizes and price ranges, but they all provide user devices with a means
to communicate with the cable. They also contain the electronic circuitry that handles carrier
detection and collision detection. A transceiver is so named because it allows simultaneous
transmission and reception. A transceiver is a fairly dumb system: it transmits data, receives
data, detects collisions and, if one occurs, notifies the controller.
The transceiver cable (maximum length 50 meters) usually contains five individually
shielded twisted pairs. Two of these pairs are used for data in and data out. Two more are
similarly used for control signals in and out. The fifth pair is not always used; when it is, it
allows the node to power the transceiver. Some transceivers allow up to eight nearby
computers/workstations/user terminals to be attached to them, to reduce the number of
transceivers needed. For example, DEC has developed a special box (DELNI: Digital Ethernet
Local Network Interconnect) that allows up to eight systems to connect to the box, while a single
Ethernet transceiver taps the eight systems onto the main cable. DELNI also has the ability to
work stand-alone and emulate an eight-node Ethernet cable. When the systems are no more
than 50 meters away from the DELNI, or there are no more than eight co-located systems that
require being on an Ethernet, a DELNI is more cost-effective than eight transceivers and cables.
The disadvantage is that the DELNI is self-powered, so failure of the DELNI will cause all
eight nodes to lose access to the network.
The interfacing unit detects data and accepts the data if it is meant for its address. It
also creates and checks the CRC for error detection and recovery.
The controller unit (a firmware or software device) transmits data frames to, and
receives data frames from, the transceiver via the interfacing unit. It also buffers the data and
retransmits it when a collision occurs, and determines the retransmission interval (which varies
with load etc.) and other aspects of network management.
For a complete network, one has to procure the components of the LAN, network software
and hardware, and communication software (e.g. NetWare 2.2, Super LAN, MS-NET).
Now that the basic components of Ethernet have been discussed, the next thing is how
Ethernet operates. The heart of its operation is the accessing technique known as CSMA/CD
(Carrier Sense Multiple Access/Collision Detection). There are many different types of CSMA
technique [12]; the technique adopted in Ethernet is 1-persistent CSMA/CD. The problem of
the non-persistent strategy is that the line may remain idle after the current transmission
ends, because the nodes wait a random time before sensing the line again. The alternative to
this technique is 1-persistent, where
the nodes continuously sense the line and transmit data as soon as it is free. CSMA/CD is
a simple and straightforward way of giving every user a chance to transmit whenever it has
something to send. The concept behind CSMA/CD may appear to be derived from the technique
people use when talking in a large gathering or meeting. If no one is talking, one person
may start talking. If two or more people start talking at the same time, a collision occurs, and
both stop and wait for some random time before starting to talk again. In Ethernet, if any node
wishes to send data to another node on the network, the source listens to see if the line is free
(quiet/idle). This is called carrier sensing. If the cable is idle, the source node starts transmission.
Sometimes it may so happen that two or more stations accidentally start transmission at
the same time. Collision is also possible in other cases: for example, if two nodes
separated by a propagation time t both start transmission within an interval t of each other,
there will be a collision. When a collision occurs, the transmitted data will be corrupted. A
mechanism to detect collisions is provided by the technique of listen-while-transmitting. In this
scheme, while the transceiver's transmitting unit at the source node is sending the data, its
receiving unit listens to the data actually on the cable. If the transceiver detects that the data
received by the receiving circuitry does not match that sent by the transmitting circuitry, it
senses the occurrence of a collision and accordingly sends a message to the controller of the
node. If there is a match, the transmission process is allowed to go on. On receiving a
collision-detection signal, the controller stops sending data and sends a burst of noise on the
line (jamming) to ensure that the other nodes sending data also hear the collision. All
collision-detecting stations back off on detection of the collision. The controller then waits for a
random time before attempting retransmission; for this a random number generator is used.
The mean wait is initially equivalent to an end-to-end round-trip delay on the cable (about
2 μs for a 500-meter co-axial cable). In case of a second collision, the controller doubles the
previously generated random number, thereby doubling the mean delay of the first collision,
and so on (the doubling operation) on repeated collisions. Usually the randomly generated
number is counted down to zero as the measure of the delay. The doubling operation is allowed
a prescribed number of times, usually 16; after that the controller sends an error message to
the host (system manager) notifying the occurrence of multiple collisions. Due to this collision
and retransmission scheme, 100 percent channel utilization is not achieved. Ethernet, however,
comes close to it, thanks to the CSMA/CD technique, in a way that polling and other techniques
cannot achieve. The minimum Ethernet packet size (64 bytes), the maximum Ethernet cable
segment length and the propagation time, taken together, guarantee that by the time the last
bit of information is transmitted, the source node can accurately detect a collision if any other
node attempts transmission at the same time.
However, if the utilization of the cable is low (i.e., the load on the network is low), collision is rare, and the mean delay time rarely exceeds its minimum value of one end-to-end round-trip delay. When utilization is high (i.e., the traffic load becomes heavy), collision becomes more common. Due to this, the controller dynamically changes the retransmission interval; this is why the doubling operation is used.
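The back-off behaviour described above (a random wait whose mean doubles on each repeated collision, with an error reported to the host after 16 attempts) can be sketched as follows. This is a minimal model, not an implementation of the standard: the collision probability is an arbitrary placeholder, and the cap on doublings is an assumption.

```python
import random

SLOT = 1  # base wait unit: one end-to-end round-trip delay on the cable

def backoff_slots(collisions: int, max_doublings: int = 10) -> int:
    """Binary exponential backoff: after the n-th collision the station
    waits a random number of slots in [0, 2**min(n, max_doublings) - 1],
    so the mean wait doubles with each successive collision."""
    return random.randrange(2 ** min(collisions, max_doublings))

def transmit(p_collision: float = 0.3, max_attempts: int = 16) -> bool:
    """Retry loop of the controller: give up and report a multiple-collision
    error to the host after max_attempts collisions."""
    for collisions in range(1, max_attempts + 1):
        if random.random() >= p_collision:        # transmission got through
            return True
        _wait = backoff_slots(collisions) * SLOT  # back off before retrying
    return False  # multiple-collision error reported to the host
```

Counting the generated random number down to zero, as the text describes, is one simple hardware way of realizing the wait.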
When data is being transmitted, all nodes hear the data. On examining the first 6 bytes (the address field, following the preamble) of the data packet, a node determines whether the data is destined for itself or not. If the message is for itself, it passes the message to the user's device through the controller; otherwise it usually ignores the message. But why CSMA for Ethernet? Because of the distributed nature of random accessing techniques, they are well suited to LANs, where simplicity of operation and flexibility are most important. Besides, since a large bandwidth is available in a LAN, a LAN under such an accessing technique can be operated at a relatively low loading, avoiding unstable [13] conditions. However, the performance of CSMA/CD is inversely proportional to the end-to-end propagation delay [14]. Thus Ethernet can most appropriately use CSMA/CD for OA.

DHARM
N-BHUNIA/BHU1-2.PM5
Some Ethernet controller/interface chips and their vendors:

Vendor                                          Controller/Interface Chip
Intel                                           82586 (controller), 82501 (interface)
National Semiconductor                          DP8390, DP8790, DP8341, DP8342
Seeq Tech                                       8003, 8023
Advanced Micro Devices (AMD)/Mostek/Motorola    LANCE (7990), 7996
message, as there is no scheme of sequence number checking, missing message re-transmission requests and other such facilities.
3. MODIFICATION OF ETHERNET
3.1. Improving Ethernet for MA
The problem of load balancing [17] in the CSMA/CD technique can be solved to a large extent if each station, on getting transmission access, is restricted to transmitting only a fixed pre-assigned number (say, P) of packets (non-exhaustive mode) [16]. After transmitting P packets, the station has to back off for a time which must not be less than the time required for a bit to make an end-to-end round trip of the bus. After this time has passed, the station can check the carrier again, and the process repeats.
Priority in CSMA/CD can be achieved by assigning each station a priority number. Any station, when transmitting data, may transmit its priority byte after, say, each q packets (q > P). Any other station desirous of sending an urgent message, if it sees that a transmission is going on, may check the transmitted priority byte: if that priority is less than its own, it will distort the priority byte. The ongoing transmitting station, not getting back the proper priority byte, will immediately stop transmission to allow the higher-priority station to access the medium. However, if the checked priority is greater than its own, it has to wait for a free carrier. A modified and deterministic Ethernet already exists in the French defence department [5]. This is of course a proprietary item.
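A minimal sketch of the priority-byte contention proposed above; the decision rule is inferred from the text, and the return values are just labels:

```python
def contend(ongoing_priority: int, urgent_priority: int) -> str:
    """An urgent station reads the priority byte of the ongoing transmission.
    It pre-empts only a lower-priority sender, by distorting the priority
    byte so the sender's listen-while-transmitting check fails and it stops."""
    if urgent_priority > ongoing_priority:
        return "preempt"   # distort the byte; the ongoing station stops
    return "wait"          # equal or higher priority ongoing: wait for a free carrier
```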
4. EXTENDED ETHERNET
A number of Ethernet segments may be connected together (Fig. 1) via repeaters or bridges [5, 12, 20]. A repeater consists of some sort of microprocessor (like the Intel 8088 or Motorola MC 68000), memory, etc. Repeaters are standalone units. They repeat everything received from one segment to the other segment and vice versa, and connect two Ethernet segments via transceivers. A bridge, on the other hand, stores and forwards only the intended data from a source segment to a destination segment. A bridge is made of some sort of processor, storage, buffers and a set of software.
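The filtering that distinguishes a bridge from a repeater can be sketched as a learning forwarding table. This is an illustrative model only; the two segment names are hypothetical:

```python
class Bridge:
    """Store-and-forward bridge between two Ethernet segments. It learns
    which segment each source address lives on and forwards a frame only
    when the destination is on the other segment (or still unknown)."""

    def __init__(self, segments=("segment-1", "segment-2")):
        self.segments = segments
        self.table = {}  # station address -> segment it was last seen on

    def frame_in(self, arriving_segment, src, dst):
        self.table[src] = arriving_segment      # learn the sender's segment
        known = self.table.get(dst)
        if known == arriving_segment:
            return []                           # same segment: filter, do not forward
        if known is None:                       # unknown destination: flood the others
            return [s for s in self.segments if s != arriving_segment]
        return [known]                          # forward selectively
```

A repeater, by contrast, would return every other segment unconditionally; the bridge's table is what keeps local traffic local.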
[Fig. 1: Extended Ethernet. Several segments (the original segment, segment-2 and segment-3) are joined via a repeater and a bridge, with a gateway to a WAN or MAN. Terminals, PCs, TTYs, printers and plotters attach to each segment through transceivers (TR) and controller/interfaces (CI); the transceiver cable is at most 50 m, and the segment medium may be twisted wire pair, co-axial cable, fibre or a radio link.]
5. CONCLUSION
A number of important considerations of Ethernet have been highlighted. Ethernet is seen to be very effective for OA. If the next generation of Ethernets is to be developed, it must be done in a direction that extends the application to MA, utilizing the suggestions proposed in this paper.
References
1. C. David Tsao, A local area network architecture overview, IEEE Communication Magazine, Vol. 22, No. 8, pp. 7, Aug. 1984.
2. D.D. Clark, K.T. Pogran and D.P. Reed, An introduction to local area networks, Proc. IEEE, Vol. 66, No. 11, pp. 1497-1517, Nov. 1978.
3. John E. McNamara, Local Area Networks, Prentice Hall of India, Ch. 1, 1991.
4. Stephen P.M. Bridge, Low Cost Local Area Networks, Galgotia Pub. Pvt. Ltd., Ch. 1, 1990.
5. Bill Hancock, Designing and Implementing Ethernet Networks, QED Information Science, Inc., 1989.
6. Paul J. Fortier, Handbook of LAN Technology, McGraw Hill Inc., NY, 1989.
7. James Martin, Computer Networks and Distributed Processing, Prentice Hall, Inc., Ch. 26, 1981.
8. John F. Shoch, Yogen K. Dalal, David D. Redell and Ronald C. Crane, Ethernet, Advances in Local Area Networks, IEEE Press, NY, pp. 29-48, 1987.
9. Timothy A. Gonsalves, Measured Performance of the Ethernet, Advances in Local Area Networks, IEEE Press, pp. 383-387, 1987.
10. William L. Schweber, Data Communication, McGraw Hill Intl., Ch. 11, 1988.
11. Neil Willis, Computer Architecture and Communications, Paradigm Pub. Ltd., U.K., Ch. 14, 1988.
In an ad hoc network, stations communicate with each other independently and there is nothing like an access point for communication through a backbone network. An ad hoc network can be either temporary or semi-permanent. Semi-permanent networks are used for a few months and are useful for companies which move frequently; field construction companies and military camps in wartime may use semi-permanent ad hoc networks. Temporary networks are used for a day or for a few hours of business. They may be used for sharing files and databases in a company meeting or convention.
There are two important standards for wireless LANs. IEEE is developing the IEEE 802.11 standard, which is proposed to be used in the USA. HIPERLAN (High Performance Radio LAN) is the standard developed by the European Telecommunications Standards Institute and is for use in Europe; it has already been ratified by CEPT. The IEEE 802.11 draft standard defines three different physical layers: (a) the 2.4 GHz ISM band with frequency hopping spread spectrum radio, (b) the 2.4 GHz ISM band with direct sequence spread spectrum radio, and (c) infrared light.
The 2.4 GHz ISM band has been allowed both in the USA and Europe for IEEE 802.11 LANs, whereas Japan has allocated the band 2.471-2.497 GHz for IEEE 802.11 LANs. Japan has allowed such a narrow band in the 2.4 GHz ISM band in order to provide radio LANs at medium data rates of 256 kbps to 2 Mbps where the spread spectrum technique is used. Japan has allocated another band near 18 GHz for high-rate (10 Mbps or more) radio LANs where QAM (Quadrature Amplitude Modulation) and QPSK (Quadrature Phase Shift Keying) are used. In the frequency hopping system, 79 and 23 different frequencies are used for data transmission under the IEEE 802.11 scheme in the USA/Europe and Japan respectively. In direct sequence, the processing gain is proposed to be 10.4 dB in the IEEE 802.11 draft standard. A frequency hopping system can support a large number of channels compared to a direct sequence scheme, and frequency hopping also has superior performance when interference is high. However, direct sequence is simpler in design and implementation. Service-wise, IEEE 802.11 proposes to serve asynchronous and time-sensitive (synchronous/isochronous) services. In a radio LAN, when an access point is shared by all stations, all stations use the same hopping/sequence pattern. As such, there is always a fair chance of interference and collision.
The hidden node problem of radio networks, on the other hand, has a tendency to increase collisions. When two transmitters send data to a single receiver, the receiver can hear both transmissions, but the transmitters cannot hear each other. This is known as the hidden node problem of radio systems that depend on physical sensing of the carrier. Thus a good medium access control (MAC) strategy is essential in a radio system. In the IEEE 802.11 standard, the MAC is CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) rather than the CSMA/CD (Collision Detection) used in Ethernet, since radio technique does not allow the collision detection mechanism. In the CSMA/CA technique, when a station senses a free carrier it backs off transmission for a random amount of time. Thus, if more than one station detects a free carrier at the same time, collision may be avoided due to the differing random back-off periods. For tackling the hidden node problem, two control frames, RTS (Request to Send) and CTS (Clear to Send), are used in the IEEE 802.11 scheme. These are like the RS-232-C transfer protocol.
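The RTS/CTS idea can be sketched as a simple reservation rule. This is a simplified model of the logic only, not the actual 802.11 frame exchange with its timers and duration fields: a station transmits only after the receiver answers its RTS with a CTS, and every station that hears the CTS, including hidden ones, defers.

```python
# Stations A and C are hidden from each other; both can reach receiver B.
# Physical carrier sensing fails for hidden nodes, so access is granted
# by the receiver instead: first RTS wins a CTS, later RTS senders defer.
def medium_access(rts_senders):
    """Grant CTS to the first RTS; all later requesters must defer."""
    reservation_holder = None
    grants = []
    for station in rts_senders:
        if reservation_holder is None:
            reservation_holder = station
            grants.append((station, "CTS"))    # B answers; others hear it and defer
        else:
            grants.append((station, "defer"))  # no CTS: back off and retry later
    return grants
```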
HIPERLAN differs from IEEE 802.11 on a number of accounts. IEEE 802.11 does not support multi-hop communication: no access point or station can act as a data router or relay point. HIPERLAN, by contrast, supports multi-hop communication without the need for a cellular architecture. It is targeted at higher data rates than IEEE 802.11 and may support 23.5294 Mbps. That is why a large and dedicated band of the order of 150 MHz (5.150-5.300 GHz) near 5 GHz and another band of 17.1-17.2 GHz near 17 GHz are allocated to HIPERLAN. HIPERLAN also aims to be indistinguishable from the wired LAN of Ethernet and to support some sort of isochronous services. For modulation, Gaussian minimum shift keying is used. A (31,26) BCH code is used for error control. It aims to achieve a BER (Bit Error Rate) of 10^-3 or less for fair service. The MAC in
HIPERLAN is different from both the CSMA/CD of Ethernet and the CSMA/CA of IEEE 802.11. In the HIPERLAN accessing scheme, if
a station senses a free medium for 1700 bit times, it can transmit immediately. If not, channel accessing is done through three phases: prioritization, elimination and yield. The HIPERLAN MAC can reduce the chance of collision to less than 3 per cent. IRLAN works on the IEEE 802.3 and IEEE 802.5 protocols. IRLAN is based on line-of-sight technology, and hence it can support high data rates: up to 10 Mbps for the Ethernet configuration and up to 16 Mbps for the token ring configuration. IRLAN is costly, and hence only some vendors are hopeful about pursuing the technology. An association of vendors has made their own standards for IRLAN.
The IEEE standards for different LANs are 802.3 for the CSMA/CD bus LAN, 802.4 for the token passing bus LAN, 802.5 for the token passing ring LAN and finally 802.11 for the WLAN (Wireless LAN). Of course, in general the IEEE standards 802.3, 802.4 and 802.5 are commonly known as 802.x; these standards are for wired LANs.
As of today, the two basic transmission technologies in use to set up a WLAN (Wireless Local Area Network) are: infrared light at THz frequencies, and radio waves at GHz frequencies (2.4 GHz in the licence-free ISM (Industrial, Scientific and Medical) band). Infrared technology uses either diffuse light, reflected at obstacles like furniture and walls, or directed light if a line-of-sight path exists between the sender and the receiver. A simple transmitter may be a light emitting diode or a laser diode, and the receiver can be a photodiode. But most wireless systems use radio waves. An IEEE 802.11 LAN can use both infrared and radio waves, but HIPERLAN/1 uses only radio waves. A comparison of infrared and radio wave transmission technology is given in Table 7.
[Table 7: Comparison of infrared and radio wave technologies with respect to transmitter, receiver, data rate and shielding; the infrared transmitter, for example, is very simple.]
Like the other 802.x standards, the standard 802.11 covers only the physical layer and the MAC sublayer. IEEE 802.11 supports three different physical layers: one using infrared, and another two basically using the 2.4 GHz ISM band, available licence-free worldwide. The ISM bands are: 902 to 928 MHz, 2.4000 to 2.4835 GHz and 5.725 to 5.825 GHz. Radio LANs operate in the high UHF and low microwave range; infrared LANs transmit just below visible light. At the physical level, the three different wireless specifications are: infrared LANs, Frequency Hopping Spread Spectrum (FHSS) LANs and Direct Sequence Spread Spectrum (DSSS) LANs. FHSS and DSSS LANs belong to the radio LANs. FHSS LANs are specified to support a data rate of 1 Mbps, with a faster specification of 2 Mbps; DSSS LANs are likewise specified for 1 Mbps and 2 Mbps. FHSS is a spread spectrum technique that allows for the coexistence of multiple networks in the same area by allotting different networks different hopping sequences. Under the IEEE 802.11 standard, 79 hopping channels for North America and Europe, and 23 hopping channels for Japan, are specified, each with a bandwidth of 1 MHz in the 2.4 GHz ISM band. A particular channel is identified by a
pseudo-random hopping pattern. The maximum transmitter power is 1 watt EIRP (Equivalent Isotropic Radiated Power) in the US and 100 mW EIRP in Europe. In DSSS, the separation is done by codes rather than by frequency; apart from this, everything else, like bit rate and transmission power, remains the same as in FHSS. The frame formats of the physical layer of 802.11 are shown in Fig. 5; the figures in brackets in the fields refer to the size of the fields in bits. In the FHSS frame, the synchronization field is a bit pattern of 010101...; the start frame delimiter (SFD) is 0000110010111101. PLW refers to the PDU Length Word, i.e. the length of the payload including the 32-bit error control CRC at the end of the payload; it ranges from 0 to 4,095. PSF is for signalling; out of its 4 bits, only one bit is specified, to indicate either 1 or 2 Mbps. HEC is a 16-bit header error check field, for which the ITU-T CRC-16 standard is used. In the DSSS frame, the 128-bit synchronization field is made of scrambled 1 bits only. The 16-bit start frame delimiter is 1111001110100000. Signal refers to the bit rate. The service field is reserved for use. Length is used to indicate the payload size including the CRC field. HEC is used to check errors on the header with the ITU-T CRC-16 standard.
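A CRC-16 with the ITU-T (CCITT) polynomial, as used for the HEC field, can be sketched bit by bit. The polynomial is x^16 + x^12 + x^5 + 1 (0x1021); the initial-value convention shown is one common choice for illustration, not a detail given in the text:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8               # bring the next byte into the register
        for _ in range(8):
            if crc & 0x8000:           # top bit set: shift and divide by poly
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The receiver recomputes the CRC over the header and compares it with the transmitted HEC; a mismatch marks the header as corrupted.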
The MAC data frame of IEEE 802.11 is shown in Fig. 6; the figures in brackets in each field refer to the size of the field in bytes. Frame control is used for several purposes, like the protocol version and the type of the frame. Duration ID indicates the virtual reservation mechanism. Address 1 to Address 4, each of 6 bytes (48 bits), are used as they are in the 802.x LANs. Sequence control is used for acknowledgement and for error and flow control. The CRC is used as it is in the 802.x LANs.
Fig. 5: IEEE 802.11 physical layer frame formats (field sizes in bits)
FHSS: Synchronization (80) | SFD (16) | PLW (12) | PSF (4) | HEC (16) | Payload (variable)
DSSS: Synchronization (128) | SFD (16) | Signal (8) | Service (8) | Length (16) | HEC (16) | Payload (variable)

Fig. 6: IEEE 802.11 MAC data frame (field sizes in bytes)
Frame Control (2) | Duration ID (2) | Address 1 (6) | Address 2 (6) | Address 3 (6) | Sequence Control (2) | Address 4 (6) | Data (0-2312) | CRC (4)
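As a sketch, the fixed MAC header fields described in the text (2 + 2 + 6 + 6 + 6 + 2 + 6 = 30 bytes before the variable data and the CRC) could be packed as below. The byte order and the sample field values are illustrative assumptions, not taken from the standard:

```python
import struct

def pack_mac_header(frame_control: int, duration_id: int,
                    addr1: bytes, addr2: bytes, addr3: bytes,
                    seq_ctrl: int, addr4: bytes) -> bytes:
    """Pack the fixed header fields of the MAC data frame:
    2 + 2 + 6 + 6 + 6 + 2 + 6 = 30 bytes (little-endian assumed)."""
    return struct.pack("<HH6s6s6sH6s", frame_control, duration_id,
                       addr1, addr2, addr3, seq_ctrl, addr4)

# Hypothetical values: one frame-control word, zero duration, four
# placeholder 48-bit addresses and a zero sequence-control field.
header = pack_mac_header(0x0801, 0, b"\xaa" * 6, b"\xbb" * 6,
                         b"\xcc" * 6, 0, b"\xdd" * 6)
```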
The world is rapidly shifting towards wireless and faster networks. In such a rapidly changing scenario, let us see how one of the oldest local area networks, namely Ethernet, is keeping pace with the changes. Ethernet dominates as a LAN (Local Area Network), as it is a time-tested, highly reliable, scalable, elegant and low-cost network. IEEE 802.3 Ethernet is the established corporate LAN technology, and most of its implementations are with IEEE 802.3u or 100Base-T, which defines a 100 Mbps data rate using four pairs of twisted-pair wiring or Ethernet cable. The tree of the Ethernet family is shown in Fig. 7.
The Ethernet was originally a wired network. It follows the IEEE standard 802.3 for logical link control, by which several nodes can share a single physical medium. The physical layer implementation is made with wires.
Fig. 7: The Ethernet tree
Ethernet
  Wired (802.3):
    Conventional Ethernet: 10Base5 (thick co-axial), 10Base2 (thin co-axial), 10Base-T (UTP)
    Fast Ethernet
    Gigabit Ethernet: 1000Base-LX, 1000Base-SX, 1000Base-CX, 1000Base-T (CAT 5+)
    10 Gigabit Ethernet (under IEEE 802.3ae)
  Wireless (802.11):
    802.11b (11 Mbps)
    802.11a (125 kbps to 54 Mbps)
    802.11g (54 Mbps)
As noted earlier, the IEEE standards 802.3, 802.4 and 802.5 (collectively 802.x) cover the wired LANs, while 802.11 covers the WLAN (Wireless LAN). IEEE 802.11 mainly provides connectivity to the corporate LAN; it is very costly for a home LAN.
Wireless computing, wireless communication and wireless networks shall be the rule of the future. In such a scenario, WLANs will play a major role. In the last few decades, two important wireless technologies that emerged as viable and promising are LEO (Low Earth Orbit) satellites and 3G (Third Generation) cell phones. But both technologies fail to meet the expected aspirations. Here, the WLAN has come out as an alternative. Presently, under IEEE 802.11, two major WLAN standards are operating: 802.11a and 802.11b (Table 8). The first 802.11 standard is 802.11b, which was approved by the IEEE in 1999; it is the first standard that broke away from the wired brethren of the 802.3 wired Ethernets. The 802.11b standard transports data at 11 Mbps using CCK (Complementary Code Keying) in the 2.4 GHz band. The 802.11b has a very successful track record: it is learnt that the sale of IEEE 802.11b wireless LANs has increased dramatically from 5,000 to 70,000 units per month since early 2000. It is also reported that "the growing popularity and ubiquity of WLANs will likely cause wireless carriers to lose nearly a third of 3G revenue as more corporate users begin using WLANs to connect to the Internet and office networks". Many analysts feel that the ease of installing and using WLANs is making them an alternative to mobile 3G. In contrast to the reported $650 billion spent worldwide by carriers to get ready for 3G, setting up a WLAN hotspot requires only an inexpensive base station, a broadband connection and one of the many interface cards using 802.11b. But the speed of 802.11b is one-tenth of that of wired Ethernets. Therefore, to have high-speed wireless access, the IEEE approved the 802.11a standard concurrently. The IEEE 802.11a standard provides scalable data rates from 125 kbps to 54 Mbps, in increments of 125 kbps, with OFDM (Orthogonal Frequency Division Multiplexing) in the 5 GHz band. Actually, 54 Mbps is known as the turbo rate. The IEEE 802.11b standard defines only the two lower levels of the OSI (Open Systems Interconnection) reference model, the physical layer and the Data
Link Layer's Medium Access Control (MAC) sublayer. IEEE 802.11b uses two pieces of equipment: a wireless station, which is usually a PC or a laptop with a wireless network interface card (NIC), and an Access Point (AP), which acts as a bridge between the wireless stations and the Distribution System (DS) or wired networks. There are two operation modes in IEEE 802.11b, Infrastructure Mode and Ad Hoc Mode, as discussed earlier for the IEEE 802.11 standard. The physical layer covers the physical interface between devices and is concerned with transmitting raw bits over the communication channel. IEEE 802.11b supports different data rates (Table 9).
The problems of 802.11a are many: it does not support different devices with different speeds, designs and complexities. The standards 802.11a and 802.11b are not interoperable. 802.11a is presently used only in North America, while 802.11b is used in the whole of Europe and Asia.
The IEEE 802.11e group is tasked with a new protocol for (non-guaranteed) quality of service in ad hoc connectivity. The IEEE task group G for 802.11 is now deliberating on the next-generation standard for 802.11 that would transmit data at the speed of wired Ethernet. The new standard will be 802.11g. The mission of the 802.11g standard is to provide wireless access at the turbo speed of 54 Mbps while maintaining interoperability.
[Table: the IEEE 802 family of standards and their definitions: 802.0, 802.1, 802.2, 802.3, 802.3ae, 802.3af, 802.4, 802.5, 802.5t, 802.5v, 802.5z, 802.6, 802.7, 802.8, 802.9, 802.10, 802.11, 802.11a, 802.11b, 802.11g, 802.12, 802.13, 802.14, 802.15 and 802.16.]
Table 9: IEEE 802.11b data rates
Data rate (Mbps) | Code length | Modulation | Symbol rate (MSps) | Bits/symbol
1   | 11 (Barker sequence) | BPSK | 1     | 1
2   | 11 (Barker sequence) | QPSK | 1     | 2
5.5 | 8 (CCK)              | QPSK | 1.375 | 4
11  | 8 (CCK)              | QPSK | 1.375 | 8
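The data rates follow directly from symbol rate times bits per symbol, which can be checked in a couple of lines:

```python
# Data rate (Mbps) = symbol rate (MSps) x bits per symbol.
modes = [
    ("Barker/BPSK", 1.000, 1),  # 1 Mbps
    ("Barker/QPSK", 1.000, 2),  # 2 Mbps
    ("CCK/QPSK",    1.375, 4),  # 5.5 Mbps
    ("CCK/QPSK",    1.375, 8),  # 11 Mbps
]
rates_mbps = [msps * bits for _, msps, bits in modes]
```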
[Table: Gigabit Ethernet media options (cf. 1000Base-LX/SX/CX/T)]
Mode supported | Fibre diameter (micron) | Maximum distance in segment
Single mode    | -    | 10 km
Multimode      | 50   | 550 m
Single mode    | 50   | 3 km
Multimode      | 62.5 | 440 m
Multimode      | 50   | 550 m
Multimode      | 62.5 | 260 m
Copper         | -    | 25 m
UTP cable      | -    | 100 m
[Figure: IEEE 802.11 operation, with wireless stations (computers) communicating through an IEEE 802.11 access point (infrastructure mode) and as independent devices (ad hoc mode).]

[Figure: evolution paths of mobile wireless (1G cellular, 2G digital, 3G data, IS-856, seamless next generation) and of WLAN (conventional, 802.11b, 802.11a), converging towards integration.]
3.16 IEEE 802.15.4 Standard: Low Data Rate, Low Cost Wireless Home Networking Solution
Due to the application of networking almost everywhere, several attempts are being made to offer solutions that aim to be flexible, cost effective and reliable, and to consume less power, features particularly important for home or residential networking. In wired communication, DSL (digital subscriber loop) technology (discussed later) is one important driver; its cost effectiveness is achieved by utilizing the existing copper line in the local loop. But wireless communication and networking have an edge over wired technologies, for which a wireless local loop solution is needed. The wireless networking and communication technologies that have appeal in voice and data applications in residential or home services are, among others, cellular, cordless and IEEE 802.11b. The consideration of cost effectiveness and low power consumption has motivated the development of a new standard, IEEE 802.15.4, for home networking with a low data rate wireless solution.
The initiative to develop a standard for low-power and low-cost home wireless networking was taken by IEEE working group 15 in 2000. Besides home automation, the standard is poised to be applied in different industrial services like industrial control, automotive sensing (monitoring tire pressure, sensing soil moisture, pesticide and pH levels) and disaster management (sensing and determining the location of a disaster). In home applications, the services of the standard will be PC peripherals (keyboard, PDA, mouse), consumer electronics (TV, radio, VCR, CD etc.), automation (heating, air conditioning, ventilation, window and door locks), remote control, health monitoring, security and PC-enabled services. These applications need data rates ranging from a few kilobits per second (kbps) to 115.2 kbps. The acceptable delay or latency for these services ranges from 15 ms to 100 ms.
The major features of the proposed IEEE 802.15.4 standard are:
- Like all other IEEE standards, IEEE 802.15.4 refers to the lower layer specification. In reference to the OSI/ISO 7-layer protocol, IEEE 802.15.4 refers to the DLL (Data Link Layer). The DLL is split into two sublayers: the LLC (Logical Link Control) and the MAC (Medium Access Control) sublayer. The LLC is as per the other specifications such as 802.3; IEEE 802.15.4 defines a separate MAC sublayer.
- IEEE 802.15.4 recommends two versions of the physical layer: (1) 868/915 MHz and (2) 2400 MHz.
- IEEE 802.15.4 supports both star and peer-to-peer networks, including ad hoc networks.
- The generic frame format of IEEE 802.15.4 is made of a frame control field and a sequence number of 2 and 1 bytes respectively. The address field is variable, from 0 to 20 bytes. The payload is variable, but the full MAC frame is limited to 127 bytes. The frame check sequence is 2 bytes and uses a 16-bit CRC.
- In the physical layer frame, the total header length is 6 bytes: a preamble of 4 bytes used for synchronization, a start-of-packet delimiter of 1 byte used to indicate the end of the preamble, and a physical header of 1 byte used to specify the length of the physical service data unit. The payload, being the MAC frame, is limited to 127 bytes.
- The physical layers in IEEE 802.15.4 use DSSS (direct sequence spread spectrum) methods with different channel frequencies and modulation parameters. The DSSS method is chosen in order to use low-cost ICs for implementation, by which the cost of the system is kept low.
- IEEE 802.15.4 aims to provide excellent battery life and low transmit power; devices aim at as much as 99.9 percent sleeping time.
- Simplicity is another attraction of IEEE 802.15.4.
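Given the frame limits stated above (a 127-byte MAC frame carrying a 2-byte frame control field, a 1-byte sequence number, a 0-to-20-byte address field and a 2-byte frame check sequence), the payload budget can be computed as a quick sketch:

```python
MAX_MAC_FRAME = 127    # full MAC frame limit, in bytes
FRAME_CONTROL = 2
SEQUENCE_NUMBER = 1
FCS = 2                # 16-bit CRC frame check sequence

def max_mac_payload(address_bytes: int) -> int:
    """Payload left once the overhead fields are accounted for; the
    address field is variable from 0 to 20 bytes."""
    if not 0 <= address_bytes <= 20:
        raise ValueError("address field is 0 to 20 bytes")
    return MAX_MAC_FRAME - (FRAME_CONTROL + SEQUENCE_NUMBER + address_bytes + FCS)
```

So the payload ranges from 102 bytes (full 20-byte addressing) up to 122 bytes (no address field), which suits the standard's few-kbps sensor-style traffic.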
The IEEE 1394 working group defined a standard known as IEEE 1394. Truly speaking, the standard originated with the Apple Computer Company for desktop LANs. IEEE 1394 is a low-cost digital interface that can work over existing copper, fibre and co-axial cables too. The Broadband Home Company has used co-axial cable to extend the IEEE 1394 interface beyond the local audio and video cluster; the solution so provided looks like a virtual IEEE 1394 wire connection to other IEEE 1394 networks. It supports hot plugging, thereby allowing users to add and/or remove devices while the interface bus is active. It provides both hardware and software specifications for peer-to-peer connection at different operating speeds of 100, 200 or 400 Mbps, and enhancements in speed may go on to support 800, 1600 and 3200 Mbps. It supports a scalable architecture to meet the different speeds of different requirements, thereby providing a cost-effective solution. The standard integrates communication, entertainment and computing to provide a single digital interface for consumer multimedia. It supports both asynchronous and synchronous types of data transfer, as required in home networks. Asynchronous transfer is related to conventional data/computer file transfer; but for multimedia applications of voice and video, where delay is the most sensitive issue, transport at a guaranteed delay is done by the synchronous or isochronous technique, which is duly supported by IEEE 1394. It supports high-speed communication over a low-cost interface. IEEE 1394 has been recognized as a digital interface by many organizations for different purposes, including entertainment, consumer applications, digital TV, home multimedia, conventional file transfer and digital video conferencing.
A typical integration of several networks, including IEEE 1394, in a single cable is shown in Fig. 12.
[Fig. 12: several services integrated on a single cable: telephony (up to about 2.5 MHz), Ethernet, TV channels (roughly 55 MHz to 1000 MHz) and IEEE 1394 (around 1500 MHz), each occupying its own frequency band, with an infrared link shown alongside.]
3.18 Paging
Paging is a one-way message system, unlike the two-way interactive mode of communication of cellular. Pagers transfer messages over a wireless network, and thereby support mobility: a person having a pager can be contacted anywhere at any time. Pagers are quite useful for doctors, journalists etc. Paging is basically a back-up system to telephones; it enhances the productivity of telephones. Pagers work on a simple technique. The caller dials the paging centre through a usual telephone and leaves the message with the operator along with the callee's pager number. The operator then sends the message to the callee's pager, where it is flashed on the display with an activating signal. There are two basic types of paging transmission standards: POCSAG (Post Office Code Standard Advisory Group) and RDS (Radio Data System). In India, the frequency allocation for POCSAG is 134-168 MHz, whereas the idle band of AIR's (All India Radio) existing FM network is used for RDS. AIR is operating RDS paging; POCSAG is under the DoT (Department of Telecommunication). Different types of pagers are available in the market, like numeric, alphanumeric and the recently introduced English type. In advanced countries, two-way paging is being developed.
3.19 VSAT
Satellite communication started with the pioneering work of Arthur C. Clarke. He showed that using just three satellites, placed 120° apart from each other at a height of about 36,000 km from the earth's surface, worldwide communication is possible. Satellites placed in orbits about 36,000 km from the earth's surface are known as geostationary satellites, as they complete one rotation of their orbit in 24 hours; therefore, from any point on the earth, these satellites appear stationary.
Satellite communication has a big appeal to users due to two big advantages over other means of communication. First, it is said that going for satellite means going for wireless communication: wireless communication is more reliable, flexible and adaptable than wired communication, and indeed the communication we do by acoustics in day-to-day life is itself wireless. The other saying is that going for satellite means going for wide area coverage, and wide area coverage has a natural attraction. Over the years, hence, satellite communication has diversified its areas of application and technologies. One of the major technologies is VSAT.
VSAT (Very Small Aperture Terminal) is a cost-effective technology meant for networking computers and terminals, mainly for the purpose of data communication. A VSAT network may be a wide network and may be extended to any remote location easily.
The basic components of VSAT networks are:
1. a geostationary satellite,
2. a master earth station or hub,
3. micro earth stations (VSAT stations or nodes).
A VSAT node is made of VSAT ports, a VSAT controller and a VSAT antenna. The size of the antenna is 1.2 m to 1.8 m.
Basically two types of topology are used in VSAT communication: star and mesh. Star topology uses TDM-TDMA (Time Division Multiplexing-Time Division Multiple Access) or CDMA (Code Division Multiple Access), and mesh topology uses DAMA-SCPC (Demand Assigned Multiple Access-Single Channel Per Carrier). VSAT communication is broadcast communication. VSAT nodes cannot communicate directly with each other; they communicate with each other via the master earth station or hub. Naturally, VSAT communication is known as two-hop communication. This means a VSAT signal from a node has to travel at least 36000 x 2 x 2 = 144000 km to reach another node, for which the delay shall be around 480 msec at least. Quality voice and video communication does not allow more than 80 msec delay between transmitter and receiver. Delay is not an issue for data transportation. VSAT communication is, hence, most suitable for data communication.
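The two-hop arithmetic above can be checked directly with a short Python sketch (constants rounded as in the text):

```python
# Two-hop VSAT path: node -> satellite -> hub -> satellite -> node,
# i.e. four traversals of the ~36,000 km earth-to-GEO distance.
C_KM_PER_S = 300_000          # speed of light, km/s (rounded)
GEO_ALTITUDE_KM = 36_000

distance_km = GEO_ALTITUDE_KM * 2 * 2          # 144,000 km in total
delay_ms = distance_km / C_KM_PER_S * 1000     # one-way node-to-node delay

print(distance_km, round(delay_ms))            # 144000 480
```

This is why the text concludes that two-hop VSAT delay rules out interactive voice and video but is acceptable for data.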
The characteristics of VSAT are:
1. Cost effectiveness is a big advantage of VSAT communication. An STD call between Delhi and Mumbai can be around Rs. 40, whereas a VSAT call may be about Rs. 10.
2. Reliability and flexibility are always present in VSAT communication, as it is wireless communication. A leased telephone line can have at most 90% up time, whereas a VSAT line shall have around 99.5% up time. Being wireless, a VSAT network is also easy to expand; this is what makes the VSAT network flexible.
VSAT (Very Small Aperture Terminal) communication is useful for huge organizations like DVC, ONGC, IOC, BHEL, etc., as a cost-effective data communication system within the organization. A VSAT is a small dish antenna of 60 cm or 120 cm which communicates
DHARM
N-BHUNIA/BHU1-2.PM5
50
51
with central hubs and terminals via satellites. VSAT is cheaper than conventional earth station communication using satellites. Power budget calculation [26] shows that, in order to meet the required bit energy to noise ratio, a large antenna is essential for covering a wide area. Cost increases with antenna size; for intra-organizational communication a small antenna is justified. DoT has allocated the extended C band for VSAT communication in India. Nowadays VSAT can also provide cost-effective telephony and fax services. VSAT is a low speed 1200 bps data communication system and employs TDMA for accessing.
provide voice, data, telex and facsimile services to ships; but for this, there remains the requirement of very high power for both the terminal and the spacecraft, and (3) the large distance between earth and GEO causes a high propagation loss of about 200 dB and a time delay of about 350 msec one way. Such a long delay is not acceptable to 80% of users.
MEOs are placed at orbit heights of about 10,000 km or above, whereas LEOs are placed at orbit heights of 2000 km or less. By this, LEOs overcome the disadvantages of GEO-based communication pointed out above. Besides, there are some major differences between LEO-based and GEO-based communications.
# The communication in LEO is done through a constantly moving and tracking switching network and antenna, rather than the fixed system of GEO. Mobile communication in LEO is based on relative mobility: the LEO systems move so fast that moving users appear stationary. For example, in the Iridium system the LEO speed relative to earth is 26,676 km/hr, whereas average mobile speed is around 90 km/hr.
# The GEO-based communication is single-hop (earth-satellite-earth) communication, while LEO-based communication is multi-hop communication.
# Under LEO, communication across the world is low cost. For example, while a typical GEO can provide about 10,000 channels for global services, a LEO can provide 7000 channels for regional services and 35,000 to 70,000 channels for global services. This typically means the cost per channel for global services in LEO is about one half of that for global services in GEO. LEO is effective for global services rather than regional services.
# LEOs are smaller than GEOs. The mass of LEO satellites ranges from 50 to 700 kg (whereas that of GEOs ranges from 1800-2000 kg). Therefore economic multiple launching of LEOs is possible.
The variety of services offered by the satellites was divided into three groups by the ITU (International Telecommunication Union). These are: (1) Fixed Satellite Service (FSS), which offers radio communication services between fixed locations on earth through one or more satellites, (2) Broadcast Satellite Service (BSS), which provides direct reception of satellite broadcast by the public and/or community, and (3) Mobile Satellite Service (MSS), which provides communication between mobiles through one or more satellites. As in the past twenty-five to thirty years, FSS and BSS shall continue to be served mainly with GEO. LEOs shall dominate MSS. It is said, "The LEO and MEO systems offer an innovative approach to providing service to a country, a region, or to the whole world. Instead of transmission to and from a fixed point in the sky (as for geostationary satellite systems), the user transmits to and receives from a network of lower altitude satellites that move overhead, with some satellites disappearing from view as others come over the horizon. The system can provide service to all parts of the world as the low altitude satellites pass over different parts of the earth."
LEO Systems
LEOs are classified into two groups: little-LEOs and big-LEOs. The little-LEO group consists of satellites which are small in size and low in weight. Little-LEOs are expected to provide services of only low bit rates, of the order of 1 Kbps (kilobits per second), and they are placed near an orbit height of around 1000 km. Naturally they are used for non-voice services. The frequency bands allocated for mobile satellite services (MSS) under the little-LEO group are 148-150.50 MHz (uplink) and 137-138 MHz (downlink).
The big-LEO group of satellites is expected to provide near-toll-quality voice service and other related services like paging, data communication, facsimile and position location. The big-LEO group also contains MEO (Intermediate Circular Orbit) satellites. The three important big-LEO systems are: Globalstar, Odyssey and Iridium.
Iridium
The Iridium system was proposed by Motorola to provide global services of voice, data, fax, paging and RDSS, and was scheduled to operate in 1998. The cost of the system is about US$ 3.4 billion. The system is composed of 66 (originally 77) satellites, with 11 satellites in each of 6 (7) polar orbits placed at an orbit height of 780 km above the earth's surface. The satellites shall provide 3168 cells, out of which only 2150 cells shall remain simultaneously active to provide global coverage of mobile/cellular telephone service. In the system, the same frequency band 1616-1626.5 MHz shall be used for both uplink and downlink communication on a time-shared basis. A message from one telephone to another is transmitted from mobile to satellite and then over 23 GHz (22.55-23.55 GHz) intersatellite links until the satellite viewing the destination mobile is reached. The system uses FDMA (Frequency Division Multiple Access) and TDMA (Time Division Multiple Access) on uplink and downlink respectively. The connection to the terrestrial network is done via earth station gateways. Voice circuits per satellite are 1100. The voice service rate is 2.4 Kbps and the data service rate is 7.2 Kbps. The modulation technique used in the system is QPSK (Quadrature Phase Shift Keying). The footprint diameter of each satellite is 4700 km, and therefore satellite visibility is 11.1 minutes. The satellite life span is rather short, at 5 years. The satellite antenna type is fixed and six feet in size. Beams per satellite are 48, and therefore total beams in the system are 3168. Feeder uplink and downlink frequencies are 27.5-30 GHz and 18.8-20.2 GHz. Minimum and maximum one-way propagation delays are respectively 2.6 msec and 8.22 msec. Airtime charge per minute is US$ 3.0.
The Iridium system is working, but not to the level of satisfaction expected before launch.
Globalstar
Qualcomm proposed the Globalstar LEO system to provide services of voice, data, facsimile and RDSS. The Globalstar system shall use 48 satellites in 8 polar orbits. The orbit height is 1400 km above earth. It provides global coverage and can work with the existing PSTN (Public Switched Telephone Network). Calls are granted through satellites only when access is available to the terrestrial network. The PSTN can be used via gateways for long distance communication. The system does not support intersatellite links. The gateways to the PSTN shall use 6.5 GHz and 5.2 GHz respectively for uplink and downlink communication. The access technology for MSS is CDMA (Code Division Multiple Access), via L-band (1610.0-1626.5 MHz) and S-band (2483.5-2500.0 MHz) for uplink and downlink communication. The modulation technique used in the system is QPSK. The system can support 2000-3000 voice circuits per satellite. The voice and data service rates in the system are 2.4 to 9.6 Kbps. Minimum and maximum one-way propagation delays are respectively 4.63 msec and 11.5 msec. Mobile terminal cost is about US$ 750. Airtime charge per minute is 30 cents.
The satellite footprint diameter is 5850 km. The satellite visibility is 16.4 minutes and the lifespan is 7.5 years. The satellite on-orbit mass is 450 kg. The system cost is US$ 1.8 billion.
The satellite antenna is fixed and 3 feet in size. Feeder uplink and downlink frequencies are respectively 5.091-5.250 GHz and 6.875-7.875 GHz. The satellite output power is 1000 watt. Beams per satellite are 16 and total beams are 768.
Comparing Iridium with Globalstar, a report says Globalstar has capital costs (at $1 billion) one-half Iridium's, circuit costs one-third Iridium's and terminal costs (at $750 each) one-fourth Iridium's. With no intelligence in space, Globalstar relies entirely on the advance of intelligent phones and portable computer devices on the ground; it is the Ethernet of satellite architectures. Costing one-half as much as Iridium, it will handle nearly 20 times more calls.
The advantages of Globalstar stem only partly from its avoidance of complex intersatellite connections and its use of infrastructure already in place on the ground. More important is its avoidance of exclusive spectrum assignments. Originating several years before spread-spectrum technology was thoroughly tested for cellular phones, Iridium employs time division multiple access, an obsolescent system that requires exclusive command of spectrum but offers far less capacity than code division multiple access. It is said that Iridium's voice service cannot compete with Globalstar's cheaper and more robust CDMA system. It is also reported that Iridium's satellites together use 80% more power than Globalstar's, yet employ antennas nearly twice as large and offer 18.2 times less capacity per unit area.
Odyssey
TRW proposed a system known as Odyssey to provide voice, data, facsimile and RDSS services on a global basis. In the system, 12 satellites and 3 polar orbits are used. The orbit height is 10370 km above the earth's surface, and therefore this system is better known as a MEO system. The orbital period of the satellites is 359.5 minutes and the visibility is 94.5 minutes. The satellite mass is 2207 kg. The footprint diameter of each satellite is 10540 km.
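The quoted orbital period follows from Kepler's third law. A small Python sketch reproduces it (the earth's gravitational parameter and radius are standard values, not from the text):

```python
import math

MU = 3.986e14        # earth's gravitational parameter, m^3/s^2
R_EARTH_KM = 6378    # mean equatorial radius, km

def period_minutes(altitude_km: float) -> float:
    """Orbital period T = 2*pi*sqrt(a^3/mu) for a circular orbit,
    where a is earth radius plus altitude."""
    a_m = (R_EARTH_KM + altitude_km) * 1000.0   # orbit radius in metres
    return 2 * math.pi * math.sqrt(a_m ** 3 / MU) / 60.0

# Odyssey's 10370 km altitude gives very nearly the 359.5 minutes quoted.
print(round(period_minutes(10370), 1))
```

The same formula gives about 359 minutes for the 10355 km ICO orbit described later, matching the 358.9 minutes quoted there.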
The access technology of the system is CDMA, and the modulation technique is QPSK. The system operates at L and S band. The mobile uplink and downlink frequencies are respectively 1610.0-1626.5 MHz (L-band) and 2483.5-2500.0 MHz (S-band). The system supports 3000 to 9500 voice circuits per satellite. Voice and data services are provided at respectively 4.8 Kbps and 2.4 Kbps. Minimum and maximum one-way delays are respectively 34.6 msec and 44.3 msec. Airtime charge in the system is US$ 0.65 per minute.
The satellite antenna type is steerable. Uplink and downlink feeder frequencies are 29.1-29.4 GHz (Ka-band) and 19.3-19.6 GHz (Ka-band) respectively. The system supports 61 beams per satellite, thereby supporting 732 beams in total. The satellite output power is 6177 watt.
Ellipso
Ellipsat proposed a LEO satellite system known as Ellipso to provide voice, data, facsimile and RDSS, using 15 (9) satellites placed in 3 (1) polar orbits. The orbit height is 7800 km over the earth's surface, and the system provides coverage over the entire northern hemisphere and over the southern hemisphere up to 50 degrees south latitude. It uses L and C-band for communication. The mass of a satellite on orbit is 300 kg. The system supports voice and data at 4.2 Kbps and 0.3 to 9.6 Kbps respectively. The satellite life span is 5 years. Air call charge per minute is US$ 0.50. The access technology is CDMA.
ICO
Hughes proposed the ICO system to provide services of voice, data, fax, paging, messaging and position location, employing 10 satellites in 2 orbits placed at 10355 km. The system is MEO rather than LEO. The satellite lifetime is 10 yrs. The system covers service all over the world. The orbit period is 358.9 minutes and satellite visibility is 115.6 minutes.
The downlink and uplink frequencies for MSS are 1980.0-2010.0 MHz and 2170.0-2200.0 MHz respectively. The satellite antenna type is fixed. Feeder uplink and downlink frequencies are in 5.091-5.250 GHz (C-band). The system supports voice and data services at the rates of 4.8 Kbps and 2.4 to 9.6 Kbps respectively. Minimum and maximum one-way propagation delays are respectively 34.6 msec and 48 msec. Airtime charge per minute is US$ 1 to US$ 2.
Teledesic
The Teledesic system of LEOs is a different class; the difference stems from the application point of view. The system is aimed at providing wireless broadband access and computer networking. Little-LEOs are the equivalent of paging; big-LEOs like Iridium, Globalstar and ICO are the equivalent of fiber. The system comprises 840 small satellites in 21 proposed orbital planes and 20,000 supercells on the earth, in order to provide broadband-on-demand service by 2002 for 99% of the earth. The orbit height is 700 km. The Teledesic system is expected to use the Ka-band of frequencies, between 17 GHz and 30 GHz, and antennas of size 66 cm. The Teledesic system is a giga-band system. A comparative study says that in the long run Iridium could be trumped by Teledesic. Although Teledesic has no such plans, the incremental cost of incorporating an L-band transceiver in Teledesic, to perform the Iridium functions for voice, would be just 10% of Teledesic's total outlays, or less than $1 billion (compared with the $3.4 billion initial capital cost of Iridium). But 840 linked satellites could offer far more cost-effective service than Iridium's 66.
Iridium's dilemma is that the complexities and costs of its ingenious mesh of intersatellite links and switches can be justified only by offering broadband computer services. Yet Iridium is a doggedly narrowband system focused on voice.
The evolutionary process of development of personal communication shall go on using existing cellular, cordless, satellite and wireless data networks, WLL (Wireless Local Loop), VSAT (Very Small Aperture Terminal), wireless Centrex/PBX, third generation mobile/cellular systems, MSS (Mobile Satellite Service), etc. But whether the use of MSS in personal communication will be revolutionary or evolutionary remains to be seen. The MSS under different LEO projects, both big-LEOs and little-LEOs, is believed to be a high hope for implementing personal communication.
Since it is estimated that the Internet has been growing by a factor of two every year, the underlying principles and assumptions on which IPv4 was designed are going to be invalid. What was duly sufficient for a few million users or a few thousand networks will no longer support a world with tens of billions of nodes and hundreds of millions of networks. The inability of IPv4 to support real time services was the stumbling block to realizing Internet telephony. The IPng (Internet Protocol Next Generation) initiative (RFC 1752) was then started by the Internet Engineering Task Force (IETF). By 1996, the IETF proposed IPv6 (Internet Protocol Version 6) under the IPng initiative, which is supposed to solve the problems of IPv4, including the two major limitations mentioned above. IPv6 is therefore the future replacement of IPv4. From the experience with IPv4, it was felt that the new version should take care of: more addresses, reduced overhead, better routing, support for address renumbering, improved header processing, reasonable security and support for mobility. Under the IPng initiative the main techniques investigated were:
TUBA, which refers to TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) with bigger addresses.
CATNIP, which means Common Architecture for the Internet. The main idea is to define a common packet format that will be compatible with IP, CLNP (Connectionless Network Protocol) and IPX (Internetwork Packet Exchange). CLNP had been proposed by OSI (Open Systems Interconnection) as a new protocol to replace IP, but was never adopted because of its inefficiency.
SIPP (Simple Internet Protocol Plus), which proposes to increase the number of address bits from 32 to 64, and to get rid of the unused fields of the IPv4 header.
None of the above three was seen to be suitable by itself. As such, a mixture of all three along with other modifications was suggested in RFC 1883. RFC 1883 suggested the modifications as below:
Expanded addressing, suggesting 128 bits for the address, which may allow more levels of address hierarchy, increased address space and simpler auto-configurable addressing
Improved IP header format, dropping the least used options
Improved support for extensions, which will bring flexibility in operations
Flow label, which will make real time services possible over the Internet
Based on the experience gained in the operation of IPv4 over about 20 years, the design of IPv6 has considered four major simplifications:
assigning a fixed format to each header. This ensures the removal of the header length field that is essential in IPv4
removing the header checksum. The main advantage of removing the header checksum is to diminish the cost and the time delay in header processing. This may cause data to get misrouted, but experience has shown that the risk is minimal, as most data packets are encapsulated by packet checksums at other layers, such as the MAC (Media Access Control) procedure in IEEE 802.X and the adaptation layer of ATM (Asynchronous Transfer Mode)
removing the hop by hop segmentation procedure
removing the TOS (Type Of Service) field that IPv4 provides, since experience has shown that this field has hardly ever been set by applications.
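The saving from the dropped header checksum is easy to appreciate: every IPv4 router must verify and recompute a 16-bit one's-complement sum over the header. The Python sketch below shows that per-hop work; the sample header is a commonly used worked example, not taken from the text:

```python
def ipv4_header_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words over an IPv4 header,
    with the checksum field itself taken as zero."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:                      # fold carry bits back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# 20-byte sample header with the checksum field zeroed out.
sample = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
assert ipv4_header_checksum(sample) == 0xB861
```

In IPv4 this computation is repeated at every hop (the TTL decrement changes the header); IPv6 delegates error detection entirely to the other layers.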
On the other hand, IPv6 has considered two new fields, flow label and priority. These
are included to facilitate the handling of real time services like voice, video and high quality
multimedia etc.
Thus IPv6 finally came up with the packet format as in Fig. 13. The final specifications of IPv6 were produced in RFC 1883.
Fig. 13: IPv6 packet format (Version and Priority fields of 4 bits each, followed by the remaining base header fields)
The new features of IPv6 are:
A fixed and streamlined 40-byte header: IPv6 has a fixed header length, like the ATM (Asynchronous Transfer Mode) cell. This minimizes node processing delay, and thereby makes IPv6 more suitable for real time services like voice, video and multimedia.
Expanded addressing capabilities: The 128-bit address space in IPv6, instead of the 32 bits of IPv4, is believed to ensure that the world won't run out of IP addresses. The 128-bit address size gives rise to a total of 2^128 (about 3.4 x 10^38) different addresses. The Internet under IPv6 is expected to support 10^15 (a quadrillion) hosts and 10^12 (a trillion) networks, whereas the Internet under IPv4 can support a maximum of 2^32 hosts. The IPv6 address space is thus about 2^96 times that of IPv4. This is why the future, exponentially growing demand for Internet connections is expected to be met with IPv6.
New address class: Besides unicast and multicast, IPv6 has the provision of anycast addressing. An anycast address allows a packet addressed to it to be delivered to any one of a group of hosts.
A single address associated with multiple interfaces
Address auto-configuration and CIDR (Classless Inter-domain Routing) addressing
Provision of extension headers, by which special needs like checksum and security options may be introduced.
Flow labeling and priority: The flow label and priority fields are used to comfortably support real time services. By assigning higher priority to real time packets, the necessary time sensitiveness is restored. Data packets, and for that matter all time-insensitive packets, are assigned low priority and serviced by the best effort approach. As per RFC 1752 and RFC 2460, this new feature allows "labeling of packets belonging to particular flows for which the sender requests special handling, such as a non-default quality of service or real-time service". Hence video and audio may be treated as flows, whereas traditional data, file transfer and e-mail may not be treated as flows.
Support for real time services
Security support, which could eventually be seen as the biggest advantage of IPv6. Today, billions of dollars of business is done over the Internet. To keep this business secure, public key cryptosystems have emerged as one of the important tools. IPv6, with its ancillary security protocols, provides a better communication tool for transacting business over the Internet
Enhanced routing capability, including support for mobile hosts.
IPv6 as such is not a simple extension of IPv4, but a definite improvement over IPv4, designed to meet the growing demand for Internet connectivity and the services of real time communication via the Internet.
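The fixed 40-byte base header described in the following section can be packed in a few lines. This Python sketch follows the RFC 1883 layout discussed here (4-bit version, 4-bit priority, 24-bit flow label); the function name and defaults are illustrative, not from the text:

```python
import struct

def build_ipv6_header(payload_len: int, next_header: int, hop_limit: int,
                      src: bytes, dst: bytes,
                      priority: int = 0, flow_label: int = 0) -> bytes:
    """Pack a 40-byte IPv6 base header (version fixed at 6)."""
    # First 32-bit word: version (4 bits) | priority (4) | flow label (24).
    first_word = (6 << 28) | (priority << 24) | flow_label
    return struct.pack("!IHBB16s16s", first_word, payload_len,
                       next_header, hop_limit, src, dst)

hdr = build_ipv6_header(1280, 6, 64, bytes(16), bytes(16))
assert len(hdr) == 40 and hdr[0] >> 4 == 6   # fixed size, version field = 6
```

Note there is no header length field and no checksum field to fill in, which is precisely the simplification the text describes.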
The functions of the IPv6 headers, that is, of the base header of fixed 40 bytes, are:
Version field (4 bits). It contains the version number. The versions are 4 and 6. For version 6, this field is 6 (i.e. 0110). The various assigned values for the IP version label are shown in Table 12. But it must be remembered that just putting the number 6 or 4 does not make the corresponding IP packet; for the corresponding IP packet the proper format is also required.
Priority (4 bits). The bits in the field indicate the priority of the datagram. There are 16 priority levels, from 0 to 15. The first 8 priority levels (0 to 7) are for services that provide congestion control: if congestion occurs, the traffic is backed off. These are suitable for non-real time services like data. The priority levels in this first group are: 0, which defines no priority; 1, background traffic like Netnews; 2, unattended transfer like e-mail; 3, reserved; 4, attended bulk transfer like FTP (File Transfer Protocol) and NFS; 5, reserved; 6, interactive traffic such as Telnet and X-windows; and 7, control traffic such as SNMP (Simple Network Management Protocol) and routing protocols. The higher 8 priority levels (8 to 15) are used for services that will not back off in response to congestion; real time traffic is an example of such services. The lowest priority level of this group, 8, refers to traffic most willing to be discarded on congestion, and the highest priority level, 15, is for traffic least willing to be discarded.
Flow label (24 bits). It is proposed to be used to identify different data flow characteristics. It is assigned by the source and can be used to label packets. The packet labels may be required to provide special handling of packets by IPv6 routers, such as a defined quality of service (QoS) or real time services. The combination of the sender's IP address and the flow label creates a unique path identifier that can be used to route datagrams more efficiently. The field is still being experimented with. A flow is actually a sequence of packets coming from a particular source and destined for a particular destination, and a flow may require special handling by routers. Each flow is uniquely defined by the combination of the source address and a non-zero flow label. The flow label can be from (000001)H to (FFFFFF)H in hex. Packets having no flow label are given a zero label. All packets in the same flow must have the same flow label, the same source and destination addresses and the same priority level. The initial flow label is obtained by the source from a pseudo random generator, and the subsequent flow labels are obtained sequentially.
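The labelling rule just described, a pseudo-random initial label followed by sequential labels with zero reserved, can be sketched as follows (the class and method names are hypothetical):

```python
import random

class FlowLabeller:
    """Hand out 24-bit flow labels: the first one pseudo-random,
    the rest sequential. Label 0 is reserved for 'no flow'."""
    MAX_LABEL = 0xFFFFFF          # (FFFFFF)H, the largest 24-bit label

    def __init__(self) -> None:
        self._next = random.randint(1, self.MAX_LABEL)

    def new_flow(self) -> int:
        label = self._next
        # Advance sequentially, wrapping past the maximum and skipping 0.
        self._next = self._next % self.MAX_LABEL + 1
        return label

labeller = FlowLabeller()
first, second = labeller.new_flow(), labeller.new_flow()
assert 1 <= first <= 0xFFFFFF and 1 <= second <= 0xFFFFFF
```

The wrap-around expression guarantees that the sequence never emits the reserved zero label.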
Payload length (16 bits). The field indicates the total size of the payload of the IP datagram, excluding the header fields. It can define up to 65,535 bytes of payload.
Next header (8 bits). The field indicates which header follows the IP header. The next header can be either one of the optional extension headers used by IP or the header of an upper layer protocol such as UDP or TCP. The field defines the type of the next header. For example, 0 defines hop-by-hop IP information, 1 defines ICMP (Internet Control Message Protocol) information, 6 defines TCP information, 44 defines the fragmentation header, 51 defines the authentication header and 80 defines ISO (International Standards Organization)/IP information. Each extension header again contains a next header field and a header length field (Fig. 14). When there is no other extension header, the next header will be TCP and hence the next header field will contain 6. The length of the base header is fixed at 40 bytes. The extension header gives the
functional flexibility to the IPv6 datagram. A maximum of six extension headers can be used. The extension headers may be source routing, fragmentation, authentication, security, etc. IPv6 currently defines six extension headers: (1) hop by hop options header, (2) routing header, (3) fragment header, (4) authentication header, (5) encrypted security payload header and (6) destination options header. If one or more extension headers are used, they must follow the order in which they are presented above. For example, if the authentication header and the routing extension header are to be used, the headers must follow as: (1) main IPv6 header, (2) routing extension header, (3) authentication header and (4) TCP header with data. Each extension header must have one 8-bit next header field. For all extension headers except the fragment header (in the case of the fragment header, the flags and offset are 16 bits fixed), the next header field is immediately followed by an 8-bit extension header length that indicates the length of the current extension header in multiples of 8 bytes. In the last extension header, if nothing follows, the next header field contains the value 59. In the example that we considered, the next header field in the main IPv6 packet will point to the routing extension header, the next header field in the routing header will point to the authentication extension header, and the next header field of the authentication header will contain 6, indicating the TCP header.
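The chaining just described can be followed mechanically. The sketch below simplifies the real RFC encoding (it treats the length byte as the whole header's size in 8-byte units); 43 and 51 are the standard routing and authentication header numbers:

```python
TCP, UDP, NO_NEXT = 6, 17, 59

def walk_chain(first_next_header: int, ext_headers: bytes):
    """Collect extension header types until an upper-layer protocol
    (or 'no next header', value 59) terminates the chain."""
    chain, nh, offset = [], first_next_header, 0
    while nh not in (TCP, UDP, NO_NEXT):
        chain.append(nh)
        nh, length = ext_headers[offset], ext_headers[offset + 1]
        offset += length * 8
    return chain, nh

# Routing header (43) pointing at an authentication header (51),
# which in turn points at TCP -- the ordering example in the text.
routing = bytes([51, 1]) + bytes(6)   # next = 51, length = 1 (8 bytes total)
auth = bytes([6, 1]) + bytes(6)       # next = 6 (TCP), length = 1
chain, upper = walk_chain(43, routing + auth)
assert chain == [43, 51] and upper == 6
```

Each header names its successor, so a router or host only ever reads the headers it needs.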
Hop limit (8 bits). This field indicates the maximum number of hops that the datagram is allowed to traverse in the network before it reaches its destination. If after traversing this maximum number of hops the datagram has not reached the destination, it is discarded from the network. The field is used to avoid the congestion that may be caused by endlessly circulating datagrams. Each router decreases the hop limit by 1 while releasing the datagram to the network; when the hop limit reaches 0, the datagram is deleted. The hop limit of IPv6 is exactly what is called Time To Live in IPv4. The new name of hop limit has been given as it suits the function better.
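The decrement-and-discard rule is simple enough to simulate (a toy sketch, not real router code):

```python
def forward(hop_limit):
    """One router hop: decrement the hop limit; a datagram whose
    limit reaches 0 is discarded (returned as None)."""
    hop_limit -= 1
    return None if hop_limit == 0 else hop_limit

hl, hops_survived = 3, 0
while hl is not None:          # a datagram sent with hop limit 3...
    hl = forward(hl)
    hops_survived += 1
assert hops_survived == 3      # ...is dropped by the third router
```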
Source address and destination address (each 128 bits). Both addresses are IP addresses and are described in RFC 2373. The IP address that defines the original source of the datagram is called the source address; the IP address that defines the final destination of the datagram is called the destination address. The three main groups of IP addresses are: unicast, multicast and anycast. A unicast address defines a particular host. A unicast packet is identified by the unique single address of a single interface NIC (Network Interface Card), and is transmitted point-to-point. A multicast address defines all the hosts of a particular group to receive the datagram. An anycast address is assigned to a number of interfaces; an anycast packet therefore goes to the closest interface and does not attempt to reach the other interfaces with the same address. A multicast packet, like an anycast packet, has a destination address that is associated with a number of interfaces, but unlike the anycast packet, it is destined to each interface with that address. Unlike IPv4, IPv6 addresses do not have classes, but the address space of IPv6 is subdivided in various ways for the purpose of use. The subdivision is done based on the leading bits of the addresses. The present division of the IPv6 address space is shown in Table XIII. The IPv6 address space is huge; a portion of it is reserved for computer systems using Novell's Internetwork Packet Exchange (IPX) network layer protocol, as well as the Connectionless Network Protocol (CLNP).
It is found that several fields present in IPv4 are no longer present in IPv6; notable among them are:
Checksum field. A main goal in designing IPv6 was fast processing of packets. This resulted in a design with fixed header fields and removal of redundant fields. The error check is done at the upper layers, namely TCP/UDP. As such, a further checksum at the IP layer was considered redundant and accordingly was removed from IPv6. Moreover, with a checksum in the IPv4 packet, error checking at every node was essential; this was very time consuming and costly, and duly unwanted in IPv6.
Options field. Dropping the options field has made IPv6 a fixed header packet. Of course, if required, the IPv6 packet may use the next header field for the purpose of header extension.
Fragmentation. IPv6 has dropped the fragmentation and reassembly feature at intermediate routers. The data is fragmented for packetization at the source only, and reassembly is done at the destination only. If an IP packet received by an intermediate router is found to be too large to be forwarded on the outgoing link, the router simply drops the packet and in turn sends an ICMP error message of "Packet Too Big" to the sender. The sender, on receiving this ICMP error message, retransmits the data with a smaller packet size. Actually, fragmentation and reassembly of datagrams at routers is time consuming; moving these functions from the routers to the end users speeds up the network.
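The source-only fragmentation described above can be sketched as follows. The MTU values are hypothetical, and the repeated Packet-Too-Big round trips are collapsed into a single min() over the path:

```python
def source_fragment(payload_size: int, path_mtus: list) -> list:
    """Sizes the source ends up transmitting once Packet Too Big
    messages have reported the smallest link MTU on the path."""
    mtu = min(path_mtus)          # what the error messages converge on
    sizes = []
    while payload_size > 0:
        sizes.append(min(payload_size, mtu))
        payload_size -= sizes[-1]
    return sizes

# 3000 bytes across links of MTU 1500 and 1280 -> three source fragments.
assert source_fragment(3000, [1500, 1280]) == [1280, 1280, 440]
```

The routers never reassemble anything; they only drop and report, which is the speed-up the text refers to.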
ICMP (Internet Control Message Protocol)
ICMP for IPv4 is used by hosts, nodes, routers and gateways to communicate network layer information to each other. ICMP is specified in RFC 792. ICMP information is carried as IP payload, like TCP or UDP information. ICMP messages are basically used for error reporting, among other purposes (Table XIV). An ICMP message is made of a type field and a code field, plus the first eight bytes of the IP datagram for which the ICMP message was generated in the first place, so that the sender can identify the packet that caused the error. A new version of ICMP is defined for IPv6 in RFC 2463. The new ICMP has reorganized the existing types and codes as well as added new types and codes. The added new ICMP types include "Packet Too Big" and "unrecognized IPv6 options", among others.
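The message layout just described, a type, a code, and the first eight bytes of the offending datagram, can be sketched with struct. The checksum is left at zero for brevity, and type 3 (destination unreachable) is used as a classic example:

```python
import struct

def icmp_error(icmp_type: int, code: int, offending: bytes) -> bytes:
    """Type, code, zeroed checksum, four unused bytes, then the first
    eight bytes of the datagram that triggered the error."""
    return struct.pack("!BBHI", icmp_type, code, 0, 0) + offending[:8]

msg = icmp_error(3, 0, bytes(range(20)))   # type 3: destination unreachable
assert msg[0] == 3 and len(msg) == 16
```

Carrying those first eight bytes back is what lets the sender match the error to the packet that caused it.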
Auto configuration and multiple IP addresses
The IPv4 address structure is a stateful address structure, which means that if a node moves from one subnet to another, the user has either to reconfigure the IP address or to request a new IP address from DHCP (Dynamic Host Configuration Protocol). With DHCP, an IP address is leased to a particular host or computer for a defined period of time. But IPv6 supports stateless auto configuration, whereby on moving from one subnet to another a host can construct its own IP address. This is done by the host by appending its MAC (Media Access Control) address to the subnet prefix. IPv6 also supports multiple addresses for each host. The addresses can be valid, deprecated or invalid. With a valid address, new and existing communication may be done; with a deprecated address, only existing communication may be done; with an invalid address, no communication is done.
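The "MAC address plus subnet prefix" construction mentioned above is, in standard practice, the modified EUI-64 procedure: insert ff:fe in the middle of the 48-bit MAC and flip the universal/local bit. A sketch (the prefix and MAC below are made-up examples):

```python
def interface_id(mac: bytes) -> bytes:
    """48-bit MAC -> 64-bit interface identifier (modified EUI-64):
    flip the universal/local bit and insert ff:fe in the middle."""
    assert len(mac) == 6
    return bytes([mac[0] ^ 0x02]) + mac[1:3] + b"\xff\xfe" + mac[3:]

def autoconfigure(prefix: bytes, mac: bytes) -> bytes:
    """64-bit subnet prefix + 64-bit interface id -> 128-bit address."""
    assert len(prefix) == 8
    return prefix + interface_id(mac)

addr = autoconfigure(bytes.fromhex("20010db800000000"),   # example prefix
                     bytes.fromhex("00163e123456"))       # example MAC
assert len(addr) == 16 and addr[11:13] == b"\xff\xfe"
```

Because the host derives the low 64 bits itself, no DHCP server state is needed when it moves to a new subnet.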
Address Notation
Like IPv4, IPv6 has a special notation for representing IP addresses. The IPv6 address is represented in hexadecimal colon notation. The 128 bits are divided into eight sections,
DHARM
N-BHUNIA/BHU1-2.PM5
60
61
each of two bytes in length. Each of the eight sections is represented by four hex digits (a pair of hexadecimal numbers per byte), and the sections are separated by colons. One example is:
AB12:0978:CF56:00FE:1234:127E:CB65:7890
The notation allows leading zeros to be dropped. This means, for example, that 0045 can be represented as just 45, 0A56 similarly as A56, and 0000 as simply 0. The notation also allows a contiguous run of zero sections to be replaced by a double colon, so that, for example, 2456:AC67:0:0:67:D4E5:A456:A678 can be written as 2456:AC67::67:D4E5:A456:A678. The double colon notation can be used at the beginning or at the end of an address, but only once. A double colon at the start indicates leading zero sections, and one at the end indicates contiguous zero sections at the end. If double colons were used at more than one location, it would not be possible to know how many zeros are at each location. This is why the double colon notation is used only once. By counting the other sections, the number of zero sections at the single double colon location can be found out.
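These notation rules can be checked with Python's standard ipaddress module, which applies the same leading-zero and double-colon conventions:

```python
import ipaddress

# The module drops leading zeros in each group and collapses the longest run
# of zero groups into a single "::", as described in the text.
addr = ipaddress.IPv6Address("2456:AC67:0:0:67:D4E5:A456:A678")
compressed = str(addr)      # '2456:ac67::67:d4e5:a456:a678'

# .exploded reverses the shorthand back to eight 4-digit groups
full = addr.exploded        # '2456:ac67:0000:0000:0067:d4e5:a456:a678'
```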
IPv6 and IPv4 address compatibility
For a long interim period, the IPv6 and the IPv4 have to coexist. During this period, an IPv4
address can be converted to an IPv6 address by pre pending 12 bytes of zero. For example, an
IPv4 address 126.34.67.10 will be converted to an IPv6 address as
0:0:0:0:0:0:0:0:0:0:0:0:126.34.67.10 or::126.34.67.10. Similarly a host having an IPv4 address
as 128.67.56.9 may be mapped (read as IPv4 mapped IPv6) could have an IPv6 address as
::AC45:128.67.56.9. The different special notations of version 4 and version 6 will make them
separable.
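Both compatibility forms can likewise be verified with the standard ipaddress module:

```python
import ipaddress

# IPv4-compatible form: 96 zero bits followed by the 32-bit IPv4 address
compat = ipaddress.IPv6Address("::126.34.67.10")

# IPv4-mapped form: the standard ::FFFF:a.b.c.d prefix
mapped = ipaddress.IPv6Address("::ffff:128.67.56.9")
embedded = mapped.ipv4_mapped       # IPv4Address('128.67.56.9')
```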
Fig. 14: Illustration of the use of the Next Header field. The IPv6 base header (Version, 4 bits; Priority, 4 bits; payload length; Next Header; etc.) chains through its Next Header field to each following extension header, whose own Next Header field points in turn to the next, until the final header points to the variable length TCP pack (which is TCP header + payload).
Table: Assigned IP version numbers

Key      Description
0        Reserved
1-3      Unassigned
4        IP (Internet Protocol, IPv4)
5        ST
6        SIP
7        TP/IX
8        PIP
9        TUBA
10-14    Unassigned
15       Reserved
Table 13: IPv6 address space subdivision based on prefix assignments of bits

Prefixed bits    Assignment
0000 0000        Reserved
0000 0001        Unassigned
0000 001         Reserved for NSAP allocation
0000 010         Reserved for IPX allocation
0000 011         Unassigned
0000 1           Unassigned
0001             Unassigned
001              Aggregatable global unicast addresses
010              Unassigned
011              Unassigned
100              Unassigned
101              Unassigned
110              Unassigned
1110             Unassigned
1111 0           Unassigned
1111 10          Unassigned
1111 110         Unassigned
1111 1110 0      Unassigned
1111 1110 10     Link-local unicast addresses
1111 1110 11     Site-local unicast addresses
1111 1111        Multicast addresses
Table 14: A few ICMP message types

Type    Remarks
8       Echo request
9       Router advertisement
10      Router discovery
11      TTL expired
12      IP header bad
traffic will dominate the telecommunication traffic. Consequently, now should be the time for datacom to act as a carrier for telecom. But the Internet as such cannot be used to carry real time services, as it was designed to carry data, and the characteristics of real time services like voice and video are different from those of data. Table (16) shows the different characteristics and different requirements of voice, video and data. The need to deploy the Internet for real time services like voice and video has led to the redesign of some features of the Internet. The two important features related to this emerging issue are: (i) redesign of the IP datagram format, and (ii) use of RTP (Real-time Transport Protocol) over IP for carrying voice over conventional IP datagrams and the Internet. It is believed that with the deployment of IPv6, VoIP will be realized.
Table 15: Projected growth of IP telephony
(A) As per [12]: voice IP traffic, 1998, 1999 and 2004 (expected)

Year    Average units          Unit growth    Yearly revenues    Yearly revenue
        (millions per year)    rate (%)       (millions)         growth rate (%)
2000    3,987.2                256            388.75             209
2002    22,386.2               162            1,511.07           136
2004    167,896.2              114            8,814.55           88
2006    587,636.9              75             22,036.38          46
Table 16: Characteristics and requirements of voice, LAN data, transactional data and video

                     Voice                   LAN data                  Transactional data        Video
Predictability       Constant/On-Off         Bursty                    Highly bursty             Constant/Bursty
Bandwidth/Bit rate   Low                     Medium to High            Low to Medium             High
Delay/Jitter         Sensitive               Tolerant                  Tolerant                  Sensitive
Loss                 Sensitive/No recovery   Sensitive but can recover Sensitive but can recover Very sensitive/No recovery
Error/Integrity      Can tolerate            May tolerate
in transition over a packet switching network like the Internet. These include packet loss, packet transfer delay and jittering delay. Voice communication involves human interaction. As such, a few losses of voice packets can be tolerated, due to the human intelligence and perception involved in recovery. But too much loss of voice packets may seriously degrade the voice quality. Moreover, the PSTN is a reliable voice service provider whereas the Internet is not, because the Internet is datagram based.
Table 17: End to end voice packet latency

Delay source                          Delay (msec)
Recording                             10-40
Encoding/Decoding (CODEC)
Compression/Decompression (SPEECH)
Internet delivery                     70-120
Jitter buffer                         50-200
Average                               150-400
Delay is the more serious issue for real time interactive services like voice. By delay is meant the time difference between the time the sender releases a packet to the network and the time at which the receiver receives the packet from the network. Delay refers to: (1) the total transfer delay of a packet, which includes coding/decoding delay, propagation delay, transmission delay, node processing and queue delay, and switching and routing delay; and (2) the jittering delay, which refers to the phase delay between two successive packets. Typical delays from different sources are given in Table (17)[12]. If the total delay exceeds a certain value, customers may get irritated with the service. A statistic says that a delay of up to 80 msec between the caller and the callee is acceptable, but beyond that irritation is caused to the users. The total delay is a variable quantity, and it varies from packet to packet. The jittering delay is a very serious issue. If the phase lag between the voice packets at the source differs from that at the destination, the service quality degrades. The phase lag between packets differs from the source end to the destination end because the total transfer delay varies from packet to packet. Due to the jittering problem, a sent voice "I ... shall ... go home" may be received as "I ...... shallgo home": compared to the transmitter, the phase delay between "I" and "shall" has increased, and that between "shall" and "go" has reduced to zero at the receiver. While the total delay could be limited by increasing the bit rate capacities of the links and by adopting efficient routing techniques, among others, the jittering effect cannot be solved so simply. There are several techniques to reduce the effect of the jittering problem. One such technique is known as accelerating and deaccelerating. In fact, the jittering problem is due to (Di+1 - Di) being finite and variable. Here, Di+1 and Di are both variable quantities and represent respectively the total transfer delay of the (i+1)th packet and the ith packet. To avoid the jittering effect, it is required that Di+1 - Di = 0. In the accelerating and deaccelerating technique, at the receiver end a variable delay (say Wi for the ith packet) is applied to each packet such that Di + Wi = K, a constant for all packets (i.e., for i = 0, 1, 2, 3, ...), before delivery of the packets to the terminal equipment for playback. By this process, the variable delay caused by the network between two successive packets is made zero, as (Di+1 + Wi+1) - (Di + Wi) = 0. This ensures that the phase delay between packets at the transmitter remains the same at the receiver. The scheme is illustrated in Table (18). As illustrated in the table, the success of the technique depends on the choice of K.
Table 18: Illustration of the accelerating and deaccelerating technique (K = 100 ms)

Packet      Sent at (ms)    Network delay Di (ms)    Buffer delay Wi (ms)    Played out at (ms)
Packet-1    0               80                       20                      100
Packet-2    10              70                       30                      110
Packet-3    15              85                       15                      115
Packet-4    25              100                      0                       125
Packet-5    30              110                      arrives 10 ms late      130

(Packet-4 is the marginal case. Packet-5 is the failed case. Both could have been avoided had the constant K been chosen greater than 110 ms in this case. So the success of the technique depends on the choice of K.)
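The playout computation (Di + Wi = K) can be sketched in Python; the send times and network delays below follow the Table 18 illustration with K = 100 ms, taking the first packet's send time as 0:

```python
def playout_schedule(send_times_ms, network_delays_ms, k_ms):
    """Accelerating/deaccelerating jitter buffer: play every packet exactly
    K ms after it was sent, so that Di + Wi = K for all packets. Returns a
    list of (arrival, playout, buffer_wait); a negative wait means the packet
    arrived after its scheduled playout time, i.e. the scheme failed for it."""
    schedule = []
    for sent, di in zip(send_times_ms, network_delays_ms):
        arrival = sent + di
        playout = sent + k_ms
        schedule.append((arrival, playout, playout - arrival))  # Wi = K - Di
    return schedule

# The five packets of the Table 18 illustration, with K = 100 ms
sched = playout_schedule([0, 10, 15, 25, 30], [80, 70, 85, 100, 110], 100)
# buffer waits: 20, 30, 15, 0 (marginal) and -10 (the failed packet)
```

Raising K above 110 ms makes every wait non-negative, which is exactly the observation in the table's note.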
VoIP is going to be a dominant service issue of IP. VoIP has several motivations, as we discussed earlier. The PSTN supports only toll-quality sound (4 KHz sound), and is not suitable for high-fidelity sound. VoIP can support higher grades of sound. This will be another major driving factor for VoIP. But there are several issues that need to be resolved before VoIP is widely used. Standards are still not finalized, although H.323 of the ITU is being projected as a possible standard. H.323, under its new version 2, may be used for interoperability between different service networks, like the PSTN and the Internet, to support voice. The H.323 standard is for multimedia or videoconferencing. The audio standard of H.323 is G.7xx, which may take many forms based on the choice of x; the choice of x will define the intelligibility of the voice service provided.
3.22.3 IPv6 for real time services
The conventional packet switching is not appropriate for carrying real time services. There are many reasons for this. For example, HDLC or SDLC packets are variable in size. To synchronize and identify a packet, flags are required to be located. To avoid occurrence of the flag byte in the payload, stuffing and destuffing are done. These cause huge node processing delay, and hence packet transfer delay. ATM was proposed as the replacement of packet switching to support real time services. The problems of conventional packet switching were solved in ATM by making the ATM packet, called a cell, simpler. The simplicity of ATM is in two respects: (1) a shorter cell and (2) a fixed size cell. This philosophy was extended to the design of the IPv6 datagram to replace the IPv4 datagram, so that IP can carry real time services.
IPv6 has a simple and basically fixed header format. The overhead bits of IPv6 are fewer than those of IPv4. The overhead in IPv4 is 12 bytes in a header format of 20 bytes (8 bytes are for addresses), whereas the overhead in IPv6 is 8 bytes in a header format of 40 bytes (32 bytes are for addresses). IPv6 proposes to provide QoS (Quality of Service) support to real time services like voice and video. The flow label and priority fields in the header of IPv6 facilitate the support of real time data. IPv6 has an efficient header format compared to IPv4.
technology may be used for Gbps (giga bit per second) transport over metropolitan or city distances. The other appealing advantages of FSO are: no cable cost; no cable installation, trenching and digging cost; no cable maintenance cost; and virtually no link failure (link availability is almost 100%!). It is said that "free space optics really only provides a very limited application when you consider five 9s of reliability". Some of the free space optics companies will tell you that the five 9s are outdated, that they actually have trials with alternative operators that are just going for three 9s and four 9s, and that "five 9s is probably the greatest myth that exists today in the world of telecom". Free-space optics is a hybrid of optical and wireless technology, presently the two most important carrier technologies of communication. FSO offers a free-for-all transmission medium. A study says FSOs also offer "lower deployment costs and reduced installation time compared with metro fiber builds. Business cases we have seen start at one-fifth the cost of metro fiber and can be six months faster to install in some metro areas." As the name implies, FSO uses optical laser technology to transmit data across open spaces, exploiting the straight line propagation of the light beam. Low-power infrared beams that do not harm the eyes are used in FSO technology to transmit data through the open space between transceivers. The transceivers are mounted on rooftops or behind windows (Fig. 15), in line of sight with each other, over distances of a few hundred meters to a few kilometers. The part of the electromagnetic spectrum above 300 GHz, which includes infrared, is unlicensed and available free of cost. The FSO technology then has only to ensure that the radiated power does not exceed the standards defined by the international committees. Usually the equipment works with either an 850 nm or a 1550 nm laser. Lasers of 850 nm are much cheaper than those of 1550 nm. But the safety regulations permit the 1550 nm lasers to operate at a higher power level than the 850 nm lasers. FSO with an 850 nm laser is thus suitable for moderate distances, whereas FSO with 1550 nm is favored for distances of kilometer ranges. Actually, 1550 nm has a two-fold power advantage and a five-fold distance advantage over the 850 nm laser, but about a ten-fold cost disadvantage compared to 850 nm. Table 19 gives a comparative study.
A few major applications of FSO are in the areas of metro network extension, last-mile access, enterprise connectivity, dense wave division multiplexing services, SONET ring closures, wireless backhaul, backup, disaster recovery, service acceleration, storage-area networks and LAN interconnectivity. FSO may be deployed to extend an existing fiber ring of a MAN (Metropolitan Area Network) by connecting it with other networks. This may compete with a SONET (Synchronous Optical Network) network. FSO may be deployed in last-mile access in the sense that it may be used in high speed links that connect Internet service providers or other networks with end users. It is reported that domestic service providers and foreign carriers are using FSO not only as a broadband backup but also as a viable last-mile technology. For a technology that depends on straight lines, free space optics is taking a circuitous route to respectability. FSO may be used as a redundant backup in lieu of a second fiber link, particularly over short distance communication. This has a clear advantage. Consider the Sept. 11 disaster: had there been FSO, some means of alternative communication could have been available in case of fiber failure. A report goes on saying, "While FSO will never defy the laws of physics, it can provide a valuable last link between the fiber network and the end user, including as a backup to more conventional methods. A key example was the Sept. 11 tragedy, when carriers learned that having a backup fiber optic network was of little use if both fibers went dark." As a backhaul, FSO may be used to carry cellular telephone traffic from towers back to the fixed wire PSTN (Public Switched Telephone Network). FSO may further be used to provide immediate or instant service to customers while their fiber link is being laid.
FSO or OW (optical wireless) has another important application as a last mile solution for broadband services. This application is otherwise known as bridging technology. In supporting broadband services to residential customers, the problem of the last mile made of twisted wire pairs exists. Clever utilization of the last mile has made access rates vary from 128 Kbps to 2.3 Mbps. One important technology of this clever utilization is DSL (Digital Subscriber Line) technology, which provides an access rate of 144 Kbps. With OW technology the access rate is believed to increase to Mbps. This is a great offer of OW technology.
FSO technology is believed to change optical communication: "Optical networking technology is radically changing the foundation of carrier backbones, boosting Internet bandwidth exponentially while slashing costs dramatically." But FSO is not free from disadvantages. An FSO link may suffer from weather conditions; for example, fog may hamper the link operation. Till date no standard is available for FSO operation. The vendors have to do a lot to establish the technology's viability and the consequent products' marketability. Let us hope for the best for this old technology. It is concluded with a few observations of some industrialists and members of academia:
1. "To have alternate paths using free space optics is getting much more interest from carriers," said Steve Mecherle, chair of the Free Space Optics Alliance and chief technical officer for vendor fSONA.
2. "People are realizing that if they have two fibers, they're not necessarily protected if it's a correlated event and they both go out," Mecherle added.
3. Michael Sabo, senior vice president of sales and marketing for vendor AirFiber, said FSO is earning a place as more than a fiber backup: "Billions of dollars have been spent on long-haul fiber builds out on the trunks. This technology fits the last-mile kinds of applications to fill in all the leaves of those networks."
4. Qwest uses FSO in commercial deployments serving the vast majority of the users of Qwest's broadband network. "We're pleased with the technology, but we cannot [speculate] about its future deployment in the Qwest market," said Qwest Communications.
5. "Nevertheless, fiber doesn't go everywhere, and it can't always be deployed quickly. In all those cases, FSO is a superb alternative," said Werne, CEO of Utfors, a Swedish broadband carrier.
6. Ken Corriveau, Tribal's IT director: "You could rent dark fiber, but that would take forever to figure out in the city. You could rent a T-1 or DS-3, but both of those are 30 to 90 days out."
7. "In Madrid, 80% of business users are within 500 meters of fiber," said Paul Kearney, Alua's (a carrier in Spain) chief technical officer. He further said: "We plan our [FSO] network by using very short ranges to be within the weather limitations."
8. "In general, the technology has a lot of future for the carrier networks, if it's marketed well," said Gartner's Tratz-Ryan. And therefore to many, FSO has at least cleared the first hurdle on its circuitous obstacle course.
Table 19: Comparison of lasers used in FSO

Laser in FSO    Typical cost    Data rate      Typical distance
850 nm          US$ 5,000       10-100 Mbps
1550 nm         US$ 50,000      Up to Gbps     1-2 kilometers
Fig. 15: An FSO deployment. Terminals (T) communicate through rooftop transceivers (TR) over free space or air, with one transceiver connecting into the MAN/LAN over a fiber link. (TR = Transceivers.)
4. The world picture in this respect is 600 million unloaded twisted copper wire pairs versus 6 million hybrid fiber/coaxial lines, i.e., a ratio of 100:1.
5. The annual growth of the telephone network in 1990-95 in Africa, the Arab States, Latin America and Asia Pacific was respectively 8%, 9%, 10% and 27%.
6. Around 1000 million telephone subscribers existed in the world in 2003.
Actually, varied services like video conferencing, video on demand, fast access to the Internet and interactive multimedia services require higher bandwidth than voice. Therefore new technology and signal processing are prime needs if copper is to be used to carry these services in the last miles.
xDSL (Digital Subscriber Line) is the unique technology that supports more than one service, like voice, video and data, simultaneously over a shared copper access line. DSL is established as a scalable service that provides quality service delivery and at the same time provides a cost effective local loop infrastructure. DSL appears to be an efficient solution for providing multimedia services.
In order to provide value added services, broadband services and multimedia services using existing unloaded telephone lines, communication engineers have over the last few years developed a number of techniques. These are: modem culture and xDSL technology. xDSL technology[38-40] includes: HDSL (High-bit-rate Digital Subscriber Line), ADSL (Asymmetric Digital Subscriber Line), G.lite (splitterless ADSL, also called UDSL, Universal DSL), SDSL (Symmetric DSL), VDSL (Very High Rate DSL), IDSL (ISDN DSL), RADSL (Rate-Adaptive DSL) etc.
4.2.1 Modem Versus xDSL
Using a modem, the copper wire provides data services. For example, Internet access with dial up facility is done through the modem. As of today the modem speed is 56 Kbps. The speed of 56 Kbps is not sufficient to support high quality broadband services. Moreover, modems occupy the entire 0-4 KHz bandwidth allocated to voice, thereby preventing simultaneous service of voice and data over the copper of the local loop. Within the last few years, the slogan of communication technology has become "speed is the ultimate". Technology is being developed apace to serve the demand for more and more data rate, namely from bits per second (bps) to Kbps to Mbps to Gbps and finally to Tbps, with WDM, fiber amplifiers, solitons and fiber optics communication in hand. High bit rate communication is not possible with copper twisted wire pairs. An alternative may be to use optical fiber links in the loops. This may be the long run solution, but xDSL technology was developed out of this race for faster and faster data communication using copper cable. The oldest technology for communicating digital data along the twisted pair cable of the telephone loop is modem technology. Modem speeds have grown from a few hundred bits per second (the oldest modems, for example V.21/Bell 103) to as high as 33.6 Kbps (as provided in the V.34 extended standard). With the standardization of the V.34 modem at 28.8 Kbps, it was postulated to provide low graded multimedia service to customers through POTS. But there is a big bug in modem technology. A 3 KHz voice line (local analog loop) with 30 dB signal-to-noise ratio can have a maximum bit rate of about 30 Kbps as per Shannon theory. Thus using modem technology to carry data over the analog telephone line is handicapped by the above speed constraint. This is the reason that modems sometimes do not work at the vendor's advertised speed. There is also the 56 Kbps modem technology, which can well fit carrying multimedia and Internet services to customer premises using telephone lines. But the 56 Kbps technology does not communicate data between two modems. It communicates data between a modem and a digital ISP (Internet Service Provider) system, which creates a reduced noise environment. Therefore, for modem technology transporting high bit rate services to customer premises, 56 Kbps may be the limit.
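The Shannon limit quoted above can be checked directly; a minimal sketch:

```python
import math

# Shannon capacity C = B * log2(1 + S/N) for the analog local loop
bandwidth_hz = 3_000                 # ~3 KHz voice channel
snr_db = 30                          # 30 dB signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)     # 30 dB -> 1000 in linear terms
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
# capacity_bps is roughly 29,900 bps, i.e. about 30 Kbps as the text states
```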
It was already mentioned that local loops of copper twisted pairs, designed for carrying voice signals, are not suitable for carrying high speed digital data. Local copper loops are primarily designed to carry voice traffic. Voice traffic is of relatively short duration, on an average 3 minutes; Internet traffic is on average of 30 minutes duration. The impulse noise and pulse dispersion of copper loops are the main obstacles to carrying data at high speed. But with the growing World Wide Web culture and the demand for multimedia services like video-on-demand, boosting the capacity of copper twisted pair local loops by using some technology alternative to the modem was felt essential. This gave birth to xDSL technology in general and ADSL technology in particular. It is often said that ADSL is for boosting the capacity of the installed copper and fiber optics link.
In xDSL technology, special circuits and software called a transceiver are used. The transceiver software performs the function of encoding/decoding or modulation/demodulation, by which serial binary digital data streams are converted into signals suitable for transmission through the analog copper twisted pair link. The transceiver also performs other functions like equalization, signal shaping and processing, and amplification to compensate for signal attenuation and phase distortion. The other important function performed by the transceiver is error detection and correction of data.
4.2.2 ISDN versus xDSL technology
ISDN (Integrated Services Digital Network) was developed to provide integrated and simultaneous services of voice, data and low speed video at a basic rate signal of 144 Kbps. The payload of 144 Kbps consists of two B channels of 64 Kbps each and one D channel of 16 Kbps. The term DSL was first coined for carrying the 144 Kbps of ISDN over copper loops of 18,000 ft or less. This was done with the 2B1Q four level line code. The 2B1Q code produces a baseband signal spanning from zero through the voice frequency band. In this mode of ISDN, voice is served in digital mode using PCM (Pulse Code Modulation) over a B channel at the rate of 64 Kbps; but ISDN does not support POTS (Plain Old Telephone Service). Data at the B channel rate of 64 Kbps (which is much higher than the maximum permissible rate in the modem culture, about two-fold) is served in ISDN. Why, then, go for xDSL? The reasons behind going for xDSL technology are two. First, xDSL technology provides a much higher data rate than ISDN. With the growing web culture and the demand for multimedia services, bit rates of the order of a few Mbps have become common. Services like video on demand cannot be met with the 64 Kbps, or even 64*2 = 128 Kbps, of ISDN technology. Second, ADSL and VDSL differ from ISDN in that, unlike ISDN, they retain the service of POTS while providing high rate data service.
4.2.3 ADSL technology
ADSL technology has become the most appropriate of all the xDSL technologies. HDSL is a variant of ISDN technology which provides data communication at a bit rate of about 784 Kbps (T1 carrier) over twisted copper pair loops up to 12,000 ft. Like ISDN, HDSL uses the 2B1Q line code.
ADSL technology was developed mainly to provide multimedia services like video-on-demand and the growing Web service. The characteristics of these two services are quite asymmetric in nature. For Web accessing and/or interactive video, two-way communication is essential. Of the two directions, downstream (towards the subscriber) communication requires much higher bandwidth than upstream (towards the central exchange/
office) communication. This is because, typically, a Web surfer is more interested in downloading than in the short requests sent on the uplink. ADSL technology[37-43] offers a higher data rate of, say, 6 Mbps for downstream data and a lower data payload of, say, 640 Kbps for uplink data using the installed copper telephone loop. In addition, ADSL provides POTS or conventional voice service. As the service nature is asymmetric, SDSL technology lost out to ADSL technology.
Due to the asymmetric nature of ADSL technology, it provides an interesting technological benefit. When many wires are squeezed together in a cable, crosstalk is inevitable due to signal overlapping. In the case of downstream data, the signal amplitudes are the same because they all originate from the exchange. Due to the same amplitude, there is no destruction of a weak signal by a strong one. Uplink data, however, may originate from different customer premises at different locations; therefore the signals reaching through the wire pairs of a cable may vary greatly in amplitude. But as crosstalk increases with frequency, the problem is tackled by limiting the upstream data and keeping it at the low end of the spectrum. This is exactly what is done in ADSL.
ADSL technology increases the capacity of the installed copper telephone link to 6 Mbps. In the technology, data traffic and voice are carried simultaneously. It carries data in digital form and voice in analog form, unlike ISDN which carries both in digital form.
ADSL System
The POTS splitter/filter preserves the 4 KHz spectrum for the POTS service, and prevents hampering of the POTS service due to any fault of the ADSL equipment. The remaining available bandwidth above about 10 KHz is used for ADSL data communication, at a rate of about 6 bits per second for every hertz of available bandwidth. Fig. (16) portrays the operation of the ADSL system. The transceiver software of ADSL uses an advanced modulation technique known as discrete multitone (DMT) technology. ANSI T1E1.4 has standardized DMT as the line code for ADSL. DMT divides the bandwidth from about 10 KHz to 1.1 MHz into 256 independent subchannels, each about 4 KHz wide. Each of the subchannels, referred to as a tone, is QAM modulated on a separate carrier. The carrier frequencies are multiples of a basic frequency of 4.3125 KHz. DMT is used in ADSL technology because it has the unique ability to overcome the typical noise and interference in the local loop twisted wire pair cable.
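The tone plan just described can be sketched as follows (indexing the tones from 1 is an assumption of this sketch, since tone 0 would sit at DC):

```python
# DMT tone plan as described: up to 256 subchannels whose carriers sit at
# multiples of the basic frequency 4.3125 KHz
TONE_SPACING_HZ = 4312.5
carriers_hz = [n * TONE_SPACING_HZ for n in range(1, 256)]

first_tone = carriers_hz[0]     # 4312.5 Hz
last_tone = carriers_hz[-1]     # 1,099,687.5 Hz, close to the ~1.1 MHz band edge
```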
The ADSL frequency spectrum is shown in Fig. (17). The available spectrum ranges from about 20 KHz to 1.1 MHz. The lowest 20 KHz is reserved for voice services under normal POTS. To perform bidirectional communication, ADSL modems divide the bandwidth in one of two ways: (1) FDM, where non-overlapping bands are used separately for the upstream and downstream links; (2) echo cancellation, where overlapping bands are used for both the upstream and the downstream, and separation is made by a local echo cancellation technique. The echo cancellation technique is bandwidth efficient. Advanced forward error correction techniques are used to tolerate error bursts as long as 500 msec.
A comparison of the different DSL technologies is given in Table (20). ADSL is about 400 times faster than the most sophisticated modems and 60 or more times faster than ISDN.
However, ADSL downstream speeds depend on the loop distance, as shown in Table (21). The typical coverage distance is about 4 km; over greater distances, natural degradation of the data rate occurs. To provide services to customers beyond 4 km, an embedded rate adaptive mechanism may be used.
Fig. 16: Operation of the ADSL system. At the customer premises, the computer connects through an ADSL modem and a POTS splitter to the copper local loop; at the local switching exchange, another POTS splitter and processing circuit separate the voice and data paths into the network.

Fig. 17: The ADSL frequency spectrum: the POTS band up to 4 KHz, the upstream band from about 30 KHz to 138 KHz, and the downstream band up to 1.104 MHz.
The arrangement may be coupled with the growing ATM (Asynchronous Transfer Mode) network, which is predicted to be a network for multimedia services. Recent advances in ADSL technology promise to transfer data at rates as high as 50 Mbps to customers over a short distance of twisted copper pair from the FTTC. This advancement is termed VDSL.
ADSL technology and WDM technology support the predicted cyclic nature of analog-digital transmission.
Table 20: Comparison of DSL technologies

Service / Network    Data rate
Voice modem          28.8-56 kbps
ISDN                 64-128 kbps
ADSL                 1.544-8.448 Mbps downstream; 16-640 kbps upstream
VDSL                 12.96-55.2 Mbps
HDSL                 784, 1544, 2048 kbps
IDSL                 128 kbps
RADSL                800-2000 kbps downstream; 64-200 kbps upstream
Table 21: ADSL downstream speed versus loop distance

Loop distance (ft)    Speed in Mbps
18,000                1.544
16,000                2.048
12,000                6.312
9,000                 8.448
The major applications of ADSL technology are: (1) an information highway for the wide community, (2) high speed Internet access, (3) distance learning by the process of video conferencing etc., (4) video on demand, and (5) video telephony.
ADSL was standardized by the ITU-T in recommendation G.992.1 in 1999. The splitterless ADSL, known as ADSL lite, was recommended in G.992.2. In ADSL lite the use of a splitter at the customer's premises is avoided, at the cost of lower transfer capacities of 1.5 Mbps and 512 Kbps for downstream and upstream respectively.
4.2.4 VDSL Technology
Very high speed or very high rate DSL technology is the most recent and important addition to the DSL technologies. The technology is believed to provide the bridge between today's existing copper infrastructure and the near future's all-fiber infrastructure. VDSL modems[40-43] are placed at the customer's premises and at the end of the fiber installation. The end of the fiber installation is the neighborhood or exchange point where the fiber link terminates. With the technology, very high speeds are possible on the copper link: as high as 15 Mbps total in both directions over a span of about 1.5 km between the fiber end and the customer's premises, and 52 Mbps over a short distance of 300 m or less. VDSL is about 100 times faster than normal modems. The proposed VDSL can use up to 30 MHz of bandwidth, compared to 1.104 MHz for ADSL and 300, 580 or 1100 kHz for HDSL. VDSL supports two service classes: Asymmetric, known as Class I service, and Symmetric, known as Class II service. The asymmetric service type is compatible with ADSL technology and primarily aims to serve residential customers. The symmetric service aims to serve business purposes. VDSL is supposed to provide broadband services to both business and residential communities on the existing copper infrastructure. Data rates of VDSL are given in Table (22).
VDSL system
VDSL is aimed to be coupled with FTTC (Fiber To The Curb) and FTTB (Fiber To The Building)/FTTH (Fiber To The Home), the technologies that use fiber in part of the local loop. In that context the VDSL reference model is shown in Fig. (18).
Table 22: Typical VDSL data rates

Service class   Upstream rate (Mbps)   Downstream rate (Mbps)   Spanning distance (m)
Asymmetric      6.4                    52                       300
Asymmetric      3.2                    26                       900
Asymmetric      1.3                    13                       1500
Symmetric       26                     26                       300
Symmetric       13                     13                       900
Symmetric       6.5                    6.5                      1500
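For a sense of scale, a minimal sketch (illustrative figures only; it assumes the idealized line rates above with no protocol overhead) comparing the ADSL Lite downstream rate with short-reach VDSL:

```python
def transfer_time_seconds(size_megabytes: float, rate_mbps: float) -> float:
    """Time to move a file at a given line rate, ignoring protocol overhead."""
    size_megabits = size_megabytes * 8  # 1 byte = 8 bits
    return size_megabits / rate_mbps

# A 700 MB file over ADSL Lite downstream (1.5 Mbps) vs. 300 m VDSL (52 Mbps)
adsl_lite = transfer_time_seconds(700, 1.5)   # over an hour
vdsl_300m = transfer_time_seconds(700, 52)    # under two minutes
print(round(adsl_lite), round(vdsl_300m))
```

The roughly 35-fold gap in transfer time mirrors the ratio of the two line rates.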
DHARM
N-BHUNIA/BHU1-2.PM5
75
Fig. 18: VDSL reference model. (a) System reference: a VDSL transceiver at the NT (Network Termination) in the customer premises connects over the copper link to a VDSL transceiver at the ONU (Optical Network Unit), which connects over the fiber link to the central office/exchange. (b) Splitters at both ends of the copper wire separate the PSTN/ISDN service from the VDSL signal at the network interface; LT = Line Termination.
is believed to prosper with the general human trend from "nice to have" to "value to have" to
"essential to have". With multimedia, a society of plug and play, look and feel, and point and
click shall emerge. In the near future we shall have multimedia cities and centres. It is often
said that in the near future multimedia shall be the rule and monomedia the exception.
Interactive multimedia is a service which provides simultaneous access, dissemination,
transportation and processing of more than one information service, like voice, video and data,
in the interactive mode and in a real-time environment. Multimedia is to integrate three
communication worlds, namely the telephone world, the data world and the video/TV world,
into a single communication world. A multimedia application shall comprise more than one
information type, namely the non-real-time services of data, images, text and graphics, and the
real-time services of voice and video. The future world of information and communication shall
converge on multimedia applications and shall provide comfort, competition, mobility, efficiency
and flexibility. As per Fred T. Hofstetter, multimedia is the use of a computer to present and
combine text, graphics, audio and video with links and tools that let the user navigate, interact,
create and communicate. Technically, multimedia shall be a service of services, and
non-technically a community of communities. Multimedia shall enable people to communicate and
access at any time and anywhere, at reasonable cost, with acceptable quality and manageability.
The location of man, material and machine resources shall be irrelevant in business in the era of
multimedia. It is said that it makes no sense to ship atoms when you can ship bits. Virtual
reality with virtual presence in virtual worlds, virtual cities, business centers, virtual schools
and virtual rooms will emerge in the near future. For example, virtual reality at short
notice allows collaboration between changing partners on specific tasks, sitting at virtual writing
tables without real offices and addresses other than the network. Transactions in this enhanced
telecooperative working environment would be electronic analogies of the normal world. Faster
work flow, comprehensive 24-hour service, remote operation and maintenance, easier
troubleshooting, lifelong learning and leisure-time activities, less travel, less cost and more fun
shall be the important attractions of the multimedia world. Multimedia communications provide
chicken-and-egg benefits to the information world, and have acceptance at all levels: (1) contact
acceptance, viz. service availability and user interface, (2) economic acceptance, viz. less cost,
more benefits, (3) content acceptance, viz. quality, and (4) social acceptance, viz. desirability
and privacy.
5.1 Standards
A great challenge is to standardize broadband services and systems for the purpose of deployment.
In fact, the deployment of seamless integrated mobile broadband services will greatly benefit
from the standardization process [48]. In order to define any standard, the International
Telecommunication Union (ITU) usually forms a study group. This study group submits
recommendations for standards pertaining to its assigned functions. A list of the different study
groups made by the ITU for 1997-2000, along with their assigned functions, is given in Table (23).
SG9 and SG16 respectively deal with television and sound transmission, and multimedia services
and systems.
The low bit-rate (kilobits per second, kbps) audio coding standards specified by the ITU for
multimedia applications are listed in Table (24). The standards G.71x and G.72x are mainly
used in different multimedia applications. MPEG-1 (Moving Picture Experts Group) audio
coding/decoding is applied in the H.310 multimedia conferencing standard.
Table 23: ITU study groups for 1997-2000 and their assigned functions

SG1    Service definition
SG2    Network and service operation
SG3    Tariff and accounting principles, economic and policy issues
SG4    Telecommunication management network (TMN) and network maintenance issues
SG5    Protection and policies against electromagnetic environmental effects
SG6    Outside plant
SG7    Data networks and open system communications
SG8    Features and characteristics of telematic systems
SG9    Television and sound transmission
SG10   Software aspects of telecommunication systems
SG11   Signalling and protocols
SG12   End-to-end transmission performance of networks and terminals
SG13   Network aspects in general
SG14   Modems and transmission techniques
SG15   Transport networks, systems and equipment
SG16   Multimedia services and systems
Table 24: Standards of low bit-rate audio coding for multimedia communication

Standard                     Bit rate     Frame size   Algorithmic      Complexity
                             (kbps)       (ms)         delay (ms)
G.723.1                      5.3          30           37.5             2.2k
G.723.1                      6.3          30           37.5             2.2k
G.729.A                      8            10           15               2k
G.729                        8            10           15               2.7k
G.711 (PCM/POTS)             56
G.722 (broadcast quality)    48-64
G.723 (low bit-rate POTS)    5-6
G.726                        32
G.728                        16
MPEG-1 layers (CD audio)     32-256
Different video coding standards for multimedia services are listed in Table (25), along
with their bit rates and applications. H.26x standards are used for videoconferencing and MPEG-1 is
used for video-on-demand. H.26x standards are mostly used in multimedia videoconferencing
standards like H.320, H.324, H.323 and H.310.
Table 25: Video coding standards for multimedia services

Standard   Bit rate             Application
H.261      64 kbps-1.92 Mbps    Videoconferencing (N-ISDN, n×64)
H.263      15 kbps-34 kbps      Low-rate videoconferencing
MPEG-1     1.2 Mbps-2 Mbps      Video on demand
MPEG-2     3-15 Mbps            TV-quality and diagnostic video on demand
Table 26: Multimedia conferencing standards (network; video coding; audio coding; data; multiplexing; control; application)

H.320 (1990): N-ISDN; H.261; G.711, G.722, G.728; T.120; H.221; H.242; multimedia conferencing with G.711.
H.324 (1996): PSTN/GSTN/POTS; H.263, H.261; G.723.1, G.729; T.120; H.223; H.245; multimedia conferencing with H.263 and G.723.1.
H.323 (1996): LAN/internets (packet switching); H.261, H.263; G.711, G.722, G.728, G.723.1, G.729; T.120; H.225.0; H.245; multimedia conferencing with H.261, G.711.
H.322: Isoethernet; H.261; G.711, G.722, G.728; T.120; H.221; H.242; multimedia conferencing.
H.321: B-ISDN/ATM; H.261; G.711, G.722, G.728; T.120; H.224; H.242; multimedia conferencing.
H.310: B-ISDN/ATM; H.262, MPEG-1, MPEG-2; G.711, G.722, G.728; T.120; H.222.0, H.222.1; H.245; multimedia conferencing with H.262, MPEG-1, H.222.0.
Table (26) is a comprehensive list of the different multimedia standards, their network platforms,
video coding, audio coding, data standard, multiplexing standard, control standard and
applications. The standard H.324 may be used to provide videoconferencing over the
existing telephone network. H.323 may be used for the same over a LAN (local area network),
H.320 may be used over N-ISDN using n×64 kbps channels, and H.310 may be used over
B-ISDN/ATM. The table also indicates the user's terminal requirements for the different multimedia standards.
BOX 4
The Huffman code is a compression code designed by David A. Huffman in 1952. It is a simple
improvement over the Shannon-Fano code. To illustrate Huffman coding, suppose we
have an original body of data that uses only the source triples shown in the table to represent
some message. The probability of occurrence of each source triple in the message is also shown,
together with the corresponding compressed codes under Huffman coding. The average size
of the compressed code under Huffman coding becomes: 2 × 0.25 + 2 × 0.25 + 3 × 0.125 +
3 × 0.125 + 4 × (4 × 0.0625) = 2.75 bits per code, whereas the code size of the original source code is 3 bits per code.
Source triple   Probability of occurrence   Corresponding compressed word
000             0.25                        11
001             0.25                        10
010             0.125                       011
011             0.125                       010
100             0.0625                      0011
101             0.0625                      0010
110             0.0625                      0001
111             0.0625                      0000
There are several disadvantages to Huffman coding. First, to design the code, one must
know the probability of occurrence of each code in the original block of data. What happens if
the probabilities are not known a priori? And what happens if the probability pattern changes
over time? Second, the Huffman code is not unique in nature. The code is also a block code. But the
redundancy under this code is either minimized or optimized.
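The construction above can be sketched in a few lines of Python (a minimal illustration of Huffman's algorithm, not tuned for large alphabets): repeatedly merge the two least probable subtrees, prefixing "0" and "1" to their codewords.

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a binary Huffman code for a {symbol: probability} map."""
    tie = count()  # tie-breaker so the heap never compares code dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

# The source triples and probabilities of the table in BOX 4
probs = {"000": 0.25, "001": 0.25, "010": 0.125, "011": 0.125,
         "100": 0.0625, "101": 0.0625, "110": 0.0625, "111": 0.0625}
code = huffman_code(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)
print(avg)  # 2.75 bits per triple, against 3 bits uncoded
```

For the dyadic probabilities of the table, every codeword length equals minus log2 of its probability, so the 2.75-bit average is exact.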
than 5 years, and may even be 7-10 years in PCS/PCN. India is lagging behind its neighbors like
Singapore, Taiwan and Hong Kong.
Table 27: Frequency bands of different PCN services

Service              Frequency band
Cellular             900-940 MHz; 1850-1890 MHz; 1930-1970 MHz; 2130-2200 MHz
CT-2                 864/944 MHz
Cordless             46/49 MHz
Satellite/VSAT/MSS   C band
7. FROM 2G TO 3G
2G (second generation) technology for mobile connection started around the 1990s and revolved
around GSM cellular communication, which is mainly for voice communication. 3G was
then expected to be deployed around 2000 and was targeted towards:
- implementing anywhere, any time mobile connection with low-cost, flexible handheld devices;
- implementing wireless data access, particularly wireless Internet connection, motivated by the exponential growth of Internet access; users want Internet access anywhere and anytime with handheld devices;
- implementing high data rates of 2 Mbps, whereas the previous GSM or 2G offered 10 to 50 kbps;
- implementing high-speed multimedia or broadband services, causing a shift from voice-oriented services to Internet access (both data and voice, particularly with VoIP technology), video, music, graphics and other multimedia services;
- use of spectrum around 2 GHz, whereas the spectrum allocation for 2G was 800/900 MHz;
- global roaming to support global communication;
- a flexible network to support existing and future changing requirements;
- mobile multimedia services able to transmit data, voice, video, image etc. over a variety of network modes like point-to-point, point-to-multipoint, broadcast, symmetric and asymmetric.
The key benefits of 3G will be the delivery of broadband information direct to users and
global access with a unified single radio interface.
Several major challenges must be overcome to implement 3G: wireless Internet for an
exponentially growing user base will be difficult to implement until IPv6 is implemented; global
roaming with a single number as proposed in PCN is yet to be realized; fixed-access technologies
like ADSL, with high data rates of up to 12 Mbps, and IEEE 802.11b WLAN in the wireless local
data interface have become competitors; and low-cost flexible devices are yet to mature.
7.1 Beyond 3G
Mobile comprehensive broadband integrated communication will step forward into 4G (fourth
generation) all-mobile services and communication. The 4G technologies will be a migration
from earlier generations of mobile services, aiming to overcome the limits of boundaries and
achieve total integration. The evolutionary approach towards a wireless information age proceeds
as in Fig. (19) [44,47,59], in step with the progress of other technologies. The key
characteristics of 4G systems will be: higher transmission capacities per user, a larger frequency
band, higher traffic densities, and integrated services. The technical challenges behind
the expected technology lie with the associated technologies discussed earlier.
Fig. 19: PCN evolution/migration in step with other technologies. Mobile track: 1G analog cellular → 2G (GSM, PDC, IS-95) → 3G (UMTS, CDMA). WLAN track: 802.11b WLAN → 802.11a WLAN → wireless/mobile local-area integration. Fixed track: circuit-switched networks → wired Internet → broadband Internet/DSL → broadband FTTH (Fiber to the Home)/fiber to business. All tracks converge on 4G: total wireless, seamless coverage and integration, anytime and anywhere communication.
The motivations behind aiming at a 4G information society are many: high-speed transmission, next-generation Internet support (IPv6, VoIP, Mobile IP), high capacity, seamless integrated services and coverage, utilization of higher frequencies, lower system cost, seamless personal mobility (LEO), adoption and integration of fixed and wireless support (ADSL/VDSL/
WLL/FSO), mobile multimedia (standards), efficient spectrum use, QoS, and flexible,
reconfigurable, end-to-end IP networks.
The convergence of local fixed wired networks, including wireless home or local networks,
with broadband fixed networks and the coming ad hoc wireless networks will shape how we
communicate in the next decades, which may include [49,60]: complete unification and integration
of each and every service, a single communication number for each and every service, and freedom
to communicate any time, anywhere. All these provisions are required to be met with simplicity,
cost effectiveness, reliability and flexibility. The problems to be solved in achieving the expected
results are: lack of bandwidth, lack of standardization, high error probability of wireless links,
multiplicity of different systems and operators, and cost reduction. These problems are being
addressed. Research in tackling the high error probability of wireless links has progressed in the
expected directions [50-53] with the BEC (Backward Error Control) technique. Research [55] in
this context on optimizing Internet access over IEEE 802.11b has demonstrated gains with a
frame-level FEC (Forward Error Correction) technique.
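The idea behind frame-level FEC can be sketched as follows (a hedged illustration of the general XOR-parity approach, not the specific scheme of [55]): the sender appends one parity frame to a group of data frames, and the receiver can rebuild any single frame lost on the error-prone wireless link without retransmission.

```python
def xor_frames(frames):
    """Bytewise XOR of equal-length frames."""
    out = bytearray(len(frames[0]))
    for frame in frames:
        for i, byte in enumerate(frame):
            out[i] ^= byte
    return bytes(out)

def make_parity(data_frames):
    """Sender side: one parity frame protects a group of data frames."""
    return xor_frames(data_frames)

def recover(received, parity):
    """Receiver side: rebuild the single missing frame (None) from the rest."""
    missing = received.index(missing_marker := None)
    present = [f for f in received if f is not missing_marker]
    rebuilt = xor_frames(present + [parity])  # XOR of all others = lost frame
    return received[:missing] + [rebuilt] + received[missing + 1:]

frames = [b"abcd", b"efgh", b"ijkl"]
parity = make_parity(frames)
damaged = [b"abcd", None, b"ijkl"]          # frame 1 lost on the wireless link
assert recover(damaged, parity) == frames   # recovered without retransmission
```

The trade-off is the classic FEC one: a fixed bandwidth overhead (one parity frame per group) is paid up front to avoid retransmission delay on a lossy link.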
8.1 E-Business
E-business refers to the operation of business objectives through and using IT. It may also
be defined as business activities over a digital infrastructure, or doing business over wires. As per
Colin, Director of the integration division of CNS, UK, e-business refers to the issue of
supply chain integration. An ideal scenario is when a customer places an order. All of the
suppliers and agents involved in the transaction are contacted electronically. Every system
involved in the supply and delivery of that product is linked to every other system, hence the talk
of "zero latency" transactions, whereby there is no waiting for someone to do something because
everything happens at the speed of light. A report says, "E-business relates to how you and
your customers place orders and ensure efficient delivery. E-commerce is the financial aspect of
doing business. Both aspects will affect your operations sooner or later."
Economists usually identify four types of e-business:
Business-to-business (B2B). This refers to transactions between one business house
and another. For example, the transactions between a large organization and its suppliers
fall in this category. B2B is the most common business model. One example of B2B
e-business is MetalSite.com.
Business-to-customer (B2C). This refers to online retail activities, for example
software, journals and books sold over the Internet using web sites.
Customer-to-business (C2B). An example of this is the booking of railway or air
tickets on any agent's computer that has a network or Internet connection. C2B is just
the reverse of B2C.
Customer-to-customer (C2C). Online auction is the best example of this type of
transaction. One example is eBay.com.
Currently e-business is mostly confined to B2B. Other areas of business are of course
coming up.
8.2 E-Commerce
E-commerce is basically financial transaction via computer networks, between people and
organizations. E-commerce is the financial part of e-business. Harvard academic Jeffrey Rayport
defined e-commerce as selling real products for real money. Eddie Rabinovitch observed: "Not
surprisingly, the expected pay off of e-commerce projects is, of course, the bottom line: money.
However, despite the prevailing notion of access to global markets as the most important
competitive advantage enabled by e-commerce, most companies expect from e-commerce ways to
reduce spending rather than increase profits. Let's for a moment think about the rationale of
the previous statement, which is also going to answer another e-commerce question: why is the
business-to-business (B2B) market considered by many experts several magnitudes more
important than business-to-consumer (B2C)? Well, it's probably easier to convince a CEO to
spend $100,000 on a solution that will demonstrably save $1 million than to spend the same
amount on a solution that might make $1 million... Making money on the Internet is still
quite dicey. But it's not too difficult to demonstrate that B2B e-commerce will save money by
improving efficiency and therefore reducing expenses for transactions between companies."
Over time the gap between the human axis and technology is reducing. Therefore KM is an action
to achieve goals along the path of knowledge with least action, both mental and physical; or,
otherwise, to have the management so far done absolutely by man done by technology, in order to
go along a path of least action, the path of nature, by expanding intelligent technologies like
brainy computers and personal communications.
Swamiji made the following comments on nature, man and knowledge:
Nature with its infinite power is only a machine.
All our knowledge is based upon experience. All human knowledge proceeds out of experience; we cannot know anything except by experience.
Man is man so long as he is struggling to rise above nature, and this nature is both
internal and external.
These observations of Swami Vivekananda imply that man earns knowledge from
experience, and he applies his knowledge to become the creator of nature, which is not impossible
so long as nature is assumed to be a machine. It is pertinent to mention here that Tagore said that
everything in nature follows a rule. This supplements my view that KM is a step of the human
effort whereby he attempts to be his own creator.
2. Lockean inquiry systems, which are based on consensual agreement and aim to reduce
the equivocality embedded in diverse interpretations of the world view. Example:
a selection board meeting for a cricket team.
3. Kantian inquiry systems, which attempt to give multiple explicit views of a
complementary nature and are best suited for moderately ill-structured problems.
Example: the result of a final match.
4. Hegelian inquiry systems, which are based on a synthesis of multiple, completely
antithetical representations that are characterized by intense conflict because of their
contrary underlying assumptions. Example: which party is to form the government when
no party has got a majority in an Indian parliamentary election!
KM may have a significant role in Lockean and Leibnizian systems as they are
suited for stable and predictable organizational environments, but KM will have
limitations in applying to the other two systems as they are better suited for wicked
environments. Wicked environments are characterized by discontinuous change;
since information technology has a trend to create wicked environments, it is not yet
clear how KM will suit the information-technology-driven present and future world.
5. One of the main features of KM is the sharing of knowledge for improving business
processes and activities. The expectations and results of knowledge sharing, however,
particularly in a competitive environment, can in many cases cause havoc. In
one final examination the topper of the class and the second topper sat side by side.
The topper wanted to check the answer to a problem, which he had correctly worked out
as, say, 60. When the topper asked the second topper, the second topper, although he had
also got 60 as the answer, told him the answer was 50 just to confuse him. The confused
topper scrapped that answer and tried another, but time ran out before he could
complete it. Consequently, in the result the topper went down to the second position
and the second topper moved up to the first position. This shows the possible
counterproductive consequence of knowledge sharing, particularly in a
competitive business environment. This phenomenon of knowledge sharing may be
called the calamity of knowledge sharing. The calamity may also occur when substandard
knowledge is shared.
6. The more serious conflict of knowledge sharing lies in its very definition. If knowledge
is power, if knowledge is saleable, and if knowledge brings prestige, power and
authority, why should one share his or her knowledge? The very basics of knowledge
do not support knowledge sharing. This being the case, KM itself lies under a
cover of confusion. Thomas H. Davenport described [49] this phenomenon: sharing
and using knowledge are often unnatural acts. He felt that sharing and usage have
to be motivated through time-honored techniques, performance evaluation and
compensation for example. Lotus Development, now a division of IBM, devotes 25% of the
total performance evaluation of its customer support workers to knowledge sharing.
Buckman Laboratories recognizes its 100 top knowledge sharers with an annual
conference at a resort. ABB evaluates managers based not only on the results of their
decisions, but also on the knowledge and information applied in the decision-making
process. Another problem of the same nature also exists in organizations. An
employee who is an expert in an obsolete technology may not like to share knowledge
with an expert of the new generation for several reasons, like ego, inferiority complex, and
fear of being outclassed. This phenomenon can be compared by analogy with an electric
circuit, as illustrated in Fig. (20). The organization likes to attain a knowledge
level, K. It has a storage capacity, C. But the organization offers a resistance. This
resistance delays the organization in attaining the knowledge level K. Until and unless
the offered resistance is removed by an organizational process of transformation, the
conflict will exist and resist the implementation of KM. The organizational resistance
(R) restricts the flow of knowledge.
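The circuit analogy of Fig. (20) can be made concrete with the standard RC charging law (a sketch under the stated analogy; K, R, C and t are in notional units): the attained level approaches K as K(1 - e^(-t/RC)), so a larger organizational resistance R means a slower approach to K.

```python
import math

def knowledge_level(K, R, C, t):
    """RC-charging analogy: level attained after time t against resistance R."""
    return K * (1 - math.exp(-t / (R * C)))

# Higher organizational resistance -> lower knowledge level at the same time t
open_org = knowledge_level(K=100, R=1, C=1, t=2)    # ~86.5
rigid_org = knowledge_level(K=100, R=10, C=1, t=2)  # ~18.1
print(round(open_org, 1), round(rigid_org, 1))
```

As in the circuit, the target level K is reached only asymptotically; removing resistance (reducing R) shortens the time constant RC.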
7. KM involves two words: knowledge and management. Which is for which, or which will
rule the other, is a big question. Does KM mean the management of the organization by
knowledge, or does it mean the management of the knowledge of the organization, or a
hybrid? This confusion is pictorially illustrated in Fig. (21).
8. Lester C. Thurow documented a factual conflict existing in the USA. Information
technology has been projected as high-productivity in nature, but Thurow's studies
claimed that financial services in the United States have had negative productivity
growth for the last ten years; every year productivity falls by about 1 percent. His
studies on office automation show that offices still use paper in the same ways as for
the last 500 years. The paperless or automated office still remains a far cry.
Fig. 21: The confusion of KM: knowledge, or management, or both?
applications. Thus a logical speculation is: what is next to the knowledge age? One incident
reported in Indian history may throw some light on it. The great Akbar once asked his
naba-ratnas: what moves fastest? When eight of the nine ratnas pointed towards the royal horse,
the ninth ratna, Birbal, got an edge over the others by saying, "Our mind, Sir." We at least find
one technology area where the trend is to achieve something like the speed of mind, and this is
nothing but communication. From the trend of communication we have no hesitation (and I am
sure all will agree) in concluding that it is the speed of communication that is growing by leaps
and bounds. We have seen the age of kilobits per second and megabits per second, are presently
in the age of gigabits per second, and are seeing a tomorrow of terabits per second. This is an
indication that after the knowledge age, the next age may be the age of mind, or the age of
consciousness. The universe is made of non-living and living things. Their comparison in terms
of level of intelligence, consciousness and communication power is made in Table (28). S. Ranade,
a great admirer of Aurobindo, said [65]: "Knowledge by identity will change current science
completely. Particularly physics and biology will see radical changes. The wave-particle duality
and the mass-energy equivalence will be seen in the light of the more basic substance of
consciousness", and then he defined [65]: "consciousness is awareness, awareness of yourself
and of others. In the human being both exist. In the animal, there is only awareness of others,
not awareness of itself; it is a more limited awareness. In plants the awareness is even less. In
the crystal it is still less, but nevertheless it is there." If the crystal has awareness, it is surely
possible that the next century will be the century of consciousness and that you can focus your
body consciousness on a point outside the body. Will the will power or mind power of Iswar
Patuli, depicted by the great Bengali novelist Sarat Chandra, prevail upon society, organization,
culture and economy at the fragile end of the knowledge age?
The Mother, in the historical declaration [66] made on April 24, 1956, said: "The manifestation
of the supramental upon earth is no more a promise but a living fact, a reality. It is at work
here, and one day will come when the most blind, the most unconscious and even the most
unwilling shall be obliged to recognize it." Perhaps that will be in the age of consciousness that
is next to the knowledge age. Corroborative views on this prediction form an important piece of
research found in [68].
Table 28: Comparison of different entities in the universe in terms of sense and communication. Entities compared: non-living things, living things, animals and human beings.
their universal appeal shall ever remain for the noble human society, but today they are not all
in all. Privatization and universalization shall be the other social partners with them. This is a
wave brought forward by different emerging technologies, which are often interactive,
interdependent and diffusive. Information technology, computers, communication,
microelectronics, genetic engineering, biotechnology and space technology are a few noteworthy
names. The developing world in general lags far behind the modern technological evolutions and
revolutions. Besides, the developing countries hardly have the capital to deal with such fast, rapid
and perpetual changes. The developing world in general is labor intensive rather than capital
intensive. Therefore, debate on the ability, suitability and acceptability of liberalization is going
on, and will continue for some more time, in the developing countries. Initial mismatch and
inertia are parts of life, and the fact is that society never denies mobility. Society ultimately
accepts technological changes which might have been out of touch with that society even a few
years back. The irony is that such delayed acceptance is done in quite haphazard and irregular
ways. What has happened to the deployment of computers in government sectors in India today
is anybody's guess. This is a lesson that the third world always forgets. Consequently the third
world continues to lag behind the international trend, and loses money, as there is hardly any
planning for technological upgradation and applications. We can cite a figure to justify this point.
Telecommunications lines in India are 66% digitized, whereas the figures for Brazil and Hungary
are respectively 35.7% and 41%. But the fault figures are 218 faults per 100 lines in India against
2 faults per 100 lines in the USA and Japan. In Table (29), the percentage shares of information
technology for America, Europe and Asia, and their shares of e-commerce buyers, are shown. It
is noticed that in both terms the position of Asia is very poor.
Table 29: % share of IT and e-commerce buyers

Region     % share of information     % share of e-commerce
           technology in 1995         buyers in 1998
America    45.5                       72.57
Europe     30.9                       22.8
Asia       23.7                       4.6
Better is not the sole dimension of competitive advantage; faster is an equally important
dimension. Thus it will be a sound strategy for the developing countries to take part in
globalization without any further loss of time, but with intelligent, selective, judicious and
strategic application of the globalization process, and uses of and innovation with a few
technologies. Analyzing the problems of the third world in depth, Dr. Colombo observed: "The
ability of developing countries to derive all the benefits of the new technologies faces one
stumbling block right from the start. Although rapidly and seemingly effortlessly permeating
the economic and production systems of the world, these technologies are not available off the
peg. They have to be absorbed, metabolized, mastered and controlled. Their application calls for
a pre-existing capability to insert new ideas, new practices, and new elements into a flexible
system. This does not simply exist in the vast majority of the developing countries. Furthermore,
it is essential that as the new technologies are introduced into the socio-economic fabric of the
third world, they do not impair or destroy existing local cultures... we must equally concern
ourselves with safeguarding the richness of the world cultures, mankind's cultural genoma."
Despite these problems it is strongly believed that the intelligent application of the new
technologies in the developing countries can indeed speed up the process of economic growth.
security of rural people has been established on a solid footing. The disparity in income among
the rural people has decreased considerably. An all-around development of rural people and
society has been noticed. This development is due to the land reforms and the barga system
sincerely implemented by the Left-front government of W.B. in their 25 years of rule.
By the process of land reforms and the barga system, the agricultural workers or farmers
are given the confidence that they will never be thrown out of the work and land they cultivate.
This confidence has generated among farmers a greater sense of belonging and sincerity in
their work. It has reduced the victimization and injustice meted out to them earlier, in terms of
payment or non-payment, by the landlords; which in turn has caused agricultural productivity
to increase, the loss of agricultural working days to decrease, and agricultural disputes between
labor and owner to lessen. The barga solution is our own, and is not something copied from the
developed nations.
The economic and productivity failures in all sectors, namely agriculture, industry and
banking, are mainly due to disputes between labor and owners. Thus if such disputes in the
agriculture sector are overcome by the barga system, it is logically extensible to other sectors
like industry and banking too. In this paper we propose an industrial barga system for
Indian industries. We have achieved something unique by our own system of barga in the
agriculture sector. Similarly, that the industrial barga does not prevail elsewhere does not mean
it is inappropriate for India. In the Indian environment, where economic disparity is huge, where
labor is cheap and where, consequently, victimization of labor is easy, the industrial barga will
be the right solution.
The proposed industrial barga aims to share the production and profit of industries among
labor, management and owner, as in the agricultural barga. There may be several means of
implementation. The industrial barga will not be easy to implement.
With the IT age, the gap is easier to bridge. What the hour needs is the strategy and
goodwill for application in the right perspective.
11. CONCLUSIONS
The goals of both the near and the far future of IT are shown in Fig. (22). In the field of computers,
the major challenge of the 21st century will be the design of the bio/brainy computer. Basic
science has been searching, since the days its journey began, for the design, if any, behind the
universe, as well as the theory of the birth of the universe, and possibly a new Theory of
Everything, as Prof. Hawking's prediction, made in 1980, of achieving his famous Theory of
Everything by the end of the 20th century has proved wrong. The debate on the deterministic vs.
probabilistic nature of the universe, or on whether nature is a machine or not, is still oscillating.
In such a scenario, the debate on the possibility of designing a brainy computer can only be a
logical extrapolation, and will definitely take a long time to answer. On the other hand, future
all-wireless, anywhere and any time communication is a relatively non-debatable issue and is
expected to be achieved, although not without overcoming many obstacles. Even a small
deployment like IEEE 802.11-based WLAN faces many obstacles [69]. Other than systems and
standards, two inherent problems of future communication need to be properly addressed: the
higher error probability of all-wireless links, and information security. Whereas error control is
basically a technical issue, the security of information has several dimensions. The requirement
of security for durable applications of IT, namely e-commerce and e-business, was illustrated
earlier. It is reported [70] that the increasing frequency of malicious computer attacks on
government agencies and Internet business has caused severe economic waste and unique social
threats. As per the second law of thermodynamics, an open system cannot bring order without
making its surroundings disordered.
The security measures that bring order to the information process inevitably bring disorder to its surroundings, which may in turn itself become the source of hackers or security breakers. This is a manifestation of chaos and complexity. Of course, men create problems only to solve them afterwards. Does nature like to see man dancing between problems and solutions? Are we then leading ourselves to a state of chaos and complexity [71]? This is compounded by the fact that computer system and network security is increasingly limited by the quality and security of the software running on constituent machines. Researchers estimate that more than half of all vulnerabilities are buffer overruns, an embarrassingly elementary class of bugs [72, 73]. The steps to get out of this chaos and complexity will be a major challenge for investigation in the 21st century.
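The gap between wired and wireless error behaviour can be made concrete with a back-of-the-envelope calculation. Assuming independent bit errors at a fixed bit error rate (BER), the probability that an n-bit packet is hit by at least one error is 1 - (1 - BER)^n; the BER figures below are illustrative assumptions, not measurements, and real wireless channels are bursty and time-varying.

```python
# Probability that an n-bit packet contains at least one bit error, assuming
# independent bit errors at a fixed bit error rate (BER). This is a crude
# model: real wireless links show bursty, time-varying errors, which is why
# adaptive error-control schemes such as ARQ matter there.

def packet_error_prob(ber, packet_bits):
    return 1.0 - (1.0 - ber) ** packet_bits

bits = 1500 * 8  # a 1500-byte packet
for label, ber in [("wired link, BER 1e-12", 1e-12),
                   ("wireless link, BER 1e-5", 1e-5)]:
    print(f"{label}: packet error probability = {packet_error_prob(ber, bits):.3g}")
```

Even under this simplification, a typical wireless BER turns roughly one packet in ten into an erroneous one, while the wired link loses a negligible fraction; this is the sense in which error control is "basically a technical issue" but an unavoidable one for all-wireless links.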
Fig. 22. Goals of IT in the near and far future. Computing: from high speed computing and autonomous computing, to optical computing and quantum computing, to chemical/bio/intelligent computing (seamless power + intelligence); in the near future, from the information age to the knowledge age (knowledge society, knowledge factory, knowledge workers, knowledge as wealth); in the far future, the age of consciousness. Communication: 3G mobile to 4G mobile; cellular, GSM, PDC, PHS, paging, UMTS, FSO, xDSL, next generation IP and VoIP, wireless Ethernet (IEEE 802.11), wireless home networking (IEEE 802.15.4), wireless Internet, LEO, multimedia standards, wireless ATM, PCN (seamless mobility, coverage, and total integration).
Entering the knowledge age is the inevitable consequence of the application of networks in business, organizations, government, society and the economy. The entry needs to break several hurdles. The acceptability of a knowledge economy with a non-material wealth, knowledge; the new status of human resources as knowledge workers; and the concept of sharing knowledge for organizational benefit are a few of the areas to be addressed.
The quantification of knowledge and the rules for exchanging knowledge for the purpose of sale and business of and with knowledge are technical challenges and need serious
investigation in this century. Consciousness, as Penrose put it, is the phenomenon whereby the Universe's very existence is made known. Thus in the age of consciousness, man's desire to be the master of nature, with which this paper started, may be realized. Will it really be?
The constructive and judicious application of IT may lead to overcoming the consequences of the digital divide. Several studies [74, 75] have suggested the application of IT in education and training, telemedicine and diagnosis, e-government, rural information sharing for the purpose of food conservation and sale, and entertainment, among others, for deriving maximum benefits in the developing countries. Like the digital divide, another negative aspect of IT is what happened on 11th September in the USA. Analyzing the 11th September issue, a famous research work [76] has examined the issue with a view to developing a system dynamics for the positive application of technology. This is a new direction of research in the application of technology; the same direction may be extended to removing the digital divide.
REFERENCES
1. C.T. Bhunia, Introduction to Knowledge Management, Everest Publishing House, Pune, 2003.
2. C.T. Bhunia, Modern Computer Architecture: Synthesis and Future, Information Technology, June 1992, pp. 80-81.
3. C.T. Bhunia, Trends of Modern Computer, CSI Communications, Aug.-Sept. 1997, pp. 11-14 & 6-7.
4. C.T. Bhunia, Molecular Electronics, IETE Technical Review, Vol. 13, No. 1, Jan.-Feb. 1996, pp. 11-15.
5. M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000.
6. Charles H. Bennett et al., Quantum Information Theory, IEEE Trans. on Information Theory, Vol. 44, No. 6, Oct. 1998.
7. C.T. Bhunia, Tomorrow's Computers, Science & Knowledge, Jan. 1995, pp. 7-9.
8. Vivek S. Nittoor, A Brief Introduction to Quantum Computing and Quantum Information, Proc. National CSI Convention, 2002, pp. 6-11.
9. C.T. Bhunia, On Way to Autonomous Computers, Electronics For You, Jan. 2003, pp. 42-44.
10. J.H. Burroughes, C.A. Jones and R.H. Friend, New Semiconductor Device Physics in Polymer Diodes and Transistors, Nature, Vol. 335, No. 6186, 1988, pp. 137-141.
11. D.A. Fraser, The Physics of Semiconductor Devices, Oxford Physics Series, 1977, Ch. 2, 7.
12. R.W. Whatmore, in L.S. Miller and Mullin (eds.), Electronic Materials, Plenum Press, 1991, Ch. 19.
13. Y. Hirshberg, Reversible Formation and Eradication of Colors by Irradiation at Low Temperature: A Photochemical Memory Model, J. Am. Chem. Soc., Vol. 78, 1956, p. 2304.
14. H. Brown, Photochromism, Techniques of Chemistry, Vol. 3, Wiley Interscience, N.Y., 1971.
15. Robert R. Birge, Protein-Based Three-Dimensional Memory, American Scientist, Vol. 82, 1994, pp. 348-354.
16. C.T. Bhunia, Molecular Electronics & Chemical Computing Technology, CSI Communications, Nov. 1995, pp. 13-26.
17. R.W. Munn and C.N. Ironside, Non-linear Optical Materials, Blackie Academic & Professional, 1993.
18. Geoffrey J. Ashwell, Molecular Electronics, John Wiley & Sons Inc., 1992.
19. Prasad & Williams, Introduction to Non-linear Optical Effects in Molecules & Polymers, John Wiley & Sons Inc., pp. 1-273.
20. John Fulenwider, The Future Looks Bright for Fiber Optics, Laser Focus World, Dec. 1990, pp. 141-145.
21. Alastair M. Glass, Fiber Optics, Physics Today, Oct. 1993, pp. 34-38.
22. M.N. Islam, Ultrafast Switching with Non-linear Optics, Physics Today, May 1994, pp. 34-40.
23. Burland et al., Second Order Non-linearity in Poled Polymer Systems, Chem. Rev., Vol. 94, 1994, pp. 31-75.
24. C.T. Bhunia, Personal Communication, J. IETE Edn., Vol. 38, No. 2, April-June 1998, pp. 109-118.
25. Jay E. Padget et al., Overview of Wireless Personal Communication, IEEE Communications Magazine, Jan. 1995, pp. 28-41.
26. Ashoke Chaterjee et al., Personal Communication: New Challenges for Digital Services, Proc. IEEE Tencon, New Delhi, 1997, pp. 146-148.
27. Guy Cayla, Wireless Local Loop: A Gateway to the Global Information Society, Proc. IEEE Tencon, Asia, 1997, p. T.5.
28. M.V. Pitke, Wireless Technology in Developing Countries: Issues and Alternatives, Proc. Telecom Asia, 1997, p. T.5.
29. Arup Ganz et al., Performance Study of Low Earth Orbit Satellite Systems, IEEE Trans. Com., Vol. 42, No. 2/3/4, Feb./March/April 1994, pp. 1866-1871.
30. William W. Wu et al., Mobile Satellite Communications, Proc. IEEE, Vol. 82, No. 9, pp. 1431-1444.
31. Markus Werner et al., Analysis of System Parameters for LEO/ICO Satellite Communication Networks, IEEE J. on Selected Areas in Communications, Vol. 13, No. 2, Feb. 1995, pp. 371-379.
32. Enrico Del Re et al., Efficient Dynamic Channel Allocation Techniques with Handover Queuing for Mobile Satellite Networks, IEEE J. on Selected Areas in Communications, Vol. 13, No. 2, Feb. 1995, pp. 397-405.
33. Abbas Jamalipour et al., Traffic Characteristics of LEOs-based Global Personal Communication Networks, IEEE Communications Magazine, Feb. 1997, pp. 118-112.
34. C.T. Bhunia, LEO Systems and Communications, J. IETE Edn., Vol. 40, No. 3 & 4, July-Dec. 1999, pp. 109-120.
35. Dan Arazi, Fast Access to the Internet and Interactive Multimedia Using DSL Technologies, ITU Asia Telecom, 1997, pp. 1-10.
36. Stefano Bregni et al., Local Loop Unbundling in the Italian Network, IEEE Communications Magazine, Oct. 2002, pp. 86-93.
37. Ahsan Habib, Channelized Voice Over Digital Subscriber Line, IEEE Communications Magazine, Oct. 2002, pp. 94-100.
38. Mario Diaz Nava, A Short Overview of the VDSL System Requirements, IEEE Communications Magazine, Dec. 2002, pp. 82-90.
39. Asymmetric Digital Subscriber Line, ANSI T1.413.
40. Bell Atlantic to Test Home Video over Copper, Intelligent Network News, 1992.
41. Digital Subscriber Line (HDSL and ADSL) Capacity of the Outside Loop Plant, IEEE Journal on Selected Areas in Communications, 1995.
42. C.T. Bhunia, Asymmetric Digital Subscriber Line, EFY, Jan. 1999, pp. 43-46.
43. C.T. Bhunia, An Insight in xDSL Technology, EFY, Sept. 2001, pp. 73-76.
44. Manuel Dinis et al., Provision of Sufficient Transmission Capacity for Broadband Mobile Multimedia: A Step Toward 4G, IEEE Communications Magazine, Vol. 39, No. 8, Aug. 2001, p. 54.
45. Nobuo Nakajima et al., Research and Developments of Software-Defined Radio Technologies in Japan, IEEE Communications Magazine, Vol. 39, No. 8, Aug. 2001, pp. 146-154.
46. Jeong Hyun Park, Wireless Internet Access for Mobile Subscribers Based on the GPRS/UMTS Network, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 38-49.
47. Johan De Vriendt et al., Mobile Network Evolution: A Revolution on the Move, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 104-110.
48. Fernando J. Velez et al., Mobile Broadband Services, IEEE Communications Magazine, Vol. 40, No. 4, April 2002, pp. 142-150.
49. William Webb, Broadband Fixed Wireless Access as a Key Component of the Future Integrated Communications Environment, IEEE Communications Magazine, Vol. 39, No. 9, Sept. 2001, pp. 115-121.
50. Shyam S. Chakraborty et al., An Adaptive ARQ Scheme with Packet Combining for Time Varying Channels, IEEE Comm. Letters, Vol. 3, No. 2, Feb. 1999, pp. 52-54.
51. Shyam S. Chakraborty et al., An ARQ Scheme with Packet Combining, IEEE Comm. Letters, Vol. 2, No. 7, July 1995, pp. 200-202.
52. C.T. Bhunia, ARQ Techniques: Review and Modifications, IETE Technical Review, Vol. 18, No. 5, Sept.-Oct. 2001, pp. 381-401.
53. C.T. Bhunia, A Few Modified ARQ Techniques, Proc. International Conference on Communications, Computers & Devices (ICCCD-2000), 14-16 December 2000, IIT Kharagpur, India, Vol. II, pp. 705-708.
54. Hossein Izadpanah, A Millimeter Wave Broadband Wireless Access Technology Demonstrator for the Next Generation Internet Network Reach Extension, IEEE Communications Magazine, Vol. 39, No. 9, Sept. 2001, pp. 140-145.
55. Luis Munoz et al., Optimizing Internet Flows over IEEE 802.11b Wireless Local Area Networks, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 60-66.
56. Vipul Gupta and Sumit Gupta, Securing the Wireless Internet, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 68-73.
57. Jeyhan Karaogue, High Rate Wireless Personal Area Networks, IEEE Communications Magazine, Vol. 39, No. 12, Dec. 2001, pp. 96-102.
58. Geng Sheng Kuo et al., Dynamic RSVP Protocol, IEEE Communications Magazine, Vol. 41, No. 5, May 2003, pp. 130-135.
59. Shidong Zhou et al., Distributed Wireless Communication System, IEEE Communications Magazine, Vol. 41, No. 3, March 2003, pp. 108-113.
60. Yungsoo Kim et al., Beyond 3G: Vision, Requirements, and Enabling Technologies, IEEE Communications Magazine, Vol. 41, No. 3, March 2003, pp. 120-123.
61. Alain J. Godbout, Information vs Knowledge, <http://dir.yahoo.com>.
62. Robert Taylor, Knowledge Management, <mailto:Taylor@gb.unisys.com>.
63. S. DiMattia et al., Hope or Hype, Managing Knowledge, Macmillan Business, UK, 2002.
64. Yogesh Malhotra, Knowledge in Inquiring Organizations, Proc. 3rd Americas Conference on Information Systems, August 1997.
65. S. Ranade, The Technology of Consciousness, Dipti Publications, Sri Aurobindo Ashram, Pondicherry, 2000.
66. Sisir Kumar Mitra, Sri Aurobindo, Orient Paperbacks, 1976.
67. R. Sadananda, The Limits to Growth: A Revisit, in Knowledge Networks and Sustainable Development, Proc. 37th National Convention of CSI 2002, Tata McGraw-Hill, 2002, pp. 23-31.
68. Sushil Mukhopadhyaya, Whither Bio-Science?, J. IETE Tech. Review, Vol. 19, No. 6, Nov.-Dec. 2002, pp. 381-386.
69. Upkar Varshney, The Status and Future of 802.11-based WLANs, IEEE Computer, Vol. 1, No. 3, June 2003, pp. 102-104.
70. Hassan Aljifri, IP Traceback: A New Denial of Service Deterrent, IEEE Computer, Vol. 1, No. 3, June 2003, pp. 24-31.
71. C.T. Bhunia, Cryptography: From Classical to Quantum Age, IT Seminar, Dept. of ETC, BEC (Deemed University), Shibpur, 2001.
72. Nancy R. Mead et al., From the Ground Up, IEEE Computer, Vol. 1, No. 2, March 2003, pp. 59-63.
73. D. Wagner et al., A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities, Proc. 7th Network and Distributed System Security Symposium, 2000.
74. Michael Gurstein, Rural Development and Food Security, SD Dimensions, FAO, November 2000.
75. A.K. Roy, The Dawn of an Information Age, Thought, Vol. V, Issue IV, April 2001, pp. 4-7.
76. Erica Vonderheid, Answering a Wake Up Call, IEEE, The Institute, June 2003, pp. 1 & 12.
77. Arun N. Netravali, When Networking Becomes ... and Beyond, IETE Technical Review, Vol. 19, No. 6, Nov.-Dec. 2002, pp. 353-362.
78. P.C. Mabon, Mission Communications: The Story of Bell Laboratories, Bell Telephone Laboratories, Inc., Murray Hill, N.J., 1975, p. iv.
79. Lester C. Thurow, The Wealth of Knowledge, Harper Collins Publishers, USA, 2002.
80. R. McGinn, A Revolution in Networking: Toward a Network of Networks, Network + Interop, Atlanta, Georgia, Oct. 21, 1998.
APPENDIX-A
Edholm's Law
The following table depicts the growth of data rates under different communication/network technologies. The data rates follow Edholm's law, which states that the data rates of all three categories of communication, namely wired, nomadic and wireless, are as predictable as Moore's law: the rates increase exponentially, and the slower rates trail the faster rates with a predictable time gap.
Table: Data rate growth of different communication/network technologies

1975-1984
  Wired: Ethernet, 2.94 Mbps
  Nomadic: Hayes modem, 110 bps
  Wireless: wide-area paging, a few hundred bps

1985-1994
  Wired: Ethernet, 10 Mbps
  Nomadic: modem, 9800 bps
  Wireless: alphanumeric paging, a few Kbps

1995-2004
  Wired: Ethernet, 100 Mbps; Ethernet, 1 Gbps
  Nomadic: modem, 28.8 Kbps; modem, 56.6 Kbps; IEEE 802.11b, 11 Mbps; IEEE 802.11g, 108 Mbps
  Wireless: cellular/GSM, 50 Kbps; PCN/UMTS, > 2 Mbps; B3G (Beyond 3G), 12 Mbps; MIMO, 200 Mbps
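The exponential claim can be sanity-checked against the wired column of the table above by fitting log2(data rate) against time. Representing each decade bin by its midpoint year is an assumption of this sketch, so the resulting doubling time is only a rough estimate.

```python
import math

# Wired data rates from the table above, with each decade bin represented by
# its midpoint year (an assumption made for this rough fit).
wired = [(1980, 2.94e6), (1990, 10e6), (2000, 1e9)]  # (year, bits per second)

# Least-squares slope of log2(rate) vs. year gives doublings per year.
xs = [year for year, _ in wired]
ys = [math.log2(rate) for _, rate in wired]
n = len(wired)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"about {slope:.2f} doublings per year; "
      f"data rate doubles roughly every {1 / slope:.1f} years")
```

With these assumed midpoints the fit gives a doubling time of a little over two years, consistent with the "as predictable as Moore's law" reading of the table.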