COURSE MATERIAL
COMPUTER NETWORKS Aug – Dec 2009
DEPTT. : IT Paper Code : IT-305E SEMESTER: V
The readings referred to in the table below are recommended material from:
A - Forouzan, "Data Communications and Networking", 3rd edition
B - Tanenbaum, "Computer Networks", 4th edition
ARPANET: In the mid-1960s, the mainframe computers in research organisations were standalone
devices. Computers from different manufacturers were unable to communicate with each other. The
Advanced Research Projects Agency (ARPA) in the Department of Defense was interested in finding a way
to connect computers so that the researchers it funded could share their findings.
In 1967, at an Association for Computing Machinery meeting, ARPA presented its ideas for
ARPANET, a small network of connected computers. The idea was that each host computer would be
attached to a specialized computer called an Interface Message Processor (IMP). The IMPs would in turn
be connected to one another. Each IMP had to be able to communicate with other IMPs as well as with its
own attached host.
In 1969, work began on the ARPAnet, grandfather to the Internet. Designed as a computer version of a
nuclear bomb shelter, ARPAnet protected the flow of information between military installations by creating
a network of geographically separated computers that could exchange information via a newly developed
protocol (rule for how computers interact) called NCP (Network Control Protocol). One opposing view to
ARPAnet's origins comes from Charles M. Herzfeld, the former director of ARPA. He claimed that
ARPAnet was not created as a result of a military need, stating "it came out of our frustration that there
were only a limited number of large, powerful research computers in the country and that many research
investigators who should have access were geographically separated from them." ARPA stands for the
Advanced Research Projects Agency, a branch of the military that developed top secret systems and
weapons during the Cold War. The first data exchange over this new network occurred between computers
at UCLA and Stanford Research Institute. On their first attempt to log into Stanford's computer by typing
"login", UCLA researchers crashed the system when they typed the letter 'g'.
Four computers were the first connected in the original ARPAnet. They were located in the
respective computer research labs of UCLA (Honeywell DDP 516 computer), Stanford Research
Institute (SDS-940 computer), UC Santa Barbara (IBM 360/75), and the University of Utah (DEC
PDP-10). As the network expanded, different models of computers were connected, creating
compatibility problems. The solution rested in a better set of protocols called TCP/IP
(Transmission Control Protocol/Internet Protocol), developed in the 1970s and adopted as the ARPANET
standard in 1983.
LECTURE NO 2 READINGS: A-PAGE 16,A-PAGE 8 to 13
INTERNET:
The Internet is a global system of interconnected computer networks that interchange data by packet
switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that
consists of millions of private and public, academic, business, and government networks of local to global
scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.
The Internet carries various information resources and services, such as electronic mail, online chat, file
transfer and file sharing, online gaming, and the inter-linked hypertext documents and other resources of
the World Wide Web (WWW).
The Internet is a specific internetwork. It consists of a worldwide interconnection of governmental,
academic, public, and private networks based upon the networking technologies of the Internet Protocol
Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by
DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying
the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun,
for historical reasons and to distinguish it from other generic internetworks.
Participants in the Internet use a diverse array of methods of several hundred documented, and often
standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP
Addresses) administered by the Internet Assigned Numbers Authority and address registries. Service
providers and large enterprises exchange information about the reachability of their address spaces through
the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
Private network
In Internet terminology, a private network is typically a network that uses private IP address space,
following the standards set by RFC 1918 and RFC 4193. These addresses are common in home and office
local area networks (LANs), as using globally routable addresses is seen as impractical or unnecessary.
Private IP addresses were originally created due to the shortage of publicly registered IP addresses created
by the IPv4 standard, but are also a feature of the next generation Internet Protocol, IPv6.
These addresses are private because they are not globally assigned, meaning they aren't allocated to a
specific organisation--instead, any organisation needing private address space can use these addresses
without needing approval from a regional Internet registry (RIR). Consequently, they are not routable on
the public Internet, meaning that if such a private network wishes to connect to the Internet, it must use
either a Network Address Translation (NAT) gateway, or a proxy server.
The most common use of these addresses is in home networks, since most Internet Service Providers (ISPs)
only allocate a single IP address to each customer, but many homes have more than one networking device
(for example, several computers, or a printer). In this situation, a NAT gateway is almost always used to
provide Internet connectivity. They are also commonly used in corporate networks, which for security
reasons, are not connected directly to the internet, meaning globally routable addresses are unnecessary.
Often a proxy, SOCKS gateway, or similar is used to provide restricted internet access to internal users. In
both cases, private addresses are seen as adding security to the internal network, since it's impossible for an
Internet host to connect directly to an internal system.
Because many internal networks use the same private IP addresses, a common problem when trying to
merge two such networks (e.g. during a company merger or takeover) is that both organisations have
allocated the same IPs in their networks. In this case, either one network must renumber, often a difficult
and time-consuming task, or a NAT router must be placed between the networks to translate one network's
addresses before they can reach the other side.
It is not uncommon for private address space to "leak" onto the Internet in various ways. Poorly configured
private networks often attempt reverse DNS lookups for these addresses, putting extra load on the Internet's
root nameservers. The AS112 project mitigates this load by providing special "blackhole" anycast
nameservers for private addresses which only return "not found" answers for these queries. Organisational
edge routers are usually configured to drop ingress IP traffic for these networks, which can occur either by
accident, or from malicious traffic using a spoofed source address. Less commonly, ISP edge routers will
drop such ingress traffic from customers, which reduces the impact to the Internet of such misconfigured or
malicious hosts on the customer's network.
A common misconception is that these addresses are not routable. However, while not routable on the
public Internet, they are routable within an organisation or site.
The Internet Engineering Task Force (IETF) has directed IANA to reserve the following IPv4 address
ranges for private networks, as published in RFC 1918:
The three RFC 1918 blocks are:
24-bit block: 10.0.0.0 - 10.255.255.255 (16,777,216 addresses; a single class A network;
    largest CIDR block 10.0.0.0/8, subnet mask 255.0.0.0; 24-bit host id)
20-bit block: 172.16.0.0 - 172.31.255.255 (1,048,576 addresses; 16 contiguous class B networks;
    largest CIDR block 172.16.0.0/12, subnet mask 255.240.0.0; 20-bit host id)
16-bit block: 192.168.0.0 - 192.168.255.255 (65,536 addresses; 256 contiguous class C networks;
    largest CIDR block 192.168.0.0/16, subnet mask 255.255.0.0; 16-bit host id)
Note that classful addressing is obsolete and no longer used on the Internet. For example, while 10.0.0.0/8
would be a single class A network, it is not uncommon for organisations to divide it into smaller /16 or /24
networks.
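The reserved ranges above can be checked programmatically. The following sketch uses Python's
standard ipaddress module, whose is_private property covers the RFC 1918 blocks (among other
reserved ranges); the specific addresses are arbitrary examples.

```python
import ipaddress

# One address from each RFC 1918 block, plus a globally routable one.
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.100", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    # is_private is True for the RFC 1918 ranges (and other reserved ranges)
    print(addr, "private" if ip.is_private else "public")
```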
Network Classification
The following list presents categories used for classifying networks.
Network topology
Computer networks may be classified according to the network topology upon which the network is based,
such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical
topology network. Network Topology signifies the way in which devices in the network see their logical
relations to one another. The use of the term "logical" here is significant. That is, network topology is
independent of the "physical" layout of the network. Even if networked computers are physically placed in
a linear arrangement, if they are connected via a hub, the network has a Star topology, rather than a Bus
Topology. In this regard the visual and operational characteristics of a network are distinct; the logical
network topology is not necessarily the same as the physical layout. Networks may also be classified by
the method used to convey the data; these include digital and analog networks.
1. Bus network
A bus network topology is a network architecture in which a set of clients are connected via a shared
communications line, called a bus. There are several common instances of the bus architecture, including
one in the motherboard of most computers, and those in some versions of Ethernet networks. Bus networks
are the simplest way to connect multiple clients, but may have problems when two clients want to transmit
at the same time on the same bus. Thus systems which use bus network architectures normally have some
scheme of collision handling or collision avoidance for communication on the bus, quite often using Carrier
Sense Multiple Access (CSMA), or the presence of a bus master which controls access to the shared bus
resource. CSMA is a probabilistic Media Access Control (MAC) protocol in which a node verifies the
absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus or a
band of the electromagnetic spectrum. "Carrier sense" describes the fact that a transmitter listens for a
carrier wave before trying to send; that is, it tries to detect the presence of an encoded signal from another
station before attempting to transmit. If a carrier is sensed, the station waits for the transmission in progress
to finish before initiating its own transmission. "Multiple access" describes the fact that multiple stations
send and receive on the medium; transmissions by one node are generally received by all other stations
using the medium.
The bus topology makes the addition of new devices straightforward. The term used to describe clients is
station or workstation in this type of network. Bus network topology uses a broadcast channel which means
that all attached stations can hear every transmission and all stations have equal priority in using the
network to transmit data.
Advantages and disadvantages of a bus network
Advantages
• Easy to implement and extend
• Well suited for temporary or small networks not requiring high speeds (quick setup)
• Cheaper than other topologies.
• Cost effective as only a single cable is used
• Cable faults are easily identified
Disadvantages
• Limited cable length and number of stations.
• If there is a problem with the cable, the entire network goes down.
• Maintenance costs may be higher in the long run.
• Performance degrades as additional computers are added or on heavy traffic.
• Proper termination is required (both ends of the bus cable must be terminated).
• Significant capacitive load (each bus transaction must reach the most distant station).
• It works best with limited number of nodes.
• It is slower than the other topologies.
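The carrier-sense behaviour described above can be illustrated with a toy simulation (not a real MAC
implementation): each station senses the shared bus and, if it is busy, waits a random backoff before
sensing again. The frame time, backoff range and station names are invented for illustration.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

FRAME_TIME = 5      # time units one frame occupies the bus (invented value)
bus_busy_until = 0  # time at which the current transmission ends
now = 0             # current simulation time

def try_send(station):
    """Carrier sense: wait (with random backoff) until the bus is idle,
    then seize the bus for one frame time."""
    global bus_busy_until, now
    while now < bus_busy_until:      # carrier sensed: another frame in flight
        now += random.randint(1, 3)  # random backoff, then sense again
    start, end = now, now + FRAME_TIME
    bus_busy_until = end
    return start, end

intervals = [try_send(s) for s in ["A", "B", "C"]]
for name, (start, end) in zip(["A", "B", "C"], intervals):
    print(f"station {name} transmits t={start}..{end}")
```

Because each station defers until the bus is idle, the transmission intervals never overlap.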
2. Star networks are one of the most common computer network topologies. In its simplest form, a star
network consists of one central switch, hub or computer, which acts as a conduit to transmit messages.
Thus, the hub and leaf nodes, and the transmission lines between them, form a graph with the topology of a
star. If the central node is passive, the originating node must be able to tolerate the reception of an echo of
its own transmission, delayed by the two-way transmission time (i.e. to and from the central node) plus any
delay generated in the central node. An active star network has an active central node that usually has the
means to prevent echo-related problems.
The star topology reduces the chance of network failure by connecting all of the systems to a central node.
When applied to a bus-based network, this central hub rebroadcasts all transmissions received from any
peripheral node to all peripheral nodes on the network, sometimes including the originating node. All
peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central
node only. The failure of a transmission line linking any peripheral node to the central node will result in
the isolation of that peripheral node from all others, but the rest of the systems will be unaffected
Advantages
• Better performance: Passing of Data Packet through unnecessary nodes is prevented by this
topology. At most 3 devices and 2 links are involved in any communication between any two
devices which are part of this topology. This topology induces a huge overhead on the central hub,
however if the central hub has adequate capacity, then very high network utilization by one device
in the network does not affect the other devices in the network.
• Isolation of devices: Each device is inherently isolated by the link that connects it to the hub. This
makes the isolation of the individual devices fairly straightforward, and amounts to disconnecting
the device from the hub. This isolated nature also prevents any non-centralized failure to affect the
network.
• Benefits from centralization: As the central hub is the bottleneck, increasing capacity of the
central hub or adding additional devices to the star, can help scale the network very easily. The
central nature also allows the inspection of traffic through the network. This can help analyze all the
traffic in the network and determine suspicious behavior.
• Simplicity: The topology is easy to understand, establish, and navigate. The simple topology
obviates the need for complex routing or message passing protocols. As noted earlier, the isolation
and centralization simplifies fault detection, as each link or device can be probed individually.
Disadvantages
The primary disadvantage of a star topology is the high dependence of the system on the functioning of the
central hub. While the failure of an individual link only results in the isolation of a single node, the failure
of the central hub renders the network inoperable, immediately isolating all nodes. The performance and
scalability of the network also depend on the capabilities of the hub. Network size is limited by the number
of connections that can be made to the hub, and performance for the entire network is capped by its
throughput. While in theory traffic between the hub and a node is isolated from other nodes on the network,
other nodes may see a performance drop if traffic to another node occupies a significant portion of the
central node's processing capability or throughput. Furthermore, wiring up of the system can be very
complex.
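The hub rebroadcast behaviour described above can be sketched as a toy simulation. The Hub and Node
classes and the frame format below are invented for illustration: every peripheral node hears every frame,
but only the addressee keeps it.

```python
class Hub:
    """Central node of a star: rebroadcasts each frame to all peripherals."""
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)
        node.hub = self

    def broadcast(self, frame, sender):
        # Rebroadcast to every peripheral node except the originator.
        for node in self.nodes:
            if node is not sender:
                node.receive(frame)

class Node:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, dest, payload):
        # All communication goes through the central hub only.
        self.hub.broadcast({"dest": dest, "payload": payload}, self)

    def receive(self, frame):
        # Every node hears the frame; only the addressee keeps it.
        if frame["dest"] == self.name:
            self.inbox.append(frame["payload"])

hub = Hub()
a, b, c = Node("A"), Node("B"), Node("C")
for n in (a, b, c):
    hub.attach(n)

a.send("C", "hello")
print(c.inbox)  # ['hello']; B heard the frame but discarded it
```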
3. A ring network is a network topology in which each node connects to exactly two other nodes,
forming a single continuous pathway for signals through each node - a ring. Data travels from node to
node, with each node along the way handling every packet.
Because a ring topology provides only one pathway between any two nodes, ring networks may be
disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to
the ring. FDDI networks overcome this vulnerability by sending data on a clockwise and a
counterclockwise ring: in the event of a break data is wrapped back onto the complementary ring before it
reaches the end of the cable, maintaining a path to every node along the resulting "C-Ring". 802.5 networks
-- also known as IBM Token Ring networks -- avoid the weakness of a ring topology altogether: they
actually use a star topology at the physical layer and a Multistation Access Unit to imitate a ring at the
datalink layer.
Advantages
• Very orderly network where every device has access to the token and the opportunity to transmit
• Performs better than a star topology under heavy network load
• Can create much larger network using Token Ring
• Does not require network server to manage the connectivity between the computers
Disadvantages
• One malfunctioning workstation or bad port in the MAU can create problems for the entire
network
• Moves, adds and changes of devices can affect the network
• Network adapter cards and MAUs are much more expensive than Ethernet cards and hubs
• Much slower than an Ethernet network under normal load
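The token-passing idea above ("every device has access to the token and the opportunity to transmit")
can be sketched as follows. This is an illustration of the concept, not an implementation of IEEE 802.5;
the station names and ring order are invented.

```python
from collections import deque

stations = deque(["A", "B", "C", "D"])  # stations in ring order
transmissions = []

def circulate(rounds):
    """Pass the token around the ring; only the current holder transmits."""
    for _ in range(rounds * len(stations)):
        holder = stations[0]           # station currently holding the token
        transmissions.append(holder)   # it may transmit while holding it
        stations.rotate(-1)            # pass the token to the next station

circulate(rounds=1)
print(transmissions)  # ['A', 'B', 'C', 'D'] -- each station got one turn
```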
LECTURE NO 3 READINGS: A-PAGE 8 TO 13
4. Mesh networking is a way to route data, voice and instructions between nodes. It allows for
continuous connections and reconfiguration around broken or blocked paths by “hopping” from node to
node until the destination is reached. A mesh network whose nodes are all connected to each other is a fully
connected network. Mesh networks differ from other networks in that the component parts can all connect
to each other via multiple hops, and they generally are not mobile. Mesh networks can be seen as one type
of ad hoc network. Mobile ad-hoc networks (MANETs) and mesh networks are therefore closely related,
but MANETs also have to deal with the problems introduced by the mobility of the nodes. Mesh networks
are self-healing: the network can still operate even when a node breaks down or a connection goes bad. As
a result, a very reliable network is formed. This concept is applicable to wireless networks, wired networks,
and software interaction. Wireless mesh networks are the most topical application of mesh architectures.
Wireless mesh was originally developed for military applications but has undergone significant evolution
in the past decade. As the cost of radios plummeted, single-radio products evolved to support more radios
per mesh node, with the additional radios providing specific functions such as client access, backhaul
service, or scanning radios for high-speed handover in mobility applications. The mesh node design also
became more modular - one box could support multiple radio cards, each operating at a different
frequency.
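The self-healing property described above amounts to finding an alternate path when a node or link
fails. A minimal sketch, using breadth-first search over a made-up five-node mesh:

```python
from collections import deque

# An invented partially-connected mesh: two disjoint routes from A to E.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_path(graph, src, dst, dead=frozenset()):
    """Breadth-first search for a path, skipping any failed ('dead') nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(find_path(mesh, "A", "E"))              # e.g. ['A', 'B', 'D', 'E']
print(find_path(mesh, "A", "E", dead={"B"}))  # re-routed around the failure
```

When node B fails, traffic "hops" via C instead, so the network keeps operating.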
5. Tree Topology
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices
connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star
hybrid approach supports future expandability of the network much better than a bus (limited in the number
of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection
points) alone
Hybrid mesh
A hybrid mesh is a type of hybrid physical network topology that combines the physical partially
connected topology with one or more other physical topologies. The mesh portion of the topology consists
of redundant or alternate connections between some of the nodes in the network. The physical hybrid mesh
topology is commonly used in networks which require a high degree of availability.
LECTURE NO. 4 READINGS: A-PAGE 13 to 15
Types of networks
1.Personal Area Network (PAN)
A Personal Area Network (PAN) is a computer network used for communication among computer
devices close to one person. Some examples of devices that are used in a PAN are printers, fax machines,
telephones, PDAs and scanners. The reach of a PAN is typically about 20-30 feet (approximately 6-9
meters), but this is expected to increase with technology improvements.
2. Local Area Network (LAN)
A Local Area Network (LAN) is a computer network covering a small physical area, like a home, office,
or small group of buildings, such as a school or an airport. Current LANs are most likely to be based on
Ethernet technology. For example, a library may have a wired or wireless LAN for users to interconnect
local devices (e.g., printers and servers) and to connect to the Internet. On a wired LAN, PCs in the library
are typically connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of
interconnected devices, and eventually connect to the Internet.
3.Metropolitan Area Network (MAN)
A Metropolitan Area Network (MAN) is a network that connects two or more Local Area Networks or
Campus Area Networks together but does not extend beyond the boundaries of the immediate town/city.
Routers, switches and hubs are connected to create a Metropolitan Area Network.
4.Wide Area Network (WAN)
A Wide Area Network (WAN) is a computer network that covers a broad area (i.e., any network whose
communications links cross metropolitan, regional, or national boundaries). Less formally, a WAN is a
network that uses routers and public communications links. Contrast with personal area networks
(PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks
(MANs), which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city)
respectively. The largest and most well-known example of a WAN is the Internet. A WAN is a data
communications network that covers a relatively broad geographic area (i.e. one city to another and one
country to another country) and that often uses transmission facilities provided by common carriers, such as
telephone companies. WAN technologies generally function at the lower three layers of the OSI reference
model: the physical layer, the data link layer, and the network layer.
5. Internetwork
Internetworking involves connecting two or more distinct computer networks or network segments via
a common routing technology. The result is called an internetwork (often shortened to internet). Two or
more networks or network segments are connected using devices that operate at layer 3 (the 'network'
layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private,
commercial, industrial, or governmental networks may also be defined as an internetwork.
In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants
of internetwork, depending on who administers and who participates in them:
• Intranet
• Extranet
• Internet
Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the
intranet or extranet is normally protected from being accessed from the Internet without proper
authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as
a portal for access to portions of an extranet.
Intranet:An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web
browsers and file transfer applications, that is under the control of a single administrative entity. That
administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is
the internal network of an organization. A large intranet will typically have at least one web server to
provide users with organizational information.
Extranet:An extranet is a network or internetwork that is limited in scope to a single organization or
entity but which also has limited connections to the networks of one or more other usually, but not
necessarily, trusted organizations or entities (e.g. a company's customers may be given access to some part
of its intranet creating in this way an extranet, while at the same time the customers may not be considered
'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN,
WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must
have at least one connection with an external network.
Internet: The Internet is a specific internetwork: the worldwide interconnection of governmental,
academic, public, and private networks based on the Internet Protocol Suite, as described in the
INTERNET section above.
OSI MODEL:
The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model) is an
abstract description for layered communications and computer network protocol design. It was developed
as part of the Open Systems Interconnection (OSI) initiative. In its most basic form, it divides network
architecture into seven layers which, from top to bottom, are the Application, Presentation, Session,
Transport, Network, Data-Link, and Physical Layers. It is therefore often referred to as the OSI Seven
Layer Model.
A layer is a collection of conceptually similar functions that provides services to the layer above it
and receives services from the layer below it. For example, a layer that provides error-free
communications across a network provides the path needed by applications above it, while it calls
the next lower layer to send and receive packets that make up the contents of the path.
Description of OSI layers
OSI Model
    Layer              Data unit   Function
Host layers:
    7. Application     Data        Network process to application
    6. Presentation    Data        Data representation and encryption
    5. Session         Data        Interhost communication
    4. Transport       Segment     End-to-end connections and reliability
Media layers:
    3. Network         Packet      Path determination and logical addressing
    2. Data Link       Frame       Physical addressing (MAC & LLC)
    1. Physical        Bit         Media, signal and binary transmission
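The layered structure in the table can be illustrated by a toy encapsulation routine: on the way down the
stack each layer wraps the unit it receives from above with its own header, and on the way up each layer
strips its header. The header format here is invented purely for illustration.

```python
# Layers that add a header on the way down (the physical layer transmits bits
# rather than adding a header, so it is omitted from this sketch).
layers = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link"]

def encapsulate(payload):
    """Walk down the stack, wrapping the payload with one header per layer."""
    pdu = payload
    for layer in layers:
        pdu = f"[{layer}-hdr|{pdu}]"   # this layer's header wraps the PDU
    return pdu

def decapsulate(pdu):
    """Walk back up the stack, stripping one header per layer."""
    for layer in reversed(layers):     # outermost header first (Data Link)
        prefix = f"[{layer}-hdr|"
        assert pdu.startswith(prefix) and pdu.endswith("]")
        pdu = pdu[len(prefix):-1]
    return pdu

frame = encapsulate("hello")
print(frame)                 # Data Link header is outermost
print(decapsulate(frame))    # the original payload is recovered
```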
Layer 7: Application Layer
The application layer is the OSI layer closest to the end user, which means that both the OSI application
layer and the user interact directly with the software application. This layer interacts with software
applications that implement a communicating component. Such application programs fall outside the scope
of the OSI model. Application layer functions typically include identifying communication partners,
determining resource availability, and synchronizing communication. When identifying
communication partners, the application layer determines the identity and availability of communication
partners for an application with data to transmit. When determining resource availability, the application
layer must decide whether sufficient network resources for the requested communication exist. In
synchronizing communication, all communication between applications requires cooperation that is
managed by the application layer. Some examples of application layer implementations include Telnet,
File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).
Layer 6: Presentation Layer
The Presentation Layer establishes a context between Application Layer entities, in which the higher-layer
entities can use different syntax and semantics, as long as the Presentation Service understands both and the
mapping between them. The presentation service data units are then encapsulated into Session Protocol
Data Units, and moved down the stack.
This layer provides independence from differences in data representation (e.g., encryption) by translating
from application to network format, and vice versa. The presentation layer works to transform data into the
form that the application layer can accept. This layer formats and encrypts data to be sent across a network,
providing freedom from compatibility problems. It is sometimes called the syntax layer.
Layer 5: Session Layer
The Session Layer controls the dialogues/connections (sessions) between computers. It establishes,
manages and terminates the connections between the local and remote application. It provides for full-
duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and
restart procedures. The OSI model made this layer responsible for "graceful close" of sessions, which is a
property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet
Protocol Suite. The Session Layer is commonly implemented explicitly in application environments that
use remote procedure calls (RPCs). It offers various services, including:
1. Dialog control: The session layer allows two systems to enter into a dialog. It allows the communication
between two processes to take place in either half-duplex or full-duplex mode.
2. Synchronization: It allows a process to add checkpoints, or synchronization points, to a stream of data.
For example, if a system is sending a file of 2000 pages, it is advisable to insert checkpoints after every 100
pages to ensure that each 100-page unit is received and acknowledged independently.
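The 2000-page example above can be made concrete with a small calculation: after a failure, the sender
resumes from the last acknowledged checkpoint rather than from page 1. The crash point used below is
invented.

```python
TOTAL_PAGES = 2000
CHECKPOINT_EVERY = 100  # a checkpoint is acknowledged every 100 pages

def last_checkpoint(pages_delivered):
    """Largest checkpoint at or below the number of pages delivered."""
    return (pages_delivered // CHECKPOINT_EVERY) * CHECKPOINT_EVERY

# A crash occurs after 1536 pages were delivered: the first 1500 pages were
# acknowledged in 100-page units, so transmission resumes from page 1501
# instead of retransmitting all 2000 pages.
resume_from = last_checkpoint(1536) + 1
print(resume_from)  # 1501
```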
LECTURE NO. 6 READINGS: - DO-
Layer 4: Transport Layer
The Transport Layer provides transparent transfer of data between end users, providing reliable data
transfer services to the upper layers. It is responsible for process-to-process delivery of the entire message.
It ensures that the whole message arrives intact and in order, overseeing both error and flow control at the
source-to-destination level. The Transport Layer controls the reliability of a given link through flow
control, segmentation/desegmentation, and error control. Some protocols are state- and connection-
oriented. This means that the Transport Layer can keep track of the segments and retransmit those that fail.
1.Service Point Addressing: Computers often run several programs at the same time. For this reason,
source to destination delivery means delivery not only from one computer to the next but also from a
specific process on one computer to a specific process on other computer .The transport layer header must
therefore include a type of address called port address .The transport layer gets the entire message to the
correct process or computer.
2. Segmentation& reassemble: A message is divided into transmittable segments with each segment
containing a sequence no. These numbers enable the transport layer to reassemble the message correctly
upon arriving at the destination & to identify &replace packets thatt were lost in transmission.
3. Connection control: The transport layer can be either connectionless or connection-oriented. A connectionless
transport layer treats each segment as an independent packet and delivers it to the transport layer at the
destination machine. A connection-oriented transport layer makes a connection with the transport layer at the
destination machine before delivering the packets. After all the data is transferred, the connection is terminated.
4. Flow control: Like the data link layer, the transport layer is responsible for flow control, but here it is
performed end to end rather than across a single link.
5. Error control: Error control is likewise performed process to process rather than across a single link. The
sending transport layer makes sure that the entire message arrives at the receiving transport layer without
error (damage, loss or duplication). Error correction is usually achieved through retransmission.
Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition
of the Transport Layer, the best known examples of a Layer 4 protocol are the Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP).
Layer 3: Network Layer
The Network Layer provides the functional and procedural means of transferring variable length data
sequences from a source to a destination via one or more networks, while maintaining the quality of service
requested by the Transport Layer. The Network Layer performs network routing functions, and might also
perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending
data throughout the extended network and making the Internet possible. The best-known example of a Layer
3 protocol is the Internet Protocol (IP). It manages the connectionless transfer of data one hop at a time:
from end system to ingress router, router to router, and from egress router to destination end system. It is
not responsible for reliable delivery to the next hop, but only for the detection of errored packets so they may
be discarded. When the medium of the next hop cannot accept a packet at its current length, IP is
responsible for fragmenting the packet into pieces small enough for the medium to accept.
1. Logical addressing: The physical addressing implemented by the data link layer handles the addressing
problem locally. The network layer adds a header to the packet coming from the upper layer that includes
the logical addresses of the sender and receiver.
2. Routing: When independent networks are connected to create an internetwork, routers route the packets to
their final destination.
Layer 2: Data Link Layer
The Data Link Layer is the second layer in the OSI model, above the Physical Layer. It ensures that data is
transferred error-free between adjacent nodes in the network.
1. Framing: It breaks the datagrams passed down by the layers above into frames ready for transfer. This is
called framing. The layer provides two main functionalities:
• A reliable data transfer service between two peer network layers
• A flow control mechanism that regulates the flow of frames so that data congestion does not occur
at slow receivers due to fast senders
2. Error Control
• The bit stream transmitted by the physical layer is not guaranteed to be error free. The data link
layer is responsible for error detection and correction. The most common error control method is
to compute and append some form of a checksum to each outgoing frame at the sender's data link
layer and to recompute the checksum and verify it with the received checksum at the receiver's
side. If both of them match, then the frame is correctly received; else it is erroneous. The
checksums may be of two types:
• Error detecting: the receiver can only detect the error in the frame and inform the sender about it.
• Error detecting and correcting: the receiver can not only detect the error but also correct it.
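One widely used error-detecting checksum is the Internet checksum (RFC 1071), the same algorithm IP, UDP and TCP use. The sketch below shows the sender computing it and the receiver verifying it; the sample frame is an arbitrary even-length byte string chosen so the 16-bit word boundaries line up:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum used by IP, UDP and TCP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

frame = b"networking"                # even length keeps word boundaries aligned
csum = internet_checksum(frame)

# Receiver side: recomputing over frame + checksum yields 0 if the frame is intact
check = internet_checksum(frame + bytes([csum >> 8, csum & 0xFF]))
assert check == 0

# A flipped bit is detected (the recomputed value is no longer 0)
corrupted = b"netwOrking"
assert internet_checksum(corrupted + bytes([csum >> 8, csum & 0xFF])) != 0
```

Note this is an error-detecting code only: the receiver learns that the frame is damaged but cannot tell which bit, so it must ask the sender to retransmit.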
3. Flow Control
Consider a situation in which the sender transmits frames faster than the receiver can accept them. If the
sender keeps pumping out frames at high rate, at some point the receiver will be completely swamped and
will start losing some frames. This problem may be solved by introducing flow control. Most flow control
protocols contain a feedback mechanism to inform the sender when it should transmit the next frame.
4. Access control: When two or more devices are connected to the same link, data link layer protocols are
necessary to determine which device has control over the link at any given time.
5. Physical addressing: If frames are to be distributed to different systems on the network, the data link
layer adds a header to the frame to define the sender and/or receiver of the frame.
Layer 1: Physical Layer
The Physical Layer is the first level in the seven-layer OSI model of computer networking. It translates
communications requests from the Data Link Layer into hardware-specific operations to effect transmission
or reception of electronic signals.
The Physical Layer is a fundamental layer upon which all higher level functions in a network are based.
However, due to the plethora of available hardware technologies with widely varying characteristics, this is
perhaps the most complex layer in the OSI architecture. The implementation of this layer is often termed
PHY.
The Physical Layer defines the means of transmitting raw bits rather than logical data packets over a
physical link connecting network nodes. The bit stream may be grouped into code words or symbols and
converted to a physical signal that is transmitted over a hardware transmission medium. The Physical Layer
provides an electrical, mechanical, and procedural interface to the transmission medium. The shapes of the
electrical connectors, which frequencies to broadcast on, which modulation scheme to use and similar low-
level parameters are specified here.
List of Physical Layer services
The major functions and services performed by the Physical Layer are:
• Bit-by-bit delivery
• Providing a standardized interface to physical transmission media, including
o Mechanical specification of electrical connectors and cables, for example maximum
cable length
o Electrical specification of transmission line signal level and impedance
o Radio interface, including electromagnetic spectrum frequency allocation and
specification of signal strength, analog bandwidth, etc.
o Specifications for IR over optical fiber or a wireless IR communication link
• Modulation
• Line coding
• Bit synchronization in synchronous serial communication
• Start-stop signalling and flow control in asynchronous serial communication
• Circuit mode multiplexing, as opposed to the statistical multiplexing performed at higher
levels
o Establishment and termination of circuit switched connections
• Carrier sense and collision detection utilized by some level 2 multiple access protocols
• Equalization filtering, training sequences, pulse shaping and other signal processing of physical
signals
• Forward error correction, for example bitwise convolutional coding
• Bit-interleaving and other channel coding
The Physical Layer is also concerned with
• Point-to-point, multipoint or point-to-multipoint line configuration
• Physical network topology, for example bus, ring, mesh or star network
• Serial or parallel communication
• Simplex, half duplex or full duplex transmission mode
• Autonegotiation
Physical Layer examples
• V.92 telephone network modems
• IRDA Physical Layer
• USB Physical Layer
• Firewire
• EIA RS-232, EIA-422, EIA-423, RS-449, RS-485
• ITU Recommendations: see ITU-T
• DSL
• ISDN
• T1 and other T-carrier links, and E1 and other E-carrier links
• 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 100BASE-T, 1000BASE-T,
1000BASE-SX and other varieties of the Ethernet physical layer
• Varieties of 802.11
• SONET/SDH
History of TCP/IP
In the late 1960s, most computer users bought a single large system for all of their data processing needs.
As their needs expanded, they rarely bought a different system from a different vendor. Instead, they added
on to their existing platforms, or replaced them with newer, larger models. Cross-platform connectivity
was essentially unheard of, and it was not expected by customers.
These systems used proprietary networking architectures and protocols. For the most part, networking
consisted of plugging dumb terminals or line printers into an intelligent communications controller. And
just as the networking protocols were proprietary, the network nodes were proprietary as well. To this day
you still can't plug an IBM terminal into a DEC midrange system and expect it to work. The protocols in
use are completely incompatible with each other.
In an effort to cut the costs of development, the Advanced Research Projects Agency (ARPA) of the
Department of Defense (DOD) began coordinating the development of a vendor- independent network to
tie major research sites together. The logic behind this is clear: the cost and time to develop an application
on one system was too much for each site to re-write the application on different systems. Since each
facility used different computers with proprietary networking technology, the need for a vendor-
independent network was the first priority. In 1968, work began on a private packet-switched network.
In the early 1970's, authority of the project was transferred to the Defense Advanced Research Projects
Agency (DARPA). Although the original ARPAnet protocols were written for use with the ARPA packet-
switched network, they were also designed to be usable on other networks as well, and in 1981, DARPA
switched their focus to the TCP/IP protocol suite, placing it into the public domain. Shortly thereafter,
TCP/IP was adopted by the University of California at Berkeley, who began bundling it with their freely
distributed version of UNIX. In 1983, DARPA mandated that all new systems connecting to the ARPA
network had to use TCP/IP, thus guaranteeing its long-term success.
During the same time period, other government agencies like the National Science Foundation (NSF) were
building their own networks, as were private regional network service providers. These other networks also
used TCP/IP as their native protocol, since it was completely "open" as well as readily available on a
number of different platforms.
When these various regional and government networks began connecting to each other, the term "Internet"
came into use. To "internet" (with a lowercase "i") means to interconnect networks. You can create an
internet of Macintosh networks using AppleTalk and some routers, for example. The term "Internet" (with
a capital "I") refers to the global network of TCP/IP-based systems, originally consisting of ARPA and
some regional networks.
INTERNET PROTOCOL: The Internet Protocol Suite (commonly TCP/IP) is the set of
communications protocols used for the Internet and other similar networks. It is named after two of the
most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP),
which were the first two networking protocols defined in this standard. IP, developed in the 1970s, is the
primary network protocol used on the Internet. On the Internet and many other networks, IP is often used
together with TCP and referred to interchangeably as TCP/IP.
IP supports unique addressing for computers on a network. Most networks use the IP version 4 (IPv4)
standard that features IP addresses four bytes (32 bits) in length. The newer IP version 6 (IPv6) standard
features addresses 16 bytes (128 bits) in length.
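The 32-bit versus 128-bit distinction can be checked directly with Python's standard `ipaddress` module; the two sample addresses below are arbitrary (the IPv6 one uses the 2001:db8:: documentation prefix):

```python
import ipaddress

# An IPv4 address occupies 4 bytes (32 bits) ...
v4 = ipaddress.ip_address("192.168.10.1")
assert v4.version == 4 and len(v4.packed) == 4

# ... while an IPv6 address occupies 16 bytes (128 bits)
v6 = ipaddress.ip_address("2001:db8::1")
assert v6.version == 6 and len(v6.packed) == 16

print(int(v4))   # the 32-bit integer behind the dotted-decimal notation
```

The `packed` attribute is the raw network-byte-order form of the address, which is what actually appears in the IP header.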
Data on an IP network is organized into IP packets. Each IP packet includes both a header (that specifies
source, destination, and other information about the data) and the message data itself.
IP corresponds to the Network layer (Layer 3) in the OSI model, whereas TCP corresponds to the Transport
layer (Layer 4) in OSI. In other words, the term TCP/IP refers to network communications where the TCP
transport is used to deliver data across IP networks.
The average person on the Internet works in a predominantly TCP/IP environment. Web browsers, for
example, use TCP/IP to communicate with Web servers. Today's IP networking represents a synthesis of
several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs (Local
Area Networks), which emerged in the mid- to late-1980s, together with the invention of the World Wide
Web by Tim Berners-Lee in 1989 (which exploded with the availability of the first popular web
browser, Mosaic).
The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a
set of problems involving the transmission of data, and provides a well-defined service to the upper layer
protocols based on using services from some lower layers. Upper layers are logically closer to the user and
deal with more abstract data, relying on lower layer protocols to translate data into forms that can
eventually be physically transmitted.
The User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer
protocol. It is defined by RFC 768 written by John Postel. It provides a best-effort datagram service to an
End System (IP host).
The service provided by UDP is an unreliable service that provides no guarantees for delivery and no
protection from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)).
The simplicity of UDP reduces the overhead from using the protocol and the services may be adequate in
many cases.
UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer
protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do
not establish end-to-end connections between communicating end systems. UDP communication
consequently does not incur connection establishment and teardown overheads, and there is minimal
associated end-system state. Because of these characteristics, UDP can offer a very efficient communication
transport to some applications, but it has no inherent congestion control or reliability. On many platforms,
applications can send UDP datagrams at the line rate of the link interface, which is often much greater than
the available path capacity; doing so would contribute to congestion along the path, so applications need to
be designed responsibly (RFC 5405).
One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint encapsulates the
packets of another protocol inside UDP datagrams and transmits them to another tunnel endpoint, which
decapsulates the UDP datagrams and forwards the original packets contained in the payload. Tunnels
establish virtual links that appear to directly connect locations that are distant in the physical Internet
topology, and can be used to create virtual (private) networks. Using UDP as a tunneling protocol is
attractive when the payload protocol is not supported by middleboxes that may exist along the path,
because many middleboxes support UDP transmissions.
UDP does not provide any communications security. Applications that need to protect their
communications against eavesdropping, tampering, or message forgery therefore need to separately provide
security services using additional protocol mechanisms.
Protocol Header
A computer may send UDP packets without first establishing a connection to the recipient. A UDP
datagram is carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for
IPv4 and 65,527 bytes for IPv6. The transmission of large IP packets usually requires IP fragmentation.
Fragmentation decreases communication reliability and efficiency and should therefore be avoided.
To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and
forwards the data together with the header for transmission by the IP network layer.
The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI)
The UDP header consists of four fields each of 2 bytes in length:
• Source Port (UDP packets from a client use this as a service access point (SAP) to indicate the
session on the local client that originated the packet. UDP packets from a server carry the server
SAP in this field)
• Destination Port (UDP packets from a client use this as a service access point (SAP) to indicate
the service required from the remote server. UDP packets from a server carry the client SAP in
this field)
• UDP length (The number of bytes comprising the combined UDP header information and payload
data)
• UDP Checksum (A checksum to verify that the end to end data has not been corrupted by routers
or bridges in the network or by the processing in an end system. The algorithm to compute the
checksum is the Standard Internet Checksum algorithm. This allows the receiver to verify that it
was the intended destination of the packet, because it covers the IP addresses, port numbers and
protocol number, and it verifies that the packet is not truncated or padded, because it covers the
size field. Therefore, this protects an application against receiving corrupted payload data in place
of, or in addition to, the data that was sent. In the cases where this check is not required, the value
of 0x0000 is placed in this field, in which case the data is not checked by the receiver. )
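The four 2-byte fields can be packed exactly as described using Python's `struct` module; the port numbers and payload below are arbitrary examples, and the checksum is left as 0x0000 (the "not checked" value mentioned above) to keep the sketch short:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the four 2-byte UDP header fields (RFC 768) in network byte order.
    Checksum is left as 0x0000, meaning the receiver will not check the data."""
    length = 8 + len(payload)            # UDP length = 8-byte header + payload
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

hdr = build_udp_header(12345, 53, b"query")
assert len(hdr) == 8                     # the header is always 8 bytes of PCI

src, dst, length, csum = struct.unpack("!HHHH", hdr)
assert (src, dst, length, csum) == (12345, 53, 13, 0)
```

The `!` format prefix selects network (big-endian) byte order, which is how all the header fields travel on the wire.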
As with other transport protocols, the UDP header and data are not processed by Intermediate Systems (IS)
in the network, and are delivered to the final destination in the same form as originally transmitted.
At the final destination, the UDP protocol layer receives packets from the IP network layer. These are
checked using the checksum (when >0, this checks correct end-to-end operation of the network service)
and all invalid PDUs are discarded. UDP does not make any provision for error reporting if the packets are
not delivered. Valid data are passed to the appropriate session layer protocol identified by the source and
destination port numbers (i.e. the session service access points).
UDP and UDP-Lite also may be used for multicast and broadcast, allowing senders to transmit to multiple
receivers.
Using UDP
Application designers are generally aware that UDP does not provide any reliability, e.g., it does not
retransmit any lost packets. Often, this is a main reason to consider UDP as a transport. Applications that
do require reliable message delivery therefore need to implement appropriate protocol mechanisms in their
applications (e.g. tftp).
UDP's best effort service does not protect against datagram duplication, i.e., an application may receive
multiple copies of the same UDP datagram. Application designers therefore need to verify that their
application gracefully handles datagram duplication and may need to implement mechanisms to detect
duplicates.
The Internet may also significantly delay some packets with respect to others, e.g., due to routing
transients, intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at
the receiver in an order different from the transmission order. Applications that require ordered delivery
must restore datagram ordering themselves.
The burden of needing to code all these protocol mechanisms can be avoided by using TCP!
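The connectionless, message-passing nature of UDP described in this section can be seen in a minimal sketch using Python's `socket` API over the loopback interface (delivery is only dependable here because loopback rarely drops packets; on a real path the datagram could be lost, duplicated or reordered):

```python
import socket

# Receiver: bind to an OS-chosen ephemeral port on the loopback interface
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()          # (host, port) the receiver is listening on

# Sender: no connection establishment or teardown, just sendto()
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram 1", addr)

# recvfrom() returns one whole datagram plus the sender's address
data, sender = rx.recvfrom(1024)
assert data == b"datagram 1"
tx.close()
rx.close()
```

Note there is no connect/accept handshake anywhere: each `sendto()` is an independent datagram, which is precisely why applications must handle loss, duplication and reordering themselves.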
LECTURE NO. 9 READINGS: -DO-
Ports
Generally, clients set the source port number to a unique number that they choose themselves - usually
based on the program that started the connection. Since this number is returned by the server in responses,
this lets the sender know which "conversation" incoming packets are to be sent to. The destination port of
packets sent by the client is usually set to one of a number of well-known ports. These usually correspond
to one of a number of different applications, e.g. port 23 is used for telnet, and port 80 is used for web
servers.
A server process (program), listens for UDP packets received with a particular well-known port number
and tells its local UDP layer to send packets matching this destination port number to the server program. It
determines which client these packets come from by examining the received IP source address and the
received unique UDP source port number. Any responses which the server needs to send back to a client
are sent with the source port number of the server (the well-known port number) and the destination port
selected by the client. Most people do not memorise the well-known ports; instead they look them up in a
table (e.g. see below).
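The well-known port mappings mentioned above (telnet on 23, HTTP on 80) are recorded in the operating system's services database, which programs can query instead of hard-coding numbers. A small Python sketch, assuming a standard services file is present:

```python
import socket

# Look up well-known destination ports by service name
assert socket.getservbyname("telnet", "tcp") == 23
assert socket.getservbyname("http", "tcp") == 80

# The reverse lookup maps a port number back to a service name
print(socket.getservbyport(80, "tcp"))
```

Using the lookup rather than literal port numbers keeps application code readable and matches the "look them up in a table" practice described above.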
TCP:
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite.
TCP is so central that the entire suite is often referred to as "TCP/IP." Whereas IP handles lower-level
transmissions from computer to computer as a message makes its way across the Internet, TCP operates at
a higher level, concerned only with the two end systems, for example a Web browser and a Web server. In
particular, TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer
to another program on another computer. Besides the Web, other common applications of TCP include e-
mail and file transfer. Among its management tasks, TCP controls message size, the rate at which messages
are exchanged, and network traffic congestion.
The 28 IP multicast bits are called the multicast group ID. A host group listening to a multicast can span
multiple networks. Some host group addresses are assigned by the Internet Assigned Numbers Authority
(IANA); some of the assignments are listed below:
• 224.0.0.1 = All systems on the subnet
• 224.0.0.2 = All routers on the subnet
• 224.0.1.1 = Network time protocol (NTP)
• 224.0.0.9 = For RIPv2
• 224.0.1.2 = Silicon Graphics' dogfight application
Network Addressing
IPv4 addresses are broken into 4 octets separated by dots, a format called dotted decimal notation. An
octet is a byte consisting of 8 bits. IPv4 addresses have the following form:
192.168.10.1
There are two parts of an IP address:
• Network ID
• Host ID
The various classes of networks specify additional or fewer octets to designate the network ID versus the
host ID.
Class   1st Octet   2nd Octet   3rd Octet   4th Octet
A       Net ID      Host ID     Host ID     Host ID
B       Net ID      Net ID      Host ID     Host ID
C       Net ID      Net ID      Net ID      Host ID
When a network is set up, a netmask is also specified. The netmask determines the class of the network as
shown below, except for CIDR. When the netmask is set up, it specifies some number of most significant
bits with a value of 1, and the rest have values of 0. The most significant part of the netmask, with bits set
to 1, specifies the network address, and the lower part of the address specifies the host address. When
setting addresses on a network, remember there can be no host address of 0 (no host address bits set), and
there can be no host address with all bits set.
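The network/host split performed by a netmask is just a bitwise AND (and AND-with-complement); the following sketch applies the 255.255.255.0 mask from the example form above to an arbitrary sample address:

```python
import ipaddress

ip = int(ipaddress.ip_address("192.168.10.1"))
mask = int(ipaddress.ip_address("255.255.255.0"))

network_part = ip & mask                    # high bits: the network address
host_part = ip & (~mask & 0xFFFFFFFF)       # low bits: the host address

assert str(ipaddress.ip_address(network_part)) == "192.168.10.0"
assert host_part == 1

# A host part of all 0s (the network itself) or all 1s (broadcast) is reserved
assert host_part not in (0, ~mask & 0xFFFFFFFF)
```

The final assertion is exactly the rule stated above: a usable host address may not have all its host bits clear or all of them set.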
Class A-E networks
The addressing scheme for Class A through E networks is shown below. Note: we use the 'x' character here
to denote "don't care" positions, which include all possible numbers at that location; it is often used to
denote networks.
Network Type Address Range Normal Netmask Comments
Class A 001.x.x.x to 126.x.x.x 255.0.0.0 For very large networks
Class B 128.1.x.x to 191.254.x.x 255.255.0.0 For medium size networks
Class C 192.0.1.x to 223.255.254.x 255.255.255.0 For small networks
Class D 224.x.x.x to 239.255.255.255 Used to support multicasting
Class E 240.x.x.x to 247.255.255.255
RFCs 1518 and 1519 define a system called Classless Inter-Domain Routing (CIDR), which is used to
allocate IP addresses more efficiently. It may be used with subnet masks to establish networks rather than
the class system shown above. Under the class system a Class C network always has an 8-bit host part;
with CIDR, a block might instead be allocated with, for example, a 12-bit host part.
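The difference between the classful view and CIDR can be demonstrated with the `ipaddress` module; the 10.0.0.0 block and the /12 prefix below are arbitrary choices for illustration:

```python
import ipaddress

# The classful view: 10.0.0.0 is one huge Class A network (/8)
classful = ipaddress.ip_network("10.0.0.0/8")
assert classful.num_addresses == 2 ** 24

# With CIDR the same space can be carved into /12 blocks, ignoring class rules
blocks = list(classful.subnets(new_prefix=12))
assert len(blocks) == 16                     # 2^(12-8) = 16 subnets
assert str(blocks[0]) == "10.0.0.0/12"
```

CIDR's efficiency gain comes from exactly this freedom: an organisation that needs a million addresses can get a /12 instead of being forced to take a whole Class A.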
There are some network addresses reserved for private use by the Internet Assigned Numbers Authority
(IANA) which can be hidden behind a computer which uses IP masquerading to connect the private
network to the internet. There are three sets of addresses reserved. These address are shown below:
• 10.x.x.x
• 172.16.x.x - 172.31.x.x
• 192.168.x.x
Other reserved or commonly used addresses:
• 127.0.0.1 - The loopback interface address. All 127.x.x.x addresses are used by the loopback
interface which copies data from the transmit buffer to the receive buffer of the NIC when used.
• 0.0.0.0 - This is reserved for hosts that don't know their address and use BOOTP or DHCP
protocols to determine their addresses.
• 255 - An octet value of 255 in the host portion of an address is never assigned to a host; it is
reserved for broadcast addressing. Please remember, this is exclusive of CIDR. When using CIDR,
the host bits of an address can still never be all ones.
To further illustrate, a few examples of valid and invalid addresses are listed below:
1. Valid addresses:
o 10.1.0.1 through 10.1.0.254
o 10.0.0.1 through 10.0.0.254
o 10.0.1.1 through 10.0.1.254
2. Invalid addresses:
o 10.1.0.0 - Host IP can't be 0.
o 10.1.0.255 - Host IP can't be 255.
o 10.123.255.4 - No network or subnet can have a value of 255.
o 0.12.16.89 - No Class A network can have an address of 0.
o 255.9.56.45 - No network address can be 255.
o 10.34.255.1 - No network address can be 255.
Network/Netmask specification
Sometimes you may see a network interface card (NIC) IP address specified in the following manner:
192.168.1.1/24
The first part indicates the IP address of the NIC which is "192.168.1.1" in this case. The second part "/24"
indicates the netmask value meaning in this case that the first 24 bits of the netmask are set. This makes the
netmask value 255.255.255.0. If the last part of the line above were "/16", the netmask would be
255.255.0.0.
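The /N-to-dotted-netmask conversion described above is mechanical and can be sketched in Python (the helper name `prefix_to_netmask` is our own, introduced just for this illustration):

```python
import ipaddress

def prefix_to_netmask(prefix: int) -> str:
    """Convert a /N prefix length to its dotted-decimal netmask."""
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask)

assert prefix_to_netmask(24) == "255.255.255.0"   # the /24 case from the text
assert prefix_to_netmask(16) == "255.255.0.0"     # the /16 case from the text
assert prefix_to_netmask(8) == "255.0.0.0"
```

Under the hood /24 simply means "the top 24 bits of the mask are 1", which is why the first three octets come out as 255.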
IP ADDRESS CLASSES
The octets serve a purpose other than simply separating the numbers. They are used to create classes of IP
addresses that can be assigned to a particular business, government or other entity based on size and need.
The octets are split into two sections: Net and Host. The Net section always contains the first octet. It is
used to identify the network that a computer belongs to. Host (sometimes referred to as Node) identifies the
actual computer on the network. The Host section always contains the last octet. There are five IP classes
plus certain special addresses:
• Default Network - The IP address of 0.0.0.0 is used for the default network.
• Class A - This class is for very large networks, such as a major international company might have.
IP addresses with a first octet from 1 to 126 are part of this class. The other three octets are used to
identify each host. This means that there are 126 Class A networks, each with 16,777,214 (2^24 - 2)
possible hosts, for a total of 2,147,483,648 (2^31) unique IP addresses. Class A networks account for
half of the total available IP addresses. In Class A networks, the high-order bit (the very first
binary digit) in the first octet is always 0.
Net Host or Node
115. 24.53.107
• Loopback - The IP address 127.0.0.1 is used as the loopback address. This means that it is used
by the host computer to send a message back to itself. It is commonly used for troubleshooting and
network testing.
Other IP Classes
• Class B - Class B is used for medium-sized networks. A good example is a large college campus.
IP addresses with a first octet from 128 to 191 are part of this class. Class B addresses also include
the second octet as part of the Net identifier. The other two octets are used to identify each host.
This means that there are 16,384 (2^14) Class B networks, each with 65,534 (2^16 - 2) possible hosts,
for a total of 1,073,741,824 (2^30) unique IP addresses. Class B networks make up a quarter of the
total available IP addresses. Class B networks have a first bit value of 1 and a second bit value of 0
in the first octet.
Net Host or Node
145.24. 53.107
• Class C - Class C addresses are commonly used for small to mid-size businesses. IP addresses
with a first octet from 192 to 223 are part of this class. Class C addresses also include the second
and third octets as part of the Net identifier. The last octet is used to identify each host. This means
that there are 2,097,152 (2^21) Class C networks, each with 254 (2^8 - 2) possible hosts, for a total of
536,870,912 (2^29) unique IP addresses. Class C networks make up an eighth of the total available
IP addresses. Class C networks have a first bit value of 1, a second bit value of 1 and a third bit
value of 0 in the first octet.
Net Host or Node
195.24.53. 107
• Class D - Used for multicasts, Class D is slightly different from the first three classes. It has a first
bit value of 1, second bit value of 1, third bit value of 1 and fourth bit value of 0. The other 28 bits
are used to identify the group of computers the multicast message is intended for. Class D
accounts for 1/16th (268,435,456 or 2^28) of the available IP addresses.
Net Host or Node
224. 24.53.107
• Class E - Class E is used for experimental purposes only. Like Class D, it is different from the
first three classes. It has a first bit value of 1, second bit value of 1, third bit value of 1 and fourth
bit value of 1. The remaining 28 bits are reserved for experimental use. Class E accounts for 1/16th
(268,435,456 or 2^28) of the available IP addresses.
Net Host or Node
240. 24.53.107
• Broadcast - Messages that are intended for all computers on a network are sent as broadcasts.
These messages always use the IP address 255.255.255.255.
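The class rules above reduce to a test on the first octet, which is easy to capture in a small Python function (the function name and return labels are our own, written to mirror the Net/Host examples in the text):

```python
def ip_class(address: str) -> str:
    """Classify an IPv4 address by its first octet, per the classful scheme."""
    first = int(address.split(".")[0])
    if first == 0:
        return "default network"
    if first == 127:
        return "loopback"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (experimental)"

# The Net/Host example addresses used in the text above
assert ip_class("115.24.53.107") == "A"
assert ip_class("145.24.53.107") == "B"
assert ip_class("195.24.53.107") == "C"
assert ip_class("224.24.53.107") == "D (multicast)"
```

Equivalently, the class can be read off the leading bits of the first octet (0 for A, 10 for B, 110 for C, 1110 for D, 1111 for E), which is what the ranges encode.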
LECTURE NO. 11 READINGS:-DO-
LECTURE NO. 12 READINGS:- A-PAGE 486,B-PAGE 449
SUBNET ADDRESSING:
Subnetting is the process of breaking down a main class A, B, or C network into subnets for routing
purposes. A subnet mask is the same basic thing as a netmask with the only real difference being that you
are breaking a larger organizational network into smaller parts, and each smaller section will use a different
set of address numbers. This will allow network packets to be routed between subnetworks. When doing
subnetting, the number of borrowed subnet bits determines the number of available subnets: two to the
power of the number of borrowed bits, minus two. When setting up subnets the
following must be determined:
• Number of segments
• Hosts per segment
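The "two to the power of the bits, minus two" arithmetic above applies both to the subnet count and to the hosts per subnet, and is worth making concrete. A sketch, using the classic convention (followed in this text) that the all-zeros and all-ones subnets are excluded; the example figures are an arbitrary Class C scenario:

```python
def subnet_counts(borrowed_bits: int, host_bits_remaining: int):
    """Classic 2^n - 2 subnet/host arithmetic, as described in the text."""
    subnets = 2 ** borrowed_bits - 2         # all-0s and all-1s subnets excluded
    hosts = 2 ** host_bits_remaining - 2     # network and broadcast excluded
    return subnets, hosts

# Borrow 3 bits from a Class C host field: 8 - 3 = 5 host bits remain,
# giving 6 usable subnets of 30 usable hosts each
assert subnet_counts(3, 5) == (6, 30)
```

This is exactly the calculation needed to match "number of segments" and "hosts per segment" against a candidate subnet mask.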
Subnetting provides the following advantages:
• Network traffic isolation - There is less network traffic on each subnet.
• Simplified Administration - Networks may be managed independently.
• Improved security - Subnets can isolate internal networks so they are not visible from external
networks.
A 14 bit subnet mask on a class B network only allows 2 node addresses for WAN links. A routing
algorithm like OSPF or EIGRP must be used for this approach. These protocols allow the variable length
subnet masks (VLSM). RIP and IGRP don't support this. Subnet mask information must be transmitted on
the update packets for dynamic routing protocols for this to work. The router subnet mask is different than
the WAN interface subnet mask.
One network ID is required by each of:
• Subnet
• WAN connection
One host ID is required by each of:
• Each NIC on each host.
• Each router interface.
Types of subnet masks:
• Default - Fits into a Class A, B, or C network category
• Custom - Used to break a default network such as a Class A, B, or C network into subnets.
Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it
is convenient for most Class B networks to be internally managed as a much smaller and simpler version of
the larger network organizations. It is common to subdivide the two bytes available for internal assignment
into a one byte department number and a one byte workstation ID.
The enterprise network is built using commercially available TCP/IP router boxes. Each router has small
tables with 255 entries to translate the one byte department number into selection of a destination Ethernet
connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent
through the national and New England regional networks based on the 130.132 part of the number.
Arriving at Yale, the 59 department ID selects an Ethernet connector in the C& IS building. The 234 selects
a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments
are added, but it is not affected by changes outside the university or the movement of machines within the
department.
INTERNET PROTOCOL CONTROL PROTOCOL (IPCP):
In computer networking, Internet Protocol Control Protocol (IPCP) is a network control protocol for
establishing and configuring Internet Protocol over a Point-to-Point Protocol link. IPCP uses the same
packet exchange mechanism as the Link Control Protocol. IPCP packets may not be exchanged until PPP
has reached the Network-Layer Protocol phase, and any IPCP packets received before this phase is reached
should be silently discarded.
IPCP packet format:
Code (1 byte) | Identifier (1 byte) | Length (2 bytes) | IP Information (variable)
IPCP packet encapsulated in a PPP frame:
Flag | Address | Control | Protocol = 8021 (hex) | Payload (and padding) | FCS | Flag
The Code field of an IPCP packet identifies one of the following packet types:
• Configure-request
• Configure-ack
• Configure-nak
• Configure-reject
• Terminate-request
• Terminate-ack
• Code-reject
After the configuration is done, the link is able to carry IP data as the payload of the PPP frame. The protocol
field value is then 0021 (hex); this code indicates that IP data is being carried.
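A minimal sketch of parsing the 4-byte IPCP header (Code, Identifier, Length) that precedes the option data, assuming the code values listed above; the sample packet bytes are invented:

```python
import struct

# Code values for IPCP packets (shared with LCP's exchange mechanism).
IPCP_CODES = {
    1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
    4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
    7: "Code-Reject",
}

def parse_ipcp_header(packet: bytes):
    # Code (1 byte), Identifier (1 byte), Length (2 bytes, big-endian)
    code, ident, length = struct.unpack("!BBH", packet[:4])
    return IPCP_CODES.get(code, "Unknown"), ident, length

# A hypothetical Configure-Request, identifier 1, total length 10,
# followed by 6 bytes of option data.
pkt = bytes([1, 1, 0, 10]) + b"\x03\x06\xc0\xa8\x00\x01"
print(parse_ipcp_header(pkt))  # ('Configure-Request', 1, 10)
```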
LECTURE NO.13 READINGS: A-PAGE 514
ARP:
The address resolution protocol (arp) is a protocol used by the Internet Protocol (IP) [RFC826], specifically
IPv4, to map IP network addresses to the hardware addresses used by a data link protocol. The protocol
operates below the network layer as a part of the interface between the OSI network and OSI link layer. It
is used when IPv4 is used over Ethernet.
The term address resolution refers to the process of finding an address of a computer in a network. The
address is "resolved" using a protocol in which a piece of information is sent by a client process executing
on the local computer to a server process executing on a remote computer. The information received by the
server allows the server to uniquely identify the network system for which the address was required and
therefore to provide the required address. The address resolution procedure is completed when the client
receives a response from the server containing the required address.
An Ethernet network uses two hardware addresses which identify the source and destination of each frame
sent on the Ethernet. A destination address of all 1's identifies a broadcast packet, to be sent to all
connected computers. The hardware address is also known as the Medium Access Control (MAC) address,
in reference to the standards which define Ethernet. Each computer network interface card is allocated a
globally unique 6 byte link address when the factory manufactures the card (stored in a PROM). This is the
normal link source address used by an interface. A computer sends all packets which it creates with its own
hardware source link address, and receives all packets which match the same hardware address in the
destination field or one (or more) pre-selected broadcast/multicast addresses.
The Ethernet address is a link layer address and is dependent on the interface card which is used. IP
operates at the network layer and is not concerned with the link addresses of individual nodes which are to
be used. The address resolution protocol (arp) is therefore used to translate between the two types of
address. The arp client and server processes operate on all computers using IP over Ethernet. The processes
are normally implemented as part of the software driver that drives the network interface card.
There are four types of arp messages that may be sent by the arp protocol. These are identified by four
values in the "operation" field of an arp message. The types of message are:
1. ARP request
2. ARP reply
3. RARP request
4. RARP reply
The format of an arp message is shown below:
Format of an arp message used to resolve the remote MAC Hardware Address (HA)
To reduce the number of address resolution requests, a client normally caches resolved addresses for a
(short) period of time. The arp cache is of a finite size, and would become full of incomplete and obsolete
entries for computers that are not in use if it was allowed to grow without check. The arp cache is therefore
periodically flushed of all entries. This deletes unused entries and frees space in the cache. It also removes
any unsuccessful attempts to contact computers which are not currently running.
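The cache behaviour described above can be sketched as a small dictionary with per-entry timestamps (a simplified model; real stacks typically use per-entry timers rather than the fixed lifetime assumed here, and the addresses below are invented):

```python
import time

# A minimal ARP-style cache: entries expire after a fixed lifetime,
# mimicking the periodic flushing of stale entries described above.
class ArpCache:
    def __init__(self, lifetime=60.0):
        self.lifetime = lifetime
        self._entries = {}  # ip -> (mac, time inserted)

    def add(self, ip, mac):
        self._entries[ip] = (mac, time.monotonic())

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None:
            return None  # not cached: an arp request would be needed
        mac, when = entry
        if time.monotonic() - when > self.lifetime:
            del self._entries[ip]  # expired: flush the stale entry
            return None
        return mac

cache = ArpCache(lifetime=60.0)
cache.add("192.0.2.7", "02:00:00:aa:bb:cc")
print(cache.lookup("192.0.2.7"))   # 02:00:00:aa:bb:cc
print(cache.lookup("192.0.2.99"))  # None
```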
Example of use of the Address Resolution Protocol (arp)
The figure below shows the use of arp when a computer tries to contact a remote computer on the same
LAN (known as "sysa") using the "ping" program. It is assumed that no previous IP datagrams have been
received from this computer, and therefore arp must first be used to identify the MAC address of the remote
computer.
The arp request message ("who is X.X.X.X tell Y.Y.Y.Y", where X.X.X.X and Y.Y.Y.Y are IP addresses)
is sent using the Ethernet broadcast address, and an Ethernet protocol type of value 0x806. Since it is
broadcast, it is received by all systems in the same collision domain (LAN). This ensures that, if the target
of the query is connected to the network, it will receive a copy of the query. Only this system responds; the
other systems silently discard the packet.
The target system forms an arp response ("X.X.X.X is hh:hh:hh:hh:hh:hh", where hh:hh:hh:hh:hh:hh is the
Ethernet source address of the computer with the IP address of X.X.X.X). This packet is unicast to the
address of the computer sending the query (in this case Y.Y.Y.Y). Since the original request also included
the hardware address (Ethernet source address) of the requesting computer, this is already known, and
doesn't require another arp message to find this out.
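The request/reply exchange above can be illustrated by building the 28-byte ARP request body with Python's struct module (a sketch only; the MAC and IP values are invented, and actually sending the frame would additionally require a raw socket and the 14-byte Ethernet header with type 0x806):

```python
import struct

# Build the 28-byte ARP request body for "who has X tell Y"
# over Ethernet/IPv4: hardware type 1, protocol type 0x0800,
# address lengths 6 and 4, opcode 1 (request).
def build_arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,        # hardware type: Ethernet
        0x0800,   # protocol type: IPv4
        6, 4,     # hardware / protocol address lengths
        1,        # opcode 1 = ARP request
        sender_mac,
        bytes(map(int, sender_ip.split("."))),
        b"\x00" * 6,  # target MAC unknown: zero-filled
        bytes(map(int, target_ip.split("."))),
    )

frame = build_arp_request(b"\x02\x00\x00\x01\x02\x03",
                          "192.0.2.1", "192.0.2.7")
print(len(frame))  # 28
```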
RARP:
RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local area
network can request to learn its IP address from a gateway server's Address Resolution Protocol (ARP)
table or cache. A network administrator creates a table in a local area network's gateway router that maps
the physical machine (or Media Access Control - MAC address) addresses to corresponding Internet
Protocol addresses. When a new machine is set up, its RARP client program requests its IP address from
the RARP server on the router. Assuming that an entry has been set up in the router's table, the RARP
server returns the IP address to the machine, which can store it for future use.
RARP is available for Ethernet, Fiber Distributed-Data Interface, and Token Ring LANs.
LECTURE NO.15 READINGS: A-PAGE 525
ICMP:
Internet Control Message Protocol
Internet Control Message Protocol (ICMP) defined by RFC 792 and RFC 1122 is used for network error
reporting and generating messages that require attention. The errors reported by ICMP are generally related
to datagram processing. ICMP only reports errors involving fragment 0 of any fragmented datagrams. The
IP, UDP or TCP layer will usually take action based on ICMP messages. ICMP generally belongs to the IP
layer of TCP/IP but relies on IP for support at the network layer. ICMP messages are encapsulated inside IP
datagrams.
ICMP will report the following network information:
• Timeouts
• Network congestion
• Network errors such as an unreachable host or network.
The ping command is also supported by ICMP, and this can be used to debug network problems.
ICMP Messages:
The ICMP message consists of an 8 bit type, an 8 bit code, an 8 bit checksum, and contents which vary
depending on code and type. The below table is a list of ICMP messages showing the type and code of the
messages and their meanings.
Type Codes Description Purpose
0 0 Echo reply Query
3 0 Network Unreachable Error
3 1 Host Unreachable Error
3 2 Protocol Unreachable Error
3 3 Port Unreachable Error
3 4 Fragmentation needed with don't fragment bit set Error
3 5 Source route failed Error
3 6 Destination network unknown Error
3 7 Destination host unknown Error
3 8 Source host isolated Error
3 9 Destination network administratively prohibited Error
3 10 Destination host administratively prohibited Error
3 11 Network Unreachable for TOS Error
3 12 Host Unreachable for TOS Error
3 13 Communication administratively prohibited by filtering Error
3 14 Host precedence violation Error
3 15 Precedence cutoff in effect Error
4 0 Source quench Error
5 0 Redirect for network Error
5 1 Redirect for host Error
5 2 Redirect for type of service and network Error
5 3 Redirect for type of service and host Error
8 0 Echo request Query
9 0 Normal router advertisement Query
9 16 Router does not route common traffic Query
10 0 Router Solicitation Query
11 0 Time to live is zero during transit Error
11 1 Time to live is zero during reassembly Error
12 0 IP header bad Error
12 1 Required option missing Error
12 2 Bad length Error
13 0 Timestamp request Query
14 0 Timestamp reply Query
15 0 Information request Query
16 0 Information reply Query
17 0 Address mask request Query
18 0 Address mask reply Query
ICMP is used for many different functions, the most important of which is error reporting. Some of these
are "port unreachable", "host unreachable", "network unreachable", "destination network unknown", and
"destination host unknown". Some not related to errors are:
• Timestamp request and reply allows one system to ask another for the current time.
• Address mask request and reply is used by a diskless workstation to get its subnet mask at boot time.
• Echo request and echo reply is used by the ping program to test whether another unit will respond.
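The echo request used by ping can be sketched as follows; the checksum function implements the standard Internet checksum (one's-complement sum of 16-bit words), and a message with a correct checksum field verifies to zero:

```python
import struct

# Standard Internet checksum: one's-complement sum of 16-bit words.
def checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries
    total += total >> 16
    return ~total & 0xFFFF

# ICMP echo request: type 8, code 0, then checksum, identifier,
# sequence number, and an arbitrary payload.
def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(ident=1, seq=1)
print(checksum(msg))  # 0: a valid ICMP checksum verifies to zero
```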
LECTURE NO.16 READINGS: B-PAGE 579
DOMAIN NAME SYSTEM:
DNS (Domain Name System) is used on the Internet as well on many private networks. Networks using
Microsoft Active Directory directory service use DNS to resolve computer names and to locate computers
within their local networks and the Internet. Networks based on Windows 2000 Server and Windows
Server 2003 use DNS as a primary means of locating resources in Active Directory.
The domain namespace is the naming scheme that provides the hierarchical structure for the DNS database.
Each node, referred to as a domain, represents a partition of the DNS database. The DNS database is
indexed by name, so each domain must have a name. As you add domains to the hierarchy, the name of the
parent domain is added to its child domain (subdomain). A domain’s name identifies its position in the
hierarchy.
At the top of the DNS hierarchy, there is a single domain called the root domain, which is represented by a
single period (.).
Top level domains are grouped by organization type or geographic location. Top level domains are
coordinated by a central Internet naming authority (today ICANN) that controls the assignment of
domain names, among other things. Examples are .com, .gov and .net
Anyone can register a second level domain name. Second level domain names are registered to individuals
and organizations by a number of different domain registry companies. A second level name has two name
parts: a top level name and a unique second level name such as microsoft.com.
A DNS name server stores the zone database file. Name servers can store data for one zone or multiple
zones. A name server is said to have authority for the domain name space that the zone encompasses. One
name server contains the master zone database file, referred to as the primary zone database file, for the
specified zone. As a result, there must be at least one name server for a zone. Changes to a zone, such as
adding domains or hosts, are performed on the server that contains the primary zone database file.
Name resolution is the process of resolving names to IP addresses. It is similar to looking up a name in a
telephone book, in which the name is associated with a telephone number. For example, when you connect
to the Microsoft Web site, you use the name www.microsoft.com. DNS resolves www.microsoft.com to its
associated IP address. The mapping of names to IP addresses is stored in the DNS distributed database.
DNS name servers resolve forward and reverse lookup queries. A forward lookup query resolves a name to
an IP address, and a reverse lookup query resolves an IP address to a name. A name server can resolve a
query only for a zone for which it has authority. If a name server cannot resolve the query, it passes the
query to other name servers that can resolve it. The name server caches the query results to reduce the DNS
traffic on the network.
1. The client passes a forward lookup query for www.microsoft.com to its local name server.
2. The local name server checks its zone database file to determine whether it contains the name-to-IP
address mapping for the client query. The local name server does not have authority for the microsoft.com
domain, so it passes the query to one of the DNS root servers, requesting resolution of the host name. The
root name server sends back a referral to the com name server.
3. The local name server sends a request to a com name server, which responds with a referral to the
Microsoft name server.
4. The local name server sends a request to the Microsoft name server. Because the
Microsoft name server has authority for that portion of the domain namespace, when it receives the request,
it returns the IP address for www.microsoft.com to the local name server.
5. The local name server sends the IP address for www.microsoft.com to the client.
6. The name resolution is complete, and the client can access www.microsoft.com.
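The six steps above can be summarized as the sequence of zones consulted, a toy model of the referral chain (this only computes the name hierarchy; it is not a real resolver):

```python
# Walk the DNS hierarchy from the root toward the full name,
# producing the zones a local name server would be referred to.
def resolution_path(name: str):
    labels = name.rstrip(".").split(".")
    path = ["."]  # resolution starts at the root zone
    zone = ""
    for label in reversed(labels):  # com, then microsoft, then www
        zone = label + "." + zone if zone else label
        path.append(zone)
    return path

print(resolution_path("www.microsoft.com"))
# ['.', 'com', 'microsoft.com', 'www.microsoft.com']
```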
Structure of DNS
E-MAIL:
The birth of electronic mail (email) occurred in the early 1960s. The mailbox was a file in a user's home
directory that was readable only by that user. Primitive mail applications appended new text messages to
the bottom of the file, so the user had to wade through the constantly growing file to find any
particular message. This system was only capable of sending messages to users on the same system.
The first network transfer of an electronic mail message file took place in 1971 when a computer engineer
named Ray Tomlinson sent a test message between two machines via ARPANET — the precursor to the
Internet. Communication via email soon became very popular, comprising 75 percent of ARPANET's
traffic in less than two years.
Today, email systems based on standardized network protocols have evolved into some of the most widely
used services on the Internet. Red Hat Enterprise Linux offers many advanced applications to serve and
access email.
This chapter reviews modern email protocols in use today and some of the programs designed to send and
receive email.
Email Protocols
Today, email is delivered using a client/server architecture. An email message is created using a mail client
program. This program then sends the message to a server. The server then forwards the message to the
recipient's email server, where the message is then supplied to the recipient's email client.
To enable this process, a variety of standard network protocols allow different machines, often running
different operating systems and using different email programs, to send and receive email.
The following protocols discussed are the most commonly used in the transfer of email.
Mail Transport Protocols
Mail delivery from a client application to the server, and from an originating server to the destination
server, is handled by the Simple Mail Transfer Protocol (SMTP) .
Mail Access Protocols
There are two primary protocols used by email client applications to retrieve email from mail servers: the
Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP).
Unlike SMTP, both of these protocols require connecting clients to authenticate using a username and
password. By default, passwords for both protocols are passed over the network unencrypted.
POP
The default POP server under Red Hat Enterprise Linux is /usr/sbin/ipop3d and is provided by the imap
package. When using a POP server, email messages are downloaded by email client applications. By
default, most POP email clients are automatically configured to delete the message on the email server after
it has been successfully transferred, however this setting usually can be changed.
POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail
Extensions (MIME), which allow for email attachments.
POP works best for users who have one system on which to read email. It also works well for users who do
not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for
those with slow network connections, POP requires client programs upon authentication to download the
entire content of each message. This can take a long time if any messages have large attachments.
The most current version of the standard POP protocol is POP3.
There are, however, a variety of lesser-used POP protocol variants:
• APOP — POP3 with MD5 authentication. An encoded hash of the user's password is sent from
the email client to the server rather than sending an unencrypted password.
• KPOP — POP3 with Kerberos authentication. Refer to Chapter 18 Kerberos for more information
about Kerberos.
• RPOP — POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to
authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than
standard POP.
For added security, it is possible to use Secure Socket Layer (SSL) encryption for client authentication and
data transfer sessions. This can be enabled by using the ipop3s service or by using the /usr/sbin/stunnel
program. Refer to Section 12.5.1 Securing Communication for more information.
IMAP
The default IMAP server under Red Hat Enterprise Linux is /usr/sbin/imapd and is provided by the imap
package. When using an IMAP mail server, email messages remain on the server where users can read or
delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server
to organize and store email.
IMAP is particularly useful for those who access their email using multiple machines. The protocol is also
convenient for users connecting to the mail server via a slow connection, because only the email header
information is downloaded for messages until opened, saving bandwidth. The user also has the ability to
delete messages without viewing or downloading them.
For convenience, IMAP client applications are capable of caching copies of messages locally, so the user
can browse previously read messages when not directly connected to the IMAP server.
IMAP, like POP, is fully compatible with important Internet messaging standards, such as MIME, which
allow for email attachments.
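A hedged sketch of POP access using Python's standard poplib module (the TLS-wrapped variant, analogous to the ipop3s service mentioned above); the host and credentials are placeholders, and the function is only defined here, not run against a real server:

```python
import poplib

# Retrieve all waiting messages over POP3-with-SSL (port 995).
# host, user and password are placeholders, not a real account.
def fetch_all(host, user, password):
    conn = poplib.POP3_SSL(host)   # TLS-wrapped POP3 session
    conn.user(user)
    conn.pass_(password)           # plain POP sends this unencrypted
    count, _size = conn.stat()     # number and total size of messages
    messages = []
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)  # download message i
        messages.append(b"\r\n".join(lines))
    conn.quit()  # many clients then delete the messages server-side
    return messages
```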
Email Program Classifications
In general, all email applications fall into at least one of three classifications. Each classification plays a
specific role in the process of moving and managing email messages. While most users are only aware of
the specific email program they use to receive and send messages, each one is important for ensuring that
email arrives at the correct destination.
Mail Transfer Agent
A Mail Transfer Agent (MTA) transfers email messages between hosts using SMTP. A message may
involve several MTAs as it moves to its intended destination.
While the delivery of messages between machines may seem rather straightforward, the entire process of
deciding if a particular MTA can or should accept a message for delivery is quite complicated. In addition,
due to problems from spam, use of a particular MTA is usually restricted by the MTA's configuration or
access configuration for the network on which the MTA resides.
Many modern email client programs can act as an MTA when sending email. However, this action should
not be confused with the role of a true MTA. The sole reason email client programs are capable of sending
email like an MTA is because the host running the application does not have its own MTA. This is
particularly true for email client programs on non-Unix-based operating systems. However, these client
programs only send outbound messages to an MTA they are authorized to use and do not directly deliver
the message to the intended recipient's email server.
Since Red Hat Enterprise Linux installs two MTAs, Sendmail and Postfix, email client programs are often
not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called
Fetchmail.
HTTP:
Figure: an HTTP request made using telnet, showing the request, the response headers, and the response body.
HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired action to be
performed on the identified resource. What this resource represents, whether pre-existing data or data that
is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to
a file or the output of an executable residing on the server.
HEAD
Asks for the response identical to the one that would correspond to a GET request, but without the response
body. This is useful for retrieving meta-information written in response headers, without having to
transport the entire content.
GET
Requests a representation of the specified resource. Note that GET should not be used for operations that
cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET
may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a
request should cause. See safe methods below.
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in
the body of the request. This may result in the creation of a new resource, the update of existing
resources, or both.
PUT
Uploads a representation of the specified resource.
DELETE
Deletes the specified resource.
TRACE
Echoes back the received request, so that a client can see what intermediate servers are adding or changing
in the request.
OPTIONS
Returns the HTTP methods that the server supports for specified URL. This can be used to check the
functionality of a web server by requesting '*' instead of a specific resource.
CONNECT
Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted
communication (HTTPS) through an unencrypted HTTP proxy.
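The methods above map directly onto the raw request lines a client sends; a minimal sketch of composing such a request (the host name is just an example, and a real client would then write this over a TCP connection):

```python
# Compose the raw text of an HTTP/1.1 request: a request line,
# a mandatory Host header, optional extra headers, and a blank line.
def build_request(method, path, host, headers=None):
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"  # blank line ends headers

# A HEAD request fetches only the headers, not the body.
req = build_request("HEAD", "/", "example.com")
print(req)
```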
Safe methods
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they
are intended only for information retrieval and should not change the state of the server. In other words,
they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving
of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to
the context of the application's state should therefore be considered safe.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side
effects either on the server, or external side effects such as financial transactions or transmission of email.
Such methods are therefore not usually used by conforming web robots or web crawlers, which tend to
make requests without regard to context or consequences.
Despite the prescribed safety of GET requests, in practice their handling by the server is not technically
limited in any way, and careless or deliberate programming can just as easily (or more easily, due to lack of
user agent precautions) cause non-trivial changes on the server. This is discouraged, because it can cause
problems for Web caching, search engines and other automated agents, which can make unintended
changes on the server.
HTTP versions
HTTP has evolved into multiple, mostly backwards-compatible protocol versions. RFC 2145 describes the
use of HTTP version numbers. The client states, at the beginning of the request, the version it uses, and the
server uses the same or an earlier version in the response.
HTTP/0.9 (1991)
Deprecated. Supports only one command, GET, which does not specify the HTTP version. Does not
support headers. Since this version does not support POST, the information a client can pass to the server is
limited by the URL length.
HTTP/1.0 (May 1996)
This is the first protocol revision to specify its version in communications and is still in wide use, especially
by proxy servers.
HTTP/1.1 (1997-1999)[3][4]
Current version; persistent connections enabled by default and works well with proxies. Also supports
request pipelining, allowing multiple requests to be sent at the same time, allowing the server to prepare for
the workload and potentially transfer the requested resources more quickly to the client.
HTTP/1.2
The initial 1995 working drafts of the document PEP—an Extension Mechanism for HTTP (which
proposed the Protocol Extension Protocol, abbreviated PEP) were prepared by the World Wide Web
Consortium and submitted to the Internet Engineering Task Force. PEP was originally intended to become
a distinguishing feature of HTTP/1.2.[5] In later PEP working drafts, however, the reference to HTTP/1.2
was removed. The experimental RFC 2774, HTTP Extension Framework, largely subsumed PEP. It was
published in February 2000.
The major changes between HTTP/1.0 and HTTP/1.1 include the way HTTP handles caching; how it
optimizes bandwidth and network connections usage, manages error notifications; how it transmits
messages over the network; how internet addresses are conserved; and how it maintains security and
integrity.[6]
Status codes
In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric
status code (such as "404") and a textual reason phrase (such as "Not Found"). The way the user agent
handles the response primarily depends on the code and secondarily on the response headers. Custom status
codes can be used, since if the user agent encounters a code it does not recognize, it can use the first digit of
the code to determine the general class of the response.
Also, the standard reason phrases are only recommendations and can be replaced with "local equivalents" at
the web developer's discretion. If the status code indicated a problem, the user agent might display the
reason phrase to the user to provide further information about the nature of the problem. The standard also
allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the
standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.
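The first-digit rule can be sketched as:

```python
# Classify an HTTP status code by its first digit, as a user agent
# does when it encounters a code it does not recognize.
CLASSES = {
    1: "Informational", 2: "Success", 3: "Redirection",
    4: "Client Error", 5: "Server Error",
}

def status_class(code: int) -> str:
    return CLASSES.get(code // 100, "Unknown")

print(status_class(404))  # Client Error
print(status_class(207))  # Success: 2xx, even if 207 itself is unknown
```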
Persistent connections
In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. HTTP/1.1 introduced a
keep-alive mechanism, whereby a connection can be reused for more than one request.
Such persistent connections reduce lag perceptibly, because the client does not need to re-negotiate the
TCP connection after the first request has been sent.
Version 1.1 of the protocol made bandwidth optimization improvements to HTTP/1.0. For example,
HTTP/1.1 introduced chunked transfer encoding to allow content on persistent connections to be streamed,
rather than buffered. HTTP pipelining further reduces lag time, allowing clients to send multiple requests
before a previous response has been received to the first one. Another improvement to the protocol was
byte serving, which is when a server transmits just the portion of a resource explicitly requested by a client.
HTTP session state
HTTP is a stateless protocol. The advantage of a stateless protocol is that hosts do not need to retain
information about users between requests, but this forces web developers to use alternative methods for
maintaining users' states. For example, when a host needs to customize the content of a website for a user,
the web application must be written to track the user's progress from page to page. A common method for
solving this problem involves sending and receiving cookies. Other methods include server side sessions,
hidden variables (when the current page is a form), and URL encoded parameters (such as /index.php?
session_id=some_unique_session_code).
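The cookie mechanism described above can be sketched with Python's standard `http.cookies` module; the session-identifier value mirrors the URL-encoded example above and is purely illustrative:

```python
from http import cookies

# Server side: issue a session identifier via a Set-Cookie response header.
jar = cookies.SimpleCookie()
jar["session_id"] = "some_unique_session_code"  # value mirrors the URL example above
set_cookie_header = jar.output(header="Set-Cookie:")
print(set_cookie_header)  # Set-Cookie: session_id=some_unique_session_code

# Client side: the browser echoes the cookie back; the server parses it to
# recover the user's state, since HTTP itself remembers nothing between requests.
incoming = cookies.SimpleCookie("session_id=some_unique_session_code")
print(incoming["session_id"].value)  # some_unique_session_code
```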
IPv6
An IPv6 address is 128 bits long. It is written as eight groups of 16 bits each, in hexadecimal, as follows:
2b63:1478:1ac5:37ef:4e8c:75df:14cd:93f2
Extension headers can be added to IPv6 for new features.
It is the next-generation Internet Layer protocol for packet-switched internetworks and the Internet. IPv6
has a much larger address space than IPv4. This is based on the definition of a 128-bit address, whereas
IPv4 used only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses. This
expansion provides flexibility in allocating addresses and routing traffic and eliminates the need for
network address translation (NAT). NAT gained wide-spread deployment as an effort to alleviate IPv4
address exhaustion.
IPv6 also implements new features that simplify aspects of address assignment (stateless address
autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet
connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier
portion of an address to 64 bits, to facilitate an automatic mechanism for forming the host identifier from
Link Layer media addressing information (the MAC address).
Network security is integrated by design in the IPv6 architecture. Internet Protocol Security (IPsec) was
originally developed for IPv6, but found wide-spread optional deployment first in IPv4 into which it was
re-engineered. The IPv6 specifications mandate IPsec implementation as a fundamental interoperability
requirement.
IPv6 packet format
The IPv6 packet is composed of two main parts: the header and the payload.
Header
+ Bits    0–3       4–11            12–31
0         Version   Traffic Class   Flow Label
32        Payload Length    Next Header    Hop Limit
64–191    Source Address
192–319   Destination Address
The header is in the first 40 octets (320 bits) of the packet and contains:
• Version - version 6 (4-bit IP version).
• Traffic class - packet priority (8-bits). Priority values subdivide into ranges: traffic where the
source provides congestion control and non-congestion control traffic.
• Flow label - QoS management (20 bits). Originally created for giving real-time applications
special service, but currently unused.
• Payload length - payload length in bytes (16 bits). When this field is zero, the actual payload
length is carried in a hop-by-hop "Jumbo payload" option.
• Next header - Specifies the next encapsulated protocol. The values are compatible with those
specified for the IPv4 protocol field (8 bits).
• Hop limit - replaces the time to live field of IPv4 (8 bits).
• Source and destination addresses - 128 bits each.
The payload can have a size of up to 64KiB in standard mode, or larger with a "jumbo payload" option.
Fragmentation is handled only in the sending host in IPv6: routers never fragment a packet, and hosts are
expected to use PMTU discovery.
The protocol field of IPv4 is replaced with a Next Header field. This field usually specifies the transport
layer protocol used by a packet's payload.
In the presence of options, however, the Next Header field specifies the presence of an extra options
header, which then follows the IPv6 header; the payload's protocol itself is specified in a field of the
options header. This insertion of an extra header to carry options is analogous to the handling of AH and
ESP in IPsec for both IPv4 and IPv6.
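The fixed 40-byte header layout above can be illustrated with a small parser. This is a sketch using Python's `struct` module; `parse_ipv6_header` is an illustrative helper, not a standard API:

```python
import struct

def parse_ipv6_header(packet: bytes) -> dict:
    """Unpack the fixed 40-byte IPv6 header described above (a sketch)."""
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,                  # top 4 bits
        "traffic_class": (vtf >> 20) & 0xFF,   # next 8 bits
        "flow_label": vtf & 0xFFFFF,           # low 20 bits
        "payload_length": payload_len,         # 16 bits
        "next_header": next_header,            # 8 bits; e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,                # 8 bits, replaces IPv4's TTL
        "src": packet[8:24],                   # 128-bit source address
        "dst": packet[24:40],                  # 128-bit destination address
    }

# A synthetic header: version 6, 20-byte payload, next header TCP (6), hop limit 64
hdr = struct.pack("!IHBB", 6 << 28, 20, 6, 64) + bytes(32)
info = parse_ipv6_header(hdr)
print(info["version"], info["next_header"], info["hop_limit"])  # 6 6 64
```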
LECTURE NO.21 READINGS:
http://en.wikipedia.org/wiki/Local_area_network,
http://www.wb.nic.in/nicnet/lan.asp
INTRODUCTION TO LANs:
A local area network (LAN) is a computer network covering a small physical area, like a home, office, or
small group of buildings, such as a school, or an airport. The defining characteristics of LANs, in contrast
to wide-area networks (WANs), include their usually higher data-transfer rates, smaller geographic range,
and lack of a need for leased telecommunication lines.
Ethernet over unshielded twisted pair cabling, and Wi-Fi are the two most common technologies currently,
but ARCNET, Token Ring and many others have been used in the past.
FEATURES OF LAN:
Internet Access over LAN
There are various methods of connecting a LAN to an Internet gateway, which are explained
below:
Dial-up
Leased Line
ISDN
VSAT Technology
RF Technology (Wireless Access)
Cable Modem
Dial - Up
A common way of accessing the Internet over a LAN is the dial-up approach. In this method, a remote user
reaches the Internet as follows: initially the remote user's PC is linked to the local gateway through an
existing dial-up line using modems. Once the user has reached the local gateway, further routing up to the
Internet is handled by the local gateway itself. The routing procedures are transparent to the end user.
Leased line
Leased line facility provides reliable, high speed services starting as low as 2.4kbps and ranging as high as
45 Mbps (T3 service). A leased line connection is an affordable way to link two or more sites for a fixed
monthly charge. Leased lines can be either fiber optic or copper lines. High capacity leased line service is
an excellent way to provide data, voice and video links between sites. Leased line service provides a
consistent amount of bandwidth for all your communication needs.
ISDN
Integrated Services Digital Network (ISDN) is a digital telephone system. ISDN involves the digitization of
the telephone network so that voice, data, graphics, text, music, video and other source material can be
provided to end users from a single end-user terminal over existing telephone wiring.
ISDN BRI (Basic Rate ISDN) delivers two 64 kbps channels called B channels and one 16 kbps D channel.
ISDN offers speeds of 64 kbps and 128 kbps and is an alternative for those with a need for greater
bandwidth than dial-up service. To use the ISDN service, the user needs an ISDN terminal adapter and an
ISDN card on the system.
VSAT
VSAT technology has emerged as a very useful, everyday application of modern telecommunications.
VSAT stands for 'Very Small Aperture Terminal' and refers to 'receive/transmit' terminals installed at
dispersed sites connecting to a central hub via satellite using small diameter antenna dishes (0.6 to 3.8
meter). VSAT technology represents a cost effective solution for users seeking an independent
communications network connecting a large number of geographically dispersed sites. VSAT networks
offer value-added satellite-based services capable of supporting the Internet, data, voice/fax etc. over LAN.
Generally, these systems operate in the Ku-band and C-band frequencies.
Cable Modem
The Internet Access over cable modem is a very new and fast emerging technology. A "Cable Modem" is a
device that allows high speed data access via a cable TV (CATV) network. A cable modem will typically
have two connections, one to the cable wall outlet and the other to the PC. This will enable the typical array
of Internet services at speeds 100 to 1000 times as fast as a telephone modem. The speed of cable
modems ranges from 500 Kbps to 10 Mbps.
COMPONENTS OF LAN:
Basic LAN components
http://www.wb.nic.in/nicnet/lan.asp,
http://www.rocw.raifoundation.org/computing/BCA/datacommunication/lecture-notes/lecture-21.pdf
USAGE OF LANs
LAN STANDARDS:
There are many LAN standards, such as Ethernet, Token Ring, FDDI, etc.
IEEE 802 STANDARDS:
IEEE 802 Standard
The Data Link Layer and IEEE
When we talk about Local Area Network (LAN) technology, the IEEE 802 standard is often mentioned.
This standard defines networking connections for the interface card and the physical connections,
describing how they are made. The 802 standards were published by the Institute of Electrical and
Electronics Engineers (IEEE). The 802.3 standard is called ethernet, but the IEEE standards do not exactly
define the original true ethernet standard that is common today. This causes a great deal of confusion.
There are several types of common ethernet frames. Many network cards support more than one type.
The ethernet standard data encapsulation method is defined by RFC 894. RFC 1042 defines the IP to link
layer data encapsulation for networks using the IEEE 802 standards. The 802 standards define the two
lowest levels of the seven layer network model and primarily deal with the control of access to the network
media. The network media is the physical means of carrying the data such as network cable. The control of
access to the media is called media access control (MAC). The 802 standards are listed below:
• 802.1 - Internetworking
• 802.2 - Logical Link Control *
• 802.3 - Ethernet or CSMA/CD, Carrier-Sense Multiple Access with Collision detection LAN *
• 802.4 - Token-Bus LAN *
• 802.5 - Token Ring LAN *
• 802.6 - Metropolitan Area Network (MAN)
• 802.7 - Broadband Technical Advisory Group
• 802.8 - Fiber-Optic Technical Advisory Group
• 802.9 - Integrated Voice/Data Networks
• 802.10 - Network Security
• 802.11 - Wireless Networks
• 802.12 - Demand Priority Access LAN, 100 Base VG-AnyLAN
* The entries marked with an asterisk should be remembered for network certification testing.
ALOHA
ALOHAnet, also known as ALOHA, was a pioneering computer networking system developed at the
University of Hawaii. It was first deployed in 1970, and while the network itself is no longer used, one of
the core concepts in the network is the basis for the widely used Ethernet.
Like the ARPANET group, ALOHA was important because it used a shared medium for transmission. This
revealed the need for more modern medium access control schemes such as CSMA/CD, used by Ethernet.
Unlike the ARPANET where each node could only talk to a node on the other end of the wire, in ALOHA
all nodes were communicating on the same frequency. This meant that some sort of system was needed to
control who could talk at what time. ALOHA's situation was similar to issues faced by Ethernet (non-
switched) and Wi-Fi networks.
This shared transmission medium system generated interest among others. ALOHA's scheme was very
simple. Because data was sent via a teletype, the data rate usually did not go beyond 80 characters per
second. When two stations tried to talk at the same time, both transmissions were garbled and the data had
to be manually resent. ALOHA proved that it was possible to have a useful network without solving this
problem, which sparked further interest, most significantly from Bob Metcalfe and other researchers working
at Xerox PARC. This team went on to create the Ethernet protocol.
The ALOHA protocol
The ALOHA protocol is an OSI layer 2 protocol for LANs with a broadcast topology.
The first version of the protocol was basic:
• If you have data to send, send the data
• If the message collides with another transmission, try resending "later"
Many people have made a study of the protocol. The critical aspect is the "later" concept. The quality of the
backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel
capacity, and the predictability of its behavior.
The difference between Aloha and Ethernet on a shared medium is that Ethernet uses CSMA/CD, which
broadcasts a jamming signal to notify all computers connected to the channel that a collision occurred,
forcing computers on the network to reject their current packet or frame. The use of a jamming signal
enables early release of the transmission medium where transmission delays dominate propagation delays,
and is appropriate for many Ethernet variants. As Aloha was a wireless system, there were additional
problems, such as the hidden node problem, which meant that protocols which work well on a small scale
wired LAN would not always work. Even though the extent of the Hawaiian island network is about 400
km in diameter, propagation delays were almost certainly small in comparison with transmission delays, so
the protocol used had to be one which was robust enough to cope.
Pure Aloha had a maximum throughput of about 18.4%. This means that about 81.6% of the total available
bandwidth was essentially wasted due to losses from packet collisions. The basic throughput calculation
assumes that the aggregate arrival process follows a Poisson distribution with an average of 2G arrivals per
2X seconds, so the lambda parameter in the Poisson distribution becomes 2G. The throughput is then
S = G·e^(-2G), whose peak is reached for G = 0.5, resulting in a maximum throughput of 0.184, i.e. 18.4%.
An improvement to the original Aloha protocol was Slotted Aloha, which introduced discrete timeslots and
increased the maximum throughput to 36.8%. A station can send only at the beginning of a timeslot, and
thus collisions are reduced. In this case, the average number of aggregate arrivals is G arrivals per 2X
seconds, so the lambda parameter becomes G. The throughput is S = G·e^(-G), whose maximum is reached
for G = 1, giving 1/e ≈ 0.368, i.e. 36.8%.
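The throughput figures above can be reproduced numerically. A minimal sketch, assuming the standard Poisson traffic model with G the offered load in frames per frame time:

```python
import math

def pure_aloha(G: float) -> float:
    # A frame succeeds only if no other frame starts within a two-frame-time
    # vulnerable window, so with Poisson arrivals S = G * e^(-2G).
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    # Discrete timeslots halve the vulnerable period: S = G * e^(-G).
    return G * math.exp(-G)

# Maxima at G = 0.5 and G = 1 respectively: 1/(2e) and 1/e.
print(round(pure_aloha(0.5), 3), round(slotted_aloha(1.0), 3))  # 0.184 0.368
```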
Aloha's characteristics are still not much different from those experienced today by
Wi-Fi, and similar contention-based systems that have no carrier sense capability. There is a certain amount
of inherent inefficiency in these systems. For instance 802.11b sees about a 2-4 Mbit/s real throughput with
a few stations talking, versus its theoretical maximum of 11 Mbit/s. It is typical to see these types of
networks' throughput break down significantly as the number of users and message burstiness increase. For
these reasons, applications which need highly deterministic load behavior often use token-passing schemes
(such as token ring) instead of contention systems. For instance ARCNET is very popular in embedded
applications. Nonetheless, contention based systems also have significant advantages, including ease of
management and speed in initial communication.
Because listen-before-send (CSMA - Carrier Sense Multiple Access), as used in Ethernet, works much
better than Aloha in all cases where the stations can hear each other, Slotted Aloha is now used mainly on
low-bandwidth tactical satellite communications networks by the US military, subscriber-based satellite
communications networks, and contactless RFID technologies.
LECTURE NO. 23: READINGS: A-PAGE 312
CSMA:
Carrier sense multiple access
Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in
which a node verifies the absence of other traffic before transmitting on a shared transmission medium,
such as an electrical bus, or a band of the electromagnetic spectrum.
"Carrier Sense" describes the fact that a transmitter listens for a carrier wave before trying to send. That is,
it tries to detect the presence of an encoded signal from another station before attempting to transmit. If a
carrier is sensed, the station waits for the transmission in progress to finish before initiating its own
transmission.
"Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions
by one node are generally received by all other stations using the medium.
Types of CSMA
• 1-persistent CSMA
When the sender (station) is ready to transmit data, it checks if the physical medium is busy. If so, it senses
the medium continually until it becomes idle, and then it transmits a piece of data (a frame). In case of a
collision, the sender waits for a random period of time and attempts to transmit again.
• p-persistent CSMA
This protocol is a generalization of 1-persistent CSMA. When the sender is ready to send data, it checks
continually if the medium is busy. If the medium becomes idle, the sender transmits a frame with a
probability p. In case the transmission did not happen (the probability of this event is 1-p) the sender waits
until the next available time slot and transmits again with the same probability p. This process repeats until
the frame is sent or some other sender starts transmitting. In the latter case the sender waits a random
period of time, checks the channel, and if it is idle, transmits with a probability p, and so on.
• Nonpersistent CSMA
When the sender is ready to send data, it checks if the medium is busy. If so, it waits for a random amount
of time and checks again. When the medium becomes idle, the sender starts transmitting. If collision
occurs, the sender waits for a random amount of time, and checks the medium, repeating the process.
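The p-persistent rule above can be sketched as a single-slot decision function. This is a toy sketch, not a real driver API: `medium_busy` and the injectable `rng` "coin" are hypothetical stand-ins used to make the decision testable.

```python
import random

def p_persistent_decision(medium_busy, p=0.1, rng=random.random):
    """One slot of the p-persistent CSMA rule (a sketch).

    medium_busy: callable returning True while the channel is occupied.
    Returns 'transmit' or 'defer' for this slot."""
    while medium_busy():
        pass  # sense the medium continually until it becomes idle
    # With probability p, transmit now; with probability 1-p, wait for
    # the next slot and repeat the same coin toss.
    return "transmit" if rng() < p else "defer"

# Example with an idle medium and a deterministic "coin" of 0.05:
print(p_persistent_decision(lambda: False, p=0.1, rng=lambda: 0.05))  # transmit
```

Setting p = 1 recovers 1-persistent CSMA, which is why the text calls p-persistent CSMA a generalization of it.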
CSMA/CD
Carrier Sense Multiple Access With Collision Detection (CSMA/CD), in computer networking, is a
network control protocol in which
• a carrier sensing scheme is used.
• a transmitting data station that detects another signal while transmitting a frame, stops transmitting
that frame, transmits a jam signal, and then waits for a random time interval (known as "backoff
delay" and determined using the truncated binary exponential backoff algorithm) before trying to
send that frame again.
CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).
A Shared Medium
The Ethernet network may be used to provide shared access by a group of attached nodes to the physical
medium which connects the nodes. These nodes are said to form a Collision Domain. All frames sent on
the medium are physically received by all receivers; however, the Medium Access Control (MAC) header
contains a MAC destination address which ensures that only the specified destination actually forwards the
received frame (the other computers all discard the frames which are not addressed to them).
Consider a LAN with four computers each with a Network Interface Card (NIC) connected by a common
Ethernet cable:
One computer (Blue) uses a NIC to send a frame to the shared medium, which has a destination address
corresponding to the source address of the NIC in the red computer.
The cable propagates the signal in both directions, so that the signal (eventually) reaches the NICs in all
four of the computers. Termination resistors at the ends of the cable absorb the frame energy, preventing
reflection of the signal back along the cable.
All the NICs receive the frame and each examines it to check its length and checksum. The header
destination MAC address is next examined, to see if the frame should be accepted, and forwarded to the
network-layer software in the computer.
Only the NIC in the red computer recognises the frame destination address as valid, and therefore this NIC
alone forwards the contents of the frame to the network layer. The NICs in the other computers discard the
unwanted frame.
The shared cable allows any NIC to send whenever it wishes, but if two NICs happen to transmit at the
same time, a collision will occur, resulting in the data being corrupted.
ALOHA & Collisions
To control which NICs are allowed to transmit at any given time, a protocol is required. The simplest
protocol is known as ALOHA (this is actually a Hawaiian word, meaning "hello"). ALOHA allows any
NIC to transmit at any time, but states that each NIC must add a checksum/CRC at the end of its
transmission to allow the receiver(s) to identify whether the frame was correctly received.
ALOHA is therefore a best effort service, and does not guarantee that the frame of data will actually reach
the remote recipient without corruption. It therefore relies on ARQ protocols to retransmit any data which
is corrupted. An ALOHA network only works well when the medium has a low utilisation, since this leads
to a low probability of the transmission colliding with that of another computer, and hence a reasonable
chance that the data is not corrupted.
Collision Detection (CD)
A second element to the Ethernet access protocol is used to detect when a collision occurs. When there is
data waiting to be sent, each transmitting NIC also monitors its own transmission. If it observes a collision
(excess current above what it is generating, i.e. > 24 mA for coaxial Ethernet), it stops transmission
immediately and instead transmits a 32-bit jam sequence. The purpose of this sequence is to ensure that any
other node which may currently be receiving this frame will receive the jam signal in place of the correct
32-bit MAC CRC; this causes the other receivers to discard the frame due to a CRC error.
To ensure that all NICs start to receive a frame before the transmitting NIC has finished sending it, Ethernet
defines a minimum frame size (i.e. no frame may have less than 46 bytes of payload). The minimum frame
size is related to the distance which the network spans, the type of media being used and the number of
repeaters which the signal may have to pass through to reach the furthest part of the LAN. Together these
define a value known as the Ethernet Slot Time, corresponding to 512 bit times at 10 Mbps.
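The slot-time figure above follows from simple arithmetic. A small sketch; the 18-byte overhead assumed below is the usual 14-byte Ethernet header plus 4-byte CRC:

```python
BIT_RATE = 10_000_000   # bits per second on classic 10 Mbps Ethernet
SLOT_BITS = 512         # Ethernet slot time, in bit times

slot_time_us = SLOT_BITS * 1e6 / BIT_RATE   # slot time in microseconds
min_frame_bytes = SLOT_BITS // 8            # minimum frame size on the wire
min_payload = min_frame_bytes - 18          # assuming 14-byte header + 4-byte CRC

print(slot_time_us, min_frame_bytes, min_payload)  # 51.2 64 46
```

This is why no Ethernet frame may carry fewer than 46 bytes of payload: a shorter frame could finish transmitting before a collision at the far end of the network is detected.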
When two or more transmitting NICs each detect a corruption of their own data (i.e. a collision), each
responds in the same way by transmitting the jam sequence. The following sequence depicts a collision:
After a period, equal to the propagation delay of the network, the NIC at B detects the other transmission
from A, and is aware of a collision, but NIC A has not yet observed that NIC B was also transmitting. B
continues to transmit, sending the Ethernet Jam sequence (32 bits).
After one complete round trip propagation time (twice the one way propagation delay), both NICs are
aware of the collision. B will shortly cease transmission of the Jam Sequence, however A will continue to
transmit a complete Jam Sequence. Finally the cable becomes idle.
TOKEN PASSING:
In telecommunication, token passing is a channel access method where a signal called a token is passed
around between nodes that authorizes the node to communicate.
Token passing schemes are a technique in which only the system which has the token can communicate.
The token is a control mechanism which gives authority to the system to communicate or use the resources
of that network. Once the communication is over, the token is passed to the next candidate in a sequential
manner. The most well-known examples are token ring and ARCNET.
Token passing schemes provide round-robin scheduling. If the packets are equally sized, the scheduling is
max-min fair.
The advantage over contention based channel access is that collisions are eliminated, and that the channel
bandwidth can be fully utilized without idle time when demand is heavy.
The disadvantage is that even when demand is light, a station wishing to transmit must wait for the token,
increasing latency.
TOKEN RING: Token ring local area network (LAN) technology is a local area network protocol which
resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that
travels around the ring. Token ring frames travel completely around the loop.
Token frame
When no station is transmitting a data frame, a special token frame circles the loop. This special token
frame is repeated from station to station until arriving at a station that needs to transmit data. When a
station needs to transmit data, it converts the token frame into a data frame for transmission. Once the
sending station receives its own data frame, it converts the frame back into a token. If a transmission error
occurs and no token frame, or more than one, is present, a special station referred to as the Active Monitor
detects the problem and removes and/or reinserts tokens as necessary (see Active and standby monitors).
On 4 Mbit/s Token Ring, only one token may circulate; on 16 Mbit/s Token Ring, there may be multiple
tokens.
The special token frame consists of three bytes as described below (J and K are special non-data characters,
referred to as code violations).
Token priority
Token ring specifies an optional medium access scheme allowing a station with a high-priority
transmission to request priority access to the token.
8 priority levels, 0-7, are used. When the station wishing to transmit receives a token or data frame with a
priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority.
The station does not immediately transmit; the token circulates around the medium until it returns to the
station. Upon sending and receiving its own data frame, the station downgrades the token priority back to
the original priority.
Token ring frame format
A data token ring frame is an expanded version of the token frame that is used by stations to transmit media
access control (MAC) management frames or data frames from upper layer protocols and applications.
Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command frames. Tokens are
3 bytes in length and consist of a start delimiter, an access control byte, and an end delimiter.
Data/command frames vary in size, depending on the size of the Information field. Data frames carry
information for upper-layer protocols, while command frames contain control information and have no data
for upper-layer protocols. Token ring can be connected to physical rings via equipment such as 100Base-
TX equipment and CAT5e UTP cable.
Data/Command Frame
SD AC FC DA SA PDU from LLC (IEEE 802.2) CRC ED FS
8 bits 8 bits 8 bits 48 bits 48 bits up to 18200x8 bits 32 bits 8 bits 8 bits
Token Frame
SD AC ED
8 bits 8 bits 8 bits
Abort Frame
SD ED
8 bits 8 bits
Starting Delimiter
consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least
significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self clocking, and
has a transition for every encoded bit 0 or 1, the J and K codings violate this, and will be detected by the
hardware.
J K 0 J K 0 0 0
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
Access Control
this byte field consists of the following bits from most significant to least significant bit order:
P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token
frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and R
bits are reserved bits.
+ Bits 0–2 3 4 5-7
0 Priority Token Monitor Reservation
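The bit layout above can be illustrated with a small parser; `parse_access_control` is a hypothetical helper, and the interpretation of the T bit follows the description above:

```python
def parse_access_control(ac: int) -> dict:
    """Split the Token Ring Access Control byte P,P,P,T,M,R,R,R (a sketch)."""
    return {
        "priority": (ac >> 5) & 0b111,   # P bits (most significant three)
        "token": (ac >> 4) & 1,          # T bit, marking a token frame
        "monitor": (ac >> 3) & 1,        # M bit, set by the Active Monitor
        "reservation": ac & 0b111,       # R bits (least significant three)
    }

# Priority 3, token bit set, monitor clear, reservation 1 -> binary 011 1 0 001
print(parse_access_control(0b01110001))
```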
Frame Control
a one byte field that contains bits describing the data portion of the frame contents. It indicates whether the
frame contains data or control information. In control frames, this byte specifies the type of control
information.
+ Bits 0–1 2–7
0 Frame type Control bits
Frame type - 01 indicates an LLC frame (IEEE 802.2 data; the control bits are ignored); 00 indicates a
MAC frame, in which case the control bits indicate the type of MAC control frame.
Destination address
a six byte field used to specify the physical address of the destination(s).
Source address
Contains the physical address of the sending station. It is a six byte field that is either the locally assigned
address (LAA) or universally assigned address (UAA) of the sending station adapter.
Data
a variable length field of 0 or more bytes containing MAC management data or upper-layer information;
the maximum allowable size depends on ring speed, up to a maximum length of 4500 bytes.
Frame Check Sequence
a four byte field used to store the calculation of a CRC for frame integrity verification by the receiver.
Ending Delimiter
The counterpart to the starting delimiter, this field marks the end of the frame and consists of the following
bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the
error bit.
J K 1 J K 1 I E
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
Frame Status :a one byte field used as a primitive acknowledgement scheme on whether the frame was
recognized and copied by its intended receiver.
A C 0 0 A C 0 0
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
A = 1 , Address recognized C = 1 , Frame copied
Abort Frame :Used to abort transmission by the sending station.
LECTURE NO. 24 READINGS: A-PAGE 314 to 316
Ethernet
The IEEE 802.3 standard defines ethernet at the physical and data link layers of the OSI network model.
Most ethernet systems:
• Use carrier-sense multiple-access with collision detection (CSMA/CD) for controlling access to the
network media.
• Use baseband broadcasts.
• Use a method for packing data into data packets called frames.
• Transmit at 10 Mbps, 100 Mbps, or 1 Gbps.
Types of Ethernet
• 10Base5 - Uses Thicknet coaxial cable which requires a transceiver with a vampire tap to connect
each computer. There is a drop cable from the transceiver to the Attachment Unit Interface (AUI).
The AUI may be a DIX port on the network card. There is a transceiver for each network card on
the network. This type of ethernet is subject to the 5-4-3 rule meaning there can be 5 network
segments with 4 repeaters, and three of the segments can be connected to computers. It uses bus
topology. Maximum segment length is 500 Meters with the maximum overall length at 2500
meters. Minimum length between nodes is 2.5 meters. Maximum nodes per segment is 100.
• 10Base2 - Uses Thinnet coaxial cable. Uses a BNC connector and bus topology requiring a
terminator at each end of the cable. The cable used is RG-58A/U or RG-58C/U with an impedance
of 50 ohms. RG-58U is not acceptable. Uses the 5-4-3 rule meaning there can be 5 network
segments with 4 repeaters, and three of the segments can be connected to computers. The
maximum length of one segment is 185 meters. Barrel connectors can be used to link smaller
pieces of cable on each segment, but each barrel connector reduces signal quality. Minimum
length between nodes is 0.5 meters.
• 10BaseT - Uses Unshielded twisted pair (UTP) cable. Uses star topology. Shielded twisted pair
(STP) is not part of the 10BaseT specification. Not subject to the 5-4-3 rule. They can use
category 3, 4, or 5 cable, but perform best with category 5 cable. Category 3 is the minimum.
Require only 2 pairs of wire. Cables in ceilings and walls must be plenum rated. Maximum
segment length is 100 meters. Minimum length between nodes is 2.5 meters. Maximum number of
connected segments is 1024. Maximum number of nodes per segment is 1 (star topology). Uses
RJ-45 connectors.
• 10BaseF - Uses Fiber Optic cable. Can have up to 1024 network nodes. Maximum segment length
is 2000 meters. Uses specialized connectors for fiber optic. Includes three categories:
o 10BaseFL - Used to link computers in a LAN environment, which is not commonly done
due to high cost.
o 10BaseFP - Used to link computers with passive hubs to get cable distances up to 500
meters.
o 10BaseFB - Used as a backbone between hubs.
• 100BaseT - Also known as fast ethernet. Uses RJ-45 connectors. Topology is star. Uses
CSMA/CD media access. Minimum length between nodes is 2.5 meters. Maximum number of
connected segments is 1024. Maximum number of nodes per segment is 1 (star topology).
IEEE802.3 specification.
o 100BaseTX - Requires category 5 two pair cable. Maximum distance is 100 meters.
o 100BaseT4 - Requires category 3 cable with 4 pair. Maximum distance is 100 meters.
o 100BaseFX - Can use fiber optic to transmit up to 2000 meters. Requires two strands of
fiber optic cable.
100VG-AnyLAN - Requires category 3 cable with 4 pair. Maximum distance is 100 meters with cat 3 or 4
cable. Can reach 150 meters with cat 5 cable. Can use fiber optic to transmit up to 2000 meters. This
ethernet type supports transmission of Token-Ring network packets in addition to ethernet packets. IEEE
802.12 specification. Uses demand-priority media access control. The topology is star. It uses a series of
interlinked cascading hubs. Uses RJ-45 connectors.
Types of ethernet frames
• Ethernet 802.2 - These frames contain fields similar to the ethernet 802.3 frames with the addition
of three Logical Link Control (LLC) fields. Novell NetWare 4.x networks use it.
• Ethernet 802.3 - It is mainly used in Novell NetWare 2.x and 3.x networks. The frame type was
developed prior to completion of the IEEE 802.3 specification and may not work in all ethernet
environments.
• Ethernet II - This frame type combines the 802.3 preamble and SFD fields and includes a protocol
type field where the 802.3 frame contained a length field. TCP/IP networks and networks that use
multiple protocols normally use this frame type.
• Ethernet SNAP - This frame type builds on the 802.2 frame type by adding a type field indicating
what network protocol is being used to send data. This frame type is mainly used in AppleTalk
networks.
The packet size of all the above frame types is between 64 and 1,518 bytes.
Ethernet Message Formats
The ethernet data format is defined by RFCs 894 and 1042. The addresses specified in the ethernet protocol
are 48-bit addresses.
The values carried in the type field (in hexadecimal) are as follows:
1. 0x0800 IP datagram
2. 0x0806 ARP request/reply
3. 0x8035 RARP request/reply
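To make the header layout concrete, here is a small Python sketch that unpacks the 14-byte Ethernet II header (6-byte destination MAC, 6-byte source MAC, 2-byte type) and maps the type values listed above to their meanings. The frame bytes and MAC addresses are invented for the example:

```python
import struct

# EtherType values from the list above (hexadecimal)
ETHERTYPES = {0x0800: "IP datagram", 0x0806: "ARP request/reply", 0x8035: "RARP request/reply"}

def parse_ethernet_header(frame: bytes):
    """Unpack the 14-byte Ethernet II header: dst MAC, src MAC, type field."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return (dst.hex(":"), src.hex(":"), ETHERTYPES.get(ethertype, hex(ethertype)))

# Example frame: broadcast destination, made-up source, carrying an IP datagram
frame = bytes.fromhex("ffffffffffff") + bytes.fromhex("001122334455") + struct.pack("!H", 0x0800)
print(parse_ethernet_header(frame))  # → ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', 'IP datagram')
```

A real parser would go on to interpret the payload according to the type value; here only the header is decoded.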
The ethernet protocol imposes a maximum size on each data packet, called the maximum transmission unit
(MTU). Packets may therefore be broken up (fragmented) as they pass through networks with smaller
MTUs. The SLIP and PPP protocols normally have a smaller MTU value than ethernet. This document does
not describe Serial Line Internet Protocol (SLIP) or Point-to-Point Protocol (PPP) encapsulation.
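The effect of a smaller MTU can be sketched with a toy calculation. The helper below mimics IP-style fragmentation, in which fragment offsets are expressed in 8-byte units, so every fragment except the last carries a multiple of 8 payload bytes. The function name and the 20-byte header assumption are ours, for illustration only:

```python
def fragment(payload_len: int, mtu: int, header_len: int = 20):
    """Return (offset, length) pairs for IP-style fragmentation.
    Offsets are counted in bytes but each non-final fragment must carry
    a multiple of 8 payload bytes, so round the per-fragment data down."""
    max_data = (mtu - header_len) // 8 * 8   # largest payload per fragment
    frags, offset = [], 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        frags.append((offset, length))
        offset += length
    return frags

# A 4000-byte payload crossing a link with a 1500-byte MTU:
print(fragment(4000, 1500))  # → [(0, 1480), (1480, 1480), (2960, 1040)]
```

Note that every fragment repeats the header, so fragmentation adds overhead as well as reassembly work at the destination.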
LAYER 2 & 3 SWITCHING
Layer 2 switches are frequently installed in the enterprise for high-speed connectivity between end stations
at the data link layer. Layer 3 switches are a relatively new phenomenon, made popular by (among others)
the trade press.
Layer 2 Switches
Bridging involves segmentation of local-area networks (LANs) at the Layer 2 level. A multiport bridge
typically learns about the Media Access Control (MAC) addresses on each of its ports and transparently
passes MAC frames destined to those ports. These bridges also ensure that frames destined for MAC
addresses that lie on the same port as the originating station are not forwarded to the other ports.
Layer 2 switches effectively provide the same functionality. They are similar to multiport bridges in that
they learn and forward frames on each port. The major difference is the involvement of hardware that
ensures that multiple switching paths inside the switch can be active at the same time. For example,
consider Figure 1, which details a four-port switch with stations A on port 1, B on port 2, C on port 3 and D
on port 4. Assume that A desires to communicate with B, and C desires to communicate with D. In a single
CPU bridge, this forwarding would typically be done in software, where the CPU would pick up frames
from each of the ports sequentially and forward them to appropriate output ports. This process is highly
inefficient in a scenario like the one indicated previously, where the traffic between A and B has no relation
to the traffic between C and D.
Figure 1: Layer 2 switch with External Router for Inter-VLAN traffic and connecting to the Internet
Enter hardware-based Layer 2 switching. Layer 2 switches, with their hardware support, are able to forward
such frames in parallel, so that A and B and C and D can hold simultaneous conversations. This parallelism
has many advantages. Assume that A and B are NetBIOS stations, while C and D are Internet Protocol (IP)
stations. There may be no reason for communication between A and C or between A and D. Layer 2
switching allows this coexistence without sacrificing efficiency.
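The learn-and-forward behaviour described above can be modelled in a few lines. The `LearningSwitch` class below is our own minimal sketch, not vendor code: it records which port each source MAC address was seen on and floods frames whose destination is still unknown:

```python
class LearningSwitch:
    """Minimal model of Layer 2 MAC learning and forwarding."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port number

    def receive(self, port: int, src: str, dst: str):
        """Return the list of output ports for a frame arriving on `port`."""
        self.mac_table[src] = port   # learn where src lives
        if dst in self.mac_table:
            out = self.mac_table[dst]
            return [] if out == port else [out]   # never echo back out the ingress port
        # unknown destination: flood to every other port
        return [p for p in range(self.num_ports) if p != port]

sw = LearningSwitch(4)
print(sw.receive(0, "A", "B"))   # B unknown: flood → [1, 2, 3]
print(sw.receive(1, "B", "A"))   # A was learned on port 0: forward → [0]
```

A real switch also ages out stale table entries and keeps per-VLAN tables; both are omitted here to keep the parallel-forwarding idea visible.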
Characteristics
Layer 2 switches themselves act as IP end nodes for Simple Network Management Protocol (SNMP)
management, Telnet, and Web-based management. Such management functionality involves the presence
of an IP stack on the switch, along with User Datagram Protocol (UDP), Transmission Control Protocol
(TCP), Telnet, and SNMP functions. The switches themselves have a MAC address so that they can be
addressed as a Layer 2 end node while also providing transparent switching functions. Layer 2 switching
does not, in general, involve changing the MAC frame, although there are situations in which switches do
change it.
The same principles also apply to Layer 2 switches, and most commercial Layer 2 switches support the
Spanning-Tree Protocol. The previous discussion provides an outline of Layer 2 switching functions.
Layer 2 switching is MAC-frame based, does not in general involve altering the MAC frame, and provides
transparent switching in parallel with MAC frames. Since these switches operate at Layer 2, they are
protocol independent. However, Layer 2 switching does not scale well because of broadcasts. Although
VLANs alleviate this problem to some extent, there is definitely a need for machines on different VLANs
to communicate. One example is the situation where an organization has multiple intranet servers on
separate subnets (and hence VLANs), causing a lot of intersubnet traffic. In such cases, use of a router is
unavoidable; Layer 3 switches enter at this point.
Layer 3 Switches
The term Layer 3 switching is used in two ways: one school uses it to describe fast IP routing via hardware,
while another uses it to describe Multiprotocol over ATM (MPOA). For the purpose of this discussion,
Layer 3 switches are superfast routers that do Layer 3 forwarding in hardware. Here we will mainly discuss
Layer 3 switching in the context of fast IP routing, with a brief discussion of the other areas of application.
Evolution
Consider the Layer 2 switching context shown in Figure 1. Layer 2 switches operate well when there is
very little traffic between VLANs. Traffic between VLANs requires a router, either attached to one of the
switch ports as a one-armed router or present internally within the switch. Augmenting Layer 2
functionality with an external router, however, leads to a loss of performance, since routers are typically
slower than switches. This scenario leads to the question: Why not implement a router within the switch
itself and do the forwarding in hardware?
Although this setup is possible, extending it to Layer 3 is not straightforward. Layer 2 switches need to
operate only on the Ethernet MAC frame, which leads to a well-defined forwarding algorithm that can be
implemented in hardware. That algorithm cannot be extended easily to Layer 3: first, there are multiple
routable Layer 3 protocols, such as IP, IPX, and AppleTalk; second, the forwarding decision in such
protocols is typically more complicated than a Layer 2 forwarding decision.
What is the engineering compromise? Because IP is the most common among all Layer 3 protocols today,
most of the Layer 3 switches today perform IP switching at the hardware level and forward the other
protocols at Layer 2 (that is, bridge them). The second issue of complicated Layer 3 forwarding decisions is
best illustrated by IP option processing, which typically causes the length of the IP header to vary,
complicating the building of a hardware forwarding engine. However, a large number of IP packets do not
include IP options, so it may be overkill to design this processing into silicon. The compromise is that the
most common (fast-path) forwarding decision is designed into silicon, whereas the others are typically
handled by a CPU on the Layer 3 switch.
To summarize, Layer 3 switches are routers with fast forwarding done via hardware. IP forwarding
typically involves a route lookup, decrementing the Time To Live (TTL) count and recalculating the
checksum, and forwarding the frame with the appropriate MAC header to the correct output port. Lookups
can be done in hardware, as can the decrementing of the TTL and the recalculation of the checksum. The
routers run routing protocols such as Open Shortest Path First (OSPF) or Routing Information Protocol
(RIP) to communicate with other Layer 3 switches or routers and build their routing tables. These routing
tables are looked up to determine the route for an incoming packet.
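The per-packet work described above (decrement the TTL, recompute the header checksum) can be illustrated in software. The sketch below recomputes the standard ones-complement IP header checksum after decrementing the TTL; real Layer 3 switches do this in silicon, and the minimal 20-byte header used in the example is fabricated for illustration:

```python
def ip_checksum(header: bytes) -> int:
    """Ones-complement sum of 16-bit words (the standard IP header checksum)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forward(header: bytearray) -> bytearray:
    """Do the Layer 3 fast-path work: decrement TTL, recompute checksum."""
    header[8] -= 1                           # TTL is byte 8 of the IPv4 header
    header[10:12] = b"\x00\x00"              # zero the checksum field before summing
    header[10:12] = ip_checksum(bytes(header)).to_bytes(2, "big")
    return header

hdr = bytearray(20)
hdr[0] = 0x45            # version 4, header length 5 words
hdr[8] = 64              # initial TTL
hdr = forward(hdr)
print(hdr[8], ip_checksum(bytes(hdr)))  # TTL is now 63; a valid header re-sums to 0
```

Summing a header that already contains its correct checksum yields 0, which is how receivers verify integrity; hardware implementations often use the incremental update of RFC 1624 instead of a full recomputation.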
Figure 2 illustrates combined Layer 2/Layer 3 switching functionality, where the combined switch also
replaces the traditional router. A and B belong to IP subnet 1, while C and D belong to IP subnet 2. Since
the device is also a Layer 2 switch, it switches traffic between A and B at Layer 2. Now consider the
situation when A wishes to communicate with C. A sends the packet addressed to the MAC address of the
Layer 3 switch, but with an IP destination address equal to C's IP address. The Layer 3 switch strips off the
MAC header and switches the frame to C after performing the route lookup, decrementing the TTL,
recalculating the checksum, and inserting C's MAC address in the destination MAC address field. All of
these steps are done in hardware at very high speed.
Now how does the switch know that C lies on port 3? From learning at Layer 2, it only knows C's MAC
address. There are multiple ways to solve this problem. The switch can issue an Address Resolution
Protocol (ARP) request for C's IP address on all the IP subnet 2 ports and so determine C's IP-to-MAC
mapping and the port on which C lies. Alternatively, the switch can determine C's IP-to-MAC mapping by
snooping into the IP header on reception of a MAC frame.
Characteristics
Configuration of Layer 3 switches is an important issue. When a Layer 3 switch also performs Layer 2
switching, it learns the MAC addresses on its ports; the only configuration required is the VLAN
configuration. For Layer 3 switching, the switches can be configured with the subnets corresponding to
each port, or they can perform IP address learning. This process involves snooping into the IP header of
the MAC frames and determining the subnet on a port from the source IP address. When the Layer 3
switch acts as a one-armed router for a Layer 2 switch, the same port may carry multiple IP subnets.
Management of Layer 3 switches is typically done via SNMP. Layer 3 switches also have MAC addresses
for their ports; this can be one per port, or all ports can share the same MAC address. The switch typically
uses this MAC address for SNMP, Telnet, and Web management communication.
Conceptually, the ATM Forum's LAN Emulation (LANE) specification is closer to the Layer 2 switching
model, while MPOA is closer to the Layer 3 switching model. Numerous Layer 2 switches are equipped
with ATM interfaces and provide a LANE client function on that interface, which allows the bridging of
MAC frames across an ATM network from switch to switch. MPOA is closer to combined Layer 2/Layer 3
switching, though the MPOA client does not run any routing protocols. (Routing is left to the MPOA
server under the Virtual Router model.)
Do Layer 3 switches completely eliminate the need for the traditional router? No, routers are still needed,
especially where connections to the wide area are required. Layer 3 switches may still connect to such
routers to learn their tables and route packets to them when these packets need to be sent over the WAN.
The switches will be very effective on the workgroup and the backbone within an enterprise, but most
likely will not replace the router at the edge of the WAN (read Internet in many cases). Routers perform
numerous other functions like filtering with access lists, inter-Autonomous System (AS) routing with
protocols such as the Border Gateway Protocol (BGP), and so on. Some Layer 3 switches may completely
replace the need for a router if they can provide all these functions (see Figure 2).
FAST ETHERNET:
Definition: Fast Ethernet supports a maximum data rate of 100 Mbps. It is so named because the original
Ethernet technology supported only 10 Mbps. Fast Ethernet began to be widely deployed in the mid-1990s
as the need for greater LAN performance became critical to universities and businesses.
A key element of Fast Ethernet's success was its ability to coexist with existing network installations.
Today, many network adapters support both traditional and Fast Ethernet. These so-called "10/100"
adapters can usually sense the speed of the line automatically and adjust accordingly. Just as Fast Ethernet
improved on traditional Ethernet, Gigabit Ethernet improves on Fast Ethernet, offering rates up to 1000
Mbps instead of 100 Mbps.
Also Known As: 100 Mbps Ethernet
GIGABIT ETHERNET
Definition: Gigabit Ethernet is an extension to the family of Ethernet computer networking and
communication standards. The Gigabit Ethernet standard supports a theoretical maximum data rate of 1
Gbps (1000 Mbps).
At one time, it was believed that achieving Gigabit speeds with Ethernet required fiber optic or other
special cables. However, Gigabit Ethernet can be implemented on ordinary twisted pair copper cable
(specifically, the CAT5e and CAT6 cabling standards).
Migration of existing computer networks from 100 Mbps Fast Ethernet to Gigabit Ethernet is happening
slowly. Much legacy Ethernet technology exists (in both 10 and 100 Mbps varieties), and these older
technologies offer sufficient performance in many cases.
Today, Gigabit Ethernet is found mainly in research institutions. A decrease in cost, an increase in demand,
and improvements in other aspects of LAN technology will be required before Gigabit Ethernet surpasses
other forms of wired networking in adoption.
Also Known As: 1000 Mbps Ethernet
SWITCHES:
"Switch" is a marketing term that encompasses routers and bridges, as well as devices that may distribute
traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more
OSI model layers, including physical, data link, network, or transport (i.e., end-to-end). A device that
operates simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand
networking. Many experienced network designers and operators recommend starting with the logic of
devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device
selection is an advanced topic that may lead to selecting particular implementations, but multilayer
switching is simply not a real-world design concept.
Routers
Routers are networking devices that forward data packets between networks using headers and forwarding
tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP
model or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media
(RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the
next hop to which it should be sent (RFC 1812). Routers use preconfigured static routes, the status of their
hardware interfaces, and routing protocols to select the best route between any two subnets. A router is
connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some
DSL and cable modems for home (and even office) use have been integrated with routers to allow
multiple home or office computers to access the Internet through the same connection. Many of these
devices also include wireless access points (WAPs) so that IEEE 802.11b/g wireless-enabled devices can
connect to the network without cabled connections.
GATEWAYS:
A gateway is a device that connects a networked computer with computers on dissimilar networks. The
gateway is capable of converting data frames and network protocols into the format needed by the other
network. More generally, a gateway is a node on a network that serves as an entrance to another network.
In enterprises, the gateway is the computer that routes traffic from a workstation to the outside network
that is serving the Web pages. In homes, the gateway is the ISP connection that links the user to the
Internet.
In enterprises, the gateway node often acts as a proxy server and a firewall. The gateway is also associated
with both a router, which uses headers and forwarding tables to determine where packets are sent, and a
switch, which provides the actual path for the packet into and out of the gateway.
(2) A computer system located on earth that switches data signals and voice signals between satellites and
terrestrial networks.
(3) An earlier term for router, though now obsolete in this sense, as router is the common term.
A-497
WAN:
Definition: A WAN spans a large geographic area, such as a state, province or country. WANs often
connect multiple smaller networks, such as local area networks (LANs) or metro area networks (MANs).
The world's most popular WAN is the Internet. Some segments of the Internet, like VPN-based extranets,
are also WANs in themselves. Finally, many WANs are corporate or research networks that utilize leased
lines.
WANs generally utilize different and much more expensive networking equipment than do LANs. Key
technologies often found in WANs include SONET, Frame Relay, and ATM.
ROUTING:
Routing (also spelled routeing) is the process of selecting paths in a network along which to send network
traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data
networks (such as the Internet), and transportation networks. This section is concerned primarily with
routing in electronic data networks using packet-switching technology.
In packet-switching networks, routing directs forwarding: the transit of logically addressed packets from
their source toward their ultimate destination through intermediate nodes, typically hardware devices
called routers, bridges, gateways, firewalls, or switches. Ordinary computers with multiple network cards
can also forward packets and perform routing, though as non-specialized hardware they may offer limited
performance. The routing process usually directs forwarding on the basis of routing tables, which maintain
a record of the routes to various network destinations. Constructing routing tables, which are held in the
routers' memory, is therefore very important for efficient routing. Most routing algorithms use only one
network path at a time, but multipath routing techniques enable the use of multiple alternative paths.
Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that
network addresses are structured and that similar addresses imply proximity within the network. Because
structured addresses allow a single routing table entry to represent the route to a group of devices,
structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large
networks, and has become the dominant form of addressing on the Internet, though bridging is still widely
used within localized environments.
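Structured addressing is what makes a routing table compact: one prefix entry covers a whole group of destinations, and when several prefixes match, the most specific (longest) one wins. A minimal longest-prefix-match lookup might look like the sketch below; the table entries and next-hop names are invented for the example:

```python
import ipaddress

def longest_prefix_match(table, dest):
    """table: list of (prefix, next_hop) pairs; return the next hop for dest,
    choosing the matching prefix with the greatest length."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, hop in table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

table = [("10.0.0.0/8", "r1"), ("10.1.0.0/16", "r2"), ("0.0.0.0/0", "default")]
print(longest_prefix_match(table, "10.1.2.3"))   # → r2 (the /16 beats the /8)
print(longest_prefix_match(table, "192.0.2.1"))  # → default
```

The linear scan here is only for clarity; production routers use trie or TCAM structures so the lookup does not grow with table size.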
Systems that do not implement adaptive routing are described as using static routing, where routes through
a network are described by fixed paths. A change, such as the loss of a node or of a connection between
nodes, is not compensated for: traffic that would take an affected path must either wait for the failure to be
repaired or fail to reach its destination.
LECTURE NO.27 READINGS: A-PAGE638
Congestion control:
Congestion control concerns controlling traffic entry into a telecommunications network so as to avoid
congestive collapse, by attempting to avoid oversubscription of any of the processing or link capabilities
of the intermediate nodes and networks, and by taking resource-reducing steps such as lowering the rate at
which packets are sent. It should not be confused with flow control, which prevents the sender from
overwhelming the receiver.
Theory of congestion control
The modern theory of congestion control was pioneered by Frank Kelly, who applied microeconomic
theory and convex optimization theory to describe how individuals controlling their own rates can interact
to achieve an "optimal" network-wide rate allocation.
Examples of "optimal" rate allocations are max-min fair allocation and Kelly's suggestion of
proportionally fair allocation, although many others are possible.
The mathematical expression for optimal rate allocation is as follows. Let x_i be the rate of flow i, c_l the
capacity of link l, and R_li equal 1 if flow i uses link l and 0 otherwise. Let x, c and R be the corresponding
vectors and matrix. Let U(x) be an increasing, strictly concave function, called the utility, which measures
how much benefit a user obtains by transmitting at rate x. The optimal rate allocation then solves

maximize ∑_i U(x_i)

such that

R x ≤ c.
The Lagrange dual of this problem decouples, so that each flow sets its own rate based only on a "price"
signalled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange
multiplier p_l. The sum of these Lagrange multipliers,

y_i = ∑_l p_l R_li,

is the price to which the flow responds.
Congestion control then becomes a distributed optimisation algorithm for solving the above problem. Many
current congestion control algorithms can be modelled in this framework, with p_l being either the loss
probability or the queueing delay at link l.
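This price-based view can be simulated with a toy dual algorithm: each link nudges its price up when its load exceeds capacity (and down when idle), and each flow responds with the rate that maximizes U(x) = w·log x minus its price, namely x_i = w_i / y_i, which yields proportional fairness. The code below is an illustrative sketch with made-up step sizes, not a real protocol:

```python
# Dual-algorithm sketch for Kelly's framework: links adjust prices toward
# capacity, flows respond with the proportionally fair rate x_i = w_i / y_i.
def kelly_dual(routes, capacity, w, steps=20000, gamma=0.001):
    prices = [0.0] * len(capacity)
    rates = [1.0] * len(routes)
    for _ in range(steps):
        # each flow sees the sum of prices y_i along its route
        for i, links in enumerate(routes):
            y = sum(prices[l] for l in links)
            rates[i] = w[i] / y if y > 0 else rates[i] * 2  # probe upward while free
        # each link raises its price when overloaded, lowers it when underused
        for l in range(len(capacity)):
            load = sum(rates[i] for i, links in enumerate(routes) if l in links)
            prices[l] = max(0.0, prices[l] + gamma * (load - capacity[l]))
    return rates

# Two flows share link 0 (capacity 1); the second flow also crosses link 1.
print(kelly_dual(routes=[[0], [0, 1]], capacity=[1.0, 1.0], w=[1.0, 1.0]))
```

For this topology only the shared link is binding, so both rates converge to roughly 0.5, the proportionally fair split; with p_l read as loss probability or queueing delay, TCP-style algorithms can be interpreted as noisy versions of this iteration.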
A major weakness of this model is that it assumes all flows observe the same price, while sliding window
flow control causes "burstiness" which causes different flows to observe different loss or delay at a given
link.
Classification of congestion control algorithms
There are many ways to classify congestion control algorithms:
• By the type and amount of feedback received from the network: Loss; delay; single-bit or multi-bit
explicit signals
• By incremental deployability on the current Internet: Only sender needs modification; sender and
receiver need modification; only router needs modification; sender, receiver and routers need
modification.
• By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy
links; fairness; advantage to short flows; variable-rate links
• By the fairness criterion it uses: max-min, proportional, "minimum potential delay"
WAN TECHNOLOGIES:
WANs are all about exchanging information across wide geographic areas. They are also, as you can
probably gather from reading about the Internet, about scalability—the ability to grow to accommodate the
number of users on the network, as well as to accommodate the demands those users place on network
facilities. Although the nature of a WAN—a network reliant on communications for covering sometimes
vast distances—generally dictates slower throughput, longer delays, and a greater number of errors than
typically occur on a LAN, a WAN is also the fastest, most effective means of transferring computer-based
information currently available.
The Way of a WAN
To at least some extent, WANs are defined by their methods of transmitting data packets. True, the means
of communication must be in place. True, too, the networks making up the WAN must be up and running.
And the administrators of the network must be able to monitor traffic, plan for growth, and alleviate
bottlenecks. But in the end, part of what makes a WAN a WAN is its ability to ship packets of data from
one place to another, over whatever infrastructure is in place. It is up to the WAN to move those packets
quickly and without error, delivering them and the data they contain in exactly the same condition they left
the sender, even if they must pass through numerous intervening networks to reach their destination.
Picture, for a moment, a large network with many subnetworks, each of which has many individual users.
To the users, this large network is (or should be) transparent—so smoothly functioning that it is invisible.
After all, they neither know nor care whether the information they need is on server A or server B, whether
the person with whom they want to communicate is in city X or city Y, or whether the underlying network
runs this protocol or that one. They know only that they want the network to work, and that they want their
information needs satisfied accurately, efficiently, and as quickly as possible.
Now picture the same situation from the network's point of view. It "sees" hundreds, thousands, and
possibly even tens of thousands of network computers or terminals and myriad servers of all kinds—print,
file, mail, and even servers offering Internet access—not to mention different types of computers,
gateways, routers, and communications devices. In theory, any one of these devices could communicate
with, or transmit information through, any other device. Any PC, for instance, could decide to access any of
the servers on the network, no matter whether that server is in the same building or in an office in another
country. To complicate matters even more, two PCs might try to access the same server, and even the same
resource, at the same time. And of course, the chance that only one node anywhere on the network is active
at any given time is minuscule, even in the coldest, darkest hours of the night.
So, in both theory and practice, this widespread network ends up interconnecting thousands or hundreds of
thousands of individual network "dots," connecting them temporarily but on demand. How can it go about
the business of shuffling data ranging from quick e-mails to large (in terms of bytes) documents and even
larger graphic images, sound files, and so on, when the possible interconnections between and among
nodes would make a bowl of spaghetti look well organized by comparison? The solution is in the routing,
which involves several different switching technologies.
Switching of any type involves moving something through a series of intermediate steps, or segments,
rather than moving it directly from start point to end point. Trains, for example, can be switched from track
to track, rather than run on a single, uninterrupted piece of track, and still reach their intended destination.
Switching in networks works in somewhat the same way: Instead of relying on a permanent connection
between source and destination, network switching relies on series of temporary connections that relay
messages from station to station. Switching serves the same purpose as the direct connection, but it uses
transmission resources more efficiently.
WANs (and LANs, including Ethernet and Token Ring) rely primarily on packet switching, but they also
make use of circuit switching, message switching, and the relatively recent, high-speed packet-switching
technology known as cell relay.
Circuit Switching
Circuit switching involves creating a direct physical connection between sender and receiver, a connection
that lasts as long as the two parties need to communicate. In order for this to happen, of course, the
connection must be set up before any communication can occur. Once the connection is made, however, the
sender and receiver can count on "owning" the bandwidth allotted to them for as long as they remain
connected.
Although both the sender and receiver must abide by the same data transfer speed, circuit switching does
allow for a fixed (and rapid) rate of transmission. The primary drawback to circuit switching is the fact that
any unused bandwidth remains exactly that: unused. Because the connection is reserved only for the two
communicating parties, that unused bandwidth cannot be "borrowed" for any other transmission.
The most common form of circuit switching happens in that most familiar of networks, the telephone
system, but circuit switching is also used in some networks. Currently available ISDN lines, also known as
narrowband ISDN, and the form of T1 known as switched T1 are both examples of circuit-switched
communications technologies.
Message Switching
Unlike circuit switching, message switching does not involve a direct physical connection between sender
and receiver. When a network relies on message switching, the sender can fire off a transmission—after
addressing it appropriately—whenever it wants. That message is then routed through intermediate stations
or, possibly, to a central network computer. Along the way, each intermediary accepts the entire message,
scrutinizes the address, and then forwards the message to the next party, which can be another intermediary
or the destination node.
What's especially notable about message-switching networks, and indeed happens to be one of their
defining features, is that the intermediaries aren't required to forward messages immediately. Instead, they
can hold messages before sending them on to their next destination. This is one of the advantages of
message switching. Because the intermediate stations can wait for an opportunity to transmit, the network
can avoid, or at least reduce, heavy traffic periods, and it has some control over the efficient use of
communication lines.
Packet Switching
Packet switching, although it is also involved in routing data within and between LANs such as Ethernet
and Token Ring, is also the backbone of WAN routing. It's not the highway on which the data packets
travel, but it is the dispatching system and to some extent the cargo containers that carry the data from
place to place. In a sense, packet switching is the Federal Express or United Parcel Service of a WAN.
In packet switching, all transmissions are broken into units called packets, each of which contains
addressing information that identifies both the source and destination nodes. These packets are then routed
through various intermediaries, known as Packet Switching Exchanges (PSEs), until they reach their
destination. At each stop along the way, the intermediary inspects the packet's destination address, consults
a routing table, and forwards the packet at the highest possible speed to the next link in the chain leading to
the recipient.
As they travel from link to link, packets are often carried on what are known as virtual circuits—temporary
allocations of bandwidth over which the sending and receiving stations communicate after agreeing on
certain "ground rules," including packet size, flow control, and error control. Thus, unlike circuit switching,
packet switching typically does not tie up a line indefinitely for the benefit of sender and receiver.
Transmissions require only the bandwidth needed for forwarding any given packet, and because packet
switching is also based on multiplexing messages, many transmissions can be interleaved on the same
networking medium at the same time.
Connectionless and Connection-Oriented Services
So packet-switched networks transfer data over variable routes in little bundles called packets. But how do
these networks actually make the connection between the sender and the recipient? The sender can't just
assume that a transmitted packet will eventually find its way to the correct destination. There has to be
some kind of connection—some kind of link between the sender and the recipient. That link can be based
on either connectionless or connection-oriented services, depending on the type of packet-switching
network involved.
• In a (so to speak) connectionless "connection," an actual communications link isn't established
between sender and recipient before packets can be transmitted. Each transmitted packet is
considered an independent unit, unrelated to any other. As a result, the packets making up a
complete message can be routed over different paths to reach their destination.
• In a connection-oriented service, the communications link is made before any packets are transmitted.
Because the link is established before transmission begins, the packets comprising a message all follow the
same route to their destination. In establishing the link between sender and recipient, a connection-oriented
service can make use of either switched virtual circuits (SVCs) or permanent virtual circuits (PVCs):
• Using a switched virtual circuit is comparable to calling someone on the telephone. The
caller connects to the called computer, they exchange information, and then they
terminate the connection.
• Using a permanent virtual circuit, on the other hand, is more like relying on a leased line.
The line remains available for use at all times, even when no transmissions are passing
through it.
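The distinction maps directly onto everyday socket programming, where TCP behaves like a connection-oriented service and UDP like a connectionless one. A minimal sketch in Python (the loopback address and the echo behaviour are illustrative choices, not part of any packet-switching standard):

```python
import socket
import threading

HOST = "127.0.0.1"   # loopback; the port is chosen by the OS below
port_box = {}

def tcp_echo_server(ready):
    # Connection-oriented: a link is accepted before any data flows.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                  # let the OS pick a free port
        port_box["port"] = srv.getsockname()[1]
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()               # the "call" is answered here
        with conn:
            conn.sendall(conn.recv(1024))    # echo back what was received

ready = threading.Event()
threading.Thread(target=tcp_echo_server, args=(ready,), daemon=True).start()
ready.wait()

# Connection-oriented exchange (like an SVC "phone call"): connect, talk, hang up.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, port_box["port"]))
    c.sendall(b"hello")
    reply = c.recv(1024)
print(reply)

# Connectionless "connection": each datagram is an independent, fire-and-forget
# unit, sent with no prior link and no delivery guarantee.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as u:
    u.sendto(b"hello", (HOST, port_box["port"]))
```

The TCP path mirrors the switched virtual circuit analogy above: the caller connects, the two sides exchange data, and the connection is terminated.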
Types of Packet-Switching Networks
As you've seen, packet-based data transfer is what defines a packet-switching network. But—to confuse the
issue a bit—referring to a packet-switching network is a little like referring to tail-wagging canines as dogs.
Sure, they're dogs. But any given dog can also be a collie or a German shepherd or a poodle. Similarly, a
packet-switching network might be, for example, an X.25 network, a frame relay network, an ATM
(Asynchronous Transfer Mode) network, an SMDS (Switched Multimegabit Data Service) network, and so on.
X.25 packet-switching networks
Originating in the 1970s, X.25 is a connection-oriented, packet-switching protocol, originally based on the
use of ordinary analog telephone lines, that has remained a standard in networking for about twenty years.
Computers on an X.25 network carry on full-duplex communication, which begins when one computer
contacts the other and the called computer responds by accepting the call.
Although X.25 is a packet-switching protocol, its concern is not with the way packets are routed from
switch to switch between networks, but with defining the means by which sending and receiving computers
(known as DTEs) interface with the communications devices (DCEs) through which the transmissions
actually flow. X.25 has no control over the actual path taken by the packets making up any particular
transmission, and as a result the packets exchanged between X.25 networks are often shown as entering a
cloud at the beginning of the route and exiting the cloud at the end.
A recommendation of the ITU (formerly the CCITT), X.25 relates to the lowest three network layers—
physical, data link, and network—in the OSI reference model:
• At the lowest (physical) layer, X.25 specifies the means—electrical, mechanical, and so on—by
which communication takes place over the physical media. At this level, X.25 covers standards
such as RS-232, the ITU's V.24 specification for international connections, and the ITU's V.35
recommendation for high-speed modem signaling over multiple telephone circuits.
• At the next (data link) level, X.25 covers the link access protocol, known as LAPB (Link Access
Protocol, Balanced), that defines how packets are framed. LAPB ensures that two
communicating devices can establish an error-free connection.
• At the highest level (in terms of X.25), the network layer, the X.25 protocol covers packet formats
and the routing and multiplexing of transmissions between the communicating devices.
On an X.25 network, transmissions are typically broken into 128-byte packets. They can, however, be as
small as 64 bytes or as large as 4096 bytes.
LECTURE NO.28 READINGS:
http://www.networkdictionary.com/protocols/dqdb.php
http://www.nationmaster.com/encyclopedia/Synchronous-optical-networking
DQDB:
Distributed-queue dual-bus
In telecommunication, a distributed-queue dual-bus network (DQDB) is a distributed multi-access
network that (a) supports integrated communications using a dual bus and distributed queuing, (b) provides
access to local or metropolitan area networks, and (c) supports connectionless data transfer, connection-
oriented data transfer, and isochronous communications, such as voice communications.
IEEE 802.6 is an example of a network providing DQDB access methods.
DQDB Concept of Operation
The DQDB Medium Access Control (MAC) algorithm is generally credited to Robert Newman who
developed this algorithm in his PhD thesis in the 1980s at the University of Western Australia. To
appreciate the innovative value of the DQDB MAC algorithm, it must be seen against the background of
LAN protocols at that time, which were based on broadcast (such as Ethernet IEEE 802.3) or a ring (like
token ring IEEE 802.5 and FDDI). The DQDB may be thought of as two token rings, one carrying data in
each direction around the ring. The ring is broken between two of the nodes in the ring. (An advantage of
this is that if the ring breaks somewhere else, the broken link can be closed to form a ring with only one
break again. This gives reliability which is important in Metropolitan Area Networks (MAN), where repairs
may take longer than in a LAN because the damage may be inaccessible).
The DQDB standard IEEE 802.6 was developed while ATM (Broadband ISDN) was still in early
development, but there was strong interaction between the two standards. ATM cells and DQDB frames
were harmonized. They both settled on essentially a 48-byte data frame with a 5-byte header. In the DQDB
algorithm, a distributed queue was implemented by communicating queue state information via the header.
Each node in a DQDB network maintains a pair of state variables which represent its position in the
distributed queue and the size of the queue. The headers on the reverse bus communicated requests to be
inserted in the distributed queue so that upstream nodes would know that they should allow DQDB cells to
pass unused on the forward bus. The algorithm was remarkable for its extreme simplicity.
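The pair of state variables can be sketched as a request counter (RQ) and a countdown counter (CD). What follows is a simplified, single-priority sketch of the counter rules for one bus, not the full IEEE 802.6 state machine:

```python
class DQDBNode:
    """Simplified DQDB access counters for one bus (single priority level)."""

    def __init__(self):
        self.rq = 0          # RQ: outstanding requests from downstream nodes
        self.cd = 0          # CD: empty slots to let pass before we may send
        self.queued = False  # does this node have a segment waiting?

    def see_request(self):
        # A request bit observed on the reverse bus: a downstream node
        # has joined the distributed queue.
        self.rq += 1

    def queue_segment(self):
        # Join the queue ourselves: everyone who requested before us
        # (counted in RQ) gets served first.
        self.cd, self.rq = self.rq, 0
        self.queued = True

    def empty_slot_on_forward_bus(self):
        """Return True if this node uses the empty slot, False if it passes."""
        if not self.queued:
            # Idle node: a passing empty slot serves one outstanding request.
            self.rq = max(self.rq - 1, 0)
            return False
        if self.cd == 0:
            self.queued = False   # our turn: transmit in this empty slot
            return True
        self.cd -= 1              # let the slot pass to nodes queued earlier
        return False

# Two downstream requests arrive, then this node queues a segment:
# it must let two empty slots pass before using the third.
n = DQDBNode()
n.see_request()
n.see_request()
n.queue_segment()
used = [n.empty_slot_on_forward_bus() for _ in range(3)]
print(used)   # [False, False, True]
```

The simplicity noted above is visible here: the whole distributed queue reduces to two small counters per node per bus.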
Currently DQDB systems are being installed by many carriers in entire cities, with lengths that reach up to
160 km (100 miles) at the speed of a DS3 line (44.736 Mbps). Other implementations use optical fiber
over lengths of up to 100 km at speeds around 150 Mbps.
DQDB: Distributed Queue Dual Bus Defined in IEEE 802.6
Distributed Queue Dual Bus (DQDB) is a data-link layer communication protocol for Metropolitan Area
Networks (MANs), specified in the IEEE 802.6 standard. DQDB is designed for data as well as voice and
video transmission based on cell
cell switching technology (similar to ATM). DQDB, which permits multiple systems to interconnect using
two unidirectional logical buses, is an open standard that is designed for compatibility with carrier
transmission standards such as SMDS, which is based on the DQDB standards.
For a MAN to be effective it requires a system that can function across long, "city-wide" distances of
several miles, have a low susceptibility to error, adapt to the number of nodes attached, and have variable
bandwidth distribution. Using DQDB, networks can be thirty miles long and function in the range of 34
Mbps to 155 Mbps. The data rate fluctuates due to many hosts sharing a dual bus as well as the location of
a single host in relation to the frame generator, but there are schemes to compensate for this problem
making DQDB function reliably and fairly for all hosts.
The DQDB is composed of two bus lines with stations attached to both and a frame generator at the end
of each bus. The buses run in parallel in such a fashion as to allow the frames generated to travel across the
stations in opposite directions.
(Figure omitted: the basic DQDB architecture.)
Protocol Structure - DQDB: Distributed Queue Dual Bus Defined in IEEE 802.6
The DQDB cell has a similar format to the ATM cell: a 5-byte header followed by a 48-byte payload (figure omitted).
SDH/SONET:
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH), are two closely
related multiplexing protocols for transferring multiple digital bit streams using lasers or light-emitting
diodes (LEDs) over the same optical fiber. The method was developed to replace the Plesiochronous
Digital Hierarchy (PDH) system for transporting larger amounts of telephone calls and data traffic over the
same fiber wire without synchronization problems.
SONET and SDH were originally designed to transport circuit mode communications (e.g., T1, T3) from a
variety of different sources. The primary difficulty in doing this prior to SONET was that the
synchronization sources of these different circuits were different, meaning each circuit was actually
operating at a slightly different rate and with a different phase. SONET allowed for the simultaneous
transport of many different circuits of differing origin within one single framing protocol. In a sense, then,
SONET is not itself a communications protocol per se, but a transport protocol.
Due to SONET's essential protocol neutrality and transport-oriented features, SONET was the obvious
choice for transporting ATM (Asynchronous Transfer Mode) frames, and so quickly evolved mapping
structures and concatenated payload containers so as to transport ATM connections. In other words, for
ATM (and eventually other protocols such as TCP/IP and Ethernet), the internal complex structure
previously used to transport circuit-oriented connections is removed, and replaced with a large and
concatenated frame (such as STS-3c) into which ATM frames, IP packets, or Ethernet frames are placed.
Both SDH and SONET are widely used today: SONET in the U.S. and Canada and SDH in the rest of the
world. Although the SONET standards were developed before SDH, their relative penetrations in the
worldwide market dictate that SONET is now considered the variation.
The two protocols are standardized according to the following:
• SDH or Synchronous Digital Hierarchy standard developed by the International
Telecommunication Union (ITU), documented in standard G.707 and its extension G.708
• SONET or Synchronous Optical Networking standard as defined by GR-253-CORE from
Telcordia and T1.105 from American National Standards Institute
An STM-1 frame. For the sake of simplicity, the frame is shown as a rectangular structure of 270 columns
and 9 rows, although the protocol does not transmit the bytes in this order in practice. The first 9 columns
contain the overhead and the pointers: the first 3 rows of these columns carry the Regenerator Section
Overhead (RSOH), the last 5 rows carry the Multiplex Section Overhead (MSOH), and the 4th row from
the top contains the pointers.
The STM-1 (Synchronous Transport Module level - 1) frame is the basic transmission format for SDH or
the fundamental frame or the first level of the synchronous digital hierarchy. The STM-1 frame is
transmitted in exactly 125 microseconds, so there are 8000 frames per second; in SONET terms an STM-1
corresponds to an STS-3c frame carried on an OC-3 circuit. The STM-1 frame consists of overhead plus a virtual container
capacity. The first 9 columns of each frame make up the Section Overhead, and the last 261 columns make
up the Virtual Container (VC) capacity. The VC plus the pointers (H1, H2, H3 bytes) is called the AU
(Administrative Unit).
Carried within the VC capacity, which has its own frame structure of 9 rows and 261 columns, is the Path
Overhead and the Container. The first column is for Path Overhead; it’s followed by the payload container,
which can itself carry other containers. Virtual Containers can have any phase alignment within the
Administrative Unit, and this alignment is indicated by the Pointer in row four.
The Section overhead of an STM-1 signal (SOH) is divided into two parts: the Regenerator Section
Overhead (RSOH) and the Multiplex Section Overhead (MSOH). The overheads contain information from
the system itself, which is used for a wide range of management functions, such as monitoring transmission
quality, detecting failures, managing alarms, data communication channels, service channels, etc.
The STM frame is continuous and is transmitted in a serial fashion, byte-by-byte, row-by-row.
STM–1 frame contains
• Total content : 9 x 270 bytes = 2430 bytes
• overhead : 9 rows x 9 bytes
• payload : 9 rows x 261 bytes
• Period : 125 μsec
• Bitrate : 155.520 Mbit/s (2430 x 8 bits x 8000 frame/s )
• payload capacity : 150.336 Mbit/s (2349 x 8 bits x 8000 frame/s)
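The listed rates follow directly from the frame geometry, as a quick Python check shows:

```python
ROWS, COLS = 9, 270          # STM-1 frame geometry: 9 rows x 270 columns
FRAMES_PER_SEC = 8000        # one frame every 125 microseconds

total_bytes   = ROWS * COLS          # 2430 bytes per frame
overhead      = ROWS * 9             # first 9 columns: SOH plus AU pointers
payload_bytes = ROWS * (COLS - 9)    # 2349 bytes of Virtual Container capacity

bitrate      = total_bytes * 8 * FRAMES_PER_SEC    # line rate in bit/s
payload_rate = payload_bytes * 8 * FRAMES_PER_SEC  # payload capacity in bit/s

print(bitrate / 1e6)       # 155.52
print(payload_rate / 1e6)  # 150.336
```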
The transmission of the frame is done row by row, from the top left corner.
LECTURE NO.29 READINGS: A-PAGE 446,441 to 443,
FRAME RELAY:
In the context of computer networking, frame relay is an efficient data transmission technique used to
send digital information. It is a message forwarding "relay race" like system in which data packets,
called frames, are passed from one or many start-points to one or many destinations via a series of
intermediate node points.
Network providers commonly implement frame relay for voice and data as an encapsulation technique,
used between local area networks (LANs) over a wide area network (WAN). Each end-user gets a private
line (or leased line) to a frame-relay node. The frame-relay network handles the transmission over a
frequently-changing path transparent to all end-users.
With the advent of MPLS, VPN and dedicated broadband services such as cable modem and DSL, the end
may loom for the frame relay protocol and encapsulation. However, many rural areas still lack DSL and
cable modem services. In such cases the least expensive type of "always-on" connection remains a 64-
kbit/s frame-relay line. Thus a retail chain, for instance, may use frame relay for connecting rural stores
into their corporate WAN.
A basic Frame relay network
Design
The designers of frame relay aimed at a telecommunication service for cost-efficient data transmission for
intermittent traffic between local area networks (LANs) and between end-points in a wide area network
(WAN). Frame relay puts data in variable-size units called "frames" and leaves any necessary error-
correction (such as re-transmission of data) up to the end-points. This speeds up overall data transmission.
For most services, the network provides a permanent virtual circuit (PVC), which means that the customer
sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service-
provider figures out the route each frame travels to its destination and can charge based on usage.
An enterprise can select a level of service quality - prioritizing some frames and making others less
important. Frame relay can run on fractional T-1 or full T-carrier system carriers. Frame relay complements
and provides a mid-range service between basic rate ISDN, which offers bandwidth at 128 kbit/s, and
Asynchronous Transfer Mode (ATM), which operates in somewhat similar fashion to frame relay but at
speeds from 155.520 Mbit/s to 622.080 Mbit/s.
Frame relay has its technical base in the older X.25 packet-switching technology, designed for transmitting
data on analog voice lines. Unlike X.25, whose designers expected analog signals, frame relay offers a fast
packet technology, which means that the protocol does not attempt to correct errors. When a frame relay
network detects an error in a frame, it simply drops that frame. The end points have the responsibility for
detecting and retransmitting dropped frames. (However, digital networks have a far lower incidence of
errors than analog networks.)
Frame relay often serves to connect local area networks (LANs) with major backbones as well as on public
wide-area networks (WANs) and also in private network environments with leased lines over T-1 lines. It
requires a dedicated connection during the transmission period. Frame relay does not provide an ideal path
for voice or video transmission, both of which require a steady flow of transmissions. However, under
certain circumstances, voice and video transmission do use frame relay.
Frame relay relays packets at the data link layer (layer 2) of the Open Systems Interconnection (OSI) model
rather than at the network layer (layer 3). A frame can incorporate packets from different protocols such as
Ethernet and X.25. It varies in size up to a thousand bytes or more.
Frame Relay originated as an extension of Integrated Services Digital Network (ISDN). Its designers aimed
to enable a packet-switched network to transport the circuit-switched technology. The technology has
become a stand-alone and cost-effective means of creating a WAN.
Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The Frame Relay network
exists between a LAN border device, usually a router, and the carrier switch. The technology used by the
carrier to transport the data between the switches varies between carriers (i.e. Frame Relay
does not rely directly on the transportation mechanism to function).
The sophistication of the technology requires a thorough understanding of the terms used to describe how
Frame Relay works. Without a firm understanding of Frame Relay, it is difficult to troubleshoot its
performance.
Frame Relay has become one of the most extensively used WAN protocols. Its low cost (compared to
leased lines) is one reason for its popularity; the simplicity of configuring user equipment in a Frame
Relay network is another.
Frame-relay frame structure closely mirrors that defined for LAP-D. Traffic analysis can
distinguish frame relay format from LAP-D by its lack of a control field.
Each frame relay PDU consists of the following fields:
1. Flag Field. The flag is used to perform high-level data link synchronization which indicates the
beginning and end of the frame with the unique pattern 01111110. To ensure that the 01111110
pattern does not appear somewhere inside the frame, bit stuffing and destuffing procedures are
used.
2. Address Field. Each address field may occupy either octet 2 to 3, octet 2 to 4, or octet 2 to 5,
depending on the range of the address in use. A two-octet address field comprises the EA (address field
extension) bits and the C/R (command/response) bit.
3. DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the virtual connection so
that the receiving end knows which information connection a frame belongs to. Note that this
DLCI has only local significance. A single physical channel can multiplex several different virtual
connections.
4. FECN, BECN, DE bits. These bits report congestion:
o FECN=Forward Explicit Congestion Notification bit
o BECN=Backward Explicit Congestion Notification bit
o DE=Discard Eligibility bit
5. Information Field. A system parameter defines the maximum number of data bytes that a host
can pack into a frame. Hosts may negotiate the actual maximum frame length at call set-up time.
The standard specifies the maximum information field size (supportable by any network) as at
least 262 octets. Since end-to-end protocols typically operate on the basis of larger information
units, frame relay recommends that the network support the maximum value of at least 1600 octets
in order to avoid the need for segmentation and reassembling by end-users.
6. Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error-rate of the
medium, each switching node needs to implement error detection to avoid wasting bandwidth due
to the transmission of errored frames. The error detection mechanism used in frame relay uses the
cyclic redundancy check (CRC) as its basis.
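The bit stuffing and destuffing mentioned under the Flag field can be sketched in a few lines: the sender inserts a 0 after every run of five consecutive 1 bits, and the receiver removes it, so the flag pattern 01111110 can never occur inside the frame. A toy sketch over bits represented as a string:

```python
FLAG = "01111110"   # the unique frame delimiter pattern

def stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1 bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1 bits."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "0111111011111100"     # payload that happens to contain the flag pattern
sent = stuff(data)
assert FLAG not in sent       # the flag can no longer appear in the payload
assert destuff(sent) == data  # the receiver recovers the original bits
print(sent)
```

Because stuffing guarantees at most five consecutive 1s inside the frame, the receiver can treat any occurrence of 01111110 as a genuine frame boundary.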
The frame relay network uses a simplified protocol at each switching node. It achieves simplicity by
omitting link-by-link flow-control. As a result, the offered load largely determines the performance of
frame relay networks. When the offered load is high, due to bursts in some services, temporary
overload at some frame relay nodes causes a collapse in network throughput. Therefore, frame-relay
networks require some effective mechanisms to control the congestion.
Congestion control in frame-relay networks includes the following elements:
1. Admission Control. This provides the principal mechanism used in frame relay to ensure that a
connection's resource requirements are guaranteed once it is accepted. It also serves generally to achieve high network
performance. The network decides whether to accept a new connection request, based on the
relation of the requested traffic descriptor and the network's residual capacity. The traffic
descriptor consists of a set of parameters communicated to the switching nodes at call set-up time
or at service-subscription time, and which characterizes the connection's statistical properties. The
traffic descriptor consists of three elements:
2. Committed Information Rate (CIR). The average rate (in bit/s) at which the network guarantees to
transfer information units over a measurement interval T. This T interval is defined as: T =
Bc/CIR.
3. Committed Burst Size (BC). The maximum number of information units transmittable during the
interval T.
4. Excess Burst Size (BE). The maximum number of uncommitted information units (in bits) that the
network will attempt to carry during the interval.
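The relation T = Bc/CIR and the role of the three descriptor elements can be made concrete with a small calculation (the parameter values below are illustrative, not from any particular tariff):

```python
CIR = 64_000      # Committed Information Rate, bit/s
Bc  = 32_000      # Committed Burst Size, bits
Be  = 16_000      # Excess Burst Size, bits

T = Bc / CIR      # measurement interval in seconds: T = Bc/CIR
print(T)          # 0.5

# Within each interval T the network guarantees Bc bits, will attempt to
# carry up to Bc + Be bits (the excess marked Discard Eligible), and may
# discard anything beyond that.
def classify(bits_sent_in_T):
    if bits_sent_in_T <= Bc:
        return "committed"            # delivered under the CIR guarantee
    if bits_sent_in_T <= Bc + Be:
        return "excess (DE marked)"   # carried only if capacity permits
    return "discarded"

print(classify(30_000))   # committed
print(classify(40_000))   # excess (DE marked)
print(classify(60_000))   # discarded
```

This is the same policing described below: the edge node monitors usage per interval T and discards traffic above the subscribed rate.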
Once the network has established a connection, the edge node of the frame relay network must monitor the
connection's traffic flow to ensure that the actual usage of network resources does not exceed this
specification. Frame relay defines some restrictions on the user's information rate. It allows the network to
enforce the end user's information rate and discard information when the subscribed access rate is
exceeded.
Explicit congestion notification is proposed as the congestion avoidance policy. It tries to keep the network
operating at its desired equilibrium point so that a certain Quality of Service (QOS) for the network can be
met. To do so, special congestion control bits have been incorporated into the address field of the frame
relay: FECN and BECN. The basic idea is to avoid data accumulation inside the network. FECN means
Forward Explicit Congestion Notification. The FECN bit can be set to 1 to indicate that congestion was
experienced in the direction of the frame transmission, so it informs the destination that congestion has
occurred. BECN means Backwards Explicit Congestion Notification. The BECN bit can be set to 1 to
indicate that congestion was experienced in the network in the direction opposite of the frame transmission,
so it informs the sender that congestion has occurred.
WIRELESS LINKS:
A wireless wide area network (WWAN) differs from a WLAN (wireless LAN) in that it uses mobile telecommunication cellular network
technologies such as WiMAX (though it is better classified under WMAN networks), UMTS, GPRS,
CDMA2000, GSM, CDPD, Mobitex, HSDPA or 3G to transfer data. It can also use LMDS and Wi-Fi to
connect to the Internet. These cellular technologies are offered regionally, nationwide, or even globally and
are provided by a wireless service provider for a monthly usage fee.[1] WWAN connectivity allows a user
with a laptop and a WWAN card to surf the web, check email, or connect to a Virtual Private Network
(VPN) from anywhere within the regional boundaries of cellular service. Various computers now have
integrated WWAN capabilities (such as HSDPA in Centrino). This means that the system has a cellular
radio (GSM/CDMA) built in, which allows the user to send and receive data. There are two basic means
that a mobile network may use to transfer data:
• Packet-switched Data Networks (GPRS/CDPD)
• Circuit-switched dial-up connections
Since radio communications systems do not provide a physically secure connection path, WWANs
typically incorporate encryption and authentication methods to make them more secure. Unfortunately,
some of the early GSM encryption techniques were flawed, and security experts have issued warnings that
cellular communication, including WWANs, is no longer secure.[2] UMTS(3G) encryption was developed
later and has yet to be broken.
Examples of providers for WWAN include Sprint Nextel, Verizon, and AT&T.
ATM:
In electronic digital data transmission systems, the network protocol Asynchronous Transfer Mode
(ATM) encodes data traffic into small fixed-sized cells. The standards for ATM were first developed in the
mid 1980s. The goal was to design a single networking strategy that could transport real-time video and
audio as well as image files, text and email. Two groups, the International Telecommunication Union and
the ATM Forum, were involved in the creation of the standards.
ATM, as a connection-oriented technology, establishes a virtual circuit between the two endpoints before
the actual data exchange begins. ATM is a cell relay, packet switching protocol which provides data link
layer services that run over Layer 1 links. This differs from other technologies based on packet-switched
networks (such as the Internet Protocol or Ethernet), in which variable sized packets (known as frames
when referencing Layer 2) are used. ATM exhibits properties of both circuit-switched and packet-switched
networking, making it suitable for wide area data networking as well as real-time media transport. It is a
core protocol used in the SONET/SDH backbone of the public switched telephone network.
When purchasing ATM service, you generally have a choice of four different types of service:
• constant bit rate (CBR): specifies a fixed bit rate so that data is sent in a steady stream. This is
analogous to a leased line.
• variable bit rate (VBR): provides a specified throughput capacity but data is not sent evenly. This is a
popular choice for voice and videoconferencing data.
• available bit rate (ABR): provides a guaranteed minimum capacity but allows data to be bursted at
higher capacities when the network is free.
• unspecified bit rate (UBR): does not guarantee any throughput levels. This is used for applications,
such as file transfer, that can tolerate delays.
ATM addressing
A Virtual Channel (VC) provides the transport of ATM cells which have the same unique identifier,
called the Virtual Channel Identifier (VCI). This identifier is encoded in the cell header. A virtual channel
represents the basic means of communication between two end-points, and is analogous to an X.25 virtual
circuit.[1]
A Virtual Path (VP) transports ATM cells belonging to virtual channels which share a common identifier,
called the Virtual Path Identifier (VPI), which is also encoded in the cell header. A virtual path, in other
words, is a grouping of virtual channels which connect the same end-points, and which share a traffic
allocation. This two layer approach can be used to separate the management of routers and bandwidth from
the setup of individual connections.
ATM concepts
Why cells?
The designers of ATM utilized small data cells in order to reduce jitter (delay variance, in this case) in the
multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly
important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal
is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in
time) stream of data items. If the next data item is not available when it is needed, the codec has no choice
but to produce silence or guess - and if the data is late, it is useless, because the time period when it should
have been converted to a signal has already passed.
Now consider a speech signal reduced to packets, and forced to share a link with bursty data traffic (traffic
with some large data packets). No matter how small the speech packets could be made, they would always
encounter full-size data packets, and under normal queuing conditions, might experience maximum
queuing delays.
At the time of the design of ATM, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical
network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to
45 Mbit/s in the USA (2 to 34 Mbit/s in Europe).
At this rate, a typical full-length 1500 byte (12000-bit) data packet would take 77.42 µs to transmit. In a
lower-speed link, such as a 1.544 Mbit/s T1 link, a 1500 byte packet would take up to 7.8 milliseconds.
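Both figures are plain serialization delays, packet size in bits divided by link rate, which a short sketch confirms:

```python
def serialization_delay(packet_bytes, link_bps):
    """Time to clock a packet onto the wire, in seconds."""
    return packet_bytes * 8 / link_bps

# A full-length 1500-byte packet on a 155 Mbit/s SDH link vs. a 1.544 Mbit/s T1:
print(round(serialization_delay(1500, 155e6) * 1e6, 2))    # 77.42  (microseconds)
print(round(serialization_delay(1500, 1.544e6) * 1e3, 2))  # 7.77   (milliseconds)

# A 53-byte ATM cell on the same T1 is far shorter, which bounds the queuing
# jitter a voice cell can suffer behind any single data unit:
print(round(serialization_delay(53, 1.544e6) * 1e3, 3))    # 0.275  (milliseconds)
```

This is the motivation for small cells developed in the following paragraphs: a voice cell never waits behind more than one 53-byte unit per hop, instead of a full 1500-byte datagram.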
A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over,
in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for
speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce
good-quality sound. A packet voice system can produce this in a number of ways:
• Have a playback buffer between the network and the codec, one large enough to tide the codec
over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced
by passage through the buffer would require echo cancellers even in local networks; this was
considered too expensive at the time. Also, it would have increased the delay across the channel,
and conversation is difficult over high-delay channels.
• Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which
needs it.
• Operate on a 1:1 user basis (i.e., a dedicated pipe).
The design of ATM aimed for a low-jitter network interface. However, to be able to provide short queueing
delays, but also be able to carry large datagrams, it had to have cells. ATM broke up all packets, data, and
voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be
reassembled later. The choice of 48 bytes was political rather than technical.[2] When the CCITT was
standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a
good compromise between larger payloads optimized for data transmission and shorter payloads optimized
for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size
(and therefore short transmission times) simplify voice applications with respect to echo cancellation. Most
of the European parties eventually came around to the arguments made by the Americans, but France and a
few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an
ATM-based voice network with calls from one end of France to the other requiring no echo cancellation.
48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides, but it was ideal for
neither and everybody has had to live with it ever since. 5-byte headers were chosen because it was thought
that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these
53-byte cells instead of packets. Doing so reduced the worst-case queuing jitter by a factor of almost 30,
removing the need for echo cancellers.
Structure of an ATM cell
An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen
as described above ("Why cells?").
ATM defines two different cell formats: NNI (Network-Network Interface) and UNI (User-Network
Interface). Most ATM links use UNI cell format.
Diagram of the NNI ATM cell header (5 octets, bit 7 down to bit 0 in each octet):
octet 1: VPI (high 8 of 12 bits)
octet 2: VPI (low 4 bits) | VCI (high 4 of 16 bits)
octet 3: VCI
octet 4: VCI (low 4 bits) | PT (3 bits) | CLP (1 bit)
octet 5: HEC
Diagram of the UNI ATM cell header (5 octets):
octet 1: GFC (4 bits) | VPI (high 4 of 8 bits)
octet 2: VPI (low 4 bits) | VCI (high 4 of 16 bits)
octet 3: VCI
octet 4: VCI (low 4 bits) | PT (3 bits) | CLP (1 bit)
octet 5: HEC
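The UNI field layout can be checked by unpacking a sample 5-byte header (the byte values below are made up for illustration; computing the HEC over the first four octets is omitted):

```python
def parse_uni_header(hdr: bytes):
    """Split a 5-byte ATM UNI cell header into its fields."""
    assert len(hdr) == 5
    gfc = hdr[0] >> 4                                   # 4-bit Generic Flow Control
    vpi = ((hdr[0] & 0x0F) << 4) | (hdr[1] >> 4)        # 8-bit Virtual Path Id
    vci = ((hdr[1] & 0x0F) << 12) | (hdr[2] << 4) | (hdr[3] >> 4)  # 16-bit VCI
    pt  = (hdr[3] >> 1) & 0x07                          # 3-bit Payload Type
    clp = hdr[3] & 0x01                                 # Cell Loss Priority bit
    hec = hdr[4]                                        # 8-bit Header Error Control
    return {"GFC": gfc, "VPI": vpi, "VCI": vci, "PT": pt, "CLP": clp, "HEC": hec}

# Example header carrying VPI=1, VCI=5, CLP=0 (values chosen arbitrarily):
hdr = bytes([0x00, 0x10, 0x00, 0x50, 0x00])
print(parse_uni_header(hdr))
```

An NNI parser would differ only in the first octet, where the 4 GFC bits are given over to a wider 12-bit VPI.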
LECTURE NO. 30
Class of Service
Class of Service (CoS) is a way of managing traffic in a network by grouping similar types of traffic (for
example, e-mail, streaming video, voice, large document file transfer) together and treating each type as a
class with its own level of service priority. Unlike Quality of Service (QoS) traffic management, Class of
Service technologies do not guarantee a level of service in terms of bandwidth and delivery time; they offer
a "best-effort." On the other hand, CoS technology is simpler to manage and more scalable as a network
grows in structure and traffic volume. One can think of CoS as "coarsely-grained" traffic control and QoS
as "finely-grained" traffic control.
Class of Service (CoS) is a 3 bit field within a layer two Ethernet frame header when using IEEE 802.1Q.
It specifies a priority value of between 0 (signifying best-effort) and 7 (signifying priority real-time data)
that can be used by Quality of Service disciplines to differentiate traffic.
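The 3-bit CoS value sits in the top bits of the 802.1Q Tag Control Information word, next to the drop-eligible bit and the 12-bit VLAN ID. A small sketch of extracting it (the example values are arbitrary):

```python
def parse_8021q_tci(tci: int):
    """Split a 16-bit 802.1Q Tag Control Information word into its fields."""
    pcp  = (tci >> 13) & 0x7     # 3-bit priority (CoS): 0 = best effort .. 7
    dei  = (tci >> 12) & 0x1     # drop eligible indicator
    vlan = tci & 0x0FFF          # 12-bit VLAN identifier
    return pcp, dei, vlan

# Example: priority 5 (a value often used for voice), VLAN 100
tci = (5 << 13) | 100
print(parse_8021q_tci(tci))   # (5, 0, 100)
```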
Voice terminology
Class of Service, as related to legacy telephone systems, is often used to define the permissions an extension
will have on a PBX or Centrex. In this context the acronym is normally written COS, vs. the CoS
often used in data networking parlance. Certain groups of users may have a need for extended voice mail
message retention while another group may need the ability to forward calls to a cell phone, and still others
have no need to make calls outside the office. Permissions for a group of extensions can be changed by
modifying a COS variable applied to the entire group.
COS is also used on trunks to define if they are full-duplex, incoming only, or outgoing only.
Most IP phones tag their VoIP packets with a CoS marking of 5 or 6 in the Ethernet header of the outgoing
frame.
FIREWALLS
Firewalls are mainly used as a means to protect an organization's internal network from those on the outside
(internet). It is used to keep outsiders from gaining information to secrets or from doing damage to internal
computer systems. Firewalls are also used to limit the access of individuals on the internal network to
services on the internet along with keeping track of what is done through the firewall. Please note the
difference between firewalls and routers as described in the second paragraph in the IP Masquerading
section.
Types of Firewalls
1. Packet Filtering - Blocks selected network packets.
2. Circuit Level Relay - SOCKS is an example of this type of firewall. This type of proxy is not
aware of applications; it simply relays your connection to an outside connection. It can log
activity, but not in as much detail as an application proxy. It works only with TCP connections, and
doesn't provide for user authentication.
3. Application Proxy Gateway - The users connect to the outside using the proxy. The proxy gets the
information and returns it to the user. The proxy can record everything that is done. This type of
proxy may require a user login to use it. Rules may be set to allow some functions of an
application to be done and other functions denied. The "get" function may be allowed in the FTP
application, but the "put" function may not.
Proxy Servers can be used to perform the following functions.
• Control outbound connections and data.
• Monitor outbound connections and data.
• Cache requested data which can increase system bandwidth performance and decrease the time it
takes for other users to read the same data.
Application proxy servers can perform the following additional functions:
• Provide for user authentication.
• Allow and deny application specific functions.
• Apply stronger authentication mechanisms to some applications.
Packet Filtering Firewalls
In a packet filtering firewall, data is forwarded based on a set of firewall rules. This firewall works at the
network level. Packets are filtered by type, source address, destination address, and port information. These
rules are similar to the routing rules explained in an earlier section and may be thought of as a set of
instructions similar to a case statement or if statement. This type of firewall is fast, but cannot allow access
to a particular user since there is no way to identify the user except by using the IP address of the user's
computer, which may be an unreliable method. Also the user does not need to configure any software to
use a packet filtering firewall such as setting a web browser to use a proxy for access to the web. The user
may be unaware of the firewall. This means the firewall is transparent to the client.
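The rule matching described above can be sketched as a first-match rule list. The rule fields and the function name below are illustrative, not any real firewall's rule syntax:

```python
# Minimal first-match packet filter sketch. A rule field of None
# matches anything; the first matching rule's action wins.

RULES = [
    # (protocol, src_prefix, dst_port, action)
    ("tcp", "10.0.0.", 22,   "allow"),   # SSH from the internal net only
    ("tcp", None,      80,   "allow"),   # HTTP from anywhere
    (None,  None,      None, "deny"),    # default: deny everything else
]

def filter_packet(protocol, src_ip, dst_port):
    """Return the action of the first rule matching the packet."""
    for proto, src_prefix, port, action in RULES:
        if proto is not None and proto != protocol:
            continue
        if src_prefix is not None and not src_ip.startswith(src_prefix):
            continue
        if port is not None and port != dst_port:
            continue
        return action
    return "deny"  # fail closed if no rule matched

print(filter_packet("tcp", "10.0.0.5", 22))   # allow
print(filter_packet("tcp", "8.8.8.8", 22))    # deny (wrong source)
```

Note that, as the text says, the only notion of "user" here is the source IP address, which is why packet filters cannot reliably grant access to a particular person.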
Circuit Level Relay Firewall
A circuit level relay firewall is also transparent to the client. It listens on a port such as port 80 for http
requests and redirects the request to a proxy server running on the machine. Basically, the redirect function
is set up using ipchains, and the proxy then filters the packet at the port that received the redirect.
LECTURE NO.33
VLAN: A virtual LAN, commonly known as a VLAN, is a group of hosts with a common set of
requirements that communicate as if they were attached to the same broadcast domain, regardless of their
physical location. A VLAN has the same attributes as a physical LAN, but it allows for end stations to be
grouped together even if they are not located on the same network switch. Network reconfiguration can be
done through software instead of physically relocating devices.
Uses
VLANs are created to provide the segmentation services traditionally provided by routers in LAN
configurations. VLANs address issues such as scalability, security, and network management. Routers in
VLAN topologies provide broadcast filtering, security, address summarization, and traffic flow
management. By definition, switches may not bridge IP traffic between VLANs as it would violate the
integrity of the VLAN broadcast domain.
This is also useful if one wants to create multiple Layer 3 networks on the same Layer 2 switch. For
example, if a DHCP server (which will broadcast its presence) were plugged into a switch, it would serve
any host on that switch that was configured to use it. By using VLANs you can easily split the network up so
that some hosts won't use that server and will default to link-local addresses.
Virtual LANs are essentially Layer 2 constructs, compared with IP subnets which are Layer 3 constructs. In
a LAN employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although
it is possible to have multiple subnets on one VLAN or have one subnet spread across multiple VLANs.
Virtual LANs and IP subnets provide independent Layer 2 and Layer 3 constructs that map to one another
and this correspondence is useful during the network design process.
By using VLAN, one can control traffic patterns and react quickly to relocations. VLANs provide the
flexibility to adapt to changes in network requirements and allow for simplified administration.
Technologies able to implement VLANs are:
• Asynchronous Transfer Mode (ATM)
• Fiber Distributed Data Interface (FDDI)
• Fast Ethernet
• Gigabit Ethernet
• 10 Gigabit Ethernet
• HiperSockets
Protocols and design
The protocol most commonly used today in configuring virtual LANs is IEEE 802.1Q. The IEEE
committee defined this method of multiplexing VLANs in an effort to provide multivendor VLAN support.
Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco's ISL
(Inter-Switch Link, a variant of IEEE 802.10) and 3Com's VLT (Virtual LAN Trunk). ISL is no longer
supported by Cisco.
Both ISL and IEEE 802.1Q tagging perform explicit tagging as the frame is tagged with VLAN
information explicitly. ISL uses an external tagging process that does not modify the existing Ethernet
frame whereas 802.1Q uses an internal tagging process that does modify the Ethernet frame. This internal
tagging process is what allows IEEE 802.1Q tagging to work on both access and trunk links, because the
frame appears to be a standard Ethernet frame.
The IEEE 802.1Q header contains a 4-byte tag header containing a 2-byte tag protocol identifier (TPID)
and a 2-byte tag control information (TCI). The TPID has a fixed value of 0x8100 that indicates that the
frame carries the 802.1Q/802.1p tag information. The TCI contains the following elements:
• Three-bit user priority
• One-bit canonical format indicator (CFI)
• Twelve-bit VLAN identifier (VID)-Uniquely identifies the VLAN to which the frame belongs
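The TCI bit layout above can be sketched as a pair of bit-manipulation helpers (the function names are illustrative; on the wire this 2-byte value follows the fixed TPID of 0x8100):

```python
# Pack and unpack the 2-byte 802.1Q Tag Control Information (TCI):
# 3-bit user priority, 1-bit CFI, 12-bit VLAN identifier (VID).

def pack_tci(priority, cfi, vid):
    assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vid <= 4095
    return (priority << 13) | (cfi << 12) | vid

def unpack_tci(tci):
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

tci = pack_tci(priority=5, cfi=0, vid=100)   # e.g. voice traffic on VLAN 100
print(hex(tci))                              # 0xa064
print(unpack_tci(tci))                       # (5, 0, 100)
```

The priority bits here are the same 3-bit CoS field discussed in Lecture 30.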
The 802.1Q standard can create an interesting scenario on the network. Recalling that the maximum size
for an Ethernet frame as specified by IEEE 802.3 is 1518 bytes, this means that if a maximum-sized
Ethernet frame gets tagged, the frame size will be 1522 bytes, a number that violates the IEEE 802.3
standard. To resolve this issue, the 802.3 committee created a subgroup called 802.3ac to extend the
maximum Ethernet size to 1522 bytes. Network devices that do not support a larger frame size will process
the frame successfully but may report these anomalies as a "baby giant."
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect multiple switches and maintain
VLAN information as traffic travels between switches on trunk links. This technology provides one method
for multiplexing bridge groups (VLANs) over a high-speed backbone. It is defined for Fast Ethernet and
Gigabit Ethernet, as is IEEE 802.1Q. ISL has been available on Cisco routers since Cisco IOS Software
Release 11.1.
With ISL, an Ethernet frame is encapsulated with a header that transports VLAN IDs between switches and
routers. ISL does add overhead to the packet as a 26-byte header containing a 10-bit VLAN ID. In addition,
a 4-byte CRC is appended to the end of each frame. This CRC is in addition to any frame checking that the
Ethernet frame requires. The fields in an ISL header identify the frame as belonging to a particular VLAN.
A VLAN ID is added only if the frame is forwarded out a port configured as a trunk link. If the frame is to
be forwarded out a port configured as an access link, the ISL encapsulation is removed.
Early network designers often configured VLANs with the aim of reducing the size of the collision domain
in a large single Ethernet segment and thus improving performance. When Ethernet switches made this a
non-issue (because each switch port is a collision domain), attention turned to reducing the size of the
broadcast domain at the MAC layer. Virtual networks can also serve to restrict access to network resources
without regard to physical topology of the network, although the strength of this method remains debatable
as VLAN Hopping [1] is a common means of bypassing such security measures.
Virtual LANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often configure a
VLAN to map directly to an IP network, or subnet, which gives the appearance of involving Layer 3 (the
network layer). In the context of VLANs, the term "trunk" denotes a network link carrying multiple
VLANs, which are identified by labels (or "tags") inserted into their packets. Such trunks must run between
"tagged ports" of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather
than links to hosts. (Note that the term "trunk" is also used for what Cisco calls "channels": Link
Aggregation or Port Trunking). A router (Layer 3 device) serves as the backbone for network traffic going
across different VLANs.
On Cisco devices, VTP (VLAN Trunking Protocol) maintains VLAN configuration consistency across the
entire network. VTP uses Layer 2 trunk frames to manage the addition, deletion, and renaming of VLANs
on a network-wide basis from a centralized switch in the VTP server mode. VTP is responsible for
synchronizing VLAN information within a VTP domain and reduces the need to configure the same VLAN
information on each switch.
VTP minimizes the possible configuration inconsistencies that arise when changes are made. These
inconsistencies can result in security violations, because VLANs can cross-connect when duplicate names
are used. They also could become internally disconnected when they are mapped from one LAN type to
another, for example, Ethernet to ATM LANE ELANs or FDDI 802.10 VLANs. VTP provides a mapping
scheme that enables seamless trunking within a network employing mixed-media technologies.
VTP provides the following benefits:
• VLAN configuration consistency across the network
• Mapping scheme that allows a VLAN to be trunked over mixed media
• Accurate tracking and monitoring of VLANs
• Dynamic reporting of added VLANs across the network
• Plug-and-play configuration when adding new VLANs
As beneficial as VTP can be, it does have disadvantages that are normally related to the Spanning Tree
Protocol (STP) as a bridging loop propagating throughout the network can occur. Cisco switches run an
instance of STP for each VLAN, and since VTP propagates VLANs across the campus LAN, VTP
effectively creates more opportunities for a bridging loop to occur.
Before creating VLANs on the switch that will be propagated via VTP, a VTP domain must first be set up.
A VTP domain for a network is a set of all contiguously trunked switches with the same VTP domain
name. All switches in the same management domain share their VLAN information with each other, and a
switch can participate in only one VTP management domain. Switches in different domains do not share
VTP information.
Using VTP, each Catalyst Family Switch advertises the following on its trunk ports:
• Management domain
• Configuration revision number
• Known VLANs and their specific parameters
LECTURE NO. 34 READINGS: A-PAGE 738
Proxy Server:
A proxy server is a server that acts as an intermediary between a workstation user and the Internet so that
the enterprise can ensure security, administrative control, and caching service. A proxy server is associated
with or part of a gateway server that separates the enterprise network from the outside network and a
firewall server that protects the enterprise network from outside intrusion.
A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it
passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache
of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to
forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on
behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet.
When the page is returned, the proxy server relates it to the original request and forwards it on to the user.
To the user, the proxy server is invisible; all Internet requests and returned responses appear to be directly
with the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a
configuration option to the browser or other protocol program.)
An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are
frequently requested, these are likely to be in the proxy's cache, which will improve user response time. In
fact, there are special servers called cache servers. A proxy can also do logging.
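The check-cache-then-fetch flow described above can be sketched as follows; `fetch_from_origin` is a stand-in for the real upstream request the proxy would make on the client's behalf:

```python
# Sketch of a caching proxy's request path: filter, check the cache,
# fetch on a miss, and serve the cached copy to every later requester.

cache = {}

def fetch_from_origin(url):
    # Placeholder for the actual network fetch the proxy performs.
    return f"<contents of {url}>"

def proxy_request(url, allowed=lambda u: True):
    if not allowed(url):           # filtering step: request is refused
        return None
    if url not in cache:           # cache miss: go out to the Internet once
        cache[url] = fetch_from_origin(url)
    return cache[url]              # cache hit serves all subsequent users

proxy_request("http://example.com/")   # miss: fetched from the origin
proxy_request("http://example.com/")   # hit: served from the cache
print(len(cache))                      # 1
```

A real cache server would also honor the HTTP freshness headers (If-Modified-Since, ETag, Expiry) mentioned below before reusing a stored copy.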
The functions of proxy, firewall, and caching can be in separate server programs or combined in a single
package. Different server programs can be in different computers. For example, a proxy server may be in the
same machine as a firewall server, or it may be on a separate server and forward requests through the
firewall.
Proxy servers implement one or more of the following functions:-
Caching proxy server
A caching proxy server accelerates service requests by retrieving content saved from a previous request
made by the same client or even other clients. Caching proxies keep local copies of frequently requested
resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost,
while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These
machines are built to deliver superb file system performance (often with RAID and journaling) and also
contain hot-rodded versions of TCP. Caching proxies were the first kind of proxy server.
The HTTP 1.0 and later protocols contain many types of headers for declaring static (cacheable) content
and verifying content freshness with an original server, e.g. ETAG (validation tags), If-Modified-Since
(date-based validation), Expiry (timeout-based invalidation), etc. Other protocols such as DNS support
expiry only and contain no support for validation.
Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user
authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).
Another important use of the proxy server is to reduce hardware cost. An organization may have
many systems working in the same network or under the control of one server; in this situation it is
impractical to give every system its own connection to the Internet. Instead, those systems can be connected
through one proxy server, and the proxy server connected to the main server.
Web proxy
A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web proxy is to
serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny access to certain URLs in
a blacklist, thus providing content filtering. This is usually used in a corporate environment, though with
the increasing use of Linux in small businesses and homes, this function is no longer confined to large
corporations. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell phones
and PDAs).
AOL dialup customers used to have their requests routed through an extensible proxy that 'thinned' or
reduced the detail in JPEG pictures. This sped up performance, but caused trouble, either when more
resolution was needed or when the thinning program produced incorrect results. This is why in the early
days of the web many web pages would contain a link saying "AOL Users Click Here" to bypass the web
proxy and to avoid the bugs in the thinning software.
Network operating systems provide three basic mechanisms that are used to support the services
provided by the operating system and applications. These mechanisms are (1) Message Passing, (2)
Remote Procedure Calls and (3) Distributed Shared Memory. These mechanisms support a feature called
Inter Process Communication, or IPC. While all the above mechanisms are suitable for all kinds of
interprocess communication, RPC and DSM are favored over message passing by programmers.
1. Message Passing
Message passing is the most basic mechanism provided by the operating system. This mechanism allows
a process on one machine to send a packet of raw, uninterpreted stream of bytes to another process.
In order to use the message passing system, a process wanting to receive messages (or the receiving
process) creates a port (or mailbox). A port is an abstraction for a buffer, in which incoming messages
are stored. Each port has a unique system-wide address, which is assigned, when the port is created. A
port is created by the operating system upon a request from the receiving process and is created at the
machine where the receiving process executes. Then the receiving process may choose to register the port
address with a directory service.
After a port is created, the receiving process can request the operating system to retrieve a message from
the port and provide the received data to the process. This is done via a receive system call. If there are
no messages in the port, the process is blocked by the operating system until a message arrives. When a
message arrives, the process is woken up and is allowed to access the message.
A message arrives at a port, after a process sends a message to that port. The sending process creates the
data to be sent and packages the data in a packet. Then it requests the operating system to deliver this
message to the particular port, using the address of the port. The port can be on the same machine as the
sender, or a machine connected to the same network.
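The port abstraction described above can be sketched with a thread-safe queue standing in for the kernel's port buffer; `create_port`, `send` and `receive` are illustrative names for the system calls in the text, and the port "address" here is just a string key:

```python
# Sketch of message-passing ports: a port is a buffer of raw messages,
# receive() blocks until a message arrives, send() delivers to a port.
import queue
import threading

ports = {}                      # system-wide port address -> message buffer

def create_port(address):
    ports[address] = queue.Queue()

def send(address, data):        # deliver raw, uninterpreted bytes to a port
    ports[address].put(data)

def receive(address):           # blocks the caller until a message arrives
    return ports[address].get()

create_port("svc-1")

def receiver():
    msg = receive("svc-1")      # blocks here until a sender shows up
    print(msg.decode())

t = threading.Thread(target=receiver)
t.start()
send("svc-1", b"hello")         # wakes the blocked receiver
t.join()
```

In a real network OS the two processes would be on different machines and the send would traverse a transport protocol such as TCP/IP.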
When a message is sent to a port that is not on the same machine as the sender (the most common case)
this message traverses a network. The actual transmission of the message uses a networking protocol that
provides routing, reliability, accuracy and safe delivery. The most common networking protocol is TCP/IP.
Other protocols include IPX/SPX, AppleTalk, NetBEUI, PPTP and so on. Network protocols use
techniques such as packetizing, checksums, acknowledgements, gatewaying, routing and flow control to
ensure messages that are sent are received correctly and in the order they were sent.
Message passing is the basic building block of distributed systems. Network operating systems use
message passing for inter-kernel as well as inter-process communications. Inter-kernel communications
are necessary as the operating system on one machine needs to cooperate with operating systems on other
machines to authenticate users, manage files, handle replication and so on.
Programming using message passing is achieved by using the send/receive system calls and the port
creation and registering facilities. These facilities are part of the message passing API provided by the
operating system. However, programming using message passing is considered to be a low-level
technique that is error prone and best avoided. This is due to the unstructured nature of message passing.
Message passing is unstructured, as there are no structural restrictions on its usage. Any process can send
a message to any port. A process may send messages to a process that is not expecting any. A process
may wait for messages from another process, and no message may originate from the second process.
Such situations can lead to bugs that are very difficult to detect. Sometimes timeouts are used to get out
of the blocked receive calls when no messages arrive – but the message may actually arrive just after the
timeout fires.
Even worse, the messages contain raw data. Suppose a sender sends three integers to a receiver who is
expecting one floating-point value. This will cause very strange and often undetected behaviors in the
programs. Such errors occur frequently due to the complex nature of message passing programs and
hence better mechanisms have been developed for programs that need to cooperate.
Even so, a majority of the software developed for providing services and applications in networked
environments use message passing. Some minimization of errors is done by strictly adhering to a
programming style called the client-server programming paradigm. In this paradigm, some processes are
pre-designated as servers. A server process consists of an infinite loop. Inside the loop is a receive
statement which waits for messages to arrive at a port called the service port. When a message arrives,
the server performs some task requested by the message and then executes a send call to send back
results to the requestor and goes back to listening for new messages.
The other processes are clients. These processes send a message to a server and then wait for a response
using a receive. In other words, all sends in a client process must be followed by a receive, and all
receives at a server process must be followed by a send. Following this scheme significantly reduces
timing-related bugs.
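The server loop and client pattern above can be sketched over a TCP socket on loopback; the service here simply upper-cases whatever it receives, as a stand-in for "some task requested by the message":

```python
# Client-server message passing over a loopback TCP socket:
# the server sits in a loop of receive -> do work -> send.
import socket
import threading

def server_loop(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            request = conn.recv(1024)      # blocking "receive" at the service port
            if not request:
                break                      # client closed the connection
            conn.sendall(request.upper())  # perform the task, then "send" the result

listener = socket.socket()
listener.bind(("127.0.0.1", 0))            # any free port stands in for the service port
listener.listen(1)
threading.Thread(target=server_loop, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")                    # the client's send ...
reply = client.recv(1024)                  # ... immediately followed by its receive
client.close()
print(reply)                               # b'PING'
```

A multi-threaded server would spawn one such loop per connection so that requests can be serviced in parallel.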
The performance of client-server based programs is, however, poorer than what can be achieved by
other, nastier coding techniques. To alleviate this, often a multi-threaded server is used. In a
multithreaded server several parallel threads can listen to the same port for incoming messages and
perform requests in parallel. This causes quicker service response times.
Two better inter-process communication techniques are RPC and DSM, described below.
2. Remote Procedure Calls (RPC)
Remote Procedure Calls, or RPC is a method of performing inter-process communication with a familiar,
procedure call like mechanism. In this scheme, to access remote services, a client makes a procedure call,
just like a regular procedure call, but the procedure executes within the context of a different process,
possibly on a different machine. The RPC mechanism is similar to the client-server programming style
used in message passing. However, unlike message passing where the programmer is responsible for
writing all the communication code, in RPC a compiler automates much of the intricate details of the
communication.
In concept, RPC works as follows: A client process wishes to get service from a server. It makes a remote
procedure call on a procedure defined in the server. In order to do this the client sends a message to the
RPC listening service on the machine where the remote procedure is stored. In the message, the client
sends all the parameters needed to perform the task. The RPC listener then activates the procedure in the
proper context, lets it run and returns the results generated by the procedure to the client program.
However, much of this task is automated and not under programmer control.
An RPC service is created by a programmer who (let us assume) writes the server program as well as the
client program. In order to do this, he or she first writes an interface description using a special language
called the Interface Description Language (IDL). All RPC systems provide an IDL definition and an IDL
compiler. The interface specification of a server documents all the procedures available in the server and
the types of arguments they take and the results they provide.
The IDL compiler compiles this specification into two files, one containing C code that is to be used for
writing the server program and the other containing code used to write the client program.
The part for the server contains the definitions (or prototypes) of the procedures supported by the server.
It also contains some code called the server loop. To this template, the programmer adds the global
variables, private functions and the implementation of the procedures supported by the interface. When
the resulting program is compiled, a server is generated. The server loop inserted by the IDL compiler
contains code to:
1. Register the service with a name server.
2. Listen for incoming requests (could be via the listening service provided by the operating system).
3. Parse the incoming request and call the appropriate procedure using the supplied parameters. This
step requires the extraction of the parameters from the message sent by the client. The extraction
process is called unmarshalling. During unmarshalling some type-checking can also be performed.
4. After the procedure returns, the server loop packages the return results into a message (marshalling)
and sends a reply message to the client.
Note that all the above functionality is automatically inserted into the RPC server by the IDL compiler
and the programmer does not have to write any of these.
Then the programmer writes the client. In the client program, the programmer #include’s the header file
for clients generated by the IDL compiler. This file has the definitions and pseudo-implementations (or
proxies) of the procedures that are actually in the server. The client program is written as if the calls to
the remote procedures are in fact local procedure calls. When the client program is run, the stubs inserted
via the header files play an important role in the execution of the RPCs.
When the client process makes a call to a remote procedure, it actually calls a local procedure, which is a
proxy for the remote procedure. This proxy procedure (or stub) gets all the arguments passed to it and
packages them in some predefined format. This packaging is called marshalling. After the arguments are
marshaled, they are sent to the RPC server that handles requests for this procedure. Of course, as
described above, the RPC server unmarshals arguments, runs the procedure and marshals results. The
results flow back to the client, and the proxy procedure gets them. It unmarshals the results and returns
control to the calling statement, just like a regular local procedure.
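The stub, marshalling and unmarshalling steps above can be sketched in miniature; JSON stands in for the RPC system's wire format, and the direct call to `server_handle` stands in for the network round trip:

```python
# Sketch of the RPC flow: the client stub marshals arguments, the
# server loop unmarshals them, calls the real procedure, and marshals
# the result back.
import json

def add(a, b):                          # the remote procedure (server side)
    return a + b

PROCEDURES = {"add": add}               # what the server loop dispatches on

def server_handle(message):             # unmarshal, call, marshal the reply
    request = json.loads(message)
    result = PROCEDURES[request["proc"]](*request["args"])
    return json.dumps({"result": result})

def add_stub(a, b):                     # client-side proxy for add()
    message = json.dumps({"proc": "add", "args": [a, b]})  # marshalling
    reply = server_handle(message)      # stands in for the network round trip
    return json.loads(reply)["result"]  # unmarshalling the reply

print(add_stub(2, 3))                   # looks like a local call; prints 5
```

In a real RPC system both stubs are generated by the IDL compiler, so the programmer writes neither `add_stub` nor `server_handle` by hand.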
One problem remains: how does the client know the address of the server handling a particular
procedure call? This function is automated too. The IDL compiler, when compiling an interface
definition, obtains a unique number from the operating system and inserts it into both the client stub and
the server stub, as a constant. The server registers this number with its address on the name service. The
client uses this number to look up the server’s address from the name service.
The net effect is that a programmer can write a set of server routines, which can be used from multiple
client processes running on a network of machines. The writing of these routines takes minimal effort and
calling them from remote processes is not difficult either. There is no need to write communications
routines and routines to manage arguments and handle type checking. Automation reduces chances of
bugs quite heavily. This has led to the acceptance of RPC as the preferred distributed programming tool.
3. Distributed Shared Memory (DSM)
While message passing and RPC are the mainstays of distributed programming, and are available on all
network operating systems, Distributed Shared Memory or DSM is not at all ubiquitous. On a distributed
system, DSM provides a logical equivalent to (real) shared memory, which is normally available only on
multiprocessor systems.
Multiprocessor systems have the ability of providing the same physical memory to multiple processors.
This is a very useful feature and has been utilized heavily for parallel processing and inter-process
communication in multiprocessor machines. While RPC and message passing are also possible on
multiprocessor systems, using shared memory for communication and data sharing is more natural and is
preferred by most programmers.
While shared memory is naturally available in multiprocessors, due to the physical design of the
computer, it is neither available nor was thought to be possible on a distributed system. However, the
DSM concept has proven that a logical version of shared memory, which works just like the physical
version, albeit at reduced performance, is both possible and is quite useful.
DSM is a feature by which two or more processes on two or more machines can map a single shared
memory segment to their address spaces. This shared segment behaves like real shared memory, that is,
any change made by any process to any byte in the shared segment is instantaneously seen by all the
processes that map the segment. Of course, the segment cannot be at all the machines at the same time,
and updates cannot be propagated instantaneously, due to the limited speed of the network.
DSM is implemented by having a DSM server that stores the shared segment, that is, it has the data
contained by shared segment. The segment is an integral number of pages. When a process maps the
segment to its address space, the operating system reserves the address range in memory and marks the
virtual addresses of the mapped pages as inaccessible (via the page table). If this process accesses any
page in the shared segment, a page fault is caused. The DSM client is the page fault handler of the
process.
The workings of DSM are rather complex due to the enormous number of cases the algorithm has to
handle. Modern DSM systems provide intricate optimizations that make the system run faster but are
hard to understand. In this section, we discuss a simple, un-optimized DSM system – which if
implemented would work, but would be rather inefficient.
DSM works with memory by organizing it as pages (similar to virtual memory systems). The mapped
segment is a set of pages. The protection attributes of these pages are set to inaccessible, read-only or
read-write:
1. Inaccessible: This denotes that the current version of the page is not available on this machine and
the server needs to be contacted before the page can be read or written.
2. Read-only: This denotes that the most recent version of the page is available on this machine, i.e. the
process on this machine holds the page in read mode. Other processes may also have the page in
read-only mode, but no process has it in write mode. This page can be freely read, but not updated
without informing the DSM server.
3. Read-write: This denotes that this machine has the sole, latest version of the page, i.e. the process on
this machine holds the page in write mode. No other process has a copy of this page. It can be freely
read or updated. However, if this page is needed anywhere else, the DSM server may yank the
privileges by invalidating the page.
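The three protection states above, and the coherence rule the DSM server enforces across machines, can be captured in a small sketch. This is illustration only, assuming nothing beyond the text; the names are hypothetical and do not come from any particular DSM package:

```python
from enum import Enum

# The three per-page protection states described above (hypothetical names).
class PageState(Enum):
    INACCESSIBLE = 0
    READ_ONLY = 1
    READ_WRITE = 2

def access_faults(state, write):
    """True if an access to a page in this state causes a page fault."""
    if write:
        return state is not PageState.READ_WRITE
    return state is PageState.INACCESSIBLE

def invariant_holds(states):
    """The rule the DSM server maintains for each page across all machines:
    either any number of read-only copies exist, or exactly one read-write
    copy exists and no other machine has the page at all."""
    writers = sum(1 for s in states if s is PageState.READ_WRITE)
    readers = sum(1 for s in states if s is PageState.READ_ONLY)
    return writers == 0 or (writers == 1 and readers == 0)
```

Any access that `access_faults` flags is exactly the case where the DSM client (the page fault handler) must contact the server before the access can proceed.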
The DSM client or page fault handler is activated whenever there is a page fault. When activated, the
DSM client first determines whether the page fault was due to a read access or a write access. The two
cases are different and are described separately, below:
Read Access Fault:
On a read access fault, the DSM client contacts the DSM server and asks for the page in read mode. If
there are no clients that have already requested the page in write mode, the server sends the page to the
DSM client. After getting the page, the DSM client copies it into the memory of the process, at the
correct address, and sets the protection of the page to read-only. It then restarts the process that caused the
page fault.
If there is one client already holding the page in write mode (there can be at most one client in write
mode) then the server first asks the client to relinquish the page. This is called invalidation. The client
relinquishes the page by sending it back to the server and marking the page as inaccessible. After the
invalidation is done, the server sends the page to the requesting client, as before.
Write Access Fault:
On a write access fault, the DSM client contacts the server and requests the page in write mode. If the
page is not currently used in read or write mode by any other process, the server provides a copy of the
page to the client. The client then copies the page to memory, sets the protection to read-write and
restarts the process.
If the page is currently held by some processes in read or write mode, the server invalidates all these
copies of the page. Then it sends the page to the requesting client, which installs it and sets the protection
to read-write.
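The read and write fault cases above can be sketched together as a toy, unoptimized simulation. All class and method names here are hypothetical, and a real DSM client would be driven by hardware page faults rather than explicit mode checks:

```python
from enum import Enum

class Mode(Enum):
    INACCESSIBLE = 0
    READ_ONLY = 1
    READ_WRITE = 2

class DsmServer:
    """Stores the master copy of each page and coordinates the clients."""
    def __init__(self):
        self.clients = []
        self.pages = {}                      # page number -> master copy

    def _invalidate(self, requester, page, modes):
        # Ask every other client holding the page in one of `modes` to
        # relinquish it; a write-mode holder returns the latest version.
        for c in self.clients:
            m = c.mode.get(page, Mode.INACCESSIBLE)
            if c is not requester and m in modes:
                data = c.relinquish(page)
                if m is Mode.READ_WRITE:
                    self.pages[page] = data  # writer had the only current copy

    def request_read(self, client, page):
        # Read fault: only a write-mode holder (at most one) must relinquish.
        self._invalidate(client, page, {Mode.READ_WRITE})
        client.install(page, self.pages.setdefault(page, ""), Mode.READ_ONLY)

    def request_write(self, client, page):
        # Write fault: every other copy, read-only or read-write, is invalidated.
        self._invalidate(client, page, {Mode.READ_ONLY, Mode.READ_WRITE})
        client.install(page, self.pages.setdefault(page, ""), Mode.READ_WRITE)

class DsmClient:
    """Plays the role of the page fault handler of one process."""
    def __init__(self, server):
        self.server = server
        self.mode = {}                       # page -> Mode
        self.memory = {}                     # page -> local copy
        server.clients.append(self)

    def read(self, page):
        if self.mode.get(page, Mode.INACCESSIBLE) is Mode.INACCESSIBLE:
            self.server.request_read(self, page)      # read access fault
        return self.memory[page]

    def write(self, page, data):
        if self.mode.get(page, Mode.INACCESSIBLE) is not Mode.READ_WRITE:
            self.server.request_write(self, page)     # write access fault
        self.memory[page] = data

    def install(self, page, data, mode):
        self.memory[page] = data
        self.mode[page] = mode

    def relinquish(self, page):
        self.mode[page] = Mode.INACCESSIBLE
        return self.memory.pop(page)

server = DsmServer()
a, b = DsmClient(server), DsmClient(server)
a.write(0, "hello")    # write fault: A obtains page 0 in read-write mode
b.read(0)              # read fault: A is invalidated, B gets "hello" read-only
```

If `b` now writes page 0 and `a` reads it again, the page migrates back and forth between the two clients on every access, which is exactly the page shuttling problem discussed below.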
The net effects of the above algorithm are as follows:
1. Only pages that are used by a process on a machine migrate to that machine.
2. Pages that are read by several processes migrate to the machines these processes are running on.
Each machine has a copy.
3. Pages that are being updated migrate to the machines they are being updated on; however, there is at
most one writable copy of the page at any point in time. If the page is simultaneously read and
updated by two or more machines, the page shuttles back and forth between these machines.
Page shuttling is a serious problem in DSM systems, and many algorithms exist to prevent it. Effective
prevention relies on relaxed memory coherence requirements, such as release consistency; careful design
of applications can also minimize page shuttling.
The first system to incorporate DSM was Ivy (5). Several DSM packages are available, these include
TreadMarks, Quarks, Avalanche and Calypso.
Kernel Architectures
Operating systems have always been constructed (and often still are) using the monolithic kernel
approach. The monolithic kernel is a large piece of protected software that implements all the services
the operating system has to offer via a system call interface (or API). This approach has some significant
disadvantages. The kernel, unlike application programs, is not a sequential program; it is an
interrupt-driven program. That is, different parts of the kernel are triggered and made to execute at
different (and unpredictable) points in time by interrupts.
The net effect of this structure is that:
1. The kernel is hard to program. The dependencies of the independently interrupt-triggerable parts are
hard to keep track of.
2. The kernel is hard to debug. There is no way of systematically running and testing the kernel. When a
kernel is deployed, random parts start executing quite unpredictably.
3. The kernel is crucial. A bug in the kernel causes applications to crash, often mysteriously.
4. The kernel is very timing dependent. Timing errors cause problems that are not repeatable and are
therefore very hard to catch; the kernel often contains many such glitches that go undetected.
The emergence of network operating systems saw the sudden drastic increase in the size of kernels. This
is due to the addition of a whole slew of facilities in the kernel, such as message passing, protocol
handling, network device handling, network file systems, naming systems, RPC handling, time
management and so on. Soon it was apparent that this bloat led to kernel implementations that are
unwieldy, buggy and doomed to fail.
This rise in complexity resulted in the development of an innovative kernel architecture, targeted at
network operating systems, called the microkernel architecture. A true microkernel places in the kernel
only those features that absolutely have to be there. These include low-level services such as CPU
scheduling, memory management, device drivers and network drivers. On top of this, it places a
low-level message passing interface in the kernel. The user-level API is essentially just the message
passing routines.
All other services are built outside the kernel, using server processes. It has been shown that almost every
API service and all networking services can be placed outside the kernel. This architecture has some
significant benefits, a few of which are listed below:
1. Services can be programmed and tested separately. Changes to a service do not require recompiling
the microkernel.
2. All services are insulated from each other – a bug in one service does not affect the others. This is
not only a good feature in itself; it also makes debugging significantly easier.
3. Adding, updating and reconfiguring services are trivial.
4. Many different implementations of the same service can co-exist.
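The architecture described above can be illustrated with a toy sketch, with hypothetical names throughout: the "kernel" does nothing but route messages, and every operating system service is an ordinary user-level handler that can be added, replaced or removed without touching kernel code:

```python
# A toy sketch of the microkernel idea: the kernel only routes messages;
# every OS service lives outside it as an independent handler.
class Microkernel:
    def __init__(self):
        self.services = {}              # service name -> user-level server

    def register(self, name, handler):
        # Adding or updating a service never requires recompiling the kernel.
        self.services[name] = handler

    def send(self, service, request):
        # The only kernel-level API: pass a message to a named service.
        return self.services[service](request)

kernel = Microkernel()

# A "file service" built entirely outside the kernel.
files = {}
def file_service(req):
    op, name, *data = req
    if op == "write":
        files[name] = data[0]
        return "ok"
    return files.get(name)

kernel.register("fs", file_service)
kernel.send("fs", ("write", "notes.txt", "hello"))
```

Because applications only ever talk to the kernel's message interface, a different implementation of the file service could be registered under the same name and co-exist with, or replace, the original – which is precisely benefits 3 and 4 above.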
Microkernel operating systems that proved successful include Amoeba (10), Mach (12) and the V
System (14). A commercial microkernel operating system called Chorus is marketed by Chorus Systems
(France).
The advantages of microkernels come at a price, namely performance. Performance of operating systems
is an all-important feature that can make or break the usage of the system, especially commercial
systems. Hence, commercial systems typically shun the pure microkernel approach and instead choose a compromise
called the hybrid kernel. A hybrid kernel is a microkernel in spirit, but a monolithic kernel in reality. The
Chorus operating system pioneered the hybrid kernel. Windows NT is also a hybrid system.
A hybrid system starts as a microkernel. Then, as services are developed and debugged, they are migrated
into the kernel. This retains some of the advantages of the microkernel approach, while the migration of
services into the kernel significantly improves performance.
Network operating systems (NOS) are typically used to run computers that act as servers; they provide the
capabilities required for network operation. Network operating systems are also designed for client
computers, so the distinction between network operating systems and stand-alone operating systems is
not always obvious. Network operating systems provide the following functions:
File and print sharing.
Account administration for users.
Security.
Installed Components:
Client functionality
Server functionality
Functions provided:
Account administration for users
Security
File and print sharing
Network services:
File sharing
Print sharing