
COMPUTER NETWORKING

Q1. What is OSI model? Explain all its layers with diagram.

Ans: OSI (Open Systems Interconnection) is a standard description or "reference model" for how messages
should be transmitted between any two points in a telecommunication network. Its purpose is to guide product
implementors so that their products will consistently work with other products. The reference model defines
seven layers of functions that take place at each end of a communication.
OSI divides telecommunication into seven layers. The layers are in two groups. The upper four layers are used
whenever a message passes from or to a user. The lower three layers (up to the network layer) are used when
any message passes through the host computer. Messages intended for this computer pass to the upper layers.
Messages destined for some other host are not passed up to the upper layers but are forwarded to another host.
The seven layers are:

Layer 7: The application layer...This is the layer at which communication partners are identified, quality of
service is identified, user authentication and privacy are considered, and any constraints on data syntax are
identified. (This layer is not the application itself, although some applications may perform application layer
functions.)

Layer 6: The presentation layer...This is a layer, usually part of an operating system, that converts incoming
and outgoing data from one presentation format to another (for example, from a text stream into a popup
window with the newly arrived text). Sometimes called the syntax layer.

Layer 5: The session layer...This layer sets up, coordinates, and terminates conversations, exchanges, and
dialogs between the applications at each end. It deals with session and connection coordination.

Layer 4: The transport layer...This layer manages the end-to-end control (for example, determining whether
all packets have arrived) and error-checking. It ensures complete data transfer.

Layer 3: The network layer...This layer handles the routing of the data (sending it in the right direction to the
right destination on outgoing transmissions and receiving incoming transmissions at the packet level). The
network layer does routing and forwarding.

Layer 2: The data-link layer...This layer provides synchronization for the physical level and does bit-stuffing
for strings of 1's in excess of 5. It furnishes transmission protocol knowledge and management.

Layer 1: The physical layer...This layer conveys the bit stream through the network at the electrical and
mechanical level. It provides the hardware means of sending and receiving data on a carrier.
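The layered descent described above can be sketched as a toy encapsulation pipeline. This is only an illustration: real layers add binary headers and trailers, not bracketed strings.

```python
# A minimal sketch of OSI-style encapsulation: each layer wraps the data
# from the layer above with its own "header" on the way down, and the
# receiving stack strips them in reverse order on the way up.

LAYERS = [
    "application", "presentation", "session",
    "transport", "network", "data-link", "physical",
]

def encapsulate(payload: str) -> str:
    """Wrap the payload with one header per layer, top to bottom."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Strip the headers in reverse order at the receiving end."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"expected {layer} header"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("hello")
print(wire)               # outermost wrapper is the physical layer
print(decapsulate(wire))  # hello
```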

Q2. Write a short note on ALOHA protocols.

Ans: ALOHA is a system for coordinating and arbitrating access to a shared communications channel.
It was developed in the 1970s by Norman Abramson and his colleagues at the University of Hawaii. The
original system was used for ground-based radio broadcasting, but the scheme has since been implemented in
satellite communication systems.

Aloha means "Hello". Aloha is a multiple access protocol at the datalink layer and proposes how multiple
terminals access the medium without interference or collision. The Slotted Aloha protocol involves dividing the
time interval into discrete slots and each slot interval corresponds to the time period of one frame. This method
requires synchronization between the sending nodes to prevent collisions.

There are two different versions/types of ALOHA:


(i) Pure ALOHA
(ii) Slotted ALOHA

(i) Pure ALOHA

 In pure ALOHA, the stations transmit frames whenever they have data to send.
 When two or more stations transmit simultaneously, there is collision and the frames are destroyed.
 In pure ALOHA, whenever any station transmits a frame, it expects the acknowledgement from the
receiver.
 If acknowledgement is not received within specified time, the station assumes that the frame (or
acknowledgement) has been destroyed.
 If the frame is destroyed because of collision the station waits for a random amount of time and sends it
again. This waiting time must be random otherwise same frames will collide again and again.
 Therefore pure ALOHA dictates that when time-out period passes, each station must wait for a random
amount of time before resending its frame. This randomness will help avoid more collisions.

(ii) Slotted ALOHA

 Slotted ALOHA was invented to improve the efficiency of pure ALOHA as chances of collision in pure
ALOHA are very high.
 In slotted ALOHA, the time of the shared channel is divided into discrete intervals called slots.
 The stations can send a frame only at the beginning of the slot and only one frame is sent in each slot.
 In slotted ALOHA, if any station is not able to place the frame onto the channel at the beginning of the
slot i.e. it misses the time slot then the station has to wait until the beginning of the next time slot.
 Slotted ALOHA still has an edge over pure ALOHA as chances of collision are reduced to one-half.
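The collision behaviour above gives the classical throughput formulas: S = G·e^(−2G) for pure ALOHA (a two-frame-time vulnerable period) and S = G·e^(−G) for slotted ALOHA. A short sketch comparing their peaks:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): a frame succeeds only if no other frame starts
    within a vulnerable period of two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): slotting halves the vulnerable period to one slot."""
    return G * math.exp(-G)

# Maximum throughput occurs at G = 0.5 (pure) and G = 1.0 (slotted).
print(f"pure ALOHA peak:    {pure_aloha_throughput(0.5):.3f}")    # 0.184
print(f"slotted ALOHA peak: {slotted_aloha_throughput(1.0):.3f}") # 0.368
```

The two peaks (about 18.4% and 36.8% channel utilization) are the usual way the "reduced to one-half" claim is quantified.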

Q3. What is the function of data link layer? Explain the services of the data link layer.

Ans: The data link Layer is the second layer of the OSI model. The data link layer performs various functions
depending upon the hardware protocol used, but has four primary functions:
 COMMUNICATION WITH NETWORK LAYER
 SEGMENTATION & REASSEMBLY
 BIT ORDERING
 COMMUNICATION WITH PHYSICAL LAYER

1. COMMUNICATION with the Network layer above.

2. SEGMENTATION of upper layer datagrams (also called packets) into frames in sizes that can be handled by
the communications hardware.

3. BIT ORDERING. The data link layer organizes the pattern of data bits into frames before transmission.
Frame formatting issues such as stop and start bits, bit order, parity, and other functions are handled here.
Big-endian/little-endian issues are also managed at this layer.

4. COMMUNICATION with the Physical layer below: This layer provides reliable transit of data across a
physical link. The data link layer is concerned with physical addressing, network topology, physical link
management, error notification, ordered delivery of frames, and flow control.

Data link layer services


 Encapsulation of network layer data packets into frames
 Frame synchronization
 Logical link control (LLC) sublayer:
 Error control (automatic repeat request, ARQ), in addition to ARQ provided by some transport-layer
protocols, to forward error correction (FEC) techniques provided on the physical layer, and to error-
detection and packet canceling provided at all layers, including the network layer. Data-link-layer
error control (i.e. retransmission of erroneous packets) is provided in wireless networks and V.42
telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are so
uncommon in short wires. In that case, only error detection and canceling of erroneous packets are
provided.
 Flow control, in addition to the one provided on the transport layer. Data-link-layer error control is
not used in LAN protocols such as Ethernet, but in modems and wireless networks.
 Media access control (MAC) sublayer:
 Multiple access protocols for channel-access control, for example CSMA/CD protocols for collision
detection and re-transmission in Ethernet bus networks and hub networks, or the CSMA/CA
protocol for collision avoidance in wireless networks.
 Physical addressing (MAC addressing)
 LAN switching (packet switching) including MAC filtering and spanning tree protocol
 Data packet queuing or scheduling
 Store-and-forward switching or cut-through switching
 Quality of Service (QoS) control
 Virtual LANs (VLAN)

Q4. Explain Shortest Path routing Algorithm.

Ans: This technique is widely used because it is simple and easy to understand. It is a static algorithm.
Consider the subnet given in figure 6.4 (a). Several algorithms for computing the shortest path between two
nodes of a graph are known. We will be discussing Dijkstra's method.

Figure: The computation of shortest path


*Note: The arrow in figure indicates the working node.

Each node of the graph is labeled with its distance from the source node along the best possible known path.
Initially no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are
found, the labels change, reflecting better paths. A label may be either permanent or tentative. Initially all labels
are tentative. When it is discovered that a label represents the shortest possible path from source to that node, it
is made permanent and never changed later.

Now referring to the above figure (a) is a directed graph where the metric used is a distance. The steps for
finding the shortest path from A node to D node are illustrated in figure 6.4 from (b) to (f). To start with mark
the node A as permanent indicated by darkening the node A as shown in above figure 6.4 (a). Then we make
changes at all the adjacent nodes of A, re-labeling them with the distance to A. Then we examine the nodes that
are labeled recently and then choose the node with the smallest label as permanent as shown in above figure (b).
Now examine all the adjacent nodes of B. If the sum of the label on B and the distance from B to the node being
considered is less than the label on that node, we have a shorter path, so the node is relabeled. After all nodes
adjacent to the working node have been inspected and tentative labels changed, the entire graph is searched for
the tentatively labeled node with the smallest value. This node is made permanent. This method continues until
the destination node is obtained. The steps are clearly indicated in the figure above, parts (c) to (f), and the
shortest path is ABEFHD with a total distance of 10 km.

One way of measuring path length is the number of hops and another is the distance in kilometers. Many other
metrics are possible like each arc labeled with the mean queuing and transmission delay for some standard test
packets.
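The labeling procedure above is exactly Dijkstra's algorithm. Since the figure is not reproduced here, the edge weights below are assumed values, chosen so that the shortest A-to-D path comes out as ABEFHD with cost 10, matching the result quoted in the text.

```python
import heapq

# Hypothetical edge weights (the original figure is not reproduced).
GRAPH = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3, "F": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"C": 3, "E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}

def dijkstra(graph, source, dest):
    """Return (distance, path). Tentative labels live in the heap;
    popping a node makes its label permanent, as in the text."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)                  # label becomes permanent
        if node == dest:
            break
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd          # found a better tentative label
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], dest
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[dest], "".join(reversed(path))

print(dijkstra(GRAPH, "A", "D"))   # (10, 'ABEFHD')
```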

Q5. Discuss any two design issues of Session Layer.

Ans: Two design issues of the session layer are discussed below:

1. Authentication
Authentication is the act of establishing or confirming something (or someone) as authentic, that is, that the
claims made by or about the thing are true. It is the act of confirming the truth of an attribute of a datum or
entity. This might involve confirming the identity of a person or software program, tracing the origins of an
artifact, assuring that a computer program is a trusted one, or ensuring that a product is what its packaging and
labeling claim it to be. Authentication often involves verifying the validity of at least one form of
identification.

Security research has determined that for a positive authentication, elements from at least two, and preferably
all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:
 The knowledge factors: Something the user knows (e.g., a password, pass phrase, or personal
identification number (PIN), challenge response (the user must answer a question), pattern).
 The ownership factors: Something the user has (e.g., an ID card, security token, software token, or
cell phone).
 The inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence
(there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or
other biometric identifier).
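As an aside, a knowledge factor is typically verified against a salted hash rather than a stored password. A minimal sketch (the iteration count and record layout are illustrative only, not a recommendation):

```python
import hashlib
import hmac
import os

def make_record(password: str) -> dict:
    """Store a salted PBKDF2 digest instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "digest": digest}

def verify(password: str, record: dict) -> bool:
    """Recompute the digest from the claimed password and compare."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), record["salt"], 100_000)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, record["digest"])

record = make_record("correct horse")
print(verify("correct horse", record))  # True
print(verify("wrong guess", record))    # False
```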

2. Permissions or Access control


One familiar use of authentication and authorization is access control. A computer system that is supposed to be
used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore
usually controlled by insisting on an authentication procedure to establish, with some degree of confidence, the
identity of the user, and then granting the privileges authorized to that identity.

In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card
network does not require a personal identification number, and small transactions usually do not even require a
signature. The security of the system is maintained by limiting distribution of credit card numbers, and by the
threat of punishment for fraud.

An access control list (ACL), with respect to a computer file system, is a list of permissions attached to an
object. An ACL specifies which users or system processes are granted access to objects, as well as what
operations are allowed on given objects
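A toy model of such an ACL check, with invented object names and grants:

```python
# Each object carries a list of (principal, operation) grants.
ACL = {
    "/reports/q3.xlsx": [("alice", "read"), ("alice", "write"),
                         ("bob", "read")],
}

def allowed(user: str, op: str, obj: str) -> bool:
    """An operation is permitted only if the ACL grants it explicitly."""
    return (user, op) in ACL.get(obj, [])

print(allowed("bob", "read", "/reports/q3.xlsx"))   # True
print(allowed("bob", "write", "/reports/q3.xlsx"))  # False
```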

Q6. Explain Recursive queries and Iterative queries that a DNS resolver (either a DNS client or another
DNS server) can make to a DNS server.

Ans:

Recursive queries
In a recursive query, the queried name server is requested to respond with the requested data or with an error
stating that data of the requested type or the specified domain name does not exist. The name server cannot just
refer the DNS resolver to a different name server. A DNS client typically sends this type of query.

Iterative queries
In an iterative query, the queried name server can return the best answer it currently has back to the DNS
resolver. The best answer might be the resolved name or a referral to another name server that is closer to
fulfilling the DNS client's original request. DNS servers typically send iterative queries to query other DNS
servers.

DNS Name Resolution Example


To show how recursive and iterative queries are used for common DNS name resolutions, consider a computer
running a Microsoft Windows XP operating system or Windows Server 2003 connected to the Internet. A user
types http://www.example.com in the Address field of their Internet browser.

When the user presses the ENTER key, the browser makes a Windows Sockets function call, either
gethostbyname() or getaddrinfo(), to resolve the name www.example.com to an IP address. For the DNS
portion of the Windows host name resolution process, the following occurs:

1. The DNS resolver on the DNS client sends a recursive query to its configured DNS server, requesting the IP
address corresponding to the name "www.example.com". The DNS server for that client is responsible for
resolving the name and cannot refer the DNS client to another DNS server.

2. The DNS server that received the initial recursive query checks its zones and finds no zones corresponding to
the requested domain name; the DNS server is not authoritative for the example.com domain.

3. The DNS server of the DNS client sends an iterative query for www.example.com. to the name server that is
authoritative for the com. top-level domain.

4. The DNS server of the DNS client sends an iterative query for www.example.com. to the name server that is
authoritative for the example.com. domain.

5. The example.com. name server replies with the IP address corresponding to the FQDN www.example.com.

6. The DNS server of the DNS client sends the IP address of www.example.com to the DNS client.
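The walk above can be modelled with a toy in-memory namespace. The zone contents and the address below are invented, and real servers speak the DNS wire protocol rather than passing dictionaries:

```python
ROOT = {"com.": "com-server"}                    # root knows the TLD servers
COM = {"example.com.": "example-server"}         # TLD knows the zone servers
EXAMPLE = {"www.example.com.": "93.184.216.34"}  # authoritative data

SERVERS = {"root-server": ROOT, "com-server": COM, "example-server": EXAMPLE}

def iterative_query(server, name):
    """One iterative query: the server answers with the best it has --
    either the final record or a referral to a closer server."""
    zone = SERVERS[server]
    if name in zone:
        return ("answer", zone[name])
    for suffix, next_server in zone.items():
        if name.endswith(suffix):
            return ("referral", next_server)
    return ("error", None)

def recursive_resolve(name):
    """What the client's configured DNS server does with a recursive
    query: chase referrals itself until an answer comes back."""
    server = "root-server"
    while True:
        kind, value = iterative_query(server, name)
        if kind == "answer":
            return value
        if kind == "referral":
            server = value
        else:
            raise LookupError(name)

print(recursive_resolve("www.example.com."))  # 93.184.216.34
```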

Figure: Example of recursive and iterative queries in DNS name resolution

Q7. Explain sliding window protocol.

Ans: A sliding window protocol is a feature of packet-based data transmission protocols. Sliding window
protocols are used where reliable in-order delivery of packets is required, such as in the Data Link Layer (OSI
model) as well as in the Transmission Control Protocol (TCP).

Conceptually, each portion of the transmission (packets in most data link layers, but bytes in TCP) is assigned a
unique consecutive sequence number, and the receiver uses the numbers to place received packets in the correct
order, discarding duplicate packets and identifying missing ones. The problem with this is that there is no limit
on the size of the sequence numbers that can be required.

By placing limits on the number of packets that can be transmitted or received at any given time, a sliding
window protocol allows an unlimited number of packets to be communicated using fixed-size sequence
numbers. The term "window" on transmitter side represents the logical boundary of the total number of packets
yet to be acknowledged by the receiver. The receiver informs the transmitter in each acknowledgment packet
the current maximum receiver buffer size (window boundary). The TCP header uses a 16-bit field to report the
receive window size to the sender.

Therefore, the largest window that can be used is 2^16 = 65,536 bytes (64 KB). In slow-start mode, the transmitter starts
with low packet count and increases the number of packets in each transmission after receiving
acknowledgment packets from receiver. For every ack packet received, the window slides by one packet
(logically) to transmit one new packet. When the window threshold is reached, the transmitter sends one packet
for one ack packet received. If the window limit is 10 packets then in slow start mode the transmitter may start
transmitting one packet followed by two packets (before transmitting two packets, one packet ack has to be
received), followed by three packets and so on until 10 packets.

For the highest possible throughput, it is important that the transmitter is not forced to stop sending by the
sliding window protocol earlier than one round-trip delay time (RTT). The limit on the amount of data that it
can send before stopping to wait for an acknowledgment should be larger than the bandwidth-delay product of
the communications link. If it is not, the protocol will limit the effective bandwidth of the link.
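A quick calculation shows why. Assuming a hypothetical 10 Mbit/s link with a 100 ms round-trip time, the classic 64 KB window falls short of the bandwidth-delay product and caps throughput:

```python
def max_throughput(window_bytes: float, rtt_s: float) -> float:
    """A sender that must stop every window and wait one RTT for
    acknowledgments is capped at window / RTT bytes per second."""
    return window_bytes / rtt_s

bandwidth = 10e6 / 8      # a 10 Mbit/s link, in bytes per second
rtt = 0.1                 # 100 ms round-trip time
bdp = bandwidth * rtt     # bandwidth-delay product: bytes "in flight"
window = 64 * 1024        # the largest classic 16-bit TCP window

print(f"BDP: {bdp:.0f} bytes")
print(f"64 KB window caps the link at "
      f"{max_throughput(window, rtt) * 8 / 1e6:.2f} Mbit/s")
```

Here the window (65,536 bytes) is smaller than the bandwidth-delay product (125,000 bytes), so the effective rate is limited to roughly half the link capacity.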

Q8. What is firewall? Explain components of firewall system.

Ans:

Firewall
A firewall is a set of related programs, located at a network gateway server that protects the resources of a
private network from users from other networks. An enterprise with an intranet that allows its workers access to
the wider Internet installs a firewall to prevent outsiders from accessing its own private data resources and for
controlling what outside resources its own users have access to.

Basically, a firewall, working closely with a router program, examines each network packet to determine
whether to forward it toward its destination. A firewall also includes or works with a proxy server that makes
network requests on behalf of workstation users. A firewall is often installed in a specially designated computer
separate from the rest of the network so that no incoming request can get directly at private network resources.

Firewall Components

1. Packet Filtering
The primary activity of a firewall is filtering packets that pass to and from the Internet and the protected subnet.
A firewall could filter the following fields within packets:
 Packet type, such as IP, UDP, ICMP, or TCP;
 Source IP address, the system from which the packet originated;
 Destination IP address, the system for which the packet is destined;
 Destination TCP/UDP port, a number designating a service such as telnet, ftp, smtp, nfs, etc., located
on the destination host, and
 Source TCP/UDP port, the port number of the service on the host originating the connection.
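A first-match packet filter over these fields can be sketched as follows; the rules and addresses are invented:

```python
# Each rule is (action, proto, src, dst, dst_port); None means "any".
# The first rule that matches the packet decides its fate.
RULES = [
    ("allow", "tcp", None, "192.0.2.10", 25),  # SMTP to the mail host
    ("deny",  "tcp", None, None, 23),          # telnet blocked everywhere
    ("deny",  None,  None, None, None),        # default: drop
]

def decide(proto, src, dst, dst_port) -> str:
    packet = (proto, src, dst, dst_port)
    for action, *pattern in RULES:
        if all(p is None or p == f for p, f in zip(pattern, packet)):
            return action
    return "deny"

print(decide("tcp", "198.51.100.7", "192.0.2.10", 25))  # allow
print(decide("tcp", "198.51.100.7", "192.0.2.10", 23))  # deny
```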

2. Application Gateways
After packet filtering and logging, application gateways function to provide a higher level of security for
applications such as telnet, ftp, or SMTP that are not blocked at the firewall. The protocol for connecting to
specific internal hosts would be as follows:
1. A user first telnets to the application gateway and enters the name of the desired host;
2. The gateway then creates a telnet connection to the desired host;
3. The user's system knows only that the telnet session is between the user's system and the application
gateway.

3. Logging and Detection of Suspicious Activity


Packet-filtering routers unfortunately suffer from a number of weaknesses. In addition to standard logging that
would include statistics on packet types, frequency, and source/destination addresses, the following types of
activity should be captured:
 Connection information
 Attempts to use any ``banned'' protocols
 Attempts to spoof internal systems
 Routing re-directions

Q9. What is Message and Packet switching?

Ans:

Message switching
Message switching was the precursor of packet switching, where messages were routed in their entirety and one
hop at a time. It was first introduced by Leonard Kleinrock in 1961. Message switching systems are nowadays
mostly implemented over packet-switched or circuit-switched data networks.

Hop-by-hop Telex forwarding is an example of a message switching system. E-mail is another example of a
message switching system. When this form of switching is used, no physical path is established in advance in
between sender and receiver. Instead, when the sender has a block of data to be sent, it is stored in the first
switching office (i.e. router) then forwarded later at one hop at a time.

Each block is received in its entirety, inspected for errors, and then forwarded or re-transmitted. It is a form
of store-and-forward network. Data is transmitted into the network and stored in a switch. The network transfers
the data from switch to switch when it is convenient to do so, and as such the data is not transferred in real-time.
Blocking cannot occur; however, long delays can happen. The source and destination terminal need not be
compatible, since conversions are done by the message switching networks.

Packet switching
Packet switching splits traffic data (for instance, digital representation of sound, or computer data) into chunks,
called packets. Packet switching is similar to message switching. Any message exceeding a network-defined
maximum length is broken up into shorter units, known as packets, for transmission. The packets, each with an
associated header, are then transmitted individually through the network. These packets are routed over a shared
network. Packet switching networks do not require a circuit to be established and allow many pairs of nodes to
communicate almost simultaneously over the same channel. Each packet is individually addressed precluding
the need for a dedicated path to help the packet find its way to its destination.

Packet switching is used to optimize the use of the channel capacity available in a network; to minimize the
transmission latency, and to increase robustness of communication.

The most well-known use of packet switching is the Internet. The Internet uses the Internet protocol suite over a
variety of data link layer protocols. For example, Ethernet and Frame relay are very common. Newer mobile
phone technologies (e.g., GPRS, I-mode) also use packet switching. Packet switching is also called
connectionless networking because no connections are established.
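Packetization and reassembly can be sketched in a few lines; the 8-byte payload size is arbitrary:

```python
import random

def packetize(message: bytes, size: int = 8):
    """Split a message into (sequence_number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """The sequence-number headers restore the order even if the
    packets arrive shuffled."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"packet switching splits traffic into chunks"
pkts = packetize(message)
random.shuffle(pkts)                 # simulate out-of-order delivery
assert reassemble(pkts) == message   # original message restored
print(len(pkts), "packets")          # 6 packets
```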

Q10. List the design issues related to Data Link Layer.

Ans: The data link layer takes the packets from the network layer and encapsulates the bit stream into units
called frames for transmission. Note that frames are nothing more than "packets" or "messages". By
convention, we'll use the term "frames" when discussing DLL packets. Each frame contains a frame header, a
payload field for holding the packet, and a frame trailer as illustrated in below figure.

Figure: Frame Format

In general the DLL design issues are listed below:


1. In general, the Data Link Layer provides services to the network layer. The network layer should be able
to send packets to its neighbors without worrying about the details of getting it there in one piece.
2. The DLL does the process of framing the bits, i.e. encapsulating the packets
3. Sender checksums the frame and sends checksum together with data. The checksum allows the receiver
to determine when a frame has been damaged in transit.
4. Receiver recomputes the checksum and compares it with the received value. If they differ, an error has
occurred and the frame is discarded.
5. Perhaps return a positive or negative acknowledgment to the sender. A positive acknowledgment
indicates the frame was received without errors, while a negative acknowledgment indicates the
opposite.
6. Flow control: A data link protocol discusses tasks like error control and flow control but these tasks are
also dealt at transport layer along with some other protocols.
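Steps 3 and 4 can be illustrated with CRC-32 as the frame check sequence (real data-link protocols define their own checksum and framing details):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Sender: append the CRC-32 of the payload as a 4-byte trailer."""
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def check_frame(frame: bytes):
    """Receiver: recompute the checksum and compare with the trailer.
    A mismatch means the frame was damaged and should be discarded."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None                       # damaged: discard, send NAK
    return payload                        # intact: deliver, send ACK

frame = make_frame(b"hello network layer")
print(check_frame(frame))                         # b'hello network layer'
damaged = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip one bit in transit
print(check_frame(damaged))                       # None
```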

Q11. Briefly explain Point-to-Point Protocol.

Ans: Point-to-Point Protocol (PPP) is a network-specific standard protocol with STD number 51. Its status is
elective, and it is described in RFC 1661 and RFC 1662. The standards defined in these RFCs were later
extended to allow IPv6 over PPP, defined in RFC 2472.

There are a large number of proposed standard protocols, which specify the operation of PPP over different
kinds of point-to-point links. Each has a status of elective. We advise you to consult STD 1 – Internet Official
Protocol Standards for a list of PPP-related RFCs that are on the Standards Track.

Point-to-point circuits in the form of asynchronous and synchronous lines have long been the mainstay for data
communications. In the TCP/IP world, the de facto standard SLIP protocol has served admirably in this area,
and is still in widespread use for dial-up TCP/IP connections. However, SLIP has a number of drawbacks that
are addressed by the Point-to-Point Protocol.

PPP has three main components:


 A method for encapsulating datagrams over serial links.
 A Link Control Protocol (LCP) for establishing, configuring, and testing the data-link connection.
 A family of Network Control Protocols (NCPs) for establishing and configuring different network-layer
protocols. PPP is designed to allow the simultaneous use of multiple network-layer protocols.

Before a link is considered to be ready for use by network-layer protocols, a specific sequence of events must
happen. The LCP provides a method of establishing, configuring, maintaining, and terminating the connection.
LCP goes through the following phases:

1. Link establishment and configuration negotiation: In this phase, link control packets are exchanged and
link configuration options are negotiated. After options are agreed on, the link is open, but is not necessarily
ready for network-layer protocols to be started.

2. Link quality determination: This phase is optional. PPP does not specify the policy for determining
quality, but does provide low-level tools, such as echo request and reply.

3. Authentication: This phase is optional. Each end of the link authenticates itself with the remote end using
authentication methods agreed to during phase 1.

4. Network-layer protocol configuration negotiation: After LCP has finished the previous phase, network-
layer protocols can be separately configured by the appropriate NCP.

5. Link termination: LCP can terminate the link at any time. This is usually done at the request of a human
user, but can happen because of a physical event.
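The phase sequence can be summarized as a small generator; the phase names follow the list above, and the transitions are simplified (phases 2 and 3 are optional, per the text):

```python
def lcp_phases(check_quality: bool = False, authenticate: bool = False):
    """Yield the LCP phases a PPP link passes through, in order.
    Link quality determination and authentication are optional."""
    yield "link establishment"
    if check_quality:
        yield "link quality determination"
    if authenticate:
        yield "authentication"
    yield "network-layer protocol configuration"
    yield "link termination"

print(list(lcp_phases(authenticate=True)))
```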

Q12. Briefly explain five parts of Multipurpose Internet Mail Extensions.

Ans: MIME is a draft standard that includes mechanisms to address the limitations of plain 7-bit ASCII text
mail in a manner that is highly compatible with existing RFC 2822 standards. Because mail messages are
frequently forwarded through mail
gateways, it is not possible for an SMTP client to distinguish between a server that manages the destination
mailbox and one that acts as a gateway to another network. Because the mail that passes through a gateway
might be tunneled through further gateways, some or all of which can be using a different set of messaging
protocols, it is not possible in general for a sending SMTP to determine the lowest common denominator
capability common to all stages of the route to the destination mailbox. For this reason, MIME assumes the
worst: 7-bit ASCII transport, which might not strictly conform to or be compatible with RFC 2821. It does not
define any extensions to RFC 2821, but limits itself to extensions within the framework of RFC 2822.
Therefore, a MIME message is one which can be routed through any number of networks that are loosely
compliant with RFC 2821 or are capable of transmitting RFC 2821 messages.

MIME can be described in five parts:


 Protocols for including objects other than US ASCII text mail messages within the bodies of messages
conforming to RFC 2822. These are described in RFC 2045.
 General structure of the MIME media typing system, which defines an initial set of media types. This is
described in RFC 2046.
 A protocol for encoding non-U.S. ASCII text in the header fields of mail messages conforming to RFC
2822. This is described in RFC 2047.
 Various IANA registration procedures for MIME-related facilities. This is described in RFC 2048.
 MIME conformance criteria. This is described in RFC 2049.

A MIME-compliant message must contain a header field with the following verbatim text: MIME-Version: 1.0

As with RFC 2822 headers, the case of MIME header field name is never significant, but the case of field
values can be, depending on the field name and the context. For the MIME fields described later, the values are
case-insensitive unless stated otherwise.
The general syntax for MIME header fields is the same as that for RFC 2822, having the format:
Keyword: Value

Therefore, the following field is valid (parenthetical phrases are treated as comments and ignored):
MIME-Version: 1.0 (this is a comment)
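Python's standard email package produces MIME-conformant messages, including the mandatory MIME-Version header; the addresses and attachment below are invented:

```python
from email.message import EmailMessage

# Build a two-part MIME message: a US-ASCII text body plus a binary
# attachment, which forces a multipart/mixed container.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Report"
msg.set_content("Plain US-ASCII body")
msg.add_attachment(b"\x00\x01", maintype="application",
                   subtype="octet-stream", filename="data.bin")

print(msg["MIME-Version"])     # 1.0
print(msg.get_content_type())  # multipart/mixed
```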

Q13. Distinguish between IPv4 and IPv6 addressing schemes.

Ans:

IPv4 Addressing scheme


An IP address is an identifier that is assigned at the Internet layer to an interface or a set of interfaces. Each IP
address can identify the source or destination of IP packets. For IPv4, every node on a network has one or more
interfaces, and you can enable TCP/IP on each of those interfaces. When you enable TCP/IP on an interface,
you assign it one or more logical IPv4 addresses, either automatically or manually. The IPv4 address is a
logical address because it is assigned at the Internet layer and has no relation to the addresses that are used at
the Network Interface layer. IPv4 addresses are 32 bits long.

IPv4 Address Syntax: If network administrators expressed IPv4 addresses using binary notation, each address
would appear as a 32-digit string of 1s and 0s. Because such strings are cumbersome to express and remember,
administrators use dotted decimal notation, in which periods (or dots) separate four decimal numbers (from 0 to
255). Each decimal number, known as an octet, represents 8 bits (1 byte) of the 32-bit address.

IPv6 Addressing scheme
The most obvious difference between IPv6 and IPv4 is address size. An IPv6 address is 128 bits long, which is
four times larger than an IPv4 address. A 32-bit address space allows for 2^32 or 4,294,967,296 possible
addresses. A 128-bit address space allows for 2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456
(about 3.4 × 10^38) possible addresses.

The IPv4 address space was designed in the late 1970s when few people, if any, imagined that the addresses
could be exhausted. However, due to the original allocation of Internet address class-based address prefixes and
the recent explosion of hosts on the Internet, the IPv4 address space was consumed to the point that by 1992 it
was clear a replacement would be necessary.

With IPv6, it is even harder to conceive that the IPv6 address space will be consumed. To help put this in
perspective, a 128-bit address space provides 655,570,793,348,866,943,898,599 (about 6.5 × 10^23) addresses
for every square meter of the Earth's surface. The decision to make the IPv6 address 128 bits long was not so
that every square meter of the Earth could have 6.5 × 10^23 addresses. Rather, the relatively large size of the
IPv6 address space is designed for efficient address allocation and routing that reflects the topology of the
modern-day Internet and to accommodate 64-bit media access control (MAC) addresses that newer networking
technologies are using.
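The size difference is easy to demonstrate with Python's standard ipaddress module (the two addresses below are documentation examples, not real hosts):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # dotted decimal, 32 bits
v6 = ipaddress.ip_address("2001:db8::1")  # colon-hexadecimal, 128 bits

print(v4.max_prefixlen)  # 32
print(v6.max_prefixlen)  # 128
print(2 ** 32)           # 4294967296 possible IPv4 addresses
print(2 ** 128)          # 340282366920938463463374607431768211456
```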

Q14. Distinguish IPV4 and IPV6 addressing schemes

Ans:

IPV4 Addressing scheme


An IP address is an identifier that is assigned at the Internet layer to an interface or a set of interfaces. Each IP
address can identify the source or destination of IP packets. For IPV4, every node on a network has one or more
interfaces, and you can enable TCP/IP on each of those interfaces. When you enable TCP/IP on an interface,
you assign it one or more logical IPV4 addresses, either automatically or manually. The IPV4 address is a
logical address because it is assigned at the Internet layer and has no relation to the addresses that are used at
the Network Interface layer. IPV4 addresses are 32 bits long.

IPV4 Address Syntax: If network administrators expressed IPV4 addresses using binary notation, each address
would appear as a 32-digit string of 1s and 0s. Because such strings are cumbersome to express and remember,
administrators use dotted decimal notation, in which periods (or dots) separate four decimal numbers (from 0 to
255). Each decimal number, known as an octet, represents 8 bits (1 byte) of the 32-bit address.
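As a quick sketch of this notation, dotted decimal conversion is just shifting and masking octets. The helper functions below are illustrative, not part of any standard library API:

```python
def to_dotted_decimal(addr32: int) -> str:
    """Split a 32-bit address into four octets separated by dots."""
    octets = [(addr32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return ".".join(str(o) for o in octets)

def from_dotted_decimal(text: str) -> int:
    """Rebuild the 32-bit value from its dotted decimal form."""
    value = 0
    for part in text.split("."):
        value = (value << 8) | int(part)
    return value

print(to_dotted_decimal(0xC0A80001))   # 192.168.0.1
```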

IPV6 Addressing scheme


The most obvious difference between IPV6 and IPV4 is address size. An IPV6 address is 128 bits long, four
times the length of an IPV4 address. A 32-bit address space allows for 2^32, or 4,294,967,296, possible
addresses. A 128-bit address space allows for 2^128, or 340,282,366,920,938,463,463,374,607,431,768,211,456
(about 3.4 × 10^38), possible addresses.
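The address-space figures quoted above are easy to verify with Python's arbitrary-precision integers:

```python
# Checking the IPV4 and IPV6 address-space sizes stated in the text.
ipv4_space = 2 ** 32     # 32-bit addresses
ipv6_space = 2 ** 128    # 128-bit addresses

print(ipv4_space)   # 4294967296
print(ipv6_space)   # 340282366920938463463374607431768211456
```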

The IPV4 address space was designed in the late 1970s when few people, if any, imagined that the addresses
could be exhausted. However, due to the original allocation of Internet address class-based address prefixes and
the recent explosion of hosts on the Internet, the IPV4 address space was consumed to the point that by 1992 it
was clear a replacement would be necessary.

With IPV6, it is even harder to conceive that the address space will ever be consumed. To help put this in
perspective, a 128-bit address space provides 655,570,793,348,866,943,898,599 (about 6.5 × 10^23) addresses for
every square meter of the Earth's surface. The decision to make the IPV6 address 128 bits long was not so that
every square meter of the Earth could have 6.5 × 10^23 addresses. Rather, the relatively large size of the IPV6
address space is designed for efficient address allocation and routing that reflects the topology of the
modern-day Internet, and to accommodate the 64-bit media access control (MAC) addresses that newer networking
technologies use.

Q15. What is Stop-and-Wait Automatic Repeat Request? Briefly explain.

Ans: This protocol adds a simple error control mechanism to the stop-and-wait protocol. To detect and correct
corrupted frames, we need to add redundancy bits to our data frame. When a frame arrives at the receiver site,
it is checked, and if it is corrupted, it is silently discarded; the detection of errors in this protocol is
therefore manifested by the silence of the receiver. Lost frames are more difficult to handle than corrupted
ones. Both corrupted and lost frames need to be resent in this protocol. If the receiver does not respond when
there is an error, how can the sender know which frame to resend? To remedy this problem, the sender keeps a
copy of the sent frame and, at the same time, starts a timer. If the timer expires and there is no
acknowledgement for the sent frame, the frame is resent, the copy is kept, and the timer is restarted. Since the
protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK, even though
several copies of the same frame can be in the network.

Since an ACK frame can also be corrupted and lost, it too needs redundancy bits and a sequence number. The
ACK frame for this protocol has a sequence number field. In this protocol, the sender simply discards a
corrupted ACK frame or ignores an out-of-order one.

This protocol requires the frames to be numbered, which is done using sequence numbers. A field is added to
the data frame to hold the sequence number of that frame.
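The resend loop described above can be sketched as a toy simulation. The channel class, loss model, and names below are invented for illustration and do not correspond to any real networking API:

```python
import random

class LossyChannel:
    """Hypothetical channel plus receiver: frames are sometimes lost in transit."""
    def __init__(self, loss_rate=0.3, seed=1):
        self.rng = random.Random(seed)
        self.loss_rate = loss_rate
        self.expected = 0          # sequence number the receiver waits for
        self.received = []

    def deliver(self, seq, payload):
        if self.rng.random() < self.loss_rate:
            return None            # frame (or its ACK) lost: no ACK arrives
        if seq == self.expected:   # new frame: accept it
            self.received.append(payload)
            self.expected = 1 - self.expected
        return seq                 # ACK; duplicates are re-ACKed, not re-kept

def send_stop_and_wait(frames, channel):
    seq = 0
    for payload in frames:
        while channel.deliver(seq, payload) != seq:
            pass                   # timer expired: resend the kept copy
        seq = 1 - seq              # alternate the 1-bit sequence number

channel = LossyChannel()
send_stop_and_wait(["f0", "f1", "f2"], channel)
print(channel.received)   # ['f0', 'f1', 'f2']
```

Note how the sender advances only when the ACK carries the current sequence number, matching the "only one specific frame needs an ACK" property of stop-and-wait.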

Go-Back-N Automatic Repeat Request


To improve transmission efficiency, multiple frames must be in transit while the sender waits for
acknowledgement; that is, more than one frame should be outstanding to keep the channel busy. Two protocols
were developed to achieve this goal:

 Go-Back-N Automatic Repeat Request: In this protocol we can send several frames before
receiving acknowledgements; we keep a copy of these frames until the acknowledgement arrives. Frames
from a sending station are numbered sequentially.

 Sliding Window Protocol: In this protocol, the sliding window is an abstract concept that defines the
range of sequence numbers that is the concern of the sender and receiver. The sender and the receiver
need to deal with only part of the possible sequence numbers.
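A minimal sketch of the Go-Back-N sender's sliding-window bookkeeping follows. Transmission is abstracted away, the ACK stream is simulated, and all names are invented for illustration:

```python
def go_back_n_send(num_frames, window_size, ack_stream):
    """Track which frame numbers get (re)transmitted as cumulative ACKs arrive."""
    base = 0          # oldest unacknowledged frame (left edge of the window)
    next_seq = 0      # next frame number to transmit
    sent = []         # transmission log, for illustration
    acks = iter(ack_stream)
    while base < num_frames:
        # Fill the window: send every frame that fits inside it.
        while next_seq < base + window_size and next_seq < num_frames:
            sent.append(next_seq)
            next_seq += 1
        acked = next(acks)        # highest frame number acknowledged so far
        if acked >= base:
            base = acked + 1      # cumulative ACK slides the window forward
        else:
            next_seq = base       # stale ACK / timeout: go back to base
    return sent

# The ACK for frame 1 is lost (the stale "0" repeats), so frames 1-2 are resent.
print(go_back_n_send(3, 2, [0, 0, 2]))   # [0, 1, 2, 1, 2]
```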

Q16. What is the role of Internet Protocol version 4 (IPV4) in addressing and routing packets between
hosts? Briefly explain the structure of an IPV4 packet.

Ans: IPV4 is a datagram protocol primarily responsible for addressing and routing packets between hosts. IPV4
is connectionless, which means that it does not establish a connection before exchanging data, and unreliable,
which means that it does not guarantee packet delivery. IPV4 always makes a “best effort” attempt to deliver a
packet. An IPV4 packet might be lost, delivered out of sequence, duplicated, or delayed. IPV4 does not attempt
to recover from these types of errors. A higher-layer protocol, such as TCP or an application protocol, must
acknowledge delivered packets and recover lost packets if needed. IPV4 is defined in RFC 791. An IPV4 packet
consists of an IPV4 header and an IPV4 payload. The IPV4 payload, in turn, consists of an upper-layer protocol
data unit, such as a TCP segment or a UDP message.

The Figure given below shows the basic structure of an IPV4 packet.

Figure: The basic structure of an IPV4 packet

The table below lists and describes the key fields in the IPV4 header.

Source IP Address: The IPV4 address of the source of the IP packet.
Destination IP Address: The IPV4 address of the intermediate or final destination of the IPV4 packet.
Identification: An identifier for all fragments of a specific IPV4 packet, if fragmentation occurs.
Protocol: An identifier of the upper-layer protocol to which the IPV4 payload must be passed.
Checksum: A simple mathematical computation used to check for bit-level errors in the IPV4 header.
Time-to-Live (TTL): The number of network segments on which the datagram is allowed to travel before a router should discard it. The sending host sets the TTL, and routers decrease the TTL by one when forwarding an IPV4 packet. This field prevents packets from endlessly circulating on an IPV4 network.

Table: Fields in the IPV4 Header
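The header layout can be explored with Python's standard struct module. The 20-byte format string below covers the fixed IPV4 header, and the sample packet is fabricated for illustration (192.0.2.0/24 is a documentation range):

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte IPV4 header into the fields from the table."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "identification": ident,
        "ttl": ttl,
        "protocol": proto,          # e.g. 6 = TCP, 17 = UDP
        "checksum": checksum,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

# A fabricated header: version 4, IHL 5, TTL 64, protocol TCP.
example = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, 20, 0x1234, 0, 64, 6, 0,
                      bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
print(parse_ipv4_header(example)["ttl"])   # 64
```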

If a router receives an IPV4 packet that is too large for the network segment on which the packet is being
forwarded, IPV4 on the router fragments the original packet into smaller packets that fit on the forwarding
network segment. When the packets arrive at their final destination, IPV4 on the destination host reassembles
the fragments into the original payload. This process is referred to as fragmentation and reassembly.
Fragmentation can occur in environments that have a mix of networking technologies, such as Ethernet or
Token Ring.

Q17. Discuss about different DNS Resource Record Types.

Ans: For small networks, DNS name resolution is simpler and more efficient when the DNS client queries a
DNS server maintained by an ISP. Most ISPs will maintain domain information for a fee. If your organization
wants to have control over its domain, or wants to avoid the costs of using an ISP, you can set up your
organization's own DNS servers. At least two computers are recommended as DNS servers for reliability and
redundancy: a primary and a secondary name server. The primary name server maintains the database of
information, which is then replicated from the primary name server to the secondary name server.

Resource Record Types


The DNS standards define many types of resource records. The most commonly used resource records are the
following:
 SOA: Identifies the start of a zone of authority. Every zone contains an SOA resource record at the
beginning of the zone file, which stores information about the zone, configures replication behavior, and
sets the default TTL for names in the zone.
 A: Maps an FQDN to an IPv4 address.
 AAAA: Maps an FQDN to an IPv6 address.
 NS: Indicates the servers that are authoritative for a zone. NS records indicate primary and secondary
servers for the zone specified in the SOA resource record, and they indicate the servers for any delegated
zones. Every zone must contain at least one NS record at the zone root.
 PTR: Maps an IP address to an FQDN for reverse lookups.
 CNAME: Specifies an alias (synonymous name).
 MX: Specifies a mail exchange server for a DNS domain name. A mail exchange server is a host that
receives mail for the DNS domain name.
 SRV: Specifies the servers for a specific service, protocol, and DNS domain.

The DNS Server service in Windows Server 2003 also supports the following resource record types, which are
Microsoft-specific:
 WINS: Indicates the IPv4 address of a Windows Internet Name Service (WINS) server for WINS
forward lookup. The DNS Server service in Windows Server 2003 can use a WINS server for looking up
the host portion of a DNS name.
 WINS-R: Indicates the use of WINS reverse lookup, in which a DNS server uses a NetBIOS Adapter
Status message to find the host portion of the DNS name given its IPv4 address.
 ATMA: Maps DNS domain names to Asynchronous Transfer Mode (ATM) addresses.
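To make the common record types concrete, here is a toy in-memory zone with a lookup helper. All names and addresses are invented (example.com and 192.0.2.0/24 are reserved for documentation), and this is not a real DNS API:

```python
# Each record is (owner name, record type, record data).
zone = [
    ("example.com",      "SOA",   "ns1.example.com. admin.example.com."),
    ("example.com",      "NS",    "ns1.example.com"),
    ("www.example.com",  "A",     "192.0.2.10"),
    ("www.example.com",  "AAAA",  "2001:db8::10"),
    ("mail.example.com", "A",     "192.0.2.25"),
    ("example.com",      "MX",    "10 mail.example.com"),
    ("ftp.example.com",  "CNAME", "www.example.com"),
]

def lookup(name, rtype):
    """Return the data of every record matching the name and type."""
    return [data for (owner, t, data) in zone if owner == name and t == rtype]

print(lookup("www.example.com", "A"))   # ['192.0.2.10']
```

A mail sender would first query the MX record for example.com, then resolve the returned exchanger's A record.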

Q18. What is Fast Ethernet? Explain in brief.

Ans: Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate
of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100-megabit Ethernet standards,
100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently
produced. Full-duplex Fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat
misleading, as that level of improvement is achieved only if traffic patterns are symmetrical. Fast Ethernet
was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by
Gigabit Ethernet.

A Fast Ethernet adaptor can be logically divided into a media access controller (MAC), which deals with the
higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the
PHY by a 4-bit, 25 MHz synchronous parallel interface known as the MII (media-independent interface).
Repeaters (hubs) are also allowed; they connect to multiple PHYs for their different interfaces.
 100BASE-T is any of several Fast Ethernet standards for twisted pair cables.
 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable),
 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct),
 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct).

The segment length for a 100BASE-T cable is limited to 100 meters. In practice, many networks had to be
rewired for 100-megabit speed, whether or not their cable plants were supposedly Cat3 or Cat5. The vast
majority of common implementations or installations of 100BASE-T use 100BASE-TX.

100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of category 5 or above cable. A
typical category 5 cable contains 4 pairs and can therefore support two 100BASE-TX links. Each network
segment can have a maximum distance of 100 meters. In its typical configuration, 100BASE-TX uses one pair
of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full-duplex).

The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area
network, the devices on the network are typically connected to a hub or switch, creating a star network.
Alternatively it is possible to connect two devices directly using a crossover cable.

In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol. First, a 4-bit symbol is
expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift
register. 100BASE-FX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode
optical fiber, one to receive (RX) and one to transmit (TX).

Q19. How are services of data link layer categorized? Explain in brief

Ans: The main function of the data link layer is to provide services to the network layer. The principal service is
transferring data from the network layer on the source machine to the network layer on the destination machine.
The services of the data link layer can be categorized as:

1. Unacknowledged Connectionless Service: In this service, the source machine sends independent frames to the
destination machine without having the destination machine acknowledge them. No connection is established
beforehand or released afterward. This class of service is appropriate when the error rate is very low and
recovery is left to the higher layers.

2. Acknowledged Connectionless Service: In this type of service, there are still no connections used, but each
frame sent is individually acknowledged. If the sender does not receive the acknowledgement within a specified
time interval, the frame is retransmitted.

3. Connection-Oriented Service: It is the most sophisticated service provided by the data link layer to the
network layer. The source and the destination machines establish a connection before any data transfer takes
place. Each frame sent over the connection is numbered and the data link layer guarantees that each frame sent
is indeed received. Furthermore, it guarantees that each frame is received exactly once and all frames are
received in the right order.

The service primitives used by the data link layer are as follows:

1. Request: Used by the network layer to ask the data link layer to do something.

2. Indication: Used to indicate to the network layer that an event has happened, for example, establishment or
release of a connection.

3. Response: Used on the receiving side by the network layer to reply to a previous indication.

4. Confirm: These primitives provide a way for the data link layer on the requesting side to learn whether the
request was successfully carried out and if not, why.
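The four primitives can be sketched as method calls between two hypothetical layer objects; the class and method names below are invented for illustration and do not belong to any real networking API:

```python
class NetworkLayer:
    """Receiving-side network layer."""
    def __init__(self):
        self.inbox = []

    def indication(self, payload):      # an event has happened: a frame arrived
        self.inbox.append(payload)
        return self.response(payload)   # receiving side replies to the indication

    def response(self, payload):
        return "ok"

class DataLinkLayer:
    """Sending-side data link layer."""
    def __init__(self, remote_network_layer):
        self.remote = remote_network_layer
        self.last_confirm = None

    def request(self, payload):
        """Network layer asks the data link layer to transfer a frame."""
        reply = self.remote.indication(payload)   # delivered to the peer
        self.confirm(reply == "ok")               # requester learns the outcome

    def confirm(self, success):
        self.last_confirm = success

receiver = NetworkLayer()
link = DataLinkLayer(receiver)
link.request("frame-1")
print(receiver.inbox, link.last_confirm)   # ['frame-1'] True
```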

In order to provide service to the network layer, the data link layer must use the services provided to it by
the physical layer. The physical layer accepts a raw bit stream and attempts to deliver it to the destination.
This bit stream is not guaranteed to be error-free: the number of bits received may be less than, equal to, or
more than the number of bits transmitted, and the bits may have different values. It is up to the data link
layer to detect, and if necessary correct, the errors.

Q20. How SMTP works? Briefly explain applications of SMTP.

Ans:

Working of SMTP
SMTP is based on end-to-end delivery: an SMTP client contacts the destination host's SMTP server directly, on
well-known port 25, to deliver the mail. It keeps the mail item being transmitted until it has been successfully
copied to the recipient's SMTP server. This is different from the store-and-forward principle that is common in
many mailing systems, where the mail item can pass through a number of intermediate hosts in the same network on
its way to the destination, and where successful transmission from the sender only indicates that the mail item
has reached the first intermediate hop.

In various implementations, it is possible to exchange mail between the TCP/IP SMTP mailing system and the
locally used mailing systems. These applications are called mail gateways or mail bridges. Sending mail
through a mail gateway can alter the end-to-end delivery specification, because SMTP only guarantees delivery
to the mail-gateway host, not to the real destination host located beyond the TCP/IP network. When a mail
gateway is used, the SMTP end-to-end transmission is host-to-gateway, gateway-to-host, or gateway-to-
gateway; the behavior beyond the gateway is not defined by SMTP.
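A sketch of SMTP delivery with Python's standard library is shown below. The addresses and server name are placeholders, and the actual delivery call is left commented out because it needs a reachable SMTP server on port 25:

```python
import smtplib
from email.message import EmailMessage

# Compose a message; all addresses here are placeholder documentation domains.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "SMTP demo"
msg.set_content("Delivered end to end over port 25.")

def deliver(message, server="mail.example.org", port=25):
    """Contact the destination host's SMTP server directly and deliver."""
    with smtplib.SMTP(server, port) as smtp:
        smtp.send_message(message)

# deliver(msg)   # not executed here: it requires a reachable SMTP server
print(msg["Subject"])   # SMTP demo
```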

Applications of SMTP
Simple Mail Transfer Protocol is an Internet protocol designed to send and receive e-mail messages between
e-mail servers over the Internet. SMTP was developed to send e-mail messages across the Internet. In the OSI
model, SMTP is an application layer protocol that uses TCP as the transport protocol to transmit mail to a
destination mail exchanger; in other words, SMTP is used to transmit mail to a mail server. Mail can be
transmitted by a client to the mail exchanger server, or from mail exchanger to mail exchanger. Mail sent via
SMTP is usually sent from one mail exchanger to another, directly. E-mail was never designed to be
instantaneous, but that is often how it appears to us.

Although electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages,
user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying.

For receiving messages, client applications usually use either POP3 or IMAP. Although proprietary systems
(such as Microsoft Exchange and Lotus Notes/Domino) and webmail systems (such as Hotmail, Gmail and
Yahoo! Mail) use their own non-standard protocols to access mail box accounts on their own mail servers, all
use SMTP when sending or receiving email from outside their own systems.

Q21. What is SSL protocol? How SSL handles a message?

Ans: The SSL protocol is located at the top of the transport layer. SSL is also a layered protocol itself. It simply
takes the data from the application layer, reformats it, and transmits it to the transport layer.

SSL handles a message as follows:


 The sender performs the following tasks:
a. Takes the message from upper layer.
b. Fragments the data to manageable blocks.
c. Optionally compresses the data.
d. Applies a message authentication code (MAC).
e. Encrypts the data.
f. Transmits the result to the lower layer.

 The receiver performs the following tasks:


a. Takes the data from lower layer.
b. Decrypts.
c. Verifies the data with the negotiated MAC key.
d. Decompresses the data if compression was used.
e. Reassembles the message.
f. Transmits the message to the upper layer.
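The sender and receiver pipelines can be imitated with the standard library. The XOR "cipher" below is a toy stand-in for real encryption, and the keys and block size are invented for illustration; this must not be mistaken for what SSL/TLS actually uses:

```python
import hmac, hashlib, zlib

MAC_KEY = b"negotiated-mac-key"   # placeholder for the negotiated MAC key
ENC_KEY = b"negotiated-enc-key"   # placeholder for the negotiated cipher key

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher': XOR with a repeating key (involutive, NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def ssl_sender(message: bytes, block=16):
    records = []
    for i in range(0, len(message), block):          # 1. fragment
        frag = zlib.compress(message[i:i + block])   # 2. compress (optional)
        mac = hmac.new(MAC_KEY, frag, hashlib.sha256).digest()  # 3. apply MAC
        records.append(xor_stream(frag + mac, ENC_KEY))         # 4. encrypt
    return records                                   # 5. hand to lower layer

def ssl_receiver(records):
    out = b""
    for rec in records:
        plain = xor_stream(rec, ENC_KEY)             # 1. decrypt
        frag, mac = plain[:-32], plain[-32:]
        assert hmac.compare_digest(                  # 2. verify with MAC key
            mac, hmac.new(MAC_KEY, frag, hashlib.sha256).digest())
        out += zlib.decompress(frag)                 # 3. decompress
    return out                                       # 4. reassemble, pass up
```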

An SSL session works in different states: session states and connection states. The SSL handshake
protocol coordinates the states of the client and the server. In addition, read and write states are defined to
coordinate the encryption according to the ChangeCipherSpec messages.
When either party sends a ChangeCipherSpec message, it changes the pending write state to the current write
state. Likewise, when either party receives a ChangeCipherSpec message, it changes the pending read state to the
current read state. The session state includes the following components:

Session identifier: An arbitrary byte sequence chosen by the server to identify an active or resumable session state.
Peer certificate: Certificate of the peer. This field is optional; it can be empty.
Compression method: The compression algorithm.
CipherSpec: Specifies the data encryption algorithm (such as null or DES) and a MAC algorithm.
Master secret: A 48-byte shared secret between the client and the server.
Is resumable: A flag indicating whether the session can be used for new connections.

The connection state includes the following components:

Server and client random: Arbitrary byte sequences chosen by the client and server for each connection.
Server write MAC secret: The secret used for MAC operations by the server.
Client write MAC secret: The secret used for MAC operations by the client.
Server write key: The cipher key for the server to encrypt the data and the client to decrypt the data.
Client write key: The cipher key for the client to encrypt the data and the server to decrypt the data.
Initialization vectors: Initialization vectors store the encryption information.
Sequence numbers: A sequence number indicates the number of messages transmitted since the last ChangeCipherSpec message. Both the client and the server maintain sequence numbers.

Q22. Write a note on the following:


 Point-to-Point Channels
 Broadcast Channels

Ans:

Point-to-Point Channels
A point-to-point channel is a dedicated link between two machines; data is exchanged only between that pair,
and only when both sides need to communicate. When a point-to-point subnet is used, an important design issue
is what the IMP interconnection topology should look like: LANs have a symmetric topology, whereas WANs have an
asymmetric topology.

A Point-to-Point Channel ensures that only one receiver consumes any given message. If the channel has
multiple receivers, only one of them can successfully consume a particular message. If multiple receivers try to
consume a single message, the channel ensures that only one of them succeeds, so the receivers do not have to
coordinate with each other. The channel can still have multiple receivers to consume multiple messages
concurrently, but only a single receiver consumes any one message.

Broadcast Channels
Most LANs and a small number of WANs are of this type. In a LAN, the IMP is reduced to a single chip
embedded inside the host, so that there is always one host per IMP, whereas in a WAN there may be many hosts
per IMP.

Broadcasting is transmitting information at one end and receiving it at the rest of the systems in the
network. The information here travels as data packets. Each data packet contains an address field identifying
the intended recipient, which is checked after the packet is received. If the address does not match its own,
a machine ignores the data packet.

Broadcast systems also support transmission to a subset of machines, something known as multicasting. A
common scheme is to reserve all addresses with the high-order bit set to 1. The remaining n-1 address bits form
a bit map corresponding to n-1 groups, and each machine can subscribe to any or all of the n-1 groups.
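For an 8-bit address this scheme looks as follows; the constants and helper names are illustrative only:

```python
N = 8                                  # address width in bits, for illustration
MULTICAST_FLAG = 1 << (N - 1)          # high-order bit set to 1 marks multicast

def is_multicast(addr):
    return bool(addr & MULTICAST_FLAG)

def subscribed_groups(addr):
    """Which of the n-1 groups does this multicast address select?"""
    return [g for g in range(N - 1) if addr & (1 << g)]

addr = MULTICAST_FLAG | 0b0000101      # multicast to groups 0 and 2
print(is_multicast(addr), subscribed_groups(addr))   # True [0, 2]
```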

A Broadcast Channel (BCH) is a downlink channel in a GSM system that is used by the base stations to provide
signaling information to the mobile stations. The mobile station needs this information to find a network,
synchronize with it and to connect to it.

Q23. What is framing? Briefly explain Fixed-Size Framing and Variable Size Framing.

Ans: Data transmission in the physical layer means moving bits in the form of a signal from the source to
destination. The physical layer provides bit synchronization to ensure that the sender and receiver use the same
bit durations and timing. The data link layer on the other hand needs to pack bits into frames, so that each frame
is distinguishable from another.

Framing in the data link layer separates a message from one source to a destination, or from other messages to
other destinations, by adding a sender address and a destination address. The destination address defines where
the packet is to go; the sender address helps the recipient acknowledge the receipt.

Although the whole message could be packed into one frame, it is not normally done. When a message is
carried in one very large frame, even a single-bit error would require the retransmission of the whole message.
When a message is divided into smaller frames, a single-bit error affects only that small frame.

Fixed-Size Framing: In this type of framing, there is no need for defining the boundaries of the frames; the
size itself can be used as a delimiter. Example: the ATM WAN, which uses frames of fixed size called cells.

Variable-Size Framing: This type of framing is prevalent in LANs. In this, we need a way to define the end of
one frame and the beginning of the next. Two approaches are used for this purpose: a character-oriented
approach and a bit-oriented approach.

 Character-Oriented Protocols: In a character-oriented protocol, data to be carried are 8-bit characters
from a coding system such as ASCII. The header, which normally carries the source and destination
addresses and other control information, and the trailer, which carries error detection or error correction
redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is
added at the beginning and the end of a frame. The flag, composed of protocol-dependent special
characters, signals the start or end of a frame.

 Bit-Oriented Protocols: In a bit-oriented protocol, the data section of a frame is a sequence of bits to be
interpreted by the upper layer as text, graphic, audio, video, and so on. However, in addition to headers
(and possible trailers), we still need a delimiter to separate one frame from the other. Most protocols use
a special 8-bit pattern flag 01111110 as the delimiter to define the beginning and the end of the frame.
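Flag-based delimiting needs escaping whenever the flag value also appears inside the data (byte stuffing). A small sketch follows; the FLAG and ESC values are illustrative, not taken from any particular standard:

```python
FLAG = 0x7E   # frame delimiter (illustrative value)
ESC = 0x7D    # escape byte (illustrative value)

def stuff(payload: bytes) -> bytes:
    """Frame the payload: escape FLAG/ESC bytes, then add flag delimiters."""
    body = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            body += bytes([ESC, b ^ 0x20])   # escape marker, then transform
        else:
            body.append(b)
    body.append(FLAG)
    return bytes(body)

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and undo the escaping."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)   # restore the escaped byte
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

After stuffing, the flag value never appears inside the frame body, so the receiver can rely on it to mark frame boundaries.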
