
Q1:-

BASIS FOR COMPARISON | SYNCHRONOUS TRANSMISSION | ASYNCHRONOUS TRANSMISSION
Meaning | Transmission starts with a block header that holds a sequence of bits. | Uses a start bit and a stop bit preceding and following each character.
Transmission manner | Sends data in the form of blocks or frames. | Sends one byte or character at a time.
Synchronization | Present, with a shared clock pulse. | Absent.
Transmission speed | Fast | Slow
Gap between the data | Does not exist | Exists
Cost | Expensive | Economical
Time interval | Constant | Random
Implemented by | Hardware and software | Hardware only
Examples | Chat rooms, video conferencing, telephone conversations, etc. | Letters, emails, forums, etc.
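The framing difference between the two modes can be illustrated with a short Python sketch (the one-length-byte "header" in frame_sync is purely illustrative, not a real protocol):

```python
def frame_async(byte):
    """Frame one byte for asynchronous transmission:
    a 0 start bit, 8 data bits (LSB first), and a 1 stop bit."""
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit
    return bits

def frame_sync(data):
    """Synchronous transmission sends a whole block at once,
    preceded by a header (here just a length byte, for illustration)."""
    return [len(data)] + list(data)

# Asynchronous: 10 bits on the wire per 8-bit character (25% overhead),
# which is one reason character-at-a-time transmission is slower.
print(frame_async(ord("A")))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(frame_sync(b"hi"))      # [2, 104, 105]
```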

Q2:- Functions of the presentation and application layers

Ans:-

What is presentation layer?


The presentation layer is the sixth layer of the OSI model. It is responsible for the delivery
and formatting of information to the application layer for further processing or display. This service
is needed because different computer architectures use different data representations. In
contrast to the session layer at the fifth level, which provides transparent data transport, the presentation
layer handles all issues related to data representation and transport, including translation, encryption, and compression.

Presentation layer functions


The actual functions of the presentation layer include the following aspects:

1. Network security and confidentiality management, text compression and packing, and the virtual
terminal protocol (VTP).
2. Syntax conversion - the abstract syntax is converted to the transfer syntax, and the other side
performs the opposite conversion (transfer syntax back to abstract syntax). This covers
code conversion, character conversion, modification of data formats, adaptation of data
structure operations, data compression, encryption, and so on.
3. Syntax negotiation - negotiating an appropriate context according to the requirements of the
application layer, that is, determining the transfer syntax to be used.
4. Connection management - including using session layer services to establish a
connection, managing data transport and synchronisation control over this connection (using the
corresponding session-level services), and terminating the connection either normally or
abnormally.
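The translation, compression, and encryption functions listed above can be sketched in Python; the ROT-13 step here merely stands in for real encryption (which in practice would be TLS or similar), and the sample text is invented:

```python
import codecs
import zlib

text = "presentation layer demo"

# Translation: convert between character encodings (e.g. UTF-8 <-> UTF-16),
# so machines with different data representations can interoperate.
utf16 = text.encode("utf-16")
assert utf16.decode("utf-16") == text

# Compression: reduce the number of bits that must be transmitted.
payload = text.encode("utf-8") * 10
compressed = zlib.compress(payload)

# "Encryption" placeholder: ROT-13 only illustrates a reversible transform.
scrambled = codecs.encode(text, "rot13")
print(codecs.decode(scrambled, "rot13") == text)  # True
```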

Application layer:-

The application layer is the highest abstraction layer of the TCP/IP model that provides
the interfaces and protocols needed by the users. It combines the functionalities of the
session layer, the presentation layer and the application layer of the OSI model.
The functions of the application layer are:

1. It enables users to access the services of the network.


2. It is used to develop network-based applications.
3. It provides user services like user login, naming network devices, formatting messages, and
e-mails, transfer of files etc.
4. It is also concerned with error handling and recovery of the message as a whole.

This layer uses a number of protocols, the main among which are as follows:

1. Hyper Text Transfer Protocol, HTTP: It is the underlying protocol of the World Wide Web. It
defines how hypermedia messages are formatted and transmitted.
2. File Transfer Protocol, FTP: It is a client-server based protocol for transfer of files between
client and server over the network.
3. Simple Mail Transfer Protocol, SMTP: It lays down the rules and semantics for sending and
receiving electronic mails (e-mails).
4. Domain Name System, DNS: It is a naming system for devices in networks. It provides
services for translating domain names to IP addresses.
5. TELNET: It provides bi-directional text-oriented services for remote login to the hosts over
the network.
6. Simple Network Management Protocol, SNMP: It is for managing, monitoring the network
and for organizing information about the networked devices.
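As a minimal illustration of the DNS service listed above, Python's standard socket module can resolve a name to an IP address. Resolving "localhost" works without contacting an external DNS server; resolving a public name would require network access:

```python
import socket

# Name resolution as used by application-layer protocols.
# gethostbyname returns the IPv4 address for a host name.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```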

Q3:- Network topologies


A network topology is the arrangement of the nodes and connecting links of a network
between sender and receiver. The main network topologies are:

a) Mesh Topology :

In mesh topology, every device is connected to every other device via a dedicated channel.

Figure 1 : Every device is connected with every other via dedicated channels. These
channels are known as links.
• If N devices are connected to each other in a mesh topology, the number of ports
required by each device is N-1. In Figure 1, there are 5 devices connected to each other,
so each device requires 4 ports.
• If N devices are connected to each other in a mesh topology, the total number of
dedicated links required to connect them is C(N,2), i.e. N(N-1)/2. In Figure 1, there are 5
devices connected to each other, so 5*4/2 = 10 links are required.
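The two formulas above can be checked with a short Python sketch:

```python
def mesh_links(n):
    """Dedicated links in a full mesh of n devices: n*(n-1)/2."""
    return n * (n - 1) // 2

def mesh_ports_per_device(n):
    """Each device needs one port for every other device."""
    return n - 1

# The 5-device example from Figure 1.
print(mesh_links(5), mesh_ports_per_device(5))  # 10 4
```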
Advantages of this topology :
• It is robust.
• Faults are diagnosed easily. Data is reliable because it is transferred between devices
through dedicated channels or links.
• Provides security and privacy.
Problems with this topology :
• Installation and configuration are difficult.
• Cabling cost is high because bulk wiring is required, so the topology suits only a small
number of devices.
• Cost of maintenance is high.

b) Star Topology :

In star topology, all the devices are connected to a single hub through cables. The hub
is the central node and all other nodes are connected to it. The hub can be passive,
i.e. a non-intelligent broadcasting device, or it can be intelligent, in which case it is
known as an active hub. Active hubs contain repeaters.
Figure 2 : A star topology having four systems connected to single point of connection
i.e. hub.

Advantages of this topology :


• If N devices are connected to each other in a star topology, the number of cables
required to connect them is N. So, it is easy to set up.
• Each device requires only one port, to connect to the hub.
Problems with this topology :
• If the concentrator (hub) on which the whole topology relies fails, the whole system
crashes.
• Cost of installation is high.
• Performance depends on the single concentrator, i.e. the hub.

c) Bus Topology :

Bus topology is a network type in which every computer and network device is
connected to a single cable. Data is transmitted from one end to the other in a single
direction; there is no bi-directional transmission in bus topology.

Figure 3 : A bus topology with shared backbone cable. The nodes are connected to the
channel via drop lines.

Advantages of this topology :


• If N devices are connected to each other in a bus topology, only one cable, known as
the backbone cable, is required, together with N drop lines.
• Cable cost is lower than in other topologies, but the topology suits only small
networks.
Problems with this topology :
• If the common cable fails, the whole system crashes.
• Heavy network traffic increases collisions. To avoid this, MAC-layer protocols such as
Pure ALOHA, Slotted ALOHA and CSMA/CD are used.

d) Ring Topology :

In this topology, the devices form a ring, with each device connected to exactly its two
neighbouring devices.

Figure 4 : A ring topology comprising 4 stations connected to each other, forming a ring.

The following operations take place in a ring topology:

1. One station, known as the monitor station, takes responsibility for performing the
operations.
2. To transmit data, a station must hold the token. After the transmission is done, the
token is released for other stations to use.
3. When no station is transmitting data, the token circulates in the ring.
4. There are two token release techniques: early token release releases the token
just after transmitting the data, while delayed token release releases the token only after an
acknowledgement is received from the receiver.
Advantages of this topology :
• The possibility of collision is minimum in this type of topology.
• Cheap to install and expand.
Problems with this topology :
• Troubleshooting is difficult in this topology.
• Addition of stations in between or removal of stations can disturb the whole topology.

e) Hybrid Topology :

This topology is a combination of two or more of the topologies described above. It is
scalable and can be expanded easily, and it is reliable, but at the same time it is a
costly topology.

Figure 5 : A hybrid topology which is a combination of ring and star topology.


Q4:-ISDN
Ans

Integrated Services Digital Network (ISDN)


ISDN is a set of communication standards for simultaneous digital transmission of
voice, video, data, and other network services over the traditional circuits of the public
switched telephone network. Before ISDN, the telephone system was seen as a way to
transmit voice, with some special services available for data. The main feature of ISDN
is that it can integrate speech and data on the same lines, which was not possible in
the classic telephone system.
ISDN is a circuit-switched telephone network system, but it also provides access to
packet-switched networks, allowing digital transmission of voice and data. This results
in potentially better voice or data quality than an analogue phone can provide. It provides
connections for data in increments of 64 kbit/s, with a maximum of 128 kbit/s of
bandwidth in both the upstream and downstream directions. Greater data rates are
achieved through channel bonding; generally the ISDN B-channels of three or four
BRIs (six to eight 64 kbit/s channels) are bonded.
In the OSI model, ISDN occupies the physical and data-link layers, but in common
usage "ISDN" often refers to Q.931 and related protocols. These protocols, introduced
in 1986, are a set of signalling protocols for establishing and breaking circuit-switched
connections and for providing advanced calling features to the user. ISDN provides
simultaneous voice, video, and text transmission between individual desktop
videoconferencing systems and group videoconferencing systems.

ISDN Interfaces:
The following are the interfaces of ISDN:
1. Basic Rate Interface (BRI) –
BRI has two data-bearing channels ('B' channels) and one signalling channel ('D'
channel) used to initiate connections. The B channels operate at a maximum of 64 Kbps
while the D channel operates at a maximum of 16 Kbps. The two B channels are
independent of each other; for example, one channel may carry a TCP/IP connection to a
location while the other sends a fax to a remote location. On the iSeries, ISDN
supports the Basic Rate Interface (BRI).
The Basic Rate Interface specifies a digital pipe consisting of two B channels of
64 Kbps each and one D channel of 16 Kbps, which equals 144 Kbps. In
addition, the BRI service itself requires an operating overhead of 48 Kbps.
Therefore a digital pipe of 192 Kbps is required.
2. Primary Rate Interface (PRI) –
The Primary Rate Interface service consists of one D channel and either 23 or 30 B channels,
depending on the country. PRI is not supported on the iSeries. The usual North American
PRI is a digital pipe with 23 B channels and one 64 Kbps D channel. Twenty-three B
channels of 64 Kbps each plus one D channel of 64 Kbps equals 1.536 Mbps. The PRI
service also uses 8 Kbps of overhead, so PRI requires a digital pipe of 1.544 Mbps.
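The BRI and PRI figures above follow from simple arithmetic, which a short Python sketch can verify:

```python
B, D16, D64 = 64, 16, 64  # channel rates in kbps

# BRI: 2 B-channels + one 16 kbps D-channel, plus 48 kbps operating overhead.
bri_payload = 2 * B + D16       # 144 kbps
bri_total = bri_payload + 48    # 192 kbps digital pipe

# PRI (North America): 23 B-channels + one 64 kbps D-channel + 8 kbps overhead.
pri_payload = 23 * B + D64      # 1536 kbps = 1.536 Mbps
pri_total = pri_payload + 8     # 1544 kbps = 1.544 Mbps (a T1 line)

print(bri_payload, bri_total, pri_payload, pri_total)  # 144 192 1536 1544
```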
3. Broadband ISDN (B-ISDN) –
Narrowband ISDN was designed to operate over the existing communications
infrastructure, which is heavily dependent on copper cable, whereas B-ISDN relies
mainly on the evolution of fibre optics. According to the CCITT, B-ISDN is best described as 'a
service requiring transmission channels capable of supporting rates greater than the
primary rate'.
ISDN Services:
ISDN provides a fully integrated digital service to users. These services fall into 3
categories- bearer services, teleservices and supplementary services.
1. Bearer Services –
Transfer of information (voice, data and video) between users without the network
manipulating the content of that information is provided by the bearer network. There is no
need for the network to process the information and therefore does not change the
content. Bearer services belong to the first three layers of the OSI model. They are well
defined in the ISDN standard. They can be provided using circuit-switched, packet-
switched, frame-switched, or cell-switched networks.
2. Teleservices –
Here the network may change or process the contents of the data. These services
correspond to layers 4-7 of the OSI model. Teleservices rely on the facilities of the
bearer services and are designed to accommodate complex user needs; the user need
not be aware of the details of the process. Teleservices include telephony, teletex,
telefax, videotex, telex and teleconferencing. Though ISDN defines these services by
name, they have not yet become standards.
3. Supplementary Services –
Supplementary services provide additional functionality on top of the bearer services
and teleservices. Reverse charging, call waiting, and message handling are
examples of supplementary services, all familiar from today's telephone company
services.
Principle of ISDN:
The ISDN works based on the standards defined by ITU-T (formerly CCITT). The
Telecommunication Standardization Sector (ITU-T) coordinates standards for
telecommunications on behalf of the International Telecommunication Union (ITU) and
is based in Geneva, Switzerland. The various principles of ISDN as per ITU-T
recommendation are:
• To support switched and non-switched applications
• To support voice and non-voice applications
• Reliance on 64-kbps connections
• Intelligence in the network
• Layered protocol architecture
• Variety of configurations

Q5:- Computer networking

Guide

Computer networking
Benefits of computer networks
Setting up a computer network is a fast and reliable way of sharing information and resources within a
business. It can help you make the most of your IT systems and equipment.
Advantages of computer networking
Main benefits of networks include:
• File sharing – you can easily share data between different users, or access it remotely if you keep it
on other connected devices.
• Resource sharing – using network-connected peripheral devices like printers, scanners and
copiers, or sharing software between multiple users, saves money.
• Sharing a single internet connection – it is cost-efficient and can help protect your systems if you
properly secure the network.
• Increasing storage capacity – you can access files and multimedia, such as images and music,
which you store remotely on other machines or network-attached storage devices.
Networking computers can also help you improve communication, so that:
• staff, suppliers and customers can share information and get in touch more easily
• your business can become more efficient - e.g. networked access to a common database can avoid
the same data being keyed multiple times, saving time and preventing errors
• staff can deal with queries and deliver a better standard of service as a result of sharing customer
data
Cost benefits of computer networking
Storing information in one centralised database can also help you reduce costs and drive efficiency.
For example:
• staff can deal with more customers in less time since they have shared access to customer and
product databases
• you can centralise network administration, meaning less IT support is required
• you can cut costs through sharing of peripherals and internet access
You can reduce errors and improve consistency by having all staff work from a single source of
information. This way, you can make standard versions of manuals and directories available to them,
and back up data from a single point on a scheduled basis, ensuring consistency.
A computer network consists of two or more computers that are linked in order to share
resources such as printers and CD-ROMs, exchange files, or allow electronic
communications. The computers on a computer network may be linked through cables,
telephone lines, radio waves, satellites, or infrared light beams.

Explanation of networking principles


Networks are systems formed by links. Websites that allow people to create links
between their pages are called social networking sites. A set of related ideas
can be called a conceptual network, and the connections you have with all your friends
can be called your personal network.
The following networks are used every day:
• Mail delivery system
• Telephone system
• Internet
• Public transportation system
• Corporate computer network
Computers can be connected by networks to share data and resources. A network can
be as simple as two computers connected by a single cable or as complex as hundreds
of computers connected to devices that control the flow of information.
Converged data networks can include general-purpose computers, such as personal
computers and servers, as well as devices with more specific functions, such as printers,
phones, televisions, and game consoles.
All converged networks of data, voice, and video share information and use various
methods to direct the flow of information. The information on the network moves from
one place to another, sometimes by different routes, to reach the correct destination.
Stop & Wait Protocol

• In this method of flow control, the sender sends a single frame to receiver & waits for an
acknowledgment.
• The next frame is sent by sender only when acknowledgment of previous frame is
received.
• This process of sending a frame & waiting for an acknowledgment continues as long as
the sender has data to send.
• To end the transmission, the sender transmits an end of transmission (EOT) frame.
• The main advantage of stop & wait is its accuracy: the next frame is transmitted
only when the previous frame is acknowledged, so there is no chance of a frame being lost.
• The main disadvantage of this method is that it is inefficient and makes the transmission
process slow. A single frame travels from source to destination and a single
acknowledgement travels from destination to source, so each frame sent and
received uses the entire time needed to traverse the link. Moreover, if the two devices are
a long distance apart, a lot of time is wasted waiting for ACKs, which increases the total
transmission time.
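The send-then-wait loop described above can be sketched as a minimal Python simulation; the loss rate, seed, and frame names are invented for illustration:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Simulate stop-and-wait: resend each frame until its ACK arrives."""
    rng = random.Random(seed)
    transmissions = 0
    delivered = []
    for frame in frames:
        while True:
            transmissions += 1
            # Assume the frame or its ACK is lost with probability loss_rate.
            if rng.random() >= loss_rate:
                delivered.append(frame)
                break  # ACK received; move on to the next frame
    return delivered, transmissions

data, sends = stop_and_wait(["f1", "f2", "f3"])
print(data, sends)  # all frames delivered, possibly after retransmissions
```

Because only one frame is ever in flight, total transmissions can exceed the frame count whenever an ACK is lost, which illustrates the protocol's inefficiency.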
Q6:- Quality of service (QoS)

Ans

Quality of service (QoS) refers to any technology that manages data traffic to
reduce packet loss, latency and jitter on the network. QoS controls and
manages network resources by setting priorities for specific types of data on
the network.

Enterprise networks need to provide predictable and measurable services as


applications -- such as voice, video and delay-sensitive data -- traverse the
network. Organizations use QoS to meet the traffic requirements of sensitive
applications, such as real-time voice and video, and to prevent the
degradation of quality caused by packet loss, delay and jitter.

Organizations can achieve QoS by using certain tools and techniques, such
as jitter buffer and traffic shaping. For many organizations, QoS is included in
the service-level agreement (SLA) with their network service provider to
guarantee a certain level of performance.

QoS parameters

Organizations can measure QoS quantitatively by using several parameters, including


the following:

• Packet loss happens when network links become congested and routers and
switches start dropping packets. When packets are dropped during real-time
communication, such as voice or video calls, these sessions can experience jitter
and gaps in speech.

• Jitter is the result of network congestion, timing drift and route changes. Too
much jitter can degrade the quality of voice and video communication.
• Latency is the time it takes a packet to travel from its source to its destination.
Latency should be as close to zero as possible. If a voice over IP call has a high
amount of latency, it can experience echo and overlapping audio.

• Bandwidth is the capacity of a network communications link to transmit the


maximum amount of data from one point to another in a given amount of time.
QoS optimizes the network by managing bandwidth and setting priorities for
applications that require more resources than others.

• Mean opinion score (MOS) is a metric to rate voice quality that uses a five-point
scale, with a five indicating the highest quality.
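Latency and jitter can be computed directly from packet delay measurements; a Python sketch with invented delay values:

```python
# Hypothetical one-way packet delays in milliseconds.
delays = [20.0, 22.5, 19.8, 30.1, 21.0]

# Latency: mean delay from source to destination.
latency = sum(delays) / len(delays)

# A simple jitter measure: mean absolute difference between
# consecutive packet delays (variation in arrival timing).
diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
jitter = sum(diffs) / len(diffs)

print(round(latency, 2), round(jitter, 2))  # 22.68 6.15
```

Real tools such as RTP receivers use a smoothed running estimate of jitter, but the idea is the same: measure how much consecutive delays vary.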
Implementing QoS

Three models exist to implement QoS: Best Effort, Integrated Services and
Differentiated Services.

Best Effort is a QoS model where all the packets receive the same priority and there
is no guaranteed delivery of packets. Best Effort is applied when networks have not
configured QoS policies or when the infrastructure does not support QoS.

Integrated Services (IntServ) is a QoS model that reserves bandwidth along a


specific path on the network. Applications ask the network for resource reservation,
and network devices monitor the flow of packets to make sure network resources can
accept the packets.

Implementing IntServ requires IntServ-capable routers and uses the Resource


Reservation Protocol (RSVP) for network resource reservation. IntServ has limited
scalability and high consumption of network resources.

Q7:-

Connection management of transmission control protocol

Ans:-
The Transmission Control Protocol (TCP) is one of the most important protocols of the
Internet protocol suite. It is the most widely used protocol for data transmission in
communication networks such as the internet.

Features
• TCP is a reliable protocol: the receiver always sends a positive or negative
acknowledgement for each data packet, so the sender always knows whether the
packet reached its destination or needs to be resent.
• TCP ensures that data reaches the intended destination in the same order it was sent.
• TCP is connection oriented: a connection between the two endpoints must be
established before actual data is sent.
• TCP provides error-checking and recovery mechanisms.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in client/server point-to-point mode.
• TCP provides full-duplex service, i.e. each endpoint can act as both sender and receiver.

Header
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.

• Source Port (16-bits) - It identifies source port of the application process on the sending
device.
• Destination Port (16-bits) - It identifies destination port of the application process on the
receiving device.
• Sequence Number (32-bits) - Sequence number of data bytes of a segment in a session.
• Acknowledgement Number (32-bits) - When ACK flag is set, this number contains the next
sequence number of the data byte expected and works as acknowledgement of the previous
data received.
• Data Offset (4-bits) - This field implies both, the size of TCP header (32-bit words) and the
offset of data in current packet in the whole TCP segment.
• Reserved (3-bits) - Reserved for future use; set to zero by default.
• Flags (1-bit each)
o NS - Nonce Sum bit is used by Explicit Congestion Notification signaling process.
o CWR - When a host receives packet with ECE bit set, it sets Congestion Windows
Reduced to acknowledge that ECE received.
o ECE -It has two meanings:
▪ If SYN bit is clear to 0, then ECE means that the IP packet has its CE
(congestion experience) bit set.
▪ If SYN bit is set to 1, ECE means that the device is ECT capable.
o URG - It indicates that Urgent Pointer field has significant data and should be
processed.
o ACK - It indicates that Acknowledgement field has significance. If ACK is cleared to
0, it indicates that packet does not contain any acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH data (as soon as it
comes) to the receiving application without buffering it.
o RST - Reset flag has the following features:
▪ It is used to refuse an incoming connection.
▪ It is used to reject a segment.
▪ It is used to restart a connection.
o SYN - This flag is used to set up a connection between hosts.
o FIN - This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers, they
are processed in correct order.
• Window Size - This field is used for flow control between the two stations and indicates the
amount of buffer space (in bytes) the receiver has allocated for a segment, i.e. how much data
the receiver is expecting.
• Checksum - This field contains the checksum of Header, Data and Pseudo Headers.
• Urgent Pointer - It points to the urgent data byte if URG flag is set to 1.
• Options - It facilitates additional options which are not covered by the regular header. Option
field is always described in 32-bit words. If this field contains data less than 32-bit, padding is
used to cover the remaining bits to reach 32-bit boundary.
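The fixed 20-byte portion of the header described above can be decoded with Python's struct module; the sample header below is hand-built for illustration (the port numbers and sequence number are invented):

```python
import struct

def parse_tcp_header(segment):
    """Decode the fixed 20-byte part of a TCP header."""
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": (off_flags >> 12) & 0xF,  # header size in 32-bit words
        "flags": off_flags & 0x1FF,              # NS..FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# Sample: port 80 -> 12345, data offset 5 (20 bytes), SYN flag (0x002) set.
sample = struct.pack("!HHIIHHHH", 80, 12345, 1000, 0,
                     (5 << 12) | 0x002, 65535, 0, 0)
hdr = parse_tcp_header(sample)
print(hdr["src_port"], hdr["data_offset"], hdr["flags"])  # 80 5 2
```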

Addressing
TCP communication between two remote hosts is carried out by means of port numbers
(TSAPs). Port numbers range from 0 to 65535 and are divided as follows:

• System Ports (0 – 1023)


• User Ports ( 1024 – 49151)
• Private/Dynamic Ports (49152 – 65535)
Connection Management
TCP communication works in Server/Client model. The client initiates the connection
and the server either accepts or rejects it. Three-way handshaking is used for connection
management.

Establishment
The client initiates the connection and sends a segment with a sequence number. The server
acknowledges it with its own sequence number and an ACK of the client's segment, which
is one more than the client's sequence number. The client, after receiving the ACK of its
segment, sends an acknowledgement of the server's response.
Release
Either the server or the client can send a TCP segment with the FIN flag set to 1. When the
receiving end acknowledges the FIN, that direction of the TCP communication is closed
and the connection is released.
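The three-way handshake is performed by the operating system inside the socket API; a minimal Python sketch on the loopback interface (the port is chosen by the OS, and the payload is illustrative):

```python
import socket
import threading

# The OS performs the SYN / SYN-ACK / ACK exchange inside connect()/accept().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()          # three-way handshake completes here
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # sends SYN; returns after handshake
data = client.recv(5)                  # b"hello"
client.close()
t.join()
server.close()
print(data)
```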

Bandwidth Management
TCP uses the concept of window size to accommodate the need for bandwidth
management. The window size tells the sender at the remote end how many data byte
segments the receiver at this end can accept. TCP uses a slow start phase, beginning
with a window size of 1 and increasing the window size exponentially after each
successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and the
next segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte
segment is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in transit or a NACK is received,
the window size is reduced to half and the slow start phase starts again.
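The exponential growth described above can be simulated in Python; the segment count and threshold are illustrative, and real TCP also halves the window on loss as the text notes:

```python
def slow_start(segments_to_send, initial=1, threshold=16):
    """Exponential window growth: double cwnd after each successful round,
    capped at the slow-start threshold."""
    cwnd, sent, rounds = initial, 0, []
    while sent < segments_to_send:
        rounds.append(cwnd)
        sent += cwnd              # one round: cwnd segments acknowledged
        cwnd = min(cwnd * 2, threshold)
    return rounds

print(slow_start(40))  # [1, 2, 4, 8, 16, 16]
```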

Error Control and Flow Control


TCP uses port numbers to know which application process it needs to hand the data
segment to. Along with that, it uses sequence numbers to synchronise itself with the remote
host. All data segments are sent and received with sequence numbers. The sender
learns which data segment was last received by the receiver from the ACK it gets; the
receiver learns which segment was last sent by the sender from the sequence
number of the most recently received packet.
If the sequence number of a recently received segment does not match the
sequence number the receiver was expecting, the segment is discarded and a NACK is sent
back. If two segments arrive with the same sequence number, the TCP timestamp value
is compared to make a decision.

Multiplexing
The technique of combining two or more data streams in one session is called multiplexing.
When a TCP client initiates a connection with a server, it always refers to a well-defined
port number that indicates the application process; the client itself uses a randomly
generated port number from the private port number pool.
Using TCP multiplexing, a client can communicate with a number of different application
processes in a single session. For example, when a client requests a web page that in turn
contains different types of data (HTTP, SMTP, FTP, etc.), the TCP session timeout is
increased and the session is kept open longer, so that the three-way handshake
overhead can be avoided.
This enables the client system to receive multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is too long.

Congestion Control
Congestion occurs when more data is fed into a system than it is capable of handling.
TCP controls congestion by means of the window mechanism: it sets a window size
telling the other end how many data segments to send. TCP may use three
algorithms for congestion control:
• Additive increase, multiplicative decrease
• Slow start
• Timeout reaction
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
• This timer is used to check the integrity and validity of a connection.
• When keep-alive time expires, the host sends a probe to check if the connection still exists.
Retransmission timer:
• This timer maintains stateful session of data sent.
• If the acknowledgement of sent data is not received within the retransmission time, the data
segment is sent again.
Persist timer:
• TCP session can be paused by either host by sending Window Size 0.
• To resume the session a host needs to send Window Size with some larger value.
• If this segment never reaches the other end, both ends may wait for each other for infinite
time.
• When the Persist timer expires, the host re-sends its window size to let the other end know.
• Persist Timer helps avoid deadlocks in communication.
Timed-Wait:
• After releasing a connection, either of the hosts waits for a Timed-Wait time to terminate the
connection completely.
• This is in order to make sure that the other end has received the acknowledgement of its
connection termination request.
• Timed-out can be a maximum of 240 seconds (4 minutes).

Crash Recovery
TCP is a very reliable protocol. It assigns a sequence number to each byte sent in a
segment, and it provides a feedback mechanism: when a host receives a packet, it is
bound to ACK that packet with the next expected sequence number (if it is not the last
segment).
When a TCP server crashes mid-communication and restarts its process, it sends a
TPDU broadcast to all its hosts. The hosts can then resend the last data segment that
was never acknowledged and carry on.
Q8:- Twisted-pair cable

Twisted-Pair Cable
Twisted-pair cable is a type of cabling that is used for telephone communications and most modern Ethernet
networks. A pair of wires forms a circuit that can transmit data. The pairs are twisted to provide protection
against crosstalk, the noise generated by adjacent pairs. When electrical current flows through a wire, it creates a
small, circular magnetic field around the wire. When two wires in an electrical circuit are placed close together, their
magnetic fields are the exact opposite of each other. Thus, the two magnetic fields cancel each other out. They also
cancel out any outside magnetic fields. Twisting the wires can enhance this cancellation effect. Using cancellation
together with twisting the wires, cable designers can effectively provide self-shielding for wire pairs within the network
media.

Two basic types of twisted-pair cable exist: unshielded twisted pair (UTP) and shielded twisted pair (STP). The
following sections discuss UTP and STP cable in more detail.

UTP Cable
UTP cable is a medium that is composed of pairs of wires (see Figure 8-1). UTP cable is used in a variety of
networks. Each of the eight individual copper wires in UTP cable is covered by an insulating material. In addition, the
wires in each pair are twisted around each other.

Figure 8-1 Unshielded Twisted-Pair Cable

UTP cable relies solely on the cancellation effect produced by the twisted wire pairs to limit signal degradation
caused by electromagnetic interference (EMI) and radio frequency interference (RFI). To further reduce crosstalk
between the pairs in UTP cable, the number of twists in the wire pairs varies. UTP cable must follow precise
specifications governing how many twists or braids are permitted per meter (3.28 feet) of cable.

UTP cable often is installed using a Registered Jack 45 (RJ-45) connector (see Figure 8-2). The RJ-45 is an eight-
wire connector used commonly to connect computers onto a local-area network (LAN), especially Ethernets.

Figure 8-2 RJ-45 Connectors

When used as a networking medium, UTP cable has four pairs of either 22- or 24-gauge copper wire. UTP used as a
networking medium has an impedance of 100 ohms; this differentiates it from other types of twisted-pair wiring such
as that used for telephone wiring, which has an impedance of 600 ohms.

UTP cable offers many advantages. Because UTP has an external diameter of approximately 0.43 cm (0.17 inches),
its small size can be advantageous during installation. Because it has such a small external diameter, UTP does not
fill up wiring ducts as rapidly as other types of cable. This can be an extremely important factor to consider,
particularly when installing a network in an older building. UTP cable is easy to install and is less expensive than
other types of networking media. In fact, UTP costs less per meter than any other type of LAN cabling. And because
UTP can be used with most of the major networking architectures, it continues to grow in popularity.

Disadvantages also are involved in using twisted-pair cabling, however. UTP cable is more prone to electrical noise
and interference than other types of networking media, and the distance between signal boosts is shorter for UTP
than it is for coaxial and fiber-optic cables.

Although UTP was once considered to be slower at transmitting data than other types of cable, this is no longer true.
In fact, UTP is considered the fastest copper-based medium today. The following summarizes the features of UTP
cable:

• Speed and throughput—10 to 1000 Mbps


• Average cost per node—Least expensive
• Media and connector size—Small
• Maximum cable length—100 m (short)

Commonly used types of UTP cabling are as follows:

• Category 1—Used for telephone communications. Not suitable for transmitting data.
• Category 2—Capable of transmitting data at speeds up to 4 megabits per second (Mbps).
• Category 3—Used in 10BASE-T networks. Can transmit data at speeds up to 10 Mbps.
• Category 4—Used in Token Ring networks. Can transmit data at speeds up to 16 Mbps.
• Category 5—Can transmit data at speeds up to 100 Mbps.
• Category 5e —Used in networks running at speeds up to 1000 Mbps (1 gigabit per second [Gbps]).
• Category 6—Typically, Category 6 cable consists of four pairs of 24 American Wire Gauge (AWG) copper
wires. Category 6 cable is currently the fastest standard for UTP.

Shielded Twisted-Pair Cable


Shielded twisted-pair (STP) cable combines the techniques of shielding, cancellation, and wire twisting. Each pair of
wires is wrapped in a metallic foil (see Figure 8-3). The four pairs of wires then are wrapped in an overall metallic
braid or foil, usually 150-ohm cable. As specified for use in Ethernet network installations, STP reduces electrical
noise both within the cable (pair-to-pair coupling, or crosstalk) and from outside the cable (EMI and RFI). STP usually
is installed with STP data connector, which is created especially for the STP cable. However, STP cabling also can
use the same RJ connectors that UTP uses.

Figure 8-3 Shielded Twisted-Pair Cable

Although STP prevents interference better than UTP, it is more expensive and difficult to install. In addition, the
metallic shielding must be grounded at both ends. If it is improperly grounded, the shield acts like an antenna and
picks up unwanted signals. Because of its cost and difficulty with termination, STP is rarely used in Ethernet
networks. STP is primarily used in Europe.

The following summarizes the features of STP cable:

• Speed and throughput—10 to 100 Mbps


• Average cost per node—Moderately expensive
• Media and connector size—Medium to large
• Maximum cable length—100 m (short)

When comparing UTP and STP, keep the following points in mind:

• The speed of both types of cable is usually satisfactory for local-area distances.
• These are the least-expensive media for data communication. UTP is less expensive than STP.
• Because most buildings are already wired with UTP, many transmission standards are adapted to use it, to
avoid costly rewiring with an alternative cable type.

DATA LINK LAYER:-


Data Link Layer is second layer of OSI Layered Model. This layer is one of the most
complicated layers and has complex functionalities and liabilities. Data link layer hides
the details of underlying hardware and represents itself to upper layer as the medium to
communicate.
Data link layer works between two hosts which are directly connected in some sense.
This direct connection could be point to point or broadcast. Systems on broadcast
network are said to be on same link. The work of data link layer tends to get more
complex when it is dealing with multiple hosts on single collision domain.
Data link layer is responsible for converting data stream to signals bit by bit and to send
that over the underlying hardware. At the receiving end, Data link layer picks up data
from hardware which are in the form of electrical signals, assembles them in a
recognizable frame format, and hands over to upper layer.
Data link layer has two sub-layers:
• Logical Link Control: It deals with protocols, flow-control, and error control
• Media Access Control: It deals with actual control of media

Functionality of Data-link Layer


Data link layer does many tasks on behalf of upper layer. These are:
• Framing
Data-link layer takes packets from the Network Layer and encapsulates them into frames.
Then, it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link
layer picks up signals from the hardware and assembles them into frames.
• Addressing
Data-link layer provides layer-2 hardware addressing mechanism. Hardware address is
assumed to be unique on the link. It is encoded into hardware at the time of manufacturing.
• Synchronization
When data frames are sent on the link, both machines must be synchronized in order for
the transfer to take place.
• Error Control
Sometimes signals may encounter problems in transit and bits get flipped. These errors
are detected, and attempts are made to recover the actual data bits. The layer also
provides an error reporting mechanism to the sender.
• Flow Control
Stations on the same link may have different speeds or capacities. The data-link layer
provides flow control, which enables both machines to exchange data at the same speed.
• Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of collision.
The data-link layer provides mechanisms such as CSMA/CD to give multiple systems the
capability of accessing a shared medium.
Q9 Routing Algo:-
Classification of Routing Algorithms
Prerequisite – Fixed and Flooding Routing algorithms
Routing is the process of establishing the routes that data packets must follow to reach
the destination. In this process, a routing table is created which contains information
regarding the routes that data packets follow. Various routing algorithms are used to
decide which route an incoming data packet needs to be transmitted on to reach the
destination efficiently.
Classification of Routing Algorithms: The routing algorithms can be classified as
follows:
1. Adaptive Algorithms –
These are the algorithms that change their routing decisions whenever the network
topology or traffic load changes. Also known as dynamic routing, they make use of
dynamic information such as current topology, load, and delay to select routes.
Optimization parameters are distance, number of hops, and estimated transit time.
Further these are classified as follows:
• (a) Isolated – In this method, each node makes its routing decisions using the
information it has, without seeking information from other nodes. The sending
node does not have information about the status of a particular link. The disadvantage
is that a packet may be sent through a congested network, which may result in delay.
Examples: hot potato routing, backward learning.
• (b) Centralized – In this method, a centralized node has entire information about
the network and makes all the routing decisions. The advantage is that only one
node is required to keep the information of the entire network; the disadvantage is
that if the central node goes down, the entire network goes down.
• (c) Distributed – In this method, the node receives information from its neighbors
and then makes the decision about routing the packets. The disadvantage is that the
packet may be delayed if there is a change between the interval in which it receives
the information and the moment it sends the packet.
2. Non-Adaptive Algorithms –
These are the algorithms that do not change their routing decisions once they have
been selected. This is also known as static routing, as the route to be taken is computed
in advance and downloaded to the routers when a router is booted.
Further these are classified as follows:
• (a) Flooding – This adopts the technique in which every incoming packet is sent
out on every outgoing line except the one it arrived on. One problem with this is that
packets may go into a loop, and as a result a node may receive duplicate
packets. These problems can be overcome with the help of sequence numbers,
hop counts, and spanning trees.
• (b) Random walk – In this method, packets are sent host by host or node by node
to one of their neighbors randomly. This is a highly robust method, which is usually
implemented by sending packets onto the link that is least queued.
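The flooding technique with a hop-count limit can be sketched as a small simulation. This is an illustrative model (the graph, names, and hop-limit mechanics are our own choices, not a specific protocol):

```python
def flood(graph, start, max_hops):
    """Return the set of nodes a flooded packet reaches within max_hops,
    with duplicate copies dropped so the packet does not loop forever."""
    reached = {start}
    frontier = [start]
    for _ in range(max_hops):
        next_frontier = []
        for node in frontier:
            for neighbour in graph[node]:
                if neighbour not in reached:   # duplicate suppression
                    reached.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return reached

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(sorted(flood(net, "A", 1)))  # ['A', 'B', 'C']
print(sorted(flood(net, "A", 2)))  # ['A', 'B', 'C', 'D']
```

The hop limit and duplicate suppression correspond to the hop-count and sequence-number remedies mentioned above.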
Q10 error detection and error correction

Ans

There are many causes, such as noise and crosstalk, that may corrupt data during
transmission. The upper layers work on a generalized view of the network architecture
and are not aware of actual hardware data processing. Hence, the upper layers expect
error-free transmission between the systems. Most applications would not function as
expected if they received erroneous data. Applications such as voice and video may not
be as badly affected and may still function well with some errors.
The data-link layer uses error control mechanisms to ensure that frames (data bit
streams) are transmitted with a certain level of accuracy. But to understand how errors
are controlled, it is essential to know what types of errors may occur.

Types of Errors
There may be three types of errors:
• Single bit error

Only one bit in the frame, anywhere in it, is corrupted.


• Multiple bits error

The frame is received with more than one bit in a corrupted state.


• Burst error

The frame contains more than one consecutive corrupted bit.


Error control mechanism may involve two possible ways:
• Error detection
• Error correction
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic
Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual
data to confirm that the bits received at the other end are the same as those that were
sent. If the counter-check at the receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in case
of even parity, or odd in case of odd parity.
The sender, while creating a frame, counts the number of 1s in it. For example, if even
parity is used and the number of 1s is even, then one bit with value 0 is added. This way
the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to
make it even.

The receiver simply counts the number of 1s in a frame. If the count of 1s is even and
even parity is used, the frame is considered not corrupted and is accepted. Likewise, if
the count of 1s is odd and odd parity is used, the frame is considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect
the error.
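The even-parity scheme above can be sketched directly, including the case it cannot catch (an even number of flipped bits). The function names are our own:

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s in the frame is even."""
    return bits + [bits.count(1) % 2]      # 0 if already even, else 1

def check_even_parity(frame):
    """Receiver side: an even count of 1s means 'assumed not corrupted'."""
    return frame.count(1) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 1])   # four 1s -> parity bit 0
print(check_even_parity(frame))               # True

frame[2] ^= 1                                 # a single-bit error is detected
print(check_even_parity(frame))               # False

frame[3] ^= 1                                 # a second flip cancels out: undetected
print(check_even_parity(frame))               # True
```

The last case illustrates why simple parity fails for multiple-bit errors.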
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This
technique involves binary division of the data bits being sent. The divisor is generated
using polynomials. The sender performs a division operation on the bits being sent and
calculates the remainder. Before sending the actual bits, the sender adds the remainder
at the end of the actual bits. Actual data bits plus the remainder is called a codeword.
The sender transmits data bits as codewords.
At the other end, the receiver performs the division operation on the codeword using the
same CRC divisor. If the remainder contains all zeros, the data bits are accepted;
otherwise, it is assumed that some data corruption occurred in transit.

Error Correction
In the digital world, error correction can be done in two ways:
• Backward Error Correction When the receiver detects an error in the data received, it
requests back the sender to retransmit the data unit.
• Forward Error Correction When the receiver detects some error in the data received, it
executes error-correcting code, which helps it to auto-recover and to correct some kinds of
errors.
The first one, Backward Error Correction, is simple and can be used efficiently only
where retransmission is not expensive, for example over fiber optics. But in the case of
wireless transmission, retransmission may cost too much; in that case, Forward Error
Correction is used.
To correct an error in the data frame, the receiver must know exactly which bit in the
frame is corrupted. To locate the bit in error, redundant bits are used as parity bits for
error detection. For example, if we take ASCII words (7 data bits), then there are 8 kinds
of information we need: seven states to tell us which data bit is in error, and one more
state to tell us that there is no error.
For m data bits, r redundant bits are used. r bits can provide 2^r combinations of
information. In an (m+r)-bit codeword, there is the possibility that the r bits themselves
may get corrupted. So the number of redundant bits r must be able to indicate all m+r bit
locations plus the no-error state, i.e. 2^r ≥ m + r + 1.
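The condition 2^r ≥ m + r + 1 can be checked directly to find the smallest number of redundant bits for a given number of data bits (the function name is ours):

```python
def redundant_bits(m):
    """Smallest r such that 2**r >= m + r + 1, i.e. r parity bits can name
    every one of the m + r bit positions plus the 'no error' case."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 64):
    print(m, redundant_bits(m))   # 4 -> 3, 7 -> 4, 64 -> 7
```

So a 7-bit ASCII character needs 4 parity bits, giving an 11-bit codeword.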

Q11 Network congestion


Ans

Network Congestion Definition


Network congestion is the result of an Internet route becoming too full. When
there are too many requests over a specific network route, there is a back-up
of data packets. When too many data packets try to move through a specific
network route, the result is network congestion.

What is Network Congestion?


In simple terms, think about network congestion like highway traffic. If you
drive along a highway that is merging from two lanes into one, a traffic jam will
occur. The cause is trying to fit more cars into a lane than it can handle.

The same situation occurs on the Internet. Too many requests for data over the
same Internet route cause congestion.

Network congestion is also tied to the inherent structure of the Internet, in
particular its use of the Border Gateway Protocol (BGP) as the routing system.

Congestion Control techniques in Computer Networks


Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it
happens. The congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy :
This is the policy under which retransmission of packets is handled. If the sender feels
that a sent packet is lost or corrupted, the packet needs to be retransmitted. This
retransmission may increase congestion in the network.
To prevent congestion, retransmission timers must be designed both to prevent
congestion and to optimize efficiency.
2. Window Policy :
The type of window at the sender side may also affect congestion. In a Go-Back-N
window, several packets are resent even though some of them may have been received
successfully at the receiver side. This duplication may increase congestion in the
network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific
packets that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by routers is one in which the routers prevent
congestion by partially discarding corrupted or less sensitive packets while still
maintaining the quality of the message.
In the case of audio file transmission, for example, routers can discard less sensitive
packets to prevent congestion while maintaining the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgments are also part of the load on a network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be
used to prevent congestion related to acknowledgments.
The receiver should send an acknowledgment for N packets rather than an
acknowledgment for every single packet. The receiver should send an acknowledgment
only if it also has a packet to send or a timer expires.
5. Admission Policy :
In an admission policy, a mechanism should be used to prevent congestion. Switches in
a flow should first check the resource requirement of a network flow before transmitting it
further. If there is a chance of congestion, or congestion already exists in the network,
the router should refuse to establish a virtual circuit connection to prevent further
congestion.
All the above policies are adopted to prevent congestion before it happens in the
network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after
it happens. Several techniques are used by different protocols; some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its
upstream node. This may cause the upstream node or nodes to become congested and
in turn reject data from the nodes above them. Backpressure is a node-to-node
congestion control technique that propagates in the opposite direction of the data flow.
The backpressure technique can be applied only to virtual circuits, where each node has
information about its upstream node.

In the diagram above, the 3rd node is congested and stops receiving packets; as a
result, the 2nd node may become congested due to the slowing down of the output data
flow. Similarly, the 1st node may become congested and inform the source to slow down.
2. Choke Packet Technique :
The choke packet technique is applicable to both virtual circuit and datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization of each of its output
lines. Whenever the resource utilization exceeds the threshold value set by the
administrator, the router sends a choke packet directly to the source, giving it feedback
to reduce the traffic. The intermediate nodes through which the packets have traveled
are not warned about the congestion.
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the
source. The source guesses that there is congestion in the network. For example, when
a sender sends several packets and there is no acknowledgment for a while, one
assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to
the source or destination to inform them of the congestion. The difference between the
choke packet technique and explicit signaling is that in explicit signaling the signal is
included in the packets that carry data, rather than in a separate packet as in the choke
packet technique.
Explicit signaling can occur in either the forward or the backward direction.
• Forward Signaling : In forward signaling, the signal is sent in the direction of the
congestion. The destination is warned about the congestion, and the receiver in this
case adopts policies to prevent further congestion.
• Backward Signaling : In backward signaling, the signal is sent in the opposite direction
of the congestion. The source is warned about the congestion and needs to slow down.
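The choke-packet trigger described above, a router comparing each output line's utilization against an administrator-set threshold, can be sketched as follows. The threshold value, function name, and host names are hypothetical:

```python
THRESHOLD = 0.8  # assumed administrator-set utilization threshold (0..1)

def choke_sources(line_utilization):
    """Return the source hosts that should receive a choke packet.
    `line_utilization` maps a source host to its output line's utilization."""
    return [src for src, util in line_utilization.items() if util > THRESHOLD]

# hostA and hostC exceed the threshold, so they are told to reduce traffic:
print(choke_sources({"hostA": 0.95, "hostB": 0.40, "hostC": 0.85}))
# ['hostA', 'hostC']
```

Note that the choke packets go straight to the offending sources; intermediate nodes are not notified.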

Q12:- Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal
over the same network before the signal becomes too weak or corrupted so as to extend the
length to which the signal can be transmitted over the same network. An important point to be
noted about repeaters is that they do not amplify the signal. When the signal becomes weak,
they copy the signal bit by bit and regenerate it at the original strength. It is a 2 port device.
Routers – A router is a device like a switch that routes data packets based on their IP
addresses. Router is mainly a Network Layer device. Routers normally connect LANs and
WANs together and have a dynamically updating routing table based on which they make
decisions on routing the data packets. Routers divide the broadcast domains of hosts connected
through them.
Gateway – A gateway, as the name suggests, is a passage to connect two networks together
that may work upon different networking models. They basically work as the messenger agents
that take data from one system, interpret it, and transfer it to another system. Gateways are also
called protocol converters and can operate at any network layer. Gateways are generally more
complex than switches or routers.

Time-division multiplexing (TDM) is a method of putting multiple data streams


in a single signal by separating the signal into many segments, each having a
very short duration. Each individual data stream is reassembled at the
receiving end based on the timing.

The circuit that combines signals at the source (transmitting) end of a


communications link is known as a multiplexer. It accepts the input from each
individual end user, breaks each signal into segments, and assigns the
segments to the composite signal in a rotating, repeating sequence. The
composite signal thus contains data from multiple senders. At the other end of
the long-distance cable, the individual signals are separated out by means of
a circuit called a demultiplexer, and routed to the proper end users. A two-way
communications circuit requires a multiplexer/demultiplexer at each end of the
long-distance, high-bandwidth cable.

If many signals must be sent along a single long-distance line, careful


engineering is required to ensure that the system will perform properly. An
asset of TDM is its flexibility. The scheme allows for variation in the number of
signals being sent along the line, and constantly adjusts the time intervals to
make optimum use of the available bandwidth.
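The rotating, repeating slot assignment described above can be sketched as a round-robin interleaver. This is an illustrative model with streams of equal length; the function names are our own:

```python
def multiplex(streams):
    """Interleave equal-length streams into one composite signal:
    one unit from each sender per rotating time slot."""
    return [unit for slot in zip(*streams) for unit in slot]

def demultiplex(signal, n):
    """Split the composite signal back into n streams by slot position."""
    return [signal[i::n] for i in range(n)]

senders = [list("AAAA"), list("BBBB"), list("CCCC")]
composite = multiplex(senders)
print("".join(composite))                    # ABCABCABCABC
print(demultiplex(composite, 3) == senders)  # True
```

The demultiplexer needs only the slot count and timing (here, the position in the list) to route each unit back to the proper end user.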
