Ans:-
1. Network security and confidentiality management, text compression and packing, and the
virtual terminal protocol (VTP).
2. Syntax conversion - The abstract syntax is converted to the transfer syntax, and the other side
performs the opposite conversion (transfer syntax back to abstract syntax). This involves code
conversion, character-set conversion, data-format modification, adaptation of data-structure
operations, data compression, encryption and so on.
3. Syntax negotiation - According to the requirements of the application layer, the two sides
negotiate an appropriate context, that is, they determine the transfer syntax and the mode of
transmission.
4. Connection management - This includes using the session-layer service to establish a
connection, managing data transport and synchronization control over this connection (using the
corresponding services at the session layer), and terminating the connection either normally or
abnormally.
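As a rough illustration of these presentation-layer functions, the following Python sketch (the `record` and `wire_bytes` names are purely illustrative) converts an abstract in-memory object into a compressed transfer syntax and back, mirroring the two directions of syntax conversion described above:

```python
import json
import zlib

# Abstract syntax: an in-memory object the application layer works with.
record = {"user": "alice", "msg": "hello"}

# Transfer syntax: a serialized, compressed byte stream suitable for the wire.
wire_bytes = zlib.compress(json.dumps(record).encode("utf-8"))

# The receiving side performs the opposite conversion
# (transfer syntax back to abstract syntax).
restored = json.loads(zlib.decompress(wire_bytes).decode("utf-8"))
print(restored)
```

Encryption would slot into the same pipeline, applied to `wire_bytes` before transmission.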
Application layer:-
The application layer is the highest abstraction layer of the TCP/IP model that provides
the interfaces and protocols needed by the users. It combines the functionalities of the
session layer, the presentation layer and the application layer of the OSI model.
This layer uses a number of protocols, the main ones among which are as follows:
1. Hyper Text Transfer Protocol, HTTP: It is the underlying protocol of the World Wide Web. It
defines how hypermedia messages are formatted and transmitted.
2. File Transfer Protocol, FTP: It is a client-server based protocol for transfer of files between
client and server over the network.
3. Simple Mail Transfer Protocol, SMTP: It lays down the rules and semantics for sending and
receiving electronic mails (e-mails).
4. Domain Name System, DNS: It is a naming system for devices in networks. It provides
services for translating domain names to IP addresses.
5. TELNET: It provides bi-directional text-oriented services for remote login to the hosts over
the network.
6. Simple Network Management Protocol, SNMP: It is for managing, monitoring the network
and for organizing information about the networked devices.
a) Mesh Topology :
In mesh topology, every device is connected to every other device via a dedicated channel.
Figure 1 : Every device is connected with another via dedicated channels. These
channels are known as links.
• If N devices are connected to each other in a mesh topology, then the
number of ports required by each device is N-1. In Figure 1, there are 5
devices connected to each other, hence the number of ports required per device is 4.
• If N devices are connected to each other in a mesh topology, then the
total number of dedicated links required to connect them is NC2, i.e. N(N-1)/2. In Figure
1, there are 5 devices connected to each other, hence the total number of links required is
5*4/2 = 10.
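The two formulas above can be checked with a short Python sketch (the function name is illustrative):

```python
def mesh_requirements(n):
    """Ports per device and dedicated links for an n-device full mesh."""
    ports_per_device = n - 1          # each device links to every other device
    total_links = n * (n - 1) // 2    # nC2 dedicated point-to-point links
    return ports_per_device, total_links

# The 5-device mesh from Figure 1: 4 ports per device, 10 links in total.
print(mesh_requirements(5))
```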
Advantages of this topology :
• It is robust.
• Fault is diagnosed easily. Data is reliable because data is transferred among the devices
through dedicated channels or links.
• Provides security and privacy.
Problems with this topology :
• Installation and configuration are difficult.
• The cost of cables is high, as bulk wiring is required; hence it is suitable only for a small
number of devices.
• Cost of maintenance is high.
b) Star Topology :
In star topology, all the devices are connected to a single hub through a cable. This hub
is the central node and all other nodes are connected to the central node. The hub can
be passive in nature, i.e. a non-intelligent hub such as a broadcasting device, or it can
be intelligent, in which case it is known as an active hub. Active hubs have repeaters in
them.
Figure 2 : A star topology having four systems connected to single point of connection
i.e. hub.
c) Bus Topology :
Bus topology is a network type in which every computer and network device is
connected to a single cable. Data is transmitted from one end to the other in a single
direction; there is no bi-directional feature in bus topology.
Figure 3 : A bus topology with shared backbone cable. The nodes are connected to the
channel via drop lines.
d) Ring Topology :
In this topology, the devices form a ring, with each device connected to exactly two
neighbouring devices.
Figure 4 : A ring topology comprising 4 stations connected to each other, forming a ring.
e) Hybrid Topology :
This topology is a combination of two or more of the topologies described above. It
is a scalable topology that can be expanded easily. It is reliable, but at the same time it
is a costly topology.
ISDN Interfaces:
The following are the interfaces of ISDN:
1. Basic Rate Interface (BRI) –
There are two data-bearing channels (‘B’ channels) and one signaling channel (‘D’
channel) in BRI to initiate connections. The B channels operate at a maximum of 64 Kbps
while the D channel operates at a maximum of 16 Kbps. The two channels are
independent of each other. For example, one channel is used as a TCP/IP connection to a
location while the other channel is used to send a fax to a remote location. The iSeries
supports the basic rate interface (BRI).
The basic rate interface (BRI) specifies a digital pipe consisting of two B channels of
64 Kbps each and one D channel of 16 Kbps. This equals a speed of 144 Kbps. In
addition, the BRI service itself requires an operating overhead of 48 Kbps.
Therefore a digital pipe of 192 Kbps is required.
2. Primary Rate Interface (PRI) –
Primary Rate Interface service consists of a D channel and either 23 or 30 B channels
depending on the country you are in. PRI is not supported on the iSeries. A digital pipe
with 23 B channels and one 64 Kbps D channel is present in the usual Primary Rate
Interface (PRI). Twenty-three B channels of 64 Kbps each and one D channel of 64 Kbps
equals 1.536 Mbps. The PRI service uses 8 Kbps of overhead also. Therefore PRI
requires a digital pipe of 1.544 Mbps.
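The BRI and PRI arithmetic above can be sketched in Python (the function name is illustrative):

```python
def isdn_pipe_kbps(b_channels, d_kbps, overhead_kbps):
    """Total digital-pipe rate: B channels at 64 kbps each,
    plus the D channel and the framing overhead."""
    return b_channels * 64 + d_kbps + overhead_kbps

# BRI: 2 B channels + 16 kbps D channel + 48 kbps overhead = 192 kbps.
bri = isdn_pipe_kbps(b_channels=2, d_kbps=16, overhead_kbps=48)

# PRI: 23 B channels + 64 kbps D channel + 8 kbps overhead = 1544 kbps (1.544 Mbps).
pri = isdn_pipe_kbps(b_channels=23, d_kbps=64, overhead_kbps=8)

print(bri, pri)
```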
3. Broadband-ISDN (B-ISDN) –
Narrowband ISDN has been designed to operate over the current communications
infrastructure, which is heavily dependent on the copper cable however B-ISDN relies
mainly on the evolution of fiber optics. According to the CCITT, B-ISDN is best described as ‘a
service requiring transmission channels capable of supporting rates greater than the
primary rate’.
ISDN Services:
ISDN provides a fully integrated digital service to users. These services fall into 3
categories- bearer services, teleservices and supplementary services.
1. Bearer Services –
Transfer of information (voice, data and video) between users, without the network
manipulating the content of that information, is provided by the bearer services. The
network does not need to process the information and therefore does not change its
content. Bearer services belong to the first three layers of the OSI model. They are well
defined in the ISDN standard. They can be provided using circuit-switched, packet-
switched, frame-switched, or cell-switched networks.
2. Teleservices –
In these services the network may change or process the contents of the data. These services
correspond to layers 4-7 of the OSI model. Teleservices rely on the facilities of the
bearer services and are designed to accommodate complex user needs. The user need
not be aware of the details of the process. Teleservices include telephony, teletex,
telefax, videotex, telex and teleconferencing. Though ISDN defines these services by
name, they have not yet become standards.
3. Supplementary Service –
Additional functionality beyond the bearer services and teleservices is provided by
supplementary services. Reverse charging, call waiting, and message handling are
examples of supplementary services, all of which are familiar from today’s telephone
company services.
Principle of ISDN:
The ISDN works based on the standards defined by ITU-T (formerly CCITT). The
Telecommunication Standardization Sector (ITU-T) coordinates standards for
telecommunications on behalf of the International Telecommunication Union (ITU) and
is based in Geneva, Switzerland. The various principles of ISDN as per ITU-T
recommendation are:
• To support switched and non-switched applications
• To support voice and non-voice applications
• Reliance on 64-kbps connections
• Intelligence in the network
• Layered protocol architecture
• Variety of configurations
Benefits of computer networks
Setting up a computer network is a fast and reliable way of sharing information and resources within a
business. It can help you make the most of your IT systems and equipment.
Advantages of computer networking
Main benefits of networks include:
• File sharing – you can easily share data between different users, or access it remotely if you keep it
on other connected devices.
• Resource sharing – using network-connected peripheral devices like printers, scanners and
copiers, or sharing software between multiple users, saves money.
• Sharing a single internet connection – it is cost-efficient and can help protect your systems if you
properly secure the network.
• Increasing storage capacity – you can access files and multimedia, such as images and music,
which you store remotely on other machines or network-attached storage devices.
Networking computers can also help you improve communication, so that:
• staff, suppliers and customers can share information and get in touch more easily
• your business can become more efficient - e.g. networked access to a common database can avoid
the same data being keyed in multiple times, saving time and preventing errors
• staff can deal with queries and deliver a better standard of service as a result of sharing customer
data
Cost benefits of computer networking
Storing information in one centralised database can also help you reduce costs and drive efficiency.
For example:
• staff can deal with more customers in less time since they have shared access to customer and
product databases
• you can centralise network administration, meaning less IT support is required
• you can cut costs through sharing of peripherals and internet access
You can reduce errors and improve consistency by having all staff work from a single source of
information. This way, you can make standard versions of manuals and directories available to them,
and back up data from a single point on a scheduled basis, ensuring consistency.
A computer network consists of two or more computers that are linked in order to share
resources such as printers and CD-ROMs, exchange files, or allow electronic
communications. The computers on a computer network may be linked through cables,
telephone lines, radio waves, satellites, or infrared light beams.
• In this method of flow control, the sender sends a single frame to receiver & waits for an
acknowledgment.
• The next frame is sent by sender only when acknowledgment of previous frame is
received.
• This process of sending a frame & waiting for an acknowledgment continues as long as
the sender has data to send.
• To end the transmission, the sender transmits an end-of-transmission (EOT) frame.
• The main advantage of the stop & wait protocol is its accuracy. The next frame is transmitted
only when the first frame is acknowledged, so no frame is lost without the sender knowing.
• The main disadvantage of this method is that it is inefficient. It makes the transmission
process slow. In this method single frame travels from source to destination and single
acknowledgment travels from destination to source. As a result each frame sent and
received uses the entire time needed to traverse the link. Moreover, if the two devices are
far apart, a lot of time is wasted waiting for ACKs, which increases the total
transmission time.
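A minimal Python simulation of this stop & wait behaviour, under the simplifying assumption that a frame and its ACK are lost together with a fixed probability (all names are illustrative):

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=1):
    """Simulate stop & wait: send one frame, wait for its ACK, retransmit on loss."""
    random.seed(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        acked = False
        while not acked:
            transmissions += 1
            # Frame and its ACK both arrive intact with probability 1 - loss_prob.
            if random.random() >= loss_prob:
                delivered.append(frame)
                acked = True   # ACK received: the next frame may now be sent
            # Otherwise the sender times out and retransmits the same frame.
    # An EOT frame would be transmitted here to end the exchange.
    return delivered, transmissions

data, sent = stop_and_wait(["F0", "F1", "F2"])
print(data, sent)
```

The gap between `sent` and the number of frames shows the inefficiency the text describes: every loss costs a full round trip.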
Q6:-Quality of service layer
Ans
Quality of service (QoS) refers to any technology that manages data traffic to
reduce packet loss, latency and jitter on the network. QoS controls and
manages network resources by setting priorities for specific types of data on
the network.
Organizations can achieve QoS by using certain tools and techniques, such
as jitter buffer and traffic shaping. For many organizations, QoS is included in
the service-level agreement (SLA) with their network service provider to
guarantee a certain level of performance.
QoS parameters
• Packet loss happens when network links become congested and routers and
switches start dropping packets. When packets are dropped during real-time
communication, such as voice or video calls, these sessions can experience jitter
and gaps in speech.
• Jitter is the result of network congestion, timing drift and route changes. Too
much jitter can degrade the quality of voice and video communication.
• Latency is the time it takes a packet to travel from its source to its destination.
Latency should be as close to zero as possible. If a voice over IP call has a high
amount of latency, it can experience echo and overlapping audio.
• Mean opinion score (MOS) is a metric to rate voice quality that uses a five-point
scale, with a five indicating the highest quality.
Implementing QoS
Three models exist to implement QoS: Best Effort, Integrated Services and
Differentiated Services.
Best Effort is a QoS model where all the packets receive the same priority and there
is no guaranteed delivery of packets. Best Effort is applied when networks have not
configured QoS policies or when the infrastructure does not support QoS.
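As a hedged illustration of how an application might request QoS treatment, the following Python sketch marks a UDP socket's traffic with the Expedited Forwarding DSCP value commonly used for voice. Whether the network honors the marking depends entirely on the QoS policy configured along the path; on a Best Effort network it is simply ignored:

```python
import socket

EF_DSCP = 46            # Expedited Forwarding code point, typical for voice
tos = EF_DSCP << 2      # DSCP occupies the upper 6 bits of the IP TOS byte

# Mark all packets sent on this socket with the chosen DSCP value.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(hex(tos))
sock.close()
```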
Q7:-
Ans:-
The Transmission Control Protocol (TCP) is one of the most important protocols of the
Internet protocol suite. It is the most widely used protocol for data transmission in
communication networks such as the internet.
Features
• TCP is a reliable protocol: the receiver always sends a positive or negative
acknowledgement about each data packet to the sender, so the sender always knows
whether the packet reached the destination or needs to be resent.
• TCP ensures that the data reaches the intended destination in the same order it was sent.
• TCP is connection oriented: it requires that a connection between the two remote endpoints be
established before any actual data is sent.
• TCP provides error-checking and a recovery mechanism.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in client/server, point-to-point mode.
• TCP provides full-duplex service, i.e. each endpoint can act as both receiver and sender.
Header
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.
• Source Port (16-bits) - It identifies source port of the application process on the sending
device.
• Destination Port (16-bits) - It identifies destination port of the application process on the
receiving device.
• Sequence Number (32-bits) - Sequence number of data bytes of a segment in a session.
• Acknowledgement Number (32-bits) - When ACK flag is set, this number contains the next
sequence number of the data byte expected and works as acknowledgement of the previous
data received.
• Data Offset (4-bits) - This field implies both, the size of TCP header (32-bit words) and the
offset of data in current packet in the whole TCP segment.
• Reserved (3-bits) - Reserved for future use; all bits are set to zero by default.
• Flags (1-bit each)
o NS - Nonce Sum bit is used by Explicit Congestion Notification signaling process.
o CWR - When a host receives packet with ECE bit set, it sets Congestion Windows
Reduced to acknowledge that ECE received.
o ECE -It has two meanings:
▪ If SYN bit is clear to 0, then ECE means that the IP packet has its CE
(congestion experience) bit set.
▪ If SYN bit is set to 1, ECE means that the device is ECT capable.
o URG - It indicates that Urgent Pointer field has significant data and should be
processed.
o ACK - It indicates that Acknowledgement field has significance. If ACK is cleared to
0, it indicates that packet does not contain any acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH data (as soon as it
comes) to the receiving application without buffering it.
o RST - Reset flag has the following features:
▪ It is used to refuse an incoming connection.
▪ It is used to reject a segment.
▪ It is used to restart a connection.
o SYN - This flag is used to set up a connection between hosts.
o FIN - This flag is used to release a connection and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers, they
are processed in correct order.
• Window Size - This field is used for flow control between two stations and indicates the
amount of buffer (in bytes) the receiver has allocated for a segment, i.e. how much data
the receiver is expecting.
• Checksum - This field contains the checksum of Header, Data and Pseudo Headers.
• Urgent Pointer - It points to the urgent data byte if URG flag is set to 1.
• Options - It facilitates additional options which are not covered by the regular header. Option
field is always described in 32-bit words. If this field contains data less than 32-bit, padding is
used to cover the remaining bits to reach 32-bit boundary.
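The fixed 20-byte header layout described above can be unpacked with a short Python sketch (the example segment and its field values are made up for illustration):

```python
import struct

def parse_tcp_header(segment):
    """Unpack the fixed 20-byte TCP header (network byte order)."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack(
        "!HHIIHHHH", segment[:20])
    data_offset = (off_flags >> 12) * 4   # header length: 32-bit words * 4 bytes
    flags = off_flags & 0x01FF            # NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "offset": data_offset, "flags": flags, "window": window}

# A hand-built SYN segment: port 12345 -> 80, seq 100, offset 5 words, SYN flag set.
hdr = struct.pack("!HHIIHHHH", 12345, 80, 100, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(hdr))
```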
Addressing
TCP communication between two remote hosts is done by means of port numbers
(TSAPs). Port numbers range from 0 to 65535 and are divided as follows:
• Well-known ports: 0 - 1023
• Registered ports: 1024 - 49151
• Dynamic/private ports: 49152 - 65535
Establishment
Client initiates the connection and sends the segment with a Sequence number. Server
acknowledges it back with its own Sequence number and ACK of client’s segment which
is one more than client’s Sequence number. Client after receiving ACK of its segment
sends an acknowledgement of Server’s response.
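A small Python sketch of the sequence-number arithmetic in this three-way handshake (the initial sequence numbers are made-up values; real TCP chooses them randomly):

```python
# Hypothetical initial sequence numbers (ISNs) for illustration only.
client_isn = 1000
server_isn = 5000

syn       = {"seq": client_isn}                              # 1. client -> server: SYN
syn_ack   = {"seq": server_isn, "ack": client_isn + 1}       # 2. server -> client: SYN+ACK
final_ack = {"seq": client_isn + 1, "ack": server_isn + 1}   # 3. client -> server: ACK

# Each ACK number is one more than the sequence number it acknowledges.
print(syn, syn_ack, final_ack)
```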
Release
Either the server or the client can send a TCP segment with the FIN flag set to 1. When the
receiving end responds by acknowledging the FIN, that direction of the TCP
communication is closed and the connection is released.
Bandwidth Management
TCP uses the concept of window size to accommodate the need of Bandwidth
management. Window size tells the sender at the remote end, the number of data byte
segments the receiver at this end can receive. TCP uses slow start phase by using
window size 1 and increases the window size exponentially after each successful
communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and the
next segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte
data segment is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in transit or a NACK is received,
the window size is reduced to half and the slow start phase begins again.
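A simplified Python sketch of this exponential slow start growth, ignoring timeouts and counting whole segments (all names are illustrative):

```python
def slow_start(segments_to_send, initial_window=1):
    """Window doubles after each fully acknowledged round until all data is sent."""
    window, sent, rounds = initial_window, 0, []
    while sent < segments_to_send:
        burst = min(window, segments_to_send - sent)
        rounds.append(burst)   # segments sent in this round trip
        sent += burst
        window *= 2            # exponential growth while no loss is detected
    return rounds

# 15 segments are delivered in four rounds: 1, then 2, then 4, then 8.
print(slow_start(15))
```

On a loss, the sketch would restart with `window` halved, as described above.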
Multiplexing
The technique to combine two or more data streams in one session is called Multiplexing.
When a TCP client initializes a connection with a server, it always refers to a well-defined
port number that indicates the application process. The client itself uses a randomly
generated port number from the private port-number pool.
Using TCP multiplexing, a client can communicate with a number of different application
processes in a single session. For example, when a client requests a web page that in turn
contains different types of data (HTTP, SMTP, FTP, etc.), the TCP session timeout is
increased and the session is kept open longer so that the three-way handshake
overhead can be avoided.
This enables the client system to receive multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is too long.
Congestion Control
Congestion occurs when a large amount of data is fed to a system that is not capable of
handling it. TCP controls congestion by means of its window mechanism: TCP sets
a window size telling the other end how much data to send. TCP may use three
algorithms for congestion control:
• Additive increase, Multiplicative Decrease
• Slow Start
• Reaction to Timeout
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
• This timer is used to check the integrity and validity of a connection.
• When keep-alive time expires, the host sends a probe to check if the connection still exists.
Retransmission timer:
• This timer maintains a stateful session of the data sent.
• If the acknowledgement of sent data is not received within the retransmission time, the data
segment is sent again.
Persist timer:
• TCP session can be paused by either host by sending Window Size 0.
• To resume the session a host needs to send Window Size with some larger value.
• If this segment never reaches the other end, both ends may wait for each other for infinite
time.
• When the Persist timer expires, the host re-sends its window size to let the other end know.
• Persist Timer helps avoid deadlocks in communication.
Timed-Wait:
• After releasing a connection, either of the hosts waits for a Timed-Wait time to terminate the
connection completely.
• This is in order to make sure that the other end has received the acknowledgement of its
connection termination request.
• The Timed-Wait period can be a maximum of 240 seconds (4 minutes).
Crash Recovery
TCP is a very reliable protocol. It assigns a sequence number to each byte sent in a
segment. It provides a feedback mechanism, i.e. when a host receives a packet, it is
bound to ACK that packet with the next sequence number expected (if it is not the last
segment).
When a TCP server crashes mid-communication and restarts its process, it sends a
TPDU broadcast to all its hosts. The hosts can then resend the last data segment that
was never acknowledged and carry on.
Q8:- twisted cable
Twisted-Pair Cable
Twisted-pair cable is a type of cabling that is used for telephone communications and most modern Ethernet
networks. A pair of wires forms a circuit that can transmit data. The pairs are twisted to provide protection
against crosstalk, the noise generated by adjacent pairs. When electrical current flows through a wire, it creates a
small, circular magnetic field around the wire. When two wires in an electrical circuit are placed close together, their
magnetic fields are the exact opposite of each other. Thus, the two magnetic fields cancel each other out. They also
cancel out any outside magnetic fields. Twisting the wires can enhance this cancellation effect. Using cancellation
together with twisting the wires, cable designers can effectively provide self-shielding for wire pairs within the network
media.
Two basic types of twisted-pair cable exist: unshielded twisted pair (UTP) and shielded twisted pair (STP). The
following sections discuss UTP and STP cable in more detail.
UTP Cable
UTP cable is a medium that is composed of pairs of wires (see Figure 8-1). UTP cable is used in a variety of
networks. Each of the eight individual copper wires in UTP cable is covered by an insulating material. In addition, the
wires in each pair are twisted around each other.
UTP cable relies solely on the cancellation effect produced by the twisted wire pairs to limit signal degradation
caused by electromagnetic interference (EMI) and radio frequency interference (RFI). To further reduce crosstalk
between the pairs in UTP cable, the number of twists in the wire pairs varies. UTP cable must follow precise
specifications governing how many twists or braids are permitted per meter (3.28 feet) of cable.
UTP cable often is installed using a Registered Jack 45 (RJ-45) connector (see Figure 8-2). The RJ-45 is an eight-
wire connector used commonly to connect computers onto a local-area network (LAN), especially Ethernets.
When used as a networking medium, UTP cable has four pairs of either 22- or 24-gauge copper wire. UTP used as a
networking medium has an impedance of 100 ohms; this differentiates it from other types of twisted-pair wiring such
as that used for telephone wiring, which has impedance of 600 ohms.
UTP cable offers many advantages. Because UTP has an external diameter of approximately 0.43 cm (0.17 inches),
its small size can be advantageous during installation. Because it has such a small external diameter, UTP does not
fill up wiring ducts as rapidly as other types of cable. This can be an extremely important factor to consider,
particularly when installing a network in an older building. UTP cable is easy to install and is less expensive than
other types of networking media. In fact, UTP costs less per meter than any other type of LAN cabling. And because
UTP can be used with most of the major networking architectures, it continues to grow in popularity.
Disadvantages also are involved in using twisted-pair cabling, however. UTP cable is more prone to electrical noise
and interference than other types of networking media, and the distance between signal boosts is shorter for UTP
than it is for coaxial and fiber-optic cables.
Although UTP was once considered to be slower at transmitting data than other types of cable, this is no longer true.
In fact, UTP is considered the fastest copper-based medium today. The following summarizes the features of UTP
cable:
• Category 1—Used for telephone communications. Not suitable for transmitting data.
• Category 2—Capable of transmitting data at speeds up to 4 megabits per second (Mbps).
• Category 3—Used in 10BASE-T networks. Can transmit data at speeds up to 10 Mbps.
• Category 4—Used in Token Ring networks. Can transmit data at speeds up to 16 Mbps.
• Category 5—Can transmit data at speeds up to 100 Mbps.
• Category 5e —Used in networks running at speeds up to 1000 Mbps (1 gigabit per second [Gbps]).
• Category 6—Typically, Category 6 cable consists of four pairs of 24 American Wire Gauge (AWG) copper
wires. Category 6 cable is currently the fastest standard for UTP.
Although STP prevents interference better than UTP, it is more expensive and difficult to install. In addition, the
metallic shielding must be grounded at both ends. If it is improperly grounded, the shield acts like an antenna and
picks up unwanted signals. Because of its cost and difficulty with termination, STP is rarely used in Ethernet
networks. STP is primarily used in Europe.
When comparing UTP and STP, keep the following points in mind:
• The speed of both types of cable is usually satisfactory for local-area distances.
• These are the least-expensive media for data communication. UTP is less expensive than STP.
• Because most buildings are already wired with UTP, many transmission standards are adapted to use it, to
avoid costly rewiring with an alternative cable type.
Ans
There are many causes, such as noise and cross-talk, that may corrupt data
during transmission. The upper layers work on a generalized view of the
network architecture and are not aware of the actual hardware data processing. Hence, the
upper layers expect error-free transmission between the systems. Most
applications would not function as expected if they received erroneous data. Applications
such as voice and video may not be as badly affected and may still
function well with some errors.
The data-link layer uses error control mechanisms to ensure that frames (data bit
streams) are transmitted with a certain level of accuracy. But to understand how errors are
controlled, it is essential to know what types of errors may occur.
Types of Errors
There may be three types of errors:
• Single bit error - only one bit of the frame is corrupted.
• Multiple bits error - two or more non-consecutive bits are corrupted.
• Burst error - a set of consecutive bits is corrupted.
Parity Check
The receiver simply counts the number of 1s in a frame. If even parity is used and the
count of 1s is even, the frame is considered not corrupted and is accepted. Similarly, if
odd parity is used and the count of 1s is odd, the frame is considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect
the error.
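A short Python sketch of even-parity checking, including the multi-bit case it cannot catch (the bit patterns are illustrative):

```python
def even_parity_bit(data_bits):
    """Parity bit that makes the total number of 1s even."""
    return sum(data_bits) % 2

def check_even_parity(frame):
    """A frame is accepted when the count of 1s (data + parity bit) is even."""
    return sum(frame) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0]               # 7 data bits with four 1s
frame = data + [even_parity_bit(data)]     # parity bit 0 keeps the count even
print(check_even_parity(frame))            # accepted

frame[2] ^= 1                              # a single-bit error flips one bit
print(check_even_parity(frame))            # detected: the count of 1s is now odd

frame[4] ^= 1                              # a second error restores even parity
print(check_even_parity(frame))            # this double-bit error goes undetected
```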
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This
technique involves binary division of the data bits being sent. The divisor is generated
using polynomials. The sender performs a division operation on the bits being sent and
calculates the remainder. Before sending the actual bits, the sender adds the remainder
at the end of the actual bits. Actual data bits plus the remainder is called a codeword.
The sender transmits data bits as codewords.
At the other end, the receiver performs the division operation on the codewords using the same
CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise it
is assumed that some data corruption occurred in transit.
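A minimal Python sketch of this CRC procedure using modulo-2 division (the generator polynomial and message here are illustrative):

```python
def crc_remainder(bits, divisor):
    """Modulo-2 (XOR) long division of the message bits by the generator."""
    bits = bits[:]
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return bits[-(len(divisor) - 1):]   # remainder is degree(divisor) bits long

divisor = [1, 0, 1, 1]   # generator polynomial x^3 + x + 1
data = [1, 1, 0, 1, 0]   # message bits

# Sender: append degree-many zero bits, divide, and append the remainder.
remainder = crc_remainder(data + [0, 0, 0], divisor)
codeword = data + remainder

# Receiver: dividing the codeword by the same divisor must leave all zeros.
print(remainder, crc_remainder(codeword[:], divisor))
```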
Error Correction
In the digital world, error correction can be done in two ways:
• Backward Error Correction When the receiver detects an error in the data received, it
requests back the sender to retransmit the data unit.
• Forward Error Correction When the receiver detects some error in the data received, it
executes error-correcting code, which helps it to auto-recover and to correct some kinds of
errors.
The first one, backward error correction, is simple and can only be used efficiently
where retransmission is not expensive, for example over fiber optics. But in the case of
wireless transmission, retransmitting may cost too much, so in that case forward error
correction is used.
To correct an error in a data frame, the receiver must know exactly which bit in the frame
is corrupted. To locate the bit in error, redundant bits are used as parity bits for error
detection. For example, if we take ASCII words (7 data bits), then there are 8 kinds of
information we need: seven to tell us which bit is in error, and one more to tell us
that there is no error.
For m data bits, r redundant bits are used. r bits can provide 2^r combinations of
information. In an m+r bit codeword, there is a possibility that the r bits themselves may get
corrupted. So the r bits must be able to indicate all m+r bit locations plus the no-error
state, i.e. 2^r >= m+r+1.
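The condition 2^r >= m+r+1 can be checked with a tiny Python sketch (the function name is illustrative):

```python
def redundant_bits_needed(m):
    """Smallest r with 2**r >= m + r + 1
    (enough combinations for every bit position plus the no-error state)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

# For 7 data bits (one ASCII character), 4 redundant bits are required:
# 2**4 = 16 >= 7 + 4 + 1 = 12.
print(redundant_bits_needed(7))
```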
The same situation occurs on the Internet: too many requests for data over the
same Internet route cause congestion.
Network congestion occurs due to the inherent structure of the Internet and its
use of the Border Gateway Protocol as the routing system.
1. Retransmission Policy :
This is the policy by which retransmission of packets is handled. If the sender feels
that a sent packet is lost or corrupted, the packet needs to be retransmitted. This
retransmission may increase the congestion in the network.
To prevent this, retransmission timers must be designed both to prevent congestion
and to optimize efficiency.
2. Window Policy :
The type of window used at the sender side may also affect congestion. In a
Go-Back-N window, several packets are resent even though some of them may have been
received successfully at the receiver side. This duplication may increase congestion in the
network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific
packet that may have been lost.
3. Discarding Policy :
A good discarding policy for routers is one that prevents congestion while partially
discarding corrupted or less sensitive packets, so that the quality of the message
is maintained.
In the case of audio file transmission, routers can discard less sensitive packets to prevent
congestion while maintaining the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the acknowledgement
policy imposed by the receiver may also affect congestion. Several approaches can be
used to prevent congestion related to acknowledgements.
The receiver can send a cumulative acknowledgement for N packets rather than an
acknowledgement for every single packet, and it can send an acknowledgement only if
it has a packet to send or a timer expires.
5. Admission Policy :
In the admission policy, a mechanism is used to prevent congestion. Switches in a flow
should first check the resource requirements of a network flow before transmitting it further.
If there is congestion, or a chance of congestion, in the network, the router should
refuse to establish a virtual-circuit connection, to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the
network.
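The admission check above amounts to comparing a flow's resource demand against what is still free before the connection is admitted. A minimal sketch (the capacity figures are illustrative):

```python
# Admission policy: admit a new flow only if its demand fits within
# the remaining capacity of the switch or router.

def admit(flow_demand: int, capacity: int, in_use: int) -> bool:
    """Return True if the flow's demand fits in the free capacity."""
    return flow_demand <= capacity - in_use

print(admit(10, 100, 85))  # True — 15 units free, the flow fits
print(admit(20, 100, 85))  # False — would exceed capacity, so deny
```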
Closed Loop Congestion Control
Closed-loop congestion control techniques are used to treat or alleviate congestion
after it happens. Several techniques are used by different protocols; some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from
its upstream node. This may cause the upstream node or nodes to become congested
and in turn reject data from the nodes above them. Backpressure is a node-to-node
congestion control technique in which the congestion signal propagates in the
opposite direction of the data flow. It can be applied only to virtual-circuit
networks, where each node knows its upstream neighbour.
For example, if the 3rd node on a path is congested and stops receiving packets, the
2nd node may become congested because its output flow slows down; similarly, the 1st
node may then become congested and inform the source to slow down.
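The hop-by-hop propagation described above can be sketched as a walk back along the virtual-circuit path, from the congested node toward the source (node names are illustrative):

```python
# Backpressure on a virtual-circuit path: the congestion signal walks
# hop by hop in the opposite direction of the data flow.

def backpressure(path: list, congested: str) -> list:
    """Return the nodes notified, in order, from the congested node
    back toward the source."""
    i = path.index(congested)
    return path[i::-1]

print(backpressure(["source", "n1", "n2", "n3"], "n3"))
# ['n3', 'n2', 'n1', 'source'] — each hop slows down the one before it
```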
2. Choke Packet Technique :
The choke packet technique is applicable to both virtual-circuit networks and
datagram subnets. A choke packet is a packet sent by a node directly to the source to
inform it of congestion. Each router monitors its resources and the utilization of
each of its output lines; whenever the utilization exceeds a threshold value set by
the administrator, the router sends a choke packet straight to the source as feedback
to reduce the traffic. The intermediate nodes through which the packet has traveled
are not warned about the congestion.
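The router-side check can be sketched as follows: utilization on an output line is compared against an administrator-set threshold, and only when it is exceeded does a choke packet go straight back to the source (the threshold and address are illustrative):

```python
# Choke packet generation: a router monitors each output line and,
# past a threshold, sends a choke packet directly to the source.

THRESHOLD = 0.8  # illustrative administrator-set value

def check_line(utilization: float, source: str):
    """Return a choke message for the source, or None if the line is fine."""
    if utilization > THRESHOLD:
        return f"CHOKE -> {source}"  # intermediate nodes are not involved
    return None

print(check_line(0.95, "10.0.0.5"))  # CHOKE -> 10.0.0.5
print(check_line(0.40, "10.0.0.5"))  # None — no congestion on this line
```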
3. Implicit Signaling :
In implicit signaling there is no communication between the congested nodes and the
source; the source infers that there is congestion in the network. For example, when
a sender transmits several packets and receives no acknowledgment for a while, one
reasonable assumption is that the network is congested.
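That inference can be sketched as a simple timeout check: the source never hears from the network directly, it just notices silence (the timestamps and timeout value are illustrative):

```python
# Implicit signaling: the source assumes congestion when a sent packet
# has gone unacknowledged for longer than the timeout.

def infer_congestion(sent_at: float, now: float, acked: bool,
                     timeout: float = 3.0) -> bool:
    """Return True if the silence is long enough to assume congestion."""
    return (not acked) and (now - sent_at > timeout)

print(infer_congestion(0.0, 5.0, False))  # True — silence past the timeout
print(infer_congestion(0.0, 1.0, False))  # False — still within the timeout
```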
4. Explicit Signaling :
In explicit signaling, a node that experiences congestion can explicitly send a
signal to the source or destination to inform it of the congestion. The difference
from the choke packet technique is that here the signal is included in the packets
that carry data, rather than in a separate packet.
Explicit signaling can occur in either the forward or the backward direction.
• Forward Signaling : The signal is sent in the direction of the data flow, warning
the destination about congestion. The receiver then adopts policies to prevent
further congestion.
• Backward Signaling : The signal is sent in the direction opposite to the data
flow, warning the source about congestion so that it slows down.
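The key point, that the signal rides inside an ordinary data packet rather than in a separate one, can be sketched like this (the packet fields are illustrative, loosely in the spirit of an ECN-style congestion bit):

```python
# Explicit signaling: a congestion bit is set on an in-flight data
# packet; no separate choke packet is created.

def mark_packet(packet: dict, direction: str) -> dict:
    """Set the congestion bit and record who the signal warns:
    forward -> destination, backward -> source."""
    packet["congestion_bit"] = True
    packet["warns"] = "destination" if direction == "forward" else "source"
    return packet

p = mark_packet({"payload": "data"}, "backward")
print(p["warns"])  # source — it is the one that should slow down
```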
Q12:- Repeater – A repeater operates at the physical layer. Its job is to regenerate
the signal over the same network before it becomes too weak or corrupted, thereby
extending the distance over which the signal can be transmitted. An important point
about repeaters is that they do not amplify the signal: when the signal becomes weak,
they copy it bit by bit and regenerate it at its original strength. A repeater is a
2-port device.
Routers – A router is a device that, like a switch, forwards data packets, but it
does so based on their IP addresses. A router is mainly a network layer device.
Routers normally connect LANs and WANs together and maintain a dynamically updated
routing table on which they base their forwarding decisions. Routers divide the
broadcast domains of the hosts connected through them.
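The routing-table lookup a router performs for each packet can be sketched as a longest-prefix match on the destination IP address (the table entries and interface names here are illustrative):

```python
# Longest-prefix match: of all routing-table entries that contain the
# destination address, the most specific (longest) prefix wins.
import ipaddress

table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",  # default route
}

def route(dst: str) -> str:
    """Return the outgoing interface for a destination IP address."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return table[best]

print(route("10.1.2.3"))  # eth1 — the /16 beats the /8
print(route("8.8.8.8"))   # eth2 — only the default route matches
```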
Gateway – A gateway, as the name suggests, is a passage connecting two networks that
may use different networking models. Gateways work as messenger agents that take data
from one system, interpret it, and transfer it to another system. They are also
called protocol converters and can operate at any layer of the network stack. A
gateway is generally more complex than a switch or a router.