
09CS 6048 Amrik Singh Sidhu

Transport Layer

PREVIEW
• The Basics
• Addressing & Connections
• Buffers & Flow Management
• TCP

THE BASICS
Transport Layer

Where are we?
The transport layer sits just below the Application layer and provides:
• Connection control
• Sequencing
• Multiplexing
• Congestion control
• Flow control
• Error control
Services Provided by the Transport Layer

Provides an efficient, reliable, cost-effective service to its users.

The software/hardware within the transport layer that does the work is
called the Transport Entity.

Types of Services
• Connection Oriented
• Connectionless

The same types of services are provided by the Network Layer. Why the
repetition?
Requirement of a Transport Layer

Users have no control over the subnet - they cannot solve the
problem of poor service themselves.

Another layer on top of the network layer is required to improve the
quality of service.

The existence of the transport layer makes it possible for the transport
service to be more reliable than the underlying network service.

It isolates the upper layers from the technology, design and
imperfections of the subnet.
Services Provided to the Upper Layers

The network, transport, and application layers.


Transport Protocols

Provide application-to-application communication

Need extended addressing mechanism to identify applications

First end-to-end layer

Provides:
• Reliability
• Flow Control
• Congestion Control
The transport interface allows application programs to establish, use,
and release connections, as in the sketch below.
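To illustrate that interface in the Berkeley socket style, here is a minimal client-side sketch in Python; the host name and port are placeholder values, not part of the slides.

```python
import socket

# Establish, use, and release a transport connection (placeholder address).
conn = socket.create_connection(("example.com", 80))   # CONNECT
conn.sendall(b"HEAD / HTTP/1.0\r\n\r\n")               # SEND data on the connection
reply = conn.recv(1024)                                 # RECEIVE a reply
conn.close()                                            # DISCONNECT
```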
Elements of Transport Protocols

Explicit addressing scheme required for destinations

Connection establishment

Release of connections

Storage capacity of the subnet

Buffering and Flow control

Crash Recovery
ADDRESSING
Transport Layer
Addressing

TSAP: Transport Service Access Point


• Communication End Points
• Define a transport address on which processes can listen for
connection requests.
• End points (Also called Sockets) consist of an IP address
and a Local Port Number (16 bit in case of TCP)

NSAP : Network Service Access Point


• 32 bit IP address

TSAP = NSAP + Local Port Number
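To make the TSAP = NSAP + port idea concrete, here is a minimal Python sketch using the socket interface; the loopback address and port 6000 are arbitrary example values, not mandated by anything above.

```python
import socket

# A TSAP is an NSAP (IP address) plus a 16-bit local port number.
# The values below are arbitrary, for illustration only.
tsap = ("127.0.0.1", 6000)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(tsap)       # attach the socket to the transport address
listener.listen(1)        # listen for connection requests on this TSAP
print("Listening on", listener.getsockname())
listener.close()
```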


Addressing

TSAPs, NSAPs and transport connections


Addressing

How do you know the destination TSAP?

Well-known ports are one way.

But to cater for less well-known services, the schemes used are:
• Initial Connection Request – Every machine runs a special process
server that acts as a proxy for less frequently used services.
• Name Server / Directory Server
Addressing – Alternate Scheme

Name Server / Directory Server
• The Name/Directory Server listens on a well-known port.
• The user sets up a connection to the Name/Directory Server.
• The user sends a message specifying the service name.
• The Name Server sends back the TSAP address.

The user then releases the connection with the Name Server and
establishes a new connection with the desired service (see the sketch
below).

When a new service is created, it registers itself with the Name
Server, giving its service name and TSAP address.
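A minimal sketch of this lookup scheme, assuming a hypothetical directory service reachable at a well-known address and a simple line-based text protocol; the host name, port 411, and reply format are all invented for illustration.

```python
import socket

NAME_SERVER = ("nameserver.example.com", 411)   # assumed well-known TSAP

def lookup(service_name: str):
    """Ask the name/directory server for the TSAP of a service (hypothetical protocol)."""
    with socket.create_connection(NAME_SERVER) as s:
        s.sendall(service_name.encode() + b"\n")   # send the service name
        reply = s.recv(256).decode().strip()       # assumed reply, e.g. "192.0.2.7 2049"
    host, port = reply.split()
    return host, int(port)

# After the lookup, the user releases the name-server connection (the 'with'
# block does that) and opens a new connection to the desired service:
# host, port = lookup("time-of-day")
# service = socket.create_connection((host, port))
```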
CONNECTIONS
Transport Layer
Connection Establishment

Clock-based numbering scheme (sketched below)
• Each host is equipped with a clock.
• The clock is assumed to keep running even if the host goes
down.
• The low-order bits of the clock are used as the initial sequence
number.
• Each connection starts numbering its TPDUs with a
unique sequence number.
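A sketch of the clock-based idea: the low-order bits of a free-running clock seed the initial sequence number, so numbers keep advancing even across a host crash. The 1 ms tick and 32-bit width below are illustrative assumptions, not the exact rule.

```python
import time

def initial_sequence_number() -> int:
    """Derive an initial sequence number from a clock (illustrative only)."""
    ticks = int(time.time() * 1000)   # assume a 1 ms clock tick
    return ticks & 0xFFFFFFFF         # keep only the low-order 32 bits

# Each new connection starts numbering its TPDUs from a value that keeps
# advancing while the host is down, so old duplicates cannot collide.
print(hex(initial_sequence_number()))
```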
Connection Establishment

Three-way Handshake Protocol for Connection Establishment.


Three-Way Handshake Protocol

Old CONNECTION REQUEST appearing out of nowhere.


Three-Way Handshake Protocol

Duplicate CR and duplicate ACK.


Connection Release

Two types of connection release:
• Symmetric - Each side closes the connection independently.
• Asymmetric - Either side sends a DISCONNECT
TPDU; upon its arrival the connection is released.

Asymmetric release is abrupt and may lead
to loss of data.
Asymmetric Connection Release

Abrupt disconnection with loss of data.


Symmetric Connection Release

The connection in each direction is closed
independently.

Here a host can continue to receive data even after
it has sent a DISCONNECT TPDU.
Connection Release

Consider a protocol as follows:
• Host 1: I am done. Are you done?
• Host 2: I am done too. Goodbye.
• The connection is safely released.

Will this work?
Connection Release

The two-army problem.


Connection Release - Solution

Three-way Handshake Protocol – Normal case.


Connection Release - Solution

Final ACK lost.


Connection Release - Solution

Response lost.
Connection Release - Solution

Response lost and subsequent DRs lost.


Buffers & Flow Control
Transport Layer
Flow Control and Buffering
A sliding window protocol is required to
prevent a fast sender from swamping
a slower receiver.

Requirement of buffers
• For an unreliable network service, buffering is required
at the sender end.
• Buffering is required at the receiver for a protocol
working on the basis of selective repeat.
Flow Control and Buffering
The size of the buffer is a crucial issue.

Fixed-size buffers
• Buffer size = size of the largest possible TPDU
• Buffer size < size of the largest possible TPDU

Chained fixed-size buffers, with unused space.
Flow Control and Buffering
Variable-size buffers - better memory utilization
but more complicated buffer management.

Chained variable-size buffers, with unused space.
Flow Control and Buffering

A single large circular buffer per connection,
holding TPDUs 1, 2 and 3, with unused space.
Buffering – Trade-Offs

For low-bandwidth bursty traffic, buffering should be
concentrated at the sender side.

For high-bandwidth smooth traffic, buffering capacity at the
receiver side should be greater.

The sender and receiver should dynamically adjust their buffer
allocations as the traffic pattern changes.
Buffering – Dynamic Allocation

The transport layer protocol should thus allow the sender to
request buffer space at the receiving end.

The sender requests the allocation of buffers using
control TPDUs, and the receiver grants as many as it
can afford.

The sender decrements its allocation on every
transmission and stops when the allocation reaches
zero.
Buffering – Dynamic Allocation
The receiver piggybacks ACKs and fresh buffer
allocations onto reverse traffic.

Here, buffering is decoupled from ACKs, resulting
in a variable-size sliding window protocol.

Control TPDUs may be lost, resulting in deadlock.

Thus, control TPDUs are sent periodically, giving the
buffer status (a credit sketch follows below).
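The credit mechanism described above can be sketched as follows; the class and field names are invented for illustration and count whole TPDUs rather than bytes.

```python
class CreditSender:
    """Sender side of dynamic buffer (credit) allocation - illustrative sketch."""

    def __init__(self):
        self.credit = 0              # buffers granted by the receiver

    def on_buffer_grant(self, n: int):
        # The receiver piggybacks a fresh allocation onto reverse traffic.
        self.credit = n

    def send(self, tpdu: str) -> bool:
        if self.credit == 0:
            return False             # must wait: allocation exhausted
        self.credit -= 1             # decrement allocation on every transmission
        print("sending", tpdu)
        return True

s = CreditSender()
s.on_buffer_grant(2)
s.send("TPDU 1"); s.send("TPDU 2")
print("blocked:", not s.send("TPDU 3"))   # stops when the allocation reaches zero
```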
Dynamic Buffer Allocation

PROTOCOLS
Transport Layer
Transport Layer Protocols

Transmission Control Protocol (TCP)

User Datagram Protocol (UDP)



TCP
Transport Layer
Transmission Control Protocol

The TCP protocol is specifically designed to:
• provide a reliable end-to-end byte stream over
an unreliable internetwork
• dynamically adapt to the properties of the
internetwork
• be robust in the face of many kinds of failures
Transmission Control Protocol

At the sender side, the TCP entity:
• accepts user data streams from local processes
• breaks them into pieces not exceeding 64 KB (sketched below)
• hands them over to the IP layer as separate datagrams.

At the receiver side, the TCP entity reconstructs the
original byte stream from the received IP
datagrams.
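A toy illustration of the segmentation step; the limit value stands in for the 64 KB datagram constraint and is not how a real TCP chooses segment sizes.

```python
def segment(stream: bytes, limit: int = 65_495) -> list[bytes]:
    """Break a byte stream into pieces no larger than `limit` bytes (illustrative)."""
    return [stream[i:i + limit] for i in range(0, len(stream), limit)]

# Each piece would be handed to the IP layer as a separate datagram.
pieces = segment(b"x" * 200_000)
print([len(p) for p in pieces])   # [65495, 65495, 65495, 3515]
```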
Transmission Control Protocol

As the underlying subnet is unreliable,
TCP is responsible for:
• maintenance of timers
• retransmission on delivery failure
• reassembly of segments in the proper
sequence.
TCP Service Model
TCP service is obtained by having both the sender and the
receiver create end points called sockets.

Each socket has a socket number consisting of
• the IP address of the host
• a 16-bit number local to the host, called the port.

Connections are identified by the socket identifiers at
both ends.
TCP Service Model
A socket may be used for multiple connections at the same
time.

All TCP connections are full duplex.

TCP does not support multicasting and broadcasting.

A TCP connection is a byte stream, not a message stream.
Message boundaries are not preserved end to end.
TCP Service Model

(a) Four 512-byte segments sent as separate IP datagrams.
(b) The 2048 bytes of data delivered to the application in a single READ call.
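A small sketch of that byte-stream property: four separate 512-byte writes on one side can come back as one 2048-byte stream of data on the other, with the write boundaries invisible. The connected socket pair below merely stands in for a TCP connection.

```python
import socket

# Two connected stream sockets stand in for the two ends of a TCP connection.
a, b = socket.socketpair()

for i in range(4):
    a.sendall(bytes([i]) * 512)      # four separate 512-byte writes
a.close()

data = b.recv(4096)                  # may already return all 2048 bytes at once
while True:
    chunk = b.recv(4096)
    if not chunk:
        break
    data += chunk
print(len(data), "bytes delivered; the write boundaries are not visible")
b.close()
```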
TCP Service Model

When an application passes data to TCP, TCP may:
• send it immediately, or
• buffer it at its own discretion.
If the application wants the data to be sent immediately, it can use the
PUSH flag, which tells TCP not to delay the transmission.

E.g., a carriage return after typing a command on the command line.


TCP Service Model

Urgent data, e.g., Control-C.

Control information along with the URGENT flag causes the sending TCP entity
to stop accumulating data and transmit everything immediately.

When the urgent data is received at the destination, the receiving
application is interrupted so that it can read the data stream to find the
urgent data.
The TCP Protocol

Every byte on a TCP connection has its own
32-bit sequence number.

Separate 32-bit sequence numbers are used for
ACKs and the window mechanism.

Sending and receiving TCP entities exchange
data in the form of segments.
The TCP Protocol

A segment consists of a fixed 20-byte header + an optional
part + zero or more data bytes.

The size of segments is decided by TCP:
• data from several writes can be put into one segment, or
• the data of one write can be split over multiple segments.

Limits to segment size:
• Data + header must fit in the IP datagram payload (64 KB limit).
• It also depends on the Maximum Transfer Unit (MTU) of the network.
• Max payload = 65,535 - 20 (IP header) - 20 (TCP header) = 65,495 bytes (worked out below).
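The payload limit works out as follows; a quick check of the arithmetic in the last bullet, using the minimum header sizes.

```python
IP_MAX_DATAGRAM = 65_535     # 64 KB limit on an IP datagram (total length field)
IP_HEADER = 20               # minimum IP header
TCP_HEADER = 20              # fixed TCP header, without options

max_payload = IP_MAX_DATAGRAM - IP_HEADER - TCP_HEADER
print(max_payload)           # 65495 bytes of data per segment, at most
```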
TCP Segment Header
Pseudoheader in TCP Checksum

It is a pseudo IP header, used only for
the checksum calculation, as sketched below; it is never transmitted.
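A sketch of how the pseudoheader enters the computation: it covers the source and destination IP addresses, a zero byte, the protocol number (6 for TCP), and the TCP segment length, and it is prepended only for the calculation. The addresses in the usage comment are placeholders, and a real implementation would zero the segment's checksum field before computing.

```python
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement sum (the Internet checksum building block)."""
    if len(data) % 2:
        data += b"\x00"                                  # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                                   # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over pseudoheader + TCP header + data (checksum field assumed zero)."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, len(segment)))  # zero, protocol=6, TCP length
    return (~ones_complement_sum16(pseudo + segment)) & 0xFFFF

# Example (addresses are placeholders):
# tcp_checksum("192.0.2.1", "192.0.2.2", tcp_header_plus_data)
```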
TCP Connection Establishment

(a) Normal case. (b) Call collision case.
TCP Connection Release

Four TCP segments are required:
one FIN and one ACK in each direction.

The ACK of the other side's FIN and one's own FIN may be contained
in the same segment.

To avoid the 'two-army problem', timers are used.
TCP Transmission Policy

Window management in TCP


TCP Transmission Policy

When the window size is 0, the sender may not send
normal segments, with the following exceptions:
• Urgent data may be sent, e.g., to allow the user to kill a process on the remote end.
• The sender may send a 1-byte segment to make the receiver re-announce
the next byte expected and the window size.
• TCP provides this option to prevent a deadlock.

Flow control
• A sliding window with selective repeat is used.

TCP Transmission Policy

Improvements on the receiver side
• Wait for data, then send the ACK and window
size advertisement together.
Improvements on the transmitter side
• Nagle's Algorithm
TCP Transmission Policy
Nagle’s Algorithm
Send the first byte and buffer all the rest until the first byte
is acknowledged.

Then send the buffered data and again buffer until the previous
batch is acknowledged.

Additionally, sending is allowed when enough data has trickled
down to fill half the window or a maximum-size segment.

Nagle's algorithm may not be desirable in some
situations, e.g., mouse movements on remote desktops (a sketch follows below).
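A rough sketch of the buffering rule Nagle's algorithm imposes on the sender; the class name and simplified thresholds are assumptions for illustration, not the exact RFC 896 logic.

```python
class NagleSender:
    """Simplified sketch of Nagle's algorithm (not the full RFC 896 rules)."""

    def __init__(self, mss: int):
        self.mss = mss
        self.buffer = b""
        self.unacked = False          # is a previously sent batch still unacknowledged?

    def write(self, data: bytes):
        self.buffer += data
        self._maybe_send()

    def on_ack(self):
        self.unacked = False          # previous batch acknowledged: flush the buffer
        self._maybe_send()

    def _maybe_send(self):
        # Send if nothing is outstanding, or once a full segment has accumulated.
        if self.buffer and (not self.unacked or len(self.buffer) >= self.mss):
            segment, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.unacked = True
            print("send", len(segment), "bytes")
```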
TCP Transmission Policy
Silly Window Syndrome

Clark's Solution - Prevent the receiver from advertising a window size of one byte. Instead,
wait until the window is half empty or the window size equals a full segment (sketched below).
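The receiver-side rule can be sketched in a few lines; the function and parameter names are invented for illustration.

```python
def advertised_window(free_space: int, buffer_size: int, mss: int) -> int:
    """Clark's solution: do not advertise tiny windows (illustrative sketch)."""
    if free_space >= min(buffer_size // 2, mss):
        return free_space            # enough room: advertise it
    return 0                         # otherwise keep the window closed for now
```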
TCP Congestion Control

Congestion-causing factors:

Receiver capacity - the limited
buffer size of the receiver.
• The sender transmits only as per the
window size advertised by the receiver.

Network capacity - internal
congestion within the network.
TCP Congestion Control

Slow Start Algorithm

Two windows are maintained by the sender, each
reflecting a number of bytes it can transmit. These are:
• the window granted by the receiver, and
• the congestion window.

The effective sending window is the lesser of the two.
TCP Congestion Control

Slow Start Algorithm

At connection establishment time, the congestion window is set to the
maximum segment size in use on the connection.

The congestion window grows exponentially with
each acknowledgement, until either a timeout occurs or the receiver's
window size is reached.
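For example, assuming a 1 KB maximum segment size, the congestion window would grow 1 KB, 2 KB, 4 KB, 8 KB, and so on with each successfully acknowledged burst, until a timeout occurs or the receiver's advertised window caps it.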
TCP Congestion Control

Internet Congestion Algorithm

Uses a third parameter called the threshold.

Initially the threshold is set to a high value of 64 KB.

When a timeout occurs, the threshold is set to half of the current congestion window and the
congestion window is reset to one maximum segment size.

Slow start is then used again.

Exponential growth stops at the threshold, and the congestion window then increases
linearly (one maximum segment size per ACK) until the receiver's window size is reached.
A combined sketch follows below.
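Both phases can be condensed into a small sketch; the 1 KB segment size and the function names are illustrative assumptions, while the 64 KB initial threshold, the halving on timeout, and the exponential-then-linear growth follow the slides.

```python
MSS = 1024                     # assumed maximum segment size, in bytes

cwnd = MSS                     # congestion window: starts at one segment
threshold = 64 * 1024          # initial threshold of 64 KB

def on_burst_acked(receiver_window: int) -> int:
    """Grow the congestion window after a successfully acknowledged burst."""
    global cwnd
    if cwnd < threshold:
        cwnd = min(cwnd * 2, threshold)       # slow start: exponential growth
    else:
        cwnd += MSS                           # past the threshold: linear growth
    return min(cwnd, receiver_window)         # effective window never exceeds the grant

def on_timeout() -> None:
    """On a timeout, halve the threshold and fall back to slow start."""
    global cwnd, threshold
    threshold = cwnd // 2
    cwnd = MSS
```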
