Christos Papadopoulos
(http://netweb.usc.edu/cs551/)
Connectionless Flows
How can a connectionless network allocate anything to a
user?
It doesn't know about users or applications
Flow:
a sequence of packets between same source - destination pair,
following the same route
Taxonomy
Router-centric vs. host-centric
router-centric: address the problem from inside the network; routers decide what to forward and what to drop
A variant not captured in the taxonomy: adaptive routing!
..Taxonomy..
Reservation-based vs. feedback-based
Reservations: hosts ask for resources, network
responds yes/no
implies router-centric allocation
..Taxonomy
Window-based vs. rate-based
Both tell sender how much data to transmit
Window: TCP flow/congestion control
flow control: advertised window
congestion control: cwnd
Service Models
In practice, fewer than the eight (2 x 2 x 2) combinations of the taxonomy occur
Best-effort networks
Mostly host-centric, feedback, window based
TCP as an example
Queuing Disciplines
Each router MUST implement some
queuing discipline regardless of what the
resource allocation mechanism is
Queuing allocates bandwidth, buffer space,
and promptness:
bandwidth: which packets get transmitted
buffer space: which packets get dropped
promptness: when packets get transmitted
FIFO Queuing
FIFO: first-in-first-out (or FCFS: first-come-first-served)
Arriving packets get dropped when the queue is full,
regardless of flow or importance: this implies drop-tail
Important distinction:
FIFO: scheduling discipline (which packet to serve
next)
Drop-tail: drop policy (which packet to drop next)
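The distinction above can be made concrete with a minimal sketch, assuming a buffer measured in packets (the class name is mine, for illustration):

```python
from collections import deque

class DropTailFIFO:
    """FIFO scheduling with a drop-tail policy: arrivals are discarded
    when the buffer is full, regardless of flow or importance."""

    def __init__(self, capacity):
        self.capacity = capacity   # buffer space, in packets
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False           # drop-tail: discard the new arrival
        self.queue.append(packet)
        return True

    def dequeue(self):
        # FIFO: serve the oldest packet first
        return self.queue.popleft() if self.queue else None

q = DropTailFIFO(capacity=2)
print([q.enqueue(p) for p in ("a", "b", "c")])  # [True, True, False]
print(q.dequeue())                              # "a"
```

Swapping the drop policy (e.g. drop-front, random drop) would change only `enqueue`, while the FIFO scheduling in `dequeue` stays the same.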
Dimensions
Scheduling: single class, class-based queuing, per-connection state
Drop position: head, tail (FIFO's choice), random location
Drop time: early drop, overflow drop
..FIFO
FIFO + drop-tail is the simplest queuing algorithm
used widely in the Internet
Fair Queuing
Demers90
Fair Queuing
Main idea:
maintain a separate queue for each flow currently
flowing through router
router services queues in Round-Robin fashion
FQ Illustration
[Figure: flows 1..n each get a separate queue at the input (I/P); the output (O/P) serves the queues round-robin]
Variation: Weighted Fair Queuing (WFQ)
Some Issues
What constitutes a user?
Several granularities at which one can express flows
For now, assume flows at the granularity of a source-destination pair, but this assumption is not critical
Bit-by-bit RR
Router maintains local clock
Single flow: suppose the clock ticks each time a bit
is transmitted. For packet i:
Pi: length, Ai: arrival time, Si: begin-transmit
time, Fi: finish-transmit time. Fi = Si + Pi
Since Si = max(Fi-1, Ai): Fi = max(Fi-1, Ai) + Pi
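Under the slide's single-flow simplification (the clock just counts bits sent), the recurrence can be computed directly; the function name is mine:

```python
def finish_times(packets):
    """Bit-by-bit RR finish times for one flow.
    packets: list of (A_i, P_i) = (arrival time, length in bits).
    F_i = max(F_{i-1}, A_i) + P_i, with the clock ticking per bit."""
    F = 0
    out = []
    for A, P in packets:
        F = max(F, A) + P   # queued packet waits for F_{i-1}; after an
        out.append(F)       # idle gap, service restarts at A_i
    return out

# Back-to-back packets queue up (F grows by P each time); the third
# packet arrives after an idle gap, so its service starts at A=500.
print(finish_times([(0, 100), (0, 100), (500, 100)]))  # [100, 200, 600]
```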
Fair Queuing
While we cannot actually perform bit-by-bit
interleaving, we can compute Fi for each packet,
then use Fi to schedule packets:
transmit the earliest Fi first
[Figure: flow 1 holds packets with finish tags F=5 and F=8, flow 2 holds F=10; the output transmits in order of increasing F. A second panel shows a flow-1 packet with F=10 arriving while flow 2's F=2 packet is transmitting]
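The transmit-earliest-Fi rule can be sketched at the packet level (names are mine): keep a per-flow FIFO queue of finish tags and always send the packet with the smallest tag.

```python
import heapq

def fq_transmit_order(flows):
    """Emulated bit-by-bit RR: each packet carries a finish tag F;
    always transmit the head packet with the smallest F.
    flows: dict flow_id -> list of finish tags, in FIFO order."""
    heap = []   # (finish tag, flow id, index into that flow's queue)
    for fid, tags in flows.items():
        if tags:
            heapq.heappush(heap, (tags[0], fid, 0))
    order = []
    while heap:
        F, fid, i = heapq.heappop(heap)
        order.append((fid, F))
        if i + 1 < len(flows[fid]):   # expose the flow's next packet
            heapq.heappush(heap, (flows[fid][i + 1], fid, i + 1))
    return order

# Flow 1 holds F=5 and F=8, flow 2 holds F=10, as in the example.
print(fq_transmit_order({1: [5, 8], 2: [10]}))  # [(1, 5), (1, 8), (2, 10)]
```

Only the head of each flow's queue competes, which is why a heap over the per-flow heads suffices.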
Delay Allocation
Aim: give less delay to those using less than
their fair share
Advance finish times for sources whose
queues drain temporarily
Bi = Pi + max (Fi-1, Ai - d)
Schedule earliest Bi first
Allocate Promptness
Bi = Pi + max (Fi-1, Ai - d)
d gives added promptness:
if Ai < Fi-1, the conversation is active and d does
not affect it: Bi = Pi + Fi-1
if Ai > Fi-1, the conversation is inactive and d
determines how much history to take into
account
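A small numeric check of the bid formula (the function name is mine); it shows that d only matters once the conversation has gone inactive:

```python
def bid(P, F_prev, A, d):
    """B_i = P_i + max(F_{i-1}, A_i - d): d advances the bid of a
    flow whose queue drained (A_i > F_{i-1}), giving it less delay."""
    return P + max(F_prev, A - d)

# Active conversation (A < F_prev): d has no effect on the bid.
print(bid(P=100, F_prev=500, A=400, d=50))   # 600, with or without d
# Inactive conversation (A > F_prev): d advances the bid.
print(bid(P=100, F_prev=500, A=800, d=50))   # 850 instead of 900
```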
Notes on FQ
FQ is a scheduling policy, not a drop policy
Still achieves statistical multiplexing: one flow can fill
the entire pipe if there are no contenders; FQ is
work-conserving
More Notes on FQ
Router does not send explicit feedback to the source; still needs e2e congestion control
FQ isolates ill-behaved users by forcing users to share
overload with themselves
user: flow, transport protocol, etc
Congestion Avoidance
TCP's approach is reactive:
detect congestion after it happens
increase load, trying to maximize utilization, until loss
occurs
TCP has a congestion avoidance phase, but that's
different from what we're talking about here
Router Mechanisms
Congestion notification
the DEC-bit scheme
explicit congestion feedback to the source
Queue lengths
But which queue length (instantaneous, average)?
Surprisingly, simulations showed that to
maximize the power function
(power = throughput/delay):
use no hysteresis
use average queue length with a threshold of 1
Solution
Adaptive queue length estimation over busy/idle cycles
But need to account for long current busy periods
Sender Behavior
How often should the source change
window?
In response to what received information
should it change its window?
By how much should the source change its
window?
We already know the answer to this: AIMD
DEC-bit scheme uses a multiplicative factor of
0.875
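The AIMD update can be sketched as follows; the 50% mark threshold is DEC-bit's decision rule (from the scheme itself, not this slide), and the function name is mine:

```python
def decbit_update(cwnd, frac_marked, increase=1.0, decrease=0.875):
    """AIMD window update as in the DEC-bit scheme: additive increase
    when fewer than half the feedback bits are set, multiplicative
    decrease by the scheme's factor of 0.875 otherwise."""
    if frac_marked < 0.5:
        return cwnd + increase   # additive increase
    return cwnd * decrease       # multiplicative decrease

print(decbit_update(8.0, 0.2))   # 9.0 (few bits set: grow by 1)
print(decbit_update(8.0, 0.8))   # 7.0 (congested: shrink to 7/8)
```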
DEC-bit Evaluation
Aim:
keep throughput high and delay low
accommodate bursts
Other Options
Random drop:
packet arriving when queue is full causes some
random packet to be dropped
Drop front:
on full queue, drop packet at head of queue
Random drop and drop front solve the lockout problem but not the full-queues problem
RED Goals
Detect incipient congestion, allow bursts
Keep power (throughput/delay) high
keep average queue size low
assume hosts respond to lost packets
RED Operation
[Figure: drop probability vs. average queue length: P(drop) is 0 below minthresh, rises linearly to maxP at maxthresh, then jumps to 1.0]
Queue Estimation
Standard EWMA: avg <- (1 - wq)*avg + wq*qlen
Upper bound on wq depends on minth
want to set wq to allow a certain burst size
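The filter is one line of arithmetic; the example below uses wq = 0.002, the small weight suggested in the RED paper (the loop and values are illustrative, not from the slide):

```python
def ewma(avg, qlen, wq):
    """avg <- (1 - wq)*avg + wq*qlen: a small wq smooths out short
    bursts, so RED reacts to the long-term average, not transients."""
    return (1 - wq) * avg + wq * qlen

avg = 0.0
for qlen in [0, 10, 10, 10, 0, 0]:    # a short burst of 10 packets
    avg = ewma(avg, qlen, wq=0.002)
print(avg)   # the burst barely moves the average
```

This is exactly the wq/minth trade-off the slide mentions: the smaller wq is, the larger a burst can pass before avg crosses minth.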
Thresholds
minth determined by the utilization
requirement
Needs to be high for fairly bursty traffic
Packet Marking
Marking probability based on queue length
Pb = maxp(avg - minth) / (maxth - minth)
RED Algorithm
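A runnable sketch of the RED decision for one arriving packet, combining the EWMA estimate and the marking probability above. The count-based correction Pa = Pb/(1 - count*Pb), which spaces drops evenly, is from the RED paper; names and parameter values here are illustrative:

```python
import random

def red_arrival(avg, qlen, count, wq, minth, maxth, maxp,
                rng=random.random):
    """One RED arrival decision (a sketch, not the full published
    algorithm). Returns (drop, new_avg, new_count)."""
    avg = (1 - wq) * avg + wq * qlen              # EWMA queue estimate
    if avg < minth:
        return False, avg, -1                     # no congestion: accept
    if avg >= maxth:
        return True, avg, 0                       # past maxth: drop
    count += 1
    pb = maxp * (avg - minth) / (maxth - minth)   # linear base probability
    pa = pb / max(1 - count * pb, 1e-9)           # spread drops out evenly
    return (True, avg, 0) if rng() < pa else (False, avg, count)
```

With ECN-capable hosts the "drop" branch between the thresholds would mark the packet instead of discarding it.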
RED Variants
FRED: Fair Random Early Drop (SIGCOMM 1997)
maintain per-flow state only for active flows (ones
having packets in the buffer)
Assumptions:
we can monitor a flow's arrival rate
Unresponsive flows:
if the drop rate increases by a factor x, the arrival
rate should decrease by a factor of sqrt(x)
..Flows to Regulate
Flows using disproportionate bandwidth
assume additive-increase, multiplicative-decrease
(AIMD) flows only
assume cwnd = W at loss
it can be shown that: loss prob <= 8/(3W^2)
for segment size B:
tput < 0.75*W*B/RTT
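Plugging the two bounds into numbers makes the sqrt(x) rule above concrete (segment size and RTT below are illustrative, not from the slide):

```python
def tcp_friendly_tput(W, B, RTT):
    """tput < 0.75*W*B/RTT: an AIMD flow's window oscillates between
    W/2 and W, so it averages 3W/4 segments per RTT."""
    return 0.75 * W * B / RTT

W, B, RTT = 20, 1460, 0.1       # cwnd at loss, segment bytes, seconds
p_bound = 8 / (3 * W ** 2)      # loss prob <= 8/(3 W^2)
print(p_bound)                  # loss-probability bound
print(tcp_friendly_tput(W, B, RTT))   # throughput bound, bytes/sec
# Since tput grows with W and p ~ 1/W^2, tput ~ 1/sqrt(p): if the drop
# rate rises by a factor x, a conformant flow slows by a factor sqrt(x).
```

A flow sending faster than this bound at the observed drop rate is a candidate for regulation.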