Chapter 3: Transport Layer
Our goals:
• understand principles behind transport layer services:
  • multiplexing and demultiplexing
  • reliable data transfer
  • flow control
  • congestion control
• learn about transport layer protocols in the Internet:
  1. UDP: connectionless transport
  2. TCP: connection-oriented transport
  3. TCP congestion control
Multiplexing is to support multiple flows.
The network can damage pkts, lose pkts, duplicate pkts.
One of my favorite layers!!
3-1
Chapter 3 outline
• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  • segment structure
  • reliable data transfer
  • flow control
  • connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control
3-2
Transport services and protocols
• provide logical communication between app processes running on different hosts
• transport protocols run in end systems
  • send side: breaks app messages into segments, passes to network layer
  • rcv side: reassembles segments into messages, passes to app layer
• more than one transport protocol available to apps
  • Internet: TCP and UDP
[figure: end-to-end protocol stacks - the transport layer exists only in end hosts (application, transport, network, data link, physical); intermediate routers implement only network, data link, and physical layers]
3-3
Transport vs. network layer
• network layer: logical communication between hosts
• transport layer: logical communication between processes
  • relies on, enhances, network layer services
Household analogy: 12 kids sending letters to 12 kids
• processes = kids
• app messages = letters in envelopes
• hosts = houses
• transport protocol = Ann and Bill
• network-layer protocol = postal service
Another analogy:
1. Post office -> network layer
2. My wife -> transport layer
3-4
Internet transport-layer protocols
• reliable, in-order delivery (TCP)
  • congestion control (distributed control)
  • flow control
  • connection setup
• unreliable, unordered delivery: UDP
  • no-frills extension of “best-effort” IP
• services not available:
  • delay guarantees
  • bandwidth guarantees
  (research issues)
[figure: protocol stacks - transport runs in the two end hosts; segments cross several routers that implement only network, data link, and physical layers]
3-5
Multiplexing/demultiplexing
Multiplexing at send host: gathering data from multiple sockets, enveloping data with header (later used for demultiplexing)
Demultiplexing at rcv host: delivering received segments to correct socket
[figure: three hosts, each with application/transport/network/link/physical stacks; sockets (one per process, e.g. P1-P4) sit between the application processes - FTP on host 2, telnet on hosts 1 and 3 - and the transport layer, which demultiplexes arriving segments to the correct socket]
3-7
How demultiplexing works
• host receives IP datagrams
  • each datagram has source IP address, destination IP address
  • each datagram carries 1 transport-layer segment
  • each segment has source, destination port number (recall: well-known port numbers for specific applications)
• host uses IP addresses & port numbers to direct segment to appropriate socket
TCP/UDP segment format (32 bits wide): source port #, dest port #, other header fields, application data (message)
3-8
Connectionless demultiplexing
• Create sockets with port numbers (the original ports 99111/99222 exceed the 16-bit port limit of 65535; valid ports shown instead):
DatagramSocket mySocket1 = new DatagramSocket(9111);
DatagramSocket mySocket2 = new DatagramSocket(9222);
• UDP socket identified by two-tuple: (dest IP address, dest port number)
• When host receives UDP segment:
  • checks destination port number in segment
  • directs UDP segment to socket with that port number
• IP datagrams with different source IP addresses and/or source port numbers directed to same socket (this is how a system can serve multiple requests!!)
3-9
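The Java `DatagramSocket` calls above have direct analogues in other socket APIs. Below is a small self-contained Python sketch (an illustrative translation, not from the slides) showing port-based UDP demultiplexing: two sockets bound to different ports on localhost each receive only the datagram addressed to their port, even though both datagrams come from the same source socket.

```python
import socket

# Two "server" UDP sockets, each bound to its own port on localhost.
# The OS demultiplexes arriving datagrams by destination port alone.
sock1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock1.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
sock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock2.bind(("127.0.0.1", 0))
sock1.settimeout(5.0)
sock2.settimeout(5.0)
port1 = sock1.getsockname()[1]
port2 = sock2.getsockname()[1]

# One client socket sends to both destination ports.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"to-sock1", ("127.0.0.1", port1))
client.sendto(b"to-sock2", ("127.0.0.1", port2))

# Each datagram lands at the socket bound to its destination port,
# regardless of the (identical) source IP and source port.
msg1, addr1 = sock1.recvfrom(1024)
msg2, addr2 = sock2.recvfrom(1024)
print(msg1, msg2)

for s in (sock1, sock2, client):
    s.close()
```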
Connectionless demux (cont)
DatagramSocket serverSocket = new DatagramSocket(6428);
[figure: client IP: A (process P1) and client IP: B (process P1) both send to server IP: C (processes P3, P3). Segments: from A, SP: 9157, DP: 6428; from B, SP: 5775, DP: 6428; the server replies with SP: 6428 to DP: 9157 and DP: 5775. Demultiplexing is based on destination IP and port #.]
SP provides “return address”
Source IP and port # can be spoofed!!!!
3-10
Connection-oriented demux
• TCP socket identified by 4-tuple:
  • source IP address
  • source port number
  • dest IP address
  • dest port number
• recv host uses all four values to direct segment to appropriate socket
• Server host may support many simultaneous TCP sockets:
  • each socket identified by its own 4-tuple
• Web servers have different sockets for each connecting client
  • non-persistent HTTP will have different socket for each request
3-11
Connection-oriented demux (cont)
Demultiplexing key: (S-IP, SP#, D-IP, DP#)
[figure: server IP: C runs processes P1, P4 plus per-connection sockets (P3, P3). Client IP: A sends SP: 9157, DP: 80; client IP: B sends SP: 5775, DP: 80. Both carry DP: 80, yet they reach different sockets because the source IP/port in the 4-tuple differ.]
3-12
UDP: User Datagram Protocol
 “no frills,” “bare bones”
Internet transport
protocol
 “best effort” service,
UDP segments may be:
 lost
 delivered out of order
to app
 connectionless:
 no handshaking
between UDP sender,
receiver
 each UDP segment
handled independently
of others
[RFC 768]
Why is there a UDP?
 no connection
establishment (which can
add delay)
 simple: no connection
state at sender, receiver
 small segment header
 no congestion control:
UDP can blast away as
fast as desired
3-14
UDP: more
• often used for streaming multimedia apps
  • loss tolerant
  • rate sensitive
• other UDP uses
  • DNS
  • SNMP
  (When the network is stressed, you PRAY!)
• reliable transfer over UDP: add reliability at application layer
  • application-specific error recovery! (e.g., FTP based on UDP but with recovery)
UDP segment format (32 bits wide): source port #, dest port #, length (in bytes of UDP segment, including header), checksum, application data (message)
3-15
UDP checksum
Goal: detect “errors” (e.g., flipped bits) in
transmitted segment
Sender:
• treat segment contents as sequence of 16-bit integers
• checksum: addition (1’s complement sum) of segment contents
• sender puts checksum value into UDP checksum field
Receiver:
• compute checksum of received segment
• check if computed checksum equals checksum field value:
  • NO - error detected
  • YES - no error detected. But maybe errors nonetheless? More later ….
e.g.: 1+2+3 = 6, but so is 0+3+3 = 6 - different data can produce the same checksum
3-16
Internet Checksum Example
 Note

When adding numbers, a carryout from the
most significant bit needs to be added to the
result
• Example: add two 16-bit integers
              1110011001100110
              1101010101010101
  raw sum   1 1011101110111011  (carry out of the MSB)
  wraparound  1011101110111100  (sum after adding the carry back in)
  checksum    0100010001000011  (1’s complement of the sum)
3-17
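The carry-wraparound arithmetic above is easy to mechanize. A short Python sketch (illustrative; the function names are my own) that reproduces the slide's example:

```python
def ones_complement_sum16(words):
    """Add 16-bit words, folding any carry out of bit 16 back into the sum."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wraparound carry
    return total

def internet_checksum(words):
    """Internet checksum: 1's complement of the 1's-complement sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

# The two 16-bit integers from the example above:
a = 0b1110011001100110
b = 0b1101010101010101
s = ones_complement_sum16([a, b])
print(f"{s:016b}")                          # 1011101110111100 (after wraparound)
print(f"{internet_checksum([a, b]):016b}")  # 0100010001000011
```

A receiver verifies by summing data plus checksum: the result should be all 1s (0xFFFF).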
Principles of Reliable data transfer
• important in app., transport, link layers
• top-10 list of important networking topics!
[figure: the reliable-channel abstraction offered to the application vs. the unreliable channel underneath - this picture sets the scenario]
• characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)!
3-19
Reliable data transfer: getting started
rdt_send(): called from above (e.g., by app.); passes data to deliver to receiver upper layer
udt_send(): called by rdt (send side) to transfer packet over unreliable channel to receiver
rdt_rcv(): called when packet arrives on rcv side of channel
deliver_data(): called by rdt (receive side) to deliver data to upper layer
** Let us now look at the guts of these modules. Any questions?
3-20
(DON’T FALL ASLEEP!!!)
Reliable data transfer: getting started
We’ll:
• incrementally develop sender, receiver sides of reliable data transfer protocol (rdt)
• consider only unidirectional data transfer
  • but control info will flow in both directions!
• use finite state machines (FSM) to specify sender, receiver
state: when in this “state”, next state uniquely determined by next event
[FSM notation: an arrow from state 1 to state 2 labeled with the event causing the state transition and the actions taken on the transition]
Event: timer, receives message, …etc.
Action: executes a program, send message, …etc.
3-21
Rdt1.0:
reliable transfer over a reliable channel
• underlying channel perfectly reliable
  • no bit errors
  • no loss of packets
  (In reality, this is an unrealistic assumption, but..)
• separate FSMs for sender, receiver:
  • sender sends data into underlying channel
  • receiver reads data from underlying channel
sender FSM: [Wait for call from above] rdt_send(data): packet = make_pkt(data); udt_send(packet)
receiver FSM: [Wait for call from below] rdt_rcv(packet): extract(packet,data); deliver_data(data)
3-22
Rdt2.0: channel with bit errors
 underlying channel may flip bits in packet
 recall: UDP checksum to detect bit errors
 the question: how to recover from errors:
 acknowledgements (ACKs): receiver explicitly tells
sender that pkt received OK
 negative acknowledgements (NAKs): receiver explicitly
tells sender that pkt had errors
 sender retransmits pkt on receipt of NAK
• human scenarios using ACKs, NAKs?
  (ACK: “I love u” … “I love u 2”. NAK: “I love u” … “I don’t love u”.)
• new mechanisms in rdt2.0 (beyond rdt1.0):
  • error detection
  • receiver feedback: control msgs (ACK, NAK) rcvr -> sender
3-23
rdt2.0: FSM specification
sender FSM:
[Wait for call from above] rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt)
[Wait for ACK or NAK] rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt)
[Wait for ACK or NAK] rdt_rcv(rcvpkt) && isACK(rcvpkt): Λ (no action)
(A buffer is needed to store data from the application layer, or the call must block, while waiting for the ACK/NAK.)
receiver FSM:
[Wait for call from below] rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK)
[Wait for call from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt,data); deliver_data(data); udt_send(ACK)
3-24
rdt2.0: operation with no errors
(same FSMs as slide 3-24, tracing the no-error path: sender sends sndpkt; receiver finds it notcorrupt, extracts and delivers the data, and returns an ACK; sender sees isACK(rcvpkt) and moves on)
3-25
rdt2.0: error scenario
(same FSMs, tracing the error path: receiver finds rcvpkt corrupt and sends a NAK; sender, on rdt_rcv(rcvpkt) && isNAK(rcvpkt), retransmits sndpkt. GOT IT?)
3-26
rdt2.0 has a fatal flaw!
What happens if
ACK/NAK corrupted?
 sender doesn’t know what
happened at receiver!
 can’t just retransmit:
possible duplicate
What to do?
 sender ACKs/NAKs
receiver’s ACK/NAK?
What if sender ACK/NAK
lost?
 retransmit, but this might
cause retransmission of
correctly received pkt!
Handling duplicates:
 sender adds sequence
number to each pkt
 sender retransmits
current pkt if ACK/NAK
garbled
 receiver discards (doesn’t
deliver up) duplicate pkt
stop and wait protocol
Sender sends one packet,
then waits for receiver
response
3-27
rdt2.1: sender, handles garbled ACK/NAKs
[Wait for call 0 from above] rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
[Wait for ACK or NAK 0] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
[Wait for ACK or NAK 0] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ
[Wait for call 1 from above] rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt)
[Wait for ACK or NAK 1] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
[Wait for ACK or NAK 1] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ
THE FSM GETS MESSY!!!
3-28
rdt2.1: receiver, handles garbled ACK/NAKs
[Wait for 0 from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)
[Wait for 0 from below] rdt_rcv(rcvpkt) && corrupt(rcvpkt): sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
[Wait for 0 from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)   (duplicate: re-ACK, don’t deliver)
[Wait for 1 from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)
[Wait for 1 from below] rdt_rcv(rcvpkt) && corrupt(rcvpkt): sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
[Wait for 1 from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)   (duplicate: re-ACK, don’t deliver)
3-29
rdt2.1: discussion
Sender:
 seq # added to pkt
 two seq. #’s (0,1)
will suffice. Why?
 must check if
received ACK/NAK
corrupted
 twice as many states

state must “remember”
whether “current” pkt
has 0 or 1 seq. #
Receiver:
 must check if
received packet is
duplicate

state indicates
whether 0 or 1 is
expected pkt seq #
 note: receiver can
not know if its last
ACK/NAK received
OK at sender
3-30
rdt2.2: a NAK-free protocol
 same functionality as rdt2.1, using ACKs only
 instead of NAK, receiver sends ACK for last pkt
received OK

receiver must explicitly include seq # of pkt being ACKed
 duplicate ACK at sender results in same action as
NAK: retransmit current pkt
• This is important because TCP uses this approach (NO NAK).
3-31
rdt2.2: sender, receiver fragments
sender FSM fragment:
[Wait for call 0 from above] rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
[Wait for ACK 0] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): udt_send(sndpkt)
[Wait for ACK 0] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): Λ
receiver FSM fragment:
[Wait for 0 from below] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)   (re-send last ACK)
[Wait for 0 from below] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt): extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK0, chksum); udt_send(sndpkt)
3-32
rdt3.0: channels with errors and loss
New assumption: underlying channel can also lose packets (data or ACKs)
• checksum, seq. #, ACKs, retransmissions will be of help, but not enough
Q: how to deal with loss?
• sender waits until certain data or ACK lost, then retransmits
• yuck: drawbacks?
Approach: sender waits “reasonable” amount of time for ACK
• retransmits if no ACK received in this time
• if pkt (or ACK) just delayed (not lost):
  • retransmission will be duplicate, but use of seq. #’s already handles this
  • receiver must specify seq # of pkt being ACKed
• requires countdown timer
What is the “right value” for the timer? It depends on the flow and network conditions!
3-33
rdt3.0 sender
[Wait for call 0 from above] rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer
[Wait for call 0 from above] rdt_rcv(rcvpkt): Λ
[Wait for ACK0] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): Λ
[Wait for ACK0] timeout: udt_send(sndpkt); start_timer
[Wait for ACK0] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer
[Wait for call 1 from above] rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer
[Wait for call 1 from above] rdt_rcv(rcvpkt): Λ
[Wait for ACK1] rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): Λ
[Wait for ACK1] timeout: udt_send(sndpkt); start_timer
[Wait for ACK1] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer
3-34
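The rdt3.0 behavior (alternating sequence bit, retransmit on timeout, re-ACK duplicates) can be simulated in a few lines. A toy Python sketch, not a real implementation: loss is modeled by a random drop, and a "timeout" is simply the absence of an ACK for one send attempt.

```python
import random

def simulate_stop_and_wait(data, loss_prob=0.3, seed=1):
    """Toy rdt3.0: alternating-bit (0/1) stop-and-wait over a lossy channel.
    Both data packets and ACKs can be lost; a lost packet or ACK shows up
    to the sender as a 'timeout' (no ACK for this attempt), so it resends."""
    rng = random.Random(seed)
    delivered = []      # receiver's in-order output stream
    expected = 0        # seq bit the receiver expects next

    def lossy(pkt):
        """The unreliable channel: drops pkt with probability loss_prob."""
        return None if rng.random() < loss_prob else pkt

    for i, chunk in enumerate(data):
        seq = i % 2
        acked = False
        while not acked:                 # keep retransmitting until ACK(seq)
            pkt = lossy((seq, chunk))
            if pkt is not None:          # packet survived the channel
                rseq, rdata = pkt
                if rseq == expected:     # new pkt: deliver, flip expected bit
                    delivered.append(rdata)
                    expected = 1 - expected
                ack = lossy(rseq)        # (re-)ACK; the ACK may be lost too
            else:
                ack = None               # data lost -> no ACK -> timeout
            acked = (ack == seq)
    return delivered

print(simulate_stop_and_wait(["a", "b", "c", "d"]))   # ['a', 'b', 'c', 'd']
```

Despite losses, everything is delivered exactly once and in order - the sequence bit lets the receiver detect and discard duplicates while still re-ACKing them.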
rdt3.0 in action
Timer: tick,tick,…
3-35
rdt3.0 in action
Is it
necessary
to send
Ack1
again?
3-36
Performance of rdt3.0
• rdt3.0 works, but performance stinks
• example: 1 Gbps link, 15 ms e-e prop. delay, 1KB packet:
T_transmit = L (packet length in bits) / R (transmission rate, bps) = 8000 bits / 10^9 b/sec = 8 microsec
U_sender = (L/R) / (RTT + L/R) = .008 / 30.008 = 0.00027
• U_sender: utilization - fraction of time sender busy sending (dimensionless)
• 1KB pkt every 30 msec -> 33kB/sec thruput over 1 Gbps link
• network protocol limits use of physical resources!
3-37
rdt3.0: stop-and-wait operation
[timeline: sender / receiver]
first packet bit transmitted, t = 0
last packet bit transmitted, t = L / R
first packet bit arrives at receiver
last packet bit arrives, receiver sends ACK
(one RTT elapses)
ACK arrives, send next packet, t = RTT + L / R
U_sender = (L/R) / (RTT + L/R) = .008 / 30.008 = 0.00027
3-38
Pipelined protocols
Pipelining: sender allows multiple, “in-flight”,
yet-to-be-acknowledged pkts


range of sequence numbers must be increased
buffering at sender and/or receiver
• Two generic forms of pipelined protocols: Go-Back-N, selective repeat
3-39
Pipelining: increased utilization
[timeline: sender / receiver]
first packet bit transmitted, t = 0
last bit transmitted, t = L / R
first packet bit arrives
last packet bit arrives, send ACK
last bit of 2nd packet arrives, send ACK
last bit of 3rd packet arrives, send ACK
(one RTT elapses)
ACK arrives, send next packet, t = RTT + L / R
Increase utilization by a factor of 3!
U_sender = 3 * (L/R) / (RTT + L/R) = .024 / 30.008 = 0.0008
3-40
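The two utilization formulas can be checked numerically. A small Python sketch (illustrative; the function name is my own) using the slides' numbers: a 1 Gbps link, 30 ms RTT, and an 8000-bit (1 KB) packet.

```python
def sender_utilization(L_bits, R_bps, rtt_s, n_pipelined=1):
    """Fraction of time the sender is busy: n * (L/R) / (RTT + L/R)."""
    transmit = L_bits / R_bps             # time to push one packet onto the link
    return n_pipelined * transmit / (rtt_s + transmit)

L = 8000          # 1 KB packet = 8000 bits
R = 1e9           # 1 Gbps link
RTT = 0.030       # 30 ms round-trip time

u1 = sender_utilization(L, R, RTT)       # stop-and-wait: ~0.00027
u3 = sender_utilization(L, R, RTT, 3)    # 3 packets pipelined: ~0.0008
print(u1, u3)
```

Both match the slides' .008/30.008 and .024/30.008 (those are the same ratios expressed in milliseconds).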
DON’T FALL ASLEEP !!!!!
Go-Back-N (sliding window protocol)
Sender:
 k-bit seq # in pkt header
(For now, treat seq # as unlimited)
 “window” of up to N, consecutive unack’ed pkts allowed
 ACK(n): ACKs all pkts up to, including seq # n - “cumulative ACK”
Sender may receive duplicate ACKs (see receiver)
 timer for each in-flight pkt
 timeout(n): retransmit pkt n and all higher seq # pkts in window

Q: what happens when a receiver is totally disconnected?
MAX RETRY
3-41
GBN: sender extended FSM
initialization: base = 1; nextseqnum = 1
[Wait] rdt_send(data):
  if (nextseqnum < base+N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum)
      start_timer
    nextseqnum++
  }
  else
    refuse_data(data)   /* buffer data or block higher app. */
[Wait] timeout:
  start_timer
  udt_send(sndpkt[base]); udt_send(sndpkt[base+1]); … udt_send(sndpkt[nextseqnum-1])
[Wait] rdt_rcv(rcvpkt) && corrupt(rcvpkt): Λ
[Wait] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt) + 1
  if (base == nextseqnum)
    stop_timer          /* no pkt in pipe */
  else
    start_timer         /* reset timer */
3-42
GBN: receiver extended FSM
initialization: expectedseqnum = 1; sndpkt = make_pkt(expectedseqnum, ACK, chksum)
[Wait] rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
  extract(rcvpkt, data)
  deliver_data(data)
  sndpkt = make_pkt(expectedseqnum, ACK, chksum)
  udt_send(sndpkt)
  expectedseqnum++
[Wait] default: udt_send(sndpkt)
(If an in-order pkt is received, deliver to app and ack! Else, just drop it and re-send the last ACK.)
ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
• may generate duplicate ACKs
• need only remember expectedseqnum
• out-of-order pkt:
  • discard (don’t buffer) -> no receiver buffering!
  • re-ACK pkt with highest in-order seq #
3-43
GBN in action (window size N = 4)
What determines the size of the window?
1. RTT
2. Buffer at the receiver (flow control)
3. Network congestion
Q: GBN has poor performance. How?
Sender sends pkts 1,2,3,4,5,6,7,8,9…; pkt 1 got lost; receiver got pkts 2,3,4,5,… but will discard them!
3-44
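The receiver-side behavior that makes GBN wasteful - discarding out-of-order packets and re-sending duplicate cumulative ACKs - can be sketched as follows (illustrative Python; the function name is my own, and the demo assumes pkt 0 arrives first so there is always a "highest in-order" pkt to re-ACK):

```python
def gbn_receive(arriving_pkts):
    """GBN receiver sketch: deliver only in-order packets; for anything
    else, discard it and re-ACK the highest in-order seq # seen so far."""
    expected = 0
    delivered, acks = [], []
    for seq, data in arriving_pkts:
        if seq == expected:
            delivered.append(data)
            acks.append(seq)            # cumulative ACK for seq
            expected += 1
        else:
            acks.append(expected - 1)   # duplicate ACK; pkt is NOT buffered
    return delivered, acks

# Sender transmits pkts 0..5 but pkt 1 is lost in the network:
arrivals = [(0, "p0"), (2, "p2"), (3, "p3"), (4, "p4"), (5, "p5")]
delivered, acks = gbn_receive(arrivals)
print(delivered)   # ['p0'] -- p2..p5 are discarded, not buffered
print(acks)        # [0, 0, 0, 0, 0] -- duplicate ACKs for pkt 0
```

On timeout the GBN sender must therefore resend pkts 1 through 5, even though 2..5 already reached the receiver once - exactly the inefficiency selective repeat fixes.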
Selective Repeat (improvement of
the GBN Protocol)
 receiver individually acknowledges all correctly
received pkts


buffers pkts, as needed, for eventual in-order
delivery to upper layer
E.g., sender: pkt 1,2,3,4,….,10; receiver got
2,4,6,8,10. Sender resends 1,3,5,7,9.
 sender only resends pkts for which ACK not
received

sender timer for EACH unACKed pkt
 sender window
 N consecutive seq #’s
 again limits seq #s of sent, unACKed pkts
3-45
Selective repeat: sender, receiver windows
[figure: sender and receiver sliding windows over the sequence-number space]
Q: why do we keep the “already ACKed” part of the window? The ACK may be lost, or the ACK is still on its way.
3-46
Selective repeat
sender
• data from above: if next available seq # in window, send pkt
• timeout(n) for pkt n: resend pkt n, restart timer
• ACK(n) in [sendbase, sendbase+N]:
  • mark pkt n as received
  • if n smallest unACKed pkt, advance window base to next unACKed seq # (slide the window)
receiver
• pkt n in [rcvbase, rcvbase+N-1]:
  • send ACK(n)
  • out-of-order: buffer
  • in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
• pkt n in [rcvbase-N, rcvbase-1]:
  • ACK(n)
  (Q: why do we need this? The ACK got lost. The sender may time out and resend the pkt; we need to ACK it.)
• otherwise:
  • ignore
3-47
Selective repeat in action (N=4)
Under GBN, this
pkt will be
dropped.
3-48
Selective repeat: dilemma
In real life, we use k bits to implement seq #s. Practical issue:
Example:
• seq #’s: 0, 1, 2, 3
• window size (N) = 3
• receiver sees no difference in the two scenarios!
• incorrectly passes duplicate data as new in (a)
Q: what is the relationship between seq # size and window size?
N <= 2^k / 2
3-49
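The N <= 2^k/2 rule can be checked with a bit of arithmetic: if the receiver's next window wraps far enough around the sequence space to include seq # 0 again, a retransmitted old pkt 0 and a brand-new pkt 0 are indistinguishable. A simplified Python check (my own formulation of the overlap condition, not from the slides):

```python
def windows_ambiguous(seq_space, N):
    """True if, after the sender's first window [0..N-1] is sent and ACKed,
    the receiver's next window [N .. 2N-1] (mod seq_space) wraps around to
    include seq 0 -- so an old retransmission of pkt 0 collides with new data."""
    receiver_next = {(N + i) % seq_space for i in range(N)}
    return 0 in receiver_next

# k = 2 bits -> seq space of 4:
print(windows_ambiguous(4, 3))   # True  -- the dilemma on the slide (N = 3)
print(windows_ambiguous(4, 2))   # False -- N <= 2^k / 2 is safe
```

The boundary is exactly N = seq_space/2: any larger window lets the two scenarios on the slide look identical to the receiver.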
Why bother studying reliable data transfer?
• We know it is provided by TCP, so why bother to study it?
• Sometimes, we may need to implement “some form” of reliable transfer without the heavy-duty TCP.
• A good example is multimedia streaming. Even though the application is loss tolerant, if too many packets get lost, it affects the visual quality. So we may want to implement some form of reliable transfer.
• At the very least, appreciate the “good services” provided by some Internet gurus.
3-50
TCP: Overview   RFCs: 793, 1122, 1323, 2018, 2581
(The 800 lbs gorilla in the transport stack! PAY ATTENTION!!)
• point-to-point: one sender, one receiver (not multicast)
• reliable, in-order byte stream: no “message boundaries” (in the app layer, we need delimiters)
• pipelined: TCP congestion and flow control set window size
• send & receive buffers
• full duplex data: bi-directional data flow in same connection; MSS: maximum segment size
• connection-oriented: handshaking (exchange of control msgs) init’s sender, receiver state (e.g., buffer size) before data exchange
• flow controlled: sender will not overwhelm receiver
[figure: application writes data through the socket door into the TCP send buffer; TCP carries segments to the peer’s TCP receive buffer; the application reads data through its socket door]
3-52
TCP segment structure
[figure: TCP segment format, 32 bits wide]
• source port #, dest port #
• sequence number, acknowledgement number (counting by “bytes” of data, not segments!)
• head len, not-used bits, flags U A P R S F
  • URG: urgent data (generally not used)
  • ACK: ACK # valid
  • PSH: push data now (generally not used)
  • RST, SYN, FIN: connection estab (setup, teardown commands)
• checksum (Internet checksum, as in UDP)
• receive window (# bytes rcvr willing to accept)
• urg data pointer
• options (variable length; due to this field we have a variable-length header)
• application data (variable length)
3-53
TCP seq. #’s and ACKs
Seq. #’s:
• byte stream “number” of first byte in segment’s data
• (initial seq #s are negotiated during the 3-way handshake)
ACKs:
• seq # of next byte expected from other side
• cumulative ACK
Q: how receiver handles out-of-order segments
• A: TCP spec doesn’t say - up to implementor
[figure: simple telnet scenario - user at Host A types ‘C’; Host B ACKs receipt of ‘C’ and echoes back ‘C’; Host A ACKs receipt of the echoed ‘C’]
3-54
TCP Round Trip Time and Timeout
Q: how to set TCP timeout value?
• longer than RTT
  • but RTT varies
• too short: premature timeout
  • unnecessary retransmissions
• too long: slow reaction to segment loss, poor performance
Q: how to estimate RTT?
• SampleRTT: measured time from segment transmission until ACK receipt
  • ignore retransmissions
• SampleRTT will vary, want estimated RTT “smoother”
  • average several recent measurements, not just current SampleRTT
[figure: two timeout cases - EstimatedRTT too long (tx … ack arrives, retransmission needlessly late) and too short (tx, premature retx before the ack arrives)]
3-55
TCP Round Trip Time and Timeout
EstimatedRTT = (1- )*EstimatedRTT + *SampleRTT
 Exponential weighted moving average
 influence of past sample decreases exponentially
fast
 typical value:  = 0.125
ERTT(0) = 0
ERTT(1) = (1- )ERTT(0) + SRTT(1)= SRTT(1)
ERTT(2) =(1- ) SRTT(1) + SRTT(2)
ERTT(3) = (1- )(1- ) SRTT(1) + (1- ) SRTT(2) + SRTT(3)
3-56
Example RTT estimation:
[figure: RTT from gaia.cs.umass.edu to fantasia.eurecom.fr - SampleRTT (noisy, roughly 100-350 ms) and EstimatedRTT (smoothed) plotted against time in seconds]
3-57
TCP Round Trip Time and Timeout
Setting the timeout (by Jacobson/Karels)
• EstimatedRTT plus “safety margin”
  • large variation in EstimatedRTT -> larger safety margin
• first estimate of how much SampleRTT deviates from EstimatedRTT:
DevRTT = (1-β)*DevRTT + β*|SampleRTT-EstimatedRTT|
(typically, β = 0.25)
Then set timeout interval:
TimeoutInterval = EstimatedRTT + 4*DevRTT
3-58
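The three formulas above combine into a complete timeout estimator. A Python sketch under stated assumptions: the update order and the initialization of DevRTT to SampleRTT/2 follow a common convention (e.g. RFC 6298) that the slide does not specify.

```python
def rtt_estimator(samples, alpha=0.125, beta=0.25):
    """Jacobson/Karels-style estimator from the slides:
      EstimatedRTT = (1-alpha)*EstimatedRTT + alpha*SampleRTT
      DevRTT       = (1-beta)*DevRTT + beta*|SampleRTT - EstimatedRTT|
      TimeoutInterval = EstimatedRTT + 4*DevRTT
    Initialization (assumed, not from the slide): first sample seeds
    EstimatedRTT; DevRTT starts at half of it."""
    est = samples[0]
    dev = samples[0] / 2
    for s in samples[1:]:
        dev = (1 - beta) * dev + beta * abs(s - est)   # update dev with old est
        est = (1 - alpha) * est + alpha * s
    return est, dev, est + 4 * dev

# Times in milliseconds; the 300 ms outlier widens the safety margin:
est, dev, timeout = rtt_estimator([100, 120, 110, 300, 105])
print(round(est, 1), round(dev, 1), round(timeout, 1))
```

Note how a single delayed sample barely moves EstimatedRTT (weight 0.125) but inflates DevRTT, and through the 4*DevRTT term, the timeout.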
TCP reliable data transfer
 TCP creates rdt
service on top of IP’s
unreliable service
 Pipelined segments
(for performance)
 Cumulative acks
 TCP uses single
retransmission timer
 Retransmissions are
triggered by:


timeout events
duplicate ack ( for
performance reason)
 Initially consider
simplified TCP
sender:


ignore duplicate acks
ignore flow control,
congestion control
3-60
TCP sender events:
data rcvd from app:
 Create segment with
seq #
 seq # is byte-stream
number of first data
byte in segment
 start timer if not
already running (think
of timer as for
oldest unacked
segment)
 expiration interval:
TimeOutInterval
timeout:
 retransmit segment
that caused timeout
 restart timer
Ack rcvd:
 If acknowledges
previously unacked
segments


update what is known
to be acked
start timer if there
are outstanding
segments
3-61
TCP sender (simplified)
NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum
loop (forever) {
  switch(event)
  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)
  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer
  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */
Comment: SendBase-1 = last cumulatively ack’ed byte.
Example: SendBase-1 = 71; y = 73, so the rcvr wants 73+; y > SendBase, so that new data is acked.
3-62
TCP: retransmission scenarios
[figure: lost ACK scenario - Host A’s segment is ACKed by Host B but the ACK is lost (X); the Seq=92 timeout fires and A retransmits; SendBase = 100, then 120 once the retransmission is ACKed]
[figure: premature timeout - the ACKs are merely delayed; the Seq=92 timeout fires and causes an unnecessary retransmission; SendBase = 100, then 120]
3-63
TCP retransmission scenarios (more)
[figure: cumulative ACK scenario - Host A sends two segments; the first ACK is lost (X) but the second, cumulative ACK arrives before the timeout, so nothing is retransmitted; SendBase = 120. Room for improvement?]
3-64
TCP ACK generation [RFC 1122, RFC 2581]
Event at Receiver -> TCP Receiver action:
• Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed -> Delayed ACK: wait up to 500 ms for next segment; if no next segment, send ACK
• Arrival of in-order segment with expected seq #; one other segment has ACK pending -> Immediately send single cumulative ACK, ACKing both in-order segments (ack the “largest in-order byte” seq #)
• Arrival of out-of-order segment with higher-than-expected seq. #; gap detected -> Immediately send duplicate ACK, indicating seq. # of next expected byte
• Arrival of segment that partially or completely fills gap -> Immediately send ACK, provided that segment starts at lower end of gap
3-65
Fast Retransmit
• Time-out period often relatively long:
  • long delay before resending lost packet
• Detect lost segments via duplicate ACKs:
  • sender often sends many segments back-to-back
  • if a segment is lost, there will likely be many duplicate ACKs
• If sender receives 3 ACKs for the same data, it supposes that the segment after the ACKed data was lost:
  • fast retransmit: resend segment before timer expires
3-66
Fast retransmit algorithm:
event: ACK received, with ACK field value of y
  if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                  /* a duplicate ACK for already ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3) {
      resend segment with sequence number y   /* fast retransmit */
    }
  }
Q: why resend pkt with seq # y?
A: That is what the receiver expects!
3-67
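The dup-ACK counting in the algorithm above can be sketched as a small function (illustrative Python; simplified in that it only tracks the counter and window base, none of TCP's other state):

```python
def ack_events(acks, send_base):
    """Process a stream of ACK values; return the seq #s that get
    fast-retransmitted after their 3rd duplicate ACK (simplified sketch)."""
    dup_count = {}
    retransmitted = []
    for y in acks:
        if y > send_base:                  # new data acked: slide the window
            send_base = y
            dup_count.clear()
        else:                              # duplicate ACK for acked data
            dup_count[y] = dup_count.get(y, 0) + 1
            if dup_count[y] == 3:          # 3 dup ACKs -> fast retransmit y
                retransmitted.append(y)
    return retransmitted

# Receiver keeps asking for byte 100 -> 3 dup ACKs -> resend segment 100,
# well before the retransmission timer would have fired:
print(ack_events([100, 100, 100, 100, 200], send_base=100))   # [100]
```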
TCP Flow Control
• receive side of TCP connection has a receive buffer:
  • app process may be slow at reading from buffer
flow control: sender won’t overflow receiver’s buffer by transmitting too much, too fast
• speed-matching service: matching the send rate to the receiving app’s drain rate
3-69
TCP Flow control: how it works
(Suppose TCP receiver discards out-of-order segments)
• spare room in buffer:
  RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
• Rcvr advertises spare room by including value of RcvWindow in segments
• Sender limits unACKed data to RcvWindow
  • guarantees receive buffer doesn’t overflow
This goes to show that the design process of the header is important!!
3-70
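The RcvWindow formula is simple enough to compute directly (illustrative Python; the buffer size and byte counts below are made-up numbers, not from the slides):

```python
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room the receiver advertises back to the sender:
    RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# 64 KB buffer; 48000 bytes have arrived, the app has read 16000 of them,
# so 32000 bytes still occupy the buffer:
print(rcv_window(65536, 48000, 16000))   # 33536 bytes of spare room
```

As the application drains the buffer (LastByteRead grows), the advertised window reopens; if the app stops reading, the window shrinks toward 0 and the sender must throttle.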
TCP Connection Management
Recall: TCP sender, receiver establish “connection” before exchanging data segments
• initialize TCP variables:
  • seq. #s
  • buffers, flow control info (e.g. RcvWindow)
• client: connection initiator
  Socket clientSocket = new Socket("hostname","port number");
• server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();
Three way handshake:
Step 1: client host sends TCP SYN segment to server
• specifies initial seq #
• no data
Step 2: server host receives SYN, replies with SYN-ACK segment
• server allocates buffers
• specifies server initial seq. #
Step 3: client receives SYN-ACK, replies with ACK segment, which may contain data
TCP three-way handshake
[figure: client sends “connection request” (SYN); server replies “connection granted” (SYN-ACK); client completes with ACK]
3-73
TCP Connection Management (cont.)
Closing a connection:
client closes socket: clientSocket.close();
Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
[figure: client/server timeline - client sends FIN (close); server ACKs, then later sends its own FIN (close); client enters timed wait; server closed]
Q: why don’t we combine the ACK and FIN?
The server may still have some data in the pipeline!
3-74
TCP Connection Management (cont.)
Step 3: client receives FIN, replies with ACK.
 Enters "timed wait" - will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.
Note: with small modification, can handle simultaneous FINs.
[Figure: client and server pass through closing states; client sits in timed wait before both sides reach closed]
3-75
TCP Connection Management (cont)
[Figure: TCP client lifecycle and TCP server lifecycle state diagrams]
3-76
Chapter 3 outline
 3.1 Transport-layer
services
 3.2 Multiplexing and
demultiplexing
 3.3 Connectionless
transport: UDP
 3.4 Principles of
reliable data transfer
 3.5 Connection-
oriented transport:
TCP




segment structure
reliable data transfer
flow control
connection management
 3.6 Principles of
congestion control
 3.7 TCP congestion
control
3-77
Principles of Congestion Control
TCP provides one of the MANY WAYS to perform CC.
Congestion:
 informally: “too many sources sending too much
data too fast for network to handle”
 different from flow control!
 manifestations:
 lost packets (buffer overflow at routers)
 long delays (queueing in router buffers)
 another top-10 problem!
3-78
Causes/costs of congestion: scenario 1
 two senders, two receivers (homogeneous)
 one router, infinite buffers, capacity C
 no retransmission
[Figure: Host A and Host B send λin (original data, from application) into unlimited shared output link buffers; λout delivered to application]
Costs:
 large delays when congested
 maximum achievable throughput
3-79
Causes/costs of congestion: scenario 2
 one router, finite buffers
 sender retransmission of lost packet
[Figure: Host A, Host B; λin : original data; λ'in : original data plus retransmitted data (due to transport layer); finite shared output link buffers; pkt got dropped; λout]
3-80
Causes/costs of congestion: scenario 2
 always: λin = λout (goodput)
 "perfect" retransmission only when loss: λ'in > λout
 retransmission of delayed (not lost) packet makes λ'in larger (than perfect case) for same λout
"costs" of congestion:
 more work (retrans) for given "goodput"
 unneeded retransmissions: link carries multiple copies of pkt
3-81
Causes/costs of congestion: scenario 3
 four senders
 multihop paths
 timeout/retransmit
Q: what happens as λin and λ'in increase?
[Figure: Host A, Host B; λin : original data; λ'in : original data plus retransmitted data; finite shared output link buffers; λout]
3-82
Causes/costs of congestion: scenario 3
[Figure: λout vs λ'in for Host A and Host B - as offered load grows, throughput eventually collapses]
System collapses (e.g., students)
Another "cost" of congestion:
 when packet dropped, any "upstream" transmission capacity used for that packet was wasted!
3-83
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
 no explicit feedback from network
 congestion inferred from end-system observed loss, delay
 approach taken by TCP
Network-assisted congestion control:
 routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate sender should send at
3-84
Case study: ATM ABR congestion control
ABR: available bit rate:
 "elastic service"
 if sender's path "underloaded":
  sender should use available bandwidth
 if sender's path congested:
  sender throttled to minimum guaranteed rate
RM (resource management) cells:
 sent by sender, interspersed with data cells
 bits in RM cell set by switches ("network-assisted")
  NI bit: no increase in rate (mild congestion)
  CI bit: congestion indication
 RM cells returned to sender by receiver, with bits intact
3-85
Case study: ATM ABR congestion control
 two-byte ER (explicit rate) field in RM cell
  congested switch may lower ER value in cell
  sender's send rate thus set to minimum supportable rate on path
 EFCI (explicit forward congestion indication) bit in cells: set to 1 in congested switch
  if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell
3-86
Chapter 3 outline
 3.1 Transport-layer
services
 3.2 Multiplexing and
demultiplexing
 3.3 Connectionless
transport: UDP
 3.4 Principles of
reliable data transfer
 3.5 Connection-
oriented transport:
TCP




segment structure
reliable data transfer
flow control
connection management
 3.6 Principles of
congestion control
 3.7 TCP congestion
control
3-87
TCP Congestion Control
 end-end control (no network assistance!!!)
 sender limits transmission:
  LastByteSent - LastByteAcked  CongWin
 Roughly,
  rate = CongWin / RTT  Bytes/sec
 CongWin is dynamic, function of perceived network congestion
How does sender perceive congestion?
 loss event = timeout or 3 duplicate acks
 TCP sender reduces rate (CongWin) after loss event
three mechanisms:
 AIMD
 slow start
 conservative after timeout events
Note: CC must be efficient to make use of available BW !!!!
3-88
TCP AIMD
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
multiplicative decrease: cut CongWin in half after loss event
Human analogy: HK government or CUHK !!
Role of ACK:
1. An indication of loss
2. Ack is self clocking: if delay is large, the rate at which the congestion window grows will be reduced.
[Figure: sawtooth of congestion window (8, 16, 24 Kbytes) over time for a long-lived TCP connection]
3-89
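The sawtooth in the figure can be reproduced with a toy AIMD trace (a sketch in window units, not real TCP; the loss rounds are chosen arbitrarily for illustration):

```python
# Additive increase: +1 MSS per RTT; multiplicative decrease: halve on loss.
def aimd(rounds, loss_rounds, start=1):
    w, trace = start, []
    for r in range(rounds):
        if r in loss_rounds:
            w = max(1, w // 2)   # multiplicative decrease, floor at 1 MSS
        else:
            w = w + 1            # additive increase: probe for bandwidth
        trace.append(w)
    return trace

trace = aimd(10, {5})            # one loss event at round 5
# window climbs linearly, halves once, then climbs again (the sawtooth)
```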
TCP Slow Start
 When connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps
 available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to respectable rate
 When connection begins, increase rate exponentially fast until first loss event
3-90
TCP Slow Start (more)
 When connection begins, increase rate exponentially until first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received
 Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one, then two, then four segments in successive RTTs; time runs downward at both hosts]
3-91
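A short sketch showing why "+1 MSS per ACK" is the same as "double every RTT", plus the slide's initial-rate arithmetic (window in MSS units; an illustration, not an implementation):

```python
def slow_start(rtts, congwin=1):
    history = [congwin]
    for _ in range(rtts):
        acks = congwin       # every segment sent this RTT gets one ACK
        congwin += acks      # +1 MSS per ACK  =>  window doubles per RTT
        history.append(congwin)
    return history

growth = slow_start(4)       # 1, 2, 4, 8, 16 segments per RTT

# Slide 3-90's example: MSS = 500 bytes, RTT = 200 msec
initial_rate_bps = 500 * 8 / 0.2   # 20000 bps = 20 kbps
```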
Refinement
 After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
  This is known as the "fast recovery" phase. Implemented in TCP Reno.
 But after timeout event:
  CongWin instead set to 1 MSS;
  window then grows exponentially to a threshold, then grows linearly
Philosophy:
 3 dup ACKs indicates network capable of delivering some segments
 timeout before 3 dup ACKs is "more alarming"
3-92
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation:
 Variable Threshold
 At loss event, Threshold is set to 1/2 of CongWin just before loss event
[Figure: congestion window size (segments) vs transmission round, with the threshold marked; after a loss, TCP Tahoe restarts from 1 MSS while TCP Reno resumes at half the old window]
3-93
Summary: TCP Congestion Control
 When CongWin is below Threshold, sender in
slow-start phase, window grows exponentially.
 When CongWin is above Threshold, sender is
in congestion-avoidance phase, window grows
linearly.
 When a triple duplicate ACK occurs, Threshold
set to CongWin/2 and CongWin set to
Threshold.
 When timeout occurs, Threshold set to
CongWin/2 and CongWin is set to 1 MSS.
READ THE BOOK on TCP-Vegas: instead of
reacting to a loss, anticipate & prepare for a loss!
3-94
TCP sender congestion control

 Event: ACK receipt for previously unacked data; State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: Resulting in a doubling of CongWin every RTT
 Event: ACK receipt for previously unacked data; State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT
 Event: Loss event detected by triple duplicate ACK; State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.
 Event: Timeout; State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: Enter slow start
 Event: Duplicate ACK; State: SS or CA
  Action: Increment duplicate ACK count for segment being acked
  Commentary: CongWin and Threshold not changed
3-95
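The event table above maps almost line-for-line onto a small state machine. This is a teaching sketch (CongWin in MSS units, fast recovery's window inflation omitted, and the initial Threshold of 64 MSS is an arbitrary choice for illustration):

```python
MSS = 1  # count the window in segments

class RenoSender:
    def __init__(self):
        self.congwin, self.threshold, self.state = 1 * MSS, 64 * MSS, "SS"

    def new_ack(self):                      # ACK for previously unacked data
        if self.state == "SS":
            self.congwin += MSS             # doubles CongWin every RTT
            if self.congwin > self.threshold:
                self.state = "CA"
        else:                               # congestion avoidance
            self.congwin += MSS * MSS / self.congwin  # +1 MSS per RTT

    def triple_dup_ack(self):               # fast recovery
        self.threshold = self.congwin / 2
        self.congwin = max(self.threshold, 1 * MSS)
        self.state = "CA"

    def timeout(self):                      # more alarming: back to slow start
        self.threshold = self.congwin / 2
        self.congwin = 1 * MSS
        self.state = "SS"
```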
TCP throughput
 What's the average throughput of TCP as a function of window size and RTT?
  Ignore slow start
 Let W be the window size when loss occurs.
 When window is W, throughput is W/RTT
 Just after loss, window drops to W/2, throughput to W/2RTT.
 Average throughput: .75 W/RTT
3-96
TCP Futures
 Example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
 Requires window size W = 83,333 in-flight segments
 Throughput in terms of loss rate:

  Throughput = 1.22 * MSS / (RTT * sqrt(L))

 ➜ L = 2·10^-10  Wow
 New versions of TCP for high-speed needed!
3-97
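Inverting the throughput formula for the slide's numbers reproduces the startling loss-rate requirement (units here are bits and seconds, one consistent choice among several that work):

```python
import math

def required_loss_rate(mss_bytes, rtt_s, throughput_bps):
    # Throughput = 1.22*MSS/(RTT*sqrt(L))  =>  L = (1.22*MSS/(RTT*Thr))^2
    return (1.22 * mss_bytes * 8 / (rtt_s * throughput_bps)) ** 2

L = required_loss_rate(1500, 0.100, 10e9)
# roughly 2e-10: one loss per ~5 billion segments
```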
TCP Fairness
Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]
3-98
Why is TCP fair?
Two competing, homogeneous sessions (e.g., similar propagation delay, etc.):
 Additive increase gives slope of 1, as throughput increases
 multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs Connection 2 throughput, both up to R; the trajectory alternates between congestion avoidance (additive increase) and loss (window decreased by factor of 2), oscillating ever closer to the equal bandwidth share line]
3-99
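The phase-plot argument can be simulated directly. A sketch with two synchronized flows (both see every loss and add 1 unit per round; real flows are messier): the additive step preserves the gap between the flows while each halving halves it, so the shares converge toward R/2 each.

```python
def converge(x1, x2, R, steps=200):
    for _ in range(steps):
        if x1 + x2 > R:                # bottleneck overflows: both halve
            x1, x2 = x1 / 2, x2 / 2    # multiplicative decrease halves the gap
        else:                          # additive increase keeps the gap fixed
            x1, x2 = x1 + 1, x2 + 1
    return x1, x2

x1, x2 = converge(1.0, 70.0, 100.0)    # wildly unequal starting shares
gap = abs(x1 - x2)                     # after 200 rounds the gap is tiny
```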
Fairness (more)
Fairness and UDP
 Multimedia apps often do not use TCP
  do not want rate throttled by congestion control
 Instead use UDP:
  pump audio/video at constant rate, tolerate packet loss
 Research area: TCP friendly
Fairness and parallel TCP connections
 nothing prevents app from opening parallel cnctions between 2 hosts.
 Web browsers do this
 Example: link of rate R supporting 9 connections;
  new app asks for 1 TCP, gets rate R/10
  new app asks for 11 TCPs, gets around R/2 !
3-100
Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Delay is influenced by:
 TCP connection establishment
 data transmission delay
 slow start
 Sender's congestion window
Notation, assumptions:
 Assume one link between client and server of rate R
 S: MSS (bits)
 O: object size (bits)
 no retransmissions (no loss, no corruption)
 Protocol's overhead is negligible.
Window size:
 First assume: fixed congestion window, W segments
 Then dynamic window, modeling slow start
3-101
Fixed congestion window (1): assume no congestion window constraint
Let W denote a 'fixed' congestion window size (a positive integer).
Object request is piggybacked.
First case:
 WS/R > RTT + S/R: ACK for first segment in window returns before window's worth of data sent

 delay = 2RTT + O/R

This is the "lower bound" of latency!
3-102
Fixed (or static) congestion window (2)
Second case:
 WS/R < RTT + S/R: wait for ACK after sending window's worth of data

 delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]

where K is the number of windows of data that cover the object.
If O is the "size" of the object, we have K = O/WS (rounded up to an integer).
3-103
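Both static-window cases fit in one small function (a sketch; O and S in bits, R in bits/sec, RTT in seconds, matching the slide's notation):

```python
import math

def static_window_delay(O, S, R, rtt, W):
    K = math.ceil(O / (W * S))          # windows needed to cover the object
    base = 2 * rtt + O / R              # the lower bound on latency
    stall = S / R + rtt - W * S / R     # wait per window for the first ACK
    if stall <= 0:
        return base                     # case 1: ACKs arrive in time, no idling
    return base + (K - 1) * stall       # case 2: server idles K-1 times

# small window => stalls; large window => lower-bound latency
d_small = static_window_delay(42880, 4288, 1e6, 0.1, W=2)
d_large = static_window_delay(42880, 4288, 1e6, 0.1, W=100)
```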
TCP Delay Modeling: Slow Start (1)
Now suppose window grows according to slow start
Will show that the delay for one object is:

  Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at server:

  P = min {Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size.
- and K is the number of windows that cover the object.
3-104
TCP Delay Modeling: Slow Start (2)
Delay components:
 2 RTT for connection establishment and request
 O/R to transmit object
 time server idles due to slow start
Server idles: P = min{K-1, Q} times
Example:
 O/S = 15 segments
 K = 4 windows
 Q = 2
 P = min{K-1, Q} = 2
 Server idles P = 2 times
[Figure: client initiates TCP connection and requests object; server sends first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; object delivered; time axes at client and server]
3-105
TCP Delay Modeling (3)

  S/R + RTT = time from when server starts to send segment until server receives acknowledgement

  2^(k-1) S/R = time to transmit the kth window, where k = 1, 2, ..., K

  [S/R + RTT - 2^(k-1) S/R]+ = idle time after the kth window (taking the positive part)

  delay = O/R + 2 RTT + sum_{p=1..P} idleTime_p
        = O/R + 2 RTT + sum_{k=1..P} [S/R + RTT - 2^(k-1) S/R]
        = O/R + 2 RTT + P [RTT + S/R] - (2^P - 1) S/R

[Figure: same slow-start timeline - first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; object delivered; time axes at client and server]
3-106
TCP Delay Modeling (4)
Recall K = number of windows that cover object
How do we calculate K?

  K = min {k : 2^0 S + 2^1 S + ... + 2^(k-1) S >= O}
    = min {k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S}
    = min {k : 2^k - 1 >= O/S}
    = min {k : k >= log2(O/S + 1)}
    = ceil( log2(O/S + 1) )

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
3-107
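Slides 3-104 through 3-107 combine into one computable latency function. A sketch (O and S in bits, R in bps, RTT in seconds). Note the closed form used for Q is not given on the slides - it is derived here by asking when the idle time [S/R + RTT - 2^(k-1) S/R] stays positive, so treat it as an assumption:

```python
import math

def slow_start_latency(O, S, R, rtt):
    K = math.ceil(math.log2(O / S + 1))        # windows covering the object
    Q = math.ceil(math.log2(1 + rtt * R / S))  # idles for an infinite object
    P = min(Q, K - 1)                          # actual number of server idles
    return 2 * rtt + O / R + P * (rtt + S / R) - (2**P - 1) * S / R

# S = 536 bytes = 4288 bits, RTT = 100 msec, O = 100 kbytes = 800000 bits
lat = slow_start_latency(800000, 4288, 28000, 0.1)   # slow 28 kbps link
```

On a slow link the result sits just above the O/R + 2RTT floor, because the ACKs return quickly and the sender rarely idles.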
Experiment A
S = 536 bytes, RTT = 100 msec, O = 100 kbytes (relatively large)

R        | O/R      | P | Min latency O/R + 2RTT | Latency with slow start
28 kbps  | 28.6 sec | 1 | 28.8 sec               | 28.9 sec
100 kbps | 8.0 sec  | 2 | 8.2 sec                | 8.4 sec
1 Mbps   | 800 msec | 5 | 1.0 sec                | 1.5 sec
10 Mbps  | 80 msec  | 7 | 0.28 sec               | 0.98 sec

1. Slow start adds appreciable delay only when R is high. If R is low, ACK comes back quickly and TCP quickly ramps up to its maximum rate.
3-108
Experiment B
S = 536 bytes, RTT = 100 msec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency O/R + 2RTT | Latency with slow start
28 kbps  | 1.43 sec | 1 | 1.63 sec               | 1.73 sec
100 kbps | 0.4 sec  | 2 | 0.6 sec                | 0.76 sec
1 Mbps   | 40 msec  | 3 | 0.24 sec               | 0.52 sec
10 Mbps  | 4 msec   | 3 | 0.20 sec               | 0.50 sec

1. Slow start adds appreciable delay when R is high and for a relatively small object.
3-109
Experiment C
S = 536 bytes, RTT = 1 sec, O = 5 kbytes (relatively small)

R        | O/R      | P | Min latency O/R + 2RTT | Latency with slow start
28 kbps  | 1.43 sec | 3 | 3.4 sec                | 5.8 sec
100 kbps | 0.4 sec  | 3 | 2.4 sec                | 5.2 sec
1 Mbps   | 40 msec  | 3 | 2.0 sec                | 5.0 sec
10 Mbps  | 4 msec   | 3 | 2.0 sec                | 5.0 sec

1. Slow start can significantly increase the latency when the object size is relatively small and the RTT is relatively large.
3-110
HTTP Modeling
 Assume Web page consists of:
  1 base HTML page (of size O bits)
  M images (each of size O bits)
 Non-persistent HTTP:
  M+1 TCP connections in series
  Response time = (M+1)O/R + (M+1)2RTT + sum of idle times
 Persistent HTTP:
  2 RTT to request and receive base HTML file
  1 RTT to request and receive M images
  Response time = (M+1)O/R + 3RTT + sum of idle times
 Non-persistent HTTP with X parallel connections
  Suppose M/X integer.
  1 TCP connection for base file
  M/X sets of parallel connections for images.
  Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
3-111
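The three response-time formulas above can be compared numerically. A sketch that ignores the "sum of idle times" term (so the numbers are floors, not the full slow-start values plotted on the next slides):

```python
def nonpersistent(M, O, R, rtt):
    return (M + 1) * O / R + (M + 1) * 2 * rtt

def persistent(M, O, R, rtt):
    return (M + 1) * O / R + 3 * rtt

def parallel_nonpersistent(M, X, O, R, rtt):
    assert M % X == 0                 # slide assumes M/X is an integer
    return (M + 1) * O / R + (M // X + 1) * 2 * rtt

# slide parameters: O = 5 Kbytes, M = 10, X = 5, RTT = 100 msec, R = 1 Mbps
O, R, rtt = 5 * 8000, 1e6, 0.1
np_t = nonpersistent(10, O, R, rtt)            # 2.64 s: RTTs dominate
p_t  = persistent(10, O, R, rtt)               # 0.74 s
pnp_t = parallel_nonpersistent(10, 5, O, R, rtt)  # 1.04 s
```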
HTTP Response time (in seconds)
RTT = 100 msec, O = 5 Kbytes, M = 10 and X = 5
[Figure: bar chart of response time for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps]
For low bandwidth, connection & response time dominated by transmission time.
Persistent connections only give minor improvement over parallel connections.
3-112
HTTP Response time (in seconds)
RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5
[Figure: bar chart of response time for non-persistent, persistent, and parallel non-persistent HTTP at 28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps]
For larger RTT, response time dominated by TCP establishment & slow start delays. Persistent connections now give important improvement: particularly in high delay-bandwidth networks.
3-113
Chapter 3: Summary
 principles behind
transport layer services:
 multiplexing,
demultiplexing
 reliable data transfer
 flow control
 congestion control
 instantiation and
implementation in the
Internet
 UDP
 TCP
Next:
 leaving the
network “edge”
(application,
transport layers)
 into the network
“core”
3-114