Queueing analysis of a feedback-controlled (TCP/IP) network
Gaurav Raina (Cambridge)
Damon Wischik (UCL)
Mark Handley (UCL)
Some Internet History
• 1974: First draft of TCP/IP
“A protocol for packet network intercommunication”,
Vint Cerf and Robert Kahn
• 1983: ARPANET switches on TCP/IP
• 1986: Congestion collapse
• 1988: Congestion control for TCP
“Congestion avoidance and control”, Van Jacobson
“A Brief History of the Internet”, the Internet Society
Sizing router buffers  SIGCOMM 2004
Guido Appenzeller, Isaac Keslassy, Nick McKeown
Stanford University
Abstract. All Internet routers contain buffers to hold packets during times of congestion. Today, the size of
the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is
to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure
its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B =
RTT*C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate
of the link. For example, a 10Gb/s router linecard needs approximately 250ms*10Gb/s = 2.5Gbits of
buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging
for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long,
have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the
rule-of-thumb (B = RTT*C) is now outdated and incorrect for backbone routers. This is because of the
large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory,
simulation and experiments on a network of real routers, we show that a link with N flows requires no
more than B = (RTT*C)/√N, for long-lived or short-lived TCP flows. The consequences on router design
are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible
difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering,
which can easily be implemented using fast, on-chip SRAM.
http://tiny-tera.stanford.edu/~nickm/papers/index.html
[Figure: TCP — bandwidth (0–100 kB/sec) against time (0–8 sec)]
// ACK-processing excerpt from a TCP sender (fast retransmit / fast recovery logic):
// seqno is the cumulative acknowledgement just received, now is the current time.
if (seqno > _last_acked) {                  // the ACK acknowledges new data
    if (!_in_fast_recovery) {               // normal case
        _last_acked = seqno;
        _dupacks = 0;
        inflate_window();                   // grow cwnd (slow start / congestion avoidance)
        send_packets(now);
        _last_sent_time = now;
        return;
    }
    if (seqno < _recover) {                 // partial ACK during fast recovery
        uint32_t new_data = seqno - _last_acked;
        _last_acked = seqno;
        if (new_data < _cwnd) _cwnd -= new_data; else _cwnd = 0;
        _cwnd += _mss;
        retransmit_packet(now);             // retransmit the next missing segment
        send_packets(now);
        return;
    }
    // full ACK: leave fast recovery and deflate the window
    uint32_t flightsize = _highest_sent - seqno;
    _cwnd = min(_ssthresh, flightsize + _mss);
    _last_acked = seqno;
    _dupacks = 0;
    _in_fast_recovery = false;
    send_packets(now);
    return;
}
if (_in_fast_recovery) {                    // duplicate ACK while recovering
    _cwnd += _mss;                          // window inflation
    send_packets(now);
    return;
}
_dupacks++;
if (_dupacks != 3) {                        // not the third duplicate ACK: nothing special
    send_packets(now);
    return;
}
// third duplicate ACK: fast retransmit, halve the window, enter fast recovery
_ssthresh = max(_cwnd / 2, (uint32_t)(2 * _mss));
retransmit_packet(now);
_cwnd = _ssthresh + 3 * _mss;
_in_fast_recovery = true;
_recover = _highest_sent;
How TCP shares capacity
[Figure: individual flow bandwidths and the sum of flow bandwidths over time, against the available bandwidth]
Macroscopic description of TCP
• Let x be the mean bandwidth of a flow [pkts/sec]
Let RTT be the flow’s round-trip time [sec]
Let p be the packet loss probability
• The TCP algorithm increases x at rate 1/RTT² [pkts/sec²]
and reduces x by x/2 for every packet loss
• average increase in rate = average decrease in rate:
1/RTT² = (p·x)·(x/2)   (losses arrive at rate p·x; each costs x/2)
• Consider a link with N identical flows
Let NC be the capacity of the link [pkts/sec]
• packet loss ratio = fraction of work that exceeds service rate:
p = (Nx − NC)⁺/Nx = (x − C)⁺/x
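The two relations pin down an equilibrium: the balance equation alone gives the familiar inverse-square-root law x = √2/(RTT·√p), and combining it with the loss law p = (x − C)⁺/x fixes both x and p for a given C·RTT. The sketch below solves this fixed point by bisection; the parameter values (RTT = 1 s, C·RTT ∈ {4, 20, 100} pkts) are chosen only to mirror the fixed-point-analysis plot that follows, and the code itself is illustrative rather than taken from the talk.

// Equilibrium of the macroscopic TCP model on a single congested link:
//   balance:   1/RTT^2 = p * x^2 / 2
//   loss law:  p = (x - C)+ / x
// Solved for x > C by bisection on f(x) = (x - C) * x / 2 - 1/RTT^2.
#include <cstdio>
#include <cmath>
#include <algorithm>

int main() {
    const double RTT = 1.0;                        // round-trip time [sec]
    const double bdps[] = {4.0, 20.0, 100.0};      // C*RTT [pkts], as in the plot below
    for (double bdp : bdps) {
        double C = bdp / RTT;                      // per-flow capacity [pkts/sec]
        double lo = C, hi = C + 10.0 / RTT;        // the root lies a little above C
        for (int i = 0; i < 100; ++i) {
            double mid = 0.5 * (lo + hi);
            double f = std::max(mid - C, 0.0) * mid / 2.0 - 1.0 / (RTT * RTT);
            if (f < 0.0) lo = mid; else hi = mid;
        }
        double x = 0.5 * (lo + hi);
        double p = (x - C) / x;
        std::printf("C*RTT=%5.0f pkts:  x/C=%.3f  p=%.2e  log10(p)=%.2f\n",
                    bdp, x / C, p, std::log10(p));
    }
    return 0;
}

In this sketch, larger C·RTT gives a smaller equilibrium loss probability and utilization closer to 1.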
Fixed-Point Models for the End-to-End
Performance Analysis of IP Networks ITC 2000
RJ Gibbens, SK Sargood, C Van Eijl, FP Kelly,
H Azmoodeh, RN Macfadyen, NW Macfadyen
Statistical Laboratory, Cambridge; and BT, Adastral Park
Abstract. This paper presents a new approach to modeling end-to-end performance for IP
networks. Unlike earlier models, in which end stations generate traffic at a constant rate, the
work discussed here takes the adaptive behaviour of TCP/IP into account. The approach is
based on a fixed-point method which determines packet loss, link utilization and TCP
throughput across the network. Results are presented for an IP backbone network, which
highlight how this new model finds the natural operating point for TCP, which depends on
route lengths (via round-trip times and number of resources), end-to-end packet loss and the
number of user sessions.
http://www.statslab.cam.ac.uk/~frank/PAPERS/fpmee.html
Fixed-point analysis
[Figure: log10 of packet-loss probability (−4 to −1) against traffic intensity x/C (0.5 to 2), with curves labelled C*RTT = 4, 20 and 100 pkts]
Queue simulations
What’s the queueing theory behind p = (x − C)⁺/x?
Where does buffer size come in?
Simulate a queue (a code sketch of this simulation follows the figure)
• fed by N Poisson flows, each of rate x pkts/sec (x = 0.95 then 1.05 pkts/sec)
• served at rate NC (C = 1 pkt/sec)
• with buffer size √N·B (B = 3 pkts)
[Figure: queue size (0–90 pkts) and arrival rate x against time (0–80 sec), one panel each for N = 50, 100, 500 and 1000]
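A minimal sketch of the simulation just described: aggregate Poisson arrivals over the N flows each time step, deterministic service at rate NC, and a drop-tail buffer of √N·B packets. The step size, the time at which the per-flow rate switches from 0.95 to 1.05, and the random seed are assumptions; the slide does not give them.

// Drop-tail queue fed by N Poisson flows of rate x pkts/sec each, served at
// rate N*C (C = 1 pkt/sec), with buffer sqrt(N)*B (B = 3 pkts), as on the slide.
#include <cstdio>
#include <cmath>
#include <random>
#include <algorithm>

int main() {
    const double C = 1.0, B = 3.0, dt = 0.01;        // dt is an assumption
    const int Ns[] = {50, 100, 500, 1000};
    std::mt19937 rng(1);
    for (int N : Ns) {
        const double buf = std::sqrt((double)N) * B; // buffer [pkts]
        double q = 0.0, dropped = 0.0, arrived = 0.0;
        for (double t = 0.0; t < 80.0; t += dt) {
            double x = (t < 40.0) ? 0.95 : 1.05;     // per-flow arrival rate
            std::poisson_distribution<int> arr(N * x * dt);
            double a = arr(rng);                     // aggregate arrivals this step
            arrived += a;
            q += a;
            if (q > buf) { dropped += q - buf; q = buf; }   // drop-tail overflow
            q = std::max(0.0, q - N * C * dt);              // deterministic service
        }
        std::printf("N=%5d  buffer=%5.1f pkts  overall loss ratio=%.4f\n",
                    N, buf, dropped / arrived);
    }
    return 0;
}

As N grows with the buffer scaling only like √N, the loss during the underloaded half (x = 0.95) vanishes while the loss during the overloaded half (x = 1.05) approaches (x − C)⁺/x, which is the sense in which the loss law used above emerges from the queue.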
A Poisson Limit for Buffer Overflow Probabilities  INFOCOM 2002
Jin Cao, Kavita Ramanan
Bell Labs
Abstract. A key criterion in the design of high-speed networks is the probability that the
buffer content exceeds a given threshold. We consider n independent traffic sources
modelled as point processes, which are fed into a link with speed proportional to n. Under
fairly general assumptions on the input processes we show that the steady state probability
of the buffer content exceeding a threshold b>0 tends to the corresponding probability
assuming Poisson input processes. We verify the assumptions for a large class of long-range
dependent sources commonly used to model data traffic. Our results show that with
superposition, significant multiplexing gains can be achieved for even smaller buffers than
suggested by previous results, which consider O(n) buffer size. Moreover, simulations show
that for realistic values of the exceedance probability and moderate utilisations, convergence
to the Poisson limit takes place at reasonable values of the number of sources superposed.
This is particularly relevant for high-speed networks in which the cost of high-speed
memory is significant.
http://www.ieee-infocom.org/2002/papers/655.pdf
Mean-field limit
• Consider a link with N flows,
capacity NC and buffer √N·B
• Let x_t be the average bandwidth at time t
Let p_t be the packet loss probability at time t
• As N → ∞ we believe a mean-field limit holds.
Mean-field limit
• Fluid-based Analysis of a Network of AQM
Routers Supporting TCP Flows with an
Application to RED SIGCOMM 2000
Vishal Misra, Wei-Bo Gong, Don Towsley
• Rate-based versus queue-based models of
congestion control ACM Sigmetrics 2004
Supratim Deb, R. Srikant
• Mean field convergence of a rate model of
multiple TCP connections through a buffer
implementing RED To appear in Annals of Applied Probability
David McDonald, Julien Reynier
Stability/instability of fluid model
[Figure: arrival rate x/C (0.6–1.4) against time (0–100), two panels]
• For some values of C*RTT,
the differential equation is stable
• For others it is unstable and there are oscillations
(i.e. the flows are partially synchronized)
• When it is unstable,
we can calculate the amplitude of the oscillations
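A sketch of the kind of delayed fluid model behind plots like these, assembled from the macroscopic description above: x increases at rate 1/RTT², and loss feedback arriving one round-trip late removes x/2 per lost packet, with p taken from the small-buffer loss law p = (x − C)⁺/x. The exact equation, the Euler scheme and the parameters (RTT = 1 s, C·RTT = 20 pkts, 1 ms step) are illustrative assumptions; the talk's own fluid models are in the references listed under "Mean-field limit".

// Delayed fluid model for the per-flow rate x(t):
//   dx/dt = 1/RTT^2 - p(t-RTT) * x(t-RTT) * x(t) / 2,  p = (x - C)+ / x.
// Forward-Euler integration; the one-RTT feedback delay is handled by
// indexing into the stored trajectory.
#include <cstdio>
#include <vector>
#include <algorithm>

int main() {
    const double RTT = 1.0;                 // round-trip time [sec]
    const double C   = 20.0;                // per-flow capacity [pkts/sec], so C*RTT = 20 pkts
    const double dt  = 0.001;               // Euler step [sec] (assumption)
    const int    lag = (int)(RTT / dt);     // feedback delay in steps
    const int    steps = (int)(100.0 / dt); // simulate 100 sec, as on the slide

    std::vector<double> x;
    x.reserve(steps + 1);
    x.push_back(0.9 * C);                   // start a little below capacity
    for (int n = 0; n < steps; ++n) {
        double xd = x[std::max(0, n - lag)];            // x(t - RTT)
        double pd = std::max(xd - C, 0.0) / xd;         // p(t - RTT)
        double next = x[n] + dt * (1.0 / (RTT * RTT) - pd * xd * x[n] / 2.0);
        x.push_back(std::max(next, 1e-6));              // keep the rate positive
        if (n % (10 * lag) == 0)
            std::printf("t=%5.1f  x/C=%.3f\n", n * dt, x[n] / C);
    }
    return 0;
}

Varying C (and hence C·RTT) changes whether the trajectory of x/C settles down or keeps oscillating, which is the stability/instability distinction in the bullets above.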
Instability plot
[Figure: log10 of packet-loss probability (−4 to −1) against traffic intensity x/C (0.5 to 2), with curves labelled C*RTT = 4, 20 and 100 pkts]
Illustration: 20 flows
Standard TCP, single bottleneck link, no AQM
service C=60 pkts/sec/flow, RTT=200 ms, #flows N=20
B = 20 pkts (Kelly rule)
B = 54 pkts (Stanford rule)
B = 240 pkts (rule of thumb)
Illustration: 200 flows
Standard TCP, single bottleneck link, no AQM
service C=60 pkts/sec/flow, RTT=200 ms, #flows N=200
B = 20 pkts (Kelly rule)
B = 170 pkts (Stanford rule)
B = 2,400 pkts (rule of thumb)
Illustration: 2000 flows
Standard TCP, single bottleneck link, no AQM
service C=60 pkts/sec/flow, RTT=200 ms, #flows N=2000
B = 20 pkts (Kelly rule)
B = 537 pkts (Stanford rule)
B = 24,000 pkts (rule of thumb)
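The buffer sizes quoted on these three slides follow directly from the rules: rule of thumb B = N·C·RTT, Stanford rule B = N·C·RTT/√N, and a small flat buffer (20 pkts here) for the Kelly rule. A quick check, reproducing the numbers above:

// Buffer sizes for the illustration slides: per-flow service C = 60 pkts/sec,
// RTT = 200 ms, N flows sharing one bottleneck link of capacity N*C.
#include <cstdio>
#include <cmath>

int main() {
    const double C = 60.0, RTT = 0.2;             // pkts/sec per flow, sec
    const int    Ns[] = {20, 200, 2000};
    const double kelly = 20.0;                    // Kelly rule: small constant buffer [pkts]
    for (int N : Ns) {
        double thumb    = N * C * RTT;            // rule of thumb: bandwidth*delay
        double stanford = thumb / std::sqrt((double)N);  // Stanford rule
        std::printf("N=%4d  rule of thumb=%6.0f  Stanford=%4.0f  Kelly=%3.0f pkts\n",
                    N, thumb, stanford, kelly);
    }
    return 0;
}

This prints 240/54/20, 2400/170/20 and 24000/537/20 pkts for N = 20, 200 and 2000, matching the slides.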
Alternative buffer-sizing rules
[Figure: log10 of packet-loss probability against traffic intensity x/C (0.5 to 1.5), one panel per buffer-sizing rule:
• Stanford rule, buffer = bandwidth*delay / sqrt(#flows), or rule of thumb, no AQM, buffer = bandwidth*delay (curves b25, b100, b400)
• Rule of thumb with RED, buffer = bandwidth*delay × {¼, 1, 4}
• Kelly rule, no AQM, buffer = {10, 20, 50} pkts (curves b10, b20, b50)
• Kelly rule, no AQM, Scalable TCP, buffer = {50, 1000} pkts (curves b50, b1000)]
Scalable TCP: improving performance in
highspeed wide area networks SIGCOMM CCR 2003
Tom Kelly
CERN -- IT division
Abstract. TCP congestion control can perform badly in highspeed wide area networks because of its slow
response with large congestion windows. The challenge for any alternative protocol is to better utilize
networks with high bandwidth-delay products in a simple and robust manner without interacting badly with
existing traffic. Scalable TCP is a simple sender-side alteration to the TCP congestion window update
algorithm. It offers a robust mechanism to improve performance in highspeed wide area networks using
traditional TCP receivers. Scalable TCP is designed to be incrementally deployable and behaves identically
to traditional TCP stacks when small windows are sufficient. The performance of the scheme is evaluated
through experimental results gathered using a Scalable TCP implementation for the Linux operating system
and a gigabit transatlantic network. The preliminary results gathered suggest that the deployment of
Scalable TCP would have negligible impact on existing network traffic at the same time as improving bulk
transfer performance in highspeed wide area networks.
http://www-lce.eng.cam.ac.uk/~ctk21/scalable/
Rate control in communication networks: shadow
prices, proportional fairness and stability
Journal of the Operational Research Society, 1998
F.P.Kelly, A.K.Maulloo, D.K.H.Tan
Statistical Laboratory, Cambridge
Abstract. This paper analyses the stability and fairness of two classes of rate control algorithm for
communication networks. The algorithms provide natural generalizations to large-scale networks of simple
additive increase/multiplicative decrease schemes, and are shown to be stable about a system optimum
characterized by a proportional fairness criterion. Stability is established by showing that, with an
appropriate formulation of the overall optimization problem, the network's implicit objective function
provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The
network's optimization problem may be cast in primal or dual form: this leads naturally to two classes of
algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit
rates based on shadow prices. Both classes of algorithm may be generalized to include routing control, and
provide natural implementations of proportionally fair pricing.
http://www.statslab.cam.ac.uk/~frank/rate.html
Teleological description
• Consider several TCP flows sharing a single link
• Let x_r be the mean bandwidth of flow r [pkts/sec]
Let y be the total bandwidth of all flows [pkts/sec]
Let C be the total available capacity [pkts/sec]
• TCP and the network act so as to solve
maximise Σ_r U(x_r) − P(y, C)
over x_r ≥ 0, where y = Σ_r x_r
[Figure: the utility function U(x) against x, and the penalty function P(y, C) against y, with the capacity C marked]
Teleological description
• With a smaller buffer, TCP and the network act so as to solve the same problem with a modified penalty:
maximise Σ_r U(x_r) − P′(y, C)
over x_r ≥ 0, where y = Σ_r x_r
• By reducing buffer size, we increase the penalty for high utilization.
[Figure: U(x) against x as before, and the modified penalty P′(y, C) against y, with the capacity C marked]
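One way to connect the teleological and macroscopic descriptions (a standard reading, not spelled out on the slides): at the optimum each flow's marginal utility equals the marginal penalty ∂P/∂y, and this marginal penalty can be identified with the packet loss probability p. Substituting the equilibrium balance from the macroscopic description then recovers TCP's implicit utility function.

U′(x_r) = ∂P(y, C)/∂y = p          (first-order condition, for every flow with x_r > 0)
1/RTT² = p·x²/2   ⇒   p = 2/(RTT²·x²)          (macroscopic balance at equilibrium)
⇒  U′(x) = 2/(RTT²·x²),   so   U(x) = −2/(RTT²·x)   (up to an additive constant)

Since this U is concave, the optimization problem has a well-defined optimum; whether the dynamics actually attain it is the stability question treated earlier, and echoed in the conclusion.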
Conclusion
• Analysis:
– Use fixed-point model to find the equilibrium point;
– Find a mean-field limit, and calculate how stable it is.
• Three rules for choosing buffer size
lead to three different mean-field limits.
– Rule of thumb e.g. 10 Gbytes
– Stanford rule e.g. 100 Mbytes
– Kelly rule e.g. 20 kbytes
• The network acts to solve an optimization problem.
– It may or may not attain the solution.
– We can choose which optimization problem,
by choosing the right buffer size & changing TCP’s code.