High Performance Networking with Little or No Buffers
Yashar Ganjali, High Performance Networking Group, Stanford University
Joint work with: Guido Appenzeller, Ashish Goel, Tim Roughgarden, Nick McKeown
May 5, 2005
[email protected] | http://www.stanford.edu/~yganjali

Motivation: Networks with Little or No Buffers
- Problem: Internet traffic doubles every year, and there is a growing disparity between traffic growth and router growth (space, power, cost).
- Possible solution: all-optical networking.
- Consequence: large capacity means large traffic, but little or no buffering.

Which would you choose?
- DSL Router 1: $50, 4 x 10/100 Ethernet ports, 1.5 Mb/s DSL connection, 1 Mbit of packet buffer.
- DSL Router 2: $55, 4 x 10/100 Ethernet ports, 1.5 Mb/s DSL connection, 4 Mbit of packet buffer.
- Bigger buffers are better.

What we learn in school
- Packet switching is good.
- Long-haul links are expensive.
- Statistical multiplexing allows efficient sharing of long-haul links.
- Packet switching requires buffers.
- Packet loss is bad, so use big buffers.
- Luckily, big buffers are cheap.

Statistical Multiplexing
Observations:
1. The bigger the buffer, the lower the packet loss.
2. If the buffer never goes empty, the outgoing line is busy 100% of the time.

What we learn in school: Queueing Theory
For an M/M/1 queue with load ρ, the mean occupancy is E[X] = ρ / (1 - ρ) and the tail is P[X ≥ k] = ρ^k, so the loss rate falls geometrically with buffer size.
[Plot: loss rate vs. buffer size]
Observations:
1. We can pick a buffer size for a given loss rate.
2. The loss rate falls fast with increasing buffer size.
3. Bigger is better.

What we learn in school
- Moore's Law: memory is plentiful and halves in price every 18 months.
- A 1 Gbit memory holds 500k packets and costs $25.
- Conclusion: make buffers big; choose the $55 DSL router.

Why bigger isn't better
- Network users don't like buffers.
- Network operators don't like buffers.
- Router architects don't like buffers.
- We don't need big buffers.
- We'd often be better off with smaller ones.

Backbone Router Buffers
[Diagram: source → router → destination over a bottleneck link of capacity C and two-way propagation delay 2T]
- Universally applied rule of thumb: a router needs a buffer of size B = 2T × C, where 2T is the two-way propagation delay and C is the capacity of the bottleneck line.
- Context: mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines; usually referenced to Villamizar and Song, "High Performance TCP in ANSNET", CCR, 1994.
- Already known to the inventors of TCP [Van Jacobson, 1988].
- Has major consequences for router design.
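The rule of thumb is easy to sanity-check numerically. The sketch below is not part of the original slides: it evaluates B = 2T × C for an assumed 10 Gb/s line card and an assumed 250 ms round-trip propagation delay (values chosen so the result matches the 2.5 Gbit figure quoted later in the talk), and for comparison it previews the 2T × C / √n rule that the talk derives a few slides below.

```python
import math

def rule_of_thumb_bits(rtt_s: float, capacity_bps: float) -> float:
    """Classic rule of thumb: buffer = two-way propagation delay x bottleneck capacity."""
    return rtt_s * capacity_bps

def small_buffer_bits(rtt_s: float, capacity_bps: float, n_flows: int) -> float:
    """Revised rule from the talk: the rule-of-thumb buffer divided by sqrt(n)."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)

# Assumed parameters, chosen to reproduce the 10 Gb/s line-card example in the talk.
rtt = 0.250        # 2T: 250 ms two-way propagation delay (assumption)
capacity = 10e9    # C:  10 Gb/s bottleneck link
n = 200_000        # number of concurrent long-lived TCP flows

print(f"2T*C         = {rule_of_thumb_bits(rtt, capacity) / 1e9:.1f} Gbit")   # ~2.5 Gbit
print(f"2T*C/sqrt(n) = {small_buffer_bits(rtt, capacity, n) / 1e6:.0f} Mbit")  # ~6 Mbit
```

With the 40 Gb/s, 40,000-flow example the same arithmetic gives 10 Gbit and 50 Mbit respectively, matching the Impact on Router Design slide.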
Review: TCP Congestion Control
Rule for adjusting the window W (only W packets may be outstanding):
- If an ACK is received: W ← W + 1/W
- If a packet is lost: W ← W/2

Review: TCP Congestion Control (cont.)
[Plot: window size over time for a source-destination pair, oscillating between Wmax and Wmax/2 (the TCP sawtooth)]

Buffer Size in the Core
[Plot: probability distribution of buffer occupancy between 0 and B]

Backbone router buffers
It turns out that the rule of thumb is wrong for core routers today. The required buffer is 2T × C / √n instead of 2T × C, where n is the number of long-lived flows.

Required Buffer Size
[Plot: simulation results compared against 2T × C / √n]

Validation
Theoretical results validated by:
- Thousands of ns2 simulations
- Network lab (Cisco routers) at the University of Wisconsin
- Stanford University dorm traffic
- Internet2 experiments
- Ongoing work with network operators and router vendors

Impact on Router Design
- 10 Gb/s linecard with 200,000 x 56 kb/s flows:
  - Rule of thumb: buffer = 2.5 Gbits, which requires external, slow DRAM.
  - Becomes: buffer = 6 Mbits, which fits in on-chip, fast SRAM; completion time is halved for short flows.
- 40 Gb/s linecard with 40,000 x 1 Mb/s flows:
  - Rule of thumb: buffer = 10 Gbits.
  - Becomes: buffer = 50 Mbits.

How small can buffers be?
Imagine you want to build an all-optical router for a backbone network, and you can only buffer a few dozen packets in delay lines.
- Conventional wisdom: it's a routing problem (hence deflection routing, burst switching, etc.).
- Our belief: first, think about congestion control.

TCP with ALMOST No Buffers
[Plot] Utilization of the bottleneck link = 75%.

Two Concurrent TCP Flows
[Plot]

TCP Throughput with Small Buffers
[Plot: TCP throughput vs. number of flows]

TCP Reno Performance
Buffer size = 10 packets; load = 80%.
[Plot: throughput vs. bottleneck capacity (Mb/s)]

The chasm between theory and practice
Theory (benign conditions), for an M/M/1 queue with E[X] = ρ / (1 - ρ) and P[X ≥ k] = ρ^k:
- ρ = 50%: E[X] = 1 packet, P[X > 10] < 10^-3
- ρ = 75%: E[X] = 3 packets, P[X > 10] < 0.06
Practice: a typical OC192 router linecard buffers over 2,000,000 packets.
Can we make the traffic arriving at the routers Poisson "enough" to get most of the benefit?

Ideal Solution
If packets are spaced out perfectly, and the starting times of flows are chosen randomly, then we only need a small buffer for contention resolution.

Pacing
We need to break up bursts.
- Modify TCP: instead of sending packets when you receive ACKs, send packets at a fixed rate of CWND/RTT (see the sketch after this slide).
- Or rely on network properties:
  - Access links throttle the flows to a low rate.
  - Core-to-access bandwidth ratios exceed 1000:1.
  - TCP's window size is limited today.
If these properties make the flows look Poisson, then with only 5-10 packets of buffering we can get 70-80% throughput.
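As a rough illustration of the "modify TCP" option above, the sketch below spreads one congestion window's worth of packets evenly over an RTT (a fixed inter-packet gap of RTT/CWND) instead of sending them back-to-back as ACKs arrive. It is only a sketch with assumed parameter values, not the actual paced-TCP implementation used in the experiments.

```python
import time

def send_paced_window(cwnd_packets: int, rtt_s: float, send_packet) -> None:
    """Send one window's worth of packets at rate cwnd/RTT, i.e. with a fixed
    gap of RTT/cwnd between packets, rather than in an ACK-clocked burst."""
    gap = rtt_s / cwnd_packets
    for seq in range(cwnd_packets):
        send_packet(seq)
        time.sleep(gap)  # a real stack would use a timer, not a blocking sleep

# Illustrative use with assumed values: a 20-packet window and a 100 ms RTT.
if __name__ == "__main__":
    send_paced_window(cwnd_packets=20, rtt_s=0.100,
                      send_packet=lambda seq: print(f"sent packet {seq}"))
```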
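The M/M/1 figures quoted on the "chasm between theory and practice" slide can also be checked directly. The snippet below is an editorial aside, and it assumes the quoted tail probability is evaluated as P[X > 10] = ρ^11.

```python
# Check of the M/M/1 numbers from the "chasm between theory and practice" slide.
for rho in (0.50, 0.75):
    mean_occupancy = rho / (1 - rho)   # E[X] = rho / (1 - rho)
    tail = rho ** 11                   # P[X > 10] = P[X >= 11] = rho^11
    print(f"rho = {rho:.2f}: E[X] = {mean_occupancy:.0f} packets, P[X > 10] = {tail:.4f}")
# rho = 0.50: E[X] = 1 packet,  P[X > 10] ~ 0.0005 (< 10^-3)
# rho = 0.75: E[X] = 3 packets, P[X > 10] ~ 0.0422 (< 0.06)
```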
What we know so far about very small buffers
- Theory:
  - Arbitrary injection process: any rate > 0 needs unbounded buffers.
  - Poisson process with load < 1: need a buffer of approximately O(log D + log W), i.e. 20-30 packets (D = number of hops, W = window size).
  - Complete centralized control: constant-fraction throughput with constant buffers [Leighton 1999].
- Experiment: with TCP pacing, results are as good as or better than for Poisson traffic.

CWND: Reno vs. Paced TCP
[Plot]

TCP Reno: Throughput vs. Buffer Size
[Plot]

Paced TCP: Throughput vs. Buffer Size
[Plot]

Early results
Congested core router with 10 packet buffers. Average offered load = 80%; RTT = 100 ms; each flow limited to 2.5 Mb/s.
[Diagram: sources on >10 Gb/s access links feeding a router with a 10 Gb/s link to a server]

Slow access links, lots of flows, 10 packet buffers
Congested core router with 10 packet buffers. RTT = 100 ms; each flow limited to 2.5 Mb/s.
[Diagram: sources on 5 Mb/s access links feeding a router with a 10 Gb/s link to a server]

Conclusion
- We can reduce 1,000,000-packet buffers to 10,000 packets today.
- We can probably reduce them to 10-20 packets:
  - With many small flows, no change is needed.
  - With some large flows, pacing is needed in the access routers or at the edge devices.
- We need more experiments.

Experiments
- Performance measurement with small (thousands of packets) and tiny (tens of packets) buffers.
- Metrics: link utilization (goodput/throughput), drops, buffer occupancy, etc.
- Data gathered for minutes to days; high load (50-70% utilization) is better.

Thank you! Questions?