Latency Across Networks
Goals
• Build intuition for how packets flow from clients across networks to servers and back again
• Quantify how much latency each component contributes
• Sketch a few subtleties
  - Multi-pathed environments
  - Transaction vs streaming applications
  - Frame loss due to queue overflow
This presentation focuses on Network issues (aka Transport or Ethernet/IP)
© Stuart Kendrick 2012 All rights reserved
Me
[email protected]
http://www.skendric.com
Deep Infrastructure | Trouble-shooting | Network Management
Bio
sbk@cornella
stuart@cpvax5 (SAIC)
[email protected]
[email protected]
[email protected]
Student: 1981
Programmer: 1984
Desktop / Server Support: 1985
Generalist (Desktop / Server / Network / Application): 1991
Third-Tier Tech: 1993 - present
I’ve spent my entire career in the research space, mostly non-profit; my core competences lie in transport and
network management. See http://www.skendric.com for examples of my work.
Today
I work for the Fred Hutchinson Cancer Research Center, a non-profit biomedical research institute in Seattle
USA, focused on cancer and infectious disease. The Hutch at a glance:
• 2,300 staff (250 IT staff)
• $400 mil/year, 80% grant-funded
• 8,000 Ethernet ports
• 13,000 IP addresses
• 800 TB mass storage
• 450 KW in data centers
• Exchange + Zimbra
• Commvault + TSM
• ISC BIND/DHCP
• postfix/sendmail
• Windows, Linux, OS X
• New 750 KW data center in 2012
Client / Network / Server / Storage
Client Thinks
[Diagram: Client -> Ethernet Switch -> Server -> Storage; the client is about to issue a "Read Record" request]
• Application Thinks, Decides to Read a Record: ~.1ms – 1000ms

Units used on these slides:
  Millisecond   ms   10^-3 s   .001s
  Microsecond   us   10^-6 s   .000001s
  Nanosecond    ns   10^-9 s   .000000001s
  Light travels ~.5m during 1ns in glass
Client / Network / Server / Storage
Client Transmits Packet
• Application Thinks, Decides to Read a Record: ~.1ms – 1000ms
• Serialization Delay (client NIC clocks the request frame onto the wire): .7us
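Where the .7us figure comes from: it matches the time needed to clock a minimal Ethernet frame onto a gigabit link. A back-of-envelope sketch, assuming 1 Gb/s links and standard Ethernet framing overhead (the deck states neither explicitly):

```python
# Serialization delay: the time a NIC or switch port needs to clock one frame
# onto the wire. Assumptions (not stated on the slide): 1 Gb/s links, plus
# 8 bytes of preamble and 12 bytes of inter-frame gap per frame.

LINK_BPS = 1_000_000_000       # 1 Gb/s
OVERHEAD_BYTES = 8 + 12        # preamble + inter-frame gap

def serialization_delay_us(frame_bytes, link_bps=LINK_BPS):
    """Microseconds needed to transmit one frame, including framing overhead."""
    return (frame_bytes + OVERHEAD_BYTES) * 8 / link_bps * 1e6

print(serialization_delay_us(64))     # minimal frame (small read request): ~0.67us
print(serialization_delay_us(1518))   # full-size frame (data-bearing reply): ~12.3us
```

The 12us serialization figures that appear later on the response path are the same calculation applied to full-size frames; at 10 Gb/s every one of these numbers shrinks by a factor of ten.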
Client / Network / Server / Storage
Packet Propagates Across Wire, Arrives at Switch
• Serialization Delay: .7us
• Propagation Delay (client-to-switch wire): .2us
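The .2us propagation figure follows from the deck's rule of thumb that light travels ~.5m during 1ns in glass (roughly 2e8 m/s), so .2us corresponds to a cable run of a few tens of meters. A small sketch (the specific distances are illustrative, not taken from the deck):

```python
# Propagation delay: distance divided by signal speed in the medium.
# Rule of thumb from the deck: ~0.5 m per ns, i.e. roughly 2e8 m/s,
# which holds approximately for both fiber and copper.

SIGNAL_SPEED_M_PER_S = 2e8     # about two-thirds of c

def propagation_delay_us(distance_m):
    """Microseconds for a signal to travel distance_m through fiber or copper."""
    return distance_m / SIGNAL_SPEED_M_PER_S * 1e6

print(propagation_delay_us(40))      # ~0.2us: an in-building cable run
print(propagation_delay_us(80_000))  # ~400us: an 80 km metro fiber path
```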
Client / Network / Server / Storage
Switch Figures Out Where to Send Packet
• Serialization Delay: .7us
• Propagation Delay: .2us
• Forwarding Delay (inside the Ethernet switch): 5us - 80us
Client / Network / Server / Storage
Switch Transmits Packet
• Forwarding Delay: 5us - 80us
• Serialization Delay (switch egress toward the server): .7us
Client / Network / Server / Storage
Packet Propagates Across Wire, Arrives at Server
• Serialization Delay: .7us
• Propagation Delay (switch-to-server wire): .2us
Client / Network / Server / Storage
Application Translates Request into a Disk Read, Server Transmits Packet; Packet Propagates Across Storage Network
• Application Thinks, Asks Storage for Blocks: ~.1ms – 1000ms
• Read Blocks request crosses the storage network: Serialization Delay / Propagation Delay
Client / Network / Server / Storage
Heads Read Blocks from Spindles, Storage Transmits Blocks; Packet Propagates Across Storage Network
• Storage Retrieves Blocks: ~5ms - 500ms
• Return Blocks reply crosses the storage network: Serialization Delay / Propagation Delay
Client / Network / Server / Storage
Server Thinks, Transmits Packet … Network … Client Processes Record
• Application Thinks, Formats Record: ~.1ms – 1000ms
• Return Record path (full-size frames): Serialization Delay 12us, Propagation Delay .2us, Forwarding Delay 8us – 140us, Serialization Delay 12us, Propagation Delay .2us
• Application Processes Record: ~.1ms – 1000ms
Client / Network / Server / Storage
Read … Transmit … Request …
The same picture with the phases labelled: Client Thinks -> Network Transmits -> Server Thinks -> Storage Retrieves, carrying the per-component delays from the previous slides.
Client / Network / Server / Storage
Read … Transmit … Request … Retrieve … Return … Process
The complete round trip:
• Client Thinks: Application Thinks, Decides to Read a Record, ~.1ms – 1000ms
• Network Transmits (request): Serialization Delay .7us, Propagation Delay .2us, Forwarding Delay 5us - 80us, Serialization Delay .7us, Propagation Delay .2us
• Server Thinks: Application Thinks, Asks Storage for Blocks, ~.1ms – 1000ms
• Storage Retrieves: Serialization Delay / Propagation Delay across the storage network, Storage Retrieves Blocks ~5ms - 500ms
• Server Thinks: Application Thinks, Formats Record, ~.1ms – 1000ms
• Network Transmits (response): Serialization Delay 12us, Propagation Delay .2us, Forwarding Delay 8us – 140us, Serialization Delay 12us, Propagation Delay .2us
• Client Thinks: Application Processes Record, ~.1ms – 1000ms
Client / Network / Server / Storage
Of course, in reality there are more moving parts
[Topology diagram: Mars, Deimos, Cetus; ESX hosts k4-a-esx, k4-b-esx, s2-a-esx, s2-b-esx; routers core-a-rtr, core-b-rtr, k-a-rtr, k-b-rtr, s-a-rtr, s-b-rtr]
Client / Network / Server / Storage
In a Highly-Available environment, Packets Can Traverse Multiple Paths
[Same topology, now tracing a "Read Record" request through the redundant routers and ESX hosts]
Client / Network / Server / Storage
A handful or so different paths
[Same topology, with the "Read Record" request shown taking several of the possible paths]
Client / Network / Server / Storage
If we add up those numbers, we can focus our attention on the slowest parts
• Client: Application Thinks, Decides to Read a Record ~.1ms – 1000ms; Application Processes Record ~.1ms – 1000ms
• Network: Serialization Delay .7us (request) / 12us (response) per hop, Propagation Delay .2us per wire, Forwarding Delay 5us - 80us (request), 8us – 140us (response)
• Server: Application Thinks, Asks Storage for Blocks ~.1ms – 1000ms; Application Thinks, Formats Record ~.1ms – 1000ms
• Storage: Serialization Delay / Propagation Delay across the storage network; Storage Retrieves Blocks ~5ms - 500ms
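A sketch of that addition, using the per-component figures from the preceding slides (the storage-network serialization and propagation terms are omitted because the slides attach no numbers to them). The point it makes: the microsecond-scale network terms all but vanish next to the millisecond-scale think times and storage retrieval.

```python
# Round-trip latency budget for one "read record" transaction.
# Each entry is a (best, worst) range in microseconds, taken from the slides.

budget_us = {
    "client thinks":            (100, 1_000_000),    # ~.1ms - 1000ms
    "request serialization":    (0.7 * 2, 0.7 * 2),  # client NIC + switch egress
    "request propagation":      (0.2 * 2, 0.2 * 2),
    "request forwarding":       (5, 80),
    "server asks storage":      (100, 1_000_000),
    "storage retrieves blocks": (5_000, 500_000),    # ~5ms - 500ms
    "server formats record":    (100, 1_000_000),
    "response serialization":   (12 * 2, 12 * 2),    # full-size frames
    "response propagation":     (0.2 * 2, 0.2 * 2),
    "response forwarding":      (8, 140),
    "client processes record":  (100, 1_000_000),
}

best = sum(lo for lo, hi in budget_us.values())
worst = sum(hi for lo, hi in budget_us.values())
network_worst = sum(hi for name, (lo, hi) in budget_us.items()
                    if "serialization" in name or "propagation" in name
                    or "forwarding" in name)

print(f"best case : {best / 1000:10.2f} ms")    # ~5.4 ms
print(f"worst case: {worst / 1000:10.2f} ms")   # ~4500 ms
print(f"network share of worst case: {network_worst / 1000:.3f} ms")  # ~0.25 ms
```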
Tuning Potential
Developers (App Architecture and Tuning) | Sys Admins (Deimos NFS + CPU + kernel) | Storage Admins (Mars NFS + CPU + kernel, Cetus)
Application architecture: 1,000x
• SQL, query optimization, caching, system calls
Server & Storage configuration: 100x
• Disk striping, spindle tiering, paging, NFS tuning
Application fine-tuning: 2-10x
• Threads, asynchronous I/O
Kernel tuning: less than 2x
- Caveats:
• If a kernel bottleneck is present, then 10-100x
• The kernel can be a binary performance gate
Source: Hal Stern & Marc Staveley, "System & Network Performance Tuning" (Version 3.10, Copyright 1994-2007), LISA 2007
Client / Network / Server / Storage
Consolidate all the Switches and Wires into a Single Clump
• Client: Application Thinks, Decides to Read a Record ~.1ms – 1000ms; Application Processes Record ~.1ms – 1000ms
• Network (all switches and wires combined): Serialization Delay 4us – 108us, Propagation Delay 5us, Forwarding Delay 30us – 1300us
• Server: Application Thinks, Asks Storage for Blocks ~.1ms – 1000ms; Application Thinks, Formats Record ~.1ms – 1000ms
• Storage: Serialization Delay / Propagation Delay across the storage network; Storage Retrieves Blocks ~5ms - 500ms
Client / Network / Server / Storage
Review: Client-Network-Server-Storage
[Diagram: a Request flows Client -> Network -> Server -> Storage; the Response flows back Storage -> Server -> Network -> Client]
Client / Network / Server / Storage
Consider Subtleties: Applications & Protocols Vary in Request/Response Behavior
• Transaction Oriented (aka Ping/Pong): Request, Response, Request, Response, Request, Response … each request waits for its own response before the next one goes out
• Streaming: one Request, then Response, Response, Response, Response, Response …
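Why this distinction matters: a transaction-oriented (ping/pong) application pays a full network round trip for every request, while a streaming application pays it roughly once per transfer. A sketch with hypothetical numbers (the 1ms round trip, 0.1ms per-record service time, and 10,000 records are illustrative, not from the deck):

```python
# Transaction-oriented (ping/pong) vs streaming behavior over the same path.

RTT_MS = 1.0          # hypothetical network round-trip time
SERVICE_MS = 0.1      # hypothetical per-record server/storage time
RECORDS = 10_000      # hypothetical transfer size

# Ping/pong: each record waits for its own request/response round trip.
ping_pong_ms = RECORDS * (RTT_MS + SERVICE_MS)

# Streaming: one request, then responses flow back-to-back, so the round trip
# is paid roughly once and the rest is serialized service time.
streaming_ms = RTT_MS + RECORDS * SERVICE_MS

print(f"ping/pong: {ping_pong_ms / 1000:5.1f} s")   # ~11.0 s
print(f"streaming: {streaming_ms / 1000:5.1f} s")   # ~ 1.0 s
```

Same network and same server, yet an order-of-magnitude difference in elapsed time, which is why latency hurts chatty protocols far more than bulk transfers.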
Client / Network / Server / Storage
Consider Subtleties: Queues Can Overflow
[Diagram: Client1-Client4 each send a Request (Request1-Request4) toward the Server through one Ethernet Switch]
• The switch is already transmitting a packet (Request0)
• Three new packets arrive and wait in the queue
• A fourth packet arrives, overflows the queue, and is discarded
• The queue gradually empties
Our switches are configured to allocate 40 slots to the input queue and 2000 slots to the output queue on each port. Low-end switches draw those slots from a shared pool of memory, so if utilization is heavy, not all ports can have their 40/2000 slots. High-end switches dedicate memory to each port.
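A minimal sketch of the tail-drop behavior this slide walks through, with a hypothetical 3-slot queue standing in for the real 40-slot input queue so the overflow is easy to see:

```python
from collections import deque

# Tail-drop FIFO: a frame that arrives while the queue is full is discarded.
# Hypothetical capacity of 3 slots; the deck's switches use 40 input slots
# and 2000 output slots per port.

CAPACITY = 3
queue = deque()
dropped = []

def enqueue(frame):
    if len(queue) >= CAPACITY:
        dropped.append(frame)      # queue overflow: the frame is lost
    else:
        queue.append(frame)

# The switch is already busy transmitting Request0 when four requests arrive.
for frame in ["Request1", "Request2", "Request3", "Request4"]:
    enqueue(frame)

print("queued :", list(queue))     # Request1..Request3 wait their turn
print("dropped:", dropped)         # Request4 overflowed and was discarded

# The queue then gradually empties as the switch transmits one frame at a time.
while queue:
    print("transmit", queue.popleft())
```

The drop itself costs only one frame; the expensive part is what the endpoints do next (for TCP, a retransmission), which is where users actually feel the latency.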
Client / Network / Server / Storage
There are queues everywhere; the interaction of these queues is a subject of on-going research.
• Client: Application Queue -> TCP Queue -> IP Queue -> NIC Queue
• Ethernet Switch: Shared Memory Queue or Per-Port Queues
• Server: NIC Queue -> IP Queue -> TCP Queue -> Application Queue on the network side, plus HBA Queue -> FC Queue toward storage
• Storage: Application Queue, Controller Queue, Disk Queue
Questions / Comments / Complaints?
[email protected]
http://www.skendric.com
© Stuart Kendrick 2012 All rights reserved