Sidevõrgud (Communication Networks) IRT 4060 / IRT 0020, lecture 8, 3 Nov 2004
Vooülekanne (Flow Transfer)
Avo Ots, Chair of Telecommunications, Institute of Radio and Communication Engineering, TTÜ
[email protected] (slide 79)

Slide 80: Transport-service requirements of common applications

Application            | Data loss | Bandwidth     | Time sensitive    | Application protocol | Transport protocol
-----------------------|-----------|---------------|-------------------|----------------------|-------------------
File transfer          | No        | Elastic       | No                | FTP                  | TCP
E-mail                 | No        | Elastic       | No                | SMTP                 | TCP
Web documents          | No        | Elastic, Kbps | No                | HTTP                 | TCP
Real-time audio/video  | Yes       | Kbps, Mbps    | Yes, 100s of msec | Proprietary          | TCP or UDP
Stored audio/video     | Yes       | Kbps, Mbps    | Yes, sec          | Proprietary, NFS     | TCP or UDP
Interactive games      | Yes       | Kbps          | Yes, 100s of msec | Proprietary          | TCP or UDP
Financial applications | No        | Elastic       | Both              | Proprietary          | TCP

TCP: guarantees delivery of all data; no guarantee of rate or delay; connection-oriented; congestion control.
UDP: no guarantee of delivery, ordering, or delay; connectionless; no congestion control.

Slide 81: Web cache – proxy server
• Network entity that satisfies HTTP requests on behalf of a client.
• Keeps copies of recently requested objects in its own disk storage.
• Content distribution.
[Figure: the client sends its HTTP request to the proxy server; on a cache miss the proxy forwards the request to the origin server, so the proxy server is also a client of the origin server.]

Slide 82: Transport protocols
• Transform host-to-host communication, offered by the network layer, into process-to-process communication.
• Deliver important services to the application layer:
  1. Guaranteed message delivery
  2. In-order delivery
  3. Detect/eliminate message replication
  4. Support arbitrarily large messages
  5. Support synchronization between sender and receiver
  6. Allow receiver to apply flow control to sender
  7. Support multiple application processes on each host
  8. Other services? (congestion control, quality of service, …)
• Overview:
  – UDP: unreliable message delivery protocol
  – TCP: reliable stream transfer protocol

Slide 83: User Datagram Protocol (UDP)
• Connectionless, unreliable message delivery
  – But an optional checksum provides limited error detection
    • Computed over the UDP header, the message, and an IP pseudoheader
    • Pseudoheader: IP source & destination addresses, protocol number, and UDP length
    • If the source does not want to compute the checksum, it sets the field to zero
• Allows process demultiplexing using ports
  – 16 bits per port: 65536 possible channels per host
• No flow control: sender can overrun the receiver's buffers

UDP header layout (bit positions 0, 16, 31; field order per RFC 768):

    0               16              31
    +---------------+---------------+
    |    SrcPort    |    DstPort    |
    +---------------+---------------+
    |    Length     |   Checksum    |
    +---------------+---------------+
    |           Data ...            |

Slide 84: Application demultiplexing with UDP
[Figure: arriving packets are demultiplexed by UDP, via per-port queues, to the application processes bound to those ports.]

Slide 85: Transmission Control Protocol (TCP)
• Reliable, connection-oriented, byte-stream delivery service
  – Full-duplex connection
  – Supports flow control (and congestion control, to be covered later)
  – Supports application-layer demultiplexing using ports
  – Segments vs application-layer "writes": when does TCP transmit a segment?
    • Maximum Segment Size (MSS), push operation, send timer
[Figure: the sending application writes bytes into TCP's send buffer; TCP transmits them as segments; the receiving TCP stores them in a receive buffer, from which the receiving application reads bytes.]

Slide 86: Socket (sokkel)
[Figure: each process talks to TCP through a socket; the socket is controlled by the application developer, while TCP, with its buffers and variables, is controlled by the operating system; the two TCP endpoints communicate across the Internet.]

Slide 87: TCP Congestion Control
[Figure: congestion window (in segments, 0 to 20) versus round-trip times; the window grows exponentially during slow start, crosses the threshold into congestion avoidance with linear growth, and congestion occurs near a window of 20.]

Slide 88: TCP Window Size Over Time
[Figure: the congestion window of a long-lived TCP connection follows a sawtooth between roughly 8 and 24 Kbytes over time.]

Slide 89: Slow Start Sequence Plot
The window doubles every round.
[Figure: sequence number versus time during slow start; each round trip sends twice as many segments as the previous one.]

Slide 90: The big picture
[Figure: cwnd over time alternates between slow start and congestion avoidance; a timeout drops cwnd back into slow start.]

Slide 91: TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.]

Slide 92: Why is TCP fair?
Two competing sessions:
• Additive increase gives a slope of 1 as throughput increases
• Multiplicative decrease drops throughput proportionally
[Figure: in the plane of (connection 1 throughput, connection 2 throughput), congestion avoidance moves the operating point along a 45° line; each loss halves both windows, pulling the point toward the equal-bandwidth-share line and bounding it by R.]

Slide 93: TCP Connection Management
[Figure: state diagrams of the TCP server lifecycle and the TCP client lifecycle.]
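The checksum rule on the UDP slide (optional; computed over the UDP header, the message, and an IP pseudoheader of source/destination addresses, protocol number, and UDP length) can be sketched in Python. The function names here are illustrative, but the one's-complement arithmetic follows RFC 768:

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement sum over data (odd length is zero-padded)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over the IP pseudoheader plus the UDP header and payload.

    Pseudoheader: src addr, dst addr, a zero byte, the protocol number
    (17 for UDP), and the UDP length -- the fields listed on the slide.
    The checksum field inside udp_segment must be zero when calling this.
    """
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = ~ones_complement_sum16(pseudo + udp_segment) & 0xFFFF
    return csum or 0xFFFF  # 0 on the wire means "no checksum computed"

# Example: a 12-byte segment (8-byte header + 4-byte payload),
# checksum field initially zero, between 10.0.0.1 and 10.0.0.2.
seg = struct.pack("!HHHH", 1234, 5678, 12, 0) + b"hi!\x00"
print(hex(udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), seg)))
```

A receiver validates by summing the pseudoheader and the segment with the checksum field filled in; the one's-complement sum then comes out as 0xFFFF.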
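The socket slide's picture (a process hands bytes to TCP through a socket it controls, while the operating system manages TCP's buffers) can be made concrete with Python's standard socket API. A minimal loopback echo sketch; the port is whatever the OS assigns:

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    """Accept one connection and echo one read back to the peer."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.create_server(("127.0.0.1", 0))   # OS picks a free port
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# The client writes bytes into its TCP send buffer; the kernel segments
# and transmits them, and the echoed bytes land in the receive buffer.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
print(reply)
```

Note that `sendall`/`recv` move bytes between the application and the kernel buffers; segmentation and retransmission happen entirely inside the OS, exactly as the slide's developer/OS split suggests.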
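The congestion-control slides describe the window dynamics: slow start doubles the congestion window every round trip until a threshold, congestion avoidance then adds one MSS per round, and a loss resets the window. A small simulation of that sawtooth, assuming a Tahoe-style reset to one MSS consistent with the slides' plots:

```python
def simulate_cwnd(rounds: int, threshold: float, loss_rounds: set) -> list:
    """Evolve the congestion window (in MSS) per round-trip time:
    slow start doubles cwnd below the threshold, congestion avoidance
    adds one MSS per round above it, and a loss halves the threshold
    and restarts slow start from cwnd = 1."""
    cwnd, history = 1.0, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:              # "congestion occurs"
            threshold = max(cwnd / 2, 1)    # remember half the window
            cwnd = 1.0                      # back to slow start
        elif cwnd < threshold:
            cwnd *= 2                       # slow start: doubles every round
        else:
            cwnd += 1                       # congestion avoidance: +1 MSS
    return history

print(simulate_cwnd(10, threshold=8, loss_rounds={6}))
```

The printed trace shows the exponential ramp, the linear climb past the threshold, and the collapse at the loss round, matching the "big picture" plot.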
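The fairness argument, additive increase moves both sessions along a slope-1 line while multiplicative decrease halves both, implies convergence toward the equal share R/K. A toy two-flow sketch, assuming idealized synchronized losses whenever the offered load exceeds the bottleneck capacity:

```python
def aimd_two_flows(x1: float, x2: float, capacity: float, rounds: int):
    """Two AIMD flows on one bottleneck: each adds one unit per round
    (additive increase, slope 1 in the throughput plane); when their sum
    exceeds the link capacity R, both halve (multiplicative decrease).
    Halving shrinks the gap between the rates, so they converge to R/2."""
    for _ in range(rounds):
        if x1 + x2 > capacity:
            x1, x2 = x1 / 2, x2 / 2
        else:
            x1, x2 = x1 + 1, x2 + 1
    return x1, x2

# Start far from fair: one flow at 1 unit, the other at 60, R = 100.
r1, r2 = aimd_two_flows(1.0, 60.0, capacity=100, rounds=500)
print(r1, r2)
```

Additive increase preserves the difference between the two rates, while each multiplicative decrease halves it, so the difference decays geometrically and both rates oscillate around R/2, which is the slide's geometric argument in numeric form.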