Chapter 5 Peer-to-Peer Protocols and Data Link Layer PART I: Peer-to-Peer Protocols 5.1 Peer-to-Peer Protocols and Service Models 5.2 ARQ Protocols and Reliable Data Transfer 5.3 Other peer-to-peer Protocols: Flow Control Timing Recovery TCP Reliable Stream Service and Flow Control 1 Chapter 5 Peer-to-Peer Protocols and Data Link Layer PART II: Data Link Controls 5.4 Framing 5.5 Point-to-Point Protocol 5.6 High-Level Data Link Control 5.7 Link Sharing Using Statistical Multiplexing 2 Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.1 Peer-to-Peer Protocols and Service Models 3 Peer-to-Peer Protocols n + 1 peer process SDU PDU Layer-(n+1) peer calls layer-n and passes Service Data Units (SDUs) for transfer Layer-n peers exchange Protocol Data Units (PDUs) to effect transfer Layer-n delivers SDUs to destination layer-(n+1) peer 4 n peer process n – 1 peer process n – 1 peer process Layer-n peer processes execute protocol to provide service to layer(n+1) n + 1 peer process SDU n peer process Service Models The service model specifies the information transfer service layer-n provided to layer-(n+1) Two categories of service models: Connection-oriented services Connectionless services A service offered by a given layer can include some of the following features: Arbitrary message size or structure Sequencing and Reliability Timing, Pacing, and Flow control Multiplexing Privacy, integrity, and authentication 5 Connection-Oriented Transfer Service Connection Establishment phase Data transfer phase Connection must be established between layer-(n+1) peers Layer-n protocol must: Set initial parameters, e.g. sequence numbers; and Allocate resources, e.g. buffers Exchange of SDUs Disconnection phase Example: TCP, PPP n + 1 peer process send SDU n + 1 peer process receive Layer n connection-oriented service SDU 6 Connectionless Transfer Service No Connection setup is required, simply send SDU Each block of information is transmitted independently All address information per block must be provided Acknowledge is not required (Simple & quick) or required Example: UDP, IP n + 1 peer process send SDU n + 1 peer process receive Layer n connectionless service 7 Message Size and Structure What message size and structure will a service layer accept? Different services impose restrictions on size & structure of data it will transfer Single bit? Block of bytes? Byte stream? Ex: Transfer of voice mail = 1 long message Ex: Transfer of voice call = byte stream 1 voice mail= 1 message = entire sequence of speech samples (a) 1 call = sequence of 1-byte messages 8 (b) Segmentation & Blocking To accommodate arbitrary message size, a layer may have to deal with messages that are too long or too short for its protocol Segmentation & Reassembly: a layer breaks long messages into smaller blocks and reassembles these at the destination Blocking & Unblocking: a layer combines small messages into bigger blocks prior to transfer 1 long message 2 or more short messages 9 2 or more blocks 1 block Reliability & Sequencing Reliability: Are messages or information stream delivered error-free and without loss or duplication? Sequencing: Are messages or information stream delivered in order? 
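As a small illustration of the reliability and sequencing questions just raised, here is a minimal Python sketch (illustrative only, with made-up names; it is not part of the lecture material) showing how a receiver that tracks an expected sequence number can recognize in-order, duplicate, and out-of-order deliveries:

# Hypothetical sketch: classify arriving frame sequence numbers at a receiver.
def classify(arrivals):
    # arrivals: frame sequence numbers in the order they reach the receiver
    expected = 0
    events = []
    for s in arrivals:
        if s == expected:
            events.append((s, "in order"))
            expected += 1
        elif s < expected:
            events.append((s, "duplicate"))
        else:
            events.append((s, f"out of order, frames {expected}..{s - 1} missing"))
            expected = s + 1
    return events

for seq, verdict in classify([0, 1, 1, 3, 4]):
    print(seq, verdict)

In this toy trace the second copy of frame 1 is flagged as a duplicate and the gap before frame 3 is flagged as a missing frame; the ARQ protocols described next combine exactly this kind of sequence numbering with error detection and retransmission to repair such a stream.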
ARQ protocols combine error detection, retransmission, and sequence numbering to provide reliability & sequencing Examples: TCP and HDLC 10 Pacing and Flow Control Messages can be lost if receiving system does not have sufficient buffering to store arriving messages (buffer overflow) If destination layer-(n+1) does not retrieve its information fast enough, destination layer-n buffers may overflow Pacing & Flow Control provide backpressure mechanisms that control transfer according to availability of buffers at the destination Examples: TCP and HDLC 11 Timing Applications involving voice and video generate units of information that are related temporally Destination application must reconstruct temporal relation in voice/video units Network transfer introduces delay & jitter Timing Recovery protocols use timestamps & sequence numbering to control the delay & jitter in delivered information Examples: RTP & associated protocols in Voice over IP 12 Multiplexing Multiplexing enables multiple layer-(n+1) users to share a layer-n service A multiplexing tag is required to identify specific users at the destination Examples: UDP, IP 13 Privacy, Integrity, & Authentication Privacy: ensuring that information transferred cannot be read by others Integrity: ensuring that information is not altered during transfer Authentication: verifying that sender and/or receiver are who they claim to be Security protocols provide these services and are discussed in Chapter 11 Examples: IPSec, SSL 14 End-to-End vs. Hop-by-Hop A service feature can be provided by implementing a protocol end-to-end across the network across every hop in the network Example: Perform error control at every hop in the network or only between the source and destination? Perform flow control between every hop in the network or only between source & destination? 15 Error Control in Data Link Layer Packets Packets Data link layer Data link layer (a) A Frames B Physical layer Physical layer (b) 12 3 21 12 3 B 2 1 Medium A 1 Physical layer entity 2 Data link layer entity 3 Network layer entity 21 Fig. 5.5 Data Link operates over wire-like, directly-connected systems Frames can be corrupted or lost, but arrive in order Data link performs error-checking & retransmission Ensures error-free packet transfer between two systems 16 Error Control in Transport Layer Transport layer protocol (e.g. TCP) sends segments across network and performs end-to-end error checking & retransmission Usually, the underlying network is assumed to be unreliable Messages Messages Segments Transport layer Transport layer Network layer Network layer Network layer Network layer Data link layer Data link layer Data link layer Data link layer layer Physical layer Physical layer Physical layer End system Physical α Fig. 5.6 End system β 17 Network Segments can experience long delays, can be lost, or arrive out-of-order because packets can follow different paths across network The peer processes at the transport layer need to take into account all the characteristics of the network transfer service to be able to provide desired service to its higher layer 12 C 3 21 End System α 4 3 21 End System β 12 3 21 Medium A 12 3 B 2 1 21 123 4 Network 3 Network layer entity18 4 Transport layer entity End-to-End versus hop-by-hop Approaches Hop-by-hop In situations where errors are likely, hopby-hop error becomes necessary. 
Data 1 Data 2 ACK/ NAK Data 3 Data 4 ACK/ NAK Faster recovery 5 ACK/ NAK ACK/ NAK End-to-end ACK/NAK 1 2 Data 3 Data 5 4 Data Data In situations where errors are infrequent, end-toend mechanisms are preferred. 19 Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.2 ARQ Protocols and Reliable Data Transfer 20 Automatic Repeat Request (ARQ) Suppose that the layer n+1 process requires a reliable data transfer service, and then layer n process would provide an ARQ protocol service ARQ combines with error detection code and retransmission to ensure a sequence of information packets being delivered in order and without errors or duplications despite transmission errors & losses Three basic types of ARQ protocols are investigated: Stop-and-Wait ARQ Go-Back N ARQ Selective Repeat ARQ Basic elements of ARQ: ACKs (positive acknowledgments), NAKs (negative acknowlegments) Timeout Mechanism 21 5.2.1 Stop-and-Wait ARQ Transmit a frame, wait for ACK Packet sequence Layer n+1, PDU Layer n A timer is set after a frame is transmitted Error-free Packet sequence Information frame, PDU Receiver (Process B) Transmitter (Process A) Suppose that the layer n+1 requires a reliable data transfer service Control frame CRC Information Header packet Information frame Header CRC Control frame: ACKs 22 Need for Sequence Numbers (a) Frame 1 lost A B Time-out Time Frame 0 ACK (b) ACK lost A B Frame 1 Frame 1 ACK Frame 2 Time-out Time Frame 0 ACK Frame 1 ACK Frame 1 ACK Frame 2 In cases (a) & (b) the transmitting station A acts the same way But in case (b) the receiving station B accepts frame 1 twice Question: How does the receiver know the second frame is also frame 1? Answer: Add frame sequence number in header Slast is sequence number of most recent transmitted frame 23 Sequence Numbers (c) Premature Time-out Time-out A Time Frame 0 ACK B Frame 0 ACK Frame 1 Frame 2 The transmitting station A misinterprets duplicate ACKs Incorrectly assumes second ACK acknowledges Frame 1 Question: How does the receiver know second ACK is for frame 0? Answer: Add frame sequence number in ACK header Rnext is the sequence number of next frame expected by the receiver to implicitly acknowledge the receipt of all prior frames 24 1-Bit Sequence Numbering Suffices 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 Rnext Slast Timer Slast Transmitter A Receiver B Rnext Global State: (Slast, Rnext) Fig. 
5.12 System State information in Stop-and Wait ARQ (0,0) Error-free frame 0 arrives at receiver ACK for frame 1 arrives at transmitter (1,0) Error-free frame 1 arrives at receiver (0,1) ACK for frame 0 arrives at transmitter (1,1) 25 Stop-and-Wait ARQ Transmitter Ready state Await request from higher layer for packet transfer When request arrives, transmit frame with updated Slast and CRC Go to Wait State Wait state Wait for ACK or timer to expire; block requests from higher layer If timeout expires retransmit frame and reset timer If ACK received: If sequence number is incorrect or if errors detected: ignore ACK If sequence number is correct (Rnext = Slast +1): accept frame, go to Ready state Receiver Always in Ready State Wait for arrival of new frame When frame arrives, check for errors If no errors detected and sequence number is correct (Slast=Rnext), then accept frame, update Rnext, send ACK frame with Rnext, deliver packet to higher layer If no errors detected and wrong sequence number discard frame send ACK frame with Rnext If errors detected discard frame 26 Stop-and-Wait ARQ Example Slast=0 Time-out A Fr 0 ACK 1 B Slast=0 Rnext: 01 Slast=1 Rnext:1 Slast:01 Rnext=1 Time-out Ignore ACK 1 Fr 1 Fr 0 ACK 1 Fr 1 Slast=0 Error Rnext=1 Frame Discard Discard Frame 0 Frame 1 Transmitter Wait state If timeout expires retransmit frame and reset timer If ACK received: If sequence number is incorrect or if errors detected: ignore ACK Receiver Always in Ready State If no errors detected and wrong sequence number discard frame send ACK frame with Rnext If errors detected 27 discard frame Applications of Stop-and-Wait ARQ IBM Binary Synchronous Communications protocol (Bisync): character-oriented data link control Xmodem: modem file transfer protocol Trivial File Transfer Protocol (RFC 1350): simple protocol for file transfer over UDP 28 Performance Issues First frame bit enters channel Last frame bit enters channel Channel idle while transmitter waits for ACK ACK arrives t A B First frame bit arrives at receiver t Last frame bit arrives at receiver Receiver processes frame and prepares ACK Fig. 5.13 Stop-and-Wait ARQ is inefficient when the time to receive an ACK Is large compared to the frame transmission time 1000 bit frame @ 1.5 Mbps If the time waiting for ACK = 40 ms, then efficiency = 1/60= 1.6% If the time waiting for ACK = 1 ms, then efficiency =2/3 = 66.67% 29 Delay Components in Stop-and-Wait ARQ t0 = total time to transmit 1 frame A tproc B tprop frame tf time tproc tprop tack t0 = 2t prop + 2t proc + t f + t ack = 2t prop + 2t proc nf bits/info frame na + + R R delay components bits/ACK frame channel transmission rate 30 Transmission Efficiency of Stopand-Wait ARQ in Error-free channel Effective Information Transmission rate: R 0 eff bits for header & CRC number of information bits delivered to destination n f - no = = , total time required to deliver the information bits t0 Transmission efficiency η0: n f - no no 1Reff nf t0 h0 = = = . 
na 2(t prop + t proc ) R R R 1+ + nf nf Effect of ACK frame Effect of frame overhead Effect of Delay-Bandwidth Product 31 Example: Effect of Delay-Bandwidth Product nf=1250 bytes = 10,000 bits, na=no=25 bytes = 200 bits Delay Bandwidth Product Efficiency R=1 Mbps R=1 Gbps Reaction time = 1 ms 10 ms 100 ms 1 sec 103 88% 106 1% 104 49% 107 0.1% 105 9% 108 0.01% 106 1% 109 0.001% Stop-and-Wait does not work well for very high speeds or long propagation delays 32 Transmission Efficiency of Stop-andWait ARQ in Channel with Errors Let 1 – Pf = probability frame arrives without errors Avg. number of transmissions for correct reception is 1/ (1–Pf ) If 1 – Pf = 0.1, then on average a frame will have to be transmitted 10 times to get through The Stop-and-Wait ARQ on average requires tsw = t0/(1 – Pf) seconds to get a frame through h SW = Reff R = n f - no t0 1 - Pf R 1= na 1+ + nf no nf 2(t prop + t proc ) R nf (1 - Pf ) Effect of frame loss 33 Example: Effect of Bit Error Rate nf=1250 bytes = 10,000 bits, na=no=25 bytes = 200 bits Find efficiency for random bit errors with p= 0, 10-6, 10-5, 10-4 nf 1 - Pf = (1 - p) » e Pf -n f p for large n f and small p p=0 p=10-6 p=10-5 p=10-4 1 88% 0.99 86.6% 0.905 79.2% 0.368 32.2% Efficiency R = 1 Mbps Reaction Time = 1 ms Bit errors impact performance as nf p approach 1 34 Midterm Exam and Final Exam Midterm: Time: December 5 (Thursday) or 6, 2013 Scope: Chapters 1 – 3 and Chapter 5; all contents given in the class before the beginning of December Final Exam: Time: January 9 (Thursday) or 10, 2014 Scope: Chapter 5 – 6 and Some in Chapter 4 and Chapter 7; all after the end of November 35 5.2.2 Go-Back-N ARQ Improve Stop-and-Wait ARQ by allowing the transmitter to continue sending frames to keep channel busy while the transmitter waits for acknowledgements In Go-Back-N ARQ pipelines, a window size Ws, the maximum number of frames that can be outstanding without acknowledgement, is chosen larger than the delay-bandwidth product to ensure that the channel can be kept busy m Use m-bit sequence numbering from 0 to 2 -1 maximum When will the ARQ protocol go back N? and N = ? If ACK for oldest frame arrives before window is exhausted, frames can continue be transmitted If window is exhausted and the timer is out, pull back and retransmit all outstanding frames 36 Go-Back-N ARQ 4 frames are outstanding; so go back 4 Go-Back-4: fr 0 A fr 1 fr 2 fr 3 fr 4 fr 5 fr 6 fr 3 fr 4 fr 5 fr 6 fr 7 fr 8 Time fr 9 B Pipelined Rnext 0 A C K 1 01 A C K 2 A C K 3 2 3 The receiver ignores frame 3 and all subsequent frames because of out of sequence frames 3 A C K 4 3 4 A C K 5 5 A C K 6 6 A C K 7 A C K 8 7 8 A C K 9 Pipelined 9 Frame transmission are pipelined to keep the channel busy Frame with errors and subsequent out-of-sequence frames are ignored Transmitter is forced to go back when window of 4 is exhausted 37 Window size long enough to cover round trip time Stop-and-Wait ARQ fr 0 A B B fr 1 Time A C K 1 Receiver is looking for Rnext=0 Go-Back-N ARQ A Time-out expires fr 0 Four frames are outstanding; so go back 4 fr 0 fr fr 1 2 fr 3 fr 0 fr fr 1 2 Receiver is Out-oflooking for sequence Rnext=0 fr 3 A C K 1 A C K 2 fr fr 4 5 A C K 3 A C K 4 fr 6 A C K 5 Time A C K 6 frames Fig. 
5.16 Relationship of Stop-and-Wait ARQ and Go-Back-N ARQ 38 Go-Back-N with Timeout Problem with Go-Back-N as presented: If frame is lost and source does not have frame to send, then window will not be exhausted and recovery will not commence Use a timeout mechanism with each frame When timeout expires, resend all outstanding frames 39 Go-Back-N Transmitter & Receiver Transmitter Send Window ... Frames transmitted S last and ACKed Srecent Slast Timer Slast+1 oldest unACKed frame ... Timer Srecent most recent transmission ... Slast+Ws-1 Receive Window Slast+Ws-1 Buffers Timer Receiver Max Sequence Number allowed Frames received Rnext Receiver will only accept a frame that is error-free and that has sequence number = Rnext When such a frame arrives, Rnext is incremented by one, and the receive window slides forward by one 40 Sliding Window Operation Transmitter Send Window ... Frames transmitted S last and ACKed Srecent Slast+Ws-1 m-bit Sequence Numbering 2m – 1 0 1 2 Transmitter waits for error-free ACK frame with sequence number Slast When such ACK frame arrives, Slast is incremented by one, and the send window slides forward by one Slast send i window i+1 i + Ws – 1 Ws ≦ 2m– 1 *Note that the modular 2m arithmetic is used here 41 Go-Back-N ARQ Protocol 1. 2. 3. 4. 5. 6. Notice that the modulo 2m arithmetic is used here. The transmitter is in ready state or block state. If the transmitter is in the ready state waiting for a request from its higher layer. If there is request, it sends the frame with sequence number Srecent and sets the timer. If Srecent= Slast+Ws-1, then the send window becomes empty and the transmitter goes into the blocking state. If an error-free ACK frame is received with Rnext in the range between Srecent and Slast, the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1. If the Rnext is outside the range from Srecent to Slast, the 42 frame is discarded and no further action is taken. Go-Back-N ARQ Protocol 1. 2. 3. 4. The transmitter is in block state, that is Srecent= Slast+Ws1, denoting the send window becomes empty, and the transmitter refused to accept requests for packet transfer from the upper layer. If a timer expires, the transmitter resends the corresponding packet and resets the timer. If an error-free ACK frame is received with Rnext in the range between Srecent and Slast, the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1, then changing to the ready state. If the Rnext is outside the range from Srecent to Slast, the frame is discarded and no further action is taken. 43 Go-Back-N ARQ Protocol 1. 2. 3. 4. The receiver is always in the ready state waiting for a notification of an arriving frame from the lower layer. When a frame arrives, if no errors are detected and the sequence number is Rnext, then the frame is accepted, the sequence number is incremented to Rnext +1, an ACK frame is transmitted with sequence number Rnext +1, and the packet is delivered to its higher layer. If the received frame is correct but with wrong sequence number, the frame is discarded and an ACK is sent with sequence number Rnext. If an erroneous frame is received, the frame is discarded and no further action is taken. 44 Maximum Allowable Window Size is Ws = 2m-1 M = 22 = 4, Go-Back - 4: A fr 0 A C K 1 B Rnext fr 2 fr 1 0 1 fr 3 A C K 2 2 M = 22 = 4, Go-Back-3: A fr 0 fr 0 A C K 3 3 fr 1 A C K 0 fr 2 fr 3 Time Receiver has Rnext= 0, and it regards this frame 0 is a new frame 0. 
But it is actually the old frame 0. It is a mistake. Therefore, Ws < 2m and the maximum allowable window size is Ws = 2m-1 0 Transmitter goes back 3 fr 0 fr 2 fr 1 Transmitter goes back 4 and retransmits frame 0 - 3 fr 1 fr 2 fr 3 Time But notice that the maximum sequence number can be from 0 to 2m-1 B Rnext 0 A C K 1 A C K 2 A C K 3 A C K 3 1 2 3 3 Receiver has Rnext= 3 , so it rejects the old frame 0, 1, 2, altogether. 45 ACK Piggybacking in Bidirectional GBN SArecent RA next Transmitter Receiver Transmitter SBrecent RB next “A” Receive Window RA next “A” Send Window ... SA last+WA s-1 SA last Buffers Timer SA last Timer SA last+1 ... SArecent ... Timer Timer SA last +WA s-1 Receiver “B” Receive Window Note 1: Out-ofsequence error-free frames discarded after Rnext examined Note 2: If no information frames are scheduled, the receiver can send the ACK timer. When the ACK timer expires a control frame conveying ACK is transmitted RB next “B” Send Window ... SB last SB last+WB s-1 Buffers Timer SB last Timer SBlast+1 ... SBrecent ... Timer Timer SB last+WB s-1 46 Applications of Go-Back-N ARQ HDLC (High-Level Data Link Control): bitoriented data link control V.42 modem: error control over telephone modem links 47 Required Timeout & Window Size Tproc Tout Tprop Tf Tf Tprop Tout = 2 Tprop + 2 Tf +Tproc Timeout value should allow for: Two propagation times + 1 processing time: 2 Tprop + Tproc A frame that begins transmission at the receiver right before our frame arrives; one Tf (as the worst case !) must be counted Next frame carrying the piggybacked ACK takes the second Tf Ws should be large enough to keep channel busy for Tout 48 Required Window Size for Delay-Bandwidth Product Frame = 1250 bytes =10,000 bits, R = 1 Mbps 2(tprop + tproc) 2 x Delay x BW Minimum Window size 1 ms 1000 bits 1 10 ms 10,000 bits 2 100 ms 100,000 bits 11 1 second 1,000,000 bits 101 49 Efficiency of Go-Back-N GBN has a best possible efficiency, if Ws large enough to keep channel busy and the channel is error-free, namely, 1-n0/nf If the channel is erroneous and assume that the frame loss probability is Pf , then time to deliver a frame successfully is tf + Pf Wstf /(1-Pf), where tf =nf/R Notice that the dela-bandwidth product determines the Ws tGBN = t f (1 - Pf ) + Pf {t f + n f - no hGBN = tGBN R = Ws t f 1 - Pf no 1nf 1 + (Ws - 1) Pf } = t f + Pf Ws t f 1 - Pf and (1 - Pf ) 50 Example: Effect of Bit Error Rate and the Delay-Bandwidth Product nf=1250 bytes = 10000 bits, na=no=25 bytes = 200 bits Compare S&W with GBN efficiency for random bit errors with p = 0, 10-6, 10-5, 10-4 and R = 1 Mbps & reaction time 100 ms Delay-Bandwidth Product is 1 Mbps x 100 ms = 100000 bits = 10 frames → Use Ws = 11 Efficiency 0 10-6 10-5 10-4 S&W 8.9% 8.8% 8.0% 3.3% GBN 98% 88.2% 45.4% 4.9% Go-Back-N ARQ significant improvement over Stop-andWait for large delay-bandwidth product Go-Back-N becomes inefficient as error rate increases 51 5.2.3 Selective Repeat ARQ Go-Back-N ARQ protocol is inefficient because multiple frames are retransmitted when frame errors or losses occur Selective Repeat ARQ retransmits only each erroneous or lost individual frame Timeout causes individual corresponding frame to be resent NAK causes retransmission of the oldest un-acked frame Receiver maintains a receive window of sequence numbers that can be accepted Error-free, but out-of-sequence frames with sequence numbers within the receive window are buffered Arrival of frame with Rnext causes window to slide forward by 1 or more 52 Send Window for Selective 
Repeat ARQ Slast Srecent Ws m 1. Notice that the modulo 2 arithmetic is used here. 2. The transmitter is in ready state or block state. If Srecent= Slast+Ws-1, then the send window becomes empty and the transmitter goes into the blocking state. 3. If an error-free ACK frame is received with Rnext in the range between Srecent and Slast, the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1. 4. If an error-free NAK frame is received with Rnext in the range between Srecent and Slast, the frame with sequence number Rnext is retransmitted and the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1. 5. If the Rnext is outside the range from Srecent to Slast, the frame is discarded and no further action is taken. Send Window for Selective Repeat ARQ 1. 2. 3. 4. 5. The transmitter is in block state, that is Srecent= Slast+Ws-1, denoting the send window becomes empty, and the transmitter refused to accept requests for packet transfer from the upper layer. If a timer expires, the transmitter resends the corresponding packet and resets the timer. If an error-free ACK frame is received with Rnext in the range between Srecent and Slast, the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1, then changing to the ready state. If an error-free NAK frame is received with Rnext in the range between Srecent and Slast, the frame with sequence number Rnext is retransmitted and the send window slides forward by setting Slast = Rnext and the maximum window size is Slast+Ws-1, then changing to the ready state. If the Rnext is outside the range from Srecent to Slast, the frame is discarded and no further action is taken. 54 Receive Window for Selective Repeat ARQ Rnext WR 1. The receiver is always in the ready state waiting for a notification of an arriving frame from the lower layer. 2. If the received frame is correct and with sequence number Rnext, and if Rnext+1 up to Rnext+k-1 have already been received and buffered, then the receive sequence number is incremented to Rnext+k, the receive window slides forward, an ACK with Rnext is sent, and the corresponding packets are delivered to higher layer. 3. When a frame arrives, if no errors are detected and the sequence number is not Rnext but in the range from Rnext to Rnext+WR-1, then the frame is accepted and buffered, and a NAK or ACK frame with sequence number Rnext is transmitted. 4. If an arriving frame has no errors but the sequence number is outside the receive window, then the frame is discarded and an ACK with sequence number Rnext is sent. Selective Repeat ARQ 56 Selective Repeat ARQ A fr 0 fr 1 fr 2 fr 3 fr 4 fr 5 fr 6 fr 2 fr 7 A C K 2 A C K 2 fr 8 fr fr fr fr 9 10 11 12 Time B A C K 1 A C K 2 N A K 2 A C K 2 A C K 7 A C K 8 A C K 9 A C K 1 0 A C K 1 1 A C K 1 2 Using NAK or Timeout mechanism 57 Selective Repeat ARQ Example Selective Repeat ARQ Receiver Transmitter Send Window ... Frames transmitted S last and ACKed Timer Timer Srecent Slast+ Ws -1 Receive Window Frames received Rnext Buffers Slast Buffers Rnext+ 1 Slast+ 1 Rnext+ 2 Rnext + Wr -1 ... Timer Srecent ... Slast+ Ws - 1 ... 
Rnext+Wr -1 The maximum sequence number that can be accepted 59 The Sliding Send & Receive Windows Transmitter 2m-1 0 Receiver 1 2m-1 0 1 2 Slast send i window i+1 i + Ws – 1 Moves k forward when ACK arrives with Rnext = Slast + k k = 1, …, Ws-1 2 Rnext receive window j i j + Wr – 1 Moves forward by 1 or more when frame arrives with sequence number = Rnext 60 What size Ws and Wr are allowed? Send Window Example: m=2, M=2m=4, Ws=2m-1=3, Wr=3 Timeout and Frame 0 resent {0,1,2} {1,2} A B Receive Window fr0 {2} fr1 {.} fr2 ACK1 {0,1,2} {1,2,3} fr0 ACK2 Time ACK3 {2,3,0} {3,0,1} Old frame 0 accepted as a new frame because it falls in the receive window. It would be an error. 61 Ws = Wr = 2m-1 is the maximum allowed Example: m=2, M=22=4, Ws=2, Wr=2 Timeout and Frame 0 resent Send Window {0,1} A {1} fr0 fr0 fr1 B Receive Window {.} ACK1 {0,1} {1,2} Time ACK2 {2,3} Old frame 0 rejected because it falls outside the receive window 62 Applications of Selective Repeat ARQ TCP (Transmission Control Protocol): transport layer protocol uses variation of selective repeat to provide reliable stream service SSCOP (Service Specific Connection Oriented Protocol): error control for signaling messages in ATM networks 63 Efficiency of Selective Repeat Assume Pf frame loss probability, then time required to deliver a frame is tf /(1-Pf) n f - no h SR = t f /(1 - Pf ) R no = (1 - )(1 - Pf ) nf 64 Example: Effect of Bit Error Rate and Delay-Bandwidth Product nf=1250 bytes = 10000 bits, na=no=25 bytes = 200 bits Compare S&W, GBN & SR efficiency for random bit errors with p=0, 10-6, 10-5, 10-4 and R= 1 Mbps & 100 ms Efficiency 0 10-6 10-5 10-4 S&W 8.9% 8.8% 8.0% 3.3% GBN 98% 88.2% 45.4% 4.9% SR 98% 97% 89% 36% Selective Repeat outperforms GBN and S&W, but its efficiency drops significantly as the error rate is large. 65 Comparison of ARQ Efficiencies Assume na and no are negligible relative to nf, and L = 2(tprop+tproc)R/nf =(Ws-1) is the size of the pipe in multiples of frames. Selective-Repeat: h SR no = (1 - Pf )(1 ) » (1 - Pf ) nf Go-Back-N: hGBN = Stop-and-Wait: h SW = 1 - Pf 1 + (WS - 1) Pf = 1 - Pf For Pf≈0, SR & GBN same 1 + LPf For Pf→1, GBN & SW same (1 - Pf ) 1 - Pf » 2(t prop + t proc ) R na 1+ L 1+ + nf nf 66 ARQ Efficiencies ARQ Efficiency Comparison E fficie ncy 1.2 1 Selective Repeat 0.8 Go Back N 10 0.6 Stop and Wait 10 0.4 Go Back N 100 0.2 Stop and Wait 100 0 10 -9 -9 10 -8-8 10 -7-7 10 -6-6 10 -5-5 10 -4 -4 10 -3-3 10 -2 -2 10 -1 -1 - LOG(p) p Delay-Bandwidth product = 10, 100 67 Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.3 Other Peer-to-Peer Protocol 68 Flow Control buffer fill Information frame Transmitter Receiver Control frame Receiver has limited buffering to store arriving frames Several situations cause buffer overflow Mismatch between the sending rate and the rate at which the high-layer user can retrieve data Surges in frame arrivals Flow control prevents buffer overflow by regulating rate at which source is allowed to send information 69 X ON / X OFF threshold Information frame Transmitter Receiver Transmit X OFF Transmit Time A on off on off B Time 2Tprop Use the threshold flow control method. When the buffer is occupied over the threshold, an OFF signal will be activated. The threshold would be larger than the 2 Tprop R bits which are the still remaining free in the receive buffer. 
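Before turning to window flow control, here is a short numerical check (a sketch, not part of the notes; the helper names are made up) of the Stop-and-Wait, Go-Back-N, and Selective Repeat efficiency formulas summarized above, using the running example nf = 10,000 bits, na = no = 200 bits, R = 1 Mbps and a 100 ms reaction time, so the pipe holds L = 10 frames and Ws = 11:

import math

nf, na, no = 10_000, 200, 200        # frame, ACK and overhead sizes in bits
R, reaction = 1e6, 0.100             # bit rate (b/s) and 2(tprop + tproc) (s)
L = reaction * R / nf                # delay-bandwidth product in frames (= 10)
Ws = int(L) + 1                      # Go-Back-N window size (= 11)

def efficiencies(p):
    Pf = 1 - math.exp(-nf * p)       # frame loss probability for bit error rate p
    sw  = (1 - no / nf) * (1 - Pf) / (1 + na / nf + L)
    gbn = (1 - no / nf) * (1 - Pf) / (1 + (Ws - 1) * Pf)
    sr  = (1 - no / nf) * (1 - Pf)
    return sw, gbn, sr

for p in (0, 1e-6, 1e-5, 1e-4):
    sw, gbn, sr = efficiencies(p)
    print(f"p = {p:g}: S&W {sw:.1%}   GBN {gbn:.1%}   SR {sr:.1%}")

Running this reproduces the comparison table above (for example p = 10^-5 gives roughly 8.0%, 45.4% and 89%) and makes the trend concrete: Selective Repeat degrades gracefully with the error rate, Go-Back-N pays a penalty proportional to the window size, and Stop-and-Wait is limited by the delay-bandwidth product even on an error-free channel.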
70 Window Flow Control tcycle Return of permits A Time B Tcycle > Ws Tf Sliding Window ARQ method with Ws equal to the receive buffer available Transmitter can never send more than Ws frames ACKs that slide window forward can be viewed as permits to transmit more Can also pace ACKs as shown above Time Return permits (ACKs) at the end of cycle to regulate the transmission rate; Tcycle > Ws Tf Problems using sliding window for both error and flow control Choice of window size must take into account the delay-bandwidth product of the channel and as well as the receive buffer size Interplay between transmission rate and retransmissions TCP separates error and flow control 71 5.3.2 Timing Recovery for Synchronous Services Network output not periodic Synchronous source sends periodic information blocks Network Applications that involve voice, audio, or video will generate a synchronous information stream, meaning periodically. Information carried by equally-spaced fixed-length packets Network that performs multiplexing and switching will introduce random delay Packets experience variable transfer delay Jitter (variation in interpacket arrival times) also introduced Timing recovery re-establishes the synchronous nature of the stream 72 Playout Buffer Provide playout buffer and Tmax to Perform timing Recovery Packet Playout Packet Arrivals Playout Buffer Packet Arrivals ■Sequence numbers help order packets Packet Playout Tmax Delay first packet by maximum network delay, Tmax All other packets would arrive with less delay than Tmax Packets will play out periodically thereafter 73 Playout clock must be synchronized to transmitter clock Arrival times Time Send times Playout times Tplayout Time Receiver too fast buffer starvation Receiver too slow; buffer fills and overflows time Receiver speed Time just right Many late packets 74 Tplayout time Tplayout time Adaptive Timing Recovery Buffer for information blocks Timestamps inserted in packet payloads to indicate when information was produced Error signal to indicate the difference between the transmitter and the receiver clocks + t4 t3 t2 t1 Add Smoothing filter Playout command Adjust frequency - Recovered clock Timestamps Counter Fig. 5.29 Adaptive clock recovery for clock synchronization Counter attempts to synchronize the transmitter clock Frequency of counter (Receiver Clock) is adjusted according to arriving timestamps Jitter introduced by network causes fluctuations in buffer & in local clock 75 Synchronization to a Common Clock (Network Clock using GPS) Receiver Transmitter ∆f fs ∆f Network fr=fn-Δf Δf=fn-fs=fn-(M/N)fn fn/fs=N/M fr fn Network clock Clock recovery simple if a common clock is available to transmitter & receiver E.g. 
SONET network clock; Global Positioning System (GPS) Transmitter sends Δf between its frequency and network frequency Receiver adjusts its local frequency by Δf Packet delay jitter can be removed completely 76 Example: Real-Time Protocol RTP (RFC 1889) designed to support realtime applications such as voice, audio, video RTP provides means to carry: Type of information source Sequence numbers Timestamps Actual timing recovery must be done by higher layer protocol MPEG2 for video, MP3 for audio 77 5.3.3 TCP Reliable Stream Service and Flow Control Application Layer writes bytes into send buffer through socket Application layer TCP transfers byte stream in order, without errors or Application Layer reads duplications bytes from receive buffer through socket Write 45 bytes Write 15 bytes Write 20 bytes Read 40 bytes Read 40 bytes Transport layer Segments Transmitter Receiver Receive buffer Send buffer ACKs 78 TCP ARQ Method • TCP delivers a connection-oriented service over best effort (connectionless) service of IP • • • • • Packets can arrive with errors or be lost Packets can arrive out-of-order Packets can arrive after very long delays Duplicate segments must be detected & discarded Must avoid old segments from previous connections • Sequence Numbers • Very long sequence number (32 bits) to deal with long delays • Initial sequence numbers negotiated during connection setup (to deal with very old duplicates) • Accept segments within a receive window • TCP uses Selective Repeat ARQ • Transfers byte stream without preserving boundaries 79 Transmitter Receiver Send Window Receive Window Slast + WA-1 ... Octets transmitted Slast & ACKed ... Srecent Rlast Rlast + WR – 1 ... Slast + WS – 1 Slast = oldest unacknowledged byte Srecent = highest-numbered transmitted byte Slast+WA-1 = highest-numbered byte that can be transmitted Slast+WS-1 = highest-numbered byte that can be accepted from the application Rnext Rnew Rlast = highest-numbered byte not yet read by the application (has been ACKed, but still in the receive buffer) Rnext = next expected byte Rnew = highest numbered byte received correctly Rlast+WR-1 = highest-numbered byte that can be accommodated in receive buffer 80 Flow Control TCP separates the flow control function from the acknowledgement function. The receiver controls the transmission rate from the sender to prevent from buffer overflow TCP receiver advertises a window size, WA, in the segment header, specifying number of buffers that can be available at the receiver WA = WR – (Rnew – Rlast) TCP sender is obliged to keep the number of outstanding bytes below WA, that is, (Srecent - Slast) ≤ WA Send Window Receive Window Slast + WA-1 ... ... Slast Srecent WA ... Slast + Ws – 1 Rlast Rnew Rlast + WR – 1 81 TCP Connections TCP Connection Connection Setup with Three-Way Handshake Three-way exchange to negotiate initial Seq. #’s for connections in each direction Data Transfer One connection each way Identified uniquely by Send IP Address, Send TCP Port #, Receive IP Address, Receive TCP Port # Exchange segments carrying data Graceful Close Close each direction separately 82 Three Phases of TCP Connection Host A Host B Three-way Handshake Data Transfer Graceful Close 83 1st Handshake: Client-Server Connection Request Initial Sequence number from client to server SYN bit set indicates request to establish connection from client to server 84 2nd Handshake: ACK from Server ACK Seq. no. = Initial Seq. no. 
+ 1 ACK bit set acknowledges connection request; Client-to-Server connection established 85 2nd Handshake: Server-Client Connection Request Initial Seq. # from server to client SYN bit set indicates request to establish connection from server to client 86 3rd Handshake: ACK from Client ACK Seq. # = Init. Seq. #+1 ACK bit set acknowledges connection request; Connections in both directions established 87 TCP Data Exchange Application Layers write bytes into buffers TCP sender forms segments when bytes exceed threshold or timer expires, or upon PUSH command from applications Consecutive bytes from buffer inserted in payload Sequence no. and ACK no. inserted in header Checksum calculated and included in header TCP receiver Performs selective repeat ARQ functions Writes error-free, in-sequence bytes to receive buffer 88 Data Transfer: Server-to-Client Segment 12 bytes of payload Push set 12 bytes of payload carries telnet option negotiation 89 Graceful Close: Client-to-Server Connection Client initiates closing of its connection to server 90 Graceful Close: Client-to-Server Connection ACK Seq. no. = Previous Seq. no. + 1 Server ACKs request; client-to-server connection closed 91 TCP Retransmission by Timeout TCP retransmits the segment after its corresponding timer expires Timeout too short: excessive number of retransmissions Timeout too long: recovery too slow Timeout value depends on round-trip time (RTT): time from when a segment is sent to when its ACK is received RTT in Internet is widely variable from time to time or from connection to connection, therefore TCP adaptively sets the re-transmission time-out value TCP continuously estimates the RTT by tRTT(new) = a tRTT(old) + (1 – a) tn tn : the time elapsed from the instant the n-th segment is transmitted until the corresponding ACK is received a = 7/8 typical 92 RTT Variability The time-out value tout is determined by taking into account not only the mean tRTT but also the variability in the estimation of the round-trip time The variability can be obtained by the standard deviation of the roundtrip time, denoted by sRTT The time-out value is then given by: tout = tRTT + k sRTT If RTT highly variable, timeout value increases accordingly If RTT nearly constant, timeout value is close to the mean RTT estimate Approximate estimation of the standard deviation by dRTT(new) = b dRTT(old) + (1-b) | tn - tRTT | The typical value of b is ¾. 
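The adaptive estimate above is easy to state in code. A minimal sketch follows (variable names are illustrative and this is not the implementation of any particular TCP stack), combining the smoothed RTT, the smoothed deviation, and the resulting time-out value:

class RttEstimator:
    # Exponentially weighted estimates of the round-trip time and its deviation.
    def __init__(self, a=7/8, b=3/4, k=4):
        self.a, self.b, self.k = a, b, k
        self.t_rtt = None            # smoothed round-trip time tRTT
        self.d_rtt = 0.0             # smoothed deviation dRTT (stand-in for sigma_RTT)

    def update(self, tn):
        # tn: measured time from sending segment n until its ACK is received
        if self.t_rtt is None:
            self.t_rtt = tn          # first sample initializes the estimate
        else:
            self.d_rtt = self.b * self.d_rtt + (1 - self.b) * abs(tn - self.t_rtt)
            self.t_rtt = self.a * self.t_rtt + (1 - self.a) * tn
        return self.timeout()

    def timeout(self):
        # tout = tRTT + k * dRTT, with k = 4 as noted below
        return self.t_rtt + self.k * self.d_rtt

est = RttEstimator()
for sample in (0.100, 0.120, 0.300, 0.110):     # hypothetical RTT samples in seconds
    print(round(est.update(sample), 3))

A single delayed sample (0.300 s here) inflates the deviation term and therefore the time-out, which is exactly the behavior wanted on paths whose RTT is highly variable.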
The time-out value that has been found to work well is then tout = tRTT + 4 dRTT 93 Chapter 5 Peer-to-Peer Protocols and Data Link Layer PART II: Data Link Controls 94 Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.4 Framing 95 Framing for Frame Synchronization Framing involves identifying the beginning and end of a block of information Framing presupposes that there is enough clock synchronization at the physical layer to at least identify an individual bit or byte Frame boundaries can be determined using: Character Counts Control Characters Flags CRC Checks The data link layer will do the frame synchronization 96 Character-Oriented Framing Data to be sent A DLE B ETX DLE STX E After stuffing and framing DLE STX A DLE ETX Asynchronous transmission systems using ASCII to transmit printable characters Octets with HEX value <20 are nonprintable Special 8-bit patterns used as control characters ETX DLE DLE STX E Frames consist of integer number of bytes DLE DLE B STX (start of text) = 0x02; ETX (end of text) = 0x03; Byte used to carry non-printable characters in frame DLE (data link escape) = 0x10 DLE STX (DLE ETX) used to indicate beginning (end) of frame Insert extra DLE in front of occurrence of DLE STX (DLE ETX) in frame All DLEs occur in pairs except at frame boundaries Framing & Bit Stuffing HDLC frame Flag Address Control Information FCS Flag any number of bits Frame delineated by flag character HDLC uses bit stuffing to prevent occurrence of flag 01111110 inside the frame Transmitter inserts extra 0 after each consecutive five 1s inside the frame Receiver checks for five consecutive 1s if next bit = 0, it is removed if next two bits are 10, then flag is detected If next two bits are 11, then frame has errors 98 Example: Bit stuffing & destuffing (a) Data to be sent 0110111111111100 After stuffing and framing 01111110 01101111101111100001111110 (b) Data received 01111110000111011111011111011001111110 After destuffing and deframing *000111011111-11111-110* 99 PPP Frame Flag Address 01111110 1111111 Control 00000011 Protocol Information CRC Flag 01111110 integer # of bytes All stations are to accept the frame Specifies what kind of packet is contained in the payload, e.g., LCP, NCP, IP, OSI CLNP, IPX PPP uses similar frame structure as HDLC, except Unnumbered frame Protocol type field Payload contains an integer number of bytes PPP uses the same flag, but uses byte stuffing Problems with PPP byte stuffing Size of frame varies unpredictably due to byte insertion Malicious users can inflate bandwidth by sending 7D or 7E in the information field Byte-Stuffing in PPP PPP is character-oriented version of HDLC Flag is 0x7E (01111110) Control escape 0x7D (01111101) Any occurrence of flag or control escape inside of frame is replaced with 0x7D followed by original octet XORed with 0x20 (00100000) Data to be sent 7E 41 41 7D 42 7E 50 70 46 7D 5D 42 7D 5E 50 70 After stuffing and framing 46 7E CRC-based Framing: Generic Framing Procedure (GFP) GFP payload area 2 2 2 2 0-60 PLI cHEC Type tHEC GEH Payload length indicator Core header error checking Payload type Type header error checking GFP extension headers GFP payload GFP payload GFP combines a frame length indication field with Header Error Control (HEC) method to delineate ATM cells PLI can be used to help finding the beginning of a frame and the beginning of the next frame. 
cHEC (CRC-16) protects against errors in count field (single-bit error correction + error detection) GFP is designed to operate over octet-synchronous physical layers (e.g. SONET) In the frame-mapped mode, GFP can be used to carry variable-length payloads such as Ethernet frames, PPP/IP packets, or HDLC-framed PDU In the transparent mode, GFP carries synchronous information stream with low delay in fixed-length frames. Transport mode GFP is suitable for Gigabit Ethernet, and storage devices GFP Synchronization and Scrambling Synchronization in three-state process Hunt state: the initial state where it examines 4 bytes to see if CRC of the first two bytes equals the contents of the next two bytes Pre-sync state: the intermediate state where the tentative PLI indicates the boundary of the next frame If N successful frame detections, move to the sync state If no match, go to the hunt state Sync state: the normal state If no, move forward by one byte and check again If yes, move to the pre-sync state Validate PLI/cHEC, extract payload, go to the next frame Use single-error correcting capability of CRC-16 Go to hunt state if non-correctable error happens Scrambling Payload is scrambled to prevent malicious users from inserting long strings of 0s which cause SONET equipment to lose bit clock synchronization (as discussed in line code section). For example, 1+x43 scrambler for ATM Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.6 High-Level Data Link Control (HDLC) High-Level Data Link Control (HDLC) Bit-oriented data link control Derived from Synchronous Data Link Control (SDLC) developed by IBM Related to Link Access Procedure Balanced (LAPB) developed by CCITT-ITU LAPD in ISDN LAPM in cellular telephone signaling NLPDU Network layer DLSDU The data link layer can be configured to provide the connectionoriented or connectionless services. “Packet” DLSAP DLSAP Data link layer Network layer DLSDU DLPDU “Frame ” Physical layer Data link layer Physical layer Fig. 
5.43 The data link layer 5.6.2 HDLC Configuration and Transfer Modes Unbalanced Configuration: Normal Response Mode (NRM) of HDLC Primary station uses Command to poll and Secondary replies using response frame Commands Primary Responses Secondary Secondary Balanced Configuration: Asynchronous Balanced Mode (ABM) Two stations act as peers and information frames are transmitted in full-duplex manner Primary Commands Secondary Secondary Responses Responses Secondary Commands Mode is selected during connection establishment Primary 5.6.3 HDLC Frame Format Flag Address Control Information FCS Flag Codes in fields have specific meanings and uses Flag: delineate frame boundaries Address: identify secondary station (1 or more octets) in unbalanced configuration; in ABM mode, a station can act as primary or secondary so address changes accordingly Control: gives the main functionality of HDLC (1 or 2 octets) Information: contains user data; length not standardized, but implementations impose maximum Frame Check Sequence: 16- or 32-bit CRC Control Field Format Information Frame 1 2-4 0 N(S) 5 6-8 P/F N(R) P/F N(R) Supervisory Frame 1 0 S S Unnumbered Frame 1 1 M M S: Supervisory Function Bits N(S): Send Sequence Number N(R): Receive Sequence Number P/F M M M M: Unnumbered Function Bits P/F: Poll/final bit used in interaction between primary and secondary Information frame (I-frame) Each I-frame contains the send sequence number N(S) Positive piggybacked ACK N(R) N(R) = Sequence number of the next frame expected, meaning to acknowledge all frames up to and including N(R)-1 3 or 7 bit sequence number for N(S) and N(R) N(S) = Sequence number of the sending I-frame Maximum window sizes 7 or 127 Poll/Final Bit NRM: Primary polls station by setting P=1; Secondary sets F=1 in last I-frame in response Primary and secondary always interact via paired P/F bits Error Detection & Loss Recovery Frames lost due to loss-of-synch or receiver buffer overflow Frames may undergo errors in transmission CRCs detect errors and such frames are treated as lost Recovery through ACKs, timeouts, and retransmission Sequence number to identify out-of-sequence and duplicated frames HDLC provides for options that implement several ARQ methods Supervisory frames Supervisory frame is used for error control (ACK, NAK) and flow control (Don’t Send) Receive Ready (RR), SS=00 with N(R) Reject (REJ), SS=01 with N(R) REJ frames are used by the receiver to send a negative acknowledgement, indicating the transmitter should go back and retransmit frames from N(R) onwards (for Go-Back-N ARQ). Receive Not Ready (RNR), SS=10 with N(R) RR frames are used to ACK frames when no I-frames are available to piggyback the acknowledgement RNR frame acknowledges all frames up to N(R)-1 and informs the transmitter that the receiver has temporary problems, that is, no buffers, and will not accept any more frames. 
Selective Reject (SREJ), SS=11 with N(R) SREJ frame indicates to the transmitter that it should retransmit the frame indicated in the N(R) subfield (for selective repeat ARQ) Unnumbered Frames Set the data transfer mode during call setup or release Unnumbered I-Information transfer between stations UI: Unnumbered information frame Recovery used when normal error/flow control fails SABM: Set Asynchronous Balanced Mode SNRM: Set Normal Response Mode UA: acknowledges acceptance of mode setting command DISC: terminates logical link connection FRMR: frame with correct FCS but impossible semantics RSET: indicates that the transmitter station is resetting the sequence numbers XID: exchange station id and characteristics 5.6.4 Typical Frame Exchange Example: Exchange of frames for connection establishment and release in data link connection In HDLC Set Asynchronous Balanced Mode (SABM) Disconnect (DISC) Unnumbered Acknowledgment (UA) SABM UA Data transfer Fig. 5.47 DISC UA Example: HDLC using NRM to Exchange Frames Address of secondary A polls B Secondaries B, C Primary A B, RR, 0, P Supervisory frame N(R) N(S) N(R) X Unnumbered frame A rejects frame 1 B, SREJ, 1 A polls C C, RR, 0, P B, I, 0, 0 B, I, 1, 0 B, I, 2, 0,F Information frame C, RR, 0, F A polls B, requests selective retransmission of frame1 A send info frame 0 to B, ACKs up to 4 B sends 3 info frames C nothing to send B, SREJ, 1,P B, I, 1, 0 B resends frame1 B, I, 3, 0 Then frame 3 & 4 B, I, 4, 0, F B, I, 0, 5 Time Example: HDLC using Asynchronous Balanced Mode to Exchange Frame Combined Station B Combined Station A N(S), N(R) B sends 5 frames B, I, 0, 0 A, I, 0, 0 B, I, 1, 0 A, I, 1, 1 B, I, 2, 1 A, I, 2, 1 B, I, 3, 2 B, REJ, 1 B, I, 4, 3 B goes back to 1 X ACKs frame 0 from A Rejects frame 1 of A A, I, 3, 1 B, I, 1, 3 B, I, 2, 4 B, I, 3, 4 B, RR, 2 ACKs frame 1 from A B, RR, 3 ACKs frame 2 Flow Control Flow control is required to prevent transmitter from overrunning receiver buffers Receiver can control flow by delaying acknowledgement messages Receiver can also use supervisory frames to explicitly control transmitter Receive Not Ready (RNR) and Receive Ready (RR) I3 I4 I5 RNR5 RR6 I6 Chapter 5 Peer-to-Peer Protocols and Data Link Layer 5.7 Link Sharing Using Statistical Multiplexing 118 Statistical Multiplexing Multiplexing is just to concentrate bursty traffic onto a shared line; statistical multiplexing is to multiplex on demand Greater efficiency and lower cost Header Data payload A B Buffer Output line C 119 Input lines Tradeoff Delay for Efficiency (a) Dedicated lines A2 A1 B2 B1 C1 (b) Shared lines A1 C2 C1 B1 A2 B2 C2 Dedicated lines involve not waiting for other users, but lines are used inefficiently when user traffic is bursty Shared lines concentrate packets into shared line; packets buffered (delayed) when line is not immediately available 120 Multiplexers are inherent in Packet Switches 1 1 2 2 . . . . . . N N Packets/frames forwarded to buffer prior to transmission from switch Multiplexing occurs in these buffers 121 Multiplexer Modeling Input lines A Output line B Buffer C Arrivals: What is the packet interarrival pattern? Service Time: How long are the packets? Service Discipline: What is order of transmission? Buffer Discipline: If buffer is full, which packet is dropped? 
Performance Measures: [Delay Distribution; Packet Loss Probability; Line Utilization] versus Traffic Load Intensity or Buffer Size 122 Delay = Waiting + Service Times P2 P1 Packet completes transmission P3 P5 Service time Packet begins transmission Packet arrives at queue P4 Waiting time P1 P2 P3 P4 P5 Packets arrive and wait for service Waiting Time: from the instant of arrival to the instant of service beginning Service Time: time to transmit packet Delay: total time in system = waiting time + service time 123 Fluctuations in Packets in the System (a) Dedicated lines A1 A2 B2 B1 C2 C1 (b) Shared line (c) N(t) Number of packets in the system A1 C1 B1 A2 B2 C2 124 Burst Multiplexing / Speech Interpolation Many Voice Calls Fewer Trunks Part of this burst is lost Voice active < 40% time No buffering, on-the-fly switch bursts to available trunks Can handle 2 to 3 times as many calls Tradeoff: Trunk Utilization vs. Speech Loss Fractional Speech Loss: fraction of active speech lost Demand Characteristics Talkspurt and Silence Duration Statistics 125 Proportion of time speaker active/idle Speech Loss vs. Trunks speech loss trunks Typical requirement 48 24 32 40 # connections n ænö k (k - m)ç ÷ p (1 - p) n -k èkø n! ænö k = m +1 speech loss = where ç ÷ = . np k ! ( n k )! èkø å 126 Effect of Scale Larger flows lead to better performance Multiplexing Gain = # speakers / # trunks Trunks required for 1% speech loss Speakers Trunks Multiplexing Gain Utilization 24 13 1.85 0.74 32 16 2.00 0.80 40 20 2.00 0.80 48 23 2.09 0.83 127 Packet Speech Multiplexing Many voice A3 terminals generating B3 voice packets A2 A1 B2 B1 C3 C2 C1 D3 D2 D1 Buffer B3 C3 A2 D2 C2 B1 C1 D1 A1 Buffer overflow B2 Digital speech carried by fixed-length packets No packets when speaker silent Synchronous packets when speaker active Buffer packets & transmit over shared high-speed line Tradeoffs: Utilization vs. Delay/Jitter & Loss 128 Packet Switching of Voice Sent Received 1 2 1 3 t 2 3 t Packetization delay: time for speech samples to fill a packet Jitter: variable inter-packet arrivals at destination Playback strategies required to compensate for jitter/loss Flexible delay inserted to produce fixed end-to-end delay Need buffer overflow/underflow countermeasures Need clock recovery algorithm 129 Home Work Problem 1: 5.15 Problem 2: 5.21 Problem 3: 5.33 Please hand in on December 12 Problem 4: 5.36 Problem 5: 5.52 ========================================= Problem 6: 5.56 Problem 7: 5.67 Problem 8: 5.69 Please hand in on December 19 Problem 9: 5.72 Problem 10: 5.73 130
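As a final worked example, the speech-loss formula from the statistical-multiplexing discussion above can be evaluated directly. The sketch below is illustrative only: the helper names are made up, and the speaker-activity factor p = 0.4 is an assumption consistent with the "voice active < 40% of the time" remark, so the trunk counts it produces may differ slightly from the table in the notes:

from math import comb

def speech_loss(n, m, p):
    # Fraction of active speech lost when n speakers share m trunks and each
    # speaker is active a fraction p of the time (formula from the notes).
    excess = sum((k - m) * comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(m + 1, n + 1))
    return excess / (n * p)

def trunks_needed(n, p, target=0.01):
    # Smallest number of trunks that keeps the speech loss at or below target.
    m = 1
    while speech_loss(n, m, p) > target:
        m += 1
    return m

print(f"24 speakers on 13 trunks: {speech_loss(24, 13, 0.4):.2%} speech loss")
for n in (24, 32, 40, 48):
    m = trunks_needed(n, 0.4)
    print(f"{n} speakers -> {m} trunks (multiplexing gain {n / m:.2f})")

Increasing the number of speakers improves the multiplexing gain for the same loss target, which is the effect-of-scale observation made in the notes.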