CMPE 150 Final Exam
Winter 2009, March 18, 2009
NAME:______________________

1.
a. To send a binary signal over a physical link, we saw that sending up through the seventh harmonic would give a "reasonable quality" square wave. With this in mind, what bandwidth is needed to send a 1 Mbps binary signal with this "reasonable quality"?

Solution: The most bandwidth-demanding pattern is the alternating 1, -1 sequence. At 1 Mbps this is a square wave with a period of two bit times (2 µs), so its fundamental frequency is 0.5 MHz and its 7th harmonic is at 7 x 0.5 = 3.5 MHz. A channel passing up to and including the 7th harmonic must therefore pass from 0+ε to 3.5 MHz -- where ε depends on the longest runs allowed -- so 0 to 3.5 MHz effectively. (A repeating 1, 0 sequence has the same 0.5 MHz fundamental plus a constant (DC, zero-frequency) component of magnitude 0.5, and must likewise pass 0 to 3.5 MHz. Normally DC components are avoided.)

b. What is achieved by digitizing an analog signal such as voice and then sending that digital signal (encoded) over an analog channel, vs. sending the original analog signal?

Solution: Digitizing improves immunity to noise. Although digitizing (sampling and quantizing) introduces some noise, once the signal is digital, as long as it is received and regenerated before accumulated noise makes that impossible, no additional noise is introduced in the transmission of the digital signal.

c. How does a repeater extend the length of a LAN?

Solution: A repeater receives, regenerates, and retransmits signals (in both directions), allowing digital signals to travel longer distances.

2. Suppose that for a real-time application, a data rate of 160 Mbps is needed on the link between the data source and its user. The modem used to send data on this link can send any one of 16 different signal levels at each signaling time.

a. What is the minimum bandwidth required for this link?

Solution: Using Nyquist: data rate = 2H log2 V
160 Mbps = 2H log2 16 = 8H
8H = 160 x 10^6 => H = 20 x 10^6 Hz (or 20 MHz)
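The Nyquist arithmetic above (and the Shannon limit used in the next part) can be checked with a few lines of Python; a minimal sketch, with helper names of my own choosing:

```python
import math

def nyquist_bandwidth(data_rate_bps, levels):
    # Nyquist: data rate = 2 * H * log2(V)  =>  H = rate / (2 * log2(V))
    return data_rate_bps / (2 * math.log2(levels))

def shannon_capacity(bandwidth_hz, snr_db):
    # Shannon: C = H * log2(1 + S/N), converting S/N from dB first
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

print(nyquist_bandwidth(160e6, 16) / 1e6)  # 20.0 (MHz)
print(shannon_capacity(20e6, 30) / 1e6)    # ~199.34 (Mbps)
```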
b. If you have a link with a bandwidth of 20 MHz and a signal-to-noise ratio of 30 dB, what is the maximum data rate this link can achieve?

Solution: 30 dB = 10 log10(S/N) => 3 = log10(S/N), so S/N = 1000
Using Shannon: data rate = H x log2(1 + S/N)
Data rate = 20 x 10^6 x log2(1 + 1000) = 199.34 x 10^6 bps = 199.34 Mbps

c. Will the modem employing 16 different signal levels successfully achieve the maximum data rate found for the link in part (b)? Why (or why not)?

Solution: No. While Shannon's theorem says the 20 MHz channel with a 30 dB signal-to-noise ratio has a theoretical limit of 199.34 Mbps, it does not tell us how that limit can be achieved. The 16-level modem achieves only its Nyquist data rate of 160 Mbps; encoding methods much more complex and sophisticated than simply using multiple levels are required to approach the Shannon limit of a channel.

3. A received data string with appended CRC, using CRC generator x^3 + 1, is 110100010.

a. What is the data in the received string?

Solution: The received string is 110100010; the last 3 bits are the CRC bits, so the data in the string is 110100. Dividing the received string by the generator 1001 (mod-2 long division, quotient 110010):

110100010
1001
----
 1000
 1001
 ----
  0010
  0000
  ----
   0100
   0000
   ----
    1001
    1001
    ----
     0000
     0000
     ----
      000

No remainder means no error was detected.

b. What can you tell about possible errors in the data received?

Solution: No error was detected in the received data using the CRC algorithm. A zero remainder guarantees there was no single-bit error, no odd number of bit errors (since x + 1 is a factor of x^3 + 1), and, with 3 check bits, no burst error of length 3 or less. However, there could still be other undetected errors -- CRC does not detect all error patterns.

4. Suppose you work for a successful start-up company that has expanded into additional space in a large building. It adds a new bridge (B2) for the LANs in the new space, connecting this bridge to LAN 2 already in place in the old space [text Fig. 4-42]. The mail server is on LAN 1. You move from port A on LAN 1 to port H on LAN 4 (and get a window office now because of your seniority).
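The long division in question 3 can be double-checked mechanically. A short sketch of mod-2 division (the helper below is mine, not part of the exam):

```python
def crc_remainder(frame: str, generator: str) -> int:
    # Mod-2 (XOR) division of the received bit string by the generator;
    # a zero remainder means the CRC check detected no error.
    value = int(frame, 2)
    gen = int(generator, 2)
    deg = len(generator) - 1
    for i in range(len(frame) - 1, deg - 1, -1):
        if (value >> i) & 1:
            value ^= gen << (i - deg)
    return value

print(crc_remainder("110100010", "1001"))  # 0 -> no error detected
print(crc_remainder("010100010", "1001"))  # nonzero -> a flipped bit is caught
```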
How does your mail get to you?

Solution: After the move, the entry in B1's hash table for my host's MAC address (formerly at A) times out and is removed, since frames no longer arrive from A with that source address. B1 does not know my host is now at H, so when the mail server on LAN 1 sends frames for me, B1 forwards them onto LAN 2 by flooding. B2 likewise has no entry for my MAC address in its hash table, so it floods the frames onto LAN 3 and LAN 4. When my host at H sends an acknowledgment back to the mail server on LAN 1, B2 and B1 see this acknowledgment's source address and learn -- putting the appropriate entries in their hash tables -- that B1 should forward frames for my host onto LAN 2 and B2 should forward them onto LAN 4. Thus the hash table entries routing to my new location at H are built by "backward learning".

5. How does CSMA/CD improve throughput vs. CSMA?

Solution: CSMA/CD improves throughput through collision detection: a station listens while it transmits and aborts as soon as it detects a collision, so a collision wastes at most the time needed to "seize" the channel, which is 2 x the propagation delay to the furthest node on the LAN. In contrast, CSMA starts transmitting once the channel is free and keeps transmitting even when a collision occurs, retransmitting later; bandwidth is wasted by continuing to transmit after a collision, since that data is garbage and will have to be retransmitted anyway.

6. A CSMA/CD 10 Mbps LAN with a maximum distance of 1 km between stations sharing the link, no repeaters, data frames 512 bits long (including 32 bits used for header, CRC, etc.), and with the first bit slot following a successful data transmission reserved for the receiver to send back a 32-bit acknowledgment frame, was found to have a maximum effective data rate of 6.45 Mbps.
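The 6.45 Mbps figure just quoted can be reproduced numerically. A sketch, assuming the timing model used in this course: signal speed 2 x 10^8 m/s, the channel seized for two propagation delays, and one ack slot per frame (the function name and parameters are mine):

```python
def effective_rate(link_bps, frame_bits, overhead_bits, ack_bits, distance_m,
                   signal_speed=2e8):
    # Time to deliver one frame and its ack on a CSMA/CD LAN:
    # seize channel (2 * prop), transmit frame, frame propagates,
    # transmit ack slot, ack propagates back.
    prop = distance_m / signal_speed
    total = 2 * prop + frame_bits / link_bps + prop + ack_bits / link_bps + prop
    useful = frame_bits - overhead_bits
    return useful / total

print(effective_rate(10e6, 512, 32, 32, 1000) / 1e6)     # ~6.45 (Mbps)
print(effective_rate(100e6, 12000, 32, 32, 1000) / 1e6)  # ~85.3 (Mbps)
```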
If the speed of the LAN were increased to 100 Mbps and the data frame on the LAN link were increased to 1500 bytes (again including 32 bits used for header, CRC, etc.), and assuming CSMA/CD works correctly at this LAN speed and distance, what effective data rate would this LAN have?

Solution:
10 Mbps timeline (signal speed = 2/3 x speed of light = 2 x 10^8 m/s):
• Seizing the channel: 2 x propagation delay = 2 x (1000 / 2 x 10^8) = 10 µs
• Transmitting the data: 512 bits / 10 x 10^6 bps = 51.2 µs
• Propagation delay of the signal = 5 µs
• Transmitting the 32-bit ack slot: 32 / 10 x 10^6 = 3.2 µs
• Propagation delay of the ack = 5 µs
• Total time = 10 + 51.2 + 5 + 3.2 + 5 = 74.4 µs
• Useful data transmitted = 512 - 32 = 480 bits
• Effective data rate = 480 / 74.4 x 10^-6 ≈ 6.45 Mbps

100 Mbps timeline (NB: 1500 bytes = 12,000 bits):
• Seizing the channel: 2 x (1000 / 2 x 10^8) = 10 µs
• Transmitting the data: 12,000 / 100 x 10^6 = 120 µs
• Propagation delay of the signal = 5 µs
• Transmitting the 32-bit ack slot: 32 / 100 x 10^6 = 0.32 µs
• Propagation delay of the ack = 5 µs
• Total time = 10 + 120 + 5 + 0.32 + 5 = 140.32 µs
• Useful data transmitted = 12,000 - 32 = 11,968 bits
• Effective data rate = 11,968 / 140.32 x 10^-6 ≈ 85.3 Mbps

7. Given link state packets (each carrying a sequence number and age) for subnet nodes A, B, C, D, E:

A: B|2, D|5
B: A|2, D|2, C|6
C: B|6, D|3, E|2
D: B|2, C|3, E|4, A|5
E: C|2, D|4

a. Draw the subnet.

Solution: [Figure: nodes A, B, C, D, E with links A-B = 2, A-D = 5, B-D = 2, B-C = 6, C-D = 3, C-E = 2, D-E = 4.]

b. Find the optimal (OSPF / link state) routing table for node A.

Solution:
Destination:  A         B  C  D  E
Cost:         0         2  7  4  8
Via router:   A (self)  B  B  B  B

8. What protocol does (IP) traceroute use? How does it work? What does it depend upon to get back the information desired?

Solution: Traceroute relies on ICMP. It sends probe packets (UDP datagrams in the classic Unix implementation; some versions use ICMP Echo) with increasing TTL values: first 3 packets with TTL = 1, then another 3 with TTL = 2, and so on. Each router that receives a probe decrements the TTL and then forwards it to the next router.
When a probe's TTL expires, the router that discarded it sends an ICMP Time Exceeded message back to the sender, which is how traceroute learns that router's address. It therefore depends on the routers along the path actually generating these ICMP replies.

9.
a. What is the purpose of TCP's "slow start" algorithm?

Solution: Slow start is part of TCP's congestion control: the sender increases its sending rate gradually from a small initial congestion window while watching to see when packet loss starts to occur, taking loss as the signal of congestion.

b. What is the effect of "slow start" when some of the network between client and server includes wireless links?

Solution: On wireless links, traffic loss may be due to noise or hand-off issues rather than congestion. Slow start does not distinguish loss due to congestion from loss due to other causes, and will enter a slow-start phase whenever there is a loss. This greatly decreases the bandwidth and unjustly penalizes the performance of wireless links, especially when packet loss is caused by noise and there is no congestion on the link at all.

10. Sliding window protocols were introduced in the text at the link layer, but other books (e.g. Kurose) introduce them at the transport layer. Why? What is different (purpose and implementation) about a sliding window protocol at the link layer vs. one at the transport layer?

Solution: Sliding window protocols increase throughput and provide flow control at the link layer, and flow and congestion control at the transport layer; that is why different texts may introduce them at either layer. At the link layer, the sliding window provides flow control by allowing the sender to transmit a certain number of frames before the receiver must send an ACK; unacknowledged frames are buffered at the sender until an ACK is received. Implementations include go-back-N and selective repeat. At the transport layer, the sliding window likewise lets the sender have multiple TCP segments outstanding before receiving an ACK, but here the effective window size is the minimum of the window advertised by the receiver (flow control) and the sender's congestion window (congestion control).
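Returning to question 7, the routing table for node A can be verified by running Dijkstra's algorithm over the link costs from the link-state packets; a sketch (the dict-of-dicts encoding of the topology is mine):

```python
import heapq

# Link costs from the link-state packets in question 7 (links are symmetric).
LINKS = {
    "A": {"B": 2, "D": 5},
    "B": {"A": 2, "C": 6, "D": 2},
    "C": {"B": 6, "D": 3, "E": 2},
    "D": {"A": 5, "B": 2, "C": 3, "E": 4},
    "E": {"C": 2, "D": 4},
}

def dijkstra(links, src):
    # Return {node: (cost, first_hop)} shortest paths from src,
    # as OSPF would compute from the flooded link-state database.
    best = {src: (0, src)}
    pq = [(0, src, src)]          # (cost so far, node, first hop from src)
    done = set()
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        for nbr, w in links[node].items():
            new = cost + w
            nhop = nbr if node == src else hop
            if nbr not in best or new < best[nbr][0]:
                best[nbr] = (new, nhop)
                heapq.heappush(pq, (new, nbr, nhop))
    return best

table = dijkstra(LINKS, "A")
print(table)  # A: 0; B: 2, C: 7, D: 4, E: 8 -- all via first hop B
```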
11. My office computer is tanzanite.cse.ucsc.edu.

a. Give the name of a "tool" that converts this name to its IP address (128.114.56.133).

Solution: DNS (the Domain Name System), queried at a DNS server using a utility such as nslookup, dig, or host.

b. At what level of the network hierarchy does this "tool" belong, for resolving a machine name and its IP address?

Solution: The application layer.

c. The subnet netmask for my computer is 255.255.254.0. How many machines are on its subnet? What is the lowest machine address on this subnet?

Solution: The mask leaves 9 host bits, so there are 2^9 - 2 = 510 possible machine addresses. The lowest machine address has host bits 000000001, i.e. 128.114.56.1 on this subnet.

12. At my home I have NAT (Network Address Translation) running in my Linksys "Wireless-G Broadband Router" box, so that the external Internet sees a single IP address. If my wife and I both simultaneously access our "My Yahoo" pages at the same time, from different machines on our internal network (and thus both using destination port 80), how does my page get to me and not to my wife?

Solution: NAT uses port mapping to separate one user's data from another's.
Outgoing packets: when a host sends an outgoing packet using TCP or UDP, the packet includes a source port. The NAT box stores the source port and source IP address in a table, creates a new source port of its own, and replaces the source IP with the external IP in the packets it sends out. The source port that NAT creates also serves as an index into the NAT table.
Incoming packets: when a packet arrives at the NAT box, its destination port is the one the NAT box assigned when making the request. Using this port value as an index, the NAT box looks up the original IP address and source port of the host that made the request, rewrites the IP packet accordingly, and sends it to the correct host and port. This is how it differentiates incoming replies for different senders.

13.
a. Why is caching of multimedia content at the edge of the network (e.g. at ISPs, and thus close to end users) like multicasting?

Solution: Caching lets users get multimedia content with minimum delay, since they do not have to wait out the longer round-trip time from their hosts to the original content source. Like multicasting, edge caching also improves bandwidth utilization for network managers: several users get the multimedia content from the edge cache without each establishing a separate connection across the Internet, which could cause network congestion.

b. Give an example of an application where this caching at the network edge will not be an effective alternative to multicasting.

Solution: Caching will not be effective for real-time multimedia applications such as video conferencing, where a real-time (two-way) conversation is required between the source and destination hosts.
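Returning to question 12, the NAT bookkeeping described there can be sketched as a small table; the internal/external addresses and the external port pool below are made-up examples, not values from the exam:

```python
import itertools

class NatBox:
    # Toy NAT: maps (internal IP, internal port) <-> an external source port.
    def __init__(self, external_ip="203.0.113.1"):
        self.external_ip = external_ip
        self.ports = itertools.count(40000)  # illustrative external port pool
        self.by_ext_port = {}                # ext port -> (int ip, int port)
        self.by_internal = {}                # (int ip, int port) -> ext port

    def outgoing(self, src_ip, src_port):
        # Rewrite an outgoing packet's source address/port; remember the mapping.
        key = (src_ip, src_port)
        if key not in self.by_internal:
            ext = next(self.ports)
            self.by_internal[key] = ext
            self.by_ext_port[ext] = key
        return self.external_ip, self.by_internal[key]

    def incoming(self, dst_port):
        # A reply arriving on the external port goes back to the inside host.
        return self.by_ext_port[dst_port]

nat = NatBox()
mine = nat.outgoing("192.168.1.10", 5555)   # my connection to a web server
hers = nat.outgoing("192.168.1.11", 6666)   # my wife's connection to the same server
print(nat.incoming(mine[1]))  # ('192.168.1.10', 5555)
print(nat.incoming(hers[1]))  # ('192.168.1.11', 6666)
```

Because each connection gets its own external port, replies from the same remote server (port 80) still come back tagged with different destination ports, which is exactly how each page reaches the right machine.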