Department of Electrical and Computer Engineering
Concordia University
Communications Networks and Protocols - COEN 445
Fall 2006
Assignment #3 (Due Thursday, Nov. 23rd, 2006, in class)
Instructor: F. Khendek
Question 1. A disadvantage of the contention approach for LANs is the capacity wasted due to multiple
stations attempting to access the channel at the same time. Suppose that time is divided into discrete
slots with each of N stations attempting to transmit with probability p during each slot. What fraction of
slots are wasted due to collisions? (Hint: work with probabilities.)
Probability that a given station transmits: p
Probability that a given station does not transmit: n = 1 - p
Number of stations: N
Probability of a collision occurring = probability that two or more stations transmit
Probability of a collision NOT occurring = probability that zero or exactly one station transmits
Probability that zero stations transmit: n * n * ... * n = n^N
Probability that exactly one station transmits: N * p * n^(N-1) (any one of the N stations may be the single transmitter, each with probability p * n^(N-1))
Probability of a collision not occurring = n^N + N * p * n^(N-1)
Assuming that collisions do not carry over into subsequent time slots, the fraction of time slots wasted due to collisions equals the probability of a collision occurring in a given time slot, which is 1 - n^N - N * p * n^(N-1) = 1 - (1-p)^N - N * p * (1-p)^(N-1), where n is the probability that a given station does not transmit and N is the number of stations.
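A minimal Python sketch to sanity-check this fraction, assuming a few illustrative values of N and p (these particular values are not given in the question):

def wasted_fraction(N: int, p: float) -> float:
    """Fraction of slots lost to collisions (two or more stations transmit)."""
    n = 1.0 - p                       # probability a given station stays silent
    p_idle = n ** N                   # no station transmits
    p_success = N * p * n ** (N - 1)  # exactly one station transmits
    return 1.0 - p_idle - p_success   # two or more transmit, i.e. a collision

for N, p in [(10, 0.05), (10, 0.1), (50, 0.02)]:
    print(f"N={N}, p={p}: fraction of slots wasted = {wasted_fraction(N, p):.4f}")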
Question 2. What is the main disadvantage of the backward learning routing technique?
The learned shortest path is not necessarily the fastest path. Eventually, assuming no nodes go down, every node converges on the same set of shortest paths as every other node. What is essentially generated is a fixed routing table, which remains the same until nodes are added or removed. From this table, nodes always route in the same manner, which means that certain nodes become extremely congested while others are very rarely used.
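A minimal Python sketch of the backward-learning table update, using hypothetical line names and hop counts, to show why the table settles into a fixed set of routes: an entry is only ever replaced by a shorter route, never re-evaluated for congestion.

from typing import Dict, Tuple

def observe(table: Dict[str, Tuple[str, int]], source: str, line: str, hops: int) -> None:
    """Record the incoming line as the route back to 'source' if it beats the best hop count seen so far."""
    best = table.get(source)
    if best is None or hops < best[1]:   # routes only improve; they are never aged or re-examined here
        table[source] = (line, hops)

table: Dict[str, Tuple[str, int]] = {}
observe(table, "A", "line-1", 4)
observe(table, "A", "line-2", 2)   # a shorter route replaces the old entry
observe(table, "A", "line-3", 5)   # ignored, even if line-2 later becomes congested
print(table)                       # {'A': ('line-2', 2)}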
Question 3. When an X.25 DTE and the DCE to which it attaches both decide to put a call through
using the same virtual circuit number, a call collision occurs and the incoming call is canceled. When
both sides try to clear the same virtual circuit simultaneously, the clear collision is resolved without
canceling either request; the virtual circuit in question is cleared. Do you think simultaneous resets in
X.25 are handled like call collisions or clear collisions? Explain.
Simultaneous resets would be handled like clear collisions. Since both sides are attempting to accomplish the same objective, namely resetting the connection, each side simply carries out its own reset and informs the other end of its success; neither request needs to be cancelled.
Question 4. A 6580-octet datagram is to be transmitted and needs to be fragmented because it will pass
through an Ethernet with a maximum payload of 1620 octets. Show the total length, More Flag, and
Fragment Offset values in each of the resulting fragments.
Assuming a 20-octet IP header, the original datagram carries 6580 - 20 = 6560 octets of data. Each fragment may be at most 1620 octets long (the Ethernet payload limit), i.e. a 20-octet header plus 1600 octets of data, and 1600 is a multiple of 8 as required for every fragment except the last.
Original datagram: Total Length = 6580 octets, Fragment Offset = 0, More Flag = 0
Fragmented datagram (Fragment Offset in 8-octet units):
Fragment 1: Total Length = 1620 octets, Fragment Offset = 0, More Flag = 1
Fragment 2: Total Length = 1620 octets, Fragment Offset = 200, More Flag = 1
Fragment 3: Total Length = 1620 octets, Fragment Offset = 400, More Flag = 1
Fragment 4: Total Length = 1620 octets, Fragment Offset = 600, More Flag = 1
Fragment 5: Total Length = 180 octets (20-octet header + 160 octets of data), Fragment Offset = 800, More Flag = 0
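A minimal Python sketch that reproduces these fragment values, assuming a 20-octet IP header and the standard 8-octet fragment-offset unit:

HEADER = 20           # assumed IPv4 header length, octets
MAX_PAYLOAD = 1620    # maximum Ethernet payload from the question, octets

def fragment(total_length: int):
    """Return (Total Length, More Flag, Fragment Offset in 8-octet units) for each fragment."""
    data = total_length - HEADER                     # 6560 octets of user data
    per_fragment = (MAX_PAYLOAD - HEADER) // 8 * 8   # 1600 data octets per fragment (multiple of 8)
    fragments, offset = [], 0
    while data > 0:
        chunk = min(per_fragment, data)
        data -= chunk
        fragments.append((chunk + HEADER, 1 if data > 0 else 0, offset // 8))
        offset += chunk
    return fragments

for i, (total, more, off) in enumerate(fragment(6580), start=1):
    print(f"Fragment {i}: Total Length = {total}, More Flag = {more}, Fragment Offset = {off}")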
Question 5. Consider the following figure. Host A in Network A has to talk to Host B in Network B.
We have to interconnect these two networks, which are using different protocol architectures: Layers 1,
2, 3, 4, and 7 happen to be the same for both networks, but layers 5 and 6 are different. We need a
component to interconnect these networks. Describe two different protocol architectures (two solutions)
of this component for internetworking. Explain.
Question 6. A TCP entity A is sending segments to another TCP entity B. Entity A is initially granted a
credit of 2000 octets. A sends 9 segments of 200 octets each and receives acknowledgements for the first
7 segments and a credit of 800 octets. How many new segments of 200 octets is A allowed to send, at
this point in time, before receiving other credits from B? Explain.
A was originally granted credit to send 2000 octets and has sent 1800 of them (9 segments of 200 octets). B's acknowledgement covers the first 1400 octets (7 segments) and grants a credit of 800 octets. The credit is measured from the last acknowledged octet, so A is now permitted to transmit up to octet 1400 + 800 = 2200. Since A has already sent up to octet 1800, it may send 2200 - 1800 = 400 more octets, i.e. 2 new segments of 200 octets, before receiving further credit from B.
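A minimal Python sketch of this arithmetic, assuming the credit granted with an acknowledgement is counted from the last acknowledged octet:

SEGMENT = 200          # octets per segment, from the question
sent = 9 * SEGMENT     # octets already transmitted by A: 1800
acked = 7 * SEGMENT    # octets acknowledged by B: 1400
credit = 800           # new credit granted along with the acknowledgement

upper_edge = acked + credit     # A may transmit up to octet 2200
remaining = upper_edge - sent   # 400 octets still permitted
print(remaining // SEGMENT)     # 2 new segments of 200 octets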
Question 7. A TCP entity opens a connection and uses slow start. Approximately how many round-trip
times are required before TCP can send N segments? Explain.
Initially, when a TCP connection is established, the sender is limited to sending 1 segment. Each time a segment is acknowledged, the sender's window grows by 1 segment: after the first segment is acknowledged the window becomes 2 segments, after the next acknowledgement it becomes 3, and so on up to a certain maximum.
Assuming the receiver acknowledges segments as soon as it receives them, the sender's window grows exponentially per round trip: it sends one segment and receives its acknowledgement, then sends two segments and receives both acknowledgements at roughly the same time, then sends four segments and receives four acknowledgements, then eight, and so on until it reaches its maximum window size.
Therefore, assuming the round-trip time is the same for every segment and that each segment is acknowledged immediately, approximately log2 N round-trip times, rounded up, are required before the sender can send N segments.
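A minimal Python sketch of this growth, taking "can send N segments" to mean the congestion window has reached N segments, and comparing the result with log2 N rounded up:

import math

def rtts_until_window(n_segments: int) -> int:
    """Count round trips until the congestion window first reaches n_segments."""
    window, rtts = 1, 0
    while window < n_segments:
        window *= 2   # one increment per ACK doubles the window each round trip
        rtts += 1
    return rtts

for n in (1, 4, 10, 100):
    print(n, rtts_until_window(n), math.ceil(math.log2(n)))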