An Introduction to
ATM Networks
Solution Manual
Harry Perros
Copyright 2002, Harry Perros
All rights reserved
Contents
Solutions for the problems in:
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 2. Basic Concepts from Computer Networking
1. Circuit switching involves three phases: circuit establishment, data transfer and
circuit disconnect. In circuit switching, channel capacity is dedicated for the duration
of the connection, even when no data is being sent. Circuit switching, therefore, is not
a good choice for bursty data. The source emitting the data actively transmits for a
period of time, then becomes silent for a period during which it does not transmit.
This cycle of activity and silence repeats until the source completes its transmission.
In such cases, the utilization of the circuit-switched connection is low.
Packet switching is appropriate for bursty traffic. Information is sent in packets,
and each packet has a header with the destination address. A packet is passed through
the network from node to node until it reaches the destination.
2.
a) Stop-and-Wait flow control: U = 0.1849%
b) Flow control with a sliding window of 7: U = 1.2939%
c) Flow Control with a sliding window of 127: U = 23.475%
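These utilizations follow from U = W/(1+2a), capped at 1, where W is the window size (W = 1 for stop-and-wait) and a is the ratio of propagation delay to frame transmission time. The problem's link parameters are not restated here; a = 270 is an inference, chosen because it reproduces the answers above. A minimal sketch:

    def utilization(a, window=1):
        # Link utilization under window-based flow control:
        # a = propagation delay / frame transmission time
        return min(1.0, window / (1 + 2 * a))

    a = 270  # assumed ratio; chosen to reproduce the answers above
    print(f"{utilization(a, 1):.4%}")    # a) stop-and-wait: ~0.1848%
    print(f"{utilization(a, 7):.4%}")    # b) window of 7:    1.2939%
    print(f"{utilization(a, 127):.4%}")  # c) window of 127: 23.4750%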
3.
a) P = 110011, M = 11100011
FCS = 11010
b) P = 110011, M = 1110111100
FCS = 01000
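Both FCS values can be checked with a short mod-2 (XOR) long-division routine; this is a sketch of the standard CRC computation, not code taken from the textbook:

    def crc_fcs(message, pattern):
        # Append len(pattern)-1 zero bits to the message, then reduce it
        # modulo the pattern using XOR long division; the remainder is the FCS.
        m, p = int(message, 2), int(pattern, 2)
        fcs_len = len(pattern) - 1
        rem = m << fcs_len
        while rem.bit_length() >= p.bit_length():
            rem ^= p << (rem.bit_length() - p.bit_length())
        return format(rem, f"0{fcs_len}b")

    print(crc_fcs("11100011", "110011"))    # a) 11010
    print(crc_fcs("1110111100", "110011"))  # b) 01000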
4.
Stuffed bit stream = 0111101111100111110100
Output Stream = 01111011111011111100
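The output stream is obtained by removing the 0 that the transmitter inserts after every run of five consecutive 1s. A small destuffing sketch:

    def destuff(bits):
        # Drop the 0 bit stuffed after each run of five consecutive 1s.
        out, run, skip = [], 0, False
        for b in bits:
            if skip:                 # this is the stuffed 0: discard it
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:             # five 1s seen: the next bit was stuffed
                skip = True
        return "".join(out)

    print(destuff("0111101111100111110100"))  # 01111011111011111100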
5. Payload = 1500 bytes.
The overhead includes the Flag, Address, Control, FCS and closing Flag fields. Assuming 8
address bits, 8 control bits and 16 FCS bits, the overhead = 8+8+8+16+8 = 48 bits.
For every 1500 bytes of payload, we have 48/8 = 6 bytes of overhead.
% bandwidth used for overhead = (6/1506)*100 = 0.398%
6. One DS-1 frame consists of 24 voice slots, each slot of 8 bits, plus 1 bit for frame
synchronization. Therefore one frame = 24*8+1 = 193 bits.
The DS-1 rate is 1.544 Mbps.
Out of the 193 bits in a frame, 1 bit is for frame synchronization; therefore, out of the
1.544 Mbps, the frame synchronization bits account for 8 kbps.
The frame synchronization bit rate per voice channel = 8000/24 = 333.33 bps.
Besides these frame synchronization bits, there are control signal bits per voice channel:
every 6th voice slot contains one control signal bit. Hence there is one control bit in
every 48 bits of voice, and the control bit rate per voice channel = 64000/48 = 1333.33 bps.
The total control signal data rate per voice channel = 333.33 + 1333.33 = 1666.67 bps ≈ 1.67 kbps.
7. In X.25, the virtual circuit numbers assigned when setting up virtual circuits have local
significance. The reason is that local management is much easier than global
synchronization: with a simple mapping table between incoming and outgoing virtual
circuit numbers, the design becomes very simple and easy to manage. Frame relay and
ATM also use this concept of local virtual circuit numbers.
8. Consider the following imaginary IP header, shown as ten 16-bit words in
hexadecimal (the Checksum field, word 6, is all 0's):
4545 7841 0001 2051 1F06 0000 0D48 08BF 04DA 1009
Now we perform 1's complement addition on the 16-bit words of the IP header.
Notice that the Checksum field is all 0's.
Calculation of the checksum:
Bytes 1,2:   4545
Bytes 3,4:   7841
Bytes 5,6:   0001
Bytes 7,8:   2051
Bytes 9,10:  1F06
Bytes 11,12: 0000
Bytes 13,14: 0D48
Bytes 15,16: 08BF
Bytes 17,18: 04DA
Bytes 19,20: 1009
Sum = 127C8
Final sum after folding the carry back into the LSB = 27C9
One's complement of the final sum = D836, which is inserted into the checksum field.
At the receiver the packet is received and the checksum is computed over it in the same
manner; if the result of the addition is FFFF, the received packet is accepted as error
free. Now consider a case where an error cannot be detected: if the least significant bit
of byte 2 and the least significant bit of byte 10 are flipped, the calculated checksum
will remain the same.
Calculation of the checksum at the receiver:
Bytes 1,2:   4544  (bit flipped)
Bytes 3,4:   7841
Bytes 5,6:   0001
Bytes 7,8:   2051
Bytes 9,10:  1F07  (bit flipped)
Bytes 11,12: D836  (checksum)
Bytes 13,14: 0D48
Bytes 15,16: 08BF
Bytes 17,18: 04DA
Bytes 19,20: 1009
Sum = 1FFFE
Final sum after folding the carry back into the LSB = FFFF
Hence the error cannot be detected.
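The whole computation, including the undetectable double bit flip, can be verified with a few lines of Python, using the word values listed above:

    def ones_complement_sum(words):
        # Add 16-bit words, folding any carry back into the low-order bits.
        s = 0
        for w in words:
            s += w
            s = (s & 0xFFFF) + (s >> 16)
        return s

    header = [0x4545, 0x7841, 0x0001, 0x2051, 0x1F06,
              0x0000, 0x0D48, 0x08BF, 0x04DA, 0x1009]   # checksum word = 0
    print(format(~ones_complement_sum(header) & 0xFFFF, "04X"))  # D836

    received = [0x4544, 0x7841, 0x0001, 0x2051, 0x1F07,          # two bits flipped
                0xD836, 0x0D48, 0x08BF, 0x04DA, 0x1009]          # checksum filled in
    print(format(ones_complement_sum(received), "04X"))  # FFFF: error undetected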
9.
a) It is a Class B address
b) Network address = 1001 1000.0000 0001
Host address = 1101 0101.1001 1100
10.
a) Only one subnet can be defined, as the subnet mask 255.255.0.0 masks only the first 2
bytes of the IP address, which is the network id in the case of a Class B address. In this
case the network itself is the subnet, and vice versa. For multiple subnets, the subnet
mask needs to mask more than 2 bytes of a Class B address.
b) The maximum number of hosts that can be defined per subnet = 2^16 - 2, because the
all-1's host address is used for directed broadcast, and the all-0's host address is used
for the network id.
Chapter 3. Frame Relay
1. The motivation behind frame relay was the need for new high-speed WANs that could
support bursty traffic, the rapid transfer rates imposed by new applications, and the
required response times. Some of the main features of frame relay are as follows:
a) Connection oriented
b) Based on packet switching
c) No link-level error or flow control
d) The routing decision, i.e., to which node a frame should be forwarded, is moved from
layer 3 (as in IP/X.25) to layer 2
e) Lost/discarded frames are recovered by the end user's higher-level protocols
f) Uses a feedback-based congestion control scheme
2. Frame relay was developed for high-speed WANs, offering high transfer rates and
low response times. In traditional packet-switching networks such as IP and X.25, the
switching decision is made at the network layer and flow/error control is done at the
link layer. The two layers introduce their own encapsulation overheads, and passing a
packet from one layer to another requires moving it from one buffer of the computer's
memory to another, a time-consuming process. Refer to Fig. 3.1.
In order to achieve its desired goal, frame relay eliminates this overhead: the layer-3
switching is moved to layer 2, and error and flow control are done by the end users,
thereby providing a more efficient transport mechanism.
3. The total number of DLCI values that can be defined with this header = 2^16.
4. One reason is that if two nodes A and B request a connection to a node C through the
frame relay network using the same DLCI value, there will be a conflict at the frame
relay node connected to C. Hence it is better for the DLCI value to have local
significance.
[Figure: nodes A and B both request a connection to node C with DLCI = 10, causing a
conflict at the frame relay node attached to C.]
Also, global synchronization of the DLCI values is not necessary; a local mapping
table, which maps an incoming DLCI value to the outgoing DLCI value, is sufficient. If
the same DLCI value were to be used end to end, it would have to be relayed all the way
to the destination node, which could be very far away, incurring a large overhead.
Another reason is that if the DLCI were global, the result would be a limited number of
connections. These are some of the reasons why the DLCI value has local significance.
5.
a) Tc = Bc/CIR = 12/128 = 0.0938 s when Bc = 12 Kb
Tc = 40/128 = 0.3125 s when Bc = 40 Kb
The time interval Tc increases with Bc.
b) The time interval increases with Bc, which means that, the CIR remaining the
same, the node can transmit more bits without being penalized by network
policing.
c) One of the parameters negotiated at set-up time is the excess burst size Be. As
long as the number of bits transmitted is below the negotiated Bc, the delivery of
the data is guaranteed. The parameter Be allows the node to transmit more than
Bc. If the number of bits transmitted is between Bc and Bc+Be, the data will be
delivered with no guarantees, and its DE bit will be set. If there is congestion at a
node in the network, this frame with the DE bit set might be discarded. If the
node transmits more than Bc+Be, the frame will definitely be discarded. Hence,
there is a mechanism for submitting more than Bc to the network in each
time interval.
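The policing decision can be summarized in a small sketch. The function classifies the bits a source submits during one interval Tc = Bc/CIR; the Bc = 40 Kb and Be = 15 Kb values in the example calls are hypothetical:

    def classify(bits_sent, Bc, Be):
        # Classify traffic submitted during one interval Tc = Bc/CIR.
        if bits_sent <= Bc:
            return "guaranteed delivery"
        elif bits_sent <= Bc + Be:
            return "delivered with the DE bit set"
        else:
            return "discarded"

    for kb in (30, 50, 60):   # hypothetical amounts submitted in one Tc (Kb)
        print(kb, "Kb:", classify(kb, Bc=40, Be=15))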
6. The forward explicit congestion notification (FECN) and backward explicit
congestion notification (BECN) bits are part of a feedback based congestion control
scheme adopted by frame relay. When there is congestion at a node, then the node
sets the FECN bit in all the outgoing frames. So all the nodes that receive these
frames know that an upstream node is congested. When the destination node receives
a frame with the FECN bit set, it tries to help out with the congestion, by reducing its
window size or delaying its acknowledgements, thereby slowing down the
transmitting device. The receiving node also sets the BECN bit in the frames it sends
back to the transmitting node. When the transmitting node receives a frame with the
BECN bit set, the node decreases its transmission rate. This helps in reducing the
congestion.
Chapter 4. Main Features of ATM Networks
1. There is no error control between two adjacent ATM nodes. It is not necessary, since
the links in the ATM network have a very low error rate. In view of this, the payload
of the packet is not protected against transmission errors. However, the header is
protected in order to guard against forwarding a packet to the wrong destination. The
recovery of a lost packet or a packet that is delivered to its destination with erroneous
payload is left to the higher protocol layers.
2. The ATM architecture was designed with a view to transmitting voice, video and data
on the same network. Voice and video are real time applications and require very low
delay. If link layer flow control is used, it will lead to retransmissions at the link
layer, increasing delay, which cannot be tolerated by these time-sensitive
applications. Also, the transmission links are of high quality and they have a very low
error rate. These are some of the reasons why there is no data link layer flow control
in ATM networks.
3. It is easier to design ATM switches when the ATM cells are of a fixed size. The
switches are less complex as no functionality has to be added to take care of cell size.
If variable sized cells are allowed, this will considerably increase the complexity of
ATM switches.
4. ATM cell size = 53 bytes = 53*8 = 424 bits
a) T1: 424/(1.544*10^6) = 274.6 µs
b) OC-3: 424/(155.52*10^6) = 2.73 µs
c) OC-12: 424/(622.08*10^6) = 0.68 µs
d) OC-24: 424/(1.244*10^9) = 0.34 µs
e) OC-48: 424/(2.488*10^9) = 0.17 µs
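The same arithmetic for all five link types, as a quick check:

    CELL_BITS = 53 * 8  # 424 bits per ATM cell

    links = {"T1": 1.544e6, "OC-3": 155.52e6, "OC-12": 622.08e6,
             "OC-24": 1.244e9, "OC-48": 2.488e9}
    for name, bps in links.items():
        # time to transmit one cell, in microseconds
        print(f"{name}: {CELL_BITS / bps * 1e6:.3f} us")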
5.
a) Since in correction mode the HEC state machine accepts error-free cells and
corrects single-error cells, the probability of a cell being rejected is:
1 - P(error-free cell) - P(single-bit error) = 1 - (1-p)^40 - 40p(1-p)^39
b) Since in detection mode the HEC state machine accepts only error-free cells,
the probability of a cell being rejected is:
1 - P(error-free cell) = 1 - (1-p)^40
c) After the first cell is rejected, the HEC state machine is in detection mode.
So, the probability of n successive cells being rejected is:
[1 - (1-p)^40 - 40p(1-p)^39] * [1 - (1-p)^40]^(n-1)
d) Define:
a0 = (1-p)^40 ........ prob. the header is error free
a1 = 40p(1-p)^39 ..... prob. of a 1-bit error in the header
Let P(n) be the probability that n successive cells have been admitted.
P(1) = a0 + a1
P(2) = a0a0 + a0a1 + a1a0 = a0(a0+a1) + a1a0 = a0P(1) + a1a0
P(3) = a0a0a0 + a0a0a1 + a0a1a0 + a1a0a0 + a1a0a1 = a0P(2) + a1a0P(1)
In general,
P(n) = a0P(n-1) + a1a0P(n-2)
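The recurrence is easy to evaluate numerically; a minimal sketch (P(0) = 1 is assumed, which makes the n = 2 case consistent with the expansion above):

    def p_admit(n, p):
        # P(n) = a0*P(n-1) + a1*a0*P(n-2): probability that n successive
        # cells are admitted by the HEC state machine.
        a0 = (1 - p) ** 40            # header error free
        a1 = 40 * p * (1 - p) ** 39   # single-bit header error
        P = {0: 1.0, 1: a0 + a1}
        for k in range(2, n + 1):
            P[k] = a0 * P[k - 1] + a1 * a0 * P[k - 2]
        return P[n]

    print(p_admit(10, 1e-4))  # example with a hypothetical bit error rate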
6.
a) W = S*(A/T) + (S + R)*(1 - A/T)
b) With T = 2000 and S = 0, the requirement W < 100 msec gives R(2000 - A) < 200.
A must be less than T. Supposing 0 <= A < 2000, R ranges from 100 msec to 200 msec.
7.
a) (1-p)^n
b) C(n,m) * p^m * (1-p)^(n-m)
c) Probability that a cell is received in error = p.
Probability that all n cells are error free: y = (1-p)^n.
Probability that the packet contains an error = 1-y.
Time taken to transmit a packet with no errors: R = nT + D + F.
Time taken to transmit a packet with one error = (nT + D + F) + D + (nT + D + F) = 2R + D.
Average time taken to transmit a packet:
W = yR + y(1-y)[2R + D] + y(1-y)^2[3R + 2D] + y(1-y)^3[4R + 3D] + ...
Separating this into two parts:
Part 1 = yR[1 + 2(1-y) + 3(1-y)^2 + 4(1-y)^3 + ...]
Part 2 = y(1-y)D[1 + 2(1-y) + 3(1-y)^2 + ...]
The summation of i*a^(i-1) over i = 1 to infinity gives 1/(1-a)^2 for a < 1; here a = 1-y.
Therefore Part 1 = yR/(1-(1-y))^2 = R/y, and Part 2 = y(1-y)D/(1-(1-y))^2 = D(1-y)/y.
Thus W = Part 1 + Part 2 = R/y + D(1-y)/y, i.e.,
W = (nT + D + F)/(1-p)^n + D[1 - (1-p)^n]/(1-p)^n
d)
p         (1-p)^n   W
0.1       0.042     934.523
0.08      0.082     468.9
0.06      0.156     236.987
0.04      0.294     116.36
0.02      0.545     53.56
0.01      0.74      34.176
0.008     0.786     31.005
0.006     0.835     28.012
0.004     0.887     25.197
0.002     0.942     22.558
0.001     0.97      21.33
0.0008    0.976     21.076
0.0006    0.982     20.825
0.0004    0.988     20.577
0.0002    0.994     20.331
0.0001    0.997     20.211
0.00008   0.998     20.17
0.00006   0.9982    20.162
0.00004   0.9988    20.138
0.00002   0.9994    20.114
0.00001   0.9997    20.102
[Figure: W vs p — the average time to correctly transfer a packet (W) plotted against
the probability of a cell in error (p), for p from 0.1 down to 0.00001.]
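The table can be reproduced from the final expression for W. The problem's parameters are not restated in this solution; n = 30 cells, D = 20 ms and nT + F ≈ 0.09 ms are inferences, chosen because they match the tabulated values:

    def avg_transfer_time(p, n=30, nT_F=0.09, D=20.0):
        # W = R/y + D(1-y)/y, with y = (1-p)^n and R = nT + D + F (in ms).
        y = (1 - p) ** n
        R = nT_F + D
        return R / y + D * (1 - y) / y

    for p in (0.1, 0.01, 0.001, 0.0001):
        print(p, round(avg_transfer_time(p), 3))
    # ~925.7, 34.2, 21.31, 20.21 -- close to the table, whose W column
    # was computed from the rounded (1-p)^n values shown above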
e) The above results show that W decreases as the cell error rate decreases. In ATM
the cell error rate is very low, thanks to high-speed optical links, so link layer
error/flow control is not needed. Hence there is no data link layer in ATM.
Chapter 5. The ATM Adaptation Layer
1. Unstructured data transfer mode corresponds to 47 bytes of user information per
SAR-PDU. DS-1 corresponds to 1.544 Mbps, i.e., 193,000 bytes/sec.
47 bytes will take 47/193,000 = 243.5 µs to fill up an SAR-PDU.
2. Suppose we have a CPS-packet with a CID = 00110111. Below, the CPS-packet bit
streams are shown:
1) CPS-packet header not split (CID shown): 00110111...
2) The same CPS-packet with its header split (CID shown, 2 bits in the previous cell):
00 | 110111... The 2 bits in the previous cell look like padding; you cannot tell
whether they belong to a CPS-packet header.
Without the offset field (OSF), there is no way to know whether the last 2 bits in the
first ATM cell are padding or contain bits of the next CPS-packet header. In the worst
case, the payload is misrouted to the wrong client. However, since the CPS-packet
header also contains a HEC, most likely the cell will be dropped, since the HEC check
will likely fail. All the following CPS-packets are similarly affected. If the number of
streams is large, this is a serious problem.
3. In AAL2, the value 0 cannot be used to indicate a CID because it is used as padding.
4.
a), b) [Diagram: CPS-packets #1 through #4 packed into 48-byte CPS-PDUs; the start
field offsets of the successive PDUs are OSF = 0, OSF = 21 and OSF = 9, and padding
completes the final PDU.]
5.
a) Voice is coded at the rate of 32 kbps. The timer goes off every 5 ms.
CPS packet size = (32000 * 5)/(1000 * 8) = 20 bytes
b) Active period = 400 ms, and the timer goes off every 5 ms.
Therefore, the total number of CPS packets per active period = 400/5 = 80 packets.
6.
a) The CPS introduces 8 additional bytes: a 4-byte header and a 4-byte trailer.
b) The number of SAR-PDUs required = 1508/44 = 34.27, i.e., 35.
c) The SAR introduces 4 additional bytes in each SAR-PDU: a 2-byte header and a
2-byte trailer.
d) The maximum user payload in an ATM cell occurs when the cell carries only user
data, with no CPS-PDU header or trailer, which is 44 bytes.
The minimum payload occurs when, along with the data, the cell carries the padding
and the CPS-PDU trailer, which is 44-3-4 = 37 bytes.
7.
a) Considering a user PDU of 1500 bytes:
i) [Diagram: 1500-byte user PDU followed by PAD and trailer.]
User-PDU + trailer = 1500+1+1+2+4 = 1508 bytes; the padding necessary is 28 bytes,
to make the CPS-PDU a multiple of 48 bytes.
ii) 32 cells
iii) Trailer = 8 bytes, padding = 28 bytes, ATM cell headers = 5*32 = 160 bytes.
Total overhead = 8 + 28 + 160 = 196 bytes
b) Considering a user PDU of 1000 bytes:
i) The padding necessary in this case is 0, because 1000 + 8 = 1008 is an integral
multiple of 48.
ii) 21 cells
iii) Trailer = 8 bytes, padding = 0 bytes, ATM cell headers = 5*21 = 105 bytes.
Total overhead = 8 + 105 = 113 bytes.
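Both cases follow from the same AAL5 arithmetic; a small sketch:

    import math

    def aal5_overhead(user_pdu_len):
        # AAL5: 8-byte CPS-PDU trailer, pad to a multiple of 48 bytes,
        # then 5 bytes of ATM cell header per 48-byte cell payload.
        trailer = 8
        cells = math.ceil((user_pdu_len + trailer) / 48)
        padding = cells * 48 - (user_pdu_len + trailer)
        return cells, trailer + padding + 5 * cells

    print(aal5_overhead(1500))  # (32 cells, 196 bytes of overhead)
    print(aal5_overhead(1000))  # (21 cells, 113 bytes of overhead)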
8. Moving the length field from the trailer to the beginning of the CPS-PDU would not
solve any problem. The CPS-PDU would still have to be split into ATM cells, and the
last ATM cell would still need an indication that it is the last cell of this CPS-PDU.
Moreover, moving the length field to the header is a bad idea because an error in the
header would make the receiver lose synchronization, and all the subsequent data would
also be lost. Hence, this is a bad solution.
Chapter 6. ATM Switch Architectures
1. Consider that an ATM cell arrives at input port 0 destined for output port 6 (110).
[Figure: an 8x8 banyan network with three stages of 2x2 switching elements; the
output ports are numbered 000 through 111.]
The cell is routed as follows:
• Initially the destination port address is attached in front of the cell in reverse
order (011).
• At stage 1, the cell is routed to the lower outlet, since the routing bit is 1, and this
bit is dropped.
• At stage 2, the cell is again routed to the lower outlet and the routing bit (1) is
dropped.
• At the final stage, the routing bit is 0, so the cell is routed to the upper outlet,
which is the destination output port 6 (110).
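The stage-by-stage decisions can be generated for any destination port. This sketch reads the destination bits most-significant first, which yields exactly the lower/lower/upper sequence described above for port 6:

    def banyan_route(dest, stages=3):
        # Self-routing: at each stage, bit 0 -> upper outlet, bit 1 -> lower outlet.
        for k, bit in enumerate(format(dest, f"0{stages}b"), start=1):
            outlet = "lower" if bit == "1" else "upper"
            print(f"stage {k}: bit {bit} -> {outlet} outlet")

    banyan_route(6)  # destination output port 110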
2. The problem with input buffering switches is that they cause head-of-line blocking.
This problem occurs when multiple cells at different input ports compete for the same
link. When this happens, one of the cells is allowed to use the link, while the other
cells have to wait at the head of their input queues. The cells queued up behind the
blocked cells also have to wait their turn, even though they may not be competing for
the same link; these cells are blocked as well. This is head-of-line blocking. Output
buffering switches eliminate head-of-line blocking, because they do not have any input
buffering. Hence they are preferable to switches with input buffering.
3. In the buffered banyan switch architecture, each switching element has input and
output buffers. The switch is operated in a slotted manner, and the duration of the slot
is long enough so that a cell can be completely transferred from one buffer to another.
Also, all the buffers function so that a full buffer will accept another cell during a slot,
if the cell at the head of the buffer will depart during the same slot.
Upon arrival at the switch, a cell is delayed until the beginning of the next slot. At
that time it is forwarded to the corresponding input buffer of the switching element in
the 1st stage, if the input buffer is empty. If the buffer is full, the cell is lost. This is
the only time when a cell can be lost in a buffered banyan switch. A cell cannot get
lost at any other time. The reasons are explained as follows:
• A cell in the input buffer of a switching element is forwarded to the corresponding
output buffer only if there is free space. If there is no free space, the cell is
delayed until the next slot. If two cells are destined for the same output buffer,
then one cell is chosen at random.
• The transfer of a cell at the head of the output buffer of a switching element to the
input buffer of the switching element in the next stage is controlled by a
backpressure mechanism. If the input buffer of the next switching element is free,
then the cell is forwarded to the switching element, else the cell waits until the
next slot and tries again. Due to the backpressure mechanism, no cell loss can
occur within the buffered banyan switch.
• If an output port becomes hot, i.e., receives a lot of traffic, a bottleneck will build
up in the switching element associated with this output port. Due to the
backpressure, cell loss can occur at the input ports of the switch, as the bottleneck
extends backwards to the input ports.
4. The buffered banyan distribution network is added in front of the buffered banyan
switch to minimize queueing at the beginning stages of the switch. The mechanism
distributes the traffic offered to the switch randomly over its input ports, which has
the effect of minimizing queueing in the switching elements at the beginning stages
of the switch. External blocking occurs when two or more cells compete for the same
output port at the same time. This mechanism does not help eliminate external
blocking, because the distribution network does not look at the destination address of
the incoming cells when it distributes the cells over the input ports of the switch.
5. The cells will appear in the order 1,1,2,3,4,5,5,8 at ports 0 through 7 respectively.
6.
a) This switch architecture is non-blocking because each input port of the switch has
its own dedicated link to each of the output ports of the switch. There is no
internal or external blocking.
b) The switch is operated in a slotted manner, and each cell is self-routed to its
destination. Output buffers are provided at each output port, since more than one
cell destined to the same output port may arrive in the same slot.
7. The upper bound Bi for an output port i, which is less than the total buffer size B, is
necessary for the case when the output port becomes hot, i.e., there is too much traffic
destined for that port. In this case, the shared memory space associated with that port
will fill up. If there were no such upper bound Bi, there would be no memory space left
for cells destined to the other output ports.
8. The speed of a link is V Mbps; therefore an ATM cell arrives every (53*8)/V µs.
There are N input links, so the CPU must finish processing all cells in the above time
and return to the first link it served. So the time the CPU has to process a cell in the
worst case, i.e., when cells arrive back to back on all input ports, is (53*8)/(N*V) µs.
9. 2NV Mbps.
10. NV Mbps.
Chapter 7. Congestion Control in ATM Networks
1. A constant bit rate source implies PCR = SCR = average cell rate.
PCR = 64 Kbps = 64000/(53*8) ≈ 151 cells/sec.
Thus PCR = SCR = avg. cell rate = 151 cells/sec.
2.
a) (From Chapter 5:) Length of a CPS-packet = 0.005 s * (32000/8) bytes/sec + 3 bytes
= 20 payload bytes + 3 header bytes = 23 bytes.
The number of CPS-packets produced in an active period is, on average, 400/5 = 80
CPS-packets.
Thus the peak transmission rate of the source, including the CPS-packet overhead, is
23 bytes per 5 msec, i.e., 23*8/0.005 = 36.8 Kbps.
The average transmission bit rate of the voice source, including the CPS-packet
overhead, is 400/(400+600) * peak bit rate = 0.4 * 36.8 = 14.72 Kbps.
b) Assuming one CPS-packet per CPS-PDU, and assuming that a CPS-PDU is handed
down to the ATM layer as soon as the PDU is filled with one CPS-packet, a single
CPS-PDU will carry 23 bytes of CPS-packet, plus 1 byte of PDU header and the
remaining (48-23-1) = 24 bytes of padding. This 48-byte PDU is the payload of a
53-byte ATM cell. Assuming that the handing-down from the SSCS to the CPS
sub-layer and from the CPS sub-layer to the ATM layer takes negligible time, a
53-byte ATM cell is sent each time the timer in the SSCS layer (= 5 msec) expires.
Thus, the peak transmission rate (including all overheads) of the voice source =
(53*8)/0.005 = 84.8 Kbps.
The average transmission rate (including all overheads) of the voice source =
400/(400+600) * peak bit rate = 0.4 * 84.8 Kbps = 33.92 Kbps.
[Note that if we had to exclude the padding bits, the peak rate would be
(20+3+1+5) bytes every 5 ms, i.e., 46.4 Kbps.]
3.
a) PCR = Peak Cell Rate = maximum rate of the source in the on period = 20 Mbps =
(20*10^6)/(53*8) = 47169.8 ≈ 47170 cells/sec.
b) Maximum on-period = MBS/PCR = 24/47170 = 0.50879 ms.
c) SCR is the largest average cell rate over a pre-specified period of length T; here
T = 1 msec. If we approximate the answer in part (b) above as 0.5 ms, the SCR will be
half the PCR, because no matter which portion of length T we look at (e.g., the last
0.3 ms of an off period followed by 0.5 ms of an on period and 0.2 ms of an off period),
the overall picture is the source sending cells at PCR during the on-period and sending
no cells at all during the off-period. Since the on and off periods are of equal length,
SCR is half the PCR. If, however, we had not approximated the on-period length as
0.5 ms, the answer would be SCR = 0.508 * PCR + 0 * 0.492 = 0.508 * PCR =
23962.36 ≈ 23962 cells/sec.
4. The delay experienced by any cell/packet in a real network consists of:
a. Propagation delay (fixed)
b. Transmission delay (fixed)
c. Processing delay (fixed)
d. Queueing delay (variable)
These delays are fixed because of the properties of the links/switching elements in the
network. Processing delay is the time taken by the CPU inside the switch to do all the
processing, such as header conversion, and it is fixed. Propagation delay is a function of
the speed of light, which is fixed, and transmission delay is a function of the speed of
the link, which is also fixed.
Queueing delays are variable because they are caused at the buffers of switches,
where cells queue up for service. Depending upon the occupancy level at a buffer, the
queueing delay can be more or less at a particular switch.
5. Jitter is important in real-time, delay-sensitive applications because the playback can
either run out of cells to play (the inter-arrival time being greater than the
inter-departure time at the destination for a long period of time) or suffer overflow
problems (the inter-arrival time being less than the inter-departure time at the
destination for a long period of time).
6.
R = 500 kbps; b = 0.1 s
a = b(1-r)R ln(1/ε)
ε = 0.02 (assumption for the calculation)
e = R * [a - K + sqrt((a-K)^2 + 4Kar)] / (2a), where K is the buffer size.
Taking values of r from 0.05 to 0.95, we calculate the equivalent bandwidth and
the average bit rate. The table and the graph are shown below.
Fraction of time source is active (r)   Equivalent bandwidth (e)   Average bit rate
0.05    89284.51    25000
0.1     129805      50000
0.15    160454.3    75000
0.2     186268.7    100000
0.25    209193.6    125000
0.3     230226.5    150000
0.35    249965.1    175000
0.4     268806.9    200000
0.45    287039.6    225000
0.5     304886      250000
0.55    322530.5    275000
0.6     340135.6    300000
0.65    357853.1    325000
0.7     375833.7    350000
0.75    394235.5    375000
0.8     413232.7    400000
0.85    433027.4    425000
0.9     453864      450000
0.95    476052      475000
Note that the equivalent bandwidth approaches the average bit rate as r tends to 1,
because the average bit rate then approaches the peak bit rate.
[Figure: equivalent bandwidth and average bit rate vs. the fraction of time the source is
active (r), for r = 0.05 to 0.95; the two curves converge as r approaches 1.]
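The table can be reproduced from the formulas above. ε = 0.02 is the assumption already stated; the buffer size K is not restated in this solution, and K = 212,000 bits (500 cells) is an inference, chosen because it matches the tabulated values:

    import math

    R = 500_000   # peak bit rate (bps)
    b = 0.1       # mean burst length (s)
    eps = 0.02    # overflow probability (assumption, as above)
    K = 212_000   # buffer size in bits (inferred: 500 cells)

    def equivalent_bandwidth(r):
        a = math.log(1 / eps) * b * (1 - r) * R
        return R * (a - K + math.sqrt((a - K) ** 2 + 4 * K * a * r)) / (2 * a)

    for r in (0.05, 0.5, 0.95):
        print(r, round(equivalent_bandwidth(r)))
    # ~89284, ~304887, ~476053 -- cf. the table above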
7. The cells with ts = 132 and ts = 140 are tagged as non-conforming.
Arrival times: ts = 0, 45, 90, 120, 125, 132, 140, 220.
Successive TAT values: 0, 40, 85, 130, 170, 210, 210, 210, 260.

ts     TAT (at arrival)   Cell
0      0                  C
45     40                 C
90     85                 C
120    130                C
125    170                C
132    210                NC
140    210                NC
220    210                C
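The table can be generated with the virtual scheduling form of the GCRA. The increment I = 40 follows from the TAT steps above; the limit L is not restated in this solution, and any 45 <= L < 70 reproduces the conformance pattern (L = 50 is used below as an assumption):

    def gcra(arrivals, I, L):
        # Virtual scheduling GCRA: TAT = theoretical arrival time.
        tat = arrivals[0]
        results = []
        for ta in arrivals:
            if tat > ta + L:             # cell arrives too early
                results.append("NC")
            else:                        # conforming cell
                tat = max(tat, ta) + I
                results.append("C")
        return results

    print(gcra([0, 45, 90, 120, 125, 132, 140, 220], I=40, L=50))
    # ['C', 'C', 'C', 'C', 'C', 'NC', 'NC', 'C']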
8. The cells with ts = 132 and ts = 140 are tagged as non-conforming.
For each arrival, XOLD(i+1) = XNEW(i) and LCTOLD(i+1) = LCTNEW(i), where
X' = max(0, XOLD - (ts - LCTOLD)); if the cell conforms, XNEW = X' + I and
LCTNEW = ts, otherwise X and LCT are left unchanged.

ts     XOLD   LCTOLD   X'   XNEW   LCTNEW   Cell
0      0      0        0    40     0        C
45     40     0        0    40     45       C
90     40     45       0    40     90       C
120    40     90       10   50     120      C
125    50     120      45   85     125      C
132    85     125      78   85     125      NC
140    85     125      70   85     125      NC
220    85     125      0    40     220      C

Explanation:
After ts = 0:   LCT = 0,   X = 40, X' = 0
After ts = 45:  LCT = 45,  X = 40, X' = 0
After ts = 90:  LCT = 90,  X = 40, X' = 0
After ts = 120: LCT = 120, X = 50, X' = 10
After ts = 125: LCT = 125, X = 85, X' = 45
After ts = 132 (tagged): LCT = 125, X = 85, X' = 78
After ts = 140 (tagged): LCT = 125, X = 85, X' = 70
After ts = 220: LCT = 220, X = 40, X' = 0
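The same conformance pattern follows from the continuous-state leaky bucket, the second equivalent form of the GCRA, with the same assumed parameters (I = 40, L = 50):

    def leaky_bucket(arrivals, I, L):
        # X = bucket content, LCT = last conformance time.
        X, LCT = 0, arrivals[0]
        out = []
        for ta in arrivals:
            Xp = max(0, X - (ta - LCT))  # drain since the last conforming cell
            if Xp > L:
                out.append("NC")         # bucket would overflow
            else:
                X, LCT = Xp + I, ta      # accept the cell, add I to the bucket
                out.append("C")
        return out

    print(leaky_bucket([0, 45, 90, 120, 125, 132, 140, 220], I=40, L=50))
    # ['C', 'C', 'C', 'C', 'C', 'NC', 'NC', 'C']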
Chapter 8. Transporting IP Traffic Over ATM
1. To transport connectionless traffic (such as IP) over the underlying ATM technology,
which is inherently connection-oriented, the ATM Forum created LAN emulation in the
race to make ATM the technology for the desktop. ATM provided speeds of up to
25 Mbps when Ethernet (a competing technology) provided only about 2 Mbps, taking
into account its software bottlenecks. The advent of LAN emulation meant that LAN
applications need not be changed and can run over an ATM-based technology.
2. In LAN emulation, membership is logical, as opposed to the physical membership
(being physically attached to the LAN) of a normal Ethernet. Thus LAN emulation
needs to take care of the "registration" of clients wishing to join or leave the LAN. It
also needs to resolve the MAC address of a destination machine, provided by the
source (the LLC layer in particular), to an ATM address, so that it may set up a VCC
to the destination machine (or to the MCS/BUS if it is a multicast/broadcast). Thus
registration and address resolution are the two problems that LAN emulation must take
care of before providing "connectionless-like" service to the LAN applications residing
in the layers above.
3. There are two types of control VCCs:
a) Control direct VCC: set up by the LE client as a bidirectional point-to-point VCC
to the LE server, and maintained as long as the LE client is a member of the emulated
LAN. This control VCC is used to exchange control messages related to the network.
b) Control distribute VCC: this optional, unidirectional control VCC is set up by the
LE server to the LE clients. It is used by the LE server to distribute control traffic.
4. Read Section 8.3.1 in the textbook for a detailed description.
In brief, ATMARP is part of the classical IP over ATM solution that allows machines
belonging to the same logical IP subnet (LIS) to communicate over the underlying
ATM technology. A LIS may consist of many LANs. The basic functionality that an
ATMARP solution must support is:
a) Allow a client to obtain the ATM address of a machine in the same LIS given its
IP address, using an ATMARP request (similar to ARP).
b) Allow a client to obtain the IP address of a machine in the same LIS given its ATM
address, using an InATMARP (inverse ATMARP) request (similar to RARP).
c) Every LIS must have at least one ATMARP server, and each client must be
configured with the ATM address of that ATMARP server.
d) Allow a machine in the LIS to register itself with the ATMARP server, so that the
ATMARP server has all the mappings between the machines' IP addresses and ATM
addresses. Periodic updates/refreshes are needed.
Thus the ATMARP solution offers MAC-like services to the LLC layer above and uses
its address resolution technique (querying the ATMARP server) to reach (set up an
ATM connection to) any machine within the same LIS.
5. Read Section 8.3.2 for a detailed description.
MARS (Multicast Address Resolution Server) helps in collecting, maintaining, and
distributing multicast information in a classical IP over ATM framework. MARS is
not responsible for the actual multicasting of the data packets. MARS helps only in
passing the multicast-group information, called the "host map", to the requesting client
(in the VC-mesh scheme) or to the MCS (in the MCS scheme). This information is
distributed using a ClusterControlVC that MARS maintains with all clients (VC-mesh
scheme) or a ServerControlVC (MCS scheme). Note that in the MCS scheme, the MCS
needs to register first with MARS.
Assuming all configurations/registrations are complete:
In the VC-mesh scheme: a client that wants to join an IP multicast group sends a
MARS_JOIN message to MARS (each client is configured with the ATM address of
MARS so that it may set up a bidirectional point-to-point VC to MARS when needed).
Upon reception of this message, MARS adds the client to the host map for that
multicast group and sends this information either via the VCC set up by the client or
via the ClusterControlVC. Once the client gets the host map and sees itself added, it
confirms. Using the host map, it then establishes a point-to-multipoint connection in
order to multicast its packets.
In the MCS scheme: a client sends a MARS_JOIN to MARS, requesting a map (the
list of ATM addresses with which it needs to establish a connection) for the requested
multicast group. MARS maintains a server map (containing the ATM addresses of one
or more MCSs that support the multicast group) as well as the host map as before.
MARS gives the requesting client the server map instead of the host map. The client
thus treats the MCSs as its destination and establishes a point-to-multipoint connection
with them. The client sends its data to the MCSs; each MCS in turn forwards the
message to the various clients it serves (learnt from the host map obtained from
MARS).
Note: We have not described here how a client leaves a multicast group; this is done to
keep things clear. Please refer to Section 8.3.2 if you would like to understand how
clients leave a multicast group.
6. The following assumptions are made:
a. MCS is registered with MARS,
b. MCS has listed the IP multicast group address it wishes to support, and
c. Clients are configured with ATM address of MARS,
Now the sequence is as follows:
1. Client sends MARS_JOIN to MARS indicating the IP multicast group it wishes to
join.
2. MARS returns a server map (ATM addresses of MCS's that support the IP
multicast group address) to the client
3. The client establishes a point-to-multipoint connection with the MCSs in the server
map. If there is only one MCS, only a point-to-point VCC is set up.
4. The MCS, upon receiving the message from the client, forwards it to the other clients
that exist in its host map for that multicast group (it obtains the host map by sending a
MARS_REQUEST message to MARS). The MCS maintains a point-to-multipoint
VCC to all the hosts in its host map.
7. Classical IP over ATM connectivity is limited to a single LIS. Thus machines
belonging to different LISs need to communicate through intermediate IP routers,
which can slow things down and undo any QoS guarantees along the path. If we can
somehow transfer data between machines in different LISs using a VCC, this
disadvantage disappears. The problem then becomes one of resolving the IP addresses
of the machines into their ATM addresses in a multiple-subnet environment. NHRP is
a technique proposed by the IETF to do exactly this.
8. An IP flow is a sequence of IP packets that share the same values of the following
parameters: <source IP address, source port number, destination IP address,
destination port number>.
9. IP switching makes the job of assembling ATM cells into IP packets at intermediate
IP routers unnecessary, by establishing a cut-through path to the destination across the
ATM switches associated with the IP switches. To establish the cut-through path,
however, the flow of IP packets needs to be identified first. Hence IP switching is
data-driven.
10. Downstream allocation and upstream allocation are two methods of distributing the
tags (labels) generated for FECs.
In the downstream allocation method, the tags (labels) are generated and distributed
[by the TSR/LSR] at the downstream end of a link with respect to the flow of IP
packets. Thus in a configuration such as: S--->A--->B--->C--->D
D would decide the incoming tag (label) for the FEC and pass it to C. C would then set
its outgoing tag (label) for that FEC to the incoming tag (label) passed by D. Similarly,
C would pass its incoming tag (label) to B, B would pass its incoming tag (label) to A,
and A to S. Thus in downstream allocation, the incoming tags (labels) are distributed in
the direction opposite to the flow of IP packets.
In upstream allocation, the tags (labels) are distributed in the direction of the flow of IP
packets. Here the outgoing tags (labels) are distributed instead of the incoming tags
(labels). Thus if we have S--->A--->B--->C--->D
S would send its outgoing tag (label) to A, which would then become A's incoming tag
(label). Similarly, A would send its outgoing tag (label) to B, B to C, and C to D.
Note: if a node has more than one neighbour, then in a downstream allocation scheme
every neighbour in the upstream direction (with respect to the flow of IP packets
towards a specific destination) receives the tag (label) from its downstream TSR (LSR),
and in an upstream allocation scheme every neighbour in the downstream direction
receives the tag (label) from its upstream TSR (LSR).
11. The MPLS label stack is used to help the LSRs route/forward a packet across
different domains.
Example: Label-Switched Path (LSP) tunneling. The source is in domain 1, the
destination is in domain 3, and domains 1 and 3 are connected through domain 2:
Domain 1: S---A---B  -->  Domain 2: A1---B1---C1  -->  Domain 3: A2---B2---D
The label stack evolves as follows: in domain 1 the packet carries Label(1); at the
crossover into domain 2, Label(2) is pushed on top of Label(1), so the stack depth
becomes 2 and the domain-2 LSRs operate on Label(2) only; at the crossover into
domain 3, Label(2) is popped and the packet is forwarded on Label(1) again.
(Label(k) does not indicate the actual values of the labels used; it just indicates the
label stack depth and the current processing level.)
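A hypothetical trace of the stack operations along this tunnel (the label names are illustrative, not actual label values):

    stack = []
    stack.append("label-1")   # ingress LSR in domain 1 pushes the level-1 label
    stack.append("label-2")   # crossover into domain 2 pushes the tunnel label
    stack[-1] = "label-2b"    # interior domain-2 LSRs swap only the top label
    stack.pop()               # crossover out of domain 2 pops the tunnel label
    print(stack)              # ['label-1']: domain 3 forwards on the level-1 label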
12. Destination-based (or hop-by-hop) routing involves each router along the way
finding a shortest path (in terms of the number of hops) to the destination. One of the
problems that occurs is congestion, if the cost of a path is calculated only by the
number of hops. If delay is used as the metric to evaluate a path, we might find a path
that is longer in hops but has a shorter overall delay.
This leads to the concept of source-based routing, or explicit routing. The path to a
destination is calculated by the source and is then followed by all the routers along the
way. Thus, while the routing computation at the source can be intensive, it allows the
source to choose an "optimal" route according to its preference (delay, hop count, etc.).
Chapter 9. ADSL-based Access Networks
1.
a) 3265.3 s = 54 minutes (approx.)
b) 160 s = 2.667 minutes
2. The only reason a splitterless ADSL modem is preferred is that the installation of a
modem that requires a splitter involves a visit by a technician, which is costly.
3. The fast path provides low delay, whereas the interleaved path introduces greater
delay but a lower error rate.
4. The Network Access Server (NAS) is there to provide scalability. It is not possible to
provide each ATU-R with a PVC to each NSP, since this would require a large number
of PVCs to be set up and managed. The role of the NAS is to terminate all the PVCs
from the ATU-Rs, and then aggregate the traffic into a single connection for each
NSP.
5. The control messages are used for setting up, maintaining and clearing the tunnels,
and hence have higher priority than data messages; reliable transport of these messages
is important. As for the data, much of what is transmitted is time-sensitive, such as
voice and video, and these applications can tolerate some errors but not delay:
retransmission of data may lead to delay and jitter. Besides, these links have a very low
error rate. Also, each tunnel multiplexes several PPP connections, and there may be
several tunnels; if error control were introduced in the form of retransmissions, it
would increase the overhead on the L2TP layer considerably, again increasing the delay.
Chapter 10. Signaling over the UNI
1) ATM links have a high propagation delay and a high transmission speed, i.e., a high
bandwidth-delay product, and they have very low error rates. So there is no need for a
link layer error control scheme. Adding such a scheme would delay the delivery of
data, since retransmissions over links with a high bandwidth-delay product are
expensive, and would increase jitter, which is not acceptable for real-time applications
transmitting data over ATM networks.
2) The basic difference is in the way the error recovery scheme is implemented in both
the cases. Let us consider the traditional ARQ scheme, such as go-back-n and
selective reject. In this case we use timers and sequence numbers. The receiver uses
the piggybacking mechanism, in which it sends the next sequence number it expects.
The transmitter, on the other hand, uses a timer for each packet transmitted. If the timer
times out without an acknowledgement, the transmitter retransmits the packet; if it does
receive the piggybacked ack, it sends the next new packet or the indicated
retransmission. In go-back-n, all packets
following the lost/erroneous packet are also retransmitted, whereas in selective reject,
only the lost/erroneous packets are retransmitted. There is no explicit polling
mechanism in place in ARQ, which requests the list of lost/erroneous frames.
In SSCOP the error recovery mechanism works on the basis of a polling scheme
coupled with an exchange of STATUS REQUEST/RESPONSE frames. The
transmitter periodically polls the receiver with the STATUS REQUEST (POLL)
frame. The receiver replies with a SOLICITED STATUS RESPONSE frame, which
contains the sequence numbers of the lost/erroneous frames. The receiver can
alternatively send an UNSOLICITED STATUS RESPONSE frame to the transmitter
when it detects a lost/erroneous frame. This is not associated with any POLL frame.
3) The primitives can be one of the following four:
request, indication, response, confirm
A request type is used when the signaling protocol wants to request a service from
SAAL. An indication type is used by SAAL to notify the signaling protocol of a
service-related activity. A response type is used by the signaling protocol to
acknowledge receipt of a primitive type indication. A confirm type is used by SAAL
to confirm that a requested activity has been completed.
4) For an example, refer to page 197 of the textbook.
5) The call reference value is associated with a particular call and is selected by the call
originator. It is possible for the two sides of the UNI interface to select the same call
reference value, which leads to confusion. To prevent this, a call reference flag is used
to differentiate between the call originator and the other end of the interface. The side
that originates the call sets the flag to 0, whereas the destination sets the flag to 1
when it replies to a message sent by the originating side. The flag thus differentiates
replies from original messages. If no such flag were used, a side would not be able to
differentiate between replies and original messages.
6) ATM Traffic Descriptor IE
7) End-to-End Transit Delay IE, Extended QoS Parameters IE
8) [Message flow diagram between the calling user, the ATM network, and the called user:
calling user -> network: SETUP
network -> calling user: CALL PROC
network -> called user: SETUP
called user -> network: CONNECT
network -> calling user: CONNECT
calling user -> network: CONNECT ACK]
9) [Message flow diagram for adding a party:
calling user -> network: ADD PARTY
network -> called user: SETUP
called user -> network: CALL PROC, then CONNECT
network -> called user: CONNECT ACK
network -> calling user: ADD PARTY ACK]
10) Q.2971 allows the root of a connection to add a leaf to its point-to-multipoint
connection. It is not possible using Q.2971 for a leaf to join a point-to-multipoint
connection without intervention from the root. However, this can be achieved using
the ATM Forum's Leaf Initiated Join (LIJ) capability.
Chapter 11. The Private Network-Network Interface (PNNI)
1.
a) A.1.1, A.1.2,
A.2.1, A.2.2, A.2.3,
A.3.2, A.3.3, A.3.4,
B.1.1, B.1.2,
B.2.2, B.2.3,
B.3.3, B.3.4,
C.1
b) Logical Group Nodes: A.1, A.2, A.3, B.1, B.2, B.3, A, B, C
c) Uplinks
A.1.1 – A.2, A.2.2 – A.1
A.1.2 – A.3, A.3.4 – A.1
A.2.1 – A.3, A.3.2 – A.2
A.2.3 – A.3, A.3.4 – A.2
B.1.1 – B.2, B.2.2 – B.1
B.2.3 – B.3, B.3.4 – B.2
B.3.3 – C, C.1 – B
A.2.1 – B, B.1.1 – A
A.3.3 – B, B.1.2 – A
Induced Uplinks
A.2 – B, B.1 – A
A.3 – B, B.1 – A
B.3 – C
d) [Figure: the hierarchical view of the network, showing nodes A.1.1 and A.1.2 within
peer group A.1, logical group nodes A.2 and A.3 within A, and logical group nodes B
and C.]
e) A.2.2, A.2, A, B, C, C.1