EE228A Communication Networks
Optical Packet Switching
Chih-Hao Chang and Leon Yu Liang Su
Introduction
Due to the drastic increase in the amount of data traffic, many of the existing circuit-oriented networks will have to be upgraded to support packet-switched data. Recently, the wavelength division multiplexing (WDM) technique has drastically boosted the communication bandwidth per link. With denser spacing between channels and higher modulation speeds of the transmission lasers, multi-terabit-per-second data rates will be deployed in the near future. Manipulating this enormous amount of data at high speed makes the design of the electronic circuits inside routers an intimidating task. Moreover, the various kinds of multimedia data and transport protocols that have to be supported by future networks further complicate the electronics.
Optical packet switching provides a potential solution for next-generation optical networks. Its high speed and its transparency to protocol and bit rate are ideal for the different classes of traffic inside the network. With the advances in optoelectronics and silicon microfabrication, a number of key components necessary for a multi-function switch node are available. For example, tunable wavelength converters based on semiconductor optical amplifiers (SOAs) and MEMS-based optical switches are being actively studied and are very promising candidates for commercialization before long.
In spite of the exciting advances in devices and the advantages of optical packet switching, the all-optical network remains a challenging topic. Without the powerful processing and abundant storage available in the electronic domain, the design of an optical packet switching router requires considerable engineering effort and ingenuity. When optical signals travel along the fiber, not only is the amplitude attenuated, but the pulse shape is also distorted by dispersion. Fiber nonlinearity and the polarization dependence of components both contribute to further degradation of the signal. Reshaping and regenerating the signal all-optically is still an active research area. Also, information delivered at different wavelengths travels at different speeds inside the fiber. Packets thus need to be re-clocked or re-aligned before information can be extracted for routing. Ideally, all the functions mentioned above, in addition to other requirements imposed by the network architecture such as add-drop capability, would be provided at the switching nodes.
We will focus on contention resolution for optical packet switching in this report. The problem is particularly important because no good method of implementing an optical buffer is known. Fiber delay lines are used as buffers in various proposed switches. If many buffers are required for a switch, not only will the physical size of the switch increase, but channel crosstalk and noise will also degrade the signal after routing. Additional effort will then be needed to bring the signal quality at the output of the switch back up to the network specification. Wavelength conversion and deflection routing are two interesting methods to reduce the number of fiber delay lines needed while maintaining a high network throughput.
The discussion is organized as follows. The network architecture is first discussed to give an overall picture of a system utilizing optical packet switching. Secondly, the realization of a switching matrix is presented, and the critical components that help to build network nodes are identified and briefly explained. Wavelength conversion, deflection routing and multiple path routing are then studied and their overall network performance is compared.
Network Architecture
The European KEOPS (Keys to Optical Packet Switching) project [1] proposed an all-optical network having an optical packet layer between the electronic switching layer, for example ATM, and the underlying optical backbone supported by WDM. This optical layer not only serves as the linkage between the terabit backbone and the sub-gigahertz electronic switching layer; its transparency in format and bit rate also helps to integrate different kinds of services in the future network. Packets in this layer are routed with optical cross connects (OXC) within the switch. The network reference structure is shown in figure 1.
Figure 1: Network reference structure in KEOPS.
Optical Packet Switching Node
A general optical switching node consists of three sections. At the input interface, chromatic dispersion compensation is first performed to improve the signal quality. The packets are then coarsely aligned and the header information is extracted through O/E conversion. The header message is then sent to the control unit for routing.
The optical cross connect is the central part of the switching node. It can be realized with Mach-Zehnder waveguides, ink-jet bubble technology or MEMS. Because there is no "optical RAM" available to implement a large buffer, fiber delay lines are used as the buffer in the optical domain. However, large buffers implemented with fiber delay lines imply an additional long fiber path that packets have to go through. This will further degrade the signal with channel crosstalk and accumulated amplified spontaneous emission noise.
At the output of the switching node, an optical regenerator is needed to meet the power requirements of the system. Timing-jitter correction, possible wavelength conversion and even header-rewrite electronics are also necessary in order to maintain a high signal quality throughout the cascade of switching nodes in the WDM system.
Wavelength Conversion
The availability of multiple wavelengths in the communication system not only enhances the fiber link capacity but also provides an additional dimension in the switching matrix. This is important because only a limited amount of buffering is available at each optical packet switching node. The buffer capacity of a given fiber delay line scales up in the same way as WDM enhances the fiber capacity: the more wavelengths available for buffering, the fewer fiber delay lines are needed.
To appreciate the wavelength conversion solution, it helps to have some background on how optical buffering has evolved. Similar to the electronic counterpart, several buffering architectures were proposed for optical packet switching. The "shared buffer" technique re-circulates packets back to the input whenever contention occurs. This procedure is repeated until packets get their slot in the desired output. When the traffic is bursty, several re-circulations might be necessary before packets are sent out of the switch. Due to the coupling loss of optical components, packets that undergo several circulations may have an amplitude much lower than others. This complicates signal amplitude equalization at the outbound links. The "output buffer" is another candidate to resolve contention. In this case, fiber delay lines with different lengths are installed at the output. In bursty situations, quite a few delay lines might be necessary to prevent large packet loss. A combination of the previous two methods was also proposed: optical buffers are dedicated to each output port in addition to a shared buffer pool. When the output buffers are full, packets are routed to the common buffer and re-circulated back to the inputs. Switches with this configuration have the advantages of less frequent re-circulation and fewer fiber delay lines. The price is, of course, a more complicated switching algorithm.
Consider the output buffer case mentioned above. Given an incoming traffic load of 0.8, more than 40 delay lines are needed if the packet loss rate is to be kept below 10^-10 [2]. If we use the packet format proposed by the European KEOPS project, the duration of a packet is 1.7 µs. About 340 m of fiber is therefore needed to buffer one packet, which amounts to about 1.5 km of fiber length per channel. The number easily goes beyond tens of kilometers if multiple channels are considered! As DWDM systems become the backbone of the optical communication network, wavelength conversion is naturally proposed to resolve contention without long fiber delay lines.
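As a rough check of these numbers, the fiber length needed to delay one packet follows directly from the packet duration and the group velocity of light in fiber, roughly 2x10^8 m/s for a refractive index near 1.5 (an assumption not stated in the report). The short sketch below redoes this arithmetic.

```python
# Back-of-the-envelope fiber length for optical packet buffering.
# Assumption: group velocity in silica fiber ~ c / 1.5 = 2e8 m/s.

C = 3.0e8                  # speed of light in vacuum [m/s]
V_GROUP = C / 1.5          # approximate group velocity in fiber [m/s]

def delay_line_length(packet_duration_s: float, slots: int = 1) -> float:
    """Fiber length giving a delay of `slots` packet durations."""
    return V_GROUP * packet_duration_s * slots

packet_duration = 1.7e-6   # seconds (KEOPS packet format)
print(f"One packet slot : {delay_line_length(packet_duration):6.0f} m")          # ~340 m
print(f"Ten packet slots: {delay_line_length(packet_duration, 10) / 1e3:4.1f} km")
```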
Figure 2 shows a proposal to resolve contention using wavelength conversion [3]. The first section of the switch consists of demultiplexers separating packets of different wavelengths. Based on the status of the switch, an electronic control unit (not shown) shifts incoming packets, if necessary, to a free wavelength in the buffer at their intended output link. The space switch in the middle section routes the packets to the correct output link. Note that a buffer with B positions now needs only B/N fiber delay lines instead of B (the additional line refers to packets routed without delay).
Figure 2: WDM packet switch block with buffers realized by
fiber delay-lines and with tunable wavelength converters to
address free space in the buffers (one buffer per outlet) [3].
Figure 3: Packet loss probability with and without tunable
wavelength converters versus the number of fiber delay-lines,
B/N, at a load per wavelength channel of 0.8 for a 16 X 16
switch with four and eight wavelengths per inlet. [3]
It has to be pointed out, however, that wavelength conversion does not solve bandwidth contention. If more packets are sent to a certain output link than its bandwidth allows, the packets that overflow the buffer will be dropped. Fortunately, in a DWDM system, contention usually occurs when packets of the same wavelength coming from different input links are destined for the same output link; it is less likely that many packets overload a particular outbound link.
Figure 3 shows that the packet loss rate can be reduced by increasing the number of wavelengths N available for conversion. When N increases, the input and output channels scale up proportionally. This gives a constant load independent of N, and the total buffer capacity needed for a given packet loss rate is almost unchanged. The buffer capacity provided by a fixed number of fiber delay lines, however, increases linearly with the wavelength number N. This is why, for a fixed number of fiber delay lines, the packet loss probability decreases dramatically with the number of wavelengths.
Figure 4: Share-per-node wavelength conversion switch architecture.
Figure 5: Share-per-link wavelength conversion switch architecture.
In the previous architecture, the number of wavelength converters equals the total number of input channels connected to the switch. Clearly, this is not necessary. Consider the simplest case where two packets of the same wavelength request the same output link: only one packet needs to be converted to another wavelength while the other can remain unchanged. The number of converters can be further reduced if converters are shared between different channels or links. Figures 4 and 5 show architectures that realize this idea [4]. In figure 4, a pool of converters, shown as WCB, is shared among all links and channels. In figure 5, converters are shared between channels from the same input link, but not between different links.
The problem then arises of how many wavelength converters are needed for these architectures. Intuitively, we would like to maximize the number of distinct wavelengths on a given output link for a given input condition. The worst case is when all the packets to be routed to a given port are of the same wavelength: if there are n packets in this situation, n - 1 conversions will be necessary. Mathematically, we can model the problem as a grouping problem of colored blocks [4]. Let the number of incoming links be L and the number of channels per link be M; we then have L*M colored blocks, where each of the M colors has L blocks. These L*M blocks are to be grouped into L groups, each with M blocks. Assuming that we have to pay for each group according to the number of different colors in it, e.g. one dollar per color, it can be proved that the minimum total cost over all L groups is ( M + L - gcd(M,L) ), where gcd stands for greatest common divisor. The maximum number of conversions needed in the share-per-node case can then be shown to be ( L*M - M - L + gcd(M,L) ). Using the same idea, the maximum number of conversions per link in the share-per-link case can be shown to be ( M - ceil(M/L) ), where ceil(M/L) is the smallest integer greater than or equal to M/L. The maximum number of conversions represents the upper bound on the number of converters required for each architecture. In both cases, the number of converters is smaller than the total number of channels. Note that for the share-per-link case, the total number of converters is L*( M - ceil(M/L) ).
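These bounds are easy to check numerically. The short sketch below evaluates both expressions from the paragraph above; for L = 4 links and M = 8 wavelengths it reproduces the value of 24 used later in the discussion of figure 7.

```python
from math import gcd, ceil

def max_conversions_share_per_node(L: int, M: int) -> int:
    """Upper bound on conversions when converters are shared across all links."""
    return L * M - M - L + gcd(M, L)

def max_conversions_share_per_link(L: int, M: int) -> int:
    """Upper bound on conversions for a single input link (share-per-link case)."""
    return M - ceil(M / L)

L, M = 4, 8   # 4 input links, 8 wavelength channels per link
print(max_conversions_share_per_node(L, M))        # 24
print(max_conversions_share_per_link(L, M))        # 6 per link
print(L * max_conversions_share_per_link(L, M))    # 24 converters in total
```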
Figure 6: Packet dropping probability versus the packet arrival rate per channel for 4 links. For the cases with conversion, the number of converters equals the maximum number of conversions stated in the text.
Figure 7: Packet dropping probability versus the number of converters per node in the share-per-node architecture (packet arrival rate = 1.0, 0.5 and 0.2; 4 links and 8 wavelengths per link).
Figure 6 compares the packet loss rate of the case without conversion and of cases with different numbers of wavelengths available for conversion. As expected, the loss rate decreases as the number of wavelengths available for conversion increases; the packet loss that remains in the wavelength conversion case is due to bandwidth contention. Figure 7 is more interesting. In this case, 8 channels per link and a total of 4 links in the system give a maximum of 24 conversions based on the formula stated above. The simulation shows that once the number of converters exceeds a certain threshold, the packet loss rate saturates even at very high traffic load. This implies that the maximum number of converters is rarely necessary under realistic traffic conditions, and that the number of converters can be drastically reduced from the theoretical value without compromising the switch performance.
Figure 8: Structure of the WDM optical packet architecture
equipped with wavelength converters and fiber delay-lines.
Figure 9: Buffer state and wavelengths used by the arriving packets before the conversion process (a), after the first wavelength conversion (b) and after the second one (c).
Figure 8 shows an implementation of the "partially shared buffer" with wavelength conversion [5]. Packets not blocked by contention are routed directly to the output. Congested packets, instead of staying at the same wavelength and recirculating until an empty slot at the desired output link becomes available, are converted to a free wavelength/channel for a better chance of being routed out of the switch. The wavelength conversion algorithm ensures that the load on the different channels of a given output link is evenly distributed, as shown in figure 9. With the proposed algorithm, the average number of wavelength conversions can be evaluated for different input traffic conditions and output buffer states. This average number of conversions is used as a reference for the number of converters required in the switch.
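The load-balancing idea behind figure 9 can be sketched as a simple greedy rule: a packet that cannot stay on its arrival wavelength is assigned to the wavelength whose delay-line queue at the desired output is currently shortest, and a conversion is counted whenever the assigned wavelength differs from the arrival wavelength. This is only an illustrative reading of the scheme described in [5], not the authors' exact procedure.

```python
# Illustrative greedy wavelength assignment at one output link.
# queue_len[w] = number of packets already queued on wavelength channel w.

def assign_wavelengths(arrivals, num_wavelengths, max_queue):
    """arrivals: list of arrival wavelengths of packets destined to this output.
    Returns (conversions, dropped) under a shortest-queue-first rule."""
    queue_len = [0] * num_wavelengths
    conversions = 0
    dropped = 0
    for arrival_w in arrivals:
        # Pick the least-loaded channel; prefer the arrival wavelength on ties.
        best = min(range(num_wavelengths),
                   key=lambda w: (queue_len[w], w != arrival_w))
        if queue_len[best] >= max_queue:
            dropped += 1          # buffer overflow on every channel
            continue
        queue_len[best] += 1
        if best != arrival_w:
            conversions += 1      # packet had to be wavelength converted
    return conversions, dropped

# Example: five packets, three of them on wavelength 0; four channels, 2-deep buffers.
print(assign_wavelengths([0, 0, 0, 2, 3], num_wavelengths=4, max_queue=2))
```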
By assuming independent and identically distributed incoming traffic on all channels and a uniform probability for a packet to be routed to each output link, the average number of conversions, and therefore the number of converters needed, can be plotted versus traffic load as in figure 10. Again, the required number of converters is much lower than the total number of channels; in fact, for all the cases simulated, a number of converters below 20% of the total number of channels is sufficient. Figure 11 shows the "survivor function" against the number of converters. The survivor function is defined as the probability that the actual number of conversions required is larger than the given number of converters. In other words, it represents the cases where the number of converters is not enough to handle the traffic, resulting in packet loss. Since the result is not very sensitive to the traffic load, the number of converters should always be chosen based on the worst traffic situation.
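Under the independence assumption, the distribution of the number of conversions per time slot can also be estimated with a small Monte Carlo experiment. The sketch below uses a deliberately simplified contention model (at most one packet per wavelength leaves an output unconverted in a slot, and buffering details are ignored), so it illustrates the shape of the argument rather than reproducing the analysis of [5].

```python
import random
from collections import Counter

def conversions_one_slot(N, M, p, rng):
    """Conversions in one slot for an N x N switch with M wavelengths per link.
    Simplification: at each output, one packet per wavelength passes unconverted;
    every additional packet on that wavelength counts as one conversion."""
    per_output = [Counter() for _ in range(N)]
    for _link in range(N):
        for wavelength in range(M):
            if rng.random() < p:                  # channel carries a packet
                out = rng.randrange(N)            # uniform output choice
                per_output[out][wavelength] += 1
    return sum(max(c - 1, 0) for counts in per_output for c in counts.values())

def survivor(samples, x):
    """Empirical probability that the required conversions exceed x converters."""
    return sum(s > x for s in samples) / len(samples)

rng = random.Random(0)
samples = [conversions_one_slot(N=16, M=4, p=0.8, rng=rng) for _ in range(20000)]
print("average conversions per slot:", sum(samples) / len(samples))
print("P(conversions > 12):", survivor(samples, 12))
```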
The number of converters obtained with the above assumptions is actually an overestimate. The reason is that the assumption of an independent packet distribution over the output links is debatable. In a given time slot, the number of incoming packets can at most equal the total number of channels. If a large fraction of the packets are destined for the same output, fewer packets can head toward the remaining outputs. This negative correlation causes the total number of conversions to be statistically smaller than in the case where an independent distribution of packets among all outputs is assumed. Only when the number of input links is large is independence a valid assumption.
Figure 10: Average number of conversions versus the traffic per channel p. The parameter M is the number of wavelengths per input line; N is the number of input and output lines.
Figure 11: Survivor function of the number of required converters for M = 4, N = 16 and p varying from 0.5 to 0.9.
As mentioned in the previous section, optical switches with add-drop capability for local packets are highly desirable. A switch with this function is shown in figure 12. Incoming packets from each input fiber are demultiplexed and sent to a stack of nw modules. The absorption block first takes out packets destined for the node; enough receivers are assumed at the absorption block so that all the packets for the node will be received. Local data are added to the network based on the switch condition and the algorithm used. The through traffic and the newly added local packets are then wavelength converted, if necessary, and routed to the output fibers.
Figure 12: Logical structure of the node [6].
Three different access schemes were studied in [6] and the results are shown in figures 13 and 14. Pooled injection (PI in the figures) handles all the locally generated packets together and tries to maximize the number of injected packets per slot over all wavelengths; in this case, nw tunable transmitters are required at the injection block. The pooled-per-wavelength injection (PPWI) scheme maximizes the number of injected packets as PI does, but the nw transmitters are fixed, one per wavelength. The independent-per-wavelength injection (IPWI) algorithm assigns each of the nw locally generated data streams a fixed-wavelength transmitter, and every packet stream is handled independently of the others. The less greedy scheme, IPWI, was shown to have the best performance at high traffic load in terms of throughput and delay. This is not surprising, since maximizing the number of injected packets may reduce the contention resolution capability of the wavelength conversion block.
Again, in figure 13 we see that the deflection probability is reduced when more wavelengths are available for conversion. The improvement is most significant when nw increases from 1 to 2 and becomes marginal at higher values of nw. This is the same behavior observed in figure 6. Also shown in figure 13 is the effectiveness of fiber delay lines used as buffers. Four or more wavelengths are needed for wavelength conversion to match the contention resolution capability of a single delay line. This is because the number of care cells stored in the buffer, namely those cells that cause contention, can be made much smaller than the number of care cells circulating in the network in the case of wavelength conversion without buffering. The deflection probability is reduced when fewer care cells exist in the network. Moreover, the cost per switch can be trimmed down without active devices like tunable wavelength converters. From this point of view, we should implement buffers with optical delay lines, instead of wavelength converters, to resolve contention.
On the other hand, in figure 13 the probability of deflection is more than 10^-2 at full load for a single fiber delay line. To reduce this number down to 10^-10, many more delay lines might be needed. It therefore seems reasonable to use fiber delay lines as buffers if the network only has a few wavelengths available, while in a system like DWDM, wavelength conversion is a strong candidate. For medium network sizes, whether fiber delay lines, wavelength conversion or a combination of the two should be chosen needs further study.
Figure 13: Average number of hops versus throughput per wavelength in a 64-node ShuffleNet. Number of wavelengths as a parameter: nw = 1, 2, ..., 15. Solid lines – pooled injections (PI); dashed lines – independent per-wavelength injections (IPWI); delay-line – a single-wavelength network with one optical buffer per node.
Figure 14: Deflection probability at care nodes d versus link
utilization u [cells/slot] in a 64-node ShuffleNet. Number of
wavelengths as a parameter: nw = 1, 2, .., 15. Solid lines—
pooled injections (PI); circles (nw = 1, 2, 3 )—pooled per
wavelength injections (PPWI); dashed lines—independent
per-wavelength injections (IPWI).
Deflection Routing
Overview
One of the major problems in an optical network is resolving packet contention at the outputs of intermediate nodes. As in electronic networks, some buffering effect needs to be applied to optical networks as well. The effect of buffering can be achieved in three dimensions in an optical packet switching network: time, wavelength and space. The time dimension is the traditional buffering using memory, either by converting the optical signal to an electronic signal (O/E conversion) or by using a delay line to delay the traveling packet in the network. The wavelength dimension uses the wavelength conversion technology discussed above, and lastly, the space dimension uses the technique of deflection routing. Deflection routing gives the best results in meshed networks; thus, the discussion below analyzes the performance of deflection routing in meshed networks.
Hot Potato Routing vs. Store and Forward Routing
A special case of deflection routing is called hot potato routing, where no buffer is used at the output ports and a conflicting packet is deflected to a remote node in the network, like a hot potato passed around in a group of people. In effect, the whole network becomes one large buffer. Each node in the network is a two-connected node with two input and two output ports, plus an additional input port for locally generated packets (Fig. 15). Whenever packets contend for the same output port of a node, one of them is assigned the optimal path and the other is intentionally misrouted into the network. The misrouted packet then travels a different path to the destination than other packets with the same origin and destination.
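The per-node decision in hot potato routing is simple enough to express directly. The sketch below assumes a 2×2 node where each packet has a set of preferred output ports (both ports for a packet at a don't-care node); it is a generic illustration of the deflection rule, not the specific node design of any of the cited papers.

```python
import random

def route_slot(packets, rng=random):
    """packets: up to two (packet_id, preferred_ports) pairs arriving at a 2x2 node
    in one time slot. Returns {packet_id: (output_port, was_deflected)}."""
    assignment = {}
    free_ports = {0, 1}
    # Randomize service order so neither input is systematically favored.
    for packet_id, preferred in sorted(packets, key=lambda _: rng.random()):
        wanted = [p for p in preferred if p in free_ports]
        if wanted:                        # a desired port is still free
            port, deflected = wanted[0], False
        else:                             # hot potato: take whatever port is left
            port, deflected = min(free_ports), True
        free_ports.remove(port)
        assignment[packet_id] = (port, deflected)
    return assignment

# Two packets both prefer output 0: one gets it, the other is deflected to output 1.
print(route_slot([("A", {0}), ("B", {0})]))
```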
Compared to store and forward routing, hot potato routing increases the average number of hops required for packets to reach their destination. This is because store and forward routing uses a shortest-path approach to minimize the expected number of hops experienced by the packets. One consequence of the different paths taken is that packets arrive at the destination out of sequence; therefore, a finite-window destination buffer is needed at the destination to reassemble the packets [10]. In addition, the increase in propagation delay of each packet lowers the throughput of the network: a portion of the network capacity is used for misrouted packets, while only a small fraction is used for newly generated packets from the nodes [7]. Since the packets are not completely regenerated on the optical path, the propagation distance is limited at high bit rates in the optical network. The bit rate is thus limited by the packet error rate (PER), which in turn is determined by the throughput of the network [7]. What the network gains from deflection routing is the simple implementation of the nodes. Building a buffer at the nodes of an optical network is extremely difficult and expensive; hot potato routing provides a simple, low-cost node design. Also, a store and forward scheme using a delay line requires amplification of the signal, which also amplifies the unwanted noise, while hot potato routing does not have this concern [7]. The conclusion of the comparison between store and forward routing and deflection routing is that a tradeoff between throughput and hardware cost and complexity needs to be considered.
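Because deflected packets take different paths, the destination has to re-order them before delivery. A minimal sketch of such a finite reassembly window is shown below; the window size, sequence numbers and drop policy are illustrative assumptions, not the specific buffering scheme of [10].

```python
class ReassemblyWindow:
    """Deliver packets in sequence using a bounded reorder buffer."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.next_seq = 0          # next sequence number expected in order
        self.pending = {}          # out-of-order packets waiting for delivery

    def receive(self, seq, payload):
        """Returns (delivered_payloads, dropped) for one arriving packet."""
        if seq < self.next_seq or seq in self.pending:
            return [], True                     # duplicate or stale packet
        if seq - self.next_seq >= self.window_size:
            return [], True                     # beyond the window: drop it
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:    # release any in-order run
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered, False

w = ReassemblyWindow(window_size=4)
for seq in (1, 2, 0, 3):                        # packets arriving out of order
    print(seq, w.receive(seq, f"pkt{seq}"))
```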
Variations to the hot potato scheme:
It can be visualized that as the load of the network increases, the throughput of hot potato routing collapses: a large number of deflected packets occupies the capacity of the links, while locally generated packets at each node only make the situation worse. This can be improved by adding a small buffer, sized in integer multiples of the packet size, at each node in the network. This decreases the number of packets that need to be deflected and thus the deflection probability of a packet. Deflection routing can also be combined with wavelength conversion to improve the network performance; different results are obtained with different buffer sizes and numbers of wavelengths [6].
Performance of Deflection Routed Networks
Characteristics of Deflection Routing
There are three properties that dictate the performance of a deflection-routed network:
1. Diameter: the maximum distance between any pair of nodes in the network. It indicates how compact a network is; the more compact the network, the shorter the traveling distance required by the packets.
2. Deflection Cost: the maximum number of additional hops a packet experiences due to a deflection. This parameter shows how the network performance degrades under heavy traffic.
3. Don't Care Nodes: a don't care node is a node at which both outputs lie on a shortest path to the destination. A high number of such nodes benefits the network because it alleviates the load by distributing packets destined for the same destination over different output ports while they still travel along shortest paths.
These properties determine the throughput, delay, deflection probability, and hop distribution of the packets in the networks. The Manhattan Street Network and ShuffleNet
will be used to study these properties.
Figure 15. Manhattan Street Network and ShuffleNet
Figure 16. Percentage of don’t care nodes
Manhattan Street Network and ShuffleNet
Two of the most popular meshed networks are the Manhattan Street Network (MS) and ShuffleNet (SN), shown in figure 15. From [8], MS has N = n_MS^2 nodes; the diameter is d = n_MS + 1 when n_MS/2 is even and d = n_MS when n_MS/2 is odd, so d is approximately N^(1/2). The deflection cost of MS is 4, since after four hops a deflected packet is back at the same node. For don't care nodes (DC), MS converges to 50% for large N. SN has N = n_SN * 2^(n_SN) nodes, a diameter of d = 2 log2 N, a deflection cost of log2 N, and a don't care node percentage of DC% = 1 - (2/n_SN)(1 - 1/2^(n_SN)). Figure 16 shows the percentage of don't care nodes of the two networks. For a low number of nodes, MS and SN have similar deflection cost and DC%. Because SN has a smaller diameter, it has better throughput performance than MS. As the number of nodes increases, the DC% increases for SN. This property allows SN to be used with a small buffer or no buffer at all.
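The sketch below simply transcribes the expressions quoted above from [8] so the trends plotted in figure 16 can be checked numerically; n_MS and n_SN are the usual size parameters of the two topologies.

```python
import math

def ms_metrics(n_ms):
    """Manhattan Street Network with N = n_ms**2 nodes."""
    nodes = n_ms ** 2
    diameter = n_ms + 1 if (n_ms // 2) % 2 == 0 else n_ms
    return {"nodes": nodes, "diameter": diameter, "deflection_cost": 4}

def sn_metrics(n_sn):
    """ShuffleNet with N = n_sn * 2**n_sn nodes (formulas as quoted from [8])."""
    nodes = n_sn * 2 ** n_sn
    diameter = 2 * math.log2(nodes)
    dc_fraction = 1 - (2 / n_sn) * (1 - 1 / 2 ** n_sn)
    return {"nodes": nodes, "diameter": diameter,
            "deflection_cost": math.log2(nodes), "dc_percent": 100 * dc_fraction}

print(ms_metrics(8))    # 64-node Manhattan Street Network
print(sn_metrics(4))    # 64-node ShuffleNet
```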
Single buffer vs. Hot Potato Routing
The performance of the networks improves greatly with the addition of just a single packet-sized buffer. The analysis assumes evenly distributed packet generation, with the same probability for each node to be the destination [8]. Figure 17 shows the throughput of the 64-node and 400-node networks under hot potato and single-buffer routing versus g, the probability of a new packet being generated at a node. Note the improvement in throughput when a single buffer is used. S&F stands for the store and forward scheme; its throughput is higher than that of both other schemes because it minimizes the path lengths of the packets. The higher throughput of the single-buffered networks indicates that fewer hops are needed for the packets to reach their destination, which also corresponds to a decrease in the load on the network links.
Figure 17. Throughput versus probability of packet generation of one buffer and hot potato scheme
If there is a large number of don't care nodes in the network, the packets have more paths to take without introducing any unwanted effects, which alleviates the congestion of the network links. A high percentage of don't care nodes increases the probability of a packet finding itself at a don't care node [8]. The higher the don't care probability Pdc, the probability of a packet being at a don't care node, the more the network traffic benefits. Figure 18 plots Pdc versus g for MS and SN at two network sizes, 64 nodes and 400 nodes, for the one-buffer and hot potato schemes. Pdc of SN is significantly higher than that of MS [8]. The increase of Pdc with increasing network size corresponds to figure 16, where the DC% increases as the number of nodes increases. Pdc of MS also reflects figure 16, where a change in the size of the network does not change Pdc. The difference between the performance of the one-buffer network and the hot potato scheme becomes most apparent in the deflection probability Pnet, the probability that a packet is deflected. Figure 19 plots Pnet versus g. The one-buffer scheme reduces Pnet by a factor of two.
Figure 18. Don’t care probability versus probability
of packet generation for MS and SN at 64 and 400 nodes.
Figure 19. Deflection probability versus probability
of packet generation of MS and SN at 64 and 400 nodes.
When the performances of MS and SN are compared, it seems apparent that SN prevails over MS; however, when the hop probability distribution is evaluated, MS is more promising [8]. Figure 20 plots the probability distribution of the number of hops. Even though SN has a lower mean number of hops, the tail of its distribution extends much farther out than that of MS [8]. A point to note is that there is always a crossover point between the MS and SN curves beyond which the MS network outperforms the SN network as the number of hops increases. At low g the crossover is at low probability and SN performs better than MS; at high g, the crossover is at a higher probability [8]. The area under the SN curve beyond the crossover becomes significantly greater than the area under the MS curve. This corresponds to a larger fraction of packets experiencing a large number of hops in the SN network than in the MS network. In addition, the packets in the tail of the SN distribution experience a larger number of hops than the packets in the tail of the MS distribution. The number of hops of the one-buffer scheme is reduced by a factor of two compared to hot potato routing. This is because shorter paths are made available by delaying conflicting packets by a one-packet propagation time.
Figure 20. Probability distribution of number of hops versus number of hops. Hot potato and one buffer scheme
is illustrated with different packet generation parameters g = 0, 0.1, 0.3, 0.5, 1
Consider now the case where the destinations of the packets are no longer evenly distributed across the network, but instead a stream of packets shares the same destination. The throughput of the network is affected because a higher number of packets are now contending for the same destination [8]. Even though this does not produce hot spots as pronounced as in shortest path routing, it results in a higher deflection probability and thus increases the number of hops of the packets. Figure 21 shows the throughput of the one-buffer and hot potato routed networks for different stream lengths. As the stream length goes up, the effectiveness of deflection routing decreases.
Figure 21. Throughput of the MS and SN networks versus probability of packet generation when having different stream
lengths of packets going from the same source to the same destination.
Multiple Path Routing
Figure 22. Pan-European Topology
Figure 23. Percentages of nodes versus number
of optional outputs.
The concept of deflection routing is generalized as Multiple Path Routing (MPR), indicating that packets may take multiple paths to a destination. The opposite of MPR is store and forward Single Path Routing (SPR). In MPR, contention between packets for the same output ports needs to be resolved. Paths are prioritized in different ways, and different algorithms have been described to assign paths to the deflected packets [9]. In [9], the Pan-European topology is used (figure 22). Two techniques are introduced: shortest distance (SD) and least number of hops (LNH). A path satisfies SD-MPR if it is shorter than two times the shortest path and has a smaller or equal number of hops. LNH-MPR uses all paths with the same number of hops as the minimum-hop path. Figure 23 plots the percentage of nodes versus the number of optional outputs. LNH-MPR displays a better result because it reduces the percentage of nodes with only a single output option and thus spreads the traffic more evenly among the nodes. Relaxing the LNH rule produces an even better result [9]: the mean of the distribution shifts toward a higher number of optional outputs as the relaxation increases, corresponding to a more uniformly distributed network. However, optimization of the network has to be considered as a tradeoff between an increase in traffic and an increase in propagation delay [9]. More routing flexibility corresponds to an increasing complexity of the lookup table in each node. In addition, at the receiver side, more path choices correspond to a greater out-of-order effect on the packets, and a larger destination buffer is needed to compensate for this effect. This induces another tradeoff between the reassembly window [10], which grows with the number of path choices, and the packet loss rate required for the network.
The rules adopted to assign packets to prioritized paths are (a short sketch applying them in order follows the list):
1. Packets in the buffer are sent to the first available slot.
2. Prioritize paths according to the distance to the destination.
3. Prioritize according to the network's own output priorities.
4. Output according to SD or LNH or a combination of them.
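A minimal reading of these rules as a selection procedure is sketched below: candidate output ports allowed by the SD or LNH rule are sorted by remaining distance to the destination, then by a node-local port priority, and the packet takes the first candidate that is still free. The data structures and tie-breaking here are illustrative assumptions, not the algorithm of [9].

```python
def choose_output(candidates, free_ports, distance_to_dest, port_priority):
    """candidates: output ports allowed by the SD or LNH rule for this packet.
    distance_to_dest[port]: remaining distance if the packet leaves on that port.
    port_priority[port]: the node's own static preference (lower is better).
    Returns the chosen port, or None if every candidate port is already taken."""
    ranked = sorted(candidates,
                    key=lambda p: (distance_to_dest[p], port_priority[p]))
    for port in ranked:                  # rules 2 and 3: distance, then priority
        if port in free_ports:           # rule 4: first available prioritized output
            return port
    return None

# Example: two allowed ports; port "b" is closer to the destination but busy.
print(choose_output(candidates=["a", "b"],
                    free_ports={"a"},
                    distance_to_dest={"a": 700, "b": 450},
                    port_priority={"a": 0, "b": 1}))   # -> "a"
```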
The concerns with MPR are the propagation delay and the number of hops taken. If MPR is combined with wavelength conversion, these problems can be mitigated. Figure 24 shows the decrease in propagation delay for MPR as the number of wavelengths increases. SPR is constant across the graph because it always takes only the shortest path in the network. For long streams of packets from the same source to the same destination, figure 25 plots the distribution of the number of hops taken under MPR and SPR. Only 8% of the traffic experiences a higher number of hops [9]. This percentage can be decreased further by increasing the buffering at the intermediate nodes.
Figure 24. Average propagation delay of MPR and SPR versus the number of wavelengths.
Figure 25. Distribution of the number of hops for MPR and SPR.
Summary
Optical packet switching was reviewed from system architecture, switching node
configuration and functions to several different approaches to minimize the impact of the
lacking of optical buffers. For optical packet switching, wavelength conversion is an efficient method to resolve packet contention in a system with abundant wavelengths.
However, the number of wavelengths available in a router will determine the complexity
of the design and thus the cost of the whole system. Deflection routing is another potential solution when an inexpensive system is considered. Deflection routing provides an
easy implementation to resolve contention; at the cost of network throughput and link
capacity. The performance of deflection routing greatly depends on the network topology. With the knowledge of the size and niche of the optical network, tradeoffs between
the hardware and routing algorithm must be determined to achieve less costly and efficient terabit switching nodes and networks.
References:
[1] Gambini, P. et al., "Transparent optical packet switching: Network architecture and demonstrations in the KEOPS project," IEEE Journal on Selected Areas in Communications, vol. 16, no. 7, pp. 1245-1259, 1998.
[2] Hluchyj, M. G. et al., "Queuing in High-Performance Packet Switching," IEEE Journal on Selected Areas in Communications, vol. 6, no. 9, pp. 1587-1597, 1988.
[3] Danielsen, S. K. et al., "Wavelength Conversion in Optical Packet Switching," Journal of Lightwave Technology, vol. 16, no. 12, pp. 2095-2108, 1998.
[4] Lee, K. et al., "Optimization of a WDM Optical Packet Switch with Wavelength Converters," IEEE Infocom '95, pp. 423-430, 1995.
[5] Eramo, V. et al., "Dimensioning of the Wavelength Converters in a WDM Optical Packet Switch," Photonic Network Communications, vol. 2, no. 1, pp. 73-84, 2000.
[6] Bononi, A. et al., "Analysis of Hot-Potato Optical Networks with Wavelength Conversion," Journal of Lightwave Technology, vol. 17, no. 4, pp. 525-534, 1999.
[7] Yao, S. et al., "Advances in Photonic Packet Switching: An Overview," IEEE Communications Magazine, pp. 84-94, 2000.
[8] Forghieri, F. et al., "Analysis and Comparison of Hot-Potato and Single-Buffer Deflection Routing in Very High Bit Rate Optical Mesh Networks," IEEE Transactions on Communications, vol. 43, no. 1, pp. 88-98, 1995.
[9] Castanon, G. et al., "Optical Packet Switching With Multiple Path Routing," Computer Networks, http://www.elsevier.com/locate/comnet, 2000.
[10] Tan, J. et al., "Efficient Buffering Techniques for Deflection Routing in High Speed Networks," IEEE, pp. 28-36, 1996.