Topology Selection Criteria for a Virtual Topology Controller based on Neural Memories

Y. Sinan Hanay∗, Shin'ichi Arakawa†∗, and Masayuki Murata†∗
∗National Institute of Information and Communications Technology, Japan
Email: [email protected]
†Graduate School of Information Science and Technology, Osaka University, Japan
Email: {arakawa, murata}@ist.osaka-u.ac.jp
Abstract—This work extends a previously proposed algorithm for virtual topology reconfiguration in all-optical networks. Earlier, an algorithm using auto-associative neural memories was presented; that algorithm stores topologies with equal weights. In this work, we analyze the effect of weighting topologies differently based on maximum flow, average weighted hop, and the age of the topology. Although we focus on optical networks, the algorithm and the analysis we present here can be useful in other network domains, such as wireless networks.
I. INTRODUCTION

Optical networks have become the primary choice for transmitting data over long distances, owing to the efficiency of fiber technology over copper wires. In addition, optical networks employ wavelength-division multiplexing (WDM), which allows multiple channels to be carried over a single fiber. This permits flexibility in constructing "virtual" (or logical) topologies on top of the physical layer. These topologies are router-level (i.e., intra-AS) topologies that a network operator can optimize based on traffic demand.

The Virtual Topology Design (VTD) problem is to find a topology that satisfies a performance metric (e.g., minimizing the maximally loaded link) or a resource metric (e.g., minimizing the number of transceivers). In a physical topology the edges are optical fiber cables; in a virtual topology the edges are called lightpaths. The VTD flow starts with choosing the lightpaths that are to be constructed. Next, a controller runs a routing algorithm in the physical layer to check whether the network resources allow the establishment of the chosen lightpaths. Each lightpath is assigned a unique wavelength; with wavelength converters, however, a lightpath can be carried over different wavelengths.

Since network traffic may change over time, a reconfiguration of the topology may be needed to keep meeting the performance goals. This second problem is called virtual topology reconfiguration (VTR). Extra resources may be added or some resources may become unavailable, and in such a situation a network operator may want to change the goal from performance maximization to resource minimization in order to reduce energy consumption.

VTR has attracted much attention in the last two decades. Methods based on mixed-integer linear programming, heuristics, and genetic algorithms have been presented [1]–[3]. Attractor Selection Based (ASB) VTR is a previously proposed algorithm that uses neural memories [4].

In this work, we focus on selection criteria for topologies in the ASB method and present different criteria for assigning weights to topologies. We focus on optical networks; however, the method and analysis we present here can be applied to other network domains as well.

II. PROBLEM STATEMENT

In this paper, we focus on the VTR problem, which is one of the four aspects of virtual topology design, along with lightpath routing, wavelength assignment, and traffic routing [5]. The VTR problem is known to be NP-complete [6]. Exact solutions can be obtained using mixed integer linear programming (MILP), but MILP methods become intractable for more than ten nodes, whereas real-world topologies are considerably larger: for example, AT&T has 154 nodes and DFN has 30 nodes [7]. Figure 1 illustrates the problem setting. There is a physical network in which the nodes are connected through optical fiber links, and in the illustration it is assumed that each fiber link can carry two wavelengths. The corresponding virtual topology is also shown; it has five lightpaths, $p_{1-4}$, $p_{4-2}$, $p_{2-5}$, $p_{5-3}$, and $p_{3-0}$.

ASB randomly searches for topologies and stores the good topologies it finds in a neural network. It is possible to modify the neural memory so that different topologies are given different weights. We assign the weights according to the following metrics:

• The average traffic-weighted hop count of the topology
• The capacity of the topology
• The recency of the topology

The first criterion says that topologies with a lower average weighted hop count should be assigned higher weights. We find the capacity of a topology by running a max-flow algorithm. The recency of a topology is related to the self-similarity of Internet traffic: assuming that Internet traffic is periodic, we can give more weight to older topologies, projecting that they are more likely to correspond to future traffic.
III. PRELIMINARIES
This section presents ASB and auto-associative memories
briefly.
Fig. 1. A virtual network topology example. In the physical topology, each physical link is a fiber, and each fiber can carry two wavelengths. The virtual topology, corresponding to the wavelength assignment, has 5 lightpaths.
A. Attractor Selection Based Topology Control
ASB utilizes neural networks. Neural networks are useful tools when a pattern exists in the data. Here, we assume that the self-similarity of Internet traffic can be treated as a pattern: we randomly search for network topologies, and when good topologies are found we add them to a neural memory. After some time the traffic characteristics will change, and other topologies may serve that traffic better. After a further period of time, however, the first topology found will very likely serve well again.

At each time step, the topology is updated according to the system dynamics given by Equation 1. Algorithm 1 presents an overview of ASB. One key part of ASB is the selection of topologies. First, it measures the utilization of the maximally loaded link, $u_{max}$, and from this it calculates the performance metric $V_G$. Then, by ComputeWeightMatrix, it adds the found topology to memory using auto-associative memories, which are explained in detail in the next section. On line 4, ASB updates the system state corresponding to each lightpath variable by the following equation [4]:
$$\frac{dx_i}{dt} = \underbrace{\left[f\!\left(\sum_{j=1}^{n} w_{ij} x_j\right) - x_i\right] V_G}_{\text{auto-associative memory}} + \underbrace{N(0,1)}_{\text{random walk}} \qquad (1)$$
where $N(0,1)$ is a standard normal random variable and $V_G$ is the performance metric, given by

$$V_G = \frac{1}{u_{max}} \qquad (2)$$
The system dynamics shown in Eq. (1) consist of two components: an auto-associative memory term and a random walk. ASB uses binary auto-associative memories; an extension of ASB that uses multistate memories has also been proposed [8]. When the topology performs well, $V_G$ is high and the auto-associative memory term steers the topology selection; when $V_G$ is low, new topologies are searched for randomly.
1:  procedure ASB(time t)
2:      VG ← 1/umax
3:      ComputeWeightMatrix(VG, t)
4:      ComputeExpression()
5:      UpdateLightPath()
6:  end procedure
7:  procedure ComputeWeightMatrix(VG, time t)
8:      if (VG(t − 1) < 0.5 & VG(t) > 0.5) then
9:          for i ← 1, n do
10:             for j ← 1, n do
11:                 W[i, j] −= Hebb(i, j, Ak)        ◃ Eq. 6
12:         for i ← 1, n × (n − 1) do
13:             Ak[i] ← LightPath[i]                 ◃ Update attractors
14:         for i ← 1, n do
15:             for j ← 1, n do
16:                 W[i, j] += Hebb(i, j, Ak)
17:         k ← (k + 1) mod numberOfAttractors
18: end procedure
19: function ComputeExpression                       ◃ by Eq. 1
20:     for i ← 1, n × (n − 1) do
21:         for j ← 1, n × (n − 1) do
22:             x[i] += ComputeDeltaExp(i, j)
23: end function
24: procedure UpdateLightPath
25:     for i ← 1, n × (n − 1) do
26:         if (x[i] > 0.5 & CanEstablish(i)) then
27:             EstablishLightpath(i)                ◃ LightPath[i] = 1
28:         else if (IsEstablished(i) & x[i] < 0.5) then
29:             RemoveLightpath(i)                   ◃ LightPath[i] = 0
Algorithm 1. ASB method
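To make the dynamics concrete, below is a minimal Python sketch of one integration step of Eq. (1). This is an illustration, not the authors' C# implementation; the sigmoid form of f, the step size dt, and the function names are our assumptions.

    import numpy as np

    def asb_step(x, W, v_g, dt=0.1, rng=None):
        """One Euler step of the dynamics in Eq. (1).

        x   : state vector with one entry per candidate lightpath
        W   : auto-associative weight matrix of Eq. (5)
        v_g : performance metric V_G = 1/u_max of Eq. (2)
        """
        rng = rng or np.random.default_rng()
        f = lambda s: 1.0 / (1.0 + np.exp(-s))   # assumed sigmoid activation f
        drift = (f(W @ x) - x) * v_g             # auto-associative memory term
        noise = rng.standard_normal(x.shape)     # N(0,1) random-walk term
        return x + dt * (drift + noise)

    # After each step, lightpaths are established or removed when the state
    # crosses 0.5, mirroring UpdateLightPath in Algorithm 1:
    #   lightpaths = (x > 0.5).astype(int)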
B. Auto-associative Memories
The main function of auto-associative memories is to correct noisy inputs, or to find the closest matching stored pattern for a given input. Auto-associative memories are neural memories, and queries are performed by a matrix multiplication [9]. In that sense, neural memories differ from computer memories, where values are read from bitcells. The output O for an input I is calculated by

$$O = \mathrm{sgn}(I\,W) \qquad (3)$$

Here O and I are row vectors, and sgn() is the sign function given by

$$\mathrm{sgn}(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ -1 & \text{if } x < 0 \end{cases} \qquad (4)$$
In ASB, the set of stored topologies are called "attractors", as the system state is attracted toward these topologies. That is, at any point the system tries to converge to one of the stored topologies. These stored topologies are found by random walk, and they satisfy the performance criteria. A topology $T_i$ for an N-node network can be described by a binary vector of size $N^2 - N$. If there is a lightpath between a node pair, the corresponding bit in $T_i$ is set to 1, and otherwise to 0. However, in neural memories it is better to code values in bipolar form, as (1, -1) rather than (1, 0) [10], so we adopt this notation in our work. The attractor matrix is then defined as a $k \times (N^2 - N)$ matrix whose rows are topologies, where k is the number of topologies that can be stored. For this matrix A, the weight matrix W is calculated by

$$W = A^{\top} A \qquad (5)$$
Here, W is the auto-correlation matrix of A, which corresponds to Hebbian learning [11]. We analyzed the effects of different learning algorithms previously [12]; here we use Hebbian learning in order to keep the discussion simple.
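As an illustration of this construction, the following Python sketch encodes adjacency matrices as bipolar vectors of length $N^2 - N$ and builds W by Eq. (5); the helper names are ours, not from [4].

    import numpy as np

    def topology_to_bipolar(adj):
        """Flatten an N x N adjacency matrix, excluding the diagonal, into a
        bipolar vector of length N^2 - N (lightpath -> +1, none -> -1)."""
        n = adj.shape[0]
        off_diag = adj[~np.eye(n, dtype=bool)]  # drops the N diagonal entries
        return np.where(off_diag > 0, 1, -1)

    def build_weight_matrix(topologies):
        """Stack k bipolar topology vectors into the k x (N^2 - N) attractor
        matrix A and return the Hebbian weight matrix W = A^T A of Eq. (5)."""
        A = np.vstack(topologies)
        return A.T @ A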
Hebbian Learning: ASB's auto-associative memory uses Hebbian learning to store and read its elements. In our case, virtual topologies are stored in an auto-associative memory whose weight matrix is constructed using the Hebbian weight update rule below:

$$\Delta w_{i,j} = \alpha\, l_i l_j \qquad (6)$$

The learning rate is denoted by $\alpha$; it was kept constant and equal to 1 in previous works [4], [8], [12]. The weight matrix can be thought of as the sum of several weight matrices, one corresponding to each topology:

$$W = \alpha_1 W_1 + \alpha_2 W_2 + \cdots + \alpha_N W_N \qquad (7)$$
The weight matrix can also be constructed using the pseudoinverse [10], [13]. Even though the pseudoinverse matrix performs better at recognizing noisy inputs, it has been shown that the calculation of the pseudoinverse matrix is two orders of magnitude slower than that of the auto-correlation matrix [12].
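For comparison, the pseudoinverse weight matrix can be sketched in one line of NumPy as the projection onto the row space of A; this standard projection form is our illustration and may differ in detail from the construction in [13].

    import numpy as np

    def pinv_weight_matrix(A):
        """Pseudoinverse (projection) rule: W = A^+ A projects onto the row
        space of A, so every stored row of A is recalled exactly, at the
        cost of the comparatively slow pinv computation."""
        return np.linalg.pinv(A) @ A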
To clarify the discussion so far, let us assume we want to store two topologies $T_1$ and $T_2$. As a running example, let $T_1 = [1\ 1\ {-1}\ {-1}]$ and $T_2 = [{-1}\ {-1}\ 1\ {-1}]$. Then the attractor matrix is $A = [T_1; T_2]$, which is

$$A = \begin{bmatrix} 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & -1 \end{bmatrix}$$
and by Equation 5, the weight matrix corresponding to this A becomes

$$W = \begin{bmatrix} 2 & 2 & -2 & 0 \\ 2 & 2 & -2 & 0 \\ -2 & -2 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix}$$
Then, the outputs can be calculated by Equation 3 as $\mathrm{sgn}(T_1 W) = T_1$ and $\mathrm{sgn}(T_2 W) = T_2$. It is straightforward to check that

$$T_1 W = [\,6 \quad 6 \quad {-6} \quad {-2}\,]$$
$$\mathrm{sgn}(T_1 W) = [\,1 \quad 1 \quad {-1} \quad {-1}\,] = T_1$$
Now assume that a noisy input $T_{noisy}$ is given as

$$T_{noisy} = [\,{-1} \quad 1 \quad {-1} \quad {-1}\,] \qquad (8)$$
Note that this noisy input has a Hamming distance of 1 to $T_1$ and 2 to $T_2$. Thus, the auto-associative memory is expected to return $T_1$ for this noisy input, and it is straightforward to check that $\mathrm{sgn}(T_{noisy} W) = T_1$ holds. In other words, when $T_{noisy}$ is presented, the memory returns $T_1$. However, the memory cannot correct every possible noisy input. For example, $[1\ 1\ {-1}\ 1]$ cannot be corrected: even though this input has a Hamming distance of 1 to $T_1$, it is the exact complement of $T_2$. This is a well-known issue with auto-associative memories [10]. In the construction of the weight matrix above, both topologies have equal weights. The construction of the weight matrix can also be viewed as the sum of the weight matrices belonging to the individual topologies:
$$W_1 = T_1^{\top} T_1 \qquad (9)$$
$$W_2 = T_2^{\top} T_2 \qquad (10)$$
$$W = \alpha_1 W_1 + \alpha_2 W_2 \qquad (11)$$
In our example, we took $\alpha_1$ and $\alpha_2$ to be 1. However, if we take $\alpha_1 \gg \alpha_2$, then W will be biased toward $T_1$. With such a modification, the second noisy input presented above can be corrected with the new weight matrix.
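The running example can be verified numerically. The sketch below reproduces the recall of $T_1$ and shows how biasing $\alpha_1$ over $\alpha_2$ (here $\alpha_1 = 10$, $\alpha_2 = 1$, values of our choosing) corrects the input $[1\ 1\ {-1}\ 1]$ that the unweighted memory cannot fix.

    import numpy as np

    T1 = np.array([ 1,  1, -1, -1])
    T2 = np.array([-1, -1,  1, -1])

    def recall(v, W):
        out = np.sign(v @ W)
        return np.where(out == 0, 1, out)  # Eq. (4): sgn(0) = 1

    # Equal weights (alpha1 = alpha2 = 1): W = T1^T T1 + T2^T T2, Eq. (5)
    W = np.outer(T1, T1) + np.outer(T2, T2)
    noisy = np.array([-1, 1, -1, -1])      # Hamming distance 1 to T1
    assert (recall(noisy, W) == T1).all()  # corrected to T1

    hard = np.array([1, 1, -1, 1])         # distance 1 to T1, complement of T2
    print(recall(hard, W))                 # [ 1  1 -1  1]: not corrected

    # Biased weights, Eq. (11): alpha1 >> alpha2 pulls the memory toward T1
    W_biased = 10 * np.outer(T1, T1) + 1 * np.outer(T2, T2)
    assert (recall(hard, W_biased) == T1).all()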
IV. CRITERIA FOR TOPOLOGY SELECTION
As discussed in the previous section, we refer to stored topologies as "attractors". Previously, each attractor had the same weight. However, some attractors may perform better than others. We present the following weight selection strategies for evaluating topologies.

Our motivation for selecting different weights for different attractors rests on a few factors. First, the attractor update strategy is not optimal (line 8 of Algorithm 1), and it is hard to devise an optimal update. For example, an attractor found when the traffic load is low might not be as efficient as an attractor found when the network is highly loaded. Also, a topology can be intrinsically more efficient, as we show in terms of max-flow capacity in Section V-B.
1) Flow: Each topology has a different maximum flow, so each attractor can be assigned a different weight. We calculated the maximum flow of the topologies using the Edmonds-Karp max-flow algorithm [14]:

$$\alpha(T_i) = C \times \mathrm{maxflow}(T_i) \qquad (12)$$

where C is a normalizing constant. The equation above states that any topology $T_i$ has a weight proportional to its maximum flow. The top 10% most loaded links were taken, and the maximum flow between the corresponding node pairs was calculated.
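A possible realization of this weighting with the Edmonds-Karp implementation in NetworkX is sketched below; aggregating the flow values over the selected node pairs by summation, and the constant C, are our assumptions.

    import networkx as nx
    from networkx.algorithms.flow import edmonds_karp

    def flow_weight(G, loaded_pairs, C=1.0):
        """Eq. (12): weight a topology by C times its maximum flow, computed
        with Edmonds-Karp [14] between endpoints of the most loaded links."""
        total = 0.0
        for s, t in loaded_pairs:
            value, _ = nx.maximum_flow(G, s, t, flow_func=edmonds_karp)
            total += value
        return C * total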
2) Recent: Under the assumption that Internet traffic is self-similar on small time scales [15], we gave higher weights to recently found topologies:

$$\alpha(T_i) = 2 \times \alpha(T_{i-1}) \qquad (13)$$

Here, one important aspect is the time between rounds, that is, the time elapsed between finding $T_{i-1}$ and $T_i$.
3) WeightedHop: The average weighted hop count is a commonly used metric for evaluating the performance of VTR algorithms. When an attractor is found, its average weighted hop count is calculated, and we tie this metric to the attractor weight through the following equation:

$$\alpha(t+1) = C \times \frac{1}{wh} \qquad (14)$$

where $wh$ is the traffic-weighted average hop count, which is desired to be low.
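Minimal sketches of these two weight assignments follow, under our own reading of Eqs. (13) and (14): weights double with each newer attractor, and the traffic-weighted average hop is taken as $\sum t_{ij} h_{ij} / \sum t_{ij}$, a definition the paper does not spell out.

    import numpy as np

    def recent_weights(k, base=1.0):
        """Eq. (13): each newer attractor gets twice the weight of the one
        found before it (index 0 = oldest, index k-1 = most recent)."""
        return [base * 2.0 ** i for i in range(k)]

    def weighted_hop_weight(traffic, hops, C=1.0):
        """Eq. (14): weight inversely proportional to the traffic-weighted
        average hop count wh."""
        wh = np.sum(traffic * hops) / np.sum(traffic)
        return C / wh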
V. SIMULATIONS

This section presents a comparison of the four methods using an optical network simulator implemented in C#.
A. Simulation Setting
The physical network we simulated consists of 100 nodes. We used the log-normal traffic model explained in [16]. Dijkstra's shortest-path algorithm was used for lightpath and traffic routing. We assumed that the nodes are equipped with wavelength converters. We ran each method with the number of stored topologies set to 2, 4, and 8. Each method was run 36 times; each run lasted 400 rounds, and the performance metric was summed over the 400 rounds.
B. Simulation Results
Figure 2 shows the simulation results. Flow outperforms all the other methods, whereas WeightedHop and Recent failed to show any significant improvement over ASB. The results look promising, as Flow's median is 34% higher than that of ASB.

The Z score for the difference between Flow and ASB was found to be 24.84, which gives a p-value of less than 0.00001. Thus we conclude that Flow improves the performance, and the improvement is between 23% and 29% at 99% confidence. The results for Recent and WeightedHop are insignificant.
We noticed no effect of the number of stored patterns between 4 and 8, but both are marginally better than a size of 2. However, this depends on the characteristics of the traffic matrix. We are currently analyzing how to set the weights effectively.
Fig. 2. The median for Flow is 34% higher than that of ASB, while WeightedHop is 5% lower than ASB.
VI. CONCLUSION

The VTR problem has generated great interest in the last decade. The ASB method is a highly adaptive VTR method that uses neural memories to store topologies. However, its topology selection is based purely on the congestion experienced with a topology. In this work, we analyzed the topology-storing strategies of ASB and proposed new selection metrics based on average weighted hop, max-flow, and the age of the found topology.

We showed that topology selection based on Flow ensures the selection of highly efficient topologies, improving on ASB by 26% on average. However, topologies selected according to weighted hop and recency did not provide any meaningful improvement.
For future work, we will evaluate different metrics; for example, we are currently looking at the average weighted hop metric.
REFERENCES
[1] R. Ramaswami and K. N. Sivarajan, “Design of logical topologies for
wavelength-routed optical networks,” IEEE Journal on Selected Areas
in Communications, vol. 14, pp. 840–851, 1996.
[2] E. Leonardi, M. Mellia, and M. A. Marsan, "Algorithms for the logical topology design in WDM all-optical networks," Optical Networks, vol. 1, pp. 35–46, 2000.
[3] A. Gençata and B. Mukherjee, “Virtual-topology adaptation for WDM
mesh networks under dynamic traffic.” IEEE/ACM Trans. Netw., pp.
236–247, 2003.
[4] Y. Koizumi, T. Miyamura, S. Arakawa, E. Oki, K. Shiomoto, and
M. Murata, “Adaptive virtual network topology control based on attractor selection,” J. Lightwave Technol., vol. 28, no. 11, pp. 1720–1731,
Jun 2010.
[5] J. Zheng and H. T. Mouftah, Optical WDM Networks. Wiley-IEEE
Press, 2004.
[6] I. Chlamtac, A. Ganz, and G. Karmi, "Lightpath communications: an approach to high bandwidth optical WAN's," IEEE Transactions on Communications, vol. 40, no. 7, pp. 1171–1182, Jul. 1992.
[7] O. Heckmann, M. Piringer, J. Schmitt, and R. Steinmetz, “On realistic
network topologies for simulation,” in MoMeTools ’03: Proceedings
of the ACM SIGCOMM workshop on Models, methods and tools for
reproducible network research. New York, NY, USA: ACM, 2003, pp.
28–32.
[8] Y. S. Hanay, S. Arakawa, and M. Murata, “Virtual topology control with
multistate neural associative memories,” in The 38th IEEE Conference
on Local Computer Networks, 2013, pp. 9–15.
[9] T. Kohonen, "Correlation matrix memories," IEEE Transactions on Computers, vol. C-21, no. 4, pp. 353–359, Apr. 1972.
[10] R. Rojas, Neural Networks - A Systematic Introduction. Berlin: Springer-Verlag, 1996.
[11] D. O. Hebb, The Organization of Behavior. New York: Wiley, 1949.
[12] Y. S. Hanay, Y. Koizumi, S. Arakawa, and M. Murata, "Virtual network topology control with Oja and APEX learning," in Proceedings of the 24th International Teletraffic Congress, ser. ITC '12, 2012, pp. 47:1–47:6.
[13] K. Matsuoka, “A model of orthogonal auto-associative networks,”
vol. 62, 1990.
[14] J. Edmonds and R. M. Karp, “Theoretical improvements in algorithmic
efficiency for network flow problems,” Journal of the ACM (JACM),
vol. 19, no. 2, pp. 248–264, 1972.
[15] M. E. Crovella and A. Bestavros, "Self-similarity in World Wide Web traffic: evidence and possible causes," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 835–846, 1997.
[16] A. Nucci, A. Sridharan, and N. Taft, "The problem of synthetically generating IP traffic matrices: initial recommendations," SIGCOMM Comput. Commun. Rev., vol. 35, no. 3, pp. 19–32, Jul. 2005.