International Conference on Futuristic Trends in Computing and Communication (ICFTCC-2015)
PSO Algorithm with Self Tuned Parameter for Efficient Routing in VLSI Design
Sudipta Ghosh 1, Subhrapratim Nath 2, Subir Kumar Sarkar 3
1 Assistant Prof., Dept. of Electronics & Communication Engineering, Meghnad Saha Institute of Technology, Kolkata, India
2 Assistant Prof., Dept. of Computer Science & Engineering, Meghnad Saha Institute of Technology, Kolkata, India
3 Professor, Dept. of Electronics & Telecommunication Engineering, Jadavpur University, Kolkata, India
Abstract— Device size has been scaled down to a large extent with the rapid advancement of VLSI technology. Consequently, minimizing the interconnect length, which is a part of VLSI physical layer design, has become a challenging area of research. VLSI routing is broadly classified into two categories: global routing and detailed routing. The Rectilinear Steiner Minimal Tree (RSMT) problem is one of the fundamental problems in the global routing arena. The introduction of meta-heuristic algorithms, such as particle swarm optimization, for solving the RSMT problem in global routing optimization has achieved notable success in wire length minimization in VLSI technology. In this paper, we propose a modified version of the PSO algorithm which exhibits better performance in optimizing the RSMT problem in VLSI global routing. The modification controls the acceleration coefficient variables of the PSO algorithm by incorporating a self-tuned mechanism along with the usual optimization variables, resulting in a high convergence rate for finding the best solution in the search space.
Keywords— Global routing, RSMT, Meta-heuristic, PSO.
I. INTRODUCTION
Advancement of IC process technology into the nano-meter regime has led to the fabrication of billions of transistors on a single chip. The number of transistors per die will keep growing drastically in the near future, which increases complexity and thereby imposes enormous challenges on VLSI physical layer design, especially routing. To handle this complexity, global routing followed by detailed routing is adopted. The primary objective of global routing, wire length reduction, is becoming very crucial in modern chip design. An effective way to minimize the length of interconnects in VLSI physical layer design is to address the Rectilinear Steiner Minimal Tree problem [1]. To solve this NP-complete problem, meta-heuristic algorithms [2] such as particle swarm optimization (PSO) are adopted. PSO is a robust optimization technique introduced in 1995 by Eberhart and Kennedy [3]. The PSO approach to VLSI routing was first implemented by Dong et al. in 2009. Various improvements over the original PSO algorithm have been made to make it more efficient. The introduction of a linearly decreasing inertia weight by Shi and Eberhart [4] increases the convergence rate of the algorithm. A self-adaptive inertia weight function [5] further enhances the convergence rate of PSO for multi-dimensional problems. This paper proposes a further improvement of the existing PSO algorithm that modifies its acceleration coefficients, yielding more accurate results and thereby achieving better and more efficient exploration and exploitation of the search space. This motivates us to apply the algorithm to the RSMT problem for the minimization of interconnect length in VLSI routing optimization. The paper is organized as follows. Section II gives a brief outline of the PSO algorithm. Section III describes the proposed PSO algorithm in steps, followed by experiments and results in Section IV. Finally, the paper concludes with Section V.
II. BASIC PSO ALGORITHM
PSO is a kind of evolutionary computation technique. More specifically, it is a meta-heuristic algorithm derived from the collective intelligence exhibited by swarms of insects, schools of fish, flocks of birds, etc., and it is applied to solve many kinds of optimization problems. The basic PSO model [3] consists of a swarm 'S' containing 'n' particles (i = 1, 2, 3, …, n) in a 'D'-dimensional solution space. Each particle has a position vector 'Xi' and a velocity vector 'Vi' of dimension 'D', given by Xi = (xi1, xi2, …, xiD) and Vi = (vi1, vi2, …, viD). The index 'i' stands for the i-th particle. The position 'Xi' represents a possible solution to the optimization problem, and the velocity vector 'Vi' represents the rate of change of position of the i-th particle in the next iteration. A particle updates its position and velocity through the two equations (1) and (2):
Vt+1 = Vt + c1*r1*(pi − Xt) + c2*r2*(pg − Xt)    ...(1)
Xt+1 = Xt + Vt+1    ...(2)
Here the constants 'c1' and 'c2' control the influence of the individual particle's own knowledge (c1) and that of the group (c2); both are usually initialized to 2. The variables 'r1' and 'r2' are uniformly distributed random numbers bounded by an upper limit 'rmax', which is a parameter of the algorithm [6]. 'pi' and 'pg' are the particle's previous best position and the group's previous best position, respectively. 'Xt' is the current position for the dimension considered. The particles are thus directed towards the previously known best points in the search space.
The balance between movement towards the local best and movement towards the global best, i.e. between exploration and exploitation, is addressed by Shi and Eberhart, who introduce the inertia weight 'w' [4] into the velocity update to generate acceptable solutions along the particles' paths, giving equation (3):
Vt+1 = w*Vt + c1*r1*(pi − Xt) + c2*r2*(pg − Xt)    ...(3)
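The update rules in equations (1)-(3) can be illustrated with a short Python sketch. This is not the authors' implementation; the default values of w, c1 and c2, and the assumption that r1 and r2 are drawn uniformly from [0, 1], are choices made here for illustration, in line with the typical settings cited above.

import numpy as np

def pso_step(X, V, pbest, gbest, w=0.9, c1=2.0, c2=2.0, rng=np.random):
    """One PSO iteration following equations (1)-(3).

    X, V      : (n, D) arrays of particle positions and velocities
    pbest     : (n, D) array of each particle's best-known position (pi)
    gbest     : (D,) array holding the swarm's best-known position (pg)
    w, c1, c2 : inertia weight and acceleration coefficients (assumed defaults)
    """
    n, D = X.shape
    r1 = rng.uniform(0.0, 1.0, size=(n, D))  # assumed U(0, 1); the paper bounds them by rmax
    r2 = rng.uniform(0.0, 1.0, size=(n, D))
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # eq. (3)
    X_new = X + V_new                                              # eq. (2)
    return X_new, V_new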
III. CONTROLLING PARAMETERS OF PSO FOR OPTIMIZATION IN VLSI GLOBAL ROUTING
While optimizing the RSMT problem for wire length minimization in VLSI physical layer design with the particle swarm optimization algorithm, we control several parameters to facilitate maximum convergence and to prevent an explosion of the swarm. The following parameters of the PSO algorithm are controlled.
A. Selection of max velocity
A large upsurge or diminution of particle velocities leads to divergence of the swarm due to an uninhibited increase in the magnitude of the particle velocity, |Vi,j(t+1)|, especially in an enormous search space. The maximum velocity is therefore limited by
Vmax,j = (Xj(max) − Xj(min)) / K
so that |Vi,j(t+1)|, the velocity for the next iteration, never exceeds Vmax,j. Here Xj(max) and Xj(min) are the maximum and minimum position values found so far by the particles in the j-th dimension, and K is a user-defined parameter that controls the particles' steps in each dimension of the search space, with K = 2 [7].
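The velocity limit above can be sketched as follows. Reading the formula as a symmetric clamp to ±Vmax,j, and using the particles' current positions as the extent found so far, are assumptions made for this illustration.

import numpy as np

def clamp_velocity(V, X, K=2.0):
    """Limit each velocity component to the per-dimension maximum
    Vmax,j = (Xj(max) - Xj(min)) / K, as in Section III-A."""
    x_max = X.max(axis=0)        # maximum position per dimension j (proxy for "found so far")
    x_min = X.min(axis=0)        # minimum position per dimension j
    v_max = (x_max - x_min) / K  # user-defined K (K = 2 in the paper)
    return np.clip(V, -v_max, v_max)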
B. Inertia weight
The dimension of the search space affects the performance of PSO. In complex, high-dimensional conditions the basic PSO algorithm gets constricted into local optima, leading to premature convergence; hence a large inertia weight is required. A smaller inertia weight is used for a small search-space dimension to strengthen the local search capability and assure a high rate of convergence [8]. A self-adaptive inertia weight function is used in our algorithm, which relates the inertia weight to the fitness, the swarm size and the dimension of the search space:
W = [3 − exp(−S/200) + (R / (8*D))^2]^(−1)
where S is the swarm size, D is the dimension and R is the fitness rank of the given particle [5]. In our algorithm D is taken as 100.
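A direct transcription of this inertia weight function is sketched below, assuming the grouping W = [3 − exp(−S/200) + (R/(8D))^2]^(−1) and the convention that rank 1 denotes the fittest (lowest-cost) particle.

import numpy as np

def self_adaptive_inertia(costs, D=100):
    """Self-adaptive inertia weight per particle (Section III-B, ref. [5]):
    W = 1 / (3 - exp(-S/200) + (R / (8*D))**2)
    with S the swarm size, D the dimension and R the particle's fitness rank."""
    S = len(costs)
    # Rank particles by cost: rank 1 for the lowest (best) cost -- an assumed convention.
    order = np.argsort(costs)
    ranks = np.empty(S, dtype=float)
    ranks[order] = np.arange(1, S + 1)
    return 1.0 / (3.0 - np.exp(-S / 200.0) + (ranks / (8.0 * D)) ** 2)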
C. Proposed Modification: Self-tuned Acceleration coefficient
The acceleration coefficients determine the scaled distribution of the random cognitive component vector and the social component vector, i.e. the movement of each particle towards its individual best and the global best position, respectively. Using smaller values limits the movement of the particles, while larger values may cause the particles to diverge. For c1 = c2 > 0, particles are attracted to the average of 'pbest', the particle's local best position, and 'gbest', the swarm's global best position. A good starting point of c1 = c2 = 2 is used as the acceleration constant in [9], [10]. Again, c1 = c2 = 1.49 generates good convergence results, as proposed by Shi and Eberhart [11], [12]. In our proposed modification, we introduce self-tuned acceleration coefficients that decrease linearly over time in the range 2 to 1.4. This is introduced in the VLSI routing optimization problem for the first time; a minimal sketch of the schedule is given below, followed by the pseudo code of the algorithm.
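The proposed schedule can be sketched as follows: both coefficients decrease linearly from 2 to 1.4 over the iteration budget. Applying the same schedule to c1 and c2 is our reading of the text.

def tuned_acceleration(t, t_max, c_start=2.0, c_end=1.4):
    """Self-tuned acceleration coefficients (Section III-C): linearly
    decreasing from c_start to c_end over t_max iterations."""
    frac = t / float(t_max)
    c = c_start - (c_start - c_end) * frac
    return c, c  # c1 and c2 follow the same schedule in this sketch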
D. Pseudo Code

Preprocess Block
- The search space for the problem is defined for a fixed dimension.
- Within the search space, the user-defined terminal nodes, given as coordinates, are marked as '1' to form the required matrix.
- The weight of the path between the nodes, including the Steiner nodes, is calculated to form the objective functions for the n particles.
- The cost of each objective function is calculated by Prim's algorithm.
- The minimal cost, along with the corresponding objective function, is identified amongst these results.

PSO Block
Initialization for First Iteration
- Each objective function is assigned a position and velocity vector.
- Each objective function represents the local best for each particle of the 'n' population.
- The objective function with minimal cost is assigned as the global best for the swarm of particles.

Execution of the PSO Block
While (termination criteria is not met)
- Velocity and position are updated for the 'n' particles using equations (3) and (2).
- Fitness values (present pi) of the 'n' particles are evaluated by Prim's algorithm.
- Each present pi is compared with the previous pi:
  If pi(present) < pi(previous), then pbest ← pi(present); else retain the previous value.
- New pg ← minimum of all present pi.
- If pg(present) < pg(previous), then gbest ← pg(present); else retain the previous value.
Controlling inertia constant (w)
- Calculate the self-adaptive inertia weight of each particle within the population using the self-adaptive inertia weight function of Section III-B, in terms of fitness, population (swarm size) and dimension of the search space.
Controlling acceleration constants (c1, c2)
- Calculate the self-tuned, linearly decreasing acceleration constants.
Check the termination criteria. If satisfied, stop execution and return the global best value for the swarm. Otherwise, execute the PSO block again. A condensed Python sketch of this routine is given after the pseudo code.
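The following condensed Python sketch ties the pseudo code together, reusing the clamp_velocity, self_adaptive_inertia and tuned_acceleration helpers sketched earlier. It is not the authors' code: the cost of a candidate solution is taken here as the length of a rectilinear (Manhattan) minimum spanning tree, computed with Prim's algorithm, over the terminal nodes plus the particle's candidate Steiner points, and encoding a particle as a fixed number of Steiner-point coordinates is an assumption made for illustration.

import numpy as np

def rect_mst_cost(points):
    """Length of a rectilinear MST over `points` (k, 2), via Prim's algorithm."""
    k = len(points)
    in_tree = np.zeros(k, dtype=bool)
    dist = np.full(k, np.inf)
    dist[0], total = 0.0, 0.0
    for _ in range(k):
        u = np.argmin(np.where(in_tree, np.inf, dist))  # nearest node not yet in the tree
        in_tree[u] = True
        total += dist[u]
        d = np.abs(points - points[u]).sum(axis=1)      # Manhattan distances from node u
        dist = np.minimum(dist, np.where(in_tree, np.inf, d))
    return total

def run_pso(terminals, n=200, iters=100, n_steiner=4, lo=0, hi=100, seed=0):
    """Condensed PSO loop following the pseudo code; a particle encodes
    `n_steiner` candidate Steiner points (an assumed encoding)."""
    rng = np.random.default_rng(seed)
    D = 2 * n_steiner
    X = rng.uniform(lo, hi, size=(n, D))
    V = np.zeros((n, D))
    cost = lambda x: rect_mst_cost(np.vstack([terminals, x.reshape(-1, 2)]))
    f = np.array([cost(x) for x in X])
    pbest, pcost = X.copy(), f.copy()
    g = np.argmin(f)
    gbest, gcost = X[g].copy(), f[g]
    for t in range(iters):
        w = self_adaptive_inertia(pcost, D=D)[:, None]   # Section III-B
        c1, c2 = tuned_acceleration(t, iters)            # Section III-C
        r1 = rng.uniform(size=(n, D))
        r2 = rng.uniform(size=(n, D))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # eq. (3)
        V = clamp_velocity(V, X)                                   # Section III-A
        X = np.clip(X + V, lo, hi)                                 # eq. (2)
        f = np.array([cost(x) for x in X])
        better = f < pcost                    # minimization: lower cost wins
        pbest[better], pcost[better] = X[better], f[better]
        g = np.argmin(pcost)
        if pcost[g] < gcost:
            gbest, gcost = pbest[g].copy(), pcost[g]
    return gbest, gcost

A call such as run_pso(terminals), with a (10, 2) array of terminal coordinates, would return the best Steiner-point placement found and its rectilinear tree cost under these assumptions.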
IV. EXPERIMENTS AND RESULTS
Three sets of coordinates for 10 terminal nodes are randomly generated for connections in the defined search space of 100x100. The experiment is executed 25 times for each set. The population size of the swarm is taken as 200, and the maximum number of iterations is set to 100. A short sketch of this setup is given below, and the experimental coordinates are listed in Table I.
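The experimental setup can be summarized in a short configuration sketch; drawing the coordinates as uniform random integers in [1, 100] is an assumption about how the published sets were generated.

import numpy as np

# Experimental setup from Section IV (coordinate distribution assumed uniform).
SEARCH_SPACE = 100   # 100 x 100 grid
N_TERMINALS = 10     # terminal nodes per coordinate set
N_SETS = 3           # coordinate sets (experiments)
RUNS_PER_SET = 25    # repetitions of each experiment
SWARM_SIZE = 200     # population size
MAX_ITER = 100       # iteration budget

rng = np.random.default_rng(2015)
coordinate_sets = rng.integers(1, SEARCH_SPACE + 1, size=(N_SETS, N_TERMINALS, 2))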
TABLE I
COORDINATE SETS OF 10 TERMINAL NODES FOR THE 3 EXPERIMENTS

No.   Set 1 (X, Y)   Set 2 (X, Y)   Set 3 (X, Y)
1     (09, 53)       (84, 15)       (44, 63)
2     (17, 01)       (93, 67)       (42, 99)
3     (31, 38)       (60, 48)       (97, 36)
4     (20, 89)       (76, 89)       (61, 25)
5     (40, 67)       (64, 79)       (89, 68)
6     (49, 95)       (17, 62)       (91, 51)
7     (56, 70)       (95, 38)       (80, 28)
8     (72, 29)       (54, 35)       (69, 58)
9     (88, 63)       (21, 57)       (31, 82)
10    (93, 100)      (29, 81)       (56, 94)
We first performed the experiment for the PSO algorithm with the acceleration coefficients taken as c1 = c2 = 2, and the experiment was then repeated for the proposed PSO algorithm with self-tuned acceleration coefficients on the same three coordinate sets. The results are tabulated in Table II. From the comparative analysis it is observed that our proposed PSO algorithm generates a lower 'gbest', i.e. global best value, compared to the previous algorithm [9], [10]. It is also seen from the results that the mean value is improved for our proposed algorithm with self-tuned, linearly decreasing acceleration coefficients over the existing PSO algorithm [9], [10].
TABLE II
EXPERIMENTAL RESULTS

Experiment         Algorithm                     Minimum 'gbest'   Mean 'gbest'
Experiment No. 1   PSO with c1 = c2 = 2          308               319.81
                   PSO with self-tuned c1, c2    295               307.5
Experiment No. 2   PSO with c1 = c2 = 2          248               264.67
                   PSO with self-tuned c1, c2    224               253.2
Experiment No. 3   PSO with c1 = c2 = 2          219               223.31
                   PSO with self-tuned c1, c2    205               218.7

The comparison of the performance of the existing PSO and the self-tuned PSO algorithm is shown in the bar charts. As our proposed algorithm generates a lower global best value, as shown in Fig. 1, the cost of the Rectilinear Steiner Minimal Tree (RSMT) constructed by interconnecting the terminal nodes has been reduced. The mean value of the global best parameter, i.e. the average of the minimum cost, has also been improved, as shown in Fig. 2. So this algorithm can effectively handle the RSMT problem on graphs and thereby reduce the interconnect length to a great extent.

Fig. 1. Comparison of Minimum Cost obtained by Existing and Modified PSO algorithm (bar chart of the minimum 'gbest' values in Table II).
Fig. 2. Comparison of Mean Cost obtained by Existing and Modified PSO algorithm (bar chart of the mean 'gbest' values in Table II).
V. CONCLUSIONS
Wire length minimization in VLSI technology can be achieved through global routing optimization using the PSO algorithm. In our proposed algorithm a modification is incorporated into the existing PSO algorithm. The technique used here is to modify the acceleration coefficients in such a way that they tune themselves over the iterations throughout the experiment. The experimental results exhibit a clear difference in performance between the modified version and the existing one. It is seen that the convergence rate is high, and the algorithm can also be applied to a large search space while still producing good results, which clearly establishes the robustness and stability of the optimization algorithm. Therefore, this algorithm can effectively be used in global routing optimization in VLSI physical layer design. The scope of the work can further be extended to examine the algorithm in an obstacle-avoiding routing environment.
REFERENCES
[1] J.-M. Ho, G. Vijayan, and C.K. Wong, “New algorithms for the rectilinear
Steiner tree problem”, IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems, 1990, pp. 185-193.
[2] Xin-She Yang, Nature-Inspired Metaheuristic Algorithms. Luniver Press,
UK, 2008.
[3] R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory”, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39-43.
[4] Y.H. Shi and R.C.Eberhart, “Empirical study of particle swarm
optimization”, Proceedings of IEEE Congress on Evolutionary
Computation, Washington DC, 1999, pp. 1945-1950.
[5] Dong Chen, Wang Gaofeng, Chen Zhenyi and Yu Zuqiang, “A Method of
Self-Adaptive Inertia Weight For PSO”, Proceedings of IEEE
International Conference on Computer Science and Software Engineering,
Dec 2008, vol. 1, pp. 1195-1198
[6] A. Carlisle and G. Dozier, “An off-the-shelf PSO”, Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, IN, 2001.
[7] A. Rezaee Jordehi, and J. Jasni, “Parameter selection in particle swarm
optimization: a survey”, Journal of Experimental & Theoretical
Artificial Intelligence, 2013, 25(4). pp. 527-542.
[8] F.Van Den Bergh and A.P.Engelbrecht, “Effects of swarm size on
cooperative particle swarm optimizers”, Proceedings of the Genetic and
Evolutionary Computation Conference, San Francisco, California, 2001,
pp.892-899.
[9] R. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence. San Mateo, CA:
Morgan Kaufmann, 2001.
[10] J. Kennedy and R. Mendes, “Population structure and particle swarm performance”, Proceedings of the IEEE Congress on Evolutionary Computation, 2002, vol. 2, pp. 1671-1676.
[11] Y.H. Shi and R.C.Eberhart, “A modified particle swarm optimizer”,
Proceedings of IEEE World Congress on Computational Intelligence,
1998, pp. 69-73.
[12] Rania Hassan, Babak Cohanim, Olivier de Weck, and Gerhard
Venter, “A Comparison of Particle Swarm Optimization and the Genetic
Algorithm”, American Institute of Aeronautics and Astronautics journal,
2005, 2055-1897.