Immunity Based Genetic Algorithm for Solving Quadratic Assignment
Problem (QAP)
Chou-Yuan Lee 1, Zne-Jung Lee 2, and Shun-Feng Su 3
1 Dept. of Electrical Engineering, National Taiwan University of Science and Technology
2 Dept. of Information Management, Kang-Ning Junior College of Nursing
3 Dept. of Electrical Engineering, National Taiwan University of Science and Technology
Abstract:
In this paper, an immunity-based genetic algorithm is proposed to solve the quadratic assignment problem (QAP). The QAP, a well-known NP-hard problem, is a combinatorial problem arising in the optimal assignment of facilities to locations. The proposed algorithm enhances the performance of genetic algorithms by embedding an immune system so as to produce locally optimal offspring, and it is successfully applied to solve the QAP. In our simulations on the tested problems, the proposed algorithm shows the best performance when compared to other existing search algorithms.
Keywords: Genetic Algorithm, Immune Systems, Quadratic Assignment Problem, Optimization, Local Search.
1. Introduction
QAP is a combinatorial optimization problem
found in many fields such as VLSI module placement,
machine scheduling, optimal assignment of resources
to tasks, and other areas of applications. Various
methods such as separable convex objective functions
and graph theory have been employed to solve this
class of problems, but the computational complexity of these methods grows exponentially as the problem size increases. As a result, these methods are only practical for small-sized problems [1].
Genetic algorithms (GAs) or more generally,
evolutionary algorithms [2] have been touted as a
class of general-purpose search strategies for
optimization problems. GAs use a population of
solutions, from which, using recombination and
selection strategies, better and better solutions can be
produced. GAs can handle any kind of objective
functions and any kind of constraints without much
mathematical requirements about the optimization
problems, and have been widely used as search
algorithms in various applications. Various GAs have
been proposed in the literature [2,13] and shown
superior performance over other methods. As a consequence, GAs appear to be attractive approaches for solving the QAP. However, GAs may suffer a certain degeneracy in search performance if their operators are not carefully designed [4,15].
Recently, GAs with local search have been considered good alternatives for solving optimization problems [3,5~7]. Local search explores the neighborhood of a solution in an attempt to improve its cost in a local manner and find a better solution.
The natural immune system is a very complex system with several mechanisms for defense against pathogenic organisms. It is also a source of inspiration for solving optimization problems [4,8,9]. From an information processing perspective, artificial immune systems are remarkably adaptive systems and can provide several important capabilities in the field of computation [8]. When incorporated into evolutionary algorithms, artificial immune systems can improve the search ability during the evolutionary process. In our research, a specifically designed artificial immune system is implemented for the QAP to improve the local search efficiency of GAs. The general idea of the proposed algorithm is to combine the advantage of GAs, the ability to explore the search space, with that of artificial immune systems, the ability to quickly find good solutions within a small region of the search space. The proposed algorithm demonstrates excellent performance in the simulations reported later.
This paper is organized as follows. In section 2,
QAP is introduced. In section 3, the proposed
algorithm, immunity based genetic algorithms for
solving QAP, is described. In section 4, the algorithm
is employed to solve the examples of QAP, and the
results are listed. The performance shows the
superiority of our algorithm. Finally, section 5
concludes this paper.
2. The Problem Formulation
Generally, a quadratic assignment problem (QAP) asks how to economically assign N facilities to N locations subject to constraints. This problem is one of the classical NP-hard problems. Let two N×N matrices D = (dik) and F = (fjl) be given, where N is the total number of facilities or locations, dik is the distance between location i and location k, and fjl is the cost of material flow from facility j to facility l. Usually, D is called the distance matrix and F the flow matrix. Then, the QAP can be formulated as follows:
Min. Z = \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{N} \sum_{l=1}^{N} d_{ik} f_{jl} X_{ij} X_{kl}    (1)

with the constraints:

\sum_{j=1}^{N} X_{ij} = 1,  i = 1, 2, …, N

\sum_{i=1}^{N} X_{ij} = 1,  j = 1, 2, …, N    (2)

X_{ij} \in \{0, 1\},  i, j = 1, 2, …, N
where Xij= 1 indicates that facility i is assigned to
location j; otherwise Xij= 0.
The QAP is a combinatorial optimization problem found in the optimal assignment of facilities to locations. Various methods such as separable convex objective functions and graph theory have been employed to solve this class of problems [1], but their computational complexity grows exponentially as the problem size increases. Recently, genetic algorithms have been employed to solve the QAP. In the literature [3,5], local search mechanisms are also found to be effective fine-tuning methods for improving performance on the QAP. Based on these two optimization ideas, in this study an immunity-based genetic algorithm is applied to the QAP, and the results shown later indeed demonstrate superior performance.
3. Genetic Algorithms and the Proposed Algorithm
Genetic algorithms (GAs) or more generally,
evolutionary algorithms [12~14] have been touted as
a class of general-purpose search strategies for
optimization problems. GAs use a population of
solutions, from which, using recombination and
selection strategies, better and better solutions can be
produced [5]. GAs can handle any kind of objective
functions and any kind of constraints without much
mathematical requirements about the optimization
problems. When applied to optimization problems, genetic algorithms offer the advantages of performing global search and of hybridizing with domain-dependent heuristics for a specific problem [10,11]. Genetic algorithms start with a set of randomly selected chromosomes as the initial population that encodes a set of possible solutions. In GAs, variables of a problem are represented as genes in a chromosome, and the chromosomes are evaluated according to their cost values using some measure of profit or utility that we want to optimize. Recombination typically involves two genetic operators: crossover and mutation. The genetic operators alter the composition of genes to create new chromosomes called offspring. The selection operator is an artificial version of natural selection, a Darwinian survival of the fittest among populations, used to create populations from generation to generation; chromosomes with better cost values have higher probabilities of being selected into the next generation. After several generations, the GA can converge to the best solution. Let P(t) and C(t) be the parents and offspring in generation t. A usual form of the general GA is shown in the following [13]:
Procedure: General GA
Begin
  t ← 0;
  Initialize P(t);
  Evaluate P(t);
  While (termination conditions not met) do
    Recombine P(t) to yield C(t);
    Evaluate C(t);
    Select P(t+1) from P(t) and C(t);
    t ← t+1;
  End;
End;
Recently, genetic algorithms with local search
have also been considered as good alternatives for
solving optimization problems. The flow chart of the
GA with local search is shown in Fig. 1, and general
structure is shown in the following.
Procedure: GA with local search
Begin
  t ← 0;
  Initialize P(t);
  Evaluate P(t);
  While (termination conditions not met) do
    Apply crossover on P(t) to generate c1(t);
    Apply local search on c1(t) to yield c2(t);
    Apply mutation on c2(t) to yield c3(t);
    Apply local search on c3(t) to yield c4(t);
    Evaluate C(t) = {c1(t), c2(t), c3(t), c4(t)};
    Select P(t+1) from P(t) and C(t);
    t ← t+1;
  End;
End;
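As a sketch, the loop above can be written as a generic driver in Python. The operator functions (`crossover`, `mutate`, `local_search`, `select`) are placeholders for the operators described below, and keeping a single crossover offspring is our simplification, not the paper's implementation.

```python
import random

def ga_with_local_search(init_pop, evaluate, crossover, mutate,
                         local_search, select, max_gen=2000):
    """Generic driver for the GA-with-local-search loop above.

    `pop` holds (individual, cost) pairs; `select` keeps the best u of
    parents plus offspring, as in the procedure's selection step."""
    pop = [(ind, evaluate(ind)) for ind in init_pop]
    for _ in range(max_gen):
        a, b = random.sample([ind for ind, _ in pop], 2)
        c1, _ = crossover(a, b)          # one offspring kept for brevity
        c2 = local_search(c1)            # local search after crossover
        c3 = mutate(c2)                  # mutation on the improved child
        c4 = local_search(c3)            # local search after mutation
        offspring = [(c, evaluate(c)) for c in (c1, c2, c3, c4)]
        pop = select(pop + offspring, len(init_pop))
    return min(pop, key=lambda p: p[1])
```

Note that C(t) deliberately contains both the searched and the un-searched children, matching the procedure's evaluation of {c1(t), …, c4(t)}.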
It is noted that the GA with local search reduces to a general GA if the local search is omitted. The representation uses N numerical genes, each holding an integer from 1 to N, to represent facility-location pairs. For the crossover operator, the partially mapped crossover (PMX) is traditionally implemented. The idea of the PMX operator is to generate mapping relations so that the offspring can be repaired accordingly and become feasible. PMX has been shown to be effective in many applications, and its algorithm is stated as:
Step 1: Select substrings from parents at
random.
Step 2: Exchange substrings between
two parents to produce
proto-offspring.
Step 3: Determine the mapping
relationship from the exchanged
substrings.
Step 4: Repair proto-offspring with the
mapping relationship.
To see the procedure, an example is illustrated.
Consider two chromosomes:
A= 1 2 3 4 5 6 7 8 9
B= 4 5 6 9 1 2 7 3 8
First, two positions, e.g., the 3rd and the 6th positions,
are selected at random to define a substring in
chromosomes and the defined substrings in those two
chromosomes are then exchanged to generate the
proto-offspring as:
A= 1 2 6 9 1 2 7 8 9
B= 4 5 3 4 5 6 7 3 8
Then the mapping relationship between those two
substrings can be established as:
(A's genes ↔ B's genes):
  2 ↔ 6 ↔ 3
  9 ↔ 4
  1 ↔ 5

Fig. 1. The flowchart of the GA with local search.
It is noted that the mapping relations 6 ↔ 3 and 2 ↔ 6 have been merged into 2 ↔ 6 ↔ 3. Finally, the
proto-offspring are repaired according to the above
mapping lists. The resultant feasible offspring are:
A’= 5 3 6 9 1 2 7 8 4
B’= 9 1 3 4 5 6 7 2 8
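The four PMX steps can be sketched in Python. The cut points are passed explicitly (0-based) so that the worked example above is reproduced; in the algorithm they are chosen at random.

```python
def pmx(parent_a, parent_b, cut1, cut2):
    """Partially mapped crossover: exchange the segment [cut1, cut2)
    and repair duplicates outside it via the mapping it induces."""
    def make_child(receiver, donor):
        child = receiver[:]
        child[cut1:cut2] = donor[cut1:cut2]
        # map donor-segment values back to receiver-segment values
        mapping = {d: r for d, r in zip(donor[cut1:cut2], receiver[cut1:cut2])}
        for i in list(range(cut1)) + list(range(cut2, len(child))):
            v = child[i]
            while v in mapping:       # follow merged chains, e.g. 2 -> 6 -> 3
                v = mapping[v]
            child[i] = v
        return child
    return make_child(parent_a, parent_b), make_child(parent_b, parent_a)

A = [1, 2, 3, 4, 5, 6, 7, 8, 9]
B = [4, 5, 6, 9, 1, 2, 7, 3, 8]
A2, B2 = pmx(A, B, 2, 6)   # the 3rd..6th positions of the example
print(A2)  # [5, 3, 6, 9, 1, 2, 7, 8, 4]
print(B2)  # [9, 1, 3, 4, 5, 6, 7, 2, 8]
```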
Another recombination operator proposed for the QAP preserves the information contained in both parents, in the sense that all alleles of the offspring are taken either from the first or from the second parent. This recombination operator is called information-contained crossover (ICX) [6]. The ICX works as follows:
Step 1: All facilities found at the same locations in the two parents are assigned to the corresponding locations in the offspring.
Step 2: Starting with a randomly chosen location that has no facility assigned yet, a facility is randomly chosen from the two parents. After that, additional assignments are made to ensure that no implicit mutation occurs. Then, the next not-yet-assigned location to the right is processed in the same way until all locations have been considered.
A further crossover operator is the keep-good-gene crossover (KGGX). The KGGX operator aims to keep good genes so that the offspring obtain a better representation. Its algorithm is stated as:
Step 1: Select a gene α from parent A at random.
Step 2: Select a gene β from parent B at random.
Step 3: In parent A, the gene equal to β is replaced by α, and the original gene α is replaced by β.
Step 4: In parent B, the gene equal to α is replaced by β, and the original gene β is replaced by α.
We use an example to illustrate the procedure. Consider two chromosomes:
A= 1 2 3 4 5 6 7 8 9
B= 1 3 8 9 2 4 5 6 7
First, the 2nd position of A, whose value is 2, and the 4th position of B, whose value is 9, are selected at random. The 9th position of A is replaced by 2, and the original 2nd position is replaced by 9. The 5th position of B is replaced by 9, and the original 4th position is replaced by 2. The resultant offspring are:
A’= 1 9 3 4 5 6 7 8 2
B’= 1 3 8 2 9 4 5 6 7
Then, the KGGX operator terminates since all selected genes have been exchanged. In our simulations, the KGGX operator performs better than ICX and PMX.
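The KGGX steps amount to swapping α and β inside each parent, which can be sketched as below. The optional `alpha`/`beta` arguments are our addition for reproducibility; in the algorithm both genes are chosen at random.

```python
import random

def kggx(parent_a, parent_b, alpha=None, beta=None, rng=random):
    """Keep-good-gene crossover: pick a gene alpha from parent A and a
    gene beta from parent B, then swap alpha and beta inside each parent."""
    if alpha is None:
        alpha = rng.choice(parent_a)
    if beta is None:
        beta = rng.choice(parent_b)
    def swap(chrom):
        child = chrom[:]
        if alpha != beta:
            i, j = child.index(alpha), child.index(beta)
            child[i], child[j] = child[j], child[i]
        return child
    return swap(parent_a), swap(parent_b)

A = [1, 2, 3, 4, 5, 6, 7, 8, 9]
B = [1, 3, 8, 9, 2, 4, 5, 6, 7]
A2, B2 = kggx(A, B, alpha=2, beta=9)   # the genes from the worked example
print(A2)  # [1, 9, 3, 4, 5, 6, 7, 8, 2]
print(B2)  # [1, 3, 8, 2, 9, 4, 5, 6, 7]
```

Because each chromosome is only ever rearranged by a swap, the offspring are automatically feasible permutations and need no repair step.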
The operator of mutation can be implemented as
inverse mutation. The inversion mutation operation is
stated as:
Step 1: Select two positions within a
chromosome at random.
Step 2: Invert the substring between these
two positions.
Consider a chromosome:
A=1 2 3 4 5 6 7 8 9.
In a mutation process, the 3rd and the 6th positions
are randomly selected. If the mutation operation is
performed, then the offspring becomes:
A’=1 2 6 5 4 3 7 8 9.
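The inversion mutation above is a one-line slice reversal; here the two positions are passed explicitly (0-based) to reproduce the example.

```python
def inversion_mutation(chrom, i, j):
    """Invert the substring between positions i and j, inclusive (0-based)."""
    child = chrom[:]
    child[i:j + 1] = child[i:j + 1][::-1]
    return child

# The 3rd..6th positions of the example above are 0-based indices 2..5.
print(inversion_mutation([1, 2, 3, 4, 5, 6, 7, 8, 9], 2, 5))
# [1, 2, 6, 5, 4, 3, 7, 8, 9]
```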
The selection strategy used is referred to as (u+λ)–ES (evolution strategy) survival [6], where u is the population size and λ is the number of offspring created. The process simply deletes redundant chromosomes and then retains the best u chromosomes in P(t+1). From the above algorithm, it can be seen that local search is performed on the chromosomes obtained by recombination and by mutation, respectively. The offspring set C(t) also includes the original chromosomes before local search, to retain the genetic information obtained in the evolutionary process. Local search explores the neighborhood of a solution in an attempt to improve its cost in a local manner.
General local search starts from the current solution and repeatedly tries to improve it by local changes. If a better solution is found, it replaces the current solution and the algorithm searches again from the new solution. These steps are repeated until a stopping criterion is satisfied. In this way, general local search explores the neighborhood of a solution to find a better one. The procedure of the general local search process is [13,14]:
Procedure: General Local Search
Begin
  While (general local search has not been stopped) do
    Generate a neighborhood solution π′;
    If C(π′) < C(π) then π = π′;
  End;
End;
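A minimal sketch of such a local search for the QAP, using the objective of Eq. (1) and a pairwise-swap neighborhood; the paper does not fix the neighborhood at this point, so the swap move is our assumption.

```python
import itertools

def qap_cost(perm, D, F):
    """Objective of Eq. (1): perm[i] is the facility assigned to location i."""
    n = len(perm)
    return sum(D[i][k] * F[perm[i]][perm[k]]
               for i in range(n) for k in range(n))

def local_search(perm, D, F):
    """Accept any improving pairwise swap; repeat until no swap improves."""
    perm = list(perm)
    best = qap_cost(perm, D, F)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, D, F)
            if c < best:
                best, improved = c, True
            else:
                perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return perm, best
```

The result is a local optimum with respect to the swap neighborhood, which is exactly the stopping condition of the procedure above.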
Simulated annealing (SA) is also employed as the local search to take advantage of search strategies in which cost-deteriorating neighborhood solutions may be accepted while searching for the optimal solution [19]. In other words, in addition to better-cost neighbors, which are always accepted, worse-cost neighbors may also be accepted according to a probability that gradually decreases during the cooling process. The SA algorithm is implemented as follows [19].
Procedure: The SA algorithm
Begin
  Define an initial temperature T1 and a coefficient α (0 < α < 1);
  Randomly generate an initial solution as the current state;
  t ← 1;
  While (SA has not been frozen) do
    γ ← 0; θ ← 0;
    While (the state has not approached the equilibrium sufficiently closely) do
      Generate a new solution from the current solution;
      ΔC ← cost value of the new solution − cost value of the current solution;
      Pr ← exp(−ΔC / Tt);
      If Pr ≥ random[0,1] then
        Accept the new solution;
        current solution ← new solution;
        θ ← θ + 1;
      End;
      γ ← γ + 1;
    End;
    Update the maximum and minimum costs;
    Tt+1 ← Tt × α;
    t ← t + 1;
  End;
End;
The initial temperature can be set as [4]:

T1 = ln(C^elitist + 1)    (3)

where C^elitist is the elitist cost in the
beginning of the search. The way of generating new solutions is to invert two randomly selected positions in the current solution. In this algorithm, the newly generated solution is accepted as the next solution only when exp(−ΔC/T) ≥ random[0,1], where random[0,1] is a random value drawn from a uniform distribution on the interval [0,1]. It is easy to see that when the generated solution is better than the current solution, ΔC is negative and exp(−ΔC/T) is greater than 1; thus, the new solution is always accepted. When the new solution is not better than the current solution, it may still replace its predecessor in a random manner. The process is repeated until the state approaches the equilibrium sufficiently closely. In our implementation, the following simple equilibrium criterion, which is also used in [19], is adopted:

(γ ≥ Γ) or (θ ≥ Θ)    (4)

where γ is the number of newly generated solutions, θ is the number of newly accepted solutions, Γ is the maximum number of generations, and Θ is the maximum number of acceptances. In our implementation, Γ = 1.5N and Θ = N. The algorithm is repeated until it enters a frozen situation, which is:

(Cmax − Cmin) / Cmax ≤ ε  or  T ≤ ε1    (5)

where Cmax and Cmin are the maximum and minimum costs, and ε and ε1 are pre-specified constants; ε = 0.001 and ε1 = 0.005 in our implementation.
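The SA procedure above can be sketched as follows. The equilibrium bounds Γ = 1.5N and Θ = N and the frozen tests come from Eqs. (4) and (5); the neighbor move swaps two random positions; the initial temperature T1 = ln(C + 1) and the cooling coefficient `alpha` are our assumptions, chosen to be consistent with Eq. (3).

```python
import math
import random

def sa_local_search(perm, cost_fn, rng=random, alpha=0.9,
                    eps=0.001, eps1=0.005):
    """SA sketch of the procedure above: random pairwise-inversion neighbor,
    equilibrium bounds of Eq. (4), frozen test of Eq. (5)."""
    n = len(perm)
    current, c_cur = list(perm), cost_fn(perm)
    T = math.log(c_cur + 1)          # initial temperature (assumption)
    while True:
        generated = accepted = 0
        stage_costs = [c_cur]        # costs seen at this temperature
        while generated < 1.5 * n and accepted < n:       # Eq. (4)
            i, j = rng.sample(range(n), 2)
            neighbor = current[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            c_new = cost_fn(neighbor)
            delta = c_new - c_cur
            # always accept improvements; accept worsenings with prob exp(-dC/T)
            if delta <= 0 or math.exp(-delta / T) >= rng.random():
                current, c_cur = neighbor, c_new
                stage_costs.append(c_cur)
                accepted += 1
            generated += 1
        c_max, c_min = max(stage_costs), min(stage_costs)
        if c_max == 0 or (c_max - c_min) / c_max <= eps or T <= eps1:
            return current, c_cur                         # Eq. (5): frozen
        T *= alpha                   # geometric cooling
```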
3.1 The Proposed Algorithm
In this paper, we propose to use immune systems to improve the search efficiency. The natural immune system is a very complex system with several mechanisms for defense against pathogenic organisms. Biologically, the function of an immune system is to protect the body from antigens. Several types of immunity, such as anti-infection immunity, self-immunity, and specific immunity, are studied in biology. In the case of self-immunity, there are many kinds of antibodies against self-antigens in the body. They help eliminate decrepit and degenerative parts but do not destroy the normal parts. Based on the concept of self-immunity, a model of the immune system was developed in [9] to embed heuristics into genetic algorithms for solving the traveling salesman problem. The approach consists of two main features: vaccination, used for improving the current cost, and immune selection, used for preventing deterioration. There are two steps in immune selection. The first is the immune test and the second is the annealing selection. The immune test is used to test the vaccinated bits, and the annealing selection accepts these bits with a probability according to their cost value.
When incorporated into GAs, immune systems can use local information to improve the search capability during the evolutionary process. In [4], an immune operator was proposed to solve the traveling salesman problem. This algorithm consists of two main operations: vaccination, used for reducing the current cost, and immune selection, used for preventing deterioration [12]. The process of vaccination modifies genes of the current chromosome with heuristics so as to possibly obtain a better cost. The immune selection includes two steps. The first is the immune test and the second is the annealing selection. The immune test is used to test the vaccinated genes, and the annealing selection accepts those genes with a probability according to their costs.
Similar to that used in [4], the vaccination operation is applied to keep good genes and modify the others. If a gene is not a good gene, it is replaced by a random integer between 1 and N. In vaccination, a random number r is used to decide whether to modify the assigned pairs in the chromosome forward or backward. In this process, a randomly generated integer i between 1 and N is chosen as the starting gene in the chromosome. The pairs are modified forward from the starting gene to the Nth gene, or backward to the first gene. The forward immune operator is described as follows:
Step 1: Select a position within the chromosome at random.
Step 2: Exchange the first position and the randomly selected position to produce a new offspring. Then, the next position to the right is processed in the same way until all positions have been considered.
Consider a chromosome:
A= 1 2 3 4 5 6 7 8 9.
In a forward immune process, the 6th position of
parent A is randomly selected. The 1st position of
parent A becomes 6, and the 6th position becomes 1.
A’ = 6 2 3 4 5 1 7 8 9.
We then choose another position at random and exchange it with the second position. In our example, the 7th position of A’ is randomly selected; the 2nd position of A’ becomes 7, and the 7th becomes 2.
A’ = 6 2 3 4 5 1 7 8 9.
A’’= 6 7 3 4 5 1 2 8 9.
The backward immune operator is described as follows:
Step 1: Select a position within the chromosome at random.
Step 2: Exchange the last position and the randomly selected position to produce a new offspring. Then, the next position to the left is processed in the same way until all positions have been considered.
Consider a chromosome:
B= 1 2 3 4 5 6 7 8 9.
In a backward immune process, the 3rd position of
parent B is randomly selected. The last position of
parent B becomes 3, and the 3rd position becomes 9.
B’ =1 2 9 4 5 6 7 8 3.
We then choose another position at random and exchange it with the second-to-last position. In our example, the 4th position of B’ is randomly selected; the 8th position of B’ becomes 4, and the 4th becomes 8.
B’ = 1 2 9 4 5 6 7 8 3.
B’’= 1 2 9 8 5 6 7 4 3.
The above immune operators can be applied to the chromosomes obtained after crossover and mutation. The flowchart in Fig. 2 illustrates the course of executing the above algorithm.
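As a sketch, the forward immune operator's swap walk can be written as below. Passing the randomly selected positions in explicitly (0-based) is our interface assumption, made so the worked example is reproduced; the backward operator is the mirror image, walking from the last position leftward.

```python
def forward_immune(chrom, picks):
    """Forward immune operator: at step k (k = 0, 1, ...), exchange
    position k with the randomly chosen position picks[k]."""
    child = chrom[:]
    for k, j in enumerate(picks):
        child[k], child[j] = child[j], child[k]
    return child

# Worked example above: first swap with the 6th position (index 5),
# then swap the 2nd position with the 7th (index 6).
print(forward_immune([1, 2, 3, 4, 5, 6, 7, 8, 9], [5, 6]))
# [6, 7, 3, 4, 5, 1, 2, 8, 9]
```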
In immune test, modified genes with better costs
are always accepted and those with worse costs may
also be accepted according to the annealing selection.
In our implementation, the selection probability for
the jth modified gene is calculated as:
P_j = exp(−C_t^{ij} / T_t) / Σ_{j=1}^{W} exp(−C_t^{ij} / T_t),    (6)

T_t = ln(C^elitist / t + 1),

where C_t^{ij} is the value of Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} Σ_{l=1}^{N} d_{ik} f_{jl} X_{ij} X_{kl} at
generation t. It is noted that the annealing selection is
similar to that used in [4].
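The annealing-selection probability can be computed directly from the costs of the modified genes; a small sketch follows. Shifting all costs by their minimum is our numerical-stability addition: the common factor cancels in the ratio, so the probabilities are unchanged.

```python
import math

def annealing_selection_probs(costs, t, c_elitist):
    """Annealing-selection probabilities of Eq. (6):
    P_j = exp(-C_j / T_t) / sum_j exp(-C_j / T_t), T_t = ln(C_elitist/t + 1).
    Costs are shifted by their minimum before exponentiation; the shift
    cancels in the ratio but prevents floating-point underflow."""
    T_t = math.log(c_elitist / t + 1)
    m = min(costs)
    weights = [math.exp(-(c - m) / T_t) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Lower-cost modified genes receive higher selection probability,
# and the bias sharpens as T_t shrinks with the generation t.
probs = annealing_selection_probs([10.0, 12.0, 15.0], t=1, c_elitist=10.0)
```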
4. Simulation and Results
The most well-known QAP instances are defined in Nugent et al., and the test problems taken from QAPLIB [16,17,18] are used to compare the performance of the proposed algorithm with GA algorithms. First, a simple case consisting of randomized data, tai12a, is used to investigate the performance of various crossover operators. All simulations use the same randomly generated initial population. The maximum number of generations is set as max_gen=2000, and experiments were run on PCs with a Pentium 2-GHz processor.
The crossover operators PMX, ICX, and KGGX are tested. The parameters used in the GA are the crossover probability Pc=0.8 and the inverse mutation probability Pm=0.07, with r=0.5 for SA. The simulation is conducted over 10 trials, and the results are shown in Table 1, which reports the best fitness value, the percentage gap over the ten trials, and the averaged converged CPU time. The gap is the difference relative to the best known solution in the QAP library, given as a percentage for all algorithms; 0 means that all ten tests converged to the best known value. It is clearly evident that the KGGX operator can find better fitness values with less CPU time than the other operators. Since the KGGX operator has the best performance among those operators, it is employed as the default crossover operator in the following simulations.

Table 1. The simulation results for tai12a. Results are averaged over 10 trials.

Algorithm          Operator  Best fitness  Gap(%)  Converged CPU time (sec)
General GA         PMX       224826        0.183   26.83
General GA         ICX       224637        0.098   20.36
General GA         KGGX      224530        0.051   15.25
Immunity based GA  PMX       224416        0       24.45
Immunity based GA  ICX       224416        0       16.24
Immunity based GA  KGGX      224416        0       10.26

Fig. 2. The flowchart of the GA based on immunity.
Next, the performance of various search algorithms is investigated. These algorithms are the general GA, the SA algorithm, the GA with SA, the GA with local search, and the immunity-based GA. For this comparison we studied large instances of the QAP. Since these are search algorithms, it is not easy to stop their search on a fair basis from within the algorithms themselves. As the issue considered in this research is search efficiency, in our comparison we simply stopped these algorithms after a fixed running time. Experiments were again run on PCs with a Pentium 2-GHz processor and were stopped after two hours of running. Since we need to run ten tests for each algorithm, it is not feasible to run much longer, while if the running time is too short, the results may not be significant; two hours is simply a convenient choice. The results of the averaged best fitness are listed in Table 2. From Table 2, it is easy to see that the immunity-based GA finds better fitness values than the other algorithms, and that it consistently outperforms them.
Furthermore, we compared the various search algorithms on a set of 12 instances with the same computing time. These simulations show which algorithm can find a better solution within a fixed running period. Results are given in Table 3. In this case, the best results are obtained by the immunity-based GA, which found the best average fitness over the set of 12 instances. It is clear that the immunity-based GA finds better solutions than the other algorithms.
5. Conclusions
In this paper, we presented an immunity-based genetic algorithm for solving the QAP. This algorithm supports a mechanism for economically assigning facilities, combining the advantages of GAs and immune systems to efficiently find optimal feasible solutions. In our simulations on well-known QAP examples, the proposed algorithm indeed found the optimal feasible solutions for all of the test problems. Even though GA algorithms risk being trapped in a local optimum, the crossover and mutation operations used here allow the search to escape from local optima easily. When compared to existing search algorithms, the proposed algorithm clearly outperforms them.
References
[1]A. M. H. Bjorndal, et al., “Some thoughts on
combinatorial optimisation,” European Journal of
Operational Research, 1995, pp. 253-270.
[2]M. Gen and R. Cheng, Genetic Algorithms and
Engineering Design, John Wiley & Sons Inc.,
1997.
[3]E. H. L Aarts and J. K. Lenstra, Local Search in
Combinatorial Optimization, John Wiley & Sons
Inc., 1997.
[4]L. Jiao and L. Wang, “Novel genetic algorithm
based on immunity,” IEEE Transactions on
Systems, Man and Cybernetics, Part A, vol. 30, no.
5, 2000, pp. 552 –561.
[5]A. Kolen and E. Pesch, “Genetic local search in
combinatorial optimization,” Discrete Applied
Mathematics and Combinatorial Operation
Research and Computer Science, vol. 48, 1994,
pp. 273-284.
[6]P. Merz and B. Freisleben, “Fitness landscape
analysis and memetic algorithms for quadratic
assignment problem,” IEEE Trans. On
Evolutionary Computation, vol. 4, no. 4, 2000,
pp. 337-352.
[7]P. P. C. Yip and Y. H. Pao, “A guided evolutionary simulated annealing approach to the quadratic assignment problem,” IEEE Transactions on Systems, Man and Cybernetics, vol. 24, 1994, pp. 1383–1386.
[8]I. Tazawa, S. Koakutsu, and H. Hirata, “An
immunity based genetic algorithm and its
application to the VLSI floor plan Design
Problem,” Proceedings of IEEE International
Conference on Evolutionary Computation, 1996,
pp. 417–421.
[9]D. Dasgupta and N. Attoh-Okine, “Immunity-based systems: a survey,” IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, vol. 1, 1997, pp. 369–374.
[10]A. Gasper and P. Collard, “From GAs to artificial
immune systems: improving adaptation in time
dependent optimization,” Proceedings of the 1999
Congress on Evolutionary Computation, CEC 99,
vol. 3, 1999, pp. 1999 –1866.
[11]D. M. Tate and A. E. Smith, “A genetic approach
to the quadratic assignment problem,” Computers
Operation Research, vol. 22, no. 1, 1995,
pp.73-83.
[12]D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley: Reading, MA, 1989.
[13]Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag: Berlin, 1994.
[14]L. Davis, Ed., Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[15]N. L. J. Ulder, E. H. L. Aarts, H. J. Bandelt, P. J.
M. Van Laarhoven, and E. Pesch, “Genetic local
search algorithms for the traveling salesman
problem,” in Parallel Problem solving from
nature-Proc. 1st Workshop, PPSN I, Schwefel and
Männer, Eds., vol. 496, Lecture notes in Computer
Science, 1991, pp. 109-116.
[16]V. Nissen, “Solving the quadratic assignment
problem with clues from nature,” IEEE
Transactions on Neural Networks, vol. 5, 1994,
pp. 66 –72.
[17]K. Smith, “Solving the generalized quadratic
assignment problem using a self-organizing
process,” IEEE International Conference on
Neural Networks, vol. 4, 1995, pp. 1876-1879.
[18]R. E. Burkard, S. E. Karisch, and F. Rendl,
“QAPLIB-A quadratic assignment problem
library,” Technical Report no. 287, Technical Univ.
of Graz, Austria, 1994.
[19]L. Davis, Genetic Algorithms and Simulated
Annealing, Morgan Kaufmann Publishers, 1987.
Table 2. Comparison of various search algorithms. Results are averaged over 10 trials (best results are in boldface).

                               tai100a   sko100a  sko100b  sko100c  sko100d  sko100e  wil100
Best known                     21125314  152002   153890   147862   149576   149150   273038
General GA          Gap(%)     0.322     0.022    0.016    0.015    0.023    0.013    0.013
                    Time(min)  107       102      115      92       112      84       109
SA                  Gap(%)     0.278     0.017    0.009    0.013    0.001    0.005    0.006
                    Time(min)  97        97       99       97       101      99       118
GA with local search
                    Gap(%)     0.158     0.014    0.010    0.006    0        0.002    0.004
                    Time(min)  90        98       95       78       55       97       104
GA with SA          Gap(%)     0.142     0.009    0.003    0.004    0        0.009    0.001
                    Time(min)  85        78       90       56       48       90       92
Immunity based GA   Gap(%)     0         0        0        0        0        0        0
                    Time(min)  45        37       66       46       43       52       90
Table 3. Comparison of various search algorithms with the same computing time. Results are averaged over 10 trials (best results are in boldface).

Problem  Best known  General GA  SA     GA with       GA with  Immunity  Time(s)
         value                          local search  SA       based GA
nug20    2570        0.911       0.070  0             0        0         20
nug25    3744        0.872       0.032  0.121         0.007    0         62
lipa20b  27076       1.116       0.039  0.114         0.003    0         223
lipa30a  13178       0.978       0.062  0.133         0.040    0         331
lipa40a  31538       1.082       0.080  0.110         0.060    0         340
lipa50a  62093       0.861       0.064  0.095         0.092    0         746
lipa60a  107218      0.948       0.148  0.178         0.143    0         851
sko81    90998       0.880       0.098  0.006         0.136    0         1011
sko90    115534      0.748       0.169  0.227         0.196    0         1525
sko100a  152002      0.345       0.510  0.102         0.629    0         1867
tai20a   703482      0.675       0.757  1.345         0.698    0         26
scr20    110030      0.623       1.006  1.307         0.884    0         22

Values are the average percentage gap between the solution value and the best known value over 10 runs.