Chinese Journal of Electronics
Vol.20, No.2, Apr. 2011
The Wolf Colony Algorithm and Its Application∗
LIU Changan, YAN Xiaohu, LIU Chunyang and WU Hua
(School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China)
∗ Manuscript Received May 2010; Accepted Nov. 2010.
Abstract — The paper proposes the wolf colony algorithm, which simulates the intelligent predatory behaviors of the wolf colony to solve the optimization problem. In the algorithm, a solution in the searching space is represented by an artificial wolf. A few artificial wolves are assigned to search in the activity range of the quarry. When the searching artificial wolves discover the quarry, they notify the other artificial wolves of the position of the quarry by howling. The other artificial wolves then get close to the quarry and besiege it. The wolf colony is updated according to the assignment rule of the wolf colony. The performance of the wolf colony algorithm is evaluated on the function optimization problem. To demonstrate its generalization ability, the wolf colony algorithm is also used to plan the optimal path for a mobile robot. The results show that the path planning method based on the wolf colony algorithm is viable and efficient.
Key words — Wolf colony algorithm, Optimization
problem, Mobile robot, Path planning.
I. Introduction
The optimization problem is a mathematical programming problem frequently encountered in scientific research and engineering applications. In recent years, many evolutionary algorithms have been successfully applied to optimization problems[1]. The genetic algorithm (GA), based on the evolutionary concepts of natural selection and genetics, was proposed by J.H. Holland in 1962[2,3]; it reflects natural rules and has a strong global searching ability. Particle swarm optimization (PSO), inspired by the social behaviors of bird flocking and fish schooling, was proposed by Kennedy and Eberhart in 1995[4,5]; it has many advantages such as easy implementation and good generalization. The ant colony optimization algorithm (ACO), which simulates the swarm intelligence of ant colony behaviors, was proposed by M. Dorigo in 1996[6,7]; it has the advantages of positive feedback and a parallel computation mechanism.
Because of the variety of optimization problems, the above-mentioned algorithms have shortcomings in some applications, such as a slow convergence speed and being trapped in local optima. This paper proposes the wolf colony algorithm (WCA), which simulates the intelligent predatory behaviors of the wolf colony to solve the optimization problem. In the algorithm, a solution in the searching space is represented by an artificial wolf. A few artificial wolves are assigned to search in the activity range of the quarry. When the searching wolves discover the quarry, they notify the other artificial wolves of the position of the quarry by howling. The other artificial wolves get close to the quarry and besiege it. The wolf colony is updated according to the assignment rule of the wolf colony.
The rest of this paper is organized as follows. Section II introduces the wolf colony algorithm. Section III discusses the performance of WCA on the function optimization problem. Section IV uses WCA to plan the optimal path for the mobile robot, demonstrating the generalization ability of WCA. Finally, conclusions are summarized in Section V.
II. The Wolf Colony Algorithm
The wolf colony has a rigorously organized system. The wolves divide the tasks clearly and keep their actions coordinated when they are preying. A few artificial wolves are assigned to search in the activity range of the quarry. When the searching wolves discover the quarry, they notify the other artificial wolves of the position of the quarry by howling. The other artificial wolves get close to the quarry and besiege it. The assignment rule of the wolf colony is to assign the food to the strong wolves first and then to the weak ones. The wolf colony algorithm that simulates the above behaviors is proposed.
1. The description of the behaviors
Suppose that the dimension of the searching space is D and
the individual number is n. The position of the ith artificial
wolf is Xi , then
X_i = (X_{i1}, \cdots, X_{id}, \cdots, X_{iD})    (1)
where 1 ≤ i ≤ n, 1 ≤ d ≤ D.
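For illustration, the colony can be stored as an n × D matrix in which row i holds X_i. A minimal sketch in Python/NumPy follows; the helper name init_colony and the bounds xmin, xmax are our own choices, not from the paper:

```python
import numpy as np

def init_colony(n, D, xmin, xmax, rng=None):
    """Randomly place n artificial wolves in the D-dimensional box [xmin, xmax]^D."""
    rng = np.random.default_rng(rng)
    # Each row X[i] is the position X_i = (X_i1, ..., X_iD) of one artificial wolf.
    return rng.uniform(xmin, xmax, size=(n, D))
```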
(1) The searching behavior
To increase the chance of discovering the quarry, q artificial wolves are assigned to search in the activity range of the quarry. The searching behavior is shown in Fig.1.
Fig. 1. The searching behavior
When the artificial wolf is at position P0, h searching positions in h directions around P0 are evaluated and the optimal searching position among them is P1. If P1 is better than the current position P0, the artificial wolf moves to P1; P1 is then set as the current position and the artificial wolf continues to move.
Suppose that the q searching wolves are the wolves that
are nearest to the quarry, the maximum searching number is
maxdh and the position of the ith searching artificial wolf is
XXi . h positions are generated around XXi , and the jth
(1 ≤ j ≤ h) searching position is Yj , then
Y_j = XX_i + randn \cdot stepa    (2)
where randn is a random number uniformly distributed in the range [-1, 1] and stepa is the searching step. If the searching number is larger than maxdh or the current position is better than the optimal searching position, the searching behavior ends.
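The searching behavior can be sketched as follows, assuming a minimization objective f; the function name search_behavior and the random-number handling are illustrative choices rather than details prescribed by the paper:

```python
import numpy as np

def search_behavior(x, f, h, stepa, maxdh, rng=None):
    """Searching behavior: repeatedly probe h random directions around x (Eq.(2))."""
    rng = np.random.default_rng(rng)
    best_x = np.asarray(x, dtype=float).copy()
    best_f = f(best_x)
    for _ in range(maxdh):                      # stop after maxdh searching rounds
        # h candidate positions Y_j = x + randn * stepa, with randn uniform in [-1, 1]
        cand = best_x + rng.uniform(-1.0, 1.0, size=(h, best_x.size)) * stepa
        vals = np.apply_along_axis(f, 1, cand)
        j = int(np.argmin(vals))
        if vals[j] < best_f:                    # the best probe improves: move there
            best_x, best_f = cand[j], vals[j]
        else:                                   # current position already better: stop
            break
    return best_x, best_f
```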
(2) Besiege the quarry
When the searching wolves discover the quarry, they notify the other artificial wolves of the position of the quarry by howling. The other artificial wolves get close to the quarry and besiege it. Suppose that the position of the quarry is the position of the searching artificial wolf that is nearest to the quarry. Let G_d^k denote the position of the quarry in the dth dimension of the searching space at the kth iteration, and let X_{id}^k denote the position of the ith artificial wolf, then
X_{id}^{k+1} = X_{id}^k + rand \cdot stepb \cdot (G_d^k - X_{id}^k)    (3)
where rand is a random number uniformly distributed in the range [0, 1], stepb is the besieging step and k is the iteration number. The range of the dth dimension is [XMIN_d, XMAX_d]. If the value calculated by Eq.(3) exceeds this range, it is set to the boundary value.
Eq.(3) is composed of two parts: the first is the present position of the artificial wolf, and the second is the tendency of the wolf colony to move toward the optimal artificial wolf. This tendency represents learning and information sharing within the colony. By besieging the quarry, WCA can search for the global optimum.
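A minimal sketch of the besieging step of Eq.(3), assuming the quarry position G is given and that rand is drawn independently for every wolf and dimension (a detail the paper does not specify); the helper name besiege is ours:

```python
import numpy as np

def besiege(X, G, stepb, xmin, xmax, rng=None):
    """Move every artificial wolf toward the quarry G according to Eq.(3)."""
    rng = np.random.default_rng(rng)
    # rand uniform in [0, 1], drawn per wolf and per dimension (assumption)
    rand = rng.uniform(0.0, 1.0, size=X.shape)
    X_new = X + rand * stepb * (G - X)
    # Values that leave the search range are set to the boundary value
    return np.clip(X_new, xmin, xmax)
```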
(3) Update the wolf colony
The assignment rule of the wolf colony is to assign the food to the strong wolves first and then to the weak ones. The colony gives most of the food to the strong wolves and only then to the weak ones, even though some weak wolves may starve to death. This ensures that the strong wolves can prey the next time, so the adaptability of the wolf colony is enhanced. Simulating this behavior, WCA removes the worst m artificial wolves in the colony and generates m new artificial wolves randomly. Therefore, the colony becomes more diverse and the algorithm can avoid the local optimum.
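Assuming a minimization objective, the update rule can be sketched as follows (the helper name update_colony is ours):

```python
import numpy as np

def update_colony(X, fitness, m, xmin, xmax, rng=None):
    """Replace the m worst artificial wolves with randomly generated ones."""
    rng = np.random.default_rng(rng)
    worst = np.argsort(fitness)[-m:]            # indices of the m worst wolves (largest values)
    X = X.copy()
    X[worst] = rng.uniform(xmin, xmax, size=(m, X.shape[1]))
    return X
```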
2. The steps of the wolf colony algorithm
The steps of the wolf colony algorithm, which simulates the intelligent predatory behaviors of the wolf colony, are as follows:
Step 1 Initialization. Initialize the individual number n, the maximum iteration number maxk, the number of searching artificial wolves q, the number of searching directions h, the maximum searching number maxdh, the searching step stepa, the besieging step stepb, the number of worst artificial wolves to be removed m, and the position of the ith (1 ≤ i ≤ n) artificial wolf X_i.
Step 2 Select the q best artificial wolves as the searching artificial wolves, and let every searching artificial wolf move according to Eq.(2).
Step 3 Select the best position among the searching artificial wolves as the position of the quarry. Update the position of every artificial wolf according to Eq.(3). If X_{id} is less than XMIN_d, set X_{id} to XMIN_d; if X_{id} is larger than XMAX_d, set X_{id} to XMAX_d.
Step 4 Update the wolf colony according to the assignment rule of the wolf colony. Remove the worst m artificial
wolves in the wolf colony and generate m artificial wolves randomly.
Step 5 Judge whether the termination condition is satisfied. If the iteration count of WCA reaches the maximum iteration number, output the position of the optimal artificial wolf, which is the optimal solution of the problem; otherwise, turn to Step 2.
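Putting Steps 1-5 together, the whole loop might look like the sketch below. It reuses the illustrative helpers sketched above (init_colony, search_behavior, besiege, update_colony), assumes a minimization objective f, and is a reading of the steps rather than the authors' implementation; the default parameters follow the settings used later in Section III:

```python
import numpy as np

def wca(f, D, xmin, xmax, n=200, maxk=200, q=5, h=4,
        maxdh=15, stepa=1.5, stepb=0.9, m=5, rng=None):
    """Wolf colony algorithm for minimizing f over [xmin, xmax]^D (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    X = init_colony(n, D, xmin, xmax, rng)                     # Step 1: initialization
    for _ in range(maxk):                                      # Step 5: stop after maxk iterations
        fit = np.apply_along_axis(f, 1, X)
        scouts = np.argsort(fit)[:q]                           # Step 2: q best wolves search
        for i in scouts:
            X[i], fit[i] = search_behavior(X[i], f, h, stepa, maxdh, rng)
        G = X[scouts[np.argmin(fit[scouts])]].copy()           # Step 3: quarry = best scout
        X = besiege(X, G, stepb, xmin, xmax, rng)              # Step 3: besiege the quarry
        fit = np.apply_along_axis(f, 1, X)
        X = update_colony(X, fit, m, xmin, xmax, rng)          # Step 4: replace the m worst wolves
    fit = np.apply_along_axis(f, 1, X)
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

For instance, wca(f1, D=30, xmin=-32.0, xmax=32.0) would roughly mirror the parameter-study setting of Section III, under the assumptions above.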
III. The Performance Analysis of WCA
The selection of the parameters is crucial to the performance of an optimization algorithm. In this study, the important parameters of WCA are the besieging step stepb and the number of removed worst artificial wolves m. The continuous function optimization problem[8-10] is used to discuss the selection of these important parameters. The continuous function is as follows:
f_1(x) = -20\exp\Big(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\Big) - \exp\Big(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Big) + 20 + e,   S = [-32, 32]^n,   f_{min} = 0    (4)
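Reading Eq.(4) as the usual Ackley-type test function (the radical sign is lost in the transcript, so the square root is our reconstruction), it can be coded as:

```python
import numpy as np

def f1(x):
    """Continuous test function of Eq.(4); global minimum f1(0) = 0 on [-32, 32]^n."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```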
When the dimension of the function is 30 and the other parameters are set as n = 200, q = 5, h = 4, maxdh = 15 and stepa = 1.5, the average values and the optimal values of f_1(x) obtained by WCA with different settings of the important parameters are shown in Table 1.
Table 1. The performances of WCA with different parameters
stepb    m     The average value    The optimal value
0.1      5     0.9702               5.8271e-005
5        5     0.0606               0.0086
0.9      50    0.1090               2.5731e-005
0.9      5     0.0373               9.5163e-007
As is shown in Table 1, for the artificial wolves to learn effectively, the appropriate value of stepb should be about 1. If the value of stepb is too small, the global optimal value can be obtained but the average value is too large, which indicates that the convergence speed is slow. If the value of stepb is too large, the best position detected by the artificial wolf easily exceeds the boundary value and the global searching ability is decreased. The appropriate value of m should be small; otherwise, the individuals that learn within the colony will be replaced by random individuals, and the convergence and the searching speed are affected.
The number of searching artificial wolves q and the number of searching directions h cannot be too large; otherwise, the searching areas overlap and the computation cost becomes very high. Compared with the size of the colony, the number of searching artificial wolves is small. Supposing that h·maxdh is smaller than n, the time complexity of WCA is O(n·maxk).
To test the convergence and the global searching ability of
WCA, the following standard functions are used in the experiment.
Sphere function:
f_2(x) = \sum_{i=1}^{n} x_i^2,   S = [-100, 100]^n,   f_{min} = 0    (5)
Schwefel function:
f_3(x) = \max_{1 \le i \le n} |x_i|,   S = [-100, 100]^n,   f_{min} = 0    (6)
Rosenbroc function:
f_4(x) = \sum_{i=1}^{n} \big[x_i^2 - 10\cos(2\pi x_i) + 10\big],   S = [-5.12, 5.12]^n,   f_{min} = 0    (7)
Rastigri function:
f_5(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\frac{x_i}{\sqrt{i}} + 1,   S = [-600, 600]^n,   f_{min} = 0    (8)
The first two functions have only one optimal position while the others have two or more local optimal positions. When the dimension of every function is 30, WCA is compared with PSO and GA on the standard functions. The maximum iteration number of the three algorithms is 800. The parameters of WCA are set as follows: n = 200, q = 5, h = 4, maxdh = 15, stepa = 1.5, stepb = 0.9, m = 5. In PSO, the particle number is 200, the inertia weight is 0.729 and the two learning coefficients are both 1.496. In GA, the crossover rate and the mutation rate are 0.9 and 0.1 respectively, the population size is 600 and the length of the individual is 900. If the optimal value obtained in an iteration exceeds 30, it is set to 30. The convergence curves of the three algorithms for every function are shown in Fig.2.
Fig. 2. The convergence curves of the three algorithms. (a) Sphere function; (b) Schwefel function; (c) Rosenbroc function; (d) Rastigri function
As is shown in Fig.2, WCA has a good convergence and
strong global searching ability. Every function is optimized by
the three optimization algorithms for 20 trials independently
and the best solutions obtained are recorded. The average values and the optimal values obtained by the three algorithms
are shown in Table 2.
Table 2. The comparison of the three algorithms
Function              Sphere         Schwefel       Rosenbroc      Rastigri
The average   WCA     3.0815e-009    7.8201e-006    2.4866e-006    1.7545e-012
value         PSO     0.0628         6.2732         51.6593        0.1408
              GA      0.4125         0.3632         0.0011         14.3940
The optimal   WCA     2.7055e-011    1.8378e-007    5.7362e-008    1.1102e-016
value         PSO     0.0024         3.7711         17.9819        0.0022
              GA      0.2443         0.1673         4.2302e-004    6.1172
As is shown in Table 2, PSO and GA are trapped in the
local optimum easily when the dimension and the searching
space are large. WCA can get the global optimum and has a
good convergence.
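For readers who wish to reproduce the comparison, the four standard functions can be written as below. These are sketches; the function names follow the paper's labels, and f4 and f5 implement Eqs.(7) and (8) exactly as printed:

```python
import numpy as np

def f2(x):   # Sphere function, Eq.(5)
    x = np.asarray(x, dtype=float)
    return np.sum(x**2)

def f3(x):   # Schwefel function, Eq.(6)
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x))

def f4(x):   # Eq.(7)
    x = np.asarray(x, dtype=float)
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f5(x):   # Eq.(8)
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```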
IV. The Application and Analysis
As an effective global optimization algorithm, WCA can be used in many fields such as the TSP[11,12] and the job-shop scheduling problem[13,14]. This study demonstrates the generalization ability of WCA through path planning for the mobile robot. Path planning is one of the most important research areas in mobile robotics. Its task is to find a collision-free path from a starting point to a target point in an environment with obstacles according to some optimization criteria, such as the shortest distance and the least energy consumption[15,16].
1. The environment model
Suppose that the work space of the robot is a two-dimensional environment. As is shown in Fig.3, the black areas are obstacles, the starting point is S and the target point is T. In the coordinate system, divide ST into n + 1 segments, with division points x_i (1 ≤ i ≤ n). Find the line l_i that is perpendicular to ST at x_i and choose a point p_i randomly on l_i. p_i should satisfy the following conditions: p_i is not inside an obstacle, and the line segments p_{i-1}p_i and p_ip_{i+1} do not cross obstacles. Linking the points p_i by line segments gives a random path P for the mobile robot, then
P = \{S, p_1, p_2, \cdots, p_i, \cdots, p_n, T\}    (9)
The path planning in this research is to find the collision-free path from S to T whose length is the shortest. The length of the path P can be expressed as:
L_P = \sum_{i=1}^{n-1} L_{p_i p_{i+1}} + L_{S p_1} + L_{T p_n}    (10)
where L_{p_i p_{i+1}} is the distance between p_i and p_{i+1}, L_{S p_1} is the distance between S and p_1, and L_{T p_n} is the distance between T and p_n. Suppose the coordinate of p_i is (x_i, y_i), then:
L_{p_i p_{i+1}} = \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}    (11)
Fig. 3. The coordinate system for the working environment of the robot
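The path length of Eqs.(10) and (11) is a sum of Euclidean distances. A small sketch, assuming S, the intermediate points p_i and T are given as (x, y) pairs:

```python
import math

def path_length(S, points, T):
    """Length of the path S -> p1 -> ... -> pn -> T, Eqs.(10) and (11)."""
    nodes = [S] + list(points) + [T]
    return sum(math.dist(nodes[k], nodes[k + 1]) for k in range(len(nodes) - 1))
```

For example, path_length((100, 100), [(150, 180)], (430, 430)) returns the length of a two-segment path through a single intermediate point.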
2. The path planning based on WCA
The path planning for the mobile robot based on WCA is proposed as follows. As is shown in Fig.3, regard p_i on the path P as the position of the artificial wolf in the ith dimension of the searching space. The steps of the path planning for the mobile robot based on WCA can be summarized as follows:
Step 1 Initialization. Initialize the parameters of WCA, the starting point S and the target point T. Randomly generate n paths as the artificial wolves.
Step 2 Select the q best artificial wolves as the searching artificial wolves, and let every searching artificial wolf move according to Eq.(2).
Step 3 Besiege the quarry. Select the best position in the
searching artificial wolves as the position of the quarry. Update the position of every artificial wolf according to Eq.(3).
If the updated value exceeds the range, set it as the boundary
value.
Step 4 Update the wolf colony according to the assignment rule of the wolf colony. Remove the worst m artificial
wolves in the wolf colony and generate m artificial wolves randomly.
Step 5 Judge whether the position x_{id} satisfies the following conditions: x_{id} is not inside an obstacle, and the line segments that link x_{id} with its adjacent points do not cross obstacles. If the conditions are satisfied, x_{id} is taken as the position of the ith artificial wolf in the dth dimension. Otherwise, replace x_{id} with a randomly generated point that satisfies the conditions.
Step 6 Judge whether the termination condition is satisfied. If the iteration count of WCA reaches the maximum iteration number, output the position of the optimal artificial wolf, which is the optimal path for the mobile robot; otherwise, turn to Step 2.
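Step 5 can be sketched as a simple repair loop. The predicates in_obstacle and segment_hits_obstacle stand in for the environment model and, together with li_ranges (the admissible points on each line l_i), are assumptions of this illustration rather than details given in the paper:

```python
import random

def repair_path(path, li_ranges, in_obstacle, segment_hits_obstacle, max_tries=100):
    """Resample infeasible waypoints of a candidate path (Step 5, illustrative sketch).

    path       -- list of waypoints [S, p1, ..., pn, T]; only p1..pn are repaired
    li_ranges  -- for each p_i, a list of admissible candidate points on the line l_i
    """
    for d in range(1, len(path) - 1):
        for _ in range(max_tries):
            ok = (not in_obstacle(path[d])
                  and not segment_hits_obstacle(path[d - 1], path[d])
                  and not segment_hits_obstacle(path[d], path[d + 1]))
            if ok:
                break
            # replace x_id with a randomly generated point on the line l_i
            path[d] = random.choice(li_ranges[d - 1])
        # if no feasible point is found within max_tries, the last sample is kept (simplification)
    return path
```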
3. The experiment and analysis
In order to verify the validity and effectiveness of the path planning based on WCA, WCA is implemented in Matlab and called through the Matlab engine to simulate the path planning experiment under the VC++ 6.0 environment. The experiment conditions are as follows: CPU: Intel Core 2 Duo 1.6GHz; memory: 1.00GB (physical address extension); simulation tools: Matlab 7.0 and VC++ 6.0.
The unit in the simulation experiment is the pixel. The starting point S is (100, 100) and the target point T is (430, 430). The black areas are the obstacles. ST is divided into 33 segments. WCA, PSO and GA are compared to plan the optimal path between S and T. The maximum iteration number of the three algorithms is 500. The parameters of WCA are set as follows: n = 200, maxk = 200, q = 5, h = 4, maxdh = 15, stepa = 1.5, stepb = 0.9, m = 4. In PSO, the particle number is 200, the inertia weight is 0.729 and the two learning coefficients are both 1.496. In GA, the crossover rate and the mutation rate are 0.9 and 0.1. The population size is 400 and the length of the individual is 600. Suppose that the speed of the robot is 20 pixels/s and it takes one second to turn a corner. The length of the optimal path and the walking time for the robot are compared in Table 3.
Table 3. The comparison of the three algorithms
Algorithm    Length (pixel)    Time (s)
WCA          486.9351          27.3468
PSO          490.4596          28.5230
GA           490.3649          29.5182
As is shown in Table 3, PSO and GA are trapped in the local optimum, while the path planned by WCA is shorter and less time-consuming. WCA can search the global optimum quickly and has a good convergence. The results prove that WCA can solve the path planning for the mobile robot effectively.
V. Conclusions
In this study, WCA, which simulates the intelligent predatory behaviors of the wolf colony, is proposed. WCA is a global optimization algorithm based on the behaviors of the wolf. It searches for the global optimum through the coexistence and cooperation within the wolf colony. The performance of WCA is evaluated on the function optimization problem, and WCA is applied to the path planning for the mobile robot. The results prove that the path planning method based on WCA is viable and efficient. The characteristics of WCA can be summarized as follows:
(1) Global searching ability. The searching artificial wolf
searches the position that is nearest to the quarry in many
directions, so the global searching ability of the algorithm is
strong.
(2) Simplicity. The algorithm only uses the objective function and the implementation is simple.
(3) Good generalization. WCA can be used in many optimization problems by modifying the algorithm slightly.
Therefore, WCA is very suitable to solve the optimization
problem.
References
[1] Jingan Yang, Yanbin Zhuang, “An improved ant colony optimization algorithm for solving a complex combinatorial optimization problem”, Applied Soft Computing, Vol.10, No.2,
pp.653–660, 2010.
[2] J.H. Holland, “Outline for a logical theory of adaptive systems”,
Journal of the Association for Computing Machinery, Vol.9,
No.3, pp.297–314, 1962.
[3] Luong Duc Long, Ario Ohsato, “A genetic algorithm based
method for scheduling repetitive construction projects”, Automation in Construction, Vol.18, No.4, pp.499–511, 2009.
[4] J. Kennedy, R.C. Eberhart, “Particle swarm optimization”,
Proc. of IEEE International Conference on Neural Networks,
Piscataway, USA, pp.1942–1948, 1995.
[5] Zhang Changsheng, Sun Jigui, Ouyang Dantong, “A self-adaptive discrete particle swarm optimization algorithm”, Acta Electronica Sinica, Vol.32, No.2, pp.209–304, 2009. (in Chinese)
[6] M. Dorigo, V. Maniezzo, A. Colorni, “Ant system: optimization by a colony of cooperating agents”, IEEE Transactions on
Systems, Man, and Cybernetics, Vol.26, No.1, pp.29–41, 1996.
[7] Frank Neumann, Carsten Witt, “Ant Colony Optimization and
the minimum spanning tree problem”, Theoretical Computer
Science, Vol.411, No.25, pp.2406–2413, 2010.
[8] Maoguo Gong, Licheng Jiao, Dongdong Yang et al., “Research
on evolutionary multi-objective optimization algorithms”, Journal of Software, Vol.20, No.2, pp.271–289, 2009 (in Chinese).
[9] Hui Pan, Ling Wang, Bo Liu, “Particle swarm optimization for
function optimization in noisy environment”, Applied Mathematics and Computation, Vol.181, No.2, pp.908–919, 2006.
[10] Xiaomin Hu, Jun Zhang, Yun Li, “Orthogonal methods based
ant colony search for solving continuous optimization problems”, Journal of Computer Science and Technology, Vol.23,
No.1, pp.2–18, 2008.
[11] Klaus Meer, “Simulated annealing versus metropolis for a
TSP instance”, Information Processing Letter, Vol.104, No.6,
pp.216–219, 2007.
[12] T. Lust, A. Jaszkiewicz, “Speed-up techniques for solving large-scale biobjective TSP”, Computers & Operations Research,
Vol.37, No.3, pp.521–533, 2010.
[13] Shi Qiang Liu, Erhan Kozan, “Scheduling trains as a blocking
parallel-machine job shop scheduling problem”, Computers &
Operations Research, Vol.36, No.10, pp.2840–2852, 2009.
[14] F. Pezzella, G. Morganti, G. Ciaschetti, “A genetic algorithm
for the flexible job-shop scheduling problem”, Computers & Operations Research, Vol.35, No.10, pp.3202–3212, 2008.
[15] Meijuan Gao, Jin Xu, Jingwen Tian et al., “Path planning for
mobile robot based on chaos genetic algorithm”, Fourth International Conference on Natural Computation, Jinan, China,
pp.409–413, 2008.
[16] Lidia M. Ortega, Antonio J. Rueda, Francisco R. Feito, “A solution to the path planning problem using angle preprocessing”, Robotics and Autonomous Systems, Vol.58, No.1, pp.27–36, 2010.
[17] Liu Changan, Chang Jingang, Liu Chunyang, “Path planning
for mobile robot based on an improved probabilistic roadmap
method”, Chinese Journal of Electronics, Vol.18, No.3, pp.395–
399, 2009.
LIU Changan was born in Baiquan, Heilongjiang, in Dec. 1971. Professor, Ph.D. Received the B.S. degree from Northeast Agricultural University in 1995 and the M.S. and Ph.D. degrees from Harbin Institute of Technology in 1997 and 2001 respectively. Joined North China Electric Power University in 2001. Director of the Intelligence Robot Research Institute, North China Electric Power University. Research fields: technology of intelligent robots, theory of artificial intelligence. (Email: [email protected])
YAN Xiaohu received the B.S. degree from Huazhong Agricultural University in 2008 and entered the School of Computer Science and Technology, North China Electric Power University, as a graduate student working on the technology of intelligent robots.
LIU Chunyang was born in Tangshan in Oct. 1978. Lecturer. Received the B.S. degree from Hebei Normal University in 2002 and the M.S. degree from North China Electric Power University in 2005. Joined North China Electric Power University in 2005. Research fields: technology of intelligent robots, theory of artificial intelligence.
WU Hua was born in Jinzhou in Oct. 1981. Lecturer, Ph.D. Joined North China Electric Power University in 2010. Research fields: technology of intelligent robots, theory of artificial intelligence.