13.4 Particle Swarm Optimization

$$V_j(i) = \theta\, V_j(i-1) + c_1 r_1 \left[ P_{best,j} - X_j(i-1) \right] + c_2 r_2 \left[ G_{best} - X_j(i-1) \right]; \quad j = 1, 2, \ldots, N \tag{13.23}$$
The inertia weight θ was originally introduced by Shi and Eberhart in 1999 [13.36] to dampen the velocities over time (or iterations), enabling the swarm to converge more accurately and efficiently compared to the original PSO algorithm with Eq. (13.21). Equation (13.23) denotes an adaptive velocity formulation, which improves the fine-tuning ability of the solution search. Equation (13.23) shows that a larger value of θ promotes global exploration, while a smaller value promotes a local search. Thus a large value of θ makes the algorithm constantly explore new areas without much local search and hence fail to find the true optimum. To achieve a balance between global and local exploration and to speed up convergence to the true optimum, an inertia weight whose value decreases linearly with the iteration number has been used:
$$\theta(i) = \theta_{max} - \left( \frac{\theta_{max} - \theta_{min}}{i_{max}} \right) i \tag{13.24}$$
where θ_max and θ_min are the initial and final values of the inertia weight, respectively, and i_max is the maximum number of iterations used in PSO. The values θ_max = 0.9 and θ_min = 0.4 are commonly used.
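The velocity update of Eq. (13.23) with the linearly decreasing inertia weight of Eq. (13.24) can be sketched in a few lines of Python; the function names here are illustrative, not from the text, and the random numbers r_1, r_2 are drawn per call as the method prescribes:

```python
# Sketch of Eqs. (13.23)-(13.24); names are illustrative assumptions.
import random

def inertia_weight(i, i_max, theta_max=0.9, theta_min=0.4):
    """Eq. (13.24): theta decreases linearly from theta_max to theta_min."""
    return theta_max - (theta_max - theta_min) * i / i_max

def update_velocity(v, x, p_best, g_best, i, i_max, c1=1.0, c2=1.0):
    """Eq. (13.23) for one particle: inertia + cognitive + social terms."""
    theta = inertia_weight(i, i_max)
    r1, r2 = random.random(), random.random()   # r1, r2 in (0, 1)
    return theta * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```

The new position would then be obtained as x_j(i) = x_j(i-1) + v_j(i), as in step 4(c) of the algorithm.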
13.4.4 Solution of the Constrained Optimization Problem
Let the constrained optimization problem be given by
$$\text{Minimize } f(X)$$
subject to
$$g_j(X) \le 0; \quad j = 1, 2, \ldots, m \tag{13.25}$$
An equivalent unconstrained function, F (X), is constructed by using a penalty function
for the constraints. Two types of penalty functions can be used in defining the function
F (X). The first type, known as the stationary penalty function, uses fixed penalty parameters throughout the minimization, and the penalty value depends only on the degree of violation of the constraints. The second type, known as the nonstationary penalty function,
uses penalty parameters whose values change dynamically with the iteration number
during optimization. The results obtained with the nonstationary penalty functions have
been found to be superior to those obtained with stationary penalty functions in the
numerical studies reported in the literature. As such, the nonstationary penalty function
is to be used in practical computations.
According to the nonstationary penalty function approach, the function F (X) is defined as
$$F(X) = f(X) + C(i)\,H(X) \tag{13.26}$$
where C(i) denotes a dynamically modified penalty parameter that varies with the iteration number i, and H(X) represents the penalty factor associated with the constraints:
Modern Methods of Optimization
$$C(i) = (c\,i)^{\alpha} \tag{13.27}$$
$$H(X) = \sum_{j=1}^{m} \theta[q_j(X)] \, [q_j(X)]^{\gamma[q_j(X)]} \tag{13.28}$$
$$\theta[q_j(X)] = a \left( 1 - \frac{1}{e^{q_j(X)}} \right) + b \tag{13.29}$$
$$q_j(X) = \max\{0, g_j(X)\}; \quad j = 1, 2, \ldots, m \tag{13.30}$$
where c, α, a, and b are constants. Note that the function q_j(X) denotes the magnitude of violation of the jth constraint, θ[q_j(X)] indicates a continuous assignment function, assumed to be of exponential form, as shown in Eq. (13.29), and γ[q_j(X)] represents the power of the violated function. The values of c = 0.5, α = 2, a = 150, and b = 10, along with
$$\gamma[q_j(X)] = \begin{cases} 1 & \text{if } q_j(X) \le 1 \\ 2 & \text{if } q_j(X) > 1 \end{cases} \tag{13.31}$$
were used by Liu and Lin [13.35].
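A minimal sketch of this nonstationary penalty scheme, Eqs. (13.26)-(13.31), might look as follows; the function name and the list-of-constraints interface are assumptions for illustration, while the constants are those reported above:

```python
# Sketch of the nonstationary penalty F(X) = f(X) + C(i) H(X).
import math

C_CONST, ALPHA, A, B = 0.5, 2.0, 150.0, 10.0   # c, alpha, a, b from the text

def penalized(f, constraints, X, i):
    """Eq. (13.26): penalized objective at iteration i, for g_j(X) <= 0."""
    C = (C_CONST * i) ** ALPHA                       # Eq. (13.27)
    H = 0.0
    for g in constraints:
        q = max(0.0, g(X))                           # Eq. (13.30): violation
        theta = A * (1.0 - 1.0 / math.exp(q)) + B    # Eq. (13.29)
        gamma = 1.0 if q <= 1.0 else 2.0             # Eq. (13.31)
        H += theta * q ** gamma                      # Eq. (13.28)
    return f(X) + C * H
```

For a feasible X all q_j are zero, so H(X) = 0 and F(X) reduces to f(X); the penalty grows with both the degree of violation and the iteration number.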
Example 13.4 Find the maximum of the function
$$f(x) = -x^2 + 2x + 11$$
in the range −2 ≤ x ≤ 2 using the PSO method. Use 4 particles (N = 4) with the initial positions x_1 = −1.5, x_2 = 0.0, x_3 = 0.5, and x_4 = 1.25. Show the detailed computations for iterations 1 and 2.
SOLUTION
1. Choose the number of particles N as 4.
2. The initial population, chosen randomly (given as data), can be represented as x_1(0) = −1.5, x_2(0) = 0.0, x_3(0) = 0.5, and x_4(0) = 1.25. Evaluate the objective function values at the current x_j(0), j = 1, 2, 3, 4, as f_1 = f[x_1(0)] = f(−1.5) = 5.75, f_2 = f[x_2(0)] = f(0.0) = 11.0, f_3 = f[x_3(0)] = f(0.5) = 11.75, and f_4 = f[x_4(0)] = f(1.25) = 11.9375.
3. Set the initial velocities of each particle to zero:
v_1(0) = v_2(0) = v_3(0) = v_4(0) = 0
Set the iteration number as i = 1 and go to step 4.
4. (a) Find P_best,1 = −1.5, P_best,2 = 0.0, P_best,3 = 0.5, P_best,4 = 1.25, and G_best = 1.25.
(b) Find the velocities of the particles (by assuming c_1 = c_2 = 1 and using the random numbers in the range (0, 1) as r_1 = 0.3294 and r_2 = 0.9542):
$$v_j(i) = v_j(i-1) + r_1 \left[ P_{best,j} - x_j(i-1) \right] + r_2 \left[ G_{best} - x_j(i-1) \right]; \quad j = 1, 2, 3, 4$$
so that
v_1(1) = 0 + 0.3294(−1.5 + 1.5) + 0.9542(1.25 + 1.5) = 2.6241
v_2(1) = 0 + 0.3294(0.0 − 0.0) + 0.9542(1.25 − 0.0) = 1.1927
v_3(1) = 0 + 0.3294(0.5 − 0.5) + 0.9542(1.25 − 0.5) = 0.7156
v_4(1) = 0 + 0.3294(1.25 − 1.25) + 0.9542(1.25 − 1.25) = 0.0
(c) Find the new values of x_j(1), j = 1, 2, 3, 4, as x_j(i) = x_j(i−1) + v_j(i):
x_1(1) = −1.5 + 2.6241 = 1.1241
x_2(1) = 0.0 + 1.1927 = 1.1927
x_3(1) = 0.5 + 0.7156 = 1.2156
x_4(1) = 1.25 + 0.0 = 1.25
5. Evaluate the objective function values at the current x_j(i):
f[x_1(1)] = 11.9846, f[x_2(1)] = 11.9629, f[x_3(1)] = 11.9535, f[x_4(1)] = 11.9375
Check the convergence of the current solution. Since the values of x_j(i) did not converge, we increment the iteration number as i = 2 and go to step 4.
4. (a) Find P_best,1 = 1.1241, P_best,2 = 1.1927, P_best,3 = 1.2156, P_best,4 = 1.25, and G_best = 1.1241.
(b) Compute the new velocities of the particles (by assuming c_1 = c_2 = 1 and using the random numbers in the range (0, 1) as r_1 = 0.1482 and r_2 = 0.4867):
$$v_j(i) = v_j(i-1) + r_1 \left[ P_{best,j} - x_j(i-1) \right] + r_2 \left[ G_{best} - x_j(i-1) \right]; \quad j = 1, 2, 3, 4$$
so that
v_1(2) = 2.6240 + 0.1482(1.1241 − 1.1241) + 0.4867(1.1241 − 1.1241) = 2.6240
v_2(2) = 1.1927 + 0.1482(1.1927 − 1.1927) + 0.4867(1.1241 − 1.1927) = 1.1593
v_3(2) = 0.7156 + 0.1482(1.2156 − 1.2156) + 0.4867(1.1241 − 1.2156) = 0.6711
v_4(2) = 0.0 + 0.1482(1.25 − 1.25) + 0.4867(1.1241 − 1.25) = −0.0613
(c) Compute the current values of x_j(i) as x_j(i) = x_j(i−1) + v_j(i), j = 1, 2, 3, 4:
x_1(2) = 1.1241 + 2.6240 = 3.7481
x_2(2) = 1.1927 + 1.1593 = 2.3520
x_3(2) = 1.2156 + 0.6711 = 1.8867
x_4(2) = 1.25 − 0.0613 = 1.1887
6. Find the objective function values at the current x_j(i):
f[x_1(2)] = 4.4480, f[x_2(2)] = 10.1721, f[x_3(2)] = 11.2138, f[x_4(2)] = 11.9644
Check the convergence of the process. Since the values of x_j(i) did not converge, we increment the iteration number as i = 3 and go to step 4. Repeat step 4 until the convergence of the process is achieved.
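The hand computation above can be reproduced programmatically. This is a sketch specialized to the example (fixed r_1, r_2 per iteration, c_1 = c_2 = 1, and no inertia term, as in the original update of the example), not a general PSO implementation:

```python
# Two iterations of PSO for f(x) = -x^2 + 2x + 11, as in Example 13.4.
def f(x):
    return -x**2 + 2*x + 11

x = [-1.5, 0.0, 0.5, 1.25]        # initial positions x_j(0)
v = [0.0, 0.0, 0.0, 0.0]          # initial velocities v_j(0)
p_best = x[:]                      # each particle's best position so far
g_best = max(p_best, key=f)        # best position in the swarm (1.25)
rs = [(0.3294, 0.9542), (0.1482, 0.4867)]   # (r1, r2) for iterations 1 and 2

for r1, r2 in rs:
    for j in range(4):             # velocity and position updates, step 4(b)-(c)
        v[j] += r1 * (p_best[j] - x[j]) + r2 * (g_best - x[j])
        x[j] += v[j]
    for j in range(4):             # refresh the personal bests, step 4(a)
        if f(x[j]) > f(p_best[j]):
            p_best[j] = x[j]
    g_best = max(p_best + [g_best], key=f)

print([round(xi, 4) for xi in x])  # → [3.7481, 2.3521, 1.8867, 1.1887]
```

The printed positions agree with the hand computation to within the rounding used in the text (the text carries 2.3520 because it rounds the intermediate velocities to four decimals).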
13.5 ANT COLONY OPTIMIZATION
13.5.1 Basic Concept
Ant colony optimization (ACO) is based on the cooperative behavior of real ant
colonies, which are able to find the shortest path from their nest to a food source.
The method was developed by Dorigo and his associates in the early 1990s [13.31,
13.32]. The ant colony optimization process can be explained by representing the optimization problem as a multilayered graph as shown in Fig. 13.3, where the number of layers is equal to the number of design variables and the number of nodes in a particular layer is equal to the number of discrete values permitted for the corresponding design variable.
[Figure 13.3: Graphical representation of the ACO process in the form of a multilayered network. Paths run from the Home (nest) node through Layers 1-6, one layer per design variable x_1, ..., x_6, each layer containing the nodes x_{i1}, ..., x_{i8}, to the Destination (food) node.]
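Under the assumption that each layer holds the permissible discrete values of one design variable, the graph of Fig. 13.3 can be sketched as a small data structure in which one ant's walk from Home to Destination selects a complete candidate design; all node values below are invented for illustration:

```python
# Hypothetical sketch of the multilayered graph of Fig. 13.3.
import random

# 6 layers (x_1 ... x_6), 8 nodes each (x_i1 ... x_i8); the discrete
# values assigned to the nodes here are assumptions, not from the text.
layers = {i: [0.5 * j for j in range(1, 9)] for i in range(1, 7)}

def ant_walk(layers):
    """One ant's path: pick one node (one discrete value) in every layer,
    yielding a complete candidate design vector (x_1, ..., x_6)."""
    return [random.choice(layers[i]) for i in sorted(layers)]
```

In the full method, the choice in each layer would be biased by pheromone levels rather than uniform, so that shorter (better) paths are reinforced over iterations.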