Evolutionary Computation
SS 2001
Prof. Petros Koumoutsakos
Assistant: Sibylle Mueller
Project 1 -- Solution and Comments
1) General comments:
- One mutation means the creation of one complete child vector x_child, not the
creation of a single element of the child vector. This distinction is important
for a correct implementation of the 1/5 success rule.
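The point above can be sketched as follows. This is a minimal illustrative (1+1)-ES with the 1/5 success rule, not the official assignment solution: the measurement block length (10*n mutations) and the adaptation factor 0.85 are common textbook choices, assumed here for the sketch.

```python
import random

def sphere(x):
    # quadratic sphere test function, minimum 0 at the origin
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x, sigma=1.0, target=1e-10, max_evals=100_000):
    """(1+1)-ES with the 1/5 success rule (illustrative sketch).

    Note: one mutation creates one COMPLETE child vector; the success
    rate is measured over whole mutations, not over single elements.
    """
    fx = f(x)
    evals = 1
    successes = 0
    block = 10 * len(x)  # assumption: apply the rule every 10*n mutations
    for k in range(1, max_evals):
        # one mutation = one full child vector
        child = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fc = f(child)
        evals += 1
        if fc <= fx:  # (1+1) selection: keep the better of parent and child
            x, fx = child, fc
            successes += 1
        if k % block == 0:
            rate = successes / block
            if rate > 0.2:    # too many successes -> enlarge step size
                sigma /= 0.85
            elif rate < 0.2:  # too few successes -> shrink step size
                sigma *= 0.85
            successes = 0
        if fx < target:
            break
    return x, fx, evals
```

For the 5-dimensional sphere this sketch typically reaches the target in a few hundred function evaluations, consistent with the order of magnitude reported in the results table below.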
- The random number generators for C and Fortran can be obtained from the book
'Numerical Recipes', online under
http://www.ulib.org/webRoot/Books/Numerical_Recipes/…
…/bookc.html (ps files for C)
…/bookcpdf.html (pdf files for C)
For uniform random number generators (RNGs) I recommend the algorithm 'ran1',
and for normally distributed RNGs the algorithm 'gasdev'.
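The 'gasdev' routine in Numerical Recipes implements the polar (Marsaglia) variant of the Box-Muller transform, which turns pairs of uniform deviates into normal deviates. A minimal Python sketch of that idea (not a transcription of the book's C code):

```python
import math
import random

def gasdev(uniform=random.random):
    """Standard normal deviate via the polar Box-Muller method.

    Draws points uniformly in the unit disk by rejection, then
    transforms the radius to obtain a Gaussian deviate.
    """
    while True:
        v1 = 2.0 * uniform() - 1.0
        v2 = 2.0 * uniform() - 1.0
        r2 = v1 * v1 + v2 * v2
        if 0.0 < r2 < 1.0:  # reject points outside the unit disk
            fac = math.sqrt(-2.0 * math.log(r2) / r2)
            return v1 * fac  # v2 * fac is a second, independent deviate
```

The rejection step wastes about 21% of the uniform pairs but avoids the trigonometric calls of the basic Box-Muller transform.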
- It is justified to perform fewer than 30 repetitions when determining the mean
and standard deviation of the number of function evaluations, as long as the
standard deviation is small relative to the mean.
2) Results:

Function    Dimension    # iterations (average +/- standard deviation)
f_sphere    5            470    +/- 35
f_sphere    20           1928   +/- 70
f_Rosen     5            140227 +/- 1487
f_Rosen     20           929326 +/- 3183
3) Discussion of the results:
- The number of function evaluations increases proportionally to the
dimensionality for the sphere function, and does not increase
proportionally for Rosenbrock’s function. The number of function
evaluations is much higher for Rosenbrock’s function than for the
sphere.
- The sphere is relatively easy to optimize with the (1+1)-ES employing the 1/5
success rule because the parameters of the 1/5 rule are fixed such that they are
optimal for linear and quadratic functions. As a result, the step sizes are
adapted in an optimal fashion for the quadratic sphere function. However, step
sizes cannot be adapted optimally for Rosenbrock's problem. The 1/5 rule assumes
that there is always some combination of variances σ_i > 0 with which, on
average, at least one improvement can be expected within five mutations. In the
narrow-valley topology of Rosenbrock's function this assumption is met only for
small step sizes, not for large ones (look at a contour plot of the 2-D
problem). The step size therefore generally attains values that are too small.
Additionally, the step size oscillates during the entire optimization, and the
control becomes inefficient. A remedy would be to use different strategy
parameters in the 1/5 rule.
- 'Flatness' of the function in the valley bottom does not matter, since the ES
compares function values only.
- Also, Rosenbrock's function is unimodal.
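For reference, the two test functions discussed above can be written as follows. This assumes the standard generalized forms (the exact constants, e.g. the factor 100, are the usual textbook definition, since the assignment text is not reproduced here):

```python
def f_sphere(x):
    # f_sphere(x) = sum_i x_i^2; quadratic, minimum 0 at the origin
    return sum(xi * xi for xi in x)

def f_rosen(x):
    # generalized Rosenbrock function:
    #   sum_i [ 100 * (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
    # narrow curved valley along x_{i+1} = x_i^2, minimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Plotting f_rosen over two variables shows the banana-shaped valley referred to in the contour-plot remark above.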
For comparison, we show the number of iterations to reach the goal using an ES
with an advanced adaptation of individual step sizes (CMA-ES: Evolution Strategy
with Covariance Matrix Adaptation) and a quasi-Newton method that needs to
compute the gradient (L-BFGS: limited-memory
Broyden-Fletcher-Goldfarb-Shanno algorithm). While for the CMA-ES the number of
function evaluations is shown in the table below, for L-BFGS the sum of function
and gradient evaluations is reported.
Function    Dimension    # iterations CMA-ES    # iterations L-BFGS
f_sphere    5            700                    18
f_sphere    20           3000                   63
f_Rosen     5            1000                   270
f_Rosen     20           20000                  3000