Selective Crossover Using Gene Dominance as an
Adaptive Strategy for Genetic Programming
Chi Chung Yuen
A thesis submitted in partial fulfilment
of the requirements for the degree of
MSc Intelligent Systems
at
University College London,
University of London
Department of Computer Science
University College London
Gower Street
London WC1E 6BT
UK
September 2004
Supervisor: Chris Clack
“In the struggle for survival, the fittest win out at the expense of their rivals
because they succeed in adapting themselves best to their environment.”
Darwin, Charles Robert
(1809 – 1882)
Abstract
Since the emergence of evolutionary computing, many new natural genetic operators have been researched within genetic algorithms and many new recombination techniques have been proposed. There has been substantially less development in Genetic Programming compared with Genetic Algorithms. Koza [14] stated that crossover was much more influential than mutation for evolution in genetic programming, suggesting that mutation was unnecessary. A well-known problem with crossover is that good sub-trees can be destroyed by an inappropriate choice of crossover point; this is otherwise known as destructive crossover.
This thesis proposes two new crossover methods which use the idea of haploid gene dominance in genetic programming. The dominance information identifies the goodness of a particular node or sub-tree and helps to reduce destructive crossover. The new selective crossover techniques are tested on a variety of optimisation problems and compared with the analysis by Vekaria [28]. Additionally, the uniform crossover proposed by Poli and Langdon [22] has been revised and discussed.
The gene dominance selective crossover operator was initially designed by Vekaria in 1999, who implemented it for Genetic Algorithms and showed improved performance when evaluated on certain problems. The proposed operators, “Simple Selective Crossover” and “Dominance Selective Crossover”, have been compared and contrasted with Vekaria's results on two problems; an attempt has also been made to test them on a more complex genetic programming problem. Satisfactory results have been found.
Acknowledgements
This thesis would not have been complete without the help and supervision of Chris
Clack, Wei Yan, friends and family.
I would like to take this opportunity to thank my supervisor Chris Clack for his
guidance, and my parents for their continued support and encouragement, without whom I would not have been able to complete my MSc. I would also like to give a special thank you to
Purvin Patel, Amit Malhotra, Yu Hui Yang, Rob Houghton and Kamal Shividansi for
their friendship, support and understanding.
Table of Contents
ABSTRACT.................................................................................................................... 3
ACKNOWLEDGEMENTS .......................................................................................... 4
TABLE OF CONTENTS .............................................................................................. 5
LIST OF FIGURES ....................................................................................................... 8
1 INTRODUCTION....................................................................................................... 9
1.1 MOTIVATION ........................................................................................................... 9
1.1.1 Theory of Evolution......................................................................................... 9
1.2 AIMS AND OBJECTIVES OF THIS THESIS ................................................................. 10
1.2.1 Hypothesis: ................................................................................................... 11
1.3 CONTRIBUTIONS .................................................................................................... 13
1.4 STRUCTURE OF THIS THESIS .................................................................................. 13
2 BACKGROUND AND RELATED WORK ........................................................... 15
2.1 EVOLUTIONARY COMPUTING ................................................................................ 15
2.1.1 General Algorithm ........................................................................................ 15
2.1.2 Evaluating Individuals Fitness ..................................................................... 16
2.1.3 Selection Methods ......................................................................................... 16
2.1.4 Natural Recombination Operators ............................................................... 18
2.1.5 Natural Genetic Variation Operators........................................................... 19
2.1.6 Linkage, Epistasis and Deception................................................................. 19
2.2 WHAT IS A GENETIC ALGORITHM?........................................................................ 20
2.2.1 Terminology .................................................................................................. 20
2.2.2 Genetic Operators......................................................................................... 21
2.2.3 Genetic Variation Operator.......................................................................... 21
2.3 WHAT IS A GENETIC PROGRAM? ........................................................................... 21
2.3.1 LISP............................................................................................................... 23
2.3.2 Representation – Functions and Terminals .................................................. 23
2.3.3 Genetic Operators......................................................................................... 24
2.3.4 Genetic Variation Operators ........................................................................ 24
2.4 DIFFERENCE BETWEEN GA AND GP...................................................................... 24
2.5 SELECTIVE CROSSOVER IN GA.............................................................................. 25
2.5.1 Algorithm and Illustrative Example of Selective Crossover in GA............... 26
2.6 SELECTIVE CROSSOVER IN GENETIC PROGRAMMING ............................................ 28
2.7 OTHER ADAPTIVE CROSSOVER TECHNIQUES FOR GP ........................................... 28
2.7.1 Depth Dependent Crossover ......................................................... 29
2.7.2 Non Destructive Crossover ........................................................... 29
2.7.3 Non Destructive Depth Dependent Crossover.............................. 29
2.7.4 Self – Tuning Depth Dependent Crossover................................... 29
2.7.5 Brood Recombination in GP......................................................... 30
3 – SELECTIVE CROSSOVER METHODS USING GENE DOMINANCE IN GP
........................................................................................................................................ 32
3.1 TERMINOLOGY ...................................................................................................... 32
3.2 UNIFORM CROSSOVER........................................................................................... 33
3.2.1 A revised version of GP Uniform Crossover ................................................ 34
3.3 SIMPLE DOMINANCE SELECTIVE CROSSOVER ....................................................... 35
3.3.1 Example of simple dominance selective crossover algorithm.......................37
3.3.2 Key Properties ...............................................................................................39
3.4 DOMINANCE SELECTIVE CROSSOVER ....................................................................39
3.4.1 Example of dominance selective crossover algorithm ..................................40
3.4.2 Key Properties ...............................................................................................43
4 – IMPLEMENTATION OF PROPOSED CROSSOVER TECHNIQUES AND
GP SYSTEM .................................................................................................................44
4.1 MATLAB ................................................................................................................44
4.2 SYSTEM COMPONENTS...........................................................................................44
4.2.1 Population .....................................................................................................44
4.2.2 Initialisation ..................................................................................................44
4.2.3 Standard Crossover .......................................................................................45
4.2.4 Simple Selective Crossover............................................................................45
4.2.5 Dominance Selective Crossover ....................................................................45
4.2.6 Uniform Crossover ........................................................................................45
4.2.7 Mutation ........................................................................................................45
4.2.8 Selection Method ...........................................................................................45
4.2.9 Evaluation......................................................................................................45
4.2.10 Updating Dominance Values.......................................................................46
4.2.11 Termination Condition ................................................................................46
4.2.12 Entity relation diagram of all the main functions .......................................47
4.3 THEORETICAL ADVANTAGES OVER STANDARD CROSSOVER AND UNIFORM
CROSSOVER .................................................................................................................48
5 - THE EXPERIMENTS ............................................................................................49
5.1 STATISTICAL HYPOTHESIS .....................................................................................49
5.2 EXPERIMENTAL DESIGN ........................................................................................50
5.3 SELECTION OF PUBLISHED GA EXPERIMENTS .......................................................50
5.3.1. One Max .......................................................................................................51
5.3.2 L-MaxSAT......................................................................................................51
5.4 EXPRESSION OF EXPERIMENT FOR GP....................................................................52
5.4.1 One Max ........................................................................................................53
5.4.2 Random L – Max SAT....................................................................................53
5.5 THE 6 BOOLEAN MULTIPLEXER .............................................................................54
5.6 TESTING.................................................................................................................56
5.6.1 Test Plan........................................................................................................56
CHAPTER 6 – EXPERIMENTS AND ANALYSIS OF RESULTS ....58
6.1 INTERPRETATION OF THE GRAPHS ..........................................................................58
6.2 RESULTS ................................................................................................................59
6.2.1 One Max ........................................................................................................59
6.2.2 Random L – Max SAT....................................................................................62
6.3 COMPARISON WITH DOMINANCE SELECTIVE CROSSOVER FOR GA........................64
6.3.1 One Max ........................................................................................................64
6.3.2 L-Max SAT.....................................................................................................65
6.4 THE MULTIPLEXER ................................................................................................65
CHAPTER 7 – CONCLUSION ..................................................................................66
7.1 CRITICAL EVALUATION .........................................................................................66
7.2 FURTHER WORK ....................................................................................................67
APPENDIX A: FULL TEST RESULTS.................................................................... 68
APPENDIX B: STATISTICAL RESULTS .............................................. 76
APPENDIX C: USER MANUAL ............................................................................... 85
APPENDIX D: SYSTEM MANUAL ......................................................................... 94
APPENDIX E: CODE LISTING................................................................................ 95
BIBLIOGRAPHY ...................................................................................................... 109
List of Figures
Figure 2.1: Tree Representation of (+AB) in Lisp form ................................................23
Figure 2.2: Recombination with Selective Crossover and Updating Dominance Values
........................................................................................................................................27
Figure 3.1: Parent Chromosomes for Uniform Crossover..............................................34
Figure 3.2: Offspring Chromosomes from Uniform Crossover .....................................35
Figure 4.1: Flow Diagram showing the basic procedure of the Genetic Program .........47
Figure 6.1: An example of the results in graphical form................................................58
Figure 6.2: Comparing the Mean number of generation until optimal solution is found
........................................................................................................................................60
Figure 6.3: Mean CPU time for 3000 generations using the different crossover
techniques .......................................................................................................................61
Figure 6.4: Comparing the mean of the maximum fitness over 30 runs ........................63
Figure 6.5: Mean CPU time for 600 generations, using the 4 crossover techniques. ....64
Chapter 1
1 Introduction
The idea of evolution has inspired many algorithms for optimisation and machine learning, giving birth to the field of evolutionary computing. The idea was already present in the 1950s, when many computer scientists independently studied it and developed optimisation systems.
Genetic Programming (GP) is a non-deterministic search technique within evolutionary computing. It is widely agreed that standard crossover in genetic programming is biased towards local search and is not ideal for exploring the space of programs efficiently. It has been argued strongly, for both Genetic Algorithms (GAs) and Genetic Programming (GP), that more crossover points lead to more effective exploration.
1.1 Motivation
From the idea of evolution in natural species, selective breeding and gene dominance,
we felt that we could exploit the idea of dominance to improve evolutionary techniques.
We can evolve selectively using dominance information about the particular gene.
Vekaria [28] used the idea of gene dominance and implemented a selective crossover
technique in GA that biases the more dominant genes. She found that the technique
required fewer evaluations before convergence was reached when compared with two-point and uniform crossover.
1.1.1 Theory of Evolution
Charles Darwin [5] formalised the concept of Evolution in 1859. He demonstrated that
life evolved to suit the environment. Darwin [5] used the growth of a tree as an example
to demonstrate evolution.
Evolution is believed to be a gradual process in which something changes into a different and usually more complex or better form. In biology, there is strong empirical evidence showing that living species evolve to increase their fitness and adapt more closely to the environment they are in. Darwin argued that if a new variation occurred in an individual and benefited it, then that variation would be assured a better chance of being preserved in the struggle for life; the individual should therefore have more chance of passing the trait on to the next generation.
Evolution cannot be measured or observed in a single individual; we must analyse the whole population. A more detailed discussion of evolution by Charles Darwin can be found in his book, “The Origin of Species” [5], which introduced the idea of natural selection as the main mechanism acting on small heritable variations.
From Darwin’s words, I think that I can summarise the occurrence of evolution
into four essential preconditions:
• Reproduction of individuals in the population;
• Variation that affects the likelihood of survival of individuals;
• Heredity in reproduction;
• Finite resources causing competition.
These ideas will be expanded and explained further when I discuss into more detail
about Genetic Algorithms and Genetic Programming.
1.2 Aims and Objectives of this Thesis
Following the ideas of evolution, selective breeding and Vekaria [28], it was decided to investigate new recombination techniques for Genetic Programming that improve efficiency and reduce the chances of destructive crossover occurring. Vekaria's selective crossover accesses every gene; effectively, it can be described as uniform crossover in GA with selective and adaptive control.
Poli and Langdon [22] developed uniform crossover in GP; they claim that in early generations uniform crossover performs like a global search operator. They explained that, as the population starts to converge, uniform crossover becomes more and more local, in the sense that the offspring produced are progressively more similar to their parents.
This thesis will propose two new crossover operators and critically compare and
contrast them with other GP operators.
This thesis examines the following hypothesis, which has been heavily
influenced by Vekaria.
1.2.1 Hypothesis:
Genetic programming is known to be able to solve problems despite having little or no concrete knowledge about the problem being optimised. The two proposed operators will have different properties.
Simple selective crossover is a computationally simple crossover method with the following properties:
• Detection: It detects sub-trees which have a good impact during crossover on the candidate solution.
• Correlation: It uses differences between parental and offspring fitnesses as a means of discovering beneficial alleles.
• Preservation: It preserves alleles by keeping the more dominant sub-trees with individuals of higher fitness.
Similarly, dominance selective crossover is predicted to hold the same three properties, but in a different and more effective way, together with a fourth. The four properties are:
• Detection: It detects nodes that were changed during crossover to identify modifications made to the candidate solution.
• Correlation: It uses differences between parental and offspring fitnesses as a means of discovering beneficial alleles.
• Preservation of beneficial genes: After discovering a beneficial gene or a group of beneficial genes, its advantage over less beneficial genes is always desired; since we attempt to merge all the better genes together to form possibly the best solution, we initially preserve the better genes found in each generation by ensuring that they are kept.
• Mirroring natural evolution and homology: In the past, the idea of maintaining the genetic structure of a chromosome in genetic programming has never been explored. Selective crossover compares the whole individual with another individual within the population, effectively performing a point-for-point comparison.
There are four main aims to this thesis:
1. Design and implement two new adaptive crossover operators, “dominance selective crossover” and “simple dominance selective crossover”, with the properties described above.
2. Compare and contrast dominance selective crossover in GP with the selective crossover in GA which Vekaria implemented.
3. Critically compare both selective crossover techniques with standard crossover.
4. Implement uniform crossover and extensively evaluate selective crossover
against uniform crossover.
In addition, this thesis aims to implement a revised version of uniform crossover and test it on a problem with functions of different arity. This aim was added whilst designing dominance selective crossover. It gives me the opportunity to compare my revised uniform crossover method with the one proposed by Poli and Langdon, as well as to perform further tests on the new selective crossover methods proposed.
Destructive crossover has been a major issue of discussion in the field of genetic programming. There have been several attempts to overcome this problem by introducing a method to select a good crossover point prior to crossing over the two sub-trees. Tackett [27] proposed a method known as brood crossover, which produces several offspring and chooses the best two from the group. Iba [12] suggested measuring the “goodness” of sub-trees and using it to bias the choice of crossover points. However, it has been shown that a sub-tree with high fitness does not necessarily have a good impact on the tree itself. Vekaria (1999) [28] used the idea of gene dominance in GA to implement a selective crossover technique that biases the more dominant genes. Hengpraprohm and Chongstitvatana (2001) [11] implemented a selective crossover in GP which crosses over a good sub-tree with a bad sub-tree. These methods are discussed further in Chapter 2.
Simple dominance crossover aims to reduce the probability of destructive crossover occurring. With the knowledge that a sub-tree with high fitness does not necessarily lead to a good impact, we believe that dominance values provide a different and less biased selection method, as they consider average dominance per node rather than the fitness of the sub-tree.
Research into new recombination techniques has focused on the preservation of building blocks; the preservation of homology has rarely been considered. Some of the methods proposed have been successful, others have intriguing empirical implications regarding the building block hypothesis, and only a few have considered that homology could be important within the GP field. Dominance selective crossover aims to preserve homology as well as become an efficient selective crossover operator.
1.3 Contributions
This thesis makes five main contributions:
1. The design and implementation of “simple selective crossover” and “dominance
selective crossover”, two new adaptive crossover methods that detect beneficial
sub-trees or nodes and incorporate correlations between parents and offspring
as a means of discovering and preserving beneficial alleles at each locus during
crossover to produce fitter offspring.
2. Compare and contrast dominance selective crossover in GP with selective
crossover in GA.
3. Compare the performance between simple selective crossover, dominance
selective crossover, standard crossover and uniform crossover.
4. Compare the performance between simple selective crossover with standard
crossover and identify whether it helps to reduce destructive crossover.
5. Compare the performance between dominance selective crossover with uniform
crossover and evaluate whether the dominance information helps to provide
useful bias information for exploration.
A bonus contribution is to design and implement a revised version of uniform crossover and show that extra performance can be achieved by allowing functional nodes with different arities to be swapped individually. This provides an opportunity, firstly, to compare my revised version of uniform crossover with the algorithm described by Poli and Langdon and, secondly, to compare dominance selective crossover with both versions of uniform crossover.
1.4 Structure of this Thesis
After this introductory chapter, Chapter 2 presents a more thorough explanation of Evolutionary Computing, Genetic Algorithms and Genetic Programming, and highlights the differences between GAs and GPs. A discussion of Vekaria's selective crossover in GA and other interesting crossover techniques for GP is also presented; it provides a strong basis for the development of new recombination methods.
Chapter 3 provides a detailed description of uniform crossover, the reasons for proposing a new version of it, and an outline of the algorithm. The designs of simple selective crossover and dominance selective crossover are presented, accompanied by illustrative examples, and the key properties are discussed and emphasised. The remaining chapters discuss the implementation of the GP system and the experiments, and critically analyse the performance using statistical measures.
Chapter 4 outlines the GP system, the main modules in the system, the structure of
objects and other implementation issues.
Chapter 5 details the experiments and reasons why I have chosen them. Dominance
selective crossover and simple selective crossover will be compared against standard
crossover and uniform crossover in genetic programming. The results will also be
compared with the results Vekaria [28] obtained when a similar technique was
implemented in GA.
Chapter 6 provides empirical results and a critical analysis of the performance, both in
terms of the number of generations required before the global solution is found or the
best solution after x generations, and in terms of the computational effort required to obtain
such a solution.
Chapter 7 concludes this thesis, stating the findings, and critically evaluating the work
being done. A suggestion for improvements and ideas for further work will be
discussed.
Chapter 2
2 Background and Related Work
2.1 Evolutionary Computing
Evolutionary Computing is a category of problem solving techniques that are based on
principles of nature and biological development, such as natural selection, genetic
inheritance and mutation.
The main techniques in this field are genetic algorithms, genetic programming, biological computation, classifier systems, artificial life, artificial immune systems, particle swarm optimisation, evolution strategies, ant colony optimisation, swarm intelligence and, recently, evolvable hardware.
2.1.1 General Algorithm
Given a well-defined problem, we can solve it using a general genetic algorithmic approach. The method is as follows (a code sketch is given after the list):
1. Generate the initial population randomly, creating a population of size n.
2. Measure the fitness of each individual in the population.
3. Sort the population in order of fitness
4. Repeat the following steps until n offspring are created.
a. Select a pair of individuals from the current population. The selection
method is based on a probability model. Selection is done “with
replacement”, meaning that the same individual in the current population
can be selected more than once to become a parent. Various selection
methods have been developed, and this will be discussed further in this
chapter.
b. Next we have to decide what operation will be done to the pair of
individuals chosen. We set a crossover rate and a mutation rate. We
generate a random number and, based on this number, decide which operation to perform.
i. If crossover is activated, we normally choose a random point in each parent and swap the two parts to create the offspring (note that there are other crossover methods, e.g. two-point crossover, uniform crossover, etc.). We add two to the counter of the n new individuals being created for the next generation.
ii. Else, if mutation is activated, we select one of the chosen parents and a random point at which we randomly generate new genetic material to replace what was there. We add one to the counter of the n new individuals being created for the next generation.
iii. Else, if neither crossover nor mutation was selected, we create two offspring that are exact copies of their respective parents. As with crossover, we add two to the counter.
c. If we have created more than n individuals, we will simply discard one
of the new individuals at random.
5. We will replace the current population of individuals with the new population
created.
6. Go back and repeat step 2, until we have reached the number of generations
specified.
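To make the loop above concrete, the following is a minimal MATLAB sketch of the general algorithm. It uses a One-Max style bit-string fitness and uniform random parent selection purely for brevity (in practice one of the selection methods of Section 2.1.3 would be used); the function name and parameters are illustrative and do not describe the system implemented in Chapter 4.

function best = simple_ea(popSize, numGenerations, pCross, pMut)
% SIMPLE_EA  Minimal sketch of the generational loop described above.
    chromLen = 20;                             % assumed chromosome length
    pop = randi([0 1], popSize, chromLen);     % 1. random initial population
    for gen = 1:numGenerations
        fitness = sum(pop, 2);                 % 2. evaluate (One-Max style fitness)
        [fitness, order] = sort(fitness, 'descend');
        pop = pop(order, :);                   % 3. sort the population by fitness
        newPop = zeros(0, chromLen);
        while size(newPop, 1) < popSize        % 4. breed n offspring
            p = pop(randi(popSize), :);        % 4a. selection with replacement
            q = pop(randi(popSize), :);        %     (uniform random here for brevity)
            r = rand;
            if r < pCross                      % 4b(i) one-point crossover
                cut = randi(chromLen - 1);
                newPop = [newPop; p(1:cut), q(cut+1:end); q(1:cut), p(cut+1:end)];
            elseif r < pCross + pMut           % 4b(ii) point mutation
                point = randi(chromLen);
                p(point) = 1 - p(point);
                newPop = [newPop; p];
            else                               % 4b(iii) straight reproduction
                newPop = [newPop; p; q];
            end
        end
        pop = newPop(1:popSize, :);            % 4c & 5. replace the population, discarding any extra individual
    end
    fitness = sum(pop, 2);
    [~, bestIdx] = max(fitness);
    best = pop(bestIdx, :);                    % fittest individual of the final generation
end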
2.1.2 Evaluating Individuals Fitness
Evaluating the fitness of each individual is the driving force behind the movement
towards obtaining an optimal solution for the problem. Once all the fitness values for
each individual have been calculated, the selection method guides the direction of evolution by selecting the fitter, i.e. more suitable, solutions for the given problem.
The fitness after evaluation is known as the raw fitness; it is the value that directly measures performance on the problem. However, on certain occasions we may need to adjust it so that the values lie between 0 and 1, where 1 is the ideal fitness.
2.1.3 Selection Methods
Choosing the individuals in the population to create offspring can be difficult. In nature, the fittest individuals survive longer in their particular environment and therefore have a much higher probability of reproducing, so it is only logical that individuals with a higher fitness should have a higher chance of being selected.
There are 5 known methods that have been widely used within the field. The
methods are:
1. Fitness – Proportionate Selection with Roulette Wheel
2. Scaling
3. Ranking
4. Tournament Selection
5. Elitism
2.1.3.1 Fitness – Proportionate Selection with Roulette Wheel
The number of times an individual is expected to reproduce is proportional to its fitness divided by the total fitness of the population. A probability distribution can be generated using the following equation:

Pr(k) = f_k / Σ_{i=1}^{n} f_i        (2.1)
The most common method of implementing this is the roulette wheel selection
(RWS) method. Conceptually, each individual is assigned a slice of a circular roulette
wheel. The size of the slice is proportional to the individual’s fitness. Each time we
wish to select a parent, we generate a random number; its location can be visualised as the position where the ball comes to rest after a spin of the roulette wheel. The corresponding individual is then selected.
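As a concrete illustration, here is a minimal MATLAB sketch of roulette wheel selection based on Equation 2.1; the function name is illustrative and the raw fitness values are assumed to be non-negative.

function k = roulette_wheel_select(fitness)
% ROULETTE_WHEEL_SELECT  Return the index of one individual chosen with
% probability proportional to its fitness (Equation 2.1).
    wheel = cumsum(fitness / sum(fitness));   % cumulative slices of the wheel
    spin  = rand;                             % where the ball comes to rest
    k = find(spin <= wheel, 1, 'first');
    if isempty(k)                             % guard against round-off at the top end
        k = numel(fitness);
    end
end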
There are problems with the RWS method: individuals with a high probability of selection tend to dominate, while individuals with a low probability may never be selected to become a parent. Fitness-proportionate selection therefore tends to put too much emphasis on exploiting highly fit individuals in the early generations, at the expense of exploring other regions of the search space [19].
2.1.3.2 Scaling
To solve the problem of premature convergence, there have been many scaling methods
proposed. All scaling methods map the raw fitness values onto expected values that will
make the evolutionary algorithm less susceptible to premature convergence.
2.1.3.3 Ranking
Rank selection was proposed by Baker [2]; it is another method which helps prevent premature convergence. The method is to rank the population according to fitness,
and the expected value of each individual depends on its rank rather than its absolute
fitness.
This method is simpler than scaling, but the fact that it ignores absolute fitness information can be a severe disadvantage, as we would like the algorithm to converge to the optimum as quickly as possible while still exploring the search space well. On the other hand, this same fact gives it the advantage of a low likelihood of convergence problems. Selection is conducted with replacement, which allows individuals to be selected many times.
2.1.3.4 Tournament Selection
Tournament Selection was introduced to save computational power and reduce the
likelihood of early convergence. The method is to select n individuals at random from
the population, where n is any positive integer less than the size of the population.
A random number r between 0 and 1 is then generated; if r is less than a preset threshold parameter k, the fitter individual is chosen, otherwise the less fit individual is selected [19]. This method also uses selection with replacement.
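A minimal MATLAB sketch of tournament selection as described above; the function name is an assumption, and for n greater than two the comparison is taken between the fittest and least fit competitors.

function idx = tournament_select(fitness, n, k)
% TOURNAMENT_SELECT  Pick n individuals at random (with replacement) and
% return the fitter one with probability k, otherwise the less fit one.
    competitors = randi(numel(fitness), 1, n);
    [~, order] = sort(fitness(competitors), 'descend');
    if rand < k
        idx = competitors(order(1));      % fitter competitor wins
    else
        idx = competitors(order(end));    % less fit competitor wins
    end
end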
2.1.3.5 Elitism
Elitism is an addition to other selection methods that forces the evolutionary algorithm
to retain some number of the best individuals at each generation. These individuals are retained because they could otherwise be lost if they are not selected to reproduce, or could be destroyed by crossover or mutation. The idea was introduced by Kenneth De Jong (1975).
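Elitism can be layered on top of any of the selection methods above. The sketch below assumes populations stored as matrices of row chromosomes; the function and variable names are illustrative.

function newPop = apply_elitism(oldPop, oldFit, newPop, newFit, nElite)
% APPLY_ELITISM  Copy the nElite fittest individuals of the previous
% generation over the weakest individuals of the new generation.
    [~, bestOld]  = sort(oldFit, 'descend');
    [~, worstNew] = sort(newFit, 'ascend');
    newPop(worstNew(1:nElite), :) = oldPop(bestOld(1:nElite), :);
end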
2.1.4 Natural Recombination Operators
Natural recombination operators are generally regarded as crossover. Recombination is defined as the natural formation in offspring of genetic combinations not present in the parents, by the processes of crossing over or independent assortment. In biology specifically, it results from the exchange of genetic material between homologous chromosomes during meiosis.
In evolutionary computing, it is normally assumed that two offspring are created during recombination. Selecting parts from one parent and other parts from the other parent potentially creates individuals which test areas of the search space that have yet to be explored. This can be thought of as parallel exploration, as we potentially have two very different individuals created in one operation, each searching in a different direction.
2.1.5 Natural Genetic Variation Operators
Natural Genetic Variation Operators are also known as mutation based operators.
Mutation is defined as an alteration or change in nature, form, or quality. In biology specifically, it is the process by which such a change occurs in a chromosome, either
through an alteration in the nucleotide sequence of the DNA coding for a gene or
through a change in the physical arrangement of a chromosome.
Such a change in the DNA sequence within a gene or chromosome, resulting in the creation of a new character or trait not found in the parental type, will affect the individual's fitness. With respect to digital evolution, we are potentially exploring an area of the search space which has not been considered before, provided there is no similar individual in the current population.
The presence of mutation aids divergence in a population and helps to prevent
premature convergence.
2.1.6 Linkage, Epistasis and Deception
The theory of evolutionary algorithms states that the type and frequency of the recombination used heavily influence the efficiency with which the algorithm reaches a solution. Unfortunately, reality is not usually so simple: there are many interrelationships between the various components of a problem solution. This is known as linkage and prohibits efficient search. The reason is believed to be that a variation of one parameter can have a negative influence on overall fitness due to its linkage with another. Effectively, this is similar to dealing with non-linear problems with interactions between components, a phenomenon also known as epistasis.
The phenomenon extends further to something known as deception. Deception
is defined as the act of deceit. It has been widely studied [5, 6, 7, 8, 9, 12] and shown
that deception is strongly connected to epistasis. A problem is considered deceptive if a combination of alleles or schemata leads the GA or GP away from the global optimum and concentrates the search around a local optimum. It is widely accepted that increasing or maintaining diversity helps to overcome deception: a diverse population is likely to contain another individual which is significantly different and which allows the run to continue.
2.2 What is a Genetic Algorithm?
Genetic Algorithms were invented by John Holland in the 1960s and 1970s. He later
published a book named “Adaptation in Natural and Artificial Systems” in 1975. The
book detailed a theoretical framework for adaptation using GAs. It showed that GAs are effective and robust problem solvers which do not require a great deal of domain-specific knowledge.
2.2.1 Terminology
All living organisms are made of cells, and each cell contains the same set of one or more chromosomes. A chromosome is a strand of DNA that carries the hereditary information necessary for cell life; it is divided into genes, each of which encodes a particular trait, such as eye colour.
In nature, most organisms have multiple chromosomes in each cell. Organisms whose chromosomes come in pairs are called diploid, and organisms with a single set of chromosomes are called haploid.
Most sexually reproducing organisms are diploid. However, genetic algorithms
normally assume haploid individuals. Each chromosome representing each individual is
of equal length. The genes are the single units or short blocks of adjacent units in the
chromosome. Genes are located at certain locations on the chromosome, which are
called loci. Each gene has a value associated; the values are known as alleles and are
defined by the alphabet set used to create the chromosome. The alphabet for a bit string
representation will be binary, {0,1}. A typical representation of a chromosome will be
as follows:
[0][1][0][1][1][0][0][0][1][1][0][1][1][0]
Each chromosome represents an individual in the population; it is a potential solution to
the problem. Each gene can be thought of as a variable.
A typical genetic algorithm operates in a fairly similar way to the general evolutionary algorithm above: a population is developed and each individual has a corresponding fitness value. Over generations of evolution, the fitness of the population should improve and hopefully an individual is discovered which is optimal for the environment setting. Any of the selection methods listed above can be used with GAs; it has been found that the amount of selection pressure required depends on the population size, the spread of the initial population and the search space.
2.2.2 Genetic Operators
Since the invention of genetic algorithms, there has been a lot of research into
developing new genetic operators, to improve the ability to search for a better solution
in reasonable time.
One-point crossover is the simplest crossover technique: it selects a point at random and then swaps the portions of the strings after that point.
Two-point crossover selects two random points, extracts the segments between the two points and exchanges them. This method has been shown to vastly improve exploration.
N-point crossover performs crossover at n different points in the chromosome, where n is a positive integer less than the total length of the chromosome. N-point crossover has been shown to be very effective on certain problems, while on others it performs only as well as one-point crossover.
Uniform crossover compares point for point, so every index has a 50% chance of being crossed over with the identical index of the other parent. We go through the string index by index, generating a random number between 0 and 1; if the random number is higher than 0.5, we cross over the two genes, otherwise we move on to the next index and compare again.
It has often been suggested that the more crossover points there are, the better the performance; however, the optimal number of crossover points is generally agreed to be half the length of the chromosome. Uniform crossover has failed to show consistently better results and, in many cases, causes too much disruption.
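These bit-string operators are simple to express in code. The following MATLAB sketch shows uniform crossover, exchanging each locus independently with probability 0.5; the function name is illustrative.

function [c1, c2] = uniform_crossover_ga(p1, p2)
% UNIFORM_CROSSOVER_GA  Uniform crossover on two equal-length bit strings.
    swapMask = rand(size(p1)) > 0.5;   % loci where the alleles are exchanged
    c1 = p1;  c2 = p2;
    c1(swapMask) = p2(swapMask);
    c2(swapMask) = p1(swapMask);
end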
2.2.3 Genetic Variation Operator
Mutation is simply a change of a bit or gene. When chromosomes are encoded using the binary alphabet, mutation is just a flip from one to zero or vice versa.
2.3 What is a Genetic Program?
Genetic programming (GP) is an automated method for creating a working computer
program from a high-level statement of a problem. Genetic programming
starts from a high-level statement of “what needs to be done” and automatically creates
a computer program to solve the problem.
Genetic programming is an evolutionary method which branched off from genetic algorithms (GAs). Research into evolutionary computation started in the 1950s and 1960s; many different techniques were developed, all aiming to evolve a population of candidate solutions to a given problem using operators inspired by natural genetic variation and natural selection.
John Koza used the idea of genetic algorithms to evolve Lisp programs, naming the approach Genetic Programming. Koza claimed that GP has the potential to produce programs of the necessary complexity and robustness for general automatic programming. GP also structures each individual as a parse tree, which enables better visualisation and more advanced object handling compared to a string of bits.
Genetic programming approaches are well known to suit non-dynamic, static problems. Such problems have an optimal solution for each setting of the environment; the search space is fixed, despite being very large, and the problem could in principle be solved manually if needed.
Genetic Programming is classed under the fields of Artificial Intelligence (AI) and Machine Learning (ML). However, it is very different from all (or most) other approaches to artificial intelligence, machine learning, neural networks, adaptive systems, reinforcement learning and automated logic, for the following five reasons:
1. Representation: Genetic programming overtly conducts its search for a solution
to the given problem in program space.
2. Role of point-to-point transformations in the search: Genetic programming does
not conduct its search by transforming a single point in the search space into
another single point, but instead transforms a set of points into another set of
points.
3. Role of hill climbing in the search: Genetic programming does not rely
exclusively on greedy hill climbing to conduct its search, but instead allocates a
certain number of trials, in a principled way, to choices that are known to be
inferior.
4. Role of determinism in the search: Genetic programming conducts its search
probabilistically.
5. Underpinnings of the technique: Biologically inspired.
2.3.1 LISP
Lisp has long been regarded as a programming language for Artificial Intelligence [30]. LISP was formulated by AI pioneer John McCarthy in the late 1950s.
LISP's essential data structure is an ordered sequence of elements called a "list". Lists
are essential for AI work because of their flexibility: a programmer need not specify in
advance the number or type of elements in a list. Also, lists can be used to represent an
almost limitless array of things, from expert rules to computer programs to thought
processes to system components. [13]
Programs in Lisp can easily be expressed in the form of a “parse tree”. A parse
tree is a grammatical structure represented as a tree data structure and generated according to a set of rules. The parse trees are the objects the evolutionary algorithm works on; each individual is therefore an independent parse tree. In Lisp, a parse tree is represented as a string in which the operators precede their arguments, e.g. A + B is written as (+ A B).
[Figure 2.1: Tree representation of (+ A B) in Lisp form: the function + is the root node, with the terminals A and B as its children.]
All valid expressions can be represented in the form of a parse tree. The LISP representation can be logically implemented in other languages using array or string objects; both strings and arrays are simple data structures which resemble a list or vector.
2.3.2 Representation – Functions and Terminals
As mentioned previously, genetic programs automatically create new expressions, and each expression is mathematically meaningful. Genetic programs have extra flexibility and explainability over genetic algorithms, as they use a function set and a terminal set whose elements must be predefined. The function set, F = {f1, f2, …, fn}, contains operators such as ‘AND’ and ‘OR’. The terminal set, T = {t1, t2, …, tn}, contains variables and constants, such as real numbers. In a valid parse tree every inner node is a functional node, since every functional node requires at least one child, and all the leaf nodes are terminal nodes.
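To illustrate how a function set and a terminal set define valid parse trees, the sketch below stores a tree as a nested cell array (a functional node is {name, child1, child2, ...}; a terminal is the name of a variable) and evaluates it recursively. The Boolean function set and the variable structure env are assumptions made for this example only, not the representation used by the system in Chapter 4.

function val = eval_tree(node, env)
% EVAL_TREE  Recursively evaluate a parse tree stored as a nested cell array.
% Example: env.x1 = true; env.x2 = false;
%          eval_tree({'AND', 'x1', {'NOT', 'x2'}}, env)   % returns true
    if ~iscell(node)                   % terminal node: look up the variable
        val = env.(node);
        return
    end
    switch node{1}                     % functional node: apply the operator
        case 'AND', val = eval_tree(node{2}, env) & eval_tree(node{3}, env);
        case 'OR',  val = eval_tree(node{2}, env) | eval_tree(node{3}, env);
        case 'NOT', val = ~eval_tree(node{2}, env);
        otherwise,  error('Unknown function: %s', node{1});
    end
end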
2.3.3 Genetic Operators
Unlike GAs, GP has only one well-established crossover operator, standard crossover; one-point, two-point, n-point and uniform crossover are rarely used. Standard crossover is performed by selecting a random node within one parse tree and another random node in a second parse tree, then swapping the two sub-trees.
Crossover can have three kinds of effect: it can be constructive, neutral or destructive. Destructive crossover is undesirable and a lot of research has been done in this area; the methods explored will be discussed later in this chapter.
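A minimal MATLAB sketch of standard sub-tree crossover using the nested cell-array representation of the previous sketch; the helper names are illustrative, and for simplicity the root may also be chosen as a crossover point.

function [c1, c2] = standard_crossover_gp(t1, t2)
% STANDARD_CROSSOVER_GP  Pick a random node in each parent and swap the
% two sub-trees rooted at those nodes.
    paths1 = list_paths(t1, {});
    paths2 = list_paths(t2, {});
    cut1 = paths1{randi(numel(paths1))};
    cut2 = paths2{randi(numel(paths2))};
    sub1 = get_subtree(t1, cut1);
    sub2 = get_subtree(t2, cut2);
    c1 = set_subtree(t1, cut1, sub2);
    c2 = set_subtree(t2, cut2, sub1);
end

function paths = list_paths(node, prefix)
% Enumerate the path (sequence of child indices) to every node in the tree.
    paths = {prefix};
    if iscell(node)
        for i = 2:numel(node)
            paths = [paths, list_paths(node{i}, [prefix, {i}])];
        end
    end
end

function sub = get_subtree(node, path)
% Return the sub-tree reached by following the given path of child indices.
    if isempty(path)
        sub = node;
    else
        sub = get_subtree(node{path{1}}, path(2:end));
    end
end

function node = set_subtree(node, path, sub)
% Replace the sub-tree at the given path and return the modified tree.
    if isempty(path)
        node = sub;
    else
        node{path{1}} = set_subtree(node{path{1}}, path(2:end), sub);
    end
end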
2.3.4 Genetic Variation Operators
Mutation is the only natural variation operator. In the past, two forms of mutation have been tested; both offer their own advantages and disadvantages.
The first, unlike mutation in GAs, randomly selects a location and then replaces the whole sub-tree below that point with a new randomly created one. This method offers a lot of searching capability, as it may create totally new individuals that have never been explored previously.
A popular way of implementing this is to randomly create a temporary tree and, using the idea of standard crossover, select a point in the random tree and a point in the individual selected from the population; the sub-tree from the temporary tree then replaces the sub-tree in the individual, creating the new individual for the next generation. This method is known as “headless chicken crossover” [23].
The other form is to change only the value of the selected node, at random. Because different functional operators may require different numbers of children, when changing a functional node we must swap it only with another functional operator with an identical number of children, otherwise the expression will be invalid. Changing a terminal node is simply done by replacing that node with another variable from the terminal set.
2.4 Difference between GA and GP
Although GA and GP have very similar backgrounds and methodologies, they differ in many ways. GAs use chromosomes of fixed length, whereas GP populations contain chromosomes of different lengths. Usually a maximum depth or size of tree is imposed in GP to avoid memory overflow (caused by bloat). Bloating is when a tree grows in size but much of the added material does not contribute in any way to the overall fitness; such useless material is commonly known as introns. During a run, the size of trees tends to increase and this needs to be controlled. Even if an upper limit on the size of a tree is not imposed, there is an effective upper limit dictated by the finite memory of the machine on which the GP is being executed, so in reality GP has variable size up to some limit.
As genetic programs have a terminal and a function set, they suit a wider range of problems than genetic algorithms. However, some simpler problems are more efficiently solved using a GA, as there are fewer overheads and a more constrained search space: the genes are fixed in location, so the optimum is found by optimising every gene. In GP, genetic material is free to move about, so the search space is significantly larger. The terminal and functional data make GP solutions more expressive and easier to interpret.
2.5 Selective Crossover in GA
Vekaria [28] was inspired by nature, specifically Dawkins' model of evolution and dominance characteristics in nature, and developed a new selective crossover method for genetic algorithms using the idea of evolving gene dominance. The aim was to see whether crossover in a haploid GA run could be evolved so that alleles in one parent compete to be retained in a fitter individual, with correlations between parental and offspring fitnesses providing the means of discovering beneficial alleles. Vekaria described her method of selective crossover as “dominance without diploidy”, as most species are in diploid form.
Each individual is represented by a chromosome vector and a dominance vector of identical length. Each bit has an associated dominance value which accumulates knowledge of what happened in previous generations; this memory is used to bias crossover towards combining successful alleles.
Vekaria claims there are three interdependent properties which work together to
form selective crossover.
• Detection – It detects alleles that were changed during recombination to identify modifications to the candidate solution. [28]
• Correlation – It uses correlations between parental and offspring fitnesses as a means of discovering beneficial alleles. [28]
• Preservation – It preferentially preserves alleles at each locus, during recombination, according to their previous contributions to beneficial changes in fitness. [28]
Vekaria explains that the correlations between parents and offspring, together with the detection of alleles (inheritance of alleles), are used to update the dominance values. The dominance values in turn dictate the inheritance and preservation of allele combinations.
Vekaria's reason for keeping Child 2, despite it having all the genes with lower dominance values, which potentially leads to an individual with low fitness, was to preserve genetic diversity in the early generations, when more exploration than exploitation is required. Child 2 may nevertheless have a higher fitness than its parents, in which case its dominance values are increased in the same way.
Vekaria [28] demonstrated that selective crossover was superior or equal to the two recombination methods she tested against (two-point and uniform crossover). She showed that it has adaptive and self-adaptive features; the fact that it was adaptive supported her hypothesis that the three key properties (detection, correlation and preservation) were used effectively during crossover. Unlike one-point or two-point crossover, selective crossover is not biased against schemata with a high defining length.
2.5.1 Algorithm and Illustrative Example of Selective Crossover in GA
The recombination works as follows:
1. Select two parents
2. Compare the dominance values linearly across the chromosome. The allele that
has a higher dominance value contributes to Child 1 along with the associated
dominance value and Child 2 inherits the allele with the lower dominance value.
If both dominance values are equal then crossover does not occur at that
position.
3. If crossover occurred and the alleles were different, then an exchange is recorded.
4. Once the crossover has been completed, i.e. two new individuals have been created for the next generation, we measure each child's fitness and compare it against both parents' fitnesses. If a child's fitness is greater than the fitness of either parent, the dominance values (of only those genes that were exchanged during crossover) are increased proportionately to the fitness increase. This is done to reflect the alleles' contribution to the fitness increase.
A worked example from Vekaria's thesis is summarised below; in the original figure, the top vector of each individual stores the dominance values and the bottom vector the chromosome of genes.

[Figure 2.2: Recombination with Selective Crossover and Updating Dominance Values. Parent 1 (fitness 0.36) and Parent 2 (fitness 0.30) recombine to produce Child 1 (fitness 0.46), which inherits the higher-dominance alleles, and Child 2 (fitness 0.20), which inherits the lower-dominance alleles. Because Child 1 is fitter than both parents, the dominance values of the genes exchanged into it are increased by the fitness gain of 0.10, e.g. from 0.4 to 0.5 and from 0.9 to 1.0.]
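The recombination and dominance-update steps above can be sketched in MATLAB as follows. The function names are illustrative, and the dominance increase is taken relative to the fitter parent, which is one reading of the update rule consistent with the worked example.

function [c1, c2, d1, d2, exchanged] = selective_crossover_ga(p1, p2, dom1, dom2)
% SELECTIVE_CROSSOVER_GA  At each locus, Child 1 inherits the allele with
% the higher dominance value (and that value); Child 2 inherits the other.
% Where the dominance values are equal, no crossover occurs at that locus.
    c1 = p1;  d1 = dom1;
    c2 = p2;  d2 = dom2;
    takeFrom2 = dom2 > dom1;                         % loci where parent 2 dominates
    c1(takeFrom2) = p2(takeFrom2);   d1(takeFrom2) = dom2(takeFrom2);
    c2(takeFrom2) = p1(takeFrom2);   d2(takeFrom2) = dom1(takeFrom2);
    exchanged = (dom1 ~= dom2) & (p1 ~= p2);         % crossover occurred and alleles differed
end

function dom = update_dominance(dom, exchanged, childFit, parentFits)
% UPDATE_DOMINANCE  If the child is fitter than its parents, increase the
% dominance of the exchanged genes by the fitness gain over the fitter parent.
    gain = childFit - max(parentFits);
    if gain > 0
        dom(exchanged) = dom(exchanged) + gain;
    end
end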
2.6 Selective Crossover in Genetic Programming
Hengpraprohm and Chongstitvatana [11] understood that in simple crossover a good solution can be destroyed by an inappropriate choice of crossover points; as discussed, this is also known as destructive crossover. They proposed a new crossover operator that identifies a good sub-tree by measuring the impact on the fitness of the tree if that sub-tree were removed. It is designed so that the best sub-tree in an individual is always protected.
To help promote constructive crossovers, they find the best and the worst sub-tree. This is achieved by pruning sub-trees one at a time; pruning is done by substituting the sub-tree with a randomly selected terminal from the terminal set. Assuming we are maximising, we re-evaluate the fitness and see how much it has dropped. The best sub-tree is the one that has the highest impact on the fitness value, so that when it is pruned the fitness drops the most; the worst sub-tree is the contrary, the one whose pruning increases the fitness the most or decreases it the least.
All of the functional nodes are tested and, once the best and worst sub-trees in both parents have been identified, crossover is performed by substituting the worst sub-tree of one parent with the best sub-tree of the other, thereby combining the good sub-trees from both parents to produce the offspring. A sketch of the pruning test is given below.
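A MATLAB sketch of the pruning test follows, using the nested cell-array tree representation of Section 2.3; fitnessFcn is an assumed handle mapping a tree to a scalar fitness to be maximised, terminals is a cell array of terminal names, and all function names are illustrative.

function [bestPath, worstPath] = rank_subtrees_by_pruning(tree, fitnessFcn, terminals)
% RANK_SUBTREES_BY_PRUNING  Replace each functional node in turn by a random
% terminal and re-evaluate; the best sub-tree is the one whose removal hurts
% fitness most, the worst the one whose removal helps most (or hurts least).
    paths = functional_paths(tree, {});
    if isempty(paths)                 % the tree is a bare terminal
        bestPath = {};  worstPath = {};
        return
    end
    base = fitnessFcn(tree);
    deltas = zeros(1, numel(paths));
    for i = 1:numel(paths)
        stub = terminals{randi(numel(terminals))};      % random terminal as the prune stub
        deltas(i) = fitnessFcn(set_subtree(tree, paths{i}, stub)) - base;
    end
    [~, bestIdx]  = min(deltas);      % pruning hurt most => most valuable sub-tree
    [~, worstIdx] = max(deltas);      % pruning helped most (or hurt least)
    bestPath  = paths{bestIdx};
    worstPath = paths{worstIdx};
end

function paths = functional_paths(node, prefix)
% Paths to every functional (internal) node of a nested cell-array tree.
    paths = {};
    if iscell(node)
        paths = {prefix};
        for i = 2:numel(node)
            paths = [paths, functional_paths(node{i}, [prefix, {i}])];
        end
    end
end

function node = set_subtree(node, path, sub)
% Replace the sub-tree at the given path (same helper as in Section 2.3.3).
    if isempty(path)
        node = sub;
    else
        node{path{1}} = set_subtree(node{path{1}}, path(2:end), sub);
    end
end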
Hengpraprohm and Chongstitvatana tested the technique on two problems, the
robot arm control and the artificial ant. They excluded the mutation operator to ensure
that only crossover techniques were compared like for like. Results showed that computational effort was reduced compared with simple crossover. Computational effort is measured by the minimum number of candidate evaluations required to find the solution, as defined by Koza [13]. They concluded that, due to the CPU time required for analysing the pruned trees, a further study weighing the computational time against the gain of faster convergence is required.
2.7 Other Adaptive Crossover Techniques for GP
Apart from the methods mentioned above, a few other interesting methods have been proposed, and I highlight them below.
2.7.1 Depth Dependent Crossover
Ito et al [26] aimed to reduce bloating and protect building blocks by introducing Depth
Dependent Crossover. It uses the same notion as simple crossover with an additional
constraint, where the node selection probability is determined by the depth of the tree
structure: shallower nodes are selected more often and deeper nodes less often. This helps to protect the building blocks and reduces the chance of introducing introns, by favouring the swapping of shallower nodes.
2.7.2 Non Destructive Crossover
Ito et al. [26] also aimed to obtain an improved population in every generation, as standard crossover can sometimes produce new individuals with lower fitness values than their parents because crossover has destroyed some of the good building blocks. Non-destructive crossover only keeps offspring that have higher fitness than their parents. Normally the fitness value only has to be greater than that of either parent for the offspring to be retained, although some researchers insist that the offspring have higher fitness than both parents. This crossover method has a tendency to lead to premature convergence; however, it also helps to remove the destructive effects caused by standard crossover. From my point of view, this method essentially becomes hill climbing: as soon as an optimum is found, regardless of whether it is local or global, the search tends to remain in that area of the search space.
2.7.3 Non Destructive Depth Dependent Crossover
After some detailed analysis, Ito et al. [26] decided to combine the depth restriction with
an option to discard the offspring if its fitness is lower than its parents'.
It has not been widely used, as the disadvantage of extra computational time outweighs
the benefit of reduced bloat and fewer destructive crossovers.
2.7.4 Self-Tuning Depth Dependent Crossover
Self-Tuning Depth Dependent Crossover is an extension of Depth Dependent
Crossover by Ito et al. [26]. It uses the same logic, but reduces the randomness of the
crossover points selected. Each individual of the population has a different depth
selection probability, and the depth selection probability is copied across to the next
generation. This crossover method has enhanced the applicability of depth
dependent crossover to various GP problems.
2.7.5 Brood Recombination in GP
Altenberg [1] inspired Tackett to devise a method, called brood recombination [27],
which reduces the destructive effect of the crossover operator. Tackett attempted to model
the observed fact that many animal species produce far more offspring than are
expected to live, and used this idea to remove individuals caused by bad crossover.
A brood is a group of young individuals of a certain species. Tackett created
a "brood" each time crossover was performed. The size of the brood, "N", was defined
by the user. The method is as follows:
1. Pick two parents from the population.
2. Perform random crossover on the parents N times, each time creating a pair of
children as a result of crossover. In this case there are eight children resulting
from N = 4 crossover operations.
3. Evaluate each of the children for fitness. Sort them by fitness. Select the best
two. They are considered the children of the parents. The remainder of the
children are discarded.
Figure 2.3: Brood recombination illustrated (extracted from Banzhaf, Nordin, Keller and Francone [3])
The main disadvantage of this method is evaluation time. GP is usually slow in
performing evaluations, and brood recombination makes 2N evaluations where
standard crossover only requires 2. Later, Altenberg and Tackett
devised a cheaper approach: the brood members are evaluated on only a small portion of the
training set. Tackett's reasoning is that, because the entire brood is the offspring of one set of
parents, selection among the brood members is selection for effective crossovers – good
recombination.
Brood recombination has been found to be less disruptive to good building
blocks. Tackett showed that brood recombination was more effective than standard
crossover on the suite of problems he tested it on. He also found that performance
was only reduced very slightly when he reduced the number of training instances used to
evaluate the brood. All the experiments showed that greater diversity and more efficient
computation time were achievable when brood recombination was added. He also found
that reducing the population size did not really affect the search for an optimal result
negatively.
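The core of brood recombination can be written as a minimal sketch, assuming a crossover function and a fitness function are supplied by the caller (both names are placeholders, not Tackett's implementation):

def brood_recombination(parent1, parent2, crossover, fitness, n=4):
    # Perform n random crossovers, evaluate all 2n children and
    # keep the best two as the offspring of this mating.
    brood = []
    for _ in range(n):
        child_a, child_b = crossover(parent1, parent2)
        brood.extend([child_a, child_b])
    brood.sort(key=fitness, reverse=True)      # assumes higher fitness is better
    return brood[0], brood[1]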
Chapter 3
3 – Selective Crossover Methods using Gene Dominance in
GP
So far, in standard crossover for GPs, there is no way to find out whether the choice
of crossover point is good or bad. Selective Crossover for GAs, Selective Crossover in
GP and Brood Recombination for GP have shown that improvements to the technique of
crossover can be achieved. However, these techniques work better on specific problems
and less well on others, and the extra computational time they need is an issue which
researchers have tended to avoid addressing.
After reading Vekaria’s PhD thesis, 1999; it inspired me to develop a greater
understanding of natural evolution. I adapted her idea, and the problem of finding a
good crossover point in order to avoid destructive crossovers into consideration.
Initially, due to the issue of each operator requiring a different number of children, I
could not figure out exactly how to ensure that the crossover would always create valid
trees. I believe that extra memory leads to a more intelligent crossover technique, which
avoids destructive crossover.
In standard crossover for GP’s, genetic material is freely allowed to move from
one location to another in the genome. Biologically, the genes representing a certain
trait are located in a similar location for all chromosomes of that species. Each loci or
group of locus represents a specific trait; our goal in genetic programming is to obtain a
solution that optimises the problem with a solution that has the best traits.
Uniform crossover compares every gene in one parent with the other gene in the
corresponding location; every gene has a 50% chance of being crossover. Dominance
Crossover
Therefore, we want the best set of genes in every location. Dominance
Crossover will retain location of traits.
3.1 Terminology
Tree – A structure for organizing or classifying data in which every item can be traced
to a single origin through a unique path.
Root node – This is the node at the top of the tree structure, the node has no parents, but
typically has children.
Functional nodes – nodes which contain members of the functional set.
Terminal nodes – nodes which contain members of the terminal set.
Chromosome – also known as the gene vector; each individual has one chromosome.
Dominance Vector – a vector which contains a value for each gene in the chromosome;
the length of the Dominance Vector is identical to the length of that individual's chromosome.
Change Vector – a vector which registers, for each gene, whether a crossover has taken place or not.
3.2 Uniform Crossover
GP uniform crossover is a GP operator inspired by the GA operator of the same name.
As stated in section 2.2.2, GA uniform crossover constructs offspring on a bitwise
basis, copying each allele from each parent with a 50% probability. In a GA, this
operation relies on the fact that all the chromosomes in the population have the same
structure and the same length. Such an assumption does not hold in GP, as the
individuals in the initial population will almost always contain unequal numbers of nodes and be
structured dissimilarly.
Poli and Langdon [22] proposed uniform crossover for GP in 1998. They
proposed the following crossover rules:
• Individual nodes can be swapped if the two nodes are terminals or functions of the same arity.
• If one node is a terminal and the other is a functional node, we simply crossover the sub-tree of the functional node with the terminal node.
• If both are functional nodes, but of different arity, then we simply crossover the whole sub-tree of the corresponding node.
Poli and Langdon found that uniform crossover was a more global operator in terms of
searching, whereas one-point and standard crossover were more biased towards local
searching; they stated that standard GP crossover is biased towards certain types of local
adjustment, typically very close to the leaves.
3.2.1 A revised version of GP Uniform Crossover
After reading Poli and Langdon [22], I felt that their version of GP
uniform crossover was restrictive, as it crosses over the whole sub-tree whenever the two
functional nodes have different arity. If a rare functional node requiring, say, 3
terminals were located on the level immediately after the root node, the chances of
the rest of the tree being compared and swapped over would be very low.
This problem can be overcome by revising the crossover rules. When two
functional nodes with different arity are swapped, we should keep track
of the fact that one of the nodes has extra children.
If the chromosome is represented using a parse tree, we recursively
check that the functional node has been filled with the required number of children
before we traverse back up the tree.
If the chromosome is represented as a LISP-style string, we simply introduce
a variable that stores the number of extra children required in the chromosome and add
them on at the end of the string when one parent has remaining indexes with
nothing to compare against.
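The decision logic of the revised rules can be summarised in a small Python sketch; the ARITY table and the action names are illustrative assumptions, and the surrounding bookkeeping for building the offspring strings is omitted:

ARITY = {'AND': 2, 'OR': 2, 'NOT': 1, 'IF': 3}   # terminals are not listed (arity 0)

def pairing_action(node1, node2):
    # Decide how a pair of aligned nodes is treated by the revised uniform
    # crossover; returns an action name plus the arity difference that would
    # be remembered in the "extra node" / child_diff counter.
    a1, a2 = ARITY.get(node1, 0), ARITY.get(node2, 0)
    if a1 == 0 and a2 == 0:
        return 'swap_terminals', 0
    if a1 == 0 or a2 == 0:
        return 'swap_terminal_with_subtree', 0
    if a1 == a2:
        return 'swap_nodes', 0
    return 'swap_nodes_track_children', a1 - a2  # different arity: record the difference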
3.2.1.1 Example of the revised version of GP Uniform Crossover
[Tree diagrams of Parent 1 and Parent 2; the node labels involved are walked through in the steps below.]
Figure 3.1: Parent Chromosomes for Uniform Crossover
The crossover will be done as follows:
1. "AND" and "OR", the root nodes, both have arity of two, so they can be freely crossed over.
2. "OR" and "AND" also have the same arity, as in step 1.
3. "1" and "3" are both terminals, so they can be freely crossed over.
4. "4" and "NOT" is an example of a terminal compared with a functional node; we cross over "4" with the sub-tree "NOT – 2".
5. "IF" and "OR" are both functional nodes, but with different arity. In the algorithm proposed by Poli and Langdon, the whole sub-trees would be crossed over. My proposed method is to cross over the nodes "IF" and "OR" and introduce a variable, "extra node", to remember that the IF node requires one more child.
6. "AND" and "1", as in step 4.
7. "3" and "4", as in step 3.
8. Now we have "1" from parent 1 with nothing to compare against; hence we know that the child with a value of 1 in the variable "extra node" will need this node to make its tree valid.
For this example, we cross over every other node; the offspring are as follows:
[Tree diagrams of the two offspring, Child 1 and Child 2.]
Figure 3.2: Offspring Chromosomes from Uniform Crossover
3.3 Simple Dominance Selective Crossover
Simple dominance selective crossover works similarly to standard crossover. Because
destructive crossover exists, we aim to reduce the number of destructive
crossovers, as a destructive crossover is perceived to be a waste of computational time.
However, if we fully eliminate destructive crossovers, there is a fairly high possibility of
restricting the search space.
An adaptive approach which informs us how dominant a sub-tree is
within the population provides us with a stronger learning ability. The question of
whether many recessive genes in a sub-tree will degrade the sub-tree as a whole does exist, and this
raises a query about the worth of the method; nevertheless, we predict that a sub-tree which is
more dominant represents a sub-tree which has a fitter impact.
Like standard crossover, we require two parents to produce two offspring. We
will outline the algorithm and provide a step by step example of its workings; a short code sketch follows the steps.
1. After selection of the parents, we establish which parent has the higher fitness.
2. We select a random location in each of the parents. We sum the dominance
values of the selected sub-tree and divide the total by the number of
nodes in the sub-tree.
Below states the equation for calculating the D value which determines a
dominance value for the sub-tree.
$$D = \frac{\sum_{i=s}^{e} \text{Dominance value}_i}{\#\text{ of nodes in sub-tree}} \qquad (3.1)$$
where s is the start node in the sub-tree and e is end node, and # represents
number.
3. We use this value to determine whether crossover should be performed. If the D
value for the sub-tree from the fitter parent is higher than the D value for the
sub-tree from the lesser fit parent, no crossover occurs. In contrast, if the D
value for the sub-tree from the fitter parent is lower than the D value for the
lesser fit parent, we cross over the two sub-trees. During crossover, each child
inherits the corresponding dominance values.
4. If crossover took place, then the change values for the corresponding sub-trees
being crossed over are set to 1.
5. We compute the fitness of the two children. If a child's fitness has increased,
the logical reason is that the genes received through crossover benefited the individual. To
reflect the beneficial genes found, we update the dominance values of the
respective genes that led to an improvement in fitness over the respective
parent. The dominance values are updated by calculating the difference in
fitness between the child and its parent.
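A minimal Python sketch of steps 2 and 3 follows, assuming a prefix-ordered chromosome list and an ARITY table (the thesis implementation is in MATLAB; indices here are 0-based, whereas the worked example below numbers nodes from 1):

ARITY = {'AND': 2, 'OR': 2, 'NOT': 1, 'IF': 3}   # assumed function arities

def subtree_end(chrom, start):
    # Index one past the sub-tree rooted at chrom[start] in a prefix-ordered list.
    needed, i = 1, start
    while needed > 0:
        needed += ARITY.get(chrom[i], 0) - 1
        i += 1
    return i

def d_value(chrom, dominance, start):
    # Equation (3.1): mean dominance value of the sub-tree rooted at `start`.
    end = subtree_end(chrom, start)
    return sum(dominance[start:end]) / (end - start)

def should_swap(fitter, fitter_dom, i, weaker, weaker_dom, j):
    # Crossover only occurs when the lesser fit parent's sub-tree is more dominant.
    return d_value(weaker, weaker_dom, j) > d_value(fitter, fitter_dom, i)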
3.3.1 Example of simple dominance selective crossover algorithm
Parent 1 – Fitness = 20
Gene Vector:      AND   NOT   OR    1     3     IF    4     2     3
Dominance Vector: 0.36  0.81  0.01  0.59  0.11  0.67  0.61  0.63  0.81
Change Vector:    0     0     0     0     0     0     0     0     0

Parent 2 – Fitness = 17
Gene Vector:      IF    OR    AND   2     3     1     1     4
Dominance Vector: 0.53  0.44  0.56  0.29  0.89  0.17  0.96  0.83
Change Vector:    0     0     0     0     0     0     0     0
Say we chose node 3 for crossover in parent 1 and node 2 in parent 2.
We sum the dominance values of the sub-tree, and divide it by the number of nodes in
the sub-tree.
The sub-tree in parent 1 has a D value of 0.237 (0.71/3, to 3 d.p.), and the sub-tree in parent 2 has a D value of
0.47 (2.35/5). Therefore, a crossover should be performed, as the lesser fit parent has a sub-tree
of higher dominance.
Child 1 – Fitness = 19
Gene Vector:      AND   NOT   OR    AND   2     3     1     IF    4     2     3
Dominance Vector: 0.36  0.81  0.44  0.56  0.29  0.89  0.17  0.67  0.61  0.63  0.81
Change Vector:    0     0     1     1     1     1     1     0     0     0     0

Child 2 – Fitness = 19
Gene Vector:      IF    OR    1     3     1     4
Dominance Vector: 0.53  0.01  0.59  0.11  0.96  0.83
Change Vector:    0     1     1     1     0     0
As crossover took place, and at least one of the fitness values improved, we need to
update the dominance values. The updating procedure is executed by calculating the difference in
fitness between child 1 and parent 1, and between child 2 and parent 2. If the difference is
positive, i.e. the child has a higher fitness than its parent, we add the
difference to the dominance values of the genes that were exchanged during crossover.
In this example, the fitness difference between child 1 and parent 1 is -1, so our
algorithm will not update the dominance values in child 1.
The final offspring will be as follows:
Child 1 – Fitness = 19
Gene Vector:      AND   NOT   OR    AND   2     3     1     IF    4     2     3
Dominance Vector: 0.36  0.81  0.44  0.56  0.29  0.89  0.17  0.67  0.61  0.63  0.81
Change Vector:    0     0     1     1     1     1     1     0     0     0     0

Child 2 – Fitness = 19
Gene Vector:      IF    OR    1     3     1     4
Dominance Vector: 0.53  2.01  2.59  2.11  0.96  0.83
Change Vector:    0     1     1     1     0     0
3.3.2 Key Properties
This new crossover technique has been based around the ideas of mirroring natural
evolution in genetic programming, detection of beneficial genes using correlation and
preservation of beneficial genes.
• Detection: It detects sub-trees which have a good impact during crossover on the candidate solution.
• Correlation: It uses differences between parental and offspring fitnesses as a means of discovering beneficial alleles.
• Preservation: It preserves alleles by keeping the more dominant sub-trees with individuals with higher fitness.
3.4 Dominance Selective Crossover
Dominance selective crossover integrates the idea of gene dominance with uniform
crossover, evolving into a new crossover technique designed to adapt to the problem
being optimised.
Dominance selective crossover was designed using the analogy of dominance,
where alleles in a chromosome compete with those on the other chromosome, and the
analogy of evolution of dominance. As previously stated, most species have two sets of
chromosomes. In our design, individuals in the population will only have one
chromosome.
The aim is to see if crossover of genes in a haploid GP can be evolved where
alleles in one parent compete with those on the other parent chosen for crossover. This
is a form of adaptation, as the alleles are competing to be retained in a fitter individual
and the use of correlations between parental and offspring fitnesses would provide the
means of discovering beneficial alleles.
Like most crossover techniques, we use two parents to create two children. We
state the algorithm of the proposed technique below and provide an illustrative
example; a simplified code sketch follows the steps.
1. After selecting the two parents, our first step is to establish the fitter parent. We
name the fitter parent, parent 1 and the lesser fit parent, parent 2. The child
which stores all the genes with the higher dominance values will be named
child 1, and the child that stores all the genes with the lesser dominance values
will be named child 2.
2. We start from the first node and move along the string. Crossover only occurs
when the dominance value of the selected node from the lesser fit parent is
higher than the dominance value of the corresponding node in the fitter parent.
During crossover, both the gene and its dominance value are copied across,
and the corresponding indexes in the change vector are set to 1. Because
not all functional nodes take the same number of children, i.e. they have
different arity, we require a memory variable to store the number of extra
terminals needed to ensure the string remains valid. There are three possible
comparisons, and my rules for crossover are as follows:
• If a terminal node is compared with another terminal node, then we may swap the two alleles.
• If a terminal node is compared with a functional node, we treat the sub-tree rooted at the functional node as a terminal: once evaluated, every sub-tree yields a value, so it is effectively a terminal. When we cross over, we swap the terminal with the sub-tree.
• If we are comparing a functional node with another functional node, we check the number of children each node requires. If the nodes are of different arity, we store the difference in a variable attached to child 1, which we will name "child_diff". Once we have reached the end of one chromosome, we check the value of child_diff; it gives the number of additional (or fewer) children required to complete child 1. We extract child_diff terminal nodes from the parent with remaining nodes and copy them across to the child that requires the extra terminals. If the nodes remaining in that parent do not cover the number of extra terminal nodes required, we move the required number of terminal nodes from the end of one child onto the other child.
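The core position-wise routing of genes can be sketched as below. This is a deliberately simplified Python illustration: it assumes parent 1 is the fitter parent, compares genes index by index, and omits the sub-tree swapping and the child_diff arity bookkeeping described above, so it is not a complete implementation of the operator.

def dominance_selective_crossover(p1, d1, p2, d2):
    # p1/d1: genes and dominance values of the fitter parent;
    # p2/d2: genes and dominance values of the lesser fit parent.
    c1_genes, c1_dom, c1_change = [], [], []
    c2_genes, c2_dom, c2_change = [], [], []
    for g1, v1, g2, v2 in zip(p1, d1, p2, d2):
        crossed = v2 > v1                      # lesser fit parent's gene is more dominant
        hi, lo = ((g2, v2), (g1, v1)) if crossed else ((g1, v1), (g2, v2))
        c1_genes.append(hi[0]); c1_dom.append(hi[1]); c1_change.append(int(crossed))
        c2_genes.append(lo[0]); c2_dom.append(lo[1]); c2_change.append(int(crossed))
    # child 1 collects the more dominant genes, child 2 the less dominant ones
    return (c1_genes, c1_dom, c1_change), (c2_genes, c2_dom, c2_change)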
3.4.1 Example of dominance selective crossover algorithm
Parent 1 – Fitness = 20
Gene Vector:      OR    AND   1     NOT   3     IF    4     2     3
Dominance Vector: 0.36  0.81  0.01  0.59  0.11  0.67  0.61  0.63  0.81
Change Vector:    0     0     0     0     0     0     0     0     0

Parent 2 – Fitness = 17
Gene Vector:      IF    OR    AND   4     2     3     2     1
Dominance Vector: 0.53  0.44  0.56  0.29  0.89  0.17  0.96  0.83
Change Vector:    0     0     0     0     0     0     0     0
Child 1 – Fitness = 21
Gene Vector:      IF    AND   AND   4     2     NOT   3     2     1
Dominance Vector: 0.53  0.81  0.56  0.29  0.89  0.59  0.11  0.96  0.83
Change Vector:    1     0     1     1     1     0     0     1     1

Child 2 – Fitness = 24
Gene Vector:      OR    OR    1     3     IF    4     2     3
Dominance Vector: 0.36  0.44  0.01  0.17  0.67  0.61  0.63  0.81
Change Vector:    1     0     1     0     1     1     1     1
At this point the selective crossover procedure is complete; however, the dominance values still need to be updated.
• Firstly, we compared node 1 (index 1) of both parents. Both were functional nodes, but the required number of children differed, so we stored the difference, "1", in the variable child_diff, as "IF" requires three children and "AND" only requires two.
• Next, "OR" is compared with "OR"; this is a simple crossover decision, as both nodes have the same arity. The next comparison is between a functional node and a terminal node; crossover at this instance means swapping the terminal with the sub-tree rooted at the "AND" node.
• The process continues until we have reached the end of either string. In this example, the last node in parent 2 has nothing to be compared against. We check the value of child_diff and observe that child 1 requires one more node to be complete; parent 2 still has one node left, so we copy the last node from parent 2 to child 1.
The change vector memorises the locations where crossover occurred; this allows us to
update the dominance values. Let us say that child 1 has a fitness value of 21 and child 2
has a fitness of 24. I have deliberately chosen child 2 to have the higher fitness to
demonstrate that, even though child 2 received all the genes with lower dominance, such a
combination of genes may still lead to a better individual solution for the given problem.
We calculate the difference in fitness between child 1 and parent 1, and also between child 2 and
parent 2. The difference is added onto the dominance values, so the updated
offspring will be as follows:
Child 1 – Fitness = 21
Gene Vector:      IF    OR    AND   4     2     NOT   3     2     1
Dominance Vector: 1.53  1.81  1.56  1.29  1.89  1.59  1.11  1.96  1.83
Change Vector:    1     0     1     1     1     0     0     1     1

Child 2 – Fitness = 24
Gene Vector:      AND   OR    1     3     IF    4     2     3
Dominance Vector: 7.36  7.44  7.01  7.17  7.67  7.61  7.63  7.81
Change Vector:    1     0     1     0     1     1     1     1
3.4.2 Key Properties
This new crossover technique has been based around the ideas of mirroring natural
evolution in genetic programming, detection of beneficial genes using correlation and
preservation of beneficial genes.
• Detection: It detects nodes that were changed during crossover to identify modifications made to the candidate solution.
• Correlation: It uses differences between parental and offspring fitnesses as a means of discovering beneficial alleles.
• Preservation of beneficial genes: After discovering a beneficial gene or a group of beneficial genes, their advantage over the less beneficial genes is always desired. Since we will attempt to merge all the better genes together to form possibly the best solution, we initially preserve the better genes found in each generation by ensuring that they are kept.
• Mirroring natural evolution: In the past, the idea of maintaining the genetic structure of a chromosome in genetic programming has never been explored. Selective Crossover compares the whole individual with another individual within the population, effectively going through a point for point comparison.
Chapter 4
4 – Implementation of Proposed Crossover Techniques and GP System
4.1 Matlab
The experiments will be implemented in MATLAB® version 6.5 (R13). Matlab is a
platform independent language which integrates computation, visualisation and
programming, and provides an easy to use interface where problems and solutions are
expressed in mathematical notation.
Matlab is a high-level matrix/array language with control flow statements,
functions, data structures, input/output, and object-oriented programming features.
An individual of the population will closely follow the representation used in
LISP, as discussed in Chapter 2.
4.2 System Components
4.2.1 Population
Each member of the population is represented as an object consisting of a string vector
storing the chromosome; an associated dominance vector, change vector and fitness
value. During the process of crossover, mutation and selection, all the attributes of the
individual will have to be accessible.
After creating a new population, the old population is replaced with the new
population for the new generation.
4.2.2 Initialisation
Each member of the first population is created randomly from the set of available
functions and terminals. I have designed it so that most individuals are not at maximum
length, and the chances of very short individuals are also very low.
The Dominance values are randomly initialised with values that are initially
constrained to lie in the range [0, 1].
The Change vector will be initialised to 0. 0 signifies crossover has not occurred
and 1 signifies that a crossover has occurred at that location.
To ensure that the initial population is as random as possible, we have designed
it such that the size of an individual may be as small as 3 nodes or up to the maximum
length. Each node has a 50% chance of being a functional or terminal node until enough
functional nodes have been added that the remaining nodes must be terminals to
create a complete individual of maximum length.
4.2.3 Standard Crossover
Standard Crossover acts on the chromosome as described in Chapter 2. The dominance
and change vectors are not used.
4.2.4 Simple Selective Crossover
Simple Selective Crossover acts on the chromosome as described in Section 3.3. The
dominance and change vectors are modified accordingly in every generation.
4.2.5 Dominance Selective Crossover
Dominance Selective Crossover acts on the chromosome as described in Section 3.4.
The dominance and change vectors are modified accordingly in every generation.
4.2.6 Uniform Crossover
Uniform Crossover acts on the chromosome as described in section 3.2. The dominance
and change vectors are not used.
4.2.7 Mutation
Mutation acts on the chromosome as stated in Chapter 2. The genes that have mutated
get assigned a new random dominance value between 0 and 1; if the mutation is
beneficial, this will be reflected in later generations when the dominance value is
updated.
4.2.8 Selection Method
I have combined fitness proportionate selection with elitism in my design.
The top 10% of the population will always be copied across to the next
generation; they remain available for selection as well.
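A sketch of this combined scheme, assuming non-negative fitness values (function and variable names are illustrative, not the thesis code):

import random

def next_generation_parents(population, fitnesses, elite_frac=0.10):
    # Copy the top 10% straight into the next generation (elitism) and
    # select the remainder fitness-proportionately (roulette wheel);
    # the elite stay available for selection as well.
    ranked = sorted(zip(population, fitnesses), key=lambda p: p[1], reverse=True)
    n_elite = max(1, int(elite_frac * len(population)))
    elite = [ind for ind, _ in ranked[:n_elite]]
    total = sum(fitnesses)
    def roulette():
        pick, acc = random.uniform(0, total), 0.0
        for ind, fit in zip(population, fitnesses):
            acc += fit
            if acc >= pick:
                return ind
        return population[-1]
    selected = [roulette() for _ in range(len(population) - n_elite)]
    return elite + selected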
4.2.9 Evaluation
Evaluation of an individual is the same regardless of what genetic operations have been
performed on the individual. The dominance vectors do not contribute to an
individual's fitness and thus are not used to evaluate an individual. Each problem will
have its fitness evaluation explicitly defined.
4.2.10 Updating Dominance Values
The genes which have been crossed over and contributed to increasing the fitness
value in comparison to the parent receive an increase in dominance value. Child 1's
fitness is compared with Parent 1's and, similarly, Child 2's fitness is compared with
Parent 2's.
If the child's fitness is greater than the fitness of the parent, the dominance
values of the genes that were exchanged during crossover are increased proportionately
to the fitness increase. This is done to reflect those alleles' contribution to the fitness
increase.
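A minimal sketch of the update rule, assuming list-based dominance and change vectors:

def update_dominance(child_fitness, parent_fitness, dominance, change):
    # Increase the dominance value of every gene exchanged during crossover
    # (change flag == 1) by the fitness gain of the child over its parent.
    gain = child_fitness - parent_fitness
    if gain <= 0:
        return dominance                                  # no improvement, no update
    return [d + gain if c == 1 else d for d, c in zip(dominance, change)]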
4.2.11 Termination Condition
The algorithm will run until a termination condition is met. The termination condition
can be either of the following:
Limit the number of generations, so the algorithm stops running when it has
reached the maximum number of generations.
Stop the algorithm once convergence is reached: once the global optimum is found,
the algorithm stops. We know that the algorithm has converged by observing that a
globally optimal solution exists in the current generation and the generation immediately before it.
4.2.12 Entity relation diagram of all the main functions
[Flow diagram with the following components: Population; Fitness Evaluation; Calculate Statistical Values; Select Parent(s); Standard Crossover; Uniform Crossover; Selective Crossover; One Point Selective Crossover; Mutation; Mutation with Dominance; Direct Copy; Fitness Evaluation; Update Dominance Values.]
Figure 4.1: Flow Diagram showing the basic procedure of the Genetic Program
In each evaluation run of the Genetic Program, only one crossover method can
be selected and used. If standard crossover was selected, then the other options to create
the next generation are mutation and direct copy of the parent into the next generation.
If selective crossover or one-point selective crossover was chosen, then the options
become mutation with dominance and direct copy as well as the crossover method
chosen.
4.3 Theoretical Advantages over standard Crossover and uniform
crossover
We believe that both dominance selective crossover and simple selective crossover will bring
advantages over standard crossover and uniform crossover. The purpose of the
crossover operator is to explore the search space with an element of randomness,
discovering fitter individuals.
Although selection pressure aids evolution by allowing fitter individuals to be
selected more often to produce offspring, if bad offspring are produced as a result of the
genetic operations, this is regarded as ineffective and a waste of
computational power.
Dominance selective crossover and simple selective crossover use dominance
values, which have the detection, preservation and correlation features, to decide whether a
crossover will be beneficial to the new offspring. The chances of destructive crossover
are therefore predicted to be comparatively lower than with standard crossover.
Uniform crossover is known to be a better global search operator. As dominance
crossover is an off-shoot of uniform crossover, I believe that both operators will perform
equally well on simple problems. On more complex problems, such as problems
with deception and epistasis, we should find that the adaptive features aid
exploration, hence finding a more efficient balance between exploration and
exploitation, as dominance crossover limits the search space significantly as the
number of generations increases.
Chapter 5
5 - The Experiments
In the previous chapter, a detailed explanation of the two proposed methods was
provided; this chapter describes the experiments used to test the effectiveness of the new crossover techniques.
As my design was inspired by Vekaria's work, I will compare my findings
with hers to identify whether the same benefits that were found for genetic
algorithms also apply to genetic programs. Vekaria tested her implementation of
selective crossover on 5 well-studied benchmark problems. We have chosen to take
2 of these problems and implement them using a genetic program. The problems selected
were:
• One Max problem – a linear fitness landscape with no epistasis and no deception. Fixed search space.
• L-MaxSAT problems – many flat regions in the fitness landscape with tuneable epistasis.
As an additional bonus, it was interesting to test the proposed crossover methods on a
well known genetic programming problem. This also provides the opportunity
to compare and contrast the proposed revised version of uniform crossover.
5.1 Statistical Hypothesis
As with all scientific experiments, we wish to perform some statistical analysis on the
results to determine whether there is a difference in performance of
any sort; this is done by testing whether the means of the two populations are the
same when tackling the same problem under identical environmental settings.
Statistical hypothesis testing will be done using the t-test. Normally, a t-test is
carried out at the 5% significance level. The significance level is defined as the
probability of a false rejection of the null hypothesis in a statistical test; therefore, it
reflects residual uncertainty in our conclusions. We need to state a null and an alternative
hypothesis for the test.
The null hypothesis H0 and the alternative hypothesis H1 used to compare
selective crossover with standard crossover, and one-point selective crossover with
standard crossover, are defined as:
$H_0 : \mu_{sel} = \mu_{st}$
$H_1 : \mu_{sel} \neq \mu_{st}$
where $\mu_{sel}$ is the mean performance when selective crossover or one-point selective
crossover was used and $\mu_{st}$ is the mean performance when standard crossover was
used.
In words, the null hypothesis states that selective crossover / one-point
selective crossover performs equally as well as standard crossover. The alternative
hypothesis states that the mean performances differ, i.e. that selective crossover or
one-point selective crossover does not perform equally as well as standard crossover.
Since each run is independent of another run, we will use the two-sample t-test. The paired-samples t-test was considered, but it is not suitable: the initial
generation is generated independently on every run, so we are not measuring the same population
before and after a treatment. The paired t-test is also known as the before-and-after test.
The results of this test will be used in the discussion of experiments in the next
chapter.
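For illustration, the test can be run with SciPy's two-sample t-test; the data below are placeholder values, not experimental results:

from scipy import stats

# evaluations_selective / evaluations_standard would hold the number of
# evaluations needed on each independent run (hypothetical placeholder data).
evaluations_selective = [2083, 2150, 1990]
evaluations_standard = [3379, 3410, 3290]

t_stat, p_value = stats.ttest_ind(evaluations_selective, evaluations_standard)
reject_h0 = p_value < 0.05                      # 5% significance level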
5.2 Experimental Design
As I have decided to compare my results with Vekaria's findings, to conduct a fair
comparison I think it is logical to use parameter settings identical to hers. I also
intend to conduct other experiments with different parameter settings to compare
performance.
The parameter settings used by Vekaria are as follows:
Crossover rate: 0.6
Mutation rate: 0.01
Runs: 50
The GP experiments will use the same crossover and mutation settings. However, we
strongly feel that 30 runs will be sufficient. If more computational resources were
available, then more runs would be done, as the larger the sample, the smaller the chance of an
error or undesired result affecting the overall results.
5.3 Selection of Published GA Experiments
Vekaria [28] defines performance as a measure of the number of evaluations taken to
find the global solution. If the optimal solution is unknown, or not found after x
generations, then the best solution found is used to measure the performance. Vekaria
conducted 50 runs for every parameter setting.
Out of the 5 problems that Vekaria simulated, I have chosen to test my
crossover techniques with the One Max and L-MaxSAT problems. This is because the
One Max problem does not contain epistasis or deception, hence we should be able to
identify any difference in performance quite easily. The L-MaxSAT problem is
affected by epistasis; the level of epistasis is directly proportional to the number of
clauses. Vekaria [28] found that selective crossover had an advantage in performance
over two-point and uniform crossover. I wish to test whether the same findings hold in a GP
system with selective crossover.
The 2 problems chosen from the problem suite Vekaria used are briefly summarised below.
5.3.1. One Max
The One Max problem is the easiest function for a GA. It is a
simple bit counting problem (Ackley 1987), where each bit that is set to '1' in the
chromosome contributes an equal amount to the fitness, thus all contributions of 1 bits
are good schemata which makes the function linear with no epistasis or deception.
Vekaria used a population of 100.
5.3.2 L-MaxSAT
The Boolean satisfiability (SAT) problem is a well-known constrained satisfaction
problem, which consists of variables or negated variables that are combined together
to form clauses using ‘and’ (^) and ‘or’ (V). Universally, SAT problems are presented
in conjunctive normal form.
The goal is to find an assignment of 0 and 1 values to the variables such that
the Boolean expression is true. The L-Max SAT problems are encoded as binary bit
chromosomes, where each bit is a Boolean variable.
The random L-MaxSAT problem generator [6, 16] is a Boolean expression
generator. Random problems are created in conjunctive normal form, subject to three
parameters: V (number of variables), C (number of clauses) and L (length of the
clauses). Each clause is of fixed length L and is generated by randomly selecting L of
the variables, each variable having an equal chance of being selected. By increasing the
number of clauses, the number of occurrences of a particular variable across the clauses will also
increase; each variable occurs, on average, in CL/V clauses. Multiple
occurrences of a particular variable lead to an increase in epistasis and create more
constraints in finding an assignment of 0 and 1 values to the variables such that the
Boolean expression is true.
This experiment will test how the different crossover
methods react to the change in epistasis. The experiment will use the same parameters
as Vekaria [28], who extracted the parameter settings from De Jong, Potter and Spears
[6]. We will keep V and L fixed, as they do not directly contribute to epistasis. By
changing C, we can vary the amount of epistasis. The number of variables is set to
100 and the clause length is set to 3. The number of clauses C varies from 200 (low
epistasis), to 1200 (medium epistasis), to 2400 (high epistasis). The chromosome
length will be a maximum of 199, as there will be 100 variables, and the population size
will be set to 100. Vekaria's GA ran for 600 generations, as there is no guarantee that a
global solution exists.
5.4 Expression of Experiment for GP
As stated, my aims for this thesis are: firstly, to design and implement two new
adaptive crossover operators; secondly, to compare and contrast dominance
selective crossover in GP with selective crossover in GA (Vekaria's work); thirdly, to
compare the performance of the different operators; and lastly, to design and
implement the proposed revised version of uniform crossover.
In order to compare and contrast selective crossover in GP with selective
crossover in GA, we will have to run experiments which compute the number of
evaluations it takes to find the global solution. To test for significance in performance,
we will use the student t-test which Vekaria also used.
By running the problems with other crossover operators, using identical
parameter settings, we can empirically observe its performance in relation to the other
crossover techniques.
In order to further compare the performance, I will use some statistical
measures, such as time taken to perform the simulation, the mean fitness, the variance
of the fitness values, and the mean number of generations before it converges to best
solution found.
5.4.1 One Max
To observe performance, I will calculate the mean fitness of each population and the
variance in fitness of each population. A graph will be plotted to display the best
individual found up to each generation.
5.4.1.1 Functional Set
The functional set will consist of a "together" operator, which takes 2 children. This
operator has no logical operational meaning; it is simply used so that we can represent
this problem using a tree based structure.
5.4.1.2 Terminal Set
The terminal set will contain all the variables. Each variable is equivalent to a bit in
the chromosome. The occurrence of a variable in an individual signifies that the value
of that bit is 1. If the variable is not present in the chromosome (or, pictorially, in the tree),
the bit is assigned the value 0.
5.4.1.3 Fitness Function
The fitness function is defined as:
$$f(\text{chromosome}) = \sum_{i=0}^{l} x_i \qquad (5.1)$$
where l is the length of the chromosome.
5.4.1.4 Termination Condition
We know the global solution is 50, which is the size of the terminal set. The
evolutionary process will terminate once the global solution has been found. The
generation at which the global solution was found will be recorded.
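A sketch of the fitness evaluation under this encoding (the terminal set of 50 integer-coded variables is an assumption made for illustration):

TERMINALS = set(range(1, 51))                   # the 50 variables used in the experiment

def one_max_fitness(chrom):
    # Each variable that occurs anywhere in the chromosome sets its bit to 1,
    # so the fitness is the number of distinct terminals present; the global
    # optimum is therefore 50. Function nodes ("together") are simply ignored.
    return len(TERMINALS.intersection(chrom))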
5.4.2 Random L – Max SAT
We will use the same performance measures as for the One Max problem:
plots of the mean and variance in fitness of each generation.
5.4.2.1 Functional Set
Like the One Max Problem, the functional set will consist of a "together" operator,
which takes 2 children. This operator has no logical operational meaning; it is simply
used so that we can represent this problem using a tree based structure.
5.4.2.2 Terminal Set
The terminal set will have 100 independent variables.
5.4.2.3 Fitness Function
The fitness function for the L-Max SAT Problem is defined as:
$$f(\text{chromosome}) = \frac{1}{C}\sum_{i=1}^{C} f(\text{clause}_i) \qquad (5.2)$$
where the Boolean expression consists of C clauses, and $f(\text{clause}_i)$ is the fitness contribution of
each clause: 1 if the clause is satisfied and 0 otherwise. Since the problem is
randomly generated using the random problem generator, there is no guarantee that
such an assignment to the expression exists. The difficulty of a problem increases as a
function of the number of Boolean variables and the complexity of the Boolean
expression.
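A sketch of the clause-counting fitness of equation (5.2); the clause encoding as lists of signed variable indices and the 0/1 assignment mapping are assumptions made for illustration:

def maxsat_fitness(assignment, clauses):
    # `assignment` maps each variable index to 0/1 (e.g. 1 if the variable
    # appears in the chromosome, 0 otherwise); each clause is a list of signed
    # variable indices, e.g. [3, -17, 42] meaning (x3 OR NOT x17 OR x42).
    satisfied = 0
    for clause in clauses:
        if any(assignment[abs(lit)] == (1 if lit > 0 else 0) for lit in clause):
            satisfied += 1
    return satisfied / len(clauses)             # fraction of satisfied clauses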
5.4.2.4 Termination Condition
Since the problem generator randomly generates problems on demand, there is no
guarantee that an assignment to the expression exists. Therefore, Vekaria decided to
run her GA for 600 generations, I will also run my GP for 600 generations.
5.5 The 6 Boolean Multiplexer
The 6 Boolean Multiplexer is a well documented machine learning problem that has
been effectively solved using genetic programming; Koza [13] showed that genetic
programming is well suited to the Boolean multiplexer problems (6-Mult and 11-Mult). The 11-Mult is a scaled up version of the 6-Mult. The problem has a finite
search space and the test suite is also finite.
A 6 bit Boolean multiplexer has 6 Boolean-valued terminals,
(a0, a1, d0, d1, d2, d3). It has 4 data registers, (d0, d1, d2, d3), of binary values and 2
binary valued address lines, (a0, a1). In tandem the 2 address lines can encode the binary
values "00", "01", "10" and "11", which translate to the decimal addresses 0 through 3.
The task of a Boolean multiplexer is to decode an address encoded in binary and
return the binary data value of the register at that address. The solution found is
regarded as a program, as we want to find a logical rule that will return the value of
the d terminal that is addressed by the two a terminals. E.g., if a0 = 0 and a1 = 1, the
address is 01 and the answer is the value of d1.
5.5.1 Functional Set
The functional set will consist of four operators (AND, OR, NOT, IF). The first three
functions are standard logical operators; each operator takes a given number of
operands, in other words the number of arguments required for it to function. We will
briefly state the definitions below:
Logical ‘AND’ takes two operands, the symbol ‘^’ is used to represent this
function. A logical AND truth table can be found located in Appendix B, Table 1.
Logical ‘OR’ which takes two operands, the symbol ‘v’ is used to represent
this function. A logical OR truth table can be found located in Appendix B, Table 2.
Logical ‘NOT’ which takes one operand, the symbol ‘~’ is used to represent
this function. A logical NOT truth table can be found located in Appendix B, Table 3.
IF which takes three operands, the symbol ‘i’ is used to represent this function.
The function works as follows: For the string (IF X Y Z), the first argument X is
evaluated, if X is true, then Y, the second argument is evaluated; otherwise if X is not
true, then Z, the third argument is evaluated.
5.5.2 Terminal Set
The terminal set will consist of the 4 data registers and 2 binary address lines. I used
the following values to represent the variables:
1 = a0, 2 = a1, 3 = d0, 4 = d1, 5 = d2, 6 = d3
5.5.3 Fitness Function
The fitness of a program (a solution) is the number of correct answers over all 2^6 = 64
possible fitness cases. The fitness function for the 6 Boolean Multiplexer is defined
as:
$$f(\text{chromosome}) = \sum_{i=1}^{64} f(x_i) \qquad (5.3)$$
where the fitness of the chromosome is the number of cases it answers correctly: $f(x_i)$ returns 1 if
the program correctly answers the $i$th fitness case, and 0 otherwise.
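A sketch of how the 64 fitness cases can be enumerated and scored; the candidate `program` is assumed to be any callable returning 0 or 1 (this is an illustration, not the thesis evaluator):

from itertools import product

def multiplexer_target(a0, a1, d):
    # The correct 6-multiplexer output: the address lines a0 a1 (a0 is the
    # high bit, so a0 = 0, a1 = 1 gives address 01 = 1) select one data register.
    return d[2 * a0 + a1]

def multiplexer_fitness(program):
    # Equation (5.3): number of the 2^6 = 64 fitness cases answered correctly.
    correct = 0
    for a0, a1, d0, d1, d2, d3 in product((0, 1), repeat=6):
        if program(a0, a1, d0, d1, d2, d3) == multiplexer_target(a0, a1, (d0, d1, d2, d3)):
            correct += 1
    return correct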
5.5.4 Termination Condition
The 6 Boolean multiplexer also has a known global solution; the best solution will
have a fitness of 64. This demonstrates the solution can decode every address encoded
in binary correctly.
5.6 Testing
To ensure that all the problems tackled work according to their specifications,
I have rigorously tested each function and the system. The main testing procedures I
have used are functional testing, integration testing and system testing.
5.6.1 Test Plan
5.6.1.1 Functional Testing
Each function has been individually tested using drivers and self-created input data
where necessary. Each line in the function was printed so that I could track the values stored
in each variable at each line. Manually working out exactly what the expected value
should be after each computation, and comparing it with the actual result of the computation,
provided confirmation that the function was operating as it should.
Apart from testing whether the function makes the correct computations, certain
functions required a check of randomness. An example of this is the function
that generates the initial population. I ran the function 50 times to ensure we got a
spread of individuals of different lengths and structures.
5.6.1.2 Integration Testing
Many of the functions are called within another function; we need to ensure that when
the functions are joined together, the expected outcome is produced. This is carried out in a
similar fashion to functional testing, displaying each line of the computation so we
can track the values manually.
5.6.1.3 System Testing
Once we have written every function and are confident that each function performs as it
should, we test the system as a whole by inserting break points and printing out the
values of certain variables so that we can track the location of any error that occurs.
Chapter 6 – Experiments and Analysis of Results
After running the experiments with the parameters set, the performance evaluation
will be presented in graphical, statistical numeric and tabular form. Performance will
be measured by observing the number of generations before convergence to a solution
is established.
A discussion about the behaviour after convergence will also be found later in
the chapter. All experiments have statistical performance measure indicators, and a
graph will be displayed at the end of each run. This provides general information
about the diversity of the population as well as the learning rate of improving the
solution for the given problem.
6.1 Interpretation of the graphs
All the graphs will be displayed in the following format.
• The first graph (top left) shows the maximum fitness up till that generation.
• The second graph (top right) shows the mean fitness of the generation.
• The third graph (bottom left) shows the variance of the fitness of that generation.
Figure 6.1: An example of the results in graphical form
6.2 Results
6.2.1 One Max
As previously stated in Chapter 5, the One Max Problem in our experiment has a
global solution of 50. As the terminal set has 50 members, the optimum is for every
member of the terminal set to occur at least once in the solution.
For this problem, as there is a known global solution, Vekaria set her GA to
run until the global solution was found and counted the number of evaluations that were
needed. We have set our GP to also run until the
global solution is found.
As we are analysing the crossover techniques, it is interesting to
observe what happens after the global solution is found. 30 runs with the
maximum number of generations set to 3000 were conducted. We were also interested in analysing
the techniques in their early generations (i.e. the first 100 generations).
Standard crossover and simple dominance crossover showed mainly mixed
results; there were signs of convergence towards a solution, but not the optimum.
Dominance crossover showed that it was consistently learning in the first 50–70
generations and would then learn slowly; this is shown by the reduction in variance
and small increases in the mean of each population. Uniform crossover is discussed below.
Table 1 in Appendix A displays the number of generations until the optimal
solution is found. Visually, we can see that uniform crossover outperforms all the
other crossover techniques.
After calculating the statistical means, Figure 6.2 shows that simple selective
crossover required slightly more generations than standard crossover. After
conducting a 2-sample t-test at the 5% significance level with the hypotheses stated in
section 5.1, we found that the two crossover techniques were not significantly
different in performance: we obtained a t-statistic of 0.64 with 56 degrees of
freedom, which is less than 2.003; therefore we do not reject the null hypothesis.
Conducting further 2-sample t-tests at the 5% significance level, we find that
dominance selective crossover performs significantly differently from the other
3 operators, and that uniform crossover performs significantly better. Section C.2 in
Appendix C details the values for each t-test.
Crossover technique              Mean Evaluations
Standard Crossover               3379 (350)
Simple Selective Crossover       3546 (328)
Uniform Crossover                316.9 (14.1)
Dominance Selective Crossover    2083 (136)
Table 6.1: Results for the One Max Problem. Mean number of evaluations taken to
obtain the global solution. The standard deviation is shown in brackets.
[Bar chart: mean number of generations until the optimal solution is found — Standard 3379, Simple Selective 3546, Uniform 316.9, Dominance 2083.]
Figure 6.2: Comparing the mean number of generations until the optimal solution is found
Figures 1, 2 and 3 in Appendix A are results from runs 1, 6 and 14 respectively,
using standard crossover. They show that standard crossover has a large degree of
randomness attached; it managed to find the optimal solution in 55 generations at its
best and in 6169 generations at its worst. Figure 2 shows that it was learning at a
constant rate, whereas Figures 1 and 3 do not show it consistently improving
its overall fitness every generation. Roughly between the 100th and 400th generation,
in figure 1, we can see that the fittest individual in that generation dropped
dramatically; this is mainly due to destructive crossover.
Figures 4 and 5 in Appendix A show results from simple selective crossover.
Figure 4 shows one extreme case, where the global solution was found at
the 19th generation; Figure 5 shows the other extreme, where it took 6009 generations.
In the early generations of Figure 5, roughly between the 70th and 90th, a large drop
occurred, for similar reasons as in Figure 1.
In general, simple selective crossover is not useful; there is no significant
difference in performance, as confirmed by a t-test, and it is computationally
more expensive than standard crossover, as confirmed by conducting 300 runs with
identical parameter settings. Figure 6.3 shows that it is computationally more
expensive. By inspection of the graphs, it is near impossible to separate the two sets
of runs produced by the different operators.
[Bar chart: mean run time for 3000 generations for the four crossover techniques (Standard, Simple Selective, Uniform, Dominance).]
Figure 6.3: Mean CPU time for 3000 generations using the different crossover techniques
The box plots in Appendix C, figures C 3.1, C 3.2, C 3.3 and C 3.4, show that
uniform crossover and dominance selective crossover are stable in terms of CPU time,
as outliers do not exist.
Uniform crossover has been shown to perform significantly better than all the other
operators, as shown in Table 6.1. Each run is near identical; figure 6 shows a typical
run, where the mean fitness either increases or is nearly as good as its previous value. The
learning rate is stable throughout the run, though this is not to say that it always finds the
global solution at the same generation; the range of solutions found is between 131 and
461 generations.
Dominance selective crossover is also a very stable learning operator. The
majority of the solutions are found around the 2000th generation. Unlike uniform
crossover, which starts learning from the start, dominance selective crossover tends to
drop the fitness of the population first, and then slowly and steadily increases it. In the
earlier generations it has a tendency to maintain a slightly more diverse population;
this is reflected in the variance plots of each run. Figure 7 shows that learning can start
from the very early generations; in this case, it found the global solution at the
33rd generation. Figure 8 shows a typical run using dominance selective crossover.
6.2.2 Random L – Max SAT
As previously mentioned in section 5.3.2, there is no guarantee that a randomly
generated boolean expression has a global solution.
Under low epistasis, the mean solution for uniform crossover is a little
misleading: during one of the runs we encountered an unsolvable problem and
therefore obtained a maximum fitness of 0 for that run. If we exclude that particular run,
under the conditions of low epistasis uniform crossover has a mean fitness of 98.966
and variance of 1.085.
The results show that it is statistically not possible to identify the better
performer in this experiment. T-tests were conducted to test significance at the 5% level. All
tests showed that we should not reject the null hypothesis; hence, the crossover
operators perform equally.
By inspection of the graphs, we cannot distinguish any special features of
the individual crossover operators.
[Line chart: mean of the maximum fitness after 600 generations at low, medium and high epistasis for Standard, Simple Selective, Uniform and Dominance Selective Crossover.]
Figure 6.4: Comparing the mean of the maximum fitness over 30 runs
                               Low Epistasis     Medium Epistasis   High Epistasis
                               Mean Solution     Mean Solution      Mean Solution
Standard Crossover             99.833 (0.461)    99.3 (0.837)       99.167 (1.085)
Simple Selective Crossover     99.567 (1.135)    99.6 (0.675)       99.367 (0.890)
Uniform Crossover              95.67 (18.10)¹    99.433 (1.357)     99.133 (1.008)
Dominance Crossover            99.467 (0.776)    99.333 (1.295)     99.467 (0.900)
Table 6.2: Results for the L-Max SAT Problem. Maximum fitness after 600 generations. The standard deviation is shown in brackets.
¹ When the run which encountered the unsolvable problem was removed, the mean fitness was 98.966 with variance 1.085.
[Bar chart: mean CPU time for 600 generations at low, medium and high levels of epistasis for the four crossover techniques.]
Figure 6.5: Mean CPU time for 600 generations, using the 4 crossover techniques.
After conducting a series of t-tests, we find that dominance crossover requires
significantly more CPU time under low and medium levels of epistasis. However, it
required significantly less CPU time than standard and simple selective
crossover. Section C.5 in Appendix C shows tables of the t statistics from the 2-sample t-tests
that were done.
6.3 Comparison with Dominance Selective Crossover for GA
Generally, Vekaria [28] concluded that selective crossover in a genetic algorithm
required fewer evaluations than two-point and uniform crossover.
6.3.1 One Max
Vekaria [28] found that there was no significant difference in performance across the three
crossover operators, even though selective crossover required only 3557 evaluations
compared to 3686 for uniform crossover and 3733 for two-point crossover.
In my experiments, results showed that uniform crossover performed
significantly better, followed by dominance selective crossover; standard and simple
selective crossover were equal in performance.
6.3.2 L-Max SAT
On the contrary, Vekaria [28] found that at all levels of epistasis selective crossover
performed better than two-point and uniform crossover by taking a smaller number of
evaluations. In particular, the advantage of selective crossover grew as the level of
epistasis increased; she therefore demonstrated that selective
crossover works well on epistatic problems.
My experiments did not show any difference in performance across the four
operators, and the level of epistasis did not affect performance in terms of the number of
generations required.
6.4 The Multiplexer
As stated, I chose to tackle this problem with the goal of testing the revised version of
uniform crossover proposed in section 3.2.1 and analysing the effect of
the two new operators on a GP based problem. Due to the level of computational effort
required, we were unable to find the global solution of 64.
Empirically, the graphs in Appendix A (figures A.9, A.10,
A.11 and A.12) show that both uniform and dominance selective crossover had a more
stable upward climbing trend. The mean of the population converged more quickly when
using either dominance selective crossover or uniform crossover compared to the
other two methods, which seemed to be still learning.
Computational time again showed a large difference between
the longest and the shortest time taken for 50 generations when using standard
crossover.
To fully compare and contrast my proposed revised version of uniform
crossover, more experiments would have to be conducted. However, my main aim of
showing that both new selective crossover methods work for a more complex
optimisation problem has been met.
Chapter 7 – Conclusion
The goals of this thesis were to design, implement and evaluate the new selective
crossover techniques proposed, to compare my findings with Vekaria's [28] work, and,
lastly, to compare the operators' performance within the GP system. The inspiration
drawn from natural evolution and from Vekaria's PhD thesis motivated me to look deeper into
adaptive crossover techniques.
My findings differed slightly from Vekaria's over the two problems we both tackled. Vekaria claimed that for the One Max problem, selective crossover was not significantly better than the two static crossover methods she had chosen to compare it with. Tackling the random L-Max SAT problem strengthened Vekaria's findings: she found that selective crossover performed significantly better, and that as the level of epistasis rose, the difference in performance grew. Vekaria concluded that selective crossover in GAs works particularly well for problems with epistasis.
Our empirical study found that selective crossover performed significantly better than standard GP crossover. The overall best performer was the uniform crossover operator; in the One Max problem it required less than a tenth of the number of evaluations that standard and simple selective crossover required. In terms of economic value, uniform crossover still outperforms the other operators: even though it needed over twice as long to run 3000 generations, in practice we would never need that many, as it tends to find the global solution within 400 generations.
7.1 Critical Evaluation
We have successfully covered the main goals of this thesis but fell a little short of the bonus aim which was added towards the end of the implementation phase. If I were to redo this thesis, there are a few minor adjustments that I would make to improve the effectiveness of this comparison.
Firstly, implementing two-point crossover would have been fairly crucial to our experiment, whereas we chose to spend the time revising the uniform crossover operator.
Secondly, in our design we should have included a counter recording the number of times crossover happened before the global optimal solution was found; a sketch of such a counter is given below.
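For reference, such a counter would only need a few lines in the main loop of each problem file. The sketch below is hypothetical and self-contained: it mimics the operator-selection logic of l_sat.m (mutation, crossover or straight reproduction chosen by a random draw) and simply tallies how many crossover events occur; the variable name n_cross is illustrative and not part of the delivered code.

% Hypothetical sketch: counting crossover events using the same
% operator-selection thresholds as the main loop in l_sat.m.
p_mut   = 0.01;              % probability of mutation
p_cross = 0.4;               % threshold above which crossover is applied
n_cross = 0;                 % the missing counter

for gen = 1:600
    op = rand;
    if op < p_mut
        % mutation branch (omitted in this sketch)
    elseif op > p_cross
        n_cross = n_cross + 1;   % record one crossover event
    else
        % straight reproduction branch (omitted in this sketch)
    end
end
n_cross                      % report the total when the run stops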
Overall, we met most of our goals, and we therefore conclude that it has been a successful experiment, with results that back up the widely accepted argument that the more crossover points an operator has, the better it can explore, especially in its earlier generations.
7.2 Further Work
A number of questions arose during the experimental phase of this thesis; listed below are some of the questions that would be very interesting to research:
• Finding out whether varying the mutation rate would affect the performance of the crossover operators.
• Does Koza's [13] implication, that totally eliminating mutation does not affect GP in any significant manner, hold for the two new crossover methods?
• Is there any difference in performance between my revised method of uniform crossover and the existing uniform crossover algorithm? I hypothesised that it will help to explore the search space more efficiently, as it allows more combinations of a valid candidate solution.
• Modifying my current GP system to test the other problems which Vekaria tackled would help to identify the similarities and differences of selective crossover in GP.
• We would also like to vastly extend our comparative analysis, qualitatively analysing the performance of dominance selective crossover in relation to alternative strategies other than static crossover operators, such as simulated annealing and hill climbing.
Appendix A: Full Test Results
One Max
Standard   Simple Selective   Uniform   Dominance
2300       3043               218       1882
4552       4533               243       33
3892       24                 419       1742
3539       4442               189       1810
709        5296               290       2065
6169       19                 265       2034
5367       2228               337       1984
16         4580               301       2292
5100       4001               296       2211
3222       4622               162       2156
2642       4139               308       2445
4217       2017               461       2230
4899       6009               131       2212
55         4624               348       2483
4573       2930               341       2192
4178       2222               308       2182
4487       4844               335       2197
2443       5506               394       2341
5931       4574               373       2177
153        93                 356       2345
4168       1869               400       4518
13         5226               386       2423
3570       4036               330       61
835        4339               369       2351
4486       3845               333       1553
978        4929               367       2082
4653       3464               387       2588
5464       4125               398       1717
4070       4567               214       2486
4938       4196               347       1709

Table A.1: Number of generations required until the optimal solution is found (30 runs)
All the runs of the graphs can be viewed on the enclosed compact disc (CD) under the folder One Max 1.
Below are a few runs showing the typical cases and a few of the more special cases.
Figure A.1: 2300 generations till global solution found using standard crossover
Figure A.2: 6169 generations till global solution found using standard crossover
Figure A.3: 55 generations till global solution found using standard crossover
Figure A.4: 19 generations till global solution found using simple selective crossover
Figure A.5: 6009 generations till global solution found using simple selective
crossover
Figure A.6: 333 generations till global solution found using uniform crossover
Figure A.7: 33 generations till global solution found using dominance selective
crossover
Figure A.8: 2065 generations till global solution found using dominance selective
crossover
Figure A.9: 50 generations using Standard Crossover on the multiplexer problem.
Figure A.10: 50 generations using Simple Selective Crossover on the multiplexer
problem.
Figure A.11: 50 generations using Uniform Crossover on the multiplexer problem.
Figure A.12: 50 generations using Dominance Selective Crossover on the multiplexer
problem.
Appendix B: Logical Operators
X   Y   X AND Y
0   0   0
0   1   0
1   0   0
1   1   1

Table B.1: Truth Table for logical AND

X   Y   X OR Y
0   0   0
0   1   1
1   0   1
1   1   1

Table B.2: Truth Table for logical OR

X   NOT X
0   1
1   0

Table B.3: Truth Table for logical NOT
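For reference, the GP system evaluates these operators arithmetically on 0/1 values, as can be seen in the fitness function in Appendix E: AND is implemented as min, OR as max and NOT as 1 - x. A minimal sketch reproducing the tables above:

% Minimal sketch: the arithmetic encoding of AND, OR and NOT on 0/1 values
% used by the fitness function in Appendix E (min, max and 1 - x).
X = [0 0 1 1];
Y = [0 1 0 1];

andXY = min(X, Y)     % Table B.1: 0 0 0 1
orXY  = max(X, Y)     % Table B.2: 0 1 1 1
notX  = 1 - [0 1]     % Table B.3: 1 0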
Appendix C: Statistical Results
One Max Problem
C.1 Statistical Summary
Below is a summary of the results, showing how many generations were required until the optimal solution of 50 was found using each crossover method. Identical parameter settings have been used for all runs.

Standard Crossover
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   3379   4122     3433     1918    350       1927   4715

Simple Selective Crossover
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   3546   4174     3647     1796    328       2702   4623

Uniform Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   316.9   334      320.5    77.1    14.1      283.8   370

Dominance Selective
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   2083   2187     2127     747     136       1864   2347
C.2 T-Test analysis
Two-Sample T-Test and CI: Simple Selective vs Standard

                   N    Mean   StDev   SE Mean
Simple Selective   30   3678   1602    293
Standard           30   3387   1912    349

Difference = mu Simple Selective - mu Standard
Estimate for difference: 291
95% CI for difference: (-622, 1203)
T-Test of difference = 0 (vs not =): T-Value = 0.64   DF = 56

Two-Sample T-Test and CI: Standard vs Dominance

            N    Mean   StDev   SE Mean
Standard    30   3387   1912    349
Dominance   30   2083   747     136

Difference = mu Standard - mu Dominance
Estimate for difference: 1304
95% CI for difference: (545, 2063)
T-Test of difference = 0 (vs not =): T-Value = 3.48   DF = 37

Two-Sample T-Test and CI: Standard vs Uniform

           N    Mean    StDev   SE Mean
Standard   30   3387    1912    349
Uniform    30   320.2   78.5    14

Difference = mu Standard - mu Uniform
Estimate for difference: 3067
95% CI for difference: (2353, 3782)
T-Test of difference = 0 (vs not =): T-Value = 8.78   DF = 29

Two-Sample T-Test and CI: Simple Selective vs Uniform

                   N    Mean    StDev   SE Mean
Simple Selective   30   3678    1602    293
Uniform            30   320.2   78.5    14

Difference = mu Simple Selective - mu Uniform
Estimate for difference: 3358
95% CI for difference: (2759, 3957)
T-Test of difference = 0 (vs not =): T-Value = 11.46   DF = 29

Two-Sample T-Test and CI: Simple Selective vs Dominance

                   N    Mean   StDev   SE Mean
Simple Selective   30   3678   1602    293
Dominance          30   2083   747     136

Difference = mu Simple Selective - mu Dominance
Estimate for difference: 1595
95% CI for difference: (943, 2247)
T-Test of difference = 0 (vs not =): T-Value = 4.94   DF = 41

Two-Sample T-Test and CI: Dominance vs Uniform

            N    Mean    StDev   SE Mean
Dominance   30   2083    747     136
Uniform     30   320.2   78.5    14

Difference = mu Dominance - mu Uniform
Estimate for difference: 1763
95% CI for difference: (1483, 2044)
T-Test of difference = 0 (vs not =): T-Value = 12.85   DF = 29
C.3 Box-plots
Box-plots displaying the Inter-Quartile range and mean of the run time using each crossover technique for 3000 generations. The “*” are outliers.

[Box-plots: one per crossover technique, with run time on the vertical axis.]

Figure C 3.1: Standard Crossover
Figure C 3.2: Simple Selective Crossover
Figure C 3.3: Uniform Crossover
Figure C 3.4: Dominance Selective Crossover
L-Max SAT
C.4 Statistical Results
Low epistasis – maximum fitness after 600 generations
Standard Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1        Q3
 30   99.833   100.000   99.923   0.461   0.084     100.000   100.000

Simple Selective Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.567   100.000   99.769   1.135   0.207     99.000   100.000

Uniform Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   95.67   99.00    98.67    18.10   3.30      98.00   100.00

Dominance Selective
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.467   100.000   99.577   0.776   0.142     99.000   100.000
Medium Epistasis – Maximum fitness after 600 generations

Standard Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.300   100.000   99.346   0.837   0.153     98.750   100.000

Simple Selective Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.600   100.000   99.692   0.675   0.123     99.000   100.000

Uniform Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.433   100.000   99.692   1.357   0.248     99.000   100.000

Dominance Selective
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.333   100.000   99.577   1.295   0.237     99.000   100.000
High Epistasis – Maximum fitness after 600 generations

Standard Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.167   100.000   99.269   1.085   0.198     98.750   100.000

Simple Selective Crossover
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.367   100.000   99.462   0.890   0.162     99.000   100.000

Uniform Crossover
 N    Mean     Median   TrMean   StDev   SE Mean   Q1       Q3
 30   99.133   99.500   99.231   1.008   0.184     98.000   100.000

Dominance Selective
 N    Mean     Median    TrMean   StDev   SE Mean   Q1       Q3
 30   99.467   100.000   99.615   0.900   0.164     99.000   100.000

Low Epistasis – Run Time to complete 600 generations
Standard Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   70.40   68.32    69.78    16.38   2.99      58.00   79.68

Simple Selective Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   78.04   75.42    76.89    14.60   2.66      64.77   86.48

Uniform Crossover
 N    Mean     Median   TrMean   StDev   SE Mean   Q1      Q3
 30   100.12   104.48   95.69    35.29   6.44      78.14   107.08

Dominance Selective
 N    Mean     Median   TrMean   StDev   SE Mean   Q1       Q3
 30   122.53   110.23   116.93   46.50   8.49      102.03   144.75
Medium Epistasis – Run Time for 600 generations

Standard Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   531.5   533.5    533.9    110.8   20.2      625.7   458.8

Simple Selective Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   510.7   518.9    512.2    71.7    13.1      571.9   449.3

Uniform Crossover
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   476.9   464.0    464.7    76.9    14.0      479.2   449.3

Dominance Selective
 N    Mean    Median   TrMean   StDev   SE Mean   Q1      Q3
 30   760.3   757.0    756.4    75.1    13.7      805.5   707.6

High Epistasis – Run Time for 600 generations

Standard Crossover
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   5120   5199     5133     980     179       5892   4404

Simple Selective Crossover
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   3998   3932     3991     1021    186       4773   3104

Uniform Crossover
 N    Mean     Median   TrMean   StDev   SE Mean   Q1       Q3
 30   1658.1   1629.2   1607.1   428.5   78.2      1376.9   1769.7

Dominance Selective
 N    Mean   Median   TrMean   StDev   SE Mean   Q1     Q3
 30   3480   3691     3475     912     167       4056   2592
C.5 T-test results for difference in cpu time required
If the t-statistic is positive, the operator named first took longer; for example, the (standard, simple selective) entry of 4.34 in Table C.5.3 tells us that, under high epistasis, standard crossover took longer than simple selective crossover. The degrees of freedom for all of the tests in this section is 57; however, the statistical table1 does not list 57 degrees of freedom, hence the critical value for DF = 60, which is 2.00, has been used.

1 Table found in Wild, C.J. and Seber, G.A.F., (2000). Chance Encounters: A First Course in Data Analysis and Inference, University of Auckland, John Wiley & Sons, Inc., Appendix A6.
Table C.5.1: Low Epistasis

                    Standard   Simple Selective   Uniform   Dominance
Standard                         -1.91             -4.18     -5.79
Simple Selective      1.91                          -3.17     -5.00
Uniform               4.18        3.17                        -2.10
Dominance             5.79        5.00               2.10
Table C.5.2: Medium Epistasis

                    Standard   Simple Selective   Uniform   Dominance
Standard                          0.86              2.22      -9.36
Simple Selective     -0.86                          1.77     -13.16
Uniform              -2.22       -1.77                       -14.44
Dominance             9.36       13.16             14.44

Table C.5.3: High Epistasis

                    Standard   Simple Selective   Uniform   Dominance
Standard                          4.34             17.72       6.71
Simple Selective     -4.34                         11.58       2.07
Uniform             -17.72      -11.58                        -9.90
Dominance            -6.71       -2.07              9.90
Appendix C: User Manual
Investigation into the effects of
Selective Crossover techniques using Gene Dominance
in Genetic Programming
User Manual
Chi Chung Yuen
Department of Computer Science
University College London
Gower Street
London
WC1E 6BT
[email protected]
Distribution of this Manual and Software
No part of this manual, including the software described in it, may be
reproduced, transmitted, transcribed, stored in a retrieval system, or
translated into any language in any form or by any means, except
documentation kept by the purchaser for backup purposes, without the
express written permission of Chi Chung Yuen and University College
London.
Introduction
This chapter briefly discusses the equipment needed in order to be able to operate the programs accompanying the investigation.
Software Requirement
Matlab, Version 6.5, The MathWorks Inc
Computer System Requirements
Windows Based Operating Systems
Pentium, Pentium Pro, Pentium II, Pentium III, Pentium IV,
Intel Xeon, AMD Athlon or Athlon XP based personal
computer.
Microsoft Windows 98 (original and Second Edition),
Windows Millennium Edition (ME), Windows NT 4.0 (with
Service Pack 5 for Y2K compliancy or Service Pack 6a),
Windows 2000, Windows XP or Windows 2003.
128 MB RAM minimum, 256 MB RAM recommended. Disk space varies depending on the size of the partition and the installation of online help files.
Hard disk space: 120 MB for MATLAB only and 260 MB for
MATLAB with online help files.
CD-ROM drive (for installation from CD)
8-bit graphics adapter and display (for 256 simultaneous
colours). A 16, 24 or 32-bit OpenGL capable graphics
adapter is strongly recommended.
Unix Based Operating Systems
Operating system vendors' most current recommended patch
set for the hardware and operating system
90 MB free disk space for MATLAB only (215 MB to include
MATLAB online help files)
128 MB RAM, 256 MB RAM recommended; 128 MB swap space (recommended)
Macintosh (Mac) Based Operating Systems
Power Macintosh G3 or G4 running OS X (10.1.4 or later)
X Windows. The only supported version is the XFree86 X
server (XDarwin) with the OroborOSX window manager; both
are included with MATLAB. (Note: XFree86 requires
approximately 100MB after it is uncompressed and installed
onto your disk. To uncompress and install it onto your disk,
you need an additional 40MB for the uncompressed file and
40MB for the actual installer. This space (80MB) is not
needed after XFree86 is installed.)
90 MB free disk space for MATLAB only (215 MB to include
MATLAB online help files)
128 MB RAM minimum, 256 MB RAM recommended
FLEXlm 8.0, installed by the MATLAB installer
Features
All Genetic Programs provide users with manual selection of
population size, number of generations and selecting which
recombination method they wish to use when running the
selected program.
Results are displayed in an elegant graphical and text based
format. All graphical results can be saved by the user for
reference purposes.
Getting Started
This chapter provides you with details on how to start using the
programs on the enclosed Compact Disk (CD) and setting your
workspace.
Compact Disk Contents
After inserting the enclosed Compact Disk into your optical
drive. Please open the contents of the CD and you will find
folders named as follows:
GP SYSTEM
    L Max Sat
    Multiplexer
    One Max
RESULTS
    L Max Sat
    Multiplexer
    One Max
DOCUMENTATION
Operating (Basics)
This chapter details a step by step guide to run the programs
accompanied with the compact disc supplied. It will operate with
the standard preset population size and generation size.
Assumption: Matlab version 6.5 has been installed properly with
the standard library directory installed onto the
computer.
1. Open Matlab
Example: Windows Environment
Start -> All Programs -> Matlab 6.5 -> Matlab 6.5
Or
Click on the Matlab Icon
2. Set the Current Directory to your directory of the CD
3. Select the Program that you wish to run (Multiplexer / One Max /
Random L-SAT)
4. Each program requires a command to be entered in the Matlab Command Window
a. If Multiplexer was selected. The command is:
multiplexer(flag)
The variable flag takes values 0 to 4, where
“0” activates standard crossover
“1” activates dominance selective crossover
“2” activates simple selective crossover
“3” activates uniform crossover
“4” activates revised uniform crossover (only in multiplexer)
Example: multiplexer(0)
b. If One Max was selected. The command is:
onemaxterm(flag)
c. If Random L-SAT was selected. The command is:
l_sat(flag, C)
where C is the number of clauses, see section 5.3.2 in main
report.
5. Press the “Enter” key
6. When the program has finished running, it will display a set of graphs
The graphical results may be saved for referencing reasons as
follows:
File -> Save As…
Or
Press “Ctrl + S”
Or simply press the save icon
7. To run the same experiment again, go back to 4. If you wish to run
another experiment, go back to step 3
8. After you have finished with using Matlab, quit Matlab as follows:
File -> Exit MATLAB
Or
Press “Ctrl + Q”.
Operating (Experts)
This chapter details a comprehensive guide to operating the programs accompanying the compact disc supplied, at a more advanced level. The user will be able to manually enter the
parameter values for variables such as population size, number
of generations, etc.
Assumption: Matlab version 6.5 has been installed properly with
the standard library directory installed onto the
computer.
1. Open Matlab
2. Set the Current Directory to your directory of the CD (see fig 4.1)
3. Select the Program that you wish to run (Multiplexer / One Max /
Random L-SAT)
a. If Multiplexer is selected;
Multiplexer takes 3 parameters:
•
population size
•
number of generations
•
flag - to select crossover method
The command is ad_multiplexer (population size, no of
generations, flag)
e.g. ad_multiplexer(100, 100, 2)
b. If One Max is selected
One Max takes 3 parameters, identical to Multiplexer.
The command is onemax(population size, no of generations, flag)
e.g. onemax(100, 50, 3)
c. If Random L-SAT is selected
Random L-SAT requires 4 parameters:
•
Population size
•
Number of generations
•
Flag
•
Number of clauses
The command is ad_l_sat(population size, no of generations, flag, clauses),
e.g. ad_l_sat(100, 600, 1, C), where C is the number of clauses required (see section 5.3.2 in the main report).
Troubleshooting
1.
Problem
Solution
Matlab halts
Press “Ctrl+C”
Exit Matlab and restart Matlab
2.
Other errors
Please report to the development
team at [email protected]
immediately.
Appendix D: System Manual
All the code for running experiments and conducting further work can be found on the CD
accompanying this thesis.
Matlab is required to be pre-installed onto a computer prior to running the code.
Each individual is treated as an object. In the multiplexer and the one max problems I have made the objects global, which means they are freely accessible in any function within the same directory. Please note that for the random L-SAT problem I have modified my structure and kept all the objects local to the main function; the reasons for this were more efficient programming and less chance of the object being modified by mistake elsewhere due to unexpected bugs.
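As a concrete illustration, the sketch below shows the individual representation used by l_sat.m (the struct definition is taken from the code in Appendix E; the example chromosome itself is only illustrative):

% Minimal sketch of the individual representation: each candidate solution
% is a struct holding its chromosome string, fitness and, for the selective
% operators, a dominance vector of the same length as the string.
prog = struct('string', {}, 'fitness', {}, 'change', {}, 'dominance', {});

prog(1).string    = [999 1 2];       % illustrative tree: 999 = functional node, 1 and 2 = terminal variables
prog(1).fitness   = 0;               % filled in by the fitness function
prog(1).dominance = zeros(1, length(prog(1).string));   % one dominance value per node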
For each problem, the main functions are as follows:
•
ad_multiplexer
•
multiplexer
•
onemaxterm
•
onemax
•
l_sat
•
ad_l_sat
A recommended approach to fully understanding the code is to use break points after each line of code which you are unsure of.
The system build has been designed with plug and play in mind; therefore, if you wish to test another operator on the same problem, you can easily add another function and just make sure you set the new operator to be active, as sketched below.
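For example, a new crossover operator only needs to keep the same interface as the existing operators and be given its own flag value in the operator-selection chain of the main function. The skeleton below is hypothetical: the name mynewcrossover and the flag value 4 are not part of the delivered code.

function [str_ch1, str_ch2] = mynewcrossover(str1, str2, tree_len, C, L, V)
% Hypothetical skeleton for a plug-in operator, with the same interface as
% crossover.m and uniformcrossover.m in Appendix E.  As a placeholder it
% returns the parents unchanged; replace the body with the new
% recombination logic, then add a branch such as
%     elseif cross == 4
%         [str_ch1, str_ch2] = mynewcrossover(str1, str2, tree_len, C, L, V);
% to the operator-selection chain in the main function (e.g. l_sat.m).
str_ch1 = str1;
str_ch2 = str2;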
If there are any problems, please do not hesitate to contact me ([email protected]).
Appendix E: Code Listing
After careful consideration of what will be of interest to the user, we have decided to include code listings for the crossover techniques (sub-tree dominance crossover, dominance crossover, uniform crossover and standard crossover) and the fitness function. As all three problems use a near-identical programming structure, it is unnecessary to display all the code; I will list the code for the Random L-Max SAT problem. All the other code can be found on the CD enclosed.
==================================================================
function [str_c1, str_c2] = crossover(str1,str2,tree_len)
% This function will crossover two strings, at a random location in each string
% Input Parameters  : str1     = a member of the current population
%                     str2     = another member of the current population
%                     tree_len = the maximum length of a tree
% Output Parameters : str_c1   = an updated member for the new population
%                     str_c2   = another updated member for the new population
temp1 = length(str1)-1;
temp2 = length(str2)-1;
pos1 = randint(1,1,[1,temp1]);
pos2 = randint(1,1,[1,temp2]);
if pos1 == 0
pos1 = 1;
end
if pos2 == 0
pos2 = 1;
end
len1 = getSubTree(str1, pos1);
len2 = getSubTree(str2, pos2);
str_c1(1 : pos1 - 1) = str1(1 : pos1-1);
str_c2(1 : pos2 - 1) = str2(1 : pos2-1);
str_c2(pos2 : pos2 + len1 - pos1) = str1(pos1 : len1);
str_c1(pos1 : pos1 + len2 - pos2) = str2(pos2 : len2);
str_c1(pos1 + len2 - pos2 : pos1 + len2 - pos2 + length(str1) - len1) = str1(len1 : length(str1));
str_c2(pos2 + len1 - pos1 : pos2 + len1 - pos1 + length(str2) - len2) = str2(len2 : length(str2));
function [str_ch1, str_ch2, strdom_ch1, strdom_ch2] = dominancecrossover(str1, str2, strdom1, strdom2, tree_len, C, L, V)
% This functions performs crossover using 2 parents to create 2 offspring.
% Swapping node for node or node for sub tree
% INPUT: str1 & str2 are the parent chromosomes, strdom1 and strdom2 are
% their respective dominance vectors; tree_len is max tree length.
% C - no of clauses, L - length of a clause, V - no of avaliable variables
len_p1 = length(str1);
len_p2 = length(str2);
len_d1 = length(strdom1);
len_d2 = length(strdom2);
i = 1; j = 1; a = 1; b = 1;
while i <= len_p1 | j <= len_p2
% keeps on looping until either of the indices has exceeded the length of its chromosome
if i <= len_p1 & j <= len_p2
if strdom1(i) > strdom2(j) % DO NOT CROSSOVER
if str1(i) == 999 & str2(j) == 999
str_ch1(a) = str1(i);
str_ch2(b) = str2(j);
strdom_ch1(a) = strdom1(i);
strdom_ch2(b) = strdom2(j);
ch1_change(a) = 0;
ch2_change(b) = 0;
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) ~= 999
str_ch1(a) = str1(i);
str_ch2(b) = str2(j);
strdom_ch1(a) = strdom1(i);
strdom_ch2(b) = strdom2(j);
ch1_change(a) = 0;
ch2_change(b) = 0;
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) == 999 & str2(j) ~= 999
len = getSubTree(str1,i);
dif = len - i;
str_ch1(a : a + dif) = str1(i : len);
str_ch2(b) = str2(j);
strdom_ch1(a : a + dif) = strdom1(i : len);
strdom_ch2(b) = strdom2(j);
ch1_change(a : a + dif) = 0;
ch2_change(b) = 0;
i = i + 1 + dif;
j = j + 1;
a = a + 1 + dif;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) == 999
len = getSubTree(str2,j);
dif = len - j;
str_ch1(a) = str1(i);
str_ch2(b : b + dif) = str2(j : len);
strdom_ch1(a) = strdom1(i);
strdom_ch2(b : b + dif) = strdom2(j : len);
ch1_change(a) = 0;
ch2_change(b : b + dif) = 0;
i = i + 1;
j = j + 1 + dif;
a = a + 1;
b = b + 1 + dif;
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
elseif strdom1(i) <= strdom2(j)
% PERFORM CROSSOVER
if str1(i) == 999 & str2(j) == 999
str_ch2(a) = str1(i);
str_ch1(b) = str2(j);
strdom_ch2(a) = strdom1(i);
strdom_ch1(b) = strdom2(j);
ch1_change(a) = 1;
ch2_change(b) = 1;
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) ~= 999
str_ch2(a) = str1(i);
str_ch1(b) = str2(j);
strdom_ch2(a) = strdom1(i);
strdom_ch1(b) = strdom2(j);
ch1_change(a) = 1;
ch2_change(b) = 1;
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) == 999 & str2(j) ~= 999
len = getSubTree(str1,i);
dif = len - i;
str_ch2(b : b + dif) = str1(i : len);
str_ch1(a) = str2(j);
strdom_ch2(b : b + dif) = strdom1(i : len);
strdom_ch1(a) = strdom2(j);
ch2_change(b : b + dif) = 1;
ch1_change(a) = 1;
i = i + 1 + dif;
j = j + 1;
a = a + 1;
b = b + 1 + dif;
elseif str1(i) ~= 999 & str2(j) == 999
len = getSubTree(str2,j);
dif = len - j;
str_ch2(b) = str1(i);
str_ch1(a : a + dif) = str2(j : len);
strdom_ch2(b) = strdom1(i);
strdom_ch1(a : a + dif) = strdom2(j : len);
ch2_change(b) = 1;
ch1_change(a : a + dif) = 1;
i = i + 1;
j = j + 1 + dif;
a = a + 1 + dif;
b = b + 1;
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
end
elseif i <= len_p1 & j > len_p2
if a < b
str_ch1(a : a + len_p1 - i) = str1(i : len_p1);
strdom_ch1(a : a + len_p1 - i) = strdom1(i : len_p1);
ch1_change(a : a + len_p1 - i) = 0;
a = a + len_p1 - i + 1;
i = len_p1 + 1;
else
str_ch2(b : b + len_p1 - i) = str1(i : len_p1);
strdom_ch2(b : b + len_p1 - i) = strdom1(i : len_p1);
ch2_change(b : b + len_p1 - i) = 0;
b = b + len_p1 - i + 1;
i = len_p1 + 1;
end
elseif i > len_p1 & j <= len_p2
check1 = valid_tree(str_ch1);
if a > b
str_ch2(b : b + len_p2 - j) = str2(j : len_p2);
strdom_ch2(b : b + len_p2 - j) = strdom2(j : len_p2);
ch2_change(b : b + len_p2 - j) = 0;
b = b + len_p2 - j + 1;
j = len_p2 + 1;
else
str_ch1(a : a + len_p2 - j) = str2(j : len_p2);
strdom_ch1(a : a + len_p2 - j) = strdom2(j : len_p2);
ch1_change(a : a + len_p2 - j) = 0;
a = a + len_p2 - j + 1;
j = len_p2 + 1;
end
end
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
% EVALUATE THE FITNESS OF PARENT AND CHILDREN
fit_p1 = fitness(str1, tree_len, C, L, V);
fit_p2 = fitness(str2, tree_len, C, L, V);
fit_ch1 = fitness(str_ch1, tree_len, C, L, V);
fit_ch2 = fitness(str_ch2, tree_len, C, L, V);
diff_fit1 = fit_ch1 - fit_p1;
diff_fit2 = fit_ch2 - fit_p2;
if diff_fit1 < 0
for x = 1 : length(str_ch1)
if ch1_change(x) == 1
strdom_ch1(x) = strdom_ch1(x) + diff_fit1;
end
end
end
if diff_fit2 > 0
for y = 1 : length(str_ch2)
if ch2_change(y) == 1
strdom_ch2(y) = strdom_ch2(y) + diff_fit2;
end
end
end
function fitness = fitness(str, tree_len, C, L, V);
% This function calculates the fitness of the string.
% INPUT: str - the chromosome, tree_len - the maximum length of a tree,
%        C - no of clauses, L - length of each clause, V - no of variables.
global a b c
fitness = 0;
% converting the string so that I get an index of which locations were activated
tempstring_eval = zeros(1,tree_len);
length(str);
for j = 1 : length(str)
if str(j) < V
tempstring_eval(str(j)) = 1;
end
end
str = tempstring_eval;
b_count = 1;
for i = 1 : length(a)
if a(i) == '^'
temp(i) = min (str(b (b_count)), str(b (b_count + 1)) );
b_count = b_count + 2;
elseif a(i) == 'v'
temp(i) = max (str(b (b_count)), str(b (b_count + 1)) );
b_count = b_count + 2;
elseif a(i) == '~'
temp(i) = 1 - str(b(b_count));
b_count = b_count + 1;
end
end
% then we need to handle each conjun
temp_count = 1;
fitness_temp = 0;
for j = 1 : length(c)
if j == 1
if c(j) == '^'
fitness_temp = min (temp(temp_count), temp(temp_count + 1));
temp_count = temp_count + 2;
elseif c(j) == 'v'
fitness_temp = max (temp(temp_count), temp(temp_count + 1));
temp_count = temp_count + 2;
end
else
if c(j) == '^'
fitness_temp = min(fitness_temp, temp(temp_count));
temp_count = temp_count + 1;
elseif c(j) == 'v'
fitness_temp = max(fitness_temp, temp(temp_count));
temp_count = temp_count + 1;
end
end
end
fitness = fitness_temp;
function [a, b, c] = generate_eval(V,C,L)
% Generates an evaluation function randomly in conjunctive form
% Input  : V = number of variables
%          C = number of clauses
%          L = length of each clause
% Output : A matrix with the evaluation expression
% initiallise the multi-dimensional array
eval_express = zeros(C,L);
% I will generate a clause individually on each row
for i = 1:C
% keeping a count to check if the tree is valid, by counting how many
% terminal are needed to construct a valid tree
terms_req = 1;
% Generating an expression of Max length L
for j = 1:L
% Making sure that we have enough room to fill the end with terminal variables
if L - terms_req > j
% Choosing an integer to represent the option 1,2,or 3 randomly
OpRand = randint(1,1,[1,3]);
switch OpRand
case 1
eval_express(i,j) = 1000; % a NOT '~' is placed into the string
case 2
eval_express(i,j) = 1001; % an AND '^'
terms_req = terms_req + 1;
case 3
eval_express(i,j) = 1002; % an OR 'v'
terms_req = terms_req + 1;
end
else
if terms_req > 0
% Choosing a random terminal variable to put into the expression
term = randint(1,1,[1,V]);
eval_express(i,j) = term;
terms_req = terms_req - 1;
end
end
end
if i < C
ConOpRand = randint(1,1,[1,2]);
switch ConOpRand
case 1
conjun(i) = 1001; % an AND '^'
case 2
conjun(i) = 1002; % an OR 'v'
end
end
end
count_a = 1;
count_b = 1;
for i = 1:C
for j = 1:L
switch eval_express(i,j)
case 1000
a(count_a) = '~';
count_a = count_a + 1;
case 1001
a(count_a) = '^';
count_a = count_a + 1;
case 1002
a(count_a) = 'v';
count_a = count_a + 1;
otherwise
b(count_b) = eval_express(i,j);
count_b = count_b + 1;
end
end
end
b = b(b~=0);
count_c = 1;
for k = 1 : C-1
switch conjun(k)
case 1001
c(count_c) = '^';
count_c = count_c + 1;
case 1002
c(count_c) = 'v';
count_c = count_c + 1;
end
end
function l_sat(cross, C)
% This is the main function for the Random L-SAT Problem
% Parameters
% pop_size is the size of a population in each generations
% gen is the number of generations the GP should run for
% cross is a flag to distinguish which crossover technique should be used
% time counter
tic;
% global parameters access by other functions
global a b c
% defining the struct of each individual
prog = struct('string',{},'fitness',{},'change',{},'dominance',{});
% funct_set = {'combine'} which I have named 999;
%Initialising the variables
pop_size = 100;
% population size
gen = 600;
% number of generations
L = 3;
% length of each clause
V = 100;
% no of variables
tree_len = (V * 2) - 1;
% length of the tree
population = prog;
% setting the struct for population
new_population = prog;
% setting the struct for new_population
parents = zeros(pop_size, 2); % initialising a matrix
Max_Fit = 0;
total = 0;
p_mut = 0.01;
% Probability of mutation
p_cross = 0.4;
% Probability of Crossover
max_hits = -1;
% Max hits will tell us the max fitness throughout all generations
% Generate a random evaluation string
[a, b, c] = generate_eval(V,C,L);
% initialise population
for i = 1 : pop_size
population(i).string = random_tree(tree_len, V);
population(i).fitness = fitness(population(i).string, tree_len, C, L, V);
population(i).dominance = rand(1, length(population(i).string)) * (population(i).fitness/V);
parents(i,1) = i;
parents(i,2) = population(i).fitness;
end
% We want the fitness of whole population, so we sum population.fitness
parents = sortrows(parents,2);
popfit = []; % store the fitness of the fittest candidate solution in that generation.
avefit = []; % store the average fitness of that generation
varfit = []; % store the variance of the fitness values of that generation.
count = 1;
for i = 1 : gen
% running the loop for the number of generations set
ind = 1;
% initialising the counter for no of individuals
while ind <= pop_size
% while we have not reached the max number of individuals for a generation
if ind > (pop_size - (pop_size * 0.1))
% keeping the last 10% of the previous generation
parent = parents(ind, 1);
new_population(ind).string = population(parent).string; %assigning the parent to the child directly
new_population(ind).dominance = population(parent).dominance;
ind = ind + 1;
% incrementing the count by 1
else
% END_IF
p1 = select_parent(pop_size);
% Selecting a parent
parent1 = parents(p1,1);
str1 = population(parent1).string;
strdom1 = population(parent1).dominance;
op = rand;
% rand no for deciding which operation to perform
if op < p_mut
if cross == 1 | cross == 2
[str_ch1 strdom_ch1] = mutation_dom(str1, strdom1, V); % calling the function mutation
new_population(ind).string = str_ch1;
new_population(ind).dominance = strdom_ch1;
ind = ind + 1; % incrementing the count by 1
else
[str_ch1] = mutation(str1, V);
new_population(ind).string = str_ch1;
ind = ind + 1;
end
elseif op > p_cross
p2 = select_parent(pop_size);
% Selecting a second parent
while p2 == p1
% A check to perform we haven't chosen the same parent
p2 = select_parent(pop_size); % Select one until we have not selected the same parent
end
%END_WHILE
parent2 = parents(p2,1);
str2 = population(parent2).string;
strdom2 = population(parent2).dominance;
% We have four crossover techniques to choose from: standard crossover,
% dominance crossover, sub-tree dominance crossover and uniform crossover;
% select the corresponding flag to activate the correct operation
if cross == 0
[str_ch1 str_ch2] = crossover(str1,str2,tree_len);
new_population(ind).string = str_ch1;
new_population(ind + 1).string = str_ch2;
elseif cross == 1
count = count + 1;
[str_ch1 str_ch2 strdom_ch1 strdom_ch2] = dominancecrossover(str1, str2, strdom1, strdom2, tree_len, C, L, V);
new_population(ind).string = str_ch1;
new_population(ind + 1).string = str_ch2;
new_population(ind).dominance = strdom_ch1;
new_population(ind + 1).dominance = strdom_ch2;
elseif cross == 2
[str_ch1 str_ch2 strdom_ch1 strdom_ch2] = subtreedominancecrossover(str1, str2, strdom1, strdom2, tree_len, C, L, V);
new_population(ind).string = str_ch1;
new_population(ind + 1).string = str_ch2;
new_population(ind).dominance = strdom_ch1;
new_population(ind + 1).dominance = strdom_ch2;
elseif cross == 3
[str_ch1, str_ch2] = uniformcrossover(str1, str2, tree_len, C, L, V);
new_population(ind).string = str_ch1;
new_population(ind + 1).string = str_ch2;
end
% END_IF
ind = ind + 2;
else
new_population(ind).string = population(parent1).string;
new_population(ind).dominance = population(parent1).dominance;
ind = ind + 1;
end
% END_IF
end
% END_IF
end
% END_WHILE
% Sum up the total fitness
Total_Fitness = 0;
for m = 1 : pop_size
Total_Fitness = Total_Fitness + population(m).fitness;
end
% Storing the fitest individual out of every generation
if Max_Fit < Total_Fitness
Max_Fit = Total_Fitness;
end
popfit = [popfit Max_Fit];
% Measuring the Average fitness of the population
total = total + Total_Fitness;
ave_fit = total/i;
avefit = [avefit ave_fit];
% Measure the variance of the fitness of the population
for n = 1 : pop_size
temp_var_fit(n) = (population(n).fitness - Total_Fitness/pop_size)^2;
end
% computing the variance of the generation
var_fit = sum(temp_var_fit) / pop_size;
varfit = [varfit var_fit];
population = new_population;
parents = zeros(pop_size,2);
for k = 1 : pop_size
population(k).fitness = fitness(population(k).string, tree_len, C, L, V);
%     population(k).fitness
parents(k,1) = k;
parents(k,2) = population(k).fitness;
end
parents = sortrows(parents,2);
end
time_taken = toc
Max_Fit
subplot(2,2,1); plot(popfit)
subplot(2,2,2); plot(avefit)
subplot(2,2,3); plot(varfit)
function str = random_tree(tree_len, var)
% This function is used to generate a valid tree
% Input Parameters  : tree_len - defines the maximum length of a tree
%                     var      - defines the number of termination variables to choose from
% Output Parameters : str
term_req = 2; % counting the number of terminals required to make the tree valid
str(1) = 999; % representing a functional node, to say 'combine' or 'and'.
terminals = [1:var];
for i = 2 : tree_len
termop = rand; % generate a random decider
brancher = rand;
if brancher > 0.1
if (termop > 0.25 & (term_req > 1) & (tree_len - i > term_req))
str(i) = 999;
% representing a functional node, to say 'combine' or 'and'
term_req = term_req + 1;
elseif term_req > 0
termrand = randint(1,1,[1,var]);
var = var - 1;
term = terminals(termrand);
terminals = terminals(terminals~=term);
term_req = term_req - 1;
str(i) = term;
end
else
if term_req > 0
% If more terminals are required, we add one to the chromosome string
termrand = randint(1,1,[1,var]);
var = var - 1;
term = terminals(termrand);
terminals = terminals(terminals~=term);
term_req = term_req - 1; % Decrease the count of the number of terminals needed.
str(i) = term;
end
end
end
function [str_ch1, str_ch2, strdom_ch1, strdom_ch2] = subTreeDominanceCrossover(str1, str2, strdom1, strdom2, tree_len, C, L, V)
% This function creates 2 offspring from 2 parents. It swaps the selected
% sub-trees only when parent 1's sub-tree has a lower mean dominance per
% node than parent 2's; otherwise both parents are returned unchanged.
flag = 0;
total_dom1 = 0;
total_dom2 = 0;
while flag == 0
pos1 = randint(1,1,[1,length(str1)]);
pos2 = randint(1,1,[1,length(str2)]);
len1 = getSubTree(str1, pos1);
len2 = getSubTree(str2, pos2);
if length(str1) - (len1 - pos1) + (len2 - pos2) <= tree_len
if length(str2) - (len2 - pos2) + (len1 - pos1) <= tree_len
flag = 1;
end
end
end
total_dom1 = sum(strdom1(pos1:len1));
% sum of dominance values in the sub tree
Len1 = len1 - pos1 + 1;
% computing the length of the sub tree
total_dom1 = total_dom1 / Len1;
% compute the dominance per node
total_dom2 = sum(strdom2(pos2:len2));
Len2 = len2 - pos2 + 1;
total_dom2 = total_dom2 / Len2;
if total_dom1 >= total_dom2
% DO NOT PERFORM CROSSOVER, AS THE FITTER PARENT ALREADY HAS THE MORE DOMINANT SUB TREE
str_ch1 = str1;
strdom_ch1 = strdom1;
str_ch2 = str2;
strdom_ch2 = strdom2;
elseif total_dom1 < total_dom2
% PERFORM CROSSOVER
str_ch1(1 : pos1 - 1) = str1(1 : pos1-1);
str_ch2(1 : pos2 - 1) = str2(1 : pos2-1);
str_ch2(pos2 : pos2 + len1 - pos1) = str1(pos1 : len1);
str_ch1(pos1 : pos1 + len2 - pos2) = str2(pos2 : len2);
str_ch1(pos1 + len2 - pos2 : pos1 + len2 - pos2 + length(str1) - len1) = str1(len1 : length(str1));
str_ch2(pos2 + len1 - pos1 : pos2 + len1 - pos1 + length(str2) - len2) = str2(len2 : length(str2));
strdom_ch1(1 : pos1 - 1) = strdom1(1 : pos1 - 1);
strdom_ch2(1 : pos2 - 1) = strdom2(1 : pos2 - 1);
strdom_ch2(pos2 : pos2 + len1 - pos1) = strdom1(pos1 : len1);
strdom_ch1(pos1 : pos1 + len2 - pos2) = strdom2(pos2 : len2);
strdom_ch1(pos1 + len2 - pos2 : pos1 + len2 - pos2 + length(str1) - len1) = strdom1(len1 : length(str1));
strdom_ch2(pos2 + len1 - pos1 : pos2 + len1 - pos1 + length(str2) - len2) = strdom2(len2 : length(str2));
% Evaluate fitness of parent and child
fit_p1 = fitness(str1, tree_len, C, L, V);
fit_p2 = fitness(str2, tree_len, C, L, V);
fit_ch1 = fitness(str_ch1, tree_len, C, L, V);
fit_ch2 = fitness(str_ch2, tree_len, C, L, V);
diff_fit1 = fit_ch1 - fit_p1;
diff_fit2 = fit_ch2 - fit_p2;
% If offspring has a higher fitness than the parent, then we update the
% difference in fitness to the dominance values
if diff_fit1 > 0
if len2 == pos2
strdom_ch1(pos1) = strdom_ch1(pos1) + diff_fit1;
else
strdom_ch1(pos1 : pos1 + len2 - pos2) = strdom_ch1(pos1 : pos1 + len2 - pos2) + diff_fit1;
end
if diff_fit2 > 0
if len1 == pos1
strdom_ch2(pos2) = strdom_ch2(pos2) + diff_fit2;
else
strdom_ch2(pos2 : pos2 + len1 - pos1) = strdom_ch2(pos2 : pos2 + len1 - pos1) + diff_fit2;
end
end
end
end
function [str_ch1, str_ch2] = uniformcrossover(str1, str2, tree_len, C, L, V)
% This function performs uniform crossover, creating 2 offspring from 2
% parents
len_p1 = length(str1); % compute the length of parent 1
len_p2 = length(str2); % compute the length of parent 2
i = 1;
% index tracking parent 1
j = 1;
% index tracking parent 2
a = 1;
% index tracking child 1
b = 1;
% index tracking child 2
while i <= len_p1 | j <= len_p2 % while we have not reached the end of both strings
if i <= len_p1 & j <= len_p2 % if both indices are within the length of the parent chromosomes
if rand > 0.5
% DO NOT CROSSOVER IF RAND > 0.5
if str1(i) == 999 & str2(j) == 999
% node i & node j is a functional node
str_ch1(a) = str1(i);
% copy node i from parent 1 to node a in child 1
str_ch2(b) = str2(j);
% copy node j from parent 2 to node b in child 2
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) ~= 999
% node i & node j is a terminal node
str_ch1(a) = str1(i);
str_ch2(b) = str2(j);
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) == 999 & str2(j) ~= 999 % node i is a functional node & node j is a terminal node
len = getSubTree(str1,i);
% get a sub tree from node i
dif = len - i;
% calculate the length of the sub tree
str_ch1(a : a + dif) = str1(i : len);
str_ch2(b) = str2(j);
i = i + 1 + dif;
j = j + 1;
a = a + 1 + dif;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) == 999
% node i is at a terminal node & node j is at a functional node
len = getSubTree(str2,j);
dif = len - j;
str_ch1(a) = str1(i);
str_ch2(b : b + dif) = str2(j : len);
i = i + 1;
j = j + 1 + dif;
a = a + 1;
b = b + 1 + dif;
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
elseif rand < 0.5
% PERFORM CROSSOVER
if str1(i) == 999 & str2(j) == 999
str_ch2(b) = str1(i);
% node i from parent 1 is copied to node b in child 2
str_ch1(a) = str2(j);
% node j from parent 2 is copied to node a in child 1
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) ~= 999 & str2(j) ~= 999
str_ch2(b) = str1(i);
str_ch1(a) = str2(j);
i = i + 1;
j = j + 1;
a = a + 1;
b = b + 1;
elseif str1(i) == 999 & str2(j) ~= 999
len = getSubTree(str1,i);
dif = len - i;
str_ch2(b : b + dif) = str1(i : len);
str_ch1(a) = str2(j);
i = i + 1 + dif;
j = j + 1;
a = a + 1;
b = b + 1 + dif;
elseif str1(i) ~= 999 & str2(j) == 999
len = getSubTree(str2,j);
dif = len - j;
str_ch2(b) = str1(i);
str_ch1(a : a + dif) = str2(j : len);
i = i + 1;
j = j + 1 + dif;
a = a + 1 + dif;
b = b + 1;
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
end
elseif i <= len_p1 & j > len_p2
if a < b
str_ch1(a : a + len_p1 - i) = str1(i : len_p1);
a = a + len_p1 - i + 1;
i = len_p1 + 1;
else
str_ch2(b : b + len_p1 - i) = str1(i : len_p1);
b = b + len_p1 - i + 1;
i = len_p1 + 1;
end
elseif i > len_p1 & j <= len_p2
check1 = valid_tree(str_ch1);
if a > b
str_ch2(b : b + len_p2 - j) = str2(j : len_p2);
b = b + len_p2 - j + 1;
j = len_p2 + 1;
else
str_ch1(a : a + len_p2 - j) = str2(j : len_p2);
a = a + len_p2 - j + 1;
j = len_p2 + 1;
end
end
end
str_ch1 = str_ch1(str_ch1 ~= 0);
str_ch2 = str_ch2(str_ch2 ~= 0);
Bibliography
[1]
Altenberg, L., (1994). Emergent phenomena in genetic programming. In
Sebald, A. V. and Fogel, L. J., editors, Evolutionary Programming –
Proceedings of the Third Annual Conference, pages 233 – 241. World
Scientific, Singapore.
[2]
Baker, J. E., (1985). Adaptive selection methods for genetic algorithms. In J. J.
Grenfenstette, editors, Proceedings of the First International Conference on
Genetic Algorithms and Their Applications. Erlbaum.
[3]
Banzhaf, W., Nordin, P., Keller, R. E., and Francone, F. D., (1998).
Genetic Programming, An Introduction, On the Automatic Evolution of
Computer Programs and its Applications, Morgan Kaufmann, San Francisco,
C.A.
[4]
Burke, E. K., Gustafson, S., Kendall, G. and Krasnogor, N., Is Increased
Diversity in Genetic Programming Beneficial? An Analysis of Lineage
Selection
[5]
Darwin, C.R., (1859). On the Origin of Species by Means of Natural Selection
or the Preservation of Favoured Races in the Struggle for Life, John Murray,
London.
[6]
De Jong, K. A., Potter, M. A. and Spears, W. M., (1997). Using Problem
Generators to Explore the Effects of Epistasis. In T. Black, editor, Proceedings
of the Seventh International Conference on Genetic Algorithms, 338 – 345.
Morgan Kaufmann.
[7]
Goldberg, D. E., (1989a). Genetic Algorithms in search, optimization and
machine learning. Addison – Wesley.
[8]
Goldberg, D. E., (1989b). Sizing Populations for Serial and Parallel Genetic
Algorithms. In J. D. Schaffer, editor, Proceedings of the Third International
Conference on Genetic Algorithms, 70 – 79. Morgan Kaufmann
[9]
Goldberg, D. E. and O’Reilly, U., (1998). Where does the Good Stuff Go, and
Why? How contextual semantics influence program structure in simple genetic
programming. In W. Banzhaf, R. Poli, M. Schoenauer and T. C. Fogarty,
editor, Proceedings of the First European Workshop on Genetic Programming,
16 – 36. Springer-Verlag.
[10]
Grefenstette, J. J. (1993) Deception Considered Harmful. In L. D. Whitley,
editor, Proceedings of Foundations of Genetic Algorithms 2, 75 – 91. Morgan
Kaufmann.
[11]
Hengpraprohm, S. and Chongstitvatana, P., Selective crossover in genetic
programming, Inter. Symposium on Communications and Information
Technology, pages 534-537, Chiangmai, Thailand, 2001.
[12]
Iba, H, and de Garis, H., “Extending Genetic Programming with
Recombinative Guidance”, Angeline, P. and Kinnear, K., editors, Advances in
Genetic Programming Vol 2, MIT Press, 1996.
[13]
Koza, J. R., (1992). Genetic Programming: On the Programming of
Computers by Natural Selection. MIT Press, Cambridge, MA.
[14]
Koza, J. R., (1994). Genetic Programming II: Automatic Discovery of
Reusable Programs. MIT Press, Cambridge Massachusetts.
[15]
Langdon, W. B. and Poli, R., (1997). An analysis of the MAX problem in
genetic programming. In Koza, J. R., Deb, K., Dorigo, M., Fogel, D. B.,
Garzon, M., Iba, H., and Riolo, R. L., editors, Genetic Programming 1997:
Proceedings of the Second Annual Conference, pages 222-230, Stanford
University, C.A. Morgan Kaufmann, San Francisco, CA.
[16]
Langdon, W. B. and Poli, R., (1998) On the Ability to Search the Space of
Programs of Standard, One point, and Uniform Crossover in Genetic
Programming, Technical Report, The University of Birmingham.
[17]
Luke, S. and Spector, L., (1997). A Revised Comparison of Crossover and
Mutation in Genetic Programming. In Genetic Programming 1997:
Proceedings of the Second Annual Conference(GP97). J. Koza et al, eds. San
Francisco: Morgan Kaufmann. 240 – 248.
[18]
Mitchell, D., Selman, B. and Levesque, H., (1992). Hard and Easy
Distributions of SAT Problems. In Proceedings of the Tenth National
Conference on Artificial Intelligence, 459 – 465. MIT Press.
[19]
Mitchell, M., (1996). An Introduction to Genetic Algorithms. MIT Press.
[20]
O’Reilly, U. M. and Oppacher, F., (1994). Using building block functions to
investigate a building block hypothesis for genetic programming. Working
Paper 94-02-029, Santa Fe Institute, Santa Fe, NM.
[21]
O’Reilly, U. M. and Oppacher, F., (1995). The troubling aspects of a building
block hypothesis for genetic programming. In Whitley, L. D. and Vose, M. D.,
editors, Foundations of Genetic Algorithms 3. Morgan Kaufmann.
[22]
Poli, R. and Langdon, W. B., (1998). On the Search Properties of Different
Crossover Operators in Genetic Programming, The University of Birmingham.
[23]
Poli, R. and McPhee, N. F., (2001). Exact GP Schema Theory for Headless
Chicken Crossover and Subtree Mutation.
[24]
Reeves, C.R. and Rowe, J.E., Genetic Algorithms – Principles and
Perspectives: A Guide to GA Theory. Kluwer Academic Publishers.
[25]
Sastry, K., O'Reilly, M., Goldberg, D. E., and Hill, D., (2003). Building-Block
Supply in Genetic Programming
[26]
Ito, T., Iba, H., and Sato, S., Non-destructive depth-dependent crossover for
genetic programming. In W. Banzhaf, R. Poli, M. Schoenauer, and T. C.
Fogarty, editors, Proceedings of the First European Workshop on Genetic
Programming, LNCS, Paris, 14-15 April 1998. Springer-Verlag. Forthcoming.
[27]
Tackett, W. A., (1994). Recombination, Selection, and the Genetic
Construction of Computer Programs. PhD thesis, University of Southern
California, Department of Electrical Engineering Systems.
[28]
Vekaria, K. P., (1999). Selective Crossover as an Adaptive Strategy for
Genetic Algorithms. PhD thesis, University of London, Department of
Computer Science, University College London.
[29]
Whitley, L. D., (1991). Fundamental Principles of Deception in Genetic
Search. In G. Rawlins, editor, Foundations of Genetic Algorithms, 221 – 241.
Morgan Kaufmann.
[30]
http://www.pcai.com/web/ai_info/pcai_lisp.html 06/06/2004