Genetic Algorithms and the Evolution of Neural Networks
for Language Processing
Jaime J. Dávila
Hampshire College, School of Cognitive Science
Amherst, MA 01002
[email protected]
Abstract
One approach used by researchers trying to develop computer systems capable of understanding natural languages is to train a neural network (NN) for the task. For this type
of approach, one of the key questions becomes how best to configure NN parameters such as
topology, learning rates, and training data. How to choose values for these parameters
is still an open question, especially since the effect these variables have on each other is not
completely understood.
Genetic algorithms (GAs) are particularly well suited for finding optimal combinations of
parameters, since they make no assumptions about the problem being solved. Different NN
configurations are coded as genomes, whose fitness is based on how well they can
solve a particular task. Genomes are paired and recombined in the hope that the offspring of
good solutions will be even better.
In this paper I present ways in which I have used Genetic Algorithms (GAs) to find which
NN parameter values produce better performance for a particular natural language task. In
addition to this, the system has been modified and studied in order to evaluate ways in
which coding methods in the GA and the NN can affect performance. In the case of GA
coding, an evaluation method based on schema theory is presented. This methodology can
help determine optimal balances between different evolutionary operators such as crossover
and mutation, based on the effect they have on the structures being processed by the GA.
In the case of NN coding, the effect of different ways of representing words and sentences at
the output layer is examined with both binary and floating point schemes.
Keywords: Neural Networks, Genetic Algorithms, Natural Language Processing.
1 Introduction
One approach used by researchers trying to develop computer systems capable of understanding
natural languages is that of training a neural network (NN) for the task. For this type of approach,
one of the key questions becomes how to best configure NN parameters such as topology, learning
rates, and training data. How to choose values for these parameters is still an open question,
especially since the effect these variables have on each other is not completely understood.
Genetic algorithms (GAs) are particularly well suited for finding optimal combinations of parameters, since they make no assumptions about the problem being solved, and find solutions
by combining exploitation of known good solutions with exploration of new solutions. GAs are
modeled after the process of natural selection. We start with a randomly selected population of
elements, each of which represents a possible solution to the problem being solved. Each of these
elements, called a genome, is formed by a string of values (called genes) that code the free variables
in the experiment. The fitness of each genome is determined by how well the solution it codes
solves the problem at hand. From the point of view of a genetic algorithm trying to optimize NN configurations, particular combinations of values for these variables constitute particular schemata (templates that identify the set of genomes sharing values at certain gene positions).
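To make this loop concrete, the following is a minimal sketch of a generational GA over genomes of floating point genes in [0, 1], the gene type used later in this paper. The selection scheme, rates, and population size here are illustrative placeholders, not the settings used in the experiments reported below.

```python
import random

def evolve(fitness, genome_len, pop_size=20, generations=40,
           crossover_rate=0.9, mutation_rate=0.01):
    """Minimal generational GA over genomes of floats in [0, 1]."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:    # one-point crossover
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            for i in range(genome_len):             # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.random()
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```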
The rest of this paper is organized as follows: Section 2 presents some of the ways in which
other researchers have used GAs to optimize NNs. Section 3 presents the natural language task I
have been working with. Section 4 presents a method of coding NN topologies at the layer level
of abstraction. In section 5 I present a mathematical model for evaluating a GA coding scheme's
ability to efficiently process the genetic schemata in its population. Section 6 uses this model to
make predictions for three different ways of coding NNs for the natural language task I present in
section 3. Section 7 analyzes results for these three coding schemes. Section 8 deals with different
ways of representing sentences at the NN output layer. Section 9 then discusses the empirical
results of using this different sentence coding scheme. Finally, section 10 describes possible future
research and section 11 presents conclusions.
2 GA for NN Optimization
In recent years researchers have used genetic algorithm techniques to evolve neural network topologies. Although these researchers have had the same aim in mind (namely, the evolution of topologies that are better able to solve a particular problem), the approaches they used varied greatly.
For example, de Garis (1996) evolved NNs by having a series of growth commands give instructions on how to grow connections among nodes. Each node in the network processed signals that
told it how to extend its synapses. When two different synapses reached each other, a new node
was formed. The genetic algorithm was responsible for evolving the sequence of growth commands
that controlled how the network developed.
Fullmer and Miikkulainen (1991) developed a GA coding system where pieces of a genotype
went unused, imitating biological DNA processing. Only information stored between a Start
marker and an End marker was used to generate networks. The number of correctly configured
Start-End markers defined how many hidden nodes the network would have. In addition, information between these Start-End markers defined how the nodes were connected to each other. The
meaning conveyed by each position in the used part of the genome depended on its distance from
its corresponding Start symbol.
For example, the genome shown in figure 1 would generate two nodes, one for string
S,a,1,b,5,a,-2,E and another for string S,b,0,a,3,E, which wraps around the end of the genome.
Node a had an initial activation of 1 (because of substring S,a,1), was connected to node b with a
weight of 5 (because of substring b,5), and to itself with a weight of -2 (because of substring a,-2).
Node b had an initial activation of 0 (because of substring S,b,0) and a connection to node
a with a weight of 3 (because of substring a,3).
Figure 1: Example of the marker-based coding used by Fullmer and Miikkulainen (1991).
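The decoding just described can be sketched as follows. This is an illustrative reading of the scheme, assuming well-formed Start (S) and End (E) markers and alternating (node, weight) pairs, with the circular wrap-around behavior shown in figure 1; it is not Fullmer and Miikkulainen's actual implementation.

```python
def decode_marker_genome(genome):
    """Decode a circular marker-based genome into nodes and weighted links.

    genome: list of tokens. Returns {node: (initial_activation, {target: weight})}.
    Sketch only: assumes every 'S' has a matching 'E', reading circularly.
    """
    n = len(genome)
    nodes = {}
    for start in range(n):
        if genome[start] != 'S':
            continue
        segment = []
        i = (start + 1) % n
        while genome[i] != 'E':        # collect tokens, wrapping at the end
            segment.append(genome[i])
            i = (i + 1) % n
        name, activation = segment[0], segment[1]
        links = {segment[j]: segment[j + 1] for j in range(2, len(segment), 2)}
        nodes[name] = (activation, links)
    return nodes

# The genome from figure 1 as described in the text; the S,b,0,a,3,E segment
# wraps around the end of the genome.
print(decode_marker_genome(
    ['b', 0, 'a', 3, 'E', 'S', 'a', 1, 'b', 5, 'a', -2, 'E', 'S']))
```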
The network evolved by this process was used to control a virtual creature’s movements in a
square field, avoiding bad objects and coming into contact with good objects. The GA continued
to run until a network that could solve the problem evolved. The number of generations needed
until this network was found varied between 7 and 304 for objects that could be identified before
hitting them, and between 15 and 414 generations when recognizing the object required traveling
around it looking for a characteristic view.
Kitano (1994) used GA to evolve a sequence of graph generation rules, as opposed to directly
coding network topology. Each genome defined a sequence of rules used to rewrite an element of
the graph. When these rules were applied until only terminal symbols remained, the graph defined
a connectivity matrix which was then used to configure a NN. For example, if we were developing
a network with two nodes, a genome might code rules [S → AB][A → 01][B → 10]. When these
three rules are applied we end up with a 2×2 matrix that defines the connectivity between the two
nodes in the network.
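A toy version of this rule expansion, written to reproduce the two-node example above (Kitano's full method rewrites 2×2 matrices recursively; here each nonterminal simply expands to one row of terminals):

```python
def generate_matrix(rules, axiom='S'):
    """Expand graph-generation rules into a connectivity matrix.

    Simplified sketch: the axiom expands to one nonterminal per row, and
    each nonterminal expands to a string of 0/1 terminals for that row.
    """
    return [[int(bit) for bit in rules[r]] for r in rules[axiom]]

rules = {'S': ['A', 'B'], 'A': '01', 'B': '10'}
print(generate_matrix(rules))   # [[0, 1], [1, 0]]: node connectivity matrix
```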
To be sure, the above examples do not represent a complete list of researchers who have used
GA to optimize NN. A more complete review can be found, for example, in Yao (1993).
3 A Natural Language Processing Task for a NN
This task was originally presented in Dávila (1999). As an overview, a network is asked to receive
a sentence one word at a time, and to incrementally build a description of the sentence in its
output nodes. For example, if the sentence “the boy ran in the park” is entered, the network
should respond by indicating that the boy is a noun phrase, and that it acts as the agent of the verb
ran. The network should also indicate that "in the park" is a prepositional phrase modifying the
verb ran.
Entering a word into the network amounts to activating a single node that represents the given
word at the input layer, and at the same time activating those semantic nodes that reflect the
meaning of the word being entered. For example, to enter the word boy, a node that represents
that word is activated, as well as nodes that indicate that the word being entered is a noun that is
singular, concrete, and human. In addition, an ID node is set to a value that allows the
network to distinguish boy from other words that might have the same semantic identity, such as
girl. An example of such an activation is shown in figure 2.
Figure 2: Input layer presentation of boy (not all input nodes are shown).
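As an illustration, input vectors of this kind could be assembled as below. The node inventories and the ID value are placeholders standing in for the actual layout of the input layer described in this section.

```python
# Hypothetical (partial) node inventories; the real network has one node
# per vocabulary word plus the semantic nodes described in the text.
WORD_NODES = ['the', 'boy', 'girl', 'john', 'ran', 'runs']
SEMANTIC_NODES = ['human', 'animal', 'mechanical', 'animate', 'proper',
                  'noun_id', 'present', 'past', 'movement', 'verb_id']

def encode_word(word, semantics):
    """Build an input-layer activation vector for a single word."""
    vec = [0.0] * (len(WORD_NODES) + len(SEMANTIC_NODES))
    vec[WORD_NODES.index(word)] = 1.0          # the word's own node
    for name, value in semantics.items():      # its semantic nodes
        vec[len(WORD_NODES) + SEMANTIC_NODES.index(name)] = value
    return vec

# Entering 'boy': word node, human/animate semantics, and an ID activation
# (the value 0.3 is an arbitrary placeholder for boy's distinguishing ID).
x = encode_word('boy', {'human': 1.0, 'animate': 1.0, 'noun_id': 0.3})
```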
The language used in this research is composed of ten nouns: boy, girl, john, mary, horse,
duck, car, boat, park, river. Available semantic nodes are: human, animal, or mechanical (three
mutually exclusive nodes); animate or inanimate (represented by one node, active if the noun is
animate); proper (active if true, inactive otherwise); and one ID node.
Verbs are entered in a similar way; the node representing that verb is activated, simultaneously
with semantic nodes that convey the meaning of that verb. Semantic nodes available for verbs
are: present or past tense (two mutually exclusive nodes), auxiliary verb, movement verb, sound
producing verb, sound receiving verb, visual receiving verb, and a verb ID node used to distinguish
verbs that would be identical otherwise (for example, ran and swam would be identical without
this last node). In total, there are twelve main verbs (saw, swam, swimming, ran, runs, running,
is, was, raced, floated, said, heard) and two auxiliary verbs (is, was). For example, figure 3 shows
how the verb runs is entered in the network; the individual node for runs is activated, as well as
semantic nodes representing a verb of movement and a present tense verb.
In addition to nouns and verbs, the language has three adjectives (fast, blue, red), one article
(the), one adverb (fast), and three prepositions (with, in, after). Each of these is entered into
the network by activating an individual node for the word, plus an additional node that indicates
which type of word (adjective, article, adverb, or preposition) is being entered.
Figure 3: Input layer presentation of runs (not all input nodes are shown).

After each word is entered, the NN is expected to produce a representation of what it understands about the sentence up to that point. For example, after the network sees the boy (entered
one word at a time; first the and then boy), it should indicate that it has detected a noun phrase
that uses an article, and that the noun of this phrase is boy. Boy in this case is represented by
activating output nodes that code human, animate, and concrete, plus an additional ID node that
distinguishes boy from other words that otherwise would have identical semantics (such as girl).
An example of such an activation is shown in figure 4.
Figure 4: Output layer presentation of the boy (not all output nodes are shown).
If the network, immediately after having been shown the boy at the input layer, is shown runs,
it should respond by indicating that it has detected a verb phrase with runs as its verb. Indicating
that the verb of this verb phrase is runs is done by activating semantic nodes for verb of movement
and present tense. In addition, an ID node is activated so that runs can be differentiated from
other verbs that would otherwise have identical semantics. At this point the network should also
indicate that the first noun phrase detected is the agent of the first verb phrase detected. This is
done by activating a np1-agent-of-vp1 node in the output layer.
In the manner described above, then, the network should continue to indicate its understanding
of the sentence being entered until an end-of-sentence marker is seen. The fitness of a network
is determined by computing the sum of squared errors for all output nodes during the processing
of all sentences in the language.
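A sketch of this fitness computation appears below. The network interface (the reset and step methods) is an assumed placeholder for whatever recurrent NN implementation is used; the error accumulation itself follows the description above.

```python
def network_fitness(network, sentences, targets):
    """Sum of squared errors over all output nodes, accumulated while each
    sentence is presented one word at a time (lower is better)."""
    sse = 0.0
    for sentence, sentence_targets in zip(sentences, targets):
        network.reset()                        # assumed: clears recurrent state
        for word_vec, target_vec in zip(sentence, sentence_targets):
            output = network.step(word_vec)    # assumed: one word in, outputs out
            sse += sum((o - t) ** 2 for o, t in zip(output, target_vec))
    return sse
```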
4 Genetic Definition of NN Topology
Each NN in this system has 75 hidden nodes between the input and output layers. These 75 nodes
are divided into N hidden layers, where N is a number between 1 and 30. The exact number of
hidden layers is determined by the first gene of the corresponding genome. This position stores
a random floating point number, with a value between 0 and 1. To determine how many hidden
layers a network has, the value of this gene is multiplied by 30, and rounded to the next highest
integer. If the result of this rounding up is 31, the network uses 30 hidden layers.
The number of hidden nodes in each of these hidden layers is also determined by the network’s
corresponding genome. The genome has 30 genes used to code the relative worth of each of the
possible hidden layers. Once the number of hidden layers is determined to be N using the process
described above, the N layers with the highest relative worth are identified. The 75 available
hidden nodes are distributed among each of these N hidden layers according to each layer’s worth
relative to the sum of all N worth values.
Figure 5: Sample of genes 1-30.
For example, if a genome had genes 1-30 as illustrated in figure 5, and it had already been
determined that the network would have five hidden layers (as described above), the five layers to use are
those indicated in bold. Since the sum of these five genes is 4.6, the first hidden layer would have
⌊75 × 0.91/4.6⌋ = 14 nodes. The other four hidden layers would be allocated hidden nodes in the
same way.
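A sketch of this decoding step, assuming genome[0] is the layer-count gene and genome[1..30] are the worth genes. The floor in both the layer count and the node allocation is an assumption consistent with the worked example (75 × 0.91/4.6 giving 14 nodes).

```python
import math

def decode_layer_structure(genome, total_nodes=75, max_layers=30):
    """Decode gene 0 (number of hidden layers) and genes 1-30 (worths)."""
    # gene 0 in [0, 1] -> 1..31, with 31 clamped to 30 as in the text
    n_layers = min(max_layers, math.floor(genome[0] * max_layers) + 1)
    worths = genome[1:max_layers + 1]
    # the n_layers positions with the highest relative worth are used
    chosen = sorted(range(max_layers), key=lambda i: worths[i],
                    reverse=True)[:n_layers]
    total_worth = sum(worths[i] for i in chosen)
    # distribute the 75 hidden nodes in proportion to each layer's worth
    sizes = {i: int(total_nodes * worths[i] / total_worth) for i in chosen}
    return chosen, sizes
```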
The connections between layers are also determined by the network's genome. For each of the
thirty possible layers, there is a gene that indicates where the layer takes its input from. Each
of these genes stores a random floating point value between 0 and 1. To determine where each
hidden layer takes its input from, its takes-input-from gene value is multiplied by N+2 (where
N is the number of hidden layers this network will have, as determined by the procedure outlined
previously), and rounded to the nearest integer. The resulting number points to the layer this
one takes its input from. We multiply by N+2 to allow a hidden layer to take its input from any
of the N hidden layers, as well as from either the input or the output layer. A value of 1 would mean
the layer takes its input from the input layer. A value of N+2 would mean the layer takes its
input from the output layer. For a value v between 2 and N+1, the layer would take its input from
the hidden layer with the (v-1)th highest relative worth.
Figure 6: Sample of genes 31-60.
For example, if the same genome used for the example above had genes 31-60 as illustrated
in figure 6, we would look at the corresponding 5 takes-input-from genes, shown in bold in figure
6. Multiplying each of the selected genes by 6, we would obtain 4.14, 1.44, .06, 2.22, and 1.26.
This would mean that hidden layer 1 would take its input from hidden layer 4, hidden layer 2
would take its input from hidden layer 1, hidden layer 3 would take its input from the input layer,
hidden layer 4 would take its input from hidden layer 2, and hidden layer 5 would take its input
from hidden layer 1.
Where each layer sends its output is determined in a similar way, using positions 61-90 of
the genotype. Each of these genes stores a random floating point value between 0 and 1. To
determine where each layer sends its output, its sends-output-to gene value is multiplied by N+1
and rounded to the nearest integer. The resulting number points to the layer this one will
send its output to. We multiply by N+1 to allow hidden layers to send their output to any of
the N hidden layers, as well as to the output layer. A value of N+1 would mean the layer sends
its output to the output layer. For a value v between 1 and N, the layer sends its output to the hidden layer
with the vth highest relative worth. No layer sends its output back to the input layer.
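The connectivity decoding can be sketched in the same style. Gene indexing (genes 31-60 and 61-90, one per possible layer) follows the text; since the worked example in figure 6 and the prose differ slightly on how rounded values map to layers, the index convention in the comments should be read as one plausible interpretation rather than the definitive one.

```python
def decode_connections(genome, chosen, n_layers, max_layers=30):
    """Decode takes-input-from (genes 31-60) and sends-output-to (61-90)
    for the hidden layers actually in use."""
    inputs, outputs = {}, {}
    for pos in chosen:                           # gene position of each used layer
        g_in = genome[1 + max_layers + pos]      # its takes-input-from gene
        g_out = genome[1 + 2 * max_layers + pos] # its sends-output-to gene
        # 1 = input layer, 2..N+1 = hidden layers by worth rank, N+2 = output
        inputs[pos] = round(g_in * (n_layers + 2))
        # 1..N = hidden layers by worth rank, N+1 = output layer
        outputs[pos] = round(g_out * (n_layers + 1))
    return inputs, outputs
```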
Results for the experiment described above are presented in table 1.

Topology          Average   Best    Worst
Type I            90.21     92.09   88.97
Type II           95.99     97.62   84.91
Type III          87.59     88.84   86.20
Type IV           82.69     85.97   80.39
SRN               70.58     90.40   17.69
fully connected   72.95     90.41   17.79
FGS               71.85     90.40   33.29
N-P               72.95     90.41   17.79

Table 1: Percent of language correctly processed after training with 20% of the complete language,
for both evolved and commonly used topologies.

The four topologies evolved by the GA system described above outperform commonly used NN topologies such as
Simple Recurrent Networks (SRN), Frasconi-Gori-Soda networks, and Narendra-Parthasarathy
networks. Although some of these commonly used topologies managed to outperform some evolved
topologies in the best of cases, on average the evolved topologies performed better by more than
10 percentage points. In addition, previously used topologies demonstrate a higher sensitivity to initial conditions.
The worst performance for previously used topologies is more than 45 percentage points lower than the worst
performance for evolved topologies. Details about the characteristics of the evolved networks can
be found in Dávila (1999).
5 Schema Disruption Computation
If we view the evolution of NNs as a process whose main goal is defining connections between
pairs of nodes, then we can determine a coding scheme's ability to combine building blocks by estimating how
likely it is for evolutionary operations to disrupt connection definitions; the less likely it is for
connection definitions to be disturbed, the easier it is for the algorithm to combine building blocks
present in the current population.
An operation like crossover can disrupt a connection definition every time a crossover point is
selected between two genes that, taken together, define a connection between nodes of a network.
Therefore, how likely it is for crossover to cause this disruption can be estimated by the distance
between genes that combine to define any particular connection. If a particular connection is
defined by alleles in genes gi and gj , then the bigger the distance between gi and gj , the bigger
the chance that the connection will be disrupted by a crossover operation. Taking a sum of the
distance between genes that can define a connection we obtain a total disruption index (TDI) of
TDI = \sum_{k=0}^{C} \sum_{i=0}^{N} \sum_{j=0}^{N} |i - j| \cdot DC(k,i) \cdot DC(k,j)   (1)
where C is the number of connections, N is the number of genes, and DC(k, x) is a number between 0 and 1
indicating the probability that gene x is involved in defining connection k. Notice that this
number reflects a global probability of disruption for the complete network, as opposed to any
particular connection. This is different from what was originally presented in Dávila (2000), and
is motivated by the fact that the more connections a network has, the more likely it is to suffer
disruptions.
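Equation (1) translates directly into code. The DC function is supplied by whoever is analyzing a particular coding scheme; the triple sum below simply mirrors the formula.

```python
def tdi(dc, n_genes, n_connections):
    """Total disruption index of equation (1).

    dc(k, x): probability (0..1) that gene x helps define connection k.
    """
    return sum(abs(i - j) * dc(k, i) * dc(k, j)
               for k in range(n_connections + 1)
               for i in range(n_genes + 1)
               for j in range(n_genes + 1))

# Example: under the uniform assumption DC(k, x) = 1 discussed in section 6,
# TDI depends only on the gene count, so any two such schemes tie.
print(tdi(lambda k, x: 1.0, n_genes=90, n_connections=10))
```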
6 Making Predictions with the TDI Computation
Under the GA coding method presented in section 4, which from now on I will call SYSTEM-A, the
existence of a connection k between nodes i & j depends on the number of layers that the hidden
nodes are divided into (gene 0), which layers contain nodes i & j (genes 1-30), where the layer with
node j takes its input from (a gene from among genes 31-60, depending on the values of j, gene
0, and the nodes/layer distribution determined by genes 1-30), and where the layer with node i
sends its output to (a gene from among genes 61-90, again depending on the values of i, gene 0,
and the nodes/layer distribution determined by genes 1-30).
As a comparison, the same type of network topology could be evolved by using the following
coding scheme, which I will call SYSTEM-B. Each network still has 75 hidden nodes, but they
are always divided into 30 hidden layers (that is, there is no gene used to determine how many
hidden layers to use). The distribution of these 75 nodes into 30 layers is done based on 30 relative
worth values in the genome. To determine how many nodes each layer has, the relative worth for
a specific layer is divided by the sum of relative worth of all 30 layers, and then multiplied by 75.
The connections between layers are also determined by the network’s genome. For each of the
thirty layers, there is a gene that indicates where the layer takes its input from. Each of these
genes stores a random floating point value between 0 and 1. To determine where each hidden
layer takes its input from, its takes-input-from gene value is multiplied by 32 and rounded to
the nearest integer. The resulting number points to the layer this one takes its input from. A
value of 1 would mean the layer takes its input from the input layer. A value of 32 would mean
the layer takes its input from the output layer. For a value v between 2 and 31, the layer would
take its input from the (v-1)th hidden layer.
If we assume that DC(k,x) = 1 for all values of k and x, then TDI(SYSTEM-A) =
TDI(SYSTEM-B). In reality, though, DC(k,x) does not always return 1, and in fact it tends
to return smaller values under SYSTEM-B than under SYSTEM-A. Notice, for example, that
under SYSTEM-A the set of gene sequences that would allow node n to be in layer L has a
higher cardinality than under SYSTEM-B, given that SYSTEM-B always has 30 hidden layers,
while under SYSTEM-A the number of hidden layers is determined by a gene. This will, in turn,
affect which genes are involved in defining a connection between two given nodes, which affects
DC(k,x). Under standard random distributions, considering actual values for DC(k,x) would give
TDI(SYSTEM-A) > TDI(SYSTEM-B). What this would mean is that it is easier for a particularly
good schema to be disrupted by crossover under SYSTEM-A than under SYSTEM-B.
Aside from the effect of DC(k,x), of course, the actual positioning of the genes and how
they map into phenome characteristics also has an effect on the disruption caused by crossover
operations. Take, for example, a system with the same types of genes as SYSTEM-B, but where
the position of the genes has been altered. Instead of having 30 worth values followed by 30
takes-input-from values and then 30 sends-output-to values, SYSTEM-C arranges genes so that
the worth, takes-input-from and sends-output-to genes for any one particular layer are in three
consecutive positions (positions 3*L, 3*L+1, and 3*L+2, where L is the hidden layer number).
Because a gene with a particular functionality will have the same effect on resulting phenomes
regardless of its position in the genome, we can discard terms DC(k,i) and DC(k,j) in equation
(1), and obtain
TDI'(\cdot) = \sum_{k=0}^{C} \sum_{i=0}^{N} \sum_{j=0}^{N} |i - j|   (2)
Under this definition, TDI'(SYSTEM-B) > TDI'(SYSTEM-C), since SYSTEM-C minimizes
the distance between the relative-worth and takes-input-from genes. This means that crossover
is less likely to disrupt a useful schema under SYSTEM-C.
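One way to see the SYSTEM-B versus SYSTEM-C difference numerically is to sum, for each layer, the pairwise distances between the three genes that must combine to describe it. This is a simplified instance of equation (2), restricted to cooperating gene triples.

```python
def gene_distance_sum(layout):
    """Sum of pairwise distances between the genes that jointly define each
    layer (worth, takes-input-from, sends-output-to). Smaller sums mean a
    random crossover point is less likely to fall between cooperating genes."""
    return sum(abs(a - b) for triple in layout for a in triple for b in triple)

L = 30
system_b = [(l, 30 + l, 60 + l) for l in range(L)]           # grouped by function
system_c = [(3 * l, 3 * l + 1, 3 * l + 2) for l in range(L)] # grouped by layer
print(gene_distance_sum(system_b))   # 7200: larger, more disruption
print(gene_distance_sum(system_c))   # 240: smaller, schemata survive crossover
```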
7 Empirical Results
To verify the effect these disruptions might have on solutions found by evolutionary computation,
I performed the optimization of topologies for the natural language problem outlined previously
under the three genome coding schemes discussed above. In order to better measure the effect
of crossover, computations were performed with no mutation and a population size of 21 elements.
Network fitness values were computed by taking the sum of squared errors for all output nodes
throughout the presentation of a language of 413 sentences. Training was performed on 20% of this
same language. In order to verify consistent performance, all evolved networks were validated by
performing 144 bootstrap validation runs per network (Weiss and Kulikowski (1991)). Evolutionary runs were repeated 48 times, and the graphs presented here, although taken from particular
runs, are typical of results throughout all runs.
Figures 7 and 8 show the fitness for the best, average, and worst individuals in a run of
40 generations. Figure 7 plots the values for SYSTEM-A, while figure 8 shows the values for
SYSTEM-C. It quickly becomes evident that the average and even the worst elements evolve towards the
best solution faster under SYSTEM-C. This is in accordance with the prediction made by TDI
computations for these two coding schemes; the traits that make the best individual so good can
be transferred to other members of the population more efficiently under SYSTEM-C.
Figure 7: Fitness values for SYSTEM-A.
Figure 8: Fitness values for SYSTEM-C.
In order to verify that it is in fact the ability to efficiently process GA schemata that causes
this performance difference, runs with SYSTEM-C were repeated with higher mutation values. A
sample run of this type of experiment is plotted in figure 9. Notice how it now takes longer for
the GA to process and disperse good schemata through the population.
Figure 9: Fitness values for SYSTEM-C under higher mutation values.

It is important to notice that what is being computed, predicted, and verified here is simply
the ability of a GA system to process the schemata that already exist in a population. These
computations do not speak directly to the ability of the system to discover new schemata. In
fact, random mutation in some cases could generate an individual which is better than anything
currently in the population, or anything that could be evolved from the population by schema
recombination. An example of this is shown in figure 10. Notice, however, that although the best
elements in this population are better than the ones obtained in previous experiments, the system
is still slower to process the schemata. This is shown by the bigger difference between better and
worse elements within this particular run.
Figure 10: Fitness values for SYSTEM-B under higher mutation values.
8 Analog vs. Digital Sentence Representation
The method for sentence representation at the output layer used in these experiments has a number
of limitations and problems. For example, a large number of output nodes are required to remain
inactive (an output of zero) for the output pattern to be considered correct. Because of this, both the
evolutionary process and the (backpropagation) learning process find solutions where many nodes
are simply disconnected from the rest of the network, ensuring that they never activate. Although
this leads to errors for some of the patterns, these errors are small when considered within the
context of the complete language used.
Another issue introduced by this output coding scheme is the difficulty of expanding it to
include more than the three noun phrases and two verb phrases used here. The number of nodes,
and therefore connections, generated by such an expansion would make the system harder to train
and slower to respond to inputs.
In order to avoid this type of problem, I have modified the way in which sentences are represented at the output layer. Instead of having binary (and mutually exclusive) nodes representing
noun properties such as human, animal, mechanical, and inanimate, these four nodes are substituted with a single type-of-noun node, with activations ranging between 0 and 1. Something similar
can be done to describe verbs, substituting nodes for verb-of-movement, verb-of-sound-producing,
verb-of-sound-receiving, etc., with a single type-of-verb node. Another similar change
can be made to the NP1-MODIFIES-NP2, NP1-MODIFIES-NP3, NP1-MODIFIES-VP1, and
NP1-MODIFIES-VP2 nodes; they can be substituted with a single node that indicates which phrase
this one phrase is modifying. By doing this we drastically reduce both the number of nodes that
can correctly keep their output at zero and the number of nodes that need to be added to the
output layer when a new phrase is added to the sentence representation. In particular, the number
of nodes needed to represent a noun phrase is reduced from 3*NVP + NNP + 12 (where NVP is the
number of verb phrases and NNP is the number of noun phrases) to a constant 8. For the language
used here (NNP = 3, NVP = 2), that is a reduction from 3*2 + 3 + 12 = 21 nodes to 8. The number of
nodes needed to represent a verb phrase is reduced from NNP + 6 to a constant 6.
9 Empirical Results for Analog Representation
After 40 generations, typical runs of the experiments described in section 8 converge to NNs with
errors of around 5.45-5.81%. This compares very favorably with results for the digital systems, where
the variability was somewhat higher. In fact, the only digital system capable of ever producing
better results (Type II in table 1) has the largest performance variability, and sometimes performs
close to 10 percentage points below the lowest analog performer.
A sample plot of fitness values for analog systems is presented in figure 11.
Figure 11: Fitness values for analog SYSTEM-C, low mutation values.
In terms of GA schema disruption, figure 11 shows this coding taking ample advantage of
the better elements in the population. Comparing figure 11 with figure 8, for example, we note that
the system moves towards good fitness values in the presence of a superior element much more quickly
than the digital version of SYSTEM-C. Equations (1) & (2) predict this, based on the smaller total
number of connections C (since there are fewer output nodes).
10 Future Work
In order to better understand the effect of different TDI values during the evolution of NNs, analysis
of a series of different coding schemes should be carried out. For example, the systems mentioned
in section 2 could be examined, as well as others found in the GA/NN literature. By observing
how different coding schemes appear to be able (or not) to process good GA schemata, a better
understanding of TDI values can be obtained. In addition, the disruptive ability of different
TDI values should be compared with that of several ranges of mutation rates. By doing so, TDI
values can be better understood, allowing this methodology to better assist in the design of coding
schemes and the balancing of different types of evolutionary operations.
Another aspect of TDI computation to be explored relates to its effect when a population
contains elements with drastically different fitness values. For example, figure 10 shows fitness
values for a run where there is a considerable initial difference between the best and worst elements.
Although the graph shows that fitness values are starting to converge towards the 40th generation,
the system seems to process the schemata of the best elements slowly. It would be important
to determine if this is caused by the magnitude of the difference, or by other factors (such as
evolutionary pressure, size of the population, etc.).
Finally, several questions relating to the analog coding system presented in section 8 should
also be explored further. In particular, the effect of using the same number of nodes, but with
different coding schemes for nouns and verbs, should be investigated. Furthermore, the possibility
of having a GA evolve these codings should be explored.
11 Conclusion
This paper has presented a mathematical methodology for predicting the effectiveness of a GA
in processing NN topologies. The methodology is used to predict the effectiveness of several GA
coding schemes, and these predictions are shown to correspond with actual results.

In addition, different ways of representing sentences at a NN output layer are presented. Advantages of using non-binary representations are discussed, both from the point of view of expanding
the set of sentences the network can process and the efficiency of the GA used to evolve it.
References
de Garis, H. (1996). CAM-BRAIN: The Evolutionary Engineering of a Billion Neuron Artificial
Brain by 2001 Which Grows/Evolves at Electronic Speeds Inside a Cellular Automata Machine
(CAM). In Lecture Notes in Computer Science - Towards Evolvable Hardware, Vol. 1062, pages
76-98. Springer-Verlag.

Dávila, J. (1999). Exploring the Relationship Between Neural Network Topology and Optimal
Training Set by Means of Genetic Algorithms. In International Conference on Artificial Neural
Networks and Genetic Algorithms, pages 307-311. Springer-Verlag.

Dávila, J. (2000). A New Metric for Evaluating Genetic Optimization of Neural Networks. In
Proceedings of the First IEEE Symposium on Combinations of Evolutionary Computation and
Neural Networks, pages 52-58. IEEE.

Fullmer, B., Miikkulainen, R. (1991). Using marker-based genetic encoding of neural networks to
evolve finite-state behavior. In Proceedings of the First European Conference on Artificial Life,
pages 253-262.

Kitano, H. (1994). Designing Neural Networks using Genetic Algorithm with Graph Generation
System. Complex Systems, 4, pages 461-476.

Weiss, S., Kulikowski, C. (1991). Computer Systems That Learn: Classification and Prediction
Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann.

Yao, X. (1993). A review of evolutionary artificial neural networks. International Journal of
Intelligent Systems, 4, pages 203-222.