B219 Intelligent Systems
Semester 1, 2003
Artificial Neural Networks
(Ref: Negnevitsky, M., "Artificial Intelligence", Chapter 6)
BPNN in Practice
Week 3 Lecture Notes
The Hopfield Network
§ The Hopfield network was designed by analogy with the brain's
memory, which works by association.
§ For example, we can recognise a familiar face even in
an unfamiliar environment within 100-200ms.
§ We can also recall a complete sensory experience,
including sounds and scenes, when we hear only a
few bars of music.
§ The brain routinely associates one thing with another.
§ Multilayer neural networks trained with the backpropagation
algorithm are used for pattern recognition problems.
§ To emulate the human memory's associative characteristics, we
use a recurrent neural network.
§ A recurrent neural network has feedback loops from
its outputs to its inputs. The presence of such loops
has a profound impact on the learning capability of
the network.
§ Single layer n-neuron Hopfield network
§ The Hopfield network uses McCulloch and Pitts neurons with the
sign activation function as its computing element.
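One common convention for this activation function (not spelled out in the notes) is:
   Y = +1 if X > 0,   Y = −1 if X < 0,   Y keeps its previous value if X = 0,
where X is the neuron's net weighted input and Y is its output.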
§ The current state of the Hopfield network is determined by the
current outputs of all neurons, y1, y2, …, yn.
Hopfield Learning Algorithm:
§ Step 1: Assign weights
o Assign connection weights with values
wij = +1 or wij = −1 for all i ≠ j, and
wij = 0 for i = j
§ Step 2: Initialisation
o Initialise the network with the unknown input pattern:
   Oi(0) = xi,   0 ≤ i ≤ N − 1,
where Oi(k) is the output of node i at time k (here k = 0)
and xi is element i of the input pattern, equal to +1 or −1.
§ Step 3: Convergence
o Iterate until convergence is reached, using the relation:
   Oj(k + 1) = f( Σ(i = 0 to N − 1) wij Oi(k) ),   0 ≤ j ≤ N − 1,
where the function f(·) is a hard-limiting nonlinearity (the sign
function).
o Repeat the process until the node outputs remain unchanged.
o The node outputs then represent the exemplar pattern that best
matches the unknown input.
§ Step 4: Repeat for Next Pattern
o Go back to Step 2 and repeat for the next input pattern, and so
on.
§ The Hopfield network can act as an error-correcting network.
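As an illustration only (this code is not from the lecture notes), here is a minimal Python/NumPy sketch of the recall procedure above. It assumes the weights are built from stored ±1 exemplar patterns with the usual outer-product rule, and that all nodes are updated synchronously:

import numpy as np

def train_hopfield(patterns):
    """Build the weight matrix from ±1 exemplar patterns (outer-product rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)                    # wij = 0 for i = j
    return w

def recall(w, x, max_iters=100):
    """Iterate O(k+1) = f(W O(k)) with a hard-limiting (sign) nonlinearity
    until the node outputs remain unchanged (Steps 2 and 3)."""
    o = x.copy()                              # Step 2: initialise with the unknown pattern
    for _ in range(max_iters):
        o_new = np.where(w @ o >= 0, 1, -1)   # Step 3: sign of the weighted sum
        if np.array_equal(o_new, o):          # convergence: outputs unchanged
            return o_new
        o = o_new
    return o

# Store one exemplar and recover it from a corrupted copy (error correction)
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])      # one element flipped
print(recall(W, noisy))                       # recovers the stored exemplar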
Types of Learning
§ Supervised Learning
o the input vectors and the corresponding output
vectors are given
o the ANN learns to approximate the function
from the inputs to the outputs
§ Reinforcement Learning
o the input vectors and a reinforcement signal are
given
o the reinforcement signal tells how good the network's output was
§ Unsupervised Learning
o only the input vectors are given
o the ANN learns to form internal representations
or codes for the input data that can then be
used e.g. for clustering
§ From now on we will look at neural networks that use
unsupervised learning.
Hebbian Learning
§ In 1949, Donald Hebb proposed one of the key ideas
in biological learning, commonly known as Hebb’s
Law.
§ Hebb’s Law states that if neuron i is near enough to
excite neuron j and repeatedly participates in its
activation, the synaptic connection between these two
neurons is strengthened and neuron j becomes more
sensitive to stimuli from neuron i.
§ Hebb’s Law can be represented in the form of two
rules:
• If two neurons on either side of a connection are
activated synchronously, then the weight of that
connection is increased.
• If two neurons on either side of a connection are
activated asynchronously, then the weight of that
connection is decreased.
§ Hebbian learning implies that weights can only increase. To
resolve this problem, we might impose a limit on the growth of
synaptic weights.
§ This can be done by introducing a non-linear forgetting factor φ
into Hebb's Law.
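One standard way of writing Hebb's Law with forgetting (assumed here from common presentations, such as the Negnevitsky reference above) is
   Δwij(p) = α yj(p) xi(p) − φ yj(p) wij(p),
where α is the learning rate, xi(p) is the input, yj(p) is the output of neuron j at iteration p, and φ is the forgetting factor.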
§ The forgetting factor φ usually falls in the interval between
0 and 1, typically between 0.01 and 0.1, to allow only a little
"forgetting" while limiting the weight growth.
Hebbian Learning Algorithm:
§ Step 1: Initialisation
o Set initial synaptic weights and threshold to small
random values, say in an interval [0,1].
§ Step 2: Activation
o Compute the neuron output at iteration p, where n is the number
of neuron inputs and θj is the threshold value of neuron j.
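For example, with a hard-limiting (step) activation this could be written (an assumed form, not given in the notes) as
   yj(p) = 1 if Σ(i = 1 to n) xi(p) wij(p) − θj ≥ 0,   and yj(p) = 0 otherwise.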
§ Step 3: Learning
o Update the weights in the network:
   wij(p + 1) = wij(p) + Δwij(p),
where Δwij(p) is the weight correction at iteration p.
o The weight correction is determined by the generalised activity
product rule.
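A standard form of this rule (assumed here rather than taken from the notes), with λ = α/φ, is
   Δwij(p) = φ yj(p) [ λ xi(p) − wij(p) ].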
§ Step 4: Iteration
o Increase iteration p by one and go back to Step 2.
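As a rough illustration of Steps 1 to 4 (again, not taken from the notes), the Python/NumPy sketch below uses a simple step activation and the generalised activity product rule, with illustrative values for the learning rate α and forgetting factor φ:

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
alpha, phi = 0.1, 0.02                             # learning rate and forgetting factor (illustrative)
w = rng.uniform(0.0, 1.0, (n_inputs, n_outputs))   # Step 1: small random weights in [0, 1]
theta = rng.uniform(0.0, 1.0, n_outputs)           # Step 1: small random thresholds

def activate(x, w, theta):
    """Step 2: hard-limiting (step) output of each neuron j."""
    return np.where(x @ w - theta >= 0, 1.0, 0.0)

def hebbian_update(w, x, y, alpha, phi):
    """Step 3: generalised activity product rule,
    dw_ij = phi * y_j * (lam * x_i - w_ij), with lam = alpha / phi."""
    lam = alpha / phi
    return w + phi * y[np.newaxis, :] * (lam * x[:, np.newaxis] - w)

# Step 4: iterate over repeated presentations of binary input patterns
for p in range(20):
    x = rng.integers(0, 2, n_inputs).astype(float)
    y = activate(x, w, theta)
    w = hebbian_update(w, x, y, alpha, phi)

print(np.round(w, 3))   # weights of co-active input/output pairs grow, the others decay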
Competitive Learning
§ Neurons compete among themselves to be activated
§ While in Hebbian Learning, several output neurons
can be activated simultaneously, in competitive
learning, only a single output neuron is active at any
time.
§ The output neuron that wins the “competition” is
called the winner-takes-all neuron.
§ In the late 1980s, Kohonen introduced a special class of ANN
called self-organising maps. These maps are based on competitive
learning.
Self-organising Map
§ Our brain is dominated by the cerebral cortex, a very
complex structure of billions of neurons and hundreds
of billions of synapses.
§ The cortex includes areas that are responsible for different
human activities (motor, visual, auditory, etc.), and is associated
with different sensory inputs.
§ Each sensory input is mapped into a corresponding area of the
cerebral cortex. The cortex is a self-organising computational map
in the human brain.
§ Feature-mapping Kohonen model
The Kohonen Network
§ The Kohonen model provides a topological mapping.
It places a fixed number of input patterns from the
input layer into a higher dimensional output or
Kohonen layer.
§ Training in the Kohonen network begins with the winner's
neighbourhood of a fairly large size. Then, as training proceeds,
the neighbourhood size gradually decreases.
§ The lateral connections are used to create a
competition between neurons. The neuron with the
largest activation level among all neurons in the
output layer becomes the winner.
§ The winning neuron is the only neuron that produces
an output signal. The activity of all other neurons is
suppressed in the competition.
§ The lateral feedback connections produce excitatory
or inhibitory effects, depending on the distance from
the winning neuron.
§ This is achieved by the use of a Mexican hat
function which describes synaptic weights between
neurons in the Kohonen layer.
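One common way to model such a Mexican hat profile (an illustrative choice, not given in the notes) is a difference of two Gaussians,
   h(d) = a exp(−d²/2σ1²) − b exp(−d²/2σ2²),   with σ1 < σ2,
so that connections are excitatory close to the winning neuron and inhibitory further away.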
§ In the Kohonen network, a neuron learns by shifting its weights
from inactive connections to active ones. Only the winning neuron
and its neighbourhood are allowed to learn.
Competitive Learning Algorithm
§ Step 1: Initialisation
o Set initial synaptic weights to small random values, say in an
interval [0, 1], and assign a small positive value to the learning
rate parameter α.
§ Step 2: Activation and Similarity Matching
o Activate the Kohonen network by applying the input vector X, and
find the winner-takes-all (best-matching) neuron jX at iteration p,
using the minimum-distance Euclidean criterion, where n is the
number of neurons in the input layer and m is the number of
neurons in the Kohonen layer.
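Written out (in assumed notation consistent with the symbols above), the criterion selects
   jX(p) = arg min over j of ||X − Wj(p)|| = arg min over j of sqrt( Σ(i = 1 to n) [xi − wij(p)]² ),   j = 1, 2, …, m.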
§ Step 3: Learning
o Update the synaptic weights:
   wij(p + 1) = wij(p) + Δwij(p),
where Δwij(p) is the weight correction at iteration p.
o The weight correction is determined by the competitive learning
rule, where α is the learning rate parameter and Λj(p) is the
neighbourhood function centred around the winner-takes-all neuron
jX at iteration p.
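A standard way of writing this rule (assumed here) is
   Δwij(p) = α [ xi − wij(p) ]   if neuron j belongs to the neighbourhood Λj(p),
   Δwij(p) = 0                   otherwise.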
§ Step 4: Iteration
o Increase iteration p by one, go back to Step 2 and continue
until the minimum-distance Euclidean criterion is satisfied, or
until no noticeable changes occur in the feature map.
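The Python/NumPy sketch below illustrates Steps 1 to 4 for a small one-dimensional Kohonen layer. The learning-rate and neighbourhood decay schedules are illustrative assumptions rather than values given in the notes:

import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_kohonen = 2, 10                         # n input neurons, m Kohonen-layer neurons
w = rng.uniform(0.0, 1.0, (n_kohonen, n_inputs))    # Step 1: small random weights in [0, 1]
alpha, radius = 0.5, 3.0                            # learning rate and initial neighbourhood radius

def winner(x, w):
    """Step 2: winner-takes-all neuron by the minimum-distance Euclidean criterion."""
    return np.argmin(np.linalg.norm(w - x, axis=1))

def update(w, x, jx, alpha, radius):
    """Step 3: competitive learning rule; only neurons inside the
    neighbourhood of the winner shift their weights towards the input."""
    dist = np.abs(np.arange(len(w)) - jx)           # distance along the one-dimensional layer
    hood = dist <= radius
    w[hood] += alpha * (x - w[hood])
    return w

# Step 4: iterate, shrinking the neighbourhood and learning rate as training proceeds
for p in range(200):
    x = rng.uniform(0.0, 1.0, n_inputs)             # a random two-dimensional input vector
    jx = winner(x, w)
    w = update(w, x, jx, alpha, radius)
    alpha *= 0.99                                   # gradual decay (illustrative schedule)
    radius *= 0.98

print(np.round(w, 2))   # the weight vectors spread out to cover the input space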