RECURRENT NEURAL NETWORKS OR
ASSOCIATIVE MEMORIES
Ranga Rodrigo
February 24, 2014
INTRODUCTION
• In a recurrent network, the signal received at the
output is sent back to the network input.
• Such circulation of the signal is called feedback.
• Neural networks with feedback are called recurrent
neural networks.
• Recurrent neural networks:
– Hopfield neural network
– Hamming neural network
– Real Time Recurrent Network (RTRN)
– Elman neural network
– Bidirectional Associative Memory (BAM)
HOPFIELD NEURAL NETWORK
[Figure: Hopfield network structure. A single layer of N neurons; each output y_k is fed back through a one-step delay (z^-1) to the inputs of all the other neurons via weights w_kj, and each neuron has a bias weight w_k0 from a constant input of 1.]
HOPFIELD STRUCTURE
• It is a one-layer network with a regular structure,
made of many neurons connected to one another.
• Each output is fed back to the inputs after a one-step
time delay.
• There is no feedback from a neuron to itself.
• During learning, the weights wkj are modified depending
on the values of the learning vectors x.
• In retrieval mode, the input signal stimulates the
network, which, through the feedback, repeatedly
receives its own output signal at its input until the
answer stabilizes (see the update equations and the
sketch below).
Starting from the input vector, the state is updated through the feedback until it stabilizes:

$$y_j(0) = x_j$$

$$y_k(t) = f\left( \sum_{j=1,\, j \neq k}^{N} w_{kj}\, y_j(t-1) + w_{k0} \right), \qquad k = 1, \ldots, N$$
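A minimal sketch of this retrieval loop in MATLAB, assuming bipolar ±1 states, a sign activation for f, and a weight matrix W with a zero diagonal (the function name and arguments are illustrative):

    % Hopfield retrieval: iterate the feedback until the state stabilizes.
    % W : N-by-N weight matrix with diag(W) == 0 (no self-feedback)
    % x : N-by-1 bipolar (+/-1) input vector, b : N-by-1 bias vector (w_k0)
    function y = hopfield_retrieve(W, x, b, max_iter)
        y = x(:);                      % y(0) = x
        for t = 1:max_iter
            y_new = sign(W * y + b);   % synchronous update of all neurons
            y_new(y_new == 0) = 1;     % treat sign(0) as +1
            if isequal(y_new, y)       % the answer has stabilized
                break;
            end
            y = y_new;
        end
    end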
LEARNING METHODS FOR HOPFIELD: HEBB
• The generalized Hebbian rule:

$$w_{kj} = \frac{1}{M} \sum_{i=1}^{M} x_k^i\, x_j^i$$

• for M learning vectors of the form

$$\mathbf{x}^i = \left( x_1^i, \ldots, x_n^i \right), \quad i = 1, \ldots, M$$
• The maximum number of patterns that the network
is able to memorize using this rule is only about
13.8% of the number of neurons.
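A sketch of this rule in MATLAB, assuming the learning vectors are stored as the columns of an n-by-M matrix X (variable names are illustrative):

    % Generalized Hebbian rule: w_kj = (1/M) * sum over i of x_k^i * x_j^i
    [n, M] = size(X);      % X holds one learning vector per column
    W = (X * X') / M;      % outer-product (Hebbian) weight matrix
    W(1:n+1:end) = 0;      % zero the diagonal: no self-feedback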
LEARNING METHODS FOR HOPFIELD: PSEUDOINVERSE
• for M learning vectors of the form

$$\mathbf{x}^i = \left( x_1^i, \ldots, x_n^i \right), \quad i = 1, \ldots, M$$

• Form a matrix of learning vectors:

$$X = [\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^M]$$

• The n × n weight matrix W is found as

$$W = X \left( X^T X \right)^{-1} X^T$$
• In MATLAB:
– W = X*pinv(X)
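With this W, every stored vector is a fixed point of the network, since WX = X(X^T X)^{-1} X^T X = X. A quick numerical check, assuming the columns of X are linearly independent:

    W = X * pinv(X);        % pseudoinverse rule: X * (X'*X)^(-1) * X'
    disp(norm(W*X - X));    % close to zero: each pattern is reproduced exactly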
HAMMING NEURAL NETWORK
OPERATION OF THE HAMMING NN
• In the first layer, there are p neurons, which determine the
Hamming distance between the input vector and each of the
p desired vectors coded in the weights of this layer.
• The second layer is called MAXNET. It is a layer corresponding
to the Hopfield network. However, in this layer feedback
connections from each neuron to itself are added, with weights
equal to 1. The weights between different neurons of this
layer are chosen to be inhibitory (negative).
• Thus, in the MAXNET layer all outputs are extinguished
except the one that was strongest in the first layer.
• The neuron of this layer identified as the winner then,
through the weights of the output neurons with a linear
activation function, retrieves the output vector associated
with the pattern coded in the first layer (a sketch of the
whole pass follows this list).
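A minimal sketch of one pass through such a network in MATLAB, assuming bipolar ±1 vectors, the p stored patterns as the columns of an n-by-p matrix P, and an illustrative inhibition weight below 1/p (all names are assumptions). For bipolar vectors x^T p_k = n - 2 d_H(x, p_k), so the first layer can score each pattern by its number of matching components:

    % First layer: matching score n - HammingDistance(x, p_k) per pattern.
    n = size(P, 1);
    a = (P' * x + n) / 2;              % p-by-1 vector of matching scores

    % MAXNET: self-weight 1, small negative lateral weights extinguish
    % every output except the strongest.
    inhib = 1 / (2 * numel(a));        % lateral inhibition, must be < 1/p
    y = a;
    for t = 1:1000                     % iterate until one output survives
        y = max(0, y - inhib * (sum(y) - y));
        if nnz(y) <= 1, break; end
    end
    [~, winner] = max(y);              % index of the closest stored pattern
    out = P(:, winner);                % linear output layer retrieves the vector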