Part 3.1
Binary neurons and networks
Modeling the neural hardware
Srdjan Ostojic ([email protected])
What you have seen so far
Sensory Stimuli → Organism → Behavioral Response
Part 1: abstract models of behavior (Rescorla-Wagner, RL)
Part 2: relating neural activity to behavior
Part 3: how is the neural activity generated? Neural hardware, biophysics, and network dynamics
What does the hardware look like?
Ramón y Cajal (Nobel Prize 1906): the neuron doctrine
Neurons = basic units of computation
Information flow: dendrites → soma → axon
The Typical Cortical Neuron
- dendrites (Ø 1 µm), carrying about 1 synapse per 0.5 µm
- synapses (~10,000)
- soma (20 µm), where the membrane potential is integrated
- axon (4 cm, Ø 1 µm), carrying action potentials to other neurons
The synapse: a presynaptic spike reaches the presynaptic neuron's axon terminal, vesicles release transmitter into the synaptic cleft, and the postsynaptic neuron's membrane potential is affected.
Part 3 of the course
- Description of neural dynamics
- Biophysics of membrane potential dynamics and synaptic transmission
- Dynamics of networks of neurons
TODAY: the simplest possible description of neurons and networks
The Binary Neuron
Synaptic inputs $x_1, x_2, x_3, \ldots, x_N$ arrive with synaptic weights $w_1, w_2, w_3, \ldots, w_N$.
Summation: $\sum_{k=1}^{N} w_k x_k$
Threshold: the output is
$y = H\left( \sum_{k=1}^{N} w_k x_k - b \right)$,
where $H$ is the Heaviside step function (1 above threshold, 0 below) and $b$ is the threshold.
McCulloch and Pitts (1943)
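A minimal Python sketch of this unit (the weights, inputs, and the convention H(0) = 1 below are illustrative assumptions, not from the slides):

```python
import numpy as np

def binary_neuron(x, w, b):
    """McCulloch-Pitts unit: y = H(sum_k w_k x_k - b), with H(0) := 1."""
    return 1 if np.dot(w, x) - b >= 0 else 0

# Arbitrary example: three inputs with hand-picked weights and threshold.
x = np.array([1, 0, 1])
w = np.array([0.5, -0.3, 0.8])
print(binary_neuron(x, w, b=1.0))  # 1, since 0.5 + 0.8 - 1.0 = 0.3 >= 0
```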
What can binary neurons compute?
Inputs: 0 or 1. Outputs: 0 or 1.
Neuron = basic logical unit
Brain = digital machine
McCulloch and Pitts (1943)
Historical perspective: 1930-1950, birth of the first computers
Claude Shannon: information theory of digital signals
Alan Turing: universal capabilities of digital machines
John von Neumann: architecture of universal computers
Can we construct an electronic brain? Birth of Artificial Intelligence
Two synaptic inputs
$y = H\left( \sum_{k=1}^{N} w_k x_k - b \right) = H(\vec{w} \cdot \vec{x} - b)$, with $\vec{x} = (x_1, x_2)$ and $\vec{w} = (w_1, w_2)$.
The line $\vec{w} \cdot \vec{x} - b = 0$ separates the plane into two regions: $y = 1$ on the side the weight vector $\vec{w}$ points into, $y = 0$ on the other.
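A short sketch of this geometry: evaluating the neuron on a grid of points shows the plane split into a y = 0 and a y = 1 region (the weight vector and threshold below are arbitrary choices):

```python
import numpy as np

w = np.array([1.0, -2.0])   # arbitrary weight vector
b = 0.5                     # arbitrary threshold

# Evaluate y = H(w.x - b) on a grid covering the (x1, x2) plane.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
y = (w[0] * x1 + w[1] * x2 - b >= 0).astype(int)

# y is 0 on one side of the line w.x = b and 1 on the other;
# it can be visualized with e.g. matplotlib's contourf(x1, x2, y).
print(y.mean())  # fraction of the sampled plane where the output is 1
```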
Three synaptic inputs
$y = H(\vec{w} \cdot \vec{x} - b)$, with $\vec{x} = (x_1, x_2, x_3)$ and $\vec{w} = (w_1, w_2, w_3)$.
The plane $\vec{w} \cdot \vec{x} - b = 0$ separates three-dimensional space into two regions: $y = 1$ on the side of $\vec{w}$, $y = 0$ on the other.
N synaptic inputs
$y = H(\vec{w} \cdot \vec{x} - b)$, with $\vec{x} = (x_1, \ldots, x_N)$ and $\vec{w} = (w_1, \ldots, w_N)$.
$\vec{w} \cdot \vec{x} - b = 0$ defines a hyperplane that separates the space into two regions: the binary neuron is a binary classifier.
Binary classification task
Activity of neurons in MT: can an upstream neuron read out the motion direction?
Neural readout of motion direction
Neuron 1 and neuron 2 feed a readout unit with weights w1 and w2; output = 1 if motion is to the right, 0 if to the left.
With suitable weights, the readout is correct: the binary neuron acts as a linear classifier.
The synaptic weights need to be adjusted! Learning = modification of synaptic weights.
Learning in the Binary Neuron
Synaptic inputs $x_1, \ldots, x_N$: given.
Desired output $y$: given.
Synaptic weights $w_1, \ldots, w_N$: adjusted.
Synaptic plasticity underlies learning.
Example: classify red points as 1 and blue points as 0.
Is there an automatic rule for updating the synaptic weights?
The perceptron learning rule (Rosenblatt 1958)
Training set of p patterns $(\vec{x}^{(k)}, d^{(k)})$, $k = 1, \ldots, p$, where $\vec{x}^{(k)}$ is an input vector and $d^{(k)}$ is a desired output.
On every step, for each pattern:
1. compute the output $y = H(\vec{w} \cdot \vec{x}^{(k)} - b)$
2. if $y \neq d^{(k)}$, update the weights: $w_i \to w_i + (d^{(k)} - y)\, x_i^{(k)}$
Converges in a finite number of steps if a solution exists.
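A sketch of the rule in Python, folding the threshold b into the weight vector as a weight on a constant -1 input; the learning rate, initialization, and toy data are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, d, eta=1.0, max_epochs=100):
    """Perceptron rule: if y != d, w += eta * (d - y) * x.
    X: (p, N) input patterns; d: (p,) desired outputs in {0, 1}."""
    p, N = X.shape
    Xb = np.hstack([X, -np.ones((p, 1))])  # constant -1 input: its weight plays the role of b
    w = np.random.randn(N + 1) * 0.1
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(Xb, d):
            y = 1 if np.dot(w, x) >= 0 else 0
            if y != target:
                w += eta * (target - y) * x
                errors += 1
        if errors == 0:          # every pattern classified correctly: converged
            break
    return w

# Linearly separable toy set (illustrative): label 1 iff x1 + x2 is large.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.9, 0.9]])
d = np.array([0, 0, 0, 1, 1])
w = train_perceptron(X, d)
```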
The perceptron learning rule, step by step
Aim: classify red points as 1 and blue points as 0.
Start with a random set of synaptic weights. Choose a misclassified pattern and update the weight vector. Choose the next misclassified pattern and update again. Once every pattern is correctly classified, learning terminates.
What can a perceptron do?
Rosenblatt: « The perceptron may eventually be able to learn, make decisions, and translate languages »
Example: train neurons to recognize hand-written digits.
Train ten binary neurons; the inputs are vectors of pixel values corresponding to digits.
Neuron 3: output = 1 if the input is the digit 3, output = 0 otherwise.
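A sketch of this ten-neuron setup using scikit-learn's small 8x8 digits dataset (the dataset choice, the argmax readout across the ten neurons, and the training settings are my assumptions; it reuses the train_perceptron sketch above):

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()                 # 8x8 images flattened to 64 pixel values
X, labels = digits.data / 16.0, digits.target

# Ten binary neurons, one per digit: neuron d is trained to output 1
# iff the input is digit d, using the perceptron rule sketched above.
weights = [train_perceptron(X, (labels == d).astype(int), max_epochs=20)
           for d in range(10)]

# One readout convention: predict the digit whose neuron has the largest input.
Xb = np.hstack([X, -np.ones((len(X), 1))])
pred = np.argmax(Xb @ np.array(weights).T, axis=1)
print("training accuracy:", (pred == labels).mean())
```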
The perceptron (Frank Rosenblatt)
Can the perceptron perform universal binary computations?
Binary inputs $x_i \in \{0, 1\}$: can any function $y = f(x_1, \ldots, x_N)$ be implemented by a perceptron?
Implementing binary functions with a perceptron
N = 2: only 4 possible input patterns: (0,0), (0,1), (1,0), (1,1).
Example: y = AND(x1, x2). A line can separate (1,1) from the other three patterns, so AND is implementable.
Example: y = OR(x1, x2). A line can separate (0,0) from the other three patterns, so OR is implementable.
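Concretely, hand-picked weights and thresholds implement both functions; a sketch (the particular values are standard textbook choices, not from the slides):

```python
import numpy as np

def binary_neuron(x, w, b):
    return 1 if np.dot(w, x) - b >= 0 else 0

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
for x in patterns:
    y_and = binary_neuron(x, w=[1, 1], b=1.5)  # fires only for (1, 1)
    y_or  = binary_neuron(x, w=[1, 1], b=0.5)  # fires for everything but (0, 0)
    print(x, "AND:", y_and, "OR:", y_or)
```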
Implementing binary functions with a perceptron
Example: y = XOR(x1, x2). IMPOSSIBLE! No single line can separate (0,1) and (1,0) from (0,0) and (1,1).
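A quick numerical illustration of this impossibility (a finite grid search over weights and threshold, so a demonstration consistent with the geometric argument rather than a proof; the grid range is an arbitrary choice):

```python
import numpy as np
from itertools import product

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor = [0, 1, 1, 0]

grid = np.linspace(-3, 3, 61)
found = False
for w1, w2, b in product(grid, grid, grid):
    out = [1 if w1 * x1 + w2 * x2 - b >= 0 else 0 for x1, x2 in patterns]
    if out == xor:
        found = True
        break
print("XOR solution found:", found)   # False: no single binary neuron works
```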
A binary neuron can only implement linearly separable functions
Two sets are linearly separable if there exists a hyperplane separating them.
Marvin Minsky and Seymour Papert, Perceptrons (1969)
AI winter: a roughly ten-year halt in research and funding.
What can binary neurons compute?
Single binary neurons can compute only linearly separable functions.
What can feedforward networks compute?
In a single-layer network, inputs $x_1, \ldots, x_N$ feed output units $y_1, \ldots, y_N$ directly. Each output unit is still a single binary neuron, so single-layer networks can compute only linearly separable functions.
What can multilayer feedforward networks compute?
Inputs $x_1, \ldots, x_N$ feed a layer of hidden units, which in turn feed the outputs $y_1, y_2$.
Multilayer networks can compute an XOR
One hidden unit computes OR(x1, x2), another computes NAND(x1, x2), and the output unit computes the AND of the two: XOR(x1, x2) = AND( OR(x1, x2), NAND(x1, x2) ).
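A sketch of this construction from three binary neurons (the weight and threshold values are illustrative choices; NAND is implemented with negative weights):

```python
import numpy as np

def binary_neuron(x, w, b):
    return 1 if np.dot(w, x) - b >= 0 else 0

def xor(x1, x2):
    h_or   = binary_neuron([x1, x2], w=[1, 1],   b=0.5)    # OR
    h_nand = binary_neuron([x1, x2], w=[-1, -1], b=-1.5)   # NAND
    return binary_neuron([h_or, h_nand], w=[1, 1], b=1.5)  # AND of the two

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", xor(x1, x2))   # 0, 1, 1, 0
```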
Multilayer networks can compute any binary function!
Any binary function can be represented using only ANDs and ORs (plus negations of inputs) [disjunctive normal form and conjunctive normal form].
Multilayer networks have universal computational properties.
... but how to train them?
Training multilayer networks
Given a set of p training patterns, the aim is to minimize a cost function, measuring the mismatch between actual and desired outputs, by changing the synaptic weights in the network.
Backpropagation algorithm (Rumelhart, Hinton and Williams 1986)
Backpropagation drove the renaissance of artificial neural networks since the 80's, but it requires retrograde propagation of error signals along axons and synapses: not compatible with biology.
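Binary threshold units are not differentiable, so backpropagation is usually illustrated with smooth sigmoid units instead; a minimal sketch on the XOR task (layer sizes, learning rate, iteration count, and the squared-error cost are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0.], [1.], [1.], [0.]])

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

eta = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the cost E = sum (y - d)^2 / 2
    delta2 = (y - d) * y * (1 - y)            # error at the output layer
    delta1 = (delta2 @ W2.T) * h * (1 - h)    # error propagated back to the hidden layer
    W2 -= eta * h.T @ delta2; b2 -= eta * delta2.sum(0)
    W1 -= eta * X.T @ delta1; b1 -= eta * delta1.sum(0)

# Typically approaches [0, 1, 1, 0]; some initializations can stall in a local minimum.
print(np.round(y.ravel(), 2))
```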
Hebb's postulate (Donald Hebb, 1949)
"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
If A and B are active at the same time, the synaptic weight from A to B increases.
The perceptron learning rule revisited (Rosenblatt 1958)
Training set of p patterns $(\vec{x}^{(k)}, d^{(k)})$, where $\vec{x}^{(k)}$ is an input vector and $d^{(k)}$ is a desired output. On every step, for each pattern: compute the output $y$; if $y \neq d^{(k)}$, update the weights $w_i \to w_i + (d^{(k)} - y)\, x_i^{(k)}$.
The update is a product of an input term and an output term: Hebbian learning.
So far: feedforward networks
Inputs $x_1, \ldots, x_N$ feed hidden units, which feed the outputs $y_1, y_2$; activity flows in one direction only.
More general: recurrent connections
Units connect back onto each other, so activity can flow around loops rather than only from input to output.
Recurrent networks: need to look at the dynamics!
External inputs $x_1, x_2, \ldots, x_N$ drive N units with activities $y_1, y_2, \ldots, y_N$; each unit also receives recurrent input from the other units.
Network dynamics in discrete time
Network of N units. The activity of neuron i at the next timestep is set by the total input from the network plus the external input at the present timestep:
$y_i(t+1) = H\left( \sum_{j=1}^{N} w_{ij}\, y_j(t) + x_i \right)$
Change of notations: active = +1, inactive = -1, and the threshold function H is replaced by the sign function:
$y_i(t+1) = \mathrm{sgn}\left( \sum_{j=1}^{N} w_{ij}\, y_j(t) + x_i \right)$
Example: network of N = 3 units
activity: $\vec{y}(t) = (y_1(t), y_2(t), y_3(t))$
synaptic matrix: $W = \begin{pmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \\ w_{31} & w_{32} & w_{33} \end{pmatrix}$
dynamics: $\vec{y}(t+1) = \mathrm{sgn}[W \vec{y}(t)]$
Example: network of N = 3 units
initial condition: $\vec{y}(t=1) = (1, 1, 1)$
synaptic matrix: $W = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}$
dynamics: $\vec{y}(t+1) = \mathrm{sgn}[W \vec{y}(t)]$
Dynamics, first time step:
$W\vec{y}(1) = (1, 1, -1)$, so $\vec{y}(2) = \mathrm{sgn}[W\vec{y}(1)] = (1, 1, -1)$
Dynamics, second time step:
$W\vec{y}(2) = (3, 3, -3)$, so $\vec{y}(3) = \mathrm{sgn}[W\vec{y}(2)] = (1, 1, -1) = \vec{y}(2)$!
The activity does not evolve anymore = fixed point.
Fixed points of network dynamics
The external input acts only on the first step: it sets the initial condition.
The dynamics stop when $y_i(t+1) = y_i(t)$, i.e. when
$y_i = \mathrm{sgn}\left( \sum_{j=1}^{N} w_{ij}\, y_j \right)$
fixed point = output of the network
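A minimal sketch simulating these dynamics until a fixed point is reached, using a 3-unit matrix that stores the pattern (1, 1, -1) as in the worked example above (the sign convention sgn(0) := +1 is an assumption):

```python
import numpy as np

# 3-unit example: W stores the pattern (1, 1, -1)
W = np.array([[ 1,  1, -1],
              [ 1,  1, -1],
              [-1, -1,  1]])

def run_dynamics(W, y0, max_steps=100):
    """Iterate y(t+1) = sgn(W y(t)) until y(t+1) = y(t)."""
    y = np.array(y0)
    for t in range(max_steps):
        y_next = np.where(W @ y >= 0, 1, -1)   # sgn with sgn(0) := +1
        if np.array_equal(y_next, y):          # y(t+1) = y(t): fixed point
            return y, t
        y = y_next
    return y, max_steps

print(run_dynamics(W, [1, 1, 1]))     # reaches the fixed point (1, 1, -1)
print(run_dynamics(W, [1, -1, -1]))   # same fixed point from a different start
```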
Start from a different initial condition
initial condition: $\vec{y}(t=1) = (1, -1, -1)$; same synaptic matrix W and same dynamics $\vec{y}(t+1) = \mathrm{sgn}[W \vec{y}(t)]$.
First time step: $W\vec{y}(1) = (1, 1, -1)$, so $\vec{y}(2) = (1, 1, -1)$: same fixed point!
Attractors
Given an input (= initial condition), the network dynamics evolve to the closest fixed point.
Different initial conditions can lead to the same fixed point.
Fixed points = attractors of the dynamics.
Attractor neural networks store patterns as fixed points = memories.
The fixed points depend on the synaptic weights: how should the weights be set to encode desired patterns?
Hopfield learning rule for recurrent networks
Given a set of p desired patterns $\vec{\xi}^{(1)}, \ldots, \vec{\xi}^{(p)}$, set the weights to (Hopfield 1982):
$w_{ij} = \frac{1}{N} \sum_{k=1}^{p} \xi_i^{(k)} \xi_j^{(k)}$
A pseudo-Hebbian rule.
The connections are symmetric, so the network possesses an energy function; the network dynamics minimize the energy function, and the stored patterns are located at its minima.
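A sketch of storing and recalling patterns with this rule (the network size, pattern count, noise level, and the zeroed diagonal are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 100, 5
xi = rng.choice([-1, 1], size=(p, N))   # p random +/-1 patterns

W = (xi.T @ xi) / N                     # Hopfield rule: w_ij = (1/N) sum_k xi_i^k xi_j^k
np.fill_diagonal(W, 0)                  # no self-connections (a common convention)

# Corrupt pattern 0 by flipping 10 of its entries, then run the dynamics.
y = xi[0].copy()
flip = rng.choice(N, size=10, replace=False)
y[flip] *= -1

for _ in range(20):                     # synchronous updates
    y = np.where(W @ y >= 0, 1, -1)

# Typically True this far below capacity: the attractor restores the pattern.
print("recovered pattern 0:", np.array_equal(y, xi[0]))
```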
Example: a network of N = 100 neurons. Every box represents a neuron; all neurons are interconnected. Black = active (+1), white = inactive (-1).
Example of learning in a recurrent network
Desired output: a pattern $\vec{\xi}^{(1)}$. Synaptic weights: $w_{ij} = \frac{1}{N} \xi_i^{(1)} \xi_j^{(1)}$
Pairs of neurons in the same state in the pattern are connected with positive weight (w = +1, up to the 1/N factor); pairs in opposite states with negative weight (w = -1).
$\vec{\xi}^{(1)}$ is a fixed point of the dynamics
Fixed-point equation: $\xi_i^{(1)} = \mathrm{sgn}\left( \sum_j w_{ij}\, \xi_j^{(1)} \right)$, with $w_{ij} = \frac{1}{N} \xi_i^{(1)} \xi_j^{(1)}$.
Indeed, $\sum_j w_{ij}\, \xi_j^{(1)} = \frac{1}{N} \xi_i^{(1)} \sum_j (\xi_j^{(1)})^2 = \xi_i^{(1)}$, since $(\xi_j^{(1)})^2 = 1$ for every j, and $\mathrm{sgn}(\xi_i^{(1)}) = \xi_i^{(1)}$.
Example of learning in a recurrent network
The pattern is stored in the network and recovered from suitable initial conditions.
The inverse pattern $-\vec{\xi}^{(1)}$ is stored too!
Storing two patterns $\vec{\xi}$ and $\vec{\eta}$
$w_{ij} = \frac{ \xi_i \xi_j + \eta_i \eta_j }{N}$
The recurrent input to neuron i when the network is in state $\vec{\xi}$ is $\sum_j w_{ij}\, \xi_j = \xi_i + \eta_i C$, with the "cross-talk" term
$C = \frac{1}{N} \sum_{j=1}^{N} \eta_j \xi_j$
the overlap between the patterns. If the overlap is small, both patterns remain fixed points.
Storing three patterns: even more cross-talk!
Limitations of Hopfield networks
- inverse patterns are always stored
- if three or more patterns are stored, spurious memories can appear
- the number of random patterns that can be stored is p = 0.138 N (probed numerically in the sketch below)
- catastrophe: if the capacity is exceeded, all patterns are erased
- biologically unrealistic (symmetry between -1 and +1, no separation between excitation and inhibition, infinite number of synaptic states)
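A rough empirical probe of the capacity limit: store increasing numbers of random patterns and check whether each stored pattern is still an exact fixed point (the network size and pattern counts are arbitrary choices, and the 0.138 N figure strictly holds for large N with a small tolerated error rate):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200

def fraction_stable(p):
    """Store p random patterns; return the fraction that are exact fixed points."""
    xi = rng.choice([-1, 1], size=(p, N))
    W = (xi.T @ xi) / N
    np.fill_diagonal(W, 0)              # no self-connections
    stable = [np.array_equal(np.where(W @ x >= 0, 1, -1), x) for x in xi]
    return np.mean(stable)

for p in [5, 10, 20, 30, 40]:
    print(p, fraction_stable(p))        # stability degrades as p approaches ~0.138 N
```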
Binary neurons and networks: summary
- Binary neurons act as binary linear classifiers
- Learning rules can be used to train neurons to produce a desired output
- Single-layer feedforward networks can compute linearly separable operations [PERCEPTRON, Hebbian learning rule]
- Multilayer feedforward networks can compute any binary function
- Recurrent networks can memorize and recall patterns [ATTRACTOR networks]
Outlook
Binary neurons and networks lead in two directions:
- Artificial neural networks. Aim: solve machine-learning problems; inspired by biology.
- Neuroscience. Aim: understand how the brain works; constrained by biology.