Neural Networks Architecture
Baktash Babadi
IPM, SCS
Fall 2004
The Neuron Model
Architectures (1)
Feed Forward Networks

- The neurons are arranged in separate layers
- There are no connections between neurons in the same layer
- The neurons in one layer receive inputs from the previous layer
- The neurons in one layer deliver their outputs to the next layer
- The connections are unidirectional (hierarchical), as sketched below
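As an illustrative aside (not from the original slides), here is a minimal numpy sketch of a feed-forward pass; the layer sizes and the ReLU nonlinearity are assumptions chosen for the example:

import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: 4 inputs -> 3 hidden units -> 2 outputs.
W1 = rng.normal(size=(3, 4))   # connections from the input layer to the hidden layer
W2 = rng.normal(size=(2, 3))   # connections from the hidden layer to the output layer

def feed_forward(x):
    """One unidirectional pass: each layer feeds only the next layer."""
    h = np.maximum(0.0, W1 @ x)   # hidden layer; no connections within a layer
    return W2 @ h                 # output layer

print(feed_forward(np.array([1.0, 0.5, -0.2, 0.3])))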
Architectures (2)
Recurrent Networks

- Some connections run from a layer back to previous layers
Architectures (3)
Associative Networks

- There is no hierarchical arrangement
- The connections can be bidirectional
Why Feed Forward?
Why Recurrent/Associative?
An Example of Associative Networks: The Hopfield Network
John Hopfield (1982)

- Associative memory via artificial neural networks
- A solution for optimization problems
- Grounded in statistical mechanics
Neurons in the Hopfield Network
The neurons are binary units:

- They are either active (1) or passive (0)
- Alternatively, + or –

The network contains N neurons. The state of the network is described as a vector of 0s and 1s:

$U = (u_1, u_2, \ldots, u_N) = (0, 1, 0, 1, \ldots, 0, 0, 1)$
The Architecture of the Hopfield Network
The network is fully interconnected:

- All the neurons are connected to each other
- The connections are bidirectional and symmetric (see the sketch below):

$w_{i,j} = w_{j,i}$

The setting of the weights depends on the application.
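A minimal sketch (assuming numpy; the size N = 5 is arbitrary) of building such a symmetric weight matrix with no self-connections:

import numpy as np

N = 5                        # assumed network size for illustration
rng = np.random.default_rng(1)

W = rng.normal(size=(N, N))
W = (W + W.T) / 2.0          # enforce symmetry: w[i, j] == w[j, i]
np.fill_diagonal(W, 0.0)     # no neuron connects to itself

assert np.allclose(W, W.T)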
Updating the Hopfield Network
The state of the network changes at each time step. There are four updating modes (a sketch of each follows this list):

- Serial-Random: the state of a single, randomly chosen neuron is updated at each time step
- Serial-Sequential: the state of a single neuron is updated at each time step, in a fixed sequence
- Parallel-Synchronous: all the neurons are updated synchronously at each time step
- Parallel-Asynchronous: the neurons that are not refractory are updated at the same time
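As an illustrative sketch (not from the slides), the helper below returns which neuron indices would be updated at one time step under each mode; the refractory set used by the parallel-asynchronous mode is an assumed placeholder:

import numpy as np

def units_to_update(n, mode, step, rng, refractory=frozenset()):
    """Indices of the neurons updated at one time step, per updating mode."""
    if mode == "serial-random":
        return [int(rng.integers(n))]      # one randomly chosen neuron
    if mode == "serial-sequential":
        return [step % n]                  # one neuron, in a fixed cyclic order
    if mode == "parallel-synchronous":
        return list(range(n))              # all neurons at once
    if mode == "parallel-asynchronous":
        return [i for i in range(n) if i not in refractory]
    raise ValueError(mode)

rng = np.random.default_rng(2)
print(units_to_update(5, "serial-random", 0, rng))      # e.g. [3]
print(units_to_update(5, "serial-sequential", 7, rng))  # [2]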
The Updating Rule (1)
Here we assume that updating is serial-random (a runnable sketch follows). Updating continues until a stable state is reached.

- Each neuron receives a weighted sum of the inputs from the other neurons:

$h_j = \sum_{i=1,\, i \neq j}^{N} u_i w_{j,i}$

- If the input $h_j$ is non-negative, the state of the neuron will be 1, otherwise 0:

$u_j = \begin{cases} 1 & \text{if } h_j \geq 0 \\ 0 & \text{if } h_j < 0 \end{cases}$
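A minimal runnable sketch of this serial-random rule (assuming numpy and a symmetric, zero-diagonal weight matrix W such as the one built above):

import numpy as np

def update_unit(u, W, j):
    """Apply the updating rule to neuron j of the 0/1 state vector u."""
    h_j = W[j] @ u               # weighted sum; W[j, j] == 0 excludes i == j
    u[j] = 1 if h_j >= 0 else 0
    return u

def run_serial_random(u, W, steps, rng):
    """Serial-random mode: update one randomly chosen neuron per time step."""
    for _ in range(steps):
        update_unit(u, W, int(rng.integers(len(u))))
    return u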
The Updating Rule (2)
Convergence of the Hopfield Network (1)
Does the network eventually reach a stable state (convergence)?
To evaluate this, an 'energy' value is associated with the network (computed in the sketch below):

$E = -\frac{1}{2} \sum_{j} \sum_{i=1,\, i \neq j}^{N} w_{j,i} u_i u_j$

The system has converged when the energy is minimized.
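A direct sketch of this energy function (assuming numpy and a zero-diagonal W, so the i = j terms vanish automatically):

import numpy as np

def energy(u, W):
    """E = -1/2 * sum over j, i != j of w[j, i] * u[i] * u[j]."""
    return -0.5 * u @ W @ u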
Convergence of the Hopfield Network (2)
Why energy?

- An analogy with spin-glass models of ferromagnetism (the Ising model), in which binary spins sit on a lattice:

$w_{i,j} = \frac{k}{d_{i,j}}$, where $d_{i,j}$ is the distance between units $i$ and $j$

$u_j$: the spin of unit $j$

$h_j = \sum_{i=1,\, i \neq j}^{N} w_{j,i} u_i$: the local field exerted upon unit $j$

$e_j = -\frac{1}{2} h_j u_j$: the potential energy of unit $j$

$E = \sum_j e_j = -\frac{1}{2} \sum_j \sum_{i=1,\, i \neq j}^{N} w_{j,i} u_i u_j$: the overall potential energy of the system

[Figure: a lattice of binary (0/1) spins]

- The system is stable when the energy is minimized.
Convergence of the Hopfield Network (3)
Why convergence?

Recall the local field and the updating rule:

$h_j = \sum_{i=1,\, i \neq j}^{N} u_i w_{j,i}$, $\qquad u_j = \begin{cases} 1 & \text{if } h_j \geq 0 \\ 0 & \text{if } h_j < 0 \end{cases}$

The energy can be rewritten in terms of the local fields:

$E = -\frac{1}{2} \sum_j \sum_{i=1,\, i \neq j}^{N} w_{j,i} u_i u_j = -\frac{1}{2} \sum_j u_j \sum_{i=1,\, i \neq j}^{N} w_{j,i} u_i = -\frac{1}{2} \sum_j u_j h_j$

Consider the four possible cases for a unit $j$:

- if $h_j \geq 0$ and $u_j = 1$, then $u_j$ will not change $\Rightarrow u_j h_j = h_j \geq 0$
- if $h_j \geq 0$ and $u_j = 0$, then $u_j$ will change $\Rightarrow u_j h_j = 0$
- if $h_j < 0$ and $u_j = 0$, then $u_j$ will not change $\Rightarrow u_j h_j = 0$
- if $h_j < 0$ and $u_j = 1$, then $u_j$ will change $\Rightarrow u_j h_j = h_j < 0$

In each case, $u_j h_j$ is maximal when $u_j$ does not change, so $E = -\frac{1}{2} \sum_j u_j h_j$ is minimal when the $u_j$ values no longer change.
Convergence of the Hopfield Network (4)
The change of $E$ with updating: suppose a single unit $k$ is updated while all the other units stay fixed. Using $E = -\frac{1}{2} \sum_j u_j h_j$:

$\Delta E = E_{new} - E_{old} = \left(-\frac{1}{2} \sum_{j \neq k} u_j h_j - \frac{1}{2} u_k^{new} h_k\right) - \left(-\frac{1}{2} \sum_{j \neq k} u_j h_j - \frac{1}{2} u_k^{old} h_k\right) = -\frac{1}{2} \left(u_k^{new} - u_k^{old}\right) h_k = -\frac{1}{2} \Delta u_k h_k$

- if $u_k^{old} = 1$ and $h_k \geq 0 \Rightarrow u_k^{new} = 1$, $\Delta u_k = 0 \Rightarrow -\frac{1}{2} \Delta u_k h_k = 0$
- if $u_k^{old} = 1$ and $h_k < 0 \Rightarrow u_k^{new} = 0$, $\Delta u_k = -1 \Rightarrow -\frac{1}{2} \Delta u_k h_k = \frac{1}{2} h_k < 0$
- if $u_k^{old} = 0$ and $h_k \geq 0 \Rightarrow u_k^{new} = 1$, $\Delta u_k = 1 \Rightarrow -\frac{1}{2} \Delta u_k h_k = -\frac{1}{2} h_k \leq 0$
- if $u_k^{old} = 0$ and $h_k < 0 \Rightarrow u_k^{new} = 0$, $\Delta u_k = 0 \Rightarrow -\frac{1}{2} \Delta u_k h_k = 0$

In each case the energy decreases or remains constant; thus the system tends to stabilize (an empirical check follows).
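A self-contained sketch (assuming numpy; the network size and random weights are arbitrary) that empirically checks this result: under serial-random updating with symmetric, zero-diagonal weights, the energy never increases:

import numpy as np

rng = np.random.default_rng(3)
N = 20
W = rng.normal(size=(N, N))
W = (W + W.T) / 2.0              # symmetric weights
np.fill_diagonal(W, 0.0)         # no self-connections

u = rng.integers(0, 2, size=N).astype(float)
E = -0.5 * u @ W @ u
for step in range(500):
    j = int(rng.integers(N))
    u[j] = 1.0 if W[j] @ u >= 0 else 0.0   # serial-random updating rule
    E_new = -0.5 * u @ W @ u
    assert E_new <= E + 1e-9               # energy decreases or stays constant
    E = E_new
print("final energy:", E)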
The Energy Function
The energy function is analogous to a multidimensional ($N$-dimensional) terrain.
[Figure: an energy landscape with two local minima and a global minimum]
Hopfield Network as a Model for Associative Memory
Associative memory:

- Associates different features with each other, e.g.:
  Karen → green
  George → red
  Paul → blue
- Recall with partial cues
Neural Network Model of Associative Memory
Neurons are arranged like a grid.
Setting the Weights
Each pattern can be denoted by a vector of -1s and 1s:

$S^p = (s_1^p, s_2^p, s_3^p, \ldots, s_N^p)$, with each $s_i^p \in \{-1, +1\}$

If the number of patterns is $m$, then:

$w_{i,j} = \sum_{p=1}^{m} s_i^p s_j^p$

Hebbian learning (a sketch follows this slide):

- The neurons that fire together, wire together
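A minimal sketch of this Hebbian prescription (assuming numpy; patterns is an m-by-N array of +1/-1 values, here two assumed example patterns):

import numpy as np

def hebbian_weights(patterns):
    """w[i, j] = sum over patterns p of s_i^p * s_j^p, no self-connections."""
    W = patterns.T @ patterns    # outer-product sum over all stored patterns
    np.fill_diagonal(W, 0)       # no neuron connects to itself
    return W

patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
W = hebbian_weights(patterns)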
Limitations of Hopfield Associative Memory
1) The evoked pattern is not necessarily the pattern most similar to the input
2) Some patterns will be recalled more often than others
3) Spurious states: stable states that match none of the original patterns

Capacity: approximately 0.15N patterns (a recall sketch follows)
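A self-contained recall sketch (assumptions: numpy, +1/-1 units with a sign-threshold update, and a small random pattern set well below the 0.15N capacity); with many more stored patterns, recall of this kind degrades sharply:

import numpy as np

rng = np.random.default_rng(4)
N = 100
patterns = rng.choice([-1, 1], size=(5, N))   # 5 stored patterns << 0.15 * N

W = patterns.T @ patterns                     # Hebbian weights
np.fill_diagonal(W, 0)

cue = patterns[0].copy()                      # partial cue: first pattern...
flip = rng.choice(N, size=25, replace=False)
cue[flip] *= -1                               # ...with 25 entries corrupted

u = cue.copy()
for _ in range(10 * N):                       # serial-random updating
    j = int(rng.integers(N))
    u[j] = 1 if W[j] @ u >= 0 else -1

print("fraction recalled correctly:", (u == patterns[0]).mean())  # usually 1.0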
Hopfield Network and the Brain (1)
In a real neuron, synapses are distributed along the dendritic tree, and their distances change the synaptic weights.
In the Hopfield network there is no dendritic geometry.
If the synapses are distributed uniformly, the geometry is not important.
Hopfield network and the brain (2):
In the brain the Dale principle holds and
the connections are not symmetric
The hopfield network with assymetric
weights and dale principle, work properly
Hopfield Network and the Brain (3)
The brain is insensitive to noise and local lesions.
The Hopfield network can tolerate noise in the input and partial loss of synapses.
Hopfield Network and the Brain (4)
In the brain, neurons are not binary devices; they generate continuous firing rates.
A Hopfield network with a sigmoid transfer function (sketched below) is even more powerful than the binary version.
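An illustrative sketch (not from the slides) of replacing the hard threshold with a sigmoid transfer function, so each unit takes graded values in (0, 1); the gain parameter beta is an assumption:

import numpy as np

def sigmoid_update(u, W, j, beta=5.0):
    """Graded variant of the updating rule: u_j = sigmoid(beta * h_j)."""
    h_j = W[j] @ u
    u[j] = 1.0 / (1.0 + np.exp(-beta * h_j))
    return u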
Hopfield Network and the Brain (5)
In the brain, most neurons are silent or fire at low rates, but in the Hopfield network many of the neurons are active.
In a sparse Hopfield network the capacity is even higher.
Hopfield Network and the Brain (6)
In the Hopfield network, updating is serial, which is far from biological reality.
With parallel updating, the associative memories can still be recalled.
Hopfield Network and the Brain (7)
When the number of learned patterns overloads a Hopfield network, performance falls abruptly for all the stored patterns.
But in the real brain, an overload of memories affects only some memories and leaves the rest intact.
This failure mode is called catastrophic interference.
Hopfield Network and the Brain (8)
In the Hopfield network, useful information appears only when the system is in a stable state.
The brain does not fall into stable states; it remains dynamic.
Hopfield Network and the Brain (9)
The connectivity in the brain is much sparser than in the Hopfield network.
A diluted Hopfield network still works well.