Artificial Neurons: Hopfield Networks
Seminar: Introduction to the Theory of Neural Computation

Overview
Introduction
Neurophysiological Background
Modeling Simplified Neurophysiological Information
The Hopfield Model
The Associative Memory Problem
The Model
Updating rules
One Pattern
Many Patterns
Stability of a particular pattern
Storage Capacity
The Energy Function
Discussion on Philosophy and Methodology
Introduction
Today's research in neural computation draws its inspiration from neuroscience and is largely motivated by the possibility of building artificial computing networks. The models are extremely simplified when seen from a neurophysiological point of view, but they should still give insight into the behaviour of "biological" networks.
This talk proceeds in three steps:
- the neurophysiological background
- the information needed for modeling simplified neurophysiological processes
- the description and behaviour of a particular class of neural networks: Hopfield networks
Neurophysiological Background
The basic elements of a neural network are neurons and their connections.
Schematically, the nervous system can be divided into three parts:
- input
- central processing
- output
In the field of ANNs, networks are constructed from neurons with the same canonical division: an input part (the dendritic arbor), a processing part (the soma) and a signal transmission part (the axon).
Modeling Simplified Neurophysiological Information (1)
The logical structure of the neuron as a perceptron includes:
- a processing unit and the synaptic efficacies, denoted $J_{ij}$
- input channels, activated by the signals they receive from the logical boxes; the inputs $n_j$ take the values 0 or 1
- a decision function $\theta[h_i]$ that determines whether the neuron will (will not) fire, in which case $n_i$ takes the value 1 (0)
Modeling Simplified Neurophysiological Information (2)
At any given moment, some of the logical inputs are activated.
The soma (processing part) receives an input, the so-called PSP (post-synaptic potential), which is the linear sum of the efficacies $J_{ij}$ of those channels that were activated.
This sum is compared to the threshold value of neuron $i$: the output channel is activated if the PSP exceeds the threshold, otherwise it is not.
Modeling Simplified Neurophysiological Information (3)
This operation and its components lead to the basic formula

$$h_i = \sum_{j=1}^{N} J_{ij}\, n_j$$

The operation can be expressed by the logical truth function

$$n_i' = \theta[h_i - T_i]$$

$\theta[\cdot]$ is a function which is 1 if the statement in the square brackets is true and 0 otherwise.
$n_i'$ indicates whether a spike (a 1) will appear in the output axon.
The $n_j$ are variables which are themselves zeros and ones (and can also be considered truth functions of some statement).
Modeling Simplified Neurophysiological Information (4)
A significant leap is accomplished when the multi-neuron (multi-perceptron) system is closed onto itself, so that the neurons form a feedback mechanism.
The ANN is then no longer a static input-output mapping but a dynamical system: output axons (signal transmission parts) become input channels, and there is a time shift.
If at time $t$ one has a set of $N$ zeros and ones, denoted $n_i(t)$, then the set of $N$ bits composing the $n_i'$ becomes the set of inputs one neural cycle time (1-2 milliseconds) later: $n_i(t+1)$.
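As a rough illustration (not part of the original slides), this feedback dynamics can be sketched in Python; the network size, efficacies and thresholds below are made-up example values:

    import numpy as np

    # Illustrative sketch only: n_i(t+1) = theta[sum_j J_ij n_j(t) - T_i],
    # with activities n_i in {0, 1}. J, T and N are made-up example values.
    rng = np.random.default_rng(0)
    N = 8
    J = rng.normal(size=(N, N)) / N      # synaptic efficacies J_ij (random here)
    np.fill_diagonal(J, 0.0)             # no self-connections assumed
    T = np.zeros(N)                      # thresholds T_i

    n = rng.integers(0, 2, size=N)       # initial 0/1 activity pattern
    for t in range(5):                   # iterate a few neural cycle times
        h = J @ n                        # post-synaptic potentials h_i
        n = (h > T).astype(int)          # fire (1) iff the PSP exceeds T_i
        print(t + 1, n)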
Overview
Introduction
Neurophysiological Background
Modeling Simplified Neurophysiological Information
The Hopfield Model
The Associative Memory Problem
The Model
Updating rules
One Pattern
Many Patterns
Stability of a particular pattern
Storage Capacity
The Energy Function
Discussion on Philosophy and Methodology
The Hopfield Model - The Associative Memory Problem
Hopfield networks consist of the previously described elements and are fully dynamical, including the time shift and the possible updating rules.
The basic problem is to store a set of $p$ patterns $\xi_i^\mu$ in such a way that, when presented with a new pattern $\zeta_i$, the network responds by producing whichever of the stored patterns most closely resembles $\zeta_i$.
The space of all possible states of the network is called the configuration space.
Basins of attraction: the stored patterns $\xi_i^\mu$ divide the configuration space into regions, each of which flows to one of the patterns.
The Model
The dynamics of the network can be represented by

$$S_i := \mathrm{sgn}\Big(\sum_j w_{ij} S_j - \theta_i\Big)$$

where $S_i$ stands in for $n_i$ via the conversion from $n_i = 0$ or $1$ to $S_i = 2n_i - 1$, and $\mathrm{sgn}(x)$ is defined by

$$\mathrm{sgn}(x) = \begin{cases} +1 & \text{if } x \geq 0; \\ -1 & \text{if } x < 0. \end{cases}$$

The threshold terms $\theta_i$ can be dropped when random patterns are being used.
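A minimal Python sketch of one such update, assuming placeholder weights and the sgn convention just defined:

    import numpy as np

    # Sketch of S_i := sgn(sum_j w_ij S_j - theta_i) with sgn(0) = +1,
    # matching the convention above. Weights and sizes are placeholders.
    def sgn(x):
        return np.where(x >= 0, 1, -1)

    rng = np.random.default_rng(1)
    N = 6
    w = rng.normal(size=(N, N)) / N
    w = (w + w.T) / 2                    # symmetric connections, as in the Hopfield model
    np.fill_diagonal(w, 0.0)
    theta = np.zeros(N)                  # thresholds (dropped for random patterns)

    S = sgn(rng.normal(size=N))          # random initial +-1 state
    S = sgn(w @ S - theta)               # one parallel update of all units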
Updating rules - Two simplified versions
Synchronous or parallel
All neurons update their activity states simultaneously at discrete time steps n = 1, 2, ..., as if governed by a clock. The inputs of every neuron in the network are determined by the same activity state of the network in the time interval (n-1) < t < n. This choice requires a central clock or pacemaker and is sensitive to timing errors.
Asynchronous or sequential (more natural for both brains and artificial networks)
The neurons are updated one by one, where one can proceed in either of two ways:
- at each time step, select at random a unit i to be updated and apply the rule, or
- let each unit independently choose to update itself, with some constant probability per unit time, according to

$$S_i := \mathrm{sgn}\Big(\sum_j w_{ij} S_j\Big)$$

In this mode, every neuron coming up for a decision has full information about all the decisions of the individual neurons that were updated before it.
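Both modes can be sketched as follows; this is an illustration only, assuming a weight matrix w and a +-1 state S set up as on the previous slide:

    import numpy as np

    def sgn(x):
        return np.where(x >= 0, 1, -1)

    def synchronous_step(w, S):
        # all units update together from the same network state (clocked mode)
        return sgn(w @ S)

    def asynchronous_step(w, S, rng):
        # a single randomly chosen unit updates, seeing every earlier decision
        i = rng.integers(len(S))
        S = S.copy()
        S[i] = 1 if w[i] @ S >= 0 else -1
        return S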
One Pattern
The condition for one pattern $\xi_i$ to be memorized is

$$\xi_i = \mathrm{sgn}\Big(\sum_j w_{ij}\,\xi_j\Big) \quad \forall i$$

Using $1/N$ as the constant of proportionality:

$$w_{ij} = \frac{1}{N}\,\xi_i \xi_j$$

If fewer than half of the bits of the starting pattern $S_i$ are wrong, they will be overwhelmed in the sum for the net input.
The network will correct the errors, and so the pattern is an attractor.
All starting configurations with more than half the bits different from the original pattern will end up in the reversed state $-\xi_i$, which leads to a configuration space divided symmetrically into two basins of attraction.
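A toy numerical check of this error correction (the pattern and the number of flipped bits are arbitrary example choices):

    import numpy as np

    # Toy check: with w_ij = xi_i xi_j / N, a start with fewer than half
    # the bits wrong is pulled back to xi in one step.
    rng = np.random.default_rng(2)
    N = 100
    xi = np.where(rng.random(N) < 0.5, 1, -1)     # stored +-1 pattern
    w = np.outer(xi, xi) / N                      # w_ij = xi_i xi_j / N

    S = xi.copy()
    flip = rng.choice(N, size=20, replace=False)
    S[flip] *= -1                                 # corrupt 20 of the 100 bits

    S = np.where(w @ S >= 0, 1, -1)               # one synchronous update
    print(np.array_equal(S, xi))                  # True: the errors are corrected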
Many Patterns
Hypothesis made by Hebb (1949): synaptic changes are proportional to the correlation between the firing of the pre- and post-synaptic neurons.
This is achieved by:
- applying the set of patterns $\xi_i^\mu$ to the network during the training phase
- adjusting the strengths $w_{ij}$ according to such pre/post correlations:

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^\mu \xi_j^\mu$$
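As a sketch, the Hebb prescription amounts to summing outer products of the stored patterns; N and p below are example choices:

    import numpy as np

    # Hebb rule for p patterns: summing the outer products xi^mu (xi^mu)^T
    # over mu gives w_ij = (1/N) sum_mu xi_i^mu xi_j^mu.
    rng = np.random.default_rng(3)
    N, p = 100, 5
    xi = np.where(rng.random((p, N)) < 0.5, 1, -1)   # p stored patterns, one per row
    w = (xi.T @ xi) / N                              # Hebb weights, shape (N, N)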
Overview
Introduction
Neurophysiological Background
Modeling Simplified Neurophysiological Information
The Hopfield Model
The Associative Memory Problem
The Model
Updating rules
One Pattern
Many Patterns
Stability of a particular pattern
Storage Capacity
The Energy Function
Discussion on Philosophy and Methodology
Stability of a particular pattern $\xi_i^\nu$ (1)
Going back to the stability condition for one pattern,

$$\xi_i = \mathrm{sgn}\Big(\sum_j w_{ij}\,\xi_j\Big) \quad \forall i$$

and the definition of the net input,

$$h_i = \sum_j w_{ij} S_j$$

the stability condition generalizes to

$$\mathrm{sgn}(h_i^\nu) = \xi_i^\nu \quad \forall i$$

Taking

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^\mu \xi_j^\mu$$

the net input $h_i^\nu$ to unit $i$ in pattern $\nu$ is

$$h_i^\nu = \frac{1}{N}\sum_j \sum_\mu \xi_i^\mu \xi_j^\mu \xi_j^\nu$$

Separating the sum over $\mu$ into the special term $\mu = \nu$ and the rest gives

$$h_i^\nu = \xi_i^\nu + \frac{1}{N}\sum_j \sum_{\mu\neq\nu} \xi_i^\mu \xi_j^\mu \xi_j^\nu$$
Stability of a particular pattern $\xi_i^\nu$ (2)
Meaning:

$$h_i^\nu = \xi_i^\nu + \underbrace{\frac{1}{N}\sum_j \sum_{\mu\neq\nu} \xi_i^\mu \xi_j^\mu \xi_j^\nu}_{\text{crosstalk term}}$$

The crosstalk term is less than 1 in magnitude in most cases.
If it were zero, one could conclude that pattern $\nu$ is stable according to

$$\mathrm{sgn}(h_i^\nu) = \xi_i^\nu \quad \forall i$$

This is still true if the crosstalk term is small enough: if its magnitude is smaller than 1, it cannot change the sign of $h_i^\nu$.
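A numeric sketch of this decomposition; note that the diagonal $w_{ii} = p/N$ is kept here, so the $\mu = \nu$ term reduces exactly to $\xi_i^\nu$ as in the derivation above:

    import numpy as np

    # Sketch of h_i^nu = xi_i^nu + crosstalk for Hebb weights.
    # Sizes are example choices; the diagonal of w is NOT zeroed.
    rng = np.random.default_rng(4)
    N, p = 100, 5
    xi = np.where(rng.random((p, N)) < 0.5, 1, -1)
    w = (xi.T @ xi) / N

    nu = 0
    h = w @ xi[nu]                       # net input with pattern nu presented
    crosstalk = h - xi[nu]               # the mu != nu contribution
    print(np.abs(crosstalk).max())       # stability needs |crosstalk| < 1 at each unit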
Storage Capacity
Consider the quantity

$$C_i^\nu = -\xi_i^\nu \cdot \frac{1}{N}\sum_j \sum_{\mu\neq\nu} \xi_i^\mu \xi_j^\mu \xi_j^\nu$$

The $C_i^\nu$ depend only on the patterns $\xi_j^\mu$ that one attempts to store. If $C_i^\nu > 1$, the crosstalk changes the sign of $h_i^\nu$ and bit $i$ of pattern $\nu$ is unstable.
[Figure: the distribution of values of the crosstalk term $C_i^\nu$.]
For $p$ random patterns and $N$ units this distribution is a Gaussian with variance

$$\sigma^2 = p/N$$

The shaded area of the figure is $P_{\text{error}}$, the probability of error per bit.
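Under the Gaussian assumption this shaded area has a closed form; as a small sketch (the particular p and N are arbitrary):

    from math import erf, sqrt

    # For Gaussian crosstalk with variance p/N, the per-bit error probability
    # is P_error = P(C > 1) = (1/2) * (1 - erf(sqrt(N / (2 p)))).
    def p_error(p: int, N: int) -> float:
        return 0.5 * (1.0 - erf(sqrt(N / (2.0 * p))))

    print(p_error(p=15, N=100))          # ~0.005 for a load of p/N = 0.15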
The Energy Function (1)
… was adopted into neural network theory from a physical analogy to magnetic systems, and is one of the most important contributions of the Hopfield paper.
One can imagine an energy landscape "above" the configuration space: a multi-dimensional surface with hills and valleys.
The energy function is

$$H = -\frac{1}{2}\sum_{ij} w_{ij} S_i S_j$$
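As a small sketch, the energy of a given state can be computed directly; w and S are assumed to be set up as on the earlier slides:

    import numpy as np

    # Energy H = -(1/2) sum_ij w_ij S_i S_j of a network state.
    def energy(w: np.ndarray, S: np.ndarray) -> float:
        return -0.5 * float(S @ w @ S)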
The Energy Function (2)
Central property:
It is a function that always decreases (or remains constant) as the system evolves according to its dynamical rule.
The attractors are at local minima (the valleys) of the energy surface. The dynamics can then be pictured as the motion of a particle on the energy surface under the influence of gravity (pulling it down) and friction (so that it does not overshoot).
The Energy Function (3)
An alternative derivation of the Hebb prescription (as we know it from the many-pattern case),

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^\mu \xi_j^\mu ,$$

uses the energy function

$$H = -\frac{1}{2}\sum_{ij} w_{ij} S_i S_j$$

In the one-pattern case, choose

$$H = -\frac{1}{2N}\Big(\sum_i S_i \xi_i\Big)^2$$

which is minimized when the overlap between the network configuration and the stored pattern $\xi_i$ is largest.
Analogously, in the many-pattern case, all the patterns $\xi_i^\mu$ should be made into local minima of $H$:

$$H = -\frac{1}{2N}\sum_{\mu=1}^{p}\Big(\sum_i S_i \xi_i^\mu\Big)^2$$
The Energy Function (4)
Multiplying this out leads back to the original energy function:

$$H = -\frac{1}{2N}\sum_{\mu=1}^{p}\Big(\sum_i S_i \xi_i^\mu\Big)^2
= -\frac{1}{2N}\sum_{\mu=1}^{p}\Big(\sum_i S_i \xi_i^\mu\Big)\Big(\sum_j S_j \xi_j^\mu\Big)
= -\frac{1}{2}\sum_{ij}\Big(\frac{1}{N}\sum_{\mu=1}^{p}\xi_i^\mu \xi_j^\mu\Big) S_i S_j$$

so the coefficient of $S_i S_j$ is exactly the Hebb prescription for $w_{ij}$.
This illustrates a good general approach for finding appropriate connection strengths $w_{ij}$: find an energy function whose minimum satisfies the problem of interest, and multiply it out.
The Energy Function (5)
A "simple and nice" proof of the central property of the energy function:
it always decreases (or remains constant) as the system evolves according to its dynamical rule.

$$E(t) = -\frac{1}{2}\sum_{ij} w_{ij}\, n_i(t)\, n_j(t) \qquad \text{(energy of the state at time } t\text{)}$$

$$E(t+1) = -\frac{1}{2}\sum_{ij} w_{ij}\, n_i(t+1)\, n_j(t+1) \qquad \text{(energy of the state at time } t+1\text{)}$$

$$E(t+1) - E(t) \leq 0$$
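A quick numerical illustration of this property, using arbitrary Hebb-trained weights (symmetric, with w_ii >= 0, which the argument needs) and random asynchronous updates:

    import numpy as np

    # Check that asynchronous updates never increase the energy,
    # i.e. E(t+1) - E(t) <= 0. Sizes and step count are arbitrary.
    rng = np.random.default_rng(5)
    N = 50
    xi = np.where(rng.random((3, N)) < 0.5, 1, -1)
    w = (xi.T @ xi) / N                        # symmetric Hebb weights, w_ii = p/N >= 0

    S = np.where(rng.random(N) < 0.5, 1, -1)
    E = -0.5 * S @ w @ S
    for _ in range(500):
        i = rng.integers(N)
        S[i] = 1 if w[i] @ S >= 0 else -1      # asynchronous single-unit update
        E_new = -0.5 * S @ w @ S
        assert E_new <= E + 1e-12              # energy is non-increasing
        E = E_new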
Discussion on Philosophy and Methodology (1)
Research in these particular areas involves many different fields of science:
- biology
- chemistry
- physics
- (...)
Natural phenomena are described by mathematical models, which is sometimes interpreted as meaning that all natural phenomena are reducible to physical laws.
Alternatively (as I would argue too), reduction can be given a very intuitive sense in which it not only exists but is extremely useful and productive.
Hopfield once stated that "the brain is a physical system", which may indeed sound like a call for a reduction of the thought process; nevertheless, concepts originating in physics can be used as analogues, including energy, field, relaxation etc.
Discussion on Philosophy and Methodology (2)
The theory of attractor neural networks (ANN) has aimed at providing a minimal set of propositions that can be confronted with experiment.
This matters when discussing the attitude toward verification and/or falsification, and the fact that a theoretical framework must be defended by an explanation.
In many instances, systems have been constructed (hardware implementations / computer simulations) that serve as experimental setups for the described models, and they show a truly impressive agreement with the predictions obtained from analysis of the models.
But this will not satisfy an experimenter who records, using ingenious techniques, the electrical activities in the cortex of cats or monkeys, for example.
For the future: the theory of neural networks should produce models of cognitive processes that are robust to the kinds of disorder, fluctuations and disruptions under which one can imagine the brain to be operating.
This includes parallel processing and the potential for abstraction.
Discussion on Philosophy and Methodology (3)
So what happens if an experiment does not show the type of behaviour identified as the emergent dynamics?
Interpretation: either a refutation of the theoretical construction, or an argument that the experiment has missed the theory.
Thank you very much!
Feel free to ask questions!