Brain Mechanisms of
Unconscious Inference
J. McClelland
Symsys 100
April 22, 2010
Last time…
• We considered many
examples of unconscious
inference
– Size illusions
– Illusory contours
– Perception of objects from
vague cues
– Unconscious associative
priming
– Lexical effects on speech
perception
– Effect of visual speech on
speech perception.
Today…
• We ask about the mechanisms through
which this occurs
Three Problems for Unconscious Inference Theory
GARY HATFIELD, Philosopher, Univ. of Penna.
• The cognitive machinery problem: Are the unconscious inferences posited
to explain size perception and other phenomena carried out by the same
cognitive mechanisms that account for conscious and deliberate inferences,
or does the visual system have its own inferential machinery? In either
case, what is the structure of the posited mechanisms?
• The sophisticated content problem: How shall we describe the content of
the premises and conclusions? For instance, in size perception it might be
that the premises include values for visual angle and perceived distance
[…]. But shall we literally attribute concepts of visual angle […] to the visual
system?
• The phenomenal experience problem: To be fully explanatory,
unconscious inference theories of perception must explain how the
conclusion of an inference about size and distance leads to the experience
of an object as having a certain size and being at a certain distance. In
other words, the theories need to explain how the conclusion to an
inference […] can be or can cause perceptual experience.
Proposed answers to these
questions
• The cognitive machinery problem. The machinery of unconscious
inference is the propagation of activation among neurons. Neurons
embedded in the perceptual system can carry out such inferences
without engaging the mechanisms used in conscious and
deliberative inference.
• The sophisticated content problem. Activation of particular neurons
or groups of neurons codes for particular content. Connections
among neurons code the conditional relationships between items of
content.
• The phenomenal experience problem. Activity of certain
populations of neurons is a necessary condition for conscious
experience. Anything that affects the activation of these neurons will
affect conscious experience. Is this activity the actual substrate of
experience itself?
Outline of Lecture
• Neurons: Structure and Physiology
• Neurons and The Content of Experience
• How Neurons Make Inferences
– And how these capture features of Bayes
Rule
• Integration of Information in Neurons and
in Perception
Neuronal Structure and Function
• Neurons combine
excitatory and
inhibitory signals
obtained from other
neurons.
• They signal to other
neurons primarily via
‘spikes’ or action
potentials.
Neurons and the Content
of Experience
• The doctrine of specific nerve energies:
  – Activity of specific neurons corresponds to specific sensory
    experiences
    • Touch at a certain point on the skin
    • Light at a certain point in the visual field
• The brain contains many ‘maps’ in which neurons correspond to
  specific points
  – On the skin
  – In the visual world
  – In non-spatial dimensions such as auditory frequency
• If one stimulates these neurons in a conscious individual, an
  appropriate sensation is aroused.
• If these neurons are destroyed, a corresponding void in experience
  occurs.
  – Visual ‘scotomas’ arise from lesions to the maps in primary visual
    cortex.
Feature Detectors in Visual Cortex
• Line and edge detectors
in primary visual cortex
(classic figure at left).
– Cells show a graded
response depending on
exact orientation of line.
• Representation of motion
in area MT
– Destroy MT on one side of
the brain, and perception of
motion in the opposite side
of space is greatly impaired.
Neural Representations of Objects and
their Identity
• The ‘Grandmother Cell’ hypothesis:
– Is there a dedicated neuron, or set of
neurons, for each cognized object, such as
my Grandmother?
• Most argue ‘no’… but some cells have surprisingly
specific responses
Stimuli used by Baylis, Rolls, and Leonard
(1991)
Responses of Four Neurons to Face and
Non-Face Stimuli in Previous Slide
Responses to various stimuli by a neuron
responding to a Tabby Cat
(Tanaka et al., 1991)
The Infamous ‘Jennifer Aniston’ Neuron
A ‘Halle Berry’ Neuron
Outline of Lecture
• Neurons: Structure and Physiology
• Neurons and The Content of Experience
• How Neurons Make Inferences
  – And how these capture features of Bayes Rule
• Integration of Information in Neurons and in
  Perception
The Key Idea
• We treat the firing rate of a neuron as corresponding to the posterior
  probability of the hypothesis for which the neuron stands.
• If the excitatory inputs to a neuron correspond to evidence that
  supports the hypothesis for which the neuron stands,
• and the inhibitory inputs correspond to evidence that goes against the
  hypothesis for which the neuron stands,
• and if the baseline firing rate of the neuron reflects the prior
  probability of the hypothesis for which the neuron stands,
• and all elements of the evidence are conditionally independent given H,
• THEN the firing rate of the neuron can represent the posterior
  probability of the hypothesis given the evidence.
[Figure: a sending neuron j connects to neuron i with weight wij]
Unpacking this idea
• It is common to consider a neuron to have an activation value
  corresponding to its instantaneous firing rate or p(spike) per unit time.
• The baseline firing rate of the neuron is thought to depend on a
  constant background input called its ‘bias’.
• When other neurons are active, their influences are combined with the
  bias to yield a quantity called the ‘net input’.
• The influence of a neuron j on another neuron i depends on the
  activation of j and the weight or strength of the connection to i from j.
• Note that connection weights can be positive (excitatory) or negative
  (inhibitory).
• These influences are summed to determine the net input to neuron i:
    neti = biasi + Σj aj wij
  where aj is the activation of neuron j, and wij is the strength of the
  connection to unit i from unit j. Note that j ranges over all of the units
  that have connections to neuron i.
[Figure: a sending neuron j connects to neuron i with weight wij]
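The net-input rule above can be sketched in a few lines of Python; all activation, weight, and bias values here are hypothetical, chosen only to illustrate the summation:

```python
def net_input(bias_i, activations, weights):
    # neti = biasi + sum over j of aj * wij
    return bias_i + sum(a_j * w_ij for a_j, w_ij in zip(activations, weights))

# Two excitatory connections and one inhibitory connection onto neuron i
# (all values made up for illustration):
a = [1.0, 0.5, 1.0]      # activations aj of the sending neurons
w = [0.8, 1.2, -0.6]     # weights wij; the sign marks excitatory vs. inhibitory
net = net_input(-0.4, a, w)   # -0.4 + 0.8 + 0.6 - 0.6, i.e. about 0.4
```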
How a Neuron’s Activation can
Reflect P(H|E)
• The activation of neuron i given its net input neti is assumed to be
  given by:
    ai = exp(neti) / (1 + exp(neti))
• This function is called the ‘logistic function’ (graphed at right, with ai
  plotted against neti).
• Under this activation function, ai = P(Hi|E) iff
    aj = 1 when Ej is present, 0 when Ej is absent;
    wij = log(P(Ej|H)/P(Ej|~H));
    biasi = log(P(H)/P(~H)).
• In short, idealized neurons using the logistic activation function can
  compute the probability of the hypothesis they stand for, given the
  evidence represented in their inputs, if their weights and biases have
  the appropriate values and the elements of the evidence are
  conditionally independent given H.
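As a sanity check on this claim, the sketch below uses made-up probabilities for two conditionally independent evidence elements, sets the weights to log likelihood ratios and the bias to the log prior odds, and confirms that the logistic activation matches the posterior computed directly from Bayes’ Rule:

```python
import math

def logistic(net):
    # ai = exp(neti) / (1 + exp(neti))
    return math.exp(net) / (1.0 + math.exp(net))

# Hypothetical probabilities for two conditionally independent
# evidence elements E1 and E2:
pH = 0.2                     # prior P(H)
p1H, p1nH = 0.9, 0.3         # P(E1|H), P(E1|~H)
p2H, p2nH = 0.7, 0.4         # P(E2|H), P(E2|~H)

bias = math.log(pH / (1 - pH))   # log prior odds
w1 = math.log(p1H / p1nH)        # log likelihood ratio for E1
w2 = math.log(p2H / p2nH)        # log likelihood ratio for E2

# Both evidence elements present, so a1 = a2 = 1:
a_i = logistic(bias + 1 * w1 + 1 * w2)

# Direct application of Bayes' Rule for comparison:
posterior = (p1H * p2H * pH) / (p1H * p2H * pH + p1nH * p2nH * (1 - pH))
assert abs(a_i - posterior) < 1e-12
```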
Math Supporting Above Statements
Bayes’ Rule with two conditionally independent sources of information:

  P(H|E1 & E2) = P(E1|H) P(E2|H) P(H) /
                 [P(E1|H) P(E2|H) P(H) + P(E1|~H) P(E2|~H) P(~H)]

Divide the numerator and denominator through by
P(E1|~H) P(E2|~H) P(~H), and let:

  λ1 = P(E1|H)/P(E1|~H),  λ2 = P(E2|H)/P(E2|~H),  λ0 = P(H)/P(~H)

We obtain:

  P(H|E1 & E2) = λ1 λ2 λ0 / (λ1 λ2 λ0 + 1)

This is equivalent to:

  P(H|E1 & E2) = e^(log λ1 + log λ2 + log λ0) /
                 (e^(log λ1 + log λ2 + log λ0) + 1)

And more generally, when {E} consists of multiple conditionally
independent elements Ej:

  P(H|{E}) = e^(log λ0 + Σj log λj) / (e^(log λ0 + Σj log λj) + 1)
Choosing between N alternatives
• Often we are interested in cases where there are several alternative
  hypotheses (e.g., different directions of motion of a field of dots). Here
  we have a situation in which the alternatives to a given H, say H1, are
  the other hypotheses, H2, H3, etc.
• In this case, the probability of a particular hypothesis given the
  evidence becomes:
    P(Hi|E) = P(E|Hi) P(Hi) / Σi’ P(E|Hi’) P(Hi’)
• The normalization implied here can be performed by computing net
  inputs as before but now setting each unit’s activation according to:
    ai = exp(neti) / Σi’ exp(neti’)
• This normalization effect is approximated by lateral inhibition mediated
  by inhibitory interneurons (shaded unit in illustration).
[Figure: evidence units E feeding hypothesis units H, with an inhibitory
interneuron mediating lateral inhibition among the H units]
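The normalized activation rule above (a ‘softmax’ over the competing units) can be sketched as follows, with hypothetical net inputs:

```python
import math

def softmax(nets):
    # ai = exp(neti) / sum over i' of exp(neti')
    exps = [math.exp(n) for n in nets]
    total = sum(exps)
    return [x / total for x in exps]

# Three competing direction hypotheses with made-up net inputs:
nets = [2.0, 1.0, 0.5]
acts = softmax(nets)

assert abs(sum(acts) - 1.0) < 1e-12   # activations sum to 1, like probabilities
assert acts[0] == max(acts)           # the best-supported hypothesis dominates
```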
Outline of Lecture
• Neurons: Structure and Physiology
• Neurons and The Content of Experience
• How Neurons Make Inferences
  – And how these capture features of Bayes Rule
• Integration of Information in Neurons and in
  Perception
‘Cue’ Integration
in Monkeys
• Salzman and Newsome (1994)
combined two cues to the
perception of motion:
– Partially coherent motion in a
specific direction
– Direct electrical stimulation
• They measured the probability of
choosing each direction with and
without stimulation at different
levels of coherence (next slide).
Model used by S&N:
• S&N applied the model we have been discussing:
    Pi = exp(neti) / Σi’ exp(neti’)
  where Pi represents the probability of responding in direction i, and
    neti = biasi + wie e + Σj wij vj
• wie = effect of microstimulation on neurons representing a percept of
  motion in direction i
• e = 1 if stimulation was applied, 0 otherwise
• wij = effect of visual stimulation in direction j
• vj = strength of motion in direction j
[Figure: electrical and visual inputs converging on direction-tuned units]
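The S&N model can be illustrated with hypothetical parameter values (the fitted values are in the original paper, and the visual term is simplified here to one weight per direction); the point is only that adding the microstimulation term shifts choice probability toward the stimulated direction:

```python
import math

# Four candidate motion directions (deg); all values below are made up.
directions = [0, 90, 180, 270]
bias = [0.0, 0.0, 0.0, 0.0]
w_e = [0.0, 0.0, 3.0, 0.0]    # wie: microstimulation favors the 180-deg unit
v = [0.0, 0.8, 0.0, 0.0]      # vj: coherent visual motion at 90 deg
w_v = 2.0                     # simplified visual weight (one per direction)

def choice_probs(e):
    # Pi = exp(neti) / sum exp(neti'), with neti = biasi + wie*e + w_v*vi
    nets = [b + we * e + w_v * vi for b, we, vi in zip(bias, w_e, v)]
    exps = [math.exp(n) for n in nets]
    z = sum(exps)
    return [x / z for x in exps]

no_stim = choice_probs(e=0)   # visual cue alone: the 90-deg unit dominates
stim = choice_probs(e=1)      # stimulation pulls choices toward 180 deg
```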
Evidence for the Model
• The effect of electrical stimulation is absent if visual motion is very
  strong, but is considerable if visual motion is weak (below).
• Responses aren’t just averages, but correctly reflect how different
  sources of evidence should combine, as per the model equation (right).
• Open circles above show the effect of presenting visual stimulation at
  90° (using an intermediate coherence level) together with electrical
  stimulation favoring the 225° direction.
• The dip between the two peaks rules out simple averaging of the
  directions cued by visual and electrical stimulation, but is
  approximately consistent with the model predictions (filled circles).
Summary: The Mechanism of
Unconscious Perceptual Inference
• Neurons (or populations of neurons) can
represent perceptual hypotheses at different
levels of abstraction and specificity
• Connections among neurons can code
conditional relations among hypotheses.
– Excitation and Inhibition code p(E|H)/p(E|~H)
– Lateral inhibition codes mutual exclusivity
• Propagation of activation produces results
corresponding approximately to Bayesian
inference.
• The resulting activity incorporates inferential
processes that may alter our phenomenal
experience.