Itti: CS564 - Brain Theory and Artificial Intelligence
University of Southern California
Lecture 28. Overview & Summary
Reading Assignment:
TMB2 Section 8.3
Supplementary reading: Article on Consciousness in HBTNN
You said “brain” theory??
First step: let’s get oriented!
Major Functional Areas
Primary motor: voluntary movement
Primary somatosensory: tactile, pain, pressure, position, temperature, movement
Motor association: coordination of complex movements
Sensory association: processing of multisensory information
Prefrontal: planning, emotion, judgement
Speech center (Broca’s area): speech production and articulation
Wernicke’s area: comprehension of speech
Auditory: hearing
Auditory association: complex auditory processing
Visual: low-level vision
Visual association: higher-level vision
Neurons and Synapses
http://www.radiology.wisc.edu/Med_Students/neuroradiology/fmri/
Limbic System
Cortex “inside” the brain.
Involved in emotions, sexual behavior, memory, etc. (its functions are not yet well understood)
Major Functional Areas
Some general brain principles
Cortex is layered
Retinotopy
Columnar organization
Feedforward/feedback
Layered Organization of Cortex
Cortex is 1 to 5mm-thick, folded at the surface of the brain
(grey matter), and organized as 6 superimposed layers.
Layer names:
1: Molecular layer
2: External granular layer
3: External pyramidal layer
4: Internal granular layer
5: Internal pyramidal layer
6: Fusiform layer
Basic layer functions:
Layers 1/2: connectivity
Layer 4: Input
Layers 3/5: Pyramidal cell bodies
Layers 5/6: Output
Retinotopy
Many visual areas are organized as retinotopic maps: locations next
to each other in the outside world are represented by neurons close
to each other in cortex.
Although the topology is thus preserved, the mapping typically is highly nonlinear (yielding large deformations in representation).
Stimulus shown on screen… and corresponding activity in cortex!
Columnar Organization
Very general principle in cortex: neurons processing similar “things” are grouped together in small patches, or “columns,” of cortex.
In primary visual cortex… as in higher (object recognition) visual areas… and in many non-visual areas as well (e.g., auditory, motor, sensory, etc.).
Interconnect
Felleman & Van Essen, 1991
Neurons???
Abstracting from biological neurons to neuron models
The "basic" biological neuron
Dendrites
Soma
Axon with branches and synaptic terminals
The soma and dendrites act as the input surface; the axon carries the outputs.
The tips of the branches of the axon form synapses upon other neurons or upon effectors (though synapses may occur along the branches of an axon as well as at the ends). The arrows indicate the direction of "typical" information flow from inputs to outputs.
Transmembrane Ionic Transport
Ion channels act as gates that allow or block the flow of
specific ions into and out of the cell.
Action Potential and Ion Channels
Initial depolarization is due to the opening of sodium (Na+) channels.
Repolarization is due to the opening of potassium (K+) channels.
Hyperpolarization happens because K+ channels stay open longer than Na+ channels (and longer than necessary to return exactly to the resting potential).
Warren McCulloch and Walter Pitts (1943)
A McCulloch-Pitts neuron operates on a discrete time-scale, t = 0, 1, 2, 3, ... with time tick equal to one refractory period.
[Figure: inputs x1(t), …, xn(t), each through a weighted connection w1, …, wn, feed the neuron; the axon carries the output y(t+1).]
At each time step, an input or output is on or off — 1 or 0, respectively.
Each connection, or synapse, from the output of one neuron to the input of another, has an attached weight.
From Logical Neurons to Finite Automata
[Figure, from Brains, Machines, and Mathematics, 2nd Edition, 1987: McCulloch-Pitts units implementing Boolean gates — AND with weights 1, 1 and threshold 1.5; OR with weights 1, 1 and threshold 0.5; NOT with weight −1 and threshold 0 — and a Boolean net of such units implementing a finite automaton with state Q, input X, and output Y.]
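As a concrete illustration of these units, here is a minimal Python sketch (the function and gate names are illustrative, not from the course materials) of a McCulloch-Pitts threshold neuron with the gate settings from the figure above:

```python
# Minimal McCulloch-Pitts threshold unit (illustrative sketch).
def mp_neuron(inputs, weights, threshold):
    """Output 1 if the weighted sum of the 0/1 inputs reaches threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Gate settings from the figure: AND (weights 1, 1; threshold 1.5),
# OR (weights 1, 1; threshold 0.5), NOT (weight -1; threshold 0).
AND = lambda x, y: mp_neuron([x, y], [1, 1], 1.5)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], 0.5)
NOT = lambda x: mp_neuron([x], [-1], 0)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "AND:", AND(x, y), "OR:", OR(x, y), "NOT x:", NOT(x))
```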
Leaky Integrator Neuron
The simplest "realistic" neuron model is a continuous-time model based on using the firing rate (e.g., the number of spikes traversing the axon in the most recent 20 msec) as a continuously varying measure of the cell's activity.
The state of the neuron is described by a single variable, the membrane potential.
The firing rate is approximated by a sigmoid function of the membrane potential.
Leaky Integrator Model
τ dm(t)/dt = −m(t) + h
has solution m(t) = e^(−t/τ) m(0) + (1 − e^(−t/τ)) h → h, for time constant τ > 0.
We now add synaptic inputs to get the Leaky Integrator Model:
τ dm(t)/dt = −m(t) + Σi wi Xi(t) + h
where Xi(t) is the firing rate at the ith input.
Excitatory input (wi > 0) will increase m(t); inhibitory input (wi < 0) will have the opposite effect.
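A minimal simulation sketch of the leaky integrator model above, using simple Euler integration (the step size, time constant, and names are illustrative assumptions):

```python
import math

# Euler integration of the leaky integrator:
#   tau * dm/dt = -m(t) + sum_i w_i X_i(t) + h
def simulate(weights, inputs, h=0.0, tau=20.0, dt=1.0, steps=100, m0=0.0):
    m, trace = m0, []
    for t in range(steps):
        drive = sum(w * x(t) for w, x in zip(weights, inputs)) + h
        m += (dt / tau) * (-m + drive)      # leaky integration step
        trace.append(m)
    return trace

def firing_rate(m):
    """Firing rate approximated as a sigmoid of membrane potential."""
    return 1.0 / (1.0 + math.exp(-m))

# One excitatory input (w = 1.5) that switches on at t = 20:
trace = simulate(weights=[1.5], inputs=[lambda t: 1.0 if t >= 20 else 0.0])
print(firing_rate(trace[-1]))               # rate after m relaxes toward 1.5
```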
Models of what?
We need data to constrain the models
Empirical data comes from various experimental techniques:
Physiology
Psychophysics
Various imaging
Etc.
Electrode setup
- drill hole in cranium under anesthesia
- install and seal “recording chamber”
- allow animal to wake up and heal
- because there are no pain receptors in the brain, electrodes can then be inserted & moved in the chamber with no discomfort to the animal.
Receptive field
Example: yes/no task
Example of contrast discrimination using yes/no paradigm.
- subject fixates cross.
- subject initiates trial by pressing space bar.
- stimulus appears at random location, or may not appear at all.
- subject presses “1” for “stimulus present” or “2” for “stimulus absent.”
- if subject keeps giving correct answers, experimenter decreases contrast of stimulus (so that it becomes harder to see).
[Figure: fixation cross, then stimulus, as a function of time.]
Staircase procedure
The staircase procedure is a method for adjusting the stimulus for each observer so as to find that observer’s threshold. The stimulus is parametrized, and the parameter(s) are adjusted during the experiment depending on the responses.
Typically:
- start with a stimulus that is very easy to see.
- 4 consecutive correct answers make the stimulus more difficult to see by a fixed amount.
- 2 consecutive incorrect answers make the stimulus easier to see by a fixed amount.
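A sketch of the staircase logic just described, assuming a hypothetical respond() callback that reports whether the subject answered correctly:

```python
import random

# 4 consecutive correct answers lower the contrast (harder);
# 2 consecutive incorrect answers raise it (easier).
def staircase(respond, contrast=1.0, step=0.1, trials=200):
    correct_run = wrong_run = 0
    history = []
    for _ in range(trials):
        if respond(contrast):                # True if answer was correct
            correct_run, wrong_run = correct_run + 1, 0
            if correct_run == 4:
                contrast, correct_run = max(contrast - step, step), 0
        else:
            wrong_run, correct_run = wrong_run + 1, 0
            if wrong_run == 2:
                contrast, wrong_run = contrast + step, 0
        history.append(contrast)
    return history

# Toy observer: probability of a correct answer grows with contrast.
observer = lambda c: random.random() < min(0.99, 0.5 + c)
history = staircase(observer)
print(sum(history[-50:]) / 50)               # rough threshold estimate
```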
Example SPECT images
Reconstruction using coincidence
Example PET image
BOLD contrast
The magnetic properties of blood change with the amount of oxygenation, resulting in small signal changes.
[Figure: field lines of the applied field B0 around a vessel, with oxygenated vs. with deoxygenated blood.]
Vascular System
arteries → arterioles (<0.1mm) → capillaries → venules (<0.1mm) → veins
Oxygen Consumption
The exclusive source of metabolic energy for the brain is glycolysis:
C6H12O6 + 6 O2 → 6 H2O + 6 CO2
BOLD Contrast
stimulation → neuronal activation → metabolic changes → hemodynamic changes → local susceptibility changes → MR-signal changes → signal detection → data processing → functional image
Example of Blocked paradigm
Gandhi et al., 1999
First BOLD-effect experiment
Kwong and colleagues at Mass. General Hospital (Boston).
Stimulus: flashing light.
Summary 2
Case study: Vision
Vision is the most widely studied brain function
Our goals:
- analyze fundamental issues
- understand basic algorithms that may address those issues
- look at computer implementations
- look at evidence for biological implementations
- look at neural network implementations
Eye Anatomy
Visual Pathways
Retinal Sampling
Origin of Center-Surround
Neurons at every location receive inhibition from neurons at
neighboring locations.
Origin of Orientation Selectivity
Feedforward model of Hubel & Wiesel: V1 cells receive inputs
from LGN cells arranged along a given orientation.
Oriented RFs
Gabor function:
product of a grating and
a Gaussian.
Feedforward model:
equivalent to convolving
input image by sets of
Gabor filters.
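A sketch of the Gabor construction and the feedforward filter-bank idea above (parameter values are illustrative; scipy is assumed available for the convolution):

```python
import numpy as np
from scipy.signal import convolve2d

# Gabor function: product of a sinusoidal grating and a Gaussian envelope.
def gabor(size=31, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return grating * envelope

# Feedforward model: convolve the input image with a bank of oriented Gabors.
image = np.random.rand(64, 64)
responses = [convolve2d(image, gabor(theta=t), mode='same')
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
print([round(r.std(), 3) for r in responses])
```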
Cortical Hypercolumn
A hypercolumn
represents one visual
location, but many
visual attributes.
Basic processing “module”
in V1.
“Blobs”: discontinuities
in the columnar structure.
Patches of neurons concerned
mainly with color vision.
From neurons to mind
A good conceptual intermediary between patterns of neural activity and mental events is provided by schema theory.
The Famous Duck-Rabbit
From Schemas to Schema Assemblages
Bringing in Context
For Further Reading:
TMB2: Section 5.2, for the VISIONS system for schema-based interpretation of visual scenes.
HBTNN: Visual Schemas in Object Recognition and Scene Analysis.
A First “Useful” Network
Example of a fully-engineered neural net that performs useful computation: the Didday max-selector.
Issues:
- how can we design a network that performs a given task?
- how can we analyze non-linear networks?
Winner-take-all Networks
Goal: given an array of inputs, enhance the strongest (or strongest few) and suppress the others.
No clear strong input yields global suppression; the strongest input is enhanced and suppresses the other inputs.
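A minimal dynamical sketch of such a winner-take-all network, with self-excitation and global lateral inhibition; the gains and the clipping to [0, 1] are illustrative choices to keep the sketch stable, not values from the Didday model below:

```python
import numpy as np

# Each unit excites itself and inhibits all others; activities are
# leaky-integrated and kept in [0, 1].
def winner_take_all(inputs, steps=400, dt=0.1, self_exc=1.2, inhib=0.8):
    inputs = np.asarray(inputs, dtype=float)
    a = inputs.copy()
    for _ in range(steps):
        lateral = self_exc * a - inhib * (a.sum() - a)   # exc. minus inhibition
        a += dt * (-a + inputs + lateral)
        a = np.clip(a, 0.0, 1.0)
    return a

print(winner_take_all([0.4, 0.5, 0.45]))     # only the strongest survives
```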
Didday’s Model
[Figure: retinotopic input feeds a “foodness” layer; inhibitory inter-neurons (S-cells) receive a copy of the input; each output cell receives excitation from the foodness layer and inhibition from the S-cells.]
NN & Physics
Perceptrons = layered networks, with weights tuned to learn a given input/output mapping.
Winner-take-all = a specific recurrent architecture for a specific purpose.
Now: Hopfield nets = view neurons as physical entities and analyze the network using methods inspired by statistical physics.
Hopfield Networks
A Hopfield net (Hopfield 1982) is a net of such units subject to the asynchronous rule for updating one neuron at a time:
"Pick a unit i at random. If Σj wij sj ≥ θi, turn it on. Otherwise turn it off."
Moreover, Hopfield assumes symmetric weights:
wij = wji
“Energy” of a Neural Network
Hopfield defined the “energy”:
E = −½ Σij si sj wij + Σi si θi
If we pick unit i and the firing rule (previous slide) does not change its si, it will not change E.
si: 0 to 1 transition
If si initially equals 0, and Σj wij sj ≥ θi,
then si goes from 0 to 1 with all other sj constant,
and the "energy gap", or change in E, is given by
ΔE = −½ Σj (wij sj + wji sj) + θi
   = −(Σj wij sj − θi)   (by symmetry)
   ≤ 0.
si: 1 to 0 transition
If si initially equals 1, and Σj wij sj < θi,
then si goes from 1 to 0 with all other sj constant.
The "energy gap," or change in E, is given, for symmetric wij, by:
ΔE = Σj wij sj − θi < 0
On every update we have ΔE ≤ 0.
Minimizing Energy
On every update we have ΔE ≤ 0.
Hence the dynamics of the net tends to move E toward a minimum.
We stress that there may be different such states — they are local minima. Global minimization is not guaranteed.
[Figure: an energy landscape over states A, B, C, D, E, showing the basin of attraction for C.]
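A small Python sketch of the asynchronous update rule and energy defined above; the assertion checks the result just derived, that no update increases E (random symmetric weights and sizes are illustrative):

```python
import numpy as np

def energy(s, W, theta):
    """E = -1/2 * sum_ij s_i s_j w_ij + sum_i s_i theta_i."""
    return -0.5 * s @ W @ s + s @ theta

def update(s, W, theta, rng):
    i = rng.integers(len(s))                      # pick a unit at random
    s[i] = 1.0 if W[i] @ s >= theta[i] else 0.0   # threshold rule
    return s

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                                 # symmetric weights
np.fill_diagonal(W, 0)                            # no self-connections
theta = np.zeros(n)
s = rng.integers(0, 2, n).astype(float)

for _ in range(100):
    e_before = energy(s, W, theta)
    s = update(s, W, theta, rng)
    assert energy(s, W, theta) <= e_before + 1e-9  # E never increases
print(s, energy(s, W, theta))
```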
Attractors
For all recurrent networks of interest (i.e., neural networks composed of leaky integrator neurons and containing loops), given an initial state and a fixed input, there are just three possibilities for the asymptotic state:
1. The state vector comes to rest, i.e., the unit activations stop changing. This is called a fixed point. For given input data, the region of initial states which settles into a fixed point is called its basin of attraction.
2. The state vector settles into a periodic motion, called a limit cycle.
Strange attractors
3. Strange attractors describe such complex paths through the state space that, although the system is deterministic, a path which approaches the strange attractor gives every appearance of being random. Two copies of the system which initially have nearly identical states will grow more and more dissimilar as time passes. Such a trajectory has become the accepted mathematical model of chaos, and is used to describe a number of physical phenomena such as the onset of turbulence in weather.
The traveling salesman problem 1
There are n cities, with a road of length lij joining city i to city j.
The salesman wishes to find a way to visit the cities that is optimal in two ways: each city is visited exactly once, and the total route is as short as possible.
This is an NP-complete problem: all algorithms known so far to solve it exactly have exponential complexity.
Associative Memories
http://www.shef.ac.uk/psychology/gurney/notes/l5/l5.html
Idea: store a pattern so that we can recover it if presented with a corrupted version of it.
Associative memory with Hopfield nets
Set up a Hopfield net such that local minima correspond to the stored patterns.
Issues:
- because of weight symmetry, anti-patterns (binary reverse) are stored as well as the original patterns (also, spurious local minima are created when many patterns are stored)
- if one tries to store more than about 0.14 × (number of neurons) patterns, the network exhibits unstable behavior
- works well only if patterns are uncorrelated
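A sketch of Hopfield associative memory under the common ±1 unit convention (the slides use 0/1 units; the sizes and the Hebbian outer-product storage rule here are standard illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 100, 5                       # well under the ~0.14*n limit
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian outer-product storage; stored patterns become local minima.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                       # no self-connections

def recall(cue, steps=2000):
    s = cue.astype(float).copy()
    for _ in range(steps):
        i = rng.integers(n)                  # asynchronous updates
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

cue = patterns[0].copy()
flipped = rng.choice(n, 15, replace=False)   # corrupt 15% of the bits
cue[flipped] *= -1
print(np.mean(recall(cue) == patterns[0]))   # fraction of bits recovered
```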
Learning
All this is nice, but finding the synaptic weights that achieve a
given computation is hard (e.g., as shown in the TSP example or
the Didday example).
Could we learn those weights instead?
Simple vs. General Perceptrons
The associator units are not interconnected, and so
the simple perceptron has no short-term memory.
If cross-connections are present between units, the perceptron is called cross-coupled; it may then have multiple layers, with loops back from a “later” to an “earlier” layer.
Linear Separability
A linear function of the form
f(x) = w1 x1 + w2 x2 + ... + wd xd + wd+1   (with wd+1 = −θ)
is a two-category pattern classifier.
f(x) = 0, i.e., w1 x1 + w2 x2 + ... + wd xd = θ,
gives a hyperplane as the decision surface.
Training involves adjusting the coefficients (w1, w2, ..., wd, wd+1) so that the decision surface produces an acceptable separation of the two classes.
Two categories are linearly separable patterns if in fact an acceptable setting of such linear weights exists.
[Figure: A and B samples scattered in the (x1, x2) plane, separated by the line f(x) = 0, with f(x) ≥ 0 on the A side and f(x) < 0 on the B side.]
Classic Models for Adaptive Networks
The two classic learning schemes for McCulloch-Pitts formal neurons (Σi wi xi ≥ θ):
• Hebbian Learning (The Organization of Behavior, 1949)
— strengthen a synapse whose activity coincides with the firing of the postsynaptic neuron
[cf. Hebbian Synaptic Plasticity, Comparative and Developmental Aspects (HBTNN)]
• The Perceptron (Rosenblatt 1962)
— strengthen an active synapse if the efferent neuron fails to fire when it should have fired;
— weaken an active synapse if the efferent neuron fires when it should not have fired.
Hebb’s Rule
The simplest formalization of Hebb’s rule is to increase wij by:
Δwij = k yi xj   (1)
where synapse wij connects a presynaptic neuron with firing rate xj to a postsynaptic neuron with firing rate yi.
Peter Milner noted the saturation problem.
von der Malsburg (1973), modeling the development of oriented edge detectors in cat visual cortex [Hubel-Wiesel: simple cells], augmented Hebb-type synapses with:
- a normalization rule to stop all synapses "saturating": Σi wi = constant
- lateral inhibition to stop the first "experience" from "taking over" all "learning circuits": it prevents nearby cells from acquiring the same pattern, thus enabling the set of neurons to "span the feature space"
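A sketch of rule (1) with a von der Malsburg-style normalization keeping Σi wi constant (the learning rate and input statistics are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_inputs = 0.1, 10
w = rng.random(n_inputs)
w /= w.sum()                                 # enforce sum_i w_i = constant

for _ in range(1000):
    x = rng.random(n_inputs)                 # presynaptic firing rates
    y = w @ x                                # postsynaptic firing rate
    w += k * y * x                           # Hebb: Dw_ij = k * y_i * x_j
    w /= w.sum()                             # renormalize after each step

print(w.round(3), w.sum())                   # no synapse saturates; sum fixed
```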
Perceptron Learning Rule
The best-known perceptron learning rule:
- strengthens an active synapse if the efferent neuron fails to fire when it should have fired, and
- weakens an active synapse if the neuron fires when it should not have:
Δwij = k (Yi − yi) xj   (2)
As before, synapse wij connects a neuron with firing rate xj to a neuron with firing rate yi, but now Yi is the "correct" output supplied by the "teacher."
The rule changes the response to xj in the right direction:
• If the output is correct, Yi = yi and there is no change, Δwij = 0.
• If the output is too small, then Yi − yi > 0, and the change in wij will add Δwij xj = k (Yi − yi) xj xj > 0 to the output unit's response to (x1, ..., xd).
• If the output is too large, Δwij will decrease the output unit's response.
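A sketch of rule (2) on a linearly separable toy problem (the data, learning rate, and bias-as-extra-input trick are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 0.1
X = rng.normal(size=(200, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float)    # teacher's "correct" outputs

w = np.zeros(3)                              # two inputs plus a bias weight
for _ in range(20):                          # training epochs
    for x, target in zip(X, Y):
        xb = np.append(x, 1.0)               # bias as an always-on input
        y = float(w @ xb >= 0)               # current output of the unit
        w += k * (target - y) * xb           # Dw = k (Y - y) x; 0 if correct

predictions = (np.c_[X, np.ones(len(X))] @ w >= 0).astype(float)
print((predictions == Y).mean())             # training accuracy
```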
Back-Propagation
Backpropagation: a method for training a loop-free network
which has three types of unit:
input units;
hidden units carrying an internal representation;
output units.
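A minimal backpropagation sketch for such a three-layer (input, hidden, output) loop-free network, trained on XOR with sigmoid units and squared error; the sizes and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sig(X @ W1 + b1)                     # forward pass
    y = sig(h @ W2 + b2)
    d2 = (y - T) * y * (1 - y)               # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)           # delta propagated backwards
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(axis=0)

print(y.round(2).ravel())                    # typically approaches 0, 1, 1, 0
```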
Example: face recognition
Here using the 2-stage approach:
Non-Associative and Associative
Reinforcement Learning
In non-associative reinforcement learning, the only input to the learning system is the reinforcement signal.
Objective: find the optimal action.
In associative reinforcement learning, the learning system also receives information about the process and maybe more.
Objective: learn an associative mapping that produces the optimal action on any trial as a function of the stimulus pattern present on that trial.
Self-Organizing Feature Maps
Localized competition & cooperation yield an emergent global mapping.
[Figure: a row of input units mapped onto a row of output units.]
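A sketch of a one-dimensional Kohonen-style self-organizing map, where the best-matching unit and its neighbors (competition and cooperation) move toward each input; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_units, lr, sigma = 20, 0.2, 2.0
w = rng.random((n_units, 2))                 # one weight vector per map unit

for _ in range(2000):
    x = rng.random(2)                        # input sample
    bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit wins
    d = np.arange(n_units) - bmu             # distance along the 1-D map
    h = np.exp(-d**2 / (2 * sigma**2))       # cooperation: neighborhood
    w += lr * h[:, None] * (x - w)           # move BMU and neighbors toward x

print(w[:5].round(2))                        # neighboring units map nearby inputs
```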
Capabilities and Limitations of Layered Networks
To approximate a set of functions of the inputs by a layered network with continuous-valued units and sigmoidal activation functions…
Cybenko, 1988: at most two hidden layers are necessary, with arbitrary accuracy attainable by adding more hidden units.
Cybenko, 1989: one hidden layer is enough to approximate any continuous function.
Intuition of proof: decompose the function to be approximated into a sum of localized “bumps.” The bumps can be constructed with two hidden layers.
Similar in spirit to Fourier decomposition. Bumps = radial basis functions.
Optimal Network Architectures
How can we determine the number of hidden units?
- genetic algorithms: evaluate variations of the network, using a metric that combines its performance and its complexity; then apply various mutations to the network (change the number of hidden units) until the best one is found.
- pruning and weight decay:
  - apply weight decay (remember reinforcement learning) during training
  - eliminate connections with weight below threshold
  - re-train
- how about eliminating units? For example, eliminate units with total synaptic input weight smaller than threshold.
Large Network Example
Example of network with many cooperating brain areas:
Dominey & Arbib
Issues:
- how to use empirical data to design the overall architecture?
- how to implement?
- how to test?
Peter Dominey & Michael Arbib, “Filling in the Schemas: Neural Network Models Based on Monkey Neurophysiology,” Cerebral Cortex, 2:153-175.
Develop hypotheses on neural networks that yield an equivalent functionality: mapping schemas (functions) to the cooperation of sets of brain regions (structures).
[Figure: model architecture with retinal/LGN visual input through visual cortex, posterior parietal cortex (PP), and frontal eye fields (FEF), with caudate (CD), substantia nigra pars reticulata (SNr), mediodorsal thalamus (MD), and superior colliculus (SC) feeding the brainstem saccade generator that drives eye movements.]
Low-Level Processing
Remember: vision as a change in representation.
At the low level, such change can be done by fairly streamlined mathematical transforms:
- Fourier transform
- Wavelet transform
These transforms yield a simpler but more organized image of the input.
Additional organization is obtained through multiscale representations.
Laplacian Edge Detection
Edges are defined as zero-crossings of the second derivative (Laplacian if more than one-dimensional) of the signal.
This is very sensitive to image noise; thus we typically first blur the image to reduce noise. We then use a Laplacian-of-Gaussian filter to extract edges.
[Figure: smoothed signal and its first derivative (gradient).]
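A sketch of the blur-then-Laplacian idea above, using scipy's Laplacian-of-Gaussian filter and a simple zero-crossing test (the toy image and sigma are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    """Blur + Laplacian in one filter, then mark zero-crossings as edges."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return zc

# Toy image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(log_edges(img).sum())                  # edge pixels ring the square
```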
Illusory Contours
Some mechanism is responsible for our illusory perception of
contours where there are none…
Long-range Excitation
Modeling long-range connections
Depth & Stereo
[Figure: stereo geometry, with left (L) and right (R) eyes viewing points A, B, C, D; each eye measures visual angles θL, θR between −θmax and +θmax, and matching pairs such as (θLA, θLC) and (θRB, θRC) must be put in correspondence.]
Correspondence problem
Segment & recognize objects in each eye separately first,
then establish correspondence?
No! (at least not only): Julesz’ random-dot stereograms
Regularization
Higher Visual Function
Examine components of mid/high-level vision:
Attention
Object recognition
Gist
Action recognition
Scene understanding
Memory & consciousness
Itti & Koch, Nat Rev Neurosci, Mar. 2001
Brefczynski & DeYoe, Nature Neuroscience 1999
Treue & Martinez-Trujillo, Nature 1999
Challenges of Object Recognition
The binding problem: binding different features (color, orientation, etc.) to yield a unitary percept (see next slide).
Bottom-up vs. top-down processing: how much is assumed top-down vs. extracted from the image?
Perception vs. recognition vs. categorization: seeing an object vs. seeing it as something. Matching views of known objects to memory vs. matching a novel object to object categories in memory.
Viewpoint invariance: a major issue is to recognize objects irrespective of the viewpoint from which we see them.
Fusiform Face Area in Humans
Eye Movements
1) free examination
2) estimate the material circumstances of the family
3) give the ages of the people
4) surmise what the family had been doing before the arrival of the “unexpected visitor”
5) remember the clothes worn by the people
6) remember the positions of the people and objects
7) estimate how long the “unexpected visitor” had been away from the family
Several Problems…
…with the “progressive visual buffer” hypothesis:
Change blindness: attention seems to be required for us to perceive change in images, while such changes could easily be detected in a visual buffer!
The amount of memory required is huge!
Interpretation of buffer contents by high-level vision is very difficult if the buffer contains very detailed representations (Tsotsos, 1990)!
The World as an Outside Memory
Kevin O’Regan, early 90s:
why build a detailed internal representation of the world?
too complex…
not enough memory…
… and useless?
The world is the memory. Attention and the eyes are a look-up
tool!
The “Attention Hypothesis”
Rensink, 2000
No “integrative buffer”
Early processing extracts information up to “proto-object”
complexity in massively parallel manner
Attention is necessary to bind the different proto-objects into
complete objects, as well as to bind object and location
Once attention leaves an object, the binding “dissolves.” Not a
problem, it can be formed again whenever needed, by shifting
attention back to the object.
Only a rather sketchy “virtual representation” is kept in memory,
and attention/eye movements are used to gather details as
needed
Gist of a Scene
Biederman, 1981:
from very brief exposure to a scene (120ms or less), we can
already extract a lot of information about its global structure, its
category (indoors, outdoors, etc) and some of its components.
“riding the first spike:” 120ms is the time it takes the first spike to
travel from the retina to IT!
Thorpe, van Rullen:
very fast classification (down to 27ms exposure, no mask), e.g.,
for tasks such as “was there an animal in the scene?”
One lesson…
From 50+ years of research…
Solving vision in general is impossible!
But solving purposive vision can be done. Example: vision for
action.
Grip Selectivity in a Single AIP Cell
A cell that is selective for side opposition (Sakata).
FARS (Fagg-Arbib-Rizzolatti-Sakata) Model Overview
AIP extracts the set of affordances for an attended object. These affordances highlight the features of the object relevant to physical interaction with it.
[Figure: the dorsal stream through AIP extracts affordances (“ways to grab this thing”) and feeds grasp area F5, modulated by task constraints (F6), working memory (46), and instruction stimuli (F2); the ventral stream through IT and PFC supports recognition (“it’s a mug”).]
AT and DF: "How" versus "What"
Visual Cortex → Parietal Cortex: reach and grasp programming — How (dorsal).
Visual Cortex → Inferotemporal Cortex — What (ventral).
“What” versus “How”:
DF: Goodale and Milner: object parameters for grasp (How) but not for saying or pantomiming.
AT: Jeannerod et al.: saying and pantomiming (What) but no “How” except for familiar objects with specific sizes.
Lesson: even schemas that seem to be normally under conscious control can in fact proceed without our being conscious of their activity.