TNI: Computational Neuroscience
Instructors: Peter Latham
Maneesh Sahani
Peter Dayan
TA: Mandana Ahmadi, [email protected]
Website: http://www.gatsby.ucl.ac.uk/~mandana/TNI/TNI.htm
(slides will be on website)
Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.
Homework: Assigned Friday, due Friday (1 week later).
first homework: assigned Oct. 3, due Oct. 10.
What is computational neuroscience?
Our goal: figure out how the brain works.
There are about 10 billion cubes of this size in your brain!
[figure: a cube of cortical tissue, 10 microns]
How do we go about making sense of this mess?
David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons
(implementational level)
Example #1: memory.
the problem:
recall events, typically based on partial information.
associative or content-addressable memory.
an algorithm:
dynamical systems with fixed points.
[figure: trajectories in activity space (axes r1, r2, r3) flowing to fixed points]
neural implementation:
Hopfield networks.
x_i = sign(∑_j J_ij x_j)
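A minimal sketch of that implementation in Python (my own illustration, not course code; the network size, number of stored patterns, and amount of cue corruption are assumptions): store patterns with a Hebbian rule, then recall a memory from a corrupted cue by iterating x_i = sign(∑_j J_ij x_j).

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 10                                 # neurons, stored patterns (assumed sizes)
    patterns = rng.choice([-1, 1], size=(P, N))    # the memories
    J = (patterns.T @ patterns) / N                # Hebbian weight matrix
    np.fill_diagonal(J, 0.0)

    x = patterns[0].copy()                         # start from a partial / corrupted cue:
    flip = rng.choice(N, size=40, replace=False)   # flip 20% of the bits
    x[flip] *= -1

    for _ in range(20):                            # iterate x_i = sign(sum_j J_ij x_j)
        x = np.sign(J @ x)
        x[x == 0] = 1

    print((x == patterns[0]).mean())               # fraction of the memory recovered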
Example #2: vision.
the problem (Marr):
2-D image on retina → 3-D reconstruction of a visual scene.
the problem (modern version):
2-D image on retina → recover the latent variables.
[figure: a visual scene (house, sun, tree) drawn by a bad artist]
an algorithm:
graphical models.
[figure: graphical model — latent variables x1, x2, x3 at the top, a low-level representation r1, r2, r3, r4 at the bottom; inference runs from the r's back to the x's]
implementation in networks of neurons:
no clue.
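To make "recover the latent variables" concrete, here is a toy sketch (my own illustration, not from the course; the single binary latent variable, the Gaussian noise model, and all numbers are assumptions): one latent cause generates noisy low-level responses r, and inference computes p(x | r) by Bayes' rule.

    import numpy as np

    prior = np.array([0.7, 0.3])              # p(x = 0), p(x = 1): is the "tree" there?
    means = np.array([0.0, 1.0])              # mean low-level response under each x
    sigma = 0.5                                # response noise (assumed)

    def posterior(r):
        # p(x | r) ∝ p(x) * prod_k N(r_k; means[x], sigma^2)
        loglik = -0.5 * ((r[:, None] - means[None, :]) / sigma) ** 2
        logp = np.log(prior) + loglik.sum(axis=0)
        p = np.exp(logp - logp.max())
        return p / p.sum()

    r = np.array([0.9, 1.2, 0.7, 1.1])        # observed low-level representation
    print(posterior(r))                        # belief about the latent variable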
Comment #1:
the problem: easier
the algorithm: harder
neural implementation: harder (often ignored!!!)
A common approach:
Experimental observation → model
Usually very underconstrained!!!!
Example i: CPGs (central pattern generators)
[figure: two oscillating firing-rate traces]
Too easy!!!
Example ii: single cell modeling
C dV/dt = −g_L(V − V_L) − g_Na m³h(V − V_Na) − g_K n⁴(V − V_K) − …
dn/dt = …
…
lots and lots of parameters … which ones should you use?
Example iii: network modeling
lots and lots of parameters × thousands
Comment #2:
You need to know a lot of math!!!!!
[figures: the graphical model (x1, x2, x3 over r1 … r4) and the activity-space picture from the examples above]
Comment #3:
This is a good goal, but it’s hard to do in practice.
Our actual bread and butter:
1. Explaining observations (mathematically)
2. Using sophisticated analysis to design simple experiments
that test hypotheses.
A classic example: Hodgkin and Huxley.
[figure: a neuron (dendrites, soma, axon) and its membrane voltage — spikes of ~1 ms rising from −50 mV to +40 mV, shown over 100 ms]
C dV/dt = −g_L(V − V_L) − g_Na m³h(V − V_Na) − …
dm/dt = …
…
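A rough sketch of integrating those equations numerically (my own illustration, not course code; the standard squid-axon parameter values are used, and the drive current and step size are assumptions):

    import numpy as np

    # standard Hodgkin-Huxley parameters (uF/cm^2, mS/cm^2, mV)
    C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    V_Na, V_K, V_L = 50.0, -77.0, -54.4

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T, I_ext = 0.01, 100.0, 10.0           # ms, ms, uA/cm^2 (assumed drive)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting initial conditions
    Vs = []
    for _ in range(int(T / dt)):
        # membrane equation: C dV/dt = -gL(V-VL) - gNa m^3 h (V-VNa) - gK n^4 (V-VK) + I
        dV = (-g_L * (V - V_L)
              - g_Na * m**3 * h * (V - V_Na)
              - g_K * n**4 * (V - V_K) + I_ext) / C
        # gating variables: dx/dt = alpha_x(V)(1-x) - beta_x(V) x
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        V += dt * dV
        Vs.append(V)                            # Vs now contains a spike train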
Comment #4:
these are linked!!!
some algorithms are easy to implement on a computer
but hard in a brain, and vice-versa.
we should be looking for the vice-versa ones.
it can be hard to tell which is which.
Basic facts about the brain
[figure: your brain]
[figure: your cortex unfolded — a sheet roughly 30 cm across and ~0.5 cm thick; the neocortex (cognition) has 6 layers; subcortical structures handle emotions, reward, homeostasis, and much much more]
[figure: 1 cubic millimeter (~3×10⁻⁵ oz) of that sheet]
1 mm³ of cortex:                     1 mm² of a CPU:
  50,000 neurons                       1 million transistors
  10,000 connections/neuron            2 connections/transistor
  (=> 500 million connections)         (=> 2 million connections)
  4 km of axons                        .002 km of wire

whole brain (2 kg):                  whole CPU:
  10¹¹ neurons                         10⁹ transistors
  10¹⁵ connections                     2×10⁹ connections
  8 million km of axons                2 km of wire
[figure: a neuron — dendrites (input), soma (spike generation), axon (output); its voltage trace shows ~1 ms spikes from −50 mV to +40 mV over 100 ms; current flows across a synapse to the next cell]
[figure: neuron j synapses onto neuron i]
neuron j emits a spike:
V on neuron i shows an EPSP (excitatory synapse) or an IPSP (inhibitory synapse), lasting ~10 ms.
amplitude = w_ij
changes with learning
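A toy sketch of that picture in code (mine, not the course's; the decay time, weight, and spike times are all made-up numbers): each spike from neuron j adds a PSP of amplitude w_ij to the voltage on neuron i, decaying over ~10 ms.

    import numpy as np

    dt, T, tau = 0.1, 100.0, 10.0        # ms; tau = PSP decay time (assumed)
    w_ij = 0.5                            # mV; synaptic weight = PSP amplitude
    spike_times_j = [10.0, 30.0, 35.0]    # ms; spikes emitted by neuron j (made up)

    t = np.arange(0.0, T, dt)
    V = np.zeros_like(t)                  # deviation of V on neuron i from rest
    for ts in spike_times_j:
        V += w_ij * np.exp(-(t - ts) / tau) * (t >= ts)   # exponential PSP per spike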
A bigger picture view of the brain
[figure: latent variables x → peripheral spikes r → sensory processing → r̂, a “direct” code for latent variables → cognition / memory / action selection → r̂′, a “direct” code for motor actions → motor processing → peripheral spikes r′ → motor actions x′; everything between the peripheral spikes is the brain]
[cartoon: spike trains r passing between two stick figures — “you are the cutest stick figure ever!”]
In some sense, action selection is the most important
problem:
if we don’t choose the right actions, we don’t
reproduce, and all the neural coding and
computation in the world isn’t going to help us.
Do I call him (or her) and risk rejection and humiliation,
or do I play it safe, and stay home on Saturday
night and eat Oreos?
Problems:
1. How does the brain extract latent variables?
2. How does it manipulate latent variables?
3. How does it learn to do both?
Ask at two levels:
1. What are the algorithms?
2. How are they implemented in neural hardware?
What do we know about the brain?
a. Anatomy. We know a lot about what is where. But be
careful about labels: neurons in motor cortex sometimes
respond to color.
Connectivity. We know (more or less) which area
is connected to which. We don’t know the wiring diagram
at the microscopic level.
[figure: w_ij]
b. Single neurons. We know very well how point neurons work
(think Hodgkin-Huxley).
Dendrites. Lots of potential for incredibly complex
processing.
My guess: they make neurons bigger and reduce wiring
length (see the work of Mitya Chklovskii).
How much I would bet: 20 p.
c. The neural code.
My guess: once you get away from periphery, it’s mainly
firing rate: an inhomogeneous Poisson process with
a refractory period is a good model of spike trains.
How much I would bet: £100.
The role of correlations. Still unknown.
My guess: don’t have one.
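A minimal sketch of the spike-train model mentioned above (my own illustration; the rate function, refractory period, and time step are assumptions): an inhomogeneous Poisson process with an absolute refractory period.

    import numpy as np

    def spike_train(rate_fn, T=1.0, dt=1e-4, t_ref=2e-3, rng=np.random.default_rng(0)):
        """Bernoulli approximation of an inhomogeneous Poisson process, with refractoriness."""
        spikes, last = [], -np.inf
        for i in range(int(T / dt)):
            t = i * dt
            if t - last >= t_ref and rng.random() < rate_fn(t) * dt:
                spikes.append(t)
                last = t
        return np.array(spikes)

    # example: firing rate modulated at 2 Hz around 20 spikes/s
    spikes = spike_train(lambda t: 20.0 * (1.0 + np.sin(2 * np.pi * 2.0 * t)))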
d. Recurrent networks of spiking neurons. This is a field that
is advancing rapidly! There were two absolutely seminal
papers about a decade ago:
van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand very well randomly connected networks
(harder than you might think), and (I believe) we are on
the verge of:
i) understanding networks that have interesting
computational properties.
ii) computing the correlational structure in those
networks.
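For concreteness, here is a rough sketch (my own illustration, in the spirit of those randomly connected networks, not code from the papers or the course; every parameter value is an assumption) of a sparsely, randomly connected network of excitatory and inhibitory leaky integrate-and-fire neurons:

    import numpy as np

    rng = np.random.default_rng(0)
    N_E, N_I = 800, 200                        # excitatory / inhibitory neurons
    N, p = N_E + N_I, 0.1                      # connection probability
    J_E, J_I = 0.2, -0.8                       # synaptic weights (mV), roughly balanced
    conn = rng.random((N, N)) < p              # random connectivity
    W = np.zeros((N, N))
    W[:, :N_E] = J_E * conn[:, :N_E]           # columns = presynaptic neurons
    W[:, N_E:] = J_I * conn[:, N_E:]

    dt, tau, V_th, V_reset, I_ext = 0.1, 20.0, 20.0, 0.0, 25.0   # ms, ms, mV, mV, mV
    V = rng.random(N) * V_th                   # random initial voltages
    spikes = []
    for step in range(5000):                   # simulate 500 ms
        fired = V >= V_th
        spikes.extend((step * dt, i) for i in np.flatnonzero(fired))
        V[fired] = V_reset
        V += dt / tau * (-V + I_ext) + W @ fired   # leak + drive + incoming spikes

    print(len(spikes) / N / 0.5)               # mean firing rate (spikes/s)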
e. Learning. We know a lot of facts (LTP, LTD, STDP).
• it’s not clear which, if any, are relevant.
• the relationship between learning rules and computation
is essentially unknown.
Theorists are starting to develop unsupervised learning
algorithms, mainly ones that maximize mutual information.
These are promising, but the link to the brain has not been
fully established.
What is unsupervised learning?
Learning structure from data without any help from anybody.
Example: most visual scenes are very unlikely to occur.
1000 × 1000 pixels => million dimensional space.
The space of possible pictures is much smaller, and forms a very complicated manifold:
[figure: axes pixel 1 vs pixel 2, with the possible visual scenes lying on a complicated manifold]
What is unsupervised learning?
Learning from spikes:
[figure: responses plotted as neuron 1 vs neuron 2 fall into two clusters, “dog” and “cat”]
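A toy sketch of that picture (mine, not the course's; the made-up "responses" and the use of plain k-means are assumptions): responses of two neurons form two clusters, and a clustering algorithm finds them without ever being told the labels.

    import numpy as np

    rng = np.random.default_rng(0)
    dog = rng.normal([10.0, 40.0], 3.0, size=(100, 2))   # firing rates (Hz), made up
    cat = rng.normal([35.0, 15.0], 3.0, size=(100, 2))
    X = np.vstack([dog, cat])                             # unlabeled data

    centers = X[rng.choice(len(X), 2, replace=False)]     # plain k-means, k = 2
    for _ in range(20):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
    print(centers)                                        # ≈ the dog / cat clusters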
A word about learning (remember these numbers!!!):
You have about 10¹⁵ synapses.
If it takes 1 bit of information to set a synapse,
you need 10¹⁵ bits to set all of them.
30 years ≈ 10⁹ seconds.
To set 1/10 of your synapses in 30 years,
you must absorb 100,000 bits/second.
Learning in the brain is almost completely unsupervised!!!
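The arithmetic, spelled out (the numbers are the slide's; the script is just a check):

    synapses = 1e15                   # synapses in your brain
    bits_needed = synapses / 10       # set 1/10 of them, 1 bit each
    seconds = 1e9                     # ≈ 30 years
    print(bits_needed / seconds)      # => 100,000 bits/second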
f. Where we know algorithms we know the neural
implementation (sort of):
vestibular system, sound localization, echolocation, addition
This is not a coincidence!!!!
Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons
(implementational level)
What we know: my score (1-10).
a. Anatomy.                                     5
b. Single neurons.                              6
c. The neural code.                             6
d. Recurrent networks of spiking neurons.       3
e. Learning.                                    2
The hard problems:
1. How does the brain extract latent variables?      1.001
2. How does it manipulate latent variables?          1.002
3. How does it learn to do both?                     1.001
Outline:
1. Basics: single neurons/axons/dendrites/synapses.   (Latham)
2. Language of neurons: neural coding.                (Sahani)
3. Learning at network and behavioral level.          (Dayan)
4. What we know about networks (very little).         (Latham)
Outline for this part of the course (biophysics):
1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!