TNI: Computational Neuroscience

Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TAs: Arthur Guez ([email protected]), Marius Pachitariu ([email protected])
Website: http://www.gatsby.ucl.ac.uk/~aguez/tn1/
Lectures: Tuesday/Friday, 11:00-1:00. Review: Tuesday, starting at 4:30.
Homework: assigned Friday, due Friday (1 week later). First homework: assigned Oct. 7, due Oct. 14.

What is computational neuroscience?

Our goal: figure out how the brain works. (Figure: a cube of neural tissue, 10 microns on a side.) There are about 10 billion cubes of this size in your brain! How do we go about making sense of this mess?

David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)

Example #1: memory.
- the problem: recall events, typically based on partial information; this is associative or content-addressable memory.
- an algorithm: dynamical systems with fixed points. (Figure: trajectories in activity space, axes r1, r2, r3, flowing to stored patterns.)
- neural implementation: Hopfield networks (sketch below),
  x_i = sign(∑_j J_ij x_j).
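To make the Hopfield implementation concrete, here is a minimal sketch in Python. The update rule x_i = sign(∑_j J_ij x_j) is from the slide; the Hebbian storage rule and all sizes and parameters are standard textbook choices, not something the slides specify.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 5                           # neurons, stored patterns (illustrative)
xi = rng.choice([-1, 1], size=(P, N))   # random +/-1 memory patterns

# Hebbian storage (standard choice): J_ij = (1/N) sum_mu xi^mu_i xi^mu_j,
# with no self-connections.
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# Start from a corrupted version of pattern 0: "partial information".
x = xi[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
x[flip] *= -1

# Recall: iterate x_i = sign(sum_j J_ij x_j) until a fixed point.
for _ in range(50):
    x_new = np.sign(J @ x)
    x_new[x_new == 0] = 1               # break ties deterministically
    if np.array_equal(x_new, x):
        break
    x = x_new

print("overlap with stored pattern:", (x @ xi[0]) / N)   # ~1.0 on success
```

The fixed points of the dynamics are the stored memories: starting anywhere nearby, the network flows to the closest one, which is exactly the "dynamical systems with fixed points" algorithm at the implementational level.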
Example #2: vision.
- the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.
- the problem (modern version): 2-D image on retina → recover the latent variables: house, sun, tree, cloud, bad artist.
- an algorithm: graphical models. Latent variables x1, x2, x3 sit above a low-level representation r1, r2, r3, r4; inference runs from the r's up to the x's. (A toy example follows after Comment #2 below.)
- implementation in networks of neurons: no clue.

Comment #1:
  the problem: easier
  the algorithm: harder
  neural implementation: harder, and often ignored!!!

A common approach: experimental observation → model. Usually very underconstrained!!!!

Example i: CPGs (central pattern generators). (Figure: two firing-rate traces.) Too easy!!!

Example ii: single cell modeling (a concrete sketch appears below).
  C dV/dt = -g_L(V - V_L) - g_K n^4 (V - V_K) - ...
  dn/dt = ...
Lots and lots of parameters ... which ones should you use?

Example iii: network modeling. Lots and lots of parameters × thousands.

Comment #2: you need to know a lot of math!!!!!
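Going back to Example #2: the phrase "recover the latent variables" can be made concrete with a toy generative model and Bayes' rule. Everything below (the tree-shaped template, the noise level, the prior) is a hypothetical illustration, not anything from the slides.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy generative model: a binary latent variable ("is there a tree?")
# produces a noisy 1-D "image" r. Inference recovers P(tree | r).
template = np.array([0., 1., 1., 1., 0.])   # hypothetical appearance of a tree
sigma = 0.5                                  # pixel noise level (assumed)
p_tree = 0.3                                 # prior probability of a tree (assumed)

def log_lik(r, x):
    """Log-likelihood of image r given latent x (1 = tree, 0 = no tree)."""
    return -0.5 * np.sum((r - template * x) ** 2) / sigma**2

# Generate an image containing a tree, then infer the latent from the pixels.
r = template + sigma * rng.standard_normal(template.size)
log_joint = np.array([np.log(1 - p_tree) + log_lik(r, 0),
                      np.log(p_tree) + log_lik(r, 1)])
post = np.exp(log_joint - log_joint.max())
post /= post.sum()
print("P(tree | image) =", round(post[1], 3))
```

With many latent variables and layered dependencies, this enumeration becomes a real graphical-model inference problem; how networks of neurons do it is, as the slide says, an open question.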
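And to make example ii of Comment #1 concrete: below is a forward-Euler integration of the classic Hodgkin-Huxley point neuron, whose potassium term g_K n^4 (V - V_K) appears on the slide. The parameter values are the standard textbook ones, and the injected current is an arbitrary choice; this is a sketch of the "lots and lots of parameters" problem, not anything fit to data.

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron (standard textbook parameters).
# Voltage in mV, time in ms, currents in uA/cm^2, conductances in mS/cm^2.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                     # time step and duration (ms)
V, n, m, h = -65.0, 0.32, 0.05, 0.60   # approximate resting values
I_ext = 10.0                           # injected current (arbitrary choice)

for _ in range(int(T / dt)):
    # C dV/dt = -(ionic currents) + external current, forward Euler
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)

print(f"final V = {V:.1f} mV")   # with I_ext = 10 the model spikes repeatedly
```

Even this one-compartment model has a dozen free constants; real neurons have many more channel types, which is the point of "which ones should you use?".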
Comment #3: this is a good goal, but it's hard to do in practice. Our actual bread and butter:
1. Explaining observations (mathematically).
2. Using sophisticated analysis to design simple experiments that test hypotheses.

Two experiments:
- Record, using loose patch, from a bunch of cells in culture; block synaptic transmission; record again. Found quantitative support for the balanced regime. (J. Neurophys., 83:808-827 and 828-835, 2000.)
- Perform whole cell recordings in vivo; stimulate cells with a current pulse every couple hundred ms; build a current-triggered PSTH. Showed that the brain is intrinsically very noisy, and is likely to be using a rate code. (Nature, 466:123-127, 2010.)

Comment #4: the levels are linked!!! Some algorithms are easy to implement on a computer but hard in a brain, and vice versa.
  hard for a brain, easy for a computer: A^-1, z = x + y, ∫dx ...
  easy for a brain, hard for a computer: associative memory.
We should be looking for the vice-versa ones, but it can be hard to tell which is which.

Basic facts about the brain

Your cortex unfolded: neocortex (cognition), 6 layers, a sheet roughly 30 cm across and about 0.5 cm thick. Underneath: subcortical structures (emotions, reward, homeostasis, much much more).

1 mm^3 of cortex (about 3×10^-5 oz): 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons.
1 mm^2 of a CPU: 1 million transistors; 2 connections/transistor (=> 2 million connections); 0.002 km of wire.
Whole brain (2 kg): 10^11 neurons; 10^15 connections; 8 million km of axons.
Whole CPU: 10^9 transistors; 2×10^9 connections; 2 km of wire.

Anatomy of a neuron: dendrites (input), soma (spike generation), axon (output).

Spikes: the voltage rests near -50 mV; a spike is a ~1 ms excursion up to about +20 mV. (Figure: voltage vs. time over 100 ms.)

Synapses: current flows from one neuron into the next at synapses. When neuron j emits a spike, the voltage on neuron i shows an EPSP (excitatory) or an IPSP (inhibitory) lasting on the order of 10 ms. The amplitude of the PSP is the synaptic weight w_ij, and it changes with learning.

A bigger picture view of the brain:
  latent variables x → peripheral spikes r → sensory processing → r̂, a "direct" code for latent variables → cognition, memory, action selection → r̂', a "direct" code for motor actions → motor processing → peripheral spikes r' → motor actions x'.

Demo: who is walking behind the picket fence? ("You are the cutest stick figure ever!")

In some sense, action selection is the most important problem: if we don't choose the right actions, we don't reproduce, and all the neural coding and computation in the world isn't going to help us. Do I call him (or her) and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?

Problems:
1. How does the brain extract latent variables?
2. How does it manipulate latent variables?
3. How does it learn to do both?

Ask at two levels:
1. What are the algorithms?
2. How are they implemented in neural hardware?

What do we know about the brain?

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color. Connectivity: we know (more or less) which area is connected to which (the van Essen diagram). We don't know the wiring diagram, the w_ij, at the microscopic level. But we might in a few decades!

b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley). Dendrites: lots of potential for incredibly complex processing. My guess: all they do is make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii). Wiring m neurons to n neurons over a distance L without dendrites takes one wire per pair, total length ~nmL; with dendrites, each neuron contributes one wire of length L (mL of axon plus nL of dendrite), total ~(n+m)L. How much I would bet that that's true: 20p.

c. The neural code. My guess: once you get away from the periphery, it's mainly firing rate: an inhomogeneous Poisson process with a refractory period is a good model of spike trains. How much I would bet: £100. The role of correlations: still unknown. My guess: don't have one.
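A minimal sketch of that spike-train model: an inhomogeneous Poisson process with an absolute refractory period, simulated by per-bin Bernoulli draws. The particular rate profile, refractory period, and bin size here are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 0.001, 5.0                            # 1 ms bins, 5 s of simulated time
t = np.arange(0.0, T, dt)
rate = 20.0 * (1.0 + np.sin(2 * np.pi * t))   # time-varying rate in Hz (assumed)
t_ref = 0.002                                 # 2 ms absolute refractory period (assumed)

spikes = []
last = -np.inf
for ti, ri in zip(t, rate):
    if ti - last < t_ref:
        continue                              # still refractory: no spike possible
    if rng.random() < ri * dt:                # Bernoulli approximation to Poisson
        spikes.append(ti)
        last = ti

print(f"{len(spikes)} spikes, mean rate = {len(spikes) / T:.1f} Hz")
```

Under this model, everything the neuron "says" is carried by the rate function; the exact spike times are noise.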
d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:
  van Vreeswijk and Sompolinsky (Science, 1996)
  van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand randomly connected networks very well (harder than you might think), and (I believe) we are on the verge of:
  i) understanding networks that have interesting computational properties.
  ii) computing the correlational structure in those networks.

e. Learning. We know a lot of facts (LTP, LTD, STDP; a standard form of STDP is sketched at the end of this section). But:
  - it's not clear which, if any, are relevant.
  - the relationship between learning rules and computation is essentially unknown.
Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information. These are promising, but the link to the brain has not been fully established.

What is unsupervised learning? Learning structure from data without any help from anybody.
  Example: most visual scenes are very unlikely to occur. 1000 × 1000 pixels => a million-dimensional space. The space of possible pictures is much smaller, and forms a very complicated manifold: the manifold of possible visual scenes.
  Learning from spikes: plot the responses of neuron 1 against neuron 2; clusters in that space (e.g., "dog" and "cat") can be found without labels.
  (Demo: which is real and which is a painting?)

A word about learning (remember these numbers!!!):
  You have about 10^15 synapses. If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them.
  30 years ≈ 10^9 seconds.
  To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second (10^14 bits / 10^9 s = 10^5 bits/s).
  Learning in the brain is almost completely unsupervised!!!
  (stolen from Geoff Hinton)
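As promised above, here is the standard pair-based phenomenological form of STDP: a pre-before-post spike pair potentiates the synapse, a post-before-pre pair depresses it, with exponential timing windows. The window shape is the textbook form; the amplitudes and time constants below are illustrative, not measured values.

```python
import numpy as np

# Pair-based STDP window. delta_t = t_post - t_pre, in seconds.
A_plus, A_minus = 0.010, 0.012       # LTP / LTD amplitudes (illustrative)
tau_plus, tau_minus = 0.020, 0.020   # 20 ms timing windows (illustrative)

def dw(delta_t):
    """Change in synaptic weight w_ij for one pre/post spike pair."""
    if delta_t > 0:                                # pre leads post: LTP
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # post leads pre: LTD

for d in (0.005, 0.020, -0.005, -0.020):
    print(f"t_post - t_pre = {1000 * d:+.0f} ms  ->  dw = {dw(d):+.4f}")
```

This pins down a learning rule at the synaptic level, but, as the slide says, the relationship between such rules and computation is essentially unknown.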
f. Where we know algorithms, we know the neural implementation (sort of): the vestibular system, sound localization, echolocation, addition. This is not a coincidence!!!! Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)

What we know: my score (1-10).
  a. Anatomy: 5
  b. Single neurons: 6
  c. The neural code: 6
  d. Recurrent networks of spiking neurons: 3
  e. Learning: 2
The hard problems:
  1. How does the brain extract latent variables? 1.001
  2. How does it manipulate latent variables? 1.002
  3. How does it learn to do both? 1.001

Outline:
1. Basics: single neurons/axons/dendrites/synapses. (Latham)
2. Language of neurons: neural coding. (Sahani)
3. Learning at the network and behavioral level. (Dayan)
4. What we know about networks (very little). (Latham)

Outline for this part of the course (biophysics):
1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!
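As a small preview of items 1 and 4, and of the EPSP slides earlier: a presynaptic spike injects a brief current, and the passive membrane integrates it into an EPSP whose amplitude plays the role of w_ij. Both time constants and the weight below are assumed round numbers, not measurements.

```python
import numpy as np

# A spike at t = 0 triggers an exponentially decaying synaptic current;
# a passive (leaky RC) membrane integrates it into an EPSP.
tau_m, tau_s = 0.010, 0.002   # membrane and synaptic time constants, s (assumed)
w = 0.5                       # synaptic weight, arbitrary units (assumed)

dt, T = 1e-4, 0.05
t = np.arange(0.0, T, dt)
I = w * np.exp(-t / tau_s)    # synaptic current following the spike
V = np.zeros_like(t)          # membrane potential relative to rest

for k in range(1, len(t)):    # forward Euler: tau_m dV/dt = -V + I
    V[k] = V[k - 1] + dt * (-V[k - 1] + I[k - 1]) / tau_m

print(f"EPSP peaks at {1000 * t[np.argmax(V)]:.1f} ms, amplitude {V.max():.3f}")
```

The result is the familiar fast-rise, slow-decay PSP shape, on roughly the 10 ms timescale shown on the voltage-trace slides.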