From Vision to Movement

Perhaps the most fundamental question in Visual-Motor Neuroscience is when, where, and how visual
signals are transformed into motor signals. We will consider more complex aspects of this in the
following sessions, but right now we just want to differentiate between visual and motor signals in the
brain. Does this difference occur between different areas of the brain? Between different neurons?
Within the same neurons at different times?
Approaching the brain from a global view, one starts with the impression that vision is encoded in
occipital cortex, movement in frontal cortex, and parietal cortex is involved in the transformation from
vision to action. However, things are not that simple. For example, frontal cortex neurons often carry
visual signals, and some occipital areas may code the direction of movement rather than the direction of
the stimulus when these are dissociated (although this is likely a matter of imagery rather than control).
To answer this seemingly basic question, one needs a variety of techniques. Computational models can
help us understand how visual signals are transformed into motor signals within artificial networks that
are designed to emulate some part(s) of the brain. Studies of patients with damage to specific brain
areas can help identify behavioral deficits that are more globally linked to vision vs. action. fMRI can
help identify different areas of the brain that show activity correlated to visual direction vs. movement
direction. Neurophysiological recordings from neurons can further show whether different cells within
the same or different brain areas code different aspects of vision and movement, or both of these within
the same cell. This generally needs to be done in association with some behavioral paradigm that
dissociates vision from action in time and/or space.
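To make the computational-modeling idea concrete, here is a minimal sketch (Python with NumPy, not any specific published model) of a toy network in which a population of direction-tuned 'visual' units is mapped onto a motor direction command by a learned linear readout. The unit count, tuning width, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# "Visual layer": 32 units with Gaussian tuning to stimulus direction (degrees).
pref = np.linspace(0, 360, 32, endpoint=False)            # preferred directions

def visual_population(theta_deg, sigma=40.0):
    # Wrapped angular difference between the stimulus and each preferred direction.
    d = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(theta_deg - pref))))
    return np.exp(-0.5 * (d / sigma) ** 2)

# Training set: stimulus directions and the desired motor output, represented
# here as the (cos, sin) components of the movement direction.
thetas = rng.uniform(0, 360, 500)
X = np.array([visual_population(t) for t in thetas])       # visual responses
Y = np.column_stack([np.cos(np.deg2rad(thetas)),
                     np.sin(np.deg2rad(thetas))])          # motor command

# "Learn" the visuomotor transformation as a linear readout (least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Test: read out the motor direction for a new stimulus at 137 degrees.
out = visual_population(137.0) @ W
print(np.rad2deg(np.arctan2(out[1], out[0])))              # close to 137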
A simple way to dissociate vision from action in time is to require a delay (usually on the order of a
second for neurophysiology, which records neurons in real time, and on the order of about 10 seconds for
fMRI, which measures much slower responses). We visited this ‘memory delay’ paradigm before when we
talked about working memory, but here we will focus on the visual and motor signals. Using this
paradigm, one often finds cells in the same area (for example in the cortical gaze control system and
superior colliculus) that have only visual responses (just after the visual stimulus), only motor responses
(starting just before the movement), or both (visuomotor). This allows us to link neural responses to
events, and categorize different neurons, but does not yet tell us what the cells are coding in space.
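As a concrete illustration of how such cells might be sorted, here is a minimal sketch (Python/NumPy) that labels a cell visual, motor, or visuomotor by comparing firing rates just after the stimulus and around movement onset with a pre-stimulus baseline. The window bounds and the simple rate criterion are illustrative assumptions; real studies use statistical tests rather than a fixed factor.

import numpy as np

def epoch_rate(spike_times_per_trial, t0, t1):
    """Mean firing rate (spikes/s) in the window [t0, t1) on each trial."""
    return np.array([np.sum((st >= t0) & (st < t1)) / (t1 - t0)
                     for st in spike_times_per_trial])

def classify_cell(spikes_re_stim, spikes_re_move,
                  baseline=(-0.3, 0.0), vis_win=(0.05, 0.25),
                  mot_win=(-0.1, 0.1), factor=2.0):
    """Label a cell 'visual', 'motor', 'visuomotor', or 'other'.
    spikes_re_stim / spikes_re_move: lists of spike-time arrays (in seconds),
    aligned to stimulus onset and to movement onset, respectively."""
    base = epoch_rate(spikes_re_stim, *baseline).mean() + 1e-9
    vis = epoch_rate(spikes_re_stim, *vis_win).mean()      # just after the stimulus
    mot = epoch_rate(spikes_re_move, *mot_win).mean()      # around movement onset
    has_vis, has_mot = vis > factor * base, mot > factor * base
    if has_vis and has_mot:
        return "visuomotor"
    return "visual" if has_vis else ("motor" if has_mot else "other")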
A simple way to dissociate visual and motor direction is the anti-saccade (or anti-reach) paradigm. Again,
in this paradigm humans are asked (or monkeys are trained) to saccade toward (pro) or away from (anti)
the visual stimulus, based on some kind of cue. In this paradigm, most visuomotor areas show tuning
toward the visual stimulus in their visual responses, and tuning toward the movement direction in their
motor responses, even in the same visuomotor cells. When combined with fMRI, one sees a similar
switch at the level of entire brain areas, although there is a tendency for the visual response to
dominate (or sometimes be the only response) in earlier occipital areas and the motor direction
response to dominate in later areas, especially primary motor cortex. Parietal areas are intermediate.
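One simple way to quantify this switch is to fit direction tuning twice, once against stimulus direction and once against movement direction, pooling pro and anti trials (on pro trials the two models agree; the anti trials are what break the tie). The sketch below (Python/NumPy) is illustrative only; the function and variable names are assumptions, not an existing analysis package.

import numpy as np

def cosine_fit_r2(rates, directions_deg):
    """R^2 of a cosine-tuning fit: rate ~ a + b*cos(theta) + c*sin(theta)."""
    th = np.deg2rad(directions_deg)
    X = np.column_stack([np.ones_like(th), np.cos(th), np.sin(th)])
    beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
    resid = rates - X @ beta
    return 1.0 - resid.var() / rates.var()

def stimulus_or_movement(rates, stim_dir_deg, move_dir_deg):
    """Compare tuning to stimulus vs. movement direction across pooled
    pro + anti trials; the better-fitting variable wins."""
    r2_stim = cosine_fit_r2(rates, stim_dir_deg)
    r2_move = cosine_fit_r2(rates, move_dir_deg)
    if r2_stim > r2_move:
        return "stimulus-tuned", r2_stim
    return "movement-tuned", r2_move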
A variant of the anti-reach task is to train people to make reaches while looking through prisms that
reverse everything you see left-to-right. You will even see your hand reversed, but of course the real
objects are still in the same place so you have to learn to reach opposite to what you see. We have
found that this occurs fairly automatically within about 10 trials if the task is kept simple. It is also
action-specific (for example, training on reach direction does not affect grasp orientation, and vice
versa). When we did this in conjunction with fMRI, we found that most parietal areas (except AG)
continued to code visual direction rather than movement direction. This seems to contradict the anti-pointing results, but these could be reconciled if we say that parietal cortex is coding the goal of the
movement in visual space (which is the direction you see with reversing prisms, but is imagined to be
opposite in the anti-reach experiments). However, a more complicated result emerged when people
recorded from individual ‘parietal reach’ neurons in monkeys trained on both the prism and anti-reach
task. Here, some neurons coded visual goal direction, whereas others coded movement direction. This
discrepancy could be due to species differences (which I think is unlikely), or more likely arises either from
differences in what fMRI vs. neurophysiology measure, or from the extensive training of the monkeys.
Finally, another way to dissociate vision from movement is by looking at trials with errors (e.g., saccades
that miss the visual stimulus). This has been done in situations where saccades made errors as a
function of initial eye position. It has also been done by looking to see if neural activity reflects variable
errors in saccades (motor coding), or not (target coding). We have recently exploited this in head-unrestrained gaze shifts in a memory delay paradigm while recording from both the superior colliculus
and frontal eye fields. In both places, we found that visual responses remained tightly linked to stimulus
location whereas motor responses showed variations related to errors in future gaze position. (It
appears that these errors accumulate in the neurons during the memory delay, and in the
final step from memory to the motor response). Moreover, visuomotor cells showed both of these
responses, showing again that the same cell can code different things at different times.
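For this error-based dissociation, the core logic can be sketched as a trial-by-trial correlation between firing rate and the signed landing error of the gaze shift. This is a minimal sketch with hypothetical variable names; the actual studies used more elaborate spatial model fitting.

import numpy as np

def error_coupling(visual_rates, motor_rates, gaze_error_deg):
    """Correlate each epoch's firing rate with the trial-by-trial gaze error.
    Pure target coding predicts correlations near zero; gaze/motor coding
    predicts a reliable correlation. In the studies described above, only the
    motor burst (not the visual response) tracked the errors."""
    r_vis = np.corrcoef(visual_rates, gaze_error_deg)[0, 1]
    r_mot = np.corrcoef(motor_rates, gaze_error_deg)[0, 1]
    return {"visual_epoch_r": r_vis, "motor_epoch_r": r_mot}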
The overall conclusion of these studies is that the visuomotor transformation is a network phenomenon
that can be observed at both the global and very local levels in the brain. Much of occipital-parietal-frontal cortex is active in such transformations, and there is a general transition from vision-to-movement coding along this axis. However, when one looks within these areas, one sees that different
cells contribute differently to this process, and that it can even be observed within cells.
In the upcoming sessions, we will look more closely at the specific computations that these
transformations must perform in order to create action from vision.