Abstracts - BCCN 2009
Table of contents

Overview
  Welcome to BCCN 2009 in Frankfurt!
  Organization
  Invited speakers
  Funding
  Program
Conference information
  Internet
  Instructions for presenters
  Food
  Venue
Abstracts
  Oral Presentations
    Wednesday, September 30
    Thursday, October 1
    Friday, October 2
  Poster Session I, Wednesday, September 30
    Dynamical systems and recurrent networks
    Information processing in neurons and networks
    Neural encoding and decoding
    Neurotechnology and brain computer interfaces
    Probabilistic models and unsupervised learning
  Poster Session II, Thursday, October 1
    Computer vision
    Decision, control and reward
    Learning and plasticity
    Sensory processing
  Demonstrations
Abstracts: Table of contents
Abstracts: Author index
Overview
Welcome to BCCN 2009 in Frankfurt!
It is my pleasure to welcome you to BCCN 2009 in Frankfurt am Main, Germany. Whether
this is your first visit to this annual meeting of the Bernstein Network for Computational
Neuroscience, or whether you have already attended some of the previous meetings in
Freiburg, Berlin, Göttingen and Munich, I hope you will enjoy your stay and find the
conference exciting.
The Bernstein Focus for Neurotechnology Frankfurt began operating less than a year
ago. We are happy to be part of this network and honored to have the opportunity to
organize this meeting. As in previous years, there will be a single track program of talks and
poster sessions. In line with the theme of our Bernstein Focus, a special emphasis is put on
Computational Vision. Highlights of this program will be invited talks by József Fiser,
Wulfram Gerstner, Amiram Grinvald, Gilles Laurent, Klaus Obermayer, Mriganka Sur and
the winner of the 2009 Bernstein Award.
But this meeting also differs in some ways from its four predecessors. We were charged with
the task of opening the meeting internationally. To this end, we solicited the submission of
abstracts from all over the world and recruited an international program committee to
evaluate abstracts for their suitability for oral presentation. Reflecting its new character, the
name of the meeting was changed from Bernstein Symposium to Bernstein Conference for
Computational Neuroscience. As a consequence of this opening, we have received a record
number of submitted abstracts. Of the total number of 192 submitted abstracts, 51 are from
international researchers. Like last year, the contributed abstracts have been published in
the journal Frontiers in Computational Neuroscience. You can access them at:
http://frontiersin.org/conferences/individual_conference_listing.php?confid=264.
A slightly more subtle change was the expansion of topic areas covered by the program. In
response to the growing interest in more applied research topics as represented by the new
Bernstein Foci for Neurotechnology, we have introduced a demonstration track, and several
exhibits will be shown at the meeting.
Thanks to the generous support of the Deutsche Telekom AG Laboratories, there will be
awards for the best talk, best demonstration and three best posters (€ 300 each). While our
award committee will select the winner of the best talk prize, all participants will vote on the
best demonstration and posters.
Naturally, the organization of this conference would not have been possible without the hard
work of the members of the organizing committee from the Frankfurt Institute for Advanced
Studies, our administrative staff, and the many PhD students and additional helpers. I am
deeply grateful for their enthusiasm, creativity, and tireless efforts to make this conference a
success.
Jochen Triesch, General Chair
Organization
Organizing committee
This conference is organized by the Frankfurt Institute for Advanced Studies (FIAS).
General Chair:
Program Chairs:
Publications Chair:
Publicity Chair:
Demo & Finance Chair:
Local Organization:
Student Symposium Chair:
Jochen Triesch
Jörg Lücke, Gordon Pipa, Constantin Rothkopf
Junmei Zhu
Prashant Joshi
Cornelius Weber
Gaby Schmitz
Cristina Savin
Program committee
Bruno Averbeck, University College London, UK
Dana Ballard, University of Texas at Austin, USA
Pietro Berkes, Brandeis University, USA
Matthias Bethge, Max-Planck Institute for Biological Cybernetics, Germany
Zhe Chen, Harvard Medical School, USA
Julian Eggert, Honda Research Institute Europe GmbH, Germany
Marc-Oliver Gewaltig, Honda Research Institute Europe, Germany
Rob Haslinger, Massachusetts General Hospital, USA
Konrad Koerding, Northwestern University, USA
Máté Lengyel, University of Cambridge, UK
David Nguyen, Massachusetts Institute of Technology, USA
Jonathan Pillow, University of Texas at Austin, USA
Alex Roxin, Columbia University, USA
Paul Schrater, University of Minnesota, USA
Lars Schwabe, University of Rostock, Germany
Peggy Seriès, The University of Edinburgh, UK
Fritz Sommer, University of California Berkeley, USA
Heiko Wersing, Honda Research Institute Europe GmbH, Germany
Diek W. Wheeler, George Mason University, USA
Award committee
Dana Ballard, University of Texas at Austin, USA
Theo Geisel, Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
Andreas Herz, Technische Universität München, Germany
Christoph von der Malsburg, FIAS, Germany
Peggy Seriès, University of Edinburgh, UK
Invited speakers
József Fiser, Brandeis University, USA
Wulfram Gerstner, Ecole Polytechnique Federale de Lausanne, Switzerland
Amiram Grinvald, Weizmann Institute, Israel
Gilles Laurent, California Institute of Technology, USA
Klaus Obermayer, Bernstein Center Berlin, Germany
Mriganka Sur, Massachusetts Institute of Technology, USA
Bernstein Award 2009 winner
Funding
The conference is mainly funded by the “Bundesministerium für Bildung und Forschung”
(BMBF, Federal Ministry of Education and Research) via the Bernstein Focus
Neurotechnology Frankfurt, which is part of the National Bernstein Network Computational
Neuroscience.
Company participation
Deutsche Telekom AG Laboratories
Honda Research Institute Europe GmbH
Exhibiting companies
Multi Channel Systems MCS GmbH
inomed Medizintechnik GmbH
neuroConn GmbH
NIRx Medical Technologies LLC
Brain Products GmbH
Springer Verlag GmbH
Program
September 29 – 30
Satellite Event at FIAS: Workshop
Tuesday, September 29, 14:00 – 18:00
Wednesday, September 30, 09:00 – 13:00
Title:
“Getting the message across”
Katrin Weigmann
September 30
Bernstein Meeting and Registration
Wednesday, September 30, 10:00 – 13:30
10:00
Meeting of the members of Bernstein Computational Neuroscience e.V.
by invitation only
11:30
Bernstein Project Committee Meeting
by invitation only
11:30
Registration and Welcome Reception
Talk Session
Wednesday, September 30, 13:30 – 15:40
Session Chairs: Jochen Triesch, Constantin Rothkopf
13:30
Welcome, Issuing of Bernstein Award
14:00
Keynote
Bernstein Awardee
15:00
Neuronal phase response curves for maximal information transmission
Jan-Hendrik Schleimer, Martin Stemmler
15:20
Coffee break
Talk Session: Plasticity
Wednesday, September 30, 15:40 – 17:20
Session Chair: Christoph von der Malsburg
15:40
Keynote: Modeling synaptic plasticity
Wulfram Gerstner
16:40
Adaptive spike timing dependent plasticity realises palimpsest auto-associative
memories
Klaus Pawelzik, Christian Albers
17:00
A gamma-phase model of receptive field formation
Dana H Ballard
Poster Session I
Wednesday, September 30, 17:20 – 21:20
Poster topics:
Dynamical systems and recurrent networks; Information processing in neurons
and networks; Neural encoding and decoding; Neurotechnology and brain
computer interfaces; Probabilistic models and unsupervised learning
Catering
October 1
Talk Session: Detailed models
Thursday, October 1, 09:00 – 11:00
Session Chair: Klaus Pawelzik
09:00
Keynote: Rules of cortical plasticity
Mriganka Sur
10:00
Efficient reconstruction of large-scale neuronal morphologies
Panos Drouvelis, Stefan Lang, Peter Bastian, Marcel Oberlaender, Thorben
Kurz, Bert Sakmann
10:20
Adaptive accurate simulations of single neurons
Dan Popovic, Stefan Lang, Peter Bastian
10:40
Coffee break
Talk Session: Synchrony
Thursday, October 1, 11:00 – 13:00
Session Chair: Gordon Pipa
11:00
Synchronized inputs induce switching to criticality in a neural network
Anna Levina, J. Michael Herrmann, Theo Geisel
11:20
Role of neuronal synchrony in the generation of evoked EEG/MEG responses
Bartosz Telenczuk, Vadim Nikulin, Gabriel Curio
11:40
Spike time coordination maps to diffusion process
Lishma Anand, Birgit Kriener, Raoul-Martin Memmesheimer, Marc Timme
12:00
Lunch break
Talk Session: Network dynamics
Thursday, October 1, 13:00 – 15:00
Session Chair: Jörg Lücke
13:00
Keynote: Coding and connectivity in an olfactory circuit
Gilles Laurent
14:00
Neurometric function analysis of short-term population codes
Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge
14:20
A network architecture for maximal separation of neuronal representations - experiment and theory
Ron Jortner, Gilles Laurent
14:40
Dynamics of nonlinear suppression in V1 simple cells
Manuel Levy, Anthony Truchard, Gérard Sadoc, Izumi Ohzawa, Yves Fregnac,
Ralph Freeman
Poster Session II and Demonstrations
Thursday, October 1, 15:00 – 19:00
Poster topics:
Computer vision; Decision, control and reward; Learning and plasticity; Sensory
processing
19:00
Conference dinner
October 2
Talk Session: Representations / Decoding
Friday, October 2, 09:00 – 11:00
Session Chair: Máté Lengyel
09:00
Keynote: Modelling cortical representations
Klaus Obermayer
10:00
Inferred potential motor goal representation in the parietal reach region
Christian Klaes, Stephanie Westendorff, Alexander Gail
10:20
A P300-based brain-robot interface for shaping human-robot interaction
Andrea Finke, Yaochu Jin, Helge Ritter
10:40
Coffee break
Talk Session: Integration
Friday, October 2, 11:00 – 13:00
Session Chair: Peggy Seriès
11:00
On the interaction of feature- and object-based attention
Detlef Wegener, Friederike Ehn, Orlando Galashan, Andreas K Kreiter
11:20
Interactions between top-down and stimulus-driven processes in visual feature
integration
Marc Schipper, Udo Ernst, Klaus Pawelzik, Manfred Fahle
11:40
Coding of interaural time differences in the DNLL of the mongolian gerbil
Hannes Lüling, Ida Siveke, Benedikt Grothe, Christian Leibold
12:00
Lunch break
Talk Session: Memory
Friday, October 2, 13:00 – 15:00
Session Chair: Constantin Rothkopf
13:00
Keynote: Probabilistic inference and learning: from behavior to neural
representations
József Fiser
14:00
A multi-stage synaptic model of memory
Alex Roxin, Stefano Fusi
14:20
An integrated system for incremental learning of multiple visual categories
Stephan Kirstein, Heiko Wersing, Horst-Michael Groß, Edgar Körner
14:40
Coffee break
Talk Session: Mesoscopic dynamics
Friday, October 2, 15:00 – 17:00
Session Chair: Dirk Jancke
15:00
A mesoscopic model of VSD dynamics observed in visual cortex induced by
flashed and moving stimuli
Valentin Markounikau, Christian Igel, Dirk Jancke
15:20
Keynote: Dynamics of ongoing activity in anesthetized and awake primate
Amiram Grinvald, David Omer
16:20
Awards and Closing Speech
October 3
Satellite Event at FIAS: Student Symposium
Saturday, October 3, 09:30 – 17:00
Invited speakers:
Tim Gollisch: Neural coding in the retina
Máté Lengyel: Episodic memory: why and how - or the powers and perils of
Bayesian inference in the brain
Peggy Seriès: Sensory adaptation and the readout of population codes
Conference information
Internet
To obtain Internet access, please come to the welcome reception desk, sign the
"terms of agreement" form, and receive your login and password.
Access to the internet is established through a secure connection using your web browser.
Connect to the wireless network with the SSID ‘FREIFLUG' and start your browser. You will
have to agree to the 'terms of agreement' on the upcoming page. On the next page, enter
the login and password. After clicking the 'login' button, a separate popup window will open
showing your connection status. Please make sure to disable any popup blockers for this
page. When leaving the network, you can close the connection by clicking the 'logout' button
in the popup window.
Instructions for presenters
Oral sessions
The conference has single-track oral sessions. Contributed talks are 20 minutes including
questions. The main meeting room is equipped with audio visual equipment, such as a
projector and microphones. A laptop (Windows XP) with standard software (MS Office
2007 with PowerPoint, and OpenOffice) will be available to load your talks ahead of time via
USB or CD. You can also use your own personal laptop. In any case, please get in touch
with the session chair right at the beginning of the break preceding your session.
Poster sessions
There will be two official poster sessions on Wednesday and Thursday. Poster boards are
numbered according to abstract numbers as they appear in this program book (labelled as
W# (Poster Session I on Wednesday) and T# (Poster Session II on Thursday)). On your
poster day, please set up your poster starting at 11:30 on Wednesday or at 8:30 on
Thursday, and take it down by 21:20 on Wednesday or by 19:00 on Thursday. Please keep in
mind that the conference dinner starts right after the end of the poster session on Thursday.
Posters will be displayed in rooms 14 and 15 on the third floor. Poster boards are 140 cm
(55.1 in) high by 100 cm (39.4 in) wide. Pins will be available at registration.
Food
A welcome reception snack will be served in the foyer on Wednesday. All coffee breaks will
be held on the 3rd floor of the new auditorium. Lunch will be served in the food courts
(Mensa No. 1 and 3, open Monday-Friday 11:00-15:00). Each voucher covers the following
courses:
1 starter
1 main dish with 2 sides
1 dessert from the offer of the day
1 soft drink (0.5 l)
Additional courses, and food or drink from places other than Mensa 1 and 3, must be paid
for by participants themselves.
Map of food courts on Campus Westend
Venue
The conference is held in the new auditorium ("Neues Hörsaalzentrum") of the Goethe
University Frankfurt at Campus Westend:
Campus Westend: Neues Hörsaalzentrum
Grüneburgplatz 1
D-60323 Frankfurt am Main
The new auditorium and the casino (food courts) form the new center of Campus Westend,
Goethe University Frankfurt. This architecture is part of the design concept of Ferdinand
Heide, which won the 2002 urban design competition for developing the grounds around the
landmark IG Farben Building on Campus Westend. The concept is to be completed by 2020.
The IG Farben Building, designed by Hans Poelzig, was built in 1929. After World War II it
served as the headquarters of the American forces. Since the withdrawal of the Americans
in 1995, the building has housed the faculties of humanities and of cultural and social
sciences of the Goethe University.
Getting there
The closest subway station ("U-Bahn") is “Holzhausenstraße” (lines U1/2/3), a 10-minute
walk from the conference building. The nearest bus stop is “Simon-Bolivar-Anlage”, served
by bus line 36, a 4-minute walk away.
The city center (station Hauptwache, U1/2/3/6/7) is about 2 km away.
The public transport in Frankfurt is managed by Rhein-Main-Verkehrsverbund whose
multilingual website (www.rmv.de) has a very useful route planner to organize your trips in
and around Frankfurt. Tickets must be bought from ticket machines before the trip. On
buses you can also buy a ticket from the driver when boarding; this alternative is not
available in the subway.
Abstracts
Abstracts and supplementary material have been published in the journal
“Frontiers in Computational Neuroscience” and can be found at:
http://frontiersin.org/conferences/individual_conference_listing.php?confid=264
Oral Presentations
Wednesday, September 30
Neuronal phase response curves for maximal information
transmission
Jan-Hendrik Schleimer*1,3, Martin Stemmler2,4
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
3 Institute for Theoretical Biology, Humboldt University, Berlin, Germany
4 Ludwig-Maximilians-Universität, Munich, Germany
* [email protected]
The Hodgkin and Huxley model of a neuron, when driven with constant input, spikes
periodically, such that the dynamics trace out a stable, closed orbit in the system's state
space, which is composed of the voltage and the gating variables. If the input is not
constant, but varies in time around a mean input, then the underlying dynamics is perturbed
away from the stable orbit, yet the underlying limit cycle will still be recognizable and will act
as an attractor for the dynamics.
Each point in state space is associated with a phase, which translates directly into a
prediction of the next spike time in the absence of further perturbing input and yields phase
response curves (PRC), one for each dynamical variable. For instance, the PRC of the
gating variables relates the stochasticity in channel opening and closing to the temporal jitter
in spikes, whereas the voltage PRC describes the shift in the next spike time for a brief input
pulse. By coarse-graining the fast time-scales of channel noise (Fox & Lu, 1994), we reduce
models of the Hodgkin-Huxley type to one-dimensional noisy phase oscillators, which allows
one to deduce the inter-spike interval distribution in a model, or, vice versa, estimate the
channel noise from experimental histograms.
For the phase model, we perform a linear perturbation analysis based on the Fokker-Planck
equations, which describe the time evolution of the probability distribution over the dynamical
variables. From this analysis, we derive the linear filter that maps the input onto an average
response, based on the system's PRC and the intrinsic noise level. Together with the
knowledge of the stimulus statistics, we use this filter to compute a lower bound on the
information transmitted (Gabbiani & Koch, 1998). We then optimize the PRC (represented as
a Fourier series) to transmit the most information given a fixed sensitivity to broadband input
noise and the biophysical requirement that the voltage PRC must tend to zero during the
action potential itself. The resulting optimal PRC lies between that of a classical type I
(integrator) and type II neuron (resonator) (Hodgkin, 1948), and is fairly insensitive to
stimulus bandwidth and noise level. In addition, we extend the results of Ermentrout et al.
(2007) to relate the PRC to the spike-triggered average voltage and the spike-triggered
covariance of the voltage in the presence of noise, allowing us to quantify not only how
much, but also what information is transmitted by a neuron with a particular PRC, and the
stimulus features to which that neuron is sensitive.
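As a rough illustration of this reduction (the PRC shape, drive, and all parameters below are assumptions for the sketch, not the authors' fitted quantities), a one-dimensional noisy phase oscillator can be simulated directly and its inter-spike-interval statistics read off:

```python
import numpy as np

# Minimal sketch of the phase reduction described above. The PRC shape,
# drive, and noise level are illustrative assumptions, not fitted values.
rng = np.random.default_rng(0)
f0, sigma, dt, T = 50.0, 1.0, 1e-4, 20.0   # base rate (Hz), phase noise, step (s), duration (s)

def prc(phi):
    # Assumed PRC lying between type I (1 - cos) and type II (sin) shapes.
    return 0.5 * (1.0 - np.cos(2 * np.pi * phi)) + 0.5 * np.sin(2 * np.pi * phi)

def drive(t):
    return 5.0 * np.sin(2 * np.pi * 8.0 * t)   # toy time-varying input around the mean

phi, t, last_spike, isis = 0.0, 0.0, 0.0, []
for _ in range(int(T / dt)):
    # d(phi)/dt = f0 + Z(phi) * I(t) + intrinsic (channel) noise
    phi += dt * (f0 + prc(phi) * drive(t)) + sigma * np.sqrt(dt) * rng.standard_normal()
    t += dt
    if phi >= 1.0:                              # phase crosses threshold -> spike
        phi -= 1.0
        isis.append(t - last_spike)
        last_spike = t

isis = np.array(isis)
print(f"{isis.size} spikes; ISI mean = {isis.mean()*1e3:.1f} ms, CV = {isis.std()/isis.mean():.2f}")
```

Estimating the channel noise from an experimental ISI histogram, as the abstract describes, amounts to inverting this forward simulation for sigma.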
Modeling synaptic plasticity
Wulfram Gerstner*1
1 Laboratory of Computational Neuroscience, Ecole Polytechnique Federale de Lausanne,
Lausanne, Switzerland
* [email protected]
Adaptive spike timing dependent plasticity realises palimpsest
auto-associative memories
Klaus Pawelzik*1, Christian Albers1
1 Department for Theoretical Physics, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
* [email protected]
Memory contents are believed to be stored in the efficiencies of synapses in highly recurrent
networks of the brain. In prefrontal cortex it was found that short- and long-term memory is
accompanied by persistent spike rates [1,2], indicating that reentrant activities in recurrent
networks reflect the content of synaptically encoded memories [3].
It is, however, not clear which mechanisms enable synapses to incrementally accumulate
information from the stream of spatially and temporally patterned inputs which under natural
conditions enter as perturbations of the ongoing neuronal activities. For successful
sequential learning only novel input should alter specific synaptic efficacies while previous
memories should be preserved as long as network capacity is not exhausted. In other words,
synaptic learning should realise a palimpsest property, erasing the oldest memories first.
Here we demonstrate that synaptic modifications which sensitively depend on temporal
changes of pre- and post-synaptic neural activity can enable such incremental learning in
recurrent neuronal networks. We investigated a realistic rate-based model and found that,
for robust incremental learning in a setting with sequentially presented input patterns,
specific adaptation mechanisms of spike timing dependent plasticity (STDP) are required
that go beyond the synaptic changes observed with sequences of pre- and post-synaptic
spikes [4]. Our predicted pre- and post-synaptic adaptation mechanisms, which contribute
to synaptic changes in response to the respective rate changes, are experimentally testable
and, if confirmed, would strongly suggest that STDP provides an unsupervised learning
mechanism particularly well suited for incremental memory acquisition, circumventing the
notorious stability-plasticity dilemma.
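The palimpsest property itself can be demonstrated in a much simpler setting. The sketch below is a generic Hopfield-style toy with exponentially decaying Hebbian weights (all parameters hypothetical; this is not the authors' rate-based STDP model): patterns are stored sequentially, and recall quality falls off with pattern age, so the oldest memories are erased first.

```python
import numpy as np

# Toy palimpsest memory: Hebbian outer-product learning with weight decay,
# so each newly stored pattern gradually overwrites the oldest ones.
rng = np.random.default_rng(1)
N, n_patterns, decay = 200, 30, 0.9

patterns = rng.choice([-1, 1], size=(n_patterns, N))
W = np.zeros((N, N))
for xi in patterns:                        # sequential presentation
    W = decay * W + np.outer(xi, xi) / N
np.fill_diagonal(W, 0.0)

def recall_overlap(xi, steps=20, flip=0.1):
    """Recall from a noisy cue; overlap 1.0 means perfect retrieval."""
    s = xi * np.where(rng.random(N) < flip, -1, 1)
    for _ in range(steps):
        s = np.sign(W @ s + 1e-12)
    return float(s @ xi) / N

for k in (0, 10, 20, 29):                  # 29 = most recently stored pattern
    print(f"pattern {k:2d}: recall overlap = {recall_overlap(patterns[k]):.2f}")
```

Recent patterns are retrieved almost perfectly while the oldest are lost, which is exactly the forgetting order the abstract argues incremental learning requires.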
Acknowledgements:
Supported by the BMBF and the Center for Cognitive Sciences (ZKW) Bremen.
References:
[1] Miyashita, Nature 335, 817, 1988.
[2] Miyashita and Chang, Nature 331, 86, 1988.
[3] Amit et al., J. Neurosci. 14, 6435, 1994.
[4] Froemke et al., J. Neurophysiol. 95, 1620, 2006.
A gamma-phase model of receptive field formation
Dana H Ballard*1
1 University of Texas, Austin, USA
* [email protected]
For the most part, cortical neurons exhibit seemingly random spiking behavior that can be
modeled as a Poisson process with a baseline rate that has been shown to be a correlate of
experimental parameters in hundreds of experiments. Because of this extensive data set, it
has been almost taken for granted that a neuron communicates a scalar parameter by its
spike rate, even though this strategy has proven very difficult to realize in large-scale circuit
simulations.
One of the reasons that it has been difficult to find an alternate interpretation of cortical
spikes may be that they are used for a number of different purposes simultaneously, each
having different requirements. To focus on two of the important ones, the cells must learn
their receptive fields and at the same time communicate stimulus information. These two
tasks have radically different information processing requirements. The first task is slow and
incremental, occurring prominently during development, but also in the lifetime of the animal,
and uses aggregates of inputs. The second task occurs very rapidly and uses just a few
spikes over a very fast, 100-300 millisecond timescale.
Our primary result suggests that the membrane potentials of cells with overlapping receptive
fields represent components of probability distributions, such that each spike generated is a
data point from the combined distribution. Thus, if the receptive fields overlap, only one cell
in the overlap region can send a given spike, and the overlapping cells compete
probabilistically to be the sender. Each spike communicates numerical information by using
relative timing: within a wave of spikes, the earlier spikes represent higher values. This
strategy can be used in general circuitry, including feedback circuitry, if such waves are
referenced to the gamma oscillatory signal. Spikes coincident with zero phase of the gamma
signal can signal high numbers, and spikes lagging by a few milliseconds can signal lower
numbers. The reason a neuron's spike train appears random is that, in any specific
computation, the information is randomly routed in a neural circuit from moment to moment.
It is this random routing that causes the spike train to appear almost Poisson in distribution.
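A toy version of this timing code can be written down directly (the 40 Hz gamma frequency, 10 ms coding window, and linear value-to-latency mapping are assumptions for illustration, not the model's actual parameters):

```python
import numpy as np

# Sketch of gamma-phase coding: within one gamma cycle, larger analog
# values are signaled by spikes arriving earlier relative to phase zero.
gamma_freq = 40.0                      # Hz -> 25 ms cycle (assumed)
window_ms = 10.0                       # coding window within the cycle (assumed)

def value_to_lag(v, v_max=1.0):
    """Map a value in [0, v_max] to a spike latency; v_max fires at 0 ms lag."""
    return (1.0 - v / v_max) * window_ms

def lag_to_value(lag_ms, v_max=1.0):
    return v_max * (1.0 - lag_ms / window_ms)

values = np.array([0.9, 0.5, 0.1])
lags = np.array([value_to_lag(v) for v in values])
print(f"gamma cycle = {1000.0 / gamma_freq:.1f} ms")
print("values :", values)
print("lags ms:", lags)                # earlier spikes = higher values
print("decoded:", [round(lag_to_value(l), 2) for l in lags])
```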
Learning incorporates sparse coding directly in that the input is only approximated to a
certain error, resulting in a very small number of cells at each cycle that are required to send
spikes. Furthermore, learning uses the spike timing phase directly to modify each synapse
according to a Hebb rule. The gamma phase timing is also critical for fitting the data rapidly.
By using lateral inhibition from successive components, the input data can be coded in a
single gamma phase cycle.
To illustrate these points, we simulate the learning of receptive fields in striate cortex,
making use of a model of the LGN to striate cortex feedback circuitry. The simulation
suggests the possibility that the rate code interpretation of cortical cells may be a correlate of
a more fundamental process and makes testable predictions given timing information.
Thursday, October 1
Rules of cortical plasticity
Mriganka Sur*1
1 Department of Brain and Cognitive Sciences and Picower Institute for Learning and
Memory, Massachusetts Institute of Technology, Cambridge, USA
* [email protected]
Plasticity of adult synapses and circuits is overlaid on principles of cortical organization and
development. Plasticity induced by sustained patterned stimulation in adult visual cortex, and
by visual deprivation in the developing visual cortex, illustrates how feedforward, Hebbian
mechanisms combine with feedback, self-regulatory mechanisms to mediate neuronal and
network plasticity. Cortical plasticity relies on representations, and its rules are implemented
by specific synaptic molecules as well as by astrocytes that are mapped precisely alongside
neurons.
Efficient reconstruction of large-scale neuronal morphologies
Panos Drouvelis*1, Stefan Lang1, Peter Bastian1, Marcel Oberlaender2, Thorben Kurz2
1 Interdisciplinary Center for Scientific Computing, University of Heidelberg, Heidelberg,
Germany
2 Max-Planck Institute of Neurobiology, Munich, Germany
* [email protected]
The recently developed serial block-face scanning electron microscopy (SBFSEM) allows for
imaging of large volumes of brain tissue (~200x200x100 microns) with approximately 20 nm
spatial resolution. Using this technique to reconstruct single biocytin-labeled neurons will
reveal new insights into widely spreading neuron morphologies at the subcellular level.
As a first step, we therefore aim to extract the number and three dimensional distribution of
spines, to categorize spine morphologies and to determine membrane surface areas for
dendrites of excitatory cortical neurons. This will yield key prerequisites for an authentic
anatomical neuron classification and conversion into realistic full-compartmental models,
which could also be integrated within neuronal microcircuits. Hence, the presented work
will help to reengineer the morphology and connectivity of large functional neuronal
networks at subcellular resolution.
However, imaging a few hundred microns of cortical tissue, with nanometer resolution,
results in very large volumes of image data. Here, we present an efficient reconstruction
pipeline that allows for a fast and reliable extraction of neuron geometry. The developed
framework comprises specialized three dimensional segmentation and morphological
operators, which result in tracings of the three and one dimensional skeleton structure of
neurons.
The principal algorithms of the presented reconstruction pipeline are parallelized using the
CUDA programming model. Exploiting the performance of current graphics hardware, the
CUDA platform allows for an efficient multi-thread parallelization of visualization algorithms,
either at the level of pixels or of voxels. It further offers possibilities to optimize the
management of available hardware resources. As a consequence, we achieved efficient
processing of input data volumes of typical sizes of several gigabytes, and the image
processing time is reduced from a few hours of CPU time to a few minutes.
A resultant example, revealing highly resolved morphological characteristics and geometries
of dendrites and spines, is shown in Fig. 1 (supplementary material). Thus, realistic
anatomical description and classification of neuron types will become possible in the near
future.
Adaptive accurate simulations of single neurons
Dan Popovic*1, Stefan Lang1, Peter Bastian1
1 Interdisciplinary Center for Scientific Computing, University of Heidelberg, Heidelberg,
Germany
* [email protected]
Active signal processing in physiological single neurons can be described by a nonlinear
system of one partial and several ordinary differential equations, composed of the cable
equation and a reaction part in Hodgkin-Huxley notation. The partial differential equation for
the potential $v$ reads

$$c_m(x,t)\,\partial_t v(x,t) = \partial_x\big(g_a(x,t)\,\partial_x v(x,t)\big) - i_{\mathrm{Ion}}(x,t) - i_{\mathrm{Syn}}(x,t),$$

where $c_m$ is the membrane capacitance and $g_a$ the axial conductivity of the cell. The
current $i_{\mathrm{Syn}}$ is imposed by synaptic inputs, whereas the ionic current
$i_{\mathrm{Ion}}$ may be driven by several ionic channels, each of which is controlled by an
additional ordinary differential equation.
The system exhibits various electrical activity patterns which are often localized in space as
well as rapid changes in characteristic time scales of the cell. In order to achieve reliable
simulation results as well as to minimize expensive simulation time, numerical simulation
codes should resolve local features adapting the computational grid and time steps
accordingly. In this sense, it is necessary to have detailed information about the
discretisation error evoked by the applied numerical solution schemes in space and time.
Recently, second order accurate Finite Volume (FV) schemes have been developed to
discretise and solve the model numerically in conjunction with conventional time stepping
schemes such as the backward Euler or the Crank-Nicolson method. However, information
about the error contributions arising from the spatial and temporal discretisation schemes is
not yet available, as it is not easy to obtain.
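For orientation, the following sketch shows the kind of implicit time stepping referred to above, applied to a passive cable with constant coefficients and dimensionless toy parameters; it is not the second-order FV scheme or the a posteriori error estimator of this work.

```python
import numpy as np

# Minimal sketch of a backward Euler step for the passive cable
# c_m dv/dt = d/dx(g_a dv/dx) - g_L v + i_inj (dimensionless toy values;
# NOT the second-order FV scheme or error estimator of the abstract).
nx, dx, dt, n_steps = 100, 0.1, 0.01, 1000
c_m, g_a, g_L = 1.0, 1.0, 0.1

# Discrete Laplacian with sealed-end (zero-flux) boundaries.
lap = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / dx**2
lap[0, 0] = lap[-1, -1] = -1.0 / dx**2

# Backward Euler: (I + dt/c_m (g_L I - g_a L)) v_new = v + dt/c_m i_inj
A = np.eye(nx) + (dt / c_m) * (g_L * np.eye(nx) - g_a * lap)

v = np.zeros(nx)
i_inj = np.zeros(nx)
i_inj[nx // 2] = 1.0                  # steady current injection mid-cable
for _ in range(n_steps):
    v = np.linalg.solve(A, v + (dt / c_m) * i_inj)

spread = np.sqrt(np.average((np.arange(nx) - nx // 2) ** 2, weights=v)) * dx
print(f"peak depolarization: {v.max():.3f}, spatial spread: {spread:.2f}")
```

Adaptive schemes of the kind described next refine dx and dt locally wherever an error estimate for such a step is too large, instead of using the uniform grid of this sketch.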
We present a duality based a posteriori error estimation method for FV based solution
schemes which splits up spatial and temporal contributions to the discretisation error. The
method evolves from a framework for error estimation for finite element methods for
diffusion-reaction systems developed by Estep et al. (Memoirs of the AMS, No. 696). Based
on the error estimates, the spatial discretisation grid and time step are optimized in order to
resolve local electrical activity and changes of intrinsic time scales during simulations. The
error functional to be observed can be chosen arbitrarily. The previously described methods
have been realized within NeuroDUNE, a simulator for large-scale neuron networks.
Numerical results for simulations of L5A pyramidal cells of the rat barrel cortex, observing
point errors at the soma and the spatial error in the L2 sense at the end of the simulation
time interval, are presented. We show various experiments including passive and active
signal processing with multiple synaptic inputs. Further, we examine uniform as well as
adaptive simulation configurations with regard to accuracy and efficiency. An outlook on the
possible application of the adaptation scheme to network simulations will be given.
Synchronized inputs induce switching to criticality in a neural
network.
Anna Levina*13, J. Michael Herrmann2, Theo Geisel13
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Institute of Perception, Action and Behaviour, University of Edinburgh, Edinburgh, UK
3 Max-Planck Institute for Dynamics and Self-Organisation, Göttingen, Germany
* [email protected]
The concept of self-organized criticality (SOC) describes a variety of phenomena ranging
from plate tectonics, the dynamics of granular media and stick-slip motion to neural
avalanches. In all these cases the dynamics is marginally stable and event sizes obey a
characteristic power-law distribution. Criticality was shown to bring about optimal
computational capabilities, optimal transmission and storage of information, and sensitivity to
sensory stimuli. In neuronal systems the existence of critical avalanches was predicted in a
paper of one of the present authors [1] and observed experimentally by Beggs and Plenz [2].
In our previous work, we have shown that an extended critical interval can be obtained in a
neural network by incorporation of depressive synapses [3]. In the present study we
scrutinize a more realistic dynamics for the synaptic interactions that can be considered as
the state-of-the-art in computational modeling of synaptic interaction. Interestingly, the more
complex model does not exclude an analytical treatment and it shows a type of stationary
state consisting of self-organized critical phase and a subcritical phase that has not been
described earlier. The phases are connected by first- or second-order phase transitions in a
cusp bifurcation which is implied by the dynamical equations of the underlying biological
model [4]. We show that switching between the critical and the subcritical phase can be
induced by synchronized excitatory or inhibitory inputs, and we study the reliability of
switching as a function of the input strength. We present exact analytical results supported
by extensive numerical simulations.
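The avalanche statistics at issue can be reproduced with a generic toy model in the spirit of refs. [1,3] (all-to-all coupling with a fixed synaptic strength alpha; the model studied here additionally has dynamic synapses and input-induced switching): as alpha approaches 1, avalanche sizes approach a power-law distribution.

```python
import numpy as np

# Generic avalanche toy (fixed coupling alpha; the authors' model adds
# dynamic synapses): N all-to-all coupled threshold units, slowly driven.
rng = np.random.default_rng(2)
N, alpha = 300, 0.95                 # alpha -> 1 approaches the critical point
u = rng.random(N)                    # membrane variables in [0, 1)
sizes = []
for _ in range(20000):
    u[rng.integers(N)] += rng.random()          # slow external drive
    size = 0
    while (spk := np.where(u >= 1.0)[0]).size:  # propagate the avalanche
        size += spk.size
        u[spk] -= 1.0                           # reset, keeping overshoot
        u += alpha * spk.size / N               # each spike excites the network
    sizes.append(size)

counts = np.bincount(np.asarray(sizes))
print("P(size = s) for s = 1, 2, 4, 8, 16:")
print([f"{counts[s] / len(sizes):.4f}" if s < len(counts) else "0"
       for s in (1, 2, 4, 8, 16)])
```

In this toy, criticality must be tuned by hand through alpha; the point of the synaptic dynamics in the authors' model is precisely that the network self-organizes toward (and can be switched away from) that regime.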
Although presented in the specific context of a neural model, the dynamical structure of our
model is of more general interest. It is the first observation of a system that combines a
complex classical bifurcation scenario with a robust critical phase. Our study suggests that
critical properties of neuronal dynamics in the brain may be considered as a consequence of
the regulatory mechanisms at the level of synaptic connections. The system may account
not only for SOC behavior, but also for various switching effects observed in the brain. It
suggests that observations of up and down states in the prefrontal cortex, as well as the
discrete changes in synaptic potentiation and depression, may be explained as network
effects. The relation between neural activity and average synaptic strength, which we derive
here, may account for the reported all-or-none behavior.
References:
[1] C. W. Eurich, M. Herrmann, and U. Ernst. Finite-size effects of avalanche dynamics.
Phys. Rev. E, 2002.
[2] J. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. J. Neurosci., 2003.
[3] A. Levina, J. M. Herrmann, T. Geisel. Dynamical synapses causing self-organized
criticality in neural networks, Nature Phys., 2007.
[4] A. Levina, J. M. Herrmann, T. Geisel. Phase transitions towards criticality in a neural
system with adaptive interactions, PRL, 2009.
Role of neuronal synchrony in the generation of evoked EEG/MEG
responses
Bartosz Telenczuk*1, Vadim Nikulin2, Gabriel Curio2
1 Institute for Theoretical Biology, Humboldt Universität zu Berlin, Berlin, Germany
2 Neurologie, Charité-Universitätsmedizin, Berlin, Germany
* [email protected]
Evoked responses (ERs) are primary real-time measures of perceptual and cognitive activity
in the human brain. Yet, there is a continuing debate on which mechanisms contribute to the
generation of ERs. First, in the case of an "additive" mechanism, stimuli evoke a response
that is superimposed on the ongoing activity, and the ongoing activity is understood as noise.
The second mechanism is based on "phase resetting" where ongoing oscillations adjust their
phase in response to the stimuli. Arguments supporting either of these two views are based
mainly on macroscopic ERs recorded from the human scalp with EEG/MEG. We argue here
that results based on the analysis of macroscopic EEG/MEG data are not conclusive about
the nature of microscopic events responsible for the generation of evoked responses.
Moreover, we show that in principle attempts to decide between either of the two alternatives
are futile without precise knowledge of the spatial synchronization of microscopic neuronal
oscillations.
We derive this notion from a computational model in which single neurons or small neuronal
populations are represented by stochastic phase-oscillators. The mean phase of any of
these oscillators is progressing linearly, but it can be advanced or delayed by a transient
external stimulus (Tass 2005). In order to understand how external stimuli affect the
macroscopic activity, we simulate a large number of mutually coupled neuronal oscillators and
analyze the amplitude dynamics of the whole ensemble. Specifically, we model a situation
when there is a phase concentration across different oscillators upon the presentation of
stimuli (phase reset mechanism). We show that although at the microscopic level phase
resetting does not lead to a change in the mean level of activity, the macroscopic response
might be associated with a pronounced amplitude increase, which is usually taken as
evidence for the additive model of ERs. Furthermore, we show that the magnitude of such an
amplitude increase depends on the pre-stimulus population synchrony. Interestingly, in the
case of large pre-stimulus synchrony there is no amplitude increase in the macroscopically
measured activity, which is the situation corresponding to the generation of ERs according
to the phase-reset model.
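The core of this argument can be reproduced with a few lines of simulation (illustrative parameters; the oscillator setup follows Tass 2005 in spirit only): a full phase reset yields a large macroscopic amplitude increase when pre-stimulus synchrony is low, and almost none when it is already high.

```python
import numpy as np

# Sketch: an ensemble of stochastic phase oscillators receives a full
# phase reset; the macroscopic amplitude change depends on how
# synchronized the ensemble was before the stimulus.
rng = np.random.default_rng(3)
n_osc, f, dt, sigma = 500, 10.0, 1e-3, 0.5    # oscillators, Hz, s, phase noise
n_steps = 1000
stim_idx = n_steps // 2                        # stimulus at t = 0.5 s

def run(phase_spread):
    phi = rng.uniform(0.0, phase_spread, n_osc)  # small spread = high synchrony
    amp = np.empty(n_steps)
    for i in range(n_steps):
        phi += 2 * np.pi * f * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_osc)
        if i == stim_idx:
            phi[:] = 0.0                         # stimulus: full phase reset
        amp[i] = np.abs(np.mean(np.exp(1j * phi)))  # macroscopic amplitude
    return amp

for spread, label in [(2 * np.pi, "low pre-stimulus synchrony"),
                      (0.3, "high pre-stimulus synchrony")]:
    a = run(spread)
    print(f"{label}: amplitude before reset = {a[stim_idx - 50]:.2f}, "
          f"after = {a[stim_idx + 50]:.2f}")
```

In the low-synchrony case the purely phase-resetting stimulus produces a large amplitude jump, mimicking an additive ER; in the high-synchrony case the same microscopic mechanism produces almost no amplitude change.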
In summary, changing the level of the synchronization across a neuronal population can
produce macroscopic signals which might agree with either of the two models, yet the true
responsible mechanism is a phase reset of the underlying neuronal elements. Consequently,
results based only on the analysis of macroscopic ERs are ambiguous regarding the
neuronal processes which accompany responses to external stimulation and may potentially
lead to unfounded conclusions.
Our analysis is applicable to a large body of experimental EEG/MEG research and provides
a critical argument to the current discussion about the mechanisms of ER generation.
Acknowledgements:
DFG (SFB 618, B4) and Berlin Bernstein Center for Computational Neuroscience (C4).
References:
Tass, P.A. Estimation of the transmission time of stimulus-locked responses: modelling and
stochastic phase resetting analysis. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360, 995-999
(2005).
Spike time coordination maps to diffusion process
Lishma Anand*23, Birgit Kriener23, Raoul-Martin Memmesheimer1, Marc Timme23
1 Center for Brain Science, Harvard University, Cambridge, USA
2 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
3 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
* [email protected]
Patterns of precisely timed spikes occur in a variety of neural systems. They correlate with
external stimuli and internal events and fundamentally underlie information processing in the
brain. A major open question in theoretical neuroscience is how spike times may be
coordinated among neurons that recurrently connect to a complex circuit [1]. In particular, it
is not well understood how two neurons may synchronize their spike times even if they are
not directly connected by a synapse but interact only indirectly through recurrent network
cycles. Here we show that the dynamics of synchronization of spike times in complex circuits
of leaky integrate-and-fire neurons is equivalent to the relaxation dynamics of a standard
diffusion process on the same network topology. We provide exact analytical conditions for
this equivalence and illustrate our findings by numerical simulations. The synchronization
time of a network of leaky integrate-and-fire neurons in fact equals the relaxation time of
diffusion for appropriate choice of parameters on the same network.
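The diffusion side of this equivalence is straightforward to compute on any given network (the random graph below is a hypothetical example, not the authors' setup): the relaxation time is governed by the spectral gap of the graph Laplacian, and the stated result is that, for appropriate parameters, the synchronization time of the corresponding LIF network equals it.

```python
import numpy as np

# Sketch: relaxation time of diffusion (a random walk) on a graph from
# the spectral gap of its Laplacian. Per the abstract, spike-time
# synchronization in the matched LIF network takes the same time.
rng = np.random.default_rng(4)
N, p = 100, 0.1
upper = np.triu(rng.random((N, N)) < p, k=1)
A = (upper | upper.T).astype(float)              # undirected Erdos-Renyi graph

deg = A.sum(axis=1, keepdims=True)
P = A / np.maximum(deg, 1.0)                     # random-walk transition matrix
L = np.eye(N) - P                                # random-walk graph Laplacian

eigvals = np.sort(np.linalg.eigvals(L).real)
gap = eigvals[1]                                 # smallest nonzero eigenvalue
print(f"spectral gap = {gap:.3f}; relaxation time ~ 1/gap = {1.0 / gap:.2f} time units")
```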
These results complement standard mean field [2] and event based analyses [3,4] and
provide a natural link between stochastic processes of random walks on networks and spike
time coordination in neural circuits. In particular, a set of mathematical tools for analyzing
diffusion (or, more generally, Markov processes) may now well be transferred to pin down
features of synchronization in neural circuit models.
Acknowledgements:
This work was supported by the Federal Ministry of Education and Research (BMBF),
Germany, through grant number 01GQ0430 to the Bernstein Center for Computational
Neuroscience (BCCN) Göttingen.
References:
[1] C. Kirst and M. Timme, Front. Neurosci. 3:2 (2009).
[2] N. Brunel, J. Comput. Neurosci. 8:183 (2000).
[3] S. Jahnke, R.M. Memmesheimer, and M. Timme, Phys. Rev. Lett. 100:048102 (2008).
[4] C. Kirst, T. Geisel, and M. Timme, Phys. Rev. Lett. 102:068101 (2009).
Coding and connectivity in an olfactory circuit
Gilles Laurent*1
1 Division of Biology, California Institute of Technology, Pasadena, CA, USA
* [email protected]
Neurometric function analysis of short-term population codes
Philipp Berens*2,1, Sebastian Gerwinn2, Alexander Ecker2, Matthias Bethge2
1 Baylor College of Medicine, Houston, USA
2 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
The relative merits of different population coding schemes have mostly been studied in the
framework of stimulus reconstruction using Fisher Information, minimum mean square error
or mutual information.
Here, we analyze neural population codes using the minimal discrimination error (MDE) and
the Jensen-Shannon information in a two alternatives forced choice (2AFC) task. In a certain
sense, this approach is more informative than the previous ones as it defines an error that is
specific to any pair of possible stimuli - in particular, it includes Fisher Information as a
special case.
We demonstrate several advantages of the minimal discrimination error: (1) it is very intuitive
and easier to compare to experimental data, (2) it is easier to compute than mutual
information or minimum mean square error, (3) it allows studying assumptions about prior
distributions, and (4) it provides a more reliable assessment of coding accuracy than Fisher
information.
First, we introduce the Jensen-Shannon information and explain how it can be used to
bound the MDE. In particular, we derive a new lower bound on the minimal discrimination
error that is tighter than previous ones. Also, we explain how Fisher information can be
derived from the Jensen-Shannon information and conversely to what extent Fisher
information can be used to predict the minimal discrimination error for arbitrary pairs of
stimuli depending on the properties of the tuning functions.
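For a concrete, if simplistic, picture of these quantities (a toy one-dimensional example with assumed Gaussian response distributions, not the population model analyzed here), the Jensen-Shannon information and the MDE can be computed directly on a grid:

```python
import numpy as np

# Toy 1-D illustration: Jensen-Shannon divergence and the minimal
# discrimination error (Bayes error, equal priors) for two
# stimulus-conditioned Gaussian response distributions in a 2AFC task.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gauss(mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

p1, p2 = gauss(0.0, 1.0), gauss(1.5, 1.0)   # responses to stimuli A and B (assumed)
mix = 0.5 * (p1 + p2)

def kl_bits(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask])) * dx

js = 0.5 * kl_bits(p1, mix) + 0.5 * kl_bits(p2, mix)   # Jensen-Shannon information
mde = 0.5 * np.sum(np.minimum(p1, p2)) * dx            # minimal discrimination error
print(f"JS information = {js:.3f} bits; MDE = {mde:.3f}")
```

The abstract's contribution is a lower bound on the MDE in terms of this JS information that is tighter than previously known bounds; the sketch only shows how the two quantities are defined for a given stimulus pair.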
Second, we use the minimal discrimination error to study population codes of angular
variables. In particular, we assess the impact of different noise correlations structures on
coding accuracy in long versus short decoding time windows. That is, for long time window
we use the common Gaussian noise approximation while we analyze the Ising model with
identical noise correlation structure to address the case of short time windows. As an
important result, we find that the beneficial effect of stimulus-dependent correlations in the
absence of 'limited-range' correlations holds true only for long-term population codes, while
such correlations provide no advantage in the case of short decoding time windows.
In this way, we provide a new rigorous framework for assessing the functional
consequences of correlation structures for the representational accuracy of neural
population codes on short time scales.
A network architecture for maximal separation of neuronal
representations - experiment and theory
Ron Jortner*2, Gilles Laurent1
1 California Institute of Technology, Pasadena, USA
2 Max-Planck Institute for Neurobiology, Munich, Germany
* [email protected]
Characterizing connectivity in neuronal circuits is a crucial step towards understanding how
they perform computations. We used this approach to address a central neural coding issue
in the olfactory system of the locust (Schistocerca americana) – to find network mechanisms
which give rise to sparse, specific neural codes and their implementation at the level of
neuronal circuitry.
Sparse coding, where each stimulus (or external state) activates only a small subset of
neurons and each neuron responds to only a small subset of stimuli (or states) has recently
attracted much interest in systems neuroscience, and has been observed in many systems
and across phyla. In the locust olfactory system, odor-evoked activity is transformed
between two subsequent relays: the antennal lobe, where 800 excitatory projection neurons
(PNs) encode odors using broad tuning and distributed representations, and the mushroom
body (MB), a larger network (ca. 50,000 Kenyon cells; KCs) which utilizes sparse
representations and is characterized by exquisite KC selectivity. We used simultaneous
intracellular and extracellular recordings and cross-correlation analysis to detect synaptic
contacts and quantify connectivity between these two neuronal populations (see also
supplementary figure 1).
We found that each KC receives synaptic connections from half the PN population (400 out
of the total of 800 PNs) on average (Jortner, Farivar and Laurent, 2007). While initially
surprising, simple analysis indicates that such architecture in fact maximizes differences
between input vectors to different KCs: with probability of connection of 1/2, the number of
possible ways to wire PNs onto a KC is maximal (~10^240), and since only 50,000
combinations are actually picked from this vast pool of possibilities, each KC receives a
unique set of inputs, on average maximally different from those of all other KCs (as the pool
from which they are drawn is maximal). Rare spiking is then achieved by setting a high firing threshold
(equivalent to ~100 PN inputs; Jortner, Farivar and Laurent, 2007) so that KCs rarely cross
it. This ensures each KC responds to an arbitrarily small subset of stimuli, as different as
possible from those driving other KCs - while the probability of “accidental” threshold
crossing is minute.
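The combinatorial part of this argument is easy to check numerically (toy KC count below; the actual system has ~50,000 KCs): the mean pairwise Hamming distance between random binary wiring vectors with connection probability p is 2p(1-p), which is maximized at p = 1/2.

```python
import numpy as np

# Numerical check of the wiring argument: random binary PN->KC wiring
# vectors are on average most different from one another at p = 1/2.
rng = np.random.default_rng(5)
n_pns, n_kcs = 800, 100          # 800 PNs as in the locust; toy KC count

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    W = rng.random((n_kcs, n_pns)) < p          # rows = binary wiring vectors
    dists = [np.mean(W[i] != W[j])
             for i in range(n_kcs) for j in range(i + 1, n_kcs)]
    print(f"p = {p:.1f}: mean pairwise Hamming distance = {np.mean(dists):.3f} "
          f"(theory 2p(1-p) = {2 * p * (1 - p):.3f})")
```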
Using an analytic mathematical model, we express higher system properties in terms of its
basic parameters – connectivity, firing thresholds and input firing rates. We prove that in a
generalized feed-forward system, the distance between representations is maximized as the
connection probability approaches 1/2 (see also supplementary figure 2), and that the
response sparseness of the target population can be expressed as a function of the basic
network parameters. This approach thus leads us to formulate general design principles
underlying the spontaneous emergence of sparse, specific and reliable neural codes.
References:
Jortner RA, Farivar SS, and Laurent G (2007). A simple connectivity scheme for sparse
coding in an olfactory system. J Neurosci. 27:1659-1669
Dynamics of nonlinear suppression in V1 simple cells
Manuel Levy*3, Anthony Truchard2, Gérard Sadoc3, Izumi Ohzawa1, Yves Fregnac3, Ralph
Freeman2
1 Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan
2 School of Optometry, University of California, Berkeley, USA
3 Unite de Neuroscience Integratives et Computationelles, Centre national de la recherche
scientifique, Gif/Yvette, France
* [email protected]
The visual responses of V1 neurons are affected by several nonlinearities, acting over
different timescales and having different biological substrates. Some are considered nearly
instantaneous: such is the case for the motion-dependent nonlinearities, and for the fast-acting
contrast gain control, which increases the neuronal gain and accelerates the response
dynamics for high-contrast stimuli. Another, slower contrast-dependent nonlinearity, also
termed contrast adaptation, adjusts the neuronal dynamic range to the contrast prevailing in
the receptive field for the past few seconds.
While cortical mechanisms likely participate in slow contrast adaptation, the functional
origins of the fast contrast- and motion-dependent nonlinearities are still debated. Some
studies suggest that they can be accounted for by a model consisting of a linear
spatiotemporal filter followed by a static nonlinearity (LN model), while others suggest that
additional nonlinear cortical suppression is required. It should also be noted that the time
constants of fast and slow nonlinearities are not very well known; thus their effects could mix
in the responses to seconds-long drifting gratings.
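For reference, the LN null model against which additional suppression is tested has the following generic form (filter shape, nonlinearity, gain, and stimulus statistics below are assumptions for illustration, not fitted values):

```python
import numpy as np

# Generic LN-model sketch: a linear temporal filter followed by a static
# nonlinearity, driving Poisson spiking. This is the null model that the
# recorded Simple cells failed to conform to.
rng = np.random.default_rng(6)
dt_ms, n_steps = 13.0, 2000                  # 13 ms frames, as in the stimulus
stim = rng.standard_normal(n_steps)          # random contrast/phase sequence (toy)

taps = np.arange(8)
k = np.exp(-taps / 2.0) * np.sin(taps)       # assumed biphasic temporal kernel

def static_nl(g):
    return np.maximum(g, 0.0) ** 1.5         # assumed expansive rectifier

drive = np.convolve(stim, k, mode="full")[:n_steps]   # linear stage (causal)
rate = 50.0 * static_nl(drive)                        # firing rate in spikes/s
spikes = rng.poisson(rate * dt_ms / 1000.0)           # spike counts per frame
print(f"mean rate = {spikes.sum() / (n_steps * dt_ms / 1000.0):.1f} spikes/s")
```

The divisive suppression reported below corresponds to an extra stage that this model lacks: a second filtered signal dividing the output of the first before spiking.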
To clarify these issues, we measured contrast and motion interactions in V1 Simple cells
with white noise analysis techniques. The stimulus was a dynamic sequence of optimal
gratings whose contrast and spatial phase changed randomly every 13 ms. We also varied
the distribution from which contrasts were drawn, to explore the effects of slow contrast
adaptation. We reconstructed the 2nd-order kernels at low and high average contrasts, and
fitted multi-LN models to the responses. None of the Simple cells we recorded conformed to
a pure LN model, and most of them (79%) showed evidence of nonlinear (predominantly
divisive) suppression at high ambient contrast. Suppression was often (but not always)
motion-opponent; suppression lagged excitation by ~11 ms; and suppression improved the
response temporal precision and thus the rate of information transfer. At low average
contrast, the response was noisier and suppression was less visible. The response was
dominated by excitation, whose gain increased and whose kinetics slowed down.
Our findings suggest that both fast- and slow-acting nonlinearities participate in the
contrast-dependent changes in temporal dynamics observed with drifting gratings. More generally, we
propose that contrast adaptation trades neuronal sensitivity against processing speed, by
changing the balance between excitation and delayed inhibition.
Friday, October 2
Modelling cortical representations
Klaus Obermayer*1
1 Bernstein Group for Computational Neuroscience Berlin, Germany, and Technische
Universität Berlin, Germany
* [email protected]
In my talk I will first present results from a map model of primary visual cortex, where we
analysed how much evidence recent single unit recordings from cat area 17 provide for a
particular cortical "operating point". Using a Bayesian analysis we find, that the experimental
data most strongly support a regime where the local cortical network provides dominant
excitatory and inhibitory recurrent inputs (compared to the feedforward drive). Most
interestingly, the data supports an operating regime which is close to the border to instability,
where cortical responses are sensitive to small changes in neuronal properties.
Secondly, I will show results of a study where we investigated visual attention in humans in a
probabilistic reward-based visual discrimination task. We find that behavioural performance
is not optimal but consistent with a heuristic based on a moving-average estimate of stimulus
predictability and reward. We also find that the amplitudes of early visual, attention-related
EEG signals quantitatively reflect these estimates. Thus, information about stimulus statistics
and reward is already integrated by low-level attentional mechanisms.
Finally, I will discuss results of developmental perturbations imposed on the visual system
through retinal lesions in adolescent cats. Using a computational model of visual cortical
responses, I will show that the lesion induced changes of neuronal response properties are
consistent with spike timing-dependent plasticity (STDP) learning rules. STDP causes visual
cortical receptive fields to converge by creating a competition between neurons for the
control of spike timing within the network. The spatial scale of this competition appears to
depend on the balance of excitation and inhibition and can in principle be controlled by
synaptic-scaling-type mechanisms.
Inferred potential motor goal representation in the parietal reach
region
Christian Klaes*1, Stephanie Westendorff1,2, Alexander Gail1
1 German Primate Center, Göttingen, Germany
2 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
* [email protected]
Depending on the behavioral context, the visuomotor system selects and executes the most
appropriate action out of several alternatives. Two important areas for reach planning are
the parietal reach region (PRR) and the dorsal premotor cortex (PMd). It has been shown
that individual PMd neurons can simultaneously encode two potential reach directions if both
movement targets have been visually presented to the subject in advance (1).
Here we asked if potential reach directions are also encoded in PRR, and if spatially inferred
potential motor goals are represented equivalently to visually cued ones. We used a
memory-guided anti-reach paradigm in which a colored contextual cue instructed to move
either towards (pro-reach; visually cued motor goal) or opposite to a spatial cue (anti-reach;
inferred motor goal). The spatial cue was shown before and the contextual cue after a
memory period. In a fraction of trials we randomly suppressed the contextual cue (context
suppression; CS trials) to probe the monkeys' choice between pro and anti when no explicit
instruction was given.
We simultaneously recorded single neurons from PRR and PMd in macaque monkeys and
analyzed the tuning properties during the memory period. Bipolar directional tuning of the
neurons indicated that both potential motor goals, the visually cued (pro) and the inferred
(anti) goal, were simultaneously represented by many neurons in PRR and also PMd
(preliminary data), when the monkeys selected each goal with similar probability. The
behavioral control in CS trials rules out the possibility that the bipolar tuning was a
consequence of the monkeys deciding randomly for one of the two targets at the beginning
of each trial: when sorted according to the monkeys' choice, the bipolar tuning was found
independently in both subsets of trials in which the monkeys exclusively selected either the
pro or anti goal. In contrast, when the monkeys had a strong bias to choose the anti target,
neurons were predominantly tuned for the anti goal. Our results indicate that PRR represents potential
motor goals, and does so even if a potential goal is spatially inferred rather than directly
cued. Additionally, PRR directional tuning consistently changes with the behavioral
preference of the monkey, and hence could be involved in the selection process itself.
References:
(1) Cisek P & Kalaska JF (2002) J Neurophysiol 87:1149.
A P300-based brain-robot interface for shaping human-robot
interaction
Andrea Finke*1, Yaochu Jin2, Helge Ritter1
1 Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
Brain-computer interfaces (BCI) based on the P300 event-related potential (ERP) have been
studied widely in the past decade. These BCIs exploit stimuli, called oddballs, which are
presented on a computer screen in an arbitrary fashion to implement a binary selection
mechanism. The P300 potential has been linked to human surprise, meaning that P300
potentials are triggered by unpredictable events. This hypothesis is the basis of the oddball
paradigm. In this work, we go beyond the standard paradigm and exploit the P300 in a more
natural fashion for shaping human-robot interaction (HRI).
In HRI, flawless behavior of the robot is essential to avoid confusion or anxiety in the
human user when interacting with the robot. Detecting such reactions in the human user on
the fly and providing instantaneous feedback to the robot is crucial. Ideally, the feedback
system does not demand additional cognitive load and operates automatically in the
background. In other words, providing feedback from the human user to the robot should be
an inherent feature of the human-machine interaction framework. Information extracted from
the human EEG, in particular the P300, is a well-suited candidate for serving as input to this
feedback loop.
We propose to use the P300 as a means for shaping human-robot interaction, in particular to
detect the human user's surprise during interaction and thereby spot, in time, any mistakes in
robot behavior that the user observes. In this way, the robot can notice its mistakes as early as
possible and correct them accordingly.
Our brain-robot interface implementing the proposed feedback system consists of the
following core modules: (1) a "P300 spotter" that analyzes the incoming preprocessed data
stream for identifying P300 potentials on a single-trial basis and (2) a "translation" module
that translates the detected P300s into appropriate feedback signals to the robot.
The classification relies on a supervised machine learning algorithm that requires labeled
training data. These data must be collected subject-wise to account for the high inter-subject
variance typically found in EEG data. The off-line training needs to be carried out only once
prior to using the interface. The trained classifier is then employed for on-line detection of
P300 signals. During the online operation, the incoming multi-channel EEG data is recorded
and analyzed continuously. Each incoming new sample vector is added to a new window.
Spectral, spatial and temporal features are extracted from the filtered windows. The resulting
feature vectors are classified and a probability that the vector contains a P300 is assigned.
Eventually, a feedback signal to the robot is generated based on the classification result,
either a class label or a probability between 0 and 1.
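As an illustration of this processing chain, the sketch below shows one plausible form of the on-line detection step. The sampling rate, window length, channel count, feature choices and the logistic-regression classifier are our own illustrative assumptions, not details of the system described above (the spatial-filtering stage is omitted for brevity).

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    FS = 250           # sampling rate in Hz (assumed)
    WINDOW = 200       # samples per analysis window, i.e. 0.8 s (assumed)
    N_CHANNELS = 32    # number of EEG channels (assumed)

    def extract_features(window):
        """window: (N_CHANNELS, WINDOW) array of filtered EEG samples."""
        temporal = window[:, ::10].ravel()              # down-sampled time course
        freqs, psd = welch(window, fs=FS, nperseg=128)  # per-channel spectra
        spectral = psd[:, (freqs >= 1) & (freqs <= 12)].ravel()
        return np.concatenate([temporal, spectral])

    # off-line, subject-wise training on labeled windows (X: features, y: P300 or not)
    clf = LogisticRegression(max_iter=1000)
    # clf.fit(X_train, y_train)

    def feedback_signal(window):
        """Return P(P300) in [0, 1] as the feedback signal sent to the robot."""
        features = extract_features(window).reshape(1, -1)
        return clf.predict_proba(features)[0, 1]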
The proposed framework was tested off-line in a scenario using Honda's humanoid robot
ASIMO. This scenario is suited for eliciting P300 events in a controlled experimental
environment without neglecting the constraints of real robots. We recorded EEG data during
interaction with ASIMO and applied our method off-line. In the future we plan to extend our
system to a fully on-line operating framework.
On the interaction of feature- and object-based attention
Detlef Wegener*1, Friederike Ehn1, Orlando Galashan1, Andreas K Kreiter1
1 Brain Research Institute, Department of Theoretical Neurobiology, University of Bremen,
Bremen, Germany
* [email protected]
Attending to a feature of an object might be associated with the activation of both feature- and
object-based attention mechanisms. Both mechanisms support selection of the attended
feature, but they differ strongly regarding the processing of non-attended features of the
target object: object-based attention is thought to support co-selection of irrelevant target
object features, thus selecting the entire object, whereas feature-based attention is
associated with suppressed processing of non-attended features, thus supporting the
selection of the target feature in a global, space-independent manner. Hence, the question
arises whether both of these attention mechanisms would be activated at the same time,
how they interact, and by what factors this interaction might be influenced.
We examined these questions using a feature-change detection paradigm that
required subjects to attend to either motion or color of one of two superimposed random dot
patterns (RDP). In Exp. 1, objects were made out of white dots moving in opposite
directions. In this way, RDP were defined by motion direction (integrative object feature), but
not by color (non-integrative object feature). In Exp. 2, objects were made out of green and
yellow dots, and moved in the same direction. In this way, RDP were defined by color, but
not by motion direction, and hence, integrative and non-integrative object features were
exchanged as compared to Exp. 1. Both experiments were designed as two-dimensional
Posner paradigms using colored arrows to indicate target object and changing feature. For
75% of the trials the cue gave fully correct information, and for each third of the remaining
25% the cue was either (i) incorrect regarding the changing feature, (ii) incorrect regarding
the target object, or (iii) incorrect in both respects.
The results show a strong and general influence of feature-based attention on the detection
of both types of feature changes in both experiments. However, the main and most
interesting finding is that feature-based attention can be accompanied by additional object-based selection mechanisms, but only for integrative object features, and not for non-integrative features. In other words, co-selection of non-attended object features was only
found when the feature was defining the object and was thus supporting selection, but not if
it was irrelevant for object selection. Hence, our results demonstrate that attention does not
necessarily improve the selection of all object features. They do not support the hypothesis
that objects are the target entities of attentional selection mechanisms, but rather raise the
question of whether at least some of the data that have been taken to demonstrate
object-based attention may instead reflect attention to those features that have to be
attended, even if uninstructed, in order to perceptually select the object.
Interactions between top-down and stimulus-driven processes in
visual feature integration
Marc Schipper*1, Udo Ernst2, Klaus Pawelzik2, Manfred Fahle1
1 Department for Human Neurobiology, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
2 Department for Theoretical Physics, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
* [email protected]
Perception of visual scenes requires the brain to link local image features into global
contexts. Contour integration is one such example, grouping collinearly aligned edge elements
into coherent percepts. Theoretical and modeling studies demonstrated that purely
stimulus-driven mechanisms, as implemented by feedforward or recurrent network
architectures, are well suited to explain this cognitive function. However, recent empirical
work showed that top-down attention can strongly modulate contour integration.
By combining psychophysical with electrophysiological methods, we studied how strongly
prior expectations shape contour integration. These empirical techniques were
complemented by model simulations to uncover the putative neural substrates and
mechanisms underlying contour integration.
Subjects participated in two experiments with identical visual stimuli but different behavioural
tasks: a detection task (A) and a discrimination task (B). Stimuli consisted of vertical or
horizontal ellipses formed by collinearly aligned Gabor elements embedded in a field of
Gabors with random orientations and positions. Each hemifield could contain either (i) one
vertical, (ii) one horizontal, or (iii) no ellipse. All combinations of these three basic
configurations were possible, resulting in nine stimulus categories. In experiment A
participants replied ‘yes’ whenever the stimulus contained at least one ellipse; in experiment
B observers replied ‘yes’ only when a target was present (either a horizontal or a vertical
ellipse).
The psychophysical data demonstrate a pronounced influence of higher cognitive processes
on contour integration: In the discrimination task, reaction times (RT) are consistently shorter
for targets than for distractors. The presence of redundant targets (e.g. two horizontal
ellipses instead of only one horizontal ellipse) also shortens RTs. These first two effects
were consistent with our expectations. Moreover we discovered an additional bias in RT for
horizontal ellipses (~70 ms shorter than for vertical ellipses).
In EEG recordings, we find pronounced differences in event-related potentials (ERPs)
between stimulations with versus without the presence of contours. These differences
appear at about 110-160 ms after stimulus onset in the occipital regions of the cortex. In the
same regions the evoked potentials were substantially modulated by the number of contours
present (~140 ms after stimulus onset) and by the behavioural task (~230 ms after stimulus
onset).
Psychophysical and electrophysiological results are qualitatively consistent: The larger the
RT differences, the more dissimilar are ERPs in occipital regions. Moreover,
phenomenological modeling reveals that the horizontal bias and task-induced effects
combine multiplicatively, either constructively or destructively. This may lead to much
shorter RTs when, for example, a horizontal bias combines with a horizontal target, or to a
mutual cancellation of the different RT effects when a horizontal bias combines with a
vertical target.
Acknowledgements:
This work was supported by the BMBF as part of the National Bernstein Network for
Computational Neuroscience.
Coding of interaural time differences in the DNLL of the Mongolian
gerbil
Hannes Lüling*1, Ida Siveke2, Benedikt Grothe2
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Ludwig-Maximilians-Universität, Munich, Germany
* [email protected]
The difference in traveling time of a sound from its origin to the two ears is called the
interaural time difference (ITD). ITDs are the main cue for low-frequency-sound localization.
The frequency of the stimulus modulates the ITD sensitivity of the response rates of neurons
in the brain stem. This modulation is generally characterized by two parameters: The
characteristic phase (CP) and the characteristic delay (CD). The CD corresponds to a
difference in the temporal delays from the ear to the respective coincidence detector neuron.
The CP is an additional phase offset whose nature is still under debate. Together, these two
characteristic quantities describe the best ITD, at which a neuron responds maximally,
via best ITD = CD + CP/f, in which f is the frequency of the pure-tone stimulus.
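As a worked example with made-up numbers: for a pure tone of f = 500 Hz, a neuron with CD = 0.1 ms and CP = 0.25 cycles would have best ITD = 0.1 ms + 0.25/500 Hz = 0.1 ms + 0.5 ms = 0.6 ms; at f = 1 kHz the same CP contributes only 0.25 ms, illustrating how the stimulus frequency modulates the ITD tuning.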
We recorded neuronal firing rates in the dorsal nucleus of the lateral lemniscus (DNLL) of
the Mongolian gerbil for pure-tone stimuli with varying ITD and frequency. Interestingly, we
found that CPs and CDs were strongly negatively correlated. To understand the observed
distribution of CPs and CDs among the recorded population, we assessed the mutual
information between firing rate and ITD in terms of these two parameters. To this end we
computed noise entropies from rate distributions fitted to the experiments.
Our results show that the information-optimal distribution of CPs and CDs exhibits a
negative correlation similar to the one observed experimentally. Assuming similar rate statistics, we
make hypotheses about how CDs and CPs should optimally be distributed for mammals with
various head diameters. As expected, the mutual information increases with head diameter.
Moreover, for increasing head diameter the two distinct subclusters of high mutual
information (peakers and troughers) fuse into one.
Probabilistic inference and learning: from behavior to neural
representations
József Fiser*1
1 Department of Psychology and Volen Center for Complex Systems, Brandeis University,
Waltham, USA
* [email protected]
Recent behavioral studies provide steadily increasing evidence that humans and animals
perceive sensory input, make decisions and control their movement by optimally considering
the uncertainty of the surrounding environment. Such behavior is best captured in a
statistical framework, as making probabilistic inference based on the input stimulus and the
stored representations of the cortex. The formalism of Probabilistic Population Codes (PPC)
has emerged as one such framework that can explain how optimal cue combination can
happen in the brain. However, there is a notable lack of evidence on how the stored
representations used in this process are obtained and whether this learning is optimal, and
PPC provides little guidance as to how it might be implemented neurally.
In this talk, I will argue that inference and learning are two facets of the same underlying
principle of statistically optimal adaptation to external stimuli, therefore, they need to be
treated together under a unified approach. First, I will present evidence that humans learn
unknown hierarchical visual structures by developing a minimally sufficient representation
instead of encoding the full correlational structure of the input. I will show that this learning
cannot be characterized as a hierarchical associative learning process recursively linking
pairs of lower-level subfeatures, but is better captured by optimal Bayesian model
comparison.
Next, I will discuss how such abstract learning could be implemented in the cortex. Motivated
by classical work on statistical neural networks, I will present a new probabilistic framework
based on the ideas that neural activity represents samples from the posterior probability
distribution of possible interpretations, and that spontaneous activity in the cortex is not
noise but represents internal-state-dependent prior knowledge and assumptions of the
system. I will contrast this sample-based framework with PPCs and derive predictions from
the framework that can be tested empirically. Finally, I will show that multi-electrode
recordings from awake behaving animals confirm these predictions by showing that the
structure of spontaneous activity becomes similar with age to that of visually evoked activity
in the primary visual cortex.
A multi-stage synaptic model of memory
Alex Roxin*1, Stefano Fusi1
1 Center for Theoretical Neuroscience, Columbia University, USA
* [email protected]
Over a century of experimental and clinical studies provide overwhelming evidence that
declarative memory is a dynamic and spatially distributed process. Lesion studies have
shown that the hippocampus is crucial for the formation of new memories but that its role
decreases over time; ablation of the hippocampus does not affect remote memories. This
suggests that memory consolidation involves the transfer of memory to extra-hippocampal areas. Despite the wealth of behavioral data on this consolidation process,
relatively little theoretical work has been done to understand it or to address the underlying
physiological process, which is presumably long-term synaptic plasticity.
Here we present a model of memory consolidation explicitly based on the constraints
imposed by a plausible rule for synaptic plasticity. The model consists of N plastic, binary
synapses divided into n stages. Uncorrelated memories are encoded in the first stage with a
rate r. Synapses in the second stage are potentiated or depressed with a fixed probability
according to the state (potentiated or depressed) of synapses in stage 1. Synapses in
downstream stages are updated in an analogous way with stage k directly influencing only
stage k+1. Additionally, synapses become increasingly less plastic the further downstream
one goes, i.e. learning rates decrease with increasing stage number. Therefore we posit a
feed-forward structure in which the memory trace in each stage is actively transferred to the
next downstream stage. This is reminiscent of the physiological process of replay which has
been recorded in hippocampal cells of awake and sleeping rats.
The model trivially reproduces power-law forgetting curves for the learned memories by
virtue of the distribution of learning rates. Furthermore, through degradation of early stages
in our model we can account for both anterograde and graded retrograde amnesia effects. In
a similar vein we can reproduce results from studies in which drugs have been found to
selectively enhance or degrade memories. Finally, this model leads to vastly improved
memory traces compared to uncoupled synapses, especially when adjacent stages have
nearly the same learning rate and the total number of stages is large.
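A toy simulation can make the feed-forward copying scheme concrete. The following sketch reflects our reading of the model; the synapse count, the geometric decay of the learning rates and the random update scheme are illustrative choices, not the authors' parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_stages = 10000, 5                    # N binary synapses, n stages (assumed)
    q = [0.5 ** k for k in range(n_stages)]   # learning rates decrease downstream
    stages = rng.integers(0, 2, size=(n_stages, N))   # 1 = potentiated, 0 = depressed

    def encode(pattern):
        """Encode a new uncorrelated memory in the first (most plastic) stage."""
        flip = rng.random(N) < q[0]
        stages[0, flip] = pattern[flip]

    def transfer():
        """Stage k writes its state into stage k+1 with probability q[k+1]."""
        for k in range(n_stages - 1):
            copy = rng.random(N) < q[k + 1]
            stages[k + 1, copy] = stages[k, copy]

    def memory_trace(pattern):
        """Per-stage overlap with a stored pattern, in [-1, 1]."""
        return 2.0 * (stages == pattern).mean(axis=1) - 1.0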
An integrated system for incremental learning of multiple visual
categories
Stephan Kirstein*12, Heiko Wersing1, Horst-Michael Groß2, Edgar Körner1
1 Honda Research Institute Europe GmbH, Offenbach, Germany
2 Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of Technology, Ilmenau,
Germany
* [email protected]
An amazing capability of the human visual system is the ability to learn an enormous
repertoire of visual categories. This large amount of categories is acquired incrementally
during our life and requires at least partially the direct interaction with a tutor. Inspired by
child-like learning we propose an architecture for learning several visual categories in an
incremental and interactive fashion based on natural hand-held objects, which typically
belong to several different categories. To make the most efficient use of the rare, interactively
collected training examples, a learning method is required which is able to decouple the
representation of co-occurring categories. Such decoupled representations cannot be
learned with typical categorization systems, in which each category has to be trained
independently.
This independent training of categories is impractical for interactive learning, because an
object has to be presented to the system repeatedly, once for each category it belongs to.
We also impose no restrictions on the viewing angle of presented objects, relaxing the
common constraint on canonical views. This relaxation considerably complicates the
category learning task, because in addition to category variations, variations caused by full
object rotation also have to be handled by the learning method.
The overall categorization system is composed of a figure-ground segregation part and
several feature extraction methods providing color and shape features, which for each object
view are concatenated into a high-dimensional but sparse feature vector. The major
contribution in this paper is an incremental category learning method that combines a
learning vector quantization (LVQ), addressing the "stability-plasticity dilemma", with a
category-specific forward feature selection to decouple co-occurring categories. Both parts are
optimized together to ensure a compact and efficient category representation, which is
necessary for fast and interactive learning. Based on this learning method we are able to
interactively learn several color (e.g. red, green, blue, yellow and white) and shape
categories (e.g. toy car, rubber duck, cell phone, cup, can, bottle, tea box, tools, and four
legged animal) with good generalization to previously unseen category members, but also
good rejection of unknown categories.
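The core update can be sketched as follows; this is a generic LVQ-style rule combined with a per-category feature mask standing in for the forward feature selection, with illustrative names and parameters rather than the authors' implementation.

    import numpy as np

    def lvq_update(prototypes, proto_labels, x, y, mask, lr=0.05):
        """prototypes: (P, D) array; mask: boolean (D,) features selected for
        category y. The winning prototype moves towards the example x if its
        label matches, away otherwise -- only along the selected dimensions."""
        dist = np.linalg.norm((prototypes - x)[:, mask], axis=1)
        w = int(np.argmin(dist))
        sign = 1.0 if proto_labels[w] == y else -1.0
        prototypes[w, mask] += sign * lr * (x[mask] - prototypes[w, mask])
        return w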
The complete categorization system runs on a single computer, but makes efficient use of
currently available multi-core CPUs. Overall, the system runs at roughly the frame rate of our
current camera system of approximately 6-8 Hz, which is fast enough to show the desired
interactive and life-long learning ability. To our knowledge this is the first online learning
system which allows category learning based on complex-shaped objects held in hand.
In particular, the ability to handle high-dimensional but sparse feature vectors directly is
important for interactive and incremental learning, whereas otherwise additional
dimension-reduction techniques like principal component analysis (PCA) are often required
to allow online learning.
This high feature dimensionality is also challenging for the used feature selection method,
because of the large amount of possible feature candidates. Nevertheless our proposed
learning system is able to extract small sets of category-specific features out of many
possible feature candidates.
A mesoscopic model of VSD dynamics observed in visual cortex
induced by flashed and moving stimuli
Valentin Markounikau*21, Christian Igel21, Dirk Jancke21
1 Bernstein Group for Computational Neuroscience Bochum, Bochum, Germany
2 Institute for Neuroinformatics, Ruhr-University, Bochum, Germany
* [email protected]
Understanding the functioning of the primary visual cortex requires characterization of the
dynamics that underlie visual perception and of how the cortical architecture gives rise to
these dynamics. Recent advances in real-time voltage-sensitive dye (VSD) imaging permit
the cortical activity of neuronal populations to be recorded with high spatial and temporal
resolution. This wealth of data can be related to cortical function, dynamics and architecture
by computational modeling. To describe brain dynamics at the population level (as
measured by VSD imaging), a mesoscopic model is an appropriate choice.
We present a two-layered neural field model that captures essential characteristics of activity
recorded by VSD imaging across several square millimeters of early visual cortex in
response to flashed and moving stimuli [1]. Stimulation included the well-known line-motion
paradigm [2] (in which apparent motion is inducible by a square briefly flashed before a bar),
a single flashed square, a single flashed bar, and squares moving with different speeds.
The neural field model describes an inhibitory and an excitatory layer of neurons as a
coupled system of non-linear integro-differential equations [3,4]. The model subsumes pre-cortical and intracortical processing. It has relatively few parameters, which can be
interpreted functionally. We have extended our simulation and analysis of cortical activity
dynamics from one spatial dimension - along the (apparent) movement direction - to the
two-dimensional cortical sheet. In order to identify the parameters of the dynamical system, we
combine linear and derivative-free non-linear optimization techniques [5]. Under the
assumption that the aggregated activity of both layers is reflected by VSD imaging, our
model quantitatively accounts for the observed spatio-temporal activity patterns (e.g., see
supplementary Fig. 1).
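For orientation, a two-layer delayed neural field of the type cited ([3,4]) can be written in the generic form

    \tau_e \,\partial_t u_e(x,t) = -u_e + \int w_{ee}(x-x')\, g[u_e(x',t-d)]\,dx' - \int w_{ei}(x-x')\, g[u_i(x',t-d)]\,dx' + s(x,t)
    \tau_i \,\partial_t u_i(x,t) = -u_i + \int w_{ie}(x-x')\, g[u_e(x',t-d)]\,dx' - \int w_{ii}(x-x')\, g[u_i(x',t-d)]\,dx'

where g is a sigmoidal rate function, the w are interaction kernels, d a transmission delay and s(x,t) the pre-cortically processed stimulus drive; the exact equations and parameters of the present model may differ. Under the assumption stated above, the modeled VSD signal is then a weighted sum of the excitatory and inhibitory activities u_e and u_i.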
Our results indicate that feedback from higher brain areas is not required to produce motion
patterns in the case of the illusory line-motion paradigm. Inverting the model suggests that a
considerable fraction of the VSD signal may be due to inhibitory activity, supporting the
notion that intra-layer cortical interactions between inhibitory and excitatory populations play
a major role in shaping dynamic stimulus representations in the early visual cortex.
References:
[1] Jancke D, Chavane F, Na'aman S, Grinvald A (2004) Imaging cortical correlates of
illusion in early visual cortex. Nature 428: 423-426.
[2] Hikosaka O, Miyauchi S, Shimojo S (1993) Focal visual attention produces illusory
temporal order and motion sensation. Vision Research 33: 1219-1240.
[3] Amari SI (1977) Dynamics of pattern formation in lateral-inhibition type neural fields.
Biological Cybernetics 27: 77-87.
[4] Wilson HR, Cowan JD (1972) Excitatory and inhibitory interactions in localized populations
of model neurons. Biophysical Journal 12: 1-24.
[5] Igel C, Erlhagen W, Jancke D (2001) Optimization of Neural Field Models.
Neurocomputing 36(1-4): 225-233.
Dynamics of ongoing activity in anesthetized and awake primate
Amiram Grinvald*1, David Omer1
1 Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
* [email protected]
Previous studies using voltage-sensitive dye imaging (VSDI) carried out in anesthetized
cats reported that spontaneous ongoing cortical activity in the primary visual cortex
exhibits dynamic spatial patterns, many of which resemble the cortical representations
of visual attributes and span large cortical areas (Grinvald et al., 1989; Arieli et al., 1995;
Arieli et al., 1996; Tsodyks et al., 1999; Kenet et al., 2003; Ringach D.L., 2003, Omer et al.,
2007).
Whether these results are relevant to behavior is unknown. Therefore, we performed VSDI
of ongoing cortical activity in the visual cortices of awake monkeys, simultaneously with
measurements of single- and multi-unit activity and the local field potential. We found coherent
activity also in the awake monkey: a single cell had a tendency to fire when a large
population of cells was coherently depolarized, as seen in the spike-triggered average
(STA) curves of the awake monkeys. However, the dynamics was very different from that
found in anesthetized cats. To rule out a species difference rather than an effect of the
anesthetized state, we explored the anesthetized monkey and found results similar to the
anesthetized cat results. However, in the anesthetized monkey spontaneous cortical activity
shows a larger repertoire of cortical states; not surprisingly, we found that the two ocular
dominance (OD) maps were also spontaneously represented, and to a larger extent than
orientation representations. Furthermore, spontaneous cortical states which resemble OD maps tend to switch into their
corresponding orthogonal states. We then compared the dynamics found in the anesthetized
macaque to that observed in the awake state. The dynamics of ongoing activity in the awake
state was significantly different: ongoing activity did not clearly reveal any appearance of
the cortical states related to the functional architecture over a large area. However, more
sensitive averaging techniques in space and time revealed cortical states related to
orientation and OD maps that switch rapidly and are spatially mixed. These results
challenge the classical notion which considers spontaneous (ongoing) cortical activity as
noise, and suggest that ongoing coherent activity plays an important role in cortical
processing and higher cognitive functions.
Acknowledgements:
Supported by the Weizmann Institute of Science, Daisy EU grant, the Goldsmith Foundation
and the Grodetsky Center for research of higher brain functions.
Poster Session I, Wednesday, September 30
Dynamical systems and recurrent networks
W1
Numerical simulation of neurite stimulation by finite and
homogeneous electric sources
Andres Agudelo-Toro*12, Andreas Neef 12
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Max-Planck Institute for Nonlinear Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Extracellular stimulation of neural networks is a promising tool in research and has major
potential in therapy, for example in the form of transcranial magnetic stimulation or
transcranial direct current stimulation. However, the biophysics of extracellular excitation of
neurons is not fully understood, especially due to the complexity and heterogeneity of the
neural tissue surrounding the cell, and the effects of the geometry of both the stimulation
source and target.
Modeling of these phenomena can be divided into two main aspects: finding the potential
field generated by the stimulation source and describing the response of the neuron.
Calculation of the potential field has been attempted analytically for simple symmetric cases
and numerically for more complex configurations. The “activation function”, an extension of
the cable equation that models the effect of an externally applied field, has been used to
predict the effects at neural segments. However, the calculation of the membrane potential and
the effects on the neuron are usually treated separately: the feedback of the membrane
potential is ignored and, in many cases, the membrane is considered to be passive,
i.e. non-excitable.
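In textbook form, this extended cable equation, for a fiber along x with membrane potential V_m embedded in an extracellular potential V_e, reads

    \tau \,\partial_t V_m = \lambda^2\, \partial_x^2 V_m - V_m + \lambda^2\, \partial_x^2 V_e ,

where the last term, driven by the second spatial derivative of the extracellular potential, is the activating function; the simulations presented below replace this passive 1-D description with an active, three-dimensional membrane.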
We present numerical simulations that model the effects of an external stimulation on the
membrane potential of a three dimensional active neural membrane in a non empty
extracellular space. To model the complete system, a common particularization of the
Maxwell's equations for biological tissues is used and the membrane is introduced as a
special boundary where current exchange is determined by Hodgkin-Huxley dynamics. We
compare our results with previous 1-D (cable equation) simulations for the cases of a
homogeneous external field and a point source (as in the case of extracellular stimulation by
a small electrode). In particular, we compare our results to a recent extension of the activating
function that accounts for past criticism, and to recent experimental results from cultures
under magnetically induced electric fields, which seem to match the thresholds for
action potential generation predicted by the activating function.
The presented framework allows the simulation of neural excitation by extracellular
stimulation in a space with arbitrary heterogeneous conductivity.
W2
Dynamic transitions in the effective connectivity of interacting
cortical areas
Demian Battaglia*21, Annette Witt21
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Max-Planck Institute for Nonlinear Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Long-range anatomical connections between distinct local cortical areas define a substrate
network constraining the spatio-temporal complexity of neural responses and, in particular,
of rhythmic brain activity [1]. Such structural connectivity does not, however, coincide with
effective connectivity, related to the more elusive question “Which areas cause the activity of
which others?” [2]. Effective connectivity is directed and is often task-dependent, evolving
even across different stages of a single task [3, 4]. These fast changes are incompatible with
the slow variation of anatomical connections in a mature brain and might be explained as
dynamical transitions in the collective organization of neural activity. We consider here small
network motifs of interacting cortical areas (N = 2-4), modeled first as mean-field rate units
and then as large populations of spiking neurons. Intra-areal local couplings are mainly
inhibitory while inter-areal longer-range couplings are purely excitatory. All the interactions
are delayed. Sufficiently strong local delayed inhibition induces synchronous fast oscillations
and for weak long-range excitation phase-locked multi-areal polyrhythms are obtained [5, 6].
Even when the structural networks are fully symmetric, varying the strength of local inhibition
and the delays of local and long-range interactions generates dynamical configurations
which spontaneously break the symmetry under permutation of the areas. The simplest
example is provided by the N = 2 network in which transitions from in-phase or anti-phase to
out-of-phase lockings with intermediate equilibrium phase-shifts are identified [6]. Areas
leading in phase over laggard areas can therefore be unambiguously pinpointed. The natural
emergence of directionality in inter-areal communication is probed analysing the time-series
obtained from simulations with tools like cross wavelet transform [7] and spectral-based
estimation of Granger causality [8]. Remarkably, for stronger inter-areal couplings, chaotic
states emerge which amplify the asymmetries of the polyrhythms from which they originate.
In such configurations, the firing rate of laggard areas undergoes significantly stronger and
more irregular amplitude fluctuations than leading areas. Asymmetric chaotic states can be
described as conditions of effective entrainment in which laggard areas are driven into chaos
by the more periodic firing of leader areas. Fully symmetric structural networks can thus give
rise to multiple alternative effective networks with reduced symmetry. Transitions
between different effective connectivities are achieved via transient perturbations of the
dynamics without need for costly rearrangements of the structural connections.
References:
[1] C.J. Honey, R. Kötter, M. Breakspear and O. Sporns, Proc. Nat. Ac. Sci. 104(24), 10240–
10245 (2007).
[2] K.J. Friston, Hum Brain Mapping 2, 56-78 (1994).
[3] T. Bitan et al., Journ. Neurosci. 25(22):5397–5403 (2005).
[4] S.L. Fairhall and A. Ishai, Cereb Cortex 17(10): 2400–2406 (2007).
[5] M. Golubitsky and I. Stewart, The Symmetry Perspective, Birkhäuser (2002).
[6] D.Battaglia, N. Brunel and D. Hansel, Phys. Rev. Lett. 99, 238106 (2007).
[7] A. Grinsted, J.C. Moore and S. Jevrejeva, Nonlin. Processes Geophys., 11, 561-566,
2004.
[8] M. Dhamala, G. Rangarajan, and M. Ding, Phys. Rev. Lett. 100 (1) 018701, 2008.
W3
The selective attention for action model (SAAM)
Christoph Böhme*1, Dietmar Heinke1
1 School of Psychology, University of Birmingham, Birmingham, UK
* [email protected]
Classically, visual attention is assumed to be influenced by visual properties of objects, e.g.
as assessed in visual search tasks. However, recent experimental evidence suggests that
visual attention is also guided by action-related properties of objects ("affordances", Gibson,
1966, 1979), e.g. the handle of a cup affords grasping the cup; therefore attention is drawn
towards the handle (see Pellegrino, Rafal, & Tipper, 2005 for an example).
In a first step towards modelling this interaction between attention and action, we
implemented the Selective Attention for Action model (SAAM). The design of SAAM is based
on the Selective Attention for Identification model (SAIM, Heinke & Humphreys, 2003). For
instance, we also followed a soft-constraint satisfaction approach in a connectionist
framework. However, SAAM's selection process is guided by locations within objects
suitable for grasping them whereas SAIM selects objects based on their visual properties.
In order to implement SAAM's selection mechanism two sets of constraints were
implemented. The first set of constraints took into account the anatomy of the hand, e.g.
maximal possible distances between fingers. The second set of constraints (geometrical
constraints) considered suitable contact points on objects by using simple edge detectors.
First, we demonstrate that SAAM can successfully mimic human behaviour by comparing
simulated contact points with experimental data. Secondly, we show that SAAM simulates
affordance-guided attentional behaviour as it successfully generates contact points for only
one object in two-object images. Our model shows that stable grasps can be derived directly
from visual inputs without performing object recognition and without constructing
three-dimensional internal representations of objects.
Also, no complex analysis of torques and forces is required. The similar mechanisms employed
in SAIM and SAAM make it natural to combine both into a unified model of visual selection
for action and identification.
References:
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Heinke, D., & Humphreys, G. W. (2003). Attention, spatial representation and visual neglect:
Simulating emergent attention and spatial memory in the selective attention for
identification model (SAIM). Psychological Review 110(1), 29--87.
Pellegrino, G. di, Rafal, R., & Tipper, S. P. (2005). Implicitly evoked actions modulate visual
selection: evidence from parietal extinction. Current Biology, 15(16), 1469--1472.
W4
Matching network dynamics generated by a neuromorphic
hardware system and by a software simulator
Daniel Brüderle*3, Jens Kremkow41, Andreas Bauer3, Laurent Perrinet2, Ad Aertsen41,
Guillaume Masson2, Karlheinz Meier3, Johannes Schemmel3
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Institut de Neurosciences Cognitives de la Méditerranée, Centre national de la recherche
scientifique, Aix-Marseille Universite, Marseille, France
3 Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
4 Neurobiology and Biophysics, Albert-Ludwigs-University, Freiburg, Germany
* [email protected]
We introduce and utilize a novel methodological framework for the unified setup, execution
and analysis of cortical network experiments on both a neuromorphic hardware device and a
software simulator. In order to be able to quantitatively compare data from both domains, we
developed hardware calibration and parameter mapping procedures that allow for a direct
biological interpretation of the hardware output. Building upon this, we integrated the
hardware interface into the simulator-independent modeling language PyNN. We present the
results of a cortical network model that is both emulated on the hardware system and
computed with the software simulator NEST. With respect to noise and transistor level
variations in the VLSI device, we propose that statistical descriptors are adequate for the
discrimination between states of emerging network dynamics. We apply measures for the
rate, the synchrony and the regularity of spiking as a function of the recurrent inhibition
within the network and of the external stimulation strength. We discuss the biological
relevance of the experimental results and the correspondence between both platforms in
terms of the introduced measures.
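To illustrate the kind of unified setup this enables, a minimal PyNN script might look as follows; the network, parameters and the idea of simply swapping the backend module are illustrative, and the model actually used in this study is not reproduced here.

    import pyNN.nest as sim   # the hardware is addressed via its own PyNN backend

    sim.setup(timestep=0.1)
    exc = sim.Population(80, sim.IF_cond_exp())
    inh = sim.Population(20, sim.IF_cond_exp())
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=8.0))

    sim.Projection(noise, exc, sim.FixedProbabilityConnector(0.1),
                   sim.StaticSynapse(weight=0.004), receptor_type='excitatory')
    sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
                   sim.StaticSynapse(weight=0.02), receptor_type='inhibitory')

    exc.record('spikes')
    sim.run(1000.0)           # milliseconds
    data = exc.get_data()     # identical analysis code for both back-ends
    sim.end()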
W5
Attractor dynamics in VLSI
Patrick Camilleri*2, Massimiliano Giulioni1, Maurizio Mattia1, Jochen Braun2, Paolo del
Giudice1
1 Italian National Institute of Health, Rome, Italy
2 Otto-von-Guericke University, Magdeburg, Germany
* [email protected]
We describe and demonstrate the implementation of attractor neural network dynamics in
analog VLSI technology on the F-LANN chip [1]. The on-chip network is made up of an
excitatory and an inhibitory population consisting of 128 linear integrate-and-fire neurons
recurrently connected together. Apart from the recurrent input these two populations receive
external input in the form of Poisson-distributed spike trains from an Address-Event-Representation (AER) based system. These external stimuli are needed to provide an actual
stimulus to the attractor network as well as an adequate 'thermal bath' for the on-chip
populations. We explain how, starting from a theoretical mean-field approximation of
a hypothetical attractor neural network having two stable states of activity, we find the
chip parameters (voltage biases) that yield an on-chip effective response function (EFR)
matching the theoretical EFR [2]. Once this is achieved, we demonstrate that the hardware
attractor neural network indeed shows attractor behavior, having both a spontaneous state
and a working-memory state. The measured attractor activity matches the mean-field and
software simulation results well.
References:
[1] M. Giulioni, P. Camilleri et al. A VLSI Network of Spiking Neurons with Plastic Fully
Configurable "Stop-Learning" Synapses. ICECS, 2008.
[2] M. Mascaro and D. Amit. Effective neural response function for collective population
states. Network, 1999.
W6
A novel information measure to understand differentiation in
social systems
Paolo Di Prodi*2, Bernd Porr2, Florentin Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Department of Electronics & Electrical Engineering, University of Glasgow, Glasgow, UK
* [email protected]
We propose a novel information measure called anticipatory information (AI) that can be
applied to a wide range of adaptive closed loop controllers. AI determines the success of
learning in an agent which initially relies on a predefined reflex that will be gradually avoided
by learning to use an anticipatory signal. This measure can be used to validate Luhmann's
theory (Social Systems, 1996) of social differentiation: sub-systems are formed to reduce the
amount of closed-loop information. This means that our anticipatory information (AI) will be
lower in the case of subsystem formation, while the undesired reflex is still avoided.
We now describe how this measure is computed. Before learning, the agent has
a pure reflex-based behaviour. It can be described as a closed-loop feedback controller
which calculates an error signal to represent the deviation from its desired state. This error
signal is then used to trigger a motor action in order to compensate the error. Predictive or
anticipatory learning (ICO, ISO, RL, ...) aims to predict the trigger of this reflex reaction or in
other words the trigger of a non-zero error signal. In order to achieve this the organism
learns to use additional sensory information to prevent the trigger of the reflex. Our AI is
computed from the correlation between the error signal of the reflex loop and the
additional predictive signals. AI rises if additional sensor inputs are able to reduce the peak
of the cross-correlation between the reflex and the predictive input. In terms of bits,
halving the peak of the cross-correlation corresponds to a 1-bit increase.
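One way to write this definition, consistent with the halving rule just stated (notation ours):

    \mathrm{AI} = \log_2 \frac{P_0}{P}, \qquad P = \max_{\tau} \left| C_{ep}(\tau) \right| ,

where C_{ep} is the cross-correlation between the reflex error signal e and the predictive input p, P its peak after learning and P_0 the corresponding peak before learning; each halving of the peak then adds one bit.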
We now explain how a differentiated sub-system uses less AI than a homogeneous one.
The social task is cooperative food foraging: agents can forage directly
from the food patches or reduce the energy of other agents who have previously got food.
Thus every agent has two competitive signals: one from the food patches and one indicating
the energy level of the other agents. The agents are Braitenberg vehicles with 2 lateral
wheels and 2 antennas. The agents learn how to use the long antennas to approach food or
other agents to get their energy.
The AI is computed between the reflex and the predictive inputs. Luhmann theorised that
sub-systems are formed to reduce the perceived complexity of the environment: here agents
can discard either the food signal or the energy signal. Indeed, we found different AIs for the
two different signals: for the food searchers the AI mainly comes from the sensors which
sense the food, whereas the parasites' AI is mainly carried by the food signals coming from
the other agents.
Thus, we conclude that predictive learning in a social context leads to the formation of
subsystems which could be shown with the help of AI.
W7
Enhancing information processing by synchronization
Udo Ernst*1, David Rotermund1
1 Department for Theoretical Physics, University of Bremen, Bremen, Germany
* [email protected]
Synchronization is a generic dynamical feature of brain activity, occurring on a range of
spatial and temporal scales in different cortical areas. There have been several suggestions
about the functional role of synchronization, e.g. that it dynamically links elementary features
into coherent percepts, performs magnitude-invariant pattern matching, or that it is just an
epiphenomenon of the cortical dynamics.
Here, we explore the different idea that synchronization serves as a mechanism to enhance
differences in input patterns presented to a recurrently coupled neural network. Our idea is
motivated by gamma oscillations observed in local field potential (LFP) recordings from
macaque monkey area V4, which allow a support vector machine (SVM) to predict the
stimulus shown to the animal with great accuracy. These gamma oscillations are modulated
by attention such that activity patterns for different stimuli become more distinct. This change
in neural activity is accompanied by a pronounced increase in classification performance of
the SVM.
We investigate a recurrent network of randomly coupled integrate-and-fire neurons driven by
Poissonian input spike trains. All synaptic connections have equal strength. The input rate
distribution over all neurons in the network is fixed, with about half of the neurons being
stimulated at a low rate and the remaining neurons at a high rate. However, the
assignment of these input rates to specific neurons is permuted for every stimulus, thus
leading to specific stimulation patterns. Parameters are adjusted such that the network only
weakly synchronizes in its ground state, corresponding to the non-attended condition in the
experiments.
Simulations of the network are done with N different patterns, and over M trials. Average
activity is convolved with an alpha-function modeling the mapping of the population activity
into LFPs. From these LFPs, power coefficients between 5 Hz and 200 Hz are computed
and used as inputs for a SVM classifier, which had a performance of 35% correct for N=6.
We simulated the influence of attention by increasing the internal coupling strengths by 20%.
While still being in a weakly synchronized regime, the LFPs for different stimuli now become
more distinct, increasing SVM classification to 42%. Performances and power-spectra
correspond well with experimental findings.
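A condensed sketch of this analysis chain (with illustrative parameters, not those of the study) could look as follows:

    import numpy as np
    from scipy.signal import fftconvolve, welch
    from sklearn.svm import SVC

    DT, TAU = 0.001, 0.010   # 1 ms bins, 10 ms alpha-function time constant (assumed)

    def lfp_from_population_rate(pop_rate):
        """Convolve the binned population activity with an alpha function."""
        t = np.arange(0.0, 10 * TAU, DT)
        alpha = (t / TAU) * np.exp(1.0 - t / TAU)
        return fftconvolve(pop_rate, alpha, mode='same')

    def power_features(lfp):
        """Power coefficients between 5 and 200 Hz, used as classifier input."""
        freqs, psd = welch(lfp, fs=1.0 / DT, nperseg=512)
        return psd[(freqs >= 5) & (freqs <= 200)]

    # X: one feature vector per trial (M trials), y: stimulus index (N=6 patterns)
    clf = SVC(kernel='linear')
    # clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)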
In summary, this example not only proposes a novel mechanism for the enhancement of a
neural representation under attention; it also introduces a new concept of how
synchronization can render neural activities more distinct (e.g., when higher areas like V4
collect information from local features). Here, recurrent interactions amplify differences in
the input rates and hence prevent the information loss that plain synaptic averaging would
incur.
Acknowledgements:
Supported by BMBF Bernstein Group Bremen, DIP Metacomp, and the ZKW Bremen. We
thank S. Mandon, A. Kreiter, K. Taylor and K. Pawelzik for stimulating discussions, and for
kindly providing us tons of data.
W8
A computational model of stress coping in rats
Vincenzo Fiore*2, Francesco Mannella2, Marco Mirolli2, Simona Cabib1, Stefano Puglisi-Allegra1, Gianluca Baldassarre2
1 Department of Psychology, Università degli studi di Roma "La Sapienza", Rome, Italy
2 Laboratory of Computational Embodied Neuroscience, Istituto di Scienze e Tecnologie
della Cognizione, Consiglio Nazionale delle Ricerche, Rome, Italy
* [email protected]
This work presents a computational neural-network model explaining the brain processes
underlying stress coping in rats exposed to long lasting inescapable stress conditions,
focussing on the three neuromodulators dopamine (DA), noradrenaline (NE) and serotonin
(5-HT). The importance of the model relies on the fact that stress coping experiments are
considered a good animal model of the mechanisms underlying human depression.
Pascucci et al. (2007) used microdialysis to investigate the correlation existing between the
presence of NE and DA in medial prefrontal cortex (mPFC) and the quantity of
mesoaccumbens DA during a restraint test lasting 240 min. The comparison of the
microdialysis results related to sham rats and rats with either NE or DA depletion in mPFC
showed the role played by such neuromodulators on DA release in nucleus accumbens
(NAcc) and the active/passive modality of stress coping.
In the model, the stressing stimulus initially activates a first group of neural systems devoted
to active stress-coping and learning. The amygdala (Amg) activates the subsystems NAcc-shell/infralimbic-cortex (NAccS-IL) and NAcc-core/prelimbic-cortex (NAccC-PL). The latter
subsystem is responsible for triggering actions that may terminate the stressing stimulus,
whereas the former is responsible for (learning) the selective inhibition of those 'neural
channels' of actions which are executed but fail to stop the stressing stimulus.
The ability of actively coping with stress (lasting about 120 min in experiments) and learning
which actions have to be avoided as ineffective is modulated (either depressed or enhanced)
by the presence of the three neuromodulators targeting the Amg-NAcc-mPFC systems. Amg
activates the locus coeruleus (LC) which in turn produces NE, enhancing the activity of Amg,
NAccS and mPFC. The activity in mPFC activates the mesoaccumbens module of the VTA
which releases DA, enhancing the activity of NAcc.
Passive stress coping, which follows active coping, is caused by both the release of 5-HT in
the Amg-NAcc-mPFC systems and the VTA release of DA in mPFC. The cause of the shift
from active to passive coping is assumed to be in the PL and its inhibitory control of the
activity of the dorsal raphe (DR): when this inhibition terminates due to the IL inhibition of PL,
the DR starts releasing 5-HT (Maier and Watkins, 2005), activating at the same time the
mesocortical VTA via glutamatergic synapses.
The model has an architecture wholly constrained by the known brain anatomy and it
reproduces in rather fine detail the microdialysis recordings of the slow (tonic) dynamics of DA
and NE in mPFC and NAcc (e.g. see charts comparing microdialyses and simulations). On
this basis, the model offers for the first time a coherent and detailed computational account
of brain processes during stress coping.
References:
Maier S.F., Watkins L.R. (2005). Stressor controllability and learned helplessness: The roles
of the dorsal raphe nucleus, serotonin, and corticotropin-releasing factor. Neuroscience
and Biobehavioral Reviews, 29, 829-841.
Pascucci T., Ventura R., Latagliata E.C., Cabib S., Puglisi-Allegra S. (2007). The medial
prefrontal cortex determines the accumbens dopamine response to stress through the
opposing influences of norepinephrine and dopamine. Cerebral Cortex, 17, 2796-2804.
W9
Self-sustained activity in networks of integrate-and-fire neurons
without external noise
Marc-Oliver Gewaltig*21
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
There is consensus in the current literature that stable states of asynchronous irregular firing
require (i) very large networks of 10,000 or more neurons and (ii) diffuse external
background activity or pacemaker neurons.
Here, we demonstrate that random networks of integrate-and-fire neurons with current-based
synapses assume stable states of self-sustained asynchronous and irregular firing even
without external random background (Brunel 2000) or pacemaker neurons (Roudi and
Latham 2007). These states can be robustly induced by a brief pulse to a small fraction of
the neurons. If another brief pulse is applied to a small fraction of the inhibitory population,
the network will return to its silent resting state.
We demonstrate states of self-sustained activity in a wide range of network sizes, ranging
from as few as 1000 neurons to more than 100,000 neurons. Networks previously described
(Amit and Brunel 1997, Brunel 2000) operate in the diffusion limit where the synaptic weight
is much smaller than the threshold. By contrast, the networks described here operate in a
regime where each spike has a big influence on the firing probability of the post-synaptic
neuron. In this “combinatorial regime” each neuron exhibits very irregular firing patterns, very
similar to experimentally observed delay activity. We analyze the networks using a
random-walk model (Stein 1965).
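The induction protocol can be illustrated with a toy numpy script; the sizes, weights and pulse parameters below are our own choices for illustration and are not taken from the study.

    import numpy as np

    rng = np.random.default_rng(1)
    N, NE = 1000, 800                     # network size, number of excitatory neurons
    J, g = 0.4, 5.0                       # large weights: the "combinatorial regime"
    W = (rng.random((N, N)) < 0.1) * J    # sparse random coupling matrix
    W[:, NE:] *= -g                       # inhibitory columns
    tau, theta, dt = 0.02, 1.0, 0.001     # membrane time constant, threshold, step

    v = np.zeros(N)                       # silent resting state, no external noise
    for step in range(5000):
        spikes = v >= theta
        v[spikes] = 0.0                   # reset after spiking
        kick = J * (step < 5) * (np.arange(N) < 50)   # brief pulse to 50 neurons
        v += dt / tau * (-v) + W @ spikes + kick
        # a later pulse restricted to inhibitory neurons returns the network to rest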
References:
D.J. Amit and N. Brunel (1997) Cereb. Cortex, 7:237-252
N. Brunel (2000) J Comput Neurosci, 8(3):183-208
Y. Roudi and P.E. Latham (2007) PLoS Comput Biol, 3(9):e141
R.B. Stein (1965) Biophysical Journal, 5:173-194
W10
Intrinsically regulated self-organization of topologically ordered
neural maps
Claudius Gläser*1, Frank Joublin1, Christian Goerick1
1 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
Dynamic field theory models the spatio-temporal evolution of activity within the cortex and
has been successfully applied in various domains. However, the development of dynamic
neural fields (DNFs) is only rarely explored. This is because DNFs are sensitive to
the right balance between excitation and inhibition within the fields. Small changes to this
balance will result in runaway excitation or quiescence. Consequently, learning most often
focuses on the synaptic weights of projections to the DNF, thereby adapting the input-driven
dynamics, but leaving the self-driven dynamics unchanged.
Here we present a recurrent neural network model composed of excitatory and inhibitory
units which overcomes these problems. Our approach differs insofar as we do not make any
assumption about the connectivity of the field. In other words, the synaptic weights of both the
afferent projections to the field and the lateral connections within the field undergo Hebbian
plasticity. As a direct consequence our model has to self-regulate in order to maintain a
stable operation mode even in face of these experience-driven changes.
We therefore incorporate recent advances in the understanding of such homeostatic
processes. Firstly, we model the activity-dependent release of the neurotrophin BDNF
(brain-derived neurotrophic factor) which is thought to underlie homeostatic synaptic scaling.
BDNF has opposing effects on the scaling of excitatory synapses on pyramidal neurons and
interneurons, thereby mediating a dynamic adjustment in the excitatory-inhibitory balance.
Secondly, we adapt the intrinsic excitability of the model units by adjusting their resting
potentials. In both processes the objective function of each neuron is to achieve some target
firing rate.
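A compressed sketch of these two mechanisms might read as follows; the BDNF proxy, the opposite scaling directions and all constants are illustrative stand-ins for the model's actual equations.

    import numpy as np

    TARGET, ETA = 5.0, 0.001   # target firing rate (Hz) and adaptation speed (assumed)

    def homeostatic_step(rate, w_exc_on_pyr, w_exc_on_int, v_rest):
        """rate: running-average firing rate of each unit."""
        bdnf = rate.mean() / TARGET                 # activity-dependent BDNF proxy
        w_exc_on_pyr *= 1.0 - ETA * (bdnf - 1.0)    # high BDNF scales these down
        w_exc_on_int *= 1.0 + ETA * (bdnf - 1.0)    # ... and these up
        v_rest += ETA * (TARGET - rate)             # intrinsic excitability adjustment
        return w_exc_on_pyr, w_exc_on_int, v_rest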
We experimentally show how homeostasis, in the form of such locally operating processes,
contributes to the global stability of the field. Due to the self-regulatory nature of our model,
the number of free parameters reduces to a minimum which eases its use for applications in
various domains. It is particularly suited for modeling cortical development, since the process
of learning the mapping is self-organizing, intrinsically regulated, and only depends on the
statistics of the input patterns. Self-organizing maps usually develop a topologically ordered
representation by making use of distance-dependent lateral connections (e.g. Mexican Hat
connectivity). Since our model does not rely on such an assumption, the learned mappings
do not necessarily have to be topology preserving. In order to counteract this problem we
propose to incorporate an additional process which aims at the minimization of the wiring
length between the model units. This process relies on a purely local objective and runs in
parallel to the above mentioned self-regulation.
Our experiments confirm that this additional mechanism leads to a significant decrease in
topological defects and further enhances the quality of the learned mappings.
W11
Are biological neural networks capable of acting as computing
reservoirs?
Gabrielle Gutierrez*1, Larry Abbott2, Eve Marder1
1 Brandeis University, Boston, MA, USA
2 Columbia University, New York, NY, USA
* [email protected]
Recent computational work on neural networks has suggested that biological neural circuits
may act as rich, dynamic computing reservoirs that can be tapped by external circuitry to
perform a wide array of functions. So-called “liquid-state” or “echo-state” networks must be
of sufficient complexity and have a read-out mechanism to make use of this complexity
within the context of a specific task. These models make strong predictions that have not yet
been tested directly in a biological network. We examine the potential of the crustacean
stomatogastric ganglion (STG) to act as such a reservoir for general output-function
generation. The STG is a useful system for this investigation because it is a small group of
~25 highly connected motor neurons that can easily be isolated from the rest of the
crustacean nervous system. The activity of most (if not all) of its neurons can be recorded
simultaneously, and it can be driven effectively by an external signal using current-clamp
techniques.
By driving one identified STG neuron with sinusoidal injected current and analyzing action
potentials recorded from a number of neurons, we identify a set of basis functions that can
be used to generate a family of different output functions through a linear read-out unit. We
evaluate the completeness and diversity of these basis functions with an “output kernel” to
assess the potential of the STG to act as a dynamic reservoir for a variety of tasks. The
output kernel borrows from signal processing methods and we introduce its use as a metric
for evaluating the completeness of the set of possible outputs of a neural network. This
analysis is also applied to a model network of similar size and complexity as the STG and
the output kernels are compared to those for the biological network.
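In reservoir-computing terms, the read-out stage is a linear map fitted to the recorded basis functions. A generic sketch follows (with synthetic stand-in signals, not the STG recordings):

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 1e-3)
# stand-in basis functions: phase-shifted, nonlinearly distorted responses to a 1 Hz drive
basis = np.stack([np.sin(2 * np.pi * t + p) ** k
                  for p in (0.0, 0.7, 1.4, 2.1) for k in (1, 2)])
basis += 0.05 * rng.standard_normal(basis.shape)          # recording noise
target = np.sign(np.sin(2 * np.pi * 2.0 * t))             # one desired output function
w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)      # linear read-out weights
print("read-out MSE: %.3f" % np.mean((basis.T @ w - target) ** 2))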
The behavior of complex dynamical systems can be hard to predict and the small differences
between biological and modeled networks may produce very different results. These
preliminary experiments are important for elucidating the computing strategies of living
nervous systems.
W12
A model of V1 for visual working memory using cortical and
interlaminar feedback
Thorsten Hansen*1, Heiko Neumann2
1 Department of General Psychology, Justus Liebig University, Giessen, Germany
2 Institute of Neural Information Processing, Ulm University, Ulm, Germany
* [email protected]
Early visual areas can store specific information about visual features held in working
memory for many seconds in the absence of a physical stimulus (Harrison & Tong 2009,
Nature 458 632-635). We have developed a model of V1 using recurrent long-range
interaction that enhances coherent contours (Hansen & Neumann 2008, Journal of Vision
8(8):8 1-25) and robustly extracts corner and junction points (Hansen & Neumann 2004,
Neural Computation 16(5) 1013-1037).
Here we extend this model by incorporating an orientation selective feedback signal from a
higher cortical area. The feedback signal is nonlinearly compressed and multiplied with the
feedforward signal. The compression increases the gain for decreasing input, such that the
selection of the orientation to be memorized is realized by a selective decrease of feedback
for this orientation. As a consequence, the model predicts that the overall activity in the
network should decrease with the number of orientations to be memorized. Model
simulations reveal that the feedback results in sustained activity of the orientation to be
memorized over many recurrent cycles after stimulus removal. The pattern of activity is
robust against an intervening, irrelevant orthogonal orientation shown after the orientation to
be memorized. We suggest that the prolonged activation for sustained working memory in
V1 shares similarities with the finding that different processing stages map onto different
temporal episodes of V1 activation in figure-ground segregation (Roelfsema, Tolboom, &
Khayat 2007, Neuron 56 785-792). Unlike previous approaches that have modeled working
memory with a dedicated circuit, we show that a model of recurrent interactions in a sensory
area such as V1 can be extended to memorize visual features by incorporating a feedback
signal from a higher area.
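A minimal sketch of the gain rule described above: the feedback is compressed by a saturating nonlinearity (here fb/(c+fb), a common choice, not necessarily the authors') and multiplied with the feedforward drive, so a strong selective decrease of feedback produces only a modest response decrease for the memorized orientation channel.

import numpy as np

def modulated_response(ff, fb, c=0.1):
    # multiplicative interaction of feedforward drive and compressed feedback;
    # the compression has its highest gain at small fb values
    return ff * fb / (c + fb)

ff = np.ones(8)                 # uniform feedforward drive across 8 orientation channels
fb = np.full(8, 1.0)
fb[3] = 0.2                     # selectively decreased feedback marks the memorized channel
print(np.round(modulated_response(ff, fb), 2))
# a five-fold feedback reduction yields well under a five-fold response reduction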
W13
Finite synaptic potentials cause a non-linear instantaneous
response of the integrate-and-fire model
Moritz Helias1, Moritz Deger*1, Markus Diesmann3, Stefan Rotter1,2
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
3 RIKEN Brain Science Institute, Wako City, Japan
* [email protected]
The integrate-and-fire neuron model with exponential postsynaptic potentials is widely used
in analytical work and in simulation studies of neural networks alike. For Gaussian white
noise input currents, the membrane potential distribution is described by a population density
approach [1]. The linear response properties of the model have successfully been calculated
and applied to the dynamics of recurrent networks in this diffusion limit [2]. However, the
diffusion approximation assumes the effect of each synapse on the membrane potential to
be infinitesimally small.
Here we go beyond this limit and allow for finite synaptic weights. We show that this
considerably alters the absorbing boundary condition at the threshold: in contrast to the
diffusion limit, the probability density goes to zero on the scale of the amplitude of a
postsynaptic potential (suppl. Fig B). We give an analytic approximation for the density
(suppl. Fig A) and calculate how its behavior near threshold shapes the response properties
of the neuron. The neuron with finite synaptic weights responds arbitrarily fast to transient
positive inputs. This differs qualitatively from the behavior in the diffusion limit, where the
neuron acts as a low-pass filter [3]. We extend the linear response theory [3] and quantify
the instantaneous response of the neuron to an impulse like input current. Even for
realistically small perturbations (s) of the order of a synaptic weight, we find a highly nonlinear behavior of the spike density (suppl. Fig C). Direct simulations in continuous time [4]
confirm the analytical results. For numerical simulations in discrete time, we provide an
analytical treatment which quantitatively explains the distortions of the membrane potential
density. We find that temporal discretization of spike times amplifies the effects of finite
synaptic weights. Our demonstration of a non-linear instantaneous response amends the
theoretical analysis of synchronization phenomena and plasticity based on the diffusion limit
and linear response theory.
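A Monte Carlo sketch of the effect (with illustrative parameters): with finite weights, a stationary population keeps a finite fraction of neurons within one PSP of threshold, so a single extra input spike elicits an instantaneous response, in contrast to the low-pass behavior expected in the diffusion limit.

import numpy as np

rng = np.random.default_rng(2)
N, theta, w, tau, dt = 20000, 1.0, 0.1, 0.02, 1e-4
v = np.zeros(N)
for _ in range(2000):                        # relax to a stationary membrane distribution
    v += -v * dt / tau + w * (rng.poisson(1000 * dt, N) - rng.poisson(800 * dt, N))
    v[v >= theta] = 0.0                      # fire and reset
# an impulse-like extra PSP of size s = w fires, instantaneously, exactly those
# neurons whose membrane potential lies within w of threshold:
print("instantaneous response fraction: %.4f" % np.mean(v >= theta - w))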
Acknowledgements:
Partially funded by DIP F1.2, BMBF Grant 01GQ0420 to the Bernstein Center for
Computational Neuroscience Freiburg, EU Grant 15879 (FACETS), and Next-Generation
Supercomputer Project of MEXT, Japan. All simulations are performed using NEST [5].
References:
[1] Ricciardi LM, Sacerdote L: The Ornstein-Uhlenbeck process as a model for neuronal
activity. Biol Cybern 1979, 35:1-9
[2] Brunel N, Hakim V: Fast Global Oscillations in Networks of Integrate-and-Fire Neurons
with Low Firing Rates. Neural Comput 1999, 11(7):1621-1671
[3] Brunel N, Chance FS, Fourcaud N, Abbott LF: Effects of Synaptic Noise and Filtering on
the Frequency Response of Spiking Neurons. Phys Rev Lett 2001, 86(10):2186-2189
[4] Morrison A, Straube S, Plesser HE, Diesmann M: Exact subthreshold integration with
continuous spike times in discrete time neural network simulations. Neural Comput
2007, 19(1):47-79
[5] Gewaltig M-O, Diesmann M: NEST (NEural Simulation Tool), Scholarpedia 2007, 2(4):
1430
W14
Simple recurrent neural filters for non-speech sound recognition
of reactive walking machines
Poramate Manoonpong*1, Florentin Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
* [email protected]
Biological neural networks consist of extensive recurrent structures implying the existence of
neural dynamics, like chaotic [1], oscillatory [2], and hysteresis behavior [3]. This suggests
that complex dynamics plays an important role for different brain functions, e.g., for
processing sensory signals and for controlling actuators [4]. From this point of view, in this
study, we exploit hysteresis effects of a single recurrent neuron [5] in order to systematically
design minimal and analyzable filters. Due to hysteresis effects and transient dynamics of
the neuron, at specific parameter configurations, the single recurrent neuron can be
configured into adjustable low-pass filters (see Supplementary Fig. 1). Extending the neural
module by two recurrent neurons we even obtain high- and band-pass filters (see
Supplementary Fig. 1).
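A discrete-time sketch of the low-pass module (the weights are illustrative, not the configurations from the paper): a single sigmoidal neuron with an excitatory self-connection integrates slowly, so fast components of the input are attenuated relative to slow ones.

import numpy as np

def recurrent_neuron(inputs, w_self=0.9, w_in=0.3):
    out, o = [], 0.0
    for x in inputs:                      # self-connection w_self sets the cut-off
        o = np.tanh(w_self * o + w_in * x)
        out.append(o)
    return np.array(out)

t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 500.0)      # slow signal component
fast = 0.8 * np.sin(2 * np.pi * t / 7.0)  # fast "motor noise"
y, y_slow = recurrent_neuron(slow + fast), recurrent_neuron(slow)
print("residual fast ripple / input ripple: %.2f" % (np.std(y - y_slow) / np.std(fast)))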
The networks presented here are hardware oriented; we have successfully implemented,
e.g., a low-pass filter network on a mobile processor of our hexapod robot [6]. It filters motor
noise and, through neural locomotion control [6], enables the robot to autonomously react to
a specific auditory signal in a real environment: the robot changes its gait from slow to fast
as soon as it detects the auditory signal at a carrier frequency of 300 Hz (see Supplementary
video at http://www.nld.ds.mpg.de/~poramate/BCCN2009/AuditoryDrivenWalkingBehavior.mpg).
These auditory-driven walking experiments show that the simple recurrent neural filters are
suitable for applications like background-noise elimination or non-speech sound recognition
in robots. To a certain extent, the approach pursued here sharpens the understanding of how
the dynamical properties of a recurrent neural network can be exploited for filter design, and
it may point toward a new way of modeling sensory preprocessing for robot communication
as well as robot behavior control.
Acknowledgements:
This research was supported by the PACO-PLUS project as well as by BMBF (Federal
Ministry of Education and Research), BCCN (Bernstein Center for Computational
Neuroscience)–Goettingen W3.
References:
[1] H. Korn, P. Faure, Is there chaos in the brain? II. Experimental evidence and related
models, Comptes Rendus Biologies 326 (9) (2003) 787–840.
[2] T. G. Brown, On the nature of the fundamental activity of the nervous centres; together
with an analysis of the conditioning of rhythmic activity in progression, and a theory of
the evolution of function in the nervous system, Journal of Physiology - London 48 (1)
(1914) 18–46.
[3] A. Kleinschmidt, C. Buechel, C. Hutton, K. J. Friston, R. S. Frackowiak, The neural
structures expressing perceptual hysteresis in visual letter recognition, Neurons 34 (4)
(2002) 659–666.
[4] R. B. Ivry, The representation of temporal information in perception and motor control,
Current Opinion in Neurobiology 6 (6) (1996) 851–857.
[5] F. Pasemann, Dynamics of a single model neuron, International Journal of Bifurcation
and Chaos 3 (2) (1993) 271–278.
[6] P. Manoonpong, F. Pasemann, F. Woergoetter, Sensor-driven neural control for
omnidirectional locomotion and versatile reactive behaviors of walking machines.
Robotics and Autonomous Systems 56(3) (2008) 265–288.
W15
A comparison of fixed final time optimal control computational
methods with a view to closed loop IM
Xavier Matieni*1, Stephen Dodds1
1 School of Computing Information Technology & Engineering, University of East London,
London, UK
* [email protected]
The purpose of this paper is to lay the foundations of a new generation of closed loop
optimal control laws based on the plant state space model and implemented using artificial
neural networks. The basis is the long-established open-loop methods of Bellman and
Pontryagin, which compute optimal controls off line and apply them subsequently in real
time. They are therefore open-loop methods, and during the period leading up to the present
century they were abandoned by mainstream control researchers due to a) the
fundamental drawback of susceptibility to plant modelling errors and external disturbances
and b) the lack of success in deriving closed-loop versions in all but the simplest and often
unrealistic cases.
The recent energy crisis, however, has prompted the authors to revisit the classical optimal
control methods with a view to deriving new practicable closed loop optimal control laws that
could save terawatts of electrical energy by replacement of classical controllers throughout
industry. First Bellman’s and Pontryagin’s methods are compared regarding ease of
computation. Then a new optimal state feedback controller is proposed based on the training
of artificial neural networks with the computed optimal controls.
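For the fixed-final-time linear-quadratic case, the off-line computation reduces to Bellman's backward recursion. A scalar sketch follows (the plant numbers are illustrative); the resulting (state, time, control) triples would constitute the training set for the proposed neural-network controller.

import numpy as np

# scalar plant x+ = a x + b u, cost sum(q x^2 + r u^2) + q_f x_T^2, fixed final time T
a, b, q, r, q_f, T = 1.05, 0.5, 1.0, 0.1, 5.0, 20
P, gains = q_f, []
for _ in range(T):                        # Bellman / Riccati backward recursion
    K = a * b * P / (r + b * b * P)       # optimal time-varying feedback gain
    P = q + a * a * P - K * a * b * P     # cost-to-go update
    gains.append(K)
gains = gains[::-1]
x, training_set = 1.0, []
for t in range(T):                        # roll out the optimal closed-loop trajectory
    u = -gains[t] * x
    training_set.append((x, t, u))        # (state, time) -> control samples for the ANN
    x = a * x + b * u
print("final state: %.4f" % x)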
References:
Bellman R., (1957). Dynamic Programming, Princeton, NJ: Princeton University Press.
Pontryagin L. S., (1959), Optimal Control Processes. Usp. Mat. Nauk 14, 3
Boltyanskii, V. G., Gamkrelidze, R. V., and Pontryagin, L. S. (1960), The Mathematical
Theory of Optimal Processes, I. The Maximum Principle, Izv. Akad. Nauk SSSR, Ser.
Mat. 24, 3.
Bellman, R., Dreyfus S. E. (1962), Applied Dynamic Programming, Princeton, NJ: Princeton
University Press.
Pearson A. B., 'Synthesis of a Minimum Energy Controller subject to Average Power
Constraint', in Proceedings of the 1962 Joint Automatic Control Conference, New York,
pp. 19-4-1 to 19-4-6.
Shinners S. M., (1992), Modern Control System Theory and Design, John Wiley & Sons, pp
632-668.
Sunan, H., Kok K. and Kok Z (2004), Neural Network Control: Theory and Application,
Research Studies Press Ltd.
Picton P., (2000), Neural Networks, Palgrave.
W16
Is cortical activity during work, idling and sleep always self-organized critical?
Viola Priesemann*3, Michael Wibral1, Matthias HJ Munk2
1 MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
2 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
3 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
Self-organized critical (SOC) systems are complex dynamical systems which may express
cascades of events, called avalanches (Bak et al., 1987). SOC was proposed to govern
brain dynamics, because of its activity fluctuations over many orders of magnitude, its
sensitivity to small input, and its long term stability (Bak, 1996; Jensen, 1998). In addition,
the critical state is optimal for information storage and processing (Bertschinger and
Natschläger, 2004). The hallmark feature of SOC systems, a power law distribution f(s) for
the avalanche size s, was found for neuronal avalanches recorded in vitro (Beggs and Plenz,
2003). However, in vivo, electrophysiological recordings only cover a small fraction of the
brain, while criticality analysis assumes that the complete system is sampled. Nevertheless,
f(s) obtained from local field potentials (LFPs) recorded from 16 channels in the behaving monkey
could be reproduced by subsampling a SOC model, namely evaluating only the activity from
16 selected sites which represented the electrodes in the brain (Priesemann et al., 2009).
Here, we addressed the question whether the brain of the monkey always operates in the
SOC state, or whether the state changes with working, idling and sleeping phases. We then
investigated how the different neuronal dynamics observed in the awake and sleeping
monkey can be interpreted within the framework of SOC.
We calculated f(s) from multichannel LFPs recorded in the prefrontal cortex (PFC) of the
macaque monkey during performance of a short term memory task, idling, or sleeping. We
compared these results to f(s) obtained from subsampling a SOC model (Bak et al., 1987)
and the following variations of this model: To vary the local dynamics of the SOC model, we
changed its connectivity. The connectivity can be altered such that only the slope of the
power law of the fully sampled model changes, while the system stays in the critical state
(Dhar, 2006). To obtain slightly sub- and supercritical models instead of a SOC model, we
changed the probability of activity propagation by <2%.
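The subsampling analysis can be illustrated with the original Bak-Tang-Wiesenfeld sandpile: avalanche sizes are measured once on the full lattice and once through 16 "electrode" sites only. Lattice size, drive count and electrode placement below are illustrative, not the study's settings.

import numpy as np

rng = np.random.default_rng(3)
L, n_drives = 20, 3000
z = rng.integers(0, 4, size=(L, L))
electrodes = {(i, j) for i in range(2, 18, 5) for j in range(2, 18, 5)}   # 4 x 4 = 16 sites
full, sub = [], []
for _ in range(n_drives):
    i, j = rng.integers(0, L, 2)
    z[i, j] += 1                                    # slow external driving
    size, seen = 0, 0
    while True:
        unstable = np.argwhere(z >= 4)
        if len(unstable) == 0:
            break
        for a, b in unstable:                       # relaxation: topple to the 4 neighbours
            z[a, b] -= 4
            size += 1
            seen += (int(a), int(b)) in electrodes  # activity visible at the "electrodes"
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= a + da < L and 0 <= b + db < L:   # open boundaries: grains fall off
                    z[a + da, b + db] += 1
    if size:
        full.append(size)
    if seen:
        sub.append(seen)
print("mean avalanche size: fully sampled %.1f, subsampled %.1f"
      % (np.mean(full), np.mean(sub)))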
f(s) calculated from LFPs recorded in monkey PFC during task performance differed only
slightly from f(s) in the idling monkey, while f(s) in the sleeping monkey showed less large
avalanches. In the subsampled model, a similar decrease of the probability of large
avalanches could be obtained in two ways: Either, by decreasing the probability of activity
propagation, or by increasing the fraction of long range connections. Given that the brain
was in a SOC state during waking, the first option implies a state change from critical to
subcritical, while the second option allows the global dynamics to stay in the critical state.
A change in f(s) for different states (awake/asleep) does not necessarily imply a change from
criticality to sub- or supercriticality, but can also be explained by a change in the effective
connectivity of the network without leaving the critical state.
Acknowledgements:
We thank J. Klon-Lipok for help with data acquisition and M. Beian for help with data
preprocessing and evaluation. Support: BMBF Bernstein Partner, “memory network”
(16KEGygR).
W17
Filtering spike firing frequencies through subthreshold
oscillations
Belen Sancristobal*1, José María Sancho2, Jordi García-Ojalvo1
1 Departament de Física i Enginyeria Nuclear, Universitat Politecnica de Catalunya,
Terassa, Spain
2 Universitat de Barcelona, Barcelona, Spain
* [email protected]
In order to understand the role of subthreshold oscillations in filtering input signals, we study
the spiking behavior of a FitzHugh-Nagumo neuron with subthreshold oscillations, when
subject to a periodic train of action potentials. We also examine the situation in which the
receiving neuron is electrically coupled to another one. We relate the effectiveness of frequency
filtering to iterative maps arising from phase-resetting curves obtained from the
simulations. Our results show and explain in which situations a resonant behavior arises. We
extend the study to a chain of neurons in order to analyse the propagation of spikes.
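A minimal sketch of the setup, with illustrative parameters that put the rest state near a focus so that damped subthreshold oscillations appear between pulses: a FitzHugh-Nagumo unit driven by a periodic train of brief current pulses, with the spike count depending on the drive period.

import numpy as np

def fhn_spike_count(period, T=2000.0, dt=0.01):
    eps, a, b, amp, width = 0.08, 0.7, 0.8, 0.8, 1.0
    v, w, spikes, above = -1.2, -0.62, 0, False
    for step in range(int(T / dt)):
        I = amp if (step * dt) % period < width else 0.0   # periodic pulse train
        v += dt * (v - v ** 3 / 3.0 - w + I)
        w += dt * eps * (v + a - b * w)
        if v > 1.0 and not above:                          # upward threshold crossing
            spikes += 1
        above = v > 1.0
    return spikes

for period in (15.0, 25.0, 40.0):
    print("drive period %4.0f -> %d spikes" % (period, fhn_spike_count(period)))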
W18
Sensitivity analysis for the EEG forward problem
Maria Troparevsky*3, Diana Rubio1, Nicolas Saintier2,3
1 Centro de Matematica Aplicada, Universidad Nacional de San Martín, San Martin,
Argentina
2 Universidad Nacional de General Sarmiento, Los Polvorines, Argentina
3 Universidad de Buenos Aires, Buenos Aires, Argentina
* [email protected]
Sensitivity Analysis can provide useful information when one is interested in identifying the
parameters of a system since it measures the effects of parameter variations in the system
output. In the literature two different sensitivity functions are frequently used: the Traditional
Sensitivity Functions (TSF) and the Generalized Sensitivity Functions (GSF). The TSF is a
common tool used to measure the variation of the output, u, of a system with respect to
changes in its parameter q=(q1,q2,..,qn). Assuming smoothness of u, the sensitivity with
respect to a parameter qi, si(x), is defined as the partial derivative of u with respect to qi.
These functions are related to u via the Taylor approximation of first order. They give local
information and are used to determine the parameter to which the model is more sensitive.
The GSF was introduced by Thomaseth and Cobelli in 1999 to understand how the
parameter estimation is related to observed system outputs. It is defined only at the discrete
time points where measurements are taken. On a nonlinear parametric dynamical system
they are defined from the minimization of the weighted residual sum of squares. Both
functions were considered by some authors who compared their results for different
dynamical systems.
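As a reminder of what the TSF computes, here is a generic finite-difference sketch; the forward map u below is a toy stand-in, not the EEG solver.

import numpy as np

def tsf(model, q, x, i, h=1e-6):
    # traditional sensitivity s_i(x) = du/dq_i by central differences around nominal q
    qp, qm = q.copy(), q.copy()
    qp[i] += h
    qm[i] -= h
    return (model(qp, x) - model(qm, x)) / (2.0 * h)

def u(q, x):                                    # toy stand-in for the forward map
    return np.exp(-q[0] * x) + q[1] * x / (1.0 + q[2] * x ** 2)

q0 = np.array([0.33, 0.0042, 0.33])             # nominal conductivities (illustrative values)
x = np.linspace(0.0, 2.0, 5)
for i in range(3):
    print("s_%d:" % (i + 1), np.round(tsf(u, q0, x, i), 4))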
In this work we compute the TSF and the GSF to analyze the sensitivity of the 3D Poisson-type
equation with interfaces of the forward problem of electroencephalography (EEG), which
relates the measured electric potential u to the primary current Jp.
In a simple model where we consider the head as a volume consisting of three nested
homogeneous sets, we establish the differential equations that correspond to the TSF with
respect to the value of the conductivity of the different tissues q1, q2, q3. We introduce the
Sensitivity Equations for the parameters and deduce the corresponding Integral Equations.
Afterwards, in a spherical head model, we approximate the values of the TSF and the GSF
of the electric potential with respect to q1 for the case of a dipole source considering
different locations. This simple head model allows us to calculate the solution by a series
formula. Differentiating this series with respect to q1 we obtain the sensitivity function s1 for
the case of nested homogeneous spherical sets. The values of the sensitivities were
simulated considering that the observations are measurements of the electric potential on
the scalp collected by means of a set of electrodes in the 10-10 configuration, at a spike instant.
We compare the values obtained for both sensitivity functions. From the experiments we
conclude that in this example TSF and GSF do not seem to provide the same information.
The results suggest that a theoretical analysis of the information provided by the two
sensitivity functions is needed.
W19
Cortical networks at work: using beamforming and transfer
entropy to quantify effective connectivity
Michael Wibral*1, Christine Grützner4, Peter Uhlhaas4, Michael Lindner2, Gordon Pipa3,4, Wei
Wu4, Raul Vicente4
1 MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
2 Deutsches Institut für Internationale Pädagogische Forschung, Frankfurt, Germany
3 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
4 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
Functional connectivity of the brain describes the network of correlated activities of different
brain areas. However, correlation does not imply causality and most synchronization
measures do not distinguish causal and non-causal interactions among remote brain areas,
i.e. determine the effective connectivity. Identification of causal interactions in brain networks
is fundamental to understanding the processing of information. Quantifying effective
connectivity from non-invasive magneto- or electroencephalographic (MEG/EEG) recordings
at the sensor level is hampered by volume conduction leading to highly correlated sensor
signals. Even if effective connectivity were detected at the sensor level, spatial information
on the underlying cortical networks would be missing.
Here, we propose to use a source reconstruction technique, beamforming, to localize the
dominant sources of scalp signals and reconstruct the time-course of electrical source
activity at these locations (virtual electrodes). Once the source time-courses are
reconstructed, it is possible to apply transfer entropy [1,2] – a nonlinear, model-free
estimator of effective connectivity – to reveal the causal interactions in the observed
network.
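For orientation, here is a minimal plug-in (histogram) estimator of Schreiber's transfer entropy with history length one; the study itself uses a Kraskov-type estimator [2], so this sketch only illustrates the quantity being computed.

import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    # TE(X -> Y) = sum p(y+, y, x) log2 [ p(y+ | y, x) / p(y+ | y) ]
    xd = np.digitize(x, np.histogram(x, bins)[1][1:-1])
    yd = np.digitize(y, np.histogram(y, bins)[1][1:-1])
    yf, yp, xp = yd[1:], yd[:-1], xd[:-1]
    n = len(yf)
    c_abc, c_bc = Counter(zip(yf, yp, xp)), Counter(zip(yp, xp))
    c_ab, c_b = Counter(zip(yf, yp)), Counter(yp)
    return sum((k / n) * np.log2((k / c_bc[(b, c)]) / (c_ab[(a, b)] / c_b[b]))
               for (a, b, c), k in c_abc.items())

rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)     # y is driven by the past of x
print("TE x->y %.3f bits, TE y->x %.3f bits"
      % (transfer_entropy(x, y), transfer_entropy(y, x)))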
We applied this approach to MEG data recorded during the “Mooney Faces” task: Subjects
were presented with a picture of a face degraded to black and white tones or a scrambled
version thereof for an interval of 200 ms. Subjects had to indicate via button press whether
they perceived a face or not. Stimulus presentation was followed by a prominent increase in
signal power in the higher gamma band (~60-120 Hz) in the interval from 100 to 350 ms.
Beamforming localized the main sources of this activity in a network of bilateral parieto-occipital,
occipito-temporal and frontal brain areas. Transfer entropy detected both changes in effective
connectivity between task and baseline and changes between the two types of stimuli.
References:
[1] Measuring Information Transfer. T. Schreiber, Phys. Rev. Lett. 2001
[2] Estimating Mutual Information. A. Kraskov et al., Phys. Rev. E 2004
W20
An activity-dependent connection strategy for creating biologically
inspired neural networks
Andreas Wolf*1, Andreas Herzog1, Bernd Michaelis1
1 Institute of Electronics, Signal Processing and Communications, Otto-von-Guericke
University, Magdeburg, Germany
* [email protected]
Simulations of biologically plausible neurons and networks are becoming increasingly complex
in neurobiological research. Often, however, a simple network architecture, possibly with
multiple layers, is prescribed. In the simplest case the neurons are fully connected, i.e. every
simulated neuron has a connection to every other neuron in the cell culture.
A more complex example is given by Kube et al. (2008), who use a small-world architecture
with randomly placed local connections and a few long-distance global connections. In most
simulations, changes in the network during the simulation play no important role, so the
behaviour of a neuron has no influence on the network architecture.
Here we use an approach that generates networks depending on the activity of the neurons.
The main goal is that neurons which are more active than others form a larger number of
connections to other cells, in both directions (incoming and outgoing).
The simulation is based on the Izhikevich model, which describes excitatory and inhibitory
neurons as well as conductance-based synapses. Background activity in the form of
simulated thalamic input excites the neurons to spontaneous activity. During the simulation
the neurons begin, depending on their activity, to release molecules and to form a pioneer
axon. The emitted molecules diffuse through the cell culture and are captured by pioneer
axons of other cells. These axons can thus find and establish a pathway to the cell that
emitted the molecules.
Once a pioneer axon has connected to another cell, a new axon is generated, starting at the
rear part of the pioneer axon. These new axons grow out and start to capture molecules from
further cells. This dividing process is performed by every axon in order to find and connect to
a destination cell.
Several mechanisms control this growth process and the final number of connections, e.g.
the number of molecules emitted per cell and the lifetimes of the molecules and of axons that
have not yet established a connection. A toy sketch of this wiring rule is given below.
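The sketch compresses the rule to its core (no spatial diffusion, illustrative constants): active cells emit more molecules and extend axons more often, and growing axons preferentially connect to strong emitters, so activity should correlate with total degree.

import numpy as np

rng = np.random.default_rng(5)
N, steps = 50, 2000
activity = rng.gamma(2.0, 1.0, N)            # stand-in spontaneous firing rates
molecules = np.zeros(N)
adj = np.zeros((N, N), dtype=int)
for _ in range(steps):
    molecules = 0.95 * molecules + activity              # emission with decay ("lifetime")
    pre = rng.integers(N)                                # one growing axon per step ...
    if rng.random() < activity[pre] / activity.max():    # ... extends more often if active
        p = molecules.copy()
        p[pre] = 0.0                                     # it captures molecules of other cells
        post = rng.choice(N, p=p / p.sum())
        adj[pre, post] = 1
degree = adj.sum(axis=0) + adj.sum(axis=1)               # incoming plus outgoing
print("corr(activity, degree) = %.2f" % np.corrcoef(activity, degree)[0, 1])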
More complex mechanisms, such as the influence of the substrate or a repulsive effect of the
molecules on the searching axons, are not considered, because we do not attempt to simulate
an exact biochemical model of axonal guidance. Rather, we are interested in the effects of the
spontaneous activity of a neuron on the network architecture. In addition, we examine the
influence on the whole network of small groups of neurons that were established in the early
phase of the connection process and that fire synchronously. To compare the generated
networks with the structure of biological networks, a statistical analysis will be performed. This
comparison can also be applied to more technical connection-generation methods (see
Herzog et al. (2007)).
References:
Herzog, A.; Kube, K.; Michaelis, B.; de Lima, AD.; Voigt, T.: Displaced strategies optimize
connectivity in neocortical networks. Neurocomputing, 70:1121-1129, 2007.
Kube, K.; Herzog, A.; Michaelis, B.; AD. de Lima, Voigt. T.: Spike-timing-dependent plasticity
in small world networks. Neurocomputing, 71, 1694-1704, 2008.
W21
Computational neuroscience methods in human walking
behaviour
Mahdi Yousefi Azar Khanian*1
1 Qazvin Islamic Azad University, Qazvin, Iran
* [email protected]
The control of human walking was analysed by means of electromyographic (EMG) and
kinematic methods. Particular emphasis was placed on the walking system reaction to
unexpected optical disturbances and how stability is maintained. By measuring delay times,
phase changes and by correlating muscle activities with changes of movement we expect to
gain information on the strategies of stability maintenance during biped walking.
The purpose of this study was to compare muscle activation patterns and kinematics during
recumbent stepping and walking to determine if recumbent stepping has a similar motor
pattern as walking. We measured joint kinematics and electromyography in ten
neurologically intact humans walking on a treadmill at 0 and 50% body weight support
(BWS), and recumbent stepping using a commercially available exercise machine.
W22
Invariant object recognition with interacting winner-take-all
dynamics
Junmei Zhu*1, Christoph von der Malsburg1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
An important problem in neuroscience is object recognition invariant to transformations, such
as translation, rotation and scale. When an input image is generated by a stored object
through a transformation, the recognition task is to recover the object and the transformation
that best explain the input image. Various dynamic models have achieved considerable
success experimentally, but their behavior is difficult to analyze. To gain insights on the
recognition dynamics and the organization of stored objects, we aim to develop a model
system as an abstraction of invariant recognition.
Let each transformation variable stand for a transformation of the input image, and each
object variable for a stored object. Under the assumption that the image contains only one
object with one global transformation, invariant recognition can be achieved by finding the
winner-take-all solution on the product space of the two sets of variables: transformation and
object. However, the product of variables is not readily implemented biologically. We
therefore propose a system that has winner-take-all dynamics on single variables.
Our system consists of two interacting winner-take-all dynamics, one for each set of
variables (transformation and object identity). The winner-take-all dynamics are modeled by
Eigen's evolution equations. Within each set, the fitness terms are the similarity between
(patterns represented by) its variables and the linear combination of (patterns represented
by) variables in the other set. We show that this system is not guaranteed to be winner-take-all on the product space of the two sets of variables. In fact, any element in the
similarity matrix that is the maximum in its row and column is a stable fixed point of this
system. Grouping variables within each set may eliminate these local maxima, indicating a
possible role for the coarse-to-fine strategy in perception.
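A minimal sketch of the coupled dynamics, using an illustrative 3 x 3 similarity matrix rather than a trained system: two Eigen-type replicator equations, one over transformations and one over objects, each taking its fitness as the similarity-weighted support from the other set.

import numpy as np

S = np.array([[0.9, 0.2, 0.1],     # similarity S[t, o] between transformation t applied
              [0.3, 0.8, 0.2],     # to the input and stored object o (illustrative)
              [0.1, 0.3, 0.7]])
x = np.full(3, 1.0 / 3.0)          # transformation variables
y = np.full(3, 1.0 / 3.0)          # object variables
for _ in range(500):
    fx, fy = S @ y, S.T @ x                 # fitness terms from the other set
    x += 0.1 * x * (fx - x @ fx)            # replicator: grow relative to mean fitness
    y += 0.1 * y * (fy - y @ fy)
    x, y = x / x.sum(), y / y.sum()
# any entry that is maximal in its row and column is stable: from a uniform start the
# dynamics settles on (1, 1) even though S[0, 0] = 0.9 is the global maximum
print("winners:", x.argmax(), y.argmax(), np.round(x, 2), np.round(y, 2))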
Acknowledgements:
Supported by EU project SECO and the Hertie Foundation.
Information processing in neurons and networks
W23
Ephaptic interactions enhance temporal precision of CA1
pyramidal neurons during pattern activity
Costas Anastassiou*1, S.M. Montgomery2, M. Barahona3, G. Buzsaki2, C. Koch1
1 Division of Biology, California Institute of Technology, Pasadena CA, USA
2 Center for Molecular and Behavioral Neuroscience, Rutgers University,
Newark NJ, USA
3 Department of Bioengineering, Imperial College, London, England
* [email protected]
While our knowledge of the dynamics and biophysics of synaptic and gap-junction
communication has considerably increased over the last decades, the impact of non-synaptic electric field effects on neuronal signaling has been largely ignored. The local field
potential (LFP) provides experimental access to the spatiotemporal activity of afferent,
associational and local operations in a particular brain structure. Despite the fact that the
spatial and temporal characteristics of LFPs have been related to a spectrum of functions
such as memory and navigation, it is unclear whether such extraneuronal flow of currents has
functional implications. This hypothesis has recently been supported by the demonstration
that even weak externally applied electric fields exerted a significant effect on brain function
(Marshall et al. 2006).
To address the relevance of extracellular current flow on neuronal activity, we study the
effect of a spatially inhomogeneous extracellular field on the membrane potential (Vm) of
passive neurons. This requires that the spatial and temporal characteristics of the external
field, as well as the morphological characteristics of the neuron, be considered.
Numerical simulations with a reconstructed CA1 hippocampal pyramidal neuron illustrate
how these effects are reflected on real neurons and how their occurrence is a function of
location (soma vs. dendritic tree, proximal vs. distal sites, etc.). Based on the above
analysis, simple criteria that predict the impact of an inhomogeneous external field on Vm
are presented and shown to hold from unbranched cables to realistic neurons.
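The simplest such criterion is the textbook steady-state solution for a finite passive cable with sealed ends in a uniform field along its axis: polarization is maximal and of opposite sign at the two ends, with magnitude set by the electrotonic length. A sketch with illustrative numbers:

import numpy as np

def cable_polarization(E, L, lam, x):
    # steady state: Vm(x) = E * lam * sinh((x - L/2)/lam) / cosh(L/(2*lam))
    return E * lam * np.sinh((x - L / 2) / lam) / np.cosh(L / (2 * lam))

L, lam = 600e-6, 300e-6          # cable length and space constant (m), illustrative
E = 5.0                          # uniform extracellular field (V/m)
x = np.linspace(0.0, L, 7)
print(np.round(cable_polarization(E, L, lam, x) * 1e3, 3), "mV")   # ~ +/- 1 mV at the ends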
Finally, we investigate the electrostatic effect of endogenous hippocampal LFP rhythms
(theta, sharp waves) in rats on the Vm of the morphologically detailed neuron. We find that
theta induces small deviations (|Vm – Vrest| < 0.5 mV depending on the location) while
sharp waves can result in deviations up to 1.5 mV. In vitro data as well as numerical
simulations with CA1 pyramidal neurons with active membranes show that such deviations
in Vm can readily alter the rate code of these neurons. Based on these observations, we
discuss implications of such Vm-entrainment to the local LFP for single neuron computation
and population activity in the hippocampus.
W24
Characterisation of Shepherd’s crook neurons in the chicken optic
tectum
Oguzhan Angay2,1, Katharina Kaiser2,1, Stefan Weigel*2,1, Harald Luksch2,1
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Department of Animal Sciences, Technical University Munich, Munich, Germany
* [email protected]
The midbrain is involved in the processing of visual stimuli in vertebrates. Here, all available
sensory modalities are integrated, relayed to further processing areas, and appropriate
premotor signals are generated.
Our group is interested in the architecture and function of midbrain neuronal networks, in
particular in the signal processing between different layers of the optic tectum (OT) and the
nuclei isthmi (NI). The latter consists of three subdivisions: the nucleus isthmi pars
parvocellularis (IPC), the n.i. pars magnocellularis (IMC) and the n.i. pars semilunaris (SLU).
The three nuclei are heavily interconnected and have reciprocal connectivity with the optic
tectum, thus forming exclusive feedback loops of a complex architecture. These feedback loops probably play a major role in object recognition, and they help to discriminate between
multiple objects by a "winner-takes-all" principle.
Visual information is conveyed retinotopically from retinal ganglion cells to the upper layers
of the optic tectum. Here, retinal afferents contact a prominent neuron type – the Shepherd's
Crook Neurons (SCN). These neurons possess a bipolar dendritic field in the upper and the
lower layers of the OT and project exclusively to the NI. It is so far unknown to what extent
the SCNs also integrate input from deeper layers of the optic tectum (where, e.g., auditory
input enters the OT) and/or the rest of the midbrain.
It also remains unknown whether SCNs comprise one or several subtypes with respect to
their projection pattern or their physiological properties. This information is however critical
for adequate modelling of the network. While immunohistochemistry against the transcription
factors Brn3A/Pax7 and the Ca2+/calmodulin dependent protein kinase 2 (CamK2) indicate
that SCNs might consist of only one type, these data have to be complemented by additional
neuroanatomical and electrophysiological investigations. Hence, we are characterizing their
properties by patch-clamp recordings and visualizing their anatomy by intracellular staining.
In addition to these structural issues, we explore spatiotemporal signal processing in the
isthmotectal feedback circuit. To visualize spatial and temporal activity patterns either in
single prominent neurons or in and between particular midbrain areas, we use optical
imaging techniques with voltage-sensitive dyes. These dyes are either applied to single
neurons via retrograde or intracellular labelling or by bath incubation. The circuit is then
activated by electrical stimulation of afferent layers of the OT which mimics the input from
retinal ganglion cells. We will record signal integration and signal spreading in single
neurons as well as signal propagation between midbrain areas to analyse the exact spatial
and temporal activity patterns in the isthmotectal feedback loop. Based on these data,
modelling will allow us to assess the validity of several hypotheses put forward for this
circuit.
W25
Multiplicative changes in area MST neurons' responses of primate
visual cortex by spatial attention
Sonia Baloni*2,1, Daniel Kaping2, Stefan Treue2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 German Primate Center, Göttingen, Germany
* [email protected]
Spatial attention has been shown to create multiplicative enhancements of orientation tuning
curves in area V4 and of direction tuning curves of area MT of primate visual cortex. We
similarly aimed to study attentional effects on the tuning profiles of MST neurons, which are
tuned for spiral motion space (SMS) directions. The SMS, introduced by Graziano et al.
(1994), is a circular dimension that considers expansion, clockwise rotation, contraction and
counterclockwise rotation as cardinal directions in this space, with a continuum of stimuli in
between.
We recorded SMS tuning curves from 123 MST neurons of two macaque monkeys. The
monkeys were trained to attend to a target stimulus, a SMS random dot pattern (RDP) in the
presence of another RDP (distractor). One of the RDP was placed in the receptive field (RF)
while the other was placed outside, in the opposite hemifield. In a given trial the two RDPs
moved in the same direction, picked randomly from one of twelve SMS directions and either
the stimulus inside (attention-in condition) or outside (attention-out condition) the RF was the
designated target. The monkeys had to report a speed change of the target stimulus while
ignoring all other changes. The tuning profile of individual MST neurons can be well fitted by
a Gauss function, allowing a quantitative comparison of neuronal responses to the stimulus
inside the RF, when it is behaviorally relevant (attended target stimulus) or irrelevant
(unattended distractor).
We found that directing spatial attention into the RF enhances the response of MST neurons
to optimized SMS stimuli multiplicatively (average +30%). The robust responses of MST neurons to
SMS stimuli away from the preferred direction can be used to test between two alternative
attentional modulation models. In the activity gain model, attention multiplicatively modulates
the overall responses of neurons. Because the given activity level evoked by a particular
stimulus is modulated independent of the neuron’s baseline firing rate, the given activity is
multiplied by a fixed factor. An alternative to the activity gain model is the response gain
model in which attention only modulates the additional activity evoked by a given stimulus
leaving a neuron’s “baseline” response unmodulated.
We modified the Gaussian tuning function by holding all parameters to the values obtained
for the attention-out condition while introducing a single attentional multiplication factor,
either multiplying the entire function (activity gain) or all parameters but the baseline
(response gain). The fits are all well correlated with the data and because the two functions
have a similar form they are highly correlated. A partial correlation between the fitted activity
and response gain data revealed that many more cells were better fit by the activity gain
model.
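Because both models are linear in the single gain factor, the comparison can be sketched in closed form; the data below are synthetic (generated with an activity gain), not the recorded responses.

import numpy as np

def gauss(theta, b, a, mu, sig):
    return b + a * np.exp(-0.5 * ((theta - mu) / sig) ** 2)

theta = np.linspace(0.0, 330.0, 12)            # 12 spiral-space directions (deg)
p_out = (5.0, 20.0, 150.0, 40.0)               # attention-out fit: baseline, amplitude, mean, width
rng = np.random.default_rng(6)
r_in = 1.3 * gauss(theta, *p_out) + rng.normal(0.0, 1.0, 12)   # synthetic attention-in data

f = gauss(theta, *p_out)                       # attention-out tuning, all parameters held fixed
g_act = (f @ r_in) / (f @ f)                   # activity gain: r = g * f
fa = f - p_out[0]
g_resp = (fa @ (r_in - p_out[0])) / (fa @ fa)  # response gain: r = b + g * (f - b)
sse_act = np.sum((g_act * f - r_in) ** 2)
sse_resp = np.sum((p_out[0] + g_resp * fa - r_in) ** 2)
print("activity-gain SSE %.1f vs response-gain SSE %.1f" % (sse_act, sse_resp))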
In summary, responses in MST are multiplicatively enhanced when spatial attention is
directed into the RF. This effect is best accounted for by activity gain models where the
overall response of the neuron is modulated by a constant factor.
W26
Dynamical origin of the “magical number” in working memory
Christian Bick*1, Mikhail Rabinovich1
1 Institute for Nonlinear Science, Department of Mathematics, University of California, San
Diego, USA
* [email protected]
Working memory (WM), the ability to hold several items in mind over a short period of time,
and attention, that selects currently relevant stimuli from the environment, are essential
cognitive functions. These two brain functions are strongly interconnected. On an anatomical
level, overlapping neural substrates of the two networks have been reported [3]. On a
functional level attention acts as a “gatekeeper” for working memory so that the finite
capacity is not overloaded (bottom-up). On the other hand, working memory can effectively
guide attention for example in a search task (top-down). Recently, it has been reported that
optimal working memory performance can be attained through optimal suppression of
irrelevant stimuli [8].
Based on the experimental findings, we propose a model of attention-working memory
tandem dynamics [6]. Attention selects from available information according to the
winnerless competition principle through inhibition and transfers the selected items
sequentially to working memory. Feedback from working memory can influence the
competition and therefore guide attention, i.e. it represents the top-down interaction between
working memory and attention. Mathematically, these dynamics are described by stable
heteroclinic channels in competitive networks as introduced in [1, 7, 5].
Analytical results that were derived for this model [2] establish an increasing relationship
between the memory capacity and the coupling strengths in the corresponding attention-WM
network. Due to the fact that parameters in neurobiological systems are bounded, this gives
a purely dynamical bound for the number of items that can be robustly stored in the attention-working memory system, which is, under reasonable assumptions, close to the “magical
number seven” [4], a well-established bound for WM capacity.
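The sequential dynamics underlying this bound can be sketched with a small generalized Lotka-Volterra network whose asymmetric inhibition (illustrative values) produces a stable heteroclinic channel, i.e. winnerless, sequential switching among items.

import numpy as np

rng = np.random.default_rng(7)
N, dt = 5, 0.02
rho = np.full((N, N), 1.5)                 # strong mutual inhibition ...
np.fill_diagonal(rho, 1.0)
for i in range(N):
    rho[(i + 1) % N, i] = 0.5              # ... except each winner disinhibits its successor
a = np.full(N, 1e-2)
a[0] = 1.0
order = []
for _ in range(60000):
    a += dt * a * (1.0 - rho @ a) + 1e-8 * rng.random(N)   # Lotka-Volterra plus tiny noise
    w = int(a.argmax())
    if not order or order[-1] != w:
        order.append(w)                     # record the sequence of transient winners
print("switching sequence:", order[:12])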
References:
[1] V. S. Afraimovich, V. P. Zhigulin, and M. I. Rabinovich. On the origin of reproducible
sequential activity in neural circuits. Chaos, 14(4):1123–1129, 2004.
[2] Christian Bick and Mikhail Rabinovich. On the occurrence of stable heteroclinic channels
in random networks. Submitted to Dynamical Systems.
[3] Kevin S. LaBar, Darren R. Gitelman, Todd B. Parrish, and M. Marsel Mesulam.
Neuroanatomic overlap of working memory and spatial attention networks: A functional
mri comparison within subjects. NeuroImage, 10(6):695–704, 1999.
[4] George Miller. The magical number seven, plus or minus two: Some limits on our
capacity for processing information. The Psychological Review, 63:81–97, 1956.
[5] M. I. Rabinovich, R. Huerta, P. Varona, and V. S. Afraimovich. Transient cognitive
dynamics, metastability, and decision making. PLoS Comput Biol, 4(5):e1000072, 2008.
[6] Mikhail Rabinovich and Christian Bick. Dynamical origin of the “magical number” in
working memory. Submitted to Physical Review Letters.
[7] Mikhail Rabinovich, Ramon Huerta, and Gilles Laurent. Transient Dynamics for Neural
Processing. Science, 321(5885):48–50, 2008.
[8] Theodore P. Zanto and Adam Gazzaley. Neural Suppression of Irrelevant Information
Underlies Optimal Working Memory Performance. J. Neurosci., 29(10):3059–3066,
2009.
W27
A novel measure of model error for conductance-based neuron
models
Ted Brookings*1, Eve Marder1
1 Biology Department, Brandeis University, USA
* [email protected]
Conductance-based neuronal models typically have several unknown parameters that are
critical to their functional properties. Such unknown parameters typically include the density
of different species of ion channels in each model compartment, but may also include
membrane properties (such as capacitance) parameters of ion channel kinetics (e.g. voltage
of half-activation) or even geometric properties of the model. These parameters are often
determined by numerically minimizing a measure of error between the model and a neuron
that the model is intended to represent; the error typically being quantified by a combination
of physiologically-relevant functional properties, such as spike rate, resting membrane
potential, input impedance, etc. Quantifications of model error are problem-specific, and
must be determined and specified by the modeler.
We describe a novel measure of model error for multi-compartment models with a linear
geometry. For a given set of model parameters, our algorithm takes as an input a given
desired somatic voltage trace (such as one measured intracellularly in a real neuron) and
computes the current that must be injected into the distal-most compartment of the model in
order to precisely reproduce the somatic voltage trace. This computed distal current
represents a time-varying error signal because a perfect model would require zero injected
current. The algorithm is novel in that it does not require measurement of voltage at points
other than the soma. This measure of model error can be used to fit model parameters to
data, as well as to investigate the sensitivity of the model to changes in different parameters.
We describe the application of this error measure to a variety of models and data sets.
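For the simplest linear geometry, a passive two-compartment pair, the back-computation can be written out directly. A sketch with illustrative passive parameters and an arbitrary target trace (a perfect model would return zero current):

import numpy as np

dt = 1e-4
C, gL, EL, gc = 100e-12, 5e-9, -65e-3, 10e-9    # farad, siemens, volt (illustrative)
t = np.arange(0.0, 0.5, dt)
Vs = EL + 5e-3 * np.sin(2 * np.pi * 4.0 * t)    # target somatic voltage trace
dVs = np.gradient(Vs, dt)
# soma:     C dVs/dt = -gL (Vs - EL) + gc (Vd - Vs)   ->  solve for the dendritic voltage
Vd = Vs + (C * dVs + gL * (Vs - EL)) / gc
dVd = np.gradient(Vd, dt)
# dendrite: C dVd/dt = -gL (Vd - EL) + gc (Vs - Vd) + I_err  ->  the error current
I_err = C * dVd + gL * (Vd - EL) - gc * (Vs - Vd)
print("peak distal error current: %.1f pA" % (np.abs(I_err).max() * 1e12))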
W28
Neuronal copying of spike pattern generators
Daniel Bush*2,1, Chrisantha Fernando1, Phil Husbands2
1 Collegium Budapest, Budapest, Hungary
2 University of Sussex, Brighton, UK
* [email protected]
The neuronal replicator hypothesis proposes that units of selection exist in the human brain
and can themselves replicate and undergo natural selection [1]. This process can explain
performance in search tasks that require representational re-description, such as insight
problems that cannot be solved by existing reinforcement learning algorithms [2]. We have
previously proposed two mechanisms by which a process of neuronal replication might
operate, allowing either the copying of neuronal topology by causal inference between layers
of neurons or the copying of binary activity vectors in systems of bi-stable spiking neurons.
Here, we examine a third possibility: that the neuronal machinery capable of producing high
fidelity spatio-temporal spike patterns can be copied between cortical regions.
Our model is comprised of a spiking, feed-forward neural network with axonal delays that
implements spike-timing dependent plasticity (STDP) and synaptic scaling. Initially, input
spike patterns to the first layer of neurons are followed – after some short delay - by subthreshold depolarization in the output layer. If a sufficient richness of axonal delays exists in
this feed-forward mapping then the desired transformation can be achieved, as synaptic
weights are selectively potentiated according to the correspondence between their axonal
delays and the desired input / output firing latencies. We subsequently demonstrate that a
wide range of input / output spike pattern transformations, including the replication / identity
function, can be learned with only a short period of supervised training. Interestingly,
following this initial learning period, synchronous stimulation of the intermediate layer can
also produce the desired output spike pattern with high fidelity.
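A toy sketch of the delay-selection mechanism, using an exponential STDP window and synaptic scaling with illustrative constants: among many candidate axonal delays, repeated pairing potentiates the synapse whose delay matches the target input/output latency.

import numpy as np

delays = np.arange(1.0, 21.0)            # candidate axonal delays (ms)
w = np.full(len(delays), 0.5)
t_in, t_out, tau, lr = 0.0, 7.0, 3.0, 0.05
for _ in range(200):                      # repeated supervised pairings
    dt_pp = t_out - (t_in + delays)       # post minus presynaptic arrival time
    dw = np.where(dt_pp >= 0, np.exp(-dt_pp / tau), -np.exp(dt_pp / tau))   # STDP window
    w = np.clip(w + lr * dw, 0.0, 1.0)
    w *= 0.5 * len(w) / w.sum()           # synaptic scaling keeps total drive constant
print("selected delay: %.0f ms (target latency 7 ms)" % delays[w.argmax()])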
Temporal coding offers numerous advantages for processing in spiking neural networks [3],
and our model describes a fundamental operation that is likely to be essential for a diverse
range of cortical functions. It may be a particularly important component of symbolic
neuronal processing [4], as it allows the representation of multiple distinct individual copies
of an informational unit. The respective forms of neural stimulation that are utilised in this
research – namely, spatio-temporal input and output patterns that repeat cyclically, and
synchronous stimulation at low frequencies – also correspond with well-documented cortical
activity regimes that appear during waking and sleep, and clear parallels can be drawn
between this work and the theory of polychronous groups [5]. In the year of Darwin’s
bicentenary, this research aims to provide the foundations for extending the framework of
selectionism to the realm of the human brain.
References:
[1] Fernando C, Karishma KK and Szathmáry E. Copying and Evolution of Neuronal
Topology. PLoS ONE 3 (11): e3775 (2008)
[2] Sternberg RJ and Davidson JE (eds). The Nature of Insight. MIT Press: Cambridge MA
(1995)
[3] Van Rullen R and Thorpe SJ. Surfing a spike wave down the ventral stream. Vision
Research 42: 2593-2615 (2002)
[4] Marcus GF. The Algebraic Mind: Integrating Connectionism and Cognitive Science MIT
Press: Cambridge MA (2001)
[5] Izhikevich EM. Polychronization: Computation with Spikes. Neural Computation 18: 245-282 (2006)
W29
Electrophysiological properties of interneurons recorded in
human brain slices
Stefan Hefft*2, Rüdiger Köhling3, Ad Aertsen4,1
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Department of Neurosurgery-Cellular Neurophysiology, Universitäts-Klinikum Freiburg,
Freiburg, Germany
3 Institute of Physiology, University of Rostock, Rostock, Germany
4 Neurobiology and Biophysics, Albert-Ludwigs-University Freiburg, Germany
* [email protected]
Fast-spiking interneurons are thought to be key-players in the generation of high frequency
oscillations in neuronal networks. Such oscillations occur during normal cognitive processes
and with even higher frequency during abnormal hyper synchronisation in epileptogenic
zones in the human brain. Although a huge amount of data about both cellular properties and
synaptic mechanisms has been collected from experiments performed in animal brain
slices, very little is known about the electrophysiological properties of human interneurons.
Therefore we used human brain tissue resected from neurosurgical patients in order to
investigate the electrophysiological properties of fast spiking basket cells (220 ± 30 Hz) in
comparison to regular spiking interneurons (110 ± 26 Hz) at 32-34°C in submerged human
cortical slices. All cells were filled with biocytin for post-hoc morphological analysis combined
with immunocytochemistry. A subset of fast spiking cells revealed to be Parvalbumin
positive. In agreement with the differences in firing rate, fast spiking basket cells showed a
fast half-duration (0.43 ± 0.120 ms), slope of rise (432.86 ± 77.94 V/s) and decay (342.56 ±
55.07 ms) of single action potentials. There was no significant difference in AP-kinetics
between fast-spiking and regular spiking interneurons. However the input resistance of fast
spiking interneurons (91.44 ± 15 MΩ) was about 4-fold lower compared to regular spiking
interneurons (415.67 ± 62.06 MΩ). In accordance with the higher input resistance, the
instantaneous frequency calculated from the first 10 intervals within a burst of action
potentials evoked by a 100 ms current injection declined by 40% from 300 ± 25 Hz to
179 ± 45 Hz in regular spiking interneurons but only by 6% from 324 ± 60 Hz to 304 ± 60 Hz
in fast spiking basket cells. Interestingly, the fast spiking basket cells showed a much higher
frequency of synaptic input and could spontaneously generate a nested gamma-theta
spiking pattern triggered during periods of increased synaptic input. Altogether, these data
point to the pivotal role of GABAergic basket cells in the generation of network oscillations in
the human cortex.
W30
Temporal precision of speech coded into nerve-action potentials
Michael Isik1, Marek Rudnicki2, Huan Wang1, Marcus Holmberg1, Sonja Karg1, Michele
Nicoletti1, Werner Hemmert*1,3
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Fakultät für Elektrotechnik und Informationstechnik, Technische Universität Munich,
Munich, Germany
3 Institute for Medical Engineering, Technische Universität Munich, Munich, Germany
* [email protected]
The auditory pathway is an excellent system to study temporal aspects of neuronal
processing. Unlike other sensory systems, temporal cues cover an extremely wide range of
information: for sound localization, interaural time differences with a precision of tens of
microseconds are extracted. Phase-locking of auditory nerve responses, which is related to
the coding of the temporal fine structure, occurs from the lowest audible frequencies
probably up to 3 kHz in humans. Amplitude modulations in speech signals are processed in
the ms to tens of ms range. And finally, the energy of spoken speech itself is modulated with
a frequency of about 4 Hz, corresponding to a syllable frequency in the order of few
hundreds of ms. To extract temporal cues at all timescales, it is important to understand how
temporal information is coded.
We investigate temporal coding of speech signals using the methods of information theory
and a model of the human inner ear. The model is based on a traveling-wave model, a
nonlinear compression stage which mimics the function of the “cochlear amplifier”, a model
of the sensory cells, the afferent synapse and spike generation (Sumner), which we
extended to replicate “offset adaptation” (Zhang). We used the action potentials of the
auditory nerve to drive Hodgkin-Huxley-type point models of various neurons in the cochlear
nucleus. In this investigation we only report data from onset neurons, which exhibit
extraordinary fast membrane time-constants below 1 ms. Onset neurons are known for their
precise temporal processing. They achieve precisely timed action potentials by coincidence
detection: they fire only if at least 10% of the auditory nerve fibers which innervate them fire
synchronously. With information theory, we analyzed the transmitted information rate coded
in neural spike trains of modeled neurons in the cochlear nucleus for vowels. We found that
onset neurons are able to code temporal information with sub-millisecond precision (<0.02
ms) across a wide range of characteristic frequencies. Temporal information is coded by
precisely timed spikes per se, not only temporal fine structure. Moreover, the major portion
of information (60%) is coded with a temporal precision from 0.2 to 4 ms. Enhancing the
temporal resolution from 10 ms to 3 ms and from 3 ms to 0.3 ms is expected to increase the
transmitted information by approximately twofold and 2.5 fold, respectively.
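The coincidence-detection rule lends itself to a compact sketch (fiber count, jitter and window width are illustrative): the onset unit fires once at least 10% of its inputs arrive within a short window, and its output jitter falls well below the input jitter.

import numpy as np

rng = np.random.default_rng(8)
n_fibers, n_trials, jitter = 60, 200, 0.4      # inputs, trials, input jitter (ms)
out_times = []
for _ in range(n_trials):
    s = 10.0 + jitter * rng.standard_normal(n_fibers)   # jittered input spike times (ms)
    s = np.sort(s[rng.random(n_fibers) < 0.7])          # each fiber fires with p = 0.7
    k = max(1, int(0.1 * n_fibers))                     # >= 10% coincident inputs required
    for i in range(len(s) - k + 1):
        if s[i + k - 1] - s[i] <= 0.2:                  # k spikes within a 0.2 ms window
            out_times.append(s[i + k - 1])              # output spike time
            break
print("output jitter %.3f ms vs input jitter %.1f ms" % (np.std(out_times), jitter))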
In summary, our results provide quantitative insight into temporal processing strategies of
neuronal speech processing. We conclude that coding of information in the time domain
might be essential to complement the rate-place code, especially in adverse acoustical
environments.
Acknowledgements:
Supported within the Munich Bernstein Center for Computational Neuroscience by the
German Federal Ministry of Education and Research (reference numbers 01GQ0441 and
01GQ0443).
W31
Computational modeling of reduced excitability in the dentate
gyrus of betaIV-spectrin mutant mice
Peter Jedlicka*2, Raphael Winkels2, Felix K Weise2, Christian Schultz1, Thomas Deller2,
Stephan W Schwarzacher2
1 Institute of Anatomy and Cell Biology, Justus Liebig University, Giessen, Germany
2 NeuroScience Center, Clinical Neuroanatomy (Anatomy I), Goethe University, Frankfurt,
Germany
* [email protected]
The submembrane cytoskeletal meshwork of the axon contains the scaffolding protein
betaIV-spectrin. It provides mechanical support for the axon and anchors membrane
proteins. Quivering (qv3j) mice lack functional betaIV-spectrin and have reduced voltage-gated sodium channel (VGSC) immunoreactivity at the axon initial segment and nodes of
Ranvier. Because VGSCs are critically involved in action potential generation and
conduction, we hypothesized that qv3j mice should also show functional deficits at the
network level. To test this hypothesis, we investigated granule cell function in the dentate
gyrus of anesthetized qv3j mice after electrical stimulation of the perforant path in vivo. This
revealed an impaired input-output (IO) relationship between stimulus intensity and granule
cell population spikes and an enhanced paired-pulse inhibition (PPI) of population spikes,
indicating a reduced ability of granule cells to generate action potentials and decreased
network excitability. In contrast, the IO curve for evoked field excitatory postsynaptic
potentials (fEPSPs) and paired-pulse facilitation of fEPSPs were unchanged, suggesting
normal excitatory synaptic transmission at perforant path-granule cell synapses in qv3j
mutants.
To better understand the influence of betaIV-spectrin and VGSC density changes on dentate
gyrus network activity, we employed a computational modeling approach. We used a
recently developed and highly detailed computational model of the dentate gyrus network
(Santhakumar et al., J Neurophysiol 93:437–453, 2005). The network model is based on
realistic morphological and electrophysiological data and consists of perforant path inputs
and connections of granule, mossy, basket and hilar cells. The role of VGSCs in network
excitability was analyzed by systematically varying their densities in axosomatic
compartments. This in silico approach confirmed that the loss of VGSCs is sufficient to
explain the electrophysiological changes observed in qv3j mice. Computer simulations of the
IO and PPI test indicated that in the dentate circuit with altered VGSCs, network excitability
decreases owing to impaired spike-generator properties of granule cells and a subsequent relative increase of GABAergic inhibitory control over granule cell firing.
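The in silico manipulation can be illustrated with a minimal single-compartment sketch using NEURON's generic built-in hh mechanism (not the detailed granule cell model of Santhakumar et al.; conductance scaling factors and stimulus parameters are illustrative only):

```python
# Minimal sketch: scale the axosomatic Na+ conductance and observe the
# spike output of a single model cell. Uses NEURON's generic 'hh' channels,
# not the detailed dentate granule cell model; all numbers are illustrative.
from neuron import h
h.load_file("stdrun.hoc")

def spike_count(gna_scale, amp=0.2):
    soma = h.Section(name="soma")
    soma.L = soma.diam = 20.0
    soma.insert("hh")
    for seg in soma:
        seg.hh.gnabar = 0.12 * gna_scale   # mimic reduced VGSC density
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 5.0, 200.0, amp  # nA
    apc = h.APCount(soma(0.5))
    h.finitialize(-65.0)
    h.continuerun(210.0)
    return int(apc.n)

for scale in (1.0, 0.6, 0.3):
    print(f"gNa scale {scale:.1f}: {spike_count(scale)} spikes")
```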
Taken together, our in vivo and in silico data demonstrate that the destabilization of VGSC
clustering in qv3j mice leads to a reduced spike-generating ability of granule cells and
considerably decreased network excitability in the dentate circuit. This provides the first
evidence that betaIV-spectrin is required for normal granule cell firing and for physiological
levels of network excitability in the mouse dentate gyrus in vivo.
W32
The evolutionary emergence of neural organization in a hydra-like
animat
Ben Jones*2, Yaochu Jin1, Bernhard Sendhoff 1, Xin Yao2
1 Honda Research Institute Europe GmbH, Offenbach, Germany
2 University of Birmingham, Birmingham, UK
* [email protected]
Neural systems have a phylogenetic and ontogenetic history which we can exploit to better
understand their structure and organization. In order to reflect this evolutionary influence in
our analysis, we have to break down the overall functional benefit (from an evolutionary
perspective within a certain niche) of a neural system into properties which are more
constructive and which lead directly to constraints of a developing neural system. We
therefore take the stance that all advanced neural organization can be traced back to a
common ancestor from which major evolutionary transitions provided innovation and
ultimately, survivability. In the literature, this organism is often considered to be a hydra-like
organism with a radially symmetric body plan and a diffuse nerve net, since an actual
freshwater hydra is phylogenetically the simplest biological organism having such features.
For this reason, we also adopt the freshwater hydra as a model organism.
Our objective for this research has been to understand the organizational principles behind
wiring brain networks as a foundation of natural intelligence and our guiding hypothesis has
been that neural architecture has likely evolved to maximize the efficiency of information
processing. The simulation environment which we have devised is based on a three
dimensional cylindrical animat. It further adopts a network of integrate and fire spiking
neurons simulated by the Neural Simulation Toolkit (NEST) which serves to provide
rudimentary 'wobbling' movements.
The architecture of this network (the neuron locations) is evolved to minimize energy loss (or, equivalently, maximize its conservation) and to maximize functional advantage, which here means catching food particles falling from the top of the environment. The animat both expends energy through the spiking network and gains energy whenever a food particle is caught (a unit of energy is expended whenever a neuron spikes, and the magnitude of this loss is proportional to the connection length). The task is therefore essentially a trade-off between energy loss and energy gain.
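Schematically, the selection criterion can be written as a net energy balance (a hedged reconstruction; the actual cost constants and the evolutionary algorithm are not specified here):

```python
import numpy as np

def fitness(spike_counts, conn_lengths, food_caught,
            e_food=1.0, e_spike=0.01):
    """Net energy balance of the animat over one evaluation episode.

    spike_counts[i] -- number of spikes fired by neuron i
    conn_lengths[i] -- mean length of neuron i's outgoing connections
    food_caught     -- number of food particles caught

    Energy is spent per spike, weighted by connection length, and gained
    per food particle (the constants are illustrative assumptions).
    """
    spike_counts = np.asarray(spike_counts, dtype=float)
    conn_lengths = np.asarray(conn_lengths, dtype=float)
    cost = e_spike * np.sum(spike_counts * conn_lengths)
    return e_food * food_caught - cost

# A compact wiring (short connections) beats a sprawling one that
# catches the same amount of food:
print(fitness([50, 80, 30], [0.2, 0.1, 0.3], food_caught=5))
print(fitness([50, 80, 30], [2.0, 1.5, 3.0], food_caught=5))
```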
Over a process of simulated evolution, we observe that the neural architecture emerges (i) to afford maximal functional benefit (that of obtaining food particles) and (ii) with a minimalistic structure, in which motor neurons belonging to the nerve net arrange themselves proximal to the sensory neurons located around the head of the
animat. This result firstly shows how the efficiency of information processing is directly
related to neural architecture: closely connected neurons expend less energy as well as
providing functional advantage. Moreover, this suggests that evolution can discover efficient
information processing through neural architecture adaptation. Secondly, lifetime architectural perturbations of the neurons, which we further introduce to reflect more closely the continual movement of neural cells in real hydra, are seen to increase the prevalence of
this efficiency-promoting structure.
The latter result indicates that a system can become robust to inherent lifetime plasticity by
essentially strengthening the feature which promotes its survival. Such robustness is an
emergent property and comes about entirely as a by-product of evolution.
W33
Simulation of large-scale neuron networks and its application to a
cortical column in sensory cortex
Stefan Lang*1, Marcel Oberlaender2, Peter Bastian1, Bert Sakmann2
1 Interdisciplinary Center for Scientific Computing, University of Heidelberg, Heidelberg,
Germany
2 Max-Planck Institute of Neurobiology, Munich, Germany
* [email protected]
A fundamental challenge in neuroscience is to develop a mechanistic understanding of
how the brain processes sensory information about its environment and how this can be
related to behavior. Recently available methods, such as high-speed cameras, in vivo
physiology, and mosaic/optical-sectioning microscopy, make it possible to relate behavioral tasks to
anatomically and functionally well defined brain regions.
Specifically, the information related to the deflection of a single facial whisker on the snout of
rodents (e.g. mice and rats) is processed by a network of approximately 15000 neurons (in
rat), organized within a so-called cortical column. The electrophysiological output from this
network is sufficient to trigger simple behaviors, such as the crossing of a gap. By
reengineering the detailed 3D anatomy and connectivity of individual neurons, and neuron
populations, an average model network (a cortical column in silico) is established. Animating this network with in vivo measured input will help us understand the subcellular mechanisms of simple sensory-evoked behaviors.
In the presented work we introduce the simulation framework NeuroDUNE, which enables modeling and simulation of signal processing in such large-scale, fully compartmental neuron networks on a subcellular basis. The fundamental equation for signal propagation, the
well-known passive cable equation, is discretized in space with a second-order accurate finite-volume (FV) scheme. Time discretization includes implicit schemes such as backward Euler and Crank-Nicolson. Error estimation permits precise control of the simulation parameters. Modeling of active components supports Hodgkin-Huxley-type channels with an arbitrary number of gating particles. Furthermore, specific biophysically relevant ion
concentrations, e.g. Ca++, can be simulated on demand to capture advanced channel
behavior.
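For the passive part, each implicit time step amounts to solving a banded linear system. The sketch below performs one backward-Euler step for the dimensionless passive cable equation tau dV/dt = lambda^2 d2V/dx2 - V on a uniform 1D grid with sealed ends (a minimal illustration; NeuroDUNE's finite-volume discretization on full 3D morphologies is considerably more general):

```python
import numpy as np
from scipy.linalg import solve_banded

def backward_euler_cable_step(V, dt, dx, lam=1.0, tau=1.0):
    """One implicit step of tau dV/dt = lam^2 d2V/dx2 - V
    on a uniform grid with sealed (zero-flux) ends."""
    n = len(V)
    a = lam**2 * dt / (tau * dx**2)
    # Banded system: (1 + dt/tau) V_new - a * Laplacian(V_new) = V_old
    ab = np.zeros((3, n))
    ab[0, 1:] = -a                       # superdiagonal
    ab[2, :-1] = -a                      # subdiagonal
    ab[1, :] = 1.0 + dt / tau + 2 * a    # diagonal
    ab[1, 0] = ab[1, -1] = 1.0 + dt / tau + a  # sealed-end boundaries
    return solve_banded((1, 1), ab, V)

# Relaxation of a localized depolarization along the cable:
V = np.zeros(100)
V[50] = 10.0  # mV above rest
for _ in range(100):
    V = backward_euler_cable_step(V, dt=0.01, dx=0.1)
print(f"peak after relaxation: {V.max():.3f} mV")
```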
Generation of networks is based upon measured 3D neuron distributions and reconstructed
and then quantitatively classified neuronal cell types. These cell types are interconnected in three dimensions based upon measured anatomical and functional data. An example of such a quantitatively determined microcircuit within a cortical column is given by reconstructing the major thalamocortical pathway, which provides excitatory input to nearly
every cell in the cortical network.
The methods provided by NeuroDUNE will then make it possible to perform large-scale network simulations with a high degree of spatial and temporal detail. This will yield in silico experiments that can potentially shed light on subcellular mechanisms and constraints on the synapse distribution for large functional units within the brain.
W34
Analysis of the processing of noxious stimuli in patients with
major depression and controls
Lutz Leistritz*1, Jaroslav Ionov1, Thomas Weiss1, Karl-Jürgen Bär1, Wolfgang Miltner1
1 Friedrich Schiller University, Jena, Germany
* [email protected]
It has been found in clinical practice that depression is a common comorbidity of chronic
pain. Conversely, chronic pain represents a common additional symptom of depressed
patients. However, although a correlation between depression and pain has been accepted
in the last few years, the underlying physiological basis for the hypoalgesia of depressed patients exposed to experimentally induced pain remains unresolved. We
hypothesized that the processing in the so-called “pain matrix” might be different in these
patients.
The study investigates the processing of noxious stimuli and interactions within the pain
matrix in patients with major depression (MD) by means of frequency selective generalized
partial directed coherence (gPDC).
Sixteen patients with MD and 16 controls underwent stimulations on both the right and left
middle finger with moderately painful intracutaneous electrical stimuli. The connectivity
analysis was based on nine selected EEG electrodes: F3, Fz, F4, C3, Cz, C4, P3, Pz, and
P4 according to the extended International 10–20 System. These electrodes were chosen in
order to minimize the dimensionality, and because they are situated above important regions
of pain processing, attention, and depression (frontal, central, and parietal brain regions).
The relevant frequency range for the connectivity analysis based on the evoked potentials is
the delta-, theta- and the alpha-band (1 to 13 Hz, -700 to 0 ms pre-stimulus, 0 to 700 ms
post-stimulus). For a consolidated analysis, the mean gPDCs of these frequencies were
considered.
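A minimal sketch of the connectivity measure is given below: generalized PDC is computed from the coefficients of a fitted multivariate autoregressive model (fitted here with statsmodels' VAR as a stand-in for the authors' estimation procedure; the data are synthetic):

```python
import numpy as np
from statsmodels.tsa.api import VAR

def gpdc(data, order, freqs, fs):
    """Generalized partial directed coherence from a channels x time array.
    Returns (len(freqs), n, n); entry [f, i, j] quantifies j -> i."""
    model = VAR(data.T).fit(order)            # statsmodels expects time x channels
    A = model.coefs                           # (order, n, n) AR coefficients
    sigma = np.sqrt(np.diag(model.sigma_u))   # residual noise std per channel
    n = data.shape[0]
    out = np.zeros((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(order):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        W = np.abs(Af) / sigma[:, None]       # scale rows by 1/sigma_i
        out[fi] = W / np.sqrt((W**2).sum(axis=0, keepdims=True))
    return out

# Toy example: channel 0 drives channel 1 with a one-sample delay.
rng = np.random.default_rng(1)
x = rng.normal(size=(2, 5000))
x[1, 1:] += 0.8 * x[0, :-1]
P = gpdc(x, order=2, freqs=np.arange(1, 14), fs=250.0)
print("mean gPDC 0->1:", P[:, 1, 0].mean(), " 1->0:", P[:, 0, 1].mean())
```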
We could show stimulus-induced changes of the gPDC in a pre/post stimulus comparison
and changes in the connectivity pattern in the post stimulus condition. Furthermore, we could
identify network changes correlating to the side stimulated, as well as differences between
the controls and MD patients. In a pre/post stimulus comparison, one can observe that
patients with MD show fewer changes than the controls, and that stimulation on the right side results in more changes than stimulation on the left side.
post-stimulus condition, we can observe both group and side differences in the network
structure. There are side differences in the interaction direction between F3 and Fz with
respect to a stimulation at the right or left middle finger, respectively. Independent of which
side is stimulated, a connection from P3 to Cz is present only in the controls, whereas the connections from Pz to Cz and Pz to P4 could be identified only in patients with MD.
The gPDC shows networks that include both an attentional area, especially in the frontal
regions, as well as a nociceptive area, containing connections in the centroparietal region.
Differences between groups in the posterior region might be explained by differences in
attentional processes, in processes of stimulus evaluation, or by a temporoparietal
dysfunction in depressive patients.
W35
A network of electrically coupled cells in the cochlear nucleus
might allow for adaptive information processing
Andreas Neef*12
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Max-Planck Institute for Nonlinear Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Information about the temporal structure of the sound impinging on our ears is conveyed to
the brain solely by the pattern of action potentials of the auditory nerve fibres (ANFs). Each
of these ANFs receives input from only a single specialized ribbon synapse of a sensory cell,
the inner hair cell. The first stage in the auditory pathway at which ANFs converge is the
cochlear nucleus (CN). At this stage, a variety of different postsynaptic activity patterns is computed in different neuronal types. Examples are multipolar cells (chopper cells), which display periodic peri-stimulus time histograms (PSTHs), and onset neurons, which fire action
potentials only at sound onset.
Here I focus on the information processing in a particular type of CN neurons: the bushy
cells. Upon stimulation with a very loud sound, most bushy cells display firing patterns
similar to those of ANFs, with an initial firing rate of 500 to 1000 Hz and a subsequent rate
adaptation during the first 10 ms after sound onset followed by a rather constant firing rate of
100 to 300 Hz for the remainder of the sound duration. However, in a sizable subset of bushy
cells the instantaneous firing rate at sound onset is as high as 3 to 10 kHz. Consequently the
first spike latency (after sound onset) can be as low as 100 microseconds (see for example
Strenzke et al. 2009).
Here I use biophysically motivated modeling of the signaling chain from synaptic transmission at the inner hair cell, via action potential patterns in the ANFs, to the integration of postsynaptic signals in the bushy cells. Recent findings (Gomez-Nieto and
Rubio, 2009) suggest that bushy cells are electrically coupled by gap junctions. If such an electrical coupling is added to the model, a second level of information convergence is introduced. Analyzing the consequences for the information that the bushy cells’ action
potential patterns contain about the temporal structure of the stimulus, I suggest that the
coupling by gap junctions might increase both the onset response and the dynamic range.
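The effect of such coupling can be caricatured with two leaky integrate-and-fire cells joined by an ohmic gap-junction current (a toy sketch with illustrative parameters, not the biophysically detailed model described above):

```python
import numpy as np

# Two LIF 'bushy cells' coupled by a gap junction: the coupling current
# g_gap * (V_other - V_self) pools the two cells' depolarizations and
# synchronizes their onset spikes. All parameters are illustrative.
def simulate(g_gap, t_max=20.0, dt=0.01, tau=1.0, v_th=1.0):
    rng = np.random.default_rng(2)
    v = np.zeros(2)
    spikes = [[], []]
    for step in range(int(t_max / dt)):
        t = step * dt
        drive = 1.2 if t > 5.0 else 0.0            # 'sound onset' at 5 ms
        noise = rng.normal(0, 0.3, size=2) * np.sqrt(dt)
        coupling = g_gap * (v[::-1] - v)           # current from the partner
        v += dt / tau * (-v + drive + coupling) + noise
        for i in range(2):
            if v[i] >= v_th:
                v[i] = 0.0
                spikes[i].append(t)
    return spikes

for g in (0.0, 0.5):
    sp = simulate(g)
    first = [round(s[0], 2) if s else None for s in sp]
    print(f"g_gap={g}: first spikes after onset = {first} ms")
```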
W36
Attention modulates the phase coherence between macaque
visual areas V1 and V4
Simon Neitzel*1, Sunita Mandon1, Andreas K Kreiter1
1 Brain Research Institute, Department of Theoretical Neurobiology, University of Bremen,
Bremen, Germany
* [email protected]
In a complex visual scene, typically multiple objects are present in the large receptive fields
(RFs) of neurons in higher visual areas. Selective processing of the behaviourally relevant
object is therefore faced with the problem that often only a small part of the entire synaptic
input carries the relevant signals. It is therefore very remarkable that neurons are able to
respond to the attended object as if no others were present (Moran & Desimone, 1985;
Science, 229, 782-784). We therefore hypothesize that attention dynamically enhances the
effective connectivity of such a neuron with those afferents representing the attended object
and diminishes effective connectivity with others carrying irrelevant signals. Recently it has
been proposed that changes of neuronal synchronization in the gamma-frequency range
(40-100 Hz) may serve this purpose (Kreiter, 2006; Neural Networks, 19, 1443-1444).
To test this hypothesis, we recorded local field potentials (LFPs) with multiple intracortical
electrodes from visual areas V1 and V4 of a macaque monkey performing an attentionally
demanding shape-tracking task. Two objects going through a sequence of continuous
deformations of their shape were presented extrafoveally and simultaneously (see also
Taylor et al., 2005; Cerebral Cortex, 15, 1424-1437). Both objects had the size of a classical
V1 RF and were placed close to each other to fit into one V4 RF. The monkey had to
respond to the reoccurrence of the initial shape for the cued object. Because the shapes
were continuously morphing, and shape perception depends critically on attention (Rock &
Gutman, 1981; Journal of Experimental Psychology: Human Perception and Performance, 7,
275-285), the monkey had to attend to the cued stream continuously in order to recognize the
reappearance of the target shape. We used Morlet-wavelets to extract phase information
from the recorded LFPs for estimating the phase coherence as a measure of
synchronization between V1 and V4 recording sites.
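A minimal sketch of the measure: phases are extracted by convolution with a complex Morlet wavelet and combined into a phase-locking value (computed here over time within one synthetic trial for brevity; the study's estimate is based on the recorded LFP trials):

```python
import numpy as np

def morlet_phase(signal, fs, f0, n_cycles=6):
    """Phase of a signal at frequency f0 via a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * f0)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return np.angle(np.convolve(signal, wavelet, mode="same"))

def phase_coherence(x, y, fs, f0):
    """Phase-locking value between two traces at frequency f0 (0..1)."""
    dphi = morlet_phase(x, fs, f0) - morlet_phase(y, fs, f0)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy example: two traces sharing a 60 Hz gamma component plus noise.
rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
common = np.sin(2 * np.pi * 60 * t)
x = common + rng.normal(0, 1, t.size)
y = common + rng.normal(0, 1, t.size)
print(f"PLV at 60 Hz: {phase_coherence(x, y, fs, 60):.2f}")
print(f"PLV at 20 Hz: {phase_coherence(x, y, fs, 20):.2f}")
```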
We found that the two populations of V1 neurons driven by the attended and by the non-attended stimulus showed strongly different strengths of synchronization with the V4
population getting synaptic input from both of them. If the recorded V1 population was
representing the attended stimulus, robust phase coherence was measured. If the same population was representing the non-attended stimulus, the phase coherence was strongly diminished.
The stronger coupling between neurons in area V4 and that part of their afferent neurons in
V1 carrying the behaviourally relevant information supports the hypothesis that information
flow in the visual system is modulated by attention-dependent changes of neuronal
synchronization. Thus, differential synchronization may serve as a mechanism to switch
between different patterns of effective connectivity within the network of anatomical
connections and thereby to route signals and information according to the quickly varying
demands of information processing.
Acknowledgements:
Supported by BMBF Bernstein Group Bremen, "Functional adaption of the visual system".
W37
Synchrony-based encoding in cerebellar neuronal ensembles of
awake mobile mice
Ilker Ozden*1, D.A. Dombeck1, T.M. Hoogland1, F. Collman1, D.W. Tank1, Samuel Wang1
1 Department of Molecular Biology and Princeton Neuroscience Institute, Princeton
University, Princeton, NJ
* [email protected]
The cerebellum is essential for processing sensory and cortical input to guide action. One of
its major inputs, the inferior olive, drives synchronous complex spike firing in ensembles of
Purkinje cells (PCs), detectable as dendritic calcium transients. Previously, PC synchrony
during behavior has been investigated by undersampling the PC population by extracellular
recording. Here, we used 2-photon laser scanning microscopy to monitor calcium coactivation patterns of many neuronal structures at once in the molecular layer of the intact
cerebellum (Sullivan et al. 2005 J. Neurophysiol. 94:1635). The cerebellar cortex of adult
mice was bulk-loaded with Oregon Green BAPTA-1/AM in lobules IV/V of medial vermis,
which represent limb and trunk sensory-motor information. Recordings were done in awake
mice walking on a spherical treadmill (Dombeck et al. 2007 Neuron 56:43). Data were
collected in fields of view of 60 x 250 µm. Each frame was corrected for brain-motion-related artifacts by subpixel 2D cross-correlation, and individual PC dendrites were identified by
independent component analysis. In this way we could monitor up to 45 PC dendrites at
once.
In resting animals, PC dendrites generated spontaneous calcium transients at a rate of 1.0 ±
0.2 Hz (mean ± SD), comparable to previously observed rates of complex spiking and
consistent with our previous demonstration of a near one-to-one correspondence between
calcium transients and electrophysiologically recorded complex spikes under anesthesia.
When the animal started to move, the firing rate increased slightly to 1.4 ± 0.2 Hz.
However, a more prominent feature of the transition from resting to locomotion was an
increase in the rate of co-activation of many PCs at once. Synchronous firing events were
defined as the occurrence of simultaneous calcium transients in 35% or more PC dendrites
within a 35 ms movie frame. When animals began to locomote spontaneously, the rate of
co-activation events rose from 0.05 ± 0.06 events/s to 0.3 ± 0.2 events/s, a 6-fold increase in
synchrony. During walking, PC co-activation events were associated with changes in the
speed and direction of animal locomotion, suggesting that in a walking animal, synchrony is
related to modifications of movement.
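The event definition stated above can be put compactly in code (a sketch with synthetic data; the 35% threshold and the 35 ms frame follow the definition in the text):

```python
import numpy as np

def coactivation_events(activity, threshold=0.35):
    """Count frames in which at least `threshold` of PC dendrites are
    simultaneously active. `activity` is a boolean (dendrites x frames)
    matrix of detected calcium transients (one 35 ms movie frame per column)."""
    frac_active = activity.mean(axis=0)
    return int(np.count_nonzero(frac_active >= threshold))

# Toy example: 40 dendrites, 1000 frames of sparse background activity,
# plus a handful of imposed population-wide events.
rng = np.random.default_rng(4)
activity = rng.random((40, 1000)) < 0.03
for frame in (100, 400, 800):
    activity[rng.choice(40, size=20, replace=False), frame] = True
rate = coactivation_events(activity) / (1000 * 0.035)
print(f"co-activation events: {rate:.2f} events/s")
```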
In resting animals, auditory clap stimuli often triggered locomotion. Each clap co-activated 39
± 27% of PC dendrites at once in the field of view. Clap responses were reduced when the
animal was standing on a swinging ball (10 episodes, 2 animals) and absent when the
animal was walking (7 episodes in 2 animals). Thus the olive responds to salient sensory
stimuli with synchronous firing that is modulated by the movement state of the animal. Our
observations are consistent with the idea that synchronous firing in groups of olivary neurons
can shape movement. Synchrony in the olivocerebellar system may convey signals relevant
for immediate function.
W38
Unsupervised learning of gain-field like interactions to achieve
head-centered representations
Sebastian Thomas Philipp*1, Frank Michler3, Thomas Wachtler2
1 Computational Neuroscience, Department Biologie II, Ludwig-Maximilians-Universität,
Munich, Germany
2 Fakultät für Biologie, Ludwig-Maximilians-Universität Munich, Munich, Germany
3 Neurophysik, Philipps-Universität, Marburg, Germany
* [email protected]
Movement planning based on visual information requires a transformation from a retina-centered into a head- or body-centered frame of reference. It has been shown that such
transformations can be achieved via basis function networks [1,2]. We investigated whether
basis functions for coordinate transformations can be learned by a biologically plausible
neural network. We employed a model network of spiking neurons that learns invariant
representations based on spatio-temporal stimulus correlations [3].
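The basis-function idea [1,2] can be illustrated independently of the spiking implementation: units with Gaussian retinal tuning multiplicatively modulated by gaze direction form a basis from which head-centered position is read out linearly (a schematic sketch; all tuning parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Gain-field basis: Gaussian retinal tuning multiplied by a sigmoidal
# modulation by gaze direction (degrees; parameters illustrative).
retinal_pref = np.linspace(-40, 40, 15)
gaze_slope = np.linspace(-0.2, 0.2, 10)
R, G = np.meshgrid(retinal_pref, gaze_slope, indexing="ij")

def responses(x_ret, gaze):
    tuning = np.exp(-(x_ret - R) ** 2 / (2 * 10.0 ** 2))
    gain = 1.0 / (1.0 + np.exp(-G * gaze * 10.0))
    return (tuning * gain).ravel()

# Training set: random retinal positions and gaze angles.
X, y = [], []
for _ in range(2000):
    x_ret, gaze = rng.uniform(-30, 30), rng.uniform(-30, 30)
    X.append(responses(x_ret, gaze))
    y.append(x_ret + gaze)            # head-centered position
X, y = np.array(X), np.array(y)

# Linear readout of head-centered position from the basis responses.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
test = responses(10.0, -25.0)
print(f"decoded head-centered position: {test @ w:.1f} (true -15.0)")
```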
The model consists of a three-stage network of leaky integrate-and-fire neurons with
biologically realistic conductances. The network has two input layers, corresponding to
neurons representing the retinal image and neurons representing the direction of gaze.
These inputs are represented in the map layer via excitatory or modulatory connections,
respectively, that exhibit Hebbian-like spike-timing-dependent plasticity. Neurons within the
map layer are connected via short range lateral excitatory connections and unspecific lateral
inhibition.
We trained the network with stimuli corresponding to typical viewing situations when a visual
scene is explored by saccadic eye movements, with gaze direction changing on a faster time
scale than object positions in space. After learning, each neuron in the map layer was
selective for a small subset of the stimulus space, with excitatory and modulatory
connections adapted to achieve a topographic map of the inputs. Neurons in the output layer
with a localized receptive field in the map layer were selective for positions in head-centered
space, invariant to changes in retinal image due to changes in gaze direction.
Our results show that coordinate transformations via basis function networks can be learned
in a biologically plausible way by exploiting the spatio-temporal correlations between visual
stimulation and eye position signals under natural viewing conditions.
Acknowledgements:
Supported by DFG Forschergruppe 560 and BCCN Munich.
References:
[1] A Pouget, TJ Sejnowski: Spatial Transformations in the parietal cortex using basis
functions. J Cogn Neurosci 1997, 9:65-69.
[2] A Pouget, S Deneve, JR Duhamel: A computational perspective on the neural basis of
multisensory spatial representations. Nat Rev Neurosci 2002, 3(9):741-747.
[3] F Michler, R Eckhorn, T Wachtler: A network of spiking neurons for learning invariant
object representations in the visual cortex based on topographic maps and spatiotemporal correlations. Society for Neuroscience Annual Meeting, #394.8, San Diego, CA,
2007.
W39
The morphology of cell nuclei regulates calcium coding in
hippocampal neurons
Gillian Queisser*1
1 Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, University of Heidelberg,
Heidelberg, Germany
* [email protected]
Calcium acts as a key regulator in the nucleus for biochemical events that trigger gene
transcription and is involved in processes such as memory formation and information
storage. Recent research shows that the morphology of hippocampal neuron nuclei is
regulated by NMDA receptors, which led us to investigate the morphological influence in a
modeling environment. We introduce novel concepts of neuron nuclei and their morphology
as a regulator for nuclear calcium signaling. In a model study we developed a three-dimensional mathematical model for nuclear calcium signaling based on experimental data
and three-dimensionally reconstructed cell nuclei. When investigating the influence of the
nuclear morphology on calcium signals, we find two main types of nuclei, infolded and
spherical. While spherical nuclei are observed to be "signal-integrators", infolded nuclei are
adept at resolving high-frequency signals, an aspect not previously explored in detail.
Downstream of calcium, the morphology of nuclei might affect biochemical processes that
are involved in gene transcription.
W40
Field potentials from macaque area V4 predict attention in single
trials with ~100% accuracy
David Rotermund2, Simon Neitzel1, Udo Ernst*2, Sunita Mandon1, Katja Taylor1, Yulia
Smiyukha1, Klaus Pawelzik2
1 Department for Theoretical Neurobiology, Center for Cognitive Science, University of
Bremen, Bremen, Germany
2 Department for Theoretical Physics, Center for Cognitive Sciences, University of Bremen,
Bremen, Germany
* [email protected]
Coherent oscillations and synchronous activity are suggested to play an important role in
selective processing and dynamic routing of information across the primary visual cortical
areas. In this contribution we show that local power spectral amplitudes and phase
coherency between distant recording sites make it possible to distinguish almost perfectly between two
attentional states in a behavioural task, thus giving strong quantitative support for a
functional role of oscillatory neural dynamics.
Macaque monkeys were trained to perform a delayed-match-to-sample task, in which the
animals had to direct attention to one of two sequences of morphing shapes presented on a
computer screen. The task was to signal the reoccurrence of the initial shape of the attended
morphing sequence. Recordings of local field potentials (LFPs) were performed with an
array of chronically implanted intracortical microelectrodes in one animal, and epidural
recording arrays in two animals. These arrays covered parts of areas V1 and V4. We
employed different stimulus sizes and configurations, ranging from 1 to 4 degrees in visual
angle for the shapes' diameters, and from 1 to 4 degrees of visual angle in shape separation.
The signals were split into their frequency components by applying a Morlet-wavelet
transform. From the transformed data, we computed the phase coherency (i.e. a complex-valued scalar with amplitude <=1 and a phase difference) averaged over a time interval of
2500 ms, for every electrode pair. We then used a support vector machine (SVM) to classify
the attended state (attention directed either to one or to the other sequence) from the power
spectral amplitudes and mean phase differences between two recording sites. Strikingly,
nearly perfect state identification (up to 99.9% correct) was possible from several pairs of
electrodes in V4, mainly in the frequency bands of 48 Hz and 61 Hz. From V1-V4 electrode
pairs, classification with up to 76% correct was possible. A similar performance was obtained
using the spectral power of single electrodes in V4 in the Gamma frequency range.
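A sketch of the classification step is given below (synthetic feature values stand in for the measured spectral amplitudes and phase differences, and scikit-learn's SVC stands in for whatever SVM implementation was used):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Trials x features (spectral power / mean phase difference per electrode
# pair and frequency band) -> attended state. Values here are synthetic.
rng = np.random.default_rng(6)
n_trials = 200
features = rng.normal(size=(n_trials, 40))
labels = rng.integers(0, 2, size=n_trials)   # attend stream A or B
# Inject a weak attention-dependent gamma-band power difference:
features[labels == 1, 0] += 1.5

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```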
Our results show that power spectral amplitudes as well as phase differences between
signals from V4 can accurately inform about the direction of attention to different locations in
visual space in every single trial. This effect is robust against continuous changes of the
shapes at the attended location. In addition, these findings are stable under the use of
different recording techniques and various stimulus configurations, thus pointing to a key
mechanism based on coherent oscillations for processing information under attention.
W41
Applying graph theory to the analysis of functional network
dynamics in visual cortex
Katharina Schmitz*3, Gordon Pipa23, Ralf A. W. Galuske13
1 Department of Biology, Darmstadt University of Technology, Darmstadt, Germany
2 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
3 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
In order to study information processing in neuronal networks, the analysis of the functional connectivity among their elements is one of the key issues. One approach to define such functional networks is to use the temporal relations of the firing of the individual elements and define functional connectivity on the basis of millisecond-precise synchronous firing of pairs
of cells. Here we present a novel approach to analyzing the dynamics of
neuronal networks using mathematical graph theory.
We tested the applicability of such an approach by using data from electrophysiological
multi-electrode recordings in cat visual cortex. The examined dataset had been obtained in a
study on the influence of global connectivity patterns between cortical areas on the dynamics
of local neuronal networks in primary visual cortex.
In the electrophysiological data, which contained simultaneously recorded signals from up to 16 electrodes, action potentials were isolated using thresholding and spike-sorting
techniques. We characterized connectivity patterns based on correlated spiking in multi-unit
signals of all possible pairs of electrodes. In order to identify synchronous firing beyond
chance we used the non-parametric method NeuroXidence (Pipa et al., 2008). Graphs were
constructed by interpreting each of the recorded neurons as a node of the graph and edges
were inserted where NeuroXidence detected a significantly high number of synchronous
spiking events between the two respective signals. The resulting networks were undirected.
Further analysis was performed in three steps: We first specified the connectivity pattern for
each experimental condition and tested whether the graphs were not random, i.e. Erdös-Rényi. To this end we used the distribution of the number of edges and the degree. In a
second step, we tested whether local connectivity was stronger than long-range
synchronization. To test this we defined a neighborhood relation to discriminate between
'short' and 'long' connections regarding the topology of the electrode array, and tested
whether one of the groups was significantly stronger represented. Finally we tested whether
entire networks were different for different experimental conditions. To this end we analyzed
the similarity of different networks based on the Hamming distance between two graphs X
and Y, defined as d_h(X,Y) := Σ_i |X_i − Y_i|, i = 1,...,N, where N is the number of possible edges, which counts the number of edges in which the two graphs differ. To test whether a certain Hamming distance d_h was
significant, we developed a statistical test comparing the mean Hamming distance in a set of
graphs to the expected Hamming distance in an equally sized set of Bernoulli graphs with
the same edge probabilities.
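A sketch of the distance and the Monte Carlo comparison follows (here the null uses a single overall edge probability for the Bernoulli graphs; the authors' test matches the edge probabilities of the observed set and may differ in detail):

```python
import numpy as np

def hamming(x, y):
    """Number of differing edges between two graphs given as
    flattened binary edge vectors of equal length."""
    return int(np.sum(np.abs(x - y)))

def mean_pairwise_hamming(graphs):
    n = len(graphs)
    return np.mean([hamming(graphs[i], graphs[j])
                    for i in range(n) for j in range(i + 1, n)])

def bernoulli_null(graphs, n_perm=1000, rng=None):
    """Monte Carlo null: equally sized sets of Bernoulli graphs with the
    same overall edge probability as the observed set."""
    rng = rng or np.random.default_rng(7)
    G = np.array(graphs)
    p = G.mean()
    null = np.empty(n_perm)
    for k in range(n_perm):
        sims = (rng.random(G.shape) < p).astype(int)
        null[k] = mean_pairwise_hamming(sims)
    return null

# Toy example: 10 graphs over 50 possible edges sharing a common backbone.
rng = np.random.default_rng(8)
backbone = (rng.random(50) < 0.4).astype(int)
graphs = [np.where(rng.random(50) < 0.1, 1 - backbone, backbone)
          for _ in range(10)]
observed = mean_pairwise_hamming(graphs)
null = bernoulli_null(graphs)
print(f"observed mean d_h = {observed:.1f}, "
      f"null 5th percentile = {np.percentile(null, 5):.1f}")
```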
We found that the observed networks did not match the features of Erdös-Rényi graphs. A
comparison of 'short' and 'long' connections showed a stronger representation of short links.
For graphs obtained under the same experimental conditions, the Hamming distance was
significantly small. Because the NeuroXidence algorithm corrects for changes in spike rate,
these findings indicate that temporal coding does play a crucial role in transmitting
information between neurons.
W42
Differential processing through distinct network properties in two
parallel olfactory pathways
Michael Schmuker*1, Nobuhiro Yamagata12, Randolf Menzel1
1 Institute for Biology - Neurobiology, Freie Universität Berlin, Berlin, Germany
2 Graduate School of Life Sciences, Tohoku University, Tokyo, Japan
* [email protected]
In the honeybee olfactory system sensory information is first processed in the antennal lobe
before it is relayed to the mushroom body where multimodal information is integrated.
Projection neurons (PNs) send their axons from the antennal lobe to the mushroom body via
two parallel pathways, the lateral and the medial antenno-cerebral tract (l- and m-ACT). We
recorded Ca2+-activity in presynaptic boutons of PNs in the mushroom body in order to
characterize the coding strategies in both pathways. We found that m-ACT PNs exhibit
broad odor tuning and strong concentration dependence, i.e. they responded to many of the
tested odorants and their responses increased with increasing odor concentration. In
contrast, PNs in the l-ACT showed narrow odor tuning and weak concentration dependence,
responding only to few odors and only weakly varying with odor concentration [1].
Since PNs of the two tracts innervate glomeruli which are clearly segregated in the antennal
lobe, it is possible that these glomeruli belong to partially segregated local networks. We
hypothesized that their differential functional characteristics could emerge from distinct
network properties in the two pathways. Using a mean-field model of the antennal lobe [2]
we could reproduce narrow and broad odor tuning by simply varying the amount of
lateral inhibition in the antennal lobe. Increasing the amount of lateral inhibition led to
increasingly narrow odor tuning. In addition, we used gain control by inhibitory feedback to
mimic the situation in the presynaptic boutons of PNs, which receive reciprocal inhibitory
connections from their downstream targets. Increasing the amount of gain control resulted in
less concentration dependence.
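The effect of the two manipulated parameters can be reproduced in a few lines (a toy rate model with illustrative parameters, not the mean-field model of [2]):

```python
import numpy as np

def steady_response(inputs, w_inh, gain_fb=0.0, n_iter=500):
    """Rate model of one glomerular layer: each unit receives receptor
    input, subtractive lateral inhibition from the other units, and
    divisive feedback gain control (all parameters illustrative)."""
    inputs = np.asarray(inputs, dtype=float)
    n = len(inputs)
    r = inputs.copy()
    for _ in range(n_iter):
        lateral = w_inh * (r.sum() - r) / (n - 1)
        new = np.maximum(inputs - lateral, 0.0) / (1.0 + gain_fb * r)
        r = 0.8 * r + 0.2 * new    # damped update for stable convergence
    return r

# Broadly tuned receptor input across 10 glomeruli:
inputs = np.exp(-0.5 * ((np.arange(10) - 4.5) / 3.0) ** 2)
for w in (0.0, 0.5, 1.5):
    r = steady_response(inputs, w_inh=w)
    width = int(np.sum(r > 0.5 * r.max()))
    print(f"lateral inhibition w={w:3.1f}: tuning width = {width} glomeruli")
```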
Our results suggest that the different coding properties in the l- and m-ACT could emerge
from different network properties in those pathways. Our model predicts that the m-ACT
network exhibits weak lateral inhibition and weak gain control, leading to broad odor tuning
and strong concentration dependence, while the l-ACT network shows strong lateral
inhibition and strong gain control, which leads to narrow odor tuning and weak concentration
dependence.
References:
[1] Yamagata N, Schmuker M, Szyszka, Mizunami M and Menzel R (2009): Differential odor
processing in two olfactory pathways in the honeybee. Under review.
[2] Schmuker M and Schneider G (2007): Processing and classification of chemical data
inspired by the sense of smell. PNAS 104:20285-20289.
W43
A columnar model of bottom-up and top-down processing in the
neocortex
Sven Schrader*2, Marc-Oliver Gewaltig21, Ursula Körner2, Edgar Körner2
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
Thorpe et al. (1996) demonstrated that our brains are able to process visual stimuli within
the first 150 ms, without considering all possible interpretations. It is therefore likely that a
first coarse hypothesis, which captures the most relevant features of the stimulus, is made in
a pure feed-forward manner. Details and less relevant features are postponed to a later,
feedback-mediated stage.
Based on our assumptions (Körner et al., 1999), we present a columnar model of cortical
processing that demonstrates the formation of a fast initial hypothesis and its subsequent
disambiguation by inter-columnar communication. Neural representation occurs by forming
coherent spike waves (volleys) as local decisions. The model consists of three areas, each
representing more abstract features of the stimulus hierarchy. The areas are connected with
converging bottom-up projections that propagate activity to the next higher level. During this
forward propagation, the initial hypothesis is generated. Top-down feedback is mediated by
modulatory connections that amplify the correct representation and suppress the incorrect
ones, until only the most compact representation of the object remains active.
Our model rests on three postulates that interpret the cortical architecture in terms of the
algorithm it implements. First, we argue that the columnar modularization reflects a
functional modularization. We interpret columns as computational units that use the same
set of powerful processing strategies over and over again. Second, each cortical column
hosts the circuitry of two processing streams, a fast feed-forward "A-", and a slower
modulatory "B-" system that refines the decision taken in the A-system by mixing experience
with the afferent stimulus stream (predictive coding). Third, time is too short to estimate the
reliability of a neuron's response in a rate-coded manner. We therefore argue that cortical
neurons code reliability in their relative response latencies. By receiving the fastest
response, a target neuron automatically picks up the most reliable one.
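The third postulate can be stated in a few lines (numbers illustrative):

```python
import numpy as np

# Latency-as-reliability: each candidate's response latency decreases with
# the quality of its match to the stimulus; a downstream unit simply takes
# the earliest response. All values are illustrative.
rng = np.random.default_rng(9)
match_quality = np.array([0.9, 0.55, 0.3, 0.7])   # per candidate representation
latency = 20.0 - 12.0 * match_quality + rng.normal(0, 0.5, 4)  # ms
winner = int(np.argmin(latency))
print(f"latencies (ms): {np.round(latency, 1)} -> winner: candidate {winner}")
```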
At first, our model generates a sequence of spike volleys, each being a possible
representation of the stimulus. These candidates comprise about one percent of all 300
learned objects. The correctness of a response is directly expressed in its latency: the better
a representation matches the stimulus, the earlier the response occurs. The B-system
implements top-down predictive coding: Based on the stored knowledge, responses are
modified until the set of candidates is on average reduced to one. Thus, the network makes
a unique decision on the stimulus. It is correct in 95% of the trials, even with degraded
stimuli. We analyze the spike volleys in terms of their occurrence times and precision, and
give a functional interpretation to rhythmic activity such as gamma oscillations. Our model
has been simulated with the simulation tool NEST (Gewaltig and Diesmann, 2007).
References:
S. Thorpe, D. Fize and C. Marlot (1996), Nature, 381:520-522
E. Körner, MO. Gewaltig, U. Körner, A. Richter, T. Rodemann (1999), Neural Networks
12:989-1005
MO. Gewaltig and M. Diesmann (2007), Scholarpedia 2(4):1430
W44
Towards an estimate of functional connectivity in visual cortex
David P Schulz*3, Andrea Benucci3, Laura Busse3, Steffen Katzner3, Maneesh Sahani2,
Jonathan W Pillow1, Matteo Carandini3
1 Center for Perceptual Systems, University of Texas, Austin, USA
2 Gatsby Computational Neuroscience Unit, University College London, London, UK
3 Visual Neuroscience, Institute of Ophthalmology, University College London, London, UK
* [email protected]
The responses of neurons in area V1 depend both on the incoming visual input and on
lateral connectivity. Measuring the relative contributions of these factors has been a
challenge. Visual responses are typically studied one neuron at a time, whereas functional
connectivity is typically studied by measuring correlations in the absence of the stimulus.
Following recent work on the modeling of neural population responses, we asked whether a
generalized linear model (GLM) could be used to capture both the visual selectivity and the
functional connectivity of visual responses in V1.
We recorded the activity of multiple neurons with a 10x10 Utah array in area V1 of
anesthetized cats. Stimuli were dynamic sequences of briefly (32ms) flashed gratings with
random phases and orientations. We identified well isolated single-units and pooled the
remaining multi-unit activity in 16 sub-populations according to preferred orientation.
For each single unit, we considered three GLM models of increasing complexity. (1) The
linear-non-linear Poisson model (LNP), which filters the visual input with weights that depend
on orientation and time, and passes the resulting time-varying trace through a non-linearity
that provides the rate function for a Poisson spike-generator. (2) The same model plus a
post-spike current (LNP-S). (3) The LNP-S model with the further addition of coupling
currents triggered by spikes of the sub-populations (LNP-SC).
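A schematic re-implementation of the most complex model class (LNP-SC) is sketched below; the filters, parameters, and surrogate population spikes are all illustrative, not the fitted values:

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_lnp_sc(stim, k, h, C, b, dt=0.001):
    """Simulate an LNP neuron with post-spike filter h and coupling
    filters C (one row per sub-population spike train)."""
    n_t = stim.shape[0]
    pop = rng.random((C.shape[0], n_t)) < 0.02     # surrogate population spikes
    y = np.zeros(n_t, dtype=int)
    for t in range(n_t):
        drive = k @ stim[t] + b
        for tau in range(1, min(t, h.shape[0]) + 1):
            drive += h[tau - 1] * y[t - tau]           # spike-history term
            drive += C[:, tau - 1] @ pop[:, t - tau]   # coupling terms
        rate = np.exp(drive)                           # exponential nonlinearity
        y[t] = rng.random() < rate * dt                # Bernoulli spike draw
    return y

n_t, n_ori = 2000, 16
stim = rng.random((n_t, n_ori))             # orientation-energy input
k = np.zeros(n_ori); k[8] = 3.0             # tuned to one orientation
h = np.array([-5.0, -2.0, -1.0])            # refractory-like post-spike filter
C = np.full((4, 3), 0.3)                    # weak positive coupling
spikes = simulate_lnp_sc(stim, k, h, C, b=1.0)
print(f"{spikes.sum()} spikes in {n_t} ms")
```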
These models differed in their ability to predict the spike trains. All three models captured the
basic structure of the neuron’s selectivity for orientation and response time-course, as
measured by the spike-triggered average stimulus in the orientation domain. We assessed
the quality of the model spike rasters by cross-correlating predicted spike trains with the
neuron’s measured spike trains. The cross-correlogram with the spike trains predicted by the
LNP-SC model had a more pronounced peak relative to the LNP and LNP-S models,
indicating a superior performance. The LNP-SC model was also better at predicting the
cross-correlations between the neuron and the sub-populations. Introducing a role for
functional connectivity between the subpopulations and the neuron under study, therefore,
results in improved predictions of the neuron’s spike trains.
These techniques allow for efficient parameter estimation and return a coupling matrix that
could serve as an estimate for functional connectivity. To the degree that this functional
connectivity reflects actual anatomical connections, this approach could be applied to larger
data sets to estimate how lateral connectivity in the orientation domain shapes the
responses of V1 neurons.
W45
Correlates of facial expressions in the primary visual cortex
Ricardo Sousa*1, João Rodrigues1, Hans du Buf1
1 Vision Laboratory, Institute for Systems and Robotics, University of Algarve, Faro,
Portugal
* [email protected]
Face detection and recognition should be complemented by recognition of facial expression,
for example for social robots which must react to human emotions. Our framework is based
on two multi-scale representations in cortical area V1: keypoints at eyes, nose and mouth
are grouped for face detection [1]; lines and edges provide information for face recognition
[2]. We assume that keypoints play a key role in the where system, lines and edges being
exploited in the what system. This dichotomy, together with coarse-to-fine-scale processing,
yields translation and rotation invariance, refining object categorisations until recognition,
assuming that objects are represented by normalised templates in memory. Faces are
processed in the following way: (1) Keypoints at coarse scales are used to translate and rotate
the entire input face, using a generic face template with neutral expression. (2) At medium
scales, cells with dendritic fields at corners of mouth and eyebrows of the generic template
collect evidence for expressions using the line and edge information of the (globally
normalised) input face at those scales. Big structures, including mouth and eyebrows, are
further normalised using keypoints and first categorizations (gender, race) are obtained
using lines and edges. (3) The latter process continues until the finest scale, with
normalisation of the expression to neutral for final face recognition. The advantage of this
framework is that only one frontal view of a person's face with neutral expression must be
stored in memory.
This procedure resulted from an analysis of the multi-scale line/edge representation of
normalised faces with seven expressions: neutral, anger, disgust, fear, happy, sad and
surprise. Following [3], where Action Units (AUs) are related to facial muscles, we analysed
the line/edge representation in all AUs.
We found that positions of lines and edges at one medium scale, and only at AUs covering
the mouth and eyebrows, relative to positions in the neutral face at the same scale, suffice to
extract the right expression. Moreover, by implementing AUs by means of six groups of
vertically aligned summation cells with a dendritic field size related to that scale (sizes of
simple and complex cells), covering a range of positions above and below the corners of
mouth and eyebrows in the neutral face, the summation cell with maximum response of each
of the six cell groups can be detected, and it is possible to estimate the degree of the
expression, from mild to extreme. This work is in progress, since the method must still be
tested using big databases with many faces and their natural variations. Perhaps some
expressions detected at one medium scale must be validated at one or more finer scales.
Nevertheless, in this framework detection of expression occurs before face recognition,
which may be an advantage in the development of social robots.
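The summation-cell readout described above can be sketched as follows (synthetic responses; six AU groups, eleven vertical offsets):

```python
import numpy as np

# Rows = vertical offsets from the neutral-face corner position,
# columns = the six AU groups at mouth and eyebrow corners. The winning
# cell per group gives the displacement, whose magnitude grades the
# expression from mild to extreme (responses here are synthetic).
rng = np.random.default_rng(11)
offsets = np.arange(-5, 6)                  # positions above/below neutral
responses = rng.random((len(offsets), 6))   # line/edge evidence per cell
responses[8, :2] += 2.0                     # mouth corners displaced upward

winners = offsets[np.argmax(responses, axis=0)]
degree = np.abs(winners).mean() / offsets.max()
print(f"per-group displacement: {winners}, expression degree: {degree:.2f}")
```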
Acknowledgements:
FCT funding of ISR-IST with POS-Conhecimento and FEDER; FCT projects PTDC-PSI-67381-2006 and PTDC-EIA-73633-2006.
References:
[1] Rodrigues and du Buf 2006. BioSystems 86, 75-90.
[2] Rodrigues and du Buf 2009. BioSystems 95, 206-226.
[3] Ekman and Friesen 1978. FACS, Consulting Psychologists Press, Palo Alto.
W46
Uncovering the signatures of neural synchronization in spike
correlation coefficients
Tatjana Tchumatchenko*1, Aleksey Malyshev43, Theo Geisel51, Maxim Volgushev423, Fred
Wolf51
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Department of Neurophysiology, Ruhr-University Bochum, Bochum, Germany
3 Department of Psychology, University of Connecticut, Storrs, USA
4 Institute of Higher Nervous Activity and Neurophysiology, Moscow, Russian Federation
5 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Neurons in the CNS exhibit temporally correlated activity that can reflect specific features of
sensory stimuli or behavioral tasks [1-2]. As a first step beyond the analysis of single
neurons, much attention has been focused on the pairwise correlation properties of neurons
in networks [1-3]. It has been shown that pairwise interactions can capture more than 90% of
the structure in the detailed patterns of spikes and silence in a network [4]. Here, we
propose a stochastic model for spike train generation which replicates computational
properties of pairwise spike correlations of cortical neurons in many important aspects [5].
We use this model to investigate which measurable quantities reflect best the degree of
synchronization in a neuronal pair. In particular we study the properties of the spike
correlation coefficient between two neurons as a function of firing rates, correlation strength
and functional form of input correlations. Correlation coefficients are frequently used to
quantify the degree of synchronization of a neuronal pair in a network [6-9], and synthetic spike trains with specified correlation coefficients are used to emulate neuronal spike trains [10-12]. Despite their popularity, little is known about their quantitative determinants. So far, an analytical description of spike train correlation coefficients and their dependence on single-neuron parameters has been obtained only in special limiting cases [7,8]. Using our
framework, we find that spike correlation coefficients faithfully capture the correlation of two
spike trains only for small time bins, where they primarily reflect spike cross correlations and
depend only weakly on the temporal statistics of individual spike trains. It is only for small
time bins that spike correlation coefficients are proportional to synchronous conditional firing
rate and thus reflect its rate dependence for weak correlations and its rate independence for
large cross-correlation strengths. For a rate-inhomogeneous pair we find asymmetric spike correlations and correlation coefficients which fail to capture the cross-correlation between the two spike trains.
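The bin-width dependence of the measure is easy to reproduce (a toy pair of spike trains sharing a jittered common component; all parameters illustrative):

```python
import numpy as np

def count_correlation(t1, t2, t_max, bin_width):
    """Pearson correlation of binned spike counts of two spike trains."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    c1, _ = np.histogram(t1, edges)
    c2, _ = np.histogram(t2, edges)
    return np.corrcoef(c1, c2)[0, 1]

# Toy pair: a shared spike component with 1 ms jitter plus private spikes.
rng = np.random.default_rng(12)
t_max, rate = 100.0, 20.0                    # s, Hz
shared = np.sort(rng.uniform(0, t_max, int(0.3 * rate * t_max)))
own1 = rng.uniform(0, t_max, int(0.7 * rate * t_max))
own2 = rng.uniform(0, t_max, int(0.7 * rate * t_max))
t1 = np.sort(np.concatenate([shared, own1]))
t2 = np.sort(np.concatenate([shared + rng.normal(0, 0.001, shared.size), own2]))
for bw in (0.002, 0.01, 0.1, 1.0):
    rho = count_correlation(t1, t2, t_max, bw)
    print(f"bin {bw * 1000:6.1f} ms: rho = {rho:.3f}")
```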
Thus, our statistical framework is a key ingredient for building a quantitative basis for a
concise and unambiguous description of neuronal correlations that can be used to
realistically emulate neuronal spike sequences.
References:
[1] E. Zohary et al., Nature 370, 140–143 (1994).
[2] A. Riehle et al., Science 278, 1950–1953 (1997).
[3] L.F. Abbott and P. Dayan, Neural Comput. 11, 91–101 (1999).
[4] E. Schneidman et al., Nature 440(7087), 1007–1012 (2006).
[5] Tchumatchenko et al., arXiv:0810.2901v3 [q-bio.NC] (submitted).
[6] D.H. Perkel et al., Biophys. J. 7(4), 419–440 (1967).
[7] J. de la Rocha et al., Nature 448, 802–806 (2007).
[8] E. Shea-Brown et al., Phys. Rev. Lett. 100, 108102 (2008).
[9] D.S. Greenberg, Nat. Neurosci. 11(7), 749–751 (2008).
[10] Brette, Neural Comput. 21(1), 188–215 (2009).
[11] Niebur, Neural Comput. 19(7), 1720–1738 (2007).
[12] Macke et al., Neural Comput. 21(2), 397–423 (2009).
W47
Fast excitation during sharp-wave ripples
Álvaro Tejero-Cantero*1, Nikolaus Maier3, Jochen Winterer3, Genela Morris3, Christian
Leibold12
1 Division of Neurobiology, Department of Biology II, University of Munich, Munich,
Germany
2 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
3 Neurowissenschaftliches Forschungszentrum, Charité-Universitätsmedizin, Berlin,
Germany
* [email protected]
In freely behaving rodents, the local field potential measured in the hippocampus displays
prominent deflections during immobility and sleep. These are called sharp waves; they last for about 40 to 60 ms and are overlaid with fast oscillations, or ripples, of about 200 Hz. Sharp
waves have been shown in rats to co-occur with multi-unit replay and preplay patterns
following and preceding a learned spatial experience [1-3]. Patterns are compressed in order
to fit within the tight temporal frame offered by the sharp-wave ripple complexes. On a
cellular level, it is known that both interneurons and pyramidal cells are significantly phase-locked to the ripple phenomenon.
We aim at understanding the coordinated cellular activity that occurs during sharp-wave ripple
complexes. To this end, we resort to in vitro simultaneous field potential and single-cell
voltage clamp recordings on submerged mouse hippocampal slices, where the phenomenon
appears with characteristics known from the in vivo situation [4]. Our results stem from the
first direct analysis of sharp-wave-associated postsynaptic currents (PSCs). These were
recorded at different holding potentials representative of different excitation/inhibition mixes
(-75 mV vs around -50 mV) as well as under intracellular block of inhibition.
The following evidence suggests that the intracellular high-frequency oscillations are
supported by strong excitatory currents (see also [5]) and challenges the present view that
high-frequency oscillations during sharp-waves in vivo are mainly mediated by inhibitory
interneurons:
1. The relative power in the ripple band was stronger for PSCs at the reversal
potential of GABAA receptors than at more depolarized holding potentials (see
the band-power sketch after this list).
2. The kinetics of sharp-wave associated currents were consistent with fast EPSCs.
3. Intracellular block of inhibition affected neither the power nor the frequency of the
sharp-wave associated fast PSCs.
4. Putative EPSCs showed strong locking to the extracellular ripple and preceded
the sharp wave peak by an average 1.5 ms.
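Evidence point 1 rests on comparing ripple-band power across holding potentials; a minimal band-power computation is sketched below (assuming a 150-250 Hz ripple band and synthetic PSC traces):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ripple_band_power(trace, fs, band=(150.0, 250.0)):
    """Relative power of a current trace in the ripple band (assumed
    here as 150-250 Hz), via a Butterworth band-pass filter."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.var(filtfilt(b, a, trace)) / np.var(trace)

# Toy PSC traces: a 200 Hz modulated burst vs a slow envelope only.
fs = 5000.0
t = np.arange(0, 0.2, 1 / fs)
envelope = np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
psc_fast = envelope * (1 + np.sin(2 * np.pi * 200 * t))
psc_slow = envelope
print(f"ripple-band fraction, fast PSCs: {ripple_band_power(psc_fast, fs):.2f}")
print(f"ripple-band fraction, slow PSCs: {ripple_band_power(psc_slow, fs):.3f}")
```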
Acknowledgements:
This work was supported by the Bundesministerium für Bildung und Forschung (BMBF,
grant numbers 01GQ0440 and 01GQ0410) and the Deutsche Forschungsgemeinschaft
(DFG, grant number LE 2250/2-1).
References:
[1] Lee AK, Wilson MA (2002) Neuron 36
[2] Foster DJ, Wilson MA (2006) Nature 440
[3] Diba K, Buzsaki G (2007) Nature Neurosci. 10
[4] Maier N, Nimmrich V, Draguhn A (2003) J Physiol 550
[5] Nimmrich V, Maier N, Schmitz D, Draguhn A (2005) J Physiol 563
W48
The german neuroinformatics node: development of tools for data
analysis and data sharing
Thomas Wachtler*3, Martin P Nawrot4, Jan Benda23, Jan Grewe3, Tiziano Zito1, Willi
Schiegel1, Andreas Herz23
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
3 Department of Biologie II, Ludwig-Maximilians University, Munich, Germany
4 Institut für Biologie - Neurobiologie, Freie Universität Berlin, Berlin, Germany
* [email protected]
The German National Node of the International Neuroinformatics Coordinating Facility
(INCF), G-Node (www.g-node.org), has been established to facilitate interaction and
collaboration between experimental and theoretical neuroscientists, both nationally and
internationally, with a focus on cellular and systems neurophysiology. G-Node is funded by
the German Federal Ministry for Education and Research (BMBF) and is an integral part of
the Bernstein Network Computational Neuroscience. G-Node engages in the development of
tools for data analysis and data exchange, with the goal to establish an integrated platform
for the sharing of data and analysis tools. G-Node collaborates with individual researchers
as well as with other INCF National Nodes, the INCF Secretariat, and other neuroinformatics
initiatives. We present examples of G-Node activities supporting the key ingredients of
neuroscientific research: data access, data storage and exchange, and data
analysis, together with teaching and training. To facilitate data access, G-Node develops a
tool for importing and exporting commonly used data formats and contributes to establishing
data format standards. In addition, G-Node supports scientists developing metadata
management tools. G-Node offers support for data analysis by collaborating with
researchers in the development of analysis software and by establishing an analysis tool
repository. To foster data sharing, G-Node will provide a database for long-term storage,
management, and analysis of neurophysiological data. Finally, G-Node has established a
teaching program to offer training in advanced data analysis, computer science, and
neuroinformatics.
W49
Neuronal coding challenged by memory load in prefrontal cortex
Maria Waizel*6, Felix Franke13, Gordon Pipa47, Nan-Hui Chen5, Lars F Muckli2, Klaus
Obermayer13, Matthias HJ Munk16
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Center of Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
3 Department of Neural Information Processing, Technical University Berlin, Berlin,
Germany
4 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
5 KunMing Institute of Zoology, Chinese Academy of Sciences, Beijing, China
6 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
7 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
As most cortical neurons are broadly tuned to various stimulus parameters, it is inevitable
that individual neurons participate in the representation of more than one visual object. We
asked here whether the prefrontal representation of immediately preceding objects would
interfere with the representation of subsequently processed object stimuli, supporting the
idea that neuronal processes challenged by more input and compressed in time lead to a
degradation of the quality of encoding. In the past, we analyzed simultaneously recorded
multi- and single-unit signals derived from arrays of single-ended microelectrodes and
tetrodes during a simple visual memory task (Waizel et al., SfN 2007&2008) and found that
accurate representations of individual objects require the participation of large neuronal
populations. Based on single trial firing rate values, we calculated one-way ANOVAs at 1%
significance thresholds and performed subsequent posthoc comparisons (Scheffé) in order
to detect stimulus selectivity and stimulus specificity for the activity at each single site,
respectively. With tetrodes we were able to detect highly-specific units in PFC with a narrow
band of stimulus preferences, which were remarkably stable throughout all stimulus
comparisons. In order to increase the probability of finding more of these specific units, we
sharpened the impact and enhanced the temporal structure of the task. Two monkeys, which
were trained to perform the basic task at ~80% performance, were presented ad hoc with a
sequence of up to 4 objects that were shown consecutively within a fixed period of 900 ms.
Not only were the monkeys able to generalize immediately from the simple task (Load 1) to
the demanding task (Load 2-4) (Wildt et al., SfN 2008), they also showed highly selective sites
(p < .009 to p < 7 × 10^-13) in all four load conditions, even for the last objects during load 4
(p < .006), which were presented for less than 250 ms. For all load conditions, highly specific
sites could be found (118 pairwise comparisons with p<.01). One group of these sites kept
their object preference throughout the entire sequence of all four objects; others responded
in a position-dependent manner to different objects, but were still highly stable throughout all pairwise
comparisons.
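As a minimal sketch of the selectivity analysis described above – a per-site one-way ANOVA at the 1% threshold – one could write (data layout, names, and the use of SciPy are our assumptions, not the authors' code):

    from scipy import stats

    def site_is_selective(rates_by_stimulus, alpha=0.01):
        # rates_by_stimulus: one 1D array of single-trial firing rates per
        # stimulus condition, all recorded at the same site.
        f_value, p_value = stats.f_oneway(*rates_by_stimulus)
        return p_value < alpha  # stimulus selectivity at the 1% threshold

Post-hoc pairwise comparisons (Scheffé) would then be applied to the selective sites to assess stimulus specificity.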
These results suggest that neuronal ensembles in primate PFC are capable of encoding up
to 4 objects without interactions among the activity expressed in relation to other objects in
the sequence. In addition, they are able to resolve even very shortly presented objects (<250
ms) showing strong selectivity uniquely for one of them and without superimposing this
representation with signals evoked by more recently perceived objects.
W50
Detailed modelling of signal processing in neurons
Gabriel Wittum*1, Holger Heumann3, Gillian Queisser2, Konstantinos Xylouris1, Sebastian
Reiter1, Niklas Antes1
1 Center for Scientific Computing, Goethe University, Frankfurt, Germany
2 Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, University of Heidelberg,
Heidelberg, Germany
3 Seminar for Applied Mathematics, Eidgenössische Technische Hochschule, Zürich,
Switzerland
* [email protected]
The crucial feature of neuronal ensembles is their high complexity and variability. This
makes modelling and computation very difficult, in particular for detailed models based on
first principles. The problem starts with modelling geometry, which has to extract the
essential features from those highly complex and variable phenotypes and at the same time
has to take into account the stochastic variability. Moreover, models of the highly complex
processes which are running on these geometries are far from being well established, since
those are highly complex too and couple across a hierarchy of scales in space and time.
Simulating such systems always puts the whole approach to the test, including modelling,
numerical methods and software implementations. In combination with validation based on
experimental data, all components have to be enhanced to reach a reliable solving strategy.
To handle problems of this complexity, new mathematical methods and software tools are
required. In recent years, new approaches such as parallel adaptive multigrid methods and
corresponding software tools have been developed that allow treating problems of huge
complexity.
In the lecture we present a three dimensional model of signalling in neurons. First we show a
method for the reconstruction of the geometry of cells and subcellular structures as three
dimensional objects. With this tool, NeuRA, complex geometries of neurons were
reconstructed.
We further show simulations with a three dimensional active model of signal transduction in
the cell which is derived from the Maxwell equations and uses generalized Hodgkin-Huxley
fluxes for the description of the ion channels.
W51
Simultaneous modelling of the extracellular and innercellular
potential and the membrane voltage
Konstantinos Xylouris*1, Gillian Queisser2, Gabriel Wittum1
1 Center for Scientific Computing, Goethe University, Frankfurt, Germany
2 Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, University of Heidelberg,
Heidelberg, Germany
* [email protected]
In order to model initiation and propagation of action potentials, 1D cable theory
provides a fast and relatively accurate computational method. However, this theory faces
difficulties, if the extracellular potential and the membrane potential are to be computed at
the same time. This problem can be overcome if one couples cable theory with a
separate model for the extracellular potential as it is done in the “Line Source Method” (Gold
et al., 2006). Although such a method provides quite accurate results in the extracellular
action potential recordings, it appears difficult to unify cable theory's main assumption
(that the extracellular potential is zero) with a full 3D model, in which, on the membrane, the
extracellular potential is prescribed to equal the membrane voltage.
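For reference, the passive 1D cable equation at the core of the classical theory can be written as (standard textbook form; the notation is ours, not taken from the abstract):

    \lambda^2 \frac{\partial^2 V_m}{\partial x^2} = \tau_m \frac{\partial V_m}{\partial t} + V_m

where V_m is the membrane potential, \lambda the space constant and \tau_m the membrane time constant. The extracellular potential is taken to be zero here, which is exactly the assumption the full 3D model presented below drops.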
Starting with the balance law of charges, a model of an active cell is presented which
considers the full 3D structure of the cell and the extracellular potential in the computation of
the membrane potential. Based on such a model it is possible to carry out simulations in
which the extracellular potential and the membrane potential can be simultaneously
recorded.
Such a model might be useful to examine interactions between the extracellular space and
the membrane potential. Moreover, a concept is presented for how the model can be extended
in order to couple 1D structures with 3D ones. This approach can be used to focus on the
detail without a great loss of efficiency.
Neural encoding and decoding
W52
Cochlear implant: from theoretical neuroscience to clinical
application
Andreas Bahmer*1, Gerald Langner5, Werner Hemmert24, Uwe Baumann3
1 Audiological Acoustics, Goethe University, Frankfurt, Germany
2 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
3 Goethe-University, Frankfurt, Germany
4 Institute of Medical Engineering, Technical University Munich, Munich, Germany
5 Technical University Darmstadt, Darmstadt, Germany
* [email protected]
Cochlear implants are the first and until now the only existing prosthesis that can restore a
malfunctioning sensory organ – the inner ear – nearly completely. After implantation and a
period of rehabilitation, most previously deaf patients are able to use the telephone or listen
to the radio with their cochlear implant system. However, although top performing cochlear
implant subjects understand speech nearly perfectly in quiet, large difficulties remain in
acoustically complex environments. These difficulties are probably due to the rather artificial
electrical stimulation from distinct locations of the electrode. We therefore propose
stimulation techniques which account for neurophysiological and neuroanatomical properties
not only of the auditory nerve but also of the subsequent cochlear nucleus. The cochlear
nucleus contains a variety of cells that combine different encoding mechanisms. Chopper
neurons, which are the main projecting cells to the subsequent ascending auditory system,
form an important shunting yard in the cochlear nucleus. The periodicity encoding of these
cells is outstanding within the cochlear nucleus because they represent frequency
tuning and periodicity at the same time by integrating broadband input [1].
We have carried out simulations of a physiologically inspired neuronal network including
chopper neurons. In our simulation, chopper neurons receive input from both auditory nerve
fibers and onset neurons [2,3]. With this topology, the model has the advantage of
explaining the large dynamic range of periodicity encoding of chopper neurons in
combination with their narrow frequency tuning. Like the models investigated previously, the
present model is able to simulate interspike intervals of spike trains of the chopper
responses with high precision [3]. Moreover, the simulation can explain essential properties
of real chopper neurons by an additional input from onset neurons. Simulations show that
variations of the integration widths of onset neurons result in corresponding variations of
the spectral resolution and periodicity encoding of chopper neurons [3,4]. Physiological
evidence supports our assumption that periodicity information coded by chopper neurons is
conveyed via onset neurons [1]. These simulations motivated a test of a new stimulation
paradigm for cochlear implants.
To investigate the influence of the width of the area of stimulation on the accuracy of
temporal pitch encoding, synchronous multi-electrode stimulation with biphasic electrical
pulse trains was compared to single-electrode stimulation.
Temporal pitch discrimination performance was determined by means of a 2-AFC
experiment in human cochlear implant subjects at different base rates (100, 200, 283, 400,
566 pps) in both conditions (single- vs. multi-electrode). Overall performance deteriorated
with increasing base rate. Although multi-electrode parallel stimulation showed significantly
improved pitch discrimination in some subjects at certain base rates, no general
enhancement compared to single electrode performance appeared. We will discuss whether
the entrainment of the auditory nerve spike pattern to electrical pulsatile stimulation is
responsible for the lack of pitch discrimination benefit in the multi-electrode parallel
condition.
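How per-condition 2-AFC scores could be tested against chance is sketched below (our illustration; the binomial test, names, and the use of SciPy >= 1.7 are assumptions, and the study's actual statistics may differ):

    from scipy import stats

    def above_chance(n_correct, n_trials, chance=0.5, alpha=0.05):
        # One-sided binomial test of 2-AFC performance against chance,
        # run separately per base rate and per stimulation condition.
        result = stats.binomtest(n_correct, n_trials, chance, alternative='greater')
        return result.pvalue < alpha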
References:
[1] R. D. Frisina et al. 1990. Hear Res, 44, 99-122
[2] A. Bahmer and G. Langner 2006. Biol Cybern, 95, 371-379
[3] A. Bahmer and G. Langner 2006. Biol Cybern, 95, 381-392
[4] G. Langner 2007. Z Audiol 46(1), 8-21
W53
Feature-based attention biases perception of motion direction
Matthew Chalk*1, Aaron Seitz2, Peggy Seriès1
1 Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh
University, Edinburgh, UK
2 Psychology Department, University of California, Riverside, USA
* [email protected]
To some extent, what we see depends on what we are trying to do. How perception is
affected by the behavioral task that is being performed is determined by top-down attention.
While it has long been known that visual attention increases the sensitivity of perception
towards an attended feature or spatial location (Downing 2008), recent psychophysical
experiments suggest that spatial attention can also qualitatively change the appearance of
visual stimuli, for example by changing the perceived contrast (Carrasco et al 2004),
stimulus size (Anton-Erxleben et al. 2007) or spatial frequency (Gobell & Carrasco 2005).
To try and understand these findings, we considered a simple encoder-decoder cascade
model of perception, in which the encoder represents the response of a population of
sensory neurons, and the decoder represents the transformation from this population activity
into a perceptual estimate (Seriès et al, in press). Top-down attention was modelled by
increasing the gain of neurons tuned towards an attended feature or location.
In the case where the readout is fixed and the encoder is changing (an 'unaware decoder'),
our model predicts that feature-based attention should lead to perceptual biases, where
stimulus features are perceived as being more similar to the attended feature than they
actually are.
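This prediction can be reproduced with a toy version of the model (our illustration, not the authors' code): boost the gain of neurons tuned near an 'attended' direction and decode with a fixed population-vector readout.

    import numpy as np

    prefs = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # preferred directions

    def population_response(theta, attended=None, gain=1.3, kappa=2.0):
        r = np.exp(kappa * (np.cos(theta - prefs) - 1))    # bell-shaped tuning
        if attended is not None:                           # feature-based gain boost
            r *= 1 + (gain - 1) * np.exp(kappa * (np.cos(attended - prefs) - 1))
        return r

    def unaware_decoder(r):                                # fixed readout
        return np.angle(np.sum(r * np.exp(1j * prefs)))

    stim, attended = np.deg2rad(30), np.deg2rad(60)
    estimate = np.rad2deg(unaware_decoder(population_response(stim, attended)))
    # estimate lies between 30 and 60 degrees: the perceived direction is
    # biased toward the attended one, as the 'unaware decoder' predicts.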
We investigated the predictions of our model by conducting psychophysical experiments
looking at whether feature-based attention induces biases in the perception of motion
direction. Subjects were presented with either low contrast moving dot stimuli or a blank
screen. On each trial they performed an estimation task (reporting the direction of motion)
followed by a detection task (reporting whether the stimulus was present). We invoked
feature-based attention towards two different directions of motion by presenting stimuli
moving in these directions more frequently. Performance in the detection task was
significantly better for the two more frequently presented motion directions, indicating that
subjects were indeed attending to these directions of motion.
As predicted by our model, we found that subjects’ estimates of motion direction were biased
towards the two ‘attended’ directions and we were able to discount a simple response bias
explanation for this finding. As well as providing strong support for our ‘unaware decoder’
model of attention, these results are in accordance with Bayesian models, where selective
attention represents prior beliefs about the expected stimuli (Yu and Dayan 2005, Dayan
and Zemel 1999). Finally, on trials where no stimulus was presented, but where subjects
reported seeing a stimulus, their estimates of motion direction were strongly biased towards
the two ‘attended’ directions.
In contrast, no such effect was observed on trials where subjects did not report seeing a
stimulus, arguing against a pure response bias explanation for this effect. Thus, in common
with perceptual learning, where subjects often report seeing dots moving in the trained
direction when no stimulus is presented (Seitz et al. 2005), and in accordance with our
model, feature-based attention can make us see an attended feature, even when it is not
there.
W54
Reproducibility – a new approach to estimating significance of
orientation and direction coding
Agnieszka Grabska-Barwinska*1, Benedict Shien Wei Ng2, Dirk Jancke1
1 Bernstein Group for Computational Neuroscience Bochum, Ruhr-University Bochum,
Bochum, Germany
2 International Graduate School of Neuroscience, Ruhr-University Bochum, Bochum,
Germany
* [email protected]
We propose a method for estimating the reliability of orientation (or direction) coding by
examining the reproducibility of the preferred orientation (direction) measured across trials. The
resulting normalized measure, with values between 0 and 1, is easily transformed to p-values, providing explicit statistical significance information for the orientation (direction) coding
of the recorded response.
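One way to realize such a measure is sketched below (our reading; the authors' exact estimator may differ): take the preferred orientation estimated on each trial, use the length of the mean resultant vector across trials as the reproducibility score in [0, 1], and convert it to a p-value by permutation.

    import numpy as np

    def reproducibility(pref_angles, period=np.pi, n_perm=10000, seed=0):
        # pref_angles: preferred orientation per trial (radians; period = pi
        # for orientation, 2*pi for direction). Returns (R, p-value).
        rng = np.random.default_rng(seed)
        z = np.exp(2j * np.pi * np.asarray(pref_angles) / period)
        R = np.abs(z.mean())                   # 1 = perfectly reproducible
        null = np.abs(np.exp(2j * np.pi * rng.random((n_perm, len(z)))).mean(axis=1))
        return R, (null >= R).mean()           # chance of reaching R with random angles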
Selectivity to orientation of contours (or direction of their motion) has been a thoroughly
studied feature of the visual system. A standard experimental procedure involves recording a
sequence of responses to a clearly oriented pattern (for example – sinusoidal gratings),
while varying the orientation angle.
A number of methods to estimate a preferred orientation were proposed (Swindale, 1998),
with each model function providing a different measure of orientation selectivity. The more
intuitive ones – like the width of a Gaussian fitted to the response curve – require a fine sampling of
the orientation (direction) domain. The frequently used OSI (Orientation Selectivity Index)
strongly depends on the measurement type and the signal-to-noise ratio. In contrast, our
approach is applicable to any kind of signal and does not require sophisticated fitting
methods. We present results from both electrophysiology and optical imaging recordings.
W55
Multi-electrode recordings of delay lines in nucleus laminaris of
the barn owl
Nico Lautemann*2, Paula Kuokkanen4, Richard Kempter1, Hermann Wagner2,3
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Department for Zoology and Animal Physiology, Rheinisch-Westfaelische Technische
Hochschule, Aachen, Germany
3 Department of Biology, University of Maryland, College Park, USA
4 Institute for Theoretical Biology, Humboldt University, Berlin, Germany
* [email protected]
Barn owls (Tyto alba) are nocturnal hunters that are able to catch their prey in complete
darkness by using auditory cues. The cue used to localize the azimuthal position of a sound
source is the interaural time difference (ITD). ITD is the difference of the arrival time of a
sound at the two ears. The time pathway computing the ITD starts in the cochlear nucleus
magnocellularis (NM). The axons of NM neurons project bilaterally to nucleus laminaris (NL),
making NL the first binaural stage in the time pathway. The NL neurons are narrowly tuned
to sound frequency and act as coincidence detectors. Simultaneous inputs from the right
and left side cause the neurons to be maximally active. Their firing frequency changes
periodically depending on an imposed phase shift between the left and right inputs.
Nucleus laminaris contains both a tonotopic map and a map of ITD. The projections from
the ipsi- and contralateral NM are supposed to form delay lines. The ipsilateral axon
collaterals contact and penetrate NL from dorsal, while the contralateral axon collaterals run
on the ventral side and traverse NL from ventral to dorsal. In the barn owl the map of ITD
results from the synapses of the axon collaterals with NL neurons at different dorso-ventral
depths. In this way a time-code, present in the NM collaterals, is converted into a place-code
in NL neurons.
The key elements and features of such a sound-localization circuit have been proposed by
Jeffress in 1948 [1]. Since then a large amount of evidence has been accumulated,
supporting the hypothesis that this model is realized in birds. However, the existence of
delay lines in the barn owl has not yet been directly shown. To do so, we used acute coronal
slice preparations of the NM-NL circuit of the barn owl brainstem and recorded the
extracellular multi-unit activity in the network at many different positions with planar multi-electrode arrays (MEA) while electrically stimulating the NM fibers (Fig. 1A). We
demonstrate the propagation of the response along and within NL, directly showing the
existence of delay lines (Fig. 1B). The delays inside and outside of NL were quantified by
determining propagation velocities, showing different propagation velocities of the fibers in- and outside NL. Since the network is still developing in the first weeks, we used animals of
different ages (P2-P11) to study the maturation of the delay lines, taking place in the first
days post hatch.
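The velocity quantification could be sketched as follows (our assumption about the data layout: one response latency per electrode position along the propagation axis):

    import numpy as np

    def propagation_velocity(positions_mm, latencies_ms):
        # Least-squares fit latency = position / velocity + offset; the slope
        # is the inverse propagation velocity (1 mm/ms equals 1 m/s).
        slope, _offset = np.polyfit(positions_mm, latencies_ms, 1)
        return 1.0 / slope

Fitting this separately to electrodes inside and outside NL would yield the two propagation velocities that are compared in the abstract.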
References:
[1] Jeffress L. A. (1948) A place theory of sound localization. J. Comp. Physiol. Psychol. 41,
35-39.
[2] Carr CE and Konishi M (1988). Axonal delay lines for time measurement in the owl's
brainstem. Proceedings of the National Academy of Sciences 85: 8311-8315.
W56
Invariant representations of visual streams in the spike domain
Aurel Lazar*1, Eftychios A. Pnevmatikakis1
1 Department of Electrical Engineering, Columbia University, New York, USA
* [email protected]
We investigate a model architecture for invariant representations of visual stimuli such as
natural and synthetic video streams (movies, animation) in the spike domain. The stimuli are
encoded with a population of spiking neurons, processed in the spike domain and finally
decoded. The population of spiking neurons includes interconnected neural circuits with level
crossing spiking mechanisms as well as integrate-and-fire neuron models with feedback. A
number of spike domain processing algorithms are demonstrated including faithful stimulus
recovery, as well as simple operations on the original visual stimulus such as translations,
rotations and zooming. All these operations are executed in the spike domain. Finally, the
processed spike trains are decoded and the faithful recovery of the stimulus and its
transformations is obtained.
We show that the class of linear operations described above can easily be realized with the
same basic stimulus decoding algorithm [1]. What changes in the architecture, however, is
the switching matrix (i.e., the input/output "wiring") of the spike domain switching building
block. For example, for a particular setting of the switching matrix, the original stimulus is
faithfully recovered. For other settings, translations, rotations and dilations (or combinations
of these operations) of the original video stream are obtained. The implementability of these
elementary transformations originates from the structure of the neuron receptive fields that
form an overcomplete spatial (or spatiotemporal) filterbank.
Our architecture suggests that identity-preserving transformations between different layers
of the visual system are easily obtained by changing the connectivity between the different
neural layers. Combinations of the aforementioned elementary transformations realize any
linear transformation (e.g., zoom into a particular region). This addresses the
correspondence problem of identifying equivalent stimuli while constantly changing visual
fixations.
Furthermore, our architecture generates in real-time the entire object manifold [2]. The latter
is obtained by a set of identity-preserving transformations, and thereby, it is invariant with
respect to (essentially) arbitrary translations, rotations and zooming. By computing the object
manifold in real-time, the problem of object recognition is therefore mapped into one of
determining whether any arbitrary stored object belongs to the just-computed object manifold
[3].
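The switching-matrix idea can be caricatured in a few lines (our toy version, not the authors' implementation): if the decoder reconstructs the stimulus as a weighted sum of fixed basis functions, then rewiring which coefficient drives which basis function applies a linear operator to the reconstruction.

    import numpy as np

    n = 128
    phi = np.eye(n)                          # toy decoding basis, one column per filter
    c = np.random.default_rng(1).random(n)   # coefficients decoded from spike trains

    identity = np.eye(n)                     # switching matrix: faithful recovery
    shift = np.roll(np.eye(n), 8, axis=0)    # switching matrix: translation by 8 samples

    recovered = phi @ (identity @ c)
    translated = phi @ (shift @ c)           # same decoder, rewired inputs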
Acknowledgements:
The work presented here was supported in part by NIH under grant number R01 DC008701-01 and in part by NSF under grant number CCF-06-35252. E.A. Pnevmatikakis was also
supported by the Onassis Public Benefit Foundation.
References:
[1] A.A. Lazar and E.A. Pnevmatikakis. A Video Time Encoding Machine. Proc. IEEE Intl.
Conf. on Image Processing, 717-720, San Diego, CA, 2008.
[2] J.J. DiCarlo and D.D. Cox. Untangling Invariant Object Recognition. Trends in Cognitive
Sciences, 11(8):333-341, 2008.
[3] D.W. Arathorn. Map-Seeking Circuits in Visual Cognition: A Computational Mechanism
for Biological and Machine Vision. Stanford University Press, 2002.
W57
Kalman particle filtering of point process observations
Yousef Salimpour*1
1 School of Cognitive Science, Institute for Research in Fundamental Sciences, Tehran, Iran
* [email protected]
Recording neural responses to a specific stimulus over repeated trials is very common in
neuroscience protocols. The peristimulus time histogram (PSTH) is a standard tool for the
analysis of neural responses. However, it cannot capture the non-deterministic properties of
neurons, especially in higher-level cortical areas such as inferior temporal cortex. The
stochastic state point process filter theory is used for the estimation of the conditional
intensity of the point process observation as a time-varying firing rate, and a particle filter is
used to numerically estimate this density over time. Kalman particle filters are applied to
the point process observations of the spiking activities of the neurons to compensate for the
Gaussian assumption. The results of applying point process modeling to real data from
inferior temporal cortex of macaque monkey indicate that, based on the assessment of
goodness-of-fit, the stimulus-modulated response and biophysical properties of neurons can be
captured more accurately than with conventional methods.
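A minimal bootstrap particle filter for point-process observations is sketched below (our simplifying assumptions: random-walk log-intensity, Poisson bin counts; the Kalman-style proposal that gives the Kalman particle filter its name is omitted):

    import numpy as np

    def particle_filter(spike_counts, dt=0.001, n_particles=1000, q=0.05, seed=0):
        rng = np.random.default_rng(seed)
        x = np.zeros(n_particles)               # particles: latent log-intensity
        rate_estimates = []
        for y in spike_counts:                  # y: spike count in this time bin
            x = x + q * rng.standard_normal(n_particles)   # random-walk prior
            lam = np.exp(x) * dt                # expected count per particle
            w = np.exp(y * np.log(lam) - lam)   # Poisson likelihood (up to y!)
            w /= w.sum()
            rate_estimates.append(float(np.exp(x) @ w))    # posterior mean rate
            x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
        return np.array(rate_estimates)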
Acknowledgements:
This work was supported by the Computational Neuroscience and Neural Engineering group
in School of Cognitive Sciences, Institute for Studies in Fundamental Sciences (IPM). The
neural data was provided by Dr. Esteky, the director of IPM VisionLab.
W58
Decoding perceptual states of ambiguous motion from high
gamma EEG
Joscha Schmiedt*1, David Rotermund1, Canan Basar-Eroglu2
1 Department for Theoretical Physics, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
2 Institute of Psychology and Cognition Research, Universität Bremen, Bremen, Germany
* [email protected]
Recently, it was shown that the perceptual experience of a viewer can be tracked using
multivariate analysis on non-invasive functional magnetic resonance imaging (fMRI) data. In
these experiments time series of three-dimensional images related to brain activity were
successfully classified using machine learning methods like Support Vector Machines
(SVM). In a similar line of research, cognitive states were distinguished in individual trials,
such as the two possible perspectives in binocular rivalry. In this project we investigate if and
how the bistable perception of a human viewer observing an ambiguous stimulus could be
decoded from electroencephalographic (EEG) time series. For this purpose, we classify the
direction of motion of the stroboscopic ambiguous motion (SAM) pattern, which is known to
be functionally related to oscillatory activity in the delta, alpha and gamma band of the EEG.
Taking advantage of the high temporal resolution of EEG data, we use SVMs that operate in
the time-frequency domain in order to study the oscillatory coding of an ambiguous visual
stimulus in the brain. Furthermore, by applying the same method to an unambiguous variant
of the SAM we aim to study the specific coding of ambiguous stimuli.
Our results show that it is possible to detect the direction of motion on a single trial basis
(data from 500 ms windows) with accuracy far above chance level (up to 69% with
significance at least p<0.001). The best classification performance is reached using high
frequency gamma-band power above 80 Hz, which suggests an underlying percept-related
neuronal synchronization. In contrast, for the unambiguous stimulus variant no specific
frequency band allows decoding, which possibly indicates the existence of a gamma-related
Gestalt interpretation mechanism in the brain. Our findings demonstrate that dynamical
mechanisms underlying specific mental contents in the human brain can be studied using
modern machine learning methods, extending conventional EEG research, which uses
average quantities to spatially and temporally localize cognitive features.
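In outline, such a decoding pipeline could look like this (our sketch with scikit-learn; the study's exact feature extraction and validation scheme differ in detail):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # X: trials x features, e.g. log band power per channel and frequency bin
    # computed from 500 ms windows; y: perceived motion direction per trial.
    # Placeholder data stands in for the real EEG features here.
    rng = np.random.default_rng(0)
    X = rng.random((200, 64))
    y = rng.integers(0, 2, 200)

    accuracy = cross_val_score(SVC(kernel='linear'), X, y, cv=10).mean()
    # Accuracies well above the 50% chance level on the real features would
    # indicate that the percept can be read out from the chosen frequency band.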
W59
Learning binocular disparity encoding simple cells in a model of
primary visual cortex
Mark Voss*2, Jan Wiltschut2, Fred H Hamker1
1 Computer Science Department, Technical University Chemnitz, Chemnitz, Germany
2 Psychologisches Institut II, Westfälische Wilhelms Universität, Münster, Germany
* [email protected]
The neural process of stereoscopic depth discrimination is thought to be initiated in the
primary visual cortex. So far, most models incorporating binocular disparity in primary visual
cortex build upon constructed, disparity encoding neurons (e.g. Read, Progress in Molecular
Biology and Biophysics, 2004), but see (Hoyer & Hyvarinen, Network, 2000) for applying ICA
to stereo images. Although these artificially constructed neurons can take into account
different types of binocular disparity encoding, namely by position or phase, and can cover a
defined range of disparities, they give no insight into the development of structural and
functional patterns in primary visual cortex and depict a very limited subset of neurons that
might contribute to disparity encoding. Here, we have extended our monocular model of
primary visual cortex with nonlinear dynamics and Hebbian learning (Wiltschut & Hamker,
Vis. Neurosci., 2009) to binocular vision. After presenting natural stereo scenes to our
model, the learned neurons show disparity tuning in diverse degrees and with complex
structure. We observe different types of near- and far-tuned, oriented receptive fields similar
to those observed in V1. As compared to ICA, our model appears to provide a better fit
to physiological data. We conclude that unsupervised Hebbian learning provides a useful
model to explain the development of receptive fields, not only in the orientation and spatial
frequency domain but also with respect to disparity.
W60
Models of time delays in the gamma cycle should operate on the
level of individual neurons
Peng Wang*2, Martha Havenith2, Micha Best2, Wolf Singer2,1, Peter Uhlhaas2
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
Neural synchronization is observed across numerous experimental paradigms, species and
measurement methods. Recent results suggest that small time delays among synchronized
responses can convey information about visual stimuli [1, 2], which becomes an interesting
problem for an implementation in the models of cortical dynamics. We found evidence that
this temporal code operates at the level of individual neurons and not at the level of larger
anatomical structures such as the hyper-columns or brain areas. Delays between signals
recorded from spatially distant electrodes (e.g., electrodes of scalp EEG, 1 to 5 cm
separation) were compared to delays between signals obtained from more proximal
electrodes (either ~2 mm apart between two Michigan probes or 200-800 microns apart,
within a single probe). We also compared the delays between different types of signals,
ranging from single-unit (SU) and multi-unit activity (MU) to local-field potentials (LFP) and
EEG.
An increase in the spatial distance between electrodes did not increase the delays between
the signals. Thus, when the signals from distant electrodes were synchronized at gamma
frequencies, the associated delays were about as large as those between neighboring
electrodes. Instead, the variable that affected most strongly the magnitudes of the delays
was the type of the signal used in the analysis. The fewer neurons contributed to a given
signal, the larger were the overall delays. Hence, SUs exhibited larger delays than MUs,
which in turn exhibited larger delays than LFPs. The smallest delays were observed for scalp
EEG despite the fact that these electrodes were segregated spatially to the highest extent
(Figure 1).
Similar results were obtained with respect to stimulus-induced changes in the time delays.
The strongest effects were found for SUs, and the effects gradually decreased as the
analysis shifted progressively towards signals with ever larger numbers of contributing
neurons, i.e., MU, LFP and EEG. Again, an increase in the distance between the electrodes
did not augment the effects.
These results suggest that only individual neurons adjust the time at which they fire relative
to the ongoing activity. An entire hyper-column or a brain area will usually not be activated
earlier than another hyper-column or a brain area. Thus, models of time delays within a
gamma cycle should restrict the operation of this putative brain code to the level of individual
neurons, which, in the case of distant synchronization, may also be spread over a range of
cortical areas. Moreover, in these models, the conduction delays between distant brain areas
should not be responsible for the induction of the delays in synchronization.
W61
Effects of attention on the ability of MST neurons to signal
direction differences of moving stimuli
Lu Zhang*2,1, Daniel Kaping2, Sonia Baloni2,1, Stefan Treue2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 German Primate Center, Göttingen, Germany
* [email protected]
The allocation of spatial attention has been reported to improve the ability of orientation-selective neurons in area V4 to signal the orientation of visual stimuli (McAdams & Maunsell,
2002). We similarly studied how attention affects stimulus discriminability of MST neurons
which have a characteristic and prominent tuning for direction in spiral motion space (SMS).
SMS has been introduced by Graziano et al. (1994) as a circular dimension that considers
expansion, clockwise rotation, contraction and counterclockwise rotation as neighboring
stimuli in this space, with a continuum of stimuli in between these cardinal directions.
We recorded SMS responses from MST neurons of two macaque monkeys. The monkeys
were trained to attend to one SMS random dot pattern (RDP) target stimulus in the presence
of another RDP (distractor) while maintaining their gaze on a fixation point. One of the RDPs
was placed in the receptive field (RF) while the other was placed outside the RF.
In two different conditions behaviorally relevant target stimuli either inside or outside the RF
moved in one of twelve possible SMS directions in the presence of a distractor stimulus. The
monkeys reported a speed change within the target stimulus while ignoring all changes
within the distractor stimulus. The tuning profile of individual MST neurons and therefore the
response of populations of such neurons can be well fitted by a Gaussian function. These fitted
tuning curves, together with the variability of responses to repetitions of the same stimulus
under the same behavioral condition, allow for a quantitative comparison of neuronal
responses and stimulus discriminability for behaviorally relevant (attend-in) or irrelevant
(attend-out) stimuli at different spatial positions.
We computed the discriminability, defined as the slope of the tuning curve divided by the
response variance, for 119 MST neurons for the attend-in vs. attend-out condition. Attention
improved the direction discriminability of individual MST neurons on average by about 30%.
We previously reported an attentional gain modulation that increased the amplitude of the
tuning curves by the same factor without affecting tuning width. Here we additionally
observed that the relationship between the neural response magnitude and response
variance (Fano factor) was unaffected by the attentional condition. These observations
indicate that the enhancement of direction discriminability by spatial attention in MST is
entirely accounted for by an attentional effect on response gain.
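With a Gaussian tuning fit, the discriminability measure used here has a closed form (our sketch; parameter names are assumptions):

    import numpy as np

    def discriminability(theta, A, mu, sigma, response_variance):
        # Slope of the fitted tuning curve A * exp(-(theta - mu)^2 / (2 sigma^2))
        # at stimulus direction theta, divided by the response variance.
        slope = -A * (theta - mu) / sigma**2 * np.exp(-(theta - mu)**2 / (2 * sigma**2))
        return np.abs(slope) / response_variance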
Acknowledgements:
This work was supported by grant 01GQ0433 from the Federal Ministry of Education and
Research to the Bernstein Center for Computational Neuroscience Goettingen.
Neurotechnology and brain computer interfaces
W62
A new device for chronic multielectrode recordings in awake
behaving monkeys
Orlando Galashan1, Hanna Rempel1, Andreas K Kreiter1, Detlef Wegener*1
1 Brain Research Institute, Department of Theoretical Neurobiology, University of Bremen,
Bremen, Germany
* [email protected]
Neurophysiological studies on brain function often require data to be obtained from many neurons
at the same time, and accordingly, several techniques for chronic implantation of multi-electrode arrays have been developed. However, disadvantages of many of these
techniques are that they (a) do not allow for controlled movement of electrodes, or allow
movement in one direction only; (b) do not allow for fast and easy replacement of electrodes;
(c) have been designed for electrophysiological measurements in the cortex of small animals
(rodents and birds) and are not suitable for work with non-human primates, and (d) are
either difficult to produce or very expensive.
We here present a new micro-drive array that overcomes these limitations and permits
chronic recordings of single cell activity and local field potentials over prolonged periods of
time. The system fulfills the specific requirements for multi-electrode recordings in awake
behaving primates. It allows for movement of electrodes in forward and backward directions
in small steps and over a distance within the tissue of up to 10 mm. The entire set of
electrodes can be exchanged in a very short time and without the need for any additional
surgical procedure or anesthetic intervention, and electrodes can be (re-)inserted into the
cortex in a precisely defined manner. The micro-drive array permits sterile closure of the
trepanation, is of low cost and can easily be produced.
We present first data obtained with the new array to demonstrate the functionality of the system.
Neuronal signals were recorded from primary visual cortex over a period of three months.
We demonstrate that the system allows for stable and reproducible recordings of population
receptive fields at different depths of the visual cortex and over many recording sessions.
Single cell activity was recorded even after many weeks following initial implantation of the
array.
W63
Decoding neurological disease from MRI brain patterns
Kerstin Hackmack*1, Martin Weygandt1, John-Dylan Haynes1,2
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Charité-Universitätsmedizin, Berlin, Germany
* [email protected]
Recently, pattern recognition approaches have been successfully applied in the field of
clinical neuroimaging in order to differentiate between clinical groups [1]. Against this
background, we present a fully automated procedure using local brain tissue characteristics
of structural brain images for the prediction of the subjects’ clinical condition.
We proceeded as follows. After segmenting the images into grey and white matter we
applied a first statistical analysis referred to as voxel-based morphometry [2,3]. Here,
standard statistical procedures are employed to make a voxel-wise comparison of the local
concentration of grey and white matter between clinical groups. The result is a statistical
parametric map indicating differences between these groups. In order to classify the
segmented images into patient or control group, we used a two-stage procedure. In the first
step, independent classifiers are trained on local brain patterns using a searchlight approach
[4,5]. By employing a nested cross-validation scheme we obtained accuracy maps for each
region in the brain. In the second step, we used an ensemble approach to combine the
information of best discriminating (i.e. most informative) brain regions in order to make a final
decision towards the clinical status for a novel image. The ensemble-method was chosen,
since it has been shown that classifier-ensembles tend to have better generalization abilities
compared to individual classifiers [6]. To predict symptom severity, a further regression
analysis within the clinical group with respect to different clinical markers was included.
To the best of our knowledge, this is the first pattern recognition approach that combines local
tissue characteristics and ensemble methods to decode clinical status. Because multivariate
decoding algorithms are sensitive to regional pattern changes and therefore provide more
information than univariate methods, the identification of new regions accompanying
neurological disease seems conceivable, which could enable new clinical applications.
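At the pseudocode level, the two-stage procedure could be sketched as follows (classifier choice, names, and data layout are our assumptions):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def searchlight_accuracy_map(patches, labels, cv=5):
        # patches: dict region_id -> (subjects x voxels) local pattern matrix.
        return {rid: cross_val_score(SVC(), X, labels, cv=cv).mean()
                for rid, X in patches.items()}

    def ensemble_predict(new_patches, trained_models, top_regions):
        # Majority vote over classifiers trained on the most informative regions.
        votes = [trained_models[rid].predict(new_patches[rid]) for rid in top_regions]
        return np.round(np.mean(votes, axis=0))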
Acknowledgements:
This work was funded by the German Research Foundation, the Bernstein Computational
Neuroscience Program of the German Federal Ministry of Education and Research and the
Max Planck Society.
References:
[1] Klöppel, S. et al., 2008. Brain, 131, 681-689
[2] Ashburner, J. et al., 2000. NeuroImage, 11, 805-821
[3] Good, C.D. et al., 2001. NeuroImage, 14, 21–36
[4] Haynes, J.D. et al., 2007. Curr Biol, 17, 323-328
[5] Kriegeskorte, N. et al., 2006. Proc. Natl Acad. Sci. USA, 103, 3863–3868
[6] Martinez-Ramon, M. et al., 2006. NeuroImage, 31, 1129-1141
W64
Effect of complex delayed feedback in a neural field model
Julien Modolo*1, Julien Campagnaud1, Anne Beuter1
1 Laboratoire de l'Intégration du Matériau au Système, Centre national de la recherche
scientifique, Université Bordeaux 1, Bordeaux, France
* [email protected]
Therapeutic modulation of cerebral activity holds promise for symptomatic treatment of
neurodegenerative disorders such as Parkinson’s disease. Indeed, neurodegenerative
disorders are characterized by identified changes in neural activity at the cortical or subcortical levels, which may be reversed with appropriate therapeutic interventions. A well-known example is deep brain stimulation in Parkinson's disease. One challenge is to
propose new stimulation patterns, applied preferably to the cortex to minimize invasiveness,
and designed to target selectively predetermined brain rhythms while minimizing interference
with physiological brain activity. As a step towards this goal, we first study a neural field
model where a closed-loop stimulation term (i.e., a feedback loop is added to the system) is
present.
We derive a closed-loop stimulation term called complex delayed feedback since it includes:
(1) a distributed delayed contribution of the potential; (2) the derivative of the undesirable
component of the potential as well as the undesirable component itself (see supplementary
Eq. 1). This closed-loop stimulation is designed to attenuate target frequency bands, while
taking under consideration constraints such as spatial and temporal selectivity, robustness
and minimized interference with physiological brain rhythms. Second, we perform a linear
stability analysis of the neural field model with a limit case of complex delayed feedback,
namely linear delayed feedback (Rosenblum and Pikovsky, 2004; see supplementary Eq. 2),
and derive the dispersion relation between temporal and spatial modes of cortical waves
(see supplementary Eq. 3).
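In this linear limit case the feedback term takes the Rosenblum-Pikovsky form (our rendering, since the supplementary equations are not reproduced in this volume; K denotes the feedback gain and tau the loop delay):

    S(x,t) = K \left[ V(x, t - \tau) - V(x,t) \right]

Linearizing the field equations with this term then yields the dispersion relation between temporal and spatial modes referred to above.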
Our results indicate that linear delayed feedback selectively attenuates the propagation of
cortical waves at given frequencies, depending on the feedback loop delay (see
supplementary Eqs. 4 and 5). Consequently, it counteracts neuronal coupling and abnormal
synchronization at these frequencies, without affecting individual dynamics. Furthermore, our
results suggest a more selective modulation of neuronal activity in which the dynamics of
neuronal groups, as well as their coupling, may be affected. This modulation minimizes
energy consumption by stimulating only where and when needed. Principles based on this
approach may be exploited for the development of future stimulation devices interacting in a
closed-loop manner with cortical tissue in the case of Parkinson’s disease. Another
consequence of this work is that if frequency bands can be attenuated, they might also be
augmented. The consequences of frequency band augmentation on human behavior
remain to be explored.
Acknowledgements:
The authors thank Axel Hutt and Roderick Edwards for useful discussions. This work is
supported by the European Network of Excellence BioSim LSHB-CT-2004-005137.
Probabilistic models and unsupervised learning
W65
Applications of non-linear component extraction to spectrogram
representations of auditory data.
Jörg Bornschein*1, Jörg Lücke1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
The state-of-the-art in component extraction for many types of data is based on variants of
models such as principle component analysis (PCA), independent component analysis
(ICA), sparse coding (SC), factor analysis (FA), or non-negative matrix factorization (NMF).
These models are linear in the sense that they assume the data to consist of linear superpositions of hidden causes, i.e., these models try to explain the data with linear superpositions of generative fields. This assumption becomes obvious in the generative
interpretation of these models [1].
For many types of data, the assumption of linear component super-positions represents a
good approximation. An example is the superposition of air-pressure waveforms. In
contrast, we here study auditory data represented in the frequency domain. We consider
data similar to those processed by the human audio system just after the cochlea. Such data
is closely aligned with the log-power-spectrogram representations of auditory signals. It is
long known that the superposition of data components in these data is non-linear and well
approximated by a point-wise maximum of the individual spectrograms [2].
For component extraction from auditory spectrogram data we therefore investigate learning
algorithms based on a class of generative models that assume a non-linear superposition of
data components. The component extraction algorithm of Maximal Causes Analysis (MCA;
[3]) assumes a maximum combination where other algorithms use the sum. Training such
non-linear models is, in general, computationally expensive but can be made feasible using
approximation schemes based on Expectation Maximization (EM). Here we apply an EM
approximation scheme that is based on the pre-selection of the most probable causes for
every data-point. The approximation results in approximate maximum likelihood solutions,
reduces the computational complexity significantly while at the same time allowing for an
efficient and parallelized implementation running on clustered compute nodes.
To evaluate the applicability of non-linear component extraction to auditory spectrogram
data, we generated training data by randomly choosing and linearly mixing waveforms from
a set of 10 different phonemes (sampled at 8000 Hz). We then applied an MCA algorithm
based on EM and pre-selection. The algorithm was presented only the log-spectrograms of
the mixed signals. Assuming Gaussian noise, the algorithm was able to extract the log-spectrograms of the individual phonemes. We obtained similar results for different forms of
phoneme mixtures, including mixtures of three randomly chosen phonemes.
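The log-max approximation invoked above is easy to verify numerically (our illustration; cross terms between the mixed signals are ignored):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random(1000) + 0.1                   # power spectrum of component 1
    b = rng.random(1000) + 0.1                   # power spectrum of component 2

    exact = np.log(a + b)                        # log-spectrogram of the mixture
    approx = np.maximum(np.log(a), np.log(b))    # point-wise maximum

    print(np.max(exact - approx))                # error is bounded by log(2) ~ 0.69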
References:
[1] Theoretical Neuroscience, P. Dayan and L. F. Abbott, 2001
[2] Automatic Speech Processing by Inference in Generative Models, S. T. Roweis (2004),
Speech Separation by Humans and Machines, Springer, pp. 97-134. (Roweis cites
Moore, 1983, as the first to point out the log-max approximation)
[3] Maximal Causes for Non-linear Component Extraction, J. Lücke and M. Sahani (2008)
JMLR 9:1227-1267.
W66
Planning framework for the Tower of Hanoi task
Gülay Büyükaksoy Kaplan*3, Neslihan Serap Sengör2, I. Hakan Gürvit1
1 Department of Neurology, Behavioral Neurology and Movement Disorders Unit, Faculty of
Medicine, Istanbul University, Istanbul, Turkey
2 Electric Electronic Faculty, Electronic Engineering Department, Istanbul Technical
University, Istanbul, Turkey
3 TÜBITAK Marmara Research Center, Information Technologies Institute, Kocaeli, Turkey
* [email protected]
The Tower of Hanoi (ToH) task is one of the well-known tests to assess planning and
problem-solving abilities in clinical neuropsychology. The novelty of this work is to obtain a
computational model which can manage a planning task, in this case ToH. To
manage this task, the planning activity is thought to be achieved in two phases, named initial
and on-line. In the initial phase, the subject should consider the order of the main steps without
their detailed realisation (Figure 1-a). We call these steps subgoals. In the on-line
planning, the subject has to imagine the detailed steps which are needed to reach a subgoal
(Figure 1-b).
We developed a computational framework to accomplish the planning activities. In the
framework, the initial planning is carried out by an embedded mechanism; when it is broken,
the subject solves the problem with random movements. The on-line planning framework
generates possible disc movements for the current disc state and also evaluates the new
disc position's contribution to the solution. The contribution is graded with a state value.
In every ToH task, there are some states in which the move of discs is straightforward, such as
moving the biggest disc to the empty third rod. We also defined advantageous states, which
are one step away from the subgoal states. When the ToH tasks are applied successively,
the subject can remember, from earlier experiments, some moves which lead to one of the
subgoals or advantageous states.
In order to simulate this fact, learning is included in the framework. A reinforcement learning
(RL) method is used to simulate becoming familiar with executing certain moves in certain states.
RL also provides an evaluation of the state values (Figure 2). In the evaluation, the
movements are rewarded if the new state is an advantageous or a subgoal state, and punished
if it causes a repetition. This evaluation procedure corresponds to the inner satisfaction of the
subject when a subgoal is reached and also dissatisfaction due to repetition in vain. The
state value is determined by two attributes: the considered disc being free for movement and the
target place being available.
In this work, Tower of Hanoi with three and four discs is considered. During the
simulations, the possible moves are generated for the current state; if one of these moves
leads to an advantageous or subgoal state, this movement is executed and also
evaluated. These processes correspond to working memory activities and require a properly
functioning working memory.
For three-disc problems, the proposed working memory framework reaches the minimum-step
solution in successive test applications. For the four-disc problem, although successive
simulations improve the solution, the minimum-step solution could not be reached for some
starting states. In order to solve this problem, we increased the working memory capacity to
allow imagining the next three moves. In this way, four-disc problems can be solved
in the minimum number of steps. This study shows the correlation between working memory
capacity and achievement in the ToH tasks.
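The reward scheme described above maps onto a standard temporal-difference value update (our sketch; the predicate sets and constants are assumptions):

    def update_value(V, s, s_next, visited, subgoals, advantageous,
                     alpha=0.1, gamma=0.9):
        # Reward shaping as described: positive for reaching a subgoal or an
        # advantageous state, negative for revisiting ("repetition in vain").
        if s_next in subgoals or s_next in advantageous:
            r = 1.0
        elif s_next in visited:
            r = -1.0
        else:
            r = 0.0
        # TD(0) update of the state value; V is a dict mapping state -> value.
        V[s] = V.get(s, 0.0) + alpha * (r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0))
        return V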
W67
Robust implementation of a winner-takes-all mechanism in
networks of spiking neurons
Stefano Cardanobile*1, Stefan Rotter1,2
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
* [email protected]
Neural networks implementing winner-takes-all mechanisms are assumed to play an
important role in neural information processing [1]. These networks are usually constructed
by reciprocally connecting populations of inhibitory neurons in such a manner that the
population receiving the most input can suppress the competing population.
In [2] a winner-takes-all network of rate-based neurons is constructed and a stability analysis
for the system of rate equations is carried out. Based on the framework developed in [3], we
construct a network consisting of spiking neurons with exponential transfer functions such
that the associated system of differential equations governing the expected rates
coincides with the system developed in [2].
We show that the same winner-takes-all mechanism is realised by the spiking dynamics,
although it is prone to classification errors due to its probabilistic nature. Finally, based on
simulations, we study the performance of these networks and show that they are efficient for
a broad range of system parameters.
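A toy simulation in this spirit (two cross-inhibiting Poisson populations with exponential transfer functions; all parameters are our choices, not those of the paper):

    import numpy as np

    def winner_takes_all(drive, T=2.0, dt=1e-3, w_inh=2.0, leak=0.9, seed=0):
        # drive: external input to each of the two populations.
        rng = np.random.default_rng(seed)
        counts = np.zeros(2)
        trace = np.zeros(2)                          # leaky trace of recent spiking
        for _ in range(int(T / dt)):
            rate = np.exp(drive - w_inh * trace[::-1])   # exponential transfer,
            spikes = rng.poisson(rate * dt)              # inhibited by the rival
            trace = leak * trace + spikes
            counts += spikes
        return counts                                # the winner has the larger count

    print(winner_takes_all(np.array([2.0, 1.8])))    # population 0 usually wins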
References:
[1] A neural theory of binocular rivalry, Blake R, Psychological Review (1989)
[2] A simple neural network exhibiting selective activation of neuronal ensembles: from
winner-take-all to winners-share-all, Fukai T and Tanaka S, Neural Computation (1997)
[3] Interacting Poisson processes and applications to neuronal modeling, Cardanobile S and
Rotter S, Preprint, arXiv 0904.1505 (2009)
W68
A recurrent working memory architecture for emergent speech
representation
Mark Elshaw*1, Roger K Moore1
1 Department of Computer Science, University of Sheffield, Sheffield, UK
* [email protected]
This research considers a recurrent self-organising map (RSOM) working memory
architecture for emergent speech representation, which is inspired by evidence from human
neuroscience studies. The main purpose of this research is to demonstrate that a neural
architecture can develop meaningful self-organised representations of speech using phone-like structures. By using this representational approach it should be possible, in a similar
fashion to infants, to improve the performance of automatic recognition systems by aiding
speech segmentation and fast word learning.
This RSOM architecture takes inspiration, at an abstract level, from evidence on word
representation, the learning approach of the cerebral cortex and the working memory
system’s phonological loop. The neurocognitive evidence of Pulvermuller (2003) offers
inspiration to the RSOM architecture related to how the brain represents words using
spatiotemporal cell assembly firing patterns. The cell assembly representation of a word
includes assemblies associated with its word form (speech signal characteristics) and others
associated with the word’s semantic features. Baddeley (1992) notes in his working memory
model that the phonological loop is used for the storage and rehearsal of speech based
knowledge.
To achieve recurrent temporal speech processing and representation in an unsupervised
self-organised manner, the RSOM uses the extension by Voegtlin (2002) of the Kohonen self-organising map. The training and test inputs for the RSOM model are spoken words
extracted from short utterances by a female speaker such as ‘do you see the nappy’. At
each time-slice the RSOM working memory receives as input the current speech signal slice
(27ms) from a moving window and to act as context the activations from the RSOM at
previous time-step. From this input a learned temporal topological representation of the
speech is produced on the RSOM output layer at each time-step. By examining the
sequences of RSOM best matching units (BMUs) for words, it is possible to find that there is
a temporal representation of speech in terms of phone-like structures.
By developing a representation of words in terms of phones, the RSOM architecture
matches the findings of researchers in cognitive child development on infant speech
encoding. Infants have been found to use this phonetic representation approach to aid word
extraction and the development of word understanding. The neurocognitive findings of
Pulvermuller are recreated in the RSOM model with different BMUs (as abstract cell
assemblies) being activated over time as a chain to create the word form representation. In
terms of the working memory model of Baddeley the RSOM model recreates functionality of
the phonological loop by producing a learned representation of the current speech input
using stored weights. Further, by training using multiple observations of the same speech
samples this equates to the phonological loop performing rehearsal of speech.
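The recurrent matching step at the heart of such a model can be sketched as follows (an outline after Voegtlin's recursive SOM; shapes and constants are placeholders):

    import numpy as np

    def rsom_step(x, context, W_in, W_ctx, alpha=0.5):
        # x: current 27 ms speech slice; context: previous activation map.
        # W_in: units x input_dim weights; W_ctx: units x units context weights.
        d = (1 - alpha) * ((W_in - x) ** 2).sum(axis=1) \
            + alpha * ((W_ctx - context) ** 2).sum(axis=1)
        bmu = int(np.argmin(d))            # best matching unit at this time-step
        activation = np.exp(-d)            # becomes the context at the next step
        return bmu, activation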
References:
A. D. Baddeley, Working memory, Science, 255(5044) (1992), pp. 556-559.
F. Pulvermuller, The neuroscience of language: On brain circuits of words and language,
Cambridge Press, Cambridge, UK, 2003.
T. Voegtlin, Recursive self-organizing maps, Neural Networks, 15(8-9) (2002), pp. 979-991.
W69
Contrastive divergence learning may diverge when training
restricted Boltzmann machines
Asja Fischer2,1, Christian Igel*2,1
1 Bernstein Group for Computational Neuroscience Bochum, Ruhr-Universität Bochum,
Bochum, Germany
2 Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
* [email protected]
Understanding and modeling how brains learn higher-level representations from sensory
input is one of the key challenges in computational neuroscience and machine learning.
Layered generative models such as deep belief networks (DBNs) are promising for
unsupervised learning of such representations, and new algorithms that operate in a layer-wise
fashion make learning these models computationally tractable [1-5].
Restricted Boltzmann Machines (RBMs) are the typical building blocks for DBN layers. They
are undirected graphical models, and their structure is a bipartite graph connecting input
(visible) and hidden neurons. Training large undirected graphical models by likelihood
maximization in general involves averages over an exponential number of terms, and
obtaining unbiased estimates of these averages by Markov chain Monte Carlo methods
typically requires many sampling steps. However, recently it was shown that estimates
obtained after running the chain for just a few steps can be sufficient for model training [3]. In
particular, gradient-ascent on the k-step Contrastive Divergence (CD-k), which is a biased
estimator of the log-likelihood gradient based on k steps of Gibbs sampling, has become the
most common way to train RBMs [1-5].
Contrastive Divergence learning does not necessarily reach the maximum likelihood
estimate of the parameters (e.g., because of the bias). However, we show that the situation
is much worse. We demonstrate empirically that for some benchmark problems taken from
the literature [6], CD learning systematically leads to a steady decrease of the log-likelihood
after an initial increase (see supplementary Figure 1). This seems to happen especially
when trying to learn more complex distributions, which are the targets if RBMs are used
within DBNs.
The reason for the decreasing log-likelihood is an increase of the model parameter
magnitudes. The estimation bias depends on the mixing rate of the Markov chain, and it is
well-known that mixing slows down with growing magnitude of model parameters [1,3].
Weight-decay can therefore solve the problem if the strength of the regularization term is
adjusted correctly. If chosen too large, learning is not accurate enough. If chosen too small,
learning still diverges.
For large k, the effect is less pronounced. Increasing k, as suggested in [1] for finding
parameters with higher likelihood, may therefore prevent divergence. However, divergence
occurs even for values of k too large to be computationally tractable for large models. Thus,
a dynamic schedule to control k is needed.
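For reference, a minimal CD-k update for a binary RBM is sketched below (our illustration; bias terms are omitted, and the layer sizes and learning rate are arbitrary assumptions rather than the benchmark setup of [6]):

import numpy as np

# Minimal CD-k step for a binary RBM: k steps of blocked Gibbs sampling
# started at the data give a biased estimate of the log-likelihood gradient;
# larger k reduces the bias, at higher computational cost.
rng = np.random.default_rng(0)
n_vis, n_hid, eps = 16, 8, 0.05   # assumed sizes and learning rate
W = 0.01 * rng.normal(size=(n_vis, n_hid))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd_step(v0, k=1):
    """Update W with the k-step contrastive divergence estimate."""
    global W
    vk = v0
    for _ in range(k):                     # k steps of blocked Gibbs sampling
        ph = sigmoid(vk @ W)
        h = (rng.random(n_hid) < ph).astype(float)
        pv = sigmoid(W @ h)
        vk = (rng.random(n_vis) < pv).astype(float)
    ph0 = sigmoid(v0 @ W)
    phk = sigmoid(vk @ W)
    # positive phase (data) minus negative phase (k-step reconstruction);
    # a weight-decay term -lam * W could be added here to counteract the
    # parameter growth discussed above
    W += eps * (np.outer(v0, ph0) - np.outer(vk, phk))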
References:
[1] Bengio Y, Delalleau O. Justifying and Generalizing Contrastive Divergence. Neural
Computation 21(6):1601-1621, 2009
[2] Bengio Y, Lamblin P, Popovici D, Larochelle H. Greedy layer-wise training of
deep networks. Advances in Neural Information Processing Systems (NIPS 19), pp. 153-160, 2007, MIT Press
[3] Hinton GE. Training products of experts by minimizing contrastive divergence. Neural
Computation 14(8):1771-1800, 2002
[4] Hinton GE. Learning multiple layers of representation. Trends in Cognitive Sciences
11(10):428-434, 2007
[5] Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural
Computation 18(7):1527-1554, 2006
[6] MacKay DJC. Information Theory, Inference, and Learning Algorithms, Cambridge University
Press, 2003
W70
Hierarchical models of natural images
Reshad Hosseini*1, Matthias Bethge1
1 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
Here, we study two different approaches to estimate the multi-information of natural images.
In both cases, we begin with a whitening step. Then, in the first approach, we use a
hierarchical multi-layer ICA model [1] which is an efficient variant of projection pursuit density
estimation. Projection pursuit [2] is a nonparametric density estimation technique with
universal approximation properties. That is, it can be proven to converge to the true
distribution in the limit of an infinite amount of data and an infinite number of layers.
For the second approach, we suggest a new model which consists of two layers only and
has far fewer degrees of freedom than the multi-layer ICA model. In the first layer we apply
symmetric whitening followed by radial Gaussianization [3,4], which transforms the norms of
the image patches such that their distribution matches the radial distribution of a
multivariate Gaussian. In the next step, we apply ICA. The first
step can be considered as a contrast gain control mechanism and the second one yields
edge filters similar to those in primary visual cortex.
By quantitatively evaluating the redundancy reduction achieved with the two approaches, we
find that the second procedure fits the distribution significantly better than the first one. On
the van Hateren data set (400,000 image patches of size 12x12) with log-intensity scale, the
redundancy reduction in the multi-layer ICA model yields 0.162, 0.081, 0.034, 0.021, 0.013,
0.009, 0.006, 0.004, 0.003, and 0.002 bits/pixel after the first, second, third, ..., tenth layer,
respectively. (For the training set size used, the performance decreases after the tenth
layer.) In contrast, we find a redundancy reduction of 0.342 bits/pixel after the first layer and
0.073 bits/pixel after the second layer for the second approach.
In conclusion, the universal approximation property of the deep hierarchical architecture in
the first approach does not pay off for the task of density estimation in the case of natural
images.
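A sketch of the first layer of the second approach (symmetric whitening followed by radial Gaussianization) is given below; the rank-based radial mapping is our simplification of [3,4], and SciPy is assumed to be available:

import numpy as np
from scipy.stats import chi

# Symmetric (ZCA) whitening followed by radial Gaussianization: map the
# empirical distribution of patch norms onto the radial (chi) distribution
# of a standard multivariate Gaussian, leaving patch directions unchanged.
def whiten_and_gaussianize(X):
    """X: (n_patches, d) matrix of image patches (assumed zero-mean)."""
    C = np.cov(X, rowvar=False)
    U, s, _ = np.linalg.svd(C)
    W = U @ np.diag(1.0 / np.sqrt(s)) @ U.T     # symmetric whitening matrix
    Z = X @ W
    r = np.linalg.norm(Z, axis=1)
    d = X.shape[1]
    # empirical CDF of the norms, mapped through the inverse chi CDF
    ranks = (np.argsort(np.argsort(r)) + 0.5) / len(r)
    r_new = chi.ppf(ranks, df=d)
    return Z * (r_new / r)[:, None]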
References:
[1] Chen and Gopinath. 2001. Proc. NIPS, vol. 13, pp. 423–429.
[2] Friedman J. et al. 1984. J. Amer. Statist. Assoc., vol. 79, pp. 599–608.
[3] Lyu S. and Simoncelli E. P. 2008. Proc. NIPS, vol. 21, pp.1009–1016.
[4] Sinz F. H. and Bethge M. 2008. MPI Technical Report
W71
Unsupervised learning of disparity maps from stereo images
Jörn-Philipp Lies*1, Matthias Bethge1
1 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
The visual perception of depth is a striking ability of the human visual system and an active
area of research in fields like neurobiology, psychology, robotics, and computer vision. In real
world scenarios, many different cues, such as shading, occlusion, or disparity are combined
to perceive depth. As can be shown using random dot stereograms, however, disparity alone
is sufficient for the generation of depth perception [1]. To compute the disparity map of an
image, matching image regions in both images have to be found, i.e. the correspondence
problem has to be solved. After this, it is possible to infer the depth of the scene.
Specifically, we address the correspondence problem by inferring the transformations
between image patches of the left and the right image. The transformations are modeled as
Lie groups which can be learned efficiently [3]. First, we start from the assumption that
horizontal disparity is caused by a horizontal shift only. In that case, the transformation
matrix can be constructed analytically according to the Fourier shift theorem. The
correspondence problem is then solved locally by finding the best matching shift for a
complete image patch. The infinitesimal generators of a Lie group allow us to determine
shifts smoothly down to subpixel resolution. In a second step, we use the general Lie group
framework to allow for more general transformations. In this way, we infer a number of
transform coefficients per image patch. We finally obtain the disparity map by combining the
coefficients of (overlapping) image patches to a global disparity map. The stereo images
were created using our 3D natural stereo image rendering system [2]. The advantage of
these images is that we have ground truth information of the depth maps and full control
over the camera parameters for the given scene. Finally, we explore how the obtained
disparity maps can be used to compute accurate depth maps.
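A sketch of the first, shift-only step is given below (our illustration; the patch size and the shift grid are arbitrary assumptions):

import numpy as np

# Fourier shift theorem: translating a patch by dx multiplies its spectrum
# by exp(-2*pi*i*f*dx), so subpixel shifts can be applied without resampling.
def shift_horizontal(patch, dx):
    F = np.fft.fft(patch, axis=1)
    fx = np.fft.fftfreq(patch.shape[1])
    return np.real(np.fft.ifft(F * np.exp(-2j * np.pi * fx * dx), axis=1))

def best_disparity(left_patch, right_patch, shifts=np.linspace(-8, 8, 161)):
    """Return the horizontal shift that best maps the left onto the right patch."""
    errors = [np.sum((shift_horizontal(left_patch, dx) - right_patch) ** 2)
              for dx in shifts]
    return shifts[int(np.argmin(errors))]

The second step of the method replaces this analytically constructed shift operator by general Lie group transformations learned from data.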
References:
[1] Bela Julesz. Binocular depth perception of computer-generated images. The Bell System
Technical Journal, 39(5):1125-1163, 1960.
[2] Jörn-Philipp Lies and Matthias Bethge. Image library for unsupervised learning of depth
from stereo. In Frontiers in Computational Neuroscience. Conference Abstract: Bernstein
Symposium 2008, 2008.
[3] Jimmy Wang, Jascha Sohl-Dickstein, and Bruno Olshausen. Unsupervised learning of lie
group operators from image sequences. In Frontiers in Systems Neuroscience.
Conference Abstract: Computational and systems neuroscience, 2009.
W72
RLS- and Kalman-based algorithms for the estimation of time-variant, multivariate AR-models
Thomas Milde*1, Lutz Leistritz1, Thomas Weiss1, Herbert Witte1
1 Institute of Medical Statistics, Informatics and Documentation, Friedrich Schiller University,
Jena, Germany
* [email protected]
In this study, two of the most important algorithmic concepts for the estimation of time-variant, multivariate AR-models, the RLS and the Kalman filter approaches, are compared with
regard to their applicability to high-dimensional time series. In order to test both approaches
simulated and measured time series were used. In a multi-trial approach directed
interactions between event-related potentials (ERPs) derived from an experiment with
noxious laser stimuli were computed. The time-variant Granger Causality Index was used for
interaction analysis. It can be shown that the Kalman approach enables a time-variant
parameter estimation of a 58-dimensional multivariate AR model. The RLS-based algorithm
fails for higher dimensions. The high-dimensional AR model provides an improved
neurophysiological interpretation of the computed interaction networks.
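A minimal sketch of Kalman-based time-variant MVAR estimation of the kind compared in this study is given below (our illustration; a random-walk state model is assumed, and the noise constants q and r are arbitrary, not the authors' settings):

import numpy as np

# Time-variant MVAR estimation as a linear-Gaussian state-space problem:
# the stacked AR coefficients follow a random walk and are tracked with a
# standard Kalman filter. d = number of channels, p = model order.
def kalman_tvar(X, p=2, q=1e-4, r=1e-2):
    """X: (T, d) multichannel signal. Returns coefficient estimates per time step."""
    T, d = X.shape
    n = d * d * p                      # number of AR parameters
    a = np.zeros(n)                    # state: vectorized coefficients
    P = np.eye(n)
    Q, R = q * np.eye(n), r * np.eye(d)
    A_t = np.zeros((T, n))
    for t in range(p, T):
        past = X[t - p:t][::-1].ravel()          # [x(t-1), ..., x(t-p)]
        H = np.kron(np.eye(d), past)             # so that H @ a predicts x(t)
        P = P + Q                                # predict (random walk)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        a = a + K @ (X[t] - H @ a)               # update with innovation
        P = (np.eye(n) - K @ H) @ P
        A_t[t] = a
    return A_t

The per-time-step coefficient matrices recovered this way are what a time-variant Granger Causality Index is computed from; the state dimension n = d*d*p makes clear why high channel counts are demanding.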
W73
A new class of distributions for natural images generalizing
independent subspace analysis
Fabian Sinz*1, Matthias Bethge1
1 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
The Redundancy Reduction Hypothesis by Barlow and Attneave suggests a link between
the statistics of natural images and the physiologically observed structure and function in the
early visual system.
In particular, algorithms and probabilistic models like Independent Component Analysis,
Independent Subspace Analysis and Radial Factorization, which implement redundancy
reduction mechanisms, have been used successfully to reproduce several features of the early
visual system, such as bandpass filtering, contrast gain control, and orientation-selective
filtering, when applied to natural images.
Here, we propose a new family of probability distributions, called Lp-nested symmetric
distributions, that comprises all of the above algorithms for natural images. This general
class of distributions allows us to quantitatively assess (i) how well the assumptions made by
all of the redundancy-reducing models are justified for natural images, and (ii) how large the
contribution of each of these mechanisms (shape of filters, non-linear contrast gain control,
subdivision into subspaces) to redundancy reduction is. For ISA, we find that partitioning the
space into different subspaces yields a competitive model only when applied after contrast
gain control. In this case, however, we find that the single filter responses are already almost
independent. Therefore, we conclude that a partitioning into subspaces does not
considerably improve the model, which makes band-pass filtering (whitening) and contrast
gain control (divisive normalization) the two most important mechanisms.
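For illustration, a two-level example of an Lp-nested function over four coefficients reads (our notation, sketching the general construction rather than the authors' exact parameterization):

  f(x) = \Big( \big(|x_1|^{p_1} + |x_2|^{p_1}\big)^{p_0/p_1} + \big(|x_3|^{p_2} + |x_4|^{p_2}\big)^{p_0/p_2} \Big)^{1/p_0},
  \qquad \rho(x) \propto g\big(f(x)\big).

For p_0 = p_1 = p_2 = 2, f is the Euclidean norm and one recovers the spherically symmetric case underlying Radial Factorization after whitening; a single level with one common exponent p gives the Lp-spherically symmetric class, which contains ICA with p-generalized Gaussian marginals; distinct inner exponents p_1, p_2 yield ISA-like subspace structure.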
Poster Session II, Thursday, October 1
Computer vision
T1
Learning object-action relations from semantic scene graphs
Eren Aksoy*1, Alexey Abramov1, Babette Dellen12, Florentin Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Action recognition and object categorization have received increasing interest in the AI and
cognitive-vision community during the last decade. The problem of action recognition has
been addressed in previous works (Hongeng, 2004), but only rarely in conjunction with
object categorization (Sridhar et al., 2008). Sridhar et al. (2008) showed that objects can
also be categorized by considering their common roles in different manipulations, resulting,
however, in large and complex activity graphs which have to be analyzed separately. In this
work, we introduce a novel approach for detecting spatiotemporal object-action relations
using semantic scene graphs, leading to both action recognition and object categorization.
In the current study we analyze movies of scenes containing low-level context. As a first
processing step, the image segments are extracted and tracked throughout the movie
(Dellen et al., 2009), allowing the assignment of temporally stable labels to the respective
image parts. The scene is then described by undirected labeled graphs, in which the nodes
and edges represent segments and their neighborhood relations, respectively. We apply an
exact graph matching method in order to extract those graphs that represent a structural
change of the scene. The resulting “compressed” graph sequence represents an action
graph, providing a means for comparing movies of different scenes by measuring the
similarity between their action graphs. A template main graph model is constructed for each
action. Finally, actions are classified by calculating the similarity with those models. Nodes
playing the same role in a classified action sequence can then be used to categorize
objects.
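A toy sketch of the graph bookkeeping underlying this idea is given below (our illustration; the exact segment tracking and graph matching of Dellen et al. are not reproduced):

# Toy scene-graph bookkeeping: nodes are tracked segment labels, edges are
# touching relations. A frame is kept as a key frame of the action graph
# whenever the edge set changes structurally.
def scene_graph(segments):
    """segments: dict label -> set of touching labels; returns the edge set."""
    return frozenset(frozenset((a, b)) for a, nbrs in segments.items()
                     for b in nbrs if a != b)

def compress(graph_sequence):
    """Keep only graphs where the structure changes (the 'action graph')."""
    key_frames, prev = [], None
    for g in graph_sequence:
        if g != prev:
            key_frames.append(g)
        prev = g
    return key_frames

# Example: hand (H) approaches object (O) resting on table (T), grasps it,
# lifts it off the table -- three structurally distinct graphs.
frames = [
    scene_graph({'H': set(), 'O': {'T'}, 'T': {'O'}}),
    scene_graph({'H': {'O'}, 'O': {'H', 'T'}, 'T': {'O'}}),
    scene_graph({'H': {'O'}, 'O': {'H'}, 'T': set()}),
]
assert len(compress(frames + frames[-1:])) == 3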
We applied our framework to three different action types: “moving an object”, “opening a
book”, and “making a sandwich”. For each of these actions, we recorded four movies,
differing in trajectories, speeds, and object shapes. The experimental results showed that
the agent can learn all of the action types and categorize the participating manipulated objects
according to their roles.
The framework presented here represents a promising approach for recognizing actions
without requiring prior object knowledge, and categorizing objects solely based on their
exhibited role within an action sequence. In the future, we plan to apply the framework to
more complex scenes containing high-level context and to let the agent learn the template
main graph models from a training data set. A parallel implementation of the framework on
GPUs for real-time robotics applications is currently being investigated.
Acknowledgement:
The work has received support from the BMBF funded BCCN Goettingen and the EU Project
PACOPLUS under Contract No. 027657.
References:
Dellen, B., Aksoy, E. E., and Woergoetter, F. (2009). Segment tracking via a spatiotemporal
linking process including feedback stabilization in an n-d lattice model. IEEE
Transactions on Circuits and Systems for Video Technology (Submitted).
Hongeng, S. (2004). Unsupervised learning of multi-object event classes. Proc. 15th British
Machine Vision Conference, pages 487–496.
Sridhar, M., Cohn, A. G., and Hogg, D. (2008). Learning functional object-categories from a
relational spatio-temporal representation. Proc. 18th European Conference on Artificial
Intelligence, pages 487–496
T2
A neural network for motion perception depending on the minimal
contrast
Florian Bayer*1, Thorsten Hansen1, Karl Gegenfurtner1
1 Department of General Psychology, Justus Liebig University, Giessen, Germany
* [email protected]
The Elaborated Reichardt Detector (ERD, van Santen and Sperling 1984, J Opt Soc Am A 1,
451) and the functionally equivalent motion energy model (Adelson and Bergen 1985, J Opt
Soc Am A 2 284-299) predict that motion detection thresholds depend on the product of
contrasts of the input signals. However, in psychophysical studies this dependence has been
observed only at or near contrast detection threshold (Chubb and Morgan 1999, Vision
Research 39 4217-4231). Otherwise, minimal contrast predicts motion detection thresholds
over a wide range of contrasts (Allik and Pulver 1995, J Opt Soc Am A 12 1185-1197).
Here we develop a neural network for motion detection without multiplicative processing.
Using addition and subtraction, time delay and rectification we defined a model with a
minimal number of neurons that responds to motion but not to flicker. The resulting network
consists of two neurons receiving input from spatial filters, an inhibitory delay neuron and an
output neuron. In contrast to the ERD, the network output does not depend on the product of
contrasts but on the minimal contrast of the input signals.
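A discrete-time toy version of such a multiplication-free detector is sketched below (our reading of the architecture, using a Barlow-Levick-style delayed inhibition; binary inputs and a one-frame delay are assumptions):

import numpy as np

# Toy direction-selective unit built only from addition, subtraction, a one-
# frame delay and half-wave rectification (no multiplication). Inputs a, b
# are responses of two neighboring spatial filters; the inhibitory delay
# neuron carries b(t-1). All values are binary here for clarity.
def relu(x):
    return np.maximum(0.0, x)

def response(a, b):
    """a, b: binary input sequences; returns the output sequence."""
    out = np.zeros(len(a))
    for t in range(1, len(a)):
        # excitation: delayed a plus current b; inhibition: delayed b;
        # the -1 threshold makes the sum act like a coincidence detector
        out[t] = relu(a[t - 1] + b[t] - b[t - 1] - 1.0)
    return out

move_ab = (np.array([1, 0, 0, 1, 0, 0]), np.array([0, 1, 0, 0, 1, 0]))
flicker = (np.array([1, 0, 1, 0, 1, 0]), np.array([1, 0, 1, 0, 1, 0]))
print(response(*move_ab).sum())   # > 0: responds to motion from a to b
print(response(*flicker).sum())   # = 0: no response to synchronous flicker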
T3
The guidance of vision while learning categories
Frederik Beuth*1, Fred H Hamker1
1 Computer Science Department, Technical University Chemnitz, Chemnitz, Germany
* [email protected]
We recently proposed a computational model of perception as active pattern generation. It
suggests combining attention and object/category recognition in a single interconnected
network. Perception is formalized as an active, top-down directed inference process in
which a target template is learned and maintained. Little is known about the nature of these
templates. Our proposal is that the brain creates them by learning in a reward-based
scenario; the brain region responsible for reward and reinforcement learning is the basal
ganglia. For category discrimination the brain might learn more abstract or generalized
features of single objects. Our hypothesis is that such high-order visual templates also guide
visual perception, i.e. gaze, during learning, which can be measured with an eye tracking
system.
To test this hypothesis we ran an experimental study and trained 12 human subjects on a
subordinate category recognition task (fish) with category stimuli similar to those in earlier
studies (Sigala & Logothetis, Nature, 2002; Sigala, Gabbiani & Logothetis, J Cog Neurosci.,
2002; Peters, Gabbiani & Koch, Vision Res., 2003). We designed a decision space to allow
a full separation of two categories by only two of four features. This design investigated
whether subjects are capable of detecting and focusing on the features carrying the relevant
information. In the study, a single stimulus was presented to the subjects. They had to press
one of two buttons to indicate their category decision. The stimulus disappeared after the
button had been pressed. The subjects received feedback only in case of wrong answers.
During the presentation, the subjects' eye movements were recorded by an eye tracker
(Eyelink from SR Research). On average the subjects learned the task (85% correct) after
100 trials.
The data confirms our hypothesis. On average there is a general shift of fixations towards
locations with relevant features. Thus, subjects are able to learn which features are
informative and tend to fixate on these to compute their final decision about the category
choice. These behavioral data complement an earlier electrophysiological study of Sigala &
Logothetis (2002), which demonstrated a more selective response of cells in area IT during
the learning of a comparable stimulus material. We propose that such learning could be
mediated by the Basal Ganglia and demonstrate the basic computational principles.
T4
Learning vector quantization with adaptive metrics for online
figure-ground segmentation
Alexander Denecke*12, Heiko Wersing2, Jochen Steil1, Edgar Körner2
1 Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
One classical problem in research on visual object learning and recognition is the basic
requirement to separate the object-related regions in the image data from the surrounding
background clutter, namely figure-ground segmentation. This is necessary in particular
for online learning and interaction, where the limited time and number of available training
examples complicate direct training of the object classifier and generalization to the object parts. To
bootstrap the segmentation process we assume an initial segmentation hypothesis derived
from depth information. However, using this hypothesis directly for high-performance
object learning is not appropriate. Instead, a classifier can be trained on this basis for image
regions, or for the individual pixels represented by their associated features (e.g., color).
The core idea is that the classifier should be capable of generalizing to the main object
features so that it can be used to reclassify the pixels and derive a foreground
classification that is more consistent with the outline and appearance of the object.
We investigate variants of Generalized Learning Vector Quantization (GLVQ) for this
purpose. Since similarity based clustering and classification in prototype based networks
depends on the underlying metrics, the emphasis lies on the metrics adaptation in this
scenario.
We model figure and ground by prototypical feature representatives and investigate several
metrics extensions for GLVQ (P. Schneider et al., Proc. of 6th WSOM, 2007) to improve this
approach. Comparing those extensions, we show that using prototype-specific linear
projections of the feature-space enables an improved foreground generalization (A. Denecke
et al., Neurocomputing, 2009). The proposed method can handle arbitrary backgrounds, is
robust to changes in illumination, and is real-time capable, yielding foreground
segmentations that allow for a significant enhancement in object learning and recognition.
Furthermore, the method outperforms state-of-the-art foreground segmentation
methods in our online learning scenario and achieves competitive results on public
benchmark data.
Finally, we show that the proposed method has fewer constraints on the provided training
data (e.g. a priori assumptions about object position and size) and is less sensitive to the
quality of the initial hypothesis.
In general, vector quantization methods are confronted with a model selection problem,
namely choosing the number of prototypical feature representatives to model each class. In
further work (A. Denecke et al., WSOM, accepted) we address this problem and propose an
incremental extension, which faces two problems. Firstly, the local adaptive metrics
complicate distance-based criteria for placing new prototypes, so we use the confidence
of the classification instead. Secondly, the method has to cope with noisy supervised
information; that is, the labels used to adapt the network are not fully reliable. We address
the second problem by using a parallel evaluation method based on a local utility function,
which does not rely on global error optimization. On our real-world benchmark dataset we
show that the incremental network is able to maintain an adaptive network size and yields a
significantly smaller variance in the results, and is thus more robust against the
initialization of the network.
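For reference, a bare-bones GLVQ step (Sato & Yamada's relative-distance cost) is sketched below; the metric adaptation and incremental extensions discussed above are omitted, and all sizes are arbitrary assumptions:

import numpy as np

# Bare-bones GLVQ: move the closest correct prototype toward the sample and
# the closest wrong prototype away, weighted by the derivative of the
# relative distance mu = (d_plus - d_minus) / (d_plus + d_minus).
def glvq_step(x, y, protos, labels, lr=0.05):
    d = np.sum((protos - x) ** 2, axis=1)
    jp = np.argmin(np.where(labels == y, d, np.inf))   # nearest correct
    jm = np.argmin(np.where(labels != y, d, np.inf))   # nearest wrong
    dp, dm = d[jp], d[jm]
    s = (dp + dm) ** 2
    protos[jp] += lr * (dm / s) * (x - protos[jp])
    protos[jm] -= lr * (dp / s) * (x - protos[jm])

# Figure-ground use: two classes (foreground/background), pixels represented
# by color features; labels come from the initial depth-based hypothesis.
rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 3))              # 2 prototypes per class (RGB)
labels = np.array([0, 0, 1, 1])
for x, y in zip(rng.normal(size=(200, 3)), rng.integers(0, 2, 200)):
    glvq_step(x, y, protos, labels)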
T5
Large-scale real-time object identification based on analytic
features
Stephan Hasler*1, Heiko Wersing1, Edgar Körner1
1 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
Inspired by the findings that columns in inferotemporal cortex respond to complex visual
features generalizing over retinal position and scale (Tanaka, Ann. Rev. of Neurosc., 1996)
and that objects are then represented by the combined activation of such columns (Tsunoda
et al., Nature Neurosc., 2001), we previously developed a framework to select a set of
analytic SIFT-descriptors (Hasler et al., Proc. of ICANN, 2007) dedicated to 3D
object recognition. In this work we embed this representation in an online system that is able
to robustly identify a large number of pre-trained objects. In contrast to related work, we do
not restrict the objects' pose to characteristic views but rotate them freely in hand in front of
a cluttered background.
To tackle this unconstrained setting we use the following processing steps: Stereo images are
acquired with cameras mounted on a pan-tilt unit. Disparity is used to select and track a
region of interest based on closest proximity. To remove background clutter we learn a
foreground mask using depth information as initial hypothesis (Denecke et al.,
Neurocomputing, 2009). Then analytic shape features and additional color features are
extracted. Finally, the identification is performed by a simple classifier. To our knowledge,
this is the first system that can robustly identify 126 hand-held objects in real-time.
The type of representation used differs strongly from the standard SIFT framework proposed
by Lowe (Int. J. of Comp. Vision, 2004). First, we extract SIFT-descriptors at each
foreground position in the attended image region. Thus, parts are found to be analytic that
would not have passed usual keypoint criteria. Second, we do not store the constellation of
object parts but keep only the maximum response per feature. This results in a simple
combinatorial object representation in accordance with biology, but depends on a good
figure-ground segregation. Third, we match the local descriptors against an alphabet of visual
features. This alphabet is rather small (usually several hundreds) and the result of a
supervised selection strategy favoring object-specific parts that can be invariantly detected in
several object poses. The selection method is dynamic in that it selects more
features for objects with stronger variations in appearance. We draw a direct comparison to
the SIFT framework using the COIL100 database as a toy problem.
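A sketch of the "maximum response per feature" signature is given below (our simplification; descriptor extraction and the supervised selection of the analytic alphabet are not shown):

import numpy as np

# Combinatorial object representation: match every foreground descriptor
# against a fixed alphabet of analytic features and keep only the maximum
# response per feature, discarding the spatial constellation of parts.
def object_signature(descriptors, alphabet):
    """descriptors: (n, d) local descriptors from foreground positions;
    alphabet: (m, d) analytic features. Returns an m-dim. activation vector."""
    # cosine similarity between each descriptor and each alphabet entry
    D = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    A = alphabet / np.linalg.norm(alphabet, axis=1, keepdims=True)
    return (D @ A.T).max(axis=0)    # best response per alphabet feature

# Identification is then a simple classifier (e.g. nearest neighbor) on
# these signatures, optionally concatenated with a color histogram.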
Despite the quite simple object representation, our system shows a very high performance in
distinguishing the 126 objects in the realistic online setting. We underline this with tests on
an offline database acquired under the same conditions. With a nearest neighbor classifier
(NNC) we obtain an error rate of 25 percent using analytic features only. When adding an
RGB histogram as complementary feature channel this error rate drops to 15 percent for the
NNC and to 10.35 percent using a single layer perceptron. Considering the high difficulty of
the database with a baseline NNC error rate of 85 percent on the gray-scale images
compared to 10 percent for the COIL100, these results mark a major step towards invariant
identification of 3D objects.
T6
Learning of lateral connections for representational invariant
recognition
Christian Keck*1, Jan Bouecke2, Jörg Lücke1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Institute of Neural Information Processing, University of Ulm, Ulm, Germany
* [email protected]
The mammalian visual cortex is a fast and recurrent information processing system which
rapidly integrates sensory information and high-level model knowledge to form a reliable
percept of a given visual environment. Much is known about the local features this system is
using for processing. Receptive fields of simple cells are, for instance, well described by
Gabor wavelet functions. Many systems in the literature study how such Gabor wavelets can
be learned from input [1,2 and many more]. In contrast, we study in this work how the lateral
interaction of local Gabor features can be learned in an unsupervised way.
We study a system that builds up on recent work showing how local image features can be
combined to form explicit object representations in memory (e.g., [3-7]). In these theoretical
works objects in memory are represented as specific spatial arrangements of local features
which are recurrently compared with feature arrangements in a given input. It was shown
that this approach can be used successfully in tasks of invariant object recognition (e.g.,
[7,8]).
While previous work has used a pre-wired lateral connectivity for recurrent inference and
predefined object representations (compare [3-8], but see [9]), we address in this work the
following questions: 1) How can object representations in the form of feature arrangements
be learned? 2) How can the transformations that relate such memory representations to a
given V1 image representation be learned?
For training, different images of the same object are shown to the studied system.
Depending on the input, the system learns the arrangement of features typical for the object
along with allowed object transformations. The choice of the set of training images of this
object thereby determines the set of transformations the system learns.
We present new results on one- and two-dimensional data sets. If trained on one-dimensional
input, the system learns one-dimensional object representations along with one-dimensional
translations. If trained on 2-D data, the system learns an object representation of two-dimensional feature arrangements together with planar translations as allowed
transformations.
Acknowledgements:
This work was supported by the German Federal Ministry of Education and Research
(BMBF) grant number 01GQ0840 (Bernstein Focus Neurotechnology Frankfurt).
References:
[1] Olshausen, B., Field, D. J., Nature 381:607-609, 1996.
[2] Lücke, J., Neural Computation, in press, 2009
[3] Arathorn, D., Stanford Univ. Press, California, 2002.
[4] Olshausen, B. A., Anderson, C. H., and Van Essen, D. C., The Journal of Neuroscience,
13(11):4700-4719, 1993.
[5] Lücke, J., Keck, C., and Malsburg, C., Neural Computation 20(10):2441-2463, 2008.
[6] Wiskott, L. and von der Malsburg, C., In: Lateral Interactions in the Cortex: Structure and
Function, ISBN 0-9647060-0-8, 1995.
[7] Wolfrum, P., Wolff, C., Lücke, J., and von der Malsburg, C., Journal of Vision, 8(7):1-18,
2008.
[8] Messer, K., et al., BANCA competition, CVPR, 523-532, 2004.
[9] Bouecke, J.D., and Lücke, J., ICANN, 557-566, 2008.
T7
Foveation with optimized receptive fields
Daniela Pamplona*1, Cornelius Weber1, Jochen Triesch1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
The sensors of today's artificial vision systems often have millions of pixels. It is a challenge
to process this information efficiently and rapidly. Humans effortlessly handle information from
10^7 photoreceptors, and manage to interact quickly with the environment.
To this end, ganglion cells in the retina encode the photoreceptors' responses efficiently by
exploiting redundancies in their responses, before sending the information to the visual
cortex. Furthermore, primates' ganglion cells develop space-variant properties: their density
becomes much higher in the fovea than in the periphery, and the shape and size of the
receptive fields vary with the radial distance [1], i.e. primate vision is foveated.
Some artificial systems have tried to mimic such foveation to preprocess the visual input [2].
However, these works are based on the photoreceptors' properties instead of those of the
ganglion cells, which leads to serious aliasing problems [3]. We propose that artificial
systems should implement a model of ganglion cell processing.
Our foveation method is formalized as the product between a matrix representing the
receptive fields of the ganglion cells and the input image. We combine the information that
the distribution of the ganglion cells follows approximately a log-polar law [4] and that the
receptive fields have a Difference-of-Gaussian shape [5].
Therefore, each row of the foveation matrix represents a receptive field that depends only on
4 parameters (these are the heights and variances of the two Gaussians: their centres are
fixed according to the log-polar density function). We optimize these parameters to reduce
the reconstruction error of a generative model using a gradient descent rule (for details see
supplementary PDF).
We verify that our method converges quickly to space-variant receptive fields with smaller
heights and sizes in the fovea than in the periphery (see supplementary figure 1). We compare the
size and shape of the resulting receptive fields with measurements in humans, and discuss
reconstruction optimality in the early human visual system. These results lend
themselves to extrapolation to larger image sizes, thereby allowing the implementation of
large-scale foveated vision with optimized parameters.
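A sketch of the foveation-matrix construction (log-polar centres with Difference-of-Gaussians rows) is given below; the gradient-based optimization of the four per-cell parameters is omitted, and all constants are arbitrary assumptions:

import numpy as np

# Each row of the foveation matrix is one ganglion-cell receptive field: a
# Difference-of-Gaussians placed on a log-polar grid, so sampling is dense
# in the fovea and sparse in the periphery. h1, s1, h2, s2 are the four
# per-cell parameters (heights and widths) that would be optimized.
def dog_row(center, h1, s1, h2, s2, size):
    yy, xx = np.mgrid[0:size, 0:size]
    r2 = (xx - center[0]) ** 2 + (yy - center[1]) ** 2
    rf = h1 * np.exp(-r2 / (2 * s1 ** 2)) - h2 * np.exp(-r2 / (2 * s2 ** 2))
    return rf.ravel()

def foveation_matrix(size=64, n_rings=8, n_angles=16, r0=2.0):
    rows, c = [], (size - 1) / 2.0
    radii = r0 * np.exp(np.linspace(0, np.log(0.5 * size / r0), n_rings))
    for r in radii:
        for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            center = (c + r * np.cos(a), c + r * np.sin(a))
            # receptive-field size grows with eccentricity (assumed linear)
            s1 = 0.5 + 0.1 * r
            rows.append(dog_row(center, 1.0, s1, 0.5, 2 * s1, size))
    return np.asarray(rows)        # ganglion responses = F @ image.ravel()

F = foveation_matrix()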
References:
[1] Shatz et al, 1986, Annual Review of Neuroscience, 9, 171-207
[2] Weber et al, 2009, Recent Patents on Computer Science, 2, 1, 75-85
[3] Wallace et al, 1994, International Journal of Computer Vision, 13, 1, 71-90
[4] Rovamo et al, 1979, Experimental Brain Research, 37, 3, 495-510
[5] Borghuis et al, 2008, The Journal of Neuroscience, 28, 12, 3178-3189
T8
A neural model of motion gradient detection for visual navigation
Florian Raudies*1, Stefan Ringbauer1, Heiko Neumann1
1 Institute of Neural Information Processing, University of Ulm, Ulm, Germany
* [email protected]
Problem. Spatial navigation based on visual input (Fajen & Warren, TICS, 4, 2000) is
important for tasks like steering towards a goal or avoiding collisions with stationary as well as
independently moving objects (IMOs). Such observer movement induces global
motion patterns while obstacles and IMOs lead to local disturbances in the optical flow. How
is this information about flow changes used to support navigation and what are the neural
mechanisms which produce this functionality?
Method. A biologically inspired model is proposed to estimate and integrate optical flow from
a spatio-temporal sequence of images. This model employs a log-polar velocity space,
where optical flow is represented using a population code (Raudies & Neumann,
Neurocomp, 2008). By extending the model proposed in (Ringbauer et al., ICANN, 2007),
motion gradients are locally calculated with respect to the flow direction (tangential) on the
basis of population encoded optical flow. Gradients themselves are encoded in a population
of responses for angular and speed differences which are independent of the underlying
flow direction (Tsotsos, CVIU, 100, 2005). For motion prediction, estimated motion is
modified according to the gradient responses and is fed back into the motion processing
loop. Local flow changes estimated in model area MT are further integrated in model area
MSTd to represent global motion patterns (Graziano, J. of Neuroscience, 14, 1994).
Results. The proposed model was probed with several motion sequences, such as the
flowergarden sequence (http://www-bcs.mit.edu/people/jyawang/demos/garden-layer/origseq.html) which contains motion parallax at different spatial scales. It is shown that motion
parallax occurs in conjunction with occlusions and disocclusions, e.g. when the foreground is
moving faster than the background. Employing motion gradients, disocclusions are detected
as locations of local acceleration and occlusions as deceleration in model area MT
(supplementary Fig.1). More complex configurations occur at motion boundaries of an IMO.
A sequence is investigated which contains a rectangular IMO in front of a wall which is
observed during a forward movement deflected slightly sideways. As in the flowergarden
sequence local occlusions and disocclusions are detected at vertical boundaries of the IMO
in model area MT. Additionally, not only the discriminating speed is encoded by the
gradients but also the angular difference. Thus, gradients encode how different parts of
foreground and background are moving relative to each other. Moreover, model area MST
signals a global motion pattern of expansion as an indicator of spatial observer forward
motion (supplementary Fig. 2).
Conclusion. The role of motion gradients in navigation is twofold: (i) at model area MT local
motion changes (e.g. accelerations/decelerations) are detected indicating obstacle or IMO
boundaries while (ii) at model area MST global motion patterns (e.g. expansion) are
encoded. An IMO present in the input sequence always leads to the occurrence of motion
gradients; however, motion gradients are also detected in cases where no IMO is present,
e.g. at depth discontinuities.
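A sketch of the tangential motion-gradient computation on a dense flow field is given below (our simplification of the population-coded model):

import numpy as np

# Directional derivative of speed along the local flow direction: positive
# values mark accelerations (disocclusions), negative values decelerations
# (occlusions) in the sense described above.
def tangential_speed_gradient(u, v):
    """u, v: (H, W) optical-flow components. Returns an (H, W) gradient map."""
    speed = np.sqrt(u ** 2 + v ** 2) + 1e-9
    gy, gx = np.gradient(speed)               # spatial gradient of speed
    return (gx * u + gy * v) / speed          # projection onto flow direction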
Acknowledgements:
Supported by BMBF 01GW0763(BPPL); Grad.School Univ.Ulm.
T9
Toward a goal-directed construction of state spaces
Sohrab Saeb*1, Cornelius Weber1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
Reinforcement learning of complex tasks presents at least two major problems. The first
problem is caused by the presence of sensory data that are irrelevant to the task. It will be a
waste of computational resources if an intelligent system represents information that is
irrelevant, since in such a case state spaces will be of high dimensionality and learning will
become too slow. Therefore, it is important to represent only the relevant data.
Unsupervised learning methods such as independent component analysis can be used to
encode the state space [1]. While these methods are able to separate sources of relevant
and irrelevant information under certain conditions, all data are nevertheless represented.
The second problem arises when information about the environment is incomplete, as in so-called partially observable Markov decision processes. This leads to the perceptual aliasing
problem, where different world states appear the same to the agent even though different
decisions have to be made in each of them. To overcome this problem, one should
constantly estimate the current state based also on previous information. This estimation
process is traditionally performed using Bayesian estimation approaches such as Kalman
filters and hidden Markov models [2].
The above-mentioned methods for solving these two problems are merely based on the
statistics of sensory data without considering any goal-directed behaviour. Recent findings
from biology suggest an influence of the dopaminergic system on even early sensory
representations, which indicates a strong task influence [3,4]. Our goal is to model such
effects in a reinforcement learning approach.
Standard reinforcement learning methods often involve a pre-defined state space. In this
study, we extend the traditional reinforcement learning methodology by incorporating a
feature detection stage and a predictive network, which together define the state space of
the agent. The predictive network learns to predict the current state based on the previous
state and the previously chosen action, i.e. it becomes a forward model. We present a
temporal difference based learning rule for training the weight parameters of these additional
network components. The simulation results show that the performance of the network is
maintained both in the presence of task-irrelevant features and in the case of a non-Markovian environment, where the input is invisible at randomly occurring time steps.
The model presents a link between reinforcement learning, feature detection and predictive
networks and may help to explain how the dopaminergic system recruits cortical circuits for
goal-directed feature detection and prediction.
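A schematic sketch of such an architecture is given below (a linear feature stage, a forward model that fills in invisible time steps, and TD(0) value learning; all dimensions, nonlinearities and rates are our assumptions):

import numpy as np

# Schematic agent: a feature stage maps observations to a state estimate;
# a forward model predicts the next state from state and action, filling in
# time steps where the input is invisible; TD(0) trains the value weights.
rng = np.random.default_rng(0)
n_obs, n_state, n_act = 8, 4, 2
F = 0.1 * rng.normal(size=(n_state, n_obs))            # feature detectors
M = 0.1 * rng.normal(size=(n_state, n_state + n_act))  # forward model
w = np.zeros(n_state)                                  # state-value weights
gamma, alpha = 0.9, 0.1

def step(s_prev, a_prev, obs, reward):
    """One update; obs may be None when the input is invisible."""
    sa = np.concatenate([s_prev, np.eye(n_act)[a_prev]])
    s_pred = np.tanh(M @ sa)                 # forward-model prediction
    s = s_pred if obs is None else np.tanh(F @ obs)
    if obs is not None:                      # train the forward model
        M[:] += alpha * np.outer(s - s_pred, sa)
    delta = reward + gamma * (w @ s) - (w @ s_prev)   # TD error
    w[:] += alpha * delta * s_prev           # value update
    return s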
References:
[1] Independent component analysis: a new concept? P. Comon. Signal Processing,
36(3):287-314 (1994).
[2] Planning and acting in partially observable stochastic domains. L. P. Kaelbling, M. L.
Littman and A. R. Cassandra. Artificial Intelligence, 101:99-134 (1998).
[3] Practising orientation identification improves orientation coding in V1 neurons. A.
Schoups, R. Vogels, N. Qian and G. Orban. Nature, 412: 549-53 (2001).
[4] Reward-dependent modulation of working memory in lateral prefrontal cortex. S. W.
Kennerley, and J. D. Wallis. J. Neurosci, 29(10): 3259-70 (2009).
T10
A recurrent network of macrocolumnar models for face
recognition
Yasuomi Sato*13, Jenia Jitsev12, Philipp Wolfrum1, Christoph von der Malsburg1, Takashi
Morie3
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Goethe University, Frankfurt, Germany
3 Kyushu Institute of Technology, Kitakyushu, Japan
* [email protected]
Invariance is a key mechanism for understanding in-depth visual object recognition in the human
brain. Invariant object recognition is achieved by correctly matching a sensory input image
to its most suitable representation stored in memory. The required information about a
single object, for example its position and shape, is initially uncertain under realistic
visual conditions. The most likely shape and positional information must be specified or
detected selectively to integrate both kinds of information into one entire identity.
“What”-information about a particular object is identified by finding correct correspondence of
an input image to its related image representation, to be more precise, by finding a set of
points, which extract Gabor features from the input image and can then be identified as
the same points extracting similar features from the stored image. In addition, the
“where”-information about the relevant object should be detected and bound to the object
information. This requires a neurally plausible mechanism for focal or spatial attention
when attention is oriented to a particular locus in the environment.
In this work, we are aiming at developing an artificial visual object recognition system being
capable of focal attention by making effective use of an invariant recognition. The system
depends on finding a best balance of Gabor feature similarities and topological constraints of
feature extraction sets. It is based on a global recurrent hierarchical switchyard system of a
macrocolumnar cortical model, setting several intermediate layers between an input layer
and the higher model layer.
The recognition system possesses a crucial function for correspondence finding: it
preserves the Gabor feature quality from one intermediate layer to the next while
decreasing the number of Gabor feature representations in successively higher
intermediate layers. It facilitates the bottom-up flow of input information to match the most
suitable representation in the model layer, at the same time, detecting a position of the
object on the input via focal attention in the top-down flow. The dynamical recurrent
macrocolumnar network has the ability to integrate shape and position information of a
particular object even when such information is uncertain.
Acknowledgements:
This work was supported by the European Commission-funded project, “Neocortical Daisy
Architectures and Graphical Models for Context-Dependent Processing” FP6-2005-015803,
by the German Federal Ministry of Education and Research (BMBF) within the “Bernstein
Focus: Neurotechnology” through research grant 01GQ0840, and by the Hertie Foundation.
T11
Adaptive velocity tuning on a short time scale for visual motion
estimation
Volker Willert*12, Julian Eggert1
1 Honda Research Institute Europe GmbH, Offenbach, Germany
2 Technical University Darmstadt, Darmstadt, Germany
* [email protected]
Visual motion is a central perceptual cue that helps to improve object detection, scene
interpretation, and navigation. One major problem for visual motion estimation is the so-called
aperture problem, which states that visual movement cannot be unambiguously estimated
based on temporal correspondences between local intensity patterns alone. It is widely
accepted that velocity-selective neurons in visual area MT solve this problem via a
spatiotemporal integration of local motion information which leads to temporal dynamics of
the neural responses of MT neurons. There are several contributions that propose models
that simulate the dynamical characteristics of MT neurons, like [1]. All of these models are
based on a number of motion detectors each responding to the same retinotopic location but
tuned to different speeds and directions. The different tunings sample the entire velocity
space of interest densely and uniformly. For each retinotopic location the number of
the motion detectors is assumed to be fixed and also the different velocity tunings do not
change over time.
Recent studies concerning the tuning of neurons in area MT in macaques point out that even
on a short time scale the tunings of motion-sensitive neurons adapt strongly to the
movement direction and to the temporal history of the speed of the current stimulus [2,3].
We propose a model for dynamic motion estimation that incorporates a temporal adaptation
of the response properties of motion detectors. Compared to existing models, it is able to
adapt not only the tuning of motion detectors but additionally allows to change the number of
detectors per image location.
For this reason, we provide a dynamic Bayesian filter with a special transition probability that
propagates velocity hypotheses over space and time, where the set of velocity hypotheses
is adaptable both in its size and in the velocity values. Additionally, we propose
methods for adapting the number and the values of velocity hypotheses based on the
statistics of the motion detector responses. We discuss different adaptation techniques using
velocity histograms or applying approximate expectation maximization for optimizing free
parameters, in this case velocity values and set numbers.
We show that reducing the number of velocity detectors in conjunction with keeping them
smartly adaptive to be able to cluster around some relevant velocities has several
advantages. The computational load can be reduced by a factor of three while the accuracy
of the estimate reduces only marginally. Additionally, motion outliers are suppressed and the
estimation uncertainty is reduced due to the reduction of motion hypotheses to a minimal set
that is still able to describe the movement of the relevant scene parts.
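A toy sketch of one such adaptation step is given below (weighted re-clustering of velocity hypotheses; the full dynamic Bayesian filter and the EM variant are not reproduced):

import numpy as np

# Keep only a small, adaptive set of velocity hypotheses: weight each
# candidate by its recent likelihood, then re-cluster around velocities
# that actually explain the data (a few weighted k-means passes).
def adapt_hypotheses(velocities, weights, n_keep=5, iters=10):
    """velocities: (n, 2) candidate velocities; weights: their likelihoods."""
    idx = np.argsort(weights)[-n_keep:]          # seed with the best ones
    centers = velocities[idx].copy()
    for _ in range(iters):
        assign = np.argmin(
            ((velocities[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_keep):                  # weighted centroid update
            m = assign == k
            if weights[m].sum() > 0:
                centers[k] = np.average(velocities[m], axis=0,
                                        weights=weights[m])
    return centers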
References:
[1] P.Burgi, A. Yuille and N. Grzywacz, Probabilistic Motion Estimation Based on Temporal
Coherence, Neural Computation, 12, 1839-1867, 2000.
[2] A. Kohn and J.A. Movshon, Adaptation changes the direction tuning of macaque MT
neurons, Nature Neuroscience, 7, 764-772, 2004.
[3] A. Schlack, B. Krekelberg and T. Albright, Recent History of Stimulus Speeds Affects the
Speed Tuning of Neurons in Area MT, Journal of Neuroscience, 27, 11009-11018, 2007.
T12
Tracking objects in depth using size change
Chen Zhang*1, Julian Eggert2
1 Control Theory and Robotics Lab, Darmstadt University of Technology, Darmstadt,
Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
Tracking an object in depth is an important task, since the distance to an object often
correlates with an imminent danger, e.g. in the case of an approaching vehicle. A common
way to estimate the depth of a tracked object is to utilize binocular methods like stereo
disparity. In practice, however, depth measurement using binocular methods is technically
expensive due to the need for camera calibration and rectification. In addition, higher depths
are difficult to estimate because of the inverse relationship between disparity and depth.
Here, we introduce an alternative approach for depth estimation, Depth-from-Size. This is a
human-inspired monocular method where the depth is gained by utilizing the fact that object
depth is proportional to the ratio of object physical size and object retinal size. Since both the
physical size and the retinal size are unknown terms, they have to be measured and
estimated together with the depth in a mutually interdependent manner. For each of the
three terms specific measurement and estimation methods are probabilistically combined.
This results in probability density functions (pdfs) at the output of three components for
measuring and estimating these three terms, respectively.
In every processing step, we use a 2D tracking system for first obtaining the object’s 2D
position in the current monocular 2D image. On the position where the target object is found,
the scaling factor of the object retinal size is measured by a pyramidal Lucas-Kanade
algorithm. In our setting, the object retinal size is the only observable subject to frequent
measurements, whereas physical size and depth are internal states that have to be inferred
by the system according to the constraint - depth / focal length = physical size / retinal size that couples the three terms. Bayesian estimators are used to estimate the pdfs of the retinal
size and the depth, whereas the physical size is gained by a mean estimator, since it is
assumed to remain constant over time. Additional measurement inputs for the physical size
and the depth are optional, acting as correcting evidence for both terms.
Measuring only the retinal size leaves us with an inherent ambiguity in the system, so that
either the physical size or the depth must become available once at initialization. In our
system, for this purpose we used a known object size or depth information gained by other
depth cues like stereo disparity.
The performance of the proposed approach was evaluated in two scenarios: an artificial one
with ground truth and a real-world scenario. In the latter, the depth estimation performance of this
system is compared with that of a directly measured stereo disparity. The evaluation results
show that this approach is a reliable alternative to the standard stereo disparity approach for
depth estimation with several advantages: 1) simultaneous estimation of depth, physical size
and retinal size; 2) no stereo camera calibration and rectification; 3) good depth estimation
at higher depth ranges for large objects.
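A minimal sketch of the coupling constraint with a running-mean size estimator is given below (our simplification; the full pdf-based Bayesian machinery is omitted):

# Depth-from-Size constraint: depth / focal_length = physical_size / retinal_size.
# The retinal size is measured each frame; the physical size, assumed to be
# constant, is tracked by a running mean; depth follows from the constraint.
class DepthFromSize:
    def __init__(self, focal_length, init_depth, init_retinal_size):
        self.f = focal_length
        # resolve the scale ambiguity once at initialization (e.g. from stereo)
        self.size = init_depth * init_retinal_size / focal_length
        self.n = 1

    def update(self, retinal_size):
        """Infer depth from the current retinal-size measurement."""
        return self.f * self.size / retinal_size

    def correct_size(self, retinal_size, measured_depth):
        """Optional correcting evidence: refine the physical-size estimate."""
        self.n += 1
        s = measured_depth * retinal_size / self.f
        self.size += (s - self.size) / self.n     # running mean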
Decision, control and reward
T13
Learning of visuomotor adaptation: insights from experiments and
simulations
Mona Bornschlegl*1, Orlando Arévalo2, Udo Ernst2, Klaus Pawelzik2, Manfred Fahle1
1 Department for Human Neurobiology, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
2 Department for Theoretical Physics, Center for Cognitive Sciences, Bremen University,
Bremen, Germany
* [email protected]
Repetitive prism adaptation leads to dual-adaptation, where switching between adapted and
normal state is instantaneous. Up to now, it was unclear whether this learning is triggered by
the number of movements during each phase of adaptation or instead by the number of
phase changes from adaptation to readaptation and back. Here, we varied these two factors
using a virtual environment, simulating prism adaptation. Ten groups of subjects (5 subjects/
group), each defined by a particular displacement and number of movements per phase,
conducted 1200 movements. The initial pointing errors of each phase decay exponentially
with the number of phase changes for all groups due to learning. We also observe a slightly
faster learning rate per phase change for longer adaptation and readaptation phases. These
results clearly indicate that learning of visuomotor adaptation is induced primarily by
repeated changes between the adapted and normal states and that the phase length only
plays a marginal role on both direct effect and aftereffect.
An additional aspect of dual-adaptation is the speed of adaptation and readaptation in the
individual phases. In the current literature some authors found a change in adaptation and
readaptation rates during repetitive adaptation, whereas others found constant rates.
Overall, we find an increase in adaptation and readaptation rates after repetitive adaptation,
but this trend cannot be found in each individual group.
We are motivated to study adaptation and dual-adaptation processes as reinforcement
learning-like problems, where the subject receives a global feedback signal (the
reinforcement/punishment/error signal) after each trial. With this global signal the subject is
able to individually change inner parameters like synaptic weights, in order to search for and
find an optimal behavior.
To understand the dynamics of dual-adaptation found in the empirical data, we investigate a
feed forward network subjected to a reinforcement learning scheme, which is based on
stochastic fluctuations of the synaptic weights. We simulated the learning of two different
situations and observed that both the order and duration of the stimulus presentation play an
important role for the learning speed. In particular, the more balanced the average
punishment/reward/error function is during the learning process, the faster the learning
becomes. This balance of the punishment/reward/error function depends strongly on the
order and duration of the stimulus presentation, thus linking the model to our experimental
observations. In summary, switching phases as rapidly as possible, i.e. after the minimum
number of trials that triggers learning, leads to faster dual-adaptation.
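A sketch of the learning scheme described above is given below (reward-modulated stochastic weight fluctuations on a feed-forward map; the reward baseline and all constants are our assumptions):

import numpy as np

# Weight-perturbation learning: fluctuate the weights, consolidate the
# fluctuation when the global error signal beats a running baseline, as a
# simple model of learning from a scalar signal after each pointing trial.
rng = np.random.default_rng(0)
W = np.eye(2)                      # visuomotor map for 2D pointing
sigma, baseline = 0.05, None       # fluctuation size, running error baseline

def trial(target, shift):
    """One pointing movement under a prism-like displacement 'shift'."""
    global W, baseline
    dW = sigma * rng.normal(size=W.shape)      # stochastic weight fluctuation
    point = (W + dW) @ target + shift          # executed end point
    err = np.linalg.norm(point - target)       # global error/punishment signal
    if baseline is None:
        baseline = err
    if err < baseline:                         # better than recent trials:
        W = W + dW                             # consolidate the fluctuation
    baseline += 0.1 * (err - baseline)         # track the average error
    return err

# alternating adaptation (shifted) and readaptation (unshifted) phases
for phase in range(20):
    shift = np.array([0.3, 0.0]) if phase % 2 == 0 else np.zeros(2)
    for _ in range(10):
        trial(np.array([1.0, 0.0]), shift)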
T14
Neural response latency of smooth pursuit responsive neurons in
cortical area MSTd
Lukas Brostek*1, Seiji Ono2, Michael J Mustari2, Ulrich Büttner1, Stefan Glasauer1
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Yerkes National Primate Research Center, Atlanta, USA
* [email protected]
Lesion and microstimulation studies in primate cortices have shown that the medial superior
temporal (MST) area is involved in the control of smooth pursuit (SP) eye movements. The
lateral part of MST (MSTl) has been implicated in the coding of visual target motion [1] used
to drive pursuit responses. The role of the dorsal part of MST (MSTd) in the generation of
SP eye movements is highly disputed, even though about one third of MSTd neurons show
strong neuronal responses to visual pursuit stimuli. Experimental evidence, for example by
blanking of the target, suggested that these responses contain an extraretinal component. It
has therefore been suggested that the pursuit-related neurons in MSTd may code an
estimate of gaze velocity [2].
Computational models of SP control posit that an efference copy of the ocular motor
command is used to generate an estimate of eye velocity via an internal plant model. The
estimate of target motion is constructed by adding the retinal error velocity to this signal.
Simulations of our dual pathway SP control model [3] show that for stability reasons the
delay of the estimated eye velocity signal with respect to the eye motor command should
approach the sum of latencies in the primary retinal feedback loop, i.e., the latency between
target motion and eye movement, which exhibits multi-trial mean values between 100 and
150 ms. Indeed, we recently showed that on average eye velocity related neuronal signals in
MSTd lag behind eye motion with a remarkably similar latency [4]. Thus, SP-related neurons
in MSTd may code an estimate of eye velocity suited to reconstruct target motion in MSTl.
Based on these observations, we hypothesized that if SP-related neurons carry a signal
derived from an efference copy, then the delay of SP-related neurons must be related to the
eye movement latency on a trial-to-trial basis. This relation could either be a constant delay
or a linear relation reflecting the actual variation of the eye movement latency. We examined
the responses of pursuit-sensitive MSTd neurons to step-ramp pursuit (laser spot, 4 target
velocities, max. 30°/s) recorded in two awake macaque monkeys. The latency of eye
movement and the delay of neuronal response with respect to target motion onset were
determined for each trial.
The neuronal latency with respect to target onset correlated significantly with eye movement
latency, thus supporting our hypothesis of an efference copy origin of the MSTd signal.
Further analysis showed that the neuronal latency lagged behind eye movement onset by a
constant value of 100 to 150 ms, and did not reflect trial-to-trial variations in eye movement
latency. Thus, the delay mechanism between the efference copy of the eye motor command
and the estimate of eye velocity works independently of the variable latency in pursuit eye
movement onset.
References:
[1] Ilg UJ et al. Neuron 43:145-151, 2004
[2] Ono S, Mustari MJ. J Neurophysiol 96:2819-2825, 2006
[3] Nuding U et al. J Neurophysiol 99:2798-808, 2008
[4] Brostek L et al. Abstract for the CNS 2009, Berlin
T15
Neuronal decision-making with realistic spiking models
Ralf Häfner*1, Matthias Bethge1
1 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
The neuronal processes underlying perceptual decision-making have been the focus of
numerous studies over the past two decades. In the current standard model [1][2] the output
of noisy sensory neurons is pooled and integrated by decision neurons. Once the activity of
the decision neuron reaches a threshold, the corresponding choice is made. This
framework’s prediction about the relationship between measurable quantities like
psychophysical kernel, choice probabilities, and reaction times crucially depends on the
underlying noise model. To the best of our knowledge, all models to date assume the noise
variance, or the Fano factor, to be constant over time.
Our study explores the impact of assuming more realistic noise on reaction times,
psychophysical kernel and choice probability. First we generate spike trains with an
increasing noise variance over time. We find that the time course of the choice probabilities
follows the time course of the noise variance, while the psychophysical kernel does not. We
next generate more realistic spike trains of sensory neurons by simulating leaky integrate-and-fire neurons with Gaussian inputs. The resulting spike counts are Poisson-like at short
counting intervals but increase their Fano factor as the counting interval is increased
(reaching about 5 for a counting window of width 2 sec) – in agreement with what is
observed empirically in cortical neurons [3]. As a result the distribution of reaction times
becomes much wider – just as expected from sensory neurons with increased variance. This
in itself would result in a psychophysical kernel that is decreasing more slowly than would be
expected from constant noise. However, the long temporal correlations in the noise also lead
to a strong decrease in the psychophysical kernel. As a consequence, even in a decision
model that assumes full integration of sensory evidence over the entire duration of the
stimulus (and not just until a neuronal threshold is reached), the psychophysical kernel will
be decreasing over time.
Our findings have at least two direct implications for the interpretation of existing data. First,
a decreasing psychophysical kernel can in general not, as is usually done, be taken as direct
evidence that the subject is making their decision before the end of the stimulus duration.
Secondly, our findings are important for the debate on the source of choice probabilities:
One of the standard model’s central claims – that choice probabilities are causal – was
recently challenged by empirical evidence that showed that choice probabilities and
psychophysical kernel have a different time course [4]. Our findings show that such a
difference in time courses is incompatible only with a constant-noise model; it may well be
compatible with more realistic types of neuronal noise.
References:
[1] Shadlen, MN, Britten, KH, Newsome, WT, Movshon, JA: J Neurosci 1996, 16:1486-1510
[2] Cohen, MR, Newsome, WT: J Neurosci 2009, 29:6635-6648
[3] Teich, MC, Heneghan, C, Lowen, SB, Ozaki, T, Kaplan, E: J Opt Soc Am A Opt Image
Sci Vis 1997, 14:529-546
[4] Nienborg, H, Cumming, BG: Nature 2009, 459:89-92
T16
A computational model of basal ganglia involved in the cognitive
control of visual perception
Fred H Hamker*1, Julien Vitay2
1 Computer Science Department, Technical University Chemnitz, Chemnitz, Germany
2 Psychologisches Institut II, Westfälischen Wilhelms-Universität, Münster, Germany
* [email protected]
Goal-directed visual perception requires maintaining and manipulating a template of the
desired target in visual working memory (WM), which allows processing in the posterior
lobe to be biased. It is still unclear how such goal-directed perception is implemented in the brain. We
propose that such interaction between visual WM and attentional processing involves a
network of brain areas consisting of inferotemporal cortex (IT) for the retrieval of visual
information associated to the target, dorsolateral-prefrontal cortex (dlPFC) for the
manipulation and maintenance of the target template in face of distractors, medial temporal
lobe (MTL) regions for rapid encoding and novelty detection, as well as basal ganglia (BG)
for the switching and activation of WM representations in dlPFC.
We designed a novel computational model of the BG that interacts with a model of perirhinal
cortex (PRh, part of the medial temporal lobe) and a simple model of dlPFC for memorizing
objects, to explore how the BG might be involved in the cognitive control of visual
perception. The BG model is composed of the striatum receiving connections from PRh and
dlPFC. The striatum inhibits SNr which in turn tonically inhibits a thalamic nucleus interacting
with PRh. Homeostatic Hebbian learning takes place simultaneously in the connections from
cortical areas to the striatum (representing the context of the task) and in the connections
from striatum to SNr as well as within SNr (learning to retrieve the correct representation).
Moreover, a dopaminergic cell learns to compute the difference between the reward actually
received and the expectation of reward based on striatal representations and modulates
learning in the other areas.
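A schematic fragment of this learning rule (a toy sketch only: it collapses the SNr and thalamic stages, and the sizes, rates and contexts are all assumed rather than taken from the model) shows how a dopamine signal computed as reward minus striatal reward expectation can gate Hebbian cortico-striatal learning:

    import numpy as np

    rng = np.random.default_rng(1)
    n_ctx, n_str = 12, 4                    # cortical inputs, striatal cells (assumed)
    W = 0.1 * rng.random((n_str, n_ctx))    # cortico-striatal weights (Hebbian, DA-gated)
    w_val = np.zeros(n_str)                 # striatal reward-expectation weights
    errs = []
    for trial in range(1000):
        ctx = np.zeros(n_ctx)
        ctx[rng.integers(2)] = 1.0          # one of two toy contexts per trial
        stri = np.maximum(W @ ctx, 0.0)     # striatal activation (rectified)
        reward = 1.0 if ctx[0] > 0 else 0.0 # context 0 is the rewarded one
        da = reward - w_val @ stri          # dopamine: reward minus expectation
        W += 0.02 * da * np.outer(stri, ctx)    # DA-modulated Hebbian learning
        w_val += 0.05 * da * stri
        errs.append(abs(da))
    print(f"mean |prediction error|: first 50 trials {np.mean(errs[:50]):.2f}, "
          f"last 50 trials {np.mean(errs[-50:]):.2f}")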
We applied this model to simultaneously learn delayed matching-to-sample (DMS) and
delayed nonmatching-to-sample (DNMS) tasks. Whether DMS or DNMS should be
performed is defined by a contextual information presented after the sample and before the
search array composed of two targets. Reward is given to the system when it selects the
correct target through thalamic stimulation of PRh. The model has to represent the context
efficiently in the striatum to solve the tasks, and it is able to learn them concurrently after 400
trials, independently of the number of cells in SNr, which indicates a parallel search for the
correct representation. Although at the beginning of learning several cells in SNr can become
selective for the same striatal pattern, the learned competition between them progressively
selects the single one that disinhibits the correct target. The reward-predictive value of striatal
representations also takes into account the probability of reward associated with an object.
Similarly to the PVLV model of O'Reilly and Frank, our model reproduces the reward-related
firing pattern of dopaminergic cells in conditioning. However, our model highlights the role of
BG processing in visual WM processes, not only in its cognitive component but also in the
retrieval of target information. It explains how vision can be guided by the goals of the task at
hand.
T17
Reaching while avoiding obstacles: a neuronally inspired attractor
dynamics approach
Ioannis Iossifidis*1, Gregor Schöner1
1 Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
* [email protected]
How motor, premotor, and parietal areas represent goal-directed movements has been a
topic of intensive neurophysiological research over the last two decades. One key discovery
was that the direction of the hand's movement in space was encoded by populations of neurons
in these areas together with many other movement parameters. These distributions of
population activation reflect how movements are prepared ahead of movement initiation, as
revealed by activity induced by cues that precede the imperative signal. The rich behavioral
literature on how movement preparation depends on prior task information can be accounted
for on the basis of these activation fields. These movement parameter representations are
updated in the course of a movement such as when movement direction changes when the
end-effector traces a path. Moreover, motor cortex is also involved in generating the time
course of the movement itself. This is made plausible also by the fact that it has been
possible to decode motor cortical activity in real time and drive virtual or robotic end-effectors. In such tasks, monkeys have been able to learn to direct the virtual or robotic
effector toward a motor goal relying only on mental movement planning.
Is the level of description of end-effector movement direction sufficient to also satisfy other
constraints of movement generation such as obstacle avoidance or movement coordination?
In this contribution we demonstrate that this is possible in principle by implementing a
neuronal dynamics of movement direction on a robotic system which generates goal-oriented reaching movements while avoiding obstacles.
This implementation is based on the attractor dynamics approach to behavior generation.
Reaching behavior is generated from the movement direction of the hand in three-dimensional
space, which is parameterized by two angles and thus forms a two-dimensional vector. The dynamics of these variables is
structured by two contributions, attraction to the direction in which the movement target lies
and repulsion from movement directions in which obstacles are detected. The reaching
behavior is generated from the overall attractor that emerges as these various nonlinear
contributions are superposed. The translation of the emerging path of the hand into signals
controlling the joint angles makes use of an exact solution of the inverse kinematics of an
anthropomorphic seven-degree-of-freedom robotic arm. We show how the redundancy of
this arm can be used to propagate obstacle avoidance from the hand to the arm itself.
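A planar, single-angle sketch of this attractor dynamics (gains and ranges assumed; the actual implementation uses the two-dimensional direction vector and an exact inverse kinematics) illustrates how the superposition of an attractive target contribution and a repulsive obstacle contribution shapes the heading; the functional forms follow one standard formulation of the attractor dynamics approach:

    import numpy as np

    def heading_dynamics(phi, psi_tar, psi_obs_list,
                         lam_tar=2.0, lam_obs=4.0, sigma=0.4):
        """Rate of change of the heading phi: an attractor at the target
        direction, repellers at obstacle directions (parameters assumed)."""
        dphi = -lam_tar * np.sin(phi - psi_tar)           # attraction to target
        for psi_obs in psi_obs_list:
            d = np.angle(np.exp(1j * (phi - psi_obs)))    # wrapped angular difference
            dphi += lam_obs * d * np.exp(-d ** 2 / (2 * sigma ** 2))  # local repulsion
        return dphi

    # the heading relaxes toward the target while steering around an obstacle
    phi, dt = 0.0, 0.01
    psi_tar, obstacles = 0.6, [0.3]
    for _ in range(1000):
        phi += dt * heading_dynamics(phi, psi_tar, obstacles)
    print(f"final heading {phi:.3f} rad (target at {psi_tar} rad, "
          f"obstacle at {obstacles[0]} rad)")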
T18
Expected values of multi-attribute objects in the human prefrontal
cortex and amygdala
Thorsten Kahnt*1, Jakob Heinzle12, Soyoung Q Park3, John-Dylan Haynes12
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Charité-Universitätsmedizin, Berlin, Germany
3 Max-Planck Institute for Human Development, Berlin, Germany
* [email protected]
Almost every decision alternative consists of several attributes. For instance, different
attributes of a fruit - size, shape, color and surface texture - signal its nutritional value. To
make good or even optimal choices, the expected rewards of all attributes need to be
integrated into an overall expected value. The amygdala and the ventromedial prefrontal
cortex (vmPFC) have been shown to represent the expected value of environmental cues.
However, it is not known how these structures interact when options comprise multiple
reward related attributes. To investigate this question, we acquired fMRI data while subjects
performed a task in which they had to integrate multiple reward related attributes into an
overall expected value. Associations between single attributes and different magnitudes of
monetary rewards were learned prior to scanning. We used time-resolved multi-voxel pattern
recognition to predict the integrated expected value of multi-attribute objects of an
independent test data set in a parametric fashion. We found that patterns of local activity in
vmPFC and amygdala encode the integrated expected value of multi-attribute objects.
Encoding in the amygdala lagged temporally behind encoding in vmPFC. Furthermore,
Granger causality mapping (GCM) revealed an information flow from the vmPFC to the
amygdala during the presentation of the multi-attribute object. These findings suggest that
the expected value of multi-attribute objects is first integrated in the vmPFC and then
signaled to the amygdala where it could be used to support learning and adaptive behavior.
T19
Optimal movement learning for efficient neurorehabilitation
Petko Kiriazov*1
1 Biomechanics and Technically Assisted Rehabilitation Lab, Bulgarian Academy of
Sciences, Sofia, Bulgaria
* [email protected]
Parkinson's, stroke, cerebral palsy, and other neurological diseases may cause severe
problems in human motion behaviour. In particular, such diseases affect the control of
voluntary, goal-directed movements, e.g., reaching or performing steps. In such cases,
control functions (neural signals to muscles) are to be re-learnt and the problem is to find
efficient control learning (CL) strategies. In our study, a novel conceptual framework for
optimal CL of goal-directed movements is proposed. It is based on underlying principles of
neurophysiology, robot dynamics (Eq. 1 in the supplement), optimal control theory, and
machine learning.
Goal-directed movements in healthy persons are usually performed optimally as regards
motion speed, position accuracy, and energy expenditure. Optimal control functions for such
motion tasks have a triphasic (burst-pause-burst) shape, Fig. 1, and can be described by
their magnitudes and switching times. These are the intrinsic parameters a human has to learn
in point-to-point motion tasks. The CL scheme has the following main steps: 1) definition and
parametrization of the control structure; 2) selection of the most appropriate pairs of control
parameters and controlled outputs; 3) correction of the control parameters, applying a
natural bisection algorithm, until the target is reached. During learning, we keep the control
magnitudes constant and adjust only the switching times. The convergence of the CL
process is mathematically guaranteed and it is found that the CL algorithm is optimal with
respect to the number of trials.
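A toy instance of this scheme (a one-degree-of-freedom double integrator with assumed magnitudes and target; the real tasks use the two- to six-degree-of-freedom models described below) keeps the control magnitude fixed and bisects a single switching time until the movement lands on the target:

    def execute(t1, u=1.0, dt=1e-4):
        """Bang-bang point-to-point movement: accelerate with +u for t1 seconds,
        then brake with -u until the velocity returns to zero; returns the
        final position of a unit-mass double integrator."""
        x, v, t = 0.0, 0.0, 0.0
        while t < t1:                 # first burst
            v += u * dt; x += v * dt; t += dt
        while v > 0:                  # braking burst
            v -= u * dt; x += v * dt
        return x

    target, lo, hi = 0.5, 0.0, 2.0    # target position and initial bracket (assumed)
    for trial in range(30):           # bisection on the switching time
        t1 = 0.5 * (lo + hi)
        err = execute(t1) - target
        if abs(err) < 1e-3:
            break
        lo, hi = (t1, hi) if err < 0 else (lo, t1)
    print(f"learned switching time t1 = {t1:.4f} s after {trial + 1} trials")

Because the final position grows monotonically with the switching time, the bisection bracket always contains the solution, which is what guarantees convergence in a small number of trials.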
Using realistic mathematical models, our CL approach was applied to motion tasks like arm
reaching, sit-to-stand-up, and performing steps. Correspondingly, we simulate them with
two-, three-, and six-degree-of-freedom dynamical models. In the computer simulation,
Figs. 2-3 and Fig. 6, we verified that the learning of control parameters converges and the
number of trials is very small indeed. In practice, experiments with rapid aiming movements
of the arm confirm the feasibility and efficacy of the proposed CL approach as well. Special
attention in our current research is devoted to the challenging problem of CL in the real,
three-dimensional human locomotion, Fig. 4. We can perform proper decomposition of this
complex motion task into several goal-directed movements, Fig. 5 and apply the proposed
CL scheme for each of them. Our CL approach makes it possible to derive control rules for
the different locomotion phases and perform steps with variable length, height, direction and
gait velocity.
The work outlined here can provide a fundamental understanding of optimal movement
learning and may lead to the development of strategies for efficient neuro-muscular
rehabilitation. Depending on the injury or neurological disorder, various technical means can
be applied: brain-computer interfaces, electromyography, functional electrical stimulation
and assistive robotic devices. We believe that the proposed CL approach is a quite natural one
for rebuilding neural nets associated with voluntary motion tasks (cortical reorganization) by
applying proper training procedures. Focused attention by the trainees on achieving the
required motion targets is very important because it stimulates release of neurotransmitters
and improves plasticity and learning. Personal goal setting in motion tasks encourages
patient motivation, treatment adherence and self-regulation processes.
T20
Computational modeling of the Drosophila neuromuscular
junction
Markus Knodel*12, Daniel B Bucher13, Gillian Queisser4, Christoph Schuster1, Gabriel
Wittum2
1 Bernstein Group for Computational Neuroscience, University of Heidelberg, Heidelberg,
Germany
2 Goethe Center for Scientific Computing, Goethe University, Frankfurt, Germany
3 Interdisciplinary Center for Neuroscience IZN, Heidelberg University, Heidelberg,
Germany
4 Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, University of Heidelberg,
Heidelberg, Germany
* [email protected]
An important challenge in neuroscience is understanding how networks of neurons go about
processing information. Synapses are thought to play an essential role in cellular information
processing; however, quantitative and mathematical models of the underlying physiologic
processes which occur at synaptic active zones are lacking. We are generating
mathematical models of synaptic vesicle dynamics at a well characterized model synapse,
the Drosophila larval neuromuscular junction.
The synapse's simplicity, accessibility to various electrophysiological recording and imaging
techniques, and the genetic malleability which is intrinsic to the Drosophila system make it ideal
for computational and mathematical studies.
We have employed a reductionist approach and started by modeling single presynaptic
boutons. Synaptic vesicles can be divided into different pools; however, a quantitative
understanding of their dynamics at the Drosophila neuromuscular junction is lacking [4]. We
performed biologically realistic simulations of high and low release probability boutons [3]
using partial differential equations (PDE) taking into account not only the evolution in time
but also the spatial structure in two (and recently also three dimensions).
The PDEs are solved using UG, a program library for the computation of multi-dimensional
PDEs using a finite volume approach and implicit time-stepping methods, which lead to
large linear equation systems that can be solved with multigrid methods [1,2]. Numerical
calculations are done on multi-processor computers, allowing for fast calculations with
different parameters in order to assess the biological feasibility of
different models. In preliminary simulations, we modeled vesicle dynamics as a diffusion
process describing exocytosis as Neumann streams at synaptic active zones. The initial
results obtained with these models are consistent with experimental data; however, this
should be regarded as a work in progress.
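To make the boundary treatment concrete, here is a one-dimensional finite-volume toy version of such a model (all parameter values are illustrative; the actual simulations are two- and three-dimensional and run within UG): vesicle density diffuses along the bouton and is removed by an exocytosis flux imposed as a Neumann condition at the active zone:

    import numpy as np

    # 1-D finite-volume diffusion of vesicle density with an exocytosis outflux
    # (Neumann condition) at the active zone; all parameter values illustrative.
    L, n = 2.0, 100                   # bouton length and number of cells
    dx = L / n
    D, J_exo = 0.1, 0.05              # diffusion coefficient, release flux
    dt = 0.4 * dx ** 2 / D            # explicit stability limit
    rho = np.ones(n)                  # initial (normalized) vesicle density

    for step in range(5000):
        flux = np.zeros(n + 1)                       # fluxes at the cell interfaces
        flux[1:-1] = -D * (rho[1:] - rho[:-1]) / dx  # interior diffusive fluxes
        flux[0] = -J_exo if rho[0] > 0 else 0.0      # outflux at the active zone (x = 0)
        # flux[-1] stays 0: sealed distal end
        rho -= dt / dx * (flux[1:] - flux[:-1])      # finite-volume update
        rho = np.maximum(rho, 0.0)

    print(f"remaining vesicle fraction: {rho.mean():.3f}")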
We have also started to study the calcium dynamics with respect to the consequences of the
T-bar active zones. Further refinements are currently being implemented, including simulations
using morphologically realistic geometries which were generated from confocal scans of the
neuromuscular junction using NeuRA (a Neuron Reconstruction Algorithm). Other
parameters such as glutamate diffusion and reuptake dynamics, as well as postsynaptic
receptor kinetics are intended to be incorporated in the future.
References:
[1] P. Bastian, K. Birken, K. Johannsen, S. Lang, N. Neuss, H. Rentz-Reichert, C. Wieners:
UG - A Flexible Software Toolbox for Solving Partial Differential Equations, Computing
and Visualization in Science 1, 27-40 (1997).
[2] P. Bastian, K. Birken, K. Johannsen, S. Lang, V. Reichenberger, C. Wieners, G. Wittum,
and C. Wrobel: A parallel software-platform for solving problems of partial differential
equations using unstructured grids and adaptive multigrid methods. In: Jager and E.
Krause (eds): High performance computing in science and engineering, pages 326-339.
Springer, 1999.
[3] G. Lnenicka, H. Keshishian: Identified motor terminals in Drosophila larvae show distinct
differences in morphology and physiology, J Neurobiol. 2000 May;43(2):186-97.
[4] S. Rizzoli, W. Betz: Synaptic vesicle pools, Nat Rev Neurosci. 2005 Jan;6(1):57-69.
T21
Effects of dorsal premotor cortex rTMS on contingent negative
variation and Bereitschaftspotential
Ming-Kuei Lu3, Patrick Jung3, Noritoshi Arai31, Chon-Haw Tsai4, Manfred Kössl2, Ulf
Ziemann*3
1 Department of Neurology, Toyama Hospital, Tokyo, Japan
2 Institute for Cell Biology and Neuroscience, Goethe University, Frankfurt, Germany
3 Motor Cortex Group, Department of Neurology, Goethe University, Frankfurt, Germany
4 Neuroscience Laboratory, Department of Neurology, China Medical University Hospital, Taichung, Taiwan
* [email protected]
Background:
Recently, the repetitive transcranial magnetic stimulation (rTMS) technique has been broadly
used to study the motor control system in humans. Low-frequency rTMS (1 Hz or less)
produces long-term depression (LTD)-like plasticity and higher frequency rTMS produces
long-term potentiation (LTP)-like plasticity in the primary motor cortex. However, studies of
rTMS effects have been largely restricted to measuring corticospinal excitability by means of
motor evoked potential amplitude. Here we were interested in studying rTMS effects on
preparatory volitional motor activity. We examined the contingent negative variation (CNV)
and the Bereitschaftspotential (BP), measures of externally cued vs. intrinsic volitional motor
preparatory activity, respectively, using high-resolution electroencephalography (EEG).
RTMS was targeted to the dorsal premotor cortex (PMd), a brain region thought to be
primarily involved in externally cued motor preparation. Accordingly, we hypothesized that
rTMS would alter CNV but leave BP unaffected.
Methods:
Ten healthy, right-handed subjects (6 men, 27.9 ± 6.9 years) executed sequential right-finger
movements via a computer-based interface for the CNV recordings. They were
instructed to respond to imperative visual go-signals 2 seconds following a warning signal.
Surface electromyography (SEMG) and motor performance including the reaction time and
error rate were monitored. A total of 243 trials were completed before and after a 15-min rTMS intervention. MRI-navigated 1 Hz rTMS (15 min continuous stimulation) or 5 Hz rTMS
(15 times 12 s epochs, separated by 48 s pauses) was delivered to the left PMd in separate
sessions. RTMS intensity was adjusted to 110% of the individual active motor threshold. For
the BP recordings, nine subjects (5 men, 28.4 ± 7.1 years) performed the same type of
finger movements, but intrinsically, i.e. without external cues. Early (1500 to 500 ms before
SEMG onset) and late (500 to 0 ms before SEMG onset) components of CNV and BP were
quantified. RTMS effects were analyzed separately for CNV and BP by three-way ANOVAs
with EEG electrode position (25 central electrodes) and time (before and after rTMS) as
within-subject factors and rTMS frequency (1 Hz vs. 5 Hz) as between-subject factor.
Results:
Motor performance and the early components of CNV and BP were not significantly changed
by the rTMS interventions. ANOVAs and scalp voltage maps showed that the late component of
CNV, but not BP, was facilitated significantly after 1 Hz left PMd rTMS but remained
unchanged after 5 Hz rTMS. This facilitation was located mainly over the fronto-central scalp
area with slight predominance to the left hemisphere.
Conclusions:
RTMS of the motor-dominant PMd interferes with the preparatory motor activity of externally
cued volitional movements, but not with preparatory activity of intrinsic volitional movement,
supporting the pivotal role of the motor-dominant PMd in motor preparation of externally
cued but not intrinsic movements. This effect was specific because it was observed only
after low-frequency rTMS (not high-frequency rTMS), it affected only the late CNV
component (not the early CNV component), and it occurred only at those electrode locations
overlying the fronto-central brain region predominantly of the stimulated left motor-dominant
hemisphere.
T22
A computational model of goal-driven behaviours and habits in
rats
Francesco Mannella1, Marco Mirolli1, Gianluca Baldassarre*1
1 Laboratory of Computational Embodied Neuroscience, Istituto di Scienze e Tecnologie
della Cognizione, Consiglio Nazionale delle Ricerche, Rome, Italy
* [email protected]
Research on animal learning has investigated learning on the basis of two main
experimental paradigms. The first, Pavlovian conditioning, can be explained mainly in terms
of the formation of associations between ‘conditioned stimuli’ and ‘unconditioned stimuli’
(CS-US) within amygdala (Amg). The second, instrumental conditioning, can be explained
mainly in terms of the formation of stimulus-response associations (S-R or ‘habits’) within
basal ganglia (BG: in particular the dorsolateral striatum, DLS, and the globus pallidus, GP) and the
expression of behaviour via an important ‘loop’ reaching cortical motor areas (e.g., premotor
cortex, PM).
Recently, the animal-learning literature has identified a further class of behaviours termed
‘goal-directed behaviours’. These are characterised by a sensitivity to the manipulation of the
association between actions and their outcomes (A-O associations). A typical experiment
showing such sensitivity is devaluation. This shows that rats perform with less intensity an
action (e.g. pressing a lever) which has been instrumentally associated with a certain
outcome (e.g. sucrose solution), if the value of the latter is previously decreased with either
satiation or nausea, in comparison to a second action (e.g. pulling a chain) which has been
instrumentally associated with a second non-devalued outcome (e.g. food pellets). A-O
associations might be stored in a second important BG-cortical loop involving nucleus
accumbens (NAcc) and medial prefrontal cortex (in particular prelimbic cortex, PL) as their
lesion impairs the sensitivity to devaluation. Interestingly, lesions of the Amg also cause such
impairment (Balleine, Killcross, and Dickinson, 2003, Journal of Neuroscience).
Recently, research on brain anatomy (Haber, Fudge, and McFarland, 2000, Journal of
Neuroscience) has started to show that there are at least two communication pathways
between the BG-cortical loops: the striato-nigro-striatal ‘spirals’, based on dopamine, and the
connections between the cortical regions of the loops. This suggests the existence of a
hierarchical organisation of A-O and S-R behaviours which, however, has not been fully
understood. This work proposes a system-neuroscience computational model which gives
for the first time a coherent account of some of the interactions existing between CS-US
associations and A-O associations, and between the DLS-PM loop and the NAcc-PL loop. In
particular, the model proposes that: (a) the A-O macro-channel can influence via
dopaminergic spirals the neural competition for action selection taking place within DLS/GP;
(b) the A-O loop can select specific outcomes and influence the selection of actions within
the S-R loop via the cortical pathways involving PL; (c) backward connections within and
between loops can inform the regions of the striatum on the action which has been selected
so as to strengthen or weaken the S-R and A-O associations on the basis of the
experienced value of the outcome. The model is constrained by known brain anatomy and
is validated by reproducing the results of a number of lesion experiments with rats.
T23
Fixational eye movements during quiet standing and sitting
Konstantin Mergenthaler*1, Ralf Engbert1
1 Department of Psychology, University of Potsdam, Potsdam, Germany
* [email protected]
Two competing functions for fixational eye movements have been described: the stabilization
of retinal images against postural sway, and the counteraction of retinal fatigue by miniature
displacements across photoreceptors. Starting from the hypothesis that the relative importance
of these two functions for fixation changes with posture, we performed an experiment with two
different postural conditions. In one condition participants had to fixate a small fixation spot
while standing, and in the second while sitting. In the standing condition, the center-of-pressure
movement was recorded in addition to the fixational eye movements.
Both types of movement exhibit temporal scaling, with persistent behavior on short time
scales changing to antipersistent behavior on long time scales. However, the crossover
occurs at different scales for the two types of movement.
Analyzing the data, we show that changing the posture influences the scaling properties of
fixational eye movements in accordance with the hypothesis of different weighting of the two
functional roles. Furthermore, we identified distinct differences in microsaccade
properties between the two conditions.
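The scaling analysis referred to here can be made concrete with a detrended fluctuation analysis (DFA) sketch. The surrogate signal below is purely illustrative (a persistent drive plus slow mean reversion, with assumed time constants), not the experimental data, but it reproduces the qualitative signature: local exponents above 0.5 (persistent) at short scales crossing to below 0.5 (antipersistent) at long scales:

    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: fluctuation F(s) for each scale s."""
        y = np.cumsum(x - x.mean())                 # profile of the series
        F = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                   for seg in segs]                 # variance after linear detrending
            F.append(np.sqrt(np.mean(res)))
        return np.array(F)

    rng = np.random.default_rng(2)
    # illustrative surrogate: persistent drive plus slow mean reversion
    n, phi, tau = 2 ** 15, 0.95, 500.0
    drive, pos, x = 0.0, 0.0, np.empty(n)
    for i in range(n):
        drive = phi * drive + rng.standard_normal()  # positively correlated drive
        pos += -pos / tau + drive                    # mean reversion bounds the excursion
        x[i] = pos
    vel = np.diff(x)                                 # increment ("velocity") series

    scales = np.unique(np.logspace(1.0, 3.4, 10).astype(int))
    F = dfa(vel, scales)
    alpha = np.diff(np.log(F)) / np.diff(np.log(scales))  # local scaling exponents
    for s, a in zip(scales[1:], alpha):
        print(f"up to scale {s:5d}: local exponent {a:.2f}")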
T24
Suboptimal selection of initial saccade in a visual search task
Camille Morvan*1, Laurence T Maloney1
1 Center for Neural Sciences, Department of Psychology, New York University, New York,
USA
* [email protected]
Human vision is most accurate at the center of the retina and acuity decreases with
eccentricity. In order to build a representation of the visual environment useful for everyday
actions and survival, humans make on average three saccades per second. The choice of successive
saccade locations is the outcome of a complex decision process that depends on the
physical properties of the image [Itti & Baldi (2005), Itti & Koch (2000)] and the internal goals
of the observer [Yarbus (1967) , Hayhoe & Ballard, (2005)]. Do observers optimally select
the next saccade location? We investigated how the visual system incorporates information
about possible losses and gains into selection of saccades. We examine how the visual
system “invests” the retina by selecting saccades in an economic game based on a visual
search task.
We started by mapping the subjects' acuity: for a given visual stimulus, we measured
identification performance at a given eccentricity. This mapping captures one source of
uncertainty: the probability of detecting or not detecting targets at different retinal
eccentricities. Using this mapping, we could then predict where people should saccade in
order to maximize their gain in a subsequent decision task.
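The prediction step can be illustrated with a small computation (the Gaussian falloff and its scale are assumed here, standing in for the measured acuity map): expected gain is evaluated for candidate fixation positions between two side tokens, and the optimum shifts from the midpoint to one of the tokens as the spacing grows:

    import numpy as np

    def p_correct(eccentricity, e_half=4.0):
        """Probability of correctly judging a token change at a given retinal
        eccentricity (deg); chance level 0.5, falloff scale assumed."""
        return 0.5 + 0.5 * np.exp(-(eccentricity / e_half) ** 2)

    def expected_gain(fix_x, token_xs, reward=1.0):
        """Either side token changes with equal probability."""
        return reward * np.mean([p_correct(abs(fix_x - x)) for x in token_xs])

    for spacing in (3.0, 12.0):                  # near vs far side tokens (deg)
        tokens = [-spacing / 2, +spacing / 2]
        candidates = np.linspace(-spacing, spacing, 201)
        gains = [expected_gain(c, tokens) for c in candidates]
        best = candidates[int(np.argmax(gains))]
        print(f"spacing {spacing:4.1f} deg: best fixation at {best:+.2f} deg, "
              f"gain {max(gains):.3f} (center gives {expected_gain(0.0, tokens):.3f})")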
In the decision task, the subjects saw three visual tokens in the periphery and were
instructed to make a saccade to any of those tokens. During the saccade, one of the two
side tokens would change, and the subjects then judged how the token had changed. The
eight subjects received rewards for correct responses.
We ran two series of experiments to manipulate the optimal saccade position. In the first
series, the spacing between the side tokens was varied. When the tokens were near to each
other, the subject could reliably identify the token change from a fixation point midway
between the tokens. If the tokens were far apart, the subject would perform better by using
a stochastic strategy consisting of saccading to one of the side tokens. In the second series
we followed the same logic, but the spacing between the side objects was constant and we
varied their sizes. When the objects were big, subjects should saccade to the center of the
display, and when they were small they should saccade to one of the side tokens.
Subjects were suboptimal in both experiments. Most subjects had a constant strategy and
did not take into account the visibility of the tokens when planning their saccades. They
earned, on average, half the maximum expected gain. To conclude, even in simple displays
with only two tokens and after training, subjects do not plan saccades that maximize
expected gain.
T25
Timing-specific associative plasticity between supplementary
motor area and primary motor cortex
Florian Müller-Dahlhaus*2, Noritoshi Arai21, Barbara Bliem2, Ming-Kuei Lu2, Ulf Ziemann2
1 Department of Neurology, Toyama Hospital, Tokyo, Japan
2 Motor Cortex Group, Department of Neurology, Goethe University, Frankfurt, Germany
* [email protected]
The supplementary motor area (SMA) is essential for preparation and execution of voluntary
movements. Anatomically, SMA shows dense reciprocal connections to primary motor cortex
(M1), yet the functional connectivity within the SMA-M1 network is not well understood. Here
we modulated the SMA-M1 network in humans using multifocal transcranial magnetic
stimulation (TMS) and detected changes in functional coupling by electroencephalography
(EEG) as well as corticospinal output changes by motor evoked potentials (MEPs).
Twenty-four right-handed subjects aged 19-43 years participated in the study. MEPs were
recorded from right and left first dorsal interosseous muscles. Left and right M1 were
stimulated near-simultaneously (delta t: 0.8ms) in nine blocks of 50 trials each with an
intertrial interval of five seconds (Pre1-3, Cond1-3, Post1-3). In blocks Cond1-3 an additional
TMS pulse was applied over SMA at an interstimulus interval (ISI) of -6ms (SMA stimulation prior to bilateral M1
stimulation) or +15ms (SMA stimulation following bilateral M1 stimulation). TMS intensity for
SMA stimulation equaled 140% of the individual active motor threshold. M1 stimulation
intensity was adjusted to produce an unconditioned MEP of 1-1.5mV. In a second set of
experiments scalp-EEG was recorded during rest at baseline (B0), after near-synchronous
bilateral M1 stimulation (B1), and after associative SMA and M1 stimulation (P1, P2).
Associative SMA and M1 stimulation at an ISI of -6ms long-lastingly increased MEP
amplitudes in left (F(8,64) = 3.04, p = 0.006) and right M1 (F(8,64) = 2.66, p = 0.014),
whereas at an ISI of +15ms MEP amplitudes were decreased in right M1 only (left M1:
F(8,64) = 1.07, p = 0.40; right M1: F(8,64) = 2.20, p = 0.039). These effects were critically
dependent on the ISI between SMA and M1 stimulation as well as SMA stimulation intensity
and site. Importantly, MEP amplitude changes could not be induced by associative SMA and
M1 stimulation without prior bilateral near-synchronous M1 stimulation during Pre1-3 trials.
Partial coherence analysis of EEG data revealed significant coherence changes (B1 vs. B0)
in the low and high alpha band in a distributed motor network including SMA and M1. These
EEG coherence changes were predictive for MEP amplitude changes in dominant (left) M1
after associative SMA and M1 stimulation.
Our findings demonstrate that priming of cortical motor networks may induce specific
changes in coherent oscillatory activity in these networks which are both necessary and
predictive for subsequent occurrence of stimulus-induced associative plasticity between
SMA and M1. The present results suggest a role for functional coupling of cortical areas to
promote associative plasticity, for instance in the context of learning.
T26
Fast on-line adaptation may cause critical noise amplification in
human control behaviour
Felix Patzelt*1, Klaus Pawelzik1
1 Institute of Theoretical Neurophysics, University of Bremen, Bremen, Germany
* [email protected]
When humans perform closed-loop control tasks like in upright standing or while balancing a
stick, their behaviour exhibits non-Gaussian fluctuations with long-tailed distributions [1, 2].
The origin of these fluctuations is not known, but their statistics suggests a fine-tuning of the
underlying system to a critical point [3].
We investigated whether self-tuning may be caused by the annihilation of local predictive
information due to success of control [4]. We found that this mechanism can lead to critical
noise amplification, a fundamental principle which produces complex dynamics even in very
low-dimensional state estimation tasks. It generally emerges when an unstable dynamical
system becomes stabilised by an adaptive controller that has a finite memory [5]. It is also
compatible with control based on optimal recursive Bayesian estimation of a varying hidden
parameter.
Starting from this theory, we developed a realistic model of adaptive closed-loop control by
including constraints on memory and delays. To test this model, we performed
psychophysical experiments where humans balanced an unstable target on a computer
screen. It turned out that the model reproduces the long tails of the distributions together
with other characteristics of the human control dynamics. Fine-tuning the model to match the
experimental dynamics identifies parameters characterising a subject's control system which
can be independently tested. Our results suggest that the nervous system involved in
closed-loop motor control nearly optimally estimates system parameters on-line from very
short epochs of past observations. Ongoing experimental investigation of the model's
predictions promises detailed insights into control strategies employed by the human brain.
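A minimal sketch of the proposed mechanism (assumed scalar dynamics and memory length; the experimental model additionally includes delays and other constraints): an unstable state is stabilised by a controller whose parameter estimate comes from a short window of past observations, and because successful control annihilates the predictive information in that window, estimation errors intermittently amplify the noise and produce heavy-tailed fluctuations:

    import numpy as np

    rng = np.random.default_rng(3)
    lam, mem, n = 1.05, 10, 100_000   # unstable gain, controller memory, steps (assumed)
    e = np.zeros(n)                   # control error
    hist_x, hist_y = [], []           # regression pairs (e_t, e_{t+1} + u_t)
    u = 0.0
    for t in range(n - 1):
        e[t + 1] = lam * e[t] - u + 0.01 * rng.standard_normal()
        hist_x.append(e[t]); hist_y.append(e[t + 1] + u)
        hist_x, hist_y = hist_x[-mem:], hist_y[-mem:]
        # finite-memory least-squares estimate of the gain lam
        lam_hat = sum(a * b for a, b in zip(hist_x, hist_y)) / \
                  (sum(a * a for a in hist_x) + 1e-12)
        u = lam_hat * e[t + 1]        # control based on the current estimate

    z = (e - e.mean()) / e.std()      # excess kurtosis: 0 for a Gaussian, large here
    print(f"excess kurtosis of the control errors: {(z ** 4).mean() - 3:.1f}")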
T27
Inferring human visuomotor Q-functions
Constantin A Rothkopf*1, Dana H Ballard2
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 University of Texas, Austin, USA
* [email protected]
Huge amounts of experimental data show that reinforcement learning is a key component in
the organization of animal behavior. Formal reinforcement learning models (RL) potentially
can explain how humans can learn to solve a task in an optimal way based on experience
accumulated while interacting with the environment. But although RL algorithms can do this
for small problems, their state spaces grow exponentially in the number of state variables.
This problem has made it difficult to apply RL to realistic settings.
One way to improve the situation would be to speed up the learning process by using a
tutor. Early RL experiments showed that even small amounts of teaching could be highly
effective, but the teaching signal was in the form of correct decisions that covered the
agent’s state space, an unrealistic assumption. In the more general problem an agent can
only observe some of the actions of a teacher. The problem of taking subsets of expert
behavior and estimating the reward functions has been characterized as inverse
reinforcement learning (IRL). This problem has been tackled by assuming that the agent has
access to a teacher’s base set of features on which to form a policy. Under that assumption
the problem can be reduced to trying to find the correct weighting of features to reproduce
the teacher’s policy [1]. The algorithm converges but is very expensive in terms of policy
iterations. A subsequent Bayesian approach assumes a general form of the reward function
that maximizes the observed data and then samples data in order to optimize the reward
function [2] by making perturbations in the resultant reward estimates. However, this method
also very expensive, as it requires policy iteration in its innermost control loop in order to
converge.
We make dramatic improvements on [2] by using a specific parametric form of reward function
in the form of step functions with just a small number of step transitions. With these functions
policy iteration is not required as the reward function’s parameters can be computed directly.
This method also extends to a modular formalism introduced by [3,4] that allows rewards to
be estimated individually for subtasks and then used in combination.
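In such a modular scheme, the composite policy can simply act on the sum of the per-module Q-values; the sketch below (with invented toy modules for the sidewalk scenario described next, and an assumed softmax temperature) shows only this combination step:

    import numpy as np

    def select_action(q_modules, state, actions, beta=2.0, rng=None):
        """Composite policy for modular RL: sum the per-module Q-values and
        sample an action by softmax (modules and parameters are toy stand-ins)."""
        rng = rng if rng is not None else np.random.default_rng()
        q_total = np.array([sum(q(state, a) for q in q_modules) for a in actions])
        p = np.exp(beta * (q_total - q_total.max()))
        p /= p.sum()
        return actions[rng.choice(len(actions), p=p)], q_total

    # invented toy modules: stay on the sidewalk, avoid the obstacle, approach litter
    q_path = lambda s, a: -abs(s["offset"] + a)                # recentre on the path
    q_obstacle = lambda s, a: -2.0 * (a == s["obstacle_dir"])  # do not step into it
    q_litter = lambda s, a: 1.0 * (a == s["litter_dir"])       # step toward the litter

    state = {"offset": 1, "obstacle_dir": -1, "litter_dir": 0}
    action, q = select_action([q_path, q_obstacle, q_litter], state,
                              actions=[-1, 0, 1], rng=np.random.default_rng(4))
    print("summed Q per action:", dict(zip([-1, 0, 1], q.round(2))), "-> chose", action)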
The algorithm is demonstrated on a humanoid avatar walking on a sidewalk and collecting
litter while avoiding obstacles. Previously we had tuned reward functions by hand in order to
make the avatar perform the three tasks effectively. We show that reward functions
recovered using human data performing the identical task are very close to those used to
program the human avatar initially. This demonstrates that it is possible to theorize as to a
human’s RL algorithm by implementing that algorithm on a humanoid avatar and then
testing the theory by seeing if the reward structure implied by the human data is
commensurate with that of the avatar.
Acknowledgements:
Supported by NIH Grant R01RR009283.
References:
[1] Abbeel & Ng, Proc. 21st ICML (2004)
[2] Ramachandran & Amir, Twentieth IJCAI (2007)
[3] Sprague et al ACM TAP (2007)
[4] Rothkopf and Ballard COSYNE (2007)
T28
Beaming memories: source localization of gamma oscillations
reveals functional working memory network
Frederic Roux*52, Harald Mohr1, Michael Wibral4, Wolf Singer53, Peter Uhlhaas5
1 Department of Biological Psychology, Goethe University, Frankfurt, Germany
2 Department of Neurophysiology, Goethe University, Frankfurt, Germany
3 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
4 MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
5 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
Empirical and theoretical evidence suggests that synchronous oscillatory activity may be
involved in the neuronal dynamics of working-memory (WM).
During WM, neuronal synchrony could act as a mechanism to maintain and manipulate
encoded items once information is no longer available in the environment. In humans, most
evidence linking neuronal synchrony and WM has been reported from EEG/MEG studies.
However, only few EEG/MEG studies have investigated the cortical sources underlying
synchronous oscillatory activity during WM.
We recorded MEG-signals from 20 healthy participants during a visual-spatial WM (VSWM)
task. MEG signals were analysed in the time-frequency domain and the sources of
oscillatory activity were localized using beamforming techniques. Our results show a task-dependent increase of oscillatory activity in the high gamma (60-120 Hz), alpha (8-16 Hz)
and theta (4-7 Hz) bands over parietal and frontal sensors. In addition, we found that the
cortical sources of oscillatory activity in the gamma band reflect the activation of a fronto-parietal network during VSWM.
T29
Task-dependent co-modulation of different EEG rhythms in the
non-human primate
Stefanie Rulla*1, Matthias HJ Munk1
1 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
* [email protected]
EEG signals are the most global brain signals which reflect a brain’s functional state,
primarily by the frequency composition of oscillatory signal components. Numerous studies
have shown that oscillations accompany many neuronal processes underlying cognitive
function. Although the role of particular frequency bands is starting to emerge, their
combined occurrence and dynamical interplay is scarcely understood with respect to their
topological impact on neuronal processes. We set out to determine temporal and spatial
properties of various EEG rhythms in the best established animal model for studying the
neuronal mechanisms of cognition. Two monkeys were trained to perform a visuomotor task,
moving a lever as instructed by a moving visual stimulus while fixation was maintained. At
the end of each successful trial, a liquid reward was given and the monkey waited for
the next trial to start. EEG was recorded from 64 electrodes chronically implanted in the
bone bilaterally above numerous cortical areas: visual, auditory, parietal, sensorimotor,
premotor and prefrontal areas, digitized at 5 kHz and analyzed for changes in signal power
by sliding window FFT. These EEG signals are characterized by a broad distribution of
oscillation frequencies, ranging from delta (1-3 Hz) to high gamma frequencies (>150 Hz).
Different epochs of the task exhibited continual coming and going of prominent power
clusters in the time-frequency domain. Reliable effects (z-scores > 2) could be observed in
both monkeys: while attending to the visual stimulus and precisely controlling the lever
position, a prominent beta rhythm (12-30 Hz) occurred with a latency of 240 ms to the visual
stimulus. As soon as the monkey initiated further lever movements, this beta rhythm was
replaced by prominent power in the delta and in the high gamma band (50-140 Hz). The
topography of the frequency bands differed: while beta oscillations could be seen mostly
over visual, parietal and premotor areas, the delta band dominated for prefrontal and
premotor electrodes and gamma rhythms were observed over prefrontal areas. In contrast,
the period just after reward was dominated by power in the alpha band (8-13 Hz) distributed
over the entire brain. In sum, we identified task-dependent EEG oscillations in diverse
frequency bands which alternated through the different stages of the task following their
typical topographical distributions. The observation that different EEG rhythms like in the
delta and gamma frequency band co-occurred repeatedly suggests that interactions across
frequencies might play a crucial role in processing task relevant information.
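The power analysis mentioned above can be sketched as follows (synthetic signal and window parameters assumed; only the sliding-window FFT and baseline z-scoring steps are shown):

    import numpy as np

    def sliding_power(x, fs, win_s=0.5, step_s=0.1):
        """Sliding-window FFT power: returns (times, freqs, power) for signal x."""
        win, step = int(win_s * fs), int(step_s * fs)
        starts = np.arange(0, len(x) - win, step)
        taper = np.hanning(win)
        spec = np.array([np.abs(np.fft.rfft(x[s:s + win] * taper)) ** 2
                         for s in starts])
        return (starts + win / 2) / fs, np.fft.rfftfreq(win, 1 / fs), spec

    # toy EEG: a 20 Hz beta burst appearing mid-recording; fs of 5 kHz as in the text
    fs, dur = 5000, 4.0
    t = np.arange(int(fs * dur)) / fs
    x = np.random.default_rng(8).standard_normal(len(t))
    x += 4 * np.sin(2 * np.pi * 20 * t) * ((t > 2.0) & (t < 3.0))

    times, freqs, power = sliding_power(x, fs)
    band = (freqs >= 12) & (freqs <= 30)                 # beta band
    beta = power[:, band].mean(axis=1)
    base = beta[times < 1.5]                             # pre-burst baseline
    z = (beta - base.mean()) / base.std()                # baseline z-score
    print("beta-band z-score exceeds 2 at t =", times[z > 2].round(2), "s")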
T30
A computational neuromotor model of the role of basal ganglia in
spatial navigation
Deepika Sukumar*1, V. Srinivasa Chakravarthy1
1 Indian Institute of Technology, Madras, India
* [email protected]
Navigation is a composite of wandering and goal-directed movements. Therefore a
neuromotor model of navigation must include a component with stochastic dynamics and
another which involves hill-climbing over a certain “salience” function. There are models of
navigation that incorporate both basal ganglia (BG) and hippocampus, the brain circuits that
subserve two forms of navigation – cue-based and place-based respectively. But existing
models do not seem to identify the neural substrate for the stochastic dynamics necessary
for navigation. We propose that the indirect pathway of BG is the substrate for exploratory
drive and present a model of spatial navigation involving BG.
We formulate navigation as a Reinforcement Learning (RL) problem and model the role of
BG in driving navigation. Actor, Critic and Explorer are the three key components of RL. We
follow the classical interpretation of the dopamine signal as temporal difference error, the
striatum as the Critic, and the motor cortex (M1) as the Actor. The loop between the
subthalamic nucleus and the globus pallidus externa, which is capable of complex neural
activity, is hypothesized to be the Explorer.
The proposed model of navigation is instantiated in a model rat exploring a simulated
circular Morris water maze. A set of eight poles of variable height placed on the
circumference of the pool provides the visual orienting cues. An invisible platform is placed at
one end of the pool. Guided by appropriate rewards and punishments, the model rat must
search for the platform.
The following are the RL-related components of the model (a toy code sketch follows the list):
Input: The rat’s visual view consisting of some poles, coded as a “view vector” is presented
to both M1 and BG.
Reward: As the rat randomly explores the pool, accidental arrival at the platform results in
reward and collision with the surrounding walls in punishment.
Critic: A function, V(t), of the view vector is trained by the dopamine signal (TD error).
Dopamine signal: Dopamine signal is used to switch between the direct and the indirect
pathways of BG. Stronger dopamine signal increases the activity of the direct pathway (DP),
while reducing the activity of the indirect pathway (IP).
Critic Training: The Critic is trained in multiple stages, starting with a small discount factor
and increasing it with time.
Actor (M1) training: The perturbations to M1 from BG, in combination with the dopamine
signal, are used to train M1.
Output: A weighted sum of the outputs of M1 and BG determines the direction of the rat’s
next step. Thus in the present model, dopamine modulates activity within BG, and BG
modulates learning in M1.
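The following toy fragment (assumed dimensions, gains and gating function; not the authors' implementation) wires these components together for a single step: the Critic's TD error plays the role of dopamine, gates the balance between the direct-pathway (M1) output and the Explorer's perturbation, and modulates learning:

    import numpy as np

    rng = np.random.default_rng(5)
    dim, n_dir = 16, 4                    # view-vector size, movement directions (assumed)
    w_critic = np.zeros(dim)              # striatal value weights (Critic)
    W_actor = np.zeros((n_dir, dim))      # motor cortex weights (Actor, M1)
    gamma, alpha_c, alpha_a = 0.95, 0.05, 0.05

    def trial_step(view, view_next, reward):
        """One step: TD error as dopamine; dopamine gates the balance between
        the direct-pathway output and the Explorer noise, and gates learning
        of both Critic and Actor (a toy sketch, not the full model)."""
        global w_critic, W_actor
        da = reward + gamma * (w_critic @ view_next) - (w_critic @ view)  # TD error
        explore = 1.0 / (1.0 + np.exp(4.0 * da))   # low dopamine -> more exploration
        out = (1 - explore) * (W_actor @ view) + explore * rng.standard_normal(n_dir)
        direction = int(np.argmax(out))            # direction of the next step
        w_critic += alpha_c * da * view            # Critic: TD learning
        W_actor += alpha_a * da * np.outer(out, view)  # Actor: dopamine-gated update
        return direction, da

    direction, da = trial_step(rng.random(dim), rng.random(dim), reward=1.0)
    print(f"step direction {direction}, dopamine (TD error) {da:.2f}")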
A novel aspect of the present work is that the substrate for the stochastic component is
hypothesized to be the IP of BG. Future developments of this model would include two forms
of spatial-coding in hippocampus (path-integration and spatial-context mapping) and their
contribution to navigation.
T31
Working memory-based reward prediction errors in human ventral
striatum
Michael T. Todd*1, Jonathan D. Cohen1, Yael Niv1
1 Department of Psychology and Princeton Neuroscience Institute, Princeton University,
USA
* [email protected]
Reinforcement learning (RL) theory has developed into an elegant workhorse of
neuroscience that has been used to explain the firing of dopamine neurons, BOLD signal in
ventral striatum, and the overt behavior of organisms, as well as to propose specific
functional relationships between midbrain dopamine, striatal, and neocortical systems (see
[1] for review). A major strength of RL theory is its normative basis – RL algorithms, while
simple, yield accurate reward prediction and optimal reward-seeking choices under certain
assumptions. Thus RL theory is at once simple, descriptively accurate, and rational.
Although RL-based experimentation has emphasized simple classical or instrumental
conditioning paradigms in which higher level cognitive resources such as working memory
(WM) may be unnecessary, WM is surely an invaluable resource to a reward-seeking agent
and several authors have assumed that the brain's RL and WM systems are functionally
related (e.g., [2-4]). However, not only has this relationship never been explicitly tested, but
the alternative, that the RL system is “stimulus bound” and insensitive to information in WM,
is in some sense more consistent with RL theory. This is because RL algorithms owe their
normative justification to an essential memoryless or Markov assumption about the
environment, without which RL can lead to highly irrational, unstable behavior [5, 6].
Although this Markov property can hold in some special cases of WM-dependent learning, it
is generally violated when WM is required. Thus, evidence that neural RL systems are
indeed sensitive to WM would pose the significant empirical and theoretical challenge of
understanding this relationship while retaining the normative status that RL has enjoyed.
We present results from an fMRI experiment in which participants earn money by
continuously responding to a stream of stimuli. Critically, the current stimulus in isolation
never predicts positive or negative reinforcement, but may do so in combination with
previous stimuli which may be in WM. Thus a WM-insensitive RL system would exhibit
prediction errors at the time of reinforcement but not at the time of predictor stimuli, whereas
a WM-sensitive RL system would exhibit the opposite pattern. We found WM-sensitive
prediction errors in bilateral ventral striatum BOLD signal (58 contiguous voxels, pFDR <
0.05). This clearly demonstrates a WM-RL link, setting the stage for further investigation.
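The logic of the task can be mimicked with a toy TD simulation (hypothetical stimuli and parameters): reward after stimulus B is predicted only by whether A or C preceded it, so a learner whose state ignores working memory keeps a residual prediction error at reinforcement, while a learner whose state includes the previous stimulus does not:

    import numpy as np

    rng = np.random.default_rng(6)
    alpha, gamma = 0.1, 1.0
    V_sb = {}   # stimulus-bound values: keyed by the current stimulus only
    V_wm = {}   # WM-sensitive values: keyed by (previous stimulus, current stimulus)

    for _ in range(5000):
        first = "A" if rng.random() < 0.5 else "C"   # A predicts reward after B; C does not
        seq, reward = [first, "B"], (1.0 if first == "A" else 0.0)
        for t, stim in enumerate(seq):
            last = t == len(seq) - 1
            r = reward if last else 0.0
            # stimulus-bound TD update (state = current stimulus)
            v_next = 0.0 if last else V_sb.get(seq[t + 1], 0.0)
            V_sb[stim] = V_sb.get(stim, 0.0) + alpha * (r + gamma * v_next
                                                        - V_sb.get(stim, 0.0))
            # WM-sensitive TD update (state includes the previous stimulus)
            key = (seq[t - 1] if t else None, stim)
            v_next_wm = 0.0 if last else V_wm.get((stim, seq[t + 1]), 0.0)
            V_wm[key] = V_wm.get(key, 0.0) + alpha * (r + gamma * v_next_wm
                                                      - V_wm.get(key, 0.0))

    # prediction error at reinforcement on a rewarded A->B trial, after learning
    print(f"PE at reward, stimulus-bound: {1.0 - V_sb['B']:.2f}; "
          f"WM-sensitive: {1.0 - V_wm[('A', 'B')]:.2f}")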
References:
[1] Daw, N. D., and Doya, K. Current Opinion in Neurobiology 16, 199-204 (2006).
[2] Braver, T., Barch, D., and Cohen, J. Biological Psychiatry 46(3), 312-328 (1999).
[3] Daw, N. D., Courville, A. C., and Touretzky, D. S. Neural Computation 18(7), 1637-1677,
Jul (2006).
[4] Todd, M., Niv, Y., and Cohen, J. In Advances in Neural Information Processing Systems,
Koller, D., Bengio, Y., Schuurmans, D., Bottou, L., and Culotta, A., editors, volume 21
(2008).
[5] Littman, M. In From Animals to Animats 3: Proceedings of the Third International
Conference on Simulation of Adaptive Behavior, Cliff, D., Husbands, P., Meyer, J.-A.,
and Wilson, S. W., editors, 238-245. MIT Press, Cambridge, MA, USA (1994).
[6] Singh, S., Jaakkola, T., and Jordan, M. In Proceedings of the Eleventh International
Conference on Machine Learning, 284-292. Morgan Kaufmann (1994).
T32
Spatially inferred, but not directly cued reach goals are
represented earlier in PMd than PRR
Stephanie Westendorff*12, Christian Klaes1, Alexander Gail1
1 German Primate Center, Göttingen, Germany
2 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
* [email protected]
The parietal reach region (PRR) and the dorsal premotor cortex (PMd) both encode motor
goals for planned arm movements. A previous modeling study (Brozovic et al., 2007)
suggests that the motor goal tuning in PRR could be the result of feedback from motor tuned
structures. Here we test if motor goal latencies support the view that PRR inherits motor goal
tuning from PMd, and if relative latencies depend on the task condition.
We simultaneously recorded extracellularly with multiple microelectrodes from PRR and
PMd while the monkey performed a visually instructed anti-reach task. A combination of a
spatial cue and a cue about the transformation rule instructed whether to reach towards
(pro-reach) or opposite to (anti-reach) the spatial cue. In three different pre-cuing conditions
the spatial and the rule cue could be presented together either before or after a memory
period or the rule cue could be presented before the spatial information. Motor goal latency
was defined as the time when spatial selectivity for the reach-goal, not the spatial cue,
occurred in both pro- and anti-reaches.
We found that the latencies for the motor goal were always lower in PMd compared to PRR
independently of the pre-cuing condition. We further tested whether this latency difference affects
reach-goals for pro- as well as anti-reaches. Pro-/Anti-goal latencies were defined as the
time when spatial selectivity occurred in pro-/anti-reaches in those neurons which were
known to be exclusively motor goal related. We found lower motor-goal latencies in PMd
than PRR only for anti-reaches, but not for pro-reaches. A previous study in PRR (Gail &
Andersen, 2006) found that pro-goals emerge faster than anti-goals when the rule is pre-cued, which is also the case here. Additionally, this pro-/anti-latency difference was smaller,
if at all present, in the other pre-cuing conditions with simultaneous cues. Preliminary data
suggests that the larger latency difference in the rule pre-cuing condition is due to the fact
that the latency of pro-goals rather than anti-goals is influenced by the pre-cuing condition.
Our results support the view that motor goal information in PRR is received as feedback
from PMd, but only when the reach goal position has to be spatially inferred from an
instruction cue. The influence of pre-cuing on the pro-goal latencies challenges the
interpretation that early pro-goal tuning represents an automatic default response.
References:
Brozovic et al. 2007 J Neurosci 27:10588.
Gail & Andersen 2006 J Neurosci 26:9376.
T33
Classification of functional brain patterns supports diagnostic
autonomy of binge eating disorder
Martin Weygandt*1, Kerstin Hackmack1, Anne Schienle3, John-Dylan Haynes12
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Charité-Universitätsmedizin, Berlin, Germany
3 Department of Clinical and Health-Psychology, Institute of Psychology, Karl-Franzens
University, Graz, Austria
* [email protected]
Background:
Binge eating disorder (BED) is not yet officially classified as a mental disorder, since it is
unclear whether it is an independent medical condition distinguishable from Bulimia Nervosa
(BN). Recently [1], we were able to collect first evidence of differential brain activation to
visual food stimuli in both groups using functional magnetic resonance imaging (fMRI).
However, in [1] brain responses were analyzed with traditional univariate methods, where
activity is investigated independently for each brain region, thus ignoring disorder-relevant
patterns of activity of neighboring regions. In contrast, we and others have previously
demonstrated the relevance and power of pattern-based methods for cognitive paradigms in
healthy subjects [2]. Additionally, it remains unclear whether activation differences as found
in [1] are sufficient to predict the clinical status. Therefore, we reanalyze the data of [1] using
multivariate pattern recognition techniques with the aim of predicting the type of eating
disorder based on brain patterns alone. In this way we want to clarify whether BED is an independent
medical condition distinguishable from BN.
Methods:
Subjects and Preprocessing: Functional images of BED patients (N=17), BN patients (N=14)
and of healthy volunteers (N=20) were acquired in a cue reactivity paradigm (presentation of
visual food stimuli and neutral images). Images were preprocessed using SPM2. Activation
maps were calculated using a general linear model.
Pattern Recognition: Functional contrast maps (activation maps for food minus neutral image
condition) entered a two-stage procedure. First, the algorithm selected maximally informative
areas within anatomically predefined regions of interest (ROIs; amygdala, anterior cingulate
cortex, insula, lateral and medial orbitofrontal cortex, and ventral striatum) using a nested
leave-one-out cross-validation scheme. Then, the class of a left-out image was predicted
once per ROI by an ensemble of classifiers retrained on the data from the selected areas.
This nested analysis is used to avoid the circular inference that has recently been heavily
debated in neuroimaging.
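The essential point of the nested scheme is that voxel selection is refit inside every fold on training data only. A skeleton of the procedure (toy data and a simple nearest-class-mean classifier standing in for the actual ensemble) looks like this:

    import numpy as np

    def nested_loocv(X, y, k_features=50):
        """Leave-one-out cross-validation with feature (voxel) selection repeated
        inside each fold on training data only, avoiding circular inference."""
        n, correct = len(y), 0
        for i in range(n):
            train = np.arange(n) != i
            Xtr, ytr = X[train], y[train]
            # select the k most informative voxels by a simple t-like criterion
            m1, m0 = Xtr[ytr == 1].mean(0), Xtr[ytr == 0].mean(0)
            s = Xtr.std(0) + 1e-9
            idx = np.argsort(-np.abs(m1 - m0) / s)[:k_features]
            # nearest-class-mean prediction on the selected voxels only
            d1 = np.linalg.norm(X[i, idx] - m1[idx])
            d0 = np.linalg.norm(X[i, idx] - m0[idx])
            correct += int((d1 < d0) == (y[i] == 1))
        return correct / n

    # toy data: 31 "subjects", 500 voxels, a weak class difference in 20 voxels
    rng = np.random.default_rng(7)
    y = np.array([1] * 17 + [0] * 14)
    X = rng.standard_normal((31, 500))
    X[y == 1, :20] += 0.8
    print(f"cross-validated accuracy: {nested_loocv(X, y):.2f}")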
Results:
Maximal accuracy in separating BED / bulimic patients vs. healthy controls was obtained for
an ensemble in the right insular cortex (86%, p < 10^-5 / 78%, p < 0.005). Maximal accuracy in
the separation among eating disorders was obtained in the left ventral striatum (84%,
p < 0.0005).
Discussion:
It is possible to predict BED and BN from brain patterns with high accuracy. As opposed to
univariate procedures, classifiers identify the ventral striatum as a region differentiating
among eating disorders. This involvement of reward-related regions is presumably because
this region responds stronger to the food stimuli as reward-related in bulimic patients. Thus,
the results imply that BED is an independent medical condition distinguishable from BN.
Acknowledgements:
This work was funded by the Max Planck Society, the German Research Foundation and the
Bernstein Computational Neuroscience Program of the German Federal Ministry of
Education and Research (Grant Number 01GQ0411).
References:
[1] Schienle A, Schäfer A, Hermann A, Vaitl D (2009). Binge-eating disorder: reward
sensitivity and brain activation to images of food. Biol Psychiat, 65, 654-61.
[2] Haynes JD & Rees G (2006). Decoding mental states from brain activity in humans. Nat
Neur, 7, 523-34.
Learning and plasticity
T34
Hippocampal mechanisms in the initiation and perpetuation of
epileptiform network synchronisation
Gleb Barmashenko*1, Rika Bajorat1, Rüdiger Köhling1
1 Institute of Physiology, University of Rostock, Rostock, Germany
* [email protected]
This project is characterised by joint experimental approaches which contribute to the
development of new network models of one of the most common neurological diseases, epilepsy. Hypersynchronisation is a key factor in ictal epileptic activity, and there is evidence
that abnormal synchronising effects of interneuronal GABAergic transmission play an
important role in the initiation of epileptic activity and the generation of ictal discharges.
However, a concise characterisation of the role of different types of interneurones and of
their function in the exact spatio-temporal organisation of the epileptogenic network has yet
to be determined.
Electrophysiological measurements in slices from acute animal models of focal epilepsy,
both in normal and chronically epileptic tissue, started to determine the role of different types
of interneurones with respect to initiation of epilepsy and interictal-ictal transitions. There is
growing evidence that functional alterations in the epileptiform hippocampus critically
depend on GABAergic mechanisms and cation-chloride cotransporters.
To understand the cellular basis of specific morphological and functional alterations in the
epileptic hippocampus we studied the physiological characteristics and transmembrane
currents of neurones in hippocampal slices from epileptic and control rats using whole-cell
and gramicidin perforated patch-clamp recordings.
Whereas the resting membrane potential, input resistance, time constant, rheobase and
chronaxie were not significantly different between control and epileptic tissue, the reversal
potential of the GABAAR mediated currents (EGABA) was significantly shifted to more
positive values in the epileptic rats, which can contribute to hyperexcitability and abnormal
synchronisation within the epileptic hippocampus. Pharmacological experiments showed that
the observed changes in the epileptic tissue were due to a combined upregulation of the
main Cl- uptake transporter (Na+-K+-2Cl- cotransporter, NKCC1) and downregulation of the
main Cl- extrusion transporter (K+-Cl- cotransporter, KCC2).
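The depolarising E_GABA shift follows from the altered chloride gradient. A minimal
sketch, assuming purely chloride-permeable GABA_A channels and illustrative ion
concentrations (all numbers hypothetical, not measured values from this study), computes
the corresponding Nernst potentials:

# Nernst potential for chloride: E_Cl = (RT/zF) * ln([Cl-]_out / [Cl-]_in),
# with z = -1. NKCC1 upregulation / KCC2 downregulation raises [Cl-]_in,
# shifting E_Cl (and hence E_GABA) towards more positive values.
import math

R, T, F, z = 8.314, 310.0, 96485.0, -1   # J/(mol K), K, C/mol, valence

def nernst_cl(cl_out_mM, cl_in_mM):
    return 1000.0 * (R * T / (z * F)) * math.log(cl_out_mM / cl_in_mM)  # mV

print("control   E_Cl: %.1f mV" % nernst_cl(130.0, 7.0))    # about -78 mV
print("epileptic E_Cl: %.1f mV" % nernst_cl(130.0, 15.0))   # about -58 mV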
Paired recordings are commonly performed in juvenile animals (P16-P25). We are therefore
currently establishing a model of chronic epilepsy in young rats, verified by EEG
recordings.
In the further course of the project, a detailed analysis of interneurone-principal
neurone interactions will be undertaken. These biophysical parameters will serve to
establish realistic models of the computational behaviour of neurones and neuronal
networks. This will in turn allow us to establish models of increasing complexity and to
predict both functional and dysfunctional synchronisation patterns. These predictions will
then be tested experimentally in order to validate the models (in cooperation with the
Bernstein Center for Computational Neuroscience, Freiburg).
T35
The basal ganglia and the 3-factor learning rule: reinforcement
learning during operant conditioning
Rufino Bolado-Gomez*1, Jonathan Chambers1, Kevin Gurney1
1 Adaptive Behaviour Research Group, Department of Psychology, University of Sheffield,
Sheffield, UK
* [email protected]
Operant conditioning paradigms that explore interactive, or ‘trial and error’ learning in
animals, have provided evidence to suggest that the basal ganglia embody a form of
reinforcement learning algorithm, with phasic activity in midbrain dopaminergic neurons
constituting an internally generated training signal. In the present work we employ a
biologically constrained, computational model of the basal ganglia, and related circuitry (see
supplementary Fig. 1), to explore the proposal of Redgrave and Gurney (supplementary ref.
[1]) that the phasic dopamine signal represents a ‘sensory prediction error’ as opposed to
the ‘reward prediction error’ more commonly posited. Under this scheme, the sensory
prediction error or reinforcement learning signal trains the basal ganglia to preferentially
select any action that reliably precedes a novel outcome, irrespective of whether that
outcome is associated with genuine reward or not. In other words, this neuronal signal
switches the normal basal ganglia action-selection mechanism into a temporary
‘doing-it-again’ mode, biasing selection towards the key action causally associated with
the novel outcome. We propose that through the purposeful repetition of
such actions, the brain rapidly forms robust action-outcome associations rendering
previously novel outcomes predictable. Consistent with the proposal of Redgrave and
Gurney, we further suggest that through this policy of temporary ‘repetition bias’, a naive
animal populates a library of action-outcome associations in long-term memory and that
these subsequently form the foundation for voluntary goal-seeking behaviour.
The computational model that we present tests the idea that a ‘repetition-bias’ policy is
encoded at cortico-striatal synapses by underlying modulatory-plasticity effects, with
long-term potentiation leading to repetition and long-term depression being responsible
for returning the basal ganglia to an unbiased state (see supplementary Fig. 2). To this
end, we have constructed a novel learning rule (see supplementary Eq. 1) based upon the
3-factor synaptic plasticity framework proposed by Reynolds & Wickens (supplementary ref.
[4]). This rule is composed of a dopamine factor (supplementary ref. [2]) that combines
both phasic and tonic dopamine signal characteristics, and the properties of a stable
Hebbian-like BCM factor (supplementary ref. [3]). The combination of these two elements
accounts for synaptic re-normalization towards the baseline set as the initial condition.
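A rule of this general three-factor shape can be sketched as follows; this is only an
illustration of the framework, not the authors' supplementary Eq. 1, and the dopamine and
BCM terms below are simplified placeholders:

# Sketch of a 3-factor cortico-striatal update: the weight change is the
# product of presynaptic activity, a BCM-like postsynaptic factor (LTP above
# a sliding threshold, LTD below it), and a dopamine factor (phasic burst
# relative to tonic baseline). All functional forms are illustrative.
import numpy as np

def three_factor_update(w, pre, post, theta, dopamine, tonic=0.3, lr=0.01):
    bcm = post * (post - theta)          # sign flips at the threshold theta
    da = dopamine - tonic                # phasic deviation from tonic level
    return w + lr * pre * bcm * da

w, theta = 0.5, 0.25                     # sliding threshold kept fixed here
for t in range(5):
    pre, post = 1.0, 0.8
    dopamine = 0.9 if t == 2 else 0.3    # single phasic burst at t = 2
    w = three_factor_update(w, pre, post, theta, dopamine)
    print(t, round(w, 4))                # weight moves only at the burst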
We present results from the simulation of an operant conditioning task utilizing abstract
sensory and motor signals to demonstrate the model’s successful implementation of
repetition-bias hypothesis. We then compare the behavioural consequences of this policy to
natural animal behaviour by utilizing an embodied robot simulation in which the agent is free
to explore an open environment containing interactive objects. In addition, our results
demonstrate a biologically plausible relationship between robot behavioural performance
and simulated synaptic plasticity in cortico-striatal synapses.
T36
Dual coding in an auto-associative network model of the
hippocampus
Daniel Bush*2,1, Andrew Philippides2, Phil Husbands2, Michael O'Shea2
1 Collegium Budapest, Budapest, Hungary
2 University of Sussex, Brighton, UK
* [email protected]
Electrophysiology studies in a range of mammalian species have demonstrated that the
firing rate of single pyramidal neurons in the hippocampus encodes the presence of both
spatial and non-spatial cues [1]. In addition, the phase of place cell firing with respect to the
theta oscillation that dominates the hippocampal EEG during learning correlates with the
location of an animal within the corresponding place field [2]. Importantly, it has been
demonstrated that the rate and phase of neural activity can be dissociated, and may thus
encode information separately and independently [3]. Here we present a spiking neural
network model which is, to our knowledge, the first to utilise a dual coding system in order to
integrate the learning and recall of associations that correspond to both temporally-coded
(spatial) and rate-coded (non-spatial) activity patterns within a single framework.
Our model consists of a spiking auto-associative network with a novel STDP rule that
replicates a BCM-type dependence of synaptic weight upon mean firing rate (figure 1). The
scale of external input, recurrent synaptic currents and synaptic plasticity are each
modulated by a theta frequency oscillation. Place cell activity is represented by a
compressed temporal sequence of neural firing within each theta phase, while the presence
of a non-spatial ‘object’ is represented by neural bursting at the trough of the theta phase.
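One way such a rule can be caricatured is an asymmetric exponential STDP window whose
depression amplitude scales with the postsynaptic mean rate, giving a BCM-like crossover
from net depression at low rates to net potentiation at high rates. The constants below
are illustrative, not the parameters of the presented model:

# Rate-dependent asymmetric STDP: LTD amplitude grows as the postsynaptic
# rate falls below a target, so low rates give net depression and high
# rates net potentiation over a symmetric distribution of spike lags.
import numpy as np

tau_plus, tau_minus, A_plus = 20.0, 20.0, 1.0   # ms, ms, a.u.

def stdp(dt_ms, post_rate_hz, target_rate_hz=10.0):
    A_minus = A_plus * target_rate_hz / max(post_rate_hz, 1e-9)
    if dt_ms > 0:                                # pre before post: LTP
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)  # post before pre: LTD

for rate in (2.0, 10.0, 40.0):
    net = sum(stdp(dt, rate) for dt in range(-50, 51) if dt != 0)
    print("rate %4.1f Hz -> net weight drift %+.2f" % (rate, net))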
We simulate the network moving along a circular track of 50 overlapping place fields with
non-spatial cues present at 5 equidistant locations (figure 2). Following learning, we
demonstrate that:
1. External stimulation of any place cell generates the sequential recall of upcoming
place fields on the learned route (figure 3a).
2. External stimulation of any place cell generates the recall of any ‘object’
previously encountered at that place (figure 3b).
3. External stimulation of cells which encode an ‘object’ generates recall of both the
place at which that ‘object’ was observed, and the upcoming place fields on the
learned route (figure 3c).
4. The network performs pattern completion, meaning that only a subset of cues is
required to generate this recall activity.
This model provides the first demonstration of an asymmetric STDP rule mediating rate-coded learning in a spiking auto-associative network that is inspired by the neurobiology of
the CA3 region. Furthermore, the dual coding system utilised integrates both dynamic and
static activity patterns, and thus unifies the disparate (spatial and episodic) mnemonic
functions ascribed to the hippocampus. This research therefore provides the foundations for
a novel computational model of learning and memory in the medial temporal lobe and
beyond.
References:
[1] O'Keefe J: Hippocampal Neurophysiology in the Behaving Animal. The Hippocampus
Book, Oxford University Press (2008)
[2] Huxter JR, Senior TJ, Allen K, Csicsvari J: Theta Phase-Specific Codes for Two-Dimensional Position, Trajectory and Heading in the Hippocampus. Nature Neuroscience
11 (5): 587-594 (2008)
[3] Huxter JR, Burgess N, O’Keefe J: Independent Rate and Temporal Coding in
Hippocampal Pyramidal Cells. Nature 425 (6960): 828-832 (2003)
T37
Towards an emergent computational model of axon guidance
Rui P. Costa*1, Luís Macedo1, Ernesto Costa1, João Malva2, Carlos Duarte2
1 Center for Informatics and Systems, University of Coimbra, Coimbra, Portugal
2 Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
* [email protected]
The guidance of axons (AG) towards their targets during embryogenesis or after injury is
an important issue in the development of neuronal networks. During their growth, axons
often face complex decisions that are difficult to understand when observing just a small
part of the problem. In this work we propose a computational model of axon guidance based
on activity-independent mechanisms that takes into account the most important aspects of
axon guidance. This model may lead to a better understanding of the axon guidance problem
in several systems (e.g. midline, optic pathway, olfactory system), as well as of the
general mechanisms involved.
The computational model that we propose is strongly based on the experimental evidence
available from neuroscience studies, and has a three-dimensional representation. The model
includes the main elements (neurons, with soma, axon and growth cone; glial cells acting
as guideposts) and mechanisms (attraction/repulsion guidance cues, growth cone adaptation,
tissue-gradient intersections, axonal transport, changes in growth cone complexity and a
range of responses for each receptor).
The growth cone guidance is defined as a function that maps the receptor activation by
ligands into a repulsive or attractive force. This force is then converted into a turning angle
using spherical coordinates. A regulatory network between the receptors and the intracellular
proteins is considered, leading to more complex and realistic behaviors. The ligand diffusion
through the extracellular environment is modeled with linear or exponential functions.
Furthermore, we include an optimization module based on a genetic algorithm that helps to
optimize the model of a specific AG system. As a fitness function we consider the
Euclidean distance to the path observed in the native biological system.
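The mapping from receptor activation to a turning decision can be illustrated with a small
sketch; the force combination, the exponential ligand field and the cue parameters below
are schematic stand-ins for the model's regulatory network, not its actual equations:

# Schematic growth-cone step: each receptor-ligand pair contributes an
# attractive (+) or repulsive (-) force along the local ligand gradient;
# the net force is folded into a new heading in 3D.
import numpy as np

def ligand_gradient(pos, source, k=0.05):
    d = source - pos
    c = np.exp(-k * np.linalg.norm(d))        # exponential diffusion profile
    return c * d / np.linalg.norm(d)          # gradient points at the source

def growth_cone_step(pos, heading, cues, step=1.0):
    force = np.zeros(3)
    for source, sign, weight in cues:         # sign: +1 attract, -1 repel
        force += sign * weight * ligand_gradient(pos, source)
    new_heading = heading + force
    new_heading /= np.linalg.norm(new_heading)
    return pos + step * new_heading, new_heading

pos, heading = np.zeros(3), np.array([1.0, 0.0, 0.0])
cues = [(np.array([50.0, 20.0, 0.0]), +1, 1.0),   # attractive cue
        (np.array([20.0, -30.0, 0.0]), -1, 0.5)]  # repulsive cue
for _ in range(3):
    pos, heading = growth_cone_step(pos, heading, cues)
    print(np.round(pos, 2))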
Concerning experimentation, we have been studying one of the best-characterized systems,
the midline crossing of Drosophila commissural axons. The computational model allows us to
describe, to a great extent, the behaviors that have been reported in the literature, both
graphically and numerically, before and after midline crossing. In the future we plan to
study how the developed model can help to understand the decisions made by retinal axons
at the optic chiasm. As evaluation measures, the following parameters are considered: (i)
the turning angles of the growth cone, (ii) the Euclidean distance to what is observed in
the native tissue and (iii) the importance of each guidance complex (receptor-ligand
pair).
In conclusion, in our approach AG is an emergent behavior of the system as a whole, with
realistic rules and elements that together can lead to the behaviors observed in
neurobiological experiments. A simulator based on this model is being developed, which can
be used in the future by neuroscientists interested in a better understanding of the axon
guidance phenomenon.
T38
Convenient simulation of spiking neural networks with NEST 2
Jochen Eppler*2,1, Moritz Helias1, Eilif Muller3, Markus Diesmann4, Marc-Oliver Gewaltig2,1
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
3 Laboratory for Computational Neuroscience, Ecole Polytechnique Federale de Lausanne,
Lausanne, Switzerland
4 RIKEN Brain Science Institute, Wako City, Japan
* [email protected]
NEST is a simulation environment for large heterogeneous networks of point-neurons or
neurons with a small number of compartments [1].
We present NEST 2 with its new user interface PyNEST [2], which is based on the Python
programming language (http://www.python.org). Python is free and provides a large number
of libraries for scientific computing (http://www.scipy.org), which make it a powerful
alternative to Matlab. PyNEST makes it easy to learn and use NEST. Users can simulate,
analyze, and visualize networks and simulation data in a single interactive Python session.
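As a flavour of the interface, a minimal PyNEST session might look like the following
sketch (written against the NEST 2-era API; model and parameter names may differ between
versions):

# One integrate-and-fire neuron driven by Poisson noise, with its spikes
# recorded and counted. Parameter values are arbitrary.
import nest

neuron = nest.Create('iaf_psc_alpha')
noise = nest.Create('poisson_generator', params={'rate': 8000.0})  # Hz
detector = nest.Create('spike_detector')

nest.Connect(noise, neuron)      # drive the neuron
nest.Connect(neuron, detector)   # record its spikes

nest.Simulate(1000.0)            # ms
print(nest.GetStatus(detector, 'n_events'))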
Other features of NEST 2 include support for synaptic plasticity, a wide range of neuron
models, and parallel simulation on multi-core computers as well as computer clusters [3]. To
customize NEST to their own purposes, users can add new neuron and synapse models, as
well as new connection and analysis functions. Pre-releases of NEST 2 have already been
used with great success and appreciation at the Advanced Course in Computational
Neuroscience in Arcachon (2005-2007) and in Freiburg (2008).
NEST is released under an open source license for non-commercial use. For details and to
download it, visit the NEST Initiative at http://www.nest-initiative.org.
References:
[1] Gewaltig M-O, Diesmann M; NEST (Neural Simulation Tool), Scholarpedia 2(4):1430,
2007
[2] Eppler JM, Helias M, Muller E, Diesmann M, Gewaltig M-O; PyNEST: A convenient
interface to the NEST simulator, Front. Neuroinform. 2:12, 2008
[3] Plesser HE, Eppler JM, Morrison A, Diesmann M, Gewaltig M-O; Efficient parallel
simulation of large-scale neuronal networks on clusters of multiprocessor computers,
Springer-Verlag LNCS 4641:672-681, 2007
T39
Prefrontal firing rates reflect the number of stimuli processed for
visual short-term memory
Felix Franke*1,3, Michal Natora3, Maria Waizel5, Lars F Muckli2, Gordon Pipa4,6, Matthias HJ
Munk5
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Center of Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
3 Department of Neural Information Processing, Technical University Berlin, Berlin,
Germany
4 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
5 Max-Planck Institute for Biological Cybernetics, Tübingen, Germany
6 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
The way a system reacts to increased task demands can reveal information about its
functional mechanism. We therefore addressed the question of how non-human primates
process information about visual stimuli by driving two rhesus monkeys to their cognitive
limits in a visual memory task. The monkeys were first trained to successfully perform the
visual memory task (> 80% correct responses). Then the number of stimuli shown to the
monkey (load) was increased to up to 4. The stimulus presentation period (SP) was 900
milliseconds long; thus, in the load-4 condition each single stimulus was shown for less
than 225 ms. After a three-second delay period, a test stimulus was shown. The task of the
monkey was then to decide, via differential button press, whether the test stimulus
matched any of the previously shown stimuli. Neuronal firing rates were recorded using up
to 16 multielectrodes placed in the prefrontal cortex. For every trial in which the monkey
responded correctly, the average multi-unit rate during the SP was estimated.
We then asked whether the firing rates during the SP differed significantly between the
load conditions. To minimize the effect of non-stationarities present in the data, we
paired the data so that the trials of one pair were at most 2.5 minutes apart.
We tested the null hypothesis that the firing rates during the SP did not differ among the
load conditions using the nonparametric Friedman test for paired data. For every recording
site at which we could reject the null hypothesis (p<0.05), we investigated in which
direction the rates of the different load conditions differed, correcting for multiple
tests using the Tukey-Kramer correction. A total of 12681 correct trials were
recorded from a total of 160 recording positions (6 to 16 per session). In total, 23
positions showed significant effects, of which 20 were consistent. Firing rate differences
were called consistent if the difference relative to load 1 grew stronger with increasing
load. Of these 20 consistent recording sites, 14 showed a firing rate that monotonically
increased with the load, and 6 showed a monotonic decrease. This means that 12% of the recording sites
in prefrontal cortex show a significant modulation of firing rates with respect to the load
condition during a delayed match-to-sample task. However, this modulation is not
necessarily excitatory. Interestingly, the majority of sites showed a load-consistent
modulation, i.e. the higher the load, the stronger the modulation. This could be a
possible mechanism for coding the number of stimuli or their sequence.
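The statistical procedure described above can be sketched with standard tools; the
following assumes SciPy's friedmanchisquare as the nonparametric paired test, with
synthetic rates standing in for the recorded data:

# Friedman test across paired trials (one row per trial pair, one column
# per load condition), followed by a direction check on the means.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n_pairs = 60
base = rng.normal(20.0, 2.0, size=n_pairs)           # baseline rate per pair
rates = np.column_stack([base + load * 0.8 + rng.normal(0, 1.0, n_pairs)
                         for load in range(1, 5)])   # loads 1..4

stat, p = friedmanchisquare(*rates.T)
print("Friedman chi2 = %.2f, p = %.2g" % (stat, p))
if p < 0.05:
    print("mean rate per load:", np.round(rates.mean(axis=0), 2))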
T40
Using ICA to estimate changes in activation between different sessions of an fMRI
experiment
Markus Goldhacker1, Ingo Keck*1, Elmar W Lang1
1 Computational Intelligence and Machine Learning Group, Institute for Biophysics,
Universität Regensburg, Regensburg, Germany
* [email protected]
Independent Component Analysis (ICA) is a well-established analysis method for functional
magnetic resonance imaging (fMRI) data. Its strength is its ability to perform exploratory
data analysis, so that no information about the time course of activation in the subject's
brain is required. This is of special interest for psychological experiments in which
demanding cognitive tasks have to be performed by the subject and the temporal course of
brain activation may be difficult or even impossible to estimate, severely limiting the
effectiveness of correlation-based analysis techniques such as the general linear model.
In our work we present a new method that uses independent component analysis to estimate
functional changes in the brain over time in separate fMRI sessions of the same subjects.
In a first step, ICA is used to estimate the functional networks related to the experiment
in each session. The relevant networks can be selected either by correlating their time
courses with the experimental design or by correlation with well-known areas of the brain
related to the experiment. In a second step, the changes in position and extent of the
active areas found by the ICA are investigated and quantified. In a third step, these
changes on the single-subject level can be compared between multiple subjects to find
statistically relevant changes on the group level.
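The first step of this pipeline can be sketched with a standard ICA implementation; the
example below assumes scikit-learn's FastICA and a synthetic data matrix in place of the
preprocessed fMRI time series:

# Spatial ICA on an fMRI-like matrix (time x voxels): FastICA unmixes the
# data into spatial maps with associated time courses; a component is then
# flagged as task-related by correlating its time course with the design.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_t, n_vox = 120, 2000
design = (np.arange(n_t) // 10) % 2          # on/off block design
maps = rng.normal(size=(3, n_vox))           # 3 hidden spatial sources
courses = np.c_[design, rng.normal(size=(n_t, 2))].astype(float)
data = courses @ maps + 0.5 * rng.normal(size=(n_t, n_vox))

ica = FastICA(n_components=3, random_state=0)
spatial = ica.fit_transform(data.T).T        # components as spatial maps
time_courses = ica.mixing_                   # (time x components)
r = [abs(np.corrcoef(tc, design)[0, 1]) for tc in time_courses.T]
print("component |r| with design:", np.round(r, 2))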
To demonstrate the validity of the approach we analysed the data of a Morse code learning
experiment with 16 subjects to estimate the learning induced changes in the brain related to
the experiment and the default mode network. We found increased areas of task related
activation in the right parietal hemisphere before learning and lateral/prefrontal on the left
hemisphere after learning. In the default mode network we found a spatial translation of
activity from the frontal to the parietal region. We also noted that the cingulum that formed
part of the default mode network before learning did not appear in the default mode network
after learning.
T41
A biologically plausible network of spiking neurons can simulate
human EEG responses
Christoph Herrmann*1,2, Ingo Fründ2, Frank Ohl3
1 Bernstein Group for Computational Neuroscience, Otto-von-Guericke University,
Magdeburg, Germany
2 Institute of Psychology, Otto-von-Guericke University, Magdeburg, Germany
3 Leibniz Institute for Neurobiology, Magdeburg, Germany
* [email protected]
Early gamma band responses (GBRs) of the human electroencephalogram (EEG)
accompany sensory stimulation. These GBRs are modulated by exogenous stimulus
properties such as size or contrast (size effect). In addition, cognitive processes
modulate GBRs: e.g., if a subject has a memory representation of a perceived stimulus
(known stimulus), the GBR is larger than if the subject had no such memory representation
(unknown stimulus) (memory effect). Here, we simulate both effects in a simple random network of
1000 spiking neurons. The network was composed of 800 excitatory and 200 inhibitory
Izhikevich neurons. During a learning phase, different stimuli were presented to the network,
i.e. certain neurons received input currents. Synaptic connections were modified according
to a spike timing dependent plasticity (STDP) learning rule. In a subsequent test phase, we
stimulated the network with (i) patterns of different sizes to simulate the abovementioned
size effect and (ii) with patterns that were or were not presented during the learning phase to
simulate the abovementioned memory effect. In order to compute a simulated EEG from this
network, the membrane voltage of all neurons was averaged. After about 1 hour of learning,
the network displayed event-related responses. After 24 hours of learning, these responses
were qualitatively similar to the human early GBRs. There was a general increase in
response strength with increasing stimulus size and slightly stronger responses for learned
stimuli. We demonstrated that, within one neural architecture, early GBRs can be modulated
both by stimulus properties and by basic learning mechanisms mediated via
spike-timing-dependent plasticity.
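The network architecture follows Izhikevich's published 1000-neuron model; the sketch
below reproduces its core update loop (parameters from Izhikevich, 2003), taking the mean
membrane potential as the simulated EEG, and omits the STDP learning used in the study:

# 800 excitatory / 200 inhibitory Izhikevich neurons; the simulated EEG is
# the average membrane potential at each millisecond.
import numpy as np

rng = np.random.default_rng(3)
Ne, Ni = 800, 200
re, ri = rng.random(Ne), rng.random(Ni)
a = np.r_[0.02 * np.ones(Ne), 0.02 + 0.08 * ri]
b = np.r_[0.2 * np.ones(Ne), 0.25 - 0.05 * ri]
c = np.r_[-65 + 15 * re**2, -65 * np.ones(Ni)]
d = np.r_[8 - 6 * re**2, 2 * np.ones(Ni)]
S = np.c_[0.5 * rng.random((Ne + Ni, Ne)), -rng.random((Ne + Ni, Ni))]

v = -65.0 * np.ones(Ne + Ni)
u = b * v
eeg = []
for t in range(1000):                      # 1 s at 1 ms resolution
    I = np.r_[5 * rng.standard_normal(Ne), 2 * rng.standard_normal(Ni)]
    fired = v >= 30.0
    v[fired], u[fired] = c[fired], u[fired] + d[fired]
    I += S[:, fired].sum(axis=1)           # synaptic input from spikes
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # two half-steps
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # for stability
    u += a * (b * v - u)
    eeg.append(v.mean())                   # simulated EEG sample

print("simulated EEG mean/std: %.1f / %.1f mV" % (np.mean(eeg), np.std(eeg)))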
T42
Unsupervised learning of object identities and their parts in a
hierarchical visual memory
Jenia Jitsev*1,2, Christoph von der Malsburg1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Goethe University, Frankfurt, Germany
* [email protected]
Visual cortex is thought to utilize a parts-based representation to encode natural visual
objects, decomposing them into constituent components along the hierarchically organized
visual pathway. Substantial psychophysical and neurophysiological evidence suggests that
the visual system may use two different coding strategies to signal the relations between
the components. First, the relations can be explicitly conveyed by conjunction- or
configuration-sensitive neurons in higher visual areas. Second, the relations can be
signaled by dynamic assemblies of co-activated part-specific units, which can be
constructed on demand to encode a novel object or to recall an already familiar one from
memory as a composition of its constituent parts.
We address the question of what neural mechanisms are required to guide the
self-organization of a memory structure that supports the two different coding strategies.
The model we propose is based on two consecutive, reciprocally interconnected layers of
distributed cortical modules, or columns, which in turn contain subunits receiving common
excitatory afferents and bound by common lateral inhibition, which is modulated by
excitatory and inhibitory rhythms in the gamma range. The lateral inhibition within the
column results in activity dynamics with a strongly competitive character, casting the
column as a winner-take-all-like decision unit [1]. On the slow time scale, structure
formation is guided by activity-dependent bidirectional plasticity and homeostatic
regulation of unit activity.
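The competitive column dynamics can be caricatured by a soft winner-take-all unit; in the
sketch below (gains, inputs and time constants arbitrary), shared lateral inhibition lets
the most strongly driven subunit suppress its neighbours:

# Soft winner-take-all caricature of a column: subunits share an inhibition
# proportional to the summed activity, so the most strongly driven subunit
# ends up dominating.
import numpy as np

def wta_column(inputs, gain=3.0, inhibition=2.5, steps=50, dt=0.1):
    r = np.zeros_like(inputs)
    for _ in range(steps):
        drive = gain * inputs + r - inhibition * r.sum()
        r += dt * (-r + np.maximum(drive, 0.0))   # rectified rate dynamics
    return r

inputs = np.array([0.9, 1.0, 0.8])                # subunit afferent drives
print(np.round(wta_column(inputs), 3))            # unit with largest input wins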
In the initial state, the connectivity between and within the layers is homogeneous, all types
of synapses - bottom-up, lateral and top-down - being excitatory and plastic. A data set
containing natural human face images is used to provide visual input to the network. During
incremental, open-end unsupervised learning, the lower layer of the system is exposed to
Gabor filter banks extracted from local points on the face images [2]. The system is able
to develop a synaptic structure capturing local features and their relations at the lower
level, as well as the global identity of the person at the higher level of processing,
gradually improving its recognition performance with learning time. The symbols for person
identities emerging on the higher memory layer are grounded in the semantics of the
parts-based representations emerging on the lower layer. Activation of such an identity
symbol leads to reactivation of the constituent distributed parts via the established
top-down connections, providing an explicit representation of the symbol's low-level
configuration.
The memory system shows impressive recognition performance on the original and
alternative face views, underpinning its functionality. Experience-driven, unsupervised
structure formation instantiated here is thus able to form a basic memory domain with
hierarchical organization and contextual generative support, opening a promising direction
for further research.
Acknowledgements:
This work was supported by the EU project DAISY, FP6-2005-015803.
References:
[1] Lücke, J., 2005. Dynamics of cortical columns - sensitive decision making. In: Proc.
ICANN. LNCS 3696. Springer, pp. 25-30.
[2] L. Wiskott, J.-M. Fellous, N. Krueger, C. von der Malsburg, Face recognition by elastic
bunch graph matching, IEEE Trans. on Pattern Analysis and Machine Intelligence 19 (7)
(1997) 775-779.
T43
The role of structural plasticity for memory: storage capacity,
amnesia, and the spacing effect
Andreas Knoblauch*2, Marc-Oliver Gewaltig2,1, Ursula Körner2, Edgar Körner2
1 Bernstein Center for Computational Neuroscience Freiburg, Freiburg, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
The neurophysiological basis of learning and memory is commonly attributed to the
modification of synaptic strengths in neuronal networks. Recent experiments also suggest a
major role for structural plasticity, including the elimination and regeneration of
synapses, growth and retraction of dendritic spines, and remodeling of axons and
dendrites. Here we develop a simple model of structural plasticity and synaptic
consolidation in neural networks and apply it to Willshaw-type models of distributed
associative memory [1]. Our model assumes synapses with discrete weights. Synapses with
low weights have a high probability of being erased and replaced by novel synapses at
other locations. In contrast, synapses with large weights are consolidated and cannot be
erased. Analysis and numerical simulations reveal that our model can explain various
cognitive phenomena much better than alternative network models employing synaptic
plasticity only.
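One update cycle of such a model can be sketched as follows; the discrete weight levels,
consolidation threshold and turnover probability are illustrative stand-ins for the
model's actual parameters:

# One structural-plasticity step on a binary connectivity matrix: weak
# (unconsolidated) synapses are erased with some probability and the freed
# contacts regrow at random new locations; synapses at or above the
# consolidation threshold are immune to erasure.
import numpy as np

rng = np.random.default_rng(4)
N, p_conn, p_turnover, w_consol = 100, 0.1, 0.2, 2

conn = rng.random((N, N)) < p_conn          # anatomical connectivity
weight = np.zeros((N, N), dtype=int)        # discrete synaptic weights

def structural_step(conn, weight):
    weak = conn & (weight < w_consol)
    erase = weak & (rng.random((N, N)) < p_turnover)
    n_new = int(erase.sum())                # keep total synapse count fixed
    conn, weight = conn.copy(), weight.copy()
    conn[erase], weight[erase] = False, 0
    free = np.flatnonzero(~conn)
    new = rng.choice(free, size=n_new, replace=False)
    conn.flat[new] = True                   # regrow at random new locations
    return conn, weight

conn, weight = structural_step(conn, weight)
print("synapses after one step:", int(conn.sum()))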
First, we show that networks with low anatomical connectivity employing structural
plasticity in coordination with stimulus repetition (e.g., by hippocampal replay) can
store much more information per synapse by ‘emulating’ an effective memory connectivity
close to the potential network connectivity. Moreover, such networks suffer to a much
lesser degree from catastrophic forgetting than models without structural plasticity,
provided the number of consolidated synapses remains sufficiently low.
Second, we show that structural plasticity and hippocampal replay lead to gradients in
effective connectivity. That is, neuronal ensembles representing remote memories show a
higher degree of interconnectivity than ensembles representing recent memories.
Correspondingly, our simulations show that recent memories become more vulnerable to
cortical lesions, similar to Ribot gradients in retrograde amnesia. Previous models of
amnesia typically generated Ribot gradients through gradients in total replay time, where
the M-th memory obtains a 1/M share of replay time, implicitly assuming infinite replay of
all memories. In contrast, our model can generate Ribot gradients for constant replay time
per memory. This is consistent with recent evidence that novel memories are buffered and
replayed by the hippocampus for a limited time.
Third, we show that structural plasticity can easily explain the spacing effect in
learning, i.e., the fact that learning is much more efficient if rehearsal is spread over
time rather than massed in a single block. The spacing effect has been reported to be very
robust, occurring in many explicit and implicit memory tasks in humans and many animal
species, and being effective over time scales from single days to months. For these
reasons a common underlying mechanism at the cellular level has long been suspected. We
propose that structural plasticity is this common mechanism. According to our model,
ongoing structural plasticity reorganizes the network during the long intervals between
rehearsal periods by growing many new synapses at potentially useful locations. Subsequent
training can therefore strongly increase effective connectivity. In contrast, single-block
rehearsal can increase effective connectivity only slightly above anatomical connectivity.
References:
[1] A. Knoblauch: The role of structural plasticity and synaptic consolidation for memory
and amnesia in a model of cortico-hippocampal interplay. Proceedings of NCPW11, pp 79-90,
2009
T44
Investigation of the dynamics of small networks' connections under Hebbian plasticity
Christoph Kolodziejski*1,2, Christian Tetzlaff1, Florentin Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Georg-August University, Göttingen, Germany
* [email protected]
Learning in networks either relies on well-behaved statistical properties of the input
(Dayan and Abbott, 2001) or requires simplifying assumptions (Hopfield, 1982). Hence,
predicting the temporal development of a network's connections when assumptions such as
stationary inputs are dropped is still an open question. Current models of network
dynamics (e.g. Memmesheimer and Timme, 2006), for instance, require a particular
configuration of the network's connections. Up to now, such networks have had predefined,
fixed connection strengths, and it is of interest whether and how such configurations
develop in biological neuronal networks. At the same time, it would also be possible to
infer relevant parameters of the plasticity rule in use while the network's behavior is
close to the behavior recorded in the brain (Barbour et al., 2007). We developed a method
to analytically calculate the temporal weight development for any linear Hebbian
plasticity rule (Kolodziejski and Wörgötter, submitted). This includes differential
Hebbian plasticity, which is the biophysical counterpart of spike-timing-dependent
plasticity (Markram et al., 1997). In this work we concentrate on small and
presumably simple networks with up to three neurons and analytically investigate the
dynamics of the network's connections and, where they exist, their fixed points. The
results support the notion that the dynamics depend on the particular type of input
distribution used. Hence, in order to infer relevant parameters of biological networks, we
would additionally need to take the network's input into consideration. As we cannot
assume that all connections in the brain are predefined, learning in networks demands a
better understanding, and the results presented here may serve as a first step towards a
more generalized description of learning in large networks with non-stationary inputs.
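Differential Hebbian plasticity of the kind analyzed here can be written compactly; the
sketch below, with arbitrary signals and learning rate, updates a weight with the product
of presynaptic activity and the temporal derivative of the postsynaptic activity:

# Differential Hebbian update dw/dt = mu * u(t) * dv(t)/dt: the weight grows
# when presynaptic activity u precedes a rise in postsynaptic activity v.
import numpy as np

dt, mu = 1.0, 0.01                       # ms, learning rate
t = np.arange(0, 200, dt)
u = np.exp(-0.5 * ((t - 80) / 10) ** 2)  # presynaptic trace
v = np.exp(-0.5 * ((t - 95) / 10) ** 2)  # postsynaptic trace, 15 ms later

w = 0.0
dv = np.gradient(v, dt)
for ut, dvt in zip(u, dv):
    w += mu * ut * dvt * dt              # accumulate the weight change
print("weight change: %+.4f" % w)        # positive because pre leads post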
References:
Barbour, B., Brunel, N., Hakim, V., Nadal, J.-P., 2007. What can we learn from synaptic
weight distributions? Trends in Neurosciences 30 (12), 622-629.
Dayan, P., Abbott, L. F., 2001. Theoretical Neuroscience. Cambridge, MA: MIT Press.
Hopfield, J. J., 1982. Neural networks and physical systems with emergent collective
computational abilities. Proceedings of the National Academy of Sciences of the
United States of America 79, 2554-2558.
Kolodziejski, C., Wörgötter, F., submitted. Plasticity of many-synapse systems. Neural
Computation.
Markram, H., Lübke, J., Frotscher, M., Sakmann, B., 1997. Regulation of synaptic efficacy
by coincidence of postsynaptic APs and EPSPs. Science 275, 213-215.
Memmesheimer, R.-M., Timme, M., 2006. Designing the dynamics of spiking neural
networks. Physical Review Letters 97 (18), 188101.
T45
On the analysis of differential Hebbian learning in closed-loop behavioral systems
Tomas Kulvicius*1,3, Christoph Kolodziejski1,4, Minija Tamosiunaite3, Bernd Porr2, Florentin
Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Department of Electronics & Electrical Engineering, University of Glasgow, Glasgow, UK
3 Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
4 Georg-August-Universität, Göttingen, Germany
* [email protected]
Behaving systems form a closed loop with their environment. If the environment is not too
complex, one can describe (linear) systems of this kind by methods from systems theory,
even in the closed-loop case. Things become much more complicated as soon as one allows
the controller to change, for example by learning. Several studies have tried to analyze
closed-loop systems from an information-theoretic point of view (Prokopenko et al., 2006;
Klyubin et al., 2008), but only a few attempts consider learning (Lungarella and Sporns,
2006; Porr et al., 2006). In this study we focus on two questions: 1) To what degree is it
possible to describe the temporal development of closed-loop adaptive systems using only
knowledge of their initial configuration, their learning mechanism and the structure of
the world? 2) Given a certain complexity of the world, can we predict which system from a
given class would be best?
We focus on systems that perform differential Hebbian learning, where we simulate agents
which learn an obstacle avoidance task. In the first part of our study we provide an analytical
solution for the temporal development of such systems. In the second part we define energy
and entropy measures. We analyze the development of the system measures during
learning by testing different robots in environments of different complexity.
In answer to the questions above we find (1) that these systems have a specific
sensorimotor configuration which leads to a biphasic weight development. It was possible, by
using the measured temporal characteristics of the robot’s behavior together with some
assumptions on the amplitude change of sensory inputs, to quite accurately calculate such a
weight development in an analytical way. (2) Using our system measures we also show that
learning equalizes the energy uptake across agents and worlds. However, when judging
learning speed and complexity of the resulting behavior one finds a trade-off and some
agents will be better than others in the different worlds tested.
Our study suggests that analytical solutions for the temporal development of our robots
can still be found, but only together with some information on the general structure of
the development of their descriptive parameters. By using energy and entropy measures and
investigating their development during learning, we have shown that within well-specified
scenarios there are indeed agents which are optimal with respect to their structure and
adaptive properties. As a consequence, this study may help lead to a better understanding
of the complex dynamics of learning and behaving systems.
References:
Klyubin, A., Polani, D., and Nehaniv, C. (2008). Keep your options open: an information-based driving principle for sensorimotor systems. PLoS ONE, 3:e4018.
Lungarella, M. and Sporns, O. (2006). Mapping information flow in sensorimotor networks.
PLoS Comput. Biol., 2:e144.
Porr, B., Egerton, A., and Woergoetter, F. (2006). Towards closed loop information:
Predictive information. Constructivist Foundations, 1(2):83–90.
Prokopenko, M., Gerasimov, V., and Tanev, I. (2006). Evolving spatiotemporal coordination
in a modular robotic system. In SAB 2006, pages 558–569.
T46
Hysteresis effects of cortico-spinal excitability during transcranial
magnetic stimulation
Caroline Möller*3, Noritoshi Arai3,1, Jörg Lücke2, Ulf Ziemann3
1 Department of Neurology, Toyama Hospital, Tokyo, Japan
2 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
3 Motor Cortex Group, Department of Neurology, Goethe University, Frankfurt, Germany
* [email protected]
Input-output (IO) curves of motor evoked potentials (MEP) are an important and widely used
method to assess motor cortical excitability by transcranial magnetic stimulation (TMS). IO
curves are measured by applying TMS stimuli at a range of different intensities, and the
slope and amplitude of the curve are sensitive markers for excitability changes of
neuronal systems under different physiological or pathological conditions. However, it is
not known whether the sequence in which the curves are obtained may by itself influence
corticospinal activation. Here, we investigated the effects of history dependence, also
known as hysteresis, on IO curves. To test this, IO curves from the first dorsal
interosseous (FDI) muscle of 14 healthy volunteers were obtained with three different
orderings of stimulus intensity: increasing from low to high intensities, decreasing from
high to low intensities, and randomized intensities. Intensities ranged from 80% to 170%
of the individual resting motor threshold (RMT). At each intensity level 5 trials were
recorded and averaged. Sequences
were measured with two different inter-trial intervals (ITI, 5s and 20s), and in the resting vs.
voluntarily active muscle. All recordings with the resting muscle were carefully checked for
voluntary muscle activation to control for unspecific arousal effects. In the resting muscle
and at ITI = 5s, IO curves measured with the decreasing sequence were significantly shifted
to the left compared to the increasing sequence while the IO curve obtained with the
randomized sequence ran in between. Hysteresis was most pronounced in the upper part of
the IO curves at intensities of 130% RMT and above. No significant hysteresis was seen at
ITI = 20s or in the active FDI.
Our findings indicate that hysteresis can significantly influence IO curves. High-intensity
stimuli at the beginning of the decreasing sequence seemed to have an enhancing effect on
consecutive stimuli during the same recording. As no hysteresis effects were present with
the longer ITI of 20s we propose that short-term plasticity may be a possible mechanism to
account for this effect.
T47
Going horizontal: spatiotemporal dynamics of evoked activity in
rat V1 after retinal lesion
Ganna Palagina*3, Ulf T Eysel2, Dirk Jancke1
1 Bernstein Group for Computational Neuroscience, Ruhr-Universität, Bochum, Germany
2 Department of Neurophysiology, Ruhr-Universität, Bochum, Germany
3 Institute of Neuroinformatics, Ruhr-Universität, Bochum, Germany
* [email protected]
Sensory deprivation caused by peripheral injury can trigger functional cortical reorganization
across the initially silenced cortical area (lesion projection zone). It is proposed that
long-range intracortical connectivity enables recovery of function in the lesion
projection zone (LPZ), providing a bypass for the lost subcortical input.
Here we investigated retinal-lesion-induced changes in the function of lateral connections
in the primary visual cortex of the adult rat. Using voltage-sensitive dye imaging, we
visualized, at millisecond time resolution, the spread of synaptic activity across the
LPZ. Shortly after the lesion, the majority of neurons within the LPZ were subthresholdly
activated by delayed propagation of activity originating from unaffected cortical regions.
With longer recovery times, latencies within the LPZ gradually decreased and activation
reached suprathreshold levels. The shortening of the latencies of horizontal spread and
the increase in the amplitude of activity inside the LPZ during reorganization support the
idea that an increase in the strength of lateral connections is a substrate of functional
recovery after retinal lesions.
T48
A study on students' learning styles and the impact of demographic factors on effective
learning
P. Rajandran Peresamy*3, Nanna Suryana1, Marthandan Govindan2
1 Faculty of Information and Communication Technology, Technical University of Malaysia,
Melaka, Malaysia
2 Faculty of Management, Multimedia University, Cyberjaya, Malaysia
3 Universiti Teknikal Malaysia Melaka, Melaka, Malaysia
* [email protected]
The objective of this research is to identify the learning styles of undergraduate
students of management in the Klang Valley, Malaysia, to identify the relation of selected
demographic factors to learning styles towards effective learning, and to develop a model
of effective learning incorporating learners' demographic factors and learning styles. The
Index of Learning Styles (ILS) developed by Felder and Silverman was adopted and used as
the survey instrument in this study. Results of the study were used to examine the
relationship between learning styles and demographic factors such as gender, ethnicity,
academic qualification, field of study, type of institution and year of study.
Seven hundred and three responses were collected. Based on the mean scores, the study
showed that the most dominant learning styles, in order, are visual, sequential,
reflective, sensing, global, active, intuitive and verbal.
Male students were found to be more dominant in active, intuitive and global learning
compared to female students. For ethnicity, the mean scores for active, intuitive and
global learning differ significantly for at least one pair of ethnic groups. There are
significant mean score differences between STPM, diploma and matriculation students in the
active and sensing learning styles. Regarding field of study, the mean score of marketing
students is higher than that of finance/banking students for sensing. The scores of public
institution students differ significantly from those of private institution students in
the active, intuitive, visual, verbal and global learning styles. Across years of study
there are significant differences in means for the sensing, visual, verbal and global
learning styles for at least one pair of years. Overall, learning style scores differ
significantly by gender, ethnicity, academic qualification, field of study, type of higher
learning institution, and year of study.
This paper discusses undergraduate management students' learning styles and their relation
to selected demographic factors. Demographic factors have various influences on the
learning styles of students. Addressing the impact of demographic factors together with
individual learners' learning styles will lead to effective learning for students, which
is expected to be reflected in their academic performance. This study of students'
perceptions initiates the development of a 'model of effective learning incorporating
demographic factors and learning styles' among undergraduate students of management. This
model will be a good guide for teachers and academic institutions in management studies
for planning and implementing strategies, methods and techniques for effective teaching
and learning as a whole in their education system. Furthermore, the proposed model will
contribute to the effective-learning expectations of students from various demographic
backgrounds and with different learning preferences, and the findings of this research
will benefit all learners and the education system of higher learning institutions.
In addition, the next part of this research will look into a multimedia learning
environment incorporating learning styles for effective learning.
T49
Role of STDP in encoding and retrieval of oscillatory group-synchronous spatio-temporal
patterns
Silvia Scarpetta*1,2, Ferdinando Giacco1, Maria Marinaro1
1 Department of Physics, University of Salerno, Salerno, Italy
2 Istituto Nazionale di Fisica Nucleare, Rome, Italy
* [email protected]
Many experimental results have generated renewed appreciation that precise temporal
synchronization, and synchronized oscillatory activity in distributed groups of neurons,
may play a fundamental role in perception, memory and sensory computation, especially for
encoding relationships and increasing saliency.
Here we investigate how precise temporal synchronization of groups of neurons can be
memorized as attractors of the network dynamics.
Multiple patterns, each corresponding to a different group of synchronized oscillatory
activity, are encoded using a temporally asymmetric learning rule inspired by the
spike-timing-dependent plasticity recently observed in cortical areas. In this paper we
compare the results previously obtained for phase-locked oscillations under the
random-phases hypothesis [1,2,3] to the case of patterns with synchronous subgroups of
neurons, each pattern having neurons with only Q=4 possible values of the phase. The
network dynamics is studied analytically as a function of the STDP learning window. Under
proper conditions, an external stimulus or initial condition leads to retrieval (i.e.
replay) of the group-synchronous pattern, since the activity preserves the encoded phase
relationships among units.
The same set of synchronous units of the encoded pattern is observed during replay, but the
replay occurs at a different oscillation frequency. The replay frequency depends on the
encoding frequency and on the shape of the STDP learning window used to learn the synaptic
connections.
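The encoding step of this kind of model can be sketched as follows; assuming a generic
asymmetric STDP kernel, the coupling between two units is obtained by summing the kernel
over the spike-time lags implied by their phase difference at the encoding frequency (a
schematic version, not the paper's exact rule):

# Encoding a phase pattern with an STDP kernel: the weight from unit j to
# unit i sums the kernel A(tau) over the lags implied by the phase
# difference phi_i - phi_j at encoding frequency f. Kernel shape and
# constants are schematic; two phase groups are used for brevity.
import numpy as np

def stdp_kernel(tau_ms):                       # asymmetric exponential window
    return np.where(tau_ms > 0,
                    np.exp(-tau_ms / 20.0),    # pre before post: LTP
                    -0.8 * np.exp(tau_ms / 20.0))

f = 10.0                                       # encoding frequency, Hz
T = 1000.0 / f                                 # oscillation period, ms
phases = np.array([0.0, 0.0, np.pi, np.pi])

N = len(phases)
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            lag = (phases[i] - phases[j]) / (2 * np.pi) * T
            lags = lag + T * np.arange(-3, 4)  # sum over nearby cycles
            W[i, j] = stdp_kernel(lags).sum()
print(np.round(W, 2))                          # weights depend only on phase differences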
References:
[1] Scarpetta S, Zhaoping L, Hertz J: Hebbian imprinting and retrieval in oscillatory
neural networks. Neural Computation 14: 2371-2396, 2002
[2] Yoshioka M, Scarpetta S, Marinaro M: Spatiotemporal learning in analog neural networks
using spike-timing-dependent synaptic plasticity. Physical Review E 75: 051917, 2007
[3] Scarpetta S, Yoshioka M, Marinaro M: Encoding and replay of dynamic attractors with
multiple frequencies. LNCS 5286: 38-61, 2008
T50
Single-trial phase precession in the hippocampus
Robert Schmidt*1, Kamran Diba4, Christian Leibold2, Dietmar Schmitz3, György Buzsaki4
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Division of Neurobiology, University of Munich, Munich, Germany, Bernstein Center for
Computational Neuroscience Munich, Munich, Germany
3 Neurowissenschaftliches Forschungszentrum, Charité-Universitätsmedizin, Berlin,
Germany
4 Rutgers University, Newark, USA
* [email protected]
During the crossing of the place field of a pyramidal cell in the rat hippocampus, the firing
phase of the cell decreases with respect to the local theta rhythm. This phase precession is
usually studied on the basis of data in which many place field traversals are pooled together.
Here we study properties of phase precession in single trials. We found that single-trial and
pooled-trial phase precession were different with respect to phase-position correlation,
phase-time correlation, and phase range. While pooled-trial phase precession may span 360
degrees, the most frequent single-trial phase range was only around 180 degrees. In pooled
trials, the correlation between phase and position (r=-0.58) was stronger than the correlation
between phase and time (r=-0.27), whereas in single trials these correlations (r=-0.61 for
both) were not significantly different.
Next, we demonstrated that phase precession exhibited a large trial-to-trial variability.
Overall, only a small fraction of the trial-to-trial variability in measures of phase precession
(e.g. slope or offset) could be explained by other single-trial properties (such as running
speed or firing rate), while the larger part of the variability remains to be explained. Finally,
we found that surrogate single trials, created by randomly drawing spikes from the pooled
data, are not equivalent to experimental single trials: pooling over trials therefore changes
basic measures of phase precession.
These findings indicate that single trials may be better suited for encoding temporally
structured events than is suggested by the pooled data.
T51
Are age-related cognitive effects caused by optimization?
Hecke Schrobsdorff*1,4, Matthias Ihrke1,4, Jörg Behrendt1,2, J. Michael Herrmann3, Theo
Geisel1,4
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Georg-Elias-Müller Institute for Psychology, Georg-August University, Göttingen, Germany
3 Institute of Perception, Action and Behaviour, University of Edinburgh, Edinburgh, UK
4 Max-Planck Institute for Dynamics and Self-Organisation, Göttingen, Germany
* [email protected]
Cognitive aging seems to be a story of global degradation. Performance in psychological
tests, e.g. of fluid intelligence such as Raven's Advanced Progressive Matrices, tends to
decrease with age [1]. These results are strongly contrasted by performance improvements
in everyday situations [2]. We therefore hypothesize that the observed aging deficits are
partly caused by the optimization of cognitive functions through learning.
In order to provide evidence for this hypothesis we consider a neural memory model that
allows for associative recall by pattern matching as well as for 'fluid' recombination of
memorized patterns by dynamical activation. In networks with dynamical synapses, critical
behaviour is a generic phenomenon [3]. It might provide the optimal trade-off between
speed and completeness in the exploration of a large set of feature combinations, as
required in Raven's test. The model also captures the life-long improvement in
crystallized intelligence through Hebbian learning of the network connectivity during
exposure to a number of neural activity patterns.
The synaptic adaptation is shown to cause a breakdown of the initial critical state which can
be explained by the formation of densely connected clusters within the network
corresponding to the learned patterns. Avalanche-like activity waves in the network will
increasingly tend to remain inside a cluster, thus reducing the exploratory effects of the
network dynamics. Meanwhile, retrieval of patterns stored in the early phase of learning remains
possible. Mimicking Raven's test, we presented the model with new combinations of
previously learned subpatterns at various stages of learning. Networks with comparatively
lower memory load achieve more stable activations of the new feature combinations than the
'old' networks. This corresponds well to the results of the free-association mode in
either network type, where only the 'young' networks are close to a self-organized
critical state. The speed and extent of the loss of criticality depend on properties of
the connectivity scheme the network evolves towards during learning.
While learning on the one hand leads to impaired performance in unusual situations, it may
on the other hand compensate for the decline in fluid intelligence if experienced guesses
are possible even in complex situations, owing to the life-long optimization of memory
patterns.
References:
[1] Babcock RL: Analysis of age differences in types of errors on the Raven's Advanced
Progressive Matrices. Intelligence 2002, 30:485 - 503.
[2] Salthouse TA: Cognitive competence and expertise in aging. Handbook of the
psychology of aging 1999, 3:310 - 319.
[3] A Levina, J M Herrmann and T Geisel: Dynamical synapses causing self-organized
criticality in neural networks. Nature Physics 2007, 3(12):857 - 860.
T52
Perceptual learning in visual hyperacuity: a reweighting model
Grigorios Sotiropoulos1, Aaron Seitz2, Peggy Seriès*1
1 Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh
University, Edinburgh, UK
2 Psychology Department, University of California, Riverside, USA
* [email protected]
Perceptual learning has been extensively studied in psychophysics, and phenomena such
as specificity (where improvement following training is specific to a particular perceptual task
or configuration, such as orientation) and disruption (where improvements in a perceptual
task diminish following training on a similar task) have been unveiled. These phenomena are
particularly evident in hyperacuity tasks. Hyperacuity refers to the unusually high visual
acuity exhibited by humans in certain perceptual tasks, such as the well-studied Vernier and
its variants. Vernier line offsets detectable by humans are in the range of a few seconds of
arc and less than the diameter of photoreceptors in the fovea. This remarkable and
apparently paradoxical acuity has fascinated psychophysicists for decades. Despite the
wealth of experimental data, these phenomena have been little studied from a modelling
point of view.
Here we seek to explore the compatibility of the existing data in a single unifying framework.
We are particularly interested in understanding whether a model where perceptual learning
is accounted for by a modification of the read-out (e.g. Petrov et al., 2005) can account
satisfactorily for the data. We present an extension of a simple published model of
orientation-selective neurons in V1 that is able to learn perceptual tasks (Weiss et al., 1993).
We evaluate the model by means of computer simulations of psychophysical experiments on
disruption (Seitz et al., 2005) and specificity (Webb et al., 2007). The neural network
employed in the simulations is akin to radial basis function networks. The input layer
presents an encoding of the visual stimulus to the hidden layer, which consists of a
population of neurons with oriented receptive fields. By pooling the responses of the oriented
units, the output layer models the decision processes responsible for perceptual judgements.
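The read-out modification idea can be sketched compactly: below, a fixed bank of
orientation-tuned units feeds a decision unit whose weights are updated with a delta rule.
The tuning curves, noise level and learning rate are illustrative, not the published
model's parameters:

# Reweighting model: a fixed population of tuned units encodes the
# stimulus; learning changes only the read-out weights via a delta rule,
# leaving the representation itself untouched.
import numpy as np

rng = np.random.default_rng(5)
prefs = np.linspace(-40, 40, 33)             # preferred offsets (arcsec)

def population(stim):                        # fixed V1-like representation
    r = np.exp(-0.5 * ((stim - prefs) / 15.0) ** 2)
    return r + 0.05 * rng.normal(size=r.size)

w, lr = np.zeros(prefs.size), 0.05
for _ in range(2000):                        # training: left vs right offset
    stim = rng.choice([-5.0, 5.0])           # hyperacuity-scale offsets
    target = 1.0 if stim > 0 else 0.0
    r = population(stim)
    y = 1.0 / (1.0 + np.exp(-w @ r))         # decision unit
    w += lr * (target - y) * r               # delta rule on read-out only

acc = np.mean([(w @ population(5.0) > 0) for _ in range(500)])
print("accuracy on right-offset test: %.2f" % acc)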
The model predicts specificity of learning across either tasks or stimulus configurations but
generalisation across both tasks and configurations, under the condition that the variable
visual element is the same. For example, in 3-dot alignment (A) and 3-dot bisection (B)
tasks, where the stimulus can be either horizontal (H) or vertical (V), learning of AH
transfers neither to BH nor to AV, but it does transfer to BV, because in both AH and BV
the variable element (the middle dot) varies along the same direction. The model also
predicts disruption between tasks of the same configuration but not between identical
tasks of different configurations. For example, learning of left-offset-only A (or B)
tasks does not transfer to right-offset-only A (or B, respectively) tasks. Both
predictions are in agreement with the latest psychophysical evidence. Furthermore, we
explore two distinct learning
mechanisms: one that belongs to the family of reweighting (read-out modification) models
and another that models learning-induced representational changes. We conclude that the
former, and under certain conditions the latter, can quantitatively account for performance
improvements observed in humans. We discuss the implications of these results regarding
the possible cortical location of perceptual learning.
T53
Spike-timing dependent plasticity and homeostasis: composition of two different synaptic
learning mechanisms
Christian Tetzlaff*1, Markus Butz1, Florentin Wörgötter1
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
* [email protected]
Spike-timing dependent plasticity (STDP) is known as an important mechanism for changing
the synaptic weights in a neuronal network. In pure STDP models the long-term weight
distribution becomes either bimodal, with one peak at zero and a second at the maximum
weight bound (Izhikevich et al., 2004), or unimodal with a positive mode (Gütig et al.,
2003). This contrasts with findings in biological networks, where the weight distribution
has a Gaussian tail (Brunel et al., 2004). There are several mathematical implementations
of STDP, for instance the BCM (Bienenstock et al., 1982) or ISO rule (Porr & Wörgötter,
2003).
Another biological mechanism of neuronal networks, homeostasis, is the tendency of each
neuron to reach and hold a certain activity value. This activity value can be attained by
changing the neuronal inputs, which also affects the weight distribution (Butz et al.,
2009). For the following analyses we defined a rule that describes this mechanism
mathematically.
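A composite update of the two mechanisms can be sketched as below; the specific Hebbian
term and the homeostatic form are illustrative stand-ins, not the BCM/ISO implementations
analysed here:

# Composing a Hebbian (STDP-like) term with a homeostatic term that drives
# the neuron's activity towards a target value by scaling its input
# weights. Functional forms and constants are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_in, target, lr_hebb, lr_homeo = 20, 1.0, 0.002, 0.02

w = rng.random(n_in) * 0.1
for _ in range(3000):
    x = rng.random(n_in)                 # presynaptic rates
    y = w @ x                            # linear postsynaptic activity
    w += lr_hebb * y * x                 # Hebbian growth (unstable alone)
    w += lr_homeo * (target - y) * w     # homeostatic scaling towards target
    w = np.clip(w, 0.0, None)

print("activity at mean input %.2f, weight spread %.3f"
      % (w @ np.full(n_in, 0.5), w.std()))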
In this study we first ask whether these two mechanisms (STDP and homeostasis) differ in
their weight dynamics. To test this, we calculated for each rule (BCM, ISO, homeostasis)
the synaptic weight changes of a small, fully connected network. We found that the two
mechanisms lead to different weight configurations and can thus be treated as dynamically
different.
As both mechanisms are likely to be involved in the biological development of the synaptic
weights in a network, in the second part of the study, the dynamics of a composite rule
(BCM/ISO + homeostasis) is analysed. For this, we used phase-diagram analyses and obtained a fixed point in the weight dynamics. This fixed point depends on the total input to the neuron. Thus, two neurons with different inputs will reach two different stable weight configurations. For a neuronal network in which each neuron receives a different input, this means that each neuron acquires a different weight configuration, and that the weight distribution of the whole network becomes biologically more plausible than a bi- or unimodal distribution.
In summary, we have demonstrated that the two biological mechanisms, STDP and homeostasis, are dynamically distinct and that their combination leads to a more realistic weight distribution.
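The input dependence of the fixed point can be illustrated with a rate-based toy rule (an illustrative sketch, not the authors' exact composite rule; all constants are assumptions): a Hebbian growth term competes with a homeostatic term that pulls the postsynaptic rate toward a target value.

import numpy as np

def simulate(u, target=1.0, eta_hebb=0.01, eta_home=0.1, steps=20000):
    """u: vector of presynaptic input rates. Returns the final weights."""
    w = np.full_like(u, 0.5)
    for _ in range(steps):
        v = w @ u                             # postsynaptic rate of a linear unit
        dw = eta_hebb * u * v                 # Hebbian / STDP-like growth
        dw += eta_home * (target - v) * w     # homeostatic drive toward the target rate
        w = np.clip(w + 1e-3 * dw, 0.0, 5.0)
    return w

print(simulate(np.array([1.0, 0.5])))         # two neurons with different inputs settle
print(simulate(np.array([2.0, 1.0])))         # at different stable weight configurations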
T54
The model of ocular dominance pattern formation in the presence
of gradients of chemical labels.
Dmitry Tsigankov*1,3, Alexei Koulakov2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Cold Spring Harbor Laboratory, NY, USA
3 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
The formation of ocular dominance (OD) columns in the mammalian visual cortex is thought
to be driven by electrical activity of the projecting neurons. Indeed, theoretical modeling has shown that lateral cortical activity-dependent interactions are capable of producing such a periodic pattern of regions dominated by inputs from the left or right eye. This scenario for OD
pattern formation based on self-organization of inputs due to excitatory and inhibitory lateral
interactions of Mexican-hat shape has recently been challenged by several lines of experimental observation. First, anatomical data on the primary visual cortex indicate that inhibition can have a shorter range than excitation, a regime in which the classical model fails to
produce OD structure. Second, measurements of the width of OD regions following manipulations of the strength of the inhibitory connections are inconsistent with the
predictions of the model. When the strength of inhibitory connections is increased, the OD width is found to increase, and when inhibition is decreased, the OD width decreases as well. This behavior is opposite to that predicted by the classical model. Finally, the sole role of
activity-dependent self-organization in the formation of OD structure was questioned as it
was observed that OD patterns can form in the optic tectum in the presence of other
factors such as gradients of interacting chemical labels.
Here we present a theoretical analysis of the possible interplay between genetically encoded
labeling and experience-driven reorganization of the projections in the formation of OD
patterns. We show that in the presence of a single gradient of a chemical marker the projections
from two eyes are segregated into OD columns for a wide class of lateral interaction profiles.
We find that, depending on the range and strength of inhibitory and excitatory lateral
connections, the projecting neurons may prefer to form segregated or mixed inputs. We find regimes in which OD structure emerges for short-range inhibition and long-range excitation.
We also investigate the role of lateral inhibition and excitation for different interaction profiles
and find a novel regime in which an increase in inhibition strength increases the width of OD columns, in agreement with experiment.
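A toy version of this interplay fits in a few lines (an illustrative sketch; the lattice size, interaction widths and gradient strength are assumptions): binary eye labels on a one-dimensional cortex relax under an energy that combines a Mexican-hat lateral interaction with a single chemical-label gradient.

import numpy as np

rng = np.random.default_rng(1)
N = 200
s = rng.choice([-1, 1], size=N)                  # eye of origin of each cortical unit
x = np.arange(N)

d = np.abs(x[:, None] - x[None, :])
J = np.exp(-d**2 / (2 * 3.0**2)) - 0.5 * np.exp(-d**2 / (2 * 9.0**2))  # Mexican hat
np.fill_diagonal(J, 0.0)
g = 0.01 * (x - N / 2)                           # single label gradient (assumed form)

for _ in range(20000):                           # zero-temperature Metropolis dynamics
    i = rng.integers(N)
    dE = 2 * s[i] * (J[i] @ s + g[i])            # energy change if unit i switches eye
    if dE < 0:
        s[i] = -s[i]
# s now alternates in stripes (OD columns); changing the widths in J or the slope of g
# moves the toy model between segregated and mixed regimes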
T55
An explanation of the familiarity-to-novelty-shift in infant
habituation
Quan Wang*1, Jochen Triesch1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
Habituation is generally defined as a reversible decrement of response to repeated
stimulation. It is considered one of the simplest and most fundamental forms of learning. As
such it has been studied at the neurophysiological, behavioral and computational levels in
species ranging from invertebrates to humans.
Habituation is of particular importance for the study of cognitive development in human
infants, since habituation paradigms like ‘violation of expectation’ use it to infer infants’
perceptual and cognitive abilities [1]. Current models of infant habituation typically assume
that the infant is constructing an internal model of the stimulus. The accuracy of this internal
model is interpreted as a predictor of the infant’s interest in or attention towards the stimulus.
In the early phase of habituation, infants look longer or react more to a stimulus because they have not yet learned an accurate model for it. As their internal model improves, their
interest in the stimulus decreases. This explains why novel stimuli tend to be preferred over
familiar ones. Importantly, however, such models do not account for the so-called familiarity-to-novelty-shift, the finding that infants often transiently prefer a familiar stimulus over a
novel one, given sufficient complexity of both stimuli [2].
We propose a new account of infant habituation in which the infant’s interest in a stimulus is
related to the infant’s learning progress, i.e. the improvement of the infant’s internal model
[3]. As a consequence, infants prefer stimuli for which their learning progress is maximal.
Specifically, we describe the infant’s interest in a stimulus or its degree of attention as the
time derivative of the infant’s learning curve for that stimulus. We consider two kinds of
idealized learning curves with exponential and sigmoidal shape, corresponding to simpler
and more complex stimuli, respectively. The first kind of learning curve has an exponentially
decreasing time derivative, matching the most well-known habituation characteristic. For
sigmoidal learning curves, however, the time derivative has a bell shaped form, as supported
by experimental evidence [4].
This bell-shaped form naturally explains the presence of a familiarity-to-novelty-shift if, say, a
second (novel) stimulus is introduced when learning progress for a first (familiar) stimulus is
currently maximal. Thus, our model predicts that the familiarity-to-novelty-shift emerges for
stimuli that produce sigmoidal but not exponential learning curves.
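The core of this argument fits in a few lines of Python (an illustrative transcription of the idealized curves named above):

import numpy as np

t = np.linspace(0, 10, 1000)
perf_exp = 1 - np.exp(-t)                 # simple stimulus: exponential learning curve
perf_sig = 1 / (1 + np.exp(-(t - 5)))     # complex stimulus: sigmoidal learning curve

interest_exp = np.gradient(perf_exp, t)   # decays monotonically: classic habituation
interest_sig = np.gradient(perf_sig, t)   # bell-shaped: interest first rises, then falls

peak = t[np.argmax(interest_sig)]         # learning progress is maximal near t = 5
# a novel stimulus introduced near this peak is initially less interesting than the
# familiar one, reproducing the familiarity-to-novelty shift for sigmoidal curves only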
Using the derivative of performance as a predictor of attention, our model proposes a
dynamic familiarity-to-novelty shift, which depends on both the subject's learning efficiency
and the task complexity. We speculate that the anterior cingulate cortex may contribute to
estimating the learning progress, since it has been reported that it is activated by change of
error rate but not by error per se [5].
References:
[1] Sirois & Mareschal, Trends Cogn Sci. 2002 6(7):293-8
[2] Hunter & Ames, Advances in infancy research. 1988, 5: 69-95
[3] Schmidhuber, 2009, Journal of SICE, 48(1)
[4] Rankin et al, Neurobiol Learn Mem. 2009 92(2):135-8
[5] Polli et al, Brain. 2008 131(4): 971-86
T56
A reinforcement learning model develops causal inference and
cue integration abilities
Thomas H Weisswange*1, Constantin A Rothkopf1, Tobias Rodemann2, Jochen Triesch1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
In recent years it has been suggested that the performance of human subjects in a large
variety of perceptual tasks can be modelled using Bayesian inference (e.g. [1]). The success
of these methods stems from their capacity to explicitly represent the involved uncertainties.
Recently, such methods have been extended to the task of model selection where the
observer not only has to integrate different cues into a single estimate, but needs to first
select which causal model best describes the stimuli [2]. As an example, consider the task of
orienting towards a putative object. The stimuli consist of an auditory and a visual cue.
Depending on the spatial distance between the position measurements provided by the two
modalities it is more probable to assume that the signals originated from the same source or
from two different sources. An open problem in this area is how the brain acquires the
required models and how it learns to perform the proper kind of inference. Since infants and
young children have been shown not to integrate cues initially [3,4], it seems likely that
extended learning processes play an important role in our developing ability to integrate
cues and select appropriate models.
In the present study we investigate whether the framework of reinforcement learning (RL)
could be used to study these questions. A one-dimensional version of an orienting task is
considered, in which an auditory and a visual cue are placed at either the same or different
positions. Each cue is corrupted by Gaussian noise with the variance of the auditory noise
being larger than that of the visual, reflecting the different uncertainties in the sensory
modalities. A positive reward is given if the agent orients to the true position of the object. In
case the orienting movement does not target the object, we assume that an additional
movement has to be carried out. The cost for each additional movement is proportional to
the distance between the current position and the true position of the target. The action
selection of the agent is probabilistic, using the softmax rule. Learning takes place using the
SARSA algorithm [5].
The simulations show that the reinforcement learning agent is indeed capable of learning to
integrate cues, taking their relative reliabilities into account, when this interpretation leads to a
better detection of the target. Furthermore, the agent learns that if the position estimates
provided by the two modalities are too far apart, it is better not to integrate the two signals
but to select an action that only considers the cue with higher reliability. The displayed
behaviour therefore implicitly corresponds to selection of different causal models. Our results
suggest that generic reinforcement learning processes may contribute to the development of
the ability to integrate different sensory cues and select causal models.
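A minimal version of the simulation can be sketched as follows (an illustrative sketch; the grid size, noise levels, reward constants and learning parameters are assumptions, and since every trial is a single step the SARSA target reduces to the immediate reward):

import numpy as np

rng = np.random.default_rng(2)
n_pos = 21                                 # discretised 1-D positions
sigma_v, sigma_a = 1.0, 3.0                # visual noise smaller than auditory noise
Q = np.zeros((n_pos, n_pos, n_pos))        # state = (visual cue, auditory cue); action = position

def softmax(q, beta=2.0):
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

alpha = 0.1
for _ in range(50000):
    true_pos = rng.integers(n_pos)
    v = int(np.clip(np.rint(true_pos + rng.normal(0, sigma_v)), 0, n_pos - 1))
    a = int(np.clip(np.rint(true_pos + rng.normal(0, sigma_a)), 0, n_pos - 1))
    act = rng.choice(n_pos, p=softmax(Q[v, a]))                 # probabilistic selection
    r = 1.0 if act == true_pos else -0.1 * abs(act - true_pos)  # cost of corrective movements
    Q[v, a, act] += alpha * (r - Q[v, a, act])                  # single-step SARSA update

# after learning, the greedy action lies between v and a for nearby cues (integration)
# and close to v for widely separated cues (selecting the more reliable cue)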
References:
[1] Knill&Pouget 2004, TiNS 27(12), 712-19
[2] Körding et al. 2007, PloS One 2(9)
[3] Nardini et al. 2008, Current Biology 18(9), 689-93
[4] Gori et al. 2008, Current Biology 18(9), 694-98
[5] Rummery&Niranjan 1994, Tech Rep
T57
Continuous learning in a model of rate coded neurons with
calcium dynamics
Jan Wiltschut*2, Mark Voss2, Fred H Hamker1
1 Computer Science Department, Technical University Chemnitz, Chemnitz, Germany
2 Psychologisches Institut II, Westfälische Wilhelms-Universität, Münster, Germany
* [email protected]
Learning in networks with continuous dynamics poses fundamental challenges. The amount
of change in the synaptic connection weights strongly depends on the duration of stimulus
presentation. If the duration is too short, the amount of learning is minimal. If the duration is too long, the weight increase is far too large, compromising the convergence of the whole
network. Additionally, considering attentional feedback connections, which are mediated by
reentrant loops, learning should rather be high in the late response than in the strong, early
response after stimulus onset.
To overcome these difficulties we developed a new learning rule by extending our recently developed one, which has been demonstrated to learn V1 receptive fields (RFs) from natural scenes [1], with calcium dynamics similar to those proposed by the BCM framework [2].
The basic idea is that the synaptic change depends on the level of postsynaptic calcium.
Calcium determines the amount of learning as well as its speed. Calcium is induced by the postsynaptic depolarization and the presynaptic activation. The stronger both activations, the higher the calcium level. Additionally, as suggested by
electrophysiological data, the speed of the connection weight change directly depends on
the calcium level [3]. In the BCM learning rule long-term potentiation (LTP) and long-term
depression (LTD) are dependent on Q (a function of the output activity). In our model, the
threshold critically depends on the calcium level and thus directly influences the connection
weight change.
Our new learning rule leads to characteristic receptive fields when trained on bar stimuli and
on natural scenes. In addition, our framework shows great stability over time: despite continual learning, the “receptive fields” converge to capture the basic statistics of the input. Our network shows LTP and LTD characteristics similar to those of BCM. However, BCM learning has only been studied with a single or just a few neurons, since BCM has so far not addressed how different neurons simultaneously learn different aspects of the inputs, whereas our learning rule is capable of simultaneously learning different receptive fields from bar stimuli and natural scenes.
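The role of calcium can be caricatured as follows (an illustrative sketch; the thresholds and functional form are assumptions, loosely following the calcium-threshold picture of [2,3]):

def weight_change(pre, post, theta_d=0.35, theta_p=0.55):
    """Toy calcium-dependent plasticity; pre/post activations in [0, 1]."""
    ca = pre * post               # calcium rises with joint pre- and postsynaptic activity
    eta = ca                      # the calcium level also sets the speed of learning
    if ca > theta_p:
        return eta * pre * post   # high calcium: LTP
    if ca > theta_d:
        return -eta * pre * post  # intermediate calcium: LTD
    return 0.0                    # low calcium: no change

print(weight_change(0.9, 0.9), weight_change(0.7, 0.7), weight_change(0.2, 0.2))

In the full model the calcium variable has its own dynamics during stimulus presentation, so both the amount and the speed of the weight change track the calcium level.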
References:
[1] J. Wiltschut and F. H. Hamker. (2009). Efficient coding correlates with spatial frequency
tuning in a model of V1 receptive field organization. Vis Neurosci. 26:21-34
[2] H. Z. Shouval, M. F. Bear, L. N. Cooper. (2002). A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proc Natl Acad Sci U S A. 99:10831-6
[3] H. Z. Shouval, G. C. Castellani, B.S. Blais, L. C. Yeung, L. N. Cooper. (2002).
Converging evidence for a simplified biophysical model of synaptic plasticity. Biol
Cybern. 87:383-91.
Sensory processing
T58
A model of auditory spiral ganglion neurons
Paul Wilhelm Bade2, Marek Rudnicki*2, Werner Hemmert1,3
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Fakultät für Elektrotechnik und Informationstechnik, Technische Universität Munich,
Munich, Germany
3 Institute for Medical Engineering, Technische Universität Munich, Munich, Germany
* [email protected]
Our study focuses on biophysical modeling of the auditory periphery and initial stages of
neural processing. We examined in detail synaptic excitation between inner hair cells and
spiral ganglion type I neurons. Spiral ganglion neurons encode and convey information
about sound to the central nervous system in the form of action potentials.
For the purpose of our study we utilized a biophysical model of the auditory periphery
proposed by Sumner (2002). It consists of outer/middle ear filters, a basilar membrane filter
bank, and an inner hair cell model coupled with complex vesicle pool dynamics at the presynaptic membrane. Finally, fusion of vesicles, modelled with a probabilistic function, releases neurotransmitter into the synaptic cleft. The response of auditory nerve fibers is
modeled with a spike generator. The absolute refractory period is set to 0.75 ms and the
relative refractory period is modelled with an exponentially decaying function.
In our approach we substituted the artificial spike generation and refraction model with a
more realistic spiral ganglion neuron model with Hodgkin-Huxley type ion channels proposed
by Negm and Bruce (2008). The model included several channels also found in cochlear
nucleus neurons (K_A, K_ht, K_lt). Our model consisted of the postsynaptic bouton (1.5 x 1.7 µm) from high-spontaneous-rate fibers.
We coupled the model of the synapse with the spiral ganglion neuron using a synaptic
excitation model fitted to the results of Glowatzki and Fuchs (2002), who conducted patch-clamp measurements at the afferent postsynapse. We verified our hybrid
model against various experiments, mostly pure-tone stimulation. Rate-intensity functions fitted experimental data well; rates varied from about 40 spikes/s to a maximum of 260
spikes/s. Adaptation properties were investigated with peri-stimulus time histograms (PSTH).
As adaptation is mainly governed by vesicle pool dynamics, only small changes occurred
compared with the statistical spike generation model and adaptation was consistent with
experiments. Interestingly, Hodgkin-Huxley models of spiral ganglion neurons exhibited a
notch visible in the PSTH after rapid adaptation that could also be observed in experiments.
This was not revealed by the statistical spike generator. The fiber's refractory period was
investigated using inter-spike interval histograms. The refractory period varied with stimulus intensity from 1 ms (spontaneous activity) to 0.7 ms (84 dB SPL). We also analyzed phase
locking with the synchronization index. It was slightly lower compared to the statistical spike
generator. By varying the density of K_lt and K_A channels, we could replicate the heterogeneity
of auditory nerve fibers as shown by Adamson et al. (2002).
In summary, replacing the statistical spike generation model with a more realistic model of
the postsynaptic membrane makes the introduction of non-physiological parameters for absolute and relative refraction obsolete. It improves the refractory behaviour and provides more
realistic spike trains of the auditory nerve.
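For reference, the statistical spike generator that was replaced can be sketched in a few lines (an illustrative reconstruction from the description above; the release probability per time step and the relative-refractory time constant are assumptions):

import numpy as np

rng = np.random.default_rng(3)
dt = 1e-5                                  # 10 µs time step
t_abs = 0.75e-3                            # absolute refractory period (as in the text)
tau_rel = 0.5e-3                           # relative-refractory time constant (assumed)

def spike_train(p_rel, duration=1.0):
    """Bernoulli firing thinned by absolute and exponentially recovering refractoriness."""
    spikes, last = [], -np.inf
    for i in range(int(duration / dt)):
        t = i * dt
        if t - last < t_abs:
            continue                       # absolutely refractory
        recovery = 1 - np.exp(-(t - last - t_abs) / tau_rel)
        if rng.random() < p_rel * recovery:
            spikes.append(t)
            last = t
    return np.array(spikes)

print(spike_train(0.003).size, "spikes in 1 s")   # on the order of a few hundred spikes/s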
Acknowledgements:
Supported within the Munich Bernstein Center for Computational Neuroscience by the
German Federal Ministry of Education and Research (reference numbers 01GQ0441 and
01GQ0443).
T59
Theoretical study of candidate mechanisms of synchronous
“multivesicular” release at ribbon synapses
Nikolai Chapochnikov*2, Tobias Moser1,2, Fred Wolf1,3
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 InnerEarLab, Department of Otorhinolaryngology, Medical School, University of Göttingen,
Göttingen, Germany
3 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
When recording from the postsynaptic cell, ribbon synapses exhibit mini EPSCs (excitatory postsynaptic currents) that are several times larger than those elicited by the release of a
single vesicle, but with the same rising and decaying kinetics. The origin of these large
EPSCs is not fully understood and is usually thought to be either the synchronous release of
multiple vesicles or the fusion of larger compounds.
To explore how feasible these candidate mechanisms are, we used modeling to examine the
properties of two hypothetical scenarios:
1. synchronization of vesicle fusion via the opening of individual calcium ion channels located in close proximity to two vesicles, a rise of the “common” calcium concentration, and calcium-triggered release.
2. prefusion of vesicles by a calcium-dependent sensor similar to that responsible for vesicle fusion to the cytoplasmic membrane.
To assess the first scenario, we used different models of calcium-dependent release and studied how changes in the rate of fusion following calcium binding, the calcium concentration, as well as the open time of the channel would affect the synchronization of vesicles “sensing” this concentration. Assuming a realistic exponential distribution of the open times of the calcium ion channel, we derived the expected distribution of release sizes for different parameter values.
We find that for final fusion rates substantially higher than those reported in the literature and very high calcium concentrations (200–300 µM), the mean time interval between the fusion of vesicles can be smaller than 0.1 ms, a degree of synchronization that exceeds the temporal bandwidth of the patch-clamp recording.
To assess the second scenario, we assumed different 3D positions of the vesicles relative to each other and to the membrane, performed Monte Carlo simulations of fusion events and derived release-size distribution histograms. We find that the presence of the ribbon and the positioning of the vesicles around it would have a strong influence on the release event size distribution.
Although not completely excluding one or the other scenario, this study gives a clearer
picture of the plausibility of both candidate mechanisms.
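A stripped-down Monte Carlo version of the first scenario (an illustrative sketch; the fusion rate and mean open time are assumptions in the range discussed above) shows why very high fusion rates yield sub-0.1-ms synchrony:

import numpy as np

rng = np.random.default_rng(4)

def inter_fusion_intervals(n_trials=10000, k_fuse=3e4, tau_open=1e-3):
    """One channel opening exposes two docked vesicles to high calcium.
    k_fuse: fusion rate at high calcium [1/s]; tau_open: mean channel open time [s]."""
    deltas = []
    for _ in range(n_trials):
        t_open = rng.exponential(tau_open)        # exponentially distributed open time
        t1, t2 = rng.exponential(1 / k_fuse, 2)   # waiting times to fusion of each vesicle
        if t1 < t_open and t2 < t_open:           # both fuse while calcium is elevated
            deltas.append(abs(t1 - t2))
    return np.array(deltas)

d = inter_fusion_intervals()
print(d.mean() * 1e3, "ms mean interval between the two fusions")   # well below 0.1 ms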
T60
Modulation of neural states in the visual cortex by visual stimuli
Maolong Cui1, C. Chiu2, M. Weliky3, József Fiser1,4*
1 Department of Psychology, Brandeis University, Waltham, MA 02454
2 Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461
3 Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
14627
4 Volen Center for Complex Systems, Brandeis
* [email protected]
According to recently emerging views on visual cortical processing, activity in the primary
visual cortex is not fully determined by the visual stimulus but is, to a large extent, governed
by internal states that are changing dynamically. However, neither the dynamical nature of
these states nor the conditions required for their emergence has been investigated
systematically before.
We analyzed multi-electrode recordings in the primary visual cortex of awake behaving
ferrets (N=30) after normal and visually deprived development at different ages spanning the
range between postnatal days P24 and P170. Visual deprivation was achieved by
bilateral lid suture up to the time of the visual tests. Multi-unit recordings were obtained in
three different conditions: in the dark, when the animals watched random noise sequences,
and when they saw a natural movie. We used 10-second segments of continuous recordings
in these conditions to train hidden Markov models (HMMs), which assume dynamical dependencies among internal states of the neural system. To test the ability of the obtained models to characterize the neural signals, we used them to infer the condition under which a specific piece of neural signal was recorded. For animals older than P44, the correct rates of inference are higher than 70% in both normal and lid-sutured animals, and the correct rate increases with age (P<0.05). In a control condition with simulated Poisson signals that retain the firing rates but not the temporal structure of the recordings, the performance of the corresponding HMMs deteriorated significantly (P < 0.01). We
also assessed the similarity between underlying states used by models that are trained on
data across different conditions (Movie, Noise and Dark), by computing the Kullback-Leibler
distance between the probability distributions of the observed population activity generated by the underlying states. We found that the similarity between underlying states across conditions increases strongly with age between P28 and P44 in normal animals, but remains relatively unchanged between P44 and P170 for both normal and lid-sutured animals.
These results suggest that the dynamic nature of the emerging underlying states is critical in
characterizing the neural activity in the primary visual cortex. However, this emergence does
not depend fully on proper visual input but rather is determined by internal processes.
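The decoding step can be illustrated with present-day tools (an illustrative sketch using the hmmlearn package, which is not the toolchain of the study; the toy data sizes and rates are invented):

import numpy as np
from hmmlearn import hmm      # illustration only; not the software used in the study

rng = np.random.default_rng(5)
# toy population activity: binned spike counts (samples x units) for two conditions
movie = rng.poisson(lam=[2.0, 0.5, 1.0], size=(500, 3)).astype(float)
dark = rng.poisson(lam=[0.5, 0.5, 0.5], size=(500, 3)).astype(float)

models = {}
for name, data in (("movie", movie), ("dark", dark)):
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(data)                                   # one HMM per recording condition
    models[name] = m

test = rng.poisson(lam=[2.0, 0.5, 1.0], size=(100, 3)).astype(float)
print(max(models, key=lambda k: models[k].score(test)))   # infers the condition: movie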
T61
Evaluating the feature similarity gain and biased competition
models of attentional modulation
Mohammad Reza Daliri*1,2, Vladislav Kozyrev1,2, Stefan Treue2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 German Primate Center, Göttingen, Germany
* [email protected]
Visual attention enables the brain to enhance behaviorally relevant neuronal population responses and to suppress irrelevant information. In the past, several models have been proposed for the mechanisms of attention. Two of the more general theories are the biased competition model (BCM) and the feature similarity gain model (FSGM). The BCM assumes
that stimuli compete for neuronal responses and attention biases this competition towards
the behaviorally relevant stimulus. The response to two different stimuli inside the same
receptive field is therefore biased towards the attended stimulus, i.e. a neuron’s response
under those attentional conditions approaches the response evoked by the attended
stimulus alone. The FSGM states that the gain of attentional modulation is a function of the
similarity between the attended feature and a cell's preferred feature. When comparing responses while attending to one or the other of two stimuli inside a receptive field, the FSGM therefore predicts a higher response in the condition where the attended stimulus is better matched to the preferences of the neuron, such as its preferred direction of motion.
Here, we evaluated the two models by designing a paradigm which yields different
predictions for each model. We placed two coherently moving random dot patterns (RDPs)
inside the receptive field (RF) of direction-selective neurons in the middle temporal area (MT)
of two macaque monkeys. Both patterns moved in the preferred direction of the neuron but
elicited different responses because they differed in their contrast. In a given trial the animal
was cued to attend to either the low- or the high-contrast pattern and to release a lever as
soon as a direction change occurred in the cued pattern while ignoring changes in the
uncued stimulus. Because the two RDPs evoke different responses when presented alone,
the BCM predicts a lower response when the animals attended to the low contrast RDP.
Because the two RDPs move in the same direction, the similarity between the attended and
preferred feature does not change when the animals attend to one vs. the other RDP in the
RF. The FSGM therefore predicts the same response in both conditions.
We recorded the responses of 81 MT cells of two macaque monkeys. Their responses were
significantly modulated by spatial attention. On average these neurons showed a response
increase of approx. 20% when the monkeys switched their attention from outside of the
receptive field (RF) to a stimulus inside the RF. But in the relevant comparison, i.e. when
attention was directed to the low vs. the high contrast pattern inside the receptive field, no
significant change in responses was observed.
In conclusion, our data demonstrate an attentional modulation in primate extrastriate visual cortex that is not consistent with the biased competition model of attention but is rather
better accounted for by the feature similarity gain model.
Acknowledgements:
This work was supported by grant 01GQ0433 from the Federal Ministry of Education and
Research to the Bernstein Center for Computational Neuroscience Goettingen.
T62
While the frequency changes, the relationships stay
Weijia Feng*1,2, Peng Wang2, Martha Havenith2, Wolf Singer1,2, Danko Nikolic1,2
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
2 Max-Planck Institute for Brain Research, Frankfurt, Germany
* [email protected]
Neuronal oscillations cover a broad frequency range and vary even within distinct frequency
bands (beta and gamma) in a content-dependent way. Currently, it is not clear which factors
determine the frequencies in a particular experimental setting. To investigate those factors,
we recorded responses from multiple neurons in cat area 17 under anaesthesia using
Michigan probes. The visual stimuli were high-contrast sinusoidal gratings drifting in 12 directions.
First, the oscillation frequencies were affected by the state of the cortex. When the
responses of the same neuron were recorded at different times (up to 10 hours inter-recording interval), the overall oscillation frequency with which this neuron responded could
vary by up to 5 Hz (~20% of the average frequency). Second, during each recording (no
change in the cortical state), the oscillation frequencies were usually not identical in
response to different stimuli: Some drifting directions of the grating induced higher oscillation
frequencies than others. This “tuning” of oscillation frequency varied across different neurons
recorded simultaneously, even if these neurons had similar firing-rate tuning. The
third and the most interesting result was that the tuning of a neuron’s oscillation frequency
remained constant over time, i.e., over different cortical states. Thus, the stimulus condition
producing the highest (or the lowest) oscillation frequency remained the same irrespective of
the overall range of frequencies exhibited during a particular cortical state.
These results suggest the following conclusion: While the overall oscillation frequency (i.e.
the range covered across all stimulus conditions) is flexible and state-dependent, the relative
changes in oscillation frequencies induced by the stimulus properties are fixed. This
suggests that the latter property of neuronal responses is determined anatomically, by the connectivity patterns of the underlying networks, and, because of its stability, can in principle
be used for coding.
T63
Optical analysis of Ca2+ channels at the first auditory synapse
Thomas Frank*1,2, Nikolai Chapochnikov2, Andreas Neef1,3, Tobias Moser1,2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 InnerEarLab, Department of Otorhinolaryngology, Medical School, University of Göttingen,
Göttingen, Germany
3 Max-Planck Institute for Nonlinear Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Transmitter release at the first auditory synapse, the ribbon synapse of cochlear inner hair
cells (IHCs), is tightly regulated by Ca2+. Using fast confocal Ca2+ imaging, we have
recently described pronounced differences in presynaptic Ca2+ signals between single
synapses within the same cell. These Ca2+ microdomains differed both in their amplitude and in the voltage dependence of their activation. As for the mechanism behind the amplitude
heterogeneity, we provided indirect evidence for differences in the Ca2+ channel
complement, pointing towards a differential regulation of Ca2+ channel number across
synapses.
In order to directly study synaptic Ca2+ channels, we are currently implementing an optical
fluctuation analysis approach. We will present preliminary results along with a theoretical
treatment. Moreover, we will present results of modeling the consequences of different Ca2+
channel complements for sound encoding at different synapses. This work provides a framework for how presynaptic heterogeneity can cause the diverse responses of the
postsynaptic neurons, which, together, encode the huge range of perceived stimulus
intensities (sound pressure varying over 6 orders of magnitude).
T64
Learning 3D shape spaces from videos
Mathias Franzius*1, Heiko Wersing1, Edgar Körner1
1 Honda Research Institute Europe GmbH, Offenbach, Germany
* [email protected]
We introduce an architecture for unsupervised learning of representations of the three-dimensional shape of objects from movies. During the unsupervised learning phase, the
system optimizes a slowness learning rule and builds up a pose-invariant and shape-specific
representation, i.e., objects of similar shape cluster independently of viewing angle and
views of distinct shapes cluster in distinct regions of the feature space. Furthermore, the
system generalizes to previously unseen shapes that result from 3D-morphing between the
training objects.
The model consists of four hierarchical converging layers with increasing receptive field
sizes. Each layer implements the same slowness optimization (Slow Feature Analysis). The
representations in the top layer of the model thus extract those features that on average
change slowly or rarely over time. During the training phase, views of objects are presented
to the system. The objects are freely rotated in space, either rendered artificially ("rendered") or recorded as videos of physical objects presented to a camera ("video"). The "rendered" dataset consists of views of five geometric shapes. In the "video" dataset, views of geometric objects, toys and household objects are recorded with a camera while they are freely rotated in space.
After learning, views of the same object under different perspectives cluster in the generated
feature space, which allows a high classification performance. While this property has been
reported before, we show here that the system can generalize to views of completely new
objects in a meaningful way. After learning on the ``rendered'' dataset, the system is tested
with morphed views of shapes generated from 3D interpolation between the training shapes.
The representations of such morph views form compact volumes between the training object
clusters (Figure 1 in Supplemental materials) and encode geometric properties instead of
low-level view features. We argue that this representation forms a shape space, i.e., a
parametrization of 3D shape from single 2D views. For the "video" dataset, clusters are less
compact but still allow good classification rates.
A shape space representation generated from object views in a biologically plausible model
is a step towards unsupervised learning of affordance-based representations. The shape of
an object (not its appearance) determines many of its physical properties, specifically how
it can be grasped. The system provides a basis for integrating affordances into object
representations, with potential for automated object manipulation in robotic systems.
Additionally it provides a new approach for data-driven learning of Geon-like shape
primitives from real image data.
This model of a converging hierarchy of modules optimizing the slowness function has
earlier been successfully applied to many areas, including modeling the early visual system,
learning invariant object recognition, and learning of hippocampal codes. Slowness learning
might thus be a general principle for sensory processing in the brain.
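For readers unfamiliar with the optimization, a linear single-layer version of Slow Feature Analysis reduces to whitening followed by an eigendecomposition of the temporal-derivative covariance (an illustrative sketch; the hierarchical model stacks such steps together with nonlinear expansions):

import numpy as np

def sfa(X):
    """Linear SFA; X has shape (time, features). Returns signals, slowest first."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    Z = X @ (evecs / np.sqrt(evals))          # whiten the input signals
    devals, devecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return Z @ devecs                         # ascending derivative variance: column 0 is slowest

t = np.linspace(0, 10, 2000)
slow, fast = np.sin(t), np.sin(20 * t)
X = np.column_stack([slow + 0.1 * fast, fast + 0.1 * slow])
Y = sfa(X)                                    # Y[:, 0] recovers the slow source (up to sign/scale)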
T65
High-frequency oscillations in EEG and MEG recordings are
modulated by cognitive context
Theresa Götz*1, Gabriel Curio3, Otto Witte4, Herbert Witte2, Jens Haueisen1
1 Biomagnetic Center, University Hospital Jena, Jena, Germany
2 Friedrich Schiller University, Jena, Germany
3 Neurologie, Charité-Universitätsmedizin, Berlin, Germany
4 Neurologie, Universitätsklinikum Jena, Jena, Germany
* [email protected]
A context-dependent modulation of late event-related potential (ERP) components, such as
the "P3", can be observed in oddball paradigms where a cognitive "context" is defined as the
relation between rare target events and an accompanying stream of frequent standard
events. EEG studies point to a two-stage processing of auditory stimuli: earlier components
(N1 and P2) are modulated within a specific modality whereas later components (P3) are
sensitive to a specific context.
Here, we studied the possibility of a context-dependent modulation of EEG and MEG high-frequency oscillations (HFOs; main energy at about 600 Hz), which can be evoked after
electrical stimulation of the median nerve. We showed earlier that these HFOs represent
noninvasive correlates of synchronised spikes in neuronal populations and are suitable to
assess information transfer since they occur in both cortical and subcortical structures. In
the present study, we used a bimodal paradigm employing electrical median nerve stimuli
together with oddball auditory interference and compared this to a control condition without
auditory stimulation.
HFO source waveforms were reconstructed by dipole modelling from multi-channel EEG and
MEG recordings in 12 healthy human subjects for three HFO components (a precortical
deep radial source, a cortical tangential source at Brodman Area 3b and a cortical radial
source at Brodman Area 1). We compared normalized maximum Hilbert envelope
amplitudes of these HFOs for three conditions (control, median nerve stimulus after standard
tones or, resp., after target tones). Hilbert envelope maxima were found significantly larger
during the control than in the oddball condition. Within the oddball condition itself, we found
higher HFO amplitudes after the standard than the target tone. Thus, noninvasively recorded
'spike-like' HFOs are modulated by different cognitive contexts.
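The envelope measure can be outlined as follows (an illustrative sketch; the band edges and sampling rate are assumptions, and the study applied the measure to dipole source waveforms rather than raw channels):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hfo_envelope_max(x, fs):
    """Band-pass around 600 Hz, then take the peak of the Hilbert envelope."""
    b, a = butter(4, [450, 750], btype="bandpass", fs=fs)
    return np.abs(hilbert(filtfilt(b, a, x))).max()

fs = 5000.0
t = np.arange(0, 0.1, 1 / fs)
burst = np.sin(2 * np.pi * 600 * t) * np.exp(-((t - 0.02) / 0.005) ** 2)   # synthetic HFO
x = burst + 0.2 * np.random.default_rng(6).standard_normal(t.size)
print(hfo_envelope_max(x, fs))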
T66
Mobile brain/body imaging (MoBI) of active cognition
Klaus Gramann*1, Nima Bigdely-Shamlo1, Andrey Vankov1, Scott Makeig1
1 Swartz Center for Computational Neuroscience, University of California, San Diego, USA
* [email protected]
Human cognition is embodied in the sense that cognitive processes are based on and make
use of our physical structure while being situated in a specific environment. Brain areas
originally evolved to organize motor behavior of animals in three-dimensional environments
also support human cognition (Rizzolatti et al., 2002), suggesting that joint imaging of human
brain activity and motor behavior could be an invaluable resource for understanding the
distributed brain dynamics of human cognition. However, despite existing knowledge, there is
a lack of studies investigating the brain dynamics underlying motivated behaviors. This is
due to technical constraints of brain imaging methods (e.g., fMRI, MEG) that require
subjects to remain motionless because of high sensitivity to movement artifacts. This
imposes a fundamental mismatch between the bandwidth of recorded brain dynamics (now up to 10^6 bits/second) and behavior (button presses at ~1/second). Only
electroencephalography (EEG) involves sensors light enough to allow near-complete
freedom of movement of the head and body. Furthermore, EEG provides sufficient time
resolution to record brain activity on the time scale of natural motor behavior, making joint
EEG and behavioral recording the clear choice for mobile brain imaging of humans.
To better understand the embodied aspect of human cognition, we have developed a mobile
brain/body imaging (MoBI) modality to allow for synchronous recording of EEG and body
movements as subjects actively perform natural movements (Makeig et al., 2009). MoBI
recording allows analyses of brain activity during preparation, execution, and evaluation of
motivated actions in natural environments. In a first experiment, we recorded high-density
EEG with a portable active-electrode amplifier system mounted in a specially constructed
backpack, while whole body movements were assessed with an active motion capture
system. The concurrently recorded time series data were synchronized online across a
distributed PC LAN. Standing subjects were asked to orient to (point, look, or walk towards)
3-D objects placed in a semi-circular array (Figure 1). Online routines tracked subject
pointing and head directions to cue advances in the stimulus sequence. Independent
components (ICs) accounting for eye movements, muscle, and brain activities were
identified by independent component analysis (ICA; Makeig et al., 2004) applied to the
EEG data. Equivalent dipoles for IC processes were located throughout cortex, the eyes,
and identifiable neck and scalp muscles. Neck muscle activity exhibited task-dependent
modulations across a broad frequency range while spectral activities of brain ICs exhibited
modulations time-locked to eye movements, segments of body and head movements,
including precisely timed high gamma band modulations in frontal medial cortex.
Simultaneous recording of whole-body movements and brain dynamics during free and
naturally motivated 3-D orienting actions, combined with data-driven analysis of brain
dynamics, allows, for the first time, studies of distributed EEG dynamics, body movements,
and eye, head and neck muscle activities during active cognition in situ. The new mobile
brain/body imaging approach allows analysis of joint brain and body dynamics supporting
and expressing natural cognition, including self-guided search for and processing of relevant
information and motivated behavior in realistic environments.
T67
Color edge detection in natural scenes
Thorsten Hansen*1, Karl Gegenfurtner1
1 Department of General Psychology, Justus Liebig University, Giessen, Germany
* [email protected]
In a statistical analysis of over 700 natural scenes from the McGill calibrated color image
database (Olmos and Kingdom, 2004, http://tabby.vision.mcgill.ca) we found that luminance
and chromatic edges are statistically independent. These results show that chromatic edge
contrast is an independent source of information that natural or artificial vision systems can
linearly combine with other cues for the proper segmentation of objects (Hansen and
Gegenfurtner, 2009, Visual Neuroscience).
Here we investigate the contribution of color and luminance information to the prediction of human-labeled edges. Edges were detected in three planes of the DKL color space (Lum, L-M, S-(L+M)) and compared to human-labeled edges from the Berkeley segmentation data set. We
used a ROC framework for a threshold-independent comparison of edge detector responses
(provided by the Sobel operator) to ground truth (given by the human marked edges). The
average improvement as quantified by the difference between the areas under the ROC
curves for pure luminance and luminance/chromatic edges was small. The improvement was
only 2.7% if both L-M and S-(L+M) edges were used in addition to the luminance edges,
2.1% for simulated dichromats lacking an L-M channel, and 2.2% for simulated dichromats
lacking an S-(L+M) channel. Interestingly, the same improvement for chromatic information
(2.5%) occurred if the ROC analysis was based on human-labeled edges in gray-scale
images. Probably, observers use high-level knowledge to correctly mark edges even in the
absence of a luminance contrast. While the average advantage of the additional chromatic
channels was small, for some images a considerably higher improvement of up to 11%
occurred. For a few images the performance decreased. Overall, color was advantageous in
74% of the 100 images we evaluated. We interpret these results as showing that color information is on average beneficial for the detection of edges and can be highly useful and even crucial in particular scenes.
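The threshold-independent comparison can be reproduced in outline (an illustrative sketch; the toy image stands in for the Berkeley data):

import numpy as np
from scipy import ndimage
from sklearn.metrics import roc_auc_score

def edge_auc(channel, ground_truth):
    """Area under the ROC curve of Sobel edge strength against labeled edge pixels."""
    strength = np.hypot(ndimage.sobel(channel, axis=0), ndimage.sobel(channel, axis=1))
    return roc_auc_score(ground_truth.ravel(), strength.ravel())

rng = np.random.default_rng(7)
img = np.zeros((64, 64)); img[:, 32:] = 1.0            # one vertical luminance edge
img += 0.1 * rng.standard_normal(img.shape)
gt = np.zeros((64, 64), dtype=int); gt[:, 31:33] = 1   # stand-in for human-marked edges
print(edge_auc(img, gt))                               # close to 1.0

Adding a chromatic plane amounts to computing the same AUC for a combined edge strength and comparing the two areas.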
T68
Simulation of tangential and radial brain activity: different
sensitivity in EEG and MEG.
Jens Haueisen*1, Michael Funke5, Daniel Güllmar4, Roland Eichardt3, Herbert Witte2
1 Biomagnetic Center, University Hospital Jena, Jena, Germany
2 Friedrich Schiller University, Jena, Germany
3 Institute of Biomedical Engineering and Informatics, Technical University Ilmenau,
Ilmenau, Germany
4 Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena
University Hospital, Jena, Germany
5 University of Utah, Salt Lake City, USA
* [email protected]
Based on the main direction of the neuronal currents with respect to the local skull curvature,
it is common to distinguish between tangential brain activity originating mainly from the walls
of the sulci and radial brain activity originating mainly from the gyri or the bottom of the sulci.
It is well known that MEG is more sensitive to tangential activity while EEG is sensitive to
both radial and tangential activity. Thus, it is surprising that studies in epileptic patients
report cases where spikes are visible in MEG but not in EEG. Similarly, in sensory processing, MEG signal components sometimes occur where there are no EEG components. Recently, it
was discussed that a lower sensitivity of MEG to background activity might be the reason for
the signal visibility in MEG but not in EEG. Consequently, we analyze the signal-to-noise
ratio (SNR) of simulated source signals at varying orientations and with varying background
activity in realistic head models. For a fixed realistic background activity, we find a higher SNR for source signals in the MEG as long as the source orientation deviates by no more than 30 degrees from the tangential direction. Vice versa, the SNR for source signals in the EEG is higher as long as the source orientation deviates by no more than 45 degrees from the radial direction. Our simulations provide a possible explanation
for the experimentally observed differences in occurrence of EEG / MEG sensory signal
components and epileptic spike detection in EEG and MEG. Combined EEG / MEG
measurements will lead to a more complete picture of sensory processing in the brain.
T69
Cortico-cortical receptive fields – how V3 voxels sample
information across the visual field in V1
Jakob Heinzle*1,2, Thorsten Kahnt1, John-Dylan Haynes1,2
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Charité-Universitätsmedizin, Berlin, Germany
* [email protected]
Introduction:
Visual information processing is often viewed as a hierarchical process. This cortical
hierarchy is well established in animal experiments from both electrophysiological and anatomical studies, and computational models have been proposed that build on the very same hierarchical structure. Although visual areas can be determined in fMRI by retinotopic mapping, it is not known whether a computational model of vision could also be based on BOLD activations alone. In this study we present first steps towards an understanding
of how activation of voxels in a higher visual area (specifically V3) is related to and can be
predicted from the activation of the entire ensemble of voxels in a lower visual area, e.g. V1.
Methods:
We scanned subjects on a 3T MRI system (Siemens TIM Trio) while they watched a circular
checkerboard with randomly and independently varying local contrasts. Subjects were
required to fixate during visual stimulation. EPI images coverage was restricted to visual
areas to allow for a high sampling rate (TR=1.5 sec). The visual areas of each subject were
defined by standard retinotopic mapping techniques and the activation of all voxels within
visual areas V1 and V3 was extracted and corrected for motion and global activation of the
whole brain. We then calculated, using SVM or standard linear regression, the coefficients
that allowed for the best prediction of responses in V3 given the activation in V1. The
resulting regression coefficients define a “prediction map” in area V1 that reflects the
contribution of individual voxels in V1 for the prediction of a particular voxel in area V3.
Results:
The ensemble of voxels in V1 predicted single voxel activity in V3. The individual prediction
maps show high variability, ranging from a distribution of weights that closely reflects the
retinotopic position of the predicted voxel to broad distributions that pool information from all
over V1. However, when the prediction maps are aligned relative to their position in visual
space, the average map closely resembles retinotopy.
Discussion:
Despite the noise in raw fMRI data, it is possible to find direct relations between activations
in different visual areas. The regression we used corresponds to a simple one-layer perceptron and is a simplification of the true biological network. Future models should also include additional layers and nonlinearities. Finally, it will be crucial to compare such voxel-based computational models with existing neuronal models of visual processing by using generative models such as dynamic causal modeling.
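In outline, estimating a prediction map is a (regularized) linear regression from all V1 voxels onto one V3 voxel (an illustrative sketch with synthetic data; the study used SVM or standard linear regression):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
n_vol, n_v1 = 300, 500                       # time points x V1 voxels (toy sizes)
V1 = rng.standard_normal((n_vol, n_v1))
true_map = np.zeros(n_v1); true_map[40:45] = 1.0        # localized "retinotopic" weights
v3_voxel = V1 @ true_map + 0.5 * rng.standard_normal(n_vol)

model = Ridge(alpha=10.0).fit(V1, v3_voxel)
prediction_map = model.coef_                 # the prediction map over V1 voxels
print(np.argsort(prediction_map)[-5:])       # recovers the contributing V1 voxels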
Acknowledgements:
This work was supported by the Max Planck Society, the German Research Foundation and
the Bernstein Computational Neuroscience Program of the German Federal Ministry of
Education and Research.
T70
Predicting the scalp potential topography in the multifocal VEP by
fMRI
Shariful Islam*2, Torsten Wüstenberg1, Michael Bach3, Hans Strasburger2
1 Charité-Universitätsmedizin Berlin, Berlin, Germany
2 Department of Medical Psychology, Universitätsmedizin Göttingen, Göttingen, Germany
3 Universitäts-Augenklinik, Freiburg, Germany
* [email protected]
Visual evoked potentials from localized stimuli depend on the individual folding of the primary and secondary visual cortex. To cross-validate three non-invasive imaging approaches, we aimed to predict multifocal VEP amplitudes on the scalp from retinotopic fMRI and EEG data. To obtain retinotopic information we stimulated the central visual field using three identical sets of segmented checkerboard patterns (rings, wedges and segments) in both fMRI and EEG recordings.
The results are used to predict evoked potentials from multifocal methods where orthogonal
time-series stimulation allows decomposing the single-electrode EEG signal into
components attributable to each stimulus region. A retinotopic map in areas V1 and V2 has
been obtained on an inflated cortical surface generated after preprocessing of the fMRI data
in Brain Voyager.
We have also developed a Matlab graphical user interface (GUI) which, by solving the EEG forward problem in a two-layer (cortical and scalp surface) realistic head model, shows the scalp potential distribution of a given dipole generator obtained from fMRI along with its location
and orientation in the brain. For the same brain, with stimulation at specific visual-field
locations, we show dipoles from multi-electrode EEG obtained using sLORETA.
T71
Influence of attention on encoding of two spatially separated
motion patterns by neurons in area MT
Vladislav Kozyrev*1,2, Anja Lochte2, Mohammad Reza Daliri1,2, Demian Battaglia1,3, Stefan
Treue2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 German Primate Center, Göttingen, Germany
3 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Attending to a spatial location or to non-spatial features of simultaneously presented visual
stimuli enhances neuronal responses in the primate visual cortex to relevant stimuli and
reduces responses to irrelevant ones. Previous extra-cellular recording studies have shown
that switching attention from outside the receptive field (RF) to a single stimulus inside the
RF of neurons in the extrastriate visual cortex causes a multiplicative modulation of the
neuron's tuning curve. Here we investigated how attention affects the tuning curves created
by a systematic variation of two moving patterns while recording single neurons from the
middle temporal visual area (MT) of two rhesus monkeys. We used random dot patterns
(RDPs) moving within two spatially separated stationary apertures, sized and positioned to fit
within the classical RF. Another pair of RDPs was presented far outside the RF.
The monkeys were trained to attend to one of those patterns (the target) while maintaining
their gaze on a fixation spot. The target was specified by a cue that preceded every trial. The
monkeys were required to detect either a luminance change in the fixation spot (attend-fix
condition) or a transient change of direction or speed in the RDP either inside the RF
(attend-in condition) or far outside the RF (attend-out condition). In the latter two conditions
the cue appeared at the same location and moved in the same direction as the target
pattern. The two RDPs inside the RF always moved with a relative angle of 120 deg. Tuning
curves were determined in the attend-fix and attend-in conditions by systematically varying
the RDPs' directions. In the attend-out condition the target moved either in the preferred or
null direction with the stimulus in the RF moving in the preferred direction.
The tuning curves showed two peaks corresponding to the two stimulus configurations in
which one of the patterns inside the RF moved in the neuron's preferred direction. We
hypothesized that attention independently modulates the responses evoked by each of the
two stimuli. Therefore, in order to quantitatively estimate the effects of attention on the tuning
curves, we fitted our data using the sum of two Gaussians corresponding to the independent
responses to the two RDPs. The fitting parameters in the attend-in versus the attend-fix
condition demonstrated an attentional gain enhancement (15%) and an increase in width
(17%) of the Gaussian representing the target pattern as well as a gain reduction (17%) of
the second Gaussian. This pattern of results suggests that attention exerts its influence at a
processing level where the two stimuli are encoded by independent neuronal populations,
such as area V1. The effect of attentional broadening of the tuning curve is non-multiplicative and cannot be predicted by existing models of attention.
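The fitting step can be sketched as follows (an illustrative sketch; all parameter values are invented, and the actual fits may constrain the Gaussians differently):

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(theta, a1, a2, mu, sigma, base):
    """Sum of two Gaussians 120 degrees apart, one per RDP inside the RF."""
    g1 = a1 * np.exp(-0.5 * ((theta - mu) / sigma) ** 2)
    g2 = a2 * np.exp(-0.5 * ((theta - (mu + 120)) / sigma) ** 2)
    return base + g1 + g2

theta = np.arange(0, 360, 30, dtype=float)
rng = np.random.default_rng(9)
resp = two_gaussians(theta, 40, 25, 90, 35, 5) + rng.normal(0, 2, theta.size)
popt, _ = curve_fit(two_gaussians, theta, resp, p0=[30, 30, 80, 30, 0])
# the fitted gains a1 and a2 are then compared across attentional conditions
print(popt)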
Acknowledgements:
The project was supported by the Volkswagen Foundation, grant I/79868, and the BCCN
grant 01GQ0433 from the BMBF.
T72
Modeling and analysis of the neurophonic potential in the laminar
nucleus of the barn owl
Paula Kuokkanen*4, Hermann Wagner2,3, Catherine Carr2, Richard Kempter1
1 Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
2 Department of Biology, University of Maryland, College Park, USA
3 Institute for Biology II, RWTH Aachen, Aachen, Germany
4 Institute for Theoretical Biology, Humboldt University, Berlin, Germany
* [email protected]
It is a challenge to understand how the brain represents temporal events. One of the most
intriguing questions is how sub-millisecond representations can be achieved despite the
large temporal variations at all levels of processing. For example, the neurophonic potential,
a frequency-following potential occurring in the network formed by nucleus magnocellularis
and nucleus laminaris in the brainstem of the bird, has a temporal precision below 100
microseconds.
Here we address the question of how the neurophonic potential is generated and how its
remarkable temporal precision is achieved. The neurophonic potential consists of at least
three spectral components [1], and our studies aim at revealing their origin. Our hypothesis
is that magnocellular axons are the origin of the high-frequency (> 3 kHz) component of the
neurophonic. To test this hypothesis, we present an advanced analysis of in-vivo data,
numerical simulations of the neurophonic potential, and analytical results. Describing the
neurophonic as an inhomogeneous Poisson process (with periodic rate) that is convolved
with a spike kernel, we show how the signal-to-noise ratio (SNR) of this signal depends on
the mean rate, the vector strength, and the number of independent sources. Interestingly,
the SNR is independent of the spike kernel and subsequent filtering. The SNR of the in-vivo
neurophonic potential in response to acoustic stimulation with tones then reveals that the
number of independent sources contributing to this signal is large. Therefore, action
potentials of laminaris neurons cannot be the main source of the neurophonic because these neurons
are sparsely distributed with a mean distance of about 70 micrometers. Synapses between
magnocellular axons and laminaris neurons are assumed to contribute little to the
neurophonic because neurons in the high-frequency region of laminaris are nearly spherical
with a diameter in the range of 10 micrometers and they have virtually no dendritic tree. On
the other hand, the summed signal from densely packed magnocellular axons can explain
the high SNR of the neurophonic. This hypothesis is also supported by our finding that the
stimulus frequency at which the maximum SNR is reached is lower than the unit’s best
frequency (BF), which can be explained by the frequency-tuning properties of the vector
strength [2] and the firing rate [3] of magnocellularis neurons.
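A compact simulation of this description (an illustrative sketch; the rate, vector strength and kernel shape are assumptions) shows the SNR growing with the number of independent sources:

import numpy as np

rng = np.random.default_rng(10)
fs, f0, dur = 50000, 4000.0, 0.5                # sampling rate, stimulus frequency, seconds
t = np.arange(0, dur, 1 / fs)

def neurophonic(n_sources, rate=200.0, vs=0.4):
    """Sum of periodic-rate Poisson sources (vector strength vs) convolved with a kernel."""
    lam = rate * (1 + 2 * vs * np.cos(2 * np.pi * f0 * t)) / fs
    lam = np.clip(lam, 0, None)
    counts = rng.poisson(lam, size=(n_sources, t.size)).sum(axis=0).astype(float)
    kernel = np.exp(-np.arange(0, 0.001, 1 / fs) / 2e-4)    # spike kernel (assumed shape)
    return np.convolve(counts, kernel, mode="same")

def snr(sig):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f0))] / np.median(spec)

for n in (1, 10, 100):
    print(n, snr(neurophonic(n)))               # SNR increases with the number of sources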
Acknowledgements:
This work was supported by the BMBF (Bernstein Collaboration in Computational
Neuroscience: Temporal Precision, 01GQ07102).
References:
[1] Wagner H, Brill S, Kempter R, Carr CE: Microsecond precision of phase delay in the
auditory system of the barn owl. J Neurophysiol 2005, 94(2):1655-1658.
[2] Koeppl C: Phase locking to high frequencies in the auditory nerve and cochlear nucleus
magnocellularis of the barn owl Tyto alba. J Neurosci 1997, 17(9):3312-3321.
[3] Koeppl C: Frequency tuning and spontaneous activity in the auditory nerve and cochlear
nucleus magnocellularis of the barn owl Tyto alba. J Neurophysiol 1997, 77(1):334-377.
T73
Attentional modulation of the tuning of neurons in area MT to the
direction of transparent motion
Anja Lochte*2, Valeska Stephan2, Vladislav Kozyrev1,2, Annette Witt1,3, Stefan Treue2
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 German Primate Center, Göttingen, Germany
3 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
Transparent motion perception requires the segmentation and separate representation of
multiple motion directions within the same part of visual space. In a previous study we
recorded responses from direction-selective neurons in macaque middle temporal area (MT)
to unattended bidirectional random dot patterns (RDPs; Treue et al., 2000). The profile of
responses to such transparent motion patterns is the scaled sum of responses to the
individual components, showing two peaks when the angle between the component
directions exceeds the neuron’s tuning width.
Here we investigated the influence of attention on the representation of the direction
components of transparent motion by recording from MT in an attentional paradigm. Our
question was whether the effects of attention are better characterized as a modulation of
the population response in MT or as a modulation of two independent neuronal populations,
each encoding one of the two directions (as might be expected to happen in area V1).
Two monkeys were trained on a task in which an initial cue indicated the relevant direction of
motion in a given trial. Two RDPs were then presented, moving within a common stationary
aperture, sized and positioned to fit within the classical receptive field. While maintaining
gaze on a fixation point, the animals were instructed to respond to a speed increment within
the cued surface. In a sensory condition, the monkeys were asked to respond to a
luminance change of the fixation point. By systematically varying the overall pattern
direction, tuning curves were measured with a constant relative angle of 120 degrees
between the component directions.
The activity profile across 90 MT units showed two peaks corresponding to the two stimulus
configurations in which one of the directions moved in the neuron’s preferred direction. The
profile can be well fit by the sum of two Gaussians, enabling a quantitative comparison of
neuronal responses for the attended versus the sensory condition. The fitted tuning curves
showed an average increase of 52% around the peak where the preferred direction was
attended relative to the sensory condition. For the peak corresponding to the condition when
the preferred direction was unattended, we observed an average suppression of 5%. Neither
of the fitted individual Gaussians showed a change in tuning width. Our results, supported by
preliminary numerical modeling, show that attending to one surface in a transparent motion
stimulus causes a direction-dependent modulation of the population response in MT,
representing the neural correlate of attentional allocation to an individual surface.
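The quantitative comparison described above can be reproduced in outline with a sum-of-two-Gaussians fit. The following is a minimal sketch on synthetic rates (illustrative only; the shared-width parameterization is our simplification, not necessarily the authors' exact one):

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(theta, a1, a2, mu, sigma, base):
        # Two Gaussian peaks 120 degrees apart with shared width and offset.
        d1 = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(theta - mu))))
        d2 = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(theta - mu - 120.0))))
        return (base + a1 * np.exp(-0.5 * (d1 / sigma) ** 2)
                     + a2 * np.exp(-0.5 * (d2 / sigma) ** 2))

    rng = np.random.default_rng(1)
    theta = np.arange(0.0, 360.0, 15.0)   # overall pattern direction (deg)
    rates = rng.poisson(two_gaussians(theta, 30, 25, 40, 35, 5)).astype(float)

    popt, _ = curve_fit(two_gaussians, theta, rates, p0=[20, 20, 0, 40, 5])
    print(dict(zip(["a1", "a2", "mu", "sigma", "base"], popt)))

Fitting the attended and sensory conditions separately and comparing a1 and a2 yields the peak-wise modulation estimates (e.g. the 52% enhancement and 5% suppression), while comparing sigma tests for changes in tuning width.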
Acknowledgements:
The project was supported by the Volkswagen Foundation (grant I/79868) and by grant
01GQ0433 from the Federal Ministry of Education and Research to the Bernstein Center for
Computational Neuroscience Goettingen.
T74
Pinwheel crystallization in models of visual cortical development
Lars Reichl*3, Siegrid Löwel2, Fred Wolf31
1 Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
2 Institute of General Zoology and Animal Physiology, Friedrich Schiller University, Jena,
Germany
3 Max-Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
* [email protected]
The formation of orientation preference maps during the development of the visual cortex is
sensitive to visual experience and impulse activity [1]. In models for the activity-dependent
development of these maps, orientation pinwheels initially form in large numbers but
subsequently decay during continued refinement of the spatial pattern of cortical selectivities
[2]. One attractive hypothesis for the developmental stabilization of orientation pinwheels
states that the geometric relationships between different maps, such as the tendency of iso-orientation domains to intersect ocular dominance borders at right angles, can prevent
extensive orientation map rearrangement and pinwheel decay [2].
Here we present an analytically tractable model for the coupled development of orientation
and ocular dominance maps in the visual cortex. Stationary solutions of this model and their
dynamical stability are examined by weakly nonlinear analysis. We find three different basic
solutions: pinwheel-free orientation stripes, and rhombic and hexagonal pinwheel crystals
locked to a hexagonal pattern of ipsilateral eye domains. Using amplitude equations for
these patterns, we calculate the complete stability diagram of the model. In addition, we
study the kinetics of pinwheel annihilation or preservation using direct numerical simulations
of the model in model cortical areas encompassing several hundred orientation
hypercolumns. When left and right eye representations are symmetrical, inter-map coupling
per se is not capable of stabilizing pinwheels in this model. However, when the
overrepresentation of the contralateral eye exceeds a critical value inter-map coupling can
stabilize hexagonal or rhombic arrays of orientation pinwheels. In this regime, we find a
transition from a dominance of low pinwheel density states to high density states with
increasing strength of inter-map coupling. We find that pinwheel stabilization by inter-map
coupling and contralateral eye dominance leads to the formation of perfectly repetitive
crystalline geometrical arrangements of pinwheel centers.
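The phenomenology of pinwheel formation and decay can be illustrated with a minimal sketch of a Swift-Hohenberg-type dynamics for the complex orientation-map field alone (our stand-in illustration, without the ocular dominance field or the inter-map coupling analyzed above):

    import numpy as np

    # dz/dt = (r - (k_c^2 + Laplacian)^2) z - |z|^2 z, integrated spectrally
    # with a semi-implicit Euler step; k_c = 1, domain spans ~8 hypercolumns.
    n, L = 128, 16.0 * np.pi
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k)
    lin = 0.1 - (1.0 - (kx ** 2 + ky ** 2)) ** 2    # linear growth spectrum

    rng = np.random.default_rng(0)
    z = 0.01 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

    dt = 0.1
    for _ in range(2000):
        nl = np.fft.fft2(-np.abs(z) ** 2 * z)       # cubic saturation term
        z = np.fft.ifft2((np.fft.fft2(z) + dt * nl) / (1.0 - dt * lin))

    # Orientation preference is angle(z)/2; pinwheels are the zeros of z.
    # Counting the zeros over time exhibits the pinwheel decay that the
    # inter-map coupling in the full model must counteract.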
References:
[1] White & Fitzpatrick, Neuron, 2007
[2] Wolf & Geisel, Nature, 1998
T75
Cellular contribution to vestibular signal processing - a modeling
approach
Christian Rössert*21, Hans Straka3, Stefan Glasauer1
1 Bernstein Center for Computational Neuroscience Munich, Munich, Germany
2 Institute for Clinical Neurosciences, Ludwig-Maximilians-Universität, Munich, Germany
3 Laboratoire de Neurobiologie des Réseaux Sensorimoteurs, Centre National de la
Recherche Scientifique, Université Paris Descartes, Paris, France
* [email protected]
Computational modeling of the vestibulo-ocular circuitry is essential for understanding the
sensory-motor transformation that generates spatially and dynamically appropriate
compensatory eye movements during self-motion. Central vestibular neurons in the
brainstem are responsible for the major computational step that transforms head
acceleration-related sensory vestibular signals into extraocular motor commands that cause
compensatory eye motion for gaze stabilization. In frog, second-order vestibular neurons
(2°VN) separate into two functional subgroups (tonic - phasic neurons) that distinctly differ in
their intrinsic membrane properties and discharge characteristics. While tonic 2°VN exhibit a
continuous discharge in response to positive current steps, phasic 2°VN display a brief,
high-frequency burst of spikes but no continuous discharge, corresponding to class 1 and
class 3 excitability, respectively. Based on the dynamics of sinusoidally modulated changes
of the membrane potential, tonic 2°VN show low-pass filter-like response properties,
whereas phasic 2°VN have band-pass filter-like characteristics. Correlated with these
cellular properties, tonic and phasic 2°VN exhibit pronounced differences in subthreshold
response dynamics and discharge kinetics during synaptic activation of individual
labyrinthine nerve branches with sinusoidally modulated trains of single electrical pulses.
Physio-pharmacological analyses indicated that the two types of 2°VN are differentially
embedded into local inhibitory circuits that reinforce the cellular properties of these neurons,
respectively, thus indicating a co-adaptation of intrinsic membrane and emerging network
properties in the two neuronal subtypes. The channel mechanisms responsible for the
different discharge characteristics of the two neuronal subtypes were revealed by a
frequency-domain analysis in the subthreshold domain: tonic 2°VN exhibit an increasing
impedance with membrane depolarization which likely results from an activation of persistent
sodium currents, while phasic 2°VN show a decreasing impedance and increasing
resonance with membrane depolarization due to the activation of low-threshold, voltage-dependent ID-type potassium channels.
These results also revealed the channel mechanisms necessary to build spiking multi-compartment models. By extending these models with conductance-based synapses that
simulate the corresponding activation and inhibition it was possible to reproduce the distinct
firing behavior of the two neuronal subtypes during intracellular and synaptic activation,
respectively. By modifying different components of the intrinsic cellular or the synaptic circuit
properties it is now possible to determine the relative contributions of membrane and
network properties for vestibular signal processing. Selective modifications of different
neuronal circuit components or particular properties of ion channel conductances in the
model allow predicting how eco-physiological or patho-physiological changes
affect vestibular signal processing and how cellular and network mechanisms might
compensate for induced alterations.
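The low-pass versus band-pass distinction in the subthreshold domain can be illustrated with a minimal sketch (illustrative parameters, not fitted to frog 2°VN): a passive membrane plus one slow, depolarization-activated potassium-like current yields a resonant (phasic-like) impedance profile, and removing that current leaves the low-pass (tonic-like) case:

    import numpy as np

    C, gL, EL = 0.2, 0.01, -70.0           # nF, uS, mV
    gK, EK, tauK = 0.03, -90.0, 30.0       # uS, mV, ms (slow resonant current)

    dt, T = 0.05, 5000.0                   # ms
    t = np.arange(0.0, T, dt)
    I = 0.01 * np.sin(2.0 * np.pi * (0.5 + 20.0 * t / T) * t / 1000.0)  # chirp (nA)

    def impedance(g_res):
        V, w = EL, 0.0
        Vs = np.empty_like(t)
        for i in range(t.size):
            # Slow gating variable relaxes toward its sigmoidal steady state.
            w += dt * ((1.0 / (1.0 + np.exp(-(V + 60.0) / 6.0)) - w) / tauK)
            V += dt / C * (I[i] - gL * (V - EL) - g_res * w * (V - EK))
            Vs[i] = V
        Z = np.fft.rfft(Vs - Vs.mean()) / np.fft.rfft(I - I.mean())
        return np.fft.rfftfreq(t.size, dt / 1000.0), np.abs(Z)

    f, Z_phasic = impedance(gK)   # band-pass (resonant) impedance profile
    f, Z_tonic = impedance(0.0)   # low-pass impedance profile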
Acknowledgements:
Supported by Bayerische Forschungsstiftung (C.R.) and Bundesministerium für Bildung und
Forschung (BCCN 01GQ0440).
T76
Modeling the influence of spatial attention on visual receptive
fields
Henning Schroll3, Jérémy Fix1, Marc Zirnsak3, Thilo Womelsdorf2, Stefan Treue4
1 Department of Computer Science, Technical University Chemnitz, Chemnitz, Germany
2 Department of Physiology, Pharmacology & Psychology, University of Western Ontario,
London, Canada
3 Department of Psychology, Westfälische Wilhelms-University, Münster, Germany
4 German primate center, Göttingen, Germany
* [email protected]
Voluntary spatial attention has been shown to significantly modulate the properties of visual
receptive fields (vRFs). Womelsdorf et al. [1] recently reported that vRFs in macaque cortical
area MT show an average shift towards attentional targets, accompanied by a small amount
of shrinkage, as measured by single cell recordings. Considerable variability between
different MT cells regarding both the direction of vRF shift and changes in vRF size raises
the question of which factors influence these properties.
By application and extension of a neuroanatomically plausible computational model,
originally developed to explain characteristics of the perisaccadic perception of objects [2,3],
we provide a better understanding of the factors that influence vRF dynamics. The model
assumes a layer of gain modulated cells, distributed according to cortical magnification and
subject to attentional modulation. Interactions between the cells are realized by lateral
inhibition. Pool cells integrate their responses through a max function, providing measures of
response that we compared to experimental cell recordings.
The resulting model fit is comparable to the fit of a simplified attentional gain model that
relies on a multiplication of two Gaussians [4]. Thus, the more realistic properties of our
model do not improve the fit. However, we propose a modified experimental design that
allows for revealing differences between those models – for example by placing the center of
attention into the periphery of the vRF. Moreover, we show that our model predicts
systematic variations in the direction of vRF shift, dependent on the vRF center and the
center of attention, rather than a direct shift towards the attended location.
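For reference, the simplified gain model [4] against which our model is compared can be sketched in a few lines (our illustration with hypothetical numbers): multiplying a Gaussian vRF profile by a Gaussian attentional gain shifts the measured vRF center toward the attended location:

    import numpy as np

    x = np.linspace(-20.0, 20.0, 2001)                        # visual space (deg)
    rf = np.exp(-0.5 * ((x - 0.0) / 4.0) ** 2)                # unattended vRF at 0 deg
    gain = 1.0 + 2.0 * np.exp(-0.5 * ((x - 6.0) / 5.0) ** 2)  # attention at 6 deg
    attended = rf * gain

    def center(profile):
        return np.sum(x * profile) / np.sum(profile)          # center of mass

    print("shift toward attention (deg):", center(attended) - center(rf))

In the full model described above, it is the lateral inhibition, cortical magnification and max-pooling stages that make the predicted shift direction depend on the relative positions of the vRF center and the attentional focus.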
References:
[1] Womelsdorf et al. 2006. Nat. Neurosci., 9(9), 1156-1160.
[2] Hamker 2005. Cereb. Cortex., 15, 431-447.
[3] Hamker et al. 2008. PLOS Comp. Biol., 4(2), e31.
[4] Womelsdorf et al. 2008. J. Neurosci., 28(36), 8934-8944.
T77
Pattern mining in spontaneous background activity from the
honeybee antennal lobe
Martin Strauch*1, Julia Rein1, C. Giovanni Galizia1
1 Department of Neurobiology, University of Konstanz, Konstanz, Germany
* [email protected]
The honeybee antennal lobe, a structural analog of the vertebrate olfactory bulb, is a neural
circuit dedicated to the representation and processing of odorant stimuli. It receives input
from more than 60000 olfactory receptor neurons that converge onto 160 functional subunits
called olfactory glomeruli. A dense network of more than 4000 intra- and interglomerular
local neurons putatively performs processing such as contrast enhancement or
normalisation over concentration ranges. Projection neurons relay the processed information
to higher order brain centers. As a first approach, a modeling study [1] has suggested a
network topology based on the connection of functionally correlated glomeruli, rather than a
purely lateral connectivity.
In order to obtain a more detailed picture of network connectivity, we set out to analyse
spontaneous background activity in antennal lobe projection neurons. Previous findings
suggest that global application of octopamine, a neurotransmitter involved in olfactory
learning, increases both the mean and the variance of spontaneous activity in all glomeruli
[2]. Comparing spontaneous activity in octopamine-treated and untreated animals, we aim
to uncover network effects that arise from inhibition or excitation of connected glomeruli
through increased activity.
Extending our previous ICA-based approach for separating glomerular signals in antennal
lobe recordings [3], we have developed a pattern mining method for the automated
extraction of glomerular activity patterns. Comparing the pattern decomposition of treated
and untreated spontaneous activity will be a useful tool for uncovering treatment-induced
effects.
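The signal-separation step underlying this approach can be sketched on synthetic data (our illustration, not the actual recording pipeline of [3]):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_t, n_px, n_glom = 2000, 400, 5
    S = rng.gamma(0.3, 1.0, size=(n_t, n_glom))        # sparse glomerular activity
    A = np.abs(rng.standard_normal((n_glom, n_px)))    # spatial footprints
    movie = S @ A + 0.1 * rng.standard_normal((n_t, n_px))

    ica = FastICA(n_components=n_glom, random_state=0)
    sources = ica.fit_transform(movie)  # glomerular time courses (up to order/sign)

Recurring co-activation patterns across the recovered time courses are then the input to the pattern mining stage.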
References:
[1] Christiane Linster, Silke Sachse, C. Giovanni Galizia: "Computational modeling suggests
that response properties rather than spatial position determine connectivity between
olfactory glomeruli.", J Neurophysiol, Vol 93, pp 3410-3417, 2005
[2] Julia Rein, Martin Strauch, C. Giovanni Galizia: "Novel techniques for the exploration of
the honeybee antennal lobe.", poster abstract in Neuroforum Feb 2009(1) Vol.XV,
Supplement p. 958, 8th Meeting German Neuroscience Soc., Göttingen, Germany, Mar
25-29, 2009
[3] Martin Strauch, C. Giovanni Galizia: "Registration to a neuroanatomical reference atlas - identifying glomeruli in optical recordings of the honeybee brain.", Lecture Notes in Informatics, P-136, pp. 85-95, 2008
T78
Developing a model of visuomotor coordination involved in
copying a pair of intersecting lines
Pittala Vednath*1, V. Srinivasa Chakravarthy1
1 Indian Institute of Technology, Madras, India
* [email protected]
Copying line diagrams involves transforming a static image into dynamic movements of a
hand-held writing device. There is an inherent ambiguity in this mapping because a line
segment of a given orientation can be drawn in two opposite directions (e.g., left to right or
right to left), while preserving orientation. One way to ameliorate this ambiguity is to bind
orientation with direction so that a line of a given orientation is always drawn in a given
direction. But even then there must exist at least one angle where the pairing between
direction and orientation is violated. How do humans cope with this ambiguity?
Our earlier experiments with human subjects drawing single oriented lines revealed that 1)
there is a systematic “two-peak” error in orientation, 2) there is a sudden jump in
direction as writers copy lines of continuously varying orientation, and 3) there is a hysteresis
effect when copying lines of increasing and then decreasing orientation. All these effects were
captured accurately by a phase dynamic model in which the input orientation (spatial phase)
and output orientation are modeled as the temporal phases of oscillators.
The present paper extends the previous work to that of copying a pair of intersecting lines,
which involves a greater ambiguity since a pair of lines can be drawn in four different ways.
We collected data from human subjects copying a pair of symmetrically intersecting lines.
Here too, 1) systematic orientation errors, 2) flipping behavior, and 3) hysteresis effects were
observed. These data were modeled using two architectures consisting of three and four
oscillators, respectively. The three-oscillator system corresponds to tight coupling between
the line 1 and line 2 dynamics in the model, whereas the four-oscillator system represents
loose coupling. Coupling coefficients among oscillators are set by minimizing orientation
error. The four-oscillator system is found to model the shape of the orientation error profile
and the hysteresis profile more closely.
A novel aspect of the model is to represent spatial angular quantities like orientation as
temporal phases. Usually oscillatory models are used only to describe oscillatory
movements. It is noteworthy that the dynamics of the visuomotor system in this task can be
captured by simple oscillator equations, and the model that was used for copying a single
line is naturally extendible to a pair of lines. The ultimate goal of the present work is to see if
the phase variables of the present model can be related to phases, in specific bands, of
EEG measured over visuospatial (for orientation) and motor (for direction) regions of the brain.
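One way such phase dynamics can produce directional flips and hysteresis is sketched below (our illustration, not the authors' exact equations): the drawn direction psi is attracted both to a preferred direction psi0 and, via a period-pi coupling, to either of the two directions consistent with the input orientation theta; sweeping theta up and then down makes the direction flip at different angles:

    import numpy as np

    a, K, dt, psi0 = 1.0, 0.4, 0.02, 0.0

    def settle(theta, psi):
        # Relax the output phase under the combined attraction terms.
        for _ in range(3000):
            psi += dt * (a * np.sin(psi0 - psi) + K * np.sin(2.0 * (theta - psi)))
        return psi

    up = np.linspace(0.0, np.pi, 91)
    sweep = np.concatenate([up, up[::-1]])    # orientation swept up, then down
    psi, trace = 0.0, []
    for theta in sweep:
        psi = settle(theta, psi)
        trace.append((theta, psi))
    # Plotting psi against theta for the two sweep directions reveals the
    # hysteresis loop and the sudden directional jumps.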
References:
Athènes, S., Sallagoïty, I., Zanone, P.-G. & Albaret, J.-M. (2003). Universal features of
handwriting: Towards a non-linear model. Proceedings of the 11th Conference of the
International Graphonomics Society (IGS2003), 46-49.
Dubey, S., Sambaraju, S., Cautha, S.C., & Chakravarthy, V.S. (2007). The Enigmatic Two-Peak Orientation Error Structure in Copying Simple Line Diagrams. Proc. of the 13th
International Graphonomics Society conference, Melbourne, Australia.
T79
The influence of attention on the representation of speed changes
in macaque area MT
Detlef Wegener*1, Orlando Galashan1, Hanna Rempel1, Andreas K Kreiter1
1 Brain Research Institute, Department of Theoretical Neurobiology, University of Bremen,
Bremen, Germany
* [email protected]
Neuronal processing of visual information strongly depends on selective attention. On the
single cell level, attentional modulation is usually studied by training awake animals to
selectively attend a target item and to report a change in one of its features, or the
reoccurrence of the item in a series of stimulus presentations. Data obtained by these tasks
have convincingly shown an attention-dependent modulation of basic neuronal response
patterns during the period the animal attended the item, i.e. prior to the event that had to be
reported. However, from a behavioral point of view, the feature change itself is most crucial
for solving the task. Therefore, we were interested in the question of how this feature change is
represented in neuronal activity and whether it is influenced by attention in a similar or
different manner as during the continuous representation of the object prior to the change. In
a first experiment, two monkeys were trained on a motion discrimination task in which two
bars were presented that could each undergo a change in velocity at pseudo-random times,
but only the change of the pre-cued bar was task-relevant. PSTHs of responses to both
behaviorally relevant and irrelevant objects show a firing-rate increase of approximately
equal strength shortly after the speed-up of the bar, reaching a maximum at about 250 ms after
acceleration. However, for non-attended bars activity then falls back to roughly its
value before acceleration, whereas for attended bars the enhanced firing rates are sustained (see
supplementary Fig. 1). During this ongoing period the attention-dependent difference in firing
rate reached values considerably larger than before acceleration - for both monkeys we
found an increase in the Attention Index by about 40%. This attention-dependent difference
in response to feature changes was only found in successful trials, but was absent in trials in
which the monkeys missed the speed change. In a second experiment, two more monkeys
were trained on a speed change detection paradigm in which we used moving Gabor stimuli
instead of bars. The Gabors were accelerated by 50% or 100%, or decelerated by 50%.
Monkeys had to detect the speed change of the target Gabor and to ignore any change on a
simultaneously presented distracter stimulus and had to keep fixation for an additional
500 ms after they responded. For the majority of neurons recorded so far, preliminary analysis
of the data suggests that the post-response period can be subdivided into two phases: a
transient phase that is closely related to the velocity tuning of the neuron, and a sustained
phase that is strongly correlated with the response of the animal, but only weakly correlated
with the changed speed.
Thus, the results of the two experiments suggest that attention “tunes” the neurons for the
representation of the behaviorally relevant speed change which is then followed by a
“detection” signal that is transferred to postsynaptic targets.
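The attention index used above is AI = (R_att - R_unatt) / (R_att + R_unatt); a toy computation with hypothetical rates (chosen only to illustrate a ~40% AI increase, not actual data) shows the logic:

    def attention_index(r_att, r_unatt):
        # Contrast between rates for attended vs. unattended stimuli.
        return (r_att - r_unatt) / (r_att + r_unatt)

    pre = attention_index(25.0, 15.0)     # hypothetical rates before the speed-up
    post = attention_index(27.0, 13.0)    # hypothetical rates, sustained period
    print(pre, post, (post - pre) / pre)  # 0.25, 0.35, 0.4 (a 40% increase)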
T80
Spatial spread of the local field potential in the macaque visual
cortex
Dajun Xing1, Chun-I Yeh1, Robert M. Shapley1
1 Center for Neural Science, New York University, New York, USA
* [email protected]
The number of studies on the local field potential (LFP) has increased dramatically in recent
years. However, how local the LFP actually is remains unclear, and
estimates of its cortical spread in different studies have varied from 100 µm to 3000 µm.
Here, we provide a novel method to precisely estimate the cortical spread of the LFP in
Macaque primary visual cortex (V1) by taking advantage of the retinotopic map in V1. We
mapped multi-unit activity (MUA) and LFP visual responses with sparse-noise at several
cortical sites simultaneously with a Thomas 7-electrode system. The cortical magnification
factor near the recording sites was precisely estimated by track reconstruction. The
experimental measurements not only let us directly compare the visual responses of the LFP
and MUA, but also enabled us to obtain the cortical spread of the LFP at different cortical
depths in V1 by a model of signal summation. We found that V1's LFP was the sum of
signals from a very local region, on average 250 µm in radius; the spatial spread reaches a
minimum value of 120 µm in layer 4B. For the first time, we demonstrate that the cortical spread
of the LFP varies with cortical depth in V1. The spatial scale of the visual responses, the
cortical spread, and their laminar variation led to new insights about the sources and utility of
the LFP. The novel method provided here is also suitable for studying the properties of the
LFP in other cortical areas that have a topographic map, such as S1 or A1.
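The summation model at the core of the method can be sketched as follows (our illustration of the idea, not the exact fitting procedure): if the LFP pools local activity with a Gaussian cortical weight of radius sigma, its visually mapped response profile is the MUA profile blurred by sigma converted into visual degrees through the magnification factor:

    import numpy as np

    x = np.linspace(-3.0, 3.0, 601)          # visual space (deg)
    mua = np.exp(-0.5 * (x / 0.3) ** 2)      # hypothetical MUA point image
    M = 2.0                                  # magnification (mm cortex per deg)
    sigma_cortex = 0.25                      # candidate LFP spread (mm)
    sigma_deg = sigma_cortex / M             # spread expressed in visual degrees

    g = np.exp(-0.5 * (x / sigma_deg) ** 2)
    lfp = np.convolve(mua, g / g.sum(), mode="same")
    # Fitting sigma_cortex then amounts to matching the width of lfp to the
    # measured LFP response profile at each cortical depth.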
Acknowledgements:
This work was supported by grants from the US National Institutes of Health (T32 EY-07158
and R01 EY-01472) and the US National Science Foundation (grant 0745253), and by
fellowships from the Swartz Foundation and the Robert Leet and Clara Guthrie Patterson
Trust.
Demonstrations
D1
Embodied robot simulation: learning action-outcome associations
Rufino Bolado-Gomez1, Jonathan M. Chambers1, Kevin Gurney1
1 Adaptive Behaviour Research Group, Department of Psychology, University of Sheffield,
Western Bank, UK
* {r.bolado, j.m.chambers, k.gurney}@sheffield.ac.uk
This demonstration will consist of a short video (video A, approx. 30 seconds) showing an
embodied robot simulation in which a Khepera robot is free to explore a simple environment
containing interactive objects (see Fig. 1) and to learn action-outcome associations. The virtual
world and robot agent are implemented using Webots v6.1.0 simulator software utilizing its
Open Dynamics Engine (ODE) features. The simulation control architecture depends mainly on
two sub-systems: the ‘biomimetic’ network, which corresponds to the biologically plausible
‘extended basal ganglia learning model’ (see supplementary material Fig. 1 in [1]), and the
‘embedding architecture’ (engineering interface). The former sub-system accounts for the
simulated neuronal substrate that solves the action-selection problem and simulates cortico-striatal plasticity driving the ‘repetition-bias’ hypothesis. The latter sub-system allows the
neuronal substrate to communicate with the virtual world in order to access pre-processed
inputs (sensory information) and send out post-processed outputs (motor commands).

Figure 1: The virtual environment and the Khepera-I robot used for the embodied simulation.
(a) A 148 x 148 cm arena surrounded by four blue walls (related to A1, water foraging). It
contains three differently colored cubes, green, red and white, which are associated with A2
(food foraging), A3 (investigate the red cube; key action), and A4 (investigate the white cube;
control action), respectively. (b) Close-up of the Khepera-I differential-wheels robot, showing
in more detail the sensors used to receive information from the outside world. By default it
has eight on-board infrared sensors used for obstacle proximity detection; an RGB camera to
see the world and a binary touch sensor (bumper; 1 if touching, else 0) were added to allow
the robot to bump into objects and thereby modify the environment. Abbreviations: R, red;
G, green; B, blue (labels for relating the text to a black-and-white version of the color image).
In general, the robot interacts with the world by selecting among four competing actions:
A1, water foraging (search & bump against blue walls); A2, food foraging (search & bump
against green objects); A3, the key action, investigate (bump twice against) the red cube;
and A4, the control action, investigate (bump twice against) the white cube. The robot's
investigative behavior, bumping twice against an object, is interpreted as a ‘fixed action
pattern’ (FAP). To implement this behavioral sequence, the simulation includes a time-varying
motivational sub-system with two state variables, ‘robot-is-hungry’ and ‘robot-is-thirsty’.
The salience dynamics of A1 and A2 are therefore based on these two motivational states,
whereas the sensory saliences of A3 and A4 are built on novel sensory events. A3 is the key
action, associated with an unpredicted stimulus (phasic light, and hence a phasic dopamine
signal), which presents the robot with the action-outcome learning paradigm. A4 (white cube)
serves as the control procedure; it plays the same role as A3 (red cube) in almost all respects,
differing only in that it is not associated with a novel event (no phasic light). As a result, the
robot should not enter a ‘doing-it-again’ mode because there is no stimulus to predict. In
addition, two complementary videos show the embodied robot simulation working with the
following differences. In video B, the cortico-striatal learning rule is replaced with one that
lacks the renormalization process and therefore does not satisfy the ‘repetition-bias’
hypothesis. Video C shows the implications of exposing the embodied robot simulation to an
aversive outcome situation, in which the unpredicted sensory event (red-cube phasic light)
causes a transient ‘dip’ in dopamine instead of a dopamine burst.
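The learning principle at work can be caricatured in a few lines (a loose sketch under our own assumptions, not the extended basal ganglia model of [1]): a phasic dopamine signal gates a Hebbian cortico-striatal update, biasing the robot toward selecting the rewarded action again:

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.full(4, 0.5)                      # one weight per action A1..A4
    alpha = 0.1

    for trial in range(100):
        salience = rng.uniform(0.5, 1.0, 4)  # stand-in motivational/sensory saliences
        a = int(np.argmax(w * salience + 0.01 * rng.standard_normal(4)))
        dopamine = 1.0 if a == 2 else 0.0    # only A3 (index 2) yields the phasic light
        # Three-factor update: presynaptic salience x selection x dopamine.
        w[a] += alpha * dopamine * salience[a] * (1.0 - w[a])
        w /= max(1.0, w.sum() / 2.0)         # crude renormalization (our assumption)

    print(w)  # the A3 weight grows: a 'doing-it-again' bias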
References:
[1] Rufino Bolado-Gomez, Jonathan M. Chambers, Kevin Gurney, Poster T35: “The basal
ganglia and the 3-factor learning rule: reinforcement learning during operant conditioning”,
page 154.
D2
Balancing pencils using spike-based vision sensors
Jörg Conradt1, Raphael Berner1, Patrick Lichtsteiner1, Rodney Douglas1, Tobi Delbruck1,
Matthew Cook1
1 Institute of Neuroinformatics, UZH-ETH Zürich, Switzerland
* {conradt, raphael, patrick, rjd, tobi, cook}@ini.phys.ethz.ch
Description:
Figure 2: Photo of the balancer hardware: two Dynamic Vision Sensors (DVS, top center and
top right) and the motion table (center left). The system can balance all objects shown at the
bottom without modification of parameters.

Animals by far outperform current technology when reacting to visual stimuli with low
processing requirements, demonstrating astonishingly fast reaction times to changes.
Current real-time vision-based robotic control approaches, in contrast, typically require high
computational resources to extract relevant information from sequences of images provided
by a video camera. Most of the information contained in consecutive images is redundant,
which often turns the vision processing algorithms into a limiting factor in high-speed robot
control. As an example,
robotic pole balancing with large objects is a well-known exercise in current robotics
research, but balancing arbitrarily small poles (such as a pencil, which is too small for a
human to balance) has not yet been achieved due to limitations in vision processing. At the
Institute of Neuroinformatics we have developed an analog silicon retina
(http://siliconretina.ini.uzh.ch), which, in contrast to current video cameras, only reports
individual events ("spikes") from individual pixels when the illumination changes within a
pixel's field of view. Transmitting only the "on" and "off" spike events, instead of transmitting
full vision frames, drastically reduces the amount of data processing required to react to
environmental changes. This information encoding is directly inspired by the spike based
information transfer from the human eye to visual cortex. In our demonstration, we address
the challenging problem of balancing an arbitrary standard pencil, based solely on visual
information. A stereo pair of silicon retinas reports vision events caused by the moving
pencil, which is standing on its tip on an actuated table. Then our processing algorithm
extracts the pencil position and angle without ever using a "full scene" visual representation,
but simply by processing only the spikes relevant to the pencil's motion. Our system uses
neurally inspired hardware and a neurally inspired form of communication to achieve a
difficult goal; hence, it is truly a demo for an audience with an interest in computational
neuroscience. A video showing the system’s performance is available at:
http://www.ini.uzh.ch/~conradt/PencilBalancer.
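The event-driven style of computation can be conveyed with a toy tracker (our illustration, not the algorithm running on the hardware): every DVS event nudges a running line estimate for the pencil, so the state is updated per spike and no full frame is ever assembled:

    # Each event (x, y) incrementally updates the line x = a*y + b.
    a_est, b_est = 0.0, 64.0     # line parameters (pixel units)
    eta = 0.01                   # per-event learning rate

    def on_event(x, y):
        global a_est, b_est
        err = x - (a_est * y + b_est)     # horizontal distance to current line
        a_est += eta * err * y / 128.0    # gradient-style per-event update
        b_est += eta * err

    for x, y in [(60, 10), (62, 40), (66, 90), (70, 120)]:  # a few mock events
        on_event(x, y)
    print(a_est, b_est)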
Setup:
The demonstration is almost self-explanatory and captivating. It shows the power of
combining high-performance spike-based sensors with conventional digital computation. It
has been running in our lab for about a year and we regularly show it to visitors. The
demonstrator takes an ordinary pencil or pen or other small rod-like object and asks a
member of the audience to try to balance it on their hand. So far, no visitor to our lab
managed to balance a pencil on its tip. Some people can balance larger screwdrivers, but
only by wild movements of their hand that cover half a meter. The demonstrator then takes
back the pencil and puts its tip into the rubber-cup ‘hand’ of the balancer table. After the X
and Y trackers capture the pencil location, the demonstrator lets go of the pencil and the
system starts to balance. During balancing the table is very active and oscillates with
frequencies of many Hz. A puff of air perturbs the pencil and the balancer responds by
quickly moving its hand to bring the pencil back into balance. Slowly moving the table (which
changes the background seen by each vision sensor) usually does not perturb the
balancing. This balancer demonstration only requires a table and a 110-250 V power source.
We will bring along a laptop and the demonstrator hardware of size 400 x 400 x 300 mm. If we
require more light we will purchase a small table lamp locally.
D3
yArbor: performing axon guidance simulations
Rui P. Costa*1
1 Center for Informatics and Systems, University of Coimbra, Coimbra, Portugal
* [email protected]
In this demonstration an axon guidance simulator named yArbor will be introduced. This
simulator is based on neuroscience knowledge and offers an accessible way of performing
axon guidance simulations in three dimensions. In order to study an axon guidance system
in this simulator, several stages must be completed:
1. Incorporate the data already known from neurobiology
   1.1 Add the ligands and receptors
   1.2 Add guidance cues based on the ligands and receptors
   1.3 Define the regulatory network between receptors and proteins
   1.4 Define the topographic map
2. Load a three-dimensional model of the system (e.g. the midline)
3. Define the computational model
   3.1 Include elements (neurons and/or glial cells)
   3.2 Define the content (e.g. receptors and ligands) and position of each element
   3.3 Activate or inactivate mechanisms (e.g. axonal transport and growth cone adaptation)
4. Simulate the system and visualize it in three dimensions
5. Define the plots to be drawn
6. Study the results obtained
This simulator allows the researcher to easily change parameters and observe their
effects. The simulations can then be used to guide in vivo or in vitro experiments. During this
demonstration, midline crossing in Drosophila will be used as a case study. Finally,
some preliminary axon guidance experiments in the optic pathway will also be presented.
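To give a flavor of what such simulations compute, here is a minimal stand-alone sketch of growth-cone chemotaxis (our illustration, not yArbor code): the growth cone steps through hypothetical 3D attractant and repellent fields by following their gradients:

    import numpy as np

    def attractant(p):    # hypothetical target-derived ligand field
        return np.exp(-np.linalg.norm(p - np.array([50.0, 0.0, 0.0])) / 20.0)

    def repellent(p):     # hypothetical midline-derived repellent field
        return np.exp(-np.linalg.norm(p - np.array([25.0, 10.0, 0.0])) / 10.0)

    def grad(f, p, h=0.5):
        # Central-difference gradient of a scalar field at point p.
        e = np.eye(3) * h
        return np.array([(f(p + e[i]) - f(p - e[i])) / (2.0 * h) for i in range(3)])

    pos, path = np.zeros(3), []
    for _ in range(200):
        g = grad(attractant, pos) - 0.5 * grad(repellent, pos)
        pos = pos + g / (np.linalg.norm(g) + 1e-9)   # unit step along the net cue
        path.append(pos.copy())
    # 'path' traces the simulated axon trajectory for 3D visualization.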
D4
Foveated vision with a FPGA camera
Georgi Tushev1, Ming Liu1, Daniela Pamplona*1, Jörg Bornschein1, Cornelius Weber1,
Jochen Triesch1
1 Frankfurt Institute for Advanced Studies, Frankfurt, Germany
* [email protected]
The human retina pre-processes visual information before sending it to the brain. It samples
and filters the signal across several layers, resulting in more acuity in the fovea than in the
periphery of the field of view. This is mainly due to the non-regular distribution of the cone
photoreceptors and the ganglion cells: their concentration is high in the fovea and decreases
approximately logarithmically with the distance from the fovea. The difference-of-Gaussians
shaped receptive fields of the ganglion cells also denoise the information and reduce
redundancies. This transformation is the biological way of dealing with limited processing
resources: it guarantees high resolution at the gaze fixation point and a large field of view.
Artificial visual systems have to deal with redundancies and limited resources as well. In
many tasks, processing in the periphery of the field of view is unnecessary and costly.
Consequently, a component reproducing the foveation process saves time and energy,
which is crucial in real time platforms. A real time software implementation on a sequential
processor would demand higher clock rates, more temporary memory and more bandwidth,
causing both more energy dissipation and higher hardware requirements. Therefore, in our
project, we simulate the processing of the ganglion cells on a Field-Programmable Gate
Array (FPGA) camera. The resulting information will be less noisy and compressed
compared with the original constant-resolution image, giving rise to a fast and efficient
system. Using such a smart platform gives us the advantage of accurate, real-time image
processing on a low technical level, minimizing software and hardware demands. When the
image vector is acquired by the camera’s sensor, it is immediately multiplied by a matrix, of
which the rows represent the receptive fields of the ganglion cells and, finally, the foveated
image is output. Our smart camera platform consists of three components: an embedded
processor running Linux, an FPGA processing board and a 5 Mega Pixel Complementary
Metal-Oxide-Semiconductor (CMOS) image sensor. The embedded processor is used for
general system orchestration, handles the network connection and is responsible for
configuring the camera system. The FPGA processing board consists of a Xilinx Spartan
1200 FPGA and a 64 MByte DRAM memory chip. It receives a continuous stream of raw
pixel data from the sensor and hands over chunks of processed image data to the
embedded processor. First, the raw image data is retrieved from the sensor and passed
toward the FPGA chip. There the foveated vision algorithm compresses the information and
rearranges the pixels into a suitable foveated image. This image is transferred, via a network
application, outside the camera to a remote computer. We use Xilinx ISE to program the
FPGA and Icarus/GTKWave to simulate the Verilog code. Our demonstration will show the
FPGA camera in action: it captures and transforms the image into a foveated and filtered
image, and then sends it to a computer screen for display. If the FPGA
implementation is not available, a software-based prototype implementation will be shown.
We plan to extend our work to a stereo active vision system, with foveation in both FPGA
cameras. In this system, we plan to study learning of visual representations and gaze
control.
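The foveation-by-matrix step can be prototyped off-line in a few lines (our software sketch, not the Verilog implementation): each row of W is a difference-of-Gaussians receptive field whose size grows with eccentricity, and foveation is a single matrix-vector product:

    import numpy as np

    size = 64
    yy, xx = np.mgrid[0:size, 0:size]

    def dog_row(cx, cy, s):
        # Center-surround (difference-of-Gaussians) receptive field.
        g1 = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * s ** 2))
        g2 = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * (1.6 * s) ** 2))
        return (g1 / g1.sum() - g2 / g2.sum()).ravel()

    cells = []
    for r in np.geomspace(1.0, size / 2.0, 12):          # log-spaced eccentricities
        for ang in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
            cells.append((size / 2 + r * np.cos(ang),
                          size / 2 + r * np.sin(ang),
                          0.5 + 0.1 * r))                # RF size grows with r

    W = np.array([dog_row(cx, cy, s) for cx, cy, s in cells])
    image = np.random.rand(size, size)                   # stand-in sensor frame
    foveated = W @ image.ravel()                         # one matrix-vector product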
Abstracts: Table of contents
Oral Presentations..............................................................15
Wednesday, September 30.........................................................15
Neuronal phase response curves for maximal information transmission...........15
Modeling synaptic plasticity................................................................................16
Adaptive spike timing dependent plasticity realises palimpsest auto-associative
memories...................................................................................................16
A gamma-phase model of receptive field formation...........................................17
Thursday, October 1.............................................................18
Rules of cortical plasticity...................................................................................18
Efficient reconstruction of large-scale neuronal morphologies...........................19
Adaptive accurate simulations of single neurons...............................................20
Synchronized inputs induce switching to criticality in a neural network..............21
Role of neuronal synchrony in the generation of evoked EEG/MEG responses 22
Spike time coordination maps to diffusion process............................................23
Coding and connectivity in an olfactory circuit....................................................24
Neurometric function analysis of short-term population codes...........................24
A network architecture for maximal separation of neuronal representations - experiment and theory...............................................25
Dynamics of nonlinear suppression in V1 simple cells.......................................27
Friday, October 2...............................................................28
Modelling cortical representations......................................................................28
Inferred potential motor goal representation in the parietal reach region...........28
A P300-based brain-robot interface for shaping human-robot interaction..........29
On the interaction of feature- and object-based attention..................................31
Interactions between top-down and stimulus-driven processes in visual feature
integration..................................................................................................32
Coding of interaural time differences in the DNLL of the mongolian gerbil........33
Probabilistic inference and learning: from behavior to neural representations...34
A multi-stage synaptic model of memory............................................................35
An integrated system for incremental learning of multiple visual categories......36
A mesoscopic model of VSD dynamics observed in visual cortex induced by
flashed and moving stimuli.........................................................................37
Dynamics of on going activity in anesthetized and awake primate....................38
Poster Session I, Wednesday, September 30.......................................40
Dynamical systems and recurrent networks........................................40
W1 Numerical simulation of neurite stimulation by finite and homogeneous
electric sources..........................................................................................40
W2 Dynamic transitions in the effective connectivity of interacting cortical areas
...................................................................................................................41
W3 The selective attention for action model (SAAM).........................................42
W4 Matching network dynamics generated by a neuromorphic hardware system
and by a software simulator.......................................................................43
W5 Attractor dynamics in VLSI...........................................................................44
W6 A novel information measure to understand differentiation in social systems
...................................................................................................................44
W7 Enhancing information processing by synchronization ...............................45
W8 A computational model of stress coping in rats............................................47
W9 Self-sustained activity in networks of integrate-and-fire neurons without
external noise.............................................................................................48
W10 Intrinsically regulated self-organization of topologically ordered neural
maps..........................................................................................................49
W11 Are biological neural networks capable of acting as computing reservoirs?
...................................................................................................................50
W12 A model of V1 for visual working memory using cortical and interlaminar
feedback.....................................................................................................51
W13 Finite synaptic potentials cause a non-linear instantaneous response of the
integrate-and-fire model.............................................................................51
W14 Simple recurrent neural filters for non-speech sound recognition of reactive
walking machines.......................................................................................53
W15 A comparison of fixed final time optimal control computational methods
with a view to closed loop IM.....................................................................54
W16 Is cortical activity during work, idling and sleep always self-organized
critical?.......................................................................................................55
W17 Filtering spike firing frequencies through subthreshold oscillations...........56
W18 Sensitivity analysis for the EEG forward problem......................................57
W19 Cortical networks at work: using beamforming and transfer entropy to
quantify effective connectivity....................................................................58
W20 An activity-dependent connection strategy for creating biologically inspired
neural networks..........................................................................................59
W21 Computational neuroscience methods in human walking behaviour...............60
W22 Invariant object recognition with interacting winner-take-all dynamics.......61
Information processing in neurons and networks..................................62
W23 Ephaptic interactions enhance temporal precision of CA1 pyramidal
neurons during pattern activity...................................................................62
W24 Characterisation of Shepherd’s crook neurons in the chicken optic tectum
...................................................................................................................63
W25 Multiplicative changes in area MST neuron’s responses of primate visual
cortex by spatial attention..........................................................................64
W26 Dynamical origin of the “magical number” in working memory..................65
W27 A novel measure of model error for conductance-based neuron models. .66
W28 Neuronal copying of spike pattern generators...........................................67
W29 Electrophysiological properties of interneurons recorded in human brain
slices..........................................................................................................68
W30 Temporal precision of speech coded into nerve-action potentials.............69
W31 Computational modeling of reduced excitability in the dentate gyrus of
betaIV-spectrin mutant mice......................................................................70
W32 The evolutionary emergence of neural organization in a hydra-like animat
...................................................................................................................71
W33 Simulation of large-scale neuron networks and its application to a cortical
column in sensory cortex...........................................................................72
W34 Analysis of the processing of noxious stimuli in patients with major
depression and controls.............................................................................73
W35 A network of electrically coupled cells in the cochlear nucleus might allow
for adaptive information..............................................................................75
W36 Attention modulates the phase coherence between macaque visual areas
V1 and V4..................................................................................................76
W37 Synchrony-based encoding in cerebellar neuronal ensembles of awake
mobile mice................................................................................................77
W38 Unsupervised learning of gain-field-like interactions to achieve head-centered representations...........................................78
W39 The morphology of cell nuclei regulates calcium coding in hippocampal
neurons......................................................................................................79
W40 Field potentials from macaque area V4 predict attention in single trials with
~100% accuracy.........................................................................................80
W41 Applying graph theory to the analysis of functional network dynamics in
visual cortex...............................................................................................81
W42 Differential processing through distinct network properties in two parallel
olfactory pathways.....................................................................................82
W43 A columnar model of bottom-up and top-down processing in the neocortex
...................................................................................................................83
W44 Towards an estimate of functional connectivity in visual cortex.................84
W45 Correlates of facial expressions in the primary visual cortex.....................85
W46 Uncovering the signatures of neural synchronization in spike correlation
coefficients.................................................................................................86
W47 Fast excitation during sharp-wave ripples..................................................88
W48 The german neuroinformatics node: development of tools for data analysis
and data sharing........................................................................................89
W49 Neuronal coding challenged by memory load in prefrontal cortex.............90
W50 Detailed modelling of signal processing in neurons...................................91
W51 Simultaneous modelling of the extracellular and innercellular potential and
the membrane voltage...............................................................................92
Neural encoding and decoding....................................................93
W52 Cochlear implant: from theoretical neuroscience to clinical application.....93
W53 Feature-based attention biases perception of motion direction.................94
W54 Reproducibility – a new approach to estimating significance of orientation
and direction coding...................................................................................95
W55 Multi-electrode recordings of delay lines in nucleus laminaris of the barn
owl..............................................................................................................96
W56 Invariant representations of visual streams in the spike domain................97
W57 Kalman particle filtering of point processes observation............................98
W58 Decoding perceptual states of ambiguous motion from high gamma EEG..........99
W59 Learning binocular disparity encoding simple cells in a model of primary
visual cortex.............................................................................................100
W60 Models of time delays in the gamma cycle should operate on the level of
individual neurons....................................................................................100
W61 Effects of attention on the ability of MST neurons to signal direction
differences of moving stimuli....................................................................102
Neurotechnology and brain computer interfaces..................................103
W62 A new device for chronic multielectrode recordings in awake behaving
monkeys........................................................................103
W63 Decoding neurological disease from MRI brain patterns.........................104
W64 Effect of complex delayed feedback in a neural field model....................105
Probabilistic models and unsupervised learning.................................106
W65 Applications of non-linear component extraction to spectrogram
representations of auditory data...............................................................106
W66 Planning framework for tower of hanoi task.............................................107
W67 Robust implementation of a winner-takes-all mechanism in networks of
spiking neurons........................................................................................108
W68 A recurrent working memory architecture for emergent speech
representation..........................................................................................109
W69 Contrastive divergence learning may diverge when training restricted
boltzmann machines................................................................................110
W70 Hierachical models of natural images......................................................112
W71 Unsupervised learning of disparity maps from stereo images.................113
W72 RLS- and Kalman-based algorithms for the estimation of time-variant,
multivariate AR-models............................................................................114
W73 A new class of distributions for natural images generalizing independent
subspace analysis....................................................................................114
Poster Session II, Thursday, October 1.........................................116
Computer vision................................................................116
T1 Learning object-action relations from semantic scene graphs....................116
T2 A neural network for motion perception depending on the minimal contrast
.................................................................................................................117
T3 The guidance of vision while learning categories........................................118
T4 Learning vector quantization with adaptive metrics for online figure-ground
segmentation............................................................................................119
T5 Large-scale real-time object identification based on analytic features........120
T6 Learning of lateral connections for representational invariant recognition. .121
T7 Foveation with optimized receptive fields....................................................122
T8 A neural model of motion gradient detection for visual navigation..............123
T9 Toward a goal directed construction of state spaces..................................125
T10 A recurrent network of macrocolumnar models for face recognition.........126
T11 Adaptive velocity tuning on a short time scale for visual motion estimation
.................................................................................................................127
T12 Tracking objects in depth using size change.............................................128
Decision, control and reward...................................................130
T13 Learning of visuomotor adaptation: insights from experiments and
simulations...............................................................................................130
T14 Neural response latency of smooth pursuit responsive neurons in cortical
area MSTd...............................................................................................131
T15 Neuronal decision-making with realistic spiking models...........................132
T16 A computational model of basal ganglia involved in the cognitive control of
visual perception......................................................................................133
T17 Reaching while avoiding obstacles: a neuronally inspired attractor dynamics
approach..................................................................................................134
T18 Expected values of multi-attribute objects in the human prefrontal cortex
and amygdala...........................................................................................135
T19 Optimal movement learning for efficient neurorehabilitation ....................136
T20 Computational modeling of the drosophila neuromuscular junction..........137
T21 Effects of dorsal premotor cortex rTMS on contingent negative variation and
bereitschaftspotential...............................................................................139
T22 A computational model of goal-driven behaviours and habits in rats........140
T23 Fixational eye movements during quiet standing and sitting.....................141
T24 Suboptimal selection of initial saccade in a visual search task.................142
T25 Timing-specific associative plasticity between supplementary motor area
and primary motor cortex.........................................................................143
T26 Fast on-line adaptation may cause critical noise amplification in human
control behaviour......................................................................................144
T27 Inferring human visuomotor Q-functions...................................................144
T28 Beaming memories: source localization of gamma oscillations reveals
functional working memory network.........................................................146
T29 Task-dependent co-modulation of different EEG rhythms in the non-human
primate.....................................................................................................146
T30 A computational neuromotor model of the role of basal ganglia in spatial
navigation.................................................................................................147
T31 Working memory-based reward prediction errors in human ventral striatum
.................................................................................................................149
T32 Spatially inferred, but not directly cued reach goals are represented earlier
in PMd than PRR.....................................................................................150
T33 Classification of functional brain patterns supports diagnostic autonomy of
binge eating disorder................................................................................151
Learning and plasticity........................................................153
T34 Hippocampal mechanisms in the initiation and perpetuation of epileptiform
network synchronisation...........................................................................153
T35 The basal ganglia and the 3-factor learning rule: reinforcement learning
during operant conditioning......................................................................154
T36 Dual coding in an auto-associative network model of the hippocampus...155
T37 Towards an emergent computational model of axon guidance.................156
T38 Convenient simulation of spiking neural networks with NEST 2...............157
T39 Prefrontal firing rates reflect the number of stimuli processed for visual
short-term memory...................................................................................158
T40 Using ICA to estimate changes in the activation between different sessions
of a fMRI experiment................................................................................159
T41 A biologically plausible network of spiking neurons can simulate human
EEG responses........................................................................................160
T42 Unsupervised learning of object identities and their parts in a hierarchical
visual memory..........................................................................................161
T43 The role of structural plasticity for memory: storage capacity, amnesia, and
the spacing effect.....................................................................................162
T44 Investigation of the dynamics of small networks' connections under hebbian
plasticity ..................................................................................................164
T45 On the analysis of differential Hebbian learning in closed-loop behavioral
systems....................................................................................................165
T46 Hysteresis effects of cortico-spinal excitability during transcranial magnetic
stimulation...............................................................................................166
T47 Going horizontal: spatiotemporal dynamics of evoked activity in rat V1 after
retinal lesion.............................................................................................167
T48 A study on students' learning styles and impact of demographic factors
towards effective learning........................................................................168
T49 Role of STDP in encoding and retrieval of oscillatory group-synchronous
spatio-temporal patterns..........................................................................169
T50 Single-trial phase precession in the hippocampus....................................170
T51 Are age-related cognitive effects caused by optimization?.......................171
T52 Perceptual learning in visual hyperacuity: a reweighting model................172
T53 Spike-timing dependent plasticity and homeostasis: composition of two
different synaptic learning mechanisms....................................................173
T54 The model of ocular dominance pattern formation in the presence of
gradients of chemical labels.....................................................................174
T55 An explanation of the familiarity-to-novelty-shift in infant habituation........175
T56 A reinforcement learning model develops causal inference and cue
integration abilities...................................................................................176
T57 Continuous learning in a model of rate coded neurons with calcium
dynamics..................................................................................................177
Sensory processing
178
T58 A model of auditory spiral ganglion neurons.............................................178
T59 Theoretical study of candidate mechanisms of synchronous “multivesicular”
release at ribbon synapses......................................................................180
T60 Modulation of neural states in the visual cortex by visual stimuli..............181
T61 Evaluating the feature similarity gain and biased competition models of
attentional modulation..............................................................................182
T62 While the frequency changes, the relationships stay................................183
T63 Optical analysis of Ca2+ channels at the first auditory synapse...............184
T64 Learning 3D shape spaces from videos....................................................184
T65 High-frequency oscillations in EEG and MEG recordings are modulated by
cognitive context .....................................................................................185
T66 Mobile brain/body imaging (MoBI) of active cognition...............................186
T67 Color edge detection in natural scenes.....................................................187
T68 Simulation of tangential and radial brain activity: different sensitivity in EEG
and MEG..................................................................................................188
T69 Cortico-cortical receptive fields – how V3 voxels sample information across
the visual field in V1.................................................................................189
T70 Predicting the scalp potential topography in the multifocal VEP by fMRI....190
T71 Influence of attention on encoding of two spatially separated motion
patterns by neurons in area MT...............................................................191
T72 Modeling and analysis of the neurophonic potential in the laminar nucleus
of the barn owl..........................................................................................192
T73 Attentional modulation of the tuning of neurons in area MT to the direction
of transparent motion...............................................................................193
T74 Pinwheel crystallization in models of visual cortical development.............194
T75 Cellular contribution to vestibular signal processing - a modeling approach
.................................................................................................................195
T76 Modeling the influence of spatial attention on visual receptive fields........197
T77 Pattern mining in spontaneous background activity from the honeybee
antennal lobe............................................................................................198
T78 Developing a model of visuomotor coordination involved in copying a pair of
intersecting lines......................................................................................199
T79 The influence of attention on the representation of speed changes in
macaque area MT....................................................................................200
T80 Spatial spread of the local field potential in the macaque visual cortex....201
Demonstrations
202
D1 Embodied robot simulation: learning action-outcome associations............202
D2 Balancing pencils using spike-based vision sensors..................................203
D3 yArbor: performing axon guidance simulations...........................................205
D4 Foveated vision with a FPGA camera.........................................................206
Abstracts: Author index
A
Abbott, Larry..........................................50
Abramov, Alexey.................................116
Aertsen, Ad......................................43, 68
Agudelo-Toro, Andres...........................40
Aksoy, Eren.........................................116
Albers, Christian....................................16
Anand, Lishma......................................23
Anastassiou, Costas..............................62
Angay, Oguzhan....................................63
Antes, Niklas.........................................91
Arai, Noritoshi......................139, 143, 166
Arévalo, Orlando.................................130
B
Bach, Michael......................................190
Bade, Paul Wilhelm.............................178
Bahmer, Andreas..................................93
Bajorat, Rika........................................153
Baldassarre, Gianluca...................47, 140
Ballard, Dana H.............................17, 144
Baloni, Sonia.................................64, 102
Bär, Karl-Jürgen....................................73
Barahona, M..........................................62
Barmashenko, Gleb.............................153
Basar-Eroglu, Canan.............................99
Bastian, Peter............................19, 20, 72
Battaglia, Demian..........................41, 191
Bauer, Andreas.....................................43
Baumann, Uwe......................................93
Bayer, Florian......................................117
Behrendt, Jörg.....................................171
Benda, Jan............................................89
Benucci, Andrea....................................84
Berens, Philipp......................................24
Berner, Raphael..................................203
Best, Micha..........................................100
Bethge, Matthias.........24, 112, 113, 114, 132
Beuter, Anne.......................................105
Beuth, Frederik....................................118
Bick, Christian.................................65, 66
Bigdely-Shamlo, Nima.........................186
Bliem, Barbara.....................................143
Böhme, Christoph..................................42
Bolado-Gomez, Rufino........154, 202, 203
Bornschein, Jörg.........................106, 206
Bornschlegl, Mona...............................130
Bouecke, Jan.......................................121
Braun, Jochen.......................................44
Brookings, Ted......................................66
Brostek, Lukas.....................................131
Brüderle, Daniel.....................................43
Bucher, Daniel B.................................137
Bush, Daniel..................................67, 155
Busse, Laura.........................................84
Büttner, Ulrich......................................131
Butz, Markus.......................................173
Büyükaksoy Kaplan, Gülay.................107
Buzsaki, György............................62, 170
C
Cabib, Simona.......................................47
Camilleri, Patrick...................................44
Campagnaud, Julien...........................105
Carandini, Matteo..................................84
Cardanobile, Stefano...........................108
Carr, Catherine....................................192
Chakravarthy, V. Srinivasa..........147, 199
Chalk, Matthew......................................94
Chambers, Jonathan M...............154, 202, 203
Chapochnikov, Nikolai.................180, 184
Chen, Nan-Hui.......................................90
Chiu, C................................................181
Cohen, Jonathan D.............................149
Collman, F.............................................77
Conradt, Jörg.......................................203
Cook, Matthew....................................203
Costa, Ernesto.....................................156
Costa, Rui P................................156, 205
Cui, Maolong.......................................181
Curio, Gabriel................................22, 185
D
Daliri, Mohammad Reza..............182, 191
Deger, Moritz.........................................51
del Giudice, Paolo.................................44
Delbruck, Tobi.....................................203
Dellen, Babette....................................116
Deller, Thomas......................................70
Denecke, Alexander............................119
Di Prodi, Paolo......................................44
Diba, Kamran......................................170
Diesmann, Markus........................51, 157
Dodds, Stephen.....................................54
Dombeck, D.A.......................................77
Douglas, Rodney.................................203
Drouvelis, Panos...................................19
du Buf, Hans..........................................85
Duarte, Carlos.....................................156
E
Ecker, Alexander...................................24
Eggert, Julian..............................127, 128
Ehn, Friederike......................................31
Eichardt, Roland..................................188
Elshaw, Mark.......................................109
Engbert, Ralf.......................................141
Eppler, Jochen....................................157
Ernst, Udo.........................32, 45, 80, 130
Eysel, Ulf T..........................................167
F
Fahle, Manfred..............................32, 130
Feng, Weijia........................................183
Fernando, Chrisantha............................67
Finke, Andrea........................................29
Fiore, Vincenzo.....................................47
Fischer, Asja........................................110
Fiser, József..................................34, 181
Fix, Jérémy..........................................197
Frank, Thomas....................................184
Franke, Felix..................................90, 158
Franzius, Mathias................................184
Freeman, Ralph.....................................27
Fregnac, Yves.......................................27
Fründ, Ingo..........................................160
Funke, Michael....................................188
Fusi, Stefano.........................................35
G
Gail, Alexander..............................28, 150
Galashan, Orlando................31, 103, 200
Galizia, C. Giovanni.............................198
Galuske, Ralf A. W................................81
García-Ojalvo, Jordi..............................56
Gegenfurtner, Karl.......................117, 187
Geisel, Theo............................21, 86, 171
Gerwinn, Sebastian...............................24
Gewaltig, Marc-Oliver......48, 83, 157, 162
Giacco, Ferdinando.............................169
Giulioni, Massimiliano............................44
Glasauer, Stefan.........................131, 195
Gläser, Claudius....................................49
Goerick, Christian..................................49
Goldhacker, Markus............................159
Götz, Theresa......................................185
Govindan, Marthandan........................168
Grabska-Barwinska, Agnieszka............95
Gramann, Klaus..................................186
Grewe, Jan............................................89
Grinvald, Amiram...................................38
Groß, Horst-Michael..............................36
Grothe, Benedikt...................................33
Grützner, Christine................................58
Güllmar, Daniel....................................188
Gurney, Kevin......................154, 202, 203
Gürvit, I. Hakan...................................107
Gutierrez, Gabrielle...............................50
H
Hackmack, Kerstin......................104, 151
Häfner, Ralf.........................................132
Hamker, Fred H...........100, 118, 133, 177
Hansen, Thorsten..................51, 117, 187
Hasler, Stephan...................................120
Haueisen, Jens............................185, 188
Havenith, Martha.........................100, 183
Haynes, John-Dylan....104, 135, 151, 189
Hefft, Stefan..........................................68
Heinke, Dietmar.....................................42
Heinzle, Jakob.............................135, 189
Helias, Moritz.................................51, 157
Hemmert, Werner....................69, 93, 178
Herrmann, Christoph...........................160
Herrmann, J. Michael....................21, 171
Herz, Andreas.................................59, 89
Herzog, Andreas...................................59
Heumann, Holger..................................91
Holmberg, Marcus.................................69
Hoogland, T.M.......................................77
Hosseini, Reshad................................112
Husbands, Phil..............................67, 155
I
Igel, Christian................................37, 110
Ihrke, Matthias.....................................171
Ionov, Jaroslav......................................73
Iossifidis, Ioannis.................................134
Isik, Michael...........................................69
Islam, Shariful......................................190
J
Jancke, Dirk.............................37, 95, 167
Jedlicka, Peter.......................................70
Jin, Yaochu......................................29, 71
Jitsev, Jenia.................................126, 161
Jones, Ben............................................71
Jortner, Ron...........................................25
Joublin, Frank........................................49
Jung, Patrick........................................139
K
Kahnt, Thorsten...........................135, 189
Kaiser, Katharina...................................63
Kaping, Daniel...............................64, 102
Karg, Sonja............................................69
Katzner, Steffen.....................................84
Keck, Christian....................................121
Keck, Ingo...........................................159
Kempter, Richard..........................96, 192
Kiriazov, Petko....................................136
Kirstein, Stephan...................................36
Klaes, Christian.............................28, 150
Knoblauch, Andreas............................162
Knodel, Markus...................................137
Koch, C..................................................62
Köhling, Rüdiger............................68, 153
Kolodziejski, Christoph................164, 165
Körner, Edgar.......36, 83, 119, 120, 162, 184
Körner, Ursula...............................83, 162
Kössl, Manfred....................................139
Koulakov, Alexei..................................174
Kozyrev, Vladislav...............182, 191, 193
Kreiter, Andreas K...........31, 76, 103, 200
Kremkow, Jens......................................43
Kriener, Birgit.........................................23
Kulvicius, Tomas.................................165
Kuokkanen, Paula.........................96, 192
Kurz, Thorben........................................19
L
Lang, Elmar W.....................................159
Lang, Stefan..............................19, 20, 72
Langner, Gerald....................................93
Laurent, Gilles...........................24, 25, 66
Lautemann, Nico...................................96
Lazar, Aurel...........................................97
Leibold, Christian...........................88, 170
Leistritz, Lutz.................................73, 114
Levina, Anna.........................................21
Levy, Manuel.........................................27
Lichtsteiner, Patrick.............................203
Lies, Jörn-Philipp.................................113
Lindner, Michael....................................58
Liu, Ming..............................................206
Lochte, Anja................................191, 193
Löwel, Siegrid......................................194
Lu, Ming-Kuei..............................139, 143
Lücke, Jörg..........................106, 121, 166
Luksch, Harald......................................63
Lüling, Hannes......................................33
M
Macedo, Luís.......................................156
Maier, Nikolaus......................................88
Makeig, Scott.......................................186
Maloney, Laurence T...........................142
Malva, João.........................................156
Malyshev, Aleksey.................................86
Mandon, Sunita...............................76, 80
Mannella, Francesco.....................47, 140
Manoonpong, Poramate........................53
Marder, Eve.....................................50, 66
Marinaro, Maria...................................169
Markounikau, Valentin...........................37
Masson, Guillaume................................43
Matieni, Xavier.......................................54
Mattia, Maurizio.....................................44
Meier, Karlheinz....................................43
Memmesheimer, Raoul-Martin..............23
Menzel, Randolf....................................82
Mergenthaler, Konstantin....................141
Michaelis, Bernd....................................59
Michler, Frank........................................78
Milde, Thomas.....................................114
Miltner, Wolfgang..................................73
Mirolli, Marco.................................47, 140
Modolo, Julien.....................................105
Mohr, Harald........................................146
Möller, Caroline...................................166
Montgomery, S.M..................................62
Moore, Roger K...................................109
Morie, Takashi.....................................126
Morris, Genela.......................................88
Morvan, Camille..................................142
Moser, Tobias..............................180, 184
Muckli, Lars F................................90, 158
Müller-Dahlhaus, Florian.....................143
Muller, Eilif...........................................157
Munk, Matthias HJ...........55, 90, 146, 158
Mustari, Michael J...............................131
N
Natora, Michal.....................................158
Nawrot, Martin P....................................89
Neef, Andreas.........................40, 75, 184
Neitzel, Simon.................................76, 80
Neumann, Heiko............................51, 123
Ng, Benedict Shien Wei........................95
Nicoletti, Michele...................................69
Nikolic, Danko.....................................183
Nikulin, Vadim.......................................22
Niv, Yael..............................................149
O
O'Shea, Michael..................................155
Oberlaender, Marcel........................19, 72
Obermayer, Klaus...........................28, 90
Ohl, Frank............................................160
Ohzawa, Izumi.......................................27
Omer, David..........................................38
Ono, Seiji.............................................131
Ozden, Ilker...........................................77
P
Palagina, Ganna..................................167
Pamplona, Daniela......................122, 206
Park, Soyoung Q.................................135
Patzelt, Felix........................................144
Pawelzik, Klaus.........16, 32, 80, 130, 144
Peresamy, P. Rajandran.....................168
Perrinet, Laurent....................................43
Philipp, Sebastian Thomas....................78
Philippides, Andrew.............................155
Pillow, Jonathan W................................84
Pipa, Gordon.....................58, 81, 90, 158
Pnevmatikakis, Eftychios A...................97
Popovic, Dan.........................................20
Porr, Bernd....................................44, 165
Priesemann, Viola.................................55
Puglisi-Allegra, Stefano.........................47
Q
Queisser, Gillian................79, 91, 92, 137
R
Rabinovich, Mikhail.........................65, 66
Raudies, Florian..................................123
Reichl, Lars.........................................194
Rein, Julia............................................198
Reiter, Sebastian...................................91
Rempel, Hanna...........................103, 200
Ringbauer, Stefan...............................123
Ritter, Helge..........................................29
Rodemann, Tobias..............................176
Rodrigues, João....................................85
Rössert, Christian................................195
Rotermund, David.....................45, 80, 99
Rothkopf, Constantin A...............144, 176
Rotter, Stefan................................51, 108
Roux, Frederic.....................................146
Roxin, Alex............................................35
Rubio, Diana..........................................57
Rudnicki, Marek.............................69, 178
Rulla, Stefanie.....................................146
S
Sadoc, Gérard.......................................27
Saeb, Sohrab......................................125
Sahani, Maneesh..................................84
Saintier, Nicolas....................................57
Sakmann, Bert.......................................72
Salimpour, Yousef.................................98
Sancho, José María..............................56
Sancristobal, Belen...............................56
Sato, Yasuomi.....................................126
Scarpetta, Silvia..................................169
Schemmel, Johannes............................43
Schiegel, Willi........................................89
Schienle, Anne....................................151
Schipper, Marc......................................32
Schleimer, Jan-Hendrik.........................15
Schmidt, Robert...................................170
Schmiedt, Joscha..................................99
Schmitz, Dietmar..................................170
Schmitz, Katharina................................81
Schmuker, Michael................................82
Schöner, Gregor..................................134
Schrader, Sven......................................83
Schrobsdorff, Hecke............................171
Schroll, Henning..................................197
Schultz, Christian..................................70
Schulz, David P.....................................84
Schuster, Christoph.............................137
Schwarzacher, Stephan W....................70
Seitz, Aaron...................................94, 172
Sendhoff, Bernhard...............................71
Sengör, Neslihan Serap......................107
Seriès, Peggy................................94, 172
Shapley, Robert M...............................201
Singer, Wolf.........................100, 146, 183
Sinz, Fabian........................................114
Siveke, Ida.............................................33
Smiyukha, Yulia.....................................80
Sotiropoulos, Grigorios........................172
Sousa, Ricardo......................................85
Steil, Jochen........................................119
Stemmler, Martin...................................15
Stephan, Valeska................................193
Straka, Hans........................................195
Strasburger, Hans...............................190
Strauch, Martin....................................198
Sukumar, Deepika...............................147
Suryana, Nanna..................................168
T
Tamosiunaite, Minija...........................165
Tank, D.W.............................................77
Taylor, Katja..........................................80
Tchumatchenko, Tatjana.......................86
Tejero-Cantero, Álvaro..........................88
Telenczuk, Bartosz................................22
Tetzlaff, Christian........................164, 173
Timme, Marc.........................................23
Todd, Michael T...................................149
Treue, Stefan.......64, 102, 182, 191, 193, 197
Triesch, Jochen...........122, 175, 176, 206
Troparevsky, Maria................................57
Truchard, Anthony.................................27
Tsai, Chon-Haw...................................139
Tsigankov, Dmitry................................174
Tushev, Georgi....................................206
U
Uhlhaas, Peter.......................58, 100, 146
V
Vankov, Andrey...................................186
Vednath, Pittala...................................199
Vicente, Raul.........................................58
Vitay, Julien.........................................133
Volgushev, Maxim.................................86
von der Malsburg, Christoph.......61, 126, 161
Voss, Mark..................................100, 177
W
Wachtler, Thomas...........................78, 89
Wagner, Hermann.........................96, 192
Waizel, Maria.................................90, 158
Wang, Huan..........................................69
Wang, Peng.................................100, 183
Wang, Quan........................................175
Wang, Samuel.......................................77
Weber, Cornelius.................122, 125, 206
Wegener, Detlef....................31, 103, 200
Weigel, Stefan.......................................63
Weise, Felix K.......................................70
Weiss, Thomas..............................73, 114
Weisswange, Thomas H.....................176
Weliky, M.............................................181
Wersing, Heiko..............36, 119, 120, 184
Westendorff, Stephanie.................28, 150
Weygandt, Martin........................104, 151
Wibral, Michael........................55, 58, 146
Willert, Volker......................................127
Wiltschut, Jan..............................100, 177
Winkels, Raphael..................................70
Winterer, Jochen...................................88
Witt, Annette..................................41, 193
Witte, Herbert......................114, 185, 188
Witte, Otto...........................................185
Wittum, Gabriel........................91, 92, 137
Wolf, Andreas........................................59
Wolf, Fred..............................86, 180, 194
Wolfrum, Philipp..................................126
Womelsdorf, Thilo...............................197
Wörgötter, Florentin.........44, 53, 116, 164, 165, 173
Wu, Wei.................................................58
Wüstenberg, Torsten...........................190
X
Xing, Dajun..........................................201
Xylouris, Konstantinos.....................91, 92
Y
Yamagata, Nobuhiro.............................82
Yao, Xin.................................................71
Yeh, Chun-I.........................................201
Yousefi Azar Khanian, Mahdi................60
Z
Zhang, Chen........................................128
Zhang, Lu............................................102
Zhu, Junmei...........................................61
Ziemann, Ulf........................139, 143, 166
Zirnsak, Marc.......................................197
Zito, Tiziano...........................................89
NOTES