Understanding Embodied Cognition through
Dynamical Systems Thinking
Gregor Schöner, Hendrik Reimann
17th March 2008
1 Introduction
If we ask ourselves what sets the human species apart in the animal kingdom, a variety of answers come to mind. The capacity to make skillful manipulatory movements, to direct action at objects, handle objects, assemble
and reshape objects, is one of the possible answers. We are amazing movers,
very good at dynamic actions as well, catching and throwing objects, anticipating requirements for upcoming movements. Some other species perform
amazing specialized stunts, but none is as versatile and flexible as we are.
Through our manipulatory skill we relate to the world in a highly differentiated way, transform some objects into tools, which we bind into our body
scheme to bring about change in another object. “Homo Faber” is a very
appropriate characterization of the human mind.
Consider a simple example from daily life: Your toaster has stopped ejecting
the slices of toast once done. In the hope of a cheap solution, you try
to repair this defect by opening the toaster and searching for a dislocated
spring. This will mean concretely that you will take the toaster toward a
convenient workplace, the bench of your workshop if you are ambitious about
such things. You will explore the toaster visually, turning it around while
observing it to identify screws to undo, setting the toaster down, finding an
appropriate screw driver, loosening the screws, setting the toaster upright
again and carefully lifting up its cover. Some more examination leads you
to, in fact, find a loose spring (your lucky day), which you attach back to
the obvious hook onto which it fits. You refit the cover, and find, insert and
turn each screw in succession. You carry the toaster back to the kitchen,
and test it out on an old piece of toast, and happily announce to all members
of the household your heroic deed.
Now this action involves a lot of cognition. First, there is an indefinable
amount of background knowledge (Searle, 1983) that is used in multiple
ways and at different levels during the repair. Knowing that a repair involves
opening a device by taking off its cover, that screws need to be undone to
that end, are examples of high-level knowledge. Knowing what springs look
like or that removing a screw means turning the screw driver in counterclockwise direction are examples of a lower level of knowledge, meaning
knowledge more closely linked to the sensory or motor surfaces. Some of the
background knowledge may have the discrete, categorical form of whether to
turn a screwdriver to the left or to the right. Other background knowledge
is more graded and fuzzy in nature, such as how much force to apply to a
plastic vs. to a metallic part.
Visual cognition is required to detect the screws against the background
and to categorize the screws so as to select the appropriate type of screwdriver. During active visual exploration, we may memorize where the screws
are located, together with the pose and viewing angle of the toaster required
to return to each screw, to work on it in the unscrewing phase.
At a more global level, we need to retain stably in our mind that we are
trying to repair the toaster as we go about all these detailed actions and
explorations. That overall goal was selected in light of predictions about its
duration (e.g. short enough to make it to the cinema 30 minutes later), the
worthiness of this project compared to alternatives and the probability of
success.
The whole project of repairing the toaster takes place in a concrete setting, which provides surfaces to work on, visual structure that helps orient,
mechanical structure that facilitates motor control by providing force feedback and stabilization through friction. Performing the action while situated
in a structured and familiar environment alleviates some of the cognitive load
of the task. For instance, working memory is less taxed because the visual
context provides cues to the screws’ locations, or even just because they may
always be found again by reexploring.
In addition to this sensory interaction, the task situation is central to
the generation of movements as well. Sensori-motor coordination is required
when turning the toaster around to be able to examine it from different
angles and later to hold the toaster while attempting to loosen the screws.
This entails generating just the right amount of torque so that the frictional
force is overcome but slipping is avoided. That torque must continuously
and rapidly be adjusted as the screw starts to turn and static friction is
replaced by dynamic friction.
As we move ahead with the task, we need to smoothly switch from one
motor state to another. For instance, while unscrewing a screw, we fixate
the toaster with one hand and control the screw driver with the other. Then
we must release the toaster from fixation and move both hands into a coordinated grasp of the whole object as we reposition it, probably performing
at the same time some finger acrobatics to hold on to the screw driver.
This simple process of repairing a toaster clearly shows how a cognitive
system can benefit from having a body and being in a specific situation.
Compare the ease of performing this situated action with the much more
challenging variant in which an engineer provides a robot with a sequence
of detailed commands to achieve the same end.
But how central are the notions of embodiment and situatedness to understanding how cognitive tasks are achieved? Traditionally, it has been
thought that the core of cognition forms around such things as language,
thought, reasoning, or knowledge, and is detached from the motor and sensory surfaces. This has given rise to the theoretical framework of information
processing in which cognition is the manipulation of symbolic information.
Instances of symbols represent states of affairs relatively independently of
how these states were detected sensorially or how they are acted on by the
motor system.
The stance of embodied and situated cognition (see other chapters in this
book) postulates the opposite: All cognition is fundamentally embodied,
that is, closely linked to the motor and sensory surfaces, strongly dependent
on experience, on context, and on being immersed in rich and structured
environments. Although there are moments when cognitive processes are
decoupled from sensory and motor processes, the embodied stance emphasizes that whenever sensory information is available and while motoric action
is going on, cognitive processes are maintaining continuous couplings to associated sensory and motor processes.
Arguments in favor of such an embodied and situated stance can take
many forms (Thelen, 1995; Thelen, Schöner, Scheier, & Smith, 2001;
Riegler, 2002). The line of thought that shall be described in the following
is, in a half formal way, called dynamical systems thinking. It focuses on
the concepts of the stability of behavioral states, the spatio-temporal continuity of processes, and their capacity to update online at any time during
processing (Erlhagen & Schöner, 2002).
While symbolic information processing is about computing, receiving
some form of input and generating some form of output in response, dynamical systems thinking is fundamentally about coupling and decoupling.
Cognitive processes are tailored to the structure of the environment, so that
they form stable relationships with sensed aspects of the environment or
with motor systems. As a corollary, cognitive processes are not designed to
deal with arbitrary inputs.
Both the embodied and situated stances as well as the theoretical framework of dynamical systems thinking resonate with the renewed interest in
understanding cognition and behavior on a more explicitly neuronal basis. For a long time, a proposal made explicit by Marr (1982) had been the
shared assumption across a broad range of subdisciplines concerned with human cognition. The assumption was that we may study human perception,
cognition, and motor planning at different levels of abstraction. The most
abstract, purely computational level characterizes the nature of the problem
solved by the nervous system. The second, algorithmic level consists of specific forms in which such abstract computations can be structured. Finally,
the third level of neuronal implementation deals with how specific neuronal
mechanisms may effectively implement algorithms. Although there are logical links between these levels, the research taking place at the different levels
of description can, to some extent, be pursued independently of the other
levels. In this conception, for instance, properties of the neuronal substrate
are not relevant to the computational and algorithmic levels.
In contrast, the embodied stance emphasizes that the principles of neuronal function must be taken into account as fundamental constraints whenever models of neuronal function, of behavior and cognition, are constructed.
This does not mean that all models need to be neuronally detailed and realistic. Certain basic principles of neuronal function must not be violated,
however. Among these are the continuous nature of neuronal processing,
the potential to be continuously linked to sensory and motor surfaces, the
notion that neurons are activated or deactivated, but do not transmit any
specific messages (other than their activational state), and others, as we shall
see below.
Being mindful of these properties has concrete implications for the kinds
of theoretical frameworks that are compatible with the embodied stance.
In particular, stability, continuous time, and graded states are necessary
elements of any theoretical account of nervous function.
Finally, learning and development are sources of conceptual constraints
for the embodied stance. At least since Piaget (1952), the sensori-motor basis of developmental processes has been in the foreground of developmental
thinking. That development is largely learning and that cognition emerges
from experience, building on sensori-motor skills, is not universally accepted.
The embodied stance embraces this position and provides forceful arguments
in its favor (Blumberg, 2005). More generally, the openness to learning and
adaptation is an important constraint for any theoretical framework that is
compatible with neuronal principles. This requires, for instance, a substrate
for cognition in which graded changes can occur, which is capable of keeping track of the slower time scale on which learning occurs, and which has
sufficiently rich structure to enable the emergence of entirely new functions
or the differentiation of skills and behaviors.
2 Dynamicism
Dynamicism or dynamical systems thinking (DST) is a theoretical framework
and language which enables understanding embodied and situated cognition.
As a theoretical language DST makes it possible to talk about the continuous
temporal evolution of processes, their mutual or unidirectional coupling and
decoupling, as well as their coupling to sensory or motor processes. At the
same time, DST provides an account for how discrete temporal events may
emerge from such underlying temporal continuity (through instabilities).
Relatedly, DST contains language elements to talk about graded states, but
Figure 1: A two-dimensional vector field illustrating the rate of change of two
activation variables u1 and u2 of a neural dynamics. For each value (u1 , u2 ) of
the state, the arrow indicates the direction and magnitude of the change the state
variables will undergo in the immediate future. The dot in the lower left indicates
a fixed point attractor.
also to address the emergence of discrete categories.
The mathematical basis of DST is the mathematical theory of dynamical
systems (Braun, 1993; Perko, 1991). That theory provides the language for
much of what is quantitative in the sciences, for physics, chemical reaction
kinetics, engineering, ecology, and much more. The mathematical concepts
on which DST is based are much more specific, however, than this embedding
in mathematical theory suggests. This section will introduce these main
concepts now and provide the link to behavior primarily by reference to
sensory and motor processes and only the most modest forms of cognition.
2.1 State spaces and rates of change
The central idea behind the mathematics of dynamical systems is that the
temporal evolution of state variables is governed by a field of forces, the vector field, which specifies for every possible state of the system the direction
and rate of change in the immediate future. Such a vector field is illustrated
in Figure 1 for a system characterized by two variables, u1 and u2, which
describe the state of the system.
To make things concrete, think of these variables as describing the level
of activation of two neurons (or, more realistically, of two neuronal populations). Negative levels of activation indicate that the neurons are disengaged,
unable to transmit information about their state on to other neurons. Positive levels of activation reflect an activated state, in which the neurons are
capable of affecting neurons downstream in the nervous system (by emitting
spikes which are transported along axons, but that mechanism will not be
discussed here).
The vector field is a mapping in which each state of the variables is
associated with the rate of change of the variables. These rates of change
form vectors that can be visualized as arrows attached to each point in
state space, as illustrated in Figure 1. In formal mathematical terms then, a
dynamical system is this mapping from a state space to the associated space
of rates of change, u ↦ du/dt = f(u). For any vector u, here u = (u1, u2),
the mapping f(u) is the vector field of forces acting upon the variable u in
that specific state. The notation du/dt indicates the temporal derivative of
the activation variables, du/dt = (du1/dt, du2/dt), so that the vector field
and the state space have the same dimensionality.
Solutions of a dynamical system are time courses of the state variables,
u(t), that start from an initial state u(0), where t indicates time. How the
system evolves from any initial state is dictated by the forces acting upon
the variables at each point in time, as indicated by the vector field. Starting
from u(0), the system evolves along a path determined by the vector field in
a way that at each point in time, the path is tangent to the rate of change
f (u). The system state runs along these solution paths with a velocity that
is set by the length of the arrows.
In Figure 1, all vectors point towards a point in the lower left quadrant
of the space, marked by a small dot, at which the vector field converges and
around which the length of the vectors becomes infinitesimally small. Once
activation levels have reached this point, their rate of change is zero, so the system will stay at those activation levels. This point
in the state space is thus called a fixed point. Because the fixed point is
automatically reached from anywhere around it, it is called an attractor.
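The relaxation to a fixed point attractor can be sketched in a few lines of code. The following is an illustrative example, not from the chapter: it integrates a simple convergent dynamics du/dt = -u + h with the Euler method, where the resting level h and all numerical parameters are assumptions chosen for illustration only.

```python
import numpy as np

# Resting level h (assumed for illustration); the attractor lies at u = h.
H = np.array([-2.0, -2.0])

def f(u):
    """Convergent vector field: rate of change of the activation variables."""
    return -u + H

def simulate(u0, dt=0.01, steps=2000):
    """Euler integration: follow the solution path u(t) from the state u0."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * f(u)  # step along the local arrow of the vector field
    return u

# From any initial state, the path relaxes to the attractor at (-2, -2).
print(simulate([3.0, 1.0]))   # close to [-2, -2]
print(simulate([-5.0, 4.0]))  # same attractor, reached along a different path
```

Replacing the linear f with any other convergent vector field leaves the qualitative picture unchanged: solution paths are everywhere tangent to the field and terminate in the attractor.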
2.2 Neural dynamics
Neuronal dynamics with attractors make it possible to understand how neural networks may be linked to sensory input. The attractor illustrated in Figure 1 lies in the lower left quadrant of the space of two activation variables:
both neurons are at a low level of activation, not engaged in transmitting
spikes to other neurons. This can be thought of as the resting state of the
two neurons. An activating sensory input to both neurons can be thought
of as an influence on the neurons that drives up their levels of activation.
In formal terms, this is written as du/dt = f(u) + in(t), where in(t) is a
time-varying input function that does not depend upon the state of u. Figure 2
illustrates the case in which only neuron number 1 receives input (only the
first component of the input vector is different from zero). This leads to
an additional contribution to the vector field that points everywhere to the
right. The resultant vector field, shown in Figure 3, is still convergent, but
the attractor is shifted to the right, to positive levels, where the neuron u1
has been activated, while u2 remains at resting level.
If the input is added when the system is in the previous attractor seen
Figure 2: To the vector field of Figure 1 (redrawn here using thin arrows) is added
an ensemble of vectors pointing to the right and representing constant input to u1 .
Figure 3: The sum of the two contributions to the vector field displayed in Figures
1 and 2 leads to the convergent vector field shown here. The attractor has been
shifted to the right along the u1 -axis.
in Figure 1, with both neurons at resting level, the change of the system
dynamics is followed by a change in the state variables. The rate of change
at the previous fixed point is not zero anymore, but points to the right,
towards the new fixed point. The system leaves the previously stable state,
moving along a path dictated by the changed vector field, until it reaches
the new fixed point attractor.
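As a minimal sketch of this behavior (again with hypothetical parameters, not the chapter's model), the same kind of linear dynamics can be given an input term, du/dt = -u + h + in(t); switching the input on shifts the attractor, and switching it off lets the state relax back:

```python
import numpy as np

H = np.array([-2.0, -2.0])  # assumed resting levels of both neurons

def step(u, inp, dt=0.01):
    """One Euler step of du/dt = -u + H + in(t)."""
    return u + dt * (-u + H + inp)

u = H.copy()                # start in the resting attractor
inp = np.array([5.0, 0.0])  # constant input to neuron u1 only

for _ in range(2000):       # input on: the attractor shifts to H + inp
    u = step(u, inp)
print(u)                    # near [3, -2]: u1 activated, u2 at rest

for _ in range(2000):       # input off: it acted as a transient perturbation
    u = step(u, np.zeros(2))
print(u)                    # back near the resting state [-2, -2]
```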
This illustrates the defining property of attractor states: Attractors are
(asymptotically) stable states, with the system dynamics working against all
influences that may cause the state variables to deviate from the attractor,
driving it back to the fixed point. In this instance, the recent history of
activation is such an influence. If the added input in this example is removed
after only a short period, it effectively becomes a transient perturbation.
The system will return to the resting state, the original attractor in the
lower left quadrant. As neurons in the nervous system are richly connected,
there are many potential sources for such transient perturbations. Any given
neuronal state will persist long enough to have an effect on other neurons
only if it is stabilized against the majority of such perturbative inputs.
A central concept of DST is that the dynamical systems which nervous
systems form together with their coupling to the sensory and motor systems
are best characterized by their attractor states. Dynamical systems with
convergent vector fields that may form attractors represent a special class of
dynamical systems, sometimes referred to as “dissipative” systems (Haken,
1983). DST is thus postulating that the neuronal networks and their links
to sensory and motor systems are all within this specific class of dynamical
systems, so that the functional states of the nervous system may be attractor
states. DST thus makes a much more specific proposal than merely saying
that nervous systems can be modelled by differential equations.
What exactly is the neuronal basis of this postulate of DST? The first
ingredient is the recognition that neurons are dissipative dynamical systems at the microscopic level, the biophysics of the neuronal membrane and
the single cell. This has been known for a long time and is captured in
a rich literature (Hoppensteadt & Izhikevich, 1997; Wilson, 1999; Deco &
Schürmann, 2000; Trappenberg, 2002). The example of Figures 1 to 3 illustrates then how this property of individual neurons is bootstrapped up to
the macroscopic level of entire neural networks and their linkages to sensory
and motor systems (see Hock, Schöner, and Giese (2003)). If a base contribution, f (u), generates a convergent vector field, then the more complex
vector fields that are built by adding constant (or bounded) contributions
also form convergent vector fields.
Thus, attractor states of neuronal networks take their origins in biophysical mechanisms, which are being propagated all the way to the macrostates
that become behaviorally relevant. This does not mean that the entire
nervous system is always in an attractor. As illustrated in Figures 1 to 3,
transient sensory stimulation may quickly shift attractors around and the
system may forever be chasing behind these moving attractors. The functional structure of macroscopic neural states is captured, however, by the
layout of attractors and their dependence on sensory stimulation. In light
of the strong interconnectedness of the central nervous system, only stable states will lead to robust function that resists change along the many
directions along which other parts of the nervous system may pull at any
time.
Stability works all the way through to the motor system. The muscle-joint system, for instance, the agent of elementary motor action, is characterized by an attractor dynamics that leads to a stable, mechanical state of
the effector. There are contributions to this stable state from the passive
elastic and viscous properties of muscles. Neuronal circuits support these
properties and endow the system with flexibility so that it may shift the
resulting attractor state (Latash, 1993). Joint movement comes about by
such a neuronally effected shift of the attractor of the muscle-joint system
which then engages muscular activation through reflex loops.
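A toy version of this idea (the parameters are my own, not a model from the motor-control literature cited here) treats the joint as a damped spring whose equilibrium angle is set neurally; movement is produced simply by shifting that attractor:

```python
def settle(theta0, target, k=10.0, b=5.0, dt=0.001, steps=5000):
    """Damped spring dynamics: the joint angle relaxes to the set equilibrium."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -k * (theta - target) - b * omega  # restoring torque + damping
        omega += dt * alpha                        # semi-implicit Euler step
        theta += dt * omega
    return theta

# Shifting the attractor from 0 to 1 radian moves the joint to the new angle.
print(round(settle(0.0, 1.0), 3))  # 1.0
```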
Stability also provides an account for how organisms may couple their
motor behavior to sensory input derived from the world and thus, given the
appropriate properties of the world, lock onto objects in their surroundings.
A well-studied example at a very basic level is the tracking behavior of
the house fly (Reichardt & Poggio, 1976). Flies are capable of locking their
flight motor onto a visual object, simply a speck moving over their facet eye.
They do this using basic motion detection circuitry and directly coupling the
direction of detected motion into their motor system. The viscosity of the air
helps stabilize the resultant tracking behavior, which looks truly impressive
when a male fly pursues a female one across complex evasive maneuvers (the
purpose is easily guessed).
This establishes the link to Cybernetics, a historical predecessor of DST.
Cybernetics provided accounts for how organisms may couple motor systems to sensory input to bring about stable behavioral states. While very
successful on its chosen terms, cybernetic thinking never really moved beyond the understanding of a single-purpose function, in which one particular
perception-action linkage is at work. Stably linking, say, the flight motor
output to the direction in which motion is sensed may lead to excellent pursuit behavior. But how may the fly break out of it? And if it is able to break
out of it, how does it initiate such a state, how does it select among other
behaviors? How does it control which things to do in which circumstances?
2.3 Behavioral flexibility
How does DST overcome the well-known limitations of Cybernetics? The
kind of behavioral flexibility described requires that a state may be released
from stability, the coupling may be broken, one behavioral pattern may become unstable while another pattern is stabilized. In other words, flexibility
requires instability (Schöner & Kelso, 1988).
To illustrate this idea we use the two-dimensional dynamical system of
(u1 , u2 ) described above to model how a fly may track moving objects. The
neurons could be linked to sensory input so as to respond to the visual
motion of moving objects. One neuron would be tuned to become activated
when a horizontally moving object is seen, the other when a vertically moving object is seen (in addition, the area in the visual array from which the
neurons receive input may be limited to a receptive field, but we’ll gloss
over that for now).
Typically, only one moving object would be seen and its direction of
motion would be registered by the appropriate neuron, which would be coupled appropriately to the wing motors, steering the fly into the appropriate
direction to track the object. But now imagine that there were actually two
flies in the scene, one moving upward, the other moving horizontally, both
initially in the same region of the visual array. Both neurons would receive
input. If both became activated, then the wing motors would
receive both a command to track toward the right and upwards, leading to
a flight maneuver into an oblique upward-rightward direction and missing
both targets.
Instead of allowing both neurons to respond to their respective inputs,
the fly’s nervous system needs the neurons to make a selection decision, in
which one neuron is activated so that only one type of motor command
is sent to the wings. The other neuron, although it receives input, must
be suppressed. This operation goes beyond a mere transformation of the
pattern of sensory inputs (two motion signals in this case), requiring an
active process of decision making instead. The selection decision may depend
on both the input and the current state of the neuronal dynamics itself. If
the two neurons have previously settled on one particular motion direction,
disregarding the other, then that decision must be stabilized to prevent
useless changes of strategy back and forth between the two possible choices.
The dependence of the rate of change of one state variable upon the value
of another state variable is called coupling. For two neurons, such coupling
may be brought about by direct synaptic connections between the neurons
or, indirectly, by connections through one or multiple interneurons. A minimal solution is an inhibitory connection between the two neurons, u1 and u2.
A contribution to the vector-field implementing such inhibitory coupling is
illustrated in Figure 4. Only when a neuron is sufficiently activated does it
contribute to neuronal coupling. This reflects the nature of synaptic transmission, in which only sufficiently activated neurons emit spikes and affect
the neurons with which they have synaptic connections. This dependence of
neural coupling on the level of activation of a neuron is captured by a nonlinear, sigmoidal function. In Figure 4, for instance, inhibition of neuron u1
by neuron u2 is represented by vectors pointing in the negative direction
along the u1 axis, indicating negative rates of change of u1. These vectors
are present only where neuron u2 has positive levels of activation (in the
upper half of the plane). The length of these vectors reflects the sigmoid
function. When the two inhibitory contributions in the two directions are
overlaid, the vectors point away from the quadrant in which both neurons
would be activated, promoting a splitting of the vector field into branches
favoring either u1 or u2 .
The effect of coupling can be examined by adding the inhibitory coupling contribution of Figure 4 to the convergent vector-field representing
the input-driven dynamics of Figure 3. As shown in Figure 5, this leads
again to a convergent vector field. For weak inputs, this vector field has
a single attractor, pushed toward weaker levels of activation by inhibitory
coupling. For sufficiently strong inputs, this leads to two attractors. In one,
neuron u1 has positive activation while neuron u2 is suppressed to negative
Figure 4: Contributions to the vector-field that represent inhibitory coupling between two neurons u1 and u2 . (a) Neuron u2 inhibits neuron u1 through a negative
contribution to the rate of change of neuron u1 (arrows pointing left). This contribution is only present while u2 is sufficiently activated (top). (b) The analogous
contribution for how neuron, u1 , inhibits neuron, u2 . (c) Both contributions added
up.
levels of activation, at which it cannot transmit activation. In the other, the
opposite is true. If input is gradually increased, the single attractor at symmetrical activation levels of both neurons becomes unstable, splitting in a
bifurcation into two new asymmetrical attractors. In a variety of neuronal
and behavioral systems, instabilities of this nature have been observed and
identified as such (Schöner & Kelso, 1988; Kelso, 1995).
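A minimal sketch of such a selection dynamics (all parameter values are my own assumptions, chosen to put the system in the bistable regime) couples two input-driven neurons through sigmoid-gated mutual inhibition:

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Only sufficiently activated neurons contribute to coupling."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def select(s1, s2, u0, h=-2.0, w=6.0, dt=0.01, steps=4000):
    """Two neurons with inputs s1, s2 and mutual inhibition of strength w."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        du1 = -u[0] + h + s1 - w * sigmoid(u[1])
        du2 = -u[1] + h + s2 - w * sigmoid(u[0])
        u += dt * np.array([du1, du2])
    return u

# Equal strong inputs: a small initial advantage decides the competition,
# and the winning neuron suppresses the loser below resting level.
print(select(6.0, 6.0, [0.1, 0.0]))  # u1 activated, u2 suppressed
print(select(6.0, 6.0, [0.0, 0.1]))  # the reverse decision
```

With no input (s1 = s2 = 0), both neurons relax to a single resting attractor near h; it is the strong input that creates the two alternative attractors between which the dynamics decides.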
In the bistable regime, the dynamical neurons no longer just follow the
input pattern. They actually decide something on their own, which neuron
will become activated and thus, which moving target the fly will pursue. If
inputs are symmetrical as in our illustrations, this decision may be driven
by chance, a small fluctuation arising from noisy inputs. If one of the inputs
is stronger (one of the objects provides a strong motion signal because it
is closer, for instance), then the decision is biased toward that object. In
either case, once a decision has been made by the neurons’ activation levels
relaxing to one of the two attractors, the neuronal dynamics stabilizes that
Figure 5: Vector fields for two neurons with mutually inhibitory interaction are
shown when both neurons receive the same weak (a) or strong (b) input. (a) Weak
input leads to a single attractor at small levels of activation. (b) Strong input leads
to two attractors, one in which neuron 1 is activated, but not neuron 2 (bottom
right), another in which the reverse is true (top left). (c) A schematic illustration
of the location of the attractors in the space of the two state variables, u1 and u2,
as a function of input strength shows the transition from a monostable state at
weak input to a bistable state when increasing input starts to bring in inhibitory
interaction.
decision. Even if inputs vary somewhat, the system will tend to stay in the
attractor selected.
3 Dynamic Field Theory
The simple model of selecting one of two neurons we analyzed above incorporates two of the key elements of dynamical systems thinking: stability
and flexibility. By setting up appropriate couplings between different parts
of a neuronal dynamics, stable states may be generated, shifted, made to
appear or to disappear as the sensed environmental conditions change. This
may happen without ever explicitly representing the goal of an action and
how it changes under these changing conditions. There is no need, in this
example, for an abstract, symbolic representation of the chosen movement
target, an explicit computation of its velocity or position. The fly simply
needs to couple its neural control system driving the wings to these dynamical neurons that receive sensory inputs about the presence of moving specks
on the visual array. Given the capacity of these dynamical neurons to make
sensori-motor decisions, this system is sufficient to achieve complex tracking
behavior, in which one fly chases another and is not distracted by other
moving objects crossing its visual array.
But is this system truly free of representation? Would not the neurons
themselves “represent” sensory stimuli as well as the decisions made about
them? In the present arrangement, such an interpretation is possible, but
of limited power. For instance, although the neural dynamics can control
tracking behavior, it is unable to perform more abstract operations, such as
flying in a direction that is a variable amount off the sensed movement
direction (say 30 degrees when one signal is given, 60 degrees when another
is given). This would seem to require a form of representation that has
more flexibility and a stronger degree of independence of the particular,
fixed coupling structure between sensory representation and motor system.
Could other dynamical systems with more sophisticated coupling structure
be devised that move beyond such limitations? Could neuronal dynamics
be conceived of that may generate stable states that represent something
like a 30-degree turn from a sensory source, while still being continuously
linked to sensory input, having stability properties, and generating behavior
autonomously in real time?
3.1 Activation Fields
The key issue is to find a way that neural activation may represent metric
information. The solution are fields of neuronal activation that endow any
dimension of metric information relevant to perception, action, or cognition
with a continuous activation function. Each point along the dimension is
assigned a particular level of activation. Such activation fields are spatially
smooth and depend on time.
Metric information is then represented by patterns of activation with
high activation levels around specific locations along the dimension (around
30◦ or 60◦ in the example) and low levels of activation elsewhere (Fig. 6).
Such localized peaks of activation are the units of representation in Dynamic
Field Theory. The amplitude of a peak indicates the presence or absence of
information about the metric dimension, and modulates the extent to which
that information is capable of impacting on other neuronal representations
or directly on motor systems. The location of a peak encodes the metric
content of the representation, such as a sensory estimate, a motor plan, or
metric working memory. The peak at 30◦ depicted in Figure 6, for instance,
represents the task of turning by 30◦ . It may have arisen from sensory
Figure 6: A localized peak of activation in an activation field that is defined over
a metric dimension (here: heading direction) may represent both the presence of
a visual or motor object (through its level of activation) and an estimate about
this object (through the location of the peak along the metric dimension, here, the
heading direction of 30◦ ).
information or by computation within a neuronal dynamics. The peak may
bring about the represented action by being coupled to the wing motors in
ways that steer the fly in the indicated direction. The link of activation
fields to sensory and motor surfaces both through inputs and as targets of
projection ensures that Dynamical Fields support embodied and situated
cognitive processes, while the stability and autonomy of the peak solutions
enables abstraction and operation on representations.
3.2 Field Dynamics
To enable localized peaks to play this role of the units of representation in
embodied cognition, they are made the attractor solutions of dynamical systems that describe the temporal evolution of activation fields. While inputs
may drive activation to induce peaks, neuronal interaction among different
field sites is responsible for stabilizing peak solutions against decay through
local excitation and against diffusive spread through broader or global inhibition (Fig. 7). Excitatory input into an activation field may come from any
other part of a neuronal dynamical architecture, in particular, from sensor
surfaces or other activation fields. Due to the broad connectivity in the
central nervous system, such inputs typically contain random components
modelled as Gaussian white noise. Interaction among field sites depends on
the distance between any two sites, so that metrically close locations are
mutually coupled excitatorily while sites at larger distances are mutually
coupled (potentially through interneurons) inhibitorily (Fig. 7).
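The two kernel shapes of Figure 7 can be written down directly. In this sketch, the particular amplitudes and widths are illustrative assumptions: the first kernel combines local Gaussian excitation with constant global inhibition, the second is a Mexican hat built as a difference of Gaussians.

```python
import numpy as np

def gaussian_kernel(distance, c_exc=1.0, sigma_exc=2.0, g_inhib=0.3):
    """Local Gaussian excitation combined with constant global inhibition."""
    return c_exc * np.exp(-distance**2 / (2 * sigma_exc**2)) - g_inhib

def mexican_hat_kernel(distance, c_exc=1.0, sigma_exc=2.0,
                       c_inh=0.5, sigma_inh=6.0):
    """Difference of Gaussians: narrow excitation minus broader inhibition."""
    return (c_exc * np.exp(-distance**2 / (2 * sigma_exc**2))
            - c_inh * np.exp(-distance**2 / (2 * sigma_inh**2)))

d = np.linspace(-20.0, 20.0, 201)
print(gaussian_kernel(d).max(), mexican_hat_kernel(d).max())  # maxima at distance 0
```

Both kernels are excitatory at zero distance and inhibitory at intermediate distances; they differ only in whether the inhibition persists globally or returns to zero at large distances.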
A generic mathematical formulation of the dynamics of activation fields
that contain peaks as stable solutions was analyzed by Amari (1977):
τ u̇(x, t) = −u(x, t) + resting level + input + interaction.
The first three terms set up the linear dynamics of the activation level,
u(x, t), at any field site x in the manner of the vector fields examined earlier
(Figs. 2 and 3). In the absence of interaction, these terms lead to an attractor
Figure 7: Two examples of interaction kernels: (a) Gaussian kernel with local excitation and global inhibition; (b) Mexican hat style kernel with local excitation and broader, but still local, inhibition and no global inhibition.
at an activation level matching the resting state plus any inputs provided
at that field location, x.
Nonlinearity is introduced by the interaction between different field sites. The contribution of interaction to the rate of change of activation at a given field site, x, is a weighted sum over all other field sites, x′. The weight, w(x, x′), determines both the sign and strength of the influence of activation at site x′ on the rate of change of activation at site x. Positive values of w(x, x′) reflect excitatory coupling, negative values inhibitory coupling. The weight factor, also called coupling strength or interaction kernel, depends only on the distance between x and x′. It is typically positive for small distances x − x′, and negative over larger distances to ensure the stability of localized peaks of activation (Fig. 7).
Only sufficiently activated field sites contribute to interaction. This is formalized by multiplying each weight term with the current level of activation at site x′, passed through a sigmoidal threshold function, σ(u(x′, t)). This nonlinear function is zero for sufficiently low levels of activation and one for sufficiently large levels of activation, with a more or less steep transition near the threshold level (conventionally chosen as the zero level of the activation variable). The generic dynamics of activation fields can thus be written in this form:
τ u̇(x, t) = −u(x, t) + h + in(x, t) + ∫ w(x − x′) σ(u(x′, t)) dx′,
where h is the resting level and in(x, t) the sum of external inputs, which may vary in time.
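A discretized version of this field dynamics is straightforward to simulate. In the sketch below (all parameter values, including the kernel shape and input strength, are illustrative assumptions), the interaction integral becomes a convolution of the sigmoided activation with a sampled kernel, and a localized input drives a self-stabilized peak.

```python
import numpy as np

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

def step(u, inputs, kernel, h=-5.0, tau=10.0, dt=1.0):
    """One Euler step of the field dynamics; the interaction integral
    becomes a convolution of the sigmoided activation with the kernel."""
    interaction = np.convolve(sigmoid(u), kernel, mode="same")
    return u + dt * (-u + h + inputs + interaction) / tau

x = np.arange(101)                                  # field sampled at 101 sites
kx = np.arange(-50, 51)
kernel = 8.0 * np.exp(-kx**2 / (2 * 3.0**2)) - 1.0  # local exc., global inh.

inputs = 6.5 * np.exp(-(x - 50)**2 / (2 * 4.0**2))  # localized input at site 50
u = np.full(101, -5.0)                              # field starts at resting level
for _ in range(500):
    u = step(u, inputs, kernel)
print(u.max(), u.argmax())  # a self-stabilized peak stands over the input site
```

Within the peak, activation is lifted well above the input level by local excitation, while global inhibition suppresses the rest of the field below the resting level, reproducing the qualitative picture of Figure 9b.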
3.3 Instabilities
Dynamical activation fields of this kind have two classes of attractor solutions. The first class consists of activation patterns that merely reflect
the input to the field. In these solutions, interaction plays only a minor role, so that the activation pattern approximates the input pattern:
u(x, t) ≈ in(x, t) + h (left panel of Fig. 9). This is the dynamic regime in
which most classical connectionist networks operate. The input function,
in(x, t), may link a sensory surface or another activation field characterized
by another dimension, y, to the dimension, x, represented by the activation
Figure 8: Multiple copies of an activation field over dimension, x (horizontal axis)
illustrate the effect of the sigmoidal nonlinearity. From front to back, a local peak
of activation with linearly increasing strength is assumed (grey fat line). The amplitude of the sigmoided activation (thin line) grows nonlinearly, being zero for
small activation and reaching saturation for large activation levels with a narrow
transition regime. This amplitude is illustrated on the left, tracing the sigmoidal
function itself as a function of the maximal level of activation in each copy of the
field.
field in question. These feedforward links may implement feature extraction,
mapping, or more complex operations such as association or correlation.
The input-driven pattern of activation is stable only as long as activation
is low enough so that interaction remains limited. Once the threshold of the
sigmoidal function is reached at any field site, local excitatory interaction
kicks in. The input-driven solution becomes unstable. Activation continues to grow under the influence of local excitatory interaction. As a local
peak grows, its outer rim moves away from the center, ultimately coming
within the distance from the center at which interaction is predominantly
inhibitory. This counteracts the growth that is driven by excitatory interaction, eventually reaching an equilibrium when the effect of local excitatory
interaction and inhibition at the outer boundaries of the peak balance. The
solution that emerges is an exemplar of the other category of attractor solutions, a localized, self-stabilized peak of activation (right panel of Fig. 9).
The dynamic instability dividing these two types of attractor solutions,
the largely sub-threshold activity mirroring the input and a supra-threshold
self-stabilizing peak, is called the detection instability.
Figure 9: Two different types of attractor solutions for a dynamic field. (a) In the input-driven solution, the field activation mirrors the input pattern. (b) A localized, self-stabilized peak is induced by stronger inputs. Within the peak, activation exceeds input while elsewhere activation is suppressed below resting level.
The detection instability occurs, for instance, when the amplitude of a single localized input is increased gradually
(Fig. 10). At a critical point, the input-driven solution becomes unstable
and a peak forms, ‘detecting’ the input. When the input level is dropped
again, local self excitation supports the peak for a range of input strengths
that previously were not sufficient to induce a peak from the input-driven
regime. This bistable regime stabilizes the detection decision against fluctuating input.
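The hysteresis of the detection decision can be reproduced numerically. In this sketch (illustrative parameters; the interaction is deliberately kept weak enough that a peak cannot survive without input support), the amplitude of a localized input is ramped up and then back down, recording the amplitude at which a peak first forms and the amplitude at which it is lost.

```python
import numpy as np

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

x = np.arange(101)
kx = np.arange(-50, 51)
# interaction kept weak: the peak needs input support to survive
kernel = 2.0 * np.exp(-kx**2 / 18.0) - 1.0

def relax(u, amp, steps=300, h=-5.0, tau=10.0, dt=1.0):
    """Let the field settle under a localized input of strength amp."""
    inputs = amp * np.exp(-(x - 50)**2 / (2 * 4.0**2))
    for _ in range(steps):
        conv = np.convolve(sigmoid(u), kernel, mode="same")
        u = u + dt * (-u + h + inputs + conv) / tau
    return u

amps = np.arange(0.0, 7.0, 0.25)
u = np.full(101, -5.0)
peak_up, peak_down = [], []
for a in amps:                   # slowly ramp the input strength up ...
    u = relax(u, a)
    peak_up.append(u.max())
for a in amps[::-1]:             # ... then ramp it back down
    u = relax(u, a)
    peak_down.append(u.max())
peak_down = peak_down[::-1]

formed = amps[np.argmax(np.array(peak_up) > 0)]   # detection on the way up
lost = amps[np.argmax(np.array(peak_down) > 0)]   # loss on the way down
print(formed, lost)
```

The peak forms at a higher input strength on the way up than the strength at which it collapses on the way down, tracing out the hysteresis loop of Figure 10.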
Another fundamental instability is linked to the capacity to select among
multiple inputs and is illustrated in Fig. 11. When two localized inputs are
provided to locations that are close to each other, a single self-stabilized
peak may form over an averaged location. This is due to local excitatory
interaction. If the distance between the two locations is increased, the single
averaging solution becomes unstable, yielding to a bistable regime in which a
peak may either be formed over one or over the other input location, but not
over both at the same time. Which input is selected depends on which site
receives larger input as well as any fluctuations in the field. Once a peak has
arisen over the site with stronger input, that selection decision is stabilized
so that even when input over the alternate site becomes larger, the selected
peak remains stable as illustrated in Fig. 11.
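The transition from averaging to selection can likewise be simulated. In this sketch (illustrative parameters; a steeper sigmoid is assumed, and the second input is kept somewhat weaker so that the outcome is deterministic rather than fluctuation-driven), two localized inputs are presented either close together or far apart.

```python
import numpy as np

def sigmoid(u, beta=10.0):   # steeper sigmoid than before, for a clean outcome
    return 1.0 / (1.0 + np.exp(-beta * u))

x = np.arange(101)
kx = np.arange(-50, 51)
kernel = 8.0 * np.exp(-kx**2 / 18.0) - 1.0  # local excitation, global inhibition

def settle(separation, steps=600, h=-5.0, tau=10.0, dt=1.0):
    """Two localized inputs straddling the field centre; the left one is
    stronger, so the selection outcome is deterministic in this sketch."""
    left, right = 50 - separation // 2, 50 + separation // 2
    inputs = (6.5 * np.exp(-(x - left)**2 / (2 * 3.0**2))
              + 4.0 * np.exp(-(x - right)**2 / (2 * 3.0**2)))
    u = np.full(101, h)
    for _ in range(steps):
        conv = np.convolve(sigmoid(u), kernel, mode="same")
        u = u + dt * (-u + h + inputs + conv) / tau
    return u

near = settle(separation=6)   # close inputs: one peak between the two sites
far = settle(separation=40)   # distant inputs: the stronger (left) site wins
print(near.argmax(), far.argmax())
```

With a small separation, a single averaging peak forms between the two input sites, pulled toward the stronger one; with a large separation, the field selects the stronger site and global inhibition suppresses activation at the other.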
Signatures of the selection instability can be observed in behavioral experiments. In general, tasks in which the correct response is not uniquely
specified are experimentally problematic as participants tend to develop interpretations or strategies. One exception is the preparation and initiation
of saccadic eye movements in response to visual stimuli, a process that is
so highly automatic and fast, that cognitive strategies have limited impact.
Participants who initially fixate a visual stimulus spontaneously make an
abrupt eye movement or saccade when a new visual target is presented. If
two targets are presented at the same time in two symmetrical visual locations (e.g., at the same distance at plus and minus 45 degrees from the
horizontal), the saccade depends on the metrics of these targets. If the targets
are metrically sufficiently close, so that both targets can be foveated at the
same time, an averaging saccade is directed approximately to the center of
an imagined line that connects the two targets. If the targets are further
Figure 10: In this schematic illustration of the detection instability, the strength of
a localized input is slowly increased in time before reaching a plateau (dashed line)
or, alternatively, decreased in time following the same trace. The induced activation
pattern is illustrated by the peak activation level (bold with arrow pointing to the
right for the increase; light grey with arrow pointing to the left for the decrease
of input strength). This exposes the bistability of the input-driven and the self-stabilized solutions at intermediate levels of input strength, which leads to hysteresis for the detection decision. This regime is delimited by a monostable regime of the input-driven solution at low input levels and a monostable regime of the self-stabilized solution at high input levels.
apart, one of the two targets is selected (Ottes, Gisbergen, & Eggermont,
1984). Across multiple trials, this leads to a bimodal distribution with either
target being selected on different trials.
In a dynamical field model of saccade preparation, the difference between
these two regimes is accounted for in terms of the transition from averaging to selection (Kopecz & Schöner, 1995). This transition occurs in an
activation field that represents the saccadic end-point in a manner that can
be linked to the neuronal activation patterns observed in the superior colliculus
as well as the frontal eye fields. A number of experimental features can
be explained by the model. One example concerns the interaction between
response probability and metrics. When trials in which the two targets
Figure 11: Two patterns of self-stabilized solutions of a dynamic field, the averaging and the selecting peak solutions, are separated by the selection instability.
Input is given into a dynamic field at two different sites of varying distance. At first
(point t1 in time), the two input sites are close to each other and the field averages
between these two locations forming one broad peak. As the input sites are moved
further apart (t2 ), the averaging peak solution becomes unstable. When the distance between input sites is further increased at t3 , the field selects the leftmost of
the two sites with slightly larger input strength. This selection is maintained even
after input strength at the right-most location becomes larger than at the left (t4 ).
are presented are intermixed with trials in which only one of the targets is
shown, the rate at which either target is experienced can be manipulated.
If one target appears more frequently than the other, then this leads to a
metric bias in the averaging regime: the averaging saccade does not fall into
the middle between the two targets, but onto a point closer to the more
frequent target (Kowler, 1990). In the bistable regime, in contrast, no such
bias is observed. Saccades fall onto either target, although the less frequent
target is selected less often. In the model, the probability of a target is
acquired through a learning mechanism, which accumulates memory traces
of prior activation patterns (Erlhagen & Schöner, 2002). In the averaging
mode, the peak is positioned so as to be sensitive to the asymmetrical
total input, while in the selection mode, the peak is sensitive only to the
local total input, which remains symmetric around either saccadic target
(Fig. 12).
This is only one of a range of experimental signatures through which
the metrics (that is, where the saccades fall) and timing of the saccades
(that is, when the saccades are initiated) reveal the underlying neural dynamic mechanisms (Kopecz & Schöner, 1995; Trappenberg, Dorris, Munoz,
& Klein, 2001). Signatures of the detection instability can be found in how
a self-stabilized peak is formed against the resistance of the fixation system
(Wilimzig, Schneider, & Schöner, 2006).
Finally, a third instability separates a regime in which self-stabilized
peaks persist only in the presence of localized input from a regime, in which
such peaks may be sustained in the absence of localized input. This is il-
Figure 12: Dynamic activation fields defined over the dimension of saccadic endpoint (fat line) are shown together with stimulus input (bottom), the memory trace
(middle) and their sum, the total input (thin line on top). (a) When the two targets
are metrically close, the self-stabilized activation peak straddles the two locations
and is sensitive to differences in total input due to a stronger preshape at the more
frequent left-most than at the less frequent right-most target. This leads to a bias
away from the arithmetic mean of the two target positions toward the more frequent
target (arrow). (b) When the two targets are metrically sufficiently far, the self-stabilized activation peak is positioned over one of the two target locations, here
the less frequent right-most one. Different strengths in memory trace do not affect
the positioning of the peak.
lustrated in Fig. 13. The first two time-slices demonstrate a self-stabilized
peak induced by localized input that decays to a homogeneous pattern at
the negative resting level when the localized input is removed at the second
time frame. If the balance between input and interaction is shifted toward
a stronger contribution of interaction (e.g., by increasing the resting level
of the field, Amari (1977)), then a self-stabilized peak induced by localized
input at the third point in time remains stable even when localized input is
completely removed at the fourth point in time. This pattern of sustained
activation has been invoked as a model of working memory in a variety
of contexts (e.g., Amit (1994); Spencer and Schöner (2003)). It becomes
unstable in the memory instability, when the balance between input and
interaction is shifted back toward a weaker interaction. A wide variety of
behavioral and neuronal signatures of the memory instability and this particular mechanism for working memory exist (review in Johnson, Spencer,
and Schöner (2007)). This includes the only other model system, in which
selection decisions can be reliably observed: having infants select a target for
goal-directed reaching movements in Piaget’s A not B paradigm. A Dynamical Field account for the rich phenomenology of this paradigm postulates
Figure 13: A self-stabilized peak of activation (fat line) induced by localized input
(thin line) in the first frame (time t1 ) does not persist when that input is removed
(time t2 ). The last two frames show how this is changed at a higher resting level or
for a stronger interaction kernel. Now, an input induced self-stabilized peak (time
t3 ) is sustained when the localized input is removed (time t4 ).
that infants may move through the memory instability during development,
but also when environmental or task conditions are varied (Thelen et al.,
2001; Schöner & Dineva, 2006).
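The contrast between the two regimes can be demonstrated numerically by varying the strength of the interaction kernel, one of the manipulations mentioned in the text (the other being a higher resting level). In this sketch (illustrative parameters), the same transient localized input leaves no trace when the interaction is weak, but a sustained peak when the interaction is stronger.

```python
import numpy as np

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

x = np.arange(101)
kx = np.arange(-50, 51)

def run(c_exc, h=-5.0, tau=10.0, dt=1.0):
    """Present a localized input for 300 steps, then remove it."""
    kernel = c_exc * np.exp(-kx**2 / 18.0) - 1.0
    inputs = 7.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))
    u = np.full(101, h)
    for _ in range(300):
        conv = np.convolve(sigmoid(u), kernel, mode="same")
        u = u + dt * (-u + h + inputs + conv) / tau
    for _ in range(300):  # localized input switched off
        conv = np.convolve(sigmoid(u), kernel, mode="same")
        u = u + dt * (-u + h + conv) / tau
    return u

decayed = run(c_exc=2.0)    # weak interaction: the peak decays with the input
sustained = run(c_exc=4.0)  # stronger interaction: the peak is sustained
print(decayed.max(), sustained.max())
```

The sustained peak in the second run persists at the location of the removed input, which is the working-memory behavior illustrated in Figure 13.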
Finally, let us return to the question posed at the outset: How do self-stabilized peaks in dynamic activation fields enable operations to be performed on neuronally represented metric information? Figure 14 illustrates
the operation invoked earlier: Input from a sensory surface is to be shifted
by 30 degrees to generate a motor command that is at that angle to the
source of sensory stimulation. That amount of shift is represented in a second activation field. The activation field receiving sensory input and this
shift field project together onto a motor field (e.g., in additive form with
a threshold or in a multiplicative “shunting” form). The geometry of this
projection is illustrated by two neuronal couplings indicated in thin and
dashed lines. The dashed line reflects the projection with zero shift, but is
not effective here because no activation is present in the shift field at zero
shift. The peak at a shift of 30 degrees makes this shifted projection effective, leading to the correct output peak.
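The geometry of this projection can be sketched as a discrete computation. In this illustration (the 5-degree discretization and the multiplicative combination rule are assumptions for the sketch, not values from the text), activation at sensory site x and shift site d jointly drives the motor site x + d.

```python
import numpy as np

# hypothetical discretization: 5-degree steps for both fields
angles = np.arange(0, 181, 5)     # sensory / motor dimension
shifts = np.arange(-45, 46, 5)    # shift dimension

def peak(axis, center, width=10.0):
    """A normalized localized activation peak over a field axis."""
    a = np.exp(-(axis - center)**2 / (2 * width**2))
    return a / a.max()

sensory = peak(angles, 90)  # sensory field: stimulus at 90 degrees
shift = peak(shifts, 30)    # shift field: the 30-degree shift is active

# multiplicative ("shunting") projection: activation at sensory site x and
# shift site d together drives the motor site x + d
motor_input = np.zeros(angles.size)
for i, x_deg in enumerate(angles):
    for j, d_deg in enumerate(shifts):
        target = x_deg + d_deg
        if 0 <= target <= 180:
            motor_input[np.searchsorted(angles, target)] += sensory[i] * shift[j]

print(angles[motor_input.argmax()])  # → 120
```

Because the zero-shift sites of the shift field carry no activation, only the 30-degree projection is effective, and the motor input peaks 30 degrees away from the sensory peak, as in Figure 14.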
4 Discussion
We have referred to self-stabilized peaks of activation arising from Dynamic
Fields as units of representation. Do they satisfy typical criteria for representation? The authors are not trained philosophers, so our discussion of
this issue, inspired by the list of such criteria presented in Rowlands (2006),
is necessarily amateurish. Hopefully, readers will be able to transform this
Figure 14: An architecture with three fields coupled systematically so that sensory
information is transmitted to all locations in the output field, but modulated by
the shift field. In this instance, activation in the sensory field at 90 deg is projected
only to the 130 deg location in the motor field, because the associated shift of 30
deg is activated. All other projections from 90 deg in the sensory input are inactive,
including the zero shift input indicated in dashed lines.
sketch into an acceptable argument.
The peaks definitely stand in relationship to something outside the nervous system. They are induced, for instance, by input that derives from sensory surfaces and reflects the structure of the environment. They may track
such inputs and thus actively maintain this relationship. They may also be
coupled to motor systems and bring about movement and thus change the
state of the organism in the outer world. In fact, when that happens, the
sensory information changes as well in a way that reflects the structure of
the world. The peaks continuously reflect how the system is embedded in
its environment.
On the other hand, the peaks may also become decoupled from their
initial sensory causes. This is obvious when peaks operate in the sustained
regime, where no specific sensory information is needed to maintain the
peaks. But this is also true, in a more subtle form, in the bistable regime
around the detection instability, in which peaks stabilize a representation
even as sensory information falls below an initial level of significance. Selection also reflects a partial decoupling, in which a peak no longer depends on
sensory information concerning the suppressed alternative locations along
the represented dimension. This capacity to stabilize selection decisions
provides a form of robust estimation (that is, it helps suppress outliers), perhaps the most concrete and simple form of decoupling from sensory information.
We illustrated, for instance, how in the selection regime the estimate of a
metric dimension represented by a peak becomes insensitive to priors about
that dimension. In this respect, Dynamic Fields go beyond the Bayesian
framework, in which priors always play their role.
The peaks of activation are functional in that they support a particular
mode of operation or of functioning, which is linked to a history of function.
In fact, quite literally, the capacity to form peaks in a particular field is
promoted by memory traces, which reflect that such peaks have been built
before. In the Dynamic Field Theory account of Piaget’s infants, young
infants are in an “out of sight, out of mind” mode, older infants are in a
mode in which they are capable of stabilizing decisions against habits or
distractors. Both modes are functional states that emerge at different points
during development and are linked to the history of the nervous system.
Infants may move from one to the other mode as they build up experience
in a given context.
Peaks in Dynamic Fields are not merely dictated by sensory inputs, as
happens in purely feed-forward neuronal networks. They may be generated
and maintained through neuronal interaction within the field. This may lead
to misrepresentation. For instance, a peak may be stabilized even though
its inducing stimulus has actually moved to a new location if that shift
occurred too fast or while sensory input strength was weak. In this case,
the peak misrepresents a sensory estimate. Or a peak may be stabilized
as a result of a selection decision although sensory evidence disfavors that
decision. Perseveration, as observed in Piaget’s A not B paradigm, is a form
of misrepresentation.
Finally, peaks can be operated on and can be integrated into complex
systems and interact in combination with other systems. We provided a
very simple example of an operation above. For a demonstration of integration into a complex architecture see, for instance, Simmering, Schutte, and
Spencer (2007) in the context of spatial working memory. It is the very stability of peak solutions that gives them the robustness required to link them
into larger neuronal dynamics, so that they do not lose their identity and
their link and linkability to sensory input. Stability is also critical for linking
dynamic fields in closed loop to sensory-motor systems. Having said that,
we recognize that we are only at the beginning of a fuller understanding of
how the principles of Dynamical Field Theory will impact on an ultimate
account of complex, fully integrated cognition.
If we accept that Dynamic Fields with self-stabilized peaks deliver legitimate forms of representation, then we obtain the embodied and situated
aspect of representation for free. We have emphasized how self-stabilized
peaks may remain linked to the sensory and motor surfaces and have hinted
at their neurophysiological foundation. A practical proof of embodiment
comes from an ensemble of robotic implementations (e.g., Schöner, Dose,
and Engels (1995); Bicho, Mallet, and Schöner (2000); Erlhagen and Bicho (2006); Faubel and Schöner (in press)). These demonstrate that using
fairly simple sensory and motor systems, the Dynamic Field concepts are
sufficient to endow autonomous robots with elementary forms of cognition.
The situatedness of Dynamic Fields goes beyond the immediate coupling to
the sensed environment. The mechanism of prestructuring activation fields through memory traces of activation patterns provides a way in which the structure of an environment may be acquired such that it directly impacts the units of representation and the associated behaviors.
One of the most conceptually challenging implications of Dynamical Systems Thinking is often perceived as a limitation: What is the ultimate account of cognition delivered by Dynamical Systems Thinking? Unlike in classical, information-processing accounts of cognition, it is not plausible within Dynamical Systems Thinking that a complete model would be the final product, a model whose modules have fixed functions and which captures the complete range of behavioral functions. Instead, Dynamical
Systems Thinking suggests a view in which an organism and its nervous
system are immersed in complex structured environments, endowed with a
history that has left its traces. Immersion and history together structure
a huge, complex dynamical system. That system would not be arbitrarily shapeable by experience. It would be governed by constraints, such as
the principle of stability, the need for instabilities to enable flexibility, the
existence of different dynamic regimes that support identifiable functions.
Dynamical Systems Theory would consist of identifying such principles.
In this conception, coupling is central rather than the forward flow of
information processing. But uncovering the coupling structure of that big
dynamical system does not subsume the role the concept of “architecture”
plays in information processing thinking. This is because any subsystem
of the big dynamical system may undergo qualitative change, modifying its
function and forming new substructures through coupling with other subsystems. Such change may be brought about by unspecific changes in the
environment, in the system’s own experience, or in its internal structure.
Unspecific and graded changes may lead to the appearance of specific functions that were implicit in the neuronal dynamics and are lifted out of it as favorable conditions are created. These functions may be created “on the spot”; they do not necessarily reside somewhere waiting to be called up. They are, instead, potentialities of the nervous system that may “emerge”
if conditions are right (Schöner & Dineva, 2006). This also implies that there
may be multiple causes for any particular function to arise and, conversely,
that any individual subsystem may be involved in multiple functions. Again,
understanding cognition then consists most likely of the understanding of
general constraints, of limitations, symmetries, or modes of operation rather
than of an exhaustive account of cognitive function.
References
Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type
neural fields. Biological Cybernetics, 27, 77-87.
Amit, D. J. (1994). The Hebbian paradigm reintegrated: Local reverberations as internal representations. Behavioral and Brain Sciences,
18 (4), 617-626.
Bicho, E., Mallet, P., & Schöner, G. (2000). Target representation on an
autonomous vehicle with low-level sensors. The International Journal
of Robotics Research, 19, 424-447.
Blumberg, M. S. (2005). Basic instinct: The genesis of behavior. New York:
Thunder’s Mouth Press.
Braun, M. (1993). Differential equations and their applications (4 ed.).
Springer Verlag, New York.
Deco, G., & Schürmann, B. (2000). Information dynamics: Foundations
and applications. New York, NY: Springer Verlag.
Erlhagen, W., & Bicho, E. (2006). The dynamic neural field approach to
cognitive robotics. Journal of Neural Engineering, 3 (3), R36-R54.
Erlhagen, W., & Schöner, G. (2002). Dynamic field theory of movement
preparation. Psychological Review, 109, 545-572.
Faubel, C., & Schöner, G. (in press). Learning to recognize objects on the
fly: a neurally based dynamic field approach. Neural Networks.
Haken, H. (1983). Synergetics–an introduction (3 ed.). Springer Verlag,
Berlin.
Hock, H. S., Schöner, G., & Giese, M. A. (2003). The dynamical foundations of motion pattern formation: Stability, selective adaptation, and
perceptual continuity. Perception & Psychophysics, 65, 429-457.
Hoppensteadt, F. C., & Izhikevich, E. M. (1997). Weakly connected neural
networks. Springer Verlag New York Inc.
Johnson, J. S., Spencer, J. P., & Schöner, G. (2007). Moving to higher
ground: The dynamic field theory and the dynamics of visual cognition. New Ideas in Psychology.
Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain
and behavior. The MIT Press.
Kopecz, K., & Schöner, G. (1995). Saccadic motor planning by integrating visual information and pre-information on neural, dynamic fields.
Biological Cybernetics, 73, 49-60.
Kowler, E. (1990). The role of visual and cognitive processes in the control
of eye movement. In E. Kowler (Ed.), Eye movements and their role
in visual and cognitive processes (p. 1-70). Elsevier.
Latash, M. (1993). Control of human movement. Human Kinetics.
Marr, D. (1982). Vision. W. H. Freeman & Co., New York.
Ottes, F. P., Gisbergen, J. A. M. van, & Eggermont, J. J. (1984). Metrics
of saccade responses to visual double stimuli: two different modes. Vision Research, 24, 1169-1179.
Perko, L. (1991). Differential equations and dynamical systems. Berlin:
Springer Verlag.
Piaget, J. (1952). The origins of intelligence in children. New York: International Universities Press.
Reichardt, W., & Poggio, T. (1976). Visual control of orientation behaviour
in the fly: I. A quantitative analysis. Quarterly Reviews in Biophysics,
9, 311-375.
Riegler, A. (2002). When is a cognitive system embodied? Cognitive Systems
Research, 3, 339-348.
Rowlands, M. (2006). Body language: Representation in action. Cambridge,
MA, USA: MIT Press.
Schöner, G., & Dineva, E. (2006). Dynamic instabilities as mechanisms for
emergence. Developmental Science, 10, 69-74.
Schöner, G., Dose, M., & Engels, C. (1995). Dynamics of behavior: Theory
and applications for autonomous robot architectures. Robotics and
Autonomous Systems, 16, 213-245.
Schöner, G., & Kelso, J. A. S. (1988). Dynamic pattern generation in
behavioral and neural systems. Science, 239, 1513-1520.
Searle, J. R. (1983). Intentionality — An essay in the philosophy of mind.
Cambridge University Press.
Simmering, V. R., Schutte, A. R., & Spencer, J. P. (2007). Generalizing the
dynamic field theory of spatial cognition across real and developmental
time scales. Brain Research (doi:10.1016/j.brainres.2007.06.081).
Spencer, J. P., & Schöner, G. (2003). Bridging the representational gap
in the dynamical systems approach to development. Developmental
Science, 6, 392-412.
Thelen, E., Schöner, G., Scheier, C., & Smith, L. (2001). The dynamics
of embodiment: A field theory of infant perseverative reaching. Brain
and Behavioral Sciences, 24, 1-33.
Thelen, E. (1995). Time scale dynamics and the development of an embodied cognition. In R. F. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (p. 69-100). Cambridge, MA: MIT Press.
Trappenberg, T. P. (2002). Fundamentals of computational neuroscience.
Oxford, UK: Oxford University Press.
Trappenberg, T. P., Dorris, M. C., Munoz, D. P., & Klein, R. M. (2001).
A model of saccade initiation based on the competitive integration of
exogenous and endogenous signals in the superior colliculus. Journal
of Cognitive Neuroscience, 13 (2), 256-271.
Wilimzig, C., Schneider, S., & Schöner, G. (2006). The time course of
saccadic decision making: Dynamic field theory. Neural Networks, 19,
1059-1074.
Wilson, H. R. (1999). Spikes, decisions, and actions: Dynamical foundations
of neurosciences. Oxford University Press.