Neural Cell Assemblies for Practical Applications
Christian R. Huyck
Middlesex University
NW4 4BT, UK
([email protected])
www.cwa.mdx.ac.uk/hebb/hebb.html
and
Siri Bavan
Middlesex University
NW4 4BT, UK
[email protected]
ABSTRACT
Humans process information using their neurons and
our understanding of how this processing is done is
growing daily. It is widely agreed that neurons
represent concepts by highly recurrent neural networks
called Cell Assemblies (CAs). These CAs are learned
using a Hebbian learning rule. Using a model of CAs
that is simplified but largely consistent with our
understanding of neurophysiology, we may develop
associative memories. The Hebbian learning rule can
again be used to associate concepts so that synapses
between associated concepts have a lower weight than
synapses within concepts, but a higher weight than
synapses between unassociated concepts.
An associative memory based on this type of learning would be able to implement a host of practical applications, including datamining and
search engines. Our understanding of CA models is
incomplete, but it is clear that they can be used for
these practical applications.
Keywords: Cell Assembly, Recurrent Neural
Network, Search Engine, Hebbian Learning,
Associative Memory
1. Introduction
Cell Assemblies (CAs) are reverberating neural
circuits that form the neural basis of concepts. CAs
were first proposed by Hebb [3], and have
subsequently been widely studied. There is a large
amount of neuro-biological evidence for their
existence in the brain [1,2,11]. A concept is stored in
a large number of neurons that are connected via
synapses with large strengths. Neurons fire when they
get a large amount of input, so the activation of a few
of these neurons will cause the other neurons to
activate. The activation of a large portion of the
neurons in a CA is known as ignition and is the neural
correlate of remembering a concept. That is, an active
CA is a short-term or working memory.
CAs represent both short and long-term memories.
The brain does not contain concepts when it is created; concepts, and therefore CAs, must be learned.
A CA is learned by adjusting synaptic weights
between neurons in the CA. This is done in an
unsupervised manner so that the concepts are things
that are selected from the environment. The organism
sees features traveling around together, and the
neurons relating to those features have their
connections strengthened. This eventually leads to a
network of neurons that is capable of maintaining
activation. Concepts are thus groups of features that
tend to travel around together. One advantage of this
approach is that the concepts are grounded in the
environment. A concept is learned by repeated
presentation of examples of the concept. The concept
is then activated (the CA is ignited) by presentation of
an instance of the concept. The CA can be activated
by an instance it has not seen before, and the CA can
remain active even after the stimulus has ceased.
CAs are categorisers that develop by activating a set of neurons, some of which may be shared with other CAs. When an instance is presented, the CA for the category to which it belongs is activated.
Neurons can participate in more than one CA [11]; in practice, this means there can be more CAs than there are neurons [13]. A CA is thus a stable state [4] and an attractor state [7] of the network: it is a set of neurons such that, if a sufficiently strong subset is externally activated, the full set will self-activate.
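This completion behaviour can be illustrated with a small simulation. The following is a minimal sketch, assuming simple binary threshold neurons with hand-picked weights, threshold, and assembly size; it is not the model used in the papers cited here.

```python
import numpy as np

# Minimal sketch (not the authors' simulator): a CA as an attractor of a
# binary threshold network.  Neurons 0-9 form one assembly with strong
# mutual connections; externally activating half of them ignites the full
# set, and the assembly then keeps itself active without further input.
N = 20                       # total neurons (assumed size)
ca = list(range(10))         # indices of the neurons in the assembly
W = np.zeros((N, N))
for i in ca:
    for j in ca:
        if i != j:
            W[i, j] = 0.3    # strong intra-CA weight (assumed value)

theta = 1.0                  # firing threshold (assumed value)
state = np.zeros(N)
state[:5] = 1.0              # external activation of a subset of the CA

for _ in range(5):           # recurrent activity only; the stimulus is gone
    state = (W @ state >= theta).astype(float)

print(state[ca].sum())       # 10.0 -> the whole assembly has ignited
print(state[10:].sum())      # 0.0  -> neurons outside the CA stay silent
```

The assembly remains active after the external input stops, which is the sense in which an ignited CA acts as a short-term memory.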
Neurons in a CA have strong connections between
them, but they also have connections to neurons
outside the CA. These inter-CA connections allow a
sophisticated semantic net to emerge [10]. Concepts
that are related have strong connections leading to the
formation of related CAs. This feature can be
exploited to implement datamining systems and expert
systems. The connections from one CA to another are
insufficient to activate the second CA in isolation, but
when combined with other activation (evidence) the
second CA can be activated. This kind of induced
activation can be useful in setting up search engines
and datamining systems.
This is largely neuro-psychological theory, but we
have been working on computational models of these
systems. Unlike most connectionist systems, we are
trying to emulate the important properties of neurons
to get important psychological phenomena to emerge.
We have promising initial results, and hope that these
results can be transferred to industrial applications as
well as neuro-psychological explanations.
2. Learning Rules
Though there is neural evidence of Hebbian learning
[8], its precise computational implementation is not
clearly understood and is currently under debate. The
Hebbian learning rule states that the strength of a synapse connecting two neurons that tend to fire simultaneously will tend to increase [3]. An obvious
extension of this is that Hebbian learning leads to
synapses being correlators. That is, the synaptic
weight between two neurons is based on how likely
they are to fire together.
We have developed a simple learning rule that makes
synapses linear correlators [6]. When the pre-synaptic
neuron fires, if the post-synaptic neuron fires X
percent of the time, the weight will be X. This can be
modified by a constant so that the weight is X*C, but
it still acts as a linear correlator. There is no need for
the synaptic weight to be a linear correlator and this
could be replaced by any monotonic function.
However, synapses as linear correlators give us a
sound mathematical basis for our learning algorithm.
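The sketch below shows one way a synapse can behave as a linear correlator. It is illustrative only; the precise rule is described in [6], and the learning rate, the constant C, and the firing statistics used here are assumptions.

```python
import random

# Sketch of a synapse as a linear correlator (the exact rule is in [6]).
# Each time the pre-synaptic neuron fires, the weight is nudged toward C if
# the post-synaptic neuron also fired and toward 0 if it did not, so it
# settles near C * P(post fires | pre fires).
def correlator_update(w, pre_fired, post_fired, C=1.0, rate=0.01):
    if pre_fired:
        target = C if post_fired else 0.0
        w += rate * (target - w)
    return w

random.seed(0)
w = 0.0
for _ in range(5000):
    post_fired = random.random() < 0.7     # post fires on 70% of pre spikes
    w = correlator_update(w, pre_fired=True, post_fired=post_fired)
print(round(w, 2))                         # close to 0.7
```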
Synapses as correlators are the basis of learning.
However, these correlators will not be sufficient to
cause CAs to grow for a wide range of patterns. It
will not work for patterns that are too dense or too sparse. The compensatory learning rule allows for a wider range of patterns. It modifies the Hebbian rule by including the total synaptic strength of the pre- and post-synaptic neurons. When the strength of a synapse is increased, it is increased more if the sum of the synaptic strengths of the neuron is low; this encourages CAs to form for sparse patterns. Similarly, if the total synaptic strength of a neuron is high, the strength of a particular synapse is increased less during learning, and more is taken away when strength is reduced; this encourages dense patterns to break into separate CAs.
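As a rough illustration of the idea (not the exact rule, which is given in [6]), the Hebbian increase can be scaled by a factor that shrinks as the neuron's total synaptic strength approaches a target. The target W_B, learning rate, and scaling function below are assumptions.

```python
# Rough sketch of a compensatory modification to the Hebbian increase.
# The increase is scaled by how far the neuron's total synaptic strength
# falls below an assumed target W_B, so neurons with little total strength
# (sparse patterns) learn faster and neurons with a lot of total strength
# (dense patterns) learn more slowly.
def compensatory_increase(w, total_strength, W_B=5.0, rate=0.05):
    factor = max(0.0, (W_B - total_strength) / W_B)
    return w + rate * factor * (1.0 - w)

print(compensatory_increase(0.2, total_strength=1.0))  # sparse: larger step
print(compensatory_increase(0.2, total_strength=4.5))  # dense: smaller step
```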
3. A Cell Assembly Example
A simple example might help to explain the issues. A
network of neurons gets input directly from the
environment. This network sees various instances of
an object, for example a triangle. When an instance is
seen, neurons are activated. These neurons may react
to lines of certain orientations, or to the meeting of
two lines. Each time these neurons are coactive, the
strength between them is increased via Hebbian
learning, but if one is active without the other the
strength is decreased.
Each triangle is different so different neurons will be
active each time. However, a core subset of neurons
will be active together, leading to a mutual increase in
synaptic strength between neurons in the as yet
unformed CA.
Initially, the strength based on correlation alone is too small. As the
neurons are not used in many other CAs, the total
synaptic strength of the neurons is small. Thus,
compensatory learning will increase the synaptic
strength above the base correlation value. This will
enable the strength to grow sufficiently to enable some
self-stimulation of the CA. At this point, the CA will
consolidate rapidly, because it will now activate itself
causing a reinforcement loop with Hebbian learning.
Simultaneous firing causes learning, which in turn
increases the likelihood of firing. Using this
mechanism the CA for triangle is learned.
Now, a new triangle is presented to the system. Some
neurons are activated directly from the environment.
Those inside the triangle CA will cause other neurons
in the CA to fire leading to a cascade of neural
activity, and the reverberating CA is ignited.
Other neurons not in the CA may also be activated.
Stimulus from a triangle overlaps with that from a
square. Thus the square CA will also be stimulated;
however, the triangle CA is in competition with the
square and will suppress it, leaving only the triangle
CA active. Other CAs may be activated by the same
stimuli; for example, the triangle may be green, so the
green CA will also become active.
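The competition between the triangle and square CAs can be sketched as follows. The assembly sizes, weights, and threshold are assumed values, not parameters from the paper; the triangle assembly simply receives the stronger direct stimulus.

```python
import numpy as np

# Toy sketch of competition between two assemblies (triangle vs. square)
# through mutual inhibition.
n = 10                                       # neurons per assembly (assumed)
w_intra, w_inhib, theta = 0.3, -0.1, 1.0     # assumed weights and threshold
tri, sq = np.zeros(n), np.zeros(n)
tri[:6] = 1.0                                # triangle gets the stronger stimulus
sq[:4] = 1.0                                 # the overlapping square gets a weaker one

for _ in range(10):
    tri_input = w_intra * (tri.sum() - tri) + w_inhib * sq.sum()
    sq_input = w_intra * (sq.sum() - sq) + w_inhib * tri.sum()
    tri = (tri_input >= theta).astype(float)
    sq = (sq_input >= theta).astype(float)

print(tri.sum(), sq.sum())   # 10.0 0.0 -> triangle ignites, square is suppressed
```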
4. Associative Memory
CAs can easily be used for associative memory. They function well as a content-addressable memory; this can be demonstrated by training sub-components of a pattern as separate CAs and linking them so that they interact, sending signals to each other to activate the desired patterns. For example, a green-square CA has connections to both the green CA and the square CA. Activating the green-square CA by environmental presentation, and the colour CA via the question "what colour is it?", causes the green CA to become active.
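The numbers below are purely illustrative: input from either source alone stays below the ignition threshold, but the evidence from the green-square CA and the colour-question CA combines to ignite the green CA.

```python
# Sketch of induced activation between assemblies (assumed weights and
# threshold): neither inter-CA input ignites the green CA on its own, but
# the combined evidence does.
threshold = 1.0
from_green_square = 0.6        # assumed inter-CA input to the green CA
from_colour_question = 0.6     # assumed input from the active "colour" CA

print(from_green_square >= threshold)                          # False
print(from_colour_question >= threshold)                       # False
print(from_green_square + from_colour_question >= threshold)   # True: green ignites
```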
Exploiting the categorisation properties of CAs should lead to an associative memory model that is implicitly fuzzy and good at approximating patterns of associations. For example, two similar patterns (e.g. dog and cat) can be trained. When a particular object is presented, both CAs are stimulated, but they compete because they have mutually inhibitory links. Due to this inhibition only one CA ignites, and recall is based on the small differences between the two patterns. Defuzzification is thus implemented via a competitive inhibitory-excitatory process.
With CAs, there is the potential to realize variable binding [5]. Developing a binding strategy, for example one based on a location binder, could lead to CA-based systems that produce better scene-analysis models. It has been shown that variable binding is possible even with a conventional connectionist model [12], which puts CAs in a strong position to achieve such models.
5. Large Associative Memories
Large associative memory (LAM) models can be used to store composite patterns where each component is a concept or part of a concept. Once such a system is available, one can view the full picture before getting at the low-level details. This depends on being able to untangle the sub-patterns that make up the large memory, which is possible with a group of cell
assemblies that collaborate to form the model for
LAMs. LAMs are useful in modeling and
manipulating datamining systems. In our case, the
advantage lies in the flexibility in training the patterns
in a structured form of our choice so that it will reflect
the relationships between the sub-patterns. These
relationships may be based on semantics or on some
arbitrary concept. This kind of system can be used to
build LAMs. For example, consider the situation
where a person cannot remember the name of the band
who sang the song ‘Satisfaction’, but can remember
that the band had a hit with this song in the 60s, and
the singer was Mick Jagger. By activating these
nodes, along with the band node, the person can come
up with the name Rolling Stones; a sketch of this kind of retrieval appears below. Extracting information by spreading activation in this way would produce a highly practical datamining system that can be used by a novice, such as a manager. With databases it is impossible to return the Rolling Stones without an explicit key relationship. Categorisation is another major property of datamining systems, and it is more easily achieved with a CA-based system because each CA can co-exist with other CAs and yet respond independently. Changes in features are easier to introduce into CAs than into feed-forward nets, as there is less need for complete retraining of all the CAs involved in a system. CAs form an adaptive, incremental learning system.
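Following the Rolling Stones example, retrieval by spreading activation over weak inter-CA links can be sketched as below. The network, weights, and threshold are invented for illustration; they are not taken from the paper.

```python
# Sketch of retrieval by spreading activation over weak inter-CA links.
links = {
    "Satisfaction": {"Rolling Stones": 0.5},
    "60s hit":      {"Rolling Stones": 0.3, "Beatles": 0.3},
    "Mick Jagger":  {"Rolling Stones": 0.5},
    "band":         {"Rolling Stones": 0.2, "Beatles": 0.2},
}
threshold = 1.0

def spread(cues):
    """Sum the activation each target CA receives from the active cue CAs."""
    activation = {}
    for cue in cues:
        for target, weight in links.get(cue, {}).items():
            activation[target] = activation.get(target, 0.0) + weight
    return {c for c, a in activation.items() if a >= threshold}

# Only the assembly receiving enough combined evidence ignites.
print(spread(["Satisfaction", "60s hit", "Mick Jagger", "band"]))  # {'Rolling Stones'}
```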
Search engines: this can be implemented using
associative memory as above or by taking advantage
of categorisation. If a person finds one document they
like, they often want to see more documents similar to
the original. For example, if documents are grouped,
retrieving one gives access to all. Consider the case
where we ask for a document on "big dogs" and get several returned. Searching through the list, we find one we particularly like. Prior to this, all documents have been grouped. Using the original "big dogs" document we can retrieve other documents in its group. This can be further improved by exploiting the flexible architecture of CAs to, for
example, retrieve documents containing “big bad dog”
or “large puppy”. Documents are examples of a
pattern. Presenting the document activates the pattern.
Associative memories allow you to retrieve instances
of this pattern.
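As a small illustration of category-based retrieval, presenting a liked document ignites the CA for its group, giving access to the other documents trained into that group. The groups and document titles below are invented for illustration.

```python
# Sketch of category-based retrieval from pre-grouped documents.
groups = {
    "dogs": ["big dogs", "big bad dog", "large puppy"],
    "cats": ["house cats", "big cats"],
}

def similar_documents(liked_doc):
    """Return the other documents in whichever group the liked document belongs to."""
    for docs in groups.values():
        if liked_doc in docs:
            return [d for d in docs if d != liked_doc]
    return []

print(similar_documents("big dogs"))   # ['big bad dog', 'large puppy']
```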
6. Conclusion
CAs are a model of the neural basis of human
categorisation and associative memory. As such, they
can be used in three major applications: content
addressable memory, datamining, and search engines.
They potentially have substantial advantages over
other approaches. They are superior to symbolic
models because they have a basis for grounding
symbols in the environment. They are superior to feed-forward connectionist models because they can easily
implement incremental learning. They are superior to
RAAM models [9] because they are easy to train and
they are much more resistant to saturation.
Bibliography
[1] Abeles, M., H. Bergman, E. Margalit, and E.
Vaadia (1993) Spatiotemporal Firing Patterns in the
Frontal Cortex of Behaving Monkeys. Journal of
Neurophysiology 70(4):1629-38
[2] Fuster, J. (1995) Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate. MIT Press.
[3] Hebb, D. O. (1949) The Organization of Behavior.
John Wiley and Sons, New York
[4] Hopfield, J. (1982) Neural Networks and Physical
Systems with Emergent Collective Computational
Abilities. In Proceedings of the National Academy of
Sciences, USA 79 pp. 2554-8
[5] Huyck, Christian R. (2001) Cell Assemblies as an
Intermediate Level Model of Cognition. In Emerging
Neural Architectures based on Neuroscience. W.
Horn, J. Austin and D. Willshaw (eds.)
[6] Huyck, C. (2002) Cell Assemblies and Neural
Network Theory: From Correlators to Cell
Assemblies. Middlesex University Technical Report
CS-02-02
[7] Kohonen T. (1997) Self-Organizing Maps.
Springer ISBN:3-540-62017-6
[8] Markram, H., J. Lubke, M. Frotscher and B. Sakmann (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275:213-215
[9] Pollack, J. B. (1990) Recursive Distributed Representations. Artificial Intelligence 46, pp. 77-105
[10] Quillian, M. R. (1968) Semantic Memory. In
Semantic Information Processing M. Minsky (ed.)
Cambridge, MA: MIT Press
[11] Sakurai, Yoshio (1998) The search for cell
assemblies in the working brain. In Behavioural Brain
Research 91 pp. 1-13
[12] Sun, R. (1995) Robust reasoning: integrating rule-based and similarity-based reasoning. Artificial Intelligence 75, pp. 241-95
[13] Wickelgren, W. A. (1999) Webs, Cell Assemblies, and Chunking in Neural Nets. Canadian Journal of Experimental Psychology 53:1, pp. 118-31