Lateral inhibition in neuronal interaction as a biological,
computational and linguistic commodity
CLAR-NET (Koutsomitopoulou 2004) is a model of the neuronal activation patterns
underlying language production and understanding, and within this framework we
explore lateral inhibition (LI) as a biological, computational and linguistic
commodity. The model uses Adaptive Resonance Theory equations (ART, Grossberg
1972 et seq.) and draws on natural language (NL) data mapped as nodes
representing the basic argument structure of the input.
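For illustration only, the sketch below integrates a generic Grossberg-style
shunting equation with on-center off-surround lateral inhibition in Python; it
is a minimal stand-in for, not a reproduction of, the CLAR-NET equations, and
the node count, signal function and parameter values are assumptions made for
the example.

import numpy as np

def shunting_step(x, I, A=1.0, B=1.0, C=0.25, dt=0.05):
    # One Euler step of dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i))
    #                             - (x_i + C) * sum_{k != i} f(x_k)
    f = np.maximum(x, 0.0) ** 2        # faster-than-linear signal function
    lateral = f.sum() - f              # off-surround: inhibition from the other nodes
    dx = -A * x + (B - x) * (I + f) - (x + C) * lateral
    return x + dt * dx

x = np.zeros(4)
I = np.array([0.9, 0.5, 0.3, 0.1])     # four competing input nodes
for _ in range(400):
    x = shunting_step(x, I)
print(np.round(x, 3))                  # contrast-enhanced activity pattern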
Biologically motivated cognitive modeling is primarily concerned with how
serial NL input is represented in dynamic parallel neural networks (NNs). For
reasons of biological plausibility, neural models in this area of research
often rely on unsupervised learning algorithms based on the Hebbian learning
rule. For instance, the Kohonen self-organizing network (Kohonen 1982) uses
unsupervised learning and is especially useful for modeling data whose
relationships are unknown.
However, models like Kohonen’s often sacrifice biological faithfulness to the
engineering success of the endeavor. In Kohonen architectures the connections
between a stimulated node and its closest neighbors are excitatory, whereas the
connections to units farther away are treated as inhibitory (Kohonen 1989,
Miikkulainen 1991). Such “long-distance” inhibitory links between neurons are
not cerebral (Loritz 1999, 2002).
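The point can be made concrete with a small, purely illustrative sketch of a
difference-of-Gaussians (“Mexican hat”) lateral-interaction profile of the kind
associated with Kohonen-style maps; the widths and gains are assumed values
chosen only to make the long-distance net inhibition visible.

import numpy as np

def mexican_hat(distance, sigma_e=2.0, sigma_i=6.0, a_e=1.0, a_i=0.5):
    # Net lateral influence on a unit at a given grid distance from the winner:
    # a narrow excitatory Gaussian minus a broader, weaker inhibitory Gaussian.
    excite = a_e * np.exp(-(distance ** 2) / (2 * sigma_e ** 2))
    inhibit = a_i * np.exp(-(distance ** 2) / (2 * sigma_i ** 2))
    return excite - inhibit

positions = np.arange(21)              # a 1-D map of 21 units
winner = 10                            # index of the maximally stimulated node
profile = mexican_hat(np.abs(positions - winner))
print(np.round(profile, 2))            # positive near the winner, negative far away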
The proposed CLAR-NET model shows how “minimal dipole anatomies” and LI help to
represent NL systematically in a biologically faithful model. LI is a
well-known property of neuronal behavior. Prior LI models proposed for NL (e.g.
Rumelhart and McClelland 1981, Cottrell and Small 1983, Cottrell 1985, Small et
al. 1988, Cottrell et al. 1988) incorporate LI but make no further provision
for other crucial neurobiological features, such as sensitivity to the
inhibition-to-excitation ratio (e.g. Rubenstein and Merzenich 2003, Mel and
Schiller 2004), long-term vs. short-term memory activation processes
(Koutsomitopoulou 2004), and the recognition that in a model like CLAR-NET
single neurons stand for subnetworks (cortical columns) of “self-similar”
neuronal anatomies, which are the principal units of cerebral computation
(Loritz 1999, 2002).
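As a minimal, hypothetical illustration of the inhibition-to-excitation
sensitivity just mentioned, the sketch below couples two shunting nodes through
mutual inhibition scaled by a ratio parameter; the equations and values are
illustrative and are not the CLAR-NET dipole definitions.

import numpy as np

def run_dipole(I1, I2, ie_ratio, steps=2000, dt=0.01):
    # Integrate two shunting nodes whose mutual inhibition is scaled by ie_ratio.
    x = np.zeros(2)
    I = np.array([I1, I2])
    for _ in range(steps):
        f = np.maximum(x, 0.0)
        lateral = ie_ratio * f[::-1]   # each node is inhibited by the other
        dx = -x + (1.0 - x) * I - (x + 0.5) * lateral
        x = x + dt * dx
    return x

for ratio in (0.2, 1.0, 5.0):
    print(ratio, np.round(run_dipole(0.8, 0.6, ratio), 3))
# Low ratios leave both nodes co-active; high ratios drive the weaker node down.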
With informative examples of homonymy, neologism and negation in NL, we show
how minimal dipole anatomies model NL within the CLAR-NET framework with
biological faithfulness. The appeal to LI is not wholly novel, but the
particular analyses offer a new way of modeling NL processing and of
quantifying fundamental notions such as “context” and “homonymy” on the basis
of the neurobiological reality of LI.
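A generic illustration of how LI can quantify “context” in homonym resolution,
in the spirit of the competition models cited above rather than the specific
CLAR-NET analyses: two hypothetical sense nodes share one lexical input while a
small context signal biases one of them; all parameters are assumed.

import numpy as np

def resolve(context_bias, steps=3000, dt=0.01, inhibition=2.0):
    senses = np.zeros(2)                     # two senses of an ambiguous word
    lexical = np.array([0.5, 0.5])           # the word itself excites both senses
    context = np.array([context_bias, 0.0])  # context input favors the first sense
    for _ in range(steps):
        f = np.maximum(senses, 0.0)
        dx = (-senses + (1.0 - senses) * (lexical + context)
              - (senses + 0.5) * inhibition * f[::-1])
        senses = senses + dt * dx
    return np.round(senses, 3)

print(resolve(0.0))   # no context: the two senses settle to equal activity
print(resolve(0.2))   # modest context: the favored sense dominates, the other is suppressed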
References
Koutsomitopoulou, E. (2004). A neural network model for the representation of
natural language. PhD thesis, Georgetown University, Washington, DC. Ann Arbor:
UMI. 65:6, 3137058.
Loritz, D. (1999; paperback ed. 2002). How the Brain Evolved Language. Oxford
University Press.