10.4. What follows from the fact that we consider some neurons to be neighbors?
(translation Rafał Opiał, [email protected])
The fact that some neurons are considered to be adjacent (to be neighbors) has very important
consequences. When, during the teaching process, a certain neuron becomes the winner and is
subjected to teaching, its neighbors are taught along with it. I will soon show you how this
happens, but first let me remind you how the teaching of a single neuron proceeds in
self-learning networks (fig. 10.13).
Fig. 10.13. Self-learning process for a single neuron
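To make the rule pictured in fig. 10.13 concrete, here is a minimal sketch in Python of the
winner-pulled-towards-input update used in Kohonen-style self-learning: the weight vector moves
a fraction of the way towards the presented input signal. The function name self_learning_step
and the value of the teaching coefficient eta are illustrative assumptions, not taken from the
original text.

    import numpy as np

    def self_learning_step(weights, signal, eta=0.1):
        # Pull the weight vector a fraction `eta` of the way
        # towards the input signal.
        return weights + eta * (signal - weights)

    w = np.array([0.2, 0.8])        # initial "innate" preferences
    x = np.array([1.0, 0.0])        # input signal shown during teaching
    w = self_learning_step(w, x)    # -> [0.28, 0.72], closer to x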
Now compare fig. 10.14 and notice what follows from it: the winner neuron (marked with a
navy-blue point) is subjected to teaching because its initial weights were similar to the
components of the signal shown during the teaching process (green point). What happens here is
therefore only an amplification and consolidation of the natural, "innate" preferences of this
neuron; you could observe the same in other self-learning networks. In the figure it looks as if
the "winner" were strongly attracted by the input point that caused this very neuron to become
the winner: its weight vector (and the point representing this vector in the figure) moves
strongly towards the point representing the input signal.
The neighbors of the winner neuron (yellow points lightly tinted with red) are not so lucky.
Regardless of what their initial weights, and consequently their output signals, were, they are
taught to tend to recognize exactly that input signal for which their "remarkably talented"
neighbor turned out to be the winner! To be fair, though, the neighbors are taught slightly less
intensively than the winner (the arrows indicating the magnitudes of their displacements are
visibly shorter). One of the important parameters characterizing networks with neighborhood is
precisely the coefficient specifying how much more weakly the neighbors should be taught than
the winner itself. Notice also that some neurons (plain yellow points) whose parameters would
have predestined them far better for teaching (they were much closer to the input point) did not
undergo any teaching at all during this step.
Fig. 10.14. Self-learning of the winning neuron and its neighbors
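The sketch below shows this step for a simple chain of neurons: the winner is found by comparing
weight vectors with the input signal, and its immediate neighbors in the chain are taught towards
the same signal, but with a visibly smaller coefficient. The names kohonen_step, eta and
neighbor_eta, and both coefficient values, are mine, introduced only for this illustration.

    import numpy as np

    def kohonen_step(weights, signal, eta=0.5, neighbor_eta=0.2):
        # `weights` has one row per neuron; the winner is the neuron
        # whose weight vector lies closest to the input signal.
        distances = np.linalg.norm(weights - signal, axis=1)
        winner = int(np.argmin(distances))
        weights[winner] += eta * (signal - weights[winner])
        # Neighbors in the chain are taught towards the same signal,
        # but less intensively than the winner itself.
        for j in (winner - 1, winner + 1):
            if 0 <= j < len(weights):
                weights[j] += neighbor_eta * (signal - weights[j])
        return winner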
What will be the result of such a strange teaching method?
Well, if the input signals presented to the network form clearly distinguishable clusters, then
the individual neurons will strive to occupy (with their weight vectors) positions in the
centers of these clusters, and adjacent neurons will "cover" neighboring clusters. Such a
situation is presented in fig. 10.15, where green dots represent input signals and red stars
mark the locations (in the same coordinate system) of the weight vectors of the individual
neurons.
Fig. 10.15. Result of self-learning – clustering of the input data
A much more difficult situation arises when the input signals are uniformly distributed over
some area of the input signal space, as shown in fig. 10.16. Then the neurons of the network
will tend to "share" the task of recognizing these signals, so that each subset of signals has
its own "guardian angel" in the form of a neuron that detects and recognizes all signals from
one sub-area, while another neuron handles the signals from another sub-area, and so on.
Fig. 10.17 illustrates this.
Fig. 10.16. Self-learning with a uniform distribution of input data presents a difficult task
for the neural network
Fig. 10.17. Locations of the weight vectors of self-learning neurons (larger circles) at points
of the input space where such neurons can represent sub-sets of the input signals (small circles
of the same color)
Looking at this, one comment seems necessary. It may not be immediately obvious to you that,
for a set of points appearing at random from some area and a systematically conducted teaching
process, the location finally occupied by the point representing the neuron's weights will be
the central location of the set. But that is how it actually is, as can be seen in fig. 10.18.
Fig. 10.18. Mutual compensation of the pulls from different input vectors acting on the weight
vector of a self-learning neuron when it is located in the center of a data cluster
As seen from this figure, when a neuron (represented, as usual, by its weight vector) occupies
the location in the center of the "nebula" of points it is meant to recognize, the further
course of teaching cannot move it permanently away from this location, because the different
points that appear in the teaching sequence cause displacements that compensate each other. To
reduce the neuron's "yawing" around its final location, Kohonen networks often apply the
principle of teaching with a decreasing teaching coefficient: the essential movements,
associated with each neuron finding its proper location, happen mostly at the beginning of
teaching (while the teaching coefficient is still large), whereas the points shown at the end of
the teaching process influence the position of the neuron only very weakly, so that after some
time it fixes its location and does not change it anymore.
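You can verify this compensation effect numerically. In the sketch below, written under an
assumed 1/t schedule for the teaching coefficient, the weight vector of a single neuron taught
with points drawn from one cluster settles at the cluster's center; with this particular
schedule it is in fact exactly the running mean of the presented points.

    import numpy as np

    rng = np.random.default_rng(0)
    # A "nebula" of input points centered at (2, 3).
    cluster = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(1000, 2))

    w = np.zeros(2)                   # weight vector, far from the cluster
    for t, x in enumerate(cluster, start=1):
        eta = 1.0 / t                 # decreasing teaching coefficient
        w += eta * (x - w)

    print(w)                          # close to the cluster center [2, 3]
    print(cluster.mean(axis=0))       # and equal to the mean of the points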
Besides this process of weakening consecutive corrections, another process takes place during
the teaching of the network: the range of the neighborhood systematically decreases. At the
beginning, the changes resulting from the neighborhood affect many neurons at every step of
teaching; gradually the neighborhood narrows and tightens, so that in the end each neuron stands
alone, devoid of neighbors (fig. 10.19).
Fig. 10.19. Shrinking of the neighborhood area during the self-learning process
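Both processes, the weakening of corrections and the shrinking of the neighborhood, are usually
implemented as simple schedules over the teaching steps. The linear schedule and the concrete
start and end values below are only an assumed illustration; many other decreasing functions
work as well.

    def decayed(step, total_steps, start, end):
        # Linear interpolation from `start` down to `end`.
        return start + (end - start) * step / total_steps

    total = 1000
    for step in (0, 250, 500, 750, 1000):
        eta = decayed(step, total, 0.5, 0.01)     # teaching coefficient
        radius = decayed(step, total, 5.0, 0.0)   # neighborhood range:
        print(step, round(eta, 3), radius)        # in the end, no neighbors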
If you think about all the above information, you will notice one thing: after the teaching
finishes, the neurons of the topological layer will have divided the input signal space among
themselves, so that each area of this space is signaled by a different neuron. What is more, as
a consequence of the influence of the neighborhood, those neurons which you regarded as adjacent
will demonstrate the ability to recognize input objects that are close, that is, similar to one
another. This will turn out to be very convenient and useful, because this kind of
self-organization is the key to the remarkably intelligent applications of networks as
self-organizing representations, which we considered in particular examples in the first
sub-chapters of this chapter.
When presenting the results of teaching a Kohonen network, you will encounter one more
difficulty, which is worth discussing before you are confronted with real simulation results, so
that everything is completely clear later. When presenting results (in the form of the changes,
occurring during teaching, in the locations of the points corresponding to the individual
neurons), you must also be able to watch what happens with the adjacent neurons. In figure 10.14
you could easily correlate what happened to the "winner" neuron and its neighbors, because there
were just a few points, and identifying the neighbors on the basis of their changed color was
easy and convenient. In the simulations you will sometimes have to deal with hundreds of
neurons, and such a presentation technique cannot be maintained. Therefore, when presenting the
activity of Kohonen networks, a commonly practiced technique is to draw a "map" of the neuron
positions with the neighborhood relations marked, as in figure 10.20.
Fig. 10.20. One step of Kohonen network learning
In the figure you can see that the points corresponding to adjacent neurons are connected with
lines. If, as a result of the teaching process, the points shift, the corresponding lines shift
with them. Of course, this should apply to all the neurons and all the neighborhood relations,
but in figure 10.20, for maximum clarity, I showed only those lines which referred to the
"winner" neuron and its neighbors, and omitted all the other connections. You will see this in
detail for the full network in a while, in the program Example 11, which I prepared for you.
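Before that, here is a sketch of how such a map can be drawn, assuming a rectangular grid of
neurons with two weights each and using matplotlib (neither of which is prescribed by the text):
adjacent neurons are simply joined by line segments along the rows and columns of the grid.

    import numpy as np
    import matplotlib.pyplot as plt

    def draw_map(weights, rows, cols):
        # `weights` has shape (rows*cols, 2): one 2-D weight vector
        # per neuron of a rectangular topological layer.
        grid = weights.reshape(rows, cols, 2)
        for r in range(rows):                          # horizontal neighbors
            plt.plot(grid[r, :, 0], grid[r, :, 1], "k-")
        for c in range(cols):                          # vertical neighbors
            plt.plot(grid[:, c, 0], grid[:, c, 1], "k-")
        plt.plot(weights[:, 0], weights[:, 1], "ro")   # the neurons themselves
        plt.show()

    # Example: a 5x5 map with random initial weights.
    draw_map(np.random.default_rng(1).random((25, 2)), 5, 5)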