Attention-aided perception in sparse-coding networks
Janusz A. Starzyk, Yinyin Liu
Ohio University, Athens, OH
SPARSE CODING AND SPARSE STRUCTURE
• 10^12 neurons in the human brain are sparsely connected.

[Figure: a current-mode WTA circuit (signal = current) implementing top-down local competition among neurons n1, n2, n3 of a primary level.]

[Figure: hierarchical network with dual neurons, from the input layer up through levels 1, 2, 3, ..., N; on each level the local winner is selected as $s_i^{winner,\,level} = \max_{j \in N_i^{level+1}} \big[ \sum_{k \in N_j^{level}} w_{jk} s_k \big]$, $(i = 1, 2, \ldots, N^{level})$.]

OLIGARCHY-TAKES-ALL (OTA)
(4). Finding the winner network (back-propagation process)

FOR Layer = Top Layer-1: -1: 1
  FOR i = 1: number of neurons on (Layer)
    $S_i^{winner,\,layer} = \max_{j \in N_i^{layer+1}} \big[ \sum_{k \in N_j^{layer}} w_{jk} S_k^{layer} \big]$, $(i = 1, 2, \ldots, N^{layer})$
    $S_i^{layer} = S_i^{winner,\,layer}$
    $l_{ji}^{layer} = 1$ if $j$ is the local winner among $N_i^{layer+1}$; $l_{ji}^{layer} = 0$ if $j$ is not the local winner among $N_i^{layer+1}$
  ENDFOR
ENDFOR
(5). Finding final oligarchy Nfinal (feedforward process)

The pattern is fed forward again, but only through the links marked in the winner network:
$S_i^{layer+1} = \sum_{j \in N_i^{layer}} w_{ij} l_{ij}^{layer} S_j^{layer}$ if this sum reaches the threshold, and $S_i^{layer+1} = 0$ otherwise, where
$l_{ji}^{layer} = 1$ if $j$ is the local winner among $N_i^{layer+1}$ and $l_{ji}^{layer} = 0$ if it is not.
The neurons that remain active on the top level form the final oligarchy Nfinal.
(3). Learning in the winner network (feedforward process)

FOR Layer = 2: Top Layer
  FOR i = 1: number of neurons on (Layer)
    $S_i^{layer+1} = \sum_{j \in N_i^{layer}} w_{ij} l_{ij}^{layer} S_j^{layer}$ if $\sum_{j \in N_i^{layer}} w_{ij} l_{ij}^{layer} S_j^{layer} \geq \mathrm{threshold}$, and $S_i^{layer+1} = 0$ otherwise
    $w_{j,i}^{layer} = w_{j,i}^{layer} + \frac{1}{10}\, l_{ij}^{layer} S_i^{layer+1} S_j^{layer}$
  ENDFOR
ENDFOR

The final marker Nfinal is compared with the stored markers (N1, N2, N3, ...) to determine its category.
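To make the three passes above concrete, here is a minimal NumPy sketch, not the authors' implementation: connectivity is stored as dense weight matrices whose zero entries stand for absent synapses, local competition is resolved per neuron over its non-zero connections, the threshold of 0.5 is an arbitrary placeholder (only the 1/10 learning rate follows the update rule above), and all function and variable names are illustrative.

import numpy as np

def backpropagate_winner_links(weights, acts):
    """Step (4): for each neuron i on a layer, pick the post-synaptic neuron j
    with the largest total input sum_k w_kj * S_k, copy that activity down
    (S_i <- S_i^winner) and mark the link to j as a winner link (l = 1)."""
    links = [np.zeros_like(W) for W in weights]
    for L in range(len(weights) - 1, -1, -1):        # Top Layer-1 ... 1
        upper_input = acts[L] @ weights[L]           # total input of each neuron j
        new_act = acts[L].copy()
        for i in range(weights[L].shape[0]):
            post = np.nonzero(weights[L][i])[0]      # post-synaptic set N_i^{L+1}
            if post.size:
                j = post[np.argmax(upper_input[post])]
                new_act[i] = upper_input[j]          # S_i^layer = S_i^{winner,layer}
                links[L][i, j] = 1.0                 # mark the local-winner link
        acts[L] = new_act
    return links

def forward_through_winner_links(weights, links, s_in, threshold=0.5):
    """Step (5): feed the pattern forward again, only through winner links."""
    acts = [np.asarray(s_in, float)]
    for W, l in zip(weights, links):
        s = acts[-1] @ (W * l)                       # sum_j w_ij * l_ij * S_j
        s[s < threshold] = 0.0                       # suppressed below threshold
        acts.append(s)
    return acts

def hebbian_update(weights, links, acts, rate=0.1):
    """Step (3): w_ji += (1/10) * l_ij * S_i^{layer+1} * S_j^{layer}."""
    for L, (W, l) in enumerate(zip(weights, links)):
        W += rate * l * np.outer(acts[L], acts[L + 1])
    return weights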
[Figures: simulation results. Panels include "Number of active neurons on the top level vs. Number of input connections per neuron (l_in)", "Performance vs. Overall loss of neurons", and "Performance vs. Amount of information distortion" (percentage of correct recognition vs. the number of bits changed in the pattern, for sigma = 2.25 and sigma = 4); axes show the number of output neurons and the percentage of correct recognition, and the curves compare OTA with SOM and attention applied to the Face (Nface) and House (Nhouse) markers.]
[Figure: computation cost (simulation time); axes include CPU time, the number of common neurons, and the percentage of correct recognition.]

[Figure: number of active neurons on the top level of the network with OTA vs. hierarchical levels.]
• Competition is needed: find the neurons with stronger activities and suppress the ones with weaker activities.
• Multiple layers have to be used to transmit enough information and to provide "full connectivity" in the sparse structure.

• The signal goes through the network layer by layer.
• Local WTA competition is done on each layer.
• There are multiple local winner neurons on each level.
• Multiple winner neurons remain on the top level – oligarchy-takes-all (OTA).
  • The oligarchy is the obtained sparse representation.
  • It provides coding redundancy and robustness.
  • It increases the representational memory capacity.

Oligarchy-takes-all algorithm

(1). Data transmission (feedforward process)

FOR Layer = 2: Top Layer
  FOR i = 1: number of neurons on (Layer)
    $S_i^{layer+1} = \sum_{j \in N_i^{layer}} w_{ij} S_j^{layer}$ if this sum reaches the threshold, and $S_i^{layer+1} = 0$ otherwise
  ENDFOR
ENDFOR

(2). Finding Oligarchy

FOR Layer = 2: Top Layer
  FOR i = 1: number of neurons on (Layer)
    local WTA competition among the neurons of $N_i^{layer+1}$
  ENDFOR
ENDFOR
The neurons that remain active on the top level form the oligarchy.

Finding the sensory input activation pathway – how to find it?

(2). Finding the winner network (backpropagation process)

$S_i^{winner,\,layer} = \max_{j \in N_i^{layer+1}} \big[ \sum_{k \in N_j^{layer}} w_{jk} S_k^{layer} \big]$, $(i = 1, 2, \ldots, N^{layer})$, with $S_i^{layer} = S_i^{winner,\,layer}$.

Hierarchical learning network:
[Figure: the input pattern S enters the network; on primary level h, the set of pre-synaptic neurons of $N_4^{level+1}$ and the set of post-synaptic neurons of $N_4^{level}$ illustrate the local competitions; the sparse representation appears on the top level.]

[Figure: percentage of correct recognition for OTA and the Kohonen SOM with an increasing number of overall neurons.]

The visual attention: cognitive control over perception and representation building
• Object-based attention: when several objects are in the visual scene simultaneously, attention helps recognize the attended object.
• One candidate competition mechanism: a top-down feedback signal that synchronizes the activity of the target neurons that represent the attended object.
• A similar mechanism can also be applied to invariant representation building through continuous observation of various patterns of the same object.

(3). Applying attention signal

A pattern containing multiple perceptual objects is presented to the network, and its oligarchy, Nwin, is found by OTA. The attention signal is applied to the marker, Natt, of the attended object Catt (Catt ∈ {C1, C2, C3, ...}). The active neurons on the top level then include {Nwin, Natt}, and they have the same level of activation.
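The attention step can be illustrated with a small, self-contained Python sketch, again not the authors' code: the markers, indices, and activation values are made up, and the back-propagation and re-feedforward passes that sit between applying attention and obtaining Nfinal are omitted; it only shows how Natt is raised to the same activation level as Nwin and how a final oligarchy could be compared with the stored markers.

import numpy as np

def apply_attention(top_activity, markers, attended):
    """Drive the marker neurons Natt of the attended object Catt to the same
    activation level as the current winners, so {Nwin, Natt} are equally active."""
    s = np.asarray(top_activity, float).copy()
    level = s.max() if s.max() > 0 else 1.0
    s[markers[attended]] = level
    return s

def classify_by_marker_overlap(final_oligarchy, markers):
    """Compare Nfinal with the stored markers (N1, N2, N3, ...) and return the
    category whose marker overlaps it most."""
    final = set(int(i) for i in final_oligarchy)
    return max(markers, key=lambda c: len(final & set(markers[c])))

# Illustrative usage with made-up indices:
markers = {"face": [2, 7, 11], "house": [3, 8, 12]}   # neuronal markers N1, N2, ...
top = np.zeros(16)
top[[3, 5, 8]] = [0.9, 0.7, 0.8]                      # Nwin found by OTA (made up)
top = apply_attention(top, markers, attended="house")
print(classify_by_marker_overlap(np.flatnonzero(top), markers))   # prints "house"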
HIERARCHICAL SELF-ORGANIZING MEMORY
• The network consists of primary levels and secondary levels.
• Secondary neurons are used to provide "full connectivity" in the sparse structure.
• More secondary levels can increase the sparsity.
• On average, each neuron is connected to other neurons through about 10^4 synapses.

[Figure: local competitions on the network, shown across primary level h+1, secondary level s, and primary level h; the signal on n2 goes to $s_2^{h+1}$, the branches l1 and l3 are logically cut off, and $S_2^{h+1}$ becomes the local winner through local competition and feedback; $N_4^{level+1}$ is the winner among neurons 4, 5, 6, and the pathway $N_4^{level+1} \to N_4^{level}$ is selected.]
http://parasol.tamu.edu/groups/amatogroup/research/NeuronPRM/
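The point above about secondary levels providing "full connectivity" despite a small fan-in can be made concrete with a back-of-the-envelope estimate, added here as an illustration rather than taken from the poster: with l_in input connections per neuron, a top-level neuron can be reached from at most l_in^d neurons d levels below it, so roughly d >= log(N) / log(l_in) levels are needed before any of N inputs can influence any top-level neuron.

import math

def levels_for_full_connectivity(n_inputs, fan_in):
    """Smallest depth d with fan_in ** d >= n_inputs."""
    return math.ceil(math.log(n_inputs) / math.log(fan_in))

# Illustrative numbers only:
for fan_in in (6, 10, 20):
    print(fan_in, levels_for_full_connectivity(10_000, fan_in))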
Sparse coding in a sparsely-connected network:
• A sparse structure enables efficient computation and saves energy and cost when implementing a memory in hardware.
• Cortical learning is unsupervised learning: finding the sensory input activation pathway so that a small group of neurons becomes active on the top level, representing an object.
• The "grandmother cell" of J. V. Lettvin – only one neuron on the top level representing and recognizing an object – is the extreme case (C. Connor, "Friends and grandmothers," Nature, vol. 435, June 2005).
• This produces a sparse neural representation – "sparse coding".
ATTENTION-AIDED PERCEPTION
The process for attention-aided perception begins with:

(1). Learning a group of objects
Objects (C1, C2, C3, ...) → neuronal markers (N1, N2, N3, ...).

[Figure: performance with attention applied to the face marker and to the house marker, plotted against the percentage of missing neurons.]
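A toy sketch of this learning step, assuming a run_ota function like the feedforward and competition passes sketched earlier (all names illustrative):

def learn_markers(training_objects, run_ota):
    """Present each object C_k on its own, run the OTA pass, and store the
    resulting top-level oligarchy as that object's neuronal marker N_k."""
    return {name: frozenset(run_ota(pattern))
            for name, pattern in training_objects.items()}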