The Random Neural Network: the model and some of its applications
Erol Gelenbe (1,2) and Varol Kaptan (2)
(1) Denis Gabor Professor, Head of Intelligent Systems and Networks
www.ee.imperial.ac.uk/gelenbe
(2) Dept of Electrical and Electronic Engineering, Imperial College, London SW7 2BT
Prof. Gelenbe is grateful to his PhD students who
have worked or are working with him on random
neural networks and/or their applications
- Univ. of Paris: Andreas Stafylopatis, Jean-Michel Fourneau,
Volkan Atalay, Myriam Mokhtari, Vassilada Koubi, Ferhan
Pekergin, Ali Labed, Christine Hubert
- Duke University: Hakan Bakircioglu, Anoop Ghanwani,
Yutao Feng, Chris Cramer, Yonghuan Cao, Hossam
Abdelbaki, Taskin Kocak
- UCF: Rong Wang, Pu Su, Peixiang Liu, Will Washington,
Esin Seref, Zhiguang Xu, Khaled Hussain, Ricardo Lent
- Imperial: Arturo Nunez, Varol Kaptan, Mike Gellman,
Georgios Loukas, Yu Wang
Thank you to the agencies and companies who have
generously supported RNN work over the last 15 yrs
- France (1989-97): ONERA, CNRS C3,
Esprit Projects QMIPS, EPOCH and
LYDIA
- USA (1993-): ONR, ARO, IBM, Sandoz,
US Army Stricom, NAWCTSD, NSF,
Schwartz Electro-Optics, Lucent
- UK (2003-): EPSRC, MoD, General
Dynamics UK Ltd, EU FP6 for grant
awards for the next three years, hopefully
more ..
Outline
• Biological Inspiration for the RNN
• The RNN Model
• Some Applications
– modeling biological neuronal systems
– texture recognition and segmentation
– image and video compression
– multicast routing
Random Spiking Behaviour of Neurons
Work started as an individual basic research project,
motivated by a critical look at modeling biological neurons,
rather than using popular connectionist models
Biological characteristics of the model needed to include:
- Action potential “Signals” in the form of spikes of fixed
amplitude
- Modeling recurrent networks
- Random delays between spikes
- Conveying information along axons via variable spike rates
- Store-and-fire behaviour of the soma (the cell body of the neuron)
- Reduction of neuronal potential after firing
- Possibility of representing axonal delays between neurons
Mathematical properties that we hoped for, did not always expect to obtain, but which were in fact obtained:
- Existence and uniqueness of the solution to the models
- Closed-form analytical solutions for large systems
- Convergent learning for recurrent networks
- Polynomial speed for recurrent gradient descent
- Hebbian and reinforcement learning algorithms
- Analytical annealing
We exploited the analogy with queuing networks, and this also opened a new chapter in queuing network theory, now called "G-networks", in which richer models were developed.
Queuing Networks: Exploiting the Analogy
- Discrete state space, typically continuous time, stochastic
models arising in studying populations, dams, production
systems, communication networks ..
- Important theoretical foundation for computer systems
performance analysis
- Open (external Arrivals and Departures), as in Telephony,
or Closed (Finite Population) as in Compartment Models
- Systems comprised of Customers and Servers
- Theory is over 100 years old and still very active .. e.g.
- Big activity at Bell Labs, AT&T Labs, IBM Research
- More than 100,000 papers on the subject ..
Queuing Network <-> Random Neural Network
- Both open and closed systems
- Systems comprised of Customers and Servers; Servers = Neurons
- A customer arriving at a server increases the queue length by +1; an excitatory spike arriving at a neuron increases its soma's potential by +1
- A service completion at a server (a neuron firing) sends out a customer (a spike) and reduces the queue length (the potential) by 1
- An inhibitory spike arriving at a neuron decreases its soma's potential by 1
- Spikes (customers) leaving neuron i (server i) move to neuron j (server j) in a probabilistic manner
Mathematical Model: A "neural" network with n neurons
• The internal state of neuron i at time t is an integer $x_i(t) \ge 0$
• The network state at time t is a vector $x(t) = (x_1(t), \dots, x_i(t), \dots, x_k(t), \dots, x_n(t))$
• If $x_i(t) > 0$, we say that neuron i is excited and it may fire at $t^+$ (in which case it will send out a spike)
• Also, if $x_i(t) > 0$, neuron i will fire with probability $r_i \Delta t$ in the interval $[t, t+\Delta t]$
• If $x_i(t) = 0$, the neuron cannot fire at $t^+$
When neuron i fires:
- it sends a spike to some neuron j with probability $p_{ij}$
- its internal state changes: $x_i(t^+) = x_i(t) - 1$
Mathematical Model: A "neural" network with n neurons
The spike arriving at neuron j is:
- an excitatory spike with probability $p_{ij}^{+}$
- an inhibitory spike with probability $p_{ij}^{-}$
with $p_{ij} = p_{ij}^{+} + p_{ij}^{-}$ and $\sum_{j=1}^{n} p_{ij} \le 1$ for all $i = 1, \dots, n$ (the remainder $d_i = 1 - \sum_{j=1}^{n} p_{ij}$ is the probability that the spike departs from the network)
From neuron i to neuron j:
- the excitatory weight (rate) is $w_{ij}^{+} = r_i p_{ij}^{+}$
- the inhibitory weight (rate) is $w_{ij}^{-} = r_i p_{ij}^{-}$
- the total firing rate is $r_i = \sum_{j=1}^{n} (w_{ij}^{+} + w_{ij}^{-})$
To neuron i, from outside the network:
- external excitatory spikes arrive at rate $\Lambda_i$
- external inhibitory spikes arrive at rate $\lambda_i$
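The dynamics described above can be simulated directly as a continuous-time Markov chain. The following minimal sketch (an illustration, not taken from the slides) uses an arbitrarily chosen 3-neuron network with parameter names following the slide notation, and estimates the fraction of time each neuron is excited, which is exactly the quantity the state equations below characterise in closed form.

```python
# Minimal sketch of the RNN spiking dynamics above, simulated as a continuous-time
# Markov chain with Gillespie-style event sampling. The 3-neuron network and all
# numerical rates are illustrative assumptions, not values from the slides.
import numpy as np

rng = np.random.default_rng(0)
n = 3
Lam = np.array([1.0, 0.5, 0.2])            # external excitatory arrival rates Λ_i
lam = np.array([0.1, 0.1, 0.1])            # external inhibitory arrival rates λ_i
r   = np.array([2.0, 2.0, 2.0])            # firing rates r_i
p_plus  = np.array([[0, .3, .2], [.2, 0, .3], [.3, .2, 0]])   # routing probabilities p+_{ij}
p_minus = np.array([[0, .1, .1], [.1, 0, .1], [.1, .1, 0]])   # routing probabilities p-_{ij}
d = 1.0 - (p_plus + p_minus).sum(axis=1)   # departure probabilities d_i

x = np.zeros(n, dtype=int)                 # neuron potentials x_i(t)
excited = np.zeros(n)                      # time-weighted indicator of x_i(t) > 0
t, T = 0.0, 5000.0
while t < T:
    active = (x > 0).astype(float)
    rates = np.concatenate([Lam, lam, r * active])   # rates of all possible next events
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    excited += (x > 0) * dt
    t += dt
    e = rng.choice(rates.size, p=rates / total)      # which event happens first
    if e < n:                                        # external excitatory spike arrives
        x[e] += 1
    elif e < 2 * n:                                  # external inhibitory spike arrives
        i = e - n
        x[i] = max(x[i] - 1, 0)
    else:                                            # neuron i fires: potential drops by 1
        i = e - 2 * n
        x[i] -= 1
        probs = np.concatenate([p_plus[i], p_minus[i], [d[i]]])
        k = rng.choice(2 * n + 1, p=probs)           # route the emitted spike
        if k < n:
            x[k] += 1                                # received as an excitatory spike
        elif k < 2 * n:
            x[k - n] = max(x[k - n] - 1, 0)          # received as an inhibitory spike
        # otherwise the spike departs from the network

print("fraction of time each neuron is excited:", excited / t)
```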
State Equations
Let $p(k,t) = \Pr[x(t) = k]$, where $\{x(t) : t \ge 0\}$ is a discrete state-space Markov process, and define
$k_{ij}^{+-} = k + e_i - e_j$,  $k_i^{+} = k + e_i$,  $k_{ij}^{++} = k + e_i + e_j$,  $k_i^{-} = k - e_i$.
The Chapman-Kolmogorov equations:
$$\frac{d}{dt} p(k,t) = \sum_{i,j} \left[ p(k_{ij}^{+-},t)\, r_i p_{ij}^{+}\, 1[k_j > 0] + p(k_{ij}^{++},t)\, r_i p_{ij}^{-} \right] + \sum_{i} \left[ p(k_i^{+},t)\,(\lambda_i + r_i d_i) + \Lambda_i\, p(k_i^{-},t)\, 1[k_i > 0] \right] - p(k,t) \sum_{i} \left[ (\lambda_i + r_i)\, 1[k_i > 0] + \Lambda_i \right]$$
Let $p(k) = \lim_{t \to \infty} \Pr[x(t) = k]$ and $q_i = \lim_{t \to \infty} \Pr[x_i(t) > 0]$.
Theorem: If the Chapman-Kolmogorov equations have a stationary solution, then it has the "product form"
$$p(k) = \prod_{i=1}^{n} q_i^{k_i} (1 - q_i), \qquad \text{where} \qquad 0 \le q_i = \frac{\Lambda_i + \sum_{j} q_j r_j p_{ji}^{+}}{r_i + \lambda_i + \sum_{j} q_j r_j p_{ji}^{-}} < 1.$$
Here $\Lambda_i$ ($\lambda_i$) is the external arrival rate of excitatory (inhibitory) spikes, $r_i$ is the firing rate of neuron $i$, and $q_i$ is the probability that neuron $i$ is excited.
Theorem (Gelenbe 93, Gelenbe-Schassberger 95): The system of non-linear equations
$$q_i = \frac{\Lambda_i + \sum_{j} q_j r_j p_{ji}^{+}}{r_i + \lambda_i + \sum_{j} q_j r_j p_{ji}^{-}}$$
has a unique solution if all the $q_i < 1$, $1 \le i \le n$.
Theorem (Gelenbe et al. 99): Let $g : [0,1]^v \to R$ be continuous and bounded. For any $\epsilon > 0$, there exists an RNN with two output neurons $q_{o^+}, q_{o^-}$ such that
$$\sup_{x \in [0,1]^v} |g(x) - y(x)| \le \epsilon \qquad \text{for} \qquad y(x) = \frac{q_{o^+}}{1 - q_{o^+}} - \frac{q_{o^-}}{1 - q_{o^-}}.$$
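In practice the $q_i$ can be computed numerically by iterating the non-linear equation above to its fixed point. The sketch below is an illustration, not code from the slides; the stopping tolerance and iteration cap are arbitrary choices.

```python
# Minimal sketch: solve q_i = (Λ_i + Σ_j q_j r_j p+_{ji}) / (r_i + λ_i + Σ_j q_j r_j p-_{ji})
# by fixed-point iteration, using the slide notation for the parameters.
import numpy as np

def rnn_excitation(Lam, lam, r, p_plus, p_minus, tol=1e-12, max_iter=100000):
    q = np.zeros(len(r))
    for _ in range(max_iter):
        num = Lam + (q * r) @ p_plus        # Λ_i + Σ_j q_j r_j p+_{ji}
        den = r + lam + (q * r) @ p_minus   # r_i + λ_i + Σ_j q_j r_j p-_{ji}
        q_new = np.minimum(num / den, 1.0)  # saturate at 1 (overloaded neuron)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

With the parameters of the earlier simulation sketch, rnn_excitation(Lam, lam, r, p_plus, p_minus) should converge to values strictly below 1, so the fixed point is the unique one described by the theorem above, and it should agree with the simulated excitation fractions.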
Random Neural Network
• Neurons exchange Excitatory and Inhibitory Spikes (Signals)
• Inter-neuronal Weights are Replaced by Firing Rates
• Neuron Excitation Probabilities obtained from Non-Linear State Equations
• Steady-State Probability is the Product of Marginal Probabilities
• Separability of the Stationary Solution based on Neuron Excitation Probabilities
• Existence and Uniqueness of Solutions for the Recurrent Network
• Learning Algorithms for the Recurrent Network are $O(n^3)$
• Multiple Classes (1998) and Multiple Class Learning (2002)
Some Applications
• Modeling Cortico-Thalamic Response …
• Supervised learning in the RNN (Gradient Descent)
• Texture based Image Segmentation
• Image and Video Compression
• Multicast Routing
Cortico-Thalamic Response to Somato-Sensory Input
(i.e. what does the rat think when you tweak her/his whisker?)
Input from the brain stem (PrV) and response at the thalamus (VPM) and cortex (SI), reprinted from M.A.L. Nicolelis et al., "Reconstructing the engram: simultaneous, multiple site, many single neuron recordings," Neuron, vol. 18, pp. 529-537, 1997
Rat Brain Modeling with the Random Neural Network
Network Architecture from Physiological Data
Comparing Measurements and Theory: Calibrated RNN Model and Cortico-Thalamic Oscillations
Simultaneous Multiple Cell Recordings (Nicolelis et al., 1997)
Predictions of the Calibrated RNN Mathematical Model (Gelenbe & Cramer '98, '99)
Oscillations Disappear when the Signaling Delay in Cortex is Decreased
Thalamic Oscillations Disappear when Positive Feedback from Cortex is Removed
When Feedback in Cortex is Dominantly Negative, Cortico-Thalamic Oscillations Disappear
Summary of findings
Gradient Computation for the Recurrent RNN is $O(n^3)$
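To illustrate where the $O(n^3)$ cost comes from, here is a minimal sketch (an illustration via implicit differentiation of the fixed point $q_i = N_i / D_i$, not necessarily the exact recursion of the 1993 learning paper): the gradients of an error $E(q)$ with respect to all the weights $w_{ij}^{\pm}$ follow from a single $n \times n$ linear solve, which is the cubic step.

```python
# Sketch (an assumption: implicit differentiation of the state equations) of the
# O(n^3) gradient computation for the recurrent RNN. q is a solution of the
# non-linear state equations, e.g. from the fixed-point iteration sketched earlier,
# with w+_{ij} = r_i p+_{ij} and w-_{ij} = r_i p-_{ij}; it is assumed that all q_i < 1.
import numpy as np

def rnn_gradients(q, lam, w_plus, w_minus, dE_dq):
    """Return dE/dw+ and dE/dw- for a scalar error E with gradient dE_dq w.r.t. q."""
    n = len(q)
    r = (w_plus + w_minus).sum(axis=1)                 # firing rates r_i
    D = r + lam + q @ w_minus                          # D_i = r_i + λ_i + Σ_j q_j w-_{ji}
    # Jacobian of F_i(q) = N_i/D_i - q_i:  J[i,k] = (w+_{ki} - q_i w-_{ki}) / D_i - δ_{ik}
    J = (w_plus.T - q[:, None] * w_minus.T) / D[:, None] - np.eye(n)
    a = np.linalg.solve(J.T, dE_dq)                    # adjoint vector: the O(n^3) step
    s = a / D
    # dE/dw+_{uv} = q_u (s_u - s_v),   dE/dw-_{uv} = q_u (s_u + q_v s_v)
    g_plus  = q[:, None] * (s[:, None] - s[None, :])
    g_minus = q[:, None] * (s[:, None] + (q * s)[None, :])
    return g_plus, g_minus
```

For supervised learning with a quadratic cost, dE_dq is simply (q - target) restricted to the output neurons, and each gradient-descent step can be followed by clipping the weights at zero so that the rates stay non-negative.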
Texture Based Object Identification Using the RNN, US Patent '99 (E. Gelenbe, Y. Feng)
1) MRI Image Segmentation
Brain Image Segmentation with the RNN
Extracting Abnormal Objects from MRI Images of the Brain
Separating Healthy Tissue from Tumor
Simulating and Planning Gamma Therapy & Surgery
Extracting Tumors from MRI T1 and T2 Images
2) RNN based Adaptive Video Compression: Combining Motion Detection and RNN Still Image Compression
RNN Neural Still Image Compression:
Find an RNN R that minimizes || R(I) - I || over a training set of images { I } (see the sketch below)
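A heavily hedged sketch of that training objective follows. The architecture (one input and one output neuron per pixel of an 8x8 block, plus a small hidden layer whose excitation probabilities would serve as the compressed code, with pixel intensities in [0,1] driving the external excitatory rates Λ) is an illustrative assumption, not the exact published compression scheme; it reuses rnn_gradients from the sketch above.

```python
# Illustrative training loop for minimizing Σ_I || R(I) - I ||^2 with an RNN.
# All sizes, rates and the pixel-to-Λ mapping are assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(1)
B = 8 * 8                                    # pixels per block
n = 2 * B + 16                               # input + output neurons + small hidden layer
inp, out = np.arange(B), np.arange(B, 2 * B)
w_plus  = rng.uniform(0.0, 0.05, (n, n))     # initial non-negative weights w+_{ij}
w_minus = rng.uniform(0.0, 0.05, (n, n))     # initial non-negative weights w-_{ij}
lam = np.full(n, 0.1)                        # external inhibitory rates λ_i
eta = 0.01                                   # gradient-descent step size
training_blocks = rng.random((64, 8, 8))     # stand-in data; real training uses image blocks

for block in training_blocks:
    Lam = np.zeros(n)
    Lam[inp] = block.ravel()                 # pixels drive the input neurons' excitatory rates
    r = (w_plus + w_minus).sum(axis=1)
    q = np.zeros(n)
    for _ in range(200):                     # fixed-point iteration of q_i = N_i / D_i
        q = np.minimum((Lam + q @ w_plus) / (r + lam + q @ w_minus), 1.0)
    dE_dq = np.zeros(n)
    dE_dq[out] = q[out] - block.ravel()      # gradient of 0.5 || R(I) - I ||^2 w.r.t. q
    g_plus, g_minus = rnn_gradients(q, lam, w_plus, w_minus, dE_dq)
    w_plus  = np.maximum(w_plus  - eta * g_plus,  0.0)   # keep the rates non-negative
    w_minus = np.maximum(w_minus - eta * g_minus, 0.0)
```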
RNN based Adaptive Video Compression
3) Analytical Annealing with the RNN: Multicast Routing
(Similar Results with the Traveling Salesman Problem)
• Finding an optimal many-to-many communication path in a network is equivalent to finding a Minimal Steiner Tree, which is an NP-Complete problem
• The best heuristics are the Average Distance Heuristic (ADH) and the Minimal Spanning Tree Heuristic (MSTH) for the network graph
• RNN Analytical Annealing improves the number of optimal solutions found by ADH and MSTH by approximately 10%
Selected References
E. Gelenbe, "Random neural networks with negative and positive signals and product form solution," Neural Computation, vol. 1, no. 4, pp. 502-511, 1989.
E. Gelenbe, "Stability of the random neural network model," Neural Computation, vol. 2, no. 2, pp. 239-247, 1990.
E. Gelenbe, A. Stafylopatis, and A. Likas, "Associative memory operation of the random network model," Proc. Int. Conf. Artificial Neural Networks, Helsinki, pp. 307-312, 1991.
E. Gelenbe, F. Batty, "Minimum cost graph covering with the random neural network," Computer Science and Operations Research, O. Balci (ed.), New York: Pergamon, pp. 139-147, 1992.
E. Gelenbe, "Learning in the recurrent random neural network," Neural Computation, vol. 5, no. 1, pp. 154-164, 1993.
E. Gelenbe, V. Koubi, F. Pekergin, "Dynamical random neural network approach to the traveling salesman problem," Proc. IEEE Symp. Syst., Man, Cybern., pp. 630-635, 1993.
A. Ghanwani, "A qualitative comparison of neural network models applied to the vertex covering problem," Elektrik, vol. 2, no. 1, pp. 11-18, 1994.
E. Gelenbe, C. Cramer, M. Sungur, P. Gelenbe, "Traffic and video quality in adaptive neural compression," Multimedia Systems, vol. 4, pp. 357-369, 1996.
C. Cramer, E. Gelenbe, H. Bakircioglu, "Low bit rate video compression with neural networks and temporal subsampling," Proceedings of the IEEE, vol. 84, no. 10, pp. 1529-1543, October 1996.
E. Gelenbe, Y. Feng, K.R.R. Krishnan, "Neural network methods for volumetric magnetic resonance imaging of the human brain," Proceedings of the IEEE, vol. 84, no. 10, pp. 1488-1496, October 1996.
E. Gelenbe, A. Ghanwani, V. Srinivasan, "Improved neural heuristics for multicast routing," IEEE J. Selected Areas in Communications, vol. 15, no. 2, pp. 147-155, 1997.
E. Gelenbe, Z.H. Mao, and Y.D. Li, "Function approximation with the random neural network," IEEE Trans. Neural Networks, vol. 10, no. 1, January 1999.
E. Gelenbe, J.M. Fourneau, "Random neural networks with multiple classes of signals," Neural Computation, vol. 11, pp. 721-731, 1999.
E. Gelenbe, E. Seref, and Z. Xu, "Simulation with learning agents," Proceedings of the IEEE, vol. 89, no. 2, pp. 148-157, 2001.
E. Gelenbe, K. Hussain, "Learning in the multiple class random neural network," IEEE Trans. Neural Networks, vol. 13, no. 6, pp. 1257-1267, 2002.
E. Gelenbe, R. Lent, and A. Nunez, "Self-aware networks and QoS," Proceedings of the IEEE, vol. 92, no. 9, pp. 1479-1490, 2004.