BICS 2010
INFORMATIONAL THEORIES OF CONSCIOUSNESS: A REVIEW AND EXTENSION
Igor Aleksander and David Gamez
Imperial College London
Tononi, G. 2008. Consciousness as Integrated Information: a Provisional Manifesto. Biological Bulletin 215: 216-242.
INFORMATIONAL THEORIES OF CONSCIOUSNESS: A REVIEW AND EXTENSION
How (according to Tononi) does a neural network integrate information?
An alternative perspective from neural automata theory
Now for some pictures. When we look at them, we can assume that we are conscious of them.
We can consciously differentiate between the last two images. What about the next two?
Not so easy, even if the two are orthogonally different.
Computational distinctions
1. AI
a) Very clever semantic net: Italian restaurant … music
b) Greek restaurant … music
c) ???
d) ???
A semantic net breaks the scene down into elements. We become conscious of the images first and then might start decomposing them. A semantic net is not a useful (predictive) model of what it is to be conscious.
Computational distinctions
2. NON-RECURSIVE NEURAL NETWORKS
a) Hamming similarity to a training pattern
b) Hamming similarity to a training pattern
c), d) … sensitive to a strong difference
Can't apply static neural theory to the brain.
Computational distinctions
3. ATTRACTOR NEURAL NETWORKS
a) Hamming similarity: training attractor
b) Hamming similarity: training attractor
c), d) … arbitrary attractors
A possible route, but the states need to have special characteristics.
MORE ON DIFFERENTIATION
Information: resolution of uncertainty
[Figure: one image generates a vast number of bits; the other generates one bit]
Information integration theory (IIT)
IIT argues that consciousness is the capacity of a network of neurons (or other elements) to detect causal relationships.
This is called INFORMATION INTEGRATION and is given the symbol Φ.
MORE ON CAUSAL RELATIONSHIPS AND Φ
Can detect causal relationships: Φ > 0
Cannot detect causal relationships: Φ = 0
Another example: the difference between what supports the sensation of a visual scene and the way it's viewed by a digital camera?
MORE ON CAUSAL RELATIONSHIPS AND Φ
If these are normal mental states, the system should not automatically treat those below as normal.
[Figure: comparison states labelled with Φ = …]
Interim Recap: the machine support for a mental state must …
Ensure differentiation from other mental states – i.e. maximise information generation by resolving uncertainty (uniqueness).
Ensure integration – i.e. maintain causal relationships between elements of a single sensory experience (indivisibility).
Calculating the amount of integration Φ:
Consider an arbitrary partition (cut) of the network.
Inject NOISE (maximum entropy) into one part and measure the effective information (the entropy of the resulting states) in the other part.
REPEAT in the opposite direction. Φ is the sum of the two – a measure of information crossing the cut.
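To make the procedure concrete, here is a minimal Python sketch under stated assumptions: a toy two-neuron deterministic network (each neuron copies the other) and an all-zero resting state, neither of which comes from the talk, and the entropy-of-resulting-states measure rather than the full IIT formalism.

from itertools import product
from math import log2
from collections import Counter

# Toy deterministic network of two binary neurons that copy each other,
# so information must cross any cut between them (an assumed example).
def update(state):
    n0, n1 = state
    return (n1, n0)

def entropy(counts, total):
    return -sum((c / total) * log2(c / total) for c in counts.values())

def effective_information(source, target, rest_state):
    # Drive the source neurons with maximum-entropy noise, hold the other
    # neurons at rest_state, and measure the entropy of the states that
    # result on the target side.
    outcomes = Counter()
    noise_patterns = list(product([0, 1], repeat=len(source)))
    for noise in noise_patterns:
        s = list(rest_state)
        for i, bit in zip(source, noise):
            s[i] = bit
        nxt = update(tuple(s))
        outcomes[tuple(nxt[i] for i in target)] += 1
    return entropy(outcomes, len(noise_patterns))

# Phi for the cut {0} | {1}: the sum of the two directions, a measure of
# information crossing the cut.
phi_cut = (effective_information([0], [1], (0, 0)) +
           effective_information([1], [0], (0, 0)))
print(phi_cut)  # 2.0 bits: one bit crosses the cut in each direction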
Problem 1:
The Φ calculation has to be done for all subsets, and all cuts within all subsets, to discover the least Φ, which is the Φ for the whole network.
Gamez has shown that predicting the Φ of a 30-neuron network would take a state-of-the-art computer 10^10 years (!)
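A back-of-envelope count (bipartitions only) shows why the cost explodes:

\[
\sum_{k=2}^{n} \binom{n}{k}\left(2^{k-1}-1\right) \;\approx\; \tfrac{1}{2}\,3^{n}
\]

so a 30-neuron network already has on the order of 10^14 subset-and-cut combinations, each needing its own effective-information evaluation.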
Information integration theory (IIT)
COMPLEXES
IIT calculations should come up with areas of a network that have a high Φ. These are called complexes.
Complexes can shift with time. Consciousness in the brain is thought to exist in a 'main complex'.
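Continuing the toy sketch above, a main complex can be found by brute force. This is only a sketch: it reuses the assumed effective_information function and resting state, and it omits the normalisation IIT applies to each cut.

from itertools import combinations

def subset_phi(nodes):
    # Phi of a subset: the minimum, over all bipartitions of the subset,
    # of the information crossing the cut (both directions summed).
    nodes = list(nodes)
    best = float("inf")
    for r in range(1, len(nodes) // 2 + 1):
        for part_a in combinations(nodes, r):
            part_b = [n for n in nodes if n not in part_a]
            ei = (effective_information(list(part_a), part_b, (0, 0)) +
                  effective_information(part_b, list(part_a), (0, 0)))
            best = min(best, ei)
    return best

# The main complex: the subset with the highest Phi.
all_nodes = [0, 1]
subsets = [s for r in range(2, len(all_nodes) + 1)
           for s in combinations(all_nodes, r)]
main_complex = max(subsets, key=subset_phi)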
Problem 2:
IIT contains statements about qualia, but they seem not to have representational content.
Information integration theory (IIT)
QUALIA
Described mathematically not only by a high Φ, but also by a representation of the activity of all the combinations of sub-mechanisms that produce the high Φ.
What we are doing about it:
Develop IIT using older 'neural automata' ideas.
[Figure: a neuron with N inputs, an output, and a desired output]
A lookup table; a 'probabilistic nearest neighbour' lookup table.
Aleksander: How to Build a Mind. Columbia UP, 2000.
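A minimal Python sketch of such a lookup-table neuron; the class name and the random tie-breaking rule are illustrative assumptions, not necessarily the book's exact scheme.

import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

class WeightlessNeuron:
    # A RAM / lookup-table neuron: trained binary input patterns map
    # directly to stored output bits.
    def __init__(self):
        self.table = {}  # input pattern (tuple of bits) -> output bit

    def train(self, pattern, output):
        self.table[tuple(pattern)] = output

    def respond(self, pattern):
        pattern = tuple(pattern)
        if pattern in self.table:
            return self.table[pattern]
        # 'Probabilistic nearest neighbour': answer with the output of the
        # closest stored pattern, choosing at random among ties.
        d = min(hamming(p, pattern) for p in self.table)
        ties = [p for p in self.table if hamming(p, pattern) == d]
        return self.table[random.choice(ties)]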
The liveliness model
LIVELINESS
A simple way of determining clusters with high causal interactions between elements.
Parallels the discovery of complexes with high Φ.
Aleksander, I. and Atlas, P. 1973. Cyclic Activity in Nature: Causes of Stability. International Journal of Neuroscience 6: 45-50.
The liveliness model
[Figure: a two-input AND gate with inputs 1 and 0]
An input line is lively if a change in the input is transmitted to the output.
The liveliness model
[Step-through AND-gate examples: flipping an input line whose companion inputs are all 1 flips the output, so the line is lively (YES); flipping a line while another input is held at 0 leaves the output at 0, so it is not lively (NO)]
The liveliness model
[Figure: AND gate in a state where exactly one incoming line is lively]
Neuron liveliness: total liveliness of incoming connections.
Above case: neuron liveliness = 1.
The liveliness model
[Figure: AND gate with all incoming lines at 1, so every line is lively]
Neuron liveliness: total liveliness of incoming connections.
NEW STATE: neuron liveliness = 3.
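The two definitions translate directly into code. A short Python sketch whose AND gate and input vectors mirror the examples above:

def line_liveliness(gate, inputs, line):
    # A line is lively in this state if flipping just that input
    # changes the gate's output.
    flipped = list(inputs)
    flipped[line] ^= 1
    return gate(inputs) != gate(tuple(flipped))

def neuron_liveliness(gate, inputs):
    # Neuron liveliness: total liveliness of the incoming connections.
    return sum(line_liveliness(gate, inputs, i) for i in range(len(inputs)))

AND = lambda bits: int(all(bits))
print(neuron_liveliness(AND, (1, 0, 1)))  # 1: only the 0 line is lively
print(neuron_liveliness(AND, (1, 1, 1)))  # 3: every line is lively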
The liveliness model: RECAP
[Recap figure]
The liveliness model
CLUSTERS
A cluster for a state is the group of lively neurons linked by lively connections.
[Figure: a network of AND gates with line-liveliness values of 0, 1 and 2 marked; the lively neurons and connections form "The Cluster"]
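Finding the cluster for a state is then a connected-components computation. A sketch, assuming the network is given as a liveliness map plus a list of lively connections:

from collections import defaultdict, deque

def find_clusters(liveliness, lively_edges):
    # liveliness: neuron -> neuron liveliness in this state.
    # lively_edges: (a, b) connections that are lively in this state.
    # A cluster is a connected group of lively neurons joined by
    # lively connections.
    graph = defaultdict(set)
    for a, b in lively_edges:
        if liveliness.get(a, 0) > 0 and liveliness.get(b, 0) > 0:
            graph[a].add(b)
            graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n not in comp:
                comp.add(n)
                queue.extend(graph[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters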
The liveliness model
CLUSTERS
A cluster for a state is the group of lively neurons linked by lively connections.
Cluster liveliness calculation (all AND gates):
λmax: maximum liveliness of the cluster
λa: actual liveliness
λc: cluster liveliness
[Figure: clusters and their λ values for an example network]
The liveliness model
CLUSTERS
A cluster for a state is the group of lively neurons linked by lively connections.
[Figure: the same all-AND-gate network; the cluster shown has λc = 1.86]
The liveliness model
CLUSTERS
A cluster for a state is the group of lively neurons linked by lively connections.
[Figures: two further states of the same all-AND-gate network; the liveliness values, and hence the clusters, change with the state]
REPRESENTATIONAL ISSUES
Iconic learning: forcing reality-representing states on a network (for illustration).
An example of what happens to state structure when a cluster learns world events (acquires experiential states).
GREEN SQUARE: world property sensors read 1 1 0 0 → Green Square
RED CIRCLE: world property sensors read 0 0 1 1 → Red Circle
Complete experience (re-entrant states)
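Reusing the WeightlessNeuron sketch from earlier, iconic training can be illustrated on a fully connected four-neuron toy with the slide's two sensor patterns (the wiring and scale are assumptions for illustration): each neuron learns to output its own bit of a world pattern while seeing the whole pattern fed back, so the trained patterns become re-entrant fixed points.

patterns = {"green_square": (1, 1, 0, 0), "red_circle": (0, 0, 1, 1)}
neurons = [WeightlessNeuron() for _ in range(4)]
for p in patterns.values():
    for i, neuron in enumerate(neurons):
        neuron.train(p, p[i])  # whole state -> this neuron's next bit

def step(state):
    return tuple(n.respond(state) for n in neurons)

print(step((1, 1, 0, 0)))  # (1, 1, 0, 0): Green Square re-enters itself
print(step((0, 0, 1, 1)))  # (0, 0, 1, 1): Red Circle re-enters itself
# Unseen states fall toward the nearer trained pattern; equidistant states
# are resolved at random, which would give p = 0.5 transitions of the kind
# shown in the state structure below.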
Resulting logic and λ
[Figure: the truth tables learned by the neurons and their liveliness values; λ values of 1 and 1/2 appear]
Resulting state structure
[State-transition diagram over the four-bit states: trajectories fall towards the two trained states, with some transitions at p = 0.5]
Fully connected: Φ and λ high
GREEN SQUARE: world property sensors read 1 1 0 0 → Green Square
RED CIRCLE: world property sensors read 0 0 1 1 → Red Circle
Complete experience (re-entrant states)
Resulting state structure
[State-transition diagram: with full connection the transitions are deterministic (p = 1)]
IMPORTANT STUFF
Φ and λ are a function of the connectedness and the learning method. Connectedness determines the result of exposure to reality (qualia?). Are Φ and λ useful?
FINALLY
Effects of integration in a 10,000-neuron iconically trained weightless net.
Embodiment of the 'presence' axiom
[Figure: a weightless neuron that samples from the input and receives feedback from other neurons at random; iconic training]
Embodiment of the 'presence' axiom
A 98 x 98 network with different connection strategies.
Training set (attractors created). Falling into an attractor: unique, indivisible.
[Time-progression figure: some connection strategies are able to hold an attractor, others are unable, showing loss of uniqueness and loss of indivisibility]
Conclusions
Ideas of information integration expand the mathematical horizons of neural systems that address consciousness. (BUT IT'S SERIOUS FUN.)
This both informs and benefits from older (liveliness, iconic learning) methods.
A long way to go in the representational arena, in applications to the neurological brain, and in making machines conscious.