A Theory of Cerebral Cortex
(or, “How Your Brain Works”)
Andrew Smith (CSE)
Outline
• Questions
• Preliminaries
• Feature Attractor Networks
• Antecedent Support Networks
• Attractive properties of the theory / Conclusions
Questions (to be answered!)
• What is cortical knowledge and how is it stored?
• How is it used to carry out thinking?
• How is it integrated with sensory input and motor output?
Preliminaries
• Thinking is a symbolic process.
• Thinking relies only on classical mechanics. (Unlike the Penrose/Hameroff model.)
• Thinking is not a mathematically grounded reasoning process; rather, it is confabulation!
Feature Attractor Neuronal Networks
Each Feature Attractor Network Implements One “Column” of Tokens
[Figure: a feature attractor network is a cortical region (one of about 120,000; total human cortical surface area ≈ 240,000 mm²) joined by bidirectional connections to a paired thalamic region in the thalamus. The region’s lexicon is drawn as a column of tokens numbered 1, 2, 3, …, 4376.]

Each region encompasses a cortical surface area of roughly 2 mm² and possesses a total of about 200,000 neurons.
An object (sensory, abstract, etc.) or action (movement process, thought process, etc.) is represented by a collection of feature attractor networks, each expressing a single token (node) from its lexicon.
Feature Attractor Networks
Each network has a lexicon of random (!) tokens, sparsely encoded; each token has hundreds of neurons active at a time, out of 50,000. This lexicon is fixed very early in life and never changes.
The function of the network is to change the pattern of activation within its region so that it expresses the token in its lexicon “closest” to the original pattern of activation (i.e., the networks act as vector quantizers).
Feature attractor networks are extremely robust to noise and partial tokens (a toy sketch follows this list):
- A region can start out with 10% of a particular token and, within one iteration, express the complete token.
- A region can start out expressing many (hundreds of) partial tokens and, within one iteration, express just the one token that was most complete. (More on this later…)
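A minimal sketch of this completion behavior, assuming a toy overlap-count notion of “closest” and an assumed 300 active neurons per token (the slides say only “hundreds” out of 50,000; the lexicon size 4,376 is taken from the figure):

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 50_000  # neurons available per region (from the slides)
N_TOKENS = 4_376    # lexicon size (taken from the figure)
K_ACTIVE = 300      # "hundreds" of active neurons per token (assumed value)

# Fixed random lexicon: each token is a sparse set of K_ACTIVE neuron
# indices, chosen once and never changed (per the slides).
lexicon = [frozenset(rng.choice(N_NEURONS, size=K_ACTIVE, replace=False))
           for _ in range(N_TOKENS)]

def attractor_step(active_neurons):
    """One iteration: snap the region's activation to the lexicon token
    'closest' to it, here measured by shared active neurons."""
    overlaps = [len(active_neurons & token) for token in lexicon]
    winner = int(np.argmax(overlaps))
    return lexicon[winner], winner

# A region starting with only 10% of token 42 completes it in one iteration.
partial = set(rng.choice(sorted(lexicon[42]), size=K_ACTIVE // 10,
                         replace=False))
completed, winner = attractor_step(partial)
assert winner == 42 and completed == lexicon[42]
```

With 30 of token 42’s neurons active, its overlap score is 30, while the expected chance overlap with any other random token is only about K²/N ≈ 1.8, so the winner is unambiguous even from a 10% fragment.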
Now that we have ~120,000 powerful pattern recognizers, let’s wire them up…
Antecedent Support Networks (ASNs)
The role of the ASN is to do the thinking.
- If several active tokens have strong links to an inactive token, the ASN will activate that token (e.g., “smoke” + “heat” → “fire”).
- Learning occurs when the ASN increases the link weight between two tokens.
Short-term memory = which tokens are currently active.
Long-term memory = the link strengths between tokens.
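A toy sketch of antecedent support, assuming illustrative token names, hand-picked link weights, and a summed-support activation threshold (none of these specifics come from the slides):

```python
# Long-term memory: hypothetical link weights between tokens.
links = {
    ("smoke", "fire"): 0.8,
    ("heat", "fire"): 0.7,
    ("smoke", "rain"): 0.1,
}
THRESHOLD = 1.0  # assumed total support needed to activate a token

def asn_step(active_tokens, links, threshold=THRESHOLD):
    """Activate every inactive token whose summed support from the
    currently active tokens (short-term memory) reaches the threshold."""
    support = {}
    for (src, dst), weight in links.items():
        if src in active_tokens and dst not in active_tokens:
            support[dst] = support.get(dst, 0.0) + weight
    return active_tokens | {t for t, s in support.items() if s >= threshold}

def learn(links, src, dst, lr=0.1):
    """Learning: co-activation of src and dst strengthens their link."""
    links[(src, dst)] = links.get((src, dst), 0.0) + lr

print(asn_step({"smoke", "heat"}, links))  # -> {'smoke', 'heat', 'fire'}
```

Here “fire” receives 0.8 + 0.7 = 1.5 of support and crosses the threshold, while “rain” receives only 0.1. Short-term memory is just the active set; long-term memory is the `links` dictionary.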
Antecedent Support Neuronal Network Implementation – Randomness to the rescue!
“Axons from neurons of token i send their collaterals randomly to millions of neurons in the local area. Of these, a few thousand transponder neurons just happen to receive sufficient input from i to become active. Of those, a few hundred just happen to send axons to neurons belonging to token j on the target region, activating (part of) token j.”
The wiring of transponder neurons (pyramidal neurons) is also fixed at a very early age.

[Figure: within the cerebral cortex, axons from token i in the source region reach token j in the target region via transponder neurons; the synapses marked along this path are the ones that are strengthened.]
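A small numerical sketch of the quoted mechanism, with region sizes scaled far down from the slides’ “millions” and “thousands” and an assumed transponder activation threshold of two inputs; it illustrates only that fixed random wiring supplies candidate paths from token i to token j:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes; the real counts ("millions of neurons") are scaled far down.
N_SOURCE, N_TRANS, N_TARGET = 2_000, 5_000, 2_000
K = 50          # neurons per token in this toy model
P_WIRE = 0.01   # probability of a random axon collateral between two neurons

# Fixed random wiring, set very early in life per the slides.
src_to_trans = rng.random((N_SOURCE, N_TRANS)) < P_WIRE
trans_to_tgt = rng.random((N_TRANS, N_TARGET)) < P_WIRE

token_i = rng.choice(N_SOURCE, K, replace=False)  # token i, source region
token_j = rng.choice(N_TARGET, K, replace=False)  # token j, target region

# Transponders that "just happen" to receive sufficient input from token i
# (a threshold of 2 inputs is an assumption).
drive = src_to_trans[token_i].sum(axis=0)
transponders = np.flatnonzero(drive >= 2)

# Of those, the ones projecting onto token j deliver its activation.
target_drive = trans_to_tgt[transponders].sum(axis=0)
reached = int((target_drive[token_j] > 0).sum())
print(f"{len(transponders)} transponders active; "
      f"{reached}/{K} neurons of token j receive input")
```

Per the figure, learning then strengthens the synapses along whichever random paths happen to connect token i to token j, turning this initially accidental relay into a selective link.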
Input / Output
Input: Sensory neurons connect to token neurons (layers III and IV), just like transponder neurons.
Output: Motor neurons can receive their inputs from the token neurons, just like transponder neurons.
Attractive features (no pun intended…)
The Hecht-Nielsen model shows:
- how neurons can grow randomly and become organized.
- that a large range of synaptic weights is not necessary.
- how you can get a song stuck in your head. (You’re unable to reset regions of your cortex; one bar evokes the next…)
- a model that can be viewed as implementing Paul Churchland’s “semantic maps” from the last lecture of CogSci 200. (IMHO…)
A simulation of this model has solved the classic cocktail-party problem.
Conclusions
“[All knowledge comes from creating associations between experiences.]”
- Aristotle
“Within 12 to 36 months, this theory will revolutionize Artificial Intelligence.”
- Hecht-Nielsen (as of last week…)