DeGroff 7/31/2017
Classic Paper Study/Discussion Guide
Title: Cognitive Activity in Artificial Neural Networks
Author: Paul M. Churchland
Knowledge Relating to Learning Outcomes:
1. Big Idea + Interdisciplinary Assumption (Neural Networks + Biology):
a. The three guiding convictions of this chapter are that the rationale just
outlined is importantly flawed, that the symbol/rule paradigm may well
comprehend only a vanishingly small percentage of cognitive activity, and
that even an elementary understanding of the microstructure of the brain
funds a fertile and quite different conception of what cognitive activity
really consists in.
2. Neural Networks:
a. The networks to be explored attempt to simulate natural neurons with
artificial units of the kind depicted in figure 12.2. These units admit of
various levels of activation, which we will assume to vary between 0 and
1. Each unit receives input signal from other units via “synaptic”
connections of various weights and polarities.
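To make the description concrete, here is a minimal sketch in Python of one such artificial unit; the logistic squashing function is an assumption on my part, since Churchland's figure 12.2 is not reproduced here:

    import math

    def unit_activation(inputs, weights):
        # One artificial unit: a weighted sum of the incoming input
        # activations, squashed into the interval (0, 1). Polarities
        # (excitatory vs. inhibitory) are just the signs of the weights.
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1.0 / (1.0 + math.exp(-net))

    # Example: three input units with mixed excitatory/inhibitory weights.
    print(unit_activation([0.9, 0.1, 0.5], [1.2, -0.7, 0.3]))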
3. Neural Networks:
a. Looking now at the whole network, we can see that it is just a device for
transforming any given input-level activation vector into a uniquely
corresponding output-level activation vector. And what determines the
character of the global transformation effected is the peculiar set of values
possessed by the many connection weights.
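A hedged sketch of this point: the network below is nothing but a function from input vectors to output vectors, and the character of that function is fixed entirely by the connection weights. The layer sizes are arbitrary illustrations of mine, not Churchland's:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, W_hidden, W_output):
        # The whole network as a vector-to-vector transformation: the
        # input activation vector passes through two banks of weighted
        # connections to yield the output activation vector.
        h = sigmoid(W_hidden @ x)       # input layer -> hidden layer
        return sigmoid(W_output @ h)    # hidden layer -> output layer

    # Arbitrary sizes: 4 input units, 3 hidden units, 2 output units.
    rng = np.random.default_rng(0)
    W_h, W_o = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
    print(forward(np.array([1.0, 0.0, 0.5, 0.2]), W_h, W_o))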
4. Neural Networks:
a. It would of course be a miracle if the network made the desired
discrimination immediately, since the connection weights that determine
its transformational activity are initially set at random values. At the
beginning of this experiment, then, the output vectors are sure to
disappoint us.
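Repeated error-driven adjustment is what repairs this initial disappointment: the gap between the actual and the desired output vector is used to nudge each weight. Here is a minimal single-layer sketch of that idea (a delta-rule update, simpler than the full back-propagation procedure the chapter discusses):

    import numpy as np

    def train_step(x, target, W, lr=0.5):
        # One error-correcting step: compare the actual output vector
        # to the desired one, and nudge each weight so the next output
        # disappoints us a little less.
        y = 1.0 / (1.0 + np.exp(-(W @ x)))           # current output
        error = target - y
        W += lr * np.outer(error * y * (1 - y), x)   # gradient step
        return W, error

    rng = np.random.default_rng(1)
    W = rng.normal(size=(2, 3))          # weights start at random values
    for _ in range(1000):
        W, err = train_step(np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0]), W)
    print(err)                           # the error shrinks as W is tuned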
5. Interdisciplinary Assumption (Neural Networks and Linguistics):
a. Spurred on by this success, work is currently underway to train up a
network to distinguish the various phonemes characteristic of English
speech.
6. Interdisciplinary Assumption (Neural Networks and Biology):
a. This network is of special interest because a subsequent examination of
the ‘receptive fields’ of the trained hidden units shows them to have
acquired some of the same response properties as are displayed by cells in
the visual cortex of mature animals. Specifically, they show a maximum
sensitivity to spots, edges, and bars in specific orientations.
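One can probe such a unit by treating its incoming weights as an image patch and measuring its response to different stimuli. A toy sketch of that probe (the 2x2 patch and stimuli are my own illustration, not data from the paper):

    import numpy as np

    def response(weight_patch, stimulus):
        # A hidden unit's 'receptive field' is its incoming weight
        # pattern; its response is the squashed weighted sum over it.
        net = np.sum(weight_patch * stimulus)
        return 1.0 / (1.0 + np.exp(-net))

    # A patch tuned to a vertical edge (dark left, bright right)...
    edge_unit  = np.array([[-1., 1.], [-1., 1.]])
    # ...responds more to a vertical edge than to a horizontal one.
    vertical   = np.array([[0., 1.], [0., 1.]])
    horizontal = np.array([[1., 1.], [0., 0.]])
    print(response(edge_unit, vertical), response(edge_unit, horizontal))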
Top Five Items of Interest (With Titles):
1. The Hidden Layer:
a. The neural-network model Churchland spends the most time on is the traditional three-level network, consisting of an input layer, a hidden layer, and an output layer. The hidden layer is where all of the magic happens. I view the hidden layer as a metaphor for what the biological brain really does when it is working. Even if the back-propagation theory is incorrect, the hidden layer still serves a metaphorical purpose.
2. Purely Physical Properties:
a. Churchland brings up the difficulty of replicating even the most seemingly simple human actions. For example, producing the sound of the letter 'a' is easy for a human to do but requires a great deal of programming for a digital computer. What accounts for this level of difficulty is the problem of defining exactly what is occurring in any given physical action. Defining the problem in physical terms can be harder than programming a computer to perform it.
3. Purely Physical Assumption:
a. As I just addressed in #2, Churchland needs a purely physical assumption
for his theory of neural network learning to even get off the ground. His
theory is as follows: ‘we must assume that some configuration of purely
physical elements is capable of grasping and manipulating these features,
and by means of purely physical principles’. He leaves no room for
dualism or magic bullets.
4. The Training Set:
a. The training set is the source of feedback on what the network is outputting: a collection of inputs paired with 'all the right answers.' So I must ask the question: why not just use the training set directly? (See the sketch below.)
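One way to see the standard answer, sketched with hypothetical data: a bare lookup of the training set is silent on any input it has never seen, whereas the trained network produces an output for every input vector, i.e., it generalizes:

    import math

    training_set = {(1.0, 0.0): 1.0, (0.0, 1.0): 0.0}

    novel_input = (0.9, 0.1)                # never seen during training
    print(training_set.get(novel_input))    # -> None: the lookup has no answer

    w = (2.0, -2.0)                         # weights shaped by the training set
    net = sum(x * wi for x, wi in zip(novel_input, w))
    print(1.0 / (1.0 + math.exp(-net)))     # -> ~0.83: the network answers anyway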
5. NETtalk:
a. NETtalk is the network trained by Rosenberg and Sejnowski to speak the English language. Although NETtalk learned to transform printed words into audible speech, no understanding of the words it read was involved. Still, the network's ability to produce English speech is impressive despite its lack of understanding. Searle, anyone?
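For a sense of NETtalk's mechanics, here is a sketch of its input scheme: a seven-letter window slides across the text, and the network names the phoneme for the window's center letter. The window size matches the published NETtalk work; classify() is a hypothetical stand-in for the trained network:

    def windows(text, size=7):
        # Slide a fixed-size window over the text, padding the ends,
        # so each letter is seen in its surrounding context.
        pad = " " * (size // 2)
        padded = pad + text + pad
        for i in range(len(text)):
            yield padded[i:i + size]

    def classify(window):
        return "?"                           # stand-in for the trained network

    for w in windows("cat"):
        print(repr(w), "->", classify(w))    # one phoneme code per center letter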