History of Neural Computing
• McCulloch - Pitts 1943
- showed that a ”neural network” with
simple logical units computes any
computable function
- beginning of Neural Computing, Artificial
Intelligence, and Automaton Theory
• Wiener 1948
- Cybernetics; the first statistical-mechanics
model for computing
- - compare Hopfield 1982
• Hebb 1949
- physiological learning rule based on
synaptic modification, Hebbian learning
(see the sketch below)
- - repeated synaptic activity strengthens the
synaptic response
• Marvin Minsky 1954
- ”Neural-analog system & brain model”
Ph.D. thesis at Princeton
- An article ”Toward AI” in 1961, with a
chapter on ”Neural Computing”
- The book ”Computation: Finite and
Infinite Machines” transforms the McCulloch-Pitts
results into automaton theory
• Gabor 1954
- nonlinear adaptive filter
• Taylor 1956
- associative memory -> learning matrix
- also early work on correlation matrix
memory (Anderson 1972, Kohonen 1972,
Nakano 1972)
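
A minimal sketch of the Hebbian rule mentioned under Hebb 1949, assuming a single linear unit; the learning rate eta and the toy input pattern are illustrative, not from the original:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    # Hebbian rule: a weight grows when presynaptic activity x and
    # postsynaptic activity y occur together (repeated activity strengthens it)
    return w + eta * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])        # repeated presynaptic pattern
for _ in range(10):
    y = w @ x + 1.0                  # postsynaptic response (bias keeps the unit active)
    w = hebbian_update(w, x, y)
print(w)                             # weights on the active inputs have grown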
PERCEPTRON
• Rosenblatt 1958
- a new method for supervised learning;
the ”perceptron convergence theorem”
(see the sketch below)
• Widrow - Hoff 1960
- LMS-algorithm for learning Adaline
• Widrow 1962
- Madaline: layered neural networks
• Amari 1967
- stochastic gradient method
• Nilsson 1965
- linearly separable sets
During the golden era of perceptrons, in the
1960s, it was believed that they would solve
all problems.
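
A minimal sketch of the perceptron learning rule referred to under Rosenblatt 1958; the toy data, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, eta=1.0):
    # Rosenblatt-style rule: update the weights only on misclassified examples;
    # by the convergence theorem the updates stop if the classes are linearly separable
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):            # ti is the target label, +1 or -1
            if ti * (xi @ w + b) <= 0:      # misclassified (or on the boundary)
                w += eta * ti * xi
                b += eta * ti
    return w, b

# linearly separable toy data: AND of two binary inputs
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))               # reproduces the labels [-1 -1 -1 1]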
PERCEPTRON
• Minsky - Papert 1969
- the book ”Perceptrons”
- showed mathematically the restrictions of
1-layer perceptrons
- they doubted that more layers would
bring essentially more power
Neural network research went into a
”HALT” state
• Research activity was low for about ten years
- reasons: low computing power and the
psychologically discouraging math results
• research continued in the neurosciences
and in psychology
Self-Organizing Maps
• This research continued during the
”Perceptron halt”
• von der Malsburg 1973
- first demonstration of self-organization
- the first paper was inspired by topological
maps in the brain
• Grossberg 1980
- a new form of self-organization: ART
(Adaptive Resonance Theory)
• Kohonen 1982
- 1- and 2-dimensional lattices, different from
von der Malsburg
- nowadays the benchmark SOM (see the
sketch below)
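
A minimal sketch of a Kohonen-style update on a 1-dimensional lattice, as in the SOM mentioned above; the lattice size, learning rate, and neighbourhood width are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(10)                 # positions of 10 units on a 1-dimensional lattice
W = rng.random((10, 2))              # each unit has a 2-dimensional weight vector

def som_step(W, x, eta=0.2, sigma=1.5):
    # Kohonen update: find the best-matching unit, then pull it and its lattice
    # neighbours toward the input, weighted by a Gaussian neighbourhood function
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
    return W + eta * h[:, None] * (x - W)

for _ in range(2000):
    W = som_step(W, rng.random(2))   # inputs drawn uniformly from the unit square
# after training, neighbouring lattice units hold similar weight vectors,
# so the map preserves the topology of the input space
print(W.round(2))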
Hopfield networks
• Hopfield 1982
- formulation of an energy function for
understanding how attractor networks work
(see the sketch below)
- popular in the 80’s:
feedback neural net = Hopfield net
- not neurophysiologically adequate, but
interesting since information can be stored
in a stable network
• The paper triggered a new era of neural
networks
• The paper caused much controversy, since
there were similar ideas in the literature:
Cragg - Temperley (1954), Cowan (1967),
Grossberg (1967)
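
A minimal sketch of the energy-function view referred to under Hopfield 1982, assuming the usual outer-product (Hebbian) storage of ±1 patterns; the patterns and the noisy probe below are illustrative:

```python
import numpy as np

def energy(W, s):
    # Hopfield energy E(s) = -1/2 * s^T W s (zero thresholds); each asynchronous
    # update can only lower or keep this value, so the network settles into a stable state
    return -0.5 * s @ W @ s

# outer-product storage of two +/-1 patterns; symmetric weights, zero diagonal
patterns = [np.array([1, 1, 1, -1, -1, -1]), np.array([1, -1, 1, -1, 1, -1])]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

s = np.array([1, 1, 1, -1, -1, 1])    # noisy probe state
print("initial energy:", energy(W, s))
changed = True
while changed:                        # asynchronous threshold updates until stable
    changed = False
    for i in range(len(s)):
        new = 1 if W[i] @ s >= 0 else -1
        if new != s[i]:
            s[i], changed = new, True
print("stable state:", s, "energy:", energy(W, s))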
New rise of NN
• Kirkpatrick - Gelatt - Vecchi 1983
- Simulated annealing for combinatorial
optimization problems (see the sketch below)
- idea from the statistical-mechanics model of
cooling in crystal formation
• Ackley - Hinton - Sejnowski 1985
- Boltzmann machine, the first successful
realization of a multilayer network
--> the earlier psychological barrier was broken
• Barto - Sutton - Anderson 1983
- reinforcement learning, balancing a
broomstick (pole balancing)
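
A minimal sketch of simulated annealing as used by Kirkpatrick - Gelatt - Vecchi for combinatorial optimization; the toy problem, cooling rate, and step count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing(cost, state, neighbour, T0=1.0, alpha=0.999, steps=5000):
    # Metropolis-style search: always accept improvements, accept worse moves
    # with probability exp(-delta/T), and slowly lower the temperature T
    # (the "cooling" borrowed from the statistical mechanics of crystal formation)
    best, T = state, T0
    for _ in range(steps):
        cand = neighbour(state)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            state = cand
            if cost(state) < cost(best):
                best = state
        T *= alpha
    return best

def swap_two(p):
    # neighbour move: swap two random positions in the ordering
    i, j = rng.integers(0, len(p), size=2)
    q = p.copy()
    q[i], q[j] = q[j], q[i]
    return q

# toy combinatorial problem: order the numbers 0..9 so adjacent differences are small
cost = lambda p: float(np.abs(np.diff(p)).sum())
print(simulated_annealing(cost, rng.permutation(10), swap_two))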
MULTILAYER PERCEPTRON
(Error) Back Propagation
• Problem in a multilayer perceptron network:
how to update the weights?
• Rumelhart - Hinton - Williams 1986
- The book: Parallel Distributed Processing
- the back-propagation algorithm solves the
problem (see the sketch below)
- the most popular learning algorithm for MLPs
• also found by Parker 1985 and LeCun 1985
• earlier by Werbos 1974 (Bryson-Ho 1969)
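
A minimal sketch of error back-propagation for a one-hidden-layer perceptron; the XOR data, network size, learning rate, and iteration count are illustrative assumptions, not from the original:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# tiny MLP: 2 inputs -> 4 sigmoid hidden units -> 1 sigmoid output, trained on XOR
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

eta = 0.5
for _ in range(10000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # backward pass: propagate the output error back through the layers
    dY = (Y - T) * Y * (1 - Y)           # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)       # delta at the hidden layer
    # gradient-descent updates of the weights
    W2 -= eta * H.T @ dY;  b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH;  b1 -= eta * dH.sum(axis=0)
print(Y.round(2))                        # usually close to [0, 1, 1, 0]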
Latest additions
• Broomhead - Lowe 1988
- Radial basis functions (RBF)
- input layer : nonlinear hidden layer : linear
output layer
- linked neural networks to numerical analysis
(see the sketch at the end of this list)
• Linsker 1988
- self-organization in perceptual networks
- again triggered the interest of information
theorists
• Bell - Sejnowski 1995
- blind source separation
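
A minimal sketch of an RBF network of the kind described by Broomhead - Lowe: Gaussian hidden units on fixed centres and a linear output layer fitted by least squares; the target function, number of centres, and width are illustrative assumptions:

```python
import numpy as np

# one-dimensional regression problem: fit sin(2*pi*x) on [0, 1]
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])

centres = np.linspace(0, 1, 10)[:, None]   # fixed centres of the hidden units
width = 0.1

# nonlinear hidden layer: Gaussian radial basis functions
Phi = np.exp(-((X - centres.T) ** 2) / (2 * width ** 2))
# linear output layer: solved by ordinary least squares
# (this is where the link to numerical analysis / interpolation appears)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print("max fitting error:", np.max(np.abs(y_hat - y)))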