Neural Network
Hopfield model
Kim, Il Joong
Contents
1. Neural network: Introduction
   ① Definition & Application
   ② Network architectures
   ③ Learning processes (Training)
2. Hopfield model
   ① Summary of model
   ② Example
   ③ Limitations
3. Hopfield pattern recognition on a scale-free neural network
Definition of Neural Network
 A massively parallel system made up of simple processing units and dense interconnections, which has a natural propensity for storing experiential knowledge and making it available for use.
 Interconnection strengths, known as synaptic weights, are used to store the acquired knowledge. => Learning process.
Application of Neural Network
 Pattern-to-pattern mapping, pattern completion, pattern classification
 Image Analysis
 Speech Analysis & Generation
 Financial Analysis
 Diagnosis
 Automated Control
Network architectures
 Single-layer feedforward network
Network architectures
 Multilayer feedforward network
Network architectures
 Recurrent network
Learning processes (training)
 Error-correction learning
 Memory-based learning
 Hebbian learning
 Competitive learning
 Boltzmann learning
Hebbian learning process
 If two neurons on either side of a synapse are activated simultaneously, then the strength of that synapse is increased.
 If two neurons on either side of a synapse are activated asynchronously, then the strength of that synapse is weakened or eliminated.
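A rough illustration of this rule (not from the slides): a minimal NumPy sketch assuming bipolar ±1 activations; the function name hebbian_update and the learning rate eta are illustrative choices.

```python
import numpy as np

def hebbian_update(W, x, eta=0.1):
    """One Hebbian step: strengthen w_ij when units i and j are active together.

    W   : (N, N) weight matrix (zero diagonal, no self-connections)
    x   : (N,) bipolar activation vector, entries in {-1, +1}
    eta : illustrative learning rate
    """
    dW = eta * np.outer(x, x)      # co-active pairs (+1,+1) or (-1,-1) give a positive increment
    np.fill_diagonal(dW, 0.0)      # keep the no-self-feedback convention
    return W + dW

# Example: units 0 and 1 fire together, so their connection grows; unit 2 is anti-correlated
W = hebbian_update(np.zeros((3, 3)), np.array([+1, +1, -1]))
print(W)
```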
Hopfield model
 Network architecture
 N processing units (binary)
 Fully connected: N(N-1) connections
 Single layer (no hidden layer)
 Recurrent (feedback) network: no self-feedback loop
Hopfield model
 Learning process
 Let $\xi_1, \xi_2, \xi_3, \ldots, \xi_M$ denote a known set of N-dimensional memories.
 $W = \frac{1}{N}\left(\sum_{\mu=1}^{M} \xi_\mu \xi_\mu^{T} - M I\right)$
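A minimal NumPy sketch of this storage (outer-product) rule, assuming the memories are the rows of an (M, N) array with entries in {-1, +1}; the helper name hopfield_weights is just for illustration.

```python
import numpy as np

def hopfield_weights(memories):
    """W = (1/N) * (sum_mu xi_mu xi_mu^T - M*I): symmetric, zero diagonal.

    memories : (M, N) array of bipolar patterns (entries -1 or +1)
    """
    memories = np.asarray(memories, dtype=float)
    M, N = memories.shape
    W = memories.T @ memories      # sum of outer products xi_mu xi_mu^T
    W -= M * np.eye(N)             # remove the self-connection terms on the diagonal
    return W / N
```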
Hopfield model
 Inputting and updating
 Let $\xi_{\mathrm{probe}}$ denote an unknown N-dimensional input vector, used as the initial state.
 Update asynchronously (i.e., randomly and one unit at a time) according to the rule $x_j \leftarrow \operatorname{sgn}\left(\sum_{i} w_{ji} x_i\right)$.
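A sketch of one asynchronous sweep under this sign rule; async_update is a hypothetical helper, and a zero local field is treated here as "keep the current value".

```python
import numpy as np

def async_update(W, x, rng=None):
    """One full sweep of asynchronous updates in random order (x is modified in place).

    W : (N, N) Hopfield weight matrix (zero diagonal)
    x : (N,) current state, entries in {-1, +1}
    """
    rng = rng or np.random.default_rng()
    for j in rng.permutation(len(x)):    # visit units randomly, one at a time
        field = W[j] @ x                 # local field sum_i w_ji * x_i
        if field != 0:                   # zero field: leave x_j unchanged
            x[j] = 1.0 if field > 0 else -1.0
    return x
```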
Hopfield model
 Convergence and Outputting
 Repeat updating until the state vector remains unchanged.
 Let $x_{\mathrm{fixed}}$ denote the fixed point (stable state); the output is $y = x_{\mathrm{fixed}}$.
 Associated memories
$E = -\frac{1}{2} \sum_{j} \sum_{i \ne j} w_{ji} x_j x_i$
$\Delta E_j = E_j(n+1) - E_j(n) = -\frac{1}{2}\, \Delta x_j \sum_{i \ne j} w_{ji} x_i$
Each asynchronous update can only lower E, so the dynamics settle into a local minimum.
Memory vectors $\xi_1, \xi_2, \xi_3, \ldots, \xi_M$ are states that correspond to minima of E.
Any input vector converges to the stored memory vector that is most similar or most accessible to the input.
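A sketch of the full recall procedure, reusing the hypothetical hopfield_weights and async_update helpers from the earlier snippets; energy() evaluates the E above for a zero-diagonal W.

```python
import numpy as np

def energy(W, x):
    """E = -1/2 * sum over j != i of w_ji x_j x_i (W has zero diagonal)."""
    return -0.5 * x @ W @ x

def recall(W, probe, max_sweeps=100, rng=None):
    """Update asynchronously from the probe until a full sweep changes nothing."""
    rng = rng or np.random.default_rng()
    x = np.array(probe, dtype=float)
    for _ in range(max_sweeps):
        previous = x.copy()
        async_update(W, x, rng)           # one random-order sweep (sketched earlier)
        if np.array_equal(x, previous):   # fixed point x_fixed reached
            break
    return x                              # output y = x_fixed
```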
Hopfield model
 N = 3 example
 Let (1, -1, 1), (-1, 1, -1) denote the stored memories. (M = 2)
$W = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}$
Limitations of Hopfield model
① The stored memories are not always stable.
 The signal-to-noise ratio is roughly N/M for large M.
 The quality of memory recall breaks down at M ≈ 0.14N (a small numerical sketch follows limitation ③ below).
② There may be stable states that were not among the stored memories. (Spurious states)
Limitations of Hopfield model
③ The stable state reached may not be the one most similar to the input state.
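A small illustrative experiment for the M ≈ 0.14N breakdown noted in limitation ①, reusing the hypothetical hopfield_weights and recall helpers sketched earlier; the printed numbers depend on N, the seed, and the update schedule, so treat it as a rough probe rather than a measurement.

```python
import numpy as np

def recall_quality(N=200, load=0.1, trials=5, seed=0):
    """Average overlap between a recalled state and its stored pattern at load M = load * N."""
    rng = np.random.default_rng(seed)
    M = max(1, int(load * N))
    overlaps = []
    for _ in range(trials):
        memories = rng.choice([-1, 1], size=(M, N))
        W = hopfield_weights(memories)                 # from the earlier sketch
        probe = memories[0].copy()
        flip = rng.choice(N, size=N // 10, replace=False)
        probe[flip] *= -1                              # corrupt 10% of the bits
        out = recall(W, probe, rng=rng)                # from the earlier sketch
        overlaps.append(out @ memories[0] / N)
    return float(np.mean(overlaps))

for load in (0.05, 0.14, 0.25):
    print(load, recall_quality(load=load))             # recall quality drops past ~0.14
```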
On a scale-free neural network
 Network architecture: the BA scale-free network
 A small core of m nodes (fully connected).
 N (≫ m) nodes are added, each attaching to m existing nodes by preferential attachment.
 Total N + m processing units.
 Total ≈ Nm connections (for 1 ≪ m ≪ N).
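A minimal sketch of building such a topology and restricting the Hopfield weights to its links, using networkx's barabasi_albert_graph as a stand-in for the BA construction above (networkx builds N nodes in total, so the node count differs slightly from the slide's N + m):

```python
import numpy as np
import networkx as nx

def scale_free_hopfield_weights(memories, m=3, seed=0):
    """Outer-product Hopfield weights kept only on the links of a BA scale-free graph.

    memories : (P, N) array of bipolar patterns; the graph has N nodes and ~N*m edges.
    """
    memories = np.asarray(memories, dtype=float)
    P, N = memories.shape
    graph = nx.barabasi_albert_graph(N, m, seed=seed)   # scale-free topology
    adjacency = nx.to_numpy_array(graph)                 # 1 where a link exists, 0 elsewhere
    W = (memories.T @ memories) / N                       # full outer-product rule
    return W * adjacency                                  # dilute: keep only ~N*m connections
```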
On a scale-free neural network
 Hopfield pattern recognition
 Stored P different patterns: $\xi_i^{\mu}$ $(\mu = 1, 2, \ldots, P)$
 Input pattern: 10% reversal of $\xi_i^{1}$ (overlap $\psi = 0.8$)
 Output pattern: $S_i$
 The quality of recognition: overlap $\psi = \frac{1}{N}\sum_i S_i \xi_i^{1}$
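A sketch of measuring this overlap, reusing the hypothetical scale_free_hopfield_weights and recall helpers above; the sizes here are small illustrative values rather than the slides' N = 10000.

```python
import numpy as np

def overlap(S, xi):
    """Quality of recognition psi = (1/N) * sum_i S_i * xi_i (1.0 means perfect recall)."""
    return float(S @ xi) / len(xi)

rng = np.random.default_rng(1)
P, N, m = 10, 1000, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = scale_free_hopfield_weights(patterns, m=m)     # sketched earlier
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1                                  # 10% reversal -> initial overlap ~0.8
S = recall(W, probe, rng=rng)                      # sketched earlier
print(overlap(S, patterns[0]))
```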
On a scale-free neural network
 Small m: N = 10000, m = 2, 3, 5
On a scale-free neural network
 Large m: N + m = 10000, P = 10, 100, 1000
On a scale-free neural network
 Comparison with a fully connected network (m = N)
 For small m, low quality of recognition.
 For 1 ≪ m ≪ N, good quality of recognition.
 Gain of a factor N/m ≫ 1 in computer memory and time.
 Only a gradual decrease in the quality of recognition.
References
 A. S. Mikhailov, Foundations of Synergetics 1, Springer-Verlag, Berlin Heidelberg (1990)
 John Hertz et al., Introduction to the Theory of Neural Computation, Addison-Wesley (1991)
 Judith E. Dayhoff, Neural Network Architectures, Van Nostrand Reinhold (1990)
 S. Haykin, Neural Networks, Prentice-Hall (1999)
 D. Stauffer et al., http://xxx.lanl.gov/abs/cond-mat/0212601 (2002)