RBF ch1,2,3
Overview of RBF Networks
• RBF networks have three layers: an input layer, a hidden layer, and an output layer.
• The output is a real value.
• One neuron in the input layer corresponds to each predictor variable.
• Each neuron in the hidden layer applies an RBF (e.g., a Gaussian).
• Each hidden neuron is centered on a point with the same dimensions as the predictor variables.
• The output layer computes a weighted sum of the outputs from the hidden layer.
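As a concrete sketch of the structure described above, the snippet below implements a forward pass through a small RBF network with Gaussian hidden units. The centers, widths, and weights are arbitrary illustrative values, not taken from the slides.

```python
import numpy as np

def gaussian_rbf(x, center, sigma):
    """Gaussian basis: exp(-||x - c||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

def rbf_forward(x, centers, sigmas, weights):
    """Output = weighted sum of hidden-unit activations (a single real value)."""
    phi = np.array([gaussian_rbf(x, c, s) for c, s in zip(centers, sigmas)])
    return phi @ weights

# Two hidden units in a 2-D predictor space (values chosen for illustration)
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
sigmas = [0.5, 0.5]
weights = np.array([1.0, -1.0])
y = rbf_forward(np.array([0.0, 0.0]), centers, sigmas, weights)
```

Each hidden unit responds most strongly when the input is near its center, and the output layer simply takes the dot product of those activations with the weight vector, matching the weighted-sum description above.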
Supervised Learning & Unsupervised Learning
Supervised Learning:
problems are categorized into "regression" and "classification".
Unsupervised Learning:
we can derive structure from data where we don't necessarily know the effect of the variables.
Nonparametric Regression & Parametric Regression
Parametric Regression: the parameters have meaningful interpretations, such as an initial water level or a rate of flow.
Y depends on X.
Nonparametric Regression: the parameters have no particular meaning in relation to the problems to which they are applied.
Nonparametric Regression
Nonparametric Regression: the primary goal is to estimate the underlying function.
Y depends on the weights and the basis functions.
• Architecturally, an RBF network is a three-layer feedforward network. The mapping from the input layer through the hidden layer is nonlinear (the hidden-layer functions), while the mapping from the hidden layer to the output layer is linear (the output-layer function), which speeds up network learning.
• Data classification problems in a high-dimensional space are more likely to be linearly separable than those in a low-dimensional space.
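The point about the linear hidden-to-output mapping is what makes learning fast: once the centers and widths are fixed, the output weights solve an ordinary linear least-squares problem, with no iterative training of the output layer. A minimal sketch, assuming fixed, evenly spaced centers (all values here are illustrative):

```python
import numpy as np

X = np.linspace(0.0, 1.0, 20)       # 1-D training inputs (made up)
y = np.sin(2 * np.pi * X)           # training targets (made up)

centers = np.linspace(0.0, 1.0, 5)  # fixed hidden-unit centers
sigma = 0.2                         # fixed width

# Design matrix of hidden activations: Phi[n, i] = exp(-(x_n - c_i)^2 / (2 sigma^2))
Phi = np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

# Because the output layer is linear, one least-squares solve gives the weights.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

The nonlinearity lives entirely in building `Phi`; solving for `w` is plain linear algebra, which is the speed advantage the slide describes.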
The idea
[Figure sequence: (1) training data plotted on x–y axes; (2) basis functions (kernels) placed along the x-axis beneath the training data; (3) the function learned from the data, drawn over the basis functions; (4) the learned function evaluated at a nontraining sample.]
Linear model
Formula:
f(x) = Σ_{i=1}^{m} w_i φ_i(x)
Example Linear Models
• Polynomial
f(x) = Σ_i w_i x^i,   φ_i(x) = x^i,   i = 0, 1, 2, …
• Fourier Series
f(x) = Σ_k w_k exp(j2πkω₀x),   φ_k(x) = exp(j2πkω₀x),   k = 0, 1, 2, …
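Both examples are linear in the weights even though the basis functions are nonlinear in x, so the weights can be found by ordinary least squares. A sketch for the polynomial case (the data here is made up for illustration):

```python
import numpy as np

# Polynomial linear model f(x) = sum_i w_i x^i: nonlinear in x,
# but linear in the weights w_i, so ordinary least squares applies.
degree = 2
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * X + 3.0 * X ** 2    # data generated by a known quadratic

# Basis functions phi_i(x) = x^i, i = 0..degree, as columns of the design matrix
Phi = np.vander(X, degree + 1, increasing=True)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
# w recovers the generating coefficients [1, 2, 3]
```

Exactly the same solve works for the Fourier basis; only the construction of the design matrix changes.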
Single-Layer Perceptrons as Universal Approximators
[Figure: network diagram. Inputs x₁, …, xₙ feed hidden units φ₁, φ₂, …, φ_m; their outputs are weighted by w₁, w₂, …, w_m and summed to produce y.]
f(x) = Σ_{i=1}^{m} w_i φ_i(x)
Radial Basis Function Networks as Universal Approximators
[Figure: the same network diagram, with radial-basis-function hidden units.]
With a sufficient number of radial-basis-function units, an RBF network can also be a universal approximator.
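A rough way to see the universal-approximation claim empirically is to fit the same target with increasing numbers of RBF units and watch the fitting error fall. This is an illustrative sketch, not a proof; the target function, centers, and widths are chosen ad hoc:

```python
import numpy as np

def rbf_fit_mse(num_units, X, y):
    """Fit y with num_units Gaussian RBFs (least squares) and return the MSE."""
    centers = np.linspace(X.min(), X.max(), num_units)
    sigma = (X.max() - X.min()) / num_units   # ad hoc width choice
    Phi = np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.mean((Phi @ w - y) ** 2))

X = np.linspace(-1.0, 1.0, 200)
y = np.abs(X)                # target with a kink, hard for a few smooth units

errors = [rbf_fit_mse(m, X, y) for m in (2, 5, 20)]
# the fitting error typically shrinks as the number of hidden units grows
```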
Linear model