International Journal of Fuzzy Systems, Vol. 12, No. 1, March 2010
Self-constructing Fuzzy Neural Networks with Extended Kalman Filter
M. J. Er, F. Liu, and M. B. Li
Corresponding Author: M. J. Er is with the School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798.
E-mail: [email protected].
Manuscript received 18 Dec. 2008; revised 3 Jul. 2009; revised 2 Nov. 2009; accepted 16 Dec. 2009.

Abstract

In this paper, a self-constructing fuzzy neural network employing an extended Kalman filter (SFNNEKF) is designed and developed. The learning algorithm based on the EKF is simple and effective and is able to generate a fuzzy neural network with high accuracy and a compact structure. The proposed algorithm comprises three parts: (1) criteria of rule generation; (2) pruning technology; and (3) adjustment of free parameters. The EKF algorithm is used to adjust the free parameters of the SFNNEKF. The performance of the SFNNEKF is compared with other learning algorithms in function approximation, nonlinear system identification and time-series prediction. Simulation studies and comparisons with other algorithms demonstrate that a more compact structure with high performance can be achieved by the proposed algorithm.

Keywords: Dynamic system identification, Extended Kalman filter, Function approximation, Fuzzy neural networks, Mackey-Glass time-series prediction.

1. Introduction

The neural network (NN) is one of the important technologies for realizing artificial intelligence and machine learning. Many types of NNs with different learning algorithms have been designed and developed [1]-[6]. A fuzzy system, as a model-free approach, can approximate any continuous function on a compact set to any desired accuracy. It has been shown that fuzzy-logic-based modeling and control could serve as a powerful methodology for dealing with imprecision and nonlinearity efficiently [7] [8].

Recently, there has been a great resurgence of interest in fuzzy neural network (FNN) systems, and their applications have been found to be very effective and widespread in several areas [2], [9]-[14], [30] because the FNN is able to capture advantages of both fuzzy logic and neural networks [14]. The well-known adaptive-network-based fuzzy inference system (ANFIS) was proposed by Jang [10]. Juang et al. proposed an online self-constructing neural fuzzy inference network, which is a modified Takagi-Sugeno-Kang (TSK) type fuzzy system possessing a neural network's learning ability [16]. Another TSK-type fuzzy system implemented with radial basis function (RBF) neural networks, termed the dynamic fuzzy neural network (DFNN), has been proposed by Wu et al. [3] [21]. The salient feature of the DFNN algorithm is that not only can the parameters be adjusted, but the structure can also be self-adaptive via growing and pruning technologies. An enhanced version of the DFNN, termed GDFNN, based on the ellipsoidal basis function (EBF) was presented in [22], where a novel online parameter allocation mechanism was developed to alleviate the random choice of initialization. The resulting approaches, termed DFNN and GDFNN, have been applied to a wide range of problems [23]-[25]. Similar to the GDFNN, a self-organizing fuzzy neural network (SOFNN) [26] employing the optimal brain surgeon (OBS) method has been proposed. A novel hybrid algorithm based on genetic algorithms (GA) to design a fuzzy neural network, termed the self-organizing fuzzy neural network based on GA (SOFNNGA), to implement the TSK model has been proposed in [27]. However, like most online learning algorithms, it also encounters the problem that complicated growing and pruning criteria would slow down the learning speed and make the learning process complicated due to the use of GA in optimizing the topology of the initial network structure. The author of [17] developed a sequential learning algorithm, known as the resource allocating network (RAN), to dynamically determine the number of hidden layer neurons based on the property of the input samples. An enhancement of RAN, termed RANEKF, was proposed in [18], where the EKF method rather than the least mean squares (LMS) algorithm was used for updating the network parameters. The drawback of the RAN and RANEKF is that once a hidden neuron is generated, it will never be pruned, which leads to generating more neurons than those required by the network. To circumvent this problem, the minimal resource allocation network (MRAN) of [1] and [19] was developed by using pruning criteria whereby inactive hidden neurons can be detected and removed during the training process. By virtue of the MRAN, a more parsimonious network
topology can be realized. Recently, a sequential growing
and pruning algorithm for RBF (GAP-RBF) [28] has
been proposed by using piecewise linear approximation
of the Gaussian function to reflect the significance of
each neuron. The significance of neurons was used in the
learning algorithm to realize a compact RBF network.
The generalized GAP-RBF (GGAP-RBF) algorithm of
[29] can be used for online learning in real-time applications where the training observations are sequentially
presented to the learning system.
The algorithm used in this paper can generate and
prune neurons dynamically according to their significance to the system’s performance. By employing the
pruning technology, a parsimonious structure with high
performance can be achieved. The EKF method is basically a gradient-based online algorithm that is used for
predicting the states of a nonlinear dynamic system. In
this paper, the initial structure of the FNN is generated
based on criteria of growing and pruning neurons and the
EKF method is used to adjust free parameters including
centers, widths and weights of the FNN.
This paper is organized as follows. Section 2 briefly
describes the architecture of the SFNNEKF. The details
of the learning algorithm are presented in Section 3. Section 4 presents simulation results of the SFNNEKF, including performance comparisons with other learning
algorithms. Finally, conclusions are drawn in Section 5.
2. Architecture of Self-constructing Fuzzy Neural Networks
The architecture of the SFNNEKF is a four-layer FNN
shown in Figure 1. It realizes a Sugeno-type fuzzy inference system and is described as follows:
Figure 1. Architecture of the SFNNEKF.

Layer 1: In this layer, $X = [x_1\ x_2\ \cdots\ x_m]$ is the input vector and each node represents an input linguistic variable $x_i$, $i = 1, 2, \ldots, m$.

Layer 2: Each input variable $x_i$, $i = 1, 2, \ldots, m$, has $u$ membership functions $\mu_{ij}$, each in the form of a Gaussian function given by

$$\mu_{ij} = \exp\left[-\frac{(x_i - c_{ij})^2}{\sigma_j^2}\right], \quad i = 1, \ldots, m,\ j = 1, \ldots, u \quad (1)$$

where $c_{ij}$ and $\sigma_j$ are the center and width of the Gaussian function respectively and $u$ is the number of membership functions.
Layer 3: Each node represents a possible IF-part of the fuzzy rules. If the T-norm operator is chosen as multiplication to calculate each rule's firing strength, the output of the jth rule is given by

$$\psi_j = \exp\left[-\sum_{i=1}^{m}\frac{(x_i - c_{ij})^2}{\sigma_j^2}\right] = \exp\left[-\frac{\|X - C_j\|^2}{\sigma_j^2}\right], \quad j = 1, \ldots, u. \quad (2)$$
Layer 4: This layer is the output layer, where

$$y(X) = \sum_{j=1}^{u} \psi_j w_j \quad (3)$$

and a more compact form of (3) is given by

$$Y = \Psi W \quad (4)$$

where $W = [w_1, \ldots, w_u]^T$ is the output weight vector and $\Psi = [\psi_1, \ldots, \psi_u]$ is the output vector of the hidden-layer neurons.
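For illustration, equations (1)-(4) can be evaluated for one input vector with the following minimal sketch; the function name and the flat array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fnn_output(x, centers, widths, weights):
    """Forward pass of the four-layer FNN of Section 2.

    x       : input vector, shape (m,)
    centers : rule centers C_j, shape (u, m)
    widths  : rule widths sigma_j, shape (u,)
    weights : output weights w_j, shape (u,)
    """
    # Layers 2-3: Gaussian firing strength of each rule, eq. (2)
    sq_dist = np.sum((x - centers) ** 2, axis=1)   # ||X - C_j||^2
    psi = np.exp(-sq_dist / widths ** 2)           # psi_j
    # Layer 4: weighted sum of firing strengths, eq. (3)
    return np.dot(psi, weights)

# toy usage: two rules on a one-dimensional input
y = fnn_output(np.array([0.3]),
               centers=np.array([[0.0], [1.0]]),
               widths=np.array([0.5, 0.5]),
               weights=np.array([1.2, -0.7]))
```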
3. Learning Algorithm of SFNNEKF
A. Error Reduction Ratio
Suppose that for $n$ observations the FNN has generated $u$ RBF neurons. The output matrix of the hidden layer is given by

$$\Phi = \begin{bmatrix} \psi_{11} & \cdots & \psi_{u1} \\ \vdots & & \vdots \\ \psi_{1n} & \cdots & \psi_{un} \end{bmatrix}. \quad (5)$$

Equation (3) can be written as

$$T = \Phi W + E \quad (6)$$

where $T \in R^n$ is the desired output vector and $E$ is the error vector. The output matrix of the hidden layer, $\Phi$, can be rewritten as follows:

$$\Phi = K A \quad (7)$$

where $K$ is an $n \times u$ matrix with orthogonal columns and $A$ is a $u \times u$ upper triangular matrix. Substituting (7) into (6), we obtain

$$T = K A W + E = K G + E. \quad (8)$$

The orthogonal least squares solution $G$ is given by $G = (K^T K)^{-1} K^T T$, or equivalently,

$$g_i = \frac{k_i^T T}{k_i^T k_i}, \quad 1 \le i \le u. \quad (9)$$
An error reduction ratio (ERR) due to $k_i$, as defined in [20], is given by

$$\mathrm{err}_i = \frac{g_i^2\, k_i^T k_i}{T^T T}, \quad 1 \le i \le u. \quad (10)$$

Substituting (9) into (10) yields

$$\mathrm{err}_i = \frac{(k_i^T T)^2}{k_i^T k_i\, T^T T}, \quad 1 \le i \le u. \quad (11)$$
The ERR offers a simple and effective way of seeking a
subset of significant regressors. In this paper, we use it
as a growing criterion to evaluate the network generalization capability. We define
$$\eta = \sum_{i=1}^{u} \mathrm{err}_i. \quad (12)$$

If $\eta < k_{err}$, where $k_{err}$ is a predefined threshold, the FNN needs more hidden nodes to achieve good generalization performance and a neuron will be added to the network. Otherwise, no neurons will be added.
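For readers who want to reproduce the growing criterion, a minimal sketch of equations (5)-(12) follows. It uses a reduced QR factorization in place of an explicit Gram-Schmidt step, which is a choice of convenience rather than anything prescribed by the paper (the ERR of (11) is unchanged by column scaling); the function names are illustrative.

```python
import numpy as np

def error_reduction_ratios(Phi, T):
    """ERR of each hidden neuron, eqs. (7)-(11).

    Phi : hidden-layer output matrix, shape (n, u)  -- eq. (5)
    T   : desired output vector, shape (n,)
    """
    # Phi = K A with K having orthogonal columns, eq. (7);
    # a reduced QR factorization supplies such a K.
    K, _ = np.linalg.qr(Phi)
    num = (K.T @ T) ** 2                    # (k_i^T T)^2
    den = np.sum(K * K, axis=0) * (T @ T)   # k_i^T k_i * T^T T
    return num / den                        # err_i, eq. (11)

def needs_new_neuron(Phi, T, k_err):
    """Growing test of eq. (12): grow when eta < k_err."""
    eta = np.sum(error_reduction_ratios(Phi, T))
    return eta < k_err
```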
B. Criteria of Growing Neurons
For the $i$th observation $(P_i, t_i)$, calculate the output error of the SFNNEKF $e_i$, the distance $d_i(j)$ and the ERR as follows:

$$e_i = t_i - y_i \quad (13)$$

$$d_i(j) = \|P_i - C_j\|, \quad j = 1, \ldots, u \quad (14)$$

$$\eta = \sum_{i=1}^{n} \mathrm{err}_i. \quad (15)$$

If

$$e_i > k_e, \quad d_{min} > k_d \quad \text{and} \quad \eta < k_{err}, \quad (16)$$

where $d_{min} = \min_{1 \le j \le u} d_i(j)$ and $k_e$, $k_d$ are two predetermined parameters chosen as

$$k_e = \max[e_{max} \times \beta^i, e_{min}], \quad 0 < \beta < 1 \quad (17)$$

$$k_d = \max[d_{max} \times \gamma^i, d_{min}], \quad 0 < \gamma < 1, \quad (18)$$

a new neuron is added to the SFNNEKF network and the center, width and output layer weight of the newly generated neuron are set as follows:

$$C_i = P_i \quad (19)$$

$$w_i = e_i \quad (20)$$

$$\sigma_i = k \times d_{min} \quad (21)$$

where $k$ is an overlap factor that determines the overlap of responses of the RBF units.
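The growing test of equations (13)-(21) can be sketched as follows. The plain-list data layout, the helper name and the fallback used when no rule exists yet are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def try_grow(P_i, t_i, y_i, eta, centers, widths, weights, i,
             e_max, e_min, d_max, d_min_par, k_err, beta, gamma, k):
    """Apply the growing criteria (13)-(21) to observation (P_i, t_i)."""
    e_i = t_i - y_i                                        # eq. (13)
    dists = [np.linalg.norm(P_i - c) for c in centers]     # eq. (14)
    # fallback for the very first rule (not specified in the paper)
    d_min = min(dists) if dists else d_max
    k_e = max(e_max * beta ** i, e_min)                    # eq. (17)
    k_d = max(d_max * gamma ** i, d_min_par)               # eq. (18)
    if abs(e_i) > k_e and d_min > k_d and eta < k_err:     # eq. (16)
        centers.append(np.array(P_i, dtype=float))         # eq. (19)
        weights.append(e_i)                                 # eq. (20)
        widths.append(k * d_min)                            # eq. (21)
        return True
    return False
```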
C. Criteria of Pruning Neurons
To facilitate the following discussion, we rewrite the output matrix of the hidden layer $\Phi$ as follows:

$$\Phi = \begin{bmatrix} \psi_{11} & \cdots & \psi_{u1} \\ \vdots & & \vdots \\ \psi_{1n} & \cdots & \psi_{un} \end{bmatrix}. \quad (22)$$

In order to evaluate the contribution of each hidden neuron to the network output, the root mean square error (RMSE) of each hidden node is calculated as follows:

$$h_i = \phi_i^T \phi_i / n, \quad i = 1, \ldots, u \quad (23)$$

where $\phi_i = [\psi_{i1}, \ldots, \psi_{in}]^T$ is the $i$th column vector of the matrix $\Phi$.

If $h_i < k_{rmse}$, where $k_{rmse}$ is a predefined threshold, the $i$th hidden neuron is inactive and should be deleted.
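A corresponding sketch of the pruning test (22)-(23); it follows the formula above literally, and the helper name is an assumption.

```python
import numpy as np

def inactive_neurons(Phi, k_rmse):
    """Pruning test of eq. (23): indices of neurons with h_i < k_rmse.

    Phi : hidden-layer output matrix over n observations, shape (n, u) -- eq. (22)
    """
    n = Phi.shape[0]
    h = np.sum(Phi * Phi, axis=0) / n     # h_i = phi_i^T phi_i / n, eq. (23)
    return np.where(h < k_rmse)[0]        # these neurons should be deleted
```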
D. Adjustment of Weights
When no neurons are added or pruned from the network, the network parameter vector $W_{EKF}$ is adjusted using the EKF method of [19] as follows:

$$W_{EKF}(n) = W_{EKF}(n-1) + e_n k_n \quad (24)$$

where $W_{EKF} = [w_1, C_1^T, \sigma_1, \ldots, w_u, C_u^T, \sigma_u]$ is the network parameter vector and $k_n$ is the Kalman gain vector given by

$$k_n = [R_n + a_n^T P_{n-1} a_n]^{-1} P_{n-1} a_n. \quad (25)$$

Here, $a_n$ is the gradient vector and has the following form:
$$a_n = \Big[\psi_1(X_n),\ \psi_1(X_n)\frac{2w_1}{\sigma_1^2}(X_n - C_1)^T,\ \psi_1(X_n)\frac{2w_1}{\sigma_1^3}\|X_n - C_1\|^2,\ \ldots,\ \psi_u(X_n),\ \psi_u(X_n)\frac{2w_u}{\sigma_u^2}(X_n - C_u)^T,\ \psi_u(X_n)\frac{2w_u}{\sigma_u^3}\|X_n - C_u\|^2\Big]^T \quad (26)$$
where $R_n$ is the variance of the measurement noise and $P_n$ is the error covariance matrix, which is updated by

$$P_n = [I - k_n a_n^T]\, P_{n-1} + Q I \quad (27)$$

where $Q$ is a scalar that determines the allowed random step in the direction of the gradient vector and $I$ is the identity matrix. When a new neuron is allocated, the dimensionality of $P_n$ increases to

$$P_n = \begin{pmatrix} P_{n-1} & 0 \\ 0 & P_0 I \end{pmatrix} \quad (28)$$

where $P_0$ is an estimate of the uncertainties in the initial values assigned to the parameters.
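Equations (24)-(28) amount to one standard EKF step for a scalar output over the stacked parameter vector; a minimal sketch follows (the function name is an assumption, the gradient $a_n$ is supplied per eq. (26)).

```python
import numpy as np

def ekf_step(W, P, a_n, e_n, R_n, Q):
    """One EKF parameter update, eqs. (24)-(27).

    W   : parameter vector [w_1, C_1^T, sigma_1, ..., w_u, C_u^T, sigma_u]
    P   : error covariance matrix
    a_n : gradient of the network output w.r.t. W, eq. (26)
    e_n : output error for the current sample
    R_n : measurement-noise variance (scalar)
    Q   : allowed random step size (scalar)
    """
    k_n = P @ a_n / (R_n + a_n @ P @ a_n)       # Kalman gain, eq. (25)
    W_new = W + e_n * k_n                       # eq. (24)
    I = np.eye(len(W))
    P_new = (I - np.outer(k_n, a_n)) @ P + Q * I   # eq. (27)
    return W_new, P_new
```

When a neuron is added, the covariance can be enlarged block-diagonally as in (28), e.g. with scipy.linalg.block_diag(P, P0 * np.eye(m + 2)) for the m + 2 new parameters of the rule.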
4. Simulation Results
In this section, the effectiveness of the proposed algorithm is demonstrated in static function approximation,
nonlinear system identification and Mackey-Glass
time-series prediction. Simulation results are compared
with other learning algorithms, such as MRAN [1],
DFNN [21], RAN [17] and RANEKF [19].
A. Example 1: Function Approximation

First, a very popular function, the Hermite polynomial, is used to evaluate the performance of the proposed SFNNEKF. The function is given by
$$f(x) = 1.1\,(1 - x + 2x^2)\exp\!\left(-\frac{x^2}{2}\right). \quad (29)$$
Parameters of the SFNNEKF are set as follows: $d_{max} = 1$, $d_{min} = 0.2$, $e_{max} = 0.8$, $e_{min} = 0.01$, $k_{err} = 0.99$,
$k_{rmse} = 0.005$, $k = 0.5$, $\beta = 0.97$, $\gamma = 0.97$, $P_0 = 1.1$ and $Q = 0.01$.
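To make the setup of (29) reproducible, the sketch below generates the target function and the 200 random training samples from [−4, 4] described next; the random seed and names are arbitrary assumptions.

```python
import numpy as np

def hermite(x):
    """Target function of eq. (29)."""
    return 1.1 * (1.0 - x + 2.0 * x ** 2) * np.exp(-x ** 2 / 2.0)

rng = np.random.default_rng(0)                # arbitrary seed
x_train = rng.uniform(-4.0, 4.0, size=200)    # 200 samples from [-4, 4]
t_train = hermite(x_train)                    # noise-free targets
```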
A total of 200 training samples are randomly chosen
from the interval [−4, 4] for all the methods. Figure 2
shows the growth of hidden neurons and Figure 3 shows
the RMSE with respect to the training samples. A comparison of structure and performance with MRAN [1]
and DFNN [21] is summarized in Table 1. It can be seen
that the DFNN algorithm achieves the best RMSE performance in this case, but it needs more training time
because the TSK model makes the DFNN network
structure more complicated. The SFNNEKF algorithm
can obtain almost the same RMSE performance as the
DFNN with faster learning speed and it has better performance than the MRAN algorithm in terms of training
time and RMSE.
B. Example 2: Nonlinear Plant Identification
The plant to be identified is described by the second-order difference equation of [5] as follows:

$$y(t+1) = f[y(t), y(t-1)] + u(t) \quad (30)$$

where

$$f[y(t), y(t-1)] = \frac{y(t)\, y(t-1)\,[y(t) + 2.5]}{1 + y^2(t) + y^2(t-1)}. \quad (31)$$
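A short sketch of the plant (30)-(31) and of the training-data generation described below (2000 i.i.d. inputs uniform on [−2, 2] and a sinusoidal test input); the zero initial state, the random seed and the names are illustrative assumptions.

```python
import numpy as np

def f(y0, y1):
    """Nonlinear part of the plant, eq. (31)."""
    return y0 * y1 * (y0 + 2.5) / (1.0 + y0 ** 2 + y1 ** 2)

def simulate(u, y_init=(0.0, 0.0)):
    """Iterate eq. (30): y(t+1) = f[y(t), y(t-1)] + u(t)."""
    y = list(y_init)
    for u_t in u:
        y.append(f(y[-1], y[-2]) + u_t)
    return np.array(y[2:])

rng = np.random.default_rng(0)                       # arbitrary seed
u_train = rng.uniform(-2.0, 2.0, size=2000)          # i.i.d. training input
y_train = simulate(u_train)
t_test = np.arange(200)
y_test = simulate(np.sin(2 * np.pi * t_test / 25))   # sinusoidal test input
```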
Figure 2. Growth of neurons.
Figure 3. RMSE during training.

Table 1. Comparison of structure and performance of different algorithms (Example 1).

Algorithm    Number of neurons    RMSE      Training time (s)
SFNNEKF      6                    0.0078    0.38
DFNN         6                    0.0052    2.97
MRAN         7                    0.0376    0.53
Parameters of the SFNNEKF are set as follows: $d_{max} = 1$, $d_{min} = 0.2$, $e_{max} = 0.8$, $e_{min} = 0.01$, $k_{err} = 0.96$, $k_{rmse} = 0.005$, $k = 0.5$, $\beta = 0.97$, $\gamma = 0.97$, $P_0 = 1.1$, and $Q = 0.01$.
The input u(k) is assumed to be an independent, identically distributed (i.i.d.) random signal uniformly distributed in the interval [−2, 2], and a total of 2000 samples are randomly chosen for the training process. After this, the sinusoidal input signal u(t) = sin(2πt/25) is used to
test the identified model. Figure 4 depicts the identified
result using the SFNNEKF algorithm. In order to observe the result clearly, only the first 100 samples are
shown in Figure 4. Here, the solid-line curve is the desired plant output and the dash-line curve is the identified model output. It can be seen that the SFNNEKF algorithm can identify the plant very well. The RMSE
corresponding to training samples is shown in Figure 5.
The SFNNEKF algorithm generated a total of 15 neurons at the end of the training process. Table 2 shows a
comparison of structure and performance of different
algorithms. It is clear that the SFNNEKF algorithm can
achieve a satisfactory RMSE performance with a simple
network structure. Although the RMSE of the DFNN
algorithm is 0.056, it needs more neurons (i.e., a more complicated network structure) to realize it. In terms of training speed, the SFNNEKF algorithm is the fastest of the three sequential learning algorithms.
Figure 4. Identification result: desired (−) and identified (---).
Figure 5. RMSE during training.
Figure 7. Prediction error (plotted against time t).
Table 2. Comparison of structure and performance of different algorithms (Example 2).

Algorithm    Number of neurons    RMSE      Training time (s)
SFNNEKF      15                   0.087     30.25
DFNN         26                   0.056     608.51
MRAN         40                   0.1525    69.79

Table 3. Generalization comparison between SFNNEKF, DFNN, RAN, RANEKF and MRAN.

Algorithm    Training data    Number of neurons    NRMSE
SFNNEKF      2000             41                   0.0327
DFNN         2000             25                   0.0544
RAN          5000             50                   0.071
RANEKF       5000             56                   0.056
MRAN         5000             28                   - (a)

(a) The generalization error is measured by the weighted prediction error (WPE).
On the whole, the SFNNEKF algorithm demonstrates superior performance to the DFNN and MRAN algorithms in terms of network complexity and training time.
C. Example 3: Mackey-Glass Time-Series Prediction
The Mackey-Glass time-series prediction is a benchmark problem which has been considered by a number
of researchers [1], [3], [9] and [16] - [18]. The time series in our simulation studies is generated by
$$x(t+1) = (1-a)\,x(t) + \frac{b\,x(t-\tau)}{1 + x^{10}(t-\tau)} \quad (32)$$

for $a = 0.1$, $b = 0.2$ and $\tau = 17$. A sampled version of the time series is obtained by the above equation.
The prediction model is given by
$$x(t+p) = f[x(t), x(t-\Delta t), x(t-2\Delta t), x(t-3\Delta t)]. \quad (33)$$
For the purpose of training and testing, 6000 samples are generated between $t = 0$ and $t = 6000$ from (32) with initial conditions $x(t) = 0$ for $t < 0$ and $x(0) = 1.2$. These data are then used to prepare the input and output pairs in (33).
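The sampled series of (32) and the input-output pairs of (33) can be produced with the sketch below; the unit-step recursion and the helper names are assumptions consistent with the text.

```python
import numpy as np

def mackey_glass(n_steps, a=0.1, b=0.2, tau=17, x0=1.2):
    """Sampled Mackey-Glass series via the recursion of eq. (32)."""
    x = np.zeros(n_steps + 1)
    x[0] = x0                                  # x(0) = 1.2, x(t) = 0 for t < 0
    for t in range(n_steps):
        x_delay = x[t - tau] if t >= tau else 0.0
        x[t + 1] = (1 - a) * x[t] + b * x_delay / (1 + x_delay ** 10)
    return x

def make_pairs(x, p=6, dt=6, t_start=118, n_pairs=500):
    """Input-output pairs of eq. (33) for the chosen p and Delta t."""
    X, T = [], []
    for t in range(t_start, t_start + n_pairs):
        X.append([x[t], x[t - dt], x[t - 2 * dt], x[t - 3 * dt]])
        T.append(x[t + p])
    return np.array(X), np.array(T)

x = mackey_glass(6000)
X_train, T_train = make_pairs(x)   # the 500 training pairs used below
```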
Suppose $p = 6$ and $\Delta t = 6$, and select the first 500 sample pairs between $118 \le t \le 617$ as training data. Parameters of the SFNNEKF are set as follows: $d_{max} = 1$, $d_{min} = 0.18$, $e_{max} = 1.1$, $e_{min} = 0.02$, $k_{err} = 1.2$, $k_{rmse} = 0.00025$, $k = 0.5$, $\beta = 0.95$, $\gamma = 0.98$, $P_0 = 1.1$ and $Q = 0.01$.

Figure 6. Mackey-Glass time series from 642 to 1141 and six-step-ahead prediction: desired (−) and actual (---) output (desired and predicted output versus training samples).
The prediction result and the prediction error for $n = 500$, $p = 6$ are shown in Figure 6 and Figure 7 respectively. The number of rules generated is 34, which indicates that the SFNNEKF possesses remarkable generalization capability.
If we select $p = 50$ and $\Delta t = 6$, for the convenience of comparison, the generalization capability is evaluated by the normalized root mean squared error (NRMSE) of [17], or the non-dimensional error index (NDEI), which is defined as the RMSE divided by the standard deviation of the target series [17]. A generalization comparison between the SFNNEKF, DFNN [21], RAN [17], RANEKF [19] and MRAN [1] is listed in Table 3. It clearly demonstrates that the SFNNEKF can obtain better performance even though it has generated more rules than the DFNN. Moreover, the SFNNEKF shows superiority compared with the RAN and RANEKF algorithms in terms of NRMSE, which measures the average prediction performance. The final NRMSE value for the SFNNEKF is 0.0327, while the NRMSE values are 0.071 and 0.056 for the RAN and the RANEKF respectively.
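As a small worked illustration of the figure of merit used in Table 3, the NDEI/NRMSE is simply the RMSE of the predictions normalized by the standard deviation of the target series; the helper name below is assumed.

```python
import numpy as np

def ndei(y_true, y_pred):
    """Non-dimensional error index: RMSE divided by the std of the targets."""
    rmse = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return rmse / np.std(y_true)
```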
5. Conclusions
In this paper, a new self-constructing fuzzy neural network has been proposed. The basic idea of the proposed approach is to construct the fuzzy neural network based on criteria for generating and pruning neurons. The EKF algorithm has been used to adapt the parameters when a hidden unit is not added. The superior performance of the SFNNEKF over some other learning algorithms has been demonstrated in three examples in this paper. For function approximation and nonlinear plant identification, the SFNNEKF algorithm demonstrates superior performance to the DFNN and MRAN algorithms in the sense that fewer rules are generated and the training time is significantly reduced. For Mackey-Glass time-series prediction, the SFNNEKF can obtain better performance even though the number of generated rules is greater than that generated by the DFNN, and it shows superiority compared with the RAN and the RANEKF. The SFNNEKF algorithm employs the ERR as a condition in constructing the network, which makes the growth of neurons smooth and fast. The EKF algorithm has been used to adjust the free parameters of the FNN to achieve an optimal solution. Simulation results show that a more effective fuzzy neural network with high accuracy and a compact structure can be self-constructed by the proposed algorithm.
References

[1] Y. Lu, N. Sundararajan, and P. Saratchandran, "A sequential learning scheme for function approximation using minimal radial basis function neural networks," Neural Computation, vol. 9, no. 2, pp. 461-478, 1997.
[2] J. Deng, N. Sundararajan, and P. Saratchandran, "Communication channel equalization using complex-valued minimal radial basis function neural networks," IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 687-696, 2002.
[3] S. Wu and M. J. Er, "Dynamic fuzzy neural networks - a novel approach to function approximation," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 30, no. 2, pp. 358-364, 2000.
[4] Y. Lu, N. Sundararajan, and P. Saratchandran, "Identification of time-varying nonlinear systems using minimal radial basis function neural networks," IEE Proceedings: Control Theory and Applications, vol. 144, no. 2, pp. 202-208, 1997.
[5] K. S. Narendra and K. Parathasarathy, "Identification and control of dynamic systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[6] A. U. Levin and K. S. Narendra, "Control of nonlinear dynamical system using neural networks - part II: Observability, identification, and control," IEEE Transactions on Neural Networks, vol. 7, no. 1, pp. 30-42, 1996.
[7] L. X. Wang, Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1994.
[8] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing, Englewood Cliffs, NJ: Prentice-Hall, 1997.
[9] K. B. Cho and B. H. Wang, "Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction," Fuzzy Sets and Systems, vol. 83, no. 3, pp. 325-339, 1996.
[10] J. S. R. Jang, "ANFIS: Adaptive-network-based fuzzy inference system," IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665-684, 1993.
[11] J. S. Wang and C. S. G. Lee, "Efficient neuro-fuzzy control systems for autonomous underwater vehicle control," Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 2986-2991, 2001.
[12] C. T. Lin and C. S. G. Lee, Neural Fuzzy Systems: A Neural-Fuzzy Synergism to Intelligent Systems, Englewood Cliffs, NJ: Prentice-Hall, May 1996.
[13] K. Tanaka, M. Sano, and H. Watanabe, "Modeling and control of carbon monoxide concentration using a neuro-fuzzy technique," IEEE Transactions on Fuzzy Systems, vol. 3, no. 3, pp. 271-279, 1995.
[14] F. Y. Wang and H. M. Kim, "Implementing adaptive fuzzy logic controllers with neural networks: a design paradigm," Journal of Intelligent and Fuzzy Systems, vol. 3, no. 2, pp. 605-614, 1995.
[15] D. A. Linkens and H. O. Nyongesa, "Learning systems in intelligent control: an appraisal of fuzzy, neural and genetic algorithm control applications," IEE Proceedings: Control Theory and Applications, vol. 143, no. 4, pp. 367-386, 1996.
[16] C. F. Juang and C. T. Lin, "An on-line self-constructing neural fuzzy inference network and its applications," IEEE Transactions on Fuzzy Systems, vol. 6, no. 1, pp. 12-32, 1998.
[17] J. Platt, "A resource-allocating network for function interpolation," Neural Computation, vol. 3, no. 2, pp. 213-225, 1991.
[18] V. Kadirkamanathan and M. Niranjan, "A function estimation approach to sequential learning with neural networks," Neural Computation, vol. 5, no. 6, pp. 954-975, 1993.
[19] Y. Lu, N. Sundararajan, and P. Saratchandran, "Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm," IEEE Transactions on Neural Networks, vol. 9, no. 2, pp. 308-318, 1998.
[20] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function network," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302-309, 1991.
[21] M. J. Er and S. Wu, "A fast learning algorithm for parsimonious fuzzy neural systems," Fuzzy Sets and Systems, vol. 126, pp. 337-351, 2002.
[22] S. Wu, M. J. Er, and Y. Gao, "A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks," IEEE Transactions on Fuzzy Systems, vol. 9, no. 4, pp. 578-594, 2001.
[23] Y. Gao and M. J. Er, "Online adaptive fuzzy neural identification and control of a class of MIMO nonlinear systems," IEEE Transactions on Fuzzy Systems, vol. 11, no. 4, pp. 12-32, 2003.
[24] Y. Gao and M. J. Er, "NARMAX time series model prediction: feedforward and recurrent fuzzy neural network approaches," Fuzzy Sets and Systems, vol. 150, pp. 331-350, 2005.
[25] M. J. Er, F. Liu, and M. B. Li, "Channel equalization using dynamic fuzzy neural networks," International Journal of Fuzzy Systems, vol. 11, no. 4, pp. 10-19, 2009.
[26] G. Leng, G. Prasad, and T. M. McGinnity, "An on-line algorithm for creating self-organizing fuzzy neural networks," Neural Networks, vol. 17, pp. 1477-1493, 2004.
[27] G. Leng and T. M. McGinnity, "Design for self-organizing fuzzy neural networks based on genetic algorithms," IEEE Transactions on Fuzzy Systems, vol. 14, no. 6, pp. 755-765, 2006.
[28] G. B. Huang, P. Saratchandran, and N. Sundararajan, "An efficient sequential learning algorithm for growing and pruning (GAP-RBF) networks," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 34, pp. 2284-2292, 2004.
[29] G. B. Huang, P. Saratchandran, and N. Sundararajan, "A generalized growing and pruning (GGAP-RBF) neural network for function approximation," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 57-67, 2005.
[30] S. Mitra and Y. Hayashi, "Neuro-fuzzy rule generation: survey in soft computing framework," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 748-768, 2000.