Neural-Network-Based Fuzzy Logic Control and Decision System
Lecturer: 虞台文
Content
- Introduction
- Basic Structure of Fuzzy Systems
- Connectionist Fuzzy Logic Control and Decision Systems
- Hybrid Learning Algorithm
- Example: Fuzzy Control of Unmanned Vehicle
Introduction
Reference
Chin-Teng Lin and C. S. George Lee, "Neural-network-based fuzzy logic control and decision system," IEEE Transactions on Computers, vol. 40, no. 12, pp. 1320-1336, Dec. 1991.
Neural-Network & Fuzzy-Logic Systems

Neural-Network Systems
- Highly connected PEs (distributed representation)
- Learning capability (learning from examples)
- Learning result is hardly interpretable
- Efficient in pattern matching, but inefficient in computation

Fuzzy-Logic Systems
- Inference based on human-readable fuzzy rules
- Linguistic-variable-based fuzzy rules
- Fuzzy rules from experienced engineers
- Fuzzification before inference
- Inference using the compositional rule
- Defuzzification before output
Neural-Network & Fuzzy-Logic Systems

On neural-network systems: the back-propagation learning algorithm is efficient if the appropriate network structure is used; however, determining the appropriate network structure is difficult.

On fuzzy-logic systems: the construction of the fuzzy rule base and the determination of the membership functions are subjective.
Neuro-Fuzzy Systems

Neural Network: good for learning
- Supervised learning
- Unsupervised learning
- Reinforcement learning
but not good for humans to interpret its internal representation.

+

Fuzzy Logic: human reasoning scheme
- Readable fuzzy rules
- Interpretable
but the fuzzy rules and membership functions are subjective.
Neuro-Fuzzy Systems

Neural Network + Fuzzy Logic: the network learns the fuzzy sets and fuzzy rules.
A neuro-fuzzy system is a fuzzy system that uses a
learning algorithm derived from or inspired by neural
network theory to determine its parameters by
processing data samples.
Basic Structure of Fuzzy Systems
Basic Structure of Fuzzy Systems

X → Fuzzifier → μ(X) → Inference Engine → μ(Y) → Defuzzifier → Y
(each stage consults the Fuzzy Knowledge Base)
Fuzzifier

Converts the crisp input to a linguistic variable using the membership functions stored in the fuzzy knowledge base.
Inference Engine

Using If-Then type fuzzy rules, converts the fuzzy input to the fuzzy output.
Defuzzifier

Converts the fuzzy output of the inference engine to a crisp value using membership functions analogous to the ones used by the fuzzifier.
Fuzzy Knowledge Base

Information storage for:
1. Linguistic variable definitions.
2. Fuzzy rules.
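The fuzzifier stage described above can be sketched in a few lines. This assumes Gaussian-shaped membership functions (the shape the term nodes use later in these slides); the term names and parameter values are illustrative, not from the paper.

```python
import math

def gaussian_membership(x, center, width):
    """Degree to which crisp input x belongs to a fuzzy term
    with the given center and width (Gaussian-shaped)."""
    return math.exp(-((x - center) ** 2) / width ** 2)

# Hypothetical linguistic terms for a temperature input.
terms = {"cold": (0.0, 10.0), "warm": (20.0, 10.0), "hot": (40.0, 10.0)}

def fuzzify(x, terms):
    """Map a crisp input to a membership degree for every term."""
    return {name: gaussian_membership(x, m, s) for name, (m, s) in terms.items()}

degrees = fuzzify(22.0, terms)
# An input of 22 belongs mostly to "warm".
```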
Input/Output Vectors

Each input variable is specified as
$x_i,\ U_i,\ T_{x_i}^1, T_{x_i}^2, \ldots, T_{x_i}^{k_i},\ M_{x_i}^1, M_{x_i}^2, \ldots, M_{x_i}^{k_i}$, $\quad i = 1, \ldots, p$

and each output variable as
$y_i,\ U_i,\ T_{y_i}^1, T_{y_i}^2, \ldots, T_{y_i}^{l_i},\ M_{y_i}^1, M_{y_i}^2, \ldots, M_{y_i}^{l_i}$, $\quad i = 1, \ldots, q$

where $U_i$ is the universe of discourse, $T^j$ the linguistic terms, and $M^j$ their membership functions.

MIMO: multi-input and multi-output.
Fuzzy Rules

$R = \{ R_{MIMO}^1, R_{MIMO}^2, \ldots, R_{MIMO}^n \}$

$R_{MIMO}^i : T_{x_1} \times \cdots \times T_{x_p} \rightarrow T_{y_1} \times \cdots \times T_{y_q}$

$R_{MIMO}^i$: IF $x_1$ is $T_{x_1}$ and $\cdots$ and $x_p$ is $T_{x_p}$ THEN $y_1$ is $T_{y_1}$ and $\cdots$ and $y_q$ is $T_{y_q}$

Each MIMO rule can be decomposed into an equivalent set of MISO (multi-input, single-output) rules, one per output variable.
Fuzzy Reasoning

$R_1$: IF $x_1$ is $T_{x_1}^1$ and $x_2$ is $T_{x_2}^1$ THEN $y$ is $T_y^1$
$R_2$: IF $x_1$ is $T_{x_1}^1$ and $x_2$ is $T_{x_2}^2$ THEN $y$ is $T_y^2$
$R_3$: IF $x_1$ is $T_{x_1}^2$ and $x_2$ is $T_{x_2}^1$ THEN $y$ is $T_y^3$
$R_4$: IF $x_1$ is $T_{x_1}^2$ and $x_2$ is $T_{x_2}^2$ THEN $y$ is $T_y^4$

The input $X$ fires all four rules in parallel; their results are combined and passed through the defuzzifier to produce the crisp output $y$.
Rule Firing Strengths

$\alpha_1 = M_{x_1}^1(x_1) \wedge M_{x_2}^1(x_2)$  (for $R_1$)
$\alpha_2 = M_{x_1}^1(x_1) \wedge M_{x_2}^2(x_2)$  (for $R_2$)
$\alpha_3 = M_{x_1}^2(x_1) \wedge M_{x_2}^1(x_2)$  (for $R_3$)
$\alpha_4 = M_{x_1}^2(x_1) \wedge M_{x_2}^2(x_2)$  (for $R_4$)
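The four firing strengths can be computed directly. This is a minimal sketch assuming Gaussian membership functions and min as the conjunction operator; the centers and widths are illustrative.

```python
import math

def membership(x, center, width):
    # Gaussian-shaped membership, M(x) = exp(-(x - m)^2 / sigma^2).
    return math.exp(-((x - center) ** 2) / width ** 2)

# Hypothetical terms: two terms per input, as in rules R1..R4.
x1_terms = [(0.0, 1.0), (1.0, 1.0)]   # (m, sigma) for M_x1^1, M_x1^2
x2_terms = [(0.0, 1.0), (1.0, 1.0)]   # (m, sigma) for M_x2^1, M_x2^2

def firing_strengths(x1, x2):
    """alpha_i = min over the rule's antecedent membership degrees,
    one value per rule, enumerated in the order R1, R2, R3, R4."""
    alphas = []
    for (m1, s1) in x1_terms:          # term of x1 used by the rule
        for (m2, s2) in x2_terms:      # term of x2 used by the rule
            alphas.append(min(membership(x1, m1, s1),
                              membership(x2, m2, s2)))
    return alphas
```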
Fuzzy Sets of Decisions

Each rule's consequent membership function is clipped by the rule's firing strength:

$M_y^1(w) \wedge \alpha_1, \quad M_y^2(w) \wedge \alpha_2, \quad M_y^3(w) \wedge \alpha_3, \quad M_y^4(w) \wedge \alpha_4$
The clipped consequents are combined by union into the overall fuzzy set of decisions:

$M_y(w) = M_y^1(w) \vee M_y^2(w) \vee M_y^3(w) \vee M_y^4(w)$

(with each $M_y^i(w)$ already clipped by its $\alpha_i$).
Defuzzification → Decision Output

The crisp decision output is the centroid of the combined fuzzy set $M_y(w)$:

$y = \frac{\sum_j w_j\, M_y(w_j)}{\sum_j M_y(w_j)}$  or  $y = \frac{\int w\, M_y(w)\, dw}{\int M_y(w)\, dw}$
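The clip, union, and centroid steps of the preceding slides can be sketched as follows; the triangular consequents and the discretization grid are illustrative assumptions, not the paper's.

```python
def defuzzify_centroid(consequents, alphas, w_grid):
    """Clip each consequent M_y^i by its firing strength alpha_i, take
    the pointwise max (union) to get M_y(w), then return the discrete
    centroid y = sum(w * M_y(w)) / sum(M_y(w))."""
    M = [max(min(My(w), a) for My, a in zip(consequents, alphas))
         for w in w_grid]
    return sum(w * m for w, m in zip(w_grid, M)) / sum(M)

# Hypothetical triangular consequents centered at 0 and 1.
def tri(center):
    return lambda w: max(0.0, 1.0 - abs(w - center))

consequents = [tri(0.0), tri(1.0)]
w_grid = [i / 100.0 for i in range(-100, 201)]  # discretized universe

y = defuzzify_centroid(consequents, [0.5, 0.5], w_grid)
# Equal firing strengths on symmetric consequents put the centroid midway.
```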
General Model of Fuzzy Controller and
Decision Making System
Connectionist Fuzzy Logic Control and Decision Systems
The Architecture

Layer 5: output linguistic nodes ($y_1, \ldots, y_m$, with desired outputs $\hat{y}_1, \ldots, \hat{y}_m$)
Layer 4: output term nodes
Layer 3: rule nodes
Layer 2: input term nodes
Layer 1: input linguistic nodes ($x_1, x_2, \ldots, x_n$)

Layers 1-2 play the role of the fuzzifier, Layer 3 the inference engine, and Layers 4-5 the defuzzifier. The links feeding the rule nodes encode the rule antecedents, and the links leaving the rule nodes encode the consequents; both link layers start out fully connected.
Basic Structure of Neurons

Each node in layer $k$ computes a net input

$f = f(u_1^k, \ldots, u_p^k;\ w_1^k, \ldots, w_p^k)$

and an output

$o_i^k = a(f)$

where $u_j^k$ are the node inputs, $w_j^k$ the link weights, and $a(\cdot)$ the activation function.
Layer 1 Neurons

$f = u_i^1, \qquad a = f$

(the crisp input is passed through unchanged).
Layer 2 Neurons

$f = M_{x_i}^j(m_{ij}, \sigma_{ij}) = -\frac{(u_i^2 - m_{ij})^2}{\sigma_{ij}^2}, \qquad a = e^f$

where $m_{ij}$ is the center and $\sigma_{ij}$ the width of the input membership function.
Layer 3 Neurons

$f = \min(u_1^3, u_2^3, \ldots, u_p^3), \qquad a = f$

(each rule node computes the firing strength of its rule).
Layer 4 Neurons (Down-Up Mode)

$f = \sum_{i=1}^{p} w_i^4 u_i^4, \quad w_i^4 \in \{0, 1\}, \qquad a = \min(1, f)$
Layer 4 Neurons (Up-Down Mode)

$f = M_{y_i}^j(m_{ij}, \sigma_{ij}) = -\frac{(u_i^5 - m_{ij})^2}{\sigma_{ij}^2}, \qquad a = e^f$

where $m_{ij}$ is the center and $\sigma_{ij}$ the width of the output membership function.
Layer 5 Neurons (Up-Down Mode)

$f = y_i, \qquad a = f$

(feeds the desired output down into the network during training).
Layer 5 Neurons (Down-Up Mode)

$f = \sum_i (m_i \sigma_i)\, u_i^5, \qquad a = \frac{f}{\sum_i \sigma_i u_i^5}$

(a center-of-area defuzzification using the centers $m_i$ and widths $\sigma_i$ of the output membership functions).
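Putting the five layers together in down-up (inference) mode gives this forward-pass sketch for a single output variable; the term parameters and the rule table are hand-picked illustrations, not values from the paper.

```python
import math

in_terms = [[(0.0, 1.0), (1.0, 1.0)],   # (m, sigma) terms of x1
            [(0.0, 1.0), (1.0, 1.0)]]   # (m, sigma) terms of x2
out_terms = [(0.0, 1.0), (1.0, 1.0)]    # (m, sigma) terms of y
# Each rule: (input-term index per input, output-term index).
rules = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

def forward(x):
    # Layers 1-2: fuzzification, a = exp(-(x - m)^2 / sigma^2).
    mu = [[math.exp(-((xi - m) ** 2) / s ** 2) for (m, s) in terms]
          for xi, terms in zip(x, in_terms)]
    # Layer 3: rule firing strengths, f = min over antecedent degrees.
    alpha = [min(mu[i][t] for i, t in enumerate(ante)) for ante, _ in rules]
    # Layer 4: output term activation, a = min(1, sum of linked strengths).
    act = [min(1.0, sum(a for (_, c), a in zip(rules, alpha) if c == j))
           for j in range(len(out_terms))]
    # Layer 5: center-of-area defuzzification,
    # y = sum(m*sigma*u) / sum(sigma*u).
    num = sum(m * s * u for (m, s), u in zip(out_terms, act))
    den = sum(s * u for (m, s), u in zip(out_terms, act))
    return num / den
```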
Hybrid Learning Algorithm
Initialization

The initial number of rule nodes equals the number of antecedent combinations:

$|T(x_1)| \times |T(x_2)| \times \cdots \times |T(x_n)|$
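The initial rule-node count is simply the product of the input term counts; a trivial sketch:

```python
def initial_rule_count(term_counts):
    """Rule nodes created at initialization: one per combination of
    input terms, |T(x1)| * |T(x2)| * ... * |T(xn)|."""
    n = 1
    for c in term_counts:
        n *= c
    return n
```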
Two-Phase Learning Scheme

Self-Organized Learning Phase
- Unsupervised learning of the membership functions.
- Unsupervised learning of the rulebase.

Supervised Learning Phase
- Error back-propagation for optimization of the membership functions.

Note that the membership functions obtained in the first phase are far from ideal; they are only a pre-estimation used to create the rulebase.
Unsupervised Learning of the Membership Functions

Step 1: First estimation of the membership functions' centers using Kohonen's learning rule.

Step 2: The widths of the membership functions are estimated from the learned centers using a simple formula.
Unsupervised Learning of the Membership Functions

Winner-take-all (Kohonen's learning rule) for the centers:

$\left\| x(t) - m_{winner}(t) \right\| = \min_{1 \le i \le |T(x)|} \left\| x(t) - m_i(t) \right\|$

$m_{winner}(t+1) = m_{winner}(t) + \alpha(t)\,[\,x(t) - m_{winner}(t)\,]$

$m_i(t+1) = m_i(t) \quad \text{for } m_i \ne m_{winner}$
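A scalar sketch of the winner-take-all center update:

```python
def kohonen_step(centers, x, lr):
    """One winner-take-all update: move only the center closest to x,
    m_winner(t+1) = m_winner(t) + lr * (x - m_winner(t));
    all other centers stay unchanged."""
    winner = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
    centers = list(centers)
    centers[winner] += lr * (x - centers[winner])
    return centers

centers = kohonen_step([0.0, 5.0, 10.0], 4.0, 0.5)
# Only the center at 5.0 moves (to 4.5); the others stay put.
```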
Unsupervised Learning of the Membership Functions

N-nearest-neighbors: the widths are chosen to minimize

$E = \frac{1}{2} \sum_{i=1}^{n} \left[ \sum_{j \in N_{nearest}} \left( \frac{m_i - m_j}{\sigma_i} \right) - r \right]^2$

For the 1-nearest-neighbor case this gives

$\sigma_i = \frac{m_i - m_{closest}}{r}$

where $r$ is the overlap parameter.
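The 1-nearest-neighbor width formula can be sketched as follows (using the absolute center distance so the width stays positive whichever side the closest center lies on):

```python
def estimate_widths(centers, r):
    """sigma_i = |m_i - m_closest| / r for the 1-nearest-neighbor case,
    where r is the overlap parameter."""
    widths = []
    for i, m in enumerate(centers):
        closest = min((c for j, c in enumerate(centers) if j != i),
                      key=lambda c: abs(c - m))
        widths.append(abs(m - closest) / r)
    return widths

# Centers spaced 5 apart with overlap parameter r = 2 give widths of 2.5.
```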
Unsupervised Learning of the Rulebase

Method:
- Competitive learning + learn-if-win
- Deletion of rule nodes
- Combination of rule nodes

Learn-if-win: $\Delta w_{ij}(t) = o_j^4\,[\,-w_{ij} + o_i^3\,]$

(a consequent link is updated only when its output term node wins).
Example of Combination of Rule Nodes
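The learn-if-win update can be sketched as a per-link rule, assuming `o4` holds the Layer-4 output-term activations (the win signals) and `o3` the Layer-3 rule firing strengths; shapes and values are illustrative.

```python
def learn_if_win(w, o4, o3, lr=1.0):
    """Learn-if-win: delta w_ij = o_j^4 * (-w_ij + o_i^3).
    The output-term activation o_j^4 gates the update, so only links
    into 'winning' output terms move toward the rule firing strengths."""
    return [[w[j][i] + lr * o4[j] * (-w[j][i] + o3[i])
             for i in range(len(o3))]
            for j in range(len(o4))]

w = [[0.0, 0.0], [0.0, 0.0]]            # links: output term j <- rule i
w = learn_if_win(w, o4=[1.0, 0.0], o3=[0.8, 0.2])
# Only the winning output term's row moves toward the firing strengths.
```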
Supervised Learning Phase

Error back-propagation for optimization of the membership functions:

$E = \frac{1}{2}\,[\,y(t) - \hat{y}(t)\,]^2$

$\Delta w = -\eta\,\frac{\partial E}{\partial w}, \qquad w(t+1) = w(t) + \eta \left( -\frac{\partial E}{\partial w} \right)$

where $\eta$ is the learning rate.
The gradient is computed with the chain rule, propagating the error from the output back through each node's activation $a(\cdot)$ and net input $f(\cdot)$:

$\frac{\partial E}{\partial w} = \frac{\partial E}{\partial f}\,\frac{\partial f}{\partial w} = \frac{\partial E}{\partial a}\,\frac{\partial a}{\partial f}\,\frac{\partial f}{\partial w}$

(how $a$ affects $E$, how $f$ affects $a$, and how $w$ affects $f$).
Learning Layer 5 Neurons

$f^5 = \frac{\sum_j (m_j \sigma_j)\, u_j^5}{\sum_j \sigma_j u_j^5}, \qquad a^5 = \hat{y} = f^5$

The adjustable parameters are the output centers $m_i$ and widths $\sigma_i$:

$\frac{\partial E}{\partial m_i} = \frac{\partial E}{\partial a^5} \frac{\partial a^5}{\partial f^5} \frac{\partial f^5}{\partial m_i}, \qquad \frac{\partial E}{\partial \sigma_i} = \frac{\partial E}{\partial a^5} \frac{\partial a^5}{\partial f^5} \frac{\partial f^5}{\partial \sigma_i}$

with

$\delta^5 = \frac{\partial E}{\partial f^5} = \frac{\partial E}{\partial a^5} \frac{\partial a^5}{\partial f^5} = -[\,y(t) - \hat{y}(t)\,] \quad \left( \frac{\partial E}{\partial a^5} = -[\,y(t) - \hat{y}(t)\,],\ \frac{\partial a^5}{\partial f^5} = 1 \right)$

$\frac{\partial f^5}{\partial m_i} = \frac{\sigma_i u_i^5}{\sum_j \sigma_j u_j^5}$

$\frac{\partial f^5}{\partial \sigma_i} = \frac{m_i u_i^5 \left( \sum_j \sigma_j u_j^5 \right) - \left( \sum_j (m_j \sigma_j) u_j^5 \right) u_i^5}{\left( \sum_j \sigma_j u_j^5 \right)^2}$
Learning Layer 4 Neurons

Down-up mode: $f_i^4 = \sum_{j=1}^{p} w_j^4 u_j^4, \qquad a_i^4 = \min(1, f_i^4), \qquad w_{ij}^4 \in \{0, 1\}$

This layer has no adjustable parameters; only the error signal is propagated back:

$\delta_i^4 = \frac{\partial E}{\partial f_i^4} = \frac{\partial E}{\partial f^5} \frac{\partial f^5}{\partial f_i^4}, \qquad \frac{\partial E}{\partial f^5} = \delta^5 = -[\,y(t) - \hat{y}(t)\,]$

$\frac{\partial f^5}{\partial f_i^4} = \frac{\partial f^5}{\partial a_i^4} \frac{\partial a_i^4}{\partial f_i^4} = \frac{\partial f^5}{\partial u_i^5} \cdot (1 \text{ or } 0)$

($\partial a_i^4 / \partial f_i^4 = 1$ when $f_i^4 < 1$ and $0$ otherwise; $a_i^4 = u_i^5$)

$\frac{\partial f^5}{\partial u_i^5} = \frac{m_i \sigma_i \left( \sum_j \sigma_j u_j^5 \right) - \sigma_i \left( \sum_j (m_j \sigma_j) u_j^5 \right)}{\left( \sum_j \sigma_j u_j^5 \right)^2}$
Learning Layer 3 Neurons

$f_j^3 = \min(u_1^3, u_2^3, \ldots, u_p^3), \qquad a_j^3 = f_j^3, \qquad w_{jk}^3 \in \{0, 1\}$

Again there are no adjustable parameters; only the error signal is propagated back:

$\delta_j^3 = \frac{\partial E}{\partial f_j^3} = \frac{\partial E}{\partial a_j^3} \frac{\partial a_j^3}{\partial f_j^3} = \frac{\partial E}{\partial a_j^3} \cdot 1$

$\frac{\partial E}{\partial a_j^3} = \sum_{w_{ij}^4 \ne 0} \frac{\partial E}{\partial f_i^4} \frac{\partial f_i^4}{\partial u_j^4} = \sum_{w_{ij}^4 \ne 0} \delta_i^4 \cdot 1 = \sum_{w_{ij}^4 \ne 0} \delta_i^4$
Learning Layer 2 Neurons

$f_k^2 = -\frac{(x - m_k)^2}{\sigma_k^2}, \qquad a_k^2 = e^{f_k^2}$

The adjustable parameters are the input centers $m_k$ and widths $\sigma_k$:

$\frac{\partial E}{\partial m_k} = \frac{\partial E}{\partial a_k^2} \frac{\partial a_k^2}{\partial f_k^2} \frac{\partial f_k^2}{\partial m_k} = \left( \sum_{w_{jk}^3 \ne 0} \delta_j^3 \right) e^{f_k^2}\, \frac{2(x - m_k)}{\sigma_k^2}$

$\frac{\partial E}{\partial \sigma_k} = \frac{\partial E}{\partial a_k^2} \frac{\partial a_k^2}{\partial f_k^2} \frac{\partial f_k^2}{\partial \sigma_k} = \left( \sum_{w_{jk}^3 \ne 0} \delta_j^3 \right) e^{f_k^2}\, \frac{2(x - m_k)^2}{\sigma_k^3}$

where

$\frac{\partial a_k^2}{\partial f_k^2} = e^{f_k^2}, \qquad \frac{\partial f_k^2}{\partial m_k} = \frac{2(x - m_k)}{\sigma_k^2}, \qquad \frac{\partial f_k^2}{\partial \sigma_k} = \frac{2(x - m_k)^2}{\sigma_k^3}$

$\frac{\partial E}{\partial a_k^2} = \frac{\partial E}{\partial u_k^3} = \sum_{w_{jk}^3 \ne 0} \frac{\partial E}{\partial f_j^3} \frac{\partial f_j^3}{\partial u_k^3} = \sum_{w_{jk}^3 \ne 0} \delta_j^3$
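The Layer-2 gradients can likewise be packaged per membership function, assuming the back-propagated sum of $\delta_j^3$ over the connected rule nodes (`delta3_sum`) is given.

```python
import math

def layer2_grads(x, m, sigma, delta3_sum):
    """Gradients of E w.r.t. an input membership function's center and
    width, following the slide formulas:
      dE/dm     = delta3_sum * exp(f) * 2(x - m)   / sigma^2
      dE/dsigma = delta3_sum * exp(f) * 2(x - m)^2 / sigma^3
    where f = -(x - m)^2 / sigma^2."""
    f = -((x - m) ** 2) / sigma ** 2
    a = math.exp(f)
    dE_dm = delta3_sum * a * 2.0 * (x - m) / sigma ** 2
    dE_dsigma = delta3_sum * a * 2.0 * (x - m) ** 2 / sigma ** 3
    return dE_dm, dE_dsigma
```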
Example: Fuzzy Control of Unmanned Vehicle
The Fuzzy Car

(figure: the simulated car with sensor inputs $x_0$, $x_1$, $x_2$ and steering output $y$)

The Fuzzy System Learned

(figure: the learned fuzzy system over $x_0$, $x_1$, $x_2$ and $y$)
The Fuzzy Rules Learned

(figure: the learned rule patterns over $x_0$, $x_1$, $x_2$, $y$)

Example rule: IF $x_0$ is $T_{x_0}^0$ and $x_1$ is $T_{x_1}^1$ and $x_2$ is $T_{x_2}^0$ THEN $y$ is $T_y^6$
The Membership Functions Learned

(figures: learned membership functions for $x_0$, $x_1$, $x_2$, and $y$)

Learning Curves

Learning rate: 0.15. Error tolerance: 0.01.
Simulation