Nanjing University of Science & Technology
Pattern Recognition:
Statistical and Neural
Lonnie C. Ludeman
Lecture 8
Sept 23, 2005
Review 2: Classifier Performance Measures

1. A posteriori probability (maximize)
2. Probability of error (minimize)
3. Bayes average cost (minimize)
4. Probability of detection (maximize, with fixed probability of false alarm) (Neyman-Pearson rule)
5. Losses (minimize the maximum)
Review 3: MAP, MPE, and Bayes Classification Rule

If l(x) > η, decide C1; if l(x) < η, decide C2

where l(x) = p(x | C1) / p(x | C2) is the likelihood ratio and η is the threshold:

η_MAP = P(C2) / P(C1)

η_MPE = P(C2) / P(C1)

η_BAYES = (C22 − C12) P(C2) / [(C11 − C21) P(C1)]
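To make the rule concrete, here is a minimal Python sketch of a likelihood-ratio test. The one-dimensional Gaussian densities, priors, and costs C_ij below are hypothetical, chosen only to exercise the threshold formulas; they are not from the lecture.

```python
from scipy.stats import norm

# Hypothetical 1-D class-conditional densities and priors (illustration only)
P1, P2 = 0.5, 0.5                         # P(C1), P(C2)
pdf_c1 = norm(loc=0.0, scale=1.0).pdf     # p(x | C1)
pdf_c2 = norm(loc=1.0, scale=1.0).pdf     # p(x | C2)

# Thresholds from the slide (MAP and MPE coincide)
eta_mpe = P2 / P1
C11, C12, C21, C22 = 0.0, 1.0, 2.0, 0.0   # hypothetical costs C_ij
eta_bayes = ((C22 - C12) * P2) / ((C11 - C21) * P1)

def classify(x, eta):
    """Decide C1 when the likelihood ratio l(x) exceeds the threshold eta."""
    lx = pdf_c1(x) / pdf_c2(x)
    return "C1" if lx > eta else "C2"

for x in (-0.5, 0.5, 1.5):
    print(x, classify(x, eta_mpe), classify(x, eta_bayes))
```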
Topics for Lecture 8

1. Two-dimensional problem
2. Solution in likelihood space
3. Solution in pattern space
4. Solution in feature space
5. Calculation of probability of error
6. Transformational Theorem
Example: 2 Classes and 2 Observations

Given:

C1 : x = [x1, x2]^T ~ p(x1, x2 | C1), with prior P(C1)
C2 : x = [x1, x2]^T ~ p(x1, x2 | C2), with prior P(C2)

C1 : x ~ N(M1, K1)
C2 : x ~ N(M2, K2)

M1 = [0, 0]^T    M2 = [1, 1]^T

K1 = [1 0; 0 1] = I    K2 = [1 0; 0 1] = I

P(C1) = P(C2) = 1/2

Find the optimum (MPE) decision rule.
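The likelihood-ratio derivation from the intervening slides is not preserved in this transcript, but the end result can be checked numerically. A minimal Python sketch, assuming K1 = K2 = I as above: it evaluates ln l(x) = ln p(x | C1) − ln p(x | C2) directly and compares it to the linear form derived on the next slide.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Class-conditional densities from the example (K1 = K2 = I)
p_c1 = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))   # N(M1, K1)
p_c2 = multivariate_normal(mean=np.ones(2), cov=np.eye(2))    # N(M2, K2)

def log_likelihood_ratio(x):
    """ln l(x) = ln p(x | C1) - ln p(x | C2)."""
    return p_c1.logpdf(x) - p_c2.logpdf(x)

rng = np.random.default_rng(0)
for x in rng.normal(size=(3, 2)):
    direct = log_likelihood_ratio(x)
    linear = -(x[0] + x[1] - 1)              # the rule derived on the next slide
    print(f"{direct:+.6f}  {linear:+.6f}")   # the two columns agree
```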
Solution in different spaces

Taking the ln of both sides gives an equivalent rule:

If −(x1 + x2 − 1) > 0, decide C1; otherwise decide C2.

Rearranging gives the rule in observation space:

If x1 + x2 > 1, decide C2; if x1 + x2 < 1, decide C1.

In feature space, with y = g(x1, x2) = x1 + x2:

If y > 1, decide C2; if y < 1, decide C1.
In Observation Space

[Figure: the (x1, x2) plane divided by the boundary line x1 + x2 = 1; decide C2 above the line, decide C1 below it.]

In Feature Space (y is a sufficient statistic for this problem)

[Figure: the y axis with a threshold at y = 1; decide C1 for y < 1 and decide C2 for y > 1, where y = x1 + x2.]
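The feature-space rule is one line of code; a tiny Python sketch (the function name is illustrative):

```python
def decide(x1, x2):
    """MPE rule in feature space: compare y = x1 + x2 to the threshold 1."""
    return "C2" if x1 + x2 > 1 else "C1"

print(decide(0.2, 0.3))   # y = 0.5 < 1  ->  C1
print(decide(1.0, 0.8))   # y = 1.8 > 1  ->  C2
```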
Calculation of P(error | C1) for the 2-dimensional example in y space

P(error | C1) = P(decide C2 | C1) = ∫_{R2} p(y | C1) dy

Under C1, x1 and x2 are independent Gaussian random variables, each N(0,1), so y = x1 + x2 is distributed N(0,2).

P(error | C1) = ∫_1^∞ (1/√(4π)) exp(−y²/4) dy
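This is a Gaussian tail probability, Q(1/√2), so it can be evaluated directly; a quick check with scipy's survival function:

```python
from math import sqrt
from scipy.stats import norm

# P(error | C1) = P(y > 1) with y ~ N(0, 2), i.e. Q(1/sqrt(2))
p_err_c1 = norm.sf(1, loc=0, scale=sqrt(2))
print(p_err_c1)   # ~ 0.2398
```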
Calculation of P(error | C2) for the 2-dimensional example in y space

P(error | C2) = P(decide C1 | C2) = ∫_{R1} p(y | C2) dy

Under C2, x1 and x2 are independent Gaussian random variables, each N(1,1), so y = x1 + x2 is distributed N(2,2).

P(error | C2) = ∫_{−∞}^1 (1/√(4π)) exp(−(y − 2)²/4) dy
Probability of error for the example

P(error) = P(error | C1) P(C1) + P(error | C2) P(C2)

= [∫_1^∞ (1/√(4π)) exp(−y²/4) dy] P(C1) + [∫_{−∞}^1 (1/√(4π)) exp(−(y − 2)²/4) dy] P(C2)

By symmetry both conditional error probabilities equal Q(1/√2) ≈ 0.2398, so with P(C1) = P(C2) = 1/2 the total is P(error) ≈ 0.24.
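A Monte Carlo sanity check of this value; a minimal sketch assuming the equal-prior setup above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

# y = x1 + x2 under each class (x1, x2 independent, unit variance)
y1 = rng.normal(0.0, 1.0, (n, 2)).sum(axis=1)   # under C1: y ~ N(0, 2)
y2 = rng.normal(1.0, 1.0, (n, 2)).sum(axis=1)   # under C2: y ~ N(2, 2)

# Decision rule: decide C2 when y > 1
p_err_mc = 0.5 * np.mean(y1 > 1) + 0.5 * np.mean(y2 < 1)
p_err_exact = norm.sf(1 / np.sqrt(2))           # Q(1/sqrt(2)) ~ 0.2398
print(p_err_mc, p_err_exact)
```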
Transformational Theorem

Given: X is a random variable with known probability density function pX(x), and y = g(x) is a real-valued function with no flat spots. Define the random variable Y = g(X).

Then the probability density function for Y is

pY(y) = Σ over all xi of pX(xi) / |dg(x)/dx|_{x=xi}

where the xi are all the real roots of y = g(x).
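The theorem translates directly into a few lines of Python; a minimal sketch in which transform_density and its arguments are illustrative names, not from the lecture:

```python
from scipy.stats import norm

def transform_density(y, pdf_x, real_roots, dg_dx):
    """Transformational theorem: pY(y) = sum over roots xi of
    pX(xi) / |g'(xi)|, where the xi are the real roots of y = g(x)."""
    return sum(pdf_x(xi) / abs(dg_dx(xi)) for xi in real_roots(y))

# Hypothetical linear example g(x) = 2x + 3: single root x = (y - 3) / 2
py = transform_density(5.0, norm.pdf,
                       lambda y: [(y - 3) / 2],   # real roots of y = g(x)
                       lambda x: 2.0)             # dg/dx
print(py)   # equals norm.pdf(1.0) / 2, the N(3, 4) density at y = 5
```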
Example: Transformational Theorem

Given: X ~ N(0,1)
Define the function: y = x²
Define the random variable: Y = X²
Find the probability density function pY(y)
Solution:

[Figure: the parabola y = x², with the two roots x1 = −√y and x2 = +√y marked for a value y > 0.]

For y < 0 there are no real roots of y = x², therefore pY(y) = 0 for those values of y.

For y > 0 there are two real roots of y = x², given by

x1 = −√y
x2 = +√y
Apply the Fundamental Theorem

pY(y) = Σ over all xi of pX(xi) / |dg(x)/dx|_{x=xi} if there are real roots
pY(y) = 0 if there are no real roots

Here dg(x)/dx = 2x.

For y > 0:

pY(y) = pX(−√y) / |2(−√y)| + pX(+√y) / |2(+√y)|

= [1/√(2π)] exp(−(−√y)²/2) / (2√y) + [1/√(2π)] exp(−(√y)²/2) / (2√y)

Each term equals exp(−y/2) / (2√(2πy)), so the two terms sum to exp(−y/2) / √(2πy).
Final Answer

pY(y) = [exp(−y/2) / √(2πy)] u(y)

where u(y) is the unit step function.
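This is the chi-square density with one degree of freedom; a short Monte Carlo check of the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
y_samples = rng.normal(0.0, 1.0, 500_000) ** 2      # Y = X^2, X ~ N(0, 1)

def p_y(y):
    """Derived density: exp(-y/2) / sqrt(2*pi*y) for y > 0."""
    return np.exp(-y / 2) / np.sqrt(2 * np.pi * y)

# Empirical density estimate on (0, 4], normalized by the full sample size
counts, edges = np.histogram(y_samples, bins=40, range=(0.0, 4.0))
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = counts / (y_samples.size * np.diff(edges))

for c, e in list(zip(centers, empirical))[5::8]:
    print(f"y = {c:.2f}   empirical = {e:.4f}   formula = {p_y(c):.4f}")
```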
Summary for Lecture 8

1. Two-dimensional problem
2. Solution in likelihood space
3. Solution in pattern space
4. Solution in feature space
5. Calculation of probability of error
6. Transformational Theorem
End of Lecture 8