Overview
• Cumulative Distribution Functions Continued
• Conditional Probability Density Function
• Chain Rule for PDFs
• Bayes Rule for PDFs
• Independence
Cumulative Distribution Functions Continued
Suppose X and Y are two random variables.
• FXY(x, y) = P(X ≤ x and Y ≤ y) is the cumulative distribution function for X and Y
• When X and Y are independent then:
P(X ≤ x and Y ≤ y) = P(X ≤ x)P(Y ≤ y)
i.e.
FXY(x, y) = FX(x)FY(y)
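To make the factorisation above concrete, here is a minimal Python sketch (not part of the slides) that estimates both sides empirically for two independently drawn random variables; the particular distributions, sample size, and evaluation point are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)        # X ~ N(0, 1)
Y = rng.uniform(0.0, 1.0, n)      # Y ~ U(0, 1), drawn independently of X

x, y = 0.5, 0.3
F_joint = np.mean((X <= x) & (Y <= y))         # empirical P(X <= x and Y <= y)
F_product = np.mean(X <= x) * np.mean(Y <= y)  # empirical P(X <= x) P(Y <= y)
print(F_joint, F_product)   # the two estimates agree up to sampling noise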
Cumulative Distribution Functions Continued
When X and Y are discrete random variables taking values {x1 , · · · , xn }
and {y1 , · · · , ym }:
• FXY(x, y) = Σ_{i: xi ≤ x} Σ_{j: yj ≤ y} P(X = xi and Y = yj)
When X and Y are jointly continuous-valued random variables there exists a probability density function (PDF) fXY(x, y) ≥ 0 such that:
• FXY(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fXY(u, v) du dv
Can think of P(u ≤ X ≤ u + du and v ≤ Y ≤ v + dv) ≈ fXY(u, v) du dv when du, dv are infinitesimally small.
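The following is a rough Python sketch (not from the slides) of both statements above: it recovers FXY(x, y) by double integration of a joint PDF, and checks the small-rectangle approximation P(u ≤ X ≤ u + du, v ≤ Y ≤ v + dv) ≈ fXY(u, v) du dv. The joint PDF used (two independent standard normals) is an assumption chosen so the answer can be compared against a product of normal CDFs.

import numpy as np
from scipy import integrate, stats

def f_xy(u, v):
    # Joint PDF of two independent standard normal random variables (illustrative choice)
    return np.exp(-(u**2 + v**2) / 2) / (2 * np.pi)

x, y = 1.0, 0.5
# FXY(x, y) = integral of f_xy(u, v) for u in (-inf, x], v in (-inf, y]
F, _ = integrate.dblquad(lambda v, u: f_xy(u, v), -np.inf, x, -np.inf, y)
print(F, stats.norm.cdf(x) * stats.norm.cdf(y))   # should agree closely

# Small-rectangle approximation around (u0, v0)
u0, v0, du, dv = 0.2, -0.3, 1e-3, 1e-3
box, _ = integrate.dblquad(lambda v, u: f_xy(u, v), u0, u0 + du, v0, v0 + dv)
print(box, f_xy(u0, v0) * du * dv)                # nearly identical for small du, dv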
Conditional Probability Density Function
Suppose X and Y are two continuous random variables with joint PDF fXY(x, y). Define conditional PDF:
fX|Y(x|y) = fXY(x, y) / fY(y)
Compare with conditional probability for discrete RVs:
P(X = x|Y = y) = P(X = x and Y = y) / P(Y = y)
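As a sanity check on the definition, here is a small Python sketch (not from the slides) that builds fX|Y(x|y) = fXY(x, y)/fY(y) numerically, obtaining fY by integrating the joint PDF over x. The correlated bivariate normal used here is an illustrative assumption, chosen because its conditional is known in closed form (X | Y = y is N(ρy, 1 − ρ²)).

import numpy as np
from scipy import stats, integrate

rho = 0.7
joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

def f_xy(x, y):
    return joint.pdf([x, y])

def f_y(y):
    # fY(y) = integral of fXY(x, y) over x
    return integrate.quad(lambda x: f_xy(x, y), -np.inf, np.inf)[0]

def f_x_given_y(x, y):
    return f_xy(x, y) / f_y(y)

x0, y0 = 0.5, 1.0
print(f_x_given_y(x0, y0))
print(stats.norm.pdf(x0, loc=rho * y0, scale=np.sqrt(1 - rho**2)))   # closed-form conditional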
Chain Rule for PDFs
Since
fX|Y(x|y) = fXY(x, y) / fY(y)
the chain rule also holds for PDFs:
fXY(x, y) = fX|Y(x|y) fY(y) = fY|X(y|x) fX(x)
Also,
• ∫_{−∞}^{∞} fX|Y(x|y) dx = (∫_{−∞}^{∞} fXY(x, y) dx) / fY(y) = fY(y) / fY(y) = 1
• We can marginalise PDFs:
∫_{−∞}^{∞} fXY(x, y) dy = ∫_{−∞}^{∞} fY|X(y|x) fX(x) dy = fX(x) ∫_{−∞}^{∞} fY|X(y|x) dy = fX(x)
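Both bullet points can be checked numerically. The sketch below (not from the slides) verifies that fX|Y(x|y) integrates to 1 over x and that integrating fXY(x, y) over y recovers fX(x); the correlated bivariate normal joint PDF is again an illustrative assumption.

import numpy as np
from scipy import stats, integrate

rho = 0.7
joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
f_xy = lambda x, y: joint.pdf([x, y])
f_y = lambda y: integrate.quad(lambda x: f_xy(x, y), -np.inf, np.inf)[0]

y0 = 0.8
total, _ = integrate.quad(lambda x: f_xy(x, y0) / f_y(y0), -np.inf, np.inf)
print(total)   # close to 1

x0 = -0.4
marginal, _ = integrate.quad(lambda y: f_xy(x0, y), -np.inf, np.inf)
print(marginal, stats.norm.pdf(x0))   # the marginal of X here is N(0, 1)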
Bayes Rule for PDFs
Since
fX|Y(x|y) = fXY(x, y) / fY(y)
then
fX|Y(x|y) fY(y) = fXY(x, y) = fY|X(y|x) fX(x)
and so we have Bayes Rule for PDFs:
fY|X(y|x) = fX|Y(x|y) fY(y) / fX(x)
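Here is a minimal Python check of Bayes Rule for PDFs at a single point (not from the slides). It uses a standard bivariate normal with correlation ρ as an illustrative assumption, since then fX|Y, fY|X and both marginals are all known in closed form.

import numpy as np
from scipy import stats

rho = 0.6
s = np.sqrt(1 - rho**2)

f_x_given_y = lambda x, y: stats.norm.pdf(x, loc=rho * y, scale=s)
f_y_given_x = lambda y, x: stats.norm.pdf(y, loc=rho * x, scale=s)
f_x = stats.norm.pdf   # both marginals are standard normal
f_y = stats.norm.pdf

x0, y0 = 0.3, -1.2
lhs = f_y_given_x(y0, x0)
rhs = f_x_given_y(x0, y0) * f_y(y0) / f_x(x0)
print(lhs, rhs)   # equal up to floating-point error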
Independence
Suppose X and Y are two continuous random variables with joint PDF fXY(x, y). Then X and Y are independent when:
fXY(x, y) = fX(x)fY(y)
Why?
P(X ≤ x and Y ≤ y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fXY(u, v) du dv
= ∫_{−∞}^{x} fX(u) du ∫_{−∞}^{y} fY(v) dv
= P(X ≤ x)P(Y ≤ y)
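To see the factorisation criterion in action, the sketch below (not from the slides) compares fXY(x, y) with fX(x)fY(y) pointwise for a bivariate normal: with correlation 0 the two agree (independent), with nonzero correlation they differ (dependent). The distributions and test point are illustrative assumptions.

import numpy as np
from scipy import stats

def check(rho, x=0.7, y=-0.4):
    joint = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    lhs = joint.pdf([x, y])                       # fXY(x, y)
    rhs = stats.norm.pdf(x) * stats.norm.pdf(y)   # fX(x) fY(y); marginals are N(0, 1)
    print(f"rho={rho}: f_XY={lhs:.6f}, f_X*f_Y={rhs:.6f}")

check(0.0)   # equal: X and Y independent
check(0.8)   # different: X and Y dependent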
Example
Suppose random variable Y = X + M, where M ∼ N(0, 1). Conditioned
on X = x, what is the PDF of Y ?
• fY|X(y|x) = (1/√(2π)) exp(−(y − x)²/2)
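A quick simulation check of this answer (not from the slides): fixing X = x and drawing M ~ N(0, 1), the samples of Y = x + M should have mean x and variance 1, matching the stated N(x, 1) conditional. The value of x and the sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
x = 2.0
Y = x + rng.standard_normal(100_000)   # samples of Y given X = x
print(Y.mean(), Y.var())               # approximately x and 1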
Suppose that X ∼ N(0, σ). What is fX|Y(x|y)?
• Use Bayes Rule:
fX|Y(x|y) = fY|X(y|x) fX(x) / fY(y)
= (1/√(2π)) exp(−(y − x)²/2) × (1/(σ√(2π))) exp(−x²/(2σ²)) / fY(y)
• fY(y) is just a normalising constant (so that the area under fX|Y(x|y) is 1).
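The sketch below (not from the slides) makes the normalising-constant remark concrete: it forms the unnormalised product fY|X(y|x) fX(x), computes fY(y) by integrating it over x, and evaluates the resulting posterior at one point. For this Gaussian prior/Gaussian likelihood pair the posterior is known to be N(σ²y/(1 + σ²), σ²/(1 + σ²)) (a standard result, not stated on the slide), which gives a closed-form comparison; σ, y, and the evaluation point are illustrative choices.

import numpy as np
from scipy import stats, integrate

sigma, y = 2.0, 1.5

# Unnormalised posterior: fY|X(y|x) * fX(x)
unnorm = lambda x: stats.norm.pdf(y, loc=x, scale=1.0) * stats.norm.pdf(x, loc=0.0, scale=sigma)
Z, _ = integrate.quad(unnorm, -np.inf, np.inf)   # this normalising constant is fY(y)

post_mean = sigma**2 * y / (1 + sigma**2)
post_var = sigma**2 / (1 + sigma**2)

x0 = 1.0
print(unnorm(x0) / Z)                                              # fX|Y(x0|y), numerically
print(stats.norm.pdf(x0, loc=post_mean, scale=np.sqrt(post_var)))  # closed-form Gaussian posterior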