Normal (Gaussian) Random Variable - Purdue College of Engineering

Transcript
3. General Random Variables, Part III: Normal (Gaussian) Random Variable
ECE 302, Spring 2012, Purdue University, School of ECE, Prof. Ilya Pollak

Normal (aka Gaussian) r.v.

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Here, σ > 0 and µ are two parameters characterizing the PDF.
Sometimes denoted X ~ N(µ, σ²).

[Figure: the bell-shaped PDF f_X(x), centered at x = µ.]
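Not part of the original slides: a minimal Python sketch that evaluates this PDF straight from the definition, just as a sanity check. The values mu = 2 and sigma = 4 are arbitrary illustration choices.

```python
import math

def normal_pdf(x, mu, sigma):
    """N(mu, sigma^2) density, computed directly from the formula above."""
    coeff = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The density peaks at x = mu and is symmetric around it.
for x in (-2.0, 0.0, 2.0, 4.0, 6.0):
    print(x, normal_pdf(x, mu=2.0, sigma=4.0))
```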
Normalization

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Problem 3.14: show that ∫_{−∞}^{∞} f_X(x) dx = 1.

First, simplify through a change of variable: s = (x − µ)/σ. (This is often very useful when working with normal random variables!)

$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx \;=\; \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{s^2}{2}}\, ds$$

Second, compute the square of this integral:

$$\left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{s^2}{2}}\, ds \right)^{2}
= \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{u^2}{2}}\, du \right) \cdot \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{v^2}{2}}\, dv \right)
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-\frac{u^2+v^2}{2}}\, du\, dv$$
Normalization, continued

To compute $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-\frac{u^2+v^2}{2}}\, du\, dv$, use polar coordinates r = √(u² + v²), θ = tan⁻¹(v/u).

[Figure: the point (u, v) in the plane, at radius r and angle θ from the u-axis.]

Recall: r dr dθ = du dv.

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-\frac{u^2+v^2}{2}}\, du\, dv
= \frac{1}{2\pi} \int_{0}^{2\pi} \left\{ \int_{0}^{\infty} e^{-\frac{r^2}{2}}\, r\, dr \right\} d\theta
= \int_{0}^{\infty} e^{-\frac{r^2}{2}}\, r\, dr$$

Another change of variable, y = r²/2. Then dy = r dr, and

$$\int_{0}^{\infty} e^{-\frac{r^2}{2}}\, r\, dr = \int_{0}^{\infty} e^{-y}\, dy = \left[ -e^{-y} \right]_{0}^{\infty} = 1$$

Since the square of $\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{s^2}{2}}\, ds$ equals 1 and the integrand is positive, the integral itself equals 1, which establishes the normalization.
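The slides prove the normalization analytically; as a purely numerical cross-check (not in the original), the sketch below approximates the integral of the density with a trapezoidal sum over [µ − 10σ, µ + 10σ], where the neglected tail mass is negligible. Again, µ = 2 and σ = 4 are arbitrary.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def integral_of_pdf(mu, sigma, n=100_000):
    """Trapezoidal approximation of the density's integral over [mu - 10*sigma, mu + 10*sigma]."""
    a, b = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (b - a) / n
    total = 0.5 * (normal_pdf(a, mu, sigma) + normal_pdf(b, mu, sigma))
    total += sum(normal_pdf(a + i * h, mu, sigma) for i in range(1, n))
    return total * h

print(integral_of_pdf(mu=2.0, sigma=4.0))  # prints a value very close to 1.0
```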
Mean of a normal r.v.

Again, use a change of variable s = (x − µ)/σ, x = sσ + µ, dx = σ ds:

$$E[X] = \int_{-\infty}^{\infty} x\, \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx
= \int_{-\infty}^{\infty} \frac{s\sigma + \mu}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{s^2}{2}}\, \sigma\, ds
= \sigma \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{s^2}{2}}\, s\, ds}_{=\,0\ \text{(odd integrand)}}
+ \mu \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{s^2}{2}}\, ds}_{=\,1\ \text{(normalization)}}
= \mu$$
Normal PDF is symmetric around the mean

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

$$f_X(\mu + u) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{u^2}{2\sigma^2}} = f_X(\mu - u)$$
The maximum of a normal PDF is at its mean

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

$$f_X'(x) = -\frac{x-\mu}{\sigma^2} \cdot \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

The unique extremum is at x = µ. Since f_X'(x) > 0 for x < µ and f_X'(x) < 0 for x > µ, it is a maximum.
The variance and standard deviation of a normal r.v.

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

• Variance = σ²
• Standard deviation = σ
• Exercise: integration by parts
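Not in the original slides: a rough Monte Carlo check of the mean and variance claims, drawing samples from N(µ, σ²) with random.gauss and comparing the sample statistics to µ and σ². The values µ = 2, σ = 4 are arbitrary.

```python
import random
import statistics

random.seed(0)
mu, sigma = 2.0, 4.0

# Draw many independent N(mu, sigma^2) samples.
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

print("sample mean     :", statistics.fmean(samples))      # close to mu = 2
print("sample variance :", statistics.pvariance(samples))  # close to sigma^2 = 16
```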
A linear function of a normal r.v. is a normal r.v.

• Suppose X is normal with mean µ and variance σ², and a ≠ 0, b are real numbers.
• Let Y = aX + b.
• Then Y is a normal random variable.
• E[Y] = aµ + b
• var(Y) = a²σ²

Derivation:

$$F_Y(y) = P(Y \le y) = P(aX + b \le y) = P\!\left(X \le \frac{y-b}{a}\right) \quad \text{--- not quite! (this last step is valid only when } a > 0\text{)}$$

Treating the two signs of a separately:

$$F_Y(y) = \begin{cases} P\!\left(X \le \dfrac{y-b}{a}\right) = F_X\!\left(\dfrac{y-b}{a}\right), & a > 0 \\[2ex] P\!\left(X \ge \dfrac{y-b}{a}\right) = 1 - F_X\!\left(\dfrac{y-b}{a}\right), & a < 0 \end{cases}$$

$$f_Y(y) = F_Y'(y) = \begin{cases} \dfrac{d}{dy} F_X\!\left(\dfrac{y-b}{a}\right) = \dfrac{1}{a}\, f_X\!\left(\dfrac{y-b}{a}\right), & a > 0 \\[2ex] -\dfrac{d}{dy} F_X\!\left(\dfrac{y-b}{a}\right) = -\dfrac{1}{a}\, f_X\!\left(\dfrac{y-b}{a}\right), & a < 0 \end{cases}$$

$$f_Y(y) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\, a\sigma}\, e^{-\frac{(y - a\mu - b)^2}{2a^2\sigma^2}}, & a > 0 \\[2ex] -\dfrac{1}{\sqrt{2\pi}\, a\sigma}\, e^{-\frac{(y - a\mu - b)^2}{2a^2\sigma^2}}, & a < 0 \end{cases}
\;=\; \frac{1}{\sqrt{2\pi}\, |a|\,\sigma}\, e^{-\frac{(y - a\mu - b)^2}{2a^2\sigma^2}}, \quad a \ne 0,$$

which is exactly the PDF of an N(aµ + b, a²σ²) random variable.
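As an informal check (not in the slides) that the combined density above really is the N(aµ + b, a²σ²) density, and that the mean/variance claims hold, the sketch below compares the derived formula against the generic normal density and against simulated samples of Y = aX + b. The values a = −3, b = 5, µ = 2, σ = 4 are arbitrary; a is chosen negative to exercise the a < 0 branch.

```python
import math
import random
import statistics

def normal_pdf(y, m, s):
    """Generic N(m, s^2) density."""
    return math.exp(-((y - m) ** 2) / (2.0 * s * s)) / (math.sqrt(2.0 * math.pi) * s)

mu, sigma = 2.0, 4.0
a, b = -3.0, 5.0  # any a != 0 works, including negative a

def f_Y(y):
    """Density of Y = aX + b as derived on the slide."""
    return math.exp(-((y - a * mu - b) ** 2) / (2.0 * a * a * sigma * sigma)) / (
        math.sqrt(2.0 * math.pi) * abs(a) * sigma
    )

# The derived formula matches the N(a*mu + b, a^2*sigma^2) density pointwise.
for y in (-10.0, -1.0, 0.0, 7.0):
    print(y, f_Y(y), normal_pdf(y, a * mu + b, abs(a) * sigma))

# Monte Carlo check of E[Y] = a*mu + b and var(Y) = a^2 * sigma^2.
random.seed(1)
ys = [a * random.gauss(mu, sigma) + b for _ in range(200_000)]
print("mean:", statistics.fmean(ys), "expected:", a * mu + b)                  # about -1
print("var :", statistics.pvariance(ys), "expected:", a * a * sigma * sigma)   # about 144
```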
Standard normal (aka standard Gaussian) r.v.

• Normal random variable
• Mean µ = 0
• Standard deviation σ = 1
• Its CDF is denoted by Φ

$$f_Y(y) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{y^2}{2}}, \qquad \Phi(y) = P(Y \le y) = \int_{-\infty}^{y} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt$$
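Since Φ is defined only through this integral, it can be approximated by straightforward numerical integration. A minimal Python sketch (not from the slides), truncating the lower limit at −10 because the standard normal mass below that point is negligible:

```python
import math

def standard_normal_pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(y, n=100_000):
    """Approximate the standard normal CDF by a trapezoidal sum over [-10, y]."""
    a = -10.0  # truncation point; assumes y > -10
    h = (y - a) / n
    total = 0.5 * (standard_normal_pdf(a) + standard_normal_pdf(y))
    total += sum(standard_normal_pdf(a + i * h) for i in range(1, n))
    return total * h

print(Phi(0.0))   # about 0.5
print(Phi(0.25))  # about 0.5987, matching the table lookup in the next example
print(Phi(1.0))   # about 0.8413
```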
How to evaluate normal CDF

• No closed form is available for the normal CDF.
• But there are tables (for the standard normal).
• To convert between any normal and a standard normal, use the fact that
  – if X ~ N(µ, σ²) and Y = (X − µ)/σ,
  – then Y ~ N(0, 1).

Example

• X ~ N(2, 16)
• Find P(X ≤ 3).
• P(X ≤ 3) = P( (X − 2)/4 ≤ (3 − 2)/4 ) = Φ(0.25) ≈ 0.5987
  – from a table, e.g., www.math.unb.ca/~knight/utility/NormTble.htm
• In general, if X ~ N(µ, σ²) and Y = (X − µ)/σ, then
  – P(X ≤ x) = P( (X − µ)/σ ≤ (x − µ)/σ ) = P( Y ≤ (x − µ)/σ ) = Φ( (x − µ)/σ ),
  – because Y ~ N(0, 1).

Then there is Wolfram Alpha and other software packages

• cdf[normal distribution, mean 1, standard deviation 3, 2] computes the CDF of a N(1,9) random variable at 2.
• Most scientific software packages (e.g., Matlab) have the normal CDF built in.
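In the same spirit as the Wolfram Alpha and Matlab examples, Python's standard library evaluates the normal CDF through statistics.NormalDist; a short sketch (not in the slides) that reproduces the numbers above:

```python
from statistics import NormalDist

# Standard normal CDF: Phi(0.25)
print(NormalDist(mu=0, sigma=1).cdf(0.25))  # about 0.5987

# Same probability without standardizing by hand: X ~ N(2, 16), i.e. sigma = 4.
print(NormalDist(mu=2, sigma=4).cdf(3))     # about 0.5987 as well
```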
Example 3.8: Signal detection

A transmitter sends a signal S = +1 or −1, with probability ½ and ½, through a noisy channel. The receiver observes Y = S + W, where the noise W ~ N(0, σ²) is independent of S, and decides +1 if Y ≥ 0 and −1 if Y < 0. What is P(error)?

(If S = −1, the receiver errs when Y = −1 + W ≥ 0, i.e., when W ≥ 1; if S = +1, it errs when Y = 1 + W < 0, i.e., when W < −1.)

P(error) = P(error ∩ {S = −1}) + P(error ∩ {S = 1})
         = P(error | S = −1) P(S = −1) + P(error | S = 1) P(S = 1)   (total probability theorem)
         = P(W ≥ 1 | S = −1) · ½ + P(W < −1 | S = 1) · ½
         = [ P(W ≥ 1) + P(W < −1) ] · ½   (independence of S and W)
         = P(W < −1)   (symmetry of the zero-mean normal PDF around 0)
         = Φ(−1/σ) = 1 − Φ(1/σ)

E.g., if σ = 1, this is 1 − 0.8413 ≈ 0.1587.

[Figure: the noise PDF f_W(w), with w = −1, 0, 1 marked on the axis.]
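A quick Monte Carlo cross-check of this answer (not part of the original example): simulate the channel and compare the empirical error rate with 1 − Φ(1/σ). Here σ = 1, matching the numerical example above.

```python
import random
from statistics import NormalDist

random.seed(2)
sigma = 1.0
n = 200_000

errors = 0
for _ in range(n):
    s = random.choice((+1, -1))           # equally likely transmitted symbol
    y = s + random.gauss(0.0, sigma)      # received value through the noisy channel
    decision = +1 if y >= 0 else -1       # threshold receiver
    errors += (decision != s)

print("empirical P(error):", errors / n)                        # about 0.1587
print("1 - Phi(1/sigma)  :", 1 - NormalDist().cdf(1 / sigma))   # about 0.1587
```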
Error function (erf) as an alternative to the Φ function

The error function, erf, is built in to Matlab, Google, and Wolfram Alpha, and can be called using erf.

$$\operatorname{erf}(x) \;\triangleq\; \int_{-x}^{x} \frac{1}{\sqrt{\pi}}\, e^{-t^2}\, dt
= \int_{-\sqrt{2}\,x}^{\sqrt{2}\,x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt
= P\!\left[-\sqrt{2}\,x < Y \le \sqrt{2}\,x\right]
= \Phi\!\left(\sqrt{2}\,x\right) - \Phi\!\left(-\sqrt{2}\,x\right)
= 2\,\Phi\!\left(\sqrt{2}\,x\right) - 1,$$

where Y is a standard normal r.v. Therefore

$$\Phi(x) = \frac{1 + \operatorname{erf}\!\left(x/\sqrt{2}\right)}{2}.$$
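Python's math module also provides erf, so the identity above gives a one-line implementation of Φ; a short sketch (not from the slides) that cross-checks it against the numbers used earlier:

```python
import math

def Phi(x):
    """Standard normal CDF via the erf identity Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(Phi(0.25))     # about 0.5987, as in the table-lookup example
print(Phi(1.0))      # about 0.8413, as used in the signal detection example
print(1 - Phi(1.0))  # about 0.1587, the error probability for sigma = 1
```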