Further Topics on Random Variables
Chia-Ping Chen
Professor
Department of Computer Science and Engineering
National Sun Yat-sen University
Probability
Derived Distribution
Often, a random variable is defined through its relation to
another random variable. Let X be a continuous random
variable and

$$Y = g(X)$$

The distribution of Y can then be derived from the distribution
of X in two steps:

$$F_Y(y) = P(Y \le y) = P(g(X) \le y)$$

$$f_Y(y) = \frac{d}{dy} F_Y(y)$$
Example 4.1 Square Root of Uniform Random Variable
Let X be uniform on [0, 1], and

$$Y = \sqrt{X}$$

What is the PDF of Y?
$$F_Y(y) = P(Y \le y) = P(\sqrt{X} \le y) = P(X \le y^2) = F_X(y^2) = y^2$$

$$\Rightarrow f_Y(y) = \frac{d}{dy} F_Y(y) = 2y, \quad 0 \le y \le 1$$
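The result is easy to check numerically. Below is a minimal Monte
Carlo sketch, assuming NumPy is available (sample sizes and seed
are arbitrary choices): sample X, apply g, and compare a histogram
of Y with the derived density 2y.

```python
# Monte Carlo check of Example 4.1: Y = sqrt(X) with X ~ Uniform[0, 1]
# should have density f_Y(y) = 2y on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
y = np.sqrt(rng.uniform(0.0, 1.0, size=1_000_000))

hist, edges = np.histogram(y, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - 2 * centers)))  # small, up to sampling noise
```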
Example 4.2 Trip Duration
John Slow is driving from Boston to New York, a distance of 180
miles, at a constant speed X whose value is uniformly distributed
between 30 and 60 miles per hour. What is the PDF of the trip
duration Y?
$$Y = \frac{180}{X}$$

$$\Rightarrow F_Y(y) = P(Y \le y) = P\!\left(\frac{180}{X} \le y\right) = P\!\left(X \ge \frac{180}{y}\right) = 1 - F_X\!\left(\frac{180}{y}\right) = 1 - \frac{\frac{180}{y} - 30}{30} = 2 - \frac{6}{y}$$

$$\Rightarrow f_Y(y) = \frac{6}{y^2}, \quad 3 \le y \le 6$$
Example 4.3 Square
Suppose X is a random variable with PDF f_X(x), and

$$Y = X^2$$

What is the PDF of Y?
$$F_Y(y) = P(Y \le y) = P(X^2 \le y) = P(|X| \le \sqrt{y}) = P(-\sqrt{y} \le X \le \sqrt{y}) = F_X(\sqrt{y}) - F_X(-\sqrt{y})$$

$$\Rightarrow f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{1}{2\sqrt{y}}\, f_X(\sqrt{y}) + \frac{1}{2\sqrt{y}}\, f_X(-\sqrt{y}), \quad y \ge 0$$
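As a quick numerical sanity check, here is a minimal sketch
(assuming NumPy) with X standard normal: the CDF form
F_Y(y) = F_X(√y) − F_X(−√y) should match the empirical CDF of
squared samples.

```python
# Check of Example 4.3 for X ~ N(0, 1): the empirical CDF of Y = X^2
# should match F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y)).
import math
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(1_000_000) ** 2

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for t in (0.5, 1.0, 2.0):
    f_theory = std_normal_cdf(math.sqrt(t)) - std_normal_cdf(-math.sqrt(t))
    print(t, np.mean(y <= t), round(f_theory, 4))
```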
Linear Functions
Suppose X is a random variable with PDF f_X(x), and

$$Y = aX + b$$

where a ≠ 0 and b are scalars. Then

$$f_Y(y) = \frac{1}{|a|}\, f_X\!\left(\frac{y - b}{a}\right)$$

For a > 0:

$$F_Y(y) = P(Y \le y) = P(aX + b \le y) = P\!\left(X \le \frac{y - b}{a}\right) = F_X\!\left(\frac{y - b}{a}\right)$$

$$\Rightarrow f_Y(y) = \frac{1}{a}\, f_X\!\left(\frac{y - b}{a}\right)$$

Similarly, for a < 0:

$$f_Y(y) = -\frac{1}{a}\, f_X\!\left(\frac{y - b}{a}\right)$$
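The rule is mechanical enough to wrap in a helper. The sketch below
(illustrative names, assuming NumPy) builds f_Y from any density
f_X and spot-checks it on a uniform X with a negative slope a.

```python
# Sketch of the linear-function rule: given a density f_X as a callable,
# build f_Y for Y = aX + b via f_Y(y) = f_X((y - b) / a) / |a|.
import numpy as np

def linear_pdf(f_x, a, b):
    """PDF of Y = aX + b for a nonzero scalar a."""
    return lambda y: f_x((y - b) / a) / abs(a)

# Example: X ~ Uniform[0, 1], Y = -2X + 3, so Y is uniform on [1, 3].
f_x = lambda x: np.where((x >= 0) & (x <= 1), 1.0, 0.0)
f_y = linear_pdf(f_x, a=-2.0, b=3.0)
print(f_y(np.array([0.5, 2.0, 3.5])))  # [0.0, 0.5, 0.0]
```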
Linear Function of an Exponential Random Variable
Suppose X is an exponential random variable with parameter λ, and

$$Y = aX + b$$

What is the PDF of Y?

$$f_Y(y) = \frac{1}{|a|}\, f_X\!\left(\frac{y - b}{a}\right) = \begin{cases} \dfrac{\lambda}{|a|}\, e^{-\lambda (y - b)/a}, & \dfrac{y - b}{a} \ge 0 \\[6pt] 0, & \text{otherwise} \end{cases}$$

For a > 0 and b = 0:

$$\Rightarrow f_Y(y) = \frac{\lambda}{a}\, e^{-(\lambda/a) y}, \quad y > 0$$

i.e.,

$$Y \sim \text{exponential}\!\left(\frac{\lambda}{a}\right)$$
Example 4.5 Linear Function of a Gaussian
Suppose X is a normal random variable with mean μ and variance σ², and

$$Y = aX + b$$

What is the PDF of Y?

$$f_Y(y) = \frac{1}{|a|}\, f_X\!\left(\frac{y - b}{a}\right) = \frac{1}{|a|} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\left(\frac{y - b}{a} - \mu\right)^2 / 2\sigma^2} = \frac{1}{|a|} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\left(y - (a\mu + b)\right)^2 / 2 a^2 \sigma^2}$$

Indeed,

$$Y \sim N(a\mu + b,\; a^2 \sigma^2)$$
Monotonic Function
Suppose X is a random variable and

$$Y = g(X)$$

where g(·) is strictly monotonic on the image of X. Let the
inverse function of g(·) be h(·), meaning

$$Y = g(X) \;\Rightarrow\; X = h(Y)$$

Then the PDF of Y is related to the PDF of X by

$$f_Y(y) = f_X(h(y)) \left| \frac{dh(y)}{dy} \right|$$
Proof
The event

$$\{X \in [h(y),\, h(y) + dx]\}$$

corresponds to the event

$$\{Y \in [y,\, y + dy]\}$$

$$\Rightarrow f_Y(y)\,|dy| = f_X(h(y))\,|dx|$$

$$\Rightarrow f_Y(y) = f_X(h(y)) \left| \frac{dx}{dy} \right| = f_X(h(y)) \left| \frac{dh(y)}{dy} \right|$$
Example 4.2 (continued)
Let us apply this result to Example 4.2.
$$X \sim \text{uniform}(30, 60), \quad Y = \frac{180}{X} = g(X)$$

$$\Rightarrow X = \frac{180}{Y} = h(Y)$$

$$\Rightarrow f_Y(y) = f_X(h(y)) \left| \frac{dh(y)}{dy} \right| = \frac{1}{30} \cdot \frac{180}{y^2} = \frac{6}{y^2}, \quad 3 \le y \le 6$$
Example 4.6 Square of Uniform Random Variable
Let X be a continuous uniform random variable on the interval
[0, 1], and

$$Y = X^2$$

What is the PDF of Y?

$$Y = X^2 \;\Rightarrow\; X = \sqrt{Y} = h(Y)$$

$$\Rightarrow f_Y(y) = f_X(h(y)) \left| \frac{dh(y)}{dy} \right| = 1 \cdot \frac{1}{2\sqrt{y}}, \quad 0 \le y \le 1$$
Functions of Multiple Random Variables
A function of multiple random variables is, again, a random
variable. The PDF of such a random variable can often be derived
by first finding its CDF.
Example 4.7 Archers
Two archers shoot at a target. The distance of each shot from
the center of the target is uniformly distributed from 0 to 1,
independent of the other shot. What is the PDF of the distance
of the losing shot from the center?
Let the distances of the shots from the center be X and Y. The
losing (larger) distance is

$$Z = \max(X, Y)$$

$$\Rightarrow \{Z \le z\} = \{X \le z\} \cap \{Y \le z\}$$

$$\Rightarrow F_Z(z) = F_X(z)\, F_Y(z) = \begin{cases} 0, & z < 0 \\ z^2, & 0 \le z \le 1 \\ 1, & z > 1 \end{cases}$$

$$\Rightarrow f_Z(z) = 2z, \quad 0 \le z \le 1$$
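A quick simulation sketch (assuming NumPy) confirms the CDF: the
fraction of simulated losing shots below z approaches z².

```python
# Simulation check of Example 4.7: the empirical CDF of Z = max(X, Y)
# for independent Uniform[0, 1] shots should match F_Z(z) = z^2.
import numpy as np

rng = np.random.default_rng(2)
z = np.maximum(rng.uniform(size=1_000_000), rng.uniform(size=1_000_000))

for t in (0.25, 0.5, 0.75):
    print(t, np.mean(z <= t), t**2)  # empirical P(Z <= t) vs. z^2
```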
Example 4.8 Ratio of Uniform Random Variables
Let X and Y be independent continuous uniform random
variables over the interval [0, 1]. What is the PDF of
$$Z = \frac{Y}{X}$$

Computing areas in the unit square gives

$$F_Z(z) = P(Z \le z) = P\!\left(\frac{Y}{X} \le z\right) = \begin{cases} 0, & z < 0 \\[4pt] \dfrac{z}{2}, & 0 \le z \le 1 \\[6pt] 1 - \dfrac{1}{2z}, & z > 1 \end{cases}$$

$$\Rightarrow f_Z(z) = \begin{cases} 0, & z < 0 \\[4pt] \dfrac{1}{2}, & 0 \le z \le 1 \\[6pt] \dfrac{1}{2z^2}, & z > 1 \end{cases}$$
Example 4.9 Difference of Exponential Variables
Romeo and Juliet have a date at a given time. Each will arrive
at the meeting place with a random delay (X for Juliet, Y for
Romeo) that is exponentially distributed with parameter λ,
independently of the other. What is the PDF of

$$Z = X - Y$$

For z > 0:

$$F_Z(z) = P(X - Y \le z) = \int_0^\infty \!\! \int_0^{z+y} f_{X,Y}(x, y)\, dx\, dy = \int_0^\infty \lambda e^{-\lambda y} \left( \int_0^{z+y} \lambda e^{-\lambda x}\, dx \right) dy$$

$$= \int_0^\infty \lambda e^{-\lambda y} \left( 1 - e^{-\lambda(z+y)} \right) dy = 1 - e^{-\lambda z} \int_0^\infty \lambda e^{-2\lambda y}\, dy = 1 - \frac{1}{2}\, e^{-\lambda z}$$
For z < 0:

$$F_Z(z) = P(X - Y \le z) = \int_0^\infty \!\! \int_{x-z}^\infty f_{X,Y}(x, y)\, dy\, dx = \int_0^\infty \lambda e^{-\lambda x} \left( \int_{x-z}^\infty \lambda e^{-\lambda y}\, dy \right) dx$$

$$= \int_0^\infty \lambda e^{-\lambda x}\, e^{-\lambda(x - z)}\, dx = e^{\lambda z} \int_0^\infty \lambda e^{-2\lambda x}\, dx = \frac{1}{2}\, e^{\lambda z}$$

$$\Rightarrow f_Z(z) = \frac{dF_Z(z)}{dz} = \frac{\lambda}{2}\, e^{-\lambda |z|}$$
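The two-sided density (λ/2)e^(−λ|z|) is the Laplace density. A
minimal simulation sketch, assuming NumPy:

```python
# Simulation check of Example 4.9: Z = X - Y for independent
# exponential(lam) delays should have density (lam/2) * exp(-lam*|z|).
import numpy as np

rng = np.random.default_rng(3)
lam = 1.5
x = rng.exponential(1 / lam, size=1_000_000)
y = rng.exponential(1 / lam, size=1_000_000)
z = x - y

hist, edges = np.histogram(z, bins=80, range=(-4.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
laplace = 0.5 * lam * np.exp(-lam * np.abs(centers))
print(np.max(np.abs(hist - laplace)))  # small, up to sampling noise
```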
Sum of Independent Random Variables
The PDF (PMF) of the sum of two independent random variables is
the convolution of their PDFs (PMFs).
Discrete Convolution
Let X and Y be independent discrete random variables with PMFs
p_X(x) and p_Y(y). The PMF of Z = X + Y is

$$p_Z(z) = \sum_x p_X(x)\, p_Y(z - x)$$

Indeed,

$$p_Z(z) = P(X + Y = z) = \sum_{\{(x,y)\,:\,x+y=z\}} P(X = x, Y = y) = \sum_x P(X = x, Y = z - x)$$

$$= \sum_x P(X = x)\, P(Y = z - x) = \sum_x p_X(x)\, p_Y(z - x)$$
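For instance, the PMF of the sum of two fair dice is the
convolution of the two individual PMFs; a short sketch using
np.convolve, assuming NumPy:

```python
# Discrete convolution: the PMF of the sum of two fair dice,
# computed as the convolution of the individual PMFs.
import numpy as np

die = np.full(6, 1 / 6)          # PMF of one die on values 1..6
p_sum = np.convolve(die, die)    # PMF of the sum on values 2..12

for value, prob in zip(range(2, 13), p_sum):
    print(value, round(prob, 4))  # peaks at 7 with probability 6/36
```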
Continuous Convolution
Let X and Y be independent continuous random variables with PDFs
f_X(x) and f_Y(y). The PDF of Z = X + Y is

$$f_Z(z) = \int f_X(x)\, f_Y(z - x)\, dx$$

The conditional CDF of Z given X = x is

$$F_{Z|X}(z \,|\, x) = P(Z \le z \,|\, X = x) = P(Y \le z - x) = F_Y(z - x)$$

$$\Rightarrow f_{Z|X}(z \,|\, x) = f_Y(z - x)$$

$$\Rightarrow f_Z(z) = \int f_{X,Z}(x, z)\, dx = \int f_X(x)\, f_{Z|X}(z \,|\, x)\, dx = \int f_X(x)\, f_Y(z - x)\, dx$$
Example 4.10 Sum of Uniform Random Variables
The random variables X and Y are independent, and uniformly
distributed in the interval [0, 1]. What is the PDF of
$$Z = X + Y$$

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx$$

with

$$f_X(x) = \begin{cases} 1, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad f_Y(z - x) = \begin{cases} 1, & 0 \le z - x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

$$\Rightarrow f_Z(z) = \begin{cases} \min(1, z) - \max(0, z - 1), & 0 \le z \le 2 \\ 0, & \text{otherwise} \end{cases}$$
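The result is the triangular density on [0, 2]. A minimal
simulation sketch, assuming NumPy:

```python
# Simulation check of Example 4.10: the sum of two independent
# Uniform[0, 1] variables has PDF min(1, z) - max(0, z - 1) on [0, 2].
import numpy as np

rng = np.random.default_rng(4)
z = rng.uniform(size=1_000_000) + rng.uniform(size=1_000_000)

hist, edges = np.histogram(z, bins=40, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
triangle = np.minimum(1, centers) - np.maximum(0, centers - 1)
print(np.max(np.abs(hist - triangle)))  # small, up to sampling noise
```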
Example 4.11 Sum of Independent Gaussians
Let X and Y be independent Gaussian random variables with means
μ_X, μ_Y and variances σ_X², σ_Y² respectively, and

$$Z = X + Y$$

What is the PDF of Z?

$$f_Z(z) = \int f_X(x)\, f_Y(z - x)\, dx = \int \frac{1}{2\pi \sigma_X \sigma_Y}\, e^{-\frac{(x - \mu_X)^2}{2\sigma_X^2}}\, e^{-\frac{(z - x - \mu_Y)^2}{2\sigma_Y^2}}\, dx$$

After carrying out the integration over x, the exponent is a
second-order polynomial in z, so Z is Gaussian. Furthermore,
since X and Y are independent,

$$E[Z] = E[X] + E[Y] = \mu_X + \mu_Y, \qquad \text{var}(Z) = \text{var}(X) + \text{var}(Y) = \sigma_X^2 + \sigma_Y^2$$

$$\Rightarrow Z \sim N(\mu_X + \mu_Y,\; \sigma_X^2 + \sigma_Y^2)$$
Covariance
The covariance of random variables X and Y is defined by
$$\text{cov}(X, Y) = E[(X - E[X])(Y - E[Y])]$$

It satisfies

$$\text{cov}(X, Y) = E[XY] - E[X]E[Y]$$

$$\text{cov}(X, aY + b) = a\, \text{cov}(X, Y)$$

$$\text{cov}(X, Y + Z) = \text{cov}(X, Y) + \text{cov}(X, Z)$$

$$X \perp\!\!\!\perp Y \;\Rightarrow\; \text{cov}(X, Y) = 0$$

$$\text{cov}(X, X) = \text{var}(X)$$
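The linearity properties hold exactly even for sample covariances,
which makes them easy to verify numerically. A short sketch,
assuming NumPy (the correlated construction of Y is an arbitrary
illustration):

```python
# Numerical check of the covariance identities on simulated data.
import numpy as np

rng = np.random.default_rng(5)
x, y, z = rng.standard_normal((3, 1_000_000))
y = y + 0.5 * x  # make X and Y correlated

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

print(cov(x, 3 * y + 7), 3 * cov(x, y))      # cov(X, aY+b) = a cov(X, Y)
print(cov(x, y + z), cov(x, y) + cov(x, z))  # additivity in each argument
print(cov(x, x), x.var())                    # cov(X, X) = var(X)
```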
Example 4.13
The pair of random variables (X, Y ) takes the values of
(1, 0), (0, 1), (−1, 0), (0, −1)
each with probability 1/4.
What is the covariance of X and Y ?
Correlation Coefficient
The correlation coefficient of random variables X and Y is
defined by
$$\rho(X, Y) = \frac{\text{cov}(X, Y)}{\sqrt{\text{var}(X)}\, \sqrt{\text{var}(Y)}}$$

It can be shown that

$$-1 \le \rho(X, Y) \le 1$$
Example 4.14
Consider n independent tosses of a coin with probability p of a
head. Let X and Y be the numbers of heads and of tails,
respectively. What is the correlation coefficient of X and Y ?
$$X + Y = n \;\Rightarrow\; E[X + Y] = n \;\Rightarrow\; E[X] + E[Y] = n \;\Rightarrow\; X - E[X] = -(Y - E[Y])$$

$$\Rightarrow \text{cov}(X, Y) = E[(X - E[X])(Y - E[Y])] = -\text{var}(X) = -\text{var}(Y)$$

$$\Rightarrow \rho(X, Y) = \frac{\text{cov}(X, Y)}{\sqrt{\text{var}(X)\, \text{var}(Y)}} = \frac{-\text{var}(X)}{\text{var}(X)} = -1$$
Variance of Sum of Random Variables
$$\text{var}(X + Y) = \text{var}(X) + \text{var}(Y) + 2\,\text{cov}(X, Y)$$

Indeed,

$$\text{var}(X + Y) = E[((X + Y) - E[X + Y])^2] = E[((X - E[X]) + (Y - E[Y]))^2]$$

$$= E[(X - E[X])^2] + E[(Y - E[Y])^2] + 2\, E[(X - E[X])(Y - E[Y])]$$

$$= \text{var}(X) + \text{var}(Y) + 2\,\text{cov}(X, Y)$$

More generally,

$$\text{var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \text{var}(X_i) + \sum_{i=1}^{n} \sum_{j \ne i} \text{cov}(X_i, X_j)$$
Example 4.15 Hats
Consider the hat problem discussed earlier, where n people
throw their hats in a box and then pick a hat at random. What
is the variance of the number of people who pick their own
hat?
Let {X_i = 1} be the event that the ith person picks his own hat,
and {X_i = 0} the event otherwise. We have

$$X_i \sim \text{Bernoulli}\!\left(\frac{1}{n}\right) \;\Rightarrow\; \text{var}(X_i) = \frac{1}{n}\left(1 - \frac{1}{n}\right)$$

For i ≠ j,

$$\text{cov}(X_i, X_j) = E[X_i X_j] - E[X_i]E[X_j] = P(X_i = 1, X_j = 1) - \left(\frac{1}{n}\right)^{2}$$

$$= P(X_i = 1)\, P(X_j = 1 \,|\, X_i = 1) - \left(\frac{1}{n}\right)^{2} = \frac{1}{n} \cdot \frac{1}{n - 1} - \frac{1}{n^2} = \frac{1}{n^2 (n - 1)}$$
The number of persons picking their own hats is

$$H = X_1 + \cdots + X_n$$

$$\Rightarrow \text{var}(H) = \text{var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \text{var}(X_i) + \sum_{i=1}^{n} \sum_{j \ne i} \text{cov}(X_i, X_j)$$

$$= n \cdot \frac{1}{n}\left(1 - \frac{1}{n}\right) + n(n - 1) \cdot \frac{1}{n^2 (n - 1)} = 1 - \frac{1}{n} + \frac{1}{n} = 1$$
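A short simulation sketch (assuming NumPy) of the hat problem:
both the mean and the variance of the number of matches come out
close to 1.

```python
# Simulation check of Example 4.15: the number of fixed points of a
# random permutation (people who get their own hat) has variance 1.
import numpy as np

rng = np.random.default_rng(6)
n, trials = 10, 200_000
matches = np.array([np.sum(rng.permutation(n) == np.arange(n))
                    for _ in range(trials)])
print(matches.mean(), matches.var())  # both close to 1
```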
Conditional Expectation
The conditional expectation of X given Y is

$$E[X \,|\, Y]$$

E[X|Y] takes the value E[X|Y = y] when Y = y. It is a function
of Y, and hence itself a random variable.
Example 4.16 Random Probability
We are given a biased coin and we are told that because of
manufacturing defects, the probability of heads, denoted by Y ,
is itself random, with a known distribution over the interval
[0, 1]. We toss the coin a fixed number n of times and we let X
be the number of heads obtained. Then, for any y ∈ [0, 1], we
have
E[X|Y = y] = ny
It follows that
E[X|Y ] = nY
The Law of Iterated Expectation
The expectation of a random variable is the expectation of its
conditional expectation given another random variable.
That is,

$$E[X] = E[E[X \,|\, Y]]$$

for any random variables X and Y.
Example 4.16 (continued)
Suppose that Y is uniformly distributed over the interval [0, 1].
We toss the coin a fixed number n of times. What is the
expectation of the number of heads X?
$$E[X] = E[E[X \,|\, Y]] = E[nY] = n\, E[Y] = \frac{n}{2}$$
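A minimal simulation sketch of the iterated expectation (assuming
NumPy), with Y uniform on [0, 1] and X binomial given Y:

```python
# Simulation check of Example 4.16: Y ~ Uniform[0, 1] is the random
# head probability, X | Y = y ~ Binomial(n, y), so E[X] = n/2.
import numpy as np

rng = np.random.default_rng(7)
n = 20
y = rng.uniform(size=1_000_000)
x = rng.binomial(n, y)      # one binomial draw per sampled bias
print(x.mean(), n / 2)      # both near 10
```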
Example 4.17 Break a Stick
We start with a stick with length l. We break it at a point
which is chosen randomly and uniformly over its length, and
keep the piece that contains the left end of the stick. We then
repeat the same process on the piece that we are left with.
What is the expectation of the length of the piece that we are
left with after breaking TWICE?
Let Y be the length of the stick after the first time, and X be
the length of the stick after breaking twice.
$$E[X] = E[E[X \,|\, Y]] = E\!\left[\frac{Y}{2}\right] = \frac{1}{2}\, E[Y] = \frac{l}{4}$$
Conditional Expectation as an Estimator
The conditional expectation of X given Y,

$$\hat{X}(Y) = E[X \,|\, Y],$$

is an estimator of X based on Y. The estimation error is the
difference between the estimator and the random variable:

$$Z = \hat{X}(Y) - X$$

Note that

$$E[Z] = E[E[X \,|\, Y] - X] = E[E[X \,|\, Y]] - E[X] = 0$$
The Law of Total Variance
The variance of a random variable is the sum of
the variance of the conditional expectation, and
the expectation of the conditional variance:

$$\text{var}(X) = \text{var}(E[X \,|\, Y]) + E[\text{var}(X \,|\, Y)]$$
Proof
$$\hat{X} = E[X \,|\, Y] \;\Rightarrow\; X = \hat{X} - (\hat{X} - X) = \hat{X} - Z$$

$$\Rightarrow \text{var}(X) = \text{var}(\hat{X} - Z) = \text{var}(\hat{X}) + \text{var}(Z) - 2\,\text{cov}(\hat{X}, Z)$$

The three terms are

$$\text{var}(\hat{X}) = \text{var}(E[X \,|\, Y])$$

$$\text{var}(Z) = E[Z^2] - E^2[Z] = E[Z^2] = E[E[Z^2 \,|\, Y]] = E\!\left[E\!\left[(X - \hat{X})^2 \,|\, Y\right]\right] = E\!\left[E\!\left[(X - E[X|Y])^2 \,|\, Y\right]\right] = E[\text{var}(X \,|\, Y)]$$

$$\text{cov}(\hat{X}, Z) = E[\hat{X} Z] - E[\hat{X}]E[Z] = E[\hat{X} Z] = E[E[\hat{X}(Y)\, Z \,|\, Y]] = E[\hat{X}(Y)\, E[Z \,|\, Y]] = E[\hat{X}(Y) \cdot 0] = 0$$
Example 4.16 (continued)
We consider n independent tosses of a biased coin whose
probability of heads Y is uniformly distributed over the interval
[0, 1]. What is the variance of X, the number of heads obtained?
$$\text{var}(X) = \text{var}(E[X \,|\, Y]) + E[\text{var}(X \,|\, Y)] = \text{var}(nY) + E[nY(1 - Y)]$$

$$= n^2\, \text{var}(Y) + n\, E[Y(1 - Y)] = n^2 \left( E[Y^2] - E^2[Y] \right) + n \left( E[Y] - E[Y^2] \right)$$

$$= n^2 \left( \frac{1}{3} - \frac{1}{4} \right) + n \left( \frac{1}{2} - \frac{1}{3} \right) = \frac{n^2}{12} + \frac{n}{6}$$
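The same simulation setup as before also confirms the variance
formula (a sketch assuming NumPy):

```python
# Simulation check of var(X) = n^2/12 + n/6 for Example 4.16:
# Y ~ Uniform[0, 1], X | Y = y ~ Binomial(n, y).
import numpy as np

rng = np.random.default_rng(8)
n = 20
x = rng.binomial(n, rng.uniform(size=1_000_000))
print(x.var(), n**2 / 12 + n / 6)  # both near 36.67 for n = 20
```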
Example 4.17 (continued)
We break a stick of length l twice at randomly chosen points.
Here Y is the length of the piece left after the first break, and
X is the length after the second break. What is the variance of X?
$$\text{var}(X) = \text{var}(E[X \,|\, Y]) + E[\text{var}(X \,|\, Y)]$$

Since X given Y is uniform on [0, Y], we have E[X|Y] = Y/2 and
var(X|Y) = Y²/12. Hence

$$\text{var}(X) = \text{var}\!\left(\frac{Y}{2}\right) + E\!\left[\frac{Y^2}{12}\right] = \frac{1}{4} \cdot \frac{l^2}{12} + \frac{1}{12} \cdot \frac{l^2}{3} = \frac{7}{144}\, l^2$$
Example 4.21
Consider a continuous random variable X with the PDF given in
Fig. 4.13. Define an auxiliary random variable as follows:

$$Y = \begin{cases} 1, & X < 1 \\ 2, & X \ge 1 \end{cases}$$

Compute the variance of X by conditioning on Y.

$$\text{var}(X) = \text{var}(E[X \,|\, Y]) + E[\text{var}(X \,|\, Y)]$$

$$= \sum_{i=1}^{2} p_Y(i) \left( E[X \,|\, Y = i] - E[X] \right)^2 + \sum_{i=1}^{2} p_Y(i)\, \text{var}(X \,|\, Y = i)$$

$$= \frac{1}{2}\left(\frac{1}{2} - \frac{5}{4}\right)^2 + \frac{1}{2}\left(2 - \frac{5}{4}\right)^2 + \frac{1}{2} \cdot \frac{1}{12} + \frac{1}{2} \cdot \frac{4}{12} = \frac{37}{48}$$
Moment Generating Function
Transform
The transform associated with a random variable X is defined by

$$M_X(s) = E\!\left[e^{sX}\right]$$

If X is discrete,

$$M_X(s) = \sum_x p_X(x)\, e^{sx}$$

If X is continuous,

$$M_X(s) = \int f_X(x)\, e^{sx}\, dx$$

A probability function of x is thus transformed into a function of s.
Example 4.22 A Discrete Random Variable
Suppose

$$p_X(x) = \begin{cases} 1/2, & x = 2 \\ 1/6, & x = 3 \\ 1/3, & x = 5 \end{cases}$$

What is the transform associated with X?
Example 4.23 Poisson Random Variable
Let X be a Poisson random variable with parameter β. What is
the transform associated with X?
$$p_X(k) = e^{-\beta}\, \frac{\beta^k}{k!}, \quad k = 0, 1, 2, \ldots$$

$$\Rightarrow M_X(s) = E\!\left[e^{sX}\right] = \sum_{k=0}^{\infty} e^{sk}\, e^{-\beta}\, \frac{\beta^k}{k!} = e^{-\beta} \sum_{k=0}^{\infty} \frac{(\beta e^s)^k}{k!} = e^{-\beta}\, e^{\beta e^s} = e^{\beta(e^s - 1)}$$
Example 4.24 Exponential Random Variable
Let X be an exponential random variable with parameter λ.
What is the transform associated with X?
$$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0$$

$$\Rightarrow M_X(s) = E\!\left[e^{sX}\right] = \int_0^\infty e^{sx}\, \lambda e^{-\lambda x}\, dx = \int_0^\infty \lambda e^{-(\lambda - s)x}\, dx = \frac{\lambda}{\lambda - s}$$

valid for s < λ (otherwise the integral diverges).
Example 4.25 Linear Function
Let M_X(s) be the transform associated with a random variable X.
What is the transform associated with the random variable

$$Y = aX + b$$

$$M_Y(s) = E\!\left[e^{sY}\right] = E\!\left[e^{s(aX + b)}\right] = e^{bs}\, E\!\left[e^{saX}\right] = e^{bs} M_X(as)$$
Example 4.26 Gaussian Random Variable
Let X be a normal random variable with mean μ and variance σ².
What is the transform associated with X? Write

$$X = \sigma Z + \mu$$

where Z ∼ N(0, 1).

$$\Rightarrow M_Z(s) = E\!\left[e^{sZ}\right] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{sz}\, e^{-z^2/2}\, dz = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(z - s)^2/2}\, e^{s^2/2}\, dz = e^{s^2/2}$$

$$\Rightarrow M_X(s) = e^{\mu s} M_Z(\sigma s) = e^{\frac{\sigma^2}{2} s^2 + \mu s}$$
Moment Generating Function
The transform associated with a random variable is also known
as the moment generating function (MGF) of the random
variable.
Derivative and the First Moment
The derivative of the MGF of a random variable X is

$$\frac{d}{ds} M_X(s) = \frac{d}{ds} \int f_X(x)\, e^{sx}\, dx = \int x\, e^{sx} f_X(x)\, dx$$

At s = 0:

$$\left. \frac{d}{ds} M_X(s) \right|_{s=0} = \int x\, f_X(x)\, dx = E[X]$$
Higher Moments
More generally, the nth order derivative of M_X(s) evaluated at
s = 0 is

$$\left. \frac{d^n}{ds^n} M_X(s) \right|_{s=0} = \left. \frac{d^n}{ds^n} \int f_X(x)\, e^{sx}\, dx \right|_{s=0} = \int x^n f_X(x)\, dx = E[X^n]$$
Example 4.27
Consider the PMF given in Example 4.22. Find the
expectation and the second moment of X.
Find the expectation and the second moment of an
exponential random variable with parameter λ.
For the exponential random variable,

$$M_X(s) = \frac{\lambda}{\lambda - s}$$

$$\Rightarrow E[X] = \left. \frac{d}{ds} M_X(s) \right|_{s=0} = \left. \frac{\lambda}{(\lambda - s)^2} \right|_{s=0} = \frac{1}{\lambda}$$

$$\Rightarrow E[X^2] = \left. \frac{d^2}{ds^2} M_X(s) \right|_{s=0} = \left. \frac{2\lambda}{(\lambda - s)^3} \right|_{s=0} = \frac{2}{\lambda^2}$$
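The derivatives at s = 0 can also be approximated by finite
differences, which gives a quick numerical check of these moments.
A sketch in plain Python (the step size h is an arbitrary choice):

```python
# Numerical check: estimate E[X] and E[X^2] for X ~ exponential(lam)
# via central finite differences of M_X(s) = lam / (lam - s) at s = 0.
lam, h = 2.0, 1e-4

def M(s):
    return lam / (lam - s)

m1 = (M(h) - M(-h)) / (2 * h)          # ~ E[X]   = 1/lam   = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # ~ E[X^2] = 2/lam^2 = 0.5
print(m1, m2)
```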
Inverse Transform
The transform MX (s) associated with a random variable X
uniquely determines the PDF of X, assuming MX (s) is finite
for all s in some interval [−a, a], where a > 0.
Example 4.28 Discrete Random Variable
We are told that the transform associated with a random variable
X is

$$M_X(s) = \frac{1}{4}\, e^{-s} + \frac{1}{2} + \frac{1}{8}\, e^{4s} + \frac{1}{8}\, e^{5s}$$

What is the probability distribution of X?
Example 4.29 Geometric Random Variable
We are told that the transform associated with a random variable
X is

$$M_X(s) = \frac{p e^s}{1 - (1 - p) e^s}$$

What is the probability distribution of X?
Example 4.30 A Mixture of Two Distributions
A bank has three tellers, two of them fast, one slow. The time
to assist a customer is exponentially distributed with parameter
λ = 6 at the fast tellers, and λ = 4 at the slow teller. Jane
enters the bank and chooses a teller at random, each one with
probability 1/3. Find the PDF of the time it takes to assist
Jane and the associated transform.
Sum of Independent Random Variables
The transform associated with the sum of independent random
variables is the product of the transforms associated with those
random variables.
Proof
Suppose $X \perp\!\!\!\perp Y$ and

$$Z = X + Y$$

$$\Rightarrow M_Z(s) = E\!\left[e^{sZ}\right] = E\!\left[e^{s(X+Y)}\right] = E\!\left[e^{sX} e^{sY}\right] = E\!\left[e^{sX}\right] E\!\left[e^{sY}\right] = M_X(s)\, M_Y(s)$$

More generally, if X_1, ..., X_n are independent,

$$M_{X_1 + \cdots + X_n}(s) = M_{X_1}(s) \cdots M_{X_n}(s)$$
Example 4.31 Binomial Random Variable
Let X_1, ..., X_n be independent Bernoulli random variables with
a common parameter p. What is the transform associated with the
binomial random variable

$$Z = X_1 + \cdots + X_n$$

$$M_{X_i}(s) = (1 - p)\, e^{0s} + p\, e^s = (1 - p) + p e^s$$

$$\Rightarrow M_Z(s) = M_{X_1}(s) \cdots M_{X_n}(s) = (1 - p + p e^s)^n$$
Example 4.32 Independent Poisson Variables
Let X and Y be independent Poisson random variables with
parameters λ and μ, respectively, and

$$Z = X + Y$$

What is the transform associated with Z?

$$M_Z(s) = M_X(s)\, M_Y(s) = e^{\lambda(e^s - 1)}\, e^{\mu(e^s - 1)} = e^{(\lambda + \mu)(e^s - 1)}$$

i.e.,

$$\text{Poisson}(\lambda) + \text{Poisson}(\mu) \sim \text{Poisson}(\lambda + \mu)$$
Example 4.33 Independent Gaussians
Let X and Y be independent Gaussian random variables with means
μ_x, μ_y and variances σ_x², σ_y², respectively, and

$$Z = X + Y$$

What is the transform associated with Z?

$$M_Z(s) = M_X(s)\, M_Y(s) = e^{(\sigma_x^2 s^2/2) + \mu_x s}\, e^{(\sigma_y^2 s^2/2) + \mu_y s} = e^{((\sigma_x^2 + \sigma_y^2) s^2/2) + (\mu_x + \mu_y) s}$$

i.e.,

$$N(\mu_x, \sigma_x^2) + N(\mu_y, \sigma_y^2) \sim N(\mu_x + \mu_y,\, \sigma_x^2 + \sigma_y^2)$$
Random Sum
Definition
A random sum is the sum of a random number of iid random
variables. Specifically,

$$Y = X_1 + \cdots + X_N$$

where
N is a random variable taking nonnegative integer values
X_1, ..., X_N are iid random variables, independent of N

We use X to denote a random variable with the common distribution
of the X_i.
Example
Consider one day at a convenience store:
a random number N of customers make purchases
customer i makes a purchase of a random amount X_i

Thus, the total sales amount is

$$S = X_1 + \cdots + X_N$$
Properties
$$Y = X_1 + \cdots + X_N$$

$$\Rightarrow E[Y] = E[E[Y \,|\, N]] = E[N\, E[X]] = E[N]\, E[X]$$

$$\Rightarrow \text{var}(Y) = E[\text{var}(Y \,|\, N)] + \text{var}(E[Y \,|\, N]) = E[N\, \text{var}(X)] + \text{var}(N\, E[X]) = \text{var}(X)\, E[N] + E^2[X]\, \text{var}(N)$$

$$\Rightarrow M_Y(s) = M_N(\log M_X(s))$$
Proof
$$M_Y(s) = E\!\left[e^{sY}\right] = E\!\left[E\!\left[e^{sY} \,|\, N\right]\right] = E\!\left[(M_X(s))^N\right] = \sum_n p_N(n)\, (M_X(s))^n$$

$$M_N(s) = E\!\left[e^{sN}\right] = \sum_n p_N(n)\, e^{sn} = \sum_n p_N(n)\, (e^s)^n$$

Comparison of M_Y(s) and M_N(s) shows

$$M_Y(s) = M_N(\log M_X(s))$$
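A simulation sketch of the mean and variance formulas (assuming
NumPy), using the gas-station setup of Example 4.34 below:
N ∼ Binomial(3, 1/2) open stations, each carrying a Uniform[0, 1000]
amount.

```python
# Simulation check of E[Y] = E[N]E[X] and
# var(Y) = var(X) E[N] + E[X]^2 var(N) for a random sum.
import numpy as np

rng = np.random.default_rng(9)
n = rng.binomial(3, 0.5, size=200_000)              # number of terms
y = np.array([rng.uniform(0, 1000, size=k).sum() for k in n])

e_n, var_n = 1.5, 0.75                # N ~ Binomial(3, 1/2)
e_x, var_x = 500.0, 1000**2 / 12      # X ~ Uniform[0, 1000]
print(y.mean(), e_n * e_x)                       # ~750
print(y.var(), var_x * e_n + e_x**2 * var_n)     # ~312500
```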
Example 4.34 Random Sum
A remote village has 3 gas stations. Each gas station is open on
any given day with probability 1/2, independent of the others.
The amount of gas available in each station is uniformly
distributed between 0 and 1000 gallons. What is the transform
associated with the total amount of gas available at the gas
stations that are open?
The total amount of gas available is a random sum

$$Y = X_1 + \cdots + X_N$$

where X_i is the amount of gas available at the ith open gas
station and N is the number of open gas stations. With M_B the
transform of the Bernoulli(1/2) indicator that a station is open,

$$M_N(s) = (M_B(s))^3 = (1 - p + p e^s)^3 = \frac{1}{8}\,(1 + e^s)^3$$

$$M_X(s) = E\!\left[e^{sX}\right] = \int e^{sx} f(x)\, dx = \frac{e^{1000s} - 1}{1000s}$$

$$\Rightarrow M_Y(s) = M_N(\log M_X(s)) = \frac{1}{8} \left( 1 + \frac{e^{1000s} - 1}{1000s} \right)^{3}$$
Example 4.35 Great Expectations
Jane visits a number of bookstores, looking for Great
Expectations. Any given bookstore carries the book with
probability p, independent of the others. In a bookstore visited,
Jane spends a random amount of time, exponentially
distributed with parameter λ, until she either finds the book or
she determines that the bookstore does not carry it. We wish to
find the mean, the variance, and the PDF of the total time
spent in bookstores.
The total time until Jane finds a copy is a random sum

$$Y = X_1 + X_2 + \cdots + X_N$$

where X_i is the time spent at the ith bookstore and N is the
number of bookstores she visits.

$$M_N(s) = \frac{p e^s}{1 - (1 - p) e^s}, \qquad M_X(s) = \frac{\lambda}{\lambda - s}$$

Using M_Y(s) = M_N(log M_X(s)), i.e., substituting e^s with M_X(s)
in M_N(s),

$$M_Y(s) = \frac{p\, M_X(s)}{1 - (1 - p)\, M_X(s)} = \frac{p\, \frac{\lambda}{\lambda - s}}{1 - (1 - p)\, \frac{\lambda}{\lambda - s}} = \frac{p\lambda}{p\lambda - s}$$

$$\Rightarrow Y \sim \text{exponential}(p\lambda), \quad E[Y] = \frac{1}{\lambda p}, \quad \text{var}(Y) = \frac{1}{\lambda^2 p^2}$$
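A minimal simulation sketch of this result, assuming NumPy (note
that NumPy's geometric sampler starts at 1, matching the number of
bookstores visited):

```python
# Simulation check of Example 4.35: a geometric(p) number of
# exponential(lam) visit times sums to an exponential(p*lam) total.
import numpy as np

rng = np.random.default_rng(10)
p, lam = 0.3, 2.0
n = rng.geometric(p, size=200_000)                 # bookstores visited
y = np.array([rng.exponential(1 / lam, size=k).sum() for k in n])

print(y.mean(), 1 / (p * lam))        # ~1.667
print(y.var(), 1 / (p * lam) ** 2)    # ~2.778
```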
Example 4.36
Let N be geometrically distributed with parameter p. Each
random variable Xi is geometrically distributed with parameter
q. All of these random variables are independent. What is the
distribution of

$$Y = X_1 + \cdots + X_N$$

$$M_N(s) = \frac{p e^s}{1 - (1 - p) e^s}, \qquad M_X(s) = \frac{q e^s}{1 - (1 - q) e^s}$$

$$\Rightarrow M_Y(s) = \frac{p\, M_X(s)}{1 - (1 - p)\, M_X(s)} = \frac{p\, \dfrac{q e^s}{1 - (1 - q) e^s}}{1 - (1 - p)\, \dfrac{q e^s}{1 - (1 - q) e^s}} = \frac{pq\, e^s}{1 - (1 - pq)\, e^s}$$

$$\Rightarrow Y \sim \text{geometric}(pq)$$