Functions of Random Variables
Methods for determining the distribution of
functions of Random Variables
1. Distribution function method
2. Moment generating function method
3. Transformation method
Distribution function method
Let X, Y, Z, … have joint density f(x, y, z, …)
Let W = h( X, Y, Z, …)
First step
Find the distribution function of W
G(w) = P[W ≤ w] = P[h( X, Y, Z, …) ≤ w]
Second step
Find the density function of W
g(w) = G'(w).
Example 1
Let X have a normal distribution with mean 0 and variance 1 (the standard normal distribution):

f(x) = (1/√(2π)) e^{−x²/2}

Let W = X². Find the distribution of W.
First step
Find the distribution function of W:

G(w) = P[W ≤ w] = P[X² ≤ w]
     = P[−√w ≤ X ≤ √w]   if w ≥ 0
     = F(√w) − F(−√w)

where

F(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−u²/2} du,   so   F′(x) = f(x) = (1/√(2π)) e^{−x²/2}
Second step
Find the density function of W:

g(w) = G′(w) = d/dw [ F(√w) − F(−√w) ]
     = F′(√w) · d√w/dw − F′(−√w) · d(−√w)/dw
     = f(√w) · (1/2) w^{−1/2} + f(−√w) · (1/2) w^{−1/2}
     = (1/2) w^{−1/2} (1/√(2π)) e^{−w/2} + (1/2) w^{−1/2} (1/√(2π)) e^{−w/2}
     = (1/√(2π)) w^{−1/2} e^{−w/2}   if w > 0.
Thus if X has a standard normal distribution then W = X² has density

g(w) = (1/√(2π)) w^{−1/2} e^{−w/2}   if w > 0.

This distribution is the Gamma distribution with α = ½ and λ = ½.
This distribution is also the χ² distribution with ν = 1 degree of freedom.
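The conclusion of Example 1 can be checked by simulation. The sketch below (sample size is an arbitrary choice) draws standard normals, squares them, and compares the sample mean and variance of W = X² with the χ²(1) values, mean 1 and variance 2:

```python
import random

# Monte Carlo check: if X ~ N(0, 1), then W = X^2 should behave like a
# chi-square with 1 degree of freedom (Gamma with alpha = 1/2, lambda = 1/2),
# which has mean 1 and variance 2.
random.seed(0)
n = 200_000
w = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

mean_w = sum(w) / n
var_w = sum((x - mean_w) ** 2 for x in w) / n
print(mean_w, var_w)  # should be close to 1 and 2
```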
Example 2
Suppose that X and Y are independent random variables each having an exponential distribution with parameter λ (mean 1/λ):

f1(x) = λ e^{−λx}   for x ≥ 0
f2(y) = λ e^{−λy}   for y ≥ 0

f(x, y) = f1(x) f2(y) = λ² e^{−λ(x+y)}   for x ≥ 0, y ≥ 0

Let W = X + Y. Find the distribution of W.
First step
Find the distribution function of W = X + Y:

G(w) = P[W ≤ w] = P[X + Y ≤ w]
     = ∫₀^w ∫₀^{w−x} f1(x) f2(y) dy dx
     = ∫₀^w ∫₀^{w−x} λ² e^{−λ(x+y)} dy dx
     = λ² ∫₀^w e^{−λx} [ ∫₀^{w−x} e^{−λy} dy ] dx
     = λ² ∫₀^w e^{−λx} · (1 − e^{−λ(w−x)})/λ dx
     = λ ∫₀^w ( e^{−λx} − e^{−λw} ) dx
     = λ [ −e^{−λx}/λ − x e^{−λw} ]₀^w
     = 1 − e^{−λw} − λw e^{−λw}
Second step
Find the density function of W:

g(w) = G′(w) = d/dw [ 1 − e^{−λw} − λw e^{−λw} ]
     = λ e^{−λw} − λ ( e^{−λw} − λw e^{−λw} )
     = λ e^{−λw} − λ e^{−λw} + λ² w e^{−λw}
     = λ² w e^{−λw}   for w ≥ 0

Hence if X and Y are independent random variables each having an exponential distribution with parameter λ, then W has density

g(w) = λ² w e^{−λw}   for w ≥ 0

This distribution can be recognized to be the Gamma distribution with parameters α = 2 and λ.
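A quick simulation agrees with the Gamma(α = 2, λ) conclusion. The value λ = 0.5 below is an assumed illustration value; the Gamma(2, λ) distribution has mean 2/λ and variance 2/λ²:

```python
import random

# Monte Carlo check: if X and Y are independent Exponential(lam),
# W = X + Y should be Gamma(alpha = 2, lam) with mean 2/lam and
# variance 2/lam^2.
random.seed(1)
lam = 0.5
n = 200_000
w = [random.expovariate(lam) + random.expovariate(lam) for _ in range(n)]

mean_w = sum(w) / n                             # expect 2 / lam = 4
var_w = sum((x - mean_w) ** 2 for x in w) / n   # expect 2 / lam**2 = 8
```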
Example: Student’s t distribution
Let Z and U be two independent random variables with:
1. Z having a standard normal distribution, and
2. U having a χ² distribution with ν degrees of freedom.
Find the distribution of

t = Z / √(U/ν)

The density of Z is:

f(z) = (1/√(2π)) e^{−z²/2}

The density of U is:

h(u) = [ 1 / ( 2^{ν/2} Γ(ν/2) ) ] u^{ν/2 − 1} e^{−u/2}

Therefore the joint density of Z and U is:

f(z, u) = f(z) h(u) = [ 1 / ( √(2π) 2^{ν/2} Γ(ν/2) ) ] u^{ν/2 − 1} e^{−(z² + u)/2}
The distribution function of T is:

G(t) = P[T ≤ t] = P[ Z/√(U/ν) ≤ t ] = P[ Z ≤ t √(U/ν) ]

Therefore:

G(t) = P[T ≤ t] = ∫₀^∞ ∫_{−∞}^{t√(u/ν)} [ 1 / ( √(2π) 2^{ν/2} Γ(ν/2) ) ] u^{ν/2 − 1} e^{−(z² + u)/2} dz du

[Figure: illustration of the limits of integration z ≤ t√(u/ν) in the (z, u) plane.]
Now:

G(t) = ∫₀^∞ ∫_{−∞}^{t√(u/ν)} [ 1 / ( √(2π) 2^{ν/2} Γ(ν/2) ) ] u^{ν/2 − 1} e^{−(z² + u)/2} dz du

and:

g(t) = G′(t) = d/dt ∫₀^∞ [ ∫_{−∞}^{t√(u/ν)} ( 1 / ( √(2π) 2^{ν/2} Γ(ν/2) ) ) u^{ν/2 − 1} e^{−(z² + u)/2} dz ] du

Using:

d/dt ∫ₐᵇ F(x, t) dx = ∫ₐᵇ ∂F(x, t)/∂t dx

and the fundamental theorem of calculus:

if F(x) = ∫ₐˣ f(t) dt, then F′(x) = f(x)

we differentiate under the integral sign: for each u, the inner integral differentiates (by the chain rule) to the integrand evaluated at z = t√(u/ν), times d/dt [ t√(u/ν) ] = √(u/ν). Hence

g(t) = ∫₀^∞ [ 1 / ( √(2π) 2^{ν/2} Γ(ν/2) ) ] u^{ν/2 − 1} e^{−( t²u/ν + u )/2} √(u/ν) du
     = [ 1 / ( √(2πν) 2^{ν/2} Γ(ν/2) ) ] ∫₀^∞ u^{(ν+1)/2 − 1} e^{−( t²/ν + 1 ) u/2} du
Hence

g(t) = [ 1 / ( √(2πν) 2^{ν/2} Γ(ν/2) ) ] ∫₀^∞ u^{(ν+1)/2 − 1} e^{−[ (t²/ν + 1)/2 ] u} du

Using

∫₀^∞ [ λ^a / Γ(a) ] x^{a−1} e^{−λx} dx = 1,   or   ∫₀^∞ x^{a−1} e^{−λx} dx = Γ(a) / λ^a

with a = (ν+1)/2 and λ = (t²/ν + 1)/2:

∫₀^∞ u^{(ν+1)/2 − 1} e^{−[ (t²/ν + 1)/2 ] u} du = Γ( (ν+1)/2 ) / [ (t²/ν + 1)/2 ]^{(ν+1)/2}
                                               = 2^{(ν+1)/2} Γ( (ν+1)/2 ) ( 1 + t²/ν )^{−(ν+1)/2}

and therefore

g(t) = [ 2^{(ν+1)/2} Γ( (ν+1)/2 ) / ( √(2πν) 2^{ν/2} Γ(ν/2) ) ] ( 1 + t²/ν )^{−(ν+1)/2}
or, since 2^{(ν+1)/2} / ( √(2π) 2^{ν/2} ) = 1/√π,

g(t) = [ Γ( (ν+1)/2 ) / ( √(νπ) Γ(ν/2) ) ] ( 1 + t²/ν )^{−(ν+1)/2} = K ( 1 + t²/ν )^{−(ν+1)/2}

where

K = Γ( (ν+1)/2 ) / ( √(νπ) Γ(ν/2) )
Student’s t distribution

g(t) = K ( 1 + t²/ν )^{−(ν+1)/2}

where

K = Γ( (ν+1)/2 ) / ( √(νπ) Γ(ν/2) )
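As a numerical sanity check on the derived density, the sketch below (ν = 5 is an arbitrary choice) integrates g(t) = K(1 + t²/ν)^{−(ν+1)/2} by the trapezoidal rule and confirms it integrates to 1:

```python
import math

# Check that the derived Student's t density integrates to 1 for nu = 5.
nu = 5
K = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))

def g(t):
    return K * (1.0 + t * t / nu) ** (-(nu + 1) / 2)

# Trapezoidal rule on [-50, 50]; the tails beyond that are negligible for nu = 5.
a, b, m = -50.0, 50.0, 200_000
h = (b - a) / m
total = h * (sum(g(a + i * h) for i in range(1, m)) + 0.5 * (g(a) + g(b)))
print(total)  # should be close to 1
```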
Student – W. S. Gosset
Worked for a distillery.
Not allowed to publish.
Published under the pseudonym “Student”.

[Figure: the t distribution compared with the standard normal distribution.]
Distribution of the Max and Min Statistics
Let x1, x2, …, xn denote a sample of size n from the density f(x).
Let M = max(xi); determine the distribution of M.
Repeat this computation for m = min(xi).
Assume that the density is the uniform density from 0 to θ. Hence

f(x) = 1/θ   for 0 ≤ x ≤ θ;   0 elsewhere

and the distribution function is

F(x) = P[X ≤ x] = 0 for x < 0;   x/θ for 0 ≤ x ≤ θ;   1 for x > θ.
Finding the distribution function of M:

G(t) = P[M ≤ t] = P[max(xi) ≤ t]
     = P[x1 ≤ t, …, xn ≤ t]
     = P[x1 ≤ t] ⋯ P[xn ≤ t]
     = 0 for t < 0;   (t/θ)ⁿ for 0 ≤ t ≤ θ;   1 for t > θ.

Differentiating, we find the density function of M:

g(t) = G′(t) = n t^{n−1} / θⁿ   for 0 ≤ t ≤ θ;   0 otherwise.
[Figure: the population density f(x) and the density g(t) of the maximum M.]
Finding the distribution function of m:

G(t) = P[m ≤ t] = P[min(xi) ≤ t]
     = 1 − P[x1 > t, …, xn > t]
     = 1 − P[x1 > t] ⋯ P[xn > t]
     = 0 for t < 0;   1 − (1 − t/θ)ⁿ for 0 ≤ t ≤ θ;   1 for t > θ.

Differentiating, we find the density function of m:

g(t) = G′(t) = (n/θ) (1 − t/θ)^{n−1}   for 0 ≤ t ≤ θ;   0 otherwise.
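A simulation sketch of these two results (θ = 10 and n = 5 are assumed illustration values): integrating t·g(t) with the densities above gives E[M] = nθ/(n+1) and E[m] = θ/(n+1), which the sample means should match:

```python
import random

# For a sample of size n from Uniform(0, theta):
#   E[max] = n*theta/(n+1), E[min] = theta/(n+1),
# consequences of the densities derived above.
random.seed(2)
theta, n, reps = 10.0, 5, 100_000
maxima = [max(random.uniform(0, theta) for _ in range(n)) for _ in range(reps)]
minima = [min(random.uniform(0, theta) for _ in range(n)) for _ in range(reps)]

mean_max = sum(maxima) / reps   # expect 50/6 ~ 8.33
mean_min = sum(minima) / reps   # expect 10/6 ~ 1.67
```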
[Figure: the population density f(x) and the density g(t) of the minimum m.]
The probability integral transformation
This transformation allows one to convert
observations that come from a uniform
distribution from 0 to 1 to observations that
come from an arbitrary distribution.
Let U denote an observation having a uniform distribution from 0 to 1:

g(u) = 1 for 0 ≤ u ≤ 1;   0 elsewhere.

Let f(x) denote an arbitrary density function and F(x) its corresponding cumulative distribution function. Let

X = F⁻¹(U)

Find the distribution of X.

G(x) = P[X ≤ x] = P[ F⁻¹(U) ≤ x ]
     = P[ U ≤ F(x) ]
     = F(x)

Hence

g(x) = G′(x) = F′(x) = f(x)

Thus if U has a uniform distribution from 0 to 1, then X = F⁻¹(U) has density f(x).

[Figure: the inverse transformation X = F⁻¹(U).]
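The transformation is easy to apply in code. In this sketch (λ = 2 is an assumed value) we take F(x) = 1 − e^{−λx}, so F⁻¹(u) = −ln(1 − u)/λ, and the transformed uniforms should behave like Exponential(λ) draws with mean 1/λ:

```python
import random
import math

# Probability integral transformation: U ~ Uniform(0,1) mapped through
# F^{-1}(u) = -ln(1 - u)/lam gives an Exponential(lam) observation.
random.seed(3)
lam = 2.0
n = 200_000
xs = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

mean_x = sum(xs) / n   # expect 1/lam = 0.5
```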
The Transformation Method
Theorem
Let X denote a random variable with
probability density function f(x) and U = h(X).
Assume that h(x) is either strictly increasing or strictly decreasing. Then the probability density of U is:

g(u) = f( h⁻¹(u) ) | d h⁻¹(u)/du | = f(x) | dx/du |

Proof
Use the distribution function method.
Step 1: Find the distribution function, G(u).
Step 2: Differentiate G(u) to find the probability density function g(u).

G(u) = P[U ≤ u] = P[ h(X) ≤ u ]
     = P[ X ≤ h⁻¹(u) ]   if h is strictly increasing
     = P[ X ≥ h⁻¹(u) ]   if h is strictly decreasing

hence

G(u) = F( h⁻¹(u) )        (h strictly increasing)
     = 1 − F( h⁻¹(u) )    (h strictly decreasing)

and

g(u) = G′(u) = F′( h⁻¹(u) ) · d h⁻¹(u)/du     (h strictly increasing)
            = −F′( h⁻¹(u) ) · d h⁻¹(u)/du     (h strictly decreasing)

Since d h⁻¹(u)/du is positive when h is increasing and negative when h is decreasing, in either case

g(u) = f( h⁻¹(u) ) | d h⁻¹(u)/du | = f(x) | dx/du |
Example
Suppose that X has a normal distribution with mean μ and variance σ²:

f(x) = [ 1 / ( √(2π) σ ) ] e^{ −(x−μ)² / (2σ²) }

Find the distribution of U = h(X) = e^X.

Solution:

h⁻¹(u) = ln(u)   and   d h⁻¹(u)/du = d ln(u)/du = 1/u

hence

g(u) = f( h⁻¹(u) ) | d h⁻¹(u)/du | = [ 1 / ( √(2π) σ u ) ] e^{ −(ln(u) − μ)² / (2σ²) }   for u > 0

This distribution is called the log-normal distribution.
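A known consequence of this log-normal density (a fact about the distribution, with μ and σ below chosen arbitrarily) is E[U] = e^{μ + σ²/2}, which simulation of U = e^X reproduces:

```python
import random
import math

# Monte Carlo check: if X ~ N(mu, sigma^2), then U = e^X is log-normal
# with E[U] = exp(mu + sigma^2 / 2).
random.seed(4)
mu, sigma = 1.0, 0.5
n = 200_000
u = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]

mean_u = sum(u) / n
expected = math.exp(mu + sigma**2 / 2)
```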
[Figure: a log-normal density.]
The Transformation Method (many variables)

Theorem
Let x1, x2, …, xn denote random variables with joint probability density function f(x1, x2, …, xn). Let
u1 = h1(x1, x2, …, xn)
u2 = h2(x1, x2, …, xn)
⁞
un = hn(x1, x2, …, xn)
define an invertible transformation from the x’s to the u’s.
Then the joint probability density function of u1, u2, …, un is given by:

g(u1, …, un) = f(x1, …, xn) | d(x1, …, xn)/d(u1, …, un) | = f(x1, …, xn) |J|

where

J = d(x1, …, xn)/d(u1, …, un) = det[ dxi/duj ]   (the determinant of the matrix whose (i, j) entry is dxi/duj)

is the Jacobian of the transformation.
Example
Suppose that x1, x2 are independent with density functions f1(x1) and f2(x2).
Find the distribution of
u1 = x1 + x2
u2 = x1 − x2

Solving for x1 and x2 we get the inverse transformation:

x1 = (u1 + u2)/2
x2 = (u1 − u2)/2

The Jacobian of the transformation:

J = d(x1, x2)/d(u1, u2) = det [ dx1/du1  dx1/du2 ; dx2/du1  dx2/du2 ]
  = det [ 1/2  1/2 ; 1/2  −1/2 ]
  = (1/2)(−1/2) − (1/2)(1/2) = −1/2
The joint density of x1, x2 is
f(x1, x2) = f1(x1) f2(x2)
Hence the joint density of u1 and u2 is:

g(u1, u2) = f(x1, x2) |J| = f1( (u1 + u2)/2 ) f2( (u1 − u2)/2 ) (1/2)

From this we can determine the distribution of u1 = x1 + x2:

g1(u1) = ∫_{−∞}^{∞} g(u1, u2) du2
       = ∫_{−∞}^{∞} f1( (u1 + u2)/2 ) f2( (u1 − u2)/2 ) (1/2) du2

Put v = (u1 + u2)/2; then (u1 − u2)/2 = u1 − v and dv/du2 = 1/2. Hence

g1(u1) = ∫_{−∞}^{∞} f1(v) f2(u1 − v) dv

This is called the convolution of the two densities f1 and f2.
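The convolution integral can be evaluated numerically; in this sketch (λ = 1, w = 2, and the grid step are assumptions) convolving two Exponential(λ) densities reproduces the Gamma density λ²w e^{−λw} from Example 2:

```python
import math

# Numerical convolution g1(w) = integral of f1(v) f2(w - v) dv for two
# Exponential(lam) densities; the result should match lam^2 * w * exp(-lam*w).
lam = 1.0
h = 0.001
w = 2.0
# Riemann sum over v in (0, w); the integrand vanishes outside that range.
conv = sum(lam * math.exp(-lam * v) * lam * math.exp(-lam * (w - v)) * h
           for v in (i * h for i in range(1, int(w / h))))
exact = lam**2 * w * math.exp(-lam * w)
```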
Example: The ex-Gaussian distribution
Let X and Y be two independent random variables such that:
1. X has an exponential distribution with parameter λ.
2. Y has a normal (Gaussian) distribution with mean μ and standard deviation σ.
Find the distribution of U = X + Y.
This distribution is used in psychology as a model for response time to perform a task.

Now

f1(x) = λ e^{−λx} for x ≥ 0;   0 for x < 0

f2(y) = [ 1 / ( √(2π) σ ) ] e^{ −(y−μ)² / (2σ²) }
The density of U = X + Y is:

g(u) = ∫_{−∞}^{∞} f1(v) f2(u − v) dv = ∫₀^∞ λ e^{−λv} [ 1 / ( √(2π) σ ) ] e^{ −(u − v − μ)² / (2σ²) } dv

or

g(u) = [ λ / ( √(2π) σ ) ] ∫₀^∞ e^{ −[ (u − v − μ)² + 2σ²λv ] / (2σ²) } dv

Expanding the exponent in powers of v:

(u − v − μ)² + 2σ²λv = v² − 2(u − μ)v + (u − μ)² + 2σ²λv = v² − 2[ (u − μ) − σ²λ ]v + (u − μ)²

and completing the square with [ (u − μ) − σ²λ ]²:

g(u) = [ λ / ( √(2π) σ ) ] e^{ −[ (u−μ)² − ((u−μ) − σ²λ)² ] / (2σ²) } ∫₀^∞ e^{ −[ v − ((u−μ) − σ²λ) ]² / (2σ²) } dv
     = λ e^{ −[ (u−μ)² − ((u−μ) − σ²λ)² ] / (2σ²) } P[V ≥ 0]

where V has a normal distribution with mean μ_V = u − (μ + σ²λ) and variance σ².
Hence, since (u−μ)² − ((u−μ) − σ²λ)² = 2σ²λ[ (u−μ) − σ²λ/2 ],

g(u) = λ e^{ −λ[ (u − μ) − σ²λ/2 ] } [ 1 − Φ( (μ + σ²λ − u)/σ ) ]

where Φ(z) is the cdf of the standard normal distribution.
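Two simple consequences of U = X + Y (simulated below with assumed parameter values) are E[U] = μ + 1/λ and Var[U] = σ² + 1/λ², since means and variances of independent summands add:

```python
import random

# Monte Carlo check of the ex-Gaussian moments: U = X + Y with
# X ~ Exponential(lam) and Y ~ N(mu, sigma^2) has
# E[U] = mu + 1/lam and Var[U] = sigma^2 + 1/lam^2.
random.seed(5)
lam, mu, sigma = 0.5, 10.0, 2.0
n = 200_000
u = [random.expovariate(lam) + random.gauss(mu, sigma) for _ in range(n)]

mean_u = sum(u) / n                             # expect mu + 1/lam = 12
var_u = sum((x - mean_u) ** 2 for x in u) / n   # expect sigma^2 + 1/lam^2 = 8
```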
[Figure: an ex-Gaussian density g(u).]
Use of moment generating functions

Definition
Let X denote a random variable with probability density function f(x) if continuous (probability mass function p(x) if discrete). Then

m_X(t) = the moment generating function of X
       = E[e^{tX}]
       = ∫_{−∞}^{∞} e^{tx} f(x) dx   if X is continuous
       = Σ_x e^{tx} p(x)             if X is discrete
The distribution of a random variable X is
described by either
1. The density function f(x) if X continuous (probability
mass function p(x) if X discrete), or
2. The cumulative distribution function F(x), or
3. The moment generating function mX(t)
Properties
1. m_X(0) = 1
2. m_X^{(k)}(0) = k-th derivative of m_X(t) at t = 0
              = μ_k = E[X^k]
              = ∫_{−∞}^{∞} x^k f(x) dx   (X continuous)
              = Σ_x x^k p(x)             (X discrete)
3. m_X(t) = 1 + μ₁ t + (μ₂/2!) t² + (μ₃/3!) t³ + … + (μ_k/k!) t^k + …
4. Let X be a random variable with moment generating function m_X(t). Let Y = bX + a. Then
   m_Y(t) = m_{bX+a}(t) = E(e^{[bX+a]t}) = e^{at} E(e^{X[bt]}) = e^{at} m_X(bt)
5. Let X and Y be two independent random variables with moment generating functions m_X(t) and m_Y(t). Then
   m_{X+Y}(t) = E(e^{[X+Y]t}) = E(e^{Xt} e^{Yt}) = E(e^{Xt}) E(e^{Yt}) = m_X(t) m_Y(t)
6. Let X and Y be two random variables with
moment generating function mX(t) and mY(t)
and two distribution functions FX(x) and
FY(y) respectively.
Let mX (t) = mY (t) then FX(x) = FY(x).
This ensures that the distribution of a random
variable can be identified by its moment
generating function
M.G.F.’s – Continuous distributions

Name                  Moment generating function m_X(t)
Continuous Uniform    ( e^{bt} − e^{at} ) / [ (b − a) t ]
Exponential           λ/(λ − t)               for t < λ
Gamma                 [ λ/(λ − t) ]^α         for t < λ
χ² (ν d.f.)           (1 − 2t)^{−ν/2}         for t < 1/2
Normal                e^{ tμ + (1/2) t² σ² }
M.G.F.’s – Discrete distributions

Name                Moment generating function m_X(t)
Discrete Uniform    e^t ( e^{tN} − 1 ) / [ N ( e^t − 1 ) ]
Bernoulli           q + p e^t
Binomial            ( q + p e^t )^N
Geometric           p e^t / ( 1 − q e^t )
Negative Binomial   [ p e^t / ( 1 − q e^t ) ]^k
Poisson             e^{ λ( e^t − 1 ) }
Moment generating function of the gamma distribution

m_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx

where

f(x) = [ λ^α / Γ(α) ] x^{α−1} e^{−λx}   for x ≥ 0;   0 for x < 0

Thus

m_X(t) = ∫₀^∞ e^{tx} [ λ^α / Γ(α) ] x^{α−1} e^{−λx} dx
       = [ λ^α / Γ(α) ] ∫₀^∞ x^{α−1} e^{−(λ−t)x} dx

Using

∫₀^∞ [ b^α / Γ(α) ] x^{α−1} e^{−bx} dx = 1,   or   ∫₀^∞ x^{α−1} e^{−bx} dx = Γ(α) / b^α

then

m_X(t) = [ λ^α / Γ(α) ] · Γ(α) / (λ − t)^α = [ λ / (λ − t) ]^α   for t < λ
Moment generating function of the standard normal distribution

m_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx

where

f(x) = (1/√(2π)) e^{−x²/2}

thus

m_X(t) = ∫_{−∞}^{∞} e^{tx} (1/√(2π)) e^{−x²/2} dx = ∫_{−∞}^{∞} (1/√(2π)) e^{−x²/2 + tx} dx

We will use

∫_{−∞}^{∞} [ 1 / ( √(2π) b ) ] e^{ −(x−a)²/(2b²) } dx = 1

Completing the square:

−(x² − 2tx)/2 = −(x² − 2tx + t²)/2 + t²/2 = −(x − t)²/2 + t²/2

so

m_X(t) = e^{t²/2} ∫_{−∞}^{∞} (1/√(2π)) e^{−(x−t)²/2} dx = e^{t²/2}
Note:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …

so

m_X(t) = e^{t²/2} = 1 + (t²/2) + (t²/2)²/2! + (t²/2)³/3! + …
       = 1 + t²/2 + t⁴/(2² 2!) + t⁶/(2³ 3!) + … + t^{2m}/(2^m m!) + …

Also

m_X(t) = 1 + μ₁ t + (μ₂/2!) t² + (μ₃/3!) t³ + …

where μ_k = the k-th moment = ∫_{−∞}^{∞} x^k f(x) dx.

Equating coefficients of t^k, we get μ_k = 0 if k is odd, and for k = 2m

μ_{2m} / (2m)! = 1 / (2^m m!),   i.e.   μ_{2m} = (2m)! / (2^m m!)

hence μ₁ = 0, μ₂ = 1, μ₃ = 0, μ₄ = 3.
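These moment values are easy to confirm by simulation (sample size is an arbitrary choice):

```python
import random

# Monte Carlo check of the standard normal moments derived from the MGF:
# mu_2 = 1 and mu_4 = 3, while the odd moments vanish.
random.seed(7)
n = 400_000
zs = [random.gauss(0.0, 1.0) for _ in range(n)]

m2 = sum(z**2 for z in zs) / n   # expect 1
m3 = sum(z**3 for z in zs) / n   # expect 0
m4 = sum(z**4 for z in zs) / n   # expect 3
```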
Using moment generating functions to find the distribution of functions of random variables

Example
Suppose that X has a normal distribution with mean μ and standard deviation σ.
Find the distribution of Y = aX + b.

Solution:

m_X(t) = e^{ μt + σ²t²/2 }

m_{aX+b}(t) = e^{bt} m_X(at) = e^{bt} e^{ μ(at) + σ²(at)²/2 } = e^{ (aμ + b)t + a²σ²t²/2 }

= the moment generating function of the normal distribution with mean aμ + b and variance a²σ².
Thus Y = aX + b has a normal distribution with mean aμ + b and variance a²σ².
Special case: the z transformation

Z = (X − μ)/σ = (1/σ) X + (−μ/σ) = aX + b

μ_Z = aμ + b = (1/σ) μ + (−μ/σ) = 0

σ_Z² = a²σ² = (1/σ)² σ² = 1

Thus Z has a standard normal distribution.
Example
Suppose that X and Y are independent, each having a normal distribution with means μ_X and μ_Y and standard deviations σ_X and σ_Y.
Find the distribution of S = X + Y.

Solution:

m_X(t) = e^{ μ_X t + σ_X² t²/2 }   and   m_Y(t) = e^{ μ_Y t + σ_Y² t²/2 }

Now

m_{X+Y}(t) = m_X(t) m_Y(t) = e^{ μ_X t + σ_X² t²/2 } e^{ μ_Y t + σ_Y² t²/2 }

or

m_{X+Y}(t) = e^{ (μ_X + μ_Y) t + (σ_X² + σ_Y²) t²/2 }

= the moment generating function of the normal distribution with mean μ_X + μ_Y and variance σ_X² + σ_Y².
Thus S = X + Y has a normal distribution with mean μ_X + μ_Y and variance σ_X² + σ_Y².
Example
Suppose that X and Y are independent, each having a normal distribution with means μ_X and μ_Y and standard deviations σ_X and σ_Y.
Find the distribution of L = aX + bY.

Solution:

m_X(t) = e^{ μ_X t + σ_X² t²/2 }   and   m_Y(t) = e^{ μ_Y t + σ_Y² t²/2 }

Now

m_{aX+bY}(t) = m_{aX}(t) m_{bY}(t) = m_X(at) m_Y(bt)
            = e^{ μ_X(at) + σ_X²(at)²/2 } e^{ μ_Y(bt) + σ_Y²(bt)²/2 }

or

m_{aX+bY}(t) = e^{ (aμ_X + bμ_Y) t + (a²σ_X² + b²σ_Y²) t²/2 }

= the moment generating function of the normal distribution with mean aμ_X + bμ_Y and variance a²σ_X² + b²σ_Y².
Thus L = aX + bY has a normal distribution with mean aμ_X + bμ_Y and variance a²σ_X² + b²σ_Y².

Special case: a = +1 and b = −1.
Thus X − Y has a normal distribution with mean μ_X − μ_Y and variance

(1)² σ_X² + (−1)² σ_Y² = σ_X² + σ_Y²
Example (extension to n independent RV’s)
Suppose that X1, X2, …, Xn are independent, each having a normal distribution with mean μ_i and standard deviation σ_i (for i = 1, 2, …, n).
Find the distribution of L = a1 X1 + a2 X2 + … + an Xn.

Solution:

m_{Xi}(t) = e^{ μ_i t + σ_i² t²/2 }   (for i = 1, 2, …, n)

Now

m_{a1 X1 + … + an Xn}(t) = m_{a1 X1}(t) ⋯ m_{an Xn}(t) = m_{X1}(a1 t) ⋯ m_{Xn}(an t)
                        = e^{ μ_1(a1 t) + σ_1²(a1 t)²/2 } ⋯ e^{ μ_n(an t) + σ_n²(an t)²/2 }

or

m_{a1 X1 + … + an Xn}(t) = e^{ (a1 μ_1 + … + an μ_n) t + (a1² σ_1² + … + an² σ_n²) t²/2 }

= the moment generating function of the normal distribution with mean a1 μ_1 + … + an μ_n and variance a1² σ_1² + … + an² σ_n².
Thus L = a1 X1 + … + an Xn has a normal distribution with mean a1 μ_1 + … + an μ_n and variance a1² σ_1² + … + an² σ_n².
Special case:

a1 = a2 = … = an = 1/n
μ_1 = μ_2 = … = μ_n = μ
σ_1² = σ_2² = … = σ_n² = σ²

In this case X1, X2, …, Xn is a sample from a normal distribution with mean μ and standard deviation σ, and

L = (1/n)(X1 + X2 + … + Xn) = x̄ = the sample mean

Thus

x̄ = a1 x1 + … + an xn = (1/n) x1 + … + (1/n) xn

has a normal distribution with mean

μ_x̄ = a1 μ_1 + … + an μ_n = (1/n) μ + … + (1/n) μ = μ

and variance

σ_x̄² = a1² σ_1² + … + an² σ_n² = (1/n)² σ² + … + (1/n)² σ² = n (1/n)² σ² = σ²/n
Summary
If x1, x2, …, xn is a sample from a normal distribution with mean μ and standard deviation σ, then x̄ = the sample mean has a normal distribution with mean

μ_x̄ = μ

and variance

σ_x̄² = σ²/n   ( standard deviation σ_x̄ = σ/√n )
[Figure: the sampling distribution of x̄ compared with the population distribution.]
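The summary can be verified directly by simulating many sample means (μ, σ, and n below are assumed illustration values): their empirical mean and variance should be close to μ and σ²/n:

```python
import random

# Sampling distribution of the mean: for samples of size n from N(mu, sigma^2),
# the sample mean should be N(mu, sigma^2 / n).
random.seed(8)
mu, sigma, n, reps = 50.0, 10.0, 25, 50_000
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps                                 # expect mu = 50
var_means = sum((m - grand_mean) ** 2 for m in means) / reps   # expect sigma^2/n = 4
```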
The Law of Large Numbers
Suppose x1, x2, …, xn is a sample (independent identically distributed – i.i.d.) from a distribution with mean μ. Let

x̄ = the sample mean

Then

P[ |x̄ − μ| < ε ] → 1 as n → ∞,   for all ε > 0.

Proof: Previously we used Tchebychev’s theorem; that proof assumes σ (σ²) is finite.

Proof (using moment generating functions):
We will use the following fact:
Let m1(t), m2(t), … denote a sequence of moment generating functions corresponding to the sequence of distribution functions F1(x), F2(x), …. Let m(t) be a moment generating function corresponding to the distribution function F(x). Then if

lim_{i→∞} m_i(t) = m(t) for all t in an interval about 0,

then lim_{i→∞} F_i(x) = F(x) for all x.
Let x1, x2, … denote a sequence of independent random variables coming from a distribution with moment generating function m(t) and distribution function F(x).
Let S_n = x1 + x2 + … + xn. Then

m_{Sn}(t) = m_{x1}(t) m_{x2}(t) ⋯ m_{xn}(t) = [ m(t) ]ⁿ

Now x̄ = (x1 + x2 + … + xn)/n = S_n/n, so

m_{x̄}(t) = m_{(1/n)Sn}(t) = m_{Sn}(t/n) = [ m(t/n) ]ⁿ

Now

ln m_{x̄}(t) = ln [ m(t/n) ]ⁿ = n ln m(t/n) = t · ln m(u)/u,   where u = t/n.

Thus

lim_{n→∞} ln m_{x̄}(t) = lim_{u→0} t ln m(u)/u
                      = t lim_{u→0} [ m′(u)/m(u) ] / 1     using L’Hôpital’s rule
                      = t m′(0)/m(0) = tμ

Thus lim_{n→∞} m_{x̄}(t) = e^{tμ}.

m(t) = e^{tμ} is the moment generating function of a random variable that takes on the value μ with probability 1, i.e.

p(x) = 1 if x = μ;   0 if x ≠ μ

with distribution function

F(x) = 0 for x < μ;   1 for x ≥ μ

and lim_{n→∞} F_{x̄}(x) = F(x) for all values of x.
Now

P[ |x̄ − μ| ≤ ε ] = P[ μ − ε ≤ x̄ ≤ μ + ε ]
               = F_{x̄}(μ + ε) − F_{x̄}(μ − ε)
               → F(μ + ε) − F(μ − ε) = 1 − 0 = 1 as n → ∞, if ε > 0

since F(x) = 0 for x < μ and 1 for x ≥ μ.
Q.E.D.
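The law of large numbers is easy to see numerically. The sketch below (an Exponential population with λ = 0.5, hence mean μ = 2, is an arbitrary choice) records how far the sample mean sits from μ at several sample sizes:

```python
import random

# Illustration of the law of large numbers: |xbar - mu| shrinks as n grows.
random.seed(9)
mu = 2.0  # population mean, equal to 1/lambda for Exponential(0.5)

def sample_mean(n):
    return sum(random.expovariate(0.5) for _ in range(n)) / n

devs = {n: abs(sample_mean(n) - mu) for n in (100, 10_000, 1_000_000)}
```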
The Central Limit Theorem
If x1, x2, …, xn is a sample from a distribution with mean μ and standard deviation σ, then if n is large, x̄ = the sample mean has (approximately) a normal distribution with mean

μ_x̄ = μ

and variance

σ_x̄² = σ²/n   ( standard deviation σ_x̄ = σ/√n )
Proof (using moment generating functions):
We will use the following fact:
Let m1(t), m2(t), … denote a sequence of moment generating functions corresponding to the sequence of distribution functions F1(x), F2(x), …. Let m(t) be a moment generating function corresponding to the distribution function F(x). Then if

lim_{i→∞} m_i(t) = m(t) for all t in an interval about 0,

then lim_{i→∞} F_i(x) = F(x) for all x.

Let x1, x2, … denote a sequence of independent random variables coming from a distribution with moment generating function m(t) and distribution function F(x).
Let S_n = x1 + x2 + … + xn. Then

m_{Sn}(t) = m_{x1}(t) ⋯ m_{xn}(t) = [ m(t) ]ⁿ

and with x̄ = S_n/n,

m_{x̄}(t) = m_{Sn}(t/n) = [ m(t/n) ]ⁿ
Let

z = (x̄ − μ) / (σ/√n) = (√n/σ) x̄ − √n μ/σ

then

m_z(t) = e^{ −(√n μ/σ) t } m_{x̄}( (√n/σ) t ) = e^{ −(√n μ/σ) t } [ m( t/(σ√n) ) ]ⁿ

and

ln m_z(t) = −(√n μ/σ) t + n ln m( t/(σ√n) )

Let u = t/(σ√n), so that √n = t/(σu) and n = t²/(σ²u²). Then

ln m_z(t) = −(t² μ)/(σ² u) + ( t²/(σ² u²) ) ln m(u)
          = (t²/σ²) · [ ln m(u) − μu ] / u²
Now

lim_{n→∞} ln m_z(t) = lim_{u→0} ln m_z(t)
 = (t²/σ²) lim_{u→0} [ ln m(u) − μu ] / u²
 = (t²/σ²) lim_{u→0} [ m′(u)/m(u) − μ ] / (2u)                          using L’Hôpital’s rule
 = (t²/σ²) lim_{u→0} [ ( m(u) m″(u) − (m′(u))² ) / m(u)² ] / 2          using L’Hôpital’s rule again
 = (t²/σ²) · [ m(0) m″(0) − (m′(0))² ] / 2
 = (t²/σ²) · [ E(x_i²) − (E(x_i))² ] / 2
 = (t²/σ²) · σ²/2 = t²/2

thus lim_{n→∞} ln m_z(t) = t²/2   and   lim_{n→∞} m_z(t) = e^{t²/2}
Now m  t   e
t2
2
Is the moment generating function of the standard
normal distribution
Thus the limiting distribution of z is the standard
normal distribution
i.e. lim Fz  x  
n
x


1
e
2
u2

2
du
Q.E.D.
The Central Limit Theorem illustrated
If x1, x2 are independent from the uniform distribution from 0 to 1, find the distribution of x̄ = the sample mean. Let

S = x1 + x2   and   x̄ = S/2 = (x1 + x2)/2

Now

G(s) = P[S ≤ s] = P[x1 + x2 ≤ s]
     = 0 for s < 0;   s²/2 for 0 ≤ s ≤ 1;   1 − (2 − s)²/2 for 1 ≤ s ≤ 2;   1 for s > 2.

g(s) = G′(s) = s for 0 ≤ s ≤ 1;   2 − s for 1 ≤ s ≤ 2;   0 otherwise.

Now:
S
x    12  S  aS
2
The density of x is:
dS
h x   g S 
 g  2x  2
dx
0  2x  1  2x
0  x  12
 2x


 2  2 x 1  2 x  2  2 1  x  12  x  1
 0
 0
otherwise
otherwise


[Figure: the density of x̄ for samples of size n = 1, 2, 3 from the uniform distribution on (0, 1).]
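The n = 2 case above can be simulated directly: the triangular density of x̄ has mean ½ and variance (1/12)/2 = 1/24:

```python
import random

# Simulation of the illustrated case: the mean of two Uniform(0,1) draws
# has the triangular density derived above, with mean 1/2 and variance 1/24.
random.seed(10)
reps = 200_000
xbar = [(random.random() + random.random()) / 2 for _ in range(reps)]

mean_xbar = sum(xbar) / reps                                # expect 0.5
var_xbar = sum((x - mean_xbar) ** 2 for x in xbar) / reps   # expect 1/24
```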
Distributions of functions of Random Variables
Gamma distribution, χ² distribution, Exponential distribution

Theorem
Let X and Y denote independent random variables each having a gamma distribution with parameters (λ, α1) and (λ, α2). Then W = X + Y has a gamma distribution with parameters (λ, α1 + α2).

Proof:

m_X(t) = [ λ/(λ − t) ]^{α1}   and   m_Y(t) = [ λ/(λ − t) ]^{α2}

Therefore

m_{X+Y}(t) = m_X(t) m_Y(t) = [ λ/(λ − t) ]^{α1} [ λ/(λ − t) ]^{α2} = [ λ/(λ − t) ]^{α1 + α2}

Recognizing that this is the moment generating function of the gamma distribution with parameters (λ, α1 + α2), we conclude that W = X + Y has a gamma distribution with parameters (λ, α1 + α2).
Theorem (extension to n RV’s)
Let x1, x2, …, xn denote n independent random variables each having a gamma distribution with parameters (λ, αi), i = 1, 2, …, n.
Then W = x1 + x2 + … + xn has a gamma distribution with parameters (λ, α1 + α2 + … + αn).

Proof:

m_{xi}(t) = [ λ/(λ − t) ]^{αi}   i = 1, 2, …, n

Therefore

m_{x1 + x2 + … + xn}(t) = m_{x1}(t) m_{x2}(t) ⋯ m_{xn}(t)
                       = [ λ/(λ − t) ]^{α1} ⋯ [ λ/(λ − t) ]^{αn} = [ λ/(λ − t) ]^{α1 + α2 + … + αn}

Recognizing that this is the moment generating function of the gamma distribution with parameters (λ, α1 + α2 + … + αn), we conclude that W = x1 + x2 + … + xn has a gamma distribution with parameters (λ, α1 + α2 + … + αn).
Theorem
Suppose that x is a random variable having a gamma distribution with parameters (λ, α).
Then W = ax has a gamma distribution with parameters (λ/a, α).

Proof:

m_x(t) = [ λ/(λ − t) ]^α

then

m_{ax}(t) = m_x(at) = [ λ/(λ − at) ]^α = [ (λ/a) / (λ/a − t) ]^α
Special Cases
1. Let X and Y be independent random variables having an exponential distribution with parameter λ; then X + Y has a gamma distribution with α = 2 and λ.
2. Let x1, x2, …, xn be independent random variables having an exponential distribution with parameter λ; then S = x1 + x2 + … + xn has a gamma distribution with α = n and λ.
3. Let x1, x2, …, xn be independent random variables having an exponential distribution with parameter λ; then

x̄ = S/n = (x1 + … + xn)/n

has a gamma distribution with α = n and nλ.
Distribution of x̄ — population: exponential distribution

[Figure: densities of x̄ for the exponential population and sample sizes n = 4, 10, 15, 20 — another illustration of the central limit theorem.]
Special Cases – continued
4. Let X and Y be independent random variables having χ² distributions with ν1 and ν2 degrees of freedom respectively; then X + Y has a χ² distribution with ν1 + ν2 degrees of freedom.
5. Let x1, x2, …, xn be independent random variables having χ² distributions with ν1, ν2, …, νn degrees of freedom respectively; then x1 + x2 + … + xn has a χ² distribution with ν1 + … + νn degrees of freedom.
Both of these properties follow from the fact that a χ² random variable with ν degrees of freedom is a gamma random variable with λ = ½ and α = ν/2.
Recall
If z has a standard normal distribution, then z² has a χ² distribution with 1 degree of freedom.
Thus if z1, z2, …, zn are independent random variables each having a standard normal distribution, then

U = z1² + z2² + … + zn²

has a χ² distribution with n degrees of freedom.
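This fact is easy to confirm by simulation (n = 4 degrees of freedom is an arbitrary choice); a χ² distribution with n degrees of freedom has mean n and variance 2n:

```python
import random

# Monte Carlo check: U = z1^2 + ... + zn^2 for independent standard normals
# should be chi-square with n degrees of freedom: mean n, variance 2n.
random.seed(11)
n, reps = 4, 200_000
u = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) for _ in range(reps)]

mean_u = sum(u) / reps                             # expect n = 4
var_u = sum((x - mean_u) ** 2 for x in u) / reps   # expect 2n = 8
```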
Theorem
Suppose that U1 and U2 are independent random variables and that U = U1 + U2. Suppose that U1 and U have χ² distributions with ν1 and ν degrees of freedom respectively (ν1 < ν).
Then U2 has a χ² distribution with ν2 = ν − ν1 degrees of freedom.

Proof:

m_{U1}(t) = [ ½/(½ − t) ]^{ν1/2}   and   m_U(t) = [ ½/(½ − t) ]^{ν/2}

Also m_U(t) = m_{U1}(t) m_{U2}(t). Hence

m_{U2}(t) = m_U(t) / m_{U1}(t) = [ ½/(½ − t) ]^{ν/2} / [ ½/(½ − t) ]^{ν1/2} = [ ½/(½ − t) ]^{(ν − ν1)/2}

Q.E.D.
Tables for Standard Normal
distribution