Convergence of a sequence of random variables
Let $X_1, X_2, \ldots, X_n$ be a sequence of $n$ independent and identically distributed random
variables. Suppose we want to estimate the mean of the random variable on the basis of
the observed data by means of the relation
$$\mu_n = \frac{1}{n}\sum_{i=1}^{n} X_i .$$
How closely does $\mu_n$ represent the true mean $\mu_X$ as $n$ is increased? How do we
measure the closeness between $\mu_n$ and $\mu_X$?
Notice that $\mu_n$ is a random variable. What do we mean by the statement "$\mu_n$ converges
to $\mu_X$"?
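To make the question concrete, here is a small Python sketch (not part of the original notes) that computes $\mu_n$ for increasing $n$ from simulated data; the normal population with mean 5 and standard deviation 2 is an arbitrary assumption made only for illustration.

```python
import numpy as np

# Illustrative sketch: watch the sample mean mu_n approach the true mean
# mu_X as n grows. The normal population with mean 5 and std 2 is an
# arbitrary choice made only for this demonstration.
rng = np.random.default_rng(0)
true_mean = 5.0
samples = rng.normal(loc=true_mean, scale=2.0, size=100_000)

for n in (10, 100, 1_000, 10_000, 100_000):
    mu_n = samples[:n].mean()   # mu_n = (1/n) * sum_{i=1}^{n} X_i
    print(f"n = {n:6d}   mu_n = {mu_n:.4f}   |mu_n - mu_X| = {abs(mu_n - true_mean):.4f}")
```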
• Consider a deterministic sequence of real numbers $x_1, x_2, \ldots, x_n, \ldots$. The sequence
converges to a limit $x$ if, corresponding to every $\epsilon > 0$, we can find a positive
integer $N$ such that $|x - x_n| < \epsilon$ for $n > N$.
For example, the sequence $1, \tfrac{1}{2}, \ldots, \tfrac{1}{n}, \ldots$ converges to the number 0, because for any
$\epsilon > 0$ we can choose a positive integer $N > \tfrac{1}{\epsilon}$ such that
$|0 - x_n| = \tfrac{1}{n} < \epsilon$ for $n > N$.
• The Cauchy criterion gives the condition for convergence of a sequence without
actually finding the limit. The sequence $x_1, x_2, \ldots, x_n, \ldots$ converges if and only if,
for every $\epsilon > 0$, there exists a positive integer $N$ such that
$|x_{n+m} - x_n| < \epsilon$ for all $n > N$ and all $m > 0$.
Convergence of a random sequence $X_1, X_2, \ldots, X_n, \ldots$ cannot be defined as above. Note
that for each $s \in S$, $X_1(s), X_2(s), \ldots, X_n(s), \ldots$ represents a sequence of numbers. Thus
$X_1, X_2, \ldots, X_n, \ldots$ represents a family of sequences of numbers. Convergence of a random
sequence is therefore defined using different criteria. Five of these criteria are explained
below.
Convergence Everywhere
A sequence of random variables is said to converge everywhere to $X$ if
$|X(s) - X_n(s)| \to 0$ as $n \to \infty$ for every $s \in S$.
Note here that the sequence of numbers for each sample point is convergent.
Almost sure (a.s.) convergence or convergence with probability 1
A random sequence $X_1, X_2, \ldots, X_n, \ldots$ may not converge for every $s \in S$.
Consider the event $\{s \mid X_n(s) \to X(s)\}$.
The sequence $X_1, X_2, \ldots, X_n, \ldots$ is said to converge to $X$ almost surely or with probability 1
if
$$P\{s \mid X_n(s) \to X(s) \text{ as } n \to \infty\} = 1,$$
or equivalently, for every $\epsilon > 0$,
$$P\{s \mid |X_n(s) - X(s)| < \epsilon \text{ for all } n \ge N\} \to 1 \quad \text{as } N \to \infty .$$
We write $X_n \xrightarrow{a.s.} X$ in this case.
One important application is the Strong Law of Large Numbers (SLLN):
If $X_1, X_2, \ldots, X_n, \ldots$ are independent and identically distributed random variables with a
finite mean $\mu_X$, then
$$\frac{1}{n}\sum_{i=1}^{n} X_i \to \mu_X \quad \text{with probability 1 as } n \to \infty .$$
Remarks:
• $\mu_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ is called the sample mean.
• The strong law of large numbers states that the sample mean converges to the true
mean as the sample size increases.
• The SLLN is one of the fundamental theorems of probability. There is a weaker
version of the law that we will discuss later.
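The SLLN can be visualised by simulation. The sketch below is an illustration, not a proof: it follows the running sample mean along one simulated path, and the exponential(1) population (true mean 1) is an arbitrary assumption.

```python
import numpy as np

# Illustrative sketch of the SLLN: along one simulated sample path the running
# sample mean of iid exponential(1) variables (true mean 1, an arbitrary choice
# for this demo) settles down towards the true mean as n grows.
rng = np.random.default_rng(1)
n_max = 200_000
xs = rng.exponential(scale=1.0, size=n_max)
running_mean = np.cumsum(xs) / np.arange(1, n_max + 1)

for n in (100, 1_000, 10_000, 100_000, 200_000):
    print(f"n = {n:6d}   sample mean = {running_mean[n - 1]:.4f}")
```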
Convergence in mean-square sense
A random sequence $X_1, X_2, \ldots, X_n, \ldots$ is said to converge in the mean-square sense (m.s.) to
a random variable $X$ if
$$E(X_n - X)^2 \to 0 \quad \text{as } n \to \infty .$$
$X$ is called the mean-square limit of the sequence and we write
$$\text{l.i.m. } X_n = X$$
where l.i.m. means limit in mean-square. We also write
$$X_n \xrightarrow{m.s.} X .$$
• The following Cauchy criterion gives the condition for m.s. convergence of a
random sequence without actually finding the limit. The sequence
$X_1, X_2, \ldots, X_n, \ldots$ converges in m.s. if and only if, for every $\epsilon > 0$, there exists a
positive integer $N$ such that
$$E|X_{n+m} - X_n|^2 < \epsilon \quad \text{for all } n > N \text{ and all } m > 0 .$$
Example:
If $X_1, X_2, \ldots, X_n, \ldots$ are iid random variables with mean $\mu_X$ and finite variance $\sigma_X^2$, then
$$\frac{1}{n}\sum_{i=1}^{n} X_i \to \mu_X \quad \text{in the mean-square sense as } n \to \infty .$$
We have to show that
$$\lim_{n \to \infty} E\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu_X\right)^2 = 0 .$$
Now,
$$E\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu_X\right)^2 = E\left(\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu_X)\right)^2$$
$$= \frac{1}{n^2}\sum_{i=1}^{n} E(X_i - \mu_X)^2 + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \ne i}}^{n} E(X_i - \mu_X)(X_j - \mu_X)$$
$$= \frac{n\sigma_X^2}{n^2} + 0 \quad \text{(because of independence)}$$
$$= \frac{\sigma_X^2}{n} .$$
$$\therefore \lim_{n \to \infty} E\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu_X\right)^2 = 0 .$$
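The rate $E(\mu_n - \mu_X)^2 = \sigma_X^2 / n$ obtained above can be checked empirically. In the sketch below, the uniform(0, 1) population and the number of replications are arbitrary assumptions made for illustration.

```python
import numpy as np

# Empirical check of E(mu_n - mu_X)^2 = sigma_X^2 / n for iid samples.
# Uniform(0, 1) population: mu_X = 0.5, sigma_X^2 = 1/12 (an arbitrary choice).
rng = np.random.default_rng(2)
mu_X, var_X = 0.5, 1.0 / 12.0
n_reps = 20_000   # number of independent replications of the sample mean

for n in (5, 50, 500):
    data = rng.random((n_reps, n))        # n_reps independent samples of size n
    mu_n = data.mean(axis=1)
    mse = np.mean((mu_n - mu_X) ** 2)     # Monte Carlo estimate of E(mu_n - mu_X)^2
    print(f"n = {n:4d}   empirical MSE = {mse:.6f}   sigma_X^2 / n = {var_X / n:.6f}")
```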
Convergence in probability
Associated with the sequence of random variables $X_1, X_2, \ldots, X_n, \ldots$, we can define a
sequence of probabilities $P\{|X_n - X| > \epsilon\}$, $n = 1, 2, \ldots$, for every $\epsilon > 0$.
The sequence $X_1, X_2, \ldots, X_n, \ldots$ is said to converge to $X$ in probability if this sequence
of probabilities converges to 0, that is,
$$P\{|X_n - X| > \epsilon\} \to 0 \quad \text{as } n \to \infty$$
for every $\epsilon > 0$. We write $X_n \xrightarrow{P} X$ to denote convergence in probability of the
sequence of random variables $X_1, X_2, \ldots, X_n, \ldots$ to the random variable $X$.
If a sequence is convergent in the mean-square sense, then it is also convergent in probability, because
$$P\{|X_n - X|^2 > \epsilon^2\} \le E(X_n - X)^2 / \epsilon^2 \quad \text{(Markov inequality)} .$$
We have
$$P\{|X_n - X| > \epsilon\} \le E(X_n - X)^2 / \epsilon^2 .$$
If $E(X_n - X)^2 \to 0$ as $n \to \infty$ (mean-square convergence), then
$$P\{|X_n - X| > \epsilon\} \to 0 \quad \text{as } n \to \infty .$$
Example:
Suppose $\{X_n\}$ is a sequence of random variables with
$$P\{X_n = 0\} = 1 - \frac{1}{n} \quad \text{and} \quad P\{X_n = 1\} = \frac{1}{n} .$$
Clearly, for any $\epsilon$ with $0 < \epsilon < 1$,
$$P\{|X_n - 0| > \epsilon\} = P\{X_n = 1\} = \frac{1}{n} \to 0 \quad \text{as } n \to \infty .$$
Therefore $X_n \xrightarrow{P} 0$.
Thus the above sequence converges to a constant in probability.
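A simulation sketch of this example (illustrative only; the number of draws per $n$ is an arbitrary choice): the estimated $P\{|X_n - 0| > \epsilon\}$ tracks $1/n$ and shrinks as $n$ grows.

```python
import numpy as np

# Illustrative sketch: X_n = 1 with probability 1/n and X_n = 0 otherwise.
# Estimate P{|X_n - 0| > eps} from repeated draws; it should be close to 1/n.
rng = np.random.default_rng(3)
eps = 0.5
n_reps = 100_000   # draws per n (an arbitrary choice)

for n in (2, 10, 100, 1_000):
    x_n = (rng.random(n_reps) < 1.0 / n).astype(float)   # realisations of X_n
    prob = np.mean(np.abs(x_n - 0.0) > eps)
    print(f"n = {n:5d}   estimated P{{|X_n| > {eps}}} = {prob:.4f}   1/n = {1.0 / n:.4f}")
```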
Remark:
Convergence in probability is also called stochastic convergence.
Weak Law of Large Numbers
If $X_1, X_2, \ldots, X_n, \ldots$ are independent and identically distributed random variables with
finite mean $\mu_X$ and finite variance $\sigma_X^2$, and
$$\mu_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$
is the sample mean, then $\mu_n \xrightarrow{P} \mu_X$ as $n \to \infty$.
We have
$$E\mu_n = \frac{1}{n}\sum_{i=1}^{n} EX_i = \mu_X$$
and
$$E(\mu_n - \mu_X)^2 = \frac{\sigma_X^2}{n} \quad \text{(as shown above)} .$$
Hence
$$P\{|\mu_n - \mu_X| > \epsilon\} \le \frac{E(\mu_n - \mu_X)^2}{\epsilon^2} = \frac{\sigma_X^2}{n\epsilon^2}$$
$$\therefore P\{|\mu_n - \mu_X| > \epsilon\} \to 0 \quad \text{as } n \to \infty .$$
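The bound $\sigma_X^2 / (n\epsilon^2)$ used in this proof can be compared with a simulated estimate of $P\{|\mu_n - \mu_X| > \epsilon\}$; the uniform(0, 1) population and the value of $\epsilon$ below are arbitrary illustrative assumptions.

```python
import numpy as np

# Compare the Chebyshev-type bound sigma_X^2 / (n * eps^2) with a simulated
# estimate of P{|mu_n - mu_X| > eps} for iid uniform(0, 1) data
# (mu_X = 0.5, sigma_X^2 = 1/12; population and eps are arbitrary choices).
rng = np.random.default_rng(4)
mu_X, var_X = 0.5, 1.0 / 12.0
eps = 0.05
n_reps = 50_000

for n in (10, 100, 1_000):
    mu_n = rng.random((n_reps, n)).mean(axis=1)
    est = np.mean(np.abs(mu_n - mu_X) > eps)
    bound = var_X / (n * eps ** 2)
    print(f"n = {n:5d}   estimated probability = {est:.4f}   bound = {bound:.4f}")
```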
Convergence in distribution
Consider the random sequence $X_1, X_2, \ldots, X_n, \ldots$ and a random variable $X$. Suppose
$F_{X_n}(x)$ and $F_X(x)$ are the distribution functions of $X_n$ and $X$ respectively. The sequence
is said to converge to $X$ in distribution if
$$F_{X_n}(x) \to F_X(x) \quad \text{as } n \to \infty$$
for all $x$ at which $F_X(x)$ is continuous. Here the two distribution functions eventually
coincide. We write $X_n \xrightarrow{d} X$ to denote convergence in distribution of the random
sequence $X_1, X_2, \ldots, X_n, \ldots$ to the random variable $X$.
Example: Suppose $X_1, X_2, \ldots, X_n, \ldots$ is a sequence of RVs with each random variable $X_i$
having the uniform density
$$f_{X_i}(x) = \begin{cases} \dfrac{1}{a}, & 0 \le x \le a \\ 0, & \text{otherwise.} \end{cases}$$
Define $Z_n = \max(X_1, X_2, \ldots, X_n)$.
We can show that
$$F_{Z_n}(z) = \begin{cases} 0, & z < 0 \\ \dfrac{z^n}{a^n}, & 0 \le z \le a \\ 1, & \text{otherwise.} \end{cases}$$
Clearly,
$$\lim_{n \to \infty} F_{Z_n}(z) = F_Z(z) = \begin{cases} 0, & z < a \\ 1, & z \ge a . \end{cases}$$
Thus $\{Z_n\}$ converges to $Z$ in distribution, where $Z$ is the constant random variable $Z = a$.
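A simulation sketch of this example (with the arbitrary choice $a = 1$): the empirical distribution function of $Z_n$ agrees with $(z/a)^n$ and piles up near $a$ as $n$ grows, matching the degenerate limiting distribution $F_Z$ above.

```python
import numpy as np

# Illustrative sketch: Z_n = max of n iid uniform(0, a) variables, with a = 1
# chosen arbitrarily. Compare the empirical P{Z_n <= z} with the exact CDF (z/a)^n.
rng = np.random.default_rng(5)
a = 1.0
n_reps = 100_000

for n in (2, 10, 50):
    z_n = rng.uniform(0.0, a, size=(n_reps, n)).max(axis=1)
    for z in (0.5, 0.9, 0.99):
        emp = np.mean(z_n <= z)
        exact = (z / a) ** n
        print(f"n = {n:2d}  z = {z:4.2f}   empirical CDF = {emp:.4f}   (z/a)^n = {exact:.4f}")
```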
Relation between Types of Convergence
The modes of convergence introduced above are related as follows: almost sure convergence
($X_n \xrightarrow{a.s.} X$) implies convergence in probability ($X_n \xrightarrow{P} X$);
mean-square convergence ($X_n \xrightarrow{m.s.} X$) also implies convergence in probability,
as shown earlier; and convergence in probability in turn implies convergence in
distribution ($X_n \xrightarrow{d} X$).