Lecture 10: Characteristic Functions

1. Definition of characteristic functions
1.1 Complex random variables
1.2 Definition and basic properties of characteristic functions
1.3 Examples
1.4 Inversion formulas
2. Applications
2.1 Characteristic function for sum of independent random variables
2.2 Characteristic functions and moments of random variables
2.3 Continuity theorem
2.4 Weak law of large numbers
2.5 Central Limit theorem
2.6 Law of small numbers
1. Definition of characteristic functions
1.1 Complex random variables
(1) A complex number has the form u = a + ib, where a, b are real numbers and i is the so-called imaginary unit, with the property i² = −1.
(2) The sum of complex numbers u = a + ib and v = c + id is computed according to the rule u + v = (a + c) + i(b + d), and the product as uv = (ac − bd) + i(ad + bc).
(3) For a complex number u = a + ib, the complex conjugate is defined as ū = a − ib, and the modulus is defined by the formula |u|² = uū = a² + b². Note that |uv| = |u||v| and |u + v| ≤ |u| + |v|.
(4) A basic role in what follows is played by Euler's formula

e^{it} = cos t + i sin t, t ∈ R¹.

Note that |e^{it}| = √(cos² t + sin² t) = 1.
A complex random variable is defined as Z = X + iY where
X = X(ω) and Y = Y (ω) are usual real-valued random variables defined on some probability space < Ω, F, P >.
The distribution of a complex random variable Z is determined
by the distribution of the random vector (X, Y ).
The expectation of the complex random variable Z = X + iY is
defined as the complex number EZ = EX + iEY .
Two complex random variables Z1 = X1 +iY1 and Z2 = X2 +iY2
are independent if the random vectors (X1 , Y1 ) and (X2 , Y2 ) are
independent.
(5) The product of two complex random variables is Z1Z2 = (X1X2 − Y1Y2) + i(X1Y2 + Y1X2), so EZ1Z2 = E(X1X2 − Y1Y2) + iE(X1Y2 + Y1X2). If the random variables Z1 and Z2 are independent, then EZ1Z2 = EZ1EZ2.
——————————
EZ1Z2 = E(X1X2 − Y1Y2) + iE(X1Y2 + Y1X2) = (EX1EX2 − EY1EY2) + i(EX1EY2 + EY1EX2) = EZ1EZ2.
——————————
(6) If |X| ≤ M, then |EX| ≤ M.
——————————
Let X = Y + iZ. Then |EX|² = |EY + iEZ|² = (EY)² + (EZ)² ≤ EY² + EZ² = E|X|² ≤ M².
——————————
(7) |EX| ≤ E|X|.
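These rules for expectations of complex random variables are easy to check by simulation. The sketch below (assuming NumPy is available; the normal components and their means are arbitrary choices) estimates EZ = EX + iEY and EZ1Z2 by sample averages.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Z1 = X1 + iY1 and Z2 = X2 + iY2 with independent real-valued components.
z1 = rng.normal(1.0, 1.0, n) + 1j * rng.normal(2.0, 1.0, n)
z2 = rng.normal(-1.0, 1.0, n) + 1j * rng.normal(0.5, 1.0, n)

# EZ = EX + iEY, estimated by the sample mean.
print(z1.mean())                 # close to 1 + 2i

# For independent Z1, Z2: E[Z1 Z2] = EZ1 * EZ2.
print((z1 * z2).mean())          # close to the product below
print(z1.mean() * z2.mean())
```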
1.2 Definition and basic properties of characteristic functions

Definition 10.1. Let X be a random variable with distribution FX(A) and distribution function FX(x). The characteristic function of the random variable X (of the distribution FX(A) and of the distribution function FX(x)) is the complex-valued function φX(t), defined on the real line as a function of t by the formula

φX(t) = E exp{itX} = ∫_{−∞}^{∞} e^{itx} FX(dx)
= E cos tX + iE sin tX = ∫_{−∞}^{∞} cos tx FX(dx) + i ∫_{−∞}^{∞} sin tx FX(dx).
(1) If X is a discrete random variable with discrete distribution pX(xn) = P(X = xn), n = 0, 1, . . . (Σn pX(xn) = 1), then

φX(t) = E exp{itX} = Σn e^{itxn} pX(xn).
(2) If X is a continuous random variable with probability density function fX(x), then

φX(t) = E exp{itX} = ∫_{−∞}^{∞} e^{itx} fX(x) dx,

where Riemann integration is used if the density fX(x) is Riemann integrable (otherwise Lebesgue integration should be used).
(3) In the latter case the characteristic function is also known as the Fourier transform of the function fX(x).
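To make the definition concrete, the characteristic function can be computed numerically from a density. The sketch below (assuming NumPy is available) approximates φX(t) = ∫ e^{itx} fX(x) dx for the standard normal density by the trapezoidal rule and compares it with the closed form e^{−t²/2} given in the examples of Section 1.3.

```python
import numpy as np

# Grid wide enough that the normal density is negligible at the endpoints.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

def phi(t):
    """Trapezoidal approximation of phi_X(t) = integral of e^{itx} f_X(x) dx."""
    g = np.exp(1j * t * x) * f
    return (g[:-1] + g[1:]).sum() * dx / 2

for t in (0.0, 0.5, 1.0, 2.0):
    print(t, phi(t), np.exp(-t**2 / 2))  # numerical value vs closed form
```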
Theorem 10.1. The characteristic function φX(t) of a random variable X possesses the following basic properties:
(1) φX(t) is a continuous function in t ∈ R¹;
(2) φX(0) = 1;
(3) φX(t) is a positive definite function, i.e., the quadratic form Σ_{1≤j,l≤n} cj c̄l φX(tj − tl) ≥ 0 for any complex numbers c1, . . . , cn, any real t1, . . . , tn, and any n ≥ 1.
——————————
(a) |φX(t + h) − φX(t)| = |Ee^{i(t+h)X} − Ee^{itX}| = |Ee^{itX}(e^{ihX} − 1)| ≤ E|e^{itX}(e^{ihX} − 1)| = E|e^{ihX} − 1|.

(b) |e^{ihx} − 1| ≤ 2 and also |e^{ihx} − 1| ≤ |hx|. Choose A > 0 and get the following estimates:

E|e^{ihX} − 1| ≤ ∫_{|x|>A} |e^{ihx} − 1| FX(dx) + ∫_{−A}^{A} |e^{ihx} − 1| FX(dx)
≤ 2 ∫_{|x|>A} FX(dx) + ∫_{−A}^{A} |hx| FX(dx)
≤ 2P(|X| ≥ A) + hA P(|X| ≤ A) ≤ 2P(|X| ≥ A) + hA.

(c) Choose an arbitrary ε > 0. Then choose A so large that 2P(|X| ≥ A) ≤ ε/2, and then h so small that hA ≤ ε/2. Then the expression on the right-hand side of the estimate in (b) is less than or equal to ε.
(d) φX(0) = Ee^{i·0·X} = E1 = 1.

(e) To prove (3), we write

Σ_{1≤j,l≤n} cj c̄l φX(tj − tl) = Σ_{1≤j,l≤n} cj c̄l Ee^{i(tj − tl)X}
= E (Σ_{1≤j≤n} cj e^{itjX}) (Σ_{1≤l≤n} c̄l e^{−itlX}) = E|Σ_{1≤j≤n} cj e^{itjX}|² ≥ 0.
——————————
Theorem 10.2 (Bochner)**. A complex-valued function φ(t) defined on the real line is the characteristic function of some random variable X, i.e., can be represented in the form φ(t) = Ee^{itX} = ∫_{−∞}^{∞} e^{itx} FX(dx), t ∈ R¹, if and only if it possesses the properties (1)–(3) formulated in Theorem 10.1.

Additional properties of characteristic functions are:
(4) |φX(t)| ≤ 1 for any t ∈ R¹;
(5) φ_{a+bX}(t) = e^{iat} φX(bt).

Indeed:
(6) |e^{itX}| ≡ 1 and, thus, |φX(t)| = |Ee^{itX}| ≤ 1;
(7) φ_{a+bX}(t) = Ee^{it(a+bX)} = e^{iat} Ee^{itbX} = e^{iat} φX(bt).
1.3 Examples
(a) If X ≡ a, where a = const, then φX(t) = e^{iat}.
(1) Binomial random variable. X = B(m, p), m = 1, 2, . . . , 0 ≤ p ≤ 1;
pX(n) = C_m^n p^n (1 − p)^{m−n}, n = 0, 1, . . . , m;
φX(t) = (pe^{it} + (1 − p))^m.
(2) Poisson random variable. X = P(λ), λ > 0;
pX(n) = (λ^n/n!) e^{−λ}, n = 0, 1, . . . ;
φX(t) = e^{λ(e^{it} − 1)}.
(3) Normal random variable. X = N(0, 1):
fX(x) = (1/√(2π)) e^{−x²/2}, x ∈ R¹;
φX(t) = e^{−t²/2}.
Y = m + σX = N(m, σ²), m ∈ R¹, σ² > 0:
fY(x) = (1/(√(2π)σ)) e^{−(x−m)²/(2σ²)}, x ∈ R¹;
φY(t) = e^{itm − σ²t²/2}.
(4) Gamma random variable. X = Gamma(α, β), α, β > 0;
fX(x) = (1/(Γ(α)β^α)) x^{α−1} e^{−x/β}, x ≥ 0, where Γ(α) = ∫_0^{∞} x^{α−1} e^{−x} dx;
φX(t) = (1/(1 − iβt))^α.
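The discrete examples above can be verified by summing the series for φX(t) directly (standard library only; the parameter values are arbitrary choices):

```python
import cmath, math

def cf_poisson_direct(t, lam, nmax=60):
    """phi_X(t) = sum over n of e^{itn} * P(X = n), truncated at nmax."""
    return sum(cmath.exp(1j * t * n) * lam**n / math.factorial(n) * math.exp(-lam)
               for n in range(nmax))

def cf_binomial_direct(t, m, p):
    """phi_X(t) = sum over n of e^{itn} * C(m, n) p^n (1 - p)^(m - n)."""
    return sum(cmath.exp(1j * t * n) * math.comb(m, n) * p**n * (1 - p)**(m - n)
               for n in range(m + 1))

t = 0.7
print(cf_poisson_direct(t, 3.0), cmath.exp(3.0 * (cmath.exp(1j * t) - 1)))
print(cf_binomial_direct(t, 10, 0.3), (0.3 * cmath.exp(1j * t) + 0.7)**10)
```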
1.4 Inversion formulas

F(A) and G(A) are probability measures on the real line;
φ(t) = ∫_{−∞}^{∞} e^{itx} F(dx) and ψ(t) = ∫_{−∞}^{∞} e^{itx} G(dx) are their characteristic functions.

Lemma 10.1 (Parseval identity). The following so-called Parseval identity takes place:

∫_{−∞}^{∞} e^{−ity} φ(t) G(dt) = ∫_{−∞}^{∞} ψ(x − y) F(dx).
——————————
(a) The following equality follows from the definition of a characteristic function:

e^{−ity} φ(t) = ∫_{−∞}^{∞} e^{it(x−y)} F(dx).

(b) Integrating it with respect to the measure G(A) and using Fubini's theorem, we get

∫_{−∞}^{∞} e^{−ity} φ(t) G(dt) = ∫_{−∞}^{∞} [∫_{−∞}^{∞} e^{it(x−y)} F(dx)] G(dt)
= ∫_{−∞}^{∞} [∫_{−∞}^{∞} e^{it(x−y)} G(dt)] F(dx) = ∫_{−∞}^{∞} ψ(x − y) F(dx).
——————————
Theorem 10.3 (inversion). Different characteristic functions correspond to different distributions and vice versa.

This theorem is a corollary of the inversion formula given below.
Theorem 10.4. Let F(x) be a distribution function and CF the set of all points of continuity of this distribution function. Let also φ(t) = ∫_{−∞}^{∞} e^{itx} F(dx) be the characteristic function that corresponds to this distribution function. The following inversion formula takes place:

F(z) = lim_{σ²→0} (1/2π) ∫_{−∞}^{z} [∫_{−∞}^{∞} e^{−ity} φ(t) e^{−t²σ²/2} dt] dy, z ∈ CF.
——————————
(a) Let G(A) be the normal distribution with mean m = 0 and variance 1/σ². Then the Parseval identity takes the following form:

∫_{−∞}^{∞} e^{−ity} φ(t) (σ/√(2π)) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} e^{−(x−y)²/(2σ²)} F(dx),

or, equivalently,

(1/2π) ∫_{−∞}^{∞} e^{−ity} φ(t) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} F(dx).
(b) Integrating with respect to the Lebesgue measure (Riemann integration can be used!) over the interval (−∞, z], we get

(1/2π) ∫_{−∞}^{z} [∫_{−∞}^{∞} e^{−ity} φ(t) e^{−t²σ²/2} dt] dy = ∫_{−∞}^{z} [∫_{−∞}^{∞} (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} F(dx)] dy
= ∫_{−∞}^{∞} [∫_{−∞}^{z} (1/(√(2π)σ)) e^{−(x−y)²/(2σ²)} dy] F(dx) = P(N(0, σ²) + X ≤ z),

where N(0, σ²) is a normal random variable with parameters 0 and σ², independent of the random variable X.
(c) If σ² → 0, then the random variables N(0, σ²) converge in probability to 0 (P(|N(0, σ²)| ≥ ε) → 0 for any ε > 0) and, consequently, the random variables N(0, σ²) + X converge weakly to the random variable X, that is, P(N(0, σ²) + X ≤ z) → P(X ≤ z) as σ² → 0 for any point z that is a point of continuity for the distribution function of the random variable X.
——————————
Theorem 10.5. Let F(x) be a distribution function and φ(t) = ∫_{−∞}^{∞} e^{itx} F(dx) the characteristic function that corresponds to this distribution function. Let φ(t) be absolutely integrable on the real line, i.e., ∫_{−∞}^{∞} |φ(t)| dt < ∞. Then the distribution function F(x) has a bounded continuous probability density function f(y), and the following inversion formula takes place:

f(y) = (1/2π) ∫_{−∞}^{∞} e^{−ity} φ(t) dt, y ∈ R¹.
——————————
(a) The proof can be accomplished by passing σ² → 0 in the Parseval identity

(1/2π) ∫_{−∞}^{∞} e^{−ity} φ(t) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} F(dx).
(b) Both expressions on the left- and right-hand sides of the above identity represent the probability density function fσ(y) of the distribution function Fσ(y) of the random variable N(0, σ²) + X.
(c) Fσ(·) ⇒ F(·) as σ → 0.
(d) From the left-hand side expression in the above identity, we see that fσ(y) → f(y) as σ → 0.
(e) Note that fσ(y) and f(y) are bounded continuous functions.
(f) It follows from (c), (d) and (e) that for any x′ < x″

Fσ(x″) − Fσ(x′) = ∫_{x′}^{x″} fσ(y) dy → ∫_{x′}^{x″} f(y) dy.
(g) But if x′, x″ are continuity points of F(x), then Fσ(x″) − Fσ(x′) → F(x″) − F(x′) as σ → 0.
(h) Thus,

F(x″) − F(x′) = ∫_{x′}^{x″} f(y) dy,

i.e., f(y) is indeed a probability density of the distribution function F(x).
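Theorem 10.5 can be tested numerically: starting from the standard normal characteristic function φ(t) = e^{−t²/2}, the inversion integral should return the standard normal density. A sketch, assuming NumPy is available:

```python
import numpy as np

# phi(t) = e^{-t^2/2}: the standard normal characteristic function.
t = np.linspace(-20.0, 20.0, 40001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2)

def density(y):
    """f(y) = (1/2pi) * integral of e^{-ity} phi(t) dt (trapezoidal rule)."""
    g = np.exp(-1j * t * y) * phi
    return ((g[:-1] + g[1:]).sum() * dt / 2 / (2 * np.pi)).real

for y in (0.0, 1.0, 2.0):
    print(y, density(y), np.exp(-y**2 / 2) / np.sqrt(2 * np.pi))
```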
——————————
2. Applications

2.1 Characteristic function for a sum of independent random variables
Lemma 10.2. If a random variable X = X1 + ··· + Xn is a sum of independent random variables X1, . . . , Xn, then

φX(t) = φX1(t) × ··· × φXn(t).
——————————
φX(t) = Ee^{it(X1 + ··· + Xn)} = E ∏_{k=1}^{n} e^{itXk} = ∏_{k=1}^{n} Ee^{itXk} = φX1(t) × ··· × φXn(t).
——————————
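Lemma 10.2 can be illustrated by comparing the empirical characteristic function of a sum with the product of the summands' empirical characteristic functions (a simulation sketch assuming NumPy is available; the exponential and uniform summands are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Independent summands with (arbitrarily chosen) different distributions.
x1 = rng.exponential(2.0, n)
x2 = rng.uniform(-1.0, 1.0, n)
s = x1 + x2

def ecf(sample, t):
    """Empirical characteristic function: sample mean of e^{itX}."""
    return np.exp(1j * t * sample).mean()

for t in (0.3, 1.0):
    print(t, ecf(s, t), ecf(x1, t) * ecf(x2, t))  # approximately equal
```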
Examples
(1) Let random variables Xk = N(mk, σk²), k = 1, . . . , n, be independent normal random variables. In this case X = X1 + ··· + Xn is also a normal random variable, with parameters m = m1 + ··· + mn and σ² = σ1² + ··· + σn².
——————————
φX(t) = ∏_{k=1}^{n} e^{itmk − σk²t²/2} = e^{itm − σ²t²/2}.
——————————
(2) Let random variables Xk = P(λk), k = 1, . . . , n, be independent Poissonian random variables. In this case X = X1 + ··· + Xn is also a Poissonian random variable, with parameter λ = λ1 + ··· + λn.
——————————
φX(t) = ∏_{k=1}^{n} e^{λk(e^{it} − 1)} = e^{λ(e^{it} − 1)}.
——————————
2.2 Characteristic functions and moments of random variables

Theorem 10.6. Let X be a random variable such that E|X|^n = ∫_{−∞}^{∞} |x|^n FX(dx) < ∞ for some n ≥ 1. Then the characteristic function φX(t) = ∫_{−∞}^{∞} e^{itx} FX(dx) has continuous derivatives up to order n for all t ∈ R¹, given by the formula

φX^{(k)}(t) = i^k ∫_{−∞}^{∞} x^k e^{itx} FX(dx), k = 1, . . . , n,

and, therefore, the following formula takes place:

EX^k = i^{−k} φX^{(k)}(0), k = 1, . . . , n.
——————————
(1) (φX(t + h) − φX(t))/h = ∫_{−∞}^{∞} e^{itx} ((e^{ihx} − 1)/h) FX(dx);
(2) |(e^{ihx} − 1)/h| ≤ |x|, which allows passing to the limit in the above relation as h → 0, using also that (e^{ihx} − 1)/h → ix as h → 0;
(3) In the general case, the formula for φX^{(k)}(t) can be proved by induction.
——————————
Examples

(1) Let X = N(m, σ²) be a normal random variable. Then φX(t) = e^{itm − σ²t²/2} and, therefore,
(1) φ′X(t) = (im − σ²t) e^{itm − σ²t²/2} and EX = i^{−1} φ′X(0) = m;
(2) φ″X(t) = −σ² e^{itm − σ²t²/2} + (im − σ²t)² e^{itm − σ²t²/2} and EX² = i^{−2} φ″X(0) = σ² + m².
(2) Let X = P(λ) be a Poissonian random variable. Then φX(t) = e^{λ(e^{it} − 1)} and, therefore,
(1) φ′X(t) = iλe^{it} e^{λ(e^{it} − 1)} and, thus, EX = i^{−1} φ′X(0) = λ;
(2) φ″X(t) = i²λe^{it} e^{λ(e^{it} − 1)} + i²λ²e^{2it} e^{λ(e^{it} − 1)} and EX² = i^{−2} φ″X(0) = λ + λ².
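The formula EX^k = i^{−k} φX^{(k)}(0) can be cross-checked by differentiating a characteristic function numerically with central finite differences (standard library only; the Poisson parameter λ = 3 and the step size h are arbitrary choices):

```python
import cmath

lam = 3.0
phi = lambda t: cmath.exp(lam * (cmath.exp(1j * t) - 1))  # Poisson CF

h = 1e-5
# Central differences for phi'(0) and phi''(0).
d1 = (phi(h) - phi(-h)) / (2 * h)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

EX = (d1 / 1j).real        # EX   = i^{-1} phi'(0)  = lambda
EX2 = (d2 / 1j**2).real    # EX^2 = i^{-2} phi''(0) = lambda + lambda^2
print(EX, EX2)             # close to 3 and 12
```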
2.3 Continuity theorem

Xn, n = 0, 1, . . . is a sequence of random variables with distribution functions, respectively, Fn(x), n = 0, 1, . . .;
φn(t) = Ee^{itXn} = ∫_{−∞}^{∞} e^{itx} Fn(dx), n = 0, 1, . . . are the corresponding characteristic functions;
CF denotes the set of continuity points of a distribution function F(x).

Theorem 10.7 (continuity)**. Distribution functions Fn ⇒ F0 as n → ∞ if and only if the corresponding characteristic functions φn(t) → φ0(t) as n → ∞ for every t ∈ R¹.
——————————
(a) Let Fn ⇒ F0 as n → ∞.
(b) Then, by Helly's theorem, for every t ∈ R¹,

φn(t) = ∫_{−∞}^{∞} e^{itx} Fn(dx) → ∫_{−∞}^{∞} e^{itx} F0(dx) = φ0(t) as n → ∞.

(c) Conversely, let φn(t) → φ0(t) as n → ∞, t ∈ R¹.
(d) As was proved in Lecture 9 using the Cantor diagonal method, for any subsequence nk → ∞ as k → ∞ one can select a further subsequence n′r = nk_r → ∞ as r → ∞ such that

Fn′r(x) → F(x) as r → ∞, x ∈ CF,

where F(x) is a proper or improper distribution function (non-decreasing, continuous from the right, and such that 0 ≤ F(−∞) ≤ F(∞) ≤ 1).
(e) Let us write down the Parseval identity:

(1/2π) ∫_{−∞}^{∞} e^{−ity} φn′r(t) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} Fn′r(dx).

(f) By passing to the limit as r → ∞ in the above identity and taking into account (c) and (d), the Lebesgue theorem, and the fact that (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} → 0 as x → ±∞, we get

(1/2π) ∫_{−∞}^{∞} e^{−ity} φ0(t) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} (1/(√(2π)σ)) e^{−(y−x)²/(2σ²)} F(dx).
(g) By multiplying both parts in (f) by √(2π)σ and taking y = 0, we get

∫_{−∞}^{∞} φ0(t) (σ/√(2π)) e^{−t²σ²/2} dt = ∫_{−∞}^{∞} e^{−x²/(2σ²)} F(dx) ≤ F(+∞) − F(−∞).

(h) By passing to the limit as σ → ∞ in the above equality and taking into account that (i) ∫_{−∞}^{∞} (σ/√(2π)) e^{−t²σ²/2} dt = 1, (ii) (σ/√(2π)) e^{−t²σ²/2} → 0 as σ → ∞ for t ≠ 0, and (iii) φ0(t) is continuous at zero with φ0(0) = 1, we get

1 = lim_{σ→∞} ∫_{−∞}^{∞} φ0(t) (σ/√(2π)) e^{−t²σ²/2} dt ≤ F(+∞) − F(−∞).

(i) Thus, F(+∞) − F(−∞) = 1, i.e., the distribution F(x) is a proper distribution.

(j) In this case, the weak convergence Fn′r ⇒ F as r → ∞ implies, by the first part of the theorem, that

φn′r(t) → φ(t) = ∫_{−∞}^{∞} e^{itx} F(dx) as r → ∞, t ∈ R¹.

(k) This relation, together with (c), implies that

φ(t) = ∫_{−∞}^{∞} e^{itx} F(dx) = φ0(t), t ∈ R¹.

(l) Therefore, due to the one-to-one correspondence between characteristic functions and distribution functions, F(x) ≡ F0(x).

(m) Therefore, by the subsequence criterion for weak convergence, Fn ⇒ F0 as n → ∞.
——————————
Theorem 10.8* (continuity). Distribution functions Fn(x) weakly converge to some distribution function F(x) if and only if the corresponding characteristic functions φn(t) converge pointwise (for every t ∈ R¹) to some continuous function φ(t). In this case, φ(t) is the characteristic function of the distribution function F(x).
——————————
(a) Let φn(t) → φ(t) as n → ∞ for every t ∈ R¹. Then φ(0) = 1;
(b) φ(t) is positive definite, since the functions φn(t) possess this property;
(c) Since φ(t) is also continuous, the properties (1)–(3) formulated in Theorem 10.1 hold and, therefore, φ(t) is the characteristic function of some distribution function F(x). Then Fn(x) weakly converge to F(x) by Theorem 10.7.
——————————
2.4 Weak law of large numbers (LLN)

< Ω, F, P > is a probability space;
Xn, n = 1, 2, . . . are independent identically distributed (i.i.d.) random variables defined on the probability space < Ω, F, P >.

Theorem 10.9 (weak LLN). If E|X1| < ∞, EX1 = a, then the random variables

X̄n = (X1 + ··· + Xn)/n →d a as n → ∞,

where →d denotes convergence in distribution.
——————————
(a) Let φ(t) = Ee^{itX1} and φn(t) = Ee^{itX̄n} be the corresponding characteristic functions. Then φn(t) = (φ(t/n))^n.
(b) Since E|X1| < ∞, there exists the continuous first derivative φ′(t) and, therefore, the Taylor expansion for φ(t) at the point t = 0 can be written down, that is, φ(t) = 1 + φ′(0)t + o(t).
(c) Since φ′(0) = iEX1 = ia, the above Taylor expansion can be re-written in the following form: φ(t) = 1 + iat + o(t).
(d) Therefore, φn(t) = (φ(t/n))^n = (1 + iat/n + o(t/n))^n.
(e) If un → 0 and vn → ∞, then lim (1 + un)^{vn} exists if and only if there exists lim un vn = w, and in this case lim (1 + un)^{vn} = e^w.
(f) Using this fact, we get lim_{n→∞} φn(t) = lim_{n→∞} (1 + iat/n + o(t/n))^n = e^{iat} for every t ∈ R¹.
(g) Thus, by the continuity theorem for characteristic functions, X̄n →d a as n → ∞.
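A quick simulation illustrates the weak LLN (a sketch assuming NumPy is available; the exponential distribution with mean a = 2 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
a = 2.0  # EX1 = a; the exponential distribution is an arbitrary choice

for n in (10, 1_000, 100_000):
    sample_mean = rng.exponential(a, n).mean()
    print(n, sample_mean)  # the sample means approach a = 2
```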
——————————
2.5 Central Limit Theorem (CLT)

< Ω, F, P > is a probability space;
Xn, n = 1, 2, . . . are independent identically distributed (i.i.d.) random variables defined on the probability space < Ω, F, P >;
Z = N(0, 1) is a standard normal random variable with the distribution function

F(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−y²/2} dy, −∞ < x < ∞.

Theorem 10.10 (CLT). If E|X1|² < ∞, EX1 = a, VarX1 = σ² > 0, then the random variables

Zn = (X1 + ··· + Xn − an)/(√n σ) →d Z as n → ∞.

(1) Since the limit distribution F(x) is continuous, the above relation means that P(Zn ≤ x) → F(x) as n → ∞ for every −∞ < x < ∞.
——————————
(a) Let us introduce random variables Yn = (Xn − a)/σ, n = 1, 2, . . .; then

Zn = (X1 + ··· + Xn − an)/(√n σ) = (Y1 + ··· + Yn)/√n.

(b) Let ψ(t) = Ee^{itY1} and ψn(t) = Ee^{itZn}. Then ψn(t) = (ψ(t/√n))^n.
(c) If E|X1|² < ∞, then E|Y1|² < ∞ and, moreover, EY1 = σ^{−1}(EX1 − a) = 0 and E(Y1)² = σ^{−2}E(X1 − a)² = 1.
(d) Since E|Y1|² < ∞, there exists the continuous second derivative ψ″(t) and, therefore, the Taylor expansion for ψ(t) at the point t = 0 can be written down, that is, ψ(t) = 1 + ψ′(0)t + ½ψ″(0)t² + o(t²).
(e) Since ψ′(0) = iEY1 = 0 and ψ″(0) = i²E(Y1)² = −1, the above Taylor expansion can be re-written in the following form: ψ(t) = 1 − t²/2 + o(t²).
(f) Therefore, ψn(t) = (ψ(t/√n))^n = (1 − t²/(2n) + o(t²/n))^n.
(g) Using the above formula, we get lim_{n→∞} ψn(t) = lim_{n→∞} (1 − t²/(2n) + o(t²/n))^n = e^{−t²/2} for every t ∈ R¹.
(h) Thus, by the continuity theorem for characteristic functions, Zn →d Z as n → ∞.
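The CLT can likewise be illustrated by simulation. The sketch below (assuming NumPy is available; Exp(1) summands are an arbitrary choice, so a = σ = 1) compares the empirical distribution function of Zn with the standard normal one:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
a = sigma = 1.0          # mean and standard deviation of Exp(1)
n, reps = 500, 20_000

# reps independent copies of Z_n = (X_1 + ... + X_n - a*n)/(sqrt(n)*sigma).
sums = rng.exponential(a, (reps, n)).sum(axis=1)
zn = (sums - a * n) / (math.sqrt(n) * sigma)

def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for x in (-1.0, 0.0, 1.0):
    print(x, (zn <= x).mean(), std_normal_cdf(x))  # empirical vs limit CDF
```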
——————————
2.6 Law of small numbers

Let Xn,k, k = 1, 2, . . . , n, be i.i.d. Bernoulli random variables defined on a probability space < Ωn, Fn, Pn >, taking values 1 and 0 with probabilities pn and 1 − pn, respectively, where 0 ≤ pn ≤ 1.
N(λ) = P(λ) is a Poissonian random variable with parameter λ, i.e., P(N(λ) = r) = (λ^r/r!) e^{−λ}, r = 0, 1, . . . .

Theorem 10.11 (LSN). If 0 < pn·n → λ < ∞ as n → ∞, then the random variables

Nn = Xn,1 + ··· + Xn,n →d N(λ) as n → ∞.
(1) The random variable Nn is the number of successes in n Bernoulli trials.
(2) The random variables Nn = Xn,1 + ··· + Xn,n, n = 1, 2, . . . are determined by a series of random variables

X1,1
X2,1, X2,2
X3,1, X3,2, X3,3
. . . . . .

This representation gives this type of model its name: the so-called triangular array model.
——————————
(a) Let φn(t) = Ee^{itNn}. The characteristic function Ee^{itXn,1} = pn e^{it} + 1 − pn and, thus, φn(t) = Ee^{itNn} = (pn e^{it} + 1 − pn)^n.
(b) Using the above formula, we get φn(t) = (pn e^{it} + 1 − pn)^n = (1 + pn(e^{it} − 1))^n ~ e^{n pn (e^{it} − 1)} → e^{λ(e^{it} − 1)} as n → ∞, for every t ∈ R¹.
(c) Thus, by the continuity theorem for characteristic functions, Nn →d N(λ) as n → ∞.
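The law of small numbers is the classical Poisson approximation to the binomial distribution. A short sketch (standard library only; λ = 2 and n = 10000 are arbitrary choices) compares P(Nn = r) with pn = λ/n against the Poisson limit:

```python
import math

lam, n = 2.0, 10_000
p = lam / n  # p_n chosen so that n * p_n = lambda

for r in range(5):
    binom = math.comb(n, r) * p**r * (1 - p)**(n - r)
    poisson = lam**r / math.factorial(r) * math.exp(-lam)
    print(r, binom, poisson)  # the two columns nearly coincide
```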
——————————
LN Problems

1. What is the characteristic function of the random variable X which takes the values −1 and 1, each with probability 1/2?

2. Prove that the function cos^n t is a characteristic function for every n = 1, 2, . . . .

3. Let X be a uniformly distributed random variable on the interval [a, b]. Find the characteristic function of the random variable X.

4. Let f(x) be a probability density for a random variable X. Prove that the characteristic function is a real-valued function if f(x) is a symmetric function, i.e., f(x) = f(−x), x ≥ 0.

5. Let random variables X1 and X2 be independent and have gamma distributions with parameters α1, β and α2, β, respectively. Prove that the random variable Y = X1 + X2 also has a gamma distribution with parameters α1 + α2, β.
6. Let X have a geometric distribution with parameter p. Find EX and VarX using characteristic functions.

7. Let Xn be geometric random variables with discrete distribution pXn(k) = pn(1 − pn)^{k−1}, k = 1, 2, . . ., where 0 < pn → 0 as n → ∞. Prove that pnXn →d X as n → ∞, where X = Exp(1) is an exponential random variable with parameter 1.