On the Use of Characteristic Function for Generating Moments of Probability
Distributions.
O.D. Ogunwale, M.Sc.1*, O.Y. Halid, M.Sc.1, and F.D. Famule, M.Sc.2

1 Department of Mathematical Sciences, University of Ado-Ekiti, Ekiti State, Nigeria.
2 Department of Mathematics and Statistics, Osun State College of Technology, Esa-Oke, Osun State, Nigeria.

* E-mail: [email protected]
The Pacific Journal of Science and Technology, Volume 10, Number 2 (November 2009): 228-234.

ABSTRACT

Characteristic functions (cf) have been used to establish the convergence of sums of independent and identically distributed (i.i.d.) random variables. The cf also identifies the distribution of a random variable uniquely: once the cf is known, one can categorically state the distribution of the random variable involved. In this paper, the use of the cf in generating moments of random variables about an origin is presented. The means and variances of the distributions are also obtained. Two discrete and two continuous probability distributions are taken into consideration.

(Keywords: characteristic functions, CF, independent and identically distributed random variables, IID)

INTRODUCTION

Let X be a random variable with density function f(x) and distribution function F(x). The characteristic function φ_x(t) of the random variable X is defined by:

φ_x(t) = E(e^{itx})    (1)

       = ∫_x e^{itx} f(x) dx    (2)

where i is the imaginary unit, i = √(-1), and t is real. φ_x(t) is frequently referred to as the characteristic function corresponding to the distribution F(x) since, by definition, e^{itx} = cos tx + i sin tx. It follows that:

|φ_x(t)| ≤ ∫_x |e^{itx}| f(x) dx = 1.    (3)

This implies that the integral in the definition of φ_x(t) will exist for any f(x) and all t, and hence that, unlike the moment generating function, the characteristic function of a random variable always exists. It is also of importance to note that φ_x(0) = 1 and that φ_x(t) is always finite, i.e.:

|φ_x(t)| ≤ 1 < ∞.    (4)

Properties of the Characteristic Function

(i) φ_x(t) is uniformly continuous on the real line.

(ii) φ_x(0) = 1.

(iii) |φ_x(t)| ≤ 1 for all t, since |e^{itx}| ≤ 1.

a. The characteristic function of a sum of independent random variables is the product of their characteristic functions, i.e., if s_n = x_1 + x_2 + x_3 + ... + x_n, where x_1, x_2, ..., x_n are i.i.d. random variables, then:

φ_{s_n}(t) = φ_{x_1}(t) φ_{x_2}(t) ... φ_{x_n}(t) = (φ_x(t))^n.

b. The product of φ_x(t) and its complex conjugate, |φ_x(t)|^2, is real.

c. Unlike the moment generating function, φ_x(t) is finite for every random variable x and all real numbers t. This is because |e^{it}| is bounded while e^t is not bounded for -∞ < t < ∞.

d. The distribution function of x, and hence the pdf if it exists, can be obtained from the characteristic function using an "inversion formula". If x is a continuous random variable, then:

f_x(x) = (1/2π) ∫_{-∞}^{∞} e^{-itx} φ_x(t) dt,    (5)

provided ∫_{-∞}^{∞} |φ_x(t)| dt < ∞; an analogous formula recovers the probability mass function when x is discrete.

e. If two random variables have the same characteristic function, then they have the same distribution function.
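Properties (ii) and (iii) can be spot-checked numerically. The sketch below (a minimal illustration, not part of the paper; the parameter choices λ = 2 and μ = 0, σ = 1 are the author's own examples) evaluates the closed-form characteristic functions of two of the four distributions treated later and asserts φ(0) = 1 and |φ(t)| ≤ 1 on a grid:

```python
import numpy as np

# Closed-form characteristic functions for two of the paper's distributions;
# lam=2.0 and (mu, sigma)=(0, 1) are illustrative parameter choices.
def cf_poisson(t, lam):
    return np.exp(lam * (np.exp(1j * t) - 1.0))        # Eq. (37)

def cf_normal(t, mu, sigma):
    return np.exp(1j * mu * t - 0.5 * (sigma * t)**2)  # Eq. (51)

# Property (ii): phi(0) = 1.
assert abs(cf_poisson(0.0, 2.0) - 1.0) < 1e-12
assert abs(cf_normal(0.0, 0.0, 1.0) - 1.0) < 1e-12

# Property (iii): |phi(t)| <= 1 on a grid of t values.
t = np.linspace(-10.0, 10.0, 2001)
assert np.all(np.abs(cf_poisson(t, 2.0)) <= 1.0 + 1e-12)
assert np.all(np.abs(cf_normal(t, 0.0, 1.0)) <= 1.0 + 1e-12)
print("properties (ii) and (iii) verified on the grid")
```
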
USES OF THE CHARACTERISTIC FUNCTION

The characteristic function of a random variable X(ω) defined on (Ω, A, P) provides a powerful and widely applicable tool in the theory of probability. Characteristic functions are used to prove both the weak law of large numbers and the Central Limit Theorem, among other results.

The Central Limit Theorem

Let {x_i} be a sequence of independent and identically distributed random variables, each having mean μ and finite non-zero variance σ^2. Consider the sum:

s_n = x_1 + x_2 + ... + x_n.    (6)

Let the measure of the standardized sum

s_n* = (s_n - nμ) / (σ√n)    (7)

be denoted by P_n. Then P_n → N(0, 1), i.e., P_n converges in measure to the standard normal distribution.

Proof: Let φ(t) be the characteristic function of each x_i. By a Taylor (Maclaurin) expansion about t = 0, we have:

φ(t) = φ(0) + t φ'(0) + (t^2 / 2!) φ''(0) + o(t^2)
     = 1 + iμt - ((μ^2 + σ^2) / 2!) t^2 + o(t^2),    (8)

since φ'(0) = iμ and φ''(0) = i^2 E(x^2) = -(μ^2 + σ^2). By property (a):

φ_{s_n*}(t) = e^{-iμ√n t / σ} φ^n(t / (σ√n)).    (9)

Taking logarithms, we have:

log φ_{s_n*}(t) = -iμ√n t / σ + n ln φ(t / (σ√n))    (10)

= -iμ√n t / σ + n ln[1 + iμt / (σ√n) - (μ^2 + σ^2) t^2 / (2nσ^2) + o(t^2 / (nσ^2))].    (11)

Using the power series:

ln(1 + z) = z - z^2/2 + z^3/3 - ...,  |z| < 1,    (12)

we obtain:

ln φ_{s_n*}(t) = -iμ√n t / σ + n[iμt / (σ√n) - (μ^2 + σ^2) t^2 / (2nσ^2) + μ^2 t^2 / (2nσ^2) + o(t^2 / (nσ^2))]    (13)

= -t^2/2 + n · o(t^2 / (nσ^2)).    (14)

Taking the limit as n → ∞, the remainder n · o(1/n) → 0, so that:

lim_{n→∞} ln φ_{s_n*}(t) = -t^2/2.    (15)

Thus:

φ_{s_n*}(t) → e^{-t^2/2},    (16)

which is the characteristic function of a standard normal random variable; hence P_n → N(0, 1). (Q.E.D.)

The Weak Law of Large Numbers

Let {x_i} be a sequence of independent, identically distributed random variables having mean μ, and let s_n = x_1 + x_2 + ... + x_n. Then:

s_n / n → μ in measure.    (17)

Proof: Let φ_x(t) be the characteristic function of each x_i, and let:

s_n* = s_n / n - μ.    (18)

Then:

φ_{s_n*}(t) = e^{-iμt} φ_x^n(t/n) = exp{ n [log φ_x(t/n) - iμt/n] }.    (19)

Let t be fixed. If t ≠ 0, we have:

lim_{n→∞} n [log φ_x(t/n) - iμt/n] = t lim_{n→∞} [ log φ_x(t/n) / (t/n) - iμ ].    (20)

But:

lim_{n→∞} log φ_x(t/n) / (t/n) = iμ,    (22)

since φ_x(0) = 1 and (log φ_x)'(0) = φ_x'(0) = iμ. If t = 0, then φ_x(0) = 1 directly.    (21)

Therefore:

lim_{n→∞} n [log φ_x(t/n) - iμt/n] = 0 for all t,

so that:

φ_{s_n*}(t) → 1 = E(e^{itx}) as n → ∞, which implies that P(x = 0) = 1 for the limiting random variable.    (23)

Hence the limiting distribution function of x is given by:

F_x(x) = 0 for x < 0,  F_x(x) = 1 for x ≥ 0.    (24)

This shows that s_n* → 0 in measure. By the continuity theorem:

lim_{n→∞} P(s_n* < -ε) = F_x(-ε) = 0,    (25)

lim_{n→∞} P(s_n* < ε) = F_x(ε) = 1.

That is:

lim_{n→∞} P(|s_n*| < ε) = 1,    (26)

i.e., s_n / n → μ in measure. (Q.E.D.)
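Both limit theorems can be illustrated by simulation. The sketch below (the author's own illustration, not from the paper; the choice of i.i.d. exponential draws with mean μ = 2, so σ = 2, and the sample sizes are assumptions) checks that s_n/n concentrates at μ and that the empirical characteristic function of the standardized sum approaches e^{-t^2/2} as in (16):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = sigma = 2.0        # exponential(scale=2) has mean 2 and std. dev. 2
n, reps = 1000, 10000   # illustrative sizes

# Draw `reps` independent sums s_n of n i.i.d. exponential variables.
x = rng.exponential(mu, size=(reps, n))
s_n = x.sum(axis=1)

# Weak law of large numbers, Eq. (17): s_n / n is close to mu.
assert abs((s_n / n).mean() - mu) < 0.01

# Central Limit Theorem, Eq. (16): the empirical characteristic function
# of the standardized sum s_n* is close to exp(-t^2/2).
s_star = (s_n - n * mu) / (sigma * np.sqrt(n))
for t in (0.5, 1.0, 2.0):
    ecf = np.exp(1j * t * s_star).mean()
    assert abs(ecf - np.exp(-t**2 / 2)) < 0.05
print("WLLN and CLT checks passed")
```
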
THEORETICAL FRAMEWORK

To further establish the use of the characteristic function in generating moments of probability distributions, just like the other generating functions (the probability generating function (pgf), moment generating function (mgf), factorial moment generating function (fmgf), etc.), moments about an arbitrary origin can be obtained for probability distributions by using the formula derived below:

μ'_r = (-i)^r Φ_x^(r)(0),    (27)

where μ'_r is the rth moment about the origin and Φ_x^(r)(0) is the rth derivative of the characteristic function of the random variable x with respect to t, evaluated at t = 0.

To derive (27), differentiate under the expectation:

φ_x^(n)(t) = (d^n / dt^n) E(e^{itx}) = E((ix)^n e^{itx}) = i^n E(x^n e^{itx}).

Multiplying both sides by i^n:

i^n φ_x^(n)(t) = i^{2n} E(x^n e^{itx}) = (-1)^n E(x^n e^{itx}),

so that:

(-i)^n φ_x^(n)(t) = E(x^n e^{itx}).

Setting t = 0:

(-i)^n φ_x^(n)(0) = E(x^n) = μ'_n.

The distributions that will be considered are the Binomial, Poisson, Gamma, and Normal.
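Formula (27) can be checked numerically by approximating the rth derivative of a closed-form characteristic function at t = 0 with a finite difference. The sketch below is the author's own illustration (the Poisson cf with λ = 3 and the helper `moment_from_cf` are assumptions, not from the paper); it recovers μ'_1 = λ and μ'_2 = λ^2 + λ, anticipating (39) and (43):

```python
from math import comb
import numpy as np

def cf_poisson(t, lam=3.0):
    # Poisson characteristic function, Eq. (37); lam=3.0 is illustrative.
    return np.exp(lam * (np.exp(1j * t) - 1.0))

def moment_from_cf(phi, r, h=1e-3):
    # Eq. (27): mu'_r = (-i)^r * phi^(r)(0). The r-th derivative at 0 is
    # approximated by an (r+1)-point central finite difference.
    d = sum((-1)**k * comb(r, k) * phi((r / 2 - k) * h) for k in range(r + 1))
    return ((-1j)**r * d / h**r).real

m1 = moment_from_cf(cf_poisson, 1)   # close to lam = 3, Eq. (39)
m2 = moment_from_cf(cf_poisson, 2)   # close to lam^2 + lam = 12, Eq. (43)
assert abs(m1 - 3.0) < 1e-4
assert abs(m2 - 12.0) < 1e-4
print(m1, m2)  # the variance m2 - m1**2 is then close to lam, Eq. (45)
```
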
Consideration of Two Discrete and Two Continuous Probability Distributions

The Binomial Distribution: The characteristic function is given by:

Φ_x(t) = ((1 - p) + pe^{it})^n.    (28)

The first moment about an arbitrary origin is:

μ'_1 = (-i) Φ_x'(0),    (29)

which follows from (27). Now:

Φ_x'(t) = n(pe^{it} + q)^{n-1} (pie^{it}) = npie^{it}(pe^{it} + q)^{n-1},

so, since p + q = 1:

Φ_x'(0) = npi(p + q)^{n-1} = npi.    (30)

Therefore, from (29) and (30), we have:

μ'_1 = (-i)(npi) = -i^2 np = np.    (31)

The second moment about the origin is similarly obtained as:

μ'_2 = (-i)^2 Φ_x''(0),    (32)

which also follows from (27). Differentiating again:

Φ_x''(t) = d/dt [npie^{it}(pe^{it} + q)^{n-1}]    (33)
= npie^{it}(n-1)(pe^{it} + q)^{n-2}(pie^{it}) + ni^2 pe^{it}(pe^{it} + q)^{n-1}
= n(n-1)p^2 i^2 e^{2it}(pe^{it} + q)^{n-2} + ni^2 pe^{it}(pe^{it} + q)^{n-1}.    (34)

Setting t to zero, we have:

Φ_x''(0) = n(n-1)p^2 i^2 + ni^2 p = -n(n-1)p^2 - np.

That is:

μ'_2 = (-i)^2 (-n(n-1)p^2 - np) = -1 · (-n(n-1)p^2 - np) = np + n(n-1)p^2.    (35)

Then:

V(x) = μ'_2 - (μ'_1)^2 = np + n(n-1)p^2 - (np)^2 = np(1 - p) = npq.    (36)

Equations (31) and (35) give, respectively, the 1st and 2nd moments by virtue of the characteristic function approach, and consequently (36), the variance of the Binomial distribution.
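The closed form (28) and the moments (31) and (36) can be verified against simulated binomial data. The sketch below is the author's own illustration (the parameters n = 10, p = 0.3 and the sample size are assumptions):

```python
import numpy as np

# Empirical characteristic function of Binomial(n, p) samples versus the
# closed form ((1 - p) + p e^{it})^n of Eq. (28); n=10, p=0.3 illustrative.
rng = np.random.default_rng(1)
n, p = 10, 0.3
x = rng.binomial(n, p, size=200_000)

for t in (0.3, 1.0, 2.5):
    empirical = np.exp(1j * t * x).mean()
    closed = ((1 - p) + p * np.exp(1j * t))**n
    assert abs(empirical - closed) < 0.01

# Sample moments agree with Eqs. (31) and (36): mean np, variance npq.
assert abs(x.mean() - n * p) < 0.02
assert abs(x.var() - n * p * (1 - p)) < 0.05
print("binomial cf and moments check out")
```
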
The Poisson Distribution: The characteristic function is given by Equation (1) as:

φ_x(t) = e^{λ(e^{it} - 1)}.    (37)

Using (27), we have:

φ_x'(t) = λie^{it} e^{λ(e^{it} - 1)}.    (38)

Applying (29), then:

μ'_1 = (-i) φ_x'(0) = (-i) λi (1) e^0 = -i^2 λ = λ.    (39)

Similarly, we obtain the 2nd moment using (32). First:

φ_x''(t) = d/dt (λie^{it} e^{λ(e^{it} - 1)})    (40)
= λie^{it} (λie^{it} e^{λ(e^{it} - 1)}) + e^{λ(e^{it} - 1)} (λi^2 e^{it})
= λ^2 i^2 e^{2it} e^{λ(e^{it} - 1)} + λ i^2 e^{it} e^{λ(e^{it} - 1)}.    (41)

Setting t to zero, (41) becomes:

φ_x''(0) = λ^2 i^2 + λ i^2.    (42)

By a similar argument, from (32):

μ'_2 = (-i)^2 φ_x''(0) = (-i)^2 (λ^2 i^2 + λ i^2) = -1 · (-λ^2 - λ) = λ^2 + λ.    (43)

Then:

V(x) = μ'_2 - (μ'_1)^2 = λ^2 + λ - (λ)^2    (44)
= λ.    (45)

The Gamma Distribution: The characteristic function is:

Φ_x(t) = (1 - βit)^{-α}.    (46)

Differentiating:

φ_x'(t) = αβi (1 - βit)^{-α-1},

so that:

μ'_1 = (-i) Φ_x'(0) = (-i) αβi (1 - 0)^{-α-1} = -αβ i^2 = αβ.    (47)

Differentiating again:

Φ_x''(t) = d/dt [αβi (1 - βit)^{-α-1}] = αβ^2 i^2 (α + 1)(1 - βit)^{-α-2}.    (48)

Hence:

μ'_2 = (-i)^2 Φ_x''(0) = (-i)^2 αβ^2 i^2 (α + 1)(1 - 0)^{-α-2} = αβ^2 (α + 1).    (49)

Then:

V(x) = μ'_2 - (μ'_1)^2 = αβ^2 (α + 1) - (αβ)^2 = αβ^2.    (50)

The Normal Distribution: The characteristic function is:

φ_x(t) = e^{iμt - σ^2 t^2 / 2}.    (51)

Differentiating:

φ_x'(t) = (iμ - σ^2 t) e^{iμt - σ^2 t^2 / 2},    (52)

so that:

μ'_1 = (-i)(iμ - σ^2 (0)) e^0 = -i^2 μ = μ.    (53)

Differentiating again:

Φ_x''(t) = (iμ - σ^2 t)^2 e^{iμt - σ^2 t^2 / 2} - σ^2 e^{iμt - σ^2 t^2 / 2},    (54)

so that:

Φ_x''(0) = (iμ)^2 (1) - σ^2 = -μ^2 - σ^2.    (55)

Hence:

μ'_2 = (-i)^2 (-μ^2 - σ^2) = μ^2 + σ^2.    (56)

Then:

V(x) = μ'_2 - (μ'_1)^2 = μ^2 + σ^2 - (μ)^2 = σ^2.    (57)
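The Gamma and Normal results (47)-(50) and (53)-(57) can be sanity-checked by Monte Carlo. The sketch below is the author's own illustration (the parameter values α = 2, β = 1.5, μ = -1, σ = 0.7 and the sample size are assumptions; note that the cf (1 - βit)^{-α} corresponds to NumPy's gamma with shape α and scale β):

```python
import numpy as np

# Monte Carlo check: Gamma(alpha, beta) has mean alpha*beta and variance
# alpha*beta^2 (Eqs. 47, 50); N(mu, sigma^2) has mean mu and variance
# sigma^2 (Eqs. 53, 57). Parameter values are illustrative.
rng = np.random.default_rng(2)
alpha, beta = 2.0, 1.5
mu, sigma = -1.0, 0.7
N = 400_000

g = rng.gamma(alpha, beta, N)
assert abs(g.mean() - alpha * beta) < 0.02
assert abs(g.var() - alpha * beta**2) < 0.05

z = rng.normal(mu, sigma, N)
assert abs(z.mean() - mu) < 0.01
assert abs(z.var() - sigma**2) < 0.01
print("gamma and normal moments match the cf-derived values")
```
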
DISCUSSION OF RESULTS

Equations (31) and (35) give, respectively, the first two moments of the Binomial distribution by virtue of the characteristic function approach presented in (27). This consequently yields (36), the variance.

Similar operations were subsequently carried out on the Poisson, Gamma, and Normal distributions, yielding the appropriate first two moments as well as their variances.

It must be stated that the importance of this (characteristic function) approach lies in the fact that it takes care of probability distributions whose moment generating functions do not exist.

CONCLUSION

In view of the foregoing, it has been clearly demonstrated and established that, apart from obtaining the characteristic functions of probability distributions, the characteristic function can be further used to obtain moments of random variables, whether discrete or continuous, just like the other known generating functions.

Since the characteristic function always exists for any random variable x, unlike the moment generating function, it provides a solution to the problem of obtaining important statistical measures such as the mean, variance, kurtosis, skewness, etc., for all distributions of random variables.

ABOUT THE AUTHORS

O.D. Ogunwale holds an M.Sc. in Statistics and is completing a Ph.D. in Statistics. He serves as a Lecturer in the Department of Mathematical Sciences, University of Ado-Ekiti, Nigeria. His research interests are in the areas of probability and stochastic processes.

O.Y. Halid holds an M.Sc. in Statistics. He serves as a Lecturer in the Department of Mathematical Sciences, University of Ado-Ekiti, Nigeria. His research interests are probability studies.

F.D. Famule holds an M.Sc. in Statistics and is completing a Ph.D. in Statistics. He serves as a Senior Lecturer at the College of Technology, Esa-Oke, Osun State, Nigeria. His research interests are in stochastic processes.

SUGGESTED CITATION

Ogunwale, O.D., O.Y. Halid, and F.D. Famule. 2009. "On the Use of Characteristic Function for Generating Moments of Probability Distributions". Pacific Journal of Science and Technology. 10(2):228-234.