International Journal of Mathematical Analysis
Vol. 10, 2016, no. 10, 455 - 467
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ijma.2016.6219
A Bivariate Distribution whose Marginal Laws
are Gamma and Macdonald
Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez
Instituto de Matemáticas
Universidad de Antioquia
Calle 67, No. 53-108, Medellín, Colombia
Copyright © 2016 Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Gamma and Macdonald distributions are associated with the gamma and extended gamma functions, respectively. In this article, we define a bivariate distribution whose marginal distributions are gamma and Macdonald. We study several properties of this distribution.
Mathematics Subject Classification: 33E99, 60E05
Keywords: Confluent hypergeometric function; entropy; extended gamma
function; gamma distribution; Laguerre polynomial
1 Introduction

The gamma function was first introduced by Leonhard Euler in 1729 as the limit of a discrete expression and later as an absolutely convergent improper integral,

$$\Gamma(a) = \int_0^{\infty} t^{a-1} \exp(-t)\, dt, \quad \operatorname{Re}(a) > 0. \tag{1}$$
The gamma function has many beautiful properties and has been used in
almost all the branches of science and engineering. Replacing t by z/σ, σ > 0,
in (1), a more general definition of the gamma function can be given as

$$\Gamma(a) = \sigma^{-a} \int_0^{\infty} z^{a-1} \exp\left(-\frac{z}{\sigma}\right) dz, \quad \operatorname{Re}(a) > 0. \tag{2}$$
In statistical distribution theory, the gamma function has been used extensively. Using the integrand of the gamma function (2), the gamma distribution has been defined by the probability density function (p.d.f.)

$$\frac{v^{a-1} \exp(-v/\sigma)}{\sigma^{a}\, \Gamma(a)}, \quad a > 0, \ \sigma > 0, \ v > 0. \tag{3}$$
We will write V ∼ G(a, σ) if the density of V is given by (3). Here, a and σ
determine the shape and scale of the distribution.
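As a brief aside (ours, not the paper's), the parameterization in (3) is the standard shape-scale gamma, so G(a, σ) corresponds to scipy.stats.gamma with shape a and scale σ; a minimal sketch with illustrative values:

```python
# Minimal sketch: the density (3) is the shape-scale gamma, so V ~ G(a, sigma)
# maps onto scipy.stats.gamma(a, scale=sigma).  Values below are illustrative.
import numpy as np
from scipy import stats
from scipy.special import gammaln

a, sigma = 2.5, 1.5
v = np.linspace(0.1, 20.0, 50)
# Density (3) written out directly, in log form for numerical stability.
pdf3 = np.exp((a - 1) * np.log(v) - v / sigma - a * np.log(sigma) - gammaln(a))
assert np.allclose(stats.gamma(a, scale=sigma).pdf(v), pdf3)
```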
In 1994, Chaudhry and Zubair [3] defined the extended gamma function, Γ(a; σ), as

$$\Gamma(a; \sigma) = \int_0^{\infty} t^{a-1} \exp\left(-t - \frac{\sigma}{t}\right) dt,$$
where σ > 0 and a is any complex number. For Re(a) > 0, taking σ = 0 shows that this extension reduces to the classical gamma function, Γ(a; 0) = Γ(a). The extended gamma function has proved very useful in various problems in engineering and physics; see, for example, Chaudhry and Zubair [2–6].
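As a numerical aside (ours, not the paper's), Γ(a; σ) can be evaluated by direct quadrature and cross-checked against the classical identity Γ(a; σ) = 2σ^{a/2} K_a(2√σ), K_a being the Macdonald (modified Bessel) function; a sketch with illustrative values:

```python
# Sketch: evaluate the extended gamma function of Chaudhry and Zubair by
# quadrature and cross-check with the Bessel-K identity noted above.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def ext_gamma(a, sigma):
    # Gamma(a; sigma) = int_0^inf t^(a-1) exp(-t - sigma/t) dt
    val, _ = quad(lambda t: t**(a - 1) * np.exp(-t - sigma / t), 0, np.inf)
    return val

a, sigma = 1.7, 0.8
print(ext_gamma(a, sigma))                             # quadrature
print(2 * sigma**(a / 2) * kv(a, 2 * np.sqrt(sigma)))  # identity
```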
Using the integrand of the extended gamma function, an extended gamma distribution can be defined by the p.d.f.

$$\frac{v^{a-1} \exp\left(-v - \sigma/v\right)}{\Gamma(a; \sigma)}, \quad v > 0.$$
The distribution given by the above density will be designated as EG(a, σ).
By using the definition of the extended gamma function, Chaudhry and Zubair [4] introduced a one-parameter Macdonald distribution. Making a slight change in the density proposed by Chaudhry and Zubair [4], a three-parameter Macdonald distribution (Nagar, Roldán-Correa and Gupta [7, 8]) is defined by the p.d.f.

$$f_M(y; \alpha, \beta, \sigma) = \frac{\sigma^{-\beta} y^{\beta-1}\, \Gamma(\alpha; \sigma^{-1} y)}{\Gamma(\beta)\Gamma(\alpha+\beta)}, \quad y > 0, \ \sigma > 0, \ \beta > 0, \ \alpha+\beta > 0.$$
We will denote it by Y ∼ M(α, β, σ). If σ = 1 in the density above, then we will simply write Y ∼ M(α, β). By replacing Γ(α; σ⁻¹y) by its integral representation, the three-parameter Macdonald density can also be written as

$$f_M(y; \alpha, \beta, \sigma) = \frac{\sigma^{-\beta} y^{\beta-1}}{\Gamma(\beta)\Gamma(\alpha+\beta)} \int_0^{\infty} x^{\alpha-1} \exp\left(-x - \frac{y}{\sigma x}\right) dx, \quad y > 0, \tag{4}$$
where σ > 0, β > 0 and α + β > 0. Now, consider two random variables X and Y such that the conditional distribution of Y given X is gamma with shape parameter β and scale parameter σx, and the marginal distribution of X is standard gamma with shape parameter α + β. That is,

$$f(y|x) = \frac{y^{\beta-1}\exp(-y/\sigma x)}{\Gamma(\beta)(\sigma x)^{\beta}}, \quad y > 0,$$

and

$$g(x) = \frac{x^{\alpha+\beta-1}\exp(-x)}{\Gamma(\alpha+\beta)}, \quad x > 0.$$
Then (4) can be written as

$$f_M(y; \alpha, \beta, \sigma) = \int_0^{\infty} f(y|x)\, g(x)\, dx.$$
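This mixture representation also gives a direct numerical route to the Macdonald density; a sketch (ours, with illustrative parameter values):

```python
# Sketch: the Macdonald density (4) as the gamma mixture above, integrating
# the conditional G(beta, sigma*x) density against the G(alpha+beta, 1)
# density of X.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

def macdonald_pdf(y, alpha, beta, sigma):
    integrand = lambda x: gamma.pdf(y, beta, scale=sigma * x) * gamma.pdf(x, alpha + beta)
    val, _ = quad(integrand, 0, np.inf)
    return val

alpha, beta, sigma = 1.2, 2.0, 0.7
print(macdonald_pdf(1.3, alpha, beta, sigma))
# Integrating this function over y > 0 should return 1 (up to quadrature error).
```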
Thus, the product f (y|x)g(x) can be used to create a bivariate density with
Macdonald and standard gamma as marginal densities of Y and X, respectively. We, therefore, define the bivariate density of X and Y as
$$f(x, y; \alpha, \beta, \sigma) = \frac{x^{\alpha-1} y^{\beta-1} \exp\left(-x - y/\sigma x\right)}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)}, \quad x > 0, \ y > 0, \tag{5}$$
where β > 0, α + β > 0 and σ > 0. The distribution defined by the density (5)
may be called the Macdonald-gamma distribution. The bivariate distribution
defined by the above density has many interesting features. For example, the
marginal and the conditional distributions of Y are Macdonald and gamma,
the marginal distribution of X is gamma, and the conditional distribution of X
given Y is extended gamma. The gamma distribution has been used to model
amounts of daily rainfall (Aksoy [1]). In neuroscience, the gamma distribution
is often used to describe the distribution of inter-spike intervals (Robson and
Troy [9]). The gamma distribution is widely used as a conjugate prior in
Bayesian statistics. It is the conjugate prior for the precision (i.e. inverse
of the variance) of a normal distribution. Further, its tractable gamma and Macdonald marginal laws make this bivariate distribution a potential candidate for many real-life problems.
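The two-stage construction above also yields an immediate sampler for (5). The following sketch (ours; parameter values illustrative) is reused in later spot-checks:

```python
# Sketch: sample (X, Y) from (5) via X ~ G(alpha+beta, 1), then
# Y | X = x ~ G(beta, sigma*x).
import numpy as np

rng = np.random.default_rng(0)

def rvs_macdonald_gamma(alpha, beta, sigma, size):
    x = rng.gamma(shape=alpha + beta, scale=1.0, size=size)
    y = rng.gamma(shape=beta, scale=1.0, size=size) * sigma * x
    return x, y

x, y = rvs_macdonald_gamma(1.2, 2.0, 0.7, 100_000)
```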
In this article, we study distributions defined by the density (5), derive
properties such as marginal and conditional distributions, moments, entropies,
information matrix, and distributions of sum and quotient.
2 Properties
Let us now briefly discuss the shape of (5). The first order derivatives of
ln f (x, y; α, β, σ) with respect to x and y are
$$f_x(x,y) = \frac{\partial \ln f(x,y;\alpha,\beta,\sigma)}{\partial x} = \frac{\alpha-1}{x} + \frac{y}{\sigma x^{2}} - 1 \tag{6}$$

and

$$f_y(x,y) = \frac{\partial \ln f(x,y;\alpha,\beta,\sigma)}{\partial y} = \frac{\beta-1}{y} - \frac{1}{\sigma x}, \tag{7}$$
respectively. Setting (6) and (7) to zero, we note that (a, b), a = α + β − 2,
b = σ(β − 1)(α + β − 2) is a stationary point of (5). Computing second order
derivatives of ln f (x, y; α, β, σ), from (6) and (7), we get
$$f_{xx}(x,y) = \frac{\partial^{2} \ln f(x,y;\alpha,\beta,\sigma)}{\partial x^{2}} = -\frac{\alpha-1}{x^{2}} - \frac{2y}{\sigma x^{3}}, \tag{8}$$

$$f_{xy}(x,y) = \frac{\partial^{2} \ln f(x,y;\alpha,\beta,\sigma)}{\partial x\,\partial y} = \frac{1}{\sigma x^{2}}, \tag{9}$$

and

$$f_{yy}(x,y) = \frac{\partial^{2} \ln f(x,y;\alpha,\beta,\sigma)}{\partial y^{2}} = -\frac{\beta-1}{y^{2}}. \tag{10}$$
Further, from (8), (9) and (10), we get

$$f_{xx}(a,b) = -\frac{\alpha+2\beta-3}{(\alpha+\beta-2)^{2}}, \qquad f_{yy}(a,b) = -\frac{1}{\sigma^{2}(\beta-1)(\alpha+\beta-2)^{2}}, \tag{11}$$

and

$$f_{xx}(a,b)\,f_{yy}(a,b) - [f_{xy}(a,b)]^{2} = \frac{1}{\sigma^{2}(\beta-1)(\alpha+\beta-2)^{3}}. \tag{12}$$
Now, observe that
• If β > 1 and α + β > 2, then $f_{xx}(a,b)f_{yy}(a,b) - [f_{xy}(a,b)]^{2} > 0$, $f_{xx}(a,b) < 0$ and $f_{yy}(a,b) < 0$, and therefore (a, b) is a maximum point.

• If β < 1 and α + β < 2, then $f_{xx}(a,b)f_{yy}(a,b) - [f_{xy}(a,b)]^{2} > 0$, $f_{xx}(a,b) > 0$ and $f_{yy}(a,b) > 0$, and therefore (a, b) is a minimum point.

• If β < 1 and α + β > 2, then $f_{xx}(a,b)f_{yy}(a,b) - [f_{xy}(a,b)]^{2} < 0$, and therefore (a, b) is a saddle point.
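A finite-difference spot-check of (11) and (12) (ours, with assumed parameter values):

```python
# Check the Hessian of ln f at the stationary point (a, b) numerically.
import numpy as np

alpha, beta, sigma = 1.5, 2.5, 0.8          # beta > 1, alpha + beta > 2

def logf(x, y):
    # ln f up to the normalizing constant, which does not affect derivatives
    return (alpha - 1) * np.log(x) + (beta - 1) * np.log(y) - x - y / (sigma * x)

a = alpha + beta - 2
b = sigma * (beta - 1) * (alpha + beta - 2)
h = 1e-4
fxx = (logf(a + h, b) - 2 * logf(a, b) + logf(a - h, b)) / h**2
fyy = (logf(a, b + h) - 2 * logf(a, b) + logf(a, b - h)) / h**2
fxy = (logf(a + h, b + h) - logf(a + h, b - h)
       - logf(a - h, b + h) + logf(a - h, b - h)) / (4 * h**2)
print(fxx, -(alpha + 2 * beta - 3) / (alpha + beta - 2)**2)                     # (11)
print(fxx * fyy - fxy**2, 1 / (sigma**2 * (beta - 1) * (alpha + beta - 2)**3))  # (12)
```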
A distribution is said to be positively likelihood ratio dependent (PLRD) if the density f(x, y) satisfies

$$f(x_1, y_1)\, f(x_2, y_2) \geq f(x_1, y_2)\, f(x_2, y_1) \tag{13}$$

for all x₁ > x₂ and y₁ > y₂. In the present case, (13) is equivalent to

$$y_1 x_2 + x_1 y_2 \leq x_1 y_1 + x_2 y_2,$$

which clearly holds, since it can be rewritten as (x₁ − x₂)(y₁ − y₂) ≥ 0. Olkin and Liu [11] have listed a number of properties of PLRD distributions.
By definition, the product moments of X and Y associated with (5) are given by

$$\begin{aligned}
E(X^{r} Y^{s}) &= \int_0^{\infty}\!\!\int_0^{\infty} \frac{x^{\alpha+r-1} y^{\beta+s-1} \exp\left(-x - y/\sigma x\right)}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)}\, dy\, dx \\
&= \frac{\sigma^{s}\Gamma(\beta+s)}{\Gamma(\beta)\Gamma(\alpha+\beta)} \int_0^{\infty} x^{\alpha+\beta+r+s-1} \exp(-x)\, dx \\
&= \frac{\sigma^{s}\Gamma(\beta+s)\Gamma(\alpha+\beta+r+s)}{\Gamma(\beta)\Gamma(\alpha+\beta)},
\end{aligned} \tag{14}$$

where both steps have been derived by using the definition of the gamma function. For r = −s, the above expression reduces to

$$E(X^{-s} Y^{s}) = \frac{\sigma^{s}\Gamma(\beta+s)}{\Gamma(\beta)}, \tag{15}$$
which shows that Y/(σX) has a standard gamma distribution with shape parameter β. Substituting appropriately in (14), the means and variances of X and Y and the covariance between X and Y are computed as

$$E(X) = \alpha+\beta, \qquad E(Y) = \sigma\beta(\alpha+\beta),$$

$$\operatorname{Var}(X) = \alpha+\beta, \qquad \operatorname{Var}(Y) = \sigma^{2}\beta(\alpha+\beta)(\alpha+2\beta+1),$$

and

$$\operatorname{Cov}(X, Y) = \sigma\beta(\alpha+\beta).$$

The correlation coefficient between X and Y is given by

$$\rho_{X,Y} = \sqrt{\frac{\beta}{\alpha+2\beta+1}}.$$
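These moment formulas are easily spot-checked by simulation (ours, reusing the sampler sketched in Section 1):

```python
# Monte Carlo check of E(X), Var(Y) and the correlation coefficient.
import numpy as np

alpha, beta, sigma = 1.2, 2.0, 0.7
x, y = rvs_macdonald_gamma(alpha, beta, sigma, 1_000_000)
print(x.mean(), alpha + beta)
print(y.var(), sigma**2 * beta * (alpha + beta) * (alpha + 2 * beta + 1))
print(np.corrcoef(x, y)[0, 1], np.sqrt(beta / (alpha + 2 * beta + 1)))
```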
The variance-covariance matrix Σ of the random vector (X, Y) whose bivariate density is defined by (5) is given by

$$\Sigma = (\alpha+\beta)\begin{pmatrix} 1 & \sigma\beta \\ \sigma\beta & \sigma^{2}\beta(\alpha+2\beta+1) \end{pmatrix}.$$

Further, the inverse of the covariance matrix is given by

$$\Sigma^{-1} = \frac{1}{\sigma(\alpha+\beta)(\alpha+\beta+1)}\begin{pmatrix} \sigma(\alpha+2\beta+1) & -1 \\ -1 & \dfrac{1}{\beta\sigma} \end{pmatrix}.$$
The well-known Mahalanobis distance is given by

$$D^{2} = \frac{1}{\sigma(\alpha+\beta)(\alpha+\beta+1)}\left[\sigma(\alpha+2\beta+1)X^{2} - 2XY + \frac{Y^{2}}{\beta\sigma} - 2\sigma(\alpha+\beta)(\alpha+\beta+1)X + \sigma(\alpha+\beta)^{2}(\alpha+\beta+1)\right],$$

with E(D²) = 2 and

$$E(D^{4}) = \frac{2}{\beta(\alpha+\beta)(\alpha+\beta+1)}\Big[\beta(\alpha+\beta)(\alpha+\beta+4) + 3(\beta+1)(\alpha+\beta+2)(\alpha+\beta+3)\Big].$$
From the construction of the bivariate density (5), it is clear that Y ∼ M(α, β, σ), X ∼ G(α + β, 1), Y|x ∼ G(β, σx) and X|y ∼ EG(α, y/σ).
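These distributional identities can be checked by simulation; a sketch (ours, sampler from Section 1):

```python
# Kolmogorov-Smirnov checks: X ~ G(alpha+beta, 1) and Y/(sigma X) ~ G(beta, 1).
from scipy import stats

alpha, beta, sigma = 1.2, 2.0, 0.7
x, y = rvs_macdonald_gamma(alpha, beta, sigma, 50_000)
print(stats.kstest(x, stats.gamma(alpha + beta).cdf))
print(stats.kstest(y / (sigma * x), stats.gamma(beta).cdf))
```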
Making the transformation S = X + Y and R = Y/(X + Y), with the Jacobian G(y, x → r, s) = s, in (5), the joint density of R and S is given by

$$f_{R,S}(r, s; \alpha, \beta, \sigma) = \frac{(1-r)^{\alpha-1} r^{\beta-1} s^{\alpha+\beta-1} \exp\left[-s + rs - r/\sigma(1-r)\right]}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)},$$

where 0 < r < 1 and s > 0. Now, integrating out s by using the gamma integral, the marginal density of R is derived as

$$f_R(r; \alpha, \beta, \sigma) = \frac{(1-r)^{-\beta-1} r^{\beta-1} \exp\left[-r/\sigma(1-r)\right]}{\sigma^{\beta}\Gamma(\beta)}, \quad 0 < r < 1.$$
From the above density it can easily be shown that R/[σ(1 − R)] = Y/(σX) has a standard gamma distribution with shape parameter β. By integrating out r, the marginal density of S is derived as

$$f_S(s; \alpha, \beta, \sigma) = \frac{s^{\alpha+\beta-1}\exp(-s)}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)} \int_0^{1} (1-r)^{\alpha-1} r^{\beta-1} \exp\left[rs - \frac{r}{\sigma(1-r)}\right] dr.$$
Now, writing

$$\exp\left[-\frac{r}{\sigma(1-r)}\right] = (1-r)\sum_{m=0}^{\infty} r^{m}\, L_m\!\left(\frac{1}{\sigma}\right),$$
where $L_m(x)$ is the Laguerre polynomial of degree m, and integrating r by using the integral representation of the confluent hypergeometric function, we get the marginal density of S as
$$\begin{aligned}
f_S(s; \alpha, \beta, \sigma) &= \frac{s^{\alpha+\beta-1}\exp(-s)}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)} \sum_{m=0}^{\infty} L_m\!\left(\frac{1}{\sigma}\right) \int_0^{1} (1-r)^{\alpha+1-1} r^{\beta+m-1} \exp(rs)\, dr \\
&= \frac{\Gamma(\alpha+1)\, s^{\alpha+\beta-1}\exp(-s)}{\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)} \sum_{m=0}^{\infty} L_m\!\left(\frac{1}{\sigma}\right) \frac{\Gamma(\beta+m)}{\Gamma(\alpha+\beta+m+1)}\, {}_1F_1(\beta+m;\, \alpha+\beta+m+1;\, s), \quad s > 0,
\end{aligned}$$

where α + 1 > 0, β > 0 and σ > 0.
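A numerical sketch (ours; truncation length chosen ad hoc, and convergence of the series can be slow) comparing the truncated Laguerre series with direct quadrature of the r-integral:

```python
# Density of S = X + Y: truncated Laguerre series versus quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, gammaln, hyp1f1

alpha, beta, sigma = 1.2, 2.0, 0.7

def f_S_series(s, terms=200):
    log_c = ((alpha + beta - 1) * np.log(s) - s - beta * np.log(sigma)
             - gammaln(beta) - gammaln(alpha + beta) + gammaln(alpha + 1))
    total = sum(eval_laguerre(m, 1 / sigma)
                * np.exp(gammaln(beta + m) - gammaln(alpha + beta + m + 1))
                * hyp1f1(beta + m, alpha + beta + m + 1, s)
                for m in range(terms))
    return np.exp(log_c) * total

def f_S_quad(s):
    g = lambda r: ((1 - r)**(alpha - 1) * r**(beta - 1)
                   * np.exp(r * s - r / (sigma * (1 - r))))
    val, _ = quad(g, 0, 1)
    return val * np.exp((alpha + beta - 1) * np.log(s) - s - beta * np.log(sigma)
                        - gammaln(beta) - gammaln(alpha + beta))

print(f_S_series(2.0), f_S_quad(2.0))
```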
3 Central Moments
By definition, the (i, j)-th central joint moment of (X, Y ) is given by
$$\mu_{ij} = E\left[(X-\mu_X)^{i}\,(Y-\mu_Y)^{j}\right].$$
For different values of i and j, expressions for μ_ij are given by

$$\begin{aligned}
\mu_{30} &= 2(\alpha+\beta),\\
\mu_{21} &= 2\sigma\beta(\alpha+\beta),\\
\mu_{12} &= 2\sigma^{2}\beta(\alpha+\beta)(\alpha+2\beta+1),\\
\mu_{03} &= 2\sigma^{3}\beta(\alpha+\beta)\big[(\alpha+2\beta+1)^{2} + (\beta+1)(\alpha+\beta+1)\big],\\
\mu_{40} &= 3(\alpha+\beta)(\alpha+\beta+2),\\
\mu_{31} &= 3\sigma\beta(\alpha+\beta)(\alpha+\beta+2),\\
\mu_{22} &= \sigma^{2}\beta(\alpha+\beta)\big[3\beta + (\alpha+\beta+1)(\alpha+4\beta+6)\big],\\
\mu_{13} &= 3\sigma^{3}\beta(\alpha+\beta)\big[2(\beta+1)(\alpha+\beta+1)(\alpha+2\beta+2) - \beta(\alpha+\beta)(\alpha+2\beta+1)\big],\\
\mu_{04} &= 3\sigma^{4}\beta(\alpha+\beta)\big[2(\beta+1)(\alpha+\beta+1)(\alpha+2\beta+2)(\alpha+2\beta+3) - \beta(\alpha+\beta)(\alpha+2\beta+1)^{2}\big],\\
\mu_{50} &= 4(\alpha+\beta)(5\alpha+5\beta+6),\\
\mu_{41} &= 4\sigma\beta(\alpha+\beta)(5\alpha+5\beta+6),\\
\mu_{32} &= 4\sigma^{2}\beta(\alpha+\beta)\big[4\alpha+6 + (\alpha+\beta+2)(2\alpha+7\beta)\big].
\end{aligned}$$
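Two of these expressions, spot-checked by simulation (ours, sampler from Section 1):

```python
# Monte Carlo check of mu_30 and mu_21.
import numpy as np

alpha, beta, sigma = 1.2, 2.0, 0.7
x, y = rvs_macdonald_gamma(alpha, beta, sigma, 2_000_000)
dx, dy = x - x.mean(), y - y.mean()
print((dx**3).mean(), 2 * (alpha + beta))                      # mu_30
print((dx**2 * dy).mean(), 2 * sigma * beta * (alpha + beta))  # mu_21
```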
4 Entropies
In this section, exact forms of Rényi and Shannon entropies are determined
for the bivariate distribution defined in Section 1.
Let (X, B, P) be a probability space. Consider a p.d.f. f associated with P, dominated by a σ-finite measure μ on X. Denote by H_SH(f) the well-known Shannon entropy introduced in Shannon [13]. It is defined by

$$H_{SH}(f) = -\int_{\mathcal{X}} f(x)\ln f(x)\, d\mu. \tag{16}$$
One of the main extensions of the Shannon entropy was defined by Rényi [12]. This generalized entropy measure is given by

$$H_R(\eta, f) = \frac{\ln G(\eta)}{1-\eta} \quad (\text{for } \eta > 0 \text{ and } \eta \neq 1), \tag{17}$$

where

$$G(\eta) = \int_{\mathcal{X}} f^{\eta}\, d\mu.$$
The additional parameter η is used to describe complex behavior in probability
models and the associated process under study. Rényi entropy is monotonically
decreasing in η, while Shannon entropy (16) is obtained from (17) for η ↑ 1.
For details see Nadarajah and Zografos [10], Zografos and Nadarajah [15] and
Zografos [14].
Now, we derive the Rényi and the Shannon entropies for the bivariate
density defined in (5).
Theorem 4.1. For the bivariate distribution defined by the p.d.f. (5), the Rényi and the Shannon entropies are given by

$$H_R(\eta, f) = \frac{1}{1-\eta}\Big[\ln\Gamma[\eta(\beta-1)+1] + \ln\Gamma[\eta(\alpha+\beta-2)+2] - (\eta-1)\ln\sigma - [\eta(\alpha+2\beta-3)+3]\ln\eta - \eta\ln\Gamma(\beta) - \eta\ln\Gamma(\alpha+\beta)\Big] \tag{18}$$

and

$$H_{SH}(f) = -\big[(\beta-1)\psi(\beta) + (\alpha+\beta-2)\psi(\alpha+\beta) - \ln\sigma - (\alpha+2\beta) - \ln\Gamma(\beta) - \ln\Gamma(\alpha+\beta)\big]. \tag{19}$$
Proof. For η > 0 and η ≠ 1, using the p.d.f. of (X, Y) given by (5), we have

$$\begin{aligned}
G(\eta) &= \int_0^{\infty}\!\!\int_0^{\infty} [f(x, y; \alpha, \beta, \sigma)]^{\eta}\, dy\, dx \\
&= \frac{1}{[\sigma^{\beta}\Gamma(\beta)\Gamma(\alpha+\beta)]^{\eta}} \int_0^{\infty}\!\!\int_0^{\infty} x^{\eta(\alpha-1)} y^{\eta(\beta-1)} \exp\left(-\eta x - \frac{\eta y}{\sigma x}\right) dy\, dx \\
&= \frac{\Gamma[\eta(\beta-1)+1]}{\sigma^{\eta-1}\,\eta^{\eta(\beta-1)+1}\,[\Gamma(\beta)\Gamma(\alpha+\beta)]^{\eta}} \int_0^{\infty} x^{\eta(\alpha+\beta-2)+1} \exp(-\eta x)\, dx \\
&= \frac{\Gamma[\eta(\beta-1)+1]\,\Gamma[\eta(\alpha+\beta-2)+2]}{\sigma^{\eta-1}\,\eta^{\eta(\alpha+2\beta-3)+3}\,[\Gamma(\beta)\Gamma(\alpha+\beta)]^{\eta}},
\end{aligned}$$
where, to evaluate the above integrals, we have used the definition of the gamma function. Now, taking the logarithm of G(η) and using (17), we get (18). The Shannon entropy (19) is obtained from (18) by taking η ↑ 1 and using L'Hôpital's rule.
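A numerical sanity check of (18) (ours, with assumed parameter values), computing G(η) by double quadrature:

```python
# Rényi entropy: closed form (18) versus direct numerical integration.
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gammaln

alpha, beta, sigma, eta = 1.2, 2.0, 0.7, 1.5

def logf(x, y):
    return ((alpha - 1) * np.log(x) + (beta - 1) * np.log(y) - x - y / (sigma * x)
            - beta * np.log(sigma) - gammaln(beta) - gammaln(alpha + beta))

G, _ = dblquad(lambda y, x: np.exp(eta * logf(x, y)), 0, np.inf, 0, np.inf)
H_quad = np.log(G) / (1 - eta)
H_closed = (gammaln(eta * (beta - 1) + 1) + gammaln(eta * (alpha + beta - 2) + 2)
            - (eta - 1) * np.log(sigma)
            - (eta * (alpha + 2 * beta - 3) + 3) * np.log(eta)
            - eta * gammaln(beta) - eta * gammaln(alpha + beta)) / (1 - eta)
print(H_quad, H_closed)
```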
5 Fisher Information Matrix
In this section we calculate the Fisher information matrix for the bivariate
distribution defined by the density (5). The information matrix plays a significant role in statistical inference in connection with estimation, sufficiency and
properties of variances of estimators. For a given observation vector (x, y), the
Fisher information matrix for the bivariate distribution defined by the density
(5) is defined as
 2
2
2

∂ ln L(α,β,σ)
∂ ln L(α,β,σ)
L(α,β,σ)
E
E
E ∂ ln ∂α
2
∂ β∂α
∂σ ∂α


 2
2
2



L(α,β,σ)
E ∂ ln ∂β
E ∂ ln∂βL(α,β,σ)
−  E ∂ ln∂βL(α,β,σ)
,
2
∂α
∂σ



2
2
 2
L(α,β,σ)
L(α,β,σ)
E ∂ ln∂σL(α,β,σ)
E ∂ ln∂σ∂
E ∂ ln ∂σ
2
∂α
β
where L(α, β, σ) = f(x, y; α, β, σ) denotes the likelihood. From (5), the natural logarithm of L(α, β, σ) is obtained as

$$\ln L(\alpha, \beta, \sigma) = -\beta\ln\sigma - \ln\Gamma(\beta) - \ln\Gamma(\alpha+\beta) + (\alpha-1)\ln x + (\beta-1)\ln y - x - \frac{y}{\sigma x},$$
where x > 0 and y > 0. The second order partial derivatives of ln L(α, β, σ)
are given by
$$\begin{aligned}
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha^{2}} &= -\psi_1(\alpha+\beta), \\
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\beta^{2}} &= -\psi_1(\beta) - \psi_1(\alpha+\beta), \\
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\sigma^{2}} &= \frac{\beta}{\sigma^{2}} - \frac{2y}{\sigma^{3}x}, \\
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha\,\partial\beta} &= -\psi_1(\alpha+\beta), \\
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha\,\partial\sigma} &= 0, \\
\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\beta\,\partial\sigma} &= -\frac{1}{\sigma},
\end{aligned}$$
where ψ1 (·) is the trigamma function. Now, noting that the expected value of a
constant is the constant itself and Y /σX follows a standard gamma distribution
with shape parameter β, we have
$$\begin{aligned}
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha^{2}}\right] &= -\psi_1(\alpha+\beta), \\
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\beta^{2}}\right] &= -\psi_1(\beta) - \psi_1(\alpha+\beta), \\
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\sigma^{2}}\right] &= -\frac{\beta}{\sigma^{2}}, \\
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha\,\partial\beta}\right] &= -\psi_1(\alpha+\beta), \\
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\alpha\,\partial\sigma}\right] &= 0, \\
E\left[\frac{\partial^{2}\ln L(\alpha,\beta,\sigma)}{\partial\beta\,\partial\sigma}\right] &= -\frac{1}{\sigma}.
\end{aligned}$$
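Collecting these expectations, the Fisher information matrix (a small step the text leaves implicit) is therefore

$$\mathcal{I}(\alpha, \beta, \sigma) = \begin{pmatrix}
\psi_1(\alpha+\beta) & \psi_1(\alpha+\beta) & 0 \\[1mm]
\psi_1(\alpha+\beta) & \psi_1(\beta) + \psi_1(\alpha+\beta) & \dfrac{1}{\sigma} \\[1mm]
0 & \dfrac{1}{\sigma} & \dfrac{\beta}{\sigma^{2}}
\end{pmatrix}.$$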
6 Estimation
The density given by (5) is parameterized by (α, β, σ). Here, we consider
estimation of these three parameters by the method of maximum likelihood.
Suppose (x₁, y₁), …, (xₙ, yₙ) is a random sample from (5). The log-likelihood can be expressed as

$$\ln L(\alpha, \beta, \sigma) = -n\beta\ln\sigma - n\ln\Gamma(\beta) - n\ln\Gamma(\alpha+\beta) + (\alpha-1)\sum_{i=1}^{n}\ln x_i + (\beta-1)\sum_{i=1}^{n}\ln y_i - \sum_{i=1}^{n}\left(x_i + \frac{y_i}{\sigma x_i}\right).$$
The first-order derivatives of this with respect to the three parameters are

$$\frac{\partial\ln L(\alpha,\beta,\sigma)}{\partial\alpha} = -n\psi(\alpha+\beta) + \sum_{i=1}^{n}\ln x_i,$$

$$\frac{\partial\ln L(\alpha,\beta,\sigma)}{\partial\beta} = -n\ln\sigma - n\psi(\beta) - n\psi(\alpha+\beta) + \sum_{i=1}^{n}\ln y_i,$$

and

$$\frac{\partial\ln L(\alpha,\beta,\sigma)}{\partial\sigma} = -\frac{n\beta}{\sigma} + \frac{1}{\sigma^{2}}\sum_{i=1}^{n}\frac{y_i}{x_i}.$$
The maximum likelihood estimators of (α, β, σ), say (α̂, β̂, σ̂), are the simultaneous solutions of the above three equations. It is straightforward to see that β̂ can be calculated by solving numerically the equation

$$\psi(\hat\beta) - \ln\hat\beta = \frac{1}{n}\sum_{i=1}^{n}\ln\left(\frac{y_i}{x_i}\right) - \ln\left(\frac{1}{n}\sum_{i=1}^{n}\frac{y_i}{x_i}\right).$$
Further, for β̂ so obtained, σ̂ and α̂ can be computed by solving numerically the equations

$$\hat\beta\hat\sigma = \frac{1}{n}\sum_{i=1}^{n}\frac{y_i}{x_i}$$

and

$$\psi(\hat\alpha + \hat\beta) = \frac{1}{n}\sum_{i=1}^{n}\ln x_i,$$
respectively. Using the expansion of the digamma function, namely,

$$\psi(z) = \ln\left(z - \frac{1}{2} + \frac{1}{24z} + \frac{1}{48z^{2}} + \frac{23}{5760z^{3}} - \frac{17}{3840z^{4}} - \frac{10099}{2903040z^{5}} + \cdots\right),$$

an approximation for β̂ can be given as

$$\hat\beta = \frac{1}{2}\left(1 - \frac{\tilde q}{\bar q}\right)^{-1},$$

where $\bar q = \sum_{i=1}^{n} q_i/n$, $\tilde q = \left(\prod_{i=1}^{n} q_i\right)^{1/n}$ and $q_i = y_i/x_i$, $i = 1, \ldots, n$. Using this
estimate of β, the estimates of σ and α are given by
$$\hat\sigma = \frac{\bar q}{\hat\beta} = 2\bar q\left(1 - \frac{\tilde q}{\bar q}\right) = 2(\bar q - \tilde q)$$

and

$$\hat\alpha = \tilde x + \frac{1}{2} - \hat\beta = \tilde x - \frac{1}{2}\left(\frac{\bar q}{\tilde q} - 1\right)^{-1},$$

respectively, where $\tilde x = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$.
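The estimation recipe above translates directly into code. The following sketch (ours; function names and the optimizer choice are illustrative) uses the closed-form approximations as starting values for a full numerical maximization of the log-likelihood:

```python
# Maximum likelihood for (alpha, beta, sigma): approximate closed-form
# starting values, then Nelder-Mead on the exact log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(theta, x, y):
    alpha, beta, sigma = theta
    if beta <= 0 or sigma <= 0 or alpha + beta <= 0:   # keep parameters admissible
        return np.inf
    n = len(x)
    return -(-n * beta * np.log(sigma) - n * gammaln(beta) - n * gammaln(alpha + beta)
             + (alpha - 1) * np.log(x).sum() + (beta - 1) * np.log(y).sum()
             - (x + y / (sigma * x)).sum())

def fit(x, y):
    q = y / x
    q_bar = q.mean()                       # arithmetic mean of q_i
    q_til = np.exp(np.log(q).mean())       # geometric mean of q_i
    x_til = np.exp(np.log(x).mean())       # geometric mean of x_i
    beta0 = 0.5 / (1 - q_til / q_bar)      # approximate beta-hat
    sigma0 = q_bar / beta0                 # sigma-hat = q-bar / beta-hat
    alpha0 = x_til + 0.5 - beta0           # from psi(alpha+beta) = ln(x_til)
    res = minimize(neg_loglik, [alpha0, beta0, sigma0], args=(x, y),
                   method="Nelder-Mead")
    return res.x

x, y = rvs_macdonald_gamma(1.2, 2.0, 0.7, 20_000)      # sampler from Section 1
print(fit(x, y))
```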
Acknowledgements. The research work of DKN and LES was supported by
the Sistema Universitario de Investigación, Universidad de Antioquia under
the project no. IN10231CE.
References
[1] H. Aksoy, Use of gamma distribution in hydrological analysis, Turkish
Journal of Engineering and Environmental Sciences, 24 (2000), 419–428.
[2] M. Aslam Chaudhry and Syed M. Zubair, On the decomposition of generalized incomplete gamma functions with applications to Fourier transforms, Journal of Computational and Applied Mathematics, 59 (1995),
253–284. http://dx.doi.org/10.1016/0377-0427(94)00026-w
[3] M. A. Chaudhry and S. M. Zubair, Generalized incomplete gamma functions with applications, Journal of Computational and Applied Mathematics, 55 (1994), 99–123.
http://dx.doi.org/10.1016/0377-0427(94)90187-2
[4] M. Aslam Chaudhry and Syed M. Zubair, Extended gamma and digamma
functions, Fractional Calculus and Applied Analysis, 4 (2001), no. 3, 303–
324.
[5] M. Aslam Chaudhry and Syed M. Zubair, Extended incomplete gamma
functions with applications, Journal of Mathematical Analysis and Applications, 274 (2002), no. 2, 725–745.
http://dx.doi.org/10.1016/s0022-247x(02)00354-2
[6] M. Aslam Chaudhry and Syed M. Zubair, On an extension of generalized
incomplete gamma functions with applications, Journal of the Australian
Mathematical Society, Series B (Applied Mathematics), 37 (1996), no. 3,
392–405. http://dx.doi.org/10.1017/s0334270000010730
[7] Daya K. Nagar, Alejandro Roldán-Correa and Arjun K. Gupta, Extended
matrix variate gamma and beta functions, Journal of Multivariate Analysis, 122 (2013), 53–69. http://dx.doi.org/10.1016/j.jmva.2013.07.001
[8] Daya K. Nagar, Alejandro Roldán-Correa and Arjun K. Gupta, Matrix
variate Macdonald distribution, Communications in Statistics - Theory
and Methods, 45 (2016), no. 5, 1311–1328.
http://dx.doi.org/10.1080/03610926.2013.861494
[9] J. G. Robson and J. B. Troy, Nature of the maintained discharge of Q,
X, and Y retinal ganglion cells of the cat, Journal of the Optical Society
of America A, 4 (1987), 2301–2307.
http://dx.doi.org/10.1364/josaa.4.002301
[10] S. Nadarajah and K. Zografos, Expressions for Rényi and Shannon entropies for bivariate distributions, Information Sciences, 170 (2005),
no. 2-4, 173–189. http://dx.doi.org/10.1016/j.ins.2004.02.020
A bivariate distribution whose marginal laws are gamma and Macdonald
467
[11] Ingram Olkin and Ruixue Liu, A bivariate beta distribution, Statistics &
Probability Letters, 62 (2003), no. 4, 407–412.
http://dx.doi.org/10.1016/s0167-7152(03)00048-8
[12] A. Rényi, On measures of entropy and information, Proceedings of the
Fourth Berkeley Symposium on Mathematical Statistics and Probability,
Vol. I, Univ. California Press, Berkeley, Calif., (1961), 547–561.
[13] C. E. Shannon, A mathematical theory of communication, Bell System
Technical Journal, 27 (1948), 379–423, 623–656.
http://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x
http://dx.doi.org/10.1002/j.1538-7305.1948.tb00917.x
[14] K. Zografos, On maximum entropy characterization of Pearson’s type II
and VII multivariate distributions, Journal of Multivariate Analysis, 71
(1999), no. 1, 67–75. http://dx.doi.org/10.1006/jmva.1999.1824
[15] K. Zografos and S. Nadarajah, Expressions for Rényi and Shannon entropies for multivariate distributions, Statistics & Probability Letters, 71
(2005), no. 1, 71–84. http://dx.doi.org/10.1016/j.spl.2004.10.023
Received: February 23, 2016; Published: April 1, 2016