Statistical Methodology 15 (2013) 1–24
A class of distributions with the linear mean residual quantile function and its generalizations
N.N. Midhu, P.G. Sankaran ∗ , N. Unnikrishnan Nair
Department of Statistics, Cochin University of Science and Technology, Cochin-22, India
Article info
Article history: Received 5 October 2012; Received in revised form 20 March 2013; Accepted 27 March 2013
Keywords:
Quantile function
Quantile density function
Linear mean residual quantile function
L-moments
abstract
In the present paper, we introduce and study a class of distributions
that has the linear mean residual quantile function. Various
distributional properties and reliability characteristics of the class
are studied. Some characterizations of the class of distributions
are presented. We then present generalizations of this class of
distributions using the relationship between various quantile
based reliability measures. The method of L-moments is employed
to estimate parameters of the class of distributions. Finally, we
apply the proposed class of distributions to a real data set.
© 2013 Elsevier B.V. All rights reserved.
1. Introduction
In modeling and analysis of statistical data with probability distributions, there are two equivalent
approaches, one is through the distribution function and the other is through the quantile function
defined by
Q(u) = F^{-1}(u) = inf{x : F(x) ≥ u},  0 ≤ u ≤ 1,  (1.1)
where F(x) is the distribution function of the random variable X. Even though both convey the same information about the distribution with different interpretations, the concepts and methodologies based on distribution functions are habitually employed in most forms of statistical studies. However, quantile functions have several properties that are not shared by distribution functions, which make them more convenient for analysis. For example, the sum of two quantile functions is again a quantile function.
∗ Corresponding author.
http://dx.doi.org/10.1016/j.stamet.2013.03.002
For more properties and applications of quantile functions one could refer to Parzen [25], Gilchrist [3],
Nair et al. [19] and Nair and Sankaran [17].
In reliability studies, the distribution function F (x), the associated survival function F̄ (x) = 1 − F (x)
and the probability density function f (x) along with various other characteristics such as hazard rate,
mean, percentiles and higher moments of residual life etc, are used for understanding the generating
mechanism of the lifetime data and to distinguish between various models through aging properties.
Among those various concepts, the mean residual function is a well known measure, which has been
widely used in the fields of reliability, statistics, survival analysis and insurance. For a non-negative
random variable X, the mean residual life function is defined as
m(x) = E(X − x | X > x) = (1/F̄(x)) ∫_x^∞ F̄(t) dt.
It is interpreted as the expected remaining lifetime of a unit given survival up to time x. Muth [14]
and Guess and Proschan [4] discuss the basic results and various applications of the mean residual life
function. In an alternative approach Nair and Sankaran [16] view the mean residual life function as the
expectation of the conditional distribution of residual life given age, arising from the joint distribution
of age and residual life in renewal theory. Gupta and Kirmani [6] have provided characterizations
of lifetime distributions using the mean residual life function. The class of distributions with linear mean residual life has been studied by Hall and Wellner [7,8] and Gupta and Bradley [5]; it contains the Pareto, exponential and rescaled beta distributions. Oakes and Dasu [24] and Korwar [11] have developed characterizations for the class of linear mean residual life distributions. Chen and Cheng [1] and Nanda et al. [22] studied the proportional mean residual life model for the analysis of survival data.
Recently, Nair and Sankaran [15] have introduced the basic concepts in reliability theory in terms of
quantile functions. Let X be a non-negative random variable with distribution function F(x) satisfying F(0) = 0, where F(x) is continuous and strictly increasing. Nair and Sankaran [15] defined the mean residual quantile function, which is given by
M(u) = (1/(1 − u)) ∫_u^1 (Q(p) − Q(u)) dp.  (1.2)
We can interpret M (u) as the mean remaining life of a unit beyond the 100(1 − u)% of the distribution.
In the present study, we consider a class of distributions with the linear mean residual quantile
function given by
M(u) = cu + µ,  µ > 0, −µ ≤ c < µ, 0 ≤ u ≤ 1.  (1.3)
This class includes various well known distributions.
The rest of the article is organized as follows. In Section 2 we present a class of distributions with the linear mean residual quantile function and study its basic properties. Distributional properties such as measures of location and scale, L-moments and order statistics are given in Section 3. In Section 4 we present approximations of some well known distributions by the proposed class of distributions. In Section 5 we present various reliability characteristics of the class of distributions and provide four characterization theorems. In Section 6 we present general classes of distributions of which the class with the linear mean residual quantile function is a member. Section 7 focuses on inference procedures for the class of distributions with the linear mean residual quantile function. Finally we provide an application of this class of distributions in a real life situation.
2. A class of distributions
As mentioned in Section 1, we consider a class of distributions with
M (u) = cu + µ.
Using the fact that M(u) uniquely determines Q(u) (see Nair and Sankaran [15]) through
Q(u) = M(0) − M(u) + ∫_0^u M(p)/(1 − p) dp,  (2.1)
we obtain the quantile function as
Q(u) = −(c + µ) log(1 − u) − 2cu,  µ > 0, −µ ≤ c < µ, 0 ≤ u ≤ 1.  (2.2)
For the class of distributions, the quantile density function q(u) = dQ(u)/du is of the form
q(u) = (c + µ)/(1 − u) − 2c.  (2.3)
Since Q(u) is increasing and continuous, q(u) must be non-negative; this gives u > (c − µ)/(2c). The support of the distribution is
(Q(0), Q(1)) = (0, ∞).
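Because the class is specified directly through Q(u) and q(u), it is straightforward to evaluate these functions and to simulate observations by the inverse-transform method. The following minimal sketch is our own illustration, not part of the paper; the function names and the parameter choice µ = 2, c = 0.5 are assumptions.

import numpy as np

def Q(u, mu, c):
    """Quantile function (2.2): Q(u) = -(c + mu) log(1 - u) - 2 c u."""
    return -(c + mu) * np.log1p(-u) - 2.0 * c * u

def q(u, mu, c):
    """Quantile density function (2.3): q(u) = (c + mu)/(1 - u) - 2 c."""
    return (c + mu) / (1.0 - u) - 2.0 * c

def rvs(n, mu, c, rng=None):
    """Inverse-transform sampling: X = Q(U) with U ~ Uniform(0, 1)."""
    rng = np.random.default_rng(rng)
    return Q(rng.uniform(size=n), mu, c)

if __name__ == "__main__":
    mu, c = 2.0, 0.5          # illustrative values with -mu <= c < mu
    x = rvs(100_000, mu, c, rng=1)
    print(x.mean())           # should be close to the mean mu = 2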
For this class, the density function f(x) can be written as
f(x) = (1 − F(x)) / (2cF(x) − c + µ)  (2.4)
and from (2.4) we have the derivative of f(x) as
f′(x) = −(c + µ)f(x) / (2cF(x) − c + µ)² ≤ 0,
which shows that the density is always non-increasing (there is no mode or antimode).
The most rapidly decreasing member of the family is given by
Q(u) = −2µ log(1 − u) − 2µu
(the limiting case c = µ), and the least decreasing is the uniform distribution over (0, 2µ) (the case c = −µ). All other members lie between these two for a given value of µ. Plots of the density function for µ = 1, 2 and 3 are given in Fig. 1; the value of c is given at the top of each panel.

3. Properties
The quantile-based measures of the distributional characteristics (location, dispersion, skewness and kurtosis) are given by the median,
M = Q(1/2) = (c + µ) log(2) − c,  (3.1)
and the inter quartile range (IQR),
IQR = Q(3/4) − Q(1/4) = (c + µ) log(3) − c.
Galton's coefficient of skewness is given by
S = [Q(3/4) + Q(1/4) − 2M] / IQR = (c + µ) log(4/3) / [(c + µ) log(3) − c]  (3.2)
Fig. 1. Plot of the density function at different values of parameters.
and Moors' coefficient of kurtosis T is
T = [Q(7/8) − Q(5/8) + Q(3/8) − Q(1/8)] / IQR = [(c + µ) log(21/5) − c] / [(c + µ) log(3) − c].
The L-moments are often found to be more desirable than the conventional moments in describing
the characteristics of the distributions as well as for inference. The L-moments exist whenever E (X )
is finite, whereas for many distributions additional restrictions are required for the conventional
moments to be finite. The L-moments generally have lower sampling variances and are more robust against outliers. See Hosking [9] and Hosking and Wallis [10] for details. Descriptive measures of the class of
distributions (2.2) are expressed in terms of the first four L-moments λr , r = 1, 2, 3, 4. Of these λ1 is
the mean, given by
λ1 = ∫_0^1 Q(u) du = µ.  (3.3)
The second L-moment is obtained as
λ2 = ∫_0^1 (2u − 1)Q(u) du = (1/6)(c + 3µ).  (3.4)
The third and fourth L-moments are obtained as
λ3 = ∫_0^1 (6u² − 6u + 1)Q(u) du = (c + µ)/6,  (3.5)
and
λ4 = ∫_0^1 (20u³ − 30u² + 12u − 1)Q(u) du = (c + µ)/12.  (3.6)
Note that λ4 = (1/2)λ3, which helps to identify approximately whether the distribution is appropriate.
The L-coefficient of variation, analogous to the coefficient of variation based on ordinary moments, is given by τ2 = λ2/λ1, and for the class of distributions in (2.2), τ2 is
τ2 = (1/6)(c/µ + 3).
Note that τ2 lies in [1/3, 2/3). To measure the skewness of the distribution we use the L-coefficient of skewness τ3 = λ3/λ2, which is obtained as
τ3 = (c + µ)/(c + 3µ).
Since c + µ > 0, τ3 > 0 so that the distribution is positively skewed. Note that τ3 is an increasing function of c; the skewness increases from 0 to 1/2 as c increases from −µ to µ. The coefficient of kurtosis is τ4 = λ4/λ2, which is given by
τ4 = (c + µ)/(2c + 6µ).
Note that τ4 = (1/2)τ3, so τ4 is also increasing in c and increases from 0 to 1/4 as c increases from −µ to µ.
When c = 0 the distribution is exponential with
Q(u) = −µ log(1 − u),
and when c = −µ the distribution is uniform(0, 2µ) with
Q(u) = 2µu.
When µ = −(1 + 3c) and c ≤ −1/2, Q(u) can also be viewed as a skewed sum of the uniform and exponential quantile functions, that is,
Q(u) = αu + (1 − α) log(1 − u),
where α = −2c.
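The expressions (3.3)-(3.6) can be verified numerically by integrating Q(u) against the corresponding shifted Legendre weight functions. The sketch below is our own check under the assumed values µ = 2, c = 0.5; it is not taken from the paper.

import numpy as np

mu, c = 2.0, 0.5                      # assumed illustrative values
N = 1_000_000
u = (np.arange(N) + 0.5) / N          # midpoint grid on (0, 1)
Q = -(c + mu) * np.log1p(-u) - 2.0 * c * u

lam1 = Q.mean()                                  # approximates the integral in (3.3)
lam2 = ((2 * u - 1) * Q).mean()                  # (3.4)
lam3 = ((6 * u**2 - 6 * u + 1) * Q).mean()       # (3.5)
lam4 = ((20 * u**3 - 30 * u**2 + 12 * u - 1) * Q).mean()   # (3.6)

print(lam1, mu)                       # lambda_1 = mu
print(lam2, (c + 3 * mu) / 6)         # lambda_2 = (c + 3 mu)/6
print(lam3, (c + mu) / 6)             # lambda_3 = (c + mu)/6
print(lam4, lam3 / 2)                 # lambda_4 = lambda_3 / 2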
For the class of distributions the expected values of order statistics are in simple forms. If Xr :n
denotes the rth order statistic in a random sample of size n, then the density function of Xr :n can
be written as
fr(x) = [1/B(r, n − r + 1)] f(x) F^{r−1}(x)(1 − F(x))^{n−r}.
For the class of distributions (2.2), fr(x) is given by
fr(x) = [1/B(r, n − r + 1)] F(x)^{r−1}(1 − F(x))^{n−r+1} / (2cF(x) − c + µ),
and hence
E(Xr:n) = [1/B(r, n − r + 1)] ∫_0^∞ x F(x)^{r−1}(1 − F(x))^{n−r+1} / (2cF(x) − c + µ) dx.
In quantile terms,
E(Xr:n) = [1/B(r, n − r + 1)] ∫_0^1 Q(u) u^{r−1}(1 − u)^{n−r} du
        = (c + µ)(Hn − Hn−r) − 2cr/(n + 1),  (3.7)
where Hn is the nth harmonic number, Hn = Σ_{i=1}^{n} 1/i. In particular,
E(X1:n) = [c(1 − n) + µ(1 + n)] / [n(1 + n)]
and
E(Xn:n) = (c + µ)Hn − 2cn/(n + 1).
The quantile functions of X1:n and Xn:n are respectively given by
Q1(u) = Q(1 − (1 − u)^{1/n}) = −(c + µ) log((1 − u)^{1/n}) − 2c(1 − (1 − u)^{1/n})
and
Qn(u) = Q(u^{1/n}) = −(c + µ) log(1 − u^{1/n}) − 2c u^{1/n}.
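Equation (3.7) is easy to check against simulation, since samples can be drawn through Q(u). The following sketch is our own illustration; the sample size n = 10, the rank r = 3 and the parameter values are assumptions.

import numpy as np

mu, c, n, r = 2.0, 0.5, 10, 3      # illustrative values
Q = lambda u: -(c + mu) * np.log1p(-u) - 2.0 * c * u

# Closed form (3.7): E(X_{r:n}) = (c + mu)(H_n - H_{n-r}) - 2 c r / (n + 1).
H = lambda m: sum(1.0 / i for i in range(1, m + 1))
exact = (c + mu) * (H(n) - H(n - r)) - 2.0 * c * r / (n + 1)

# Monte Carlo check: r-th order statistic of samples generated through Q(U).
rng = np.random.default_rng(0)
samples = Q(rng.uniform(size=(200_000, n)))
mc = np.sort(samples, axis=1)[:, r - 1].mean()

print(exact, mc)   # the two values should agree closely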
4. Linear mean residual quantile distribution (LMRQD) — approximations of some distributions
In general, the quantile function (2.2) cannot be converted to find a tractable form for its
distribution function, except in a few cases mentioned above. Therefore, the relationship of the LMRQD with other known standard distributions can be assessed only through approximations. The advantage of seeking such cases is justified from the analytical and practical points of view. In data situations where a linear mean residual quantile function is observed, the logical choice is the LMRQD, since it is the only distribution possessing that property, and not any other model, however good the fit may be. The usual procedure in data modeling is to choose one among the
candidate distributions, estimate the parameters and then carry out a goodness of fit. If the choice
is not adequate, the same step is undergone with another model, sometimes with a different
strategy for estimation and checking model adequacy. When we have a quantile function that
provides approximation to many types of distributions, only one functional form for Q (u) and the
related inferential aspects are sufficient for modeling and analysis, as the quantile function will
adapt automatically to the suitable model. Finally, it is comparatively much easier to generate
simulated observations from a quantile function than from a distribution function. To find such
approximations, there is a need for some criterion that will enable us to identify particular members
of the family from the given data. Such a criterion derives from the bounds on the variance of the
family,
µ²/3 ≤ σ² < 7µ²/3,
where µ is the mean and σ² is the variance. This suggests that ι² = σ²/µ², the square of the coefficient of variation, can be used to distinguish the exact members as well as distributions that are close approximations of the linear mean residual quantile distribution. Examples of LMRQD approximations to some well known distributions are given below. The approximations are made by equating moments of the LMRQD and the corresponding distributions. It may be noted that neither the list of distributions nor the parameter values chosen in the discussion is exhaustive.
Fig. 2. The p.d.f. of Weibull and p.d.f of LMRQD.
4.1. The Weibull distribution
The Weibull distribution has p.d.f.
f(x) = (α/β)(x/β)^{α−1} e^{−(x/β)^α},  x > 0.
The mean of the Weibull distribution is given by α1 = βΓ(1 + 1/α) and the variance is α2 = β²[Γ(1 + 2/α) − Γ²(1 + 1/α)]. For the linear mean residual quantile distribution the mean is µ and the variance is c²/3 + cµ + µ². Equating α1 = µ and α2 = c²/3 + cµ + µ², we get µ and c in terms of α and β as
µ = βΓ(1 + 1/α)  (4.1)
and
c = (1/2)[√(3β²(4Γ(1 + 2/α) − 5Γ²(1 + 1/α))) − 3βΓ(1 + 1/α)],  (4.2)
so that
ι² = Γ(1 + 2/α)/Γ²(1 + 1/α) − 1.
We can approximate the Weibull distribution when α ≤ 1 and β > 0. For α ≤ 1, ι² lies between 1 and 7/3. The p.d.f.'s of Weibull(0.9, 2) and the corresponding LMRQD(2.1043, 0.4679) are given in Fig. 2.
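The moment matching behind (4.1) and (4.2) amounts to solving a quadratic in c. The sketch below is a hypothetical implementation of that matching (the function name lmrqd_from_weibull is ours); for Weibull(0.9, 2) it reproduces approximately the values LMRQD(2.1043, 0.4679) quoted above.

from math import gamma, sqrt

def lmrqd_from_weibull(alpha, beta):
    """Match the mean and variance of Weibull(alpha, beta) to the LMRQD,
    i.e. solve mu = beta*Gamma(1 + 1/alpha) and c^2/3 + c*mu + mu^2 = variance."""
    mu = beta * gamma(1.0 + 1.0 / alpha)                       # equation (4.1)
    var = beta**2 * (gamma(1.0 + 2.0 / alpha) - gamma(1.0 + 1.0 / alpha)**2)
    c = 0.5 * (sqrt(12.0 * var - 3.0 * mu**2) - 3.0 * mu)      # equivalent to (4.2)
    return mu, c

print(lmrqd_from_weibull(0.9, 2.0))   # approximately (2.1043, 0.4679), as in Fig. 2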
4.2. The Beta distribution
The beta distribution has p.d.f.
f(x) = x^{α−1}(1 − x)^{β−1} / B(α, β),  0 < x < 1.
The mean of the beta distribution is given by α1 = α/(α + β) and the variance is α2 = αβ/((α + β)²(α + β + 1)). Equating α1 = µ and α2 = c²/3 + cµ + µ², we get µ and c in terms of α and β as
µ = α/(α + β)
and
c = [√3 · √(−α(α + β)²(α + β + 1)(α(α + β + 1) − 4β)) − 3α(α + β)(α + β + 1)] / [2(α + β)²(α + β + 1)];
hence,
ι² = β / (α(α + β + 1)).
We can approximate the beta distribution when α ≤ 1 and β > 1. For the beta distribution, ι² covers the whole range 1/3 ≤ ι² < 7/3. The p.d.f.'s of beta(0.95, 4.5) and the corresponding LMRQD(0.174312, −0.0513383) are given in Fig. 3.
Fig. 3. The p.d.f. of beta and p.d.f of LMRQD.
4.3. The Gamma distribution
The gamma distribution has p.d.f.
f(x) = β^{−α} x^{α−1} e^{−x/β} / Γ(α),  x > 0.
The mean of the gamma distribution is given by α1 = αβ and the variance is α2 = αβ². Equating α1 = µ and α2 = c²/3 + cµ + µ², we get µ and c in terms of α and β as µ = αβ and
c = (1/2)[√(−3(α − 4)αβ²) − 3αβ];
therefore,
ι² = 1/α.
We can approximate the gamma distribution when α ≤ 1. For α ≤ 1, ι² lies between 1 and 7/3. The p.d.f.'s of gamma(0.8, 2) and the corresponding LMRQD(1.6, 0.37128) are given in Fig. 4.
Fig. 4. The p.d.f. of gamma and p.d.f of LMRQD.
Fig. 5. The p.d.f. of half-normal and p.d.f of LMRQD.
4.4. The half-normal distribution
The half-normal distribution has p.d.f.
f(x) = (2θ/π) e^{−x²θ²/π},  x > 0.
The mean of the half-normal distribution is α1 = 1/θ and the variance is α2 = (π − 2)/(2θ²). Equating α1 = µ and α2 = c²/3 + cµ + µ², we get µ and c in terms of θ as
µ = 1/θ
and
c = (√(6π − 15) − 3) / (2θ),
so that the value of ι² = π/2 − 1. The p.d.f.'s of half-normal(0.5) and the corresponding LMRQD(2, −1.038) are given in Fig. 5.
Fig. 6. LMRQD approximations of distributions in terms of ι2 = σ 2 /µ2 .
To conclude, we can summarize in terms of ι² values as follows:
(i) uniform distribution with ι² = 1/3;
(ii) exponential distribution with ι² = 1;
(iii) half-normal distribution with ι² = π/2 − 1 ≃ 0.57;
(iv) Weibull distribution with 1 ≤ ι² < 7/3;
(v) gamma distribution with 1 ≤ ι² < 7/3; and
(vi) beta distribution with the whole range 1/3 ≤ ι² < 7/3.
Distributions with no shape parameters are lines in Fig. 6 and others are regions. As in the case of the Ord family, the regions that approximate various distributions are not mutually exclusive. This is not surprising since the distributions involved can provide approximations to one another. Further, it may be noted that for DMRL (IMRL) distributions ι² ≤ (≥) 1 and hence the criterion used is also meaningful with reference to the aging classes defined by the mean residual quantile function. Among the family, the uniform distribution is the most DMRL and the distribution with ι² = 7/3 is the most (least) IMRL (DMRL) satisfied by the members of the beta family.
We illustrate the utility of the above discussion with the aid of a real data set. Chhikara and
Folks [2] have considered the data consisting of 46 observations on the repair time for an airborne
communication transceiver while investigating the role of inverse Gaussian distribution as a model of
lifetimes. They found that the log-normal and inverse Gaussian distributions provide good fits to the data and justified that the latter is more appropriate on a physical basis. We found that the Weibull distribution also provides a reasonable fit by the chi-square test, when the parameter values are α̂ = 0.898583 and β̂ = 3.39134, estimated by the method of maximum likelihood. The plot of the mean residual quantile function M(u) shown in Fig. 7 suggests the LMRQD also as a model. The LMRQD with parameters estimated by using (4.1) and (4.2), µ̂ = 3.57139 and ĉ = 0.806686, also provides a good fit. On the basis of the physical properties of the data generating mechanism, revealed by the form of M(u), the LMRQD is preferred over the other models (see Fig. 7). This fact is also supported empirically by the chi-square values obtained: the LMRQD provides the smallest chi-square value, 3.5234, while for the Weibull and inverse Gaussian the figures are 10.9565 and 8.7826 respectively.
Fig. 7. M(u) for the repair time data.
5. Reliability properties
For the class of distributions (2.2), the hazard quantile function given in Nair and Sankaran [15] is obtained as
H(u) = 1/((1 − u)q(u)) = 1/(µ + c(2u − 1)).
Note that H(0) = 1/(µ − c) and H(1) = 1/(µ + c). The distribution has strictly increasing hazard rate (IHR) if c < 0, strictly decreasing hazard rate (DHR) if c > 0 and constant hazard rate if c = 0. A lifetime X is new better (worse) than used in hazard rate (NBUHR (NWUHR)) if and only if H(0) ≤ (≥) H(u) for u ≥ 0. For the class of distributions, 1/(µ − c) < (>) 1/(µ + c(2u − 1)) for c < (>) 0, which implies NBUHR for c < 0 and NWUHR for c > 0, a weaker concept than IHR (DHR). Fig. 8 gives the shape of H(u) for µ = 1, 2 and 3, with the value of c given at the top of each panel.
The reversed hazard quantile function and the reversed mean residual quantile function defined
in Nair and Sankaran [15] have the expressions respectively as
A(u) = 1/(u q(u)) = 1/[u((c + µ)/(1 − u) − 2c)]  (5.1)
and
R(u) = (1/u) ∫_0^u p q(p) dp = −(c + µ) log(1 − u)/u − (cu + µ) − c.
The total time on test transform (TTT) is a widely accepted statistical tool, which has many applications
in reliability analysis (see Lai and Xie [13]). The quantile based TTT introduced in Nair et al. [18] has
the form
T(u) = ∫_0^u (1 − p)q(p) dp.
Fig. 8. Plot of H (u).
For the class of distributions (2.2), T (u) is
T(u) = cu² − cu + µu.
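The closed forms for H(u) and T(u) can be checked against their defining integrals. The following sketch is our own numerical verification under the assumed values µ = 2, c = 0.5; the last line checks the identity T(u) = µ − (1 − u)M(u) of Nair et al. [18] that is used in Theorem 5.1 below.

import numpy as np
from scipy import integrate

mu, c = 2.0, 0.5                          # illustrative values
q = lambda u: (c + mu) / (1.0 - u) - 2.0 * c
M = lambda u: c * u + mu

u = 0.4
H_u = 1.0 / ((1.0 - u) * q(u))            # hazard quantile function
print(H_u, 1.0 / (mu + c * (2 * u - 1)))  # closed form given above

T_u = integrate.quad(lambda p: (1.0 - p) * q(p), 0.0, u)[0]   # TTT transform
print(T_u, c * u**2 - c * u + mu * u)     # closed form for T(u)
print(T_u, mu - (1.0 - u) * M(u))         # identity T(u) = mu - (1 - u) M(u)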
For the class of distributions (2.2), the first L-moment of X | X > t, the vitality function E(X | X > t) studied by Kupka and Loo [12], is given by
α1(u) = (1/(1 − u)) ∫_u^1 Q(p) dp = −(c + µ) log(1 − u) − cu + µ.  (5.2)
The second L-moment of residual life (Nair and Vineshkumar [20]) is obtained as
α2(u) = (1/(1 − u)²) ∫_u^1 (2p − u − 1)Q(p) dp = (1/6)(2cu + c + 3µ).  (5.3)
In reversed time, the first L-moment of X | X ≤ x can be obtained as
E(X | X ≤ x) = θ1(u) = (1/u) ∫_0^u Q(p) dp = (1 − u)(c + µ) log(1 − u)/u + c(1 − u) + µ  (5.4)
and the second L-moment is
θ2(u) = (1/u²) ∫_0^u p R(p) dp = [u(c(6 − u(2u + 3)) − 3µ(u − 2)) − 6(u − 1)(c + µ) log(1 − u)] / (6u²).  (5.5)
The variance residual quantile function V(u) is defined as
V(u) = (1/(1 − u)) ∫_u^1 M²(p) dp.
For the class of distributions (2.2),
V(u) = (1/(3c))[c³u² + (c³ + 3c²µ)u + c³ + 3c²µ + 3cµ²].
Note that V(u) is quadratic in u. Residual life can also be described in terms of percentiles through the percentile residual life function. From (2.2), we have
Pα(u) = Q(1 − (1 − α)(1 − u)) − Q(u) = 2αc(u − 1) − log(1 − α)(c + µ)  (5.6)
and the reversed percentile residual life function is [21]
Rα(u) = Q(u) − Q(u(1 − α)) = (c + µ) log((α − 1)u + 1) − 2αcu − (c + µ) log(1 − u).  (5.7)
It is customary to characterize life distributions by the relationships among reliability concepts. In the
same spirit we prove the following characterization theorems.
Theorem 5.1. A random variable X has the linear mean residual quantile function if and only if
T (u) = u[M (u) − c ].
(5.8)
Proof. Suppose that the identity (5.8) is true. From Nair et al. [18] we have
T (u) = µ − (1 − u)M (u).
(5.9)
From (5.8) and (5.9), we get
M (u) = cu + µ
which is linear.
Conversely, for the class of distributions (2.2) we have
T(u) = cu² − cu + µu = u[(cu + µ) − c] = u[M(u) − c],
which completes the proof.
Theorem 5.2. A random variable X has the linear mean residual quantile function if and only if H(u) is a reciprocal linear function of u, that is,
H(u) = 1/(A + Bu),  (5.10)
where A > 0, B is a real constant and A ≠ µ.
Proof. Suppose that (5.10) holds. Since Q(u) can be uniquely determined from H(u) by the relation
Q(u) = ∫_0^u 1/((1 − p)H(p)) dp,
we obtain
Q(u) = ∫_0^u (A + Bp)/(1 − p) dp = −(A + B) log(1 − u) − Bu,
which is the same as (2.2) with B = 2c and A = µ − c > 0. Conversely, for the class of distributions (2.2) we have
H(u) = 1/(µ + c(2u − 1)),
which can be written as
H(u) = 1/((µ − c) + 2cu),
which is reciprocal linear. This completes the proof.
Remark 5.1. M(u)H(u) = (cu + µ)/((µ − c) + 2cu) is a bilinear function of u. M(u)H(u) = constant characterizes the generalized Pareto distribution. This result cannot be obtained by the distribution function approach.
Theorem 5.3. Let X be a non-negative random variable with quantile function Q(u). Then X has the linear mean residual quantile distribution if and only if
α2(u) = αu + β.  (5.11)
Proof. We have
α2(u) = (1/(1 − u)²) ∫_u^1 (1 − p)M(p) dp,
so that
(1 − u)²α2(u) = ∫_u^1 (1 − p)M(p) dp.
Differentiating and simplifying we get
M(u) = 3αu + 2β − α,
which is linear. Conversely, for the class of distributions in (2.2), α2(u) from (5.3) with c = 3α and µ = 2β − α gives (5.11). This completes the proof.
Remark 5.2. Both V(u) and α2(u) measure the dispersion of residual life. V(u) is quadratic and α2(u) is linear in the mean residual life. Comparing the two, their behaviors are different and α2(u) is less sensitive to variations.
An important topic useful in the analysis of the aging phenomenon is that of equilibrium distributions. The equilibrium distribution associated with X is defined by the survival function
Ḡ(x) = (1/µ) ∫_x^∞ F̄(t) dt.  (5.12)
We denote the random variable corresponding to (5.12) by Z. Then we have a relation between the hazard quantile function of Z and the mean residual quantile function of X as
HZ(u) = 1/MX(u),  (5.13)
showing that the hazard quantile function of the equilibrium random variable is the reciprocal of the mean residual quantile function of the baseline distribution. Families of distributions for which X and Z have the same functional form are of interest. Among distribution functions, the generalized Pareto distribution is one such family, where HZ MZ = constant. Our family is a new distribution possessing this property, with HZ MX not a constant but a function of u. Distributions in which the product of the hazard function and the mean residual function is a prescribed function of x have not been characterized completely; see some cases in Navarro et al. [23]. We again have a result in the same direction, but one that cannot be solved in terms of distribution functions.
Theorem 5.4. The random variable X is distributed with
QX (u) = −(c + µ) log(1 − u) − 2cu
if and only if Z has the quantile function
QZ (u) = −(c + µ) log(1 − u) − cu.
Proof. Assume that X is distributed with
QX(u) = −(c + µ) log(1 − u) − 2cu.
Then the mean residual quantile function of X is
MX(u) = cu + µ.
Using the relation between the hazard quantile function of Z and the mean residual quantile function of X, from (5.13) we have
(1 − u)HZ(u) = (1 − u)/(cu + µ).
Thus
QZ(u) = ∫_0^u 1/((1 − p)HZ(p)) dp = ∫_0^u (cp + µ)/(1 − p) dp = −(c + µ) log(1 − u) − cu.
Conversely, suppose that Z is distributed with
QZ(u) = −(c + µ) log(1 − u) − cu;
then the quantile density function of Z is
qZ(u) = (d/du)QZ(u) = (µ + c)/(1 − u) − c
and
HZ(u) = 1/((1 − u)qZ(u)) = 1/(cu + µ).
This completes the proof.
Remark 5.3. If X has the linear mean residual quantile function, then Theorem 5.4 shows that the equilibrium distribution remains in the same family. The nth order equilibrium quantile function defined in Nair et al. [18] corresponding to (2.2) can be written as
QZn(u) = −2cn u − (cn + µn) log(1 − u),
where µn = c(1 − 2^{−n}) + µ and cn = c 2^{−n}. Note that as n → ∞ the distribution of Zn converges to the exponential distribution with parameter c + µ.
6. Some general classes of distributions
In this section we derive some general classes of distributions using the relationship between
H (u), α2 (u) and M (u).
Theorem 6.1. Let X be a non-negative random variable with quantile function Q (u) and quantile density
function q(u). Then the hazard quantile function H (u) and mean residual quantile function M (u) of X
satisfy the relationship
H(u) = (A + BM(u))^{−1}  (6.1)
for all 0 ≤ u ≤ 1 if and only if
Q(u) = [A/(B − 1)] log(1 − u) + [K/(B − 1)](1 − (1 − u)^{B−1}),  (6.2)
where B ≠ 1, A and K are real constants.
Proof. From (6.1), we have
A + BM(u) = 1/H(u),
or
A + [B/(1 − u)] ∫_u^1 (1 − p)q(p) dp = (1 − u)q(u),
that is,
A(1 − u) + B ∫_u^1 (1 − p)q(p) dp = (1 − u)²q(u).  (6.3)
Differentiating (6.3) with respect to u we get
A = (2 − B)(1 − u)q(u) − (1 − u)²q′(u),
which provides
q(u) = K(1 − u)^{B−2} + A/((1 − B)(1 − u)).  (6.4)
Integrating from 0 to u, we recover (6.2). Conversely, if (6.2) holds we have (6.4), so that
H(u) = 1/(K(1 − u)^{B−1} − A/(B − 1))
and
M(u) = (1/(1 − u)) ∫_u^1 (1 − p)q(p) dp = K(1 − u)^{B−1}/B − A/(B − 1),
so that A + BM(u) = K(1 − u)^{B−1} − A/(B − 1) = 1/H(u), which is (6.1). This completes the proof.
Remark 6.1. For A < 0 and B > 1, Q1(u) = [A/(B − 1)] log(1 − u) is the quantile function of the exponential distribution with mean A/(1 − B), and Q2(u) = [K/(B − 1)](1 − (1 − u)^{B−1}) is the quantile function of the rescaled beta distribution. Thus (6.2) can be written as Q(u) = Q1(u) + Q2(u).
Remark 6.2. When B = 2, A = −(µ + c) and K = −2c, (6.2) becomes the quantile function (2.2) that has the linear mean residual quantile function.
Remark 6.3. As K → 0 we have the exponential distribution with mean A/(1 − B), and as A → 0 with B > 1 we have the rescaled beta distribution with mean K/B. In these cases we obtain
H(u) = 1/M(u)  and  H(u) = 1/(BM(u)),  B > 1,
respectively.
Remark 6.4. When A < 0 and B < 1 we obtain
Q(u) = Q1(u) + Q3(u),
where Q3(u) is the quantile function of the Pareto II distribution with
F̄(x) = [(x + K/(1 − B)) / (K/(1 − B))]^{−1/(1−B)}.
As A → 0 with B < 1 we have the Pareto II distribution.
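The characterization in Theorem 6.1 can be verified numerically for particular constants. The sketch below is our own check (the values A = −2, B = 2, K = −1 are an arbitrary admissible choice, corresponding to the linear mean residual case with c = 0.5 and µ = 1.5); it compares the two sides of (6.1) at a single point u.

from scipy import integrate

A, B, K = -2.0, 2.0, -1.0     # illustrative constants with B != 1
q = lambda u: K * (1 - u) ** (B - 2) + A / ((1 - B) * (1 - u))   # equation (6.4)

u = 0.3
H_u = 1.0 / ((1.0 - u) * q(u))                                   # hazard quantile function
M_u = integrate.quad(lambda p: (1.0 - p) * q(p), u, 1.0)[0] / (1.0 - u)  # mean residual quantile function
print(H_u, 1.0 / (A + B * M_u))   # the two sides of (6.1) should coincide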
Now we develop two further general classes of distributions using the relationship between α2(u) and M(u), and between α2(u) and H(u). The class of distributions with the linear mean residual quantile function is a member of both of these general classes.
Theorem 6.2. Let X be a non-negative random variable with quantile function Q(u). Then α2(u) = AM(u) + B, A ≠ 1/2, if and only if
M(u) = 2B/(1 − 2A) + K(1 − u)^{(1−2A)/A}.
Proof. Assume that α2(u) = AM(u) + B. Then α2(u) = (1/(1 − u)²) ∫_u^1 (1 − p)M(p) dp gives
(1 − u)²(AM(u) + B) = ∫_u^1 (1 − p)M(p) dp.
Differentiating with respect to u we get
(1 − u)²AM′(u) − 2(1 − u)(AM(u) + B) = −(1 − u)M(u),
A(1 − u)M′(u) − 2(AM(u) + B) = −M(u),
A(1 − u)M′(u) + (1 − 2A)M(u) = 2B,
M′(u) + [(1 − 2A)/(A(1 − u))] M(u) = 2B/(A(1 − u)).
Multiplying by (1 − u)^{(2A−1)/A},
M′(u)(1 − u)^{(2A−1)/A} + [(1 − 2A)/A] M(u)(1 − u)^{(2A−1)/A − 1} = (2B/A)(1 − u)^{(2A−1)/A − 1},
(d/du)[M(u)(1 − u)^{(2A−1)/A}] = (2B/A)(1 − u)^{(2A−1)/A − 1}.
On integrating,
M(u)(1 − u)^{(2A−1)/A} = [2B/(1 − 2A)](1 − u)^{(2A−1)/A} + K,
thus
M(u) = 2B/(1 − 2A) + K(1 − u)^{(1−2A)/A},  A ≠ 1/2.  (6.5)
To find Q(u), using the relation (2.1), Q(u) is given by
Q(u) = [−2B/(1 − 2A)] log(1 − u) − K[(1 − A)/(1 − 2A)](1 − u)^{(1−2A)/A}.  (6.6)
For the distribution (6.6),
α2(u) = B/(1 − 2A) + AK(1 − u)^{(1−2A)/A},  A ≠ 1/2,
and hence from (6.5),
α2(u) = AM(u) + B.
Thus the converse is also true.
Remark 6.5. The model (2.2) is the special case when K = −c, B = (c + µ)/6 and A = 1/3. For the distribution (6.6) the quantile function can be written as Q(u) = Q1(u) + Q2(u), where
Q1(u) = [−2B/(1 − 2A)] log(1 − u)
is an exponential quantile function with mean 2B/(1 − 2A), and
Q2(u) = −K[(1 − A)/(1 − 2A)](1 − u)^{(1−2A)/A}
is of Pareto form with α = 2A − 1 > 0 and σ = K(1 − A)/(1 − 2A), provided A > 1/2, with K > 0 for 1/2 < A < 1 and K < 0 for A > 1.
Theorem 6.3. Let X be a non-negative random variable with quantile function Q(u). Then α2(u) = A/H(u) + B, A ≠ 1/2, if and only if
M(u) = 2B/(1 − 2A) + c1(1 − u)^{(−3√A − √(A+4))/(2√A)} + c2(1 − u)^{(√(A+4) − 3√A)/(2√A)}.
Proof. Assume that α2(u) = A/H(u) + B. But we have the identities
1/H(u) = M(u) − (1 − u)M′(u)
and α2(u) = (1/(1 − u)²) ∫_u^1 (1 − p)M(p) dp, which give
(1 − u)²(A[M(u) + (u − 1)M′(u)] + B) = ∫_u^1 (1 − p)M(p) dp.
Differentiating with respect to u we get
2(u − 1)[2A(u − 1)M′(u) + AM(u) + B] + A(u − 1)³M′′(u) = −(1 − u)M(u).
On simplifying we get
−A(1 − u)[4M′(u) − (1 − u)M′′(u)] + (2A − 1)M(u) = −2B.
This is a second order differential equation with linear symmetries. On solving we get M(u) as
M(u) = 2B/(1 − 2A) + c1(1 − u)^{(−3√A − √(A+4))/(2√A)} + c2(1 − u)^{(√(A+4) − 3√A)/(2√A)}.  (6.7)
To find Q(u), using the relation (2.1) and writing r1 = −(3√A + √(A + 4))/(2√A) and r2 = (√(A + 4) − 3√A)/(2√A) for the exponents in (6.7) (with the arbitrary constants now denoted k1 and k2), Q(u) is given by
Q(u) = [−2B/(1 − 2A)] log(1 − u) + k1(1 + 1/r1)(1 − (1 − u)^{r1}) + k2(1 + 1/r2)(1 − (1 − u)^{r2}).  (6.8)
The quantile density function becomes
q(u) = [2B/(1 − 2A)]/(1 − u) + k1(1 + r1)(1 − u)^{r1 − 1} + k2(1 + r2)(1 − u)^{r2 − 1},
so that the hazard quantile function is H(u) = 1/((1 − u)q(u)). For the distribution (6.8), direct integration gives
α2(u) = (1/(1 − u)²) ∫_u^1 (1 − p)M(p) dp = B/(1 − 2A) + k1(1 − u)^{r1}/(r1 + 2) + k2(1 − u)^{r2}/(r2 + 2).
Since r1 and r2 satisfy A(r² + 3r + 2) = 1, we have 1/(ri + 2) = A(1 + ri), so that
α2(u) = B/(1 − 2A) + A[k1(1 + r1)(1 − u)^{r1} + k2(1 + r2)(1 − u)^{r2}] = A[M(u) − (1 − u)M′(u)] + B,
and hence
α2(u) = A/H(u) + B.
This completes the proof.
Remark 6.6. The model (2.2) is the special case when A = 1/6, B = c/3 + µ/3, k1 = 0 and k2 = −c.
7. Inference and application
In the literature, there are different methods for the estimation of parameters of quantile functions (see Gilchrist [3]). The method of minimum absolute deviation, the method of least squares, the method of maximum likelihood and the method of L-moments are commonly used techniques. Like the method of maximum likelihood, the method of L-moments generally gives biased estimates, although the bias is comparatively very small in moderate or large samples. Further, with small and moderate samples the method of L-moments is more efficient than MLE. For these and other details of the properties of L-moment estimates, see Hosking and Wallis [10]. To estimate the parameters of the family (2.2), we use the method of L-moments. Let x1, x2, . . . , xn be a random sample of size n with quantile function (2.2).
Since there are two parameters in the proposed model, we take the first two sample L-moments, which are given by
ℓ1 = (1/n) Σ_{i=1}^{n} xi:n  (7.1)
and
ℓ2 = (1/2) (n choose 2)^{−1} Σ_{i=1}^{n} [(i − 1 choose 1) − (n − i choose 1)] xi:n = [1/(n(n − 1))] Σ_{i=1}^{n} (2i − n − 1) xi:n,  (7.2)
where xi:n is the ith order statistic.
On equating the sample L-moments to the population L-moments we get the estimates of µ and c as
µ̂ = ℓ1 = (1/n) Σ_{i=1}^{n} xi:n  (7.3)
and
ĉ = 6ℓ2 − 3ℓ1 = [6 Σ_{i=1}^{n} (2i − n − 1)xi:n] / (n² − n) − (3/n) Σ_{i=1}^{n} xi:n.  (7.4)
It can be easily proved from (3.7) that E µ̂ = µ and E ĉ = c, so that µ̂ and ĉ are unbiased estimators.
Generally, quantile functions are less frequently used in modeling due to the difficulties in estimating
the parameters. For example in the case of generalized lambda distribution, computational work is
quite involved in all methods with more than one set of estimates obtained as solutions. In the case of
LMRQD, estimating equations in the method of L-moments are linear so that estimates are uniquely
and quite easily determined. The method of maximum likelihood usually employed for competing
models mentioned in Section 4 is computationally more involved. Like the method of maximum
likelihood, method of L-moments generally give biased estimates, although the bias is comparatively
very small in moderate or large samples. Further with small and moderate samples the method of
L-moments is more efficient than MLE. For these and other details of the properties of L-moment
estimates, see Hosking and Wallis [10].
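Because (7.3) and (7.4) are linear in the order statistics, the estimation is a one-liner in practice. The following sketch is our own illustration; the helper name lmoment_estimates, the seed and the parameter values are assumptions, not from the paper.

import numpy as np

def lmoment_estimates(x):
    """Estimate (mu, c) of the LMRQD by the method of L-moments, (7.3)-(7.4)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    l1 = x.mean()                                     # first sample L-moment (7.1)
    l2 = ((2 * i - n - 1) * x).sum() / (n * (n - 1))  # second sample L-moment (7.2)
    return l1, 6.0 * l2 - 3.0 * l1                    # mu_hat, c_hat

# quick check on data simulated from the model itself
rng = np.random.default_rng(7)
mu, c = 2.0, 0.5
u = rng.uniform(size=5000)
x = -(c + mu) * np.log1p(-u) - 2.0 * c * u            # inverse-transform sample from (2.2)
print(lmoment_estimates(x))     # should be close to (2.0, 0.5)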
7.1. Asymptotic properties
Hosking (1990) has studied asymptotic properties of L-moment estimates. The following theorem
provides asymptotic normality of sample L-moments.
Theorem 7.1 (Hosking (1990)). Let X be a real-valued random variable with quantile function Q(u, θ), where θ is a vector of m parameters, L-moments λr and finite variance. Let ℓr, r = 1, 2, . . . , m, be the sample L-moments calculated from a random sample of size n drawn from the distribution of X. Then √n(ℓr − λr), r = 1, 2, . . . , m, converge in distribution to the multivariate normal N(0, Λ), where the elements Λrs (r, s = 1, 2, . . . , m) of Λ are given by
Λrs = ∫∫_{0<u<v<1} {P*_{r−1}(u)P*_{s−1}(v) + P*_{s−1}(u)P*_{r−1}(v)} u(1 − v) q(u)q(v) du dv,
where P*_r(x) is the rth shifted Legendre polynomial defined by
P*_r(x) = Σ_{k=0}^{r} (−1)^{r−k} (r choose k)(r + k choose k) x^k.
For the class of distributions in (2.2), using Theorem 7.1, we have
√n (ℓ1 − λ1, ℓ2 − λ2)′ ∼ N(0, Λ),  (7.5)
where
Λ11 = c²/3 + µc + µ²,  Λ12 = Λ21 = (1/18)(c + µ)(5c + 9µ),  Λ22 = (1/45)(11c² + 25µc + 15µ²).
Note that from (7.3) and (7.4), µ̂ and ĉ are linear functions of ℓ1 and ℓ2. It is easy to show from (7.5) that
n^{1/2}(µ̂ − µ) ∼ N(0, c²/3 + µc + µ²)
and
n^{1/2}(ĉ − c) ∼ N(0, 9c²/5 + cµ + 3µ²).
We can construct 100(1 − α)% confidence intervals for µ and c as
CIµ = µ̂ ± zα/2 √[(ĉ²/3 + ĉµ̂ + µ̂²)/n]
and
CIc = ĉ ± zα/2 √[(9ĉ²/5 + ĉµ̂ + 3µ̂²)/n],
where zα/2 is the 100(1 − α/2)th percentile of the standard normal distribution. Since the sample L-moments are consistent estimators of the population L-moments (see Hosking (1990)), µ̂ and ĉ are consistent estimators of µ and c.
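The confidence intervals above are simple to compute once µ̂ and ĉ are available. The sketch below is our own illustration (the function name is an assumption); it is applied, for illustration only, to the estimates µ̂ = 14.675, ĉ = −0.0182 and n = 20 reported for the electric cart data in Section 7.2.

from math import sqrt
from scipy.stats import norm

def confidence_intervals(mu_hat, c_hat, n, alpha=0.05):
    """Normal-theory 100(1 - alpha)% intervals for mu and c, using the asymptotic
    variances (c^2/3 + c*mu + mu^2)/n and (9c^2/5 + c*mu + 3mu^2)/n."""
    z = norm.ppf(1.0 - alpha / 2.0)
    se_mu = sqrt((c_hat**2 / 3.0 + c_hat * mu_hat + mu_hat**2) / n)
    se_c = sqrt((9.0 * c_hat**2 / 5.0 + c_hat * mu_hat + 3.0 * mu_hat**2) / n)
    return ((mu_hat - z * se_mu, mu_hat + z * se_mu),
            (c_hat - z * se_c, c_hat + z * se_c))

print(confidence_intervals(14.675, -0.0182, 20))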
7.2. Data analysis
To illustrate the procedure of estimation and application of the class of distributions in a practical
situation, we consider a real data set from Zimmer et al. [26]. The data consist of time to first failure
of 20 electric carts. We estimated the parameters using the method of L-moments and the estimates
of the parameters are
µ̂ = 14.675 and ĉ = −0.0182.
Since the estimate ĉ < 0, M(u) is decreasing in u (see Fig. 9) and H(u) is increasing (see Fig. 10). To check the goodness of fit, we use the Q-Q plot given in Fig. 11, which shows that the proposed class of distributions is appropriate for the given data set.
7.3. Simulation study
To study finite sample properties of the estimates, we generated random samples from (2.2) with
various parameter combinations of µ and c. We considered sample sizes n = 25, 50 and 100. The
parameters of the model are estimated using the method of L-moments. Tables 1 and 2 represent
the empirical bias and mean squared error (MSE) of the estimates based on 1000 simulations. The
empirical coverage probabilities for different parametric combinations are also given. From the tables,
it is easy to see that the estimates have small bias and small MSE. Both bias and MSE decrease when
sample size increases, as expected. The results show that the estimates are virtually unbiased. The
nominal 95% confidence intervals have proper coverage probabilities.
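A study of this kind is straightforward to reproduce, since sampling from (2.2) only requires the quantile function. The sketch below is our own outline of one such experiment; the function names, the seed and the choice of 1000 replications follow the description above, but the code itself is not from the paper.

import numpy as np

def simulate_once(mu, c, n, rng):
    u = rng.uniform(size=n)
    x = -(c + mu) * np.log1p(-u) - 2.0 * c * u      # inverse-transform sample from (2.2)
    x = np.sort(x)
    i = np.arange(1, n + 1)
    l1 = x.mean()
    l2 = ((2 * i - n - 1) * x).sum() / (n * (n - 1))
    return l1, 6.0 * l2 - 3.0 * l1                  # (mu_hat, c_hat)

def empirical_bias_mse(mu, c, n, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    est = np.array([simulate_once(mu, c, n, rng) for _ in range(reps)])
    bias = est.mean(axis=0) - np.array([mu, c])
    mse = ((est - np.array([mu, c])) ** 2).mean(axis=0)
    return bias, mse

print(empirical_bias_mse(1.0, -0.2, 25))   # compare with the first rows of Table 1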
Fig. 9. M(u) for the data set (M(u) = 14.675 − 0.018u).
Fig. 10. H(u) for the data set.
Fig. 11. Q-Q plot for the data set (empirical versus theoretical quantiles).
Table 1
Simulation results for µ = 1.

c      n     Bias (µ)   Bias (c)   MSE (µ)   MSE (c)   Cov. prob. (µ)   Cov. prob. (c)
−0.2   25    0.0032     0.0130     0.0322    0.1224    0.935            0.944
−0.2   50    0.0029     −0.0087    0.0163    0.0602    0.956            0.950
−0.2   100   −0.0015    0.0018     0.0080    0.0274    0.958            0.956
−0.1   25    0.0073     0.0092     0.0336    0.1193    0.946            0.946
−0.1   50    0.0047     0.0055     0.0181    0.0618    0.957            0.954
−0.1   100   0.0007     −0.0045    0.0093    0.0318    0.951            0.943
0      25    0.0076     −0.0014    0.0387    0.1391    0.942            0.948
0      50    −0.0135    0.0038     0.0194    0.0658    0.949            0.947
0      100   −0.0033    0.0003     0.0105    0.0322    0.951            0.952
0.1    25    −0.0023    0.0090     0.0466    0.1395    0.955            0.950
0.1    50    −0.0022    0.0006     0.0219    0.0655    0.937            0.958
0.1    100   −0.0018    −0.0005    0.0106    0.0310    0.945            0.955
0.2    25    0.0112     −0.0048    0.0442    0.1350    0.945            0.952
0.2    50    0.0102     0.0034     0.0235    0.0645    0.953            0.953
0.2    100   0.0022     0.0014     0.0123    0.0338    0.943            0.944
Table 2
Simulation results for µ = 2.

c      n     Bias (µ)   Bias (c)   MSE (µ)   MSE (c)   Cov. prob. (µ)   Cov. prob. (c)
−0.4   25    0.0207     0.0225     0.1419    0.4822    0.950            0.949
−0.4   50    0.0086     0.0054     0.0637    0.2375    0.944            0.943
−0.4   100   −0.0062    −0.0046    0.0335    0.1151    0.944            0.944
−0.2   25    −0.0268    0.0006     0.1580    0.5620    0.945            0.952
−0.2   50    −0.0008    −0.0124    0.0705    0.2310    0.954            0.956
−0.2   100   0.0165     −0.0035    0.0351    0.1231    0.952            0.947
0      25    −0.0112    0.0332     0.1718    0.4954    0.939            0.945
0      50    −0.0095    0.0158     0.0840    0.2765    0.945            0.955
0      100   −0.0094    0.0093     0.0351    0.1234    0.943            0.929
0.2    25    0.0131     −0.0028    0.1736    0.4947    0.940            0.964
0.2    50    0.0101     −0.0094    0.0876    0.2826    0.947            0.951
0.2    100   −0.0033    0.0011     0.0481    0.1303    0.932            0.953
0.4    25    −0.0126    0.0350     0.1924    0.5301    0.952            0.944
0.4    50    −0.0121    0.0297     0.1055    0.2827    0.944            0.956
0.4    100   −0.0082    0.0045     0.0469    0.1350    0.947            0.958
8. Conclusion
In the present study, we have introduced a class of distributions (2.2) with the linear mean residual quantile function and have studied its various properties. It is observed that several existing well known distributions can be obtained as special cases of the proposed class. Various reliability characteristics are discussed and we have derived useful characterizations connecting identities among M(u), H(u) and T(u). We have also derived some general classes of distributions using the relationships among H(u), α2(u) and M(u). The estimation of the parameters of the model using L-moments was studied and the model was applied to a real data set. A small simulation study was conducted to assess the finite sample properties of the estimates.
Acknowledgments
We thank the referee and the associate editor for their constructive comments.
References
[1] Y.Q. Chen, S. Cheng, Semiparametric regression analysis of mean residual life with censored survival data, Biometrika 92
(1) (2005) 19–29.
[2] R.S. Chhikara, J.L. Folks, The inverse Gaussian distribution as a lifetime model, Technometrics 19 (4) (1977) 461–468.
[3] W. Gilchrist, Statistical Modelling with Quantile Functions, CRC Press, Abingdon, 2000.
[4] F. Guess, F. Proschan, Mean residual life: theory and applications, Quality Control and Reliability (1988) 215.
[5] R.C. Gupta, D.M. Bradley, Representing the mean residual life in terms of the failure rate, Mathematical and Computer
Modelling 37 (12–13) (2003) 1271–1280.
[6] R.C. Gupta, S.N.U.A. Kirmani, Some characterization of distributions by functions of failure rate and mean residual life,
Communications in Statistics. Theory and Methods 33 (12) (2004) 3115–3131.
[7] W.J. Hall, J.A. Wellner, Mean residual life, in: Statistics and Related Topics, North-Holland, Amsterdam, 1981, pp. 169–184.
[8] W.J. Hall, J.A. Wellner, Mean residual life, in: Proceedings of the International Symposium on Statistics and Related Topics,
North Holland, Amsterdam, 1984, pp. 169–184.
[9] J.R.M. Hosking, Some theoretical results concerning L-moments, Research Report RC 14492 (revised), IBM Research Division, Yorktown Heights, New York, 1996.
[10] J.R.M. Hosking, J.R. Wallis, Regional Frequency Analysis, Cambridge Univ. Press, Cambridge, UK, 1997.
[11] R.M. Korwar, A characterization of the family of distributions with a linear mean residual life function, Sankhyā: The Indian
Journal of Statistics, Series B (1992) 257–260.
[12] J. Kupka, S. Loo, The hazard and vitality measures of ageing, Journal of Applied Probability 26 (3) (1989) 532–542.
[13] C.D. Lai, M. Xie, Stochastic Ageing and Dependence for Reliability, Springer Science+Business Media, 2006.
[14] E.J. Muth, Reliability models with positive memory derived from the mean residual life function, 1977.
[15] N.U. Nair, P.G. Sankaran, Quantile-based reliability analysis, Communications in Statistics. Theory and Methods 38 (2)
(2009) 222–232.
[16] N.U. Nair, P.G. Sankaran, Properties of a mean residual life function arising from renewal theory, Naval Research Logistics
(NRL) 57 (4) (2010) 373–379.
[17] N.U. Nair, P.G. Sankaran, Some new applications of the total time on test transforms, Statistical Methodology 10 (1) (2013)
93–102.
[18] N.U. Nair, P.G. Sankaran, B. Vineshkumar, Total time on test transforms of order n and their implications in reliability
analysis, Journal of Applied Probability 45 (4) (2008) 1126–1139.
[19] N.U. Nair, P.G. Sankaran, B. Vineshkumar, Modelling lifetimes by quantile functions using Parzen's score function, 2011.
[20] N.U. Nair, B. Vineshkumar, L-moments of residual life, Journal of Statistical Planning and Inference 140 (9) (2010)
2618–2631.
[21] N.U. Nair, B. Vineshkumar, Reversed percentile residual life and related concepts, Journal of the Korean Statistical Society
40 (1) (2011) 85–92.
[22] A. Nanda, S. Bhattacharjee, S. Alam, Properties of proportional mean residual life model, Statistics & Probability Letters 76
(9) (2006) 880–890.
[23] J. Navarro, J.M. Ruiz, C.J. Sandoval, Some characterizations of multivariate distributions using products of the hazard gradient and mean residual life components, Statistics 41 (1) (2007) 85–91. http://dx.doi.org/10.1080/02331880601012900.
[24] D. Oakes, T. Dasu, A note on residual life, Biometrika 77 (2) (1990) 409–410.
[25] E. Parzen, Nonparametric statistical data modeling, Journal of the American Statistical Association 74 (1979) 105.
[26] W.J. Zimmer, J.B. Keats, F.K. Wang, The Burr XII distribution in reliability analysis, Journal of Quality Technology 30 (4) (1998) 386–394.