The Canadian Journal of Statistics
Vol. 28, No. ?, 2000, Pages ???-???
La revue canadienne de statistique
On the distribution of linear
combinations of the components
of a Dirichlet random vector
Serge B. PROVOST and Young-Ho CHEONG
The University of Western Ontario
Key words and phrases: Cliff-Ord test, coefficient of determination, Dirichlet distribution, Geary index, model identification, Moran index, periodogram, quadratic
forms, robust tests, sample autocorrelations, sample spectrum, spherically symmetric distributions.
Mathematics subject classification codes (1991): primary 62E15, 62H05; secondary
62M15, 62M30.
ABSTRACT
On making use of a result of Imhof, an integral representation of the distribution function of linear combinations of the components of a Dirichlet random vector is obtained.
In fact, the distributions of several statistics such as Moran and Geary's indices, the Cliff-Ord statistic for spatial correlation, the sample coefficient of determination, F-ratios and the sample autocorrelation coefficient can be similarly determined. Linear combinations
of the components of Dirichlet random vectors also turn out to be a key component in a
decomposition of quadratic forms in spherically symmetric random vectors. An application involving the sample spectrum associated with series generated by ARMA processes
is discussed.
RÉSUMÉ
From a result of Imhof, the authors derive an integral representation of the distribution function of an arbitrary linear combination of the components of a Dirichlet random vector. As they point out, the distributions of several statistics can be determined in a similar fashion, notably those of F-ratios, of the Moran and Geary indices, of the Cliff-Ord spatial correlation statistic, and of the sample coefficients of determination and serial correlation. The authors further explain the key role played by linear combinations of the components of a Dirichlet random vector in the decomposition of quadratic forms in spherically distributed vectors. As an illustration, they determine the distribution of the sample spectrum associated with a time series generated by an ARMA process.
1. INTRODUCTION AND NOTATION
Several test statistics can be expressed in terms of linear combinations of the
components of Dirichlet random vectors or possess a similar structure. A representation of the distribution function of such linear combinations is derived in this
paper. This representation can be used to determine the distribution of any real quadratic form in spherically symmetric vectors. Some related but less general
distributional results pertaining to such quadratic forms were obtained for example by Fan (1986), Anderson and Fang (1987), Fang et al. (1987), Li (1987), Fang
and Zhang (1990), and Hsu (1990).
A random vector D = (D_1, . . . , D_ℓ)' is said to have a Dirichlet distribution of the first type with parameters a_1, . . . , a_ℓ if its density function is

$$\Gamma\Bigl(\sum_{i=1}^{\ell} a_i\Bigr)\, \prod_{i=1}^{\ell} \frac{d_i^{a_i - 1}}{\Gamma(a_i)}$$

for 0 ≤ d_i ≤ 1, a_i > 0, i = 1, . . . , ℓ, with d_ℓ = 1 − ∑_{i=1}^{ℓ−1} d_i, and 0 otherwise. This is denoted (D_1, . . . , D_ℓ)' ∼ D_ℓ(a_1, . . . , a_ℓ). It can be shown that, when the X_j's are independently distributed chi-square random variables having respectively r_j degrees of freedom, j = 1, . . . , ℓ, then (X_1/X, . . . , X_ℓ/X)' ∼ D_ℓ(r_1/2, . . . , r_ℓ/2), where X = ∑_{j=1}^{ℓ} X_j (see, for example, Johnson and Kotz 1976, Chapter 40).
The p-dimensional vector X = (X_1, . . . , X_p)' is said to have an elliptically contoured (or elliptical) distribution if its characteristic function φ(t) can be written as φ(t) = e^{i t'µ} ξ(t' Σ t), where µ is a p-dimensional real vector, Σ is a p × p nonnegative definite matrix and ξ is a nonnegative function. This will be denoted X ∼ C_p(µ, Σ; ξ). When µ is the null vector and Σ is I_p, the identity matrix of order p, X is said to have a spherically symmetric (or spherical) distribution and we write X ∼ S_p(ξ). The notation X ∼ S_p*(ξ) (or C_p*(µ, Σ; ξ)) indicates that a distribution has no atom at the origin (or at its mean). For further distributional
results on spherical and elliptical vectors, the reader is referred to Kelker (1970),
Cambanis et al. (1981), Fang and Zhang (1990), and Osiewalski and Steel (1993).
In addition, many examples of spherical distributions can be found in McGraw and
Wagner (1968, p. 113) and Fang et al. (1990, p. 69).
A computable representation of the distribution function of a linear combination of the components of a Dirichlet vector, hereafter denoted by Z, is derived in
the next section wherein it is also shown that the distribution function of many
statistics that can be expressed in terms of ratios of quadratic forms can be obtained similarly. It is then pointed out that the distribution of quadratic forms in
spherically symmetric random vectors can be determined by expressing them as
the product of Z and the square of the norm of their vectors. Specific distributional results for two- and three-dimensional vectors are given in the Appendix. A
numerical example involving the sample spectrum is presented in Section 3.
2. LINEAR COMBINATIONS OF THE COMPONENTS OF A DIRICHLET VECTOR
The distribution of Z, a linear combination of the components of a Dirichlet
random vector, as well as that of Q, a quadratic form in a spherically symmetric
random vector, are determined in this section. It is also shown that the distribution function of some ratios of quadratic forms can be similarly obtained. Several
applications are pointed out.
Let Z = ∑_{i=1}^{p} λ_i X_i² / ∑_{i=1}^{p} X_i², with λ_i ≥ λ_{i+1}, i = 1, . . . , p − 1, and (X_1, . . . , X_p)' ≡ X ∼ S_p*(ξ). It is well known that if X ∼ S_p*(ξ) with density f(x) = g(x'x), then W = ‖X‖ is distributed independently of X/‖X‖, with the former having density

$$h_W(w) = \begin{cases} \dfrac{2\pi^{p/2}}{\Gamma(p/2)}\, w^{p-1}\, g(w^2) & \text{for } 0 < w < \infty \\[4pt] 0 & \text{elsewhere} \end{cases}$$
(cf. Mathai et al. 1995, p. 91) and the latter being distributed uniformly over
the surface of the unit sphere, regardless of the parent distribution. Thus, if Q =
X0 AX where A is a p × p symmetric matrix, we have that Q is distributed as
VZ, where V = W 2 and Z are independently distributed and λ1 ≥ · · · ≥ λp are
the characteristic roots of A. The distribution function of Q is then obtained by
integration from the density of V,
$$h_V(v) = \begin{cases} \dfrac{\pi^{p/2}}{\Gamma(p/2)}\, v^{p/2-1}\, g(v) & \text{for } 0 < v < \infty \\[4pt] 0 & \text{elsewhere} \end{cases}$$
and the distribution function of Z. In order to determine the distribution of Z, it
may be assumed without any loss of generality that X is a normal random vector
with mean 0 and covariance matrix Ip . This is denoted X ∼ Np (0, Ip ). On noting
that (X12 /kXk2 , . . . , Xp2 /kXk2 )0 ≡ (D1 , . . . , Dp )0 ∼ Dp (1/2, . . . , 1/2), one has
$$Z \sim \sum_{i=1}^{p} \lambda_i D_i$$
for all X ∈ Sp∗ , i.e., Z is distributed as a linear combination of the components
of a Dirichlet vector. Let m be the number of distinct λi ’s, denoted by `j with
respective multiplicities rj , and let KZ (z) denote the distribution function of Z;
then one has
$$K_Z(z) = P\left(\frac{\sum_{i=1}^{p} \lambda_i X_i^2}{\sum_{i=1}^{p} X_i^2} \le z\right) = P\left(\frac{\sum_{j=1}^{m} \ell_j T_j}{\sum_{j=1}^{m} T_j} \le z\right) = P\left(\sum_{j=1}^{m} (\ell_j - z)\, T_j \le 0\right),$$
where the Tj ’s, j = 1, . . . , m, are independent chi-square random variables having
rj degrees of freedom, respectively. On making use of the representation of the
distribution function of a linear combination of independent chi-square random
variables derived by Imhof (1961), one can express the distribution function of Z
as follows:
$$K_Z(z) = \frac{1}{2} - \frac{1}{\pi} \int_0^{\infty} \frac{\sin\bigl[\tfrac{1}{2}\sum_{j=1}^{m} r_j \tan^{-1}\{(\ell_j - z)\,u\}\bigr]}{u\, \prod_{j=1}^{m} \{1 + (\ell_j - z)^2 u^2\}^{r_j/4}}\, du$$

for ℓ_m < z < ℓ_1; clearly, K_Z(z) = 0 whenever z ≤ ℓ_m and K_Z(z) = 1 whenever z ≥ ℓ_1. Representations of the density function of Z for two- and three-dimensional
vectors are provided in the Appendix.
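This integral representation lends itself directly to numerical work. The following sketch (the function name imhof_cdf, the midpoint rule and the fixed truncation point are our own choices, not part of the paper) evaluates the integral with elementary quadrature and checks it against the arcsine law that arises when p = 2, ℓ = (1, 0) and r = (1, 1), in which case Z ∼ Beta(1/2, 1/2):

```python
import math

def imhof_cdf(z, ells, rs, upper=100.0, n=100_000):
    """K_Z(z) = P(sum_j ell_j T_j / sum_j T_j <= z) via Imhof's (1961)
    integral, where the T_j are independent chi-square(r_j) variables."""
    if z <= min(ells):
        return 0.0
    if z >= max(ells):
        return 1.0

    def integrand(u):
        theta = 0.5 * sum(r * math.atan((l - z) * u) for l, r in zip(ells, rs))
        rho = u * math.prod(
            (1.0 + (l - z) ** 2 * u ** 2) ** (r / 4.0) for l, r in zip(ells, rs))
        return math.sin(theta) / rho

    # Midpoint rule on (0, upper); the integrand has a finite limit at u = 0
    # and decays like u^{-(1 + p/2)}, so a fixed truncation point suffices here.
    h = upper / n
    integral = h * sum(integrand((k + 0.5) * h) for k in range(n))
    return 0.5 - integral / math.pi

# Check against the arcsine law: with ells = (1, 0) and r = (1, 1),
# Z = X_1^2/(X_1^2 + X_2^2) ~ Beta(1/2, 1/2), so K_Z(1/2) = 1/2.
print(imhof_cdf(0.5, [1.0, 0.0], [1, 1]))  # 0.5
```

For this symmetric case the integrand vanishes identically, so the value 1/2 is exact; for other arguments the quadrature reproduces the Beta(1/2, 1/2) distribution function (2/π) arcsin(√z) to within the discretization error.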
The density of Z could be approximated for example by means of an Edgeworth
expansion or by Pearson or Johnson curves. The first three or four moments of
the distribution would then be required. The following representation of the hth
integer moment of Z,
$$E(Z^h) = \frac{h!\,\Gamma(p/2)}{\Gamma(h + p/2)} \sum_{h_1=0}^{h} \sum_{h_2=0}^{h-h_1} \cdots \sum_{h_{m-1}=0}^{h-h_1-\cdots-h_{m-2}}\; \prod_{j=1}^{m} \frac{\Gamma(h_j + r_j/2)}{\Gamma(h_j + 1)\,\Gamma(r_j/2)}\, \ell_j^{h_j},$$

with h_m = h − ∑_{i=1}^{m−1} h_i, results from an application of Theorem 2 on p. 343 of
Hannan (1970). The hth integer moments of the quadratic forms Q = X'AX can then be easily obtained since E(Q^h) = E(V^h) E(Z^h).
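The multiple sum above is straightforward to program by iterating over all compositions (h_1, . . . , h_m) of h. A sketch (the helper names compositions and dirichlet_comb_moment are ours, purely for illustration), checked against the Beta(1/2, 1/2) case for which E(Z) = 1/2 and E(Z²) = 3/8:

```python
import math

def compositions(h, m):
    """All m-tuples of nonnegative integers summing to h."""
    if m == 1:
        yield (h,)
        return
    for first in range(h + 1):
        for rest in compositions(h - first, m - 1):
            yield (first,) + rest

def dirichlet_comb_moment(h, ells, rs):
    """E(Z^h) for Z = sum_j ell_j T_j / sum_j T_j, with independent
    chi-square(r_j) variables T_j, via the multiple-sum representation."""
    p = sum(rs)
    total = 0.0
    for hs in compositions(h, len(ells)):
        term = 1.0
        for hj, lj, rj in zip(hs, ells, rs):
            term *= (math.gamma(hj + rj / 2.0)
                     / (math.gamma(hj + 1) * math.gamma(rj / 2.0))) * lj ** hj
        total += term
    return math.factorial(h) * math.gamma(p / 2.0) / math.gamma(h + p / 2.0) * total

# Z = X_1^2/(X_1^2 + X_2^2) ~ Beta(1/2, 1/2): E(Z) = 1/2 and E(Z^2) = 3/8
print(dirichlet_comb_moment(1, [1.0, 0.0], [1, 1]),
      dirichlet_comb_moment(2, [1.0, 0.0], [1, 1]))
```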
Furthermore, the distribution of ratios of the form X'A_1X/X'A_2X, wherein X ∼ S_p*(ξ), A_1 = A_1', A_2 = A_2' and A_2 > O, can also be determined by means of Imhof's formula since

$$P\left(\frac{X'A_1X}{X'A_2X} < c\right) = P\{X'(A_1 - cA_2)X < 0\} = P\left(\sum_{i=1}^{p} \lambda_i X_i^2 < 0\right),$$

where the λ_i's are the characteristic roots of A_1 − cA_2, and it may be assumed without any loss of generality that the X_i²'s are independent chi-square random variables, as P(∑ λ_i X_i² < 0) = P(∑ λ_i X_i² / ∑ X_i² < 0) and ∑ λ_i X_i² / ∑ X_i² has the structure of Z, whose distribution is invariant within the class S_p*.
It should be noted that the distributional results also apply to quadratic forms in central elliptically contoured random vectors: if R ∼ C_p*(0, Σ; ξ) with Σ > O, then the quadratic form R'AR = R'Σ^{−1/2}Σ^{1/2}AΣ^{1/2}Σ^{−1/2}R is distributed as Y'A*Y, where Y = Σ^{−1/2}R ∼ S_p*(ξ) and A* = Σ^{1/2}AΣ^{1/2} = A*'.
Many test statistics can be expressed as ratios of quadratic forms. Consider for
instance, Moran and Geary's indices, which are used in spatial analysis. Geary's index is proportional to X'V(∆ − Ω)VX/X'VX, where X is a central normal vector, Ω is a p × p matrix whose elements ω_ij are the weights assigned by an a priori criterion for the relation between sites i and j, ∆ is a diagonal matrix whose ith diagonal element is ∑_{j=1}^{p} ω_ij, and V = I_p − U/p, U being a p × p matrix whose elements are all equal to one; Moran's index is proportional to X'VΩVX/X'VX; see,
for example, Oden (1995), Waldhör (1996) and the references therein. It should be
noted that since V is a symmetric idempotent matrix of rank p−1, the denominator
can be expressed via an orthogonal transformation as a sum of p − 1 independent
chi-square random variables having one degree of freedom each. Other examples
include: (i) the sample autocorrelations that are used for model identification in time series and are of the form X'VQ_kVX/X'VX, where Q_k is defined in (2) (cf., e.g., Provost and Rudiuk 1995); (ii) the sample coefficient of determination in connection with the regression model Y = Xβ + Z with Z ∼ N_p(0, σ²I_p), which can be represented as Y'(V − M)Y/Y'VY, where M = I − X(X'X)^{−1}X' and V is as defined above (cf., e.g., Pindyck and Rubinfeld 1998); (iii) a test for spatial correlation proposed by Cliff and Ord (1973) which has the form X'VWVX/X'VX,
where V is defined above and W is an a priori weight matrix. Their distributions being invariant within the class S_p*, these statistics may be used in more general settings, the usual assumption of normality being no longer necessary; in that sense, these statistics provide robust tests.
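This invariance is easy to visualize numerically: Z depends on X only through X/‖X‖, so rescaling a normal vector by any positive random radius (for instance, to produce a multivariate t vector) leaves Z unchanged draw by draw. A small Monte Carlo sketch (illustrative only; the roots, sample size and seed are arbitrary choices of ours):

```python
import math
import random

random.seed(7)
lam = [3.0, 1.0, 0.0, 0.0]   # characteristic roots lambda_1 >= ... >= lambda_p
p = len(lam)

def z_stat(x):
    """Z = sum(lambda_i x_i^2) / sum(x_i^2)."""
    den = sum(xi * xi for xi in x)
    return sum(l * xi * xi for l, xi in zip(lam, x)) / den

n, total = 50_000, 0.0
for _ in range(n):
    g = [random.gauss(0.0, 1.0) for _ in range(p)]
    # a multivariate t_4 draw: the same normal vector scaled by a random radius
    s = math.sqrt(4.0 / sum(random.gauss(0.0, 1.0) ** 2 for _ in range(4)))
    x = [s * gi for gi in g]
    # the scale cancels, so Z is the same for the normal and the t vector
    assert abs(z_stat(x) - z_stat(g)) < 1e-9
    total += z_stat(x)

print(total / n)   # close to E(Z) = sum(lam)/p = 1
```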
Quadratic forms and their ratios being ubiquitous in statistics, the results included in this paper could be applied in a variety of contexts. Spherically symmetric distributions have been considered for example in connection with filtering and
stochastic control (Chu 1973), random input signal (McGraw and Wagner 1968),
financial analyses (Zellner 1976 and the references therein), the analysis of stock
market data (Mandelbrot 1963, Fama 1965), and Bayesian Kalman filtering (Girón
and Rojano 1994). Studies on the robustness of statistical procedures when the
probability model departs from the multivariate normal distribution to the broader
class of elliptically contoured distributions were carried out for example by King
(1980) and Osiewalski and Steel (1993). Results related to regression analysis can
be found in Fraser and Ng (1980) for example. Several multivariate applications
are also discussed in Devlin et al. (1976). Furthermore, many test statistics and
optimality properties associated with the multinormal case remain unchanged for
elliptically contoured distributions; see for example Fang and Zhang (1990). The
Mahalanobis distance, which is a quadratic form, has been studied in the context
of elliptically contoured vectors by Mitchell and Krzanowski (1985) and Lange and
Sinsheimer (1993). Another quadratic form which is widely used in time series for
analyzing the frequency content of a sequence is the sample spectrum; its distribution is determined in the next section.
3. THE DISTRIBUTION OF THE SAMPLE SPECTRUM
The sample spectrum associated with series generated by ARMA processes is
expressed as a quadratic form under the assumption that the errors belong to
a certain subclass of the family of spherical distributions. Some quantiles are
tabulated under various distributional assumptions for the error vectors.
Spherically distributed errors have been considered for example by Jensen (1979)
and Hwang and Chen (1986) in connection with certain linear models, by Pázman
(1988) in the context of certain nonlinear models and by Krishnaiah and Lin (1986)
and Basu and Das (1994) in connection with some time series models.
The general autoregressive moving average process of order (p, q), which we
abbreviate as ARMA(p, q), is defined by a stochastic sequence, (Zt ), satisfying the
equation
φ(B)Z_t = θ(B)A_t        (1)
where B is the backshift operator, φ(B) = (1 − φ1 B − · · · − φp B p ), θ(B) = (1 −
θ1 B − · · · − θq B q ) and (At ) is a sequence of uncorrelated and identically distributed
random variables with mean 0 and variance σ². More specifically, it is assumed in
this application that the vector of At ’s is distributed as a scale mixture of normal
vectors. We also require that all the roots of the characteristic equation, φ(B) = 0,
lie outside of the unit circle and that φ(B) and θ(B) have no common roots, these
conditions ensuring that such processes are stationary and have the infinite moving
average representation
$$Z_t = \psi(B) A_t = \sum_{j=0}^{\infty} \psi_j A_{t-j},$$

where ψ(B) = θ(B)/φ(B) with ∑_{j=1}^{∞} |ψ_j| < ∞; see Brockwell and Davis (1991), Theorem 3.1.1 and Definition 3.1.3, and Box et al. (1994), pp. 77–78.
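For concreteness, the ψ_j can be generated recursively by matching coefficients of B^j in φ(B)ψ(B) = θ(B). A minimal sketch (the helper name psi_weights is ours; the sign conventions follow the definitions of φ(B) and θ(B) above):

```python
def psi_weights(phi, theta, nweights):
    """Coefficients psi_0, ..., psi_{nweights-1} in Z_t = sum_j psi_j A_{t-j},
    for phi(B) = 1 - phi_1 B - ... - phi_p B^p and
    theta(B) = 1 - theta_1 B - ... - theta_q B^q.
    Matching coefficients of B^j in phi(B) psi(B) = theta(B) gives, for j >= 1,
    psi_j = -theta_j + sum_{i=1}^{min(j, p)} phi_i psi_{j-i}, with psi_0 = 1."""
    psi = [1.0]
    for j in range(1, nweights):
        value = -theta[j - 1] if j <= len(theta) else 0.0
        value += sum(phi[i - 1] * psi[j - i]
                     for i in range(1, min(j, len(phi)) + 1))
        psi.append(value)
    return psi

# AR(1) with phi = -0.5 (the process used in the example of Section 3):
# psi_j = (-0.5)^j
print(psi_weights([-0.5], [], 5))  # [1.0, -0.5, 0.25, -0.125, 0.0625]
```

For a pure AR(1) the weights decay geometrically, which is what makes the truncation of Ψ harmless in practice.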
One also has to assume that any subset of k random variables from the countable family of random variables (A_t)_{t=−∞}^{n} will be spherically distributed. According to
Kelker (1970), Theorem 10, this will be the case provided that there exists a nonnegative random variable Y such that, conditionally given Y = y, the At ’s are
independent normal variables with mean 0 and variance y, or equivalently that the
joint distribution of the At ’s can be expressed as a scale mixture of normal vectors.
Such distributions include the multivariate Cauchy, multivariate t, contaminated
normal, power-exponential and the multivariate stable distributions among others;
see for instance Lange and Sinsheimer (1993).
Letting Ψ be an n × ∞ matrix with its (r, s)th element equal to ψs−r , ψ0 = 1
and ψj ≡ 0 for j < 0, we can write a set of realizations {Zt : t = 1, . . . , n} with
a corresponding set of error variables (At ), in the form Z = Ψ A , where Z =
(Zn , . . . , Z1 )0 and A = (An , . . . , A1 , A0 , A−1 , . . .)0 are respectively n × 1 and ∞ × 1
column vectors. Now, introducing the n×1 column vector Z̃ = (Zn −Z̄, . . . , Z1 −Z̄)0 ,
we have Z̃ = V ΨA where σ−1/2 A ∼ Sp∗ (ξ) and V = I − n−1 U is an idempotent
matrix of rank n − 1, U being an n × n matrix having all its elements equal to
one. Then the sample autocovariances at lag k for an ARMA(p, q) process can be
represented as
$$c_k^{(n)} = n^{-1} \sum_{t=1}^{n-k} (Z_t - \bar{Z})(Z_{t+k} - \bar{Z}) = A'\Psi'VQ_kV\Psi A/n = \tilde{Z}'Q_k\tilde{Z}/n, \qquad (2)$$
where Q_k is an n × n null matrix except for values 1/2 everywhere on the kth upper
and lower diagonals and Z̃ is distributed as an n-dimensional elliptically contoured
vector with mean 0 and covariance matrix σ 2 VΨΨ0 V ≡ Ξ, σ 2 ΨΨ0 being the
covariance matrix associated with the realizations (Zt )nt=1 of an ARMA(p, q) process. The last equality in (2) is to be interpreted in the sense that its right-hand
side is the limit in quadratic mean of the partial sums formed from its left-hand
side. The latter is an infinite sum of random variables (since Ψ has infinitely
many columns), but the absolute summability of the ψj ensures the convergence
in quadratic mean. Since Z̃ ∼ Cn∗ (0, Ξ; ξ) where Ξ has rank n − 1 with eigenvalues
λ1 , . . . , λn−1 , 0 and corresponding normalized eigenvectors v1 , . . . , vn−1 , vn , then,
by the spectral decomposition theorem,
$$\Xi_{n \times n} \equiv \sum_{i=1}^{n} \lambda_i v_i v_i' = \sum_{i=1}^{n-1} \bigl(\sqrt{\lambda_i}\, v_i\bigr)\bigl(\sqrt{\lambda_i}\, v_i\bigr)' = \Xi^{1/2}\, \Xi^{1/2\,\prime}$$

with Ξ^{1/2}_{n×(n−1)} = (√λ_1 v_1, . . . , √λ_{n−1} v_{n−1}), and one can write Z̃ ∼ Ξ^{1/2}Y with Y ∼ S*_{n−1}(ξ).
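The identity c_k^{(n)} = Z̃'Q_k Z̃/n in (2) can be verified directly for any finite series. A small sketch in pure Python (the series below is an arbitrary illustrative one; for k ≥ 1 the matrix Q_k has 1/2 on its kth upper and lower diagonals and zeros elsewhere):

```python
def autocov_direct(z, k):
    """c_k^(n) = n^{-1} sum_{t=1}^{n-k} (z_t - zbar)(z_{t+k} - zbar)."""
    n = len(z)
    zbar = sum(z) / n
    return sum((z[t] - zbar) * (z[t + k] - zbar) for t in range(n - k)) / n

def autocov_qform(z, k):
    """The same lag-k autocovariance written as ztilde' Q_k ztilde / n."""
    n = len(z)
    zbar = sum(z) / n
    zt = [v - zbar for v in z]
    # Q_k: 1/2 everywhere on the kth upper and lower diagonals (k >= 1)
    Q = [[0.5 if abs(i - j) == k else 0.0 for j in range(n)] for i in range(n)]
    return sum(zt[i] * Q[i][j] * zt[j]
               for i in range(n) for j in range(n)) / n

series = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0]
for k in (1, 2, 3):
    assert abs(autocov_direct(series, k) - autocov_qform(series, k)) < 1e-12
print("quadratic-form representation of c_k verified")
```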
The sample spectrum as defined in Box et al. (1994) and Wei (1990) — sometimes referred to as the sample spectral density function as in Anderson (1971) —
which we denote at frequency θj by f˜(θj ), can be expressed as the following linear
combination of the sample autocovariances:
$$\tilde{f}(\theta_j) = \Bigl\{ c_0^{(n)} + 2 \sum_{k=1}^{n-1} c_k^{(n)} \cos(2\pi\theta_j k) \Bigr\} \Big/ (2\pi)$$
$$= Y' \Bigl[ (\Xi^{1/2})' \Bigl\{ 2 \sum_{k=0}^{n-1} \cos(2\pi\theta_j k)\, Q_k \Bigr\} \Xi^{1/2} \big/ (2\pi n) \Bigr] Y$$
$$\equiv Y' \bigl\{ (\Xi^{1/2})'\, T^{(j)}\, \Xi^{1/2} / (\pi n) \bigr\} Y \equiv Y'\, G(\theta_j)\, Y,$$

where (T^{(j)})_{αβ} = cos(2πθ_j |α − β|) and G(θ_j) = (Ξ^{1/2})' T^{(j)} Ξ^{1/2}/(πn).
The frequencies of interest are the Fourier frequencies θj = j/n, j = 0, . . . , [n/2] at
which the sample spectrum multiplied by 4π is equal to the periodogram denoted
by I(θ_j) — which, incidentally, was introduced by A. Schuster a century ago. Hence f̃(j/n) = I(j/n)/(4π), j = 0, 1, . . . , [n/2], where [n/2] denotes the integer part of n/2, and for a given n, either quantity may be used to analyze the frequency
content of a sequence. The Toeplitz matrix T(j) evaluated at Fourier frequencies
is in fact a symmetric regular circulant as defined in Graybill (1983, p. 241). Then,
by making use of Theorem 8.10.7 (op. cit.), it can be shown that T(j) has at most
two non-null characteristic roots. Hence the sample spectrum is distributed either as λ*_i Y_i² or as λ*_m Y_m² + λ*_ℓ Y_ℓ², where the λ*'s are the non-null characteristic roots of G(j/n). (Note that the exact density of Z is given in the Appendix for the case of two components.)
By applying the results derived in the preceding section to the quadratic form
Y0 G(θj )Y, confidence intervals for the periodogram or the sample spectrum can
be determined under various assumptions about both the underlying process and
its associated errors. Given an observed time series, one should then be able to
identify more precisely the underlying model as well as the distribution of the errors
involved.
Table 1: Quantiles of the sample spectrum evaluated at frequency 1/10 for
certain selected percentage points c.

            Cauchy vector            t10 vector            Normal vector
   c       Exact    Simulated     Exact    Simulated     Exact    Simulated
 0.025     0.0090     0.0090     0.0088     0.0088      0.0088     0.0088
 0.05      0.0187     0.0189     0.0178     0.0175      0.0179     0.0178
 0.25      0.1352     0.1358     0.1029     0.1029      0.0999     0.1004
 0.75      2.6180     2.6190     0.5566     0.5559      0.4836     0.4837
 0.95     69.7868    69.1764     1.4382     1.4236      1.0501     1.0476
 0.975   279.0563   279.0471     1.9153     1.9176      1.2968     1.2808
Example. Consider 10 realizations from an AR(1) process with parameter φ =
−0.5 whose associated errors jointly have a multivariate t distribution with 10 degrees of freedom and scale parameter matrix 4 I. Some quantiles of the sample spectrum were evaluated at frequency 1/10 for certain percentage points of interest. (In
this case, the non-null characteristic roots of G(1/10) are 0.154116 and 0.195300.)
Exact and simulated values are given in Table 1. For comparison purposes, the corresponding quantiles were also evaluated assuming that the errors are distributed
as Cauchy and normal vectors. The integrations were done numerically by making
use of Mathematica, Version 3.0 with the command PrecisionGoal→ 7, and the
simulations were carried out with 100,000 replications.
APPENDIX: The exact density of Z for p = 2 and 3.
As seen in Section 3, the distribution of quadratic forms in two- or three-dimensional vectors is of particular interest in some applications. As another
potential application, one may consider the problem of determining the amount
of a given pollutant present in a certain area or a three-dimensional volume as was
done in Provost and Barnwal (1993) wherein the distribution of the polluting agent
was assumed to be Gaussian. Representations of the exact density of Z for p = 2
and 3 are given in this appendix.
Let p = 2; then

$$Z = \frac{\lambda_1 X_1^2 + \lambda_2 X_2^2}{X_1^2 + X_2^2} = \frac{\lambda_1 - \lambda_2}{1 + (X_2^2/X_1^2)} + \lambda_2$$

where X_2²/X_1² ∼ F_{1,1}. A simple change of variables yields the following density function:

$$k_Z(z) = \begin{cases} \dfrac{1}{\pi} \{(\lambda_1 - z)(z - \lambda_2)\}^{-1/2} & \lambda_2 < z < \lambda_1 \\[4pt] 0 & \text{elsewhere.} \end{cases}$$
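A quick numerical sanity check on this density (a sketch; the function names and the midpoint rule are our own choices): integrating k_Z from λ_2 to z and comparing with the closed-form arcsine distribution function (2/π) arcsin(√z) that results when λ_1 = 1 and λ_2 = 0:

```python
import math

def kz_density_p2(z, lam1, lam2):
    """k_Z(z) = (1/pi) {(lam1 - z)(z - lam2)}^{-1/2} for lam2 < z < lam1."""
    return 1.0 / (math.pi * math.sqrt((lam1 - z) * (z - lam2)))

def kz_cdf_p2(z, lam1, lam2, n=400_000):
    """Distribution function by midpoint-rule integration of k_Z on (lam2, z).

    The midpoint rule avoids evaluating the integrable singularity at lam2."""
    h = (z - lam2) / n
    return h * sum(kz_density_p2(lam2 + (k + 0.5) * h, lam1, lam2)
                   for k in range(n))

# With lam1 = 1, lam2 = 0, Z ~ Beta(1/2, 1/2) and K_Z(z) = (2/pi) arcsin(sqrt(z)),
# so K_Z(1/4) should be close to 1/3.
print(kz_cdf_p2(0.25, 1.0, 0.0))
```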
For p = 3, the density of Z obtained by means of the transformation-of-variables technique is

$$k_Z(z) = \begin{cases} \tfrac{1}{2} \{(\lambda_1 - \lambda_3)(z - \lambda_2)\}^{-1/2}\; {}_2F_1(1/2, 1/2; 1; \gamma_1) & \text{for } \lambda_2 < z < \lambda_1 \\[4pt] \tfrac{1}{2} \{(\lambda_1 - \lambda_3)(\lambda_2 - z)\}^{-1/2}\; {}_2F_1(1/2, 1/2; 1; \gamma_2) & \text{for } \lambda_3 < z < \lambda_2 \\[4pt] 0 & \text{for } z < \lambda_3 \text{ or } z > \lambda_1 \end{cases}$$

with γ_1 = {(z − λ_1)(λ_2 − λ_3)}/{(z − λ_2)(λ_1 − λ_3)}, γ_2 = {(z − λ_3)(λ_1 − λ_2)}/{(z − λ_2)(λ_1 − λ_3)} and ₂F₁(a_1, a_2; b_1; z) = ∑_{r=0}^{∞} (a_1)_r (a_2)_r z^r / {(b_1)_r r!}.
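The Gauss series defining ₂F₁ is easy to evaluate term by term whenever |γ| < 1, which is all that is needed to tabulate k_Z away from the endpoints. A minimal sketch (the function name hyp2f1 and the 200-term truncation are our choices), checked against the elementary identity ₂F₁(a, b; b; z) = (1 − z)^{−a}:

```python
def hyp2f1(a1, a2, b1, z, terms=200):
    """Partial sum of the Gauss series 2F1(a1, a2; b1; z); valid for |z| < 1."""
    total, term = 0.0, 1.0
    for r in range(terms):
        total += term
        # (a)_{r+1}/(a)_r = a + r, so each term updates by a simple ratio
        term *= (a1 + r) * (a2 + r) * z / ((b1 + r) * (r + 1.0))
    return total

# Sanity check: 2F1(1/2, 1; 1; 0.36) should equal (1 - 0.36)^(-1/2) = 1.25.
print(hyp2f1(0.5, 1.0, 1.0, 0.36))
```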
For p = 4, the density can be expressed in terms of elliptic integrals of the first
kind by solving a differential equation which was derived by Koopmans (1942).
ACKNOWLEDGEMENTS
We thank the Editor and the referees for helpful suggestions. We are particularly indebted to an Associate Editor whose recommendations greatly improved the
presentation of the results. This research was supported by the Natural Sciences
and Engineering Research Council of Canada.
REFERENCES
Anderson, T. W. (1971). The Statistical Analysis of Time Series. John Wiley, New
York.
Anderson, T. W., and Fang, K.-T. (1987). Cochran’s theorem for elliptically contoured
distributions. Sankhyā, Ser. A, 49, 305–315.
Basu, A. K., and Das, J. K. (1994). A Bayesian approach to Kalman filter for elliptically
contoured distribution and its application in time series models. Calcutta Statist.
Assoc. Bull., 44, 11–28.
Box, G. E. P., Jenkins, G. M., and Reinsel, G. C. (1994). Time Series Analysis: Forecasting and Control, Third Edition. Prentice Hall, New York.
Brockwell, P. J., and Davis, R. A. (1991). Time Series: Theory and Methods, Second
Edition. Springer-Verlag, New York.
Cambanis, S., Huang, S., and Simons, G. (1981). On the theory of elliptically contoured
distributions. J. Multivariate Anal., 11, 368–385.
Chu, K.-C. (1973). Estimation and decision for linear systems with elliptically random
process. IEEE Trans. Automat. Control, 18, 499–505.
Cliff, A. D., and Ord, J. K. (1973). Spatial Autocorrelation. Pion Limited, London.
Devlin, S. J., Gnanadesikan, R., and Kettenring, J. R. (1976). Some multivariate applications of elliptical distributions. In Essays in Probability and Statistics (S. Ikeda, ed.), Sinko Tsusho, Tokyo, 365–393.
Fama, E. F. (1965). The behavior of stock-market prices. J. of Business, 38, 34–105.
Fan, J.-Q. (1986). Distributions of quadratic forms and non-central Cochran’s theorem.
Acta Math. Sinica (New Series), 2, 185–198.
Fang, K.-T., Fan, J.-Q., and Xu, J.-L. (1987). The distributions of quadratic forms
of random idempotent matrices with their applications. Chinese J. Appl. Probab.
Statist., 3, 289–297.
Fang, K.-T., Kotz, S., and Ng, K.-W. (1990). Symmetric Multivariate and Related
Distributions. Chapman and Hall, London.
Fang, K.-T., and Zhang, Y.-T. (1990). Generalized Multivariate Analysis. Springer-Verlag, New York.
Fraser, D. A. S., and Ng, K.-W. (1980). Multivariate regression analysis with spherical
error. In Multivariate Analysis, V (P. R. Krishnaiah, ed.). North Holland, New
York, 369–386.
Girón, F. J., and Rojano, J. C. (1994). Bayesian Kalman filtering with elliptically
contoured errors. Biometrika, 80, 390–395.
Graybill, F. A. (1983). Matrices with Applications in Statistics, Second Edition. Wadsworth, Belmont, CA.
Hannan, E. J. (1970). Multiple Time Series. John Wiley, New York.
Hsu, H. (1990). Non-central distributions of quadratic forms for elliptically contoured
distributions. In Statistical Inference in Elliptically Contoured and Related Distributions, (K.-T Fang, and T. W. Anderson, eds.). Allerton Press Inc., New York,
97–102.
Hwang, J.-T., and Chen, J. (1986). Improved confidence sets for the coefficients of a
linear model with spherically symmetric errors. Ann. Statist., 14, 444–460.
Imhof, J. P. (1961). Computing the distribution of quadratic forms in normal variables.
Biometrika, 48, 419–426.
Jensen, D. R. (1979). Linear models without moments. Biometrika, 66, 611–618.
Johnson, N. L., and Kotz, S. (1976). Distributions in Statistics: Continuous Multivariate
Distributions. John Wiley, New York.
Kelker, D. (1970). Distribution theory of spherical distributions and a location-scale
parameter generalization. Sankhyā, Ser. A, 32, 419–430.
King, M. L. (1980). Robust tests for spherical symmetry and their application to least squares regression. Ann. Statist., 8, 1265–1271.
Koopmans, T. (1942). Serial correlation and quadratic forms in normal variables. Ann.
Math. Statist., 13, 14–33.
Krishnaiah, P. R., and Lin, J. (1986). Complex elliptically symmetric distributions.
Comm. Statist. Theory Methods, 15, 3693–3718.
Lange, K., and Sinsheimer, J. S. (1993). Normal/independent distributions and their
applications in robust regression. J. Comput. Graph. Statist., 2, 175–198.
Li, G. (1987). Moments of a random vector and its quadratic forms. J. Statist. Appl.
Probab., 2, 219–229.
Mandelbrot, B. (1963). The variation of certain speculative prices. J. Business, 36,
394–419.
Mathai, A. M., Provost, S. B., and Hayakawa, T. (1995). Bilinear Forms and Zonal
Polynomials. Springer-Verlag, New York.
McGraw, D. K., and Wagner, J. F. (1968). Elliptically symmetric distributions. IEEE
Trans. Inform. Theory, 14, 110–120.
Mitchell, A. F. S., and Krzanowski, W. J. (1985). The Mahalanobis distance and elliptic
distributions. Biometrika, 72, 464–467. (Corr: V. 76, p. 407)
Oden, N. (1995). Adjusting Moran's I for population density. Statistics in Medicine,
14, 17–26.
Osiewalski, J., and Steel, M. F. J. (1993). Robust Bayesian inference in elliptical regression models. J. Econometrics, 57, 345–363.
Pázman, A. (1988). Distribution of L. S. estimates in nonlinear models with spherically
symmetrical error. In Optimal Design and Analysis of Experiment, (Y. Dodge et
al., eds.). North-Holland, Elsevier, Amsterdam, New York, 177–184.
Pindyck, R. S., and Rubinfeld, D. L. (1998). Econometric Models and Economic Forecasts, Fourth Edition. McGraw-Hill, Boston, MA.
Provost, S. B., and Barnwal, R. (1993). A probabilistic model for the determination of
acid rain levels. Water Pollut. Res. J. Canada, 28, 337–353.
Provost, S. B., and Rudiuk, E. (1995). The sampling distribution of the serial correlation
coefficient. Amer. J. Math. Management Sci., 15, 57–81.
Waldhör, T. (1996). The spatial autocorrelation coefficient Moran's I under heteroscedasticity. Statistics in Medicine, 15, 887–892.
Wei, W. W. S. (1990). Time Series Analysis: Univariate and Multivariate Methods. Addison-Wesley Publishing Company, Inc., New York.
Zellner, A. (1976). Bayesian and non-Bayesian analysis of the regression model with
multivariate Student-t error terms. J. Amer. Statist. Assoc., 71, 400–405.
Received 14 January 1998
Accepted 14 May 1999
Department of Statistical and Actuarial Sciences
The University of Western Ontario
London, Ontario
Canada N6A 5B7
e-mail: [email protected]