Performance of Alternative Predictors for the Unit Root Process
By
Ahmed H. Youssef
Applied Statistics and Econometrics Department,
Institute of Statistical Studies and Research,
Cairo University, Egypt.
Email: [email protected]
Abstract
A comparison of the Ordinary Least Squares (OLS), Weighted Symmetric (WS), Modified Weighted Symmetric (MWS), Maximum Likelihood (ML), and our new Modified Least Squares (MLS) estimators for the first order autoregressive process is carried out for the unit root case using the Monte Carlo method. The Monte Carlo study sheds some light on how well the estimators and the predictors perform for different sample sizes. We found that the MLS estimator has smaller bias and mean square error than any other estimator, while the MWS predictor performs better, in the sense of MSE, than any other prediction method. The sample percentiles of the distribution of the τ statistic for the first, second, and third periods in the future, for the alternative estimators, are reported to check whether they agree with those of the normal distribution.
Keywords: First order autoregressive, unit root estimators, unit root predictors.
1. Introduction
Autoregressive processes have been found to model a wide class of time series
data quite competently. For this reason, they have become the subject of extensive
research for many years. Because few exact small-sample results are available, we must rely on asymptotic theory or simulation studies for both estimation and hypothesis testing in these models.
Mann & Wald (1943) considered the zero mean first order autoregressive
process and showed that the least squares estimators of the autoregressive coefficient
say α, and is asymptotically distributed. For α =1, the process becomes non-stationary
and the limiting distribution become nonstandard. Dickey & Fuller (1979) found a
representation for the unit root distribution, which lent itself to computer simulation.
They tabulated various unit root distributions that can be used to perform unit root tests.
The behavior of the ordinary least squares estimator of the autoregressive coefficient is
quit different over the parameter space. The limiting distribution for the T statistic is
normal for all values of α in (-1, 1), and it is negative skewed for α =1. This difference
in behavior carries over to the distribution of the pivotal statistic.
Let the first order autoregressive process $\{ y_t, t = 1, 2, \ldots \}$ be defined by

$$y_t = \alpha_0 + \alpha_1 y_{t-1} + e_t, \qquad (1)$$

where $\alpha_0 = \mu(1 - \alpha_1)$ and $e_t$ is a sequence of independent identically distributed random variables with mean zero and variance $\sigma^2$. The values of $\alpha_0$, $\alpha_1$, and the form of $y_1$ determine the nature of the time series. If $|\alpha_1| < 1$ and

$$y_1 = \mu + (1 - \alpha_1^2)^{-1/2} e_1, \qquad (2)$$

the time series is covariance stationary with mean $\mu$. If the $e_t$ are normally distributed and equation (2) holds, the time series is a normal strictly stationary time series. If $\alpha_0 \ne 0$ and $\alpha_1 = 1$, the random walk is said to display drift. If $|\alpha_1| > 1$ the process is called explosive.
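As an illustration (ours, not the paper's), the following Python/NumPy sketch generates a series from model (1); the function name and the convention of starting the nonstationary series at µ are our own choices.

    import numpy as np

    def simulate_ar1(n, alpha0, alpha1, mu=0.0, sigma=1.0, rng=None):
        """Generate y_1, ..., y_n from model (1).

        For |alpha1| < 1 the first observation follows equation (2), so the
        series is covariance stationary; otherwise y_1 is started at mu,
        one common convention for the nonstationary and explosive cases.
        """
        rng = np.random.default_rng() if rng is None else rng
        e = rng.normal(0.0, sigma, size=n)
        y = np.empty(n)
        if abs(alpha1) < 1:
            y[0] = mu + (1.0 - alpha1 ** 2) ** -0.5 * e[0]  # equation (2)
        else:
            y[0] = mu
        for t in range(1, n):
            y[t] = alpha0 + alpha1 * y[t - 1] + e[t]        # equation (1)
        return y

    rng = np.random.default_rng(0)
    stationary = simulate_ar1(100, 0.5, 0.5, mu=1.0, rng=rng)  # |alpha1| < 1
    random_walk = simulate_ar1(100, 0.0, 1.0, rng=rng)         # unit root
    explosive = simulate_ar1(100, 0.0, 1.05, rng=rng)          # |alpha1| > 1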
Several authors discuss the properties of the estimators of α₁ in the first order autoregressive process when |α₁| < 1. Fuller and Hasza (1980, 1981) derived the mean square error and the τ statistic for least squares prediction with |α₁| < 1, α₁ = 1, and |α₁| > 1.
Estimating and testing the parameters for the unit root autoregressive process
have received considerable attention since the work of White (1958), and Dickey and
Fuller (1979). For a survey of the unit root literature, see Diebold and Nerlove (1990).
Gonzalez-Farias and Dickey (1992) considered maximum likelihood estimation of the
parameters of the autoregressive process and suggested tests for unit roots based on
these estimators. Forchini and Marsh (2000) obtained exact inference for the unit root, while Elliott and Stock (1992) and Elliott (1993) developed most powerful invariant tests for testing the unit root hypothesis against a particular alternative. Pantula, Gonzalez-Farias, and Fuller (1994) used a Monte Carlo study to compare the power of the different criteria. Ahking (2002) studied the efficiency of unit root tests applied to real exchange rates. Following Fuller (1996), a modification of the least squares
estimator is suggested in section 2.
2. Alternative Estimators
Several estimators for the first order autoregressive process in the presence of a unit root are presented in this section: the Ordinary Least Squares (OLS) estimator, the Weighted Symmetric (WS) estimator, the Modified Weighted Symmetric (MWS) estimator for α₁ ∈ (−1, 1], the Modified Least Squares (MLS) estimator for α₁ ∈ (−1, ∞), and the Maximum Likelihood (ML) estimator.
The least squares estimator of $(\alpha_0, \alpha_1)$ can be obtained by regressing $y_t$ on $y_{t-1}$, including an intercept, as in model (1). We get

$$\hat\alpha_{1,ols} = \frac{\sum_{t=2}^{n} (y_{t-1} - \bar y_{(-1)})\, y_t}{\sum_{t=2}^{n} (y_{t-1} - \bar y_{(-1)})^2}, \qquad (3)$$

and

$$\hat\alpha_{0,ols} = \bar y_{(0)} - \hat\alpha_{1,ols}\, \bar y_{(-1)}, \qquad (4)$$

where

$$\left[ \bar y_{(0)}, \bar y_{(-1)} \right] = (n-1)^{-1} \sum_{t=2}^{n} (y_t, y_{t-1}).$$

The estimated variance of $\hat\alpha_{1,ols}$ is

$$\hat V(\hat\alpha_{1,ols}) = \frac{\hat\sigma^2_{ols}}{\sum_{t=2}^{n} (y_{t-1} - \bar y_{(-1)})^2}, \qquad (5)$$

where

$$\hat\sigma^2_{ols} = (n-3)^{-1} \sum_{t=2}^{n} (y_t - \hat y_t)^2, \qquad \hat y_t = \hat\alpha_{0,ols} + \hat\alpha_{1,ols}\, y_{t-1}.$$
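For concreteness, equations (3)-(5) can be computed in a few lines of NumPy. The sketch below is ours, not the paper's; the function name and the return convention are arbitrary choices.

    import numpy as np

    def ols_ar1(y):
        """OLS fit of model (1): returns (a0, a1, var_a1, sigma2) per (3)-(5)."""
        n = len(y)
        y_lag, y_cur = y[:-1], y[1:]                  # y_{t-1}, y_t for t = 2..n
        ybar_m1, ybar_0 = y_lag.mean(), y_cur.mean()
        sxx = np.sum((y_lag - ybar_m1) ** 2)
        a1 = np.sum((y_lag - ybar_m1) * y_cur) / sxx  # equation (3)
        a0 = ybar_0 - a1 * ybar_m1                    # equation (4)
        resid = y_cur - a0 - a1 * y_lag
        sigma2 = np.sum(resid ** 2) / (n - 3)
        return a0, a1, sigma2 / sxx, sigma2           # variance from equation (5)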
The ordinary least squares estimator of α₁ is the value of α₁ that minimizes the sum of squares of the estimated $e_t$. We can construct a class of estimators in which the estimator of α₁ is the α₁ that minimizes

$$Q_{ws}(\alpha_1) = \sum_{t=2}^{n} w_t (Y_t - \alpha_1 Y_{t-1})^2 + \sum_{t=1}^{n-1} (1 - w_{t+1})(Y_t - \alpha_1 Y_{t+1})^2, \qquad (6)$$

where $w_t$, t = 2, 3, …, n, are weights, $Y_t = y_t - \bar y$, and $\bar y = n^{-1} \sum_{t=1}^{n} y_t$. Note that the ordinary least squares estimator is a member of this class with all $w_t = 1$. Dickey, Hasza, and Fuller (1984) discussed the properties of the estimator obtained by setting $w_t = 0.5$, called the simple symmetric estimator. Another member of the class of symmetric estimators was studied by Park and Fuller (1995), who called the estimator constructed with $w_t = n^{-1}(t-1)$ the Weighted Symmetric (WS) estimator. The WS estimators for the first order process with α₁ ∈ [−1, 1] are
$$\hat\alpha_{1,ws} = \frac{\sum_{t=2}^{n} Y_{t-1} Y_t}{\sum_{t=2}^{n-1} Y_t^2 + n^{-1} \sum_{t=1}^{n} Y_t^2}, \qquad (7)$$

and

$$\hat\alpha_{0,ws} = \bar y \,(1 - \alpha^{*}_{1,ws}), \qquad (8)$$

where

$$\alpha^{*}_{1,ws} = \begin{cases} \hat\alpha_{1,ws} & \text{if } |\hat\alpha_{1,ws}| < 1, \\ 1 & \text{if } \hat\alpha_{1,ws} \ge 1, \\ -1 & \text{if } \hat\alpha_{1,ws} \le -1. \end{cases}$$

An estimator of the variance of $\hat\alpha_{1,ws}$ is

$$\hat V(\hat\alpha_{1,ws}) = \frac{\hat\sigma^2_{ws}}{\sum_{t=2}^{n-1} Y_t^2 + n^{-1} \sum_{t=1}^{n} Y_t^2}, \qquad (9)$$

where

$$\hat\sigma^2_{ws} = (n-2)^{-1} Q_{ws}(\hat\alpha_{1,ws}).$$
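A minimal sketch of equations (7)-(9), assuming the weights $w_t = n^{-1}(t-1)$ of Park and Fuller (1995); the function name and the array indexing conventions are ours.

    import numpy as np

    def ws_ar1(y):
        """Weighted symmetric fit: returns (a0, a1, var_a1) per (7)-(9)."""
        n = len(y)
        Y = y - y.mean()                                  # Y_t = y_t - ybar
        den = np.sum(Y[1:-1] ** 2) + np.sum(Y ** 2) / n   # denominator of (7) and (9)
        a1 = np.sum(Y[:-1] * Y[1:]) / den                 # equation (7)
        a1_star = min(max(a1, -1.0), 1.0)                 # truncation used in (8)
        a0 = y.mean() * (1.0 - a1_star)                   # equation (8)
        w = np.arange(1, n) / n                           # w_t = (t-1)/n, t = 2..n
        q = (np.sum(w * (Y[1:] - a1 * Y[:-1]) ** 2)
             + np.sum((1.0 - w) * (Y[:-1] - a1 * Y[1:]) ** 2))  # Q_ws at a1
        sigma2 = q / (n - 2)
        return a0, a1, sigma2 / den                       # equation (9)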
Fuller (1996, p. 578) suggested a modification of the weighted estimator for α₁ ∈ (−1, 1] given by

$$\hat\alpha_{1,mw} = \hat\alpha_{1,ws} + c(\hat\tau_w) \left[ \hat V(\hat\alpha_{1,ws}) \right]^{1/2}, \qquad (10)$$

and an estimator of α₀ is

$$\hat\alpha_{0,mw} = \bar y \,(1 - \hat\alpha_{1,mw}), \qquad (11)$$

where

$$c(\hat\tau_w) = \begin{cases} -\hat\tau_w & \text{if } \hat\tau_w \ge -1.2, \\ 0.035672\,(\hat\tau_w + 7.0)^2 & \text{if } -7.0 < \hat\tau_w \le -1.2, \\ 0 & \text{if } \hat\tau_w \le -7.0, \end{cases}$$

and

$$\hat\tau_w = \left[ \hat V(\hat\alpha_{1,ws}) \right]^{-1/2} (\hat\alpha_{1,ws} - 1).$$

The function c(τ̂_w) was chosen to be a smooth function of τ̂_w with value 1.2 at τ̂_w = −1.2. The modified estimator differs from the weighted symmetric estimator if τ̂_w > −7.0. The empirical properties of the weighted symmetric estimator and the modified weighted estimator are compared by Fuller (1996) for the first order process.
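The modification in equations (10)-(11) then amounts to one extra step on top of the WS fit. The sketch below transcribes the piecewise c(·) given above and reuses the hypothetical ws_ar1 from the previous sketch.

    import numpy as np

    def c_w(tau):
        """The piecewise function c(tau_w) defined above."""
        if tau >= -1.2:
            return -tau
        if tau > -7.0:
            return 0.035672 * (tau + 7.0) ** 2
        return 0.0

    def mws_ar1(y):
        a0_ws, a1_ws, var_a1 = ws_ar1(y)               # WS sketch from above
        tau_w = (a1_ws - 1.0) / np.sqrt(var_a1)
        a1_mw = a1_ws + c_w(tau_w) * np.sqrt(var_a1)   # equation (10)
        return y.mean() * (1.0 - a1_mw), a1_mw         # equation (11), then a1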
The logarithm of the likelihood for the normal stationary first order autoregressive process (with σ² = 1) is

$$2 \log L(Y^*; \mu, \alpha_1) = -n \log 2\pi + \log(1 - \alpha_1^2) - Y_1^{*2}(1 - \alpha_1^2) - \sum_{t=2}^{n} (Y_t^* - \alpha_1 Y_{t-1}^*)^2, \qquad (12)$$

where $Y_t^* = y_t - \mu$. Differentiating the log likelihood with respect to µ and α₁, and setting the derivatives equal to zero, we obtain

$$\hat\mu_{ml} = \frac{y_1 + (1 - \hat\alpha_1) \sum_{t=2}^{n-1} y_t + y_n}{2 + (n-2)(1 - \hat\alpha_1)}, \qquad (13)$$

and

$$\hat\alpha_1 (y_1 - \hat\mu_{ml})^2 - \frac{\hat\alpha_1}{1 - \hat\alpha_1^2} + \sum_{t=2}^{n} \left[ (y_t - \hat\mu_{ml}) - \hat\alpha_1 (y_{t-1} - \hat\mu_{ml}) \right] (y_{t-1} - \hat\mu_{ml}) = 0. \qquad (14)$$
If µ is known, Anderson (1971, p. 354) shows that the maximum likelihood estimator of α₁ is a root of the cubic equation

$$f(\alpha_1) = \alpha_1^3 + c_1 \alpha_1^2 + c_2 \alpha_1 + c_3 = 0, \qquad (15)$$

where, with $Y_t = y_t - \mu$,

$$c_1 = -(n-2)(n-1)^{-1} \frac{\sum_{t=2}^{n} Y_t Y_{t-1}}{\sum_{t=2}^{n-1} Y_t^2}, \qquad c_2 = -(n-1)^{-1} \left[ n + \frac{\sum_{t=1}^{n} Y_t^2}{\sum_{t=2}^{n-1} Y_t^2} \right], \qquad c_3 = -n(n-2)^{-1} c_1.$$
Hasza (1980) gives explicit expressions for the three roots of (15) and shows that there is a root in each of the intervals (−∞, −1), (−1, 1), and (1, ∞). If µ is unknown, Gonzalez-Farias and Dickey (1992) showed that the unconditional maximum likelihood estimator is a solution of a fifth degree polynomial. A numerical solution can be obtained by iterating equation (15) and the estimator for µ in (13), beginning with μ̂ = ȳ.
In our computations, we use a two-round approximation to the maximum likelihood estimator. First, µ is set equal to ȳ and equation (15) is solved for α₁. Then that α₁ is used in (13) to obtain an improved value of µ, and (15) is evaluated at that value of µ to obtain the approximate maximum likelihood estimator of α₁. The approximate maximum likelihood estimator of µ is the one obtained from the second round α₁ in (13).
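A sketch of this two-round scheme is given below. Solving the cubic with np.roots and selecting the real root in (−1, 1] is our reading of Hasza's result that one root lies in each of the three intervals; the paper does not specify the numerical method.

    import numpy as np

    def ml_cubic_root(y, mu):
        """Solve equation (15) for alpha_1 with mu treated as known."""
        n = len(y)
        Y = y - mu
        s_mid = np.sum(Y[1:-1] ** 2)                    # sum_{t=2}^{n-1} Y_t^2
        c1 = -(n - 2) / (n - 1) * np.sum(Y[1:] * Y[:-1]) / s_mid
        c2 = -(n + np.sum(Y ** 2) / s_mid) / (n - 1)
        c3 = -n / (n - 2) * c1
        roots = np.roots([1.0, c1, c2, c3])
        real = roots[np.abs(roots.imag) < 1e-8].real
        inside = real[(real > -1.0) & (real <= 1.0)]    # root in (-1, 1]: our choice
        return inside[0] if inside.size else real[np.argmin(np.abs(real))]

    def ml_two_round(y):
        n = len(y)
        a1 = ml_cubic_root(y, y.mean())                 # round 1: mu = ybar
        mu = (y[0] + (1.0 - a1) * np.sum(y[1:-1]) + y[-1]) \
             / (2.0 + (n - 2) * (1.0 - a1))             # equation (13)
        return ml_cubic_root(y, mu), mu                 # round 2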
Several possible estimators of the variance of $\hat\alpha_{1,ml}$ can be constructed; see Gonzalez-Farias and Dickey (1992). For our work, we use an estimator patterned after the ordinary least squares estimator. The estimated variance is

$$\hat V(\hat\alpha_{1,ml}) = \frac{\hat\sigma^2_{ml}}{\sum_{t=2}^{n} (y_{t-1} - \hat\mu_{ml})^2}, \qquad (16)$$

where

$$\hat\sigma^2_{ml} = (n-3)^{-1} \sum_{t=2}^{n} \left[ y_t - \hat\mu_{ml} - \hat\alpha_{1,ml} (y_{t-1} - \hat\mu_{ml}) \right]^2.$$

The estimator $\hat\sigma^2_{ml}$ is not the maximum likelihood estimator of σ², but rather has the form of the least squares estimator of σ².
We suggest a modification of the ordinary least squares estimator that is appropriate for α₁ ∈ (−1, ∞), as follows:

$$\hat\alpha_{1,mls} = \hat\alpha_{1,ols} + c(\hat\tau_{ols}) \left[ \hat V(\hat\alpha_{1,ols}) \right]^{1/2}, \qquad (17)$$

and

$$\hat\alpha_{0,mls} = \bar y_{(0)} - \hat\alpha_{1,mls}\, \bar y_{(-1)}, \qquad (18)$$
where

$$c(\hat\tau_{ols}) = \begin{cases} 0 & \text{if } \hat\tau_{ols} < -7.1, \\ 0.062222\,(\hat\tau_{ols} + 7.1)^2 & \text{if } -7.1 \le \hat\tau_{ols} < -3.6, \\ 1.71 - 0.062222\,(\hat\tau_{ols} + 0.10)^2 & \text{if } -3.6 \le \hat\tau_{ols} \le 3.4, \\ 0.062222\,(\hat\tau_{ols} - 6.90)^2 & \text{if } 3.4 < \hat\tau_{ols} \le 6.9, \\ 0 & \text{if } \hat\tau_{ols} > 6.9, \end{cases}$$

and

$$\hat\tau_{ols} = \left[ \hat V(\hat\alpha_{1,ols}) \right]^{-1/2} (\hat\alpha_{1,ols} - 1).$$
We choose c(τ̂_ols) to be a smooth function of the statistic τ̂_ols so as to remove fluctuations in an ordered series; that is, the result should be smooth in the sense that the first order differences are regular and small. The function takes the value zero for τ̂_ols less than −7.1 or greater than 6.9, so the modified least squares estimator differs from the ordinary least squares estimator only when −7.1 ≤ τ̂_ols ≤ 6.9. From equation (17), the least squares estimator must be computed before the modified least squares estimator can be constructed. The empirical properties of the modified least squares estimator and the other estimators are compared for the first order process in section 4.
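As with the MWS estimator, the MLS estimator of equations (17)-(18) is one step beyond the OLS fit. The sketch below transcribes the piecewise c(·) above and reuses the hypothetical ols_ar1 from the earlier sketch.

    import numpy as np

    def c_ols(tau):
        """The piecewise function c(tau_ols) defined above."""
        if tau < -7.1 or tau > 6.9:
            return 0.0
        if tau < -3.6:
            return 0.062222 * (tau + 7.1) ** 2
        if tau <= 3.4:
            return 1.71 - 0.062222 * (tau + 0.10) ** 2
        return 0.062222 * (tau - 6.90) ** 2

    def mls_ar1(y):
        a0_ols, a1_ols, var_a1, _ = ols_ar1(y)          # OLS sketch from section 2
        tau = (a1_ols - 1.0) / np.sqrt(var_a1)
        a1_mls = a1_ols + c_ols(tau) * np.sqrt(var_a1)  # equation (17)
        a0_mls = y[1:].mean() - a1_mls * y[:-1].mean()  # equation (18)
        return a0_mls, a1_mls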
3. Prediction for the First Order Autoregressive Process
If |α₁| ≤ 1, the use of estimated parameters to construct predictors of the process adds a term of order 1/n to the prediction error. If the $e_t$ are symmetrically distributed, the predictions are unbiased; see Fuller (1996, pp. 443, 582). If |α₁| > 1, a term of order one is added to the prediction error. Following Fuller and Hasza (1980, 1981), we suggest estimating the variance of the one period prediction error as

$$\hat V(\hat y_{n+1} - y_{n+1}) = \hat\sigma^2 + (1, y_n)\, \hat V(\hat\alpha_0, \hat\alpha_1)\, (1, y_n)', \qquad (19)$$

where

$$\hat V(\hat\alpha_0, \hat\alpha_1) = \begin{pmatrix} n^{-1}\hat\sigma^2 + \hat\mu^2 \hat V(\hat\alpha_1) & -\hat\mu\, \hat V(\hat\alpha_1) \\ -\hat\mu\, \hat V(\hat\alpha_1) & \hat V(\hat\alpha_1) \end{pmatrix},$$

and the expressions for $\hat V(\hat\alpha_1)$ are the ones described for the particular estimators. For ordinary least squares, $\hat V(\hat\alpha_0, \hat\alpha_1)$ is obtained from the usual least squares formula.
The estimated variances for the two and three period predictors are

$$\hat V(\hat y_{n+2}) = \hat\sigma^2 (1 + \hat\alpha_1^2) + (1 + \hat\alpha_1,\ \hat y_{n+1} + \hat\alpha_1 y_n)\, \hat V(\hat\alpha_0, \hat\alpha_1)\, (1 + \hat\alpha_1,\ \hat y_{n+1} + \hat\alpha_1 y_n)' \qquad (20)$$

and

$$\hat V(\hat y_{n+3}) = \hat\sigma^2 (1 + \hat\alpha_1^2 + \hat\alpha_1^4) + (1 + \hat\alpha_1 + \hat\alpha_1^2,\ \hat y_{n+2} + \hat\alpha_1 \hat y_{n+1} + \hat\alpha_1^2 y_n)\, \hat V(\hat\alpha_0, \hat\alpha_1)\, (1 + \hat\alpha_1 + \hat\alpha_1^2,\ \hat y_{n+2} + \hat\alpha_1 \hat y_{n+1} + \hat\alpha_1^2 y_n)'. \qquad (21)$$
Fuller and Hasza (1980) investigated the properties of predictors of the next observations for the first order autoregressive process. They use the regression τ statistic to construct a confidence interval for the prediction:

$$\hat\tau_{n+s} = \frac{y_{n+s} - \hat y_{n+s}}{\left\{ \hat\sigma^2 \left[ \sum_{i=0}^{s-1} \hat\alpha_1^{2i} + a_{0s}^2 b_{00} + 2 a_{0s} a_{1s} b_{01} + a_{1s}^2 b_{11} \right] \right\}^{1/2}}, \qquad (22)$$

where

$$\hat\sigma^2 = (n-3)^{-1} \sum_{t=2}^{n} (y_t - \hat\alpha_0 - \hat\alpha_1 y_{t-1})^2,$$

$$b_{00} = D^{-1} \sum_{t=2}^{n} y_{t-1}^2, \qquad b_{01} = -D^{-1} \sum_{t=2}^{n} y_{t-1}, \qquad b_{11} = D^{-1}(n-1),$$

$$D = (n-1) \sum_{t=2}^{n} y_{t-1}^2 - \left( \sum_{t=2}^{n} y_{t-1} \right)^2,$$

$a_{0s}$ is the partial derivative of $\hat y_{n+s}$ with respect to α₀ evaluated at $(\hat\alpha_0, \hat\alpha_1)$, and $a_{1s}$ is the partial derivative of $\hat y_{n+s}$ with respect to α₁ evaluated at $(\hat\alpha_0, \hat\alpha_1)$. Then

$$E(\hat\tau_{n+s}) = 0. \qquad (23)$$
Since the normal distribution is symmetric, the predictors are unbiased and the mean square error of prediction equals the variance; see Fuller and Hasza (1980). The properties of the first three future-period predictors and the corresponding τ statistics will be investigated by the Monte Carlo method.
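To make equation (22) concrete, the sketch below computes τ̂_{n+s} for a fitted (α̂₀, α̂₁, σ̂²). Accumulating $a_{0s}$ and $a_{1s}$ by the recursions $a_{0s} = 1 + \hat\alpha_1 a_{0,s-1}$ and $a_{1s} = \hat y_{n+s-1} + \hat\alpha_1 a_{1,s-1}$ is our implementation choice, consistent with the derivative vectors appearing in (20) and (21).

    import numpy as np

    def tau_stat(y, y_future, s, a0, a1, sigma2):
        """tau_{n+s} of equation (22); y_future is the realized y_{n+s}."""
        n = len(y)
        y_lag = y[:-1]                                   # y_{t-1}, t = 2..n
        D = (n - 1) * np.sum(y_lag ** 2) - np.sum(y_lag) ** 2
        b00 = np.sum(y_lag ** 2) / D
        b01 = -np.sum(y_lag) / D
        b11 = (n - 1) / D
        yhat, d0, d1 = y[-1], 0.0, 0.0
        for _ in range(s):                               # build yhat_{n+s}, a_{0s}, a_{1s}
            d0, d1 = 1.0 + a1 * d0, yhat + a1 * d1
            yhat = a0 + a1 * yhat
        var = sigma2 * (sum(a1 ** (2 * i) for i in range(s))
                        + d0 ** 2 * b00 + 2.0 * d0 * d1 * b01 + d1 ** 2 * b11)
        return (y_future - yhat) / np.sqrt(var)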
4. Monte Carlo Study
The estimators, the prediction errors, and the estimated percentiles of the τ statistic for first order autoregressive processes with unit roots are discussed in this section using a Monte Carlo method. The random variables $e_t$ were generated from a normal distribution with mean zero and variance one. Because the process is nonstationary, the first observation is generated with α₀ = 0, α₁ = 1, and y₁ = 0, and the remaining observations by

$$y_t = \alpha_1 y_{t-1} + e_t, \qquad t = 2, 3, \ldots, n.$$
The error in predicting $Y_{n+1}$ given $Y_1, Y_2, \ldots, Y_n$ can be written as

$$Y_{n+1} - \hat Y_{n+1} = e_{n+1} + (\alpha_0 - \hat\alpha_0) + (\alpha_1 - \hat\alpha_1) Y_n, \qquad (24)$$

and

$$E(Y_{n+1} - \hat Y_{n+1})^2 = E(e_{n+1})^2 + E\left[ (\alpha_0 - \hat\alpha_0) + (\alpha_1 - \hat\alpha_1) Y_n \right]^2. \qquad (25)$$

To obtain an estimate of the mean square error of the one period prediction, it is necessary to simulate the distribution of

$$X_i = (\alpha_0 - \hat\alpha_0) + (\alpha_1 - \hat\alpha_1) Y_n, \qquad (26)$$

and find its variance from N Monte Carlo trials as

$$V(X) = (N-1)^{-1} \left[ \sum_{i=1}^{N} X_i^2 - \frac{1}{N} \left( \sum_{i=1}^{N} X_i \right)^2 \right].$$

Similar expressions for the two and three period prediction errors can be derived:

$$E(Y_{n+2} - \hat Y_{n+2})^2 = E(e_{n+2} + \alpha_1 e_{n+1})^2 + E\left[ -\hat\alpha_0 (1 + \hat\alpha_1) + (\alpha_1^2 - \hat\alpha_1^2) Y_n \right]^2, \qquad (27)$$

and

$$E(Y_{n+3} - \hat Y_{n+3})^2 = E\left( e_{n+3} + \alpha_1 (e_{n+2} + \alpha_1 e_{n+1}) \right)^2 + E\left[ -\hat\alpha_0 (1 + \hat\alpha_1 + \hat\alpha_1^2) + (\alpha_1^3 - \hat\alpha_1^3) Y_n \right]^2. \qquad (28)$$
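A sketch of the simulation loop described above, for the one period case. The estimator argument is any of the earlier sketches returning (α̂₀, α̂₁, …); adding 1 (= E e²_{n+1} with σ² = 1) to V(X) to approximate the mean square error in (25) assumes X has mean near zero.

    import numpy as np

    def mc_one_step_mse(estimator, n=15, N=10_000, seed=0):
        """Approximate E(Y_{n+1} - Yhat_{n+1})^2 via (25)-(26), sigma^2 = 1."""
        rng = np.random.default_rng(seed)
        X = np.empty(N)
        for i in range(N):
            # y1 = 0 and y_t = y_{t-1} + e_t, i.e. alpha0 = 0, alpha1 = 1
            y = np.concatenate(([0.0], np.cumsum(rng.normal(size=n - 1))))
            a0, a1 = estimator(y)[:2]
            X[i] = (0.0 - a0) + (1.0 - a1) * y[-1]      # equation (26)
        v_x = (np.sum(X ** 2) - np.sum(X) ** 2 / N) / (N - 1)
        return 1.0 + v_x                                # E(e^2) = 1 plus V(X)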
Since the estimators of α₁ are scale invariant, there is no loss of generality (see Fuller, 1996) in assuming σ² = 1. The estimated values of the unit root, the mean square errors, and the percentile distribution of the τ statistic for the first three predictions in the future are simulated from 10,000 samples. Each sample is constructed from independent N(0, 1) random variables, and we take n = 15, 30, 50, 100, and 200 to represent small and large sample sizes for the unit root.
The estimated values of the unit root and its mean square errors from the different methods are contained in Table (1). We found that the modified least squares estimator is less biased than any other estimator, and the bias decreases as the sample size increases. We also found that the modified weighted symmetric estimator is less biased than the weighted symmetric estimator, and the maximum likelihood estimator is less biased than ordinary least squares; these biases also decrease as the sample size increases. Note that bias alone is not a good criterion for choosing among the estimators; the mean square error is. The MLS estimator performs better than any other estimator in terms of mean square error: for n = 15 its mean square error is 0.0422, while it is 0.0003 for n = 200. Similarly, the mean square error for MWS is reduced from 0.0594 to 0.0004 when the sample size increases from 15 to 200, and MWS has smaller mean square error than WS. Finally, ML has smaller mean square error than OLS. The Monte Carlo simulation results therefore suggest that the MLS estimator performs well, in the sense of smaller bias and mean square error, in both small and large samples, and we recommend using MLS for estimating the unit root.
The simulated mean square errors of the first, second, and third period predictors are reported, for all the estimators, in Table (2). The mean square errors estimated using MWS are 1.0629, 2.1304, and 3.1942 for the first, second, and third period predictors for the small sample size n = 15; the corresponding mean square errors computed by the WS prediction formulas are 1.2001, 2.4571, and 3.7002. For the large sample size n = 200, the mean square errors estimated by MWS are 1.0035, 2.0133, and 3.0286 for the first, second, and third period predictors, while they are 1.01373, 2.0526, and 3.1136 when computed by the WS prediction formulas. From Table (2), we find that the mean square error of the MWS one period predictor is smaller than that of any other method, and that ML is better, in the sense of mean square error, than WS, while WS is better than OLS and MLS. The mean square error decreases as the sample size increases. The same results hold for the second and third period predictors of the unit root.
In Tables (3) through (7), we simulate the sample percentiles of the distribution of the τ statistic, as given in (22), for the first, second, and third period predictors of the unit root from the different estimators, to see what their distributions look like. Table (6) shows that, for the first period predictor at n = 15, the cumulative probability of a τ statistic smaller than zero is 0.5045, while it is 0.5 for the standard normal distribution. Similarly, the probabilities for the second and third period predictors are 0.4972 and 0.4949 respectively, while for n = 200 they are 0.4998 and 0.4972. So the differences get smaller as the sample size gets larger. Finally, we can say that the distribution of the τ statistic, from the different methods, is symmetric about zero and very close, especially for large n, to the standard normal distribution.
To assess the closeness of the distribution of the τ statistic for the first, second, and third period predictors to the standard normal distribution, we used the P-value of the Kolmogorov test (KT). We found from Table (8) that the P-values for the different methods support this closeness in both small and large sample sizes, meaning that the sampling distributions of the τ statistic are consistent with the standard normal distribution. Therefore, the tables of probabilities of the τ statistic for the first, second, and third period predictors can be used for hypothesis testing and confidence intervals for the unit root, or the standard normal distribution can be used directly.
Table (1): The estimated value of the unit root and its MSE from different methods.

Sample Size          OLS      MLS      WS       MWS      ML
  15     α̂          0.6882   0.9833   0.7127   0.8695   0.7005
         MSE         0.1391   0.0422   0.1249   0.0594   0.1269
  30     α̂          0.8341   0.9828   0.8555   0.9353   0.8473
         MSE         0.0384   0.0112   0.0321   0.0154   0.0994
  50     α̂          0.8964   0.9865   0.9114   0.9598   0.9061
         MSE         0.0148   0.0043   0.0120   0.0058   0.0114
  100    α̂          0.9476   0.9931   0.9556   0.9799   0.9527
         MSE         0.0038   0.0011   0.0031   0.0015   0.0028
  200    α̂          0.9735   0.9963   0.9779   0.9901   0.9762
         MSE         0.0010   0.0003   0.0008   0.0004   0.0007
Table (2): The mean square errors of the unit root predictors from different methods.

Sample Size  Period    OLS       MLS       WS        MWS       ML
  15         1         1.2018    1.4826    1.2001    1.0629    1.1919
             2         2.5269    4.4610    2.4571    2.1304    2.4443
             3         3.9801   10.4875    3.7002    3.1942    3.6935
  30         1         1.0982    1.2071    1.0923    1.0252    1.0883
             2         2.3087    2.9162    2.2790    2.0729    2.2698
             3         3.5825    5.3166    3.4978    3.1264    3.4861
  50         1         1.0584    1.1116    1.0562    1.0151    1.0538
             2         2.1992    2.4694    2.1894    2.0500    2.1823
             3         3.3907    4.1171    3.3654    3.0949    3.3536
  100        1         1.0302    1.0562    1.0290    1.0077    1.0279
             2         2.1113    2.2303    2.1063    2.0282    2.1024
             3         3.2322    3.5315    3.2202    3.0578    3.2127
  200        1         1.0149    1.0265    1.01373   1.0035    1.0156
             2         2.0572    2.1071    2.0526    2.0133    2.0599
             3         3.1238    3.2438    3.1136    3.0286    3.1293
Table (3): Percentiles of the τ statistic using ordinary least squares predictors.

                               Probability of a smaller value than
Sample Size  Period  -2.33   -1.96   -1.646  -1.28   0.0     1.28    1.646   1.96    2.33
  15         1       .0238   .0418   .0686   .1193   .5019   .8834   .9328   .9612   .9801
             2       .0370   .0625   .0925   .1476   .4946   .8552   .9096   .9418   .9660
             3       .0517   .0776   .1113   .1603   .4963   .8337   .8867   .9202   .9503
  30         1       .0141   .0325   .0581   .1117   .4929   .8853   .9436   .9688   .9855
             2       .0212   .0409   .0723   .1236   .5013   .8734   .9271   .9564   .9757
             3       .0284   .0519   .0831   .1391   .5054   .8635   .9152   .9457   .9667
  50         1       .0138   .0309   .0575   .1069   .4973   .8963   .9454   .9700   .9872
             2       .0185   .0372   .0656   .1215   .4983   .8818   .9364   .9630   .9818
             3       .0246   .0448   .0739   .1242   .4990   .8768   .9273   .9545   .9778
  100        1       .0131   .0278   .0543   .1041   .5044   .8900   .9428   .9709   .9872
             2       .0141   .0318   .0584   .1067   .5012   .8944   .9432   .9679   .9870
             3       .0157   .0339   .0620   .1142   .4987   .8861   .9376   .9655   .9847
  200        1       .0144   .0273   .0480   .1005   .4985   .8971   .9493   .9754   .9895
             2       .0098   .0254   .0516   .1027   .4971   .8950   .9446   .9709   .9891
             3       .0125   .0277   .0534   .1018   .4958   .8931   .9460   .9720   .9885
Normal Dist.         0.01    0.025   0.05    0.1     0.5     0.90    0.95    0.975   0.99
Table (4): Percentiles of the τ statistic using modified least squares predictors.

                               Probability of a smaller value than
Sample Size  Period  -2.33   -1.96   -1.646  -1.28   0.0     1.28    1.646   1.96    2.33
  15         1       .0194   .0398   .0682   .1209   .5044   .8880   .9370   .9650   .9819
             2       .0215   .0403   .0704   .1232   .4998   .8799   .9371   .9628   .9816
             3       .0191   .0373   .0661   .1226   .5004   .8807   .9366   .9629   .9787
  30         1       .0137   .0312   .0580   .1091   .4909   .8900   .9424   .9696   .9869
             2       .0155   .0340   .0635   .1194   .5007   .8852   .9371   .9651   .9835
             3       .0167   .0349   .0648   .1229   .5072   .8781   .9334   .9615   .9815
  50         1       .0135   .0295   .0563   .1050   .4953   .8961   .9449   .9722   .9867
             2       .0157   .0331   .0583   .1153   .4993   .8868   .9414   .9664   .9855
             3       .0170   .0355   .0617   .1148   .4980   .8838   .9370   .9660   .9849
  100        1       .0129   .0284   .0540   .1053   .5044   .8907   .9435   .9703   .9893
             2       .0138   .0291   .0568   .1077   .5017   .8982   .9454   .9730   .9878
             3       .0122   .0307   .0583   .1109   .4963   .8930   .9425   .9699   .9875
  200        1       .0113   .0259   .0493   .1003   .4980   .8982   .9484   .9749   .9901
             2       .0093   .0242   .0504   .1017   .4970   .8955   .9461   .9722   .9886
             3       .0103   .0269   .0514   .0987   .4998   .8965   .9465   .9727   .9892
Normal Dist.         0.01    0.025   0.05    0.1     0.5     0.90    0.95    0.975   0.99
Table (5): Percentiles of the τ statistic using weighted symmetric predictors.

                               Probability of a smaller value than
Sample Size  Period  -2.33   -1.96   -1.646  -1.28   0.0     1.28    1.646   1.96    2.33
  15         1       .0181   .0348   .0580   .1050   .5008   .8954   .9427   .9692   .9841
             2       .0311   .0538   .0823   .1314   .4995   .8709   .9203   .9497   .9707
             3       .0448   .0694   .0979   .1456   .4934   .8486   .8988   .9294   .9560
  30         1       .0112   .0279   .0539   .1048   .4888   .8927   .9471   .9716   .9870
             2       .0171   .0372   .0622   .1163   .5033   .8800   .9337   .9606   .9785
             3       .0263   .0454   .0752   .1281   .5008   .8736   .9244   .9499   .9701
  50         1       .0124   .0285   .0542   .1042   .4990   .8990   .9490   .9726   .9876
             2       .0171   .0354   .0600   .1135   .4998   .8876   .9410   .9673   .9839
             3       .0220   .0408   .0667   .1177   .5006   .8845   .9318   .9610   .9789
  100        1       .0126   .0269   .0527   .1014   .5027   .8916   .9452   .9709   .9879
             2       .0135   .0311   .0563   .1028   .5020   .8973   .9432   .9704   .9868
             3       .0146   .0310   .0578   .1105   .4993   .8904   .9398   .9683   .9850
  200        1       .0106   .0255   .0477   .0989   .4990   .8978   .9500   .9752   .9898
             2       .0094   .0254   .0506   .0971   .4997   .8953   .9466   .9725   .9891
             3       .0117   .0278   .0494   .1014   .4966   .8969   .9475   .9733   .9887
Normal Dist.         0.01    0.025   0.05    0.1     0.5     0.90    0.95    0.975   0.99
Table (6): Percentiles of the τ statistic using modified weighted symmetric predictors.

                               Probability of a smaller value than
Sample Size  Period  -2.33   -1.96   -1.646  -1.28   0.0     1.28    1.646   1.96    2.33
  15         1       .0111   .0244   .0449   .0879   .5045   .9153   .9576   .9777   .9879
             2       .0157   .0312   .0513   .0907   .4972   .9123   .9534   .9727   .9842
             3       .0209   .0346   .0566   .0929   .4949   .9063   .9471   .9666   .9787
  30         1       .0083   .0230   .0449   .0945   .4883   .9039   .9539   .9771   .9900
             2       .0101   .0233   .0465   .0924   .5015   .9061   .9510   .9738   .9872
             3       .0137   .0262   .0484   .0926   .5020   .9075   .9497   .9684   .9836
  50         1       .0107   .0260   .0485   .0946   .4976   .9041   .9525   .9761   .9899
             2       .0123   .0271   .0482   .0977   .4999   .9035   .9512   .9749   .9875
             3       .0136   .0288   .0492   .0979   .4990   .9058   .9499   .9735   .9884
  100        1       .0118   .0249   .0501   .0993   .5029   .8946   .9470   .9747   .9896
             2       .0108   .0268   .0511   .0967   .5014   .9068   .9513   .9738   .9893
             3       .0100   .0236   .0482   .0972   .5010   .9019   .9491   .9733   .9899
  200        1       .0108   .0241   .0467   .0973   .4977   .8999   .9506   .9761   .9902
             2       .0084   .0230   .0469   .0942   .4998   .8988   .9492   .9749   .9904
             3       .0097   .0236   .0466   .0935   .4972   .9029   .9529   .9761   .9905
Normal Dist.         0.01    0.025   0.05    0.1     0.5     0.90    0.95    0.975   0.99
Table (7): Percentiles of the τ statistic using maximum likelihood predictors.

                               Probability of a smaller value than
Sample Size  Period  -2.33   -1.96   -1.646  -1.28   0.0     1.28    1.646   1.96    2.33
  15         1       .0171   .0341   .0575   .1034   .5035   .8982   .9440   .9691   .9841
             2       .0301   .0535   .0800   .1293   .5004   .8678   .9208   .9497   .9713
             3       .0436   .0695   .0972   .1469   .4914   .8470   .8989   .9325   .9569
  30         1       .0113   .0277   .0529   .1021   .4874   .8920   .9470   .9722   .9876
             2       .0171   .0366   .0634   .1146   .5020   .8805   .9349   .9616   .9791
             3       .0256   .0460   .0755   .1295   .5019   .8728   .9239   .9501   .9789
  50         1       .0123   .0285   .0536   .1035   .5005   .9005   .9488   .9723   .9873
             2       .0166   .0357   .0604   .1138   .5006   .8881   .9409   .9678   .9841
             3       .0218   .0404   .0678   .1188   .4983   .8837   .9326   .9608   .9791
  100        1       .0124   .0273   .0528   .1022   .5037   .8924   .9450   .9713   .9877
             2       .0129   .0312   .0562   .1037   .5019   .8973   .9441   .9700   .9872
             3       .0141   .0312   .0584   .1101   .4986   .8889   .9393   .9673   .9846
  200        1       .0103   .0258   .0479   .0993   .4983   .8984   .9497   .9754   .9897
             2       .0094   .0258   .0497   .0966   .4996   .8965   .9461   .9728   .9890
             3       .0122   .0285   .0504   .1025   .4975   .8958   .9469   .9728   .9887
Normal Dist.         0.01    0.025   0.05    0.1     0.5     0.90    0.95    0.975   0.99
Table (8): P-values from the Kolmogorov test for the different methods in the unit root case.

Sample Size  Period   OLS      MLS      WS       MWS      ML
  15         1        0.0108   0.0113   0.0114   0.0121   0.0115
             2        0.0097   0.0111   0.0102   0.0116   0.0103
             3        0.0085   0.0113   0.0091   0.0111   0.0092
  30         1        0.0118   0.0118   0.0121   0.0124   0.0121
             2        0.0111   0.0116   0.0115   0.0122   0.0115
             3        0.0104   0.0115   0.0106   0.0118   0.0107
  50         1        0.0118   0.0118   0.0119   0.0121   0.0120
             2        0.0113   0.0116   0.0115   0.0120   0.0115
             3        0.0108   0.0115   0.0110   0.0118   0.0110
  100        1        0.0119   0.0119   0.0119   0.0120   0.0119
             2        0.0118   0.0118   0.0118   0.0121   0.0119
             3        0.0116   0.0120   0.0117   0.0122   0.0118
  200        1        0.0117   0.0121   0.0121   0.0121   0.0122
             2        0.0122   0.0123   0.0122   0.0123   0.0122
             3        0.0119   0.0122   0.0120   0.0122   0.0120
References
1. Ahking, F.W. (2002). Efficient Unit Root Tests of Real Exchange Rates in the Post-Bretton Woods Era, Journal of Economic Literature, 1-17.
2. Anderson, T.W. (1971). The Statistical Analysis of Time Series, Wiley, New York.
3. Dickey, D.A. & Fuller, W.A. (1979). Distribution of the Estimators for Autoregressive Time Series with a Unit Root, Journal of the American Statistical Association, 74, 427-431.
4. Dickey, D.A., Hasza, D.P. & Fuller, W.A. (1984). Testing for Unit Roots in Seasonal Time Series, Journal of the American Statistical Association, 79, 355-367.
5. Diebold, F.X. & Nerlove, M. (1990). Unit Roots in Economic Time Series: A Selective Survey, Advances in Econometrics, 8, 3-69.
6. Elliott, G. (1993). Efficient Tests for a Unit Root When the Initial Observation Is Drawn from Its Unconditional Distribution, unpublished manuscript, Harvard University, Cambridge, Massachusetts.
7. Elliott, G. & Stock, J.H. (1992). Efficient Tests for an Autoregressive Unit Root, paper presented at the NBER-NSF Time Series Seminar, Chicago.
8. Forchini, G. & Marsh, P. (2000). Exact Inference for the Unit Root Hypothesis, Discussion Papers No. 2000/54, Department of Economics and Related Studies, University of York, Heslington.
9. Fuller, W.A. (1996). Introduction to Statistical Time Series, 2nd Edition, Wiley, New York.
10. Fuller, W.A. & Hasza, D.P. (1980). Predictors for the First Order Autoregressive Process, Journal of Econometrics, 13, 139-157.
11. Fuller, W.A. & Hasza, D.P. (1981). Properties of Predictors for Autoregressive Time Series, Journal of the American Statistical Association, 76, 155-161.
12. Gonzalez-Farias, G.M. & Dickey, D.A. (1992). An Unconditional Maximum Likelihood Test for a Unit Root, 1992 Proceedings of the Business and Economic Statistics Section, American Statistical Association, 139-143.
13. Hasza, D.P. (1980). A Note on Maximum Likelihood Estimation for the First Order Autoregressive Process, Communications in Statistics, Theory and Methods, A9(13), 1411-1415.
14. Mann, H.B. & Wald, A. (1943). On the Statistical Treatment of Linear Stochastic Difference Equations, Econometrica, 11, 173-220.
15. Pantula, S.G., Gonzalez-Farias, G. & Fuller, W.A. (1994). A Comparison of Unit Root Criteria, Journal of Business and Economic Statistics, 13, 449-459.
16. Park, H.J. & Fuller, W.A. (1995). Alternative Estimators and Unit Root Tests for the Autoregressive Process, Journal of Time Series Analysis, 16, 415-429.
17. Roy, A. & Fuller, W.A. (2001). Estimation for Autoregressive Time Series with a Root Near 1, Journal of Business & Economic Statistics, 19, 482-493.
18. White, J.S. (1958). The Limiting Distribution of the Serial Correlation Coefficient in the Explosive Case, Annals of Mathematical Statistics, 29, 1188-1197.