A. Introduction and background information:
This paper studies the exchange rate function for the
U.S. and England during the years 1957 to 1990 based on the
monetary model and an autoregressive model.
The monetary model is an important and influential model concerning two
freely floating currencies. The exchange rate is, by definition, the
relative price of the monies of the two corresponding countries. The
major factors determining the exchange rate include the money supply,
income, and interest rates of the countries.
Formally let M be the
money supply, GDP be the nominal GDP, IR be the nominal
interest rate and X be the exchange rate.
Furthermore,
denote P as the price level, D as the real demand for
money, E as the expected future rate of price inflation, r
as the real interest rate.
The model***[reference the
green book] starts with the assumption of monetary
equilibrium for both countries:
(1) P = M/D, P* = M*/D*,
where the superscript * indicates variables of the foreign
country.
The aggregate money demand function can be
written as
(2) D = KYIR^(-a), D* = K*Y*IR*^(-a),
where K's are fixed constants and -a is the interest
elasticity of demand for money. By the Fisher equation, the nominal
interest rate is the sum of the real interest rate and the expected
future inflation:
(3) IR = r + E, IR* = r* + E*.
By real interest rate parity we have
(4) r = r*,
implying that the real rates of return on assets are the same in
both countries. Lastly, assuming that the purchasing power
parity holds, we have the following relationship:
(5) P = XP*,
showing how the price levels in the two countries are
linked by the exchange rate. Using the exchange rate, one
can convert the currency of one country to another.
Equation (5) essentially states that the exchange rate
equalizes the purchasing powers of the currencies at home
and abroad, when expressed in a common currency unit.
By substituting equations (1) to (4) into (5), one can
reach the simple monetary model for exchange rate:
(6) X = (M/M*)(K*/K)(Y*/Y)(IR/IR*)^a.
To estimate the model, one can take the logarithmic transformation to
obtain the linear model:
log X = log (M/M*) + log (K*/K) + log (Y*/Y) + a log (IR/IR*).
There is much evidence in the literature that supports the
claim that exchange rates follow approximately a random
walk.***[reference the paper I xeroxed] If the variable X
follows a random walk process, it can be written as
X(t) = X(t-1) + epsilon(t).
In other words, the exchange rate of the current period
gives a good prediction for the exchange rate for the next
period.
The disturbance epsilon is introduced to reflect
any relevant news and information that may affect the
exchange rate.
In the random walk model, the coefficient
of X(t-1) is restricted to be one.
Many events have
effects that persist over time. To capture these effects,
the model should include lagged variables.
It is natural
to extend the model to include additional lagged values of
X. This forms an autoregressive model of
the exchange rate:
X(t) = p1 X(t-1) + p2 X(t-2) + ... + pk X(t-k) +
epsilon(t).
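A small Python sketch (purely illustrative; the sample size, seed, and lag order are arbitrary assumptions, not part of this paper) of the relationship between the random walk and the autoregressive model described here:

# Illustrative sketch: simulate a random walk and fit an AR(4) model to it.
# The sample size, seed, and lag order are arbitrary assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
eps = rng.normal(size=200)         # disturbances epsilon(t)
x = np.cumsum(eps)                 # random walk: X(t) = X(t-1) + epsilon(t)

ar_fit = AutoReg(x, lags=4).fit()  # unrestricted AR(4) in place of the random walk
print(ar_fit.params)               # the X(t-1) coefficient should be close to one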
Finally one can combine the above time series model with
the basic monetary model, with additional lagged variables.
In particular, we focus on the factors income and interest
rate of the countries.
Thus, the model studied in this
paper is the following:
ER(t) = beta_0 + sum(i=0 to M) beta_1i [GDPUS(t-i)/GDPE(t-i)]
                + sum(i=0 to M) beta_2i [IRUS(t-i)/IRE(t-i)]
                + sum(j=1 to M) beta_5j ER(t-j) + u(t)
Exchange rate is regressed on GDPR, which is the GDP ratio
of the two countries; IRR, which is the interest rate ratio
of the two countries; and the lag of exchange rate.
The
GDP ratio is obtained by dividing the GDPUS by GDPE.
The
interest rate ratio is obtained by dividing the IRUS by
IRE. GDP measures the total market value of a country’s
output and can be calculated by the expenditure approach
using the equation GDP=C+I+G+(X-M).
Interest rate, which
is determined in the money market, affects investments.
When the interest rate increases, aggregate expenditure
decreases because investment decreases.
A decrease in
aggregate expenditure lowers equilibrium output by a
multiple of the initial decrease in investment.
The
purchasing power parity states that exchange rates are set
so that the price of similar goods in different countries
is the same.
C. Report of empirical results
The data set contains the variables ER, which is the
exchange rate of U.S. dollars per English pound; GDPUS,
which is the nominal U.S. GDP; GDPE, which is the nominal
England GDP; IRUS, which is the 3 month Treasury bill rate
for U.S.; IRE, which is the 3 month Treasury bill rate for
England.
The data were quarterly data from 1957 first
quarter to 1990 third quarter. There are 135 observations
in the data set.
The statistics summary in Table 1 shows that the mean
exchange rate is approximately two dollars per English
pound from 1957 to 1990.
The highest exchange rate
occurred in the fourth quarter of 1957 where 2.8594 dollars
were exchanged for an English pound.
The lowest exchange
rate occurred in the fourth quarter of 1985 where 1.1565
dollars were exchanged for an English pound. The mean GDP
for U.S. is 12 times higher than the mean GDP for England.
The mean interest rate for England is 1.3 times as much as
the mean interest rate for U.S.
The GDP data, which are not adjusted for inflation, are measured in
nominal terms. This is consistent with the fact that
the maximum and minimum of England's GDP occurred in 1990 third
quarter and 1957 first
quarter respectively. The maximum and minimum of U.S. GDP
occurred in 1990 third quarter and 1958 first quarter
respectively.
Table 1
Variable |   Obs        Mean    Std. Dev.        Min        Max
---------+-----------------------------------------------------
      er |   135    2.235376     .5067401     1.1565     2.8594
   gdpus |   135    1987.476      1518.92        441     5471.7
    gdpe |   135     161.138     156.6626      21.67     549.26
    irus |   135    6.104519      2.90844       1.02      15.09
     ire |   135    8.305333     3.401131       3.18      16.04
    gdpr |   135    15.88593     4.015345   9.915715   20.63329
     irr |   135    .7432501     .1900515   .2073171   1.279551
The correlation matrix in Table 2 shows that
multicollinearity is a problem in the data. The nearly
unity correlation coefficient (0.9961) between GDPUS and
GDPE shows that GDPUS and GDPE are highly intercorrelated.
When multicollinearity is a problem, the OLS estimators,
which are still BLUE, have large variances and covariances.
The large variances make the estimates less precise. The t-statistics tend to be statistically insignificant but the R-squared
can still be very high. The confidence intervals are much
wider and lead to the acceptance of the “zero null
hypothesis” more readily.
Also, the OLS estimators and
their standard errors can be sensitive to small changes in
the data if multicollinearity exists.
One source of
multicollinearity may be due to the data collection method
employed, for example, sampling over a limited range of
values taken by the regressors in the population.
If the
data collection method cannot be improved, we can combine
cross-sectional and time-series data to alleviate the
problem of multicollinearity given that the cross-sectional
estimates do not vary substantially from one cross section
to another.
Table 2
        |      er      gdpr       irr
--------+------------------------------
     er |  1.0000
   gdpr | -0.7932    1.0000
    irr | -0.1066   -0.0105    1.0000
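A sketch of how Table 1 and Table 2 could be reproduced, assuming the DataFrame df (with gdpr and irr already constructed) from the earlier sketch:

# Sketch reproducing Table 1 and Table 2; assumes the DataFrame `df` with
# columns er, gdpus, gdpe, irus, ire, gdpr, irr from the earlier sketch.
cols = ["er", "gdpus", "gdpe", "irus", "ire", "gdpr", "irr"]
table1 = df[cols].describe().T[["count", "mean", "std", "min", "max"]]
print(table1)                            # summary statistics (Table 1)
print(df[["er", "gdpr", "irr"]].corr())  # correlation matrix (Table 2)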
***let’s add other variables including er1 er2 er3… and gdpr1 gpr2…
In this study a linear model and a log model are
considered.
The objective is to choose a model that has
homoskedastic and serially uncorrelated error terms so that
the OLS estimates are efficient.
Based on the derivation
of the monetary model, the log model is more appropriate
than the linear model. Nevertheless, in this paper, the
decision is made solely on the basis of the statistical properties
of the residuals (heteroskedasticity and serial
correlation) in order to avoid GLS estimation, which is more costly
to perform.
Based on the empirical findings, the linear model is
preferred because the log model exhibits heteroskedasticity.
The White test and the Breusch-Pagan
test were used to test for heteroskedasticity.
The variance of the error term, sigma_i_sq, is assumed to be dependent on
GDPR; IRR; the square of GDPR; the square of IRR; and the
cross product of GDPR and IRR.
The White test was
inconclusive in determining whether the log model or the
linear model should be used because heteroskedasticity is
not observed in either model. For the White test, which
regresses the squared residuals on the regressors, the null
hypothesis that there is no heteroskedasticity cannot be
rejected at the 1% significance level for both the linear
and the log model.
The
Breusch-Pagan test, on the other hand, shows that the
linear model is preferred because the log model exhibits
heteroskedasticity.
The Breusch-Pagan test runs the
auxiliary regression of u_i_hat_sq/(RSS/N) on the regressors, where
u_i_hat_sq is the squared residual, RSS is the residual
sum of squares, and N is the number of observations. Using
the Breusch-Pagan test, the null hypothesis that the log
model has no heteroskedasticity is rejected at the 1%
significance level.
The test statistic for the log model,
which is 15.1004, is greater than the critical value of
15.0863 of the chi-square distribution with five degrees of
freedom. For the test of heteroskedasticity of the linear
model, the null hypothesis that the linear model has no
heteroskedasticity is not rejected at the 1% significance
level.
The test statistic of 6.808 for the linear model is
smaller than the critical value of 15.0863.
Refer to the
table below for comparison.
                     Linear Model   Log Model   Critical Value (chi_sq_5, 0.01)
Breusch-Pagan Test      6.808        15.1004        15.0863
White Test              2.0541        7.4169        15.0863
Decision rule: Reject null if the test statistic is greater than the
critical value of 15.0863.
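A sketch of how these tests could be computed with statsmodels, assuming the fitted results object model, the regressor DataFrame, and df from the earlier sketch (the log model would use the logged variables). Note that statsmodels implements the studentized (Koenker) form of the Breusch-Pagan test, so its LM statistic need not equal the ESS/2 figures above, and het_white builds the squares and cross products itself.

# Heteroskedasticity tests for the linear model, assuming `model` and `df`
# from the earlier estimation sketch.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

resid = model.resid                      # residuals from the linear model
idx = resid.index                        # estimation sample
Z = sm.add_constant(pd.DataFrame({
    "gdpr": df.loc[idx, "gdpr"],         # GDPR and IRR,
    "irr": df.loc[idx, "irr"],
    "gdpr_sq": df.loc[idx, "gdpr"] ** 2, # their squares,
    "irr_sq": df.loc[idx, "irr"] ** 2,
    "cross": df.loc[idx, "gdpr"] * df.loc[idx, "irr"],  # and their cross product
}))

bp_lm, bp_pval, _, _ = het_breuschpagan(resid, Z)
w_lm, w_pval, _, _ = het_white(resid, sm.add_constant(df.loc[idx, ["gdpr", "irr"]]))
print(f"Breusch-Pagan LM = {bp_lm:.4f} (p = {bp_pval:.4f})")
print(f"White LM = {w_lm:.4f} (p = {w_pval:.4f})")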
Heteroskedasticity gives OLS estimators that are no
longer BLUE. Although the OLS estimators remain unbiased, they no
longer have the smallest variance among the class of linear unbiased
estimators, and the usual estimates of their standard errors are biased.
If OLS estimation is used
disregarding heteroskedasticity, the usual t and F tests
are no longer valid.
Applying those tests would give
misleading conclusions about the statistical significance
of the estimated regression coefficients. When
heteroskedasticity exists, the GLS method of estimation
should be used. GLS transforms the variables so that they
satisfy the standard least-squares assumption that the
variance of the disturbance term is constant.
LM tests for first order and fourth order
autocorrelation are conducted, but neither the log model
nor the linear model exhibits autocorrelation. The LM test
for first order autocorrelation of the linear model
regresses the residuals on GDPR, the eight lags of GDPR,
IRR, the eight lags of IRR, the eight lags of ER, and one
lag of the residuals. The null hypothesis that the linear
model has no first order autocorrelation cannot be rejected
at the 1% significance level because 4.575 is smaller than
the critical value of 6.6349. The LM test for fourth order
autocorrelation regresses the residuals on GDPR, the eight
lags of GDPR, IRR, the eight lags of IRR, the eight lags of
ER, and the four lags of the residuals. The test statistic
of 8.3776 is smaller than the critical value of 13.2767 at
the 1% significance level. Hence, the null hypothesis that
the linear model has no fourth order autocorrelation cannot be
rejected.
The LM tests likewise cannot reject the null hypotheses of
no autocorrelation for the log model.
For
the LM test of first order autocorrelation, the log of the
residuals is regressed on the log of GDPR, the log of the
eight lags of GDPR, the log of IRR, the log of the eight
lags of IRR, the log of the eight lags of ER, and the one
lag of the log of the residuals. The null hypothesis that
the log model has no first order autocorrelation cannot be
rejected at the 1% significance level because 0.7686 is
smaller than the critical value of 6.6349. The LM test of
fourth order autocorrelation regressed the log of the
residuals on the log of GDPR, the log of the eight lags of
GDPR, the log of IRR, the log of the eight lags of IRR, the
log of the eight lags of ER, and the four lags of the log
of the residuals.
The null hypothesis that the log model
has no fourth order autocorrelation cannot be rejected at
the 1% significance level because 4.3316 is smaller than
the critical value of 13.2767.
Refer to the table below for comparison.

             Linear Model   Log Model   Critical Value
1st order        4.575        0.7686      6.6349 (chi_sq, 1, 0.01)
4th order        8.3776       4.3316     13.2767 (chi_sq, 4, 0.01)
Decision rule: Reject null if the test statistic is greater
than the critical value.
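A sketch of how these LM (Breusch-Godfrey) tests could be run in statsmodels, again assuming the fitted results object model from the earlier sketch. statsmodels computes an LM statistic of the T*R_sq form with zero-padded lagged residuals, so it will not match the hand-computed (T-1)*R_sq and (T-4)*R_sq values exactly.

# LM tests for serial correlation, assuming `model` from the earlier sketch.
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

lm1, p1, _, _ = acorr_breusch_godfrey(model, nlags=1)   # 1st order
lm4, p4, _, _ = acorr_breusch_godfrey(model, nlags=4)   # 4th order
print(f"1st order: LM = {lm1:.4f}, p = {p1:.4f}")
print(f"4th order: LM = {lm4:.4f}, p = {p4:.4f}")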
If autocorrelation exists but we disregard it in
our OLS estimation, the residual variance (sigma_hat_sq) is likely to
underestimate the true sigma_sq. As a result, R_sq will tend to be
overestimated.
Therefore, the usual t and F tests of
significance are no longer valid.
Applying those tests
would give misleading conclusions about the statistical
significance of the estimated regression coefficients.
Comparing the R2’s of the linear and log model, the
results are inconclusive in determining whether to use the
linear or the log model.
First, lnER_hat is obtained from
the log model. Then, the exponential of lnER_hat is computed.
The squared correlation between exp(lnER_hat) and ER_hat is
compared with the R_square from the linear model.
Squared correlation between exp(lnER_hat) and ER_hat = 0.9901^2 = 0.980298
R_square from the linear model = 0.9808
=> The linear model, with the higher R_square, is preferred.
Second, ER_hat is obtained from the linear model and the
log of ER_hat is computed. Then the squared
correlation between ln(ER_hat) and lnER_hat is compared
with the R_square from the log model.
Squared correlation between ln(ER_hat) and lnER_hat = 0.9900^2 = 0.9801
R_square from the log model = 0.9805
=> The log model, with the higher R_square, is preferred.
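A sketch of this comparison in Python, assuming pandas Series er_hat (fitted values from the linear model) and ln_er_hat (fitted values from the log model); the variable names are illustrative.

# Sketch of the R_square comparison between the linear and log models.
# Assumes Series `er_hat` (linear-model fit) and `ln_er_hat` (log-model fit).
import numpy as np

# Step 1: convert the log-model fit to levels and correlate with er_hat.
sq_corr_levels = np.corrcoef(np.exp(ln_er_hat), er_hat)[0, 1] ** 2

# Step 2: convert the linear-model fit to logs and correlate with ln_er_hat.
sq_corr_logs = np.corrcoef(np.log(er_hat), ln_er_hat)[0, 1] ** 2

print(sq_corr_levels)   # compare with the R_square of the linear model
print(sq_corr_logs)     # compare with the R_square of the log model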
Based on the result of the Breusch-Pagan test, which
found that heteroskedasticity exists in the log model, the
linear model was adopted in determining the lag length of
the model. Different regressions were run to determine how
many lags should be included in the model. The t-statistics
are insignificant for all coefficients from the
fifth- to eighth-order lags at the 1% significance level.
F-tests that the coefficients are jointly zero are
conducted, and the null hypothesis that the coefficients are jointly
zero is accepted for all lags from the fifth to the eighth order.
The fifth and higher lags of ER, GDPR, and IRR are all dropped.
The
regression of ER on GDPR, the four lags of GDPR, IRR, the
four lags of IRR, and the four lags of ER shows that the
fourth lag of ER is significant at the 1% significance
level.
The p-value of 0.008 for the fourth lag is
smaller than 0.01, which shows that the model should include
the fourth lag of ER.
The F-test that the coefficients of
the fourth lag of GDPR and IRR are jointly zero is carried out and the
null hypothesis is accepted. A series of regressions and F-tests is
carried out, but the lags of GDPR and IRR are all insignificant at the
1% level from the fourth order down to the first order. Therefore,
Model 2 is determined to be:
ER(t) = beta_0 + beta_10 [GDPUS(t)/GDPE(t)] + beta_20 [IRUS(t)/IRE(t)]
                + sum(j=1 to 4) beta_5j ER(t-j) + u(t)
Model 2 is estimated and the short-run and long-run
elasticities are calculated.
The short-run multiplier = 0.0095536
The short-run elasticity of ER with respect to GDPR
= 0.0095536*(16.06696/2.251238)
= 0.06818
The short-run elasticity of ER with respect to IRR
= 0.0095536*(0.7497285/2.251238)
= 3.1816*10^(-3)
The long-run multiplier
= [0.0095536 + (-0.064676)]/[1 - 1.167024 - (-0.3812324) - 0.3507642 - (-0.2228933)]
= -0.638452584
The long-run elasticity of ER with respect to GDPR
= -0.638452584*(16.06696/2.251238)
= -4.5566
The long-run elasticity of ER with respect to IRR
= -0.638452584*(0.7497285/2.251238)
= -0.2126
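A sketch of these calculations, using the coefficient estimates and sample means quoted above; reading -0.064676 as the coefficient on IRR is an assumption based on the long-run multiplier formula.

# Multiplier and elasticity calculations for Model 2, using the figures
# quoted in the text (treating -0.064676 as the IRR coefficient).
b_gdpr, b_irr = 0.0095536, -0.064676                         # GDPR and IRR coefficients
er_lag_coefs = [1.167024, -0.3812324, 0.3507642, -0.2228933] # ER(t-1)..ER(t-4)
mean_gdpr, mean_irr, mean_er = 16.06696, 0.7497285, 2.251238 # sample means

short_run_multiplier = b_gdpr
sr_elast_gdpr = short_run_multiplier * (mean_gdpr / mean_er)  # about 0.068
sr_elast_irr = short_run_multiplier * (mean_irr / mean_er)    # about 0.0032

long_run_multiplier = (b_gdpr + b_irr) / (1 - sum(er_lag_coefs))  # about -0.638
lr_elast_gdpr = long_run_multiplier * (mean_gdpr / mean_er)       # about -4.56
lr_elast_irr = long_run_multiplier * (mean_irr / mean_er)         # about -0.21

print(long_run_multiplier, lr_elast_gdpr, lr_elast_irr)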
The predicted ERs for the last four periods are
1.639637 for 1990 first quarter, 1.565121 for 1990 second
quarter, 1.673062 for 1990 third quarter, and 1.752046 for
1990 fourth quarter.
The predictive accuracy test was
carried out and the null hypothesis that the model is
correctly specified is not rejected at Q equals 4, where Q
is the number of periods for which predictions have been
made. The test statistic for the Predictive Accuracy test
is 4.238119, which is smaller than the critical value of
13.2767.
D. Formal calculation of the hypothesis tests:
Tests for heteroskedasticity of Linear Model:
Breusch-Pagan Test
Model:
u_i_hat_sq/sigma_tilda_sq=
alpha_1+alpha_2GDPR+alpha_3IRR
+alpha_4GDPR_sq+alpha_5IRR_sq
+alpha_6GDPR*IRR
Null:
sigma_i_sq=alpha_1=sigma_sq
Alternative:
sigma_i_sq=
alpha_1+alpha_2GDPR+alpha_3IRR
+alpha_4GDPR_sq+alpha_5IRR_sq
+alpha_6GDPR*IRR
Decision rule: Reject null if ESS/2 > chi_sq, 5, 0.01
ESS/2
=13.6163592/2
=6.808
chi_sq, 5, 0.01=15.0863
Since 6.808 < 15.0863, the null hypothesis that
heteroskedasticity does not exist cannot be rejected.
White Test:
Model:
u_i_hat_sq
=alpha_1+alpha_2GDPR+alpha_3IRR
+alpha_4GDPR_sq+alpha_5IRR_sq
+alpha_6GDPR*IRR
Null:
sigma_i_sq=alpha_1=sigma_sq
Alternative:
sigma_i_sq=
alpha_1+alpha_2GDPR+alpha_3IRR
+alpha_4GDPR_sq+alpha_5IRR_sq
+alpha_6GDPR*IRR
Decision rule: Reject null if NR_sq>chi_sq,5,0.01
NR_sq
=123*0.0167
=2.0541
chi_sq, 5, 0.01=15.0863
Since 2.0541 < 15.0863, the null hypothesis that
heteroskedasticity does not exist cannot be rejected.

Tests for heteroskedasticity of Log Model:
Breusch-Pagan Test:
Model:
u_i_hat_sq/sigma_tilda_sq=
alpha_1+alpha_2LGDPR+alpha_3LIRR
+alpha_4LGDPR_sq+alpha_5LIRR_sq
+alpha_6LGDPR*LIRR
Null:
sigma_i_sq=alpha_1=sigma_sq
Alternative:
sigma_i_sq=
alpha_1+alpha_2LGDPR+alpha_3LIRR
+alpha_4LGDPR_sq+alpha_5LIRR_sq
+alpha_6LGDPR*LIRR
Decision rule: Reject null if ESS/2 > chi_sq, 5, 0.01
ESS/2
=30.200814/2
=15.1004
chi_sq, 5, 0.01=15.0863
Since 15.1004 > 15.0863, the null hypothesis that
heteroskedasticity does not exist is rejected.
White Test:
Model:
u_i_hat_sq
=alpha_1+alpha_2LGDPR+alpha_3LIRR
+alpha_4LGDPR_sq+alpha_5LIRR_sq
+alpha_6LGDPR*LIRR
Null:
sigma_i_sq=alpha_1=sigma_sq
Alternative:
sigma_i_sq=
alpha_1+alpha_2LGDPR+alpha_3LIRR
+alpha_4LGDPR_sq+alpha_5LIRR_sq
+alpha_6LGDPR*LIRR
Decision rule: Reject null if NR_sq>chi_sq,5,0.01
NR_sq
=123*0.0603
=7.4169
chi_sq, 5, 0.01=15.0863
Since 7.4169 < 15.0863, the null hypothesis that
heteroskedasticity does not exist cannot be rejected.
LM Tests for autocorrelation for the linear model:
LM test for 1st order autocorrelation:
Null:         rho=0
Alternative:  not(rho=0)
Decision rule: Reject null if (T-1)R_sq>chi_sq, 1, 0.01
(T-1)*R_sq
=122*0.0375
=4.575
chi_sq, 1, 0.01=6.6349
Since 4.575 < 6.6349, the null hypothesis that there is no
autocorrelation is not rejected.
LM test for 4th order autocorrelation:
Null:         rho_1=rho_2=rho_3=rho_4=0
Alternative:  not (rho_1=rho_2=rho_3=rho_4=0)
Decision rule: Reject null if (T-4)R_sq>chi_sq, 4, 0.01
(T-4)*R_sq
=119*0.0704
=8.3776
chi_sq, 4, 0.01=13.2767
Since 8.3776 < 13.2767, the null hypothesis that there is
no autocorrelation is not rejected.
LM Tests for autocorrelation for the log model:
LM test for 1st order autocorrelation:
Null:         rho=0
Alternative:  not(rho=0)
Decision rule: Reject null if (T-1)R_sq>chi_sq, 1, 0.01
(T-1)*R_sq
=122*0.0063
=0.7686
chi_sq, 1, 0.01=6.6349
Since 0.7686 < 6.6349, the null hypothesis that there is no
autocorrelation is not rejected.
LM test for 4th order autocorrelation:
Null:         rho_1=rho_2=rho_3=rho_4=0
Alternative:  not (rho_1=rho_2=rho_3=rho_4=0)
Decision rule: Reject null if (T-4)R_sq>chi_sq, 4, 0.01
(T-4)*R_sq
=119*0.0364
=4.3316
chi_sq, 4, 0.01=13.2767
Since 4.3316 < 13.2767, the null hypothesis that there is
no autocorrelation is not rejected.
Predictive Accuracy Test of Model 2:
Define: u_T+j|T_hat = Y_T+j|T - Y_T+j|T_hat
PA = sum(j=1 to Q) [u_T+j|T_hat_sq / sigma_hat_sq] ~ chi_sq_Q
Sigma_hat_sq is the estimated variance for the disturbance
term for Model 2 and Q is the number of periods for which
predictions have been made.
Null:
model is correctly specified
Alternative:
model is misspecified
Decision rule: Reject null if PA > chi_sq, 4, 0.01
PA
=0.0011654+0.006034+0.0047249+0.0147512 / 0.00294065
=4.238119
chi_sq,4, 0.01=13.2767
Since 4.238119 < 13.2767, the null hypothesis that the
model is correctly specified is not rejected.
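A sketch of the predictive accuracy calculation under the formula above; the prediction errors and disturbance variance below are hypothetical placeholders for illustration, not the figures reported in this paper.

# Sketch of the predictive accuracy test.  `pred_err` holds the Q
# out-of-sample prediction errors u_T+j|T_hat and `sigma_hat_sq` the
# estimated disturbance variance; both are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2

pred_err = np.array([0.03, -0.05, 0.02, 0.04])   # hypothetical prediction errors
sigma_hat_sq = 0.0029                            # hypothetical disturbance variance
Q = len(pred_err)

pa = np.sum(pred_err ** 2) / sigma_hat_sq        # PA ~ chi_sq(Q) under the null
crit = chi2.ppf(0.99, df=Q)                      # 1% critical value (13.2767 for Q=4)
print(pa, crit, pa > crit)                       # reject if PA exceeds the critical value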