SIMULATION METHODS BASED ON THE SAS SYSTEM
Paul Müller - Ulm University, Computing Center, F.R.G.
Karin Schlauß - Ulm University, Computing Center, F.R.G.
Abstract
Theory: In this paper we investigate the test statistics of Stein estimates as a non-linear
alternative to the OLS estimator in linear models. This is necessary because statistical and
numerical problems arise with OLS if the model assumptions are not fulfilled or if there are
correlations between the explanatory variables.
To avoid the above problems we look at the class of improved estimation functions. But
these estimators are in general biased and their distribution depends on the unknown parameters.
Therefore test statistics are not directly available.
One alternative for calculating the relevant test statistics of improved estimation functions,
especially of the Stein estimator, is the use of computer-based simulation methods in order to
approximate the empirical distribution of the estimation function. Here we use the jackknife
(Tukey [13]) and bootstrap simulation method (Efron [3]) to derive empirical confidence intervals for the Stein estimation function in a linear model.
Example: We carried out an experiment to implement the algorithm in SAS. The example
was first done with GAUSS 2.0, but the time needed for the simulation on a PC is very long and
GAUSS 2.0 is merely a PC system. On the other hand there was the possibility to work with
NAG or IMSL on a mainframe. At our computer site we have also implemented SAS on a
VAX 9410/6440 cluster and on PCs, with the full power of all products. We want to demonstrate that it is
a good idea to perform the simulation with SAS. We also show that, if a standard capability is
missing in SAS, there is the possibility to build it with the aid of SAS tools.
Keywords
Linear Model, Least Squares Regression, Stein-Rule Estimation, Standard Deviation, Singular
Value Decomposition (svd), Jackknife, Bootstrap, Monte-Carlo-Simulation.
1
Introduction
Here we consider the multiple linear regression model in the standardized form y = Xβ + u, where
y is the (T x 1) vector of T observations of the dependent variable, X the (T x K) matrix of the
explanatory variables with full rank K ≤ T, β the (K x 1) vector of the unknown parameters, and
u the (T x 1) random vector with E[u] = 0 and Var[u] = σ²I, where σ² is also unknown. In this
model the unknown parameter vector β is usually estimated by the Ordinary Least Squares (OLS)
estimation function b := (X'X)⁻¹X'y, which is normally distributed, b ~ N(β, σ²(X'X)⁻¹), in the
case of normality of u.
For computational purposes and numerical representation we use the singular value decomposition (svd) described in Golub/Reinsch [6]. Applied to the (T x K) matrix X with K ≤ T
we have the following decomposition:

X = UΣV',   (1)

where U'U = V'V = VV' = I and Σ := diag(σ_1, ..., σ_K). The diagonal elements of the matrix Σ
are the non-negative square roots of the eigenvalues of X'X; they are called the singular values.
Using the singular value decomposition defined above we get the following representation of the OLS
estimator b:

b = VΣ⁻¹U'y,   (2)

where the solution is unique iff all singular values are greater than zero.
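As a small illustration (not part of the programs of Section 5), the OLS estimate in (2) can be obtained directly from the svd in SAS/IML; here we assume that a design matrix x and a response vector y are already defined in IML, and the variable names are ours:

   proc iml;
      /* sketch: OLS via the singular value decomposition (2);            */
      /* x (T x K) and y (T x 1) are assumed to be defined already        */
      call svd(u,q,v,x);                 /* x = u*diag(q)*v'               */
      sigma = diag(q);                   /* Sigma = diag(sigma_1,...,sigma_K) */
      b_ols = v*inv(sigma)*u'*y;         /* b = V*Sigma^{-1}*U'*y          */
      print b_ols;
   quit;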
The problem in deriving test statistics for the unknown model parameters is the knowledge about
the distribution of the related estimation function. This distribution is correct only if all model
assumptions are fulfilled. These assumptions can rarely be examined and thus, in most cases, we have
to reckon with a biased OLS estimator. We also have that the OLS estimation function is sensitive
in the presence of correlation between the explanatory variables, which yields statistical as well as
numerical problems.
If we use other classes of also biased estimation functions, i.e. "improved estimation functions"
constructed by different criteria of goodness, we have estimators which are e.g. more robust against
multicollinearity.
But unfortunately we do not know the joint distribution of these functions,
because the relevant parameters depend directly on the unknown parameters. Therefore we cannot
calculate the relevant test statistics directly. Here the jackknife or bootstrap approach are appropriate
simulation methods for evaluating the relevant test statistics in the class of improved estimators.
2
The Stein-Rule Estimation Function
A nontraditional nonlinear biased alternative to the OLS estimation function was given by Stein [12].
Here we consider a general class of Stein-Rule functions discussed e.g. by Judge/Bock [7], Müller [8]
and Baur [1]. In the usual regression case where σ is unknown, the so-called Stein-Rule estimation
function d_SR is given as a nonlinear transformation of the OLS estimation function b as follows:

d_SR := [I_K - h(η)C]b = A_SR b,   (3)

with transformation matrix A_SR := [I_K - h(η)C], where η := (b'Bb)/((T - K)σ̂²) with σ̂² the estimated error variance, B and C are (K x K)
matrices and h is a real-valued differentiable function.
Under certain conditions of regularity on B, C, h(η) and if K ≥ 3 we have:
(4)
as shown e.g. in Judge/Bock [7]. This result holds even if a weighted risk function is used.
Moreover, for the Stein-Rule estimation function, the risk is always bounded by the risk of the
OLS function, and we have:
(5)
Furthermore, if we have some a priori information about the unknown parameter vector β, it is
possible to include this information in the estimation function. The result is that the risk of this new
estimation function will always be close to the minimum of the risk function.
Based on the criterion (4) we have that d_SR is an improvement over OLS if the relevant
shrinkage factors lie in the interval (0,1]. This suggests the definition of an estimator which
truncates the shrinkage factors at zero. The so-called positive part Stein estimator is defined as:

d⁺_SR := [I_K - M(p)]b,   p' = (p_1, ..., p_K),   (6)
where

p_i(b'Bb/s) := c_i h(b'Bb/s),   if c_i h(b'Bb/s) ≤ 1,
p_i(b'Bb/s) := a_i(b'Bb/s),     if c_i h(b'Bb/s) > 1,   (7)

and a_i is a real-valued function such that:

2 - c_i h(b'Bb/s) ≤ a_i(b'Bb/s) ≤ 1,   for c_i h(b'Bb/s) > 1.   (8)
Unlike the estimator defined in (3), the positive part Stein estimation function defined in (6) can
never change the sign of the OLS estimates. Under these conditions the positive part Stein estimator will
dominate the Stein estimator defined in (3) relative to the criterion (4) if p_i(b'Bb/s) differs from
c_i h(b'Bb/s) for some i = 1, ..., K on a set of positive measure, see e.g. Judge/Bock [7].
Also, with the svd and h(η)γ_i ≠ 1, i = 1, ..., K, the transformation matrix A_SR of
the Stein-Rule estimation function d_SR has the following representation:

A_SR := V[I_K - h(η)Γ]V',   (9)

where V consists of the eigenvectors of X'X, Γ = diag(γ_1, ..., γ_K) is the matrix of the eigenvalues of
C, and h(η) is a real-valued function.
Based on the above assumptions we can characterize the Stein-Rule estimator as a solution of a
transformed least squares problem (see Müller [8]) in the following way:

d_SR := VΣ_SR U'y   with   Σ_SR := Σ⁻¹Π_SR,   (10)

where Π_SR := (I_K - h(η)Γ) = diag(π_1, ..., π_K). For the positive part Stein estimator the diagonal
elements of Π_SR are truncated at zero. Moreover, for practical use of the Stein estimator we define
C := (X'X)⁻¹, B := I_K and h(η) := a/η with a := (K - 2)/(T - K - 2).
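For illustration, a compact SAS/IML sketch of the Stein-Rule estimate under these practical choices could look as follows; x and y are assumed to be in memory, the names are ours, and we simply use SSE/(T - K) as the variance estimate (the program in Section 5 is the reference version):

   proc iml;
      /* sketch: Stein-Rule estimate with C=(X'X)^{-1}, B=I_K, h(eta)=a/eta */
      call svd(u,q,v,x);
      t = nrow(x);  k = ncol(x);
      sigma = diag(q);
      b_ols = v*inv(sigma)*u'*y;              /* OLS estimate                */
      s2    = ssq(y - x*b_ols)/(t-k);         /* assumed variance estimate   */
      eta   = (b_ols'*b_ols)/((t-k)*s2);      /* eta with B = I_K            */
      a     = (k-2)/(t-k-2);
      gamma = inv(sigma*sigma);               /* eigenvalues of C=(X'X)^{-1} */
      pi_sr = i(k) - (a/eta)*gamma;           /* shrinkage matrix Pi_SR      */
      d_sr  = v*pi_sr*inv(sigma)*u'*y;        /* Stein-Rule estimate (10)    */
      print b_ols d_sr;
   quit;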
3
The Jackknife and Bootstrap Approach
Introduced by Quenouille [10] and named by Tukey [13], the jackknife technique provides a distribution-free method for parameter estimation in linear models. And because the jackknife removes bias
of order n⁻¹, it is an appropriate method for improved estimation.
Denoting the estimate of the unknown parameter vector β of a linear model when observation
t, t = 1, ..., T, is deleted by b(-t), we have T estimates for each component of the unknown parameter vector β. These sequences can be used as an estimate of the parameter vector β by taking the
average.
Because the b(-t) are based on partly the same information, the functions b(-t), t = 1, ..., T, are
correlated. In order to get approximately independent estimates, Tukey [13] introduced the "pseudo-values":
b(t) := T·b - (T - 1)·b(-t),   (11)

where b denotes the OLS estimation function based on the full sample. Now the jackknife estimation
function can be defined as the average of the pseudo-values:

b(J) := (1/T) Σ_{t=1}^{T} b(t).   (12)
As the jackknife estimation function of the variance of the "pseudo-values" bCt) Tukey [13] proposed
.2._
1
~[
l1CbJ) .- T(T _ 1) ~ bet) - b(J)
]2 _ (T - 1) ~[
-]2
T ~ bC-f) - bC-t) ,
(13)
where bC-t) denotes the mean of the bC-f) for t = l, ... ,T.
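As a small illustration of (11) - (13), the following SAS/IML sketch computes the pseudo-values and the jackknife variance for the OLS coefficients directly from the definitions; x and y are assumed given and all names are illustrative (the svd-based representation of Müller [9] below avoids the repeated regressions):

   proc iml;
      /* sketch: leave-one-out jackknife for the OLS coefficients           */
      t = nrow(x);  k = ncol(x);
      b_full = inv(x'*x)*x'*y;                /* OLS on the full sample      */
      pseudo = j(t,k,0);                      /* rows will hold b(t)'        */
      do i = 1 to t;
         idx = setdif(1:t, i);                /* delete observation i        */
         xi  = x[idx,];  yi = y[idx,];
         b_i = inv(xi'*xi)*xi'*yi;            /* b(-i)                       */
         pseudo[i,] = (t*b_full - (t-1)*b_i)'; /* pseudo-value (11)          */
      end;
      colmean  = pseudo[+,]/t;                /* jackknife estimate b(J) (12)*/
      dev      = pseudo - repeat(colmean,t,1);
      var_jack = dev[##,]/(t*(t-1));          /* variance estimate (13)      */
      print colmean var_jack;
   quit;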
In the following we use a numerical representation of the jackknife which was given by Müller
[9]. Let the matrix I(t=0) denote the (T x T)-dimensional identity matrix whose t-th row, t = 1, ..., T, is
identically zero. Based on the svd, the t-th pseudo-value b(-t) can be represented by:

(14)

that is, for the jackknife estimation function defined in (12) we have, analogously to the OLS representation (2):

b(J) = VΣ⁻¹U_J'y,   (15)

where

U_J := T·U - (T - 1) Σ_{t=1}^{T} I(t=0) U [U'I(t=0)U]⁻¹.
The key idea of the bootstrap method (Efron [4]) is to generate new residual vectors by resampling
the residuals, see e.g. Freedman [5], Bickel/Freedman [2] and Singh [11].
Assuming the model and the estimated parameters to be right, the resampling procedure generates
"pseudo-data" y^(j) for the dependent variable. Now the model parameters can be estimated using
the "pseudo-data", where the errors are directly observable.
By repeating the resampling procedure m times and estimating the model (y^(j), X), j = 1, ..., m,
with the same statistical procedure as before, we get a random sample of size m for every estimated parameter. This Monte-Carlo experiment can be used to approximate the distribution of the real
parameters.
Now we define:

b_B^(j) := (X'X)⁻¹X'y^(j),   j = 1, ..., m,   (16)

as the j-th OLS bootstrap estimator for the resampled model.
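A plain sketch of this resampling loop based directly on (16) could look as follows in SAS/IML; x, y and an estimate d (OLS or Stein) are assumed given, and m, the seed and the names are illustrative only:

   proc iml;
      /* sketch: plain residual bootstrap with m re-estimations             */
      t = nrow(x);  m = 1000;  seed = 4711;
      resid = y - x*d;                        /* estimated residuals         */
      boot  = j(m, ncol(x), 0);               /* rows hold the b_B^(j)       */
      do jj = 1 to m;
         ustar = j(t,1,0);
         do i = 1 to t;
            pick = 1 + int(t*ranuni(seed));   /* draw with replacement       */
            ustar[i] = resid[pick];
         end;
         ystar = x*d + ustar;                 /* pseudo-data y^(j)           */
         boot[jj,] = (inv(x'*x)*x'*ystar)';   /* bootstrap OLS estimate (16) */
      end;
   quit;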
Based on the svd, the j-th OLS bootstrap estimate of a linear model can be represented by:

b_B^(j) = VΣ⁻¹U'[I_T + R^(j)]y,   (17)

where R^(j) := P^(j)(UU' - I_T) and P^(j) is defined as a (T x T)-dimensional matrix where each row has
one 1 and (T - 1) zeros. The position of the 1 in each row is determined by a resampling procedure
with replacement from {1, ..., T}.
In this integrated approach we have to calculate the svd only once to reach all parameter estimates
of the virtual models by resampling via the matrix R^(j). Therefore the resampling methods
(jackknife and bootstrap) can be characterized by an appropriate transformation of the U matrix
defined by the svd.
4
Empirical Distribution and Test Statistics
In the following example we use different design matrices X which differ in the degree of multicollinearity. Here we use the condition number (largest/smallest singular value) of the design matrix
X as a measure of multicollinearity. It should be sufficient to indicate here only the results for
the design X with cond(X) = 760.
The sampling experiment is based on the following linear model:

y = Xβ + u = x_1β_1 + x_2β_2 + x_3β_3 + u   (18)
  = 10.0·x_1 + 6.0·x_2 + 8.0·x_3 + u,   (19)

where y is the (20 x 1) dimensional vector of the observations, X the (20 x 3) dimensional design
matrix where x_1 ≡ 1 for all observations (constant term), β = (10.0, 6.0, 8.0)' the coefficient vector,
and u is a normal random vector with mean vector zero and variance σ² = 1.0.
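A minimal SAS/IML sketch of this data-generating step (the design matrix x is assumed to be available; the seed and the names are ours) is:

   proc iml;
      /* sketch: one simulated sample of model (18)/(19)                    */
      beta = {10.0, 6.0, 8.0};                /* true coefficient vector     */
      seed = 12345;
      t = nrow(x);
      u = j(t,1,0);
      do i = 1 to t;
         u[i] = rannor(seed);                 /* N(0,1) errors, sigma^2 = 1  */
      end;
      y = x*beta + u;                         /* simulated dependent variable*/
   quit;

Repeating this draw 1000 times yields the simulated samples referred to below.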
The sample problem was done on a VAX 9000-410 with SAS/IML, the Interactive Matrix Language.
With SAS/QC PROC CAPABILITY we describe the distribution and the statistics of the BOOTSTRAP-OLS and BOOTSTRAP-STEIN estimators. The simulation is very fast and programming with SAS/IML is simple to learn. So it is possible to give scientists a good tool to
build in their own algorithms. In the background we have the full power of all products such as
the statistics, graphics and programming language of SAS/BASE. With the process capability analysis
we can compare the distribution of output from an in-control process.
In most empirical studies we have the situation that we only observe one sample of the dependent
variable y without knowledge of the underlying distributional process. Therefore, from the 1000
simulated vectors of the dependent variable, one vector was randomly selected for further calculations.
Based on the design matrix X (cond(X) = 760) and the selected vector y, OLS and Stein estimates
were calculated.
We use the bootstrap approach for calculating the standard deviation of the single components
of the coefficient vectors b and d_SR. Because there is a constant term included in the model (18),
we can operate on the uncentered residuals. For practical purposes we have the following steps for
calculating standard errors of the estimated parameters d_i ∈ d = (d_1, ..., d_K) by bootstrapping, where
d denotes the OLS or Stein estimates respectively.
Let F_T be the empirical distribution of the T residuals û. Next we use a random number generator for the uniform distribution to draw m times T new points u_i^(j), i = 1, ..., T, j = 1, ..., m,
independently and with replacement from F_T, so that each new point is an independent random
selection of one of the T original residuals. Here we have that some of the original residuals will
have been selected zero times, some once, some twice, etc. Based on the resampled residuals u^(j), the
known design matrix X and the estimated parameter vector d, we calculate new dependent variables
y^(j) = Xd + u^(j). Repeating the above steps a large number of times (say m times) and estimating
the new models with the same procedure as before, we get a sequence of bootstrap parameter vectors
d^(j) = (d_1^(j), ..., d_K^(j)), j = 1, ..., m. The relevant statistics can be obtained from the output of the capability
procedure.
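Besides PROC CAPABILITY, the bootstrap standard deviations can also be read off quickly from the dataset lib.estimate created by the program in Section 5, for example with PROC MEANS (a simple check, not part of the original program):

   proc means data=lib.estimate n mean std;
      var b1 b2 b3 d1 d2 d3;       /* bootstrap OLS and Stein coefficients */
   run;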
Statistics of the least squares estimator

Variable=B1                    Moments
N              1000        Sum Wgts        1000
Mean       9.621509        Sum         9621.509
Std Dev    0.993917        Variance    0.987872
Skewness    0.02706        Kurtosis    -0.06304
USS        93560.32        CSS         986.8841
CV         10.33016        Std Mean     0.03143
T:Mean=0   306.1208        Prob>|T|      0.0000
Sgn Rank     250250        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal   0.990035        Prob<W         0.934

Variable=B2                    Moments
N              1000        Sum Wgts        1000
Mean       5.179082        Sum         5179.082
Std Dev    34.41268        Variance    1184.232
Skewness    -0.0391        Kurtosis    0.042059
USS         1209871        CSS          1183048
CV         664.4552        Std Mean    1.088224
T:Mean=0   4.759204        Prob>|T|      0.0000
Sgn Rank      43416        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal   0.987566        Prob<W         0.661

Variable=B3                    Moments
N              1000        Sum Wgts        1000
Mean       8.535457        Sum         8535.457
Std Dev    17.42538        Variance    303.6438
Skewness     0.0369        Kurtosis    0.041556
USS        376194.2        CSS         303340.2
CV         204.1528        Std Mean    0.551039
T:Mean=0   15.48976        Prob>|T|      0.0000
Sgn Rank     128569        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal    0.98762        Prob<W         0.670
Statistics of the Stein estimator

Variable=D1                    Moments
N              1000        Sum Wgts        1000
Mean       9.608634        Sum         9608.634
Std Dev    0.541281        Variance    0.292985
Skewness   -0.04966        Kurtosis     -0.1727
USS        92618.54        CSS         292.6918
CV         5.633274        Std Mean    0.017117
T:Mean=0    561.357        Prob>|T|      0.0000
Sgn Rank     250250        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal   0.984857        Prob<W         0.208

Variable=D2                    Moments
N              1000        Sum Wgts        1000
Mean       4.286774        Sum         4286.774
Std Dev    3.281892        Variance    10.77081
Skewness   -0.11411        Kurtosis    -0.04301
USS        29136.47        CSS         10760.04
CV         76.55855        Std Mean    0.103783
T:Mean=0   41.30535        Prob>|T|      0.0000
Sgn Rank     232659        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal   0.985034        Prob<W         0.232

Variable=D3                    Moments
N              1000        Sum Wgts        1000
Mean       8.988962        Sum         8988.962
Std Dev    1.666105        Variance    2.775904
Skewness   0.113936        Kurtosis    -0.05285
USS        83574.57        CSS         2773.128
CV           18.535        Std Mean    0.052687
T:Mean=0   170.6111        Prob>|T|      0.0000
Sgn Rank     250250        Prob>|S|      0.0000
Num ^= 0       1000
W:Normal   0.983898        Prob<W         0.105
We can assume the empirical distribution of the BOOTSTRAP-STEIN estimator is a normal
distribution. In the following pictures we can see that the STEIN estimator has a smaller variance than the
OLS estimator for ill-conditioned data.
[Figures: histograms of the 1000 bootstrap estimates with fitted normal curves.
OLS-Bootstrap B1: Normal(Mu=9.6215, Sigma=0.9939); Stein-Bootstrap D1: Normal(Mu=9.6086, Sigma=0.5413).
OLS-Bootstrap B2: Normal(Mu=5.1791, Sigma=34.41); Stein-Bootstrap D2: Normal(Mu=4.2868, Sigma=3.2819).]
5
Programming techniques
libname lib '[]';
/*****************************************************************/
title1 'BOOTSTRAP - Technic';
title2 'STEIN- and OLS- estimator';
/*****************************************************************/
/*                        MODUL RESAMPLE                         */
/*****************************************************************/
proc iml;
reset storage=lib.boot;
/************ Modul: simulation (resampling) from the matrix ****/
start resample(mt,seed,h);
   p=j(20,20,0);                      /* resampling matrix P            */
   do i=1 to 20;
      j=1+int(20*ranuni(seed));       /* draw column with replacement   */
      p[i,j]=1;
   end;
   r=p*mt;                            /* R = P*(U*U' - I)               */
   h=i(nrow(mt))+r;                   /* H = I + R                      */
finish resample;
/*****************************************************************/
store module=resample;
quit;
/*****************************************************************/
/*                Read the data into a SAS dataset               */
/*****************************************************************/
data a;
   infile 'x.dat';
   input x1 x2 x3;
   infile 'y1.dat';
   input y;
run;
/*****************************************************************/
/*        PROCEDURE IML: computing OLS and STEIN estimator       */
/*****************************************************************/
proc iml;
reset storage=lib.boot;
load module=resample;
use a;
* Read variables into the matrices x and y;
read all var {x1 x2 x3} into x;
read all var {y} into y;
* Create the SAS dataset for the Bootstrap OLS and STEIN estimators;
create lib.estimate var {b1 b2 b3 d1 d2 d3};
/*****************************************************************/
/*                 Singular value decomposition                  */
/*****************************************************************/
call svd(u,q,v,x);
/*****************************************************************/
/*                    Manipulation of matrices                   */
/*****************************************************************/
t=nrow(x);                 /* rows of matrix X                           */
k=ncol(x);                 /* columns of matrix X                        */
it=i(t);                   /* identity matrix T*T                        */
bi=i(k);                   /* identity matrix K*K                        */
xx=x'*x;                   /* X'*X                                       */
c=inv(xx);                 /* C = inverse of X'*X                        */
bols=c*x'*y;               /* OLS estimator                              */
qdiag=diag(q);             /* K*K diagonal matrix of the singular values */
qinv=inv(qdiag);           /* inverse diagonal matrix of singular values */
gamma=inv(qdiag*qdiag);    /* diagonal values of gamma are the eigenvalues
                              of the inverse of matrix X'*X              */
/*****************************************************************/
/*                     Computing the variance                    */
/*****************************************************************/
ybeta=x*bols;              /* fitted values                              */
resid=y-ybeta;             /* residuals                                  */
sse=ssq(resid);            /* residual sum of squares                    */
dfe=t-k-1;
var=sse/dfe;               /* estimated error variance                   */
ut=u*u'-it;                /* U*U' - identity matrix                     */
/*****************************************************************/
/*                        Computing of eta                       */
/*****************************************************************/
bbb=y'*u*inv(qdiag*qdiag)*u'*y;     /* b'Bb with B = I_K                  */
eta=bbb/((t-k)*var);
/*****************************************************************/
/*                      Computing of h(eta)                      */
/*****************************************************************/
h=(k-2)/((t-k-2)*eta);              /* h(eta) = a/eta                     */
pi=bi-h*gamma;                      /* Pi_SR = I_K - h(eta)*Gamma         */
qsr=inv(qdiag)*pi;                  /* Sigma_SR = Sigma^{-1}*Pi_SR, cf. (10) */
dsr=v*qsr*u'*y;                     /* Stein-Rule estimate                */
/*****************************************************************/
/*                          SVD of STEIN                         */
/*****************************************************************/
d=v*qsr*u';                         /* d*y gives the Stein estimate       */
/*****************************************************************/
/*                           SVD of OLS                          */
/*****************************************************************/
b=v*qinv*u';                        /* b*y gives the OLS estimate         */
/********************** BOOTSTRAP ********************************/
seed=-1;
do k=1 to 1000;
   run resample(ut,seed,h);
   bb=b*h*y;                        /* BOOTSTRAP - OLS                    */
   bd=d*h*y;                        /* BOOTSTRAP - STEIN                  */
   b1=bb[1]; b2=bb[2]; b3=bb[3];
   d1=bd[1]; d2=bd[2]; d3=bd[3];
   append var {b1 b2 b3 d1 d2 d3};
end;
close lib.estimate;
quit;
/*****************************************************************/
/*        Test statistics, histogram, goodness-of-fit:           */
/*                       PROC CAPABILITY                         */
/*****************************************************************/
proc capability data=lib.estimate graphics normal gout=lib.estimate;
   var b1 b2 b3 d1 d2 d3;
   intervals / method=4 alpha=0.32;
   histogram b1 b2 b3 d1 d2 d3 /
      midptaxis=axis1
      font=swissl
      caxis=blue
      chref=red
      vscale=count
      midpercents
      hreflabels=('Mean')
      des='OLS'
      normal;
   symbol1 c=green;
run;
6
Conclusion
In this paper we use the bootstrap and jackknife approach in multiple linear models in order to attach
standard errors to improved estimation functions. Here the idea is to use the simulation techniques
to simulate the empirical distribution of the Stein estimator.
For computational purposes a numerical resampling approach of the bootstrap and jackknife
technique in multiple linear models is used. This approach yields a remarkable reduction of CPU time.
It is shown that the advantage of the Stein estimates becomes significant if the design matrix
X is not well conditioned. In the example the condition number was 760, which yields a very large
variability of the OLS parameter estimates. Further results (omitted in this paper) have shown
that for well-conditioned designs the OLS and Stein estimation functions give approximately the
same results.
7
References
[1] BAUR, F. (1984). Einige lineare und nicht-lineare Alternativen zum Kleinst-Quadrate-Schätzer im verallgemeinerten linearen Modell, Anton Hain, Königstein/Ts.
[2] BICKEL, P./FREEDMAN, D.A. (1981). Some asymptotic theory for the bootstrap, Annals of Statistics, 9, 1196 - 1217.
[3] EFRON, B. (1977). Bootstrap methods: another look at the jackknife, Annals of Statistics, 7, 1 - 26.
[4] EFRON, B. (1979). Computers and the theory of statistics: Thinking the unthinkable, SIAM Review, 21/4, 460 - 480.
[5] FREEDMAN, D.A. (1981). Bootstrapping regression models, Annals of Statistics, 9, 1218 - 1228.
[6] GOLUB, G./REINSCH, C. (1970). Singular value decomposition and least squares solutions, Numer. Math., 14, 403 - 420.
[7] JUDGE, G.G./BOCK, M.E. (1978). The statistical implications of pre-test and Stein-Rule estimators in econometrics, North Holland, Amsterdam.
[8] MÜLLER, P. (1990). Least squares characterization of improved estimation functions, Computational Statistics Quarterly, 3, 203 - 216.
[9] MÜLLER, P. (1990). The empirical distribution of the jackknife estimator simulated by bootstrapping, International Conference Bootstrapping and Related Techniques, Trier.
[10] QUENOUILLE, M.H. (1956). Notes on bias in estimation, Biometrika, 43, 353 - 360.
[11] SINGH, K. (1981). On the asymptotic accuracy of Efron's bootstrap, Annals of Statistics, 9, 1187 - 1195.
[12] STEIN, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution, Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, 1, 197 - 206.
[13] TUKEY, J.W. (1958). Bias and confidence in not-quite large samples, Annals of Mathematical Statistics, 29, 614.