Transcript
```
Chapter 6: Point Estimation

6.1. Concepts of Point Estimation

Point Estimator
A point estimator of a parameter θ is a single number that can be regarded as an approximation of θ. A point estimator can be obtained by selecting a suitable statistic and computing its value from the given sample data.
Unbiased Estimator
A point estimator θ̂ is said to be an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. If θ̂ is biased, the difference E(θ̂) − θ is called the bias of θ̂.
[Figure: The pdf's of a biased estimator θ̂1 and an unbiased estimator θ̂2 for a parameter θ, with the bias of θ̂1 marked.]
Unbiased Estimator
When X is a binomial rv with parameters n and p, the sample proportion p̂ = X/n is an unbiased estimator of p.
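# A quick check of the unbiasedness claim above: simulate many Binomial(n, p)
# values and average p_hat = X/n. This is an illustrative sketch, not part of
# the original slides; n, p, and the trial count are arbitrary choices.
import random

random.seed(0)
n, p, trials = 50, 0.3, 20000

def binomial(n, p):
    # Draw one Binomial(n, p) value as a sum of n Bernoulli(p) trials.
    return sum(1 for _ in range(n) if random.random() < p)

mean_p_hat = sum(binomial(n, p) / n for _ in range(trials)) / trials
# mean_p_hat comes out very close to p = 0.3, consistent with E(p_hat) = p.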
Principle of Unbiased Estimation
When choosing among several different estimators of θ, select one that is unbiased.
Unbiased Estimator
Let X1, X2,…,Xn be a random sample from a distribution with mean μ and variance σ². Then the estimator

σ̂² = S² = Σ(Xi − X̄)² / (n − 1)

is an unbiased estimator of σ².
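# A simulation sketch of why the divisor n - 1 matters (not from the slides;
# the normal distribution, sigma^2 = 4, n = 5, and the trial count are
# assumed illustration values): averaging S^2 over many samples lands near
# sigma^2, while the divide-by-n version lands near (n-1)/n * sigma^2.
import random

random.seed(1)
n, sigma2, trials = 5, 4.0, 40000

s2_sum = biased_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(10.0, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    s2_sum += ss / (n - 1)   # S^2, the unbiased estimator
    biased_sum += ss / n     # biased: expected value is (n-1)/n * sigma^2

# s2_sum / trials is close to 4.0; biased_sum / trials is close to 3.2.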
Unbiased Estimator
If X1, X2,…,Xn is a random sample from a distribution with mean μ, then X̄ is an unbiased estimator of μ. If in addition the distribution is continuous and symmetric, then the median X̃ and any trimmed mean are also unbiased estimators of μ.
Principle of Minimum Variance Unbiased Estimation
Among all estimators of θ that are unbiased, choose the one that has the minimum variance. The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.
[Figure: Graphs of the pdf's of two different unbiased estimators, θ̂1 and θ̂2.]
MVUE for a Normal Distribution
Let X1, X2,…,Xn be a random sample from a normal distribution with parameters μ and σ. Then the estimator μ̂ = X̄ is the MVUE for μ.
[Figure: The pdf's of a biased estimator θ̂1 and the MVUE θ̂2, illustrating a biased estimator that is preferable to the MVUE.]
The Estimator for μ
Candidate estimators: X̄ (the sample mean), X̃ (the sample median), X̄e (the midrange), and X̄tr(10) (the 10% trimmed mean).

1. If the random sample comes from a normal distribution, then X̄ is the best estimator, since it has minimum variance among all unbiased estimators.
2. If the random sample comes from a Cauchy distribution, then X̃ is good (the MVUE is not known); X̄ and X̄e are quite bad.
3. If the underlying distribution is uniform, the best estimator is X̄e; this estimator is greatly influenced by outlying observations, but the lack of tails makes such observations impossible.
4. The trimmed mean X̄tr(10) works reasonably well in all three situations but is not the best in any of them.
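# A simulation sketch of the comparison above (not from the slides; the
# sample size n = 20 and the trial count are assumed values): estimate the
# sampling variance of the mean, median, midrange, and 10% trimmed mean
# under normal, Cauchy, and uniform data, all centered at mu = 0.
import math
import random
import statistics

random.seed(2)
n, trials = 20, 4000

def estimators(xs):
    xs = sorted(xs)
    m = len(xs)
    k = m // 10  # drop 10% of the observations from each end
    return {
        "mean": sum(xs) / m,
        "median": statistics.median(xs),
        "midrange": (xs[0] + xs[-1]) / 2,
        "trimmed": sum(xs[k:m - k]) / (m - 2 * k),
    }

def spread(draw):
    # Sampling variance of each estimator over many simulated samples.
    runs = [estimators([draw() for _ in range(n)]) for _ in range(trials)]
    return {name: statistics.pvariance([r[name] for r in runs]) for name in runs[0]}

normal = spread(lambda: random.gauss(0, 1))
cauchy = spread(lambda: math.tan(math.pi * (random.random() - 0.5)))  # standard Cauchy
uniform = spread(lambda: random.uniform(-1, 1))
# Normal data favor the mean, Cauchy data the median, and uniform data the
# midrange, while the trimmed mean does reasonably well throughout.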
Standard Error
The standard error of an estimator θ̂ is its standard deviation σ_θ̂ = √V(θ̂). If the standard error itself involves unknown parameters whose values can be estimated, substituting those estimates into σ_θ̂ yields the estimated standard error of the estimator, denoted σ̂_θ̂ or s_θ̂.
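# A short sketch of the estimated standard error (the numbers x = 21 and
# n = 50 are made up): for p_hat = X/n, sigma_p_hat = sqrt(p(1-p)/n)
# involves the unknown p, so substituting p_hat for p gives s_p_hat.
import math

x, n = 21, 50
p_hat = x / n                                  # point estimate of p
s_p_hat = math.sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error
# p_hat = 0.42 and s_p_hat is approximately 0.0698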
6.2. Methods of Point Estimation
Moments: Let X1, X2,…,Xn be a random sample from a pmf or pdf f(x). For k = 1, 2,…, the kth population moment, or kth moment of the distribution f(x), is estimated by the kth sample moment

(1/n) Σ Xiᵏ, the sum ranging over i = 1,…,n.
Moment Estimators
Let X1, X2,…,Xn be a random sample from a distribution with pmf or pdf f(x; θ1,…,θm), where θ1,…,θm are parameters whose values are unknown. Then the moment estimators θ̂1,…,θ̂m are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ1,…,θm.
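# A sketch of the method of moments for one parameter (the exponential rate
# lam = 2.0 and the sample size are assumed values, not from the slides):
# the first population moment of an exponential distribution is E(X) = 1/lam,
# so equating it to the first sample moment X-bar and solving gives the
# moment estimator lam_hat = 1 / X-bar.
import random

random.seed(3)
lam, n = 2.0, 10000
xs = [random.expovariate(lam) for _ in range(n)]
xbar = sum(xs) / n
lam_hat = 1 / xbar
# lam_hat comes out close to the true rate 2.0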
Likelihood Function
Let X1, X2,…,Xn have joint pmf or pdf f(x1,…,xn; θ1,…,θm), where the parameters θ1,…,θm have unknown values. When x1,…,xn are the observed sample values and f is regarded as a function of θ1,…,θm, it is called the likelihood function.
Maximum Likelihood Estimators
The maximum likelihood estimates (mle's) θ̂1,…,θ̂m are those values of the θi's that maximize the likelihood function, so that

f(x1,…,xn; θ̂1,…,θ̂m) ≥ f(x1,…,xn; θ1,…,θm) for all θ1,…,θm.

When the Xi's are substituted in place of the xi's, the maximum likelihood estimators result.
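# A sketch for an exponential sample (rate 1.5 and n = 500 are both assumed
# illustration values): the log-likelihood is l(lam) = n*log(lam) - lam*sum(x_i),
# maximized at lam_hat = n / sum(x_i). A coarse grid search over lam should
# land on nearly the same value as the closed form.
import math
import random

random.seed(4)
n = 500
xs = [random.expovariate(1.5) for _ in range(n)]
s = sum(xs)

def loglik(lam):
    return n * math.log(lam) - lam * s

lam_grid = max((0.01 * k for k in range(1, 501)), key=loglik)  # lam in (0, 5]
lam_closed = n / s  # from setting the derivative of loglik to zero
# lam_grid and lam_closed agree to within the grid spacing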
The Invariance Principle
Let θ̂1,…,θ̂m be the mle's of the parameters θ1,…,θm. Then the mle of any function h(θ1,…,θm) of these parameters is the function h(θ̂1,…,θ̂m) of the mle's.
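# A sketch of the invariance principle for an exponential sample (the five
# data values are hypothetical): the mle of the rate lam is n / sum(x_i);
# by invariance, the mle of the mean h(lam) = 1/lam is simply 1/lam_hat,
# which works out to the sample mean.
xs = [0.8, 1.9, 1.1, 1.4, 0.3]
lam_hat = len(xs) / sum(xs)   # mle of lam
mean_hat = 1 / lam_hat        # mle of the mean, by invariance
# mean_hat equals sum(xs)/len(xs) = 1.1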
Desirable Property of the Maximum Likelihood Estimate
Under very general conditions on the joint distribution of the sample, when the sample size n is large, the maximum likelihood estimator of any parameter θ is approximately unbiased [E(θ̂) ≈ θ] and has variance that is nearly as small as can be achieved by any estimator. Loosely speaking, the mle θ̂ is approximately the MVUE of θ.
```