Outline: Fundamentals of Mathematical Statistics
Read Wooldridge, Appendix C.
Part One
I. Populations, Parameters, and Random Sampling
II. Finite Sample Properties of Estimators
III. Asymptotic or Large Sample Properties of Estimators
Part Two
IV. General Approaches to Parameter Estimation
V. Interval Estimation and Confidence Intervals
Part Three
VI. Hypothesis Testing
VII. Remarks on Notation
I. Random S. II. Finite S. III. Asymptotic S. IV. Parameter E. V. Interval E. & Confidence I. VI. Hypothesis T VII. Remarks
Fundamentals of Mathematical Statistics . Intensive Course in Mathematics and Statistics . Chairat Aemkulwat
I. Populations, Parameters, and Random Sampling
• Population refers to any well-defined group of subjects.
• Statistical inference involves learning something about the population from a sample.
– By "learning" we can mean several things; most important are estimation and hypothesis testing.
Example:
• Suppose our interest is to find the average percentage increase in wage given an additional year of education.
– Population: obtain the wage and education of all 33 million working people.
– Sample: obtain data on a subset of the population.
• Parameters are constants that determine the directions and strengths of relationships among variables.
Example results:
– The return to education is 7.5%: an example of a point estimate.
– The return to education is between 5.6% and 9.4%: an example of an interval estimate.
– Does education affect wage? An example of hypothesis testing.
Sampling

Random Sampling: Definition
• Let Y be a random variable representing a population with probability density function f(y; θ). The pdf of Y is assumed to be known except for the value of θ.
– Different values of θ imply different population distributions.
• If Y1, …, Yn are independent random variables with a common probability density function f(y; θ), then {Y1, …, Yn} is a random sample from the population represented by f(y; θ).
• We also say the Yi are i.i.d. (independent, identically distributed) random variables from f(y; θ).
Example: a random sample from the Bernoulli distribution.
• If Y1, …, Yn are independent random variables, each distributed as Bernoulli(θ), so that
P(Yi = 1) = θ and P(Yi = 0) = 1 − θ,
then {Y1, …, Yn} constitutes a random sample from the Bernoulli(θ) distribution.
• For instance, Yi = 1 if passenger i shows up for a flight and Yi = 0 otherwise; θ is the probability that a randomly drawn passenger shows up.

Example: the working population.
• We may obtain a sample of, say, 100 families.
• The data we observe will differ for each different sample. A sample provides a set of numbers, say {y1, …, yn}.
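The Bernoulli sampling idea can be sketched in a few lines of Python; the show-up probability θ = 0.9 and the sample size below are made-up illustration values, not figures from the text.

```python
import random

random.seed(0)

theta = 0.9          # assumed show-up probability (illustrative value)
n = 1000             # sample size (illustrative value)

# Draw an i.i.d. Bernoulli(theta) sample: Y_i = 1 if passenger i shows up.
sample = [1 if random.random() < theta else 0 for _ in range(n)]

# The sample proportion is the natural estimate of theta.
theta_hat = sum(sample) / n
print(theta_hat)
```

A different seed, or a fresh draw, would give a different estimate from the same rule, which is exactly the "different data for each different sample" point above.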
Example: a random sample from the normal distribution. If Y1, …, Yn are independent random variables, each with a Normal(µ, σ²) distribution, then {Y1, …, Yn} is a random sample from the Normal(µ, σ²) population.
II. Finite Sample Properties of Estimators
A. Unbiasedness
B. Variance
C. Efficiency
Estimators and Estimates
• Suppose {Y1, …, Yn} is a random sample from a population distribution that depends on an unknown parameter θ.
• An estimator of θ is a rule that assigns a value of θ to each possible outcome of the sample. The rule is specified before any sampling is carried out. An estimator W of a parameter θ can be expressed as
W = h(Y1, …, Yn)
for some known function h.
• A "finite sample" is a sample of any size, no matter how large or small; finite sample properties are also called small sample properties.
• Asymptotic properties, by contrast, have to do with the behavior of estimators as the sample size grows without bound.
• When a particular set of values, say {y1, …, yn}, is plugged into the function h, we obtain an estimate of θ.
Sampling Distribution
• The distribution of an estimator is called its sampling distribution. It describes the likelihood of various outcomes of W across different random samples.
Example:
• Let {Y1, …, Yn} be a random sample from a population with mean µ. The natural estimator of µ is the average of the random sample:
Ȳ = (Y1 + ⋯ + Yn)/n,
called the sample average.
• In Appendix A, the sample average of a set of numbers was defined as a descriptive statistic; here, Ȳ is an estimator and therefore a random variable.
• For actual data outcomes y1, …, yn, the estimate is the average in the sample: ȳ = (y1 + ⋯ + yn)/n.
Unbiasedness

Unbiased Estimator: Definition
• An estimator W of θ is unbiased if
E(W) = θ
for all possible values of θ.
• Intuitively, if an estimator is unbiased, then its probability distribution has an expected value equal to the parameter it is supposed to be estimating.
• Unbiasedness does not mean that the estimate from a particular sample is equal to θ, or even very close to θ. Rather, if we could indefinitely draw random samples on Y from the population, compute an estimate each time, and then average these estimates over all random samples, we would obtain θ.

Example C.1: City Unemployment Rates
• Estimator: the sample average Ȳ. Estimate: ȳ = 6.0. Our estimate of the average city unemployment rate in the U.S. is 6.0%.
• Notes:
1) Each sample results in a different estimate.
2) The rule for obtaining the estimate is the same.
Bias of an Estimator: Definition
• If W is an estimator of θ, its bias is defined as
Bias(W) = E(W) − θ.
• An estimator has a positive bias if E(W) − θ > 0.
• Whether an estimator is unbiased, and the size of any bias, depend on the distribution of Y and on the function h. We cannot control the distribution of Y, but we can choose the rule h.

Show: the sample average is an unbiased estimator of the population mean µ. With the sums running over i = 1, …, n,
E(Ȳ) = E[(1/n) Σ Yi] = (1/n) E[Σ Yi] = (1/n) Σ E(Yi) = (1/n)(nµ) = µ.
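A small simulation makes the unbiasedness argument concrete: averaging Ȳ across many independent random samples recovers µ. The parameter values are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(1)

mu, sigma, n, reps = 5.0, 2.0, 30, 2000  # illustrative values

# Compute the sample average in many independent random samples;
# averaging the estimates across samples should come out near mu.
estimates = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)

grand_mean = statistics.mean(estimates)
print(grand_mean)
```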
The Sampling Variance of Estimators
• Unbiasedness ensures that the probability distribution of an estimator has a mean value equal to the parameter it is supposed to be estimating. The criterion has weaknesses:
(1) Some very good estimators are not unbiased.
(2) Unbiased estimators can be quite poor.
– Example: let W = Y1; that is, from a random sample of size n, we discard all of the observations except the first. Since E(Y1) = µ, W is unbiased, yet it is a poor estimator of µ.
• The variance of an estimator measures the dispersion in its distribution, that is, how spread out the distribution is; it is often called the sampling variance.
• Example: the variance of the sample average from a population.
The Sampling Variance of Estimators
Summary: If {Y1, …, Yn} is a random sample from a population with mean µ and variance σ², then
• Ȳ has the same mean as the population: E(Ȳ) = µ;
• its sampling variance equals the population variance divided by the sample size: Var(Ȳ) = σ²/n.
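The Var(Ȳ) = σ²/n result can be checked empirically by computing the variance of Ȳ across many replications; the population parameters below are arbitrary illustration values.

```python
import random
import statistics

random.seed(2)

mu, sigma, n, reps = 0.0, 1.0, 25, 4000  # illustrative values

ybars = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    ybars.append(sum(sample) / n)

# The empirical variance of the sample average across replications
# should be close to sigma**2 / n = 1/25 = 0.04.
var_ybar = statistics.variance(ybars)
print(var_ybar)
```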
The Sampling Variance of Estimators
• Suppose W1 and W2 are both unbiased estimators of θ, but W1 is more tightly centered about θ. (See graph!) This implies that the probability that W1 is more than any given distance from θ is less than the probability that W2 is more than the same distance from θ.
• Define the estimator
S² = [1/(n − 1)] Σ (Yi − Ȳ)²,
with the sum over i = 1, …, n; S² is usually called the sample variance.
• It can be shown that the sample variance is an unbiased estimator of σ²: E(S²) = σ².
The Sampling Variance of Estimators
Example: for a random sample with mean µ and variance σ², compare two estimators of µ: the sample average Ȳ and Y1, the first observation drawn.
• Ȳ is unbiased, E(Ȳ) = µ, with Var(Ȳ) = σ²/n.
• Y1 is unbiased, E(Y1) = µ, with Var(Y1) = σ².
• If the sample size is n = 10, this implies Var(Y1) is ten times larger than Var(Ȳ).

Example: the simulation in Table C.1 uses 20 random samples of size 10 (n = 10) generated from the normal distribution with µ = 2 and σ² = 1.
• y1 ranges from −0.64 to 4.27, with mean 1.89.
• ȳ ranges from 1.16 to 2.58, with mean 1.96.
Which estimator is better?
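A simulation in the spirit of Table C.1 can be run as below; the draws are fresh, so the individual numbers will not match the table, but the pattern (both estimators centered near µ = 2, Y1 far more dispersed) should reappear.

```python
import random
import statistics

random.seed(4)

mu, sigma, n, reps = 2.0, 1.0, 10, 20   # Table C.1 settings

first_obs, ybars = [], []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    first_obs.append(sample[0])     # estimator Y1
    ybars.append(sum(sample) / n)   # estimator Ybar

# Both estimators are unbiased, but Ybar varies far less across samples.
print(statistics.mean(first_obs), statistics.variance(first_obs))
print(statistics.mean(ybars), statistics.variance(ybars))
```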
Efficiency
Relative Efficiency: If W1 and W2 are two unbiased estimators of θ, then W1 is efficient relative to W2 when Var(W1) ≤ Var(W2) for all θ.
Example:
• For estimating the population mean µ, Var(Ȳ) < Var(Y1) for any value of σ² (when n > 1). Thus the estimator Ȳ is efficient relative to Y1 for estimating µ.
• Within a certain class of estimators, the sample average has the smallest variance: it can be shown that Ȳ has the smallest variance among all unbiased estimators that are also linear functions of Y1, Y2, …, Yn.
– The assumptions are that the Yi have a common mean and variance, and that they are pairwise uncorrelated.
Efficiency
• If we do not restrict our attention to unbiased estimators, then comparing variances is meaningless.
– Example: in estimating the population mean µ, the trivial estimator equal to zero has mean E(0) = 0, variance Var(0) = 0, and bias Bias(0) = E(0) − µ = −µ.
• A measure for comparing estimators that are not necessarily unbiased is the mean squared error (MSE). If W is an estimator of θ, then
MSE(W) = E[(W − θ)²] = E{[W − E(W)] + [E(W) − θ]}² = Var(W) + [Bias(W)]².
• The MSE measures how far, on average, the estimator is from θ; it depends on both the variance and the bias.
• The trivial estimator is a very poor estimator when µ, and hence the magnitude of its bias −µ, is large.
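The decomposition MSE = Var + Bias² can be verified numerically. Here W = Ȳ/2 is an arbitrary biased estimator chosen purely for illustration, with illustrative parameter values.

```python
import random
import statistics

random.seed(5)

mu, sigma, n, reps = 3.0, 1.0, 20, 5000  # illustrative values

# Deliberately biased estimator of mu: W = Ybar / 2, so Bias(W) = -mu/2.
w_vals = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    w_vals.append(sum(sample) / n / 2)

mse = statistics.mean([(w - mu) ** 2 for w in w_vals])
var_w = statistics.pvariance(w_vals)
bias_w = statistics.mean(w_vals) - mu
# Empirical decomposition: MSE equals Var(W) + Bias(W)^2.
print(mse, var_w + bias_w ** 2)
```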
Problem C.1
C.1 Let Y1, Y2, Y3, and Y4 be independent, identically distributed random variables from a population with mean µ and variance σ². Let
Ȳ = (1/4)(Y1 + Y2 + Y3 + Y4)
denote the average of these four random variables.
(i) What are the expected value and variance of Ȳ in terms of µ and σ²? [ans.]
(ii) Now, consider a different estimator of µ:
W = (1/8)Y1 + (1/8)Y2 + (1/4)Y3 + (1/2)Y4.
This is an example of a weighted average of the Yi. Show that W is also an unbiased estimator of µ. Find the variance of W. [ans.]
(iii) Based on your answers to parts (i) and (ii), which estimator of µ do you prefer, Ȳ or W? [ans.]

Problem C.1 (i)
• This is just a special case of what we covered in the text, with n = 4: E(Ȳ) = µ and Var(Ȳ) = σ²/4.

Problem C.1 (ii)
• W is unbiased:
E(W) = E(Y1)/8 + E(Y2)/8 + E(Y3)/4 + E(Y4)/2 = µ[(1/8) + (1/8) + (1/4) + (1/2)] = µ(1 + 1 + 2 + 4)/8 = µ.
• Because the Yi are independent,
Var(W) = Var(Y1)/64 + Var(Y2)/64 + Var(Y3)/16 + Var(Y4)/4 = σ²[(1/64) + (1/64) + (4/64) + (16/64)] = σ²(22/64) = σ²(11/32).
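The algebra in parts (i) and (ii) can be double-checked by simulation; µ, σ², and the replication count below are arbitrary choices.

```python
import random
import statistics

random.seed(6)

mu, sigma, reps = 1.0, 1.0, 40000  # illustrative values

w_vals, ybar_vals = [], []
for _ in range(reps):
    y = [random.gauss(mu, sigma) for _ in range(4)]
    ybar_vals.append(sum(y) / 4)
    w_vals.append(y[0] / 8 + y[1] / 8 + y[2] / 4 + y[3] / 2)

# Both estimators should average out near mu = 1, but
# Var(W) = (11/32) sigma^2 ~ 0.344 exceeds Var(Ybar) = sigma^2/4 = 0.25.
print(statistics.mean(w_vals), statistics.variance(w_vals))
print(statistics.mean(ybar_vals), statistics.variance(ybar_vals))
```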
Problem C.1 (iii)
• Var(W) = (11/32)σ² and Var(Ȳ) = (8/32)σ² = σ²/4. Because 11/32 > 8/32 = 1/4, Var(W) > Var(Ȳ) for any σ² > 0, so Ȳ is preferred to W because each is unbiased.

III. Asymptotic or Large Sample Properties of Estimators
A. Consistency
B. Asymptotic Normality
• For estimating a population mean µ, Ȳ improves in the sense that its variance gets smaller as n gets larger. One notable feature of Y1 is that it has the same variance for any sample size: Y1 does not improve in this case.
• We can rule out silly estimators by studying the asymptotic or large sample properties of estimators (n → ∞).
• How large is a "large" sample?
– This depends on the underlying population distribution.
– Note that large sample approximations have been known to work well for sample sizes as small as 20 observations (n = 20).
Consistency
• Consistency concerns how far the estimator is likely to be from the parameter it is supposed to be estimating as the sample size increases indefinitely.

Definition: Consistency
• Let Wn be an estimator of θ based on a sample Y1, …, Yn of size n. Then Wn is a consistent estimator of θ if, for every ε > 0,
P(|Wn − θ| > ε) → 0 as n → ∞.
• Note that we index the estimator by the sample size, n, in stating this definition.
1. The distribution of Wn becomes more and more concentrated about θ as the sample size increases (n → ∞).
2. For larger sample sizes, Wn is less and less likely to be very far from θ.
3. When Wn is consistent, we say that θ is the probability limit of Wn, written plim(Wn) = θ.
4. The conclusion that Ȳn is consistent for µ is known as the law of large numbers (LLN).
Consistency
• Unbiased estimators are not necessarily consistent, but those whose variances shrink to zero as the sample size increases are consistent. Formally, if Wn is an unbiased estimator of θ and Var(Wn) → 0 as n → ∞, then plim(Wn) = θ.

Law of Large Numbers
Definition: LLN
• Let Y1, …, Yn be i.i.d. random variables with mean µ. Then plim(Ȳn) = µ.
• Intuitively, the LLN says that if we are interested in the population average µ, we can get arbitrarily close to µ by choosing a sufficiently large sample.
• Example: the average of a random sample drawn from a population with mean µ and variance σ². The sample average is unbiased, and
Var(Ȳn) = σ²/n → 0 as n → ∞.
Thus Ȳn is a consistent estimator of µ.
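A minimal LLN sketch, with arbitrary µ and σ: the sample average's error shrinks as n grows.

```python
import random

random.seed(7)

mu, sigma = 10.0, 3.0  # illustrative population values

# The sample average gets closer to mu as n grows (law of large numbers).
errors = {}
for n in (10, 1000, 100000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    errors[n] = abs(sum(sample) / n - mu)
print(errors)
```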
Consistency
Property PLIM.1
• Let θ be a parameter and define a new parameter γ = g(θ) for some continuous function g(·). Suppose plim(Wn) = θ. Define an estimator of γ as Gn = g(Wn). Then
plim(Gn) = γ.
Alternatively, plim[g(Wn)] = g[plim(Wn)] for some continuous function g(·).
• What is a continuous function? Informally, a continuous function is a "function that can be graphed without lifting your pencil from the paper".
Examples:
• g(θ) = a + bθ
• g(θ) = θ²
• g(θ) = 1/θ
• g(θ) = exp(θ)

Property PLIM.2
• If plim(Tn) = α and plim(Un) = β, then
1) plim(Tn + Un) = α + β
2) plim(Tn·Un) = α·β
3) plim(Tn/Un) = α/β, provided that β ≠ 0.
Consistency
Example:
• As n → ∞, Ȳn and Yn* are consistent estimators of µ: (1) plim(Ȳn) = µ, since Ȳn is unbiased and Var(Ȳn) = σ²/n → 0 as n → ∞; (2) by PLIM.2, plim(Yn*) = 1·µ = µ, so Yn* is also a consistent estimator, since Yn* approaches the value of the parameter µ as the sample size gets larger and larger.

Example: estimating the standard deviation σ of a population with mean µ and variance σ², given the sample variance Sn².
• The sample variance Sn² is an unbiased estimator of σ²; Sn² is also a consistent estimator of σ².
• The sample standard deviation Sn is not an unbiased estimator of σ. Why? The square root is a nonlinear function, so the expectation does not pass through it: E(√Sn²) ≠ √E(Sn²) in general.
• Sn is, however, a consistent estimator of σ: by PLIM.1,
plim Sn = √(plim Sn²) = √σ² = σ.
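A sketch of the bias/consistency contrast for Sn; the small-sample bias is clearest at tiny n, and all the numeric settings below are illustrative.

```python
import random
import statistics

random.seed(8)

mu, sigma = 0.0, 1.0  # illustrative population values

# Bias of S_n at a small sample size: average S_n over many samples.
n_small, reps = 3, 20000
s_vals = []
for _ in range(reps):
    y = [random.gauss(mu, sigma) for _ in range(n_small)]
    s_vals.append(statistics.stdev(y))        # S_n (divides SS by n-1)
mean_s = sum(s_vals) / reps                   # noticeably below sigma = 1

# Consistency: for one large sample, S_n lands close to sigma anyway.
big = [random.gauss(mu, sigma) for _ in range(200000)]
s_big = statistics.stdev(big)
print(mean_s, s_big)
```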
Consistency and Asymptotic Normality
• Consistency is a property of point estimators, as is unbiasedness. Consistency does not tell us about the shape of an estimator's distribution for a given sample size.
• Most econometric estimators have distributions that are well approximated by a normal distribution for large samples (n → ∞).

Example:
• Let {Y1, …, Yn} be a random sample of annual earnings of workers with a high school education (population mean µY), and let {Z1, …, Zn} be a random sample of annual earnings of workers with a college education (population mean µZ). We want to estimate the percentage difference in annual earnings between the two groups, which is
γ = 100·(µZ − µY)/µY.
• Due to the facts that plim(Ȳn) = µY and plim(Z̄n) = µZ, it follows from PLIM.1 and PLIM.2 that
Gn = 100·(Z̄n − Ȳn)/Ȳn
is a consistent estimator of γ: plim(Gn) = γ. Gn is just the percentage difference between Z̄n and Ȳn in the sample.
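The plug-in estimator Gn can be sketched directly; the earnings means and standard deviation below are hypothetical values, not figures from the text.

```python
import random

random.seed(9)

mu_y, mu_z = 30000.0, 45000.0        # hypothetical population means
gamma = 100 * (mu_z - mu_y) / mu_y   # true percentage difference = 50

n = 50000
ybar = sum(random.gauss(mu_y, 8000.0) for _ in range(n)) / n
zbar = sum(random.gauss(mu_z, 8000.0) for _ in range(n)) / n

# Plug-in estimator G_n = 100 * (Zbar - Ybar) / Ybar, consistent for gamma.
g_n = 100 * (zbar - ybar) / ybar
print(g_n)
```

With n this large, Gn should land close to γ = 50, consistent with PLIM.1 and PLIM.2.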
Asymptotic Normality
Definition: Asymptotic Normality
• Let {Zn : n = 1, 2, …} be a sequence of random variables such that, for all numbers z,
P(Zn ≤ z) → Φ(z) as n → ∞,
where Φ(z) is the standard normal cumulative distribution function (cdf). Then Zn is said to have an asymptotic standard normal distribution, where "a" stands for "asymptotically" or "approximately".
• Intuitively, this property means that the cdf of Zn gets closer and closer to the cdf of the standard normal distribution as the sample size n gets large.

Central Limit Theorem (CLT)
Definition: CLT
• Let {Y1, …, Yn} be a random sample with mean µ and variance σ². Then
Zn = (Ȳn − µ)/(σ/√n)
has an asymptotic standard normal distribution.
• The variable Zn is the standardized version of Ȳn: we have subtracted off E(Ȳn) = µ and divided by sd(Ȳn) = σ/√n.
• The CLT says that the average from a random sample of any population, when standardized, has an asymptotic standard normal distribution.
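A CLT sketch using a deliberately skewed (exponential) population, for which the mean and the variance are both 1; the sample size and replication count are illustrative.

```python
import random
import statistics

random.seed(10)

lam = 1.0           # exponential population: mean = 1, variance = 1
mu, sigma = 1.0, 1.0
n, reps = 100, 5000

z_vals = []
for _ in range(reps):
    y = [random.expovariate(lam) for _ in range(n)]
    ybar = sum(y) / n
    # Standardize: subtract E(Ybar) = mu, divide by sd(Ybar) = sigma/sqrt(n).
    z_vals.append((ybar - mu) / (sigma / n ** 0.5))

# Despite the skewed population, Z_n behaves roughly like a standard normal:
# mean near 0, standard deviation near 1, P(Z_n <= 1.96) near 0.975.
frac_below = sum(z <= 1.96 for z in z_vals) / reps
print(statistics.mean(z_vals), statistics.stdev(z_vals), frac_below)
```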
Asymptotic Normality: Central Limit Theorem
• In practice, σ is replaced by the sample standard deviation Sn. Does (Ȳn − µ)/(Sn/√n) still have an approximate standard normal distribution for large n? Its exact distribution is not the same as that of (Ȳn − µ)/(σ/√n), but the difference is often small enough to be ignored for large n.

Problem C.3
C.3 Let Ȳ denote the sample average from a random sample with mean µ and variance σ². Consider two alternative estimators of µ:
W1 = [(n − 1)/n]Ȳ and W2 = Ȳ/2.
(i) Show that W1 and W2 are both biased estimators of µ and find the biases. What happens to the biases as n → ∞? Comment on any important differences in bias for the two estimators as the sample size gets large. [ans.]
(ii) Find the probability limits of W1 and W2. {Hint: Use properties PLIM.1 and PLIM.2; for W1, note that plim[(n − 1)/n] = 1.} Which estimator is consistent? [ans.]
(iii) Find Var(W1) and Var(W2). [ans.]
(iv) Argue that W1 is a better estimator than Ȳ if µ is "close" to zero. (Consider both bias and variance.) [ans.]

Problem C.3 (i)
• E(W1) = [(n − 1)/n]E(Ȳ) = [(n − 1)/n]µ, so Bias(W1) = [(n − 1)/n]µ − µ = −µ/n. As n → ∞, Bias(W1) → 0.
• Similarly, E(W2) = E(Ȳ)/2 = µ/2, so Bias(W2) = µ/2 − µ = −µ/2. As n → ∞, Bias(W2) remains −µ/2.
• The bias in W1 tends to zero as n → ∞, while the bias in W2 is −µ/2 for all n. This is an important difference.
Problem C.3 (ii)
Problem C.3 (iii)
(ii) • plim(W1) = plim[(n – 1)/n]plim( )
= 1µ = µ. (iii) • Var(W1) = [(n – 1)/n]2Var( )
= [(n – 1)2/n3]2
• plim(W2) = plim( )/2 = µ/2. • Var(W2) = Var( )/4 = 2/(4n). • Because plim(W1) = µ and plim(W2) = µ/2, W1 is consistent whereas W2 is inconsistent.
I. Random S. II. Finite S. III. Asymptotic S. IV. Parameter E. V. Interval E. & Confidence I. VI. Hypothesis T VII. Remarks
III. Asymptotic or Large Sample Properties of Estimators
Fundamentals of Mathematical Statistics . Intensive Course in Mathematics and Statistics . Chairat Aemkulwat
49
(iv) • Because is unbiased, its mean squared error is simply its variance. MSE( )
= Var( ) + [Bias( )]2
= 2/n
• On the other hand, MSE(W1) = Var(W1) + [Bias(W1)]2
= [(n – 1)2/n3]2 + µ2/n2. •
For large n, the difference between the two estimators is trivial.
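The conclusions of Problem C.3 are easy to check numerically. Below is a minimal Monte Carlo sketch (assuming, as above, W1 = [(n – 1)/n]·Ȳ and W2 = Ȳ/2, with an illustrative normal population with µ = 2 and σ = 1):

```python
import random

def average_estimates(n, mu=2.0, sigma=1.0, reps=5000, seed=1):
    """Average the estimators W1 and W2 over many random samples of size n."""
    rng = random.Random(seed)
    w1_total = w2_total = 0.0
    for _ in range(reps):
        ybar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        w1_total += (n - 1) / n * ybar   # W1 = [(n - 1)/n] * Ybar
        w2_total += ybar / 2             # W2 = Ybar / 2
    return w1_total / reps, w2_total / reps

for n in (10, 200):
    w1, w2 = average_estimates(n)
    print(f"n={n}: average W1 = {w1:.3f}, average W2 = {w2:.3f}")
```

With µ = 2, the average of W1 approaches 2 as n grows (its bias –µ/n vanishes), while the average of W2 stays near 1 = µ/2, matching parts (i) and (ii).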
IV. General Approaches to Parameter Estimation
A. Moments
B. Max Likelihood
C. Least Squares

• We have learned finite sample and asymptotic properties for estimation – unbiasedness, consistency, and efficiency.
• Given a parameter θ appearing in a population distribution, there are usually many ways to obtain an unbiased and consistent estimator of θ.
• Question: Are there general approaches that produce estimators with good properties?
• There are three methods:
– Method of Moments
– Method of Maximum Likelihood
– Method of Least Squares
Method of Moments
• The basis of the method of moments is as follows: the parameter θ is shown to be related to some function of an expected value in the distribution of Y, usually E(Y) and E(Y²).
• Suppose θ is a function of µ; i.e., θ = g(µ).
• Given that the sample average Ȳ is an unbiased and consistent estimator of µ, it is natural to replace µ with Ȳ. Thus, g(Ȳ) is the method of moments estimator of θ.
• If g(µ) is a linear function of µ, then g(Ȳ) is an unbiased estimator of θ. In addition, g(Ȳ) is a consistent estimator of θ.

Method of Moments – Example: Population covariance
• The population covariance between two random variables X and Y is
σXY = E[(X – µX)(Y – µY)]
• The method of moments suggests the sample covariance as an estimator:
SXY = [1/(n – 1)] Σ(i=1..n) (Xi – X̄)(Yi – Ȳ)
• Why is this a method of moments estimator? Here we replace the population moments µX and µY with the sample averages X̄ and Ȳ.
1) It can be shown that SXY is an unbiased estimator of σXY.
2) SXY is a consistent estimator of σXY.

Method of Moments – Example: Population correlation
• The population correlation is ρXY = σXY/(σXσY).
• The method of moments suggests estimating ρXY by replacing each population quantity with its sample counterpart:
RXY = SXY/(SXSY)
• This is called the sample correlation coefficient.
• Notes:
1) RXY is a consistent estimator of ρXY. (Why? SXY, SX, and SY are consistent.)
2) RXY is not an unbiased estimator of ρXY. (Why? First, SX and SY are not unbiased estimators of σX and σY. Second, RXY is a ratio of estimators, so it would not be unbiased in any case.)
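The plug-in logic above can be spelled out in a few lines. A sketch (using the n – 1 divisor that makes SXY unbiased; the data values are purely illustrative):

```python
from math import sqrt

def sample_cov(x, y):
    """Method of moments: replace population means with sample averages."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)

def sample_corr(x, y):
    """R_XY = S_XY / (S_X * S_Y): consistent but not unbiased."""
    return sample_cov(x, y) / sqrt(sample_cov(x, x) * sample_cov(y, y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.9, 3.5, 4.1, 5.2]
print(sample_cov(x, y), sample_corr(x, y))
```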
Maximum Likelihood
• Let {Y1, …, Yn} be a random sample from the population distribution f(y;θ).
• The joint distribution of {Y1, …, Yn} can be written as the product of the densities: f(Y1;θ)f(Y2;θ) ··· f(Yn;θ). In the discrete case, this is
P(Y1=y1, Y2=y2, …, Yn=yn) = P(Y1=y1)P(Y2=y2) ··· P(Yn=yn)
• The likelihood function, which is a random variable, can be defined as
L(θ; Y1, …, Yn) = f(Y1;θ)f(Y2;θ) ··· f(Yn;θ)
• The maximum likelihood estimator (MLE) of θ, call it W, is the value of θ that maximizes the likelihood function.
– Intuitively, out of the possible values for θ, we choose the value that makes the likelihood of the observed values largest.
• It is easier to work with the log‐likelihood function:
1) It is obtained by taking the natural log of the likelihood function.
2) The log of the product is the sum of the logs.
• Properties:
1) MLE is usually consistent and sometimes unbiased.
2) MLE is generally the most asymptotically efficient estimator (when the population model f(y;θ) is correctly specified).
– MLE has the smallest variance among all unbiased estimators of θ.
– MLE is the minimum variance unbiased estimator.

Least Squares
• Least squares estimators are a third kind of estimator.
• The sample mean, Ȳ, is a least squares estimator of the population mean µ:
– It can be shown that the value of m which makes the sum of squared deviations Σ(i=1..n) (Yi – m)² as small as possible is m = Ȳ.
• Properties:
1) LSE is consistent and unbiased.
2) LSE is generally the most efficient estimator in finite and large samples.
– LSE has the smallest variance among all linear unbiased estimators of µ.
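To connect the two ideas: for a Normal(µ, 1) sample, maximizing the log-likelihood over µ and minimizing the sum of squared deviations pick out the same value – the sample mean. A small grid-search sketch (the data values are illustrative):

```python
from math import log, pi

y = [2.1, 1.4, 3.0, 2.6, 1.9]

def log_likelihood(mu):
    # Normal(mu, 1) log-likelihood: the sum of the log densities
    return sum(-0.5 * log(2 * pi) - 0.5 * (yi - mu) ** 2 for yi in y)

def sum_sq(m):
    return sum((yi - m) ** 2 for yi in y)

grid = [i / 1000 for i in range(1000, 3001)]   # candidate values 1.000 .. 3.000
mle = max(grid, key=log_likelihood)            # maximize the likelihood
lse = min(grid, key=sum_sq)                    # minimize squared deviations
print(mle, lse, sum(y) / len(y))
```

Both searches land on the sample mean, illustrating why the three estimation principles often produce the same estimator.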
Summary: Method of Moments, Least Squares, Maximum Likelihood

Method               Unbiased?            Consistent?          Efficiency
Moments              unbiased             consistent           efficient
Least Squares        unbiased             consistent           efficient
Maximum Likelihood   sometimes unbiased   usually consistent   asymptotically efficient

• The principles of least squares, method of moments, and maximum likelihood often result in the same estimator.

V. Interval Estimation and Confidence Intervals
A. Nature
B. CI N(0,1)
C. Rule of Thumb
D. Asymptotic CI

• A point estimate is the researcher’s best guess at the population value, but it provides no information about how close the estimate is likely to be to the population parameter.
• Example: On the basis of a random sample of workers, a researcher reports that job training grants increase hourly wage by 6.4%. We cannot know how close this estimate is for a particular sample, because we do not know the population value.
• One way of assessing the uncertainty in an estimator is its sampling standard deviation.
• Interval estimation uses information on the point estimate and the standard deviation to construct a confidence interval. It shows where the population value is likely to lie in relation to the estimate.

The Nature of Interval Estimation
• Interval estimation comes in when we make statements involving probabilities.
• Concept of interval estimation:
– Assume {Y1, …, Yn} is a random sample from the Normal(µ, σ²) population, and suppose the variance σ² is known (or σ² = 1).
– The sample average then has a normal distribution with mean µ and variance σ²/n; i.e., Ȳ ~ Normal(µ, σ²/n).
• The standardized version of Ȳ has a standard normal distribution:
P(–1.96 < (Ȳ – µ)/(σ/√n) < 1.96) = .95
• Rewrite this as
P(Ȳ – 1.96σ/√n < µ < Ȳ + 1.96σ/√n) = .95
• The random interval is
(Ȳ – 1.96σ/√n, Ȳ + 1.96σ/√n)
• The probability that the random interval contains the population mean µ is .95, or 95%.
• This information allows us to construct an interval estimate of µ by plugging the sample outcome of the average, ȳ, and the known σ into the random interval. The result is called a 95% confidence interval. A shorthand notation is ȳ ± 1.96σ/√n.
The Nature of Interval Estimation – Example
• Suppose the sample data {y1, y2, …, yn} are observed, with n = 16, ȳ = 7.3, and σ = 1.
• The 95% confidence interval for µ is 7.3 ± 1.96/√16 = 7.3 ± .49.
• We can write this in interval form as [6.81, 7.79].
• The random interval (Ȳ – 1.96σ/√n, Ȳ + 1.96σ/√n) is an example of an interval estimator; the computed interval is the confidence interval.
• The meaning of a confidence interval is more difficult to understand. We mean that the random interval contains µ with probability .95 – there is a 95% chance that the random interval contains µ.
• Correct interpretation: a random interval contains µ with probability 0.95, since the endpoints change with different samples.
• Incorrect interpretation: the probability that µ is in the computed interval is 0.95 – since µ is unknown, it either is or is not in that interval.
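The arithmetic of this example, as a quick sketch:

```python
from math import sqrt

def ci_known_sigma(ybar, sigma, n, z=1.96):
    """95% CI for the mean when sigma is known: ybar +/- z*sigma/sqrt(n)."""
    half = z * sigma / sqrt(n)
    return ybar - half, ybar + half

lo, hi = ci_known_sigma(7.3, 1.0, 16)
print(round(lo, 2), round(hi, 2))   # prints: 6.81 7.79
```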
The Nature of Interval Estimation – Simulated Confidence Intervals
• Table C.2 contains interval calculations for 20 random samples of size n = 10 drawn from a Normal(2, 1) distribution.
• The interval estimates of µ are ȳ ± .62 (since 1.96/√10 ≈ .62).
• Results:
1) The interval changes with each random sample.
2) 19 of the 20 intervals contain the population value of µ.
3) Only for replication number 19 is µ = 2 not in the confidence interval.
4) About 95% of samples result in a confidence interval that contains µ.
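A simulation in the spirit of Table C.2. Twenty replications are too few to pin down the coverage rate, so this sketch estimates it over many replications instead:

```python
import random
from math import sqrt

def coverage(mu=2.0, sigma=1.0, n=10, reps=10000, seed=3):
    """Fraction of random intervals ybar +/- 1.96*sigma/sqrt(n) that contain mu."""
    rng = random.Random(seed)
    half = 1.96 * sigma / sqrt(n)   # +/- .62 for n = 10, sigma = 1
    hits = 0
    for _ in range(reps):
        ybar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        hits += (ybar - half < mu < ybar + half)
    return hits / reps

print(coverage())   # close to .95
```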
CIs for the Mean from a Normally Distributed Population
• In practice, we rarely know the population variance σ².
• To allow for unknown σ, we can use an estimate and form
Ȳ ± 1.96(S/√n)
where S is the sample standard deviation of the random sample {Y1, …, Yn}.
• However, this random interval no longer contains µ with probability .95, because the constant σ has been replaced with the random variable S.
• Instead, we use the t distribution rather than the standard normal distribution. Note that
(Ȳ – µ)/(S/√n) ~ t(n–1)
• To construct a 95% confidence interval using the t distribution, let c be the 97.5th percentile in the t(n–1) distribution, so that P(–c < t < c) = .95. (Critical values are given in Table G.2 in Appendix G.)
• Once the critical value c is chosen, the random interval
[Ȳ – c·S/√n, Ȳ + c·S/√n]
contains µ with probability .95.
• Example: let n = 20, so df = n – 1 = 19 and c = 2.093 (see Table G.2 in Appendix G). The 95% confidence interval is ȳ ± 2.093(s/√20), where ȳ and s are the values obtained from the sample.
CIs for the Mean from a Normally Distributed Population (continued)
• More generally, let c be the 100(1 – α/2)th percentile in the t(n–1) distribution. Then a 100(1 – α)% confidence interval is
[ȳ – c·s/√n, ȳ + c·s/√n]
where c(α/2) is known after choosing α and the degrees of freedom n – 1.
• Recall that sd(Ȳ) = σ/√n; its point estimate is the standard error of ȳ:
se(Ȳ) = s/√n
• The notion of the standard error of an estimate plays an important role in econometrics.

Example C.2: Effect of Job Training Grants on Worker Productivity
• A sample of firms receiving job training grants in 1988.
– Scrap rates – the number of items per 100 produced that are not usable and need to be scrapped.
– The change in scrap rates is assumed to have a normal distribution.
• n = 20, ȳ = –1.15, s = 2.41, so se(ȳ) = s/√20 = .54.
• A 95% confidence interval for the mean change in scrap rates µ is ȳ ± 2.093·se(ȳ) = [–2.28, –.02].
• With 95% confidence, the average change in scrap rates in the population is not zero!
• The analysis above has some potentially serious flaws:
– It assumes that any systematic reduction in scrap rates is due to the job training grants.
– Many things (variables) can happen over the course of the year to change worker productivity!
• Note that the t distribution approaches the standard normal distribution as the degrees of freedom get large. In particular, for α = .05, c(α/2) → 1.96 as n → ∞.
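Example C.2's interval, reproduced as a sketch (the critical value 2.093 for 19 df is taken from the text):

```python
from math import sqrt

n, ybar, s, c = 20, -1.15, 2.41, 2.093   # Example C.2 numbers; c = t(19) 97.5th pct
se = s / sqrt(n)                          # about .54
lo, hi = ybar - c * se, ybar + c * se
print(round(se, 2), round(lo, 2), round(hi, 2))
```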
A Simple Rule of Thumb for a 95% Confidence Interval
• A rule of thumb for an approximate 95% confidence interval is
ȳ ± 2·se(ȳ)
1) It is slightly too big for large sample sizes (where the exact critical value is closer to 1.96).
2) It is slightly too small for small sample sizes (where the t critical value exceeds 2).
Asymptotic Confidence Intervals for Nonnormal Populations
• For some applications, the population is nonnormal; in some cases, the nonnormal population has no standard distribution.
• This does not matter as long as the sample size is sufficiently large for the central limit theorem to give a good approximation to the distribution of the sample average Ȳ.
• For large n, an approximate 95% confidence interval is
ȳ ± 1.96·se(ȳ)
where 1.96 is the 97.5th percentile in the standard normal distribution.
• The standard normal distribution is used in place of the t distribution because we are dealing with asymptotics: as n increases without bound, the t distribution approaches the standard normal distribution.
• A large sample size has a nice feature: it results in small confidence intervals, because the standard error se(ȳ) = s/√n shrinks to zero as the sample size grows.
Example C.3: Race Discrimination in Hiring
• Matched pairs analysis – each person in a pair interviewed for the same job. Sample size n = 241 pairs.
• Let Bi = 1 if the black person gets a job offer from employer i, and Wi = 1 if the white person gets a job offer from employer i.
– µB is the probability that the black person is offered a job; µW is the probability that the white person is offered a job.
• We are interested in the difference θ = µB – µW. Define a new variable Yi = Bi – Wi, so that
θ = E(Bi) – E(Wi) = µB – µW
• Yi can take three values:
Yi = –1 if the black person did not get the job but the white person did
Yi = 0 if both did, or neither did, get the job
Yi = 1 if the white person did not get the job but the black person did
• Unbiased estimators of µB and µW are b̄ and w̄, the fractions of interviews for which blacks and whites were offered jobs.
• b̄ = .224 and w̄ = .357, so ȳ = .224 – .357 = –.133.
– 22.4% of blacks were offered jobs, while 35.7% of whites were offered jobs. This is prima facie evidence of discrimination!
• The sample standard deviation is s = .482, so se(ȳ) = .482/√241 ≈ .031.
• An approximate 95% CI for θ = µB – µW is –.133 ± 1.96(.031), or about [–.194, –.072].
• A 99% CI for θ = µB – µW is –.133 ± 2.58(.031), or about [–.213, –.053].
• We are very confident that the population difference is not zero!
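The interval arithmetic for Example C.3, as a sketch:

```python
from math import sqrt

n, ybar, s = 241, -0.133, 0.482
se = s / sqrt(n)                                  # about .031
ci95 = (ybar - 1.96 * se, ybar + 1.96 * se)
ci99 = (ybar - 2.58 * se, ybar + 2.58 * se)
print(round(se, 3), [round(v, 3) for v in ci95], [round(v, 3) for v in ci99])
```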
Problem C.7
The new management at a bakery claims that workers are now more productive than they were under old management, which is why wages have “generally increased.” Let Wib be Worker i’s wage under the old management and let Wia be Worker i’s wage after the change. The difference is Di = Wia – Wib. Assume that the Di are a random sample from a Normal(µ, σ²) distribution.
(i) Using the following data on 15 workers, construct an exact 95% confidence interval for µ. [ans.]

obs    Wb       Wa       D = Wa – Wb
1       8.30     9.25     0.95
2       9.40     9.00    –0.40
3       9.00     9.25     0.25
4      10.50    10.00    –0.50
5      11.40    12.00     0.60
6       8.75     9.50     0.75
7      10.00    10.25     0.25
8       9.50     9.50     0.00
9      10.80    11.50     0.70
10     12.55    13.10     0.55
11     12.00    11.50    –0.50
12      8.65     9.00     0.35
13      7.75     7.75     0.00
14     11.25    11.50     0.25
15     12.65    13.00     0.35
mean   10.167   10.407    0.24
Problem C.7 (i)
• The average wage increase is d̄ = .24, or 24 cents.
• The sample standard deviation is about s = .451, and with n = 15, se(d̄) = s/√15 ≈ .1164.
• With df = n – 1 = 14, the 97.5th percentile of the t(14) distribution is c = 2.145 (Table G.2), so the exact 95% confidence interval for µ is
.24 ± 2.145(.1164), or about [–.01, .49]
• (Software descriptive statistics for WB, WA, and D, n = 15: means 10.167, 10.407, and .24; standard deviations 1.569, 1.595, and .451.)
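The computation in Problem C.7 (i) can be reproduced directly from the wage differences (the critical value 2.145 for 14 df is taken from a standard t table):

```python
from math import sqrt
from statistics import mean, stdev

d = [0.95, -0.4, 0.25, -0.5, 0.6, 0.75, 0.25, 0.0,
     0.7, 0.55, -0.5, 0.35, 0.0, 0.25, 0.35]
n = len(d)
dbar, s = mean(d), stdev(d)          # .24 and about .451
se = s / sqrt(n)                     # about .1164
c = 2.145                            # 97.5th percentile of t(14)
lo, hi = dbar - c * se, dbar + c * se
print(round(dbar, 2), round(s, 3), round(lo, 2), round(hi, 2))
```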
VI. Hypothesis Testing
A. Fundamentals
B. HT N(0,1)
C. Asymptotic
D. P-Value
E. CI & HT

Fundamentals of Hypothesis Testing
• We have reviewed how to evaluate point estimators and how to construct confidence intervals.
• Sometimes the question we are interested in has a definite yes or no answer:
1) Does a job training program effectively increase average worker productivity?
2) Are blacks discriminated against in hiring?
• Example: Suppose the election results are Candidate A = 42% and Candidate B = 58% of the popular vote, and Candidate A argues that the election was rigged. A consulting agency takes a sample of 100 voters and finds that 53% voted for Candidate A.
• Question: how strong is the sample evidence against the officially reported percentage of 42%? Devising methods for answering such questions, using a sample of data, is known as hypothesis testing.
• One way to proceed is to set up a hypothesis test. Let θ be the true proportion of the population voting for Candidate A. The null hypothesis is
H0: θ = .42
Fundamentals of Hypothesis Testing (continued)
• The null hypothesis plays a role similar to that of a defendant:
– A defendant is presumed to be innocent until proven guilty.
– The null hypothesis is presumed to be true until the data strongly suggest otherwise.
• The alternative hypothesis is that the true proportion voting for Candidate A is above .42:
H1: θ > .42
• In order to conclude that H1 is true and H0 is false, we must prove it beyond reasonable doubt.
– Observing 43 votes out of a sample of 100 is not enough to overturn the original result; such an outcome is within the expected sampling variation.
– How about observing 53 votes out of a sample of 100?
• There are two kinds of mistakes:
1) We reject the null hypothesis when it is true – a Type I error.
Example: we reject H0 when the true proportion voting for Candidate A is in fact .42.
2) We “accept” (do not reject) the null hypothesis when it is false – a Type II error.
Example: we “accept” H0, but θ > .42.
• We can compute the probability of making either a Type I or a Type II error.
• Classical hypothesis testing requires that we choose a significance level, denoted α: the probability of committing a Type I error,
α = P(Reject H0 | H0 is true)
• Common values for α are .10, .05, and .01; they quantify our tolerance for error. With α = .05, the researcher is willing to make mistakes (falsely reject H0) 5% of the time.
• The power of the test is one minus the probability of a Type II error. Mathematically,
π(θ) = P(Reject H0) = 1 – P(Type II error)
where θ denotes the actual value of the parameter.
• We would like the power to equal unity whenever the null hypothesis is false:
– We want to minimize the probability of a Type II error,
– or, equivalently, to maximize the power of the test.
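The significance level and power can be illustrated by simulating the election example. This is a sketch, not from the text: it assumes a one-sided large-sample rule that rejects H0: θ = .42 when the sample proportion exceeds .42 + 1.645·√(.42·.58/100), which corresponds to α ≈ .05:

```python
import random
from math import sqrt

def reject_rate(theta, n=100, reps=10000, seed=7):
    """Fraction of samples in which H0: theta = .42 is rejected."""
    rng = random.Random(seed)
    cutoff = 0.42 + 1.645 * sqrt(0.42 * 0.58 / 100)   # about .501
    rejections = 0
    for _ in range(reps):
        phat = sum(rng.random() < theta for _ in range(n)) / n
        rejections += phat > cutoff
    return rejections / reps

print(reject_rate(0.42))   # Type I error rate, near .05
print(reject_rate(0.53))   # power when the true proportion is .53
```

When the true proportion is .53 (as in the consulting agency's sample), the test rejects most of the time, so 53 votes out of 100 is strong evidence against the reported 42%.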
Testing Hypotheses about the Mean in a Normal Population
• In order to test a hypothesis, we need to choose a test statistic and a critical value.
• The test statistic T is some function of the random sample. When we compute the statistic for a particular outcome, we obtain an outcome of the test statistic, denoted by t.
• Provided that the null hypothesis is true, the critical value c is determined by the distribution of T and the chosen significance level α.
• All rejection rules depend on the outcome of the test statistic t and the critical value c.
• A test of a hypothesis about the mean µ of a Normal(µ, σ²) population proceeds as follows. The null hypothesis is
H0: µ = µ0
where µ0 is a value we specify. In the majority of applications, µ0 = 0.
• Three alternatives are of interest. One-sided alternatives:
H1: µ > µ0 or H1: µ < µ0
Two-sided alternative:
H1: µ ≠ µ0
Here we are interested in any departure from the null hypothesis.
• The rejection rule depends on the nature of the alternative hypothesis. For example, for the one-sided alternative
H1: µ > µ0
the null hypothesis is effectively H0: µ ≤ µ0. Here we reject the null hypothesis when the value of the sample average, ȳ, is sufficiently greater than µ0. How?
Testing Hypotheses about the
Mean in a Normal Population
A. Fundamentals
B. HT N(0,1)
C. Asymptotic
D. P-Value
E. CI & HT
We use the standardized version. Under the null hypothesis, the random variable
T = √n (Ȳ − μ0)/S
has a tn−1 distribution. Note that S is used in place of σ, and se(Ȳ) = S/√n. This is called the t statistic. The t statistic measures the distance from Ȳ to μ0 relative to the standard error of Ȳ.
The rejection rule is t > c, where c is the 100(1 − α) percentile in a tn−1 distribution. This is an example of a one-tailed test. For example, choose the significance level α = .05; the critical value c is then chosen so that
P(T > c | H0) = .05.
Example C.4: Effect of Enterprise Zones on Business Investments
Y denotes the percentage change in investment from the year before to the year after a city became an enterprise zone. Assume that Y has a Normal(μ, σ²) distribution.
H0: μ = 0 (null hypothesis: enterprise zones have no effect)
H1: μ > 0 (alternative hypothesis: they have a positive effect)
Suppose that we wish to test H0 at the 5% level, with a sample of 36 cities. The rejection rule is t > c, where:
– α = .05; c = 1.69 (see Table G.2)
– ȳ = 8.2; s = 23.9
– t = √36 (8.2)/23.9 ≈ 2.06
Since 2.06 > 1.69, we conclude that, at the 5% significance level, enterprise zones have a positive effect on average investment. At the 1% significance level, do enterprise zones have a positive effect?
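The arithmetic of this one-tailed test can be sketched in a few lines of Python; the critical value 1.69 is taken from Table G.2 rather than computed:

```python
import math

# Example C.4 as a calculation: one-tailed t test of H0: mu = 0 vs H1: mu > 0.
n, ybar, s = 36, 8.2, 23.9
se = s / math.sqrt(n)     # standard error of the sample mean: 23.9/6
t = ybar / se             # t statistic for mu0 = 0
c = 1.69                  # 95th percentile of t(35), from Table G.2
print(round(t, 2))        # 2.06
print(t > c)              # True -> reject H0 at the 5% level
```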
For the null hypothesis and the alternative hypothesis
H0: μ ≥ μ0,
H1: μ < μ0,
the rejection rule is t < −c. That is, we need values of ȳ sufficiently far below μ0 to reject H0.
Example C.5: Race Discrimination in Hiring
μ = μB − μW is the difference in the probability that Blacks and Whites receive job offers; μ is the population mean of the variable Y = B − W, where B and W are binary variables.
Testing H0: μ = 0 against H1: μ < 0.
Given n = 241: ȳ = −.133 and se(ȳ) = .48/√241 ≈ .031.
The t statistic for testing H0: μ = 0 is t = −.133/.031 ≈ −4.29.
The critical value is −2.58 (one-sided test; α = .005).
Since t < −2.58, there is very strong evidence against H0 in favor of H1.
Testing Hypotheses about the Mean in a Normal Population: two-sided alternatives
For the null hypothesis and the alternative hypothesis
H0: μ = μ0,
H1: μ ≠ μ0,
the rejection rule is |t| > c. This gives a two-tailed test.
We have to be careful in obtaining the critical value, c:
– It is the 100(1 − α/2) percentile in a tn−1 distribution.
– If α = .05, c is the 97.5th percentile in the tn−1 distribution.
Example: Let n = 22 and α = .05. Then c = 2.08, the 97.5th percentile in a t21 distribution (see Table G.2). Rejection rule: the absolute value of the t statistic must exceed 2.08.
Proper language for hypothesis testing: “We fail to reject H0 in favor of H1 at the 5% significance level.”
Incorrect wording: “We accept H0 at the 5% significance level.”
Asymptotic Tests for Nonnormal Populations
If the sample is large enough, we can invoke the central limit theorem. Asymptotic theory is based on n increasing without bound:
– Under the null hypothesis, T = √n (Ȳ − μ0)/S is asymptotically Normal(0,1).
– As n gets large, the tn−1 distribution converges to the standard normal distribution.
Example C.5 (continued): Race Discrimination in Hiring
Testing H0: μ = 0 against the two-sided alternative H1: μ ≠ 0, where μ is the population mean of Y = B − W (the difference in the probability that Blacks and Whites receive job offers).
Given n = 241: ȳ = −.133 and se(ȳ) = .48/√241 ≈ .031.
The t statistic for testing H0: μ = 0 is t = −.133/.031 ≈ −4.29.
The critical value is 2.58 (two-sided test; α = .01), so we reject when |t| > 2.58.
Since t < −2.58, there is very strong evidence against H0 in favor of H1.
Note that our chosen significance levels are only approximate. When the sample size is large, the actual significance level will be very close to 5%.

Suggestions:
– For moderate values of n, say between 30 and 60, it is traditional to use the t distribution.
– For n ≥ 120, the choice between the two distributions is irrelevant: the standard normal and t critical values are pretty much the same.

Computing and Using p-Values
The traditional requirement of choosing the significance level ahead of time means that different researchers could wind up with different conclusions, although they use the same set of data and the same procedures. The p-value of the test avoids this. Equivalently, it is:
– the largest significance level at which we fail to reject the null hypothesis;
– the smallest significance level at which we reject the null hypothesis.
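For a large-sample two-sided test like the one in Example C.5, the asymptotic p-value can be computed from the standard normal cdf; a sketch using Python's standard library:

```python
from statistics import NormalDist

# Two-sided asymptotic p-value for Example C.5's t = -4.29.
t = -4.29
p_two_sided = 2 * NormalDist().cdf(-abs(t))
print(p_two_sided < 1e-4)   # True: essentially zero, so H0 is rejected
                            # at any conventional significance level
```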
One-sided test: Let H0: μ = 0 in a Normal(μ, σ²) population, with the alternative H1: μ > 0. The test statistic is T = √n Ȳ/S.
The observed value of T for our sample is t = 1.52
The p-value is the area to the right of 1.52 in the standard normal distribution:
p-value = P(T > 1.52 | H0) = 1 − Φ(1.52) = .065,
where Φ(·) is the standard normal cumulative distribution function (cdf).
Similarly, suppose t = 2.85 with n large, so that p-value = 1 − Φ(2.85) = .0022. If the null hypothesis is true, we observe a value of T as large as 2.85 with probability only about .002. If we carry out the test at a significance level above .002, we reject the null hypothesis; the smallest significance level at which we reject the null hypothesis is .002.
Interpretation of t = 1.52 and p-value = .065:
– The p-value is the probability that we observe a value of T as large as 1.52 when the null hypothesis is true; we would observe a value of T as large as 1.52 due to chance 6.5% of the time.
– The largest significance level at which we carry out the test and fail to reject H0 is .065.
– If we carry out the test at a significance level above .065, we reject the null hypothesis; the smallest significance level at which we reject the null hypothesis is .065.
Example C.6: Effect of Training Grants on Worker Productivity (one-tailed test)
μ is the average change in scrap rates; n = 20. Note that the change in scrap rates is assumed to have a normal distribution.
Hypotheses:
H0: μ = 0 (training grants have no effect)
H1: μ < 0 (a one-tailed test)
The t statistic is t = −2.13, so the p-value is P(T19 < −2.13) = .023. If we carry out the test at a significance level above 0.023, we reject the null hypothesis; the smallest significance level at which we reject the null hypothesis is 0.023.
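With only n = 20, the exact t(19)-based p-value (.023) matters: the illustrative sketch below shows that the large-sample normal approximation gives a noticeably smaller answer, which is why the t distribution is preferred for small n (this comparison is mine, not part of the original example):

```python
from statistics import NormalDist

# Normal approximation to Example C.6's one-sided p-value (t = -2.13).
# The exact t(19)-based p-value in the text is .023.
t = -2.13
p_normal = NormalDist().cdf(t)
print(round(p_normal, 3))   # about 0.017, smaller than the exact .023
```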
Example: Training Grants and Worker Productivity (two tails)
Null hypothesis and two-sided alternative: H0: μ = 0 against H1: μ ≠ 0.
With t = −2.13, the p-value is computed by finding the area to the right of |t| = 2.13 and multiplying that area by two:
p-value = 0.023 + 0.023 = 0.046.
If we carry out the test at a significance level above 0.046, we reject the null hypothesis; the smallest significance level at which we reject the null hypothesis is 0.046.
In general, for t testing about population means, the two-sided p-value is
P(|Tn−1| > |t|) = 2 P(Tn−1 > |t|),
where t is the value of the test statistic and Tn−1 is a t random variable. For a nonnormal distribution, the exact p-value can be difficult to obtain, but we can find asymptotic p-values by using the same calculations.

Example C.7: Race Discrimination in Hiring
Given n = 241: ȳ = −.133 and se(ȳ) = .48/√241 ≈ .031. Null hypothesis and two-sided alternative: H0: μ = 0 against H1: μ ≠ 0. The t statistic for testing H0: μ = 0 is t = −.133/.031 ≈ −4.29. If Z is a standard normal random variable, P(Z < −4.29) ≈ 0, so the two-sided p-value is essentially zero. There is very strong evidence against H0 in favor of H1. (Note that the critical value is 2.58 at α = .01.)

Rejection rules for the p-value: Summary
Choose the significance level α.
1) We reject H0 at the 100α% level if p-value < α.
2) We fail to reject H0 at the 100α% level if p-value ≥ α.

Rejection rules for the t value: Summary
1) For H1: μ > μ0, the rejection rule is t > c and the p-value is P(T > t).
2) For H1: μ < μ0, the rejection rule is t < −c and the p-value is P(T < t).
3) For H1: μ ≠ μ0, the rejection rule is |t| > c and the p-value is P(|T| > |t|).

The Relationship between Confidence Interval and Hypothesis Testing
The confidence interval and hypothesis testing are linked. Assume α = .05. The confidence interval can be used to test two-sided alternatives: suppose H0: μ = μ0 against H1: μ ≠ μ0. Then:
– If the hypothesized value μ0 lies in the confidence interval, we fail to reject the null hypothesis at the 5% level.
– If μ0 does not lie in the confidence interval, we reject the null hypothesis at the 5% level.
After a confidence interval is constructed, many values of μ0 can be tested. Since a confidence interval contains more than one value, there are many null hypotheses that cannot be rejected.
Example C.8: Training Grants and Worker Productivity
A 95% confidence interval for the mean change in scrap rates, μ, is [−2.28, −0.02]. Since zero is excluded from this interval, we reject H0: μ = 0 against H1: μ ≠ 0 at the 5% level. If instead H0: μ = −2, we fail to reject the null hypothesis, because −2 lies inside the interval.
Don't say: we “accept” the null hypothesis H0: μ = −1.0 at the 5% significance level. This is because, in the same set of data, there are usually many hypotheses that cannot be rejected.

Practical Significance and Statistical Significance
Three kinds of evidence about population parameters have been covered: 1) point estimates, 2) confidence intervals, and 3) hypothesis tests. In empirical analysis, we should also put emphasis on the magnitudes of the point estimates. Statistical significance depends on the size of the test statistic, not on the size of μ.
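The CI-based two-sided test from Example C.8 amounts to an interval-membership check; a sketch, where the helper name is mine:

```python
# Two-sided test via the 95% CI from Example C.8: reject H0: mu = mu0
# at the 5% level exactly when mu0 lies outside the interval.
lo, hi = -2.28, -0.02

def reject_at_5pct(mu0):
    """Hypothetical helper: True if mu0 falls outside the 95% CI."""
    return not (lo <= mu0 <= hi)

print(reject_at_5pct(0.0))    # True: reject H0: mu = 0
print(reject_at_5pct(-2.0))   # False: fail to reject H0: mu = -2
```

Every μ0 inside the interval yields a null hypothesis we fail to reject, which is exactly why “accept” is the wrong word.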
Practical significance depends on the magnitude of μ. The estimate can be statistically significant without being large, especially when we work with large sample sizes. Note that the magnitude and sign of the test statistic determine statistical significance: the statistic is the ratio of ȳ to its standard error, t = ȳ/se(ȳ), so it can be large because ȳ is large or because se(ȳ) is small. Also, it is logically incorrect to say that H0: μ = −1 and H0: μ = −2 are both “accepted”; it is possible that neither is rejected. Thus, we say “fail to reject.”

Example C.9: Effect of Freeway Width on Commute Time
Let Y denote the change in commute time, measured in minutes, for commuters before and after a freeway was widened. Assume Y ~ Normal(μ, σ²). Hypotheses:
– H0: μ = 0
– H1: μ < 0
Given n = 900: ȳ = −3.6, the sample standard deviation is 32.7, and se(ȳ) = 32.7/√900 = 1.09.
The t statistic for testing H0: μ = 0 is t = −3.6/1.09 ≈ −3.30, with p-value = Φ(−3.30) ≈ .0005.
Statistical significance: the freeway widening had a statistically significant effect on average commute time.
Practical significance: the estimated reduction in average commute time is only 3.6 minutes.
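Example C.9's numbers can be reproduced with the large-sample normal approximation; a sketch:

```python
import math
from statistics import NormalDist

# Example C.9: statistically significant but practically small effect.
n, ybar, sd = 900, -3.6, 32.7
se = sd / math.sqrt(n)          # 32.7/30 = 1.09
t = ybar / se                   # about -3.30
p_value = NormalDist().cdf(t)   # one-sided p-value, about .0005
print(round(t, 2))              # -3.3
print(p_value < 0.001)          # True: highly significant, yet ybar is
                                # only a 3.6-minute reduction
```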
VII. Remarks on Notation
We have been careful to use standard conventions to distinguish an estimator from an estimate:
– estimator: W (a random variable)
– estimate: w (the outcome of the random variable W)
Distinguishing between an estimator and an estimate is important for understanding various concepts in
– Estimation and
– Hypothesis Testing.
In the main text, we use a simpler convention that is widely used in econometrics: if θ is a population parameter, the notation θ̂ (“theta hat”) will be used to denote both an estimator and an estimate of θ.
Examples:
– If the population parameter is μ, then μ̂ denotes an estimator or estimate of μ.
– If the parameter is σ², then σ̂² denotes an estimator or estimate of σ².
Problem C.6
C.6 You are hired by the governor to study whether a tax on liquor has decreased average liquor consumption in your state. You are able to obtain, for a sample of individuals selected at random, the difference in liquor consumption (in ounces) for the years before and after the tax. For person i, who is sampled randomly from the population, Yi denotes the change in liquor consumption. Treat these as a random sample from a Normal(μ, σ²) distribution.
(i) The null hypothesis is that there was no change in average liquor consumption. State this formally in terms of μ.
(ii) The alternative is that there was a decline in liquor consumption; state the alternative in terms of μ.
(iii) Now, suppose your sample size is n = 900 and you obtain the estimates ȳ = −32.8 and s = 466.4. Calculate the t statistic for testing H0 against H1; obtain the p-value for the test. (Because of the large sample size, just use the standard normal distribution tabulated in Table G.1.) Do you reject H0 at the 5% level? At the 1% level?
(iv) Would you say that the estimated fall in consumption is large in magnitude? Comment on the practical versus statistical significance of this estimate.
(v) What has been implicitly assumed in your analysis about other determinants of liquor consumption over the two-year period in order to infer causality from the tax change to liquor consumption?
Problem C.6 (i), (ii)
Yi is the change in liquor consumption; the Yi are a random sample from a Normal(μ, σ²) distribution.
(i) H0: μ = 0.
(ii) H1: μ < 0.

Problem C.6 (iii)
The standard error of ȳ is se(ȳ) = s/√n = 466.4/30 ≈ 15.55.
Therefore, the t statistic for testing H0: μ = 0 is t = ȳ/se(ȳ) = −32.8/15.55 ≈ −2.11.
We obtain the p-value as P(Z ≤ −2.11), where Z ~ Normal(0,1). These probabilities are in Table G.1: p-value = .0174.
(α = .05) Because the p-value is below .05, we reject H0 against the one-sided alternative at the 5% level.
(α = .01) We do not reject at the 1% level because p-value = .0174 > .01.
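Problem C.6(iii) as a computation, using the large-sample normal approximation the problem instructs:

```python
import math
from statistics import NormalDist

# Problem C.6(iii): one-sided test of H0: mu = 0 vs H1: mu < 0.
n, ybar, s = 900, -32.8, 466.4
se = s / math.sqrt(n)           # 466.4/30, about 15.55
t = ybar / se                   # about -2.11
p_value = NormalDist().cdf(t)   # P(Z <= t), about .0174
print(round(t, 2))              # -2.11
print(p_value < 0.05)           # True: reject at the 5% level
print(p_value < 0.01)           # False: do not reject at the 1% level
```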
Problem C.6 (iv), (v)
(iv) The estimated reduction, about 33 ounces, does not seem large for an entire year's consumption.
– If the alcohol is beer, 33 ounces is less than three 12-ounce cans of beer.
– Even if this is hard liquor, the reduction seems small.
– (On the other hand, when aggregated across the entire population, alcohol distributors might not think the effect is so small.)
(v) The implicit assumption is that other factors that affect liquor consumption, such as income or changes in price due to transportation costs, are constant over the two years.
Problem C.7
C.7 The new management at a bakery claims that workers are now more productive than they were under the old management, which is why wages have “generally increased.” Let Wib be worker i's wage under the old management and let Wia be worker i's wage after the change. The difference is Di = Wia − Wib. Assume that the Di are a random sample from a Normal(μ, σ²) distribution.
(i) Using the following data on 15 workers, construct an exact 95% confidence interval for μ.
obs    Wb        Wa        D = Wa − Wb
 1      8.30      9.25      0.95
 2      9.40      9.00     −0.40
 3      9.00      9.25      0.25
 4     10.50     10.00     −0.50
 5     11.40     12.00      0.60
 6      8.75      9.50      0.75
 7     10.00     10.25      0.25
 8      9.50      9.50      0.00
 9     10.80     11.50      0.70
10     12.55     13.10      0.55
11     12.00     11.50     −0.50
12      8.65      9.00      0.35
13      7.75      7.75      0.00
14     11.25     11.50      0.25
15     12.65     13.00      0.35
mean   10.16667  10.40667   0.24
The sample standard deviation is about s = .451, and with n = 15, se(d̄) = s/√15 ≈ .1164.
From Table G.2, the 97.5th percentile in the t14 distribution is 2.145. So the 95% CI is
[d̄ − c·s/√n, d̄ + c·s/√n] = .24 ± 2.145(.1164),
or about −.010 to .490.
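The interval can be reproduced directly from the D column; the critical value 2.145 is taken from Table G.2 rather than computed:

```python
import math

# Problem C.7(i): exact 95% CI for the mean wage change.
d = [0.95, -0.4, 0.25, -0.5, 0.6, 0.75, 0.25, 0.0, 0.7, 0.55,
     -0.5, 0.35, 0.0, 0.25, 0.35]
n = len(d)
dbar = sum(d) / n                                          # 0.24
s = math.sqrt(sum((x - dbar) ** 2 for x in d) / (n - 1))   # about 0.451
se = s / math.sqrt(n)                                      # about 0.1164
c = 2.145  # 97.5th percentile of t(14), from Table G.2
lo, hi = dbar - c * se, dbar + c * se
print(round(lo, 3), round(hi, 3))                          # about -0.010 0.490
```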
(ii) If μ = E(Di), then H0: μ = 0. The alternative is that management's claim is true: H1: μ > 0.
(iii) Test the null hypothesis from part (ii) against the stated alternative at the 5% and 1% levels.
(iv) Obtain the p-value for the test in part (iii).
Descriptive statistics for the 15 observations (EViews output; Date: 05/07/07, Time: 07:57; Sample: 1 15):

               WB         WA         D
Mean           10.16667   10.40667    0.24
Median         10         10          0.25
Maximum        12.65      13.1        0.95
Minimum         7.75       7.75      -0.5
Std. Dev.       1.569084   1.595291   0.450872
Skewness        0.175376   0.290842  -0.34947
Kurtosis        1.810807   2.022774   2.161199
Jarque-Bera     0.960754   0.80833    0.745067
Probability     0.61855    0.667534   0.688986
Sum           152.5      156.1        3.6
Sum Sq. Dev.   34.46833   35.62933    2.846
Observations   15         15         15
(ii) Formally state the null hypothesis that there has been no change in average wages. In particular, what is E(Di) under H0? If you are hired to examine the validity of the new management's claim, what is the relevant alternative hypothesis in terms of E(Di)?
Problem C.7 (i)
The average increase in wage is d̄ = .24, or 24 cents.
Problem C.7 (iv)
We obtain the p-value as P(T > 2.062), where T is from the t14 distribution. The p-value obtained from EViews is .029; this is half of the p-value for the two-sided alternative. (Econometrics packages, including EViews, report the p-value for the two-sided alternative.)
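The t statistic behind this p-value follows from the Problem C.7 summary statistics; a sketch:

```python
import math

# Problem C.7(iii)-(iv): t statistic for H0: mu = 0 vs H1: mu > 0.
dbar, s, n = 0.24, 0.450872, 15
se = s / math.sqrt(n)       # about 0.1164
t = dbar / se               # about 2.062, matching the EViews output below
print(round(t, 3))          # 2.062
```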
Problem C.7 (iv): Hypothesis Testing for D (EViews output)
Date: 05/07/07  Time: 08:03
Sample: 1 15; Included observations: 15
Test of Hypothesis: Mean = 0.000000
Sample Mean = 0.240000
Sample Std. Dev. = 0.450872

Method        Value      Probability
t-statistic   2.061595   0.0583

(View / Test of Descriptive Stats / Simple Hypothesis Tests)

Good Luck! FT19, PT15: See you around!