NOTES FOR SUMMER STATISTICS INSTITUTE COURSE
COMMON MISTAKES IN STATISTICS –
SPOTTING THEM AND AVOIDING THEM
Part II: Inference
MAY 24 – 27, 2010
Instructor: Martha K. Smith
CONTENTS OF PART 2

Simple Random Samples, Part 2: More Precise Definition of Simple Random Sample
Why Random Sampling is Important
Overview of Frequentist Hypothesis Testing
Overview of Frequentist Confidence Intervals
Frequentist Hypothesis Tests and p-Values
    p-values
    Misinterpretations and Misuses of p-values
Type I and Type II Errors and Significance Levels
    Connection between Type I Error and Significance Level
    Pros and Cons of Setting a Significance Level
    Type II Error
    Considering Both Types of Error Together
    Deciding what Significance Level to Use
Power of a Statistical Procedure
    Elaboration
    Power and Type II Error
    Common Mistakes Involving Power
SIMPLE RANDOM SAMPLES, PART 2: More Precise
Definition of Simple Random Sample
In practice, when applying statistical techniques, we are interested in
random variables defined on the population under study. To
illustrate, recall the examples in Simple Random Sample, Part 1:
1. In a medical study, the population might be all adults over
age 50 who have high blood pressure.
2. In another study, the population might be all hospitals in the
U.S. that perform heart bypass surgery.
3. If we’re studying whether a certain die is fair or weighted,
the population is all possible tosses of the die.
In these examples, we might be interested in the following random
variables:
Example 1: The difference in blood pressure with and without
taking a certain drug.
Example 2: The number of heart bypass surgeries performed in
a particular year, or the number of such surgeries that are
successful, or the number in which the patient has
complications of surgery, etc.
Example 3: The number that comes up on the die.
If we take a sample of units from the population, we have a
corresponding sample of values of the random variable.
For example, if the random variable (let's call it Y) is “difference
in blood pressure with and without taking the drug”, then the
sample will consist of values we can call y1, y2, ..., yn, where
 n = number of people in the sample from the population of
patients
 The people in the sample are listed as person 1, person 2, etc.
 y1 = the difference in blood pressures (that is, the value of Y)
for the first person in the sample,
 y2 = the difference in blood pressures (that is, the value of Y)
for the second person in the sample
 etc.
We can look at this another way, in terms of n random variables
Y1, Y2, ..., Yn , described as follows:
 The random process for Y1 is “pick the first person in the
sample”; the value of Y1 is the value of Y for that person –
i.e., y1.
 The random process for Y2 is “pick the second person in the
sample”; the value of Y2 is the value of Y for that person –
i.e., y2.
 etc.
The difference between using the small y's and the large Y's is that
when we use the small y's we are thinking of a fixed sample of size
n from the population, but when we use the large Y's, we are
thinking of letting the sample vary (but always with size n).
Note: The Yi’s are sometimes called identically distributed,
because they have the same probability distribution.
Precise definition of simple random sample of a random
variable:
"The sample y1, y2, ... , yn is a simple random sample" means
that the associated random variables Y1, Y2, ... , Yn are
independent.
Intuitively speaking, "independent" means that the values of any
subset of the random variables Y1, Y2, ... , Yn do not influence the
values of the other random variables in the list.
Recall: We defined a random sample as one that is chosen by a
random process.
 Where is the random process in the precise definition?
Note: To emphasize that the Yi’s all have the same distribution, the
precise definition is sometimes stated as, “Y1, Y2, ... , Yn are
independent, identically distributed,” sometimes abbreviated as
iid.
Connection with the initial definition of simple random sample
Recall the preliminary definition (from Moore and McCabe,
Introduction to the Practice of Statistics) given in Simple Random
Samples, Part 1:
"A simple random sample (SRS) of size n consists of n
individuals from the population chosen in such a way that
every set of n individuals has an equal chance to be the
sample actually selected."
Recall Example 3 above: We are tossing a die; the number that
comes up on the die is our random variable Y.
 In terms of the preliminary definition, the population is all
possible tosses of the die, and a simple random sample is n
different tosses.
 The different tosses of the die are independent events (i.e.,
what happens in some tosses has no influence on the other
tosses), which means that in the precise definition above, the
random variables Y1, Y2, ... , Yn are indeed independent: The
numbers that come up in some tosses in no way influence the
numbers that come up in other tosses.
Compare this with example 2: The population is all hospitals in the
U.S. that perform heart bypass surgery.
 Using the preliminary definition of simple random sample of
size n, we end up with n distinct hospitals.
 This means that when we have chosen the first hospital in our
simple random sample, we cannot choose it again to be in
our simple random sample.
 Thus the events "Choose the first hospital in the sample;
choose the second hospital in the sample; ... ," are not
independent events: The choice of first hospital restricts the
choice of the second and subsequent hospitals in the sample.
 If we now consider the random variable Y = the number of
heart bypass surgeries performed in 2008, then it follows that
the random variables Y1, Y2, ... , Yn are not independent.
The Bottom Line: In many cases, the preliminary definition does
not coincide with the more precise definition.
More specifically, the preliminary definition allows sampling
without replacement, whereas the more precise definition
requires sampling with replacement.
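As a small illustration of this distinction (a sketch only, using a made-up population of numbered units), the two definitions correspond to the two modes of NumPy's random sampling routine: replace=True gives independent draws, as the precise definition requires, while replace=False gives the n distinct individuals of the preliminary definition.

    import numpy as np

    rng = np.random.default_rng(0)
    population = np.arange(1000)   # made-up population of unit labels (e.g., hospital IDs)
    n = 10

    # Sampling with replacement: the n draws are independent (precise definition).
    srs_precise = rng.choice(population, size=n, replace=True)

    # Sampling without replacement: n distinct units (preliminary definition).
    srs_preliminary = rng.choice(population, size=n, replace=False)

    print("with replacement:   ", srs_precise)
    print("without replacement:", srs_preliminary)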
The Bad News: The precise definition is the one that is used in the
mathematical theorems that justify many of the procedures of
statistical inference. (More detail later.)
The Good News:
 If the population is large enough, the preliminary definition
is close enough for all practical purposes.
 In many cases where the population is not “large enough,”
there are alternate theorems giving rise to alternate
procedures using a “finite population correction factor” that
will work.
Unfortunately, the question, "How large is large enough?" does not
have a simple answer.
The upshot:
1) Using a “large population” procedure with a “small
population” is a common mistake.
2) This is one more difficulty in selecting an appropriate sample,
which leads to one more source of uncertainty.
WHY RANDOM SAMPLING IS IMPORTANT
Myth: "A random sample will be representative of the population".
 This is false: a random sample might, by chance, turn out to
be anything but representative.
 For example, it’s possible (though unlikely) that if you toss a
fair die ten times, all the tosses will come up six.
 If you find a book or web page that gives this reason, apply
some healthy skepticism to other things it claims.
A slightly better explanation that is partly true but partly
urban legend: "Random sampling prevents bias by giving all
individuals an equal chance to be chosen."
 The element of truth: Random sampling will eliminate
systematic bias.
 A practical rationale: This statement is often the best
plausible explanation that is acceptable to someone with little
mathematical background.
 However, this statement could easily be misinterpreted as the
myth above.
 An additional, very important, reason why random sampling
is important, at least in frequentist statistical procedures,
which are those most often taught (especially in introductory
classes) and used:
The real reason: The mathematical theorems that justify most
frequentist statistical procedures apply only to random samples.
The next section elaborates.
OVERVIEW OF FREQUENTIST HYPOTHESIS TESTING
Most commonly used frequentist hypothesis tests involve the
following elements:
1. Model assumptions
2. Null and alternative hypotheses
3. A test statistic.
 This is something calculated by a rule from a sample.
 It needs to have the property that extreme values of the test
statistic cast doubt on the null hypothesis.
4. A mathematical theorem saying, "If the model assumptions
and the null hypothesis are both true, then the sampling
distribution of the test statistic has this particular form."
Note:
 The sampling distribution is the probability distribution of
the test statistic, when considering all possible suitably
random samples of the same size. (More later.)
 The exact details of these four elements will depend on the
particular hypothesis test.
Example: In the case of the large sample z-test for the mean with
two-sided alternative, the elements are:
1. Model assumptions: We are working with simple random
samples of a random variable Y that has a normal distribution.
2. Null hypothesis: “The mean of the random variable in
question is a certain value µ0.”
Alternative hypothesis: "The mean of the random variable Y is
not µ0." (This is called the two-sided alternative.)
3. Test statistic: ȳ (the sample mean of a simple random sample of size n from the random variable Y).
Before discussing item 4 (the mathematical theorem), we first need to do two things:
I. Note on terminology:
 The mean of the random variable Y is also called the
expected value or the expectation of Y.
o It’s denoted E(Y).
o It’s also called the population mean, often denoted as µ.
o It is what we do not know in this example.
 A sample mean is typically denoted ȳ (read "y-bar").
o It is calculated from a sample y1, y2, ... , yn of values of Y by the familiar formula ȳ = (y1 + y2 + ... + yn)/n.
 The sample mean ȳ is an estimate of the population mean µ, but they are usually not the same.
o Confusing them is a common mistake.
 Note that I have written "the population mean" but "a sample mean".
o A sample mean depends on the sample chosen.
o Since there are many possible samples, there are many
possible sample means.
o However, there is only one population mean associated
with the random variable Y.
II. We now need to step back and consider all possible simple
random samples of Y of size n.
 For each simple random sample of Y of size n, we get a value of ȳ.
 We thus have a new random variable Ȳn:
o The associated random process is "pick a simple random sample of size n".
o The value of Ȳn is the sample mean ȳ for this sample.
 Note that
o Ȳn stands for the new random variable,
o ȳ stands for the value of Ȳn for a particular sample of size n.
 The distribution of Ȳn is called the sampling distribution of Ȳn (or the sampling distribution of the mean).
Now we can state the theorem:
4. The theorem states: If the model assumptions are all true (i.e., if Y is normal and all samples considered are simple random samples), and if in addition the mean of Y is indeed µ0 (i.e., if the null hypothesis is true), then
 The sampling distribution of Ȳn is normal
 The sampling distribution of Ȳn has mean µ0
 The sampling distribution of Ȳn has standard deviation σ/√n, where σ is the standard deviation of the original random variable Y.
More Terminology: σ is called the population standard deviation of Y; it is not the same as the sample standard deviation s, although s is an estimate of σ.
The following chart summarizes the theorem and related information:

Random variable Y (population distribution):
o Mean: the population mean µ (µ = µ0 if the null hypothesis is true)
o Standard deviation: the population standard deviation σ
o Related quantities calculated from a sample y1, y2, ... , yn: the sample mean ȳ = (y1 + y2 + ... + yn)/n, which is an estimate of the population mean µ, and the sample standard deviation s = √[(1/(n-1)) Σ (yi - ȳ)²], which is an estimate of the population standard deviation σ

Random variable Ȳn (sampling distribution):
o Mean: the sampling distribution mean µ (again, µ = µ0 if the null hypothesis is true)
o Standard deviation: the sampling distribution standard deviation σ/√n
Comment: The model assumption that the sample is a simple random sample (in particular, that the Yi's as defined earlier are independent) is used to prove that the sampling distribution is normal and (even more importantly) that the standard deviation of the sampling distribution is σ/√n.
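The conclusion of the theorem can be checked numerically. The following sketch (Python with NumPy; the values µ0 = 10, σ = 2, and n = 25 are made up for illustration) draws many simple random samples from a normal Y, computes each sample mean, and verifies that the collection of sample means has mean close to µ0 and standard deviation close to σ/√n = 0.4.

    import numpy as np

    rng = np.random.default_rng(0)
    mu0, sigma, n = 10.0, 2.0, 25     # hypothetical population mean, sd, and sample size
    num_samples = 100_000             # number of simple random samples to draw

    # Each row is one simple random sample of size n from Y ~ N(mu0, sigma^2);
    # the mean of each row is one value of the random variable Y-bar-sub-n.
    samples = rng.normal(mu0, sigma, size=(num_samples, n))
    sample_means = samples.mean(axis=1)

    print("mean of the sample means:", sample_means.mean())       # close to mu0 = 10
    print("sd of the sample means:  ", sample_means.std(ddof=1))  # close to sigma/sqrt(n) = 0.4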
Consequences (More detail later):
I. If the conclusion of the theorem is true, the sampling distribution of Ȳn is narrower than the original distribution of Y.
 In fact, the conclusion of the theorem gives us an idea of just
how narrow it is, depending on n.
 This will allow us to construct a hypothesis test.
II. The only way we know the conclusion is true is if we know the
hypotheses of the theorem (the model assumptions) are true.
III. Thus: If the model assumptions are not true, then we do not
know that the theorem is true, so we do not know that the
hypothesis test is valid.
In the example (large sample z-test for a mean), this translates to:
If the sample is not a simple random sample, or if the random
variable is not normal, then the reasoning establishing the
validity of the test breaks down.
Comments:
1. Different hypothesis tests have different model assumptions.
 Some tests apply to random samples that are not simple.
 For many tests, the model assumptions consist of several
assumptions.
 If any one of these model assumptions is not true, we do not
know that the test is valid.
2. Many techniques are robust to some departures from at least
some model assumptions.
 This means that if the particular assumption is not too far
from true, then the technique is still approximately valid.
 More later.
3. Using a hypothesis test without paying attention to whether or
not the model assumptions are true and whether or not the
technique is robust to possible departures from model assumptions
is a common mistake in using statistics. (More later.)
OVERVIEW OF
FREQUENTIST CONFIDENCE INTERVALS
Before continuing the discussion of hypothesis tests, it will be
helpful to first discuss the related concept of confidence intervals.
The General Situation:
 We are considering a random variable Y.
 We are interested in a certain parameter (e.g., a proportion,
or mean, or regression coefficient, or variance) associated
with the random variable Y.
 We do not know the value of the parameter.
 Goal 1: We would like to estimate the unknown parameter,
using data from a sample.
 Goal 2: We would also like to get some sense of how good
our estimate is.
The first goal is usually easier than the second.
Example: If the parameter we are interested in estimating is the
mean of the random variable, we can estimate it using a sample
mean.
The idea for the second goal (getting some sense of how good our
estimate is):
 Although we typically have just one sample at hand when we
do statistics, the reasoning used in frequentist (classical)
inference depends on thinking about all possible suitable
samples of the same size n.
 Which samples are considered "suitable" will depend on the
particular statistical procedure to be used.
 Each statistical procedure has model assumptions that are
needed to ensure that the reasoning behind the procedure is
sound.
 The model assumptions determine which samples are
"suitable."
Example: The parameter we are interested in estimating is the
population mean µ = E(Y) of the random variable Y. The
confidence interval procedure is the large-sample z-procedure.
 The model assumptions for this procedure are: The random variable is normal, and samples are simple random samples.
 Thus in this case, "suitable sample" means "simple random sample".
 We will also assume Y is normal, so that the procedure will be legitimate.
 We'll use σ to denote the standard deviation of Y.
 So we collect a simple random sample, say of size n, consisting of observations y1, y2, ... , yn.
o For example, if Y is "height of an adult American male", we take a simple random sample of n adult American males; y1, y2, ... , yn are their heights.
 We use the sample mean ȳ = (y1 + y2 + ... + yn)/n as our estimate of µ.
o This is an example of a point estimate -- a numerical estimate with no indication of how good the estimate is.
 To get an idea of how good our estimate is, we look at all possible simple random samples of size n from Y.
o In the specific example, we consider all possible simple random samples of adult American males, and for each sample of men, the list of their heights.
 One way we can get a sense of how good our estimate is in this situation is to consider the sample means ȳ for all possible simple random samples of size n from Y.
o This amounts to defining a new random variable, which we will call Ȳn (read "Y-bar sub n").
o We can describe the random variable Ȳn as "sample mean of a simple random sample of size n from Y", or perhaps more clearly as: "pick a simple random sample of size n from Y and calculate its sample mean".
o Note that each value of Ȳn is an estimate of the population mean µ.
 This new random variable Ȳn has a distribution. This is called a sampling distribution, since it arises from considering varying samples.
o The values of Ȳn are all the possible values of sample means ȳ of simple random samples of Y -- i.e., the values of our estimates of µ.
o The distribution of Ȳn gives us information about the variability (as samples vary) of our estimates of the population mean µ.
o There is a mathematical theorem that proves that if the model assumptions are true, the mean of the sampling distribution is also µ.
o This theorem also says that the sampling distribution is normal, and its standard deviation is σ/√n.
o The chart below summarizes some of this information.
Population:
o Associated distribution: the distribution of Y.
o Associated mean: the population mean µ, also called E(Y), the expected value of Y, or the expectation of Y.
o Associated standard deviation: the population standard deviation σ.

One simple random sample y1, y2, ... , yn of size n:
o Associated distribution: none.
o Associated mean: the sample mean ȳ = (y1 + y2 + ... + yn)/n. It is an estimate of µ.
o Associated standard deviation: the sample standard deviation s = √[(1/(n-1)) Σ (yi - ȳ)²]. s is an estimate of the population standard deviation σ.

All simple random samples of size n:
o Associated distribution: the sampling distribution (the distribution of Ȳn). Each sample has its own mean ȳ; this allows us to define a random variable Ȳn. The population for Ȳn is all simple random samples from Y, and the value of Ȳn for a particular simple random sample is the sample mean ȳ for that sample.
o Associated mean: since Ȳn is a random variable, it also has a mean, E(Ȳn). Using the model assumptions for this particular example, it can be proved that E(Ȳn) = µ; in other words, the random variables Y and Ȳn have the same mean.
o Associated standard deviation: the sampling distribution standard deviation σ/√n.
 The theorem that depends on the model assumptions will tell us enough so that it is possible to do the following:
o If we specify a probability (we'll use .95 to illustrate), we can find a number a so that
(*) The probability that Ȳn lies between µ - a and µ + a is approximately 0.95.
Caution: It is important to get the reference category straight here. This amounts to keeping in mind what is a random variable and what is a constant. Here, Ȳn is the random variable (that is, the sample is varying), whereas µ is constant.
Note: The z-procedure for confidence intervals is only an approximate procedure; that is why the "approximately" is in (*) and below. Many procedures are "exact"; we don't need the "approximately" for them.
o A little algebraic manipulation allows us to restate (*) as
(**) The probability that µ lies between Ȳn - a and Ȳn + a is approximately 0.95.
Caution: It is again important to get the reference category correct here. It hasn't changed: it is still the sample that is varying, not µ. So the probability refers to Ȳn, not to µ. Thinking that the probability refers to µ is a common mistake in interpreting confidence intervals.
 It may help to restate (**) as
(***) The probability that the interval from Ȳn - a to Ȳn + a contains µ is approximately 0.95.
 We are now faced with two possibilities (assuming the model assumptions are indeed all true):
1) The sample we have taken is one of the approximately 95% for which the interval from Ȳn - a to Ȳn + a does contain µ. ☺
2) Our sample is one of the approximately 5% for which the interval from Ȳn - a to Ȳn + a does not contain µ.
Unfortunately, we can't know which of these two possibilities is true.
 Nonetheless, we calculate the values of Ȳn - a and Ȳn + a for the sample we have, and call the resulting interval an approximate 95% confidence interval for µ.
o We can say that we have obtained the confidence interval by using a procedure that, for approximately 95% of all simple random samples from Y of the given size, produces an interval containing the parameter we are estimating.
o Unfortunately, we can't know whether or not the sample we have used is one of the approximately 95% of "good" samples that yield a confidence interval containing the true mean µ, or whether the sample we have is one of the approximately 5% of "bad" samples that yield a confidence interval that does not contain the true mean µ.
o We can just say that we have used a procedure that "works" about 95% of the time.
o Various web demos can demonstrate this, as does the simulation sketch below.
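Here is a minimal version of such a demo (Python with NumPy; the population values µ = 10 and σ = 2 and the sample size n = 25 are invented, and the known-σ interval ȳ ± 1.96·σ/√n is used for simplicity): it draws many simple random samples, builds the interval for each, and counts how often the interval contains µ.

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n = 10.0, 2.0, 25            # hypothetical population mean, sd, and sample size
    num_samples = 10_000                    # number of simple random samples to simulate
    a = 1.96 * sigma / np.sqrt(n)           # the number "a" for an approximate 95% interval

    covered = 0
    for _ in range(num_samples):
        y = rng.normal(mu, sigma, size=n)   # one simple random sample
        y_bar = y.mean()
        if y_bar - a <= mu <= y_bar + a:    # does this sample's interval contain the true mean?
            covered += 1

    print("proportion of intervals containing mu:", covered / num_samples)  # close to 0.95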
In general: We can follow a similar procedure for many other
situations to obtain confidence intervals for parameters.
 Each type of confidence interval procedure has its own model assumptions.
o If the model assumptions are not true, we are not sure that the procedure does what is claimed.
o However, some procedures are robust to some degree to some departures from model assumptions -- i.e., the procedure works pretty closely to what is intended if the model assumption is not too far from true.
o Robustness depends on the particular procedure; there are no "one size fits all" rules.
 We can decide on the "level of confidence" we want;
o E.g., we can choose 90%, 99%, etc. rather than 95%.
o Just which level of confidence is appropriate depends on the circumstances. (More later)
 The confidence level is the percentage of samples for which the procedure results in an interval containing the true parameter. (Or approximate percentage, if the procedure is not exact.)
 However, a higher level of confidence will produce a wider confidence interval. (See demo)
o i.e., less certainty in our estimate.
o So there is a trade-off between degree of confidence and degree of certainty.
 Sometimes the best we can do is a procedure that only gives approximate confidence intervals.
o i.e., the sampling distribution can be described only approximately.
o i.e., there is one more source of uncertainty.
o This is the case for the large-sample z-procedure.
 If the sampling distribution is not symmetric, we can't expect the confidence interval to be symmetric around the estimate.
o There may be slightly different procedures for calculating the endpoints of the confidence interval.
 There are variations such as "upper confidence limits" or "lower confidence limits" where we are only interested in estimating how large or how small the estimate might be.
FREQUENTIST HYPOTHESIS TESTS AND P-VALUES
We’ll now continue the discussion of hypothesis tests.
Recall: Most commonly used frequentist hypothesis tests involve
the following elements:
1. Model assumptions
2. Null and alternative hypothesis
3. A test statistic (something calculated by a rule from a sample)
o This needs to have the property that extreme values of the
test statistic cast doubt on the null hypothesis.
4. A mathematical theorem saying, "If the model assumptions
and the null hypothesis are both true, then the sampling
distribution of the test statistic has this particular form."
The exact details of these four elements will depend on the
particular hypothesis test.
We will use the example of a one-sided t-test for a single mean to illustrate the general concepts of p-value and hypothesis testing, as well as the sampling distribution for a hypothesis test.
 We have a random variable Y that is normally distributed. (This
is one of the model assumptions.)
 Our null hypothesis is: The population mean µ of the random
variable Y is µ0.
 For simplicity, we will discuss a one-sided alternative
hypothesis: The population mean µ of the random variable Y is
greater than µ0. (i.e., µ > µ0)
 Another model assumption says that samples are simple random
samples. We have data in the form of a simple random sample
of size n.
 To understand the idea behind the hypothesis test, we need to
put our sample of data on hold for a while and consider all
possible simple random samples of the same size n from the
random variable Y.
o For any such sample, we could calculate its sample mean ȳ and its sample standard deviation s.
o We could then use ȳ and s to calculate the t-statistic
t = (ȳ - µ0) / (s/√n)
o Doing this for all possible simple random samples of size n from Y gives us a new random variable, Tn. Its distribution is called a sampling distribution.
o The mathematical theorem associated with this inference procedure (one-sided t-test for a population mean) tells us that if the null hypothesis is true, then the sampling distribution has what is called the t-distribution with n - 1 degrees of freedom. (For large values of n, the t-distribution looks very much like the standard normal distribution; but as n gets smaller, the peak gets slightly smaller and the tails go further out.)
 Now consider where the t-statistic for the data at hand lies on the sampling distribution. Two possible values are shown in red and green, respectively, in the diagram below.
o Remember that this picture depends on the validity of the model assumptions and on the assumption that the null hypothesis is true.
[Figure: the sampling distribution of the t-statistic under the null hypothesis, a bell-shaped curve centered at 0 and plotted over the range -4 to 4.]
If the t-statistic lies at the red bar (around 0.5) in the picture,
nothing is unusual; our data are consistent with the null hypothesis.
But if the t-statistic lies at the green bar (around 2.5), then the data
would be fairly unusual -- assuming the null hypothesis is true.
So a t-statistic at the green bar would cast some reasonable doubt
on the null hypothesis.
A t-statistic even further to the right would cast even more doubt
on the null hypothesis.
Note: A little algebra will show that if t = (ȳ - µ0) / (s/√n) is unusually large, then so is ȳ, and vice-versa.
p-values
We can quantify the idea of how unusual a test statistic is by the p-value. The general definition is:
p-value = the probability of obtaining a test statistic at least as
extreme as the one from the data at hand, assuming the model
assumptions and the null hypothesis are all true.
Recall that we are only considering samples, from the same
random variable, that fit the model assumptions and of the same
size as the one we have.
So the definition of p-value, if we spell everything out, reads
p-value = the probability of obtaining a test statistic at least as
extreme as the one from the data at hand, assuming
 the model assumptions are all true, and
 the null hypothesis is true, and
 the random variable is the same (including the same
population), and
 the sample size is the same.
Comment: The preceding discussion can be summarized as
follows:
If we obtain an unusually small p-value, then (at least) one of the
following must be true:
 At least one of the model assumptions is not true (in which case the test may be inappropriate).
 The null hypothesis is false.
 The sample we have obtained happens to be one of the small percentage that result in a small p-value.
The interpretation of "at least as extreme as" depends on the alternative hypothesis:
 For the one-sided alternative hypothesis µ > µ0, "at least as extreme as" means "at least as great as".
o Recalling that the probability of a random variable lying in a certain region is the area under the probability distribution curve over that region, we conclude that for this alternative hypothesis, the p-value is the area under the distribution curve to the right of the test statistic calculated from the data (this calculation is carried out in the sketch following this list).
o Note that, in the picture, the p-value for the t-statistic at the green bar is much less than that for the t-statistic at the red bar.
 Similarly, for the other one-sided alternative, µ < µ0, the p-value is the area under the distribution curve to the left of the calculated test statistic.
o Note that for this alternative hypothesis, the p-value for the t-statistic at the green bar would be much greater than that for the t-statistic at the red bar, but both would be large as p-values go.
 For the two-sided alternative µ ≠ µ0, the p-value would be the area under the curve to the right of the absolute value of the calculated t-statistic, plus the area under the curve to the left of the negative of the absolute value of the calculated t-statistic.
o Since the sampling distribution in the illustration is symmetric about zero, the two-sided p-value of, say, the green value would be twice the area under the curve to the right of the green bar.
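The following sketch (Python with SciPy; the data values and µ0 are invented for illustration) carries out these calculations for the one-sample t-test: it computes the t-statistic and then obtains each p-value as a tail area of the t-distribution with n - 1 degrees of freedom.

    import numpy as np
    from scipy import stats

    # Hypothetical data: one simple random sample and the null-hypothesis mean mu0.
    y = np.array([10.3, 11.1, 9.8, 10.9, 11.4, 10.2, 10.7, 11.0])
    mu0 = 10.0

    n = len(y)
    t = (y.mean() - mu0) / (y.std(ddof=1) / np.sqrt(n))   # the t-statistic
    df = n - 1

    p_greater = stats.t.sf(t, df)              # one-sided p-value for the alternative mu > mu0
    p_less = stats.t.cdf(t, df)                # one-sided p-value for the alternative mu < mu0
    p_two_sided = 2 * stats.t.sf(abs(t), df)   # two-sided p-value for the alternative mu != mu0

    print(f"t = {t:.3f}")
    print(f"p (mu > mu0)  = {p_greater:.4f}")
    print(f"p (mu < mu0)  = {p_less:.4f}")
    print(f"p (two-sided) = {p_two_sided:.4f}")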
Note that, for samples of the same size, the smaller the p-value, the
stronger the evidence against the null hypothesis, since a smaller
p-value indicates a more extreme test statistic.
Thus, if the p-value is small enough (and assuming all the model
assumptions are met), rejecting the null hypothesis in favor of the
alternate hypothesis can be considered a rational decision.
Comments:
1. How small is "small enough" is a judgment call.
2. "Rejecting the null hypothesis" does not mean the null
hypothesis is false or that the alternate hypothesis is true.
3. Comparing p-values for samples of different size is a common
mistake.
 In fact, larger sample sizes are more likely to detect a
difference, so are likely to result in smaller p-values than
smaller sample sizes, even though the context being
examined is exactly the same.
We will discuss these comments further later.
MISINTERPRETATIONS AND MISUSES OF P-VALUES
Recall:
p-value = the probability of obtaining a test statistic at least as
extreme as the one from the data at hand, assuming:




the model assumptions for the inference procedure used
are all true, and
the null hypothesis is true, and
the random variable is the same (including the same
population), and
the sample size is the same.
Notice that this is a conditional probability: The probability that
something happens, given that various other conditions hold. One
common mistake is to neglect some or all of the conditions.
Example A: Researcher 1 conducts a clinical trial to test a drug for
a certain medical condition on 30 patients all having that condition.
 The patients are randomly assigned to either the drug or a
look-alike placebo (15 each).
 Neither patients nor medical personnel know which patient
takes which drug.
 Treatment is exactly the same for both groups, except for
whether the drug or placebo is used.
 The hypothesis test has null hypothesis "proportion
improving on the drug is the same as proportion improving
on the placebo" and alternate hypothesis "proportion
improving on the drug is greater than proportion improving
on the placebo."
 The resulting p-value is p = 0.15.
Researcher 2 does another clinical trial on the same drug,
with the same placebo, and everything else the same except that
200 patients are randomized to the treatments, with 100 in each
group. The same hypothesis test is conducted with the new data,
and the resulting p-value is p = 0.03.
Are these results contradictory? No -- since the sample sizes are
different, the p-values are not comparable, even though everything
else is the same. (In fact, a larger sample size typically results in a
smaller p-value; more later).
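To see how sample size alone drives this, here is a rough sketch (Python with SciPy) of a one-sided two-proportion z-test run at the two sample sizes with the same hypothetical improvement rates. The rates used (60% on the drug, 40% on the placebo) are invented and will not reproduce the exact p-values quoted above; the point is the pattern: the same observed difference yields a much larger p-value with 15 patients per group than with 100 per group.

    import numpy as np
    from scipy import stats

    def one_sided_prop_pvalue(x_drug, n_drug, x_placebo, n_placebo):
        """One-sided two-proportion z-test: H0 'proportions equal' vs. Ha 'drug proportion greater'."""
        p1, p2 = x_drug / n_drug, x_placebo / n_placebo
        p_pool = (x_drug + x_placebo) / (n_drug + n_placebo)      # pooled proportion under H0
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_drug + 1 / n_placebo))
        z = (p1 - p2) / se
        return stats.norm.sf(z)                                   # upper-tail area = one-sided p-value

    # Same observed improvement rates (60% vs. 40%), different sample sizes.
    print("n = 15 per group:  p =", round(one_sided_prop_pvalue(9, 15, 6, 15), 3))
    print("n = 100 per group: p =", round(one_sided_prop_pvalue(60, 100, 40, 100), 4))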
Example B: Researcher 2 from Example A does everything as
described above, but for convenience, his patients are all from the
student health center of the prestigious university where he works.
 He cannot claim that his result applies to patients other than
those of the age and socio-economic background, etc. of the
ones he used in the study, because his sample was taken from
a smaller population.
Example C: Researcher 2 proceeds as in Example A, with a sample
carefully selected from the population to which he wishes to apply
his results, but he is testing for equality of the means of an
outcome variable for the two groups.
 The hypothesis test he uses requires that the variance of the
outcome variable for each group compared is the same.
 He doesn’t check this, and in fact the variance for the
treatment group is twenty times as large as the variance for
the placebo group.
 He is not justified in rejecting the null hypothesis of equal
means, no matter how small his p-value.
Another common misunderstanding of p-values is the belief that
the p-value is "the probability that the null hypothesis is true".
 This is essentially a case of confusing a conditional probability with the reverse conditional probability: In the definition of p-value, "the null hypothesis is true" is the condition, not the event.
 The basic assumption of frequentist hypothesis testing is that the
null hypothesis is either true (in which case the probability that
it is true is 1) or false (in which case the probability that it is true
is 0).
Note: In the Bayesian perspective, it makes sense to consider "the
probability that the null hypothesis is true" as having values other
than 0 or 1.
 In that perspective, we consider "states of nature"; in different states of nature, the null hypothesis may have different probabilities of being true.
 The goal is then to determine the probability that the null hypothesis is true, given the data.
 This is the reverse conditional probability from the one considered in frequentist inference (the probability of the data given that the null hypothesis is true).
Type I and II Errors and Significance Levels
Type I Error:
Rejecting the null hypothesis when it is in fact true is called a Type
I error.
Significance level:
Many people decide, before doing a hypothesis test, on a
maximum p-value for which they will reject the null hypothesis.
This value is often denoted α (alpha) and is also called the
significance level.
When a hypothesis test results in a p-value that is less than the
significance level, the result of the hypothesis test is called
statistically significant.
Confusing statistical significance and practical significance is a
common mistake.
Example: A large clinical trial is carried out to compare a new
medical treatment with a standard one. The statistical analysis
shows a statistically significant difference in lifespan when
using the new treatment compared to the old one.
 However, the increase in lifespan is at most three days,
with average increase less than 24 hours, and with poor
quality of life during the period of extended life.
 Most people would not consider the improvement
practically significant.
Caution: The larger the sample size, the more likely a
hypothesis test will detect a small difference. Thus it is
especially important to consider practical significance when
sample size is large.
Connection between Type I error and significance level:
A significance level α corresponds to a certain value of the test
statistic, say tα, represented by the orange line in the picture of a
sampling distribution below (the picture illustrates a hypothesis
test with alternate hypothesis "µ > 0").
[Figure: the sampling distribution of the test statistic, plotted from -4 to 4, with the cut-off tα marked by an orange line; the shaded area to the right of tα is indicated by an arrow.]
 Since the shaded area indicated by the arrow is the p-value
corresponding to tα, that p-value (shaded area) is α.
 To have p-value less than α, a t-value for this test must be to
the right of tα.
 So the probability of rejecting the null hypothesis when it is
true is the probability that t > tα , which we have seen is α.
 In other words, the probability of Type I error is α.
 Rephrasing using the definition of Type I error:
The significance level α is the probability of making the
wrong decision when the null hypothesis is true.
 Note:
o α is also called the bound on Type I error.
o Choosing a value α is sometimes called setting a bound on Type I error.
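This identification of α with the Type I error rate can be checked by simulation. The sketch below (Python with NumPy/SciPy; all numerical values are made up) repeatedly draws samples from a population for which the null hypothesis µ = 0 is actually true, runs a one-sided t-test at significance level 0.05 on each, and reports the fraction of samples that (wrongly) reject.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    alpha, n, num_samples = 0.05, 20, 50_000   # significance level, sample size, simulated samples
    mu0, sigma = 0.0, 1.0                      # the null hypothesis is true: Y ~ N(0, 1)

    rejections = 0
    for _ in range(num_samples):
        y = rng.normal(mu0, sigma, size=n)
        t = (y.mean() - mu0) / (y.std(ddof=1) / np.sqrt(n))
        p = stats.t.sf(t, df=n - 1)            # one-sided p-value for the alternative mu > mu0
        if p < alpha:
            rejections += 1                    # rejecting a true null hypothesis: a Type I error

    print("proportion of Type I errors:", rejections / num_samples)  # close to alpha = 0.05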
Pros and Cons of Setting a Significance Level:
 Setting a significance level (before doing inference) has the advantage that the analyst is not tempted to choose a cut-off on the basis of what he or she hopes is true.
 It has the disadvantage that it neglects that some p-values might best be considered borderline.
o This is one reason why it is important to report p-values when reporting results of hypothesis tests. It is also good practice to include confidence intervals corresponding to the hypothesis test.
o For example, if a hypothesis test for the difference of two means is performed, also give a confidence interval for the difference of those means.
o If the significance level for the hypothesis test is .05, then use confidence level 95% for the confidence interval.
Type II Error
Not rejecting the null hypothesis when in fact the alternate
hypothesis is true is called a Type II error.
 Example 2 below provides a situation where the concept of Type II error is important.
 Note: "The alternate hypothesis" in the definition of Type II error may refer to the alternate hypothesis in a hypothesis test, or it may refer to a "specific" alternate hypothesis.
Example: In a t-test for a sample mean µ, with null hypothesis "µ =
0" and alternate hypothesis "µ > 0":
 We might talk about the Type II error relative to the general
alternate hypothesis "µ > 0".
 Or we might talk about the Type II error relative to the
specific alternate hypothesis "µ = 1".
 Note that the specific alternate hypothesis is a special case of
the general alternate hypothesis.
In practice, people often work with Type II error relative to a
specific alternate hypothesis.
 In this situation, the probability of Type II error relative to the
specific alternate hypothesis is often called β.
 In other words, β is the probability of making the wrong
decision when the specific alternate hypothesis is true.
 The specific alternative is considered since it is more feasible
to calculate β than the probability of Type II error relative to
the general alternative.
 See the discussion of power below for related detail.
Considering both types of error together:
The following table summarizes Type I and Type II errors:
Decision (based on sample): Reject the null hypothesis
o If the null hypothesis is true (for the population studied): Type I error
o If the null hypothesis is false: correct decision

Decision (based on sample): Don't reject the null hypothesis
o If the null hypothesis is true: correct decision
o If the null hypothesis is false: Type II error
An analogy that can be helpful in understanding the two types of
error is to consider a defendant in a trial.
 The null hypothesis is "defendant is not guilty."
 The alternate is "defendant is guilty."
 A Type I error would correspond to convicting an innocent
person.
 Type II error would correspond to setting a guilty person free.
 This could be more than just an analogy if the verdict hinges
on statistical evidence (e.g., a DNA test), and where rejecting
the null hypothesis would result in a verdict of guilty, and not
rejecting the null hypothesis would result in a verdict of not
guilty.
 The analogous table would be:
Verdict: Guilty
o If the defendant is in fact not guilty: Type I error -- an innocent person goes to jail (and maybe a guilty person goes free)
o If the defendant is in fact guilty: correct decision

Verdict: Not guilty
o If the defendant is in fact not guilty: correct decision
o If the defendant is in fact guilty: Type II error -- a guilty person goes free
The following diagram illustrates the Type I error and the Type II
error
 against the specific alternate hypothesis "µ =1"
 in a hypothesis test for a population mean µ,
 with null hypothesis "µ = 0,"
 alternate hypothesis "µ > 0",
 and significance level α= 0.05.
[Figure: two overlapping sampling distribution curves (one under the null hypothesis and one under the specific alternative), plotted from -4 to 5, with the rejection cut-off marked by a vertical line; see the description below.]
In the diagram,
 The blue (leftmost) curve is the sampling distribution of the test statistic assuming the null hypothesis "µ = 0."
 The green (rightmost) curve is the sampling distribution of
the test statistic assuming the specific alternate hypothesis "µ
=1".
 The vertical red line shows the cut-off for rejection of the
null hypothesis:
o The null hypothesis is rejected for values of the test
statistic to the right of the red line (and not rejected for
values to the left of the red line).
 The area of the diagonally hatched region to the right of the
red line and under the blue curve is the probability of type I
error (α).
 The area of the horizontally hatched region to the left of the
red line and under the green curve is the probability of Type
II error against the specific alternative (β).
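For a concrete version of this picture, suppose (purely for illustration) that the test statistic has a normal distribution with standard deviation 1, centered at 0 under the null hypothesis and at 1 under the specific alternative. Then α and β are just tail areas:

    from scipy import stats

    alpha = 0.05
    cutoff = stats.norm.ppf(1 - alpha)         # the rejection cut-off (red line), about 1.645

    # Assume, for illustration, the alternative curve is a unit normal centered at 1.
    shift = 1.0
    beta = stats.norm.cdf(cutoff, loc=shift)   # area under the alternative curve left of the cut-off

    print(f"cut-off = {cutoff:.3f}")
    print(f"Type I error probability (alpha) = {alpha}")
    print(f"Type II error probability (beta) = {beta:.3f}")   # about 0.74 under these assumptions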
Deciding what significance level to use:
This should be done before analyzing the data -- preferably before
gathering the data. There are (at least) two reasons why this is
important:
1) The significance level desired is one criterion in deciding on an
appropriate sample size.
 See discussion of Power below.
2) If more than one hypothesis test is planned, additional
considerations need to be taken into account.
 See discussion of Multiple Inference below.
The choice of significance level should be based on the
consequences of Type I and Type II errors:
1. If the consequences of a Type I error are serious or expensive, a
very small significance level is appropriate.
Example 1: Two drugs are being compared for effectiveness in
treating the same condition.
o Drug 1 is very affordable, but Drug 2 is extremely
expensive.
o The null hypothesis is “both drugs are equally effective.”
o The alternate is “Drug 2 is more effective than Drug 1.”
o In this situation, a Type I error would be deciding that
Drug 2 is more effective, when in fact it is no better than
Drug 1, but would cost the patient much more money.
o That would be undesirable from the patient’s perspective,
so a small significance level is warranted.
2. If the consequences of a Type I error are not very serious (and
especially if a Type II error has serious consequences), then a
larger significance level is appropriate.
Example 2: Two drugs are known to be equally effective for a certain condition.
o They are also each equally affordable.
o However, there is some suspicion that Drug 2 causes a serious side effect in some patients, whereas Drug 1 has been used for decades with no reports of the side effect.
o The null hypothesis is "the incidence of the side effect in both drugs is the same".
o The alternate is "the incidence of the side effect in Drug 2 is greater than that in Drug 1."
o Falsely rejecting the null hypothesis when it is in fact true (Type I error) would have no great consequences for the consumer.
o But a Type II error (i.e., failing to reject the null hypothesis when in fact the alternate is true, which would result in deciding that Drug 2 is no more harmful than Drug 1 when it is in fact more harmful) could have serious consequences from a public health standpoint.
o So setting a large significance level is appropriate.
Comments:
 Neglecting to think adequately about possible consequences of Type I and Type II errors (and deciding acceptable levels of Type I and II errors based on these consequences) before conducting a study and analyzing data is a common mistake in using statistics.
 Sometimes there may be serious consequences of each alternative, so some compromises or weighing priorities may be necessary.
o The trial analogy illustrates this well: Which is better or worse, imprisoning an innocent person or letting a guilty person go free?
o This is a value judgment; value judgments are often involved in deciding on significance levels.
o Trying to avoid the issue by always choosing the same significance level is itself a value judgment.
 Different people may decide on different standards of evidence.
o This is another reason why it is important to report p-values even if you set a significance level.
o It is not enough just to say, "significant at the .05 level," "significant at the .01 level," etc.
 Sometimes different stakeholders have different interests that compete (e.g., in the second example above, the developers of Drug 2 might prefer to have a smaller significance level).
 See Wuensch (1994) for more discussion of considerations involved in deciding on reasonable levels for Type I and Type II errors.
 See also the discussion of Power below.
 Similar considerations hold for setting confidence levels for confidence intervals; see http://www.ma.utexas.edu/users/mks/statmistakes/conflevel.html.
POWER OF A STATISTICAL PROCEDURE
Overview
The power of a statistical procedure can be thought of as the
probability that the procedure will detect a true difference of a
specified type.
 As in talking about p-values and confidence levels, the
reference for "probability" is the sample.
 Thus, power is the probability that a randomly chosen sample
o satisfying the model assumptions
o will give evidence of a difference of the specified type
when the procedure is applied,
o if the specified difference does indeed occur in the
population being studied.
 Note also that power is a conditional probability: the
probability of detecting a difference, if indeed the difference
does exist.
In many real-life situations, there are some differences that we would be interested in being able to detect and others that would not make a practical difference.
Examples:
 If you can only measure the response to within 0.1 units, it
doesn't really make sense to worry about falsely rejecting
a null hypothesis for a mean when the actual value of the
mean is within less than 0.1 units of the value specified in
the null hypothesis.
 Some differences are of no practical importance -- for
example, a medical treatment that extends life by 10
minutes is probably not worth it.
In cases such as these, neglecting power could result in one or
more of the following:
 Doing much more work or going to more expense than
necessary
 Obtaining results which are meaningless
 Obtaining results that don't answer the question of interest.
Elaboration
For many confidence interval procedures, power can be defined as:
The probability (again, the reference category is “samples”)
that the procedure will produce an interval with a half-width
of at least a specified amount.
For a hypothesis test, power can be defined as:
The probability (again, the reference category is “samples”)
of rejecting the null hypothesis under a specified condition.
Example: For a one-sample t-test for the mean of a population,
with null hypothesis Ho: µ = 100, you might be interested in the
probability of rejecting Ho when µ ≥ 105, or when |µ - 100| > 5,
etc.
As with Type I Error, we may think of power for a hypothesis test
in terms of power against a specific alternative rather than against
a general alternative.
Example: If we are performing a hypothesis test for the mean of a
population, with null hypothesis H0: µ = 0 and alternate hypothesis
µ > 0, we might calculate the power of the test against the specific
alternative H1: µ = 1, or against the specific alternate H3: µ = 3,
etc. The picture below shows three sampling distributions:
 The sampling distribution assuming H0 (blue; leftmost curve)
 The sampling distribution assuming H1 (green; middle curve)
 The sampling distribution assuming H3 (yellow; rightmost curve)
The red line marks the cut-off corresponding to a significance level
α = 0.05.
 Thus the area under the blue curve to the right of the red line is 0.05.
 The area under the green curve to the right of the red line is the probability of rejecting the null hypothesis (µ = 0) if the specific alternative H1: µ = 1 is true.
o In other words, this area is the power of the test against the specific alternative H1: µ = 1.
o We can see in the picture that in this case, this power is greater than 0.05, but noticeably less than 0.50.
 Similarly, the area under the yellow curve to the right of the red line is the power of the test against the specific alternative H3: µ = 3.
o Notice that the power in this case is much larger than 0.5.
This illustrates the general phenomenon that the farther an
alternative is from the null hypothesis, the higher the power of the
test to detect it. (See Claremont WISE Demo)
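These areas can be computed directly. The sketch below (Python with SciPy) makes the simplifying assumption, for illustration only, that the three curves are unit-variance normal distributions centered at 0, 1, and 3 (that is, the specific alternatives sit one and three standard errors above the null), with a one-sided cut-off at significance level 0.05.

    from scipy import stats

    alpha = 0.05
    cutoff = stats.norm.ppf(1 - alpha)              # the red line, about 1.645

    for shift in (1.0, 3.0):                        # assumed centers of the alternative curves
        power = stats.norm.sf(cutoff, loc=shift)    # area to the right of the cut-off
        print(f"power against an alternative centered at {shift}: {power:.3f}")

    # Prints roughly 0.26 and 0.91, matching the qualitative description above:
    # noticeably less than 0.50 for the nearer alternative, much more than 0.5 for the farther one.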
Note: For most tests, it is possible to calculate the power against a
specific alternative, at least to a reasonable approximation. It is not
usually possible to calculate the power against a general
alternative, since the general alternative is made up of infinitely
many possible specific alternatives.
Power and Type II Error
Recall: The Type II Error rate β of a test against a specific alternate
hypothesis is represented in the diagram above as the area
under the sampling distribution curve for that alternate hypothesis
and to the left of the cut-off line for the test. Thus
β + (Power of a test against a specific alternate hypothesis)
= total area under sampling distribution curve
= 1,
so
Power = 1 - β
Power and Sample Size
Power will depend on sample size as well as on the specific
alternative.
 The picture above shows the sampling distributions for one
particular sample size.
 If the sample size is larger, the sampling distributions will be
narrower.
 This will mean that sampling distributions will have less
overlap, and the power will be higher.
 Similarly, a smaller sample size will result in more overlap of
the sampling distributions, hence in lower power.
 This dependence of power on sample size allows us, in
principle, to figure out beforehand what sample size is
needed to detect a specified difference, with a specified
power, at a given significance level, if that difference is
really there.
 See Claremont University's Wise Project's Statistical Power
Applet for an interactive demonstration of the interplay
between sample size and power for a one-sample z-test.
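As a deliberately simplified illustration of such a calculation, the sketch below computes the sample size for a one-sided, one-sample z-test with known standard deviation, using the closed form n = ((z(1-α) + z(power)) · σ / δ)², where δ is the smallest difference considered worth detecting; all numerical values are invented.

    import math
    from scipy import stats

    alpha = 0.05     # significance level
    power = 0.80     # desired power
    sigma = 2.0      # assumed population standard deviation (e.g., from a pilot study)
    delta = 0.5      # smallest difference from mu0 considered practically important

    z_alpha = stats.norm.ppf(1 - alpha)   # about 1.645
    z_power = stats.norm.ppf(power)       # about 0.842

    n = ((z_alpha + z_power) * sigma / delta) ** 2
    print("required sample size:", math.ceil(n))   # round up to a whole number of observations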
In practice, details on figuring out sample size will vary from
procedure to procedure. Some considerations involved:
 The difference used in calculating sample size (i.e., the
specific alternative used in calculating sample size) should be
decided on the basis of practical significance and/or "worst
case scenario," depending on the consequences of decisions.
 Determining sample size to give desired power and
significance level will usually require some estimate of
parameters such as variance, so will only be as good as these
estimates.
o These estimates usually need to be based on previous
research or a pilot study.
o It is wise to use a conservative estimate of variance
(e.g., the upper bound of a confidence interval from a
pilot study), or to do a sensitivity analysis to see how
the sample size estimate depends on the parameter
estimate.
 Even when there is a good formula for power in terms of
sample size, "inverting" the formula to get sample size from
power is often not straightforward.
o This may require some clever approximation
procedures.
o Such procedures have been encoded into computer
routines for many (not all) common tests.
o See John C. Pezzullo's Interactive Statistics Pages for
links to a number of online power and sample size
calculators.
 Good and Hardin (2006, p. 34) report that using the default settings for power and sample size calculations is a common mistake made by researchers.
 For discrete distributions, the "power function" (giving
power as a function of sample size) is often saw-toothed in
shape.
o A consequence is that software may not necessarily
give the optimal sample size for the conditions
specified.
o Good software for such power calculations will also
output a graph of the power function, allowing the
researcher to consider other sample sizes that might
be better than the default given by the software.
Common Mistakes involving Power:
1. Accepting a null hypothesis when a result is not statistically
significant, without taking power into account.
 Since power typically increases with increasing sample size,
practical significance is important to consider.
 Looking at this from the other direction: Power decreases
with decreasing sample size.
 Thus a small sample size may not be able to detect an
important difference.
 If there is strong evidence that the power of a procedure will
indeed detect a difference of practical importance, then
accepting the null hypothesis is appropriate.
 Otherwise “accepting the null hypothesis” is not appropriate
-- all we can legitimately say then is that we fail to reject the
null hypothesis.
2. Neglecting to do a power analysis/sample size calculation
before collecting data
 Without a power analysis, you may end up with a result that
does not really answer the question of interest.
 You might obtain a result that is not statistically significant, but from a test that was not able to detect a difference of practical significance.
 You might also waste resources by using a sample size that is
larger than is needed to detect a relevant difference.
3. Confusing retrospective power and prospective power.
 Power as defined above for a hypothesis test is also called
prospective or a priori power.
 It is a conditional probability, P(reject H0 | Ha), calculated
without using the data to be analyzed.
 Retrospective power is calculated after the data have been
collected and analyzed, using the data.
 Retrospective power can be used legitimately to estimate the
power and sample size for a future study, but cannot
legitimately be used as describing the power of the study
from which it is calculated.
 See Hoenig and Heisey (2001) and Wuensch et al (2003) for
more discussion and further references.
REFERENCES
Good and Hardin (2006), Common Errors in Statistics, Wiley.
Hoenig, J. M. and D. M. Heisey (2001), "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis," The American Statistician 55(1), 19-24.
Wuensch, K. L. (1994), Evaluating the Relative Seriousness of Type I versus Type II Errors in Classical Hypothesis Testing, http://core.ecu.edu/psyc/wuenschk/StatHelp/Type-I-II-Errors.htm
Wuensch, K. L. et al. (2003), "Retrospective (Observed) Power Analysis," Stat Help website, http://core.ecu.edu/psyc/wuenschk/stathelp/PowerRetrospective.htm