Introduction to Testing a Hypothesis
Testing a treatment
Descriptive statistics cannot determine if differences are due to chance.
Sampling error refers to differences that arise by chance alone.
Example of differences due to chance alone:
μ = 100, σ² = 100
ȳ₁ = 107, ȳ₂ = 117
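For example (a minimal simulation, not from the original slides; the sample size of 25 is an assumption), two samples drawn from the same population with μ = 100 and σ² = 100 will usually have different means purely by chance:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 100, 10, 25   # population mean 100, variance 100; n = 25 is assumed

# Two independent samples from the SAME population
y1 = rng.normal(mu, sigma, n)
y2 = rng.normal(mu, sigma, n)

# The two means differ even though no treatment was applied:
# the difference is sampling error (chance) alone.
print(round(y1.mean(), 1), round(y2.mean(), 1))
```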
Examples:
We know that the population mean IQ is 100. We select 50 people and give
them our new IQ-boosting programme. When tested after the treatment, this sample
has a mean of 110. Did we boost IQ?
We select a sample of college students and a sample of university students. We find that
the mean of the college students is 109 and the mean of the university students is 113. Is
there a difference in the IQs of college and university students?
Are both cases simply due to sampling error?
Remember, the sample mean is rarely the population mean and rarely
do the means of two randomly selected samples end up being the same.
Sampling distribution: describes the amount of sample-to-sample variability
to expect for a given statistic.
Sampling error of the mean:
s_ȳ = s / √n
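A minimal sketch of this calculation for the IQ example, assuming s = 10 (from σ² = 100) and n = 50:

```python
import math

s = 10.0   # standard deviation (assumed from sigma^2 = 100)
n = 50     # sample size from the IQ-boosting example

# Standard error of the mean: s_ybar = s / sqrt(n)
se = s / math.sqrt(n)
print(round(se, 2))   # ~1.41: typical sample-to-sample variability of the mean
```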
Simplifying Hypothesis Testing
1. Develop research hypothesis (experimental)
2. Obtain a sample (or samples) of observations
3. Construct a null hypothesis, e.g.
   μ_ȳ = μ (one sample)
   μ_ȳ₁ - μ_ȳ₂ = 0 (two samples)
4. Obtain an appropriate sampling distribution
5. Reject or fail to reject the null hypothesis (a worked sketch of these steps follows this list)
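A hedged sketch of these five steps applied to the IQ-boosting example, assuming a normal population with μ = 100 and σ = 10 (the population standard deviation is an assumption) and using a two-tailed z-test:

```python
import math
from scipy import stats

# 1. Research hypothesis: the programme changes (raises) IQ.
# 2. Sample of observations: n = 50 with an observed mean of 110 (from the example).
n, ybar = 50, 110.0
# 3. Null hypothesis: the treated sample still comes from a population with mu = 100.
mu0, sigma = 100.0, 10.0          # sigma = 10 is an assumption (sigma^2 = 100)
# 4. Sampling distribution of the mean: normal with standard error sigma / sqrt(n).
se = sigma / math.sqrt(n)
z = (ybar - mu0) / se
p = 2 * stats.norm.sf(abs(z))     # two-tailed p-value
# 5. Reject or fail to reject the null at alpha = .05.
print(round(z, 2), p, "reject H0" if p < 0.05 else "fail to reject H0")
```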
Null Hypothesis
Assume the samples come from the same population and that the two sample
means (even though they may differ) estimate the same value, the population
mean (a sketch of testing this appears below).
Why?
Method of Contradiction: we can only demonstrate that a hypothesis is false.
If we thought that the IQ boosting programme worked, what would
we actually test? What value of IQ would we test?
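For the two-sample case above (college vs. university students), here is a hedged sketch of testing the null that both groups share one population mean. Only the two means (109 and 113) come from the slides; the sample sizes, standard deviation, and simulated raw scores are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated raw scores: means 109 and 113 come from the example; the sample
# sizes (40 each) and standard deviation (15) are assumptions for illustration.
college = rng.normal(109, 15, 40)
university = rng.normal(113, 15, 40)

# Independent-samples t-test of H0: the two groups share the same population mean.
res = stats.ttest_ind(college, university)
print(round(res.statistic, 2), round(res.pvalue, 3),
      "reject H0" if res.pvalue < 0.05 else "fail to reject H0")
```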
Rejection and Non-Rejection of the Null Hypothesis
If we reject, we then say that we have evidence for our experimental hypothesis,
e.g., that our IQ boosting programme works.
If we fail to reject, we do NOT prove the null to be true.
Fisher: we choose either to reject or suspend judgment.
Neyman and Pearson argued for a pragmatic approach: do we spend money on our
IQ-boosting programme or not? We must accept or reject the null, but accepting
it still does not prove it to be true.
Type I & Type II Errors
Example: the IQ-boosting programme
We test the null hypothesis μ = 100 or, equivalently, that the treated and
untreated groups share the same mean (μ₁ = μ₂).
Type I Error: the null hypothesis is true, but we reject it. The probability of
a Type I Error is called α (alpha) and is set at .05.
Type II Error: the null hypothesis is false, but we fail to reject it. The probability
of a Type II Error is called β (beta).
How sure are we of our decisions?
Decision \ Null Hypothesis      True                      False
Reject the Null                 Type I Error (α)          Correct: Power (1 - β)
Fail to Reject                  Correct (1 - α)           Type II Error (β)
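As an illustrative simulation of the α cell (details assumed: normal population, σ = 10, n = 50, two-tailed test), a test run at α = .05 on repeated samples for which the null is actually true should commit a Type I error about 5% of the time:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 100.0, 10.0, 50, 0.05

rejections = 0
for _ in range(10_000):
    sample = rng.normal(mu0, sigma, n)              # the null is TRUE here
    z = (sample.mean() - mu0) / (sigma / np.sqrt(n))
    if 2 * stats.norm.sf(abs(z)) < alpha:           # two-tailed test
        rejections += 1

# Proportion of (false) rejections should be close to alpha, i.e. about 0.05.
print(rejections / 10_000)
```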
Power & β
[Figure: sampling distributions with the α region, the β region, and the power region marked.]
Note: The figure is based on the null hypothesis being false and represents
the sampling distribution of the means.
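A rough sketch of how β and power could be computed for this picture, assuming the null μ₀ = 100, σ = 10, n = 50, a one-tailed cutoff at α = .05, and a hypothetical true mean of 103 if the programme works:

```python
import math
from scipy import stats

mu0, sigma, n, alpha = 100.0, 10.0, 50, 0.05
mu1 = 103.0                     # hypothetical true mean under the alternative

se = sigma / math.sqrt(n)
cutoff = mu0 + stats.norm.ppf(1 - alpha) * se   # one-tailed rejection cutoff for the mean

# beta: chance the sample mean falls below the cutoff even though mu1 is true
beta = stats.norm.cdf(cutoff, loc=mu1, scale=se)
power = 1 - beta
print(round(beta, 3), round(power, 3))          # roughly beta ~ .32, power ~ .68
```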
One-Tailed and Two-Tailed Tests of Significance
Sampling Distribution of the Mean