Slides to accompany Weathington, Cunningham &
Pittenger (2010), Chapter 11: Between-Subjects Designs
1
Objectives
• t-test for independent groups
• Hypothesis testing
• Interpreting t and p
• Statistical power
2
t-test for Independent Groups
• Basic inferential statistic
• Ratio of two measures of variability:
the difference between the two group means
divided by the standard error of the difference
between the group means (see the formula below)
• Allows us to consider the effect relative to
sampling error
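One common way to write this ratio for two independent groups (generic notation; the book's symbols may differ):
t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}
where \bar{X}_1 and \bar{X}_2 are the sample means and s_{\bar{X}_1 - \bar{X}_2} is the standard error of the difference between the means (see the next slide).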
3
Standard Error of the Difference between Means
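A pooled-variance form of this standard error commonly used with the independent-groups t-test (generic notation; the book may present it differently):
s_{\bar{X}_1 - \bar{X}_2} = \sqrt{ \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2} \left( \frac{1}{n_1} + \frac{1}{n_2} \right) }
where s_1^2 and s_2^2 are the sample variances and n_1 and n_2 are the group sizes.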
4
t-test
• Larger |t-ratio| = greater difference between the
means relative to the standard error
• Based on this we can decide whether to
reject Ho
– Usually Ho: µ1 = µ2
• Sampling error may account for some of the
difference, but when |t| is large enough, chance alone
becomes an unlikely explanation (see the example below)
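A minimal sketch of this decision process in Python using scipy; the data, group sizes, and means are hypothetical, chosen only for illustration:

import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups (e.g., treatment vs. control)
rng = np.random.default_rng(11)
group1 = rng.normal(loc=52.0, scale=10.0, size=30)
group2 = rng.normal(loc=47.0, scale=10.0, size=30)

# Independent-groups t-test with pooled variance (equal_var=True)
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# If |t| exceeds the critical value for the chosen alpha
# (equivalently, if p < alpha), reject Ho: mu1 = mu2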
5
Hypothesis Testing: t-tests
• Based on estimates of probability
• When α = .05, there is a 5% chance of
rejecting Ho when we should not (Type I error)
– See Figure 11.2 (each tail = 2.5%)
– Region of rejection
• If t falls within the shaded regions, we reject
Ho because such a result is unlikely when Ho is true
6
Figure 11.2: Regions of rejection for a two-tailed
test (α = .05, 2.5% in each tail)
7
Hypothesis Testing Steps
1. State Ho and H1
– Before collecting or examining the data
2. Identify appropriate statistical test(s)
– Based on hypotheses
– Often multiple approaches are possible
– Depends on how well data meet the
assumptions of specific statistical tests
8
Hypothesis Testing Steps
3. Set the significance level (α)
– α = p(Type I error)
• Risk of a false alarm
• You control this directly
– β = p(Type II error)
• Risk of a miss
• Careful: setting α too low can "overcontrol"
and inflate the risk of a Type II error
9
Hypothesis Testing Steps
4. Determine the significance of the t-ratio
– Use the appropriate table in Appendix B, with the
df for the test and your selected alpha (α) level, to
determine t-critical
– If your observed |t-ratio| > t-critical, reject Ho
– If your observed p-value is less than α, you can
also reject Ho (see the sketch below)
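A short sketch of step 4 using scipy to look up the two-tailed critical value instead of the Appendix B table; the alpha, group sizes, and observed t-ratio are hypothetical:

from scipy import stats

alpha = 0.05
n1, n2 = 30, 30                               # hypothetical group sizes
df = n1 + n2 - 2                              # df for the independent-groups t-test
t_critical = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value

t_observed = 2.31                             # hypothetical observed t-ratio
print(f"t_critical = {t_critical:.3f}")
print("Reject Ho" if abs(t_observed) > t_critical else "Fail to reject Ho")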
10
Hypothesis Testing Steps
5. Interpreting the t-ratio
– Is it statistically significant?
– Is it practically/clinically significant?
• Does the effect size really matter?
• The book discusses the d-statistic (see the
formula below)
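A common definition of the d-statistic for two independent groups (Cohen's d with a pooled standard deviation; the book's notation may differ):
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}}, \qquad
s_{pooled} = \sqrt{ \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2} }
By a widely cited rule of thumb, d of roughly 0.2, 0.5, and 0.8 correspond to small, medium, and large effects.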
11
Hypothesis Testing Steps
5. Interpreting the t-ratio
– Magnitude of the effect
• Degree of variance in the DV accounted for by
the IV
• Omega squared = percentage of variance in the
DV accounted for by the IV (see the formula below)
– Is there cause and effect?
• Typically requires a manipulated IV, random
assignment, and a careful pre-/post design
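A commonly used estimate of omega squared for a two-group t-test (one standard formula; the book may present an equivalent form):
\omega^2 = \frac{t^2 - 1}{t^2 + n_1 + n_2 - 1}
For example, with t = 2.5 and n_1 = n_2 = 30, \omega^2 = (6.25 - 1)/(6.25 + 59) ≈ .08, i.e., the IV accounts for roughly 8% of the variance in the DV.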
12
Correct Interpretation of t and p
• If you have a significant t-ratio:
– There is a statistically significant difference
between the two groups
– The IV affects the DV
– The probability of a Type I error is α
13
Errors in p Interpretation
• Changing α after analyzing the data
– Unethical
– We cannot use p to alter α
• Kills your chances of limiting Type I error risk
• p only estimates the probability of obtaining results
at least as extreme as yours if the null hypothesis is
true, and it is based on sample statistics, which is not
the case for α
14
Errors in p Interpretation
• Stating that p = odds-against chance
– p = .05 does not mean that the probability of
results due to chance was 5% or less
– p is not the probability of committing a Type I
error
– Recommended interpretation:
• If p is small enough, I reject the null
hypothesis in favor of the alternative
hypothesis.
15
Errors in p Interpretation
• Assuming p = probability that H1 is true
(i.e., that the results are “valid”)
– p does not confirm the validity of H1
– Smaller p values do not indicate a more
important relationship between IV and DV
• Effect size estimates are required for this
16
Errors in p Interpretation
• Assuming p = probability of replicating
results
– The probability of rejecting Ho in a replication is
not given by the obtained p-value
• A newer statistic, p-rep, has received some
attention for this purpose (see Killeen, 2005)
17
Statistical Tests & Power
• β = p(Type II error) or p(miss)
• 1 – β = p(correctly rejecting false Ho) =
power
• Four main factors influence statistical
power
18
Power: Difference between µ
• Power increases when the difference between
the µ of the two populations is greater
19
Power: Sample Size
• Issue of how well a statistic estimates the
population parameter (Fig. 10.5)
• Larger N → smaller SEM
• As the SEM decreases → overlap of the sampling
distributions for the two populations decreases →
power increases
• Don't forget about cost
20
Power: Variability in Data
• Lots of variability → greater variance in the
sampling distribution and greater overlap of the
two distributions
• Reducing variability reduces the SEM →
overlap decreases → power goes up
• Techniques: use homogeneous samples and
reliable measurements
21
Power: α
• Smaller α → lower Type I probability →
lower power
• As p(Type I) decreases, p(Type II) increases
(see Figure 11.6)
• As α increases, power increases
– Enlarges the region of rejection
22
Figure 11.6
23
Estimating Sample Size
• Based on desired power
• Tables in Appendix B can give you estimates for
t-ratios (a software-based sketch follows below)
– Effect size is the sub-heading in those tables
• Cost / feasibility considerations
• Remember that sample size is not the only
influence on statistical power
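A minimal sample-size sketch in Python, as an alternative to the Appendix B tables; statsmodels is assumed to be available, and the effect size, alpha, and target power are hypothetical choices:

from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of an independent-groups t-test
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,        # assumed Cohen's d
                                   alpha=0.05,             # Type I error rate
                                   power=0.80,             # 1 - beta
                                   alternative='two-sided')
print(f"n per group = {n_per_group:.1f}")   # roughly 64 per group for these values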
24
What is Next?
• Instructor to provide details
25