CHAPTER 14
STUDENT’S t TEST FOR CORRELATED AND
INDEPENDENT GROUPS
CHAPTER OUTLINE
I.
The Two-Condition Experiment
A. Types of two-condition experiments.
1. Correlated groups design. In the earlier chapters we analyzed this using the
sign test.
2. Independent groups design. This design is covered later in the chapter.
B. Limitations of single sample design.
1. At least one population parameter (µ) must be specified.
2. Usually µ is not known.
3. Even if µ were known, one cannot be certain that the conditions under
which µ was calculated hold for a new set of experimental data.
4. These limitations are overcome in the two-condition experiment.
II.
Student's t Test for Correlated Groups
A. Characteristics of repeated measures or correlated groups design.
1. Each subject used for both conditions (e.g. before and after; control and
experimental).
2. Or pairs of subjects matched on one or more characteristics serve in both
conditions.
B. Information used by correlated groups t test.
1. Magnitude of difference scores.
2. Direction of difference scores.
C. What is tested. Tests the assumption that the difference scores are a random
sample from a population of difference scores having a mean of zero.
D. Similar to t test for single samples. The only change is that in this case we deal
with difference scores instead of raw scores.
E. Equations.
t_obt = (D̄_obt − μ_D) / (s_D/√N) = (D̄_obt − μ_D) / √[SS_D/(N(N − 1))]
where
D = difference score (e.g. control score − experimental score)
D̄_obt = mean of the sample difference scores
μ_D = mean of the population of difference scores (usually, but not
necessarily, equal to 0)
s_D = standard deviation of the sample difference scores
SS_D = sum of squares of the sample difference scores = ΣD² − (ΣD)²/N
N = number of difference scores
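The correlated groups t equation above can be sketched in Python using only the standard library; the difference scores here are hypothetical, chosen purely for illustration:

```python
import math

def correlated_t(diffs, mu_d=0.0):
    """t_obt = (mean(D) - mu_D) / sqrt(SS_D / (N(N - 1))) for paired scores."""
    n = len(diffs)
    d_bar = sum(diffs) / n
    # SS_D = sum(D^2) - (sum(D))^2 / N, the sum of squares of the differences
    ss_d = sum(d ** 2 for d in diffs) - sum(diffs) ** 2 / n
    return (d_bar - mu_d) / math.sqrt(ss_d / (n * (n - 1)))

# Hypothetical before-minus-after difference scores for N = 6 subjects
diffs = [2, 3, 1, 4, 2, 3]
t_obt = correlated_t(diffs)  # evaluated against df = N - 1 = 5
```

The obtained t is then compared with the critical t for N − 1 degrees of freedom, exactly as in the single sample case.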
F. Size of Effect.
1. Rationale. As with the single sample t test, the statistic used to
measure size of effect is symbolized by “d.” It is a standardized statistic
that, for the correlated groups t test, relies on the relationship between
the size of effect and D̄_obt: as the size of effect grows, so does D̄_obt,
regardless of the direction of the effect. The statistic d uses the absolute
value of D̄_obt because we are interested in the size of the real effect, not
its direction. This gives d a positive value that increases with the size of
D̄_obt regardless of the direction of the real effect. D̄_obt is divided by
σ_D to create a standardized value, much as was done with z scores.
2. Formula for Cohen’s d.
d = |D̄_obt| / σ_D     conceptual equation for size of effect, correlated groups t test
Since σ_D is unknown, we estimate it using s_D, the standard deviation of the
sample difference scores. Substituting s_D for σ_D, we arrive at the
computational equation for size of effect. Since s_D is an estimate, d̂ is used
instead of d.
d̂ = |D̄_obt| / s_D     computational equation for size of effect, correlated groups t test
3. Interpreting the value of d̂ . To interpret the value of d̂ , we are using the
criteria that Cohen has provided. These criteria are given in the following
table.
Value of d̂      Interpretation of d̂
0.00 – 0.20     Small effect
0.21 – 0.79     Medium effect
≥ 0.80          Large effect
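The computational equation for d̂ and Cohen’s interpretive criteria can be sketched together in Python; the difference scores are again hypothetical:

```python
import statistics

def cohens_d_hat(diffs):
    """d_hat = |mean(D)| / s_D, using the sample (n-1) standard deviation."""
    return abs(statistics.fmean(diffs)) / statistics.stdev(diffs)

def interpret_d(d):
    """Cohen's criteria: 0.00-0.20 small, 0.21-0.79 medium, >= 0.80 large."""
    if d <= 0.20:
        return "Small effect"
    if d < 0.80:
        return "Medium effect"
    return "Large effect"

diffs = [2, 3, 1, 4, 2, 3]
d_hat = cohens_d_hat(diffs)
label = interpret_d(d_hat)
```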
G. Power. The correlated groups t test is more powerful than the sign test.
Therefore there is less chance of making a Type II error. Note: As a general
rule one uses the most powerful statistical analysis appropriate to the data.
H. Assumptions. Use requires that the sampling distribution of D̄ be normally
distributed. This is generally achieved when
1. N ≥ 30, or
2. the population of difference scores is normally distributed.
III. Independent Groups Design
A. Design Characteristics
1. Random sampling of subjects from population.
2. Random assignment to each condition.
3. No basis for pairing of scores.
4. Each subject tested only once.
5. Raw scores are analyzed.
6. t test analyzes difference between sample means.
IV. Use of z Test for Independent Groups
A. Formula.
z_obt = [(X̄1 − X̄2) − μ_(X̄1−X̄2)] / σ_(X̄1−X̄2)
where
σ_(X̄1−X̄2) = √(σ_X̄1² + σ_X̄2²) = √[σ²(1/n1 + 1/n2)]
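The z test formula can be sketched as follows; the means, σ, and sample sizes below are hypothetical values assumed for illustration, since in practice σ is rarely known:

```python
import math

def z_independent(x1_bar, x2_bar, sigma, n1, n2, mu_diff=0.0):
    """z_obt = [(X1_bar - X2_bar) - mu_diff] / sigma_diff, where
    sigma_diff = sqrt(sigma^2 * (1/n1 + 1/n2)) under homogeneity of variance."""
    se = math.sqrt(sigma ** 2 * (1 / n1 + 1 / n2))
    return ((x1_bar - x2_bar) - mu_diff) / se

# Hypothetical sample means, known sigma, and group sizes
z = z_independent(105.0, 100.0, sigma=15.0, n1=36, n2=36)
```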
B. Assumptions.
1. Changing level of the independent variable is assumed to affect the mean
of the distribution but not the standard deviation.
2. 12 = 22 = 2
C. Characteristics of sampling distribution of the difference between sample
means.
1. Assuming population from which samples are drawn is normal, then the
distribution of the difference between sample means is normal.
2. μ_(X̄1−X̄2) = μ1 − μ2, where μ_(X̄1−X̄2) = the mean of the distribution of the
difference between sample means.
3. σ_(X̄1−X̄2) = √(σ_X̄1² + σ_X̄2²), where σ_(X̄1−X̄2) = the standard deviation of the
difference between sample means; σ_X̄1² = the variance of the sampling
distribution of the mean for samples of size n1 taken from the first
population; and σ_X̄2² = the variance of the sampling distribution of the
mean for samples of size n2 taken from the second population.
D. Must know σ. To use the z test one must know σ, which is rarely the case.
V. Student's t Test for Independent Groups
A. Used when σ² must be estimated. Uses a weighted average of the sample
variances, s1² and s2², as the estimate, with degrees of freedom as the weights.
B. General equation.
t_obt = [(X̄1 − X̄2) − μ_(X̄1−X̄2)] / √[s_W²(1/n1 + 1/n2)]
      = (X̄1 − X̄2) / √{[(SS1 + SS2)/(n1 + n2 − 2)](1/n1 + 1/n2)}
where df = n1 + n2 − 2 = N − 2
C. Equation when n1 = n2.
t_obt = (X̄1 − X̄2) / √[(SS1 + SS2)/(n(n − 1))]
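The independent groups t computation, with its weighted (pooled) variance estimate s_W², can be sketched from raw scores; the two samples below are hypothetical:

```python
import math

def independent_t(x1, x2):
    """t_obt = (X1_bar - X2_bar) /
    sqrt([(SS1 + SS2)/(n1 + n2 - 2)] * (1/n1 + 1/n2)), df = n1 + n2 - 2."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)  # sum of squares, group 1
    ss2 = sum((x - m2) ** 2 for x in x2)  # sum of squares, group 2
    sw2 = (ss1 + ss2) / (n1 + n2 - 2)     # weighted estimate of sigma^2
    return (m1 - m2) / math.sqrt(sw2 * (1 / n1 + 1 / n2))

# Hypothetical raw scores for two independent groups
x1 = [10, 12, 9, 11, 13]
x2 = [8, 7, 9, 6, 10]
t_obt = independent_t(x1, x2)  # evaluated against df = n1 + n2 - 2 = 8
```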
D. Assumptions for use of t test for independent groups.
1. Sampling distribution of X 1  X 2 is normally distributed, i.e. populations
from which samples were taken must be normal.
2. 12 = 2 (homogeneity of variance).
E. Violations of assumptions. If n1 = n2 and n ≥ 30, then the t test is robust
even if the above assumptions are violated. If violations are extreme, use the
Mann-Whitney U test. This test is covered in Chapter 17.
VI. Size of Effect
A. Rationale. As with the correlated groups t test, the statistic used to
measure size of effect is symbolized by “d.” It is a standardized statistic
that, for the independent groups t test, relies on the relationship between
the size of effect and X̄1 − X̄2: as the size of effect grows, so does
X̄1 − X̄2, regardless of the direction of the effect. The statistic d uses the
absolute value of X̄1 − X̄2 because we are interested in the size of the real
effect, not its direction. This gives d a positive value that increases with
the size of X̄1 − X̄2 regardless of the direction of the real effect.
X̄1 − X̄2 is divided by σ to create a standardized value, much as was done with
z scores.
B. Formula for Cohen’s d.
d = |X̄1 − X̄2| / σ     conceptual equation for size of effect, independent groups t test
Since σ is unknown, we estimate it using s_W², the weighted estimate of σ².
Substituting √(s_W²) for σ, we arrive at the computational equation for size
of effect. Since s_W² is an estimate, d̂ is used instead of d.
d̂ = |X̄1 − X̄2| / √(s_W²)     computational equation for size of effect, independent groups t test
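The computational equation for d̂ in the independent groups case can be sketched the same way; the two samples are hypothetical:

```python
import math

def d_hat_independent(x1, x2):
    """d_hat = |X1_bar - X2_bar| / sqrt(s_W^2), with s_W^2 the weighted
    (pooled) estimate of sigma^2."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)
    ss2 = sum((x - m2) ** 2 for x in x2)
    sw2 = (ss1 + ss2) / (n1 + n2 - 2)
    return abs(m1 - m2) / math.sqrt(sw2)

# Hypothetical raw scores for two independent groups
d_hat = d_hat_independent([10, 12, 9, 11, 13], [8, 7, 9, 6, 10])
```

The absolute value guarantees d̂ is positive whichever group has the larger mean.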
C. Interpreting the value of d̂ . To interpret the value of d̂ , we again use the
criteria that Cohen has provided. These criteria are shown below.
Value of d̂      Interpretation of d̂
0.00 – 0.20     Small effect
0.21 – 0.79     Medium effect
≥ 0.80          Large effect
VII. Power of t Test
A. Effect of variables on the power of the t test.
1. The greater the effect of the independent variable, the higher the power.
2. Increasing sample size increases power.
3. Increasing sample variability decreases power.
VIII. Use of Correlated or Independent t
A. Which test to use.
1. Correlated t is advantageous when there is a high correlation between the
paired scores.
2. Correlated t is advantageous when there is low variability in difference
scores and high variability in raw scores.
3. Independent t is more efficient in terms of degrees of freedom per
measurement.
4. Some experiments do not allow the same subject to be used in both
conditions (e.g. comparing males vs. females), so the independent t test
must be used.
IX. Alternative Approach using Confidence Intervals
A. Null Hypothesis Approach. Evaluate the probability of getting the obtained
results, or results even more extreme, assuming chance alone is operating. If
the obtained probability ≤ α, reject H0.
B. Confidence Interval Approach. Uses confidence intervals to determine if it is
reasonable to reject H0 and at the same time gives an estimate of the size of
the real effect.
C. 95% Confidence Interval for μ1 – μ2. By estimating the 95% confidence interval
for μ1 – μ2, we can determine whether it is reasonable to reject H0 at α =
0.05 and, if so, the confidence interval can be used as an estimate of the size
of the real effect. We have 95% confidence that the interval contains μ1 – μ2,
the size of the real effect.
D. Equations for constructing the 95% Confidence Interval for μ1 – μ2.
μ_lower = (X̄1 − X̄2) − s_(X̄1−X̄2) t_0.025
μ_upper = (X̄1 − X̄2) + s_(X̄1−X̄2) t_0.025
where
s_(X̄1−X̄2) = √{[(SS1 + SS2)/(n1 + n2 − 2)](1/n1 + 1/n2)}
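Constructing the confidence interval for μ1 – μ2 can be sketched in Python. The critical value t_0.025 must still come from a t table (for 95% confidence; t_0.005 gives 99%), since the standard library does not compute t quantiles; the samples and the table value 2.306 for df = 8 are supplied by hand here:

```python
import math

def ci_mu1_minus_mu2(x1, x2, t_crit):
    """(mu_lower, mu_upper) = (X1_bar - X2_bar) -/+ s_diff * t_crit,
    where s_diff = sqrt([(SS1 + SS2)/(n1 + n2 - 2)] * (1/n1 + 1/n2))
    and t_crit is taken from a t table with df = n1 + n2 - 2."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)
    ss2 = sum((x - m2) ** 2 for x in x2)
    se = math.sqrt((ss1 + ss2) / (n1 + n2 - 2) * (1 / n1 + 1 / n2))
    diff = m1 - m2
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical samples; t_0.025 = 2.306 for df = 8 (from a t table)
lower, upper = ci_mu1_minus_mu2([10, 12, 9, 11, 13], [8, 7, 9, 6, 10],
                                t_crit=2.306)
```

Because this interval does not contain 0, H0 would be rejected at α = 0.05, and the interval itself estimates the size of the real effect.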
E. If the interval contains the value “0”. If the confidence interval for
μ1 – μ2 contains the value “0”, we cannot reject H0 at α = 0.05.
F. 99% confidence interval for μ1 – μ2. By estimating the 99% confidence interval
for μ1 – μ2, we can determine whether it is reasonable to reject H0 at α =
0.01 and, if so, the confidence interval can be used as an estimate of the size
of the real effect. In this case, we have 99% confidence that the interval
contains μ1 – μ2, the size of the real effect.
G. Equations for constructing the 99% Confidence Interval for μ1 – μ2.
μ_lower = (X̄1 − X̄2) − s_(X̄1−X̄2) t_0.005
μ_upper = (X̄1 − X̄2) + s_(X̄1−X̄2) t_0.005
H. If the interval contains the value “0”. If the confidence interval for
μ1 – μ2 contains the value “0”, we cannot reject H0 at α = 0.01.