Small-N designs & Basic Statistical Concepts Psych 231: Research Methods in Psychology What are they? Historically, these were the typical kind of design used until 1920’s when there was a shift to using larger sample sizes Even today, in some sub-areas, using small N designs is common place • (e.g., clinical settings, psychophysics, expertise, etc.) Small N designs One or a few participants Data are typically not analyzed statistically; rather rely on visual interpretation of the data Observations begin in the absence of treatment (BASELINE) Then treatment is implemented and changes in frequency, magnitude, or intensity of behavior are recorded These typically are experiments (involve manipulation by the researcher) Small N designs Baseline experiments – the basic idea is to show: 1. when the IV occurs, you get the effect 2. when the IV doesn’t occur, you don’t get the effect (reversibility) Before introducing treatment (IV), baseline needs to be stable Measure level and trend Small N designs Level – how frequent (how intense) is behavior? Are all the data points high or low? Trend – does behavior seem to increase (or decrease) Are data points “flat” or on a slope? Small N designs ABA design (baseline, treatment, baseline) A B A Steady state (baseline) | Transition steady state | Reversibility – The reversibility is necessary, otherwise something else may have caused the effect other than the IV (e.g., history, maturation, etc.) ABA design Advantages Focus on individual performance, not fooled by group averaging effects Focus is on big effects (small effects typically can’t be seen without using large groups) Avoid some ethical problems – e.g., with nontreatments Allows to look at unusual (and rare) types of subjects (e.g., case studies of amnesics, experts vs. novices) Often used to supplement large N studies, with more observations on fewer subjects Small N designs Disadvantages Effects may be small relative to variability of situation so NEED more observation Some effects are by definition between subjects • Treatment leads to a lasting change, so you don’t get reversals Difficult to determine how generalizable the effects are Small N designs Some researchers have argued that Small N designs are the best way to go. The goal of psychology is to describe behavior of an individual Looking at data collapsed over groups “looks” in the wrong place Need to look at the data at the level of the individual Small N designs Mistrust of statistics? It is all in how you use them They are a critical tool in research Statistics Why do we use them? Descriptive statistics • Used to describe, simplify, & organize data sets • Describing distributions of scores Inferential statistics • Used to test claims about the population, based on data gathered from samples • Takes sampling error into account, are the results above and beyond what you’d expect by random chance Statistics Recall that a variable is a characteristic that can take different values. The distribution of a variable is a summary of all the different values of a variable Both type (each value) and token (each instance) How much do you like psy231? 
Many important distributions
• Population: all the scores of interest
• Sample: all of the scores you actually observed (your data); used to estimate population characteristics
• Distribution of sample means: used to estimate sampling error (discussed later)

How do we describe these distributions? Use descriptive statistics and focus on three properties.

Properties of a distribution
• Shape: symmetric vs. asymmetric (skew); unimodal vs. multimodal
• Center: where most of the data in the distribution are (mean, median, mode)
• Spread (variability): how similar or dissimilar the scores in the distribution are (standard deviation/variance, range)

Describing a distribution visually
• A picture of the distribution is usually helpful; it gives a good sense of the properties of the distribution
• Many different ways to display a distribution:
  • Graphs for continuous variables: histogram, line graph (frequency polygon); e.g., the distribution of scores on an exam
  • Graphs for categorical variables: pie chart, bar chart; e.g., counts per category such as Smith, Doe, Cutting, Missing
  • Table: frequency distribution table
• Numerical descriptions of distributions (center and spread, below)
• Caution: be careful using a line graph for categorical variables; the line implies that there are responses between Smith and Doe, but there are not

Frequency distribution table (example output for a single variable with values 1 to 9):
Value   Frequency   Percent   Cumulative percent
1       2           7.7       7.7
2       3           11.5      19.2
3       3           11.5      30.8
4       5           19.2      50.0
5       4           15.4      65.4
6       2           7.7       73.1
7       4           15.4      88.5
8       2           7.7       96.2
9       1           3.8       100.0
Total   26          100.0
The table lists the values (types), their counts (tokens), and the percentages.

Properties of distributions: Shape
• Symmetric: the two sides of the distribution line up
• Asymmetric (skewed): the two sides do not line up
  • Negative skew: the tail points toward the low end
  • Positive skew: the tail points toward the high end
• Unimodal: one mode; Multimodal: more than one mode (e.g., bimodal, with a major mode and a minor mode)

Properties of distributions: Center
There are three main measures of center:
• Mean (M): the arithmetic average; add up all of the scores and divide by the total number; the most used measure of center
• Median (Mdn): the middle score in terms of location; the score that cuts off the top 50% of the distribution from the bottom 50%; good for skewed distributions (e.g., net worth)
• Mode: the most frequent score; good for nominal scales (e.g., eye color); a must for multimodal distributions

Computing the mean
• The formula for the population mean (a parameter): μ = ΣX / N
• The formula for the sample mean (a statistic): M = ΣX / n
• In both cases: add up all of the X's and divide by the total number of scores (N in the population, n in the sample)
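A minimal Python sketch of the sample-mean formula, applied to the seven rating tokens from the earlier example:

```python
# The seven rating tokens from the distribution example above
scores = [1, 1, 2, 3, 4, 5, 5]

# Sample mean: M = (sum of the X's) / n
n = len(scores)
M = sum(scores) / n

print(f"n = {n}, M = {M:.2f}")  # n = 7, M = 3.00
```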
Spread (variability): how similar are the scores?
• Range: the maximum value minus the minimum value
  • Only takes two scores from the distribution into account
  • Influenced by extreme values (outliers)
• Standard deviation (SD): (essentially) the average amount that the scores in the distribution deviate from the mean
  • Takes all of the scores into account
  • Also influenced by extreme values (but not as much as the range)
• Variance: the standard deviation squared

Variability
• Low variability: the scores are fairly similar (clustered tightly around the mean)
• High variability: the scores are fairly dissimilar (spread widely around the mean)

Standard deviation
• The standard deviation is the most popular and most important measure of variability
• It measures how far off all of the individuals in the distribution are from a standard, where that standard is the mean of the distribution
• Essentially, the average of the deviations

An example: computing the mean
Our population: 2, 4, 6, 8
• ΣX = 20, N = 4, so μ = ΣX / N = 20 / 4 = 5.0

An example: computing the standard deviation (population)
Step 1: To get a measure of deviation, subtract the population mean from every individual in the distribution:
  2 - 5 = -3
  4 - 5 = -1
  6 - 5 = +1
  8 - 5 = +3
Notice that if you add up all of the deviations, they must equal 0.
Step 2: So we have to get rid of the negative signs. We do this by squaring the deviations; their sum is the sum of squared deviations (SS). (We will take a square root later, in Step 4, to get back to the original units.)
  SS = Σ(X - μ)² = (-3)² + (-1)² + (+1)² + (+3)² = 9 + 1 + 1 + 9 = 20
Step 3: Compute the variance, which is simply the average of the squared deviations. To get this average, divide the SS by the number of individuals in the population:
  variance = σ² = SS / N = 20 / 4 = 5.0
Step 4: Compute the standard deviation by taking the square root of the population variance:
  σ = √(σ²) = √(SS / N) = √5 ≈ 2.24
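A minimal Python sketch of these four steps for the example population 2, 4, 6, 8, spelled out without a statistics library:

```python
import math

# The example population from the slides
population = [2, 4, 6, 8]
N = len(population)

# Population mean
mu = sum(population) / N                      # 5.0

# Step 1: deviation scores (they sum to 0)
deviations = [x - mu for x in population]     # [-3.0, -1.0, 1.0, 3.0]

# Step 2: sum of squared deviations (SS)
SS = sum(d ** 2 for d in deviations)          # 20.0

# Step 3: population variance = SS / N
variance = SS / N                             # 5.0

# Step 4: standard deviation = square root of the variance
sigma = math.sqrt(variance)                   # ~2.236

print(mu, SS, variance, round(sigma, 3))
```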
To review: computing the standard deviation (population)
• Step 1: Compute the deviation scores
• Step 2: Compute the SS (sum of squared deviations)
• Step 3: Determine the variance: take the average of the squared deviations (divide the SS by N)
• Step 4: Determine the standard deviation: take the square root of the variance

Computing the standard deviation (sample)
• Step 1: Compute the deviation scores
• Step 2: Compute the SS
• Step 3: Determine the variance: divide the SS by n - 1 (not n)
• Step 4: Determine the standard deviation: take the square root of the variance
• The n - 1 is used because samples are biased to be less variable than the population. This "correction factor" increases the sample's SD, making it a better estimate of the population's SD.

Relationships between variables
• Example: Suppose you notice that the more you study for an exam, the better your score typically is. This suggests that there is a relationship between study time and test performance. We call this relationship a correlation.
• Properties of a correlation: form (linear or non-linear), direction (positive or negative), and strength (none, weak, strong, perfect)
• To examine this relationship you should: make a scatterplot and compute the correlation coefficient

Scatterplot
• Plots one variable against the other
• Useful for "seeing" the relationship: its form, direction, and strength
• Each point corresponds to a different individual
• Imagine a line drawn through the data points
(Figure: scatterplot of hours of study (X) against exam performance (Y).)

Correlation coefficient
• A numerical description of the relationship between two variables
• For the relationship between two continuous variables we use Pearson's r
• It basically tells us how much our two variables vary together: as X goes up, what does Y typically do?

Form
• Linear vs. non-linear

Direction
• Positive: as X goes up, Y goes up; X and Y vary in the same direction; positive Pearson's r
• Negative: as X goes up, Y goes down; X and Y vary in opposite directions; negative Pearson's r

Strength
• Zero means "no relationship"; the farther r is from zero, the stronger the relationship (less spread around the line)
• r = -1.0: perfect negative correlation; r = 0.0: no relationship; r = +1.0: perfect positive correlation
• Which relationship is stronger, Rel A with r = -0.8 or Rel B with r = +0.5? Rel A: -0.8 is farther from zero than +0.5

Regression
• Compute the equation for the line that best fits the data points: Y = (X)(slope) + (intercept)
• The slope is the change in Y divided by the change in X; in the example the slope is 0.5 and the intercept is 2.0
• You can make specific predictions about Y based on X. For X = 5: Y = (5)(0.5) + 2.0 = 2.5 + 2.0 = 4.5
• You also need a measure of error: Y = (X)(0.5) + 2.0 + error. Two data sets can share the same best-fitting line but have different relationships (a difference in strength)

Cautions with correlation & regression
• Don't make causal claims
• Don't extrapolate beyond the range of the data
• Extreme scores (outliers) can strongly influence the calculated relationship
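Before moving on to inferential statistics, here is a hedged Python sketch of both steps (Pearson's r and the best-fitting line) using SciPy; the hours/scores numbers are illustrative assumptions, not the lecture's figure:

```python
from scipy import stats

# Hypothetical data: hours studied (X) and exam score (Y) for six students
hours  = [1, 2, 3, 4, 5, 6]
scores = [3, 2, 4, 4, 5, 6]

# Pearson's r: direction and strength of the linear relationship
r, p = stats.pearsonr(hours, scores)
print(f"Pearson's r = {r:.2f} (p = {p:.3f})")

# Least-squares regression line: Y = slope * X + intercept
fit = stats.linregress(hours, scores)
print(f"Y = {fit.slope:.2f} * X + {fit.intercept:.2f}")

# Predict the exam score for someone who studies 5 hours (within the data range)
x_new = 5
print(f"Predicted Y at X = {x_new}: {fit.slope * x_new + fit.intercept:.2f}")
```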
Inferential statistics
• Purpose: to make claims about populations based on data collected from samples
• What's the big deal?

Example experiment:
• Group A gets a treatment designed to improve memory
• Group B gets no treatment (control)
• After the treatment period, both groups are tested for memory
• Results: Group A's average memory score is 80%; Group B's is 76%
• Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?

Testing hypotheses
• Step 1: State your hypotheses
• Step 2: Set your decision criteria
• Step 3: Collect your data from your sample(s)
• Step 4: Compute your test statistics
• Step 5: Make a decision about your null hypothesis ("Reject H0" or "Fail to reject H0")

Step 1: State your hypotheses
• Null hypothesis (H0): "There are no differences (effects)." This is the hypothesis that the inferential test actually tests.
• Alternative hypothesis(es) (HA): generally, "not all groups are equal"
• You aren't out to prove the alternative hypothesis (although it feels like that's what you want to do). If you reject the null hypothesis, then you're left with support for the alternative(s) (NOT proof!)
• In our memory example experiment:
  • Null H0: mean of Group A = mean of Group B
  • Alternative HA: mean of Group A ≠ mean of Group B (or, more precisely, Group A > Group B)
• Our theory is that the treatment should improve memory. That is the alternative hypothesis, and it is NOT the one we test with inferential statistics. Instead, we test H0.

Step 2: Set your decision criteria
• Your alpha level will be your guide for when to "reject the null hypothesis" or "fail to reject the null hypothesis"
• Either conclusion could be correct or incorrect; there are two different ways to go wrong:
  • Type I error: saying that there is a difference when there really isn't one (the probability of making this error is the alpha level)
  • Type II error: saying that there is not a difference when there really is one

Error types
                            Real world ("truth")
Experimenter's conclusion   H0 is correct        H0 is wrong
  Reject H0                 Type I error         (correct decision)
  Fail to reject H0         (correct decision)   Type II error

Error types: courtroom analogy
                            Real world ("truth")
Jury's decision             Defendant is innocent   Defendant is guilty
  Find guilty               Type I error            (correct decision)
  Find not guilty           (correct decision)      Type II error

• Type I error: concluding that there is an effect (a difference between groups) when there really isn't one. Alpha is sometimes called the "significance level." We try to minimize it (keep it low) by picking a low alpha level; in psychology, 0.05 and 0.01 are most common.
• Type II error: concluding that there isn't an effect when there really is one. Related to the statistical power of a test: how likely you are to detect a difference if it really is there.

Steps 3 and 4: Collect your data from your sample(s) and compute your test statistics
• Descriptive statistics (means, standard deviations, etc.)
• Inferential statistics (t-tests, ANOVAs, etc.)

Step 5: Make a decision about your null hypothesis
• Reject H0: "statistically significant differences"
• Fail to reject H0: "no statistically significant differences"
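As a concrete illustration of Steps 3 through 5, here is a hedged Python sketch using hypothetical memory scores for the two groups and an independent-samples t-test from SciPy (the data and the 0.05 alpha level are illustrative assumptions, not the lecture's actual numbers):

```python
from scipy import stats

# Step 3: hypothetical data (percent correct) for the two groups
group_a = [82, 78, 85, 80, 79, 83, 77, 81]   # treatment
group_b = [75, 79, 74, 78, 76, 73, 77, 76]   # control

# Step 4: compute descriptive and inferential statistics
mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
t_stat, p_value = stats.ttest_ind(group_a, group_b)  # between-groups t-test

# Step 5: compare the p-value to the alpha level and decide about H0
alpha = 0.05
decision = "Reject H0" if p_value < alpha else "Fail to reject H0"
print(f"M_A = {mean_a:.1f}, M_B = {mean_b:.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f} -> {decision}")
```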
Statistical significance
• "Statistically significant differences" are what you have when you reject your null hypothesis
• Essentially, this means that the observed difference is above what you'd expect by chance
• "Chance" is determined by estimating how much sampling error there is
• Factors affecting "chance": sample size and population variability

Sampling error
• Sampling error is the difference between the population mean and a sample mean
• Generally, as the sample size (n) increases, the sampling error decreases
• Typically, the narrower the population distribution (i.e., the smaller the population variability), the narrower the range of possible samples, and the smaller the "chance"
(Figures: a population distribution with samples of n = 1, 2, and 10, and populations with small vs. large variability.)

The distribution of sample means
• These two factors (sample size and population variability) combine to impact the distribution of sample means
• The distribution of sample means is the distribution of all possible sample means of a particular sample size that can be drawn from the population
• The average sampling error in this distribution is what we mean by "chance"
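A small simulation sketch of the sampling-error idea: draw many samples of different sizes from a hypothetical population and watch the average distance between the sample mean and the population mean shrink as n grows (the normal population and the sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: normally distributed scores with mean 75 and SD 10
pop_mean, pop_sd = 75.0, 10.0

for n in (1, 2, 10, 50):
    # Draw 10,000 samples of size n and compute each sample's mean
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
    # Average sampling error: mean absolute distance from the population mean
    avg_error = np.abs(sample_means - pop_mean).mean()
    print(f"n = {n:3d}: average sampling error = {avg_error:.2f}")
```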
From last time: in the memory example, Group A (treatment) averaged 80% and Group B (control) averaged 76%. H0 says there is no difference between the Group A and Group B populations. Is the 4% difference real, or just sampling error?

The "generic" statistical test
• Tests the question: are there differences between the groups due to a treatment?
• There are two possibilities in the "real world":
  • H0 is true (no treatment effect): there is really only one population, and the two sample means (76% and 80%) differ only because of sampling error
  • H0 is false (there is a treatment effect): there are really two populations; people who get the treatment change, and they form a new population (the "treatment population")

Why might the samples be different? (What is the source of the variability between groups?)
• ER: random sampling error
• ID: individual differences (if a between-subjects factor)
• TR: the effect of the treatment

The generic test statistic is a ratio of sources of variability:
  computed test statistic = observed difference / difference expected by chance = (TR + ID + ER) / (ID + ER)

The distribution of the test statistic
• To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR)
• What's large enough? The alpha level gives us the decision criterion: it determines where the boundary between the "reject H0" and "fail to reject H0" regions of the test statistic's distribution goes
• "One-tailed test": sometimes you know to expect a difference in a particular direction (e.g., "improve memory performance"), so the rejection region is placed on only one side of the distribution

Things that affect the computed test statistic
• Size of the treatment effect: the bigger the effect, the bigger the computed test statistic
• Difference expected by chance (sampling error): sample size and variability in the population
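A small sketch of these "things that affect the computed test statistic": simulate many two-group experiments with SciPy and NumPy and see how the average t statistic changes with effect size, population variability, and sample size (all numbers are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_abs_t(effect, sd, n, reps=2000):
    """Average |t| over many simulated two-group experiments."""
    ts = []
    for _ in range(reps):
        control = rng.normal(75, sd, n)             # control population mean = 75
        treatment = rng.normal(75 + effect, sd, n)  # treatment shifts the mean by `effect`
        ts.append(abs(stats.ttest_ind(treatment, control).statistic))
    return sum(ts) / reps

# Bigger treatment effect -> bigger test statistic
print("effect  4:", round(mean_abs_t(effect=4, sd=10, n=20), 2))
print("effect 12:", round(mean_abs_t(effect=12, sd=10, n=20), 2))

# More population variability -> smaller test statistic
print("sd 10:", round(mean_abs_t(effect=8, sd=10, n=20), 2))
print("sd 30:", round(mean_abs_t(effect=8, sd=30, n=20), 2))

# Larger samples (less sampling error) -> bigger test statistic
print("n  10:", round(mean_abs_t(effect=8, sd=10, n=10), 2))
print("n 100:", round(mean_abs_t(effect=8, sd=10, n=100), 2))
```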
Significance
• "A statistically significant difference" means the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05)
• Note that "statistical significance" is not the same thing as theoretical significance: it only means that there is a statistical difference, not that it is an important difference

Non-significance
• Failing to reject the null hypothesis: generally, we are not interested in "accepting the null hypothesis" (remember, we can't prove things, only disprove them)
• Usually you check to see whether you made a Type II error (failed to detect a difference that is really there):
  • Check the statistical power of your test
  • Maybe the sample size is too small
  • Maybe the effects you're looking for are really small
  • Check your controls; maybe there is too much variability

Some inferential statistical tests
• 1 factor with two groups: t-tests
  • Between groups: 2 independent samples
  • Within groups: repeated-measures samples (matched, related)
• 1 factor with more than two groups: analysis of variance (ANOVA), either between groups or repeated measures
• Multi-factorial designs: factorial ANOVA

T-test
• Design: 2 separate experimental conditions
• Degrees of freedom: based on the size of the sample and the kind of t-test
• Formula: t = observed difference / difference expected by chance = (M1 - M2) / (difference expected by chance, estimated from sampling error)
• The computation differs for between-subjects and within-subjects t-tests
• Reporting your results: the observed difference between conditions, the kind of t-test, the computed t statistic, the degrees of freedom, and the p-value of the test
  • "The mean of the treatment group was 12 points higher than the control group. An independent-samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."
  • "The mean score of the post-test was 12 points higher than the pre-test. A repeated-measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."

Analysis of variance (ANOVA)
• Designs with more than two groups (e.g., means XA, XB, XC): 1-factor ANOVA or factorial ANOVA, with either within- or between-groups factors
• The test statistic is an F-ratio
• Degrees of freedom: there are several to keep track of, and the number of them depends on the design
• With more than two groups we can't just compute a simple difference score, since there is more than one difference (A - B, B - C, and A - C)
• So we use variance instead of a simple difference; variance is essentially an average difference
  F-ratio = observed variance / variance expected by chance

1-factor ANOVA
• Null hypothesis H0: all the groups are equal (μA = μB = μC). The ANOVA tests this one.
• Alternative hypotheses HA: not all the groups are equal; for three groups this could be
  μA ≠ μB ≠ μC, or μA = μB ≠ μC, or μA ≠ μB = μC, or μA = μC ≠ μB
• Do further tests to pick between these alternatives: planned contrasts and post-hoc tests (e.g., Test 1: A vs. B, Test 2: A vs. C, Test 3: B vs. C) are used to rule out the different alternative hypotheses
• Reporting your results: the observed differences, the kind of test, the computed F-ratio, the degrees of freedom, the p-value, and any post-hoc or planned comparison results
  • "The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post-hoc tests revealed that the differences between Groups A and B and between Groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05, and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."
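A minimal sketch of a one-way ANOVA with follow-up comparisons, using `scipy.stats.f_oneway` on hypothetical data chosen to roughly match the group means in the reporting example above (the scores are illustrative assumptions, and in practice the follow-up pairwise tests would use a correction for multiple comparisons):

```python
from scipy import stats

# Hypothetical scores for three groups (means near 12, 25, and 27)
group_a = [10, 12, 11, 14, 13]
group_b = [24, 26, 25, 23, 27]
group_c = [28, 26, 27, 29, 25]

# One-way (1-factor) ANOVA: tests H0 that all group means are equal
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Follow-up pairwise comparisons to see where the differences are
pairs = {"A vs B": (group_a, group_b),
         "A vs C": (group_a, group_c),
         "B vs C": (group_b, group_c)}
for name, (x, y) in pairs.items():
    t, p = stats.ttest_ind(x, y)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```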
Factorial ANOVAs
• We covered much of this in our experimental design lecture
• More than one factor; factors may be within-subjects or between-subjects
• The overall design may be entirely within, entirely between, or mixed
• Many F-ratios may be computed:
  • An F-ratio is computed to test the main effect of each factor
  • An F-ratio is computed to test each of the potential interactions between the factors
• Reporting your results:
  • The observed differences: because there may be a lot of these, they may be presented in a table instead of directly in the text
  • The kind of design: e.g., "2 x 2 completely between-subjects factorial design"
  • The computed F-ratios: you may see separate paragraphs for each factor and for the interactions
  • The degrees of freedom: each F-ratio will have its own set of df's
  • The p-values: you may just say "all tests were conducted with an alpha level of 0.05"
  • Any post-hoc or planned comparison results: typically only the theoretically interesting comparisons are presented
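A sketch of a 2 x 2 completely between-subjects factorial ANOVA using pandas and statsmodels, with one F-ratio per main effect and one for the interaction (the data frame, the factor names `factor_a`/`factor_b`, and the scores are illustrative assumptions):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2 x 2 between-subjects design:
# factor_a (treatment/control) x factor_b (easy/hard), 4 participants per cell
data = pd.DataFrame({
    "factor_a": ["treatment"] * 8 + ["control"] * 8,
    "factor_b": (["easy"] * 4 + ["hard"] * 4) * 2,
    "score":    [80, 82, 79, 85, 70, 72, 68, 71,
                 76, 75, 78, 74, 69, 71, 70, 67],
})

# Fit a linear model with both main effects and their interaction,
# then compute an F-ratio for each term
model = ols("score ~ C(factor_a) * C(factor_b)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```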