Hypothesis Tests II

The normal distribution. Normally distributed data. Normally distributed means.

First, let's consider a simpler problem:

  H0: μ = μ0

We are testing the equality of the mean of a population (Y) to a particular value. Now, if H0 is assumed, what do we know? We have some idea about the distribution of the sample mean Ȳ. We need a measuring device that is sensitive to variations in H0, or in other words to deviations from the statement therein.

If z = (z1, z2) has a bivariate normal distribution with means μ1, μ2, standard deviations σ1, σ2 and correlation ρ, its pdf is

  f(z1, z2) = 1 / (2π σ1 σ2 √(1 − ρ²)) ·
              exp( −[ (z1 − μ1)²/σ1² − 2ρ(z1 − μ1)(z2 − μ2)/(σ1 σ2) + (z2 − μ2)²/σ2² ] / (2(1 − ρ²)) )

If z = (z1, z2) has a bivariate normal distribution, then the pdf of the points (z1 − z̄, z2 − z̄) is a one-dimensional normal distribution. This distribution sits on the line z1 + z2 = 0, because every point of the form (z1 − z̄, z2 − z̄) lies on that line. This is how one dimension (degree of freedom) is lost!

If z = (z1, z2, z3) has a multivariate normal distribution, then the pdf of the points (z1 − z̄, z2 − z̄, z3 − z̄) is a two-dimensional normal distribution. This distribution sits on the plane z1 + z2 + z3 = 0, because every point of the form (z1 − z̄, z2 − z̄, z3 − z̄) lies on that plane. (Hard to draw.)

That means that even though the points (z1 − z̄, z2 − z̄) lie in a two-dimensional space, the probability distribution defined over them is essentially one-dimensional.

  Σ zi² ~ χ²(n),  but  Σ (zi − z̄)² ~ χ²(n − 1)

The situation resembles the following. Assume we have two standard normally distributed random variables, Z1 and Z2. The distribution of the sum of their squares, i.e., Z1² + Z2², does not necessarily have a chi-squared distribution with two degrees of freedom. Why? Consider the case where Z2 = −Z1. Then Z1² + Z2² = 2Z1², which (up to the scale factor 2) has a chi-square distribution with one degree of freedom. Hence, unless Z1 and Z2 are independent, Z1² + Z2² need not have a chi-square distribution with two degrees of freedom.
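The degree-of-freedom loss above can be checked numerically. This is a minimal Monte Carlo sketch (not part of the slides; the sample size n = 5 and the repetition count are arbitrary illustrative choices), using only the Python standard library:

```python
import random

# Monte Carlo sketch of the degree-of-freedom loss described above.
# For n i.i.d. standard normals, sum(z_i^2) is chi-square with n df (mean n),
# while sum((z_i - zbar)^2) is chi-square with n - 1 df (mean n - 1).
random.seed(42)
n, reps = 5, 20000
total_raw = 0.0       # accumulates sums of squares about 0
total_centered = 0.0  # accumulates sums of squares about the sample mean
for _ in range(reps):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    zbar = sum(z) / n
    total_raw += sum(zi * zi for zi in z)
    total_centered += sum((zi - zbar) ** 2 for zi in z)

print(round(total_raw / reps, 2))       # close to n = 5
print(round(total_centered / reps, 2))  # close to n - 1 = 4
```

The first average estimates the mean of a χ²(n) variable, the second the mean of a χ²(n − 1) variable; centering on the sample mean costs exactly one degree of freedom.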
What is the χ² distribution?
The t-distribution. (That is why we divide by n − 1 in calculating the sample s.d.)
One-sample t-test.
Two-sample tests. (B and A are types of seeds.)
Numerical Example (wheat again).

Summary: We have so far seen what a good test statistic (null distribution) looks like. The distribution that we have selected is a textbook distribution. Could we pick others?

Choosing a Test Statistic

The t statistic. The Kolmogorov-Smirnov statistic. Comparing the test statistics. Sensitivity to specific alternatives. Discussion.

Or…
• We need to add in additional assumptions, such as equality of the standard deviations of the samples.

Two-sample tests. (B and A are types of seeds.)

Contingency Tables (Cross-Tabs)

We use cross-tabulation when:
• We want to look at relationships among two or three variables.
• We want a descriptive statistical measure to tell us whether differences among groups are large enough to indicate some sort of relationship among variables.

Cross-tabs are not sufficient to:
• Tell us the strength or actual size of the relationships among two or three variables.
• Test a hypothesis about the relationship between two or three variables.
• Tell us the direction of the relationship among two or more variables.
• Look at relationships between one nominal or ordinal variable and one ratio or interval variable, unless the range of possible values for the ratio or interval variable is small. (What do you think a table with a large number of ratio values would look like?)

Because we use tables in these ways, we can set up some decision rules about how to use tables:
• Independent variables should be column variables.
• If you are not looking at independent and dependent variable relationships, use the variable that can logically be said to influence the other as your column variable.
• Using this rule, always calculate column percentages rather than row percentages.
• Use the column percentages to interpret your results.
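The column-percentage rule can be sketched in a few lines of plain Python. The counts below are illustrative only; each cell is divided by its column total, so percentages are comparable across columns:

```python
# Sketch: column percentages for a 2x2 cross-tab (hypothetical counts).
# Rows hold the row-variable categories, columns the column (independent) variable.
table = [[37, 41],   # row 1 counts: column A, column B
         [51, 32]]   # row 2 counts: column A, column B

col_totals = [sum(col) for col in zip(*table)]  # [88, 73]

# Divide each cell by its column total to get column percentages.
col_pcts = [[100.0 * cell / col_totals[j] for j, cell in enumerate(row)]
            for row in table]

for row in col_pcts:
    print([round(p, 1) for p in row])
```

Each column of percentages sums to 100, which is what makes across-column comparison of the rows meaningful.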
For example:
• If we were looking at the relationship between gender and income, gender would be the column variable and income would be the row variable. Logically, gender can determine income; income does not determine your gender.
• If we were looking at the relationship between ethnicity and the location of a person's home, ethnicity would be the column variable.
• However, if we were looking at the relationship between gender and ethnicity, one does not influence the other. Either variable could be the column variable.

Contingency Tables (Cross-Tabs)

Example layout: a 2×2 cross-tab of Gender (Male/Female) against Marital Status (Married/Single). How do we measure the relationship? What do we EXPECT if there is no relationship?

Numerical example: is a drug equally effective for female and male patients?

Observed counts:

              Female   Male   Total
  Cured         37      41      78
  Not cured     51      32      83
  Total         88      73     161

Expected counts if there is no relationship (row total × column total / N):

              Female   Male
  Cured       42.6     35.4
  Not cured   45.4     37.6

  χ² = (37 − 42.6)²/42.6 + (41 − 35.4)²/35.4 + (51 − 45.4)²/45.4 + (32 − 37.6)²/37.6 ≈ 3.18

RESULT
• This test statistic has a χ² distribution with (2 − 1)(2 − 1) = 1 degree of freedom.
• The critical value at α = .01 of the χ² distribution with 1 degree of freedom is 6.63.
• Thus we do not reject the null hypothesis that the two proportions are equal, i.e., that the drug is equally effective for female and male patients.

INTRODUCTION TO ANOVA

• The easiest way to understand ANOVA is to generate a tiny data set using the GLM:

  Y = μ + α + e

As a first step, set the mean μ to 5 for a data set with 10 cases. In the table below, all 10 cases have a score of 5 at this point.

        a1                  a2
  CASE   SCORE        CASE   SCORE
   1       5           6       5
   2       5           7       5
   3       5           8       5
   4       5           9       5
   5       5          10       5

• The next step is to add the effects of the IV. Suppose that the effect of the treatment at a1 is to raise scores by 2 units and the effect of the treatment at a2 is to lower scores by 2 units.
Adding these treatment effects gives:

        a1                      a2
  CASE   SCORE            CASE   SCORE
   1     5 + 2 = 7         6     5 − 2 = 3
   2     5 + 2 = 7         7     5 − 2 = 3
   3     5 + 2 = 7         8     5 − 2 = 3
   4     5 + 2 = 7         9     5 − 2 = 3
   5     5 + 2 = 7        10     5 − 2 = 3

  ΣY_a1 = 35,  ΣY_a2 = 15,  ΣY²_a1 = 245,  ΣY²_a2 = 45,  Ȳ_a1 = 7,  Ȳ_a2 = 3

• The changes produced by treatment are the deviations of the scores from μ. Over all of these cases, the sum of the squared deviations is

  5(2)² + 5(−2)² = 40

This is the sum of the (squared) effects of treatment if all cases are influenced identically by the various levels of A and there is no error.

• The third step is to complete the GLM with the addition of error:

        a1                          a2
  CASE   SCORE                CASE   SCORE
   1     5 + 2 + 2 = 9         6     5 − 2 + 0 = 3
   2     5 + 2 + 0 = 7         7     5 − 2 − 2 = 1
   3     5 + 2 − 1 = 6         8     5 − 2 + 0 = 3
   4     5 + 2 + 0 = 7         9     5 − 2 + 1 = 4
   5     5 + 2 − 1 = 6        10     5 − 2 + 1 = 4

  ΣY_a1 = 35,  ΣY_a2 = 15,  ΣY²_a1 = 251,  ΣY²_a2 = 51,  Ȳ_a1 = 7,  Ȳ_a2 = 3
  ΣY = 50,  ΣY² = 302,  Ȳ = 5

Then the variance for the a1 group is

  s²_{n−1} = (ΣY² − (ΣY)²/n) / (n − 1) = (251 − 35²/5) / 4 = 1.5

and the variance for the a2 group is

  s²_{n−1} = (51 − 15²/5) / 4 = 1.5

The average of these variances is also 1.5. Check that these numbers represent error variance; that is, they represent random variability in scores within each group, where all cases are treated the same and are therefore uncontaminated by effects of the IV.

The variance for this group of 10 numbers, ignoring group membership, is

  s²_{n−1} = (302 − 50²/10) / 9 = 5.78

Standard Setup for ANOVA

   a1    a2
    9     3
    7     1
    6     3
    7     4
    6     4
  Sums:  ΣY_a1 = A1 = 35,  ΣY_a2 = A2 = 15,  ΣY = T = 50
         ΣY²_a1 = 251,  ΣY²_a2 = 51,  ΣY² = 302
         Ȳ_a1 = 7,  Ȳ_a2 = 3,  Ȳ = GM = 5

The difference between each score and the grand mean (Y_ij − GM) is broken into two components:
1. The difference between the score and its own group mean (Y_ij − Ȳ_j) — the sum of squares for error.
2. The difference between that group mean and the grand mean (Ȳ_j − GM) — the sum of squares for treatment: the effect of the IV!

  Y_ij − GM = (Y_ij − Ȳ_j) + (Ȳ_j − GM)

Each term is then squared and summed separately to produce the sum of squares for error and the sum of squares for treatment.
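The variance arithmetic for the toy data can be checked with a short Python sketch (stdlib only), using the same computational formula, SS = ΣY² − (ΣY)²/n, divided by n − 1:

```python
# Reproducing the slide's variance arithmetic for the toy data (stdlib only).
a1 = [9, 7, 6, 7, 6]   # treatment a1 scores (mean 7)
a2 = [3, 1, 3, 4, 4]   # treatment a2 scores (mean 3)

def sample_var(scores):
    """Sample variance via SS = sum(Y^2) - (sum(Y))^2 / n, divided by n - 1."""
    n = len(scores)
    ss = sum(y * y for y in scores) - sum(scores) ** 2 / n
    return ss / (n - 1)

print(sample_var(a1))                 # 1.5
print(sample_var(a2))                 # 1.5
print(round(sample_var(a1 + a2), 2))  # 5.78, ignoring group membership
```

The two within-group variances are identical by construction of the example, and pooling all 10 scores inflates the variance because the treatment effect is mixed in with the error.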
The basic partition holds because the cross-product terms vanish:

  Σi Σj (Y_ij − GM)² = Σi Σj (Y_ij − Ȳ_j)² + n Σj (Ȳ_j − GM)²

This is the deviation form of basic ANOVA. Each of these terms is a sum of squares (SS).

  Σ (Y_ij − GM)² = SS_total = SS_T
    The average of this sum is the total variance in the set of scores, ignoring group membership.

  Σ (Y_ij − Ȳ_j)² = SS_error = SS_wg
    This term is called the sum of squares within groups.

  n Σ (Ȳ_j − GM)² = SS_treatment = SS_bg
    This term is called the sum of squares between groups.

This partition is frequently symbolized as

  SS_T = SS_bg + SS_wg

At this point it is important to realize that the total variance in the set of scores is partitioned into two sources. One is the effect of the IV, and the other is all remaining effects (which we call error). Because the effects of the IV are assessed by changes in the central tendencies of the groups, the inferences that come from ANOVA are about differences in central tendency.

However, sums of squares are not yet variances. To become variances, they must be "averaged". The denominators for averaging SS must be degrees of freedom, so that the statistics will have a proper χ² distribution (remember the previous slides). So far we know that the degrees of freedom of SS_T must be N − 1. With k groups of n cases each:

  df_total = N − 1 = kn − 1

Furthermore,

  df_bg = k − 1

Also,

  df_wg = k(n − 1) = kn − k = N − k

Thus we have (as expected)

  df_total = df_bg + df_wg

Variance is an "averaged" sum of squares (for empirical data, of course). Then, to obtain the mean squares (MS),

  MS_bg = SS_bg / (k − 1)
  MS_wg = SS_wg / (N − k)

The F distribution is the sampling distribution of the ratio of two independent χ² variables, each divided by its degrees of freedom.

  F = MS_bg / MS_wg

This statistic is used to test the null hypothesis that μ1 = μ2 = ⋯ = μk.

Source table for basic ANOVA:

  Source    SS    df    MS     F
  Between   40     1    40.0   26.67
  Within    12     8     1.5
  Total     52     9
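The whole source table can be reproduced with a short stdlib-only sketch of one-way ANOVA for these two groups:

```python
# One-way ANOVA sketch for the slide's two groups, reproducing the source table.
groups = [[9, 7, 6, 7, 6], [3, 1, 3, 4, 4]]
all_scores = [y for g in groups for y in g]
N, k = len(all_scores), len(groups)
gm = sum(all_scores) / N                                            # grand mean GM = 5

ss_bg = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)   # between groups
ss_wg = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)  # within groups
ss_total = sum((y - gm) ** 2 for y in all_scores)                   # total

ms_bg = ss_bg / (k - 1)   # df between = k - 1 = 1
ms_wg = ss_wg / (N - k)   # df within = N - k = 8
f = ms_bg / ms_wg

print(ss_bg, ss_wg, ss_total)   # 40, 12, 52 (as floats)
print(round(f, 2))              # 26.67
```

Note that ss_bg + ss_wg equals ss_total, which is the partition SS_T = SS_bg + SS_wg shown above, and the large F reflects how much bigger the treatment mean square is than the error mean square.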