CHAPTER 2
ESTIMATION AND INFERENCE

1. Sample Statistic
2. The Sampling Distribution of the Sample Mean
   2.1. The Expected Value of x̄
   2.2. Variance of the Mean
3. The Normal Sampling Distribution of x̄
   3.1. Central Limit Theorem
   3.2. The Margin of Sampling Error
4. Properties of Estimators
   4.1. Unbiased Estimators
        4.1.1. Proof that s² is an unbiased estimator of the population variance σ²
   4.2. Efficient Estimators
5. Confidence Interval (Interval Estimate) for the Population Mean
   5.1. The t Distribution
6. Test of Hypothesis for μ
   6.1. The probability value

1. Sample Statistic

In the previous discussions of random variables, both discrete and continuous, we have assumed that we have exact information about the probability distribution or probability density function of the random variable. In particular, we have assumed exact knowledge of the population parameters, namely the mean (expected value) μ and the variance σ². In practice, other than for random variables whose values can be determined through random experiments that can be repeated under identical conditions, we do not know the exact probability distribution or density function of a random variable. Therefore, we do not have exact knowledge of the population parameters.

The next best alternative to full knowledge of the population parameters is to estimate their values from data obtained through a random sample. The estimators of the two population parameters μ and σ² are, respectively, x̄ (the sample mean) and s² (the sample variance), where

   x̄ = Σx/n   and   s² = Σ(x − x̄)²/(n − 1)

Each of these estimators, x̄ and s², is a sample statistic. The specific values obtained from the sample data for x̄ and s² are called estimates.

2. The Sampling Distribution of the Sample Mean

To obtain an estimate of the population mean we take a single random sample of size n from the population. From the sample data we compute the sample mean as an estimate of μ.
The value of the sample mean x̄ depends upon the random sample selected. Since this value is not known until we take the random sample, x̄ is a random variable, and as a random variable it has a probability distribution. The probability distribution of x̄ is called the sampling distribution of x̄. To explain the sampling distribution, consider the following simple example.

Suppose we have a population consisting of N = 5 elements with the following associated values represented by x:

   Element   A    B    C    D    E
   x        15   12    9    6    3

First compute the mean and variance of the population:

   μ = Σx/N = 9   and   σ² = Σ(x − μ)²/N = 18

Next write each of the x values in the population on a ball and put the balls in a bowl. Now select a sample of size n = 3 without replacement and compute the sample mean. Even though we are selecting only one sample of size 3, this sample is one of the 10 possible samples that could be selected. These possible samples are listed below along with the mean corresponding to each sample.

   Sample elements   Sample data xᵢ   Sample mean x̄
   A B C             15, 12, 9        12
   A B D             15, 12, 6        11
   A B E             15, 12, 3        10
   A C D             15, 9, 6         10
   A C E             15, 9, 3          9
   A D E             15, 6, 3          8
   B C D             12, 9, 6          9
   B C E             12, 9, 3          8
   B D E             12, 6, 3          7
   C D E              9, 6, 3          6

The following table shows the relative frequency distribution of the sample means. The relative frequency is the probability associated with each value of the sample mean. This probability distribution is the sampling distribution of the random variable x̄.

   x̄       6     7     8     9    10    11    12
   p(x̄)   0.1   0.1   0.2   0.2   0.2   0.1   0.1

The sampling distribution of x̄ above implies that if this experiment were conducted many times, 20% of the samples would yield a sample mean of, say, 9, and 10% of the time we would get a sample mean of, say, 6. As the distribution shows, each sample mean value has its own probability of occurring.
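The sampling-distribution example above can be reproduced by brute-force enumeration. This is an illustrative sketch in Python (an assumption on my part; the chapter itself uses Excel), which rebuilds the p(x̄) table and also anticipates the results derived in the next sections: E(x̄) = μ = 9 and var(x̄) = 3.

```python
# Enumerate all C(5,3) = 10 equally likely samples from the N = 5 population
# and tabulate the sampling distribution of the sample mean.
from itertools import combinations

population = [15, 12, 9, 6, 3]            # values for elements A..E
N, n = len(population), 3
mu = sum(population) / N                  # population mean, 9
sigma2 = sum((x - mu) ** 2 for x in population) / N   # population variance, 18

means = [sum(s) / n for s in combinations(population, n)]

# Sampling distribution p(x̄) as relative frequencies
dist = {m: means.count(m) / len(means) for m in sorted(set(means))}

e_xbar = sum(m * p for m, p in dist.items())                    # E(x̄) = Σ x̄·p(x̄)
var_xbar = sum((m - e_xbar) ** 2 * p for m, p in dist.items())  # var(x̄)
fpc = (sigma2 / n) * (N - n) / (N - 1)    # formula with finite population correction

print(dist)                 # {6.0: 0.1, 7.0: 0.1, 8.0: 0.2, 9.0: 0.2, 10.0: 0.2, 11.0: 0.1, 12.0: 0.1}
print(e_xbar, var_xbar, fpc)
```

The direct enumeration and the finite-population formula agree, which is the point of the derivations that follow.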
2.1. The Expected Value of x̄, or the Mean of the Means

In the previous chapter, you learned that the expected value of a random variable is the mean value of that random variable, obtained as the weighted mean of the values of the random variable, where the weights are the probabilities associated with each value:

   E(x) = Σx·p(x)

Now the random variable of interest is the sample mean x̄. Thus, the expected value of x̄ is the "mean of x̄":

   E(x̄) = Σx̄·p(x̄)

The following table shows the calculation of the mean, or expected value, of x̄.

   x̄         6     7     8     9    10    11    12
   p(x̄)     0.1   0.1   0.2   0.2   0.2   0.1   0.1
   x̄·p(x̄)   0.6   0.7   1.6   1.8   2.0   1.1   1.2    Sum = 9.0

   E(x̄) = μ_x̄ = Σx̄·p(x̄) = 9

Note that μ_x̄ = μ. This is a very important result. This relationship between the mean of the sample means, E(x̄), and the population mean, μ, is the cornerstone of inferential statistics. The relationship can simply be stated as: "The mean of the means equals the mean." That is, the expected value of the sample means is equal to the mean of the parent population:

   E(x̄) = E(Σx/n) = E((x₁ + x₂ + ⋯ + x_n)/n)
   E(x̄) = (1/n)·E(x₁ + x₂ + ⋯ + x_n)
   E(x̄) = (1/n)·[E(x₁) + E(x₂) + ⋯ + E(x_n)]
   E(x̄) = (1/n)·(μ + μ + ⋯ + μ) = (1/n)·nμ = μ

Note that since the xᵢ are randomly selected from the same population, the expected value of each xᵢ is the population mean μ.

2.2. Variance of the Mean

Using the expression var(x̄) to denote the variance of x̄, compute var(x̄). Remember that the variance of a random variable is the expected value (the mean) of the squared deviations.
Thus,

   var(x̄) = E[(x̄ − μ)²] = Σ(x̄ − μ)²·p(x̄)

   x̄                6     7     8     9    10    11    12
   p(x̄)            0.1   0.1   0.2   0.2   0.2   0.1   0.1
   (x̄ − μ)²         9     4     1     0     1     4     9
   (x̄ − μ)²·p(x̄)   0.9   0.4   0.2   0.0   0.2   0.4   0.9    Sum = 3.0

   var(x̄) = Σ(x̄ − μ)²·p(x̄) = 3

The variance of x̄ is not equal to the variance of x (the variance of the parent population); that is, var(x̄) ≠ σ². However, there is a definite relationship between the two variances, as shown by the following formula:

   var(x̄) = (σ²/n)·(N − n)/(N − 1)

   var(x̄) = (18/3)·(5 − 3)/(5 − 1) = 3

The term (N − n)/(N − 1) in the formula is called the finite population correction factor. This term disappears for infinite populations (and is negligible when the sample is small relative to the population). Thus,

   var(x̄) = σ²/n

The proof of this relationship follows:

   var(x̄) = var(Σx/n) = var((x₁ + x₂ + ⋯ + x_n)/n)
   var(x̄) = (1/n²)·var(x₁ + x₂ + ⋯ + x_n)
   var(x̄) = (1/n²)·[var(x₁) + var(x₂) + ⋯ + var(x_n)]
   var(x̄) = (1/n²)·(σ² + σ² + ⋯ + σ²)
   var(x̄) = (1/n²)·nσ² = σ²/n

Note that since x₁, x₂, ⋯, x_n are randomly selected from the same population, var(x₁) = var(x₂) = ⋯ = var(x_n) = σ².

The square root of the variance of x̄ is the standard error of the mean, denoted by se(x̄):

   se(x̄) = σ/√n

3. The Normal Sampling Distribution of x̄

Note that the number of samples of size n quickly becomes astronomical. For example, the number of possible samples of size n = 40 selected from a population of size N = 1,000 is:

   555,974,423,571,664,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Since each sample yields its own x̄ value, the random variable x̄ takes on a vast number of values, making x̄ effectively a continuous random variable with a smooth probability density function. In fact, in most cases, the sampling distribution of x̄ is normal or approximately normal. If the parent population from which the sample is taken is normal, then the sampling distribution of x̄ is also normal.
[Figure: When the parent population distribution is normal with mean μ and standard deviation σ, the sampling distribution of x̄ is also normal, with mean μ and standard error σ/√n.]

If the parent population is not normal, the sampling distribution of x̄ approaches normal, per the central limit theorem, as we increase the sample size n. The minimum sample size needed for an approximately normal sampling distribution of x̄ is commonly taken to be n = 30.

[Figure: When the parent population distribution is NOT normal, the sampling distribution of x̄ is approximately normal with mean μ and standard error σ/√n, if n ≥ 30.]

3.1. Central Limit Theorem

In applied statistical analysis many of the random variables used can be characterized as the sum of a large number of independent random variables. For example, total daily sales in a store are the result of a number of sales to individual customers, each of which can be modeled as a random variable. Total investment in the United States in a month is the sum of individual investments by many independent firms. Thus, if x₁, x₂, ..., x_n represent the results of individual random events, the observed random variable x is the sum of these random variables:

   x = Σxᵢ = x₁ + x₂ + ⋯ + x_n

Using the properties of expected value shown in the previous chapter,

   E(x) = E(x₁ + x₂ + ⋯ + x_n) = nμ
   var(x) = var(x₁ + x₂ + ⋯ + x_n) = var(x₁) + var(x₂) + ⋯ + var(x_n) = nσ²

The CLT states that the resulting sum, x = Σxᵢ, is approximately normally distributed with mean nμ and standard deviation √(nσ²) = σ√n:

   x ~ N(nμ, σ√n)

Therefore,

   z = (x − nμ)/(σ√n) = (Σxᵢ − nμ)/(σ√n)

is a standard normal random variable.
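The central limit theorem described above can be seen in a small simulation. This is an illustrative sketch (Python, stdlib only; the simulation, the uniform population, and the specific n are my assumptions, not part of the text): means of samples drawn from a decidedly non-normal uniform population still center on μ with spread close to σ/√n.

```python
# Simulate many sample means from a uniform(0, 10) population, which has
# μ = 5 and σ = √(10²/12) ≈ 2.887, and compare against the CLT predictions.
import random
import statistics

random.seed(42)                           # fixed seed so the run is reproducible
mu = 5.0
sigma = (10 ** 2 / 12) ** 0.5             # ≈ 2.887
n = 36                                    # sample size (> 30)

sample_means = [statistics.mean(random.uniform(0, 10) for _ in range(n))
                for _ in range(20_000)]

print(statistics.mean(sample_means))      # close to μ = 5
print(statistics.stdev(sample_means))     # close to σ/√n ≈ 0.481
```

The simulated mean and standard deviation of the x̄ values match E(x̄) = μ and se(x̄) = σ/√n, even though no individual observation is normal.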
If we divide the numerator and the denominator on the right-hand side by n, we have:

   z = (x̄ − μ)/(σ/√n)

This implies that

   x̄ ~ N(μ, σ/√n)

Example 1

The speed on a certain stretch of an interstate highway is normally distributed with a mean of μ = 80 mph and a standard deviation of σ = 5 mph.

a) If a vehicle is randomly clocked, what is the probability that its speed is below 82 mph?

   P(x < 82) = ________
   z = (x − μ)/σ = (82 − 80)/5 = 0.40
   P(z < 0.40) = 0.6554

Alternatively, using the Excel NORM.DIST(...) command:

   =NORM.DIST(82,80,5,1) = 0.6554

b) If a random sample of n = 16 vehicles is clocked, what is the probability that the average sample speed is below 82 mph?

   P(x̄ < 82) = ________

Now you have to use the sampling distribution of x̄ to solve this problem.

   x̄ ~ N(μ, σ/√n)
   se(x̄) = 5/√16 = 1.25
   z = (x̄ − μ)/se(x̄) = (82 − 80)/1.25 = 1.60
   P(z < 1.60) = 0.9452

   =NORM.DIST(82,80,1.25,1) = 0.9452

3.2. The Margin of Sampling Error

Note that, like any random variable, the random variable x̄ consists of a fixed component and a random component. The fixed component is the mean or expected value of x̄, which is the population mean μ, and the random component is denoted by ε:

   x̄ = μ + ε

Using z = (x̄ − μ)/se(x̄) and solving for x̄, we have

   x̄ = μ + z·se(x̄)

The random component of x̄, then, can be presented as

   ε = z·se(x̄)

The random component, called the margin of statistical (sampling) error, is a function of z. Using this relationship between ε and z we can determine intervals within which the sample mean will fall with an associated probability. For example, suppose we want to find the lower and upper ends of a middle interval (symmetric about the mean) that contains 95% of all the possible sample means. Of the remaining 5% of the sample means, 2.5% would exceed the upper boundary value and the other 2.5% would fall below the lower boundary.
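Example 1 above can be reproduced without Excel. This is an illustrative sketch using Python's stdlib normal distribution, `statistics.NormalDist`, which plays the role of NORM.DIST here:

```python
# Example 1 recomputed: part (a) uses x ~ N(80, 5); part (b) uses the
# sampling distribution x̄ ~ N(80, 5/√16) = N(80, 1.25).
from statistics import NormalDist

mu, sigma, n = 80, 5, 16

p_single = NormalDist(mu, sigma).cdf(82)     # P(x < 82)  ≈ 0.6554

se = sigma / n ** 0.5                        # standard error, 1.25
p_mean = NormalDist(mu, se).cdf(82)          # P(x̄ < 82) ≈ 0.9452

print(round(p_single, 4), round(p_mean, 4))
```

Note how shrinking the spread from σ = 5 to se(x̄) = 1.25 pushes the same cutoff of 82 much further into the tail, which is why the probability jumps from 0.6554 to 0.9452.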
This "5%" represents the probability that the π₯Μ falls outside the "95% interval" and it is called the error probability, represented by the symbol Ξ±. Generally, given an Ξ± value, then 1 β πΌ represent the probability that the interval would contain the sample mean. Thus, π₯Μ πΏ and π₯Μ π are the boundaries of the interval that contain 1 β Ξ± proportion of all possible sample means. Of the remaining π₯Μ values, πΌβ2 each fall on the right tail (to the right of π₯Μ π ) and the left tail (to the left of π₯Μ πΏ ) of the distribution. 2-Estimation and Inference 8 of 21 1βΞ± πΌβ2 π₯Μ πΏ = π β π§πΌβ2 se(π₯Μ ) πΌβ2 ΞΌ π₯Μ π = π + π§πΌβ2 se(π₯Μ ) xΜ The diagram is a graphic representation of the following probability statement: P(π₯Μ πΏ < π₯Μ < π₯Μ π ) = 1 β πΌ P(µ β Ο΅ < π₯Μ < µ + Ο΅) = P(µ β π§Ξ±β2 se(π₯Μ ) < π₯Μ < µ + π§Ξ±β2 se(π₯Μ )) = 1 β πΌ The π§ score that bounds a tail area of Ξ±β2 under the standard normal curve is π§Ξ±β2 . Thus, the margin of error (MOE) formula is generally written as: Ο΅ = π§Ξ±β2 se(π₯Μ ) and, P(µ β π§Ξ±β2 se(π₯Μ ) < π₯Μ < µ + π§Ξ±β2 se(π₯Μ )) = 1 β πΌ Example 2 The speed on a certain stretch of an interstate highway is normally distribution with a mean of 80 mph with a standard deviation of 5 mph. A random sample of π = 64 vehicles is clocked. Find the 95% margin of error for the sample mean. In other words, find the middle interval of π₯Μ values which contains 95% of all possible sample means for samples of size π = 64. 1 β Ξ± = 0.95 Ξ±β2 = 0.025 π§Ξ±β2 = π§0.025 = 1.96 se(π₯Μ ) = Οββπ = 5ββ64 = 0.625 Ο΅ = π§Ξ±β2 se(π₯Μ ) = 1.96(0.625) = 1.225 π₯Μ πΏ = 80 β 1.225 = 78.775 π₯Μ π = 80 + 1.225 = 81.225 2-Estimation and Inference 9 of 21 0.9500 78.775 ΞΌ = 80 81.225 xΜ 4. Properties of Estimators 4.1. Unbiased Estimators Since estimators are random variables with infinite number of values, the probability that a single estimate will equal the population parameter is practically zero. 
Thus there will always be a deviation between the estimate and the parameter. If the parameter of interest is the population mean μ, then the deviation between the sample mean x̄ and μ is:

   x̄ − μ = ε

Although this deviation will never be zero for any single estimate, in repeated sampling it is desirable that the mean or expected value of the deviation be zero; that is, the deviations above and below μ should cancel each other out: E(ε) = 0. If this equality holds in the long run, then

   E(ε) = E(x̄ − μ) = E(x̄) − μ = 0

Thus,

   E(x̄) = μ

If the deviations average to zero, then the expected value of x̄ is equal to the mean of the population. If this is true, then x̄ is said to be an unbiased estimator of the population mean. The proof that E(x̄) = μ was shown above in the discussion of the sampling distribution of x̄.

4.1.1. Proof that s² is an unbiased estimator of the population variance σ²

We learned that to compute the variance of the sample you use the formula

   s² = Σ(x − x̄)²/(n − 1)

The variance is the mean squared deviation of the data from the sample mean. In computing the mean squared deviation for the sample data, why do we divide the sum of squared deviations by n − 1 and not by n? This has to do with the fact that when computing s² we are finding the deviations of the random variable x from another random variable, namely x̄. Thus, for a sample of size n, the number of random squared deviations is reduced by 1. To explain, suppose you randomly select three items (n = 3) from a population and obtain the following data points: 3, 9, 12. The mean of this sample, another random number, is x̄ = 8. Given this mean, the first two squared deviations are (3 − 8)² = 25 and (9 − 8)² = 1. These are the only two random squared deviations. The third squared deviation, (12 − 8)² = 16, is no longer random, because when the mean is 8 the third number must be 12. Thus, you lose one "degree of freedom".
To obtain an unbiased estimate, the mean of the squared deviations is then computed using n − 1 = 2 degrees of freedom in the denominator.¹ If we divide the sum of squared deviations by n, the sample variance would be smaller and would thus underestimate the population variance. In other words, s² would be a biased estimator of the population variance. The following shows that using n in the denominator of the sample variance formula makes it a biased estimator, and that when we divide by n − 1 the bias disappears.

For s² to be an unbiased estimator of σ², the following must hold:

   E(s²) = σ²

In the following proof, it will be shown that if in the sample variance formula the sum of squared deviations is divided by n,

   s² = Σ(x − x̄)²/n

then

   E(s²) = ((n − 1)/n)·σ² < σ²

That is, the expected value of the sample variance would be less than the population variance, imparting a downward bias to the estimator. Therefore, dividing the sum of squared deviations of x, Σ(x − x̄)², by n would make the resulting sample variance a biased estimator of the population variance.

Now the proof:

   E(s²) = E[Σ(x − x̄)²/n]
   E(s²) = (1/n)·E[Σ(x − x̄)²]

Rewrite the sum of squared deviations within the brackets by adding and subtracting μ, as follows:

   Σ(x − x̄)² = Σ(x − x̄ + μ − μ)²
   Σ(x − x̄)² = Σ[(x − μ) − (x̄ − μ)]²

____________
¹ Note that x̄ = (1/n)Σx = (1/n)(x₁ + x₂ + ⋯ + x_n). Thus, x_n = nx̄ − (x₁ + x₂ + ⋯ + x_{n−1}). This shows that any one of the n observations in a sample can be written as a linear combination of x̄ and the remaining n − 1 observations.
Therefore, in computing the average of squared deviations as the sample variance, there are n − 1 independent squared deviations.

   Σ(x − x̄)² = Σ[(x − μ)² − 2(x − μ)(x̄ − μ) + (x̄ − μ)²]
   Σ(x − x̄)² = Σ(x − μ)² − 2(x̄ − μ)·Σ(x − μ) + n(x̄ − μ)²
   Σ(x − x̄)² = Σ(x − μ)² − 2n(x̄ − μ)² + n(x̄ − μ)²

   Note: Σ(x − μ) = Σx − nμ = nx̄ − nμ = n(x̄ − μ)

   Σ(x − x̄)² = Σ(x − μ)² − n(x̄ − μ)²

Now we can write

   E(s²) = (1/n)·E[Σ(x − μ)² − n(x̄ − μ)²]
   E(s²) = (1/n)·E[Σ(x − μ)²] − E[(x̄ − μ)²]
   E(s²) = (1/n)·E[(x₁ − μ)² + (x₂ − μ)² + ⋯ + (x_n − μ)²] − E[(x̄ − μ)²]
   E(s²) = (1/n)·{E[(x₁ − μ)²] + E[(x₂ − μ)²] + ⋯ + E[(x_n − μ)²]} − E[(x̄ − μ)²]
   E(s²) = (1/n)·(σ₁² + σ₂² + ⋯ + σ_n²) − var(x̄)
   E(s²) = (1/n)·(nσ²) − σ²/n

Since x₁, x₂, ..., x_n are random selections from the same population, σ₁² = σ₂² = ⋯ = σ_n² = σ². Also, the variance of x̄ is var(x̄) = σ²/n. Thus,

   E(s²) = σ² − σ²/n = ((n − 1)/n)·σ²

which is what we set out to prove. For a sample statistic to be an unbiased estimator of the population parameter, the expected value of that sample statistic must equal the population parameter. Therefore, when the sample variance is calculated as

   s² = Σ(x − x̄)²/n

this variance is a biased estimator of the population variance σ². If, however, we use n − 1 in the denominator of the sample variance formula,

   s² = Σ(x − x̄)²/(n − 1)

the end result of the same process would instead be

   E(s²) = (1/(n − 1))·(nσ²) − (n/(n − 1))·(σ²/n) = σ²

Thus, since E(s²) = σ², s² is said to be an unbiased estimator of σ².

4.2. Efficient Estimators

In many situations different unbiased estimators of the population parameter can be obtained. However, the estimator with the smallest variance is clearly preferred, since it provides us with the smallest possible margin of error in the estimation process.
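The bias result proved above (E(s²) = ((n − 1)/n)·σ² when dividing by n, and E(s²) = σ² when dividing by n − 1) can also be checked by brute force. This is an illustrative sketch, not from the text: it averages both variance formulas over all 5³ = 125 equally likely with-replacement (i.i.d.) samples of size n = 3 from the chapter's five-element population with σ² = 18.

```python
# Exhaustively compute E(s²) under both denominators for i.i.d. samples
# of size 3 from {15, 12, 9, 6, 3}, whose population variance is σ² = 18.
from itertools import product

population = [15, 12, 9, 6, 3]
n = 3

s2_unbiased, s2_biased = [], []
for sample in product(population, repeat=n):     # all 125 i.i.d. samples
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)    # Σ(x − x̄)²
    s2_unbiased.append(ss / (n - 1))             # divide by n − 1
    s2_biased.append(ss / n)                     # divide by n

print(sum(s2_unbiased) / len(s2_unbiased))       # σ² = 18: unbiased
print(sum(s2_biased) / len(s2_biased))           # (n−1)/n · σ² = 12: biased down
```

Dividing by n − 1 averages out to exactly σ² = 18, while dividing by n averages to (2/3)·18 = 12, the downward bias the proof predicts.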
The smaller the variance, the more closely clustered the values of the sample statistic (the estimator) are around the population parameter. The unbiased estimator with the smallest variance is called the most efficient estimator.

5. Confidence Interval (Interval Estimate) for the Population Mean

In the previous discussions, to establish the theory of the sampling distribution, we assumed that the population mean μ and standard deviation σ are known. It was just explained that for samples of size n, the 1 − α proportion of all sample means fall within the margin of error ε = z_{α/2}·σ/√n of the population mean. In practice, the whole purpose of inferential statistics is to find an estimate of the unknown population parameter. Obtaining a single sample provides a point estimate of the population parameter, but a point estimate gives us very little information about the precision of our estimate. We know that the number of samples that could be selected, and hence the number of sample means that could be calculated, is practically infinite, and these x̄ values are normally distributed about the population mean. Therefore, a point estimate does not tell us how close the mean computed from a single sample is to the population mean. An interval estimate provides a range of values, which allows us to state with a known level of confidence that the population mean falls within that interval.

This interval estimate for the population mean is obtained as follows. It was explained above that

   P(μ − z_{α/2}·se(x̄) < x̄ < μ + z_{α/2}·se(x̄)) = 1 − α

Take the inequality statement (the interval) within the parentheses and rewrite it as:

   x̄ − z_{α/2}·se(x̄) < μ < x̄ + z_{α/2}·se(x̄)

The above inequality shows that μ falls within the 1 − α proportion of all possible intervals built around the means of all random samples: x̄ ± z_{α/2}·se(x̄).
Therefore, if we select one sample of size n and build a single interval x̄ ± z_{α/2}·se(x̄), we are (1 − α)×100% confident that this interval contains the population mean. Thus, the confidence interval for the population mean, with lower end L and upper end U, is:

   L, U = x̄ ± z_{α/2}·se(x̄)

5.1. The t Distribution

So far, the theory of confidence intervals has been explained using the population standard deviation in the margin of error formula:

   ε = z_{α/2}·se(x̄) = z_{α/2}·σ/√n

In practice, of course, σ is also an unknown population parameter and must be estimated from the sample data. The estimator of the population parameter σ is the sample statistic s, the sample standard deviation:

   s = √[Σ(x − x̄)²/(n − 1)]

Therefore, in the margin of error formula the standard error of x̄ becomes an estimated value obtained using:

   se(x̄) = s/√n

When s is used in place of σ, a peculiar thing happens to the shape of the sampling distribution of x̄. The sampling distribution is still bell-shaped, but the area under the curve for a given interval of x̄ values is not the same as when the known σ is used. To illustrate, consider the following example. First, suppose the mean of a normally distributed population is μ = 100 and the standard deviation is σ = 20. The proportion of x̄ values for samples of size n = 16 taken from this population that fall between, say, 90.2 and 109.8 is determined as follows:

   P(90.2 < x̄ < 109.8)
   se(x̄) = σ/√n = 20/√16 = 5
   z = (x̄ − μ)/se(x̄) = ±1.96
   P(−1.96 < z < 1.96) = 0.95

Now, instead of using σ, let the standard deviation of 20 be as if determined from a sample; that is, let s = 20. Hence,

   se(x̄) = s/√n = 20/√16 = 5

Here, when we attempt to transform x̄ to z using the formula

   (x̄ − μ)/(s/√n)

a problem arises. The new random variable obtained through this transformation no longer has a z distribution (with mean 0 and standard deviation 1). This problem was observed by William S.
Gosset (1876–1937), a British chemist/statistician, in a paper published in 1908. Gosset showed that, when the sample size is small, the standard normal table does not provide the accurate area under the curve for the scores obtained from the conversion formula (x̄ − μ)/se(x̄). In the above example, if (x̄ − μ)/se(x̄) = ±1.96, the area under the curve bounded by the two scores ±1.96 is no longer 0.95. Gosset developed an alternative table to obtain the more accurate areas or probability values for the scores thus calculated. The new table of probabilities he provided is now called the t table, and the random variable obtained from this transformation is said to have a t distribution, where

   t = (x̄ − μ)/(s/√n)

The difference between the z and t distributions is shown in the following diagram.

[Figure: The z curve and the t curve (df = 4) overlaid. At the score 1.96, the tail area under z is 0.025, while the tail area under t is 0.061.]

Like the z distribution, the t distribution is symmetric about its mean of 0. However, unlike z, which has a unique, unchanging shape due to its fixed standard deviation of 1, the t distribution acquires different shapes depending on a parameter called degrees of freedom. In estimations involving μ, the degrees of freedom is df = n − 1, the denominator used in computing the sample standard deviation. The smaller the degrees of freedom, the larger the tail areas. As df increases, the t distribution approaches the z distribution and the tail area under the t curve becomes closer and closer to the tail area under z. As the degrees of freedom increase, the distinction between z and t practically disappears.

For any df > 2, the standard deviation of the t distribution is

   s_t = √[df/(df − 2)]

For example, if df = 4, then s_t = √(4/2) = 1.414. As df rises, the standard deviation approaches 1, which is the standard deviation of z. Let, for example, df = 1000; then the standard deviation is practically 1 (√(1000/998) = 1.001).
The fact that t has a larger standard deviation than z makes the tail area under the t curve relatively larger, for a given score, than the area under the z curve for the same score. Thus, using a computer, it can be shown that while the tail area for the z score 1.96 is 0.025, the tail area associated with a t score of 1.96 (with df = 4) is 0.061. Having a larger standard deviation and tail area than z reflects the fact that the t distribution applies to situations with greater inherent uncertainty. The uncertainty arises because σ is unknown and is estimated by the random variable s. The t distribution, t = (x̄ − μ)/(s/√n), thus reflects the uncertainty in two random variables, x̄ and s, while z = (x̄ − μ)/(σ/√n) reflects only the uncertainty due to x̄. The greater uncertainty in t (which makes confidence intervals based on t wider than those based on z) is the price we pay for not knowing σ and having to estimate it from sample data.

In inferential statistics we are interested in the t score for a given tail area, or in the tail area associated with a given t score. A typical t table provides the t scores for a given df and various tail areas, but there are no tables which provide the tail area for different t scores. In either case, a computer can easily provide the values we are looking for.

Back to the confidence interval for μ: when σ is unknown, the margin of error used in building the confidence interval is

   e = t_{α/2,df}·se(x̄)

where df = n − 1 and se(x̄) = s/√n.

[Note: The symbol e is used for the margin of error in place of ε, reflecting the fact that we are using an estimated value for the standard error.]
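The tail-area comparison above (0.025 under z versus 0.061 under t with df = 4, both at the score 1.96) can be checked with only the stdlib. This is an illustrative sketch, not from the text: it evaluates the Student t density directly and integrates its tail numerically, since Python's stdlib has no built-in t distribution.

```python
# Compare the tail beyond 1.96 under the standard normal and under t(df = 4).
import math
from statistics import NormalDist

def t_pdf(x, df):
    """Density of Student's t with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_tail(t, df, upper=60.0, steps=100_000):
    """P(T > t) by the trapezoid rule on [t, upper]; the tail beyond 60 is negligible."""
    h = (upper - t) / steps
    area = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        area += t_pdf(t + i * h, df)
    return area * h

z_tail = 1 - NormalDist().cdf(1.96)      # ≈ 0.025
t4_tail = t_tail(1.96, 4)                # ≈ 0.061
print(round(z_tail, 4), round(t4_tail, 4))
```

The fatter tail of t(4) at the same score is exactly the extra uncertainty the section describes: the same cutoff leaves more than twice the probability in the tail.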
The confidence interval with a 1 − α level of confidence for the population mean is then

   x̄ − t_{α/2,df}·s/√n < μ < x̄ + t_{α/2,df}·s/√n

   L, U = x̄ ± t_{α/2,df}·s/√n

Example 3

To build a confidence interval with a 0.95 level of confidence for the average life of a certain type of light bulb, a sample of n = 25 bulbs was tested. The sample mean is x̄ = 920.5 and the sample standard deviation is s = 43.5.

   se(x̄) = s/√n = 43.5/√25 = 8.7
   1 − α = 0.95
   df = n − 1 = 24
   t_{α/2,df} = t₀.₀₂₅,₂₄ = 2.064

To find t_{α/2,df} = t₀.₀₂₅,₂₄ = 2.064, use the following Excel function:

   =T.INV.2T(probability, deg_freedom)
   =T.INV.2T(0.05,24) = 2.064

   e = t_{α/2,df}·s/√n = 2.064 × 8.7 = 17.96
   920.5 − 17.96 < μ < 920.5 + 17.96
   902.54 < μ < 938.46
   L, U = (902.54, 938.46)

6. Test of Hypothesis for μ

In the interval estimate process we started with no knowledge, assumption, or hypothesis about the population parameter. A sample is taken and an interval is built around the sample mean. Unlike the interval estimate approach to inferential statistics, the test of hypothesis starts with a conjecture or hypothesis about the population parameter. Denoting the hypothesized value by μ₀, in hypothesis testing we ascribe this value to the center of gravity of the x̄ values in the sampling distribution of x̄. Then the argument goes as follows: if μ₀ were the actual population mean, then the 1 − α proportion of sample means would fall within the interval bounded by

   x̄_L, x̄_U = μ₀ ± z_{α/2}·se(x̄)

[Figure: A normal curve centered at μ₀ with middle area 1 − α.]

To test the statistical validity of this conjecture, that is, to determine whether μ₀ is a reasonable value to ascribe to the population mean, a sample of size n is taken, from which the sample statistic is computed. If the sample mean falls within the interval shown in the diagram (the "acceptance region"), then we decide this mean belongs to the family of x̄ values whose center of gravity is μ₀, and conclude the population mean is the value we ascribed to μ₀.
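Example 3 above reduces to a few lines of arithmetic. This is an illustrative sketch in Python; since the stdlib has no t-distribution inverse, it takes the critical value t₀.₀₂₅,₂₄ = 2.064 as given by the text (Excel's =T.INV.2T(0.05,24)):

```python
# Example 3: 95% confidence interval for the mean bulb life.
n = 25
xbar, s = 920.5, 43.5
t_crit = 2.064                   # t_{α/2, df}, df = n − 1 = 24, from the text

se = s / n ** 0.5                # 43.5 / 5 = 8.7
moe = t_crit * se                # margin of error, ≈ 17.96
L, U = xbar - moe, xbar + moe    # ≈ (902.54, 938.46)

print(round(L, 2), round(U, 2))
```

Swapping in a different confidence level only changes `t_crit`; the rest of the arithmetic is identical.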
Note that the interval or acceptance region (x̄_L, x̄_U) is obtained by adding to and subtracting from μ₀ the margin of error ε = z_{α/2}·se(x̄):

   x̄_L, x̄_U = μ₀ ± z_{α/2}·se(x̄)

Therefore, the main task in performing a test of hypothesis is to find the statistical margin of error. This provides us with the critical value needed to establish the decision rule to accept or reject the hypothesis. The decision rule sets up the acceptance region, which defines the range of acceptable values to which the x̄ value from the sample is compared.

To determine the acceptance region, first you must state the claim (the hypothesis) about the population mean in a prescribed way. The claim contains a null hypothesis, denoted by H₀, and an alternative hypothesis, H₁. Suppose we are testing the hypothesis that the population mean equals 100:

   H₀: μ = 100
   H₁: μ ≠ 100

The null hypothesis states that the population mean equals 100; the alternative hypothesis states that the population mean is a value other than 100.

Once you have stated your hypothesis, you must deal with the following dilemma involving hypothesis tests. Since the test of hypothesis involves the sampling distribution, in deriving a conclusion from the results of a test that is based on a random sampling process, there is always a chance that you may make a wrong decision, that is, commit an error. There are two possible errors:

   1) Type I Error: rejecting a true null hypothesis.
   2) Type II Error: failing to reject a false null hypothesis.

[Figure: Panel (a), Type I error: the sampling distribution centered at μ₀ under H₀, with x̄ falling outside the non-rejection interval. Panel (b), Type II error: the true sampling distribution centered at μ₁ under H₁, with x̄ falling inside the non-rejection interval built around μ₀.]

There is always a chance that you may commit either one of the two errors. If the population mean is in fact μ = μ₀ = 100, but the x̄ value falls outside the non-rejection interval (x̄_L, x̄_U) in panel (a) of the diagram, then you would wrongly reject a true null hypothesis: you have committed a Type I error.
The probability of committing a Type I error is α (the combined two-tail areas in the diagram). A Type II error would occur if the population mean is not equal to 100 (μ = μ₁ ≠ 100), but x̄ falls inside the non-rejection interval in panel (b), leading you to not reject a false null hypothesis. The probability of committing a Type II error is denoted by β, shown as the area to the left of x̄_U under the distribution labeled H₁. Reducing α, which expands the non-rejection interval (x̄_L, x̄_U) for a given sample size, comes only at the cost of increasing β.

Performing a test of hypothesis is like conducting a trial in a criminal court. The defendant is charged with a crime, and the purpose of the trial is to establish the defendant's guilt or innocence. The null hypothesis is that the defendant is innocent (the accused is presumed innocent) and the alternative is that he is guilty (the guilt to be established beyond a reasonable doubt by the prosecutor). If the jury finds an innocent person guilty, it has rejected a true null hypothesis and has therefore committed a Type I error. On the other hand, if the jury finds a guilty person not guilty, it has failed to reject a false null hypothesis and has therefore committed a Type II error.

In the hypothesis test, the benefit of the doubt is given to H₀, and the burden of proof is upon H₁. That is, we want to make it unlikely to reject the null hypothesis unless the evidence is "very strong"; we want to make it unlikely to find the defendant guilty unless guilt is established beyond a reasonable doubt. For this reason α, the probability of rejecting a true null hypothesis, is always assigned a small value, typically 5 percent. The α value is also called the level of significance.

Note that in a confidence interval, α is the percentage of all possible intervals built around sample means that do not capture the population mean.
That was because α percent of sample means fall outside the margin of error E = ±z_(α/2) se(x̄). In a test of hypothesis, α plays a similar role. If the randomly selected x̄ falls outside the prescribed margin of error, we would wrongly reject the null hypothesis, and there is always an α percent chance of doing that.

Since committing a Type I error is the more serious of the two errors, the threshold probability (the level of significance α) is set in advance. The probability of a Type II error (β), however, varies based on several factors, one of them being α. The method to determine β will be explained later in this chapter.

Suppose, to test the null hypothesis that the population mean is 100, a random sample of size 16 is selected with the following results:

108 109 104 95 105 93 97 100 96 95 100 109 108 106 102 108

The mean of the sample is x̄ = 102.2. The question is then: is 102.2 significantly different from 100? How do we decide if the difference is significant? If we want to limit our probability of a Type I error to 5 percent, then we select α = 0.05. Given this probability, we can determine the 95% margin of error as follows:

E = ±t_(α/2, n−1) se(x̄)

First we must compute the sample standard deviation (s = 5.671) to determine the standard error of x̄:

se(x̄) = s/√n = 5.671/√16 = 1.418
t(0.025, 15) = 2.131

Thus, E = 2.131 × 1.418 = 3.02. This tells us that 95% of all means of samples of size n = 16 fall within ±3.02 units of the population mean. Since x̄ = 102.2 differs from the hypothesized mean µ = 100 by 2.2, this difference falls within the acceptable margin of error of 3.02. Alternatively stated, x̄ = 102.2 falls within the non-rejection interval of x̄_L = µ₀ − E = 100 − 3.02 = 96.98 and x̄_U = µ₀ + E = 100 + 3.02 = 103.02. Therefore, if the population mean is 100, then 102.2 is one of the likely sample means.
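As a check on the arithmetic in this example, the sample statistics, the margin of error, and the non-rejection interval can be reproduced in a few lines of Python. This is only a sketch, not part of the original example; it uses `scipy.stats.t.ppf` for the critical value, and the unrounded sample mean (102.1875) rather than the rounded 102.2 used in the text.

```python
from math import sqrt
from scipy import stats

data = [108, 109, 104, 95, 105, 93, 97, 100, 96, 95,
        100, 109, 108, 106, 102, 108]
n = len(data)                       # sample size: 16
xbar = sum(data) / n                # sample mean: 102.1875 (rounded to 102.2 in the text)
s = sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample std dev: ~5.671
se = s / sqrt(n)                    # standard error of the sample mean: ~1.418

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)  # t(0.025, 15): ~2.131
E = t_crit * se                             # margin of error: ~3.02

mu0 = 100
interval = (mu0 - E, mu0 + E)       # non-rejection interval: ~(96.98, 103.02)
print(xbar, s, E, interval)
```

Since x̄ falls inside the printed interval, the sketch reaches the same conclusion as the hand computation.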
The decision rule for rejecting the null hypothesis, in short, can be written as:

Reject H₀ if |x̄ − µ₀| > E

This decision rule can also be written in a more frequently applied way, derived as follows. Start with the decision rule above and substitute E = t_(α/2, n−1) se(x̄) on the right-hand side of the inequality:

|x̄ − µ₀| > t_(α/2, n−1) se(x̄)

Divide both sides by se(x̄):

|x̄ − µ₀| / se(x̄) > t_(α/2, n−1)

The left-hand side is the test statistic |t| and the right-hand side is the critical value. Thus, the decision rule becomes:

Reject H₀ if the test statistic exceeds the critical value: |t| > t_(α/2, n−1)

In the example,

|t| = (102.2 − 100) / 1.418 = 1.552

is less than t(0.025, 15) = 2.131. Therefore, do not reject the null hypothesis.

6.1. The probability value

The probability value approach to the test of hypothesis is based on the following question: if the population mean is in fact 100, what is the probability that a randomly selected sample from this population would yield a sample mean that deviates from µ = 100 by 2.2 units or more? This probability is the area under the curve to the right of 102.2. To find this probability, we must transform the test statistic into the t variable. This was already done above: |t| = 1.552. Now find P(t > |t|). Using Excel, this probability can be computed with the following function:

= T.DIST(x, deg_freedom, cumulative)

Since we want the tail area associated with the t score, enter the negative value for t and "1" for "cumulative":

= T.DIST(-1.552, 15, 1) = 0.0708
P(t > 1.552) = 0.0708

[Figure: t distribution with 15 degrees of freedom, showing the tail area to the right of the test statistic 1.552 and the critical value 2.131.]

For two-tail tests this probability value must be doubled: 0.0708 × 2 = 0.1416. This means that the probability that a sample mean would deviate (in either direction) from the population mean of 100 by 2.2 or more is 0.1416. Compared to the level of significance of α = 0.05, 0.1416 is a very high probability.
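The same tail probability can be sketched in Python, where `scipy.stats.t.sf` (the survival function, 1 minus the CDF) plays the role of the Excel `T.DIST` call with a negated argument. This uses the text's rounded figures (x̄ = 102.2, se(x̄) = 1.418), so the results match the hand computation.

```python
from scipy import stats

# Test statistic from the example, using the text's rounded figures
t_stat = (102.2 - 100) / 1.418        # ~1.552
p_one_tail = stats.t.sf(t_stat, 15)   # upper-tail area P(t > 1.552): ~0.0708
p_two_tail = 2 * p_one_tail           # two-tail p-value: ~0.1416
print(t_stat, p_one_tail, p_two_tail)
```

The survival function is preferred over `1 - stats.t.cdf(...)` because it stays numerically accurate far out in the tail.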
This implies that if we reject the null hypothesis that the population mean is 100, the probability of committing a Type I error (rejecting a true null hypothesis) is over 14%, which far exceeds the self-imposed limit of 5%. Therefore, we do not reject the null hypothesis.

In Excel you can obtain the p-value for a two-tail test directly with:

= T.DIST.2T(x, deg_freedom)
= T.DIST.2T(1.552, 15) = 0.1416
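As a sketch, the entire two-tail test can also be run in one call with `scipy.stats.ttest_1samp`. Because it works from the unrounded sample mean (102.1875), its figures differ slightly from the hand computation above (t ≈ 1.54 and p ≈ 0.14, rather than 1.552 and 0.1416), but the conclusion is the same.

```python
from scipy import stats

data = [108, 109, 104, 95, 105, 93, 97, 100, 96, 95,
        100, 109, 108, 106, 102, 108]

# One-sample, two-tail t test of H0: mu = 100
res = stats.ttest_1samp(data, popmean=100)
print(res.statistic, res.pvalue)

# Decision at the 5% level of significance
if res.pvalue > 0.05:
    print("Do not reject H0")
```

Since the p-value exceeds α = 0.05, the null hypothesis is not rejected, matching the conclusion reached with the critical-value approach.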