Economics 326: Methods of Empirical Research in Economics
Lecture 8: Hypothesis Testing (Part 1)
Vadim Marmer, University of British Columbia, February 9, 2011

Hypothesis testing

- Hypothesis testing is one of the fundamental problems in statistics.
- A hypothesis is (usually) an assertion about unknown population parameters, such as β1 in Yi = β0 + β1 Xi + Ui.
- Using the data, the econometrician has to determine whether the assertion is true or false.
- Example (Phillips curve): Unemployment_t = β0 + β1 Inflation_t + U_t. In this example, we are interested in testing β1 = 0 (no Phillips curve) against β1 < 0 (a Phillips curve).

Assumptions

- β1 is unknown, and we have to rely on its OLS estimator β̂1.
- We need to know the distribution of β̂1, or of certain functions of it.
- We will assume that the assumptions of the Normal Classical Linear Regression model are satisfied:
  1. Yi = β0 + β1 Xi + Ui, i = 1, ..., n.
  2. E(Ui | X1, ..., Xn) = 0 for all i.
  3. E(Ui² | X1, ..., Xn) = σ² for all i.
  4. E(Ui Uj | X1, ..., Xn) = 0 for all i ≠ j.
  5. The U's are jointly normally distributed conditional on the X's.
- Recall that, in this case, conditionally on the X's,

  β̂1 ~ N(β1, Var(β̂1)), where Var(β̂1) = σ² / Σ_{i=1}^n (Xi − X̄)².
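As a concrete sketch (all parameter values here are illustrative, not from the lecture), the OLS slope estimator and the conditional variance Var(β̂1) = σ² / Σ(Xi − X̄)² can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from Y_i = beta0 + beta1*X_i + U_i with normal errors
# (assumption 5); beta0, beta1, sigma are illustrative values.
n, beta0, beta1, sigma = 100, 1.0, 0.5, 2.0
X = rng.uniform(0.0, 10.0, size=n)
Y = beta0 + beta1 * X + rng.normal(0.0, sigma, size=n)

# OLS estimator of the slope:
Xd = X - X.mean()
beta1_hat = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()

# Conditional variance Var(beta1_hat) = sigma^2 / sum((X_i - X_bar)^2),
# computable here only because sigma^2 is treated as known.
var_beta1_hat = sigma ** 2 / (Xd ** 2).sum()

print(beta1_hat, var_beta1_hat)
```

With a sample of this size, β̂1 should land close to the true slope 0.5, with standard deviation √Var(β̂1).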
Null and alternative hypotheses

- Usually, we have two competing hypotheses, and we want to draw a conclusion, based on the data, as to which of the hypotheses is true.
- Null hypothesis, denoted H0: a hypothesis that is held to be true unless the data provide sufficient evidence against it.
- Alternative hypothesis, denoted H1: a hypothesis against which the null is tested. It is held to be true if the null is found false.
- Usually, the econometrician has to carry the "burden of proof": the case he is interested in is stated as H1, and he has to prove that his assertion (H1) is true by showing that the data reject H0.
- The two hypotheses must be disjoint: either H0 is true or H1 is true, but never both simultaneously.

Decision rule

- The econometrician has to choose between H0 and H1.
- The decision rule that leads the econometrician to reject or not reject H0 is based on a test statistic, which is a function of the data {(Yi, Xi) : i = 1, ..., n}.
- Usually, one rejects H0 if the test statistic falls into a critical region. A critical region is constructed by taking into account the probability of making a wrong decision.

Errors

- There are two types of errors that the econometrician can make:

                      Truth: H0         Truth: H1
    Decision: H0      correct           Type II error
    Decision: H1      Type I error      correct

- A Type I error is the error of rejecting H0 when H0 is true.
  The probability of a Type I error is denoted by α and is called the significance level, or size, of the test:

  P(Type I error) = P(reject H0 | H0 is true) = α.

- A Type II error is the error of not rejecting H0 when H1 is true.
- Power of a test: 1 − P(Type II error) = 1 − P(accept H0 | H0 is false).
Errors (continued)

- The decision rule depends on a test statistic T.
- The real line is split into two regions: an acceptance region and a rejection region (critical region).
- When T is in the acceptance region, we accept H0 (and risk making a Type II error).
- When T is in the rejection (critical) region, we reject H0 (and risk making a Type I error).
- Unfortunately, the probabilities of Type I and Type II errors are inversely related: decreasing the probability of a Type I error, α, shrinks the critical region, which increases the probability of a Type II error. Thus it is impossible to make both errors arbitrarily small.
- By convention, α is chosen to be a small number, for example α = 0.01, 0.05, or 0.10. (This is in agreement with the econometrician carrying the burden of proof.)
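The inverse relation between the two error probabilities can be seen in a toy simulation (my illustration, not from the slides): a one-sided z-test of H0: μ = 0 against H1: μ > 0 for a normal mean with known σ. Lowering α raises the critical value, which lowers power, i.e. raises the Type II error probability:

```python
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(1)
Z = NormalDist()

# Toy example: z-test of H0: mu = 0 vs H1: mu > 0, sigma known.
# Here H1 is true (mu = 0.5), so every non-rejection is a Type II error.
n, sigma, mu_true, reps = 25, 1.0, 0.5, 20_000
ybar = rng.normal(mu_true, sigma / np.sqrt(n), size=reps)
T = np.sqrt(n) * ybar / sigma

powers = []
for alpha in (0.10, 0.05, 0.01):
    crit = Z.inv_cdf(1 - alpha)          # one-sided critical value z_{1-alpha}
    power = float((T > crit).mean())     # estimated P(reject H0 | H1 true)
    powers.append(power)
    print(f"alpha={alpha:.2f}  power={power:.3f}  P(Type II)={1 - power:.3f}")

# Smaller alpha -> smaller critical region -> lower power (larger Type II error).
```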
Steps

- The steps of hypothesis testing are:
  1. Specify H0 and H1.
  2. Choose the significance level α.
  3. Define a decision rule (critical region).
  4. Perform the test using the data: compute the test statistic and see whether it falls into the critical region.
- The decision depends on the significance level α: larger values of α correspond to bigger critical regions (the probability of a Type I error is larger), so it is easier to reject the null for larger values of α.
- p-value: given the data, the smallest significance level at which the null can be rejected.
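For a statistic that is standard normal under H0 with a two-sided rejection rule (as in the tests discussed next), the smallest α at which |T| > z_{1−α/2} works out to 2(1 − Φ(|T|)). A small sketch with a hypothetical statistic value:

```python
from statistics import NormalDist

Z = NormalDist()

def two_sided_p_value(t_stat: float) -> float:
    """p-value for a statistic that is N(0, 1) under H0 with a two-sided
    rejection rule: the smallest alpha at which H0 is rejected."""
    return 2.0 * (1.0 - Z.cdf(abs(t_stat)))

# Hypothetical realized value of the test statistic:
t = 2.1
p = two_sided_p_value(t)
print(p)   # about 0.036: reject at alpha = 0.05 but not at alpha = 0.01
```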
Two-sided tests

- For Yi = β0 + β1 Xi + Ui, consider testing H0: β1 = β1,0 against H1: β1 ≠ β1,0.
- β1 is the true unknown value of the slope parameter.
- β1,0 is a known number specified by the econometrician. (For example, β1,0 is zero if you want to test β1 = 0.)
- Such a test is called two-sided because the alternative hypothesis H1 does not specify in which direction β1 can deviate from the asserted value β1,0.

Two-sided tests when σ² is known (infeasible test)

- Suppose for a moment that σ² is known.
- Consider the following test statistic:

  T = (β̂1 − β1,0) / √Var(β̂1), where Var(β̂1) = σ² / Σ_{i=1}^n (Xi − X̄)².

- Consider the following decision rule (test): reject H0: β1 = β1,0 when |T| > z_{1−α/2}, where z_{1−α/2} is the (1 − α/2) quantile of the standard normal distribution (the critical value).
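Treating σ² as known (the infeasible case), the statistic T and the two-sided decision rule can be sketched as follows; the simulated data and parameter values are illustrative:

```python
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(2)

# Illustrative data from Y_i = beta0 + beta1*X_i + U_i; the true slope
# is 0.5, so H0: beta1 = 0 is false and should tend to be rejected.
n, beta0, beta1_true, sigma = 200, 1.0, 0.5, 2.0
X = rng.uniform(0.0, 10.0, size=n)
Y = beta0 + beta1_true * X + rng.normal(0.0, sigma, size=n)

beta10, alpha = 0.0, 0.05                      # test H0: beta1 = beta10

Xd = X - X.mean()
beta1_hat = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
se = np.sqrt(sigma ** 2 / (Xd ** 2).sum())     # sqrt(Var(beta1_hat)), sigma known

T = (beta1_hat - beta10) / se
crit = NormalDist().inv_cdf(1 - alpha / 2)     # z_{1-alpha/2}, about 1.96

reject = abs(T) > crit
print(T, reject)
```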
Test validity and power

- We need to establish that:
  1. The test is valid, where validity means that the test has correct size, P(Type I error) = α:

     P(|T| > z_{1−α/2} | β1 = β1,0) = α.

  2. The test has power: when β1 ≠ β1,0 (H0 is false), the test rejects H0 with probability exceeding α:

     P(|T| > z_{1−α/2} | β1 ≠ β1,0) > α.

- We want P(|T| > z_{1−α/2} | β1 ≠ β1,0) to be as large as possible.
- Note that this rejection probability depends on the true value β1.
The distribution of T when σ² is known (infeasible test)

- Write

  T = (β̂1 − β1,0) / √Var(β̂1)
    = (β̂1 − β1 + β1 − β1,0) / √Var(β̂1)
    = (β̂1 − β1) / √Var(β̂1) + (β1 − β1,0) / √Var(β̂1).

- Under our assumptions, conditionally on the X's, β̂1 ~ N(β1, Var(β̂1)), or equivalently (β̂1 − β1) / √Var(β̂1) ~ N(0, 1).
- It follows that, conditionally on the X's,

  T ~ N( (β1 − β1,0) / √Var(β̂1), 1 ).
Size when σ² is known (infeasible test)

- We have that T ~ N( (β1 − β1,0) / √Var(β̂1), 1 ).
- When H0: β1 = β1,0 is true, T ~ N(0, 1).
- We reject H0 when |T| > z_{1−α/2}, that is, when T > z_{1−α/2} or T < −z_{1−α/2}.
- Let Z ~ N(0, 1). Then

  P(reject H0 | H0 is true) = P(Z > z_{1−α/2}) + P(Z < −z_{1−α/2}) = α/2 + α/2 = α.

[Figure: the N(0, 1) density of T under H0, with the critical regions in the two tails beyond ±z_{1−α/2}.]
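A quick Monte Carlo check of the size calculation (a sketch with illustrative parameter values): generating data under H0 and applying the infeasible test, the rejection frequency should be close to α.

```python
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(3)

# Illustrative design: X's drawn once and held fixed across replications,
# data generated under H0 (beta1 = beta10), sigma^2 treated as known.
n, beta0, beta10, sigma, alpha = 50, 1.0, 0.0, 1.0, 0.05
crit = NormalDist().inv_cdf(1 - alpha / 2)

X = rng.uniform(0.0, 10.0, size=n)
Xd = X - X.mean()
se = np.sqrt(sigma ** 2 / (Xd ** 2).sum())

reps = 20_000
U = rng.normal(0.0, sigma, size=(reps, n))
Y = beta0 + beta10 * X + U                       # H0 is true in every replication
beta1_hat = (Y - Y.mean(axis=1, keepdims=True)) @ Xd / (Xd ** 2).sum()
T = (beta1_hat - beta10) / se

size_hat = float((np.abs(T) > crit).mean())
print(size_hat)   # close to alpha = 0.05
```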
Power of the test when σ² is known (infeasible test)

- Under H1, β1 − β1,0 ≠ 0, and the distribution of T is not centered at zero: T ~ N( (β1 − β1,0) / √Var(β̂1), 1 ).
- When β1 − β1,0 > 0, the distribution of T is shifted to the right of zero.

[Figure: the density of T under H1, shifted to the right relative to the critical regions.]

- The rejection probability exceeds α under H1: power increases with the distance of β1 from β1,0 and decreases with Var(β̂1).
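Since T ~ N(δ, 1) with δ = (β1 − β1,0) / √Var(β̂1), the power has a closed form; a small sketch tabulating it for a few values of δ (power rises with |δ|, i.e. with the distance from H0, and |δ| itself grows as Var(β̂1) shrinks):

```python
from statistics import NormalDist

Z = NormalDist()

def power_two_sided(delta: float, alpha: float = 0.05) -> float:
    """P(|T| > z_{1-alpha/2}) when T ~ N(delta, 1), where
    delta = (beta1 - beta10) / sqrt(Var(beta1_hat))."""
    c = Z.inv_cdf(1 - alpha / 2)
    return (1.0 - Z.cdf(c - delta)) + Z.cdf(-c - delta)

for delta in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(f"delta={delta:.1f}  power={power_two_sided(delta):.3f}")
# At delta = 0 (H0 true) the rejection probability equals the size alpha.
```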