For Instructors
Solutions to End-of-Chapter Exercises
Chapter 2
Review of Probability
2.1.
(a) Probability distribution function for Y

    Outcome (number of heads)    Y = 0    Y = 1    Y = 2
    Probability                   0.25     0.50     0.25

(b) Cumulative probability distribution function for Y

    Outcome (number of heads)    Y < 0    0 ≤ Y < 1    1 ≤ Y < 2    Y ≥ 2
    Cumulative probability         0        0.25         0.75        1.0
(c) μ_Y = E(Y) = (0 × 0.25) + (1 × 0.50) + (2 × 0.25) = 1.00.
    Using Key Concept 2.3: var(Y) = E(Y²) − [E(Y)]², and
    E(Y²) = (0² × 0.25) + (1² × 0.50) + (2² × 0.25) = 1.50,
    so that var(Y) = E(Y²) − [E(Y)]² = 1.50 − (1.00)² = 0.50.
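As a numerical cross-check (not part of the original solution), the distribution of Y can be reproduced by enumerating the four equally likely outcomes of two fair coin tosses; a minimal Python sketch:

```python
from itertools import product

# Enumerate the four equally likely outcomes of two fair coin tosses
# (tails = 0, heads = 1) and count heads in each outcome.
outcomes = list(product([0, 1], repeat=2))
pmf = {y: sum(sum(o) == y for o in outcomes) / 4 for y in (0, 1, 2)}

mean = sum(y * p for y, p in pmf.items())            # E(Y)
second_moment = sum(y**2 * p for y, p in pmf.items())
variance = second_moment - mean**2                   # var(Y) = E(Y^2) - [E(Y)]^2

print(pmf)        # {0: 0.25, 1: 0.5, 2: 0.25}
print(mean)       # 1.0
print(variance)   # 0.5
```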
2.2.
We know from Table 2.2 that Pr(Y = 0) = 0.22, Pr(Y = 1) = 0.78, Pr(X = 0) = 0.30, Pr(X = 1) = 0.70. So
(a) μ_Y = E(Y) = 0 × Pr(Y = 0) + 1 × Pr(Y = 1)
         = 0 × 0.22 + 1 × 0.78 = 0.78,
    μ_X = E(X) = 0 × Pr(X = 0) + 1 × Pr(X = 1)
         = 0 × 0.30 + 1 × 0.70 = 0.70.
(b) σ_X² = E[(X − μ_X)²]
         = (0 − 0.70)² × Pr(X = 0) + (1 − 0.70)² × Pr(X = 1)
         = (−0.70)² × 0.30 + 0.30² × 0.70 = 0.21,
    σ_Y² = E[(Y − μ_Y)²]
         = (0 − 0.78)² × Pr(Y = 0) + (1 − 0.78)² × Pr(Y = 1)
         = (−0.78)² × 0.22 + 0.22² × 0.78 = 0.1716.
(c) σ_XY = cov(X, Y) = E[(X − μ_X)(Y − μ_Y)]
         = (0 − 0.70)(0 − 0.78) × Pr(X = 0, Y = 0)
         + (0 − 0.70)(1 − 0.78) × Pr(X = 0, Y = 1)
         + (1 − 0.70)(0 − 0.78) × Pr(X = 1, Y = 0)
         + (1 − 0.70)(1 − 0.78) × Pr(X = 1, Y = 1)
         = (−0.70) × (−0.78) × 0.15 + (−0.70) × 0.22 × 0.15
         + 0.30 × (−0.78) × 0.07 + 0.30 × 0.22 × 0.63
         = 0.084,
    corr(X, Y) = σ_XY/(σ_X σ_Y) = 0.084/√(0.21 × 0.1716) = 0.4425.
2.3.
For the two new random variables W = 3 + 6X and V = 20 − 7Y, we have:
(a) E(V) = E(20 − 7Y) = 20 − 7E(Y) = 20 − 7 × 0.78 = 14.54,
    E(W) = E(3 + 6X) = 3 + 6E(X) = 3 + 6 × 0.70 = 7.2.
(b) σ_W² = var(3 + 6X) = 6² × σ_X² = 36 × 0.21 = 7.56,
    σ_V² = var(20 − 7Y) = (−7)² × σ_Y² = 49 × 0.1716 = 8.4084.
(c) σ_WV = cov(3 + 6X, 20 − 7Y) = 6 × (−7) × cov(X, Y) = −42 × 0.084 = −3.528,
    corr(W, V) = σ_WV/(σ_W σ_V) = −3.528/√(7.56 × 8.4084) = −0.4425.
2.4.
(a) E(X³) = 0³ × (1 − p) + 1³ × p = p
(b) E(X^k) = 0^k × (1 − p) + 1^k × p = p
(c) E(X) = 0.3, and var(X) = E(X²) − [E(X)]² = 0.3 − 0.09 = 0.21, so σ = √0.21 = 0.46. To compute the skewness, use the formula from exercise 2.21:
    E(X − μ)³ = E(X³) − 3[E(X²)][E(X)] + 2[E(X)]³
              = 0.3 − 3 × 0.3² + 2 × 0.3³ = 0.084.
    Alternatively, E(X − μ)³ = [(1 − 0.3)³ × 0.3] + [(0 − 0.3)³ × 0.7] = 0.084.
    Thus, skewness = E(X − μ)³/σ³ = 0.084/0.46³ = 0.87.
    To compute the kurtosis, use the formula from exercise 2.21:
    E(X − μ)⁴ = E(X⁴) − 4[E(X)][E(X³)] + 6[E(X)]²[E(X²)] − 3[E(X)]⁴
              = 0.3 − 4 × 0.3² + 6 × 0.3³ − 3 × 0.3⁴ = 0.0777.
    Alternatively, E(X − μ)⁴ = [(1 − 0.3)⁴ × 0.3] + [(0 − 0.3)⁴ × 0.7] = 0.0777.
    Thus, kurtosis = E(X − μ)⁴/σ⁴ = 0.0777/0.46⁴ = 1.76.
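A quick check of the Bernoulli(0.3) central moments used above, computing them directly from the two-point pmf rather than from the exercise 2.21 formulas:

```python
# Bernoulli(p) with p = 0.3: compute central moments directly from the pmf.
p = 0.3
pmf = {0: 1 - p, 1: p}

mu = sum(x * q for x, q in pmf.items())               # = p
var = sum((x - mu) ** 2 * q for x, q in pmf.items())  # = p(1 - p)
m3 = sum((x - mu) ** 3 * q for x, q in pmf.items())
m4 = sum((x - mu) ** 4 * q for x, q in pmf.items())

sigma = var ** 0.5
print(mu, var)             # 0.3 0.21
print(m3, m3 / sigma**3)   # 0.084   skewness ~ 0.87
print(m4, m4 / sigma**4)   # 0.0777  kurtosis ~ 1.76
```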
2.5.
Let X denote temperature in °F and Y denote temperature in °C. Recall that Y = 0 when X = 32 and Y = 100 when X = 212; this implies Y = (100/180) × (X − 32), or Y = −17.78 + (5/9) × X. Using Key Concept 2.3, μ_X = 70°F implies that μ_Y = −17.78 + (5/9) × 70 = 21.11°C, and σ_X = 7°F implies σ_Y = (5/9) × 7 = 3.89°C.
2.6.
The table shows that Pr(X = 0, Y = 0) = 0.037, Pr(X = 0, Y = 1) = 0.622, Pr(X = 1, Y = 0) = 0.009, Pr(X = 1, Y = 1) = 0.332, Pr(X = 0) = 0.659, Pr(X = 1) = 0.341, Pr(Y = 0) = 0.046, Pr(Y = 1) = 0.954.
(a) E(Y) = μ_Y = 0 × Pr(Y = 0) + 1 × Pr(Y = 1)
         = 0 × 0.046 + 1 × 0.954 = 0.954
(b) Unemployment Rate = #(unemployed)/#(labor force)
                      = Pr(Y = 0) = 1 − Pr(Y = 1) = 1 − E(Y) = 1 − 0.954 = 0.046
(c) Calculate the conditional probabilities first:
    Pr(Y = 0 | X = 0) = Pr(X = 0, Y = 0)/Pr(X = 0) = 0.037/0.659 = 0.056,
    Pr(Y = 1 | X = 0) = Pr(X = 0, Y = 1)/Pr(X = 0) = 0.622/0.659 = 0.944,
    Pr(Y = 0 | X = 1) = Pr(X = 1, Y = 0)/Pr(X = 1) = 0.009/0.341 = 0.026,
    Pr(Y = 1 | X = 1) = Pr(X = 1, Y = 1)/Pr(X = 1) = 0.332/0.341 = 0.974.
    The conditional expectations are
    E(Y | X = 1) = 0 × Pr(Y = 0 | X = 1) + 1 × Pr(Y = 1 | X = 1)
                 = 0 × 0.026 + 1 × 0.974 = 0.974,
    E(Y | X = 0) = 0 × Pr(Y = 0 | X = 0) + 1 × Pr(Y = 1 | X = 0)
                 = 0 × 0.056 + 1 × 0.944 = 0.944.
(d) Use the solution to part (b):
    Unemployment rate for college graduates = 1 − E(Y | X = 1) = 1 − 0.974 = 0.026
    Unemployment rate for non-college graduates = 1 − E(Y | X = 0) = 1 − 0.944 = 0.056
(e) The probability that a randomly selected worker who is reported being unemployed is a college graduate is
    Pr(X = 1 | Y = 0) = Pr(X = 1, Y = 0)/Pr(Y = 0) = 0.009/0.046 = 0.196.
    The probability that this worker is a non-college graduate is
    Pr(X = 0 | Y = 0) = 1 − Pr(X = 1 | Y = 0) = 1 − 0.196 = 0.804.
(f) Educational achievement and employment status are not independent because they do not satisfy the condition that, for all values of x and y,
    Pr(X = x | Y = y) = Pr(X = x).
    For example, from part (e), Pr(X = 0 | Y = 0) = 0.804, while from the table Pr(X = 0) = 0.659.
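All of the conditional probabilities and rates in parts (a) through (e) follow mechanically from the four joint probabilities; a sketch for verification (variable names are ours):

```python
# Joint distribution of education (X) and employment status (Y), keyed by (x, y).
joint = {(0, 0): 0.037, (0, 1): 0.622, (1, 0): 0.009, (1, 1): 0.332}

pr_X = {x: sum(p for (xi, y), p in joint.items() if xi == x) for x in (0, 1)}
pr_Y = {y: sum(p for (x, yi), p in joint.items() if yi == y) for y in (0, 1)}

EY = sum(y * p for y, p in pr_Y.items())
print(EY, 1 - EY)                           # 0.954, unemployment rate 0.046

# Conditional unemployment rates by education level.
for x in (0, 1):
    pr_unemp_given_x = joint[(x, 0)] / pr_X[x]
    print(x, round(pr_unemp_given_x, 3))    # 0: 0.056, 1: 0.026

# Share of the unemployed who are college graduates.
print(joint[(1, 0)] / pr_Y[0])              # ~0.196
```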
2.7.
Using obvious notation, C = M + F; thus μ_C = μ_M + μ_F and σ_C² = σ_M² + σ_F² + 2cov(M, F). This implies
(a) μ_C = 40 + 45 = $85,000 per year.
(b) corr(M, F) = cov(M, F)/(σ_M σ_F), so that cov(M, F) = σ_M σ_F corr(M, F). Thus cov(M, F) = 12 × 18 × 0.80 = 172.80, where the units are squared thousands of dollars per year.
(c) σ_C² = σ_M² + σ_F² + 2cov(M, F), so that σ_C² = 12² + 18² + 2 × 172.80 = 813.60, and σ_C = √813.60 = 28.524 thousand dollars per year.
(d) First you need to look up the current Euro/dollar exchange rate in the Wall Street Journal, the Federal Reserve web page, or another financial data outlet. Suppose that this exchange rate is e (say e = 0.80 Euros per dollar); each dollar is therefore worth e Euros. The mean is therefore e × μ_C (in units of thousands of Euros per year), and the standard deviation is e × σ_C (in units of thousands of Euros per year). The correlation is unit-free, and is unchanged.
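Parts (a) through (c) amount to a few lines of arithmetic, which the following sketch reproduces (amounts in thousands of dollars per year):

```python
# Combined earnings C = M + F, in thousands of dollars per year.
mu_M, mu_F = 40.0, 45.0
sd_M, sd_F = 12.0, 18.0
corr_MF = 0.80

cov_MF = sd_M * sd_F * corr_MF          # cov(M, F) = sd_M * sd_F * corr(M, F)
mu_C = mu_M + mu_F
var_C = sd_M**2 + sd_F**2 + 2 * cov_MF
sd_C = var_C ** 0.5

print(cov_MF)   # 172.8
print(mu_C)     # 85.0  -> $85,000 per year
print(var_C)    # 813.6
print(sd_C)     # ~28.52
```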
2.8.
μ_Y = E(Y) = 1, σ_Y² = var(Y) = 4. With Z = ½(Y − 1),
    μ_Z = E[½(Y − 1)] = ½(μ_Y − 1) = ½(1 − 1) = 0,
    σ_Z² = var[½(Y − 1)] = ¼ σ_Y² = ¼ × 4 = 1.
2.9.
                               Value of Y                     Probability
                    14      22      30      40      65        distribution of X
    Value    1     0.02    0.05    0.10    0.03    0.01            0.21
    of X     5     0.17    0.15    0.05    0.02    0.01            0.40
             8     0.02    0.03    0.15    0.10    0.09            0.39
    Probability
    distribution
    of Y           0.21    0.23    0.30    0.15    0.11            1.00
(a) The probability distribution is given in the table above.
E (Y )  14  0.21  22  0.23  30  0.30  40  0.15  65  0.11  30.15
E (Y 2 )  142  0.21  222  0.23  302  0.30  402  0.15  652  0.11  1127.23
var(Y )  E (Y 2 )  [ E (Y )]2  218.21
 Y  14.77
(b) The conditional probability distribution of Y given X = 8 is given in the table below:

    Value of Y              14          22          30          40          65
    Pr(Y = y | X = 8)   0.02/0.39   0.03/0.39   0.15/0.39   0.10/0.39   0.09/0.39

    E(Y | X = 8) = 14 × (0.02/0.39) + 22 × (0.03/0.39) + 30 × (0.15/0.39)
                 + 40 × (0.10/0.39) + 65 × (0.09/0.39) = 39.21
    E(Y² | X = 8) = 14² × (0.02/0.39) + 22² × (0.03/0.39) + 30² × (0.15/0.39)
                  + 40² × (0.10/0.39) + 65² × (0.09/0.39) = 1778.7
    var(Y | X = 8) = 1778.7 − 39.21² = 241.65
    σ_(Y|X=8) = 15.54
(c) E(XY) = (1 × 14 × 0.02) + (1 × 22 × 0.05) + … + (8 × 65 × 0.09) = 171.7
    cov(X, Y) = E(XY) − E(X)E(Y) = 171.7 − 5.33 × 30.15 = 11.0
    corr(X, Y) = cov(X, Y)/(σ_X σ_Y) = 11.0/(2.60 × 14.77) = 0.286
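The marginal, conditional, and joint moments in (a) through (c) can be recomputed from the table; a plain-Python check (the list layout mirrors the table, with rows indexed by X):

```python
# Joint distribution of X (rows) and Y (columns) from the table above.
x_vals = [1, 5, 8]
y_vals = [14, 22, 30, 40, 65]
probs = [[0.02, 0.05, 0.10, 0.03, 0.01],   # X = 1
         [0.17, 0.15, 0.05, 0.02, 0.01],   # X = 5
         [0.02, 0.03, 0.15, 0.10, 0.09]]   # X = 8

pr_Y = [sum(row[j] for row in probs) for j in range(len(y_vals))]
EY = sum(y * p for y, p in zip(y_vals, pr_Y))
EY2 = sum(y**2 * p for y, p in zip(y_vals, pr_Y))
print(EY, EY2 - EY**2)                      # 30.15, var(Y) ~ 218.21

# Conditional distribution of Y given X = 8 (last row of the table).
pr_X8 = sum(probs[2])
cond = [p / pr_X8 for p in probs[2]]
print(sum(y * p for y, p in zip(y_vals, cond)))   # E(Y | X = 8) ~ 39.21

# Covariance of X and Y.
EX = sum(x * sum(row) for x, row in zip(x_vals, probs))
EXY = sum(x * y * probs[i][j] for i, x in enumerate(x_vals)
                               for j, y in enumerate(y_vals))
print(EXY, EXY - EX * EY)                   # ~171.7, cov ~ 11.0
```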
2.10.
Using the fact that if Y ~ N(μ_Y, σ_Y²) then (Y − μ_Y)/σ_Y ~ N(0, 1), and Appendix Table 1, we have
(a) Pr(Y ≤ 3) = Pr((Y − 1)/2 ≤ (3 − 1)/2) = Φ(1) = 0.8413
(b) Pr(Y > 0) = 1 − Pr(Y ≤ 0) = 1 − Pr((Y − 3)/3 ≤ (0 − 3)/3)
             = 1 − Φ(−1) = Φ(1) = 0.8413
(c) Pr(40 ≤ Y ≤ 52) = Pr((40 − 50)/5 ≤ (Y − 50)/5 ≤ (52 − 50)/5)
                    = Φ(0.4) − Φ(−2) = Φ(0.4) − [1 − Φ(2)]
                    = 0.6554 − 1 + 0.9772 = 0.6326
(d) Pr(6 ≤ Y ≤ 8) = Pr((6 − 5)/√2 ≤ (Y − 5)/√2 ≤ (8 − 5)/√2)
                  = Φ(2.1213) − Φ(0.7071)
                  = 0.9831 − 0.7602 = 0.2229
2.11.
(a) 0.90
(b) 0.05
(c) 0.05
(d) When Y ~ χ²_10, then Y/10 ~ F_10,∞.
(e) Y = Z², where Z ~ N(0, 1); thus Pr(Y ≤ 1) = Pr(−1 ≤ Z ≤ 1) = 0.68.
2.12.
(a) 0.05
(b) 0.950
(c) 0.953
(d) The t_df distribution and N(0, 1) are approximately the same when df is large.
(e) 0.10
(f) 0.01
2.13.
(a) E(Y²) = Var(Y) + μ_Y² = 1 + 0 = 1; E(W²) = Var(W) + μ_W² = 100 + 0 = 100.
(b) Y and W are symmetric around 0, thus skewness is equal to 0; because their mean is zero, this means that the third moment is zero.
(c) The kurtosis of the normal is 3, so 3 = E(Y − μ_Y)⁴/σ_Y⁴; solving yields E(Y⁴) = 3; a similar calculation yields the results for W.
(d) First, condition on X = 0, so that S = W:
    E(S | X = 0) = 0, E(S² | X = 0) = 100, E(S³ | X = 0) = 0, E(S⁴ | X = 0) = 3 × 100².
    Similarly,
    E(S | X = 1) = 0, E(S² | X = 1) = 1, E(S³ | X = 1) = 0, E(S⁴ | X = 1) = 3.
    From the law of iterated expectations,
    E(S) = E(S | X = 0) × Pr(X = 0) + E(S | X = 1) × Pr(X = 1) = 0
    E(S²) = E(S² | X = 0) × Pr(X = 0) + E(S² | X = 1) × Pr(X = 1) = 100 × 0.01 + 1 × 0.99 = 1.99
    E(S³) = E(S³ | X = 0) × Pr(X = 0) + E(S³ | X = 1) × Pr(X = 1) = 0
    E(S⁴) = E(S⁴ | X = 0) × Pr(X = 0) + E(S⁴ | X = 1) × Pr(X = 1)
          = 3 × 100² × 0.01 + 3 × 1 × 0.99 = 302.97
(e) μ_S = E(S) = 0, thus E(S − μ_S)³ = E(S³) = 0 from part (d). Thus skewness = 0. Similarly, σ_S² = E(S − μ_S)² = E(S²) = 1.99, and E(S − μ_S)⁴ = E(S⁴) = 302.97. Thus,
    kurtosis = 302.97/(1.99²) = 76.5.
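The mixture moments in (d) and (e) can be recomputed directly from the two conditional distributions and the weights Pr(X = 0) = 0.01 and Pr(X = 1) = 0.99 used above; a short check:

```python
# Moments of the mixture S: with prob. 0.01, S ~ N(0, 100); with prob. 0.99, S ~ N(0, 1).
p0, p1 = 0.01, 0.99
var_W, var_Y = 100.0, 1.0

# For a mean-zero normal: E(S^2 | .) = variance, E(S^4 | .) = 3 * variance^2.
ES2 = p0 * var_W + p1 * var_Y
ES4 = p0 * 3 * var_W**2 + p1 * 3 * var_Y**2

print(ES2)            # 1.99
print(ES4)            # 302.97
print(ES4 / ES2**2)   # kurtosis ~ 76.5
```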
2.14.
The central limit theorem suggests that when the sample size (n) is large, the distribution of the sample average Ȳ is approximately N(μ_Y, σ²_Ȳ) with σ²_Ȳ = σ_Y²/n. Given μ_Y = 100 and σ_Y² = 43.0,
(a) n = 100, σ²_Ȳ = σ_Y²/n = 43/100 = 0.43, and
    Pr(Ȳ ≤ 101) = Pr((Ȳ − 100)/√0.43 ≤ (101 − 100)/√0.43) ≈ Φ(1.525) = 0.9364
(b) n = 165, σ²_Ȳ = σ_Y²/n = 43/165 = 0.2606, and
    Pr(Ȳ > 98) = 1 − Pr(Ȳ ≤ 98)
               = 1 − Pr((Ȳ − 100)/√0.2606 ≤ (98 − 100)/√0.2606)
               ≈ 1 − Φ(−3.9178) = Φ(3.9178) = 1.000 (rounded to four decimal places)
(c) n = 64, σ²_Ȳ = σ_Y²/n = 43/64 = 0.6719, and
    Pr(101 ≤ Ȳ ≤ 103) = Pr((101 − 100)/√0.6719 ≤ (Ȳ − 100)/√0.6719 ≤ (103 − 100)/√0.6719)
                      ≈ Φ(3.6599) − Φ(1.2200) = 0.9999 − 0.8888 = 0.1111
2.15.
 9.6  10 Y  10 10.4  10 


(a) Pr (9.6  Y  10.4)  Pr 

4/n
4/n 
 4/n
10.4  10 
 9.6  10
 Pr 
Z

4/n 
 4/n
where Z ~ N(0, 1). Thus,
10.4  10 
 9.6  10
(i) n  20; Pr 
Z
  Pr (0.89  Z  0.89)  0.63
4/n 
 4/n
10.4  10 
 9.6  10
(ii) n  100; Pr 
Z
  Pr(2.00  Z  2.00)  0.954
4/n 
 4/n
10.4  10 
 9.6  10
Z
  Pr(6.32  Z  6.32)  1.000
4/n 
 4/n
(iii) n  1000; Pr 
 c
Y  10
c 


(b) Pr (10  c  Y  10  c)  Pr 

4/n
4/n 
 4/n
c 
 c
 Pr 
Z
.
4/n 
 4/n
As n get large
c
gets large, and the probability converges to 1.
4/n
(c) This follows from (b) and the definition of convergence in probability given in Key Concept 2.6.
2.16.
There are several ways to do this. Here is one way. Generate n draws of Y: Y_1, Y_2, …, Y_n. Let X_i = 1 if Y_i < 3.6, and otherwise set X_i = 0. Notice that X_i is a Bernoulli random variable with μ_X = Pr(X = 1) = Pr(Y < 3.6). Compute X̄. Because X̄ converges in probability to μ_X = Pr(X = 1) = Pr(Y < 3.6), X̄ will be an accurate approximation if n is large.
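The recipe above translates directly into code. The sketch below uses N(5, 100) purely as an illustrative stand-in for the distribution of Y, since the exercise's distribution is not restated here; the function name draw_Y is ours.

```python
import random

def draw_Y():
    # Illustrative stand-in for the distribution of Y in the exercise
    # (here N(5, 100), i.e. mean 5 and standard deviation 10); replace
    # this with whatever distribution Y actually has.
    return random.gauss(5, 10)

n = 1_000_000
# X_i = 1 if Y_i < 3.6, else 0; Xbar estimates Pr(Y < 3.6).
x_bar = sum(draw_Y() < 3.6 for _ in range(n)) / n
print(x_bar)   # close to Pr(Y < 3.6) when n is large
```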
2.17.
μ_Y = p = 0.4 and σ_Y² = p(1 − p) = 0.4 × 0.6 = 0.24.
(a) (i) With n = 100,
        Pr(Ȳ ≥ 0.43) = Pr((Ȳ − 0.4)/√(0.24/n) ≥ (0.43 − 0.4)/√(0.24/n))
                     = Pr((Ȳ − 0.4)/√(0.24/n) ≥ 0.6124) = 0.27.
    (ii) With n = 400,
        Pr(Ȳ ≤ 0.37) = Pr((Ȳ − 0.4)/√(0.24/n) ≤ (0.37 − 0.4)/√(0.24/n))
                     = Pr((Ȳ − 0.4)/√(0.24/n) ≤ −1.22) = 0.11.
(b) We know Pr(−1.96 ≤ Z ≤ 1.96) = 0.95, thus we want n to satisfy
    (0.41 − 0.40)/√(0.24/n) > 1.96 and (0.39 − 0.40)/√(0.24/n) < −1.96.
    Solving these inequalities yields n ≥ 9220.

2.18.
Pr(Y = $0) = 0.95, Pr(Y = $20,000) = 0.05.
(a) The mean of Y is
    μ_Y = 0 × Pr(Y = $0) + 20,000 × Pr(Y = $20,000) = $1000.
    The variance of Y is
    σ_Y² = E[(Y − μ_Y)²]
         = (0 − 1000)² × Pr(Y = 0) + (20,000 − 1000)² × Pr(Y = 20,000)
         = (−1000)² × 0.95 + 19,000² × 0.05 = 1.9 × 10⁷,
    so the standard deviation of Y is σ_Y = (1.9 × 10⁷)^½ = $4359.
(b) (i) E(Ȳ) = μ_Y = $1000, σ²_Ȳ = σ_Y²/n = (1.9 × 10⁷)/100 = 1.9 × 10⁵.
    (ii) Using the central limit theorem,
    Pr(Ȳ > 2000) = 1 − Pr(Ȳ ≤ 2000)
                 = 1 − Pr((Ȳ − 1000)/√(1.9 × 10⁵) ≤ (2000 − 1000)/√(1.9 × 10⁵))
                 ≈ 1 − Φ(2.2942) = 1 − 0.9891 = 0.0109.
2.19.
(a) Pr(Y = y_j) = Σ_{i=1}^{l} Pr(X = x_i, Y = y_j)
               = Σ_{i=1}^{l} Pr(Y = y_j | X = x_i) Pr(X = x_i)
(b) E(Y) = Σ_{j=1}^{k} y_j Pr(Y = y_j) = Σ_{j=1}^{k} y_j Σ_{i=1}^{l} Pr(Y = y_j | X = x_i) Pr(X = x_i)
         = Σ_{i=1}^{l} [Σ_{j=1}^{k} y_j Pr(Y = y_j | X = x_i)] Pr(X = x_i)
         = Σ_{i=1}^{l} E(Y | X = x_i) Pr(X = x_i)
(c) When X and Y are independent,
    Pr(X = x_i, Y = y_j) = Pr(X = x_i) Pr(Y = y_j),
    so
    σ_XY = E[(X − μ_X)(Y − μ_Y)]
         = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i, Y = y_j)
         = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i) Pr(Y = y_j)
         = [Σ_{i=1}^{l} (x_i − μ_X) Pr(X = x_i)] × [Σ_{j=1}^{k} (y_j − μ_Y) Pr(Y = y_j)]
         = E(X − μ_X) E(Y − μ_Y) = 0 × 0 = 0,
    corr(X, Y) = σ_XY/(σ_X σ_Y) = 0/(σ_X σ_Y) = 0.
2.20.
(a) Pr(Y = y_i) = Σ_{j=1}^{l} Σ_{h=1}^{m} Pr(Y = y_i | X = x_j, Z = z_h) Pr(X = x_j, Z = z_h)
(b) E(Y) = Σ_{i=1}^{k} y_i Pr(Y = y_i)
         = Σ_{i=1}^{k} y_i Σ_{j=1}^{l} Σ_{h=1}^{m} Pr(Y = y_i | X = x_j, Z = z_h) Pr(X = x_j, Z = z_h)
         = Σ_{j=1}^{l} Σ_{h=1}^{m} [Σ_{i=1}^{k} y_i Pr(Y = y_i | X = x_j, Z = z_h)] Pr(X = x_j, Z = z_h)
         = Σ_{j=1}^{l} Σ_{h=1}^{m} E(Y | X = x_j, Z = z_h) Pr(X = x_j, Z = z_h),
    where the first line uses the definition of the mean, the second uses (a), the third is a rearrangement, and the final line uses the definition of the conditional expectation.
2.21.
(a) E(X − μ)³ = E[(X − μ)²(X − μ)] = E[X³ − 2μX² + μ²X − μX² + 2μ²X − μ³]
             = E(X³) − 3μE(X²) + 3μ²E(X) − μ³
             = E(X³) − 3E(X²)E(X) + 3E(X)[E(X)]² − [E(X)]³
             = E(X³) − 3E(X²)E(X) + 2[E(X)]³
(b) E(X − μ)⁴ = E[(X³ − 3μX² + 3μ²X − μ³)(X − μ)]
             = E[X⁴ − 3μX³ + 3μ²X² − μ³X − μX³ + 3μ²X² − 3μ³X + μ⁴]
             = E(X⁴) − 4E(X³)E(X) + 6E(X²)[E(X)]² − 4E(X)[E(X)]³ + [E(X)]⁴
             = E(X⁴) − 4[E(X)][E(X³)] + 6[E(X)]²[E(X²)] − 3[E(X)]⁴
2.22.
The mean and variance of R are given by
    μ = w × 0.08 + (1 − w) × 0.05
    σ² = w² × 0.07² + (1 − w)² × 0.04² + 2 × w × (1 − w) × [0.07 × 0.04 × 0.25]
where 0.07 × 0.04 × 0.25 = cov(R_s, R_b) follows from the definition of the correlation between R_s and R_b.
(a) μ = 0.065; σ = 0.044
(b) μ = 0.0725; σ = 0.056
(c) w = 1 maximizes μ; σ = 0.07 for this value of w.
(d) The derivative of σ² with respect to w is
    dσ²/dw = 2w × 0.07² − 2(1 − w) × 0.04² + (2 − 4w) × [0.07 × 0.04 × 0.25]
           = 0.0102w − 0.0018.
    Setting this equal to zero and solving for w yields w = 18/102 = 0.18. (Notice that the second derivative is positive, so that this is the global minimum.) With w = 0.18, σ_R = 0.038.
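The numbers in (a), (b), and (d) can be reproduced from the two formulas above. In the sketch below, the portfolio weights 0.5 and 0.75 are inferred from the reported answers (they reproduce μ = 0.065, σ ≈ 0.044 and μ = 0.0725, σ ≈ 0.056), so treat them as assumptions rather than as quoted from the exercise:

```python
# Portfolio return R = w*Rs + (1 - w)*Rb with the moments used above.
mu_s, mu_b = 0.08, 0.05
sd_s, sd_b = 0.07, 0.04
cov_sb = 0.07 * 0.04 * 0.25            # from corr(Rs, Rb) = 0.25

def moments(w):
    mu = w * mu_s + (1 - w) * mu_b
    var = w**2 * sd_s**2 + (1 - w)**2 * sd_b**2 + 2 * w * (1 - w) * cov_sb
    return mu, var ** 0.5

print(moments(0.5))    # (0.065, ~0.044)
print(moments(0.75))   # (0.0725, ~0.056)

# Variance-minimizing weight from d(sigma^2)/dw = 0.0102*w - 0.0018 = 0.
w_min = 0.0018 / 0.0102
print(w_min, moments(w_min)[1])   # ~0.18, sigma ~0.038
```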
2.23.
X and Z are two independently distributed standard normal random variables, so μ_X = μ_Z = 0, σ_X² = σ_Z² = 1, and σ_XZ = 0.
(a) Because of the independence between X and Z, Pr(Z = z | X = x) = Pr(Z = z), and E(Z | X) = E(Z) = 0. Thus E(Y | X) = E(X² + Z | X) = E(X² | X) + E(Z | X) = X² + 0 = X².
(b) E(X²) = σ_X² + μ_X² = 1, and μ_Y = E(X² + Z) = E(X²) + μ_Z = 1 + 0 = 1.
(c) E(XY) = E(X³ + ZX) = E(X³) + E(ZX). Using the fact that the odd moments of a standard normal random variable are all zero, we have E(X³) = 0. Using the independence between X and Z, we have E(ZX) = μ_Z μ_X = 0. Thus E(XY) = E(X³) + E(ZX) = 0.
(d) cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[(X − 0)(Y − 1)]
             = E(XY − X) = E(XY) − E(X)
             = 0 − 0 = 0,
    corr(X, Y) = σ_XY/(σ_X σ_Y) = 0/(σ_X σ_Y) = 0.
2.24.
(a) E(Y_i²) = σ² + μ² = σ², and the result follows directly.
(b) Y_i/σ is distributed i.i.d. N(0, 1), W = Σ_{i=1}^{n} (Y_i/σ)², and the result follows from the definition of a χ²_n random variable.
(c) E(W) = E[Σ_{i=1}^{n} Y_i²/σ²] = Σ_{i=1}^{n} E(Y_i²/σ²) = n.
(d) Write
    V = Y_1/√(Σ_{i=2}^{n} Y_i²/(n − 1)) = (Y_1/σ)/√(Σ_{i=2}^{n} (Y_i/σ)²/(n − 1)),
    which follows from dividing the numerator and denominator by σ. Y_1/σ ~ N(0, 1), Σ_{i=2}^{n} (Y_i/σ)² ~ χ²_{n−1}, and Y_1/σ and Σ_{i=2}^{n} (Y_i/σ)² are independent. The result then follows from the definition of the t distribution.
2.25.
(a) Σ_{i=1}^{n} a x_i = (a x_1 + a x_2 + a x_3 + … + a x_n) = a(x_1 + x_2 + x_3 + … + x_n) = a Σ_{i=1}^{n} x_i
(b) Σ_{i=1}^{n} (x_i + y_i) = (x_1 + y_1 + x_2 + y_2 + … + x_n + y_n)
                           = (x_1 + x_2 + … + x_n) + (y_1 + y_2 + … + y_n)
                           = Σ_{i=1}^{n} x_i + Σ_{i=1}^{n} y_i
(c) Σ_{i=1}^{n} a = (a + a + a + … + a) = na
(d) Σ_{i=1}^{n} (a + b x_i + c y_i)² = Σ_{i=1}^{n} (a² + b² x_i² + c² y_i² + 2ab x_i + 2ac y_i + 2bc x_i y_i)
                                     = na² + b² Σ_{i=1}^{n} x_i² + c² Σ_{i=1}^{n} y_i² + 2ab Σ_{i=1}^{n} x_i + 2ac Σ_{i=1}^{n} y_i + 2bc Σ_{i=1}^{n} x_i y_i
2.26.
(a) corr(Y_i, Y_j) = cov(Y_i, Y_j)/(σ_{Y_i} σ_{Y_j}) = cov(Y_i, Y_j)/(σ_Y σ_Y) = cov(Y_i, Y_j)/σ_Y² = ρ, where the first equality uses the definition of correlation, the second uses the fact that Y_i and Y_j have the same variance (and standard deviation), the third equality uses the definition of standard deviation, and the fourth uses the correlation given in the problem. Solving for cov(Y_i, Y_j) from the last equality gives the desired result.
(b) Ȳ = ½(Y_1 + Y_2), so that E(Ȳ) = ½[E(Y_1) + E(Y_2)] = μ_Y, and
    var(Ȳ) = ¼ var(Y_1) + ¼ var(Y_2) + (2/4) cov(Y_1, Y_2) = σ_Y²/2 + ρσ_Y²/2.
(c) Ȳ = (1/n) Σ_{i=1}^{n} Y_i, so that E(Ȳ) = (1/n) Σ_{i=1}^{n} E(Y_i) = (1/n) Σ_{i=1}^{n} μ_Y = μ_Y, and
    var(Ȳ) = var((1/n) Σ_{i=1}^{n} Y_i)
           = (1/n²) Σ_{i=1}^{n} var(Y_i) + (2/n²) Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} cov(Y_i, Y_j)
           = (1/n²) Σ_{i=1}^{n} σ_Y² + (2/n²) Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} ρσ_Y²
           = σ_Y²/n + (2/n²) × [n(n − 1)/2] × ρσ_Y²
           = σ_Y²/n + (1 − 1/n) ρσ_Y²,
    where the fourth line uses Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} a = a(1 + 2 + 3 + … + (n − 1)) = a n(n − 1)/2 for any variable a.
(d) When n is large, σ_Y²/n → 0 and 1/n → 0, and the result follows from (c).
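The formula in (c) can be checked numerically: for equicorrelated Y's, var(Ȳ) equals (1/n²) times the sum of all entries of the covariance matrix, which has n diagonal entries σ_Y² and n(n − 1) off-diagonal entries ρσ_Y². The σ² and ρ values below are illustrative, not from the exercise:

```python
# Numerical check of var(Ybar) = sigma^2/n + (1 - 1/n)*rho*sigma^2
# for an n x n covariance matrix with common variance and correlation.
sigma2, rho = 2.0, 0.3        # illustrative values (not from the exercise)

for n in (2, 5, 50, 5000):
    # var(Ybar) = (1/n^2) * sum of all entries of the covariance matrix.
    total = n * sigma2 + n * (n - 1) * rho * sigma2
    var_ybar = total / n**2
    formula = sigma2 / n + (1 - 1 / n) * rho * sigma2
    print(n, var_ybar, formula)   # identical; both -> rho*sigma2 as n grows
```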
2.27.
(a) E(W) = E[E(W | Z)] = E[E(X − E(X | Z) | Z)] = E[E(X | Z) − E(X | Z)] = 0.
(b) E(WZ) = E[E(WZ | Z)] = E[Z E(W | Z)] = E[Z × 0] = 0.
(c) Using the hint: V = W − h(Z), so that E(V²) = E(W²) + E[h(Z)²] − 2E[W × h(Z)]. Using an argument like that in (b), E[W × h(Z)] = 0. Thus E(V²) = E(W²) + E[h(Z)²], and the result follows by recognizing that E[h(Z)²] ≥ 0 because h(z)² ≥ 0 for any value of z.
For Instructors
Solutions to End-of-Chapter Empirical Exercises
Chapter 3
Review of Statistics
3.1
(a) Average Hourly Earnings, Nominal $’s

                         Mean     SE(Mean)    95% Confidence Interval
    AHE1992              11.63    0.064       11.50 – 11.75
    AHE2008              18.98    0.115       18.75 – 19.20

                         Difference   SE(Difference)   95% Confidence Interval
    AHE2008 − AHE1992    7.35         0.132            7.09 – 7.61
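The difference row can be reproduced from the two means and standard errors, assuming the 1992 and 2008 samples are independent so that SE(difference) = √(SE₁² + SE₂²) and the 95% interval is the estimate ± 1.96 × SE; a sketch:

```python
# Reproduce the "difference" row of the table, assuming independent samples.
mean_92, se_92 = 11.63, 0.064
mean_08, se_08 = 18.98, 0.115

diff = mean_08 - mean_92
se_diff = (se_92**2 + se_08**2) ** 0.5
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

print(diff)      # 7.35
print(se_diff)   # ~0.132
print(ci)        # ~(7.09, 7.61)
```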
(b) Average Hourly Earnings, Real $2008

                         Mean     SE(Mean)    95% Confidence Interval
    AHE1992              17.83    0.099       17.63 – 18.03
    AHE2008              18.98    0.115       18.75 – 19.20

                         Difference   SE(Difference)   95% Confidence Interval
    AHE2008 − AHE1992    1.14         0.152            0.85 – 1.44
(c) The results from part (b) adjust for changes in purchasing power. These results should be
used.
(d) Average Hourly Earnings in 2008

                            Mean     SE(Mean)    95% Confidence Interval
    High School             15.33    0.122       15.09 – 15.57
    College                 22.91    0.180       22.56 – 23.26

                            Difference   SE(Difference)   95% Confidence Interval
    College − High School   7.58         0.217            7.15 – 8.00

(e) Average Hourly Earnings in 1992 (in $2008)

                            Mean     SE(Mean)    95% Confidence Interval
    High School             15.31    0.103       15.11 – 15.52
    College                 21.78    0.171       21.45 – 22.12

                            Difference   SE(Difference)   95% Confidence Interval
    College − High School   6.47         0.200            6.08 – 6.86
(f) Average Hourly Earnings in 2008

                                   Mean     SE(Mean)    95% Confidence Interval
    AHE_HS,2008 − AHE_HS,1992      0.02     0.160       −0.29 – 0.33
    AHE_Col,2008 − AHE_Col,1992    1.13     0.248       0.64 – 1.61
    Col-HS Gap (1992)              6.47     0.200       6.08 – 6.86
    Col-HS Gap (2008)              7.58     0.217       7.15 – 8.00

                                   Difference   SE(Difference)   95% Confidence Interval
    Gap2008 − Gap1992              1.11         0.295            0.53 – 1.69
Wages of high school graduates increased by an estimated 0.02 dollars per hour (with a 95% confidence interval of −0.29 to 0.33); wages of college graduates increased by an estimated 1.13 dollars per hour (with a 95% confidence interval of 0.64 to 1.61). The college–high school gap increased by an estimated 1.11 dollars per hour (with a 95% confidence interval of 0.53 to 1.69).
(g) Gender Gap in Earnings for High School Graduates

    Year    Ȳ_m      s_m     n_m     Ȳ_w      s_w     n_w     Ȳ_m − Ȳ_w   SE(Ȳ_m − Ȳ_w)   95% CI
    1992    16.55    7.46    2769    13.48    5.96    1874    3.07        0.20            2.68 – 3.45
    2008    16.59    8.16    2537    13.15    6.27    1465    3.43        0.23            2.98 – 3.89
There is a large and statistically significant gender gap in earnings for high school graduates. In
2008 the estimated gap was $3.43 per hour; in 1992 the estimated gap was $3.07 per hour (in
$2008). The increase in the gender gap is somewhat smaller for high school graduates than it was
for college graduates.