Contents

I  Probability Review

1  Probability Review
   1.1  Functions and moments
   1.2  Probability distributions
        1.2.1  Bernoulli distribution
        1.2.2  Uniform distribution
        1.2.3  Exponential distribution
   1.3  Variance
   1.4  Normal approximation
   1.5  Conditional probability and expectation
   1.6  Conditional variance
   Exercises
   Solutions

II  Parameter Estimation

2  Estimator Quality
   2.1  Bias
   2.2  Consistency
   2.3  Efficiency and Mean Square Error
   Exercises
   Solutions

3  Maximum Likelihood
   3.1  Likelihood
   3.2  Maximum Likelihood Estimation
   Exercises
   Solutions

4  Variance of Maximum Likelihood Estimator
   Exercises
   Solutions

5  Sufficient Statistics
   Exercises
   Solutions

III  Hypothesis Testing

6  Hypothesis Testing
   6.1  Background
   6.2  Typical exam questions
        6.2.1  Calculate significance or power
        6.2.2  Determine critical values
   Exercises
   Solutions

7  Confidence Intervals and Sample Size
   7.1  Confidence intervals
   7.2  Sample size
   Exercises
   Solutions

8  Confidence Intervals for Means
   8.1  χ² distribution
   8.2  Student's t distribution
   8.3  Testing the mean of a Bernoulli population
   8.4  Testing the difference of means from two populations
        8.4.1  Two unpaired normal populations
        8.4.2  Two paired normal populations
        8.4.3  Two Bernoulli populations
   Exercises
   Solutions

9  Chi Square Tests
   9.1  One-dimensional chi-square
   9.2  Two-dimensional chi-square
   Exercises
   Solutions

10  Confidence Intervals for Variances
    10.1  Testing variances
    10.2  Testing ratios of variances; the F distribution
    Exercises
    Solutions

11  Linear Regression
    Exercises
    Solutions

12  Linear Regression: Measures of Fit
    12.1  Standard error of the regression
    12.2  R²: the coefficient of determination
    12.3  t statistic
    12.4  F statistic
    12.5  Multiple regression
    12.6  Comparison of models
    Exercises
    Solutions

13  ANOVA
    Exercises
    Solutions

14  Uniformly Most Powerful Critical Regions
    Exercises
    Solutions

15  Likelihood Ratio Tests
    Exercises
    Solutions

IV  Bayesian Estimation

16  Bayesian Estimation
    16.1  Background
    16.2  Loss functions
    16.3  Interval estimates
    Exercises
    Solutions

17  Beta-Bernoulli Conjugate Prior Pair
    Exercises
    Solutions

18  Normal-Normal Conjugate Prior Pair
    Exercises
    Solutions

19  Gamma-Poisson Conjugate Prior Pair
    Exercises
    Solutions

V  Nonparametric Methods for Hypothesis Testing

20  Order Statistics
    Exercises
    Solutions

21  Sign Test
    Exercises
    Solutions

22  Wilcoxon Tests
    22.1  Signed rank test
    22.2  Rank sum test
    Exercises
    Solutions

23  The Runs Test
    Exercises
    Solutions

24  Rank Correlation Coefficients
    Exercises
    Solutions

VI  Poisson Processes

25  The Poisson Process: Probabilities of Events
    25.1  Introduction
    25.2  Probabilities—Homogeneous Process
    25.3  Probabilities—Non-Homogeneous Process
    Exercises
    Solutions

26  The Poisson Process: Time To Next Event
    Exercises
    Solutions

27  The Poisson Process: Thinning
    27.1  Constant Probabilities
    27.2  Non-Constant Probabilities
    Exercises
    Solutions

28  The Poisson Process: Sums and Mixtures
    28.1  Sums of Poisson Processes
    28.2  Mixtures of Poisson Processes
    Exercises
    Solutions

29  Compound Poisson Processes
    29.1  Definition and Moments
    29.2  Sums of Compound Distributions
    Exercises
    Solutions

VII  Practice Exams

1  Practice Exam 1
2  Practice Exam 2
3  Practice Exam 3
4  Practice Exam 4
5  Practice Exam 5
6  Practice Exam 6

Appendices

A  Solutions to the Practice Exams
   Solutions for Practice Exam 1
   Solutions for Practice Exam 2
   Solutions for Practice Exam 3
   Solutions for Practice Exam 4
   Solutions for Practice Exam 5
   Solutions for Practice Exam 6

B  Solutions to Statistics and Stochastic Process Questions on Old CAS 3 and 3L Exams
   B.1  Solutions to CAS Exam 3, Spring 2005
   B.2  Solutions to CAS Exam 3, Fall 2005
   B.3  Solutions to CAS Exam 3, Spring 2006
   B.4  Solutions to CAS Exam 3, Fall 2006
   B.5  Solutions to CAS Exam 3, Spring 2007
   B.6  Solutions to CAS Exam 3, Fall 2007
   B.7  Solutions to CAS Exam 3L, Spring 2008
   B.8  Solutions to CAS Exam 3L, Fall 2008
   B.9  Solutions to CAS Exam 3L, Spring 2009
   B.10  Solutions to CAS Exam 3L, Fall 2009
   B.11  Solutions to CAS Exam 3L, Spring 2010
   B.12  Solutions to CAS Exam 3L, Fall 2010
   B.13  Solutions to CAS Exam 3L, Spring 2011
   B.14  Solutions to CAS Exam 3L, Fall 2011
   B.15  Solutions to CAS Exam 3L, Spring 2012
   B.16  Solutions to CAS Exam 3L, Fall 2012
   B.17  Solutions to CAS Exam 3L, Spring 2013
   B.18  Solutions to CAS Exam 3L, Fall 2013
   B.19  Solutions to CAS Exam ST, Spring 2014
   B.20  Solutions to CAS Exam ST, Fall 2014
   B.21  Solutions to CAS Exam ST, Spring 2015

C  Lessons Corresponding to Questions on Released and Practice Exams

CAS ST Study Manual 2nd edition 3rd printing
Copyright ©2015 ASM
Lesson 2
Estimator Quality
We are about to learn how to estimate parameters of a distribution. In other words, we may believe that
the random phenomenon we are analyzing follows a specific probability distribution, like exponential
or Pareto, but need to estimate the parameters of the distribution, θ for an exponential or θ and α for a
Pareto. Before we discuss estimation methods, let’s consider the following question: How do we measure
the quality of an estimator? There are several ways to measure the quality of an estimator.
In the following discussion, θ is a parameter to be estimated, θ̂ is an estimator, and θ̂n is an estimator
based on n observations.
2.1  Bias

A desirable property of an estimator is that its expected value, based on the assumed underlying distribution, equals the parameter we're estimating. In other words, E[θ̂] = θ. We define the bias, bias_θ̂(θ), as

    bias_θ̂(θ) = E[θ̂] − θ    (2.1)

If bias_θ̂(θ) = 0, then we say that θ̂ is an unbiased estimator of θ. If lim_{n→∞} E[θ̂_n] = θ, then we say that θ̂ is asymptotically unbiased.
The sample mean is an unbiased estimator of the true mean. We can easily see this. The sample mean is defined by

    x̄ = (1/n) ∑_{i=1}^{n} x_i

However, E[x_i] = µ by definition of expected value. So

    E[x̄] = (1/n) ∑_{i=1}^{n} E[x_i] = nµ/n = µ

proving that x̄ is an unbiased estimator of µ.
The sample variance, defined by

    s² = ∑_i (x_i − x̄)² / (n − 1)

is an unbiased estimator of the true variance, σ².
The empirical variance, defined by

    σ̂² = ∑_i (x_i − x̄)² / n

is a biased estimator of the true variance, σ². Its bias can be calculated as follows:

    σ̂² = ((n − 1)/n) s²
    E[σ̂²] = ((n − 1)/n) E[s²] = (n − 1)σ²/n

because we know that s² is unbiased, so E[s²] = σ². Therefore

    bias_σ̂²(σ²) = (n − 1)σ²/n − σ² = −σ²/n

The bias goes to 0 as n → ∞, so the empirical variance is an asymptotically unbiased estimator of the true variance.
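The bias calculation above is easy to check numerically. Here is a minimal Monte Carlo sketch in Python (not part of the manual; the distribution and sample size are illustrative choices): dividing by n − 1 gives an unbiased variance estimate, while dividing by n understates σ² by the factor (n − 1)/n.

```python
# Monte Carlo check of the bias of the empirical variance: with n = 5 draws
# from a Normal(0, 2) population (sigma^2 = 4), the sample variance (divide
# by n-1) averages to sigma^2 = 4.0, while the empirical variance (divide
# by n) averages to (n-1)/n * sigma^2 = 3.2.
import random

random.seed(42)
n, trials, sigma2 = 5, 200_000, 4.0

sum_s2 = sum_emp = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 2) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_s2 += ss / (n - 1)   # sample variance
    sum_emp += ss / n        # empirical variance

print(sum_s2 / trials)   # close to 4.0
print(sum_emp / trials)  # close to 3.2
```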
Example 2A You are given a sample x₁, x₂, . . . , x_n. Which of the following estimators are unbiased?
1. For an exponential distribution, x̄ as an estimator for θ.
2. For a Pareto distribution with known θ, 1 + θ/x̄ as an estimator for α.
3. For a uniform distribution on [0, θ], max x_i as an estimator for θ.
Answer:
1. The sample mean is an unbiased estimator of the true mean, and θ is the mean of an exponential, so x̄ is an unbiased estimator of θ.
2. The mean of a Pareto is µ = θ/(α − 1). If θ is known, then we see from this that α = 1 + θ/µ. The expected value of the proposed estimator is 1 + θ E[1/x̄]. In general, the expected value of a reciprocal is not the reciprocal of the expected value:

    E[1/x̄] ≠ 1/E[x̄]

So this estimator of α is biased.
3. We will discuss the distribution of the maximum of a sample in lesson 20, but let's calculate it here. Let Y be the maximum of the sample from a uniform distribution on [0, θ]. The probability that the maximum is less than x is the probability that all the observations are less than x, or

    F_Y(x) = (x/θ)ⁿ    0 ≤ x ≤ θ

Differentiating,

    f_Y(x) = n x^{n−1}/θⁿ    0 ≤ x ≤ θ

The expected value of Y is

    E[Y] = ∫₀^θ x f_Y(x) dx = ∫₀^θ (n xⁿ/θⁿ) dx = (n/θⁿ) (x^{n+1}/(n + 1)) |₀^θ = nθ/(n + 1)

So the bias of the maximum is nθ/(n + 1) − θ = −θ/(n + 1).
However, as n → ∞, the bias goes to 0. Therefore, the estimator is asymptotically unbiased.
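A quick simulation (a Python sketch, not from the manual; the values of θ and n are illustrative) confirms the formula E[Y] = nθ/(n + 1) and the bias −θ/(n + 1):

```python
# Simulate the maximum of n = 4 observations from Uniform[0, 10]:
# E[max] should be n*theta/(n+1) = 8, so the bias is -theta/(n+1) = -2.
import random

random.seed(0)
theta, n, trials = 10.0, 4, 200_000

total = sum(max(random.uniform(0, theta) for _ in range(n)) for _ in range(trials))
emp_mean = total / trials
print(emp_mean)          # close to 8.0
print(emp_mean - theta)  # empirical bias, close to -2.0
```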
Quiz 2-1 X is an observation from a uniform distribution on [0, θ]. 2X is an unbiased estimator of θ. Calculate the bias of (2X)² as an estimator for θ².
While unbiasedness is desirable, it is not the only measure of estimator quality. Many biased estimators
are satisfactory as long as they’re asymptotically unbiased. Conversely, an unbiased estimator is good on
the average, but may be a poor estimator. It’s like the statistician with his head in the freezer and his feet
in boiling water; he may be OK on the average, but quite uncomfortable. Good estimators are always close
to the correct value, and it’s not enough that they are correct on the average.
2.2  Consistency

An estimator is (weakly) consistent if the probability that it is different from the parameter by more than ε goes to 0 as n, the sample size, goes to infinity. In other words,

    lim_{n→∞} Pr(|θ̂_n − θ| > ε) = 0    for any ε > 0

A sufficient but not necessary condition for consistency is that the estimator is asymptotically unbiased and its variance goes to 0 as n goes to infinity. Thus, the sample mean is a consistent estimator of the true mean for an exponential or a gamma distribution, but may not be consistent for a Pareto distribution with α ≤ 2, since then the variance of the distribution, and therefore of the sample mean, is infinite.
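Consistency can be seen directly by simulation. In this Python sketch (illustrative, not part of the manual; θ, ε, and the sample sizes are arbitrary choices), the probability that the sample mean of an exponential with θ = 2 misses the true mean by more than ε = 0.5 shrinks as the sample size grows:

```python
# Estimate Pr(|x_bar - theta| > eps) for an exponential with mean theta = 2
# at growing sample sizes; weak consistency says this probability -> 0.
import random

random.seed(1)
theta, eps, trials = 2.0, 0.5, 5_000
probs = {}
for n in (5, 50, 500):
    exceed = sum(
        1
        for _ in range(trials)
        if abs(sum(random.expovariate(1 / theta) for _ in range(n)) / n - theta) > eps
    )
    probs[n] = exceed / trials
    print(n, probs[n])  # the probability falls toward 0 as n grows
```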
2.3  Efficiency and Mean Square Error

An estimator is efficient if its variance is low. An estimator is more efficient than another estimator if its variance is lower than the other estimator's. For two estimators θ̂₁ and θ̂₂, the relative efficiency of θ̂₁ with respect to θ̂₂ is

    Relative efficiency of θ̂₁ to θ̂₂ = Var(θ̂₂)/Var(θ̂₁)    (2.2)

The mean square error of an estimator is the expected value of the squared difference between the estimator and the parameter:¹

    MSE_θ̂(θ) = E[(θ̂ − θ)²]    (2.3)

The MSE is the sum of the bias squared and the variance:

    MSE_θ̂(θ) = bias_θ̂(θ)² + Var(θ̂)    (2.4)

This is a convenient formula. It follows that if the estimator is unbiased, then the MSE is the variance.
An estimator is called a uniformly minimum variance unbiased estimator (UMVUE) if it is unbiased and if there is no other unbiased estimator with a smaller variance for any true value θ. It would make no sense to make a similar definition for biased estimators (i.e., a uniformly minimum MSE estimator), since the estimator equal to a constant happens to have an MSE of 0 if θ is that constant.
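Formula (2.4) is easy to verify empirically. The Python sketch below (not from the manual; the shrunk estimator 0.9x̄ is just an illustrative choice) estimates the bias, variance, and MSE of 0.9x̄ for an exponential mean and confirms that the bias squared plus the variance reproduces the MSE:

```python
# Empirical check of MSE = bias^2 + variance for the estimator 0.9 * x_bar
# of an exponential mean theta = 10, using samples of size n = 20.
# Theory: bias = -1, variance = 0.81 * theta^2 / n = 4.05, MSE = 5.05.
import random

random.seed(7)
theta, n, trials = 10.0, 20, 100_000
ests = []
for _ in range(trials):
    xbar = sum(random.expovariate(1 / theta) for _ in range(n)) / n
    ests.append(0.9 * xbar)

mean_est = sum(ests) / trials
var_est = sum((e - mean_est) ** 2 for e in ests) / trials
mse_est = sum((e - theta) ** 2 for e in ests) / trials
bias = mean_est - theta

print(mse_est)              # close to 5.05
print(bias ** 2 + var_est)  # matches mse_est, illustrating equation (2.4)
```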
Example 2B In an urn, there are four marbles numbered 5, 6, 7, and 8. You draw three marbles from the urn without replacement. Let θ̂ be the maximum of the three marbles.
Calculate the bias and the mean square error of θ̂ as an estimator for the maximum marble in the urn, θ.
Answer: There are four combinations of three marbles out of four. Three of the combinations have 8. The remaining one is {5, 6, 7}, with a maximum of 7. Thus the expected value of θ̂ is (3/4)(8) + (1/4)(7) = 7¾, whereas the true maximum is 8. The bias is 7¾ − 8 = −1/4.
The error is 1 one-fourth of the time and 0 otherwise, so the mean square error is (1/4)(1²) = 1/4.
The variance of the estimator is (0.25)(0.75)(1²) = 3/16, and indeed (−1/4)² + 3/16 = 1/4: the bias squared plus the variance equals the mean square error.
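Because the sample space in Example 2B is tiny, the whole calculation can be checked by exhaustive enumeration. A short Python sketch (not part of the manual):

```python
# Exhaustive check of Example 2B: draw 3 of the 4 marbles {5, 6, 7, 8}
# without replacement; the sample maximum has bias -1/4 and MSE 1/4 as an
# estimator of the true maximum 8.
from fractions import Fraction
from itertools import combinations

marbles = [5, 6, 7, 8]
true_max = 8
samples = list(combinations(marbles, 3))  # 4 equally likely samples

e_max = sum(Fraction(max(s)) for s in samples) / len(samples)
bias = e_max - true_max
mse = sum(Fraction((max(s) - true_max) ** 2) for s in samples) / len(samples)
var = sum((Fraction(max(s)) - e_max) ** 2 for s in samples) / len(samples)

print(bias)            # -1/4
print(mse)             # 1/4
print(bias ** 2 + var) # 1/4, confirming MSE = bias^2 + variance
```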
¹ Different textbooks have different conventions on the argument and subscript of MSE; some do the opposite of the formula here and write MSE_θ(θ̂) instead. The notation used here is the one you'll encounter on Exam 4. Some old exam questions use the other notation.
Example 2C [4B-F96:21] (2 points) You are given the following:
• The expectation of a given estimator is 0.50.
• The variance of this estimator is 1.00.
• The bias of this estimator is 0.50.
Determine the mean square error of this estimator.
A. 0.75    B. 1.00    C. 1.25    D. 1.50    E. 1.75
Answer: MSE_θ̂(θ) = 1.00 + 0.50² = 1.25. (C)
Example 2D For a uniform distribution on [0, θ], calculate the mean square error of Y = max x_i as an estimator of θ.
Answer: We already calculated the bias in Example 2A. We showed that the density function of Y is

    f_Y(x) = n x^{n−1}/θⁿ    0 ≤ x ≤ θ

Now let's calculate the variance of the estimator. The second moment of Y is

    E[Y²] = ∫₀^θ x² f_Y(x) dx = ∫₀^θ (n x^{n+1}/θⁿ) dx = (n/θⁿ) (x^{n+2}/(n + 2)) |₀^θ = nθ²/(n + 2)

The variance is

    Var(Y) = E[Y²] − E[Y]² = nθ²/(n + 2) − (nθ/(n + 1))² = nθ²/((n + 2)(n + 1)²)

So the MSE is

    MSE_θ̂(θ) = bias_θ̂(θ)² + Var(θ̂) = θ²/(n + 1)² + nθ²/((n + 2)(n + 1)²) = 2θ²/((n + 1)(n + 2))
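As a sanity check on this closed form, the following Python sketch (illustrative, not from the manual; θ and n are arbitrary choices) compares the simulated MSE of the sample maximum against 2θ²/((n + 1)(n + 2)):

```python
# Simulated MSE of the sample maximum of n = 5 draws from Uniform[0, 6],
# compared with the closed form 2*theta^2 / ((n+1)*(n+2)) = 72/42 ~ 1.714.
import random

random.seed(3)
theta, n, trials = 6.0, 5, 200_000

se = sum(
    (max(random.uniform(0, theta) for _ in range(n)) - theta) ** 2
    for _ in range(trials)
)
mse_sim = se / trials
mse_theory = 2 * theta ** 2 / ((n + 1) * (n + 2))
print(mse_sim)     # close to mse_theory
print(mse_theory)  # about 1.714
```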
Table 2.1: Summary of Estimator Quality Concepts

• The bias of an estimator is the excess of its expected value over the true value:

    bias_θ̂(θ) = E[θ̂] − θ    (2.1)

• An estimator is asymptotically unbiased, even if it isn't unbiased, if the bias goes to 0 as the sample size goes to infinity.
• The sample mean is an unbiased estimator of the population mean. The sample variance (with division by n − 1) is an unbiased estimator of the population variance.
• An estimator is consistent if the probability that it differs from the true value by more than any fixed ε > 0 goes to 0 as the sample size goes to infinity, or

    lim_{n→∞} Pr(|θ̂ − θ| > ε) = 0    for any ε > 0

• If an estimator is asymptotically unbiased and its variance goes to 0 as the sample size goes to infinity, then it is consistent, but not conversely.
• The sample mean is a consistent estimator of the population mean if the population variance is finite.
• An estimator is more efficient than another estimator if its variance is lower.
• The relative efficiency of θ̂₁ with respect to θ̂₂ is

    Var(θ̂₂)/Var(θ̂₁)    (2.2)

• The mean square error of an estimator is

    MSE_θ̂(θ) = E[(θ̂ − θ)²]    (2.3)

• A formula for the mean square error is

    MSE_θ̂(θ) = bias_θ̂(θ)² + Var(θ̂)    (2.4)

• A uniformly minimum variance unbiased estimator is an unbiased estimator that has the lowest variance of any unbiased estimator, regardless of the true value of θ, the estimated parameter.
Exercises
2.1. [110-S83:15] Let T₁ and T₂ be estimators of a population parameter θ based upon the same random sample. If Tᵢ is distributed normally with mean θ and variance σᵢ² > 0, i = 1, 2, and if T = bT₁ + (1 − b)T₂, then T is an unbiased estimator of θ.
Determine b to minimize the variance of T.
A. σ₂/σ₁
B. σ₂²/σ₁²
C. σ₂²/(σ₁² + σ₂²)
D. (σ₂² − Cov(T₁, T₂))/(σ₁² − 2 Cov(T₁, T₂) + σ₂²)
E. (σ₂² − ½ Cov(T₁, T₂))/(σ₁² − 2 Cov(T₁, T₂) + σ₂²)
2.2. [110-S83:20] Let X be a random variable with mean 2. Let S and T be unbiased estimators of the second and third moments, respectively, of X about the origin.
Which of the following is an unbiased estimator of the third moment of X about its mean?
A. T − 6S + 16
B. T − 3S + 2
C. (T − 2)³ − 3(S − 2)²
D. (T − 2)³
E. T − 8
2.3. [110-S88:36] Let X be a random variable with a binomial distribution with parameters m and q, and let q̂ = X/m. Then q̂ is an unbiased estimator of q.
Which of the following is an unbiased estimator of q(1 − q)?
A. q̂(1 − q̂)
B. (1/(m − 1)) q̂(1 − q̂)
C. (1/m) q̂(1 − q̂)
D. ((m − 1)/m) q̂(1 − q̂)
E. (m/(m − 1)) q̂(1 − q̂)
2.4. [110-S83:33] Let X₁, X₂, . . . , X_n be a random sample of size n ≥ 2 from a Poisson distribution with mean λ. Consider the following three statistics as estimators of λ.
I. X̄ = (1/n) ∑_{i=1}^{n} X_i
II. (1/(n − 1)) ∑_{i=1}^{n} (X_i − X̄)²
III. 2X₁ − X₂
Which of these statistics are unbiased?
A. I only    B. II only    C. III only    D. I, II, and III
E. The correct answer is not given by A., B., C., or D.
2.5. Which of the following statements are true?
I. An estimator that is asymptotically unbiased and whose variance approaches 0 as the sample size goes to infinity is weakly consistent.
II. For an unbiased estimator, minimizing variance is equivalent to minimizing mean square error.
III. The estimator S² = (1/n) ∑_{j=1}^{n} (X_j − X̄)² for the variance σ² is asymptotically unbiased.
2.6. [4B-S96:12] (1 point) Which of the following must be true of a consistent estimator?
1. It is unbiased.
2. For a small quantity ε, the probability that the absolute value of the deviation of the estimator from the true parameter value is less than ε tends to 1 as the number of observations tends to infinity.
3. It has minimal variance.
A. 1    B. 2    C. 3    D. 2,3    E. 1,2,3
2.7. Which of the following statements is false?
A. If two estimators are unbiased, a weighted average of them is unbiased.
B. The sample mean is an unbiased estimator of the population mean.
C. The sample mean is a consistent estimator of the population mean.
D. For a uniform distribution on [0, θ], the sample maximum is a consistent estimator of the population maximum.
E. The mean square error of an estimator cannot be less than the estimator's variance.

2.8. θ̂ is an estimator for θ. You are given:
• E[θ̂] = 3
• E[θ̂²] = 13
If θ = 4, what is the mean square error of θ̂?
2.9. [4B-S92:2] (1 point) Which of the following are true?
1. The expected value of an unbiased estimator of a parameter is equal to the true value of the parameter.
2. If an estimator is efficient, the probability that an estimate based on n observations differs from the true parameter by more than some fixed amount converges to zero as n grows large.
3. A consistent estimator is one with a minimal variance.
A. 1 only    B. 3 only    C. 1 and 2 only    D. 1, 2 and 3
E. The correct answer is not given by A., B., C., or D.
2.10. [4B-S91:28] (1 point) α̂ is an estimator of α. Match each of these properties with the correct mathematical description.
a. Consistent
b. Unbiased
c. Efficient
1. E[α̂] = α
2. Var[α̂] ≤ Var[α̃] where α̃ is any other estimator of α
3. For any ε > 0, Pr(|α̂ − α| < ε) → 1 as n → ∞, where n is the sample size.
A. a = 1, b = 2, c = 3
B. a = 2, b = 1, c = 3
C. a = 1, b = 3, c = 2
D. a = 3, b = 2, c = 1
E. a = 3, b = 1, c = 2

2.11. [4-F04:40] Which of the following statements is true?
A. A uniformly minimum variance unbiased estimator is an estimator such that no other estimator has a smaller variance.
B. An estimator is consistent whenever the variance of the estimator approaches zero as the sample size increases to infinity.
C. A consistent estimator is also unbiased.
D. For an unbiased estimator, the mean squared error is always equal to the variance.
E. One computational advantage of using mean squared error is that it is not a function of the true value of the parameter.
2.12. You are given a sample of 25 items from an exponential distribution. You consider the following two estimators for the mean:
1. θ̂₁ = x̄
2. θ̂₂ = 0.9x̄
Calculate the relative efficiency of θ̂₂ with respect to θ̂₁.
2.13. The mean of a uniform distribution on [0, θ] is estimated with the sample mean based on a sample with 10 observations.
The bias of x̄² as an estimator for the square of the mean of the distribution is cθ².
Determine c.
2.14. A population contains the values 1, 2, 4, 9. A sample of 3 is drawn from this population without replacement. Let Y be the median of this sample.
Calculate the mean square error of Y as an estimator of the population mean.
2.15. [4B-F92:8] (1 point) You are given the following information:
X is a random variable whose distribution function has parameter α = 2.00.
Based on n random observations of X you have determined:

• E[α₁] = 2.05, where α₁ is an estimator of α having variance equal to 1.025.
• E[α₂] = 2.05, where α₂ is an estimator of α having variance equal to 1.050.
• As n increases to ∞, Pr(|α₁ − α| > ε) approaches 0 for any ε > 0.
Which of the following are true?
1. α₁ is an unbiased estimator of α.
2. α₂ is an efficient estimator of α.
3. α₁ is a consistent estimator of α.

A. 1 only    B. 2 only    C. 3 only    D. 1,3 only    E. 2,3 only

2.16. [4B-F93:13] (3 points) You are given the following:
• Two instruments are available for measuring a particular (non-zero) distance.
• X is the random variable representing the measurement using the first instrument and Y is the random variable representing the measurement using the second instrument.
• X and Y are independent.
• E[X] = 0.8m; E[Y] = m; Var(X) = m²; and Var(Y) = 1.5m², where m is the true distance.

Consider the class of estimators of m which are of the form Z = αX + βY.
Within this class of estimators of m, determine the value of α that makes Z an unbiased estimator with
minimum variance.
A. Less than 0.45
B. At least 0.45, but less than 0.50
C. At least 0.50, but less than 0.55
D. At least 0.55, but less than 0.60
E. At least 0.60
2.17. [4B-S95:27] (2 points) Two different estimators, ψ and φ, are available for estimating the parameter,
β, of a given loss distribution.
To test their performance, you have conducted 75 simulated trials of each estimator, using β = 2, with
the following results:
        ∑ᵢ₌₁⁷⁵ ψᵢ = 165,    ∑ᵢ₌₁⁷⁵ ψᵢ² = 375,    ∑ᵢ₌₁⁷⁵ φᵢ = 147,    ∑ᵢ₌₁⁷⁵ φᵢ² = 312.
Calculate MSEβ ( ψ ) / MSEβ ( φ ) .
A. Less than 0.50
B. At least 0.50, but less than 0.65
C. At least 0.65, but less than 0.80
D. At least 0.80, but less than 0.95
E. At least 0.95, but less than 1.00
2.18. [4B-S92:17] (2 points) You are given that the underlying size of loss distribution for disability claims is a Pareto distribution with parameters α and θ = 6000.
You have determined the following for α̂, an estimator of α:

        E[α̂] = 2.20
        MSE(α̂) = 1.00

Determine the variance of α̂ if α = 2.
A. Less than 0.70
B. At least 0.70, but less than 0.85
C. At least 0.85, but less than 1.00
D. At least 1.00, but less than 1.15
E. At least 1.15

2.19. Losses follow a Pareto distribution with parameters α = 3, θ = 600. A sample of 100 is available.
Determine the MSE of the sample mean as an estimator for the mean.
2.20. A sample of n elements, x₁, . . . , xₙ, is selected from a random variable having a uniform distribution on [0, θ]. Let Y = max(xᵢ). You wish to estimate the parameter θ with an estimator of the form kY.
You may use the following facts:

• E[Y] = nθ/(n + 1)
• Var(Y) = nθ²/((n + 2)(n + 1)²)

Determine the k which minimizes the mean square error of the estimator.
2.21. [4-S00:18] You are given two independent estimates of an unknown quantity µ:

• Estimate A: E[µA] = 1000 and σ(µA) = 400.
• Estimate B: E[µB] = 1200 and σ(µB) = 200.

Estimate C is a weighted average of the two estimates A and B, such that

        µC = w · µA + (1 − w) · µB

Determine the value of w that minimizes σ(µC).
A. 0
B. 1/5
C. 1/4
D. 1/3
E. 1/2
2.22. [4-F02:31] You are given:

        x            0     1     2     3
        Pr(X = x)    0.5   0.3   0.1   0.1

Using a sample of size n, the population mean is estimated by the sample mean X̄ and the variance is estimated by S²ₙ = ∑(Xᵢ − X̄)²/n.
Calculate the bias of S²ₙ when n = 4.
A. −0.72
B. −0.49
C. −0.24
D. −0.08
E. 0.00
2.23. [4-S05:16] For the random variable X, you are given:

• E[X] = θ, θ > 0
• Var(X) = θ²/25
• θ̂ = (k/(k + 1))X, k > 0
• MSEθ̂(θ) = 2 biasθ̂(θ)²

Determine k.

A. 0.2    B. 0.5    C. 2    D. 5    E. 25
2.24. [CAS3-S05:21] An actuary obtains two independent, unbiased estimates, Y1 and Y2 , for a certain
parameter. The variance of Y1 is four times that of Y2 .
A new unbiased estimator of the form k₁Y₁ + k₂Y₂ is to be constructed.
What value of k₁ minimizes the variance of the new estimate?

A. Less than 0.18
B. At least 0.18, but less than 0.23
C. At least 0.23, but less than 0.28
D. At least 0.28, but less than 0.33
E. At least 0.33
2.25. [CAS3-F05:6] Claim sizes are uniformly distributed over the interval [0, θ]. A sample of 10 claims, denoted X₁, X₂, X₃, . . . , X₁₀, was observed and an estimate of θ was obtained as follows:

        θ̂ = Y = max(X₁, X₂, . . . , X₁₀)

Recall that the probability density function for Y is:

        fY(y) = 10y⁹/θ¹⁰    for 0 ≤ y ≤ θ

Calculate the mean square error of θ̂ for θ = 100.
A. Less than 75
B. At least 75, but less than 100
C. At least 100, but less than 125
D. At least 125, but less than 150
E. At least 150
Additional old CAS Exam 3/3L questions: S06:3 (bias),4 (bias, consistent, sufficient), S08:2 (bias,
consistent, MSE), F08:5 (MSE), S10:20 (bias), F11:17 (bias, MSE), F12:19 (MSE), S13:19 (bias), F13:17
(consistent)
Additional old CAS Exam ST questions: S14:5 (MSE), S15:4
Solutions
2.1. All you need for this exercise is the formula for the variance of a sum:

        Var(aX + bY) = a² Var(X) + 2ab Cov(X, Y) + b² Var(Y)

So we're minimizing

        Var(T) = b²σ₁² + 2b(1 − b) Cov(T₁, T₂) + (1 − b)²σ₂²

Differentiate with respect to b and set equal to zero.

        2bσ₁² + (2 − 4b) Cov(T₁, T₂) − 2(1 − b)σ₂² = 0
        b(2σ₁² − 4 Cov(T₁, T₂) + 2σ₂²) + 2 Cov(T₁, T₂) − 2σ₂² = 0

        b = (σ₂² − Cov(T₁, T₂)) / (σ₁² − 2 Cov(T₁, T₂) + σ₂²)    (D)
2.2. The third central moment µ₃ can be expressed in terms of the moments around the origin µ′ₙ as follows:

        µ₃ = µ′₃ − 3µ′₂µ + 2µ³

Since µ = 2, this reduces to

        µ₃ = µ′₃ − 6µ′₂ + 16

If the expected value of T is µ′₃ and the expected value of S is µ′₂, then the expected value of T − 6S + 16 is µ₃, so the answer is (A).
2.3. All five choices are multiples of q̂(1 − q̂), so let's determine the expected value of that.

        E[q̂] = q
        E[q̂²] = E[X²]/m²
        E[X²] = E[X]² + Var(X) = m²q² + mq(1 − q)    by the formula for moments of a binomial
        E[q̂²] = (m²q² + mq(1 − q))/m² = q² + q(1 − q)/m
        E[q̂(1 − q̂)] = E[q̂] − E[q̂²]
                    = q − q² − q(1 − q)/m
                    = q(1 − q) − q(1 − q)/m
                    = ((m − 1)/m) q(1 − q)

and therefore the estimator must be multiplied by m/(m − 1) to make it unbiased. (E)

2.4.

2.5. I is the sample mean, which is an unbiased estimator of the true mean λ. ✓
II is the unbiased sample variance, which is an unbiased estimator of the true variance λ. ✓
For III, E[2X₁ − X₂] = 2λ − λ = λ, making it an unbiased estimator. ✓ (D)

2.6.
I. As discussed in the lesson, true. ✓
II. MSEθ̂(θ) = Var(θ̂) + biasθ̂(θ)², and biasθ̂(θ) = 0, so it is true. ✓
III. As discussed in the lesson, true. ✓
(B)
2.7. These are all discussed in this lesson. (C) is false if the variance of the population isn’t finite.
2.8.
        biasθ̂(θ) = 3 − 4 = −1
        Var(θ̂) = 13 − 3² = 4
        MSEθ̂(θ) = 4 + (−1)² = 5
2.9. Only 1 is true. The other two statements have interchanged definitions of consistency and efficiency.
(A)
2.10. a = 3, b = 1, c = 2. (E)

2.11. A correct version of (A) is "A uniformly minimum variance unbiased estimator is an estimator such that no other unbiased estimator has a smaller variance."
An estimator which is a constant has no variance, but if it is not equal to the true parameter it must be inconsistent, so (B) is false.
Consistency is an asymptotic property, so a biased estimator which is asymptotically unbiased could
be consistent, making (C) false.
(D) is true.
Mean square error is a function of the true value of the parameter; in fact, it is the expected value of
the square of the difference between the estimator and the true parameter, so (E) is false. Note however,
that the variance of an estimator is not a function of the true parameter.
2.12. Let V be the variance of the sample mean. Then Var(θ̂₁) = V and Var(θ̂₂) = 0.9²V = 0.81V. The relative efficiency of θ̂₂ to θ̂₁ is V/(0.81V) = 1.2346. But note that θ̂₁ is unbiased while θ̂₂ is biased.
2.13. The square of the mean is θ²/4. Now let's calculate the expected value of x̄².

        E[x̄²] = E[(∑ᵢ₌₁¹⁰ Xᵢ/10)²] = (1/100)( E[∑ᵢ₌₁¹⁰ Xᵢ²] + E[∑_{i≠j} XᵢXⱼ] )

For a uniform on [0, θ], E[X²] = θ²/3. Also, E[XᵢXⱼ] = E[Xᵢ]E[Xⱼ] because observations in a random sample are independent, so E[XᵢXⱼ] = θ²/4, the square of E[Xᵢ]. Therefore,

        E[x̄²] = (1/100)(10(θ²/3) + 90(θ²/4)) = (310/1200)θ² = (31/120)θ²

The bias is (31/120)θ² − (1/4)θ² = θ²/120, and c = 1/120.
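As a numerical cross-check (not part of the original solution), the bias coefficient can be computed exactly for any sample size; the helper below is our own illustrative Python sketch:

```python
from fractions import Fraction

def bias_coeff_xbar_squared(n):
    """Coefficient c in bias = c·θ² for x̄² estimating (θ/2)², uniform on [0, θ].

    E[x̄²] = (1/n²)(n·E[X²] + n(n−1)·E[X]²), with E[X²] = θ²/3 and E[X] = θ²/4
    for the cross terms, all scaled by θ².
    """
    e_xbar_sq = (n * Fraction(1, 3) + n * (n - 1) * Fraction(1, 4)) / n**2
    return e_xbar_sq - Fraction(1, 4)  # subtract the square of the true mean

print(bias_coeff_xbar_squared(10))  # 1/120
```

In general the bias works out to θ²/(12n), which gives c = 1/120 at n = 10.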
2.14. Half the time the sample median is 2 and the other half the time it is 4. The mean is (1 + 2 + 4 + 9)/4 = 4. So the MSE is ½(2 − 4)² = 2.

2.15. Only 3 is true. α₂ has higher variance than α₁ and the same bias, so it is less efficient. (C)
2.16.
        E[αX + βY] = m  ⟹  0.8α + β = 1

Minimize

        g(α) = Var(αX + βY) = α² Var(X) + β² Var(Y) = α²m² + (1 − 0.8α)²(1.5m²)

or

        g(α)/m² = α² + 1.5 − 2.4α + 0.96α² = 1.96α² − 2.4α + 1.5

g(α) is minimized at α = 2.4/3.92 = 0.6122. (E)
2.17. We must estimate the variance of each estimator. The question is vague on whether to use the empirical variance (divide by 75) or the sample variance (divide by 74). The original exam question said to work out the answer according to a specific textbook that used the empirical variance. We then get:

        MSEβ(ψ) = (165/75 − 2)² + (375/75 − (165/75)²) = 0.04 + 0.16 = 0.2
        MSEβ(φ) = (147/75 − 2)² + (312/75 − (147/75)²) = 0.0016 + 0.3184 = 0.32
        MSEβ(ψ)/MSEβ(φ) = 0.2/0.32 = 0.625    (B)

If the sample variance were used, we would multiply 0.16 and 0.3184 by 75/74 to get 0.1622 and 0.3227. The resulting quotient, (0.04 + 0.1622)/(0.0016 + 0.3227) = 0.6234, still leads to answer B.
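The same arithmetic, organized as squared bias plus empirical variance, can be scripted; this is an illustrative Python sketch, not part of the exam solution:

```python
def empirical_mse(total, total_sq, n, true_value):
    """MSE estimate from n simulated trials: squared bias plus the
    empirical (divide-by-n) variance of the estimator."""
    mean = total / n
    variance = total_sq / n - mean**2
    return (mean - true_value) ** 2 + variance

mse_psi = empirical_mse(165, 375, 75, 2)  # 0.04 + 0.16 = 0.20
mse_phi = empirical_mse(147, 312, 75, 2)  # 0.0016 + 0.3184 = 0.32
print(mse_psi / mse_phi)                  # approximately 0.625
```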
2.18.
        biasα̂(α) = 2.20 − 2 = 0.2
        Var(α̂) = MSE(α̂) − biasα̂(α)² = 1 − 0.2² = 0.96    (C)

2.19. The estimator is unbiased because the sample mean is an unbiased estimator of the population mean, so the mean square error equals the variance. The variance of the estimator is:

        Var(X̄) = Var(X)/100 = (2(600²)/((2)(1)) − (600/2)²)/100 = 2700

2.20. The bias of kY is
        bias(kY) = E[kY] − θ = knθ/(n + 1) − θ = θ(n(k − 1) − 1)/(n + 1)

The variance of kY is

        Var(kY) = k²nθ²/((n + 2)(n + 1)²)

The MSE is then

        k²nθ²/((n + 2)(n + 1)²) + θ²((n(k − 1) − 1)/(n + 1))²

We shall minimize this by differentiating with respect to k. To simplify matters, divide the entire expression by θ² and multiply it by (n + 1)²; this has no effect on the minimizing k:

        f(k) = k²n/(n + 2) + (n(k − 1) − 1)²
        f′(k) = 2kn/(n + 2) + 2n(n(k − 1) − 1) = 0
        k/(n + 2) + n(k − 1) − 1 = 0
        k(1/(n + 2) + n) = n + 1
        k(n(n + 2) + 1) = (n + 1)(n + 2)
        k(n + 1)² = (n + 1)(n + 2)
        k = (n + 2)/(n + 1)
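To sanity-check the minimizer k = (n + 2)/(n + 1), one can evaluate the MSE directly; an illustrative Python sketch (our own helper) with θ = 1 and n = 10:

```python
def mse_kY(k, n, theta=1.0):
    """MSE of kY, where Y is the max of n uniforms on [0, theta]:
    Var(kY) + bias², using E[Y] = nθ/(n+1) and Var(Y) = nθ²/((n+2)(n+1)²)."""
    var = k**2 * n * theta**2 / ((n + 2) * (n + 1) ** 2)
    bias = k * n * theta / (n + 1) - theta
    return var + bias**2

n = 10
k_star = (n + 2) / (n + 1)  # 12/11
# The claimed minimizer beats nearby values of k:
assert mse_kY(k_star, n) < mse_kY(k_star - 0.001, n)
assert mse_kY(k_star, n) < mse_kY(k_star + 0.001, n)
```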
2.21. The variance of the weighted average is

        σC² = w²σA² + (1 − w)²σB² = 160,000w² + 40,000(1 − w)²

Differentiating,

        2(160,000)w − 2(40,000)(1 − w) = 0
        200,000w = 40,000
        w = 1/5    (B)
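For independent unbiased estimates, the minimizing weight is σB²/(σA² + σB²) in general; a small illustrative Python sketch (the helper name is ours):

```python
def min_variance_weight(var_a, var_b):
    """Weight w on estimate A minimizing Var(w·A + (1−w)·B)
    for independent estimates A and B: w = σB²/(σA² + σB²)."""
    return var_b / (var_a + var_b)

# σ(µA) = 400 and σ(µB) = 200 from the exercise:
print(min_variance_weight(400**2, 200**2))  # 0.2
```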
2.22. We know that S² = ∑(Xᵢ − X̄)²/(n − 1) is an unbiased estimator; in other words, E[S²] = σ². But then

        E[S²ₙ] = ((n − 1)/n) E[S²] = ((n − 1)/n) σ²

and the bias is

        E[S²ₙ] − σ² = ((n − 1)/n − 1) σ² = −σ²/n

In this case, the true mean is µ = 0.5(0) + 0.3(1) + 0.1(2) + 0.1(3) = 0.8 and the true variance is

        σ² = 0.5(0 − 0.8)² + 0.3(1 − 0.8)² + 0.1(2 − 0.8)² + 0.1(3 − 0.8)² = 0.96

So the bias is −0.96/4 = −0.24. (C)
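The −0.24 figure is easy to reproduce from the probability table; an illustrative Python sketch:

```python
# Distribution from the exercise, as (value, probability) pairs.
pmf = [(0, 0.5), (1, 0.3), (2, 0.1), (3, 0.1)]

mu = sum(x * p for x, p in pmf)                  # 0.8
sigma2 = sum((x - mu) ** 2 * p for x, p in pmf)  # 0.96
bias = -sigma2 / 4                               # bias of S²ₙ with n = 4
print(bias)  # approximately -0.24
```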
2.23. Since

        MSEθ̂(θ) = biasθ̂(θ)² + Var(θ̂)

and by (iv) MSEθ̂(θ) = 2 biasθ̂(θ)², we must have Var(θ̂) = biasθ̂(θ)², so we calculate biasθ̂(θ) and Var(θ̂).

        biasθ̂(θ) = E[θ̂] − θ = E[(k/(k + 1))X] − θ = kθ/(k + 1) − θ = −θ/(k + 1)
        Var(θ̂) = Var((k/(k + 1))X) = (k/(k + 1))² (θ²/25)

Setting the variance equal to the square of the bias,

        (k/(k + 1))² (θ²/25) = (θ/(k + 1))²
        k²/25 = 1
        k = 5    (D)

Since k > 0, we reject k = −5.
2.24. Without loss of generality, assume the variance of Y₂ is 1. This is anyway a positive multiplicative constant, and such constants don't affect the minimum.
Let the estimated parameter be θ. Since the new estimator for θ, which we'll call Y, and the old estimators are unbiased,

        θ = E[Y] = k₁ E[Y₁] + k₂ E[Y₂] = (k₁ + k₂)θ

so k₂ = 1 − k₁. The variance of Y is

        Var(Y) = k₁²(4) + (1 − k₁)²

Differentiate and set equal to 0.

        8k₁ − 2(1 − k₁) = 0
        10k₁ − 2 = 0
        k₁ = 0.2    (B)

2.25.
See Example 2D, which solves this in general and derives the formula

        MSEθ̂(θ) = 2θ²/((n + 1)(n + 2))

Here, this is

        2(100²)/((11)(12)) = 20,000/132 = 151.5152
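The same number also follows directly from the raw moments of Y; an illustrative Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def mse_uniform_max(n, theta):
    """MSE of θ̂ = max of n uniforms on [0, θ], from the raw moments
    E[Y] = nθ/(n+1) and E[Y²] = nθ²/(n+2): MSE = E[Y²] − 2θE[Y] + θ²."""
    ey = Fraction(n, n + 1) * theta
    ey2 = Fraction(n, n + 2) * theta**2
    return ey2 - 2 * theta * ey + theta**2

print(float(mse_uniform_max(10, 100)))  # 151.51...
```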
Quiz Solutions

2-1. For a uniform distribution on [0, θ],

        E[X²] = ∫₀^θ x² (dx/θ) = θ²/3

Therefore,

        E[(2X)²] = 4 E[X²] = 4θ²/3

The bias is 4θ²/3 − θ² = θ²/3. (E)
Practice Exam 1

1. Cars arrive at a toll booth in a Poisson process at the rate of 6 per minute.
Determine the probability that the third car will arrive between 30 and 40 seconds from now.

A. Less than 0.18
B. At least 0.18, but less than 0.21
C. At least 0.21, but less than 0.24
D. At least 0.24, but less than 0.27
E. At least 0.27
2. A business receives 50 pieces of mail every day in a Poisson process. One tenth of the mail contains checks. The logarithm of the amount of each check has a normal distribution with parameters µ = 3, σ² = 9.
Determine the average number of checks for amounts greater than 10,000 that the business receives in a seven day week.

A. Less than 0.66
B. At least 0.66, but less than 0.69
C. At least 0.69, but less than 0.72
D. At least 0.72, but less than 0.75
E. At least 0.75

3.
ATM withdrawals occur in a Poisson process at varying rates throughout the day, as follows:
        11PM–6AM    3 per hour
        6AM–8AM     Linearly increasing from 3 per hour to 30 per hour
        8AM–5PM     30 per hour
        5PM–11PM    Linearly decreasing from 30 per hour to 3 per hour
Withdrawal amounts are uniformly distributed on (100, 500) , and are independent of each other and
the number of withdrawals.
Using the normal approximation, estimate the amount of money needed to be adequate for all withdrawals for a day 95% of the time.
A. Less than 137,500
B. At least 137,500, but less than 138,000
C. At least 138,000, but less than 138,500
D. At least 138,500, but less than 139,000
E. At least 139,000
4. An estimator θ̂ for θ has the following properties:

        E[θ̂] = 4        Var(θ̂) = 20

If θ = 6, calculate the bias of θ̂² as an estimator for θ².

A. Less than −3
B. At least −3, but less than −1
C. At least −1, but less than 1
D. At least 1, but less than 3
E. At least 3

5. For 2 estimators of θ, θ̂ and θ̃, you are given:

•       Estimator        θ̂    θ̃
        Expected value   4    5
        Variance         2    3
• θ = 5
• Cov(θ̂, θ̃) = −1

Determine the mean square error of ½(θ̂ + θ̃) as an estimator of θ.
A. Less than 1.25
B. At least 1.25, but less than 1.75
C. At least 1.75, but less than 2.25
D. At least 2.25, but less than 2.75
E. At least 2.75
6. For a set of 3 biased coins, the probability of head is p. The 3 coins are tossed 10 times, with the
following results:
        Number of heads   0   1   2   3
        Number of times   4   3   2   1

Determine the maximum likelihood estimate of p.

A. 1/5    B. 1/4    C. 1/3    D. 2/5    E. 1/2
7. A sample of 6 observed claim sizes is

        10    25    30    52    70    90

These observations are fitted to a Lognormal distribution with µ = 2 using maximum likelihood.
Determine the variance of the fitted distribution.

A. Less than 21,000
B. At least 21,000, but less than 23,000
C. At least 23,000, but less than 25,000
D. At least 25,000, but less than 27,000
E. At least 27,000
8. For two baseball teams A and B:

• Team A wins 7 out of 10 games.
• Team B wins x out of 14 games.
• The null hypothesis is that the two teams are equally likely to win games.
• The alternative hypothesis is that the two teams are not equally likely to win games.
Determine the highest value of x for which the null hypothesis is accepted at 5% significance.
A. 10
B. 11
C. 12
D. 13
E. 14
9. For a Normally distributed variable X with σ² = 2500, you test H0: µ = 100 against H1: µ < 100 using the sample mean of 30 observations. The test is constructed to have 1% significance.
Determine the power of the test at 70.

A. Less than 0.72
B. At least 0.72, but less than 0.76
C. At least 0.76, but less than 0.80
D. At least 0.80, but less than 0.84
E. At least 0.84

10. A sample of 20 items from a normal distribution yields the following summary statistics:

        ∑Xᵢ = 120        ∑Xᵢ² = 1100

Construct a 99% confidence interval of the form (0, a) for the variance.
Determine a.

A. 10.0    B. 10.1    C. 10.5    D. 48.5    E. 49.8
11. X is a random variable having probability density function

        f(x) = αx^(α−1),    0 < x < 1

You test H0: α = 1 against H1: α > 1 using 2 observations, x₁ and x₂.
Determine the form of the uniformly most powerful critical region for this test.

A. x₁ + x₂ < k
B. x₁ + x₂ > k
C. x₁x₂ < k
D. x₁x₂ > k
E. 1/x₁ + 1/x₂ < k
12. The amount of time your trip to work takes is a Normally distributed random variable with mean x minutes and variance 25. You would like to test the hypothesis H0: x = 30 against the alternative H1: x > 30. The test should have 5% significance and 90% power at 35.
Determine the minimum number of trips you will need in order to perform this test.
A. 9
B. 10
C. 11
D. 12
E. 13
13. A Normal random variable is known to have mean 5. For a sample of five observations from the variable, ∑ᵢ₌₁⁵ (xᵢ − 5)² = 175.
Construct a 95% confidence interval of the form (a, ∞) for the variance.
Determine a.

A. Less than 12
B. At least 12, but less than 14
C. At least 14, but less than 16
D. At least 16, but less than 18
E. At least 18

14.
You are given a sample of size 4 from a distribution with probability density function
        f(x) = 2x,    0 ≤ x ≤ 1

Y₁, . . . , Y₄ are the order statistics.
Determine Pr(Y₂ > 0.5).

A. Less than 0.5
B. At least 0.5, but less than 0.6
C. At least 0.6, but less than 0.7
D. At least 0.7, but less than 0.8
E. At least 0.8
15. You are given the following information from a group of students regarding time spent studying for an exam and the score on the exam:

        Time (minutes)   372   405   428   457   500
        Score             85    78    82   100    92
Calculate Spearman’s ρ relating study time and exam score.
A. Less than 0.3
B. At least 0.3, but less than 0.4
C. At least 0.4, but less than 0.5
D. At least 0.5, but less than 0.6
E. At least 0.6

16.
For a random variable X, the null hypothesis is
H0 : the median is 820
and the alternative hypothesis is
H1 : the median is not 820.
For a sample of size 48, the 18th order statistic is 815 and the 19th order statistic is 822.
Which of the following statements is true?
A. Reject H0 at 1% significance.
B. Accept H0 at 1% significance but not at 2.5% significance.
C. Accept H0 at 2.5% significance but not at 5% significance.
D. Accept H0 at 5% significance but not at 10% significance.
E. Accept H0 at 10% significance.
17. In a certain town, the natural logarithm of annual wages is hypothesized to be symmetrically distributed with mean 10.5. The wages of six people are
        20,000    30,000    50,000    80,000    110,000    200,000

Using the Wilcoxon signed rank test, calculate the p-value of the hypothesis.

A. Less than 0.05
B. At least 0.05, but less than 0.10
C. At least 0.10, but less than 0.15
D. At least 0.15, but less than 0.20
E. At least 0.20
18. It is hypothesized that the price of a restaurant meal (P) has a linear relationship to its star rating (S):

        P = α + βS + ε

You are given the following data from six restaurants:

        Price   15   15   19   20   22   35
        Stars    1    2    3    3    4    5

        P̄ = 21        ∑ᵢ₌₁⁶ (P − P̄)² = 274
        S̄ = 3         ∑ᵢ₌₁⁶ (S − S̄)² = 10
                      ∑ᵢ₌₁⁶ (P − P̄)(S − S̄) = 47

Calculate the t statistic to test the significance of the star rating as a factor in the restaurant price.

A. Less than 2.0
B. At least 2.0, but less than 3.0
C. At least 3.0, but less than 4.0
D. At least 4.0, but less than 5.0
E. At least 5.0

19.
In a regression model of the form
        Y = α + βX + ε

you are given

• There are 8 observations.
• ∑Xᵢ = 85
• ∑Xᵢ² = 1547
• ∑Yᵢ = 199
• ∑XᵢYᵢ = 3616
• The standard error of the regression is 19.36059.

Calculate the t statistic to test the hypothesis α = 0.

A. Less than 0.01
B. At least 0.01, but less than 0.02
C. At least 0.02, but less than 0.03
D. At least 0.03, but less than 0.04
E. At least 0.04
20. Four different treatments are tried on two fields apiece, with the following results:

        Treatment 1   80   70
        Treatment 2   60   30
        Treatment 3   62   95
        Treatment 4   72   41

Calculate the F ratio to test whether the mean results of the treatments are equal.

A. Less than 0.5
B. At least 0.5, but less than 1.0
C. At least 1.0, but less than 1.5
D. At least 1.5, but less than 2.0
E. At least 2.0
21. You are given:

• Claim counts follow a binomial distribution with m = 10 and Q.
• Q varies by policyholder.
• Q follows a beta distribution with a = 0.1, b = 0.9, θ = 1.

A policyholder submits 2 claims in 1 year.
Calculate the expected number of claims from this policyholder in the next year.

A. Less than 1.2
B. At least 1.2, but less than 1.4
C. At least 1.4, but less than 1.6
D. At least 1.6, but less than 1.8
E. At least 1.8
22. You are given:

• Claim counts follow a Poisson distribution. The probability of 0 claims is θ.
• The distribution of θ over the entire population has density function

        f(θ) = 3θ²,    0 < θ < 1
A policyholder submits no claims for 2 years.
Calculate the posterior probability that this policyholder submits no claims in the third year.
A. 0.50
B. 0.67
C. 0.75
D. 0.80
E. 0.83
23. A small department store chain has 5 stores. At each store, daily sales are normally distributed
with variance 5,000,000. Mean daily sales at each store are µ, where µ is normally distributed with mean
50,000 and variance 10,000,000.
At one of the stores, total sales over a 7 day week are 420,000.
Determine the posterior mean daily sales for this store.
A. Less than 52,000
B. At least 52,000, but less than 54,000
C. At least 54,000, but less than 56,000
D. At least 56,000, but less than 58,000
E. At least 58,000
24. Claim counts are Poisson with mean λ. The distribution of λ is exponential with mean 0.1.
For a randomly selected policyholder, 6 claims are observed in 2 years.
Determine the posterior expected number of claims from this policyholder.

A. Less than 0.4
B. At least 0.4, but less than 0.5
C. At least 0.5, but less than 0.6
D. At least 0.6, but less than 0.7
E. At least 0.7
25. The monthly number of losses on an insurance coverage follows a Poisson distribution with mean
λ. The prior distribution of λ is gamma with parameters α and θ.
A randomly selected insured is observed for n months and submits no claims.
Determine the smallest n such that the expected number of claims for this policyholder is half of the
expected number of claims for the general population.
A. θ    B. 1/θ    C. αθ    D. α/θ    E. θ/α

Solutions to the above questions begin on page 457.
Appendix A. Solutions to the Practice Exams
Answer Key for Practice Exam 1

         1  B        6  C       11  D       16  D       21  E
         2  B        7  D       12  A       17  E       22  E
         3  B        8  D       13  C       18  D       23  E
         4  C        9  D       14  D       19  A       24  C
         5  A       10  E       15  E       20  C       25  B
Practice Exam 1
1. [Lesson 25] The probability that the third car will arrive in the interval (30, 40) is the probability of at least 3 cars in 40 seconds minus the probability of at least 3 cars in 30 seconds. For 40 seconds, the Poisson parameter is 4 and the probability is

        1 − e⁻⁴ (1 + 4 + 4²/2) = 1 − 0.238103

For 30 seconds, the Poisson parameter is 3 and the probability is

        1 − e⁻³ (1 + 3 + 3²/2) = 1 − 0.423190

The difference is 0.423190 − 0.238103 = 0.185087. (B)
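The two tail probabilities can be verified with the Poisson pmf; an illustrative Python sketch:

```python
from math import exp, factorial

def poisson_sf(n, lam):
    """P(N >= n) for N ~ Poisson(lam)."""
    return 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(n))

# Third arrival in (30, 40) seconds at 6 per minute: lam = 3 at 30s, lam = 4 at 40s.
p = poisson_sf(3, 4) - poisson_sf(3, 3)
print(round(p, 6))  # 0.185087
```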
2. [Lesson 27] The probability of a check greater than 10,000 is

        1 − Φ((ln 10,000 − 3)/3) = 1 − Φ(2.07) = 1 − 0.9808 = 0.0192

The Poisson distribution of just the checks over 10,000 in one week has parameter 7(50)(0.1)(0.0192) = 0.672. (B)
3. [Lesson 29] The Poisson parameter per day is computed by adding up the rates over the 4 periods. For 11PM–6AM, we have 7 hours times 3 per hour, or 21. For 8AM–5PM we have 9 hours times 30 per hour, or 270. For the other two periods, because of the linear increase or decrease, the average per hour is the midpoint, or (30 + 3)/2 = 16.5, and there are 8 hours with varying rates, for a total of 8 × 16.5 = 132. The total number of withdrawals per day is 21 + 270 + 132 = 423. The mean aggregate withdrawals is (423)(300) = 126,900.
The second moment of the uniform distribution on (100, 500) is the variance plus the mean squared. The variance of a uniform distribution is the range squared divided by 12, or 400²/12. Therefore, the second moment of the uniform distribution is 400²/12 + 300² = 103,333⅓. The variance of aggregate withdrawals, by the compound variance formula (29.2), is λE[X²] = (423)(103,333⅓) = 43,710,000.
The amount of money needed to be adequate 95% of the time is

        126,900 + 1.645 √43,710,000 = 137,775.68    (B)
4. [Lesson 2] The bias is the expected value of the estimator minus the true value of the parameter.

        E[θ̂²] = Var(θ̂) + E[θ̂]² = 20 + 4² = 36

and θ² = 6² = 36, so

        biasθ̂²(θ²) = 36 − 36 = 0    (C)

5. [Lesson 2] E[½(θ̂ + θ̃)] = ½(4 + 5) = 4.5, so the bias is 4.5 − 5 = −0.5. The variance of the estimator is

        (½)² Var(θ̂ + θ̃) = ¼ (Var(θ̂) + Var(θ̃) + 2 Cov(θ̂, θ̃)) = ¼ (2 + 3 + 2(−1)) = 0.75

Therefore, the mean square error is 0.5² + 0.75 = 1. (A)
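The bias-variance bookkeeping for the averaged estimator is mechanical; an illustrative Python sketch (our own helper, not from the manual):

```python
def mse_average(e1, e2, v1, v2, cov, theta):
    """MSE of the average of two estimators with means e1, e2, variances
    v1, v2, and covariance cov: squared bias plus (1/4)(v1 + v2 + 2 cov)."""
    bias = 0.5 * (e1 + e2) - theta
    variance = 0.25 * (v1 + v2 + 2 * cov)
    return bias**2 + variance

print(mse_average(4, 5, 2, 3, -1, 5))  # 1.0
```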
6. [Lesson 3] For a binomial with fixed m = 3, maximum likelihood estimates q the same way as the method of moments. For 30 tosses (10 tosses of 3 coins) we have (1)(3) + (2)(2) + (3)(1) = 10 heads, so q = 10/30 = 1/3. (C)
7. [Lesson 3] The likelihood function in terms of the 6 observations xᵢ, dropping multiplicative constants such as 1/(xᵢ√2π), is

        L(σ) = (1/σ⁶) exp(−∑ᵢ₌₁⁶ (ln xᵢ − 2)²/(2σ²))

        ∑ᵢ₌₁⁶ (ln xᵢ − 2)² = 0.091558 + 1.485658 + 1.963354 + 3.807352 + 5.055731 + 6.249048 = 18.652701

        l(σ) = −6 ln σ − 18.652701/(2σ²)
        dl/dσ = −6/σ + 18.652701/σ³ = 0
        −6σ² + 18.652701 = 0
        σ² = 18.652701/6 = 3.108784

The moments of the fitted distribution are

        E[X] = e^(2+3.108784/2) = 34.9666
        E[X²] = e^(4+2(3.108784)) = 27,380
        Var(X) = 27,380 − 34.9666² = 26,157    (D)
8. [Subsection 8.4.3] The number of games won is binomial. The pooled mean games won is (7 + x)/24. For a two-sided test with 5% significance, we need the Z statistic to be no higher than 1.96, the 97.5th percentile of a standard normal distribution. The Z statistic is

        Z = (x/14 − 7/10) / √( ((7 + x)/24)((17 − x)/24)(1/10 + 1/14) )

We set this equal to 1.96 and solve for x.

        x/14 − 0.7 = (1.96/24) √(0.171429(7 + x)(17 − x))
        2.112446x − 20.701967 = √((7 + x)(17 − x))
        4.462426x² − 87.463556x + 428.5714 = −x² + 10x + 119
        5.462426x² − 97.463556x + 309.5714 = 0
        x = 13.71, 4.13

Thus we accept the null hypothesis when x is between 4.13 and 13.71, so the highest acceptable value is 13. (D)
It may be easier to solve this question by plugging in the answer choices for x in the original equation
setting Z equal to 1.96.
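Plugging the answer choices into the Z statistic is indeed quick in code; an illustrative Python sketch evaluating the boundary values x = 13 and x = 14:

```python
from math import sqrt

def pooled_z(x1, n1, x2, n2):
    """Two-sample Z statistic for equal win probabilities, pooled variance."""
    p = (x1 + x2) / (n1 + n2)
    return (x2 / n2 - x1 / n1) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

print(round(pooled_z(7, 10, 13, 14), 2))  # 1.48, accept at 5%
print(round(pooled_z(7, 10, 14, 14), 2))  # 2.19, reject at 5%
```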
9. [Lesson 6] To achieve 1% significance, the critical value for a normal random variable must be 2.326 times the standard deviation below the mean, or 100 − 2.326(50/√30) = 78.76. The power of the test at 70 is the probability of rejecting the null hypothesis if µ = 70, or

        Pr(X̄ < 78.76) = Φ((78.76 − 70)/(50/√30)) = Φ(0.960) = 0.831    (D)

10. [Lesson 10] The sample variance is

        S² = (20/19)(1100/20 − (120/20)²) = 20

19S²/σ² = W, where W is chi-square with 19 degrees of freedom. To make σ² large, make W small: pick its 1st percentile, 7.633. Then σ² = 19(20)/7.633 = 49.8 is the upper bound of the interval. (E)
11. [Lesson 14] The likelihood ratio is (α₀ = 1)

        (x₁x₂)^(α₀−1) / (α²(x₁x₂)^(α−1)) = (1/α²)(x₁x₂)^(1−α)

This should be less than a constant k. The first factor is a positive constant and can be incorporated in k. Since 1 − α < 0, we will have this expression less than a constant if x₁x₂ > k. (D)

12. [Lesson 7] If the critical value is x and n is the number of trips, we need x = 30 + 1.645(5/√n) for the significance condition, and we need x ≤ 35 − 1.282(5/√n) for the power condition. Thus we have

        (1.645 + 1.282)(5)/√n ≤ 5
        √n ≥ 2.927
        n ≥ 8.567

Rounding up to the next integer, 9 trips are needed. (A)
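The same sample-size computation in general form; an illustrative Python sketch:

```python
from math import ceil

def min_sample_size(mu0, mu1, sigma, z_alpha, z_beta):
    """Smallest n giving a one-sided z-test of mu0 vs mu1 the stated
    significance (z_alpha) and power (z_beta): n = ((za + zb)σ/Δ)²."""
    return ceil(((z_alpha + z_beta) * sigma / (mu1 - mu0)) ** 2)

print(min_sample_size(30, 35, 5, 1.645, 1.282))  # 9
```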
13. [Lesson 10] Let Xᵢ be an observation of the normal random variable and σ² the variance of Xᵢ. Let W = ∑ᵢ₌₁⁵ (Xᵢ − 5)²/σ². Then by the definition of the chi-square distribution, W is a chi-square random variable with 5 degrees of freedom. The observed value of W is 175/σ², so

        σ² ~ 175/W

To find the lower bound a of a 95% confidence interval, we use the 95th percentile of W, or 11.070:

        a = 175/11.070 = 15.808    (C)

14. [Lesson 20] The probability that one item X is greater than 0.5 is

        Pr(X > 0.5) = ∫_{0.5}^{1} 2x dx = 1 − 0.5² = 0.75

The probability that Y₂ is greater than 0.5 is the probability that three or four items are above 0.5, or

        Pr(Y₂ > 0.5) = (4 choose 3)(0.75³)(0.25) + (4 choose 4)(0.75⁴) = 0.421875 + 0.31640625 = 0.73828    (D)
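The order-statistic probability generalizes to any k and n; an illustrative Python sketch:

```python
from math import comb

def prob_kth_order_stat_exceeds(n, k, p):
    """P(k-th smallest of n iid draws exceeds t), where p = P(one draw > t):
    at least n - k + 1 of the n draws must exceed t."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(n - k + 1, n + 1))

# Exercise 14: n = 4, Y2 > 0.5 requires at least 3 draws above 0.5, each w.p. 0.75.
print(prob_kth_order_stat_exceeds(4, 2, 0.75))  # 0.73828125
```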
15. [Lesson 24] The ranks of time are in order: 1, 2, 3, 4, 5. The ranks of scores are 3, 1, 2, 5, 4. Using formula (24.5),

        ρ = 1 − 6[(1 − 3)² + (2 − 1)² + (3 − 2)² + (4 − 5)² + (5 − 4)²]/(5(24)) = 1 − 48/120 = 0.6    (E)
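The rank computation can be scripted; an illustrative Python sketch of the no-ties Spearman formula:

```python
def spearman_rho(ranks_x, ranks_y):
    """Spearman's rho for tie-free ranks: 1 - 6*sum(d²)/(n(n² - 1))."""
    n = len(ranks_x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_x, ranks_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman_rho([1, 2, 3, 4, 5], [3, 1, 2, 5, 4]))  # 0.6
```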
16. [Lesson 21] There are k = 30 numbers higher than the hypothesized median. The sign test statistic is

        Z = (k − n/2)/(√n/2) = (30 − 24)/(√48/2) = 1.732

In the standard normal table, this is greater than the 95th percentile, which is 1.645, but less than the 97.5th percentile, which is 1.96. For a two-sided test, this means accepting H0 at 5% significance but not at 10% significance. (D)
17. [Section 22.1] The logarithms of the six numbers are 9.9, 10.3, 10.8, 11.3, 11.6, 12.2. After subtracting 10.5, we have −0.6, −0.2, 0.3, 0.8, 1.1, 1.7. The ranks are 3, 1, 2, 4, 5, 6. The sum of the ranks of the positive numbers is 2 + 4 + 5 + 6 = 17. Since the maximum statistic is n(n + 1)/2 = (6)(7)/2 = 21, the probability Pr(T ≥ 17) is the same as Pr(T ≤ 4). To get a statistic of 4 or less as a sum of unequal numbers, you'd need no numbers, or 1, 2, 3, 4, or 1 + 2, 1 + 3. That's a total of 7 ways out of 2⁶, or 7/64 = 0.109375. Since we are performing a two-sided test, the p-value is 2(0.109375) = 0.219. (E)
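The 7-out-of-64 count can be verified by enumerating all 2⁶ sign patterns; an illustrative Python sketch:

```python
from itertools import combinations

def wilcoxon_upper_tail(n, t):
    """Exact P(T >= t) under H0 for the Wilcoxon signed rank statistic:
    each subset of ranks 1..n (the 'positive' ranks) is equally likely."""
    hits = sum(1 for r in range(n + 1)
               for subset in combinations(range(1, n + 1), r)
               if sum(subset) >= t)
    return hits / 2**n

p_value = 2 * wilcoxon_upper_tail(6, 17)  # two-sided
print(p_value)  # 0.21875
```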
18. [Section 12.3] The estimated value of β is

        β̂ = ∑xy / ∑x² = 47/10 = 4.7

The standard error of β̂ is found from

        SSE = 274 − 4.7²(10) = 53.1
        s² = 53.1/4 = 13.275
        s_β̂ = √(13.275/10) = 1.1522

The t statistic is 4.7/1.1522 = 4.0793. (D)
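The same t statistic from the centered sums; an illustrative Python sketch (the helper name is ours):

```python
from math import sqrt

def slope_t_stat(sxx, sxy, syy, n):
    """t statistic for H0: beta = 0 in simple linear regression, from the
    centered sums Sxx = sum (x-x̄)², Sxy = sum (x-x̄)(y-ȳ), Syy = sum (y-ȳ)²."""
    beta = sxy / sxx
    s2 = (syy - beta**2 * sxx) / (n - 2)  # SSE / (n - 2)
    return beta / sqrt(s2 / sxx)

print(round(slope_t_stat(10, 47, 274, 6), 2))  # 4.08
```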
19. [Section 12.3] First let's calculate α̂.

        ∑(Xᵢ − X̄)² = 1547 − 85²/8 = 643.875
        ∑(Xᵢ − X̄)(Yᵢ − Ȳ) = 3616 − (85)(199)/8 = 1501.625
        β̂ = 1501.625/643.875 = 2.3322
        α̂ = Ȳ − β̂x̄ = 199/8 − 2.3322(85/8) = 0.09571

The variance of α̂ is

        (19.36059²/643.875)(1547/8) = 112.57

The t statistic to test α = 0 is 0.09571/√112.57 = 0.0090. (A)
20. [Lesson 13] The total is 80 + 70 + 60 + 30 + 62 + 95 + 72 + 41 = 510. The total sum of squares is

        80² + 70² + · · · + 41² − 510²/8 = 3021.5

The sums of each treatment are 150, 90, 157, 113. The treatment sum of squares is

        (150² + 90² + 157² + 113²)/2 − 510²/8 = 1496.5

It has 3 degrees of freedom.
The error sum of squares is 3021.5 − 1496.5 = 1525. It has 4 degrees of freedom.
The F ratio is

        F₃,₄ = (1496.5/3)/(1525/4) = 1.308    (C)
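The sums of squares can be checked mechanically; an illustrative Python sketch of the one-way ANOVA F ratio:

```python
def anova_f(groups):
    """One-way ANOVA F ratio from lists of observations per treatment."""
    data = [x for g in groups for x in g]
    n, k = len(data), len(groups)
    correction = sum(data) ** 2 / n
    total_ss = sum(x * x for x in data) - correction
    treatment_ss = sum(sum(g) ** 2 / len(g) for g in groups) - correction
    error_ss = total_ss - treatment_ss
    return (treatment_ss / (k - 1)) / (error_ss / (n - k))

print(round(anova_f([[80, 70], [60, 30], [62, 95], [72, 41]]), 3))  # 1.308
```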
21. [Lesson 17] There are 10 possible claims in a year; 2 materialized, 8 didn't.

        a → 0.1 + 2 = 2.1
        b → 0.9 + 8 = 8.9

The expected number of claims in the next year is 10(2.1/(2.1 + 8.9)) = 1.9091. (E)
22. [Lesson 17] Since there are only two possibilities (either 0 claims are submitted or not), the model is Bernoulli. The prior is beta with a = 3, b = 1, which in the posterior go to a′ = 3 + 2 = 5, b′ = 1 + 0 = 1, and the posterior expected value of θ is then a′/(a′ + b′) = 5/6 = 0.8333, which is the posterior probability of no claims. (E)

23. [Lesson 18] We are given 7 days of experience, so n = 7. We are given that v = 5,000,000, a = 10,000,000, and nx̄ = 420,000. Using formula (18.1),

        µ* = (5,000,000(50,000) + 420,000(10,000,000))/(5,000,000 + 7(10,000,000)) = 59,333    (E)
24. [Lesson 19] An exponential is a gamma with α = 1. Let γ = 1/θ.

        α = 1 → 1 + 6 = 7
        γ = 10 → 10 + 2 = 12

Posterior expected claims is 7/12. (C)

25. [Lesson 19] Let γ = 1/θ. Then after n months, α → α and γ → γ + n. We want

        α/(γ + n) = (1/2)(α/γ)

This means n = γ, or n = 1/θ. (B)