San Jose State University
Department of Mathematics
Math 164
Mathematical Statistics
Catalog Description
Sampling distributions, interval estimation, confidence intervals, order statistics, sufficient
statistics, the Rao-Blackwell Theorem, completeness, uniqueness, point estimation, maximum
likelihood, Bayes’ methods, testing hypotheses.
Prerequisite
Math 163 (Probability Theory) with a grade of C- or better, or instructor consent.
Textbook
Larsen & Marx (1986). An Introduction to Mathematical Statistics and Its Applications, 2nd
edition, Prentice Hall.
Alternative textbook
Wackerly, Mendenhall & Scheaffer, Mathematical Statistics with Applications, 6th ed., Duxbury
Press.
References
none
Technology Requirement
A scientific calculator with an exponential key (y^x) and a factorial key (x!) is needed for
some of the homework assignments as well as for the exams. A graphing calculator, such as the
TI-82 or TI-85, is useful but not required.
Course Content
Brief review of probability theory. Sampling distributions, order statistics, estimation of
parameters, properties of estimators: unbiasedness, efficiency, consistency, minimum variance;
maximum likelihood, confidence intervals, hypothesis testing, Central Limit Theorem, and
goodness-of-fit tests.
Student Outcomes
Students should be able to:
1. Determine whether an estimator for a parameter θ is biased or unbiased (for θ), and explain what this
means.
2. Calculate the relative efficiency of two estimators.
3. Determine whether an estimator (for θ), W, is efficient by calculating the Cramer-Rao lower
bound and Var(W) and comparing the two values.
4. Given a density function with an unknown parameter θ, find an efficient unbiased estimator
for θ by using the Cramer-Rao theorem (a brief worked illustration appears after this list).
5. Determine whether an estimator is consistent, asymptotically unbiased, and/or squared-error
consistent (for θ).
6. Determine whether an estimator is sufficient (for θ) by using the Fisher-Neyman criterion or
by using the Factorization Theorem.
7. Calculate the likelihood function and use it to find the maximum-likelihood estimator(s) (for
one or more parameters).
8. Find the method of moments estimator(s) (for one or more parameters).
9. Find confidence intervals for parameters (p, μ, σ², etc.) of various distributions.
10. Do one- and two-sample tests of hypotheses involving parameters of various distributions,
and calculate p-values and the probabilities of type I and type II errors. Specifically, be able to do
tests involving the standard normal distribution, the Chi-square distribution, t-tests, and F-tests.
11. Find the critical region for a test given the probability of a type I error and vice-versa.
12. Calculate the generalized likelihood ratio and find the form of the corresponding generalized
likelihood ratio test.
13. Use the Central Limit Theorem to calculate probabilities involving the sample mean (or
sample sum) of a sufficiently large sample.
14. Do a goodness-of-fit test for a distribution in the two cases: where all the parameters are
known and where they are unknown.
15. Do a chi-square test to determine whether two traits are independent based on data given in a
contingency table.
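
For concreteness, the following is a brief worked sketch of outcomes 1, 3, 4, and 7 for one standard model; the exponential distribution is used here only as an illustration and is not singled out by the course. Suppose $Y_1, \ldots, Y_n$ is a random sample from the exponential density $f(y;\theta) = \tfrac{1}{\theta} e^{-y/\theta}$, $y > 0$. Then
\[
L(\theta)=\prod_{i=1}^{n}\tfrac{1}{\theta}e^{-y_i/\theta}=\theta^{-n}e^{-\sum y_i/\theta},
\qquad
\frac{d}{d\theta}\ln L(\theta)=-\frac{n}{\theta}+\frac{\sum y_i}{\theta^{2}}=0
\;\Longrightarrow\;
\hat{\theta}=\bar{Y}\quad(\text{outcome 7}),
\]
\[
E(\hat{\theta})=E(\bar{Y})=\theta\quad(\text{unbiased, outcome 1}),
\qquad
\operatorname{Var}(\hat{\theta})=\frac{\theta^{2}}{n},
\]
\[
E\!\left[\left(\frac{\partial\ln f(Y;\theta)}{\partial\theta}\right)^{2}\right]
=E\!\left[\left(\frac{Y-\theta}{\theta^{2}}\right)^{2}\right]
=\frac{1}{\theta^{2}},
\qquad
\text{so the Cramer-Rao lower bound is}\ \frac{\theta^{2}}{n}=\operatorname{Var}(\hat{\theta}),
\]
and hence $\hat{\theta}=\bar{Y}$ attains the Cramer-Rao lower bound and is an efficient unbiased estimator for θ (outcomes 3 and 4).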
Topics and suggested course schedule
All sections refer to the text by Larsen and Marx.
The suggested number of 75-minute lectures for each topic is given in brackets; lecture counts for optional topics are not included in the 30-lecture total.

Ch. 1-4: Probability Review: discrete and continuous random variables, pdfs, cdfs, expectation, variance, standard deviation and their properties, moments, central moments. [1]
Ch. 1-4: Probability Review: the major discrete distributions: Bernoulli, binomial, geometric, negative binomial, hypergeometric, Poisson; and Poisson as a limit of Bin(n,p). [1.5]
Ch. 1-4: Probability Review: the major continuous-type distributions: uniform, exponential, gamma, normal; and the relationship of the exponential and gamma distributions to Poisson processes. [1]
§ 3.6: Order Statistics. [1]
§ 5.1-5.4: Estimation of Parameters. [1.5]
§ 5.5: Efficiency. [0.5]
§ 5.6: Minimum Variance Estimators: The Cramer-Rao Lower Bound. [1]
§ 5.7a: Consistency. Include a review of Chebyshev's inequality. [1]
Optional (inclusion requires cutting the time allotted to some other topic): § 5.7b: Sufficiency. Include a review of conditional density and conditional expectation (§ 3.7) if covered. [2]
§ 5.8: Finding Estimators: (I) The Method of Maximum Likelihood. [1]
§ 5.8: Finding Estimators: (II) The Method of Moments. [1]
§ 5.9: Interval Estimation (Confidence Intervals). [0.5]
§ 5.10: Confidence Intervals for the binomial parameter p. [0.5]
§ 6.1-6.2: Hypothesis Testing: The Decision Rule. [1]
§ 6.3: Type I and Type II Errors. [1]
§ 6.4: The Generalized Likelihood Ratio Test. [1]
§ 7.1-7.2: Point Estimates for μ and σ² for the normal distribution. [1]
Review: § 3.12: Moment generating functions. [0.5]
§ 7.3: Linear Combinations of Normal Random Variables. [0.5]
§ 7.4: The Central Limit Theorem. [1.5]
§ 7.5: The Chi-square Distribution; Inferences about σ². [1]
§ 7.6: The F Distribution and t Distribution. [1]
§ 7.7: The One-Sample t Test. [1]
§ 8.1-8.2: Testing H0: μX = μY: The Two-Sample t Test. [1]
Optional (inclusion requires cutting the time allotted to some other topic): § 8.3: Testing H0: σX² = σY²: The F Test. [1]
§ 8.4: Binomial Data: Testing H0: pX = pY. [1]
§ 8.5: Confidence Intervals for the Two-Sample Problem. [1]
Review: § 9.1-9.2: The Multinomial Distribution. [0.5]
§ 9.3: Goodness-of-Fit Tests. [1.5]
§ 9.4: Goodness-of-Fit Tests: Parameters Unknown. [1]
§ 9.5: Contingency Tables. [1]
Exams. [2]
TOTAL: 30 lectures
Prepared by:
Amy Rocha
Chair, Probability and Statistics Committee
Department of Math, SJSU
November 2003