Joint probability mass function: $f_{XY}(x,y) = P(X = x,\, Y = y)$
(1) $f_{XY}(x,y) \ge 0$
(2) $\sum_x \sum_y f_{XY}(x,y) = 1$
(3) $f_{XY}(x,y) = P(X = x,\, Y = y)$
Marginal probability mass function: $f_X(x) = \sum_y f_{XY}(x,y)$, $f_Y(y) = \sum_x f_{XY}(x,y)$
$P(A \cap B) = 0$ – Mutually exclusive. $E(\hat{\Theta}) - \theta = 0$ – Unbiased estimator.
Two events are independent if:
(1) $P(A \mid B) = P(A)$
(2) $P(B \mid A) = P(B)$
(3) $P(A \cap B) = P(A)\,P(B)$
Probability density function:
(1) $f(x) \ge 0$
(2) $\int_{-\infty}^{\infty} f(x)\,dx = 1$
(3) $P(a \le X \le b) = \int_a^b f(x)\,dx$
Discrete uniform distribution, all $n$ outcomes have equal probability:
$f(x_i) = \frac{1}{n}$, $\mu = \frac{b+a}{2}$, $\sigma^2 = \frac{(b-a+1)^2 - 1}{12}$
Continuous uniform distribution:
$f(x) = \frac{1}{b-a}$, $a \le x \le b$, $\mu = \frac{a+b}{2}$, $\sigma^2 = \frac{(b-a)^2}{12}$
A Bernoulli trial (Binomial distribution, success or failure):
$f(x) = \binom{n}{x} p^x (1-p)^{n-x}$, $x = 0, 1, 2, \ldots, n$
$\mu = np$, $\sigma^2 = np(1-p)$
Cumulative distribution function: $F(x) = P(X \le x)$, $-\infty < x < \infty$
Expected value of a function of a continuous random variable: $E[h(X)] = \int_{-\infty}^{\infty} h(x) f(x)\,dx$
Normal distribution: $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}$, $-\infty < x < \infty$, $E(X) = \mu$, $V(X) = \sigma^2$
Standardizing to calculate a probability: $P(X \le x) = P\!\left(Z \le \frac{x - \mu}{\sigma}\right)$
Normal approximation to the binomial distribution: $Z = \frac{X - np}{\sqrt{np(1-p)}}$ is approximately a standard normal rv. To approximate a binomial probability with the normal distribution,
$P(X \le x) \approx P\!\left(Z \le \frac{x + 0.5 - np}{\sqrt{np(1-p)}}\right)$ and $P(X \ge x) \approx P\!\left(Z \ge \frac{x - 0.5 - np}{\sqrt{np(1-p)}}\right)$
The approximation is good for $np > 5$ and $n(1-p) > 5$.
Normal approximation to the Poisson distribution: $Z = \frac{X - \lambda}{\sqrt{\lambda}}$, good for $\lambda > 5$.
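As a quick numerical illustration of the continuity correction above (not part of the original sheet), here is a minimal Python sketch assuming SciPy is available; the values n = 50, p = 0.3, x = 20 are made-up examples.

```python
# Minimal sketch (assumed values n=50, p=0.3, x=20): compare the exact binomial
# probability P(X <= x) with its normal approximation using the 0.5 correction.
from math import sqrt
from scipy.stats import binom, norm

n, p, x = 50, 0.3, 20                       # arbitrary example values
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom.cdf(x, n, p)                  # exact P(X <= 20)
approx = norm.cdf((x + 0.5 - mu) / sigma)   # normal approximation with correction

print(f"exact  = {exact:.4f}")
print(f"approx = {approx:.4f}")   # close, since np = 15 > 5 and n(1-p) = 35 > 5
```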
Exponential distribution: $f(x) = \lambda e^{-\lambda x}$, for $0 \le x < \infty$, $\mu = 1/\lambda$, $\sigma^2 = 1/\lambda^2$
For discrete random variables X & Y, if one of the following is true, the others are also true and X, Y are independent:
(1) $f_{XY}(x,y) = f_X(x)\, f_Y(y)$ for all $x$ and $y$
(2) $f_{Y\mid x}(y) = f_Y(y)$ for all $x$ and $y$ with $f_X(x) > 0$
(3) $f_{X\mid y}(x) = f_X(x)$ for all $x$ and $y$ with $f_Y(y) > 0$
(4) $P(X \in A,\, Y \in B) = P(X \in A)\, P(Y \in B)$ for any sets $A$ and $B$
Covariance: $\sigma_{XY} = \operatorname{cov}(X,Y) = E[(X - \mu_X)(Y - \mu_Y)] = \sum_x \sum_y (x - \mu_X)(y - \mu_Y) f_{XY}(x,y) = E(XY) - \mu_X \mu_Y$
Permutations: $n! = n(n-1)(n-2)\cdots 2\cdot 1$, $0! = 1$, $P_r^n = \frac{n!}{(n-r)!}$
The standard error of an estimator $\hat{\Theta}$ is its standard deviation, given by $\sigma_{\hat{\Theta}} = \sqrt{V(\hat{\Theta})}$. The mean squared error of the estimator $\hat{\Theta}$ of the parameter $\theta$ is $MSE(\hat{\Theta}) = E(\hat{\Theta} - \theta)^2 = V(\hat{\Theta}) + (\text{bias})^2$.
Likelihood function: $L(\theta) = \prod_{i=1}^{n} f(x_i;\theta)$
Suppose a random experiment consists of a series of n trials. Assume that
1) The result of each trial is classified into one of k classes.
2) The probability of a trial generating a result in class 1 to class k is constant over trials and equal to $p_1$ to $p_k$.
3) The trials are independent.
The random variables $X_1, X_2, \ldots, X_k$ that denote the number of trials that result in class 1 to class k have a multinomial distribution, and the joint probability mass function is
$P(X_1 = x_1, \ldots, X_k = x_k) = \frac{n!}{x_1!\, x_2! \cdots x_k!}\, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}$, with $x_1 + \cdots + x_k = n$ and $p_1 + \cdots + p_k = 1$.
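For a quick numerical check of the multinomial pmf above, a minimal Python sketch assuming SciPy is available; the counts and class probabilities are made-up example values.

```python
# Minimal sketch: evaluate the multinomial pmf directly from the formula
# n!/(x1!...xk!) * p1^x1 ... pk^xk and check against scipy, with made-up values.
from math import factorial, prod
from scipy.stats import multinomial

x = [3, 2, 5]            # example counts per class, n = 10
p = [0.2, 0.3, 0.5]      # example class probabilities, sum to 1
n = sum(x)

direct = factorial(n) / prod(factorial(xi) for xi in x) * prod(pi**xi for pi, xi in zip(p, x))
via_scipy = multinomial.pmf(x, n=n, p=p)

print(direct, via_scipy)   # both ~0.0567
```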
If $x_1, x_2, \ldots, x_n$ is a sample of n observations, the sample variance is:
$s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1} = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2/n}{n-1}$
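To confirm that the two forms of the sample variance agree, a minimal Python sketch with arbitrary example data, assuming NumPy is available.

```python
# Minimal sketch: check that the definitional and shortcut forms of the sample
# variance above agree, and match numpy's ddof=1 estimator.
import numpy as np

x = np.array([4.2, 5.1, 3.8, 6.0, 5.5])      # arbitrary example data
n = len(x)

s2_definition = np.sum((x - x.mean())**2) / (n - 1)
s2_shortcut   = (np.sum(x**2) - np.sum(x)**2 / n) / (n - 1)

print(s2_definition, s2_shortcut, np.var(x, ddof=1))   # all three are equal
```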
Confidence interval on the mean, variance known:
$\bar{x} - z_{\alpha/2}\,\sigma/\sqrt{n} \;\le\; \mu \;\le\; \bar{x} + z_{\alpha/2}\,\sigma/\sqrt{n}$
For one-sided confidence bounds, change $z_{\alpha/2}$ to $z_{\alpha}$ and keep only the relevant side.
Sample size for a specified error $E = |\bar{x} - \mu|$: $n = \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^2$
Confidence interval on the mean, variance unknown:
$\bar{x} - t_{\alpha/2,\,n-1}\, s/\sqrt{n} \;\le\; \mu \;\le\; \bar{x} + t_{\alpha/2,\,n-1}\, s/\sqrt{n}$
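A minimal Python sketch of the t interval above, assuming SciPy is available; the data and the 95% level are arbitrary example choices.

```python
# Minimal sketch: two-sided t confidence interval on the mean, variance unknown.
import numpy as np
from scipy.stats import t

x = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3])   # arbitrary sample
alpha = 0.05
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

half_width = t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)           # lower and upper limits
```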
Random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$, $S^2$ = sample variance:
$X^2 = \frac{(n-1)S^2}{\sigma^2}$ has a chi-square distribution with $n-1$ degrees of freedom.
CI on the variance, $s^2$ = sample variance, $\sigma^2$ unknown:
$\frac{(n-1)s^2}{\chi^2_{\alpha/2,\,n-1}} \;\le\; \sigma^2 \;\le\; \frac{(n-1)s^2}{\chi^2_{1-\alpha/2,\,n-1}}$
Lower and upper confidence bounds on $\sigma^2$:
$\frac{(n-1)s^2}{\chi^2_{\alpha,\,n-1}} \le \sigma^2$ and $\sigma^2 \le \frac{(n-1)s^2}{\chi^2_{1-\alpha,\,n-1}}$
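A minimal Python sketch of the two-sided interval on the variance, assuming SciPy is available; data and the 95% level are arbitrary.

```python
# Minimal sketch: confidence interval on a normal variance via chi-square quantiles.
import numpy as np
from scipy.stats import chi2

x = np.array([9.8, 10.2, 10.5, 9.6, 10.1, 10.4, 9.9])   # arbitrary sample
alpha = 0.05
n, s2 = len(x), np.var(x, ddof=1)

lower = (n - 1) * s2 / chi2.ppf(1 - alpha / 2, df=n - 1)   # upper alpha/2 point
upper = (n - 1) * s2 / chi2.ppf(alpha / 2, df=n - 1)       # lower alpha/2 point
print(lower, upper)   # CI on sigma^2; take square roots for a CI on sigma
```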
If X & Y are independent random variables: $E(XY) = E(X)E(Y)$, so $\sigma_{XY} = \rho_{XY} = 0$.
Combinations: $C_r^n = \binom{n}{r} = \frac{n!}{r!\,(n-r)!}$
Poisson distribution: $f(x) = \frac{e^{-\lambda}\lambda^x}{x!}$, $x = 0, 1, 2, \ldots$, $\mu = \lambda$, $\sigma^2 = \lambda$
Correlation: $\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$, $-1 \le \rho_{XY} \le 1$
Negative binomial distribution, number of trials until r successes:
$f(x) = \binom{x-1}{r-1}(1-p)^{x-r} p^{r}$, $x = r, r+1, r+2, \ldots$
$\mu = \frac{r}{p}$, $\sigma^2 = \frac{r(1-p)}{p^2}$
Hypergeometric distribution: N objects contain K objects = successes and N−K objects = failures. A sample of size n is drawn, X = # of successes in the sample:
$f(x) = \frac{\binom{K}{x}\binom{N-K}{n-x}}{\binom{N}{n}}$, $x = \max(0,\, n+K-N), \ldots, \min(K, n)$
$\mu = np$, $\sigma^2 = np(1-p)\frac{N-n}{N-1}$, where $p = K/N$ (the binomial approximation is good for $n/N < 0.1$).
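To illustrate the binomial approximation noted above, a minimal Python sketch assuming SciPy is available; N = 500, K = 50, n = 20 are made-up values with n/N = 0.04 < 0.1.

```python
# Minimal sketch: hypergeometric pmf vs. its binomial approximation when n/N < 0.1.
from scipy.stats import hypergeom, binom

N, K, n = 500, 50, 20      # population size, successes in population, sample size
p = K / N                  # success proportion, here 0.1

for x in range(4):
    exact = hypergeom.pmf(x, N, K, n)   # scipy argument order: (k, M, n, N) = (x, N, K, n)
    approx = binom.pmf(x, n, p)
    print(x, round(exact, 4), round(approx, 4))
```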
Because a conditional probability mass function $f_{Y\mid x}(y)$ is a probability mass function, the following properties are satisfied:
(1) $f_{Y\mid x}(y) \ge 0$
(2) $\sum_y f_{Y\mid x}(y) = 1$
(3) $f_{Y\mid x}(y) = P(Y = y \mid X = x)$
Conditional mean and variance:
$\mu_{Y\mid x} = E(Y \mid x) = \sum_y y\, f_{Y\mid x}(y)$, $\sigma^2_{Y\mid x} = V(Y \mid x) = \sum_y (y - \mu_{Y\mid x})^2 f_{Y\mid x}(y)$
Conditional probability mass function of Y given X = x:
$f_{Y\mid x}(y) = \frac{f_{XY}(x,y)}{f_X(x)}$ for $f_X(x) > 0$
Probability mass function:
(1) $f(x_i) \ge 0$
(2) $\sum_i f(x_i) = 1$
(3) $f(x_i) = P(X = x_i)$
Mean and variance of the discrete rv:
$\mu = E(X) = \sum_x x f(x)$, $\sigma^2 = V(X) = \sum_x (x-\mu)^2 f(x) = \sum_x x^2 f(x) - \mu^2$
Geometric distribution: $f(x) = (1-p)^{x-1} p$, $x = 1, 2, \ldots$
$\mu = \frac{1}{p}$, $\sigma^2 = \frac{1-p}{p^2}$
Binomial proportion: If n is large, the distribution of
$Z = \frac{X - np}{\sqrt{np(1-p)}} = \frac{\hat{P} - p}{\sqrt{p(1-p)/n}}$
is approximately standard normal.
Approximate test on a binomial proportion: $H_0\!: p = p_0$,
$Z_0 = \frac{X - np_0}{\sqrt{np_0(1-p_0)}}$
Alternative hypothesis | Rejection criteria (**)
H1: p ≠ p0 | z0 > zα/2 or z0 < −zα/2
H1: p > p0 | z0 > zα
H1: p < p0 | z0 < −zα
Approximate sample size for a 2-sided test on a binomial proportion:
$n = \left[\frac{z_{\alpha/2}\sqrt{p_0(1-p_0)} + z_{\beta}\sqrt{p(1-p)}}{p - p_0}\right]^2$
CI on a binomial proportion (for one-sided lower or upper bounds, change $z_{\alpha/2}$ to $z_{\alpha}$):
$\hat{p} - z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \;\le\; p \;\le\; \hat{p} + z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$
Sample size for a specified error E on a binomial proportion:
$n = \left(\frac{z_{\alpha/2}}{E}\right)^2 \hat{p}(1-\hat{p})$; with no prior estimate, use the maximum $p(1-p) = 0.25$.
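A minimal Python sketch of the approximate one-proportion z-test above, assuming SciPy is available; the counts 62 successes in n = 100 trials and p0 = 0.5 are made-up values.

```python
# Minimal sketch: approximate z-test on a binomial proportion (two-sided H1: p != p0).
from math import sqrt
from scipy.stats import norm

x, n, p0, alpha = 62, 100, 0.5, 0.05       # arbitrary example values
z0 = (x - n * p0) / sqrt(n * p0 * (1 - p0))

p_value = 2 * (1 - norm.cdf(abs(z0)))      # two-sided P-value
print(z0, p_value, p_value < alpha)        # reject H0 when p-value < alpha
```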
Test on the difference in means, variances known: $H_0\!: \mu_1 - \mu_2 = \Delta_0$,
$Z_0 = \frac{\bar{X}_1 - \bar{X}_2 - \Delta_0}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$
Prediction interval on a single future observation from a normal dist.:
$\bar{x} - t_{\alpha/2,\,n-1}\, s\sqrt{1 + 1/n} \;\le\; X_{n+1} \;\le\; \bar{x} + t_{\alpha/2,\,n-1}\, s\sqrt{1 + 1/n}$
CI, difference in means, variances known:
$\bar{x}_1 - \bar{x}_2 - z_{\alpha/2}\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}} \;\le\; \mu_1 - \mu_2 \;\le\; \bar{x}_1 - \bar{x}_2 + z_{\alpha/2}\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$
Alternative hypothesis | Rejection criteria
H1: μ1 − μ2 ≠ Δ0 | z0 > zα/2 or z0 < −zα/2
H1: μ1 − μ2 > Δ0 | z0 > zα
H1: μ1 − μ2 < Δ0 | z0 < −zα
Sample size for a test on the difference in means with power of at least 1−β, n1 = n2 = n, variances known:
$n \approx \frac{(z_{\alpha/2} + z_\beta)^2 (\sigma_1^2 + \sigma_2^2)}{(\Delta - \Delta_0)^2}$; for a one-sided test, change $z_{\alpha/2}$ to $z_\alpha$.
Sample size for a c.i. on the difference in means, variances known: $n = \left(\frac{z_{\alpha/2}}{E}\right)^2 (\sigma_1^2 + \sigma_2^2)$
Tests on the difference in means, variances unknown and equal: $H_0\!: \mu_1 - \mu_2 = \Delta_0$,
$T_0 = \frac{\bar{X}_1 - \bar{X}_2 - \Delta_0}{S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$, where $S_p^2 = \frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1 + n_2 - 2}$
CI Case 1, difference in means, variances unknown & equal:
$\bar{x}_1 - \bar{x}_2 - t_{\alpha/2,\,n_1+n_2-2}\, s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} \;\le\; \mu_1 - \mu_2 \;\le\; \bar{x}_1 - \bar{x}_2 + t_{\alpha/2,\,n_1+n_2-2}\, s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$
Alternative hypothesis | Rejection criteria
H1: μ1 − μ2 ≠ Δ0 | t0 > tα/2, n1+n2−2 or t0 < −tα/2, n1+n2−2
H1: μ1 − μ2 > Δ0 | t0 > tα, n1+n2−2
H1: μ1 − μ2 < Δ0 | t0 < −tα, n1+n2−2
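A minimal Python sketch of the pooled two-sample t statistic above, assuming NumPy/SciPy are available; both samples are made-up example data.

```python
# Minimal sketch: pooled two-sample t-test (equal variances) for H0: mu1 - mu2 = 0,
# computed by hand and checked against scipy.
import numpy as np
from scipy.stats import ttest_ind

x1 = np.array([20.1, 19.8, 20.5, 21.0, 20.3])   # arbitrary sample 1
x2 = np.array([19.2, 19.5, 18.9, 19.8, 19.4])   # arbitrary sample 2
n1, n2 = len(x1), len(x2)

sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t0 = (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

print(t0, ttest_ind(x1, x2, equal_var=True).statistic)   # identical statistics
```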
Tests on the difference in means, variances unknown and not equal: If $H_0\!: \mu_1 - \mu_2 = \Delta_0$ is true, the statistic
$T_0^{*} = \frac{\bar{X}_1 - \bar{X}_2 - \Delta_0}{\sqrt{S_1^2/n_1 + S_2^2/n_2}}$
is distributed approximately as t with $\nu$ degrees of freedom, where
$\nu = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 - 1}}$
CI Case 2, difference in means, variances unknown and not equal:
$\bar{x}_1 - \bar{x}_2 - t_{\alpha/2,\,\nu}\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \;\le\; \mu_1 - \mu_2 \;\le\; \bar{x}_1 - \bar{x}_2 + t_{\alpha/2,\,\nu}\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$
$\nu$ is the degrees of freedom for $t_{\alpha/2}$; if not an integer, round down.
CI for $\mu_D$ from paired samples:
$\bar{d} - t_{\alpha/2,\,n-1}\, s_D/\sqrt{n} \;\le\; \mu_D \;\le\; \bar{d} + t_{\alpha/2,\,n-1}\, s_D/\sqrt{n}$
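A minimal Python sketch of the Welch degrees of freedom and the Case 2 interval above, assuming NumPy/SciPy are available; the two samples are made-up example data.

```python
# Minimal sketch: unequal-variance (Welch) degrees of freedom nu, rounded down,
# and the corresponding 95% CI for mu1 - mu2.
import numpy as np
from scipy.stats import t

x1 = np.array([14.2, 15.1, 13.8, 14.9, 15.4, 14.6])   # arbitrary sample 1
x2 = np.array([12.9, 13.4, 12.1, 13.8])               # arbitrary sample 2
n1, n2 = len(x1), len(x2)
v1, v2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2

nu = int((v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1)))  # round down
half_width = t.ppf(0.975, df=nu) * np.sqrt(v1 + v2)

diff = x1.mean() - x2.mean()
print(nu, diff - half_width, diff + half_width)   # 95% CI for mu1 - mu2
```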
Approximate CI on the difference in population proportions:
$\hat{p}_1 - \hat{p}_2 - z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \;\le\; p_1 - p_2 \;\le\; \hat{p}_1 - \hat{p}_2 + z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$
Paired t-test: $H_0\!: \mu_D = \Delta_0$,
$T_0 = \frac{\bar{D} - \Delta_0}{S_D/\sqrt{n}}$
Goodness of fit: $\chi_0^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$, approximately chi-square with $k - p - 1$ degrees of freedom ($p$ = number of estimated parameters).
Expected frequency: $E_i = n p_i$
The power of a test: power $= 1 - \beta$. For the two-sided z-test on the mean with variance known and true difference $\Delta = \mu - \mu_0$:
$\beta = \Phi\!\left(z_{\alpha/2} - \frac{\Delta\sqrt{n}}{\sigma}\right) - \Phi\!\left(-z_{\alpha/2} - \frac{\Delta\sqrt{n}}{\sigma}\right)$
Sample size: $n \approx \frac{(z_{\alpha/2} + z_\beta)^2\sigma^2}{\Delta^2}$ (two-sided); $n = \frac{(z_\alpha + z_\beta)^2\sigma^2}{\Delta^2}$ (one-sided)
Rejection criteria (z-tests): $z_0 > z_{\alpha/2}$ or $z_0 < -z_{\alpha/2}$ (two-sided); $z_0 > z_\alpha$; $z_0 < -z_\alpha$
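A minimal Python sketch of the power and sample-size formulas above, assuming SciPy is available; sigma = 2, Delta = 1, alpha = 0.05, n = 25 and the target power 0.9 are made-up values.

```python
# Minimal sketch: power of the two-sided z-test on the mean, and the sample size
# needed for a target power, following the beta and n formulas above.
from math import sqrt, ceil
from scipy.stats import norm

sigma, delta, alpha, n = 2.0, 1.0, 0.05, 25      # arbitrary example values
z_a2 = norm.ppf(1 - alpha / 2)

beta = norm.cdf(z_a2 - delta * sqrt(n) / sigma) - norm.cdf(-z_a2 - delta * sqrt(n) / sigma)
print("power =", 1 - beta)

# Sample size for power 0.9 (z_beta with beta = 0.10)
z_b = norm.ppf(0.90)
n_needed = (z_a2 + z_b) ** 2 * sigma ** 2 / delta ** 2
print("n needed =", ceil(n_needed))
```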
CI: $\bar{x} - z_{\alpha/2}\,\sigma/\sqrt{n} \;\le\; \mu \;\le\; \bar{x} + z_{\alpha/2}\,\sigma/\sqrt{n}$
Rejection criteria: $z_0 > z_{\alpha/2}$ or $z_0 < -z_{\alpha/2}$ (two-sided); $z_0 > z_\alpha$ or $z_0 < -z_\alpha$ (one-sided)
The P-value is the smallest level of significance that would lead to rejection of the null hypothesis H0 with the given data. For z-tests:
$P = 2\left[1 - \Phi(|z_0|)\right]$ for $H_1\!: \mu \ne \mu_0$
$P = 1 - \Phi(z_0)$ for $H_1\!: \mu > \mu_0$
$P = \Phi(z_0)$ for $H_1\!: \mu < \mu_0$
Fitted or estimated regression line: $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$, where
$\hat{\beta}_1 = \frac{S_{xy}}{S_{xx}}$, $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$
$S_{xy} = \sum_{i=1}^{n} x_i y_i - \frac{\left(\sum x_i\right)\left(\sum y_i\right)}{n}$, $S_{xx} = \sum_{i=1}^{n} x_i^2 - \frac{\left(\sum x_i\right)^2}{n}$
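A minimal Python sketch of the least-squares estimates above, assuming NumPy is available; x and y are made-up example data.

```python
# Minimal sketch: least-squares slope and intercept from the Sxy/Sxx formulas,
# checked against numpy.polyfit.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # arbitrary predictor values
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9])   # arbitrary responses
n = len(x)

sxy = np.sum(x * y) - x.sum() * y.sum() / n
sxx = np.sum(x**2) - x.sum()**2 / n
b1 = sxy / sxx
b0 = y.mean() - b1 * x.mean()

print(b0, b1)
print(np.polyfit(x, y, deg=1))   # returns [slope, intercept], same values
```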
Alternative hypothesis | Rejection criteria
H1: µ ≠ µ0 | t0 > tα/2,n−1 or t0 < −tα/2,n−1
H1: µ > µ0 | t0 > tα,n−1
H1: µ < µ0 | t0 < −tα,n−1
Test on the variance of a normal distribution: $H_0\!: \sigma^2 = \sigma_0^2$,
$\chi_0^2 = \frac{(n-1)S^2}{\sigma_0^2}$
Alternative hypothesis | Rejection criteria
H1: σ² ≠ σ0² | χ²0 > χ²α/2,n−1 or χ²0 < χ²1−α/2,n−1
H1: σ² > σ0² | χ²0 > χ²α,n−1
H1: σ² < σ0² | χ²0 < χ²1−α,n−1
Hypothesis test procedure:
1. Choose the parameter of interest
2. H0:
3. H1:
4. α =
5. The test statistic is
6. Reject H0 at α = … if
7. Computations
8. Conclusions
Test on the mean, variance known: $H_0\!: \mu = \mu_0$,
$Z_0 = \frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}}$
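A minimal Python sketch of the one-sample z-test above, assuming SciPy is available; mu0 = 50, sigma = 2, n = 25 and xbar = 50.8 are made-up values.

```python
# Minimal sketch: one-sample z-test on the mean with known sigma, using the Z0
# statistic above and the two-sided P-value 2[1 - Phi(|z0|)].
from math import sqrt
from scipy.stats import norm

mu0, sigma, n, xbar, alpha = 50.0, 2.0, 25, 50.8, 0.05   # arbitrary values
z0 = (xbar - mu0) / (sigma / sqrt(n))

p_value = 2 * (1 - norm.cdf(abs(z0)))
print(z0, p_value, p_value < alpha)   # reject H0: mu = 50 when p-value < alpha
```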
Alternative hypotheses: H1: µ ≠ µ0, H1: µ > µ0, H1: µ < µ0
Test on the mean, variance unknown: $H_0\!: \mu = \mu_0$,
$T_0 = \frac{\bar{X} - \mu_0}{S/\sqrt{n}}$
Alternative hypothesis | Rejection criteria
H1: µD ≠ Δ0 | t0 > tα/2,n−1 or t0 < −tα/2,n−1
H1: µD > Δ0 | t0 > tα,n−1
H1: µD < Δ0 | t0 < −tα,n−1
Approximate test on the difference of two population proportions: $H_0\!: p_1 = p_2$,
$Z_0 = \frac{\hat{P}_1 - \hat{P}_2}{\sqrt{\hat{P}(1-\hat{P})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$, where $\hat{P} = \frac{X_1 + X_2}{n_1 + n_2}$
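A minimal Python sketch of the pooled two-proportion z statistic above, assuming SciPy is available; the counts 48/200 and 36/220 are made-up example data.

```python
# Minimal sketch: approximate z-test for H0: p1 = p2 using the pooled estimate P-hat.
from math import sqrt
from scipy.stats import norm

x1, n1 = 48, 200      # arbitrary example counts, sample 1
x2, n2 = 36, 220      # arbitrary example counts, sample 2
p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

z0 = (p1 - p2) / sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - norm.cdf(abs(z0)))
print(z0, p_value)    # two-sided test of equal proportions
```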