Formula/Table Card for Weiss’s Introductory Statistics, 9/e
Larry R. Griffey
Notation
n = sample size
$\bar{x}$ = sample mean
s = sample standard deviation
$Q_j$ = jth quartile
N = population size
$\mu$ = population mean
$\sigma$ = population standard deviation
p = population proportion
$\hat{p}$ = sample proportion
d = paired difference
O = observed frequency
E = expected frequency

Chapter 3
Descriptive Measures
• Sample mean: $\bar{x} = \dfrac{\sum x_i}{n}$
• Range: Range = Max − Min
• Sample standard deviation:
  $s = \sqrt{\dfrac{\sum (x_i - \bar{x})^2}{n - 1}}$ or $s = \sqrt{\dfrac{\sum x_i^2 - (\sum x_i)^2/n}{n - 1}}$
• Interquartile range: $\mathrm{IQR} = Q_3 - Q_1$
• Lower limit = $Q_1 - 1.5 \cdot \mathrm{IQR}$, Upper limit = $Q_3 + 1.5 \cdot \mathrm{IQR}$
• Population mean (mean of a variable): $\mu = \dfrac{\sum x_i}{N}$
• Population standard deviation (standard deviation of a variable):
  $\sigma = \sqrt{\dfrac{\sum (x_i - \mu)^2}{N}}$ or $\sigma = \sqrt{\dfrac{\sum x_i^2}{N} - \mu^2}$
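As a numerical sanity check on the measures above, a short Python sketch (data invented for illustration) shows that the defining and computing formulas for s agree:

```python
# Illustrative data (not from the card).
data = [5, 7, 8, 12, 13, 14, 18, 21]
n = len(data)

mean = sum(data) / n                      # sample mean: sum(x_i) / n
rng = max(data) - min(data)               # range = Max - Min

# Defining formula for s
s_def = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
# Computing formula for s; algebraically identical
s_comp = ((sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1)) ** 0.5

print(mean, rng, round(s_def, 4), round(s_comp, 4))
```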
Chapter 4
Probability Concepts
• Probability for equally likely outcomes: $P(E) = \dfrac{f}{N}$, where f denotes the number of ways event E can occur and N denotes the total number of outcomes possible.
• Special addition rule: $P(A \text{ or } B \text{ or } C \text{ or } \cdots) = P(A) + P(B) + P(C) + \cdots$ (A, B, C, … mutually exclusive)
• Complementation rule: $P(E) = 1 - P(\text{not } E)$
• General addition rule: $P(A \text{ or } B) = P(A) + P(B) - P(A \,\&\, B)$
• Conditional probability rule: $P(B \mid A) = \dfrac{P(A \,\&\, B)}{P(A)}$
• General multiplication rule: $P(A \,\&\, B) = P(A) \cdot P(B \mid A)$
• Special multiplication rule: $P(A \,\&\, B \,\&\, C \,\&\, \cdots) = P(A) \cdot P(B) \cdot P(C) \cdots$ (A, B, C, … independent)
• Rule of total probability: $P(B) = \sum_{j=1}^{k} P(A_j) \cdot P(B \mid A_j)$ ($A_1, A_2, \ldots, A_k$ mutually exclusive and exhaustive)
• Bayes's rule: $P(A_i \mid B) = \dfrac{P(A_i) \cdot P(B \mid A_i)}{\sum_{j=1}^{k} P(A_j) \cdot P(B \mid A_j)}$ ($A_1, A_2, \ldots, A_k$ mutually exclusive and exhaustive)
• Factorial: $k! = k(k-1) \cdots 2 \cdot 1$
• Permutations rule: $_mP_r = \dfrac{m!}{(m-r)!}$
• Special permutations rule: $_mP_m = m!$
• Combinations rule: $_mC_r = \dfrac{m!}{r!(m-r)!}$
• Number of possible samples: $_NC_n = \dfrac{N!}{n!(N-n)!}$
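The total-probability and Bayes formulas can be exercised on a hypothetical two-urn example (all probabilities invented for illustration); Python's math module supplies the counting rules:

```python
from fractions import Fraction
from math import comb, factorial, perm

# Hypothetical two-urn setup (numbers invented for illustration).
P_A = [Fraction(1, 2), Fraction(1, 2)]            # P(A1), P(A2): which urn is picked
P_B_given_A = [Fraction(3, 10), Fraction(7, 10)]  # P(B | Aj): chance of a red draw

# Rule of total probability: P(B) = sum_j P(Aj) * P(B | Aj)
P_B = sum(pa * pb for pa, pb in zip(P_A, P_B_given_A))

# Bayes's rule: P(A1 | B) = P(A1) * P(B | A1) / P(B)
P_A1_given_B = P_A[0] * P_B_given_A[0] / P_B

# Counting rules: mPr = m!/(m-r)!, mCr = m!/(r!(m-r)!)
n_perms = perm(5, 2)
n_combs = comb(5, 2)
print(P_B, P_A1_given_B, n_perms, n_combs)
```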
Chapter 5
Discrete Random Variables
• Mean of a discrete random variable X: $\mu = \sum x P(X = x)$
• Standard deviation of a discrete random variable X:
  $\sigma = \sqrt{\sum (x - \mu)^2 P(X = x)}$ or $\sigma = \sqrt{\sum x^2 P(X = x) - \mu^2}$
• Factorial: $k! = k(k-1) \cdots 2 \cdot 1$
• Binomial coefficient: $\binom{n}{x} = \dfrac{n!}{x!(n-x)!}$
• Binomial probability formula: $P(X = x) = \binom{n}{x} p^x (1 - p)^{n-x}$, where n denotes the number of trials and p denotes the success probability.
• Mean of a binomial random variable: $\mu = np$
• Standard deviation of a binomial random variable: $\sigma = \sqrt{np(1 - p)}$
• Poisson probability formula: $P(X = x) = e^{-\lambda} \dfrac{\lambda^x}{x!}$
• Mean of a Poisson random variable: $\mu = \lambda$
• Standard deviation of a Poisson random variable: $\sigma = \sqrt{\lambda}$

Chapter 6
The Normal Distribution
• Standardized variable: $z = \dfrac{x - \mu}{\sigma}$
• z-score for an x-value: $z = \dfrac{x - \mu}{\sigma}$
• x-value for a z-score: $x = \mu + z \cdot \sigma$
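A quick Python spot-check (illustrative n, p, and λ) confirms that the binomial pmf has mean np and the Poisson pmf has mean λ, and that the z-score formulas invert each other:

```python
from math import comb, exp, factorial

# Illustrative parameters, not from the card.
n, p = 10, 0.3

def binom_pmf(x):
    # P(X = x) = C(n, x) p^x (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p) ** (n - x)

binom_mean = sum(x * binom_pmf(x) for x in range(n + 1))   # should equal n*p

lam = 4.0

def pois_pmf(x):
    # P(X = x) = e^(-lam) lam^x / x!
    return exp(-lam) * lam**x / factorial(x)

pois_mean = sum(x * pois_pmf(x) for x in range(60))        # truncated sum; tail is negligible

# z-score for an x-value, then back: x = mu + z*sigma
x_val, mu, sigma = 130, 100, 15
z = (x_val - mu) / sigma
print(round(binom_mean, 6), round(pois_mean, 6), z)
```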
Copyright 2012 Pearson Education, Inc.
Chapter 7
The Sampling Distribution of the Sample Mean
• Mean of the variable $\bar{x}$: $\mu_{\bar{x}} = \mu$
• Standard deviation of the variable $\bar{x}$: $\sigma_{\bar{x}} = \sigma/\sqrt{n}$

Chapter 8
Confidence Intervals for One Population Mean
• Standardized version of the variable $\bar{x}$: $z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}}$
• Studentized version of the variable $\bar{x}$: $t = \dfrac{\bar{x} - \mu}{s/\sqrt{n}}$
• z-interval for μ (σ known, normal population or large sample): $\bar{x} \pm z_{\alpha/2} \cdot \dfrac{\sigma}{\sqrt{n}}$
• Margin of error for the estimate of μ: $E = z_{\alpha/2} \cdot \dfrac{\sigma}{\sqrt{n}}$
• Sample size for estimating μ: $n = \left(\dfrac{z_{\alpha/2} \cdot \sigma}{E}\right)^2$, rounded up to the nearest whole number.
• t-interval for μ (σ unknown, normal population or large sample): $\bar{x} \pm t_{\alpha/2} \cdot \dfrac{s}{\sqrt{n}}$, with df = n − 1.
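The z-interval and sample-size formulas can be evaluated with Python's statistics.NormalDist ($\bar{x}$, σ, n, and the target margin are illustrative values, not from the card):

```python
from math import ceil, sqrt
from statistics import NormalDist

# Illustrative inputs: known sigma, sample of 36, 95% confidence.
xbar, sigma, n, conf = 52.0, 8.0, 36, 0.95
z = NormalDist().inv_cdf(1 - (1 - conf) / 2)    # z_{alpha/2}
E = z * sigma / sqrt(n)                          # margin of error
lo, hi = xbar - E, xbar + E                      # z-interval for mu

# Sample size needed to pin mu down to within E0 = 2:
E0 = 2.0
n_needed = ceil((z * sigma / E0) ** 2)           # rounded UP, as the card specifies
print(round(lo, 2), round(hi, 2), n_needed)
```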
Chapter 9
Hypothesis Tests for One Population Mean
• z-test statistic for $H_0\colon \mu = \mu_0$ (σ known, normal population or large sample): $z = \dfrac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$
• t-test statistic for $H_0\colon \mu = \mu_0$ (σ unknown, normal population or large sample): $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$, with df = n − 1.
• Wilcoxon signed-rank test statistic for $H_0\colon \mu = \mu_0$ (symmetric population): W = sum of the positive ranks
• Symmetry property of a Wilcoxon signed-rank distribution: $W_{1-A} = n(n+1)/2 - W_A$
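A minimal sketch of the one-sample t statistic (sample values invented for illustration):

```python
from math import sqrt
from statistics import mean, stdev

# Illustrative sample; test H0: mu = 98.0 against the sample evidence.
sample = [98.6, 99.1, 97.8, 98.9, 99.4, 98.2, 98.7, 99.0]
mu0 = 98.0
n = len(sample)

# t = (xbar - mu0) / (s / sqrt(n)), df = n - 1 = 7
t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
print(round(t, 3))
```

The resulting t would then be compared against the t-table with df = n − 1.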
Chapter 10
Inferences for Two Population Means
• Pooled sample standard deviation: $s_p = \sqrt{\dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$
• Pooled t-test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations): $t = \dfrac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{(1/n_1) + (1/n_2)}}$, with df = $n_1 + n_2 - 2$.
• Pooled t-interval for $\mu_1 - \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations): $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot s_p \sqrt{(1/n_1) + (1/n_2)}$, with df = $n_1 + n_2 - 2$.
• Degrees of freedom for nonpooled t-procedures: $\Delta = \dfrac{\left[(s_1^2/n_1) + (s_2^2/n_2)\right]^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$, rounded down to the nearest integer.
• Nonpooled t-test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples, and normal populations or large samples): $t = \dfrac{\bar{x}_1 - \bar{x}_2}{\sqrt{(s_1^2/n_1) + (s_2^2/n_2)}}$, with df = Δ.
• Nonpooled t-interval for $\mu_1 - \mu_2$ (independent samples, and normal populations or large samples): $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot \sqrt{(s_1^2/n_1) + (s_2^2/n_2)}$, with df = Δ.
• Mann–Whitney test statistic for $H_0\colon \mu_1 = \mu_2$ (independent samples and same-shape populations): M = sum of the ranks for sample data from Population 1
• Symmetry property of a Mann–Whitney distribution: $M_{1-A} = n_1(n_1 + n_2 + 1) - M_A$
• Paired t-test statistic for $H_0\colon \mu_1 = \mu_2$ (paired sample, and normal differences or large sample): $t = \dfrac{\bar{d}}{s_d/\sqrt{n}}$, with df = n − 1.
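The pooled and nonpooled (Welch-type) statistics, including the rounded-down df Δ, can be sketched with two invented samples:

```python
from math import floor, sqrt
from statistics import mean, variance

# Illustrative independent samples.
x1 = [23, 25, 28, 30, 31]
x2 = [20, 22, 24, 25, 27, 29]
n1, n2 = len(x1), len(x2)
v1, v2 = variance(x1), variance(x2)

# Pooled: assumes equal population standard deviations
sp = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
t_pooled = (mean(x1) - mean(x2)) / (sp * sqrt(1 / n1 + 1 / n2))  # df = n1 + n2 - 2

# Nonpooled, with df = Delta rounded DOWN as the card specifies
se2 = v1 / n1 + v2 / n2
t_nonpooled = (mean(x1) - mean(x2)) / sqrt(se2)
df = floor(se2**2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)))
print(round(t_pooled, 3), round(t_nonpooled, 3), df)
```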
• Paired t-interval for $\mu_1 - \mu_2$ (paired sample, and normal differences or large sample): $\bar{d} \pm t_{\alpha/2} \cdot \dfrac{s_d}{\sqrt{n}}$, with df = n − 1.
• Paired Wilcoxon signed-rank test statistic for $H_0\colon \mu_1 = \mu_2$ (paired sample and symmetric differences): W = sum of the positive ranks

Chapter 11
Inferences for Population Standard Deviations
• $\chi^2$-test statistic for $H_0\colon \sigma = \sigma_0$ (normal population): $\chi^2 = \dfrac{n - 1}{\sigma_0^2}\, s^2$, with df = n − 1.
• $\chi^2$-interval for σ (normal population): $\sqrt{\dfrac{n - 1}{\chi^2_{\alpha/2}}} \cdot s$ to $\sqrt{\dfrac{n - 1}{\chi^2_{1-\alpha/2}}} \cdot s$, with df = n − 1.
• F-test statistic for $H_0\colon \sigma_1 = \sigma_2$ (independent samples and normal populations): $F = s_1^2/s_2^2$, with df = $(n_1 - 1, n_2 - 1)$.
• F-interval for $\sigma_1/\sigma_2$ (independent samples and normal populations): $\dfrac{1}{\sqrt{F_{\alpha/2}}} \cdot \dfrac{s_1}{s_2}$ to $\dfrac{1}{\sqrt{F_{1-\alpha/2}}} \cdot \dfrac{s_1}{s_2}$, with df = $(n_1 - 1, n_2 - 1)$.
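The χ² and F statistics for standard deviations, sketched with illustrative numbers:

```python
from statistics import variance

# chi-square statistic for H0: sigma = sigma0 (sample invented for illustration)
sample = [9.8, 10.2, 10.1, 9.7, 10.4, 10.0]
sigma0 = 0.2
n = len(sample)
chi2 = (n - 1) * variance(sample) / sigma0**2   # df = n - 1 = 5

# F statistic for H0: sigma1 = sigma2, from two hypothetical sample variances
s1_sq, s2_sq = 4.0, 2.5
F = s1_sq / s2_sq                                # df = (n1 - 1, n2 - 1)
print(round(chi2, 3), F)
```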
Chapter 12
Inferences for Population Proportions
• Sample proportion: $\hat{p} = \dfrac{x}{n}$, where x denotes the number of members in the sample that have the specified attribute.
• z-interval for p: $\hat{p} \pm z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$ (Assumption: both x and n − x are 5 or greater)
• Margin of error for the estimate of p: $E = z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$
• Sample size for estimating p: $n = 0.25 \left(\dfrac{z_{\alpha/2}}{E}\right)^2$ or $n = \hat{p}_g (1 - \hat{p}_g) \left(\dfrac{z_{\alpha/2}}{E}\right)^2$, rounded up to the nearest whole number (g = "educated guess")
• z-test statistic for $H_0\colon p = p_0$: $z = \dfrac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$ (Assumption: both $np_0$ and $n(1 - p_0)$ are 5 or greater)
• Pooled sample proportion: $\hat{p}_p = \dfrac{x_1 + x_2}{n_1 + n_2}$
• z-test statistic for $H_0\colon p_1 = p_2$: $z = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}_p(1 - \hat{p}_p)}\,\sqrt{(1/n_1) + (1/n_2)}}$ (Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)
• z-interval for $p_1 - p_2$: $(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$ (Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)
• Margin of error for the estimate of $p_1 - p_2$: $E = z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$
• Sample size for estimating $p_1 - p_2$: $n_1 = n_2 = 0.5 \left(\dfrac{z_{\alpha/2}}{E}\right)^2$ or $n_1 = n_2 = \left(\hat{p}_{1g}(1 - \hat{p}_{1g}) + \hat{p}_{2g}(1 - \hat{p}_{2g})\right) \left(\dfrac{z_{\alpha/2}}{E}\right)^2$, rounded up to the nearest whole number (g = "educated guess")
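A sketch of the one-proportion z procedures (counts and target margin invented for illustration):

```python
from math import ceil, sqrt
from statistics import NormalDist

# Illustrative count: 56 successes in 200 trials; test H0: p = 0.25.
x, n = 56, 200
phat = x / n                                    # sample proportion
p0 = 0.25
z_stat = (phat - p0) / sqrt(p0 * (1 - p0) / n)  # note: p0, not phat, in the denominator

z = NormalDist().inv_cdf(0.975)                 # z_{alpha/2} for 95% confidence
E = z * sqrt(phat * (1 - phat) / n)             # margin of error for phat

# Conservative sample size (no educated guess for p) to get margin 0.03:
n_conservative = ceil(0.25 * (z / 0.03) ** 2)   # rounded UP, as the card specifies
print(round(z_stat, 3), round(E, 4), n_conservative)
```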
Chapter 13
Chi-Square Procedures
• Expected frequencies for a chi-square goodness-of-fit test: E = np
• Test statistic for a chi-square goodness-of-fit test: $\chi^2 = \sum (O - E)^2/E$, with df = c − 1, where c is the number of possible values for the variable under consideration.
• Expected frequencies for a chi-square independence test or a chi-square homogeneity test: $E = \dfrac{R \cdot C}{n}$, where R = row total and C = column total.
• Test statistic for a chi-square independence test: $\chi^2 = \sum (O - E)^2/E$, with df = (r − 1)(c − 1), where r and c are the number of possible values for the two variables under consideration.
• Test statistic for a chi-square homogeneity test: $\chi^2 = \sum (O - E)^2/E$, with df = (r − 1)(c − 1), where r is the number of populations and c is the number of possible values for the variable under consideration.
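The goodness-of-fit statistic and the R·C/n expected-cell formula, on invented counts:

```python
# Chi-square goodness-of-fit for a fair die (observed counts invented for illustration).
observed = [22, 17, 18, 25, 16, 22]
n = sum(observed)
expected = [n * (1 / 6)] * 6                  # E = n * p for each of the c = 6 faces
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))  # df = c - 1 = 5

# Expected frequency for one cell of an independence test: E = R * C / n
R, C, total = 40, 55, 200
E_cell = R * C / total
print(round(chi2, 3), E_cell)
```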
Chapter 14
Descriptive Methods in Regression and Correlation
• $S_{xx}$, $S_{xy}$, and $S_{yy}$:
  $S_{xx} = \sum (x_i - \bar{x})^2 = \sum x_i^2 - (\sum x_i)^2/n$
  $S_{xy} = \sum (x_i - \bar{x})(y_i - \bar{y}) = \sum x_i y_i - (\sum x_i)(\sum y_i)/n$
  $S_{yy} = \sum (y_i - \bar{y})^2 = \sum y_i^2 - (\sum y_i)^2/n$
• Regression equation: $\hat{y} = b_0 + b_1 x$, where $b_1 = \dfrac{S_{xy}}{S_{xx}}$ and $b_0 = \dfrac{1}{n}\left(\sum y_i - b_1 \sum x_i\right) = \bar{y} - b_1 \bar{x}$
• Total sum of squares: $SST = \sum (y_i - \bar{y})^2 = S_{yy}$
• Regression sum of squares: $SSR = \sum (\hat{y}_i - \bar{y})^2 = S_{xy}^2/S_{xx}$
• Error sum of squares: $SSE = \sum (y_i - \hat{y}_i)^2 = S_{yy} - S_{xy}^2/S_{xx}$
• Regression identity: SST = SSR + SSE
• Coefficient of determination: $r^2 = \dfrac{SSR}{SST}$
• Linear correlation coefficient: $r = \dfrac{\frac{1}{n-1}\sum (x_i - \bar{x})(y_i - \bar{y})}{s_x s_y}$ or $r = \dfrac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}$
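The $S_{xx}$/$S_{xy}$/$S_{yy}$ computing formulas give the least-squares slope and intercept directly; a toy-data sketch:

```python
# Least-squares fit via the computing formulas (toy data for illustration).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

Sxx = sum(x * x for x in xs) - sum(xs) ** 2 / n
Sxy = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
Syy = sum(y * y for y in ys) - sum(ys) ** 2 / n

b1 = Sxy / Sxx                   # slope
b0 = ybar - b1 * xbar            # intercept
r2 = (Sxy**2 / Sxx) / Syy        # r^2 = SSR / SST
print(round(b1, 4), round(b0, 4), round(r2, 4))
```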
Chapter 15
Inferential Methods in Regression and Correlation
• Population regression equation: $y = \beta_0 + \beta_1 x$
• Standard error of the estimate: $s_e = \sqrt{\dfrac{SSE}{n - 2}}$
• Test statistic for $H_0\colon \beta_1 = 0$: $t = \dfrac{b_1}{s_e/\sqrt{S_{xx}}}$, with df = n − 2.
• Confidence interval for $\beta_1$: $b_1 \pm t_{\alpha/2} \cdot \dfrac{s_e}{\sqrt{S_{xx}}}$, with df = n − 2.
• Confidence interval for the conditional mean of the response variable corresponding to $x_p$: $\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{\dfrac{1}{n} + \dfrac{(x_p - \sum x_i/n)^2}{S_{xx}}}$, with df = n − 2.
• Prediction interval for an observed value of the response variable corresponding to $x_p$: $\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{1 + \dfrac{1}{n} + \dfrac{(x_p - \sum x_i/n)^2}{S_{xx}}}$, with df = n − 2.
• Test statistic for $H_0\colon \rho = 0$: $t = \dfrac{r}{\sqrt{\dfrac{1 - r^2}{n - 2}}}$, with df = n − 2.
• Test statistic for a correlation test for normality: $R_p = \dfrac{\sum x_i w_i}{\sqrt{S_{xx} \sum w_i^2}}$, where x and w denote observations of the variable and the corresponding normal scores, respectively.
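Building on the same computing formulas, the t statistic for $H_0\colon \beta_1 = 0$ can be sketched on toy data (purely illustrative):

```python
from math import sqrt

# Toy data for illustration; same form as the Chapter 14 quantities.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

Sxx = sum(x * x for x in xs) - sum(xs) ** 2 / n
Sxy = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
Syy = sum(y * y for y in ys) - sum(ys) ** 2 / n

b1 = Sxy / Sxx
SSE = Syy - Sxy**2 / Sxx
se = sqrt(SSE / (n - 2))          # standard error of the estimate
t = b1 / (se / sqrt(Sxx))         # df = n - 2 = 3
print(round(se, 4), round(t, 2))
```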
Chapter 16
Analysis of Variance (ANOVA)
• Notation in one-way ANOVA:
  k = number of populations
  n = total number of observations
  $\bar{x}$ = mean of all n observations
  $n_j$ = size of sample from Population j
  $\bar{x}_j$ = mean of sample from Population j
  $s_j^2$ = variance of sample from Population j
  $T_j$ = sum of sample data from Population j
• Defining formulas for sums of squares in one-way ANOVA:
  $SST = \sum (x_i - \bar{x})^2$
  $SSTR = \sum n_j (\bar{x}_j - \bar{x})^2$
  $SSE = \sum (n_j - 1) s_j^2$
• One-way ANOVA identity: SST = SSTR + SSE
• Computing formulas for sums of squares in one-way ANOVA:
  $SST = \sum x_i^2 - (\sum x_i)^2/n$
  $SSTR = \sum (T_j^2/n_j) - (\sum x_i)^2/n$
  $SSE = SST - SSTR$
• Mean squares in one-way ANOVA: $MSTR = \dfrac{SSTR}{k - 1}$, $MSE = \dfrac{SSE}{n - k}$
• Test statistic for one-way ANOVA (independent samples, normal populations, and equal population standard deviations): $F = \dfrac{MSTR}{MSE}$, with df = (k − 1, n − k).
• Confidence interval for $\mu_i - \mu_j$ in the Tukey multiple-comparison method (independent samples, normal populations, and equal population standard deviations): $(\bar{x}_i - \bar{x}_j) \pm \dfrac{q_\alpha}{\sqrt{2}} \cdot s \sqrt{(1/n_i) + (1/n_j)}$, where $s = \sqrt{MSE}$ and $q_\alpha$ is obtained for a q-curve with parameters k and n − k.
• Test statistic for a Kruskal–Wallis test (independent samples, same-shape populations, all sample sizes 5 or greater): $H = \dfrac{SSTR}{SST/(n - 1)}$ or $H = \dfrac{12}{n(n + 1)} \sum_{j=1}^{k} \dfrac{R_j^2}{n_j} - 3(n + 1)$, where SSTR and SST are computed for the ranks of the data, and $R_j$ denotes the sum of the ranks for the sample data from Population j. H has approximately a chi-square distribution with df = k − 1.
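The one-way ANOVA computing formulas in a short sketch (three invented samples):

```python
# One-way ANOVA F statistic from the computing formulas (illustrative samples).
samples = [[4, 5, 7, 8], [6, 9, 10, 11], [10, 12, 13]]
all_x = [x for s in samples for x in s]
n, k = len(all_x), len(samples)

# Computing formulas: T_j is each sample's sum.
SST = sum(x * x for x in all_x) - sum(all_x) ** 2 / n
SSTR = sum(sum(s) ** 2 / len(s) for s in samples) - sum(all_x) ** 2 / n
SSE = SST - SSTR                 # one-way ANOVA identity, rearranged

MSTR = SSTR / (k - 1)
MSE = SSE / (n - k)
F = MSTR / MSE                   # df = (k - 1, n - k) = (2, 8)
print(round(SSTR, 3), round(SSE, 3), round(F, 3))
```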
Tables (values not reproduced in this transcript):
Table I: Random numbers
Table II: Areas under the standard normal curve
Table III: Normal scores
Table IV: Values of $t_\alpha$
Table V: Values of $W_\alpha$
Table VI: Values of $M_\alpha$
Table VII: Values of $\chi^2_\alpha$
Table VIII: Values of $F_\alpha$