Statistics for Management and Economics, Seventh Edition
Formulas
Numerical Descriptive Techniques
Population mean
\mu = \frac{\sum_{i=1}^{N} x_i}{N}
Sample mean
\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}
Range
Largest observation - Smallest observation
Population variance
\sigma^2 = \frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}
Sample variance
s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}
Population standard deviation
\sigma = \sqrt{\sigma^2}
Sample standard deviation
s = \sqrt{s^2}
Population covariance
\sigma_{xy} = \frac{\sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y)}{N}
Sample covariance
s_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1}
Population coefficient of correlation
\rho = \frac{\sigma_{xy}}{\sigma_x \sigma_y}
Sample coefficient of correlation
r = \frac{s_{xy}}{s_x s_y}
Slope coefficient
b_1 = \frac{\mathrm{cov}(x, y)}{s_x^2}
y-intercept
b_0 = \bar{y} - b_1 \bar{x}
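The sample formulas above can be checked numerically. The sketch below uses made-up data (the values and variable names are illustrative only); `statistics.stdev` applies the same n − 1 sample definition as the formulas.

```python
import statistics

# Hypothetical paired observations
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 2.0, 6.0]
n = len(x)

x_bar = sum(x) / n                                      # sample mean
y_bar = sum(y) / n
s2_x = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)     # sample variance
s_xy = sum((xi - x_bar) * (yi - y_bar)
           for xi, yi in zip(x, y)) / (n - 1)           # sample covariance
r = s_xy / (statistics.stdev(x) * statistics.stdev(y))  # sample correlation

b1 = s_xy / s2_x         # slope coefficient
b0 = y_bar - b1 * x_bar  # y-intercept
```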
Probability
Conditional probability
P(A|B) = P(A and B)/P(B)
Complement rule
P(A^C) = 1 - P(A)
Multiplication rule
P(A and B) = P(A|B)P(B)
Addition rule
P(A or B) = P(A) + P(B) - P(A and B)
Bayes’ Law Formula
P(A_i \mid B) = \frac{P(A_i) P(B \mid A_i)}{P(A_1) P(B \mid A_1) + P(A_2) P(B \mid A_2) + \dots + P(A_k) P(B \mid A_k)}
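Bayes' Law is easy to verify numerically. A minimal sketch with two hypothetical partitions A1 and A2 (the prior and likelihood values are made up):

```python
# P(A1), P(A2) must sum to 1; P(B|A1), P(B|A2) are conditional likelihoods.
prior = [0.3, 0.7]
likelihood = [0.9, 0.2]

# Denominator of Bayes' Law: total probability of B
denom = sum(p * l for p, l in zip(prior, likelihood))
# Posterior probabilities P(Ai | B)
posterior = [p * l / denom for p, l in zip(prior, likelihood)]
```

The posteriors always sum to 1, since the denominator is exactly the total probability of B.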
Random Variables and Discrete Probability Distributions
Expected value (mean)
E(X) = \mu = \sum_{\text{all } x} x P(x)
Variance
V(X) = \sigma^2 = \sum_{\text{all } x} (x - \mu)^2 P(x)
Standard deviation
\sigma = \sqrt{\sigma^2}
Covariance
\mathrm{COV}(X, Y) = \sigma_{xy} = \sum (x - \mu_x)(y - \mu_y) P(x, y)
Coefficient of Correlation
\rho = \frac{\mathrm{COV}(X, Y)}{\sigma_x \sigma_y}
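The expected value and variance definitions above can be computed directly for any discrete distribution. A short sketch with a hypothetical probability table:

```python
import math

# Hypothetical discrete distribution P(x); probabilities sum to 1
dist = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

mu = sum(x * p for x, p in dist.items())               # E(X)
var = sum((x - mu) ** 2 * p for x, p in dist.items())  # V(X)
sd = math.sqrt(var)                                    # standard deviation
```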
Laws of expected value
1. E(c) = c
2. E(X + c) = E(X) + c
3. E(cX) = cE(X)
Laws of variance
1. V(c) = 0
2. V(X + c) = V(X)
3. V(cX) = c^2 V(X)
Laws of expected value and variance of the sum of two variables
1. E(X + Y) = E(X) + E(Y)
2. V(X + Y) = V(X) + V(Y) + 2COV(X, Y)
Laws of expected value and variance for the sum of more than two variables
1. E\left(\sum_{i=1}^{k} X_i\right) = \sum_{i=1}^{k} E(X_i)
2. V\left(\sum_{i=1}^{k} X_i\right) = \sum_{i=1}^{k} V(X_i) if the variables are independent
Mean and variance of a portfolio of two stocks
E(R_p) = w_1 E(R_1) + w_2 E(R_2)
V(R_p) = w_1^2 V(R_1) + w_2^2 V(R_2) + 2 w_1 w_2 \mathrm{COV}(R_1, R_2) = w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 + 2 w_1 w_2 \rho \sigma_1 \sigma_2
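A numerical sketch of the two-stock portfolio formulas, with hypothetical weights, expected returns, standard deviations, and correlation:

```python
import math

# Hypothetical inputs
w1, w2 = 0.6, 0.4        # portfolio weights
mu1, mu2 = 0.08, 0.12    # E(R1), E(R2)
sd1, sd2 = 0.10, 0.20    # sigma_1, sigma_2
rho = 0.3                # correlation of the two returns

er_p = w1 * mu1 + w2 * mu2                 # portfolio expected return
cov12 = rho * sd1 * sd2                    # COV(R1, R2)
var_p = (w1**2 * sd1**2 + w2**2 * sd2**2
         + 2 * w1 * w2 * cov12)            # portfolio variance
sd_p = math.sqrt(var_p)
```

With any correlation below 1, `sd_p` comes out smaller than the weighted average of the individual standard deviations, which is the diversification effect the formula captures.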
Mean and variance of a portfolio of k stocks
E(R_p) = \sum_{i=1}^{k} w_i E(R_i)
V(R_p) = \sum_{i=1}^{k} w_i^2 \sigma_i^2 + 2 \sum_{i=1}^{k} \sum_{j=i+1}^{k} w_i w_j \mathrm{COV}(R_i, R_j)
Binomial probability
P(X = x) = \frac{n!}{x!(n - x)!} p^x (1 - p)^{n - x}
\mu = np
\sigma^2 = np(1 - p)
\sigma = \sqrt{np(1 - p)}
Poisson probability
P(X = x) = \frac{e^{-\mu} \mu^x}{x!}
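Both probability functions above translate directly into code, and the binomial identities mu = np and sigma^2 = np(1 - p) can be confirmed by summing over the distribution. A sketch with hypothetical n and p:

```python
import math

def binom_pmf(x, n, p):
    # n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    # e^{-mu} * mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

# Verify the binomial mean and variance identities numerically
n, p = 10, 0.3
mean = sum(x * binom_pmf(x, n, p) for x in range(n + 1))
var = sum((x - mean) ** 2 * binom_pmf(x, n, p) for x in range(n + 1))
```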
Continuous Probability Distributions
Standard normal random variable
Z = \frac{X - \mu}{\sigma}
Exponential distribution
\mu = \sigma = 1/\lambda
P(X > x) = e^{-\lambda x}
P(X < x) = 1 - e^{-\lambda x}
P(x_1 < X < x_2) = P(X < x_2) - P(X < x_1) = e^{-\lambda x_1} - e^{-\lambda x_2}
F distribution
F_{1-A, \nu_1, \nu_2} = \frac{1}{F_{A, \nu_2, \nu_1}}
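The exponential probabilities above need nothing beyond the exponential function. A sketch with a hypothetical rate lambda:

```python
import math

lam = 0.5  # hypothetical rate lambda; mean = sigma = 1/lam = 2

def p_gt(x):
    # P(X > x) = e^{-lambda x}
    return math.exp(-lam * x)

def p_lt(x):
    # P(X < x) = 1 - e^{-lambda x}
    return 1 - math.exp(-lam * x)

# P(1 < X < 2) = e^{-lambda * 1} - e^{-lambda * 2}
p_between = p_gt(1.0) - p_gt(2.0)
```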
Sampling Distributions
Expected value of the sample mean
E(\bar{X}) = \mu_{\bar{x}} = \mu
Variance of the sample mean
V(\bar{X}) = \sigma_{\bar{x}}^2 = \frac{\sigma^2}{n}
Standard error of the sample mean
\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}
Standardizing the sample mean
Z = \frac{\bar{X} - \mu}{\sigma / \sqrt{n}}
Expected value of the sample proportion
E(\hat{P}) = \mu_{\hat{p}} = p
Variance of the sample proportion
V(\hat{P}) = \sigma_{\hat{p}}^2 = \frac{p(1 - p)}{n}
Standard error of the sample proportion
\sigma_{\hat{p}} = \sqrt{\frac{p(1 - p)}{n}}
Standardizing the sample proportion
Z = \frac{\hat{P} - p}{\sqrt{p(1 - p)/n}}
Expected value of the difference between two means
E(\bar{X}_1 - \bar{X}_2) = \mu_{\bar{x}_1 - \bar{x}_2} = \mu_1 - \mu_2
Variance of the difference between two means
V(\bar{X}_1 - \bar{X}_2) = \sigma_{\bar{x}_1 - \bar{x}_2}^2 = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}
Standard error of the difference between two means
\sigma_{\bar{x}_1 - \bar{x}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}
Standardizing the difference between two sample means
Z = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}
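Standardizing the sample mean is the workhorse calculation here. A sketch with hypothetical population parameters, using the standard normal CDF from the Python standard library:

```python
import math
from statistics import NormalDist

# Hypothetical population: mu = 500, sigma = 100; sample of n = 25
mu, sigma, n = 500.0, 100.0, 25
x_bar = 520.0  # hypothetical observed sample mean

z = (x_bar - mu) / (sigma / math.sqrt(n))  # standardized sample mean
prob = 1 - NormalDist().cdf(z)             # P(Xbar > 520)
```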
Introduction to Estimation
Confidence interval estimator of \mu
\bar{x} \pm z_{\alpha/2} \frac{\sigma}{\sqrt{n}}
Sample size to estimate \mu
n = \left(\frac{z_{\alpha/2} \sigma}{W}\right)^2
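The sample-size formula rounds up, since n must be a whole number at least as large as the computed value. A sketch with hypothetical inputs, taking W as the desired half-width as in the formula above:

```python
import math

# Hypothetical: 95% confidence (z = 1.96), sigma = 15, half-width W = 2
z, sigma, W = 1.96, 15.0, 2.0
n = math.ceil((z * sigma / W) ** 2)  # round up to the next whole observation
```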
Introduction to Hypothesis Testing
Test statistic for \mu
z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}
Inference about One Population
Test statistic for \mu
t = \frac{\bar{x} - \mu}{s / \sqrt{n}}
Confidence interval estimator of \mu
\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}}
Test statistic for \sigma^2
\chi^2 = \frac{(n - 1) s^2}{\sigma^2}
Confidence interval estimator of \sigma^2
LCL = \frac{(n - 1) s^2}{\chi^2_{\alpha/2}}
UCL = \frac{(n - 1) s^2}{\chi^2_{1 - \alpha/2}}
Test statistic for p
z = \frac{\hat{p} - p}{\sqrt{p(1 - p)/n}}
Confidence interval estimator of p
\hat{p} \pm z_{\alpha/2} \sqrt{\hat{p}(1 - \hat{p})/n}
Sample size to estimate p
n = \left(\frac{z_{\alpha/2} \sqrt{\hat{p}(1 - \hat{p})}}{W}\right)^2
Confidence interval estimator of the total of a large finite population
N\left(\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}}\right)
Confidence interval estimator of the total number of successes in a large finite population
N\left(\hat{p} \pm z_{\alpha/2} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}\right)
Confidence interval estimator of \mu when the population is small
\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}
Confidence interval estimator of the total in a small population
N\left(\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}\right)
Confidence interval estimator of p when the population is small
\hat{p} \pm z_{\alpha/2} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}} \sqrt{\frac{N - n}{N - 1}}
Confidence interval estimator of the total number of successes in a small population
N\left(\hat{p} \pm z_{\alpha/2} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}} \sqrt{\frac{N - n}{N - 1}}\right)
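As one worked example from this section, the z-based interval for a proportion can be computed with the standard library alone. The survey counts below are hypothetical:

```python
import math
from statistics import NormalDist

# Hypothetical survey: 228 successes in 400 trials, 95% confidence
x, n, conf = 228, 400, 0.95
p_hat = x / n
z = NormalDist().inv_cdf(1 - (1 - conf) / 2)        # z_{alpha/2}, about 1.96
half = z * math.sqrt(p_hat * (1 - p_hat) / n)       # half-width of the interval
lcl, ucl = p_hat - half, p_hat + half
```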
Inference About Two Populations
Equal-variances t-test of \mu_1 - \mu_2
t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2 \left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} \qquad \nu = n_1 + n_2 - 2
Equal-variances interval estimator of \mu_1 - \mu_2
(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \sqrt{s_p^2 \left(\frac{1}{n_1} + \frac{1}{n_2}\right)} \qquad \nu = n_1 + n_2 - 2
Unequal-variances t-test of \mu_1 - \mu_2
t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \qquad \nu = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}
Unequal-variances interval estimator of \mu_1 - \mu_2
(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \qquad \nu = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}
t-test of \mu_D
t = \frac{\bar{x}_D - \mu_D}{s_D / \sqrt{n_D}} \qquad \nu = n_D - 1
t-estimator of \mu_D
\bar{x}_D \pm t_{\alpha/2} \frac{s_D}{\sqrt{n_D}} \qquad \nu = n_D - 1
F-test of \sigma_1^2 / \sigma_2^2
F = \frac{s_1^2}{s_2^2} \qquad \nu_1 = n_1 - 1 \text{ and } \nu_2 = n_2 - 1
F-estimator of \sigma_1^2 / \sigma_2^2
LCL = \left(\frac{s_1^2}{s_2^2}\right) \frac{1}{F_{\alpha/2, \nu_1, \nu_2}}
UCL = \left(\frac{s_1^2}{s_2^2}\right) F_{\alpha/2, \nu_2, \nu_1}
z-test of p_1 - p_2
Case 1: z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p}) \left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}
Case 2: z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}
z-estimator of p_1 - p_2
(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}
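The Case 1 statistic (pooled proportion, testing p1 = p2) is a common end-of-section calculation. A sketch with hypothetical counts:

```python
import math
from statistics import NormalDist

# Hypothetical samples: 60/200 successes vs 40/200 successes
x1, n1 = 60, 200
x2, n2 = 40, 200
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                    # pooled proportion p-hat

se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                                # Case 1 test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-tailed p-value
```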
Analysis of Variance
One-way analysis of variance
SST = \sum_{j=1}^{k} n_j (\bar{x}_j - \bar{\bar{x}})^2
SSE = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2
MST = \frac{SST}{k - 1}
MSE = \frac{SSE}{n - k}
F = \frac{MST}{MSE}
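The one-way ANOVA sums of squares are short enough to compute by hand in code. A sketch with three hypothetical treatment groups:

```python
# Hypothetical samples for k = 3 treatments
groups = [[5.0, 7.0, 9.0], [8.0, 10.0, 12.0], [11.0, 13.0, 15.0]]
n = sum(len(g) for g in groups)
k = len(groups)
grand = sum(sum(g) for g in groups) / n            # grand mean

# Between-treatments and within-treatments sums of squares
sst = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
sse = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

mst = sst / (k - 1)
mse = sse / (n - k)
F = mst / mse
```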
Two-way analysis of variance (randomized block design of experiment)
SS(Total) = \sum_{j=1}^{k} \sum_{i=1}^{b} (x_{ij} - \bar{\bar{x}})^2
SST = \sum_{j=1}^{k} b (\bar{x}[T]_j - \bar{\bar{x}})^2
SSB = \sum_{i=1}^{b} k (\bar{x}[B]_i - \bar{\bar{x}})^2
SSE = \sum_{j=1}^{k} \sum_{i=1}^{b} (x_{ij} - \bar{x}[T]_j - \bar{x}[B]_i + \bar{\bar{x}})^2
MST = \frac{SST}{k - 1}
MSB = \frac{SSB}{b - 1}
MSE = \frac{SSE}{n - k - b + 1}
F = \frac{MST}{MSE}
F = \frac{MSB}{MSE}
Two-factor experiment
SS(Total) = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{r} (x_{ijk} - \bar{\bar{x}})^2
SS(A) = rb \sum_{i=1}^{a} (\bar{x}[A]_i - \bar{\bar{x}})^2
SS(B) = ra \sum_{j=1}^{b} (\bar{x}[B]_j - \bar{\bar{x}})^2
SS(AB) = r \sum_{i=1}^{a} \sum_{j=1}^{b} (\bar{x}[AB]_{ij} - \bar{x}[A]_i - \bar{x}[B]_j + \bar{\bar{x}})^2
SSE = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{r} (x_{ijk} - \bar{x}[AB]_{ij})^2
F = \frac{MS(A)}{MSE}
F = \frac{MS(B)}{MSE}
F = \frac{MS(AB)}{MSE}
Least Significant Difference Comparison Method
LSD = t_{\alpha/2} \sqrt{MSE \left(\frac{1}{n_i} + \frac{1}{n_j}\right)}
Tukey's multiple comparison method
\omega = q_{\alpha}(k, \nu) \sqrt{\frac{MSE}{n_g}}
Chi-Squared Tests
Test statistic for all procedures
\chi^2 = \sum_{i=1}^{k} \frac{(f_i - e_i)^2}{e_i}
Simple Linear Regression
Sample slope
b_1 = \frac{s_{xy}}{s_x^2}
Sample y-intercept
b_0 = \bar{y} - b_1 \bar{x}
Sum of squares for error
SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
Standard error of estimate
s_\varepsilon = \sqrt{\frac{SSE}{n - 2}}
Test statistic for the slope
t = \frac{b_1 - \beta_1}{s_{b_1}}
Standard error of b_1
s_{b_1} = \frac{s_\varepsilon}{\sqrt{(n - 1) s_x^2}}
Coefficient of determination
R^2 = \frac{s_{xy}^2}{s_x^2 s_y^2} = 1 - \frac{SSE}{\sum (y_i - \bar{y})^2}
Prediction interval
\hat{y} \pm t_{\alpha/2, n-2} \, s_\varepsilon \sqrt{1 + \frac{1}{n} + \frac{(x_g - \bar{x})^2}{(n - 1) s_x^2}}
Confidence interval estimator of the expected value of y
\hat{y} \pm t_{\alpha/2, n-2} \, s_\varepsilon \sqrt{\frac{1}{n} + \frac{(x_g - \bar{x})^2}{(n - 1) s_x^2}}
Sample coefficient of correlation
r = \frac{s_{xy}}{s_x s_y}
Test statistic for testing \rho = 0
t = r \sqrt{\frac{n - 2}{1 - r^2}}
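The regression formulas above chain together naturally: slope and intercept from the covariance and variance, then SSE and the diagnostics from the residuals. A sketch on hypothetical (x, y) pairs:

```python
import math

# Hypothetical data
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n

s_xy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / (n - 1)
s2_x = sum((xi - xb) ** 2 for xi in x) / (n - 1)
b1 = s_xy / s2_x               # sample slope
b0 = yb - b1 * xb              # sample y-intercept

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s_eps = math.sqrt(sse / (n - 2))                 # standard error of estimate
s_b1 = s_eps / math.sqrt((n - 1) * s2_x)         # standard error of b1
t = b1 / s_b1                                    # test statistic for beta1 = 0
r2 = 1 - sse / sum((yi - yb) ** 2 for yi in y)   # coefficient of determination
```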
Multiple Regression
Standard Error of Estimate
s_\varepsilon = \sqrt{\frac{SSE}{n - k - 1}}
Test statistic for \beta_i
t = \frac{b_i - \beta_i}{s_{b_i}}
Coefficient of Determination
R^2 = 1 - \frac{SSE}{\sum (y_i - \bar{y})^2}
Adjusted Coefficient of Determination
\text{Adjusted } R^2 = 1 - \frac{SSE / (n - k - 1)}{\sum (y_i - \bar{y})^2 / (n - 1)}
Mean Square for Error
MSE = SSE / (n - k - 1)
Mean Square for Regression
MSR = SSR / k
F-statistic
F = MSR/MSE
Durbin-Watson statistic
d = \frac{\sum_{i=2}^{n} (e_i - e_{i-1})^2}{\sum_{i=1}^{n} e_i^2}
Time Series Analysis and Forecasting
Exponential smoothing
S_t = w y_t + (1 - w) S_{t-1}
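The smoothing recursion needs a starting value; a common choice (assumed here) is to seed with the first observation, S_1 = y_1. A sketch on a hypothetical series:

```python
def smooth(y, w):
    # Exponential smoothing: S_t = w*y_t + (1 - w)*S_{t-1}, seeded with S_1 = y_1
    s = [y[0]]
    for t in range(1, len(y)):
        s.append(w * y[t] + (1 - w) * s[-1])
    return s

series = smooth([10.0, 12.0, 11.0, 13.0], w=0.5)
```

Larger w tracks the series more closely; smaller w smooths more heavily.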
Nonparametric Statistical Techniques
Wilcoxon rank sum test statistic
T = T_1
E(T) = \frac{n_1 (n_1 + n_2 + 1)}{2}
\sigma_T = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}
z = \frac{T - E(T)}{\sigma_T}
Sign test statistic
x = number of positive differences
z = \frac{x - 0.5 n}{0.5 \sqrt{n}}
Wilcoxon signed rank sum test statistic
T = T^+
E(T) = \frac{n(n + 1)}{4}
\sigma_T = \sqrt{\frac{n(n + 1)(2n + 1)}{24}}
z = \frac{T - E(T)}{\sigma_T}
Kruskal-Wallis Test
H = \left[\frac{12}{n(n + 1)} \sum_{j=1}^{k} \frac{T_j^2}{n_j}\right] - 3(n + 1)
Friedman Test
F_r = \left[\frac{12}{b k (k + 1)} \sum_{j=1}^{k} T_j^2\right] - 3b(k + 1)
Spearman rank correlation coefficient
r_S = \frac{s_{ab}}{s_a s_b}
Spearman Test statistic for n > 30
z = r_S \sqrt{n - 1}
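The Wilcoxon rank sum normal approximation above reduces to three lines once T is in hand. A sketch with hypothetical sample sizes and an assumed observed rank sum:

```python
import math

# Hypothetical: samples of size n1 = 10 and n2 = 12, observed rank sum T = 140
n1, n2 = 10, 12
T = 140.0

e_t = n1 * (n1 + n2 + 1) / 2                     # E(T)
sd_t = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # sigma_T
z = (T - e_t) / sd_t                             # standardized test statistic
```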
Statistical Process Control
Centerline and control limits for the \bar{x} chart using S
Centerline = \bar{\bar{x}}
Lower control limit = \bar{\bar{x}} - 3 \frac{S}{\sqrt{n}}
Upper control limit = \bar{\bar{x}} + 3 \frac{S}{\sqrt{n}}
Centerline and control limits for the p chart
Centerline = \bar{p}
Lower control limit = \bar{p} - 3 \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}}
Upper control limit = \bar{p} + 3 \sqrt{\frac{\bar{p}(1 - \bar{p})}{n}}
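A numerical sketch of the p-chart limits, with a hypothetical average defective rate; one practical wrinkle (an assumption here, common in practice) is truncating the lower limit at zero, since a proportion cannot be negative:

```python
import math

# Hypothetical process: average defective rate 4%, samples of n = 200
p_bar, n = 0.04, 200

half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
lcl = max(0.0, p_bar - half)   # truncate at 0: a proportion can't be negative
ucl = p_bar + half
```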
Decision Analysis
Expected Value of Perfect Information
EVPI = EPPI - EMV*
Expected Value of Sample Information
EVSI = EMV' - EMV*