Probability Reference
Combinatorics and Sampling
A permutation is an ordered selection. The number of permutations of k items picked from a list of n items, without replacement, is

$$P(n,k) := \underbrace{n(n-1)(n-2)\cdots(n-k+1)}_{k\ \text{factors}} = \frac{n!}{(n-k)!} =: (n)_k$$

When selecting with replacement, the number of possibilities is $n^k$.
A combination is an unordered selection. The number of combinations of k items chosen from a list of n different items, without replacement, is

$$C(n,k) := \binom{n}{k} := \frac{P(n,k)}{k!} = \underbrace{\frac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)(k-2)\cdots 1}}_{k\ \text{factors, numerator and denominator}} = \frac{n!}{k!\,(n-k)!}$$
The number of ways to select k objects from n different items with replacement is

$$\left(\!\binom{n}{k}\!\right) := \binom{n+k-1}{k} = \underbrace{\frac{n(n+1)(n+2)\cdots(n+k-1)}{k(k-1)(k-2)\cdots 1}}_{k\ \text{factors, above and below}}$$

(This is also the number of nonnegative integer solutions of the equation $x_1 + x_2 + \cdots + x_n = k$ and the number of ways to distribute k identical objects into n distinct boxes.)
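These counts are easy to spot-check directly; a minimal Python sketch (assumes Python 3.8+ for math.perm and math.comb; the function names are illustrative, not from the source):

    import math

    def permutations(n, k):
        # P(n, k) = n! / (n - k)!, ordered selection without replacement
        return math.perm(n, k)

    def combinations(n, k):
        # C(n, k) = n! / (k! (n - k)!), unordered selection without replacement
        return math.comb(n, k)

    def multisets(n, k):
        # Unordered selection with replacement: C(n + k - 1, k)
        return math.comb(n + k - 1, k)

    assert permutations(5, 2) == 20
    assert combinations(5, 2) == 10
    assert multisets(5, 2) == 15   # also counts solutions of x1 + ... + x5 = 2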
The number of distinct ways of distributing n objects into k distinct classes of size $n_1, n_2, \ldots, n_k$, without replacement and with no order within each class, is

$$\binom{n}{n_1, n_2, \ldots, n_k} := \frac{n!}{n_1!\,n_2!\cdots n_k!}, \quad \text{where } n_1 + n_2 + \cdots + n_k = n$$

Binomial Theorem:

$$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k} = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k$$
Multinomial Theorem:

$$(a_1 + a_2 + \cdots + a_k)^n = \sum \binom{n}{n_1, n_2, \ldots, n_k} a_1^{n_1} a_2^{n_2} \cdots a_k^{n_k},$$

where the sum is taken over all nonnegative integer values of $n_1, n_2, \ldots, n_k$ for which $n_1 + n_2 + \cdots + n_k = n$.

Stirling's Formula: $n! \doteq \sqrt{2\pi n}\,(n/e)^n$, or more accurately $n! \doteq \sqrt{2\pi}\; n^{n+(1/2)}\, e^{-n + 1/(12n)}$.
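Both forms of Stirling's formula are easy to check numerically; an illustrative sketch:

    import math

    for n in (5, 10, 50):
        exact = math.factorial(n)
        crude = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
        refined = math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n + 1 / (12 * n))
        print(n, crude / exact, refined / exact)
    # both ratios approach 1; the refined form is accurate to several more digits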
Binomial coefficient identities:

$$\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1}; \qquad \binom{-n}{k} = (-1)^k \binom{n+k-1}{k}; \qquad \binom{n}{k}\binom{k}{p} = \binom{n}{p}\binom{n-p}{k-p};$$

$$\sum_{k=0}^{n} \binom{n}{k} = 2^n; \qquad \sum_{k=0}^{n} (-1)^k \binom{n}{k} = 0, \text{ for } n > 0; \qquad \sum_{k=1}^{n} k \binom{n}{k} = n\,2^{n-1};$$

$$\sum_{k=0}^{r} \binom{n+k}{k} = \binom{n+r+1}{r}; \qquad \sum_{k=0}^{r} \binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r}; \qquad \sum_{k=0}^{m} \binom{m}{k}\binom{n}{r+k} = \binom{m+n}{m+r}$$
Some useful series:

$$\sum_{k=1}^{n} k = \tfrac{1}{2}\,n(n+1); \qquad \sum_{k=1}^{n} k^2 = \tfrac{1}{6}\,n(n+1)(2n+1); \qquad \sum_{k=1}^{n} k^3 = \tfrac{1}{4}\,n^2(n+1)^2$$

$$\sum_{k=m}^{n} r^k = \frac{r^m - r^{n+1}}{1-r}; \qquad \sum_{k=0}^{\infty} r^k = \frac{1}{1-r}, \text{ for } |r| < 1; \qquad \sum_{k=1}^{\infty} \frac{x^k}{k} = -\log(1-x), \text{ for } |x| < 1$$

$$(1-t)^{-n} = \sum_{k=0}^{\infty} \binom{n+k-1}{k}\,t^k = \sum_{k=0}^{\infty} \binom{n+k-1}{n-1}\,t^k, \quad |t| < 1$$

$$\sum_{k=0}^{\infty} \frac{x^k}{k!} = e^x; \qquad \sum_{k=0}^{\infty} \frac{x^{2k}}{(2k)!} = \cosh x; \qquad \sum_{k=0}^{\infty} \frac{x^{2k+1}}{(2k+1)!} = \sinh x$$

Probability
If A, B, ... are events (defined as subsets of the sample space S of all possible outcomes of an experiment), then Pr is a probability measure when the following are true: (i) $0 \le \Pr(A) \le 1$; (ii) $\Pr\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} \Pr(A_i)$, for pairwise disjoint $A_i$; (iii) $\Pr(S) = 1$, $\Pr(\varnothing) = 0$.

The complement of an event is defined to be $A' = \{x : x \notin A\}$; then $\Pr(A') = 1 - \Pr(A)$ is the Law of Complements, and $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$ is the Principle of Inclusion-Exclusion.
Conditional probability of A given B:

$$\Pr(A \mid B) := \frac{\Pr(A \cap B)}{\Pr(B)}, \quad \text{when } \Pr(B) > 0;$$

this implies $\Pr(A \cap B) = \Pr(A \mid B)\Pr(B) = \Pr(B \mid A)\Pr(A) = \Pr(B \cap A)$.

The events $A_1, A_2, \ldots, A_n$ are independent if

$$\Pr(A_{r_1} \cap A_{r_2} \cap \cdots \cap A_{r_k}) = \Pr(A_{r_1})\Pr(A_{r_2}) \cdots \Pr(A_{r_k}),$$

for $\{r_1, r_2, \ldots, r_k\}$ any subset of $1:n$. This implies that $A^{\#}_{i_1}, A^{\#}_{i_2}, \ldots, A^{\#}_{i_s}$ are independent, where $A^{\#}$ can be either A or A', separately for each set as $k = 2:n$.
If $\mathcal{B} = \{B_k,\ k = 1:n\}$ is a partition of the sample space S, meaning $B_i \cap B_j = \varnothing$ for $i \ne j$ and $\bigcup_{i=1}^{n} B_i = S$, then the Law of Total Probability says

$$\Pr(A) = \sum_{i=1}^{n} \Pr(A \mid B_i)\,\Pr(B_i),$$

and Bayes' formula is

$$\Pr(B_k \mid A) = \frac{\Pr(A \mid B_k)\,\Pr(B_k)}{\sum_{i=1}^{n} \Pr(A \mid B_i)\,\Pr(B_i)}$$
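A worked illustration of Bayes' formula (the numbers below are hypothetical, not from the source):

    # Two-event partition: B1 = "has condition", B2 = its complement; A = "test positive".
    prior = [0.01, 0.99]          # Pr(B1), Pr(B2)
    likelihood = [0.95, 0.05]     # Pr(A | B1), Pr(A | B2)

    total = sum(p * l for p, l in zip(prior, likelihood))  # Law of Total Probability
    posterior = [p * l / total for p, l in zip(prior, likelihood)]
    print(posterior[0])           # Pr(B1 | A) = 0.0095 / 0.059, approx 0.161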
Discrete Random Variables

1. X has probability mass function (pmf) f(x) if (i) $f(x) \ge 0$, (ii) $\sum_x f(x) = 1$, (iii) $f(x_k) = \Pr(X = x_k)$.

2. X has cumulative distribution function (cdf) F(x) if $F(x) := \Pr(X \le x) = \sum_{y \le x} f(y)$; $\Pr(a < X \le b) = F(b) - F(a)$; $f(x_k) = F(x_k) - F(x_{k-1})$.
Continuous Random Variables

1. X has probability density function (pdf) f(x) if (i) $f(x) \ge 0$, (ii) $\int_{-\infty}^{\infty} f(x)\,dx = 1$, (iii) $\Pr(a < X \le b) = \int_a^b f(x)\,dx$.

2. X has cdf F(x) if $F(x) := \int_{-\infty}^{x} f(\xi)\,d\xi$; $\Pr(a \le X < b) = F(b) - F(a)$; $f(x) = \dfrac{dF}{dx}$.

3. The median $\tilde{x}$ satisfies $F(\tilde{x}) = \tfrac{1}{2}$ and the pth percentile $x_p$ satisfies $F(x_p) = p$. The interquartile range is $\mathrm{IQR} := x_{0.75} - x_{0.25}$ and the interdecile range is $\mathrm{IDR} := x_{0.90} - x_{0.10}$.
Discrete and Continuous cdfs

F(x) is (i) nondecreasing, (ii) $\lim_{x \to -\infty} F(x) = 0$, (iii) $\lim_{x \to \infty} F(x) = 1$, and (iv) F(x) is right continuous.
Independent and Exchangeable Random Variables

The rvs $X_1, X_2, \ldots, X_n$ are independent if and only if the joint pf is the product of the marginals, i.e.,

$$f(x_1, x_2, \ldots, x_n) = f_1(x_1)\,f_2(x_2) \cdots f_n(x_n)$$

The rvs $X_1, X_2, \ldots, X_n$ are exchangeable if and only if the joint pf is invariant under interchanges of its arguments, i.e.,

$$f(x_1, x_2, \ldots, x_n) = f(x_{i_1}, x_{i_2}, \ldots, x_{i_n}),$$

for any permutation $(i_1, i_2, \ldots, i_n)$ of $1:n$. Independent and identically distributed (iid) rvs are exchangeable, but the two concepts are distinct: independent rvs that are not identically distributed need not be exchangeable, and exchangeable rvs need not be independent.
Expectation Values

By "definition," the expectation of a function of a rv is

$$E(g(X)) := \begin{cases} \displaystyle\sum_{\mathrm{range}(X)} g(x)\,f(x), & X \text{ discrete}, \\[2ex] \displaystyle\int_{-\infty}^{\infty} g(x)\,f(x)\,dx, & X \text{ continuous}. \end{cases}$$

For rvs of the mixed type with probability function (pf) $f(x) = \alpha f_{\mathrm{disc}}(x) + (1-\alpha) f_{\mathrm{cont}}(x)$, you can only define the moments

$$E(X^k) = \alpha \sum_{\mathrm{discrete}(X)} x^k f_{\mathrm{disc}}(x) + (1-\alpha) \int_{\mathrm{continuous}(X)} x^k f_{\mathrm{cont}}(x)\,dx$$
1. The rth moment is $\mu'_r := E(X^r)$; the mean is $\mu := E(X)$; the rth central moment is $\mu_r := E((X-\mu)^r)$; the rth absolute deviation is $\nu_r := E(|X - \mu|^r)$; the variance is $\mathrm{var}(X) := \sigma^2 = \mu_2 = E((X-\mu)^2) = E(X^2) - \mu^2$. The central and raw moments are related by

$$\mu_r = \sum_{j=0}^{r} \binom{r}{j} (-\mu)^j\,\mu'_{r-j} = \mu'_r - \binom{r}{1}\mu'_{r-1}\,\mu + \binom{r}{2}\mu'_{r-2}\,\mu^2 - \cdots + (-1)^r \mu^r$$

(since $\mu'_1 = \mu$, the last two terms combine to $(-1)^{r-1}(r-1)\mu^r$), and

$$\mu'_r = \sum_{j=0}^{r} \binom{r}{j} \mu_{r-j}\,\mu^j = \mu_r + \binom{r}{1}\mu_{r-1}\,\mu + \binom{r}{2}\mu_{r-2}\,\mu^2 + \cdots + \mu^r.$$
2. The coefficient of skewness is $\gamma_1 := E\big((X-\mu)^3/\sigma^3\big)$ and the coefficient of excess is $\gamma_2 := E\big(((X-\mu)/\sigma)^4\big) - 3$.

3. $E\left(\sum a_k X_k\right) = \sum a_k E(X_k)$; $\mathrm{var}\left(\sum a_k X_k\right) = \sum a_k^2\,\mathrm{var}(X_k) + 2\sum\!\sum_{j<k} a_j a_k\,\mathrm{cov}(X_j, X_k)$. The covariance and correlation are defined by:

$$\mathrm{cov}(X_j, X_k) := E\big((X_j - \mu_j)(X_k - \mu_k)\big) = E(X_j X_k) - \mu_j \mu_k =: \sigma_{jk}; \qquad \rho_{jk} := \mathrm{corr}(X_j, X_k) = \frac{\sigma_{jk}}{\sigma_j \sigma_k};$$

$$\mathrm{cov}\left(\sum a_i X_i,\ \sum b_j X_j\right) = \sum a_i b_i\,\mathrm{var}(X_i) + \sum\!\sum_{i<j} (a_i b_j + a_j b_i)\,\mathrm{cov}(X_i, X_j)$$

4. Conditional expectations: $E(Y) = E(E(Y \mid X))$ and

$$\mathrm{var}(Y) = E(\mathrm{var}(Y \mid X)) + \mathrm{var}(E(Y \mid X)),$$

where $\mathrm{var}(Y \mid X) := E\big((Y - E(Y \mid X))^2 \mid X\big)$; a simulation check appears after this list.

5. When X and Y are independent rvs,

$$\mathrm{var}(XY) = \mathrm{var}(X)\,\mathrm{var}(Y) + E^2(X)\,\mathrm{var}(Y) + E^2(Y)\,\mathrm{var}(X)$$
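A sketch verifying the conditional-variance decomposition in item 4 by simulation (the distributions chosen here are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.exponential(1.0, size=500_000)    # X ~ Exp(1), so var(X) = 1
    y = rng.normal(loc=2 * x, scale=3.0)      # Y | X ~ N(2X, 9)
    # E(var(Y|X)) = 9 and var(E(Y|X)) = var(2X) = 4
    print(y.var(), 9 + 4)                     # both approximately 13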
Generating Functions

Moment generating function, mgf: $M_X(t) := E(e^{tX})$; $M_X^{(n)}(0) = \mu'_n$; $\mu = M'_X(0)$; $\sigma^2 = M''_X(0) - (M'_X(0))^2$; $M_{aX+b}(t) = e^{bt} M_X(at)$.

Cumulant generating function, cgf: $K_X(t) := \log(M_X(t))$; $K'_X(0) = \mu$; $K''_X(0) = \sigma^2$; $K'''_X(0) = E((X-\mu)^3)$; the rth cumulant is $\kappa_r = K_X^{(r)}(0)$.

Factorial generating function, fgf: $P_X(s) := E(s^X)$; $P_X^{(r)}(1) = \mu_{[r]} := E(X(X-1)\cdots(X-r+1))$.

Characteristic function, cf: $\varphi_X(\omega) := E(e^{i\omega X})$; $\varphi_X^{(n)}(0) = i^n \mu'_n$; $\varphi_{aX+b}(\omega) = e^{ib\omega}\,\varphi_X(a\omega)$.
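The cgf machinery can be exercised symbolically; a sketch using sympy with the binomial mgf quoted later in this sheet:

    import sympy as sp

    t, p, n = sp.symbols('t p n', positive=True)
    M = (p * sp.exp(t) + 1 - p) ** n       # mgf of Bin(n, p)
    K = sp.log(M)                          # cgf

    mean = sp.simplify(sp.diff(K, t).subs(t, 0))      # K'(0) = mu
    var = sp.simplify(sp.diff(K, t, 2).subs(t, 0))    # K''(0) = sigma^2
    print(mean, var)   # n*p and n*p*(1 - p), possibly in an equivalent algebraic form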
Order Statistics

A random sample of size n is a set $\{X_1, X_2, \ldots, X_n\}$ of independent and identically distributed (iid) rvs. The order statistics of the random sample are defined to be $X_{(1,n)} \le X_{(2,n)} \le \cdots \le X_{(n,n)}$. We assume they are drawn from a population with pdf f(x) and cdf F(x).

1. The pdf and cdf of $Y = X_{(r,n)}$ are given by

$$g_r(y) = \binom{n}{r-1,\ 1,\ n-r}\,[F(y)]^{r-1} f(y)\,[1 - F(y)]^{n-r}; \qquad G_r(y) = \sum_{i=r}^{n} \binom{n}{i}\,[F(y)]^i\,[1 - F(y)]^{n-i}$$

The joint pdf of two order statistics, $Y_r = X_{(r,n)} \le Y_s = X_{(s,n)}$, is given by

$$g_{r,s}(y_r, y_s) = \binom{n}{r-1,\ 1,\ s-r-1,\ 1,\ n-s}\,[F(y_r)]^{r-1} f(y_r)\,[F(y_s) - F(y_r)]^{s-r-1} f(y_s)\,[1 - F(y_s)]^{n-s}, \quad y_r \le y_s,$$

and the pdf of all the order statistics is

$$g(y_1, y_2, \ldots, y_n) = n!\,f(y_1) f(y_2) \cdots f(y_n) \quad \text{for } y_1 \le y_2 \le \cdots \le y_n$$

2. The pdf and cdf of the range, $R := X_{(n,n)} - X_{(1,n)}$, are given by

$$f_R(r) = n(n-1) \int_{-\infty}^{\infty} [F(x+r) - F(x)]^{n-2}\,f(x)\,f(x+r)\,dx; \qquad F_R(r) = n \int_{-\infty}^{\infty} [F(x+r) - F(x)]^{n-1}\,f(x)\,dx$$
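For a Unif(0,1) sample the rth order statistic is Beta(r, n-r+1) (item 12 of the addition theorems below); a quick Monte Carlo sketch:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, r = 10, 3
    samples = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, r - 1]   # draws of X_(r,n)
    print(samples.mean(), r / (n + 1))                                    # Beta mean is r/(n+1)
    print(stats.kstest(samples, stats.beta(r, n - r + 1).cdf).statistic)  # small KS distance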
Transformation of Variables

If $Y = u(X)$ is a smooth one-to-one transformation, then

$$G(y) = \Pr(Y \le y) = \Pr(u(X) \le y) = \Pr\left(X \le u^{-1}(y)\right) = F\left(u^{-1}(y)\right)$$

The corresponding pdf is the derivative: $g(y) = f(x(y))\left|\dfrac{dx}{dy}\right|$. If the transformation is not one-to-one, break up its support into a union of intervals over each of which it is one-to-one, apply the previous formula to each piece, and sum the results. E.g., for $Y = X^2$:

$$G(y) = \Pr(Y \le y) = \Pr(X^2 \le y) = \Pr(-\sqrt{y} \le X \le \sqrt{y}) = F(\sqrt{y}) - F(-\sqrt{y}),$$

so that

$$g(y) = \frac{1}{2\sqrt{y}}\left(f(\sqrt{y}) + f(-\sqrt{y})\right)$$

For discrete rvs, the pmf is $g(y_k) = F\left(u^{-1}(y_k)\right) - F\left(u^{-1}(y_{k-1})\right) = f\left(u^{-1}(y_k)\right)$.

You should know that for any continuous rv, both $U = F(X)$ and $V = 1 - F(X)$ are $\mathrm{Unif}(0,1)$.
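That last fact is the basis of inverse-cdf sampling; a minimal sketch for the exponential distribution (parameter value illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    theta = 2.0
    u = rng.uniform(size=100_000)
    x = -theta * np.log(1 - u)    # F^{-1}(u) for Exp(theta), where F(x) = 1 - exp(-x/theta)
    print(x.mean(), x.var())      # approximately theta and theta**2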
If $\mathbf{Y} := [Y_1, Y_2, \ldots, Y_n] = [u_1(X_1, X_2, \ldots, X_n), \ldots, u_n(X_1, X_2, \ldots, X_n)]$ is a smooth invertible multivariate transformation, then use the Jacobian Change of Variable Theorem to write

$$g(y_1, y_2, \ldots, y_n) = f(x_1(\mathbf{y}), x_2(\mathbf{y}), \ldots, x_n(\mathbf{y}))\left|\frac{\partial(x_1, x_2, \ldots, x_n)}{\partial(y_1, y_2, \ldots, y_n)}\right|,$$

where the Jacobian is defined to be

$$\frac{\partial(x_1, x_2, \ldots, x_n)}{\partial(y_1, y_2, \ldots, y_n)} := \det \begin{bmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} & \cdots & \frac{\partial x_1}{\partial y_n} \\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} & \cdots & \frac{\partial x_2}{\partial y_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial x_n}{\partial y_1} & \frac{\partial x_n}{\partial y_2} & \cdots & \frac{\partial x_n}{\partial y_n} \end{bmatrix}$$

For sums of independent rvs, use the mgf result: If $S = X_1 + X_2 + \cdots + X_n$, then

$$M_S(t) = M_{X_1}(t)\,M_{X_2}(t) \cdots M_{X_n}(t),$$

which for the iid case reduces to $M_S(t) = M_X^n(t)$.

Also for sums of random variables, the pdf, f(s), of the sum is related to the pdfs of the individual $X_i$, $p_i(x)$, via the convolution product $f = p_1 * p_2 * \cdots * p_n$, where the product is defined recursively by

$$(p_1 * p_2)(x) := \int_{-\infty}^{\infty} p_1(x - y)\,p_2(y)\,dy,$$

$p_1 * p_2 = p_2 * p_1$, and $p_1 * (p_2 * p_3) = (p_1 * p_2) * p_3$. This is usually not very useful except for distributions for which f(s) can be more easily calculated other ways, e.g., mgfs. (See the last section of this reference sheet.)
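For example, the mgf route shows that a sum of n iid Exp(theta) rvs is Gam(n, theta) (addition theorem 4 below); a simulation sketch:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, theta = 5, 2.0
    s = rng.exponential(theta, size=(200_000, n)).sum(axis=1)
    print(s.mean(), n * theta)                                           # means agree
    print(stats.kstest(s, stats.gamma(a=n, scale=theta).cdf).statistic)  # small KS distance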
Definitions and Results

If $X_n$ has cdf $F_n(x)$ for $n = 1:\infty$ and if for some cdf F(x) we have $\lim_{n\to\infty} F_n(x) = F(x)$ for all values of x at which F(x) is continuous, then the sequence $\{X_n\}$ converges in distribution to X, which has cdf F(x), and we write $X_n \xrightarrow{d} X$.

If $X_n$ has mgf $M_n(t)$, X has mgf M(t), and there is an $a > 0$ such that $\lim_{n\to\infty} M_n(t) = M(t)$ for all $t \in (-a, a)$, then $X_n \xrightarrow{d} X$.

We say the sequence $\{X_n\}$ converges stochastically to a constant c if the limiting distribution puts all its mass at the atom $\{c\}$, written $X_n \xrightarrow{P} c$.

The sequence $\{X_n\}$ converges in probability to X if $\lim_{n\to\infty} \Pr(|X_n - X| < \varepsilon) = 1$, for any $\varepsilon > 0$. This is written as $X_n \xrightarrow{P} X$.

If $\Omega_0 := \{\omega : \lim_{n\to\infty} X_n(\omega) = X \text{ exists}\}$ and $\Pr(\Omega_0) = 1$, then we say that $X_n$ converges almost surely, and we write $X_n \xrightarrow{a.s.} X$.

Slutsky's Theorem says: (a) If $X_n \xrightarrow{P} X$, then $X_n \xrightarrow{d} X$. (b) If $X_n \xrightarrow{P} c$, then $g(X_n) \xrightarrow{P} g(c)$, whenever g is continuous at c. (c) If $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{P} c$, then (i) $X_n + Y_n \xrightarrow{d} X + c$, (ii) $X_n Y_n \xrightarrow{d} Xc$, (iii) $X_n/Y_n \xrightarrow{d} X/c$ (for $c \ne 0$). (d) If $X_n \xrightarrow{d} X$, then for any continuous function g(y), $g(X_n) \xrightarrow{d} g(X)$.
Central Limit Theorem: (Form 1) If $X_1, X_2, \ldots, X_n$ are iid from a distribution with mean $\mu$ and variance $\sigma^2 < \infty$, then

$$\lim_{n\to\infty} Z_n := \lim_{n\to\infty} \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} = Z \sim N(0,1)$$

(Form 2) If as above, then

$$\lim_{n\to\infty} Z_n := \lim_{n\to\infty} \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} = Z \sim N(0,1)$$

(Berry-Esseen Bound) If, in addition, $E\left(|X_i - \mu|^{2+\delta}\right) = \rho_{2+\delta} < \infty$, for some $\delta \in (0,1]$, then there is a constant $c_\delta$ such that

$$\sup\left\{\left|\Pr\left(\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} < z\right) - \Phi(z)\right| : z \in \mathbb{R}\right\} \le \frac{c_\delta\,\rho_{2+\delta}}{\sigma^{2+\delta}\,n^{\delta/2}}$$

The $\delta = 1$ case is most often cited: $E\left(|X_i - \mu|^3\right) = \rho_3$ yields

$$\sup\left\{\left|\Pr\left(\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} < z\right) - \Phi(z)\right| : z \in \mathbb{R}\right\} \le \frac{c_1\,\rho_3}{\sigma^3\,\sqrt{n}},$$

and $c_1 \le 1.322$.
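A quick simulation illustrates both the CLT and the shrinking error (sketch; the parent distribution is an illustrative choice):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    for n in (5, 50, 500):
        xbar = rng.exponential(1.0, size=(50_000, n)).mean(axis=1)  # Exp(1): mu = sigma = 1
        z = (xbar - 1.0) / (1.0 / np.sqrt(n))
        print(n, stats.kstest(z, 'norm').statistic)
    # the sup-distance to Phi shrinks roughly like 1/sqrt(n), until Monte Carlo noise dominates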
Special Discrete Random Variables

1. Bin(n, p), Binomial: X = # successes in n, a fixed number of independent Bernoulli trials with constant $p = \Pr(\text{Success}) =: 1 - q$.

$$b(x; n, p) = \binom{n}{x} p^x q^{n-x}, \quad x = 0:n; \qquad \mu = np; \quad \sigma^2 = npq; \quad \mu_{[r]} = (n)_r\,p^r; \quad M_X(t) = (pe^t + q)^n.$$

2. Hyper(n, N, D), Hypergeometric: X = # defectives in sampling n items without replacement from a set of N items of which D are defectives.

$$h(x; n, N, D) = \frac{\binom{D}{x}\binom{N-D}{n-x}}{\binom{N}{n}}, \quad x = \max\{0,\ n + D - N\} : \min\{D, n\};$$

$$\mu = n\,\frac{D}{N}; \qquad \sigma^2 = n\,\frac{D}{N}\left(1 - \frac{D}{N}\right)\frac{N-n}{N-1}; \qquad \mu_{[r]} = \frac{(n)_r\,(D)_r}{(N)_r}$$

3. Pois($\lambda$), Poisson: X = # of occurrences of events occurring "randomly and independently" in a time T and at a rate $\lambda_0$ when $\lambda = \lambda_0 T$.

$$p(x; \lambda) = \frac{\lambda^x}{x!}\,e^{-\lambda}, \quad x = 0:\infty; \qquad \mu = \sigma^2 = \lambda; \quad \mu_{[r]} = \lambda^r; \quad \mu'_2 = \lambda(1 + \lambda); \quad M_X(t) = \exp\left(\lambda\left(e^t - 1\right)\right)$$

The Law of Rare Events tells us that the limit of Bin(n, p) as $n \to \infty$, $p \to 0$, with $np = \lambda$ held fixed, is Pois($\lambda$).
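Numerically the Poisson approximation is already close for moderate n; an illustrative check:

    from scipy import stats

    n, p = 1000, 0.003          # lam = n * p = 3
    lam = n * p
    for x in range(7):
        print(x, stats.binom.pmf(x, n, p), stats.poisson.pmf(x, lam))
    # the two pmfs closely agree, with absolute differences on the order of p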
4. Bin$^-$(r, p), Negative Binomial: X = # of trials until the rth success; or NBin(r, p): Y = # of failures until the rth success $= X - r$.

$$b^-(x; r, p) = \binom{x-1}{r-1} p^r q^{x-r}, \quad x = r:\infty; \qquad \mu_X = \frac{r}{p}; \quad \sigma_X^2 = \frac{rq}{p^2}; \qquad M_X(t) = \frac{p^r}{\left(e^{-t} - q\right)^r};$$

$$f(y) = \binom{-r}{y} p^r (-q)^y = \binom{r+y-1}{y} p^r q^y, \quad y = 0:\infty; \qquad \mu_Y = \frac{rq}{p}; \quad \sigma_Y^2 = \frac{rq}{p^2}; \qquad M_Y(t) = \frac{p^r}{\left(1 - qe^t\right)^r}$$

(a) Geo(p), Geometric: X = # of trials until the first success. This is Bin$^-$(1, p). So $f(x; p) = q^{x-1} p$, for $x = 1:\infty$; $\mu = \frac{1}{p}$, $\sigma^2 = \frac{q}{p^2}$, $M_X(t) = p\left(e^{-t} - q\right)^{-1}$.

5. Mult(n, p), Multinomial: $X_i$ = # of occurrences falling into category i when the probability of having an outcome in each category is the same for each independent trial.

$$\Pr(\mathbf{X} = \mathbf{x}) = \binom{n}{\mathbf{x}} \mathbf{p}^{\mathbf{x}} := \binom{n}{x_1, x_2, \ldots, x_k} p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}, \qquad \sum_{i=1}^{k} x_i = n \text{ and } \sum_{i=1}^{k} p_i = 1,$$

and $E(X_i) = np_i$, $\mathrm{var}(X_i) = np_i(1 - p_i)$, and $\mathrm{cov}(X_i, X_j) = -np_i p_j$ for $i \ne j$.
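A sketch verifying the multinomial moments by simulation (parameter values illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 100, np.array([0.2, 0.3, 0.5])
    x = rng.multinomial(n, p, size=200_000)
    print(x.mean(axis=0), n * p)                              # E(X_i) = n p_i
    print(np.cov(x[:, 0], x[:, 1])[0, 1], -n * p[0] * p[1])   # cov approx -n p_i p_j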
Special Continuous Random Variables

The indicator function is defined by

$$I_{(a,b)}(x) = \begin{cases} 1, & x \in (a,b) \\ 0, & x \notin (a,b) \end{cases}$$
1. Unif(a, b), Uniform:

$$f(x; a, b) = \frac{1}{b-a}\,I_{(a,b)}(x); \qquad \mu = \tilde{x} = \tfrac{1}{2}(a+b); \quad \sigma^2 = \tfrac{1}{12}(b-a)^2; \quad \gamma_1 = 0; \quad \gamma_2 = -\tfrac{6}{5};$$

$$\mu'_r = \frac{b^{r+1} - a^{r+1}}{(b-a)(r+1)}; \qquad M_X(t) = \frac{e^{bt} - e^{at}}{t(b-a)}$$

2. N($\mu$, $\sigma^2$), Gaussian or Normal:

$$n(x; \mu, \sigma^2) := \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right); \qquad \mu = \tilde{x} = \mu; \quad \gamma_1 = 0; \quad \gamma_2 = 0; \quad M_X(t) = \exp\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right)$$

For the standard normal, $Z \sim N(0,1)$, the pdf is

$$\phi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2},$$

and the central moments are $\mu_{2r} = (2r-1)!!\,\sigma^{2r}$ and $\mu_{2r+1} = 0$ (e.g., $\mu_6/\sigma^6 = 5!! \cdot 3 / 3 = 15$), where the semifactorial is defined by

$$(2r)!! := 2 \cdot 4 \cdot 6 \cdots (2r) = 2^r\,r!; \qquad (2r-1)!! := 1 \cdot 3 \cdot 5 \cdots (2r-1) = \frac{(2r)!}{2^r\,r!}$$

3. logN($\mu$, $\sigma^2$), log Normal:

$$f(x; \mu, \sigma) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left(-\frac{(\log x - \mu)^2}{2\sigma^2}\right) I_{(0,\infty)}(x);$$

$\mu_X = \exp\left(\mu + \tfrac{1}{2}\sigma^2\right)$; if $\omega = e^{\sigma^2}$, then $\sigma_X^2 = \omega(\omega - 1)\,e^{2\mu}$, $\gamma_1 = (\omega + 2)\sqrt{\omega - 1}$, and $\gamma_2 = \omega^4 + 2\omega^3 + 3\omega^2 - 6$; $\tilde{x} = e^{\mu}$; $\mu'_r = \exp\left(r\mu + \tfrac{1}{2}r^2\sigma^2\right)$.

4. invG($\mu$, $\lambda$), inverse Gaussian or inverse Normal:

$$f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left(-\frac{\lambda(x-\mu)^2}{2\mu^2 x}\right) I_{(0,\infty)}(x), \quad \mu, \lambda > 0; \qquad \mu_X = \mu; \quad \sigma^2 = \frac{\mu^3}{\lambda};$$

and the mgf is

$$M_X(t) = \exp\left[\frac{\lambda}{\mu}\left(1 - \sqrt{1 - \frac{2\mu^2 t}{\lambda}}\right)\right]$$

5. Cauchy($\theta$, $\beta$), Cauchy:

$$f(x) = \frac{1}{\pi\beta}\left[1 + \left(\frac{x - \theta}{\beta}\right)^2\right]^{-1};$$

the moments $\mu'_r$ do not exist, but $\tilde{x} = \theta$, and the characteristic function

$$\varphi_X(\omega) = \exp\left(i\theta\omega - \beta|\omega|\right)$$

is the only generating function that exists. The parameter $\beta$ is one half the interquartile range, i.e., $\beta = \tfrac{1}{2}\,\mathrm{IQR} = \tfrac{1}{2}(Q_3 - Q_1) = \tfrac{1}{2}(x_{0.75} - x_{0.25})$.

6. Exp($\theta$) = Gam(1, $\theta$), Exponential:

$$f(x) = \frac{1}{\theta}\,e^{-x/\theta}\,I_{(0,\infty)}(x) \text{ when } \theta > 0; \qquad \mu = \theta; \quad \sigma^2 = \theta^2; \quad \mu'_r = \theta^r\,r!; \quad M_X(t) = (1 - \theta t)^{-1}$$

This is the distribution of the time until the next occurrence of a random event.
7. Gam($\alpha$, $\beta$), Gamma:

$$f(x; \alpha, \beta) = \frac{x^{\alpha-1}}{\Gamma(\alpha)\,\beta^{\alpha}}\,e^{-x/\beta}\,I_{(0,\infty)}(x), \quad \alpha, \beta > 0; \qquad \mu = \alpha\beta; \quad \sigma^2 = \alpha\beta^2; \quad \gamma_1 = 2/\sqrt{\alpha}; \quad \gamma_2 = 6/\alpha;$$

$$\mu'_r = \frac{\beta^r\,\Gamma(r + \alpha)}{\Gamma(\alpha)}; \qquad M_X(t) = (1 - \beta t)^{-\alpha}$$

If $\alpha$ is a positive integer, then this is the distribution of the time until the $\alpha$th occurrence of a random event.

8. $\chi^2_n$ = Gam(n/2, 2), Chi-Square:

$$f(x; n) = \frac{x^{(n/2)-1}}{\Gamma(n/2)\,2^{n/2}}\,e^{-x/2}\,I_{(0,\infty)}(x), \text{ for } n > 0; \qquad \mu = n; \quad \sigma^2 = 2n; \quad \text{Mode} = n - 2;$$

$$\gamma_1 = \sqrt{8/n}; \quad \gamma_2 = \frac{12}{n}; \qquad \mu'_r = \frac{2^r\,\Gamma\left(\frac{n}{2} + r\right)}{\Gamma\left(\frac{n}{2}\right)}; \qquad M_X(t) = (1 - 2t)^{-n/2}$$

9. Beta($\alpha$, $\beta$), Beta:

$$f(x; \alpha, \beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}\,I_{(0,1)}(x), \text{ for } \alpha, \beta > 0; \qquad \mu = \frac{\alpha}{\alpha+\beta}; \quad \sigma^2 = \frac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)^2};$$

$$\mu'_r = \frac{B(\alpha+r, \beta)}{B(\alpha, \beta)} = \prod_{k=0}^{r-1} \frac{\alpha+k}{\alpha+\beta+k}$$

10. Weib($\theta$, $\beta$), Weibull:

$$f(x; \theta, \beta) = \frac{\beta}{\theta}\left(\frac{x}{\theta}\right)^{\beta-1} \exp\left(-(x/\theta)^{\beta}\right) I_{(0,\infty)}(x), \text{ for } \theta, \beta > 0; \qquad \mu'_r = \theta^r\,\Gamma\left(1 + \frac{r}{\beta}\right);$$

$$\mu = \theta\,\Gamma\left(1 + \frac{1}{\beta}\right); \qquad \sigma^2 = \theta^2\left[\Gamma\left(1 + \frac{2}{\beta}\right) - \Gamma^2\left(1 + \frac{1}{\beta}\right)\right]$$

11. Lap($\theta$, $\beta$), Laplace or Double Exponential:

$$f(x; \theta, \beta) = \frac{1}{2\beta} \exp\left\{-\frac{|x - \theta|}{\beta}\right\}; \qquad \mu = \tilde{x} = \theta; \quad \sigma^2 = 2\beta^2; \quad \gamma_2 = 3; \quad \mu_{2r} = (2r)!\,\beta^{2r}; \qquad M_X(t) = \frac{e^{\theta t}}{1 - \beta^2 t^2}$$

12. Logist($\theta$, $\beta$), Logistic:

$$f(x; \theta, \beta) = \frac{e^{-(x-\theta)/\beta}}{\beta\left(1 + e^{-(x-\theta)/\beta}\right)^2}; \qquad F(x; \theta, \beta) = \frac{1}{1 + e^{-(x-\theta)/\beta}}; \qquad \mu = \theta; \quad \sigma^2 = \tfrac{1}{3}\pi^2\beta^2; \quad \gamma_1 = 0; \quad \gamma_2 = 1.2;$$

$$M_X(t) = e^{\theta t}\,B(1 - \beta t,\ 1 + \beta t), \text{ where } B \text{ is the beta function}$$

13. vM($\theta$, $\kappa$), von Mises:

$$f(x; \theta, \kappa) = \frac{1}{2\pi I_0(\kappa)} \exp\left(\kappa \cos(x - \theta)\right) I_{(-\pi,\pi)}(x),$$

where $I_0(\kappa)$ is the modified Bessel function of order 0 and $\kappa > 0$; $\mu = \tilde{x} = \theta$; $\sigma^2 = 1 - \dfrac{I_1(\kappa)}{I_0(\kappa)}$;

$$\text{cf} = \varphi_X(\omega) = \frac{I_{|\omega|}(\kappa)}{I_0(\kappa)}\,e^{i\theta\omega}$$

Some limits are

$$\lim_{\kappa \to 0} f(x; \theta, \kappa) = \frac{1}{2\pi}\,I_{(-\pi,\pi)}(x); \qquad \lim_{\kappa \to \infty} f(x; \theta, \kappa) = \sqrt{\frac{\kappa}{2\pi}} \exp\left\{-\frac{\kappa(x - \theta)^2}{2}\right\};$$

that is, $\lim_{\kappa \to 0} \mathrm{vM}(\theta, \kappa) = \mathrm{Unif}(-\pi, \pi)$ and $\lim_{\kappa \to \infty} \mathrm{vM}(\theta, \kappa) = N(\theta, 1/\kappa)$.

14. Par(m, $\alpha$), Pareto:

$$f(x; m, \alpha) = \frac{\alpha\,m^{\alpha}}{x^{\alpha+1}}\,I_{(m,\infty)}(x), \text{ with } m \text{ and } \alpha \text{ both positive};$$

$$\mu = \frac{\alpha m}{\alpha - 1}, \text{ for } \alpha > 1; \quad \tilde{x} = m\,2^{1/\alpha}; \quad \mu'_n = \frac{\alpha\,m^n}{\alpha - n}, \text{ for } n < \alpha; \quad \sigma^2 = \frac{\alpha\,m^2}{(\alpha-1)^2(\alpha-2)}, \text{ for } \alpha > 2$$
15. Extr($\theta$, $\beta$), Extreme Value:

$$\text{cdf} = F(x; \theta, \beta) = \exp\left(-e^{-(x-\theta)/\beta}\right), \text{ for } \beta > 0;$$

$$\mu = \theta + \gamma\beta \ (\gamma \approx 0.5772, \text{ Euler's constant}); \quad \sigma^2 = \tfrac{1}{6}\pi^2\beta^2; \quad \tilde{x} = \theta - \beta \log(\log 2); \qquad M_X(t) = e^{\theta t}\,\Gamma(1 - \beta t), \text{ for } t < 1/\beta$$
16. $t_n$, t-distribution:

$$t_n = \frac{N(0,1)}{\sqrt{\chi^2_n / n}},$$

when numerator and denominator are independent and n > 0;

$$f(x; n) = \frac{\left(1 + \frac{x^2}{n}\right)^{-(n+1)/2}}{\sqrt{n}\,B\left(\frac{1}{2}, \frac{n}{2}\right)}; \qquad \mu = 0; \quad \sigma^2 = \frac{n}{n-2}; \quad \gamma_1 = 0 \text{ for } n = 4:\infty; \quad \gamma_2 = \frac{6}{n-4} \text{ for } n = 5:\infty$$

$t_n$ only has moments up to order $n - 1$; hence, the mgf does not exist.

17. $F_{m,n}$, F-distribution:

$$F_{m,n} = \frac{\chi^2_m / m}{\chi^2_n / n},$$

when numerator and denominator are independent;

$$f(x; m, n) = \frac{m^{m/2}\,n^{n/2}}{B\left(\frac{m}{2}, \frac{n}{2}\right)}\,\frac{x^{(m-2)/2}}{(n + mx)^{(m+n)/2}}\,I_{(0,\infty)}(x);$$

$$\mu = \frac{n}{n-2}, \text{ for } n > 2; \qquad \sigma^2 = \frac{2n^2(m + n - 2)}{m\,(n-2)^2\,(n-4)}, \text{ for } n > 4$$

18. $N_2(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$, Bivariate normal:

$$f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1 - \rho^2}} \exp\left(-\tfrac{1}{2}Q\right),$$

where

$$Q := \frac{1}{1 - \rho^2}\left[\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1 - \mu_1}{\sigma_1}\right)\left(\frac{x_2 - \mu_2}{\sigma_2}\right) + \left(\frac{x_2 - \mu_2}{\sigma_2}\right)^2\right]$$

Then $X_1 \sim N(\mu_1, \sigma_1^2)$, $X_2 \sim N(\mu_2, \sigma_2^2)$,

$$X \mid y \sim N\left(\mu_1 + \rho\,\frac{\sigma_1}{\sigma_2}(y - \mu_2),\ \sigma_1^2(1 - \rho^2)\right), \qquad Y \mid x \sim N\left(\mu_2 + \rho\,\frac{\sigma_2}{\sigma_1}(x - \mu_1),\ \sigma_2^2(1 - \rho^2)\right),$$

and

$$M_{X_1,X_2}(t_1, t_2) = \exp\left(\mu_1 t_1 + \mu_2 t_2 + \tfrac{1}{2}\left(\sigma_1^2 t_1^2 + 2\rho\sigma_1\sigma_2 t_1 t_2 + \sigma_2^2 t_2^2\right)\right)$$

19. $\chi'^2_n(\delta)$, Noncentral chi-square: If $Z_1, Z_2, \ldots, Z_n$ are independent $N(\mu_k, 1)$, then

$$\chi'^2_n(\delta) = \sum_{k=1}^{n} Z_k^2, \quad \text{where } \delta := \sum_{k=1}^{n} \mu_k^2 \text{ is the noncentrality parameter}$$

The pdf is

$$f(x; n, \delta) = \sum_{k=0}^{\infty} \frac{1}{k!}\left(\frac{\delta}{2}\right)^k e^{-\delta/2}\,\frac{x^{(n/2)+k-1}\,e^{-x/2}}{2^{(n/2)+k}\,\Gamma\left(\frac{n}{2} + k\right)}\,I_{(0,\infty)}(x);$$

$$\mu = E(X) = n + \delta; \quad \mathrm{var}(X) = 2(n + 2\delta); \quad M(t) = (1 - 2t)^{-n/2} \exp\{\delta t/(1 - 2t)\};$$

$$\mu'_r = 2^r\,\Gamma\left(r + \frac{n}{2}\right) \sum_{k=0}^{r} \binom{r}{k} \frac{(\delta/2)^k}{\Gamma\left(k + \frac{n}{2}\right)}; \qquad \kappa_r = 2^{r-1}(r-1)!\,(n + r\delta)$$
20. $t'_n(\delta)$, Noncentral t: $t'_n(\delta) := N(\delta, 1)/\sqrt{\chi^2_n / n}$, and the pdf is

$$f(x; n, \delta) = \frac{n^{n/2}\,e^{-\delta^2/2}}{\Gamma(n/2)\,\sqrt{\pi}\,(n + x^2)^{(n+1)/2}} \sum_{k=0}^{\infty} \Gamma\left(\frac{n+k+1}{2}\right) \frac{(\delta x)^k}{k!}\left(\frac{2}{n + x^2}\right)^{k/2};$$

$$\mu = \delta\,\sqrt{\frac{n}{2}}\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}, \text{ for } n > 1; \qquad \mathrm{var}(X) = \frac{n(1 + \delta^2)}{n - 2} - \mu^2, \text{ for } n > 2$$
21. $F'_{m,n}(\delta)$, Noncentral F: $F'_{m,n}(\delta) = \dfrac{\chi'^2_m(\delta)/m}{\chi^2_n/n}$, and the pdf is

$$f(x; m, n, \delta) = e^{-\delta/2} \sum_{k=0}^{\infty} \frac{1}{k!}\left(\frac{\delta}{2}\right)^k \frac{\Gamma\left(\frac{1}{2}(m + n + 2k)\right)}{\Gamma\left(\frac{1}{2}(m + 2k)\right)\Gamma\left(\frac{n}{2}\right)}\,\frac{x^{(m+2k)/2 - 1}}{(1 + x)^{(m+n+2k)/2}}\,I_{(0,\infty)}(x);$$

$$\mu = \frac{(m + \delta)\,n}{(n - 2)\,m}, \text{ for } n > 2; \qquad \mathrm{var}(X) = \frac{2\left[(m + \delta)^2 + (m + 2\delta)(n - 2)\right] n^2}{(n - 2)^2 (n - 4)\,m^2}, \text{ for } n > 4$$
Addition Theorems, Division Statements, Miscellaneous Relations

Each of the following sums is of independent rvs of the type indicated.

1. $\sum \mathrm{Bin}(n_k, p) = \mathrm{Bin}\left(\sum n_k,\ p\right)$

2. $\sum_{1}^{n} \mathrm{Geo}(p) = \mathrm{Bin}^-(n, p)$
3. $\sum \mathrm{Pois}(\lambda_k) = \mathrm{Pois}\left(\sum \lambda_k\right)$

4. $\sum_{1}^{n} \mathrm{Exp}(\theta) = \mathrm{Gam}(n, \theta)$

5. $\sum_{1}^{n} \mathrm{Gam}(\alpha_k, \beta) = \mathrm{Gam}\left(\sum \alpha_k,\ \beta\right)$

6. $\sum a_k\,N(\mu_k, \sigma_k^2) = N\left(\sum a_k \mu_k,\ \sum a_k^2 \sigma_k^2\right)$

7. $\chi^2_1 = \{N(0,1)\}^2$

8. $\sum \chi^2_{n_k} = \chi^2_{\sum n_k}$

9. $\sum \chi'^2_{n_k}(\delta_k) = \chi'^2_{\sum n_k}\left(\sum \delta_k\right)$

10. $X_1, \ldots, X_n$ iid $N(\mu, \sigma^2)$: $\bar{X} \sim N(\mu, \sigma^2/n)$, independent of $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$

11. If X, Y iid N(0,1) then

(a) $X/|Y| \sim \mathrm{Cauchy}(0,1) = t_1$

(b) $\dfrac{X+Y}{X-Y} \sim \mathrm{Cauchy}(0,1)$

(c) $U = \dfrac{XY}{\sqrt{X^2+Y^2}}$ is independent of $V = \dfrac{X^2 - Y^2}{2\sqrt{X^2+Y^2}}$, which is also normal; each is $N(0, \tfrac{1}{4})$.
12. $X_1, \ldots, X_n$ iid Unif(0,1) $\Rightarrow X_{(r,n)} \sim \mathrm{Beta}(r,\ n - r + 1)$

13. $X \sim \mathrm{Unif}(0,1) \Rightarrow -2\log X \sim \chi^2_{(2)}$

14. $X_1, X_2, \ldots, X_n$ iid $\mathrm{Exp}(\theta) \Rightarrow X_{(1,n)} \sim \mathrm{Exp}(\theta/n)$

15. $X \sim \chi^2_m$ independent of $Y \sim \chi^2_n \Rightarrow \dfrac{X}{X+Y} \sim \mathrm{Beta}\left(\dfrac{m}{2}, \dfrac{n}{2}\right)$

16. $X \sim \mathrm{Gam}(\alpha, \beta) \Rightarrow 2X/\beta \sim \chi^2_{2\alpha}$

17. $F \sim F_{m,n} \Rightarrow \dfrac{(m/n)F}{1 + (m/n)F} \sim \mathrm{Beta}\left(\dfrac{m}{2}, \dfrac{n}{2}\right)$

18. $X \sim \mathrm{Beta}(\alpha_1, \beta_1)$ independent of $Y \sim \mathrm{Beta}(\alpha_2, \beta_2)$: if $\alpha_1 = \alpha_2 + \beta_2$, then $XY \sim \mathrm{Beta}(\alpha_2,\ \beta_1 + \beta_2)$; if $\alpha_2 = \alpha_1 + \beta_1$, then $XY \sim \mathrm{Beta}(\alpha_1,\ \beta_1 + \beta_2)$

19. $(X, Y) \sim N_2(0, 0, 1, 1, \rho) \Rightarrow Y/X \sim \mathrm{Cauchy}\left(\rho,\ \sqrt{1 - \rho^2}\right)$; equivalently, $\left(Y/X - \rho\right)/\sqrt{1 - \rho^2} \sim \mathrm{Cauchy}(0,1)$

20. $X \sim \mathrm{Bin}^-(r, p)$ and $Y \sim \mathrm{Bin}(n, p) \Rightarrow \Pr(X \le n) = \Pr(Y \ge r)$. In terms of cdfs, this is $F_X(n; r, p) = 1 - F_Y(r - 1; n, p)$ (see the sketch below).

21. $X \sim \mathrm{Gam}(n, \theta)$ and $Y \sim \mathrm{Pois}(x/\theta) \Rightarrow \Pr(X \le x) = \Pr(Y \ge n)$. In terms of cdfs, this is $F_X(x; n, \theta) = 1 - F_Y(n - 1;\ x/\theta)$.
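Several of these division statements are easy to confirm numerically; e.g., for item 20 (a sketch, noting that scipy's nbinom counts failures rather than trials):

    from scipy import stats

    r, p, n = 3, 0.4, 10
    lhs = stats.nbinom.cdf(n - r, r, p)     # Pr(X <= n): at most n - r failures before the rth success
    rhs = 1 - stats.binom.cdf(r - 1, n, p)  # Pr(Y >= r)
    print(lhs, rhs)                         # equal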
22. $X \sim \log N(\mu_1, \sigma_1^2)$ and $Y \sim \log N(\mu_2, \sigma_2^2)$ independent, then $XY \sim \log N(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2)$ and $X/Y \sim \log N(\mu_1 - \mu_2,\ \sigma_1^2 + \sigma_2^2)$

23. For any continuous rv X with cdf F(x), the rth order statistic $X_{(r,n)}$ has cdf $G_r(y) = H(F(y); r, n - r + 1)$, where H is the cdf of a $\mathrm{Beta}(r, n - r + 1)$ rv.
24. Gamma function: $\Gamma(\alpha) := \int_0^{\infty} t^{\alpha-1} e^{-t}\,dt$; $\Gamma(\alpha + 1) = \alpha\,\Gamma(\alpha)$; $\Gamma(1) = 1$; $\Gamma(n+1) = n!$ when n is a nonnegative integer; and $\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi}$

25. $\int_0^{\infty} t^{\alpha}\,e^{-\beta t}\,dt = \dfrac{\Gamma(\alpha+1)}{\beta^{\alpha+1}}$, for $\beta > 0$

26. Incomplete gamma function: $\gamma(a, x) := \int_0^{x} t^{a-1} e^{-t}\,dt$, for $a > 0$ and $x > 0$. $P(a, x) := \gamma(a, x)/\Gamma(a)$ is the cdf of the gamma distribution. The corresponding tail probability is

$$\frac{\Gamma(a, x)}{\Gamma(a)} := 1 - \frac{\gamma(a, x)}{\Gamma(a)} = \frac{1}{\Gamma(a)} \int_x^{\infty} t^{a-1} e^{-t}\,dt$$
27. Beta function: $B(\alpha, \beta) := \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt = \dfrac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$, for $\alpha, \beta > 0$

28. Incomplete beta function: $B_x(\alpha, \beta) := \int_0^{x} t^{\alpha-1}(1-t)^{\beta-1}\,dt$. $I_x(\alpha, \beta) := B_x(\alpha, \beta)/B(\alpha, \beta)$ is the cdf of the beta distribution.
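These regularized incomplete functions are what scipy exposes directly; an illustrative correspondence with the gamma and beta cdfs:

    from scipy import special, stats

    a, x = 2.5, 1.7
    print(special.gammainc(a, x), stats.gamma(a).cdf(x))        # P(a, x) equals the Gam(a, 1) cdf
    alpha, beta, y = 2.0, 3.0, 0.4
    print(special.betainc(alpha, beta, y), stats.beta(alpha, beta).cdf(y))  # I_y(alpha, beta)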