Theorem The maximum of n mutually independent Bernoulli random variables is a Bernoulli
random variable.
Proof The Bernoulli distribution has probability mass function
\[
f(x) = P(X = x) =
\begin{cases}
1 - p & x = 0 \\
p & x = 1 \\
0 & \text{otherwise.}
\end{cases}
\]
Let $X_i \sim \text{Bernoulli}(p_i)$ for $i = 1, 2, \ldots, n$. Then
\[
Y = \max\{X_1, X_2, \ldots, X_n\} \sim \text{Bernoulli}\!\left( 1 - \prod_{i=1}^{n} (1 - p_i) \right)
\]
because the support of $Y$ is $\{0, 1\}$ and $Y = 0$ occurs only when $X_1 = X_2 = \cdots = X_n = 0$. Since $X_1, X_2, \ldots, X_n$ are assumed to be mutually independent, $P(Y = 0) = (1 - p_1)(1 - p_2) \cdots (1 - p_n)$ and $P(Y = 1) = 1 - P(Y = 0)$.
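As an informal cross-check of this result, the short simulation below compares the empirical distribution of the maximum with the claimed Bernoulli parameter. It is only a Python/NumPy sketch, not part of the original argument; the parameter values, sample size, and seed are arbitrary choices for illustration.

# Monte Carlo sketch (illustrative only): draw independent Bernoulli(p_i)
# variables, take their row-wise maximum, and compare the empirical
# P(Y = 1) with the claimed parameter 1 - (1 - p_1)(1 - p_2)(1 - p_3).
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.4, 0.7])        # p_1, p_2, p_3 (not identically distributed)
m = 100_000                          # number of replications

x = rng.random((m, p.size)) < p      # each row is one draw of X_1, ..., X_n
y = x.max(axis=1)                    # Y = max{X_1, ..., X_n}

print(y.mean())                      # empirical P(Y = 1)
print(1 - np.prod(1 - p))            # 1 - (0.9)(0.6)(0.3) = 0.838

The empirical proportion should agree with 0.838 to within ordinary Monte Carlo error.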
APPL verification: The APPL statements
MaximumIID(BernoulliRV(p),n);
simplify(op(%[1])[2]);
yield expressions that are consistent with the result for the special case where $X_1, X_2, \ldots, X_n$ are assumed to be mutually independent and identically distributed.
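For readers without access to APPL, a rough Python analogue of this iid check (an assumption of this note, using SciPy rather than APPL) follows from the observation that the maximum of $n$ iid Bernoulli($p$) variables equals 1 exactly when the Binomial($n, p$) count of successes is at least 1, so $P(Y = 1) = 1 - (1 - p)^n$.

# SciPy-based analogue of the iid special case (parameter values are arbitrary):
# P(Y = 1) = P(at least one success) = 1 - P(Binomial(n, p) = 0) = 1 - (1 - p)^n.
from scipy.stats import binom

n, p = 5, 0.3
prob_max_is_one = 1 - binom.pmf(0, n, p)   # exact P(Y = 1) via the binomial count
closed_form = 1 - (1 - p) ** n             # Bernoulli parameter from the theorem

print(prob_max_is_one, closed_form)        # both approximately 0.83193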