Probability Distribution Function. If X is a continuous random variable, then
its cumulative distribution function (abbreviated cdf) or probability distribution
function is defined for all real x by:
F_X(x) = P(X ≤ x)
Probability Density Function. The probability density function (pdf) of a continuous random variable is:
f_X(x) = lim_{dx→0} [F_X(x + dx) − F_X(x)] / dx = lim_{dx→0} P(x < X ≤ x + dx) / dx
Let X_1, ..., X_n be independent random variables. When the random variables are
independent, then by definition:
f(x_1, x_2, ..., x_n) = f_{X_1}(x_1) f_{X_2}(x_2) ... f_{X_n}(x_n)
If X_1 and X_2 are independent random variables, then for any sets A and B:
P(X_1 ∈ A, X_2 ∈ B) = P(X_1 ∈ A) P(X_2 ∈ B)
This holds for more than two independent random variables, so for example:
P(X_1 ≤ x_1, X_2 ≤ x_2, ..., X_n ≤ x_n) = P(X_1 ≤ x_1) P(X_2 ≤ x_2) ... P(X_n ≤ x_n)
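As a quick numerical check of this factorization, the following sketch in Python (with illustrative rates and evaluation points chosen here, not taken from the text) estimates P(X_1 ≤ x_1, X_2 ≤ x_2) by simulation and compares it with the product P(X_1 ≤ x_1) P(X_2 ≤ x_2) for two independent exponential variables.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000                      # number of simulated pairs
mu1, mu2 = 0.5, 2.0              # illustrative rates (assumption)
x1, x2 = 1.0, 0.4                # points at which the joint cdf is evaluated (assumption)

# numpy parameterises the exponential by its mean 1/mu
X1 = rng.exponential(1 / mu1, n)
X2 = rng.exponential(1 / mu2, n)

joint = np.mean((X1 <= x1) & (X2 <= x2))          # P(X1 <= x1, X2 <= x2)
product = np.mean(X1 <= x1) * np.mean(X2 <= x2)   # P(X1 <= x1) * P(X2 <= x2)
print(joint, product)  # the two estimates should agree up to Monte Carlo noise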
1 Negative exponential distribution
In probability theory and statistics, the exponential distributions (a.k.a. negative exponential distributions) are a class of continuous probability distributions. They describe
the times between events in a Poisson process, i.e. a process in which events occur
continuously and independently at a constant average rate. The probability density
function of an exponential distribution is:
f(x; µ) = µ e^(−µx) for x ≥ 0, and 0 when x < 0
If a random variable X has this distribution, we say that the random variable is exponentially distributed with parameter µ. The cumulative distribution function is:
F(x; µ) = P(X ≤ x) = 1 − e^(−µx)
and:
P(X > x) = e^(−µx)
The mean value is given by:
E(X) = 1/µ
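As a sanity check on these formulas, the sketch below (assuming an illustrative rate µ = 1.5 and checkpoint x = 0.8, neither taken from the text) draws exponential samples and compares the empirical mean and the empirical value of P(X ≤ x) with 1/µ and 1 − e^(−µx).

import numpy as np

rng = np.random.default_rng(1)
mu = 1.5                                     # illustrative rate parameter (assumption)
x = 0.8                                      # point at which the cdf is checked (assumption)
samples = rng.exponential(1 / mu, 100_000)   # numpy uses the mean 1/mu as the scale

print(samples.mean(), 1 / mu)                      # empirical mean vs. E(X) = 1/mu
print(np.mean(samples <= x), 1 - np.exp(-mu * x))  # empirical cdf vs. F(x) = 1 - e^(-mu*x)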
Let X_1, ..., X_n be independent exponential random variables representing the times
to failure of n subsystems. For subsystem i, the mean is
E(X_i) = 1/µ_i, i = 1, ..., n.
a) Assume that the time unit is years. What is the probability that all n subsystems
function for at least y years?
If you think of one subsystem i in isolation, the probability that it lasts for
more than y years is simply P(X_i > y). By the independence property above, the
probability that all n subsystems function for at least y years is:
P(X_1 > y, X_2 > y, ..., X_n > y) = ∏_{k=1}^{n} P(X_k > y) = ∏_{k=1}^{n} [1 − F_{X_k}(y)] = ∏_{k=1}^{n} e^(−µ_k y) = e^(−(µ_1 + ... + µ_n) y)
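A small simulation can confirm this result. The sketch below (with illustrative rates µ_i and horizon y, both assumptions not taken from the text) estimates the probability that all subsystems survive at least y years and compares it with e^(−(µ_1 + ... + µ_n) y).

import numpy as np

rng = np.random.default_rng(2)
mus = np.array([0.2, 0.5, 1.0])  # illustrative failure rates per year (assumption)
y = 1.5                          # survival horizon in years (assumption)
trials = 200_000

# one row per trial, one column per subsystem; scale = 1/mu_i
X = rng.exponential(1 / mus, size=(trials, len(mus)))
all_survive = np.mean(np.all(X > y, axis=1))   # P(X_1 > y, ..., X_n > y)

print(all_survive, np.exp(-mus.sum() * y))     # should agree up to Monte Carlo noise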
b) What is the distribution of the time that passes until the first subsystem fails?
The first failure time is the minimum of all the X_i's: if the first failure happens
after time y, then every subsystem is still working at time y, so none of the X_i can
be less than or equal to y. This can thus be stated as:
P(min(X_1, ..., X_n) > y) = P(X_1 > y, X_2 > y, ..., X_n > y) = ∏_{k=1}^{n} P(X_k > y) = ∏_{k=1}^{n} [1 − F_{X_k}(y)] = ∏_{k=1}^{n} e^(−µ_k y) = e^(−(µ_1 + ... + µ_n) y)
which is exactly the same as above. The cdf of the first failure time is therefore
1 − e^(−(µ_1 + ... + µ_n) y), so the distribution is exponential with
parameter µ_1 + µ_2 + ... + µ_n.
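The sketch below (reusing illustrative rates, an assumption) checks this empirically: the simulated first failure time min(X_1, ..., X_n) should have mean 1/(µ_1 + ... + µ_n) and survival function e^(−(µ_1 + ... + µ_n) y).

import numpy as np

rng = np.random.default_rng(3)
mus = np.array([0.2, 0.5, 1.0])          # illustrative rates (assumption)
trials = 200_000

X = rng.exponential(1 / mus, size=(trials, len(mus)))
first_failure = X.min(axis=1)            # time until the first subsystem fails

rate = mus.sum()
print(first_failure.mean(), 1 / rate)                 # mean of an exponential with the summed rate
y = 0.7                                               # checkpoint (assumption)
print(np.mean(first_failure > y), np.exp(-rate * y))  # empirical vs. theoretical survival at y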
There are numerous applications where random variables are ordered from least
to greatest, with a particular value in the ordering being of interest, such as the
smallest, the largest, the median, etc.
Define
Y_1 = min(X_1, ..., X_n)
and
Y_n = max(X_1, ..., X_n).
c) Determine the distributions of Y_1 and Y_n.
F_{Y_1}(y) = P(Y_1 ≤ y)
= P(min{X_1, X_2, ..., X_n} ≤ y)
= P(X_1 ≤ y or X_2 ≤ y or ... or X_n ≤ y)
For at least one of these to be less than or equal to y means that not all can be greater than y:
= 1 − P(X_1 > y, X_2 > y, ..., X_n > y)
= 1 − ∏_{i=1}^{n} P(X_i > y)
= 1 − ∏_{i=1}^{n} [1 − (1 − e^(−µ_i y))]
= 1 − ∏_{i=1}^{n} e^(−µ_i y)
= 1 − e^(−(µ_1 + ... + µ_n) y)
F_{Y_n}(y) = P(Y_n ≤ y)
= P(max{X_1, X_2, ..., X_n} ≤ y)
If the max is smaller than y, then all of them must be smaller than y, so:
= P(X_1 ≤ y, X_2 ≤ y, ..., X_n ≤ y)
= ∏_{i=1}^{n} P(X_i ≤ y)
= ∏_{i=1}^{n} (1 − e^(−µ_i y))
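Both cdfs can be checked at a single point with the sketch below (illustrative rates and evaluation point, both assumptions not taken from the text).

import numpy as np

rng = np.random.default_rng(4)
mus = np.array([0.2, 0.5, 1.0])          # illustrative rates (assumption)
y = 1.0                                  # evaluation point (assumption)
trials = 200_000

X = rng.exponential(1 / mus, size=(trials, len(mus)))
Y1 = X.min(axis=1)                       # smallest of the X_i
Yn = X.max(axis=1)                       # largest of the X_i

print(np.mean(Y1 <= y), 1 - np.exp(-mus.sum() * y))      # F_Y1(y) = 1 - e^(-(mu_1+...+mu_n) y)
print(np.mean(Yn <= y), np.prod(1 - np.exp(-mus * y)))   # F_Yn(y) = prod_i (1 - e^(-mu_i y))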
d) Show that the probability that X_i is the smallest one among X_1, ..., X_n is equal
to µ_i / (µ_1 + ... + µ_n), i = 1, ..., n.
P(X_i = min(X_1, ..., X_n))
= P(the next event is from X_i)
= P(the next event is from X_i in the interval (t, t + dt) and no events occur in the other processes)
Using the continuous version of the law of total probability:
P(X_i = min(X_1, ..., X_n)) = ∫_0^∞ f_{X_i}(x) ∏_{j≠i} P(X_j > x) dx
= ∫_0^∞ µ_i e^(−µ_i x) e^(−Σ_{j≠i} µ_j x) dx
= ∫_0^∞ µ_i e^(−(µ_1 + µ_2 + ... + µ_n) x) dx
= µ_i / (µ_1 + µ_2 + ... + µ_n)
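Finally, the result of d) can be checked by Monte Carlo. The sketch below (with illustrative rates, an assumption) estimates, for each i, how often X_i is the minimum and compares the frequencies with µ_i / (µ_1 + ... + µ_n).

import numpy as np

rng = np.random.default_rng(5)
mus = np.array([0.2, 0.5, 1.0])          # illustrative rates (assumption)
trials = 200_000

X = rng.exponential(1 / mus, size=(trials, len(mus)))
winner = X.argmin(axis=1)                # index of the subsystem that fails first

empirical = np.bincount(winner, minlength=len(mus)) / trials
print(empirical, mus / mus.sum())        # should match mu_i / (mu_1 + ... + mu_n)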