EE385 Class Notes
9/15/2015
John Stensby
Chapter 2 - Random Variables
In this and the chapters that follow, we denote the real line as $R = (-\infty, \infty)$, and the extended real line is denoted $R^{+} \equiv R \cup \{\pm\infty\}$. The extended real line is the real line with $\pm\infty$ thrown in.
Put simply (and incompletely), a random variable is a function that maps the sample
space S into the extended real line.
Example 2-1: In the die experiment, we assign to the six outcomes $f_i$ the numbers $X(f_i) = 10i$. Thus, we have $X(f_1) = 10$, $X(f_2) = 20$, $X(f_3) = 30$, $X(f_4) = 40$, $X(f_5) = 50$, $X(f_6) = 60$.
For an arbitrary value $x_0$, we must be able to answer questions like "what is the probability that random variable X is less than, or equal to, $x_0$?" Hence, the set $\{\zeta \in S : X(\zeta) \le x_0\}$ must be an event (i.e., the set must belong to the $\sigma$-algebra F) for every $x_0$ (sometimes, the algebraic, non-random variable $x_0$ is said to be a realization of the random variable X). This leads to the more formal definition.
Definition: Given a probability space (S, F, P), a random variable $X(\zeta)$ is a function
$$X : S \rightarrow R^{+}. \tag{2-1}$$
That is, random variable X is a function that maps sample space S into the extended real line $R^{+}$. In addition, random variable X must satisfy the two criteria discussed below.
1) Recall the Borel $\sigma$-algebra $\mathcal{B}$ of subsets of R that was discussed in Chapter 1 (see Example 1-9). For each $B \in \mathcal{B}$, we must have
$$X^{-1}(B) \equiv \{\zeta \in S : X(\zeta) \in B\} \in F, \quad B \in \mathcal{B}. \tag{2-2}$$
A function that satisfies this criterion is said to be measurable. A random variable X must be a measurable function.
2) $P[\{\zeta \in S : X(\zeta) = \pm\infty\}] = 0$. Random variable X is allowed to take on the values $\pm\infty$;
however, it must take on the values of $\pm\infty$ with probability zero.
These two conditions hold for most elementary applications. Usually, they are treated as mere technicalities that impose no real limitation on practical applications of random variables (indeed, they are seldom given much thought).
However, good reasons exist for requiring that random variable X satisfy the conditions
1) and 2) listed above. In our experiment, recall that sample space S describes the set of
elementary outcomes. Now, it may happen that we cannot directly observe elementary outcomes $\zeta \in S$. Instead, we may be forced to use a measuring instrument (i.e., random variable) that would provide us with measurements $X(\zeta)$, $\zeta \in S$. Now, for each $x \in R$, we need to be able to compute the probability $P[-\infty < X(\zeta) \le x]$, because $[-\infty < X(\zeta) \le x]$ is a meaningful, observable event in the context of our experiment/measurements. For probability $P[-\infty < X(\zeta) \le x]$ to exist, we must have $[-\infty < X(\zeta) \le x]$ as an event; that is, we must have
$$\{\zeta \in S : -\infty < X(\zeta) \le x\} \in F \tag{2-3}$$
for each $x \in R$. It is possible to show that Conditions (2-2) and (2-3) are equivalent. So, while (2-2) (or the equivalent (2-3)) may be a mere technicality, it is an important technicality.
Sigma-Algebra Generated by Random Variable X
Suppose that we are given a probability space (S, F, P) and a random variable X as described above. Random variable X induces a $\sigma$-algebra $\sigma(X)$ on S. $\sigma(X)$ consists of all sets of the form $\{\zeta \in S : X(\zeta) \in B\}$, $B \in \mathcal{B}$, where $\mathcal{B}$ denotes the $\sigma$-algebra of Borel subsets of R that was discussed in Chapter 1 (see Example 1-9). Note that $\sigma(X) \subset F$; we say that $\sigma(X)$ is the sub-$\sigma$-algebra of F that is generated by random variable X.
Probability Space Induced by Random Variable X
Suppose that we are given a probability space (S, F, P) and a random variable X as described above. Let $\mathcal{B}$ be the Borel $\sigma$-algebra introduced in Example 1-9. By (2-2), for each $B \in \mathcal{B}$, we have $\{\zeta \in S : X(\zeta) \in B\} \in F$, so $P[\{\zeta \in S : X(\zeta) \in B\}]$ is well defined.
This allows us to use random variable X to define $(R^{+}, \mathcal{B}, P_X)$, a probability space induced by X. Probability measure $P_X$ is defined as follows: for each $B \in \mathcal{B}$, we define $P_X(B) \equiv P[\{\zeta \in S : X(\zeta) \in B\}]$. We say that P induces the probability measure $P_X$.
Distribution and Density Functions
The distribution function of the random variable $X(\zeta)$ is the function
$$F(x) \equiv P[X(\zeta) \le x] \equiv P[\{\zeta \in S : X(\zeta) \le x\}], \tag{2-4}$$
where $-\infty < x < \infty$.
Example 2-2: Consider the coin tossing experiment with P[heads] = p and P[tails] = q ≡ 1 - p. Define the random variable
$$X(\text{head}) = 1, \qquad X(\text{tail}) = 0.$$
If x ≥ 1, then both X(head) = 1 ≤ x and X(tail) = 0 ≤ x so that
$$F(x) = 1 \quad \text{for } x \ge 1.$$
If 0 ≤ x < 1, then X(head) = 1 > x and X(tail) = 0 ≤ x so that
$$F(x) = P[X \le x] = q \quad \text{for } 0 \le x < 1.$$
Finally, if x < 0, then both X(head) = 1 > x and X(tail) = 0 > x so that
$$F(x) = P[X \le x] = 0 \quad \text{for } x < 0.$$
[Figure 2-1: Distribution function for the X(head) = 1, X(tail) = 0 random variable.]
See Figure 2-1 for a graph of F(x).
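The step-function behavior of F(x) in Example 2-2 can also be checked numerically. The short sketch below is an illustration only (it is not part of the original notes); the helper name bernoulli_cdf and the value p = 0.4 are assumptions made for the example.

```python
def bernoulli_cdf(x, p=0.4):
    """Distribution function F(x) = P[X <= x] for X(head)=1, X(tail)=0, P[heads]=p."""
    q = 1.0 - p
    if x < 0:
        return 0.0      # neither outcome satisfies X <= x
    elif x < 1:
        return q        # only X(tail) = 0 satisfies X <= x
    else:
        return 1.0      # both outcomes satisfy X <= x

# F is right continuous: F(1) = 1, while F(x) = q just below x = 1.
print(bernoulli_cdf(-0.5), bernoulli_cdf(0.5), bernoulli_cdf(1.0))  # 0.0 0.6 1.0
```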
Properties of Distribution Functions
First, some standard notation:
$$F(x^{+}) \equiv \lim_{r \to x^{+}} F(r) \qquad \text{and} \qquad F(x^{-}) \equiv \lim_{r \to x^{-}} F(r).$$
Some properties of distribution functions are listed below.
Claim #1: $F(+\infty) = 1$ and $F(-\infty) = 0$. (2-5)
Proof: $F(+\infty) = \lim_{x \to \infty} P[X \le x] = P[S] = 1$ and $F(-\infty) = \lim_{x \to -\infty} P[X \le x] = P[\varnothing] = 0$.
Claim #2: The distribution function is a non-decreasing function of x. If $x_1 < x_2$, then $F(x_1) \le F(x_2)$.
Proof: $x_1 < x_2$ implies that $\{X(\zeta) \le x_1\} \subset \{X(\zeta) \le x_2\}$. But this means that $P[\{X(\zeta) \le x_1\}] \le P[\{X(\zeta) \le x_2\}]$ and $F(x_1) \le F(x_2)$.
Claim #3: $P[X > x] = 1 - F(x)$. (2-6)
Proof: $\{X(\zeta) \le x_1\}$ and $\{X(\zeta) > x_1\}$ are mutually exclusive. Also, $\{X(\zeta) \le x_1\} \cup \{X(\zeta) > x_1\} = S$.
Hence, $P[\{X \le x_1\}] + P[\{X > x_1\}] = P[S] = 1$.
Claim #4: Function F(x) may have jump discontinuities. It can be shown that a jump is the only type of discontinuity that is possible for distribution F(x) (and F(x) may have, at most, a countable number of jumps). F(x) must be right continuous; that is, we must have $F(x^{+}) = F(x)$. At a "jump", take F to be the larger value; see Figure 2-2.
Claim #5: $P[x_1 < X \le x_2] = F(x_2) - F(x_1)$. (2-7)
Proof: $\{X(\zeta) \le x_1\}$ and $\{x_1 < X(\zeta) \le x_2\}$ are mutually exclusive. Also, $\{X(\zeta) \le x_2\} = \{X(\zeta) \le x_1\} \cup \{x_1 < X(\zeta) \le x_2\}$. Hence, $P[X(\zeta) \le x_2] = P[X(\zeta) \le x_1] + P[x_1 < X(\zeta) \le x_2]$ and $P[x_1 < X(\zeta) \le x_2] = F(x_2) - F(x_1)$.
Claim #6: $P[X = x] = F(x) - F(x^{-})$. (2-8)
Proof: $P[x - \varepsilon < X \le x] = F(x) - F(x - \varepsilon)$. Now, take the limit as $\varepsilon \to 0$ to obtain the desired result.
Claim #7: $P[x_1 \le X \le x_2] = F(x_2) - F(x_1^{-})$. (2-9)
Proof: $\{x_1 \le X \le x_2\} = \{x_1 < X \le x_2\} \cup \{X = x_1\}$ so that
$$P[x_1 \le X \le x_2] = \big(F(x_2) - F(x_1)\big) + \big(F(x_1) - F(x_1^{-})\big) = F(x_2) - F(x_1^{-}).$$
[Figure 2-2: Distributions are right continuous.]
[Figure 2-3: Distribution function for a discrete random variable.]
Continuous Random Variables
Random variable X is of continuous type if Fx(x) is continuous. In this case, P[X = x] =
0; the probability is zero that X takes on a given value x.
Discrete Random Variables
Random variable X is of discrete type if $F_x(x)$ is piecewise constant. The distribution should look like a staircase. Denote by $x_i$ the points where $F_x(x)$ is discontinuous. Then $F_x(x_i) - F_x(x_i^{-}) = P[X = x_i] = p_i$. See Figure 2-3.
Mixed Random Variables
Random variable X is said to be of mixed type if Fx(x) is discontinuous but not a
staircase.
Density Function
The derivative
$$f_x(x) \equiv \frac{dF_x(x)}{dx} \tag{2-10}$$
is called the density function of the random variable X. Suppose $F_x$ has a jump discontinuity at a point $x_0$. Then $f(x)$ contains the term
$$\left[F_x(x_0) - F_x(x_0^{-})\right]\delta(x - x_0). \tag{2-11}$$

[Figure 2-4: "Jumps" in the distribution function cause a delta function in the density function. The distribution jumps by the value k at x = x₀.]
See Figure 2-4.
Suppose that X is of discrete type taking values $x_i$, $i \in I$. The density can be written as
$$f_x(x) = \sum_{i \in I} P[X = x_i]\,\delta(x - x_i),$$
where I is an index set. Figures 2-5 and 2-6 illustrate the distribution and density, respectively, of a discrete random variable.
Properties of fx
The monotonicity of $F_x$ implies that $f_x(x) \ge 0$ for all x. Furthermore, we have
$$f_x(x) = \frac{dF_x(x)}{dx} \quad \Longleftrightarrow \quad F_x(x) = \int_{-\infty}^{x} f_x(\xi)\,d\xi. \tag{2-12}$$
[Figure 2-5: Distribution function for a discrete random variable.]
[Figure 2-6: Density function for a discrete random variable.]

$$P[x_1 < X \le x_2] = F_x(x_2) - F_x(x_1) = \int_{x_1}^{x_2} f_x(\xi)\,d\xi. \tag{2-13}$$
If X is of continuous type, then $F_x(x) = F_x(x^{-})$, and
$$P[x_1 \le X \le x_2] = F_x(x_2) - F_x(x_1) = \int_{x_1}^{x_2} f_x(\xi)\,d\xi.$$
For continuous random variables, $P[x \le X \le x + \Delta x] \approx f_x(x)\,\Delta x$ for small $\Delta x$.
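This approximation is easy to check numerically. The sketch below (my illustration, not from the notes) uses the simple density f(x) = 2x on [0, 1], for which F(x) = x² is known in closed form.

```python
# Density f(x) = 2x on [0, 1] (so F(x) = x^2), chosen only for illustration.
def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def F(x):
    if x < 0.0:
        return 0.0
    return min(x * x, 1.0)

x, dx = 0.5, 0.01
exact = F(x + dx) - F(x)    # P[x < X <= x + dx] = F(x + dx) - F(x)
approx = f(x) * dx          # f(x) * dx
print(exact, approx)        # 0.0101 vs 0.01
```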
Normal/Gaussian Random Variables
Let ,     , and ,  > 0, be constants. Then
f x (x) 
2
1  x  
1
 x  
g
exp[  12 

 ]
   
2 
  
(2-14)
1.0 F(x)
f (x) 
1
2
exp[ 
1
2
x2 ]
0.5
x
exp[½ ]d
2 
F(x)  1
x
-3
-2
-1
0
1
2
2
3
x
Figure 2-7: Density and distribution functions for a Gaussian random variable
with  = 0 and  = 1.
Updates at http://www.ece.uah.edu/courses/ee385/
2-8
EE385 Class Notes
9/15/2015
John Stensby
is a Gaussian density function with parameters  and .
These parameters have special
meanings, as discussed below. The notation N(;) is used to indicate a Gaussian random
variable with parameters  and . Figure 2-7 illustrates Gaussian density and distribution
functions.
Random variable X is said to be Gaussian if its distribution function is given by
$$F(x) \equiv P[X \le x] = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} \exp\!\left[-\frac{(\xi-\eta)^{2}}{2\sigma^{2}}\right] d\xi \tag{2-15}$$
for given η, $-\infty < \eta < \infty$, and σ, σ > 0. Numerical values for F(x) can be determined with the aid of a table. To accomplish this, make the change of variable y = (ξ - η)/σ in (2-15) and obtain
$$F(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{(x-\eta)/\sigma} \exp\!\left[-\frac{y^{2}}{2}\right] dy \equiv G\!\left(\frac{x-\eta}{\sigma}\right). \tag{2-16}$$
Function G(x) is tabulated in many reference books, and it is built into many popular computer math packages (e.g., Matlab, Mathcad, etc.).
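If a table or math package is not at hand, G(x) can also be computed from the error function, since G(x) = ½[1 + erf(x/√2)]. The sketch below is an illustration only (not part of the original notes); it uses Python's standard-library math.erf, and the parameter values in the example are assumptions.

```python
import math

def G(x):
    """Standard Gaussian distribution function G(x), expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_cdf(x, eta=0.0, sigma=1.0):
    """F(x) = G((x - eta)/sigma) for an N(eta; sigma) random variable, as in (2-16)."""
    return G((x - eta) / sigma)

print(G(0.0))                                  # 0.5
print(gaussian_cdf(3.0, eta=1.0, sigma=2.0))   # P[X <= 3] for N(1; 2)
```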
Uniform
Random variable X is uniform between $x_1$ and $x_2$ if its density is constant, equal to $1/(x_2 - x_1)$, on the interval $[x_1, x_2]$ and zero elsewhere. Figure 2-8 illustrates the distribution and density of a uniform random variable.

[Figure 2-8: Density and distribution functions for a uniform random variable.]
Binomial
Random variable X has a binomial distribution of order n with parameter p if it takes the
integer values 0, 1, ... , n with probabilities
$$P[X = k] = \binom{n}{k} p^{k} q^{n-k}, \quad 0 \le k \le n, \quad 0 \le p \le 1, \quad p + q = 1. \tag{2-17}$$
Both n and p are known parameters where p + q = 1, and
$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}. \tag{2-18}$$
We say that binomial random variable X is B(n,p).
The Binomial density function is
$$f_x(x) = \sum_{k=0}^{n} \binom{n}{k} p^{k} q^{n-k}\,\delta(x - k), \tag{2-19}$$
and the Binomial distribution is
$$F_x(x) = \sum_{k=0}^{m_x} \binom{n}{k} p^{k} q^{n-k}, \quad m_x \le x < m_x + 1 \le n$$
$$\qquad\;\; = 1, \quad x \ge n. \tag{2-20}$$
Note that $m_x$ depends on x.
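As an illustrative sketch (not part of the original notes), the B(n,p) point masses (2-17) and the staircase distribution (2-20) can be evaluated directly; the helper names below are hypothetical.

```python
from math import comb

def binomial_pmf(k, n, p):
    """P[X = k] for a B(n, p) random variable, as in (2-17)."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def binomial_cdf(x, n, p):
    """F_x(x): sum the point masses at k = 0, 1, ..., m_x, where m_x <= x < m_x + 1."""
    if x < 0:
        return 0.0
    m_x = min(int(x), n)
    return sum(binomial_pmf(k, n, p) for k in range(m_x + 1))

print(binomial_pmf(3, 10, 0.5))    # P[X = 3] for B(10, 0.5)
print(binomial_cdf(3.7, 10, 0.5))  # P[X <= 3.7] = P[X <= 3]
```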
Poisson
Random variable X is Poisson with parameter a > 0 if it takes on integer values 0, 1, ...
with
$$P[X = k] = e^{-a}\,\frac{a^{k}}{k!}, \quad k = 0, 1, 2, 3, \dots \tag{2-21}$$
The density and distribution of a Poisson random variable are given by
$$f_x(x) = e^{-a} \sum_{k=0}^{\infty} \frac{a^{k}}{k!}\,\delta(x - k) \tag{2-22}$$
$$F_x(x) = e^{-a} \sum_{k=0}^{m_x} \frac{a^{k}}{k!}, \quad m_x \le x < m_x + 1, \ \ m_x = 0, 1, \dots \tag{2-23}$$
Rayleigh
The random variable X is Rayleigh distributed with real-valued parameter σ, σ > 0, if it is described by the density
$$f_x(x) = \begin{cases} \dfrac{x}{\sigma^{2}} \exp\!\left[-\dfrac{x^{2}}{2\sigma^{2}}\right], & x \ge 0 \\[6pt] 0, & x < 0. \end{cases} \tag{2-24}$$
See Figure 2-9 for a depiction of a Rayleigh density function.
The distribution function for a Rayleigh random variable is
$$F_x(x) = \begin{cases} \displaystyle\int_{0}^{x} \frac{u}{\sigma^{2}}\, e^{-u^{2}/2\sigma^{2}}\,du, & x \ge 0 \\[6pt] 0, & x < 0 \end{cases} \tag{2-25}$$

[Figure 2-9: Rayleigh density function.]
[Figure 2-10: Exponential density function.]
To evaluate (2-25), use the change of variable $y = u^{2}/2\sigma^{2}$, $dy = (u/\sigma^{2})\,du$, to obtain
$$F_x(x) = \int_{0}^{x^{2}/2\sigma^{2}} e^{-y}\,dy = \begin{cases} 1 - e^{-x^{2}/2\sigma^{2}}, & x \ge 0 \\[4pt] 0, & x < 0 \end{cases} \tag{2-26}$$
as the distribution function for a Rayleigh random variable.
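The closed form (2-26) can be sanity-checked by numerically integrating the density (2-24). The sketch below is an illustration only, with σ = 1 and a simple midpoint rule assumed.

```python
import math

def rayleigh_pdf(x, sigma=1.0):
    """Rayleigh density (2-24): (x / sigma^2) * exp(-x^2 / (2 sigma^2)) for x >= 0."""
    return (x / sigma**2) * math.exp(-x**2 / (2.0 * sigma**2)) if x >= 0 else 0.0

def rayleigh_cdf(x, sigma=1.0):
    """Closed-form distribution (2-26): 1 - exp(-x^2 / (2 sigma^2)) for x >= 0."""
    return 1.0 - math.exp(-x**2 / (2.0 * sigma**2)) if x >= 0 else 0.0

# Crude numerical check of (2-26): integrate the density from 0 to x.
x, n = 2.0, 10000
du = x / n
numeric = sum(rayleigh_pdf((i + 0.5) * du) * du for i in range(n))  # midpoint rule
print(numeric, rayleigh_cdf(x))   # the two values agree to several digits
```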
Exponential
The random variable X is exponentially distributed with real-valued parameter λ, λ > 0, if it is described by the density
$$f_x(x) = \begin{cases} \lambda \exp[-\lambda x], & x \ge 0 \\[4pt] 0, & x < 0. \end{cases} \tag{2-27}$$
See Figure 2-10 for a depiction of an exponential density function.
The distribution function for an exponential random variable is
$$F_x(x) = \int_{0}^{x} \lambda e^{-\lambda y}\,dy = \begin{cases} 1 - e^{-\lambda x}, & x \ge 0 \\[4pt] 0, & x < 0. \end{cases} \tag{2-28}$$
Conditional Distribution
Let M denote an event for which P[M] ≠ 0. Assuming the occurrence of event M, the conditional distribution $F(x \mid M)$ of random variable X is defined as
$$F(x \mid M) \equiv P[X \le x \mid M] = \frac{P[X \le x, M]}{P[M]}. \tag{2-29}$$
Note that $F(\infty \mid M) = 1$ and $F(-\infty \mid M) = 0$. Furthermore, the conditional distribution has all of the properties of an "ordinary" distribution function. For example,
$$P[x_1 < X \le x_2 \mid M] = F(x_2 \mid M) - F(x_1 \mid M) = \frac{P[x_1 < X \le x_2, M]}{P[M]}. \tag{2-30}$$
Conditional Density
The conditional density f(xM) is defined as
f (xM) 
d
P[x  X  x  xM]
.
F(xM)  limit
dx
x
x  0
(2-31)
The conditional density has all of the properties of an "ordinary" density function.
Example 2-3: Determine the conditional distribution $F(x \mid M)$ of random variable $X(f_i) = 10i$ of the fair-die experiment, where M = {f₂, f₄, f₆} is the event "even" has occurred. First, note that X must take on values in the set {10, 20, 30, 40, 50, 60}. Hence, if x ≥ 60, then [X ≤ x] is the
certain event and [X ≤ x, M] = M. Because of this,
$$F(x \mid M) = \frac{P[X \le x, M]}{P[M]} = \frac{P[M]}{P[M]} = 1, \quad x \ge 60.$$
If 40 ≤ x < 60, then [X ≤ x, M] = [f₂, f₄], and
$$F(x \mid M) = \frac{P[X \le x, M]}{P[M]} = \frac{P[f_2, f_4]}{P[f_2, f_4, f_6]} = \frac{2/6}{3/6} = \frac{2}{3}, \quad 40 \le x < 60.$$
If 20 ≤ x < 40, then [X ≤ x, M] = [f₂], and
$$F(x \mid M) = \frac{P[X \le x, M]}{P[M]} = \frac{P[f_2]}{P[f_2, f_4, f_6]} = \frac{1/6}{3/6} = \frac{1}{3}, \quad 20 \le x < 40.$$
Finally, if x < 20, then [X ≤ x, M] = [∅], and
$$F(x \mid M) = \frac{P[X \le x, M]}{P[M]} = \frac{P[\varnothing]}{P[f_2, f_4, f_6]} = \frac{0}{3/6} = 0, \quad x < 20.$$
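The staircase conditional distribution of Example 2-3 can also be obtained by brute-force enumeration. The sketch below is an illustration only; the helper name cond_cdf is hypothetical.

```python
# Example 2-3 evaluated by enumeration over the six equally likely faces.
values = {f: 10 * f for f in range(1, 7)}   # X(f_i) = 10 i, fair die
M = {2, 4, 6}                               # the event "even"

def cond_cdf(x):
    """F(x | M) = P[X <= x, M] / P[M], each face having probability 1/6."""
    joint = sum(1 for f in M if values[f] <= x) / 6.0
    return joint / (len(M) / 6.0)

for x in (15, 25, 45, 65):
    print(x, cond_cdf(x))   # 0, 1/3, 2/3, 1 as in Example 2-3
```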
Conditional Distribution When Event M is Defined in Terms of X
If M is an event that can be expressed in terms of the random variable X, then $F(x \mid M)$ can be determined from the "ordinary" distribution $F_x(x)$. Below, we give several examples.
As a first example, consider M = [X ≤ a], and find both $F(x \mid M) = F(x \mid X \le a) = P[X \le x \mid X \le a]$ and $f(x \mid M)$. Note that
$$F(x \mid X \le a) \equiv P[X \le x \mid X \le a] = \frac{P[X \le x, X \le a]}{P[X \le a]}.$$
Hence, if x ≥ a, we have [X ≤ x, X ≤ a] = [X ≤ a] and
$$F(x \mid X \le a) = \frac{P[X \le a]}{P[X \le a]} = 1, \quad x \ge a.$$
If x < a, then [X ≤ x, X ≤ a] = [X ≤ x] and
$$F(x \mid X \le a) = \frac{P[X \le x]}{P[X \le a]} = \frac{F_x(x)}{F_x(a)}, \quad x < a.$$
At x = a, $F(x \mid X \le a)$ would jump $1 - F_X(a^{-})/F_X(a)$ if $F_X(a^{-}) \ne F_X(a)$. The conditional density is
$$f(x \mid X \le a) = \frac{d}{dx}\,F(x \mid X \le a) = \begin{cases} \dfrac{f_x(x)}{F_x(a)}, & x < a \\[8pt] \left[1 - \dfrac{F_x(a^{-})}{F_x(a)}\right]\delta(x - a), & x = a \\[8pt] 0, & x > a. \end{cases}$$
As a second example, consider M = [b < X ≤ a] so that
$$F(x \mid b < X \le a) = \frac{P[X \le x,\ b < X \le a]}{P[b < X \le a]}.$$
Since
$$[X \le x,\ b < X \le a] = \begin{cases} [b < X \le a], & a \le x \\[4pt] [b < X \le x], & b < x < a \\[4pt] \varnothing, & x \le b, \end{cases}$$
we have
$$F(x \mid b < X \le a) = \begin{cases} 1, & a \le x \\[6pt] \dfrac{F_x(x) - F_x(b)}{F_x(a) - F_x(b)}, & b \le x < a \\[8pt] 0, & x \le b. \end{cases}$$
$F(x \mid b < X \le a)$ is continuous at x = b. At x = a, $F(x \mid b < X \le a)$ jumps $1 - \{F_X(a^{-}) - F_X(b)\}/\{F_X(a) - F_X(b)\}$, a value that is zero if $F_X(a^{-}) = F_X(a)$. The corresponding conditional density is
f (xb  X  a) 
d
F(xb  X  a)
dx
f x (x)


 F (a)  F (b) , b  x  a   F (a  )  F (b) 
x
x
x
 x
 (x  a) .
  1 

F
(a)
F
(b)
x
x



 
0,
otherwise 

Example 2-4: Find $f(x \mid |X - \eta| \le \varepsilon)$, where X is N(η;σ). First, note that
$$F_x(x) = P[X \le x] = G\!\left(\frac{x - \eta}{\sigma}\right)$$
$$f_x(x) = \frac{d}{dx}\, G\!\left(\frac{x - \eta}{\sigma}\right) = \frac{1}{\sigma}\, g\!\left(\frac{x - \eta}{\sigma}\right),$$
where G is the distribution, and g is the density, for the case η = 0 and σ = 1. Also, note that the inequality $|X - \eta| \le \varepsilon$ implies
$$\eta - \varepsilon \le X \le \eta + \varepsilon$$
so that
P[X   ]  Fx (  )  Fx (  )  G( )  G()  2G( )  1 .
By the previous example, we have
F(xX    ) 
    x
1,
 x 
G
  G()
 


,
2G( )  1

    x    
x     ,
0,
a conditional distribution that is a continuous function of x. Differentiate this to obtain
$$f(x \mid |X - \eta| \le \varepsilon) = \begin{cases} \dfrac{\frac{1}{\sigma}\, g\!\left(\frac{x-\eta}{\sigma}\right)}{2G(\varepsilon/\sigma) - 1} = \dfrac{\frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\left(\frac{x-\eta}{\sigma}\right)^{2}\right]}{2G(\varepsilon/\sigma) - 1}, & \eta - \varepsilon \le x \le \eta + \varepsilon \\[8pt] 0, & \text{otherwise,} \end{cases}$$
a conditional density that contains no delta functions.
Distributions/Densities Expressed In Terms of Conditional Distributions/Densities
Let X be any random variable and define B = [X ≤ x]. Let $A_1, A_2, \dots, A_n$ be a partition of sample space S. That is,
$$\bigcup_{i=1}^{n} A_i = S \quad \text{and} \quad A_i \cap A_j = \varnothing \ \text{ for } i \ne j. \tag{2-32}$$
From the discrete form of the Theorem of Total Probability discussed in Chapter 1, we have $P[B] = P[B \mid A_1]P[A_1] + P[B \mid A_2]P[A_2] + \dots + P[B \mid A_n]P[A_n]$. Now, with B = [X ≤ x], this becomes
$$P[X \le x] = P[X \le x \mid A_1]P[A_1] + P[X \le x \mid A_2]P[A_2] + \dots + P[X \le x \mid A_n]P[A_n]. \tag{2-33}$$
Hence,
$$F_x(x) = F(x \mid A_1)P[A_1] + F(x \mid A_2)P[A_2] + \dots + F(x \mid A_n)P[A_n]$$
$$f_x(x) = f(x \mid A_1)P[A_1] + f(x \mid A_2)P[A_2] + \dots + f(x \mid A_n)P[A_n]. \tag{2-34}$$
Total Probability – Continuous Form
The probability of an event can be conditioned on an event defined in terms of a random
variable. For example, let A be any event, and let X be any random variable. Then we can write
P[AX  x] 
P[A X  x] P[X  xA]P[A] F(xA)P[A]


.
P[X  x]
P[X  x]
Fx (x)
(2-35)
As a second example, we derive a formula for $P[A \mid x_1 < X \le x_2]$. Now, the conditional distribution $F(x \mid A)$ has the same properties as an "ordinary" distribution. That is, we can write $P[x_1 < X \le x_2 \mid A] = F(x_2 \mid A) - F(x_1 \mid A)$ so that
P[Ax1  X  x 2 ] 

P[x1  X  x 2
P[A]
P[x1  X  x 2 
(2-36)
F(x 2  F(x1
P[A] .
Fx (x 2   Fx (x1 
In general, the formulation
P[AX  x] 
P[A X  x]
P[X  x]
Updates at http://www.ece.uah.edu/courses/ee385/
(2-37)
may result in an indeterminate 0/0 form. Instead, we should write
P[AX  x] 

limit P[Ax  X  x  x] 
x  0 +
F(x  xA)  F(xA)
P[A]
x 0+ Fx (x  x)  Fx (x)
limit
(2-38)
[F(x  xA)  F(xA)] / x
P(A) ,
x 0+ [Fx (x  x)  Fx (x)] / x
limit
which yields
P[AX  x] 
f (xA)
P[A] .
f x (x)
(2-39)
Now, multiply both sides of this last result by $f_x$ and integrate to obtain
$$\int_{-\infty}^{\infty} P[A \mid X = x]\, f_x(x)\,dx = P[A] \int_{-\infty}^{\infty} f(x \mid A)\,dx. \tag{2-40}$$
But, the area under $f(x \mid A)$ is unity. Hence, we obtain
$$P[A] = \int_{-\infty}^{\infty} P[A \mid X = x]\, f_x(x)\,dx, \tag{2-41}$$
the continuous version of the Total Probability Theorem. In Chapter 1, we gave a "finite dimensional" version of this theorem.
Compare (2-41) with the result of Theorem 1-1; conceptually, they are similar. $P[A \mid X = x]$ is the probability of A given that X = x. It is a function of x. Equation (2-41) tells us to average this function over all possible values of X to find P[A].
For the Total Probability Theorem, the discrete and continuous cases can be related by
the following simple, intuitive argument. We start with the discrete version (see Theorem 1.1,
Chapter 1 of class notes). For small Δx > 0 and integer i, $-\infty < i < \infty$, define the event
$$A_i \equiv \big[\,i\,\Delta x < X \le (i+1)\,\Delta x\,\big] \equiv \big\{\zeta \in S : i\,\Delta x < X(\zeta) \le (i+1)\,\Delta x\big\}.$$
Note that the $A_i$, $-\infty < i < \infty$, form a partition of the sample space. Let A be an arbitrary event.
By Theorem 1-1 of Chapter 1, the discrete version of the Total Probability Theorem, we can write
$$P[A] = \sum_{i=-\infty}^{\infty} P[A \mid A_i]\,P[A_i] = \sum_{i=-\infty}^{\infty} P[A \mid i\Delta x < X \le (i+1)\Delta x]\;P[i\Delta x < X \le (i+1)\Delta x] \approx \sum_{i=-\infty}^{\infty} P[A \mid X = x_i]\, f_x(x_i)\,\Delta x,$$
where $x_i$ is any point in the infinitesimal interval $(i\Delta x, (i+1)\Delta x]$. In the limit as Δx approaches zero, the infinitesimal Δx becomes dx, and the sum becomes an integral so that
$$P[A] = \int_{-\infty}^{\infty} P[A \mid X = x]\, f_x(x)\,dx,$$
the continuous version of the Theorem of Total Probability.
Example 2-5 (Theorem of Total Probability)
Suppose we have a large set Sc of distinguishable coins. We do not know the probability
of heads for any of these coins. Suppose we randomly select a coin from Sc (all coins are equally
likely to be selected).
Its probability of heads will be estimated by a method that uses
experimental data obtained from tossing the selected coin. This method is developed in this and
the next example.
For each coin in $S_c$, we model its probability of heads as a continuous random variable $\tilde{p}$, $0 \le \tilde{p} \le 1$, described by a known density $f_{\tilde{p}}(p)$. Density $f_{\tilde{p}}(p)$ should peak at or near p = ½ and decrease as p deviates from ½. After all, most coins are well balanced and nearly "fair". Also, note the requirements
$$P[p_1 \le \tilde{p} \le p_2] = \int_{p_1}^{p_2} f_{\tilde{p}}(p)\,dp \tag{2-42}$$
$$P[0 \le \tilde{p} \le 1] = \int_{0}^{1} f_{\tilde{p}}(p)\,dp = 1. \tag{2-43}$$
Realistically, it is likely that we do not know $f_{\tilde{p}}(p)$ exactly. We may make a "good guess"; on the other hand, we could choose a uniform density for $f_{\tilde{p}}(p)$, a choice that implies considerable ignorance. As shown in Example 2-6, given a specific coin $\zeta \in S_c$, it may be possible to use experimental data and estimate its probability of heads without knowledge of $f_{\tilde{p}}(p)$.
Consider a combined, or joint, experiment of randomly selecting a coin from Sc (the first
experiment) and then tossing it once (the second experiment, with sample space S = [h, t], is
independent of the first experiment). For this combined experiment, the relevant product sample
space is
$$S_c \times S = \{(\zeta, h) : \zeta \in S_c\} \cup \{(\zeta, t) : \zeta \in S_c\}. \tag{2-44}$$
For the combined experiment, the event “we get a heads” or “heads occurs” is
$$H \equiv \{(\zeta, h) : \zeta \in S_c\}. \tag{2-45}$$
The conditional probability of H, given that the tossed coin has (Probability of Heads) = p (i.e., $\tilde{p} = p$), is simply
$$P[H \mid \tilde{p} = p] = P\big[\{(\zeta, h) : \zeta \in S_c\} \mid \tilde{p} = p\big] = p. \tag{2-46}$$
The Theorem of Total Probability can be used to express P[H] in terms of known $f_{\tilde{p}}(p)$. Simply use (2-46) and the Theorem of Total Probability (2-41) to write
$$P[H] = \int_{0}^{1} P[H \mid \tilde{p} = p]\, f_{\tilde{p}}(p)\,dp = \int_{0}^{1} p\, f_{\tilde{p}}(p)\,dp. \tag{2-47}$$
On the right-hand side of (2-47), the integral is called the expected value, or ensemble average, of the random variable $\tilde{p}$. Equation (2-47) is used in Example 2-6 below to compute an estimate for the probability of heads for any coin selected from $S_c$.
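As an illustration of (2-47) (not part of the original notes), P[H] can be evaluated numerically once a density $f_{\tilde{p}}(p)$ is assumed. The Gaussian-bump prior below is a hypothetical choice, and the integral is done with a crude midpoint sum.

```python
import math

def prior_shape(p, eps=0.3):
    """Unnormalized, hypothetical prior for p-tilde: a Gaussian bump centered at 1/2."""
    if not 0.0 <= p <= 1.0:
        return 0.0
    return math.exp(-(p - 0.5)**2 / (2.0 * eps**2))

# Normalize the prior on [0, 1] and evaluate P[H] = integral of p * f(p) dp.
n = 10000
dp = 1.0 / n
grid = [(i + 0.5) * dp for i in range(n)]
norm = sum(prior_shape(p) * dp for p in grid)
p_heads = sum(p * (prior_shape(p) / norm) * dp for p in grid)
print(p_heads)   # close to 0.5, since this prior is symmetric about 1/2
```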
Bayes Theorem - Continuous Form
From (2-39) we get
$$f(x \mid A) = \frac{P[A \mid X = x]}{P[A]}\, f_x(x). \tag{2-48}$$
Now, use (2-41) and (2-48) to write
$$f(x \mid A) = \frac{P[A \mid X = x]}{\displaystyle\int_{-\infty}^{\infty} P[A \mid X = \xi]\, f_x(\xi)\,d\xi}\; f_x(x), \tag{2-49}$$
a result known as the continuous form of Bayes Theorem.
Often, $f_X(x)$ is called the a-priori density for random variable X. And, $f(x \mid A)$ is called the a-posteriori density conditioned on the observed event A. In an application, we might "cook
up" a density $f_X(x)$ that (crudely) describes a random quantity (i.e., variable) X of interest. To improve our characterization of X, we note the occurrence of a related event A and compute $f(x \mid A)$ to better characterize X.
The value of x that maximizes $f(x \mid A)$ is called the maximum a-posteriori (MAP) estimate of X. MAP estimation is used in statistical signal processing and many other problems where one must estimate a quantity from observations of related random quantities. Given event A, the value $x_m$ is the MAP estimate for X if
$$f(x_m \mid A) \ge f(x \mid A), \quad x \ne x_m.$$
Example 2-6: (Bayes Theorem & MAP estimate of probability of heads for Example 2-5)
Find the MAP estimate of the probability of heads in the coin selection and tossing experiment that was described in Example 2-5. First, recall that the probability of heads is modeled as a random variable $\tilde{p}$ with density $f_{\tilde{p}}(p)$ (which we may have to guess). We call $f_{\tilde{p}}(p)$ the a-priori ("before the coin toss experiment") density of $\tilde{p}$. Suppose we toss the coin n times and get k heads. We want to use these experimental results with our a-priori density $f_{\tilde{p}}(p)$ to compute the conditional density
f (pk heads, in a specific order, in n tosses of a selected coin)  f (p .



(2-50)
Observed Event A
This density is called the a-posteriori density ("after the coin toss experiment") of random variable $\tilde{p}$ given the experimentally observed event
$$A \equiv [\text{k heads, in a specific order, in n tosses of a selected coin}]. \tag{2-51}$$
The a-posteriori density f(pA) may give us a good idea (better than the a-priori density
f p (p) ) of the probability of heads for the randomly selected coin that was tossed. Conceptually,
think of f(pA) as a density that results from using experimental data/observation A to “update”
f p (p) . In fact, given that A occurred, the a-posteriori probability that p is between p1 and p2 is
p2
p1 f (pA) dp .
(2-52)
Finally, the value of p that maximizes f(pA) is the maximum a-posteriori estimate (MAP
estimate) of p for the selected coin.
To find this a-posteriori density, recall that $\tilde{p}$ is defined on the sample space $S_c$. The experiment of tossing the randomly selected coin n times is defined on the sample space (i.e., product space) $S_c \times S^{n}$, where S = [h, t]. The elements of $S_c \times S^{n}$ have the form
$$\big(\zeta,\ \underbrace{t\,h \cdots t\,h}_{n \text{ outcomes}}\big), \quad \text{where } \zeta \in S_c \text{ and } \underbrace{t\,h \cdots t\,h}_{n \text{ outcomes}} \in S^{n}.$$
Now, given that p = p, the conditional probability of event A is
P[k heads in specific order in n tosses of specific coin  p = p] = pk (1  p)n  k .



(2-53)
Observed Event A
Substitute this into the continuous form of Bayes rule (2-49) to obtain
$$f(p \mid A) \equiv f(p \mid \underbrace{\text{k heads, in a specific order, in n tosses of a selected coin}}_{\text{Observed Event A}}) = \frac{p^{k}(1-p)^{n-k}\, f_{\tilde{p}}(p)}{\displaystyle\int_{0}^{1} \beta^{k}(1-\beta)^{n-k}\, f_{\tilde{p}}(\beta)\,d\beta}\,, \tag{2-54}$$
a result known as the a-posteriori density of $\tilde{p}$ given A. In (2-54), the quantity β is a dummy variable of integration.
Suppose that the a-priori density $f_{\tilde{p}}(p)$ is smooth and slowly changing around the value p = k/n (indicating a lot of uncertainty in the value of $\tilde{p}$). Then, for large values of n, the numerator $p^{k}(1-p)^{n-k} f_{\tilde{p}}(p)$ and the a-posteriori density $f(p \mid A)$ have a sharp peak at p = k/n, indicating little uncertainty in the value of $\tilde{p}$. When $f(p \mid A)$ is peaked at p = k/n, the MAP estimate of the probability of heads (for the selected coin) is the value p = k/n.
Example 2-7: Continuing Examples 2-5 and 2-6, for ε > 0, we use the a-priori density
$$f_{\tilde{p}}(p) = \frac{\dfrac{1}{\varepsilon\sqrt{2\pi}} \exp\!\left[-\dfrac{(p - 1/2)^{2}}{2\varepsilon^{2}}\right]}{1 - 2G\!\left(-\dfrac{1}{2\varepsilon}\right)}, \qquad 0 \le p \le 1, \tag{2-55}$$
where G(x) is a zero-mean, unit-variance Gaussian distribution function (verify that there is unit area under $f_{\tilde{p}}(p)$). Also, use numerical integration to evaluate the denominator of (2-54). For ε = .3, n = 10 and k = 3, the a-posteriori density $f(p \mid A)$ was computed and the results are plotted on Figure 2-11. For ε = .3, n = 50 and k = 15, the calculation was performed a second time, and the results appear on Figure 2-11. For both, the MAP estimate of $\tilde{p}$ is near .3, since this is where the plots of $f(p \mid A)$ peak.
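The computation described in Example 2-7 can be reproduced with a short numerical sketch. The code below is an illustration only (the grid size and helper names are assumptions); it evaluates the a-posteriori density (2-54) with the prior (2-55), ε = 0.3, n = 10, k = 3, and locates the MAP estimate.

```python
import math

def G(x):
    """Zero-mean, unit-variance Gaussian distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prior(p, eps=0.3):
    """A-priori density (2-55): a Gaussian bump at p = 1/2, truncated to [0, 1]."""
    num = math.exp(-(p - 0.5) ** 2 / (2.0 * eps ** 2)) / (eps * math.sqrt(2.0 * math.pi))
    return num / (1.0 - 2.0 * G(-1.0 / (2.0 * eps)))

def posterior_grid(n, k, eps=0.3, steps=2000):
    """Return (grid, f(p|A)) with the denominator of (2-54) done by a midpoint sum."""
    db = 1.0 / steps
    grid = [(i + 0.5) * db for i in range(steps)]
    unnorm = [p ** k * (1.0 - p) ** (n - k) * prior(p, eps) for p in grid]
    denom = sum(u * db for u in unnorm)
    return grid, [u / denom for u in unnorm]

# MAP estimate for n = 10, k = 3 (compare with Figure 2-11).
grid, post = posterior_grid(n=10, k=3)
p_map = grid[post.index(max(post))]
print(p_map)   # close to 0.3, as stated in Example 2-7
```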
Expectation
Let X denote a random variable with density fx(x). The expected value of X is defined as
$$E[X] \equiv \int_{-\infty}^{\infty} x\, f_x(x)\,dx. \tag{2-56}$$
Also, E[X] is known as the mean, or average value, of X.
If $f_x$ is symmetrical about some value η, then η is the expected value of X.
[Figure 2-11: Plot of the a-priori density $f_{\tilde{p}}(p)$ and the a-posteriori densities $f(p \mid A)$ of Example 2-7, for n = 10, k = 3 and for n = 50, k = 15.]
For example, let X be N(η;σ) so that
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\frac{(x-\eta)^{2}}{\sigma^{2}}\right]. \tag{2-57}$$
Now, (2-57) is symmetrical about η, so E[X] = η.
Suppose that random variable X is discrete and takes on the value $x_i$ with $P[X = x_i] = p_i$. Then the expected value of X, as defined by (2-56), reduces to
$$E[X] = \sum_{i} x_i\, p_i. \tag{2-58}$$
Example 2-8: Consider a B(n,p) random variable X (that is, X is Binomial with parameters n
and p). Using (2-58), we can write
$$E[X] = \sum_{k=0}^{n} k\, P[X = k] = \sum_{k=0}^{n} k \binom{n}{k} p^{k} q^{n-k}. \tag{2-59}$$
To evaluate this result, first consider the well-known, and extremely useful, Binomial expansion
$$\sum_{k=0}^{n} \binom{n}{k} x^{k} = (1+x)^{n}. \tag{2-60}$$
With respect to x, differentiate (2-60); then, multiply the derivative by x to obtain
$$\sum_{k=0}^{n} k \binom{n}{k} x^{k} = x\,n(1+x)^{n-1}. \tag{2-61}$$
Into (2-61), substitute x = p/q, and then multiply both sides by $q^{n}$. This results in
$$\sum_{k=0}^{n} k \binom{n}{k} p^{k} q^{n-k} = npq^{n-1}\left(1 + \frac{p}{q}\right)^{n-1} = np\,(p+q)^{n-1} = np. \tag{2-62}$$
From this and (2-59), we conclude that a B(n,p) random variable has
$$E[X] = np. \tag{2-63}$$
We now provide a second, completely different, evaluation of E[X] for a B(n,p) random
variable. Define n new random variables
$$X_i = \begin{cases} 1, & \text{if the } i\text{th trial is a "success"}, \quad 1 \le i \le n \\[4pt] 0, & \text{otherwise.} \end{cases} \tag{2-64}$$
Note that
$$E[X_i] = 1 \cdot p + 0 \cdot (1-p) = p, \quad 1 \le i \le n. \tag{2-65}$$
B(n,p) random variable X is the number of successes out of n independent trials; it can be written as
$$X = X_1 + X_2 + \dots + X_n. \tag{2-66}$$
With the use of (2-65), the expected value of X can be evaluated as
$$E[X] = E[X_1 + X_2 + \dots + X_n] = E[X_1] + E[X_2] + \dots + E[X_n] = np, \tag{2-67}$$
a result equivalent to (2-63).
Consider random variable X, with mean $\eta_x = E[X]$, and constant k. Define the new random variable Y ≡ X + k. The mean value of Y is
$$\eta_y = E[Y] = E[X + k] = E[X] + k = \eta_x + k. \tag{2-68}$$
A random variable, when translated by a constant, has a mean that is translated by the same constant. If instead Y ≡ kX, the mean of Y is $E[Y] = E[kX] = k\eta_x$.
In what follows, we extend (2-56) to arbitrary functions of random variable X. Let g(x)
be any function of x, and let X be any random variable with density fx(x). In Chapter 4, we will
discuss transformations of the form Y = g(X); this defines the new random variable Y in terms of
the old random variable X. We will argue that the expected value of Y can be computed as

$$E[Y] = E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_x(x)\,dx. \tag{2-69}$$
This brief “heads-up” note (on what is to come in CH 4) is used next to define certain statistical
averages of functions of X.
Variance and Standard Deviation
The variance of random variable X is
$$\sigma^{2} \equiv \text{Var}[X] \equiv E[(X - \eta)^{2}] = \int_{-\infty}^{\infty} (x - \eta)^{2} f_x(x)\,dx. \tag{2-70}$$
Almost always, variance is denoted by the symbol σ². The square root of the variance is called the standard deviation of the random variable, and it is denoted as σ. Variance is a measure of uncertainty (or dispersion about the mean): the smaller (alternatively, larger) σ² is, the more (alternatively, less) likely it is for the random variable to take on values near its mean. Finally, note that (2-70) is an application of the basic result (2-69) with g(x) = (x - η)².
Example 2-9: Let X be N(η;σ); then
$$\text{VAR}[X] = E[(X - \eta)^{2}] = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (x - \eta)^{2} \exp\!\left[-\frac{(x-\eta)^{2}}{2\sigma^{2}}\right] dx. \tag{2-71}$$
Let y = x dx = dy so that
Var[X] 
1  2 2  12 y2
dy  2 ,
 y e


2
(2-72)
a result obtained by looking up the integral in a table of integrals.
Consider random variable X with mean $\eta_x$ and variance $\sigma_x^{2} = E[\{X - \eta_x\}^{2}]$. Let k denote an arbitrary constant. Define the new random variable Y ≡ kX. The variance of Y is
$$\sigma_y^{2} = E[\{Y - \eta_y\}^{2}] = E[k^{2}\{X - \eta_x\}^{2}] = k^{2}\sigma_x^{2}. \tag{2-73}$$
The variance of kX is simply k² times the variance of X. On the other hand, adding constant k to a random variable does not change the variance; that is, VAR[X + k] = VAR[X].
Consider random variable X with mean $\eta_x$ and variance $\sigma_x^{2}$. Often, we can simplify a problem by "centering and normalizing" X to define the new random variable
$$Y \equiv \frac{X - \eta_x}{\sigma_x}. \tag{2-74}$$
Note that E[Y] = 0 and VAR[Y] = 1.
Moments
The nth moment of random variable X is defined as
$$m_n \equiv E[X^{n}] = \int_{-\infty}^{\infty} x^{n}\, f_x(x)\,dx. \tag{2-75}$$
The nth moment about the mean is defined as
$$\mu_n \equiv E[(X - \eta)^{n}] = \int_{-\infty}^{\infty} (x - \eta)^{n}\, f_x(x)\,dx. \tag{2-76}$$
Note that (2-75) and (2-76) are basic applications of (2-69).
Variance in Terms of Second Moment and Mean
Note that the variance can be expressed as
$$\sigma^{2} = \text{Var}[X] = E[(X - \eta)^{2}] = E[X^{2} - 2X\eta + \eta^{2}] = E[X^{2}] - E[2X\eta] + E[\eta^{2}], \tag{2-77}$$
where the linearity of the operator E[·] has been used. Now, constants “come out front” of
expectations, and the expectation of a constant is the constant. Hence, Equation (2-77) leads to
2  E[X 2 ]  E[2X]  E[2 ]  m2  2E[X]  2  m 2  2 ,
(2-78)
the second moment minus the square of the mean. In what follows, this formula will be used
extensively.
Example 2-10: Let X be N(0,) and find E[Xn]. First, consider the case of n an odd integer
E[X n ] 
 1 x2 
 n
1
x
exp
  2 2  dx  0 ,
2 
 

(2-79)
since an integral of an odd function over symmetrical limits is zero. Now, consider the case n an
even integer. Start with the known tabulated integral

$$\int_{-\infty}^{\infty} e^{-\alpha x^{2}}\,dx = \sqrt{\pi}\,\alpha^{-1/2}, \quad \alpha > 0. \tag{2-80}$$
Repeated differentiation with respect to α yields
first d/dα: $\displaystyle\int_{-\infty}^{\infty} x^{2}\, e^{-\alpha x^{2}}\,dx = \tfrac{1}{2}\sqrt{\pi}\,\alpha^{-3/2}$
second d/dα: $\displaystyle\int_{-\infty}^{\infty} x^{4}\, e^{-\alpha x^{2}}\,dx = \tfrac{1}{2}\cdot\tfrac{3}{2}\sqrt{\pi}\,\alpha^{-5/2}$
kth d/dα: $\displaystyle\int_{-\infty}^{\infty} x^{2k}\, e^{-\alpha x^{2}}\,dx = \tfrac{1}{2}\cdot\tfrac{3}{2}\cdot\tfrac{5}{2}\cdots\tfrac{2k-1}{2}\sqrt{\pi}\,\alpha^{-(2k+1)/2}.$
Let n = 2k (remember, this is the case n even) and α = 1/(2σ²) to obtain

$$\int_{-\infty}^{\infty} x^{n}\, e^{-x^{2}/2\sigma^{2}}\,dx = \frac{1\cdot 3\cdot 5\cdots(n-1)}{2^{n/2}}\,\sqrt{\pi}\,\big(2\sigma^{2}\big)^{(n+1)/2} = 1\cdot 3\cdot 5\cdots(n-1)\,\sqrt{2\pi}\,\sigma^{n+1}. \tag{2-81}$$
From this, we conclude that the nth-moment of a zero-mean, Gaussian random variable is
$$m_n = E[X^{n}] = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x^{n}\, e^{-x^{2}/2\sigma^{2}}\,dx = \begin{cases} 1\cdot 3\cdot 5\cdots(n-1)\,\sigma^{n}, & n = 2k \ (n \text{ even}) \\[4pt] 0, & n = 2k-1 \ (n \text{ odd}). \end{cases} \tag{2-82}$$
Example 2-11 (Rayleigh Distribution): A random variable X is Rayleigh distributed with parameter σ if its density function has the form
$$f_X(x) = \begin{cases} \dfrac{x}{\sigma^{2}} \exp\!\left[-\dfrac{x^{2}}{2\sigma^{2}}\right], & x \ge 0 \\[6pt] 0, & x < 0. \end{cases} \tag{2-83}$$
This random variable has many applications in communication theory; for example, it describes
the envelope of a narrowband Gaussian noise process (as described in Chapter 9 of these notes).
The nth moment of a Rayleigh random variable can be computed by writing
$$E[X^{n}] = \frac{1}{\sigma^{2}} \int_{0}^{\infty} x^{n+1}\, e^{-x^{2}/2\sigma^{2}}\,dx. \tag{2-84}$$
We consider two cases, n even and n odd. For the case n odd, we have n+1 even, and (2-84)
becomes
$$E[X^{n}] = \frac{1}{\sigma^{2}} \int_{0}^{\infty} x^{n+1}\, e^{-x^{2}/2\sigma^{2}}\,dx = \frac{\sqrt{2\pi}}{2\sigma}\left[\frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x^{n+1}\, e^{-x^{2}/2\sigma^{2}}\,dx\right]. \tag{2-85}$$
On the right-hand side of (2-85), the bracket contains the (n+1)th moment of a N(0;σ) random variable (compare (2-82) and the right-hand side of (2-85)). Hence, we can write
$$E[X^{n}] = \frac{\sqrt{2\pi}}{2\sigma}\,\big(1\cdot 3\cdot 5\cdots n\ \sigma^{n+1}\big) = 1\cdot 3\cdot 5\cdots n\ \sigma^{n}\sqrt{\pi/2}, \quad n = 2k+1 \ (n \text{ odd}). \tag{2-86}$$
Now, consider the case n even, n+1 odd. For this case, (2-84) becomes
$$E[X^{n}] = \frac{1}{\sigma^{2}} \int_{0}^{\infty} x^{n+1}\, e^{-x^{2}/2\sigma^{2}}\,dx = \int_{0}^{\infty} x^{n}\, e^{-x^{2}/2\sigma^{2}}\, \frac{x}{\sigma^{2}}\,dx. \tag{2-87}$$
Substitute y = x²/2σ², so that dy = (x/σ²)dx, in (2-87) and obtain
$$E[X^{n}] = \int_{0}^{\infty} \left(\sqrt{2\sigma^{2} y}\right)^{n} e^{-y}\,dy = 2^{n/2}\sigma^{n} \int_{0}^{\infty} y^{n/2}\, e^{-y}\,dy = 2^{n/2}\sigma^{n}\left(\frac{n}{2}\right)!, \tag{2-88}$$
(note that n/2 is an integer here) where we have used the Gamma function

$$\Gamma(k+1) = \int_{0}^{\infty} y^{k}\, e^{-y}\,dy = k!, \tag{2-89}$$
k ≥ 0 an integer. Hence, for a Rayleigh random variable X, we have determined that
$$E[X^{n}] = \begin{cases} 1\cdot 3\cdot 5\cdots n\ \sigma^{n}\sqrt{\pi/2}, & n \text{ odd} \\[6pt] 2^{n/2}\,\sigma^{n}\left(\dfrac{n}{2}\right)!, & n \text{ even.} \end{cases} \tag{2-90}$$
In particular, Equation (2-90) can be used to obtain the mean and variance


$$E[X] = \sigma\sqrt{\frac{\pi}{2}} \tag{2-91}$$
$$\text{Var}[X] = E[X^{2}] - \big(E[X]\big)^{2} = 2\sigma^{2} - \sigma^{2}\frac{\pi}{2} = \left(2 - \frac{\pi}{2}\right)\sigma^{2},$$
given that X is a Rayleigh random variable.
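As an illustrative check (not part of the original notes), the mean and variance in (2-91) can be compared against sample statistics; Rayleigh samples are generated below by inverting the distribution (2-26), with σ = 2 and the sample size assumed.

```python
import math, random

sigma, N = 2.0, 200000
random.seed(1)

# Invert (2-26): U = 1 - exp(-x^2/(2 sigma^2))  =>  x = sigma * sqrt(-2 ln(1 - U)).
samples = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random())) for _ in range(N)]

mean = sum(samples) / N
var = sum((s - mean) ** 2 for s in samples) / N
print(mean, sigma * math.sqrt(math.pi / 2.0))    # sample mean vs sigma*sqrt(pi/2)
print(var, (2.0 - math.pi / 2.0) * sigma ** 2)   # sample variance vs (2 - pi/2) sigma^2
```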
Example 2-12: Let X be Poisson with parameter a so that $P[X = k] = e^{-a}\,a^{k}/k!$, k ≥ 0, and
$$f(x) = \sum_{k=0}^{\infty} e^{-a}\,\frac{a^{k}}{k!}\,\delta(x - k). \tag{2-92}$$
Show that E[X] = a and VAR[X] = a. Recall that
$$e^{a} = \sum_{k=0}^{\infty} \frac{a^{k}}{k!}. \tag{2-93}$$
With respect to a, differentiate (2-93) to obtain
$$e^{a} = \sum_{k=0}^{\infty} k\,\frac{a^{k-1}}{k!} = \frac{1}{a}\sum_{k=1}^{\infty} k\,\frac{a^{k}}{k!}. \tag{2-94}$$
Multiply both sides of this result by $a\,e^{-a}$ to obtain
$$a = \sum_{k=1}^{\infty} k\left(e^{-a}\,\frac{a^{k}}{k!}\right) = E[X], \tag{2-95}$$
as claimed. With respect to a, differentiate (2-94) (obtain the second derivative of (2-93)) and obtain
$$e^{a} = \sum_{k=1}^{\infty} k(k-1)\,\frac{a^{k-2}}{k!} = \frac{1}{a^{2}}\sum_{k=1}^{\infty} k^{2}\,\frac{a^{k}}{k!} - \frac{1}{a^{2}}\sum_{k=1}^{\infty} k\,\frac{a^{k}}{k!}.$$
Multiply both sides of this result by $a^{2} e^{-a}$ to obtain
$$a^{2} = \sum_{k=1}^{\infty} k^{2}\left(e^{-a}\,\frac{a^{k}}{k!}\right) - \sum_{k=1}^{\infty} k\left(e^{-a}\,\frac{a^{k}}{k!}\right) = \sum_{k=1}^{\infty} k^{2}\,P[X = k] - \sum_{k=1}^{\infty} k\,P[X = k]. \tag{2-96}$$
Note that (2-96) is simply $a^{2} = E[X^{2}] - E[X]$.
Finally, a Poisson random variable has a variance given by
$$\text{Var}[X] = E[X^{2}] - \big(E[X]\big)^{2} = E[X^{2}] - a^{2} = E[X] = a, \tag{2-97}$$
as claimed.
Conditional Mean
Let M denote an event. The conditional density $f(x \mid M)$ can be used to define the conditional mean
$$E[X \mid M] = \int_{-\infty}^{\infty} x\, f(x \mid M)\,dx. \tag{2-98}$$
The conditional mean has many applications, including estimation theory, detection theory, etc.
Example 2-13: Let X be Gaussian with zero mean and variance σ² (i.e., X is N(0;σ)). Let M = [X > 0]; find $E[X \mid M] = E[X \mid X > 0]$. First, we must find the conditional density $f(x \mid X > 0)$; from previous work in this chapter, we can write
$$F(x \mid X > 0) = \frac{P[X \le x, X > 0]}{P[X > 0]} = \frac{P[X \le x, X > 0]}{1 - P[X \le 0]} = \begin{cases} \dfrac{F_X(x) - F_X(0)}{1 - F_x(0)}, & x \ge 0 \\[6pt] 0, & x < 0, \end{cases}$$
so that (using $F_x(0) = 1/2$ for zero-mean Gaussian X)
$$f(x \mid X > 0) = \begin{cases} 2 f_x(x), & x \ge 0 \\[4pt] 0, & x < 0. \end{cases}$$
From (2-98) we can write
$$E[X \mid X > 0] = \frac{2}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} x \exp\!\left[-x^{2}/2\sigma^{2}\right] dx.$$
Now, set y = x²/2σ², dy = x dx/σ², to obtain
$$E[X \mid X > 0] = \frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty} e^{-y}\,dy = \sigma\sqrt{\frac{2}{\pi}}.$$
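The result E[X | X > 0] = σ√(2/π) is easy to verify by simulation. The sketch below is an illustration only, with σ = 1.5 and the sample size assumed.

```python
import math, random

sigma, N = 1.5, 200000
random.seed(2)

# Monte Carlo estimate of E[X | X > 0] for X ~ N(0; sigma): average only the positive samples.
samples = [random.gauss(0.0, sigma) for _ in range(N)]
positive = [x for x in samples if x > 0.0]

print(sum(positive) / len(positive))      # sample conditional mean
print(sigma * math.sqrt(2.0 / math.pi))   # sigma * sqrt(2/pi), as derived above
```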
Tchebycheff Inequality
A measure of the concentration of a random variable near its mean is its variance. Consider a random variable X with mean η, variance σ², and density $f_X(x)$. The larger σ², the more "spread out" the density function, and the more probable it is to find values of X "far" from the mean. Let ε denote an arbitrary small positive number. The Tchebycheff inequality says that the probability that X is outside (η - ε, η + ε) is negligible if σ is sufficiently small.
Theorem (Tchebycheff’s Inequality)
Consider random variable X with mean η and variance σ². For any ε > 0, we have
$$P\big[\,|X - \eta| \ge \varepsilon\,\big] \le \frac{\sigma^{2}}{\varepsilon^{2}}. \tag{2-99}$$
Proof: Note that
$$\sigma^{2} = \int_{-\infty}^{\infty} (x - \eta)^{2} f_x(x)\,dx \ \ge\ \int_{\{x : |x - \eta| \ge \varepsilon\}} (x - \eta)^{2} f_x(x)\,dx \ \ge\ \varepsilon^{2}\!\int_{\{x : |x - \eta| \ge \varepsilon\}} f_x(x)\,dx = \varepsilon^{2}\, P\big[\,|X - \eta| \ge \varepsilon\,\big].$$
This leads to the Tchebycheff inequality
$$P\big[\,|X - \eta| \ge \varepsilon\,\big] \le \frac{\sigma^{2}}{\varepsilon^{2}}. \tag{2-100}$$
The significance of Tchebycheff's inequality is that it holds for any random variable, and
it can be used without explicit knowledge of f(x). However, the bound is very "conservative" (or
"loose"), so it may not offer much information in some applications. For example, consider
Gaussian X. Note that
$$P\big[\,|X - \eta| \ge 3\sigma\,\big] = 1 - P\big[\,|X - \eta| < 3\sigma\,\big] = 1 - P\!\left[-3 \le \frac{X - \eta}{\sigma} \le 3\right] = 1 - \big[G(3) - G(-3)\big] = 2 - 2G(3) \approx .0027, \tag{2-101}$$
where G(3) is obtained from a table containing values of the Gaussian integral. However, the
Tchebycheff inequality gives the rather "loose" upper bound of
$$P\big[\,|X - \eta| \ge 3\sigma\,\big] \le 1/9. \tag{2-102}$$
Certainly, inequality (2-102) is correct; however, it is a very crude upper bound as can be seen
from inspection of (2-101).
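The looseness of the bound is apparent if (2-99) is compared with the exact Gaussian tail probability for several values of ε = kσ. The short sketch below is an illustration only (not part of the original notes).

```python
import math

def G(x):
    """Standard Gaussian distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P[|X - eta| >= k*sigma] for Gaussian X, compared with the Tchebycheff bound 1/k^2.
for k in (1, 2, 3, 4):
    exact = 2.0 - 2.0 * G(k)   # as in (2-101) with epsilon = k*sigma
    bound = 1.0 / k**2         # Tchebycheff bound (2-99)
    print(k, exact, bound)
```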
Generalizations of Tchebycheff's Inequality
For a given random variable X, suppose that $f_X(x) = 0$ for x < 0. Then, for any α > 0, we have
$$P[X \ge \alpha] \le \frac{\eta}{\alpha}. \tag{2-103}$$
To show (2-103), note that
$$\eta = E[X] = \int_{0}^{\infty} x\, f_x(x)\,dx \ \ge\ \int_{\alpha}^{\infty} x\, f_x(x)\,dx \ \ge\ \alpha \int_{\alpha}^{\infty} f_x(x)\,dx = \alpha\, P[X \ge \alpha], \tag{2-104}$$
so that $P[X \ge \alpha] \le \eta/\alpha$ as claimed.
Corollary: Let X be an arbitrary random variable and α and n an arbitrary real number and positive integer, respectively. The random variable $|X - \alpha|^{n}$ takes on only nonnegative values. Hence
$$P\big[\,|X - \alpha|^{n} \ge \varepsilon^{n}\,\big] \le \frac{E\big[\,|X - \alpha|^{n}\,\big]}{\varepsilon^{n}}, \tag{2-105}$$
which implies
$$P\big[\,|X - \alpha| \ge \varepsilon\,\big] \le \frac{E\big[\,|X - \alpha|^{n}\,\big]}{\varepsilon^{n}}. \tag{2-106}$$
The Tchebycheff inequality is a special case with α = η and n = 2.
Application: System Reliability
Often, systems fail in a random manner. For a particular system, we denote $t_f$ as the time interval from the moment a system is put into operation until it fails; $t_f$ is the time-to-failure random variable. The distribution $F_a(t) = P[t_f \le t]$ is the probability the system fails at, or prior to, time t. Implicit here is the assumption that the system is placed into service at t = 0. Also, we require that $F_a(t) = 0$ for t ≤ 0.
The quantity
$$R(t) \equiv 1 - F_a(t) = P[t_f > t] \tag{2-107}$$
is the system reliability. R(t) is the probability the system is functioning at time t > 0.
We are interested in simple methods to quantify system reliability. One such measure of
system reliability is the mean time before failure

$$\text{MTBF} \equiv E[t_f] = \int_{0}^{\infty} t\, f_a(t)\,dt, \tag{2-108}$$
where $f_a = dF_a/dt$ is the density function that describes random variable $t_f$.
Given that a system is functioning at time t′, t′ ≥ 0, we are interested in the probability that the system fails at, or prior to, time t, where t ≥ t′ ≥ 0. We express this conditional distribution function as
$$F(t \mid t_f > t') = \frac{P[t_f \le t,\ t_f > t']}{P[t_f > t']} = \frac{P[t' < t_f \le t]}{P[t_f > t']} = \frac{F_a(t) - F_a(t')}{1 - F_a(t')}, \quad t \ge t'. \tag{2-109}$$
The conditional density can be obtained by differentiating (2-109) to obtain
f (tt f  t) 
9/15/2015
d
f (t)
F(tt f  t)  a
,
dt
1  Fa (t)
t  t .
John Stensby
(2-110)
F(ttf > t) and f(ttf > t) describe tf conditioned on the event tf > t. The quantity f(ttf > t)dt
is, to first-order in dt, the probability that the system fails between t and t + dt given that it was
working at t.
Example 2-14
Suppose that the time-to-failure random variable $t_f$ is exponentially distributed. That is, suppose that
$$F_a(t) = \begin{cases} 1 - e^{-\lambda t}, & t \ge 0 \\[4pt] 0, & t < 0 \end{cases} \qquad\quad f_a(t) = \begin{cases} \lambda e^{-\lambda t}, & t \ge 0 \\[4pt] 0, & t < 0, \end{cases} \tag{2-111}$$
for some constant λ > 0. From (2-110), we see that
$$f(t \mid t_f > t') = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t'}} = \lambda e^{-\lambda(t - t')} = f_a(t - t'), \quad t > t'. \tag{2-112}$$
That is, if the system is working at time t′, then the probability that it fails between t′ and t depends only on the positive difference t - t′, not on absolute t. The system does not "wear out" (become more likely to fail) as time progresses!
With (2-110), we define $f(t \mid t_f > t')$ for t > t′. However, the function
$$\beta(t) \equiv \frac{f_a(t)}{1 - F_a(t)}, \tag{2-113}$$
known as the conditional failure rate (also known as the hazard rate), is very useful. To first order, β(t)dt (when this quantity exists) is the probability that a functioning-at-t system will fail between t and t + dt.
Example 2-15 (Continuation of Example 2-14)
Assume the system has $f_a$ and $F_a$ as defined in Example 2-14. Substitute (2-111) into (2-113) to obtain
$$\beta(t) = \frac{\lambda e^{-\lambda t}}{1 - \{1 - e^{-\lambda t}\}} = \lambda. \tag{2-114}$$
That is, the conditional failure rate is the constant λ. As stated in Example 2-14, the system does not "wear out" as time progresses!
If the conditional failure rate β(t) is a constant λ, we say that the system is a good-as-new system. That is, it does not "wear out" (become more likely to fail) over time.
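The constant-hazard/memoryless behavior of Examples 2-14 and 2-15 can be illustrated numerically. The sketch below is an illustration only, with λ = 0.5 and the test points assumed.

```python
import math

lam = 0.5   # assumed failure rate lambda, per unit time

def F_a(t):
    """Exponential time-to-failure distribution (2-111)."""
    return 1.0 - math.exp(-lam * t) if t >= 0 else 0.0

def f_a(t):
    """Exponential time-to-failure density (2-111)."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def hazard(t):
    """Conditional failure rate (2-113): f_a(t) / (1 - F_a(t))."""
    return f_a(t) / (1.0 - F_a(t))

# The hazard rate is the constant lambda, and the conditional density depends
# only on t - t', as in (2-112): f(t | t_f > t') = f_a(t - t').
print([round(hazard(t), 6) for t in (0.0, 1.0, 5.0, 10.0)])   # all equal lam
t_prime, t = 3.0, 4.2
cond_density = f_a(t) / (1.0 - F_a(t_prime))
print(cond_density, f_a(t - t_prime))                          # the two agree
```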
Examples 2-14 and 2-15 show that if a system's time-to-failure random variable $t_f$ is exponentially distributed, then
$$f(t \mid t_f > t') = f_a(t - t'), \quad t > t' \tag{2-115}$$
and
$$\beta(t) = \text{constant}, \tag{2-116}$$
so the system is a good-as-new system.
The converse is true as well. That is, if β(t) is a constant λ for a system, then the system's time-to-failure random variable $t_f$ is exponentially distributed. To argue this, use (2-113) to write
$$f_a(t) = \lambda\,[1 - F_a(t)]. \tag{2-117}$$
But (2-117) leads to
$$\frac{dF_a}{dt} + \lambda F_a(t) = \lambda. \tag{2-118}$$
Since $F_a(t) = 0$ for t ≤ 0, we must have
$$F_a(t) = \begin{cases} 1 - e^{-\lambda t}, & t \ge 0 \\[4pt] 0, & t < 0, \end{cases}$$
so $t_f$ is exponentially distributed. For β(t) to be equal to a constant λ, failures must be truly random in nature. A constant λ requires that there be no time epochs where failure is more likely (no Year 2000-type problems!).
Random variable tf is said to exhibit the Markov property, or it is said to be memoryless,
if its conditional density obeys (2-115). We have established the following theorem.
Theorem
The following are equivalent:
1) A system is a good-as-new system.
2) β(t) = constant.
3) $f(t \mid t_f > t') = f_a(t - t')$, t > t′.
4) $t_f$ is exponentially distributed.
The previous theorem, and the Markov property, are stated in the context of system reliability. However, both have far-reaching consequences in other areas that have nothing to do with system reliability. Basically, in problems dealing with random arrival times, the time between two successive arrivals is exponentially distributed if
1) the arrivals are independent of each other, and
2) the arrival time t after any specified fixed time t′ is described by a density function that depends only on the difference t - t′ (the arrival time random variable obeys the Markov property).
In Chapter 9, we will study shot noise (caused by the random arrival of electrons at a
semiconductor junction, vacuum tube anode, etc), an application of the above theorem and the
Markov property.
Updates at http://www.ece.uah.edu/courses/ee385/