Expectations of Random Variables,
Functions of Random Variables
ECE 313
Probability with Engineering Applications
Lecture 17
Ravi K. Iyer
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana Champaign
Iyer - Lecture 16
ECE 313 – Spring 2017
Today’s Topics
• Expectation and Variance
– Moments: Mean and Variance
– Functions of Random Variables
• Reliability Function: deriving the mean and variance
• Announcements
– Homework 7 due Wednesday.
– Group activity on Hypo-exponentials, Erlang, and Hyper-exponentials:
please read the class notes and examples.
– Mini Project 2 is graded; waiting for you to submit individual contributions.
– The final mini-project will be announced next week.
Moments of a Distribution
• Let X be a random variable, and define another random variable Y as a
function of X, so that Y = \phi(X). Suppose that we wish to compute E[Y]:
E[Y] = E[\phi(X)] = \sum_i \phi(x_i)\,p_X(x_i),  if X is discrete,
E[Y] = E[\phi(X)] = \int_{-\infty}^{\infty} \phi(x)\,f_X(x)\,dx,  if X is continuous
(provided the sum or the integral on the right-hand side is absolutely
convergent). A special case of interest is the power function \phi(X) = X^k.
For k = 1, 2, 3, ..., E[X^k] is known as the kth moment of the random variable X.
Note that the first moment E[X] is the ordinary expectation, or mean, of X.
• We define the kth central moment m_k of the random variable X by
m_k = E[(X - E[X])^k]
• The second central moment is known as the variance of X, Var[X], often
denoted by \sigma^2. The variance of a random variable X is
Var[X] = \sigma^2 = \sum_i (x_i - E[X])^2\,p(x_i),  if X is discrete,
Var[X] = \sigma^2 = \int_{-\infty}^{\infty} (x - E[X])^2\,f(x)\,dx,  if X is continuous.
• Var[X] is always a nonnegative number.
Variance: 2nd Central Moment
• We define the kth central moment m_k of the random variable X by
m_k = E[(X - E[X])^k]
• The second central moment is known as the variance of X, Var[X], often
denoted by \sigma^2 = E[(X - E[X])^2].
• Definition (Variance). The variance of a random variable X is
Var[X] = \sigma^2 = \sum_i (x_i - E[X])^2\,p(x_i),  if X is discrete,
Var[X] = \sigma^2 = \int_{-\infty}^{\infty} (x - E[X])^2\,f(x)\,dx,  if X is continuous.
• It is clear that Var[X] is always a nonnegative number.
Variance of a Random Variable
• Suppose that X is continuous with density f, and let E[X] = \mu. Then:
Var(X) = E[(X - \mu)^2]
       = E[X^2 - 2\mu X + \mu^2]
       = \int_{-\infty}^{\infty} (x^2 - 2\mu x + \mu^2)\,f(x)\,dx
       = \int_{-\infty}^{\infty} x^2 f(x)\,dx - 2\mu \int_{-\infty}^{\infty} x f(x)\,dx + \mu^2 \int_{-\infty}^{\infty} f(x)\,dx
       = E[X^2] - 2\mu \cdot \mu + \mu^2
       = E[X^2] - \mu^2
• So we obtain the useful identity: Var(X) = E[X^2] - (E[X])^2
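As a quick numerical sanity check (not part of the original slides), the identity Var(X) = E[X^2] - (E[X])^2 can be verified against the central-moment definition on a small discrete pmf; the pmf below is the one used in Example 1 later in this lecture.

```python
# Check Var(X) = E[X^2] - (E[X])^2 against E[(X - E[X])^2] for a discrete pmf.
xs = [0, 1, 2]
ps = [0.2, 0.5, 0.3]

mean = sum(x * p for x, p in zip(xs, ps))                # E[X]
second_moment = sum(x**2 * p for x, p in zip(xs, ps))    # E[X^2]

var_identity = second_moment - mean**2                   # E[X^2] - (E[X])^2
var_central = sum((x - mean)**2 * p for x, p in zip(xs, ps))  # E[(X - E[X])^2]

assert abs(var_identity - var_central) < 1e-9
print(mean, second_moment, var_identity)  # approximately 1.1, 1.7, 0.49
```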
Variance of Normal Random Variable
• Let X be normally distributed with parameters \mu and \sigma^2. Find Var(X).
• Recalling that E[X] = \mu, we have:
Var(X) = E[(X - \mu)^2]
       = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu)^2 e^{-(x-\mu)^2/2\sigma^2}\,dx
• Substituting y = (x - \mu)/\sigma yields:
Var(X) = \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y^2 e^{-y^2/2}\,dy
• Integrating by parts (u = y, dv = y e^{-y^2/2}\,dy) gives:
Var(X) = \frac{\sigma^2}{\sqrt{2\pi}} \left( \left[-y e^{-y^2/2}\right]_{-\infty}^{\infty} + \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \right)
       = \frac{\sigma^2}{\sqrt{2\pi}} \cdot \sqrt{2\pi}
       = \sigma^2
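This result can be spot-checked by simulation (an illustrative addition, not from the slides): sample from a normal distribution and compare the sample variance with \sigma^2. The parameter values mu = 3, sigma = 2 are arbitrary.

```python
import random

# Monte Carlo check that Var(X) = sigma^2 for X ~ N(mu, sigma^2).
random.seed(0)
mu, sigma = 3.0, 2.0
n = 200_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n   # sample version of E[(X - E[X])^2]

print(mean, var)  # near mu = 3 and sigma^2 = 4, up to Monte Carlo error
```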
Variance of Exponential Distribution
• f(x) = \lambda e^{-\lambda x} for x \ge 0; 0 otherwise.
• Var(X) = E[X^2] - (E[X])^2
• We begin with the mean: E[X] = \int_0^{\infty} x\,\lambda e^{-\lambda x}\,dx.
From this point, we need to use integration by parts to solve this equation:
u = x,  du = dx,  dv = \lambda e^{-\lambda x}\,dx,  v = -e^{-\lambda x}
Now we can use the integration by parts formula \int u\,dv = uv - \int v\,du to continue solving:
E[X] = \left[-x e^{-\lambda x}\right]_0^{\infty} + \int_0^{\infty} e^{-\lambda x}\,dx
     = 0 + \left[-\frac{e^{-\lambda x}}{\lambda}\right]_0^{\infty}
     = 0 - \left(-\frac{1}{\lambda}\right)
     = \frac{1}{\lambda}
Variance of Exponential Distribution (cont.)
• Now we need to determine E[X^2] so we can calculate the variance:
E[X^2] = \int_0^{\infty} x^2\,\lambda e^{-\lambda x}\,dx
• Again, integration by parts:
u = x^2,  du = 2x\,dx,  dv = \lambda e^{-\lambda x}\,dx,  v = -e^{-\lambda x}
E[X^2] = \left[-x^2 e^{-\lambda x}\right]_0^{\infty} + \int_0^{\infty} 2x\,e^{-\lambda x}\,dx
       = 0 + \frac{2}{\lambda} \int_0^{\infty} x\,\lambda e^{-\lambda x}\,dx
• Note that \int_0^{\infty} x\,\lambda e^{-\lambda x}\,dx = E[X] = \frac{1}{\lambda}
Variance of Exponential Distribution (cont.)
E[X^2] = \frac{2}{\lambda} \cdot \frac{1}{\lambda} = \frac{2}{\lambda^2}
• Now that we have found E[X] = \frac{1}{\lambda} and E[X^2] = \frac{2}{\lambda^2}, we can substitute
them into the equation Var(X) = E[X^2] - (E[X])^2 to find the following:
Var(X) = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}
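The exponential results E[X] = 1/\lambda, E[X^2] = 2/\lambda^2, and Var(X) = 1/\lambda^2 can be confirmed by direct numerical integration of the density (a sketch added for illustration; the rate lam = 1.5 is arbitrary).

```python
import math

# Riemann-sum check of the exponential moments for f(x) = lam * exp(-lam * x).
lam = 1.5
dx = 1e-4
upper = 40.0 / lam  # the integrands are negligible beyond this point

xs = [i * dx for i in range(int(upper / dx))]
f = [lam * math.exp(-lam * x) for x in xs]

mean = sum(x * fx for x, fx in zip(xs, f)) * dx          # ~ 1/lam
second = sum(x * x * fx for x, fx in zip(xs, f)) * dx    # ~ 2/lam^2
var = second - mean ** 2                                 # ~ 1/lam^2

print(mean, second, var)
```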
Functions of a Random Variable
• Let Y = \phi(X) = X^2. As an example, X could denote the measurement
error in a certain physical experiment, and Y would then be the square
of the error (e.g., the method of least squares).
• Note that F_Y(y) = 0 for y < 0. For y \ge 0:
F_Y(y) = P(Y \le y)
       = P(X^2 \le y)
       = P(-\sqrt{y} \le X \le \sqrt{y})
       = F_X(\sqrt{y}) - F_X(-\sqrt{y}),
and by differentiation the density of Y is
f_Y(y) = \frac{1}{2\sqrt{y}}\left[ f_X(\sqrt{y}) + f_X(-\sqrt{y}) \right] for y > 0; 0 otherwise.
Functions of a Random Variable (cont.)
• Let X have the standard normal distribution N(0,1), so that
f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2},  -\infty < x < \infty.
Then
f_Y(y) = \frac{1}{2\sqrt{y}} \left( \frac{1}{\sqrt{2\pi}} e^{-y/2} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \right) for y > 0; 0 for y \le 0,
or
f_Y(y) = \frac{1}{\sqrt{2\pi y}} e^{-y/2} for y > 0; 0 for y \le 0.
• This is a chi-squared distribution with one degree of freedom.
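An empirical illustration (not from the slides): squaring standard normal samples should produce values whose mean and variance match the chi-squared distribution with one degree of freedom, which has mean 1 and variance 2.

```python
import random

# If X ~ N(0,1), then Y = X^2 is chi-squared with 1 degree of freedom.
random.seed(0)
n = 200_000
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n

print(mean, var)  # near 1 and 2, up to Monte Carlo error
```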
Functions of a Random Variable (cont.)
Generating Exponential Random Numbers
• Let X be uniformly distributed on (0,1). We show that Y = -\frac{1}{\lambda} \ln(1 - X)
has an exponential distribution with parameter \lambda > 0. Note that Y is a
nonnegative random variable: F_Y(y) = 0 for y \le 0.
• For y > 0, we have
F_Y(y) = P(Y \le y) = P[-\lambda^{-1} \ln(1 - X) \le y]
       = P[\ln(1 - X) \ge -\lambda y]
       = P[(1 - X) \ge e^{-\lambda y}]  (since e^x is an increasing function of x)
       = P(X \le 1 - e^{-\lambda y})
       = F_X(1 - e^{-\lambda y}).
But since X is uniform over (0,1), F_X(x) = x for 0 \le x \le 1.
Thus F_Y(y) = 1 - e^{-\lambda y}. Therefore Y is exponentially distributed with parameter \lambda.
• This fact can be used in a distribution-driven simulation. In simulation programs it is
important to be able to generate values of variables with known distribution functions.
Such values are known as random deviates or random variates. Most computer
systems provide built-in functions to generate random deviates from the uniform
distribution over (0,1); such random deviates are called random numbers.
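The derivation above is the inverse-transform method in action. A minimal Python sketch (the rate lam = 2.0 is an arbitrary illustrative choice; random.random() plays the role of the uniform deviate):

```python
import math
import random

# Generate exponential random variates from uniform ones:
# if U ~ Uniform(0,1), then Y = -(1/lam) * ln(1 - U) ~ Exponential(lam).
random.seed(0)
lam = 2.0
n = 200_000
ys = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

mean = sum(ys) / n                              # should be near 1/lam = 0.5
frac_below_1 = sum(y <= 1.0 for y in ys) / n    # should be near 1 - e^{-lam}
print(mean, frac_below_1)
```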
Example 1
• Let X be uniformly distributed on (0,1). We obtain the cumulative
distribution function (CDF) of the random variable Y, defined by Y = X^n,
as follows: for 0 \le y \le 1,
F_Y(y) = P\{Y \le y\}
       = P\{X^n \le y\}
       = P\{X \le y^{1/n}\}
       = F_X(y^{1/n})
       = y^{1/n}
• Now, the probability density function (PDF) of Y is given by
f_Y(y) = \frac{1}{n} y^{1/n - 1} for 0 \le y \le 1; 0 otherwise.
Expectation of a Function of a Random Variable
• Given a random variable X and its probability distribution (its pmf/pdf),
we are often interested in calculating not the expected value of X, but the
expected value of some function of X, say g(X).
• One way: since g(X) is itself a random variable, it must have a
probability distribution, which should be computable from a
knowledge of the distribution of X. Once we have obtained the
distribution of g(X), we can then compute E[g(X)] by the definition of
the expectation.
• Example 1: Suppose X has the following probability mass function:
p(0) = 0.2,  p(1) = 0.5,  p(2) = 0.3
Calculate E[X^2].
• Letting Y = X^2, we have that Y is a random variable that can take on one of
the values 0^2, 1^2, 2^2 with respective probabilities:
p_Y(0) = P\{Y = 0^2\} = 0.2
p_Y(1) = P\{Y = 1^2\} = 0.5
p_Y(4) = P\{Y = 2^2\} = 0.3
• Hence,
E[X^2] = E[Y] = 0(0.2) + 1(0.5) + 4(0.3) = 1.7
• Note that
1.7 = E[X^2] \ne (E[X])^2 = 1.21
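The "one way" described above can be sketched directly (an illustrative addition): build the pmf of Y = g(X) from the pmf of X, take E[Y], and compare with the direct sum \sum g(x) p(x) from the proposition on the next slide.

```python
from collections import defaultdict

# Compute E[g(X)] two ways: via the induced pmf of Y = g(X), and directly.
p_x = {0: 0.2, 1: 0.5, 2: 0.3}

def g(x):
    return x ** 2

# Step 1: pmf of Y = g(X). Distinct x values may map to the same y,
# so probabilities are accumulated.
p_y = defaultdict(float)
for x, p in p_x.items():
    p_y[g(x)] += p

e_via_y = sum(y * p for y, p in p_y.items())      # E[Y] from the pmf of Y
e_direct = sum(g(x) * p for x, p in p_x.items())  # sum g(x) p(x)

assert abs(e_via_y - e_direct) < 1e-9
print(e_via_y)  # approximately 1.7
```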
Expectation of a Function of a Random Variable (cont.)
• Proposition 2: (a) If X is a discrete random variable with probability mass
function p(x), then for any real-valued function g,
E[g(X)] = \sum_{x:\,p(x)>0} g(x)\,p(x)
• (b) If X is a continuous random variable with probability density function f(x),
then for any real-valued function g,
E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,dx
• Example 3: Applying the proposition to Example 1 yields
E[X^2] = 0^2(0.2) + (1^2)(0.5) + (2^2)(0.3) = 1.7
• Example 4: Applying the proposition to Example 2 yields
E[X^3] = \int_0^1 x^3\,dx = \frac{1}{4}  (since f(x) = 1 for 0 \le x \le 1)
Corollary
• If a and b are constants, then E[aX + b] = aE[X] + b
• The discrete case:
E[aX + b] = \sum_{x:\,p(x)>0} (ax + b)\,p(x)
          = a \sum_{x:\,p(x)>0} x\,p(x) + b \sum_{x:\,p(x)>0} p(x)
          = aE[X] + b
• The continuous case:
E[aX + b] = \int_{-\infty}^{\infty} (ax + b)\,f(x)\,dx
          = a \int_{-\infty}^{\infty} x\,f(x)\,dx + b \int_{-\infty}^{\infty} f(x)\,dx
          = aE[X] + b
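A quick check of this corollary (an illustrative sketch; the constants a = 3, b = -1 are arbitrary) on the discrete pmf from Example 1:

```python
# Verify E[aX + b] = a*E[X] + b on a small discrete pmf.
p_x = {0: 0.2, 1: 0.5, 2: 0.3}
a, b = 3.0, -1.0  # arbitrary constants for illustration

e_x = sum(x * p for x, p in p_x.items())              # E[X]
e_axb = sum((a * x + b) * p for x, p in p_x.items())  # E[aX + b] computed directly

assert abs(e_axb - (a * e_x + b)) < 1e-9
print(e_x, e_axb)  # approximately 1.1 and 2.3
```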
The Reliability Function
• Let the random variable X be the lifetime or the time to failure of
a component. The probability that the component survives until
some time t is called the reliability R(t) of the component:
R(t) = P(X > t) = 1 - F(t)
where F is the CDF of the component lifetime X.
• The component is assumed to be working properly at time t = 0,
and no component can work forever without failure; i.e.,
R(0) = 1 and \lim_{t \to \infty} R(t) = 0
• R(t) is a monotone non-increasing function of t.
• For t less than zero, reliability has no meaning, but sometimes
we let R(t) = 1 for t < 0. F(t) is also called the unreliability.
Time to Failure and Reliability Function
• Let T denote the time to failure or lifetime of a component in the
system, and let f(t) and F(t) denote the probability density function
and cumulative distribution function of T, respectively.
• f(t) represents the probability density of failure at time t.
• The probability that the component will fail at or before time t is
given by:
P\{T \le t\} = F(t)
• And the reliability of the component is equal to the
probability that it will survive at least until time t, given by:
R(t) = P\{T > t\} = 1 - F(t)
• So we have:
R'(t) = -f(t)
• Note: f(t)\Delta t is the (unconditional) probability that a
component will fail in the interval (t, t + \Delta t].
Mean Time to Failure (MTTF)
• The expected life, or the mean time to failure (MTTF), of the
component is given by:
E[T] = \int_0^{\infty} t\,f(t)\,dt = -\int_0^{\infty} t\,R'(t)\,dt.
• Integrating by parts, we obtain:
E[T] = \left[-t\,R(t)\right]_0^{\infty} + \int_0^{\infty} R(t)\,dt.
• Now, since R(t) approaches zero faster than t approaches \infty,
we have:
E[T] = \int_0^{\infty} R(t)\,dt = MTTF
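The equality of the two MTTF integrals above can be illustrated numerically (a sketch added for this transcript; the exponential lifetime with rate lam = 0.5 is an arbitrary choice):

```python
import math

# Numerically compare E[T] = \int_0^inf t f(t) dt with \int_0^inf R(t) dt
# for an exponential lifetime: f(t) = lam*exp(-lam*t), R(t) = exp(-lam*t).
lam = 0.5
dt = 1e-3
upper = 40.0 / lam  # both integrands are negligible beyond this point

ts = [i * dt for i in range(int(upper / dt))]
mttf_from_density = sum(t * lam * math.exp(-lam * t) for t in ts) * dt
mttf_from_reliability = sum(math.exp(-lam * t) for t in ts) * dt

print(mttf_from_density, mttf_from_reliability)  # both near 1/lam = 2
```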
Exponentially Distributed Lifetime
• If the component lifetime is exponentially distributed, then:
R(t) = e^{-\lambda t}
• And:
E[T] = \int_0^{\infty} e^{-\lambda t}\,dt = \frac{1}{\lambda}
• Using E[T^2] = \int_0^{\infty} 2t\,R(t)\,dt:
Var[T] = \int_0^{\infty} 2t\,e^{-\lambda t}\,dt - \frac{1}{\lambda^2} = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}
Instantaneous Failure Rate
or Hazard Rate
• The hazard rate measures the conditional probability of a failure given that
the system is currently working.
• The failure density (pdf) measures the overall speed of failures.
• The hazard rate, or instantaneous failure rate, measures the dynamic
(instantaneous) speed of failures.
• To understand the hazard function, we need to review
conditional probability and conditional density functions (very
similar concepts).
Instantaneous Failure Rate
• If we know for certain that the component was functioning up to
time t, the (conditional) probability of its failure in the interval will
(in general) be different from f(t)\Delta t.
• This leads to the notion of the "instantaneous failure rate." The
conditional probability that the component does not survive for
an (additional) interval of duration x, given that it has survived
until time t, can be written as:
G_Y(x \mid t) = \frac{P(t < X \le t + x)}{P(X > t)} = \frac{F(t + x) - F(t)}{R(t)}
Instantaneous Failure Rate (Cont’d)
• Definition: The instantaneous failure rate h(t) at time t is defined
to be:
h(t) = \lim_{x \to 0} \frac{1}{x} \cdot \frac{F(t + x) - F(t)}{R(t)} = \lim_{x \to 0} \frac{R(t) - R(t + x)}{x\,R(t)}
so that:
h(t) = \frac{f(t)}{R(t)}
• h(t)\Delta t represents the conditional probability that a component
surviving to age t will fail in the interval (t, t + \Delta t).
• The exponential distribution is characterized by a constant
instantaneous failure rate:
h(t) = \frac{f(t)}{R(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda
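The constant hazard of the exponential can be seen empirically (an illustrative sketch, not from the slides): at several ages t, estimate h(t) as the fraction of lifetimes failing in (t, t + dt] divided by dt times the fraction surviving past t. Each estimate should sit near \lambda regardless of t.

```python
import random

# Empirical hazard estimate for exponential lifetimes: it should be roughly
# the constant lam at every age t (memorylessness).
random.seed(0)
lam, n, dt = 1.0, 500_000, 0.02
lifetimes = [random.expovariate(lam) for _ in range(n)]

for t in (0.5, 1.0, 2.0):
    surviving = sum(x > t for x in lifetimes)
    failing = sum(t < x <= t + dt for x in lifetimes)
    print(t, failing / (dt * surviving))  # each near lam = 1.0
```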
Instantaneous Failure Rate (Cont’d)
• Integrating both sides of the equation:
\int_0^t h(x)\,dx = \int_0^t \frac{f(x)}{R(x)}\,dx = -\int_0^t \frac{R'(x)}{R(x)}\,dx = -\int_{R(0)}^{R(t)} \frac{dR}{R}
or
-\ln R(t) = \int_0^t h(x)\,dx
(using the boundary condition R(0) = 1). Hence:
R(t) = \exp\left(-\int_0^t h(x)\,dx\right)
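This relation is easy to exercise numerically for a non-constant hazard (an illustrative sketch; the wear-out style hazard h(t) = 2t is an arbitrary choice, for which the cumulative hazard is t^2 and hence R(t) = exp(-t^2)):

```python
import math

# Compute R(t) = exp(-\int_0^t h(x) dx) by numerical integration of the hazard.
def reliability_from_hazard(h, t, steps=100_000):
    dx = t / steps
    # midpoint rule for the cumulative hazard H(t) = \int_0^t h(x) dx
    H = sum(h((i + 0.5) * dx) for i in range(steps)) * dx
    return math.exp(-H)

# For h(t) = 2t, the closed form is R(t) = exp(-t^2).
for t in (0.5, 1.0, 1.5):
    print(t, reliability_from_hazard(lambda x: 2 * x, t), math.exp(-t * t))
```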
Cumulative Hazard
• The cumulative failure rate, H(t) = \int_0^t h(x)\,dx, is referred to as the
cumulative hazard.
• R(t) = \exp\left(-\int_0^t h(x)\,dx\right) gives a useful theoretical representation
of reliability as a function of the failure rate.
• An alternate representation gives the reliability in terms of the
cumulative hazard: R(t) = e^{-H(t)}
• If the lifetime is exponentially distributed, then H(t) = \lambda t and
we obtain the exponential reliability function.
f(t) and h(t)
• f(t)\Delta t is the unconditional probability that the component will fail
in the interval (t, t + \Delta t].
• h(t)\Delta t is the conditional probability that the component will fail in
the same time interval, given that it has survived until time t.
• h(t) is always greater than or equal to f(t), because R(t) \le 1.
• f(t) is a probability density; h(t) is not.
• h(t) is the failure rate; f(t) is the failure density.
• To further see the difference, we need the notion of conditional
probability density.
Failure Rate as a Function of Time
(figure)
Constraints on f(t) and z(t)
(figure)