Module U13: Functions of random variables
Module PE.PAS.U13.3
Functions of Random Variables
Primary Author: James D. McCalley, Iowa State University
Email Address: [email protected]
Co-author: None
Email Address: None
Last Update: 8/1/02
Reviews: None
Prerequisite Competencies:
1. Know basic probability relations (module U5)
2. Define random variables and distinguish between discrete
(module U6) and continuous (module U7) random
variables.
3. Relate probability density functions and distributions for
discrete and continuous random variables (modules U6
and U7).
4. Use and relate joint, marginal, and conditional pdfs and
cdfs.
Module Objectives:
1. Compute functions of a random variable for the
univariate and bivariate cases.
U13.1
Introduction
It is often of interest to obtain information about a function of one or more random variables, each of
which has known statistics through, for example, the probability density function (pdf) or joint pdf. A
simple illustration of this situation is, in the univariate case, when, for example, we know the pdf for X,
fX(x), and we want to know the pdf for Y, fY(y), where Y=4X. Similarly, in the bivariate case, we may
know the statistics on several random variables and wish to obtain the statistics on one or more related
variables. A very common situation is where we know the pdf for X1 and X2, fX1(x1) and fX2(x2), and we
want to know the pdf for Y, fY(y), where Y=X1+X2. We will focus on the use of transformation methods for
addressing these situations, first, in Section U13.2, for the univariate case, and second, in Section U13.3,
for the bivariate case. Proofs of the results stated in this module may be found in [1,2], the sources from
which the material of this module was adapted.
U13.2
The univariate case
U13.2.1
The discrete case
Consider first the discrete case, where X is a discrete random variable with pdf fX(x) and Y=g(X)
defines a one-to-one transformation, so that y=g(x) can be solved uniquely for x, resulting in x=h(y).
Then the pdf of Y is given by

fY(y) = fX(h(y))          (U13.1)
If the transformation is not one-to-one, it is still possible to obtain a pdf on Y if we can partition the
Y-space into a number of subspaces Aj so that, for each subspace, the equation y=g(x) has a unique
solution xj=hj(y) over the space Aj. In this case, the pdf of Y is

fY(y) = Σj fX(hj(y))          (U13.2)
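Equation (U13.2) can be sketched in a few lines of code. The following is a minimal illustration (the uniform pmf on {-2,...,2} and the squaring transformation are my own example, not from the module): the pmf of Y accumulates fX over every solution x of y = g(x).

```python
# Sketch of (U13.2): the pmf of Y = g(X) sums fX over the preimage of each y.
from collections import defaultdict

def pmf_of_function(f_X, g):
    """f_X: dict mapping x -> P(X = x); g: the transformation.
    Returns a dict mapping y -> P(Y = y)."""
    f_Y = defaultdict(float)
    for x, p in f_X.items():
        f_Y[g(x)] += p          # accumulate over all solutions x of y = g(x)
    return dict(f_Y)

# Example: X uniform on {-2, -1, 0, 1, 2}, Y = X^2 (not one-to-one).
f_X = {x: 0.2 for x in (-2, -1, 0, 1, 2)}
f_Y = pmf_of_function(f_X, lambda x: x * x)
```

Here the two solutions x = +2 and x = -2 both map to y = 4, so their probabilities add, exactly as the sum over j in (U13.2) prescribes.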
Example U13.1
U13.2.2
The continuous case
Let's assume now that X is a continuous random variable with pdf fX(x) and that Y=g(X) defines a
one-to-one transformation from a region of the x-space, call it A, such that A={x|fX(x)>0}, to a region
of the y-space, call it B, such that B={y|fY(y)>0}, with inverse transformation x=h(y). If the derivative
dh/dy is continuous and nonzero on B, then the pdf of Y is given by

fY(y) = fX(h(y)) |dh(y)/dy|          (U13.3)

The absolute value of the derivative in (U13.3) is appropriately referred to as the Jacobian of the
transformation, terminology that will prove more useful in the bivariate case.
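A quick numeric check of (U13.3), using the introduction's transformation Y = 4X (the choice X ~ Exponential(1) is my own assumption, made so the answer is known in closed form): the inverse is h(y) = y/4 with dh/dy = 1/4, so fY(y) = 0.25·exp(-y/4), the Exponential(1/4) density.

```python
import math

def f_X(x):
    # pdf of X ~ Exponential(1); an assumed example distribution
    return math.exp(-x) if x >= 0 else 0.0

def f_Y(y):
    h = y / 4.0                  # inverse transformation x = h(y) = y/4
    dh_dy = 1.0 / 4.0            # continuous and nonzero on B = (0, inf)
    return f_X(h) * abs(dh_dy)   # equation (U13.3)

# Sanity check: f_Y should integrate to 1 (trapezoidal rule on [0, 80]).
ys = [i * 0.01 for i in range(8001)]
area = sum(0.01 * (f_Y(a) + f_Y(b)) / 2.0 for a, b in zip(ys, ys[1:]))
```

The factor |dh/dy| = 1/4 is what rescales the density as the transformation stretches the axis by 4.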
If the transformation Y=g(X) is not one-to-one, i.e., if there are a number of solutions xj=hj(y) that
satisfy y=g(x), then we can obtain the pdf for Y, fY(y), according to the following:

fY(y) = Σj fX(hj(y)) |dhj(y)/dy|          (U13.4)
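As a numeric sketch of (U13.4) (the choice X ~ N(0,1) is my own, not from the module): with Y = X², the two solutions h1(y) = √y and h2(y) = -√y each contribute a term weighted by |dh/dy| = 1/(2√y), and the result is the known chi-square density with one degree of freedom.

```python
import math

def phi(x):
    # standard normal pdf; X ~ N(0,1) is an assumed example
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f_Y(y):
    # Two solutions of y = x^2: h1(y) = sqrt(y), h2(y) = -sqrt(y),
    # each with |dh/dy| = 1 / (2 sqrt(y)); sum them per (U13.4).
    r = math.sqrt(y)
    return phi(r) / (2.0 * r) + phi(-r) / (2.0 * r)

def chi2_1(y):
    # chi-square(1) density, the known answer for X ~ N(0,1)
    return math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi * y)
```

Both terms of the sum are needed: dropping either half of the preimage would lose half the probability mass.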
Example U13.2
Consider the transformation Y=X², and find fY(y) expressed as a function of fX(x).
We note that the inverse transformation is not unique as it has two solutions:
X = ±√Y  ⟹  h1(y) = √y,   h2(y) = -√y

Taking the derivative of each of these with respect to y, we obtain:

dh1(y)/dy = 1/(2√y),   dh2(y)/dy = -1/(2√y)
Substitution into (U13.4) results in

fY(y) = fX(√y)·(1/(2√y)) + fX(-√y)·(1/(2√y)) = (1/(2√y))·[fX(√y) + fX(-√y)]

U13.2.3
Expectations
Computing expected values of a function of a random variable can be useful, and the necessary relations
are provided below for the discrete and continuous cases, respectively.
If X is a random variable with pdf fX(x) and y=g(x) is a real-valued function whose domain includes the
possible values of X, then

E[Y] = Σx g(x) fX(x)          if X is discrete

E[Y] = ∫_{-∞}^{∞} g(x) fX(x) dx          if X is continuous
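The two expectation formulas above can be sketched numerically (both distributions here, a fair die and Uniform(0,1) with g(x) = x², are my own examples, not from the module):

```python
# Discrete: X a fair die, g(x) = x^2.
# E[Y] = sum over x of g(x) fX(x) = (1 + 4 + 9 + 16 + 25 + 36) / 6 = 91/6.
E_discrete = sum(x * x * (1.0 / 6.0) for x in range(1, 7))

# Continuous: X ~ Uniform(0,1), g(x) = x^2, so fX(x) = 1 on [0, 1]
# and E[Y] = 1/3; approximate the integral with the midpoint rule.
n = 100_000
E_continuous = sum(((i + 0.5) / n) ** 2 for i in range(n)) / n
```

Note that in both cases the expectation of Y is computed without ever deriving fY(y) itself; this is the practical appeal of these formulas.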
U13.3
The bivariate case
U13.3.1
General case
Consider that we know the joint density fX,Y(x,y), and we wish to find the joint density fZ,W(z,w), given that
we know that the random variables Z and W are related to the random variables X and Y through the
transformations Z=g(X,Y) and W=h(X,Y). The method is as follows:
1. Solve the system of 2 equations z=g(x,y) and w=h(x,y). There may be several solutions, denoted by
   (x1,y1), …, (xn,yn).
2. Obtain the general form of the Jacobian matrix, according to

   J(x,y) = | ∂z/∂x   ∂z/∂y |
            | ∂w/∂x   ∂w/∂y |          (U13.5)

3. Evaluate the desired joint density fZ,W(z,w) according to:

   fZ,W(z,w) = fX,Y(x1,y1)/|J(x1,y1)| + … + fX,Y(xn,yn)/|J(xn,yn)|          (U13.6)

   where |J(xi,yi)| denotes the absolute value of the determinant of the Jacobian matrix
   evaluated at (xi,yi).
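The three steps can be sketched for a concrete transformation (the choice of independent standard normals with Z = X + Y, W = X - Y is my own example, not from the module):

```python
import math

def f_XY(x, y):
    # joint pdf of independent X, Y ~ N(0,1); an assumed example
    return math.exp(-(x * x + y * y) / 2.0) / (2.0 * math.pi)

# Transformation: Z = X + Y, W = X - Y.
# Step 1: the system has the unique solution x1 = (z+w)/2, y1 = (z-w)/2.
# Step 2: J = [[dz/dx, dz/dy], [dw/dx, dw/dy]] = [[1, 1], [1, -1]].
det_J = 1 * (-1) - 1 * 1          # determinant = -2

def f_ZW(z, w):
    x1, y1 = (z + w) / 2.0, (z - w) / 2.0
    return f_XY(x1, y1) / abs(det_J)   # step 3, equation (U13.6)

def n_0_2(t):
    # N(0, 2) density; Z and W should come out independent N(0, 2)
    return math.exp(-t * t / 4.0) / math.sqrt(4.0 * math.pi)
```

Because the transformation is one-to-one, the sum in (U13.6) has a single term; the division by |det J| = 2 accounts for how the linear map scales area.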
Example U13.3
U13.3.2
Special case: sum of two random variables
The procedure of Section U13.3.1 may be applied to the case when we desire the pdf of a sum of random
variables. Consider that we know the joint pdf of X and Y, fX,Y(x,y), and we desire the pdf of the sum
S=X+Y, fS(s). Then the procedure of Section U13.3.1 results in

fS(s) = ∫_{-∞}^{∞} fX,Y(t, s-t) dt          (U13.7)
If X and Y are independent, then (U13.7) becomes the convolution formula:

fS(s) = ∫_{-∞}^{∞} fX(t) fY(s-t) dt          (U13.8)
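A numeric sketch of the convolution formula (the choice X, Y ~ Uniform(0,1) is my own example, not from the module): the sum of two independent Uniform(0,1) variables has the triangular density on [0, 2], rising to 1 at s = 1.

```python
def f_U(t):
    # Uniform(0,1) density; X, Y ~ Uniform(0,1) is an assumed example
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def f_S(s, n=20_000):
    # Midpoint-rule approximation of (U13.8); the integrand vanishes
    # outside t in [0, 1], so it suffices to integrate there.
    h = 1.0 / n
    return h * sum(f_U((i + 0.5) * h) * f_U(s - (i + 0.5) * h)
                   for i in range(n))
```

The triangular shape arises because the overlap of fX(t) and fY(s-t) grows linearly for s < 1 and shrinks linearly for s > 1.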
Problems
Problem 1
Problem 2
Problem 3
Problem 4
References
[1] L. Bain and M. Engelhardt, “Introduction to Probability and Mathematical Statistics,” Duxbury Press,
Belmont, California, 1992.
[2] A. Papoulis, “Probability, Random Variables, and Stochastic Processes,” McGraw-Hill, New York,
1984.