EMIS 7300
SYSTEMS ANALYSIS METHODS
FALL 2005
Dr. John Lipp
Copyright © 2003-2005 Dr. John Lipp
Session 2 Outline
• Part 1: Correlation and Independence.
• Part 2: Confidence Intervals.
• Part 3: Hypothesis Testing.
• Part 4: Linear Regression.
Today’s Topics
• Bivariate random variables
– Statistical Independence.
– Marginal random variables.
– Conditional random variables.
• Correlation and Covariance
– Multivariate Distributions.
– Random Vectors.
– Correlation and Covariance Matrices.
• Transformations
– Transformations of a random variable.
– Transformations of bivariate random variables.
– Transformations of multivariate random variables.
Bivariate Data
• A common experimental procedure is to control one variable
(input) and measure another variable (output).
• The values of the “input” variable are denoted xi and the
values of the “output” variable yi.
• An xy-plot of the data points is referred to as a scatter
diagram if the data (xi and/or yi) are random.
• From the scatter diagram a general data trend may be
observed that suggests an empirical model.
• Fitting the data to this model is known as regression analysis.
When the appropriate empirical model is a line, the procedure is
called simple linear regression.
Bivariate Data (cont.)
 n     xi       yi
 1   9.5013  22.4030
 2   2.3114  12.3828
 3   6.0684  20.3993
 4   4.8598  17.5673
 5   8.9130  27.9870
 6   7.6210  22.9163
 7   4.5647  18.8927
 8   0.1850   0.5602
 9   8.2141  21.3490
10   4.4470  13.2672
11   6.1543  10.8923
12   7.9194  21.8680
13   9.2181  19.2104
14   7.3821  25.4247
15   1.7627   5.3050
[Figure: scatter diagram of the (xi, yi) data above; x-axis 0-10, y-axis 0-30.]
Bivariate Data (cont.)
• The line fit equation is
$$\hat{y} = \hat{m}x + \hat{b}$$
where
$$\hat{m} = \frac{\frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})(x_i - \bar{x})}{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}
= \frac{\sum_{i=1}^{n} y_i x_i - \frac{1}{n}\left(\sum_{i=1}^{n} y_i\right)\left(\sum_{i=1}^{n} x_i\right)}{\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2}$$
$$\hat{b} = \bar{y} - \hat{m}\bar{x} = \frac{1}{n}\left(\sum_{i=1}^{n} y_i - \hat{m}\sum_{i=1}^{n} x_i\right)$$
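A sketch of these estimators applied to the data table above (NumPy is an illustration choice here, not part of the original slides):

```python
import numpy as np

# Data from the table above.
x = np.array([9.5013, 2.3114, 6.0684, 4.8598, 8.9130, 7.6210, 4.5647, 0.1850,
              8.2141, 4.4470, 6.1543, 7.9194, 9.2181, 7.3821, 1.7627])
y = np.array([22.4030, 12.3828, 20.3993, 17.5673, 27.9870, 22.9163, 18.8927,
              0.5602, 21.3490, 13.2672, 10.8923, 21.8680, 19.2104, 25.4247,
              5.3050])

n = len(x)
# Computational forms of the slope and intercept estimators.
m_hat = (np.sum(y * x) - np.sum(y) * np.sum(x) / n) / \
        (np.sum(x**2) - np.sum(x)**2 / n)
b_hat = (np.sum(y) - m_hat * np.sum(x)) / n
print(m_hat, b_hat)   # parameters of the fitted line y_hat = m_hat*x + b_hat
```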
Simple Linear Regression (cont.)
• The slope of the linear regression is related to the sample
correlation coefficient
$$r = \frac{s_x}{s_y}\,\hat{m}
= \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)\left(\frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2\right)}}$$
• The calculation for r can be rewritten as
$$r = \frac{\sum_{i=1}^{n} y_i x_i - \frac{1}{n}\left(\sum_{i=1}^{n} y_i\right)\left(\sum_{i=1}^{n} x_i\right)}{\sqrt{\left(\sum_{i=1}^{n} y_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} y_i\right)^2\right)\left(\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right)}}$$
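A sketch of the computational formula for r, cross-checked against NumPy's built-in (the demo data are arbitrary assumptions):

```python
import numpy as np

def sample_r(x, y):
    """Sample correlation coefficient via the computational formula."""
    n = len(x)
    sxy = np.sum(y * x) - np.sum(y) * np.sum(x) / n
    sxx = np.sum(x**2) - np.sum(x)**2 / n
    syy = np.sum(y**2) - np.sum(y)**2 / n
    return sxy / np.sqrt(sxx * syy)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # arbitrary demo data
y = np.array([1.1, 1.9, 3.2, 3.9, 5.3])
print(sample_r(x, y))                      # close to +1: strong positive slope
print(np.corrcoef(x, y)[0, 1])             # NumPy's built-in agrees
```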
Simple Linear Regression (cont.)
• r has no units.
• The value of r is bounded in magnitude by 1:
– r = 1 → the line fits the data perfectly.
– 0 < r ≤ 1 → the line has a positive slope.
– r = 0 → there is no line fit.
– −1 ≤ r < 0 → the line has a negative slope.
– r = −1 → the line fits the data perfectly.
Simple Linear Regression (cont.)
[Figure: four example scatter diagrams, each plotted on 0-10 by 0-10 axes.]
Bivariate Random Variables
• Consider the case of two random variables X and Y.
• The joint CDF is denoted $F_{X,Y}(x,y) = P(X \le x,\, Y \le y)$.
• The joint PDF is defined via the joint CDF
$$F_{X,Y}(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f_{X,Y}(u,v)\, dv\, du$$
where
$$f_{X,Y}(x,y) = \frac{\partial^2}{\partial x\,\partial y} F_{X,Y}(x,y)$$
• Expected value
$$E_{X,Y}\{g(x,y)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y)\, f_{X,Y}(x,y)\, dx\, dy$$
Statistical Independence
• X and Y are statistically independent if and only if,
$$F_{X,Y}(x,y) = F_X(x)\,F_Y(y) \quad\text{or}\quad f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$$
• Statistical independence has an effect on the expected value of
separable functions of joint random variables
$$E_{X,Y}\{g(X)h(Y)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x)h(y)\, f_{X,Y}(x,y)\, dx\, dy
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x)h(y)\, f_X(x) f_Y(y)\, dx\, dy$$
$$= \int_{-\infty}^{\infty} g(x) f_X(x)\, dx \int_{-\infty}^{\infty} h(y) f_Y(y)\, dy
= E_X\{g(X)\}\, E_Y\{h(Y)\}$$
Marginal Random Variables
• It is often of interest to find the individual CDFs and PDFs
when two random variables are not statistically independent.
These are known as the marginal CDF and marginal PDF.
• Marginal CDFs are straightforward,
$$F_X(x) = F_{X,Y}(x,\infty) = \int_{-\infty}^{x}\left(\int_{-\infty}^{\infty} f_{X,Y}(u,v)\, dv\right) du$$
$$F_Y(y) = F_{X,Y}(\infty,y) = \int_{-\infty}^{y}\left(\int_{-\infty}^{\infty} f_{X,Y}(u,v)\, du\right) dv$$
• Marginal PDFs are found by “integrating out” y or x,
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy \qquad f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dx$$
Conditional Random Variables
• Conditional CDFs and PDFs can be defined,
$$F_{X,Y}(x,y) = F_{X|Y}(x|y)\,F_Y(y) = F_{Y|X}(y|x)\,F_X(x)$$
$$f_{X,Y}(x,y) = f_{X|Y}(x|y)\,f_Y(y) = f_{Y|X}(y|x)\,f_X(x)$$
• Rewriting the conditional PDF for X given Y
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}
= \frac{f_{X,Y}(x,y)}{\int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dx}
= \frac{f_{Y|X}(y|x)\,f_X(x)}{\int_{-\infty}^{\infty} f_{Y|X}(y|x)\,f_X(x)\, dx}$$
This is just ____________________ for random variables!
• A similar equation holds for Y given X.
Marginal Random Variables (cont.)
• Consistent results are obtained if X and Y are independent,
$$F_X(x) = F_{X,Y}(x,\infty) = F_X(x)\underbrace{F_Y(\infty)}_{=1} = F_X(x)$$
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy = f_X(x)\underbrace{\int_{-\infty}^{\infty} f_Y(y)\, dy}_{=1} = f_X(x)$$
• Find the marginal PDFs for fX,Y(x,y) = 2 when 0 < x < y < 1
and fX,Y(x,y) = 0 everywhere else. Are X and Y independent?
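One way to check the exercise above symbolically (a sketch using SymPy; only the joint PDF and its triangular support 0 < x < y < 1 are taken from the slide):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
f_xy = 2  # joint PDF on the triangle 0 < x < y < 1, zero elsewhere

# "Integrate out" the other variable over the triangular support.
f_x = sp.integrate(f_xy, (y, x, 1))   # -> 2*(1 - x) for 0 < x < 1
f_y = sp.integrate(f_xy, (x, 0, y))   # -> 2*y       for 0 < y < 1
print(f_x, f_y)

# If X and Y were independent, f_x * f_y would equal f_xy on the support.
print(sp.simplify(f_x * f_y - f_xy))  # nonzero -> X and Y are dependent
```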
Conditional Random Variables (cont.)
• The definitions of conditional CDFs and PDFs are consistent
when X and Y are statistically independent
$$F_{X|Y}(x|y) = \frac{F_{X,Y}(x,y)}{F_Y(y)} = \frac{F_X(x)\,F_Y(y)}{F_Y(y)} = F_X(x)$$
$$F_{Y|X}(y|x) = \frac{F_{X,Y}(x,y)}{F_X(x)} = \frac{F_X(x)\,F_Y(y)}{F_X(x)} = F_Y(y)$$
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{f_X(x)\,f_Y(y)}{f_Y(y)} = f_X(x)$$
$$f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{f_X(x)\,f_Y(y)}{f_X(x)} = f_Y(y)$$
Bivariate Gaussian Random Variables
• Let X and Y be jointly Gaussian, but not necessarily
independent, random variables.
• The joint PDF is
$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho_{xy}^2}}\,
\exp\!\left(-\frac{\sigma_y^2(x-\mu_x)^2 - 2\rho_{xy}\sigma_x\sigma_y(x-\mu_x)(y-\mu_y) + \sigma_x^2(y-\mu_y)^2}{2\sigma_x^2\sigma_y^2(1-\rho_{xy}^2)}\right)$$
• Note:
$$E_{X,Y}\{X\} = \mu_x \qquad E_{X,Y}\{Y\} = \mu_y$$
$$E_{X,Y}\{(X-\mu_x)^2\} = E_{X,Y}\{X^2\} - \mu_x^2 = \sigma_x^2$$
$$E_{X,Y}\{(Y-\mu_y)^2\} = E_{X,Y}\{Y^2\} - \mu_y^2 = \sigma_y^2$$
$$E_{X,Y}\{(X-\mu_x)(Y-\mu_y)\} = E_{X,Y}\{XY\} - \mu_x\mu_y = \sigma_x\sigma_y\rho_{xy}$$
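As a numerical sanity check (a sketch; the particular means, variances, and correlation below are arbitrary assumptions), the explicit bivariate formula can be compared against SciPy's general multivariate normal density:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu_x, mu_y = 1.0, -2.0               # arbitrary means
sig_x, sig_y, rho = 2.0, 0.5, 0.7    # arbitrary std devs and correlation

def f_xy(x, y):
    """Explicit bivariate Gaussian PDF from the slide."""
    q = (sig_y**2 * (x - mu_x)**2
         - 2 * rho * sig_x * sig_y * (x - mu_x) * (y - mu_y)
         + sig_x**2 * (y - mu_y)**2)
    norm = 2 * np.pi * sig_x * sig_y * np.sqrt(1 - rho**2)
    return np.exp(-q / (2 * sig_x**2 * sig_y**2 * (1 - rho**2))) / norm

cov = [[sig_x**2, rho * sig_x * sig_y],
       [rho * sig_x * sig_y, sig_y**2]]
rv = multivariate_normal(mean=[mu_x, mu_y], cov=cov)

x, y = 0.3, -1.1
print(f_xy(x, y), rv.pdf([x, y]))    # should agree to machine precision
```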
Bivariate Gaussian Random Variables (cont.)
• The marginal PDFs are X ~ N(μx, σx²) and Y ~ N(μy, σy²).
Bivariate Gaussian Random Variables (cont.)
• Consider the case that the Gaussian variables are uncorrelated,
that is, ρxy = 0. The joint PDF is then
$$f_{X,Y}(x,y) = \left(\frac{1}{\sqrt{2\pi\sigma_x^2}}\, e^{-\frac{(x-\mu_x)^2}{2\sigma_x^2}}\right)\left(\frac{1}{\sqrt{2\pi\sigma_y^2}}\, e^{-\frac{(y-\mu_y)^2}{2\sigma_y^2}}\right) = f_X(x)\,f_Y(y)$$
• Thus, uncorrelated jointly Gaussian random variables are
independent Gaussian random variables.
This is a very important exception to the rule that uncorrelated
random variables are not necessarily independent.
Correlation and Covariance
• The correlation between two joint random variables X and Y
is defined as E{XY}.
• The covariance is defined as
cov(X,Y) = E{(X − μx)(Y − μy)} = E{XY} − μxμy = σxy
where μx and μy are the means of X and Y, respectively.
• X and Y are uncorrelated if and only if cov(X,Y) = 0. An
equivalent condition is X and Y are uncorrelated if and only if
E{XY} = E{X}E{Y}. This is not the same as independence!
• Two random variables X and Y are said to be orthogonal if
and only if E{XY} = 0. Not the same as uncorrelated!
Correlation and Covariance (cont.)
• Independent random variables are always uncorrelated
cov(X,Y) = E{XY} − μxμy = E{X}E{Y} − μxμy = 0
The reverse is generally not true: the independent RVs are a
subset of the uncorrelated RVs.
Correlation and Covariance (cont.)
• The correlation coefficient (normalized covariance) is
$$\rho_{xy} = \frac{\sigma_{xy}}{\sigma_x\sigma_y}
= \frac{E\{(X-\mu_x)(Y-\mu_y)\}}{\sqrt{E\{(X-\mu_x)^2\}\,E\{(Y-\mu_y)^2\}}}$$
– The correlation coefficient is bounded, −1 ≤ ρxy ≤ +1.
– ρxy = 0 if X and Y are uncorrelated.
– ρxy = 1 means that X and Y are perfectly correlated.
– ρxy = −1 means that X and Y are perfectly anti-correlated.
Correlation and Covariance (cont.)
• Although covariance describes a linear relationship between
variables (if it exists), it gives no indication of nonlinear
relationships between variables.
[Figure: a discrete joint PMF over an xy-grid, symmetric about its center cell, with probabilities between 0.02 and 0.20 that sum to 1.]
• The above distribution shows a clear relationship between the
random variables X and Y, but the covariance is zero!
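A small numerical illustration of the same phenomenon (a sketch with a hypothetical ring-shaped PMF, not the slide's exact table): a clearly dependent distribution can still have zero covariance.

```python
import numpy as np

# Hypothetical PMF: uniform on 8 points of the unit circle, a nonlinear
# (ring-shaped) relationship between X and Y.
theta = np.arange(8) * np.pi / 4
x, y = np.cos(theta), np.sin(theta)
p = np.full(8, 1 / 8)

mx, my = np.sum(p * x), np.sum(p * y)
cov = np.sum(p * (x - mx) * (y - my))
print(cov)  # ~0: X and Y are uncorrelated, yet Y**2 = 1 - X**2 exactly
```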
Multivariate Distributions
• When more than two random variables are considered, the
various distributions and densities are termed multivariate.
– Joint CDF: FX1,X2,…Xn (x1,x2,…,xn)
– Joint PDF: fX1,X2,…Xn (x1,x2,…,xn)
– Conditional CDF:
$$F_{X_1|X_2,\dots,X_n}(x_1\,|\,x_2,\dots,x_n) = \frac{F_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots,x_n)}{F_{X_2,\dots,X_n}(x_2,\dots,x_n)}$$
– Conditional PDF:
$$f_{X_1|X_2,\dots,X_n}(x_1\,|\,x_2,\dots,x_n) = \frac{f_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots,x_n)}{f_{X_2,\dots,X_n}(x_2,\dots,x_n)}$$
Multivariate Distributions (cont.)
– Marginal PDF:
$$f_{X_2,\dots,X_n}(x_2,\dots,x_n) = \int_{-\infty}^{\infty} f_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots,x_n)\, dx_1$$
– Expectation:
$$E\{g(x_1,\dots,x_n)\} = \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} g(x_1,\dots,x_n)\, f_{X_1,\dots,X_n}(x_1,\dots,x_n)\, dx_1\cdots dx_n$$
– Independence:
$$f_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots,x_n) = f_{X_1}(x_1)\, f_{X_2}(x_2)\cdots f_{X_n}(x_n)$$
Random Vectors
• Using vector notation is just as useful for random variables as
it is in other engineering disciplines.
• Consider the random vector
$$\vec{X} = \begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ \vdots \\ X_n \end{bmatrix}$$
• Define the “vector PDF”
$$f_{\vec{X}}(\vec{x}) = f_{X_1,X_2,\dots,X_n}(x_1,x_2,\dots,x_n)$$
• The CDF, marginal, and conditionals are similar.
Correlation Matrix
• Let X be an N×1 random vector and Y be an M×1 random
vector. Then the correlation matrix, $R_{xy} = E\{\vec{X}\vec{Y}^T\}$, is
$$E\{\vec{X}\vec{Y}^T\} = \begin{bmatrix}
E\{X_1Y_1\} & E\{X_1Y_2\} & E\{X_1Y_3\} & \cdots & E\{X_1Y_M\} \\
E\{X_2Y_1\} & E\{X_2Y_2\} & E\{X_2Y_3\} & \cdots & E\{X_2Y_M\} \\
E\{X_3Y_1\} & E\{X_3Y_2\} & E\{X_3Y_3\} & \cdots & E\{X_3Y_M\} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
E\{X_NY_1\} & E\{X_NY_2\} & E\{X_NY_3\} & \cdots & E\{X_NY_M\}
\end{bmatrix}$$
• $R_x = E\{\vec{X}\vec{X}^T\}$ is known as the autocorrelation matrix.
Covariance Matrix
• Let X be an N×1 random vector and Y be an M×1 random
vector. Then the covariance matrix is
$$C_{xy} = E\{(\vec{X}-\vec{\mu}_x)(\vec{Y}-\vec{\mu}_y)^T\} = \begin{bmatrix}
\sigma_{x_1,y_1} & \sigma_{x_1,y_2} & \sigma_{x_1,y_3} & \cdots & \sigma_{x_1,y_M} \\
\sigma_{x_2,y_1} & \sigma_{x_2,y_2} & \sigma_{x_2,y_3} & \cdots & \sigma_{x_2,y_M} \\
\sigma_{x_3,y_1} & \sigma_{x_3,y_2} & \sigma_{x_3,y_3} & \cdots & \sigma_{x_3,y_M} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\sigma_{x_N,y_1} & \sigma_{x_N,y_2} & \sigma_{x_N,y_3} & \cdots & \sigma_{x_N,y_M}
\end{bmatrix}$$
where the entries are σxi,yj = cov(Xi, Yj) and μ⃗x and μ⃗y are the
vector means of X⃗ and Y⃗, respectively.
• It is often more useful or more natural to write
$$C_{xy} = E\{(\vec{X}-\vec{\mu}_x)(\vec{Y}-\vec{\mu}_y)^T\} = E\{\vec{X}\vec{Y}^T\} - \vec{\mu}_x\vec{\mu}_y^T = R_{xy} - \vec{\mu}_x\vec{\mu}_y^T$$
Covariance Matrix (cont.)
• More interesting is the autocovariance matrix,
$$C_x = \begin{bmatrix}
\sigma_1^2 & \sigma_1\sigma_2\rho_{12} & \sigma_1\sigma_3\rho_{13} & \cdots & \sigma_1\sigma_N\rho_{1N} \\
\sigma_2\sigma_1\rho_{21} & \sigma_2^2 & \sigma_2\sigma_3\rho_{23} & \cdots & \sigma_2\sigma_N\rho_{2N} \\
\sigma_3\sigma_1\rho_{31} & \sigma_3\sigma_2\rho_{32} & \sigma_3^2 & \cdots & \sigma_3\sigma_N\rho_{3N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\sigma_N\sigma_1\rho_{N1} & \sigma_N\sigma_2\rho_{N2} & \sigma_N\sigma_3\rho_{N3} & \cdots & \sigma_N^2
\end{bmatrix}$$
• The autocovariance matrix is symmetric because ρij = ρji.
• It is often more useful or more natural to write
$$C_x = E\{(\vec{X}-\vec{\mu}_x)(\vec{X}-\vec{\mu}_x)^T\} = E\{\vec{X}\vec{X}^T\} - \vec{\mu}_x\vec{\mu}_x^T = R_x - \vec{\mu}_x\vec{\mu}_x^T$$
Covariance Matrix (cont.)
• Autocovariance matrix for uncorrelated random variables (ρij = 0):
$$C_x = \begin{bmatrix}
\sigma_1^2 & 0 & 0 & \cdots & 0 \\
0 & \sigma_2^2 & 0 & \cdots & 0 \\
0 & 0 & \sigma_3^2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \sigma_N^2
\end{bmatrix}$$
• Covariance matrix for perfectly correlated random variables (ρij = 1):
$$C_x = \begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \vdots \\ \sigma_N \end{bmatrix}
\begin{bmatrix} \sigma_1 & \sigma_2 & \sigma_3 & \cdots & \sigma_N \end{bmatrix}$$
Covariance Matrix (cont.)
• Consider a random variable Y which is the weighted sum of N
independent random variables X1, …, XN
$$Y = w_1X_1 + w_2X_2 + \cdots + w_NX_N = \sum_{i=1}^{N} w_iX_i = \vec{w}^T\vec{X}$$
• The mean of Y is straightforward
$$\mu_y = E\{Y\} = E\{\vec{w}^T\vec{X}\} = \vec{w}^T\vec{\mu}_x = \sum_{i=1}^{N} w_i\mu_{x_i}$$
• The variance is also straightforward
$$\sigma_y^2 = E\{Y^2\} - \mu_y^2 = E\{(\vec{w}^T\vec{X})^2\} - (\vec{w}^T\vec{\mu}_x)^2
= E\{\vec{w}^T\vec{X}\vec{X}^T\vec{w}\} - \vec{w}^T\vec{\mu}_x\vec{\mu}_x^T\vec{w}
= \vec{w}^T C_x \vec{w}$$
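A quick Monte Carlo sanity check of the variance result (a sketch; the weights and standard deviations are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])       # arbitrary weights
sig = np.array([1.0, 0.3, 2.0])      # arbitrary std devs of independent X_i

X = rng.normal(0.0, sig, size=(1_000_000, 3))  # each row is a draw of X
Y = X @ w                                      # Y = w^T X

print(Y.var())                 # empirical variance of Y
print(np.sum(w**2 * sig**2))   # w^T C_x w with diagonal C_x: they should agree
```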
Covariance Matrix (cont.)
• If the Xi are uncorrelated with different variances, then
$$\sigma_y^2 = \vec{w}^T \begin{bmatrix}
\sigma_1^2 & 0 & \cdots & 0 \\
0 & \sigma_2^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma_N^2
\end{bmatrix} \vec{w} = \sum_{i=1}^{N} w_i^2\sigma_i^2$$
• If the Xi are perfectly correlated with different variances, then
$$\sigma_y^2 = \vec{w}^T \begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \vdots \\ \sigma_N \end{bmatrix}
\begin{bmatrix} \sigma_1 & \sigma_2 & \cdots & \sigma_N \end{bmatrix} \vec{w}
= \left(\sum_{i=1}^{N} w_i\sigma_i\right)^2$$
Covariance Matrix (cont.)
• Let Y and b be M×1 vectors, A be an M×N matrix, and X be
an N×1 vector, then Y = AX + b has the statistics
$$\vec{\mu}_y = A\vec{\mu}_x + \vec{b} \qquad C_y = AC_xA^T$$
• Usually it is easy to generate X as uncorrelated random
variables with unit variances (Cx = identity matrix).
• To generate Y with a desired autocovariance, find the “square
root” of Cy = AA^T using an eigenvector decomposition
$$C_y = UDU^T = U\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_N
\end{bmatrix}U^T
\;\Rightarrow\; A = U\begin{bmatrix}
\sqrt{\lambda_1} & 0 & \cdots & 0 \\
0 & \sqrt{\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sqrt{\lambda_N}
\end{bmatrix}$$
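A sketch of this recipe in NumPy (the target covariance below is an arbitrary assumption); np.linalg.eigh returns the eigendecomposition of the symmetric matrix Cy:

```python
import numpy as np

rng = np.random.default_rng(1)
Cy = np.array([[4.0, 1.2],
               [1.2, 1.0]])        # desired symmetric, positive-definite covariance

lam, U = np.linalg.eigh(Cy)        # Cy = U @ diag(lam) @ U.T
A = U @ np.diag(np.sqrt(lam))      # "square root": A @ A.T == Cy

X = rng.standard_normal((100_000, 2))  # uncorrelated, unit-variance draws
Y = X @ A.T                            # each row is A @ x

print(np.cov(Y, rowvar=False))     # should be close to Cy
```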
Covariance Matrix (cont.)
• Covariance matrix for uncorrelated variables.
• Covariance matrix after rotation → rotation to obtain uncorrelated variables!
• How to compute a sample correlation / covariance.
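A sketch of how a sample covariance (and correlation) matrix might be computed by hand and checked against NumPy's built-ins (the data here are arbitrary random draws):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(500, 3))      # 500 observations of a 3-vector

xbar = data.mean(axis=0)
D = data - xbar                       # remove the sample mean
C = D.T @ D / (len(data) - 1)         # unbiased sample covariance matrix

s = np.sqrt(np.diag(C))
R = C / np.outer(s, s)                # sample correlation matrix

print(np.allclose(C, np.cov(data, rowvar=False)))       # True
print(np.allclose(R, np.corrcoef(data, rowvar=False)))  # True
```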
Gaussian Vector
• Let the elements of the random vector X be mutually
Gaussian. The PDF in vector notation is
$$f_{\vec{X}}(\vec{x}) = \frac{1}{\sqrt{(2\pi)^N|\Sigma|}}\, e^{-\frac{1}{2}(\vec{x}-\vec{\mu}_x)^T\Sigma^{-1}(\vec{x}-\vec{\mu}_x)}$$
where μ⃗x is the mean, Σ is the autocovariance matrix of X⃗, and |Σ| is the determinant of Σ.
• If the elements of X⃗ are independent / uncorrelated (equivalent
for Gaussian only!) the inverse is trivial
$$\Sigma^{-1} = \begin{bmatrix}
\sigma_1^2 & 0 & \cdots & 0 \\
0 & \sigma_2^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma_N^2
\end{bmatrix}^{-1} = \begin{bmatrix}
1/\sigma_1^2 & 0 & \cdots & 0 \\
0 & 1/\sigma_2^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1/\sigma_N^2
\end{bmatrix}$$
• Transformation of RVs.
• Use the inverse transformation to show that the PDF of Z = X + Y is a convolution.
• Use the inverse transformation to show how a uniform RV can be used to generate other RVs.
• See p. 232 in the Papoulis book.
Transformations of Random Variables
• Many of the continuous random variables from the previous
session were defined as non-linear functions of other random
variables, e.g., a chi-square random variable is the result of
squaring a zero-mean Gaussian random variable.
• Here is how NOT to transform a random variable
– Let X ~ exponential, i.e., $f_X(x) = \frac{1}{\lambda}e^{-x/\lambda},\; x \ge 0$.
– Define $Y = \sqrt{X}$ and substitute X = Y² into fX(x),
$$f_Y(y) = f_X(y^2) = \frac{1}{\lambda}\, e^{-y^2/\lambda}, \quad y \ge 0$$
– But Y should be Rayleigh, $f_Y(y) = \frac{y}{\alpha}\, e^{-y^2/2\alpha},\; y \ge 0$!
Transformations of Random Variables (cont.)
• The reason the “obvious” procedure failed is that the PDF has
no meaning outside of an integral!
• The correct procedure is to transform the CDF and then
compute its derivative to get the transformed PDF.
• Let Y = g(X) ⟺ X = g⁻¹(Y) be a one-to-one “mapping”, then
$$f_Y(y) = f_X(g^{-1}(y))\left|\frac{d\,g^{-1}(y)}{dy}\right|$$
• For X ~ exponential and $Y = \sqrt{X} \Leftrightarrow X = Y^2$, then
$$f_Y(y) = f_X(y^2)\left|\frac{d\,y^2}{dy}\right| = \frac{2y}{\lambda}\, e^{-y^2/\lambda}, \quad y \ge 0$$
• The scaling factor looks different from the Rayleigh PDF, but
with λ = 2α the two forms agree.
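A Monte Carlo sketch of this transform (assuming the parameterizations above, with λ = 2 so that α = 1): draws of Y = √X should follow the correctly transformed density, not the naive substitution.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0                                 # exponential mean; alpha = lam/2 = 1

X = rng.exponential(lam, size=1_000_000)
Y = np.sqrt(X)

# Compare a histogram of Y with the two candidate PDFs.
y = np.linspace(0.01, 4, 200)
f_correct = (2 * y / lam) * np.exp(-y**2 / lam)   # includes |d(y^2)/dy| = 2y
f_naive = (1 / lam) * np.exp(-y**2 / lam)         # missing the Jacobian factor

hist, edges = np.histogram(Y, bins=200, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(np.interp(y, centers, hist) - f_correct).max())  # small
print(np.abs(np.interp(y, centers, hist) - f_naive).max())    # large
```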
Transformations of Random Variables (cont.)
• Why is a one-to-one mapping important? Let X ~ N(0, σx²)
and apply the transformation Y = X² (Y should be chi-square)
$$f_Y(y) = f_X(\sqrt{y})\left|\frac{d\sqrt{y}}{dy}\right| = \frac{1}{2\sqrt{2\pi\sigma_x^2\, y}}\, e^{-\frac{y}{2\sigma_x^2}}, \quad y \ge 0$$
• The above “PDF” does not integrate to 1! Instead, it
integrates to ½.
• What went wrong? Two points of X map into each point of Y.
[Figure: fX(x) plotted for −4 ≤ x ≤ 4 beside the resulting fY(y) for 0 ≤ y ≤ 9.]
Transformations of Random Variables (cont.)
• In general, a mapping of X to Y with a function Y = g(X) must
be analyzed by dividing g(X) into N monotonic regions (roots)
and then summing the PDF contributions from each region
$$f_Y(y) = \sum_{i=1}^{N} f_X(g_i^{-1}(y))\left|\frac{d\,g_i^{-1}(y)}{dy}\right|$$
• The transformation Y = X² has two monotonic regions, X < 0
and X ≥ 0 (the equality belongs on the right).
[Figure: y = x² plotted for −4 ≤ x ≤ 4, showing the two monotonic regions.]
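A sketch of the two-root formula for Y = X² with X ~ N(0, 1) (taking σx = 1 is an assumption here): summing the contributions from the roots ±√y recovers the chi-square density with one degree of freedom.

```python
import numpy as np
from scipy.stats import norm, chi2

y = np.linspace(0.05, 8, 100)

# Two monotonic regions: roots g_1^{-1}(y) = +sqrt(y) and g_2^{-1}(y) = -sqrt(y),
# each with |d g^{-1}(y)/dy| = 1 / (2 sqrt(y)).
f_y = (norm.pdf(np.sqrt(y)) + norm.pdf(-np.sqrt(y))) / (2 * np.sqrt(y))

print(np.abs(f_y - chi2.pdf(y, df=1)).max())  # ~0: matches chi-square, 1 dof
```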
Transformations of Bivariate Random Variables
• The process is identical to that for a random variable except
that the derivative operation is replaced with the Jacobian.
• Let Y1 = g1(X1, X2) and Y2 = g2(X1, X2). The joint PDF
fY1,Y2(y1, y2) is found with
$$f_{Y_1,Y_2}(y_1,y_2) = \sum_{i=1}^{N} f_{X_1,X_2}\!\left(g_{1,i}^{-1}(y_1,y_2),\, g_{2,i}^{-1}(y_1,y_2)\right)\left|J\!\left(\frac{x_1,x_2}{y_1,y_2}\right)\right|$$
where
$$J\!\left(\frac{x_1,x_2}{y_1,y_2}\right) = \begin{vmatrix}
\dfrac{\partial g_1^{-1}(y_1,y_2)}{\partial y_1} & \dfrac{\partial g_1^{-1}(y_1,y_2)}{\partial y_2} \\[6pt]
\dfrac{\partial g_2^{-1}(y_1,y_2)}{\partial y_1} & \dfrac{\partial g_2^{-1}(y_1,y_2)}{\partial y_2}
\end{vmatrix} = \begin{vmatrix}
\dfrac{\partial g_1(x_1,x_2)}{\partial x_1} & \dfrac{\partial g_1(x_1,x_2)}{\partial x_2} \\[6pt]
\dfrac{\partial g_2(x_1,x_2)}{\partial x_1} & \dfrac{\partial g_2(x_1,x_2)}{\partial x_2}
\end{vmatrix}^{-1}$$
Transformations of Bivariate Random Variables (cont.)
• Example: Let X1 and X2 be zero-mean, independent Gaussian
random variables with equal variances. Compute the PDF
fR,Θ(r, θ) of the polar transform $R = \sqrt{X_1^2 + X_2^2}$, $\Theta = \tan^{-1}(X_1/X_2)$.
• First, note that this transform is one-to-one.
• Second, the PDF fX1,X2(x1, x2) is
$$f_{X_1,X_2}(x_1,x_2) = \frac{1}{2\pi\sigma_x^2}\, e^{-\frac{x_1^2+x_2^2}{2\sigma_x^2}}$$
• Third, the Jacobian is
$$J\!\left(\frac{x_1,x_2}{y_1,y_2}\right) = \begin{vmatrix}
\dfrac{\partial\sqrt{x_1^2+x_2^2}}{\partial x_1} & \dfrac{\partial\sqrt{x_1^2+x_2^2}}{\partial x_2} \\[6pt]
\dfrac{\partial\tan^{-1}(x_1/x_2)}{\partial x_1} & \dfrac{\partial\tan^{-1}(x_1/x_2)}{\partial x_2}
\end{vmatrix}^{-1} = \begin{vmatrix}
\dfrac{x_1}{r} & \dfrac{x_2}{r} \\[6pt]
\dfrac{x_2}{r^2} & -\dfrac{x_1}{r^2}
\end{vmatrix}^{-1} = \left(-\frac{1}{r}\right)^{-1} = -r$$
so that |J| = r.
Transformations of Bivariate Random Variables (cont.)
• Substituting
$$f_{R,\Theta}(r,\theta) = f_{X_1,X_2}\!\left(g_1^{-1}(r,\theta),\, g_2^{-1}(r,\theta)\right)\left|J\!\left(\frac{x_1,x_2}{r,\theta}\right)\right|
= f_{X_1,X_2}(r\cos\theta,\, r\sin\theta)\cdot r$$
$$= \frac{1}{2\pi\sigma_x^2}\, e^{-\frac{r^2\cos^2\theta + r^2\sin^2\theta}{2\sigma_x^2}}\cdot r
= \left(\frac{1}{2\pi}\right)\left(\frac{r}{\sigma_x^2}\, e^{-\frac{r^2}{2\sigma_x^2}}\right)$$
• Thus R ~ Rayleigh with α = σx² and Θ ~ uniform [0, 2π].
• Moreover, R and Θ are statistically independent.
Transformations of Bivariate Random Variables (cont.)
• The process of transformation of random variables has
several important and useful results.
• A random variable U ~ uniform [0,1] can be transformed to
any other PDF fX(x) with the transform X = FX-1(U).
– Exponential: X = −λ ln(1 − U).
– Rayleigh: X = √(−2α ln(1 − U)).
The only limitation is being able to invert FX(x).
• A pair of independent, zero-mean, unit-variance Gaussian
random variables can be generated from
X1 = R cos(Θ) and X2 = R sin(Θ)
where R is Rayleigh (α = 1) and Θ is uniform [0, 2π].
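A sketch of inverse-CDF sampling for the two cases above (the λ and α values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
U = rng.uniform(size=1_000_000)

lam, alpha = 2.0, 1.5                        # arbitrary parameters
X_exp = -lam * np.log(1 - U)                 # exponential via inverse CDF
X_ray = np.sqrt(-2 * alpha * np.log(1 - U))  # Rayleigh via inverse CDF

print(X_exp.mean())            # ~lam, the exponential mean
print((X_ray**2).mean() / 2)   # ~alpha, since E{X^2} = 2*alpha for this Rayleigh
```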
Transformations of Bivariate Random Variables (cont.)
• Let X1 and X2 be independent random variables and define
Y = X1 + X2
W = X1
– The transformation is one-to-one.
– The Jacobian is
$$J\!\left(\frac{x_1,x_2}{y,w}\right) = \begin{vmatrix}
\dfrac{\partial(x_1+x_2)}{\partial x_1} & \dfrac{\partial(x_1+x_2)}{\partial x_2} \\[6pt]
\dfrac{\partial x_1}{\partial x_1} & \dfrac{\partial x_1}{\partial x_2}
\end{vmatrix}^{-1} = \begin{vmatrix} 1 & 1 \\ 1 & 0 \end{vmatrix}^{-1} = -1$$
– Thus fY,W(y, w) = fX1(w) fX2(y − w).
– Integrating over w
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X_1}(w)\, f_{X_2}(y-w)\, dw = f_{X_1}(x_1) * f_{X_2}(x_2)$$
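A numerical sketch of the convolution result (assuming two standard uniforms, whose sum has the triangular density on [0, 2]):

```python
import numpy as np

rng = np.random.default_rng(5)
Y = rng.uniform(size=1_000_000) + rng.uniform(size=1_000_000)

# Convolve the two uniform densities on a grid, then compare to a histogram of Y.
dx = 0.001
x = np.arange(0, 1, dx)
f1 = np.ones_like(x)                 # U(0,1) density
f_y = np.convolve(f1, f1) * dx       # density of the sum, on [0, 2)
y_grid = np.arange(len(f_y)) * dx

hist, edges = np.histogram(Y, bins=100, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(np.interp(centers, y_grid, f_y) - hist).max())  # small error
```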
Transformations of Bivariate Random Variables (cont.)
• Let X1 and X2 be random variables and define
Y = X1 X2
W = X2
– The transformation is one-to-one.
– The Jacobian is
$$J\!\left(\frac{x_1,x_2}{y,w}\right) = \begin{vmatrix}
\dfrac{\partial(x_1x_2)}{\partial x_1} & \dfrac{\partial(x_1x_2)}{\partial x_2} \\[6pt]
\dfrac{\partial x_2}{\partial x_1} & \dfrac{\partial x_2}{\partial x_2}
\end{vmatrix}^{-1} = \begin{vmatrix} x_2 & x_1 \\ 0 & 1 \end{vmatrix}^{-1} = \frac{1}{x_2} = \frac{1}{w}$$
– Thus
fY,W(y, w) = fX1,X2(y / w, w) / |w|
and
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X_1,X_2}(y/w,\, w)\,\frac{1}{|w|}\, dw$$
Transformations of Bivariate Random Variables (cont.)
• Let X1 and X2 be random variables and define
Y = X1 / X2
W = X2
– The transformation is one-to-one.
– The Jacobian is
$$J\!\left(\frac{x_1,x_2}{y,w}\right) = \begin{vmatrix}
\dfrac{\partial(x_1/x_2)}{\partial x_1} & \dfrac{\partial(x_1/x_2)}{\partial x_2} \\[6pt]
\dfrac{\partial x_2}{\partial x_1} & \dfrac{\partial x_2}{\partial x_2}
\end{vmatrix}^{-1} = \begin{vmatrix}
\dfrac{1}{x_2} & -\dfrac{x_1}{x_2^2} \\[6pt]
0 & 1
\end{vmatrix}^{-1} = x_2 = w$$
– Thus
fY,W(y, w) = fX1,X2(yw, w) |w|
and
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X_1,X_2}(yw,\, w)\,|w|\, dw$$
Homework
• Mandatory (answers in the back of the book):
5-27, 5-37, 5-39, 5-89