# Introduction to Random Variables - UNC-Chapel Hill

1. Random variables
1.1 A random variable and its realization
X is a random variable that takes different possible values.
x is a specific value, or realization, of X.
Example:
X is the particulate matter concentration in Chapel Hill on July 6, 1988.
x is the concentration measured at a monitoring station in Chapel Hill on July 6, 1988 by
collecting particulate matter on a filter and sending the filter to a lab for analysis.
1.2 The cdf of a random variable
Let P[] be the probability that the event A occurs.
Note: We always have P[] [0,1]
Then the cumulative distribution function (cdf) of a random variable X is defined as
FX(x) = P[X ≤ x]
Important properties:
FX(-∞)=0
why?
FX(∞)=1
why?
FX(x) is non-decreasing
Proof: b > a ⇒ P[X ≤ b] = P[X ≤ a] + P[a < X ≤ b] ≥ P[X ≤ a] ⇒ FX(b) ≥ FX(a)
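These cdf properties are easy to check numerically. A minimal Python sketch (the standard exponential cdf is my example choice, not from the notes) evaluates FX at increasing arguments and confirms monotonicity and the limiting values 0 and 1:

```python
import math

def F_exp(x):
    """Assumed example cdf: standard exponential, FX(x) = 1 - exp(-x) for x >= 0."""
    return 0.0 if x < 0 else 1.0 - math.exp(-x)

xs = [-5.0, -1.0, 0.0, 0.5, 1.0, 2.0, 10.0, 50.0]
values = [F_exp(x) for x in xs]

# FX is non-decreasing along any increasing sequence of arguments
assert all(u <= v for u, v in zip(values, values[1:]))
# FX(-inf) -> 0 and FX(inf) -> 1, checked at very large |x|
assert F_exp(-1e9) == 0.0
assert abs(F_exp(1e9) - 1.0) < 1e-12
```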
1.3 The pdf of a random variable
Definition:
The probability density function (pdf) of a random variable may be defined as the derivative of its cdf,
fX(x) = dFX(x)/dx
Important properties:
• Since FX(x) is non-decreasing, fX(x) ≥ 0 for all x.
• ∫_a^b dx fX(x) = [FX(x)]_a^b = FX(b) − FX(a) = P[X ≤ b] − P[X ≤ a] = P[a < X ≤ b]
In other words, the area under the fX(x)-curve between a and b is the probability of the event
a < X ≤ b. Hence the pdf is really a density of probability, or a probability density function.
• The normalization constraint: we must always have ∫_{-∞}^{∞} dx fX(x) = 1
Proof: ∫_{-∞}^{∞} dx fX(x) = [FX(x)]_{-∞}^{∞} = FX(∞) − FX(-∞) = 1 − 0 = 1
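The normalization constraint can be verified numerically. The sketch below (my example; the standard exponential pdf fX(x) = e^(−x) for x ≥ 0 is an assumption, not from the notes) integrates the pdf with the trapezoid rule:

```python
import math

def f_exp(x):
    """Assumed example pdf: standard exponential, exp(-x) for x >= 0, else 0."""
    return math.exp(-x) if x >= 0 else 0.0

def trapezoid(f, a, b, n=100000):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

# the integral over (-inf, inf) is well approximated on [0, 40],
# since the exponential tail beyond 40 is negligible
area = trapezoid(f_exp, 0.0, 40.0)
print(area)  # close to 1
```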
1.4 The expected value of a random variable

The expected value of g(X) is E[g(X)] = ∫ dx g(x) fX(x)
Examples:
The mean is the expected value of X:
mx = E[X] = ∫ dx x fX(x)
The variance is the expected value of (X − mx)²:
varx = E[(X − mx)²] = ∫ dx (x − mx)² fX(x)
The expected value of (X − mx)⁴ is
E[(X − mx)⁴] = ∫ dx (x − mx)⁴ fX(x)
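These expected-value integrals can be approximated directly. A sketch (my own; the standard exponential pdf is again an assumed example, for which the mean and variance are both 1 and the fourth central moment is 9):

```python
import math

def f_exp(x):
    """Assumed example pdf: standard exponential, exp(-x) for x >= 0, else 0."""
    return math.exp(-x) if x >= 0 else 0.0

def expect(g, f=f_exp, a=0.0, b=40.0, n=200000):
    """E[g(X)] = integral of g(x) f(x) dx, approximated with a midpoint sum."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

mx = expect(lambda x: x)                 # mean: analytically 1 for this pdf
varx = expect(lambda x: (x - mx) ** 2)   # variance: analytically 1
m4 = expect(lambda x: (x - mx) ** 4)     # fourth central moment: analytically 9

print(mx, varx, m4)
```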
1.5 Exercises
Let fX(x) = k if x ∈ [a,b], 0 otherwise (o/w)
1) Use the normalization constraint to find k so that fX(x) is a pdf.

∫_{-∞}^{∞} dx fX(x) = 1 ⇒ … ⇒ k = 1/(b−a)
2) Assume that fX()is the pdf of a random value x, what is the expected value of X?

E[X]=  dx x f X (x) = … = (a+b)/2
3) Write the formula for the variance of X
var[X] = E[(X − mx)²] = ∫ dx (x − mx)² fX(x) = ∫_a^b dx (x − (a+b)/2)² · 1/(b−a) = … = (b−a)²/12
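The three answers above can be checked numerically. A quick sketch (a = 2 and b = 5 are arbitrary example endpoints, my assumption):

```python
a, b = 2.0, 5.0           # arbitrary example endpoints (assumption)
k = 1.0 / (b - a)         # pdf height from the normalization constraint

n = 200000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]       # midpoint grid over [a, b]

area = sum(k for _ in xs) * h                    # integral of fX: should be 1
mean = sum(x * k for x in xs) * h                # E[X]: should be (a + b) / 2
var = sum((x - mean) ** 2 * k for x in xs) * h   # var[X]: should be (b - a)^2 / 12

print(area, mean, var)
```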
2. Bivariate distributions
X and X’ are two random variables that may be dependent on one another.
Example:
X is the lead concentration in the drinking water at the tap of a house
X’ is the lead concentration in the blood of the person living in that house.
2.1 The bivariate cdf
FXX’(x x’)=P[X≤x AND X’≤ x’] =P[X≤x , X’≤x’]’
2.2 The bivariate pdf
fXX’(x x’)=
 
FX X ' ( x, x' )
x x'
2.3 Normalization constraint


 dx  dx' f X X ' ( x, x' )  1
6
2.4 The marginal and conditional pdf

The marginal pdf of fXX’(x, x’) with respect to x is fX(x) = ∫ dx’ fXX’(x, x’)
The marginal pdf of fXX’(x, x’) with respect to x’ is fX’(x’) = ∫ dx fXX’(x, x’)
The conditional pdf fX|x’(x | X’ = x’) of X given that X’ = x’ is
fX|x’(x | X’ = x’) = fXX’(x, x’) / fX’(x’) = fXX’(x, x’) / ∫ dx fXX’(x, x’)
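Discretizing a joint pdf on a grid shows how a marginal and a conditional are computed in practice. A sketch with an assumed example joint, fXX’(x, x’) = x + x’ on the unit square (my choice, not from the notes; it integrates to 1):

```python
n = 400
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]   # midpoints of [0, 1]

def f_joint(x, xp):
    """Assumed example joint pdf on [0,1]^2: f(x, x') = x + x', which integrates to 1."""
    return x + xp

def f_marg(x):
    """Marginal of X: integral over x' of f(x, x'); analytically x + 1/2."""
    return sum(f_joint(x, xp) for xp in grid) * h

def f_cond(x, xp):
    """Conditional of X given X' = xp: f(x, xp) divided by the marginal of X' at xp."""
    denom = sum(f_joint(t, xp) for t in grid) * h
    return f_joint(x, xp) / denom

print(f_marg(0.25))  # x + 1/2 = 0.75
# a conditional pdf must integrate to 1 for any fixed x'
total = sum(f_cond(x, 0.3) for x in grid) * h
print(total)
```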
2.5 The expected value


E[g(X,X’)] = ∫ dx ∫ dx’ g(x, x’) fXX’(x, x’)
Examples:
The expected value of X is mx = E[X] = ∫ dx ∫ dx’ x fXX’(x, x’)
The expected value of X’ is mx’ = E[X’] = ∫ dx ∫ dx’ x’ fXX’(x, x’)
The variance of X is E[(X − mx)²] = ∫ dx ∫ dx’ (x − mx)² fXX’(x, x’)
The variance of X’ is E[(X’ − mx’)²] = ∫ dx ∫ dx’ (x’ − mx’)² fXX’(x, x’)
The covariance between X and X’ is
E[(X − mx)(X’ − mx’)] = ∫ dx ∫ dx’ (x − mx)(x’ − mx’) fXX’(x, x’)
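These double integrals can be approximated on a grid. A sketch reusing an assumed example joint, f(x, x’) = x + x’ on [0,1]² (my choice; for this pdf the covariance works out analytically to −1/144):

```python
n = 400
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]

def f_joint(x, xp):
    """Assumed example joint pdf on [0,1]^2: f(x, x') = x + x'."""
    return x + xp

def E(g):
    """E[g(X, X')] as a double midpoint sum approximating the double integral."""
    return sum(g(x, xp) * f_joint(x, xp) for x in grid for xp in grid) * h * h

mx = E(lambda x, xp: x)      # analytically 7/12
mxp = E(lambda x, xp: xp)    # 7/12 by symmetry
cov = E(lambda x, xp: (x - mx) * (xp - mxp))

print(mx, mxp, cov)  # cov is close to -1/144: X and X' are weakly negatively dependent
```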
2.6 Exercises
k if x  [a,b] and y  [a,b]
0 o/w
Let f XY ( x, y)  
1) Use the normalization constraint to find k so that fXY(x,y) is a pdf.


 dx  dy f XY ( x, y)  1  …  k=1/(b-a)
2
2) Assume that fXY(x, y) is the joint pdf of X and Y; what is the expected value of X?
mx = E[X] = ∫ dx ∫ dy x fXY(x, y) = ∫_a^b dx ∫_a^b dy x/(b−a)² = … = (a+b)/2
3) Calculate the variance of X.


var(X) = E[(X − mx)²] = E[X²] − mx² = ∫ dx ∫ dy x² fXY(x, y) − (a+b)²/4 = … = (b−a)²/12
4) Calculate the expected value of Y.
my = E[Y] = ∫ dx ∫ dy y fXY(x, y) = … = (a+b)/2
5) Calculate the covariance between X and Y.
cov(X,Y) = E[(X − mx)(Y − my)] = E[XY] − mx my = ∫_a^b dx ∫_a^b dy xy/(b−a)² − (a+b)²/4 = … = 0
6) Find the marginal pdf fX(x) of the random variable X
fX(x) = ∫ dy fXY(x, y) = ∫_a^b dy 1/(b−a)² = 1/(b−a) if a ≤ x ≤ b, 0 otherwise
7) Calculate E[X] using fX(x)
E[X] = ∫ dx x fX(x) = … = (a+b)/2
(same as in 2)
8) Find the marginal pdf fY(y) of the random variable Y
fY(y) = ∫ dx fXY(x, y) = ∫_a^b dx 1/(b−a)² = 1/(b−a) if a ≤ y ≤ b, 0 otherwise
9) Find the conditional pdf of X given that Y = y
fX|y(x | Y = y) = fXY(x, y) / fY(y) = 1/(b−a) if a ≤ x ≤ b and a ≤ y ≤ b, 0 otherwise
10) Find the probability that X < (a+b)/2 given that Y = b
( a b ) / 2
P[X <(a+b)/2 | Y=b] = 
dx f X |Y b ( X | Y  b) =
( a b ) / 2
a
dx /(b  a) =1/2
11
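A Monte Carlo check of the answers above (my sketch; a = 0 and b = 2 are arbitrary example endpoints). For this independent-uniform joint pdf the covariance should vanish, and since the conditional pdf of X given Y = y equals the uniform marginal, P[X < (a+b)/2 | Y = b] is just the unconditional frequency:

```python
import random

random.seed(0)
a, b = 0.0, 2.0      # arbitrary example endpoints (assumption)
N = 200000

# X and Y drawn independently, so fXY(x, y) = fX(x) fY(y) = 1/(b-a)^2 on the square
xs = [random.uniform(a, b) for _ in range(N)]
ys = [random.uniform(a, b) for _ in range(N)]

mx = sum(xs) / N
my = sum(ys) / N
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N

# by independence the conditional pdf of X given Y = y equals the marginal fX,
# so P[X < (a+b)/2 | Y = b] is estimated by the unconditional frequency
p = sum(1 for x in xs if x < (a + b) / 2) / N

print(mx, cov, p)  # approximately (a+b)/2 = 1, then 0, then 1/2
```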
Exercises on Conditional probability using discrete variables
Let A be a random variable taking values a1, a2, …, an .
Let B be a random variable taking values b1, b2, …, bm .
Let P[ai] represent the probability that A=ai .
Let P[bj] represent the probability that B=bj .
Let P[ai, bj] represent the probability that A=ai AND B=bj .
Then the probability P[ai | bj] that A=ai GIVEN that B=bj is
P[ai | bj] = P[ai , bj] / P[bj]
Example 1:
Consider the case where A takes values a1 or a2, and B takes values b1 or b2. In an experiment we
record over 1000 trials the values for A and B, and we obtain the following distribution
        b1    b2
a1     100   400
a2     100   400

Number of trials =
P[a1] =
P[a2] =
P[b1] =
P[b2] =
P[a1, b1] =
P[a1, b2] =
P[a1 | b1] =
P[a1 | b2] =
Example 2:
Redo the example with the following distribution
        b1    b2
a1     400   100
a2     100   400

Number of trials =
P[a1] =
P[a2] =
P[b1] =
P[b2] =
P[a1, b1] =
P[a1, b2] =
P[a1 | b1] =
P[a1 | b2] =
Note that in this example the conditional probability did change, while that was not the case in
Example 1. Why?
The change in probability can be thought of as knowledge updating:
P[a1] is the prior probability, while P[a1 | b1] is the updated probability once we know that B=b1.
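The blanks in both examples can be filled mechanically from the counts using P[ai | bj] = P[ai, bj] / P[bj]. A Python sketch of that computation for the two tables above:

```python
def cond_probs(counts):
    """Given trial counts indexed by (ai, bj), return P[a1], P[a1|b1], P[a1|b2]."""
    n = sum(counts.values())
    p_a1 = (counts[("a1", "b1")] + counts[("a1", "b2")]) / n
    p_b1 = (counts[("a1", "b1")] + counts[("a2", "b1")]) / n
    p_b2 = (counts[("a1", "b2")] + counts[("a2", "b2")]) / n
    # conditional probability: P[ai | bj] = P[ai, bj] / P[bj]
    return (p_a1,
            (counts[("a1", "b1")] / n) / p_b1,
            (counts[("a1", "b2")] / n) / p_b2)

example1 = {("a1", "b1"): 100, ("a1", "b2"): 400,
            ("a2", "b1"): 100, ("a2", "b2"): 400}
example2 = {("a1", "b1"): 400, ("a1", "b2"): 100,
            ("a2", "b1"): 100, ("a2", "b2"): 400}

print(cond_probs(example1))  # P[a1] = P[a1|b1] = P[a1|b2] = 0.5: observing B changes nothing
print(cond_probs(example2))  # P[a1] = 0.5 but P[a1|b1] = 0.8, P[a1|b2] = 0.2: B updates P[a1]
```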