Random Variables
Numerical Outcomes
• Consider associating a numerical value with each sample
point in a sample space.
• For example, toss two dice. The sample space consists of
the 36 equally likely ordered pairs:
(1,1) (2,1) (3,1) (4,1) (5,1) (6,1)
(1,2) (2,2) (3,2) (4,2) (5,2) (6,2)
(1,3) (2,3) (3,3) (4,3) (5,3) (6,3)
(1,4) (2,4) (3,4) (4,4) (5,4) (6,4)
(1,5) (2,5) (3,5) (4,5) (5,5) (6,5)
(1,6) (2,6) (3,6) (4,6) (5,6) (6,6)
• Assign to each pair the sum of the two dice, giving the
values 2, 3, …, 12.
• The function relating each outcome of a roll of the two
dice to its sum is a random variable, Y.
• We refer to values of the random variable as events.
For example, {Y = 9}, {Y = 10}, etc.
Probability Y = y
• The probability of an event, such as {Y = 9},
is denoted P(Y = 9).
• In general, for a real number y, the probability
of {Y = y} is denoted P(Y = y), or simply p(y).
• P(Y = 10), or p(10), is the sum of the probabilities of the
sample points which are assigned the value 10.
• When rolling two dice,
P(Y = 10) = P({(4, 6)}) + P({(5, 5)}) + P({(6, 4)})
= 1/36 + 1/36 + 1/36
= 3/36
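The 3/36 above can be reproduced by brute-force enumeration of the 36 sample points. A minimal sketch in Python (the helper name p_sum is mine):

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely ordered pairs (d1, d2) from rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

def p_sum(y):
    """P(Y = y), where Y = d1 + d2 is the sum of the two dice."""
    favorable = [pt for pt in outcomes if sum(pt) == y]
    return Fraction(len(favorable), 36)

print(p_sum(10))  # 1/12, i.e. 3/36
```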
Discrete Random Variable
• A discrete random variable is a random variable
that only assumes a finite (or countably infinite)
number of distinct values.
• For an experiment whose sample points are
associated with the integers or a subset of integers,
the random variable is discrete.
Probability Distribution
• A probability distribution describes the probability
for each value of the random variable.

 y    |   2    3    4    5    6    7    8    9   10   11   12
p(y)  | 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36

[Figure: histogram of p(y) for y = 2, …, 12; vertical axis from 0 to 0.18]

• A distribution may be presented as a table, formula, or graph.
Probability Distribution
• For a probability distribution:

 y    |   2    3    4    5    6    7    8    9   10   11   12
p(y)  | 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36

Σ_y p(y) = 1

Here we may take the sum just over those values of y
for which p(y) is non-zero. And, of course,
0 ≤ p(y) ≤ 1, for all y.
Expected Value
• The “long run theoretical average.”
• For a discrete R.V. with probability function p(y),
define the expected value of Y as:
E(Y) = Σ_y y·p(y)
• In a statistical context, E(Y) is referred to as the mean,
and so E(Y) and μ are interchangeable.
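For the two-dice sum, E(Y) works out to 7. A quick check against the table of p(y) values, written here via the closed form p(y) = (6 − |y − 7|)/36:

```python
from fractions import Fraction

# p(y) for the sum of two fair dice: 1/36, 2/36, ..., 6/36, ..., 1/36
p = {y: Fraction(6 - abs(y - 7), 36) for y in range(2, 13)}

# E(Y) = Σ_y y·p(y)
mean = sum(y * p[y] for y in p)
print(mean)  # 7
```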
For a constant multiple…
• Of course, a constant multiple may be factored out
of the sum:
E(cY) = Σ_y (c·y)·p(y) = c·Σ_y y·p(y) = c·E(Y)
• Thus, for our circles, E(C) = E(2πR) = 2π·E(R).
For a constant function…
• In particular, if g(y) = c for all y,
then E[g(Y)] = E(c) = c.
E(c) = Σ_y c·p(y) = c·Σ_y p(y) = (c)(1) = c
Function of a Random Variable
• Suppose g(Y) is a real-valued function of a
discrete random variable Y. It follows that g(Y) is also
a random variable, with expected value
E[g(Y)] = Σ_y g(y)·p(y)
• In particular, for g(Y) = Y², we have
E[Y²] = Σ_y y²·p(y)
Try this!
• For the following distribution:

 y    |  -2    0    1    4    5    7
p(y)  | 0.10 0.15 0.20 0.25 0.25 0.05

• Compute the values
E(Y), E(3Y), E(Y²), and E(Y³)
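One way to check your answers, treating each expectation as Σ_y g(y)·p(y) (the helper name expect is mine):

```python
y_vals = [-2, 0, 1, 4, 5, 7]
p_vals = [0.10, 0.15, 0.20, 0.25, 0.25, 0.05]

def expect(g):
    # E[g(Y)] = Σ_y g(y)·p(y) over the support
    return sum(g(y) * p for y, p in zip(y_vals, p_vals))

E_Y  = expect(lambda y: y)       # ≈ 2.6
E_3Y = expect(lambda y: 3 * y)   # ≈ 7.8  (= 3·E(Y))
E_Y2 = expect(lambda y: y**2)    # ≈ 13.3
E_Y3 = expect(lambda y: y**3)    # ≈ 63.8
```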
For sums of variables…
• Also, if g₁(Y) and g₂(Y) are both functions of the
random variable Y, then
E[g₁(Y) + g₂(Y)] = Σ_y (g₁(y) + g₂(y))·p(y)
                 = Σ_y [g₁(y)·p(y) + g₂(y)·p(y)]
                 = Σ_y g₁(y)·p(y) + Σ_y g₂(y)·p(y)
                 = E[g₁(Y)] + E[g₂(Y)]
All together now…
• So, when working with expected values, we have
E[g₁(Y) + g₂(Y)] = E[g₁(Y)] + E[g₂(Y)],
E(cY) = c·E(Y), and E(c) = c.
• Thus, for a linear combination Z = c·g(Y) + b,
where c and b are constants:
E(Z) = E[c·g(Y) + b]
     = E[c·g(Y)] + E(b)
     = c·E[g(Y)] + b
Try this!
• For the following distribution:

 y    |  -2    0    1    4    5    7
p(y)  | 0.10 0.15 0.20 0.25 0.25 0.05

• Compute the values
E(Y² + 2), E(2Y + 5), and E(Y² − Y)
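By linearity, E(Y² + 2) = E(Y²) + 2 and E(2Y + 5) = 2·E(Y) + 5, which a direct computation confirms (the helper name expect is mine):

```python
y_vals = [-2, 0, 1, 4, 5, 7]
p_vals = [0.10, 0.15, 0.20, 0.25, 0.25, 0.05]

def expect(g):
    # E[g(Y)] = Σ_y g(y)·p(y)
    return sum(g(y) * p for y, p in zip(y_vals, p_vals))

e1 = expect(lambda y: y**2 + 2)  # ≈ 15.3  (= E(Y²) + 2)
e2 = expect(lambda y: 2*y + 5)   # ≈ 10.2  (= 2·E(Y) + 5)
e3 = expect(lambda y: y**2 - y)  # ≈ 10.7  (= E(Y²) − E(Y))
```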
Variance, V(Y)
• For a discrete R.V. with probability function p(y),
define the variance of Y as:
V(Y) = E[(Y − μ)²]
• Here, we use V(Y) and σ² interchangeably to
denote the variance. The positive square root of
the variance is the standard deviation of Y.
• It can be shown that V(cY + b) = c²·V(Y).
• Note the variance of a constant is zero.
Computing V(Y)
• Applying our rules for expected value, we
find the variance may be expressed as
V(Y) = E[(Y − μ)²] = E[Y² − 2Yμ + μ²]
     = E[Y²] − 2μ·E[Y] + E[μ²]   (as the mean is a constant)
     = E[Y²] − 2μ·μ + μ²
     = E[Y²] − μ²,  or  E[Y²] − (E[Y])²
• When computing the variance, it is often easier to
use the formula
V(Y) = E[Y²] − (E[Y])²
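Both forms give the same number, which can be verified exactly for the two-dice sum (where V(Y) = 35/6):

```python
from fractions import Fraction

# Distribution of the sum of two fair dice.
p = {y: Fraction(6 - abs(y - 7), 36) for y in range(2, 13)}

mu   = sum(y * p[y] for y in p)             # E(Y) = 7
var1 = sum((y - mu)**2 * p[y] for y in p)   # definition: E[(Y − μ)²]
var2 = sum(y**2 * p[y] for y in p) - mu**2  # shortcut: E[Y²] − μ²
print(var1, var2)  # 35/6 35/6
```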
Try this!
• For the following distribution:

 y    |  -2    0    1    4    5    7
p(y)  | 0.10 0.15 0.20 0.25 0.25 0.05

• Compute the values
V(Y), V(2Y), and V(2Y + 5).
• How would you compute V(Y²)?
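A sketch for checking the variance answers; the same shortcut formula handles V(Y²) as well, by taking g(y) = y² (helper names are mine):

```python
y_vals = [-2, 0, 1, 4, 5, 7]
p_vals = [0.10, 0.15, 0.20, 0.25, 0.25, 0.05]

def expect(g):
    return sum(g(y) * p for y, p in zip(y_vals, p_vals))

def variance(g):
    # V[g(Y)] = E[g(Y)²] − (E[g(Y)])²
    return expect(lambda y: g(y)**2) - expect(g)**2

v_y   = variance(lambda y: y)         # ≈ 6.54
v_2y  = variance(lambda y: 2*y)       # ≈ 26.16  (= 4·V(Y))
v_2y5 = variance(lambda y: 2*y + 5)   # ≈ 26.16  (the shift b does not matter)
v_y2  = variance(lambda y: y**2)      # V(Y²), via the same formula
```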
“Moments and Mass”
• Note the probability function p(y) for a discrete
random variable is also called a “probability mass”
function (or, loosely, a “probability density” function).
• The expected values E(Y) and E(Y²) are called the
first and second moments, respectively.
Continuous Random Variables
• For discrete random variables, we required
that Y was limited to a finite (or countably
infinite) set of values.
• Now, for continuous random variables, we
allow Y to take on any value in some
interval of real numbers.
• As a result, P(Y = y) = 0
for any given value y.
CDF
• For continuous random variables, define the
cumulative distribution function F(y) such that
F(y) = P(Y ≤ y),  −∞ < y < ∞
Thus, we have
lim_{y→−∞} F(y) = 0  and  lim_{y→+∞} F(y) = 1
PDF
• For the continuous random variable Y,
define the probability density function as
f(y) = d[F(y)]/dy = F′(y)
for each y for which the derivative exists.
Integrating a PDF
• Based on the probability density function,
we may write
F(y) = ∫_{−∞}^{y} f(t) dt
Remember the 2nd Fundamental Theorem of Calculus?
Properties of a PDF
• For a density function f(y):
1) f(y) ≥ 0 for any value of y.
2) ∫_{−∞}^{∞} f(t) dt = P(Y ≤ ∞) = 1

[Figure: graphs of a density function f(y) and its
distribution function F(y)]
Try this!
• For what value of k is the following function a
density function?
f(y) = { k·y(1 − y),  for 0 ≤ y ≤ 1
       { 0,           otherwise
• We must satisfy the property
∫_{−∞}^{∞} f(t) dt = P(Y ≤ ∞) = 1
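Since ∫₀¹ y(1 − y) dy = 1/6, the answer is k = 6. A numeric sanity check with a simple midpoint rule (the integral helper is mine):

```python
def integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

base = integral(lambda y: y * (1 - y), 0, 1)  # ≈ 1/6
k = 1 / base
print(round(k, 6))  # 6.0
```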
Try this!
• For what value of k is the following function a
density function?
f(y) = { k·e^(−0.2y),  for 0 ≤ y
       { 0,            otherwise
• Again, we must satisfy the property
∫_{−∞}^{∞} f(t) dt = P(Y ≤ ∞) = 1
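Here ∫₀^∞ e^(−0.2y) dy = 1/0.2 = 5, so k = 0.2. A numeric check, truncating the improper integral at y = 200 (where e^(−0.2y) is negligible):

```python
import math

def integral(f, a, b, n=200_000):
    # Midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

base = integral(lambda y: math.exp(-0.2 * y), 0, 200)  # ≈ 5
k = 1 / base  # ≈ 0.2
```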
P(a < Y < b)
• To compute the probability of the event
a < Y < b (or, equivalently, a ≤ Y ≤ b),
we just integrate the PDF:
P(a ≤ Y ≤ b) = F(b) − F(a) = ∫_a^b f(t) dt
• For example,
F(5) − F(3) = ∫_3^5 f(t) dt
Try this!
• For the previous density function
f(y) = { k·y(1 − y),  for 0 ≤ y ≤ 1
       { 0,           otherwise
• Find the probability P(0.4 ≤ Y ≤ 1)
• Find the probability P(Y ≤ 0.4 | Y ≤ 0.8)
Try this!
• Suppose Y is time to failure and
F(y) = { 1 − e^(−y²),  for y ≥ 0
       { 0,            otherwise
• Determine the density function f(y)
• Find the probability P(Y > 2)
• Find the probability P(Y > 1 | Y < 2)
Expected Value, E(Y)
• For a continuous random variable Y, define the
expected value of Y as
E(Y) = ∫_{−∞}^{∞} y·f(y) dy,  if it exists.
• Note this parallels our earlier definition for the
discrete random variable:
E(Y) = Σ_y y·p(y)
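For instance, for the density f(y) = 6y(1 − y) from the earlier exercise, E(Y) = ∫₀¹ y·6y(1 − y) dy = 1/2, as symmetry suggests. A numeric check (the integral helper is mine):

```python
def integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# E(Y) = ∫ y·f(y) dy with f(y) = 6y(1 − y) on [0, 1]
mean = integral(lambda y: y * 6 * y * (1 - y), 0, 1)  # ≈ 0.5
```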
Expected Value, E[g(Y)]
• For a continuous random variable Y, define the
expected value of a function of Y as
E[g(Y)] = ∫_{−∞}^{∞} g(y)·f(y) dy,  if it exists.
• Again, this parallels our earlier definition for the
discrete case:
E[g(Y)] = Σ_y g(y)·p(y)
Properties of Expected Value
• In the continuous case, all of our earlier properties
for working with expected value are still valid:
E(c) = ∫_{−∞}^{∞} c·f(y) dy = c
E(aY + b) = a·E(Y) + b
E[g₁(Y) + g₂(Y)] = E[g₁(Y)] + E[g₂(Y)]
Properties of Variance
• In the continuous case, our earlier properties for
variance also remain valid:
V(Y) = E[(Y − μ)²] = E(Y²) − [E(Y)]²
and
V(aY + b) = a²·V(Y)
Problem from MAT 332
• Find the mean and variance of Y, given
f(y) = { 0.2,          −1 ≤ y ≤ 0
       { 0.2 + 1.2y,   0 < y ≤ 1
       { 0,            otherwise
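A numeric sketch of the computation: integrating exactly gives E(Y) = 0.4 and V(Y) = 13/30 − 0.16 ≈ 0.2733, which a midpoint rule reproduces (helper names are mine):

```python
def integral(g, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def f(y):
    # Piecewise density from the problem statement
    if -1 <= y <= 0:
        return 0.2
    if 0 < y <= 1:
        return 0.2 + 1.2 * y
    return 0.0

mean = integral(lambda y: y * f(y), -1, 1)     # ≈ 0.4
ey2  = integral(lambda y: y**2 * f(y), -1, 1)  # ≈ 13/30
var  = ey2 - mean**2                           # ≈ 0.2733
```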