MTH/STA 561
JOINT PROBABILITY DISTRIBUTIONS
The study of random variables in the previous chapters was restricted to the idea that a random variable associates a single real number with each possible outcome of an experiment. Subsequently, a probability distribution was defined for the one-dimensional random variable, based upon the probability measure for the experiment. We can equally well define rules that associate two numbers, three numbers, or, in general, $n$ numbers with the possible outcomes of an experiment. These rules define two-dimensional, three-dimensional, or, in the general case, $n$-dimensional random variables (or random vectors). Again, the probability distribution for the $n$-dimensional random variable (random vector) derives directly from the probability measure for the experiment.
As an example, consider an experiment of tossing a fair coin and a balanced die together once. Let $Y_1$ be a binary variable that takes on the value 1 if the coin turns up heads and the value 0 otherwise. Also, let $Y_2$ be the number of dots appearing on the die. The sample space associated with the experiment is
$$S = \{(y_1, y_2) \mid y_1 = 0, 1 \text{ and } y_2 = 1, 2, 3, 4, 5, 6\}.$$
Assume that all 12 sample points are equally likely. Then the random variables $Y_1$ and $Y_2$ associate two numbers with each possible outcome of the experiment as follows. Letting $y_1$ and $y_2$ denote the realized values of $Y_1$ and $Y_2$, respectively, we have
Sample Point   (y1, y2)        Sample Point   (y1, y2)
(Head, 1)      (1, 1)          (Head, 4)      (1, 4)
(Tail, 1)      (0, 1)          (Tail, 4)      (0, 4)
(Head, 2)      (1, 2)          (Head, 5)      (1, 5)
(Tail, 2)      (0, 2)          (Tail, 5)      (0, 5)
(Head, 3)      (1, 3)          (Head, 6)      (1, 6)
(Tail, 3)      (0, 3)          (Tail, 6)      (0, 6)
and, accordingly, the bivariate probability distribution for $Y_1$ and $Y_2$ is given by
$$p(y_1, y_2) = P\{Y_1 = y_1, Y_2 = y_2\} = \frac{1}{12}$$
for $y_1 = 0, 1$ and $y_2 = 1, 2, 3, 4, 5, 6$. It is also clear that
$$P\{Y_1 = 1,\ 3 \le Y_2 \le 5\} = p(1, 3) + p(1, 4) + p(1, 5) = \frac{3}{12} = \frac{1}{4}.$$
It should be noted here that the vector $(Y_1, Y_2)$ thus defined can be thought of as a rule that associates an ordered pair $(y_1, y_2)$ of realized values with each sample point of the sample space $S$, and is referred to as a two-dimensional random vector. Since $(Y_1, Y_2)$ is a two-dimensional random vector, we can visualize the possible observed values $(y_1, y_2)$ as points in a two-dimensional space, and an event is a collection of points or a region in that two-dimensional space. In general, an $n$-dimensional random vector may be defined as follows.
Definition 1. Let $S$ be the sample space of an experiment. An $n$-dimensional random vector $(Y_1, Y_2, \ldots, Y_n)$ is a rule (or, equivalently, an ordered collection of $n$ rules) that associates an $n$-tuple with each sample point of $S$. We will equivalently say that $Y_1, Y_2, \ldots, Y_n$ are jointly distributed random variables.

If $(Y_1, Y_2, \ldots, Y_n)$ is an $n$-dimensional random vector, we can visualize the possible observed values $(y_1, y_2, \ldots, y_n)$ as points in an $n$-dimensional space. For an $n$-dimensional random vector, events are collections of points or regions in the $n$-dimensional space.
I. Joint Probability Distributions for Discrete Random Variables
Based upon the discussions in the preceding paragraphs, it is natural to define the bivariate probability distribution for discrete random variables $Y_1$ and $Y_2$ as follows.
Definition 2. The function $p(y_1, y_2)$ is a bivariate probability distribution for discrete random variables $Y_1$ and $Y_2$ if the following properties hold:

(1) $p(y_1, y_2) \ge 0$ for all real numbers $y_1$ and $y_2$.

(2) $\sum_{y_1} \sum_{y_2} p(y_1, y_2) = 1$, where the sum ranges over all values $(y_1, y_2)$ that are assigned nonzero probabilities.

(3) $p(y_1, y_2) = P\{Y_1 = y_1, Y_2 = y_2\}$.

(4) For any region $A$ in the two-dimensional space,
$$P\{(Y_1, Y_2) \in A\} = \sum_{(y_1, y_2) \in A} p(y_1, y_2).$$
Below are examples of bivariate probability distributions for two discrete random variables.
Example 1. Suppose that random variables $Y_1$ and $Y_2$ have the joint probability distribution given by
$$p(y_1, y_2) = \begin{cases} \dfrac{4}{3^{y_1 + y_2}} & \text{for } y_1 = 1, 2, 3, \ldots \text{ and } y_2 = 1, 2, 3, \ldots \\ 0 & \text{elsewhere.} \end{cases}$$
Clearly,
$$\sum_{y_1=1}^{\infty} \sum_{y_2=1}^{\infty} \frac{4}{3^{y_1 + y_2}} = 4 \left( \sum_{y_1=1}^{\infty} \frac{1}{3^{y_1}} \right) \left( \sum_{y_2=1}^{\infty} \frac{1}{3^{y_2}} \right) = 4 \left( \frac{1/3}{1 - 1/3} \right) \left( \frac{1/3}{1 - 1/3} \right) = 4 \cdot \frac{1}{2} \cdot \frac{1}{2} = 1,$$
where the two summations in the middle expression are geometric series with ratio $\frac{1}{3}$.
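As a quick numerical sanity check (a minimal Python sketch, not part of the original notes), truncating the doubly infinite sum at a modest value already gives a total very close to 1.

```python
# Numerical check for Example 1: p(y1, y2) = 4 / 3**(y1 + y2), y1, y2 = 1, 2, 3, ...
# Truncating the doubly infinite sum at N terms should give a total close to 1.
N = 50  # truncation point; the tail beyond this is negligible

total = sum(4 / 3 ** (y1 + y2) for y1 in range(1, N + 1) for y2 in range(1, N + 1))
print(total)  # approximately 1.0
```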
Example 2. A local supermarket has three checkout counters. Two customers, say A and B, arrive at the counters at different times when the counters are serving no other customers. Each customer chooses a counter at random, independently of the other. Let $Y_1$ be the number of customers who choose counter 1, and $Y_2$ the number of customers who choose counter 2. All possible arrivals of the two customers are listed in the following table:

Counter 1   Counter 2   Counter 3   (y1, y2)
A           B                       (1, 1)
A                       B           (1, 0)
            A           B           (0, 1)
B           A                       (1, 1)
B                       A           (1, 0)
            B           A           (0, 1)
AB                                  (2, 0)
            AB                      (0, 2)
                        AB          (0, 0)
Then the bivariate probability distribution for $Y_1$ and $Y_2$ is given by

                 y1
  y2        0      1      2
   0       1/9    2/9    1/9
   1       2/9    2/9     0
   2       1/9     0      0
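This table can be reproduced by brute-force enumeration of the nine equally likely counter choices; the following sketch (illustrative only, using Python's standard library) tallies the joint probabilities as exact fractions.

```python
# Enumerate the 3 x 3 equally likely counter choices of customers A and B (Example 2)
from fractions import Fraction
from itertools import product
from collections import Counter

counts = Counter()
for a, b in product([1, 2, 3], repeat=2):   # counters chosen by A and B
    y1 = (a == 1) + (b == 1)                # number of customers at counter 1
    y2 = (a == 2) + (b == 2)                # number of customers at counter 2
    counts[(y1, y2)] += 1

pmf = {pair: Fraction(n, 9) for pair, n in counts.items()}
print(pmf)  # e.g. pmf[(1, 1)] == Fraction(2, 9), pmf[(0, 0)] == Fraction(1, 9)
```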
Example 3. Consider an experiment of drawing two marbles at random from an urn which contains 3 blue marbles, 2 red marbles, and 3 green marbles. Let $Y_1$ be the number of blue marbles drawn, and $Y_2$ the number of red marbles drawn. Then $(Y_1, Y_2)$ takes on values $(y_1, y_2)$ with $y_1 = 0, 1, 2$, $y_2 = 0, 1, 2$, and $0 \le y_1 + y_2 \le 2$. The total number of equally likely ways of drawing any two marbles from the eight is $\binom{8}{2} = 28$, and the number of ways of drawing $y_1$ blue marbles, $y_2$ red marbles, and $2 - y_1 - y_2$ green marbles is $\binom{3}{y_1}\binom{2}{y_2}\binom{3}{2 - y_1 - y_2}$. Hence, the bivariate probability distribution for $Y_1$ and $Y_2$ is given by
$$p(y_1, y_2) = \frac{\dbinom{3}{y_1}\dbinom{2}{y_2}\dbinom{3}{2 - y_1 - y_2}}{\dbinom{8}{2}}$$
for $y_1 = 0, 1, 2$, $y_2 = 0, 1, 2$, and $0 \le y_1 + y_2 \le 2$; that is, for $(y_1, y_2) = (0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (2, 0)$. More specifically, $p(y_1, y_2)$ is given by
                 y1
  y2        0       1       2
   0       3/28    9/28    3/28
   1       3/14    3/14     0
   2       1/28     0       0
Hence, the probability that at least one green marble will be drawn is
$$P\{Y_1 + Y_2 \le 1\} = p(0, 0) + p(0, 1) + p(1, 0) = \frac{3}{28} + \frac{3}{14} + \frac{9}{28} = \frac{9}{14}.$$
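For reference (again a sketch, not part of the original notes), the same table follows from `math.comb` in Python, and summing the entries with $y_1 + y_2 \le 1$ reproduces $9/14$.

```python
# Example 3: p(y1, y2) = C(3, y1) * C(2, y2) * C(3, 2 - y1 - y2) / C(8, 2)
from fractions import Fraction
from math import comb

pmf = {}
for y1 in range(3):
    for y2 in range(3):
        if 0 <= y1 + y2 <= 2:
            pmf[(y1, y2)] = Fraction(comb(3, y1) * comb(2, y2) * comb(3, 2 - y1 - y2),
                                     comb(8, 2))

print(sum(pmf.values()))                                        # Fraction(1, 1)
print(sum(p for (y1, y2), p in pmf.items() if y1 + y2 <= 1))    # Fraction(9, 14)
```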
When $(Y_1, Y_2, \ldots, Y_n)$ is a discrete $n$-dimensional random vector, that is, when $Y_1, Y_2, \ldots, Y_n$ are $n$ discrete (one-dimensional) random variables, the corresponding joint probability function, or joint probability distribution, of $Y_1, Y_2, \ldots, Y_n$ may be specified by
$$p(y_1, y_2, \ldots, y_n) = P\{Y_1 = y_1, Y_2 = y_2, \ldots, Y_n = y_n\},$$
which gives the probability of occurrence of individual points (in the $n$-dimensional space); as in the one-dimensional case, the probability of any event is computed by summing the probabilities of occurrence of the individual points that belong to the event of interest. Evidently, the joint probability function of $Y_1, Y_2, \ldots, Y_n$ must satisfy the properties in the following definition.
Definition 3. The function $p(y_1, y_2, \ldots, y_n)$ is a joint probability distribution for discrete random variables $Y_1, Y_2, \ldots, Y_n$ if the following conditions hold:

(1) $p(y_1, y_2, \ldots, y_n) \ge 0$ for all real numbers $y_1, y_2, \ldots, y_n$.

(2) $\sum_{y_1} \sum_{y_2} \cdots \sum_{y_n} p(y_1, y_2, \ldots, y_n) = 1$, where the sum ranges over all values $(y_1, y_2, \ldots, y_n)$ that are assigned nonzero probabilities.

(3) $p(y_1, y_2, \ldots, y_n) = P\{Y_1 = y_1, Y_2 = y_2, \ldots, Y_n = y_n\}$.

(4) For any region $A$ in the $n$-dimensional space,
$$P\{(Y_1, Y_2, \ldots, Y_n) \in A\} = \sum_{(y_1, y_2, \ldots, y_n) \in A} p(y_1, y_2, \ldots, y_n).$$
II. Joint Distribution Functions
Remember that the one-dimensional continuous random variable and its univariate probability density function are defined through its univariate distribution function. In a similar fashion, an $n$-dimensional random vector and its joint probability density function can also be derived from its joint distribution function. With this in mind, it is necessary to define the joint distribution function for the $n$-dimensional random vector. To do so, let us first define the bivariate distribution function (which works for both the discrete and continuous cases) and, subsequently, its bivariate probability density function for the continuous case only. Naturally, this bivariate distribution function can be readily extended to the joint distribution function for an $n$-dimensional random vector.
In analogy with the distribution function $F(y) = P\{Y \le y\}$ for a one-dimensional random variable $Y$, the bivariate distribution function for any two random variables $Y_1$ and $Y_2$ (discrete or continuous) can be defined as follows:

Definition 4. For any two random variables $Y_1$ and $Y_2$, the bivariate distribution function $F(y_1, y_2)$ is given by
$$F(y_1, y_2) = P\{Y_1 \le y_1, Y_2 \le y_2\}$$
for $-\infty < y_1 < \infty$ and $-\infty < y_2 < \infty$.
Likewise, the joint distribution function of random variables $Y_1, Y_2, \ldots, Y_n$ (discrete or continuous) is defined in a similar form as follows:

Definition 5. The joint distribution function $F(y_1, y_2, \ldots, y_n)$ of random variables $Y_1, Y_2, \ldots, Y_n$ is given by
$$F(y_1, y_2, \ldots, y_n) = P\{Y_1 \le y_1, Y_2 \le y_2, \ldots, Y_n \le y_n\}$$
for $-\infty < y_i < \infty$ ($i = 1, 2, \ldots, n$).
It is usually easier to evaluate the bivariate distribution function in the discrete case than in the continuous case. Typical evaluation of such distribution functions in the discrete case relies upon counting the possible discrete pairs $(y_1, y_2)$ of realized values of the random vector $(Y_1, Y_2)$, as demonstrated in the example below.
Example 4. Referring to the coin-and-die example, we have
$$F(1, 3) = P\{Y_1 \le 1, Y_2 \le 3\} = p(0, 1) + p(0, 2) + p(0, 3) + p(1, 1) + p(1, 2) + p(1, 3) = \frac{6}{12} = \frac{1}{2}.$$
Referring to Example 2, we obtain
$$F(-1, 2) = P\{Y_1 \le -1, Y_2 \le 2\} = P(\emptyset) = 0$$
and
$$F(1.5, 2) = P\{Y_1 \le 1.5, Y_2 \le 2\} = p(0, 0) + p(0, 1) + p(0, 2) + p(1, 0) + p(1, 1) + p(1, 2) = \frac{1}{9} + \frac{2}{9} + \frac{1}{9} + \frac{2}{9} + \frac{2}{9} + 0 = \frac{8}{9}.$$
Referring to Example 3, we get
$$F(2.5, 1.6) = P\{Y_1 \le 2.5, Y_2 \le 1.6\} = p(0, 0) + p(0, 1) + p(1, 0) + p(1, 1) + p(2, 0) + p(2, 1) = \frac{3}{28} + \frac{3}{14} + \frac{9}{28} + \frac{3}{14} + \frac{3}{28} + 0 = \frac{27}{28}.$$
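In the discrete case the distribution function is just a double sum of the probability function; the short sketch below (illustrative only) evaluates $F(1.5, 2) = 8/9$ for the supermarket distribution of Example 2 in exactly this way.

```python
# Example 4: evaluating a discrete bivariate CDF by summing the probability function
# over all pairs (t1, t2) with t1 <= y1 and t2 <= y2 (here, the Example 2 pmf).
from fractions import Fraction

pmf = {(0, 0): Fraction(1, 9), (1, 0): Fraction(2, 9), (2, 0): Fraction(1, 9),
       (0, 1): Fraction(2, 9), (1, 1): Fraction(2, 9), (0, 2): Fraction(1, 9)}

def F(y1, y2):
    return sum(p for (t1, t2), p in pmf.items() if t1 <= y1 and t2 <= y2)

print(F(1.5, 2))   # Fraction(8, 9)
print(F(-1, 2))    # 0 (empty sum)
```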
As with the distribution function for a one-dimensional random variable, it is possible to show that the distribution function $F(y_1, y_2, \ldots, y_n)$ is nondecreasing and continuous at least from the right with respect to every variable. Since
$$\{Y_1 < -\infty\} \cap \{Y_2 < y_2\} \cap \cdots \cap \{Y_n < y_n\},$$
$$\{Y_1 < y_1\} \cap \{Y_2 < -\infty\} \cap \cdots \cap \{Y_n < y_n\},$$
$$\vdots$$
$$\{Y_1 < y_1\} \cap \{Y_2 < y_2\} \cap \cdots \cap \{Y_n < -\infty\}$$
are impossible events, it is clear that
$$F(-\infty, y_2, \ldots, y_n) = F(y_1, -\infty, \ldots, y_n) = \cdots = F(y_1, y_2, \ldots, -\infty) = 0.$$
Moreover, since
$$\{-\infty < Y_1 < \infty\} \cap \{-\infty < Y_2 < \infty\} \cap \cdots \cap \{-\infty < Y_n < \infty\} = S,$$
where $S$ is the sample space, we have the equality
$$F(\infty, \infty, \ldots, \infty) = P\{-\infty < Y_1 < \infty, -\infty < Y_2 < \infty, \ldots, -\infty < Y_n < \infty\} = P(S) = 1.$$
For a two-dimensional random vector $(Y_1, Y_2)$, it should be noticed that we also have the following equality:
$$P\{a_1 < Y_1 \le a_2, b_1 < Y_2 \le b_2\} = P\{Y_1 \le a_2, Y_2 \le b_2\} - P\{Y_1 \le a_2, Y_2 \le b_1\} - P\{Y_1 \le a_1, Y_2 \le b_2\} + P\{Y_1 \le a_1, Y_2 \le b_1\}$$
$$= F(a_2, b_2) - F(a_2, b_1) - F(a_1, b_2) + F(a_1, b_1),$$
as illustrated in the figure below.

[Figure: the rectangle $a_1 < y_1 \le a_2$, $b_1 < y_2 \le b_2$ in the $(y_1, y_2)$-plane.]
A good demonstration of the above result is laid out in the following example.
Example 5. Referring to Example 3, it is clear that the event $\{1 < Y_1 \le 2, 0 < Y_2 \le 1\}$ contains only one possible pair, $(2, 1)$, and so
$$P\{1 < Y_1 \le 2, 0 < Y_2 \le 1\} = p(2, 1) = 0.$$
On the other hand, it follows from the above formula that
$$P\{1 < Y_1 \le 2, 0 < Y_2 \le 1\} = F(2, 1) - F(2, 0) - F(1, 1) + F(1, 0) = \frac{27}{28} - \frac{15}{28} - \frac{6}{7} + \frac{3}{7} = 0,$$
where
$$F(2, 1) = p(0, 0) + p(0, 1) + p(1, 0) + p(1, 1) + p(2, 0) + p(2, 1) = \frac{3}{28} + \frac{3}{14} + \frac{9}{28} + \frac{3}{14} + \frac{3}{28} + 0 = \frac{27}{28},$$
$$F(2, 0) = p(0, 0) + p(1, 0) + p(2, 0) = \frac{3}{28} + \frac{9}{28} + \frac{3}{28} = \frac{15}{28},$$
$$F(1, 1) = p(0, 0) + p(0, 1) + p(1, 0) + p(1, 1) = \frac{3}{28} + \frac{3}{14} + \frac{9}{28} + \frac{3}{14} = \frac{6}{7},$$
$$F(1, 0) = p(0, 0) + p(1, 0) = \frac{3}{28} + \frac{9}{28} = \frac{3}{7}.$$
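The rectangle identity can also be checked mechanically (a sketch, not from the notes, rebuilding the Example 3 probabilities with `math.comb`).

```python
# Example 5: P{a1 < Y1 <= a2, b1 < Y2 <= b2}
#            = F(a2, b2) - F(a2, b1) - F(a1, b2) + F(a1, b1), checked on the Example 3 pmf.
from fractions import Fraction
from math import comb

pmf = {(y1, y2): Fraction(comb(3, y1) * comb(2, y2) * comb(3, 2 - y1 - y2), comb(8, 2))
       for y1 in range(3) for y2 in range(3) if y1 + y2 <= 2}

def F(y1, y2):
    return sum(p for (t1, t2), p in pmf.items() if t1 <= y1 and t2 <= y2)

a1, a2, b1, b2 = 1, 2, 0, 1
print(F(a2, b2) - F(a2, b1) - F(a1, b2) + F(a1, b1))  # 0, matching p(2, 1) = 0
```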
In contrast to the one-dimensional distribution function, in order that the function $F(y_1, y_2)$ be the distribution function of a two-dimensional random vector, it is not sufficient that this function be continuous from the right, nondecreasing with respect to each of the variables, and satisfy the following conditions:
$$F(-\infty, y_2) = F(y_1, -\infty) = 0 \quad \text{and} \quad F(\infty, \infty) = 1.$$
To see this, consider the function
$$F(y_1, y_2) = \begin{cases} 0 & \text{for } y_1 + y_2 < 0 \\ 1 & \text{for } y_1 + y_2 \ge 0; \end{cases}$$
that is, the function $F(y_1, y_2)$ takes on the value 1 for the points on and above the line $y_2 = -y_1$, and the value 0 for the points below the line. This function is nondecreasing, continuous from the right with respect to $y_1$ and $y_2$, and satisfies
$$F(-\infty, y_2) = F(y_1, -\infty) = 0 \quad \text{and} \quad F(\infty, \infty) = 1.$$
However, it does not satisfy the equality
$$P\{a_1 < Y_1 \le a_2, b_1 < Y_2 \le b_2\} = F(a_2, b_2) - F(a_2, b_1) - F(a_1, b_2) + F(a_1, b_1).$$
For instance,
$$P\{-2 < Y_1 \le 2, -1 < Y_2 \le 3\} = F(2, 3) - F(2, -1) - F(-2, 3) + F(-2, -1) = 1 - 1 - 1 + 0 = -1 < 0.$$
Remark. A real-valued function $F(y_1, y_2)$ is a distribution function of a two-dimensional random vector if and only if the following conditions hold:

(1) $F(y_1, y_2)$ is nondecreasing and continuous at least from the right with respect to both arguments $y_1$ and $y_2$.

(2) $F(-\infty, -\infty) = F(y_1, -\infty) = F(-\infty, y_2) = 0$ and $F(\infty, \infty) = 1$.

(3) If $a_1 \le a_2$ and $b_1 \le b_2$, then
$$P\{a_1 < Y_1 \le a_2, b_1 < Y_2 \le b_2\} = F(a_2, b_2) - F(a_2, b_1) - F(a_1, b_2) + F(a_1, b_1) \ge 0.$$
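As a concrete illustration (a small Python sketch, not from the notes), evaluating the rectangle expression for the counterexample function above shows directly that condition (3) fails.

```python
# Counterexample: F(y1, y2) = 1 if y1 + y2 >= 0, else 0, satisfies conditions (1)-(2)
# but violates condition (3): the rectangle expression can be negative.
def F_bad(y1, y2):
    return 1.0 if y1 + y2 >= 0 else 0.0

a1, a2, b1, b2 = -2, 2, -1, 3
rect = F_bad(a2, b2) - F_bad(a2, b1) - F_bad(a1, b2) + F_bad(a1, b1)
print(rect)  # -1.0, which cannot be a probability
```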
We shall mainly consider multi-dimensional random vectors of the discrete or continuous
type.
Definition 6. A two-dimensional random vector $(Y_1, Y_2)$ is said to be discrete if, with probability 1, it takes on pairs of values belonging to a set $A$ of pairs that is at most countable, and every pair $(a, b)$ in $A$ is taken on with positive probability $P\{Y_1 = a, Y_2 = b\}$. We call these pairs of values jump points, and their probabilities jumps.
The joint distribution function $F(y_1, y_2)$ of two discrete random variables $Y_1$ and $Y_2$ is given by
$$F(y_1, y_2) = \sum_{t_1 \le y_1} \sum_{t_2 \le y_2} p(t_1, t_2),$$
and the joint probability function $p(y_1, y_2)$ is defined to be
$$p(y_1, y_2) = P\{Y_1 = y_1, Y_2 = y_2\}.$$
In the discrete case, the joint distribution function for random variables $Y_1, Y_2, \ldots, Y_n$ is given by
$$F(y_1, y_2, \ldots, y_n) = \sum_{t_1 \le y_1} \sum_{t_2 \le y_2} \cdots \sum_{t_n \le y_n} p(t_1, t_2, \ldots, t_n),$$
and the joint probability function $p(y_1, y_2, \ldots, y_n)$ is defined to be
$$p(y_1, y_2, \ldots, y_n) = P\{Y_1 = y_1, Y_2 = y_2, \ldots, Y_n = y_n\}.$$
III. Joint Probability Density Functions for Continuous Random Variables
As seen in the univariate continuous case, two random variables $Y_1$ and $Y_2$ are said to be jointly continuous if their joint distribution function $F(y_1, y_2)$ is continuous in both arguments. We now formally define the notion of a two-dimensional random vector of the continuous type.
Definition 7. A two-dimensional random vector $(Y_1, Y_2)$ is said to be continuous if there exists a nonnegative function $f(y_1, y_2)$ such that
$$F(y_1, y_2) = \int_{-\infty}^{y_2} \int_{-\infty}^{y_1} f(t_1, t_2)\, dt_1\, dt_2$$
for all pairs $(y_1, y_2)$ of real numbers, where $F(y_1, y_2)$ is the joint distribution function of $Y_1$ and $Y_2$. The function $f(y_1, y_2)$ is called the joint probability density function or bivariate probability density function for continuous random variables $Y_1$ and $Y_2$.
Consider a continuous two-dimensional random vector $(Y_1, Y_2)$. The corresponding bivariate probability density function $f(y_1, y_2)$ for $Y_1$ and $Y_2$ is, for small increments $\Delta y_1$ and $\Delta y_2$, proportional to the probability that the random vector falls in a small rectangle near $(y_1, y_2)$; that is,
$$P\{y_1 \le Y_1 \le y_1 + \Delta y_1,\ y_2 \le Y_2 \le y_2 + \Delta y_2\} \approx f(y_1, y_2)\, \Delta y_1\, \Delta y_2.$$
Let $A$ be any event (a region in the two-dimensional space). To evaluate the probability of any event $A$, we simply integrate the density function over the region defined by $A$. At the continuity points of $f$, we write
$$f(y_1, y_2) = \lim_{\substack{\Delta y_1 \to 0 \\ \Delta y_2 \to 0}} \frac{P\{y_1 \le Y_1 \le y_1 + \Delta y_1,\ y_2 \le Y_2 \le y_2 + \Delta y_2\}}{\Delta y_1\, \Delta y_2}.$$
If the joint density function $f(y_1, y_2)$ is continuous at the point $(y_1, y_2)$, then
$$f(y_1, y_2) = \frac{\partial^2}{\partial y_1\, \partial y_2} F(y_1, y_2).$$
The bivariate probability density function for continuous random variables $Y_1$ and $Y_2$ should satisfy the following conditions.

Theorem 1. If the function $f(y_1, y_2)$ is a bivariate probability density function for continuous random variables $Y_1$ and $Y_2$, then the following properties hold:

(1) $f(y_1, y_2) \ge 0$ for all real numbers $y_1$ and $y_2$.

(2) $\displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(y_1, y_2)\, dy_1\, dy_2 = 1$.

(3) $\displaystyle P\{(Y_1, Y_2) \in A\} = \iint_A f(y_1, y_2)\, dy_1\, dy_2$ for any region $A$ in the $y_1 y_2$-plane.
When $(Y_1, Y_2, \ldots, Y_n)$ is a continuous $n$-dimensional random vector, that is, when $Y_1, Y_2, \ldots, Y_n$ are $n$ continuous (one-dimensional) random variables, the corresponding joint probability density function, or joint density function, of $Y_1, Y_2, \ldots, Y_n$ may be denoted by $f(y_1, y_2, \ldots, y_n)$, which for small increments is proportional to the probability that the random vector falls near the argument $(y_1, y_2, \ldots, y_n)$; that is,
$$P\{y_1 \le Y_1 \le y_1 + \Delta y_1,\ y_2 \le Y_2 \le y_2 + \Delta y_2,\ \ldots,\ y_n \le Y_n \le y_n + \Delta y_n\} \approx f(y_1, y_2, \ldots, y_n)\, \Delta y_1\, \Delta y_2 \cdots \Delta y_n.$$
We evaluate the probability of any event $A$ (a region in the $n$-dimensional space) by integrating the density function over the region defined by $A$. At the continuity points of $f$, we write
$$f(y_1, y_2, \ldots, y_n) = \lim_{\substack{\Delta y_1 \to 0 \\ \cdots \\ \Delta y_n \to 0}} \frac{P\{y_1 \le Y_1 \le y_1 + \Delta y_1,\ \ldots,\ y_n \le Y_n \le y_n + \Delta y_n\}}{\Delta y_1\, \Delta y_2 \cdots \Delta y_n}.$$
If the joint density function $f(y_1, y_2, \ldots, y_n)$ for the $n$-dimensional continuous random vector is continuous at the point $(y_1, y_2, \ldots, y_n)$, then
$$f(y_1, y_2, \ldots, y_n) = \frac{\partial^n}{\partial y_1\, \partial y_2 \cdots \partial y_n} F(y_1, y_2, \ldots, y_n).$$
In summary, the joint density function of $Y_1, Y_2, \ldots, Y_n$ must satisfy the properties in the following definition.
Definition 8. The function $f(y_1, y_2, \ldots, y_n)$ is a joint probability density function for continuous random variables $Y_1, Y_2, \ldots, Y_n$ if and only if

(1) $f(y_1, y_2, \ldots, y_n) \ge 0$ for all real numbers $y_1, y_2, \ldots, y_n$.

(2) $\displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(y_1, y_2, \ldots, y_n)\, dy_1\, dy_2 \cdots dy_n = 1$.

(3) For any region $A$ in the $n$-dimensional space,
$$P\{(Y_1, Y_2, \ldots, Y_n) \in A\} = \idotsint_{(y_1, y_2, \ldots, y_n) \in A} f(y_1, y_2, \ldots, y_n)\, dy_1\, dy_2 \cdots dy_n.$$
Presented below are examples of bivariate probability density functions for continuous random variables.
Example 6. Let $Y_1$ and $Y_2$ denote the proportions of time, out of one workday, that employees 1 and 2, respectively, actually spend performing their assigned tasks. The joint probability density function is given by
$$f(y_1, y_2) = \begin{cases} y_1 + y_2 & \text{for } 0 \le y_1 \le 1 \text{ and } 0 \le y_2 \le 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Then
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(y_1, y_2)\, dy_1\, dy_2 = \int_0^1 \int_0^1 (y_1 + y_2)\, dy_1\, dy_2 = \int_0^1 \left[ \frac{1}{2} y_1^2 + y_1 y_2 \right]_{y_1=0}^{y_1=1} dy_2 = \int_0^1 \left( \frac{1}{2} + y_2 \right) dy_2 = \left[ \frac{1}{2} y_2 + \frac{1}{2} y_2^2 \right]_{y_2=0}^{y_2=1} = \frac{1}{2} + \frac{1}{2} = 1$$
and
$$P\left\{Y_1 < \frac{1}{2}, Y_2 > \frac{1}{4}\right\} = \int_{1/4}^{1} \int_0^{1/2} (y_1 + y_2)\, dy_1\, dy_2 = \int_{1/4}^{1} \left[ \frac{1}{2} y_1^2 + y_1 y_2 \right]_{y_1=0}^{y_1=1/2} dy_2 = \int_{1/4}^{1} \left( \frac{1}{8} + \frac{1}{2} y_2 \right) dy_2 = \left[ \frac{1}{8} y_2 + \frac{1}{4} y_2^2 \right]_{y_2=1/4}^{y_2=1} = \left( \frac{1}{8} + \frac{1}{4} \right) - \left( \frac{1}{32} + \frac{1}{64} \right) = \frac{21}{64}.$$
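As a quick numeric cross-check (a sketch, not part of the original notes), a midpoint Riemann sum over the unit square reproduces both the total probability 1 and $P\{Y_1 < 1/2, Y_2 > 1/4\} = 21/64 = 0.328125$.

```python
# Midpoint Riemann-sum check for Example 6: f(y1, y2) = y1 + y2 on the unit square.
n = 400                       # grid resolution in each direction
h = 1.0 / n                   # cell width

def f(y1, y2):
    return y1 + y2 if 0 <= y1 <= 1 and 0 <= y2 <= 1 else 0.0

total = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
            for i in range(n) for j in range(n))

prob = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
           for i in range(n) for j in range(n)
           if (i + 0.5) * h < 0.5 and (j + 0.5) * h > 0.25)

print(total)   # approximately 1.0
print(prob)    # approximately 21/64 = 0.328125
```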
Example 7. Let $Y_1$ and $Y_2$ have the joint probability density function given by
$$f(y_1, y_2) = \begin{cases} c\, y_1^3 y_2^2 & \text{for } 0 < y_1 < 1 \text{ and } 0 < y_2 < 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Then the value of $c$ can be obtained through
$$1 = c \int_0^1 \int_0^1 y_1^3 y_2^2\, dy_1\, dy_2 = c \int_0^1 y_1^3\, dy_1 \int_0^1 y_2^2\, dy_2 = c \left[ \frac{y_1^4}{4} \right]_{y_1=0}^{y_1=1} \left[ \frac{y_2^3}{3} \right]_{y_2=0}^{y_2=1} = \frac{c}{12}.$$
Hence, $c = 12$. Also, since the probability is zero if $Y_2 \ge 1$, it follows that
$$P\left\{0 < Y_1 < \frac{3}{4}, \frac{1}{2} < Y_2 < \frac{5}{2}\right\} = P\left\{0 < Y_1 < \frac{3}{4}, \frac{1}{2} < Y_2 < 1\right\} = \int_{1/2}^{1} \int_0^{3/4} 12 y_1^3 y_2^2\, dy_1\, dy_2 = 12 \int_0^{3/4} y_1^3\, dy_1 \int_{1/2}^{1} y_2^2\, dy_2$$
$$= 12 \left[ \frac{1}{4} y_1^4 \right]_{y_1=0}^{y_1=3/4} \left[ \frac{1}{3} y_2^3 \right]_{y_2=1/2}^{y_2=1} = 12 \cdot \frac{1}{4} \cdot \frac{81}{256} \cdot \frac{1}{3} \left( 1 - \frac{1}{8} \right) = \frac{81}{256} \cdot \frac{7}{8} = \frac{567}{2048}.$$
It should be noted that this probability is the volume under the surface $f(y_1, y_2) = 12 y_1^3 y_2^2$ and above the rectangular set $\left\{(y_1, y_2) \mid 0 < y_1 < \frac{3}{4},\ \frac{1}{2} < y_2 < \frac{5}{2}\right\}$ in the $y_1 y_2$-plane.
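A quick check of this value (a sketch, not from the original notes): since the density factors, the probability reduces to $(3/4)^4 \left(1 - (1/2)^3\right)$, which the snippet below confirms as $567/2048$.

```python
# Example 7: P{0 < Y1 < 3/4, 1/2 < Y2 < 1} for f(y1, y2) = 12 * y1**3 * y2**2 on (0,1)^2.
from fractions import Fraction

# The density factors into 4*y1**3 times 3*y2**2, so the probability is a product of
# one-dimensional integrals: [y1**4] from 0 to 3/4 times [y2**3] from 1/2 to 1.
p = (Fraction(3, 4) ** 4) * (1 - Fraction(1, 2) ** 3)
print(p)           # Fraction(567, 2048)
print(float(p))    # approximately 0.2769
```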
Example 8. Consider random variables $Y_1$ and $Y_2$ having the joint probability density function given by
$$f(y_1, y_2) = \begin{cases} 4 y_1 y_2 e^{-y_1^2 - y_2^2} & \text{for } 0 < y_1 < \infty \text{ and } 0 < y_2 < \infty \\ 0 & \text{elsewhere.} \end{cases}$$
This is a legitimate joint density function because
$$\int_0^{\infty} \int_0^{\infty} 4 y_1 y_2 e^{-y_1^2 - y_2^2}\, dy_1\, dy_2 = \left( \int_0^{\infty} 2 y_1 e^{-y_1^2}\, dy_1 \right) \left( \int_0^{\infty} 2 y_2 e^{-y_2^2}\, dy_2 \right) = \left( \lim_{a \to \infty} \left[ -e^{-y_1^2} \right]_{y_1=0}^{y_1=a} \right) \left( \lim_{b \to \infty} \left[ -e^{-y_2^2} \right]_{y_2=0}^{y_2=b} \right) = 1 \cdot 1 = 1.$$
Also,
$$P\{1 < Y_1 < 2, 0 < Y_2 < 3\} = \int_0^3 \int_1^2 4 y_1 y_2 e^{-y_1^2 - y_2^2}\, dy_1\, dy_2 = \left( \int_1^2 2 y_1 e^{-y_1^2}\, dy_1 \right) \left( \int_0^3 2 y_2 e^{-y_2^2}\, dy_2 \right) = \left[ -e^{-y_1^2} \right]_{y_1=1}^{y_1=2} \left[ -e^{-y_2^2} \right]_{y_2=0}^{y_2=3} = \left( e^{-1} - e^{-4} \right) \left( 1 - e^{-9} \right).$$
Note that this probability is the volume under the surface $f(y_1, y_2) = 4 y_1 y_2 e^{-y_1^2 - y_2^2}$ and above the rectangular set $\{(y_1, y_2) \mid 1 < y_1 < 2,\ 0 < y_2 < 3\}$ in the $y_1 y_2$-plane.
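As a sanity check (a sketch, not part of the notes), the closed form $(e^{-1} - e^{-4})(1 - e^{-9})$ can be compared against a crude Monte Carlo estimate. Since the density factors, $Y_1$ and $Y_2$ are independent here, with $Y^2$ following a standard exponential distribution, which makes simulation straightforward.

```python
# Example 8 check: P{1 < Y1 < 2, 0 < Y2 < 3} = (e**-1 - e**-4) * (1 - e**-9).
# Each variable has density 2*y*exp(-y**2) on (0, inf), i.e. Y = sqrt(E), E ~ Exponential(1).
import math, random

exact = (math.exp(-1) - math.exp(-4)) * (1 - math.exp(-9))

random.seed(0)
n = 200_000
hits = 0
for _ in range(n):
    y1 = math.sqrt(random.expovariate(1.0))
    y2 = math.sqrt(random.expovariate(1.0))
    if 1 < y1 < 2 and 0 < y2 < 3:
        hits += 1

print(exact)        # about 0.3496
print(hits / n)     # Monte Carlo estimate, close to the exact value
```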
Example 9. Let $Y_1$ and $Y_2$ have the joint probability density function given by
$$f(y_1, y_2) = \begin{cases} c\, y_1 & \text{for } 0 \le y_2 \le y_1 \le 1 \\ 0 & \text{elsewhere.} \end{cases}$$
(a) Find the value of $c$.
(b) Evaluate $P\left\{Y_1 \le \frac{1}{2},\ Y_2 > \frac{1}{4}\right\}$.

Solution. (a) Since $f(y_1, y_2)$ is a joint probability density function for $Y_1$ and $Y_2$, we must have
$$1 = \int_0^1 \int_0^{y_1} c\, y_1\, dy_2\, dy_1 = \int_0^1 c\, y_1 \left[ y_2 \right]_{y_2=0}^{y_2=y_1} dy_1 = \int_0^1 c\, y_1^2\, dy_1 = c \left[ \frac{y_1^3}{3} \right]_{y_1=0}^{y_1=1} = \frac{c}{3}.$$
Hence, $c = 3$.

(b) Also, we can evaluate
$$P\left\{Y_1 \le \frac{1}{2},\ Y_2 > \frac{1}{4}\right\} = \int_{1/4}^{1/2} \int_{1/4}^{y_1} 3 y_1\, dy_2\, dy_1 = \int_{1/4}^{1/2} 3 y_1 \left[ y_2 \right]_{y_2=1/4}^{y_2=y_1} dy_1 = \int_{1/4}^{1/2} 3 y_1 \left( y_1 - \frac{1}{4} \right) dy_1$$
$$= \int_{1/4}^{1/2} \left( 3 y_1^2 - \frac{3}{4} y_1 \right) dy_1 = \left[ y_1^3 - \frac{3}{8} y_1^2 \right]_{y_1=1/4}^{y_1=1/2} = \left[ \left( \frac{1}{2} \right)^3 - \frac{3}{8} \left( \frac{1}{2} \right)^2 \right] - \left[ \left( \frac{1}{4} \right)^3 - \frac{3}{8} \left( \frac{1}{4} \right)^2 \right] = \frac{1}{32} + \frac{1}{128} = \frac{5}{128}.$$
This probability is the volume under the surface $f(y_1, y_2) = 3 y_1$ and above the triangular set $\left\{(y_1, y_2) \mid \frac{1}{4} < y_2 \le y_1 \le \frac{1}{2}\right\}$ in the $y_1 y_2$-plane.
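A quick numerical confirmation of part (b) (a sketch, not from the notes): a midpoint Riemann sum over the region gives approximately $5/128 = 0.0390625$.

```python
# Example 9 check: P{Y1 <= 1/2, Y2 > 1/4} with f(y1, y2) = 3*y1 on 0 <= y2 <= y1 <= 1.
n = 800
h = 1.0 / n

def f(y1, y2):
    return 3.0 * y1 if 0.0 <= y2 <= y1 <= 1.0 else 0.0

prob = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
           for i in range(n) for j in range(n)
           if (i + 0.5) * h <= 0.5 and (j + 0.5) * h > 0.25)

print(prob)  # approximately 5/128 = 0.0390625
```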
Example 10. Let $Y_1$ and $Y_2$ have the joint probability density function given by
$$f(y_1, y_2) = \begin{cases} 4 y_1 y_2 & \text{for } 0 \le y_1 \le 1 \text{ and } 0 \le y_2 \le 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Then
$$P\left\{\frac{1}{4} < Y_1 \le \frac{1}{2},\ \frac{3}{4} < Y_2 \le 1\right\} = \int_{3/4}^{1} \int_{1/4}^{1/2} 4 y_1 y_2\, dy_1\, dy_2 = \int_{3/4}^{1} \left[ 2 y_1^2 y_2 \right]_{y_1=1/4}^{y_1=1/2} dy_2 = \frac{3}{8} \int_{3/4}^{1} y_2\, dy_2 = \frac{3}{8} \left[ \frac{1}{2} y_2^2 \right]_{y_2=3/4}^{y_2=1} = \frac{3}{8} \left( \frac{1}{2} - \frac{9}{32} \right) = \frac{21}{256}.$$
On the other hand,
$$F\left(\frac{1}{2}, 1\right) = P\left\{Y_1 \le \frac{1}{2},\ Y_2 \le 1\right\} = \int_0^1 \int_0^{1/2} 4 y_1 y_2\, dy_1\, dy_2 = \int_0^1 \left[ 2 y_1^2 y_2 \right]_{y_1=0}^{y_1=1/2} dy_2 = \int_0^1 \frac{1}{2} y_2\, dy_2 = \left[ \frac{1}{4} y_2^2 \right]_{y_2=0}^{y_2=1} = \frac{1}{4},$$
$$F\left(\frac{1}{2}, \frac{3}{4}\right) = P\left\{Y_1 \le \frac{1}{2},\ Y_2 \le \frac{3}{4}\right\} = \int_0^{3/4} \int_0^{1/2} 4 y_1 y_2\, dy_1\, dy_2 = \int_0^{3/4} \frac{1}{2} y_2\, dy_2 = \left[ \frac{1}{4} y_2^2 \right]_{y_2=0}^{y_2=3/4} = \frac{1}{4} \left( \frac{3}{4} \right)^2 = \frac{9}{64},$$
$$F\left(\frac{1}{4}, 1\right) = P\left\{Y_1 \le \frac{1}{4},\ Y_2 \le 1\right\} = \int_0^1 \int_0^{1/4} 4 y_1 y_2\, dy_1\, dy_2 = \int_0^1 \frac{1}{8} y_2\, dy_2 = \left[ \frac{1}{16} y_2^2 \right]_{y_2=0}^{y_2=1} = \frac{1}{16},$$
$$F\left(\frac{1}{4}, \frac{3}{4}\right) = P\left\{Y_1 \le \frac{1}{4},\ Y_2 \le \frac{3}{4}\right\} = \int_0^{3/4} \int_0^{1/4} 4 y_1 y_2\, dy_1\, dy_2 = \int_0^{3/4} \frac{1}{8} y_2\, dy_2 = \left[ \frac{1}{16} y_2^2 \right]_{y_2=0}^{y_2=3/4} = \frac{1}{16} \left( \frac{3}{4} \right)^2 = \frac{9}{256}.$$
Hence,
$$P\left\{\frac{1}{4} < Y_1 \le \frac{1}{2},\ \frac{3}{4} < Y_2 \le 1\right\} = F\left(\frac{1}{2}, 1\right) - F\left(\frac{1}{2}, \frac{3}{4}\right) - F\left(\frac{1}{4}, 1\right) + F\left(\frac{1}{4}, \frac{3}{4}\right) = \frac{1}{4} - \frac{9}{64} - \frac{1}{16} + \frac{9}{256} = \frac{64 - 36 - 16 + 9}{256} = \frac{21}{256}.$$
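On the unit square the distribution function for this example has the closed form $F(y_1, y_2) = y_1^2 y_2^2$ (not written out in the notes, but it follows directly from integrating the density); the sketch below uses it to re-derive $21/256$ from the rectangle identity.

```python
# Example 10: F(y1, y2) = y1**2 * y2**2 on the unit square (from integrating 4*t1*t2).
from fractions import Fraction

def cdf(y1, y2):
    return (y1 ** 2) * (y2 ** 2)

a1, a2 = Fraction(1, 4), Fraction(1, 2)
b1, b2 = Fraction(3, 4), Fraction(1)

p = cdf(a2, b2) - cdf(a2, b1) - cdf(a1, b2) + cdf(a1, b1)
print(p)  # Fraction(21, 256)
```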
Example 11. Let $f(y_1, y_2)$ be defined as in Example 8. Then, for $0 < y_1 < \infty$ and $0 < y_2 < \infty$,
$$F(y_1, y_2) = \int_0^{y_2} \int_0^{y_1} 4 t_1 t_2 e^{-t_1^2 - t_2^2}\, dt_1\, dt_2 = \left( \int_0^{y_1} 2 t_1 e^{-t_1^2}\, dt_1 \right) \left( \int_0^{y_2} 2 t_2 e^{-t_2^2}\, dt_2 \right) = \left[ -e^{-t_1^2} \right]_{t_1=0}^{t_1=y_1} \left[ -e^{-t_2^2} \right]_{t_2=0}^{t_2=y_2} = \left( 1 - e^{-y_1^2} \right) \left( 1 - e^{-y_2^2} \right).$$
Hence,
$$F(y_1, y_2) = \begin{cases} \left( 1 - e^{-y_1^2} \right) \left( 1 - e^{-y_2^2} \right) & \text{for } 0 < y_1 < \infty \text{ and } 0 < y_2 < \infty \\ 0 & \text{elsewhere.} \end{cases}$$
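As a consistency check (a sketch, not from the notes), this closed-form distribution function recovers the rectangle probability computed in Example 8.

```python
# Example 11 check: the closed-form CDF recovers the Example 8 rectangle probability.
import math

def F(y1, y2):
    # F(y1, y2) = (1 - exp(-y1^2)) * (1 - exp(-y2^2)) for y1, y2 > 0, and 0 otherwise
    if y1 <= 0 or y2 <= 0:
        return 0.0
    return (1 - math.exp(-y1 ** 2)) * (1 - math.exp(-y2 ** 2))

p = F(2, 3) - F(1, 3) - F(2, 0) + F(1, 0)
print(p)                                                    # 0.3495...
print((math.exp(-1) - math.exp(-4)) * (1 - math.exp(-9)))   # same value, from Example 8
```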
Example 12. For the joint density function defined in Example 7, for $0 < y_1 < 1$ and $0 < y_2 < 1$, the joint distribution function is given by
$$F(y_1, y_2) = \int_0^{y_2} \int_0^{y_1} 12 t_1^3 t_2^2\, dt_1\, dt_2 = \left( \int_0^{y_1} 4 t_1^3\, dt_1 \right) \left( \int_0^{y_2} 3 t_2^2\, dt_2 \right) = \left[ t_1^4 \right]_{t_1=0}^{t_1=y_1} \left[ t_2^3 \right]_{t_2=0}^{t_2=y_2} = y_1^4 y_2^3.$$
Hence,
$$F(y_1, y_2) = \begin{cases} 0 & \text{for } y_1 \le 0 \text{ or } y_2 \le 0 \\ y_1^4 y_2^3 & \text{for } 0 < y_1 < 1 \text{ and } 0 < y_2 < 1 \\ 1 & \text{for } y_1 \ge 1 \text{ and } y_2 \ge 1 \end{cases}$$
(for points with exactly one coordinate at least 1, the corresponding factor is simply capped at 1; for instance, $F(y_1, y_2) = y_2^3$ for $y_1 \ge 1$ and $0 < y_2 < 1$).
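As an illustrative sketch (assuming the `sympy` package is available), differentiating this distribution function once with respect to each argument recovers the Example 7 density, as the relation $f = \partial^2 F / \partial y_1 \partial y_2$ requires.

```python
# Example 12 check: d^2 F / (dy1 dy2) for F = y1**4 * y2**3 gives the density 12*y1**3*y2**2.
import sympy as sp

y1, y2 = sp.symbols("y1 y2", positive=True)
F = y1 ** 4 * y2 ** 3
print(sp.diff(F, y1, y2))  # 12*y1**3*y2**2
```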
Example 13. Given a joint density function
$$f(y_1, y_2) = \begin{cases} \dfrac{(n-1)(n-2)}{(1 + y_1 + y_2)^n} & \text{for } y_1 \ge 0,\ y_2 \ge 0,\ n > 2 \\ 0 & \text{elsewhere,} \end{cases}$$
the joint distribution function is given by
$$F(y_1, y_2) = \begin{cases} 1 - \dfrac{1}{(1 + y_1)^{n-2}} - \dfrac{1}{(1 + y_2)^{n-2}} + \dfrac{1}{(1 + y_1 + y_2)^{n-2}} & \text{for } y_1 \ge 0,\ y_2 \ge 0,\ n > 2 \\ 0 & \text{elsewhere,} \end{cases}$$
since, for $y_1 \ge 0$ and $y_2 \ge 0$, we have
$$F(y_1, y_2) = \int_0^{y_2} \int_0^{y_1} \frac{(n-1)(n-2)}{(1 + t_1 + t_2)^n}\, dt_1\, dt_2 = \int_0^{y_2} \left[ -\frac{n-2}{(1 + t_1 + t_2)^{n-1}} \right]_{t_1=0}^{t_1=y_1} dt_2 = \int_0^{y_2} \left[ \frac{n-2}{(1 + t_2)^{n-1}} - \frac{n-2}{(1 + y_1 + t_2)^{n-1}} \right] dt_2$$
$$= \left[ -\frac{1}{(1 + t_2)^{n-2}} + \frac{1}{(1 + y_1 + t_2)^{n-2}} \right]_{t_2=0}^{t_2=y_2} = 1 - \frac{1}{(1 + y_1)^{n-2}} - \frac{1}{(1 + y_2)^{n-2}} + \frac{1}{(1 + y_1 + y_2)^{n-2}}.$$
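As a final check (an illustrative sketch, again assuming `sympy` is available), differentiating this distribution function twice recovers the stated density for a particular choice of $n$, here $n = 4$.

```python
# Example 13 check (with n = 4): d^2 F / (dy1 dy2) should equal (n-1)(n-2) / (1 + y1 + y2)**n.
import sympy as sp

y1, y2 = sp.symbols("y1 y2", nonnegative=True)
n = 4
F = 1 - 1 / (1 + y1) ** (n - 2) - 1 / (1 + y2) ** (n - 2) + 1 / (1 + y1 + y2) ** (n - 2)

f = sp.simplify(sp.diff(F, y1, y2))
print(f)                                                        # 6/(y1 + y2 + 1)**4
print(sp.simplify(f - (n - 1) * (n - 2) / (1 + y1 + y2) ** n))  # 0
```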