Lecture 4: Random Variables
1. Definition of Random variables
1.1 Measurable functions and random variables
1.2 Reduction of the measurability condition
1.3 Transformation of random variables
1.4 σ-algebra generated by a random variable
2. Distribution functions of random variables
2.1 Measure generated by a random variable
2.2 Distribution function of a random variable
2.3 Properties of distribution functions
2.4 Random variables with a given distribution function
2.5 Distribution functions of transformed random variables
3. Types of distribution functions
3.1 Discrete distribution functions
3.2 Absolutely continuous distribution functions
3.3 Singular distribution functions
3.4 Decomposition representation for distribution functions
4 Multivariate random variables (random vectors)
4.1 Random variables with values in a measurable space
4.2 Random vectors
4.3 Multivariate distribution functions
5 Independent random variables
5.1 Independent random variables
5.2 Mutually independent random variables
1. Definition of Random variables
1.1 Measurable functions and random variables
< Y, BY > and < X , BX > are two measurable spaces (space
plus σ-algebra of measurable subsets of this space);
f = f (y) : Y → X is a function acting from Y to X .
Definition 4.1. f = f (y) is a measurable function if
{y : f (y) ∈ A} ∈ BY , A ∈ BX .
Example
(1) Y = {y1 , y2 , . . .} is a finite or countable set; BY is the σ-algebra of all subsets of Y. In this case, any function f (y) acting from Y to X is measurable.
(2) Y = X = R1 ; BY = BX = B1 is a Borel σ-algebra. In this
case, f (x) is called a Borel function.
(3) A continuous function f = f (x) : R1 → R1 is a Borel function.
< Ω, F, P > is a probability space;
< X , BX > is a measurable space.
X = X(ω) : Ω → X (a function acting from Ω to X ).
Definition 4.2. X = X(ω) is a random variable with values
in space X defined on a probability space < Ω, F, P > if it is a
measurable function acting from Ω → X , i.e., a function such that
{ω : X(ω) ∈ A} ∈ F, A ∈ BX .
< Ω, F, P > is a probability space;
X = X(ω) : Ω → R1 (a function acting from Ω to R1 );
BX = B1 is a Borel σ-algebra of subsets of R1 .
Definition 4.3. X = X(ω) is a (real-valued) random variable
defined on a probability space < Ω, F, P > if it is a measurable
function acting from Ω → R1 , i.e., a function such that
{ω : X(ω) ∈ A} ∈ F, A ∈ B1 .
< Ω, F, P > is a probability space;
X = X(ω) : Ω → [−∞, +∞];
B1+ is the Borel σ-algebra of subsets of [−∞, +∞] (the minimal σ-algebra containing all intervals [a, b], −∞ ≤ a ≤ b ≤ +∞).
Definition 4.4. X = X(ω) is an (improper) random variable defined on a probability space < Ω, F, P > if it is a measurable function acting from Ω → [−∞, +∞], i.e., a function such that
{ω : X(ω) ∈ A} ∈ F, A ∈ B1+ .
Examples
(1) X = {x1 , . . . , xN }, BX is a σ-algebra of all subsets of X; X
is a random variable with a finite set of values;
(2) X = R1 , BX = B1 ; X is a (real-valued) random variable;
(3) X = Rk , BX = Bk ; X is a random vector (a random variable with values in Rk );
(4) X is a metric space, BX is a Borel σ-algebra of subsets of
X; X is a random variable with values in the metric space X.
(5) Ω = {ω1 , ω2 . . .} → a discrete sample space;
F = F0 → the σ-algebra of all subsets of Ω;
Any function X = X(ω) : Ω → R1 is a random variable since,
in this case, it is automatically a measurable function.
(6) Ω = R1 ;
F = B1 → the Borel σ-algebra of subsets of R1 ;
In this case, any Borel function X = X(ω) : Ω → R1 is a random variable.
1.2 Reduction of the measurability condition
The following notations are used:
X −1 (A) = {X ∈ A} = {ω : X(ω) ∈ A}.
Theorem 4.1. The measurability condition (A) X −1 (A) ∈ F, A ∈ B1 , holds if and only if (B) Ax = {ω : X(ω) ≤ x} ∈ F, x ∈ R1 .
———————————
(A) ⇒ (B). Indeed, (−∞, x], x ∈ R1 are Borel sets;
(B) ⇒ (A). Indeed, let K be a class of all sets A ⊂ R1 such that
X −1 (A) ∈ F. Then
(a) X −1 ((a, b]) = Ab \ Aa ∈ F, a < b. Thus, (a, b] ∈ K, a < b;
(b) A ∈ K ⇒ Ā ∈ K since X −1 (Ā) is the complement of X −1 (A) and, therefore, belongs to F;
(c) A1 , A2 , . . . ∈ K ⇒ ∪n An ∈ K since X −1 (∪n An ) = ∪n X −1 (An ) ∈ F;
(d) Thus K is a σ-algebra which contains all intervals (a, b]. Therefore, B1 ⊆ K.
———————————
1.3 Transformation of random variables
X1 , . . . , Xk → random variables defined on a probability space
< Ω, F, P >;
f (x1 , . . . , xk ) : Rk → R1 → a Borel function, i.e.,
f −1 (A) = {(x1 , . . . , xk ) : f (x1 , . . . , xk ) ∈ A} ∈ Bk , A ∈ B1 .
Theorem 4.2. X = f (X1 , . . . , Xk ) is a random variable.
———————————
(a) {ω : X1 (ω) ∈ (a1 , b1 ], . . . , Xk (ω) ∈ (ak , bk ]} ∈ F, ai ≤ bi , i =
1, . . . , k;
(b) Let K be a class of all sets A ⊂ Rk such that
{ω : (X1 (ω), . . . , Xk (ω)) ∈ A} ∈ F. Then K is a σ-algebra. The proof is analogous to that given for Theorem 4.1.
(c) C ∈ B1 ⇒ {ω : f (X1 (ω), . . . , Xk (ω)) ∈ C}
= {ω : (X1 (ω), . . . , Xk (ω)) ∈ f −1 (C)} ∈ F.
(d) Thus, X = f (X1 , . . . , Xk ) is a random variable.
———————————
(1) U± = X1 ± X2 , V = X1 X2 , W = X1 /X2 (if X2 ≠ 0) are random variables;
(2) Z+ = max(X1 , . . . , Xk ), Z− = min(X1 , . . . , Xk ) are random
variables.
X, X1 , X2 , . . . , are random variables defined on a probability
space < Ω, F, P >;
Theorem 4.3. Let X be a random variable defined by one of the relations
X = supn≥1 Xn ,
X = inf n≥1 Xn ,
X = lim supn→∞ Xn = inf n≥1 supk≥n Xk ,
X = lim inf n→∞ Xn = supn≥1 inf k≥n Xk ,
X = limn→∞ Xn (which exists when lim supn→∞ Xn = lim inf n→∞ Xn ).
Then X is a random variable (possibly improper).
———————————
(1) {ω : supn≥1 Xn (ω) > x} = ∪n≥1 {Xn (ω) > x};
(2) {ω : inf n≥1 Xn (ω) < x} = ∪n≥1 {Xn (ω) < x};
(3) {ω : lim supn→∞ Xn (ω) < x} = ∪l≥1 ∪n≥1 ∩k≥n {Xk (ω) < x − 1/l};
(4) {ω : lim inf n→∞ Xn (ω) > x} = ∪l≥1 ∪n≥1 ∩k≥n {Xk (ω) > x + 1/l};
———————————
Let A1 , . . . , An ∈ F and let a1 , . . . , an be real numbers.
Definition 4.5. X(ω) = a1 IA1 (ω) + · · · + an IAn (ω) is a simple random variable.
Theorem 4.4. X = X(ω) is a random variable if and only if
X(ω) = limn→∞ Xn (ω), ω ∈ Ω, where Xn , n = 1, 2, . . . are simple
random variables.
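To make Theorem 4.4 concrete, the following minimal Python sketch (not from the lecture; the function name and the dyadic truncation are illustrative choices) approximates a value X(ω) by simple random variables: Xn rounds |X| down to the grid of step 1/2^n and truncates at the level n, so Xn takes only finitely many values and Xn (ω) → X(ω) as n → ∞.

import numpy as np

def simple_approximation(x, n):
    # Dyadic simple-function approximation X_n of the value x = X(omega):
    # on {k/2^n <= |X| < (k+1)/2^n} it equals sign(X) * k/2^n, and it is
    # truncated at the level n, so X_n takes only finitely many values.
    x = np.asarray(x, dtype=float)
    sign = np.sign(x)
    a = np.abs(x)
    approx = np.where(a >= n, float(n), np.floor(a * 2.0**n) / 2.0**n)
    return sign * approx

# Pointwise convergence X_n(omega) -> X(omega) on a few sample values:
xs = np.array([0.3, 1.75, -2.4, 10.0])
for n in (1, 4, 12):
    print(n, simple_approximation(xs, n))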
< Z, BZ >, < Y, BY > and < X , BX > are three measurable spaces;
f = f (z) : Z → Y is a measurable function acting from Z to Y;
g = g(y) : Y → X is a measurable function acting from Y to X .
Theorem 4.5. The superposition h(z) = g(f (z)) of two measurable functions f and g is a measurable function acting from the space Z to the space X .
———————————
(1) Let A ⊆ X . Then h−1 (A) = f −1 (g −1 (A)).
(2) Let A ∈ BX . Then g −1 (A) ∈ BY ;
(3) Then h−1 (A) = f −1 (g −1 (A)) ∈ BZ .
———————————
1.4 σ-algebra generated by a random variable
Theorem 4.6. Let X = X(ω) be a random variable defined on a probability space < Ω, F, P >. The class of sets FX = < X −1 (A), A ∈ B1 > is a σ-algebra (generated by the random variable X).
———————————
(a) C ∈ FX ⇔ C = X −1 (A), where A ∈ B1 ⇒ the complement C̄ = X −1 (Ā) ∈ FX since Ā ∈ B1 ;
(b) C1 , C2 , . . . ∈ FX ⇔ Cn = X −1 (An ), n = 1, 2, . . ., where An ∈ B1 , n = 1, 2, . . . ⇒ ∪n Cn = ∪n X −1 (An ) = X −1 (∪n An ) ∈ FX since ∪n An ∈ B1 ;
(c) Ω = X −1 (R1 ) ∈ FX since R1 ∈ B1 ;
(d) Thus FX is a σ-algebra.
———————————
(1) FX ⊆ F.
Theorem 4.7. Let X = X(ω) be a random variable defined
on a probability space < Ω, F, P > and taking values in a
space X with σ-algebra of measurable sets BX . The class of
sets FX =< X −1 (A), A ∈ BX > is a σ-algebra (generated by the
random variable X).
2. Distribution functions of random variables
2.1 Measure generated by a random variable
X = X(ω) → a random variable defined on a probability space
< Ω, F, P >.
PX (A) = P (ω : X(ω) ∈ A) = P (X −1 (A)), A ∈ B1 .
Theorem 4.8. PX (A) is a probability measure defined on Borel
σ-algebra B1 .
———————————
(a) PX (A) ≥ 0;
(b) A1 , A2 , . . . ∈ B1 , Ai ∩ Aj = ∅ ⇒ X −1 (∪n An ) = ∪n X −1 (An ) and, therefore, PX (∪n An ) = P (X −1 (∪n An )) = Σn P (X −1 (An )) = Σn PX (An );
(c) PX (R1 ) = P (X −1 (R1 )) = P (Ω) = 1.
———————————
Definition 4.6. The probability measure PX (A) is called the distribution of the random variable X.
X = X(ω) → a random variable defined on a probability space
< Ω, F, P > and taking values in a space X with σ-algebra of
measurable sets BX .
PX (A) = P (ω : X(ω) ∈ A) = P (X −1 (A)), A ∈ BX .
Theorem 4.9. PX (A) is a probability measure defined on the σ-algebra BX .
Definition 4.7. The probability measure PX (A) is called the distribution of the random variable X.
2.2 Distribution function of a random variable
X = X(ω) → a random variable defined on a probability space
< Ω, F, P >;
PX (A) = P (X(ω) ∈ A) = P (X −1 (A)), A ∈ B1 → the distribution of the random variable X.
Definition 4.8. The function FX (x) = PX ((−∞, x]), x ∈ R1 , is called the distribution function of a random variable X.
(1) The distribution PX (A) uniquely determines the distribution function FX (x) and, as follows from the continuation theorem, the distribution function of a random variable uniquely determines the distribution PX (A).
2.3 Properties of distribution functions
A distribution function FX (x) of a random variable X possesses the following properties:
(1) FX (x) is non-decreasing function in x ∈ R1 ;
(2) FX (−∞) = limx→−∞ FX (x) = 0, FX (∞) = limx→∞ FX (x)= 1;
(3) FX (x) is continuous from the right, i.e., FX (x) = limy≥x,y→x FX (y), x ∈ R1 .
———————————
(a) x′ ≤ x″ ⇒ (−∞, x′] ⊆ (−∞, x″] ⇒ FX (x′) = PX ((−∞, x′]) ≤ FX (x″) = PX ((−∞, x″]);
(b) xn → −∞ ⇒ zn = maxk≥n xk ↓ −∞ ⇒ FX (xn ) ≤ FX (zn ) = PX ((−∞, zn ]) → 0 since ∩n (−∞, zn ] = ∅;
(c) xn → ∞ ⇒ zn = mink≥n xk ↑ ∞ ⇒ FX (xn ) ≥ FX (zn ) = PX ((−∞, zn ]) → 1 since ∪n (−∞, zn ] = R1 ;
(d) xn ≥ x, xn → x ⇒ zn = maxk≥n xk ↓ x ⇒ FX (xn ) ≤ FX (zn ) = PX ((−∞, zn ]) → FX (x) since ∩n (−∞, zn ] = (−∞, x];
———————————
(4) P (X ∈ (a, b]) = PX ((a, b]) = FX (b) − FX (a), a ≤ b;
(5) P (X = a) = PX ({a}) = FX (a) − FX (a − 0), a ∈ R1 ;
———————————
(e) PX ({a}) = limn→∞ (FX (a) − FX (a − 1/n)) = FX (a) − FX (a − 0) since ∩n (a − 1/n, a] = {a}.
———————————
(6) Any distribution function has no more than n jumps with values ≥ 1/n for every n = 1, 2, . . . and, therefore, the set of all jumps is at most countable.
———————————
(f) Let a1 < · · · < aN be points of jumps with values ≥ 1/n. Then ∪_{k=1}^{N} {X = ak } ⊆ {−∞ < X < ∞}. Thus, N/n ≤ Σ_{k=1}^{N} P (X = ak ) = P (∪_{k=1}^{N} {X = ak }) ≤ P (−∞ < X < ∞) = 1 and thus N ≤ n.
———————————
2.4 Random variables with a given distribution function
One can call any function F (x) defined on R1 a distribution
function if it possesses properties (1)– (3).
According to the continuation theorem, every distribution function uniquely determines (generates) a probability measure P (A)
on B1 which is connected with this distribution function by the
relation
P ((a, b]) = F (b) − F (a), a < b.
Theorem 4.10. For any distribution function F (x) there exists a random variable X that has the distribution function
FX (x) ≡ F (x).
———————————
(a) Choose the probability space < Ω = R1 , F = B1 , P (A) >
where P (A) is the probability measure which is generated by
the distribution function F (x).
(b) Consider the random variable X(ω) = ω, ω ∈ R1 . Then
P (ω : X(ω) ≤ x) = P ((−∞, x]) = F (x), x ∈ R1 .
———————————
Let F (x) be a distribution function. One can define the "inverse" function
F −1 (y) = inf(x : F (x) ≥ y), 0 ≤ y ≤ 1.
A random variable Y has a uniform distribution on [0, 1] if it has the following distribution function:
FY (x) = 0 if x < 0, FY (x) = x if x ∈ [0, 1], FY (x) = 1 if x > 1.
Theorem 4.11*. For any distribution function F (x) the random variable X = F −1 (Y ), where Y is a uniformly distributed
random variable, has the distribution function F (x).
———————————
(a) Consider here only the case where F (x) is a continuous strictly monotonic distribution function. In this case F −1 (y) = inf(x : F (x) ≥ y) coincides with the ordinary inverse function, which is also continuous and strictly monotonic, and F −1 (F (x)) = x.
(b) F (x) = P (Y ≤ F (x)) = P (F −1 (Y ) ≤ F −1 (F (x))) = P (X ≤ x), x ∈ R1 .
———————————
Example
Let F (x) = 1 − e−ax , x ≥ 0, be an exponential distribution function. In this case F −1 (y) = −(1/a) ln(1 − y), and the random variable X = −(1/a) ln(1 − Y ) has the exponential distribution function with parameter a.
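A minimal numerical sketch of this inverse-transform construction, assuming Python with NumPy; the rate a = 2 and the sample size are illustrative values, not part of the lecture.

import numpy as np

rng = np.random.default_rng(0)
a = 2.0                                # rate parameter (illustrative value)
Y = rng.uniform(size=100_000)          # Y is uniformly distributed on [0, 1]
X = -np.log(1.0 - Y) / a               # X = F^{-1}(Y) = -(1/a) ln(1 - Y)

# Empirical check: the fraction of samples with X <= x should be close to
# F(x) = 1 - exp(-a x).
for x in (0.2, 0.5, 1.0):
    print(x, (X <= x).mean(), 1.0 - np.exp(-a * x))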
2.5 Distribution functions of transformed random variables
X → random variable with a distribution function FX (x) and
the corresponding distribution PX (A);
f (x) : R1 → R1 is a Borel function.
Af (x) = {y ∈ R1 : f (y) ≤ x}, x ∈ R1 .
Theorem 4.12. The distribution function of the transformed
random variable Y = f (X) is given by the following formula,
FY (x) = P (f (X) ≤ x) = P (X ∈ Af (x)) = PX (Af (x)), x ∈ R1 .
Examples
(1) Y = aX + b, a > 0;
Af (x) = (−∞, (x − b)/a];
FY (x) = FX ((x − b)/a).
(2) Y = eaX , a > 0;
Af (x) = ∅ if x ≤ 0 or (−∞, (1/a) ln x] if x > 0;
FY (x) = I(x > 0)FX ((1/a) ln x).
(3) Y = X 2 ;
Af (x) = ∅ if x ≤ 0 or [−√x, √x] if x > 0;
FY (x) = I(x > 0)(FX (√x) − FX (−√x − 0)).
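As a hedged illustration of Theorem 4.12 and example (3), the following Python sketch takes X standard normal (an assumption made only for this example) and compares the formula FY (x) = I(x > 0)(FX (√x) − FX (−√x − 0)) with the empirical frequency of {Y ≤ x} for Y = X 2.

import math
import numpy as np

def F_X(x):
    # Distribution function of the standard normal law, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_Y(x):
    # Theorem 4.12 with f(y) = y^2: A_f(x) = [-sqrt(x), sqrt(x)] for x > 0,
    # so F_Y(x) = F_X(sqrt(x)) - F_X(-sqrt(x) - 0); F_X is continuous here,
    # hence F_X(-sqrt(x) - 0) = F_X(-sqrt(x)).
    if x <= 0.0:
        return 0.0
    s = math.sqrt(x)
    return F_X(s) - F_X(-s)

rng = np.random.default_rng(0)
X = rng.standard_normal(200_000)
Y = X**2                               # the transformed random variable
for x in (0.5, 1.0, 2.0):
    print(x, F_Y(x), (Y <= x).mean())  # formula vs. empirical frequency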
3 Types of distribution functions
3.1 Discrete distribution functions
Definition 4.9. A distribution function F (x) is discrete if there exists a finite or countable set of points A = {a1 , a2 , . . .} such that Σn (F (an ) − F (an − 0)) = 1.
If X is a random variable with the distribution function F (x) then
P (X ∈ A) = Σ_{an ∈ A} P (X = an ) = Σ_{an ∈ A} (F (an ) − F (an − 0)).
Examples
(a) Bernoulli distribution;
(b) Discrete uniform distribution;
(c) Binomial distribution;
(d) Poisson distribution;
(e) Geometric distribution;
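A small Python sketch of Definition 4.9, using the Poisson distribution with an illustrative parameter λ = 3: the jumps F (an ) − F (an − 0) = P (X = n) sum to 1, and the distribution function is a step function, constant between the jump points.

import math

lam = 3.0                              # illustrative Poisson parameter

def jump(n):
    # Jump of F at the point a_n = n: F(n) - F(n - 0) = P(X = n).
    return math.exp(-lam) * lam**n / math.factorial(n)

jumps = [jump(n) for n in range(60)]
print(sum(jumps))                      # ~= 1, as Definition 4.9 requires

def F(x):
    # Discrete (step) distribution function of the Poisson(lam) law.
    return sum(p for n, p in enumerate(jumps) if n <= x)

print(F(2.5), F(3.0))                  # F is constant between the jump points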
3.2 Absolutely continuous distribution functions.
Definition 4.10. A distribution function F (x) is absolutely continuous if it can be represented in the following form
F (x) = ∫_{−∞}^{x} f (y) dy, x ∈ R1 ,
where (a) f (y) is a non-negative Borel function; (b) ∫_{−∞}^{∞} f (y) dy = 1; (c) Lebesgue integration is used in the formula above (if f (y) is a Riemann integrable function then the Lebesgue integration can be replaced by Riemann integration).
Examples
(a) Uniform distribution;
(b) Exponential distribution;
(c) Normal (Gaussian) distribution;
(d) Gamma distribution;
(e) Cauchy distribution;
(f) Pareto distribution.
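As an illustration of Definition 4.10, the following Python sketch recovers F (x) by numerically integrating a density; the standard normal density is an illustrative choice, and the lower integration limit −10 stands in for −∞. It is a rough numerical check, not part of the lecture.

import math
import numpy as np

def cdf_from_density(f, x, lower=-10.0, n=20_000):
    # Approximates F(x) = integral from -infinity to x of f(y) dy by a
    # trapezoid sum on [lower, x]; 'lower' replaces -infinity and assumes
    # the density is negligible below it.
    ys = np.linspace(lower, x, n)
    vals = f(ys)
    dy = ys[1] - ys[0]
    return float(np.sum((vals[:-1] + vals[1:]) * 0.5 * dy))

def normal_density(y):
    return np.exp(-y**2 / 2.0) / math.sqrt(2.0 * math.pi)

# Compare with the exact normal distribution function (via the error function).
for x in (-1.0, 0.0, 1.5):
    exact = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    print(x, cdf_from_density(normal_density, x), exact)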
3.3 Singular distribution functions.
Definition 4.11. A distribution function F (x) is singular if it is a continuous function and its set of points of growth SF has Lebesgue measure m(SF ) = 0 (x is a point of growth if F (x + ε) − F (x − ε) > 0 for any ε > 0).
Example*
Define a continuous distribution function F (x) such that F (x) = 0 for x < 0 and F (x) = 1 for x > 1, whose set of points of growth SF is the Cantor set, in the following way:
(a) Define the function F (x) on the removed middle-third (internal) intervals step by step:
(1) [0, 1] = [0, 1/3] ∪ [1/3, 2/3] ∪ [2/3, 1]: F (x) = 1/2, x ∈ [1/3, 2/3];
(2) [0, 1/3] = [0, 1/9] ∪ [1/9, 2/9] ∪ [2/9, 1/3]: F (x) = 1/4, x ∈ [1/9, 2/9];
(3) [2/3, 1] = [2/3, 7/9] ∪ [7/9, 8/9] ∪ [8/9, 1]: F (x) = 3/4, x ∈ [7/9, 8/9];
.......
(b) Define the function F (x) by continuity at the points that do not belong to the internal intervals listed above.
(c) The sum of the lengths of all internal intervals, where the function F (x) takes constant values, is equal to
1/3 + 2 · (1/9) + 4 · (1/27) + · · · = (1/3) Σ_{k=0}^{∞} (2/3)^k = (1/3) · 1/(1 − 2/3) = 1.
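The construction above can be evaluated numerically. The following Python sketch (illustrative, not from the lecture) computes the Cantor distribution function by reading ternary digits of x and mapping the digit 2 to the binary digit 1; it reproduces the values 1/2, 1/4, 3/4 on the intervals listed in step (a).

def cantor_F(x, depth=40):
    # Cantor (singular) distribution function on [0, 1]: read ternary digits
    # of x; on a removed middle-third interval (first digit 1) F is constant;
    # otherwise the ternary digit 2 contributes a binary digit 1.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result, factor = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = int(x)
        x -= digit
        if digit == 1:
            result += factor
            break
        result += factor * (digit // 2)
        factor *= 0.5
    return result

# Matches the construction above: F = 1/2 on [1/3, 2/3], 1/4 on [1/9, 2/9],
# 3/4 on [7/9, 8/9].
print(cantor_F(0.5), cantor_F(0.15), cantor_F(0.8))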
3.4 Decomposition representation for distribution functions
Theorem 4.13 (Lebesgue)**. Any distribution function F (x) can be represented in the form F (x) = p1 F1 (x) + p2 F2 (x) + p3 F3 (x), x ∈ R1 , where (a) F1 (x) is a discrete distribution function, (b) F2 (x) is an absolutely continuous distribution function, (c) F3 (x) is a singular distribution function, (d) p1 , p2 , p3 ≥ 0, p1 + p2 + p3 = 1.
4 Multivariate random variables (random vectors)
4.1 Random variables with values in a measurable
space
Let X be an arbitrary space and B(X ) be a σ-algebra of measurable subsets of X .
Definition 4.12. A random variable X = X(ω) defined on a
probability space < Ω, F, P > and taking values in the space
X (with a σ-algebra of measurable subsets B(X )) is a measurable function acting from Ω → X , i.e., a function such that
{ω : X(ω) ∈ A} ∈ F for any A ∈ B(X ).
Examples
(1) X = R1 , B(X ) = B1 . In this case, X is a real-valued random
variable;
(2) X = {0, 1} × · · · × {0, 1} (the product is taken n times),
B(X ) is a σ-algebra of all subsets of X . A random variable
X = (X1 , . . . , Xn ) is a Bernoulli vector whose components are Bernoulli random variables.
(3) X is a metric space, B(X ) is a Borel σ-algebra of subsets of
X (the minimal σ-algebra containing all balls); X is a random
variable taking values in the metric space X .
4.2 Random vectors
X = Rn , B = Bn and P is a probability measure defined on Bn .
Definition 4.13. A multivariate random variable (random vector) is a random variable X = (X1 , . . . , Xn ) defined on a probability space < Ω, F, P > and taking values in the space X = Rn (with the σ-algebra of measurable subsets B(X ) = Bn ).
(1) Every component of a random vector is a real-valued random variable defined on the same probability space.
(2) If Xk = Xk (ω), k = 1, . . . , n are real-valued random variables defined on some probability space, then X = (X1 , . . . , Xn )
is a random vector defined on the same probability space.
4.3 Multivariate distribution functions
Definition. A multivariate distribution function of a random variable (random vector) X = (X1 , . . . , Xn ) is a function
F (x1 , . . . , xn ) defined for x = (x1 , . . . , xn ) ∈ Rn by the following
relation
FX1 ,...,Xn (x1 , . . . , xn ) = P (X1 ≤ x1 , . . . , Xn ≤ xn ).
The multivariate distribution function possesses the following properties:
(1) limxk →−∞ FX1 ,...,Xn (x1 , . . . , xn ) = 0;
(2) FX1 ,...,Xn (x1 , . . . , xn ) is non-decreasing in every argument;
(3) limxk →∞,k=1,...,n FX1 ,...,Xn (x1 , . . . , xn ) = 1;
(4) the multivariate distribution functions of the random vectors
(X1 , . . . , Xn ) and (X1 , . . . , Xk−1 , Xk+1 , . . . , Xn ) are connected by
the following relation limxk →∞ FX1 ,...,Xn (x1 , . . . , xn ) =
FX1 ,...,Xk−1 ,Xk+1 ,...Xn (x1 , . . . , xk−1 , xk+1 , . . . , xn );
(5) P (X1 ∈ (a1 , b1 ], . . . , Xn ∈ (an , bn ]) = FX1 ,...,Xn (b1 , . . . , bn ) − Σk FX1 ,...,Xn (b1 , . . . , bk−1 , ak , bk+1 , . . . , bn ) + · · · + (−1)n FX1 ,...,Xn (a1 , . . . , an ) ≥ 0;
(6) FX1 ,...,Xn (x1 , . . . , xn ) is continuous from above, that is, limyk ↓xk ,k=1,...,n FX1 ,...,Xn (y1 , . . . , yn ) = FX1 ,...,Xn (x1 , . . . , xn ).
Example
Let X = (X1 , X2 ) be a two-dimensional random vector. Then
P (X1 ∈ (a1 , b1 ], X2 ∈ (a2 , b2 ]) = FX1 ,X2 (b1 , b2 )
−FX1 ,X2 (b1 , a2 ) − FX1 ,X2 (a1 , b2 ) + FX1 ,X2 (a1 , a2 ).
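A small numerical check of this inclusion-exclusion formula, assuming for illustration that X1 and X2 are independent Exp(1) random variables, so that FX1 ,X2 (x, y) = (1 − e−x)(1 − e−y) for x, y ≥ 0; the rectangle probability computed from the joint distribution function coincides with the product of the one-dimensional interval probabilities.

import numpy as np

def F(x, y):
    # Joint distribution function of two independent Exp(1) random variables
    # (an illustrative choice): F(x, y) = (1 - e^{-x}) (1 - e^{-y}), x, y >= 0.
    return max(1.0 - np.exp(-x), 0.0) * max(1.0 - np.exp(-y), 0.0)

a1, b1, a2, b2 = 0.2, 1.0, 0.5, 2.0
rect = F(b1, b2) - F(b1, a2) - F(a1, b2) + F(a1, a2)

# Direct computation for independent exponentials:
# P(X1 in (a1, b1]) * P(X2 in (a2, b2]).
direct = (np.exp(-a1) - np.exp(-b1)) * (np.exp(-a2) - np.exp(-b2))
print(rect, direct)                    # the two values coincide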
Theorem 4.13. A multivariate distribution function FX1 ,...,Xn (x1 , . . . , xn ) of a random vector X = (X1 , . . . , Xn ) uniquely determines the probability measure PX (A) on the Borel σ-algebra Bn by its values on the rectangles, PX ((a1 , b1 ] × · · · × (an , bn ]) = P (X1 ∈ (a1 , b1 ], . . . , Xn ∈ (an , bn ]).
5 Independent random variables
5.1 Independent random variables
Definition. Two random variables X and Y with distribution functions FX (x) and FY (y), respectively, are independent
if the two-dimensional distribution of the random vector (X, Y )
satisfies the relation
FX,Y (x, y) = FX (x) · FY (y), x, y ∈ R1 .
(1) If random variables X and Y are independent then P (X ∈
A, Y ∈ B) = P (X ∈ A) · P (Y ∈ B) for any A, B ∈ B1 .
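The following Python sketch checks this product rule by simulation, assuming for illustration that X is standard normal and Y is exponential, generated independently, with A = (0.5, ∞) and B = (−∞, 1]; the distributions, events and sample size are arbitrary choices, not part of the lecture.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)
Y = rng.exponential(size=n)            # generated independently of X

A = (X > 0.5)                          # the event {X in A} with A = (0.5, inf)
B = (Y <= 1.0)                         # the event {Y in B} with B = (-inf, 1]

# For independent X and Y: P(X in A, Y in B) = P(X in A) * P(Y in B).
print((A & B).mean(), A.mean() * B.mean())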
5.2 Mutually independent random variables
Definition 4.14. Random variables Xt , t ∈ T with distribution functions FXt (x) are mutually independent if for any distinct t1 , . . . , tn ∈ T the multivariate distribution function FXt1 ,...,Xtn (x1 , . . . , xn ) of the random vector (Xt1 , . . . , Xtn ) satisfies the relation
FXt1 ,...,Xtn (x1 , . . . , xn ) = FXt1 (x1 ) × · · · × FXtn (xn ).
LN Problems
1. Let A be a random event for a probability space < Ω, F, P > and let I = IA (ω) be the indicator of the event A. Prove that I is a random variable.
2. Let X1 , X2 , . . . be a sequence of random variables defined
on a probability space < Ω, F, P >. Let also Z = maxn≥1 Xn
and let I be the indicator of the event A = {Z < ∞}. Let Y = Z · I, where the product is counted as 0 if Z = ∞ and I = 0. Prove that
Y is a random variable.
3. Let F (x) be the distribution function of a random variable X. Prove that P (a ≤ X ≤ b) = F (b) − F (a − 0) and P (a < X < b) = F (b − 0) − F (a).
4. Let a random variable X have a continuous strictly monotonic distribution function F (x). Prove that the random variable Y = F (X) is uniformly distributed on the interval [0, 1].
5. Let Ω = {ω1 , ω2 , . . .} be a discrete sample space, F0 the σ-algebra of all subsets of Ω, and P (A) a probability measure on F0 . Let also X be a random variable defined on the discrete probability space < Ω, F0 , P >. Can the random variable X have a continuous or a singular distribution function?
6. Let a random variable X have a distribution function F (x). What distribution function does the random variable Y = aX 2 + bX + c have?
7. Let X and Y be independent random variables uniformly distributed on the interval [0, 1]. What is the bivariate distribution function of the random vector Z = (X, Y )?
8. Let X1 , . . . , Xn be independent random variables with the same distribution function F (x). What are the distribution functions of the random variables Zn+ = max(X1 , . . . , Xn ) and Zn− = min(X1 , . . . , Xn )?
9. Give the proof of Theorem 4.7.
10. Give the proof of Theorem 4.9.
11. Give the proof of Theorem 4.11 for the case of a general distribution function (with possible jumps) [see G].
12. Give the proof related to the example given in Section 3.3.