NUMERICAL CHARACTERISTICS OF A RANDOM VARIABLE
GENERATING FUNCTIONS
STRICTLY MONOTONIC TRANSFORMATION OF A RANDOM VARIABLE
EXPECTATION AS INTEGRATION
MARKOV’S INEQUALITY
Tutorial 5, STAT1301 Fall 2010, 26OCT2010, MB103@HKU
By Joseph Dong

Recall: What is a Random Variable?
• A Random Variable is a function defined on a state space.
• The state space contains the randomness.
• The sample space is accordingly random.
• The Random Variable itself is deterministic.
Recall: What we have done about RV?
• We have defined the Random Variable as a function (with a special restriction we don’t want to discuss in this course) from a given sate space to a sample space (the total set of outcomes from a random experiment) , usually a subset of ∞, ∞ . • In symbols: : Ω ∋ ↦
∈ Ω ⊂
• The sample space is the platform where we adopt the notion “variable”. 3
Recall: What have we done about RVs?
• We have defined the probability distribution of a random variable.
• This is the law governing the random variable’s dance in the sample space.
• Two equivalent ways of describing the law:
• By a probability measure on the sample space: ℙ_X (takes in a set as argument).
• By listing the probability measure for all atoms of the sample space; this is equivalent to defining a PDF or PMF, or a general probability function.
• By a distribution function (takes in a number as argument):
F_X : (−∞, ∞) ∋ x ↦ F_X(x) ∈ [0, 1],  F_X(x) ≔ ℙ(X ≤ x) = ℙ{ω : X(ω) ≤ x}
• The distribution function is never decreasing.
• F_X(−∞) = 0 and F_X(+∞) = 1.
• The distribution function is right continuous.
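As a quick illustration (my own example, not from the original slides, assuming a fair six-sided die), a minimal Python sketch building F_X from the PMF and checking the three properties above:

# Sketch: CDF of a fair die, F_X(x) = P(X <= x) = sum of PMF over atoms <= x.
pmf = {k: 1 / 6 for k in range(1, 7)}   # P(X = k) for each atom k

def F(x):
    """Distribution function: accumulate the PMF over atoms <= x."""
    return sum(p for k, p in pmf.items() if k <= x)

assert F(0) == 0                  # F_X(-inf) = 0
assert abs(F(6) - 1) < 1e-9       # F_X(+inf) = 1
assert F(2.5) == F(2)             # flat between atoms: a step function
assert F(3) > F(2.999)            # jumps at the atom, continuous from the right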
Numerical Characteristics of a Random Variable and Related Topics
• Workplace = a numerical sample space (a subset of ℝ) = (Ω_X, ℙ_X).
• Expectation: 𝔼X = ∫_{Ω_X} x dℙ_X. What’s the integrand? What’s the bedrock for integration?
• Law Of The Unconscious Statistician: 𝔼g(X) = ∫_{Ω_X} g(x) dℙ_X, or ∫ g(x) f_X(x) dx.
• Moments = expectations of positive integer powers: 𝔼X^k. Expectation is a moment. Variance is a moment. Moment is the most general concept among the three.
• Variance = 2nd-order central moment: Var X = 𝔼(X − 𝔼X)².
• Compute moments using the Moment Generating Function. Generating Function is a trick; here we apply the trick to the problem of finding moments, and we get a huge bonus (in Ch4).
• Markov & Chebyshev Inequalities: ℙ(X ≥ a) ≤ 𝔼X/a and ℙ(|X − 𝔼X| ≥ a) ≤ Var X/a². Chebyshev is Markov’s teacher, but the relationship is reversed for the two inequalities. Markov’s Inequality has a physical meaning.
• Strictly Monotonic Transformation of an R.V. & an invariant differential: when g is strictly increasing, f_Y(y) dy = f_X(x) dx.
Linearity of Expectation

𝔼(a₁X₁ + a₂X₂ + ⋯ + a_nX_n) = a₁𝔼X₁ + a₂𝔼X₂ + ⋯ + a_n𝔼X_n, where n can be ∞.

Simple cases: 𝔼(X + Y) = 𝔼X + 𝔼Y and 𝔼(aX) = a𝔼X.
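A small simulation sketch (my own example: X ~ N(1, 2²) and a Y that depends on X, both assumptions for illustration) showing that linearity needs no independence:

# Sketch: estimate E(2X + 5Y) and compare with 2*EX + 5*EY.
import random

random.seed(0)
xs = [random.gauss(1.0, 2.0) for _ in range(100_000)]
ys = [3 * x + random.uniform(-1, 1) for x in xs]   # Y depends on X

mean = lambda v: sum(v) / len(v)
lhs = mean([2 * x + 5 * y for x, y in zip(xs, ys)])  # estimates E(2X + 5Y)
# Theory: EX = 1, EY = 3*1 + 0 = 3, so 2*EX + 5*EY = 17, despite dependence.
print(round(lhs, 2))   # ~17, up to Monte Carlo error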
Technical Exercises
• Handout Problems 1, 2, and 3.
• This is the level you had already mastered before yesterday’s midterm.
A Closer Look at Expectation
• Expectation is a generalized integral.
• Let’s forget about probability theory for a few minutes and go back to calculus.
• Usually we use a homogeneous horizontal axis for integration; the density is the same everywhere, such as in ∫_{−1}^{1} 1 dx.
• But we can generalize by allowing the density to vary from place to place on the horizontal axis.
• To take care of the density, we introduce a density function ρ(x) into the integral as ∫_{−1}^{1} 1 · ρ(x) dx. (Of course the integral will now change value, except when ρ ≡ 1 everywhere.)
Center of Mass and Expectation
• For now let’s forget about the curve and focus on the x-axis.
• If we treat the segment [−1, 1] on the horizontal axis as a massed segment with linear mass density ρ(x), we can now compute the coordinate of its center of mass, x̄, according to the formula:
x̄ = ∫_{−1}^{1} x ρ(x) dx / ∫_{−1}^{1} ρ(x) dx
• One more step:
x̄ = ∫_{−1}^{1} x · [ρ(x) / ∫_{−1}^{1} ρ(u) du] dx
• Note that the denominator ∫_{−1}^{1} ρ(u) du can be regarded as a normalizing constant, and the normalized ρ could be some real probability density!
• Now suppose the x-axis is the state space of some random variable X, and ρ(x) is actually f_X(x), the probability density; then x̄ and 𝔼X are the same thing, both conceptually and technically.
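To make this concrete, a numeric sketch (my own illustration; the mass density ρ(x) = 1 + x on [−1, 1] is an assumed example, with exact center of mass 1/3):

# Sketch: center of mass = (integral of x*rho) / (integral of rho).
def integrate(f, a, b, n=100_000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

rho = lambda x: 1 + x
mass = integrate(rho, -1, 1)                       # normalizing constant (= 2)
xbar = integrate(lambda x: x * rho(x), -1, 1) / mass
print(round(xbar, 4))   # ~0.3333; equals E[X] when rho/mass is the pdf of X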
Exercises: Handout Problems 4 & 5
Law of the Unconscious Statistician
• We go one step further to find the expectation of any function of X, such as X², ln X, etc.; that is, 𝔼g(X) = ?
• Go back to the previous unresolved integration ∫_{−1}^{1} 1 · ρ(x) dx and, without loss of generality, assume the ρ here is a probability density.
• Obs 1: If two r.v.’s share the same sample space and the same distribution, then they must have the same expectation.
• Obs 2: If two values, say x₁ and x₂, are mapped by g to the same value y, that is, if g(x₁) = g(x₂) = y, then their probabilities accumulate: ℙ(g(X) = y) = ∑_{x : g(x) = y} ℙ(X = x).
• Therefore 𝔼g(X) = ∫ g(x) f_X(x) dx: we can weight g(x) by X’s own density and never leave X’s sample space.
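To see Obs 2 at work, a minimal discrete sketch (my own example: an assumed PMF on {−2, …, 2} and g(x) = x², so that ±2 and ±1 collide):

# Sketch: LOTUS on X's sample space vs. the direct definition on g(X)'s.
from collections import defaultdict

pmf_x = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
g = lambda x: x * x

# LOTUS: stay on X's sample space, weight g(x) by P(X = x).
lotus = sum(g(x) * p for x, p in pmf_x.items())

# Direct: build the pmf of Y = g(X) by accumulating colliding atoms (Obs 2).
pmf_y = defaultdict(float)
for x, p in pmf_x.items():
    pmf_y[g(x)] += p
direct = sum(y * p for y, p in pmf_y.items())

print(lotus, direct)   # both 1.2: the two computations agree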
A New Level of Understanding
• Now we understand the meaning of the new integral ∫_{−1}^{1} 1 · f(x) dx, where f is a probability density on the x-axis: it is the expectation of the constant function 1,
𝔼1 = ∫_{−1}^{1} 1 · f(x) dx = 1.
• Expectation is an Integration of the general kind.
• Why “unconscious”? Statisticians treat g(X) as a random variable, but they are unconscious about the fact that g(X) has a different sample space than X has. Hence the definition of 𝔼g(X), or more explicitly written 𝔼(g ∘ X), should be
𝔼g(X) = ∫ y f_{g(X)}(y) dy,
and it takes some reasoning to establish the equality of this integral with the one used in LOTUS.
Markov’s Inequality

ℙ(X ≥ a) ≤ 𝔼X / a for a > 0; equivalently, ℙ(X ≥ a𝔼X) ≤ 1/a.

Caution: Markov’s Inequality only works for non-negative r.v.’s.
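A quick simulation check (my own example: X ~ Exponential(1), a non-negative r.v. with 𝔼X = 1, chosen only for illustration):

# Sketch: the estimated tail P(X >= a) never exceeds the Markov bound EX / a.
import random

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]
ex = sum(xs) / len(xs)                         # ~1.0

for a in (1.0, 2.0, 5.0):
    tail = sum(x >= a for x in xs) / len(xs)   # estimated P(X >= a)
    print(a, round(tail, 4), round(ex / a, 4)) # tail <= EX / a every time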
Generating Function
• Generating Function is a general math technique.
• Whenever you have a function whose value set (range) is a countable set, you can embed these values in a power series as:
G(t) = a₀ + a₁t + a₂t² + a₃t³ + ⋯
where a₀, a₁, a₂, ⋯ is the range of the function. In specific cases, the power series will converge (sum) to a compact form, but it will still be a function of t.
• Question: how do you get back the aₖ’s when you are directly given G(t)?
• One widely used way is to differentiate G with respect to t multiple times, evaluate the derivative at t = 0, and divide by a constant.
• For example, if you want to get back a₃, the procedure is a₃ = G‴(0)/3!.
• Often, to remove the division step, we adopt the form
G(t) = a₀ + a₁t + a₂t²/2! + a₃t³/3! + ⋯ + aₖtᵏ/k! + ⋯
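A symbolic sketch of the extraction trick (my own example: the compact form G(t) = 1/(1 − 2t), which embeds the sequence aₖ = 2ᵏ):

# Sketch: recover a coefficient by differentiating at t = 0, using sympy.
import sympy as sp

t = sp.symbols('t')
G = 1 / (1 - 2 * t)                  # compact form of sum of (2**k) * t**k
a3 = sp.diff(G, t, 3).subs(t, 0) / sp.factorial(3)
print(a3)                            # 8, i.e. 2**3, as expected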
Moment Generating Function
• Recall: the k-th moment of a random variable is 𝔼Xᵏ, where k is a non-negative integer (𝔼X⁰ ≡ 1).
• If we regard 𝔼Xᵏ as a function whose value is indexed by k, then the value set is a countable set: 1, 𝔼X, 𝔼X², 𝔼X³, ⋯
• Then we can embed all the moments in a generating function/power series known as the Moment Generating Function:
M_X(t) = 1 + (𝔼X)t + (𝔼X²)t²/2! + (𝔼X³)t³/3! + ⋯
       = 𝔼[1 + Xt + (Xt)²/2! + (Xt)³/3! + ⋯]
       = 𝔼e^{tX}.
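A symbolic sketch (my own example: X ~ Exponential(1), whose MGF is the known compact form 1/(1 − t) for t < 1, with moments 𝔼Xᵏ = k!):

# Sketch: differentiate the MGF at t = 0 to read off the moments.
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)          # MGF of X ~ Exponential(1), valid for t < 1
for k in (1, 2, 3):
    print(k, sp.diff(M, t, k).subs(t, 0))   # E[X^k] = k!: prints 1, 2, 6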
Strictly Monotonic Transformation of an R.V.
• Strictly Monotonic Transformation (Function)
• Strictly Increasing Transformation
• Strictly Decreasing Transformation
• Consider a strictly increasing function g : Ω_X ∋ x ↦ y ∈ Ω_Y. For simplicity, use y to denote g(x), and hence Y to denote g(X). The following equality between the two probability differentials must hold:
f_Y(y) dy = f_X(x) dx
• Reason: this is equivalent to claiming ℙ(y < Y ≤ y + dy) = ℙ(x < X ≤ x + dx). But since g is strictly monotonic, the event {y < Y ≤ y + dy} is exactly the same one as {x < X ≤ x + dx}, so the two probabilities are equal.
• For strictly decreasing functions, absolute values are needed.
Consequence of f_Y(y) dy = f_X(x) dx
• Caution: always remember this equality holds under the strictly monotonic transformation condition.
• Consequence: f_Y(y) = f_X(x) |dx/dy| = f_X(g⁻¹(y)) |dg⁻¹(y)/dy|.
• Caution: the absolute value here is always needed, for some very mysterious reason in the general theory of calculus (consult Loomis’s Advanced Calculus if you are interested).
• This is the standard way of finding the (strictly monotonically) transformed density function.
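A worked sketch of this recipe (my own example: X ~ Exponential(1) and the strictly increasing g(x) = eˣ, so g⁻¹(y) = ln y and Y is supported on y > 1):

# Sketch: f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y)/dy|, computed with sympy.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f_X = sp.exp(-x)                    # density of X ~ Exponential(1), x > 0
g_inv = sp.log(y)                   # inverse of g(x) = e**x
f_Y = f_X.subs(x, g_inv) * sp.Abs(sp.diff(g_inv, y))
print(sp.simplify(f_Y))             # y**(-2), supported on y > 1
print(sp.integrate(y**-2, (y, 1, sp.oo)))   # 1: a proper density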