AMS 311, Lecture 14
March 27, 2001
Two-problem quiz next class (March 29): univariate transformation of a random variable and moments of a univariate random variable. You may use one sheet of notes.
Homework for Chapter Seven (due April 5): Starting on page 252: 6, 10, 14*; starting
on page 266: 8, 16, 24*; starting on page 274: 4, 8; starting on page 285: 2, 8, 12.
Theorem 6.1. (Method of Transformations)
Let X be a continuous random variable with density function f_X and set of possible values A. For an invertible function h: A \to R, let Y = h(X) be a random variable with set of possible values B = h(A) = \{h(a) : a \in A\}. Suppose that the inverse of y = h(x) is the function x = h^{-1}(y), which is differentiable for all values of y \in B. Then f_Y, the density function of Y, is given by
f_Y(y) = f_X(h^{-1}(y)) |(h^{-1})'(y)|,  y \in B.
In computer simulation, one applies the probability integral transformation to generate values following a specified distribution.
Two example problems on the density function of a random variable:
Example 1. Let U be a uniform(0,1) random variable. Find the distribution of Y = -\ln(1-U).
Always look for two ways.
Direct: Find the cdf of Y:
F_Y(y) = P(Y \le y) = P(-\ln(1-U) \le y) = P(1-U \ge e^{-y}) = P(U \le 1-e^{-y}) = 1 - e^{-y},  y > 0,
since U is uniform(0,1). From this one can find the pdf by differentiation: f_Y(y) = e^{-y} for y > 0, the exponential(1) density.
Use of Theorem 6.1:
The inverse function is u = h^{-1}(y) = 1 - e^{-y}. Hence the differential element is du = e^{-y} dy, so f_Y(y) = f_U(1-e^{-y}) e^{-y} = e^{-y} for y > 0, as before.
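A minimal simulation sketch of this idea, using the result of Example 1 (the function name and sample size are illustrative, not from the lecture):

import math
import random

def sample_exponential(n):
    # Probability integral transformation of Example 1: if U is
    # uniform(0,1), then Y = -ln(1 - U) is exponential with rate 1.
    return [-math.log(1.0 - random.random()) for _ in range(n)]

values = sample_exponential(100_000)
# The sample mean should be close to 1, the mean of an exponential(1).
print(sum(values) / len(values))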
Definition of Expected Value
If X is a continuous random variable with probability density function f, the expected value of X is defined by
E(X) = \int_{-\infty}^{\infty} x f(x) dx,
provided that the integral converges absolutely.
Definition of var(X): The variance of the random variable X is still
var(X) = E[(X - E(X))^2].
Example
A random variable X with density function
f(x) = \frac{c}{1 + x^2},  -\infty < x < \infty,
is called a Cauchy random variable. Find c so that f(x) is a pdf. Show that E(X) does not exist.
Don’t be bashful about checking your old calculus books and tables of integrals! From
there, you will find
\int \frac{dx}{1 + x^2} = \arctan x + C.
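A worked sketch of both claims, using that antiderivative (standard calculus, not spelled out in the transcript):

\int_{-\infty}^{\infty} \frac{c}{1+x^2} dx = c [\arctan x]_{-\infty}^{\infty} = c\pi = 1, so c = \frac{1}{\pi}.

For E(X) to exist, \int_{-\infty}^{\infty} \frac{|x|}{\pi(1+x^2)} dx must converge absolutely, but
\int_0^{\infty} \frac{x}{\pi(1+x^2)} dx = \frac{1}{2\pi} \ln(1+x^2) \Big|_0^{\infty} = \infty,
so E(X) does not exist.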
Law of the unconscious statistician (I prefer to call this the law of the choice of
probability measures).
Theorem 6.3.
Let X be a continuous random variable with probability density function f(x); then for any function h: R \to R,
E(h(X)) = \int_{-\infty}^{\infty} h(x) f(x) dx.
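A minimal numerical check of Theorem 6.3 (the choices X ~ exponential(1), h(x) = x^2, and the grid and sample sizes are illustrative assumptions; for this X, E(h(X)) = 2):

import math
import random

def f(x):
    # exponential(1) density
    return math.exp(-x)

def h(x):
    return x * x

# Riemann sum of h(x) f(x) dx over [0, 50]; the density is negligible beyond.
dx = 0.001
riemann = sum(h(i * dx) * f(i * dx) * dx for i in range(50_000))

# Monte Carlo: average h over simulated values of X (inverse transform again).
samples = [-math.log(1.0 - random.random()) for _ in range(100_000)]
monte_carlo = sum(h(x) for x in samples) / len(samples)

print(riemann, monte_carlo)  # both should be close to 2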
Uniform distribution:
A random variable X is uniformly distributed over the interval (a, b) if its pdf is
f(x) = \frac{1}{b-a},  a < x < b, and zero otherwise.
Then E(X) = \frac{a+b}{2} and var(X) = \frac{(b-a)^2}{12}.
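These two formulas follow from a quick calculation (standard, not carried out in the transcript):

E(X) = \int_a^b \frac{x}{b-a} dx = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2},
E(X^2) = \int_a^b \frac{x^2}{b-a} dx = \frac{a^2 + ab + b^2}{3},
var(X) = E(X^2) - (E(X))^2 = \frac{a^2 + ab + b^2}{3} - \frac{(a+b)^2}{4} = \frac{(b-a)^2}{12}.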
Example. Let the random variable X be uniform(0,1). Find E(e^{tX}).
The function in the last problem is extremely important in later chapters. It is called the
moment generating function.
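The calculation the example calls for is one line (a sketch, with the t = 0 case handled separately):

E(e^{tX}) = \int_0^1 e^{tx} dx = \frac{e^t - 1}{t} for t \ne 0, and E(e^{0 \cdot X}) = 1.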
Normal Distribution
Statement of pdf. The cdf is tabulated and is a basic reference for working problems.
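For reference, the density being referred to (the standard N(\mu, \sigma^2) formula, not reproduced in the transcript) is
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2 / (2\sigma^2)},  -\infty < x < \infty,
and the tabulated cdf is that of the standard normal (\mu = 0, \sigma = 1): \Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-u^2/2} du.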
De Moivre’s Theorem: a central-limit-theorem-type result for approximating the number of heads in n independent tosses of a fair coin.
De Moivre-Laplace Theorem. Generalization to n independent Bernoulli trials with
probability of success p.
All probabilities are calculated through conversion to a standard normal distribution.
Basic principle of p-values in statistical tests.
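A minimal sketch of the De Moivre-Laplace approximation and the conversion to a standard normal (the parameters n, p, and k, and the continuity correction of 0.5, are standard illustrative choices, not from the lecture):

import math

def phi(z):
    # Standard normal cdf, via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binomial_cdf(n, p, k):
    # Exact P(X <= k) for X ~ binomial(n, p).
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

# Illustrative parameters: 100 tosses of a fair coin, P(at most 55 heads).
n, p, k = 100, 0.5, 55
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# Convert to a standard normal with a continuity correction.
approx = phi((k + 0.5 - mu) / sigma)
print(binomial_cdf(n, p, k), approx)  # the two values should be close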