STA 348: Introduction to Stochastic Processes
Lecture 3
Example

- (X1, X2) uniformly distributed over the triangle with vertices (-1,0), (0,1), (1,0)
- Find Cov(X1, X2)

[Figure: the triangular domain in the (X1, X2)-plane, with vertices (-1,0), (0,1), (1,0)]
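A minimal Monte Carlo sketch for estimating this covariance (not from the slides; rejection sampling from the bounding box is just a convenient choice). The symmetry of the triangle under x → −x suggests the true value is 0, even though X1 and X2 are not independent:

    import numpy as np

    rng = np.random.default_rng(0)

    # Rejection-sample uniform points from the triangle with vertices
    # (-1,0), (0,1), (1,0): the region 0 <= y <= 1 - |x|.
    n = 1_000_000
    x = rng.uniform(-1, 1, n)
    y = rng.uniform(0, 1, n)
    keep = y <= 1 - np.abs(x)
    x1, x2 = x[keep], y[keep]

    # Sample covariance; symmetry under x -> -x suggests Cov = 0.
    print(np.cov(x1, x2)[0, 1])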
Moment Generating Functions

- Moment generating function (mgf) of RV X:
  $\phi(t) = E[e^{tX}] = \sum_x e^{tx} p(x)$ (discrete X), or $\int_{-\infty}^{\infty} e^{tx} f(x)\,dx$ (continuous X)
- Properties:
  - $\phi^{(k)}(0) = \frac{d^k}{dt^k}\phi(t)\big|_{t=0} = E[X^k]$
  - $\phi_X(t) = \phi_Y(t)$ ⟹ X & Y have the same distribution
  - $X_1, \ldots, X_n$ independent & identically distributed (iid), each with mgf $\phi_X(t)$
    ⟹ $S = \sum_{i=1}^{n} X_i$ has mgf $\phi_S(t) = [\phi_X(t)]^n$
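As a quick check of the derivative property (a sketch, not from the slides; it uses sympy and the Exp(λ) mgf derived in the next example):

    import sympy as sp

    t, lam = sp.symbols('t lambda', positive=True)

    # mgf of an Exp(lambda) RV: phi(t) = lambda / (lambda - t), for t < lambda
    phi = lam / (lam - t)

    # k-th derivative at t = 0 recovers the k-th moment E[X^k]
    for k in (1, 2, 3):
        print(k, sp.simplify(sp.diff(phi, t, k).subs(t, 0)))  # k! / lambda^k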
Distribution Summary

[Summary table of common distributions; not captured in this transcript]
Example

Show that the sum of n independent Exp(λ) RVs X1, …, Xn follows Gamma(n, λ).
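A sketch of the standard mgf argument: for each $X_i$,

    $\phi_{X_i}(t) = \int_0^\infty e^{tx}\,\lambda e^{-\lambda x}\,dx = \frac{\lambda}{\lambda - t}, \qquad t < \lambda.$

By the iid property above, $\phi_S(t) = [\phi_{X_i}(t)]^n = \left(\frac{\lambda}{\lambda - t}\right)^n$, which is the mgf of Gamma(n, λ); by uniqueness of mgfs, $S \sim \text{Gamma}(n, \lambda)$.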
Basic Probability Theorems

- Markov Inequality: For non-negative RV X,
  $P(X \ge a) \le E(X)/a$, for all $a > 0$
- Chebyshev Inequality: For any RV X with $\mu = E(X)$ and $\sigma^2 = \mathrm{Var}(X)$,
  $P(|X - \mu| \ge a) \le \sigma^2/a^2$, for all $a > 0$
- Strong Law of Large Numbers (SLLN):
  For iid RVs $X_1, X_2, \ldots$ with $E(X_i) = \mu$,
  $\frac{X_1 + X_2 + \cdots + X_n}{n} \to \mu$ as $n \to \infty$ with probability 1
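A minimal simulation sketch of the SLLN (assuming Exp(1) increments, so μ = 1; not part of the slides):

    import numpy as np

    rng = np.random.default_rng(1)

    # Running averages of iid Exp(1) draws; the SLLN says they converge to mu = 1.
    x = rng.exponential(scale=1.0, size=100_000)
    running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
    for n in (10, 1_000, 100_000):
        print(n, running_mean[n - 1])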
Stochastic Processes

- Stochastic process: collection of RVs $\{X_t, t \in T\}$
  - RV $X_t$ or $X(t)$ is the value of the process at $t$
  - Index $t$ often represents time/space
  - Index set $T$ contains all possible values of $t$
- Countable T → discrete-time process $\{X_n, n = 0, 1, 2, \ldots\}$
  - E.g. Xn = employee's salary in year n
- Uncountable T → continuous-time process $\{X(t), t \ge 0\}$
  - E.g. X(t) = location of a particle at time t
- State space = set of all possible values for Xt
  - E.g. X(t) = location of a particle → state space = ℝ³
Example

- Consider a particle moving along a set of (m+1) nodes arranged in a circle
- If at time n the particle is at node i, then at time n+1 it goes to:
  - node i+1 with prob p = 1/2
  - node i−1 with prob q = 1/2
- Xn = particle position at step n, where X0 = 0 (start at node 0)

- What is the index set?
- What type of process is this?
- What is the state space?

[Figure: nodes 0, 1, 2, …, m on a circle, with arrows from node i to i+1 (p = 1/2) and to i−1 (q = 1/2)]
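A minimal sketch simulating this walk (not from the slides; node labels 0, …, m with wrap-around mod m+1):

    import numpy as np

    def circular_walk(m, n_steps, rng):
        """Simulate X_0, ..., X_{n_steps} for the symmetric walk on m+1 circular nodes."""
        x = 0                               # X_0 = 0
        path = [x]
        for _ in range(n_steps):
            step = rng.choice((1, -1))      # to i+1 or i-1, each with prob 1/2
            x = (x + step) % (m + 1)        # wrap around the circle
            path.append(x)
        return path

    rng = np.random.default_rng(2)
    print(circular_walk(m=5, n_steps=20, rng=rng))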
Example

In the previous example, imagine the particle moves until it visits all nodes. Find the probability that the last node visited is i.

[Figure: the same circle of nodes 0, 1, 2, …, m as before]
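A Monte Carlo sketch for checking the answer (the classical result is that every node i ≠ 0 is equally likely to be last, i.e. probability 1/m each):

    import numpy as np

    def last_node(m, rng):
        """Run the walk from node 0 until all m+1 nodes are visited;
        return the last node to be visited for the first time."""
        visited = {0}
        x, last = 0, 0
        while len(visited) < m + 1:
            x = (x + rng.choice((1, -1))) % (m + 1)
            if x not in visited:
                visited.add(x)
                last = x
        return last

    rng = np.random.default_rng(3)
    m, trials = 5, 20_000
    counts = np.zeros(m + 1, dtype=int)
    for _ in range(trials):
        counts[last_node(m, rng)] += 1
    print(counts / trials)   # entries 1..m should each be near 1/m = 0.2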
Conditioning

- The key to solving many stochastic processes problems is conditioning
  - It helps break down a complicated probability or expectation into simpler (conditional) parts, which you can then calculate
- Before pursuing this approach in more detail, we first review how conditioning works
  - Look at conditional distributions, probabilities, and expectations
Discrete Conditional Distributions

- Consider RVs X, Y with joint pmf $p(x, y)$
- The conditional pmf of X given Y = y is
  $p_{X|Y}(x \mid y) = \frac{p(x, y)}{p_Y(y)}$
  (If X, Y are independent, $p_{X|Y}(x \mid y) = p_X(x)$)
- The conditional cdf of X given Y = y is
  $F_{X|Y}(x \mid y) = P(X \le x \mid Y = y) = \sum_{i \le x} p_{X|Y}(i \mid y)$
- The conditional expectation of X given Y = y is
  $E(X \mid Y = y) = \sum_x x\, p_{X|Y}(x \mid y)$
Example

A student takes 2 multiple choice tests without studying. Test 1 has n1 and test 2 has n2 A-B-C-D-type questions. If she answers each question at random, find the conditional pmf of her test 1 score (X1) given that her total score is m (X1 + X2 = m).
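A numerical sketch of this conditional pmf (assuming each answer is correct with probability 1/4, so X1 ~ Bin(n1, 1/4) and X2 ~ Bin(n2, 1/4) independently; the values n1 = 10, n2 = 15, m = 8 are illustrative):

    from scipy.stats import binom, hypergeom

    n1, n2, m, p = 10, 15, 8, 0.25

    # P(X1 = k | X1 + X2 = m) = P(X1 = k) P(X2 = m - k) / P(X1 + X2 = m)
    for k in range(m + 1):
        num = binom.pmf(k, n1, p) * binom.pmf(m - k, n2, p)
        den = binom.pmf(m, n1 + n2, p)
        # For independent binomials sharing the same p, this ratio reduces
        # to a hypergeometric pmf and does not depend on p.
        print(k, num / den, hypergeom.pmf(k, n1 + n2, n1, m))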
Continuous Conditional Distributions

- Consider RVs X, Y with joint pdf $f(x, y)$
- The conditional pdf of X given Y = y is
  $f_{X|Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)}$, for $f_Y(y) > 0$
  (If X, Y are independent, $f_{X|Y}(x \mid y) = f_X(x)$)
- The conditional cdf of X given Y = y is
  $F_{X|Y}(x \mid y) = P(X \le x \mid Y = y) = \int_{-\infty}^{x} f_{X|Y}(t \mid y)\,dt$
- The conditional expectation of X given Y = y is
  $E(X \mid Y = y) = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid y)\,dx$
Example

RVs X, Y have joint pdf:
  $f(x, y) = \begin{cases} 3x, & 0 \le y \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$

Find the conditional pdf of Y given X = .5.

[Figures: the triangular (X,Y) domain 0 ≤ y ≤ x ≤ 1 inside the unit square, and a surface plot of f(x, y)]
Example

- Find the conditional probability of Y < .25 given X = .5
- Find the conditional expectation of Y given X = .5
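A worked sketch for these two example slides, using the definitions above. The marginal of X is

    $f_X(x) = \int_0^x 3x\,dy = 3x^2, \qquad 0 \le x \le 1,$

so $f_{Y|X}(y \mid x) = \frac{3x}{3x^2} = \frac{1}{x}$ for $0 \le y \le x$, i.e. $Y \mid X = x \sim \text{Uniform}(0, x)$. With $x = .5$:

    $P(Y < .25 \mid X = .5) = \frac{.25}{.5} = .5, \qquad E(Y \mid X = .5) = \frac{.5}{2} = .25.$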
Computing Probabilities by Conditioning

- For event A and partition {B1, B2, …}, we have
  $P(A) = \sum_{i=1}^{\infty} P(A \cap B_i) = \sum_{i=1}^{\infty} P(A \mid B_i) P(B_i)$
  (Law of Total Probability)
- Applied to a stochastic process {Xt, t ∈ T}, we get
  $P(A) = \sum_x P(A \mid X_t = x)\, p_{X_t}(x)$ (discrete $X_t$), or
  $P(A) = \int P(A \mid X_t = x)\, f_{X_t}(x)\,dx$ (continuous $X_t$)
- The point is to choose wisely which events/RVs to condition on (not every conditioning works)
Example (Matchbox Problem)

A smoker buys 2 boxes with n matches each, and puts one in his left (L) and one in his right (R) pocket. Every time he lights a cigarette, he picks a pocket at random and uses a match. Consider the first time he picks a box that is empty: what is the probability that the other box is also empty?
Example

For the matchbox problem, define the RV:
Y = "# matches left in the other box when the smoker first finds out that the chosen box is empty"

Find the pmf of Y.
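A Monte Carlo sketch for checking both matchbox answers (not from the slides; n = 10 is illustrative):

    import numpy as np

    def matchbox_Y(n, rng):
        """Pick pockets at random until an empty box is first chosen;
        return Y = # matches left in the other box at that moment."""
        boxes = [n, n]
        while True:
            i = rng.integers(2)      # pick L or R with prob 1/2
            if boxes[i] == 0:        # chosen box found empty
                return boxes[1 - i]
            boxes[i] -= 1            # use a match

    rng = np.random.default_rng(4)
    n, trials = 10, 50_000
    ys = np.array([matchbox_Y(n, rng) for _ in range(trials)])
    print("P(other box also empty) ~", np.mean(ys == 0))
    print("pmf of Y ~", np.bincount(ys, minlength=n + 1) / trials)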
Example (Best Prize Problem)

- You are presented with n prizes of different values in (random) sequence
  - You don't know the prize values beforehand; you only learn the value of a prize once it is presented to you
  - At each point, you can either accept the presented prize, or reject it and move on to the next one
  - Your strategy is to reject the first k prizes, and accept the first subsequent prize whose value exceeds that of all k rejected prizes
- Find the probability of getting the best prize with this strategy: $P_k(\text{best}) = ?$
Example (Best Prize Problem, cont.)
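A sketch of the standard conditioning argument: condition on the position I of the best prize, which is uniform on {1, …, n}. The strategy wins iff the best prize sits at some position i > k and the best of the first i − 1 prizes falls among the k rejected ones (probability k/(i − 1)). Hence

    $P_k(\text{best}) = \sum_{i=k+1}^{n} \frac{1}{n} \cdot \frac{k}{i-1} = \frac{k}{n} \sum_{i=k+1}^{n} \frac{1}{i-1} \approx \frac{k}{n} \ln\frac{n}{k}.$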
Example (Best Prize Problem)

Find the k (# of initial rejections) that maximizes the probability of getting the best prize.
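A short numerical sketch (using the conditioning formula above; n = 50 is illustrative) that locates the maximizing k. The classical answer is k ≈ n/e, with success probability approaching 1/e ≈ 0.368:

    import numpy as np

    def p_best(k, n):
        """P_k(best) for the reject-first-k strategy (k = 0 accepts the first prize)."""
        if k == 0:
            return 1.0 / n
        return (k / n) * sum(1.0 / (i - 1) for i in range(k + 1, n + 1))

    n = 50
    probs = [p_best(k, n) for k in range(n)]
    k_star = int(np.argmax(probs))
    print(k_star, probs[k_star], n / np.e)   # expect k* near n/e ~ 18.4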