Optional Stopping Theorem
REU. 07/27/2011
Suppose every minute you toss a symmetric coin. If it comes up heads (with probability 1/2), you
win $1. If it comes up tails (also with probability 1/2), you lose $1. Let Xk be your win (or loss) at
the moment k. So Xk takes values ±1 with equal probabilities, and all the Xk are independent. Your
total capital at the moment n is
Sn := Σ_{k=1}^{n} Xk,    S0 = 0.
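For concreteness, this walk is easy to simulate. Below is a minimal sketch in Python; the function name random_walk is our own choice for illustration, not part of the lecture:

```python
import random

def random_walk(n_steps, seed=None):
    """Simulate S_0, S_1, ..., S_n for the symmetric walk with steps X_k = ±1."""
    rng = random.Random(seed)
    s = 0
    path = [0]                      # S_0 = 0
    for _ in range(n_steps):
        s += rng.choice((-1, 1))    # X_k = ±1 with probability 1/2 each
        path.append(s)
    return path

path = random_walk(10, seed=42)
print(path)  # a path of length 11 starting at 0, each step changing S by ±1
```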
This random process S = (Sn )n≥0 is called a random walk. You can stop this game at any moment.
This moment need not be deterministic; it can be random. Denote this moment by τ . This is a
nonnegative integer-valued random variable, and your total gain in this game will be Sτ . This
τ is your strategy: you cannot influence the values of Xk , but you can choose when to withdraw
from the game. Such a τ is also called a stopping time.
You can decide whether to exit the game at the moment n based only on the past: using
the values of X1 , . . . , Xn , which you already know by that moment. Formally, the event
{τ = n} depends only on X1 , . . . , Xn .
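As a sketch, here is one concrete stopping time for horizon N (the rule and the function name stopping_time are our own example): exit the first time the walk reaches +2, or at the moment N otherwise. Note that the decision at the moment n inspects only the path up to that moment:

```python
def stopping_time(path, target=2):
    """Return tau for the rule: stop when S_n first reaches `target`,
    or at the horizon N = len(path) - 1 if it never does."""
    N = len(path) - 1
    for n, s in enumerate(path):
        if s >= target:
            return n      # {tau = n}: decided from S_0, ..., S_n alone
    return N              # never reached the target: stop at the horizon
```

For example, stopping_time([0, 1, 2, 1]) returns 2, while stopping_time([0, -1, -2, -1]) returns 3 (the horizon).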
So we consider the random walk S = (Sn )0≤n≤N . There are two cases: when the time horizon
N is finite and when it is infinite.
Can we find a strategy τ that guarantees a gain? Since the gain Sτ is random, let us measure it
by its expectation. Question:
can we find a strategy τ such that ESτ > 0?
Theorem 1. [Optional Stopping Theorem] For a finite time horizon, this is not possible:
for every strategy τ , we have ESτ = 0.
Lemma. For every n = 0, . . . , N we have:
E[Sn I{τ=n}] = E[SN I{τ=n}].    (1)
Suppose we have proved this lemma; let us complete the proof of the theorem. The variable τ
attains values 0, . . . , N . Therefore,
ESτ = E Σ_{n=0}^{N} Sn I{τ=n} = Σ_{n=0}^{N} E[Sn I{τ=n}].
Recall that for any event A, the indicator of A is the random variable which equals 1 when A happens
and 0 when A does not happen. It follows from (1) that
Σ_{n=0}^{N} E[Sn I{τ=n}] = Σ_{n=0}^{N} E[SN I{τ=n}] = E[SN Σ_{n=0}^{N} I{τ=n}] = E[SN] = 0,
where the indicators sum to 1 because τ takes exactly one of the values 0, . . . , N .
The proof is complete. Now we only need to prove the lemma.
Proof of the lemma. It suffices to prove that E[(SN − Sn )I{τ=n}] = 0. Note that
SN − Sn = Σ_{k=n+1}^{N} Xk ,
so it depends only on Xn+1 , . . . , XN . And {τ = n} depends only on X1 , . . . , Xn . So SN − Sn and
I{τ=n} are independent. Recall: for independent variables X, Y , we have EXY = EX · EY . And
E(SN − Sn ) = E Σ_{k=n+1}^{N} Xk = Σ_{k=n+1}^{N} EXk = 0.
Thus,
E[(SN − Sn )I{τ=n}] = E(SN − Sn ) · E[I{τ=n}] = 0.
This completes the proof of the lemma and of the theorem.
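A Monte Carlo sanity check of Theorem 1 (a sketch under our own setup, not part of the lecture): take the tempting rule "exit the first time Sn = +1, else at the moment N". The average of Sτ over many games comes out near 0, not near +1, because rare large losses at the horizon offset the frequent +1 gains.

```python
import random

def play(N, rng):
    """One game with the rule: stop when S_n first hits +1, else at time N."""
    s = 0
    for n in range(1, N + 1):
        s += rng.choice((-1, 1))
        if s == 1:
            return s      # tau = n, gain +1
    return s              # tau = N, forced exit (possibly a large loss)

rng = random.Random(0)
N, trials = 100, 20000
avg = sum(play(N, rng) for _ in range(trials)) / trials
print(avg)  # close to 0, as Theorem 1 predicts
```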
In the case of an infinite time horizon, the situation is different. The random walk will eventually
reach the value 1. You should wait until it gets there and then exit the game; your gain
will be 1. Formally, τ = min{n | Sn = 1}, and Sτ = 1. Of course, the main question is
whether such a moment always exists: does this random walk eventually reach 1?
Theorem 2. With probability 1, there exists n such that Sn = 1.
Proof. Let us split the proof into five steps:
Step 1. Consider the event A = {Sn ≥ √n for infinitely many n}. Then P(A) > 0.
Step 2. Consider the event
B = {limsup_{n→∞} Sn /√n ≥ 1}.
Then P(B) > 0.
Step 3. Fix N . Then B does not depend on X1 , . . . , XN .
Step 4. P(B) = 0 or 1.
Step 5. Finish the proof.
Proof of step 1. Let Am = {Sm ≥ √m}. Then
A = ∩_{n=1}^{∞} ∪_{m=n}^{∞} Am = ∩_{n=1}^{∞} Bn ,    where Bn = ∪_{m=n}^{∞} Am .
The sequence of events B1 , B2 , . . . is decreasing: B1 ⊇ B2 ⊇ B3 ⊇ . . .. Therefore, P(A) =
lim P(Bn ) (this is the continuity property of probability). Note that, by the Central Limit Theorem,
P(Bn ) ≥ P(An ) = P{Sn ≥ √n} = P{Sn /√n ≥ 1} → P{ξ ≥ 1},
as n → ∞, where ξ is a standard normal random variable. In the first inequality, we used the fact
that Bn ⊇ An . It suffices to note that
P{ξ ≥ 1} = (1/√(2π)) ∫_1^∞ e^{−x²/2} dx ≈ 0.16 > 0.
Thus, P(A) = lim P(Bn ) ≥ P{ξ ≥ 1} > 0.
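The numerical value 0.16 can be checked with the complementary error function, using the standard identity P{ξ ≥ t} = erfc(t/√2)/2 for a standard normal ξ:

```python
import math

# Tail of the standard normal at 1: P{xi >= 1} = erfc(1/sqrt(2)) / 2
tail = 0.5 * math.erfc(1 / math.sqrt(2))
print(round(tail, 4))  # 0.1587, i.e. the ~0.16 quoted above
```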
Proof of step 2. Since A ⊆ B, we have: 0 < P(A) ≤ P(B).
Proof of step 3. For any fixed N ,
limsup_{n→∞} Sn /√n = limsup_{n→∞} ( (1/√n) Σ_{k=1}^{N} Xk + (Sn − SN )/√n ) = limsup_{n→∞} (Sn − SN )/√n,
because the first summand tends to 0 as n → ∞. And
Sn − SN = Σ_{k=N+1}^{n} Xk
is independent of X1 , . . . , XN . Therefore, limsup_{n→∞} Sn /√n does not depend on X1 , . . . , XN , and the event
B is also independent of these variables. Such events are called tail events.
Proof of step 4. It follows from step 3 that B is independent of any event generated by
X1 , . . . , XN , for any N . Let N → ∞. Then B is independent of any event generated by X1 , X2 , . . ..
But B is itself generated by these variables; so B is independent of itself! Recall that any two events
C1 , C2 are independent if P(C1 ∩ C2 ) = P(C1 )P(C2 ). Let C1 = C2 = B. Then P(B) = P(B)²,
and P(B) = 0 or 1. This is called Kolmogorov’s 0–1 law.
Proof of step 5. Combining the results of steps 2 and 4, we get P(B) = 1. Thus, with
probability 1, we have:
limsup_{n→∞} Sn /√n ≥ 1.
Therefore, with probability 1, Sn > 0 for some n. But the random walk starts from 0 and moves
in steps of ±1, so it cannot reach a positive value without passing through 1. Thus, with
probability 1, there exists n such that Sn = 1.
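An empirical illustration of Theorem 2 (a sketch; the cap of 10^6 steps is our own practical choice, since τ is finite with probability 1 but has infinite expectation, so occasional runs take very long):

```python
import random

def first_hit_of_one(rng, cap=10**6):
    """Return the first n with S_n = 1, or None if not reached within `cap` steps."""
    s = 0
    for n in range(1, cap + 1):
        s += rng.choice((-1, 1))
        if s == 1:
            return n
    return None

rng = random.Random(1)
hits = [first_hit_of_one(rng) for _ in range(200)]
print(sum(h is not None for h in hits), "of 200 runs reached +1")
```

Nearly every run hits +1 quickly, but the heavy tail of τ means a few runs wander far below 0 first.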
Remark. One can actually show that S attains every integer value infinitely many times. This
follows easily from the fact that (by the symmetry of the random walk), with probability 1,
limsup_{n→∞} Sn /√n ≥ 1    and    liminf_{n→∞} Sn /√n ≤ −1.