Stochastic Calculus
Week 2

Topics:
• Probability spaces
• Discrete random variables
• Conditional expectations

Probability spaces

A random experiment is an experiment with an a priori unknown outcome, which can be repeated under identical conditions.

Example: Toss a coin twice and report the two ordered outcomes.

A sample space is the set of possible outcomes of the experiment, denoted by Ω. Individual elements (the outcomes) are denoted by ω. A sample space is finite if Ω = {ωi, i = 1, . . . , n} for some integer n, and countable if Ω = {ωi, i = 1, 2, . . .}; otherwise it is uncountable.
Example: The sample space of the previous example is Ω = {ω1, ω2, ω3, ω4} = {HH, HT, TH, TT}, where, e.g., HT means “first toss heads (H), second toss tails (T)”.
A σ-field or σ-algebra, often denoted by F, is a collection of subsets of Ω (events), such that if the sets {A1, A2, . . .} are in F, then also their complements {A1^c, A2^c, . . .}, their intersection A1 ∩ A2 ∩ . . ., and their union A1 ∪ A2 ∪ . . . are in F. A σ-algebra always contains the null set ∅ (the “impossible event”) and Ω (the “sure event”).

A probability measure P is a function which assigns a number between 0 and 1 to every event A in F, with the additional properties P(∅) = 0, P(Ω) = 1, P(A^c) = 1 − P(A), and, for disjoint sets {Ai},

P(∪_{i=1}^∞ Ai) = Σ_{i=1}^∞ P(Ai).
Example: One possible σ-field in the example is

F = {∅, {ω1}, {ω2}, {ω3}, {ω4}, {ω1, ω2}, {ω1, ω3}, {ω1, ω4}, {ω2, ω3}, {ω2, ω4}, {ω3, ω4}, {ω1, ω2, ω3}, {ω1, ω2, ω4}, {ω1, ω3, ω4}, {ω2, ω3, ω4}, Ω}.

This contains all possible events. For example, the set {ω1, ω4} signifies the event “both tosses have the same outcome”. The smaller collection

F1 = {∅, {ω1, ω2}, {ω3, ω4}, Ω}

is also a σ-field. Here the only interesting events are “first toss is heads” and “first toss is tails”.

Example: The most natural probability measure in our example satisfies P(HH) = p², P(HT) = P(TH) = p(1 − p), P(TT) = (1 − p)² for some number 0 < p < 1. If the coin is fair, then p = 0.5.

The triple (Ω, F, P) is a probability space. It fully characterizes the random experiment.
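These definitions can be checked mechanically on the finite example. The following is a minimal Python sketch (not part of the original notes; the value p = 0.6 and all names are illustrative) that builds the power-set σ-field of the two-toss sample space and verifies the axioms of P:

```python
from itertools import chain, combinations

# Illustrative sketch: the two-toss sample space, its largest sigma-field
# (the power set), and the probability measure P with p = P(heads) = 0.6.
omega = ["HH", "HT", "TH", "TT"]

def power_set(s):
    """All subsets of s; for a finite Omega this is the largest sigma-field F."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

F = power_set(omega)
print(len(F))                          # -> 16 events, as listed above

p = 0.6                                # hypothetical P(heads)
prob = {"HH": p*p, "HT": p*(1-p), "TH": (1-p)*p, "TT": (1-p)**2}

def P(event):
    """P assigns to each event the sum of its elementary probabilities."""
    return sum(prob[w] for w in event)

assert P(frozenset()) == 0.0                              # P(empty set) = 0
assert abs(P(frozenset(omega)) - 1) < 1e-12               # P(Omega) = 1
A = frozenset({"HH", "TT"})            # "both tosses have the same outcome"
assert abs(P(frozenset(omega) - A) - (1 - P(A))) < 1e-12  # complement rule
```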
Discrete random variables

A random variable X is a real-valued function on Ω. Thus, we assign to each outcome a real number. The function should be such that events of the form {ω : X(ω) ∈ B} are in F, for all sets B which are intervals of R or can be constructed from unions and intersections of intervals; this defines the Borel σ-field B consisting of Borel sets B.

When events are in F we say that they are measurable, i.e., they can be given a probability measure. A function X satisfying the property {ω : X(ω) ∈ B} ∈ F for all B ∈ B is called Borel-measurable. Thus all events {X ∈ B} can be given a probability measure.

Example: Two possible random variables on the previous space are, for some number a > 0,

X1(ω) = a if ω ∈ {HH, HT}, and X1(ω) = −a if ω ∈ {TH, TT},

and

X2(ω) = a if ω ∈ {HH, TH}, and X2(ω) = −a if ω ∈ {HT, TT}.

Note that X1 depends on the outcome of the first toss, and X2 on the second toss. From this example one can construct a multiplicative binomial tree with u = 1/d = exp(a), by choosing a fixed S0 and defining S1(ω) = S0 exp(X1(ω)) and S2(ω) = S1(ω) exp(X2(ω)) = S0 exp(X1(ω) + X2(ω)).
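As an illustration of this construction, here is a small Python sketch (the values a = 0.1 and S0 = 100 are illustrative, not from the notes) that tabulates S1 and S2 over the four outcomes:

```python
import math

# Illustrative two-period multiplicative tree built from X1 and X2.
a, S0 = 0.1, 100.0
u, d = math.exp(a), math.exp(-a)       # u = 1/d = exp(a)

def X1(w):                             # depends only on the first toss
    return a if w in ("HH", "HT") else -a

def X2(w):                             # depends only on the second toss
    return a if w in ("HH", "TH") else -a

for w in ("HH", "HT", "TH", "TT"):
    S1 = S0 * math.exp(X1(w))
    S2 = S1 * math.exp(X2(w))          # = S0 * exp(X1(w) + X2(w))
    print(w, round(S1, 4), round(S2, 4))
# S2 takes the values S0*u^2, S0 (twice, since u*d = 1), and S0*d^2:
# a recombining multiplicative binomial tree.
```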
The distribution function of both X1 and X2 is

FX(x) = 0 if x < −a;  FX(x) = 1 − p if −a ≤ x < a;  FX(x) = 1 if x ≥ a,

since P(Xi = −a) = 1 − p and P(Xi = a) = p.
The function X transforms the original probability space (Ω, F, P) into a new probability space (R, B, PX), where PX is the distribution of X, defined by PX(B) = P(X(ω) ∈ B). The distribution function FX is defined by FX(x) = PX((−∞, x]) = P(X(ω) ≤ x).

A random vector is simply a vector-valued function on Ω; Borel-measurability can be defined analogously. In the above example, X(ω) = (X1(ω), X2(ω)) is a random vector.
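A short sketch of the step function FX above (again with illustrative a and p), showing the jump of size 1 − p at −a and of size p at a:

```python
# Illustrative distribution function of X1 (a = 0.1, p = 0.6 are arbitrary).
a, p = 0.1, 0.6

def F_X(x):
    """F_X(x) = P(X1 <= x); X1 takes the value -a with probability 1 - p."""
    if x < -a:
        return 0.0
    if x < a:
        return 1 - p
    return 1.0

print(F_X(-0.2), F_X(0.0), F_X(0.2))   # -> 0.0 0.4 1.0
```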
Expectations

Consider a discrete random variable X, defined on a countable probability space Ω = {ω1, ω2, . . .}. Let xi = X(ωi). Then the mean or expected value is simply given by

E(X) = Σ_{i=1}^∞ X(ωi) P(ωi) = Σ_{i=1}^∞ xi PX(xi).

The two sums may alternatively be denoted as ∫_Ω X(ω) dP(ω) or ∫_R x dPX(x). For more general (continuous) random variables, the definition of these integrals is more complicated. For the present case, they may simply be seen as an alternative way to denote the sums.

Sometimes we need to make explicit the relevant probability measure, leading to the notation EP, EQ, etc.

We may also take expectations of functions of X. Of particular importance is the variance of X:

Var(X) = E{(X − E[X])²} = E(X²) − {E(X)}².

Example: In the example, we have

E(X1) = E(X2) = ap + (−a)(1 − p) = a(2p − 1),
Var(X1) = Var(X2) = a²p + a²(1 − p) − a²(2p − 1)² = a²[1 − (2p − 1)²].

Furthermore, remembering u = 1/d = exp(a),

E(exp(X1)) = E(exp(X2)) = up + d(1 − p).

If we define another probability measure Q by the property

Q(ω1) = q², Q(ω2) = Q(ω3) = q(1 − q), Q(ω4) = (1 − q)²,

with q = (1 − d)/(u − d), then

EQ(exp(X1)) = EQ(exp(X2)) = uq + d(1 − q) = 1.
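These identities are easy to verify numerically; a minimal sketch with illustrative values of a and p:

```python
import math

# Numeric check of the expectation formulas above (a, p are arbitrary).
a, p = 0.1, 0.6
u, d = math.exp(a), math.exp(-a)

E_X1 = a*p + (-a)*(1 - p)
print(E_X1, a*(2*p - 1))                          # agree: a(2p - 1)

Var_X1 = a*a*p + a*a*(1 - p) - E_X1**2
print(Var_X1, a*a*(1 - (2*p - 1)**2))             # agree: a^2[1 - (2p-1)^2]

print(u*p + d*(1 - p))                            # E(exp(X1)) under P

q = (1 - d) / (u - d)                             # the measure Q of the text
print(u*q + d*(1 - q))                            # EQ(exp(X1)) -> 1.0
```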
Independence

Two events A and B are independent if P(A ∩ B) = P(A)P(B).

Two σ-fields F1 and F2, which are both contained in F, are independent if every A1 ∈ F1 and A2 ∈ F2 are independent.

Two random variables X1 and X2 are independent if P({X1 ∈ B1} ∩ {X2 ∈ B2}) = P(X1 ∈ B1)P(X2 ∈ B2) for all Borel sets B1, B2.

Xi is called Fi-measurable if {ω : Xi(ω) ∈ B} ∈ Fi, i = 1, 2. That is, all events relevant for Xi are in Fi. The above implies that X1 and X2 are independent if Xi is Fi-measurable and F1 and F2 are independent.

Example: The events A1 = {HH, HT} (“first toss is heads”) and A2 = {HH, TH} (“second toss is heads”) are independent under P. The σ-fields

F1 = {∅, {HH, HT}, {TH, TT}, Ω} and F2 = {∅, {HH, TH}, {HT, TT}, Ω}

are independent, because they represent the outcomes of the two tosses. The random variables X1 and X2 are independent, because Xi is Fi-measurable.
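A brute-force check of this independence, looping over all pairs of events in F1 and F2 (the value of p is arbitrary; the identity holds for any 0 < p < 1):

```python
# Verify that every A1 in F1 is independent of every A2 in F2 under P.
p = 0.37
prob = {"HH": p*p, "HT": p*(1-p), "TH": (1-p)*p, "TT": (1-p)**2}
P = lambda ev: sum(prob[w] for w in ev)

Omega = {"HH", "HT", "TH", "TT"}
F1 = [set(), {"HH", "HT"}, {"TH", "TT"}, Omega]   # sigma-field of toss 1
F2 = [set(), {"HH", "TH"}, {"HT", "TT"}, Omega]   # sigma-field of toss 2

for A1 in F1:
    for A2 in F2:
        assert abs(P(A1 & A2) - P(A1) * P(A2)) < 1e-12
print("every A1 in F1 is independent of every A2 in F2")
```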
Conditioning

For two events A and B with P(B) > 0, the conditional probability of A given B is defined by

P(A|B) = P(A ∩ B) / P(B).

This may be interpreted as the probability of the event A, based on the information that B occurred. If A and B are independent, then P(A|B) = P(A); in that case the event B is irrelevant for the probability of A.

For a discrete random variable X, the conditional expectation of X given B is given by

E(X|B) = ∫_Ω X(ω) dP(ω|B) = Σ_{i=1}^∞ X(ωi) P(ωi|B).

The conditional expectation given a σ-field F, E(X|F), is a random variable, defined by

E(X|F) = E(X|Ai) if ω ∈ Ai,

for the sets Ai in the partition of Ω generating F. When F1 is the σ-field generated by X1 (i.e., the smallest σ-field with respect to which X1 is measurable), denoted by F1 = σ(X1), then we obtain the conditional expectation of Y given X1:

E(Y|X1) = E(Y|F1).

Here are some rules for manipulating conditional expectations:

• E(X|F) = X if X is F-measurable
• E(XY|F) = X E(Y|F) if X is F-measurable
• E(X|F0) = E(X), where F0 = {∅, Ω}
• E(aX + bY|F) = a E(X|F) + b E(Y|F)
• E[E(X|F)] = E(X) (law of iterated expectations)
• E[E(X|F)|G] = E[E(X|G)|F] = E(X|F) if F ⊂ G (tower property).

Often the σ-fields upon which one conditions are interpreted as information sets.
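The partition-based definition and the law of iterated expectations can be made concrete on the two-toss space; a minimal sketch (a and p are illustrative):

```python
# E(X | F) on a finite space: average X over each atom of the partition
# generating F; then check E[E(X2 | F1)] = E(X2).
a, p = 0.1, 0.6
prob = {"HH": p*p, "HT": p*(1-p), "TH": (1-p)*p, "TT": (1-p)**2}
P = lambda ev: sum(prob[w] for w in ev)

def X2(w):                                        # the second-toss variable
    return a if w in ("HH", "TH") else -a

def cond_exp(X, A):
    """E(X | A) = sum over A of X(w) P(w | A)."""
    return sum(X(w) * prob[w] for w in A) / P(A)

atoms_F1 = [{"HH", "HT"}, {"TH", "TT"}]           # partition generating sigma(X1)

# E(X2 | F1) as a random variable: constant on each atom of F1.
E_X2_F1 = {w: cond_exp(X2, A) for A in atoms_F1 for w in A}
print(E_X2_F1)        # equals a(2p - 1) everywhere: X2 is independent of F1

# Law of iterated expectations: E[E(X2 | F1)] = E(X2).
lhs = sum(E_X2_F1[w] * prob[w] for w in prob)
rhs = sum(X2(w) * prob[w] for w in prob)
assert abs(lhs - rhs) < 1e-12
```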
Example: In the binomial tree example, with S1 = S0 exp(X1), we have F1 = σ(X1) = σ(S1). The conditional expectation of S2 = S1 exp(X2) given S1 is

E(S2|F1) = S1 E[exp(X2)|F1] = S1 E[exp(X2)],

because X2 is independent of X1 and hence of S1. Similarly, E(S1|F0) = S0 E[exp(X1)]. Hence, if we use the measure Q defined earlier, we have

EQ(Si|Fi−1) = Si−1.

This final property defines the sequence {Si, Fi} to be a martingale. The probability measure Q under which this is the case is the equivalent martingale measure. (A numeric check follows the exercises below.)

Exercises

1. Define Y to be the number of heads in the example. Derive the σ-field generated by Y.
2. Find the expectation of the random variable Y from the previous exercise, and also the conditional expectation of Y given F1. Check that in this case, E[E(Y|F1)] = E(Y).
3. In the example, let Y = 1 if the two tosses have the same result, and zero otherwise. Show that if p = 1/2, Y and X1 are independent, and also that Y and X2 are independent. Is Y also independent of (X1, X2)?
4. In the binomial tree example, derive an expression for the conditional variance Var(S2|S1) = E(S2²|S1) − {E(S2|S1)}².
5. For any random variable X and increasing sequence of σ-fields {Fi}, show that Xi = E(X|Fi) is a martingale.
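As referenced in the example above, here is a numeric check (a and S0 are illustrative) that EQ(S2|F1) = S1 on each atom of F1:

```python
import math

# Verify the martingale property of S under Q in the two-period example.
a, S0 = 0.1, 100.0
u, d = math.exp(a), math.exp(-a)
q = (1 - d) / (u - d)                             # q as defined earlier
Q = {"HH": q*q, "HT": q*(1-q), "TH": (1-q)*q, "TT": (1-q)**2}

S1 = lambda w: S0 * (u if w[0] == "H" else d)
S2 = lambda w: S1(w) * (u if w[1] == "H" else d)

for atom in ({"HH", "HT"}, {"TH", "TT"}):         # atoms of F1 = sigma(S1)
    qa = sum(Q[w] for w in atom)
    cond = sum(S2(w) * Q[w] for w in atom) / qa   # EQ(S2 | atom)
    print(round(cond, 10), round(S1(next(iter(atom))), 10))  # equal pairs
```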
Multi-step binomial tree

Ingredients:

• Time horizon T, n steps, hence time step δt = T/n;
• Sample space Ω, with elements ω = (ω1, ω2, . . . , ωn), where each ωj is H or T;
• When k is the number of heads in ω, P(ω) = p^k (1 − p)^{n−k}; since there are (n choose k) combinations possible with k heads, P(k heads) = (n choose k) p^k (1 − p)^{n−k};
• Xi(ω) = σ√δt if ωi = H, and −σ√δt if ωi = T;
• Si(ω) = Si−1(ω) e^{Xi(ω)}; S0 is fixed, and furthermore Fi = σ(S1, . . . , Si) = σ(X1, . . . , Xi);
• Bi = Bi−1 e^{r δt} = B0 e^{ir δt} (with B0 = 1);
• q = (e^{r δt} − d)/(u − d), and Q(k heads) = (n choose k) q^k (1 − q)^{n−k}, where u = 1/d = exp(σ√δt).

Define the discounted stock price Zi = Si/Bi = e^{−ir δt} Si, so that

Zi = e^{−ir δt} Si−1 e^{Xi} = e^{−(i−1)r δt} Si−1 e^{−r δt} e^{Xi} = Zi−1 e^{Xi − r δt}.

Then

EQ(Zi | Fi−1) = Zi−1 EQ(e^{Xi − r δt} | Fi−1)
= Zi−1 [ u e^{−r δt} (e^{r δt} − d)/(u − d) + d e^{−r δt} (u − e^{r δt})/(u − d) ]
= Zi−1.

Therefore, under Q, the discounted stock price is a martingale. We therefore call Q the equivalent martingale measure (Q equivalent to P means Q(A) = 0 ⇔ P(A) = 0).

Because

EQ( (Si − Si−1)/Si−1 | Fi−1 ) = EQ( e^{Xi} − 1 | Fi−1 ) = e^{r δt} − 1 = (Bi − Bi−1)/Bi−1,
we also call Q the risk-neutral measure.
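A sketch of this n-step construction (T, n, σ, r, S0 are illustrative choices), verifying the martingale property of Zi by conditioning on the atoms of Fi−1:

```python
import math
from itertools import product

# Illustrative n-step tree with the ingredients listed above.
T, n, sigma, r, S0 = 1.0, 4, 0.2, 0.05, 100.0
dt = T / n
u = math.exp(sigma * math.sqrt(dt))
d = 1 / u
q = (math.exp(r * dt) - d) / (u - d)

paths = list(product("HT", repeat=n))             # Omega = {H,T}^n
Q = {w: q**w.count("H") * (1 - q)**w.count("T") for w in paths}

def S(i, w):
    """Stock price after i steps along path w."""
    k = w[:i].count("H")
    return S0 * u**k * d**(i - k)

def Z(i, w):
    """Discounted stock price Z_i = exp(-i r dt) S_i."""
    return math.exp(-i * r * dt) * S(i, w)

# Martingale check at step i: condition on each atom of F_{i-1},
# i.e. on each possible value of the first i-1 tosses.
i = 3
for stem in product("HT", repeat=i - 1):
    atom = [w for w in paths if w[:i - 1] == stem]
    qa = sum(Q[w] for w in atom)
    cond = sum(Z(i, w) * Q[w] for w in atom) / qa
    assert abs(cond - Z(i - 1, atom[0])) < 1e-9   # EQ(Z_i | F_{i-1}) = Z_{i-1}
print("Z is a Q-martingale at step", i)
```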
Construction of a claim

1. Claim (payoff) X at time T; define ET = X/BT.
2. Define Ei = EQ(X/BT | Fi); then Ei is a Q-martingale with respect to Fi = σ(S1, . . . , Si).
3. Binomial representation theorem:

   Ei = E0 + Σ_{k=1}^i φk (Zk − Zk−1),

   or ∆Ei = φi ∆Zi, with φi ∈ Fi−1 (previsible).
4. Define φi(Si−1) = (Ei(up) − Ei(down)) / (Zi(up) − Zi(down)), where “up” and “down” denote the two possible states at step i given the tree position at i − 1.
5. Strategy: portfolio of φi+1 stocks and ψi+1 = Ei − φi+1 Zi bonds (recall Zi = Si/Bi). At i = 0, V0 = φ1 S0 + ψ1 B0. At any i, the value becomes

   φi Si + ψi Bi = φi Si + (Ei−1 − φi Zi−1) Bi
   = Bi [Ei−1 + φi (Si/Bi − Zi−1)]
   = Bi [Ei−1 + φi (Zi − Zi−1)]
   = Bi Ei
   = φi+1 Si + ψi+1 Bi =: Vi.

   This is self-financing: Vi − Vi−1 = φi (Si − Si−1) + ψi (Bi − Bi−1), and replicating: VT = BT ET = X. A numeric check of this construction is sketched below, after the next section.

Early exercise

American-type call option:

• at time T: VT = X = [ST − K]+;
• at time T − 1 (take δt = 1):
  VT−1 = max{ [ST−1 − K]+, e^{−r} EQ([ST − K]+ | FT−1) };
• at time T − 2:
  VT−2 = max{ [ST−2 − K]+, e^{−r} EQ(VT−1 | FT−2) }
  = max{ [ST−2 − K]+, e^{−r} EQ([ST−1 − K]+ | FT−2), e^{−2r} EQ([ST − K]+ | FT−2) };
• at time i: choice between T − i + 1 European call options, expiring at times i, i + 1, . . . , T. The value of the American option is the maximum of the values of these European options.
• In the absence of dividend payments on the stock, it can be shown that for a call option it never pays to exercise early (so the value of the American option equals the value of the European option). For in-the-money put options, however (X = [K − ST]+, K > S0), the values will differ.
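As announced above, here is a minimal sketch of steps 1–5 for a European call X = [ST − K]+ in a two-step tree (r, S0, u, K are illustrative; δt = 1, B0 = 1). It computes the previsible φi and the bond holdings ψi, and checks Vi = Bi Ei and VT = X along every path:

```python
import math
from itertools import product

# Illustrative replication of a claim in a two-step binomial tree.
r, S0, u, K = 0.05, 100.0, 1.2, 100.0
d = 1 / u
q = (math.exp(r) - d) / (u - d)
B = [1.0, math.exp(r), math.exp(2 * r)]           # bond prices B_i

def S(i, w):                                      # w: tuple of 'u'/'d' moves
    return S0 * u**w[:i].count("u") * d**w[:i].count("d")

def Z(i, w):                                      # discounted stock price
    return S(i, w) / B[i]

def E(i, w):
    """E_i = EQ(X / B_T | F_i): average the discounted payoff over the
    remaining moves, given the first i moves of w."""
    total = 0.0
    for tail in product("ud", repeat=2 - i):
        v = w[:i] + tail
        total += (q**tail.count("u") * (1 - q)**tail.count("d")
                  * max(S(2, v) - K, 0.0) / B[2])
    return total

for w in product("ud", repeat=2):                 # along every path:
    V = None
    for i in (1, 2):
        up, dn = w[:i - 1] + ("u",), w[:i - 1] + ("d",)
        phi = (E(i, up) - E(i, dn)) / (Z(i, up) - Z(i, dn))  # previsible
        psi = E(i - 1, w) - phi * Z(i - 1, w)     # bond holding
        V = phi * S(i, w) + psi * B[i]            # value after step i
        assert abs(V - B[i] * E(i, w)) < 1e-9     # V_i = B_i E_i
    assert abs(V - max(S(2, w) - K, 0.0)) < 1e-9  # V_T = X: replication
print("claim replicated along all paths")
```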
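The backward-induction recursion for VT−1, VT−2, . . . is straightforward to implement; the following minimal sketch compares an American put (where early exercise matters) with its European counterpart on a recombining tree (all parameters illustrative; δt = 1):

```python
import math

# Illustrative early-exercise comparison: American vs. European put.
steps, r, S0, u, K = 3, 0.05, 100.0, 1.2, 105.0
d = 1 / u
q = (math.exp(r) - d) / (u - d)
disc = math.exp(-r)

def S(i, j):                          # price after i steps with j up-moves
    return S0 * u**j * d**(i - j)

# Terminal payoffs; then step backwards, taking at each node the maximum
# of immediate exercise and discounted continuation value.
amer = [max(K - S(steps, j), 0.0) for j in range(steps + 1)]
euro = list(amer)
for i in range(steps - 1, -1, -1):
    for j in range(i + 1):
        cont_a = disc * (q * amer[j + 1] + (1 - q) * amer[j])
        cont_e = disc * (q * euro[j + 1] + (1 - q) * euro[j])
        amer[j] = max(K - S(i, j), cont_a)        # early exercise allowed
        euro[j] = cont_e
print("American put:", round(amer[0], 4), "European put:", round(euro[0], 4))
# For a call on a non-dividend-paying stock the two values would coincide.
```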