Modeling with Itô Stochastic Differential Equations
§2.1 - 2.3
E. Allen
presentation by T. Perälä 13.10.2009
Postgraduate seminar on applied mathematics 2009
Outline
Introduction to Stochastic Processes (§2.1)
Discrete Stochastic Processes (§2.2)
Markov process
Markov chains
Continuous Stochastic Processes (§2.3)
Continuous Markov process
Wiener process
Introduction (§2.1)
A stochastic process is a family of random variables $\{X(t),\, t \in T\}$ defined on a probability space $(\Omega, \mathcal{A}, P)$.
If the index set $T$ is discrete, the stochastic process is called discrete.
If the index set $T$ is continuous, the stochastic process is called continuous.
The random variables $X(t)$ can be discrete valued or continuous valued at each $t \in T$.
Solutions of stochastic differential equations are stochastic processes
Discrete Stochastic Processes (§2.2)
Let $\{t_0, t_1, t_2, \dots\}$ be a set of discrete times. Let the sequence of random variables $X_0, X_1, X_2, \dots$ each be defined on the sample space $\Omega$.
If only the present value of $X_n$ is needed to determine the future value of $X_{n+1}$, the sequence is called a Markov process:
$$P\big(X_{n+1} \le x \mid X_0 = x_0, X_1 = x_1, \dots, X_n = x_n\big) = P\big(X_{n+1} \le x \mid X_n = x_n\big).$$
A discrete-valued Markov process is called a Markov chain.
Let
$$p_{i,j}(n) = P\big(X_{n+1} = j \mid X_n = i\big)$$
define the one-step transition probabilities for a Markov chain, that is, $p_{i,j}(n)$ is the probability of moving from state $i$ at time $t_n$ to state $j$ at time $t_{n+1}$.
If the transition probabilities are independent of time $n$, then the Markov chain is said to have stationary transition probabilities and is called a homogeneous Markov chain.
Example 2.1. A continuous-valued Markov process
Let $t_i = i\,\Delta t$ for $i = 0, 1, 2, \dots$, where $\Delta t > 0$ is a fixed time step. Let $\eta_0, \eta_1, \eta_2, \dots$ be independent normally distributed random variables with $\eta_i \sim N(0, \Delta t)$, so $E[\eta_i] = 0$ and $\operatorname{Var}(\eta_i) = \Delta t$ for each $i$. Let $X_0 = 0$ and let $X_{i+1}$ be defined by $X_{i+1} = X_i + \eta_i$. Then,
$$X_n = \sum_{i=0}^{n-1} \eta_i \sim N(0, n\,\Delta t) = N(0, t_n).$$
Note that $E[X_n] = 0$ and $\operatorname{Var}(X_n) = t_n$ for $n = 0, 1, 2, \dots$, where $t_n = n\,\Delta t$. The sequence $\{X_n\}$ is a Markov process with continuous values of $X_n \in \mathbb{R}$ and discrete values of time $t_n$, so
$$P\big(X_{n+1} \le x \mid X_0, X_1, \dots, X_n\big) = P\big(X_{n+1} \le x \mid X_n\big).$$
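A minimal simulation sketch of this construction (an illustrative addition, not from the original slides; the step size, horizon, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01           # time step (arbitrary choice)
n_steps = 100       # so t_n = n_steps * dt = 1.0

# eta_i ~ N(0, dt), independent
eta = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)

# X_0 = 0, X_{i+1} = X_i + eta_i, so X_n should be distributed as N(0, t_n)
X = np.concatenate(([0.0], np.cumsum(eta)))

print("one sample of X_n:", X[-1], " (theory: N(0, t_n) with t_n =", n_steps * dt, ")")
```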
Example 2.2. A homogeneous Markov chain
Let $t_i = i\,\Delta t$ for $i = 0, 1, 2, \dots$, where $\Delta t > 0$, so that $t_n = n\,\Delta t$. Define the probability distribution of the discrete random variable $\eta$ assuming that $\eta$ takes on the values $\sqrt{\Delta t}$ and $-\sqrt{\Delta t}$ with probabilities $\tfrac{1}{2}$ and $\tfrac{1}{2}$, so that $E[\eta] = 0$ and $\operatorname{Var}(\eta) = \Delta t$.
Let
$$S_0 = 0 \quad \text{and} \quad S_n = \sum_{i=1}^{n} \eta_i,$$
where $\eta_1, \eta_2, \dots$ are independent identically distributed (i.i.d.) values with the same distribution as $\eta$. Then, $S_n$ takes values that are integer multiples of $\sqrt{\Delta t}$, and the stochastic process is discrete time and discrete valued.
The transition probabilities are
$$P\big(S_{n+1} = s + \sqrt{\Delta t} \mid S_n = s\big) = \tfrac{1}{2} \quad \text{and} \quad P\big(S_{n+1} = s - \sqrt{\Delta t} \mid S_n = s\big) = \tfrac{1}{2},$$
independent of $n$, so the chain is homogeneous. Furthermore, we note that
$$E[S_n] = 0 \quad \text{and} \quad \operatorname{Var}(S_n) = n\,\Delta t = t_n.$$
Then, by the central limit theorem,
$$\frac{S_n}{\sqrt{n\,\Delta t}} \to N(0, 1) \quad \text{in distribution as } n \to \infty.$$
In particular, if $t = n\,\Delta t$ is held fixed, then $S_n \approx N(0, t)$ in distribution for large $n$. Thus, as $n$ increases, the distribution of $S_n$ approaches the same distribution as the random variable in Example 2.1.
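A short simulation sketch of the random walk above, assuming the $\pm\sqrt{\Delta t}$ steps with probability $\tfrac{1}{2}$ each; it checks that the mean and variance of $S_n$ agree with the limiting $N(0, t)$ distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.01
n = 100                   # number of steps, so t = n * dt = 1.0
n_paths = 100_000

# each step is +sqrt(dt) or -sqrt(dt) with probability 1/2
steps = rng.choice([np.sqrt(dt), -np.sqrt(dt)], size=(n_paths, n))
S_n = steps.sum(axis=1)

print("sample mean (should be near 0)    :", S_n.mean())
print("sample variance (should be near t):", S_n.var(), "with t =", n * dt)
```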
Homogeneous Markov chain
Let $t_n = n\,\Delta t$ for $n = 0, 1, 2, \dots$, where $\Delta t > 0$, so that $t_{n+1} - t_n = \Delta t$. Let $X_n$, $n = 0, 1, 2, \dots$, be a homogeneous Markov chain defined at the discrete times $t_n$ with states $i = 0, 1, 2, \dots$. Let
$$p_{i,j} = P\big(X_{n+1} = j \mid X_n = i\big) \quad \text{for each } i, j,$$
where $\sum_j p_{i,j} = 1$ for each $i$, define the transition probabilities.
The transition probability matrix is defined as $P = [p_{i,j}]$.
The probability distribution of $X_n$ can be computed using the transition probability matrix $P$. Define the $k$th power of $P$ as $P^k = \big[p^{(k)}_{i,j}\big]$, where $p^{(k)}_{i,j} = P\big(X_{n+k} = j \mid X_n = i\big)$. As $P^{k+l} = P^k P^l$, then by matrix multiplication
$$p^{(k+l)}_{i,j} = \sum_m p^{(k)}_{i,m}\, p^{(l)}_{m,j}.$$
This relation is known as the Chapman-Kolmogorov formula for a homogeneous Markov chain.
Let $\mathbf{p}(n) = \big[p_0(n), p_1(n), p_2(n), \dots\big]^T$ be the probability distribution of $X_n$, where $\mathbf{p}(0)$ is the initial probability distribution of $X_0$. We see that $\mathbf{p}(n+1) = P^T \mathbf{p}(n)$. Thus,
$$\mathbf{p}(n) = \big(P^T\big)^n \mathbf{p}(0).$$
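A small numerical illustration of the relation $\mathbf{p}(n) = (P^T)^n\,\mathbf{p}(0)$; the three-state transition matrix below is a hypothetical example, not one taken from the slides:

```python
import numpy as np

# One-step transition matrix with P[i, j] = P(X_{n+1} = j | X_n = i); each row sums to 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

p0 = np.array([1.0, 0.0, 0.0])    # start in state 0 with probability one

# Chapman-Kolmogorov: the n-step transition matrix is the n-th matrix power of P,
# so the distribution after n steps is p(n) = (P^T)^n p(0).
n = 10
pn = np.linalg.matrix_power(P.T, n) @ p0
print("distribution after", n, "steps:", pn, " (sum =", pn.sum(), ")")
```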
Example 2.3. Approximation to a Poisson process
Consider the discrete homogeneous stochastic process defined by the transition probabilities
$$p_{k,k} = 1 - \lambda\,\Delta t, \qquad p_{k,k+1} = \lambda\,\Delta t, \qquad p_{k,j} = 0 \text{ otherwise}, \qquad k = 0, 1, 2, \dots$$
Assume that $\lambda\,\Delta t < 1$ and that $p_0(0) = 1$. In this example, the transition probability matrix $P$ is bidiagonal and the equation $\mathbf{p}(n+1) = P^T \mathbf{p}(n)$ has the componentwise form:
$$p_0(n+1) = (1 - \lambda\,\Delta t)\, p_0(n) \quad \text{and} \quad p_k(n+1) = (1 - \lambda\,\Delta t)\, p_k(n) + \lambda\,\Delta t\, p_{k-1}(n), \quad k \ge 1.$$
Rearranging these expressions yields
$$\frac{p_k(n+1) - p_k(n)}{\Delta t} = \lambda\big(p_{k-1}(n) - p_k(n)\big),$$
where $p_{-1}(n) \equiv 0$ and $t_n = n\,\Delta t$. As $\Delta t \to 0$, the above Markov chain probabilities approach those satisfied by the Poisson process. That is,
$$\frac{dp_0(t)}{dt} = -\lambda\, p_0(t) \quad \text{and} \quad \frac{dp_k(t)}{dt} = \lambda\big(p_{k-1}(t) - p_k(t)\big), \quad k \ge 1.$$
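An illustrative sketch of this limit (the parameter values are arbitrary choices): iterate the bidiagonal recursion for $p_k(n)$ with a small $\Delta t$ and compare with the Poisson distribution of mean $\lambda t$:

```python
import numpy as np
from scipy.stats import poisson

lam, dt, t = 2.0, 0.001, 1.0
n_steps = int(round(t / dt))
K = 30                              # truncate the number of counts considered

p = np.zeros(K)
p[0] = 1.0                          # P(N(0) = 0) = 1
for _ in range(n_steps):
    # p_k(n+1) = (1 - lam*dt) p_k(n) + lam*dt p_{k-1}(n)
    new_p = (1.0 - lam * dt) * p
    new_p[1:] += lam * dt * p[:-1]
    p = new_p

k = np.arange(6)
print("Markov chain p_k(n):", np.round(p[:6], 4))
print("Poisson(lam*t) pmf :", np.round(poisson.pmf(k, lam * t), 4))
```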
Nonhomogeneous Markov chain
Let $t_k = k\,\Delta t$ for $k = 0, 1, \dots, N$, where $\Delta t = T/N$. Let $X_k$, $k = 0, 1, \dots, N$, be the Markov chain satisfying $X_k \in \{0, \Delta x, 2\Delta x, \dots, M\Delta x\}$ for each $k$, for a positive number $\Delta x$.
Let
$$p_{i,j}(k) = P\big(X_{k+1} = j\,\Delta x \mid X_k = i\,\Delta x\big)$$
define the transition probabilities, which now may depend on time $t_k$ (nonhomogeneous Markov chain).
The transition probability matrix is defined as the $(M+1) \times (M+1)$ matrix $P(k) = [p_{i,j}(k)]$.
Similar to the homogeneous Markov chain, the probability distribution for $X_k$ can be computed using the probability transition matrices $P(l)$ for $l = 0, 1, \dots, k-1$.
Let $\mathbf{p}(k) = \big[p_0(k), p_1(k), \dots, p_M(k)\big]^T$ define the probability distribution at time $t_k$, where $\mathbf{p}(0)$ is the initial probability distribution. Noticing that $\mathbf{p}(k+1) = P^T(k)\,\mathbf{p}(k)$, we see that
$$\mathbf{p}(k) = P^T(k-1)\, P^T(k-2) \cdots P^T(0)\,\mathbf{p}(0).$$
Example 2.4. Forward Kolmogorov equations
Let $\Delta t > 0$ and let $\Delta x > 0$ be given, and let $t_k = k\,\Delta t$ and $x_i = i\,\Delta x$ for $k, i = 0, 1, 2, \dots$. Define the transition probabilities of a discrete stochastic process by the following:
$$p_{i,j}(k) = P\big(X_{k+1} = x_j \mid X_k = x_i\big) = \begin{cases} r(x_i, t_k)\,\Delta t, & j = i + 1, \\ \ell(x_i, t_k)\,\Delta t, & j = i - 1, \\ 1 - \big[r(x_i, t_k) + \ell(x_i, t_k)\big]\Delta t, & j = i, \\ 0, & \text{otherwise}, \end{cases}$$
where $r$ and $\ell$ are smooth nonnegative functions. Notice that with the above transition probabilities, if $\Delta X_k = X_{k+1} - X_k$ is the change in the stochastic process at time $t_k$, fixing $X_k = x_i$, then
$$E(\Delta X_k) = \big[r(x_i, t_k) - \ell(x_i, t_k)\big]\Delta x\,\Delta t \quad \text{and} \quad E\big((\Delta X_k)^2\big) = \big[r(x_i, t_k) + \ell(x_i, t_k)\big](\Delta x)^2\,\Delta t.$$
It is assumed that $\Delta t$ is small so that $p_{i,i}(k)$ is positive.
Let $p_i(t_k) = P(X_k = x_i)$ be the probability distribution at time $t_k$. Then, $p_i(t_{k+1})$ satisfies
$$p_i(t_{k+1}) = p_{i-1}(t_k)\,r(x_{i-1}, t_k)\,\Delta t + p_{i+1}(t_k)\,\ell(x_{i+1}, t_k)\,\Delta t + p_i(t_k)\big[1 - \big(r(x_i, t_k) + \ell(x_i, t_k)\big)\Delta t\big]. \qquad (2.1)$$
Rearranging yields
$$\frac{p_i(t_{k+1}) - p_i(t_k)}{\Delta t} = -\frac{\alpha_{i+1}\,p_{i+1}(t_k) - \alpha_{i-1}\,p_{i-1}(t_k)}{2\,\Delta x} + \frac{\beta_{i+1}\,p_{i+1}(t_k) - 2\beta_i\,p_i(t_k) + \beta_{i-1}\,p_{i-1}(t_k)}{2\,(\Delta x)^2},$$
where $\alpha(x, t) = \big[r(x, t) - \ell(x, t)\big]\Delta x$ and $\beta(x, t) = \big[r(x, t) + \ell(x, t)\big](\Delta x)^2$, with $\alpha_i = \alpha(x_i, t_k)$ and $\beta_i = \beta(x_i, t_k)$. As $\Delta t \to 0$, the discrete stochastic process approaches a continuous-time process. Then $p_i(t)$ satisfies the initial-value problem:
$$\frac{dp_i(t)}{dt} = -\frac{\alpha_{i+1}\,p_{i+1}(t) - \alpha_{i-1}\,p_{i-1}(t)}{2\,\Delta x} + \frac{\beta_{i+1}\,p_{i+1}(t) - 2\beta_i\,p_i(t) + \beta_{i-1}\,p_{i-1}(t)}{2\,(\Delta x)^2} \qquad (2.2)$$
with initial values $p_i(0)$.
These are the forward Kolmogorov equations for the continuous-time stochastic process.
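A sketch that integrates this forward Kolmogorov system on a truncated lattice, written in the equivalent jump-rate form $dp_i/dt = r_{i-1}p_{i-1} + \ell_{i+1}p_{i+1} - (r_i + \ell_i)p_i$; the rate functions, lattice size, and initial state below are hypothetical choices, not values from the slides:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical lattice and jump-rate functions (illustrative only).
dx = 0.1
x = np.arange(100) * dx              # states x_i = i * dx
r = np.ones_like(x)                  # rate of a +dx jump
l = 0.5 * np.ones_like(x)            # rate of a -dx jump
l[0] = 0.0                           # no jump below the lowest state
r[-1] = 0.0                          # no jump above the highest state (conserves probability)

def forward_kolmogorov(t, p):
    # dp_i/dt = r_{i-1} p_{i-1} + l_{i+1} p_{i+1} - (r_i + l_i) p_i
    dp = -(r + l) * p
    dp[1:] += r[:-1] * p[:-1]
    dp[:-1] += l[1:] * p[1:]
    return dp

p0 = np.zeros_like(x)
p0[10] = 1.0                         # start at x_10 with probability one
sol = solve_ivp(forward_kolmogorov, (0.0, 2.0), p0, t_eval=[2.0])
p_T = sol.y[:, -1]

print("total probability :", p_T.sum())
print("mean of X at t = 2:", (x * p_T).sum(), " vs x_10 + (r - l)*dx*2 =", x[10] + 0.5 * dx * 2)
```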
Example 2.4. continued
Now assume that $\Delta x$ is small so that the stochastic process approaches a continuous-valued process. As (mean value theorem)
$$\frac{\alpha_{i+1}\,p_{i+1} - \alpha_{i-1}\,p_{i-1}}{2\,\Delta x} = \frac{\partial\big[\alpha(x, t)\,p(x, t)\big]}{\partial x}\bigg|_{x = \eta_i} \quad \text{and} \quad \frac{\beta_{i+1}\,p_{i+1} - 2\beta_i\,p_i + \beta_{i-1}\,p_{i-1}}{(\Delta x)^2} = \frac{\partial^2\big[\beta(x, t)\,p(x, t)\big]}{\partial x^2}\bigg|_{x = \xi_i}$$
for some values $\eta_i, \xi_i$ such that $x_{i-1} \le \eta_i, \xi_i \le x_{i+1}$, then (2.2) approximates the partial differential equation:
$$\frac{\partial p(x, t)}{\partial t} = -\frac{\partial\big[\alpha(x, t)\,p(x, t)\big]}{\partial x} + \frac{1}{2}\frac{\partial^2\big[\beta(x, t)\,p(x, t)\big]}{\partial x^2}. \qquad (2.3)$$
Equation (2.2) is a central-difference approximation to the above one. This approximation is accurate for small $\Delta x$, and when comparing the solutions of (2.1) and (2.3), it can be shown that $p_i(t) \approx p(x_i, t)\,\Delta x$ for small $\Delta x$ and $\Delta t$.
It can be shown that (2.3) is the forward Kolmogorov equation corresponding to a diffusion process having the stochastic differential equation
$$dX(t) = \alpha\big(X(t), t\big)\,dt + \sqrt{\beta\big(X(t), t\big)}\,dW(t). \qquad (2.4)$$
The probability density of solutions to the above stochastic differential equation satisfies the partial differential equation (2.3).
The coefficients of (2.4) are related to the discrete stochastic model (2.1) through the mean and variance of the change in the process $\Delta X$ over a short time interval $\Delta t$, fixing $X(t) = x$. Specifically,
$$E(\Delta X) \approx \alpha(x, t)\,\Delta t \quad \text{and} \quad \operatorname{Var}(\Delta X) \approx E\big((\Delta X)^2\big) \approx \beta(x, t)\,\Delta t.$$
Example 2.5. Specific example of forward Kolmogorov eq’s
Consider a birth-death process, where $b$ is the per capita birth rate and $d$ is the per capita death rate. It is assumed that $b$ and $d$ are constants. The transition probabilities for this example have the form
$$p_{i,i+1} = b\,i\,\Delta t, \qquad p_{i,i-1} = d\,i\,\Delta t, \qquad p_{i,i} = 1 - (b + d)\,i\,\Delta t, \qquad p_{i,j} = 0 \text{ otherwise}.$$
It follows that the probability distribution in continuous time (letting $\Delta t \to 0$) satisfies the forward Kolmogorov equations
$$\frac{dp_i(t)}{dt} = b\,(i - 1)\,p_{i-1}(t) - (b + d)\,i\,p_i(t) + d\,(i + 1)\,p_{i+1}(t), \qquad i = 0, 1, 2, \dots,$$
with $p_{x_0}(0) = 1$, assuming an initial population of size $x_0$.
Note that, fixing $X(t) = x$ at time $t$,
$$E(\Delta X) = (b - d)\,x\,\Delta t \quad \text{and} \quad \operatorname{Var}(\Delta X) = (b + d)\,x\,\Delta t$$
to order $\Delta t$. For large population sizes, the above equations approximately satisfy the Fokker-Planck equation
$$\frac{\partial p(x, t)}{\partial t} = -\frac{\partial}{\partial x}\big[(b - d)\,x\,p(x, t)\big] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\big[(b + d)\,x\,p(x, t)\big]$$
with $p(x, 0) = \delta(x - x_0)$.
The probability distribution $p(x, t)$ is the probability distribution of solutions to the Itô stochastic differential equation
$$dX(t) = (b - d)\,X(t)\,dt + \sqrt{(b + d)\,X(t)}\,dW(t)$$
with $X(0) = x_0$.
Thus the solutions to the above stochastic differential equation have approximately the same probability distribution as the discrete birth-death stochastic process, and a reasonable model for the simple birth-death process is the above stochastic differential equation.
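A sketch of an Euler-Maruyama simulation of the stochastic differential equation above, $dX = (b - d)X\,dt + \sqrt{(b + d)X}\,dW$; the parameter values and the numerical scheme are illustrative choices, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

b, d = 0.6, 0.5          # per capita birth and death rates (illustrative values)
X0 = 100.0               # initial population size
T, dt = 5.0, 0.001
n_steps = int(round(T / dt))

# Euler-Maruyama for dX = (b - d) X dt + sqrt((b + d) X) dW, 1000 independent paths
X = np.full(1000, X0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=X.size)
    X = X + (b - d) * X * dt + np.sqrt((b + d) * np.maximum(X, 0.0)) * dW
    X = np.maximum(X, 0.0)           # keep paths nonnegative (extinction boundary)

print("sample mean of X(T)           :", X.mean())
print("birth-death mean X0*e^{(b-d)T}:", X0 * np.exp((b - d) * T))
```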
Continuous Stochastic Processes (§2.3)
Let a continuous stochastic process $X(t, \omega)$ be defined on the probability space $(\Omega, \mathcal{A}, P)$, where $t \in [0, T]$ is an interval in time and the process is defined at all time instants in the interval.
A continuous-time stochastic process is a function $X(t, \omega)$ of two variables $t$ and $\omega$, and $X$ may be discrete-valued or continuous-valued. In particular, $X(t, \cdot)$ is a random variable for each $t \in [0, T]$, and $X(\cdot, \omega)$ maps the interval $[0, T]$ into $\mathbb{R}$ and is called a sample path, realization, or a trajectory of the stochastic process for each $\omega \in \Omega$.
Specific knowledge of $\omega$ is generally unnecessary, but each $\omega \in \Omega$ results in a different trajectory. The normal convention is that the variable $\omega$ is often suppressed, that is, $X(t)$ represents a random variable for each $t$ and $X(\cdot)$ represents a trajectory over the interval $[0, T]$.
The stochastic process is a Markov process if the state of the process at any time $s$ determines the future state of the process. Specifically,
$$P\big(X(t) \le y \mid X(r),\ r \le s\big) = P\big(X(t) \le y \mid X(s)\big)$$
whenever $s < t$.
Example 2.6. Poisson process with intensity $\lambda$
Let $N(t)$ equal the number of observations in time $t$. Assume that the probability of one observation in a time interval of length $\Delta t$ is equal to $\lambda\,\Delta t + o(\Delta t)$. This is a continuous stochastic process, and the probability of $k$ observations in time $t$ is
$$P\big(N(t) = k\big) = \frac{e^{-\lambda t}(\lambda t)^k}{k!}, \qquad k = 0, 1, 2, \dots$$
The process $N(t)$ is a continuous-time stochastic process which is discrete-valued. Specifically, $N(t)$ is a Poisson process with intensity $\lambda$. Note that $N(0) = 0$ and the number of observations at any time $t$ is Poisson-distributed with mean $\lambda t$. That is, for any $t \ge 0$,
$$E[N(t)] = \lambda t = \operatorname{Var}\big(N(t)\big).$$
Example 2.6. continued
Indeed, the process is a Markov process, and the probability distribution at time $t$ only depends on the state of the system at time $s \le t$ and not on the history of the system. Also, the probabilities $p_k(t) = P(N(t) = k)$ satisfy
$$\frac{dp_0(t)}{dt} = -\lambda\,p_0(t) \quad \text{and} \quad \frac{dp_k(t)}{dt} = \lambda\big(p_{k-1}(t) - p_k(t)\big), \quad k \ge 1.$$
The relations satisfied by the probabilities of the discrete stochastic process for Example 2.3 are finite-difference approximations to the above differential equations and approach these differential equations as $\Delta t \to 0$.
In addition, if $N_1(t)$ and $N_2(t)$ are independent Poisson processes with intensities $\lambda_1$ and $\lambda_2$, then $N_1(t) + N_2(t)$ is also Poisson-distributed with intensity $\lambda_1 + \lambda_2$.
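An illustrative simulation (not from the slides) that constructs $N(t)$ from exponential interarrival times with mean $1/\lambda$ and compares the empirical distribution of $N(t)$ with the Poisson probabilities:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
lam, t = 3.0, 2.0

def poisson_count(t, lam, rng):
    """Count events in [0, t] by accumulating Exp(1/lam) interarrival times."""
    total, k = 0.0, 0
    while True:
        total += rng.exponential(1.0 / lam)
        if total > t:
            return k
        k += 1

counts = np.array([poisson_count(t, lam, rng) for _ in range(20_000)])
k = np.arange(6)
print("empirical P(N(t)=k)     :", np.round([(counts == j).mean() for j in k], 4))
print("e^{-lam t}(lam t)^k / k!:", np.round(poisson.pmf(k, lam * t), 4))
```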
Transition probability for continuous Markov process
Consider the transition probability density function $p(y, t \mid x, s)$ for the transition from $x$ at time $s$ to $y$ at time $t$ for a continuous Markov process.
Analogous to the discrete Markov process, the transition probability density function satisfies the Chapman-Kolmogorov equation:
$$p(y, t \mid x, s) = \int_{-\infty}^{\infty} p(y, t \mid z, \tau)\, p(z, \tau \mid x, s)\, dz \qquad \text{for } s < \tau < t.$$
A Markov process $X(t)$ is said to be homogeneous if its transition probability satisfies
$$p(y, t + \Delta t \mid x, s + \Delta t) = p(y, t \mid x, s).$$
That is, the transition probability only depends on the elapsed time. In this case, it can be written as $p(y, t - s \mid x)$.
Example 2.7. An approximate Wiener process
Let $P_1(t), P_2(t), \dots, P_n(t)$ be $n$ independent Poisson processes with intensity $\lambda$ as described in Example 2.6. Let $W_n(t)$ be another stochastic process defined by
$$W_n(t) = \frac{1}{\sqrt{n\lambda}} \sum_{j=1}^{n} \big(P_j(t) - \lambda t\big).$$
By the Central Limit Theorem, as $n$ increases, $W_n(t)$ approaches a random variable distributed normally with mean $0$ and variance $t$. Indeed, by considering Example 2.6, $\sum_{j=1}^{n} P_j(t)$ is a Poisson process with intensity $n\lambda$, so $W_n(t)$ approaches a normally distributed variable with mean $0$ and variance $t$ for every $t \ge 0$.
In this example, $W_n(t)$ approaches a Wiener process or Brownian motion $W(t)$ as $n$ increases. A Wiener process $W(t)$ is a continuous stochastic process with stationary independent increments such that
$$W(0) = 0, \qquad E[W(t)] = 0, \qquad \operatorname{Var}\big(W(t) - W(s)\big) = t - s \quad \text{for } 0 \le s \le t.$$
In particular, the increments $W(t_{i+1}) - W(t_i)$ are independent Gaussian random variables for $0 \le t_0 < t_1 < \dots < t_N$. Notice that a Wiener process is a homogeneous Markov process.
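A quick numerical check of this construction, assuming the centering and scaling $W_n(t) = \sum_j (P_j(t) - \lambda t)/\sqrt{n\lambda}$ given above (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

lam, t, n = 2.0, 1.0, 500        # intensity, time, number of Poisson processes
n_samples = 50_000

# Each P_j(t) is Poisson(lam * t); center by lam*t and scale by sqrt(n*lam).
P = rng.poisson(lam * t, size=(n_samples, n))
W_n = (P - lam * t).sum(axis=1) / np.sqrt(n * lam)

print("mean (should be near 0)    :", W_n.mean())
print("variance (should be near t):", W_n.var(), "with t =", t)
```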
Generation of a sample path of a Wiener process
How to generate a sample path of a Wiener process $W(t)$ at a finite number of points?
Suppose that a Wiener process trajectory is desired on the interval $[0, T]$ at the points $t_i = i\,\Delta t$, $i = 0, 1, \dots, N$, where $\Delta t = T/N$. Then, $W(t_0) = W(0) = 0$, and a recurrence relation that gives the values of a Wiener process trajectory at the points $t_i$ is given by
$$W(t_{i+1}) = W(t_i) + \eta_i\,\sqrt{\Delta t},$$
where $\eta_i \sim N(0, 1)$ are independent normally distributed numbers for $i = 0, 1, \dots, N - 1$.
The values $W(t_0), W(t_1), \dots, W(t_N)$ determine a Wiener sample path at the points $t_i$. Using these values, the Wiener process sample path can be approximated everywhere on the interval $[0, T]$, for example by linear interpolation.
Another way: the Karhunen-Loève expansion, which is derived from a Fourier series expansion of the Wiener process:
$$W(t) = \sum_{n=1}^{\infty} \eta_n\,\frac{2\sqrt{2T}}{(2n-1)\pi}\,\sin\!\left(\frac{(2n-1)\pi t}{2T}\right) \qquad (2.9)$$
for $0 \le t \le T$, where $\eta_n$ are i.i.d. standard normal random variables.
We can get the standard normal random variables from the Wiener process by projecting onto the basis functions:
$$\eta_n = \frac{(2n-1)\pi}{\sqrt{2}\,T^{3/2}} \int_0^T W(t)\,\sin\!\left(\frac{(2n-1)\pi t}{2T}\right) dt.$$
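A sketch that generates one sample path by each method, assuming the recurrence relation and the truncated Karhunen-Loève sum written above; the grid size and truncation level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)

T, N = 1.0, 200
dt = T / N
t = np.linspace(0.0, T, N + 1)

# Method 1: recurrence W(t_{i+1}) = W(t_i) + eta_i * sqrt(dt) with eta_i ~ N(0, 1)
eta = rng.normal(size=N)
W_rec = np.concatenate(([0.0], np.cumsum(eta * np.sqrt(dt))))

# Method 2: Karhunen-Loeve expansion (2.9) truncated at M terms
M = 1000
n = np.arange(1, M + 1)
xi = rng.normal(size=M)
coef = 2.0 * np.sqrt(2.0 * T) / ((2 * n - 1) * np.pi)
W_kl = (xi * coef) @ np.sin((2 * n - 1)[:, None] * np.pi * t[None, :] / (2 * T))

print("recurrence path, first 5 values    :", np.round(W_rec[:5], 4))
print("Karhunen-Loeve path, first 5 values:", np.round(W_kl[:5], 4))
```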
Generation of a sample path of a Wiener process
[Figure: sample paths of a Wiener process on a grid of 200 time points, one generated with the recurrence relation and one with the Karhunen-Loève expansion truncated at n = 1, 2, ..., 10000 terms.]
Generation of a sample path of a Wiener process, continued
Let's check that the series (2.9) has the required properties of the Wiener process.
The partial sum:
$$W_N(t) = \sum_{n=1}^{N} \eta_n\,\frac{2\sqrt{2T}}{(2n-1)\pi}\,\sin\!\left(\frac{(2n-1)\pi t}{2T}\right).$$
It can be shown that $E[W_N(t)] = 0$ for each $t$ and that $W_N(t)$ is Cauchy in $L^2(\Omega)$ for each $t$, so $W_N(t)$ converges in $L^2(\Omega)$ to a limit $W(t)$ for each $t \in [0, T]$ as $N \to \infty$. Therefore $W(0) = 0$ and $E[W(t)] = 0$ for each $t$.
Noting that $E[\eta_n \eta_m] = 0$ for $n \ne m$ and $E[\eta_n^2] = 1$, we see that
$$E\big[W_N(t)\,W_N(s)\big] = \sum_{n=1}^{N} \frac{8T}{(2n-1)^2\pi^2}\,\sin\!\left(\frac{(2n-1)\pi t}{2T}\right)\sin\!\left(\frac{(2n-1)\pi s}{2T}\right).$$
In addition, it can be shown using the trigonometric identity
$$\sum_{n=1}^{\infty} \frac{8T}{(2n-1)^2\pi^2}\,\sin\!\left(\frac{(2n-1)\pi t}{2T}\right)\sin\!\left(\frac{(2n-1)\pi s}{2T}\right) = \min(t, s)$$
that $E[W(t)\,W(s)] = \min(t, s)$, so that $\operatorname{Var}\big(W(t) - W(s)\big) = t - s$, as required.
Continuity and differentiability of a Wiener process
Notice that at each $t$, $E\big[(W(t + \Delta t) - W(t))^2\big] = \Delta t$. In addition, $W(t)$ is continuous in the mean square sense:
$$E\big[(W(t) - W(s))^2\big] = |t - s|,$$
thus $E\big[(W(t) - W(s))^2\big] \to 0$ as $s \to t$, so given $\varepsilon > 0$ there exists a $\delta > 0$ such that $E\big[(W(t) - W(s))^2\big] < \varepsilon$ when $|t - s| < \delta$.
However, $W(t)$ does not have a derivative, as
$$E\!\left[\left(\frac{W(t + \Delta t) - W(t)}{\Delta t}\right)^{\!2}\right] = \frac{1}{\Delta t} \to \infty \quad \text{as } \Delta t \to 0,$$
so there is no random variable $Z(t)$ such that $E\big[\big((W(t + \Delta t) - W(t))/\Delta t - Z(t)\big)^2\big] \to 0$ as $\Delta t \to 0$.
Expectations of functions of a Wiener process
Let the Wiener process be $W(t)$ for $0 \le t \le T$.
First, recall that the probability density of a normally distributed r.v. with mean $\mu$ and variance $\sigma^2$ is
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right).$$
For $W(t)$, $\mu = 0$ and $\sigma^2 = t$, so
$$E\big[f(W(t))\big] = \frac{1}{\sqrt{2\pi t}}\int_{-\infty}^{\infty} f(x)\,e^{-x^2/(2t)}\,dx.$$
In addition, for $0 \le s < t$,
$$E\big[f(W(t) - W(s))\big] = \frac{1}{\sqrt{2\pi(t - s)}}\int_{-\infty}^{\infty} f(x)\,e^{-x^2/(2(t - s))}\,dx.$$
Now consider a partition of $[0, T]$: $0 = t_0 < t_1 < \dots < t_m = T$. For $x_0 = 0$, the joint density of $\big(W(t_1), \dots, W(t_m)\big)$ is
$$p(x_1, x_2, \dots, x_m) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi(t_i - t_{i-1})}}\exp\!\left(-\frac{(x_i - x_{i-1})^2}{2(t_i - t_{i-1})}\right).$$
Furthermore, for any function $f$ of the values on the partition,
$$E\big[f\big(W(t_1), \dots, W(t_m)\big)\big] = \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} f(x_1, \dots, x_m)\,p(x_1, \dots, x_m)\,dx_1 \cdots dx_m.$$
The densities $p(x_1, \dots, x_m)$ define a set of finite-dimensional probability measures on $\mathbb{R}^m$.
Expectations of functions of a Wiener process continued
The probability distribution of the partition satisfies
$$P\big(W(t_1) \le a_1, \dots, W(t_m) \le a_m\big) = \int_{-\infty}^{a_1}\!\!\cdots\!\int_{-\infty}^{a_m} p(x_1, \dots, x_m)\,dx_m \cdots dx_1.$$
It is interesting that this probability measure can be extended through finer and finer partitions to all of $[0, T]$, where the measure is identical to the finite-dimensional measure for any partition.
As these finite-dimensional probability measures satisfy certain symmetry and compatibility conditions, Kolmogorov's extension theorem can be applied, which says that there exists a probability space and a stochastic process such that the finite-dimensional probability distributions are identical to those defined above.
The stochastic process is the Wiener process or Brownian motion $W(t)$, and over any partition of $[0, T]$, the finite-dimensional distributions of $W$ reduce to the above expression.
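A small sketch comparing Gaussian-integral quadrature with Monte Carlo sampling for $E[f(W(t))]$; the test function $f = \cos$ is an arbitrary choice (for which the exact value is $e^{-t/2}$), and this example is not from the slides:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(6)

t = 2.0
f = np.cos                          # arbitrary test function

# E[f(W(t))] = (2 pi t)^(-1/2) * integral of f(x) exp(-x^2 / (2t)) dx
integral, _ = quad(lambda x: f(x) * np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t),
                   -np.inf, np.inf)

# Monte Carlo estimate using W(t) ~ N(0, t)
mc = f(rng.normal(0.0, np.sqrt(t), size=200_000)).mean()

print("quadrature            :", integral)
print("Monte Carlo           :", mc)
print("closed form exp(-t/2) :", np.exp(-t / 2))
```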
Transition probabilities of a Wiener process
Finally, consider the transition probability density $p(y, t \mid x, s)$ for the Wiener process from $x$ at time $s$ to $y$ at time $t$. In this case,
$$p(y, t \mid x, s) = \frac{1}{\sqrt{2\pi(t - s)}}\exp\!\left(-\frac{(y - x)^2}{2(t - s)}\right),$$
and we see that $p(y, t + \Delta t \mid x, s + \Delta t) = p(y, t \mid x, s)$, so the transition probability depends only on the elapsed time and thus the Wiener process is a continuous homogeneous Markov process.
In addition, one can directly verify the Chapman-Kolmogorov equation for this transition probability, that is,
$$p(y, t \mid x, s) = \int_{-\infty}^{\infty} p(y, t \mid z, \tau)\,p(z, \tau \mid x, s)\,dz \qquad \text{for } s < \tau < t.$$
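A numerical check of the Chapman-Kolmogorov equation for this Gaussian transition density; the particular values of $x$, $y$, $s$, $\tau$, and $t$ below are arbitrary:

```python
import numpy as np
from scipy.integrate import quad

def p(y, t, x, s):
    """Wiener transition density from x at time s to y at time t (t > s)."""
    return np.exp(-(y - x) ** 2 / (2.0 * (t - s))) / np.sqrt(2.0 * np.pi * (t - s))

x, s = 0.3, 0.0
y, t = 1.2, 2.0
tau = 0.7                            # any intermediate time with s < tau < t

lhs = p(y, t, x, s)
rhs, _ = quad(lambda z: p(y, t, z, tau) * p(z, tau, x, s), -np.inf, np.inf)

print("p(y,t|x,s)                              :", lhs)
print("integral of p(y,t|z,tau) p(z,tau|x,s) dz:", rhs)
```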