Stochastic Models
Multivariate Distributions
Walt Pohl
Universität Zürich
Department of Business Administration
April 24, 2013
Multivariate Distributions
A multivariate distribution is the probability distribution for a family of random variables X1, ..., Xn. To define the distribution, we must specify, for every family of ranges [a1, b1], ..., [an, bn],
P(a1 ≤ X1 ≤ b1, ..., an ≤ Xn ≤ bn).
Walt Pohl (UZH QBA)
Stochastic Models
April 24, 2013
2 / 15
Independence
An important special case is when the random variables
are independent. Then
P(a1 ≤ X1 ≤ b1, ..., an ≤ Xn ≤ bn) = P(a1 ≤ X1 ≤ b1) · · · P(an ≤ Xn ≤ bn).
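The factorization can be checked empirically. A minimal sketch in Python with NumPy (not part of the slides; the ranges and sample size are illustrative), estimating both sides by Monte Carlo for two independent standard normals:

```python
import numpy as np

# Sketch: estimate both sides of the independence factorization
# P(a1<=X1<=b1, a2<=X2<=b2) = P(a1<=X1<=b1) * P(a2<=X2<=b2)
# by Monte Carlo for two independent standard normal variables.
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

a1, b1 = -1.0, 1.0
a2, b2 = 0.0, 2.0

in1 = (a1 <= x1) & (x1 <= b1)          # event a1 <= X1 <= b1
in2 = (a2 <= x2) & (x2 <= b2)          # event a2 <= X2 <= b2

joint = float(np.mean(in1 & in2))              # left-hand side
product = float(np.mean(in1) * np.mean(in2))   # right-hand side
```

With independent draws the two estimates agree up to Monte Carlo error.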
More challenging is modelling dependent distributions.
Defining Multivariate Distributions
The methods for univariate distributions generalize.
Discrete distribution – a probability mass function f, with
P(X1 = x1, ..., Xn = xn) = f(x1, ..., xn).
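As a toy illustration (not from the slides; the probabilities are made up), a joint pmf for two binary random variables can be stored as a table, with marginals recovered by summing out the other variable:

```python
import numpy as np

# Toy sketch: a joint pmf for two dependent binary random variables,
# stored as a table with f[x1, x2] = P(X1 = x1, X2 = x2).
f = np.array([[0.4, 0.1],    # f(0, 0), f(0, 1)
              [0.2, 0.3]])   # f(1, 0), f(1, 1)

total = float(f.sum())                 # a pmf must sum to 1
marginal_x1 = f.sum(axis=1)            # P(X1 = x1), summing out x2
p_11 = float(f[1, 1])                  # P(X1 = 1, X2 = 1)
```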
Continuous distribution – a multivariate density p, with
P(a1 ≤ X1 ≤ b1, ..., an ≤ Xn ≤ bn) = ∫_{a1}^{b1} · · · ∫_{an}^{bn} p(x1, ..., xn) dx1 · · · dxn.
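For a concrete case the boxed probability can be computed numerically. A sketch (all values assumed) for two independent standard normals, where the joint density is p(x1, x2) = e^{−(x1²+x2²)/2}/(2π), comparing a midpoint-rule double integral against the closed form built from the univariate normal CDF:

```python
import math
import numpy as np

# Sketch: integrate the independent bivariate standard normal density
# over a box with the midpoint rule, and compare against the closed
# form obtained by factorizing into univariate normal CDFs.
def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a1, b1 = -1.0, 1.0
a2, b2 = -0.5, 1.5

n = 400                                    # grid points per axis
dx, dy = (b1 - a1) / n, (b2 - a2) / n
xs = a1 + (np.arange(n) + 0.5) * dx        # midpoints of the x-cells
ys = a2 + (np.arange(n) + 0.5) * dy        # midpoints of the y-cells
X, Y = np.meshgrid(xs, ys, indexing="ij")
p = np.exp(-(X**2 + Y**2) / 2.0) / (2.0 * np.pi)

box_prob = float(p.sum() * dx * dy)        # midpoint-rule double integral

# independence lets the exact probability factorize into marginal pieces
exact = (normal_cdf(b1) - normal_cdf(a1)) * (normal_cdf(b2) - normal_cdf(a2))
```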
Defining Multivariate Distributions, cont’d
In practice, very few multivariate distributions are defined
this way. Instead, we rely heavily on transformations.
One (partial) exception: the multivariate normal
distribution.
Multivariate Normal Distribution
The multivariate normal distribution depends on:
A vector of means, µ = (µi), where E(Xi) = µi.
A matrix of covariances, Σ = (σij), where Cov(Xi, Xj) = σij.
Its density is
p(x) = (2π)^{−n/2} (det Σ)^{−1/2} e^{−(1/2)(x−µ)′ Σ^{−1} (x−µ)}.
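The density formula can be transcribed directly. A minimal Python/NumPy sketch (the function name and test values are made up), checked against the product of univariate normal densities in the diagonal-Σ case, where the joint density factorizes:

```python
import numpy as np

# Minimal sketch: the multivariate normal density formula, transcribed
# directly; mvn_pdf and the test point are illustrative names/values.
def mvn_pdf(x, mu, sigma):
    """Multivariate normal density at the point x."""
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.inv(sigma) @ diff   # (x-mu)' Sigma^{-1} (x-mu)
    norm = (2.0 * np.pi) ** (n / 2.0) * np.sqrt(np.linalg.det(sigma))
    return float(np.exp(-0.5 * quad) / norm)

# sanity check: with a diagonal covariance the joint density factorizes
# into a product of univariate normal densities
mu = np.array([0.5, -1.0])
sigma = np.diag([2.0, 0.5])
x = np.array([1.0, 0.0])

joint = mvn_pdf(x, mu, sigma)
variances = np.diag(sigma)
marginals = float(np.prod(
    np.exp(-0.5 * (x - mu) ** 2 / variances)
    / np.sqrt(2.0 * np.pi * variances)
))
```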
Multivariate Normal Distribution, cont’d
The covariance matrix must be symmetric positive
definite.
Symmetric: σij = σji, or Σ′ = Σ.
Positive definite: v′Σv > 0 for all v ≠ 0. This is because v′Σv is the variance of v′X.
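Both properties are easy to check numerically. A sketch with an illustrative 2×2 covariance matrix (not from the slides), also confirming v′Σv against the sample variance of v′X:

```python
import numpy as np

# Sketch: verify that a candidate covariance matrix is symmetric positive
# definite, and that v' Sigma v matches the sample variance of v'X.
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

is_symmetric = bool(np.allclose(sigma, sigma.T))
eigenvalues = np.linalg.eigvalsh(sigma)          # real for symmetric input
is_positive_definite = bool(np.all(eigenvalues > 0))

rng = np.random.default_rng(1)
v = np.array([1.0, -2.0])
x = rng.multivariate_normal(np.zeros(2), sigma, size=200_000)
empirical_var = float(np.var(x @ v))             # Var(v'X), estimated
quadratic_form = float(v @ sigma @ v)            # v' Sigma v, exact
```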
Simulating the Multivariate Normal
To simulate the multivariate normal, we usually simulate n independent standard normal random variables, ε = (εi), and use a linear change of variables,
X = µ + Mε,
where µ is a vector and M = (mij) is a matrix.
How should we pick M?
Simulating the Multivariate Normal
If Xi = ∑j mij εj, then
Cov(Xi, Xi′) = Cov(∑j mij εj, ∑j′ mi′j′ εj′)
= ∑j ∑j′ mij mi′j′ Cov(εj, εj′)
= ∑j mij mi′j,
since the εj are independent with unit variance, so Cov(εj, εj′) = 1 when j = j′ and 0 otherwise.
Simulating the Multivariate Normal, cont’d
The last sum can be written in matrix form as MM′. So we just need to find a matrix M such that MM′ = Σ.
But how do we find such an M?
Simulating the Multivariate Normal, cont’d
There are two main methods of finding such an M.
Cholesky decomposition.
Symmetric square root.
Cholesky decomposition
Any symmetric positive-definite matrix Σ can be written as
Σ = CC′,
where C is lower triangular.
This is called the Cholesky decomposition of Σ.
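NumPy's `cholesky` returns exactly such a lower-triangular factor L with LL′ = Σ, which can serve as M. A simulation sketch with illustrative parameters (not from the slides):

```python
import numpy as np

# Sketch: simulate X = mu + L*eps, where L is the lower-triangular
# Cholesky factor of sigma, so that Cov(X) = L L' = sigma.
rng = np.random.default_rng(42)

mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

L = np.linalg.cholesky(sigma)            # lower triangular, L @ L.T == sigma
eps = rng.standard_normal((200_000, 2))  # independent standard normals
x = mu + eps @ L.T                       # one draw of X per row

sample_mean = x.mean(axis=0)
sample_cov = np.cov(x, rowvar=False)     # should be close to sigma
```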
Symmetric square root
A symmetric positive-definite matrix has all positive
eigenvalues, and can be diagonalized as
Σ = O′DO,
where D is the diagonal matrix of eigenvalues and O is an n-dimensional rotation (also known as an orthogonal matrix). Orthogonal matrices have the property that O′O = OO′ = I, the identity matrix.
This is the spectral decomposition; for a symmetric positive-definite matrix it coincides with the singular value decomposition.
Symmetric square root
Let D^{1/2} be the diagonal matrix formed by taking the square roots of the diagonal elements of D. Then the symmetric square root Σ^{1/2} is
Σ^{1/2} = O′D^{1/2}O.
This matrix is indeed symmetric, and
Σ^{1/2}Σ^{1/2} = (O′D^{1/2}O)(O′D^{1/2}O) = O′D^{1/2}(OO′)D^{1/2}O = O′DO = Σ.
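A sketch of the symmetric square root via `numpy.linalg.eigh`, which returns the decomposition as Σ = Q diag(w) Q′, i.e. O = Q′ in the notation above; the example matrix is illustrative:

```python
import numpy as np

# Sketch: build the symmetric square root from the spectral decomposition.
# np.linalg.eigh gives sigma = Q @ diag(w) @ Q.T with orthonormal Q.
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

w, Q = np.linalg.eigh(sigma)                 # eigenvalues w, orthonormal Q
sqrt_sigma = Q @ np.diag(np.sqrt(w)) @ Q.T   # the symmetric square root

# sqrt_sigma is symmetric, and squaring it reproduces sigma
```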
Comparison
Cholesky decomposition – faster to compute.
Symmetric square root – numerically more accurate.