Correlations and Copulas
1
Measures of Dependence
•The risk can be split into two parts:
• the individual risks and
• the dependence structure between them
• Measures of dependence include:
• Correlation
• Rank Correlation
• Coefficient of Tail Dependence
• Association
2
Correlation and Covariance
• The coefficient of correlation between
two variables X and Y is defined as
$$\rho_{XY} = \frac{E(XY) - E(X)\,E(Y)}{SD(X)\,SD(Y)}$$
• The covariance is
$$\operatorname{Cov}(X, Y) = E(XY) - E(X)\,E(Y)$$
3
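As a quick illustration (not part of the original slides), the following Python sketch estimates the covariance and correlation of two simulated variables directly from these formulas; the linear relationship y = 0.6x + noise is an assumed example.

```python
# A minimal sketch: estimating covariance and correlation from simulated data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)        # Y is linearly related to X (assumed example)

# Covariance: E(XY) - E(X)E(Y)
cov = np.mean(x * y) - np.mean(x) * np.mean(y)

# Correlation: covariance divided by the product of standard deviations
corr = cov / (np.std(x) * np.std(y))

print(cov, corr)                # close to 0.6 and 0.6/sqrt(1.36) ~ 0.51
print(np.corrcoef(x, y)[0, 1])  # NumPy's built-in estimate for comparison
```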
Independence
• X and Y are independent if the knowledge
of one does not affect the probability
distribution for the other
$$f(Y \mid X = x) = f(Y)$$
where f() denotes the probability density
function
4
Correlation Pitfalls
• A correlation of 0 is not equivalent to independence
• If (X, Y ) are jointly normal, Corr(X,Y ) = 0 implies
independence of X and Y
• In general this is not true: even perfectly related
RVs can have zero correlation:
$$X \sim N(0,1) \text{ and } Y = X^2 \;\Rightarrow\; \operatorname{Corr}(X, Y) = 0$$
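A minimal Python sketch of this pitfall, assuming X ~ N(0,1) and Y = X² as above: the sample correlation is essentially zero even though Y is a deterministic function of X.

```python
# Zero correlation does not imply independence: Y = X^2 is perfectly
# determined by X, yet Cov(X, Y) = E(X^3) - E(X)E(X^2) = 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)   # X ~ N(0, 1)
y = x ** 2                       # Y is a deterministic function of X

print(np.corrcoef(x, y)[0, 1])   # approximately 0
```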
Types of Dependence
[Figure: three panels (a), (b), (c) illustrating different ways in which E(Y) can depend on X]
6
Correlation Pitfalls (cont.)
• Correlation is invariant under linear
transformations, but not under general
transformations:
– Example, two log-normal RVs have a different
correlation than the underlying normal RVs
• A small correlation does not imply a small
degree of dependency.
Stylized Facts of Correlations
• Correlation clustering:
– periods of high (low) correlation are likely to be followed
by periods of high (low) correlation
• Asymmetry and co-movement with volatility:
– high volatility in falling markets goes hand in hand with
a strong increase in correlation, but this is not the case
for rising markets
• This reduces opportunities for diversification in
stock-market declines.
Monitoring Correlation Between
Two Variables X and Y
Define $x_i = (X_i - X_{i-1})/X_{i-1}$ and $y_i = (Y_i - Y_{i-1})/Y_{i-1}$
Also define
$\operatorname{var}_{x,n}$: daily variance of X calculated on day n−1
$\operatorname{var}_{y,n}$: daily variance of Y calculated on day n−1
$\operatorname{cov}_n$: covariance calculated on day n−1
The correlation is
$$\rho_n = \frac{\operatorname{cov}_n}{\sqrt{\operatorname{var}_{x,n}\,\operatorname{var}_{y,n}}}$$
9
Covariance
• The covariance on day n is
$$\operatorname{cov}_n = E(x_n y_n) - E(x_n)E(y_n)$$
• It is usually approximated as $E(x_n y_n)$
10
Monitoring Correlation continued
EWMA:
$$\operatorname{cov}_n = \lambda \operatorname{cov}_{n-1} + (1 - \lambda)\, x_{n-1} y_{n-1}$$
GARCH(1,1):
$$\operatorname{cov}_n = \omega + \alpha\, x_{n-1} y_{n-1} + \beta \operatorname{cov}_{n-1}$$
11
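A minimal Python sketch of the EWMA updating scheme above; the smoothing parameter lam = 0.94 and the simulated return series are illustrative assumptions, not values from the slides.

```python
# EWMA monitoring of the correlation between two return series.
import numpy as np

def ewma_correlation(x, y, lam=0.94):
    """Running EWMA correlation between return series x and y."""
    var_x, var_y, cov = x[0] ** 2, y[0] ** 2, x[0] * y[0]   # crude initialisation
    corrs = []
    for xi, yi in zip(x[1:], y[1:]):
        var_x = lam * var_x + (1 - lam) * xi ** 2
        var_y = lam * var_y + (1 - lam) * yi ** 2
        cov   = lam * cov   + (1 - lam) * xi * yi
        corrs.append(cov / np.sqrt(var_x * var_y))
    return np.array(corrs)

# Usage with simulated returns whose true correlation is 0.5
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500) * 0.01
print(ewma_correlation(z[:, 0], z[:, 1])[-5:])   # estimates hovering around 0.5
```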
Correlation for Multivariate Case
• If X is m-dimensional and Y n-dimensional
then
• Cov(X, Y) is given by the m×n matrix with entries Cov(Xi, Yj)
• Σ = Cov(X, Y) is called the covariance matrix
12
Positive Semidefinite Condition
A variance-covariance matrix, Σ, is internally consistent if the positive semidefinite condition
$$\mathbf{w}^{T} \Sigma\, \mathbf{w} \ge 0$$
holds for all vectors $\mathbf{w}$
13
Example
The variance-covariance matrix
$$\Sigma = \begin{pmatrix} 1 & 0 & 0.9 \\ 0 & 1 & 0.9 \\ 0.9 & 0.9 & 1 \end{pmatrix}$$
is not internally consistent: for w = (1, 1, −1) the positive semidefinite condition is not satisfied, since $\mathbf{w}^{T}\Sigma\,\mathbf{w} = -0.6 < 0$.
14
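A short Python check of this example, assuming NumPy: both the quadratic form with w = (1, 1, −1) and the smallest eigenvalue are negative, confirming the matrix is not positive semidefinite.

```python
# Check positive semidefiniteness of the example variance-covariance matrix.
import numpy as np

sigma = np.array([[1.0, 0.0, 0.9],
                  [0.0, 1.0, 0.9],
                  [0.9, 0.9, 1.0]])

w = np.array([1.0, 1.0, -1.0])
print(w @ sigma @ w)                 # -0.6 < 0, so the matrix is not valid
print(np.linalg.eigvalsh(sigma))     # the smallest eigenvalue is negative (about -0.27)
```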
Correlation as a Measure of
Dependence
• Correlation as a measure of dependence fully
determines the dependence structure for normal
distributions and, more generally, elliptical
distributions, but fails to do so outside this class.
• Even within this class correlation has to be handled
with care: while a correlation of zero for multivariate
normally distributed RVs implies independence, a
correlation of zero for, say, t-distributed rvs does not
imply independence
Multivariate Normal Distribution
• Fairly easy to handle
• A variance-covariance matrix defines the
variances of and correlations between
variables
• To be internally consistent a variance-covariance matrix must be positive
semidefinite
16
Bivariate Normal PDF
• Probability density function of a bivariate normal distribution:
$$\mathbf{X} = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim MVN(\boldsymbol{\mu}, \Sigma)$$
Mean vector: $\boldsymbol{\mu} = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}$
Covariance matrix: $\Sigma = \begin{pmatrix} \sigma_1^{2} & \operatorname{Cov}(X_1, X_2) \\ \operatorname{Cov}(X_1, X_2) & \sigma_2^{2} \end{pmatrix}$
$$f(x_1, x_2) = \frac{1}{2\pi\sqrt{|\Sigma|}}\, \exp\!\left[-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})' \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right]$$
X and Y Bivariate Normal
• Conditional on the value of X, Y is normal with mean
$$\mu_Y + \rho_{XY}\, \sigma_Y\, \frac{X - \mu_X}{\sigma_X}$$
and standard deviation $\sigma_Y \sqrt{1 - \rho_{XY}^{2}}$, where $\mu_X$, $\mu_Y$, $\sigma_X$, and $\sigma_Y$ are the unconditional means and SDs of X and Y, and $\rho_{XY}$ is the coefficient of correlation between X and Y
18
Generating Random Samples for
Monte Carlo Simulation
• =NORMSINV(RAND()) gives a random sample from a standard normal distribution in Excel
• For a multivariate normal distribution a
method known as Cholesky’s
decomposition can be used to generate
random samples
19
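A minimal Python sketch of the Cholesky approach mentioned above (not Hull's spreadsheet implementation); the correlation of 0.7 is an assumed example.

```python
# Cholesky decomposition: turn independent standard normals into
# correlated bivariate normal samples.
import numpy as np

rng = np.random.default_rng(3)
rho = 0.7                                    # assumed correlation
cov = np.array([[1.0, rho],
                [rho, 1.0]])

L = np.linalg.cholesky(cov)                  # lower-triangular factor, cov = L @ L.T
z = rng.standard_normal((10_000, 2))         # independent N(0,1) samples
samples = z @ L.T                            # correlated bivariate normal samples

print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])   # close to 0.7
```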
Bivariate Normal PDF
independence
$$\boldsymbol{\mu} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \rho = 0, \quad \Pr(Y > 0 \mid X > 0) = 0.5$$
Bivariate Normal PDF
dependence
$$\boldsymbol{\mu} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}, \quad \rho = 0.483, \quad \Pr(Y > 0 \mid X > 0) = 0.75$$
Factor Models
• When there are N variables, Vi (i = 1,
2,..N), in a multivariate normal
distribution there are N(N−1)/2
correlations
• We can reduce the number of correlation
parameters that have to be estimated
with a factor model
22
One-Factor Model continued
• If Ui have standard normal distributions we can
set
U i  ai F  1 ai2 Zi
where the common factor F and the idiosyncratic
component Zi have independent standard normal
distributions
• Correlation between Ui and Uj is ai aj
23
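A minimal Python sketch of this one-factor construction; the loadings a = (0.6, 0.3) are assumed for illustration. The sample correlation between U1 and U2 should be close to a1·a2 = 0.18.

```python
# One-factor model: U_i = a_i F + sqrt(1 - a_i^2) Z_i with F, Z_i independent N(0,1).
import numpy as np

rng = np.random.default_rng(4)
a = np.array([0.6, 0.3])                      # assumed factor loadings a_1, a_2
n = 200_000

F = rng.standard_normal(n)                    # common factor
Z = rng.standard_normal((n, 2))               # idiosyncratic components
U = a * F[:, None] + np.sqrt(1 - a ** 2) * Z  # each U_i is standard normal

print(np.corrcoef(U[:, 0], U[:, 1])[0, 1])    # close to a_1 * a_2 = 0.18
```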
Copulas
• A powerful concept for aggregating risks, the copula function, was introduced into finance by Embrechts, McNeil, and Straumann [1999, 2000]
• A copula is a function that links univariate
marginal distributions to the full multivariate
distribution
• This function is the joint distribution function of N
standard uniform random variables.
Copulas
• The dependence relationship between two random variables
X and Y is obscured by the marginal densities of X and Y
• One can think of the copula density as the density that filters
or extracts the marginal information from the joint
distribution of X and Y.
• To describe, study and measure statistical dependence
between random variables X and Y one may study the copula
densities.
• Vice versa, to build a joint distribution between two random variables X ~ G(·) and Y ~ H(·), one may first construct the copula on [0,1]² and then apply the inverse transformations G⁻¹(·) and H⁻¹(·).
Cumulative Distribution Function
Theorem
• Let X be a continuous random variable
with distribution function F()
• Let Y be a transformation of X such that
Y=F(X).
• The distribution of Y is uniform on [0,1].
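A minimal Python sketch of this probability integral transform, using an assumed exponential distribution for X: applying its cdf produces a variable that passes a uniformity test.

```python
# Probability integral transform: Y = F(X) is Uniform[0,1] for continuous F.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=100_000)   # any continuous X (assumed exponential)
y = stats.expon.cdf(x, scale=2.0)              # Y = F(X)

# A Kolmogorov-Smirnov test against the uniform distribution should not reject.
print(stats.kstest(y, "uniform"))
```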
Sklar’s (1959) Theorem- The
Bivariate Case
• X, Y are continuous random variables such that
X  ~G(·), Y  ~ H(·)
• G(·), H(·): Cumulative distribution functions – cdf’s
• Create the mapping of X  into X such that X=G(X ) then X has a
Uniform distribution on [0,1] This mapping is called the probability
integral transformation e.g. Nelsen (1999).
• Any bivariate joint distribution of (X ,Y ) can be transformed to a
bivariate copula (X,Y)={G(X ), H(Y )} –Sklar (1959).
• Thus, a bivariate copula is a bivariate distribution with uniform
marginal disturbutions (marginals).
Copula
Mathematical Definition
• An n-dimensional copula C is a function which is a cumulative distribution function with uniform marginals:
$$C(\mathbf{u}) = C(u_1, \ldots, u_n)$$
• The condition that C is a distribution function leads to the following properties:
– As cdf's are always increasing, $C(u_1, \ldots, u_n)$ is increasing in each component $u_i$.
– The marginal in component i is obtained by setting $u_j = 1$ for all $j \ne i$, and it must be uniformly distributed:
$$C(1, \ldots, 1, u_i, 1, \ldots, 1) = u_i$$
– For $a_i < b_i$, the probability $\Pr(U_1 \in [a_1, b_1], \ldots, U_n \in [a_n, b_n])$ must be non-negative.
An Example
Let Si be the value of Stock i and let Vpf be the value of a portfolio:
$$V_{pf} = \sum_{i=1}^{N} w_i S_i, \qquad \sum_{i=1}^{N} w_i = 1$$
The 5% Value-at-Risk of a portfolio is defined as follows:
$$\Pr(V_{pf} \le \mathrm{VaR}) = 0.05$$
Gaussian copulas have been used to model dependence between (S1, S2, ..., Sn)
Copulas Derived from Distributions
• Typical multivariate distributions describe important dependence structures. Copulas can be derived from these distributions.
• The multivariate normal distribution will lead
to the Gaussian copula.
• The multivariate Student t-distribution leads to
the t-copula.
Gaussian Copula Models:
• Suppose we wish to define a correlation structure
between two variables V1 and V2 that do not have
normal distributions
• We transform the variable V1 to a new variable U1
that has a standard normal distribution on a
“percentile-to-percentile” basis.
• We transform the variable V2 to a new variable U2
that has a standard normal distribution on a
“percentile-to-percentile” basis.
• U1 and U2 are assumed to have a bivariate normal
distribution
31
The Correlation Structure Between the V’s is
Defined by that Between the U’s
[Figure: V1 and V2 (values roughly between −0.2 and 1.2) are each transformed by one-to-one mappings into U1 and U2 (values roughly between −6 and 6); the correlation assumption is made for U1 and U2]
32
Example (page 211)
[Figure: the distributions of V1 and V2]
33
V1 Mapping to U1
V1     Percentile (probability)     U1
0.2    0.20                         −0.84
0.4    0.55                          0.13
0.6    0.80                          0.84
0.8    0.95                          1.64
Use the function NORMINV in Excel to get the values for U1
34
V2 Mapping to U2
V2     Percentile (probability)     U2
0.2    0.08                         −1.41
0.4    0.32                         −0.47
0.6    0.68                          0.47
0.8    0.92                          1.41
Use the function NORMINV in Excel to get the values for U2
35
Example of Calculation of Joint
Cumulative Distribution
• Probability that V1 and V2 are both less than
0.2 is the probability that U1 < −0.84 and U2 <
−1.41
• When copula correlation is 0.5 this is
M( −0.84, −1.41, 0.5) = 0.043
where M is the cumulative distribution function
for the bivariate normal distribution
36
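A short sketch reproducing this calculation with SciPy's bivariate normal cdf (an assumed tool choice, not the method named in the slides).

```python
# M(-0.84, -1.41, 0.5): bivariate normal cdf at (-0.84, -1.41) with correlation 0.5.
from scipy.stats import multivariate_normal

cov = [[1.0, 0.5],
       [0.5, 1.0]]
m = multivariate_normal(mean=[0.0, 0.0], cov=cov)

print(m.cdf([-0.84, -1.41]))   # approximately 0.043
```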
Gaussian Copula – algebraic
relationship
• Let G1 and G2 be the cumulative marginal
probability distributions of V1 and V2
• Map V1 = v1 to U1 = u1 so that
$$G_1(v_1) = \Phi(u_1)$$
• Map V2 = v2 to U2 = u2 so that
$$G_2(v_2) = \Phi(u_2)$$
• Φ is the cumulative normal distribution function, so
$$u_1 = \Phi^{-1}[G_1(v_1)] \quad \text{and} \quad u_2 = \Phi^{-1}[G_2(v_2)]$$
$$v_1 = G_1^{-1}[\Phi(u_1)] \quad \text{and} \quad v_2 = G_2^{-1}[\Phi(u_2)]$$
Gaussian Copula – algebraic
relationship
• U1 and U2 are assumed to be bivariate normal
• The two-dimensional Gaussian copula is
$$C^{Ga}_{\rho}(u_1, u_2) = \Phi_{\Sigma}\!\left(\Phi^{-1}[G_1(v_1)],\, \Phi^{-1}[G_2(v_2)]\right)$$
where Σ is the 2×2 matrix with 1 on the diagonal and correlation coefficient ρ otherwise, and $\Phi_{\Sigma}$ denotes the cdf of a bivariate normal distribution with zero mean and covariance matrix Σ.
• This representation is equivalent to
$$C^{Ga}_{\rho}(u_1, u_2) = \int_{-\infty}^{\Phi^{-1}(u_1)} \int_{-\infty}^{\Phi^{-1}(u_2)} \frac{1}{2\pi\sqrt{1-\rho^{2}}} \exp\!\left(-\frac{s_1^{2} - 2\rho s_1 s_2 + s_2^{2}}{2(1-\rho^{2})}\right) ds_1\, ds_2$$
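A minimal Python sketch of the two-dimensional Gaussian copula above, assuming SciPy for the inverse normal cdf and the bivariate normal cdf.

```python
# Gaussian copula: C_rho(u1, u2) = Phi_Sigma(Phi^{-1}(u1), Phi^{-1}(u2)).
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u1, u2, rho):
    """Two-dimensional Gaussian copula with correlation rho."""
    z = [norm.ppf(u1), norm.ppf(u2)]
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

print(gaussian_copula(0.5, 0.5, 0.0))    # 0.25 = u1*u2, the independence copula
print(gaussian_copula(0.2, 0.08, 0.5))   # approximately 0.043, as in the earlier example
```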
Bivariate Normal Copula
independence
BVN ( X , Y )  Independence
BVNCopula( X , Y )  Independence
X ~ U [0,1], Y ~ [0,1], ( X , Y ) ~ c(u1 , u2 )  1, (u1 , u2 ) [0,1]2
39
Bivariate Normal Copula
dependence
BVN ( X , Y )  Dependence
BVNCopula( X , Y )  Dependence
X ~ U [0,1], Y ~ [0,1], ( X , Y ) ~ c(u1 , u2 ), (u1 , u2 )  [0,1]2
40
5000 Random Samples from
the Bivariate Normal
[Scatter plot of 5000 random samples from the bivariate normal distribution; both axes run from −5 to 5]
41
5000 Random Samples from
the Bivariate Student t
[Scatter plot of 5000 random samples from the bivariate Student t distribution; both axes run from −10 to 10]
42
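A minimal Python sketch that generates the two samples described above; the correlation 0.5 and 4 degrees of freedom are assumed values. Dividing the normal samples by a common chi-square factor is what creates the heavier joint tails of the bivariate t.

```python
# 5000 samples from a bivariate normal and from a bivariate Student t
# with the same correlation.
import numpy as np

rng = np.random.default_rng(6)
rho, df, n = 0.5, 4, 5000                    # assumed correlation and degrees of freedom
cov = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(cov)

z = rng.standard_normal((n, 2)) @ L.T        # bivariate normal samples
chi2 = rng.chisquare(df, size=n)
t = z / np.sqrt(chi2 / df)[:, None]          # bivariate t: shared chi-square divisor

# Count joint extreme moves beyond +/-3 in both coordinates;
# the bivariate t typically shows many more than the bivariate normal.
print((np.abs(z) > 3).all(axis=1).sum(), (np.abs(t) > 3).all(axis=1).sum())
```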
Multivariate Gaussian Copula
• We can similarly define a correlation
structure between V1, V2,…Vn
• We transform each variable Vi to a new
variable Ui that has a standard normal
distribution on a “percentile-to-percentile” basis.
• The U’s are assumed to have a
multivariate normal distribution
43
Factor Copula Model
In a factor copula model the correlation
structure between the U’s is generated
by assuming one or more factors.
44
Credit Default Correlation
• The credit default correlation between
two companies is a measure of their
tendency to default at about the same
time
• Default correlation is important in risk
management when analyzing the benefits
of credit risk diversification
• It is also important in the valuation of
some credit derivatives
45
Model for Loan Portfolio
• We map the time to default for company i, Ti, to a
new variable Ui and assume
U i  ai F  1 a Zi
2
i
where F and the Zi have independent standard
normal distributions
• The copula correlation is ρ = a²
• Define Qi as the cumulative probability distribution
of Ti
• Prob(Ui<U) = Prob(Ti<T) when N(U) = Qi(T)
46
Analysis
• To analyze the model we
– Calculate the probability that, conditional on the
value of F, Ui is less than some value U
– This is the same as the probability that Ti is less than T, where T and U are the same percentiles of their distributions
– Since $U_i = \sqrt{\rho}\, F + \sqrt{1-\rho}\, Z_i$, the event $U_i < U$ is equivalent to
$$Z_i < \frac{U - \sqrt{\rho}\, F}{\sqrt{1-\rho}}$$
– And
$$\operatorname{Prob}(U_i < U \mid F) = \operatorname{Prob}\!\left(Z_i < \frac{U - \sqrt{\rho}\, F}{\sqrt{1-\rho}}\right) = N\!\left(\frac{U - \sqrt{\rho}\, F}{\sqrt{1-\rho}}\right)$$
– This is also Prob(Ti<T|F)
Analysis (cont.)
1
U  N [Q(T )]
This leads to
 N 1 PD   F 
Prob(Ti  T F )  N 

1 


where PD is the probability of default in time T
The Model continued
• The worst-case default rate for a portfolio, for a time horizon of T and a confidence level of X, is
$$\mathrm{WCDR}(T, X) = N\!\left(\frac{N^{-1}[Q(T)] + \sqrt{\rho}\, N^{-1}(X)}{\sqrt{1-\rho}}\right)$$
• The VaR for this time horizon and confidence
limit is
$$\mathrm{VaR}(T, X) = L \times (1 - R) \times \mathrm{WCDR}(T, X)$$
where L is the loan principal and R is the recovery rate
49
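A minimal Python sketch of these two formulas; the parameter values (PD = 1%, ρ = 0.1, X = 99.9%, L = 100 million, R = 40%) are illustrative assumptions, not values from the slides.

```python
# Worst-case default rate (Vasicek formula) and the corresponding VaR.
from scipy.stats import norm

def wcdr(pd, rho, x):
    """Worst-case default rate for confidence level x and copula correlation rho."""
    return norm.cdf((norm.ppf(pd) + rho ** 0.5 * norm.ppf(x)) / (1 - rho) ** 0.5)

def credit_var(loan_principal, recovery, pd, rho, x):
    """VaR(T, X) = L * (1 - R) * WCDR(T, X)."""
    return loan_principal * (1 - recovery) * wcdr(pd, rho, x)

print(wcdr(0.01, 0.1, 0.999))                     # about 0.077
print(credit_var(100e6, 0.4, 0.01, 0.1, 0.999))   # about 4.6 million
```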
The Model continued
$$\operatorname{Prob}(U_i < U \mid F) = N\!\left(\frac{U - a_i F}{\sqrt{1 - a_i^{2}}}\right)$$
Hence
$$\operatorname{Prob}(T_i < T \mid F) = N\!\left(\frac{N^{-1}[Q_i(T)] - a_i F}{\sqrt{1 - a_i^{2}}}\right)$$
Assuming the Q's and a's are the same for all companies,
$$\operatorname{Prob}(T_i < T \mid F) = N\!\left(\frac{N^{-1}[Q(T)] - \sqrt{\rho}\, F}{\sqrt{1-\rho}}\right)$$
where ρ is the copula correlation
50
Appendix 1: Sampling from
Bivariate Normal Distribution
Appendix 2: Sampling from
Bivariate t Distribution
Appendix 3: Gaussian Copula
with Student t Distribution
• Sample U1 and U2 from a bivariate normal distribution with the given correlation ρ.
• Convert each sample into a variable with a Student t-distribution on a percentile-to-percentile basis.
• Suppose that U1 is in cell C1. The Excel function TINV gives a “two-tail” inverse of the t-distribution. An Excel instruction for determining V1 is therefore
=IF(NORMSDIST(C1)<0.5,-TINV(2*NORMSDIST(C1),df),TINV(2*(1-NORMSDIST(C1)),df))
where df stands for the degrees-of-freedom parameter
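A Python analogue of this percentile-to-percentile conversion (an assumed alternative to the Excel instruction, using SciPy); the correlation 0.6 and df = 4 are illustrative values.

```python
# Gaussian copula with Student t marginals: map correlated standard normals
# to Student t variables on a percentile-to-percentile basis.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(7)
rho, df = 0.6, 4                                             # assumed correlation and d.o.f.
cov = [[1.0, rho], [rho, 1.0]]

u = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)    # correlated standard normals
v = t.ppf(norm.cdf(u), df)                                   # Student t marginals, Gaussian copula

print(np.corrcoef(v[:, 0], v[:, 1])[0, 1])
```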