Econometric Methods
Dr. Matthias Opfinger
Lehrstuhl für Finanzwissenschaft
WS 2015/16
Revision: Matrix Algebra and Probability
Moving on to ...

1. Review of Matrix Algebra
   Basic Definitions
   Matrix Operations
   Linear Independence and Rank of a Matrix
   Quadratic Forms and Positive Definite Matrices
   Idempotent Matrices
   Differentiation of Linear and Quadratic Forms

2. Review of Probability
   Random Variables and Probability Distributions
   Expected Values, Mean and Variance
   Two Random Variables
   The Normal Distribution
   Random Sampling and the Sample Distribution
Basic Definitions

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1K} \\ a_{21} & a_{22} & \cdots & a_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nK} \end{pmatrix}

A matrix is a rectangular array of numbers.
An (n × K) matrix has n rows and K columns.
n is the row dimension and K is the column dimension.
a_{ij} represents the element in the ith row and jth column.
A real number can be interpreted as a (1 × 1) matrix, which is called a scalar.
Basic Definitions

A (1 × K) matrix is called a row vector (of dimension K) and can be written as x = (x_1, x_2, ..., x_K).
An (n × 1) matrix is called a column vector (of dimension n) and can be written as

y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}

Matrix A could be written in the following form:

A = (a_1 \; a_2 \; \cdots \; a_K)

in terms of its column vectors a_j.
Basic Definitions

A square matrix has the same number of rows and columns (n = K).
A symmetric matrix is a square matrix in which a_{ij} = a_{ji} for all i and j:

A = \begin{pmatrix} 1 & 3 & 7 \\ 3 & 5 & 2 \\ 7 & 2 & 1 \end{pmatrix}

A diagonal matrix is a square matrix whose off-diagonal elements are zero, that is, a_{ij} = 0 for all i ≠ j:

A = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}
Basic Definitions

An (n × n) identity matrix, denoted I, or sometimes I_n to emphasize its dimension, is the diagonal matrix with unity (one) in each diagonal position and zero elsewhere:

I = I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}

An (n × K) zero matrix, denoted 0, is the (n × K) matrix with zero for all entries. A column vector of zeros will be denoted as o.
A triangular matrix is a square matrix that has only zeros either above or below the main diagonal. If the zeros are above the diagonal, the matrix is lower triangular.
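These special matrices are easy to construct and check numerically. A minimal sketch using numpy (the concrete matrices are illustrative choices, not part of the slides, except the symmetric example above):

```python
import numpy as np

I3 = np.eye(3)                    # (3 × 3) identity matrix
D = np.diag([4.0, 5.0, 6.0])      # diagonal matrix with the given main diagonal
Z = np.zeros((2, 4))              # (2 × 4) zero matrix
L = np.tril(np.ones((3, 3)))      # lower triangular: zeros above the main diagonal

A = np.array([[1, 3, 7],
              [3, 5, 2],
              [7, 2, 1]])         # the symmetric example from the slide

print(np.array_equal(A, A.T))     # symmetry a_ij = a_ji: prints True
```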
Matrix Addition and Subtraction

Two matrices A and B, each having dimension (n × K), can be added/subtracted element by element: A + B = [a_{ij} + b_{ij}]. More precisely,

A + B = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1K}+b_{1K} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2K}+b_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}+b_{n1} & a_{n2}+b_{n2} & \cdots & a_{nK}+b_{nK} \end{pmatrix}

Properties:
A + 0 = A
0 + A = A
A − A = 0
A + B = B + A
(A + B) + C = A + (B + C)
Scalar Multiplication

Given any real number (scalar) γ, scalar multiplication is defined as γA ≡ [γa_{ij}], or

\gamma A = \begin{pmatrix} \gamma a_{11} & \gamma a_{12} & \cdots & \gamma a_{1K} \\ \gamma a_{21} & \gamma a_{22} & \cdots & \gamma a_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma a_{n1} & \gamma a_{n2} & \cdots & \gamma a_{nK} \end{pmatrix}
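A quick numerical sketch of element-wise addition, scalar multiplication, and the properties listed above (matrices chosen for illustration):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
gamma = 3.0

S = A + B          # element-by-element: [a_ij + b_ij]
G = gamma * A      # scalar multiplication: [gamma * a_ij]

print(np.array_equal(A + B, B + A))               # commutativity: prints True
print(np.array_equal(A - A, np.zeros((2, 2))))    # A - A = 0: prints True
print(np.array_equal(G, np.array([[3., 6.], [9., 12.]])))  # prints True
```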
Transposition

The transpose of a matrix A, denoted A′, is obtained by creating the matrix whose kth row is the kth column of the original matrix. If A is (n × K), A′ is (K × n).

A = \begin{pmatrix} 1 & 2 & 3 \\ 5 & 1 & 5 \\ 6 & 4 & 5 \\ 3 & 1 & 4 \end{pmatrix}, \qquad A' = \begin{pmatrix} 1 & 5 & 6 & 3 \\ 2 & 1 & 4 & 1 \\ 3 & 5 & 5 & 4 \end{pmatrix}

Properties:
(A′)′ = A
(αA)′ = αA′ for any scalar α
(A + B)′ = A′ + B′
If A is symmetric, A = A′.
The transpose of a column vector a is a row vector.
Inner Product and Matrix Multiplication

The inner product (or dot product) of a row vector a′ and a column vector b is a scalar:

a′b = a_1 b_1 + a_2 b_2 + ··· + a_n b_n

To multiply matrix A by matrix B to form the product AB, the column dimension of A must equal the row dimension of B.
Let A be an (n × K) matrix and let B be a (K × m) matrix. The product matrix

C = AB

is an (n × m) matrix whose ijth element is the inner product of row i of A and column j of B.
Matrix Multiplication

Let

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{pmatrix}

A cross table might help to multiply the matrices and to determine the product matrix C: write B above, A to the left, and each element c_{ij} of C appears at the intersection of row i of A and column j of B.

For instance, to find c_{23} (c_{ij}), we calculate the inner product of the 2nd (ith) row of matrix A and the 3rd (jth) column of matrix B, that is

c_{23} = a_{21} b_{13} + a_{22} b_{23}
Matrix Multiplication

C = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} & a_{11}b_{13} + a_{12}b_{23} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} & a_{21}b_{13} + a_{22}b_{23} \end{pmatrix}
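The (2 × 2) times (2 × 3) product above can be verified numerically; a short sketch with concrete numbers of my own choosing:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])            # (2 × 2)
B = np.array([[5, 6, 7],
              [8, 9, 10]])        # (2 × 3)

C = A @ B                         # (2 × 3) product matrix

# c_23 is the inner product of row 2 of A and column 3 of B
c23 = A[1, 0] * B[0, 2] + A[1, 1] * B[1, 2]
print(C[1, 2] == c23)             # prints True
print(C)                          # [[21 24 27], [47 54 61]]
```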
Matrix Multiplication

IA = AI = A
0A = A0 = 0
AB ≠ BA (in general)
(α + β)A = αA + βA for scalars α and β
α(A + B) = αA + αB
(αβ)A = α(βA)
α(AB) = (αA)B = A(αB) = (AB)α
(AB)C = A(BC)
A(B + C) = AB + AC
(A + B)C = AC + BC
(A + B)(C + D) = AC + AD + BC + BD
(AB)′ = B′A′
(ABC)′ = C′B′A′
Trace

The trace of a matrix is a very simple operation defined only for square matrices.
For an (n × n) matrix A, the trace of A, denoted tr(A), is the sum of its diagonal elements. Mathematically,

tr(A) = \sum_{i=1}^{n} a_{ii}

Properties:
tr(I_n) = n
tr(A′) = tr(A)
tr(A + B) = tr(A) + tr(B)
tr(αA) = α tr(A) for any scalar α
tr(AB) = tr(BA)
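The trace properties can be spot-checked numerically; a sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])
B = np.array([[1., 4.], [5., 6.]])
alpha = 2.0

print(np.trace(np.eye(4)))                                      # tr(I_n) = n: prints 4.0
print(np.isclose(np.trace(A.T), np.trace(A)))                   # prints True
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))   # prints True
print(np.isclose(np.trace(alpha * A), alpha * np.trace(A)))     # prints True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))             # prints True
```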
Inverse

The notion of a matrix inverse is very important for square matrices.
An (n × n) matrix A has an inverse, denoted A⁻¹, provided that A⁻¹A = I_n and AA⁻¹ = I_n. In this case, A is said to be invertible or nonsingular. Otherwise, it is said to be noninvertible or singular. If an inverse exists, it is unique.

Properties:
(αA)⁻¹ = (1/α)A⁻¹ if α ≠ 0 and A is invertible.
(AB)⁻¹ = B⁻¹A⁻¹ if A and B are both (n × n) and invertible.
(A′)⁻¹ = (A⁻¹)′
(A⁻¹)⁻¹ = A
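A numerical sketch of the inverse and its properties, using `np.linalg.inv` on two invertible example matrices (my own choices):

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])   # det = 5, invertible
B = np.array([[1., 1.], [0., 2.]])   # det = 2, invertible

A_inv = np.linalg.inv(A)

print(np.allclose(A_inv @ A, np.eye(2)))                    # A^-1 A = I: prints True
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ A_inv))                # (AB)^-1 = B^-1 A^-1: True
print(np.allclose(np.linalg.inv(A.T), A_inv.T))             # (A')^-1 = (A^-1)': True
print(np.allclose(np.linalg.inv(A_inv), A))                 # (A^-1)^-1 = A: True
```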
Linear Independence

Let {x_1, x_2, ..., x_r} be a set of (n × 1) vectors. These are linearly independent vectors if, and only if,

α_1 x_1 + α_2 x_2 + ... + α_r x_r = 0   (*)

implies that α_1 = α_2 = ... = α_r = 0.
If (*) holds for a set of scalars that are not all zero, then {x_1, x_2, ..., x_r} is linearly dependent. At least one vector in this set can be written as a linear combination of the others.
Rank of a Matrix

(a) Let A be an (n × K) matrix. The column (row) rank of a matrix is the maximum number of linearly independent columns (rows) of A.
(b) It can be shown that the column rank of a matrix always equals the row rank. Therefore, it is enough to talk about the rank of a matrix, denoted rank(A).

Properties:
rank(A′) = rank(A)
rank(A′A) = rank(AA′) = rank(A)
rank(I_n) = n
If A is (n × K), rank(A) ≤ min(n, K).
An (n × K) matrix A has full rank if rank(A) = min(n, K).
A square matrix with full rank is defined to be a regular matrix; otherwise it is singular. A regular matrix is invertible.
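Linear dependence and the rank properties can be illustrated with `np.linalg.matrix_rank` (example matrices are my own):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],      # row 2 = 2 * row 1 -> linearly dependent
              [0., 1., 1.]])

print(np.linalg.matrix_rank(A))  # only two linearly independent rows: prints 2

X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])         # (3 × 2) with full column rank

r = np.linalg.matrix_rank(X)
print(r == min(3, 2))                                  # full rank: prints True
print(np.linalg.matrix_rank(X.T @ X) == r)             # rank(X'X) = rank(X): True
print(np.linalg.matrix_rank(X.T) == r)                 # rank(X') = rank(X): True
```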
Quadratic Forms

Let A be an (n × n) square matrix. The quadratic form associated with the matrix A is the real-valued function b′Ab defined for all (1 × n) vectors b′ = [b_1 b_2 ... b_n]:

b'Ab = b'[Ab] = [b_1 \; b_2 \; \ldots \; b_n] \begin{pmatrix} a_{11}b_1 + a_{12}b_2 + \ldots + a_{1n}b_n \\ a_{21}b_1 + a_{22}b_2 + \ldots + a_{2n}b_n \\ \vdots \\ a_{n1}b_1 + a_{n2}b_2 + \ldots + a_{nn}b_n \end{pmatrix}

= b_1(a_{11}b_1 + a_{12}b_2 + \ldots + a_{1n}b_n) + b_2(a_{21}b_1 + a_{22}b_2 + \ldots + a_{2n}b_n) + \ldots + b_n(a_{n1}b_1 + a_{n2}b_2 + \ldots + a_{nn}b_n)

= \sum_{i=1}^{n} b_i (a_{i1}b_1 + a_{i2}b_2 + \ldots + a_{in}b_n) = \sum_{i=1}^{n} b_i \sum_{j=1}^{n} a_{ij} b_j = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_i b_j
Positive Definite and Positive Semi-Definite Matrices

(a) A square matrix A is said to be positive definite if b′Ab > 0 for all (n × 1) vectors b except b = 0.
(b) A square matrix A is said to be positive semi-definite if b′Ab ≥ 0 for all (n × 1) vectors b.
Positive Definite and Positive Semi-Definite Matrices

Properties:
Let A be an (n × K) matrix: A′A and AA′ are positive semi-definite.
Let A be an (n × K) matrix with rank(A) = K: A′A is always positive definite and therefore nonsingular.
Let A be a positive definite matrix: A⁻¹ exists and is also positive definite.
Let A be a positive definite (n × n) matrix and B an (n × K) matrix with rank(B) = K: the (K × K) matrix B′AB is positive definite.
Any positive definite (K × K) matrix C has rank(C) = K.
For any two regular matrices of the same order: A − B positive definite ⇔ B⁻¹ − A⁻¹ positive definite.
Let A be a positive definite matrix. There is at least one regular matrix B such that B′B = A⁻¹.
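A numerical sketch of a quadratic form and of positive definiteness. For a symmetric matrix, positive definiteness is equivalent to all eigenvalues being strictly positive; this eigenvalue criterion is a standard result assumed here, not stated on the slides, and the matrices are my own examples:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])          # symmetric

b = np.array([3., -1.])
q = b @ A @ b                     # quadratic form b'Ab = sum_i sum_j a_ij b_i b_j
print(q)                          # 2*9 + 1*(-3) + 1*(-3) + 2*1 = 14.0

# Eigenvalue criterion (assumed): symmetric A is positive definite
# iff all eigenvalues are > 0.
print(np.all(np.linalg.eigvalsh(A) > 0))          # prints True

X = np.array([[1., 0.], [1., 1.], [1., 2.]])      # (3 × 2), rank(X) = K = 2
print(np.all(np.linalg.eigvalsh(X.T @ X) > 0))    # X'X positive definite: True
```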
Idempotent Matrices

Let A be an (n × n) symmetric matrix. Then A is said to be an idempotent matrix if, and only if, AA = A.

Properties:
rank(A) = tr(A)
A is positive semi-definite.

We can construct idempotent matrices very generally. Let X be an (n × K) matrix with rank(X) = K. Define:

P ≡ X(X′X)⁻¹X′
M ≡ I_n − X(X′X)⁻¹X′ = I_n − P

Then P and M are symmetric, idempotent matrices with rank(P) = K and rank(M) = n − K.
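The construction of P and M can be checked directly; a sketch using a random X (with random continuous entries, rank(X) = K holds almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 6, 2
X = rng.normal(size=(n, K))       # assume rank(X) = K

P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

print(np.allclose(P @ P, P))      # idempotent PP = P: prints True
print(np.allclose(M @ M, M))      # prints True
print(np.allclose(P, P.T))        # symmetric: prints True
print(round(np.trace(P)))         # rank(P) = tr(P) = K: prints 2
print(round(np.trace(M)))         # rank(M) = n - K: prints 4
```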
Differentiation of Linear and Quadratic Forms

For a given (n × 1) vector a, consider the linear function defined by

f(x) = a′x

for all (n × 1) vectors x. The derivative of f with respect to x is the (1 × n) vector of partial derivatives, which is simply

∂f(x)/∂x = a′

For an (n × n) symmetric matrix A, define the quadratic form

g(x) = x′Ax

Then,

∂g(x)/∂x = 2x′A

which is a (1 × n) vector.
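Both derivative formulas can be verified against central finite differences; a numerical sketch (the finite-difference helper and the random a, A, x are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = rng.normal(size=n)
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                 # make A symmetric
x = rng.normal(size=n)

def grad_numeric(func, x, h=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (func(x + e) - func(x - e)) / (2 * h)
    return g

# f(x) = a'x  ->  gradient a'
print(np.allclose(grad_numeric(lambda v: a @ v, x), a, atol=1e-4))        # True

# g(x) = x'Ax  ->  gradient 2x'A
print(np.allclose(grad_numeric(lambda v: v @ A @ v, x), 2 * x @ A, atol=1e-4))  # True
```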
Moving on to ...

1. Review of Matrix Algebra
   Basic Definitions
   Matrix Operations
   Linear Independence and Rank of a Matrix
   Quadratic Forms and Positive Definite Matrices
   Idempotent Matrices
   Differentiation of Linear and Quadratic Forms

2. Review of Probability
   Random Variables and Probability Distributions
   Expected Values, Mean and Variance
   Two Random Variables
   The Normal Distribution
   Random Sampling and the Sample Distribution
Random Variables and Probability Distributions

Most aspects of the world around us have an element of randomness. The theory of probability provides mathematical tools for quantifying and describing this randomness.
The mutually exclusive potential results of a random process are called the outcomes.
The probability of an outcome is the proportion of the time that the outcome occurs in the long run.
The set of all possible outcomes is called the sample space. An event is a subset of the sample space, that is, an event is a set of one or more outcomes.
A random variable is a numerical summary of a random outcome.
A discrete random variable takes on only a discrete set of values.
A continuous random variable takes on a continuum of possible values.
Random Variables and Probability Distributions

Example:
Let u_1, u_2 and u_3 be random variables.
A random experiment: a die is rolled once.
u_1 = "The number of dots on the face which turns up"
has six possible outcomes. Each outcome has a probability of 1/6.
u_2 = "The sum of the numbers of dots when a die is rolled twice"
has eleven possible outcomes (N = 11).
Probability of outcome 2: f(2) = (1/6)(1/6) = 1/36
Probability of outcome 3: f(3) = 2(1/6)(1/6) = 2/36
u_3 = "A real number in the interval [0,1]"
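The long-run interpretation of these probabilities can be illustrated by simulation; a sketch (sample size and seed are arbitrary choices):

```python
import random

random.seed(42)
T = 100_000

# u1: one die roll -- each face should occur about 1/6 of the time
rolls = [random.randint(1, 6) for _ in range(T)]
for face in range(1, 7):
    print(face, round(rolls.count(face) / T, 3))   # each close to 0.167

# u2: sum of two dice -- outcome 3 can occur two ways, so f(3) = 2/36
sums = [random.randint(1, 6) + random.randint(1, 6) for _ in range(T)]
print(round(sums.count(3) / T, 3))                 # close to 2/36 ≈ 0.056
```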
Random Variables and Probability Distributions

The probability distribution of a discrete random variable is the list of all possible values of the variable and the probability that each value will occur. These probabilities sum to 1. For continuous random variables, probability is summarized by a probability density function.

Figure: Probability distribution for a discrete (a) and for a continuous variable (b)

The probability of an event can be computed from the probability distribution. The cumulative probability distribution is the probability that a random variable is less than or equal to a particular value.
Expected Values

The expected value of a random variable u, denoted E(u), is the long-run average value of the random variable over many repeated trials or occurrences.
The expected value of a discrete random variable is computed as a weighted average of the possible outcomes of that random variable, where the weights are the probabilities of each outcome:

E(u) = \sum_{i=1}^{N} f(u_i) u_i

The expected value of u is also called the expectation of u or the mean of u and is denoted by μ_u.
The Standard Deviation and Variance

The variance and standard deviation measure the dispersion or the "spread" of a probability distribution.
The variance of a random variable u, denoted var(u), is the expected value of the squared deviation of u from its mean:

var(u) = E[(u − μ_u)²] = \sum_{i=1}^{N} f(u_i)(u_i − E(u))²

The standard deviation is the square root of the variance:

sd(u) = \sqrt{var(u)}
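For the single die roll u_1 from the earlier example, the weighted-average formulas give E(u) = 3.5 and var(u) = 35/12; a short sketch:

```python
# Discrete random variable: u1 = dots from one die roll
outcomes = [1, 2, 3, 4, 5, 6]
f = [1 / 6] * 6                       # probabilities, summing to 1

E_u = sum(p * u for p, u in zip(f, outcomes))             # E(u) = sum f(u_i) u_i
var_u = sum(p * (u - E_u) ** 2 for p, u in zip(f, outcomes))
sd_u = var_u ** 0.5                    # sd(u) = sqrt(var(u))

print(E_u)               # prints 3.5
print(round(var_u, 4))   # 35/12: prints 2.9167
```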
Joint and Conditional Distributions

Example: A die is rolled once.
u_1 = "The number of dots on the face which turns up"
u_4 = "The number of natural numbers by which the number of dots on the die is divisible"

u_1:  1  2  3  4  5  6
u_4:  1  2  2  3  2  4

Table: Outcomes of random variables u_1 and u_4
Joint and Conditional Distributions

The distribution of a random variable (u_1) conditional on another random variable (u_4) taking on a specific value is called the conditional distribution of u_1 given u_4, denoted f(u_1i | u_4j).

f(u_1 = 3 | u_4 = 2) = 1/3
f(u_1 = 1 | u_4 = 2) = 0

The joint probability distribution of two discrete random variables, say u_1 and u_4, is the probability that the random variables simultaneously take on certain values, say i and j; denoted f(u_1i, u_4j).
In our example, there are 6 × 4 = 24 possible outcomes for the joint distribution of u_1 and u_4.

f(u_1i, u_4j) = f(u_1i | u_4j) · f(u_4j) = f(u_4j | u_1i) · f(u_1i)
Joint and Conditional Distributions

f(u_1i, u_4j) = f(u_1i | u_4j) · f(u_4j) = f(u_4j | u_1i) · f(u_1i)
f(u_1 = 3, u_4 = 2) = f(u_1 = 3 | u_4 = 2) · f(u_4 = 2) = (1/3) · (1/2) = 1/6

u_4j \ u_1i:   1     2     3     4     5     6
1:            1/6    0     0     0     0     0
2:             0    1/6   1/6    0    1/6    0
3:             0     0     0    1/6    0     0
4:             0     0     0     0     0    1/6

Table: Joint distribution of random variables u_1 and u_4
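The joint table and the conditional probabilities for this example can be reproduced exactly with stdlib fractions; a sketch:

```python
from fractions import Fraction

# u1: die roll; u4: number of natural numbers dividing u1
u4_of = {1: 1, 2: 2, 3: 2, 4: 3, 5: 2, 6: 4}

# Joint distribution f(u1, u4): each of the six die outcomes has probability 1/6
joint = {(u1, u4): Fraction(0) for u1 in range(1, 7) for u4 in range(1, 5)}
for u1, u4 in u4_of.items():
    joint[(u1, u4)] = Fraction(1, 6)

# Marginal f(u4 = 2) and conditional f(u1 | u4 = 2) = f(u1, u4) / f(u4)
f_u4_2 = sum(p for (u1, u4), p in joint.items() if u4 == 2)
print(f_u4_2)                     # u4 = 2 for u1 in {2, 3, 5}: prints 1/2
print(joint[(3, 2)] / f_u4_2)     # f(u1 = 3 | u4 = 2): prints 1/3
print(joint[(1, 2)] / f_u4_2)     # f(u1 = 1 | u4 = 2): prints 0
```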
Independence

Two random variables are independently distributed, or independent, if knowing the value of one of the variables provides no information about the other.
u_1 and u_2 are independent if the conditional distribution of u_1 given u_2 equals the marginal distribution of u_1:

f(u_1 | u_2) = f(u_1)

Example: Two dice are rolled once simultaneously.
u_1 = "The number of dots on the face which turns up for the first die."
u_2 = "The number of dots on the face which turns up for the second die."
Independence

Using the definition of the joint distribution,

f(u_1, u_2) = f(u_1 | u_2) · f(u_2),

gives an alternative expression for independent random variables.
If u_1 and u_2 are independent, then

f(u_1, u_2) = f(u_1) · f(u_2)

Example: Two dice are rolled once simultaneously.
f(u_1 = 2, u_2 = 5) = f(u_1) · f(u_2) = (1/6)(1/6) = 1/36
Covariance

One measure of the extent to which two random variables move together is their covariance.
The covariance between u_1 and u_2 is

cov(u_1, u_2) = E[(u_1 − E(u_1))(u_2 − E(u_2))]

cov(u_1, u_2) = \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} f(u_{1i}, u_{2j})[(u_{1i} − E(u_1))(u_{2j} − E(u_2))]

Covariance is a measure of the linear relationship between two random variables.
If they tend to move in the same (opposite) direction, the covariance is positive (negative).
If u_1 and u_2 are independent, then the covariance is 0.
Correlation

The units of covariance are, awkwardly, the units of u_1 times the units of u_2. This "units" problem can make numerical values of the covariance difficult to interpret.
The correlation is an alternative measure of dependence between two random variables that solves the "units" problem.
The correlation between u_1 and u_2 is

cor(u_1, u_2) = \frac{cov(u_1, u_2)}{sd(u_1) \, sd(u_2)} = \frac{cov(u_1, u_2)}{\sqrt{var(u_1) \, var(u_2)}}

The correlation is always between −1 and 1, and is usually represented by ρ:

−1 ≤ ρ ≤ 1
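For the die example (u_1 and u_4 from the joint-distribution slide), the covariance and correlation can be computed exactly; a sketch using stdlib fractions:

```python
from fractions import Fraction

# u1: die roll; u4: number of divisors of u1 (from the earlier example)
u4_of = {1: 1, 2: 2, 3: 2, 4: 3, 5: 2, 6: 4}
p = Fraction(1, 6)                               # each die outcome has prob. 1/6

E1 = sum(p * u1 for u1 in u4_of)                 # E(u1) = 7/2
E4 = sum(p * u4 for u4 in u4_of.values())        # E(u4) = 7/3
cov = sum(p * (u1 - E1) * (u4 - E4) for u1, u4 in u4_of.items())

var1 = sum(p * (u1 - E1) ** 2 for u1 in u4_of)
var4 = sum(p * (u4 - E4) ** 2 for u4 in u4_of.values())
cor = float(cov) / (float(var1) * float(var4)) ** 0.5

print(cov)                 # positive: u1 and u4 tend to move together (prints 4/3)
print(-1 <= cor <= 1)      # prints True
```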
Means, Variances and Covariances of Sums of Random Variables

Let u_1 and u_2 be two random variables and let x_1 and x_2 be constants. The following facts follow from the definition of mean, variance and covariance:

Mean:
E(x_1) = x_1
E(x_1 · u_1) = x_1 · E(u_1)
E(u_1 + u_2) = E(u_1) + E(u_2)
E(x_1 + x_2 · u_2) = x_1 + x_2 · E(u_2)
E(E(u_1)) = E(u_1)

As a general rule:
E(u_1 · u_2) ≠ E(u_1) · E(u_2)
Only if the two random variables are uncorrelated or independent:
E(u_1 · u_2) = E(u_1) · E(u_2)
Means, Variances and Covariances of Sums of Random Variables

Let u_3 be another random variable. Define

u_3 = x_1 · u_1 + x_2 · u_2

Then the variance of u_3 is:

var(u_3) = x_1² · var(u_1) + x_2² · var(u_2) + 2 x_1 x_2 cov(u_1, u_2)

For the special case u_1 = 1 (var(u_1) = 0 and cov(u_1, u_2) = 0):

u_3 = x_1 + x_2 · u_2

and the variance of u_3 is:

var(u_3) = x_2² · var(u_2)
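The variance formula for a linear combination can be checked by simulation; a sketch with two independent dice, so cov(u_1, u_2) ≈ 0 and var(u_3) ≈ x_1² var(u_1) + x_2² var(u_2):

```python
import random

random.seed(0)
x1, x2 = 2.0, 3.0
T = 200_000

# Two independent dice -> cov(u1, u2) = 0 in the population
u1 = [random.randint(1, 6) for _ in range(T)]
u2 = [random.randint(1, 6) for _ in range(T)]
u3 = [x1 * a + x2 * b for a, b in zip(u1, u2)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

lhs = var(u3)
rhs = x1 ** 2 * var(u1) + x2 ** 2 * var(u2)   # + 2 x1 x2 cov, which is ~0 here
print(abs(lhs - rhs) / rhs < 0.05)            # prints True (up to simulation noise)
```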
Means, Variances and Covariances of Sums of Random Variables

Let u_4 be another random variable.
The covariance between u_3 and u_4:

cov(u_3, u_4) = cov(x_1 · u_1 + x_2 · u_2, u_4) = x_1 cov(u_1, u_4) + x_2 cov(u_2, u_4)

Covariances are always symmetric:

cov(u_3, u_4) = cov(u_4, u_3)
The Normal Distribution

A continuous random variable with a normal distribution has the familiar bell-shaped probability density.
A normally distributed random variable u with expected value E(u) and variance var(u) can be expressed as

u ∼ N(E(u), var(u))

The standard normal distribution is the normal distribution with expected value E(u) = 0 and variance var(u) = 1, denoted N(0, 1).
Random variables that have a N(0, 1) distribution are often denoted by Z.
To compute probabilities for a normal variable with a general mean and variance, it must first be standardized: subtract the mean, then divide the result by the standard deviation:

z = \frac{u − E(u)}{sd(u)}
Random Sampling and the Distribution of the Sample Average

We are going to deal with various "samples of data" during our course.
Random sampling is the act of randomly drawing a sample from a larger population. This has the effect of making the sample average itself a random variable.
Assume, for a random variable x, we have a random sample of T observations: x_1, x_2, ..., x_T.
The sample mean is:

\bar{x} = \frac{1}{T} \sum_{t=1}^{T} x_t
Random Sampling and the Sample Distribution

The sample variance is:

\widehat{var}(x) = \frac{1}{T−1} \sum_{t=1}^{T} (x_t − \bar{x})² = \frac{1}{T−1} S_{xx}

where S_{xx} = \sum_{t=1}^{T} (x_t − \bar{x})².

The sample standard deviation is then:

\widehat{sd}(x) = \sqrt{\widehat{var}(x)}
Random Sampling and the Sample Distribution

Assume we have a random sample of T observations for random variables x and y: (x_1, y_1), (x_2, y_2), ..., (x_T, y_T).
The sample covariance of x and y is:

\widehat{cov}(x, y) = \frac{1}{T−1} \sum_{t=1}^{T} (x_t − \bar{x})(y_t − \bar{y}) = \frac{1}{T−1} S_{xy}

where S_{xy} = \sum_{t=1}^{T} (x_t − \bar{x})(y_t − \bar{y}).

Finally, the sample correlation coefficient between x and y follows:

\widehat{cor}(x, y) = \frac{\widehat{cov}(x, y)}{\widehat{sd}(x) \, \widehat{sd}(y)} = \frac{S_{xy}/(T−1)}{\sqrt{S_{xx}/(T−1)} \sqrt{S_{yy}/(T−1)}} = \frac{S_{xy}}{\sqrt{S_{xx}} \sqrt{S_{yy}}}
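All of the sample statistics above follow the same pattern; a minimal sketch on a tiny made-up sample (the data are arbitrary):

```python
import math

x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 2.0, 6.0]
T = len(x)

xbar = sum(x) / T                     # sample mean
ybar = sum(y) / T

Sxx = sum((xt - xbar) ** 2 for xt in x)
Syy = sum((yt - ybar) ** 2 for yt in y)
Sxy = sum((xt - xbar) * (yt - ybar) for xt, yt in zip(x, y))

var_x = Sxx / (T - 1)                 # sample variance, divisor T - 1
sd_x = math.sqrt(var_x)               # sample standard deviation
cov_xy = Sxy / (T - 1)                # sample covariance
cor_xy = Sxy / math.sqrt(Sxx * Syy)   # sample correlation: the (T - 1)'s cancel

print(xbar)                # prints 5.0
print(round(var_x, 4))     # 20/3: prints 6.6667
print(-1 <= cor_xy <= 1)   # prints True
```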