COSC 6221: Statistical Signal Processing Theory
Assignment # 4: Random Vectors, Karhunen Loeve Transform, and Maximum Likelihood Estimation
Due Date: October 29, 2003
In the last two weeks, we extended our discussion of random variables to random vectors, the Karhunen-Loeve Transform (KLT), and maximum likelihood estimation. A random vector is defined as an $(n \times 1)$ column vector $\mathbf{X} = [X_1\ X_2\ \cdots\ X_n]^T$ whose elements $X_i$ are random variables. This notation provides a compact representation for multiple random variables, their joint probability density function, and the associated statistics. A consequence of the vector notation is that the covariance $\mathbf{K} = E\{\mathbf{X}\mathbf{X}^T\}$ of the random vector $\mathbf{X}$ is now an $(n \times n)$ positive definite, symmetric matrix. When the covariance matrix $\mathbf{K}$ is diagonal, the random variables in the random vector are uncorrelated. We illustrated how the Karhunen-Loeve Transform can be used to diagonalize any given covariance matrix $\mathbf{K}$. In the later part of the week, we focused on the derivation of maximum likelihood estimators, covering the properties of a good estimator such as unbiasedness, consistency, minimum variance, and minimum mean square error (MSE).
Please review chapter 5 from the Woods text before attempting the assignment.
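The diagonalization step described above can be sketched numerically; this is a minimal illustration with an arbitrary example covariance (not one from the assignment):

```python
import numpy as np

# A minimal numerical sketch of diagonalizing a covariance matrix with
# the KLT; the matrix below is an arbitrary example.
K = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# KLT: eigendecomposition of the symmetric covariance matrix.
eigvals, E = np.linalg.eigh(K)   # columns of E are orthonormal eigenvectors

# Transforming X -> Y = E^T X gives cov(Y) = E^T K E = diag(eigvals),
# so the components of Y are uncorrelated.
K_Y = E.T @ K @ E
print(np.round(K_Y, 10))
```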
1.
(PDF) Let $f_X(\mathbf{x})$ be the pdf given by

$$f_X(\mathbf{x}) = K e^{-\boldsymbol{\lambda}^T \mathbf{x}}\, U(\mathbf{x})$$

where $\boldsymbol{\lambda} = [\lambda_1, \lambda_2, \ldots, \lambda_n]^T$ with $\lambda_i > 0$ for all $i$; $\mathbf{X} = [X_1\ X_2\ \cdots\ X_n]^T$; and

$$U(\mathbf{x}) = \begin{cases} 1 & \text{if } x_i \ge 0,\ i = 1, \ldots, n \\ 0 & \text{elsewhere.} \end{cases}$$

What value of $K$ will enable $f_X(\mathbf{x})$ to be a pdf?
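Over the positive orthant the integral of $e^{-\boldsymbol{\lambda}^T \mathbf{x}}$ factors into $n$ one-dimensional integrals, so the normalizing constant is the reciprocal of their product. A hedged numerical sketch (the $\lambda_i$ values are arbitrary example choices):

```python
import numpy as np

# Hedged numerical sketch for Problem 1: the normalizing constant K is
# the reciprocal of the product of the 1-D integrals of exp(-lam_i * x)
# over [0, inf). The lam values below are arbitrary examples.
lam = np.array([1.0, 2.0, 0.5])

# Midpoint-rule quadrature on [0, 40]; the tail beyond is negligible.
dx = 1e-3
x = dx * (np.arange(40_000) + 0.5)
one_dim = [np.sum(np.exp(-l * x)) * dx for l in lam]

K = 1.0 / np.prod(one_dim)
print(K)   # compare with the closed form you derive in Problem 1
```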
2.
(Gaussian Random Vector) For $\sigma_i > 0$, $i = 1, \ldots, n$, let the probability density function (pdf) of $\mathbf{X}$ be

$$f_X(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} \prod_{i=1}^{n} \sigma_i} \exp\left\{ -\frac{1}{2} \sum_{i=1}^{n} \frac{x_i^2}{\sigma_i^2} \right\}.$$

Show that all marginal pdf's are Gaussian.
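The claim can be checked numerically for $n = 2$ by integrating the joint pdf over $x_2$ and comparing against the Gaussian marginal of $X_1$; a hedged sketch with arbitrary example $\sigma_i$:

```python
import numpy as np

# Hedged sketch for Problem 2 with n = 2: integrate the joint pdf over
# x2 numerically and compare with the Gaussian marginal of X1.
s1, s2 = 1.0, 2.0                                 # arbitrary example sigmas

x1 = np.linspace(-8.0, 8.0, 401)
dx2 = 0.008
x2 = -16.0 + dx2 * (np.arange(4000) + 0.5)        # midpoint grid on [-16, 16]
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

joint = np.exp(-0.5 * (X1**2 / s1**2 + X2**2 / s2**2)) / (2 * np.pi * s1 * s2)

marginal = joint.sum(axis=1) * dx2                # integrate out x2
expected = np.exp(-0.5 * x1**2 / s1**2) / (np.sqrt(2 * np.pi) * s1)
print(np.max(np.abs(marginal - expected)))        # ~0: the marginal is Gaussian
```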
3.
(Covariance Matrix) Explain which of the following matrices can be covariance matrices of real-valued random vectors.

$$\text{(a)}\ \begin{bmatrix} 2 & 4 & 0 \\ 4 & 3 & 1 \\ 0 & 1 & 2 \end{bmatrix} \qquad \text{(b)}\ \begin{bmatrix} 4 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 9 \end{bmatrix} \qquad \text{(c)}\ \begin{bmatrix} 6 & 1+j \\ 1-j & 5 \end{bmatrix} \qquad \text{(d)}\ \begin{bmatrix} 4 & 1 & 2 \\ 1 & 6 & 9 \\ 2 & 9 & 16 \end{bmatrix}$$
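A candidate matrix qualifies only if it is real, symmetric, and positive semidefinite; a minimal sketch of that check (the example matrices are arbitrary, not the assignment's):

```python
import numpy as np

# Hedged sketch for Problem 3: a covariance matrix of a real-valued
# random vector must be real, symmetric, and positive semidefinite.
def could_be_covariance(M, tol=1e-10):
    M = np.asarray(M)
    if np.iscomplexobj(M) and np.max(np.abs(M.imag)) > tol:
        return False                                # not real-valued
    M = M.real
    if not np.allclose(M, M.T, atol=tol):
        return False                                # not symmetric
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)  # PSD test

# Arbitrary examples (not the assignment's matrices):
print(could_be_covariance([[4.0, 0.0], [0.0, 9.0]]))   # True
print(could_be_covariance([[1.0, 2.0], [2.0, 1.0]]))   # False: eigenvalue -1
```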
4.
(Diagonalization) Let $K_1$ and $K_2$ be positive definite covariance matrices in the expression

$$K = a_1 K_1 + a_2 K_2, \qquad a_1, a_2 > 0.$$

Let $A$ be a transformation that achieves

$$A^T K A = I.$$

(a) Show that $A$ satisfies $K^{-1} K_1 A = A \Lambda^{(1)}$, where $A^T K_1 A = \Lambda^{(1)} = \mathrm{diag}\bigl(\lambda_1^{(1)}, \ldots, \lambda_n^{(1)}\bigr)$.

(b) Show that $A^T K_2 A = \Lambda^{(2)} = \mathrm{diag}\bigl(\lambda_1^{(2)}, \ldots, \lambda_n^{(2)}\bigr)$.

(c) Show that $A^T K_1 A$ and $A^T K_2 A$ share the same eigenvectors.

(d) Show that the eigenvalues of $\Lambda^{(2)}$ are related to the eigenvalues of $\Lambda^{(1)}$ as

$$\lambda_i^{(2)} = \frac{1}{a_2}\left[1 - a_1 \lambda_i^{(1)}\right]$$

and are in inverse order from those in $\Lambda^{(1)}$.
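The construction above can be sketched numerically: whiten $K$, then rotate by the eigenvectors of the whitened $K_1$. The matrices and weights below are arbitrary positive definite examples:

```python
import numpy as np

# Hedged numerical sketch of Problem 4: build a transformation A with
# A^T K A = I that diagonalizes K1 and K2 simultaneously, and check
# lambda_i^(2) = (1/a2) [1 - a1 * lambda_i^(1)].
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((2, 3, 3))
K1 = B1 @ B1.T + 3.0 * np.eye(3)        # positive definite by construction
K2 = B2 @ B2.T + 3.0 * np.eye(3)
a1, a2 = 0.7, 1.3
K = a1 * K1 + a2 * K2

# Step 1: whiten K so that W^T K W = I.
d, U = np.linalg.eigh(K)
W = U / np.sqrt(d)

# Step 2: rotate by the eigenvectors of the whitened K1; a rotation
# preserves the identity, so A^T K A = I still holds.
lam1, V = np.linalg.eigh(W.T @ K1 @ W)
A = W @ V

lam2 = np.diag(A.T @ K2 @ A)            # Lambda^(2) (diagonal entries)
print(np.round(lam2 - (1.0 - a1 * lam1) / a2, 10))   # ~0: the relation holds
# eigh returns lam1 in ascending order, so lam2 comes out descending:
# the "inverse order" claimed in part (d).
```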
5.
(KLT Transformation) Two jointly Gaussian random variables $X_1$ and $X_2$ have the joint pdf

$$f_{X_1 X_2}(x_1, x_2) = \frac{2}{\pi\sqrt{7}} \exp\left\{ -\frac{8}{7}\left( x_1^2 - \frac{3}{2} x_1 x_2 + x_2^2 \right) \right\}.$$

Find a nontrivial transformation

$$\begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} = A \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$

that makes $Y_1$ and $Y_2$ independent, and compute the joint pdf $f_{Y_1 Y_2}(y_1, y_2)$.
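A hedged numerical sketch of the KLT route to such an $A$. It assumes the covariance implied by the joint pdf is $K = \begin{bmatrix} 1 & 3/4 \\ 3/4 & 1 \end{bmatrix}$ (our reading of the exponent, which you should verify):

```python
import numpy as np

# Hedged sketch for Problem 5, assuming cov(X1, X2) = [[1, 3/4], [3/4, 1]].
# The rows of A may be taken as eigenvectors of K -- the KLT.
K = np.array([[1.0, 0.75],
              [0.75, 1.0]])

lam, E = np.linalg.eigh(K)
A = E.T                                  # Y = A X

K_Y = A @ K @ A.T                        # covariance of Y
print(np.round(K_Y, 10))                 # diag(1/4, 7/4): Y1, Y2 uncorrelated
```

For jointly Gaussian variables, uncorrelated components are also independent, which is what makes this eigenvector choice of $A$ work.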
6.
(Multivariate Gaussian) Show that if $\mathbf{X} = [X_1, X_2, \ldots, X_n]^T$ has mean $\boldsymbol{\mu} = [\mu_1, \mu_2, \ldots, \mu_n]^T$ and covariance $K = [K_{ij}]$, then the scalar random variable

$$Y = p_1 X_1 + \cdots + p_n X_n$$

has the following statistics:

$$E[Y] = \sum_{i=1}^{n} p_i \mu_i, \qquad \sigma_Y^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} p_i p_j K_{ij}.$$
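The two formulas reduce to $E[Y] = \mathbf{p}^T \boldsymbol{\mu}$ and $\sigma_Y^2 = \mathbf{p}^T K \mathbf{p}$; a hedged numerical check with arbitrary example values:

```python
import numpy as np

# Hedged numerical sketch for Problem 6: for Y = p^T X, E[Y] = p^T mu and
# var(Y) = sum_i sum_j p_i p_j K_ij = p^T K p. mu, K, p are arbitrary.
mu = np.array([1.0, -2.0, 0.5])
B = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.5, 1.0]])
K = B @ B.T                              # a valid covariance matrix
p = np.array([0.2, 0.5, -1.0])

mean_Y = p @ mu                          # sum_i p_i mu_i
var_Y = p @ K @ p                        # matrix form of the double sum
double_sum = sum(p[i] * p[j] * K[i, j] for i in range(3) for j in range(3))
print(mean_Y, var_Y, abs(var_Y - double_sum))   # the two forms agree
```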
7.
(Characteristic Function) Compute the joint characteristic function of $\mathbf{X} = [X_1, X_2, \ldots, X_n]^T$, where the random variables $X_i$, $i = 1, \ldots, n$, are mutually independent, identically distributed Cauchy random variables with distribution

$$f_{X_i}(x) = \frac{\alpha/\pi}{x^2 + \alpha^2}.$$

Use the result to compute the pdf of $Y = \sum_{i=1}^{n} X_i$.
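The result this problem leads to (a Cauchy pdf whose scale grows with $n$) can be sanity-checked by Monte Carlo: the interquartile range of a Cauchy$(0, s)$ variable is $2s$, and quantiles remain well behaved despite the heavy tails. A hedged sketch with $\alpha = 1$:

```python
import numpy as np

# Hedged Monte Carlo sketch for Problem 7 with alpha = 1: if the sum
# Y = X_1 + ... + X_n is Cauchy with scale n (the closed form the
# problem leads to), its interquartile range should be about 2 * n.
rng = np.random.default_rng(1)
n = 5
Y = rng.standard_cauchy((200_000, n)).sum(axis=1)

q25, q75 = np.quantile(Y, [0.25, 0.75])
print(q75 - q25)                         # close to 2 * n = 10
```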
8.
(Maximum Likelihood Estimator) Compute the MLE for the parameter $\lambda$ in the exponential pdf for $n$ independent observations of the random variable. Show that the likelihood function is indeed a maximum at the MLE value.
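A hedged numerical companion to the derivation, using the rate parameterization $f(x; \lambda) = \lambda e^{-\lambda x}$ (the parameterization and data below are our assumptions, not the assignment's):

```python
import numpy as np

# Hedged sketch for Problem 8 with f(x; lam) = lam * exp(-lam * x).
# The log-likelihood of n observations is n*log(lam) - lam*sum(x);
# the candidate MLE is 1 / mean(x).
rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=1000)     # true rate = 0.5

def log_lik(lam):
    return len(x) * np.log(lam) - lam * x.sum()

lam_hat = 1.0 / x.mean()

# The log-likelihood at lam_hat beats every value on a surrounding grid.
grid = np.linspace(0.05, 5.0, 1000)
print(lam_hat, bool(log_lik(lam_hat) >= log_lik(grid).max()))
```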
9.
(Maximum Likelihood Estimator) Compute the MLE for the parameter p (the probability of a success) in the binomial
pdf for n independent observations of the random variable.
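The same grid-check idea applies to Problem 9; a hedged sketch with arbitrary example counts ($n$ trials, $k$ successes):

```python
import numpy as np

# Hedged sketch for Problem 9: with k successes in n Bernoulli trials,
# the log-likelihood is k*log(p) + (n - k)*log(1 - p); the candidate
# MLE is p_hat = k / n. The counts are arbitrary examples.
n, k = 40, 13

def log_lik(p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

p_hat = k / n
grid = np.linspace(0.01, 0.99, 999)
print(p_hat, bool(log_lik(p_hat) >= log_lik(grid).max()))
```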