Gaussian Processes, Multivariate Probability Density Function,
Transforms
A real-valued random process X(t) is called a Gaussian process if all of
its nth-order joint probability density functions are n-variate Gaussian
pdfs. The nth-order joint probability density function of a Gaussian
vector
X = [X1 X2 ... Xn]T = [X(t1) X(t2) ... X(tn)]T
is given by
$$p(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n\,|\mathbf{C}_x|}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\mathbf{m}_x)^T \mathbf{C}_x^{-1}(\mathbf{x}-\mathbf{m}_x)\right)$$
where
x = [x1 x2 ... xn]T
mx = E[X] = [mx(t1) mx(t2) ... mx(tn)]T = mean vector
Cx = E[(X - mx)(X - mx)T] = covariance matrix
|Cx| = determinant of matrix Cx
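As a numerical sanity check, this formula can be evaluated directly and compared against a library implementation. The following is a minimal sketch (assuming NumPy and SciPy are available; the mean, covariance, and test point are arbitrary illustrative values, not from the text):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, m, C):
    """n-variate Gaussian pdf evaluated directly from the formula above."""
    n = len(m)
    d = x - m
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
    return np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / norm

# Arbitrary test case with n = 3
m = np.array([1.0, -2.0, 0.5])
C = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 1.5]])
x = np.array([0.8, -1.5, 0.0])

print(gaussian_pdf(x, m, C))                      # direct formula
print(multivariate_normal(mean=m, cov=C).pdf(x))  # SciPy reference, same value
```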
If random variables X(t1), X(t2), ..., X(tn) are uncorrelated, then the
values of the autocovariance function are given by
$$C_x(t_i, t_j) = E[(X(t_i) - m_x(t_i))(X(t_j) - m_x(t_j))] = \begin{cases} \sigma_x^2(t_i), & i = j \\ 0, & i \neq j \end{cases}$$
Thus, Cx is a diagonal matrix, and from this it follows that
$$\frac{1}{2}(\mathbf{x}-\mathbf{m}_x)^T \mathbf{C}_x^{-1}(\mathbf{x}-\mathbf{m}_x) = \sum_{k=1}^{n} \frac{(x_k - m_x(t_k))^2}{2\sigma_x^2(t_k)}$$
and
$$|\mathbf{C}_x| = \prod_{k=1}^{n} \sigma_x^2(t_k)$$
and the pdf can be factored into a product of n univariate Gaussian pdfs.
$$p(\mathbf{x}) = \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_x(t_k)}\, e^{-(x_k - m_x(t_k))^2 / 2\sigma_x^2(t_k)}$$
=>
If random variables X(t1), X(t2), ..., X(tn) from a Gaussian process are uncorrelated, then they are also statistically independent.
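This implication is special to Gaussian processes: for general random variables, uncorrelatedness does not imply independence. A small sampling sketch illustrating the contrast (NumPy assumed; the counterexample Y = X² − 1 is our own illustrative choice, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
X = rng.normal(size=N)

# Jointly Gaussian and uncorrelated => independent:
Y_gauss = rng.normal(size=N)      # independent Gaussian draw, cov = 0
# Uncorrelated but NOT jointly Gaussian (and not independent):
Y_dep = X**2 - 1                  # E[X * (X^2 - 1)] = E[X^3] = 0

for Y, label in [(Y_gauss, "Gaussian pair"), (Y_dep, "non-Gaussian pair")]:
    print(label,
          "cov:", np.mean(X * Y),                 # ~0 in both cases
          "E[X^2 Y]:", np.mean(X**2 * Y),         # matches product only if independent
          "E[X^2]E[Y]:", np.mean(X**2) * np.mean(Y))
```

For the Gaussian pair both moment expressions agree (independence); for the uncorrelated non-Gaussian pair they differ markedly.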
The n-variate Gaussian pdf is completely determined by its mean vector
and covariance matrix. If a Gaussian process is wide-sense stationary,
the mean mx(t) and autocovariance Cx(t, t+τ) do not depend on time t.
Thus the pdf of the process and the statistical properties derived from the
pdf are invariant over time.
=>
If a Gaussian process is wide-sense stationary, then the
process is also strictly stationary.
Besides this, it can also be shown that, under mild additional conditions,
a wide-sense stationary Gaussian process is also ergodic.
Another extremely important property of a Gaussian process is that any
linear operation on a Gaussian process X(t) produces another Gaussian
process.
=> linear filtering of Gaussian signals retains their Gaussianity
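As an illustrative sketch of this closure property (assuming NumPy and SciPy; the FIR coefficients are an arbitrary example of a linear operation), one can filter white Gaussian noise and check that the output remains Gaussian via its sample skewness and excess kurtosis:

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)  # white Gaussian input process

b = np.array([0.25, 0.5, 0.25])         # arbitrary FIR filter (a linear operation)
y = signal.lfilter(b, [1.0], x)         # filtered output

# A Gaussian output has zero skewness and zero excess kurtosis;
# the sample statistics should be near 0 up to Monte Carlo error.
print("skewness:", stats.skew(y))
print("excess kurtosis:", stats.kurtosis(y))
print("output variance:", y.var(), "vs predicted sum(b^2):", np.sum(b**2))
```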
Example 1:
Let us consider the two-dimensional case, i.e., n = 2:
$$\mathbf{x} = [x_1\ x_2]^T, \quad \mathbf{m}_x = [2\ 1]^T, \quad \mathbf{C}_x = \begin{bmatrix} 6 & 3 \\ 3 & 4 \end{bmatrix}$$
Then
$$\mathbf{C}_x^{-1} = \begin{bmatrix} \frac{4}{15} & -\frac{1}{5} \\[2pt] -\frac{1}{5} & \frac{2}{5} \end{bmatrix}$$
and
$$|\mathbf{C}_x| = 15$$
and further
$$(\mathbf{x}-\mathbf{m}_x)^T \mathbf{C}_x^{-1}(\mathbf{x}-\mathbf{m}_x) = \left[\tfrac{4}{15}(x_1-2) - \tfrac{1}{5}(x_2-1),\ -\tfrac{1}{5}(x_1-2) + \tfrac{2}{5}(x_2-1)\right] \begin{bmatrix} x_1-2 \\ x_2-1 \end{bmatrix}$$
$$= \left[\tfrac{4x_1}{15} - \tfrac{x_2}{5} - \tfrac{1}{3},\ -\tfrac{x_1}{5} + \tfrac{2x_2}{5}\right] \begin{bmatrix} x_1-2 \\ x_2-1 \end{bmatrix}$$
$$= \left(\tfrac{4x_1}{15} - \tfrac{x_2}{5} - \tfrac{1}{3}\right)(x_1-2) + \left(-\tfrac{x_1}{5} + \tfrac{2x_2}{5}\right)(x_2-1)$$
$$= \frac{2(-5x_1 + 2x_1^2 - 3x_1x_2 + 3x_2^2 + 5)}{15}$$
Thus, the pdf is given as
$$p(\mathbf{x}) = \frac{1}{2\pi\sqrt{15}} \exp\!\left((5x_1 - 2x_1^2 + 3x_1x_2 - 3x_2^2 - 5)/15\right)$$
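The hand-derived density can be verified numerically; a minimal sketch (assuming SciPy) comparing the closed form above with scipy.stats.multivariate_normal at an arbitrary test point:

```python
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([2.0, 1.0])
C = np.array([[6.0, 3.0],
              [3.0, 4.0]])

def pdf_by_hand(x1, x2):
    # Closed form derived above for this mean and covariance
    return np.exp((5*x1 - 2*x1**2 + 3*x1*x2 - 3*x2**2 - 5) / 15) / (2*np.pi*np.sqrt(15))

x = np.array([1.0, 2.0])                          # arbitrary test point
print(pdf_by_hand(*x))                            # hand-derived pdf
print(multivariate_normal(mean=m, cov=C).pdf(x))  # SciPy reference, same value
```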
Example 2:
Let us consider another two-dimensional case, i.e., n = 2:
$$\mathbf{x} = [x_1\ x_2]^T, \quad \mathbf{m}_x = [2\ 1]^T, \quad \mathbf{C}_x = \begin{bmatrix} 6 & 0 \\ 0 & 4 \end{bmatrix}$$
Then
$$\mathbf{C}_x^{-1} = \begin{bmatrix} \frac{1}{6} & 0 \\[2pt] 0 & \frac{1}{4} \end{bmatrix}$$
and
$$|\mathbf{C}_x| = 24$$
and further
$$(\mathbf{x}-\mathbf{m}_x)^T \mathbf{C}_x^{-1}(\mathbf{x}-\mathbf{m}_x) = \left[\frac{x_1-2}{6},\ \frac{x_2-1}{4}\right] \begin{bmatrix} x_1-2 \\ x_2-1 \end{bmatrix}$$
$$= \left(\frac{x_1}{6} - \frac{1}{3}\right)(x_1-2) + \left(\frac{x_2}{4} - \frac{1}{4}\right)(x_2-1)$$
$$= \frac{-8x_1 + 2x_1^2 - 6x_2 + 3x_2^2 + 11}{12}$$
Thus, the pdf is given as
$$p(\mathbf{x}) = \frac{1}{2\pi\sqrt{24}} \exp\!\left((8x_1 - 2x_1^2 + 6x_2 - 3x_2^2 - 11)/24\right)$$
$$= \frac{1}{\sqrt{2\pi \cdot 6}}\, e^{-(x_1-2)^2/12} \cdot \frac{1}{\sqrt{2\pi \cdot 4}}\, e^{-(x_2-1)^2/8}$$
Example 3:
Randomly phased sinusoid with AWGN
A random signal x(t) is given by
$$x(t) = A\cos(\omega_0 t + \theta)$$
where A and ω0 are constants and the phase θ is a uniformly distributed random variable with pdf
$$p(\theta) = \frac{1}{2\pi}, \qquad 0 \leq \theta \leq 2\pi$$
Let
$$y(t) = x(t) + n(t)$$
where n(t) is a zero-mean white Gaussian process with variance σ².
Find the joint pdf of Y1, Y2, ..., Yn, where Yi = y(ti).
Let us consider the case of a given value of the phase θ, in which case x(t) is a deterministic signal. Then
$$\mathbf{Y} = [Y_1, Y_2, \ldots, Y_n]^T$$
is a Gaussian random vector with mean
$$\mathbf{m}_x = [A\cos(\omega_0 t_1 + \theta),\ A\cos(\omega_0 t_2 + \theta),\ \ldots,\ A\cos(\omega_0 t_n + \theta)]^T$$
Since n(t) is white noise, the samples Y1, Y2, ..., Yn are uncorrelated and the conditional pdf of Y is given by
$$p(\mathbf{y}\,|\,\theta) = \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y_k - A\cos(\omega_0 t_k + \theta))^2 / 2\sigma^2}$$
$$= \frac{1}{(2\pi)^{n/2}\sigma^n} \exp\!\left(-\frac{1}{2\sigma^2} \sum_{k=1}^{n} (y_k - A\cos(\omega_0 t_k + \theta))^2\right)$$
$$= \frac{1}{(2\pi)^{n/2}\sigma^n} \exp\!\left(-\frac{1}{2\sigma^2} \left[\sum_{k=1}^{n} y_k^2 - 2A\sum_{k=1}^{n} \cos(\omega_0 t_k + \theta)\, y_k + A^2 \sum_{k=1}^{n} \cos^2(\omega_0 t_k + \theta)\right]\right)$$
To find the unconditional pdf of Y we should evaluate the integral
$$p(\mathbf{y}) = \int_{-\infty}^{\infty} p(\mathbf{y}, \theta)\, d\theta = \int_{-\infty}^{\infty} p(\theta)\, p(\mathbf{y}\,|\,\theta)\, d\theta = \frac{1}{2\pi} \int_{0}^{2\pi} p(\mathbf{y}\,|\,\theta)\, d\theta$$
$$= \frac{1}{(2\pi)^{1+n/2}\sigma^n} \int_{0}^{2\pi} \exp\!\left(-\frac{1}{2\sigma^2} \left[\sum_{k=1}^{n} y_k^2 - 2A\sum_{k=1}^{n} \cos(\omega_0 t_k + \theta)\, y_k + A^2 \sum_{k=1}^{n} \cos^2(\omega_0 t_k + \theta)\right]\right) d\theta$$
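The remaining integral has no simple closed form in general, but it is straightforward to evaluate numerically for given samples. A sketch (assuming SciPy; the values of A, ω0, σ, the sample times, and the observation vector are arbitrary illustrations):

```python
import numpy as np
from scipy.integrate import quad

A, w0, sigma = 1.0, 2 * np.pi, 0.5        # illustrative constants
t = np.array([0.0, 0.1, 0.2, 0.3])        # sample times t_1, ..., t_n
y = np.array([0.9, 0.7, 0.2, -0.4])       # an observed sample vector
n = len(t)

def p_y_given_theta(theta):
    """Conditional pdf p(y | theta) from the product form above."""
    m = A * np.cos(w0 * t + theta)
    return np.exp(-np.sum((y - m) ** 2) / (2 * sigma**2)) / ((2 * np.pi) ** (n / 2) * sigma**n)

# p(y) = (1/2pi) * integral over [0, 2pi] of p(y | theta) dtheta
integral, _ = quad(p_y_given_theta, 0.0, 2 * np.pi)
print("p(y) =", integral / (2 * np.pi))
```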
Let us consider a complex random variable Z = X + jY, where X and Y are independent Gaussian variables with the same variance σ². Then
$$m_z = m_x + j m_y$$
$$\sigma_z^2 = E[|Z - m_z|^2] = E[(X - m_x)^2 + (Y - m_y)^2] = \sigma_x^2 + \sigma_y^2 = 2\sigma^2$$
The second-order joint probability density function of X and Y is the
bivariate Gaussian pdf
$$p_{XY}(x, y) = p_X(x)\, p_Y(y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{(x - m_x)^2 + (y - m_y)^2}{2\sigma^2}\right)$$
$$= \frac{1}{\pi\sigma_z^2} \exp\!\left(-|z - m_z|^2 / \sigma_z^2\right) = p_Z(z)$$
We have found the pdf of a complex random variable Z.
If Z = X + jY is a complex random vector from a complex-valued
random process Z(t)
Z = [Z(t1) Z(t2) ... Z(tn)]T = [X1 X2 ... Xn]T + j[Y1 Y2 ... Yn]T
where X and Y are statistically independent and jointly distributed
according to a real multivariate Gaussian distribution, and the
covariance matrices of X and Y fulfill the conditions
$$\mathbf{C}_x = \mathbf{C}_y, \qquad \mathbf{C}_{xy} = \mathbf{C}_{yx}^T = \mathbf{0}$$
Under these conditions the nth-order joint probability density function of
a complex-valued Gaussian vector Z is given by
$$p_Z(\mathbf{z}) = \frac{1}{\pi^n |\mathbf{C}_z|} \exp\!\left(-(\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z)\right)$$
where
z = [z1 z2 ... zn]T
mz = E[Z] = [mz(t1) mz(t2) ... mz(tn)]T = mean vector
Cz = E[(Z - mz)(Z - mz)H] = 2Cx = covariance matrix
|Cz| = determinant of matrix Cz
[ ]H denotes the Hermitian operation, which is equivalent to
transposition and complex conjugation of a matrix
By using basic equations of matrix algebra, it can easily be seen that
$$|\mathbf{C}_z| = 2^n |\mathbf{C}_x|$$
and
$$\mathbf{C}_z^{-1} = \tfrac{1}{2} \mathbf{C}_x^{-1}$$
And further
$$(\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z) = (\mathbf{x} - j\mathbf{y} - \mathbf{m}_x + j\mathbf{m}_y)^T\, \tfrac{1}{2}\mathbf{C}_x^{-1}\, (\mathbf{x} + j\mathbf{y} - \mathbf{m}_x - j\mathbf{m}_y)$$
$$= \tfrac{1}{2}\left((\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} - j(\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1}\right)\left((\mathbf{x} - \mathbf{m}_x) + j(\mathbf{y} - \mathbf{m}_y)\right)$$
$$= \tfrac{1}{2}\left[(\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y)\right]$$
where the imaginary cross terms cancel because $\mathbf{C}_x^{-1}$ is symmetric.
The pdf of the complex random vector Z is equivalent to the joint probability
density function of the random vectors X and Y, or, equivalently, to the (2n)th-order pdf of the random vector U
U = [XT YT]T = [X(t1), X(t2), ... X(tn), Y(t1), Y(t2), ... Y(tn)]T
with
$$\mathbf{m}_u = E[\mathbf{U}] = [\mathbf{m}_x^T\ \mathbf{m}_y^T]^T$$
and
$$\mathbf{C}_u = E[(\mathbf{U} - \mathbf{m}_u)(\mathbf{U} - \mathbf{m}_u)^T]$$
$$= E\!\left[[(\mathbf{X} - \mathbf{m}_x)^T, (\mathbf{Y} - \mathbf{m}_y)^T]^T\, [(\mathbf{X} - \mathbf{m}_x)^T, (\mathbf{Y} - \mathbf{m}_y)^T]\right]$$
$$= \begin{bmatrix} E[(\mathbf{X} - \mathbf{m}_x)(\mathbf{X} - \mathbf{m}_x)^T] & E[(\mathbf{X} - \mathbf{m}_x)(\mathbf{Y} - \mathbf{m}_y)^T] \\ E[(\mathbf{Y} - \mathbf{m}_y)(\mathbf{X} - \mathbf{m}_x)^T] & E[(\mathbf{Y} - \mathbf{m}_y)(\mathbf{Y} - \mathbf{m}_y)^T] \end{bmatrix} = \begin{bmatrix} \mathbf{C}_x & \mathbf{C}_{xy} \\ \mathbf{C}_{yx} & \mathbf{C}_y \end{bmatrix} = \begin{bmatrix} \mathbf{C}_x & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x \end{bmatrix}$$
From this it follows
$$|\mathbf{C}_u| = |\mathbf{C}_x|^2$$
and
$$\mathbf{C}_u^{-1} = \begin{bmatrix} \mathbf{C}_x^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x^{-1} \end{bmatrix}$$
And further
$$(\mathbf{u} - \mathbf{m}_u)^T \mathbf{C}_u^{-1} (\mathbf{u} - \mathbf{m}_u) = [(\mathbf{x} - \mathbf{m}_x)^T, (\mathbf{y} - \mathbf{m}_y)^T] \begin{bmatrix} \mathbf{C}_x^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x^{-1} \end{bmatrix} [(\mathbf{x} - \mathbf{m}_x)^T, (\mathbf{y} - \mathbf{m}_y)^T]^T$$
$$= [(\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1}, (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1}]\, [(\mathbf{x} - \mathbf{m}_x)^T, (\mathbf{y} - \mathbf{m}_y)^T]^T$$
$$= (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y)$$
Thus
$$p(\mathbf{x}, \mathbf{y}) = p(\mathbf{u}) = \frac{1}{\sqrt{(2\pi)^{2n} |\mathbf{C}_u|}} \exp\!\left(-\tfrac{1}{2} (\mathbf{u} - \mathbf{m}_u)^T \mathbf{C}_u^{-1} (\mathbf{u} - \mathbf{m}_u)\right)$$
$$= \frac{1}{(2\pi)^n |\mathbf{C}_x|} \exp\!\left(-\tfrac{1}{2}\left[(\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y)\right]\right)$$
$$= \frac{1}{\pi^n |\mathbf{C}_z|} \exp\!\left(-(\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z)\right) = p_Z(\mathbf{z})$$
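This equivalence is easy to check numerically; a sketch (assuming SciPy; the means, Cx, and the test point are arbitrary choices satisfying the stated conditions Cx = Cy, Cxy = 0):

```python
import numpy as np
from scipy.stats import multivariate_normal

mx = np.array([1.0, -0.5])
my = np.array([0.0, 2.0])
Cx = np.array([[1.0, 0.3],
               [0.3, 0.8]])        # Cy = Cx and Cxy = 0 by construction
n = len(mx)

x = np.array([1.2, -0.1])          # arbitrary test point
y = np.array([-0.3, 1.7])
z, mz, Cz = x + 1j * y, mx + 1j * my, 2 * Cx

# Complex form: p_Z(z) = exp(-(z - mz)^H Cz^{-1} (z - mz)) / (pi^n |Cz|)
d = z - mz
p_complex = np.exp(-(d.conj() @ np.linalg.inv(Cz) @ d).real) / (np.pi**n * np.linalg.det(Cz))

# Real (2n)-dimensional form for u = [x^T y^T]^T
Cu = np.block([[Cx, np.zeros((n, n))], [np.zeros((n, n)), Cx]])
p_real = multivariate_normal(mean=np.r_[mx, my], cov=Cu).pdf(np.r_[x, y])

print(p_complex, p_real)  # the two densities agree
```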
Complex-valued Gaussian processes also have the important property that
any linear operation on the process produces another Gaussian process.