Random Signals
Pedro M. Q. Aguiar
January 2012
Parts of this document are based on PowerPoint slides by Jorge Salvador Marques.
Motivation
Many signals processed by computers can be considered random.
Examples: speech, audio, video, digital communication, medical,
biological, and economic signals.
[Figure: example waveforms of speech and ECG signals]

Is this a random signal?

x(n) = A cos(ωn + φ)

Yes, for example if A, ω, or φ are random variables!
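As an illustration (this sketch is not from the original slides, and the parameter ranges are arbitrary choices), a few realizations of x(n) with randomly drawn A, ω, and φ can be generated in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(100)                     # discrete time index

# Each realization draws its own A, omega, phi, so the whole
# waveform x(n) = A cos(omega*n + phi) is a random signal.
for _ in range(3):
    A = rng.uniform(0.5, 2.0)          # random amplitude (range is illustrative)
    omega = rng.uniform(0.05, 0.3)     # random angular frequency
    phi = rng.uniform(0.0, 2 * np.pi)  # random phase
    x = A * np.cos(omega * n + phi)
    print(x[:4])                       # first samples of this realization
```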
Random signals:
• finite length – random vectors
• infinite length – random processes (stochastic processes)
A finite length signal

$$x_1, x_2, ..., x_n$$

can be considered as an n-dimensional vector

$$x = [x_1, x_2, ..., x_n]^T$$

[Figure: several realizations of x]
Full description
x is characterized by the joint probability density function (pdf)
p(x)=p(x1, x2, …, xn)
$$p(x) \ge 0, \quad \forall x$$

$$\int_{\mathbb{R}^n} p(x)\,dx = 1$$

$$\Pr\{x \in C\} = \int_C p(y)\,dy$$
If x is discrete, it is characterized by a joint probability function
P(x)=P(x1, x2, …, xn)
$$P(x) \ge 0, \quad \forall x$$

$$\sum_{x} P(x) = 1$$

$$\Pr\{x \in C\} = \sum_{y \in C} P(y)$$

$$\Pr\{x_1 = i_1, ..., x_n = i_n\} = P(i_1, ..., i_n)$$
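A small numeric sketch, with made-up probabilities, illustrating these properties of a joint probability function on a 2×3 grid:

```python
import numpy as np

# A joint probability function P(x1, x2) on a small grid
# (the numbers are illustrative, chosen to sum to 1)
P = np.array([[0.10, 0.20, 0.10],
              [0.25, 0.25, 0.10]])

print(np.all(P >= 0), P.sum())      # P(x) >= 0 everywhere, and sums to 1

# Pr{x in C}: sum P over the region C, here C = {x : x1 = 0}
print(P[0, :].sum())

# Marginals are obtained by summing out the other variable
print(P.sum(axis=1))                # P(x1)
print(P.sum(axis=0))                # P(x2)
```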
The marginal pdfs of the individual variables xi do not contain information about
their mutual relationships (dependence).
Example: x = [x1 x2]T

[Figure: two different joint pdfs p(x1, x2) with identical marginals]

(both joint pdfs p(x1, x2) have the same marginal pdfs p(x1), p(x2))
Independent random variables
n random variables x1, ..., xn are independent iff

$$p(x_1, ..., x_n) = p(x_1)\, p(x_2) \cdots p(x_n)$$
(in this case, the joint pdf can be obtained from the marginal pdfs)
A sequence of independent random variables is called white noise
2nd order description
A random vector x can be (partially) characterized by
mean vector

$$\mu = E\{x\} = \begin{bmatrix} E\{x_1\} \\ \vdots \\ E\{x_n\} \end{bmatrix}$$

covariance matrix

$$R = E\{(x - \mu)(x - \mu)^T\} = \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1n} \\ R_{21} & R_{22} & \cdots & R_{2n} \\ \vdots & & & \vdots \\ R_{n1} & R_{n2} & \cdots & R_{nn} \end{bmatrix}, \qquad R_{ij} = E\{(x_i - \mu_i)(x_j - \mu_j)\}$$
The covariance matrix R is:
• symmetric
• positive semidefinite
• can be expressed as

$$R = \sum_{i=1}^{n} \lambda_i v_i v_i^T$$

where λi is an eigenvalue of R and vi the corresponding (normalized) eigenvector.
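A quick NumPy check of these three properties (not from the slides; the mixing matrix used to produce correlated samples is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3-dimensional samples via an arbitrary mixing matrix
M = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])
X = rng.standard_normal((10000, 3)) @ M.T
R = np.cov(X, rowvar=False)          # sample covariance matrix

print(np.allclose(R, R.T))           # symmetric
lam, V = np.linalg.eigh(R)           # eigendecomposition of a symmetric matrix
print(np.all(lam >= 0))              # positive semidefinite: all eigenvalues >= 0
# Reconstruct R from its eigendecomposition: R = sum_i lambda_i v_i v_i^T
R_rec = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
print(np.allclose(R, R_rec))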
Covariance of independent variables
The covariance matrix of a set of n independent random
variables is a diagonal matrix
$$R_{ij} = E\{(x_i - \mu_i)(x_j - \mu_j)\} = E\{x_i - \mu_i\}\, E\{x_j - \mu_j\} = 0, \quad i \ne j$$

(the expectation factorizes because of independence)
$$R = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{bmatrix}, \qquad R_{ij} = \sigma_i^2\, \delta(i - j)$$
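A short sketch (not from the slides, with illustrative variances σi²) showing that the sample covariance matrix of independent variables is approximately diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
# n = 4 independent variables with different variances sigma_i^2
sigma = np.array([1.0, 2.0, 0.5, 3.0])
X = rng.standard_normal((100000, 4)) * sigma   # independent columns
R = np.cov(X, rowvar=False)
print(np.round(R, 2))   # off-diagonal entries close to 0,
                        # diagonal close to sigma_i^2 = [1, 4, 0.25, 9]
```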
Normal distribution
Completely defined by the 1st and 2nd order statistics µ and R
Normal distribution x~N(µ,R)
$$p(x) = \frac{1}{(2\pi)^{n/2} |R|^{1/2}} \, e^{-\frac{1}{2}(x - \mu)^T R^{-1}(x - \mu)}$$

(the exponent is a quadratic form in x; the leading factor is a normalizing constant)

The level surfaces are ellipsoids centered at µ, with axes determined by the
eigenvalues and eigenvectors of R.

[Figure: level curves of a 2D normal pdf: an ellipse centered at µ with semi-axes √λ1 v1 and √λ2 v2]
Nice property:
a linear combination of normal variables is normal
(if x is a normal random vector, y = Ax is also a normal random vector)
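A sketch of both facts, with arbitrary example values for µ, R, and A (none of these numbers come from the slides): samples of x ~ N(µ, R) are drawn, and the sample mean and covariance of y = Ax are compared with Aµ and A R Aᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
R = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# Draw samples from N(mu, R)
x = rng.multivariate_normal(mu, R, size=100000)

# Linear map y = A x: normal with mean A mu and covariance A R A^T
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
y = x @ A.T
print(np.round(y.mean(axis=0), 2))            # ~ A mu = [0, -2]
print(np.round(np.cov(y, rowvar=False), 2))   # ~ A R A^T
print(A @ R @ A.T)                            # exact value, for comparison
```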
Infinite length signals
A random signal with infinite length
$$\{x_n,\; n \in \mathbb{Z}\}$$
is called a random, or stochastic, process
Naturally, its complete characterization is more complex than that of a random vector.
In general, a random process is characterized through the joint pdfs of all finite
subsets of samples

$$x_{k_1}, x_{k_2}, ..., x_{k_n}$$
2nd order description
Partial description of a random process, based on the 1st and 2nd
order statistics
mean

$$\mu_i = E\{x_i\}$$

covariance function

$$c(i, j) = E\{(x_i - \mu_i)(x_j - \mu_j)\}$$
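A minimal sketch (not from the slides; the random-walk process is just an example whose covariance function is known in closed form) estimating µi and c(i, j) from many independent realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 50000, 5
# Each row is one realization of a short random-walk process
X = np.cumsum(rng.standard_normal((M, n)), axis=1)

mu = X.mean(axis=0)                  # estimate of mu_i = E{x_i}
Xc = X - mu
C = (Xc.T @ Xc) / M                  # estimate of c(i, j)
print(np.round(mu, 2))               # ~0 for this process
print(np.round(C, 1))                # random walk: c(i, j) ~ min(i, j) + 1
```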
Gaussian processes
A random process is Gaussian iff every finite subset of samples follows a
normal distribution
It is completely characterized by the 2nd order description
White noise
A sequence of independent random variables is called white noise
The covariance function of white noise is c(i, j) = σi² δ(i − j)
If, in addition, each variable follows a normal distribution, the process is
called white Gaussian noise
It is very simple to generate a realization of white Gaussian noise on a
computer, as the sketch below illustrates
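For instance, a minimal NumPy sketch (not from the slides) that generates one realization and checks the sample covariance function:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # white Gaussian noise: independent N(0, 1) samples

# Sample estimate of the covariance function at a few lags:
# ~1 at lag 0 (sigma^2 = 1) and ~0 at the other lags
for k in range(4):
    c = np.mean(x[:len(x) - k] * x[k:])
    print(k, round(c, 3))
```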
Other processes
We can synthesize processes with non-independent
samples (colored noise) by filtering white noise:
$$x_i \sim N(0, 1) \quad \text{(white noise)}$$

autoregressive (AR) model

$$y_i = a_1 y_{i-1} + ... + a_p y_{i-p} + x_i$$

moving average (MA) model

$$y_i = b_0 x_i + ... + b_q x_{i-q}$$

autoregressive moving average (ARMA) model

$$y_i = a_1 y_{i-1} + ... + a_p y_{i-p} + b_0 x_i + ... + b_q x_{i-q}$$

Example: $y_i = 0.98\, y_{i-1} + x_i$
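A sketch of this synthesis using scipy.signal.lfilter; only the AR(1) coefficient 0.98 comes from the slide, while the MA and ARMA coefficients below are arbitrary illustrations:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)          # white Gaussian noise, x_i ~ N(0, 1)

# AR(1) colored noise: y_i = 0.98 y_{i-1} + x_i
# In lfilter's convention: b = [1], a = [1, -0.98]
y_ar = lfilter([1.0], [1.0, -0.98], x)

# MA(2) example: y_i = x_i + 0.5 x_{i-1} + 0.25 x_{i-2}
y_ma = lfilter([1.0, 0.5, 0.25], [1.0], x)

# ARMA(1, 1) example combining both kinds of terms
y_arma = lfilter([1.0, 0.5], [1.0, -0.98], x)

# Neighbouring samples of the AR output are strongly correlated (colored),
# unlike the white input.
print(round(np.corrcoef(x[:-1], x[1:])[0, 1], 3))        # ~0
print(round(np.corrcoef(y_ar[:-1], y_ar[1:])[0, 1], 3))  # ~0.98
```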