5. Random Processes
5-1 Introduction
5-2 Continuous and Discrete Random Processes
5-3 Deterministic and Nondeterministic Random Processes
5-4 Stationary and Nonstationary Random Processes
5-5 Ergodic and Nonergodic Random Processes
5-6 Measurement of Process Parameters
5-7 Smoothing Data with a Moving Window Average
Topics

Continuous and Discrete Random Processes
Classification of Random Processes
Deterministic and Nondeterministic Random Processes
Stationary and Nonstationary Random Processes
Wide-Sense Stationarity
  o Time Averages and Statistical Mean
  o Autocorrelation (early introduction)
Ergodic and Nonergodic Random Processes
Ergodicity
Measurement of Process Parameters
  o Statistics
Smoothing Data with a Moving Window Average
Simulating a Random Process
Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System
Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford Press, 1999. ISBN: 0-19-512354-9.
B.J. Bazuin, Spring 2015
1 of 11
ECE 3800
Chapter 5: Random Processes
A random process is a collection of time functions and an associated probability description: a continuous, discrete, or mixed process in time or space that can be described mathematically as a function containing one or more random variables.
The entire collection of possible time functions is an ensemble, designated {x(t)}, where one particular member of the ensemble, designated x(t), is a sample function of the ensemble. In general, only one sample function of a random process can be observed!
There are many such ensembles in engineering where the sample function, once known, provides a continuing solution. In many cases an entire system design approach is based on assuming either that the randomness remains or that it is removed once actual measurements are taken!
Terminology:
Continuous vs. discrete
Deterministic vs. nondeterministic
Stationary vs. nonstationary
Ergodic vs. nonergodic
Continuous and Discrete Random Processes
A continuous random process is one in which the random variables, such as X(t1), X(t2), ..., X(tn), can assume any value within the specified range of possible values. A more precise definition for a continuous random process also requires that the probability distribution function be continuous.
A discrete random process is one in which the random variables, such as X(t1), X(t2), ..., X(tn), can assume only certain values (though possibly an infinite number of them). A more precise definition for a discrete random process also requires that the probability distribution function consist of discontinuities or steps. Alternately, the probability density function is better described as a probability mass function; the pdf is composed of delta functions.
A mixed random process consists of both continuous and discrete components. The probability distribution function consists of both continuous regions and steps, and the pdf has both continuous regions and delta functions. The process, or process model, is best described using a pdf, a pmf, or a combination of the two.
Deterministic and Nondeterministic Random Processes
A nondeterministic random process is one where future values of the ensemble cannot be
predicted from previously observed values.
A deterministic random process is one where one or more observed samples allow all future values of the sample function to be predicted (or pre-determined). For these processes, a single random variable may exist for the entire ensemble; once it is determined (via one or more measurements), the sample function is known for all t.
For a nondeterministic random process, the future may be bounded based on knowledge of past history, but the values cannot be predicted or predetermined.
Note: there are numerous engineered systems that apply short-term prediction, based either on a priori statistics or on gathered statistical information.
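To make the deterministic case concrete, here is a small Python sketch (our own illustration; the process form x(t) = A sin(wt), the frequency, and all variable names are assumptions, not from the text). A single observation at a time where sin(wt) is nonzero pins down the random variable A, and with it every future value of that sample function.

```python
import numpy as np

# Deterministic random process: x(t) = A*sin(w*t).
# A is random across the ensemble, but one measurement reveals it.
w = 2.0 * np.pi * 1.0            # assumed angular frequency
A_true = 1.7                     # the (unknown) realized value of A
t_obs = 0.1                      # one observation time, sin(w*t_obs) != 0
x_obs = A_true * np.sin(w * t_obs)

A_hat = x_obs / np.sin(w * t_obs)            # recover A from one sample
t_future = 12.34
x_predicted = A_hat * np.sin(w * t_future)   # exact prediction for all t
x_actual = A_true * np.sin(w * t_future)
```

Once A is known, prediction error is zero; contrast this with a nondeterministic process, where no finite set of past samples determines the future exactly.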
Stationary and Nonstationary Random Processes
The probability density functions for random variables in time have been discussed, but what is
the dependence of the density function on the value of time, t, when it is taken?
If all marginal and joint density functions of a process do not depend upon the choice of the time origin, the process is said to be stationary (that is, it does not change with time). All the mean values and moments are constants, not functions of time!
For nonstationary processes, the probability density functions change with the time origin or in time. For these processes, the mean values and moments are functions of time.
In general, we always attempt to deal with stationary processes, or to approximate stationarity by assuming that the process probability distribution, means, and moments do not change significantly during the period of interest.
The requirement that all marginal and joint density functions be independent of the choice of
time origin is frequently more stringent (tighter) than is necessary for system analysis. A more
relaxed requirement is called stationary in the wide sense: where the mean value of any random
variable is independent of the choice of time, t, and that the correlation of two random variables
depends only upon the time difference between them. That is
E[X(t)] = \bar{X} \quad \text{(independent of } t\text{)}

and

E[X(t_1)\,X(t_2)] = E[X(0)\,X(t_2 - t_1)] = R_{XX}(\tau) \quad \text{for } \tau = t_2 - t_1
You will typically deal with Wide-Sense Stationary Signals, processes where the mean and
autocorrelation are stationary.
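As a numerical illustration, the two wide-sense-stationarity conditions can be checked by simulation. This Python sketch is our own construction (the random-phase cosine process and all parameter values are assumptions, not from the textbook): with phase uniform on [0, 2*pi), the ensemble mean is zero for every t and E[X(t1)X(t2)] depends only on tau = t2 - t1, with R_XX(tau) = (1/2) cos(w*tau).

```python
import numpy as np

# Ensemble of sample functions x(t) = cos(w*t + theta), theta ~ U[0, 2pi).
rng = np.random.default_rng(0)
n_ensemble = 20000              # number of sample functions drawn
w = 2.0 * np.pi * 0.05          # assumed angular frequency
t1, t2 = 3.0, 7.0               # two arbitrary observation times
theta = rng.uniform(0.0, 2.0 * np.pi, n_ensemble)

x_t1 = np.cos(w * t1 + theta)
x_t2 = np.cos(w * t2 + theta)

mean_t1 = x_t1.mean()                 # ensemble mean at t1, near 0
corr = (x_t1 * x_t2).mean()           # estimate of E[X(t1) X(t2)]
theory = 0.5 * np.cos(w * (t2 - t1))  # R_XX(tau) = (1/2) cos(w*tau)
```

Repeating with other (t1, t2) pairs having the same difference gives the same correlation, which is exactly the wide-sense condition.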
Ergodic and Nonergodic Random Processes
Ergodicity deals with the problem of determining the statistics of an ensemble based on
measurements from a sample function of the ensemble.
For ergodic processes, all the statistics can be determined from a single function of the process.
This may also be stated based on the time averages. For an ergodic process, the time averages
(expected values) equal the ensemble averages (expected values). That is to say,
\overline{X^n} = \int_{-\infty}^{\infty} x^n \, f(x)\,dx = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} X^n(t)\,dt
Note that ergodicity cannot exist unless the process is stationary!
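A quick Python sketch (our own construction, assuming an i.i.d. Gaussian discrete-time process, which is ergodic in the mean) shows the time average over one sample function agreeing with the ensemble average across many sample functions:

```python
import numpy as np

# Rows are sample functions; columns are time samples.
rng = np.random.default_rng(1)
mu, sigma = 2.0, 3.0
ensemble = rng.normal(mu, sigma, size=(500, 5000))

time_avg_one = ensemble[0].mean()     # average ONE realization over time
ensemble_avg = ensemble[:, 0].mean()  # average ACROSS realizations at one t
```

For this process both averages converge to mu = 2.0; for a nonergodic process (see the examples later in the chapter) the time average of a single sample function would not.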
5-6 Measurement of Process Parameters
For a stationary process, the statistical parameters of a random process are derived from measurements of the random variables X(t) at multiple time instants, t.
As might be anticipated, the statistics must typically be generated from a single sample function; it is therefore not possible to form a true ensemble average. If the process is ergodic, the time average is equivalent to the ensemble average. As might be expected, ergodicity is typically assumed.
Further, since an infinite time average is not possible, the statistical values (sample mean and
variance) defined in Chapter 4 are used to estimate the appropriate moments. For a continuous
time sample function, this becomes
\hat{X} = \bar{X} \approx \frac{1}{T}\int_0^T X(t)\,dt \quad \text{or} \quad \hat{X} = \frac{1}{N}\sum_{k=1}^{N} X(k)

\widehat{X^2} = \overline{X^2} \approx \frac{1}{T}\int_0^T X(t)^2\,dt \quad \text{or} \quad \widehat{X^2} = \frac{1}{N}\sum_{k=1}^{N} X(k)^2

etc.
As before, for X(t) a random process,

E[\hat{X}] = E\left[ \frac{1}{T}\int_0^T X(t)\,dt \right] = \frac{1}{T}\int_0^T E[X(t)]\,dt

The discrete version of this is a repeat of the sample mean:

\hat{X} = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad \text{(sample mean)}

E[\hat{X}] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \frac{1}{n} \cdot n\,\bar{X} = \bar{X}
For discrete samples, it is desired and, in fact, assumed that the observed samples are spaced far
enough apart in time to be statistically independent.
The 2nd moment needed to compute the variance of the estimated sample mean is computed as

E[\hat{X}^2] = E\left[ \left( \frac{1}{n}\sum_{i=1}^{n} X_i \right) \left( \frac{1}{n}\sum_{j=1}^{n} X_j \right) \right] = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} E[X_i X_j]

For X_i independent,

E[X_i X_j] = E[X_i^2] = \bar{X}^2 + \sigma_X^2 \quad \text{for } i = j

E[X_i X_j] = E[X_i]\,E[X_j] = \bar{X}^2 \quad \text{for } i \neq j

so that

E[\hat{X}^2] = \frac{1}{n^2}\left( \sum_{i=1}^{n} E[X_i^2] + \sum_{i=1}^{n} \sum_{j=1,\,j \neq i}^{n} E[X_i]\,E[X_j] \right) = \frac{1}{n^2}\left( n\,(\bar{X}^2 + \sigma_X^2) + n(n-1)\,\bar{X}^2 \right)

E[\hat{X}^2] = \frac{1}{n}\left( \sigma_X^2 + \bar{X}^2 \right) + \left( 1 - \frac{1}{n} \right)\bar{X}^2 = \bar{X}^2 + \frac{1}{n}\sigma_X^2
And the variance of the mean estimate is

\operatorname{Var}[\hat{X}] = E[\hat{X}^2] - \left( E[\hat{X}] \right)^2 = \bar{X}^2 + \frac{1}{n}\sigma_X^2 - \bar{X}^2 = \frac{1}{n}\sigma_X^2
As before, as the number of values goes to infinity, the sample mean becomes a better estimate
of the actual mean.
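This 1/n behavior is easy to verify by simulation. A minimal Python check (our own setup; the Gaussian distribution and all parameter values are assumptions) draws many independent n-sample batches and compares the spread of their sample means against sigma^2 / n:

```python
import numpy as np

# Monte Carlo check: Var[sample mean of n samples] = sigma^2 / n.
rng = np.random.default_rng(2)
n = 25
sigma = 2.0
trials = 200000
samples = rng.normal(0.0, sigma, size=(trials, n))
sample_means = samples.mean(axis=1)      # one sample mean per trial

empirical_var = sample_means.var()       # observed spread of the estimator
predicted_var = sigma**2 / n             # theory: 4.0 / 25 = 0.16
```

Doubling n halves the variance of the estimate, which is why longer records give better mean estimates.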
As may be expected, the unbiased estimate of the sample variance can be defined as
\hat{\sigma}_X^2 = \frac{1}{n-1}\sum_{i=1}^{n} X_i^2 - \frac{n}{n-1}\hat{X}^2
Exercise 5-6.2
Show that the estimate of the variance is an unbiased estimate of the true variance:

E[\hat{\sigma}_X^2] = \sigma_X^2

Expanding the estimator,

E[\hat{\sigma}_X^2] = E\left[ \frac{1}{n-1}\sum_{i=1}^{n} X_i^2 - \frac{n}{n-1}\hat{X}^2 \right] = \frac{1}{n-1}\sum_{i=1}^{n} E[X_i^2] - \frac{n}{n-1}\,E[\hat{X}^2]

Using E[X_i^2] = \bar{X}^2 + \sigma_X^2 and E[\hat{X}^2] = \bar{X}^2 + \frac{1}{n}\sigma_X^2,

E[\hat{\sigma}_X^2] = \frac{n}{n-1}\left( \bar{X}^2 + \sigma_X^2 \right) - \frac{n}{n-1}\left( \bar{X}^2 + \frac{1}{n}\sigma_X^2 \right)

E[\hat{\sigma}_X^2] = \frac{n}{n-1}\sigma_X^2 - \frac{1}{n-1}\sigma_X^2 = \frac{n-1}{n-1}\sigma_X^2 = \sigma_X^2
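The result can also be checked numerically. In this Python sketch (our own Monte Carlo setup, not part of the exercise), averaging the (n-1)-divisor estimator over many trials recovers sigma^2, while the 1/n divisor comes out biased low by the factor (n-1)/n:

```python
import numpy as np

# Unbiasedness check for the sample variance with small n.
rng = np.random.default_rng(3)
n = 5
sigma2 = 4.0                     # true variance
trials = 400000
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

var_unbiased = x.var(axis=1, ddof=1).mean()  # divide by n-1 -> ~4.0
var_biased = x.var(axis=1, ddof=0).mean()    # divide by n   -> ~4*(n-1)/n
```

With n = 5, the biased estimator averages about 3.2 instead of 4.0, which is exactly the (n-1)/n factor removed in the derivation above.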
As mentioned, there are numerous classes of systems for processing that are stationary and
ergodic.
It is very common when simulating systems to gather samples from the model/simulation and compute statistical values. These should (must) match the underlying model parameters AND should match data collected from the actual signals/systems being modeled!
A Process for Determining Stationarity and Ergodicity
a) Find the mean and the 2nd moment based on the probability density.
b) Find the time-sample mean and time-sample 2nd moment based on time averaging.
c) If the means or 2nd moments are functions of time, the process is nonstationary.
d) If the time-average mean and moments are not equal to the probabilistic mean and moments, or if the process is not stationary, then it is nonergodic.
For each example, compare the probabilistic averages

\bar{X} = \int_{-\infty}^{\infty} x\,f_X(x)\,dx \quad \text{and} \quad \overline{X^2} = \int_{-\infty}^{\infty} x^2\,f_X(x)\,dx

with the time averages

\langle x \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,dt \quad \text{and} \quad \langle x^2 \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2\,dt
Examples:
1) x(t) = const.
   Deterministic, stationary, ergodic.
2) x(t) = A, for A a zero-mean, unit-variance Gaussian random variable.
   Deterministic, stationary, nonergodic.
3) x(t) = A sin(wt), for A a uniformly distributed random variable, A in [-2, 2].
   Deterministic, stationary in the mean but not the variance(!), nonergodic.
4) x(t) = B_n rect((t - nT)/T), for B_n = +/-1 with probability 50% each.
   Nondeterministic, zero mean, stationary, ergodic (bipolar/binary communications).
5) x(t) = A cos(wt + theta), for A and/or theta random variables.
   Deterministic, stationary in the mean but not the variance(!), nonergodic.
Comments:
There are cases where the random variable remains part of the time-average solution.
There are cases where time functions remain when performing probabilistic computations.
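Example 2 above can be simulated to see non-ergodicity directly. In this Python sketch (our own construction), each sample function of x(t) = A is constant in time, so its time average equals its own realized A rather than the ensemble mean of zero:

```python
import numpy as np

# x(t) = A: each sample function is a constant equal to its draw of A.
rng = np.random.default_rng(4)
n_funcs, n_times = 10000, 100
A = rng.normal(0.0, 1.0, n_funcs)              # zero-mean, unit-variance
ensemble = np.tile(A[:, None], (1, n_times))   # rows constant in time

time_averages = ensemble.mean(axis=1)   # one per sample function: equals A
ensemble_mean = ensemble[:, 0].mean()   # across the ensemble: near 0
spread = time_averages.std()            # near 1: time averages disagree
```

The ensemble statistics are time-invariant (stationary), yet no single sample function's time average recovers them: the definition of a nonergodic process.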
Smoothing Data with a Moving Window Average
While we may wish to increase the quality of time estimates by measuring over a longer and longer period, many practical sensor measurements involve monitoring changes in the mean value of a process over time. A sensor also inherently measures additive noise along with the underlying process to be sensed.
If we assume that the noise is stationary with a zero mean, tracking the sensor process is one
where an accurate measure of the mean is desired, but without including a variance in the mean
caused by the noise. Thus there is a dilemma concerning the appropriate number of time samples
to take to (1) observe the underlying desired value as it changes in time and (2) collect
sufficiently long sample sets to minimize any contribution due to the variance in the noise
sample mean.
Said another way, how do we detect and extract a low-frequency signal from an environment that contains wide-bandwidth noise (frequency content significantly higher than the signal of interest, SOI)?
The simple answer is a low-pass filter, but for "captured data" a non-causal filter can be employed as follows.
The simplest low-pass filter is a moving average (MA) window, so let's study this possibility:

\hat{X}_i = \frac{1}{n_L + n_R + 1}\sum_{k=-n_L}^{n_R} Y_{i+k}

where Y_i = X_i + N_i for signal X_i and noise N_i. The MA approximation for X is

\hat{X}_i = \frac{1}{n_L + n_R + 1}\sum_{k=-n_L}^{n_R} \left( X_{i+k} + N_{i+k} \right)

The mean estimate (note that N is zero mean) is

E[\hat{X}_i] = \frac{1}{n_L + n_R + 1}\,E\left[ \sum_{k=-n_L}^{n_R} \left( X_{i+k} + N_{i+k} \right) \right] = \bar{X} + \bar{N} = \bar{X}

The 2nd moment and variance of the estimate can be computed as

E[\hat{X}_i^2] = \bar{X}^2 + \frac{1}{n_L + n_R + 1}\sigma_X^2 + \frac{1}{n_L + n_R + 1}\sigma_N^2

and

\operatorname{Var}[\hat{X}_i] = \frac{1}{n_L + n_R + 1}\sigma_X^2 + \frac{1}{n_L + n_R + 1}\sigma_N^2
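The variance reduction by the window length can be verified numerically. This Python check is our own setup (the window length, noise level, and constant signal level are assumed values): smoothing a constant level buried in white Gaussian noise with an M = nL + nR + 1 point moving average reduces the noise variance by a factor of M.

```python
import numpy as np

# Moving-average variance reduction: Var[out] ~ sigma_n^2 / M
# when the underlying signal is constant.
rng = np.random.default_rng(5)
M = 21                       # window length, nL + nR + 1
sigma_n = 1.0                # noise standard deviation
level = 3.0                  # constant signal level to be tracked
y = level + rng.normal(0.0, sigma_n, 200000)

kernel = np.ones(M) / M                      # M-point MA impulse response
x_hat = np.convolve(y, kernel, mode="valid")

out_var = x_hat.var()
predicted = sigma_n**2 / M    # about 0.0476 for M = 21
```

The mean passes through unchanged while the noise variance drops by 1/M, which is the dilemma described above: a longer window smooths more noise but also smears real changes in the mean.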
MATLAB
%% Figure 5-4: moving-average smoothing of a noisy square wave
clear;
close all;

ma_length1 = 21;                 % first moving-average window length
ma_length2 = 41;                 % second (longer) window length
nsamples = 1000;                 % number of time samples

n = (0:1:nsamples-1)';           % sample index
x = square(2*pi*n/200);          % square wave, period = 200 samples
noise = randn(nsamples,1);       % zero-mean, unit-variance Gaussian noise
y = x + noise;                   % observed signal plus noise

MA1 = ones(ma_length1,1)/ma_length1;   % 21-point MA impulse response
est_x1 = filter(MA1,1,y);
MA2 = ones(ma_length2,1)/ma_length2;   % 41-point MA impulse response
est_x2 = filter(MA2,1,y);

figure
plot(n,x,n,y,'rx',n,est_x1,n,est_x2);
xlabel('Samples')
ylabel('Amplitude')
title('Figure 5-4')
grid
[Figure 5-4: the square wave, the noisy samples (red x), and the two moving-average estimates plotted versus sample index (0 to 1000); the amplitude axis spans roughly -5 to 4.]
Frequency-Domain Signal Analysis and Filtering
A "moving average" filter is a means of modifying the frequency-domain content of a signal when the underlying signal has known (or estimated) frequency characteristics.
A square wave can be decomposed into frequency-domain components using the Fourier series; it is composed of odd harmonics starting at the fundamental "periodicity" of the square wave.
The noise has spectral components at all frequencies. Therefore, to clean up the signal-plus-noise waveform, we apply a low-pass filter: a moving-average or rect-function filter.
A better filter can be designed and used, one that keeps a specific number of harmonics and rejects the rest.
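One way to build such a harmonic-keeping filter for captured (non-real-time) data is to zero FFT bins. This Python sketch is our own design under assumed parameters (the period, the harmonics kept, and the noise level are ours, not the course's): it keeps only the DC bin, the fundamental, and a few odd harmonics of a noisy square wave.

```python
import numpy as np

# FFT-domain filter: keep the square wave's odd harmonics, reject the rest.
rng = np.random.default_rng(6)
n = 1000
period = 200                                   # square-wave period in samples
t = np.arange(n)
x = np.sign(np.sin(2.0 * np.pi * t / period))  # clean square wave
y = x + rng.normal(0.0, 1.0, n)                # observed signal plus noise

Y = np.fft.rfft(y)
f0_bin = n // period                # fundamental falls exactly on bin n/period
keep = np.zeros_like(Y)
keep[0] = Y[0]                      # DC
for h in (1, 3, 5, 7):              # fundamental plus odd harmonics
    keep[h * f0_bin] = Y[h * f0_bin]
x_hat = np.fft.irfft(keep, n)       # back to the time domain

err_filtered = np.mean((x_hat - x) ** 2)   # residual error after filtering
err_raw = np.mean((y - x) ** 2)            # error of the raw noisy data
```

Because only a handful of bins are retained, nearly all of the wideband noise power is rejected, at the cost of the ripple from truncating the square wave's harmonic series.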