Sparse Event Detection in Wireless Sensor Networks using Compressive Sensing

Jia Meng, Husheng Li, and Zhu Han
the 43rd Annual Conference on Information Sciences
and Systems (CISS), 2009
Outline
 Introduction
 System Model
 Compressive Sensing Algorithm
 Simulation Results and Analysis
 Conclusions
Introduction
 The dogma of signal processing maintains that a signal must be sampled at a Nyquist rate of at least twice its bandwidth in order to be represented without error
 In practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example)
 Clearly, this is wasteful of valuable sensing/sampling resources
Introduction
 In this paper, we investigate how to employ compressive sensing in wireless sensor networks
 Specifically, we target two problems of wireless sensor networks
1. The number of events is much smaller than the total number of sources
2. Different events may happen simultaneously, causing interference that makes it hard to detect them individually
 To overcome the above two problems, we propose a sparse event detection scheme in wireless sensor networks by employing compressive sensing
System Model
 There are a total of N sources randomly located in a field
 Those sources randomly generate the events to be measured
 We denote by K the number of events that the sources generate
 K is a random number, and is much smaller than N
 We denote by X_{N×1} the event vector, in which each component has a binary value, i.e., X_n ∈ {0, 1}
 Obviously X is a sparse vector since K ≪ N
System Model
 In the system, there are M active monitoring sensors
trying to capture these events
 There are two challenges for those monitoring sensors
1. All those events happen simultaneously
 As a result, the received signals interfere with each other
2. The received signal is degraded by propagation loss and thermal noise
System Model
 The received signal vector can be written as
Y_{M×1} = G_{M×N} X_{N×1} + ε_{M×1}
 ε_{M×1} is the thermal noise vector, whose components are independent with zero mean and variance σ²
 G_{M×N} is the channel response matrix, whose components can be written as
G_{m,n} = (d_{m,n})^{−α/2} h_{m,n}
 d_{m,n} is the distance from the nth source to the mth sensing device
 α is the propagation loss factor
 h_{m,n} is the Rayleigh fading, modeled as complex Gaussian noise with zero mean and unit variance
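As a quick sanity check, the measurement model above can be simulated directly. The sizes, field dimensions, and noise level below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's simulation settings)
N, M, K = 64, 24, 3          # sources, sensors, active events
alpha = 3.0                  # propagation loss factor
sigma = 1e-3                 # thermal noise standard deviation

# Random source and sensor positions in a 500 m-by-500 m field
sources = rng.uniform(0, 500, size=(N, 2))
sensors = rng.uniform(0, 500, size=(M, 2))

# Channel matrix: G[m, n] = d_{m,n}^{-alpha/2} * h_{m,n},
# with h_{m,n} complex Gaussian of unit variance (Rayleigh fading)
d = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)
d = np.maximum(d, 5.0)       # enforce a minimum source-sensor distance
h = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
G = d ** (-alpha / 2) * h

# Sparse binary event vector X with K active sources
X = np.zeros(N)
X[rng.choice(N, size=K, replace=False)] = 1.0

# Received signal: Y = G X + noise
noise = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
Y = G @ X + noise
```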
System Model
 Notice that the number of events, the number of sensors, and the total number of sources have the following relation: K < M ≪ N
 Consequently, the received signal vector Y is a condensed representation of the events
 The measurement vector Y is an aliased version of the event vector X, due to the low sampling rate M
Compressive Sensing Algorithm
 Problem Formulation and Analysis
 Bayesian Detection
1. Model Specification
2. Marginal Likelihood Maximization
3. Heuristic using Prior Information
Problem Formulation and Analysis
 Definition: Restricted Isometry Property (RIP)
For any vector V sharing the same K nonzero entries as X, if
1 − δ ≤ ‖GV‖₂² / ‖V‖₂² ≤ 1 + δ
for some δ > 0, then the matrix G preserves the information of the K-sparse signal.
 It has been proved that if G is an i.i.d. Gaussian matrix or a random ±1 entry matrix, then the K-sparse signal is compressible with high probability if
M ≥ cK log(N/K) ≪ N
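The near-isometry in the RIP definition can be checked empirically. The sketch below (sizes are assumptions) draws random K-sparse vectors and verifies that an i.i.d. Gaussian G, scaled so columns have expected unit norm, keeps ‖GV‖₂²/‖V‖₂² close to 1:

```python
import numpy as np

rng = np.random.default_rng(1)

N, K = 256, 4
M = 64                        # comfortably above cK log(N/K) for a small constant c

# i.i.d. Gaussian matrix with entry variance 1/M, so E[||Gv||^2] = ||v||^2
G = rng.standard_normal((M, N)) / np.sqrt(M)

# Empirically check near-isometry on random K-sparse vectors
ratios = []
for _ in range(200):
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(G @ v) ** 2 / np.linalg.norm(v) ** 2)

lo, hi = min(ratios), max(ratios)   # observed range of ||Gv||^2 / ||v||^2
```

Over many trials the observed ratios concentrate around 1, which is exactly the 1 ± δ band the RIP requires.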
Problem Formulation and Analysis
 Since M < N, there are an infinite number of candidates X̂ satisfying Y = GX̂
 The problem is to find the sparse reconstructed signal
X̂ = arg min ‖X̂‖₁  subject to  Y = GX̂
 The above optimization is called l1-magic in the literature
 The complexity is O(N³)
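The l1 minimization above can be posed as a linear program. A minimal sketch using `scipy.optimize.linprog` as a generic LP solver (not the actual l1-magic package) on an assumed small noiseless instance:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Small illustrative instance (sizes are assumptions)
N, M, K = 30, 15, 2
G = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = 1.0
y = G @ x_true                      # noiseless measurements

# Basis pursuit as an LP over z = [x, t]:
#   minimize sum(t)  subject to  G x = y  and  -t <= x <= t
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([G, np.zeros((M, N))])
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]),       #  x - t <= 0
                  np.hstack([-I, -I])])     # -x - t <= 0
b_ub = np.zeros(2 * N)
bounds = [(None, None)] * N + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
x_hat = res.x[:N]                   # l1-minimizing reconstruction
```

Since x_true itself is feasible, the optimal l1 norm can never exceed ‖x_true‖₁; with a Gaussian G and K this small, the minimizer typically recovers the true sparse vector.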
Bayesian Detection
 Considering the fact that the components of X are either 0 or 1, we adopt Bayesian compressive sensing [12-14], which is fully probabilistic and introduces a set of hyper-parameters
[12] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine", Journal of Machine Learning Research, vol. 1, pp. 211-244, Sept. 2001.
[13] M. E. Tipping and A. C. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models", in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, Jan. 3-6, 2003.
[14] S. Ji, Y. Xue and L. Carin, "Bayesian compressive sensing", IEEE Trans. Signal Processing, vol. 56, no. 6, June 2008.
Maximum Likelihood Estimation (MLE)
 Suppose there are five bags, each containing an unlimited supply of cookies (cherry flavored or lemon flavored). The proportions of the two flavors in the five bags are known to be:
1. Cherry 100% (likelihood of two lemons: 0)
2. Cherry 75% + Lemon 25% (likelihood: 0.25²)
3. Cherry 50% + Lemon 50% (likelihood: 0.50²)
4. Cherry 25% + Lemon 75% (likelihood: 0.75²)
5. Lemon 100% (likelihood: 1)
 If two lemon cookies in a row are drawn from the same bag, which of the five bags is it most likely to be?
 Ans: 5
Maximum a posteriori (MAP)
 Suppose the same five bags of cookies, but now with a known prior probability of picking each bag:
1. Cherry 100% (prior 0.1): 0.1 × 0 = 0
2. Cherry 75% + Lemon 25% (prior 0.2): 0.2 × 0.25² = 0.0125
3. Cherry 50% + Lemon 50% (prior 0.4): 0.4 × 0.50² = 0.1
4. Cherry 25% + Lemon 75% (prior 0.2): 0.2 × 0.75² = 0.1125
5. Lemon 100% (prior 0.1): 0.1 × 1 = 0.1
 If two lemon cookies in a row are drawn from the same bag, which of the five bags is it most likely to be?
 Ans: 4
p(x_i | θ) = p(x_i) p(θ | x_i) / p(θ)
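The two cookie-bag slides above can be reproduced in a few lines; the numbers are exactly those from the example:

```python
# Five bags with known lemon fractions and priors (from the slides)
lemon_frac = [0.0, 0.25, 0.50, 0.75, 1.0]
prior      = [0.1, 0.2, 0.4, 0.2, 0.1]

# Likelihood of drawing two lemon cookies in a row from each bag
likelihood = [p ** 2 for p in lemon_frac]

# MLE: pick the bag maximizing the likelihood alone
mle_bag = max(range(5), key=lambda i: likelihood[i]) + 1      # bag 5

# MAP: weight the likelihood by the prior probability of each bag
posterior = [prior[i] * likelihood[i] for i in range(5)]
map_bag = max(range(5), key=lambda i: posterior[i]) + 1       # bag 4
```

The prior shifts the answer from bag 5 (MLE) to bag 4 (MAP), which is the point of the example.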
Model Specification
 The noise in the system is composed of propagation loss with zero mean and variance σ²
 The probability density function can be approximated by a Gaussian distribution: p(ε) = ∏_{i=1}^{M} N(ε_i | 0, σ²)
 Due to the assumption of independence of Y_n, the likelihood of the complete data set can be written as
p(Y | X, σ²) = (2πσ²)^{−M/2} exp(−‖Y − GX‖² / (2σ²))
Model Specification
 The real distribution of X is a Bernoulli distribution
 However, a closed-form solution to our problem is then hard to obtain
 Instead, we assume a zero-mean Gaussian prior distribution over the signal X:
p(X | α) = ∏_{n=1}^{N} N(X_n | 0, α_n^{−1}) = (2π)^{−N/2} ∏_{n=1}^{N} α_n^{1/2} exp(−α_n x_n² / 2)
 where α is a vector of N independent hyper-parameters
Model Specification
 Given α, the posterior distribution over the signal is obtained by combining the likelihood and the prior with Bayes' rule:
p(X | Y, α, σ²) = p(Y | X, σ²) p(X | α) / p(Y | α, σ²)
 which is a Gaussian distribution N(μ, Σ) with covariance and mean
Σ = (A + σ^{−2} GᵀG)^{−1}
μ = σ^{−2} Σ GᵀY
A = diag(α₁, …, α_N)
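A minimal numerical sketch of the posterior covariance Σ and mean μ, assuming fixed hyper-parameters and a real-valued G for simplicity (the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative problem sizes (assumptions)
N, M = 16, 8
G = rng.standard_normal((M, N))
Y = rng.standard_normal(M)
sigma2 = 0.01                       # noise variance, assumed known here
alpha = np.ones(N)                  # one hyper-parameter per signal component

A = np.diag(alpha)
Sigma = np.linalg.inv(A + G.T @ G / sigma2)   # posterior covariance
mu = Sigma @ G.T @ Y / sigma2                 # posterior mean
```

By construction μ solves (A + σ⁻²GᵀG)μ = σ⁻²GᵀY, and Σ is symmetric positive definite, as a Gaussian covariance must be.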
Marginal Likelihood Maximization
 The sparse Bayesian model is formulated as the local maximization with respect to α of the marginal likelihood, or equivalently its logarithm
L(α) = log p(Y | α, σ²) = log ∫ p(Y | X, σ²) p(X | α) dX
= −(1/2) [ M log 2π + log |C| + Yᵀ C⁻¹ Y ]
 with C = σ² I + G A⁻¹ Gᵀ
Marginal Likelihood Maximization
 A point estimate α_MP for the hyper-parameters is then obtained by evaluating (11) with α = α_MP, giving a posterior mean approximator GX ≈ Gμ_MP
 However, marginal likelihoods are generally difficult to compute, i.e., the values of α and σ² which maximize L(α) cannot be obtained in closed form
 To update α, differentiate (12) and equate it to 0. After rearranging, we have
α_i^{new} = γ_i / μ_i²
Marginal Likelihood Maximization
α_i^{new} = γ_i / μ_i²
 where μ_i is the ith posterior mean signal from (11), and γ_i is defined as
γ_i = 1 − α_i Σ_ii
with Σ_ii being the ith diagonal element of the posterior signal covariance from (10), computed with the current α and σ² values
 For the variance σ², differentiation leads to the re-estimate
(σ²)^{new} = ‖Y − Gμ‖² / (M − Σᵢ γᵢ)
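The two re-estimation rules can be combined into an iterative loop, in the style of the relevance vector machine updates the slides cite. This is a simplified real-valued sketch under assumed sizes with small numerical guards added, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative instance (sizes, noise level, and iteration count are assumptions)
N, M, K = 32, 16, 2
G = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = 1.0
Y = G @ x_true + 1e-3 * rng.standard_normal(M)

alpha = np.ones(N)
sigma2 = 1e-2
for _ in range(50):
    # Posterior covariance and mean for the current hyper-parameters
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + G.T @ G / sigma2)
    mu = Sigma @ G.T @ Y / sigma2
    # gamma_i = 1 - alpha_i * Sigma_ii (clipped as a numerical guard)
    gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-12, None)
    # alpha_i^new = gamma_i / mu_i^2 (floored mu^2 avoids division by zero)
    alpha = gamma / np.maximum(mu ** 2, 1e-12)
    # (sigma^2)^new = ||Y - G mu||^2 / (M - sum(gamma)), floored for stability
    sigma2 = max(np.linalg.norm(Y - G @ mu) ** 2 / max(M - gamma.sum(), 1e-12), 1e-12)

x_hat = mu    # reconstructed signal after the hyper-parameter iterations
```

As the iterations proceed, the α_i of irrelevant components grow large, which drives the corresponding μ_i toward zero and yields a sparse reconstruction.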
Heuristic using Prior Information
 After the reconstruction of X̂, if the algorithm converges to wrong results, there are two possible situations
1. The algorithm converges to values around 0 and 1, but with the wrong positions for the sparse events
 such errors cannot be easily distinguished
2. X̂ has values deviating from 0 or 1
 such errors are easy to find using threshold methods
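Situation 2 above suggests a simple thresholding check; a toy sketch with an assumed reconstruction and an assumed tolerance:

```python
import numpy as np

# Toy reconstruction with one value deviating from {0, 1} (assumed numbers)
x_hat = np.array([0.02, 0.97, 0.46, 0.01, 1.03, 0.00])

# Flag components far from both 0 and 1 as suspicious (situation 2)
tol = 0.25
suspicious = [i for i, v in enumerate(x_hat) if min(abs(v), abs(v - 1)) > tol]

# Round the remaining components to a binary decision
decision = (x_hat > 0.5).astype(int)
```

Here only the component at index 2 (value 0.46) is flagged; errors of situation 1, where the values look clean but sit at the wrong positions, pass this check unnoticed, which is why prior information is needed.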
Simulation Results and Analysis
 There are a total of N = 256 sources randomly located within a 500 m-by-500 m area
 The M wireless sensors are also randomly located within this area
 The minimal distance between an event and a sensor is 5 m
 The propagation loss factor is α = 3
 The transmitted power is normalized to 1 and the thermal noise variance is 10⁻¹²
 The number of random events is K, which is a small number
Simulation Results and Analysis
[Figure: proposed method vs. l1-magic]
Simulation Results and Analysis
[Figures: illustration of correct detection; illustration of incorrect detection]
Simulation Results and Analysis
 Heuristic Improvement
[Figure: heuristic improvement]
Simulation Results and Analysis
 Noise Effect
[Figure: noise effect]
Conclusions
 Proposed a compressive sensing method for sparse event detection in wireless sensor networks
 Formulated the problem and proposed solutions
 Introduced a fully probabilistic Bayesian framework which helps dramatically reduce the sampling rate