MP/BME 574 Lecture 20: Blind Deconvolution
Learning Objectives:
• Blind Deconvolution
• Maximum likelihood
• Expectation maximization
Assignment:
1. Matlab Image Processing Toolbox User's Guide, “Deblurring with the Blind Deconvolution Algorithm.”
2. Lucy, L. B., “An iterative technique for the rectification of observed distributions,” Astronomical Journal 79, 745 (1974).
I. Applications of Monte Carlo methods for solving cases where the point response function, p(x), is not known or not easily measured.
a. Generalized Monte Carlo algorithm:
1. Generate a random number.
2. “Guess” within some constraints or boundary on the problem, i.e., mapping f(x) to coordinate space.
3. Cost function: “Is the point inside the relevant coordinate space?”
4. If yes, store the value.
5. Repeat.
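As a concrete illustration, here is a minimal Matlab sketch of the accept/reject recipe above; the test function f(x) = exp(-x^2) and the unit bounding box are illustrative choices, not part of the lecture.

% Minimal accept/reject Monte Carlo sketch (illustrative):
% estimate the area under f(x) = exp(-x.^2) on [0,1] by testing whether
% random points fall inside the region (the "cost function" in step 3).
f = @(x) exp(-x.^2);           % mapping to coordinate space (step 2)
N = 1e5;                       % number of random trials
x = rand(N,1);                 % step 1: generate random numbers
y = rand(N,1);                 % "guess" a point in the unit bounding box
inside = y <= f(x);            % step 3: is the point inside?
areaEstimate = sum(inside)/N;  % steps 4-5: store accepted values, repeat
fprintf('Monte Carlo area estimate: %.4f\n', areaEstimate);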
II. Deconvolution review
a. Recall the non-blind deconvolution problem and the inverse filter
Wiener deconvolution in the presence of noise:

$$g(n_1,n_2) = f(n_1,n_2) ** h(n_1,n_2) + n(n_1,n_2)$$

$S(k_1,k_2)$ = power spectrum of the signal, $f$
$N(k_1,k_2)$ = power spectrum of the noise

$$B(k_1,k_2) = \frac{H^{*}(k_1,k_2)\,S(k_1,k_2)}{\left|H(k_1,k_2)\right|^{2} S(k_1,k_2) + N(k_1,k_2)} = \frac{1}{H(k_1,k_2)}\left[\frac{\left|H(k_1,k_2)\right|^{2}}{\left|H(k_1,k_2)\right|^{2} + \dfrac{N(k_1,k_2)}{S(k_1,k_2)}}\right]$$
This filter can be shown to minimize the mean square error (MSE) between the true object and
the estimate:
$$\varepsilon(k_1,k_2) = \left|F(k_1,k_2) - \hat{F}(k_1,k_2)\right|^{2}$$
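A minimal Matlab sketch of this Wiener filter, assuming a known PSF and a constant noise-to-signal power ratio; the test image, blur width, and noise level are illustrative assumptions (the Image Processing Toolbox function deconvwnr implements the same filter).

% Simulate a blurred, noisy measurement g = f**h + n, then apply B(k1,k2).
f   = im2double(imread('cameraman.tif'));                % example "true" object
h   = fspecial('gaussian', 15, 2);                       % example blur PSF
g   = imfilter(f, h, 'circular') + 0.01*randn(size(f));  % blurred + noisy data
NSR = 0.01^2 / var(f(:));           % assumed constant N(k1,k2)/S(k1,k2)
H   = psf2otf(h, size(g));          % PSF -> frequency response H(k1,k2)
B   = conj(H) ./ (abs(H).^2 + NSR); % Wiener filter B(k1,k2)
fhat = real(ifft2(B .* fft2(g)));   % estimate of f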
b. Iterative approaches:

$$\hat{f}_0(n_1,n_2) = g(n_1,n_2)$$

$$\hat{f}_k(n_1,n_2) = \hat{f}_{k-1}(n_1,n_2) + \left[\,g(n_1,n_2) - \hat{f}_{k-1}(n_1,n_2) ** h(n_1,n_2)\,\right]$$

The second term is the "residual" error. Taking the Fourier transform,

$$\hat{F}_k(k_1,k_2) = \hat{F}_{k-1}(k_1,k_2)\left(1 - H(k_1,k_2)\right) + G(k_1,k_2).$$

For k iterations then,

$$\hat{F}_k(k_1,k_2) = G(k_1,k_2)\left[(1-H(k_1,k_2))^{k} + (1-H(k_1,k_2))^{k-1} + \cdots + 1\right] = G\sum_{m=0}^{k}(1-H)^{m} = \frac{G}{H}\left(1 - (1-H)^{k+1}\right).$$

Therefore, as $k \to \infty$, $\hat{F}(k_1,k_2) \to \dfrac{G(k_1,k_2)}{H(k_1,k_2)}$ for $\left|1 - H(k_1,k_2)\right| < 1$.
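A minimal Matlab sketch of this iteration for a known PSF, reusing g and h from the Wiener sketch above; the iteration count is an arbitrary choice.

% Each pass adds the residual g - fhat**h back onto the current estimate.
fhat = g;                                           % f_hat_0 = g
for k = 1:25
    residual = g - imfilter(fhat, h, 'circular');   % g - f_hat_{k-1} ** h
    fhat = fhat + residual;                         % f_hat_k = f_hat_{k-1} + residual
end
% In the frequency domain this accumulates G*(1-H)^m, so it approaches
% G/H only where |1 - H(k1,k2)| < 1.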
III. Blind Deconvolution
a. Deterministic Approach
i. In “blind” deconvolution, both $\hat{f}(n_1,n_2)$ and $\hat{h}(n_1,n_2)$ must be estimated, so the optimal solution can be highly dependent on the initial guess.
ii. The system is under-determined (i.e. one equation, $g = f**h$, and two unknowns, $f$ and $h$).
iii. Maximum Likelihood (ML) Blind Deconvolution
iv. One strategy is to combine the robust convergence properties of iterative techniques with a priori assumptions about the form of the data, including statistical models of uncertainty in the measurements. For independent, Gaussian-distributed measurement errors with variance $\sigma^2$, the log likelihood of the measured data $y_i$ given the model predictions $\hat{y}_i$ is

$$\ln \Pr = \sum_{i}\left[\ln\!\left(\frac{1}{\sigma\sqrt{2\pi}}\right) - \frac{(y_i - \hat{y}_i)^2}{2\sigma^2}\right].$$

Therefore, the log likelihood of the measured data is maximized for a model in which

$$\sum_{i}\frac{(y_i - \hat{y}_i)^2}{2\sigma^2}$$

is minimized.
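A quick numerical illustration of this point, with made-up measurements and two hypothetical models: the model with the smaller sum of squared residuals has the larger Gaussian log likelihood.

y     = [1.0 2.1 2.9 4.2];       % hypothetical measurements
yhatA = [1.0 2.0 3.0 4.0];       % model A predictions (small residuals)
yhatB = [1.5 2.5 3.5 4.5];       % model B predictions (larger residuals)
sigma = 0.2;                     % assumed measurement standard deviation
logL  = @(yhat) sum(-log(sigma*sqrt(2*pi)) - (y - yhat).^2/(2*sigma^2));
[logL(yhatA), logL(yhatB)]       % model A gives the larger log likelihood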
v. General assumptions about the physical boundary conditions and uncertainty in the
data
1. e.g. Non-negativity and compact support.
vi. Statistical models of variation in the measured data:
1. e.g. Poisson or Gaussian distributed.
2. This leads to estimates of expected values for the measured data for ML
optimization.
vii. The physical constraints limit the solution, while the ML approach provides a criterion for evaluating convergence:

$$\hat{f}_0(n_1,n_2) = g(n_1,n_2)$$

$$\hat{f}_{k+1}(n_1,n_2) = \hat{f}_k(n_1,n_2) + \beta\left[\,g(n_1,n_2) - \hat{f}_k(n_1,n_2) ** \hat{h}_k(n_1,n_2)\,\right]$$

and at each k the constraints are enforced:

$$\hat{f}_k(n_1,n_2) = \begin{cases}\hat{f}_k(n_1,n_2), & n_1' \le n_1 \le n_1'' \text{ and } n_2' \le n_2 \le n_2'' \\ 0, & \text{otherwise}\end{cases} \qquad \text{(compact support)}$$

$$\hat{f}_k(n_1,n_2) < 0 \;\Rightarrow\; 0 \qquad \text{(non-negativity)}$$

$$\hat{g}_k(n_1,n_2) = \hat{f}_k(n_1,n_2) ** \hat{h}_k(n_1,n_2), \quad \text{and} \quad \mathrm{LSE} = \sum_{n_1}\sum_{n_2}\left|\,g - \hat{g}_k\,\right|^{2}$$

is minimized and used to optimize the convergence. The conditions for convergence are similar to the iterative procedure when $h(n_1,n_2)$ is known, except that convergence to the inverse filter is no longer guaranteed and is sensitive to noise, the choice of the step size $\beta$, and the initial guess, $\hat{f}_0(n_1,n_2)$.
This is a non-linear fitting problem: the minimum of the LSE cost function is found with a numerical search algorithm (e.g. steepest descent, the Nelder-Mead simplex, or Levenberg-Marquardt for combined functions):
http://zoot.radiology.wisc.edu/~fains/Lectures/Lecture20.ppt
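For comparison with the Matlab assignment, a minimal sketch using the Image Processing Toolbox routine deconvblind, which estimates both the image and the PSF from an initial PSF guess; note that deconvblind uses a maximum-likelihood (Lucy-Richardson-type) iteration rather than the LSE steepest-descent search outlined above, and the blur and initial PSF size here are illustrative assumptions.

% Blind deconvolution of a simulated blurred image from a flat PSF guess.
g = im2double(imread('cameraman.tif'));
g = imfilter(g, fspecial('gaussian', 15, 2), 'circular');  % simulated blur
initPSF = ones(15) / 15^2;                  % flat initial guess for the PSF
[fhat, hhat] = deconvblind(g, initPSF, 20); % 20 iterations; returns image and PSF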
b. Stochastic Approach
i. In many cases the uncertainty in the location of the detected photons is governed by well-characterized stochastic processes
1. For example,
a. Photon counts in a CCD detector
i. Astronomy
ii. Emission Tomography (SPECT, PET)
iii. X-ray Computed Tomography
b. Inherently a Monte Carlo process where the pdf of many realizations of
detected events is assumed.
ii. Intuitively, $h(x)$ is the superposition of multiple random processes used as probes (i.e. individual photons or molecules of dye) to measure the response of the system.
iii. As before, the physical system must adhere to mass balance and, for finite counting statistics, non-negativity.
iv. For example, consider:

$$\phi(x) = \int \psi(\xi)\, p(x \mid \xi)\, d\xi,$$

where $\phi(x)$ is our measured image data, $\psi(\xi)$ is the desired corrected image, and $p(x \mid \xi)$ is a conditional probability density function kernel that relates the expected value of the data to the measured data, e.g.

$$p(x \mid \xi) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\xi)^2}{2\sigma^2}},$$

assuming the photon counts in our x-ray image are approximately Gaussian distributed about their expected value $\xi$. For this example, then,

$$\phi(x) = \int \psi(\xi)\,\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\xi)^2}{2\sigma^2}}\, d\xi = \psi(x) * p(x)$$

becomes our familiar convolution process.
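A minimal 1-D Matlab sketch of this forward model, assuming a Gaussian kernel with an illustrative sigma and a made-up object psi.

sigma = 2;
xi    = -10:10;                                        % kernel support (illustrative)
p     = exp(-xi.^2/(2*sigma^2)) / (sigma*sqrt(2*pi));  % Gaussian kernel p(x|xi)
p     = p / sum(p);                                    % normalize the discrete kernel
psi   = zeros(1,200);  psi(60) = 1;  psi(120:140) = 0.5;  % "true" object psi(xi)
phi   = conv(psi, p, 'same');                          % measured data phi(x) = psi * p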
v. Expectation maximization (EM)
1. Expectation in the sense that we use a statistical model of the data measurement process to estimate the missing data.
2. Maximization in the maximum likelihood sense, where iteration is used within appropriate physical constraints to maximize the likelihood of the probability model for the image.
vi. Consider an “inverse” conditional probability function given by Bayes' Theorem:

$$Q(\xi \mid x) = \frac{\psi(\xi)\, p(x \mid \xi)}{\phi(x)}.$$
Then it is possible to estimate the value of the inverse probability function iteratively from the current guesses $\phi_k(x)$, $\psi_k(\xi)$ at the kth iteration of the measured image and the deconvolved image, respectively. Our iterative estimate of the inverse filter is then:

$$Q_k(\xi \mid x) = \frac{\psi_k(\xi)\, p(x \mid \xi)}{\phi_k(x)},$$

where

$$\phi_k(x) = \int \psi_k(\xi)\, p(x \mid \xi)\, d\xi, \quad \text{and} \quad \psi_k(\xi) = \int \phi(x)\, Q_{k-1}(\xi \mid x)\, dx.$$
Putting this all together, starting with the last result and substituting, the iterative estimate of the image is:

$$\psi_{k+1}(\xi) = \int \phi(x)\, \frac{\psi_k(\xi)\, p(x \mid \xi)}{\phi_k(x)}\, dx = \psi_k(\xi)\int \frac{\phi(x)}{\phi_k(x)}\, p(x \mid \xi)\, dx.$$
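A minimal Matlab sketch of this multiplicative (Lucy) update for the 1-D example above, reusing phi and p from the previous sketch; the iteration count is arbitrary, and deconvlucy in the Image Processing Toolbox implements the same Richardson-Lucy update for images.

psik = ones(size(phi));                 % non-negative initial guess psi_0
for k = 1:50
    phik  = conv(psik, p, 'same');      % phi_k(x) = integral of psi_k(xi) p(x|xi) dxi
    ratio = phi ./ max(phik, eps);      % phi(x) / phi_k(x)
    psik  = psik .* conv(ratio, fliplr(p), 'same');  % psi_{k+1}(xi): correlate ratio with p
end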
vii. Convergence to the ML result is guaranteed
1. Lucy paper.
2. If $\phi(x)$ and $\psi(\xi)$ are non-negative and the respective areas of $\phi_k(x)$ and $\psi_k(\xi)$ are conserved. This is because $\phi_k(x)$ will approach $\phi(x)$, and $\psi_{k+1}(\xi)$ will then approach $\psi_k(\xi)$ in these circumstances. Note that the model has remained general.