3. Spatiotemporal field correlations
3.1. Spatiotemporal correlation function. Coherence volume.
• All optical fields in practice fluctuate randomly in both time and space and are subject to a statistical description [1]. These fluctuations depend on both the emission process (primary sources) and the propagation media (secondary sources).
• Optical coherence is a manifestation of the field's statistical similarities in space and time, and coherence theory is the discipline that mathematically describes these similarities [2]. The one field distribution that is deterministic in both time and space is the monochromatic plane wave, which is only a mathematical construct, impossible to obtain in practice due to the uncertainty principle.
• The formalism presented below for describing the field correlations is mathematically similar to that used for mechanical fluctuations, for example, in the case of vibrating membranes. The analogy between the two different types of fluctuations and their mathematical description in terms of spatiotemporal correlations has been emphasized recently [3].
• A starting point in understanding the physical meaning of a statistical optical field is the question: what is the effective (average) temporal sinusoid, $\left\langle e^{i\omega t}\right\rangle_\omega$, for a broadband field? What is the average spatial sinusoid, $\left\langle e^{i\mathbf{k}\cdot\mathbf{r}}\right\rangle_{\mathbf{k}}$?
• A monochromatic plane wave is described by $e^{i(\omega t-\mathbf{k}\cdot\mathbf{r})}$. These two averages can be performed using the probability densities associated with the temporal and spatial frequencies, $S(\omega)$ and $P(\mathbf{k})$, which are normalized to satisfy $\int S(\omega)\,d\omega=1$ and $\int P(\mathbf{k})\,d^{3}k=1$.
• Thus, $S(\omega)\,d\omega$ is the probability of having frequency component $\omega$ in our field, or the fraction of the total power contained in the vicinity of frequency $\omega$. Similarly, $P(\mathbf{k})\,d^{3}k$ is the probability of having component $\mathbf{k}$ in the field, or the fraction of the total power contained around spatial frequency $\mathbf{k}$. Up to a normalization factor, $S$ and $P$ are the temporal and spatial power spectra associated with the fields. The two "effective sinusoids" can be expressed as ensemble averages, using $S(\omega)$ and $P(\mathbf{k})$ as weighting functions,

$$\left\langle e^{i\omega t}\right\rangle_\omega = \int S(\omega)\,e^{i\omega t}\,d\omega = \Gamma(t) \quad\text{(1a)}$$

$$\left\langle e^{i\mathbf{k}\cdot\mathbf{r}}\right\rangle_{\mathbf{k}} = \int P(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,d^{3}k = W(\mathbf{r}) \quad\text{(1b)}$$

• Equations 1a-b establish that the average temporal sinusoid for a broadband field equals its temporal autocorrelation, $\Gamma$. The average spatial sinusoid for an inhomogeneous field equals its spatial autocorrelation, denoted by $W$.
• Besides describing the statistical properties of optical fields, coherence theory can make predictions of experimental relevance. The general problem can be formulated as follows (Fig. 1):
Figure 1. Spatio-temporal distribution of a real optical field
• Given the optical field distribution $U(\mathbf{r},t)$ that varies randomly in space and time, over what spatiotemporal domain does the field preserve significant correlations? This translates into: combining the field $U(\mathbf{r},t)$ with a replica of itself shifted in both time and space, $U(\mathbf{r}+\boldsymbol{\rho},\,t+\tau)$, on average, how large can $\boldsymbol{\rho}$ and $\tau$ be and still observe "significant" interference?
• We expect monochromatic fields to exhibit infinitely broad temporal correlations, and plane waves to manifest infinitely broad spatial correlations. Regardless of how much we shift a monochromatic field in time or a plane wave in space, they remain perfectly correlated with their unshifted replicas. It is difficult to picture temporal correlations decaying over timescales that are shorter than an optical period, or spatial correlations that decay over spatial scales smaller than the optical wavelength. In the following we provide a quantitative description of the spatiotemporal correlations.
• The statistical behavior of optical fields can be captured mathematically, in full generality, via a spatiotemporal correlation function

$$\Lambda\left(\mathbf{r}_1,\mathbf{r}_2;t_1,t_2\right) = \left\langle U\left(\mathbf{r}_1,t_1\right)\,U^{*}\left(\mathbf{r}_2,t_2\right)\right\rangle_{\mathbf{r},t} \quad\text{(2)}$$

• The average is performed temporally and spatially, as indicated by the subscripts $\mathbf{r}$ and $t$. Because common detector arrays capture the spatial intensity distribution in 2D only, we will restrict the discussion to $\mathbf{r}=(x,y)$, without losing generality. These averages are defined in the usual sense as

$$\left\langle U\left(\mathbf{r}_1,t_1\right)U^{*}\left(\mathbf{r}_2,t_2\right)\right\rangle_{t} = \lim_{T\to\infty}\frac{1}{T^{2}}\int_{-T/2}^{T/2}\!\int_{-T/2}^{T/2} U\left(\mathbf{r}_1,t_1\right)U^{*}\left(\mathbf{r}_2,t_2\right)\,dt_1\,dt_2$$

$$\left\langle U\left(\mathbf{r}_1,t_1\right)U^{*}\left(\mathbf{r}_2,t_2\right)\right\rangle_{\mathbf{r}} = \lim_{A\to\infty}\frac{1}{A^{2}}\int_{A}\!\int_{A} U\left(\mathbf{r}_1,t_1\right)U^{*}\left(\mathbf{r}_2,t_2\right)\,d^{2}r_1\,d^{2}r_2 \quad\text{(3)}$$
• Often we deal with fields that are both stationary (in time) and statistically homogeneous (in space). If stationary, the statistical properties of the field (e.g. the average and higher-order moments) do not depend on the origin of time. Similarly, for statistically homogeneous fields, the properties do not depend on the origin of space. Wide-sense stationarity is less restrictive and defines a random process with only its first and second moments independent of the choice of origin. For the discussion here, the fields are assumed to be stationary at least in the wide sense.
• Under these circumstances, the dimensionality of the spatiotemporal correlation function $\Lambda$ decreases by half,

$$\Lambda\left(t_1,t_2\right) = \Lambda\left(t_2-t_1\right), \qquad \Lambda\left(\mathbf{r}_1,\mathbf{r}_2\right) = \Lambda\left(\mathbf{r}_2-\mathbf{r}_1\right) \quad\text{(4)}$$

• The spatiotemporal correlation function becomes

$$\Lambda\left(\boldsymbol{\rho},\tau\right) = \left\langle U\left(\mathbf{r},t\right)\,U^{*}\left(\mathbf{r}+\boldsymbol{\rho},\,t+\tau\right)\right\rangle_{\mathbf{r},t} \quad\text{(5)}$$
• $\Lambda(0,0) = \left\langle U(\mathbf{r},t)\,U^{*}(\mathbf{r},t)\right\rangle_{\mathbf{r},t}$ represents the spatially averaged irradiance of the field, which is, of course, a real quantity. In general $\Lambda(\boldsymbol{\rho},\tau)$ is complex. Define a normalized version of $\Lambda$, referred to as the spatiotemporal complex degree of correlation,

$$\gamma\left(\boldsymbol{\rho},\tau\right) = \frac{\Lambda\left(\boldsymbol{\rho},\tau\right)}{\Lambda\left(0,0\right)} \quad\text{(6)}$$

• For stationary fields $\gamma$ attains its maximum at $\boldsymbol{\rho}=0$, $\tau=0$; thus

$$0 \le \left|\gamma\left(\boldsymbol{\rho},\tau\right)\right| \le 1 \quad\text{(7)}$$

• Define an area $A_C \simeq \rho_C^{2}$ and a length $l_C = c\,\tau_C$ over which $\left|\gamma\left(\rho_C,\tau_C\right)\right|$ maintains a significant value, say $\ge \tfrac{1}{2}$, which defines a coherence volume

$$V_C = A_C \cdot l_C \quad\text{(8)}$$
• This coherence volume determines the maximum domain size over which the fields can be considered correlated. In general, an extended source, such as an incandescent filament, may have spectral properties that vary from point to point. It is convenient to discuss spatial correlations at each frequency $\omega$, as described below.
3.2. Spatial correlations of monochromatic light
Figure 3. Measuring the spatial power spectrum of the field from source S via a lens (a) and Fraunhofer propagation in
free space (b).
• The spatial correlation function $W(\boldsymbol{\rho},\omega)$ can also be experimentally determined from measurements of the spatial power spectrum, as shown in Fig. 3.
• Both far-field propagation in free space and propagation through a lens can generate the Fourier transform of the source field, as illustrated in Fig. 3,

$$U\left(\mathbf{k},\omega\right) = \int U\left(\mathbf{r},\omega\right)\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^{2}r \quad\text{(16)}$$

• The CCD is sensitive to power and detects the spatial power spectrum,

$$P\left(\mathbf{k},\omega\right) = \left|U\left(\mathbf{k},\omega\right)\right|^{2}$$

• In Eq. 16, the frequency component $\mathbf{k}=(k_x,k_y)$ depends either on the focal distance $f$, for the lens transformation (Fig. 3a), or on the propagation distance $z$, for the Fraunhofer propagation (Fig. 3b),

$$\mathbf{k} = \frac{2\pi}{\lambda}\,\frac{\left(x',y'\right)}{f}, \qquad \mathbf{k} = \frac{2\pi}{\lambda}\,\frac{\left(x',y'\right)}{z} \quad\text{(17)}$$
• In the Fraunhofer regime, the ratios $x'/f$ and $x'/z$ describe the diffraction angle; therefore $P(\mathbf{k},\omega)$ is sometimes called the angular power spectrum.
• For extended sources that are far away from the detection plane, as in Fig. 3b, the size of the source may have a significant effect on the Fourier transform in Eq. 16. This effect becomes obvious if we replace the source field $U$ with its spatially truncated version, $U'$, to indicate the finite size of the source,

$$U'\left(\mathbf{r},\omega\right) = U\left(\mathbf{r},\omega\right)\,\Pi\!\left(\frac{\mathbf{r}}{a}\right) \quad\text{(18)}$$

• $\Pi$ is the 2D rectangular function, a square of side $a$. The far field becomes

$$U'\left(\mathbf{k},\omega\right) = a^{2}\,U\left(\mathbf{k},\omega\right) \circledast_{k_x,k_y} \operatorname{sinc}\!\left(\frac{a k_x}{2}\right)\operatorname{sinc}\!\left(\frac{a k_y}{2}\right) \quad\text{(19)}$$

• $\circledast$ denotes convolution and $\operatorname{sinc}(x) = \sin(x)/x$.
• The field across the detection plane $(x', y')$, $U'(\mathbf{k},\omega)$, is smooth over scales given by the width of the sinc function.
• This smoothness indicates that the field is spatially correlated over this spatial scale. Along $x'$, the correlation distance $\Delta x_c'$ is obtained by writing explicitly the spatial frequency argument of the sinc function,

$$\frac{a k_x}{2} = \frac{a}{2}\cdot\frac{2\pi}{\lambda}\cdot\frac{\Delta x_c'}{z} = \pi \;\Rightarrow\; \Delta x_c' = \frac{\lambda z}{a} \quad\text{(20)}$$
• We can conclude that the correlation area of the field generated by the source in the far zone is of the order of

$$A_C \simeq \left(\Delta x_c'\right)^{2} = \frac{\lambda^{2}}{\Omega} \quad\text{(21)}$$

• $\Omega \simeq a^{2}/z^{2}$ is the solid angle subtended by the source.
• This relationship allowed Michelson to measure interferometrically the angle subtended by stars.
• For example, the Sun subtends an angle $\theta \simeq 10\ \mathrm{mrad}$, i.e. $\Omega \simeq 10^{-4}\ \mathrm{sr}$. Thus, for the green radiation at the mean of the visible spectrum, $\lambda = 550\ \mathrm{nm}$, the coherence area at the surface of the Earth is of the order of $A_C^{\mathrm{Sun}} \simeq 50 \times 50\ \mu\mathrm{m}^{2}$. Measuring this area over which the sunlight shows correlations (or generates fringes) provides information about its angular size.
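As a quick sanity check on these numbers, Eq. 21 can be evaluated directly; the sketch below simply plugs in the values quoted above ($\lambda = 550$ nm, $\Omega = 10^{-4}$ sr):

```python
import numpy as np

# Coherence area in the far zone (Eq. 21): A_C ~ lambda^2 / Omega
wavelength = 550e-9      # mean visible wavelength [m]
omega_sun = 1e-4         # solid angle subtended by the Sun [sr]

A_C = wavelength**2 / omega_sun   # coherence area [m^2]
side = np.sqrt(A_C)               # side of the equivalent square [m]

print(f"A_C = {A_C:.2e} m^2, i.e. about {side*1e6:.0f} x {side*1e6:.0f} um^2")
# prints: A_C = 3.02e-09 m^2, i.e. about 55 x 55 um^2
```

The result, a square of side about 55 µm, matches the order-of-magnitude estimate $50 \times 50\ \mu\mathrm{m}^{2}$ given above.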
• For angularly smaller sources, the far-field spatial coherence is correspondingly higher. This is the essence of the Van Cittert-Zernike theorem, which states that the field generated by spatially incoherent sources gains coherence upon propagation. This is the result of free-space propagation acting as a spatial low-pass filter [4].
• Zernike employed the spatial filtering concept to develop phase contrast microscopy [5, 6]. It had been known since Abbe that an image can be described as an interference phenomenon [7]. Image formation is the result of simultaneous interference processes that take place at each point in the image.
3.3. Temporal correlations of plane waves
• We now investigate the temporal correlations of fields at a particular spatial frequency $\mathbf{k}$ (or a certain direction of propagation). Taking the spatial Fourier transform of $\Lambda$ in Eq. 2, we obtain the temporal correlation function

$$\Gamma\left(\mathbf{k},\tau\right) = \int \Lambda\left(\boldsymbol{\rho},\tau\right)\,e^{-i\mathbf{k}\cdot\boldsymbol{\rho}}\,d^{2}\rho = \left\langle U\left(\mathbf{k},t\right)\,U^{*}\left(\mathbf{k},t+\tau\right)\right\rangle_t \quad\text{(22)}$$
Figure 5. Michelson interferometry.
• The autocorrelation function $\Gamma$ is relevant in interferometric experiments of the type illustrated in Fig. 5. In a Michelson interferometer, a plane wave from the source is split in two by the beam splitter and subsequently recombined via reflections on mirrors M1 and M2. The intensity at the detector has the form (we assume a 50/50 beam splitter)

$$I\left(\mathbf{k},\tau\right) = \left\langle \left|U\left(\mathbf{k},t\right) + U\left(\mathbf{k},t+\tau\right)\right|^{2}\right\rangle_t = 2I\left(\mathbf{k}\right) + 2\operatorname{Re}\left\langle U\left(\mathbf{k},t\right)\,U^{*}\left(\mathbf{k},t+\tau\right)\right\rangle_t \quad\text{(23)}$$
• The real part of $\Gamma(\mathbf{k},\tau)$ is obtained by varying the time delay between the two fields. This delay can be controlled by translating one of the mirrors. The complex degree of temporal correlation at spatial frequency $\mathbf{k}$ is defined as

$$\gamma\left(\mathbf{k},\tau\right) = \frac{\Gamma\left(\mathbf{k},\tau\right)}{\Gamma\left(\mathbf{k},0\right)} \quad\text{(24)}$$

• $\Gamma(\mathbf{k},0)$ represents the intensity of the field, i.e.

$$\Gamma\left(\mathbf{k},0\right) = \left\langle U\left(\mathbf{k},t\right)\,U^{*}\left(\mathbf{k},t\right)\right\rangle_t = I\left(\mathbf{k}\right) \quad\text{(25)}$$
• The complex degree of temporal correlation has a property similar to its spatial counterpart $\gamma(\boldsymbol{\rho},\tau)$, i.e.

$$0 \le \left|\gamma\left(\mathbf{k},\tau\right)\right| \le 1 \quad\text{(26)}$$

• The coherence time is defined as the maximum time delay between the fields for which $\left|\gamma\right|$ maintains a significant value, say $\tfrac{1}{2}$.
• If we cross-correlate temporally two plane waves of different wave vectors (directions of propagation), the result vanishes unless $\mathbf{k}_1 = \mathbf{k}_2$,

$$\Gamma\left(\mathbf{k}_1,\mathbf{k}_2,\tau\right) = \left\langle U_1\left(\mathbf{k}_1,t\right)\,U_2^{*}\left(\mathbf{k}_2,t+\tau\right)\right\rangle_t = \Gamma\left(\mathbf{k}_1,\mathbf{k}_1,\tau\right)\,\delta\left(\mathbf{k}_2-\mathbf{k}_1\right) \quad\text{(27)}$$
• At each moment $t$, the two plane waves generate fringes of spatial frequency $\mathbf{k}_2-\mathbf{k}_1$. If the detector (e.g. a CCD) averages the signal over scales larger than the fringe period, the temporal correlation information is lost.
• As $\tau$ changes, the fringes "run" across the plane such that the contrast averages to 0. For this reason, for example, the two beams in a typical Michelson interferometer are carefully aligned to be parallel.
• The temporal correlation $\Gamma$ is the Fourier transform of the power spectrum,

$$\Gamma\left(\mathbf{k},\tau\right) = \int_{-\infty}^{\infty} S\left(\mathbf{k},\omega\right)\,e^{i\omega\tau}\,d\omega, \qquad S\left(\mathbf{k},\omega\right) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \Gamma\left(\mathbf{k},\tau\right)\,e^{-i\omega\tau}\,d\tau \quad\text{(28)}$$

• $\Gamma$ can be determined via spectroscopic measurements, as exemplified in Fig. 6.
Figure 6. Spectroscopic measurement using a grating: G, grating; D, detector; $\theta$, diffraction angle. The dashed line indicates the undiffracted (zeroth) order.
• By using a grating (a prism, or any other dispersive element), we can "disperse" different colors at different angles, such that a rotating detector can measure $S(\omega)$ directly.
• To estimate the coherence time for a broadband field, let us assume a Gaussian spectrum centered at frequency $\omega_0$ and having the r.m.s. width $\Delta\omega$,

$$S\left(\omega\right) = S_0\, e^{-\frac{\left(\omega-\omega_0\right)^{2}}{2\,\Delta\omega^{2}}} \quad\text{(29)}$$

• $S_0$ is a constant.
• The autocorrelation function is also a Gaussian, modulated by a sinusoidal function, as a result of the Fourier shift theorem,

$$\Gamma\left(\tau\right) = \Gamma_0\, e^{-\frac{\tau^{2}\,\Delta\omega^{2}}{2}}\, e^{i\omega_0\tau} \quad\text{(30)}$$

• If we define the width of $\Gamma$ as the coherence time, we obtain

$$\tau_C = \frac{1}{\Delta\omega} \quad\text{(31)}$$

• and the coherence length

$$l_C = c\,\tau_C = \frac{\lambda^{2}}{2\pi\,\Delta\lambda} \quad\text{(32)}$$
• The coherence length depends on the spectral bandwidth in an analogous fashion to the dependence of the coherence area on the solid angle (Eq. 21). This is not surprising, as both types of correlations depend on their respective frequency bandwidth.
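The Fourier relation of Eq. 28 and the Gaussian results of Eqs. 29-31 can be checked numerically; the sketch below uses an arbitrary center frequency and bandwidth (dimensionless units, chosen for illustration) and evaluates the integral by direct summation:

```python
import numpy as np

# Gaussian spectrum (Eq. 29): w0 = center frequency, dw = r.m.s. width
w0, dw = 100.0, 5.0
w = np.linspace(w0 - 10*dw, w0 + 10*dw, 4001)
step = w[1] - w[0]
S = np.exp(-(w - w0)**2 / (2*dw**2))
S /= S.sum() * step           # normalize so that the integral of S over w is 1

# Autocorrelation (Eq. 28): Gamma(tau) = integral of S(w) exp(i w tau) dw
def gamma(tau):
    return np.sum(S * np.exp(1j * w * tau)) * step

# The envelope |Gamma(tau)| = exp(-tau^2 dw^2 / 2) drops to exp(-1/2)
# at the coherence time tau_C = 1/dw (Eq. 31)
tau_C = 1.0 / dw
print(abs(gamma(0.0)))        # 1.0 (normalized)
print(abs(gamma(tau_C)))      # ~ exp(-0.5) = 0.607
```

At $\tau = 1/\Delta\omega$ the magnitude of the autocorrelation has dropped to $e^{-1/2} \approx 0.61$ of its peak, confirming Eq. 31; the oscillating factor $e^{i\omega_0\tau}$ only affects the phase.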
1. J. W. Goodman, Statistical Optics (Wiley, New York, 2000).
2. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge; New York, 1995).
3. G. Popescu, Y. K. Park, R. R. Dasari, K. Badizadegan and M. S. Feld, Coherence properties of red blood cell membrane motions, Phys. Rev. E 76, 031902 (2007).
4. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1996).
5. F. Zernike, Phase contrast, a new method for the microscopic observation of transparent objects, Part 1, Physica 9, 686-698 (1942).
6. F. Zernike, Phase contrast, a new method for the microscopic observation of transparent objects, Part 2, Physica 9, 974-986 (1942).
7. E. Abbe, Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung, Arch. Mikrosk. Anat. 9, 431 (1873).
8. A. R. Webb, Introduction to Biomedical Imaging (Wiley, Hoboken, New Jersey, 2003).
9. M. Pluta, Advanced Light Microscopy (Polish Scientific Publishers, Warszawa, 1988).
4. IMAGE CHARACTERISTICS
• Imaging is the process of mapping a certain physical property of an object and displaying it in a visual form [8].
• Examples of such physical properties include absorption (e.g. light imaging, x-rays), reflectivity (e.g. ultrasound, photography), proton density (e.g. MRI), and concentration of radionuclides (e.g. nuclear imaging).
• In microscopy, intrinsic specimen absorption, emission, and scattering are the quantities of interest. Using exogenous contrast agents, e.g. stains or fluorescent tags, one can image concentration distributions of certain species.
• We discuss the basic properties of microscope images, irrespective of the method involved in acquiring the images, and present simple image enhancement operations allowed by numerical image processing.
4.1 Imaging as linear operation
• Often an imaging system, e.g. a microscope, can be approximated by a linear system.
• Consider that the physical property of the sample under investigation is described by the function $S(\mathbf{r})$, $\mathbf{r}=(x,y,z)$. The imaging system outputs an image $I(\mathbf{r})$, which is related to $S(\mathbf{r})$ through a convolution operation,

$$I\left(\mathbf{r}\right) = S\left(\mathbf{r}\right) \circledast h\left(\mathbf{r}\right) \quad\text{(4.1)}$$

• $h$ defines the impulse response, or Green's function, of the system; $h$ is commonly called the point spread function (PSF). We are not specific as to what quantity $I$ means. For example, $I$ can be an intensity distribution, as in incoherent imaging, or a complex field, as in coherent imaging and QPI. $h$ can be experimentally retrieved by imaging a very small object, i.e. one approaching a delta function in 3D.
• Thus, replacing the sample distribution $S(\mathbf{r})$ with $\delta(\mathbf{r})$, we obtain

$$I\left(\mathbf{r}\right) = \delta\left(\mathbf{r}\right) \circledast h\left(\mathbf{r}\right) = h\left(\mathbf{r}\right) \quad\text{(4.2)}$$

• The PSF of the system, $h(\mathbf{r})$, defines how an infinitely small object is blurred through the imaging process. The PSF is a measure of the resolving power of the system.
• $I(\mathbf{r})$ can be an intensity distribution or a complex field distribution, depending upon whether the imaging uses spatially incoherent or coherent light, respectively. This distinction is very important because in coherent imaging the system is linear in complex fields, while in incoherent imaging the linearity holds in intensities.
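A minimal numerical sketch of Eq. 4.1 (the grid, the two-point "sample", and the Gaussian PSF width below are invented for illustration); the convolution is carried out in the frequency domain via the convolution theorem (Eq. 4.5):

```python
import numpy as np

# 1D "sample": two point objects on an arbitrary grid (sketch of Eq. 4.1)
n = 256
S = np.zeros(n)
S[118] = 1.0
S[138] = 1.0

# Gaussian PSF h(x) of r.m.s. width sigma, normalized to unit area
x = np.arange(n) - n//2
sigma = 3.0
h = np.exp(-x**2 / (2*sigma**2))
h /= h.sum()

# Image formation I = S (*) h, computed in the frequency domain:
# FT(I) = FT(S) * FT(h)  (ifftshift puts the PSF center at index 0)
I = np.real(np.fft.ifft(np.fft.fft(S) * np.fft.fft(np.fft.ifftshift(h))))

print(I.sum())        # total signal is preserved (PSF has unit area): 2.0
print(I.max() < 1.0)  # the delta-like points are blurred, so peaks drop: True
```

Because the PSF is normalized, blurring conserves the total signal while lowering and widening each point; halving `sigma` would sharpen the image, illustrating that the PSF width sets the resolution.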
4.2 Resolution
• The resolution of an imaging system is defined by the minimum distance between two points that are considered "resolved." What is considered resolved is subject to convention.
• The half-width at half-maximum of $h$ in one direction is one possible measure of resolution along that direction. When discussing image formation, we may encounter other conventions for resolution.
• The image can be represented in the spatial frequency domain via a 3D Fourier transform (see Appendix B)

$$I\left(\mathbf{k}\right) = \int_V I\left(\mathbf{r}\right)\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^{3}r \quad\text{(4.3)}$$

• $\mathbf{k}=(k_x,k_y,k_z)$ is the (angular) spatial frequency, in units of rad/m.
• Writing $I$ in separable form, $I(\mathbf{r}) = I_x(x)\,I_y(y)\,I_z(z)$, the 3D Fourier transform in Eq. 4.3 breaks into a product of three 1D integrals,

$$I\left(\mathbf{k}\right) = \int I_x\left(x\right)e^{-ik_x x}\,dx \cdot \int I_y\left(y\right)e^{-ik_y y}\,dy \cdot \int I_z\left(z\right)e^{-ik_z z}\,dz = I\left(k_x\right)I\left(k_y\right)I\left(k_z\right) \quad\text{(4.4)}$$
• To find a relationship between the object and the image in the frequency domain, we Fourier transform Eq. 4.1 and use the convolution theorem to obtain

$$I\left(\mathbf{k}\right) = S\left(\mathbf{k}\right)\cdot h\left(\mathbf{k}\right) \quad\text{(4.5)}$$

• $S(\mathbf{k})$ and $h(\mathbf{k})$ are the Fourier transforms of $S$ and $h$, respectively. $h(\mathbf{k})$ is referred to as the transfer function (TF).
Figure 1. a) Relationship between the modulus of the PSF and the TF. b) An infinitely narrow PSF requires an infinitely broad TF.
• From the Fourier relationship between the PSF and the TF, it is clear that high resolution (i.e. a narrow PSF) demands broad frequency support of the system (a broad TF), as illustrated in Fig. 1.
• The ideal system, for which $h(\mathbf{r}) = \delta(\mathbf{r})$, requires infinite frequency support, which is clearly unachievable in practice,

$$\int \delta\left(\mathbf{r}\right)\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^{3}r = 1 \quad\text{(4.6)}$$
• The physics of the image formation defines the instrument's PSF and TF. We will derive these expressions for coherent and diffraction-limited microscopy later. For now, we continue the general discussion regarding image characteristics, irrespective of the type of microscope that generated the image, or of whether the image is an intensity or phase distribution.
4.3 Signal to noise ratio (SNR)
• Any measured image is affected by experimental noise. There are various causes of noise, particular to the light source, propagation medium, and detector, which contribute to the measured signal,

$$\hat{x} = x + \epsilon \quad\text{(4.7)}$$

• $\hat{x}$ is the measured signal, $x$ is the true (noise-free) signal, and $\epsilon$ the noise.
• The variance, $\sigma^{2}$, of the signal describes how much variation around the average there is over $N$ measurements,

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{x}_i - \left\langle\hat{x}\right\rangle\right)^{2}, \qquad \left\langle\hat{x}\right\rangle = \frac{1}{N}\sum_i \hat{x}_i \quad\text{(4.8)}$$
• The $N$ measurements can be taken over $N$ pixels within the same image, which characterizes the noise spatially, or over $N$ values of the same pixel in a time series, which defines the noise temporally.
• The standard deviation $\sigma$ is also commonly used and has the benefit of carrying the same units as the signal itself.
• The SNR is defined in terms of the standard deviation as

$$\mathrm{SNR} = \frac{\left|x\right|}{\sigma} \quad\text{(4.9)}$$

• The signal is expressed as a modulus, such that $\mathrm{SNR} > 0$.
• Let us discuss the effects of averaging on the SNR. It can easily be shown that for $N$ uncorrelated measurements, the variance of the sum is the sum of the variances,

$$\sigma_N^{2} = N\sigma^{2} \quad\text{(4.10)}$$
• Thus, the standard deviation of the averaged signal is

$$\sigma_{\left\langle N\right\rangle} = \frac{\sigma}{\sqrt{N}} \quad\text{(4.11)}$$

• and the SNR becomes

$$\mathrm{SNR}_N = \sqrt{N}\cdot\mathrm{SNR} \quad\text{(4.12)}$$

• Equation 4.12 establishes that taking $N$ measurements and averaging the results increases the signal-to-noise ratio by $\sqrt{N}$. This benefit only exists when there is no correlation between the noise present in different measurements.
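The $\sqrt{N}$ improvement of Eq. 4.12 is easy to check with simulated uncorrelated noise; the signal level, noise standard deviation, and $N$ below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

x_true, sigma = 10.0, 2.0   # true signal and noise standard deviation
N = 400                     # number of uncorrelated measurements

# N independent measurements x_hat = x + noise (Eq. 4.7)
x_hat = x_true + sigma * rng.standard_normal(N)

snr_single = x_true / sigma        # SNR of one measurement (Eq. 4.9)
sigma_avg = sigma / np.sqrt(N)     # std of the average (Eq. 4.11)
snr_avg = x_true / sigma_avg       # SNR after averaging (Eq. 4.12)

print(snr_avg / snr_single)        # sqrt(N) = 20.0
print(abs(x_hat.mean() - x_true))  # the averaged estimate lands near x_true
```

With $N = 400$ the SNR gain is exactly $\sqrt{400} = 20$, and the empirical mean of the simulated measurements falls within a few $\sigma/\sqrt{N}$ of the true value.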
• For correlated (coherent) noise, Eq. 4.10 is not valid. To prove this, consider the variance associated with the sum of two correlated noise signals $a_1$ and $a_2$ (assume $\left\langle a_1\right\rangle = \left\langle a_2\right\rangle = 0$ for simplicity),

$$\left\langle \left(a_1+a_2\right)^{2}\right\rangle = \left\langle a_1^{2}\right\rangle + \left\langle a_2^{2}\right\rangle + 2\left\langle a_1 a_2\right\rangle = \sigma_1^{2} + \sigma_2^{2} + 2\sigma_{12} \quad\text{(4.13)}$$

• The total variance is the sum of the individual variances, $\sigma_1^{2}+\sigma_2^{2}$, only when the cross term vanishes, i.e. $\sigma_{12}=0$.
• If, for example, $a_1$ and $a_2$ fluctuate randomly in time, and signal $a_2$ is measured after a delay time $\tau$ from the measurement of $a_1$, the cross term has the familiar form of a cross-correlation,

$$\sigma_{12}\left(\tau\right) = \left\langle a_1\left(t\right)\,a_2\left(t+\tau\right)\right\rangle = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} a_1\left(t\right)\,a_2\left(t+\tau\right)\,dt \quad\text{(4.14)}$$
• Equation 4.14 indicates that if the noise is characterized by some correlation time, $\tau_c$, then averaging different measurements increases the signal-to-noise ratio only if the interval $\tau$ between measurements is larger than $\tau_c$. The noise becomes uncorrelated at temporal scales larger than $\tau_c$.
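A small simulation contrasting the two regimes; the moving-average kernel (which sets a hypothetical correlation time of about 20 samples) and the delay of 5 samples are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# White (uncorrelated) noise: the variance of the sum is the sum of variances
a = rng.standard_normal(N)
b = rng.standard_normal(N)
var_sum_uncorr = np.var(a + b)        # ~ 2, since the cross term sigma_12 ~ 0

# Correlated noise: a moving-average filter introduces a correlation time
# tau_c of roughly the kernel length; the kernel preserves unit variance
kernel = np.ones(20) / np.sqrt(20)
c = np.convolve(rng.standard_normal(N + 19), kernel, mode="valid")
c_delayed = np.roll(c, 5)             # delay shorter than tau_c
var_sum_corr = np.var(c + c_delayed)  # > 2: the 2*sigma_12 term of Eq. 4.13 adds

print(var_sum_uncorr, var_sum_corr)
```

For a delay of 5 samples the overlap of the 20-sample kernel gives $\sigma_{12} \approx 0.75$, so the variance of the sum is close to $2(1 + 0.75) = 3.5$ rather than 2; delaying by more than the kernel length would restore the uncorrelated result.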
4.4. Contrast and contrast to noise ratio
Figure 2. a) Illustration of a low- and a high-contrast image. b) Regions of interest A and B, and noise region N.
• The contrast of an image quantifies the ability to differentiate between different regions of interest within the image (Fig. 2),

$$C_{AB} = \left|S_A - S_B\right| \quad\text{(4.15)}$$

• $S_{A,B}$ stands for the signal associated with regions A and B. The modulus in Eq. 4.15 ensures that the contrast always takes positive values.
• Unlike resolution, which is established solely by the microscope itself, contrast is a property of the instrument-sample combination. The same microscope, characterized by the same resolution, renders superior contrast for stained tissue slices than for unstained ones.
• The simple definition of contrast in Eq. 4.15 is insufficient because it ignores the effects of noise. It is easy to imagine circumstances where the noise itself is of very high contrast across the field of view, which by Eq. 4.15 generates high values of $C_{AB}$.
• A better quantity to use for real, noisy images is the contrast-to-noise ratio (CNR),

$$\mathrm{CNR}_{AB} = \frac{C_{AB}}{\sigma_N} = \frac{\left|S_A - S_B\right|}{\sigma_N} \quad\text{(4.16)}$$

• $\sigma_N$ is the standard deviation associated with the noise in the image.
Figure 3. Contrast-to-noise ratio vs. contrast $C_{AB}$ and noise standard deviation $\sigma_N$.
• Of course, the best-case scenario from a measurement point of view occurs at low noise and high contrast (Fig. 3).
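As an illustration of Eqs. 4.15 and 4.16, consider a synthetic two-region "image" with additive Gaussian noise; the region sizes, signal levels, and noise level below are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic image: region A (signal 1.0) next to region B (signal 0.6),
# plus additive noise of standard deviation sigma_N
sigma_N = 0.05
img = np.concatenate([np.full(5000, 1.0), np.full(5000, 0.6)])
img += sigma_N * rng.standard_normal(img.size)

S_A = img[:5000].mean()
S_B = img[5000:].mean()
C_AB = abs(S_A - S_B)     # contrast (Eq. 4.15)
CNR = C_AB / sigma_N      # contrast-to-noise ratio (Eq. 4.16)

print(C_AB, CNR)          # roughly 0.4 and 8
```

In practice $\sigma_N$ would be estimated from a featureless noise region N of the image, as in Fig. 2b, rather than taken as a known parameter.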
4.5 Image filtering
• Filtering is a generic term that refers to any operation taking place in the frequency domain of the image,

$$I\left(\mathbf{k}\right) = I_0\left(\mathbf{k}\right)\cdot H\left(\mathbf{k}\right) \quad\text{(4.17)}$$

• $I_0(\mathbf{k})$ is the Fourier transform of the image and $H(\mathbf{k})$ is the filter function.
• The filtered image $I(\mathbf{r})$ is obtained by Fourier transforming $I(\mathbf{k})$ back into the space domain,

$$I\left(\mathbf{r}\right) = I_0\left(\mathbf{r}\right) \circledast H\left(\mathbf{r}\right) \quad\text{(4.18)}$$
• The filtered image $I$ is the convolution between the unfiltered image $I_0$ and the Fourier transform of the filter function $H$. Any measured image is a filtered version of the object (the transfer function, TF, of the instrument is the filter), as stated in Eq. 4.5.
• Filtering is used to enhance or diminish certain features in the image. Depending on the frequency range that they allow to pass, we distinguish three types of filters: low-pass, band-pass, and high-pass.
Low-pass filtering
Figure 4. Low-pass filtering: a) noisy signal, b) filtering high frequencies; c) low-passed signal.
• Consider the situation where the noise in the measured image exhibits high-frequency fluctuations (Fig. 4a).
• This noise contribution is effectively diminished if a filter that blocks the high frequencies (i.e. passes the low frequencies) is used (Fig. 4b).
• The resulting filtered image has better SNR (Fig. 4c). However, in the process, resolution is diminished.
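A one-dimensional sketch of this operation using an ideal (hard-cutoff) filter function $H$ of Eq. 4.17; the signal, noise level, and cutoff frequency are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)

# Smooth "image" profile plus broadband high-frequency noise
signal = np.sin(2*np.pi*3*t/n)              # 3 cycles across the field
noisy = signal + 0.5 * rng.standard_normal(n)

# Ideal low-pass filter (Eq. 4.17): keep |k| below a cutoff, zero the rest
k = np.fft.fftfreq(n, d=1.0/n)              # integer frequency index
H = (np.abs(k) <= 10).astype(float)
filtered = np.real(np.fft.ifft(np.fft.fft(noisy) * H))

# The residual error drops sharply after filtering, i.e. SNR improves
err_before = np.std(noisy - signal)
err_after = np.std(filtered - signal)
print(err_before, err_after)                # err_after is much smaller
```

The filter keeps only 21 of the 1024 frequency bins, so most of the broadband noise power is rejected while the low-frequency signal passes unchanged; the same operation would also smooth genuine sharp features, which is the resolution penalty noted above.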
Band-pass filter
• In this case the filter function $H$ passes a certain range (band) of frequencies from the image. This type of filtering is useful when structures within a particular range of sizes are to be enhanced in the image.
High-pass filter
• Using a filter that selectively passes the high frequencies reveals finer details within the image.
• One particular application of high-pass filtering is edge detection. This application is particularly useful in selecting (segmenting) an object of interest (e.g. a cell) from the image. The gradient of the image along one dimension can significantly enhance the edges in an image (Fig. 5).
Figure 5. Edge enhancement: a) original image; b) gradient along one direction; c) profiles along the lines shown in a and b; d) frequency domain profile associated with a; e) frequency domain profile associated with b, i.e. $k_x I(k_x)$.
• Thus, the new, edge-enhanced image is

$$I_x\left(\mathbf{r}\right) = \partial_x I\left(\mathbf{r}\right) = \frac{\partial I\left(\mathbf{r}\right)}{\partial x} \quad\text{(4.19)}$$

• Taking the Fourier transform of Eq. 4.19, we find

$$I_x\left(\mathbf{k}\right) = i k_x\cdot I\left(\mathbf{k}\right) \quad\text{(4.20)}$$

• $i$ indicates a $\pi/2$ shift that occurs upon differentiation (i.e. sines become cosines). The frequency content of the gradient image $I_x$ is enhanced at high frequencies due to the multiplication by $k_x$ (Fig. 5).
• The gradient image suffers from "shadow" artifacts due to the change in sign of the first-order derivative across an edge (Fig. 5b). This artifact is commonly encountered in DIC microscopy, where the gradient is computed "optically" [9].
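Equation 4.20 turns differentiation into a multiplication in the frequency domain; a 1D sketch on an arbitrary periodic test profile (for which the exact derivative is known) shows the recipe at work:

```python
import numpy as np

n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
I = np.sin(x)                                  # 1D "image" profile

# Derivative via the frequency domain (Eq. 4.20): I_x(k) = i * k_x * I(k)
kx = np.fft.fftfreq(n, d=x[1] - x[0]) * 2*np.pi  # angular spatial frequency
I_x = np.real(np.fft.ifft(1j * kx * np.fft.fft(I)))

# For sin(x) the exact derivative is cos(x): the pi/2 shift of Eq. 4.20
print(np.allclose(I_x, np.cos(x), atol=1e-10))   # True
```

The spectral derivative is exact for band-limited periodic data, and the $\pi/2$ phase shift (sine to cosine) is visible directly in the result.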
• Taking the Laplacian of the image removes these anisotropic artifacts related to the change in sign of the first-order derivative,

$$\nabla^{2} I\left(\mathbf{r}\right) = \frac{\partial^{2} I}{\partial x^{2}} + \frac{\partial^{2} I}{\partial y^{2}} \quad\text{(4.21)}$$

$$\nabla^{2} I\left(\mathbf{r}\right) \leftrightarrow -k^{2}\, I\left(\mathbf{k}\right)$$

Figure 6. Laplacian (a) and frequency domain profile (b) associated with the image in Fig. 5a.
• Figure 6a shows the result of taking the Laplacian of the same image. Even finer details are now visible in the image, as the high-pass filter is stronger due to the $k^{2}$ multiplication of $I(\mathbf{k})$. The shadow artifacts are less disturbing because they do not change sign across each edge.
• There are many more sophisticated filters and algorithms for image enhancement and restoration, which are beyond the scope of this discussion.