Convolution in Imaging and the Optical Transfer Function
Process:
We begin this lab by introducing a mathematical concept called convolution. Convolutions have many important uses in fields like probability, statistics, and computer imaging. A convolution is defined as:
Convolution: A mathematical operation on two functions, f and h, producing a third function that is a modified version of one of the original functions, giving the area of overlap between the two functions as a function of the amount that one of them is translated.
A convolution is defined mathematically by:
$$C(q_x) = \int_{-\infty}^{\infty} f(x)\, h(q_x - x)\, dx$$
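As a quick sanity check (a minimal sketch with made-up numbers), the same idea applies to discrete sequences, where the integral becomes a sum; MATLAB's conv computes exactly this:

% Discrete analogue of the definition above: each output sample is an
% overlap-weighted sum of f with a flipped, shifted copy of h.
f = [1 2 3];
h = [1 1];
C = conv(f, h)   % prints [1 3 5 3]
% e.g. C(2) = f(1)*h(2) + f(2)*h(1) = 1*1 + 2*1 = 3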
To appreciate how this is applied, let's introduce a simple example: using convolution to analyze the wavelength reflection spectrum of silver. Silver has a large reflection coefficient at long wavelengths, but below roughly 300 nm the reflection coefficient changes drastically. We can show this with a simple graph, along with a sketch of a setup that could measure such a quantity:
[Plot: real reflection of silver vs. wavelength, 200-800 nm]
Fig. 1: Real reflection spectrum of silver.
Fig. 2: The setup used to measure reflection.
With a setup like that of figure 2, we can measure how light reflects off of silver. We start with a light source, most likely a halogen lamp. The light rays are directed towards the prism. Recall that the index of refraction of any material is given by:
$$n = \frac{c}{v}$$
Fig. 3: Beam splitter interface
where n is our index, c is the speed of light in a vacuum, and v is the speed at which the light travels through the medium. Remember, the index also depends on the wavelength of the light.
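For example, for a glass with $n \approx 1.5$ (a typical textbook value, not anything measured here):

$$v = \frac{c}{n} \approx \frac{3.0 \times 10^8\ \mathrm{m/s}}{1.5} = 2.0 \times 10^8\ \mathrm{m/s}$$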
So when the light hits the prism, it gets split apart (in a rainbow fashion). Part of the refracted light passes through a slit in the apparatus and is collected by a carefully placed lens. The lens then creates a collimated light column, which is split by what is called a beam splitter. A beam splitter is an optical device that splits a light beam into two; it is most commonly made from two triangular glass prisms (see fig. 3). Half of the light beam is directed towards detector B on the left, and the other half (that which passes right through) goes straight towards the silver sample. The silver sample receives the light and either reflects or absorbs it, with the reflected portion heading towards detector A. The reflection is then calculated by dividing the intensity measured by A by the intensity measured by B. More than one measurement is made, and different angles are accounted for. Note: the collimated light is not monochromatic, but has a certain wavelength range that depends on our slit width.
To stay consistent with our example, let's assume the slit is asymmetric, and that when we set $\lambda_c = 500$ nm, the spectrum of light that passes through the slit is given by fig. 4 below. The spectrum can be described by a shifted triangular function, $g_{\Delta\lambda}(\lambda - \lambda_c)$, defined piecewise by:

$$g_{\Delta\lambda}(x) = \begin{cases} a x & \text{for } 0 < x < \Delta\lambda \text{ (for some constant } a\text{)} \\ 0 & \text{for } x < 0 \text{ and } \Delta\lambda < x \end{cases}$$
We can then define a function $R_{meas}(\lambda_c)$, which depends on $\lambda_c$ (our wavelength setting) and which will, in general, differ from silver's true theoretical reflection spectrum, $R_{Ag}(\lambda)$. Now we apply our convolution, by setting the measured reflection function equal to a weighted average of $R_{Ag}(\lambda)$ around $\lambda_c$:
πœ†π‘ +Ξ”πœ†/2
π‘…π‘šπ‘’π‘Žπ‘  (πœ†π‘ ) = ∫
π‘…π‘Žπ‘” (πœ†)π‘”Ξ”πœ† (πœ† βˆ’ πœ†π‘ )π‘‘πœ†
πœ†π‘ βˆ’Ξ”πœ†/2
We can also call $g_{\Delta\lambda}$ the line-width function of the setup; its spectrum is given in fig. 4. The weighting of the reflection function is given by $g_{\Delta\lambda}(\lambda - \lambda_c)$. A version of $g_{\Delta\lambda}$ mirrored about the origin is called the impulse response. This is the spectrum that would be measured if the spectrum of our material were a simple delta function. We could dream up a hypothetical material with this property, but instead let's look at the graphs of our line-width function and $R_{meas}(\lambda_c)$:
[Plots: line-width function g (left) and measured reflection of Ag (right), each vs. wavelength over 200-800 nm]
Fig. 4: Line-width function when the prism is set to 500 nm.
Fig. 5: Measured reflection spectrum of Ag.
We can also write our measured reflection in terms of the impulse response instead of the line-width function; this comes to:

$$R_{meas}(\lambda_c) = \int_{-\infty}^{\infty} R_{Ag}(\lambda)\, h(\lambda_c - \lambda)\, d\lambda$$

Notice we've extended the boundaries out towards infinity. This is possible because h is zero outside the interval $-\Delta\lambda < \lambda < 0$. $R_{meas}(\lambda_c)$ is nothing other than the convolution of $R_{Ag}$ and the impulse response function, written as:

$$R_{meas} = R_{Ag} * h$$
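To see this numerically, here is a minimal MATLAB sketch; the step-like model spectrum and the 40 nm slit width are illustrative stand-ins, not the lab's values:

% Blur a toy silver-like spectrum with a ramp-shaped line-width function
% to mimic R_meas. All numbers here are illustrative assumptions.
lambda = 200:800;                                     % wavelength grid [nm]
R_Ag = 0.05 + 0.93 ./ (1 + exp(-(lambda - 320)/8));   % step-like model spectrum
dl = 40;                                              % assumed slit width [nm]
g = 1:dl;                                             % ramp: g(x) = a*x on (0, dl)
g = g / sum(g);                                       % normalize weights to sum to 1
R_meas = conv(R_Ag, g, 'same');                       % weighted average around each point
plot(lambda, R_Ag, lambda, R_meas)
legend('R_{Ag} (model)', 'R_{meas} (blurred)')
xlabel('Wavelength [nm]'), ylabel('Reflection')

The sharp drop near 320 nm gets smeared out, just as fig. 5 is a smeared version of fig. 1.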
For silver there is a significant difference between the real reflection spectrum (fig. 1) and the measured reflection spectrum (fig. 5). It is clear that the smaller $\Delta\lambda$ is, the better our measured reflection approximates the real spectrum $R_{Ag}(\lambda)$. $g_{\Delta\lambda}$ is called the line-width function of the instrument used, and $h_{\Delta\lambda}$ is called the impulse response of the transfer function that relates $R_{Ag}$ to $R_{meas}$ (see fig. 7):
Fig. 7: Influence of the reflection measurement setup on the measured spectrum, described by a transformation.
Fig. 8: Influence of the optical imaging setup on image formation, described by a transformation.
For an imaging setup we will have to deal with the final resolution. As studied in class, the resolution of an optical imaging system can be limited by many physical factors, such as diffraction, aberration, and the pixel size of the sensor. Similar to the spectral setup discussed above, we can study the impulse response: what is seen in the image plane if the object plane consists of a simple delta function? What about a simple spectral gradient test? This impulse response of the optical imaging system can be used to calculate the response to an arbitrary object function via a relation very similar to the one above (for $R_{meas}$).
In this lab, we are going to take a very close look at the modulation transfer functions obtained from different image compressions, using a simple line gradient transfer. We don't have a sophisticated camera to work with, but fortunately blur effects can be emulated in programs like Photoshop. Begin by going to this link and downloading the packet:
http://voltagemoon.com/opticsfiles/opticscomplab.zip
What exactly is the modulation transfer function, and how does it help us analyze the quality of an optical system? We'll start by looking at a very important concept, the Optical Transfer Function (OTF). The OTF of an optical system describes how the components of the system project light from an object onto a detector or film. It is defined as the Fourier transform of the Point Spread Function (the impulse response) of the optical system. We can use MATLAB to analyze the Fourier transform of our PSF for any image. The Modulation Transfer Function (MTF) is the magnitude of the OTF, and the OTF is usually presented as a plot of the MTF against spatial frequency (in cycles/mm). The PSF of our imaging system determines the resolution of the image: it is the corresponding irradiance distribution in the image plane. If the optical system were perfect, the image of a single point (a delta-function object) would also be a delta function (makes sense? A single point is a spike, so the irradiance is a spike). Cartesian surfaces only exist for a single object and image point, so if your object consists of more than a single point, perfect imaging can't occur. Chapters 2 and 3 give a few examples of lens and optical system aberrations. These aberrations act at each individual object point and contribute to the total irradiance of a blurry spot around the corresponding image point. We can minimize these effects through different techniques and multiple lenses (ensuring there is more of a spike and less of a Bessel diffraction effect). For a circular aperture, the PSF of an optical system is given by the square of a Bessel function:
Fig. 9: Delta function in the object plane (left); Bessel-function pattern in the image plane, due to diffraction at the aperture stop of the optical system (right).
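Written out for the circular-aperture case (the standard Airy pattern, stated here for reference), with $J_1$ the first-order Bessel function, $a$ the aperture radius, $k = 2\pi/\lambda$, and $\theta$ the angle from the optical axis:

$$\mathrm{PSF}(\theta) \propto \left[\frac{2\, J_1(k a \sin\theta)}{k a \sin\theta}\right]^2$$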
So for a perfect system, there would be little to no diffraction in the transfer. Of course, it's very difficult and expensive to approach such a system, so in practice the PSF is mainly determined by diffraction (the system is diffraction-limited). Assuming the optical system is linear, the image of an object consisting of more than a single point (as above) can be calculated by superposition. Each object point causes an irradiance distribution in the image plane that can be expressed as a shifted point spread function. Adding up all those shifted point spread functions, one for each individual object point (x, y), leads to an intensity given by a convolution:
I X ,Y  ο€½
ο‚₯ ο‚₯
  Ox, y h X ο€­ x, Y ο€­ y dxdy  I ο€½ O  h
ο€­ο‚₯ο€­ο‚₯
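A minimal MATLAB sketch of this superposition (the two-point object and the Gaussian stand-in PSF are illustrative assumptions):

% Image formation as a convolution, I = O * h, for a toy object and PSF.
O = zeros(128);
O(64, 60) = 1; O(64, 68) = 1;             % two nearby point sources
[x, y] = meshgrid(-8:8);
h = exp(-(x.^2 + y.^2) / (2*3^2));        % stand-in Gaussian PSF, sigma = 3 px
h = h / sum(h(:));                        % conserve total energy
I = conv2(O, h, 'same');                  % superposition of shifted PSFs
imagesc(I), axis image, colormap gray     % the two spikes blur into overlapping spots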
Remember, the PSF of an optical system really determines the resolution of our image. The width of the main peak of the PSF defines the minimum distance at which two object points can still be resolved in our image plane. If they're too close, they won't be resolved by the optical system. We can see this in the diagram below:
Fig. 10: Points in the object plane (left); the corresponding image is a superposition of shifted point spread functions (right).
By the convolution theorem, if we take the Fourier transform of both sides of our intensity equation above, we get:

$$\tilde{I} = \widetilde{O * h} = \tilde{O}\, \tilde{h}$$
This means that a convolution in real space corresponds to a multiplication in Fourier space. Note that the Fourier transform of the object is the spatial frequency spectrum of the object (exactly what we want for our OTF), and the Fourier transform of the image is the spatial frequency spectrum of the image. The convolution theorem tells us that we can calculate the spectrum of the image by multiplying the spectrum of the object with the Fourier transform of the point spread function. Just like the PSF, the Optical Transfer Function fully describes our optical system.
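As a concrete (hypothetical) illustration, the sketch below builds a Gaussian stand-in PSF, takes its 2-D Fourier transform to get the OTF, and plots the MTF, i.e. the magnitude of the OTF, along one frequency axis:

% OTF as the Fourier transform of the PSF; MTF as its normalized magnitude.
% The Gaussian PSF here is an illustrative assumption, not the lab's data.
N = 256;
[x, y] = meshgrid(-N/2 : N/2 - 1);
psf = exp(-(x.^2 + y.^2) / (2*2^2));      % stand-in blur, sigma = 2 px
psf = psf / sum(psf(:));                  % normalize total energy to 1
otf = fftshift(fft2(ifftshift(psf)));     % centered OTF
mtf = abs(otf) / max(abs(otf(:)));        % MTF = |OTF|, normalized to 1 at DC
plot(0 : N/2 - 1, mtf(N/2 + 1, N/2 + 1 : end))
xlabel('Spatial frequency [cycles per image]'), ylabel('MTF')

A wider PSF (more blur) gives an MTF that falls off at lower spatial frequencies, which is exactly the effect you will be looking for across the different image formats.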
In this lab, you will be measuring the OTF for a given image under different compressions and image types. Compression can cause aberration-like effects, just like optical systems do. If you are unfamiliar with MATLAB, a sample code is provided; if you are familiar with the program, you may use the given code or try to improve on it with a different approach. An example of an MTF vs. spatial frequency plot is given in fig. 11. Instead of directly looking at the aberrations caused by an optical system, like a certain camera, we're going to look at how image compression can cause a loss of quality, as seen in the OTF.
Fig. 11: An example from Wikipedia of a given OTF, a PSF, and the actual image.
Graphs (a) and (d) in that example give a good sense of what we are going for. After you've downloaded the packet, go ahead and unzip the file. MATLAB resolves file names relative to the current folder, so when you're using the script to look at the OTF, be sure it is located in your image folder.
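If it helps to have a picture of what the provided script is likely doing, here is a minimal sketch of one common edge-based approach (edge profile, then derivative, then Fourier transform); the file name comes from the example in step 3 below, and the vertical-edge layout is an assumption, so the packet's own script may well differ:

% One common way to estimate an MTF from an edge image.
% Assumes a 200x200 image containing a vertical edge.
e = double(imread('edge_blur1.bmp'));     % file name from the example in step 3
if ndims(e) == 3, e = mean(e, 3); end     % collapse color channels if present
esf = mean(e, 1);                         % average the rows -> edge spread function
lsf = diff(esf);                          % derivative -> line spread function
mtf = abs(fft(lsf));
mtf = mtf / mtf(1);                       % normalize to 1 at zero frequency
n = floor(numel(mtf) / 2);
plot(0:n, mtf(1:n+1))                     % keep the non-redundant half
xlabel('Spatial frequency [cycles per image width]'), ylabel('MTF')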
1. Begin the process by looking over the images we are analyzing. Look up any information on the
compression of these formats and give a short overview in your lab report of the different types:
a. Bitmap
b. JPEG
c. TIFF
d. GIF
2. What can you say about what the MATLAB code is doing? Using the comments provided is fine.
3. The MATLAB code provided is very basic. It doesn't do any heavy pixel processing; we are just interested in analyzing the resulting graph. If you look at line 1, you'll see the line pasted below. This is where we set the name of the image we are analyzing. Once you've inserted the correct image name, hit F5 to run the code and get the plot. Be sure to start with the non-blurred file; it should have a noblur portion in its name. It doesn't matter which format you start with.
e=imread('edge_blur1.bmp'); %Read in 200x200 image from PSP
4. Once you've gotten your plot, save it for your lab report. Go through each portion of the code and run the analysis for all image formats. The packet provides several images in each format. To run in MATLAB, the script has to be in the image folder and opened from that folder.
5. What do you notice about the OTF as we change formats? What about when the blur changes?
How do you think this is affected by the compression of each file?
6. What, in your opinion, is the cleanest of all the formats? Which produces an image that is closest to its clean version when blurred? Take a look at the file sizes of the images: do the larger files necessarily have greater quality (a cleaner OTF)? List and compare file size to OTF clarity in a table. You can create a scale for the OTF, say 1-10.
7. Structure your lab report according to each image section, in a format like so:
a. Title of Compression
i. Brief Summary
ii. Graphs of the OTF, labeled according to the severity of blur and quality of the image. Notice: there are two different quality compressions for our JPEGs.
1. Provide an analysis of why you think the graph is shaped this way. Nothing super in-depth, just an idea of how the resolution affects the graphed OTF.
iii. Table of size to quality ratio (you can graph it too).
8. At the end of your report, create a full-sized table with size-to-quality comparisons. Also include any comments about which format you think is the most efficient for use in:
a. Web design, where resolution is semi-important and low file sizes are key.
b. Photography and graphic design, where resolution and quality matter most and size doesn't matter as much.
c. Printing, where resolution is very important but file sizes must be small.
d. Any other fields you might think of that work with different image formats.