Imaging Particle Analysis: Resolution and Sampling Considerations
Lew Brown
Technical Director
Fluid Imaging Technologies, Inc.
Abstract: Imaging particle analysis represents an exciting method of particle analysis which combines the speed of automated particle analyzers with the discrimination found in optical microscopy. However, in order to discriminate shape differences, it is limited to particles of a certain size or larger (see Figure 1). This paper discusses the primary factors that lead to this limitation, caused by the optical system and the sensor. In the end, diffraction limits of the optical system and sampling limits incurred at the sensor restrict this technique to particle counting of particles 1µm in Equivalent Spherical Diameter (ESD) and larger, and particle characterization of particles 2µm in ESD and larger. Due to this limitation, imaging particle analysis of submicron particles is limited to non-optical techniques such as electron microscopy.
I. What Do We Mean by "Resolution"?

The term "resolution" as it applies to imaging is often misused, and is easily misunderstood. As a start, consider this definition specific to imaging from the Merriam-Webster online dictionary:

6 a: the process or capability of making distinguishable the individual parts of an object, closely adjacent optical images, or sources of light
b: a measure of the sharpness of an image or of the fineness with which a device (as a video display, printer, or scanner) can produce or record such an image usually expressed as the total number or density of pixels in the image <a resolution of 1200 dots per inch> (1)

Figure 1: Most particle analyzers give a distribution of particle size only, as shown by the graph on the left. Imaging particle analysis yields size, shape and gray-scale information, enabling the automated characterization of different particle types in a heterogeneous sample. However, the ability to make these differentiations based upon particle shape requires a certain minimum level of image resolution, which is the topic of this paper.
As can be seen from the above, “resolution” is used in two
very different ways: definition “a” refers to the ability of
a system to capture information (input resolution), while
definition “b” refers to the ability of a system to output
information. These are completely independent concepts,
although the “input resolution” of an image can certainly
affect its ability to be properly output. For example, many
people wrongly assume that what they see on a computer
screen can be printed on paper at the same size with the
same quality. What they fail to realize is that in this case,
the image on the screen only has a resolution (input) of 72
pixels/inch, whereas the equivalent print (output) resolution is
300 pixels (dots)/inch. For the purposes of this paper, we are
only concerned with definition “a”, the input resolution.
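The screen-versus-print mismatch can be made concrete with a quick calculation (a minimal sketch; the 1024-pixel image width is a hypothetical value, not taken from the text):

```python
# Illustrative only: physical size of a hypothetical 1024-pixel-wide image
# on a 72 pixels/inch screen versus a 300 dots/inch print.

def physical_width_inches(width_pixels: int, pixels_per_inch: float) -> float:
    """Physical width = pixel count divided by pixel density."""
    return width_pixels / pixels_per_inch

width_px = 1024  # hypothetical image width
print(f"On screen (72 ppi) : {physical_width_inches(width_px, 72):.1f} in")
print(f"In print (300 dpi) : {physical_width_inches(width_px, 300):.1f} in")
```

The same pixel data spans roughly 14.2 inches on screen but only about 3.4 inches in print, which is why the printed copy looks smaller or coarser than expected.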
The resolution that can be captured by any imaging system
is limited by two distinct factors: the optical system and the
sensor. An easy way to think about this is to remember your
last visit to the eye doctor, reading a letter chart on the wall:
the various lenses that the doctor puts between your eyes and
the letter chart are the optical system, and your eyes are the
sensor (which remains constant). As the doctor changes the
lenses, the image seen by the eye can become sharper or more
blurry; the sensor (eyes) remains constant. Your ability to
“resolve” objects is determined by your ability to recognize the
letters on the chart correctly.
Although we perceive the world as “continuous tone” (all
colors and shades of colors merge smoothly into their
neighbors), the human eye is actually a discrete sensor
composed of rods and cones on the retina. There are
approximately 120 million rods on the retina (sensitive only
to gray scale) and 6-7 million cones (which are sensitive to
color) (2). In digital imaging terminology, we can think of
the eye as having a 120 Megapixel black and white sensor
and a 6-7 Megapixel color sensor. This is an extremely high-resolution sensor! Despite the fact that the eye is indeed a discrete sensor, we never see any "pixelation", because the eye as a sensor is also connected to the most powerful computer known, the human brain. The brain processes the signals from the eye to make the world appear as completely "continuous tone".

Copyright © 2009 by Lew Brown, Fluid Imaging Technologies, Inc.
Using definition “a” above, the most common terminology
used to describe “spatial resolution” comes from photography,
where the resolution of a system is described in its ability
to distinguish closely spaced lines in an image. The unit of
measure used is “line pairs per millimeter”, where a “line
pair” is a pair of two parallel black lines separated by an
equal-width white line. In film photography, a resolution target consisting of groups of bars with increasing numbers of line pairs/millimeter is imaged, and the smallest bars that the imaging system can discern are considered the limit of the system's resolving power. The most commonly used target has been the 1951 USAF Resolution Test Target (Figure 2).
In film systems, given identical optics, the resolution limit would be determined by the microstructure of the film itself, or the film "grain" size. Essentially this equates to the mean size of the silver grains which are laid down on the film emulsion during manufacturing. The smaller the grains, the more detail that can be resolved in the test target (the trade-off is that finer grained film also requires more light to be "exposed"). In general, film has much higher resolution than most reasonably priced digital systems (although as with most digital systems, the sensor prices are dropping precipitously while the pixel density continues to increase).

Figure 2: USAF 1951 Resolution Test Target

II. What is "Resolution" in a Digital System?

We described above how resolution in a film system is measured using a resolution target. In the film system, the sensor is the film, and its spatial resolution is generally limited by the granularity of the film. In a digital system, the sensor is a discrete array of "picture elements", or pixels, arranged on a rectangular grid. Each pixel is a photosensitive site that outputs a signal based upon the amount of light striking it. Although the actual "signal" produced by the photosite is "continuous", the signal is immediately converted by an analog-to-digital converter (A/D) into a discrete digital number. In a black and white system, this number is usually an 8-bit number ranging from 0 to 255, where 0 is black and 255 is white. Color systems (with the exception of multiple sensor systems or Foveon chip systems) use three separate photosites to measure Red, Green and Blue intensity, which are combined to produce color (16.7 million colors can be created from 8 bits each of red, green and blue data). For ease of discussion, we will work with a monochrome system here.

Since the sensor in a digital system has a discrete number of photosites, or pixels, the resolution of the sensor itself is usually the prime determinant of the spatial resolution limit of the overall system. As with film, one of the easiest ways to determine the resolution limit is simply to image a resolution test target to determine the maximum number of line pairs per millimeter that the system can distinguish.

III. Composition of a Digital Image

As discussed above, a digital image is composed of a two-dimensional matrix of picture elements, or pixels. Each pixel has a value associated with it, typically an 8-bit number where 0 is black, 255 is white, and the numbers in between represent varying shades of gray (recall that we are going to limit our discussion to monochrome images for simplicity's sake; color images are just an extrapolation of the monochrome). Because the image sensor is a discrete 2D array of pixels, the resolution of the sensor is fixed. If the overall magnification of the optical system is known, we can actually predict the overall resolution of the system by projecting the sensor forward through the optical system onto the target (see Figure 3).

Figure 3: Projecting Sensor Geometry Back onto Object. The sensor geometry is projected from image (sensor) space through the optics, along the optical axis, onto object space: sensor pixel dimensions of 5µm x 5µm project onto the object as 2.5µm x 2.5µm.
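The projection in Figure 3 amounts to a single division; a minimal sketch (the 5µm photosite and the implied 2X magnification are read off the figure):

```python
# Sketch of projecting sensor geometry onto the object plane, as in Figure 3:
# a photosite's footprint in object space is its physical size divided by
# the optical magnification.

def object_space_pixel_um(photosite_um: float, magnification: float) -> float:
    """Size of one sensor pixel projected onto the object, in microns."""
    return photosite_um / magnification

# Values read off Figure 3: a 5µm x 5µm photosite behind 2X optics
print(object_space_pixel_um(5.0, 2.0))  # -> 2.5 (µm per pixel on the object)
```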
Although the optical magnification of the system can be deduced mathematically, it is more generally simply measured by imaging a target of known dimensions. This could be as simple as imaging a ruler; in imaging particle analysis the common method is to image calibrated spheres which are traceable to a known standard (typically NIST), and to calculate the images' Equivalent Spherical Diameter (ESD). As an example, if a ruler is imaged, and it is found that it takes ten pixels to cover 10mm, then the resolution is 10 pixels = 10mm, or 1 pixel = 1mm. The calibration of the system can then be expressed by the size of one pixel projected onto the target plane, in this case 1mm/pixel. All other distances can now be measured merely by multiplying the number of pixels covering the object by the calibration factor, so a 25 pixel long object in the image would be 25mm long.

So far this seems pretty straightforward, but we need to remember that there is another dimension to each pixel, the gray scale value. In order to make measurements using the technique above, we need to reduce each pixel to a binary value that says either the pixel is part of what you want to measure, or it is not. In imaging particle analysis, what this really says is that a pixel can only be either "particle" or "not particle". The reduction of the image to a binary image is accomplished through a simple "gray scale threshold". First, a "background" image is recorded that represents the gray scale value for each pixel in the sensor when no particles are present. Then, for each image acquired when sample is present, the background pixel value is subtracted from the incoming value for the same pixel in the sensor array, yielding a "difference" value. If the difference value is 0, then the pixel is the same as the background, and no particle is present. If the difference is greater than (or less than) 0, then something is present. At this point, the software makes a decision as to whether this pixel is "particle" or "not particle" based upon a user-supplied threshold value for the difference. Figure 4 shows how this binarization would look on some sample objects at a very coarse level.

Figure 4: Gray-Scale Thresholding to Produce a Binary Image

It is extremely important to realize that the pixel density of the projected sensor onto the object will have an enormous impact upon what the thresholded image looks like, and its measurements (Figure 5).

Figure 5: The Effect of Resolution. At a resolution of 1 pixel = 1 unit area, the two sample objects yield Size (ESD) = 2√(4 pixels/π) = 2.26 units and Size (ESD) = 2√(2 pixels/π) = 1.60 units; at a resolution of 4 pixels = 1 unit area, they yield Size (ESD) = 2√[(12 pixels/4)/π] = 1.95 units and Size (ESD) = 2√[(6 pixels/4)/π] = 1.38 units.
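As a concrete sketch of this background-subtraction binarization (a minimal illustration, not the actual instrument implementation; the gray values, threshold, and tiny 4x4 frame are invented):

```python
import math

# Minimal sketch of gray-scale thresholding by background subtraction.
# A pixel whose absolute difference from the stored background exceeds the
# user-supplied threshold is marked "particle" (1); otherwise "not particle" (0).

def binarize(image, background, threshold):
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

def esd_pixels(binary):
    """Equivalent Spherical Diameter from the particle pixel count:
    the diameter of a circle with the same area, ESD = 2*sqrt(area/pi)."""
    area = sum(map(sum, binary))
    return 2.0 * math.sqrt(area / math.pi)

background = [[200] * 4 for _ in range(4)]   # flat gray background frame
image = [[200, 200, 200, 200],
         [200, 100,  90, 200],
         [200,  95, 100, 200],
         [200, 200, 200, 200]]               # one small dark "particle"

binary = binarize(image, background, threshold=50)
print(binary)              # only the four center pixels are flagged "particle"
print(esd_pixels(binary))  # ESD = 2*sqrt(4/pi) ≈ 2.26 pixels
```

Note that the same four-pixel blob gives the 2.26-unit ESD shown in Figure 5, and that raising the threshold above 100 here would erase the particle entirely, which is the quantization effect discussed next.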
It is an absolute axiom in all digital imaging that more resolution in the sensor always yields more accuracy in measurements and a more faithful rendition of the object, given that all other things in the system are constant (and that diffraction limits are not reached).

To make matters a bit more complex, we need to realize that there are a couple of types of error that can be introduced during the thresholding process. The first type could be called "quantization error", and can be introduced by setting the threshold differently when looking at the same objects (Figure 6).

Figure 6: Size Error Caused by Different Thresholds. Thresholded at <100, the same gray-scale object yields Size (ESD) = 2√(1 pixel/π) = 1.12 units; thresholded at ≤150, it yields Size (ESD) = 2√(6 pixels/π) = 2.76 units.

The second type of error that can be introduced during thresholding is "positional error", essentially the fact that the overlap of the projected sensor onto the object can actually produce different results for the same size object with the same threshold (Figure 7).

Figure 7: Size Error Caused by Different Projection on the Sensor. With the same threshold (<100), the same object yields Size (ESD) = 2√(1 pixel/π) = 1.12 units in one position on the pixel grid and Size (ESD) = 2√(4 pixels/π) = 2.26 units in another.

IV. Sampling Theory and Nyquist Limits

The process of digitizing is the process of converting an analog (or continuous) signal or object into a discrete set of points or samples. This process is also known as "sampling". One of the most important tenets of sampling theory is the Nyquist-Shannon sampling theorem. Many good references can be found that address this theorem in detail (3,4,5), and a detailed discussion is beyond the scope of this paper.

The basic conclusion of the Nyquist-Shannon sampling theorem is that "in order to get an accurate reproduction of a continuous signal with a particular frequency the sampling frequency must be at least the double of that number. The theorem refers to units that must be translated to the particular case of digital imaging. The theorem says you need at least 2 samples per cycle, and this means two pixels per line pair" (6). To put this into microscopic terms, if you wish to resolve one line pair/micron on an object, you must sample that object with at least 2 pixels/micron, or a system calibration of 0.5 microns/pixel. Once again, this calibration value is based upon the projection of the system's image sensor onto the object through the optics. So, in this example, if the optics are 10X magnification, then the sensor must have a photosite density of 5 microns/pixel (or smaller) in order to resolve one line pair per micron on the sample.

Current state-of-the-art industrial digital video cameras with a resolution of 1024x768 pixels have a pixel size somewhere
between 4 and 5 microns/pixel, so this is within the range
desired in the example above. However, a Nyquist sampling
frequency of 2 samples per cycle is a theoretical minimum
sampling rate to resolve an object of 1 cycle in size. In reality,
many more samples are usually necessary to actually “resolve”
the object. Most microscopists use a sampling rate of 3-10
samples per object as a rule of thumb (7). This would
mean that in the above example (trying to resolve 1 line
pair/micron), they would want to have a system calibration
somewhere in the area of between 0.33 microns/pixel and
0.10 microns/pixel. At 10X magnification, this represents
a photosite density on the sensor of between 3.3 and 1.0
microns/pixel. With current technology, anything below 4
microns/pixel is not only very expensive in a sensor, but is
also extremely prone to noise due to the small size of the
photosite.
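The sampling arithmetic above can be captured in a short sketch (it simply restates the text's example of 1 line pair/micron through 10X optics, with 2 samples per line pair as the Nyquist minimum):

```python
# Sketch of the sampling arithmetic above: resolving a given number of line
# pairs per micron requires at least 2 samples (pixels) per line pair by
# Nyquist, and more (3-10) by the microscopists' rule of thumb.

def required_calibration_um_per_px(line_pairs_per_um: float,
                                   samples_per_line_pair: float = 2) -> float:
    """Object-plane calibration (µm/pixel) needed to resolve the target detail."""
    return 1.0 / (line_pairs_per_um * samples_per_line_pair)

def required_photosite_um(calibration_um_per_px: float,
                          magnification: float) -> float:
    """Photosite pitch on the sensor that achieves that calibration."""
    return calibration_um_per_px * magnification

# Nyquist minimum for 1 line pair/micron through 10X optics:
cal = required_calibration_um_per_px(1.0)       # 0.5 µm/pixel on the object
print(required_photosite_um(cal, 10.0))         # -> 5.0 µm photosite

# Rule-of-thumb sampling (3 and 10 samples per line pair) at the same 10X:
for n in (3, 10):
    cal = required_calibration_um_per_px(1.0, n)
    print(round(required_photosite_um(cal, 10.0), 2))  # 3.33 µm, then 1.0 µm
```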
The final thing to remember in this discussion is that, so far, we have been talking only about resolving "line pairs", which are very simple objects. For the purposes of imaging particle analysis, resolving a line pair can be looked at as the minimum unit for measurement of Equivalent Spherical Diameter (ESD). So, continuing the example above, if we can resolve 1 line pair/micron, we should be able to measure spherical particles 1 micron and larger in diameter. However, the primary strength of imaging particle analysis over other more common techniques (electrozone sensing, laser diffraction, etc.) is that it can be used to measure particle shape attributes beyond ESD. These higher order measurements can then be used to differentiate particles from one another. A common example would be to measure a particle's "circularity" by comparing its actual perimeter to its perimeter based upon ESD. However, if we are sampling only finely enough to resolve ESD, the circularity measured will be the same for all particles of that size regardless of shape (see Figure 8).

Figure 8: Information (Detail) Loss due to Undersampling. At a low sampling frequency, both particles threshold to the same binary image, with Size (ESD) = 2√(9 pixels/π) = 3.40 units and Perimeter = 12 units, so both are assigned the same circularity.

This means that in order to measure higher order shape attributes such as perimeter on a particle, the particle has to be sampled at a much higher sampling rate than would be suggested by the Nyquist limit! Finally, because light is a wave-based phenomenon, the absolute limit of any instrument forming an image by wave interference is half the wavelength of the wave used to form the image (in this case, light). In other words, using a 550nm light source, the theoretical maximum resolution that could be achieved would be 0.275µm.

V. Diffraction Limits

All of the above discussion has made a very simple (and very incorrect) assumption: that the optical system is "perfect", meaning that it performs perfectly according to the mathematics associated with it. In reality, there is absolutely no such thing as a "perfect" optical system, and the closer one tries to get to "perfect", the faster the cost rises! All lenses have defects referred to as aberrations; some of the common ones are astigmatism, distortion, field curvature, and coma. These defects result in a loss of image quality in the projected image on the sensor. Many of these defects can be "corrected" by design and materials, and some can also be eliminated by post-processing of the digital image.
However, there is one further limit on image quality
(sharpness) for which there is no “fix”, diffraction. Diffraction
is caused by the wave-like nature of light and what happens to
those waves when they encounter objects (such as an aperture)
or changes in the material the wave is travelling through (such
as the change in refractive index when travelling from air into
glass). All optical imaging systems can be characterized by
a “Numerical Aperture” which is a direct indication of how
well the optics will be able to resolve fine detail. Numerical
Aperture is defined as NA = n sin θ, where n is the index of refraction of the medium the lens is working in (1.0 for air) and θ is the half-angle of the maximum cone of light that can enter or exit the lens with respect to a point P (focal plane) (see Figure 9) (8).
Figure 9: Definition of Half-Angle for Numerical Aperture (the half-angle θ is measured with respect to the point P on the focal plane)

The size of the finest detail resolvable by an optical system is proportional to λ/NA, where λ is the wavelength of the illumination. For a constant λ, the higher the NA of the system, the more light the lens gathers and the higher the level of detail it can resolve. Going back to the definition of NA, since air has an index of refraction of 1.0, the theoretical maximum NA for a lens working in air is 1.0. For this reason, in very high resolution microscopy, oil immersion lenses are used, where the oil can have an index of refraction in the 1.5 range. Finally, since the resolution is proportional to λ/NA, it is important to note that shorter wavelengths of light will give higher theoretical resolution for the same NA (see Figure 10) (9).

Wavelength (nm)    Resolution (µm)
360                0.19
400                0.21
450                0.24
500                0.26
550                0.29
600                0.32
650                0.34
700                0.37

Figure 10: Wavelength Versus Resolution at Fixed NA = 0.95

In a system using "white light" illumination, this means that the resolution limit is different for objects of different wavelengths. If we choose the middle wavelength of 550nm, then we can see that our theoretical maximum resolution for a 0.95 NA objective is 0.29µm. Once again, this result is calculated using many assumptions which are very unlikely to occur in the real world. Also, recall from the previous discussion that when we talk about "resolving" an object here, we are referring to a simple theoretical line pair, not some complex organic shape.

An optical system that can actually produce the theoretical maximum angular resolution is said to be "diffraction limited". In the real world, most optical systems have enough additional defects so as to be significantly lower resolution than the diffraction limit.

VI. What Does this Mean to Microscopic Imaging Particle Analysis?
So we have now seen that there are several factors which enter
into and can limit the “resolution” of a digital imaging system:
the density of the photosites on the digital sensor, sampling
artifacts, Nyquist limits and finally diffraction. Let us now
look at how this affects imaging particle analysis. From our
discussion on diffraction limits, we know that, putting aside
any other considerations for now, any optical sensing system
will be at very minimum limited by diffraction. Diffraction
tells us that even with a “perfect” optical system, the best we
could ever possibly resolve in a microscope system would be
on the order of around 0.30µm, or 3.3 line pairs per micron.
Now, add in what we learned from sampling theory, and that
resolution is cut by at least a factor of 2 (or halved). Finally,
add in the fact that our optical system will never be “perfect”
(it will have aberrations), and we begin to see that actual
resolution for merely “counting” particles in an optical system
will be on the order of minimum 1µm spheres (as no shape
data can be measured or inferred at this level). As previously
stated, the important difference in imaging particle analysis
lies in shape discrimination, so realistically we can only really
talk about discerning “low level” shape constructs (10) at
particle sizes of 2µm ESD or larger.
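The chain of limits described above can be tallied numerically (a rough sketch following the text's reasoning, with 550nm illumination and the factor-of-2 sampling penalty; it is not a rigorous optical model):

```python
# Rough tally of the resolution budget described above: start from the
# best-case diffraction limit for 550nm light (half the wavelength), then
# apply the Nyquist sampling penalty. Real, aberrated optics do worse still.

wavelength_nm = 550
diffraction_limit_um = (wavelength_nm / 1000) / 2   # 0.275 µm, "perfect" optics
sampling_limited_um = diffraction_limit_um * 2      # Nyquist halves resolution

print(f"Diffraction limit       : {diffraction_limit_um:.3f} µm")
print(f"After sampling (Nyquist): {sampling_limited_um:.3f} µm")
# With aberrations added on top, the practical floor lands near 1 µm ESD for
# particle counting and near 2 µm ESD for low-level shape discrimination.
```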
At this point, a “real world” example should greatly help to
put all of this “theory” into perspective! A sample containing
particles smaller than 10µm in ESD was run through
the FlowCAM®, a continuous-imaging particle analysis
system manufactured by Fluid Imaging Technologies, Inc.
(Yarmouth, ME). This particular sample was a parenteral
drug sample, although for looking at particles below 2 µm,
the actual sample does not much matter (as will be evident
from the images).
Once the particle images were acquired by the FlowCAM, the
instrument’s VisualSpreadsheet© software was used to filter
and isolate particles having an Equivalent Spherical Diameter
(ESD) equal to 1µm. The first image below shows 9 particle
images that have an ESD = 1µm. Note the summary statistics
associated with these particles. The actual images are too small to see any real detail when displayed at 1:1 (actual pixels) (see Figure 11). The second image
shows the same nine particle images zoomed by a factor of
64X, so that one pixel in the original image is now shown
using a 64x64 pixel array on the screen (see Figure 12). At this
magnification, the actual pixels are easily seen because they are
now represented as “blocks” of data. This type of digital zoom
is known as “pixel replicated”, because each pixel is made
larger simply by replicating it. The final image is the same but
with the addition of the binary overlay to see the actual pixels
that the measurements were made from (see Figure 13).
Figure 11: Particles with ESD = 1µm at Actual Scale
Figure 12: Particles with ESD = 1µm at 64X Zoom
Figure 13: Particles with ESD = 1µm at 64X Zoom with Binary Overlay
Compare the images above of 1µm ESD particles with the following two particles from the same sample having a 4µm ESD (Figures 14, 15; these images are also zoomed by a factor of 64X as per the previous images).

Figure 14: Particles with ESD = 4µm, 64X Zoom

Figure 15: Particles with ESD = 4µm, 64X Zoom with Binary Overlay

It can be clearly seen from these images of the 4µm ESD particles that far more detail is now visible. In the two particle images above, for example, we can now clearly distinguish that one of these particles is spherical in shape whereas the other is "rod-like". We can also now begin to collect "higher order" measurements like "circularity" at this point, which we were incapable of doing with the 1µm ESD particles.

VII. Conclusions

Clearly there are many factors that affect the ability of an imaging particle analysis system to resolve detail in very small microscopic particles. As stated in the introduction, the resolution of an imaging particle analysis system is limited by two different factors: the optical system and the sensor. The optical system is, in the best case, limited by diffraction and the wavelength of light being used to image the particles. Adding to this, the sensor further limits the optical system due to sampling constraints (the Nyquist limit) and also by physical limitations on the size of the actual photosite that can be produced on the sensor.

All of these factors combined, along with the example shown, lead us to the following basic conclusions:

1.) Particle counting in an imaging particle analysis system should be limited to particles having an ESD of 1µm and greater.

2.) "Simple" particle characterization (i.e. "round" versus "rod-like") in an imaging particle analysis system should be limited to particles having an ESD of 2µm and greater.

3.) "Higher level" particle characterization (i.e. differentiation based upon higher order measurements such as circularity) in an imaging particle analysis system should be limited to particles having an ESD of 4µm or greater.

For submicron particles, imaging particle analysis requires the use of non-optical imaging techniques, such as electron microscopy. Unfortunately, these techniques are even more limited than optical microscopy from the standpoint of requiring extensive laboratory set-up (and expensive equipment). This means that imaging of statistically significant numbers of particles of this size is not possible; only very small samples can be observed.
VIII. References

1.) Merriam-Webster Online Dictionary: http://www.merriam-webster.com/dictionary/resolution

2.) HyperPhysics web site, Light and Vision, Georgia State University Department of Physics and Astronomy: http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html

3.) Wikipedia entry on Nyquist-Shannon sampling theorem: http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem

4.) An Introduction to Sampling Theory, Thomas Zawistowski & Paras Shah: http://www2.egr.uh.edu/~glover/applets/Sampling/Sampling.html (contains an interactive Java applet demonstrating aliasing caused by sampling)

5.) Digital Signal Processing: Principles, Algorithms, and Applications, J. Proakis and D. Manolakis, New York: Macmillan Publishing Company, 1992

6.) "Do Sensors 'Outresolve' Lenses?", Rubén Osuna and Efraín García, Luminous Landscape Web Site: http://luminous-landscape.com/tutorials/resolution.shtml

7.) Microscopy Today (Microscopy Society of America), Volume 14, Number 6, November 2006, Netnotes, "Image Analysis - object size", pages 63-66

8.) Wikipedia entry on Numerical Aperture: http://en.wikipedia.org/wiki/Numerical_aperture

9.) Nikon MicroscopyU on the web, Concepts and Formulas/Resolution: http://www.microscopyu.com/articles/formulas/formulasresolution.html

10.) "Particle Image Understanding - A Primer", Lew Brown, Fluid Imaging Technologies Web Site: http://fluidimaging.com/imaging-particle-analysis-whitepapers.aspx