Chapter 10
Algorithm for Morphological Cancer
Detection
10.1 Introduction
Over half of all human cancers occur in stratified squamous epithelia. Approximately
1 million cases of nonmelanoma cancers of the stratified squamous epithelia are
identified each year. Tissues with stratified squamous epithelia include the cervix,
skin, and oral cavity. Such tissues consist of a surface layer, the epithelium (several
cell layers thick), a basement membrane, and an underlying stroma (containing
structural proteins and blood vessels). Neoplastic cells originate near the basement
membrane and can move progressively upward through the epithelium. These cells
can eventually occupy the full thickness of the epithelium and ultimately break
through the basement membrane and invade the stroma. Currently, the diagnosis of
squamous epithelial cancers is carried out through visual inspection, followed by
biopsy. In patients at high risk for malignancy, the entire epithelium may potentially
be diseased. Therefore, it is difficult to identify the best location to biopsy based on
visual inspection alone. Techniques that can diagnose epithelial precancers and
cancers more accurately than visual inspection alone are needed to guide tissue
biopsy.
Multiphoton laser scanning microscopy (MPLSM) is a potentially attractive technique
for the diagnosis of epithelial precancers and cancers. This technology can
noninvasively generate high-resolution, three-dimensional fluorescence images deep
within tissue while maintaining tissue viability. This technique enables the
visualization of cellular and subcellular structures with exceptional resolution.
Visualization of these structures is important because it is well known that the
development of precancers and cancers is accompanied by changes in cellular and
subcellular morphology. MPLSM can also exploit the intrinsic fluorescence contrast
of molecules already present in tissue, thus obviating the need for exogenous contrast
agents. It has been previously shown that the endogenous fluorescence of certain
molecules within tissue, such as reduced nicotinamide adenine dinucleotide (NADH),
is altered with precancer, and that these sources of intrinsic fluorescence contrast can
be exploited for the early detection of epithelial precancers and cancers.
MPLSM has been used to image the endogenous fluorescence in tissues in several
feasibility studies. These collective studies show that MPLSM can image the
endogenous fluorescence deep within thick tissues and that qualitative morphologic
differences can be observed between malignant and nonmalignant tissues. In a more
recent study, MPLSM was used to image the endogenous fluorescence within the
stroma of the normal, precancerous, and cancerous hamster cheek pouch model in
vivo. Images were obtained from a total of five sites per animal in a total of 70
animals. The diagnosis of tissues was based on a blinded observer evaluation of the
morphologic features resolved with MPLSM (collagen matrix and fibers, cellular
infiltrates, and blood vessels). The blinded observer evaluation agreed with the gold
standard, histopathology, for 88.6% of the samples.
Previous studies have used Fourier analysis of in vivo human corneal endothelial
cells to correlate cell structure with patient age [3]. It was found that the Fourier
transforms provided quantitative descriptions of population cell size and organization.
Fourier transform analysis will be applied to MPLSM images of normal and
cancerous
tissues to determine whether automated diagnosis based on tissue morphology is
feasible.
10.2 Multiphoton Laser Scanning Microscopy (MPLSM)
The multiphoton laser scanning microscope (MPLSM) is an instrument that represents
the latest development in fluorescence imaging technology.
Two-photon excitation microscopy is a fluorescence imaging technique that allows
imaging of living tissue up to a depth of about one millimeter. The two-photon
excitation microscope is a special variant of the multiphoton fluorescence microscope.
Two-photon excitation can be a superior alternative to confocal microscopy due to its
deeper tissue penetration, efficient light detection, and reduced phototoxicity.
Two-photon excitation employs a concept first described by Maria Goeppert-Mayer
(1906-1972) in her 1931 doctoral dissertation [2], and first observed in 1962 in cesium
vapor using laser excitation by Isaac Abella.
The concept of two-photon excitation is based on the idea that two photons of low
energy can excite a fluorophore in a single quantum event, resulting in the emission of
a fluorescence photon, typically at a higher energy than either of the two excitatory
photons. The probability of the near-simultaneous absorption of two photons is
extremely low, so a high flux of excitation photons is required, usually supplied by a
femtosecond pulsed laser.
Two-photon microscopy was pioneered by Winfried Denk in the lab of Watt W.
Webb at Cornell University. He combined the idea of two-photon absorption with the
use of a laser scanner [4]. In two-photon excitation microscopy an infrared laser beam
is focused through an objective lens. The Ti:sapphire laser normally used has a pulse
width of approximately 100 femtoseconds and a repetition rate of about 80 MHz,
providing the high photon density and flux required for two-photon absorption, and is
tunable across a wide range of wavelengths. Two-photon technology has been
patented by Winfried Denk, James Strickler and Watt Webb at Cornell University.[5]
Carl Zeiss currently holds this patent; Olympus Inc. has licensed it to sell 2-photon
microscopes.
10.2.1 Image Formation
The most commonly used fluorophores have excitation spectra in the 400–500 nm
range, whereas the laser used to excite the fluorophores lies in the ~700–1000 nm
(infrared) range. If the fluorophore absorbs two infrared photons simultaneously, it
will absorb enough energy to be raised into the excited state. The fluorophore will
then emit a single photon with a wavelength that depends on the type of fluorophore
used (typically in the visible spectrum). Because two photons need to be absorbed to
excite a fluorophore, the probability of fluorescent emission from the fluorophores
increases quadratically with the excitation intensity (doubling the intensity quadruples
the two-photon signal). Therefore, much more two-photon fluorescence is generated
where the laser beam is tightly focused than where it is more diffuse. Effectively,
excitation is restricted to the tiny focal volume (~1 femtoliter), resulting in a high
degree of rejection of out-of-focus objects. This localization of excitation is the key
advantage compared to single-photon excitation microscopes, which need to employ
additional elements such as pinholes to reject out-of-focus fluorescence. The
fluorescence from the sample is then collected by a high-sensitivity detector, such as a
photomultiplier tube. This observed light intensity becomes one pixel in the eventual
image; the focal point is scanned throughout a desired region of the sample to form all
the pixels of the image.
The use of infrared light to excite fluorophores in light-scattering tissue has added
benefits.[6] Longer wavelengths are scattered to a lesser degree than shorter ones,
which is a benefit to high-resolution imaging. In addition, these lower-energy photons
are less likely to cause damage outside the focal volume. Compared to a confocal
microscope, photon detection is much more effective since even scattered photons
contribute to the usable signal. There are several caveats to using two-photon
microscopy: the pulsed lasers needed for two-photon excitation are much more
expensive than the continuous wave (CW) lasers used in confocal microscopy, and the
two-photon absorption spectrum of a molecule may vary significantly from its one-photon
counterpart. For very thin objects such as isolated cells, single-photon (confocal)
microscopes can produce images with higher optical resolution due to their shorter
excitation wavelengths. In scattering tissue, on the other hand, the superior optical
sectioning and light detection capabilities of the two-photon microscope result in
better performance.
A fluorophore, in analogy to a chromophore, is a component of a molecule which
causes a molecule to be fluorescent. It is a functional group in a molecule which will
absorb energy of a specific wavelength and re-emit energy at a different (but equally
specific) wavelength. The amount and wavelength of the emitted energy depend on
both the fluorophore and the chemical environment of the fluorophore. This
technology has particular importance in the fields of biochemistry and protein studies.
Figure 10.1 A diagram of a two-photon microscope
10.2.2 Differences Between MPLSM and Traditional Microscopes
Traditional microscopes use regular light, which can cause tissue samples to appear
blurry. Confocal microscopes eliminate much of the blurriness by blocking out light
emitted from structures not in focus, producing a sharper image. Living tissue
specimens, however, can be damaged by confocal microscopy.
The multiphoton microscope is superior to traditional and confocal microscopes in
that it uses infrared light to illuminate only a small spot of tissue at a time. Moreover,
in living tissue, damage is minimized. Because the light beam penetrates deeply, a
greater volume of tissue can be examined.
Figure 10.2 Multiphoton laser scanning microscopy
10.3 Approach
The first steps of our algorithm preprocess the images to remove graininess (median
filter) and enhance contrast between the cytoplasm and the nuclear and extracellular
components (unsharp mask, threshold). After the preprocessing steps, the relative
disorganization of the images was determined with Fourier transform analysis.
Averaging was then used to reduce the noise of the Fourier domain image, and a 1-D
line plot was made. From this line plot, normal and cancerous tissue could be
differentiated by a machine.
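As an orientation, a minimal Matlab sketch of the full pipeline is given below; the input file name is hypothetical, and the parameter values (5-by-5 median window, 15-by-15 Gaussian with a standard deviation of 10 pixels, scaling factor 0.9) are those reported in Section 10.5.

    % Minimal sketch of the processing pipeline (hypothetical input file).
    I = im2double(imread('normal1.tif'));

    I = medfilt2(I, [5 5]);                  % 1. median filter (Section 10.5.1)

    h = fspecial('gaussian', [15 15], 10);   % 2. unsharp mask (Section 10.5.2):
    I = mat2gray(I - 0.9 * imfilter(I, h));  %    subtract scaled Gaussian blur

    bw = imbinarize(I, graythresh(I));       % 3. Otsu threshold (Section 10.5.3)

    S = log(1 + abs(fftshift(fft2(bw))));    % 4./5. centered FFT + log transform

    S = conv2(S, ones(5)/25, 'same');        % 6. 5x5 mean filter, two passes
    S = conv2(S, ones(5)/25, 'same');

    plot(S(ceil(end/2), :));                 % 7. line plot of the center row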
10.4 Work Performed and Results
The image processing algorithm was applied to a total of four images from normal
tissues and four images from cancerous tissues for a total of eight images. The results
from two normal and two cancerous images are shown in this report. The results for
the remaining two normal and two cancerous images are similar. Figure 10.3 shows
the original images.
Figure 10.3 Original data: NORMAL 1, NORMAL 2, CANCER 1, CANCER 2 (each image is 125.4 μm x 125.4 μm).
10.5 Algorithm
For each step of the algorithm, the output of the previous step is the input of the
subsequent step. The parameters of the algorithm were optimized for maximum
contrast between normal and cancerous images at the last step of the algorithm.
Figure 10.4 shows a flowchart of the algorithm. Each step of the algorithm, along
with the output from each step, is described in detail below.
Figure 10.4 Flowchart of the algorithm
10.5.1 Median Filter
The median filter is a non-linear digital filtering technique, often used to remove noise
from images or other signals. The idea is to examine a sample of the input and decide
if it is representative of the signal. This is performed using a window consisting of an
odd number of samples. The values in the window are sorted into numerical order; the
median value, the sample in the center of the window, is selected as the output. The
oldest sample is discarded, a new sample acquired, and the calculation repeats.
Median filtering is a common step in image processing. It is particularly useful for
reducing speckle noise and salt-and-pepper noise. Its edge-preserving nature makes it
useful in cases where edge blurring is undesirable.
The first step of the algorithm aims to attenuate noise without blurring the images. A
2-dimensional median filter was applied using the 'medfilt2' function in Matlab. Each
output pixel contains the median value of the 5-by-5 neighborhood around the
corresponding pixel in the input image. 'medfilt2' pads the image with zeros at the
edges, so the median values for the points within 2 pixels of the edges may appear
distorted. The result is shown in Fig. 10.6.
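A minimal sketch of this step, assuming a hypothetical input file:

    I = im2double(imread('normal1.tif'));  % hypothetical input image
    J = medfilt2(I, [5 5]);                % median of each 5-by-5 neighborhood;
                                           % borders are zero-padded by default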
A common problem with all neighborhood-based filters is how to process the edges of
the image. Near the edges the window no longer contains its full, odd number of
samples, and writing a filter that deals with the edges explicitly adds complexity. The
next step of the algorithm is the unsharp mask.
Figure 10.6 Data after median filter (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.5.2 Unsharp Mask
10.5.2.1 What Is the Unsharp Mask?
Unsharp masking is an image manipulation technique now familiar to many users of
digital image processing software, but it seems to have been first used in Germany in
the 1930s as a way of increasing the acutance, or apparent sharpness, of photographic
images. The "unsharp" of the name derives from the fact that the technique uses a
blurred, or "unsharp," positive to create a "mask" of the original image. The
unsharped mask is then combined with the negative, creating the illusion that the
resulting image is sharper than the original. From a signal-processing standpoint, an
unsharp mask is generally a linear or nonlinear filter that amplifies high-frequency
components
10.5.2.2 Brief Description
The unsharp filter is a simple sharpening operator which derives its name from the
fact that it enhances edges (and other high frequency components in an image) via a
procedure which subtracts an unsharp, or smoothed, version of an image from the
original image. The unsharp filtering technique is commonly used in the photographic
and printing industries for crispening edges.
10.5.2.3 How It Works
Unsharp masking produces an edge image g(x,y) from an input image f(x,y) via

    g(x,y) = f(x,y) - f_smooth(x,y)

where f_smooth(x,y) is a smoothed version of f(x,y). (See Figure 10.7.)
Figure 10.7 Spatial sharpening.
We can better understand the operation of the unsharp sharpening filter by examining
its frequency response characteristics. If we have a signal as shown in Figure 10.8(a),
subtracting away the lowpass component of that signal (as in Figure 10.8(b)), yields
the highpass, or 'edge', representation shown in Figure 10.8(c).
Figure 10.8 Calculating an edge image for unsharp filtering
This edge image can be used for sharpening if we add it back into the original signal,
as shown in Figure 10.9.
Figure 10.9 Sharpening the original signal using the edge image.
Thus, the complete unsharp sharpening operator is shown in Figure 10.10.
Figure 10.10 The complete unsharp filtering operator.
We can now combine all of this into the equation:

    f_sharp(x,y) = f(x,y) + k · g(x,y)

where k is a scaling constant. Reasonable values for k vary between 0.2 and 0.7, with
the larger values providing increasing amounts of sharpening.
10.5.2.4 Photographic Unsharp Masking
In the photographic process, a large-format glass plate negative is contact-copied onto
a low contrast film or plate to create a positive. However, the positive copy is made
with the copy material in contact with the back of the original, rather than emulsion-to-emulsion, so it is blurred. After processing, this blurred positive is replaced in
contact with the back of the original negative. When light is passed through both
negative and in-register positive (in an enlarger for example), the positive partially
cancels some of the information in the negative.
Because the positive has been intentionally blurred, only the low frequency (blurred)
information is cancelled. In addition, the mask effectively reduces the dynamic range
of the original negative. Thus, if the resulting enlarged image is recorded on contrasty
photographic paper, the partial cancellation emphasizes the high frequency (fine
detail) information in the original, without loss of highlight or shadow detail. The
resulting print appears sharper than one made without the unsharp mask; the apparent
acutance is increased.
In the photographic procedure, the amount of blurring can be controlled by changing
the softness or hardness (from point source to fully diffuse) of the light source used
for the initial unsharp mask exposure, while the strength of the effect can be
controlled by changing the contrast and density (i.e., exposure and development) of
the unsharp mask.
In traditional photography, unsharp masking is usually used on monochrome
materials; special panchromatic soft-working black and white films have been
available for masking photographic color transparencies. This has been especially
useful to control the density range of a transparency intended for photomechanical
reproduction.
10.5.2.5 Digital Unsharp Masking
The same differencing principle is used in the unsharp masking tool in many digital
imaging software packages (for example, Adobe Photoshop or GIMP). The software
applies a Gaussian blur to a copy of the original image and then compares it to the
original. If the difference is greater than a user-specified threshold setting, the images
are (in effect) subtracted. The threshold control constrains sharpening to image
elements that differ from each other above a certain size threshold, so that sharpening
of small image details such as photographic grain can be suppressed.
Digital unsharp masking is a flexible and powerful way to increase sharpness,
especially in scanned images. However, it is easy to create unwanted and conspicuous
edge effects. On the other hand, these effects can be used creatively, especially if a
single channel of an RGB or Lab image is sharpened. Typically three settings will
control digital unsharp masking:

Amount:
This is listed as a percentage, and controls the magnitude of each overshoot (how
much darker and how much lighter the edge borders become). This can also be
thought of as how much contrast is added at the edges. It does not affect the width
of the edge rims.

Radius:
This affects the size of the edges to be enhanced or how wide the edge rims
become, so a smaller radius enhances smaller-scale detail. Higher Radius values
can cause halos at the edges, a detectable faint light rim around objects. Fine detail
needs a smaller Radius. Radius and Amount interact; reducing one allows more of
the other.

Threshold:
This controls the minimum brightness change that will be sharpened, or how far
apart adjacent tonal values have to be before the filter does anything. This lack of
action is important to prevent smooth areas from becoming speckled. The
threshold setting can be used to sharpen more pronounced edges, while leaving
more subtle edges untouched. Low values sharpen more because fewer areas are
excluded; higher threshold values exclude areas of lower contrast.
In our algorithm
Next, contrast between the cytoplasm and the nuclei and extracellular components
was enhanced using an unsharp filter. The filter was applied by subtracting the
Gaussian-filtered input image, multiplied by a scaling factor, from the input image.
The Gaussian filter was created using the built-in Matlab function 'fspecial' with the
'gaussian' option. A rotationally symmetric Gaussian lowpass filter with a standard
deviation of 10 pixels was used, with a total filter size of 15-by-15 pixels. The scaling
factor was 0.9. The result of this step is shown in Fig. 10.11.
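A minimal sketch of this step with the stated parameters; the variable I is the median-filtered image from the previous step, and the 'replicate' border handling is our assumption:

    h = fspecial('gaussian', [15 15], 10);  % 15-by-15 Gaussian, sigma = 10 px
    blurred = imfilter(I, h, 'replicate');  % Gaussian lowpass of the input
    J = I - 0.9 * blurred;                  % subtract scaled blur (k = 0.9)
    J = mat2gray(J);                        % rescale to [0,1] for thresholding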
Figure 10.11 Data after unsharp mask (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.5.3 Threshold
Segmentation techniques include:
* Thresholding
* Edge finding
* Binary mathematical morphology
* Gray-value mathematical morphology
In the analysis of the objects in images it is essential that we can distinguish between
the objects of interest and "the rest." This latter group is also referred to as the
background. The techniques that are used to find the objects of interest are usually
referred to as segmentation techniques - segmenting the foreground from background.
In this section we will describe two of the most common techniques--thresholding and
edge finding--and we will present techniques for improving the quality of the
segmentation result. It is important to understand that:
* there is no universally applicable segmentation technique that will work for all
images, and
* no segmentation technique is perfect.
Thresholding
This technique is based upon a simple concept. A parameter θ called the brightness
threshold is chosen and applied to the image a[m,n] as follows:

    If a[m,n] >= θ then a[m,n] = object = 1
    Else a[m,n] = background = 0

This version of the algorithm assumes that we are interested in light objects on a dark
background. For dark objects on a light background we would use:

    If a[m,n] < θ then a[m,n] = object = 1
    Else a[m,n] = background = 0

The output is the label "object" or "background" which, due to its dichotomous nature,
can be represented as a Boolean variable "1" or "0". In principle, the test condition
could be based upon some property other than simple brightness (for example,
If (Redness{a[m,n]} >= θ_red)), but the concept is clear.
The central question in thresholding then becomes: how do we choose the threshold
θ? While there is no universal procedure for threshold selection that is guaranteed to
work on all images, there are a variety of alternatives.
* Fixed threshold - One alternative is to use a threshold that is chosen independently
of the image data. If it is known that one is dealing with very high-contrast images
where the objects are very dark and the background is homogeneous and very light,
then a constant threshold of 128 on a scale of 0 to 255 might be sufficiently accurate.
By accuracy we mean that the number of falsely-classified pixels should be kept to a
minimum.
* Histogram-derived thresholds - In most cases the threshold is chosen from the
brightness histogram of the region or image that we wish to segment. (See Sections
3.5.2 and 9.1.) An image and its associated brightness histogram are shown in Figure
10.12.
A variety of techniques have been devised to automatically choose a threshold starting
from the gray-value histogram, {h[b] | b = 0, 1, ..., 2^B - 1}. Some of the most common
ones are presented below. Many of these algorithms can benefit from a smoothing of
the raw histogram data to remove small fluctuations, but the smoothing algorithm must
not shift the peak positions. This translates into the zero-phase smoothing algorithm
given below, where typical values for the window width W are 3 or 5:

    h_smooth[b] = (1/W) · Σ h_raw[b - w],   with w running from -(W-1)/2 to (W-1)/2 (W odd)
(a) Image to be thresholded (b) Brightness histogram of the image
Figure 10.12: Pixels below the threshold (a[m,n] < θ) will be labeled as object pixels; those above the
threshold will be labeled as background pixels.
* Isodata algorithm - This iterative technique for choosing a threshold was developed
by Ridler and Calvard. The histogram is initially segmented into two parts using a
starting threshold value such as θ_0 = 2^(B-1), half the maximum dynamic range. The
sample mean (m_f,0) of the gray values associated with the foreground pixels and the
sample mean (m_b,0) of the gray values associated with the background pixels are
computed. A new threshold value θ_1 is now computed as the average of these two
sample means. The process is repeated, based upon the new threshold, until the
threshold value does not change any more. In formula:

    θ_k = (m_f,k-1 + m_b,k-1) / 2   until θ_k = θ_k-1
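A minimal Matlab sketch of the iteration, assuming a grayscale image I scaled to [0,1]:

    theta = 0.5;                      % theta_0: half the dynamic range
    prev  = -Inf;
    while abs(theta - prev) > 1e-4
        mf = mean(I(I >= theta));     % foreground sample mean m_f
        mb = mean(I(I <  theta));     % background sample mean m_b
        prev  = theta;
        theta = (mf + mb) / 2;        % new threshold = average of the means
    end
    bw = I >= theta;                  % final segmentation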
* Background-symmetry algorithm - This technique assumes a distinct and dominant
peak for the background that is symmetric about its maximum. The technique can
benefit from smoothing. The maximum peak (maxp) is found by searching for the
maximum value in the histogram. The algorithm then searches on the non-object pixel
side of that maximum to find a p% point.
In Figure 10.12b, where the object pixels are located to the left of the background
peak at brightness 183, this means searching to the right of that peak to locate, as an
example, the 95% value. At this brightness value, 5% of the pixels lie to the right of
(are above) that value. This occurs at brightness 216 in Figure 10.12b. Because of the
assumed symmetry, we use as a threshold a displacement to the left of the maximum
that is equal to the displacement to the right where the p% point is found. For Figure
10.12b this means a threshold value given by 183 - (216 - 183) = 150. In formula:

    θ = maxp - (b_p% - maxp)
This technique can be adapted easily to the case where we have light objects on a
dark, dominant background. Further, it can be used if the object peak dominates and
we have reason to assume that the brightness distribution around the object peak is
symmetric. An additional variation on this symmetry theme is to use an estimate of
the sample standard deviation s based on one side of the dominant peak and then use a
threshold given by θ = maxp ± 1.96s (at the 5% level) or θ = maxp ± 2.57s (at the
1% level). The choice of "+" or "-" depends on which direction from maxp is being
defined as the object/background threshold. Should the distribution be approximately
Gaussian around maxp, then the values 1.96 and 2.57 will, in fact, correspond to the
5% and 1% levels.
* Triangle algorithm - This technique, due to Zack, is illustrated in Figure 10.13. A
line is constructed between the maximum of the histogram at brightness b_max and the
lowest value b_min = (p=0)% in the image. The distance d between the line and the
histogram h[b] is computed for all values of b from b = b_min to b = b_max. The
brightness value b_o where the distance between h[b_o] and the line is maximal is the
threshold value, that is, θ = b_o. This technique is particularly effective when the object
pixels produce a weak peak in the histogram.
Figure 10.13: The triangle algorithm is based on finding the value of b that gives the maximum
distance d.
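A minimal Matlab sketch of the construction on a 256-bin histogram (variable names are ours; the peak is assumed to lie above the lowest occupied bin):

    h = imhist(I);                       % 256-bin histogram of grayscale I
    [hmax, imax] = max(h);               % histogram peak (bin of b_max)
    imin = find(h > 0, 1, 'first');      % lowest occupied bin (b_min)
    % Line through (imin, 0) and (imax, hmax) in implicit form A*i + C*h + D = 0:
    A = hmax;  C = -(imax - imin);  D = -hmax * imin;
    idx  = (imin:imax)';
    dist = abs(A*idx + C*h(idx) + D) / hypot(A, C);  % point-to-line distances
    [~, k] = max(dist);
    theta = (idx(k) - 1) / 255;          % threshold on a [0,1] gray scale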
The three procedures described above give the values θ = 139 for the Isodata
algorithm, θ = 150 for the background-symmetry algorithm at the 5% level, and θ =
152 for the triangle algorithm for the image in Figure 10.12a.
Thresholding does not have to be applied to entire images but can be used on a region
by region basis. Chow and Kaneko developed a variation in which the M x N image is
divided into non-overlapping regions. In each region a threshold is calculated and the
resulting threshold values are put together (interpolated) to form a thresholding
surface for the entire image. The regions should be of "reasonable" size so that there
are a sufficient number of pixels in each region to make an estimate of the histogram
and the threshold. The utility of this procedure--like so many others--depends on the
application at hand.
Edge finding
Thresholding produces a segmentation that yields all the pixels that, in principle,
belong to the object or objects of interest in an image. An alternative to this is to find
those pixels that belong to the borders of the objects. Techniques that are directed to
this goal are termed edge finding techniques.
* Gradient-based procedure - The central challenge to edge finding techniques is to
find procedures that produce closed contours around the objects of interest. For
objects of particularly high SNR, this can be achieved by calculating the gradient and
then using a suitable threshold. This is illustrated in Figure 10.14.
(a) SNR = 30 dB (b) SNR = 20 dB
Figure 10.14: Edge finding based on the Sobel gradient, combined with the Isodata thresholding
algorithm.
While the technique works well for the 30 dB image in Figure 10.14a, it fails to
provide an accurate determination of those pixels associated with the object edges for
the 20 dB image in Figure 10.14b. A variety of smoothing techniques can be used to
reduce the noise effects before the gradient operator is applied.
* Zero-crossing based procedure - A more modern view to handling the problem of
edges in noisy images is to use the zero crossings generated in the Laplacian of an
image. The rationale starts from the model of an ideal edge, a step function, that has
been blurred by an OTF (for example an out-of-focus, diffraction-limited, or general
blur model) to produce the result shown in Figure 10.15.
Figure 10.15: Edge finding based on the zero crossing as determined by the second derivative, the
Laplacian. The curves are not to scale.
The edge location is, according to the model, at that place in the image where the
Laplacian changes sign, the zero crossing. As the Laplacian operation involves a
second derivative, this means a potential enhancement of noise in the image at high
spatial frequencies. To prevent enhanced noise from dominating the search for zero
crossings, a smoothing is necessary.
The appropriate smoothing filter, from among the many possibilities, should according
to Canny have the following properties:
* In the frequency domain, (u,v) or (Ω,Ψ), the filter should be as narrow as possible
to provide suppression of high frequency noise, and;
* In the spatial domain, (x,y) or [m,n], the filter should be as narrow as possible to
provide good localization of the edge. A filter that is too wide generates uncertainty as
to precisely where, within the filter width, the edge is located.
The smoothing filter that simultaneously satisfies both these properties--minimum
bandwidth and minimum spatial width--is the Gaussian filter. This means that the
image should be smoothed with a Gaussian of an appropriate σ followed by
application of the Laplacian. In formula:

    a_edge(x,y) = ZeroCrossings{ ∇²[g_2D(x,y) ⊗ a(x,y)] }

where g_2D(x,y) is the two-dimensional Gaussian

    g_2D(x,y) = (1/(2πσ²)) · e^(-(x² + y²)/(2σ²))

The derivative operation is linear and shift-invariant; this means that the order of the
operators can be exchanged or combined into one single filter. This second approach
leads to the Marr-Hildreth formulation of the "Laplacian-of-Gaussians" (LoG) filter:

    LoG(x,y) = ∇²g_2D(x,y) = ((x² + y² - 2σ²)/σ⁴) · g_2D(x,y)

Given the circular symmetry this can also be written as:

    LoG(r) = ((r² - 2σ²)/σ⁴) · g_2D(r),   where r² = x² + y²
This two-dimensional convolution kernel, which is sometimes referred to as a
"Mexican hat filter", is illustrated in Figure 10.16.
(a) -LoG(x,y) (b) LoG(r)
Figure 10.16: LoG filter with σ = 1.0.
* PLUS-based procedure - Among the zero crossing procedures for edge detection,
perhaps the most accurate is the PLUS filter as developed by Verbeek and Van Vliet.
The filter is defined as the sum of the Laplacian and the second derivative in the
gradient direction (SDGD):

    PLUS(a) = Laplace(a) + SDGD(a)

Neither the derivation of the PLUS's properties nor an evaluation of its accuracy are
within the scope of this section. Suffice it to say that, for positively curved edges in
gray value images, the Laplacian-based zero crossing procedure overestimates the
position of the edge and the SDGD-based procedure underestimates the position. This
is true in both two-dimensional and three-dimensional images with an error on the
order of (σ/R)² where R is the radius of curvature of the edge. The PLUS operator has
an error on the order of (σ/R)⁴ if the image is sampled at least 3x the usual Nyquist
sampling frequency, or if we choose σ >= 2.7 and sample at the usual Nyquist
frequency.
All of the methods based on zero crossings in the Laplacian must be able to
distinguish between zero crossings and zero values. While the former represent edge
positions, the latter can be generated by regions that are no more complex than
bilinear surfaces, that is, a(x,y) = a0 + a1*x + a2*y + a3*x*y. To distinguish between
these two situations, we first find the zero crossing positions and label them as "1"
and all other pixels as "0". We then multiply the resulting image by a measure of the
edge strength at each pixel. There are various measures for the edge strength that are
all based on the gradient. One possibility, use of a morphological gradient as an
edge strength measure, was first described by Lee, Haralick, and Shapiro and is
particularly effective. After multiplication the image is then thresholded (as above) to
produce the final result. The procedure is thus as follows:
Figure 10.17: General strategy for edges based on zero crossings.
The results of these two edge finding techniques based on zero crossings, LoG
filtering and PLUS filtering, are shown in Figure 10.18 for images with a 20 dB SNR.
a) Image SNR = 20 dB b) LoG filter c) PLUS filter
Figure 10.18: Edge finding using zero crossing algorithms LoG and PLUS. In both algorithms
σ = 1.5.
Edge finding techniques provide, as the name suggests, an image that contains a
collection of edge pixels. Should the edge pixels correspond to objects, as opposed to,
say, simple lines in the image, then a region-filling technique may be required to
provide the complete objects.
Binary mathematical morphology
The various algorithms that we have described for mathematical morphology in
Section 9.6 can be put together to form powerful techniques for the processing of
binary images and gray level images. As binary images frequently result from
segmentation processes on gray level images, the morphological processing of the
binary result permits the improvement of the segmentation result.
* Salt-or-pepper filtering - Segmentation procedures frequently result in isolated "1"
pixels in a "0" neighborhood (salt) or isolated "0" pixels in a "1" neighborhood
(pepper). The appropriate neighborhood definition (4-connected or 8-connected) must
be chosen. Using a lookup table formulation for Boolean operations in a 3 x 3
neighborhood, salt filtering and pepper filtering are straightforward to implement. We
weight the different positions in the 3 x 3 neighborhood with powers of two, the center
pixel receiving weight 1 and its eight neighbors the weights 2 through 256 (one
assignment consistent with the filter definitions below is shown):

    w = | 256    2    4 |
        | 128    1    8 |
        |  64   32   16 |

For a 3 x 3 window in a[m,n] with values "0" or "1" we then compute the weighted
sum of the window contents:

    sum = Σ w[i,j] · a[m+i, n+j]

The result, sum, is a number bounded by 0 <= sum <= 511.
* Salt filter - The 4-connected and 8-connected versions of this filter are the same
and are given by the following procedure:
i) Compute sum
ii) If (sum == 1), c[m,n] = 0; Else c[m,n] = a[m,n]
* Pepper filter - The 4-connected and 8-connected versions of this filter are the
following procedures:
4-connected:
i) Compute sum
ii) If (sum == 170), c[m,n] = 1; Else c[m,n] = a[m,n]
8-connected:
i) Compute sum
ii) If (sum == 510), c[m,n] = 1; Else c[m,n] = a[m,n]
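The lookup-table formulation maps directly onto Matlab's 'makelut'/'applylut' pair (Image Processing Toolbox); a sketch using logical predicates in place of the weighted sum:

    saltLut   = makelut(@(x) x(2,2) && sum(x(:)) > 1, 3);   % drop isolated 1s
    pepperLut = makelut(@(x) x(2,2) || sum(x(:)) == 8, 3);  % fill isolated 0s
    c = applylut(a, saltLut);        % a: binary input image
    c = applylut(c, pepperLut);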
* Isolate objects with holes - To find objects with holes we can use the following
procedure, which is illustrated in Figure 10.19:
i) Segment image to produce binary mask representation
ii) Compute skeleton without end pixels
iii) Use salt filter to remove single skeleton pixels
iv) Propagate remaining skeleton pixels into original binary mask
a) Binary image b) Skeleton after salt filter c) Objects with holes
Figure 10.19: Isolation of objects with holes using morphological operations.
The binary objects are shown in gray and the skeletons, after application of the salt
filter, are shown as a black overlay on the binary objects. Note that this procedure
uses no parameters other than the fundamental choice of connectivity; it is free from
"magic numbers." In the example shown in Figure 10.19, the 8-connected definition
was used as well as the structuring element B = N8.
* Filling holes in objects - To fill holes in objects we use the following procedure,
which is illustrated in Figure 10.20:
i) Segment image to produce binary representation of objects
ii) Compute complement of binary image as a mask image
iii) Generate a seed image as the border of the image
iv) Propagate the seed into the mask
v) Complement the result of the propagation to produce the final result
a) Mask and Seed images b) Objects with holes filled
Figure 10.20: Filling holes in objects.
The mask image is illustrated in gray in Figure 10.20a and the seed image is shown in
black in that same illustration. When the object pixels are specified with a
connectivity of C = 8, then the propagation into the mask (background) image should
be performed with a connectivity of C = 4, that is, dilations with the structuring
element B = N4. This procedure is also free of "magic numbers."
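In Matlab this seed-propagation procedure is available directly as 'imfill':

    filled = imfill(bw, 'holes');  % fill background regions not reachable
                                   % from the image border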
* Removing border-touching objects - Objects that are connected to the image border
are not suitable for analysis. To eliminate them we can use a series of morphological
operations that are illustrated in Figure 10.21:
i) Segment image to produce binary mask image of objects
ii) Generate a seed image as the border of the image
iii) Propagate the seed into the mask
iv) Compute the XOR of the propagation result and the mask image as the final result
a) Mask and Seed images b) Remaining objects
Figure 10.21: Removing objects touching borders.
The mask image is illustrated in gray in Figure 10.21a and the seed image is shown in
black in that same illustration. If the structuring element used in the propagation is B
= N4, then objects are removed that are 4-connected with the image boundary. If B =
N8 is used, then objects that are 8-connected with the boundary are removed.
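The equivalent border-seeded propagation is provided by Matlab's 'imclearborder':

    interior = imclearborder(bw, 4);  % remove objects 4-connected to the border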
* Exo-skeleton - The exo-skeleton of a set of objects is the skeleton of the background
that contains the objects. The exo-skeleton produces a partition of the image into
regions, each of which contains one object. The actual skeletonization is performed
without the preservation of end pixels and with the border set to "0." The procedure is
described below and the result is illustrated in Figure 10.22:
i) Segment image to produce binary image
ii) Compute complement of binary image
iii) Compute skeleton of the complement with the border set to "0"
Figure 10.22: Exo-skeleton.
* Touching objects - Segmentation procedures frequently have difficulty separating
slightly touching, yet distinct, objects. The following procedure provides a
mechanism to separate these objects and makes minimal use of "magic numbers." The
procedure is illustrated in Figure 10.23:
i) Segment image to produce binary image
ii) Compute a "small number" of erosions with B = N4
iii) Compute exo-skeleton of eroded result
iv) Complement exo-skeleton result
v) Compute AND of the original binary image and the complemented exo-skeleton
a) Eroded and exo-skeleton images b) Objects separated (detail)
Figure 10.23: Separation of touching objects.
The eroded binary image is illustrated in gray in Figure 10.23a and the exo-skeleton
image is shown in black in that same illustration. An enlarged section of the final
result is shown in Figure 10.23b and the separation is easily seen. This procedure
involves choosing a small, minimum number of erosions, but the number is not critical
as long as it initiates a coarse separation of the desired objects. The actual separation
is performed by the exo-skeleton which, itself, is free of "magic numbers." If the
exo-skeleton is 8-connected then the background separating the objects will be
8-connected. The objects, themselves, will be disconnected according to the
4-connected criterion.
Gray-value mathematical morphology
Gray-value morphological processing techniques can be used for practical problems
such as shading correction. In this section several other techniques will be presented.
* Top-hat transform - The isolation of gray-value objects that are convex can be
accomplished with the top-hat transform as developed by Meyer. Depending upon
whether we are dealing with light objects on a dark background or dark objects on a
light background, the transform is defined as:

    Light objects: TopHat(A,B) = A - (A ∘ B) = A - max_B(min_B(A))
    Dark objects:  TopHat(A,B) = (A • B) - A = min_B(max_B(A)) - A

where the structuring element B is chosen to be bigger than the objects in question
and, if possible, to have a convex shape. Because opening is anti-extensive and
closing is extensive, TopHat(A,B) >= 0. An example of this technique is shown in
Figure 10.24.
The original image, including shading, is processed by a 15 x 1 structuring element as
described above to produce the desired result. Note that the transform for dark
objects has been defined in such a way as to yield "positive" objects as opposed to
"negative" objects. Other definitions are, of course, possible.
* Thresholding - A simple estimate of a locally-varying threshold surface can be
derived from morphological processing as follows:

    Threshold surface: θ[m,n] = (max(A) + min(A)) / 2

Once again, we suppress the notation for the structuring element B under the max and
min operations to keep the notation simple; its use, however, is understood.
(a) Original (b) Light object transform (c) Dark object transform
Figure 10.24: Top-hat transforms.
* Local contrast stretching - Using morphological operations we can implement a
technique for local contrast stretching. That is, the amount of stretching that will be
applied in a neighborhood will be controlled by the original contrast in that
neighborhood. The morphological gradient may be seen as related to a measure of the
local contrast in the window defined by the structuring element B:

    LocalContrast[m,n] = max(A) - min(A)

The procedure for local contrast stretching is given by:

    c[m,n] = scale · (a[m,n] - min(A)) / (max(A) - min(A))

The max and min operations are taken over the structuring element B. The effect of
this procedure is illustrated in Figure 10.24. It is clear that this local operation is an
extended version of the point operation for global contrast stretching.
Figure 10.24: Local contrast stretching (before and after examples).
In our algorithm
The built-in Matlab function 'graythresh' was used to threshold all images so that the
cytoplasm was white (or 1) and the nuclei and extracellular components were black
(or 0). The Matlab function 'graythresh' computes the global image threshold using
Otsu's method. The result is shown in Fig. 10.25.
Figure 10.25 Data after threshold (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.5.4 Fourier Transform
10.5.4.1 Brief Description
The Fourier Transform is an important image processing tool which is used to
decompose an image into its sine and cosine components. The output of the
transformation represents the image in the Fourier or frequency domain, while the
input image is the spatial domain equivalent. In the Fourier domain image, each point
represents a particular frequency contained in the spatial domain image.
The Fourier Transform is used in a wide range of applications, such as image analysis,
image filtering, image reconstruction and image compression.
10.5.4.2 How It Works
As we are only concerned with digital images, we will restrict this discussion to the
Discrete Fourier Transform (DFT).
The DFT is the sampled Fourier Transform and therefore does not contain all
frequencies forming an image, but only a set of samples which is large enough to fully
describe the spatial domain image. The number of frequencies corresponds to the
number of pixels in the spatial domain image, i.e. the images in the spatial and Fourier
domains are of the same size.
For a square image of size N×N, the two-dimensional DFT is given by:

    F(k,l) = Σ_a Σ_b f(a,b) · e^(-i2π(ka/N + lb/N)),   a, b = 0, ..., N-1

where f(a,b) is the image in the spatial domain and the exponential term is the basis
function corresponding to each point F(k,l) in the Fourier space. The equation can be
interpreted as: the value of each point F(k,l) is obtained by multiplying the spatial
image with the corresponding base function and summing the result.
The basis functions are sine and cosine waves with increasing frequencies, i.e. F(0,0)
represents the DC-component of the image which corresponds to the average
brightness and F(N-1,N-1) represents the highest frequency.
In a similar way, the Fourier image can be re-transformed to the spatial domain. The
inverse Fourier transform is given by:

    f(a,b) = (1/N²) Σ_k Σ_l F(k,l) · e^(i2π(ka/N + lb/N)),   k, l = 0, ..., N-1
To obtain the result of the above equations, a double sum has to be calculated for
each image point. However, because the Fourier Transform is separable, it can be
written as

    F(k,l) = Σ_b P(k,b) · e^(-i2π(lb/N))

where

    P(k,b) = Σ_a f(a,b) · e^(-i2π(ka/N))

Using these two formulas, the spatial domain image is first transformed into an
intermediate image using N one-dimensional Fourier Transforms. This intermediate
image is then transformed into the final image, again using N one-dimensional Fourier
Transforms. Expressing the two-dimensional Fourier Transform in terms of a series
of 2N one-dimensional transforms decreases the number of required computations.
Even with these computational savings, the ordinary one-dimensional DFT has O(N²)
complexity. This can be reduced to O(N log N) if we employ the Fast Fourier
Transform (FFT) to compute the one-dimensional DFTs. This is a significant
improvement, in particular for large images. There are various forms of the FFT and
most of them restrict the size of the input image that may be transformed, often to
N = 2^n where n is an integer. The mathematical details are well described in the
literature.
The Fourier Transform produces a complex number valued output image which can
be displayed with two images, either with the real and imaginary part or with
magnitude and phase. In image processing, often only the magnitude of the Fourier
Transform is displayed, as it contains most of the information of the geometric
structure of the spatial domain image. However, if we want to re-transform the
Fourier image into the correct spatial domain after some processing in the frequency
domain, we must make sure to preserve both magnitude and phase of the Fourier
image.
The Fourier domain image has a much greater range than the image in the spatial
domain. Hence, to be sufficiently accurate, its values are usually calculated and stored
in float values.
In our algorithm
The built-in Matlab function 'fft2' was used to convert the binary images into the
spatial-frequency domain using the two-dimensional discrete Fourier transform. The
image was shifted before the Fourier transform so that the zero frequency component
was at the center of the frequency space. The result is shown in Fig. 10.26.
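A minimal sketch; applying 'fftshift' to the output of 'fft2' yields the same centered spectrum as modulating the image before the transform:

    F = fftshift(fft2(bw));  % 2-D DFT with the DC component moved to the center
    S = abs(F);              % magnitude spectrum, as displayed in Fig. 10.26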
Figure 10.26 Data after Fourier transform (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.5.5 Log Transform
In other words: basic grey level transformations.
Image enhancement is a very basic image processing task that enables us to make a
better subjective judgement of images. Image enhancement in the spatial domain
(that is, performing operations directly on pixel values) is the simplest approach.
Enhanced images provide better contrast of the details that images contain. Image
enhancement is applied in every field where images ought to be understood and
analysed, for example medical image analysis and the analysis of images from
satellites. Here we discuss some preliminary image enhancement techniques that are
applicable to grey scale images.
Image enhancement simply means transforming an image f into an image g using a
transformation T. The values of pixels in images f and g are denoted by r and s,
respectively. The pixel values r and s are related by the expression

    s = T(r)

where T is a transformation that maps a pixel value r into a pixel value s. As we are
dealing here only with grey scale digital images, the results of this transformation are
mapped back into the grey scale range [0, L-1], where L = 2^k, k being the number of
bits in the image being considered. So, for instance, for an 8-bit image the range of
pixel values will be [0, 255].
There are three basic types of functions (transformations) that are used frequently in
image enhancement:
* Linear,
* Logarithmic,
* Power-Law.
The transformation map plot shown below depicts various curves that fall into the
above three types of enhancement techniques.
Figure 10.27: Plot of various transformation functions
The Identity and Negative curves fall under the category of linear functions. The
Identity curve simply indicates that the input image is equal to the output image. The
Log and Inverse-Log curves fall under the category of logarithmic functions, and the
nth root and nth power transformations fall under the category of Power-Law
functions.
10.5.5.1 Image Negation
The negative of an image with grey levels in the range [0, L-1] is obtained by the
negative transformation shown in figure above, which is given by the expression,
s=L-1-r
This expression results in reversing the grey level intensities of the image, thereby
producing a negative-like image. The output of this function can be directly mapped
into the grey scale look-up table consisting of values from 0 to L-1.
10.5.5.2 Log Transformations
The log transformation curve shown in Figure 10.27 is given by the expression

    s = c log(1 + r)

where c is a constant and it is assumed that r >= 0. The shape of the log curve shows
that this transformation maps a narrow range of low-level grey scale intensities into a
wider range of output values, and similarly maps the wide range of high-level grey
scale intensities into a narrow range of high-level output values. The opposite applies
for the inverse-log transform. This transform is used to expand the values of dark
pixels and compress the values of bright pixels.
10.5.5.2.1 Brief Description
The dynamic range of an image can be compressed by replacing each pixel value with
its logarithm. This has the effect that low intensity pixel values are enhanced.
Applying a pixel logarithm operator to an image can be useful in applications where
the dynamic range may be too large to be displayed on a screen (or to be recorded on
film in the first place).
10.5.5.2.2 How It Works
The logarithmic operator is a simple point processor where the mapping function is a
logarithmic curve. In other words, each pixel value is replaced with its logarithm.
Most implementations take either the natural logarithm or the base 10 logarithm.
However, the base does not influence the shape of the logarithmic curve, only the
scale of the output values, which are scaled for display on an 8-bit system. Hence, the
base does not influence the degree of compression of the dynamic range. The
logarithmic mapping function is given by

    Q(i,j) = c · log(|P(i,j)|)

Since the logarithm is not defined for 0, many implementations of this operator add
the value 1 to the image before taking the logarithm. The operator is then defined as

    Q(i,j) = c · log(1 + |P(i,j)|)

The scaling constant c is chosen so that the maximum output value is 255 (providing
an 8-bit format). That means if R is the value with the maximum magnitude in the
input image, c is given by

    c = 255 / log(1 + |R|)

The degree of compression (which is equivalent to the curvature of the mapping
function) can be controlled by adjusting the range of the input values. Since the
logarithmic function becomes more linear close to the origin, the compression is
smaller for an image containing small input values.
10.5.5.3 Power-Law Transformations
The nth power and nth root curves shown in Figure 10.27 are given by the expression

    s = c · r^γ

This transformation function is commonly called gamma correction. For various
values of γ, different levels of enhancement can be obtained. Different display
monitors display images at different intensities and clarity; every monitor has built-in
gamma correction with a certain gamma range, so that a good monitor automatically
corrects all the images displayed on it for the best contrast, giving the user the best
experience.
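A minimal sketch of gamma correction, assuming an 8-bit input image I and an arbitrary gamma of 0.5:

    r = im2double(I);   % normalize pixel values to [0,1], so that c = 1
    s = r .^ 0.5;       % gamma = 0.5: an nth-root curve that brightens midtones
    out = im2uint8(s);  % map back to the 8-bit range [0,255]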
The difference between the log-transformation function and the power-law functions
is that using the power-law function a family of possible transformation curves can be
obtained just by varying γ.
These are the three basic image enhancement functions for grey scale images that can
be applied easily to any type of image for better contrast and highlighting. Using the
image negation formula given above, it is not necessary for the results to be mapped
into the grey scale range [0, L-1]; the output of L-1-r automatically falls in that range.
But for the Log and Power-Law transformations the resulting values often fall outside
this range, depending upon control parameters like γ and the logarithmic scale. So the
results of these transformations should be mapped back to the grey scale range to get
a meaningful output image. For example, the log function s = c log(1 + r) results in
values between 0 and 2.41 for r varying between 0 and 255, keeping c = 1. So, the
range [0, 2.41] should be mapped to [0, L-1] to get a meaningful image.
In our algorithm
The most common application of dynamic range compression is for the display of the
Fourier transform. The log transform of the image in Fourier space was performed
using the equation

    s = log(r + 1)

The log transform compressed the values of the light pixels of the image and
expanded the values of the dark pixels of the image. This reduced the DC values
relative to the rest of the pixel values, allowing the details of the transform to become
visible (Fig. 10.28). At this point a feature starts to become apparent which might be
used to automatically separate the normal tissue from the cancer. In the cancer
samples, the low frequency bright spot is fairly uniform. Looking closely at the
normal samples, a dark ring is visible.
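A minimal sketch of this step, where F is the centered Fourier transform from the previous step:

    S = log(1 + abs(F));                     % compress bright (DC) values,
                                             % expand dark ones
    imagesc(S), axis image, colormap(gray)   % details visible as in Fig. 10.28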
Figure 10.28 Data after log transform (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.5.6 Mean Filter
Mean filtering is a spatial filter that replaces the center value in the window with the
average of all the pixel values in the window. The window is usually square but can
be any shape.
The mean filter is a simple, intuitive and easy to implement method of smoothing
images, i.e. reducing the amount of intensity variation between one pixel and the next.
It is often used to reduce noise in images (see Chapter 3).
In our algorithm
These images are fairly noisy, which may make automatic detection schemes
challenging. To reduce the noise, a 5 by 5 pixel mean filter was implemented. This
filter averaged 25 points, thus reducing the noise by a factor of 5 (the square root of
the number of points averaged). Because a single pass of this filter did not seem to
provide sufficient noise reduction, the image was passed through the filter a second
time. The results can be seen in Fig. 10.29.
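A minimal sketch of the two filter passes over the log spectrum S:

    k = ones(5) / 25;         % 5-by-5 averaging kernel
    S = conv2(S, k, 'same');  % first pass
    S = conv2(S, k, 'same');  % second pass for further noise reduction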
Figure 10.29 Data after mean filter (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
Here the dark rings in the low frequency area of the normal tissue are still visible, but
the noise is reduced.
10.5.7 Line Plot
To reduce these two-dimensional images to a form that could easily and quantitatively
be analyzed by 1-dimensional signal processing techniques, the center row of pixels
was extracted and the pixel values were plotted against their positions. The results
can be seen in Fig. 10.30.
From the plots in Fig. 10.30 it can be seen that there is a local minimum in the normal
images at approximately the 7th pixel from the center, or at a frequency of
approximately 55 mm^-1. Possibly more telling is that there is a local maximum at
approximately the 16th pixel from the center, or 127 mm^-1. This would indicate that
normal cells contain regular features which repeat at 7.8 μm, whereas the cancerous
cells do not contain this repeating nature.
Figure 10.30 Data after line plot (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
An alternative approach was also implemented, starting with the log transformed
Fourier space image. As already described, there were two things that we wanted to
do to this picture. The first was to reduce the noise. The second was to reduce the
image to a plot that could be quantitatively analyzed. To accomplish both goals
simultaneously the radial symmetry of the image was exploited, and pixels were
averaged according to their radius. For example, the values of all pixels 5 pixels from
the center of the image were averaged. Then the values of all pixels 6 pixels from the
center were averaged. This was done along the entire radius of the image. This
function was then mirrored around DC to make the result more intuitive (Fig. 10.31).
The noise reduction from averaging goes as the square root of the number of pixels
averaged, thus the noise reduction changes as a function of radius. However, it may
be argued that this retains the radial features of the image better than applying a
uniform averaging filter. Again in these plots, it can clearly be seen that there is a
local minimum in the normal images at approximately the 7th pixel from the center,
or at a frequency of approximately 55 mm^-1, and a local maximum at approximately
the 16th pixel from the center, or 127 mm^-1.
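A minimal sketch of the circumferential averaging, where S is the log-transformed spectrum; the choice of center pixel is our assumption:

    [ny, nx] = size(S);
    [X, Y] = meshgrid(1:nx, 1:ny);
    r = round(hypot(X - (nx/2 + 1), Y - (ny/2 + 1)));  % integer radius per pixel
    profile = accumarray(r(:) + 1, S(:), [], @mean);   % mean value at each radius
    profile = [flipud(profile); profile(2:end)];       % mirror around DC
    plot(profile)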
Figure 10.31 Circumferentially smoothed power spectrum (NORMAL 1, NORMAL 2, CANCER 1, CANCER 2)
10.6 Discussion
Multiple image enhancement steps were needed to exaggerate the differences between
the frequency-domain images of normal and cancerous tissues (median filter, unsharp
mask, threshold). Additional enhancements were needed to improve contrast between
the power spectra of normal and cancerous tissues (averaging). After these
enhancements, clear differences could be seen between the normal and cancerous
power spectra. For example, in Figs. 10.30 and 10.31 there are frequency peaks, as
indicated by the arrows in the normal tissue spectrum, which are not present in the
cancerous tissue spectrum. With more extensive testing, we believe we may be able to
use this local maximum to quantify the organization of the tissue structure. This could
be investigated in the future and potentially lead to automated diagnostic techniques,
reducing the cost and increasing the accuracy of epithelial biopsy procedures.