Download MAESTRO Deliverable N° D8 : Report on the development of a

Survey
yes no Was this document useful for you?
   Thank you for your participation!

* Your assessment is very important for improving the workof artificial intelligence, which forms the content of this project

Document related concepts

Medical imaging wikipedia , lookup

Image-guided radiation therapy wikipedia , lookup

Fluoroscopy wikipedia , lookup

Transcript
Project no. LSHC-CT-2004-503564
MAESTRO
Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology
Instrument : Integrated Project
Thematic Priority : Life sciences, Genomics and Biotechnology for Health
Deliverable N° D8 : Report on the development of a prototype system for
multi-modality image registration, segmentation, modelling and organ tracking
Due date of deliverable : month 12
Actual submission date : month 12
Start date of project : 1st May 2004
Duration : 5 years
Organisation name of lead contractor for this deliverable : UEA (University of East Anglia)
Revision : 1
WP1.3: Progress report on the development of a
prototype system for multi-modality image
registration, segmentation, modelling and organ
tracking
Mark Fisher† , Yu Su† , Gloria Bueno‡, Olivier Haas#
†
School of Computing Sciences, UEA, Norwich, UK
‡
Universidad de Castilla-La Mancha, Spain
#
CTAC, Coventry University, UK
∗
April 29, 2005
1
Introduction
This report details the progress made towards the first deliverable for WP1.3, a prototype organ tracking demonstrator, due in month 18. Unfortunately, delays in receiving
the budget for the project has resulted in delays in recruiting key staff. This has impacted significantly on the progress made during the first 12 months. Also, there have
been problems sourcing key image data sets (i.e. Electronic Portal Images (EPI) and
Electa ’cone’ beam images) from within MAESTRO. Although other sources of data
have now been identified outside the MAESTRO project, there is no guarantee that
these clinical partners will remain committed to the project through the future months.
Establishing an image database within MAESTRO is therefore a high priority task.
A small number of EPI (Figure 1 shows a representative set) have been received from
the Norfolk and Norwich University Hospital (situated adjacent to UEA). Several problems are evident in these images. Firstly, there is a need for image enhancement and
secondly there is the problem of segmentation (a prerequisite for registration/tracking).
Finally there is the problem of registration/tracking itself. The following sections review the literature in each of these areas.
2
Image Enhancement
Image enhancement is employed to improve the appearance of an image, e.g., to remove noise, to deblur object’s edges, or to highlight some specified features [99]. Over
∗
email{mhf,sy}@cmp.uea.ac.uk, [email protected], [email protected]
1
a)
b)
c)
Figure 1: Typical EPI Images a) Prostate, b) Pelvis and c) lung (enhanced by histogram
equalization)
time, many techniques for image enhancement have been proposed. Histogram equalization and adjustment, together with linear filtering are standard image enhancement
operations. However, other operations exist, such as thresholding, nonlinear filtering, adaptive filtering approaches. Several enhancement techniques are introduced in
following paragraphs.
Image restoration also improves the image, therefore it is considered by some as
image enhancement too. However, the difference between these two techniques is that
the image enhancement uses subjective criteria to improve the appearance of the image,
while image restoration tries to reverse specific damage to the image using objective
criteria [80]. There are two reasons why an image may require restoration. Firstly, the
grey level of individual pixels may be changed by the imaging process and secondly
an image might become distorted by individual pixels shifting away from their correct
position.
In radiographic images, the alteration of individual pixels’ grey value may be caused
by the imaging devices’ point spread function (PSF), which describes the increase in
the response of the electronic portal image devices (EPID) at the beam axis due to
off-axis irradiation mainly from the scattered irradiation [31].
The image distortion, caused by individual pixels shifting away from their correct
position, is the subject of geometric restoration, also known as image registration [80].
Image registration is very important in medical applications where two images have to
be aligned with each other and is the focus of another section.
Existing enhancement techniques may be grouped according to the technical methods involved [99]:
1. Spatial smoothing of regions, which employs linear or nonlinear spatial-domain
low pass filters;
2. Intensity adjustment and histogram equalization, for contrast and feature enhancement;
3. Edge enhancement, which involves linear or nonlinear spatial-domain high pass
filters;
4. Frequency-domain filtering, which utilizes low or high pass filters in frequency
domain.
2
Spatial smoothing of regions removes random noise at each pixel. Under this scheme,
the random noise is assumed to be additive and normally distributed with zero mean.
Commonly used spatial smoothing schemes include convolution based equal/unequalweighted neighbourhood-averaging filters, namely convolution kernels [80, 17, 81, 67,
94], nonlinear filters [5], order-statistic filters [5], and the Wiener lowpass filters [94].
Nonlienar special filtering, such as the median and Hachimura-Kuwahara filter are
often used in image processing to reduce so-called ”salt and pepper” noise and are more
effective than convolution based kernels when the goal is to simultaneously reduce noise
and preserve edges. The median filter is based upon moving a window over an image as
in a convolution and computing the output pixel as the median value of the brightness
within the input window. The Hachimura-Kuwahara filter is a particular example of
edge-preserving/enhancing smoothing filters [46]. It is able to smooth images without
disturbing the sharpness and, if possible, the position of edges [5].
Order-statistic filters, are also called Rank-order filters. They replace each element
in an image array by the nth element in the sorted set of neighbours specified by the
nonzero elements in the domain [5].
Wiener lowpass filters are adaptive noise-removal filters that assume an intensity
image that has been degraded by additive noise of constant power. This filter uses a
pixel-wise adaptive Wiener method based on statistics estimated from a local neighbourhood of each pixel [94].
Intensity adjustment and histogram equalization methods attempt to requantize
the image by assigning a new grey level to each pixel to improve the contrast [80, 17,
81, 67, 94]. These methods improve the contrast of an image and hence can extract
hidden features from the background according to some subjective criteria.
Histogram equalization attempts to requantize the image such that all grey values
in an image are equally probable. In order to get an absolutely flat histogram, a method
called histogram with random addition has been adopted [79], with which the pixels are
randomly re-distributed across neighbouring grey values. It has been pointed out that
human perception performs a nonlinear transformation of the light intensity and so one
may want to emphasize certain grey values to compensate human eye responses [25].
The problem of inhomogeneous contrast has been addressed by sliding a window across
the whole region of an image and only modifying the histogram inside the window [80].
Edge enhancement methods are used to deblur the edge of an object by increasing
the grey level difference between the edge pixels of the object and those of the neighbouring background. Characterized as edge-preserving, the Hachimura-Kuwahara filter mentioned in the previous section is often used in edge-enhancement [5]. The
anti-diffusion operation is also commonly adopted for edge reinforcement [3], however,
overemphasis of accidental fluctuations is one of the major problems with this approach, attempts to address this problem are summarized in [99, 41]. Local statistic
filtering technique have also been used in edge enhancement [48, 49]
Filters, implemented in the frequency domain, can be arranged into three different
groups, lowpass, highpass and bandpass filters. These filters are also commonly used
in image restoration. A standard restoration approach is the inverse filter. Here we
attempt to divide the image in the Fourier space by the optical transfer function of the
imaging system. This method is sensitive to noise and assume that the optical transfer
function, which corrupted the image, has to be known beforehand in order to recover
3
the ”true” image. This is often not the case and therefore, more comprehensive methods
such as the Wiener filter and Maximum Entropy filter (also based on the same principle)
have been developed to produce more satisfactory results [21] and [1, 26, 28, 32, 36].
Other methods of image enhancement can be found in [80].
2.1
Towards Enhancement of EPID Images using Restoration
Techniques
Although the boundary between image enhancement and image restoration is rather
blurred, a common understanding is that the enhancement is based on some subjective
criteria whilst in the restoration process, it is assumed that the cause of the image
degradation is previously known. Various restoration techniques have, therefore, been
developed to compensate for the effects of degradation. According to the technical
methods involved, restoration methods may be organized in to several groups such
as, filtering approaches, reconstruction from coded images, error recovery, space varying restoration, adaptive restoration, frequency spectrum restoration, Bayesian image
restoration, restoration by deconvolution, color, multispectral, multichannel restoration, etc. [2].
In radiographic images, the alteration of individual pixels’ grey value may be caused
by the imaging devices’ point spread function (PSF), which describes the increase in the
response of the electronic portal image devices (EPID) at the beam axis due to off-axis
irradiation, resulting mainly from scattered irradiation [3, Heijmen et al, 1995]. Several
pilot studies on the EPID’s PSF have been intensively carried out during 1990’s, in
which measured data has been used to derive the EPID’s PSF [31].
The deconvolution operation is to be used in our investigation to recover the original
image. Given the corrupted image g , to restore the original undegraded image f , the
prior knowledge of the PSF or its Fourier transform of the degradation process may be
used .
Under the assumption that the effect, which corrupts the image is linear, the degraded image g(α, β) can be written as follows:
g(α, β) =
∞
−∞
or in its discrete form:
g(i, j) =
f (x, y)h(x, y, α, β)dxdy
N
N f (k, l)h(k, l, i, j)
(1)
(2)
k=1 l=1
where h(x, y, α, β) and h(k, l, i, j) are the point spread function (PSF) in its continuous
and discrete form respectively, while f (x, y) and f (k, l) are the continuous and discrete
expressions of underlying undegraded image [80]. By removing the degradation factor,
the ”true” image can be recovered.
Deconvolution mainly solves the blurring problem, which we have already introduced in previous section of enhancement. According to the technical methods, deconvolution algorithms may be categorized as follows:
Blind Deconvolution Algorithm The Blind Deconvolution Algorithm can be used
effectively when no information about the distortion (blurring and noise) is
4
known. The algorithm restores the image and the point-spread function (PSF)
simultaneously.
Lucy-Richardson Algorithm The Lucy-Richardson algorithm can be used effectively when the point-spread function PSF (blurring operator) is known, but
little or no information is available for the noise. The blurred and noisy image is restored by the iterative, accelerated, damped Lucy-Richardson algorithm.
The additional optical system (e.g. camera) characteristics can be used as input
parameters to improve the quality of the image restoration.
Regularized Filter Regularized deconvolution can be used effectively when constraints
are applied on the recovered image (e.g., smoothness) and limited information is
known about the additive noise. The blurred and noisy image is restored by a
constrained least square restoration algorithm that uses a regularized filter.
Wiener Filter Wiener deconvolution can be used effectively when the frequency characteristics of the image and additive noise are known, to at least some degree.
The Wolfson Bioinformatics Laboratory of School of Computing Science at UEA has
been working on the advanced development of intelligent deconvolution algorithms
based on the above approaches for the last 10 years [84, 82, 85, 83, 86, 88, 87, 89].
However, in medical radiographic images, different types of tissue and scattering
effects demand a more complex dynamic PSF model. The whole process may be
simulated using the results generated by previous published studies on EPID’s PSF as
mentioned in [31]. The enhancement of EPID image is the focus of current work in
progress.
3
Tracking
The aim of motion tracking is to recover the movement of an object (through space)
given a sequence of image frames F (x, y, t). Dynamic scene analysis involving a stationary camera is a well studied computer vision problem and the most successful techniques are based on determining optical flow (i.e. the velocity vector of each pixel in the
image). Methods for efficiently and robustly computing optical flow have their origins
in early work in computer vision and artificial intelligence, for example [56, 33, 53, 16].
Much of the research in the Computer Vision community has focused on tracking
humans (e.g. video surveillance [11, 40, 51]) or parts of humans (e.g. gesture recognition [65], gait analysis [20], lipreading [62]) and traffic surveillance [23, 27], however
there are a few examples of medical applications. For example, over the last 10 years
researchers have studied the estimation of cardiac motion and deformation from cine
MR imaging employing MR tagging or phase contrast [72, 91]. Work by Winterfelt
et al. [103] and Jacob et al. [35] have considered 2D image segmentation and tracking
approaches (e.g. Nastar and Ayache [69, 68], McEachen and Duncan [63]) without any
coherence between slices. However, recent work has used surface properties extracted
from 3D electrocardiography (3DE) [70, 71, 13], captured over the cardiac cycle.
5
3.1
Active Surface Models
A common approach is to track shape related features on the left ventricle LV over
time [92], using either statistical (typically ASM [19]) or deformable [64] models. In
general all the methods depend on a accurate segmentation of the LV walls. This
stage is usually achieved interactively and fully automatic segmentation is the focus
of current research [66]. The problem of determining optical flow cannot be solved
if we assume every point in the image can move independently and a model that
captures assumptions about the displacement field is usually employed; this typically
takes the form of a smoothness constraint. Spatial and temporal smoothness arise from
the intuition that we are viewing homogeneous objects of finite size undergoing rigid
motion or deformation. In this case neighbouring points on objects will have similar
velocities. High values of the derivatives of the displacement field are likely to be the
result of noise (or object occlusion), this leads to methods that impose a regularisation
constraint that penalises the spatial derivatives, such as in the method proposed by
Horn and Schunk [33], Eqn. 3.
arg min
û =
u
x
dI
+ u.∇I
dt
2


dui 2
 dx
+λ
ij
dxj
(3)
where u is the displacement vector field over a space x that can be two- or threedimensional, t is time, and I represents the image. The gradient constraint term
(It + u.∇I)2 essentially tries to match points of equal intensity and is the data term.
The regularising term can be thought of as a model term that captures an hypothesis
about the properties of the displacement field. More generally, the gradient constraint
term can be replaced by an image data adherence term that tries to ensure that the displacement field stays close to some pre-existing displacement estimates. For example,
if an estimate um of the displacement field exists, Eqn. 3 can be written:
arg min
û =
u
x

2 
du
i
 dx
|u − um |2 + λ 
ij
dxj
(4)
If we discretize Eqn 4, differentiate it with respect to u and concatenate all the individual displacements u into a large vector U we can write a generalised expression
[K]U = F
(5)
where K is a matrix of local derivative operators, that includes model constraints from
a regularisation term and F is the driving force that tries to deform the model to the
image data. Temporal smoothness constraints are imposed either within a Bayesian
framework (e.g. Kalman or particle filtering [14] chapters 9–12) or via a mechanical
model (e.g. extending Eqn. 5 to include dynamics (Eqn. 6) [96]:
M Ü + C U̇ + [K]U = F
(6)
where M is a mass matrix and C is a damping matrix.
Recently, researchers at Carnegie Mellon University [7, 6, 8, 9, 10] have attempted
to establish a unified approach for image alignment by describing most algorithms and
6
their extensions within a consistent framework. In this series of papers, Baker et al.
revisit the Lucus-Kanade algorithm [56, 57], summarising the problem as follows:
“The goal of the Lucas-Kanade algorithm is to align a template image T (x) to an
input image I(x) where x = (x, y)T is a column vector containing the pixel coordinates.
Let W(x; p) denote the parameterised set of allowed warps, where p = (p1 , . . . , pn )T
is a vector of parameters. The warp W(x; p) takes the pixel x in the template T and
maps it to the sub-pixel location W(x; p) in the image I.” [8]
The approach attempts to minimise the sum of squared error between the two
images, the template T and the image I warped back to the coordinate frame of the
template (Eqn. 7):
[T (x) − I(W(x; p))]2
(7)
x
and this is solved as a Gauss-Newton gradient descent non-linear optimization problem.
4
Use of EPIDs in QA for IMRT
The mainstay of intensity modulated radiotherapy treatment (IMRT) delivery is the
multileaf collimator (MLC) [101]. Intensity modulated beams (IMBs) may be constructed using a sequence of static MLC shaped fields in which the shape is fixed
between the delivery of quanta of fluence, the so called static MLC (SMLC) technique,
or the leaves may define changing shapes with the radiation on; the so called dynamic
MLC (DMLC) technique.
The availability of electronic portal imaging devices (EPIDs) [4] has motivated
researchers to consider their use for estimation of patient set-up errors [61, 95] (localisation) and quality assurance (verification). In the early days of DMLC it was feared
that the concept of moving components during treatment compromised patient safety
and so great efforts were made to assure the quality of dynamic radiation therapy.
Attempts using EPID to verify collimator leaf position in both static and dynamic use
have been reported.
Quality control of MLC’s for static use has been studied by Eilertsen [22] to quantify the performance of a Varian MLC in conjunction with a EPID. Dynamic studies
undertaken by Partridge [73] using an EPID capable of recording an image for each
pulse (1/25 s) of an Electra accelerator were able to verify the collimator leaf position
and more recently this has been achieved when patient attenuation is present [75, 74].
James [37, 38] and Williams [102] used an Electa fluoroscope EPID system to track
movement of the MLC leaves during DMLC and provided geometric verification in real
time. Pasma [76, 78, 77] and van Esche [98] have made a pre-treatment dosimetric
verification of IMRT using a CCD camera based fluoroscopic EPID system. EPID
images are acquired for all beams and converted to 2-D dose distributions which are
subsequently compared with predicted exit dose distributions.
Most clinical work using dynamic MLC methods are focused on the delivery of
IMBs using microMLCs for stereotactic radiosurgery of small concave target volumes.
Several clinical studies report success in treating brain tumours [12, 15, 100] (e.g. using
a computer controlled microMLC manufactured by the BrainLab Corporation). Here,
7
the computer runs a program that acts as an ’interpreter’, turning the desired intensity
modulation into a set of instructions to drive the MLC leaves. Treatment is usually
delivered in fractions and the anatomy is immobilised using a mechanical fixation
device to ensure the target alignment is accurately maintained (hence the treatment is
constrained to head and neck).
5
Movement studies and models for IMRT
Yu et al. [110, 109, 111] have studied the effect of intra-treatment movement during
the delivery of IMRT via the sliding-window DMLC technique and have shown that
patient movement during treatment can lead to 100% errors in the delivered dose.
Techniques such as active breathing control [47] aimed at limiting the effects of patient
motion and the introduction of smoothing constrains in treatment planning software
have been proposed to offset these effects.
Yang et al. [107] made a series of measurements which determined the importance of
breathing motion in tomotherapy using both spiral (continuous) and MIMiC (discretestep discrete motion) delivery approaches and reported on the use of a dynamic phantom to simulate this. It showed primarily that spiral tomotherapy is little affected
by motion, provided this motion is rapid with respect to gantry rotation motion and
using the MIMiC technique the dose distribution is actually improved. Other studies
involving tomotherapy include work by Chui [18] and Fitchard et al. [24].
The effects of systematic gantry and collimator angular rotation errors have been
studied by Low et al. [54] for IMRT delivered with fixed portals. Xing et al [106]
included errors due to the couch and logitudinal displacements using the MIMiC technique for IMRT delivery. Kung and Chen [45] studied the effect of misregistration
(MIMiC). Löff et al. [52] made a theoretical study of movement with respect to IMRT.
Webb [101] reports that no study has shown which IMRT treatment method most insensitive to tissue movement, but that ‘beating’ is an obvious problem that must be
avoided (i.e. “the patient must breath fast or not at all!” [101]). Hector [30, 29] studied
IMRT of the breast with respect to patient movement (fields can be redesigned to take
account of the change in breast volume with progress of treatment).
“One of the major difficulties with IMRT is the possibility (probability) that movement during the therapy will compromise the advantages of the method.” [101].
Several systems exist to measure changes due to motion in the abdomen and some
radiation systems can be gated (see [43, 44, 59, 60]). Mageras [59] measured that
the tumour position for lung tumours varies by 1.0–2.5 cm and for liver and kidney
tumours by 1.5–3 cm due to breathing. MRI movies confirm the lung location to
move up to 2 cm during breathing. Wong et al [105, 104] state that the total beam
aperture expansion needed to take account of breathing, set-up variations and beam
penumbra can be as large as 2.5 cm, gives rise to a significant risk to normal structures.
Studies of movement due to breathing using EPIDs have shown that some kind of
breathing control is advantageous [44] and MacKay et al. [58] have completed studies
using an imaging tool to show by animation the movement of tissues and compute
the biological consequences in terms of tissue control probability (TCP) and normal
8
tissue complication probability (NTCP). These showed that if the margins were small
then it would be necessary to intervene to correct for movements which had been
observed using portal imaging. Shirato et al [93] developed a technique whereby the
movement of the organ irradiated is tracked using four X-ray TV systems which view
an implanted gold seed, and gated if the target moves significantly. A theoretical
study using EPID’s to compensate for target motion in the in the patient’s body using
dynamic MLC systems has been reported by Li et al. [50]. [108]
5.1
Gabor Wavelet Network
The Gabor Wavelet(GWN) [113] is defined as filter for feature detector. For a good
understanding of the utility of (GWN) for tracking, lets remind some proprieties of
these. At first, it is invariant to some degree with respect to translation, rotation
and dilation. Secondly, the parameters of Gabor wavelets including the weights which
are directly related to their filter responses are directly related to the underlying image
structure. And finally, the precision of representation depends on number of the Gabor
wavelets chosen. The grey-level image can be considered as a 2D function. By a
continuous wavelet transformation of this function, which is an orthogonal projection
from this image to wavelet space, we can transform an image to the sum of Gabor
wavelet. This method has been applied to estimate for example the head pose [42],
and Real-time face tracking [90], which decompose de face on Gabor wavelets. But
the main disadvantage of this application is its incapability to track the out of the plan
movement.
5.2
Space-Time Tracking
Unlike the classic techniques for tracking which use a prior model, the space-time rank
constraint is an reverse process which can be used for non-rigid motion tracking [55],
[97]. The principle of this method based on representing the tracking by a matrix. This
matrix combine both the x and y movement of the pixels. It has been proven that the
tracking matrix can be factored to a matrix who describes the relative pose between
camera and object for each time frame, and second one describes the 3D structure of
the scene which is invariant to camera and object motion. This approach works By
estimating the parameters of the first matrix which are the motion parameters and
then estimate the model which fits the data.
5.3
Kalman Filter
In the assumption of linear Gaussian system, Kalman filter is used for tracking. One
version of Kalman filter called structural Kalman filter is proposed in [39] for tracking
that deal with the problem of measurements inaccuracy of the target. The structural
Kalman filter is composed by a cell Kalman filters that allocated to the sub-region
and relation Kalman filters allocated to the connection between two adjacent regions.
The method has been applied to the human body tracking. The use of the tracker
based Kalman filter is limited by the fact that it is based on Gaussian density which
is unimodal [34], it can’t represents simultaneous alternative hypothesis
9
5.4
Bayesian Method
The vision problem specially tracking process can be formulated as Bayesian inference
and in this case a maximum a posteriori(MAP) is considered. The method works
by inferring a prior probability distributions of the variable which can represent the
position of an object for example, and a conditional distribution of the measurements
given the position variable. But it has proved some limits of this method [112] like
Order Parameters and Phase Transitions for Road Tracking.
6
Image Registration
6.1
Introduction to Image Registration
In the field of medical images there are many different modalities, among them, we can
consider:
1. Anatomical images: (X-Ray, CT, MRI, ultrasound image, portal image and
video) the goal of those images is studding and looking for anatomical structures inside the images.
2. Functional images: (SPECT, PET, fMRI, EPI), in those images we try to understand and study functional process, for example, which brain zones are affected
in a specific pathology, It’s usually apply a biological contrast in the image.
All medical images contain information, either anatomical or functional. Very often
two or more images are acquired of the same patient. When the second image is
acquired, it is practically impossible to have the patients head positioned in the scanner
exactly the same way as the first time, so we need to spatial aligned these images.
In order to integrate all the information from different sources and modalities and
be able to apply computer algorithms to the images, we need to define a spatial relationship between the images. The goal of the image matching is to compare different
images applying spatial transformations.
Considering different criteria, we can group the matching techniques as follows:
1. Nature of matching basis:
• Extrinsic: based on foreign objects introduced into the imaged space (field
markers, adapters).
i. Invasive Techniques, the markers are inside the patient body.
ii. No Invasive Techniques, the markers are outside the body (skin markers).
• Intrinsic: based on the image information as generated by the patient. Inside
this group we can consider based on landmark, segmentation based and voxel
property based
• Non-image based (calibrate coordinate systems).
2. Nature of Transformation
10
• Rigid: only translations and rotations are allowed.
• Affine: if the transformation maps parallel lines onto parallel lines.
• Projective: if it maps lines onto lines.
• Nonrigid: elastic models
3. Modalities involved
• Monomodal: the images to be registered belong to the same modality.
• Multimodal: the images to be registered stem from two different modalities.
• Modality to Model: only one image is involved a the other is a model.
• Modality to Patient: only one image is involved a the other is the patient.
Among the different applications of matching are: MR images [8,14,17] a review of
MRI register techniques are present on [2], applied to EPI (Echoplanar Images) [3,5,15],
applied to microscopic images [6], register process improvements [18], biologic images
[7], partial data images [9], neurosurgery [10], applied to de radiotherapy process using
DRR and Portal Images [12], with biochemistry images like gel images [7].
Others approach could be studied in order to define the matching, an example is [18]
where is proposed matching the gradient of each image and the histograms associated
to them.
6.2
Methods
A review of several matching methods by Maintz [4] is an good start point on image
registration. Registration techniques may be divided into two categories, rigid and
nonrigid.
6.2.1
Rigid Matching
An image coordinate transformation is called rigid, when only translations and rotations are allowed. If the transformation maps parallel lines onto parallel lines is called
affine. If it maps lines onto lines is called projective.
Some examples of rigid matching are [2,3,4,5,7,8,10,11,12,16,18]
6.2.2
Nonrigid or Elastic Matching
These transformations cannot in general be represented using constant matrices. The
elastic registration is based on deformation models, like β-splines (snakes) [6,7,13],
Monte-Carlo methods [8] and adaptive algorithms [14]. Some examples of rigid matching are [4,6,7,8,9,13,14,15].
All these techniques trying to minimize the differences among images. There are
several measurements based on different methods for estimation:
• Correlation Methods: based on measure the similarity between the images, some
of these measurements are mutual information [1,2,10,11,12,15,16], entropy[1] and
distance functions [3,5,7,9].
11
• Point Based Methods: Estimation of transformation parameters using points
methods is based on establishing correspondence between identified points in both
images, like Active Shape Models based on Point distribution model. Modelling
is based on applying principal component analysis [8].
• Fourier: similar to correlation methods but using Fourier domain instead spatial
domain. It is based on Fourier transformation [2,18].
• Moment Methods: Moments define the spatial distribution of a rigid mass, like
principal axes [2].
• AIR (Automated Image Registration): AIR is a sophisticated and powerful image
registration algorithm. This method uses all the pixels in the image, and the
search space consist of up to fifth-order polynomials, involving as many as 168
parameters [2] that uses ”ratio of image uniformity” (RIU) as the similarity
measure.
6.3
Applications
Image Matching is a generally used in many fields, when It is necessary to study
different images, it is usually to use a matching technique in order to be able to compare
both images.
There are many areas where is suitable to apply matching methods:
1. Medical Area:
• Study the growth of a tumour during an time period, using CT (see [2]).
• Help to identify the anatomic location of certain mental activity using PET
and MRI is necessary align the different modalities (see [2]) or using EPI
and MRI [3,13].
• Correction for Interscan patient motion and geometric distortions on EPI
[5,15].
• Applying to MRI images previously segmented [8].
• Neurosurgery using Laser-Range Scanning (LRS) [10].
• 2D-3D registration using DRR using light fields [11].
• Patient positioning in radiotherapy using PI and DRR[11,12,16].
2. Microscopy:
• Registration of confocal images [6].
• Analysis of microarrays, genetic expression patterns, recognition of proteins
or study of 2D electrophoretic geles [7].
3. General Purpose:
• Images with significant rotation and translation between them [18].
• Deal with partial data images [9].
12
6.4
Conclusion
The goal of image registration is to find a transformation that aligns one image to another. This activity is due in part to the many clinical applications including diagnosis,
longitudinal studies, surgical planning and radiotherapy planning. In Radiotherapy
Planning (RTP) medical imaging is one of the most important task. In RTP many
image modalities are involved, high definition MRI, CT studies, fMRI, EPI. Portal
images, DRR, etc. Some of them are anatomical images where the aim is find out
relevant structures, and others are functional ones trying to show functional areas.
These images come from several sources, and are used to make a diagnostic, study the
evolution of pathologies, correct the position in neurosurgery or radiotherapy process,
compare result from different patients, etc.
We can find out different aspect where the image matching is important in order
to diagnosis or solve no trivial problems involved in RTP as:
1. Patient Position [11,12,16].
2. Correction of Images [5,15].
3. Comparison between 2D and 3D Images, from different sources with different
features [2,3,11,13].
These aspects use several modalities of image, and is necessary to adapt each algorithm to the image features.
There are some issues where an improvement of methods and more research is
available, some of these issues are:
• Decrease the computational cost in order to be able to apply the methods to
real-time registration.
• Study different similarity measures like mutual information or specific measures.
• Adapt general matching methods to specific problems, with specific image modalities and specific requirements.
• Trying to apply new deformable models.
13
References
[1] Digital filters - frequencies filters. http://www.cee.hw.ac.uk/hipr/html/freqfilt.html.
[2] Keith price bibliography contents for chapter on image processing.
http://iris.usc.edu/Vision-Notes/bibliography/contentsimage-proc.html.
[3] I E Abdou and W K Pratt. Quantitative design and evaluation of enhancement/thresholding edge detectors. In Proc. IEEE 67, page 753, 1979.
[4] Larry E Antonuk. Electronic portal imaging devices: a review and historical
perspective of contempory technologies and research. 47:R31–R65, 2002.
[5] J Astola and Pauli Kuosmanen. Fundamentals of Nonlinear Digital Filtering.
ISBN 0-8493-2570-6. CRC Press, 1997.
[6] Simon Baker, Ralph Gross, Takahiro Ishikama, and Iain Mathews. Lucus-kanade
20 years on: A unifying framework: Part 2. Technical Report - CMU-RI-TR-0301, 2003. Robotics Institute, Carnegie Mellon University.
[7] Simon Baker, Ralph Gross, and Iain Mathews. Lucus-kanade 20 years on: A unifying framework: Part 1. Technical Report - CMU-RI-TR-02-16, 2002. Robotics
Institute, Carnegie Mellon University.
[8] Simon Baker, Ralph Gross, and Iain Mathews. Lucus-kanade 20 years on: A unifying framework: Part 3. Technical Report - CMU-RI-TR-02-35, 2003. Robotics
Institute, Carnegie Mellon University.
[9] Simon Baker, Ralph Gross, and Iain Mathews. Lucus-kanade 20 years on: A unifying framework: Part 4. Technical Report - CMU-RI-TR-04-14, 2004. Robotics
Institute, Carnegie Mellon University.
[10] Simon Baker, Raju Patil, German Cheung, and Iain Mathews. Lucus-kanade 20
years on: A unifying framework: Part 5. Technical Report - CMU-RI-TR-04-64,
2004. Robotics Institute, Carnegie Mellon University.
[11] A. M. Baumberg and D. C. Hogg. Generating spatio-temporal models from
examples. In Proc. British Machine Vision Conference, volume 2, pages 413–
422, 1995.
[12] S.H. Benedict, R.M. Chardinale, Q. Wu, R.D. Zwicker, M.A. Arnfield, and R. Mohan. Potential for intensity modulation in stereotactic radiosurgery. 45(Suppl.
1):187, 1999.
[13] M.-O. Berger, G. Winterfeldt, and J.-P. Lethor. Contour tracking in echocardiographic sequences without learning stage: application to the 3d reconstruction of
the 3d beating left ventricle. In Med. Image Computing and Computer-Assisted
Intervention (MICCAI99), Lecture Notes on Computer Science, Vol. 1679, pages
508–515, Cambridge, UK, 1999. Springer.
14
[14] A. Blake and M. Isard. Active Contours: The application of techniques from
graphics, vision, control theory and statistics to visual tracking of shapes and
motion. Springer Verlag, 1998.
[15] D.E. Boccuzzi, S. Kim, J. Pryor, A. Berenstein, J.A. Shih, S.T. Chiu-Tsao, and
L.B. Harrison. A dosimetric comparison of stereotactic radiosurgery using static
beams with a micro-multileaf collimator versus arcs for treatment of arteiovenous
malformations. 45(Suppl. 1), 1999.
[16] B.F. Buxton and H. Buxton. Monocular depth perception from optical flow by
space time signal processing. Proc. R. Soc. London, B, 218:27–47, 1983.
[17] K R Castleman. Digital Signal Processing. ISBN 0-13-211467-4. Prentice-Hall,
New Jersey, 1996.
[18] C.-S. Chui. The effects of intra-fraction organ motion on the delivery of intensitymodulated fields with a multileaf collimator. 26:1087, 1998.
[19] T. F. Cootes, C. J. Taylar, D. H. Cooper, and J. Graham. Active shape models
- their training and application. In Computer Vision and Image Understanding,
volume 61:1, pages 38–59, January 1995.
[20] D. Cunado, M.S. Nixon, and J.N. Carter. Automatic extraction and description
of human gait models for recognition purposes. Computer Vision and Image
Understanding, 90(1):1–41, 2003.
[21] E Davies. Machine Vision: Theory, Algorithms and Practicalities. Academic
Press, 1990.
[22] K. Eilertsen. Automatic detection of single MLC leaf positions with corrections
for penumbral effects and portal dose rate characteristics. 42:313–314, 1997.
[23] N. Ferrier, S. Rowe, and A. Blake. Real-time traffic monitoring. In Proc. 2nd.
IEEE Workshop on Applications of Computer Vision, pages 81–88, 1994.
[24] E.E. Fitchard, J.S. Aldridge, K. Ruchala, G. Fang, J. Balog, D.W. Pearson,
G.H. Olivera, E.A. Schlosser amd D. Wenman, J.P. Reckwerdt, and T.R. Mackie.
Registration using tomographic projection files. 44:495–507, 1999.
[25] W Frei. Image enhancement by histogram hyperbolization. 6:184, 1977.
[26] R Gonzalez and R Woods. Digital Image Processing. Addison-Wesley Publishing
Company, 1992.
[27] M. Haag and H.-H. Negal. Tracking of complex driving manoeuvres in traffic
image sequences. Image and Vision Computing, 16(8):517–527, June 1998.
[28] R Hamming. Digital Filters. Prentice-Hall, 1983.
[29] Charlotte L. Hector, Philip M. Evans, and Steve Webb. The dosimetric consequences of inter-fractional patient movement on three classes of intensitymodulated delivery techniques in breast radiotherapy. 59(3):281–291, June 2001.
15
[30] C.L. Hector, P.M. Evans, and S. Webb. The dosimetric consequences of patient
movement on three classes of intensity-modulated delivery techniques in breast
radiotherapy. In Proc. 13th. Int. Conf. on the use of Computers in Radiation
Therapy, pages 289–291, Heidelberg, May 2000.
[31] B J Heijmen, K L Pasma, M Kroonwijk, V G M Althof, J C J de Boer, A G
Visser, and H Huizenga. Portal dose measurement in radiotherapy using an
electronic portal imaging device (epid). 40:1943–1955, 1995.
[32] B Horn. Robot Vision. MIT Press, 1986.
[33] B.K.P. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence,
17:185–203, 1981.
[34] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional
density. In European Conference on Computer Vision, page 2430, 1996.
[35] G. Jacob, A. A. Noble, M. Mulet-Parada, and A. Blake. Evaluating a robust contour tracker on echocardiographic sequences. Medical Image Analysis, 3(1):63–75,
1999.
[36] A Jain. Fundamentals of Digital Image Processing. Prentice-Hall, 1986.
[37] H. James, S. Atherton, G. Budgell, M. Kirby, and P. Williams. Verification of dynamic multileaf collimation using an electronic portal imaging device. 51(Suppl.
1):S25, 1999.
[38] H. James, S. Atherton, G. Budgell, M. Kirby, and P. Williams. Verification of
dynamic multileaf collimation using an electronic portal imaging device. 45:495–
509, 2000.
[39] D. Jang, S. Jang, and H. Choi. 2d human body tracking with structural kalman
filter. 2002.
[40] P. KaewTraKulPong and R. Bowden. An improved adaptive background mixture
model for real-time tracking with shadow detection. In In Proc. 2nd European
Workshop on Advanced Video Based Surveillance Systems, AVBS01, Kingston,
UK., Sept. 2001.
[41] L S G Kovasznay and H M Joseph. Image processing. In Proc. IRE 43, pages
560–570, 1955.
[42] V. Krüger, S. Bruns, and G. Sommer. Efficient head pose estimation with gabor
wavelet networks. In British Machine Vision Conference, pages 12–14, 2000.
[43] H. Kubo. Review of respiritory gated radiotherapy. 25:A140, 1998.
[44] H. Kubo, E.G. Shapiro, and E.J. Seppi. Potential and role of a prototype amorphous silicon array electronic portal imaging device in breathing synchronized
radiotherapy. 26:2410–2414, 1999.
16
[45] J.H. Kung and G.T.Y. Chen. An analysis of dose perturbations from patient
misalignment in delivery of intensity modulated radiotherapy. 25:A204, 1998.
[46] M Kuwahara, K Hachimura, S Eiho, and M Kinoshita. Processing of RIangiocardiographic image. In Digital Processing of Biomedical Images, pages
187–202, Preston, K. and Onoe, M., Eds., Plenum, New York, 1976.
[47] J. Lebesque, R.K. Ten Haken, and J.W. Wong. Organ motion: Impact on conformal radiotherapy and methods of compensation. 42(Suppl.):121, 1998.
[48] J S Lee. Digital image enhancement and noise filtering by use of local statistics.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2:165, 1980.
[49] J S Lee. Refined filtering of image noise using local statistics. 15:380, 1981.
[50] Q. Li, S.V. Groezinger, T. Haberer, E. Rietzel, and G. Kraft. Online compensation for target motion with scanned partical beams:simulation envirnment.
49:3029–3046, 2004.
[51] Y. Li, S. Gong, and H. Liddell. Recognising trajectories of facial identities using
kernel discriminant analysis. Image and Vision Computing, 21(13):1077–1086,
December 2003.
[52] D. Löff, B.K. Lind, and A. Brahme. An adaptive control algorithm for optimization of intensity modulated radiotherapy considering uncertainties in beam
profiles, patient set-up and internal organ motion. 43:1605–1028, 1998.
[53] H.C. Longuet-Higgins and K. Prazdny. The interpretation of moving retinal
image. Proc. of the R. Soc. B, 208:385–387, 1980.
[54] D.A. Low, X.R. Zhu, and J.A. Purdy. The influence of angular misalignment on
fixed-portal intensity-modulated radiation therapy. 24:1123–1139, 1997.
[55] L.Torresani and C. Bregler. Space-time tracking. In European Conference on
Computer Vision, page 801812, 2002.
[56] Bruce D. Lucus and Takeo Kanade. An iterative image registration technique
with an application to stereo vision. In Proc. 7th Intl. Joint Conf on Artificial
Intelligence (IJCAI), pages 674–679, Vancouver, British Colombia, August 24-28
1981.
[57] Bruce D. Lucus and Takeo Kanade. An iterative image registration technique
with an application to stereo vision. In Proc. DARPA Image Understanding
Workshop, pages 121–130, April 1981.
[58] R.I. MacKay, P.A. Graham, C.J. Moore, J.P. Logue, and P.J. Sharrock. Animation and radiobiological anajysis of 3D motion in conformal radiotherapy.
52:43–49, 1999.
[59] G.S. Mageras. Respiratory motion-induced treatment uncertainties. 15:199, 1999.
17
[60] G.S. Mageras. Interventional strategies for reducing respiratory-induced motion
in external beam therapy. In Proc. 13th. Int. Conf. on the Use of Computers in
Radiation Therapy, pages 514–516, Heidelberg, May 2000.
[61] G.K. Matsopoulos, P.A. Asvestas, K.K. Delibasis, V. Kouloulias, N. Uzunoglu,
P. Karaiskos, and P. Sandilos. Registration of electronic portal images for potient
set up verification. 49:3279–3289, 2004.
[62] I.A. Matthews, T. Cootes, J.A. Bangham, S.J. Cox, and R. W. Harvey. Extraction of visual features for lipreading. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 24(2):198–213, 2002.
[63] J. McEachen and J. Duncan. Shaped-base tracking of left ventricular wall motion.
IEEE Transactions on Medical Imaging, 16:270–283, 1997.
[64] Tim McInerney and Dimiti Terzopoules. Deformable models in medical image
analysis: A survey. Medical Image Analysis, 1(2):91–108, 1996.
[65] A. Micilotta and R. Bowden. View-based location and tracking of body parts
for visual interaction. In In Proc. BMVC04, volume 2, pages 849–858, Kingston
UK., Sept. 2004.
[66] J. Montagnat, M. Sermesant, H. Delingette, G. Malandain, and N. Ayache.
Anisotropic filtering for model-based segmentation of 4D cylindrical images. Pattern Recognition Letters, 24:15–28, 2003.
[67] T Morris. Computer Vision and Image Processing. ISBN 0-333-99451-5. PALGRAVE MCMILLAN, New York, 2004.
[68] C. Nastar and N. Ayache. Frequency-based nonrigid motion analysis: application
to four dimensional medical images. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 18(11):1067–1079, 1996.
[69] Chahab Nastar and Nicholas Ayache. Fast segmentation, tracking, and analysis
of deformable objects. In Proceedings of the Fourth International Conference on
Computer Vision (ICCV’93), Berlin, May 1993. IEEE.
[70] X. Papademetris, A. J. Sinusas, D. P. Dione, and J. S. Duncan. 3D cardiac
deformation from ultrasound images. In Medical Image Computing and Computer
Aided Intervention (MICCAI), pages 420–429, Cambridge, England, September
1999.
[71] X. Papademetris, A. J. Sinusas, D. P. Dione, and J. S. Duncan. Estimation of
3D left ventricular deformation from echocardiography. Medical Image Analysis,
5(1):17–28, 2001.
[72] J. Park, D. Metaxas, and L. Axel. Volumetric deformable models with parameter
functions: a new approach to 3D motion analysis of the LV from MRI-SPAMM.
In Fifth Int. Conf. on Computer Vision, pages 700–705, 1995.
18
[73] M. Partridge, P.M. Evans, M.A. Mosleh-Shirazi, and D. Convery. Independent
verification using portal imaging of intensity-modulated beam delivery by the
dynamic MLC technique. 25:1872–1879, 1998.
[74] M. Partridge, J.R.N. Symonds-Tayler, and P.M. Evans. The use of electronic
portal imaging for verification of dynamic mlc beam delivery. In Proc. 16th. Int.
Workshop on Electronic Portal Imaging (EPI2k), page 58, Brussels, June 5–7
2000.
[75] M. Partridge, J.R.N. Symonds-Tayler, and P.M. Evans. Verification of dynamic
MLC beam delivery using electronic portal imaging. In Proc. 13th. Int. Workshop
on the Use of Computers in Radiation Therapy, pages 556–557, Heidelberg, May
2000.
[76] K.L. Pasma, M. Dirkx, M. Kroonwijk, A. Visser, and B. Heijmen. Dosimetric verification of intensity modulated fields producded with dynamic multileaf
collimation using an electronic portal imaging device EPID. 51(Suppl. 1):S33,
1999.
[77] K.L. Pasma, M.L.P. Dirkx, M. Kroonwijk, A. Visser, and B. Heijmen. Dosimetric
verification of intensity modulated fields produced with dynamic leaf collimation
using an electronic portal imaging device. 26:2373–2378, 1999.
[78] K.L. Pasma, M. Kroonwijk, E. van Dieren, A. Visser, and B. Heijmen. Simple and
accurate verification of compensator thickness using an electronic portal imaging
device EPID. 51(Suppl. 1):S25, 1999.
[79] Maria Petrou and Panagiota Bosdogianni. Image Processing: The fundamental.
ISBN 0-471-99883-4. Wiley, 1999.
[80] Maria Petrou and Panagiota Bosdogianni. Image processing: The fundamentals.
John Wiley and Sons, Chichester, UK, 1999.
[81] W K Pratt. Digital Image Processing. Wiley, New York, 1978.
[82] M Razaz, J A Bangham, R A Lee, and R W Harvey. Segmentation of optical
microscope microscope images. pages 184–188, 1997.
[83] M Razaz, D M P Hagyard, and R A Lee. A segmentation methodology for real
3d images. pages 269–276, 1998.
[84] M Razaz and D Kampmann-Hudson. A blind deconvolution algorithm for simultaneous image restoration and system characterisation. volume 2, pages 887–890,
1996.
[85] M Razaz and R A Lee. Comparison of an iterative deconvolution and wiener
filtering for image restoration. volume 1, pages 145–159, 1997.
[86] M Razaz, R A Lee, P S Belton, and K M Wright. Nmrida: A new algorithm for
deconvolution of nmr spectra and images. pages 3–5, 1998.
19
[87] M Razaz, N Nicholson, and R Lee. Reconstruction of 3d confocal microscopy
images with non-symmetric psf. pages 341–345, 2000.
[88] M Razaz and S Nicholson. A robust blind deconvolution method for noisy images.
volume 1, pages 442–446, 1999.
[89] M Razaz and S Nicholson. 3d blind image reconstruction using combined nonlinear and statistical techniques. volume 4, pages 932–935, 2000.
[90] R.Feris and V. Krueger. A wavelet subspace method for real-time face tracking.
Real-Time Imaging, 10:339350, 2004.
[91] P. Shi, G. Robinson, A. Chakraborty, L. Staib, R.T. Constable, A. Sinusas, and
J. Duncan. A unified framework to assess myocardial function from 4D images.
In Lecture Notes in Computer Science: First Int. Conf. on Computer Vision,
Virtual Reality, and Robotics in Medicine, pages 327–337, 1995.
[92] P. Shi, A.J. Sinusas, R.T. Constable, E. Ritman, and J.S. Duncan. Point-tracked
quantitative analysis of left ventricular motion from 3D image sequences. IEEE
Transactions on Medical Imaging, in-press.
[93] H. Shirato, T. SHimizu, H. Akita, N. Kurauchi, N. Shinohara, S. Ogura,
T. Harabayashi, H. Aoyama, and K. Miyasaka. Fluoroscope real-time tumour
tracking radiotherapy. 45(Suppl. 1):205, 1999.
[94] M Sonka, V Hlavac, and R Boyle. Image Processing, Analysis, and Machine
Vision. ISBN 0-534-95393-X. Brooks/Cole Publishing Company, CA USA, 1999.
[95] Catarina Sundgren. Estimation of patient setup errors in radiation therapy using
portal imaging. Technical report, RaySearch Laboratories, Sveavagen 25, SE-111
34 Stockholm, Sweden, 2004.
[96] D. Terzopoulos and D. Metaxas. Dynamic 3D models with local and global
deformation: deformable superquartics. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 13(17), 1991.
[97] L. Torresani, D. Yang, E. Alexander, and C. Bregler. Tracking and modeling nonrigid objects with rank constraints. Computer Vision and Pattern Recognition,
1:493500, 2001.
[98] A. van Esche, P. Sommer, P. Bogaerts, R. Rijinders, G.J. Kutcher, and
D. Huyskens. Dosimetric treatment verification for IMRT: Towards clinical implementation. In Proc. 16th. Int. Workshop on Electronic Portal Imaging (EPI2k),
Brussels, June 5–7 2000.
[99] David C. C. Wang, Anthony H. Vagnucci, and C. C. Li. Digital image enhancement: A survey. Computer Vision, Graphics and Image Processing, 24:363–381,
1983.
20
[100] G.A. Watson, D.D. Leavitt, M. Tobler, D.K. Gaffney, and F.A. Gibbs. Application of enhanced dynamic wedge to stereotactic radiotherapy. In Proc. 13th. Int.
Conf. on the Use of Computers in Radiation Therapy, pages 194–196, Heidleberg,
May 2000. Springer.
[101] Steve Webb. Intensity-Modulated Radiation Therapy. IOP Publishing, Bristol,
UK, 2001.
[102] P. Williams, G. Budgell, J. Mott, B. Perin, H. James, S. Atherton, and A. Hounsell. In vivo verification of clinical intensity-modulated radiotherapy delivered
via dynamic multileaf collimation. 26:1085, 1999.
[103] G. Winterfeldt, M. Berger, J. Lethor, and M. Handschuhmacher. Expert model
based 3D reconstruction of the left ventricle using transthorasic echographic images. In Computers in Cardiology, pages 89–99, 1997.
[104] J.W Wong, M.B. Sharpe, D.A. Jaffray, V.R. Kini, J.M. Robertson, J.S.
Stromberg, and A.A. Martinez. Interventional strategies to counter the effects
of inter-fraction treatment variation. In Proc. 13th Int. Conf. on the Use of
Computers in Radiation Therapy, pages 511–513, Heidelberg, May 2000.
[105] J.W Wong, D. Yan, D.A. Jaffray, G. Edmundsen, and A.A. Martinez. The use of
active breathing control (ABC) to reduce margin for breathing motion. 44:911–
919, 1999.
[106] L. Xing, Z.-X. Lin, S.S. Donaldson, Q.T. Lee, D. Tate, D.R. Goffinet, S. Wolden,
L. Ma, and A.L. Boyer. Dosimetric effects of patient displacement and collimator
gantry angle misplacement on intensity modulated radiation therapy. 56:97–108,
2000.
[107] J.N. Yang, T.R. Mackie, P. Reckwerdt, J.O. Deasy, and B.R. Thomadsen. An
investigation of tomotherapy beam delivery. 24:425–435, 1997.
[108] Cheng-Hao Yao and Shu-Yuan Chen. Retrieval of translated, rotated and scaled
color textures. Pattern Recognition, 36:913–929, 2003.
[109] C. Yu, D.A. Jaffray, A.A. Martinez, and J.W. Wong. Calculating the effects
of intra-treatment organ motion on dynamic intensity modulation. 39(Suppl.
2):164, 1997.
[110] C. Yu, D.A. Jaffray, and J.W. Wong. Calculating the effects of intra-treatment
organ motion on dynamic intensity modulation. In Proc. 12th. Int. Conf. on the
Use of Computers in Radiation Therapy, pages 231–233, Salt Lake City, Utah,
May 1997.
[111] C. Yu, D.A. Jaffray, and J.W. Wong. The effects of intra-fraction organ motion
on the delivery of dynamic intensity modulation. 43:91–104, 1998.
[112] A. Yuille and J. Coughlan. Fundamental limits of bayesian inference: Order
parameters and phase transitions for road tracking. 2000.
21
[113] Q. Zhang and A. Benviste. Wavelet networks. IEEE Trans. Neural Networks,
3:889898, 1992.
7
References Registration
1 J.P.W. Pluim, A. Maintz and M. Viergever, ”Mutual Information Based Registration of Medical Images. A Survey” IEEE Transactions on Medical Imaging
pp. 1-21, 2003.
2 P. Kostelec and S. Periaswamy, ”Image Registration for MRI” Modern Signal
Processing MSRI Publications vol. 46. pp. 161-184, 2003.
3 S. Soman, A. Chung and W. Grimson and S. Wells III ”Rigid Registration of
Echoplanar and Conventional Magnetic Resonance Images by Minimizing the
Kullback-Leibler Distance” Proceedings of 2nd International Workshop on Biomedical Image Registration vol. 1 , 2004.
4 A. Maintz and M. Viergever ”A Survey of Medical Image Registration”. Medical
Image Analysis vol 2(1), pp. 1 - 37, 1998.
5 T. Ernst, O. Speck. L. Itti and L. Chang. ”Simultaneous Correction for Interscan
Patient Motion and Geometric Distortions in Echoplanar Imaging” Magnetic
Resonance in Medicine vol. 42 pp. 201 - 205, 1999.
6 T. Rohlfing, R. Brandt, R. Menzel and C. Maurer, ”Segmentation of ThreeDimensional Images Using Non-Rigid Registration: Methods and Validation with
Application to Confocal Microscopy Images of Bee-Brains” SPIE Medical Image
2003: Image Processing pp. 1 - 12, 2003.
7 C. Sorzano, P. Thevenaz and M. Unser, ”Elastic Registration of Biological Images
Using Vector-Spline Regularization” IEEE Transactions on Biomedical Engineering 2004.
8 M. Held, W. Weiser and F. Wilhelmstötter, ”Fully Automatic Elastic Registration of MR Images with Statistical Feature Extraction” Journal of WSCG vol.
12 (1) pp. 2 - 6. 2004.
9 S. Periaswamy and H. Farid, ”Elastic Registration with Partial Data” Medical
Image Analysis 2004.
10 M. Miga, T. Sinha, D. Cash, R. Galloway and R. Weil, ”Cortical Surface Registration for Image-Guided Neurosurgery Using Laser-Range Scanning” IEEE
Transactions on Medical Imaging vol. 22 (8) pp. 973 - 985, 2003.
11 D. Russakoff, T. Rohlfing and C. Maurer ”Fast Intensity-Based 2D-3D Image
Registration of Clinical Data Using Light Fields” Proceedings of the 9th IEEE
International Conference on Computer Vision pp. 416 - 422, 2003.
22
12 D. Sarrut and S. Clippe, ”Fast DRR Generation for Intensity-Based 2D/3D Image
Registration in Radiotherapy” Tech Report Laboratoire d Informatique en Images
et Systemes d Information June 2003.
13 J. Kybic and M. Unser, ”Fast Parametric Elastic Image Registration” IEEE
Transaction on Image Processing vol 12(11) pp. 1427 - 1442, 2003.
14 G. Rohde, A. Aldroubi and B. Dawant, ”The Adaptive Bases Algorithm for
Intensity-Based Nonrigid Image Registration” IEEE Transactions on Medical
Imaging vol. 22 (11) pp.1470 - 1479, 2003.
15 P. Hellier and C. Barillot, ”Multimodal Non-Rigid Warping for Correction of
Distortions in Functional MRI” Proceedings of Medical Image Computing and
Computer-Assisted Intervention pp. 512 - 520, 2000.
16 T. Rohlfing, D. Russakof, M. Murphy and C. Maurer, ”An intensity-Based Registration for Probabilistic Images and Its Applications for 2D to 3D Image Registration” Proceedings of Medical Imaging 2002: Image Processing pp. 581 - 591,
2002.
17 B. Hamre. ”Three-dimensional Image Registration of Magnetic Resonance (MRI)
Head Volumes” Ph. D Thesis, Section for Medical Image Analysis and Informatics. Department of Physiology and Department of Informatics. University of
Bergen. Norway 1999.
18 J. Gluckman, ”Gradient Field Distributions for the Registration of Images” Proceedings of IEEE International Conference on Computer Vision, 2000.
23