Voxel Similarity Measures for Automated Image
Registration
Derek LG Hill, Colin Studholme & David J Hawkes
Radiological Sciences, UMDS, Guy's & St Thomas' Hospitals, London SE1 9RT, UK
D.Hill@umds.ac.uk, C.Studholme@umds.ac.uk, D.Hawkes@umds.ac.uk
ABSTRACT
We present the concept of the feature space sequence: 2D distributions of voxel features of two images generated at registration and a sequence of misregistrations. We provide an explanation of the structure seen in these images. Feature space
sequences have been generated for a pair of MR image volumes identical apart from the addition of Gaussian noise to one, MR
image volumes with and without Gadolinium enhancement, MR and PET-FDG image volumes and MR and CT image volumes, all of the head. The structure seen in the feature space sequences was used to devise two new measures of similarity
which in turn were used to produce plots of cost versus misregistration for the 6 degrees of freedom of rigid body motion. One
of these, the third order moment of the feature space histogram, was used to register the MR image volumes with and without
Gadolinium enhancement. These techniques have the potential for registration accuracy to within a small fraction of a voxel or
resolution element and therefore interpolation errors in image transformation can be the dominant source of error in subtracted
images. We present a method for removing these errors using sinc interpolation and show how interpolation errors can be
reduced by over two orders of magnitude.
1. INTRODUCTION
Important clinical applications of multimodal imaging have emerged in functional neuro-imaging, surgical planning, tumour
localisation and treatment assessment1,2. Our objective is the development of clinically useable automatic 3D medical image
registration algorithms. The approach to meet this objective which is described in this paper is motivated by the observation
that medical images of the same region of a patient acquired with different imaging modalities are usually recognisably similar,
even to a non-expert. For example MR and CT images of many parts of the body contain broadly the same features, but they
appear with different intensities and texture.
Where such images are being combined, image registration can be accomplished by identifying a small number of equivalent
features such as points or surfaces in the images1-6. An alternative approach is to perform registration using all, or at least a
large number, of the voxels in the images rather than a small number of derived features. If achievable this might provide methods more robust to noise in the image data and less prone to truncation effects due to partially overlapping imaged volumes.
The basis of this approach is the assumption that some arithmetic combination of voxel values in two images, when applied to
each image voxel in turn, provides a similarity measure that has an optimum value when the images are aligned. The best
known algorithm of this type is cross correlation. We present new work on a methodology for devising improved similarity
measures.
1.1. Accurate registration using voxel intensity values
The cross correlation of two functions is frequently used in signal processing and image processing as a measure of how well
two functions A and B match when they are transformed with respect to each other. The transformation T that provides the best
match between the functions has the highest correlation value.
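As a concrete illustration, cross correlation can be sketched as below. This is a 2D NumPy sketch only: the algorithms discussed in this paper operate on 3D volumes, and the search over transformations T is omitted.

```python
import numpy as np

def normalised_cross_correlation(a, b):
    """Normalised cross correlation of two equally sized images.

    Returns a value in [-1, 1]; a higher value indicates a better match,
    so the transformation T maximising it gives the best alignment.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# An image correlates perfectly with itself, and less well with a shifted copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(normalised_cross_correlation(img, img))                     # ~1.0
print(normalised_cross_correlation(img, np.roll(img, 3, axis=1)))
```

In a registration algorithm this value would be recomputed for each candidate transformation of one image with respect to the other.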
Roger Woods7,8 has proposed an algorithm that is related to cross correlation, but incorporates an important modification. His
algorithm is based on an idealised assumption that when two images are accurately aligned the value of any voxel in one image
is related to the value of the corresponding voxel in the other image by a multiplicative factor R. In other words, for all voxels
of intensity a and b, in images A and B respectively, a/b = R. When A and B are acquired from the same patient using the same
modality at different times, there will be a single value of R for all intensity values. When A and B are acquired from the same
patient using different modalities, there might be a different value of R for each intensity value in either image. The Woods
algorithm has been developed specifically for registration of multiple neurological PET images from the same patient, and for
0-8194-1698-3/94/$6.00
SPIE Vol. 2359 / 205
registration of neurological PET images to MRI images of the same patient. The latter requires delineation of the outer surface
of the brain in MR to exclude all but the intracranial structures. We have previously shown how this technique can be modified
in order to automatically register MR and CT images of the head without the need for prior segmentation, provided there is sufficient axial sampling9. The success of these techniques for solving specific registration problems encouraged us to devise a
methodology for studying this and other related voxel similarity measures.
1.2. Interpolation schemes for accurate image transformation
Registration using voxel similarity measures has the potential for very high registration accuracy due to the noise averaging
properties of the algorithms used. Registration accuracy better than image resolution or voxel dimensions is feasible. This is
particularly true for intramodality image registration where very high accuracy is required in emerging clinical applications.
These include subtraction of pre and post contrast spiral CT images to produce spiral CT angiograms, subtraction of pre and
post Gadolinium MR images for improved delineation of enhancing lesions, monitoring disease progression and functional
MR imaging, where resting state brain images are subtracted from images acquired whilst a cognitive task is being undertaken.
For all these applications it cannot be assumed that the subject remains still between concurrent image acquisitions, nor that the
patient can be repositioned accurately for subsequent studies. Small movements (considerably less than a voxel) can lead to
edge artefacts in the subtracted images that might be misleading. Therefore accurate image registration is required. If this is
achieved conventional image transformation using nearest neighbour or linear interpolation can introduce significant errors.
A consideration of sampling theory can be useful in overcoming these interpolation artefacts. The k-space image representation of a correctly sampled band limited function can be recovered from the discrete Fourier transform by multiplication with a
cuboidal window. The corresponding spatial domain operation is a convolution with the Fourier transform of the cuboidal window, which is a 3D sinc function. Intermediate sampling points in a band limited image can therefore be recovered in the spatial domain using a sinc function with zero crossing separation equal to the voxel dimensions.
Medical images are invariably truncated in all three dimensions, with the consequence that they are not truly band limited.
Consequently, sinc interpolation does not correctly transform the images and introduces artefacts such as shading. Extrapolation of the image prior to interpolation can enormously reduce these truncation artefacts. The extrapolation can be done by
adding an average or "DC" intensity around the periphery of the image, or by data replication.
The number of voxel neighbours used for the interpolation has a significant impact on the time taken to do the transformation.
Therefore the sinc function has to be truncated after a certain number of zero crossings in order to transform the images in a
reasonable length of time. The function used to truncate the sinc function affects the residual interpolation artefacts.
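A minimal 1D sketch of such a truncated, windowed sinc interpolant is given below, here truncated after 12 zero crossings with a Welch window as in section 2.3. The paper's implementation is 3D; this sketch only illustrates the principle.

```python
import numpy as np

def windowed_sinc_kernel(offsets, zeros=12):
    """Sinc truncated after `zeros` zero crossings, tapered by a Welch window."""
    w = 1.0 - (offsets / zeros) ** 2         # Welch (parabolic) window
    w[np.abs(offsets) >= zeros] = 0.0        # truncate outside the window
    return np.sinc(offsets) * w

def sinc_interpolate(samples, x, zeros=12):
    """Estimate a band limited signal at fractional sample position x."""
    k = windowed_sinc_kernel(x - np.arange(len(samples)), zeros)
    return np.sum(samples * k) / np.sum(k)   # renormalise the truncated kernel

# Recover an intermediate sample of a band limited signal.
t = np.arange(64)
signal = np.sin(2 * np.pi * t / 16.0)
estimate = sinc_interpolate(signal, 20.5)
exact = np.sin(2 * np.pi * 20.5 / 16.0)
print(abs(estimate - exact))                 # small interpolation error
```

The zero crossing separation equals the sample spacing, so the kernel passes exactly through the original samples at integer positions.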
2. METHODS
2.1. Feature space sequences and their characteristics
A possible way of examining the effect of misregistration useful for devising appropriate voxel similarity measures is to construct 2D distributions or feature spaces of image intensities (or other voxel features) of two images at registration and for a
sequence of known misregistrations for each degree of freedom. We term these feature space sequences. A feature space of two
images is a two dimensional histogram of the value of a numerical image feature from one image plotted against the corresponding value from the other image. The feature space may be constructed from all voxels where there is correspondence (i.e.
overlap) between the two images. The image feature is usually voxel intensity, but could also be a derived value such as the
output of a gradient operator or texture analysis. A feature space sequence can be calculated for each degree of freedom of the
rigid body transformation. It provides a way of studying the change in the appearance of a feature space with misregistration.
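A feature space of this kind can be sketched as a 2D histogram. The example below reproduces, in miniature, the two-region model analysed next; the image values and bin counts are illustrative choices, not the paper's.

```python
import numpy as np

def feature_space(img_a, img_b, bins=32, vmax=256):
    """2D histogram of corresponding voxel values from two overlapping images."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                             bins=bins, range=[[0, vmax], [0, vmax]])
    return h

# Two-region test images: a vertical band (u <= i <= v) on a background.
a = np.full((64, 64), 50.0);  a[:, 20:40] = 200.0   # intensities a1, a2
b = np.full((64, 64), 20.0);  b[:, 20:40] = 180.0   # intensities b1, b2

registered = feature_space(a, b)
shifted = feature_space(a, np.roll(b, 3, axis=1))   # misregister by 3 pixels

# At registration only (a1,b1) and (a2,b2) are occupied; misregistration
# adds signals at (a1,b2) and (a2,b1), dispersing the feature space.
print(np.count_nonzero(registered), np.count_nonzero(shifted))  # 2 4
```

Repeating the shifted computation over a range of known misregistrations yields one feature space sequence.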
To understand the structure seen in feature space sequences it is informative to calculate the expected appearance using a simple model. Consider a tomographic slice consisting of two regions, a vertical band of uniform intensity surrounded by a uniform background intensity. The image intensities from imaging modality A, a_ij, are given by:-

a_ij = a2 for u <= i <= v, else a_ij = a1

and, similarly assuming identical voxel sizes for imaging modality B, image intensities b_ij are given by:-

b_ij = b2 for u <= i <= v, else b_ij = b1

where u and v are the bounds of the vertical band in the x direction, 0 <= i < x_max and 0 <= j < y_max.
A feature space of this image will consist of two signals at coordinates (a1, b1) and (a2, b2) of intensities equal to the number of voxels within each region, with zero elsewhere. With misregistration in the x direction by t pixels two other signals will appear at (a1, b2) and (a2, b1) of intensity t.y_max. With the addition of noise in each modality the feature space image will be convolved with the noise distribution functions of images A and B in the a and b axes respectively. With image blurring by the impulse response function I_A of imaging modality A and I_B of imaging modality B, image intensities a_ij and b_ij are given, in the noise free case, by:-

a_ij = sum_r I_A(r) a_(i-r)j   and   b_ij = sum_r I_B(r) b_(i-r)j
The feature space with image blurring will consist of the original two high signals at (a1, b1) and (a2, b2) connected by an arc, with the parametric equation in p for the coordinates (a, b) of the arc given by:-

a = a1 + I_A(u - p)(a2 - a1)

and

b = b1 + I_B(u - p)(b2 - b1).

When the blurring functions are the same in the two modalities this parametric equation describes a straight line with:-

(a - a1)/(a2 - a1) = (b - b1)/(b2 - b1)   for b1 <= b <= b2.
The intensity at each point, p, along this arc in feature space is proportional to the sum of the products of the inverses of the impulse response functions, ((I_A(u-p).I_B(u-p))^-1 + (I_A(v-p).I_B(v-p))^-1), and the inverse of the arc length.
At misregistration by a distance t in the x direction the parametric equation of the arc becomes:-

a = a1 + I_A(u - p)(a2 - a1)   and   b = b1 + I_B(u - p - t)(b2 - b1).

This equation generates an arc between points (a1, b1) and (a2, b2) similar in shape to the hysteresis curve of magnetisation of ferromagnetic material. The greater the misregistration the greater will be the area enclosed by the curve, until, in the limit that the widths of I_A and I_B tend to zero, the arc will describe the rectangle (a1, b1) (a1, b2) (a2, b2) (a2, b1). The area between the two curves is a function of misregistration, contrast between the two regions and the impulse response functions of the two imaging modalities.
In a real image the feature space will consist of a superimposition of a number of points connected by arcs with intensity proportional to image blurring. Image noise will blur the feature space and misregistration will generate hysteresis type curves
from the connecting arcs. The misregistration of each boundary element along its surface normal will determine the area
enclosed by the corresponding hysteresis curve in feature space. For convoluted structures surface orientations will be widely
distributed and the feature space for a particular misregistration will be a superimposition of multiple hysteresis type curves,
but the envelope of these curves will correspond to the maximum misregistration perpendicular to the surface.
We have generated feature space sequences from various combinations of MR, CT and PET images of the head. The first feature space was generated from two MR images (T1 gradient echo, voxel size 0.859mm x 0.859mm x 0.9mm) that differ only
by the addition of Gaussian noise with a standard deviation similar to that of the air signal. Feature spaces were also generated
from images of patients scanned at different times or with different modalities that had been pre-registered using our interactive point landmark system3. These included: pre and post Gadolinium MR images (T1 weighted, voxel sizes 0.898mm x
0.898mm x 2.0mm pre Gadolinium and 0.898mm x 0.898mm x 1.0mm post Gadolinium); MR images (T1 weighted, voxel
size 0.918mm x 0.918mm x 2.5mm) and CT images (voxel size 0.488mm x 0.488mm x 3.0mm); and MR images (T1
weighted, voxel size 0.859mm x 0.859mm x 2.5mm) and PET-FDG images (voxel sizes 2.0mm x 2.0mm x 3.75mm); all of the
head.
Prior to the calculation of the feature spaces, each pair of images was blurred so that they both have the same resolution. For
the work presented here all images were filtered with a Hanning window to give a full width half maximum of the impulse
response function of 10mm. This will sharpen structure seen in the feature space sequences by reducing the blurring effect of
image noise. In addition filtering to the same resolution in each modality will produce symmetric structure in the feature
spaces. The feature spaces shown are produced using translation in units of a single pixel, but we are able to produce these
using arbitrary transformations in all six degrees of freedom.
2.2. Similarity measures
A number of measures of the feature space dispersion described above could be envisaged. We have examined 4 such measures:
1. Simple correlation;
2. The variance of intensity ratio of one modality to the second, computed for each intensity of the second and averaged (Woods' algorithm8);
3. As above but restricted to predefined bands of intensity, corresponding to rectangular regions of the feature space;
4. The third order moment of the intensity histogram of the feature space.
The third measure will give a low signal at registration for regions of the feature space which include the arcs connecting two
adjacent regions in the original images. The hysteresis effects outlined above will increase this variance. The histogram of the
feature space contains information about the distribution of feature space intensities. It is weighted towards high values if a
large number of feature space pixels have high intensity, and it is weighted towards low values if a large number of feature
space pixels have low intensity. The higher order moments of the histogram measure this distribution. We used the third order
moment of the feature space histogram, but the choice of this rather than any other moment of order 2 or above was arbitrary.
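Measures 2 and 4 can be sketched as follows. This is an illustrative NumPy reading of the measures, not the authors' implementation; the binning choices and image values are assumptions.

```python
import numpy as np

def variance_of_intensity_ratios(a, b, nbins=16):
    """Woods-style measure: for each intensity bin of image A, the normalised
    standard deviation of the corresponding B intensities, averaged over
    voxels. Lower is better; zero when B is a function of A."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    edges = np.linspace(a.min(), a.max() + 1e-9, nbins + 1)
    idx = np.clip(np.digitize(a, edges) - 1, 0, nbins - 1)
    cost, total = 0.0, 0
    for k in range(nbins):
        sel = b[idx == k]
        if sel.size > 1 and sel.mean() != 0:
            cost += sel.size * sel.std() / abs(sel.mean())
            total += sel.size
    return cost / max(total, 1)

def third_moment_of_fs_histogram(a, b, bins=32, vmax=256):
    """Third order moment of the distribution of feature space intensities."""
    fs, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                              bins=bins, range=[[0, vmax], [0, vmax]])
    counts = fs.ravel()
    return np.mean((counts - counts.mean()) ** 3)

# On two-region test images, misregistration raises the Woods cost from zero.
a = np.full((64, 64), 50.0);  a[:, 20:40] = 200.0
b = np.full((64, 64), 20.0);  b[:, 20:40] = 180.0
print(variance_of_intensity_ratios(a, b))                      # 0.0
print(variance_of_intensity_ratios(a, np.roll(b, 3, axis=1)))  # > 0
```

At registration each intensity band of A maps to a single intensity of B, so the ratio variance vanishes; mixing caused by misregistration makes it grow.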
Using registered reference images, the similarity measure is evaluated for the images at registration, and when misregistered
by known transformations in each of the degrees of freedom of the desired registration transformation. For rigid body registration, this provides a series of six one dimensional curves, each of which is a plot of similarity measure value against misregistration for a single degree of freedom. We term the resulting graphs similarity measure plots. The similarity measures are
converted to cost functions, so an ideal cost function has a minimum value at registration, and is a monotonically increasing
function of misregistration. It must be emphasised that these similarity measure plots do not sample all of the parameter space,
and there may well be many local minima of cost that are not visible in the similarity measure plots.
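A similarity measure plot for a single degree of freedom can be sketched by sweeping a known misregistration and evaluating a cost at each step. The cost used below, the number of occupied feature space bins, is a toy dispersion proxy chosen only to make the sweep concrete.

```python
import numpy as np

def dispersion_cost(a, b, bins=32, vmax=256):
    """Toy cost: number of occupied feature space bins (a dispersion proxy)."""
    fs, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                              bins=bins, range=[[0, vmax], [0, vmax]])
    return int(np.count_nonzero(fs))

# Two-region test images (registered copies).
a = np.full((64, 64), 50.0);  a[:, 20:40] = 200.0
b = np.full((64, 64), 20.0);  b[:, 20:40] = 180.0

# Sweep x translation from -5 to +5 pixels: one similarity measure plot.
shifts = list(range(-5, 6))
plot = [dispersion_cost(a, np.roll(b, s, axis=1)) for s in shifts]
best = shifts[int(np.argmin(plot))]
print(plot, best)   # minimum cost at zero misregistration
```

For rigid body registration this sweep would be repeated for each of the six degrees of freedom to give the six curves described above.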
These techniques have been applied to 2 copies (and therefore registered versions) of the same MR dataset in which one copy
had Gaussian noise added (sigma equal to 6 standard deviations of the original MR image noise) and 3 sets of images preregistered using our interactive point based registration technique3 and assessed visually to ensure good registration. These 3 sets
were a T1 weighted MR volume with and without Gadolinium enhancement, MR and PET-FDG image volumes and MR and
CT image volumes. All images were of the head, but each pair of image volumes was from a different patient.
2.3. Image transformation using sinc interpolation
The consequences of interpolation artefacts can easily be demonstrated by transforming an image and then transforming the
transformed image back using the inverse transformation, and subtracting the original image from the twice transformed one.
This was carried out using an arbitrary transformation through all 6 degrees of freedom using nearest neighbour and trilinear
interpolation on an example MR image volume of the head.
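The double transformation test can be sketched in 2D with plain NumPy, comparing nearest neighbour and bilinear inverse mapping; the paper's experiment uses a full 6 degree of freedom 3D transformation, and the rotation angle here is arbitrary.

```python
import numpy as np

def rotate(img, deg, order):
    """Rotate about the image centre; order 0 = nearest neighbour, 1 = bilinear."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(deg)
    yy, xx = np.mgrid[0:h, 0:w]
    ys = cy + (yy - cy) * np.cos(th) - (xx - cx) * np.sin(th)   # inverse map
    xs = cx + (yy - cy) * np.sin(th) + (xx - cx) * np.cos(th)
    if order == 0:
        return img[np.clip(np.round(ys).astype(int), 0, h - 1),
                   np.clip(np.round(xs).astype(int), 0, w - 1)]
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy, fx = ys - y0, xs - x0
    return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0] * fy * (1 - fx) + img[y0 + 1, x0 + 1] * fy * fx)

# Smooth test image; rotate forward and back, subtract, compare interiors.
y, x = np.mgrid[0:64, 0:64]
img = np.sin(x / 5.0) + np.cos(y / 7.0)
err = {o: rotate(rotate(img, 7.0, o), -7.0, o) - img for o in (0, 1)}
mse = {o: np.mean(err[o][16:-16, 16:-16] ** 2) for o in (0, 1)}
print(mse[0], mse[1])   # nearest neighbour error exceeds bilinear
```

The residual image `err` corresponds to the difference images of Fig. 12, and the mean square values to the style of Table 1.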
To demonstrate the reduction in artefact that can be achieved using sinc interpolation, the same double transformation was carried out using a 3D sinc function truncated after 12 zero crossings using a Welch window. The data was extrapolated using
periodic replication of the image. In the slice plane, where the image object is surrounded by air, the image was simply replicated repeatedly, so that the pixel ordering becomes:
(..., n-1, n | 1, 2, 3, ..., n-1, n | 1, 2, 3, ...)

In the direction perpendicular to the slice plane, the object is not surrounded by air, so the truncation artefacts are more serious. In this direction, the extrapolated slice ordering was:

(..., 3, 2, 1 | 1, 2, 3, ..., n-1, n | n, n-1, ...)
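These two extrapolation schemes correspond, along a single axis, to periodic and mirrored padding, e.g. with NumPy's `np.pad`:

```python
import numpy as np

row = np.arange(1, 6)                    # pixels 1..n with n = 5

# In the slice plane: periodic replication of the image.
print(np.pad(row, 3, mode='wrap'))       # [3 4 5 1 2 3 4 5 1 2 3]

# Perpendicular to the slice plane: mirrored replication of the slices.
print(np.pad(row, 3, mode='symmetric'))  # [3 2 1 1 2 3 4 5 5 4 3]
```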
3. RESULTS
3.1. Feature space sequences

Figure 1. A feature space sequence generated from two similar MR images (original intensity on horizontal axis and added noise on vertical axis) registered (left) and translated laterally by 2 pixels (centre), and 5 pixels (right).

Figure 2. A feature space sequence generated from pre- (horizontal axis) and post- (vertical axis) Gadolinium MR images, registered (left) and translated laterally by 2 pixels (centre), and 5 pixels (right). The echo times, and hence contrast, were slightly different in the two MR images.
A feature space sequence from the two copies of the same MR image volume is shown in Fig. 1. Feature space sequences for the MR image volumes with and without Gadolinium are shown in Fig. 2, for the MR and PET-FDG image volumes in Fig. 3 and for the MR and CT image volumes in Fig. 4. These feature space sequences are of very different overall appearance but they share common characteristics predicted in section 2.1:
1. Diagonal features, corresponding to blurring between adjacent regions in the image volumes, disperse with misregistration.
2. This dispersion is accompanied by hysteresis type patterns which increase in area as misregistration increases until in the
limit horizontal and vertical lines appear.
3. The hysteresis type patterns show clear bounds, presumably caused by the maximum misregistration along surface normals.
4. Except at the origin, the brightest feature space pixels decrease in intensity with misregistration.
5. The number of low intensity feature space pixels increases with misregistration.
Figure 3. A feature space sequence generated from MR (horizontal axis) and PET (vertical axis) images registered (left) and translated laterally by 2 pixels (centre), and 5 pixels (right).

Figure 4. A feature space sequence generated from CT (horizontal axis) and MR (vertical axis) images registered (left) and translated laterally by 2 pixels (centre), and 5 pixels (right).
Fig. 5 shows the cost function with misregistration generated from the variance of intensity ratios for the PET-FDG and MR image volumes (PET-FDG intensity divided by MR intensity). The functions are reasonably well behaved, with monotonically increasing cost with misregistration except for x and z rotations. Also, for x and y translations the cost function starts to decrease once misregistration exceeds about 5mm, which is probably related to the resolution distance of the image volumes. Fig. 6 shows that, by selecting a predefined band of intensities to perform the analysis, the cost functions for x and z rotations are considerably improved. Fig. 7 provides similarly well behaved cost functions for the MR and CT image volumes, where the variance of intensity ratios (MR intensity divided by CT intensity) was computed over a range of intensities corresponding to fat and the skin surface. Fig. 8 shows that, as expected, conventional correlation provides a rather poor basis for computing cost for MR and CT registration. Similar results for correlation were obtained for the PET-FDG and MR image volumes. Fig. 9 shows the intensity regions for the MR PET-FDG and MR CT feature spaces that were used to compute the cost functions in Figs. 6 and 7, respectively, together with labels for different tissues on the MR PET-FDG feature space.
Fig. 10 shows the cost functions for the pre and post Gadolinium MR image volumes. These images were acquired according
to the normal routine clinical protocol. The patient was removed from the MR scanner for injection of contrast between acquisitions, and the voxel dimensions were different in the two images. These plots demonstrate that the cost increases almost
monotonically with misregistration in each of the degrees of freedom. These image volumes have been successfully registered
automatically using the third order moment of the feature space histogram as a cost function with optimisation using a genetic
algorithm10 with a population size of 100 and 30 generations, at two scales. Fig. 11 shows the subtracted image generated from
these registered images.
Figure 5. Cost function calculated using the variance of intensity ratios for the MR and PET-FDG image volumes using the full range of intensities. Horizontal axis: misregistration, translation (mm) or rotation (deg.), from -10.0 to 10.0.
Figure 6. Cost function calculated using the variance of intensity ratios for the MR and PET image volumes using a limited range of intensities corresponding to the outer surface of the brain and skull. Horizontal axis: misregistration, translation (mm) or rotation (deg.), from -10.0 to 10.0.
Figure 7. Cost functions calculated using the variance of intensity ratios, for the MR and CT image volumes using a limited range of low CT intensities corresponding to the skin surface and fat. Curves: x, y, z translation and x, y, z rotation; horizontal axis: misregistration, translation (mm) or rotation (deg.), from -10.0 to 10.0.
Figure 8. Correlation of MR and CT image volumes as a function of misregistration. Cost in this case will be the inverse of the correlation value. Curves: x, y, z translation and x, y, z rotation; horizontal axis: misregistration, translation (mm) or rotation (deg.), from -10.0 to 10.0.
Figure 9. Intensity ranges defined for computation of variance of intensity ratios for the MR PET-FDG feature space (left) and the MR CT feature space (right). Different tissue regions (e.g. skull, air) are labelled on the MR PET-FDG feature space.
Figure 10. Cost function calculated using the 3rd order moment of the feature space histogram, for two T1 weighted MR images (pre and post Gadolinium). Horizontal axis: misregistration, translation (mm) or rotation (deg.), from -30 to 30.
3.2. Image transformation using sinc interpolation
Fig. 12 shows a slice from the original image volume, and example slices from the difference images produced by subtracting
the original image from the twice transformed one using nearest neighbour, linear and sinc interpolation. The intensity range in
the original image is 0 - 370. The subtracted images are displayed with an intensity range of -25 to +25. The mean square difference between the original image and the twice transformed image provides a quantitative indication of the interpolation
errors clearly visible in this figure. These values are given in Table 1.
Figure 11. Single slice from an image volume produced by subtracting the pre and post Gadolinium MR volumes after registration and transformation.
Table 1: Mean square difference values for interpolation artifacts.

    Interpolation algorithm                  Mean square difference
    Nearest Neighbour                        883.6
    Trilinear (8 neighbour)                  96.4
    Sinc interpolation (12 zero crossings)   1.05
4. DISCUSSION
We have presented the concept of a feature space sequence, a series of 2D distributions of voxel features from two image volumes generated at registration and known misregistrations. We have analysed the type of structure visible in these feature
spaces and described how this analysis might help to devise appropriate measures of misregistration, or similarity measures,
which might be used in an optimisation process for automatic registration. Cost has been computed in 4 example pairs of
image volumes using 4 such measures: image correlation; the variance of intensity ratios of one modality to the other averaged
over each intensity of the latter; the same restricted to certain bands of image intensity; and the third order moment of the feature space histogram. As expected correlation of intensity values is of little use in multi-modal registration while the last two
measures show promise at providing an appropriate cost function for automated registration. The third order moment of the
feature space histogram was successfully used for the automatic registration of MR images pre and post injection of Gadolinium. This is an important application of image registration, as subtraction of these images can provide important clinical information11, and patients normally move between the acquisition of these sequences. It is an example of a large class of image
registration problems, in which a time series of images need to be compared. Obvious examples include monitoring disease
progression (e.g. plaque volume in patients with multiple sclerosis), and correcting for movement artifacts in functional
MR12. In these applications it is essential to use accurate interpolation in the transformation process. We have shown how sinc
interpolation can be used to compute accurate, if computationally expensive, image transformations once the correct transformation has been computed. Work is in progress to test the registration process on a large number of registered datasets from a
wide range of modalities in different parts of the body.
Figure 12. Interpolation artefacts generated by transforming the image volume, transforming it back to its original position, and subtracting the original image from the twice transformed one. Top left: slice from original image. Bottom left: nearest neighbour interpolation. Top right: linear interpolation. Bottom right: interpolation using a sinc function truncated after 12 zero crossings using a Welch window.
The diagonal features in the feature space images provide strong cues for registration. These cues only exist when registration of surface features is within the resolution distance of each image. For misregistrations beyond this distance there is no cue to the degree of misregistration and therefore, unlike methods based on the chamfer distance5,6, there is no indication of the appropriate direction of search. The solution might be to blur the images to a greater extent when starting estimates are poor and to sharpen them progressively as the registration process proceeds. We are investigating such a multiscale approach. Alternatively, first and higher order gradient operators might also provide suitable features from which to construct feature space sequences and devise appropriate similarity measures. Such operators might be tuned to particular types of features such as ridges or troughs as proposed by Van den Elsen13, or the multiscale medial axis transform as proposed by Morse et al.14
Truncation and only partially overlapping datasets cause difficulties for all automated registration algorithms. The feature
space approach makes use of, in principle, all corresponding (overlapping) voxels in the two datasets and will therefore be less
prone to truncation effects unless starting estimates are so poor that edge artefacts interfere with the generation of sufficiently
coarse scale images.
Our proposed techniques are generic in that they can be applied to many different combinations of multimodal 3D images in a
wide range of applications. More work is necessary to test appropriate similarity measures and to devise algorithms which efficiently converge to the global minimum in the 6 dimensional search space of the rigid body transformation. This technique will
also find applications in image guided intervention where interventional imaging devices such as ultrasound and MR could be
used to update the pre-operative representation. We are exploring ways in which the matching process might be extended to
provide multiple local matches in order to track deformations of soft tissue structures.
5. ACKNOWLEDGEMENTS
This work was funded by UK EPSRC Grant GR/H48170. We are grateful for the support and encouragement of our clinical
colleagues in this work, in particular Mr. Michael Gleeson (ENT Surgeon), Mr. Anthony Strong (Neurosurgeon), Dr. Tim Cox
(Neuroradiologist), Dr. Wai-Lup Wong (Radiologist) and Dr. Alan Colchester (Neurologist), and for the technical assistance of
the Radiographic staff of Guy's, St. Thomas', The Maudsley and St. George's Hospitals in London.
6. REFERENCES
1. Hill DLG, Hawkes DJ, Gleeson MJ, Cox TCS, Strong AJ, Wong W-L, Ruff CF, Kitchen ND, Thomas DGT, Sofat A, Crossman JE, Studholme C, Gandhe A, Green SEM, Robinson GP. "Accurate frameless registration of MR and CT images of the head: Applications in Planning Surgery and Radiotherapy". Radiology 191:447-454. 1994
2. Evans AC, Marrett S, Torrescorzo J, Ku S, Collins L. "MRI-PET Correlation in Three Dimensions Using a Volume-of-Interest (VOI) Atlas". J Cereb Blood Flow Metab 11:A69-A78. 1991
3. Hill DLG, Hawkes DJ, Crossman JE, Gleeson MJ, Cox TCS, Bracey EECML, Strong AJ, Graves P. "Registration of MR and CT images for skull base surgery using point-like anatomical features". Br J Radiology 64:1030-1035. 1991
4. Pelizzari CA, Chen GTY, Spelbring DR, Weichselbaum RR, Chen C-T. "Accurate three dimensional registration of CT, PET and/or MR images of the brain". J Comput Assist Tomogr 13:20-26. 1989
5. Jiang H, Robb RA, Holton KS. "New approach to 3-D registration of multimodality medical images by surface matching". In: Visualisation in Biomedical Computing, Proc Soc Photo-opt Instrum Eng 1808:196-213. 1992
6. Hill DLG, Hawkes DJ. "Medical image registration using knowledge of adjacency of anatomical structures". Image and Vision Computing 12:173-178. 1994
7. Woods RP, Cherry SR, Mazziotta JC. "A rapid automated algorithm for accurately aligning and reslicing PET images". J Comput Assist Tomogr 16:620-633. 1992
8. Woods RP, Mazziotta JC, Cherry SR. "MRI-PET registration with automated algorithm". J Comput Assist Tomogr 17:536-546. 1993
9. Hill DLG, Hawkes DJ, Harrison N, Ruff CF. "A strategy for automated multimodality registration incorporating anatomical knowledge and imager characteristics". In: Barrett HH, Gmitro AF, eds. Information Processing in Medical Imaging IPMI '93. Lecture Notes in Computer Science 687, Springer-Verlag, Berlin. pp 182-196. 1993
10. Goldberg DE. "Genetic algorithms in search, optimisation and machine learning". Addison Wesley, Mass., USA. 1989
11. Lloyd GAS, Barker PG, Phelps PD. "Subtraction gadolinium enhanced magnetic resonance for head and neck imaging". Br J Radiology 66:12-16. 1993
12. Hajnal JV, Myers R, Oatridge A, Schwieso JE, Young IR, Bydder GM. "Artifacts due to stimulus correlated motion in functional imaging of the brain". Mag Reson Med 31:283-291. 1994
13. Van den Elsen PA. "Multimodality matching of brain images". Utrecht University Ph.D. Thesis. 1993
14. Morse BS, Pizer SM, Liu A. "Multiscale medial analysis of medical images". In: Barrett HH, Gmitro AF, eds. Information Processing in Medical Imaging IPMI '93. Lecture Notes in Computer Science 687, Springer-Verlag, Berlin. pp 112-131. 1993