Linköping University Medical Dissertations, No. 1237
Computer-Assisted Coronary
CT Angiography Analysis
From Software Development to Clinical
Application
Chunliang Wang
Division of Radiological Sciences
and
Center for Medical Image Science and Visualization
Department of Medical and Health Sciences
Linköping University, Sweden
Linköping 2011
© Chunliang Wang, 2011
Cover picture: A comparison of data presentation using the conventional method and our
software: left column, image from catheter angiography; middle column,
conventional 2D visualization of coronary CTA; right column, 3D visualization of
coronary CTA segmented with our software.
Published articles have been reprinted with the permission of the copyright holder.
Printed in Linköping, Sweden, 2011
ISBN 978-91-7393-191-5
ISSN 0345-0082
ABSTRACT
Advances in coronary Computed Tomography Angiography (CTA) have resulted in a
boost in the use of this new technique in recent years, creating a challenge for radiologists
due to the increasing number of exams and the large amount of data for each patient. The
main goal of this study was to develop a computer tool to facilitate coronary CTA analysis
by combining knowledge of medicine and image processing, and to evaluate the
performance in clinical settings.
Firstly, a competing fuzzy connectedness tree algorithm was developed to segment the
coronary arteries and extract centerlines for each branch. The new algorithm, which is an
extension of the “virtual contrast injection” (VC) method, preserves the low-density soft
tissue around the artery, and thus reduces the possibility of introducing false positive
stenoses during segmentation. Visually reasonable results were obtained in clinical cases.
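The competing-growth idea can be illustrated with a minimal sketch (a simplified toy version for intuition, not the thesis implementation): several seed classes grow simultaneously over an image, and each pixel is claimed by the class that offers the strongest fuzzy connectedness, taken here as the weakest link affinity along the best growth path.

```python
import heapq
import numpy as np

def competing_fuzzy_connectedness(image, seeds):
    """Label each pixel with the seed class offering the strongest
    fuzzy connectedness (maximum over paths of the minimum link
    affinity). `seeds` maps label -> list of (row, col) seed points."""
    strength = np.zeros(image.shape)            # best connectedness so far
    label = np.full(image.shape, -1, int)       # winning seed class
    heap = []
    for lab, pts in seeds.items():
        for p in pts:
            strength[p] = 1.0
            label[p] = lab
            heapq.heappush(heap, (-1.0, p, lab))
    drange = float(image.max() - image.min()) or 1.0
    while heap:
        neg_s, (r, c), lab = heapq.heappop(heap)
        s = -neg_s
        if s < strength[r, c] or label[r, c] != lab:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]):
                continue
            # link affinity: high when neighbor intensities are similar
            aff = 1.0 - abs(float(image[r, c]) - float(image[nr, nc])) / drange
            new_s = min(s, aff)
            if new_s > strength[nr, nc]:
                strength[nr, nc] = new_s
                label[nr, nc] = lab
                heapq.heappush(heap, (-new_s, (nr, nc), lab))
    return label

# Toy 2-D "image": a low-density region (10) next to a contrast-filled one (50)
img = np.array([[10, 10, 10, 50, 50, 50],
                [10, 10, 10, 50, 50, 50]])
labels = competing_fuzzy_connectedness(img, {0: [(0, 0)], 1: [(0, 5)]})
```

In the toy case each class floods its own intensity plateau, because the affinity across the 10/50 boundary is low; this is the mechanism by which competing growth can keep low-density soft tissue separate from the contrast-filled lumen.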
Secondly, this algorithm was implemented in open source software in which multiple
visualization techniques were integrated into an intuitive user interface to facilitate user
interaction and provide good overviews of the processing results. An automatic seeding
method was introduced into the interactive segmentation workflow to eliminate the need
for user initialization during post-processing. In 42 clinical cases, all main
arteries and more than 85% of visible branches were identified, and testing the centerline
extraction in a reference database gave results in good agreement with the gold standard.
Thirdly, the diagnostic accuracy of coronary CTA using the segmented 3D data from
the VC method was evaluated on 30 clinical coronary CTA datasets and compared with the
conventional reading method and a different 3D reading method, region growing (RG),
from commercial software. Catheter angiography was used as the reference method. The
percentage of evaluable arteries, accuracy and negative predictive value (NPV) for
detecting stenosis were, respectively, 86%, 74% and 93% for the conventional method,
83%, 71% and 92% for VC, and 64%, 56% and 93% for RG. Accuracy was significantly
lower for the RG method than for the other two methods (p<0.01), whereas there was no
significant difference in accuracy between the VC method and the conventional method (p
= 0.22).
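These figures follow from standard confusion-matrix definitions. The sketch below, with hypothetical counts chosen purely for illustration (not the study's actual data), shows how accuracy, NPV, PPV and sensitivity are computed.

```python
def stenosis_metrics(tp, fp, tn, fn):
    """Per-segment metrics for stenosis detection against a reference:
    accuracy   -- overall agreement with the reference method,
    npv        -- probability that a segment read as normal is truly normal,
    ppv        -- probability that a reported stenosis is real,
    sensitivity -- fraction of true stenoses that were detected."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "npv": tn / (tn + fn),
        "ppv": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
    }

# Hypothetical counts for illustration only
m = stenosis_metrics(tp=20, fp=10, tn=60, fn=5)
```

A high NPV with moderate accuracy, as reported above, is exactly the pattern produced when false negatives (fn) are few relative to true negatives (tn), even if false positives are common.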
Furthermore, we developed a fast, level set-based algorithm for vessel segmentation,
which is 10-20 times faster than the conventional methods without losing segmentation
accuracy. It enables quantitative stenosis analysis at interactive speed.
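As a generic illustration of level-set front propagation (a plain explicit upwind scheme, not the accelerated algorithm of the thesis), a front can be grown by repeatedly subtracting dt times speed times the upwind gradient magnitude from the level-set function phi; the region phi < 0 then expands where the speed is positive.

```python
import numpy as np

def evolve_level_set(phi, speed, dt=0.4, steps=5):
    """Explicit upwind update phi <- phi - dt * speed * |grad phi|,
    valid for non-negative speed (outward front motion)."""
    for _ in range(steps):
        dxm = phi - np.roll(phi, 1, axis=1)    # backward difference, x
        dxp = np.roll(phi, -1, axis=1) - phi   # forward difference, x
        dym = phi - np.roll(phi, 1, axis=0)    # backward difference, y
        dyp = np.roll(phi, -1, axis=0) - phi   # forward difference, y
        # upwind gradient magnitude for an expanding front
        grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2 +
                       np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
        phi = phi - dt * speed * grad
    return phi

# Signed distance to a circle of radius 2 on a 15 x 15 grid
y, x = np.mgrid[0:15, 0:15]
phi0 = np.sqrt((x - 7.0) ** 2 + (y - 7.0) ** 2) - 2.0
phi1 = evolve_level_set(phi0, speed=1.0, dt=0.4, steps=5)  # front grows ~2 px
```

The cost of such schemes is dominated by updating phi over the whole grid at every iteration; restricting and accelerating these updates is precisely where large speed-ups are available.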
In conclusion, the presented software provides fast and automatic coronary artery
segmentation and visualization. The NPV of using only segmented 3D data is as good as
using conventional 2D viewing techniques, which suggests that segmented 3D data could
serve as an initial reading step, with access to 2D reviewing techniques for suspected lesions and cases with
heavy calcification. Combining the 3D visualization of segmentation data with the clinical
workflow could shorten reading time.
ACKNOWLEDGEMENTS
I would like to express my warmest gratitude to all of my colleagues and the people who
helped me throughout my PhD study. Special thanks to the following:
My supervisor, Örjan Smedby, who invited me to Sweden, and more importantly
introduced me to this “mathemedical” field that is filled with challenges and excitement.
I thank him for his understanding, support, trust and patience over the last two years. He
has been an incredible mentor and friend to me. Without his constant help and support, I
cannot imagine I could have started my first journey as a scientific researcher so smoothly.
My co-supervisors, Anders Persson and Hans Frimmel, for their invaluable scientific
support and frequent inspiration. Without this help, this thesis could never have been
written.
My cooperators and co-authors, Jan Engvall, Jakob De Geer, Sven-Göran Fransson,
Anders Björkholm, Waldemar Czekierda, Ebo De Muinck, Maria Engström and Helene
Zachrisson for participating in the clinical study and for their scientific feedback.
Prof. Osman Ratib, Dr Antoine Rosset and Joris Heuberger for developing the OsiriX
software and making it open source.
Nils Dahlstrom, Filipe Miguel Maria Marreiros, Håkan Gustafsson, Olof Dahlqvist
Leinhard, Petter Quick, Maria Kvist, Johan Kihlberg, Annika Hall, Anders Tisell and
Ingela Allert for providing a warm and open atmosphere in CMIV. I did enjoy the
interesting discussions about research, life and culture in the coffee room.
Last but not least, I would like to thank my parents and my wife, who have always had
faith in me and encouraged me to pursue a career that I love.
This research was funded by the Swedish Heart-Lung Foundation (Hjärtlungfonden).
LIST OF PAPERS
This thesis is based on the following original papers, which are referred to in the text by
Roman numerals:
I: Wang C, Smedby Ö. Coronary artery segmentation and skeletonization based on
competing fuzzy connectedness tree. Med Image Comput Comput Assist Interv 2007;
10:311-318.
II: Wang C, Frimmel H, Persson A, Smedby Ö. An interactive software module for
visualizing coronary arteries in CT angiography. International Journal of Computer
Assisted Radiology and Surgery 2008; 3:11-18.
III: Wang C, Smedby Ö. Integrating automatic and interactive method for coronary artery
segmentation: let the PACS workstation think ahead. International Journal of Computer
Assisted Radiology and Surgery 2010; 5:275-285.
IV: Wang C, Persson A, Engvall J, De Geer J, Fransson SG, Björkholm A, Czekierda W,
Smedby Ö. Can segmented 3D images be used for stenosis evaluation in coronary CT
angiography? Submitted for publication, 2011.
V: Wang C, Frimmel H, Smedby Ö. Level-set based vessel segmentation accelerated with
periodic monotonic speed function. SPIE Medical Imaging: Image Processing 2011;
79621M-1–79621M-7.
ABBREVIATIONS
CA      Coronary Angiography
CAD     Coronary Artery Disease
CCTA    Coronary Computed Tomography Angiography
CPR     Curved Plane Reformatting
CTA     Computed Tomography Angiography
ECG     Electrocardiogram
IVUS    Intravascular Ultrasound
LAD     Left Anterior Descending Artery
LCX     Left Circumflex Artery
MDCT    Multidetector Helical CT
MIP     Maximum Intensity Projection
MPR     Multiplanar Reformatting
NPV     Negative Predictive Value
PPV     Positive Predictive Value
RCA     Right Coronary Artery
RG      Region Growing
VC      Virtual Contrast Injection
VRT     Volume Rendering Technique
CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF PAPERS
ABBREVIATIONS
CONTENTS
1. Introduction
2. Background
   2.1. Coronary Artery Disease and Diagnostic Methods
   2.2. Coronary CTA
      2.2.1. The development of Coronary CTA
      2.2.2. Advantages of Coronary CTA
      2.2.3. Limitations of Coronary CTA
   2.3. Image Visualization and Post-processing for Coronary CTA Analysis
   2.4. Vessel Segmentation/tracking methods – a brief review
      2.4.1. Class I: Methods using non-vascular-specific knowledge
      2.4.2. Class II: Methods using vascular-specific knowledge
3. Aims
4. Summary of the Papers
   4.1. Algorithm Design
   4.2. Software Development
   4.3. Clinical Evaluation
   4.4. Optimization of Level Set-based 3D Segmentation
5. Discussion
   5.1. The Clinical Role of Coronary CTA and Its Future
   5.2. Stenosis evaluation in 3D
   5.3. Software development from a radiologist’s perspective
   5.4. Disease-Centered Software Development
   5.5. Limitations and Future Work
6. Conclusion
References
1. INTRODUCTION
Despite worldwide efforts to investigate and control cardiovascular risk factors, coronary
artery disease (CAD) currently remains the primary cause of death worldwide, particularly
among Western nations [1]. Approximately one in five deaths is currently related to
cardiac disease in Europe and the US. Nearly 500,000 deaths caused by CAD are reported
every year in the US, and over 600,000 in Europe [2]. The lifetime risk of developing CAD
after 40 years of age is 49% for men and 32% for women [3]. In Sweden, although the
age-standardized mortality of myocardial infarction (MI, an acute manifestation of CAD)
decreased from 1987 to 2004 by an average of 3.5% per year, and the age-standardized MI
incidence from 1987 to 2000 decreased by 1-2% per year [4], CAD is still the most
common cause of death, and the case fatality of MI is still high. During 2008, about 17,000
out of 91,000 (18%) total deaths were caused by CAD-related ischemic cardiac disease [5].
It is expected that from 1990 to 2020, the global burden of cardiovascular disease will rise
by 55% in developing countries. The highest rise is foreseen in India and China [6]. These
alarming statistics highlight an acute need for tools to diagnose cardiac and coronary artery
disease. Presently, the gold-standard modality for diagnosis of CAD is invasive selective
coronary angiography (CA). The greatest advantage of this method is that interventions
can be performed immediately after the lesion has been located by X-ray. More than 2.5
million diagnostic coronary angiograms are performed every year in Europe and the US,
but only about 40% of them are followed by subsequent interventional treatment [7].
Moreover, a recent study has questioned the usefulness of interventional treatment in non-acute cases [8]. These data show the significant need for and importance of reliable non-invasive imaging for early and preventive diagnosis of CAD and other cardiac diseases.
Since the introduction of contrast-enhanced CT angiography (CTA), it has been
established as a reliable and widely used non-invasive imaging modality for vascular
diagnosis. Early in the 1980s, researchers started to dream about performing CTA on
coronary arteries [9]. It was only after the introduction of the helical CT, especially the
multidetector helical CT (MDCT), that coronary CTA became a more realistic possibility.
Although at first there were several major limitations of this new technique, such as high
X-ray exposure and low temporal resolution, requiring a heart rate below 70 beats per
minute for diagnostic images (which can be obtained by administering a beta-blocker)
[10], most of these have been overcome or at least attenuated by improvements in the
imaging technique. With current state-of-the-art CT scanners, motion-free images can be
acquired from most patients without any heart rate control, and techniques have been
developed to reduce the X-ray exposure to 0.87 ± 0.07 mSv [11], which is less than
the average annual background radiation (about 3 mSv) experienced by each individual.
Thanks to these dramatic improvements in scanning techniques and some obvious benefits
of CT, such as the low cost, shorter acquisition time and non-invasive nature, the
acceptance of coronary CTA has continuously proceeded over the last five years. It is
generally believed that in the near future, the use of coronary CTA may replace a
substantial proportion of CA examinations, especially for assessing the degree of stenosis
and patency of grafts [12].
However, unlike CA, the information supplied by the coronary CTA is distributed in
hundreds of transverse images. Radiologists and cardiologists still largely depend on
viewing original slices, oblique multiplanar reformatting (MPR) and curved plane
reformatting (CPR) images, sometimes complemented by a thin-slab maximum intensity
projection (MIP) image. Evaluating the coronary artery in such a large stack of time-resolved images is rather time-consuming. Taking into account the increasing number of
examinations per day, an important goal for medical image science is to find efficient and
accurate ways of viewing large numbers of images. Encouraged by this clinical
requirement, we have been devoting our knowledge and enthusiasm to developing
coronary CTA processing software from a radiologist’s perspective to facilitate the
diagnosis procedure and improve the accuracy of the stenosis assessment in coronary CTA
data. In this disease-centered study, we have attempted to combine our experience from
medical practice and our understanding of image processing techniques to build a “Swiss
army knife” for coronary CTA analysis. Thanks to generous help from clinical and
technical colleagues, an open-source software module for coronary CTA post-processing
was developed. New functionalities have been added, so far including rib cage removal,
tracing of the ascending aorta, coronary artery segmentation with centerline tracking,
and quantitative 2D cross-section measurement, and the quality and performance of the
software have been consistently improved over the last four years.
This thesis will report studies describing and evaluating this software from a technical as
well as a clinical point of view.
2. BACKGROUND
2.1. Coronary Artery Disease and Diagnostic Methods
CAD occurs when the coronary arteries supplying blood to the myocardium become
hardened and narrow, which is usually caused by the buildup of atherosclerotic plaques on
their inner walls (Figure 1). Plaques are made up of fat, cholesterol, calcium, and other
substances found in the blood. Despite the buildup, most individuals with coronary artery
disease show no evidence of disease for decades. Eventually, the stenosis caused by the
plaques severely reduces the blood flow through the arteries and the first onset of
symptoms occurs. A common symptom is chest pain known as angina, indicating that the
heart muscle cannot get the blood or oxygen it needs. The resulting ischemia, i.e. oxygen
shortage, if left untreated for a sufficient period, can cause irreversible damage and/or
infarction of the myocardium. After decades of progression, some of the atherosclerotic
plaques may rupture and trigger a thrombus that suddenly cuts off the heart’s blood supply,
causing permanent heart damage. If this happens in a main branch, it often leads to sudden
death.
Figure 1. Acute myocardial infarction caused by atherosclerotic plaque.
Over time, CAD can also weaken the heart muscle and contribute to heart failure and
arrhythmias. Heart failure means that the heart is unable to pump blood well to the rest of
the body. Arrhythmias are changes in the normal beating rhythm of the heart.
Diagnosis of CAD is a relatively complicated procedure. Many tests are available for
this purpose. The choice of these tests and how many to perform depends on the patient’s
risk factors, history of heart problems, and current symptoms. Usually the tests begin with
the simplest and may progress to more complicated ones. Several commonly used
diagnostic techniques are listed below.
Electrocardiogram (ECG): An electrocardiogram records electrical signals as they
travel through the heart. When the heart muscle is damaged (reversibly or irreversibly), the
electrical signals passed will also be affected, and this in turn causes changes in the ECG
pattern. An ECG can thus often reveal evidence of a previous heart attack or one in
progress. However, since a routine 12-lead ECG usually targets the left ventricle, an MI in
the right ventricle or posterior basal wall can be overlooked [13]. If the coronary
insufficiency is not very severe, a resting ECG could appear normal as the damage to the
myocardium is only temporary, and sometimes the ECG pattern can be very complicated
and uninterpretable. The degree of inter-observer variation can be significant in these cases
[14]. However, as the ECG is a widely accepted, non-invasive and convenient procedure, it
is still one of the most important diagnostic tools for CAD.
Biochemical tests: After myocardial necrosis, certain biochemical markers, such as
cardiac troponin I or T, will be released into the patient’s blood. A blood test can reveal an
elevation of such markers that starts 2-4 hours after onset of symptoms. Troponin testing in
primary care has been shown to be helpful in the triage of chest pain patients [15]. However, in
some situations, such as unstable angina, myocardial ischemia is not associated with an
elevated level of cardiac troponin. Further, there are several reasons for cardiac troponin
elevation in the absence of ischemic heart disease [16].
Echocardiography: An echocardiogram is a sonogram of the heart. Also known as a
cardiac ultrasound, it uses standard ultrasound techniques to image 2D slices of the heart.
The latest ultrasound systems now employ 3D real-time imaging [17]. Since the spatial
resolution of echocardiography is not sufficient to evaluate coronary arteries directly, the
method can only be used in an indirect manner, like the other diagnostic methods
mentioned above. During echocardiography, the examiner can determine whether all parts
of the heart wall are contributing normally to the heart’s pumping activity. Parts with
impaired motility may have been damaged by a myocardial infarction or may be receiving
too little oxygen. This may indicate CAD or various other conditions.
Stress test: In patients showing normal results from ECG or echocardiography, but
signs and symptoms mostly after exercise, an alternative is to let the patient walk on a
treadmill or ride a stationary bike during an ECG, known as an exercise stress test [18]. In
other cases, medication to stimulate the patient’s heart may be used instead of exercise.
Some stress tests are done using an echocardiogram. Another type of stress test, known as
a nuclear stress test, measures blood flow to the myocardium at rest and during stress. It is
similar to a routine exercise stress test, but with images in addition to the ECG. Using
single photon emission computed tomography (SPECT), myocardial perfusion imaging
can be performed by tracing small amounts of radioactive material injected into the
patient’s circulation system to reveal areas that receive inadequate blood flow [19].
Coronary angiography: CA is a minimally invasive procedure to access the coronary
circulation [20]. A radio-contrast agent is injected into the coronary arteries through a long,
thin, flexible tube (catheter) that is inserted through an artery, usually in the leg, and
pushed up to the heart. X-ray images are then taken while the contrast agent is flushed
through the coronary tree, and the presence and extent of a stenosis can be directly judged
from these images. This contrasts with the other methods summarized above which rely on
indirect phenomena caused by a stenosis. In complex cases, intravascular ultrasound
(IVUS) [20] can be used to closely inspect the atherosclerotic plaque burden, using a
specially designed catheter with a miniaturized ultrasound probe attached to the distal end.
One great advantage of CA is that, if narrow parts or blockages are revealed during the
procedure, a balloon can be pushed through the catheter and inflated to remove the
stenosis. A stent may then be used to keep the dilated artery open [20]. Despite several
well-known drawbacks, such as high expense, various complications, and absence of direct
plaque evaluation (unless IVUS is used), CA is currently the “gold standard” diagnostic
technique for CAD, due to its ultra-high spatial and temporal resolution and the possibility
to simultaneously perform interventional treatment.
Coronary CTA: Coronary CTA, also known as cardiac CTA, is a non-invasive
technique that can directly capture 3D images of a beating heart using a CT scanner.
During coronary CTA, the patient will receive a contrast agent injected intravenously
through the arm, and when the contrast agent arrives in the heart, CT images are acquired
continuously or triggered by ECG signals until the whole heart is covered. Acquired
images can then be registered together using the recorded ECG to show a “frozen” image
of the heart at a certain phase of the cardiac cycle. Compared to CA, coronary CTA can not
only determine the severity of blockages, but can also directly visualize the atherosclerotic
plaque deposited in the vessel wall. It can identify the early stages of soft (fatty and
fibrous) plaque formation even before the stenosis caused by the plaque can be visualized
on X-ray angiography images [21]. It also visualizes calcified plaque, which occurs in
more chronic coronary artery disease. Besides coronary arteries, the structure and function
of other parts of the heart, such as the myocardium and the valves, can also be evaluated
with coronary CTA. This technique is currently undergoing rapid development. A more
detailed review of the coronary CTA technique will be given in the next chapter.
Magnetic resonance imaging (MRI): Cardiac MRI is
often combined with an injected contrast medium to check for areas of narrowing or for
blockages. Although direct imaging of coronary arteries is possible with MRI, the limited
temporal and spatial resolution remains a drawback of this technique. The strengths of
magnetic resonance cardiovascular imaging, compared to CT, include superb definition of
tissue characteristics, perfusion, valvular function, absence of ionizing radiation, and lack
of need for potentially nephrotoxic contrast media. Limited temporal and spatial resolution,
partial volume artifacts (due to slice thickness limitations), reliance on multiple breath-holds, and poor visualization of the left main coronary artery [22] all reduce the clinical
applicability of MR coronary angiography.
2.2. Coronary CTA
2.2.1. The development of Coronary CTA
Since its introduction by G. Hounsfield in 1972, the CT scan has become a reliable and
widely used non-invasive imaging modality for various diagnostic usages. The first
attempts to image the heart were in the very early days of CT, in the 1970s [23]. However,
due to the rapid motion of the heart and relatively long acquisition times (more than 10
seconds per slice) of early equipment, only large pathological lesions such as tumors along
the surface of the heart could be detected.
In the early 1980s, electron beam computed tomography (EBCT), the so-called “Ultrafast
CT”, was introduced [9]. With non-mechanical control and movement of the X-ray source,
a fixed detector system and ECG-correlated sequential scanning, EBCT enabled extremely
short image acquisition times that could virtually freeze cardiac motion. However, the
limited application spectrum of EBCT in general purpose use, the high cost of acquisition,
and very limited industry support have restricted distribution of the technology. Although
the concept of coronary calcium evaluation has been established since 1989 [24], and non-invasive coronary angiographic imaging with EBCT has been reported since 1995 [25],
these applications did not gain widespread appeal until studies with the MDCT became
available. The first sub-second single-slice scanner appeared in the late 1980s with the
introduction of the “slip ring” technique, which allows continuous rotation of detectors and
an X-ray source around the patient. The preliminary studies with single-slice spiral CT in
the early 1990s had very limited cardiac applications and significant motion artifacts. It
became possible to visualize the coronary arteries but not with sufficient reliability to
diagnose blockages.
During the 1990s there were rapid advancements in detectors, X-ray tube generators,
circuitry, and computers. Together, these allowed the development of multi-row CT
scanners. In 1998, mechanical multi-slice CT systems with simultaneous acquisition of
four slices were introduced by all major CT manufacturers. For the first time, these
scanners enabled ECG-correlated multi-slice acquisition at considerably faster volume
coverage and higher spatial and temporal resolution for cardiac applications compared to
single-slice scanners. Then 16-row, 64-row and now 320-row CT scanners became
available commercially with the speed of image acquisition and volume coverage
continuing to increase rapidly with each new generation of CT. The current state-of-the-art
dual-source 64-slice CT scanners can achieve a temporal resolution of < 100 ms at all heart
rates. In a dual-source CT system, two X-ray tubes and two corresponding detectors are
mounted on the rotating gantry with an angular offset of 90°. Thus a complete data set of
180° of parallel-beam projections can be generated from two 90° data sets (“quarter-scan
segments”) that are simultaneously acquired by the two independent measurement systems.
While the scanner hardware has evolved, the image reconstruction techniques have also
improved in the recent decades. With the initial CT systems of the 1970s, researchers tried
to use a prospective triggering method, also known as “step-and-shoot”, to capture the
beating heart. The tube was turned off after acquisition of a single axial slice, and the
patient table incremented to the next slice position, where scanning was triggered to match
specific cardiac phases. Despite gradual improvements in tube rotation time, these single-slice systems were too slow to image mobile organs.
Now, after the implementation of two key technical advances, spiral scanning and
multi-slice technology, data can be acquired throughout the entire cardiac cycle during
simultaneous recording of the ECG signal. Subsequently, data from specific periods of the
cardiac cycle (most commonly late diastole) are reconstructed by retrospective referencing
to the ECG signal. This technique is known as retrospective ECG gating. Since data are
acquired throughout the cardiac cycle, spiral imaging allows reconstruction from multiple
cardiac phases into cine-loops, which is required for functional assessment. However, an
obvious drawback is the continuous X-ray exposure during the entire cardiac cycle. Based
on the consideration of patient radiation dose, a dose modulation technique has been
introduced to reduce the tube current outside the selected phase. Most recently, a new
developed “step and shot” protocol for the MDCT has successfully reduced the mean
radiation dose to 2.1±0.6mSv (range 1.1–3.0 mSV) [26]. Besides ECG triggering
techniques, a few other advanced image reconstruction techniques, such as half-scan
reconstruction and multi-segment reconstruction, have also been developed to improve the
temporal resolution further.
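The phase-selection step of retrospective gating can be sketched as follows; the timing values are made up for illustration, and a real reconstruction pipeline is considerably more involved. Given the recorded R-peak times, each projection is assigned a relative cardiac phase, and only projections inside the requested phase window are kept for reconstruction.

```python
import bisect

def select_projections(proj_times, r_peaks, phase_window=(0.70, 0.80)):
    """Keep projection timestamps (same unit as r_peaks) whose relative
    cardiac phase -- fractional position within the local R-R interval --
    falls inside phase_window (e.g. 70-80% of R-R, late diastole)."""
    selected = []
    for t in proj_times:
        i = bisect.bisect_right(r_peaks, t)
        if i == 0 or i == len(r_peaks):
            continue  # before the first or after the last recorded R peak
        rr_start, rr_end = r_peaks[i - 1], r_peaks[i]
        phase = (t - rr_start) / (rr_end - rr_start)
        if phase_window[0] <= phase <= phase_window[1]:
            selected.append(t)
    return selected

# Hypothetical timing (ms): regular 1000-ms R-R intervals,
# one projection every 10 ms over three heartbeats
r_peaks = [0, 1000, 2000, 3000]
proj_times = list(range(0, 3000, 10))
diastolic = select_projections(proj_times, r_peaks)
```

Because only a narrow phase window per heartbeat is used, continuous exposure over the whole cycle wastes dose, which is what the dose modulation and prospective protocols described above address.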
Figure 2. (a) Sequential volume coverage with prospective ECG-triggered single-slice scanning
and (b) coverage with retrospective ECG-gated single-slice spiral scanning. Mechanical CT with
prospective ECG-triggering can acquire one slice during every second heartbeat, with a 500-ms
acquisition time. With retrospective ECG-gating, images can be reconstructed at every
heartbeat, with a temporal resolution equal to half the rotation time. Figure adapted from [43].
The introduction of the dual-source CT scanner pushed the temporal resolution higher.
By using two X-ray tubes and two detectors arranged at an angle of 90°, the scanner can
acquire the X-ray data for one cross-sectional image in just one-quarter rotation of the
gantry instead of a half-circle rotation. This effectively doubles the temporal resolution
when compared with a single-source CT at the same rotation speed. It also allows ECG-triggered spiral data acquisition with very high pitch values (3.0 or more) and acquisition
of the entire volumetric data set of the heart within a single cardiac cycle. This
significantly reduces radiation exposure since no slice overlap is used. In fact, appropriate
image acquisition parameters may allow a dose below 1 mSv for coronary CTA [11].
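The temporal-resolution gain can be checked with simple arithmetic (an illustrative model only, with a rotation time chosen as an example): half-scan reconstruction needs 180° of projections per image, so a single source needs half a rotation, while two sources offset by 90° each contribute a quarter-scan segment.

```python
def temporal_resolution_ms(rotation_time_ms, n_sources):
    """Approximate temporal resolution of half-scan reconstruction:
    one source must cover 180 deg (half rotation); two sources offset
    by 90 deg each cover 90 deg (two quarter-scan segments)."""
    if n_sources == 1:
        return rotation_time_ms / 2
    if n_sources == 2:
        return rotation_time_ms / 4
    raise ValueError("only 1 or 2 sources modeled")

# Example: a 330 ms gantry rotation
single = temporal_resolution_ms(330, 1)  # half-scan
dual = temporal_resolution_ms(330, 2)    # two quarter-scan segments
```

With a 330 ms rotation, the dual-source figure drops below 100 ms, consistent with the temporal resolution quoted for dual-source scanners earlier in this chapter.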
2.2.2. Advantages of Coronary CTA
Coronary CTA provides a quick and non-invasive diagnostic technique for CAD. The
technological advances that have occurred in CT have been directed towards non-invasive
coronary angiography. Many clinical studies have proved that the ability of modern
coronary CTA to detect significant CAD (stenosis with more than 50% diameter reduction)
is very close to CA [12][27]. Although it might not be able to totally replace coronary
angiography (CA) for diagnosis and assessment of CAD, its high sensitivity for patient-based detection of CAD and high negative predictive value suggest its ability to rule out
significant CAD. There are several widely recognized advantages that make coronary CTA
preferable to invasive CA for a selected patient spectrum [12][27][28][29][30].
Non-invasive: CA is an invasive procedure that might cause some complications for the
patients. Although the risk of severe complications such as death is relatively low, around
0.1-0.2% [31], the combined risk of all major complications such as MI, stroke, renal
failure, or major bleeding is around 2% [32]. Minor complications such as local pain,
ecchymosis, or hematoma at the catheterization site can be even more frequent [32].
Coronary CTA, on the other hand, is a non-invasive diagnostic technique. Although the
possibility of allergy and nephrotoxicity still exists, the total risk of complications is much
lower than for CA [33][34].
Time- and cost-efficient: Thanks to the advanced imaging techniques, performing a
coronary CTA exam is currently much less complicated than invasive CA. The cost of
coronary CTA is a small fraction of the cost of a diagnostic catheterization in most countries. The
high sensitivity of the 64-slice CT avoids the costs of unnecessary CA in those patients
referred for investigation who do not have CAD. Although diagnostic strategies involving
the 64-slice CT will still require invasive CA for CT test positives to identify CT false
positives, several studies have proved the cost efficiency of coronary CT for rapid
disposition of the low risk population in an emergency department [30][35]. If the
death rate associated with unnecessary CA, although small, is considered, the use of the
64-slice CT may also result in a small and immediate survival advantage in the presenting
population.
Three-dimensional modality: Unlike CA, coronary CTA is a three-dimensional
modality and is not limited to any particular two-dimensional projections/slice orientation.
This allows assessment of structures in any desired plane or angle, and offers volumetric
information on vessel stenosis and other structures such as cardiac chambers. Although
there is still no evidence suggesting that coronary CTA is more accurate at evaluating
stenosis than CA, the possibility should be kept in mind.
Plaque imaging: Diagnostic CA (without IVUS) only gives the images of the
contrasted-filled lumen, which can only be used for estimating the stenosis caused by
plaques. On the other hand, with the extraordinary contrast resolution of CT, physicians
can closely investigate the composition of plaques and perform quantitative measurements
on them [36].
“Triple Rule Out” for chest pain diagnosis: Besides the coronary arteries, specially
designed coronary CTA protocols with a wider field of view (FOV) can simultaneously
visualize the pulmonary and systemic arteries of the chest, thereby excluding two other
important causes of chest pain: pulmonary embolism and aortic dissection. This is known
as a “triple rule out” study [37]. CT images acquired with this protocol can also give
accurate information on other structures in the chest, such as lung and bony tissue, which
cannot be otherwise visualized by other coronary artery modalities.
Four-dimensional modality for function analysis: The image reconstruction in
coronary CTA has been optimized for coronary artery visualization. However, with ECG-gated spiral acquisition, image data are available for any phase of the cardiac cycle, which
makes coronary CTA a 4D modality that can give accurate information about the cardiac
muscle and valve function, and it can do so in a fashion that is less operator-dependent
than echocardiography [38].
2.2.3. Limitations of Coronary CTA
In comparison with invasive CA and other non-invasive cardiac imaging techniques,
coronary CTA has some inherent limitations that physicians should consider when
requesting this examination. These disadvantages have restricted the usage of coronary
CTA to selected patients who have atypical symptoms and are of intermediate risk for
coronary artery disease [39].
Radiation Exposure: Radiation exposure is a major drawback of CTA. The average
background radiation one experiences in a year is about 3 mSv, and the estimated radiation
dose from a chest X-ray is 0.04 mSv, while the radiation of a coronary CTA examination
is currently 6.4±1.9 and 11.0±4.1 mSv for 16- and 64-slice CTA, respectively [40]. In view of the
potential benefits, this is probably within acceptable limits, but is still higher than a
conventional coronary angiogram, with effective doses of 5.6±3.6 mSv [41]. This has
severely restricted the indications of coronary CTA examination.
Limited temporal resolution and cranio-caudal coverage: Studies have indicated
that temporal resolutions of 35 ms are needed to obtain motion-free images from a beating
heart [42]. A modern 64-slice CT can achieve a temporal resolution of 175-200 ms, and a
cranio-caudal (z-axis) coverage of 40 mm [43]. This makes coronary CTA highly
dependent on the ECG-gating technique that aligns images from different parts of the
heart to the same phase of the cardiac cycle. Thus, coronary CTA is difficult to use with
patients with tachycardia and arrhythmia, where the images can suffer from registration
artifacts and blurring.
Nephrotoxic contrast medium: Coronary CTA requires iodinated contrast and often
additional medication such as beta-blockers [44]. Although the risk of these drugs is
minimal, and CTA exams are usually performed in a hospital under the close supervision
of medical staff, it does limit the usage of this technique among patients with renal
insufficiency [45].
Difficulty with calcifications: The degree of luminal narrowing may be difficult and
even impossible to quantify if heavy calcification is present, due to blooming artifacts.
One study shows that the area of calcified plaque measured with the MDCT was severely
overestimated compared to the histopathologic examination [46]. In addition to
calcifications, certain types of stents and bypass grafts with heavy metal content and
multiple clips may also cause severe artifacts and make the images non-evaluable.
Relatively high rate of false positives: The 64-slice CT is almost as good as
invasive CA in terms of detecting true positives (negative predictive value range 86-100%,
median 100% [12]). However, its rate of false positives is relatively high (positive
predictive value range 64-100%, median 93% [12]). One study showed that the percentage
of stenosis measured by MDCT was systematically overestimated by 12% [12]. Several
studies have suggested that a stenosis found with coronary CTA still requires confirmation
from invasive CA [12] [29] [30].
Inadequate scientific documentation and clinical guidelines: As with other newly
developed techniques, clinicians’ acceptance of coronary CTA varies considerably,
depending on their personal understanding of the technique. The proper use of this
technology may not yet be fully understood by cardiologists, and there is inadequate
scientific literature showing strong evidence of its true value in diagnostic testing in
various clinical scenarios. More evidence-based multidisciplinary evaluation studies are
needed to understand the role of coronary CTA in the diagnosis and treatment of early and
advanced stages of coronary artery disease.
2.3. Image Visualization and Post-processing for Coronary CTA Analysis
Images produced by coronary CTA are volume data, usually consisting of 300-500 slices
of 512×512 (pixel) images. To achieve high accuracy and efficiency in evaluating the
coronary artery system from such volume data, proper visualization techniques are needed.
In order to give prominence to certain structures, hide unwanted information, or derive
additional information, post-processing techniques may also be required. Image
visualization and post-processing are essential for diagnostic accuracy of coronary CTA.
As the American Heart Association has recommended, a workstation that allows for
interactive manipulation and post-processing of the acquired dataset is crucial, and at least
two types of image display should be used.
In this section, several common visualization and post-processing techniques often used
for coronary CTA analysis are briefly explained.
Trans-axial image slices: Trans-axial image slices are the basic outcome of a multi-slice CT scan, and include all of the acquired information. Looking through these original
source images is recommended in all cardiac CT examinations [39]. This is usually
performed in the first step, before any other techniques are used, to get a quick overview of
the relevant cardiac structures, including the coronary arteries.
Multi-planar Reformatting (MPR): MPR is a visualization method that allows
reconstruction of a 2D slice in any plane that is defined in a 3D volume of the stacked axial
slices. Sagittal and coronal views, usually called orthogonal MPR, are two simple
examples. Modern software allows reconstruction in non-orthogonal (oblique) planes, so
that the optimal plane can be chosen to display a particular branch of the coronary tree
(Figure 3B). In practice, however, a stack of oblique planes is needed for each branch. Two
often used MPR stacks in coronary CTA are the one parallel to the left anterior descending
artery (LAD) and the one parallel to the right coronary artery (RCA) and the left
circumflex artery (LCX) [43]. Scrolling through these stacks allows a better overview of
the atherosclerotic plaque burden and vessel narrowing of each vessel. More accurate
segment-based lesion evaluation will require interactive MPR, where the viewer can
manipulate the cut plane to be parallel or perpendicular to the centerline of the affected
segment [47].
Curved MPR: MPR images cannot present an entire vessel in one slice because of the
tortuous course of coronary arteries. Instead of using a straight cut plane, a
smooth curved surface can be used to fit into the winding vessel and cut open the volume
along the centerline of a vessel (Figure 3D). This allows bends in a vessel to be
’straightened’, so that the entire length can be visualized in one image. Once a vessel has
been straightened in this way, quantitative measurements of length and cross-sectional area
can be made. The centerline used for creating curved MPR images can be defined
manually by the user or created from an automatic or semi-automatic program, as
mentioned later. In most post-processing software, once a centerline is specified, curved
MPR can be produced in real-time and this allows users to rotate the curved plane around
the centerline and thus obtain complete information on the vessel lumen.
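The resampling behind curved MPR can be illustrated in 2D with a short sketch (Python/NumPy, with a hand-written bilinear sampler; this is a conceptual toy, not the implementation of any workstation — real software works in 3D and interpolates a smooth centerline):

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation of a 2D image at a real-valued point."""
    y0 = int(np.clip(np.floor(y), 0, img.shape[0] - 2))
    x0 = int(np.clip(np.floor(x), 0, img.shape[1] - 2))
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1]
            + fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

def curved_mpr(img, centerline, half_width=3):
    """Straighten a curved structure: at every centerline point, sample a
    short profile perpendicular to the local curve direction; stacking
    the profiles yields one image of the entire (2D) 'vessel'."""
    rows = []
    for i in range(1, len(centerline) - 1):
        p = np.asarray(centerline[i], dtype=float)
        tangent = np.asarray(centerline[i + 1], float) - np.asarray(centerline[i - 1], float)
        tangent /= np.linalg.norm(tangent)
        normal = np.array([-tangent[1], tangent[0]])  # perpendicular in 2D
        row = [bilinear(img, *(p + t * normal))
               for t in np.arange(-half_width, half_width + 1)]
        rows.append(row)
    return np.array(rows)
```

Applied to a bright diagonal line, the result is a straightened image whose central column follows the structure, which is exactly the effect described above for a tortuous vessel.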
Maximum Intensity Projection (MIP): MIP is a computer visualization method that
presents 3D information of a volume in one 2D image. Each pixel of the 2D MIP image
represents the voxel with maximum intensity that falls in the path of parallel rays traced
from the viewpoint to the plane of projection [48]. An MIP image looks very similar to an
X-ray image, which makes it a good way to mimic CA images with coronary CTA.
However, full-volume MIP is not usually carried out in coronary CTA because of the
inevitable overlap between heart chambers and coronary arteries. Instead, so-called thin-slab MIP, combining MIP with MPR, is normally used. In a thin-slab MIP, the maximum
CT number within a given distance orthogonal to the MPR plane is displayed for every ray
(Figure 3C). For evaluation of the coronary arteries, typical slab thicknesses range from 3
to 10 mm [43], according to the diameter of the vessel. MIP can also be combined with the
curved MPR technique to produce curved MIP images along the warped vessel. The
oblique MIP or curved MIP usually provides a better overview of the vessel than oblique
MPR or curved MPR. On the other hand, the depth information is sacrificed, as everything
in the slab is projected onto one plane, and this may sometimes affect the interpretation of
a lesion.
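The maximum operation behind a thin-slab MIP is simple to sketch (Python/NumPy; the function name and the axis-aligned slab are illustrative simplifications — in practice the slab is orthogonal to an arbitrary MPR plane):

```python
import numpy as np

def thin_slab_mip(volume, center, thickness, axis=0):
    """Maximum intensity projection over a thin slab of slices.

    volume: 3D array of CT numbers; center: index of the slab's central
    slice along `axis`; thickness: number of slices in the slab."""
    half = thickness // 2
    lo = max(center - half, 0)
    hi = min(center + half + 1, volume.shape[axis])
    slab = np.take(volume, range(lo, hi), axis=axis)
    return slab.max(axis=axis)  # brightest voxel along each parallel ray
```

Voxels outside the slab cannot contribute to the projection, which is why thin-slab MIP avoids the overlap between chambers and coronaries that spoils full-volume MIP.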
Volume Rendering Technique (VRT): VRT, sometimes also called Direct Volume
Rendering (DVR), is a 3D visualization technique that mimics how a camera captures
images. By using a transfer function that converts the intensity of pixels into colors and
opacity, VRT computes each desired pixel by summing up the weighted opacities and
colors of all voxels on a light ray starting at the center of projection of the camera (usually
the eye point) and passing through the image pixel on the imaginary image plane lying
between the camera and the volume to be rendered [49]. The ray usually stops at voxels
with 100% opacity or the boundaries of the volume. VRT gives a vivid 2D image like a
picture taken in front of a 3D object (Figure 3E). In coronary CTA, whole heart VRT is
often used to provide an overview of the heart from the outside, with the coronaries visible
on the outer surface of the myocardium. This is very helpful in the evaluation of aberrant
coronary anatomy, as VRT provides good insight into the 3D relationship of anatomical
structures. However, for stenosis assessment, thin-slab VRT is used instead in the same
way as thin-slab MIP.
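The compositing along one ray can be sketched as follows (a simplified front-to-back accumulation in Python; the transfer function and the early-termination threshold are illustrative assumptions, not a description of any particular renderer):

```python
import numpy as np

def composite_ray(samples, transfer_function, early_stop=0.99):
    """Front-to-back alpha compositing along a single ray.

    samples: 1D array of intensities the ray encounters, front first.
    transfer_function: maps an intensity to a (color, opacity) pair."""
    color_acc, alpha_acc = 0.0, 0.0
    for s in samples:
        c, a = transfer_function(s)
        color_acc += (1.0 - alpha_acc) * a * c  # weight by remaining transparency
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= early_stop:  # ray effectively terminated at opaque tissue
            break
    return color_acc
```

The early break corresponds to the statement above that the ray stops at voxels with 100% opacity.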
Four-dimensional visualization and function analysis techniques: To be able to
analyze cardiac valve function and heart wall motion, clinicians and diagnosticians usually
need 4D visualization techniques to navigate an anatomical structure’s changes throughout
the cardiac cycle. This visualization is typically carried out by playing MPR/thin-slab MIP
images from different cardiac phases in an ordered loop. Usually, the plane defining the
MPR/thin-slab MIP images is fixed, and the multiple-phase 3D volumes are reconstructed
using retrospective ECG gating with a step of 5 or 10% of the RR-interval [50]. In some
software modules, 4D datasets can be rendered with dynamic VRT in a continuous loop,
but this requires very powerful hardware support. More advanced software also provides
automatic or semi-automatic segmentation of the left ventricle and myocardium, which
allows calculation of the left ventricle ejection fraction and construction of a wall
thickening map [43].
Vessel segmentation, centerline tracking and plaque analysis: Thin-slab MIP and
thin-slab VRT provide intuitive ways to assess segments of the coronary arteries.
However, it is time-consuming to manipulate the position and direction of the sample
plane to investigate all branches. Using vessel segmentation, the user can hide all
unwanted structures in full-volume MIP or VRT images by setting them to be
transparent. As only coronary arteries are shown in the view, the images look very similar
to invasive CA images, with which cardiologists are familiar. Moreover, MIP and VRT
techniques allow the vessel to be viewed from any projection even after the acquisition.
Centerline tracking is also very useful for coronary CTA, as it is the foundation of curved
MPR techniques. Most advanced medical workstations are able to semi-automatically find
a centerline between two points specified by the user. More recently, fully automatic
coronary artery centerline tracking and tree modeling have become available on some
workstations [43]. Automatic segmentation techniques are also used for calcium score
calculations. These calculations have been widely used to evaluate the CAD risk factor
[43]. More advanced plaque analysis tools are also under development. These are believed
to be able to provide more comprehensive quantitative information about the patient’s
plaque burden and to monitor the therapy response of patients undergoing medical
treatment [36].
Figure 3. Comparison of image visualization methods in patient with multiple atherosclerotic
plaques of the LAD. A. transverse images, at the level of left coronary ostium. B. Visualization of
stenosis in an oblique MPR. C. Visualization of stenosis in an oblique thin-slab MIP. D. Curved
MPR of left main and LAD. E Three-dimensional reconstruction using volume rendering.
Calcification can clearly be seen.
2.4. Vessel Segmentation/Tracking Methods – A Brief Review
As mentioned above, vessel segmentation is the key to many advanced image analysis and
visualization methods. These clinical needs have attracted tremendous interest from the image analysis community. Researchers have actively attempted to segment vessels from CTA or MRA data using graph-based, geometric, and statistical techniques, and a large number of algorithms have been published in the literature. Relatively extensive overviews of vessel segmentation procedures can be found in [51][52][53][54]. However, so far no well-accepted way of classifying the numerous methods has emerged because of the complexity
of the segmentation procedure. The conventional categorization [51][52][53] mainly
focuses on the character of the algorithm involved in the key step of extracting/segmenting
vessel structure while ignoring the image property or model information used in the
calculation. This causes ambiguity as the algorithms evolve and more advanced model
information is added to some conventional vessel-extracting frameworks. It also makes it
very difficult, if not impossible, to estimate or compare the performances of different
methods. In [54], the author analyzed the vessel segmentation algorithms from three
aspects: models, features and extracting techniques. This seems to be the most reasonable
way so far of classifying different methods, but also results in a complicated network when
analyzing existing segmentation methods as a whole, due to the various complicated
combinations of techniques from the three aspects. The overlaps in the concepts of the
models and features could cause further confusion. In this chapter, I will try to provide a
brief review of vessel segmentation methods using a hierarchical categorization layout.
The methods are first classified into categories and sub-categories according to the type of
knowledge that is used in the successive steps, rather than how the knowledge is processed.
Within each sub-category, several example algorithms are then provided to demonstrate
how the knowledge can be extracted and used. This may facilitate the comprehension of
the robustness and accuracy of methods from different categories or discussion of the
efficiency of various algorithms within the same category that use the same knowledge
base. Due to the wide range of applications, it is impossible to list all the methods and their
variations here. My primary goal is to explicitly list the most common methods and
variations in the literature, while summarizing the rest under the umbrella of categories and
sub-categories. An outline of the classification of different vessel segmentation methods is
shown in Figure 4. The categories, subcategories and examples are all listed in order, from
simpler to more complex.
2.4.1. Class I: Methods using non-vascular-specific knowledge
Most methods in this class were designed for general purpose image segmentation. As
vessel segmentation is not very different from other image segmentation tasks, especially
when the image quality is sufficient, these methods were often “borrowed” to extract
vessel structure from CTA/MRA. According to how far image information or assumptions
of the object are used for judging the membership of image points, they can be further
divided into three sub-categories.
Figure 4. A hierarchical categorization of the vessel segmentation methods
Class Ia. Using Only Local Image Properties
Basic image properties include intensity and gradient. In CTA or MRA, the pixels/voxels
will have relatively higher intensity when they are inside the vessel area, or a high gradient on
the border of a vessel. In an ideal world, this information is sufficient to separate the vessel
from nonvascular structures. However, due to noise, limited resolution and, more
importantly, the overlap of intensity range between different tissues (for example, bone
structures and contrast-agent-mixed blood have similar intensity in CTA), the
segmentation results are usually unacceptable in real cases. However, most methods in this
category are very time efficient and allow real-time user interactivity. Examples are
thresholding and k-means clustering:
Thresholding judges whether a pixel/voxel belongs to the vessel by examining if
its intensity is within a given range (often defined by the user). It is a very simple
technique compared with the other methods listed below, yet it is the most often used
tool in commercial systems because of its time efficiency and ease of understanding.
Although conventional global thresholding suffers from misclassification of noise and
non-vascular high intensity objects, a local adaptive thresholding scheme shows some
promise of segmenting vessel structure from a varying background and noise [55].
Apart from its use as a standalone segmentation method, thresholding is often
incorporated with other more advanced methods as a pre-processing step to limit the
computation in the remaining pipeline [56].
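As an illustration, global thresholding and a toy local adaptive variant can be written in a few lines (Python/NumPy sketch; the cube-mean rule below is only one of many possible local adaptive schemes and is not the method of [55]):

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Global thresholding: keep voxels whose intensity lies in [lower, upper]."""
    return (volume >= lower) & (volume <= upper)

def adaptive_threshold(volume, offset, size=3):
    """Toy local adaptive variant: a voxel is foreground when it exceeds the
    mean of a small cube around it by `offset`, which tolerates a slowly
    varying background better than a single global threshold."""
    pad = size // 2
    padded = np.pad(volume.astype(float), pad, mode="edge")
    local_mean = np.zeros(volume.shape, dtype=float)
    # accumulate the shifted copies that make up the local cube
    for dz in range(size):
        for dy in range(size):
            for dx in range(size):
                local_mean += padded[dz:dz + volume.shape[0],
                                     dy:dy + volume.shape[1],
                                     dx:dx + volume.shape[2]]
    local_mean /= size ** 3
    return volume > local_mean + offset
```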
k-means clustering automatically partitions the histogram of an image into k
clusters by minimizing the average distance from each point to the center of the
nearest cluster. The cluster boundaries can then be translated into value ranges for thresholding. The strength of the k-means clustering method is its ability to handle multi-dimensional
data, such as color images (RGB channels) or flow data [57]. However, it still shares
the weaknesses of the simple thresholding method concerning noise and intensity
overlapping issues.
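A minimal 1D version of Lloyd's k-means on image intensities might look as follows (Python/NumPy sketch; the random initialization and fixed iteration count are arbitrary choices):

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Lloyd's k-means on a 1D sample of intensities.

    Returns the cluster centers sorted in ascending order; thresholds
    between adjacent centers can then drive ordinary thresholding."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):  # recompute each non-empty cluster's mean
                centers[j] = values[labels == j].mean()
    return np.sort(centers)
```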
Class Ib. Using Local Image Properties + Connectivity
In the physical world, it is easy to notice how the parts of a vessel (or any other anatomical
structure) are all connected. This allows a surgeon to bluntly separate the vessels from the
patient’s body. In an image, the connectivity between different points can be verified by
finding a path between them on which the intensities of all the adjoining points are within
a given range (normally brighter than the background). The connectedness strength can
also be quantified by using a cost function (for example examining the lowest gray-value
on the path). Using this connectivity concept, the pixels/voxels that have similar intensity
to a vessel structure but are not connected with vessels (for example bone structures or
high intensity noise in the background) can then be excluded. Methods in this class usually
require the user to provide one or several seed points inside and/or outside vessels; the
remaining area is then classified by testing whether the connectedness strength to these
seeds is above a given threshold, or by comparing the strength to different sets of seeds.
Examples of algorithms using this type of knowledge include region growing, iso-surface
extraction and fuzzy connectedness:
Region growing, as suggested by the name, starts from a given seed point/region
and gradually adds the neighboring points into the region if their intensity value falls
within a given range (upper and lower threshold), e.g. Figure 5. As the pixels inside
the vessel tend to have relatively high intensity in CTA and most cases of MRA, the
region is expected to grow along the vessel. As this type of method is quite time-efficient, it is included in most commercial software for vessel analysis. The drawback of this method is the leaking problem that often occurs when a vessel is too close
to other high intensity structures, like bones or heart chambers. Due to limited spatial
resolution and partial volume effects, the pixels/voxels at the boundary between two
tissues may have relatively high intensity. The region can then grow into another
organ through these bridge points. Controlled region growing methods were proposed
to avoid this problem by automatically selecting a threshold [58]. These methods
usually involve running multiple passes of the region growing process while
successively decreasing the global threshold. A leak is detected when the volume of
the segmented object increases dramatically. However, if the threshold is set too high,
low intensity voxels in a stenosis lesion may cause the distal part of the vessel to be
lost in the segmentation. These problems can be better solved by using competing
regions, e.g. in the relative fuzzy connectedness framework [59], or by using model-specific information.
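A basic breadth-first implementation of region growing with fixed thresholds can be sketched as follows (Python; 6-connectivity and a plain interval test are the simplest possible choices, without the leak control discussed above):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, lower, upper):
    """Breadth-first region growing from a seed voxel.

    6-connected neighbors are added while their intensity stays inside
    [lower, upper]; disconnected bright structures are never reached."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lower <= volume[seed] <= upper):
        return mask
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and lower <= volume[n] <= upper:
                mask[n] = True
                queue.append(n)
    return mask
```

Note how a single low-intensity gap (e.g. a tight stenosis) stops the growth, which is exactly the failure mode described above.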
Figure 5. The region growing method gradually adds the neighboring points (crossed) into the region if their intensity value falls within a given range. Figure refined after [58].
Iso-surface extraction generates an “airtight” surface around an object by
connecting points that have the same intensity (normally between the object and the
background); these points are expected to be located on the edges of the object in a
noise-free image. Marching cubes is the most common algorithm [60]. Although the
surface of iso-intensity is rarely used for CTA segmentation due to noise, the surface
of the maximum gradient has proved to be relatively accurate for quantifying the
diameter of vessels in [61].
Fuzzy connectedness defines the connectedness strength between two adjoining
points with a cost function (affinity function) [59]. The strength of a path joining two
remote points is then decided by the lowest affinity value on the chain. Finally, the
connectivity between two arbitrary points is quantified by finding the strongest path
among all the possibilities. This can be solved using Dijkstra’s algorithm [62] or the
Bellman-Ford algorithm [63]. As the connectivity is quantified, the membership can
be decided by setting a threshold (as in region growing) or by comparing the
connectivity to different sets of seeds [59].
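The Dijkstra-style computation of a connectedness map can be sketched as follows (Python; the `affinity` argument here is a deliberately simple placeholder, whereas published affinity functions combine intensity homogeneity and object feature terms [59]):

```python
import heapq
import numpy as np

def fuzzy_connectedness(volume, seeds, affinity):
    """For every voxel, the strength of the strongest path to any seed,
    where a path's strength is its weakest affinity link (best-first
    search, analogous to Dijkstra's algorithm with min/max costs)."""
    conn = np.zeros(volume.shape)
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        neg_strength, (z, y, x) = heapq.heappop(heap)
        strength = -neg_strength
        if strength < conn[z, y, x]:
            continue  # stale heap entry
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)):
                # a path is only as strong as its weakest link
                cand = min(strength, affinity(volume[z, y, x], volume[n]))
                if cand > conn[n]:
                    conn[n] = cand
                    heapq.heappush(heap, (-cand, n))
    return conn
```

Thresholding `conn`, or comparing maps grown from competing seed sets, then yields the segmentation.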
Class Ic. Using Local Image Properties + Connectivity + General model
Noise is a common problem for all imaging modalities. Using only local image properties, it is impossible to avoid misclassifying noisy pixels. Using the connectivity assumption, noise far away from the object can easily be excluded, but noise inside and near the object will still be misclassified, leaving holes inside the region and fuzzy edges.
More geometric assumptions of the targeted object, usually called models, were introduced
to overcome these issues. The most common assumption is local smoothness, which can be
expressed by fitting a small model (usually a ball shape) into every corner of an object, or
defining the whole object as an elastic body.
Mathematical morphology methods try to fit a binary (or occasionally gray-scale) shape, the “structuring element” (such as a ball), at every pixel/voxel [64].
Smoothing effects can be achieved by removing the points where the kernel does not
fit (erosion operation), or smearing the edges or holes with the kernel (dilation
operation). In reality, these two operations are normally combined to fill holes (closing operation: dilation + erosion) or to remove fuzzy edges (opening operation: erosion + dilation). Strictly speaking, mathematical morphology is a local model fitting
method. The conventional implementations do not consider connectivity information,
but for vessel segmentation, it is normally combined with region growing methods
[56] (which is why I listed it in Class Ic). Its application has been rather limited as
these conventional techniques could remove small branches, since the kernel size is
fixed and cannot be fitted into thin branches. Udupa et al. proposed a different scheme
of the ball fitting in their scale-based fuzzy connectedness [65] (note that fuzzy
connectedness is also called ‘connected opening’ in some mathematical morphology
books [66]). Instead of fitting a ball, they measured the radius of the local superball
based on the intensity homogeneity, which was then used to quantify the
connectedness between two neighboring points. This concept was later extended to
tensor-scale fuzzy connectedness which uses a super ellipsoid shape instead [67].
However, the performance of these methods is less satisfactory, as ray-casting is used
at each point to estimate the radius of the ball or the ellipsoid.
Figure 6. A 2D graph cut example in a 3×3 image. O and B are the super-nodes. The cost of each edge is reflected by the edge's thickness. The paths to O and B define E_intensity. The horizontal edges define E_coherence. Inexpensive edges are attractive choices for the minimum cost cut. Figure refined after [68].
The graph cut is an image partition method based on graph theory and energy
minimization [68]. As with the fuzzy connectedness method, the image is represented
as a graph in which the pixels/voxels are seen as nodes, and neighboring pixels/voxels
are connected with edges. In addition, all nodes are also connected with two super-nodes: one represents the background, and the other represents the object (Figure 6).
The costs of all edges are all positive, and negatively related to the intensity difference
between two nodes, i.e. a large difference corresponds to a small cost. The
segmentation problem can then be seen as finding a way to cut the graph into two
parts between the two super-nodes, so that E = E_intensity + E_coherence is minimized, where E_intensity is the total cost of cutting the edges between a super-node and an image point, and E_coherence is the total cost of cutting the edges between neighboring image points. In
simple cases, the E_intensity term could be interpreted as a thresholding force trying to keep high-intensity pixels on the object side and low-intensity pixels on the background side. E_coherence, on the other hand, is a smoothing force trying to shorten the contour of the object and penalize holes and fuzzy edges. If the costs between neighboring image points are set to zero (E_coherence = 0), the method is no different from thresholding; by increasing the cost, a smoother segmentation can be achieved
[69]. However, as the smallest units are pixels/voxels, the segmentation accuracy is
limited by the size of the pixel/voxel.
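Although an actual graph cut requires a max-flow solver, the energy it minimizes is easy to write down. The following sketch evaluates E = E_intensity + E_coherence for a given binary labeling of a 2D image (the squared-distance data term and the class prototypes are illustrative assumptions, not part of the original formulation in [68]):

```python
import numpy as np

def segmentation_energy(image, labels, lam=1.0):
    """Toy evaluation of E = E_intensity + lam * E_coherence for a binary
    labeling (1 = object, 0 = background) of a 2D image.

    E_intensity: squared distance of each pixel to its class prototype.
    E_coherence: one unit per pair of 4-neighbors with different labels,
    i.e. the neighbor edges a cut would have to sever."""
    obj_mean, bg_mean = 200.0, 0.0  # assumed class prototypes
    e_int = np.where(labels == 1,
                     (image - obj_mean) ** 2,
                     (image - bg_mean) ** 2).sum()
    # count label discontinuities between horizontal and vertical neighbors
    e_coh = (labels[:, 1:] != labels[:, :-1]).sum() \
          + (labels[1:, :] != labels[:-1, :]).sum()
    return e_int + lam * e_coh
```

A labeling that matches the intensities and has a short boundary scores a lower energy than one with a hole, which is the configuration the max-flow cut selects.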
Active contours, also called snakes, represent the object border using an elastic
contour (in 2D) or mesh (in 3D) which is driven by an external force and an internal
force [70]. The external force attracts the contour towards the object boundary to
minimize the external energy, which is lowest when the contour is at the maximum
gradient. The internal force (elastic force) tries to keep the contour smooth and
minimize the internal energy, which is minimal when the curvature of the contour is
zero. By increasing the weighting factor of the internal force, the contour will become
smoother in order to segment objects accurately on a noisy background. However, if it
is set too high, sharp corners of the sought object will also be cut off. The internal
force can also prevent the leaking problem to some extent as mentioned in the region
growing method. Unlike graph-based methods, active contours can achieve sub-pixel/voxel accuracy, as the control points are not limited to discrete points. However,
the number of control points on the contour will affect the accuracy. Too few control
points will result in coarse shapes, while too many points will cause clustering
problems and lead to unstable movements. In practice, active
contours are widely used for 2D cross-section segmentation based on vessel skeletons
generated from other vessel extracting methods. However, classical implementations
of active contours are rarely used for direct 3D extraction of vascular structures, as the
curvature regularization prevents identification of thin, elongated structures [54].
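The interplay of the two forces can be sketched as a single explicit update step of a closed snake (Python/NumPy; the discrete second difference stands in for the internal elastic force, and the weights are arbitrary illustrative values):

```python
import numpy as np

def snake_step(points, external_force, alpha=0.2, gamma=0.5):
    """One explicit update of a closed active contour (snake).

    points: (N, 2) array of control points on a closed contour.
    external_force: (N, 2) force pulling each point toward image edges
    (e.g., the gradient of an edge map sampled at the point).
    The internal force is the discrete second difference along the
    contour; alpha weights smoothness and gamma is the step size."""
    internal = (np.roll(points, 1, axis=0) - 2 * points
                + np.roll(points, -1, axis=0))
    return points + gamma * (alpha * internal + external_force)
```

With the external force set to zero, repeated steps shrink and smooth the contour while preserving its centroid (the internal force is a discrete Laplacian), which illustrates why a too-strong internal force also cuts off sharp corners of the sought object.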
The level set method can also be seen as a variant of the active contour in which
the contour is implicitly embedded in a level set function, given by the distance to the
zero level set (Figure 7). This enables the level set method to automatically handle complicated topological changes of the target object, such as dividing one object into
two. It also increases accuracy, as the control points of the contour are always evenly
distributed and the sample distance is no greater than one pixel/voxel. However, it
also results in much longer computation times than explicit active contour methods,
and therefore its clinical usefulness is limited. To accelerate the performance, the
narrow-band level set method was proposed [71], in which the computation of the
level-set function is limited to a band a few pixels wide instead of the whole image.
This was later extended to the sparse-field level set [72], where the narrow band is
only one pixel wide and the level set function is recalculated with a distance transform
in each iteration. These methods reduce the computation by one order of magnitude.
However, in our experience, the performance on 3D data is still not adequate for
clinical practice.
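The narrow-band idea itself is simple to illustrate: when the level set function is stored as a signed distance map, only the points within a thin band around the zero level set need to be updated. The following minimal sketch (grid size, radius and band width are arbitrary assumptions; a real solver would also re-initialize the band as the contour moves) shows how small that active set is:

```python
def narrow_band(phi, width=2.0):
    """Grid points whose level set value lies within the band |phi| < width;
    only these need to be updated in a narrow-band scheme."""
    return [(i, j) for i, row in enumerate(phi)
            for j, v in enumerate(row) if abs(v) < width]

n = 64
# Level set function: signed distance to a circle of radius 20
# (negative inside the contour, positive outside).
phi = [[((i - 32) ** 2 + (j - 32) ** 2) ** 0.5 - 20.0 for j in range(n)]
       for i in range(n)]
band = narrow_band(phi)  # a small fraction of the 64 x 64 grid
```

The sparse-field variant pushes this to the extreme by keeping the band only one pixel wide and re-deriving the neighbouring distance values in each iteration.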
Background
Figure 7. Explicit contour is enlarged by moving the control points outwards. Implicit
contour, represented by a level set function, is enlarged by moving the level sets downwards.
2.4.2. Class II: Methods using vascular-specific knowledge.
Figure 8. Different vascular models: A, The shape-space for second order 3D variations
represented by 3 eigenvalues of the Hessian matrix [74]; B, a vessel template in 2D [75]; C, The
rays used in computing the maximum center likelihood [76].
The unique geometric appearance of a vascular structure enables medical doctors to
recognize patterns of vessels in very noisy images. Model-based vessel extraction
algorithms use a similar strategy by looking at the pixels/voxels in a local area as a whole
instead of examining them individually. The probability of a vessel existing is then
represented by how well the local geometric character agrees with the predefined model(s),
which is often referred to as vesselness measurement. Vascular models are usually
generated from human knowledge or training data. A tubular/cylindrical shape is the most
commonly used knowledge-based model. Using this assumption, numerous vesselness
evaluation methods have been proposed, such as local Hessian matrix analysis [73][74],
model fitting [75] and ray casting [76] (e.g. Figure 8). Training-based methods do not
require previous knowledge of the vessel shape, but use the segmented results from human
experts as training data to discover the optimal combination of image features that can be
used to classify the vascular regions.
Class IIa. Using only vesselness measurement
The most straightforward way of using the vascular-specific models is to perform the
vesselness measurement on every pixel/voxel in the image, thus creating a property map.
This map can then be used by a classifier to produce the segmentation result. The apparent
drawback is that this type of method is usually computationally expensive. Hessian matrix
analysis is so far the most applicable approach in this category, as the computation is
limited to a 2×2 matrix in 2D, or 3×3 matrix in 3D. The matrix is usually computed using
convolution operations, and is then used to calculate the eigenvalues. It should be pointed
out that, as the diameter of a vessel varies, the vesselness measurement is normally
repeated in multiple scales and projected into one volume by choosing the maximum
vesselness value from all scales. The scale giving the highest vesselness measurement can
also be used to predict the local diameter of a vessel.
Thresholding is the simplest classifier for selecting points inside vessels. For
tubular structures, the eigenvalues of the Hessian matrix follow a characteristic
pattern in 3D, i.e. |λ1|≈0, |λ1|<<|λ2| and λ2≈λ3 [74]. By setting a range on each of the
two/three eigenvalues, voxels that belong to a tubular structure can be selected.
Alternatively, the two/three eigenvalues can be converted to a scalar via a vessel enhancement function
[74][77][73]. Compared with thresholding on the non-vascular-specific features
discussed in Class Ia, using vesselness can significantly reduce the misclassification
caused by noise. However, the result is still rarely satisfactory as many anatomical
structures can mimic the tubular shape and the bifurcation areas of a vessel could have
a relatively low vesselness value since the model is based on an assumption of tubular
shapes.
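As a concrete illustration, a simplified 2D eigenvalue-based measure can be written as follows. This is only a sketch loosely inspired by the enhancement functions cited above, not the exact functions of [73][74][77]: the finite-difference Hessian, the synthetic ridge image and the exponential weighting are all illustrative assumptions:

```python
import math

def hessian_eigs(img, i, j):
    """Eigenvalues of the 2x2 Hessian at pixel (i, j), estimated with
    central finite differences; returned sorted by magnitude, |l1| <= |l2|."""
    ixx = img[i + 1][j] - 2 * img[i][j] + img[i - 1][j]
    iyy = img[i][j + 1] - 2 * img[i][j] + img[i][j - 1]
    ixy = (img[i + 1][j + 1] - img[i + 1][j - 1]
           - img[i - 1][j + 1] + img[i - 1][j - 1]) / 4.0
    mean = (ixx + iyy) / 2.0
    d = math.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    return sorted((mean - d, mean + d), key=abs)

def ridge_vesselness(img, i, j):
    """Bright-ridge measure: large when |l1| is small and l2 is strongly
    negative (a simplified stand-in for a vessel enhancement function)."""
    l1, l2 = hessian_eigs(img, i, j)
    if l2 >= 0:
        return 0.0  # not a bright ridge
    return abs(l2) * math.exp(-(l1 / l2) ** 2)

n = 64
# Synthetic image: a bright vertical ridge at i = 32 with Gaussian profile.
img = [[math.exp(-((i - 32) ** 2) / 8.0) for j in range(n)] for i in range(n)]
on_ridge = ridge_vesselness(img, 32, 32)
off_ridge = ridge_vesselness(img, 10, 32)
```

In a multi-scale setting, the image would first be smoothed with Gaussians of several widths and the maximum response over scales retained, as described above.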
Machine learning has been gaining more and more attention within medical image
analysis in recent years, especially after fast algorithms, such as probabilistic boosting
tree (PBT), were developed [78]. A machine-learning-based vesselness measure can
exploit the rich domain-specific knowledge embedded in an expert-annotated dataset.
By providing a rich set of geometric and image features, the classifier can be trained
to assign a high score to voxels inside the artery and a low score to those outside.
Promising results have been achieved in some preliminary vessel segmentation studies
[79]. For tubular structure extraction, it is doubtful whether machine learning methods
can outperform explicit model-based methods: one can imagine that, after massive
training, the classifier will probably end up as a tubular structure detector. The
strength of the learning-based method, however, is its ability to classify the
bifurcation areas, which, as discussed above, cannot be well expressed in the explicit
models [79].
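The principle of a trained vesselness score can be illustrated with a deliberately tiny classifier. This is a toy logistic regression on two hypothetical per-voxel features ("intensity" and a "local tube response"), not the probabilistic boosting tree approach of [78]; it merely sketches the idea of learning to score inside-vessel voxels higher than background voxels:

```python
import math
import random

def train_logreg(xs, ys, lr=0.5, epochs=200):
    """Tiny logistic-regression 'vesselness' classifier trained by
    stochastic gradient descent on labelled feature vectors."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """Learned vesselness score in [0, 1] for a feature vector x."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

random.seed(0)
# Hypothetical per-voxel features (intensity, local tube response):
# inside-vessel voxels score high on both, background voxels low.
inside = [(0.8 + 0.1 * random.random(), 0.7 + 0.2 * random.random())
          for _ in range(50)]
outside = [(0.2 + 0.2 * random.random(), 0.1 + 0.2 * random.random())
           for _ in range(50)]
feats, labels = inside + outside, [1] * 50 + [0] * 50
w, b = train_logreg(feats, labels)
```

In an expert-annotated dataset the labels would come from manual segmentations, and the feature set would be far richer than the two assumed here.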
Class IIb. Using vesselness + Connectivity
The connectivity of vessels can also be used together with the vesselness measurement.
The advantage of using connectivity is not only that it avoids misclassification of
unrelated tubular structures, but also that it limits the computation to a small
proportion of points near the vessel, which allows vessel extraction at interactive
speed. As the number of
processed points decreases, more complicated vesselness measurement methods, such as
model fitting or ray casting, can be implemented.
Ridge traverse methods extract a tubular structure by repeating two basic steps,
prediction and selection. Given a seed point or point from a previous tracking step, the
vessel direction can be estimated using the vesselness measurement via either Hessian
matrix analysis [74], model fitting [75] or ray casting [76]. A new position is then
predicted by moving a small step further (the distance can be predefined or
proportional to the vessel diameter) in that direction. Because the vessel is not always a straight
cylinder, a number of candidate points around the predicted position are evaluated.
The point with the highest vesselness measurement is then selected as the next center
point of the vessel. The growing process normally ends when the vesselness value
goes under a threshold, and results in a non-branching centerline. More complicated
algorithms have been proposed to detect possible bifurcation and extract the whole
vascular tree [75]. It is worth mentioning that, in each selection step, the diameter
of the vessel is usually also estimated by varying the scale of the vascular model, and
used as an additional condition for selecting the best fit center point because the vessel
diameter normally will not change significantly over a short distance [75]. Ridge
traverse schemes explore the image on sparse locations and can thus achieve great
time and memory efficiency suitable for large 3D datasets. Embedding vesselness
information into the growth process generally improves robustness over previously
discussed connectivity-based methods (Class Ib). The limitation of this type of
method is the tendency for premature stopping in the presence of anomalies such as
stenoses and aneurysms. Sometimes, one-click extraction can make it difficult to
force the centerline to grow along a desired branch when it is attracted by a
nearby branch with a prominent tubular shape.
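The predict-and-select loop described above can be sketched as follows (a 2D toy in which raw image intensity stands in for the vesselness measurement; the candidate fan, step length and stopping threshold are arbitrary assumptions):

```python
import math

def track_ridge(img, x, y, direction, step=2.0, stop=0.2, max_steps=100):
    """Predict-and-select ridge traversal: from the current point, probe a
    fan of candidate directions around the current heading and move to the
    brightest candidate; stop when the response falls below a threshold."""
    path = [(x, y)]
    for _ in range(max_steps):
        best, best_val, best_dir = None, -1.0, direction
        # Candidates within +/- 60 degrees of the current heading.
        for k in range(-4, 5):
            a = direction + k * math.pi / 12
            cx, cy = x + step * math.cos(a), y + step * math.sin(a)
            ci, cj = int(round(cx)), int(round(cy))
            if 0 <= ci < len(img) and 0 <= cj < len(img[0]) \
                    and img[ci][cj] > best_val:
                best, best_val, best_dir = (cx, cy), img[ci][cj], a
        if best is None or best_val < stop:
            break  # response below threshold: assume the vessel has ended
        (x, y), direction = best, best_dir
        path.append((x, y))
    return path

n = 64
# Synthetic 'vessel': a bright horizontal ridge along j = 20.
img = [[math.exp(-((j - 20) ** 2) / 4.0) for j in range(n)] for i in range(n)]
path = track_ridge(img, 5.0, 20.0, direction=0.0)
```

A real tracker would estimate the direction from Hessian analysis, model fitting or ray casting rather than from raw intensity, and would additionally re-estimate the diameter at each step.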
Minimum cost path methods extract the vessel centerline by minimizing the
cumulative, monotonic cost metric integrated along the path between two given
points. Similar to the fuzzy connectedness method discussed in Class Ib, each
pixel/voxel in the image is assigned a cost function that is normally negatively related
to the vesselness value, for instance taking the reciprocal of the latter. This makes the
minimum path favor the center of a vessel. Efficient dynamic programming schemes,
such as the Dijkstra algorithm, are used to solve the minimization problem via a
growing process. The vesselness value of each pixel/voxel can be calculated
beforehand as discussed in Class IIa, using Hessian matrix-based methods [80], or
calculated only when the propagating region reaches the pixel/voxel using model
fitting or ray casting methods [81]. Compared with the ridge traverse method, the
minimum path searching method is based on global optimality, thus is more robust in
cases of corrupted data or severe anomalies.
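A minimal sketch of such a minimum cost path search might look like this, with the cost of each pixel taken as the reciprocal of a stand-in vesselness value (here plain intensity plus a small constant, an illustrative assumption):

```python
import heapq
import math

def min_cost_path(cost, start, goal):
    """Dijkstra search on a 2D grid: the cost of entering a cell is
    cost[i][j], so cheap cells (high vesselness) attract the path."""
    n, m = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (i + di, j + dj)
            if 0 <= nxt[0] < n and 0 <= nxt[1] < m:
                nd = d + cost[nxt[0]][nxt[1]]
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = node
                    heapq.heappush(heap, (nd, nxt))
    # Walk back from the goal to recover the centerline.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

n = 64
# Synthetic 'vessel': a bright horizontal ridge along j = 20.
img = [[math.exp(-((j - 20) ** 2) / 4.0) for j in range(n)] for i in range(n)]
# Cost negatively related to vesselness (here plain intensity stands in).
cost = [[1.0 / (v + 0.01) for v in row] for row in img]
path = min_cost_path(cost, (5, 20), (58, 20))
```

Because the search is globally optimal between the two endpoints, a local corruption of the ridge only raises the cost locally instead of stopping the extraction, which is the robustness advantage noted above.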
Machine learning methods can also cope with the connectivity of the vessel
system. Trivial implementations include using the region growing method to further
refine the segmentation results from trained classifiers discussed in Class IIa [79] or
using ridge traverse schemes while replacing the selection step with the Bayesian
estimation approach [82]. Nevertheless, these methods simply replace other vesselness
measurement algorithms with a machine-learning-based vesselness measurement. A
more attractive approach proposed by Lacoste et al. incorporates geometric primitives
and the connectivity property directly into the Markov marked point processes as prior
knowledge [83]. The extraction process is initialized from randomly generated small
segments of the image and is evolved through a reversible jump Markov chain Monte Carlo scheme. This algorithm governs the evolution of the parameters, along with the
birth and death of segments until they converge into a clean and stable structure that
corresponds to the vascular tree. By elegantly integrating all prior knowledge within a
homogeneous, well-posed theoretical framework, robust and accurate results can be
achieved on 2D coronary angiography images. However, the high computational cost
unfortunately prohibits its extension to 3D data [83].
Class IIc. Using vesselness + Connectivity + General model
Even though the geometric assumptions used in Class IIa and IIb methods significantly
improve the robustness of locating the vessel structures and of extracting the vessel
centerlines in a noisy image, these methods normally cannot accurately identify the vessel
boundary, since the models are only used to estimate the likelihood of the vascular
appearance in the neighborhood. They are often combined with active contour approaches
to deliver quantitative analysis based on cross-section segmentation as mentioned in Class
Ic. However, these skeleton-based methods cannot be directly used to segment 3D vessel
structures. A more natural way of integrating vessel models, connectivity information and
smoothness assumption is to use the level set framework.
The level set is a flexible framework that allows the integration of various internal
and external forces. Vesselness measurements can be incorporated into the level set
framework in different ways. The most straightforward approach is to use the
vesselness value as an external force to push the contour outwards. However, as the
vesselness is measured from the local area instead of a single pixel, vague edge
information could be smeared away and the segmentation accuracy is then reduced.
Using the vesselness information to guide the internal force could avoid interfering
with the edge information while still pushing the contour forward along the vessel. In
conventional level set-based image segmentation frameworks, the smoothness of the
contour is controlled by penalizing high mean curvature, which is calculated by taking
the mean value of curvature measurements in orthogonal directions. As the contour
grows along thin vessels, the tip of the contour will always have a high mean
curvature. This makes it difficult to balance the smoothness of the
contour against the distance it can grow along the vessel. Anisotropic curvature
measurement has been proposed to address this problem. Using the estimated vessel
direction from the vesselness measurement (such as Hessian matrix analysis [84][85]),
anisotropic curvature weighting factors could be applied on the curvature in different
directions, which relaxes the constraints on the tips of the contour but strengthens the
restriction when the contour is aligned with the vessel surface.
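The curvature term that drives this discussion can be computed directly from the level set function; for a signed distance map of a circle of radius r, the measured curvature should approach 1/r. The following is a 2D finite-difference sketch (the 3D mean curvature is computed analogously from the 3 x 3 Hessian of the level set function):

```python
import math

def curvature(phi, i, j):
    """Contour curvature kappa = div(grad(phi) / |grad(phi)|) of the
    implicit contour, via central finite differences."""
    px = (phi[i + 1][j] - phi[i - 1][j]) / 2.0
    py = (phi[i][j + 1] - phi[i][j - 1]) / 2.0
    pxx = phi[i + 1][j] - 2 * phi[i][j] + phi[i - 1][j]
    pyy = phi[i][j + 1] - 2 * phi[i][j] + phi[i][j - 1]
    pxy = (phi[i + 1][j + 1] - phi[i + 1][j - 1]
           - phi[i - 1][j + 1] + phi[i - 1][j - 1]) / 4.0
    num = pxx * py * py - 2.0 * px * py * pxy + pyy * px * px
    return num / ((px * px + py * py) ** 1.5 + 1e-12)

n = 64
# Level set function: signed distance to a circle of radius 20.
phi = [[math.hypot(i - 32, j - 32) - 20.0 for j in range(n)] for i in range(n)]
k = curvature(phi, 52, 32)  # on the contour: expect roughly 1/20 = 0.05
```

Since a thin vessel has a small radius, its tip necessarily has a large curvature value, which is why an isotropic curvature penalty fights the propagation along the vessel; the anisotropic weighting described above down-weights exactly this term at the tips.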
3. AIMS
Although coronary CTA is sometimes conventionally evaluated without any post-processing, e.g. using interactive MPR or thin-slab MIP, an increasing number of
physicians are convinced that using proper post-processing methods may greatly enhance
the efficiency of the diagnostic workflow and improve the diagnostic reliability. This
clinical need has encouraged the development of modern advanced medical image
processing software, and it is also the starting point of this project. The main goal of our
study is to develop a computer tool to facilitate coronary CTA analysis by combining
knowledge of medicine and image processing, and evaluate its performance in clinical
settings. More specifically, the aims of this thesis are:
A1.
To design a new algorithm for vessel segmentation and skeletonization using fuzzy
connectedness theory, and adapt it to coronary CTA analysis to achieve both
accuracy and time efficiency
A2.
To develop a software module implementing the new algorithm as the core and
adding additional functions to simplify or eliminate user intervention, and perform
visual and quantitative evaluation of the results in comparison with other available
methods
A3.
To evaluate the diagnostic accuracy of the developed software and compare it with
standard commercial software in a clinical setting
A4.
To implement a level set-based segmentation method for 3D vessel segmentation
and optimize the performance to meet the clinical requirements
4. SUMMARY OF THE PAPERS
4.1.
Algorithm Design
Paper I
Methods
In [86], a fuzzy connectedness method was developed for MRA image segmentation. A
previous study by our group [87] showed promising results when using the fuzzy
connectedness method, also called “virtual contrast injection”, for coronary CTA
segmentation. In this paper, we propose a new segmentation algorithm based on competing
fuzzy connectedness theory, which was used for visualizing coronary arteries in 3D CTA
images. The strength of connectedness from each voxel to two sets of seeds, artery seeds
and ventricle seeds defined by the user, is calculated and compared to decide which object
a voxel should belong to. The strength is calculated by finding the strongest among all
possible connecting paths. The strength assigned to a particular path is defined as the
lowest gray-scale value of voxels along the path. The membership of every voxel is then
calculated by comparing its connectedness value to different seeds, assigning the voxel to
the seed yielding the highest connectivity. Applying this strategy to all voxels in the image,
a natural segmentation by “virtual contrast injection” is achieved. The whole concept can
be likened to contrast agents of different color being injected into the seed regions and
circulating within the cardiovascular system while competing with each other. The
propagation procedure for a seed is somewhat similar to Region Growing, but uses a non-local propagation criterion.
The major difference compared to other fuzzy connectedness algorithms is that an
additional data structure, the connectedness tree, is constructed at the same time as the
seeds propagate. Using this tree structure not only speeds up the computation, but
also improves the segmentation results by solving an ambiguity problem that may arise
when propagation from different seeds occurs along the same path. In addition to that, the
fuzzy connectedness tree algorithm also includes automated extraction of the vessel
centerlines that can be used for creating curved MPR images along the arteries’ long axes.
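The competing propagation itself can be sketched compactly. The following is a simplified 2D illustration of the max-min path-strength rule described above (the actual implementation additionally builds the connectedness tree and extracts centerlines, which are omitted here):

```python
import heapq

def competing_fc(img, seeds):
    """Competing fuzzy connectedness: each pixel is assigned to the seed to
    which it has the strongest path, where the strength of a path is the
    lowest gray value along it. The strongest connections are expanded
    first, Dijkstra-style, so each pixel is conquered exactly once."""
    n, m = len(img), len(img[0])
    label = [[None] * m for _ in range(n)]
    heap = []
    for lab, (i, j) in seeds.items():
        heapq.heappush(heap, (-img[i][j], i, j, lab))
    while heap:
        neg, i, j, lab = heapq.heappop(heap)
        if label[i][j] is not None:
            continue  # already claimed through a stronger connection
        label[i][j] = lab
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < m and label[ni][nj] is None:
                strength = min(-neg, img[ni][nj])
                heapq.heappush(heap, (-strength, ni, nj, lab))
    return label

# Two bright structures (rows 2 and 7) on a dark background: each row
# should be claimed by the seed planted inside it, like two contrast
# agents of different color circulating and competing with each other.
img = [[100 if i in (2, 7) else 10 for j in range(10)] for i in range(10)]
label = competing_fc(img, {"artery": (2, 0), "ventricle": (7, 0)})
```

The seed names and the tiny synthetic image are of course only illustrative; in the coronary CTA setting the seeds are planted in the arteries and the ventricle as described in the text.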
Results
The proposed method was compared with the previously reported “SeparaSeed” method
[86] in 33 clinical coronary CTA datasets. With the new algorithm, the seed planting time
was shortened to 2–3 min from 30 min with the “SeparaSeed” algorithm. The computer
processing time was 3–5 min for the new algorithm and 10–15 min for the “SeparaSeed”
algorithm. Using only basic seed planting (with one root seed placed in each coronary
artery) resulted in all visible branches being completely segmented in 18 cases, compared
to 12 cases when using the same seeds with the previous algorithm (p<0.05; McNemar’s
test). In total, visually correct centerlines were obtained automatically in 95.3% (262/275)
of the visible branches. Figure 9 shows a segmentation example.
Figure 9. A. Segmentation result shown in 3D with volume rendering, arrow indicating a severe
stenosis in the left circumflex artery (LCX). B. Segmentation result of left coronary artery shown
in 3D with MIP, arrow indicating the same stenosis. C. Segmentation result shown as a mask in
color on a 2D transverse image. The circle indicates the cross-section of the same lesion in A
and B. The red mask proves that the stenosis seen in 3D was not caused by errors in
segmentation. D-F. Curved MPR, oblique MPR and oblique MIP of the LCX, arrow indicating the
same stenosis.
4.2.
Software Development
Paper II
Methods
Using the new vessel segmentation method proposed in paper I, a new software module for
coronary artery segmentation and visualization in CTA datasets was developed which aims
to interactively segment coronary arteries and visualize them in 3D with MIP and VRT.
The software was built as a free plug-in for an advanced open-source PACS workstation
– OsiriX [88]. The main segmentation function is based on the optimized “virtual contrast
injection” algorithm [I], which uses fuzzy connectedness of the vessel lumen to separate
the contrast-filled structures from each other. In contrast to many other algorithms, our
approach does not aim to define the exact border of the vessel, but rather to keep a
sufficient amount of myocardial tissue surrounding the vessel in the segmentation result,
which facilitates preservation of detailed information about the coronary artery tree.
The software was evaluated in 42 clinical coronary CTA datasets acquired with a 64-slice CT using isotropic voxels of 0.3–0.5 mm.
Results
The median processing time was 6.4 min, and 100% of the main branches (right coronary
artery, left circumflex artery and left anterior descending artery) and 86.9% (219/252) of
visible minor branches were intact, as judged by visually checking if there was at least one
voxel of soft tissue surrounding the contrast-enhanced lumen (Figure 9). Visually correct
centerlines were obtained automatically in 94% (345/367) of the intact branches. The
median number of seeds planted during interactive segmentation (after the initial seeding)
was 4.
Paper III
Methods
To provide a more time-efficient workflow, the software module presented in paper II was
further developed to integrate both fully automatic [89] and interactive methods for the
coronary artery extraction method. To extract useful information from the coronary CTA
data, a quantitative coronary CTA analysis function was built into the new software
module. The image processing in the software can be divided into three phases: image
reception and classification; automatic coronary artery tree extraction; and interactive
coronary artery extraction and visualization (summarized in Figure 10).
During image reception and classification, the software checks all received images and
decides which should enter the automatic processing pipeline.
Automatic coronary artery tree extraction starts with rib cage removal using a
minimum-cost-path searching method to close the heart contour at the anterior and
posterior mediastinum. The ascending aorta is then detected using a Hough-transform-based 2D circle detection method, and extracted using a 3D cross-section growing method.
The coronary arteries are finally segmented and skeletonized, with the “virtual contrast
injection” method being performed on the vessel structure enhanced data using the
detected aorta as the start seed.
In the interactive coronary artery extraction and visualization phase, the user can correct
the automatically created results by adding new seeds or an endpoint for a branch. For
quantitative analysis, a 2D level-set method is used to segment the vessel’s contour on the
cross-section and longitudinal-section images.
The computational power of an ordinary PC is exploited by running the non-supervised
coronary artery segmentation and centerline tracking in the background as soon as the
images are received. When the user opens the data, the software provides a real-time
interactive analysis environment.
Results
Standardized evaluation methodology and a reference database for evaluating coronary
artery centerline extraction algorithms (publicly available at http://coronary.bigr.nl) were
used for evaluating the new software’s performance. The average overlap between the
centerline created in our software and the reference standard was 96.0%. The average
distance between them was 0.28 mm. The automatic procedure ran for 3–5 min as a single-thread application in the background. Interactive processing took 3 min on average.
Figure 10. The workflow of the presented software. The non-supervised coronary artery
segmentation and centerline tracking starts in the background as soon as the images are
received. When the user opens the data, the software provides a real-time interactive analysis
environment
4.3.
Clinical Evaluation
Paper IV
Methods
Patients: CTA data sets from 30 patients (23 men and 7 women, mean±SD age 64±8
years), with unstable coronary artery disease or non-ST segment elevation myocardial
infarction (Non-STEMI), were evaluated retrospectively.
CT Protocol: CT-coronary angiography was performed with a dual-source 64-slice
CT scanner. Five mg of intravenous metoprolol was administered to obtain an optimal
heart rate (60±8 beats per minute). A “test bolus” protocol was used to time the
acquisition. For coronary CTA, 70 ml of contrast agent was injected, followed by 50 ml
saline solution, both at flow rates of 6 ml/s. Axial images were reconstructed with 0.75 mm
slice thickness and 0.5 mm increment using retrospective ECG gating. The best diastolic
3D images were automatically selected with the minimum cardiac motion detection
technique.
Post-processing: The best diastolic series were transferred to an off-line workstation
(Leonardo; Siemens Healthcare, Erlangen, Germany). One observer performed interactive
vessel segmentation (with the region growing [RG]-based method) on all 30 data sets with
the “Circulation” software. The same observer also prepared another segmented series with
the interactive coronary artery segmentation tool (“virtual contrast injection” [VC]-based
method) on a Mac Pro computer running the open source software OsiriX (v 3.6.1) with
our plug-in. Segmentation results (cf. Figure 11) were then sent back to the Leonardo
workstation for evaluation.
Catheter angiography (CA) of the coronary arteries was performed within three days
of the coronary CTA, and was evaluated by a blinded independent observer using a
different medical image workstation, and used as the reference standard for stenosis
detection.
Evaluation: Three reviewers, blinded to the CA results, evaluated the coronary CTA
data for the presence of stenoses with diameter reduction of 50% or more using the SCCT
18 segments model [90]. For every examination, each of the three types of images was
exclusively reviewed by one reviewer. For the original series, the reviewer was allowed to
use all the visualization tools (mixed method) available on the Leonardo workstation
including MPR, CPR, MIP (with or without thin-slab) and VRT. For the segmented results
(results from RG and VC), the reviewer was instructed to only use the 3D MIP tool (Figure
11). To minimize the influence of individual ability and experience of interpretation, the
patients were randomized into three groups, and observers were asked to evaluate different
groups using different types of images.
Results
The mean (±SD) post-processing time for preparing the segmented result with the RG
method on the Leonardo workstation was 3.6±1.2 minutes per patient. Besides the initial
click inside the aorta, 3.9±2.8 seed points per patient were added during the following
interactive segmentation step. Using the VC method, the post-processing time was 4.7±1.1
minutes and 2.7±1.6 extra sets of seeds were added in addition to the three initial seeds.
In CA, 426 segments were rated as evaluable. In coronary CTA, 386 segments were rated
as evaluable when reviewed with the “mixed method”, 371 segments with VC, and 301
segments with RG. The overall accuracies of the three presentation methods at segment
level, four-artery level and patient level are listed in Table 1, Table 2 and Table 3
respectively. In these tables, non-evaluable branches were counted as “not accurately
classified”. Table 1 was excluded from further statistical analysis, because the numbering
of segments varied too much among the four reviewers.
At the artery level, the diagnostic accuracy, sensitivity and specificity of the RG-based 3D
visualization method were significantly lower than using the mixed method (p = 0.0001,
0.02 and 0.002 for accuracy, sensitivity and specificity, respectively) and the VC-based 3D
visualization method (p = 0.002, 0.04 and 0.005, respectively). However, there was no
significant difference in accuracy, sensitivity or specificity between the VC-based method
and the mixed method (p = 0.22, 0.18 and 0.5, respectively). The kappa value was 0.77
between the mixed method and the VC-based method (97 arteries were evaluable with both
methods), 0.63 between the mixed method and the RG-based method (75 arteries), and
0.63 between the VC-based method and the RG-based method (72 arteries).
Figure 11. An example of data presentation: Column A, catheter angiography; Column B, 5 mm
thin-slab MIP of original data; Column C, 3D MIP of VC segmented data; Column D, 3D MIP of
RG segmented data
Table 1. Overall accuracy for the detection of coronary artery stenoses (at segment level). [95%
confidence limits]

                                  Mixed method               VC segmentation            RG segmentation
Accuracy                          71% (302/426) [66%; 75%]   68% (288/426) [63%; 72%]   57% (243/426) [52%; 62%]
Sensitivity                       35% (23/66) [23%; 48%]     32% (21/66) [21%; 44%]     29% (19/66) [18%; 41%]
Specificity                      78% (279/360) [72%; 81%]   74% (267/360) [69%; 79%]   62% (224/360) [57%; 67%]
Positive Predictive Value (PPV)   38% (23/61) [26%; 51%]     35% (21/60) [23%; 48%]     40% (19/47) [26%; 56%]
Negative Predictive Value (NPV)   86% (279/325) [82%; 89%]   86% (267/311) [81%; 90%]   88% (224/254) [84%; 92%]
Table 2. Overall accuracy for the detection of coronary artery stenoses (at artery level). [95%
confidence limits]

               Mixed method             VC segmentation          RG segmentation
Accuracy       74% (89/120) [65%; 82%]  71% (85/120) [62%; 79%]  56% (67/120) [46%; 65%]
Sensitivity    68% (23/34) [49%; 83%]   59% (20/34) [41%; 75%]   44% (15/34) [27%; 62%]
Specificity    77% (66/86) [66%; 85%]   76% (65/86) [65%; 84%]   60% (52/86) [49%; 71%]
PPV            72% (23/32) [53%; 86%]   71% (20/28) [51%; 87%]   71% (15/21) [48%; 89%]
NPV            93% (66/71) [84%; 98%]   92% (65/71) [83%; 97%]   93% (52/56) [83%; 98%]
Table 3. Overall accuracy for the detection of coronary artery stenoses (at patient level). [95%
confidence limits]

               Mixed method            VC segmentation         RG segmentation
Accuracy       73% (22/30) [54%; 88%]  73% (22/30) [54%; 88%]  50% (15/30) [31%; 69%]
Sensitivity    83% (15/18) [59%; 96%]  83% (15/18) [59%; 96%]  67% (12/18) [41%; 87%]
Specificity    58% (7/12) [28%; 85%]   58% (7/12) [28%; 85%]   25% (3/12) [5%; 57%]
PPV            88% (15/17) [64%; 99%]  88% (15/17) [64%; 99%]  80% (12/15) [52%; 96%]
NPV            100% (7/7) [65%; 100%]  100% (7/7) [65%; 100%]  100% (3/3) [37%; 100%]
4.4.
Optimization of Level Set-based 3D Segmentation
Paper V
Methods
To accelerate the level set-based vessel segmentation, a monotonic speed function is
proposed. In most level set-based segmentation applications, neighboring points on the
zero level set (the contour) will not propagate at the same pace, due to noise and numerical
errors. When the contour assumes a zigzag shape, the faster points in the neighborhood
(points 2 and 4 in Figure 12 A) will be “pushed” back by the curvature force in the next
iteration, while the trend of the local segment is to move forward. This temporary
backwards propagation significantly slows down the propagation. In [V], we proposed a
monotonic speed function:
\[
\frac{\partial \Phi}{\partial t} = w(v_t), \qquad
w(v_t) =
\begin{cases}
v_t & \text{if } v_t \cdot \mathrm{trend}(x,t) > 0 \\
0 & \text{if } v_t \cdot \mathrm{trend}(x,t) \leq 0
\end{cases}
\tag{1}
\]
where “trend” indicates whether the local contour is expanding or shrinking. Its
value can only be 1 or –1, since in the level set framework vt is a scalar. “trend” for each
pixel/voxel is initialized, using sign(vt), when it is added to the narrow band for
the first time, and is passed on to the neighbors during propagation. Its values need not be
the same across the image; however, once set, it cannot be changed, which prevents
the contour from passing a point twice. With this monotonic speed function, the
faster points (points 2 and 4 in Figure 12 B) will not be moved back by the
curvature force (since vt × trend < 0), but stay in place and wait for the neighbors to
catch up. As the contour becomes flat again, vt at points 2 and 4 will have the same
sign as trend, and all points can move forward together. In this way, the expanding
parts of the contour keep expanding, and the shrinking parts keep shrinking in a
coherent manner, until they reach the boundary of the segmented object.
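Equation (1) is straightforward to express in code. The following is a minimal sketch of the speed gating only; in the actual method the trend values are seeded from sign(vt) when points enter the narrow band and are then propagated to their neighbors:

```python
def monotonic_speed(v, trend):
    """Monotonic speed function of Eq. (1): a point may only move in its
    recorded trend direction; opposing (backward) updates are set to zero."""
    return [vi if vi * ti > 0 else 0.0 for vi, ti in zip(v, trend)]

# A locally expanding contour segment (trend = +1) with zigzag raw speeds:
# the backward-moving points are frozen instead of being pushed back.
w = monotonic_speed([0.8, -0.3, 0.9, -0.1, 0.7], [1, 1, 1, 1, 1])
# A shrinking segment (trend = -1): only negative speeds pass through.
shrink = monotonic_speed([-0.5, 0.2], [-1, -1])
```

The frozen points simply wait for their neighbors to catch up, which is what makes the propagation coherent.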
Figure 12. A: Example of temporary backwards propagation. B: Coherent propagation. The
arrow indicates the global propagation trend, the gray line is the current contour, and the
dashed line is the contour after one iteration.

Coherent propagation also makes it possible to detect local convergence, and
corresponding points can simply be put to “sleep” and excluded from further updating. As
these points may be in just a temporary “equilibrium” (w(vt) = 0), a wakeup mechanism is
implemented. To compensate for the overshoot at the end of single-direction propagation, a
periodic forward and backward propagation is carried out.
Results
The proposed method was implemented as a single-thread application on a Mac Pro with a
2.93GHz Intel Xeon CPU. In general, a 5–15 times speed increase was achieved compared
with the conventional sparse field level set method. Typical computation time for a
512×512 image was 0.1–0.2 s. With the 3D version, 12 cases of abdominal aorta CTA data
were tested. A single seed region was provided for each case. A speed increase of about
10–20 times was achieved. Typical computation time for a 256×256×256 volume was 5–
12 s. The difference between our method and the conventional method was measured by
comparing the distance (given directly by the level set function) of the points on the one
voxel wide narrow band. In the 12 datasets, the distance between two 0-level sets varied
from –1.80 to 1.46 voxel units (a negative distance means that the contour point from the
proposed method is outside the contour from the conventional method).
5. DISCUSSION
5.1.
The Clinical Role of Coronary CTA and its Future
The substantial advances in coronary CTA have resulted in an increase in the use of this new
technique in different clinical scenarios in the last few years. However, in view of its
inherent limitations and unavoidable disadvantages, the medical community must balance
the diagnostic information against the risks of radiation and iodinated contrast agents.
Considerable controversy about its role in clinical practice has arisen. The discussion of
the appropriateness and validation of coronary CTA has drawn much attention from
practitioners who wonder how they should incorporate coronary CTA into their practice.
Although there is still no universal consensus at this time, the picture of coronary CTA has
become clearer as more and more study results on different aspects of coronary CTA have
become available. In June 2008, the American Heart Association published a scientific
statement on noninvasive coronary artery imaging by MRA and MDCT [39], which gives
a systematic review of the coronary CTA history and clinical research papers related to this
technology, and offers six recommendations regarding clinical use. The six
recommendations are summarized here:
(1) Neither coronary CTA nor MRA should be used to screen for CAD in asymptomatic
populations;
(2) Both multivendor and additional multicenter validation studies are needed;
(3) The potential benefit of noninvasive coronary angiography is likely to be greatest
for symptomatic patients with intermediate risk for CAD after initial risk stratification.
Coronary CTA is not recommended for high-risk patients who are likely to require
intervention and invasive catheter angiography for definitive evaluation;
(4) MRA, when it is available, is preferred over CTA for anomalous coronary artery
evaluation;
(5) Reporting of coronary CTA and MRA results should describe technical quality,
coronary findings, and significant non-cardiac findings;
(6) Continued research is encouraged to non-invasively detect, characterize, and
measure atherosclerotic plaque burden, as well as its development over time.
These guidelines highlight the role of coronary CTA as a potential alternative to
invasive CA as a CAD diagnosis tool in carefully selected populations. However, it should
be noted that these recommendations, which weigh the advantages of coronary CTA
against its disadvantages, are based on a review of current knowledge. Encouraged
by the wide interest in this non-invasive method for CAD diagnosis, new developments in
imaging techniques, scanning protocols, and post processing methods are still under way
that will further enhance the reliability and expand the spectrum of CT in cardiac imaging
in the near future. A comparison of important technical parameters and medical impact
factors between CA and coronary CTA is given in Table 4. It is worth noticing that another
non-invasive coronary artery imaging technique, MRA, has also been under rapid
development in recent years. Compared with CTA, MRA does not require X-ray exposure
and is therefore safer. However, its high cost and technical difficulties currently limit
coronary MRA to only a few well-equipped research-oriented healthcare centers. Due to
the focus of this thesis, the development of cardiac MRI techniques is left out of the
discussion. Interested readers are referred to [91]. Below, a few important aspects of future
cardiac CT exams are discussed.
Low X-ray dose: The biggest issue with coronary CTA is the radiation exposure
involved in the procedure. Although ionizing radiation from natural sources is part of our
daily existence (background radiation including air travel, ground sources, and television),
it is important for a medical doctor to balance the potential risks of a CT scan with the
potential benefits. This is particularly true for diagnostic tests that will be applied to
healthy individuals as part of a disease screening or risk stratification program. Recent
studies using the prospective ECG-gating technique, so-called “step and shoot”, to limit X-ray exposure have achieved radiation doses as low as 1.1-3 mSv with acceptable diagnostic
image quality [26]. Using prospectively electrocardiogram-triggered high-pitch
spiral acquisition on dual source CT scanners, a consistent dose below 1 mSv was reported
[11]. Although the cardiac function analysis is sacrificed as images are acquired only
during one selected cardiac phase, these promising results might eventually stimulate
introduction of modern protocols to reduce radiation doses and widen the indications of
coronary CTA, for example to improve the “triple rule out” exam protocol used to triage
chest-pain patients in emergency departments [37].
Table 4. Comparison of technical parameters and medical impact factors [11][43][92]

Methods                     Coronary CTA    CA
Spatial Resolution          0.24-0.6 mm     0.2 mm
Temporal Resolution         75-165 ms       5-20 ms
Radiation Dose              0.8-18 mSv      5-10 mSv
Amount of Contrast Agent    50-120 ml       ~50 ml
Accuracy (stenosis)         +++             ++++
Plaque Assessment           +               ++ (IVUS)
Intervention                -               +
Higher temporal resolution and scan speed: The spatial resolution of coronary CTA
is 0.24-0.6 mm, which is roughly comparable to the 0.20-0.25 mm of CA. However, the
temporal resolution of coronary CTA is much lower than CA (75-175 ms of CTA
compared to 5-20 ms of CA) [43]. This implies that the image quality is strongly related to
the patient’s heart rate. However, these numbers are constantly improving due to technical
development. The record of high temporal resolution has improved from 175 ms to 75 ms
in just five years with the introduction of a new generation of DSCT (Somatom Definition
Flash, Siemens Healthcare). This allows a heart with a fast or irregular heart rate to be
reliably displayed without using beta blockers. An additional benefit from higher temporal
resolution is shorter scanning time, which in turn reduces the X-ray dose. Wider
coverage is another way to reduce the scanning time. The coverage of the new 320-slice
CT is now 16 cm, which allows whole heart imaging during one heart beat without moving
the bed. It is foreseen that in the near future the innovations of the imaging technique will
eventually free coronary CTA from ECG-gating and provide even higher spatial resolution
[93].
Quantitative measures of coronary stenosis severity: The subjectivity of visual
interpretation of coronary CTA is an important factor contributing to the intra- and inter-observer
variability in the assessment of lesion severity. Recent developments in
CT imaging techniques have greatly improved the image quality and spatial resolution of
coronary CTA images. This makes more precise evaluation of coronary stenosis possible.
Several studies addressing the possibility of quantitative measurements of vessel lumen
and stenosis severity have revealed a good correlation between coronary CTA and IVUS
and good inter-observer agreement [94][95][96]. However, it should be noted that the
image display setting (window setting) has an important influence on the CT-derived
measurements of lumen [97]. Software that can auto-adapt the display setting according to
local attenuation distribution is needed to further improve the intra- and inter-observer
agreement and reduce user interaction. Another factor influencing the accuracy of stenosis
evaluation is the location of the MPR plane on which the vessel diameter is measured. As
most commercial software only provides centerline-based 2D cross-section segmentation,
it is left to the user to decide where to put the MPR plane. With some 3D segmentation
methods, such as the level set method developed in [V], the narrowest position can be
automatically detected and more thorough quantitative measurement, such as curvature
analysis and blood flow simulation, can be provided. This might provide clinically
important information in estimating cardiac risk and guiding treatment. In addition,
measurement of contrast opacification gradients from temporally uniform coronary CTA
can also provide indirect quantitative information about the severity of the stenosis. A
preliminary study from Steigner et al. (using our software) demonstrates feasibility and
reproducibility in patients with normal coronary arteries [94].
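As a sketch of the display auto-adaptation suggested above, a window setting could be derived from local attenuation samples. The half-way (FWHM-style) centring rule used here is a common convention in CT lumen measurement, adopted as an assumption rather than taken from the thesis software.

```python
from statistics import mean

# Derive an MPR window setting from local attenuation: centre the window
# halfway between contrast-filled lumen and surrounding background, and
# span the local contrast. All values are in Hounsfield units.

def auto_window(lumen_hu, background_hu):
    """Return (window_center, window_width) from local HU samples."""
    lumen = mean(lumen_hu)              # samples inside the enhanced lumen
    bg = mean(background_hu)            # samples in surrounding soft tissue
    center = (lumen + bg) / 2.0         # FWHM-style edge criterion
    width = abs(lumen - bg)
    return center, width
```

Recomputing the setting per vessel segment would make diameter measurements less dependent on a manually chosen global window.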
Function assessment: Cardiac function assessment is an important part of CAD
evaluation. As coronary CTA acquired with retrospective ECG-gating produces 4D data
throughout the cardiac cycle, with the help of certain post-processing software, it is
possible to quantitatively evaluate the patient’s global cardiac function, for example, the
ejection fraction, and regional function, for example, wall thickening [38]. Studies have
revealed a good correlation between coronary CTA and transthoracic echocardiography,
and a moderate correlation against SPECT [98][99]. Quantitative myocardial CT perfusion
remains a major challenge because of the rapid cardiac motion and considerable X-ray
exposure. Although a few animal studies have proved its feasibility [100], no clinical study
on a large number of patients has been reported so far. The “iodine map” created with the
dual energy CT imaging technique highlights an alternative method of evaluating the blood
flow of myocardium [101]. Another application of CT’s 4D ability is cardiac valve
evaluation. With the introduction of the dual source CT (DSCT), the increased spatial and
temporal resolution of the latest 64-slice CT scanners makes functional valve-defect
visualization possible [102]. It should be pointed out that pure ventricular function analysis
is not the focus of multi-slice CT, since other non-invasive imaging modalities, such as
MRI and echocardiography, which do not require ionizing radiation or administration of
potentially nephrotoxic contrast media, are available. In most cases, functional assessment
will be carried out in addition to coronary CTA, and the imaging protocols are the same.
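As an illustration of the global function indices mentioned above, ejection fraction follows directly from the segmented end-diastolic and end-systolic ventricular volumes. Treating volume as voxel count times voxel size is a simplification for the sketch; it is not the thesis's measurement pipeline.

```python
# Global cardiac function from retrospectively gated 4D CT:
# EF (%) = (EDV - ESV) / EDV * 100, with volumes taken from the
# segmented ventricle at end-diastole and end-systole.

def volume_ml(n_voxels, voxel_mm3):
    """Ventricular volume in ml from a voxel count and voxel size (mm^3)."""
    return n_voxels * voxel_mm3 / 1000.0   # mm^3 -> ml

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction in percent."""
    return (edv_ml - esv_ml) / edv_ml * 100.0
```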
Plaque imaging and analysis: Coronary CTA is the most sensitive test for the
detection of soft and hard plaques, except for IVUS, which requires invasive
catheterization and insertion of the IVUS probe into every major branch, thus carrying
many inherent risks. Coronary artery calcium (CAC) score calculated from the calcified
plaque area and calcium lesion density is one of the most successful applications of CT
plaque imaging. Recent data provide support for the concept that the CAC score is a
reliable risk assessment tool for CAD, especially in terms of the incremental prognostic
value for populations with intermediate risk [103]. More recently, with the increasing
spatial resolution of CT scanners, clinicians have started to pay more attention to the soft
plaques, since some of them, so-called vulnerable plaques, may rupture in the future and
cause the sudden death of the patient [36][104][105]. Although there is currently no well-accepted way to characterize these vulnerable plaques, in many cases, they have a thin
fibrous cap and a large and soft lipid pool underlying the cap. The spatial resolution of
current CT scanners still prevents direct segmentation of the fibrous cap and the lipid
core, but Becker et al. showed that the tissue attenuation of non-calcified plaques can be
used to predict their predominant component, i.e. lipid-rich and fibrous-rich plaques [106].
The developing coronary CTA technique may eventually allow quantitative analysis of the
composition of a plaque [36] and its geometric character [104][105]. This may lead to
more accurate prediction of such events and better treatment planning for these lesions, as
some studies based on IVUS exams have suggested that most myocardial infarctions occur
in areas with extensive atheroma within the artery wall, but very little stenosis of the
arterial lumen [107].
5.2.
Stenosis evaluation in 3D
Among the reviewing methods studied, MPR is recognized as the most accurate diagnostic
tool for stenosis evaluation in coronary CTA [47][108]. However, it is also the most time-consuming method. 3D MIP and VRT are attractive alternatives and have been widely used
for viewing vascular structures in clinical practice. Studies have
proved their accuracy for stenosis evaluation [108]. Increased inter-observer concordance
has also been reported when using these techniques compared to using 2D viewing
methods [109]. However, as shown by Ferencik et al. [47], the diagnostic accuracy
decreased when using a 3D visualization technique for stenosis evaluation in coronary
CTA (accuracy for detecting stenosis was 91% for oblique MPR, and 73% for VRT). Their
study was quoted by the Society of Cardiovascular Computed Tomography Guidelines
Committee as evidence against recommending 3D visualization techniques for the
assessment of coronary stenosis [90]. However, in Ferencik’s study, the 3D volumes were
prepared by manually isolating the heart, and no coronary artery segmentation was
performed. It was thus easy to anticipate that the accuracy would be low as the ventricles
conceal almost half the surface of the coronary arteries. Moreover, the VRT images were
pre-rendered with a fixed window setting and limited angles, which are factors that are
known to interfere with the diagnostic accuracy of 3D MIP or VRT [110].
In our clinical study presented in Paper IV, 3D volumes were prepared with modern
software that can segment the coronary tree from all other non-vascular structures.
Observers were also allowed to freely rotate and reset the window settings. Although the
overall diagnostic accuracy was still lower when using the segmented data (especially with
the region growing-based method), the differences between the methods became negligible
when cases that were non-evaluable due to imperfect segmentation were excluded. In practice, for a
user starting with 3D viewing, the non-evaluable arteries can be viewed with MPR or CPR
methods, as most software allows fast switching between 2D and 3D images if they are not
shown side by side. In view of the high NPV of 3D visualization methods, the user can
choose the 3D segmented data first for an overview and use MPR only on the suspected
lesions or the missing branches. As coronary CTA is recommended for intermediate risk
patients, image quality is expected to be better in clinical practice. Using segmented 3D
data for screening stenosis could shorten reading time.
Many methods, as listed in section 2.4, have been developed for coronary artery
segmentation [111][112]. Due to the complicated structure of the coronary artery tree,
most of them either yield imperfect segmentation results or are computationally too
expensive to be clinically useful. To date, their spread into the clinical routine appears to
have been rather limited. The controlled region growing method [58] still seems to be the
industrial standard. In contrast to such segmentation-based tools, our approach does not
aim to define the exact border of the vessel, but rather to segment each coronary artery
with some surrounding tissue from other vascular structures, each with its own surrounding
tissue. There is no threshold involved, as the principle is just to separate the vessels and
ventricles from each other. This may seem to contradict the definition of segmentation, but
since the results are shown with MIP or VRT, the exact delimitation of the lumen is left to
the eye of the user, who may interactively adjust the window setting or VRT transfer
function whenever needed. This approach facilitates preservation of detailed information
about the coronary artery tree, e.g., the tips of minor branches, which may otherwise be
lost as a result of under-segmentation. Moreover, it may prevent the introduction of false
stenoses arising from imperfect segmentation, which may be crucial for diagnostic
accuracy. The clinical study indeed found that the VC method generated more evaluable
branches and resulted in better diagnostic accuracy than the RG method. In most cases,
soft plaques are also included in the segmentation result (e.g. Figure 13). This gives
physicians more confidence in their diagnoses and may eliminate the need to switch to 2D
slices. High agreement between the VC-based method and the reference (mixed) method
was also found for this innovative segmentation method. It is worth noticing that the VC-based method creates 3D representations of the coronary tree that are similar to the images
from invasive angiography and are thus familiar to cardiologists. Thus, it seems to be a
promising post-processing method for accurately showing the distribution of coronary
lesions, and one that is understandable by referring physicians and patients (e.g. Figure
13).
Jinzaki et al. [113] have proposed a method called the Angiographic View Image
(AGV), which also used the “separation” concept instead of segmentation, similar to the
VC method. The AGV image removes heart chambers and great vessels and “keeps the
coronary artery untouched”. High sensitivity (98%) and NPV (99%) were reported from a
pilot study [113]. No significant difference in accuracy was found between the AGV image
(91%) and the mixed 2D viewing method combining an axial image, partial MIP, MPR
and CPR (94%). These numbers and our clinical results convey a clear message: when
perfect segmentation is unachievable, providing the “separation” results together with
uncertainty information can be a better strategy than presenting segmentation results
containing potentially erroneous information.
Figure 13. An example showing that the 3D representation of coronary CTA helps cardiologists
to understand the coronary angiography. The arrows point at the occlusion in different
presentation methods. A, coronary angiography; B, 5 mm thin-slab MIP of the original data; C,
3D MIP of VC segmented data; D, 3D MIP of RG segmented data
5.3.
Software development from a radiologist’s perspective
As coronary CTA has become an area of great interest in medical imaging, many medical
image workstation vendors have been encouraged to develop coronary artery analysis
software, and there is considerable competition in this field. There are already several
commercial software systems available that can perform most of the tasks described in the
last chapter, such as Syngo Circulation from Siemens Healthcare, Brilliance Workspace
from Philips Medical Systems, and AW Workstation from GE Healthcare. Among
numerous differences between our software and the others, one factor distinguishing it
from commercial workstations is that it is developed mainly from a radiologist’s
perspective, and radiologists are involved in the whole development procedure. Classical
software development starts with software requirement analysis, and then software design,
coding and testing. Usually, software requirement analysis is carried out by physicians and
engineers working together. Once the requirements have been “clearly” specified, the
engineer will work on the remaining steps according to his understanding of the
requirements. However, due to limitations in the clinician’s knowledge about image
processing, the defined goal may be difficult to achieve. Conversely, because they lack a
medical background, technicians seldom question the definition of the users’ requirements
once it has been established, even though meeting them may prove difficult in practice.
However, in a team where most members have both a solid medical background and a
good overview of medical image processing and visualization techniques, it is possible to
work together on the defined software requirements and the chosen method. Sometimes the
definition of the requirements can be changed after a thorough understanding of the chosen
algorithm is gained. So, instead of focusing on finding the right answer to the question, as
most developers do, we are trying to find the right answer to the right question.
The choice between “segmentation” and “separation” is a typical example of redefining
the question when no good answer can be found to the original question. Traditional vessel
segmentation focuses on defining the exact area of the vessel, which means detecting the
edges of the contrast-enhanced lumen in coronary CTA. However, when it comes to
coronary CTA, the vessels are so thin (usually <15 voxels in diameter) that a small error in
the segmentation will lead to a misinterpretation. Although there are more sophisticated
algorithms available, such as the level-set method, these are generally computationally too
expensive to be clinically useful. Even then, the accuracy of the segmentation results is
questionable. In our project, we instead decided that the eventual goal of coronary artery
segmentation would be to provide an intuitive and reliable 3D visualization of the coronary
artery tree for diagnostic purposes.
Another example of choosing proper image processing tools for the right application in
our project is the use of multi-layer QuickTime Virtual Reality (QTVR) Object Movies to
visualize high-dimensional datasets and their post-processing results [114]. The QTVR
technique is a relatively old technique that was introduced as an alternative to real-time
VRT. In QTVR movies, images in different projections are pre-rendered and arranged in a
grid, so that users can rotate the object around two axes by moving the grid horizontally or
vertically, relative to the current view window. Most engineers have abandoned it as real-time VRT has become increasingly practical on ordinary PCs with the evolution of
software and hardware. However, we noticed that there was a clinical need to transfer
information between radiographer and radiologist or between radiologist and cardiologist.
The most common practice is to save sequences of projections as movie clips, thus
sacrificing most of the interactivity. This becomes even more problematic when high-dimensional datasets or post-processing results are involved, as visualization of these data
is usually highly dependent on particular software and hardware settings. In one study, we
extended the QTVR format to a high dimension visualization method by including
multiple layers in each projection [115]. By grouping the layers, a 5D dataset can also be
presented. The preliminary results tested on 3.5D datasets (3D volume with segmentation
information), 4D datasets (multiple-phase cardiac CT datasets) and 5D datasets (multiple-phase PET-CT datasets) show that the multi-layer QTVR movie is an attractive solution
for intra- and inter-hospital communication and medical education [116], offering the user
a high degree of interactivity at low cost. The interaction speed at review may even
exceed that of a high-end workstation, and the compact file size facilitates data
transfer.
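The interaction model of a QTVR-style object movie can be illustrated by the mapping from a mouse drag to a pre-rendered frame in the grid: horizontal motion steps through azimuth frames (wrapping around), vertical motion through elevation rows (clamped). The grid dimensions and pixels-per-step used here are invented values, not those of the actual movies.

```python
# Map a 2D mouse drag to a (row, col) index in a grid of pre-rendered
# projections. Azimuth (columns) wraps around a full rotation; elevation
# (rows) is clamped to the rendered range.

def drag_to_frame(row, col, dx, dy, n_rows=19, n_cols=36, px_per_step=10):
    """Return the (row, col) of the pre-rendered frame after a drag."""
    col = (col + dx // px_per_step) % n_cols                # azimuth wraps
    row = min(n_rows - 1, max(0, row + dy // px_per_step))  # elevation clamps
    return int(row), int(col)
```

Because the frames are pre-rendered, this lookup is the entire cost of interaction at review time, which is why playback can feel faster than live volume rendering.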
Another advantage of programming as a radiologist is that medical knowledge
sometimes also helps us to improve the algorithm design. For example, in the ascending
aorta tracking method mentioned in Paper III, we decided to let the aorta tracking stop
when the aortic valve was reached, as detected by an increased intensity variation in the
center of the cross-section. In order to make sure the valve will appear in the center of the
2D cross-section images, the direction of the cross-section plane is calibrated every 10 mm
by linear regression of the centers of the last ten cross-sections. This design is based on the
anatomical character of the aortic valve and the fact that most coronary CTA is acquired
during diastole. (The algorithm also works on data acquired during systole, as the valves
will not totally disappear from the aorta cross-section.) Compared to other authors’
algorithms that use one-direction, slice-by-slice region growing and apply a stop criterion
if the size of the intersection area between the segmented region and the one in the
preceding slice drops below a defined minimum, our method is more robust in certain
extraordinary cases, as shown in Figure 14.
Figure 14. A special case on which the one-direction slice-by-slice region growing failed to find
the right coronary ostium due to the unusual angle between the ascending aorta and left
ventricle (the upper images). Our method can avoid this error by adjusting the growing
direction (the bottom images). Figure from [III]
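The direction-calibration step described above can be sketched as a least-squares line fit through the last ten cross-section centres. The stop test on centre-region intensity variation is shown as a simple standard-deviation threshold; both the threshold value and the exact criterion are illustrative, and the actual implementation in Paper III may differ.

```python
import numpy as np

def calibrated_direction(centres):
    """Unit tracking direction from recent cross-section centres.

    centres: (N, 3) array-like of centre points; the slope of the
    least-squares line through them gives the new plane normal.
    """
    c = np.asarray(centres, dtype=float)
    t = np.arange(len(c), dtype=float)
    t -= t.mean()                              # parameter along the vessel
    # per-axis slope of the least-squares line through the centres
    d = (t[:, None] * (c - c.mean(axis=0))).sum(axis=0) / (t ** 2).sum()
    return d / np.linalg.norm(d)

def valve_reached(centre_patch_hu, std_threshold=150.0):
    """Heuristic stop test: intensity variation at the slice centre."""
    return float(np.std(centre_patch_hu)) > std_threshold
```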
Software development from a radiologist’s perspective also offers ways to improve the
clinical workflow. In paper III, we proposed fully automatic background pre-processing
when the image was received by the PACS workstation. Letting the workstation software
“think ahead” spares the user from waiting when they begin reviewing the CTA exam. Although four
or five minutes of auto-processing time may seem too long to be clinically applicable,
a workstation serving one CT scanner will still have ample time to perform the whole
procedure, as the interval between two exams sent from the scanner is rarely shorter than
10 minutes. Since the automatic coronary artery extraction pipeline is designed as a single-thread program in the background, the user can simultaneously carry out most 2D image
viewing tasks in other exams without a noticeable delay in performance on our dual-core
Mac system.
In [117], we generalized this radiologist-centered workflow to all computer-aided image
analysis by proposing a standardized inter-process communication protocol between a
PACS workstation and image processing software. As diverse clinical requirements
emerge with new clinical discoveries and rapidly developing post-processing techniques, a
single PACS supplier can no longer provide all desired functions in the self-centred
developing manner. Hospitals are forced to buy different PACS workstations from
different vendors, and clinicians have to move between different workstations for different
analyses. As more and more image analysis is incorporated into clinical routines, the
diagnostic workflow is disturbed by the separate location of each image processing
module. This fact highlights the need for an expandable PACS workstation system
supporting the integration of third party function modules to provide the flexibility
required in today’s clinical routines. To support clinical adoption and integration of new
image processing and visualization tools, we propose a browser-server style method based
on inter-process communication (IPC) techniques, in which the PACS workstation shows
the front-end user interface defined in an XML file while the image processing software
runs in the background as a server. IPC techniques allow efficient exchange of image data,
parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation
developer or image processing software developer does not need detailed information
about the other system, but will still be able to achieve seamless integration between the
two systems, and the IPC procedure is totally transparent to the final user. In this feasibility
study, the proposed method was implemented between the open source PACS workstation
software OsiriX [88] and the medical image-processing software platform MeVisLab.
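A hypothetical fragment of the kind of XML front-end definition exchanged over such an IPC channel is sketched below: the workstation renders the controls and sends name/value pairs back to the processing server. The tag and attribute names are invented for illustration; the actual protocol in [117] may differ.

```python
import xml.etree.ElementTree as ET

# Invented example of a UI definition the PACS workstation could render.
UI_XML = """
<module name="coronary_extraction">
  <parameter name="seed_threshold" type="float" default="180"/>
  <parameter name="smoothing" type="int" default="2"/>
  <button action="run"/>
</module>
"""

def parse_parameters(xml_text):
    """Extract parameter defaults the workstation would display and return."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): p.get("default")
            for p in root.findall("parameter")}
```

Keeping the UI declarative means neither side needs detailed knowledge of the other: the workstation only renders what the XML describes, and the server only reads back named values.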
However, there are also limits to performing programming as a radiologist. The most
prominent is the limited knowledge of various image-processing techniques, which
prevents the programmer from using more sophisticated techniques. It may also take a
longer time to choose proper algorithms for a certain task than for experts in image
processing. Another drawback is that lack of training in software engineering may make
maintenance of the software more difficult. We are aware of these problems and have tried
to minimize the negative effects by enrolling experienced technicians in our group,
consulting with image processing experts, and ensuring consistent training and self-learning. The goal, after years of practice and training, is to break down the knowledge barrier
between medicine and image processing.
5.4.
Disease-Centered Software Development
An engineer is more often disposed to develop software centred on methods, which means
finding different medical applications of one or several methods. That is important because
one improvement in the image-processing field will benefit many aspects of medical
practice. On the other hand, researchers in the field of medicine often like to collect as
many image processing tools as possible to help them study, analyze and understand one
particular disease or clinical problem. However, disease-centered software development
should not only be understood as simply studying, understanding and implementing each
individual method or algorithm, but also as a procedure of systemically integrating
different methods, knowledge and experience into one functional system that can serve as
a powerful tool for the research goal. This can be explained at three different levels:
Firstly, integrating different visualization techniques and image processing methods can
provide a clearer picture of the targeted lesion. For example, combining MIP/VRT images
with MPR or curved MPR images for stenosis analysis will improve the accuracy of the
diagnosis [39][47]. By introducing quantitative measurements, the problem of inter-observer variation will hopefully be reduced.
Secondly, integrating different software modalities or even imaging modalities gives a
thorough evaluation of the targeted disease. Diagnosis and assessment of a disease usually
involve many factors. In order to investigate and control the cardiovascular risk of CAD
patients, cardiologists need to look into not only vessel narrowing measurements but also
atherosclerotic plaque burden analysis and cardiac function studies on subjects such as
myocardial perfusion from SPECT or CT perfusion imaging. Instead of listing them
separately, disease-centered software should be able to fuse the information from different
modalities to provide an intuitive visualization of the linkage between morphological
changes and function changes [118].
Thirdly, integrating the knowledge learned from different aspects of image processing
may help us understand the targeted disease better. Some pioneers in the coronary CTA
research field have attempted to extend our knowledge about the progress of CAD by
combining blood flow simulation with atherosclerotic plaque modeling. Olgac et al.
reported a study on using coronary CTA data and blood flow simulation to study the
transport of low-density lipoprotein (LDL) from blood into the arterial walls [104]. This
highlights the possibility of using imaging techniques to predict the development of
atherosclerotic plaques. In their study, at least three different kinds of knowledge — image
segmentation, blood flow simulation and LDL transport modeling — were required, which
can be seen as a typical example of disease-centered study served by interdisciplinary
collaboration.
In our study about CAD, we have been trying from the very beginning to integrate
knowledge from different fields. Although we have so far not been able to achieve the
second and third level, due to limited time and human resources, the long-term goal is to
explore all possibilities to expand our limited knowledge of CAD.
5.5.
Limitations and Future Work
A major limitation of the current version of the software is the lack of a calcification
segmentation function, which makes stenosis evaluation very difficult in segments with
heavy calcification. Centerline tracking does not perform well in these segments either.
Although the latest version uses a threshold filter set at 650 HU to suppress the
influence of calcium, more sophisticated filters or segmentation algorithms will be
needed to correct the distorted segments. Another shortcoming is that the quantitative
measurement function in the current software is based on 2D segmentation, so the
evaluation results depend on the location and orientation of the chosen MPR plane. This
becomes a problem where the centerline is not well centered, since the cross-sectional
plane may then fail to be perpendicular to the vessel's direction. Using 3D morphological
measurements based on a more advanced 3D segmentation algorithm (paper V) may not only
prevent such errors but also provide the user with a more intuitive 3D model. A
continuing study addressing these issues is planned for the near future.
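The calcium-suppression step mentioned above can be sketched in a few lines. The 650 HU cutoff comes from the text; the function name, the replacement value and the NumPy array representation are illustrative assumptions, not the actual implementation.

```python
import numpy as np

CALCIUM_THRESHOLD_HU = 650  # cutoff used in the latest software version

def suppress_calcium(volume_hu, replacement_hu=300):
    """Clamp voxels brighter than the calcium threshold.

    volume_hu: 3D NumPy array of CT values in Hounsfield units.
    replacement_hu: hypothetical value approximating contrast-filled lumen,
    chosen here only for illustration.
    """
    out = volume_hu.copy()
    out[out > CALCIUM_THRESHOLD_HU] = replacement_hu
    return out

# Toy volume with one "calcified" voxel (700 HU)
vol = np.array([[[100, 400], [700, 250]]], dtype=np.int16)
filtered = suppress_calcium(vol)
# Only the 700 HU voxel is replaced; all others are untouched.
```

In practice such a hard threshold can distort the lumen boundary next to a plaque, which is exactly why the text calls for more sophisticated filters.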
The continuing evolution of coronary CTA imaging brings both opportunities and
challenges to medical image processing researchers. While the introduction of 320-row
and dual-source CT scanners keeps increasing physicians' confidence in this technique
[102][119], questions regarding accuracy and short- and long-term prognosis still need
to be answered. Our future goals include providing an accurate and efficient
quantitative measurement tool for stenosis evaluation
by extending current 2D segmentation to 3D segmentation. Plaque segmentation and
characterization will also be the focus of future studies.
In addition to technological development, further clinical evaluation of the efficiency
and accuracy of combining the VC method with 2D visualization methods is planned.
Intra- and inter-observer agreement also requires close investigation, e.g. using
receiver operating characteristic (ROC) analysis [120].
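As a concrete illustration of how ROC analysis quantifies such ratings, the sketch below computes the area under the ROC curve from ordinal observer scores via the equivalent Mann-Whitney statistic; the ratings and reference labels are hypothetical.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: confidence ratings (higher = more suspicious of stenosis).
    labels: reference standard (1 = significant stenosis, 0 = none).
    A tie between a positive and a negative rating counts as half a win,
    matching the trapezoidal ROC area for ordinal data.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 5-point ratings from one observer against a reference standard
labels = [1, 1, 0, 0, 1, 0]
scores = [5, 4, 2, 3, 3, 1]
auc = roc_auc(scores, labels)  # 8.5 / 9, about 0.944
```

Comparing AUC values between observers, or between reading methods, is one standard way to express the agreement discussed above.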
The development of another non-invasive coronary angiography technique, coronary MRA,
has also drawn our attention. According to an early feasibility study and user feedback,
most functionalities of our current software also work on MRA data, although further
studies are needed to achieve optimal results. More importantly, the possibility of
integrating information from different imaging modalities, such as CT and MR, CTA and
SPECT, or CTA and invasive CA, would be even more valuable for extending the view of
radiologists and cardiologists.
Multidisciplinary cooperation projects, such as atherosclerotic plaque modeling, are
another natural continuation of the work reported in this thesis.
6. CONCLUSION
In this study, we have developed an open-source software module for coronary CTA
analysis from scratch by combining knowledge of medicine and image processing. Its
performance was evaluated in a clinical setting. The following conclusions can be drawn.
C1. In paper I, a competing fuzzy connectedness tree algorithm was proposed that yields
both coronary artery segmentation and skeletonization. By adding a tree structure
during the fuzzy connectedness computation, misclassifications in parts of the volume
where the connectedness strength to both seeds is equal can be avoided. Accurate
results were achieved with less user interaction than with conventional fuzzy
connectedness methods. After optimization, the computational speed was satisfactory on
ordinary PCs.
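The core idea of competing seeds can be illustrated with a minimal 2D sketch, in which each pixel is assigned to the seed that reaches it with the strongest fuzzy connectedness (the weakest affinity along the best path), propagated best-first with a priority queue. This is a simplification for illustration only: the actual algorithm in paper I additionally maintains a tree structure to resolve the ties that this sketch settles on a first-come basis, and the pixelwise affinity used here is an arbitrary stand-in.

```python
import heapq

def competing_fuzzy_connectedness(affinity, seeds):
    """Label each pixel with the seed of strongest fuzzy connectedness.

    affinity: 2D grid of per-pixel affinities in [0, 1]; the affinity of a
    step between 4-neighbours is taken as the min of the two pixel values
    (a simplification). Path strength = weakest step on the path.
    seeds: {label: (row, col)}, e.g. one artery seed and one background seed.
    """
    rows, cols = len(affinity), len(affinity[0])
    strength = [[0.0] * cols for _ in range(rows)]
    label = [[None] * cols for _ in range(rows)]
    heap = []
    for lab, (r, c) in seeds.items():
        strength[r][c] = 1.0
        label[r][c] = lab
        heapq.heappush(heap, (-1.0, r, c, lab))
    while heap:
        s, r, c, lab = heapq.heappop(heap)
        s = -s
        if s < strength[r][c]:
            continue  # stale queue entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ns = min(s, affinity[r][c], affinity[nr][nc])
                if ns > strength[nr][nc]:
                    strength[nr][nc] = ns
                    label[nr][nc] = lab
                    heapq.heappush(heap, (-ns, nr, nc, lab))
    return label

# Two bright paths separated by a low-affinity gap; one seed per region
affinity = [[0.9, 0.9, 0.1],
            [0.1, 0.1, 0.1],
            [0.1, 0.9, 0.9]]
label = competing_fuzzy_connectedness(affinity, {"A": (0, 0), "B": (2, 2)})
# (0, 1) is reached strongly from seed A; (2, 1) from seed B.
```

The competition means the background seed claims low-density soft tissue around the artery instead of the artery seed leaking into it, which is the behaviour described for the VC method.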
C2. In paper II, an interactive software module was developed that guides the user
through the vessel extraction step by step. Multiple visualization techniques were
integrated into the user interface to facilitate user intervention in the results and
provide good overviews of the coronary arteries. In paper III, this software was further
developed to integrate an automatic seeding method, so that segmentation starts
automatically when appropriate image data are sent to the workstation. In preliminary
experiments, visually accurate segmentation results were achieved with limited user
interaction. The accuracy of the centerline tracking was acceptable when compared with
manually created centerlines, and superior to most existing vessel extraction methods.
C3. In paper IV, a clinical evaluation was performed. The diagnostic accuracy was
relatively low when using only segmented 3D data from our software, although better than
with the region growing-based method. The relatively high NPV of the 3D methods suggests
the potential of combining them with 2D reviewing techniques, using 3D visualization to
quickly screen for suspected lesions. The fuzzy connectedness-based method generates
more evaluable arteries and has greater accuracy than the region growing-based method.
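For reference, the NPV and the other per-segment measures follow directly from the 2x2 contingency table; the counts below are hypothetical and serve only to show the arithmetic.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table metrics for a per-segment accuracy study.

    A high negative predictive value (NPV) means that a negative 3D reading
    reliably rules out significant stenosis, which is what motivates using
    3D visualization as a quick screening step.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 200 segments, 20 with significant stenosis
m = diagnostic_metrics(tp=18, fp=12, fn=2, tn=168)
# NPV = 168 / 170, about 0.99: very few lesions hide behind a negative call,
# even though PPV (18 / 30 = 0.6) is modest.
```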
C4. In paper V, a fast level set method was proposed that uses a periodic monotonic
speed function to speed up convergence, together with a lazy narrow band level set
algorithm that reduces the amount of computation. In preliminary experiments, a roughly
tenfold speed increase was achieved on 3D data with almost no loss of segmentation
accuracy.
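The narrow-band idea can be sketched in 1D: only samples near the zero level set are updated each iteration, using a first-order upwind scheme. The periodic monotonic speed function of the actual method is replaced here by a generic speed array, so this illustrates the banding concept rather than the algorithm of paper V itself.

```python
import numpy as np

def narrow_band_step(phi, speed, dt=0.4, band=3.0):
    """One explicit 1D level set update restricted to a narrow band.

    phi:   signed-distance-like function; its zero crossing is the contour.
    speed: outward speed at each sample (a stand-in for the periodic
           monotonic speed function of the actual method).
    Only samples with |phi| <= band are touched, which keeps the cost
    proportional to the interface size rather than the volume size.
    """
    dminus = phi - np.roll(phi, 1)    # backward difference
    dplus = np.roll(phi, -1) - phi    # forward difference
    # first-order upwind gradient magnitude (Osher-Sethian scheme)
    grad = np.where(
        speed > 0,
        np.sqrt(np.maximum(dminus, 0) ** 2 + np.minimum(dplus, 0) ** 2),
        np.sqrt(np.minimum(dminus, 0) ** 2 + np.maximum(dplus, 0) ** 2),
    )
    new_phi = phi.copy()
    in_band = np.abs(phi) <= band
    new_phi[in_band] -= dt * speed[in_band] * grad[in_band]
    return new_phi

# A front at x = 5 moving outward with unit speed
phi = np.arange(10, dtype=float) - 5.0
phi1 = narrow_band_step(phi, np.ones(10))
# The zero crossing moves outward; samples outside the band are untouched.
```

The "lazy" variant in paper V goes further by postponing band maintenance until the front actually approaches the band edge, but the banded update above is the part that accounts for most of the savings.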
REFERENCES
[1] The top 10 causes of death - Fact sheet No. 310, World Health Organization, 2008.
[2] D. Lloyd-Jones, R. Adams, M. Carnethon, G. De Simone, T.B. Ferguson, K. Flegal, E. Ford, K.
Furie, A. Go, K. Greenlund, N. Haase, S. Hailpern, M. Ho, V. Howard, B. Kissela, S. Kittner, D.
Lackland, L. Lisabeth, A. Marelli, M. McDermott, J. Meigs, D. Mozaffarian, G. Nichol, C.
O’Donnell, V. Roger, W. Rosamond, R. Sacco, P. Sorlie, R. Stafford, J. Steinberger, T. Thom, S.
Wasserthiel-Smoller, N. Wong, J. Wylie-Rosett, and Y. Hong, “Heart disease and stroke
statistics--2009 update: a report from the American Heart Association Statistics Committee and
Stroke Statistics Subcommittee,” Circulation, vol. 119, 2009, pp. 480-6.
[3] D.M. Lloyd-Jones, M.G. Larson, A. Beiser, and D. Levy, “Lifetime risk of developing coronary
heart disease,” Lancet, vol. 353, 1999, pp. 89-92.
[4] Myocardial infarctions in Sweden 1987‑2004 (Hjärtinfarkter 1987-2004 samt utskrivna efter vård för
akut hjärtinfarkt 1987-2005), The National Board of Health and Welfare (Socialstyrelsen,
Epidemiologiskt Centrum), 2007.
[5] Causes of Death 2008 (Dödsorsaker 2008), The National Board of Health and Welfare
(Socialstyrelsen, Epidemiologiskt Centrum), 2010.
[6] K.S. Reddy, “Cardiovascular diseases in the developing countries: dimensions, determinants,
dynamics and directions for public health action,” Public Health Nutr, vol. 5, 2002, pp. 231-7.
[7] S. Cook, A. Walker, O. Hugli, M. Togni, and B. Meier, “Percutaneous coronary interventions
in Europe: prevalence, numerical estimates, and projections based on data up to 2004,” Clin
Res Cardiol, vol. 96, 2007, pp. 375-82.
[8] W.E. Boden, R.A. O’Rourke, K.K. Teo, P.M. Hartigan, D.J. Maron, W.J. Kostuk, M. Knudtson,
M. Dada, P. Casperson, C.L. Harris, B.R. Chaitman, L. Shaw, G. Gosselin, S. Nawaz, L.M. Title,
G. Gau, A.S. Blaustein, D.C. Booth, E.R. Bates, J.A. Spertus, D.S. Berman, G.B. Mancini, and
W.S. Weintraub, “Optimal medical therapy with or without PCI for stable coronary disease,”
N Engl J Med, vol. 356, 2007, pp. 1503-16.
[9] D.P. Boyd and M.J. Lipton, “Cardiac computed tomography,” Proceedings of the IEEE, vol. 71,
1983, pp. 298-307.
[10] S.I. Lee, J.C. Miller, S. Abbara, S. Achenbach, I.K. Jang, and J.H. Thrall, “Coronary CT
angiography,” J Am Coll Radiol, vol. 3, 2006, pp. 560-4.
[11] S. Achenbach, M. Marwan, D. Ropers, T. Schepis, T. Pflederer, K. Anders, A. Kuettner, W.G.
Daniel, M. Uder, and M.M. Lell, “Coronary computed tomography angiography with a
consistent dose below 1 mSv using prospectively electrocardiogram-triggered high-pitch spiral
acquisition,” European heart journal, vol. 31, 2010, p. 340.
[12] G. Mowatt, J.A. Cook, G.S. Hillis, S. Walker, C. Fraser, X. Jia, and N. Waugh, “64-Slice
computed tomography angiography in the diagnosis and assessment of coronary artery
disease: systematic review and meta-analysis,” Heart, vol. 94, 2008, pp. 1386-93.
[13] 2005 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency
Cardiovascular Care, Part 8, 2005.
[14] S.M. Salerno, P.C. Alguire, and H.S. Waxman, “Competency in interpretation of 12-lead
electrocardiograms: a summary and appraisal of published evidence,” Ann Intern Med, vol.
138, 2003, pp. 751-60.
[15] K. Law, R. Elley, J. Tietjens, and S. Mann, “Troponin testing for chest pain in primary
healthcare: a survey of its use by general practitioners in New Zealand,” N Z Med J, vol. 119,
2006, p. U2082.
[16] N. Nagajothi and A. Trivedi, “Biomarkers in acute cardiac disease,” J Am Coll Cardiol, vol. 48,
2006, pp. 2358; author reply 2358-9.
[17] J. Hung, R. Lang, F. Flachskampf, S.K. Shernan, M.L. McCulloch, D.B. Adams, J. Thomas, M.
Vannan, and T. Ryan, “3D echocardiography: a review of the current status and future
directions,” J Am Soc Echocardiogr, vol. 20, 2007, pp. 213-33.
[18] B.D. Beleslin, M. Ostojic, J. Stepanovic, A. Djordjevic-Dikic, S. Stojkovic, M. Nedeljkovic, G.
Stankovic, Z. Petrasinovic, L. Gojkovic, and Z. Vasiljevic-Pokrajcic, “Stress echocardiography
in the detection of myocardial ischemia. Head-to-head comparison of exercise, dobutamine,
and dipyridamole tests,” Circulation, vol. 90, 1994, p. 1168.
[19] R. Hachamovitch, D.S. Berman, H. Kiat, I. Cohen, J.A. Cabico, J. Friedman, and G.A.
Diamond, “Exercise Myocardial Perfusion SPECT in Patients Without Known Coronary Artery
Disease : Incremental Prognostic Value and Use in Risk Stratification,” Circulation, vol. 93, Mar.
1996, pp. 905-914.
[20] D.S. Baim and W. Grossman, Grossman’s cardiac catheterization, angiography, and intervention,
Lippincott Williams & Wilkins, 2006.
[21] P. Schoenhagen, R.D. White, S.E. Nissen, and E.M. Tuzcu, “Coronary imaging: angiography
shows the stenosis, but IVUS, CT, and MRI show the plaque,” Cleve Clin J Med, vol. 70, 2003,
pp. 713-9.
[22] A.J. Duerinckx, D.P. Atkinson, J. Mintorovitch, O.P. Simonetti, and M.K. Vrman, “Two-dimensional coronary MRA: limitations and artifacts,” Eur Radiol, vol. 6, 1996, pp. 312-25.
[23] G.N. Hounsfield, “Computed medical imaging,” Science, vol. 210, 1980, pp. 22-8.
[24] S.R. Tanenbaum, G.T. Kondos, K.E. Veselik, M.R. Prendergast, B.H. Brundage, and E.V.
Chomka, “Detection of calcific deposits in coronary arteries by ultrafast computed
tomography and correlation with angiography,” Am J Cardiol, vol. 63, 1989, pp. 870-2.
[25] W.E. Moshage, S. Achenbach, B. Seese, K. Bachmann, and M. Kirchgeorg, “Coronary artery
stenoses: three-dimensional imaging with electrocardiographically triggered, contrast agent-enhanced, electron-beam CT,” Radiology, vol. 196, 1995, pp. 707-14.
[26] L. Husmann, I. Valenta, O. Gaemperli, O. Adda, V. Treyer, C.A. Wyss, P. Veit-Haibach, F.
Tatsugami, G.K. von Schulthess, and P.A. Kaufmann, “Feasibility of low-dose coronary CT
angiography: first experience with prospective ECG-gating,” Eur Heart J, vol. 29, 2008, pp. 191-7.
[27] J. Abdulla, S.Z. Abildstrom, O. Gotzsche, E. Christensen, L. Kober, and C. Torp-Pedersen, “64-multislice detector computed tomography coronary angiography as potential alternative to
conventional coronary angiography: a systematic review and meta-analysis,” Eur Heart J, vol.
28, 2007, pp. 3042-50.
[28] M.J. Budoff, S. Achenbach, R.S. Blumenthal, J.J. Carr, J.G. Goldin, P. Greenland, A.D. Guerci,
J.A. Lima, D.J. Rader, G.D. Rubin, L.J. Shaw, and S.E. Wiegers, “Assessment of coronary artery
disease by cardiac computed tomography: a scientific statement from the American Heart
Association Committee on Cardiovascular Imaging and Intervention, Council on
Cardiovascular Radiology and Intervention, and Committee on Cardiac Imaging, Council on
Clinical Cardiology,” Circulation, vol. 114, 2006, pp. 1761-91.
[29] Z. Sun, C. Lin, R. Davidson, C. Dong, and Y. Liao, “Diagnostic value of 64-slice CT
angiography in coronary artery disease: a systematic review,” Eur J Radiol, vol. 67, 2008, pp. 78-84.
[30] G. Mowatt, E. Cummins, N. Waugh, S. Walker, J. Cook, X. Jia, G.S. Hillis, and C. Fraser,
“Systematic review of the clinical effectiveness and cost-effectiveness of 64-slice or higher
computed tomography angiography as an alternative to invasive coronary angiography in the
investigation of coronary artery disease,” Health Technol Assess, vol. 12, 2008, p. iii-iv, ix-143.
[31] T.J. Noto, L.W. Johnson, R. Krone, W.F. Weaver, D.A. Clark, J.R. Kramer, and G.W. Vetrovec,
“Cardiac catheterization 1990: a report of the Registry of the Society for Cardiac Angiography
and Interventions (SCA&I),” Cathet Cardiovasc Diagn, vol. 24, 1991, pp. 75-83.
[32] T. Batyraliev, M.R. Ayalp, A. Sercelik, Z. Karben, G. Dinler, F. Besnili, S. Ozgul, and I.
Perchucov, “Complications of cardiac catheterization: a single-center study,” Angiology, vol.
56, 2005, pp. 75-80.
[33] D.E. Allie, C.J. Hebert, and C.M. Walker, “CTA revolutionizes treatment of peripheral
vascular disease PV-CTA labors in shadow of cardiac imaging but has radically changed the
comprehensive management of PVD,” Diagn Imaging, vol. 29, 2007, pp. 5-12.
[34] F. Fernandez-Aviles, “Introduction: Diagnostic and interventional procedures for coronary
artery disease: implications for selection of contrast media,” European Heart Journal
Supplements, vol. 7, 2005, pp. G1-3.
[35] J.A. Ladapo, U. Hoffmann, F. Bamberg, J.T. Nagurney, D.M. Cutler, M.C. Weinstein, and G.S.
Gazelle, “Cost-effectiveness of coronary MDCT in the triage of patients with acute chest pain,”
AJR Am J Roentgenol, vol. 191, 2008, pp. 455-63.
[36] P. Schoenhagen, M. Barreto, and S.S. Halliburton, “Quantitative plaque characterization with
coronary CT angiography (CTA): current challenges and future application in atherosclerosis
trials and clinical risk assessment,” Int J Cardiovasc Imaging, vol. 24, 2008, pp. 313-6.
[37] K.M. Takakuwa and E.J. Halpern, “Evaluation of a ‘triple rule-out’ coronary CT angiography
protocol: use of 64-Section CT in low-to-moderate risk emergency department patients
suspected of having acute coronary syndrome,” Radiology, vol. 248, 2008, pp. 438-46.
[38] S. Abbara, A.V. Soni, and R.C. Cury, “Evaluation of cardiac function and valves by
multidetector row computed tomography,” Semin Roentgenol, vol. 43, 2008, pp. 145-53.
[39] D.A. Bluemke, S. Achenbach, M. Budoff, T.C. Gerber, B. Gersh, L.D. Hillis, W.G. Hundley,
W.J. Manning, B.F. Printz, M. Stuber, and P.K. Woodard, “Noninvasive coronary artery
imaging: magnetic resonance angiography and multidetector computed tomography
angiography: a scientific statement from the american heart association committee on
cardiovascular imaging and intervention of the council on cardiovascular radiology and
intervention, and the councils on clinical cardiology and cardiovascular disease in the young,”
Circulation, vol. 118, 2008, pp. 586-606.
[40] J. Hausleiter, T. Meyer, M. Hadamitzky, E. Huber, M. Zankl, S. Martinoff, A. Kastrati, and A.
Schomig, “Radiation dose estimates from cardiac multislice computed tomography in daily
practice: impact of different scanning protocols on effective dose estimates,” Circulation, vol.
113, 2006, pp. 1305-10.
[41] D.R. Coles, M.A. Smail, I.S. Negus, P. Wilde, M. Oberhoff, K.R. Karsch, and A. Baumbach,
“Comparison of radiation doses from multislice computed tomography coronary angiography
and conventional diagnostic angiography,” J Am Coll Cardiol, vol. 47, 2006, pp. 1840-5.
[42] C.J. Ritchie, J.D. Godwin, C.R. Crawford, W. Stanford, H. Anno, and Y. Kim, “Minimum scan
speeds for suppression of motion artifacts in CT,” Radiology, vol. 185, 1992, pp. 37-42.
[43] B. Ohnesorge, T. Flohr, K. Nieman, N. Mollet, R. Fischbach, B. Wintersperger, and A. Küttner,
Multi-slice and Dual-source CT in Cardiac Imaging, Springer Berlin Heidelberg, 2007.
[44] E.K. Fishman, “Multidetector-row computed tomography to detect coronary artery disease:
the importance of heart rate,” European Heart Journal Supplements, vol. 7, 2005, pp. G4-12.
[45] R.W. Katzberg and B.J. Barrett, “Risk of iodinated contrast material--induced nephropathy
with intravenous administration,” Radiology, vol. 243, 2007, pp. 622-8.
[46] F. Moselewski, D. Ropers, K. Pohle, U. Hoffmann, M. Ferencik, R.C. Chan, R.C. Cury, S.
Abbara, I.K. Jang, T.J. Brady, W.G. Daniel, and S. Achenbach, “Comparison of measurement of
cross-sectional coronary atherosclerotic plaque and vessel areas by 16-slice multidetector
computed tomography versus intravascular ultrasound,” Am J Cardiol, vol. 94, 2004, pp. 1294-7.
[47] M. Ferencik, D. Ropers, S. Abbara, R.C. Cury, U. Hoffmann, K. Nieman, T.J. Brady, F.
Moselewski, W.G. Daniel, and S. Achenbach, “Diagnostic accuracy of image postprocessing
methods for the detection of coronary artery stenoses by using multidetector CT,” Radiology,
vol. 243, 2007, pp. 696-702.
[48] M. Prokop, H.O. Shin, A. Schanz, and C.M. Schaefer-Prokop, “Use of maximum intensity
projections in CT angiography: a basic review,” Radiographics, vol. 17, 1997, pp. 433-51.
[49] P.S. Calhoun, B.S. Kuszyk, D.G. Heath, J.C. Carley, and E.K. Fishman, “Three-dimensional
volume rendering of spiral CT data: theory and method,” Radiographics, vol. 19, 1999, pp. 745-64.
[50] A. Rosset, L. Spadola, L. Pysher, and O. Ratib, “Informatics in radiology (infoRAD):
navigating the fifth dimension: innovative interface for multidimensional multimodality
image navigation,” Radiographics, vol. 26, 2006, pp. 299-308.
[51] J.S. Suri, Kecheng Liu, L. Reden, and S. Laxminarayan, “A review on MR vascular image
processing: skeleton versus nonskeleton approaches: part II,” IEEE Transactions on Information
Technology in Biomedicine, vol. 6, Dec. 2002, pp. 338-350.
[52] K. Bühler, P. Felkel, and A. La Cruz, “Geometric Methods for Vessel Visualization and
Quantification-A Survey,” Geometric modeling for scientific visualization, 2004, p. 399.
[53] C. Kirbas and F. Quek, “A review of vessel extraction techniques and algorithms,” ACM
Computing Surveys, vol. 36, 2004, p. 81–121.
[54] D. Lesage, E.D. Angelini, I. Bloch, and G. Funka-Lea, “A review of 3D vessel lumen
segmentation techniques: Models, features and extraction schemes,” Medical Image Analysis,
vol. 13, 2009, p. 819–845.
[55] M.H.F. Wilkinson, T. Wijbenga, G. De Vries, and M.A. Westenberg, “Blood vessel
segmentation using moving-window robust automatic threshold selection,” Image Processing,
2003. ICIP 2003. Proceedings. 2003 International Conference on, 2003, p. II–1093.
[56] Y. Masutani, T. Schiemann, and K.H. Höhne, “Vascular shape segmentation and structure
extraction using a shape-based region-growing model,” Medical Image Computing and
Computer-Assisted Interventation—MICCAI’98, 1998, p. 1242.
[57] D. Wilson and J. Noble, “Segmentation of cerebral vessels and aneurysms from MR
angiography data,” Information Processing in Medical Imaging, 1997, p. 423–428.
[58] T. Boskamp, D. Rinck, F. Link, B. Kummerlen, G. Stamm, and P. Mildenberger, “New Vessel
Analysis Tool for Morphometric Quantification and Visualization of Vessels in CT and MR
Imaging Data Sets,” Radiographics, vol. 24, Jan. 2004, pp. 287-297.
[59] J.K. Udupa and P.K. Saha, “Fuzzy connectedness and image segmentation,” Proceedings of the
IEEE, vol. 91, 2003, pp. 1649-1669.
[60] W.E. Lorensen and H.E. Cline, “Marching cubes: A high resolution 3D surface construction
algorithm,” ACM Siggraph Computer Graphics, vol. 21, 1987, p. 163–169.
[61] T. Wischgoll, J.S. Choy, E.L. Ritman, and G.S. Kassab, “Validation of Image-Based Method for
Extraction of Coronary Morphometry,” Annals of Biomedical Engineering, vol. 36, Jan. 2008, pp.
356-368.
[62] E.W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische mathematik,
vol. 1, 1959, p. 269–271.
[63] R. Bellman, “On a routing problem,” Quarterly of Applied Mathematics, vol. 16, 1958, pp. 87-90.
[64] J. Serra, Image Analysis and Mathematical Morphology, Volume 1 (Image Analysis & Mathematical
Morphology Series), Academic Press, 1982.
[65] P.K. Saha, J.K. Udupa, and D. Odhner, “Scale-Based Fuzzy Connected Image Segmentation:
Theory, Algorithms, and Validation,” Computer Vision and Image Understanding, vol. 77, Feb.
2000, pp. 145-174.
[66] L. Vincent, “Morphological area openings and closings for grey-scale images,” NATO ASI
Series F: Computer and Systems Sciences, vol. 126, 1994, p. 197.
[67] P.K. Saha, “Tensor scale-based fuzzy connectedness image segmentation,” Proceedings of SPIE,
San Diego, CA, USA: 2003, pp. 1580-1590.
[68] Y.Y. Boykov and M.P. Jolly, “Interactive graph cuts for optimal boundary & region
segmentation of objects in ND images,” Computer Vision, 2001. ICCV 2001. Proceedings. Eighth
IEEE International Conference on, 2001, p. 105–112.
[69] H. Homann, G. Vesom, and J.A. Noble, “Vasculature segmentation of CT liver images using
graph cuts and graph-based analysis,” Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008.
5th IEEE International Symposium on, 2008, p. 53–56.
[70] T. Mcinerney and D. Terzopoulos, “Deformable models in medical image analysis: A survey,”
Medical Image Analysis, vol. 1, 1996, pp. 91-108.
[71] D. Adalsteinsson and J.A. Sethian, “A Fast Level Set Method for Propagating Interfaces,”
Journal of Computational Physics, vol. 118, 1995, pp. 269-277.
[72] R.T. Whitaker, “A level-set approach to 3d reconstruction from range data,”
International Journal of Computer Vision, vol. 29, 1998, pp. 203-231.
[73] A.F. Frangi, W.J. Niessen, K.L. Vincken, and M.A. Viergever, “Multiscale vessel enhancement
filtering,” Lecture Notes in Computer Science, 1998, p. 130–137.
[74] Q. Lin, “Enhancement, extraction, and visualization of 3D volume data,” Linköping Studies in Science and Technology, Dissertations, 2003.
[75] O. Friman, M. Hindennach, C. Kühnel, and H.-O. Peitgen, “Multiple hypothesis template
tracking of small 3D vessel structures,” Medical Image Analysis, vol. 14, Apr. 2010, pp. 160-171.
[76] O. Wink, W.J. Niessen, and M.A. Viergever, “Fast delineation and visualization of vessels in 3D angiographic images,” Medical Imaging, IEEE Transactions on, vol. 19, 2000, p. 337–346.
[77] Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kikinis, “3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,”
CVRMed-MRCAS’97, 1997, p. 213–222.
[78] Z. Tu, “Probabilistic boosting-tree: Learning discriminative models for classification,
recognition, and clustering,” 2005.
[79] Y. Zheng, M. Loziczonek, B. Georgescu, S.K. Zhou, F. Vega-Higuera, and D. Comaniciu,
“Machine learning based vesselness measurement for coronary artery segmentation in cardiac
CT volumes,” Lake Buena Vista, Florida, USA: 2011, p. 79621K-79621K-12.
[80] C.T. Metz, M. Schaap, A.C. Weustink, N.R. Mollet, T. van Walsum, and W.J. Niessen,
“Coronary centerline extraction from CT coronary angiography images using a minimum cost
path approach,” Medical physics, vol. 36, 2009, p. 5568.
[81] M.A. Gülsün and H. Tek, “Robust vessel tree modeling,” Medical Image Computing and
Computer-Assisted Intervention: MICCAI ... International Conference on Medical Image Computing
and Computer-Assisted Intervention, vol. 11, 2008, pp. 602-611.
[82] C. Florin, N. Paragios, and J. Williams, “Particle filters, a quasi-monte carlo solution for
segmentation of coronaries,” Medical Image Computing and Computer-Assisted Intervention–
MICCAI 2005, 2005, p. 246–253.
[83] C. Lacoste, G. Finet, and I.E. Magnin, “Coronary tree extraction from X-ray angiograms using
marked point processes,” Biomedical Imaging: Nano to Macro, 2006. 3rd IEEE International
Symposium on, 2006, p. 157–160.
[84] A. Vasilevskiy and K. Siddiqi, “Flux Maximizing Geometric Flows,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, 2002, pp. 1565-1578.
[85] M. Holtzman-Gazit, R. Kimmel, N. Peled, and D. Goldsher, “Segmentation of thin structures
in volumetric medical images,” Image Processing, IEEE Transactions on, vol. 15, 2006, p. 354–363.
[86] X. Tizon and Ö. Smedby, “Segmentation with gray-scale connectedness can separate arteries
and veins in MRA,” J Magn Reson Imaging, vol. 15, 2002, pp. 438-45.
[87] A. Löfving, X. Tizon, A. Persson, and Ö. Smedby, “Angiographic Visualization Of The
Coronary Arteries In Computed Tomography Angiography With Virtual Contrast Injection,”
The Internet Journal of Radiology, vol. 4, 2006.
[88] A. Rosset, L. Spadola, and O. Ratib, “OsiriX: an open-source software for navigating in
multidimensional DICOM images,” Journal of Digital Imaging: The Official Journal of the Society
for Computer Applications in Radiology, vol. 17, Sep. 2004, pp. 205-216.
[89] C. Wang and Ö. Smedby, “An Automatic Seeding Method For Coronary Artery Segmentation
and Skeletonization in CTA,” The MIDAS Journal, vol. Online Publishing, Sep. 2008.
[90] G.L. Raff, A. Abidov, S. Achenbach, D.S. Berman, L.M. Boxt, M.J. Budoff, V. Cheng, T.
DeFrance, J.C. Hellinger, and R.P. Karlsberg, “SCCT guidelines for the interpretation and
reporting of coronary computed tomographic angiography,” Journal of Cardiovascular Computed
Tomography, vol. 3, 2009, p. 122–136.
[91] J. Bogaert, S. Dymarkowski, and A.M. Taylor, Clinical cardiac MRI: with interactive CD-ROM,
Springer Verlag, 2005.
[92] M.J. Budoff, Cardiac CT imaging: diagnosis of cardiovascular disease, Springer Verlag, 2010.
[93] D. Ropers, “Heart rate-independent dual-source computed tomography coronary
angiography: growing experience,” J Cardiovasc Comput Tomogr, vol. 2, 2008, pp. 115-6.
[94] M.L. Steigner, D. Mitsouras, A.G. Whitmore, H.J. Otero, C. Wang, O. Buckley, N.A. Levit, A.Z.
Hussain, T. Cai, R.T. Mather, Ö. Smedby, M.F. DiCarli, and F.J. Rybicki, “Iodinated Contrast
Opacification Gradients in Normal Coronary Arteries Imaged With Prospectively ECG-Gated
Single Heart Beat 320-Detector Row Computed Tomography,” Circulation: Cardiovascular
Imaging, vol. 3, Mar. 2010, pp. 179-186.
[95] A.W. Leber, A. Knez, F. von Ziegler, A. Becker, K. Nikolaou, S. Paul, B. Wintersperger, M.
Reiser, C.R. Becker, G. Steinbeck, and P. Boekstegers, “Quantification of obstructive and
nonobstructive coronary lesions by 64-slice computed tomography: a comparative study with
quantitative coronary angiography and intravascular ultrasound,” J Am Coll Cardiol, vol. 46,
2005, pp. 147-54.
[96] A. Sato, M. Hiroe, M. Tamura, H. Ohigashi, T. Nozato, H. Hikita, A. Takahashi, K. Aonuma,
and M. Isobe, “Quantitative measures of coronary stenosis severity by 64-Slice CT
angiography and relation to physiologic significance of perfusion in nonobese patients:
comparison with stress myocardial perfusion imaging,” J Nucl Med, vol. 49, 2008, pp. 564-72.
[97] N. Funabashi, Y. Kobayashi, M. Kudo, M. Asano, K. Teramoto, I. Komuro, and G.D. Rubin,
“New method of measuring coronary diameter by electron-beam computed tomographic
angiography using adjusted thresholds determined by calibration with aortic opacity,” Circ J,
vol. 68, 2004, pp. 769-77.
[98] R.C. Cury, K. Nieman, M.D. Shapiro, J. Butler, C.H. Nomura, M. Ferencik, U. Hoffmann, S.
Abbara, D.S. Jassal, T. Yasuda, H.K. Gold, I.K. Jang, and T.J. Brady, “Comprehensive
assessment of myocardial perfusion defects, regional wall motion, and left ventricular function
by using 64-section multidetector CT,” Radiology, vol. 248, 2008, pp. 466-75.
[99] E.D. Nicol, J. Stirrup, E. Reyes, M. Roughton, S.P. Padley, M.B. Rubens, and S.R. Underwood,
“Comparison of 64-slice cardiac computed tomography with myocardial perfusion
scintigraphy for assessment of global and regional myocardial function and infarction in
patients with low to intermediate likelihood of coronary artery disease,” J Nucl Cardiol, vol. 15,
2008, pp. 497-502.
[100] K.M. Stantz, Y. Liang, C.A. Meyer, S. Teague, M. Stecker, G. Hutchins, G. McLennan, and S. Persohn, “In-vivo regional myocardial perfusion measurements in a porcine model by ECG-gated multislice computed tomography,” SPIE, 2003, pp. 222-233.
[101] B. Ruzsics, H. Lee, E.R. Powers, T.G. Flohr, P. Costello, and U.J. Schoepf, “Images in cardiovascular medicine. Myocardial ischemia diagnosed by dual-energy computed tomography: correlation with single-photon emission computed tomography,” Circulation, vol. 117, 2008, pp. 1244-5.
[102] T.R. Johnson, K. Nikolaou, B.J. Wintersperger, A.W. Leber, F. von Ziegler, C. Rist, S. Buhmann, A. Knez, M.F. Reiser, and C.R. Becker, “Dual-source CT cardiac imaging: initial experience,” Eur Radiol, vol. 16, 2006, pp. 1409-15.
[103] M. Oudkerk, A.E. Stillman, S.S. Halliburton, W.A. Kalender, S. Mohlenkamp, C.H. McCollough, R. Vliegenthart, L.J. Shaw, W. Stanford, A.J. Taylor, P.M. van Ooijen, L. Wexler, and P. Raggi, “Coronary artery calcium screening: current status and recommendations from the European Society of Cardiac Radiology and North American Society for Cardiovascular Imaging,” Eur Radiol, vol. 18, 2008, pp. 2785-807.
[104] U. Olgac, V. Kurtcuoglu, S.C. Saur, and D. Poulikakos, “Identification of atherosclerotic lesion-prone sites through patient-specific simulation of low-density lipoprotein accumulation,” Med Image Comput Comput Assist Interv, vol. 11, 2008, pp. 774-81.
[105] F.J. Rybicki, S. Melchionna, D. Mitsouras, A.U. Coskun, A.G. Whitmore, M. Steigner, L. Nallamshetty, F.G. Welt, M. Bernaschi, M. Borkin, J. Sircar, E. Kaxiras, S. Succi, P.H. Stone, and C.L. Feldman, “Prediction of coronary artery plaque progression and potential rupture from 320-detector row prospectively ECG-gated single heart beat CT angiography: Lattice Boltzmann evaluation of endothelial shear stress,” Int J Cardiovasc Imaging, 2009.
[106] C.R. Becker, K. Nikolaou, M. Muders, G. Babaryka, A. Crispin, U.J. Schoepf, U. Loehrs, and M.F. Reiser, “Ex vivo coronary atherosclerotic plaque characterization with multidetector-row CT,” European Radiology, vol. 13, Sep. 2003, pp. 2094-2098.
[107] C. von Birgelen, W. Klinkhart, G.S. Mintz, H. Wieneke, D. Baumgart, M. Haude, T. Bartel, S. Sack, J. Ge, and R. Erbel, “Size of emptied plaque cavity following spontaneous rupture is related to coronary dimensions, not to the degree of lumen narrowing. A study with intravascular ultrasound in vivo,” Heart, vol. 84, 2000, pp. 483-8.
[108] K.A. Addis, K.D. Hopper, T.A. Iyriboz, Y. Liu, S.W. Wise, C.J. Kasales, J.S. Blebea, and D.T. Mauger, “CT Angiography: In Vitro Comparison of Five Reconstruction Methods,” Am. J. Roentgenol., vol. 177, Nov. 2001, pp. 1171-1176.
[109] L. Saba, R. Sanfilippo, R. Montisci, and G. Mallarini, “Assessment of intracranial arterial stenosis with multidetector row CT angiography: a postprocessing techniques comparison,” AJNR. American Journal of Neuroradiology, vol. 31, May 2010, pp. 874-879.
[110] A. Persson, N. Dahlström, L. Engellau, E.-M. Larsson, T.B. Brismar, and Ö. Smedby, “Volume Rendering Compared with Maximum Intensity Projection for Magnetic Resonance Angiography Measurements of the Abdominal Aorta,” Acta Radiologica, vol. 45, 2004, pp. 453-459.
[111] A. Hennemuth, T. Boskamp, D. Fritz, C. Kuhnel, S. Bock, D. Rinck, M. Scheuering, and H.O. Peitgen, “One-click coronary tree segmentation in CT angiographic images,” International Congress Series, vol. 1281, 2005, pp. 317-321.
[112] M.F. Khan, S. Wesarg, J. Gurung, S. Dogan, A. Maataoui, B. Brehmer, C. Herzog, H. Ackermann, B. Aßmus, and T.J. Vogl, “Facilitating coronary artery evaluation in MDCT using a 3D automatic vessel segmentation tool,” Eur Radiol, 2006, pp. 1789-95.
[113] M. Jinzaki, K. Sato, Y. Tanami, M. Yamada, T. Anzai, A. Kawamura, K. Ueno, and S. Kuribayashi, “Diagnostic accuracy of angiographic view image for the detection of coronary artery stenoses by 64-detector row CT: a pilot study comparison with conventional post-processing methods and axial images alone,” Circulation Journal: Official Journal of the Japanese Circulation Society, vol. 73, Apr. 2009, pp. 691-698.
[114] C. Wang, A. Persson, and Ö. Smedby, “Multilayer QuickTime Virtual Reality Object Movie: A New Way to Share High Dimensional Post-processing Results,” RSNA 2008, Chicago: RSNA, 2008.
[115] QuickTime Virtual Reality, http://developer.apple.com/documentation/QuickTime.
[116] H. Petersson, D. Sinkvist, C. Wang, and Ö. Smedby, “Web-based interactive 3D visualization as a tool for improved anatomy learning,” Anatomical Sciences Education, vol. 2, Mar. 2009, pp. 61-68.
[117] C. Wang, F. Ritter, and Ö. Smedby, “Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques,” International Journal of Computer Assisted Radiology and Surgery, vol. 5, Apr. 2010, pp. 411-419.
[118] O. Gaemperli, T. Schepis, V. Kalff, M. Namdar, I. Valenta, L. Stefani, L. Desbiolles, S. Leschka, L. Husmann, H. Alkadhi, and P.A. Kaufmann, “Validation of a new cardiac image fusion software for three-dimensional integration of myocardial perfusion SPECT and stand-alone 64-slice CT angiography,” Eur J Nucl Med Mol Imaging, vol. 34, 2007, pp. 1097-106.
[119] F.J. Rybicki, H.J. Otero, M.L. Steigner, G. Vorobiof, L. Nallamshetty, D. Mitsouras, H. Ersoy, R.T. Mather, P.F. Judy, T. Cai, K. Coyner, K. Schultz, A.G. Whitmore, and M.F. Di Carli, “Initial evaluation of coronary images from 320-detector row computed tomography,” Int J Cardiovasc Imaging, vol. 24, 2008, pp. 535-46.
[120] C.E. Metz, “Basic principles of ROC analysis,” Seminars in Nuclear Medicine, vol. 8, Oct. 1978, pp. 283-298.