Medical diagnostic ultrasound
- physical principles and imaging
By Jens E. Wilhjelm, Andreas Illum, Martin Kristensson and Ole Trier Andersen
Biomedical Engineering, DTU Elektro
Technical University of Denmark
(Ver. 3.1, 2 October 2013) © 2001-2013 by J. E. Wilhjelm
Preface
This document attempts to introduce the physical principles of medical diagnostic ultrasound to a
broad audience ranging from non-engineering students to graduate-level students in engineering and
science. This is achieved by providing chapters with different levels of difficulty:
Chapters with no asterisk can be read by most.
* These chapters are directed towards bachelor students in engineering.
** These chapters are directed towards graduate students in engineering.
The document can be studied at a given degree of detail without loss of continuity.
To help understanding, a number of Flash animations and quizzes are included. In
order for these to work, the computer used for viewing this document must have one
of the newest Flash players installed (www.adobe.com). The version of the current Flash player is
written in the box to the right. If no version number appears at all, you must update Flash.
If viewing this text in a browser, please use one that supports Adobe PDF files with embedded Flash,
such as Internet Explorer or Mozilla Firefox. Also note that internet access might be required for
some animations to work. Please also note that if you start an animation and then move on to another
page, the animation might still run in the background, slowing the computer.
This document contains a number of quizzes that will pop up in individual windows when activated
by the reader. If the window appears difficult to read, do this: right-click on the quiz icon, enter
the window size (written in parentheses after the quiz) in the window that appears, and set “Play back
style” to “Play content in floating window”. If it is necessary to move the quiz window, drag it by the black frame.
This chapter does not consider blood flow imaging with ultrasound, which is treated excellently elsewhere[5].
1 Introduction
Medical diagnostic ultrasound is an imaging modality that makes images showing a slice of the body,
so-called tomographic images (tomo = Gr. tome, to cut and graphic = Gr. graphein, to write). It is a
diagnostic modality, meaning that it gathers information about the biological medium without modification of any kind1.
BME
DTU Elektro
1/20
Ultrasound is sound with a frequency above the audible range, which extends from 20 Hz to 20 kHz.
Sound is mechanical energy that needs a medium to propagate. Thus, in contrast to electromagnetic
waves, it cannot travel in vacuum.
The frequencies normally applied in clinical imaging lie between 1 MHz and 20 MHz. The sound
is generated by a transducer that first acts as a loudspeaker, sending out an acoustic pulse along a narrow beam in a given direction. The transducer subsequently acts as a microphone in order to record
the acoustic echoes generated by the tissue along the path of the emitted pulse. These echoes thus carry information about the acoustic properties of the tissue along the path. The emission of acoustic energy and the recording of the echoes normally take place at the same transducer, in contrast to CT
imaging, where the emitter (the X-ray tube) and recorder (the detectors) are located on opposite
sides of the patient.
This document attempts to give simple insight into basic ultrasound, simple wave equations, some
simple wave types, and the generation and reception of ultrasound. This is followed by a description of
ultrasound’s interaction with the medium, which gives rise to the echo information that is used to
make images. The different kinds of imaging modalities are presented next, finishing with a description
of more advanced techniques. The chapter is concluded with a list of symbols, terms and references.
2 Basics of ultrasound
Ultrasound (as well as sound) needs a medium, in which it can propagate by means of local deformation of the medium. One can think of the medium as being made of small spheres (e.g. atoms or
molecules), that are connected with springs. When mechanical energy is transmitted through such a
medium, the spheres will oscillate around their resting position. Thus, the propagation of sound is due
to a continuous interchange between kinetic energy and potential energy, related to the density and
the elastic properties of the medium, respectively.
The two simplest waves that can exist in solids are longitudinal waves, in which the particle movements occur in the same direction as the propagation (or energy flow), and transverse (or shear)
waves, in which the movements occur in a plane perpendicular to the propagation direction. In water
and soft tissue the waves are mainly longitudinal. The frequency, f, of the particle oscillation is related
to the wavelength, λ, and the propagation velocity c:
f λ = c
(1)
The sound speed in soft tissue at 37°C is around 1540 m/s; thus, at a frequency of 7.5 MHz, the wavelength is 0.2 mm.
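The wavelength quoted above can be checked directly from Eq. (1). A tiny illustrative sketch (not part of the original text):

```python
c = 1540.0     # sound speed in soft tissue at 37 deg C [m/s]
f = 7.5e6      # transducer frequency [Hz]
lam = c / f    # wavelength [m], from f * lam = c (Eq. 1)
print(f"wavelength = {lam * 1e3:.3f} mm")   # ~0.205 mm, the 0.2 mm quoted
```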
2.1 The 1D wave equation*
Describing the wave propagation in 3D space in a lossy inhomogeneous medium such as living tissue is very complicated. However, the description in 1D
for a homogeneous lossless medium is relatively simple, as will be shown.
An acoustic wave is normally characterized by its pressure. Thus, in order to obtain a quantitative
relation between the particle velocity in the medium, u, and the acoustic pressure, p, a simple situation
1. To obtain acoustical contact between the transducer and the skin, a small pressure must be applied from the
transducer to the skin. In addition to that, ultrasound scanning causes a very small heating of tissue (less than
1°C) and some studies have demonstrated cellular effects under special circumstances.
BME
DTU Elektro
2/20
Figure 1 1D situation showing a liquid element inside a sound wave. [Figure labels: pressure p at x and p + Δp at x + Δx; particle velocity u at x and u + Δu at x + Δx; cross-sectional area A.]
with 1D propagation in a lossless medium will be considered, as shown in Figure 1. This figure shows
a volume element of length Δx and with cross-sectional area A. The volume is thus V = AΔx. The density of the medium - a liquid, for instance - is ρ, and the mass of the element will then be ρAΔx.
The pressure p is a function of both x and t. Consider the variation in space first: there will be a pressure difference, Δp, from the front surface at x to the back surface at x + Δx; thus the volume element
will be subject to a force −AΔp. By applying Newton’s second law (F = ma):
−AΔp = ρAΔx · du/dt
(2)
or after performing the limit (Δ → d):
du/dt = −(1/ρ) · dp/dx
(3)
Next consider the variations over a time interval Δt. A difference in velocity, Δu, between the front
surface (at x) and the back surface (at x + Δx) of the elemental volume will result in a change in that
volume which is:
ΔV = A((u + Δu)Δt − uΔt) = AΔuΔt
(4)
which in turn is connected with a change in pressure, Δp, according to
ΔV = −κ(AΔx)Δp
(5)
where κ is the compressibility of the material (e.g. a liquid) in units of Pa⁻¹. Performing the same limit
as above gives the second equation:
dp/dt = −(1/κ) · du/dx
(6)
Equations (3) and (6) are the simplest form of the wave equations describing the relation between
pressure and particle velocity in a lossless isotropic medium.
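The coupled equations (3) and (6) can also be marched forward in time numerically. Below is a minimal sketch, assuming a staggered-grid finite-difference discretization; the grid sizes, the water-like material constants and the Gaussian drive pulse are all illustrative choices, not values from the text:

```python
# 1D finite-difference sketch of Eqs. (3) and (6):
#   du/dt = -(1/rho) dp/dx,   dp/dt = -(1/kappa) du/dx
import math

rho = 1000.0                    # density of water [kg/m^3]
c = 1500.0                      # sound speed [m/s]
kappa = 1.0 / (rho * c * c)     # compressibility [1/Pa], since c = 1/sqrt(rho*kappa)

nx, dx = 400, 1e-4              # grid: 4 cm of medium, 0.1 mm cells
dt = 0.5 * dx / c               # time step inside the CFL stability limit

p = [0.0] * nx                  # pressure at cell centres
u = [0.0] * (nx + 1)            # particle velocity at cell faces (staggered)

for n in range(600):
    t = n * dt
    # drive the left boundary with a short Gaussian pressure pulse
    p[0] = math.exp(-((t - 1e-6) / 3e-7) ** 2)
    # update velocity from the pressure gradient (Eq. 3)
    for i in range(1, nx):
        u[i] -= dt / rho * (p[i] - p[i - 1]) / dx
    # update pressure from the velocity difference (Eq. 6)
    for i in range(1, nx):
        p[i] -= dt / kappa * (u[i + 1] - u[i]) / dx

# after 600 steps the pulse peak has moved roughly c * t into the medium
peak = max(range(nx), key=lambda i: abs(p[i]))
print(f"pulse peak near x = {peak * dx * 1e3:.1f} mm")
```

The pulse injected at the left edge travels to the right at very nearly c, illustrating that the two first-order equations together support propagating waves.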
3 Types of ultrasound waves
The equations above describe the relation between pressure and displacement of the elements of the
medium. Two simple waves fulfilling the above will now be considered. Both are theoretical, since
they need an infinitely large medium.
Since optical rays can be visualized directly, and since they behave in a manner somewhat similar to
acoustic waves, they can help in understanding reflection, scattering and other phenomena taking
place with acoustic waves. Therefore, references to optics will often be made.
There are two types of waves that are relevant. They can both be visualized in 2D with a square acrylic water tank placed on an overhead projector:
• The plane wave, which can be observed by briefly lifting one side of the container.
• The spherical wave, which can be visualized by letting a drop of water fall onto the surface of
the water.
When the plane wave is created at one side of the water tank, one will also be able to observe the
reflection from the other side of the tank. The wave is reflected exactly as a light beam from a mirror
or a billiard ball bouncing off the barrier of the table.
The spherical wave, on the other hand, originates from a point source and propagates in all directions; it creates a complex pattern when reflected from the four sides of the tank.
3.1 The plane wave*
The plane wave is propagating in one direction in space; in a plane perpendicular to this direction,
the pressure (and all other acoustic parameters) is constant. As a plane extends over the entire space,
it is not physically realizable (but within a given space, an approximation to a plane wave can be obtained locally, such as in the shadow of a planar transducer (see later)).
Figure 2 Dynamic visualization of plane wave (a) and spherical wave (b) pressure fields. The pressure
fields are monochromatic, i.e., contain only one frequency. Pure black indicates zero pressure; red indicates positive and blue negative pressure values. The wavelength can be read directly from the plots. When
including the propagation velocity, c = 1500 m/s, the frequency of the wave can also be found.
If the plane wave is further restricted to be monochromatic, that is, it oscillates at a single frequency,
f0, then the wave equation in 1D is:
p(x,t) = P0 exp(−j(2πf0t − 2πx/λ))
(7)
where P0 is the pressure magnitude (in units of pascal, Pa), x is the distance along the propagation direction and λ = c/f0. (7) is a complex sinusoid that depends on space and time. The equation will be the
same in 3D, provided that the coordinate system is oriented with the x-axis in the propagation direction. The plane wave travelling in the x-direction is illustrated by the pressure animation in Figure 2a.
A plane wave thus propagates in one direction, just like a laser beam; unlike a beam, however, it extends infinitely in the plane perpendicular to the propagation direction.
Quiz 1
(Open in floating window of size 800 x 1100)
3.2 The spherical wave*
The other type is a spherical wave. It originates from a point (source) and all acoustic parameters
are constant on spheres centred on this point. Thus, the equation is the same as in (7), except that
x is substituted with r in a spherical coordinate system:
p(r,t) = P0 exp(−j(2πf0t − 2πr/λ))
(8)
where r is the distance from the centre of the coordinate system (i.e., the source) to any point in 3D
space. The spherical wave is illustrated by the pressure animation in Figure 2b.
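Equations (7) and (8) are easy to evaluate numerically. The sketch below samples both waves; P0, f0 and c are illustrative values. (Note that (8) as printed keeps the amplitude constant, whereas a physical spherical wave additionally decays as 1/r.)

```python
import cmath, math

P0, f0, c = 1.0, 5e6, 1500.0      # illustrative amplitude, frequency, speed
lam = c / f0                      # wavelength

def plane(x, t):
    # Eq. (7): monochromatic plane wave
    return P0 * cmath.exp(-1j * (2 * math.pi * f0 * t - 2 * math.pi * x / lam))

def spherical(r, t):
    # Eq. (8): same form, with range r replacing x
    return P0 * cmath.exp(-1j * (2 * math.pi * f0 * t - 2 * math.pi * r / lam))

# moving one full wavelength along the propagation direction leaves the
# complex pressure unchanged:
assert abs(plane(0.0, 0.0) - plane(lam, 0.0)) < 1e-12
# moving half a wavelength flips the sign of the pressure:
assert abs(plane(0.0, 0.0) + plane(lam / 2, 0.0)) < 1e-12
```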
Problem 1 With the animations in Figure 2, measure the wavelength and calculate the centre frequency of the waves.
Problem 2 There are a few aspects of Figure 2 that were too difficult to visualize correctly when
using Flash as the programming tool. Which?
3.3 Diffraction**
An important concept in wave theory is diffraction. Ironically, the term diffraction can best be described by what it is not: “Any propagating scalar field which experiences a deviation from a rectilinear propagation path, when such deviation is not due to reflection or refraction (see later), is generally
said to undergo diffraction effects. This description includes the bending of waves around objects in
their path. This bending is brought about by the redistribution of energy within the wave front as it
passes by an opaque body.”[3] Examples where diffraction effects are significant are: Propagation of
waves through an aperture in a baffle (i.e. a hole in a plate) and radiation from sources of finite size.[3]
With the above definition, the only non-diffracted wave is the plane wave.
4 The generation of ultrasound
The ultrasonic transducer is responsible for generating the ultrasound and recording the echoes
generated by the medium. Since the transducer should make mechanical vibrations in the megahertz
range, a material that can vibrate that fast is needed. Piezoelectric materials are ideal for this.
Figure 3 Example of modern ultrasound transducer of type 8820e (BK Medical,
Denmark) with frequency range 2 - 6 MHz. From www.bkmed.com.
The typical transducer consists of a disk-shaped piezoelectric element that is made to vibrate by applying an electrical impulse via an electrode on each side of the disk. Likewise, the echo returning to
the disk makes it vibrate, creating a small electrical potential across the same two electrodes that can
be amplified and recorded. In modern clinical scanners, the transducer consists of hundreds of small
piezoelectric elements arranged as a 1D array packed into a small enclosure. The shape of this line
can be either linear or convex. An example of the latter can be seen in Figure 3. The use of arrays with
hundreds of elements makes it possible to electronically focus and steer the beam, as will be considered later in Chapter 7. However, first the single-element transducer will be considered.
4.1 Piezoelectricity
The acoustic field is generated by using the piezoelectric effect present in certain ceramic materials.
Electrodes (e.g. thin layers of silver) are placed on both sides of a disk of such a material. One side of
the disk is fixed to a damping, so-called backing, material; the other side can move freely. If a voltage
is applied to the two electrodes, the result will be a physical deformation of the crystal surface, which
will make the surroundings in front of the crystal vibrate and thus generate a sound field. If the material is compressed or expanded, as will be the case when an acoustic wave impinges on the surface,
the displacement of charge inside the material will cause a voltage change on the electrodes, as illustrated in Figure 4 (left). This is used for emission and reception of acoustic energy, respectively.
Figure 4 Left: Piezoelectric crystal at different states of compression. Right: Single-element transducer
consisting of a piezoelectric crystal with electrodes. This “sandwich” is placed between a backing material
and the matching layer towards the medium. [Figure labels: backing; crystal; λ/4 matching layer and wear plate; shadow region; diameter 2a; acoustic axis; housing; g(t).]
4.2 The acoustic field from a disk transducer*
Since the ultrasound transducer - or the piezoelectric crystal - has a size comparable to or larger than
the wavelength, the generated field becomes very complex. Rather than providing equations describing the field, it will now be visualised.
It is assumed that the piezoelectric, disk-shaped crystal is fixed at the back, as illustrated in Figure 4
(right) and can move freely at the front. Specifically, movement of the surface of the transducer can
be described by a velocity vector oriented perpendicular to the surface. In short, the electrical signal
applied to the transducer is converted by the electro-mechanical transfer function of the transducer to
a velocity function describing the movement of the transducer surface. Note the backing material located behind the crystal; this is used to dampen the free oscillation of the crystal (in the time period
just after a voltage is applied), thereby creating a short vibration, when an impulse is applied to the
crystal. The radius of the crystal is denoted a. The thickness of the crystal is selected according to the
frequency of operation, so that it is λpiezo/2, where λpiezo is the wavelength of sound in the crystal material.
In order to assess the pressure field generated by the transducer, the surface of the crystal will be
divided up into many small surface elements, each contributing to the entire pressure field. If the surface elements are much smaller than the wavelength, they can be considered point sources. In the
present case, each point source will generate a semi-spherical wave in the space in front of the transducer. The waves are identical; the only difference is the location of the point source. At a given field
point in front of the transducer, the total pressure will then be the sum of the pressures due to the individual point
sources. This is an application of Huygens’ principle1. Of course, these individual pressure contributions will interfere constructively and destructively depending on the location of the field point. This interference will result in the final beam, which can be rather complex.
Rather than doing this calculation analytically, a graphical illustration is provided in Figure 5 which
shows point sources along a diameter of the transducer disk (the remaining point sources on the disk
surface are ignored for simplicity). For each point source, a bow shows the location of the equal-phase-fronts (or equal-time-lag) of the spherical pressure wave generated from that source at given
instances in time. The equal-phase-fronts are not the same as the pressure field; the latter can be created by adding the pressure fields of each individual source, time-shifted according to the equal-time-lag. Hence, the moving bows in Figure 5 reveal how complicated the field is at a given point.
The “wave” fronts generated by the flat piston transducer in Figure 5 (left) tend towards a (locally)
plane wave inside the shadow of the transducer. The pressure field is thus broad, and unsuitable for
imaging purposes, as will become clear, when the imaging technique is considered later. In order to
focus the ultrasound field and obtain a situation where the acoustic energy travels along a narrow path,
a focused transducer is used, as illustrated in Figure 5 (right). In this situation, the individual spherical
waves from the transducer interfere constructively at the focal point, whereas at all
other points, the interference is more or less destructive. In order to make this work efficiently, the
wavelength must be much smaller than the distance to the focal point. However, a typical depth of the
focal point for a 7.5 MHz transducer - 20 mm - will correspond to 100 λ.
Notice here that the key to understanding this is the fact that it takes different amounts of time to travel
to a given field point from two different source locations. The interference caused by this is
quite unique to ultrasound.
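The time-of-flight argument above can be tested numerically: summing monochromatic spherical waves from point sources across an aperture, with phase delays chosen for one focal point, gives strong constructive interference there and largely destructive interference elsewhere. The aperture width, focal depth, frequency and source count below are illustrative assumptions:

```python
import cmath, math

f0, c = 5e6, 1540.0
k = 2 * math.pi * f0 / c                 # wavenumber, 2*pi/lambda
a = 5e-3                                 # aperture half-width: 5 mm (assumed)
focus = (0.0, 20e-3)                     # focal point 20 mm in front (assumed)

n = 101                                  # point sources across the aperture
sources = [(-a + 2 * a * i / (n - 1), 0.0) for i in range(n)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# per-source phase delay chosen so that all waves arrive in phase at the focus
delays = [k * dist(s, focus) for s in sources]

def field_amplitude(point):
    # Huygens: sum the spherical waves (with 1/r decay) from every source
    total = sum(cmath.exp(1j * (k * dist(s, point) - ph)) / dist(s, point)
                for s, ph in zip(sources, delays))
    return abs(total)

ratio = field_amplitude(focus) / field_amplitude((5e-3, 20e-3))
print(f"focal gain over a point 5 mm off axis: {ratio:.1f}x")
```

At the focus every term has zero phase, so the amplitudes add coherently; 5 mm to the side the phases spread over many cycles and largely cancel.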
1. Christiaan Huygens, Dutch physicist, 1629-1695.
Example: The interference phenomenon can be explored in everyday life: if one positions oneself with
one ear pointing towards a loudspeaker and turns up the treble, the sound picture will change as one
moves in front of the loudspeaker, especially when moving perpendicular to the loudspeaker’s acoustic
axis. What happens is that the ear is moved to different points in space, which exhibit different
amounts of constructive and destructive interference. This phenomenon is less distinct at low frequencies (bass), because the wavelength gets larger. This is also the reason that a stereo sound system can
do with one subwoofer for the very low frequency band, but needs two loudspeakers for the remaining
higher frequencies.
As noted above, dimensions give most insight when they are measured in wavelengths. Consider the
planar transducer in Figure 5 (left): the near field from this type of transducer is defined[4] as the region between the transducer and a range of a²/λ. The far-field region corresponds to field points
at ranges much larger than a²/λ. In Figure 5 (left), a is specified, but λ is not. If the transducer frequency is f0 = 0.5 MHz, then a²/λ = 33 mm, which is in the middle of the plot. If f0 = 7.5 MHz, then
a²/λ = 0.5 m! The explanation is as follows: the far field is defined as the region where there is only
moderate to little destructive interference. For this to be possible, the distance from a given field point in
this region to any point on the transducer surface should vary much less than a wavelength. Consider a given field point not on the acoustic axis. Next, draw two lines to the two opposite
edges of the transducer. The difference in length of these two lines - measured in wavelengths - must be much less than one, in order to have little destructive interference at this field point. Thus, the
higher the frequency, the shorter the wavelength, and the farther away one must move from the transducer surface in order for the difference in length of the two lines to be much less than one wavelength.
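The near-field lengths quoted above can be reproduced in a few lines. The radius a = 10 mm is inferred from the 0.5 MHz figure, since a is not stated in the text:

```python
c = 1500.0                     # propagation velocity, as in Figure 2 [m/s]
a = 10e-3                      # transducer radius [m]; inferred, see above

near_field = {}
for f0 in (0.5e6, 7.5e6):
    lam = c / f0               # wavelength at this frequency
    near_field[f0] = a * a / lam
    print(f"f0 = {f0/1e6:.1f} MHz: near-field length a^2/lam = "
          f"{a * a / lam * 1e3:.0f} mm")
```

This reproduces the ~33 mm and 0.5 m figures from the text.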
An ultrasound field from a physical transducer will always show a complicated behaviour as can be
sensed from Figure 5. Each point source is assumed to emit exactly the same pressure wave (an example of the temporal shape is given in Figure 8). Thus, the circles in the animation in Figure 5 indicate spatial and temporal locations of each of the individual waveforms. The contribution of all these
waveforms would have to be added in order to construct the total pressure field in front of the transducer (however, the circles in Figure 5 only represent point sources on a single diameter across the
transducer; many more point sources would be needed to represent the total field from a disk transducer).
Figure 5 Left: Example of moving circles showing “wave fronts” of equal phase (or equal travel time)
as a function of time from selected point sources (= red dots). For simplicity, only point sources located
on a diameter are shown, making this drawing two-dimensional. Right: The same for a focused transducer.
The radius of curvature of the disk surface can be deduced from this figure. What is it?
Problem 3 Huygens’ principle. How would you find - or calculate - how many point sources are needed on the transducer surface in Figure 5 in order to represent the pressure field in front of the transducer with a given accuracy?
Problem 4 Write a short summary of this chapter.
5 Ultrasound’s interaction with the medium
The interaction between the medium and the ultrasound emitted into the medium can be described
by the following phenomena:
The echoes that travel back to the transducer, and thus give information about the medium, are due to
two phenomena: reflection and scattering. Reflection can be thought of as a billiard ball bouncing off the barrier of the table, where the angle of reflection is identical to the angle of incidence. Scattering can be thought of as what happens when one shines strong light on the tip of a needle: light
is scattered in all directions. In acoustics, reflection and scattering take place when the emitted
pulse travels through the interface between two media of different acoustic properties, such as when
hitting the boundary of an object with different acoustic properties.
Specifically, reflection takes place when the interface is large relative to the wavelength (e.g. between blood and intima in a large vessel). Scattering takes place when the interface is small relative
to the wavelength (e.g. a red blood cell).
The abstraction of a billiard ball is not complete, however: In medical ultrasound, when reflection is
taking place, typically only a (small) part of the wave is reflected. The remaining part is transmitted
through the interface. This transmitted wave will nearly always be refracted, thus typically propagating in another direction. The only exception is when the wave impinges perpendicularly on a large planar interface: the reflected part of the wave is reflected back in exactly the same direction it came
from (like with a billiard ball) and the refracted wave propagates in the same direction as the incident wave.
Reflection and scattering can happen at the same time, for instance, if the larger planar interface is
rough. The smoother the interface, the more the interaction resembles pure reflection (if it is completely smooth, specular
reflection takes place); the rougher, the more it resembles scattering.
When the emitted pulse travels through the medium, some of the acoustic (mechanical) energy is
converted to heat by a process called absorption. Of course, the echoes also undergo absorption.
Finally, the loss in intensity of the forward-propagating acoustic pulse due to reflection, refraction,
scattering and absorption is collectively named attenuation.
5.1 Reflection and transmission*
When a plane wave impinges on a plane, infinitely large interface between two media of different
acoustic properties, reflection and refraction occur, meaning that part of the wave is reflected and part
of the wave is refracted. The refracted wave thus continues its propagation, but in a new direction.
To describe this quantitatively, the specific acoustic impedance, z, is introduced. In a homogeneous
medium it is defined as the ratio of pressure to particle velocity in a progressing plane wave, and can
be shown to be the product of the physical density, ρ, and the acoustic propagation velocity, c, of the medium. Thus, if medium 1 is specified in terms of its physical density, ρ1, and acoustic propagation velocity, c1, the specific acoustic impedance for this medium is z1 = ρ1c1. The units become kg/(m²s),
which is also denoted rayl. Likewise for medium 2: z2 = ρ2c2. The interaction of ultrasound with this
interface can be illustrated by use of Figure 6, where an incident plane wave is reflected and transmitted at the interface between medium 1 and medium 2. The (pressure) reflection coefficient between
the two media is:[2]
R = (z2/cos θt − z1/cos θi) / (z2/cos θt + z1/cos θi)
(9)
where the angle of incidence, θi, and the angle of transmission, θt, are related to the propagation velocities as
sin θi / sin θt = c1/c2.
(10)
Equation (10) is a statement of Snell’s law,[2] which also states that θr = θi. The pressure transmission
coefficient is T = 1 + R.
It should be noted here that Snell’s law in optics is valid for rays, and therefore valid for plane
waves in acoustics. However, when the acoustic wave travels as a beam, Snell’s law is only approximately valid. The validity is related to the properties of the beam, namely to which degree the wave
field inside the beam can be considered locally plane (which again is related to the thickness of the
beam, measured in wavelengths).
Strictly speaking, if the field incident on an interface is not fully planar, and the interaction is to be
modelled quantitatively, then the field should be decomposed into a number of plane waves, just like
a temporal pulse can be decomposed into a number of infinite tone signals. The plane waves can then
be reflected one by one, using (9) and (10).
In the human body, approximate reflection can be observed at the interface between blood and the
intima of large vessel walls or at the interface between urine and the bladder wall.
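Equations (9) and (10) can be combined into a small calculation. The density and speed values below for blood and a vessel-wall-like tissue are rough literature numbers, not values from this text:

```python
import math

def reflection(theta_i, rho1, c1, rho2, c2):
    """Pressure reflection coefficient of Eq. (9), using Snell's law (10)."""
    z1, z2 = rho1 * c1, rho2 * c2          # specific acoustic impedances
    sin_t = math.sin(theta_i) * c2 / c1    # sin(theta_t) from Eq. (10)
    if abs(sin_t) >= 1.0:
        return None                        # beyond the critical angle
    theta_t = math.asin(sin_t)
    num = z2 / math.cos(theta_t) - z1 / math.cos(theta_i)
    den = z2 / math.cos(theta_t) + z1 / math.cos(theta_i)
    return num / den

# Normal incidence from blood (rho ~ 1060 kg/m^3, c ~ 1570 m/s) into a
# vessel-wall-like tissue (rho ~ 1060 kg/m^3, c ~ 1630 m/s):
R = reflection(0.0, 1060.0, 1570.0, 1060.0, 1630.0)
T = 1 + R                                  # pressure transmission coefficient
print(f"R = {R:.4f}, T = {T:.4f}")         # only ~2 % of the pressure reflects
```

The small R illustrates why soft-tissue interfaces transmit most of the pulse onward, leaving energy for deeper echoes.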
Quiz 2
(Open in floating window of size 800x700)
Quiz 3
(Open in floating window of size 800x400)
5.2 Critical angle**
Depending on the speed of sound of the two media, some special cases occur.[2]
Figure 6 Graphical illustration of Snell’s law describing the direction of an incident plane wave (pi), reflected
plane wave (pr) and transmitted (refracted) plane wave (pt) at a large smooth interface. The three arrows indicate the propagation directions of the plane waves; the three parallel lines symbolize that the wave is planar. The
pressure amplitudes of the reflected and transmitted waves are not depicted, but their relative amplitudes can be
calculated from R and T. θr = θi.
If c1 > c2, the angle of transmission, θt, is real and θt < θi, so that the transmitted wave is bent towards
the normal to the interface. This can be studied with the interactive Figure 6.
If c1 < c2, the so-called critical angle can be defined as
sin θc = c1/c2.
(11)
If θi < θc, the situation is the same as above, except that θt > θi, i.e., the transmitted wave is bent away
from the normal to the interface. This can be studied with the interactive Figure 6.
If θi > θc, the transmitted wave takes a very peculiar form; in short, total reflection occurs and no propagating wave is transmitted into medium 2. The
interested reader can find more details in larger textbooks[2].
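A quick evaluation of Eq. (11), using rough literature values for a soft-tissue-to-bone interface (illustrative numbers, not from this text):

```python
import math

c1, c2 = 1540.0, 3500.0        # soft tissue and (rough) cortical bone [m/s]
theta_c = math.asin(c1 / c2)   # critical angle from Eq. (11)
print(f"critical angle = {math.degrees(theta_c):.1f} deg")   # ~26 deg
```

Beyond this incidence angle, the pulse cannot propagate into the bone.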
5.3 Scattering*
While reflection takes place at interfaces of infinite size, scattering takes place at small objects with
dimensions much smaller than the wavelength. Just as before, the specific acoustic impedance of the
small object must be different from the surrounding medium. The scattered wave will be more or less
spherical, and thus propagate in all directions, including the direction towards the transducer. The latter is denoted backscattering.
The scattering from particles much less than a wavelength is normally referred to as Rayleigh scattering. The intensity of the scattered wave increases with frequency to the power of four.
Biologically, scattering can be observed in most tissue and especially in blood, where the red blood
cells are the predominant cells. They have a diameter of about 7 μm, much smaller than the wavelength of clinical ultrasound.
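The fourth-power frequency dependence of Rayleigh scattering is strong; a short sketch (frequencies are illustrative) shows what doubling the frequency does:

```python
f1, f2 = 2.5e6, 5.0e6          # two illustrative imaging frequencies [Hz]
ratio = (f2 / f1) ** 4         # Rayleigh: scattered intensity ~ f^4
print(ratio)                   # 16.0: doubling f gives 16x the backscatter
```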
5.4 Absorption*
Absorption is the conversion of acoustic energy into heat. The mechanisms of absorption are not fully understood, but relate, among other things, to the friction loss in the springs mentioned in Section 2. More details on this can be found in the literature.[2]
Pure absorption can be observed by sending ultrasound through a viscous liquid such as oil.
5.5 Attenuation*
The loss of intensity (or energy) of the forward-propagating wave due to reflection, refraction, scattering and absorption is denoted attenuation. The intensity is a measure of the power through a given
cross-section; thus the units are W/m². It can be calculated as the product of particle velocity
and pressure: I = pu = p²/z, where z is the specific acoustic impedance of the medium. If I(0) is the
intensity of the pressure wave at some reference point in space and I(x) is the intensity at a point x
further along the propagation direction, then the attenuation of the acoustic pressure wave can be written as:
I(x) = I(0)·e^(−αx)
(12)
where α (in units of m⁻¹) is the attenuation coefficient. α depends on the tissue type (and for some
tissue types, like muscle, also on the orientation of the tissue fibres) and is approximately proportional
to frequency.
As a rule of thumb, the attenuation in biological media is 1 dB/cm/MHz. As an example, consider
ultrasound at 7.5 MHz. When a wave at this frequency has travelled 5 cm in tissue, the attenuation
will (on average) be 1 dB/cm/MHz x 5 cm x 7.5 MHz = 37.5 dB. For bone, the attenuation is about
30 dB/MHz/cm. If these two attenuation figures are converted to intensity half-length (the distance
corresponding to a loss of 50 %) at 2 MHz, it would correspond to 15 mm in soft tissue and 0.5 mm
in bone.
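The rule-of-thumb numbers above can be verified with a few lines (a minimal sketch; the 50 % intensity loss corresponds to the standard 10·log10(2) ≈ 3 dB):

```python
import math

def attenuation_db(alpha_db_cm_mhz, depth_cm, f_mhz):
    """Total one-way attenuation in dB for a given depth and frequency."""
    return alpha_db_cm_mhz * depth_cm * f_mhz

def half_length_mm(alpha_db_cm_mhz, f_mhz):
    """Depth at which the intensity has dropped 50 % (10*log10(2) ~ 3 dB)."""
    db_per_cm = alpha_db_cm_mhz * f_mhz
    return 10 * math.log10(2) / db_per_cm * 10   # convert cm to mm

print(attenuation_db(1.0, 5.0, 7.5))   # 37.5 dB, the soft-tissue example
print(half_length_mm(1.0, 2.0))        # ~15 mm in soft tissue at 2 MHz
print(half_length_mm(30.0, 2.0))       # ~0.5 mm in bone at 2 MHz
```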
Problem 5 Consider a scanning situation with two interfaces. One is located at a depth of 1 cm, with
water between it and the transducer. The other is located at a depth of 2 cm, with oil from
1 cm to 2 cm. From 2 cm on, there is water again. The attenuation of water is 0 dB/cm/MHz, while it is 1.5 dB/cm/
MHz for oil. The transducer frequency is 5 MHz. What is the pressure magnitude at the receiving
transducer of the echo from the second interface, relative to the first? (Hint: put the information into a drawing.)
5.6 An example of ultrasound’s interaction with biological tissue
When an ultrasound wave travels in a biological medium all the above mechanisms will take place.
Reflection and scattering will not take place as two perfectly distinct phenomena, as they were described above. The reason is that the body does not contain completely smooth interfaces of infinite
size. And even though the body contains infinitesimally small point objects, the waves scattered from
these will be infinitesimally small in amplitude and thereby not measurable!
The scattered wave moving towards the transducer as well as the reflected wave moving towards the
transducer will be denoted the echo in this document.
So the echo is due to a mixture of reflection and scattering from objects of dimension:
• somewhat larger than the wavelength (example: blood media interface at large blood vessels)
• comparable to the wavelength
• down to maybe a 20th of a wavelength (example: red blood cells).
(Figure 7 annotations: received voltage versus time; transducer; attenuation; scattering; diffuse
scattering; absorption; reflection at 90°; refraction; the three media have specific acoustic
impedances Z1 = ρ1c1, Z2 = ρ2c2 and Z3 = ρ3c3.)
Figure 7 Sketch of the interaction of ultrasound with tissue. The left drawing shows the medium with the
transducer on top. The ultrasound beam is shown superimposed onto the medium. The right part of the
drawing shows the corresponding received echo signal.
BME
DTU Elektro
12/20
The effects in Subsections 5.1 - 5.5 are illustrated in Figure 7.
The absorption continuously takes place along the acoustic beam, as media 1 and media 2 (indicated
by their specific acoustic impedances) are considered lossy.
Consider the different components of the medium: Scattering from a single inhomogeneity is illustrated at the top of the medium. Below is a more realistic situation where the echoes from many scatterers create an interference signal. If a second identical scattering structure is located below the first,
then the interference signal will be roughly identical to the interference signal from the first. The overall amplitude, however, will be a little lower, due to the absorption and the loss due to the first group
of scatterers. Notice that the interference signal varies quite a bit in amplitude.
The emitted signal next encounters a thin planar structure, resulting in a well-defined strong echo.
Next, an angled interface is encountered, giving oblique incidence and thus refraction, according to
(10) and Figure 6. The change in specific acoustic impedance is the same as above, but due to the
non-perpendicular incidence, less energy is reflected back. The transmitted wave undergoes refraction,
and thus scatterers located below this interface will be imaged geometrically incorrectly.
Problem 6 The example in Figure 7 is not totally correct. What is wrong?
6 Imaging
Imaging is based on the pulse-echo principle: A short ultrasound pulse is emitted from the transducer. The pulse travels along a beam pointing in a given direction. The echoes generated by the pulse
are recorded by the transducer. This electrical signal is always referred to as the received signal. The
later an echo is received, the deeper is the location of the structure giving rise to the echo. The larger
the amplitude of the echo received, the larger is the average specific acoustic impedance difference
between the structure and the tissue just above. An image is then created by repeating this process
with the beam scanning the tissue.
All this will now be considered in more detail by considering how Amplitude mode, Motion mode
and Brightness mode work.
Figure 8 Left: The basic principle behind pulse-echo imaging. An acoustic pulse is emitted from the transducer,
scattered by the point reflector and received after a time interval which is equal to the round trip travel time. The
emitted pulse is also present in the received signal due to limitations of the electronics controlling the transducer.
Right: the signal processing creating the envelope of the received signal followed by calculation of the logarithm
yielding the scan line.
BME
DTU Elektro
13/20
6.1 A-mode
The basic concept behind medical diagnostic ultrasound is shown in Figure 8, which also shows the
simplest mode of operation, A-mode. In the situation in Figure 8 (left) a single point scatterer is located in front of the transducer at depth d. A short pulse is emitted from the transducer, and at time
2d/c, the echo from the point target is received by the same transducer. Thus, the deeper the point scatterer is positioned, the later the echo from this point scatterer arrives. If many point scatterers (and
reflectors) are located in front of the transducer, the total echo can be found by simple superposition
of each individual echo, as this is a linear system, when the pressure amplitude is sufficiently low.
The scan line - shown in Figure 8 lower right - is created by calculating the envelope (Danish: indhyllingskurve) of the received signal followed by calculation of the logarithm, in order to compress
the range of image values for better adaptation to the human eye. The scan line can thus be called a
gray scale line. The M-mode and B-mode images are made from scan lines.
6.2 Calculation of the scan line*
The received signal, gr(t), is Hilbert transformed to grH(t) in order to create the corresponding analytical signal g̃ r(t) = gr(t) + jgrH(t). Twenty times the logarithm of the envelope of this signal,
20log|g̃ r(t)|, is then the envelope in dB, which can be displayed as a gray scale line, as shown in Figure
8 (right). Such a gray scale bar is called a scan line, which is also the word used for the imaginary line
in tissue along which gr(t) is recorded. Note that, because the envelope operation is not linear, the
scanner does not constitute a fully linear system.
Unfortunately, clinical ultrasound scanners do not display images in dB. Further image enhancements
take place in the scanner (typically proprietary software), and the gray scale is thus - at best - a pseudo
dB-scale, in this document denoted “dB”.
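The construction of the analytic signal and the envelope can be sketched with NumPy alone: the Hilbert transform is implemented by zeroing the negative frequencies of the FFT, giving g̃r(t) = gr(t) + j·grH(t) directly. The 5 MHz Gaussian pulse below is an illustrative received signal, not data from the text:

```python
import numpy as np

def envelope_db(g):
    """Envelope of a real signal via the analytic signal (negative FFT
    frequencies zeroed), log-compressed to dB relative to the maximum,
    as done when forming a scan line."""
    n = len(g)
    G = np.fft.fft(g)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0      # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin for even-length signals
    env = np.abs(np.fft.ifft(G * h))   # |g + j*gH|
    return env, 20 * np.log10(env / env.max() + 1e-12)

# Illustrative received signal: a 5 MHz Gaussian pulse sampled at 100 MHz
fs, f0 = 100e6, 5e6
t = np.arange(1024) / fs
g = np.exp(-((t - 2e-6) / 0.4e-6) ** 2) * np.sin(2 * np.pi * f0 * t)
env, env_db = envelope_db(g)   # env peaks near t = 2 us, at sample 200
```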
Quiz 4
(Open in floating window of size 800x700)
6.3 M-mode
If the sequence of pulse emission and reception is repeated continuously, and the scan lines are placed
next to each other (with new ones to the right), motion mode, or M-mode, is obtained. The vertical
axis will be depth in meters pointing downwards, while the horizontal axis will be time in seconds pointing to
the right. This mode can be useful when imaging heart valves, because the movement of the valves
will make distinct patterns in the “image”. An example is shown in Figure 9.
Figure 9 Screen dump of clinical ultrasound scanner used to image the carotid artery in the neck. Upper:
the B-mode image. Lower: the M-mode image recorded along the vertical line in the B-mode image. Notice
in the lower image, the change in location of the vessel walls due to the heart beat.
(Figure 10 annotations: the measurement situation with the transducer, emission & reception
electronics and the medium; the ultrasound system with control unit, scan conversion and the
resulting ultrasound image.)
Figure 10 The principle of a simple B-mode ultrasound system. At this particular point in time, half of
the image has been recorded.
6.4 B-mode
Brightness or B-mode is obtained by physically moving the scan line to a number of adjacent locations. The principle is shown in Figure 10. In this figure, the transducer is moved in steps mechanically across the medium to be imaged. Typically 100 to 300 steps are used, with a spacing between
0.25λ and 5λ. At each step, a short pulse is emitted, followed by a period of passive registration of the
echo. In order to prevent mixing the echoes from different scan lines, the registration period has to be
long enough to allow all echoes from a given emitted pulse to be received. This will now be considered in detail.
Assume that the average attenuation of ultrasound in human soft tissue is α in units of dB/MHz/cm. If
the smallest echo that can be detected - on average - has a level of δ in dB, relative to the echo from
tissue directly under the transducer, then the maximal depth from where an echo can be expected is
given by δ = α f0 2Dmax, or
Dmax = δ/(2α f0)
(13)
Example: According to a rule of thumb, the average attenuation of ultrasound in human soft tissue is
1 dB/MHz/cm. Assume that δ = 80 dB. At f0 = 7.5 MHz, (13) gives Dmax = 5.3 cm.
The time between two emissions will then be Tr = 2Dmax/c, which is the time it takes the emitted pulse
to travel to Dmax and back again. If there are Nl scan lines per image, then the frame rate (number of
images per second produced by the scanner) will be
fr = (Tr Nl)⁻¹.
(14)
Example: For Nl = 200, fr ≈ 70 Hz, a good deal more than needed to obtain “real-time” images (some
20 frames per second). However, an fr of 70 Hz might not be an adequate temporal resolution, when
studying heart valves. If the total image width is 40 mm, then the distance between adjacent scan lines
is 40 mm / 200 = 0.2 mm. Please note that this number does not directly reflect the resolution size
of the scanner, which is considered in Chapter 8.
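The example can be reproduced directly from (13) and (14) (a minimal sketch; symbol names follow the text):

```python
c = 1540.0  # propagation speed in soft tissue (m/s), as used in these notes

def d_max(delta_db, alpha_db_mhz_cm, f0_mhz):
    """Maximal imaging depth (cm) from (13): Dmax = delta / (2*alpha*f0)."""
    return delta_db / (2 * alpha_db_mhz_cm * f0_mhz)

def frame_rate(dmax_cm, n_lines):
    """Frame rate (Hz) from (14): fr = 1/(Tr*Nl) with Tr = 2*Dmax/c."""
    t_r = 2 * (dmax_cm / 100) / c   # round-trip time per scan line (s)
    return 1 / (t_r * n_lines)

dmax = d_max(80, 1.0, 7.5)    # about 5.33 cm
fr = frame_rate(dmax, 200)    # about 72 Hz, i.e. "some 70 Hz"
```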
Problem 7 If the frame rate is fr = 20 Hz (a typical number for clinical use), how much time will be
available for recording half an image as shown in Figure 10?
In order to better appreciate the dynamics of the recording situation, Figure 11 shows the recording
situation in extreme slow motion. It will be wise to consider this animation in detail. To help with this,
a number of problems and quizzes are provided below:
Problem 8 Use a ruler (Danish: lineal) to check whether the green dots in Figure 11 are located correctly
when the red dot is at the location shown.
Problem 9 How much slower is the scanning performed in Figure 11, compared to normal clinical
use?
Quiz 5
(Open in floating window of size 800x700)
Quiz 6
(Open in floating window of size 800x1000)
Examples of clinical B-mode images can be seen in the chapter on clinical imaging in this Webbook.
Figure 11 Schematic live illustration of the recording of a B-mode image. Left: The ultrasound transducer scanning
a piece of animal tissue in oil. The photograph is made by later slicing the tissue and photographing the slice where the
scanning took place. The red dot represents the emitted pulse, which decreases in amplitude the more tissue it penetrates. The green dot represents the echoes. Right: The screen of the scanner. The scan line is updated from left to right.
Not all in this “drawing” is to scale.
(Figure 12 annotations: transducer shown at several positions along the surface; image width D,
lateral axis x, depth axis z down to zmax; scan lines at the maximum steering angle; the three
single-angle images are added (+, +, =) to form the fully compounded triangular region and the
partly compounded region around it.)
Figure 12 The principle of spatial compound imaging for N = 3. Three single-angle images are recorded
from three different angles and then averaged to form the compound image. Inside the triangular region, the
image is fully compounded, outside, less compounded.
7 Array transducers
The recording of a B-mode ultrasound image by mechanical movement of the transducer is now an
old technique. Today most ultrasound systems apply array transducers, which consist of up to several
hundred crystals arranged along a straight or curved line. The elements of the transducer array,
or a subset of elements, are connected to a multichannel transmitter/receiver, operating with up to several hundred independent channels. The shape, direction and location of the ultrasound beam can then
be controlled electronically (in the newest scanners completely by software) thereby completely eliminating mechanical components of the transducer. In the most flexible systems, the amplitude, waveform and delay of the pulses can be controlled individually and precisely.
Two different types of transducer systems exist: Phased array systems, where all elements are in use
all the time; the beam is then steered in different directions to cover the image plane. In linear
array systems, a subset of elements is used for each scan line. From this subset a beam is created, and
then translated by letting the subset of elements “scan” over the entire array. The latter can be observed (schematically) in Figure 11: The blue dots show all the crystals. The light blue dots show the
active crystals, which are used for emitting a focused beam and receiving the echoes along the same
beam.
8 Resolution size and point spread function
The resolution size of an imaging system can be assessed in many different ways. One way is to record an image of a small point target. The resulting image is called the point spread function (psf), i.e.
an image which shows how much the image of a point target is “spread out”, due to the limitations of
the imaging system. The point target should preferably be much smaller than the true size of the psf.
Another related way is to image two point targets with different separations, and see how close they
can be positioned and still be distinguishable.
The –3 dB width of the psf in the vertical and horizontal image direction will then be a quantitative
measure for the resolution size. The two directions correspond to the depth and lateral direction in the
recording situation, respectively.
The resolution in the depth direction (axial resolution) can be appreciated from the echo signal in
Figure 8. This echo signal was created by emitting a pulse with the smallest possible number of periods. The resolution size is equal to the length of the echo pulse from a point target, which in the present note is assumed identical to the emitted sound pulse. Thus, if the axial resolution size is to be
improved (decreased), the only possible way is to increase the centre frequency of the transducer. But
increasing f0 will increase attenuation as well, as discussed in Subsection 5.5. The consequence is that
resolution size is always traded off against penetration depth.
The resolution size is treated in more detail in the chapter on image quality in this webbook.
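The trade-off above can be made concrete: if the emitted pulse always contains a fixed, small number of periods, its spatial length is that number of wavelengths, λ = c/f0, so only a higher centre frequency shortens the pulse. A minimal sketch (the two-period pulse and the frequencies are illustrative assumptions, not values from the text):

```python
c = 1540.0  # propagation speed in soft tissue (m/s)

def pulse_length_mm(f0_mhz, n_periods=2):
    """Spatial length of an n-period pulse: n * wavelength = n * c / f0."""
    wavelength_mm = c / (f0_mhz * 1e6) * 1e3
    return n_periods * wavelength_mm

# Doubling the centre frequency halves the pulse length (and hence the
# axial resolution size), at the cost of higher attenuation (Subsection 5.5).
low = pulse_length_mm(3.5)   # about 0.88 mm
high = pulse_length_mm(7.0)  # about 0.44 mm
```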
9 Spatial compounding*
The array technique described in Chapter 7 can be used to implement so-called spatial compounding. In this technique, several images are recorded from different angles and then combined, to yield
an image with some desirable properties, relative to the conventional B-mode image. The technique
is illustrated in Figure 12. Because a single compound image consists of N single-angle images, the
frame-rate will be reduced by a factor of N compared to B-mode imaging.
An example of a conventional B-mode image and the corresponding compound image is shown in
Figure 13. If compared to the B-mode image, a number of (desirable) features become apparent:
The B-mode image has a quite “mottled” appearance, in the sense that the image consists of dots -
roughly the size of the psf - on a black background. This is the result of the aforementioned constructive and destructive interference from closely spaced scatterers and reflectors, as illustrated in
Figure 7. The phenomenon is commonly referred to as speckle noise. Speckle noise is a random phenomenon, and a given combination of constructive and destructive interference from a cloud of closely spaced scatterers is closely related to beam size, shape, orientation and direction. Thus, the
interference pattern will change for the same tissue region when imaged from a different direction. If
the change in view-angle is large enough, these interference patterns will be uncorrelated, so averaging
several uncorrelated single-angle images will yield a reduction in speckle noise.
Because the ultrasonic echoes from interfaces vary in strength with the angle of incidence, the more
scan angles used, the larger the probability that an ultrasound beam is perpendicular or nearly perpendicular to an interface, and the better the interface will be visualized.
The reduction in speckle noise and the improvement in visualization of interfaces give an image with
a smoother appearance, better contrast and better delineation of boundaries. This can be seen in
Figure 13 (right).
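The averaging argument can be illustrated numerically. The sketch below models fully developed speckle amplitude as the magnitude of a complex Gaussian field (a standard speckle model, assumed here rather than taken from this text) and compares the speckle contrast (standard deviation over mean) of a single image with that of an N = 9 compound:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_image(shape):
    """Fully developed speckle: magnitude of a complex Gaussian field."""
    return np.abs(rng.normal(size=shape) + 1j * rng.normal(size=shape))

N = 9
shape = (200, 200)
single = speckle_image(shape)
compound = np.mean([speckle_image(shape) for _ in range(N)], axis=0)

# Rayleigh-distributed amplitude has contrast ~0.52; averaging N
# uncorrelated images reduces it by roughly sqrt(N).
contrast_single = single.std() / single.mean()        # ~0.52
contrast_compound = compound.std() / compound.mean()  # ~0.52/3 = 0.17
```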
Problem 10 In Figure 13 left, there are two bright dots at 9 o’clock and 10 o’clock, but only one at
10 o’clock in Figure 13 right. Why?
10 Nomenclature
a
Radius of transducer disk (m)
R
Radius of curvature of spherically focused transducer (m)
Figure 13 Left: Conventional image of a porcine artery. Right: Spatial compound image of the same porcine artery (average image of single-angle images from the angles: -21°, -14°, -7°, 0°, 7°, 14°, 21°).
λ
Wavelength of ultrasound (m)
f0
Centre frequency of emitted pulse (Hz)
c
Propagation speed of ultrasound (m/s)
N
Number of single-angle images in spatial compound ultrasound
Nl
Number of scan lines in an ultrasound image
fr=Tr–1 Pulse repetition frequency (Hz)
Dmax
Maximal depth (m)
gr(t)
Received signal (V)
|g̃ r(t)| Envelope of received signal (V)
μ
Attenuation coefficient (m⁻¹)
p
Pressure (Pa)
z
Specific acoustic impedance (rayl = kg/(m2s))
ρ
Physical density of medium (kg/m3)
κ
Compressibility of a medium (Pa–1)
11 Glossary
Refraction “The deviation of light in passing obliquely from one medium to another of different
density. The deviation occurs at the surface of junction of the two media, which is known as the refracting surface. The ray before refraction is called incident ray; after refraction it is the refracted ray.
The point of junction of the incident and the refracted ray is known as the point of incidence. [...]”.[1]
Isotropic “Similar in all directions with respect to a property, as in a cubic crystal or a piece of
glass.”[1]
dB A magnitude variable, such as pressure, p, in Pa, can be expressed in dB as 20log10(p/pref), where
pref is some given reference pressure, needed to render the argument of the logarithm dimensionless.
Likewise intensities, I, can be expressed as 10log10(I/Iref) dB.
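A quick sanity check of the two definitions: doubling a pressure gives about +6 dB, halving an intensity gives about −3 dB, and since intensity is proportional to pressure squared, the 20·log10 and 10·log10 conventions agree:

```python
import math

def pressure_db(p, p_ref):
    """Pressure ratio in dB, 20*log10(p/pref)."""
    return 20 * math.log10(p / p_ref)

def intensity_db(i, i_ref):
    """Intensity ratio in dB, 10*log10(I/Iref)."""
    return 10 * math.log10(i / i_ref)

print(pressure_db(2.0, 1.0))   # doubling pressure: about +6.02 dB
print(intensity_db(0.5, 1.0))  # halving intensity: about -3.01 dB
# Consistency: I ~ p^2, so doubling p quadruples I; both give the same dB
print(intensity_db(4.0, 1.0))
```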
12 References
[1] Dorland’s Illustrated Medical Dictionary. 27th edition. W. B. Saunders Co., Philadelphia, PA,
USA. 1988.
[2] Kinsler LE, Frey AR, Coppens AB & Sanders JV: Fundamentals of acoustics. 3rd ed. John
Wiley & sons, Inc. New York, NY, USA, 1982.
[3] Orofino, DP: Analysis of angle dependent spectral distortion in pulse-echo ultrasound. PhD dissertation, Department of Electrical Engineering, Worcester Polytechnic Institute, August 1992,
USA.
[4] Kino, GS: Acoustic waves. Prentice-Hall, Inc. Englewood Cliffs, New Jersey, USA. 1987.
[5] Jensen, JA: Estimation of Blood Velocities Using Ultrasound. A Signal Processing Approach.
Cambridge University Press, New York, 1996. ISBN 0-521-46484-6.
13 Solutions to selected problems
Problem 3: A possible way is to simulate the field with a given number of sources, and then see if
the results change when the number of sources is increased (apart from scaling). If the number of
sources can be doubled or tripled (etc.) without a change in form, the number of sources is probably
representative of the transducer surface.
Problem 4: Learning wise, it would be meaningless to provide an answer here. Instead, please write
the resume yourself. Then wait two weeks, read the chapter again and compare with the resume you
originally wrote.
Problem 5: –3 dB (or 3 dB lower).
Problem 6: The interface between z1 and z2 together with the interface between z2 and z3 generate an
interference echo that is different in shape from the emitted signal (the slab of material denoted z2 is
thinner than the pulse length, thus the two echoes will always overlap in time).
Problem 7: 1/40 s.
Problem 8: Here you have to consider travel time and location of interfaces, in order to see if the
green dots are placed correctly.
Problem 9: Measure the time it takes to finish one image. Calculate the time it takes to
record an image when the frame rate is 20 Hz. Divide the two numbers.
Problem 10: The dot at 10 o’clock that appears on both images is probably due to a micro vessel supplying blood to the arterial wall. The dot at 9 o’clock that only appears on the single-angle image is likely to be a result of quite strong constructive interference.
Ultrasound images of various soft tissue organs
- including a description of artefacts
Søren Torp-Pedersen1 and Jens E. Wilhjelm2
1
Head of Rheumatological US, The Parker Institute, Frederiksberg Hosp., University of Copenhagen,
Nordre Fasanvej 57, DK-2000 Frederiksberg, DK
2DTU Elektro, Ørsteds plads, building 348
Technical University of Denmark, 2800 Kgs. Lyngby
(Ver. 1 26/11/09) © 2001-2009 by S. Torp-Pedersen & J. E. Wilhjelm
1 Introduction
This document provides a number of examples of ultrasound images of various soft tissue organs.
The images are commented in the main text, but the letters specifying the different organs are not placed
on the images; it is up to the reader to put them on. All aspects discussed in these notes are based
on clinical images, unless otherwise noted.
2 Clinical images
All images were recorded with a Logiq E9 ultrasound system (General Electric) and a 15 MHz transducer,
except the abdominal images, which were recorded with a transducer with lower frequency. The scanner applied
spatial compounding, speckle reduction imaging and harmonic imaging.
Figure 1 Thyroid scanned transversally (Danish: Thyreoidea transverselt).
1/11
Figure 2 Arteria Carotis communis scanned longitudinally. The head is at the left on the image (Danish: Arteria carotis communis longitudinelt).
Figure 1 shows a transverse scan of the right thyroid lobe. The thyroid (T) is seen as a light gray homogeneous substance. Cc = common carotid artery, Tr = trachea (contains air artefact), M = muscles,
S = shadow from tendon. Figure 2 shows a longitudinal scan of the left common carotid artery (Cc).
M = muscles.
Figure 3 shows a longitudinal scan of the quadriceps tendon (T). P = patella, Sc = subcutaneous fat,
J = joint cavity, F = femoral bone.
Figure 3 Knee scanned suprapatellarly longitudinally. (Danish: Knæ suprapatellart longitudinelt)
Figure 4 Knee medial joint line in the right knee (Danish: Knæ mediale ledlinje frontalt).
Figure 4 shows a longitudinal frontal scan of the medial joint line in the right knee. M = meniscus,
F = femur, T = tibia, MCL = medial collateral ligament, Sc = subcutaneous fat.
Figure 5 shows the right heel pad, longitudinal scan with light pressure. C = calcaneus. The calcaneus
does not appear very clearly in this image. How can that be? Please make a drawing to try to answer
this question.
Figure 5 Heel pad scanned longitudinally with light pressure (Danish: Hælpude longitudinelt let
tryk).
Figure 6 Heel pad scanned longitudinally with larger pressure than in the previous figure (Danish:
Hælpude longitudinelt med hårdere tryk end forrige figur).
Figure 6 shows the right heel pad, longitudinal scan with harder pressure. C = calcaneus. The calcaneus
is now easier to see.
Figure 7 shows a longitudinal scan of the left Achilles tendon (T). C = calcaneus, B = bursa.
Figure 7 Achilles tendon insertion scanned longitudinally (Danish: Achillessene insertion longitudinelt).
Figure 8 Achilles tendon scanned longitudinally. (Danish: Achillessene longitudinelt).
Figure 8 shows a longitudinal scan of the left Achilles tendon (T). M = muscle, Sc = subcutaneous
fat.
Figure 9 shows a longitudinal scan of the flexor muscles of the forearm. M = muscle, V = vessel.
Figure 9 Flexor muscle scanned longitudinally. (Danish: Underarm flexormuskel longitudinelt)
Figure 10 Achilles tendon scanned transversally. (Danish: Achillessene transverselt).
Figure 10 shows the Achilles tendon (T) in transverse scan. F = fat.
Figure 11 shows a transverse scan of the flexor muscles of the forearm – orthogonal insonation. M
= muscle, B = bone, Sc = subcutaneous fat.
Figure 11 Muscles in lower arm. (Danish: Underarm flexormuskel transverselt).
Figure 12 Muscles in lower arm. (Danish: Underarm flexormuskel transverselt skrå).
Figure 12 shows a longitudinal scan of the flexor muscles of the forearm with oblique insonation along
the long axis of the muscles. The muscles (except one) appear more hypoechoic.
Figure 13 shows a transverse scan of the right liver lobe. The diaphragm is seen as a white line in the
bottom of the image.
Figure 13 Liver. (Danish: Lever i epigastriet transverselt).
Figure 14 Liver scanned longitudinally. (Danish: Lever i epigastriet longitudinelt).
Figure 14 shows a longitudinal scan of the left liver lobe (L). V = vena cava.
Figure 15 shows a longitudinal scan of the left kidney (K). S = kidney sinus, B = bowel.
Figure 15 Kidney (left) scanned longitudinally. (Danish: Venstre nyre longitudinelt).
Figure 16 Kidney (right) scanned longitudinally (Danish: Højre nyre longitudinelt).
Figure 16 shows a longitudinal scan of the right kidney (K) seen through the liver (L).
Figure 17 shows a transverse scan of the right kidney (K) seen through the liver (L).
Figure 17 Kidney (right) scanned transversally (Danish: Højre nyre transversalt).
3 Artefacts
The ultrasound artefacts are numerous and may be understood with a basic understanding of the physical principles of US. Many of the artefacts can be controlled to some extent. It is therefore advisable
to “read” and understand the ultrasound part of the image and not just the anatomical image.
Speckle is the fine-dotted background of the image, i.e., the mottled appearance where the intensity
changes from white to black. It is created by the huge number of subresolvable echoes due to scattering and reflection and is an interference pattern. It is a function of the distribution of the point scatterers and reflectors and of the wavelength. It will therefore look different from tissue to tissue, from
frequency to frequency and from angle to angle. It provides us with tissue contrast. Some machines
have a speckle reduction function, e.g., by applying so-called spatial compounding, where images are
recorded from different angles and added.
Reverberation describes artefacts created by the sound pulse being reflected multiple times between
the transducer and an interface (simple reverberation) and/or between multiple interfaces (multiple
layers of the abdominal wall, complex reverberation). Simple reverberation is easily recognised when
the same echo is repeated equidistantly down through the image. Complex reverberation results in a
“tail” of weak echoes being emitted from, for instance, the abdominal wall some time after the pulse
has travelled through. Complex reverberation explains why the transducer-near end of cysts has echoes. Harmonic imaging removes reverberation artefacts.
Colour/power Doppler is also subject to reverberation artefacts, and it is advisable always to let the
colour box extend up to the transducer surface in order to know of any reverberation sources.
Comet tail When metal in soft tissue is insonated at 90°, a comet tail (white band) is seen. Typically,
this is seen with needles. It is a reverberation artefact and is not seen behind all metal. It is also seen
behind gall stones (in a shorter version).
Sidelobes The transducer crystals also emit sound at some angle to the intended direction. This explains why very good reflectors may generate echoes even when they are not in the path of the pulse.
The abdomen is filled with very good reflectors (gas-filled bowel loops) and they may generate echoes inside the urinary bladder or amniotic cavity (they are best seen when the background is black).
The side lobe issue plays a role both on send and receive: If the crystal emits a sidelobe at a given
angle it is also more sensitive on receive to an echo coming from that angle. The angle is frequency
dependent.
This is why imaging based on harmonics is effective in reducing sidelobe artefacts. The transmit and
receive sidelobes are at different angles (because send and receive frequencies are different).
Mirror. Good reflectors may generate mirror images. Best mirrors are smooth surfaces with high
acoustic impedance difference: gas filled bowel loops, pleura, smooth bone surface (extremity). The
mirror is easily identified and understood when the mirror image and “original” image both are
present in the image. The mirror may, however, be at an angle to the image plane and the “original”
may be placed in front of or behind the image plane.
Colour/power Doppler is also mirrored.
Shadow A shadow is a defect (black region) behind a structure that attenuates the pulse - absolutely
(stone) or relatively (dense connective tissue) - or a shadow generated by a smooth and rounded surface
when the sound hits it tangentially (vessels, subcutaneous fat, muscles, benign tumours). Some
shadows are actually mirror artefacts (both clean and dirty shadows behind bowel loops).
Enhancement When a structure attenuates the sound less than neighboring regions, the pulse will retain more energy, and so will the echoes travelling back through this structure. The result is that the
structure has a band of higher-amplitude echoes behind it (a band of more white gray scale compared
to neighboring regions; it is probably a function of water content and not pathognomonic of cysts!). Some
liver metastases, most breast fibroadenomas and areas of inflammation show enhancement.
Anisotropy The human body is isotropic to CT and MR imaging, meaning that a specific point in the
body looks the same when the body is rotated in the scanner. This does not apply to US, which displays
anisotropy. An interface appears whiter when insonated at 90 degrees than at any other angle! Muscles and tendons display anisotropy.
Interface thickness In reality, an interface has no thickness. However, the interface appears to have
a certain thickness, which is a function of pulse length, reflectivity, gain settings etc. The transducer-near side of an interface is trustworthy, not the posterior side. The interface thickness also accounts
for the number of layers being overestimated if the interfaces are regarded as layers.
Sound speed The ultrasound scanner assumes that the pulse travels at 1540 m/s and uses this parameter to construct the depth dimension of the image. The sound speed is, however, different from
tissue to tissue (which is one of the reasons that echoes are generated). The error in distance may be
as high as 10 % in in vivo human imaging. Examples of errors are needles looking bent and pleura
being elevated behind cartilaginous ribs.
Refraction Ultrasound obeys wave theory, and refraction occurs when the pulse traverses an
interface at some angle (non-perpendicular incidence). It may become pronounced in the image when the
pulse travels through a trapezoid shape (with two successive additive refractions). The liver and
spleen may duplicate the upper renal pole with a resulting pseudotumor.
3.1 Specific Doppler artefacts
The artefacts above also apply to the behaviour of the Doppler pulse, e.g., mirror, attenuation and reverberation. The following artefacts are inherent Doppler artefacts:
Blooming Blooming is when the colour information bleeds outside the vessel. Intrasynovial vessels
are often below the resolution of gray-scale imaging and blooming is of course present when we see
these vessels. Blooming is Doppler gain dependent.
Aliasing This occurs when the PRF is too low (the Doppler shift is more than half of the PRF – the
Nyquist limit). The speed and direction of the blood is misinterpreted and a wrong colour is displayed.
This is only a problem if direction and speed are important parameters.
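The Nyquist limit can be quantified with the standard Doppler-shift relation fd = 2 v f0 cosθ/c (standard Doppler theory; it is not derived in these notes). Aliasing sets in when |fd| exceeds PRF/2, which caps the unambiguously measurable velocity. A sketch with illustrative numbers:

```python
import math

C = 1540.0  # assumed sound speed in soft tissue (m/s)

def max_velocity(prf_hz, f0_hz, angle_deg=0.0):
    """Largest blood velocity measurable without aliasing:
    fd = 2*v*f0*cos(theta)/c must stay below PRF/2."""
    return C * prf_hz / (4 * f0_hz * math.cos(math.radians(angle_deg)))

def is_aliased(v_mps, prf_hz, f0_hz, angle_deg=0.0):
    """True if the Doppler shift of velocity v exceeds the Nyquist limit."""
    fd = 2 * v_mps * f0_hz * math.cos(math.radians(angle_deg)) / C
    return abs(fd) > prf_hz / 2

# 5 MHz transducer, PRF 5 kHz, beam parallel to the flow:
v = max_velocity(5e3, 5e6)   # about 0.39 m/s
```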
Twinkling Hard surfaces (bone, calcification) may display a rapid twinkling between red and blue,
although no movement occurs.
Generally, the way to distinguish between true and false flow is to use the spectral Doppler. In case
of mirroring and reverberation, however, the spectral Doppler is of no help since it is true flow shown
in a wrong place.
X-ray imaging:
Fundamentals and planar imaging
Mikael Jensen1 and Jens E. Wilhjelm2
1Hevesy Laboratory, DTU Nutech, 2Biomedical Engineering, DTU Elektro
Technical University of Denmark
(Ver. 4 28/8/14) © 2004 - 2014 by M. Jensen and J. E. Wilhjelm
1 Introduction
X-ray imaging is the most widespread and well-known medical imaging technique. It dates back to
the discovery by Wilhelm Conrad Röntgen in 1895 of a new kind of penetrating radiation coming
from an evacuated glass bulb with positive and negative electrodes. Today, this radiation is known as
short-wavelength electromagnetic waves, called X-rays in the English-speaking countries but
“Röntgen” rays in many other countries. The X-rays are generated in a special vacuum tube: the X-ray
tube, which will be the subject of the first subsection. The emanating X-rays can be used to cast
shadows on photographic films or radiation-sensitive plates for direct evaluation (the technique of planar X-ray imaging), or the rays can be used to form a series of electronically collected projections,
which are later reconstructed to yield a 2D map (thus, a tomographic image). This is the so-called
CAT or CT technique (see the chapter on CT imaging).
1.1 Characterization of X-ray
X-rays are electromagnetic radiation (photons) with wavelengths 10 pm < λ < 10 nm. They travel
with the speed of light, c0 ≈ 300 000 km/s, and have a frequency of ν = c0/λ [Hz]. The energy of the
individual photon is E = hν [J], where h = 6.62×10⁻³⁴ Js is Planck's constant. The energy of an X-ray
is typically measured in electron volts (eV). 1 eV is the energy increase that an electron experiences
when accelerated over a potential difference of 1 V. Thus, 1 eV = qeV = 1.602×10⁻¹⁹ J, where
qe is the charge of an electron (both qe and V are negative in this context).
Problem 1 Calculate the frequency and energy for monochromatic X-rays with λ = 1 nm.
Answer: ν = c0/λ ≈ 3×10¹⁷ Hz = 300 000 THz. E = hν ≈ 1.99×10⁻¹⁶ J ≈ 1.24 keV.
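As a quick numerical check of Problem 1, the relations ν = c0/λ and E = hν can be evaluated directly; a sketch using the rounded constants quoted in the text:

```python
# Numerical check of Problem 1 (rounded constants, as in the text).
h = 6.62e-34        # Planck's constant [J s]
c0 = 3.0e8          # speed of light [m/s]
qe = 1.602e-19      # elementary charge [C], so 1 eV = 1.602e-19 J

lam = 1e-9               # wavelength, 1 nm [m]
nu = c0 / lam            # frequency [Hz]
E_J = h * nu             # photon energy [J]
E_keV = E_J / qe / 1000  # photon energy [keV]

print(f"nu = {nu:.2e} Hz")                    # 3.00e+17 Hz
print(f"E  = {E_J:.2e} J = {E_keV:.2f} keV")  # 1.99e-16 J = 1.24 keV
```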
Figure 1 X-ray tube showing the glass envelope, the cathode (heated filament) with current I, and the anode
at positive voltage U, with electrons accelerated from cathode towards anode; protective lead surrounds the
tube. The tube generates X-rays in all directions, but due to the encapsulation most are lost and only a fraction
is used for imaging.
Figure 2 X-ray emission from a Wolfram (tungsten) anode X-ray tube: relative intensity at fixed electron
current versus photon energy (0-160 keV) for tube voltages of 50, 90 and 140 kV, with a 1 mm Al filter
cut-off. Observe that for a given tube voltage, the higher the energy of the photons, the fewer there are;
and if the number of photons is to increase, the tube voltage should be increased. Data from [1].
1.2 X-ray generation: The X-ray tube
A typical X-ray tube is depicted in Figure 1. It consists of an evacuated glass bulb with a heated filament (Danish: glødetråd) as the negative electrode and a heavy metal positive anode. Thermic electrons emitted by the heated filament are accelerated across the gap to the anode. If the voltage between
cathode and anode is U volts and the current in the tube is I amperes, each electron will hit
the anode with a kinetic energy of U eV. The power deposited in the anode will be I times U, and the
total energy transferred to the anode in an exposure lasting t seconds will be IUt.
The electrons will be slowed down in the anode material, mainly releasing their energy as heat, but
to a small degree (a few percent) the energy is transformed to either Bremsstrahlung or characteristic
X-rays. The Bremsstrahlung originates from the sudden deceleration and direction changes of the
primary electrons in the field of the anode atoms; the characteristic X-rays originate from the knockout and subsequent level filling of inner electrons in the atoms of the anode material. The highest possible quantum energy of emanating X-rays (measured in eV) will be equal to U. Typical energy
spectra as a function of voltage are shown in Figure 2. Please note that the spectra are all taken at the
same current; only the voltage has been varied. This demonstrates that the total number of X-ray photons is heavily dependent on tube voltage. In addition to the information in Figure 2, a general rule
of thumb says that a 15 kV increase in voltage corresponds to a doubling of the photon output. For
practical medical applications, the low energy part of the photons is normally not used but removed
by filtering either inside or just outside the X-ray tube. Normal filter materials are either aluminium
or copper. The thicker the filter and the higher the atomic number of the filter, the greater the cut-off
of low energy photons.
The description of the exposure characteristics of a given X-ray tube will comprise the voltage (in
units of kV), the current (in units of mA), the time of exposure (in units of s) and the degree of filtering
(for example a plate of Al, one mm thick, next to a plate of Cu, 0.5 mm thick). As the total number of
photons produced for a given high voltage setting only depends on the product of current and time, this
is often stated as a product in units of mAs.
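The mAs product and the rule of thumb above can be sketched numerically. The function names, the example numbers and the exponential form of the 15 kV doubling rule are illustrative assumptions, not part of the text:

```python
# Sketch: tube current-time product (mAs) and a rough photon-output scaling.
# The "15 kV doubles the output" rule of thumb is applied here as
# output ∝ 2**((U - U_ref)/15 kV), purely as an illustration.
def exposure_mAs(current_mA, time_s):
    """Tube current-time product, the usual exposure unit [mAs]."""
    return current_mA * time_s

def relative_output(U_kV, U_ref_kV=50.0):
    """Relative photon output vs. a reference voltage (rule of thumb)."""
    return 2.0 ** ((U_kV - U_ref_kV) / 15.0)

print(exposure_mAs(200, 0.1))  # 20 mAs for 200 mA over 0.1 s
print(relative_output(80))     # 4.0: two doublings above the 50 kV reference
```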
1.3 Anode material, power dissipation
Heavy elements are normally preferred for anode materials, as the high Z-number gives efficient production of the part of the X-rays that originates from Bremsstrahlung. The characteristic X-ray lines,
which add to the total energy spectrum, normally appear in the range 50-70 keV, which is often in the
middle of the medically useful energy range.
The thermal load on the anode material both during the short exposures and averaged over time when
performing rapid, multiple exposures heats the anode dramatically. For this reason, normally a high
melting point material is used. Anodes made out of Tungsten (Danish: Wolfram), abbreviated W, are
very common. The area of thermal dissipation can be enlarged by rotating the anode during exposure.
An example of such a device is shown in Figure 3.
2 Typical X-ray system
Figure 4 shows a typical X-ray system. The X-rays are generated by the X-ray tube (Danish: Røntgenrør). Low energy photons are removed by the Al filter: since they cannot penetrate the object
and contribute to the information on the film, they would only add needlessly to the dose received by
the object. X-ray radiation outside the image region on the film is removed by the collimator (Danish:
primærblænde). Attenuation (what is measured on the film) and Compton scattering take place at the
object. Only photons moving directly from the source to the film are allowed through the grid at the
bottom (Danish: sekundærblænde).
Figure 3 Rotating anode X-ray tube. "RTM" anode designates a Molybdenum anode mixed with 5% Rhenium
to improve the thermal stability. The metal anode is supported by graphite to improve the total thermal capacity.
Source: Siemens.
Figure 4 Schematic illustration of a typical X-ray system: protective shield of lead around the X-ray tube,
Al-filter (removes low energy radiation), collimator, object (giving rise to secondary radiation by Compton
scattering), grid (removes Compton scattering), screens and film.
Figure 5 provides an interactive illustration of a simple X-ray system allowing for translation of a
simple homogeneous phantom. As the distance is changed, different edge effects can be observed. By
pressing the “Lines” button, the geometry of the edge effect is visualized.
3 Geometrical considerations
Referring to Figure 5 we can define the distance from the X-ray origin (the focus1) to the objects as
FOD. The distance from the focus to the film2 or any other medium of radiation detection can be defined as FFD. Any object will to some degree attenuate the X-ray, and variation in X-ray absorption
across the objects will create a corresponding variation in the radiation impinging on the film. An unavoidable and sometimes desirable geometrical magnification of the image relative to the object can
be deduced from the triangle in Figure 5. The enlargement factor F, can be defined as:3
F = size of film image / size of object = FFD / FOD
(1)
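Equation (1) can be illustrated with a small sketch; the distances below are made-up example values, not from the text:

```python
def magnification(FFD_mm, FOD_mm):
    """Geometrical enlargement factor F = FFD / FOD, Eq. (1)."""
    return FFD_mm / FOD_mm

# Illustrative numbers: focus-to-film 1000 mm, focus-to-object 800 mm
F = magnification(1000.0, 800.0)
print(F)  # 1.25: a 40 mm object casts a 50 mm shadow on the film
```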
If a near-normal picture size and little variation in enlargement are sought for organs at different
depths in the body, the geometrical magnification should be minimized by using a large focus-to-film
distance (FFD) and a small object-to-film distance. A small object-to-film distance also improves image contrast, as blurring by scattering increases with increasing distance between object and film. This
is because the origin of the scatter is mainly inside the object: the longer the scattered radiation is
allowed to travel between the object and the film, the more this radiation diverges from the true unscattered photons.
Similar geometrical considerations (i.e. similar triangles) can demonstrate that an extended size of the
focus will generate blurring on the film.
1. The "focus" is in this context the source of the X-ray photons, as it is the name for the electron spot on the
anode of the X-ray tube.
2. The term "film" is still common language, even though the conventional X-ray film to a large degree has been
replaced by various other imaging plates.
3. Here "size" means any distance, e.g. the length of a given object.
Figure 5 Simple illustration of the geometry of the planar X-ray system corresponding to Figure 4. The X-ray image of the green homogeneous box is shown to the right. The focal point is the origin of the X-rays. The
film is identical to the detector. FOD = focus to object distance. FFD = focus to film distance.
4 Origins of contrast in the X-ray image
X-rays are attenuated according to the normal exponential attenuation law:
I(x) = I0 exp(−μx) = I0 exp(−(μ/ρ)ρx)
(2)
where x is the distance traversed in the material and μ is the so-called linear attenuation coefficient
in units of m⁻¹. I0 is the intensity at the entrance to the material (x = 0) and I(x) is the intensity at distance x. In the latter part of (2), ρ is the density of the material. By giving the attenuation coefficient
in units of μ/ρ and the thickness in length times density (area weight, e.g. g/cm²), the attenuation coefficient (now called the mass attenuation coefficient) becomes independent of the physical state of the
material. Mass attenuation coefficients for some common tissues are given in Table 1.
The microscopic description of the attenuation comprises the photoelectric effect and Compton scattering, which are both described in the chapter on nuclear medicine. For the understanding of the X-ray
technique, it suffices to say that the linear attenuation coefficient for human tissues varies approximately as the electron density. Thus, it varies roughly proportionally to the physical density (kg/m³).
Air has the lowest density, lung tissue has lower density than fat, fat has lower density than muscle,
Table 1: Mass attenuation coefficients µ/ρ for typical tissues. From [2].
µ/ρ given in cm²/g              | 50 keV | 100 keV | 200 keV
Air (ρ = 0.0013 g/cm³)          | 0.208  | 0.154   | 0.122
Water (ρ = 1.00 g/cm³)          | 0.227  | 0.171   | 0.137
Adipose tissue (ρ = 0.95 g/cm³) | 0.212  | 0.169   | 0.136
Muscle (ρ = 1.05 g/cm³)         | 0.226  | 0.169   | 0.136
Bone (ρ = 1.92 g/cm³)           | 0.424  | 0.186   | 0.131
Lead (ρ = 11.35 g/cm³)          | 8.041  | 5.549   | 0.999
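Equation (2) together with the mass attenuation coefficients of Table 1 gives transmission estimates directly. A minimal sketch, using water at 50 keV as the example; the 10 cm thickness is an illustrative value:

```python
import math

def transmission(mu_over_rho, rho, x_cm):
    """Fraction I(x)/I0 from Eq. (2), using the mass attenuation coefficient.
    mu_over_rho in cm^2/g, rho in g/cm^3, x_cm in cm."""
    return math.exp(-mu_over_rho * rho * x_cm)

# Water at 50 keV (Table 1): mu/rho = 0.227 cm^2/g, rho = 1.00 g/cm^3
t = transmission(0.227, 1.00, 10.0)
print(t)  # ~0.103: only about 10% of 50 keV photons survive 10 cm of water
```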
and all of these have much lower density than the bone mineral of the skeleton. The X-ray attenuation varies
accordingly. X-rays traversing parts of the body containing highly absorbing material will be much more
attenuated, and the film or radiation capture device will not receive as much radiation in this region.
It should be remembered that the X-ray image is a negative (bright areas correspond to high attenuation) and that the image is a 2D projection of the 3D distribution of attenuation.
An example of a normal X-ray image of the chest (one of the most common medical imaging procedures) is seen in Figure 6. Note that the most attenuating areas (ribs, vertebral column, heart) appear
white while the lungs with little attenuation appear black on a conventional planar X-ray image.
Problem 2 Does the planar X-ray image have arbitrary units (or, put in other words, are the pixel values relative or absolute)?
Figure 6 Normal chest X-ray image. This image is recorded with a tube voltage of
150 kV to minimize the contribution from bone. Press “?” to try to identify tissues.
HEVESY
BME
6/9
The image information in the planar X-ray is mainly anatomical; actual densitometric measurements
on the film are only performed for quality assurance programs and yield little medical information.
Today, all planar X-ray images are evaluated by a human observer.
5 Film, intensifier foils and screens
Originally, the radiation was captured by a normal photographic film. In the film, the energetic X-ray photons are absorbed in the silver halide (AgBr-AgI) crystals, generating very small amounts of
free silver. During film processing, any grain with small amounts of free silver is completely converted to metallic, nontransparent silver, while the remaining unreduced silver halide is removed by
the fixative. X-ray films are of course made to the size necessary for the anatomical situation in question and can be very large. To increase sensitivity and thus lower radiation dose, the photosensitive
film emulsions are often thicker and occasionally coated on both sides of the film, in contrast to normal photographic film. The silver content of the films makes X-ray films rather expensive. Any film
has a specific range of optimal sensitivity (exposure range from complete transparency to completely
blackened). Although modern equipment is normally assisted by electronic exposure meters, the
correct choice of film, exposure time, exposure current and high voltage is still left to the judgement
of the X-ray technician.
To improve the sensitivity and thus lower radiation exposure to the patient, the film is often brought
in contact with a sheet of intensifying screen. The screen contains special chemical compounds of the
rare earth elements that emit visible blue-green light when hit by X-rays or other ionizing radiation.
This permits the use of photographic film with thinner emulsions and more normal sensitivity to visible
light. While increasing the sensitivity, the use of an intensifying screen on the other hand blurs the images, as the registration of X-ray radiation is no longer a direct but an indirect process.
The patient or the object is not only the source of X-ray absorption but also of X-ray scattering,
mainly due to Compton effect (please see the chapter on Nuclear Medicine). Any part of the patient
exposed to the primary X-ray beam will be a source of secondary, scattered, X-rays. These X-rays will
have lower energy than the original ray, but as no energy discrimination is used in the registration,
also the secondary scattered radiation adds to the blackening of the film. The scattered radiation carries no direct geometrical information about the object and thus only reduces the contrast by increasing the background gray level of the film. Scattered radiation can to some degree be avoided by the
use of special collimators called raster. The raster can be a series of thin, closely lying bars of lead,
only allowing radiation coming from the direction of the focus point to hit the film while other directions are excluded. A typical example of a complete radiography cassette content is shown in Figure 7.
6 Digital radiography and direct capture
During the last years, conventional film-based radiography has gradually been replaced by newer
digital techniques. The end point is of course the acquisition and storage of the X-ray image information as computer files. It should be noted that X-ray images are normally of very high resolution
(more than four thousand by four thousand pixels) with large dynamic range (12 to 16 bits). The correct handling and display of such image information without loss or compression is still the subject
of specialized workstations. The digital X-ray images are normally stored and displayed in so-called
PACS systems (Picture Archiving and Communication Systems). The image information is either
temporarily captured on phosphor plates for subsequent transfer to digital storage by so-called phosphor plate readers (in function much related to the old X-ray film processors), or by direct, position-sensitive electronic X-ray detection devices, the so-called direct capture systems.
Figure 7 X-ray cassette, containing double coated film, intensifying screen (IS) and lead raster; X-rays arriving from the direction of the focus pass through the raster, while obliquely incident rays do not.
At present (2003)
the geometrical resolution of the various digital techniques is still somewhat inferior to the best possible film technique. However, the benefits of rapid viewing, interactive image availability, postprocessing and digital transmission often outweigh the reduction in image quality. For special
applications, like breast cancer detection by mammography, the photographic film is still the system
of choice.
7 Analysis of image of phantom
Consider the conventional planar X-ray image of a typical phantom used in the course, shown in Figure 8.
This X-ray system has a high dynamic range, thus the image has many levels. To include as many
details as possible, this X-ray image is shown - very untraditionally - in color. Normal X-ray images are shown
in shades of gray.
It is not trivial to analyze such an image unless one is very careful. First one should remember that
what one sees (the contrast in the image) is the total attenuation, which in turn depends on what kind of
material is present and how much of this material is present in the trajectory between focus and a given
detector. And since the colorbar represents relative values, both of these observations should be
considered relative to other places in the image.
The analysis goes as follows:
Since the image shows more than the box of the phantom itself, the connectors can be identified.
Those appear dark yellow, and since they obviously must attenuate more than the nearby air (which
appears nearly white) it can be concluded that a high pixel value corresponds to low attenuation (in
contrast to normal X-ray). Be aware of a potential pitfall: if the film is larger than the area that is exposed (as controlled by the collimator at the X-ray tube), then the image has an additional outer frame,
which, obviously, should not be used in the analysis.
Next consider the fiducial markers. Since the lid was in place during recording of the image, we have
two series of materials that are penetrated by the X-rays: one through the markers and another between
the markers:
• agar, acrylic base plate
• acrylic lid, agar, acrylic base plate.
Figure 8 Left: Top photo of phantom 4 from 2009. Right: Planar X-ray image of the same phantom, with
horizontal and vertical positions in mm and relative pixel values indicated by a colorbar. The image
is shown in color to enhance contrast. The two images are aligned as well as possible.
Problem 3 Please make a cross-sectional drawing of this, and be sure that the total distance penetrated by the X-ray is the same (for this particular problem, it is assumed that the X-rays hit perpendicular to the surface).
Since air-free agar is mainly water, and thus has very much the same density as water, and since acrylic has a higher density, the fiducial markers must attenuate less and thus appear brighter than the surroundings. This is also the case in the image in Figure 8.
Next consider the tube. The tube contains air inside and thus the wall is the most attenuating. But
even if the tube contained water inside, the wall would still be the most attenuating. Thus, the tube
periphery should be darker than the center when looking at the projection image. The shape of the
actual attenuation profile through the tube is considered in Problem 13 in the exam of year 2008.
8 References
[1] Johannes Jensen and Jens Munk: Lærebog i Røntgenfysik, Odense, 1973, 2nd ed., p. 121.
[2] Data from http://physics.nist.gov/PhysRefData/XrayMassCoef/cover.html
CT scanning
By Mikael Jensen & Jens E. Wilhjelm
Risø National Laboratory
Ørsted•DTU
(Ver. 1.2, 4/9/07) © 2002-2007 by M. Jensen and J. E. Wilhjelm
1 Overview
As can be imagined, planar X-ray imaging has an inherent limitation in resolving overlying structures, as everything seen in the images is the result of a projection. It is, however, possible to resolve
the 3D distribution of X-ray attenuation from a set of projections.
This is actually what we do mentally when we assess the 3D structure of an object, for example the
head of a person, by walking around the object and looking at it from all different angles. The CT scan
is exactly such a reconstruction of the 3D distribution based on a large set of X-ray projections obtained at many angles covering a complete circle around the patient. CT is an abbreviation of computed tomography. In Anglo-American literature one also occasionally finds the abbreviation CAT,
denoting computed axial tomography. Tomography by itself means the rendering of slices: naturally
the 3D information cannot easily be displayed three-dimensionally on a screen; instead it is most often
displayed as a series of axial slices.
The CT scanner was developed in the early 1970s by Godfrey Hounsfield in England (actually working for EMI on funds stemming from music record sales).
For this he was awarded the Nobel Prize in Medicine in 1979, shared with Allan Cormack, who had independently developed the mathematical foundation.
The three basic components of a CT scanner are still the same as in planar X-ray imaging: an X-ray
tube, an object (patient) and a detection system. In the earliest scanners the output of the X-ray tube
was collimated to a narrow, pencil-like beam and detected by a single detector. X-ray tube and detector were translated in unison (see Figure 1) across the object making a linear scan. After each scan,
Figure 1 Early CT scanner geometry.
Figure 2 Geometry of gantry in CT scanner. Left: The third generation is of type rotate-rotate, where
both the X-ray tube and a rotating arc of detectors revolve around the patient. Right: The fourth generation is
of type rotate-fixed, where only the X-ray tube rotates inside a static ring of detectors, of which only a subset
is active at a time. The X-ray tube emits a fan-shaped beam.
typically lasting 10 seconds, the entire setup was rotated a few degrees, the scan repeated, and so forth.
From a set of 256 such scans the final image (a single slice) would be reconstructed by overnight computing. This reconstruction - which derives an image from a large set of projections - will be considered in Subsection 2.3. Modern scanners are now essentially of two types: the rotate-rotate system and
the rotate-fixed system. These are illustrated in Figure 2. Both systems use narrow fan-shaped beams
collimated to spread across the full width of the patient. In the rotate-rotate system as many as 700
detectors may be placed in an arc centered at the focal spot of the X-ray tube. The tube is run continuously as both it and the detectors revolve around the patient. The fast electronics of the detectors take
as many as a thousand readings per detector for a total of 700 000 readings in one second. In the rotate-fixed system as many as 2000 fixed detectors form a circle completely around the patient. The X-ray
tube rotates concentrically within the detector ring. The detectors are normally focused at the centre of the ring. Acquiring the detector responses every one third of a degree produces more than
2 000 000 readings per second.
1.1 Hounsfield value
Using mathematical algorithms (as will be shown later in Subsection 2.3), the computer can calculate the linear attenuation coefficient for each point (pixel) in the object and assign an attenuation value to it. Normally, this attenuation is not depicted as attenuation coefficients; instead radiology uses
a special unit, now called the Hounsfield unit (HU). The corresponding Hounsfield value is defined as
follows:
HV = 1000 (μm − μw)/μw
(1)
where μm is the (average) linear attenuation coefficient within the voxel it represents and μw is the
linear attenuation coefficient for water at the same spectrum of photon energies. The Hounsfield unit
is dimensionless.
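The definition in (1) can be checked with a small sketch; the water coefficient is borrowed from Table 1 of the X-ray chapter purely as an illustration (Hounsfield values depend only on the ratio, not on the particular μw):

```python
def hounsfield(mu_m, mu_w):
    """Hounsfield value HV = 1000*(mu_m - mu_w)/mu_w, Eq. (1)."""
    return 1000.0 * (mu_m - mu_w) / mu_w

mu_w = 0.227  # water, illustrative 50 keV mass attenuation coefficient

print(hounsfield(mu_w, mu_w))         # 0.0: water is 0 HU by definition
print(hounsfield(0.0, mu_w))          # -1000.0: the vacuum/air limit
print(hounsfield(1.05 * mu_w, mu_w))  # 50.0: tissue attenuating 5% more than water
```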
From the above definition, one should think that the Hounsfield values are very well-defined. This
is not so, as can be seen from Figure 3, which represents data from two different textbooks. There
can be a number of reasons for these differences:
• Different spectra of emitted energy (the center frequency (or energy) of the spectrum, the shape of
the spectrum).
Figure 3 Hounsfield values according to different textbooks: (a) is from [2] while (b) is from [3]. As can
be seen, the values do not fully agree.
• Different definitions of what a given tissue type actually represents.
• Tissue types seldom consist of just one component (e.g. muscular tissue can contain various
amounts of lipid, but still be described as “muscular tissue”).
1.2 Single slice versus multi-slice system
Originally the CT scanner only acquired one slice at a time, making extended axial field of view a
time consuming process. Today the scanners, whether of the third or fourth generation, acquire many
slices (16 to 256) at a time using an X-ray tube with an extended axial beam and multiple stacked detector chains. Rotation time is now down to fractions of a second, making acquisition of a multi-slice
representation of the heart almost motion-arrested. If the patient is continuously slid through the gantry ring during the rotation, a so-called spiral CT scan is acquired. Proper reconstruction can thus yield
large series of closely lying slices over extended parts of the body, in principle from head to foot.
2 System details
2.1 CT scanner X-ray tube
Proper reconstruction of the CT scans is only possible if a very large number of photons is available
for the detectors. If the acquired projections are not statistically well-determined, the reading from a
detector will be noisy and the reconstruction algorithm will propagate this noise, leading to unacceptably high noise in the final image. Thus, normally, the CT scan is done with a high output from the
CT tube, corresponding to large kilovolt and milliampere settings. As the scans are normally extended
over many slices and many revolutions, the final dose can be as high as 50 to 100 millisievert (see the definition of Sievert in the nuclear medicine chapter of this book). As the number of CT scans has been increasing with the widespread installation of potent multi-slice and/or spiral scanners, the
collective radiation dose from CT scans constitutes a major part of the total medical irradiation.
The high current and voltage and the extended exposure time deposit very large amounts of primary electron beam energy in the anode of the X-ray tube. Special tubes have been developed for these
X-ray scanners, with large, fast rotating anodes of high melting point materials. Special problems are
related to the technology of bringing high-voltage electricity to the X-ray tube, rotating at
an orbital diameter of more than one meter at a speed of more than two revolutions per second.
The rapid revolution of the X-ray tube and perhaps the detector chain also puts a large mechanical strain on
the entire X-ray gantry, which must be of extraordinarily sturdy construction.
2.2 Detector chain technology
Today, at least three types of detectors are used. These detectors can be classified according to the
type of material stopping the X-rays:
• Gas (Xenon)
Figure 4 For a medium assumed to consist of four different types of materials (attenuation coefficients μ11,
μ12, μ21 and μ22, each over a path length dx), four measurements provide enough information to obtain four
equations with four unknowns: the row readings Ir1 = I0 exp(−μ11dx − μ12dx) and Ir2 = I0 exp(−μ21dx − μ22dx),
and the column readings Ic1 = I0 exp(−μ11dx − μ21dx) and Ic2 = I0 exp(−μ12dx − μ22dx).
• Scintillator (transforms the X-ray energy into visible light, detected by a photo diode)
• Solid state semiconductor
The gas detectors are less efficient than the other two types of detectors, but by using high pressure
and extended radial dimensions, efficiencies as high as 40 % can be achieved. These “deep” detectors
have the important property of being most sensitive to radially incoming X-rays, thus providing inherent protection against too much scattered radiation. With the other two detectors, which are more
like surface detectors, the scattered radiation cannot be separated and must be removed by the mathematical reconstruction algorithm. This is possible because the scattered radiation has little spatial
structure, and can thus be detected and subtracted as a uniform blanket in the image matrix.
With many detectors in each chain and many slices the total data sampling rate of a modern CT scanner is extremely high. At present, it is exactly this data sampling rate which limits the performance of
state-of-the-art CT scanner technology.
2.3 Reconstruction
The reconstruction of the slices from a large number of different projections forms an algebraic problem. This can be seen by considering a CT image with two by two pixels. If the object is irradiated
with X-rays from two perpendicular directions, the detectors will measure the four values indicated
in Figure 4. The four attenuation values of the CT image can now be found by solving four equations
with four unknowns. The corresponding equations are:
ln(I0/Ir1)/dx = μ11 + μ12
(2)
ln(I0/Ir2)/dx = μ21 + μ22
(3)
ln(I0/Ic1)/dx = μ11 + μ21
(4)
ln(I0/Ic2)/dx = μ12 + μ22
(5)
Note that the basic physics does not require sampling of projections for more than 180°, as the measurement is basically a transmission measurement covering the entire depth forwards to backwards of
Figure 5 Back projection. Each measured attenuation value is put back into the cells of the image that are
located along the line of sight; the same value is put into each cell, and for all projections the measured
values are added to all contributing pixels.
Figure 6 Evolution of backprojection. The first five images are derived from filtered
projections, while the last is derived from raw projections.
the object. However, because of system stability, artifact suppression and noise reduction, scans are
normally acquired based on 360° acquisitions. However beautiful the algebraic reconstruction looks,
the practical application is difficult due to the large number of equations and unknowns. Reconstructing a single slice represented by a 512 by 512 matrix corresponds to solving a very large system of equations, which is no simple task. Some algorithms, however, obtain this goal by iterative measures: first
making a guess of the distribution of attenuation values, subtracting the corresponding projections
from the actual projections measured and then iteratively minimizing this error difference.
2.4 Filtered backprojection
Because it is computationally more effective, the most used algorithm is the so-called filtered backprojection. Consider an image matrix with pure zeros. Backprojection by itself simply fills the attenuation values of individual projections into each cell of the matrix along the line of sight. The values
filled in are added to those already in the image matrix. This is illustrated in Figure 5. When
the backprojection is performed on a large number of projections, the final image begins to emerge,
Figure 7 Windowing. The greyscale value rises linearly with Hounsfield units between the lower level (LL)
and the upper level (UL), the window width being centered on the window centerline. Only HU between
-300 and 600 are visualized in the gray scale bar (drawing not fully to scale).
as seen in Figure 6. However, the image is blurred: a single point object with high attenuation (e.g. a
thin tube of water in air) will by this reconstruction be depicted as a “1/r” distribution. By filtering the
measured projections before backprojection, this blurring can be reduced. The filtering is actually a
convolution of the individual projections with a suitable spatial filter, amplifying high spatial frequencies and damping low spatial frequencies. The final reconstruction algorithm is often called LSFB, an
abbreviation for linear superposition of filtered backprojections. The exact choice of filter function
should be matched with the scanner characteristics, field of view and object of interest. There is no
need to reconstruct with filters using higher spatial frequencies than the inherent limit given by the
finite detector size in the detection chain. The reconstructed image represents the attenuation coefficients. These are re-calculated to Hounsfield units, and this image is displayed as gray values on the
screen. However, the range of Hounsfield units (or attenuation) can be very large, and if only soft
tissue is to be visualized, only a small window of values is displayed, as illustrated in Figure 7. Because of the large dynamic range of the CT scanner, it is often better to limit the image values to a
suitable range from the beginning of the reconstruction. For this reason, reconstruction is often
performed in a “brain window”, “lung window” or “bone window”.
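The windowing operation itself can be sketched in a few lines of Python. The function name and the default window (matching the -300 to 600 HU window of Figure 7) are chosen here purely for illustration:

```python
def window_hu(hu, center=150.0, width=900.0):
    """Map a Hounsfield value to an 8-bit grey level with a linear window.

    Values below the lower level (LL) become black (0), values above the
    upper level (UL) become white (255); in between the mapping is linear.
    The defaults reproduce the -300..600 HU window of Figure 7.
    """
    lower = center - width / 2.0  # LL
    upper = center + width / 2.0  # UL
    if hu <= lower:
        return 0
    if hu >= upper:
        return 255
    return round(255.0 * (hu - lower) / (upper - lower))

# A "brain window", "lung window" or "bone window" is simply a
# different (center, width) pair.
```

With this mapping, air (-1000 HU) is rendered black, while dense bone above the upper level saturates to white.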
3 Example of “clinical” CT image
Finally, a comparison between an anatomical photograph and a CT image from exactly the same
plane is included in Figure 8. The data are from the Visible Human Project. The CT image makes it
very clear which types of tissue are best distinguished by CT.
4 Acknowledgements
Student Jonas Henriksen is gratefully acknowledged for his help with the tables of Hounsfield values.
Figure 8 An anatomical photograph and corresponding CT image at a horizontal scan plane of the head. Data from: [1]
5 References
[1] The visible human project: http://www.nlm.nih.gov/research/visible/visible_human.html
[2] Willi A. Kalender, "Computed Tomography", 2005, 2nd edition, Publicis Corporate Publishing,
Erlangen.
[3] Erich Krestel, "Imaging Systems for Medical Diagnostics", 1990, Siemens Aktiengesellschaft,
Berlin and Munich.
Nuclear medicine
By Mikael Jensen
Risø National laboratory
(Ver. 1, 6/9/05) © 2003 by M. Jensen
1 Introduction
Nuclear medicine comprises the medical use of radioactive isotopes for in vivo and in vitro diagnosis
and therapy. The most important field is the use of radioactive tracers (radiopharmaceuticals) for imaging of organs, distribution of metabolism or pathophysiological processes by the use of position-sensitive detectors for detection of penetrating ionising radiation, most often gamma rays. These imaging
techniques are the topic of the present text. It should, however, be remembered that nuclear medicine
has a wider area of application and contains other important diagnostic and therapeutic techniques
such as radioimmunoassay (RIA), whole-body counting (WBC), isotope dilution analysis, clearance techniques and radionuclide therapy.
The workhorses of nuclear medicine are the radioactive isotopes. The known nuclides are commonly
shown in a diagram giving the number of neutrons, N, in the nuclide along the horizontal axis and the
number of protons, Z, along the vertical axis, as depicted in Figure 1. This diagram is known as
“The Chart of Nuclides”.
Stable nuclides or stable isotopes are located along the “line of stability”. For lighter elements, N is
more or less equal to Z. For heavier elements, an increasing excess of neutrons is necessary for
stability. Above Z = 92 (Uranium) the nuclides become increasingly unstable towards a special kind
Figure 1 The chart of nuclides. Z = number of protons. N = number of neutrons. Example: 153Gd has Z = 64 and N = 89.
Figure 2 Limits to existence: the proton drip line (Sp = 0), the neutron drip line (Sn = 0) and the fission barrier.
of instability called fission. This puts an upper limit on the number of protons in any nuclide, stable or
unstable, at about Z = 108. This is sketched graphically in Figure 2.
Each point in the chart of nuclides corresponds to a definite nuclide. An isotope is given by the N, Z
numbers or, more commonly, by giving the symbol of the element in question (of course given by Z)
and the total number of particles (the mass number, A = N+Z) in the upper left-hand corner of the element’s symbol, as for example:
14C, 31P, 238U
2 Radionuclides
Most known nuclides are unstable, implying that they disintegrate over time, yielding decay products. The unstable or radioactive nuclides lie in a halo around the line of stable isotopes in the chart
of nuclides. The further you get from stability the shorter becomes the expected lifetime. Beyond this
Figure 3 Composition of radioactive atom, Carbon 14. In the neutral atom, the number of electrons is equal to the number of protons in the nucleus (Z).
halo of radionuclides, no bound system exists, not even for the shortest period of time. A theoretical
nuclide containing for example 3 protons (Lithium) and 12 neutrons simply does not exist and cannot
stay together.
An example of the composition of a radioactive atom from the unstable halo is shown in Figure 3.
3 Modes of disintegration
A given radionuclide has a characteristic way of stabilisation, which can be described by the disintegration rate k [time⁻¹] and the type of emissions caused by the disintegration.
The emissions are normally one or more of the following nuclear radiations (particles):
• Alpha radiation (helium nuclei)
• Beta radiation (electrons or anti-electrons, also called positrons)
• Gamma radiation (energetic electromagnetic radiation)
Some nuclides can exist in excited states for extended periods of time before releasing the excess
energy in the form of gamma radiation, but without altering the composition of the nuclide. These so-called isomeric states are very important for nuclear medicine as they represent nuclides capable of
delivering only gamma rays without any contribution from particle emission. (It should, however, be
remembered that so-called internal conversion can always transform part of the gamma rays into
electrons.)
Isomeric states are denoted by the letter m next to the mass number, as for example in:
99mTc, 81mKr
The information in the chart of nuclides can be found on the internet at many sites, but one of the
more authoritative sources is:
http://www2.bnl.gov/ton/
4 Rutherford’s law of decay and the definition of isotope
A given chemical element is characterised by the proton number (Z). The number of neutrons inside
the nucleus only contributes to the atomic mass, but not to the chemical characteristics of the substance.
Carbon (Z=6) can exist as one of several isotopes (number of neutrons = 3, 4, 5, 6, 7, 8, 9, …):
9C, 10C, 11C, 12C, 13C, 14C, 15C
Only the carbon isotopes 12C and 13C are stable.
A given number of atoms, n, of a specific nuclide (or “isotope”) has a well defined rate of disintegration k. This is actually the probability per unit time for the individual nucleus to disintegrate. In
differential terms:
dn = -k·n·dt
This differential equation can easily be integrated to the solution:
n(t) = n0 · exp(-k·t)
The activity A is defined as the number of disintegrations per unit time:
A= k·n
The activity is measured in a special unit called the becquerel (Bq); most often used are the prefixed
units kilobecquerel (kBq), megabecquerel (MBq) and gigabecquerel (GBq).
The activity also follows the exponential law of decay given above (Rutherford’s law):
A(t)=A0 · exp(-k·t)
Finally, the following relation exists between the disintegration rate and the so-called half-life:
T½ = ln(2) / k
It is normally the half-life we use to characterize an isotope (the half-life is given in the chart of nuclides). Of course, the half-life can be used to compute the activity as a function of time:
A(t=T½) =0.5 · A0
A(t=2T½) =0.25 · A0
A(t=10T½) = A0/1024 ≅ 0.001· A0
A(t) = A0·2^(-t / T½)
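The decay relations above can be sketched in a few lines of Python (function names are illustrative):

```python
import math

def decay_constant(half_life):
    """Disintegration rate k from the half-life: k = ln(2) / T½."""
    return math.log(2) / half_life

def activity(a0, t, half_life):
    """Rutherford's law of decay: A(t) = A0·exp(-k·t) = A0·2^(-t/T½).
    t and half_life must be in the same time unit; a0 in e.g. MBq."""
    return a0 * math.exp(-decay_constant(half_life) * t)

# After one half-life the activity is halved; after ten half-lives it
# is reduced roughly a thousand-fold (by a factor 1024).
```

For example, 800 MBq of 99mTc (T½ ≈ 6 h) has decayed to about 400 MBq after 6 hours and to less than 1 MBq after 60 hours.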
5 Isotopes and Radiopharmaceuticals
As mentioned, we normally prefer the use of pure gamma-ray emitters for imaging. The half-life
should more or less match the imaging situation in question. A short half-life reduces the total radiation dose to the patient, but cannot be used for imaging a process that only slowly concentrates the
isotope in the organ in question. A long half-life, however convenient to work with, will lead to a higher radiation dose to the patient and also to potential radioactive-waste problems.
The isotope should be linked to a definite chemical compound in a specific position before use. Such
a radioactively labelled chemical compound designed for medical diagnosis or treatment is called a
radiopharmaceutical and is normally administered to the patient in the form of an intravenous injection.
The radioactive compound is distributed throughout the body by the blood circulation. Over time the
compound should bind to the organ or process under study. Subsequently, the emitted gamma radiation should be measured in order to give an image.
6 Penetration of gamma rays through matter
Gamma rays or γ-photons have a definite probability of passing unchanged through a given thickness of matter. The intensity I (photons per unit area per unit time) decreases as a function of the matter
thickness x by the well-known attenuation formula:
I(x) = I0· exp(-µ·x)
Figure 4 Left: Graphical illustration of attenuation. Right: attenuation
for water.
where µ is called the linear attenuation coefficient. µ is a function of the photon energy and also of the composition (mainly density) of the matter in question. The dimension of µ is m⁻¹. This is shown graphically in Figure 4.
Occasionally, it is better to state the thickness of the attenuating matter by its “weight per area”, that
is, the product of thickness and density, x·ρ; the unit can then be, for example, mg/cm². The attenuation law can be rephrased to give:
I(x) = I0·exp(-(µ/ρ)·(x·ρ))
When thus used, µ ⁄ρ is called the mass attenuation coefficient. In general, the mass attenuation coefficient is less sensitive to the actual physical state of the attenuator, such as pressure and temperature,
etc.
The half-value layer thickness is the thickness of a layer that causes a 50% reduction in intensity:
x½ = ln(2) / µ or x½·ρ = ln(2) / (µ/ρ)
Good tables of attenuation coefficients for all elements and many composite materials can be found
on the web pages of the National Institute of Standards and Technology (NIST) at the address:
http://physics.nist.gov/PhysRefData/XrayMassCoef/cover.html
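These relations can be sketched numerically (function names are illustrative; the value µ ≈ 0.15/cm for water at around 140 keV is only an order-of-magnitude example):

```python
import math

def transmitted_fraction(mu, x):
    """Fraction of photons passing a thickness x unchanged: I/I0 = exp(-mu*x).
    mu and x must use consistent length units (e.g. 1/cm and cm)."""
    return math.exp(-mu * x)

def half_value_layer(mu):
    """Thickness that reduces the intensity by 50%: x½ = ln(2)/mu."""
    return math.log(2) / mu

# Illustrative value only: mu is roughly 0.15/cm for water at 140 keV.
mu_water = 0.15  # cm^-1
```

One half-value layer transmits 50% of the photons, two stacked half-value layers transmit 25%, and so on.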
6.1 Microscopic description of attenuation
The attenuation of gamma rays takes place through one of the following 3 microscopic interactions,
graphically illustrated in Figures 5 and 6.
Photoelectric effect: A gamma ray of energy E can interact with a bound electron, with total transfer
of energy to the electron. Thus, the gamma ray vanishes and the atom in question emits an electron
having an energy equal to the difference between the original photon energy, E, and the binding energy of the electron. This is the dominant process at low photon energy (E < 50 keV) and for high-Z
materials. When the energy of the photon is just below or just above the binding energy of an inner,
closely bound electron, the probability of attenuation changes abruptly. The photon cannot cause
emission of a tightly bound electron by the photoelectric effect when its own energy is less than the binding
Figure 5 Left: Photoelectric effect. Right: Compton effect.
energy. Such discontinuities in the attenuation curve are called K-edges, L-edges, etc. referring to the
electron orbital in question, see for example the curve for lead in Figure 9.
Compton effect: Instead of being absorbed completely, a photon can be scattered by an atomic electron, changing the direction and energy of the original photon and transferring part of the energy into
kinetic energy of the electron. Both the scattered gamma ray and the electron are emitted from the
atom. This process is called “Compton scattering” and dominates gamma-ray attenuation at intermediate energies for most materials. The energy E′γ of the scattered photon is a function of the scattering angle θ:
E′γ = Eγ / (1 + (Eγ/mec²)·(1 - cos θ))
An object irradiated by a mono-energetic beam of photons becomes a source of scattered radiation
with many energies and directions.
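The scattered-photon energy can be evaluated numerically; a minimal sketch (constant and function names chosen for illustration):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy, me*c^2, in keV

def compton_scattered_energy(e_kev, theta):
    """Energy (keV) of a Compton-scattered photon at scattering angle
    theta (radians): E' = E / (1 + (E/me c^2)(1 - cos theta))."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - math.cos(theta)))

# A 140 keV photon (99mTc) backscattered through 180 degrees retains
# about 90 keV - which is why scattered photons can be rejected by an
# energy window around the photopeak.
```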
Pair effect: The third important interaction between gamma rays and matter is called the pair effect.
It only takes place when the energy of the gamma ray is above the energy equivalent of two electron rest
masses. Above this energy threshold, E = 2mec² = 2·511 keV = 1022 keV, the gamma ray can form
an electron-positron pair in the close vicinity of an atomic nucleus.
By this effect the gamma ray disappears and part of its energy goes into the creation of the electron-positron pair (see Figure 6). The rest of the energy is imparted to the pair as kinetic
energy. Normally the positron will annihilate close to the point of creation, thereby itself becoming
Figure 6 Pair effect. A gamma ray of at least 1.02 MeV interacts near the nucleus of an atom,
creating an electron and a positron; the positron in turn annihilates with an electron at rest,
causing emission of two gamma rays of 0.51 MeV each.
a source of 511 keV gamma radiation. The pair effect is only important for the attenuation of gamma
rays of very high energy.
The total attenuation effect is the sum of the 3 individual effects.
µtotal = µphoto + µCompton + µpair
Photon attenuation can be seen from the graphs of total attenuation in Figure 9.
Below 100 keV, lead is about 200 times more effective than water for stopping gamma radiation per
unit distance, but only a factor of 10 better at energies above 1 MeV (and this mainly due to the difference in
density).
6.2 Other effects
For very small deflections of the primary gamma ray, scattering can take place where the momentum
is taken up by the entire atom. In this case, there is practically no energy transfer from the gamma ray
and we talk about coherent scattering. Coherent scatter only introduces very small angles of deviation on the photons, and is normally of less importance than the Compton scattering process.
Also other interactions (due to the nucleons) can take place, but these processes have much lower
probabilities than the three important interactions mentioned above.
7 Imaging of gamma ray sources
Any gamma-ray source will send out gamma rays isotropically (in all directions), as illustrated in
Figure 7. A radiation source can in general only be imaged by a system with an optical element, a “lens”. However, no lenses for gamma rays are available (except black holes, but these are seldom available at the surface of the earth).
Normal imaging and localisation thus requires a discrimination of gamma rays as function of their
direction. This is what we call collimation.
Collimation is done by a “collimator” made out of an absorbing material, normally lead. Small holes
in the collimator select one or few directions for the incoming gamma rays before they are allowed to
Figure 7 A gamma ray source will send out gamma rays isotropically.
7/12
Figure 8 Collimator only allowing rays parallel with the hole to go through. The
loss of sensitivity this way will be more than a factor of 100.
hit a detector, as illustrated in Figure 8. While directional discrimination of a collimator can be made
quite well, it is always at the price of a large drop in sensitivity. The collimator only works by throwing away most of the gamma ray information. Collimators can most effectively be made for gamma
radiation of low energy (E<350 keV), where lead is still a good absorbing material.
While collimators for gamma radiation above 500 keV are still possible, these devices become thick
and heavy, and the spatial discrimination (resolution) of such systems becomes poor (several cm).
Figure 9 Example of linear attenuation coefficients for water (more or less like soft tissue)
and lead.[1]
8 Gamma ray imaging by scanning
Originally, imaging in nuclear medicine was done by probing the patient for radioactivity with a single, point-collimated detector. While this process can give good geometrical resolution, it is very
wasteful in terms of sensitivity (very few photons are measured at a time) and is thus very time consuming. Of course such a scanning system is not useful for capturing dynamic processes where the
radioisotope distribution evolves during the scanning.
9 Gamma camera (Anger camera)
For imaging the distribution of radiopharmaceuticals, the scanners have almost completely been replaced by the so-called gamma camera (also called the Anger camera after its inventor, Hal Anger),
as illustrated in Figure 10.
Figure 10 Gamma camera.
This device still uses a collimator, normally a parallel-hole collimator made out of a 5 - 10 mm thick
slab of lead with many thousands of parallel small holes (<2 mm diameter). Behind the collimator,
the camera is equipped with a large detector crystal (a so-called scintillator, normally sodium iodide) at
least 10 mm thick and 500 mm in diameter. Gamma rays hitting the camera face parallel to the direction of the collimator holes will pass the collimator and hit the crystal, giving rise to absorption.
In scintillators the absorption is followed by light emission called scintillation. Normally, scintillation is the result of either the photoelectric effect or the Compton effect in the scintillator material. In both cases, the energy lost by the gamma ray results in a flash of light photons from the point of interaction in
the crystal. The light flash can be detected by an array of light sensitive devices (photomultiplier
tubes, PMT’s) mounted on the back of the sodium iodide crystal. The total energy imparted to the
crystal can be judged from the sum of all PMT outputs, and the position inside the crystal can be
decoded from the proportions of the output signals of the individual photomultipliers.
The energy and position information can be digitised and each scintillation event can be stored at the
associated location in an image matrix, thus giving rise to the digital gamma camera image.
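The position and energy decoding can be sketched as a signal-weighted centroid. This is a strong simplification (real cameras apply linearity and energy corrections on top of the Anger logic), and the function name is illustrative:

```python
def anger_position(pmt_positions, pmt_signals):
    """Anger-logic sketch: estimate the scintillation position as the
    signal-weighted centroid of the PMT positions (x, y in e.g. mm),
    and the deposited energy as the sum of all PMT signals."""
    total = sum(pmt_signals)
    x = sum(p[0] * s for p, s in zip(pmt_positions, pmt_signals)) / total
    y = sum(p[1] * s for p, s in zip(pmt_positions, pmt_signals)) / total
    return (x, y), total
```

A flash seen equally by four PMTs at the corners of a square is thus located at the centre of the square; stronger signals pull the estimate towards the corresponding tubes.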
10 Energy discrimination and photopeak
The gamma camera works most effectively for gamma radiation energies above 80 keV and below
250 keV. At lower energies, the attenuation and scattering of the gamma rays inside the patient is
large, thus attenuating the image before it is collected. The high-energy limit is mainly due to the limitations in collimator construction and the required thickness of the scintillation crystal necessary to
stop high energy photons.
Typical modern cameras have circular or rectangular detectors with dimensions of the order of 30
to 100 cm. The intrinsic geometrical resolution of light localisation in the crystal is of the order of
2-3 mm. The image size is of course limited by detector size, but multi-headed cameras or scanning
camera heads moving up and down the patient can make whole-body examinations from composite
images in 20 - 40 minutes total acquisition time.
The gamma camera head is of course heavy (collimator, sodium iodide crystal and shielding). A
special gantry with counterweight is often necessary to hold the detector head.
Each single event detected in the crystal is analysed in terms of energy before being accepted.
Low-energy events (below the photoelectric peak, the so-called photopeak) will normally be the result of
a Compton process, either in the patient, in the crystal or in the collimator. Such events will not correspond to the correct position and are discarded.
Practical limits to energy resolution are about 15% FWHM.
Count-rate limitations are given by the overlap (pile-up) of the light output of many simultaneous scintillation events in the crystal. Current-generation cameras can normally maintain resolution and energy
discrimination up to above 50,000 counts per second.
11 Gamma cameras only give the projection image
The gamma camera can detect and discriminate the point of entry of the gamma ray into the camera
head, but can normally not discriminate from what depth in the patient the radiation is coming. Thus,
the gamma camera image is a projection of all activity throughout the patient. Images are taken either
from the front of the patient (anterior projection) or from the back of the patient (posterior projection),
as illustrated with the two whole-body skeletons in Figure 11.
Figure 11 Gamma camera images (scintigraphy) of a bone seeking 99mTc-labelled radiopharmaceutical.
While these two projections would be identical for non-attenuated gamma rays, in real-world imaging
they will be quite different and yield independent information because of the attenuation of the
gamma rays inside the patient. The anterior image will normally reflect activity lying close to the front
surface of the patient, while the posterior image will reflect information from the back part of the patient. The best total quantitative representation of activity is given by the geometric mean of the anterior and posterior images, but it is clear that the image is limited to a 2-dimensional representation of
the activity distribution. The interpretation of a gamma camera image can thus be quite difficult unless
other projections are available. This limitation is overcome when multiple projections are obtained
from many directions and the total information is reconstructed.
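The geometric-mean combination of the two opposed views can be sketched as follows (illustrative, for count images given as nested lists; to first order the product A·P cancels the depth dependence of attenuation for a small source):

```python
import math

def conjugate_view(anterior, posterior):
    """Pixel-wise geometric mean sqrt(A*P) of anterior and posterior
    count images (nested lists of equal shape)."""
    return [[math.sqrt(a * p) for a, p in zip(row_a, row_p)]
            for row_a, row_p in zip(anterior, posterior)]
```

A source near the front surface that gives 4 counts posteriorly and 9 anteriorly contributes sqrt(36) = 6 to the combined image, independent of which side it is closest to.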
This is the subject of SPECT (single photon emission computed tomography) and PET (positron
emission tomography), covered in later chapters. These methods can yield 3D information on the activity distribution; in the case of PET even without the use of lead collimators.
Modern gamma cameras are stable and relatively cheap (2 - 4 million DKK). All larger hospitals have
departments of nuclear medicine mainly concerned with gamma camera imaging. (In Denmark, nuclear medicine operates together with clinical physiology.) Each department normally has several
cameras.
12 Isotopes and radiopharmaceuticals
The most commonly used radiopharmaceuticals are labelled with the 6-hour half-life isomer 99mTc
(Technetium) which emits 140 keV gamma rays (and very little other radiation). This isotope can be
obtained from an isotope generator delivered to the hospital once a week. The generator contains
99Mo, which decays with a 66-hour half-life into 99mTc.
The radioactive Technetium can be chemically coupled to many different compounds. This is done
in the hospital departments. The radioactive products are made available as radiopharmaceuticals for
a variety of diagnostic situations. The most common gamma camera examinations are:
•bone (skeleton)
•kidneys and bladder
•thyroid
•lungs (ventilation and perfusion)
•tumours
With modern cameras and 99mTc-labelled radiopharmaceuticals, the geometrical resolution at 10 mm
depth in the patient, close to the surface of the collimator, is about 6 mm FWHM, but it deteriorates with
depth and distance from the collimator to more than 20 mm at 15 cm distance. Clearly, this resolution
is much lower than obtainable by X-ray techniques, but the information often reflects the function
more than the stationary anatomy. This information is normally not available from the standard X-ray
techniques.
Each point in a gamma camera image matrix reflects the result of the collection of individual gamma
rays at this exact location. Thus, each image cell can be regarded as a counter, and its contents
are subject to normal Poisson statistics. Having collected a total of N counts in a matrix cell,
the standard deviation of this image cell will be:
σ=√N
In order to obtain good images without too much statistical noise, many counts have to be collected;
the entire image matrix typically holds several million events.
Even with this many counts, the limit of geometrical resolution is very often set more by the statistical noise than by the ultimate geometrical resolution of the camera.
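The resulting relative noise, σ/N = √N/N = 1/√N, can be sketched as:

```python
import math

def relative_noise(counts):
    """Relative statistical uncertainty of an image cell holding N
    Poisson-distributed counts: sigma/N = sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

# 100 counts in a cell give 10% relative noise; 10,000 counts give 1%.
```

Halving the noise thus requires four times as many counts, i.e. four times the acquisition time or administered activity.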
13 References
[1] “Harshaw Booklet on Scintillation Detectors”, Harshaw Chemical Company, 1972.
Emission Tomography
By Markus Nowak Lonsdale
Department of Clinical Physiology and Nuclear Medicine,
Bispebjerg Hospital, Bispebjerg Bakke 23, 2400 Copenhagen
(Ver. 1.4 6/12/2011) © 2003-2011 by M. Nowak Lonsdale
1 Introduction
The tomographic methods used in nuclear medicine are SPECT (described in chapter
2) and PET (chapter 3). Both SPECT and PET were introduced in the 1960s, but it
took quite a long time before they were established as part of clinical routine. In fact, while
SPECT has been a standard tool in every nuclear medicine department for years, PET
was not widely available until the end of the 1990s. This has had mainly to do
with the cost of PET cameras and limitations in the supply of radioisotopes suitable
for PET.
2 Single Photon Emission Tomography (SPECT)
2.1 Introduction
Many gamma-cameras are able to perform Single Photon Emission Computed
Tomography (SPECT, sometimes also called SPET). All it requires is that the camera
can rotate around the patient or object of interest. With SPECT, it is possible to obtain
tomographic images, i.e. “slices through the patient”.
Figure 1: Transversal slice through the brain acquired with SPECT. The image shows the distribution
of 123I-FPCIT, a radioactively labelled substance that binds to Dopamine receptor sites. The basal
ganglia have many of these sites and stand out clearly. Uptake of the tracer in the background is due to
unspecific binding.
2.2 Projections
Imagine you let a gamma-camera rotate in small steps around a patient. Each two-dimensional image taken from a given angle would then correspond to the projection
of photons emitted perpendicular to the detector. These projections are similar to
those recorded with CT. However, a SPECT camera does not record the transmission
of X-rays, but the emission of photons from the object (a patient in medical imaging)
into the direction of the camera (Figure 2).
Figure 2: Series of 60 projection images (360° in 6° steps) acquired on a rotating gamma-camera. Each
image is collected over 1 min and shows the count distribution of 99mTc-labelled HMPAO in the head
of a patient. HMPAO is a flow-tracer used for estimation of regional perfusion in the brain. Like in
Figure 3, the s-axis is indicated by a dashed red line.
For simplicity it is advantageous to consider only a single horizontal line in the
projection images shown in Figure 2. With the projection reduced to a line, the object
is correspondingly reduced to a slice. The projection can then be plotted as a profile, as
illustrated in Figure 3, middle panel.
The projection data of each slice along the axis of the gamma camera (i.e. the axis of
rotation) is stored in an individual sinogram, where each row corresponds to one
projection. Different rows represent different projection angles. A sinogram can be
reconstructed into a tomogram (a slice), by using an algorithm called filtered backprojection (FBP). An animation illustrating the acquisition and reconstruction process
can be downloaded from the web site.
Typical for the FBP algorithm are star- and streak artefacts, as illustrated in Figure 4.
Gamma-cameras often employ more than one detector so that different projection
angles are recorded simultaneously. A single detector thus only covers a part of a full
circle.
Figure 3: Schematic illustration of a SPECT acquisition. The upper panel shows a single slice through
the detector rotating around the chest of a patient. The detector records photons emitted along
the direction of the black arrow. The projection data (number of recorded counts along the s-axis) at the current angle are shown in the middle panel. All measured projections over a 360°
rotation (green arrow in upper panel) are collected in a sinogram (lower panel). Each
horizontal line in the sinogram corresponds to a certain projection angle, φ. In the lower panel,
the number of counts are coded in a greyscale “colour” table. The projection shown in the
upper panels is marked by a thin blue line. Note, that the origin of the s-axis is the point
closest to the axis of rotation.
Figure 4: Reconstructed object (left) and original object (right). Typical for the applied FBP algorithm
are the “star” or “streak” artefacts, which also can be observed in the background.
2.3 The Filtered Back Projection in more detail
(This section can be left out without loss of continuity.)
Proving that the FBP indeed is the correct way to reconstruct the image slice from the
projection data is somewhat more complicated than it may seem at first sight. Briefly,
the sinogram represents the Radon transform of the object in the SPECT camera:
Pφ(s) = ∫∫FOV ρ(x, y)·δ(x·cos φ + y·sin φ − s) dx dy        (1)
where ρ(x, y) is the spatial distribution of the tracer in the field-of-view (FOV), limited
by the distance between the rotational centre and the detector. Pφ(s) is the projection
along angle φ, with s denoting the position along the detector surface. Thus, Pφ
represents a row in the sinogram, see also Figure 3.
The task is now to retrieve the original spatial distribution, ρ(x, y) of radioactivity in
the object under investigation. Considering Sφ(ω), the one-dimensional Fourier
transform of Pφ(s) (ω being the spatial frequency conjugate to s), it can be
shown that:

ρ(x, y) = ∫₋∞^∞ ∫₋∞^∞ Sφ(ω)·e^(i2π(ux+vy)) du dv        (2)
where u, v, ω, and φ are related by

u = ω·cos φ,  v = ω·sin φ        (3)
Equation (2) is the sum of backprojections along angle φ of the Fourier-transformed
data in the sinogram. A backprojection simply "distributes" evenly the result of a
projection back onto the original projected space, simply reverting the projection
process without "inventing" any information about the original distribution along the
projection line.
Coming back to equation (2) it is important to note that the sinogram is described in a
rectangular coordinate system. Transforming to a polar coordinate system yields:
ρ(x, y) = ∫₀^π ∫₋∞^∞ Sφ(ω)·|ω|·e^(i2πω(x·cos φ + y·sin φ)) dω dφ        (4)
To reconstruct the object, each row of the sinogram is Fourier-transformed (giving Sφ(ω)),
multiplied by the “ramp” filter |ω|, and subsequently back-projected. It should be noted
that in practice a discrete approximation to the FBP (a Riemann sum) is used. In
addition, the integration over ω is in reality constrained by the physical dimensions of
the detector. The integration over φ is often done from 0 to 2π to minimise attenuation
effects (see chapter 2.4). This means that the acquisition needs to be done over 360°.
On the internet, many web-sites discuss various aspects of the FBP algorithm. One I
have used a lot is http://www.sv.vt.edu/xray_ct/parallel/Parallel_CT.html
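The steps above (ramp-filter each sinogram row, then back-project) can be sketched numerically. This is an illustrative simplification using numpy; the function name, nearest-neighbour interpolation and normalisation are choices made here, and production code would use apodised filters and proper interpolation:

```python
import numpy as np

def filtered_back_projection(sinogram, angles):
    """Minimal FBP sketch: apply the |omega| ramp filter of eq. (4) to
    each projection row in the Fourier domain, then smear each filtered
    profile back across the image along its projection angle.

    sinogram: (n_angles, n_s) array, one projection P_phi(s) per row.
    angles:   projection angles phi in radians."""
    n_angles, n_s = sinogram.shape
    # Ramp filter |omega|, applied row-wise via the FFT.
    omega = np.abs(np.fft.fftfreq(n_s))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * omega, axis=1))
    # Image grid with the rotation axis in the centre.
    c = (n_s - 1) / 2.0
    x, y = np.meshgrid(np.arange(n_s) - c, np.arange(n_s) - c)
    image = np.zeros((n_s, n_s))
    for phi, profile in zip(angles, filtered):
        # s-coordinate of every pixel for this projection angle.
        s = x * np.cos(phi) + y * np.sin(phi) + c
        # Nearest-neighbour interpolation into the filtered profile.
        idx = np.clip(np.round(s).astype(int), 0, n_s - 1)
        image += profile[idx]
    return image * np.pi / n_angles
```

Back-projecting the projections of a point source on the rotation axis reproduces a peak at the image centre, with the low-level star/streak background typical of FBP.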
2.4 Attenuation correction
A problem in SPECT is that the emitted photons are attenuated by the patient before
they are detected by the camera. This means that there is a bias towards the outer
structures of the object as the signal from these structures is not attenuated as much as
signal from the inner structures.
In general, the attenuation of a photon beam I after distance r in a medium with linear
absorption coefficient µ(r) can be described by:
I(r) = I0·e^(−∫₀^r µ(r′) dr′)        (5)
Note that µ(r) can vary in space. Only for homogeneous materials, µ is independent
of position. This simple case with a uniform µ(r) in a phantom surrounded by air
( µ = 0) is illustrated in Figure 5.
Figure 5: Image of a plastic phantom filled with a 99mTc/water solution. The distribution of 99mTc is
uniform, as is the linear attenuation coefficient µ. The effect of attenuation is seen in the image, and
also illustrated as vertical and horizontal profiles through the centre. The recorded signal from the inner
part of the phantom is lower than that from the edges, which seem enhanced. The walls of the phantom
are thin and can be neglected here.
When scanning some organs, especially the brain, a post-hoc approach to correcting for
attenuation is used: each slice of the head is approximated by an ellipse. The matter
inside the ellipse is modelled assuming uniform attenuation with a fixed
value for µ, e.g. 0.10/cm (µwater = 0.15/cm). Then, each pixel i inside the ellipse is
multiplied by a factor e^(µ·Δxi) depending on its average distance Δxi from the surface of
the ellipse, taken over all projection angles.
This is only a coarse approximation, since µ differs markedly between bone, soft tissue and
air (e.g. the upper airways and lungs), but it works remarkably well in tissue with
relatively homogeneous composition. The algorithm is often called Chang’s uniform
attenuation correction, and is illustrated in Figure 6.
Figure 6: Schematic presentation of Chang’s method: An attenuated image (left) is multiplied by a
correction matrix (middle) to yield the attenuation-corrected image (right). Below each image, the
profile through the centre point is plotted.
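The correction factor can be sketched for the simplest geometry, a circular slice (Chang's method proper fits an ellipse to each slice; the function name, defaults and circular shape are illustrative simplifications):

```python
import numpy as np

def chang_correction_factor(r, radius, mu=0.10, n_angles=64):
    """First-order Chang factor for a point at distance r from the centre
    of a uniform circular slice of the given radius: exp(mu * <d>), where
    <d> is the depth from the point to the surface along the viewing
    direction, averaged over all projection angles."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Distance from the point (r, 0) to the circle boundary along the
    # direction (cos phi, sin phi).
    d = np.sqrt(radius**2 - (r * np.sin(phis))**2) - r * np.cos(phis)
    return np.exp(mu * d.mean())
```

For a 10 cm radius slice with µ = 0.10/cm, a pixel at the centre (average depth 10 cm) is boosted by a factor e ≈ 2.7, while pixels near the edge receive a smaller correction, flattening the bowl-shaped profile of Figure 6.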
Another, theoretically better method is obtaining a second set of images in the same
geometric setting, where the attenuation coefficient in each voxel is measured. The
classic approach is to use gamma-radiation from radioactive isotopes located in an
external container (point or line source) as photon sources. The source is located such
that the patient is in between source and detector, while obtaining data under a full
360° rotation. The photons that are not attenuated are collected in the same way as the
photons emitted from the radioactive tracers inside the patient. It is thus possible to
measure the transmission and calculate an attenuation map. This map contains the
value of µ in each position in the tomographic slice.
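Given a blank scan (no patient in the beam) and the transmission scan, the average µ along each ray follows from the logarithm of equation (5). A minimal sketch with illustrative numbers:

```python
import math

def mean_mu(counts_blank, counts_transmission, path_length_cm):
    """Average attenuation coefficient along one ray, from equation (5):
    I = I0 * exp(-mu * r)  =>  mu = ln(I0 / I) / r."""
    return math.log(counts_blank / counts_transmission) / path_length_cm

# 20 cm of water-equivalent tissue attenuates the blank-scan count rate
# by a factor exp(-0.15 * 20):
mu = mean_mu(100000.0, 100000.0 * math.exp(-3.0), 20.0)  # -> 0.15/cm
```

Doing this for every ray in every projection yields the attenuation map mentioned above.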
Depending on the manufacturer and model, the transmission scan is performed
separately or simultaneously. In any case, the separation between emission and
transmission photons is made on the basis of their energy. Emission counts are always
present. (Remember that it often takes hours before a tracer is distributed. Therefore,
it is not feasible to perform a transmission scan before administration of the tracer).
Figure 7: Energy spectrum of a 133Ba transmission source in the presence of 99mTc (emission). The
green part of the spectrum is used for collecting “transmission photons” from Ba (356 keV±20%),
whereas the red part (140 keV±10%) is used for “emission counts” from Tc. Additional windows can
be used to estimate the contribution of scattered Ba-photons in the energy range of the Tc-window.
Modern scanners are often built together with a CT scanner and are then called hybrid
systems. The CT part is either taken from a conventional CT scanner (thus useful for
both attenuation correction and regular diagnostic CT imaging) or an X-ray tube/
detector combination specifically designed for hybrid systems. The latter usually
implies that it is possible to obtain data for attenuation correction with a relatively low
radiation dose to the patient.
X-rays are polychromatic (50-150 keV) while isotopes used in nuclear medicine are
usually monochromatic (e.g. 99mTc: 140 keV), and this implies that the CT data have to
be converted to so-called µ-maps (reflecting attenuation coefficients corresponding to
the energy of the radioactive isotope) before being used for attenuation correction.
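The conversion from CT numbers to a µ-map is often done with a simple piecewise-linear model. The sketch below is illustrative only: the breakpoint at 0 HU and the bone scaling factor are stand-ins, not values from any particular scanner:

```python
def hu_to_mu_140kev(hu, mu_water=0.154, mu_bone_scale=0.6):
    """Convert a CT number (Hounsfield units) to an attenuation
    coefficient (per cm) at 140 keV.

    Bilinear model with illustrative coefficients: below 0 HU the pixel
    is treated as a water/air mixture; above 0 HU as a water/bone mixture,
    whose attenuation rises more slowly at 140 keV than at CT energies.
    """
    if hu <= 0:
        return max(0.0, mu_water * (1.0 + hu / 1000.0))  # air (-1000 HU) -> 0
    return mu_water * (1.0 + mu_bone_scale * hu / 1000.0)
```

The essential point is that the mapping is not a single scale factor: bone attenuates disproportionately at CT energies, so its µ must be scaled down more than that of soft tissue.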
Despite the similar energy range, the detector used for the emission radiation cannot
be used for X-rays because of the enormous photon flux from an X-ray tube.
2.5 Scatter
Scattered radiation degrades SPECT images, just as it does planar images. There is no
way to circumvent scatter, apart from removing unnecessary objects in the vicinity of
the patient during acquisition. Scattered radiation has lost part of its original energy,
thus it can be distinguished from unscattered radiation in terms of energy. The
acceptance window settings (i.e. the “allowed energy interval”) reflect a compromise
between reduction in scatter (unwanted photons) and reduction in sensitivity.
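Such an acceptance window is easy to express in code. A minimal sketch for the 99mTc window mentioned in Figure 7 (the width convention, total width as a fraction of the centre energy, is an assumption):

```python
def in_window(energy_kev, centre_kev=140.0, width_fraction=0.20):
    """Accept a photon if its energy lies within a symmetric acceptance
    window, e.g. 140 keV +/- 10% for 99mTc (width_fraction = 0.20 total)."""
    half = centre_kev * width_fraction / 2.0
    return centre_kev - half <= energy_kev <= centre_kev + half

# Scattered photons have lost energy and tend to fall below the window:
accepted = [e for e in (140.0, 133.0, 120.0, 150.0) if in_window(e)]
```

Widening the window increases sensitivity but admits more scattered photons, which is precisely the compromise described above.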
Scatter correction algorithms are important when dealing with more than one energy,
e.g. in double-isotope investigations or when using radioactive transmission sources
for attenuation correction. A discussion of these algorithms is also beyond the scope
of this book.
2.6 Advanced Reconstruction Methods
Correction for scatter and attenuation is best applied during the reconstruction, and
not afterwards as a “post-hoc” method. However, as the correction for attenuation
and scatter depends on the spatial distribution of the radioactive substances, the
solutions are not separable, i.e. everything must be calculated at once. Therefore,
iterative reconstruction methods like EM-ML and OS-EM are often used. A
discussion of these methods is beyond the scope of this book.
2.7 Filtering
SPECT is very much constrained by the low signal-to-noise ratio. The Poisson
statistics of the radioactive decay dominates the noise characteristics of the
reconstructed data – and in fact, often the overall impression of the image. A property
of the Poisson distribution is that the mean (expectation value) is equal to the
variance, i.e. 100 counts in one pixel have a variance of 100 counts2 and thus a
standard deviation of 10 counts (10%) – plus the influence of all other noise sources.
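The mean-equals-variance property is easy to verify with simulated counts (illustrative sketch):

```python
import numpy as np

# For Poisson-distributed counts the variance equals the mean, so the
# relative noise of N counts is sqrt(N)/N = 1/sqrt(N): more counts, less noise.
rng = np.random.default_rng(seed=1)
counts = rng.poisson(lam=100, size=200_000)  # many pixels with mean 100 counts

mean, var = counts.mean(), counts.var()
relative_noise = counts.std() / mean  # close to 1/sqrt(100) = 10%
```

The 1/sqrt(N) behaviour is why longer acquisitions and coarser pixels both reduce the visual noise in the reconstructed slices.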
Therefore, filtering of the data is essential in order to improve the signal-to-noise
ratio. The idea with filtering is to remove high-frequency noise, assuming that the
features that make up the image predominantly have low-frequency content. The most
commonly used filter is a low-pass, Butterworth-filter. Its frequency response
function is:
H(f) = 1 / (1 + (f/fcutoff)^N)        (6)
where fcutoff is the frequency where the magnitude of the transfer function has fallen to
50%. N is the order of the filter (i.e. the steepness of transition), see Figure 8. The
filter parameters are very much dependent on the application and the organ of interest,
but typical numbers are order=4 and cut-off=0.4.
Filtering can be done slice-by-slice or for the complete three-dimensional dataset.
That means that the filter actually acts in two or three dimensions.
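A sketch of equation (6) applied slice-by-slice via the FFT, using the "typical numbers" quoted above; the frequency unit (cycles/pixel, Nyquist = 0.5) is an assumption:

```python
import numpy as np

def butterworth_lowpass(shape, f_cutoff=0.4, order=4):
    """2D Butterworth low-pass filter H(f) = 1 / (1 + (f/f_cutoff)^N),
    equation (6), evaluated on an FFT frequency grid in cycles/pixel."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    return 1.0 / (1.0 + (f / f_cutoff) ** order)

def filter_slice(image, f_cutoff=0.4, order=4):
    """Apply the filter to one reconstructed slice via the FFT."""
    H = butterworth_lowpass(image.shape, f_cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Filtering the full 3D dataset works the same way with a 3D frequency grid and `fftn`/`ifftn`.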
Figure 8: Frequency response function of the Butterworth filter. See also text.
2.8 Corrections
SPECT requires very high quality gamma-cameras. The detection uniformity of each
detector must be very high, the geometric centre of rotation well-defined and the
projection direction of the collimators very precise. Otherwise systematic errors
during angular sampling are introduced which result in various types of artefacts in
the reconstructed images.
The typical spatial resolution of SPECT is just below 1 cm. However, this is very
much dependent on the organ under investigation and the distance between the organ
and the detector.
3 Positron Emission Tomography (PET)
The fundamental difference between Single Photon Emission Tomography and
Positron Emission Tomography is that the former is based on detection of a single
photon from a decay event whereas the latter detects two coinciding photons arising
from an annihilation process after a positron decay.
A positron is the antimatter counterpart to an electron, having the same rest mass but
opposite charge. Figure 9 illustrates the creation of a positron.
Figure 9: Simplified illustration of a positron decay. An excited nucleus removes energy by converting
a proton (p+) into a neutron (n) and a positron (e+), also called β+ particle. Electrons (green) reside in
their shells. Only relevant particles are mentioned here.
In biological tissue, a positron annihilates with an electron after travelling a few
millimetres, see Figure 10. Both particles are converted into energy in the form of two
photons travelling along a line in opposite directions. The laws of conservation of
energy and momentum enforce this. This annihilation radiation can be detected with a PET
camera. The energy of the photons is always 511 keV, corresponding to the rest mass
of an electron (and positron).
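The 511 keV figure follows directly from the electron rest energy E = m·c², using standard physical constants:

```python
# The 511 keV annihilation energy is the electron rest mass energy, E = m_e * c^2:
m_e = 9.1093837015e-31      # electron rest mass, kg
c = 2.99792458e8            # speed of light, m/s
e = 1.602176634e-19         # J per eV

energy_kev = m_e * c**2 / e / 1000.0  # -> about 511 keV
```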
Figure 10: Annihilation of an electron/positron pair into two photons.
The detector system of a PET scanner consists of a few dozen detector rings with a
total of several thousand individual crystals. (Each of these crystals can be regarded
as a “mini-gamma-camera”.) The detection of a photon in a crystal starts an electronic
coincidence circuit that tries to find a matching event on the opposite side of the
detector ring within the chosen coincidence window, e.g. 10 ns, see Figure 11. In the
case of no matching event, the primary event is discarded. If there is a matching
event, the line-of-response (LOR) between the two crystals is recorded.
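The coincidence logic can be sketched as a simple pairing of time-sorted events. This is illustrative only: a real system also checks that the two crystals lie roughly opposite each other, which is omitted here:

```python
def find_coincidences(events, window_ns=10.0):
    """Pair up detection events that occur within the coincidence window.

    events: list of (time_ns, crystal_id), assumed sorted by time.
    Returns a list of crystal-id pairs (the LOR endpoints); events with no
    partner inside the window are discarded, as described in the text.
    """
    lors, i = [], 0
    while i < len(events) - 1:
        t1, c1 = events[i]
        t2, c2 = events[i + 1]
        if t2 - t1 <= window_ns:
            lors.append((c1, c2))
            i += 2  # both events consumed by this coincidence
        else:
            i += 1  # no partner in the window: discard the single event
    return lors

# Two true coincidences and one single event that is discarded:
lors = find_coincidences([(0.0, 5), (4.0, 105), (50.0, 7), (200.0, 30), (203.0, 130)])
```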
Strictly speaking, the annihilation photons are not necessarily emitted with an
angle of exactly 180°. This is correct only in the centre-of-mass frame of
reference of the electron-positron pair. In reality the deviation is up to about 0.5°,
which is negligible in this context.
Figure 11: Illustration of a decay, emission of two photons and subsequent coincidence detection along
the line-of-response (LOR) within a single detector ring
As the reader may have noticed by now, the term “positron emission
tomography” is misleading: the detected particles are not the emitted positrons but the
annihilation photon pairs. Therefore, some people prefer to call PET
“annihilation radiation tomography” (ART).
In contrast to SPECT cameras, PET cameras do not employ collimators. The direction
of the projection is determined by the coincidence logic. That is sometimes called
“electronic collimation”. Note that while gamma cameras use NaI as detector
material, PET cameras require other crystals with better stopping power. During the
1990s, BGO (bismuth germanate) was the standard crystal material for PET. In
recent years, other detector materials like LSO (lutetium oxyorthosilicate) have
become the material of choice. The great challenge is to develop detector materials
that have high stopping power combined with high light output (counting statistics
and efficiency), good energy resolution (reduction of scatter) and short decay time
(short dead-time).
Modern PET cameras normally have a few dozen detector rings resulting in an axial
field-of-view (FOV) of 15-20 cm. The radial FOV is usually about 50-60 cm. When
coincidence events between different detector rings are recorded as well, the
acquisition method is called “3D-PET”. Recording only coincidence events within
one ring (as in Figure 11) is called “2D-PET”. In 2D-PET, collimating septa are
inserted between the detector rings to reduce unwanted cross-talk. (The difference
between these septa and collimators in gamma-cameras is that septa only separate
slices.) 3D-PET is about five times more sensitive than 2D-PET, but more susceptible
to scattered radiation and radiation sources outside the FOV. Modern PET cameras no
longer support 2D-PET, and offer solutions for the drawbacks of 3D-PET.
The typical spatial resolution of a PET camera is below 5 mm in all directions.
Ultimately, the resolution in PET is limited by the distance the positron travels away
from the decaying nucleus to the point of annihilation.
The reconstruction of the collected coincidence data into an image is very similar to
SPECT and CT. As the geometry of the detector ring is fixed, each detector-pair (i.e.
each LOR) corresponds to a certain projection angle in SPECT. Instead of sampling
all projection angles one by one, all angles are sampled in parallel at the same time.
Detected coincidence events (i.e. a detector-pair has recorded coinciding photons) are
distributed (“sorted”) into an array (the sinogram) according to the LOR of that event
which again corresponds to a projection angle. This can be done on-line in real-time
or off-line after the acquisition. The resulting sinogram is reconstructed as described
above for SPECT cameras. Iterative reconstruction algorithms are available as well
and widely used.
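The sorting step maps each detector pair to sinogram coordinates. For a single ring the geometry reduces to a chord of a circle; the sketch below uses illustrative dimensions and a simple sign convention:

```python
import math

def lor_to_sinogram_bin(theta1, theta2, ring_radius_cm=30.0):
    """Map a detector pair on one ring (crystals at angles theta1, theta2,
    in radians) to sinogram coordinates (phi, s).

    The LOR is the chord between the two crystals: its perpendicular
    distance from the centre is s = R * cos((theta2 - theta1) / 2) and its
    projection angle phi is the mean crystal angle. Geometry is illustrative.
    """
    phi = (theta1 + theta2) / 2.0
    s = ring_radius_cm * math.cos((theta2 - theta1) / 2.0)
    return phi, s

# A pair of exactly opposite crystals yields a LOR through the centre (s = 0):
phi, s = lor_to_sinogram_bin(0.0, math.pi)
```

Accumulating counts in the (phi, s) array as events arrive builds the sinogram that is then reconstructed.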
As in SPECT, the problem of photon attenuation can be solved by recording an
additional transmission image. In “conventional” PET cameras this is achieved by
rotating a radioactive line source (often 68Ge or 137Cs) around the patient and counting
the photons transmitted through the patient and those detected close to the source.
Modern hybrid PET/CT have replaced these systems and employ a CT image for
estimation of photon attenuation. While CT yields much better images (virtually noise
free), some scaling of the attenuation coefficients is required, as photon attenuation in
tissue varies considerably between 120 keV (photon energy in CT) and 511 keV. The
CT images can subsequently also be used for image fusion, accurate anatomic
localisation of PET findings and – if acquired with the correct parameters – for
diagnosis.
Traditionally, PET systems were installed in research institutions with nearby
facilities for isotope production. There were various demands on PET images, one of
them being that PET should be a quantitative method, i.e. that the pixel data in PET
could be calibrated to reflect radioactivity concentration in tissue. This requires
accurate attenuation and scatter correction and both were possible even with limited
computing power. Most important in this context is the fact that the main obstacle,
attenuation correction, is separable from solving the problem of reconstruction of
emission images. The two types of images can be calculated separately and
independently. This is because the attenuation along a LOR does not depend on the
location of the annihilation event. (This is easy to see – make a drawing and write
down the probability for detecting both photons.) PET has been a quantitative method
ever since, meaning that pixel data in PET usually have units of Bq/ml.
The most widely used isotope for PET is 18F with a half-life of about 110 minutes –
sufficiently long to allow production with a cyclotron, subsequent synthesis of
labelled radiopharmaceuticals, and finally delivery to PET sites without a cyclotron of
their own. 18F is usually produced for the synthesis of 18F-FDG, a glucose analogue
that allows for mapping glucose metabolism in patients. Major applications are
diagnosis, staging, and treatment evaluation of cancer. Another 18F-labelled
radiopharmaceutical is 18F-NaF, a bone-seeking agent used for detecting skeletal
abnormalities. The uptake of 18F-NaF in bone reflects blood flow and bone
remodelling (Fig. 12). The clinical use of 18F-NaF is expected to grow considerably
over the coming years.
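The role of the 110-minute half-life in delivery logistics is a one-line calculation:

```python
import math

def activity_fraction(t_min, half_life_min=110.0):
    """Fraction of 18F activity remaining after t_min minutes."""
    return 2.0 ** (-t_min / half_life_min)

# After a two-hour transport from the cyclotron, roughly half the dose is left:
remaining = activity_fraction(120.0)  # about 0.47
```

With shorter-lived PET isotopes (e.g. 15O, half-life about 2 minutes) the same calculation shows why those tracers can only be used with an on-site cyclotron.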
Figure 12: 3D-rendering of a fused PET/CT dataset. 18F-NaF PET and CT were acquired in a single
session on a combined PET/CT system less than one hour after tracer administration. Note the 18F-NaF
uptake in the spine and hips compared to the low uptake in the skull and bones of the limbs. 18F-NaF is
excreted through the bladder.
4 Suggestions for further reading
• T. Turkington, J Nucl Med Technol 2001; 29:1-8
• G. Muehllehner and J. Karp, Phys. Med. Biol. 51 (2006) R117–R137
Introduction to Magnetic Resonance
Imaging Techniques
Lars G. Hanson, [email protected]
Danish Research Centre for Magnetic Resonance (DRCMR),
Copenhagen University Hospital Hvidovre
Latest document version: http://www.drcmr.dk/
Translation to English: Theis Groth
Revised: August, 2009
It is quite possible to acquire images with an MR scanner without understanding the principles
behind it, but choosing the best parameters and methods, and interpreting images and artifacts,
requires understanding. This text serves as an introduction to magnetic resonance imaging
techniques. It is aimed at beginners in possession of only a minimal level of technical expertise,
yet it introduces aspects of MR that are typically considered technically challenging. The notes
were written in connection with teaching of audiences with mixed backgrounds.
Contents
1 Introduction
  1.1 Supplemental material
  1.2 Recommended books
2 Magnetic resonance
3 The magnetism of the body
4 The rotating frame of reference
5 Relaxation
  5.1 Weightings
  5.2 Causes of relaxation
  5.3 Inhomogeneity as a source of signal loss, T2*
6 Sequences
7 Signal-to-noise and contrast-to-noise ratios
8 Quantum Mechanics and the MR phenomenon
  8.1 Corrections
9 Imaging
  9.1 Background
  9.2 Principles
  9.3 Slice selection
  9.4 Spatial localization within a slice
  9.5 Extension to more dimensions – k-space
  9.6 Similarity and image reconstruction
  9.7 Moving in k-space
  9.8 Image acquisition and echo-time
  9.9 Frequency and phase encoding
  9.10 Spatial distortions and related artifacts
  9.11 Slice-selective (2D-) versus 3D-sequences
  9.12 Aliasing and parallel imaging
  9.13 Finishing remarks on the subject of imaging
10 Noise
11 Scanning at high field
12 MR Safety
1 Introduction
This text was initially written as lecture notes in Danish. Quite a few of the students are English
speaking, however. Moreover, the approach taken is somewhat different from that of most other
introductory texts. Hence it was deemed worth the effort to do a translation. The task was taken
on by Theis Groth (July 2009 version) whose efforts are much appreciated.
The goal of the text is to explain the technical aspects accurately without the use of mathematics.
Practical uses are not covered in the text, but it introduces the prerequisites and, as such, seeks
to provide a sturdy base for further studies. While reading, you may find the glossary of help
(located towards the end of the notes). It elaborates on some concepts, and introduces others.
Material has been added along the way whenever a need was felt. As a result, the text covers
diverse subjects. During the accompanying lectures, software that illustrates important aspects
of basic MR physics, including popular sequences, contrast and imaging, was used. It is highly
recommended that you download and experiment with these programs, which are available at
no charge. Corrections, comments and inspiring questions concerning the text and software are
always appreciated.
The text is concerned only with general aspects that are discussed in pretty much any technical
textbook on MR. Therefore, very few specific references are given.
1.1 Supplemental material
http://www.drcmr.dk/bloch Interactive software that can illustrate a broad spectrum
of MRI concepts and methods, and that can contribute significantly to the understanding.
Figure 1: For teaching purposes, a free, interactive simulator that illustrates a large number of important
concepts and techniques, has been developed. The user chooses the initial conditions for the simulation
and can thereafter manipulate the components of the magnetization with radio waves and gradients, as is
done during a real scanning session. A screenshot from the simulator is shown above. It is described in
detail at http://www.drcmr.dk/bloch
Animations that illustrate the use of the software and selected MR phenomena are on the
homepage.
http://www.drcmr.dk/MR Text and animations that briefly describe MR in English. The
page was created as a supplement to an article that discusses the connection between the
classical and the quantum mechanical approach to the MR phenomenon, and which also
describes problems that appear in many basic text books.
http://www.drcmr.dk/MRbasics.html Links to this text and an example of accompanying “slides”.
1.2 Recommended books
Even technically oriented readers can benefit from reading a relatively non-technical introduction.
Having a basic understanding makes it much easier to understand the formalism in later stages.
Once a basic understanding is acquired, the Internet is a good source of more specific texts. Some
books deserve special mention:
“Magnetic Resonance Imaging: Physical and Biological Principles” by Stewart C. Bushong. A
good introduction that does not demand a technical background from its readers.
“Clinical Magnetic Resonance Imaging” by Edelman, Hesselink and Zlatkin. Three volumes
featuring a good mixture of technique and use. Not an intro, but a good follow-up (according
to people who have read it. I haven’t).
“Magnetic Resonance Imaging – Physical Principles and Sequence Design” by Haacke, Brown,
Thompson and Venkatesan. Broadly oriented textbook with plenty of physics, techniques
and sequences. Not an easily read introduction, but suitable for physicists and similar
people.
“Principles of Nuclear Magnetic Resonance Microscopy” by Paul T. Callaghan. A classic within
the field of probing molecular dynamics. Technically demanding. Should only be read in
the company of a grown-up.
“Spin Dynamics: Basics of Nuclear Magnetic Resonance” by Malcolm H. Levitt. Covers
theoretical aspects of MR spectroscopy as used in chemical analysis and is thus irrelevant
to most who work with MR image analysis. Excels in clear, coherent writing and a
description of quantum mechanics devoid of typical misconceptions.
Please note that these books have been published in several editions.
2 Magnetic resonance
Initially it is described how magnetic resonance can be demonstrated with a pair of magnets and
a compass. The level of abstraction already increases towards the end of this first section, but
despair thee not: As the complexity rises, so shall it fall again. A complete understanding of
earlier sections is not a prerequisite for future gain.
If a compass happens to find itself near a powerful magnet, the compass needle will align with
the field. In a normal pocket compass, the needle is embedded in liquid to dampen its oscillations.
Without liquid, the needle will vibrate through the north direction for a period before coming to
rest. The frequency of the oscillations depends on the strength of the magnetic field and of the
magnetic needle. The more powerful these are, the faster the vibrations will be.
Radio waves are magnetic fields that change in time (oscillate1 ) and as long as the needle
vibrates, weak radio waves will be emitted at the same frequency as that of the needle. The
frequency is typically reported as the number of oscillations per second, known as Hertz (Hz).
If the needle oscillates 3 times per second, radio waves will be emitted at a frequency of 3 Hz,
for example. The strength of the radio waves (known as the amplitude) is weakened as the
oscillations of the needle gradually vanish.
Imagine the following situation: A compass is placed in a magnetic field created by one or
more powerful magnets. After a short period of time, the needle has settled and is pointing in the
1 Throughout this text, the term “radio waves” is used for rapidly oscillating magnetic fields. It has been pointed out
by David Hoult in particular, that the wave nature of the oscillating magnetic field is not important for MR except
at ultra-high field. The typical choice of wording is therefore unfortunate (“oscillating magnetic field” or “B1 field”
is preferred over “radio wave field”). I agree, but will nevertheless comply with the somewhat unfortunate standard
as it facilitates reading and correctly highlights the oscillatory nature of the B1 -field. The difference lies in the
unimportant electrical component of the field, and in the spatial distribution not felt by individual nuclei.
Figure 2: (a) A magnetic needle in equilibrium in a magnetic field. The needle orients itself along the
field lines that go from the magnet’s north pole to its south pole (and yes, the magnetic south pole of
the earth is close to the geographical North Pole). (b) A weak perpendicular magnetic field can push the
magnet a tad away from equilibrium. If the magnetic field is changed rhythmically in synchrony with the
oscillations of the needle, many small pushes can result in considerable oscillation. (c) The same effect
can be achieved by replacing the weak magnet with an electromagnet. An alternating current will shift
between pushing and pulling the north pole of the magnetic needle (opposite effect on the south pole).
Since the field is most powerful inside the coil, the greatest effect is achieved by placing the needle there.
direction of the magnetic field, figure 2(a). If the needle is given a small push perpendicular to the
magnetic field (a rotation), it will vibrate through north, but gradually settle again. The oscillations
will occur at a frequency that will hereafter be referred to as the resonance frequency. As long as
the magnetic needle is oscillating, radio waves with the same frequency as the oscillation will
be emitted. The radio waves die out together with the oscillations, and these can in principle be
measured with an antenna (coil, figure 2(c)). The measurement can, for example, tell us about the
strength of the magnetic needle and the damping rate of its oscillations.
As will be made clear, there are also “magnetic needles” in the human body, as hydrogen nuclei
are slightly magnetic. We can also manipulate these so they oscillate and emit radio waves, but it
is less clear how we may give the needles the necessary “push.” In order to make an ordinary
compass needle swing, we may simply give it a small push perpendicular to the magnetic field
with a finger, but this is clearly not feasible inside the body. Instead, we may take advantage of the
fact that magnets influence other magnets, so that the weak push perpendicular to the magnetic
field can be delivered by bringing a weak magnet near the needle, as shown in figure 2(b). With
this method, we may push the needle from a distance by moving a weak magnet towards the
needle and away again.
In an MR scanner, the main magnet is extremely powerful for reasons that will be explained
below. The perpendicular magnetic field is quite weak in comparison. The push that the weak magnet
can deliver is therefore too weak to produce radio waves of any significance, if the magnetic
needle is only “pushed” once.
If, on the other hand, many pushes are delivered in synchrony with the mentioned oscillations
of the magnetic needle, even waving a weak magnet can produce strong oscillations of the
magnetic needle. This is achieved if the small magnet is moved back and forth at a frequency
identical to the natural oscillation frequency, the resonance frequency, as described above.
What is described here is a classical resonance phenomenon where even a small manipulation
can have a great effect, if it happens in synchrony with the natural oscillation of the system in
question. Pushing a child on a swing provides another example: Even weak pushes can result
in considerable motion, if pushes are repeated many times and delivered in synchrony with the
natural oscillations of the swing.
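The resonance phenomenon described here, small well-timed pushes building up a large oscillation, can be made quantitative with a small simulation of a weakly damped, driven oscillator; all parameters are illustrative:

```python
import math

def driven_amplitude(f_drive, f0=1.0, damping=0.02, steps=200_000, dt=0.001):
    """Integrate x'' = -(2*pi*f0)^2 * x - damping*x' + cos(2*pi*f_drive*t)
    with a symplectic Euler step and return the largest displacement reached."""
    w0, wd = 2.0 * math.pi * f0, 2.0 * math.pi * f_drive
    x, v, peak = 0.0, 0.0, 0.0
    for n in range(steps):
        t = n * dt
        a = -w0 * w0 * x - damping * v + math.cos(wd * t)
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# The same weak drive, on and off resonance:
on_resonance = driven_amplitude(1.0)
off_resonance = driven_amplitude(1.5)
```

Driving at the natural frequency builds up an amplitude orders of magnitude larger than driving only 50% off resonance, which is exactly why the B1 frequency must match the precession frequency in an MR scanner.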
Let us focus on what made the needle oscillate: It was the small movements of the magnet,
back and forth, or more precisely the oscillation of a weak magnetic field perpendicular to the
powerful stationary magnetic field caused by the movement of the magnet.
But oscillating magnetic fields are what we understand by “radio waves”, which means that in
reality, we could replace the weak magnet with other types of radio wave emitters. This could,
for example, be a small coil subject to an alternating current, as shown in figure 2(c). Such a coil
will create a magnetic field perpendicular to the magnetic needle.
The field changes direction in synchrony with the oscillation of the alternating current, so if
the frequency of the current is adjusted to the resonance frequency of the magnetic needle, the
current will set the needle in motion. When the current is then turned off, the needle will continue
to swing for some time. As long as this happens, radio waves will be emitted by the compass
needle. The coil now functions as an antenna for these radio waves. In other words, a voltage is
induced over the coil because of the oscillations of the magnetic field that the vibration of the
magnetic needle causes.
In summary, the needle can be set in motion from a distance by either waving a magnet or
by applying an alternating current to a coil. In both situations, magnetic resonance is achieved
when the magnetic field that motion or alternating currents produce, oscillates at the resonance
frequency. When the waving or the alternating current is stopped, the radio waves that are
subsequently produced by the oscillating needle, will induce a voltage over the coil, as shown.
The voltage oscillates at the resonance frequency and the amplitude decreases over time. A
measurement of the voltage will reflect aspects of the oscillating system, e.g. “the relaxation
time”, meaning the time it takes for the magnetic needle to come to rest.
The above-mentioned experiment is easily demonstrated, as nothing beyond basic school physics is involved. Nobody acquainted with magnets and electromagnetism will be surprised
during the experiment. Nonetheless, the experiment reflects the
most important aspects of the basic MR phenomenon, as it is
used in scanning. There are, however, differences between MR
undertaken with compasses and nuclei. These are due to the fact
that the nuclei are not only magnetic, but also rotate around an
axis of their own, as shown in figure 3 to the right (a twist to this
is discussed in section 8). The rotation, called spin, makes the nuclei magnetic along the rotational axis.

Figure 3: The spin of the nuclei (rotation) makes them magnetic.

This is equivalent to a situation where the compass needle described above constantly and quickly rotates around
its own length direction (has spin or angular momentum). Such a needle would swing around
north in a conical motion if the mounting allowed for this, rather than swing in a plane through
north. This movement is called precession and it is illustrated in figure 4. Think of it as a variant
of the normal needle oscillation. Similarly, a pendulum can swing around a vertical axis rather
than in a vertical plane, but basically there is no great difference. In the rest of this section, the
consequences of spin are elaborated on. The finer nuances are not important for understanding
MR scanning.
Precession around the direction of a field is also known from a spinning top (the gravitational
field rather than the magnetic field, in this example): A spinning top that rotates quickly will fall
slowly, meaning that it will gradually align with the direction of the gravitational field. Rather
than simply turning downwards, it will slowly rotate around the direction of gravity (precess)
while falling over. These slow movements are a consequence of the fast spin of the spinning top.

Figure 4: A magnetization M that precesses around the magnetic field B0 because of spin (rotation around M).

The same applies to a hypothetical magnetic needle that rotates quickly around its own length
axis, as the atomic nuclei in the body do. In the experiment described above, such a magnetic
needle with spin would rotate around north (precess) after having received a push. It would
spiral around north until it eventually pointed in that direction, rather than simply vibrate in a
plane as a normal compass needle. It
would meanwhile emit radio waves with the oscillation frequency, which we may now also call
the precession frequency. This frequency is independent of the amplitude, i.e., independent of
the size of the oscillations. This consequence of spin is another difference from the situation shown
in figure 2, since the natural oscillation frequency of a normal compass is only independent of
amplitude for relatively weak oscillations.
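The precession itself can be simulated directly from the classical equation of motion dM/dt = γ·M×B: the transverse part of M rotates at the precession frequency while Mz and the length of M stay constant. A sketch in illustrative units:

```python
import numpy as np

def precess(M0, B=(0.0, 0.0, 1.0), gamma=2 * np.pi, dt=1e-4, steps=10_000):
    """Rotate M around B according to dM/dt = gamma * M x B.

    Illustrative units where gamma*|B| = 2*pi, i.e. one revolution of the
    transverse component per time unit; steps*dt = 1 gives one full turn.
    """
    M, B = np.asarray(M0, float), np.asarray(B, float)
    for _ in range(steps):
        # Small Euler rotation about B; renormalizing keeps |M| constant
        M = M + gamma * np.cross(M, B) * dt
        M *= np.linalg.norm(M0) / np.linalg.norm(M)
    return M

M = precess([1.0, 0.0, 0.5])  # after one period M is back where it started
```

The simulation shows the two rotations discussed above are distinct: the spin (rotation of the nucleus itself) is what gives M its magnetic moment, while the precession is the slow rotation of M around the field direction.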
Like before, it is possible to push the magnetic needle with a weak, perpendicular magnetic
field that oscillates in synchrony with the precession (meaning that it oscillates at the resonance
frequency). Just as the magnetic needle precesses around the stationary magnetic field, it will
be rotated around the weak, rotating magnetic field, rather than (as before) moving directly
in the direction of it. In practice, this means that a perpendicular magnetic field that rotates
in synchrony with the precession of the needle will rotate it slowly around the rotating field’s
direction (meaning a slow precession around the weak rotating magnetic field).
We have now described in detail the magnetic resonance phenomenon as employed for
scanning. The influence of spin on the movement of the magnetic axis can, at first glance, be
difficult to understand: it may seem odd that a force in one direction results in pushes towards
another direction (that the pull on a magnetic needle towards north will make it rotate
around north, if the needle rotates around its own length axis, i.e., has spin). It does, however,
fall within the realm of classical mechanics.
One does not need to know why spinning tops, gyros, nuclei and other rotating objects act
strangely in order to understand MR, but it is worth bearing in mind that the spin axis rotates around
the directions of the magnetic fields. The precession concept is thus of importance, and it is also
worth remembering that spin and precession are rotations around two different axes.
3 The magnetism of the body
Equipped with an understanding of how magnetic needles with and without spin are affected
by radio waves, we now turn to the “compass needles” in our very own bodies.
• Most frequently, the MR signal is derived from hydrogen nuclei (meaning the atomic nuclei
in the hydrogen atoms). Most of the body’s hydrogen is found in the water molecules. Few
other nuclei are used for MR.
• Hydrogen nuclei (also called protons) behave as small compass needles that align themselves parallel to the field. This is caused by an intrinsic property called nuclear spin (the
nuclei each rotate as shown in figure 3). By the “direction of the nuclear spins” we mean
the axis of rotation and hence the direction of the individual “compass needles”.
• The compass needles (the spins) are aligned in the field, but due to movements and nuclear
interactions in the soup, the alignment only happens partially, as shown in figure 5 –
very little, actually. There is only a weak tendency for the spins to point along the field.
The interactions affect the nuclei more than the field we put on, so the nuclear spins are
still largely pointing randomly, even after the patient has been put in the scanner. An
analogy: If you leave a bunch of compasses resting, they will all eventually point towards
north. However, if you instead put them into a running tumble dryer, they will point in
all directions, and the directions of individual compasses will change often, but there will
still be a weak tendency for them to point towards north. In the same manner, the nuclei in
the body move among each other and often collide, as expressed by the temperature. At
body temperature there is only a weak tendency for the nuclei to point towards the scanners
north direction.
Together, the many nuclei form a total magnetization (compass needle) called the net
magnetization. It is found, in principle, by combining all the many contributions to the
magnetization, putting arrows one after another. If an equal number of arrows point in all
directions, the net magnetization will thus be zero. Since it is generally the sum of many
contributions that swing in synchrony as compass needles, the net magnetization itself
swings as a compass needle. It is therefore adequate to keep track of the net magnetization
rather than each individual contribution to it.
As mentioned above, the nuclei in the body move among each other (thermal motion) and
the net magnetization in equilibrium is thus temperature dependent. Interaction between
neighboring nuclei obviously happens often in liquids, but they are quite weak due to
the small magnetization of the nuclei. Depending on the character and frequency of the
interaction, the nuclei precess relatively undisturbed over periods of, for example, 100 ms
duration. At any time, there is a certain probability that a nucleus takes part in a dramatic
clash with other nuclei, and thus will point in a new direction, but this happens rather
infrequently.
The net magnetization is equivalent to only around 3 per million nuclear spins oriented
along the direction of the field (3 ppm at 1 tesla). This means that the magnetization of
a million partially aligned hydrogen nuclei in the scanner equals a total magnetization of
only 3 completely aligned nuclei.
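The 3 ppm figure above can be checked with the standard high-temperature Boltzmann estimate of the spin polarization, P ≈ hf/(2kT); this is a sketch using textbook physical constants, not values taken from this document:

```python
H = 6.626e-34    # Planck constant, J*s
K = 1.381e-23    # Boltzmann constant, J/K

def polarization_ppm(b0_tesla: float, temp_k: float = 310.0,
                     gamma_hz_per_t: float = 42.58e6) -> float:
    """High-temperature Boltzmann polarization of proton spins: P ~ h*f / (2*k*T)."""
    f = gamma_hz_per_t * b0_tesla  # Larmor frequency in Hz
    return H * f / (2 * K * temp_k) * 1e6

print(round(polarization_ppm(1.0), 1))  # roughly 3 ppm at 1 T and body temperature
```

The estimate lands close to the 3 ppm quoted above, confirming that only a tiny excess of spins contributes to the net magnetization.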
Figure 5: The figure shows the same situations in two and three dimensions (top and bottom,
respectively). The nuclear spins are shown as numerous arrows (vectors). In the lower graphs,
they are all drawn as beginning in the same point, so that the distribution over directions is clear
(implicit coordinate system (Mx , My , Mz )). When a patient arrives in the ward, the situation is as
shown in the two graphs to the left: The spins are oriented randomly, with a uniform distribution
over directions, meaning that there is about an equal number of spins pointing in all directions. The
net magnetization is near zero and the nuclei do not precess. When a magnetic field B0 is added, as
shown in the two figures to the right, a degree of alignment (order) is established. The field is only
shown explicitly in the top right figure, but the effect is visible in both: The direction distribution
becomes “skewed” so that a majority of the nuclei point along the field. In the lower right figure,
the net magnetization (thick vertical arrow) and the precession (the rotation of the entire ball caused
by the magnetic field) are shown. The lower figures appear in the article Is Quantum Mechanics
necessary for understanding Magnetic Resonance? Concepts in Magnetic Resonance Part A, 32A
(5), 2008.
Figure 6: Scene from animation found at http://www.drcmr.dk/MR that shows how radio
waves affect a collection of nuclear spins precessing around B0 (vertical axis) at the Larmor
frequency. The radio wave field that rotates around the same axis at the same frequency, induces
simultaneous rotation around a horizontal axis, as symbolized by the circular arrow. The relative
orientation of the nuclei does not change, and it is therefore adequate to realize how the net
magnetization (here shown by a thick arrow) is affected by the magnetic field.
With the gigantic number of hydrogen nuclei (about 10^27) found in the body, the net
magnetization still becomes measurable. It is proportional to the field: A large field
produces a high degree of alignment and thus a large magnetization and a better signal-to-noise ratio.
• If the net magnetization has been brought away from equilibrium, so it no longer points
along the magnetic field, it will subsequently precess around the field with a frequency of
42 million rotations/second at 1 tesla (42 MHz, megahertz). This is illustrated in figure 4.
Eventually it will return to equilibrium (relaxation), but it takes a relatively long time on
this timescale (e.g. 100 ms as described above). Meanwhile, radio waves at this frequency
are emitted from the body. We measure and analyze those. Notice: The position of the
nuclei in the body does not change - only their axis of rotation.
• The precession frequency is known as the Larmor frequency in the MR community. The
Larmor equation expresses a connection between the resonance frequency and the magnetic
field, and it is said to be the most important equation in MR:
f = γB0
The equation tells us that the frequency f is proportional to the magnetic field, B0 . The
proportionality factor is 42 MHz/T for protons. It is called “the gyromagnetic ratio” or
simply “gamma”. Thus, the resonance frequency for protons in a 1.5 tesla scanner is
63 MHz, for example.
The Larmor equation is mainly important for MR since it expresses the possibility of
designing techniques based on the frequency differences observed in inhomogeneous fields.
Examples of such techniques are imaging, motion encoding and spectroscopy.
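As a quick numeric check of the Larmor equation, here is a minimal sketch using the rounded 42 MHz/T value quoted above:

```python
GAMMA_MHZ_PER_T = 42.0  # rounded proton gyromagnetic ratio used in this text (more precisely about 42.58 MHz/T)

def larmor_frequency_mhz(b0_tesla: float) -> float:
    """Larmor equation: f = gamma * B0."""
    return GAMMA_MHZ_PER_T * b0_tesla

for b0 in (1.0, 1.5, 3.0):
    print(f"B0 = {b0:.1f} T -> f = {larmor_frequency_mhz(b0):.0f} MHz")
```

The 1.5 T case reproduces the 63 MHz mentioned above.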
• But how is the magnetization rotated away from its starting point? It happens by applying
radio waves at the above mentioned frequency.
Radio waves are magnetic fields that change direction in time. The powerful stationary
field pushes the magnetization so that it precesses. Likewise the radio waves push the
magnetization around the radio wave field, but since the radio wave field is many thousand
times weaker than the static field, the pushes normally amount to nothing.
Because of this, we exploit a resonance phenomenon: By affecting a system rhythmically at an appropriate frequency (the system's resonance frequency), a large effect
can be achieved even if the force is relatively weak. A well-known example is pushing a
child sitting on a swing. If we push in synchrony with the swing rhythm, we can achieve
considerable effect through a series of rather weak pushes. If, on the other hand, we push
against the rhythm (too often or too rarely), we achieve very little, even after many pushes.
With radio waves at an appropriate frequency (a resonant radio wave field), we can slowly
rotate the magnetization away from equilibrium. “Slowly” here means about one millisecond for a 90-degree turn, which is a relatively long time: the magnetization precesses
42 million turns per second at 1 tesla, so it completes 42 thousand full precession turns in
the time it takes to carry out the 90-degree turn. The precession is thus far faster than the
rotation induced by the radio waves.
Figure 6 is a single scene from an animation found at http://www.drcmr.dk/MR
that shows how a collection of nuclei each precessing around both B0 and a rotating radio
wave field as described earlier, together form a net magnetization that likewise moves as
described.
The strength of the radio waves that are emitted from the body depends on the size of the net
magnetization and on the orientation. The greater the oscillations of the net magnetization,
the more powerful the emitted radio waves will be. The signal strength is proportional
to the component of the magnetization that is perpendicular to the magnetic field (the
transversal magnetization), while the parallel component (known as the longitudinal
magnetization) does not contribute. In figure 4, the size of the transversal magnetization is the
circle radius.
If the net magnetization points along the magnetic field (as in equilibrium, to give an
example) no measurable radio waves are emitted, even if the nuclei do precess individually.
This is because the radio wave signals from the individual nuclei are not in phase, meaning
that they do not oscillate in synchrony perpendicular to the field. The contributions thereby
cancel in accordance with the net magnetization being stationary along the B0 -field (there
is no transversal magnetization).
• The frequency of the radio waves is in the FM-band so if the door to a scanner room
is open, you will see TV and radio communication as artifacts in the images. At lower
frequencies we find line frequencies and AM radio. At higher frequencies, we find more
TV, mobile phones and (far higher) light, X-ray and gamma radiation. From ultra-violet
light and upwards, the radiation becomes “ionizing”, meaning that it has sufficient energy
to break molecules into pieces. MR scanning uses radio waves very far from such energies.
Heating, however, is unavoidable, but does not surpass what the body is used to.
4 The rotating frame of reference
Confusion would arise if the descriptions to come continued to involve both a magnetization
that precesses and radio waves that push this in synchrony with the precession. It is simply not
practical to keep track of both the Larmor-precession and the varying radio wave field at the same
time, yet this is necessary to establish the direction of the pushes, and thus how the magnetization
changes. Instead, we will now change our perspective.
An analogy: We describe how the horse moves on a merry-go-round in action. Seen by
an observer next to the merry-go-round, the horse moves in a relatively complicated pattern.
However, if the merry-go-round is mounted, the movement of the horse appears limited to a linear
up-and-down movement. We say that we have changed from the stationary frame of reference to
a rotating frame of reference.
In the same vein, we can simplify the description of the resonance phenomenon by moving to
a rotating frame of reference and getting rid of the Larmor-precession. We mount the merry-go-round that rotates with the Larmor frequency and recognize that in this frame the magnetization is
stationary until we employ radio waves. The effect of B0 therefore appears to be gone, as shown
in figure 7.
Figure 7: In the stationary frame of reference, a rapid precession occurs (left). In the rotating frame of
reference (middle) the magnetization usually only moves slowly due to relaxation and smaller differences
in frequency. When resonant radio waves are transmitted, the effect of these is to induce precession
around an axis perpendicular to the B0 field (right). Descriptions in the rotating and stationary frames of
reference are equally precise, but explanations are almost always given in the rotating frame of reference,
since the movement of the magnetization is more simple there. Furthermore, the scanner always presents
measurements as if they were obtained in the rotating frame of reference. This happens because the scanner
removes the rapid variation in the signals before they are stored in the computer (the signals are modulated
down from the Larmor frequency to lower frequencies).
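The caption's last remark, that the scanner demodulates the signal down from the Larmor frequency, can be sketched as follows; the frequencies are made-up example values:

```python
import cmath

f_larmor = 63.0e6  # Hz, e.g. a 1.5 T scanner
f_offset = 150.0   # Hz, a small off-resonance frequency we actually want to see

def lab_signal(t: float) -> complex:
    """Signal as measured in the stationary frame: a fast Larmor oscillation
    with a slow frequency offset riding on top."""
    return cmath.exp(2j * cmath.pi * (f_larmor + f_offset) * t)

def rotating_frame(t: float) -> complex:
    """Demodulation: multiply by exp(-i*2*pi*f_larmor*t) to remove the fast rotation."""
    return lab_signal(t) * cmath.exp(-2j * cmath.pi * f_larmor * t)

# In the rotating frame only the slow 150 Hz offset remains visible:
s = rotating_frame(1e-3)
print(cmath.phase(s) / (2 * cmath.pi * 1e-3))  # approximately 150 Hz
```

This mirrors what the receiver electronics do before the signals are stored in the computer.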
5 Relaxation
Interactions occurring at near-collisions between nuclei cause the magnetization to constantly
approach its equilibrium size. This is called relaxation. The speed at which relaxation occurs
depends on the protons' interactions with their neighbors, which in turn depends on the firmness
of the substance (the consistency). It is differences in consistency and the presence of large
molecules limiting the water's free movement that cause most of the contrast we see in MR
images.
The relaxation occurs on two different time scales: The magnetization perpendicular to the
magnetic field (the transversal magnetization) often decreases relatively rapidly, while it can take
considerably longer to recover the magnetization along the field (the longitudinal magnetization).
• The transversal magnetization (Mxy) decreases exponentially on a timescale T2 (e.g., around
100 ms for brain tissue; several seconds for pure water).
• The longitudinal magnetization (Mz) approaches its equilibrium value M0 on a timescale T1 (e.g.,
approximately 1 s for brain tissue; several seconds for pure water).
The relaxation times depend on the mobility of the molecules and the strength of the magnetic
field, as discussed in section 5.2.
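The two relaxation processes can be written as simple exponentials. A minimal sketch, assuming the transversal magnetization starts at M0 and the longitudinal magnetization recovers from zero (as after a 90-degree rotation):

```python
import math

def transversal(t_ms: float, m0: float = 1.0, t2_ms: float = 100.0) -> float:
    """Mxy(t) = M0 * exp(-t/T2): exponential decay of the transversal magnetization."""
    return m0 * math.exp(-t_ms / t2_ms)

def longitudinal(t_ms: float, m0: float = 1.0, t1_ms: float = 1000.0) -> float:
    """Mz(t) = M0 * (1 - exp(-t/T1)): recovery toward equilibrium, starting from Mz = 0."""
    return m0 * (1.0 - math.exp(-t_ms / t1_ms))

# After one T2 the transversal signal has fallen to about 37 % of M0;
# after one T1 the longitudinal magnetization has recovered to about 63 %:
print(transversal(100.0), longitudinal(1000.0))
```

The default time constants correspond to the brain-tissue examples given above.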
5.1 Weightings
The contrast in an MR image is controlled by the choice of measuring method (sequence and
sequence parameters, which will be discussed later). For example, we call an image T2-weighted
if the acquisition parameters are chosen so that the image contrast mainly reflects T2-variations. One
must understand, however, that even in a heavily T2-weighted image, the contrast will often reflect
more than just T2-variation. To provide an example, variation in water content always results in
some contrast.
The echo time, TE, is the period from when we rotate the magnetization into the transversal plane until
we decide to measure the radio waves (a more precise definition will follow later). Meanwhile,
a loss of magnetization and signal occurs due to T2-relaxation. The echo time is thus the
period within the measurement that gives T2-weighting in the images. A long TE compared to
T2 will thus result in considerable T2-contrast, but only little signal. The greatest sensitivity to
T2-variation is achieved when TE ≈ T2.
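The TE ≈ T2 rule of thumb can be verified numerically: for a signal S = exp(-TE/T2), the sensitivity dS/dT2 = (TE/T2^2)·exp(-TE/T2) peaks exactly when TE = T2. A sketch:

```python
import math

def t2_sensitivity(te_ms: float, t2_ms: float) -> float:
    """Derivative of exp(-TE/T2) with respect to T2: how strongly the
    measured signal reacts to a small change in T2."""
    return (te_ms / t2_ms**2) * math.exp(-te_ms / t2_ms)

t2 = 100.0  # ms, e.g. the brain-tissue value above
best_te = max(range(1, 501), key=lambda te: t2_sensitivity(te, t2))
print(best_te)  # the sensitivity peaks at TE = T2 = 100 ms
```

Shorter echo times give more signal but less T2-contrast; longer ones give contrast but little signal, with the best compromise at TE equal to T2.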
Often, we will repeat similar measurements several times, e.g. once per line in an image.
The repetition-time, TR, is the time between these repetitions. Every time we make such a
measurement, we (partially) use the longitudinal magnetization present (the magnetization is
rotated into the transversal plane which results in emission of radio waves while the transversal
component gradually disappears). If we use the magnetization often (short TR), every repeat will
therefore only produce a small signal. If we, on the other hand, wait longer between repetitions
(long TR), the magnetization will nearly recover to equilibrium between repetitions. What is
meant by short and long TR? It is relative to T1 that is the time scale on which the longitudinal
magnetization is rebuilt.
If the magnetization is completely rebuilt between measurements for all types of tissue in
the scanner, meaning that TR is significantly longer than the maximum T1, the T1-contrast will
disappear. In this case, the transversal magnetization immediately following the rotation of the
nuclei reflects the equilibrium magnetization, and so does the emitted radio wave signal. The equilibrium
magnetization is governed by the hydrogen concentration, also known as the proton density (PD).
Thus, we may conclude that using a long TR results in limited T1-weighting but a strong signal.
If we apply a shorter TR, the signal is reduced for all types of tissue, but the signal becomes more
T1 -weighted, meaning that the images will be less intense, but with a relatively greater signal
variation between tissues with differing T1 .
Finally, we can minimize both T1 - and T2 -contrast, which results in a PD-weighted image. In
such an image, variation in the water content is the primary source of contrast, since the proton
density is the density of MR-visible hydrogen that is largely proportional to the water content.
Summarized, the following apply to weightings in a simple excitation-wait-measure-wait-repeat
sequence.
• T1 -weighted images are made by applying short TR and short TE, since T1 contrast is
thereby maximized and T2 contrast is minimized.
• T2 -weighted images are made by applying long TR and long TE, since T1 contrast is thereby
minimized and T2 contrast maximized.
• PD-weighted images are made by applying long TR and short TE, since both T1- and
T2-contrast are thereby minimized. The signal is thereby maximized, but the contrast is
limited, since only small variations exist in the water content of the
tissue.
The “lacking” combination of long TE and short TR results in a weak signal and mixed T1- and
T2-contrast. The images of bottles with jello in figure 8 illustrate how the contrast and signal in an
image vary with TE, TR and consistency.
(a) Four images, all obtained
with a common TR=5 seconds
and TE=90, 50, 20, 15 ms (shown
in reading order).
(b) Six images obtained with a common TE=15 ms
and TR=500, 1000, 2000, 3000, 4000, 5000 ms
(shown in reading order).
Figure 8: Phantom data illustrating signal intensity and contrast for bottles filled with jello of
varying consistency. Where is T1 long/short? How long, how short? The same for T2? Which bottles might
be pure water? Which jello is most firm? Which images are the most T1-, T2- and PD-weighted?
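The weighting rules above can be explored in a toy model. This sketch assumes the common simplified spin-echo signal expression S = PD·(1 − exp(−TR/T1))·exp(−TE/T2); the tissue parameters are illustrative values, not measurements:

```python
import math

def signal(pd: float, t1: float, t2: float, tr: float, te: float) -> float:
    """Simplified spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative tissue parameters (all times in ms):
short_t1 = dict(pd=0.8, t1=600.0, t2=80.0)
long_t1 = dict(pd=0.9, t1=1000.0, t2=100.0)

for name, tr, te in [("T1-weighted (short TR, short TE)", 500, 15),
                     ("T2-weighted (long TR, long TE)", 5000, 90),
                     ("PD-weighted (long TR, short TE)", 5000, 15)]:
    s1 = signal(tr=tr, te=te, **short_t1)
    s2 = signal(tr=tr, te=te, **long_t1)
    print(f"{name}: {s1:.3f} vs {s2:.3f}")
```

With short TR the tissue with the shorter T1 comes out brighter (T1-contrast), while with long TR and long TE the tissue with the longer T2 dominates (T2-contrast), matching the bullet points above.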
5.2 Causes of relaxation
The difference between T1 and T2 is due to a difference in the causes of relaxation. For the protons
in firm matter, the spins will rapidly be dephased after excitation, meaning that they will point in
all directions perpendicular to the field (have all kinds of phases). This is caused by the small
local contributions to the magnetic field that the individual spins are making, and that make the
neighboring nuclei precess at altered frequencies. In firm matter, these interactions are constant in
time, while they vary in fluids, since nuclei are constantly experiencing new neighbors. Thus, the
spins can remain in phase for relatively long (seconds) in fluids, while they lose their common
orientation in a matter of milliseconds or less in firm matter. We can therefore conclude that T2 is
short in firm matter.
The described process affects the individual spins' Larmor frequencies, but does not give rise to
a change of the longitudinal magnetization, since the interaction of just two nuclei cannot alter
the combined energy, which is proportional to the longitudinal magnetization. Therefore this
type of nuclear interaction does not contribute T1 -relaxation – that requires more drastic nuclear
interactions that involve an exchange of energy with the surroundings. All processes that result in
T1 -relaxation also result in T2 -relaxation, ensuring that T1 is never shorter than T2 .
Generally T2 becomes shorter with increasing firmness of the matter, but this does not apply to
T1 , which is long in very firm matter and very fluid matter (e.g., several seconds), but is short for
semi-firm matter.
The shortest T1 is achieved precisely when random interactions with the neighbors fulfill the
resonance condition, meaning, for example, that at 1 tesla, T1 is shortest for matter where a proton
meets about 42 million other nuclei in 1 second (the Larmor frequency is 42 MHz). This is
understandable – even random pushes to a swing can result in considerable oscillations if the
frequency is approximately correct. If we instead push way too frequently or rarely, very little
will be accomplished regardless of whether the pushes are random or not.
5.3 Inhomogeneity as a source of signal loss, T2∗
Interactions between nuclei in the ever-changing environment of molecules are the cause of radio
signal loss on a timescale T2 and of longitudinal magnetization recovery on a longer timescale T1.
However, a loss of transversal magnetization is also observed because of inhomogeneity in the
field, meaning variation in B0 . As expressed through the Larmor-equation, the nuclei precess at a
frequency depending on the magnetic field. If this varies over the investigated area, the nuclei will
after some time, point in all directions transversally, even if they had some degree of alignment
after excitation. This process is called dephasing. Since the measured signal is proportional to
the transversal net magnetization, inhomogeneity gives rise to a loss of signal. The larger the
field inhomogeneity, the faster the dephasing. How quickly this happens is denoted by the time
constant T2∗ (pronounced “T2-star”). The degree of inhomogeneity depends partly on the scanner's
ability to deliver a uniform field (called a good shim). T1 and T2 are tissue-dependent parameters
for which normal values are published. This does not apply to T2∗ values, since they depend, for
example, on the size of the voxel, as the inhomogeneity increases with voxel size.
Loss of signal due to interactions of nuclei is irreversible, but through a sly trick, signal lost
due to inhomogeneity can be recovered. The inverse process of dephasing is called refocusing,
and it involves gradual re-alignment of nuclei in the transversal plane. Refocusing is triggered by
repeated use of radio waves as described below. The recovered signal is known as an “echo”.
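Dephasing can be illustrated by summing phasors for many spins precessing at slightly different frequencies; the frequency spread below is a made-up example of inhomogeneity:

```python
import cmath
import random

random.seed(0)
N = 2000
# Hypothetical frequency offsets (Hz) caused by field inhomogeneity:
offsets = [random.gauss(0.0, 20.0) for _ in range(N)]

def net_transversal(t_s: float) -> float:
    """Magnitude of the average phasor, |mean of exp(i*2*pi*df*t)| over all spins."""
    return abs(sum(cmath.exp(2j * cmath.pi * df * t_s) for df in offsets)) / N

print(net_transversal(0.0))   # 1.0: right after excitation all spins are in phase
print(net_transversal(0.05))  # close to 0: the spins now point in all transversal directions
```

The wider the frequency spread (the larger the inhomogeneity), the faster the average phasor, and thus the measured signal, collapses.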
6 Sequences
A “sequence” is an MR measurement method and there exists an extraordinary number of these.
A measurement is characterized by the sequence of events that play out during the measurements.
In a classic “spin-echo-sequence” as shown in figure 9, the magnetization is first rotated 90
degrees away from equilibrium through the use of radio waves. A short waiting period follows
with dephasing caused by inhomogeneity. Subsequently the magnetization is turned an additional
180 degrees with radio waves. After a little more waiting time and refocusing, the signal is
measured. This course is usually repeated. In the sequence description above, a couple of
sequence parameters were involved, e.g. the waiting time from excitation to measurement. A
measurement method can thus be described by a sequence name and the associated sequence
parameters. Together they control the measurement’s sensitivity to different tissue parameters,
and the choice is therefore made on the background of the actual clinical situation. Sequences
include elements such as
excitation Turning of the magnetization away from equilibrium.
dephasing Field inhomogeneity causes the nuclei to precess at different speeds, so that the
alignment of the nuclei – and thus the signal – is lost.
refocusing pulse After excitation and dephasing, part of the signal that has been lost due to
inhomogeneity can be recovered. This is achieved by sending a 180-degree radio wave
pulse (in this context known as a refocusing pulse) that turns the magnetization 180 degrees
around the radio wave field. The effect is that those spins which precess most rapidly
during dephasing are put the furthest back in evolution and vice versa (mirroring around
the radio wave field causes the sign of the phase angle to be reversed). In the following
“refocusing period” where the nuclei are still experiencing the same inhomogeneity, they
will therefore gradually align again (come into phase). The lost signal is recovered, so that
an “echo” is measured.
readout Measurement of MR signal from the body.
waiting times Periods wherein the relaxation contributes to the desired weighting.
Further sequence elements (building blocks) that can be mentioned include inversion, bipolar
gradients, spoiling and imaging gradients. Some of these are described later in the main text or in
the glossary.
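The refocusing element can be sketched in the same phasor picture: the 180-degree pulse reverses the sign of every accumulated phase angle, and the same static inhomogeneity then realigns the spins (the offsets are made-up values):

```python
import cmath
import random

random.seed(1)
# Static, made-up frequency offsets (Hz) from field inhomogeneity:
offsets = [random.gauss(0.0, 30.0) for _ in range(1000)]

def net_signal(phases) -> float:
    """Magnitude of the average phasor for the given spin phases."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

te = 0.04  # echo time in seconds
# Dephasing during the first TE/2:
phases = [2 * cmath.pi * df * (te / 2) for df in offsets]
before = net_signal(phases)
# 180-degree refocusing pulse: the sign of every phase angle is reversed:
phases = [-p for p in phases]
# The same static inhomogeneity acts for another TE/2, realigning the spins:
phases = [p + 2 * cmath.pi * df * (te / 2) for p, df in zip(phases, offsets)]
echo = net_signal(phases)
print(before, echo)  # the echo recovers the signal lost to static inhomogeneity
```

Note that only the loss due to static inhomogeneity is undone; the irreversible T2-loss from nuclear interactions is not modeled here.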
7 Signal-to-noise and contrast-to-noise ratios
In section 5.1, it was hinted that PD-weighted images are rarely the best at reflecting pathology.
This is in spite of the fact that the signal-to-noise ratio is often good in such images, because
imaging with fully relaxed magnetization and short echo time increases the signal strength. This
exemplifies that the contrast-to-noise ratio is always more important for diagnostics than the
signal-to-noise ratio: a good image is characterized by the signal difference between
conditions or tissues that we wish to distinguish (the contrast) being large compared to the relevant
[Sequence diagram: 90◦y pulse, dephasing, 180◦x pulse, refocusing, readout]

Figure 9: Sequence diagrams illustrate, with an implicit time axis from left to right, the events during a
measurement. The first blocks in this spin-echo sequence illustrate radio wave pulses, meaning periods
wherein the radio wave transmitter is turned on. The diagram is read as follows: Initially, radio waves are sent
for the duration needed to turn the magnetization by 90 degrees around the y-axis in the rotating frame of
reference. After a short waiting period wherein the spins dephase, radio waves are again sent for a
period appropriate for turning the magnetization 180 degrees around the x-axis. This gives rise to gradual
refocusing. The signal is read out (measured) when this is complete, which happens after a brief waiting
period equal to the duration of the dephasing. Sequence diagrams most often contain several events. An
example with gradients is found in figure 17.
noise (that is, the signal variations that make the distinction difficult, whether these are random
or not). If the contrast-to-noise ratio is small, on the other hand, the relevant signal differences
drown in noise.
An image can be quite noisy, meaning that it has a low signal-to-noise ratio, but still be good
at reflecting a given pathology. One example is diffusion-weighted images in connection with
ischemia, since diffusion weighting lowers the signal for all types of tissue, but improves contrast
between normal and ischemic tissue. Something similar is often true for images acquired with
long echo-time.
In rough terms, the signal-to-noise ratio is an expression of how nice-looking an image is,
while the contrast-to-noise ratio is a measure of its usefulness, insofar as the contrast is defined
meaningfully from the parameters that can help answer the clinically relevant questions. It
is thus not trivial to define the contrast for a given clinical situation (a metric), but it is in principle
simple to optimize it. In practice, the optimization can easily be the most difficult part, however,
as it may require many measurements to be performed with different sets of parameters.
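The distinction can be made concrete with two small ratios; the image values below are hypothetical numbers chosen only to illustrate the point:

```python
def snr(mean_signal: float, noise_std: float) -> float:
    """Signal-to-noise ratio: how bright the image is relative to the noise."""
    return mean_signal / noise_std

def cnr(signal_a: float, signal_b: float, noise_std: float) -> float:
    """Contrast-to-noise ratio: the difference between two tissues relative to the noise."""
    return abs(signal_a - signal_b) / noise_std

# A bright image where the two tissues are nearly identical:
print(snr(100.0, 5.0), cnr(100.0, 98.0, 5.0))  # high SNR, poor CNR
# A darker image where the tissues differ clearly, diagnostically more useful:
print(snr(20.0, 5.0), cnr(20.0, 10.0, 5.0))    # low SNR, but better CNR
```

The second image looks worse but distinguishes the two tissues far better, which is exactly the diffusion-weighting situation described above.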
8 Quantum Mechanics and the MR phenomenon
This section is non-essential for understanding the rest. Here, some common misconceptions
found in many introductory textbooks (even good ones) are pointed out. These misconceptions
have been recognized by a minority for a long time, but they sadly still appear in many
new books, as authors often quote each other without a deep understanding of MR phenomena.
The problem derives mainly from the fact that few authors of MR textbooks have a good
background in quantum mechanics, which is what the misconceptions center around. Such a
background is needed, if authors present explanations based on quantum mechanics, but luckily
such explanations are not needed.
The fundamental problem is that quantum mechanics is often presented as necessary in order
to understand MR, where a classical description is fully adequate. If the two are combined
incorrectly, one easily ends up with something that is contrary to both common sense and
experimental evidence.
First of all, a little background: In physics, there exists a distinction between classical phenomena and quantum phenomena. The first are those that can be described by classical physics,
such as Newton's laws. The vast majority of the experiences we gain from everyday life belong
in the classical category. In certain situations, however, the classical description is inadequate,
so that predictions derived from it turn out to be inconsistent with observations. In such cases,
one needs quantum mechanics, which is believed to be able to describe all currently observed
physical phenomena, but which is difficult to understand, since we make very few non-classical
observations in our everyday life. One example of a quantum phenomenon is superconductivity
(resistance-free electrical conductance), which we use when creating the powerful magnetic fields in
the scanner. Superconductivity would not be possible if classical physics were accurate on an
atomic level.
Quantum mechanics, on the other hand, describes superconductivity, meaning that if one takes
the quantum mechanical equivalents of Newton's fundamental laws as a basis, it can be shown
that a broad range of surprising phenomena are predictable, including superconductivity. All
phenomena that can be described using classical mechanics can also be described with quantum
mechanics, although the complexity grows tremendously, since explanations require familiarity
with the quantum mechanical terminology rather than simply experience from everyday life.
Magnetic resonance is not a quantum phenomenon, although it is often presented this way. It
can be described with quantum mechanics, but the approach necessitates considerable background
knowledge. MR can alternatively be described classically, since it can be shown that the classical
description of magnetic dipoles' behavior in a magnetic field is a direct consequence of quantum
mechanics.
The existence of the proton spin and magnetic moment is, on the contrary, a quantum mechanical phenomenon, and the classical figure 3 should therefore not be taken too literally (it has its
limitations, although these are not evident in MR experiments). If one accepts the existence of
the magnetic needle quality of nuclei, the rest follows from classical physics.
In a number of introductions to MR, the classical approach is not chosen. At the same time,
the authors typically do not possess the necessary background to use the heavy (and in
this case superfluous) quantum mechanical formalism in a meaningful way. Instead, we are often
presented with a semi-classical description with elements of classical physics and elements of
quantum physics. Such a description can be precise and useful, but easily comes to include elements
of nonsense, which is the case for much literature on the subject. Examples are presented below.
Although quantum mechanics is not needed to understand the magnetic resonance phenomenon
or pretty much anything else going on in scanners, physicists and chemists can still benefit from
learning about it. Quantum mechanics is, for example, necessary in order to determine the sizes of
T1 and T2 correctly from theory alone (without actual measurements), while the
existence of T1 and T2 is purely classical (they can be calculated classically, but the resulting values will be
wrong). In the same vein, quantum mechanics is practical in connection with certain types of
spectroscopy.
There are consequently good reasons to use quantum mechanics in descriptions aimed at
chemists and physicists with the necessary quantum background, while an incorrect pseudoquantum introduction is never helpful. Finally, quantum mechanics is necessary to describe
measurements carried out on single protons. Such measurements are never performed, however,
as the signal always comes from a large collection of protons, a so-called statistical ensemble. For
such an ensemble of non-interacting protons, it is relatively easy to show, quantum mechanically,
that the classical description is exactly right.
Quantum mechanics is most often employed in the following contexts in introductions to
magnetic resonance:
• Quantum mechanics is very convenient for “explaining” the resonance phenomenon. This
is often done by drawing two lines, calling them energy levels, and saying that quantum
mechanics tells us that the photon energy must match the energy difference between the
energy levels. A problem with this approach is that it is non-intuitive and raises more
questions than it answers, unless the reader is already schooled in quantum mechanics (why
are almost half of the nuclei pointing opposite the field? Why doesn’t continuous use of
radio waves level out the population differences? What happens if the photon energy isn’t
quite precise?). Alternatively, it can be intuitively argued (as attempted above) that the radio
waves must push the magnetization in synchrony with the precession in order to rotate the
magnetization away from equilibrium.
• Quantum mechanics can conveniently be used to calculate the size of the equilibrium
magnetization. In equilibrium, the coherence terms average to zero and the partition
function consequently is reduced to a sum of just two terms. Classically, more states
contribute, and the calculation is therefore more complicated. The results of the two
calculations are equal, however.
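The two-term partition function mentioned above can be evaluated in a few lines. The following sketch computes the equilibrium polarization of spin-1/2 protons as tanh(ħγB₀/2kBT); the field strength (3 T) and temperature (310 K) are assumed illustrative values, not taken from the text.

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant [J s]
kB = 1.380649e-23      # Boltzmann constant [J/K]
gamma = 2.675e8        # proton gyromagnetic ratio [rad/s/T]

B0 = 3.0               # assumed field strength [T]
T = 310.0              # body temperature [K]

# Spin-1/2 partition function has just two terms, Z = e^x + e^-x, with
# x = (energy splitting)/(2 kB T); the net polarization is then tanh(x)
x = hbar * gamma * B0 / (2 * kB * T)
polarization = math.tanh(x)
print(f"equilibrium polarization ≈ {polarization:.1e}")  # on the order of 1e-5
```

The result, roughly ten parts per million, is why MR signals are weak: only a tiny surplus of spins contributes to the equilibrium magnetization.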
8.1 Corrections
After this tirade, the time has come to comment on some of the myths and incorrect figures often
found in the literature. It is more important to recognize and ignore the erroneous figures than it is to
understand why they are wrong. The details can be found in the article “Is Quantum Mechanics
necessary for understanding Magnetic Resonance?”, Concepts in Magnetic Resonance Part A,
32A (5), 2008.
MR is a quantum mechanical phenomenon No, the resonance phenomenon is classical. See
above. The existence of nuclear spin and aspects of certain experiments are, however,
quantum mechanical.
Protons can only be in two states: Near-parallel or anti-parallel with the field.
This situation, as illustrated in figure 10, does not occur. There are two states that
are quantum mechanically characterized by having a well-defined energy, but, as expected,
the spins can point in all directions both classically and quantum mechanically.
The radio waves align the spins on the two cones The illustration is the result of the erroneous
belief that the protons can only be in one of two states, combined with the
fact that we can rotate the magnetization. The result is pure hocus-pocus: Why are the
individual spins not rotated the same way by the radio waves? How do the protons “know”
where they are each supposed to be pointing? According to this view, it appears possible to
change the magnetization with radio waves. Why don’t we see this happening in practice?
The questions are numerous, simply because the drawing is wrong. It is relatively simple
to show (classically as well as quantum mechanically) that the locally homogeneous fields
that the magnet and RF coils generate can never change the relative orientation of spins locally.
Only inhomogeneous fields, such as those with which the nuclei affect each other, can do that.
They are the source of relaxation.
Figure 10: Figures like these are often seen, but they are misleading. They reflect the factually wrong
perception that spins can only be in spin-up or spin-down states.
Figure 11: The spins are not aligned on two cones with a net magnetization in the transversal plane, as
this figure claims. Homogeneous radio wave fields cannot change the relative orientation of spins.
9 Imaging
So far, it has been discussed how the body can be brought to emit radio waves, but it has not been
mentioned how radio waves from one position in the body can be distinguished from radio waves
emitted at another position, which is rather necessary for imaging. In this section, techniques for
spatial localization and imaging are described. Initially, it is explained why these techniques are
fundamentally different from more easily understood techniques known from everyday life.
9.1 Background
The most obvious methods for MR imaging would seem to be projection or the use
of antennas that can detect where in the body the radio waves are emitted. X-ray imaging and ordinary
microscopy are examples of such “optical” techniques, and it would appear natural to extend
this type of imaging to MR. Optical techniques are, however, “wavelength-limited”, meaning
that they cannot acquire images more detailed than approximately one wavelength.
In other words: For fundamental reasons, one cannot localize the source of radio waves more
precisely than about one wavelength when using lenses or other direction-sensitive antennas. The
radio waves used in MR scanning typically have wavelengths of several meters, so with optical techniques we
could hardly determine whether the patient is inside or outside the scanner (this argument is really only
valid in the far field, which is the basis for parallel imaging. More on this later).
Optical techniques as we know them from binoculars, eyesight, CT, X-ray, ultrasound and
microscopes, are thus practically useless for MR-imaging, and a fundamentally different principle
is necessary. This principle was introduced by Paul Lauterbur in 1973, and it resulted in the
Nobel Prize in Medicine in 2003. Basically, Lauterbur made the protons give their own locations
away by making the frequency of the emitted radio waves reflect the position. Lauterbur shared
the prize with Sir Peter Mansfield, who also contributed greatly to the development of techniques
used in magnetic resonance imaging (MRI).
9.2 Principles
A requirement for MR imaging is that the scanner is equipped with extra electromagnets called
“gradient coils” that cause linear field variations. Direction and strength can be altered as desired.
Spatial localization occurs according to several different principles, of which the simplest is slice
selection. Other forms of encoding involve the so-called k-space, which will be introduced in a later
section.
9.3 Slice selection
By using gradient coils, the magnetic field strength can be controlled so that it, for example,
increases from left to right ear, while the direction is the same everywhere (along the body).
This is called a field gradient from left to right. By making the field inhomogeneous in this
way, the resonance frequency varies in the direction of the field gradient. If we then push the
protons with radio waves at a certain frequency, the resonance condition will be fulfilled in a
plane perpendicular to the gradient as shown in figure 12. The spins in the plane have thus been
rotated significantly, while spins in other positions simply vibrate slightly. Thus we have achieved
slice selective excitation of the protons and a sagittal slice has been chosen.
Figure 12: Spins are influenced selectively in a sagittal slice, if a gradient from left to right is applied while
radio waves are transmitted.
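The slice-selection argument above translates into a simple relation: the resonance frequency varies linearly with position along the gradient, so only spins within the bandwidth of the RF pulse are excited. A sketch, with an assumed gradient strength (10 mT/m) and RF bandwidth (1 kHz) chosen only for illustration:

```python
gamma_bar = 42.58e6    # proton gyromagnetic ratio / 2π [Hz/T]
G = 10e-3              # assumed slice-selection gradient [T/m]
rf_bandwidth = 1000.0  # assumed RF pulse bandwidth [Hz]

# Resonance frequency varies linearly with position: f(z) = gamma_bar * (B0 + G*z).
# Only spins whose frequency falls inside the RF bandwidth are rotated significantly,
# so the excited slice has thickness:
slice_thickness = rf_bandwidth / (gamma_bar * G)   # [m]
print(f"slice thickness = {slice_thickness * 1e3:.2f} mm")  # ≈ 2.35 mm
```

A stronger gradient or a narrower RF bandwidth thus gives a thinner slice.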
9.4 Spatial localization within a slice
After the protons in a slice are excited, they will all emit radio waves. In order to create images
of the slice, we must introduce a way to distinguish the signals from different positions within
the slice. The fundamental principle can appear somewhat foreign, but it will be explained in detail at a
later stage. Briefly put, different patterns in the magnetization throughout the slice are created
with the help of gradients. The strength of the returned radio signals tells us how much the
object in the scanner “looks like” the applied pattern. By combining patterns according to their
measured similarity to the object, the well-known MR images are created.
What is meant by “patterns” is first illustrated in one dimension, meaning that we consider
spins placed on a line (e.g. between the ears) and watch their location and direction immediately
after excitation.
As illustrated, immediately after excitation the spins all point in the same direction perpendicular to the magnetic field, which points out of the paper. They will thereafter precess around the
magnetic field, that is, they will rotate in the plane of the paper at a frequency that is dependent
on the magnetic field. If the field is made to increase from left to right by briefly applying a field
gradient, the spins will each turn through an angle that depends linearly on the nucleus’s position:
This so-called “phase roll” is an example of the above mentioned spin patterns, that can be
created in the patient by use of gradients. The word “phase” expresses the direction that the spins
point in. It is seen that the neighboring spins point in nearly the same direction, but throughout
the object, the magnetization has rotated several turns. The longer time a gradient is turned on,
and the greater the field variation that arises, the more “phase roll” is accumulated (more rotations
per unit length).
Through the use of gradients, we have thus made the spins point in all directions in a controlled
fashion, and in doing so we have lost the signal. This is seen by comparing the two situations
above, since the measured magnetization is the sum of all the contributions from the individual
spins. When the spins are in phase (that is, pointing in the same direction), they jointly create a
considerable magnetization that gives rise to emitted radio waves. When the spins point
in all directions, as after a gradient has been applied, their sum is quite small. As a result,
comparably weak radio waves are emitted.
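The cancellation of the summed spin contributions can be sketched numerically. Here the spins are represented as complex phasors on a line; the 6-turn phase roll is an arbitrary illustrative choice.

```python
import numpy as np

N = 256
x = np.linspace(0, 1, N, endpoint=False)   # positions along a line (arbitrary units)

# Immediately after excitation: all spins in phase -> full signal
spins_before = np.ones(N, dtype=complex)

# After a gradient pulse: a linear phase roll, here 6 full turns across the object
spins_after = np.exp(1j * 2 * np.pi * 6 * x)

# The measured magnetization is the (normalized) sum of all spin contributions
signal_before = abs(spins_before.sum()) / N   # = 1.0
signal_after = abs(spins_after.sum()) / N     # ~ 0: the phasors cancel
print(signal_before, signal_after)
```

The phasors spread evenly around the unit circle after the gradient, so their vector sum vanishes, exactly the signal loss described above.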
The gain from using the gradient can thus appear quite small: We have simply lost the signal.
That does not, however, have to be the case. Look, for example, at the situation illustrated below
where there are not (as above) protons uniformly distributed from left to right, but where there is
a regular variation in water content instead.
Before gradient:
After gradient:
In the figure we see how the spins point before and after applying the same gradient as above.
The only difference from the two earlier figures is the “holes” that indicate a lack of protons
(a locally water-free environment – bone, for example).
We now consider the total magnetization and thus the signal in this new situation. As long as
the spins all point in the same direction, i.e. before the gradient is applied, we receive less signal
than before, as fewer spins are contributing to the magnetization. The signal before the gradient
is applied is thus a measure of the total water content.
After the gradient is applied, on the other hand, we receive more signal than in the comparable
situation above (homogeneous water distribution). This may appear odd, since there are fewer
protons that contribute to the magnetization. The remaining protons all largely point in the same
direction, however, with the result that their combined magnetization is relatively large.
That the object with structure (varying water content) emits radio waves even after the gradient
has been applied is caused by the match between the phase roll and the structure in the object,
in the sense that the “wavelength” of the phase roll is the same as the distance
between the water puddles in the example.
If this is not the case, the different contributions to the magnetization will tend
to cancel each other. The signal is thus a measure of the “similarity” between the phase roll pattern
and the structure in the object: After a phase roll has been introduced in the object by applying a
gradient, the signal we receive is a measure of whether there is structure in the object matching
the phase roll.
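This similarity argument can be sketched with a toy object: a line with six evenly spaced water “puddles” separated by proton-free holes (all numbers are illustrative assumptions, not from the text).

```python
import numpy as np

N = 256
x = np.linspace(0, 1, N, endpoint=False)

# Toy object: 6 evenly spaced water "puddles" separated by proton-free holes
density = (np.cos(2 * np.pi * 6 * x) > 0).astype(float)

def signal(turns):
    """Net signal after a phase roll with the given number of turns across the object."""
    return abs((density * np.exp(1j * 2 * np.pi * turns * x)).sum()) / N

print(signal(0))    # no phase roll: measures total water content (about 0.5)
print(signal(6))    # phase roll matches the puddle spacing -> strong signal
print(signal(11))   # no match -> the contributions cancel
```

The 6-turn phase roll has the same “wavelength” as the puddle spacing, so the surviving spins add up; an 11-turn roll does not match any structure in the object and the signal vanishes.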
Thus we now have a basis for understanding MR-imaging: Different phase roll patterns in
the body are drawn one after the other. The resulting radio wave signals are recorded for each
of the patterns. The size of the signal for a particular phase roll pattern tells us whether there is
similarity to the structure of the body.
In a few cases there is an obvious similarity between structure and phase roll pattern; most often,
the similarity is very limited. Generally, the images are obtained by combining patterns weighted
by the corresponding measured signals. This principle is actually known from the game Mastermind,
which was popular long ago (figure 13).
Figure 13: The game Mastermind. One player places a row of colored pieces hidden behind a screen at
one end of the board. The opposing player must then guess the combination of pieces. After each guess
(pieces placed at the opposite end), the first player states to what degree the guess was correct, that is,
how many pieces were placed correctly (though not which ones). After a few rounds, the guessing player will
be able to deduce the hidden combination from the responses to earlier guesses. The same principle is at
work in MR imaging: The inner parts of the patient are hidden, but using gradients we may “draw”
spin patterns in the patient. In return, we receive radio waves whose strength reveals the similarity between
the patient's structure and the drawn pattern.
9.5 Extension to more dimensions – k-space
So far, we have considered a situation with hydrogen nuclei placed like pearls on a string. We will
now extend to two dimensions and consider the situation in a slice of tissue with uniform water
content after a gradient has been applied:
The figure shows a phase roll in the gradient direction (up and to the right). The “thick arrow”
in the middle of the drawing shows the net magnetization, that is, the sum of all the small
magnetization vectors (a vector is an arrow). It is seen that the sum is relatively small, since
approximately equally many arrows point in each direction.
As in the one-dimensional case, there may still be a considerable net magnetization even after
a gradient has been applied. This happens if there, as shown in the following figure, is structure
in the object that matches the phase roll (the dots mark places where there are no protons).
For the sake of clarity, the spin patterns in the following will be shown in two different ways.
Rather than showing arrows pointing in different directions, phase rolls will be shown as an intensity variation, as in the figure in the middle, where the color reflects the direction of the arrows²:
For each such pattern, a vector k is assigned, whose direction indicates the direction of change
and whose length indicates the density of stripes as shown to the right.
By applying gradients, we “draw” patterns in the patient, one by one, in a predetermined order
specific to the chosen sequence. All directions of stripes and all densities of stripes are drawn, up
to a certain limit. As mentioned, the returned radio wave signal depends on the similarity between
object and pattern, and it is registered as a function of k, as shown in figure 14. Example patterns
are shown, but every single point in k-space has a corresponding pattern.
9.6 Similarity and image reconstruction
Above, it has been described how the similarity between object and phase roll is measured, and it
has been said that the image is calculated by adding patterns together that are weighted by the
measured MR-signal. This section elaborates on what this actually means.
² The figure is not ideal: the situation on the left corresponds to a stripe pattern with only 2 stripes rather than the 6
shown in the middle. The direction, however, is correct in all three illustrations.
Figure 14: The structure of k-space. The signal is measured as a function of the k-vector (kx , ky ), which
changes when field gradients are applied. The lighter region in the middle of k-space tells us that the object
in the scanner (a head) has greater similarity to slowly varying patterns, or expressed differently, the most
powerful radio waves are emitted from the body when low-density phase roll patterns are created.
The concept of “similarity” is illustrated here with an example.
To the left is shown part of a city map (Manhattan). To the right is shown the unique stripe
pattern that, in an MR sense, looks most like the map. The similarity may seem small, but it
becomes evident when one notices that the stripes run in the same direction as the north-south
streets, that they have about the same spacing, and that the pattern is shifted so that it is
bright where the map is bright.
We find almost equally high similarity with a corresponding east-west oriented pattern. If the two
patterns are put on top of each other (stacked like two transparencies), one would almost have a usable
road map. This is how image reconstruction is done, except that it usually takes a great number
of patterns to create a useful image. This is because the similarity between the object
and each individual pattern is typically extremely small, but not zero.
It may appear surprising that very complex pictures can be created simply by adding different
patterns. There is also an important difference from the transparency example, which would give
darker and darker pictures the more patterns were added: one
must also be able to subtract patterns in order to produce usable images, meaning that the sign of
the MR signal (the phase) is important. Figure 15 shows how patterns can be added to form brain
images.
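Measuring the similarity to every stripe pattern and then adding the patterns back, weighted by the signal, is exactly a Fourier transform pair. A sketch with a toy 64×64 object (a bright square, an assumed stand-in for a head) shows both the full reconstruction and the blurring that results when only the slowly varying central patterns are kept:

```python
import numpy as np

# Toy "object": a 64x64 proton-density map with a bright square in the middle
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

# The measured signal as a function of k is the object's similarity to each
# stripe pattern, i.e. its 2D Fourier transform (center of k-space in the middle)
kspace = np.fft.fftshift(np.fft.fft2(obj))

# Image reconstruction = weighted sum of the stripe patterns = inverse transform
recon_full = np.fft.ifft2(np.fft.ifftshift(kspace))

# Keeping only the central 16x16 of k-space (slowly varying patterns) preserves
# the intensities but loses the sharp edges
mask = np.zeros_like(kspace)
mask[24:40, 24:40] = 1.0
recon_low = np.fft.ifft2(np.fft.ifftshift(kspace * mask))

print(np.allclose(abs(recon_full), obj))   # True: full k-space recovers the object
```

This mirrors figure 15: the central region of k-space carries the contrast, the outer regions the edge sharpness.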
9.7 Moving in k-space
The phase roll is controlled by applying gradients with different directions, strengths and durations.
As long as a gradient is active, and the field thus varies linearly with position, the phase roll
will change, but it will always have the shape of a stripe pattern as shown earlier. Meanwhile,
radio waves are continuously emitted from the body, telling us how well the pattern
matches the structure of the object.
Example: If a gradient going from ear to ear (the x-direction) is applied, the result is a phase
roll in this direction corresponding to the k-vector growing along the kx -axis. As long as the
gradient is “on”, the stripes in the pattern are getting closer as the phase rolls accumulate (the
wavelength, defined as the distance between stripes, becomes shorter). If another gradient in
the direction from scalp to foot is applied, a phase roll in this direction will accumulate and the
k-vector grows along the ky -axis.
Figure 15: Image reconstruction. The figure shows how simple patterns (line 1) can be summed into
complex images (line 2). The reconstructed images are, in this case, created from the patterns that have
the greatest similarity with the subject in the scanner, meaning the areas of k-space where the strongest
radio waves were measured (line 3). Ever more patterns are included in the image reconstruction (indicated
by the dark areas on the k-space images), and the reconstructions likewise become more and more
detailed (the number of included patterns is doubled in each step). The final reconstructed image (lower
right) has been created from the thousand patterns most similar to the excited
slice. The top line shows the most recently added pattern. It is seen that the slow signal variations
(intensities) are measured in the middle of k-space, while the sharpness of the edges is derived from the
measurements further out in k-space.
If both gradients are applied, a tilted phase roll is achieved. This is true regardless of whether
the gradients are applied one at a time or simultaneously: By applying them one at a time, the
k-vector moves along the kx -axis first and then along ky . By applying them simultaneously, it
moves directly to the same point and the final result is the same.
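That the endpoint in k-space depends only on the accumulated gradient area, not on the order of application, follows from k being the time integral of the gradient. A sketch with assumed gradient amplitudes and durations:

```python
import numpy as np

gamma = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio [rad/s/T]
dt = 1e-6                     # time step [s]

def k_endpoint(G):
    """k = gamma * (time integral of the gradient vector), here as a sum [rad/m]."""
    return gamma * G.sum(axis=0) * dt

Gx, Gy = 10e-3, 10e-3   # assumed gradient amplitudes [T/m]
n = 1000                # each gradient lobe lasts 1 ms

# One at a time: x-gradient for 1 ms, then y-gradient for 1 ms
one_at_a_time = np.concatenate([np.tile([Gx, 0.0], (n, 1)),
                                np.tile([0.0, Gy], (n, 1))])
# Simultaneously: both gradients on together for 1 ms
simultaneous = np.tile([Gx, Gy], (n, 1))

print(np.allclose(k_endpoint(one_at_a_time), k_endpoint(simultaneous)))  # True
```

The first path moves along kx and then along ky; the second moves diagonally, but both arrive at the same point and therefore correspond to the same tilted phase roll.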
One example of a path through k-space is shown for the spin-echo sequence in figure 16. The
gradient, which is itself described by a vector, determines at all times where the measurement is
headed in k-space; it is thus the velocity of the k-vector's movement through k-space.
That the emitted signal depends entirely on the position in k-space, and not on the path taken there, is
important for the success of the k-space description of MR imaging. Many sequences differ in
their path through k-space (the order in which patterns are created), but the image reconstruction is
fundamentally identical.
9.8 Image acquisition and echo-time
Earlier in the text, the echo time was described as the duration from excitation until the point
where the radio waves are measured. Since the transversal magnetization in this period is lost on a
timescale of T2 (or T2∗ , if a refocusing pulse is not used), the echo time TE thus determines the
corresponding relaxation-time weighting of the measurement.
After the introduction of imaging, we now need to reconsider the definition of echo time, since
several positions in k-space are typically being measured after each excitation. A typical approach
to imaging is, for example, to measure individual points along a line in k-space one by one after
each excitation.
The single points (corresponding to different stripe patterns) are consequently not measured
at the same time after excitation, and the echo-time definition has therefore become blurry. For
echo-planar imaging (EPI), this problem is extreme, since the entire image is measured after a
single excitation, meaning that some points in k-space are measured milliseconds after excitation,
while others are measured, for example, 100 ms later and thus with a completely different T2
weighting.
Is the possibility of characterizing the T2 -weighting by a single parameter therefore lost? No! It turns
out that the echo-time definition can be adjusted so that it can still be interpreted as above. A
surprising characteristic comes into play here: Even though parts of k-space are acquired shortly
after excitation and other parts a long time after, the reconstructed image looks (contrast-wise)
as if it had been acquired at one particular time after excitation, namely the time at which the
middle of k-space was acquired. It therefore makes good sense to define the echo time as the
duration from excitation until the time where the middle of k-space is measured (see an example
in figure 16).
9.9 Frequency and phase encoding
Introductions to MR imaging methodology often take their point of departure in “frequency encoding”, that
is, readout of the signal in the presence of a gradient so that the frequency of the radio waves reflects
position. It is intuitively easy to understand how a frequency analysis of the measured
signal reveals how the body water is distributed along the direction of the gradient, and how
frequency encoding thereby offers simple 1D imaging. Even though frequency encoding is
[Sequence diagram: RF line with 90° and 180° pulses separated by TE/2; gradient lines Gz, Gy (phase-encoding table PHy, repeated Ny times) and Gx; readout during the output period; repetition time TR; resulting trajectory in (kx , ky ).]
Figure 16: Here, the spin-echo sequence illustrates how gradients can be used to maneuver in k-space.
As in figure 9, the top line indicates the use of radio wave pulses, while the bottom three show gradients
in perpendicular directions. Initially, the magnetization is rotated 90 degrees into the transversal plane.
Midway between excitation and output, an RF refocusing pulse is applied so that the signal at time TE
after excitation is unaffected by field inhomogeneity, if such is present. A gradient along the z-direction is
applied simultaneously with the two radio wave pulses, so that these become slice-selective as shown in
figure 12. The gradient direction is inverted shortly after the first RF pulse in order to remove the phase
roll that is accumulated in the period immediately following excitation. There is consequently no phase
roll immediately prior to application of the imaging gradients in the x- and y-directions: The local net
magnetization points in the same direction everywhere, and we are at this point located in the middle of
k-space. The field is now made inhomogeneous in both the x- and y-directions (positive Gy , negative Gx )
so that a phase roll is accumulated and (kx , ky ) is likewise changed toward the upper left corner of k-space.
When the y-gradient is turned off and the x-gradient inverted, readout of a line in k-space is begun. The
process is repeated after a period TR with a new value of the y-gradient so that a new line of k-space is
read out. This is indicated in the sequence diagram with a gradient table (PHy ) and an outer bracket that
indicates repetition for each line in k-space.
easier to understand, a direct k-space approach was chosen above. The reason is partly that
frequency encoding leaves a need to explain “phase encoding”, a technique necessary
to obtain information about the water distribution in the direction perpendicular to the readout
and slice-selection gradients (figure 16). Phase encoding is most easily understood from phase
roll and similarity considerations (nuclei are not given a unique phase along the phase-encoding
direction, though attempts to explain the technique that way are often made – look at figure 14 for
confirmation). Another reason to choose the direct k-space approach is that the frequency and
phase encoding approaches are inadequate for understanding sequences such as spiral EPI, and
to some extent also normal “blipped” EPI (even the necessity of the pre-winding gradient for
normal frequency encoding appears non-obvious). In contrast, the k-space approach gives a far
more general understanding of MR imaging (see glossary). A related advantage is that frequency
and phase encoding, seen in k-space, appear as simple variations on a general theme. Even sequences
such as spiral EPI appear conceptually similar to other sequences.
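The frequency-encoding intuition itself is easy to demonstrate: during readout in the presence of a gradient, each position emits at its own frequency, and a frequency analysis of the summed signal recovers the 1D water profile. The profile, gradient strength and sampling are assumed illustrative values chosen so the DFT bins line up exactly.

```python
import numpy as np

N = 128
x = np.linspace(-0.1, 0.1, N, endpoint=False)     # positions along the gradient [m]
dx = x[1] - x[0]
density = np.exp(-((x - 0.02) / 0.01) ** 2)       # assumed 1D water profile

gamma = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio [rad/s/T]
G = 5e-3                      # assumed readout gradient [T/m]
dt = 2 * np.pi / (N * gamma * G * dx)             # sample so DFT bins line up exactly

# Each position emits at its own frequency f(x) = (gamma/2pi)*G*x; the measured
# signal is the sum of all these oscillations over the readout
t = np.arange(N) * dt
signal = np.array([(density * np.exp(1j * gamma * G * x * ti)).sum() for ti in t])

# A frequency analysis (DFT) of the signal recovers the water distribution
profile = np.abs(np.fft.fft(signal)) / N
print(np.allclose(np.fft.fftshift(profile), density, atol=1e-9))  # True
```

This is precisely filling one line of k-space with a readout gradient and reconstructing it; the k-space view generalizes the same idea to arbitrary trajectories.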
It is nevertheless important to know about frequency and phase encoding, for several reasons. Firstly,
the corresponding directions are often referred to in articles and in scanner software; the
scan duration, for example, often depends on the number of phase-encoding directions and
steps. Secondly, the artifacts in the phase- and frequency-encoding directions are very different.
Examples are given in the following sections, and it will be advantageous to have frequency
encoding in mind when reading them. Whether readout gradients were actually used or not, it is often
fruitful to consider, in terms of frequency encoding, how a particular artifact would appear if
k-space along a particular direction had been filled using a readout gradient with the timing of
the actual filling.
9.10 Spatial distortions and related artifacts
As described above in section 9.8, MR images appear, contrast-wise, to have been captured at a
certain time after excitation (the echo time), although measurements are far from always performed
that way. The cause of this is that the central parts of k-space primarily contain information
about the image contrast, whereas the outer regions contain information about the finer structural
details (edges). That the acquisition of the images occurs over a longer period is therefore not
reflected in the contrast, which is determined in the relatively short time it takes to pass the middle
of k-space. The duration is instead primarily reflected in spatial distortions (displacements) and
other oddities.
The so-called “chemical shift” artifact and the EPI distortion artifact are both consequences
of different parts of k-space being acquired at different times after excitation. The artifacts appear
as relative displacements of nuclei in the image plane, occurring if these precess at different
frequencies even in the absence of imaging gradients. It is a fundamental assumption in imaging that
all frequency differences are caused by the linear imaging gradients, and violations cause artifacts.
For the distortion artifacts, it is differences in phase, rather than intensity, that cause the
displacements. Intensity differences also have an effect, however, though it is hard to see at first
glance: Because the outer parts of k-space contain information about the finer structural details, and
because the variation of the signal over k-space is T2 -dependent, the spatial resolution of a measurement
depends on T2 .
Among other things, this has the surprising effect that a blood vessel during a bolus passage of
contrast agent will not only change intensity in EPI images, but will also appear “smeared”
over a larger area of the image. Because of the phase differences that the induced field changes give
rise to, the vessel will even appear displaced during the bolus passage.
The displacement in the image plane between two substances that precess at different frequencies (fat
and water, for example) is proportional to the frequency difference. To calculate the
displacement in pixels, the frequency difference must be divided by the “bandwidth per pixel”,
which can often be examined or even controlled from the scanner software. In accordance with
the above, the bandwidth is inversely proportional to the time it takes to traverse k-space in a
given direction. The displacement in a given direction is thus simply the product of the frequency
difference and the time it takes to pass through k-space in that direction. The clock is “reset” at each new
excitation.
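The pixel-displacement rule above can be sketched for the fat-water case. The field strength, the 3.5 ppm shift and the bandwidth-per-pixel setting are assumed typical values, not taken from the text.

```python
gamma_bar = 42.58e6      # proton gyromagnetic ratio / 2π [Hz/T]
B0 = 1.5                 # assumed field strength [T]
shift_ppm = 3.5e-6       # fat-water chemical shift, about 3.5 ppm

# Frequency difference between fat and water at this field strength
delta_f = shift_ppm * gamma_bar * B0      # ≈ 224 Hz

# Displacement in pixels = frequency difference / bandwidth per pixel
bandwidth_per_pixel = 220.0               # assumed readout setting [Hz/pixel]
displacement = delta_f / bandwidth_per_pixel
print(f"fat-water displacement ≈ {displacement:.1f} pixels")
```

With these settings fat is shifted about one pixel relative to water; halving the bandwidth per pixel (a slower traversal of k-space) would double the shift.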
9.11 Slice-selective (2D-) versus 3D-sequences
A distinction is made between multi-slice and 3D sequences. This is confusing, since the result
is a 3D dataset in both cases (most often represented as a series of slices). The somewhat
unfortunate choice of words does not refer to the result but rather to the method of acquisition:
In slice-selective sequences, the protons in one slice at a time are affected as described above
(section 9.3), while in 3D sequences all spins in the coil are affected identically and simultaneously.
Subsequently, the k-space methods described in section 9.4 are used to distinguish signals from
different positions (3D k-space is analogous to 2D k-space). A frequently used 3D sequence is
MPRAGE (Magnetization Prepared Rapid Acquisition Gradient Echo). It is shown
in figure 17 and described in appendix A.
3D sequences are almost always applied with a short repetition time TR, since the measurements
would otherwise become very time consuming. They are therefore practically always T1 -weighted,
but can also contain significant T2 -weighting, as is the case for, e.g., the CISS sequence,
which has mixed contrast. Sometimes the T1 -weighting is isolated in the edges of k-space,
so that the image appears T2 -weighted. Because of the short TR, there is much less room for
contrast variation than for multi-slice sequences.
For 3D measurements, high spatial resolution can be achieved in all directions simultaneously,
e.g. isotropic (1 mm)³ resolution (= 1 microliter). For ordinary 2D sequences, the situation is different,
since slice thicknesses under 3 mm are rarely employed. The in-plane resolution is
often set high for 2D sequences, which results in good-looking images but quite oblong voxels.
This may cause problems when distinguishing small or flat structures such as grey matter. In
summary:
3D sequences Allow for high spatial resolution and isotropic voxels, but for short measurement
times most often only T1 weighting and mixed weighting turn out well.
Multi-slice sequences High “in plane” spatial resolution, but rarely slice thicknesses under 3 mm.
Flexible contrast.
[Figure 17 (diagram): preparation (180° inversion pulse), 3D-FLASH readout with flip angle α,
and recovery. The rows show the RF pulses, the gradients Gx, Gy and Gz with phase-encoding
tables PHx and PHy, and the output, annotated with the timing parameters TI, TE, TD and TR
and the repetition counts Nx and Ny.]
Figure 17: The MPRAGE sequence is an example of a 3D sequence where excitation is non-selective or
weakly selective, and where the 3D k-space must be traversed in order to achieve spatial resolution in all
directions. Initially, the magnetization is rotated by radio waves so that it points opposite the magnetic
field in the following period TI (the “inversion time”). This is done to increase the T1 -weighting of the
magnetization and thus also of the image. In order to detect the magnetization, it is tilted slightly into
the transversal plane by an RF pulse with a small flip-angle α. The signal is coded spatially by the green
gradients and it is measured in the readout period. The inner, dashed bracket indicates that the enclosed
part of the sequence is repeated once for each kx value. By applying small flip-angles, the longitudinal
magnetization is only affected slightly each time, so that many lines along the kz axis can be measured
before the magnetization must once again be inverted. This happens after a recovery period TD. The
entire sequence is repeated for each ky value, so that all of k-space is covered. The gradient applied
simultaneously with the α pulse is not strictly necessary, but can be used to choose a thick slice (slab)
within which phase encoding can be carried out. Thus, aliasing may be avoided, even if only relatively few
phase encoding steps are employed. Further information regarding MPRAGE and FLASH can be found in
the appendix.
9.12 Aliasing and parallel imaging
That k-space imaging is a Fourier technique (meaning that
it is based on the addition of phase rolls) is reflected in
the artifacts that may appear in images, if the underlying
assumptions are not met. If we do not have knowledge
about the subject in the scanner a priori, the similarity
with all possible phase roll patterns must in principle be
measured in order to reconstruct a perfect MR image. It
does, however, take time to create patterns, and in practice
only a central, rectangular part of k-space is covered with
finite resolution, as illustrated in figure 15. Usually, it is
the number of measured lines that determines the duration
of a measurement. A limited density in k-space of these is
therefore desired.
If the chosen density is too low, it results in a characteristic “aliasing artifact”: If sampling
in k-space is not dense enough to catch signal variations (i.e., if the Nyquist criterion is not
met), errors will appear in the reconstructed images. The errors consist of signals being
misplaced and overlapping, as seen in figure 18.
Figure 18: Aliasing: The image displays a brain in profile, but since the sampling density in
k-space is too low, the nose ends up in the neck and vice versa. Thus, pathology can be hidden.
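The folding mechanism behind this artifact can be reproduced in a one-dimensional sketch (the 128-point profile and feature position are illustrative assumptions): keeping only every second k-space line halves the field-of-view, and signal from the missing half folds back into the image.

```python
import numpy as np

n = 128
obj = np.zeros(n)
obj[100:110] = 1.0        # a feature in the lower half of the profile ("the nose")

# Full k-space is the Fourier transform of the object.
kspace = np.fft.fft(obj)

# Undersample: keep only every second k-space line (half the Nyquist density).
under = kspace[::2]

# Reconstructing from the undersampled lines halves the field of view:
# the feature folds (aliases) into the wrong half of the image.
recon = np.fft.ifft(under).real
```

Mathematically, the 64-point reconstruction equals the true profile folded onto itself, obj[:64] + obj[64:], which is exactly the nose-in-the-neck overlap described in the caption of figure 18.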
In 1997, however, Sodickson and Manning brought attention to the fact that if several antennas
with different sensitivity profiles are measuring the same k-space, the “lacking” k-space samples
can be calculated, thus avoiding aliasing. Simultaneously applying two antennas for signal
reception can in principle therefore accelerate the measurement by a factor of two, although in
practice it requires several more antennas to get reliable images. Pruessmann et al. soon showed
that sets of reconstructed, aliased images acquired with different antennas can intuitively be
“un-aliased”, which is in principle equivalent. The techniques are today known under the common
name “parallel imaging”.
That parallel imaging is an independent technique, even though it was first proposed as a
modification of the k-space technique, is most easily understood through a thought experiment: In
the extreme limit “each-pixel-has-its-own-antenna” (e.g. many tiny antennas spread equidistantly
through the imaged object), the gradients are not necessary to measure detailed images. It is simultaneously realized that parallel imaging is not a Fourier technique, meaning that reconstruction is
not based on the summation of stripe patterns.
9.13 Finishing remarks on the subject of imaging
Typically, relaxation times are relatively long which is responsible for the enormous flexibility
in MR scanning. In 100 ms, for example, series of RF and gradient pulses can be applied to
prepare the magnetization in a way that allows its size or direction to reflect physiologically
interesting parameters, in other words manipulating the contrast in the images. Additionally, long
T2 ’s allows for the generation of a large number of phase roll patterns and for measurement of the
corresponding radio wave signals before the system’s memory of the preparation has been erased.
Actually, the whole k-space can be covered after a single excitation (echo-planar imaging, EPI).
It is interesting to note how the wavelength limitation known from optical techniques was
avoided. In the case of k-space techniques, this happened by using frequency differences rather
than far-field properties for spatial localization of the radio wave sources. The traditional wavelength
limitation is thus replaced with a requirement that the neighboring nuclei’s frequencies must be
distinguishable during the measurement period, if the positions are to be distinguished. This can
be expressed alternatively: No structural details considerably less than the shortest wavelength
of the phase roll can be observed. Thus, a new wavelength limitation is introduced, which is
intuitively understandable from figure 15. Since it takes time to move in k-space, the ultimate
limitations for the spatial resolution are determined by gradient strengths, relaxation times and
movement of the water molecules, including diffusion.
In parallel imaging, the limited and different sensitivities of the coil elements are used to avoid
the wavelength limitation. Since the sensitivity area’s dimensions are more or less equal to those
of the coil elements, the spatial resolution is determined by the coil elements’ size for “pure”
parallel imaging. This resolution is usually inadequate for practical purposes.
Parallel imaging is, however, completely compatible with k-space imaging, in that the two
techniques can be combined and images acquired faster than normally. In some situations,
speed is more important than spatial resolution, and therefore experimentation is now done with
images acquired without applying gradients. Such images are acquired with sub-millisecond time
resolution using up to hundreds of antennas simultaneously. These images are of interest for imaging
of substances with short relaxation times, for example.
10 Noise
The limiting factor for many MR examinations is noise. We cannot, for example, directly detect
substances in vivo at concentrations below a few millimolar on a reasonable timescale, because
the signal is drowned in noise.
The noise can be physiological (pulse, respiration, movement), but even if the patient is lying
completely still, there exists an upper limit for the image quality achievable in a given period. In
the absence of physiological noise and under a couple of other assumptions (which are rarely
completely fulfilled), the signal to noise ratio is proportional to the voxel size and the square root
of the time it has taken to acquire the image.
It is essential to realize that the signal-to-noise ratio does not depend on the number of voxels
in the image. If you double the matrix (points along one side of the image) and the field-of-view,
the signal-to-noise ratio will typically increase by a factor of √2, because the measuring time is
thereby typically doubled while the voxel size remains unaltered. That it may only be extra air
that is included by the expanded field-of-view is irrelevant, and the noise level is not affected
by the fact that 4 times as many voxels are being measured.
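The square-root dependence on measurement time can be illustrated with a small simulation: averaging repeated noisy measurements (which is effectively what a longer acquisition does) improves the signal-to-noise ratio as the square root of the number of averages. Signal level, noise level and trial counts below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0
sigma = 0.5                               # noise level of a single measurement

def empirical_snr(n_avg, n_trials=20000):
    """SNR of the mean of n_avg noisy measurements, estimated empirically."""
    samples = signal + sigma * rng.standard_normal((n_trials, n_avg))
    return signal / samples.mean(axis=1).std()

# Doubling the number of averages (i.e. the measurement time)
# improves the SNR by roughly sqrt(2).
ratio = empirical_snr(8) / empirical_snr(4)
```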
Insofar as the scanner is functioning well, the electronics is not the primary source of noise.
Instead, it is the random motion of charged particles (ions) in the patient. When charged
particles diffuse, they emit random radio waves as they change their direction of motion. The
higher the temperature and conductivity of a material, the more noise it emits.
Thermal noise is evenly distributed despite the fact that it is emitted almost only from within
the patient, who may only fill part of the image area. This is caused by the noise not being an MR
signal, and the gradient-induced spatial coding of the signal is therefore not affecting the noise.
Instead, the noise is received in a steady stream during the entire measurement and it is therefore
evenly spread over k-space and consequently also evenly over the MR image.
The noise from patients cannot be avoided, but we can, to some extent, avoid measuring it. The
idea is to use a small coil that only detects noise from a small area of the patient (a surface coil,
for example). It is a common (but non-essential) misconception that using a small coil primarily
enhances the signal-to-noise-ratio through its improved sensitivity to signal. Improved signal
sensitivity, however, also increases sensitivity to noise that is generated in the same regions of
the body. Improving the sensitivity therefore does not by itself improve the signal-to-noise ratio.
Instead, the surface coil limits the noise by being sensitive to a smaller part of the body, of size
similar to that of the surface coil. A small surface coil only detects the noise (and signal) from
a small part of the body, and this noise appears evenly distributed over the entire image. The
surface coil therefore improves the signal-to-noise ratio by reducing the sensitivity to distant
noise sources. These effects are well described by Redpath in The British Journal of Radiology,
71:704-7, 1998, for example.
11 Scanning at high field
Three tesla is the largest clinically available field strength at
present. The high field is especially beneficial for imaging with
a high spatial resolution, as well as for performing measures
that normally suffer under a low signal-to-noise-ratio, including
perfusion measurements and spectroscopy. The image to the
right depicts a newborn child and demonstrates high resolution
and fine contrast after only 3 minutes of acquisition at 3 T. A small
periventricular hemorrhage is visible in the right hemisphere (left in
the image, according to radiological convention). The relaxation
times are changed at high field (typically longer T1 and shorter
T2 ). Sequences and parameters must therefore be adjusted to the
field strength in order to obtain good images.
For functional imaging, fMRI, the high field strength provides enhanced contrast upon brain
activation, so it can be measured in which areas activation occurs in a given situation (see fMRI
in the vocabulary). The methods can, for example, be used clinically for planning of surgery
preserving essential functions such as language. There are, however, several pitfalls doing fMRI
and the quality of the results must thus be evaluated carefully before conclusions can be drawn.
12 MR Safety
Is the MR technique harmless, and just how certain are we of this? It is a natural question, since
we do use radio waves that have been suspected of being carcinogenic in connection with high
voltage wiring and mobile telephones. Furthermore, magnetic fields of quite significant strength
are employed, from 25,000 to 75,000 times the average field of the earth. By nature, we cannot
know with total certainty whether MR is completely harmless. At this time, no harmful effects
have been documented, although the question has been explored in depth.
It is known that possible harmful effects are rare or have very limited effect. This knowledge is
partially derived from animal experiments with long term exposure. Population studies have been
conducted of radiographers and others working close to magnets or radio wave fields (chemists
and physicists have worked in high magnetic fields since the 1940s, and many people working with
radar have been exposed to monstrous amounts of radio waves).
The area is regulated by international standards and the scanners are equipped with monitoring
equipment that ensures that no radio waves more powerful than allowed are emitted, even by
accident.
The most significant risks associated with MR scanning probably result from misinterpretation
of the images, as well as from purely practical issues concerning handling. Reported injuries
include hearing damage, burns resulting from wire loops (in jewelry, for example), and serious
incidents with metal objects such as the flying chair shown to the right. There are scary examples
of oxygen canisters, floor waxers and other things going astray, found in anecdotes at
http://www.simplyphysics.com/flying objects.html
Even though the scanning itself is harmless, it has been shown that MR contrast agents injected
intravenously in connection with certain types of scanning, can be dangerous to patients with
kidney problems. Because of differences in the contents and chemical stability of the contrast
agents used, the risk varies. In the case of patients with serious kidney problems, the use of
gadolinium based contrast agents should be avoided when possible, in accordance with the
guidelines found on the websites of the European and International MR Societies.
http://www.esmrmb.org/ and http://www.ismrm.org/
Usually, the contrast agents are excreted from the body through the kidneys shortly after
scanning, but in the case of kidney patients, it can be trapped in the body. It must be emphasized
that the problem only occurs in very special cases (exclusively when combining a high dose of
contrast agent, a heavily impaired kidney function and the use of certain substances). At Hvidovre
Hospital's MR department, there are no suspected cases, although contrast agents have been used for
20 years in thousands of patients. The same is true of most other departments with MR scanners.
Appendices
Appendix A: The educational software mprage
This appendix is the online help for the software “mprage”. It also describes the design and
important qualities of this often used 3D-sequence. The program is unfortunately not in a form
where it can be widely distributed, but it can be started at Hvidovre from the Linux command line by
writing mprage. It can also be distributed on demand.
Introduction
This is the online manual for a simulation program for the sequences MPRAGE and FLASH. The program
was developed in connection with a research project (inhomogeneity correction) and has since been adapted
for sequence optimization and educational use.
The MPRAGE sequence
The MPRAGE sequence as shown in figure 17, is an often used T1-weighted 3D sequence that includes
two repetitions (loops): An inner loop repeating an excitation pulse with a small flip angle, and an outer
loop that begins with a 180 degree inversion pulse.
The sequence is a 3D FLASH with phase encoding in two directions, meaning that the magnetization
is rotated away from the direction of the magnetic field by the use of many RF pulses with small flip
angles in quick succession (typically over a hundred with about 10ms interval). In contrast to the spin
echo sequence where the magnetization is used up completely at each excitation, a typical FLASH flip
angle is small, e.g. 10 degrees.
Each pulse only affects the magnetization slightly, but when many pulses are employed, the longitudinal
magnetization approaches a level that depends on T1 and the strength and frequency of the RF pulses.
A FLASH sequence usually results in T1 weighted images, since the degree of saturation is T1 dependent.
Said in another way: The many RF pulses eventually drive the magnetization to a new equilibrium level
(“driven equilibrium”), that reflects how quickly the magnetization is rebuilt. For the purpose of providing
extra T1 -weighting, the MPRAGE sequence has a built-in inversion pulse, employed before each FLASH
module.
After each excitation, the fresh transversal magnetization is affected by gradients that create a phase
roll corresponding to a new position in k-space. A readout gradient causes movement through k-space
along a straight line while signal is acquired for a few milliseconds. When the inner and outer loops are
finished, the entire 3D k-space is covered and an image of the whole volume can be reconstructed. Each
line in k-space is weighted differently since the preceding FLASH pulses influence the magnetization.
T2 does not play a significant role concerning the contrast in FLASH and MPRAGE, since the echo
time typically is a few milliseconds and thus much shorter than T2 .
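The approach of Mz toward a driven equilibrium, and the lingering effect of the inversion pulse, can be sketched with a minimal simulation of the longitudinal magnetization. This is not the mprage program itself; T1, TI, TD, the FLASH interval, the flip angle and the pulse count are illustrative assumptions.

```python
import numpy as np

def mprage_mz(T1=1.0, TI=0.9, TD=0.5, flash_tr=0.010,
              n_pulses=150, alpha_deg=10.0, n_outer=5):
    """Mz (relative to M0) just before each FLASH pulse, for the last
    iteration of the outer loop."""
    alpha = np.radians(alpha_deg)
    mz = 1.0
    for _ in range(n_outer):
        mz = -mz                                        # 180 degree inversion
        mz = 1 + (mz - 1) * np.exp(-TI / T1)            # T1 recovery during TI
        trace = []
        for _ in range(n_pulses):
            trace.append(mz)
            mz *= np.cos(alpha)                         # small flip uses a little Mz
            mz = 1 + (mz - 1) * np.exp(-flash_tr / T1)  # recovery between pulses
        mz = 1 + (mz - 1) * np.exp(-TD / T1)            # recovery period TD
    return np.array(trace)

trace = mprage_mz()
```

With these numbers Mz converges toward the driven-equilibrium value (1 − E)/(1 − E·cos α), where E = exp(−TR_FLASH/T1), long before the 150 pulses are over; this is why the position of the k-space center within the FLASH train (EchoLin%) matters so much for contrast.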
Using the program
The parameters from MPRAGE can be varied in the program, and the equivalent sequence and evolution
of the longitudinal magnetization, Mz , can be followed (left graphs). The parameters occurring are
TI The inversion time. The period from the inversion pulse until the first FLASH pulse.
Beta The flip angle of the inversion pulse (180◦ for MPRAGE, 0◦ for FLASH, 90◦ for saturation-recovery-
FLASH).
Alpha The FLASH flip-angle.
TD Pause from last FLASH pulse until the next inversion pulse.
Nlin The number of lines that are acquired in one of the k-space phase encoding directions. The other
directions (phase encoding and readout) are not so interesting in this context since they do not
influence contrast or voxel shape. The uninteresting phase encoding direction is here chosen
perpendicular to the image plane.
EchoLin% A percent rate that indicates when in the FLASH period the middle of k-space is passed. The
contrast is primarily determined here. As an example, a choice of 50% indicates that the center of
k-space is measured after half of the FLASH pulses are played out. Often, the center of k-space is
measured asymmetrically to increase the effect of the inversion pulse.
The naming of the parameters is in accordance with older Siemens scanners. On more recent scanners,
the parameters are defined and named differently. Ask.
On the graph seen to the left, a single iteration of the outer loop is illustrated. The evolution of the
magnetization is shown at the top, and the RF pulse timings are shown at the bottom. The horizontal red
line in the top graph shows the timing of the FLASH module and the vertical line illustrates when the
middle of k-space is reached.
The outer loop is typically repeated many times (e.g. 200) and after a few iterations, the same curve is
achieved every time (the one shown). The magnetizations shown in the right and left side of the graph are
therefore connected in the sense that only the sign differs between the first and last point for beta equal to
180 degrees, for example (an inversion pulse inverts the magnetization, hence Mz changes sign).
Along with the contrast evolution, simulated k-space images (raw data) and reconstructed images are
shown. The raw data are generated by adding k-space images for the single tissue types, but weighted
line-wise with the relevant Mz during the FLASH period. The reconstructed images are produced by
Fourier-transforming the k-space images.
When the parameters are changed, the corresponding changes in contrast and efficiency are shown.
The latter is a contrast measure corrected for sequence duration (achieving a better contrast is easy, if the
duration of the measuring is increased). The contrast is calculated as the signal difference between grey
and white matter (GM, WM) at the time when the middle of k-space is passed. This contrast measure is
reasonable, if the voxel shape is not bizarre for any tissue type, i.e., if the k-space weighting is
reasonably uniform. Efficiency is defined as the contrast per √duration (so that it is independent
of the number of repeated measures (signal averages)).
Important points
The program illustrates a number of important aspects:
• There are many parameters to adjust in a sequence like MPRAGE. The contrast and image quality
depends strongly on these.
• It is not possible to provide easy rules for how the contrast changes when a given parameter is
adjusted – it depends on the other parameters. Simulation tools like this one are thus required. The
software also contains optimization tools. Ask, if this interests you.
• Contrast and spatial resolution (matrix) are not independent.
• During the pauses, Mz approaches the equilibrium magnetization (proportional to proton density)
on a timescale of T1 .
• During the FLASH period, when pulsing is frequent, the magnetization approaches another level
dictated by T1 , pulse interval and strength. The greater the flip angle alpha, the faster “driven
equilibrium” is achieved, and the less the influence of the inversion pulse.
• The signal from one type of tissue can be “nulled” in the middle of k-space through appropriate
parameter choices. The signal from the tissue type in question is thereby suppressed. The tissue still
contributes with signal in other regions in k-space, however, and bright edges may therefore result.
• The voxel shape and spatial resolution depend on the type of tissue, since the k-space weighting
(meaning the signal variation over k-space) depends on the type of tissue. This is most obvious
when a null-crossing appears close to the k-space center (often producing ugly images).
• Although the GM/WM contrast, as defined above, can be good for a given set of parameters, the
image may be rubbish. This occurs if the k-space weighting produces an unfortunate voxel shape for
one or more types of tissue (see above). Thus it is difficult to automatically optimize the sequence
in a meaningful way (tradeoff between voxel shape and contrast).
• The importance of the inversion pulse for the contrast, is dependent on the duration and number of
FLASH pulses that occur between inversion and passage of the k-space center.
Precautions
The simulation is not more precise than the parameters entering the simulation, including
• the assumed relaxation times (shown in the window - these depend on the field strength),
• the assumed proton densities,
• sequence parameters, including FLASH-TR, which at the moment can not be varied without altering
the program slightly,
• the assumed noise level,
• the choice of phase encoding directions,
• the lack of influence from RF-inhomogeneity that is not simulated. The program has “hidden”
graphics, that illustrate this sensitivity (press Print). Ask, if your interest has been aroused.
Appendix B: Glossary
This glossary can be used for repetition and for learning more about individual topics.
The BOLD effect This effect is the basis for most fMRI. The rate at which the MR signal dies
out is weakly dependent on the concentration of oxygen in the blood, since oxygenation
changes the magnetic qualities of the blood. Deoxyhemoglobin is a paramagnetic contrast
agent contrary to oxygenated hemoglobin that is diamagnetic like the tissue.
The signal intensity is usually increased by neuronal activation, since the blood supply
(perfusion) increases more than the oxygen consumption. This is the so-called BOLD-effect
(blood oxygenation level dependent effect) that results in signal variation of a few percent
in gradient-echo sequences. The greatest sensitivity occurs at TE ≈ T2∗ (consider why this
is).
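The claim that sensitivity peaks near TE ≈ T2∗ can be checked numerically. For a gradient-echo signal S = exp(−TE·R2∗), activation changes R2∗ slightly, so the relevant quantity is |dS/dR2∗| = TE·exp(−TE/T2∗); the T2∗ value below is an illustrative assumption.

```python
import numpy as np

T2star = 0.040                           # assumed T2* of 40 ms
TE = np.linspace(0.001, 0.200, 2000)     # candidate echo times [s]

# Sensitivity of the gradient-echo signal to small changes in R2* = 1/T2*.
sensitivity = TE * np.exp(-TE / T2star)

best_TE = TE[np.argmax(sensitivity)]     # maximised near TE = T2*
```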
Coil This word is used for several concepts in MR. It is typically used for the antennas that send
and receive radio waves. The coil is usually fitted to the body part that is being scanned,
so it is near the area of investigation. A head coil surrounds the entire head, for example,
and not much else. The choice of coil involves a trade-off of sensitivity, homogeneity and
coverage.
In contrast, gradient coils, which are electromagnets, are not used for transmitting and
receiving radiowaves, but for creating linear field gradients (see Gradients). Shim coils are
similarly used to eliminate non-linear background field gradients.
Contrast agents Substances that affect the contrast in images. Contrary to X-ray contrast agents,
for example, the MR contrast agents are usually not seen directly. Rather, we see their
effect on the protons (increased relaxation – see gadolinium). Contrast agents are, for
example, used for detecting defects in the blood-brain barrier: Gd-DTPA usually stays in
the blood stream, but is temporarily trapped in the tissue in the case of blood-brain barrier
breakdown. Magnetically tagged blood can also be used as a contrast agent (spin labelling).
Diamagnetism Magnetic fields deform the electronic orbits so that the magnetic field is weakened
slightly. This phenomenon is called diamagnetism and it is exhibited by all substances.
However, there are substances where the diamagnetic quality is completely overshadowed
by far more powerful influences (paramagnetic and ferromagnetic substances). The so-called
magnetic susceptibility expresses how much the field is strengthened or weakened by a material.
Diffusion Diffusion is measured by employing a powerful gradient that introduces a short-
wave phase roll over the object. When the gradient is then inverted (or equivalently, if
a refocusing pulse is applied), the spins will be rotated back in phase, forming an echo
insofar as the spins have not moved meanwhile. If the spins have diffused, meaning that
they moved randomly among each other, the signal will only be partially restored. The
signal intensity therefore reflects the mobility of the protons (the diffusion coefficient).
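The partial loss of the echo can be sketched with a Monte-Carlo toy model: each spin acquires a phase from the first gradient lobe and the opposite phase from the inverted lobe, but evaluated at a slightly different, randomly displaced position. All magnitudes below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_spins = 200000
k = 50.0                                      # assumed gradient "area" [rad per unit length]

x1 = rng.random(n_spins)                      # positions during the first lobe
step = 0.01 * rng.standard_normal(n_spins)    # random diffusion displacement
x2 = x1 + step                                # positions during the inverted lobe

phase = k * x1 - k * x2                       # zero if the spin did not move

# Echo amplitude = magnitude of the mean spin phasor: 1 for no motion,
# smaller when diffusion has scrambled the phases.
echo = np.abs(np.mean(np.exp(1j * phase)))
```

For Gaussian displacements with standard deviation σ the expected amplitude is exp(−(kσ)²/2), about 0.88 here; stronger gradients or more mobile spins attenuate the echo further, which is what makes the signal reflect the diffusion coefficient.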
Echo-planar imaging, EPI A rapid imaging method where k-space is traversed using oscillating
gradients after a single excitation. The acquisition time for a single EPI image is typically
below 100 ms. EPI often exhibits low signal-to-noise ratio and pronounced artifacts,
including spatial distortions and signal voids near air-filled spaces. It is used frequently,
however, when time resolution is crucial.
Eddy current See induction.
Electrons Negatively charged particles that orbit the positive atomic nuclei. The electron cloud
is responsible for the chemical properties of atoms and molecules. Its appearance is weakly
reflected in the MR signal, since the electron cloud partly screens off the magnetic field.
This is exploited in spectroscopy. Electrons also have spin, and ESR is the term used for
Electron Spin Resonance. Unfortunately, the short relaxation times of electrons make ESR
useless for medical imaging.
Excitation The magnetization cannot be measured before it is rotated and has acquired a compo-
nent perpendicular to the magnetic field (a transversal component). The rotation is caused
by transmitted resonant radio waves, a process called excitation.
Exponential function When “something” decreases exponentially (the transversal magnetiza-
tion, for example), it means that a specific fraction is lost when we wait for a certain
period of time: When we say that T2 is 100 ms it means that about 63% of the signal is
lost in 100 ms. The same fraction of the remaining signal is lost again if we wait another
100 ms. The T1 relaxation too involves exponential functions, since the longitudinal magnetization approaches equilibrium exponentially after excitation (the difference decreases
exponentially).
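The “same fraction per time interval” property is exactly what the exponential function encodes; a short check with T2 = 100 ms:

```python
import math

T2 = 0.100                                   # 100 ms
remaining = lambda t: math.exp(-t / T2)      # fraction of signal left after time t

after_one_T2 = remaining(0.100)              # about 0.37, i.e. roughly 63% lost
# Waiting another 100 ms removes the same fraction of what is left:
ratio = remaining(0.200) / remaining(0.100)  # equals after_one_T2 again
```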
Ferromagnetism Diamagnetism provides a minimal weakening of the field, and paramagnetism
typically provides a more powerful increase of the field. Such changes are in the range of
per mille or less. There are, however, a few agents that have far more powerful magnetic
attributes, among them iron, nickel and cobalt that are ferromagnetic. These materials may
not be introduced in the scanner room without special precautions. Be aware that only
metallic iron is ferromagnetic – ionic iron is paramagnetic.
Flow Velocity and direction of uniform movement can be measured with a technique similar to
that described above for measuring diffusion. Yet, the employed gradients are typically
weaker and the measurements are interpreted differently, since it is the net rotation rather
than the signal loss that reflects the speed. After employing the bipolar gradients, the spins
have achieved a total phase change that is proportional to the velocity along the gradient.
The technique is therefore called phase contrast MRI and the velocity is determined by
comparing phase images.
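The velocity-proportional phase can be verified by direct numerical integration over a bipolar gradient; the waveform timing, gradient strength and velocities below are illustrative assumptions.

```python
import numpy as np

gamma = 2 * np.pi * 42.58e6          # proton gyromagnetic ratio [rad/(s*T)]

def net_phase(v, G=0.01, tau=0.002, dt=1e-6):
    """Phase accumulated by a spin moving at velocity v [m/s] along a
    bipolar gradient: +G [T/m] for tau seconds, then -G for tau seconds."""
    t = np.arange(0, 2 * tau, dt)
    g = np.where(t < tau, G, -G)     # bipolar gradient waveform
    x = v * t                        # position of the moving spin
    return float(np.sum(gamma * g * x) * dt)

static = net_phase(0.0)              # static spins rephase completely: 0
moving = net_phase(0.1)              # net phase proportional to velocity
```

Analytically the net phase is −γGvτ², linear in the velocity v, so comparing phase images directly yields the velocity component along the gradient.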
fMRI Functional MRI. Mapping of the areas of the brain that are activated in a given situation.
Usually based on the BOLD-effect.
Frequency encoding If signal is read out while a gradient coil is active, the frequency content
will directly reflect the distribution of nuclei along the gradient direction. A one-dimensional
image (a projection) can consequently be acquired by frequency analysis of the measured
signal. Since the gradient is active during signal readout, its direction is called the readout
direction. Even though frequency encoding is easy to understand, it has proven fruitful to
think instead of how the phase roll that the gradient creates causes the signal to reflect
sample structure. Frequency encoding is consequently a special case of k-space imaging. See
also “phase encoding”.
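A one-dimensional sketch of frequency encoding (the 256-sample grid and the two “objects” are illustrative assumptions): each position contributes an oscillation at a frequency proportional to its coordinate, and a frequency analysis of the summed signal returns the projection.

```python
import numpy as np

# Spin density along the readout direction: two simple "objects".
n = 256
rho = np.zeros(n)
rho[40:60] = 1.0
rho[150:160] = 2.0

# With the readout gradient on, position x precesses at a frequency
# proportional to x; the received signal is the sum of these oscillators
# weighted by the spin density (written here in discrete form).
t = np.arange(n)                       # sample times
x = np.arange(n)                       # positions along the gradient
signal = (rho[None, :] * np.exp(2j * np.pi * np.outer(t, x) / n)).sum(axis=1)

# Frequency analysis recovers the one-dimensional projection.
projection = np.abs(np.fft.fft(signal)) / n
```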
Gadolinium Gadolinium (Gd) is one of the less well-known elements, a heavy metal. It is
characterized by having unpaired electrons in inner shells, which results in the atom having
magnetic qualities that are far stronger than that of protons. The field near a gadolinium
atom is strongly inhomogeneous. Protons that pass near gadolinium therefore receive a
powerful magnetic influence which gives rise to increased relaxation. Free gadolinium
is toxic and it is therefore placed in a larger organic structure – DTPA is typically used.
Contrary to X-ray contrast agents, among others, it is not the actual agent that is seen with
MR, but rather the protons that come near gadolinium (or those that do not, for T2 -weighted
images). Thus, there exists long range effects of gadolinium (e.g. over the blood-brain
barrier, which water can pass but Gd-DTPA cannot).
Gradients Gradients are variations in the magnetic field. The scanner is equipped with gradient
coils that give linear field variations whose direction and strength can be altered as needed.
This is essential for slice selection, imaging and more. Field alterations result in mechanical
forces on the scanner, vibrations, and thus acoustic noise.
Non-linear gradients also appear due to imperfect shimming. On a microscopic scale,
the field is inhomogeneous, e.g., near paramagnetic particles such as gadolinium and
deoxyhemoglobin. For this reason, T2∗ is always shorter than T2 .
Gradient echo A gradient creates a phase roll, i.e. a position-dependent rotation of the spins. If
the gradient is subsequently inverted, the phase roll will gradually rewind until the spins
are back in phase. Thus the magnetization is recovered (in fact, it has only been hidden).
Signal recovered in this manner is called a gradient-echo, and it is employed for imaging
and for flow- and diffusion measurements. The related spin-echo removes the effect of
inhomogeneities caused by variation in the magnetic properties of the tissue. This does not
apply to the gradient echo, since we can only actively invert the gradient direction for the
gradients we control, i.e. the ones that are created with electromagnets.
Induction If the magnetic field through a wire loop is changed, a current in the wire will be
induced (an eddy current). This may, for example, happen to electrodes that form a noose
in the scanner, which may result in burns. Radio waves are typically the most significant
source of heating. Eddy currents also appear in the scanner metal. Magnetic fields formed
in this way can cause distortions of the MR-signals.
Inversion Often, the equilibrium magnetization is inverted for a period by use of a 180-degree
pulse before it is flipped into the transversal plane to be measured. The inversion time,
TI, is this period with T1 -weighting, i.e., the time from inversion until the excitation pulse.
Inversion is typically introduced to add T1 -weighting (inversion recovery) or to get rid of
signal derived from fat or CSF (STIR, FLAIR).
Magnetic dipole Spin makes nuclei act as small compass needles. More precisely, they act as
magnetic dipoles, meaning that they have a south pole and a north pole. Furthermore, the
angular momentum (rotation) makes them precess in magnetic fields, meaning that the
north/south axis rotates around the direction of the field.
Metabolites Substances that play a part in the metabolism. The most easily detectable metabo-
lites in vivo are NAA, choline, creatine and lipids. Others are lactate, inositoles, GABA,
NAAg, taurine, glutamine, glutamate and alanine. Some of these are exceedingly difficult
to measure reliably. There are many other agents in the body, of course, but only small,
mobile molecules in high concentrations are detectable in a clinical setting.
MR Magnetic Resonance.
MRI Magnetic Resonance Imaging.
NMR “Nuclear Magnetic Resonance”. The same as MR. The reference to the nuclei is said to
be removed to avoid associations with radioactivity, nuclear power and atomic weapons.
The chemists have maintained the N, and they use the technique for identifying compounds
and mapping protein structures by spectroscopy, for example.
Paramagnetism All substances have diamagnetic properties, but some are also paramagnetic,
which makes them alter the local field significantly more than the diamagnetic contributions
do. Gadolinium and deoxygenated blood are examples. In pure paramagnetic agents, the
field is typically increased in the range of a few per mille. See Diamagnetism above.
Perfusion Contrary to bulk flow where a large amount of blood is flowing in the same direction,
the blood flow in the smallest vessels (capillaries) is far from uniform within a voxel (there
are millions of capillaries oriented randomly within a voxel of tissue). The blood flow in
these is called perfusion and it is responsible for oxygen supply to the tissue. It is typically
measured by following the passage of a contrast agent through the vasculature on a series of
images. The subsequent analysis is demanding, especially for quantitative measurements.
Phase The word “phase” is used for the direction of the magnetization perpendicular to the B0
field (meaning the direction in the transversal plane). This is affected by the magnetic field
that causes precession. Immediately after excitation, the individual contributions to the
net magnetization point along the same direction perpendicular to the field – the spins are
said to be in phase (or having the same phase). Subsequently, a gradual dephasing and
signal loss follows. This reflects that the common phase is lost, resulting in a loss of net
magnetization. This loss can be caused by random interactions between nuclei (in which
case the loss is irreversible), or it can be caused by field gradients (and is then typically
reversible, since the signal can be recovered after using a refocusing pulse).
Phase images Usual MR images show the size (amplitude) of the magnetization in the transver-
sal plane measured at the echo time, but they do not tell us anything about the direction (the
phase). This must, however, be known to measure flow as the velocity is often reflected
in the phase. For most sequences, one can therefore choose to acquire phase images
in addition to regular amplitude images. They are most frequently zebra-striped, which
indicates the presence of a phase roll: The phase angle moves from 0 to 360 degrees, jumps
to 0 and increases steadily again. This is equivalent to the spins being rotated one or more
turns.
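The zebra stripes can be mimicked with a few lines of code (a hypothetical 1-D example with an arbitrary three-turn phase roll):

```python
import numpy as np

# Illustrative: a linear phase roll across a (1-D) field of view,
# displayed as a wrapped phase angle in degrees (zebra stripes).
x = np.linspace(0.0, 1.0, 9)           # position across the image
phase_roll = 3 * 360.0 * x             # three full turns of phase
displayed = np.mod(phase_roll, 360.0)  # what a phase image shows
print(displayed.astype(int))
```

The displayed phase sweeps repeatedly from 0 towards 360 degrees and jumps back to 0, which in a 2-D image appears as stripes.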
Phase encoding Both slice selection and frequency encoding are relatively easy to understand
without considering phase rolls, but these techniques are inadequate to achieve spatial
resolution in the third dimension. Phase encoding is often the solution: The sequence is
repeated with different strengths of a gradient inserted immediately after the excitation.
The direction of this is called the phase encoding direction. Phase encoding is most easily
explained in terms of k-space imaging: Different phase rolls are induced along the gradient
direction, and the resulting signal strength reflects the degree to which the patient structure
“matches” the phase roll. This signal is subsequently frequency encoded and measured.
Even simple imaging becomes relatively complicated when phase and frequency encoding
are combined but described separately. They appear on equal footing, however, in
the k-space description of imaging, which has proven much more viable and general than
phase encoding and frequency encoding separately. There are still good reasons, however,
to get acquainted with frequency and phase encoding as described in the main text.
Proton Nuclear particle. All atomic nuclei are built of protons and neutrons. The hydrogen
nucleus is the smallest and consists of only one proton.
Precession Spin, which is the rotation of the nuclei around their own axis, gives rise to the
nuclear magnetic property as shown in figure 3. This rotation should not be confused with
“precession”, which is the rotation of the magnetization in a magnetic field as shown in
figure 4, i.e., the rotation of the nuclear north-south axis.
Quantum Mechanics A fascinating subject that you would be wise not to ask me about, unless
you have plenty of time on your hands. Irrelevant for understanding most aspects of MR,
including nearly everything that scanners are used for. Introductions to MRI often contain
misunderstood QM.
Radio waves, RF Radio waves in the relevant frequency range are often simply called ’RF’
(for radio frequency). The employed frequencies overlap with the ones used in radio
communication. MR must therefore be carried out in an RF shielded room (also known
as an RF cabin or a Faraday cage). The shielding can be tackled in a number of ways. In
animal experimental scanners, the “plugs” in the ends of the scanner act as boundaries for
the shielded volume.
Refocusing pulse A pulse of radio waves transmitted with the intention of creating a spin echo
is often called a refocusing pulse. The pulse typically has a flip angle of 180 degrees.
Relaxation times T1 and T2 are time constants that describe how quickly the magnetization
is approaching equilibrium. The magnetization is longitudinal in equilibrium, meaning
that it is pointed along the magnetic field (Mxy = 0, Mz = M0 ). Away from equilibrium,
the transversal magnetization Mxy decreases on a timescale T2 , while the longitudinal
magnetization Mz approaches M0 on a timescale T1 .
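In equations (standard mono-exponential relaxation, not written out above): Mz(t) = M0 + (Mz(0) − M0)·exp(−t/T1) and Mxy(t) = Mxy(0)·exp(−t/T2). A minimal sketch with illustrative, made-up time constants:

```python
import numpy as np

# Standard mono-exponential relaxation. T1 and T2 are arbitrary
# example values (ms), not numbers taken from the text.
M0, T1, T2 = 1.0, 900.0, 90.0

def Mz(t, Mz0=0.0):
    """Longitudinal magnetization approaching M0 with time constant T1."""
    return M0 + (Mz0 - M0) * np.exp(-t / T1)

def Mxy(t, Mxy0=M0):
    """Transversal magnetization decaying with time constant T2."""
    return Mxy0 * np.exp(-t / T2)

# After one time constant, Mz has recovered to about 63 % of M0,
# and Mxy has decayed to about 37 % of its starting value.
print(round(Mz(T1), 3), round(Mxy(T2), 3))
```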
Resonance The magnetization will only be rotated in a useful manner if it is subject to radio
waves at the right frequency. This is a resonance phenomenon as known from the example
of a swing: The amplitude of the oscillations will only change if the swing is pushed in
synchrony with its natural oscillation frequency (i.e., “on resonance”).
Saturation Most often, the purpose of excitation is to create a signal, but not always. Satura-
tion pulses are used to get rid of signal in a given position, from a given type of tissue or
from a particular metabolite. Examples include saturation slices and lipid suppression.
Saturation more generally refers to the attenuation of signal caused by reduced magnetization,
e.g. resulting from incomplete recovery of the longitudinal magnetization after prior
excitation pulses.
Slice selection When excitation is performed while a gradient is active, the protons in some
positions will precess in synchrony with the radio waves, and the nuclei will hence be
rotated. At other positions, the resonance condition will not be fulfilled and the nuclei are
hardly influenced. Specifically, a slice perpendicular to the direction of the gradient will
be excited. When radio wave transmission and the gradient is subsequently turned off, a
signal from the nuclei in the slice cannot immediately be detected, however. Perpendicular
to the slice, the last part of the slice selection gradient will have induced an unwanted phase
roll that needs to be unwound before a signal from the slice can be measured. Slice selection
will therefore typically be followed by a gradient directed opposite to the slice selection
gradient (figure 16). This brings the nuclei in the slice into phase.
Spectroscopy The common term for techniques based on measuring differences in frequency
content (color vision, variation in tone, NIRS, . . . ). Pretty much any MR technique is
spectroscopic (imaging, for example), but the term MR-spectroscopy is usually used to refer
to techniques aimed at distinguishing metabolite signals.
Spin Spin is a quantum mechanical property possessed by protons and neutrons. In brief, it may
be perceived as rotation of individual nuclei around an axis of their own. The magnetic
characteristics of the nuclei (the dipole property) are caused by spin, and the dipole points along
the direction of the spin. In nuclei, protons pair up in entities that seemingly have no spin.
The same is true of neutrons. Consequently, only few nuclei are suitable for
MR (an odd number of protons and/or neutrons is required).
Spin-echo If the magnetic field is inhomogeneous, the spins will lose alignment after excitation,
since some spins precess faster than others – signal is lost. Using pulses of radio waves, the
dephasing can be reversed so that the lost signal is recovered: A 180 degree pulse inverts
the phase of the spins, so that those that have precessed the most are set furthest back.
Waiting a while longer, the signal can be recovered in the form of a spin-echo,
since the inhomogeneity persists.
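A small simulation sketch of this mechanism (the frequency offsets and echo time below are arbitrary illustration values):

```python
import numpy as np

# Illustrative spin-echo simulation: random per-spin frequency offsets
# model a static inhomogeneous field (arbitrary units).
rng = np.random.default_rng(0)
dw = rng.normal(0.0, 1.0, 1000)        # per-spin frequency offsets
TE = 10.0                              # echo time

phase = dw * (TE / 2)                  # dephasing up to the 180 degree pulse
signal_before = abs(np.exp(1j * phase).mean())

phase = -phase                         # the 180 degree pulse inverts all phases
phase += dw * (TE / 2)                 # the same static offsets act again
signal_echo = abs(np.exp(1j * phase).mean())  # all phases cancel: an echo

print(round(signal_echo, 3))
```

Because each spin keeps the same offset before and after the pulse, the phases cancel exactly at the echo time, whereas the signal just before the pulse is strongly dephased.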
Tesla The unit of magnetic field strength. The Earth's magnetic field is on average about 0.04 mT,
while the magnetic fields used in MR scanning are typically 1-3 T.
Voxel Volume element. Typically refers to the volume that a given point in an image corresponds
to. The word pixel (picture element) may be better known, but it does not reflect the
depth aspect of an MR measurement (a pixel is 2D rather than 3D). The voxel size is the same as
the spatial resolution. The voxel dimensions are the edge lengths of a voxel.
Aspects of image quality
Jens E. Wilhjelm
DTU Elektro, Ørsteds plads, building 348
Technical University of Denmark
2800 Kgs. Lyngby
(Ver. 3.1 26/8/10) © 2001-2010 by J. E. Wilhjelm
1 Introduction
The quality of medical images can be assessed in many different ways. However, the primary use of
the images is to make the diagnosis, so the image quality is naturally connected with how easily the
diagnosis can be made and the degree of confidence it can be made with.
The most essential requirement is that the images are correct, e.g., that the tissue borders appear correct and that
the same kind of tissue appears the same way in the picture. This can be assessed by comparing a medical
image to the corresponding anatomical image, if this exists. However, there is not a linear relation
between the image correctness and the ability to make a diagnosis from the image. As an example,
some images might appear geometrically distorted if, for instance, the assumed speed of sound in
an ultrasound scanner is different from the actual one: If looking for a plaque in the carotid artery, the
indicated depth of the plaque might be wrong. However, the doctor knows this can happen, and precise
knowledge of the depth of the plaque is not so important.
Apart from the requirement that images should be correct, there are two central quality factors,
namely contrast and spatial resolution size. Contrast is a measure of how easy it is for the eye to distinguish between two different types of tissue. Contrast has no units. Spatial resolution size is a measure of the size of a point object on the medical image. This has units of meters.
Figure 1 Ultrasound image of formalin fixed porcine thoracic tissue (Horizontal (mm) vs. Vertical (mm); gray scale value, GSV). Two regions are outlined: Number 1 (left) and number 2 (right).
This document uses examples from ultrasound, as these most easily provide clear insight into the aspects considered.
2 Contrast
Contrast is a measure of the degree of difference in image intensity between two types of tissues on
an (medical) image. The images with the lowest contrast are typically those based on ultrasound.
Thus, we will start out by considering the ultrasound image in Figure 1. There are two outlines on this
image, outlining different types of tissue. The question is now, if the regions appear different on the
image. Let the mean and standard deviation of the pixel values of these two regions be called μ1, μ2,
σ1 and σ2, respectively. Then, a measure of contrast can be calculated this way:
Cn = |μ1 – μ2| / (σ1² + σ2²)^(1/2)    (1)
which is an adapted version of the so-called contrast-to-noise ratio, in the sense that a large numerator
corresponds to a large contrast and a large denominator represents high noise in the regions.[1] Specifically,
equation (1) measures both the difference in amplitude value between the two regions and
the variation in amplitude inside the regions. The contrast can therefore increase in either of two
ways: The larger the difference between the mean values of the two regions, the higher the contrast.
The smaller the variation inside a region, the higher the contrast.
With the image in Figure 1, the following values are obtained: μ1 = 46.5, μ2 = 59.3, σ1 = 18.1,
σ2 = 20, such that Cn becomes 0.5.
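As a check, equation (1) can be evaluated directly from these summary statistics; the function name below is made up for illustration:

```python
import math

def contrast_cn(mu1, mu2, sigma1, sigma2):
    """Contrast-to-noise-like measure of equation (1):
    Cn = |mu1 - mu2| / sqrt(sigma1^2 + sigma2^2)."""
    return abs(mu1 - mu2) / math.hypot(sigma1, sigma2)

# Values measured from the two regions in Figure 1 (rounds to about 0.5):
print(round(contrast_cn(46.5, 59.3, 18.1, 20.0), 2))
```

In practice the means and standard deviations would of course be computed from the pixel values inside the two outlined regions.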
The variance in the region represents both measurement artifacts and noise. In ultrasound images it
is speckle and noise. In CT images, it is mainly noise.
If the variance of the image values representing the tissue regions is small, the denominator in equation (1)
becomes very small, so that Cn is dominated by the small noise terms rather than by the numerator,
|μ1 – μ2|, rendering the equation inappropriate. The following contrast measure can be used as an alternative:
C = |μ1 – μ2| / |μ1 + μ2|    (2)

which can be more appropriate for planar X-ray, CT and MRI. Equation (2) represents the most classical measure.

Figure 2 Left: Measurement situation. An ultrasound transducer is scanning an agar block in water.
A 0.1-mm-diameter glass sphere is embedded at the centre of the agar block. Right: The corresponding
ultrasound image of the glass sphere (Lateral (mm) vs. Depth (mm); magnitude in V). The size of the
glass sphere is indicated with the ring. (In some prints of this document, the white dot in the agar
block to the left can disappear.)
The contrasts in equations (1) and (2) are measures that attempt to quantify what the eye sees. The
general idea is that the more difference in image intensity there is between two regions, the easier it
is to see the difference and the higher the calculated contrast will be.
But since the two equations are quite simple, they have limited applicability. If, for instance, equation (1)
is used on CT images with very low noise and low variation within the image, the denominator
becomes very small and the contrast measure "explodes".
It is fine to calculate both measures for all kinds of images, but care should be taken when interpreting them.
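This limited applicability can be demonstrated numerically; in the sketch below (with made-up region statistics), the mean difference is kept fixed while the region noise shrinks:

```python
import math

def contrast_cn(mu1, mu2, s1, s2):
    """Equation (1): noise-normalized contrast."""
    return abs(mu1 - mu2) / math.hypot(s1, s2)

def contrast_c(mu1, mu2):
    """Equation (2): classical contrast, independent of noise."""
    return abs(mu1 - mu2) / abs(mu1 + mu2)

# Two regions with a fixed mean difference but less and less noise
# (illustrative numbers): Cn grows without bound, C stays fixed.
for s in (10.0, 1.0, 0.01):
    print(round(contrast_cn(100.0, 120.0, s, s), 1),
          round(contrast_c(100.0, 120.0), 3))
```

As the standard deviations approach zero, equation (1) "explodes" while equation (2) keeps reporting the same bounded contrast, which is why the latter can be more appropriate for low-noise modalities.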
3 Spatial resolution size
Consider the measurement situation in the left part of Figure 2. An ultrasound transducer is used to
scan an agar block in water. A 0.1 mm in diameter glass sphere is embedded at the centre of the agar
block. The image of such a target does not just become a black image with a white dot, but rather a
black image with a quite large “Gaussian dot”, as shown in Figure 2(right). This function is called the
point spread function (psf), as it shows that a point target can never be imaged as a point, but will be
spread out to a (larger) spot. The psf can have different dimensions in the depth direction, the lateral
direction and the image-to-image direction.
Problem 1 The dimensions in the depth and lateral direction are typically different for ultrasound
images but not CT images. Why?
The question is now how to define the size of this function. If we look at a profile along the
dotted line in Figure 2 (right), we get the curve in Figure 3.
This curve has roughly a Gaussian shape and a natural definition of the width (here just along the
lateral direction) could be to measure the width of the curve at “half maximum”. If the maximum of
the curve is considered to be 0 dB, then the width at half maximum will correspond to –6 dB, as
20log10(0.5) = –6 dB. Other definitions such as –3 dB are just as common. At –6 dB, the lateral width
of the psf is approximately 1 mm.
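The –6 dB width can be read off numerically from a sampled profile; the sketch below uses a synthetic Gaussian profile (amplitude, centre and σ are illustrative values chosen so the width comes out near 1 mm, as in Figure 3):

```python
import numpy as np

# Sketch: measure the -6 dB width of a Gaussian-shaped psf profile.
x = np.linspace(18.0, 20.5, 2501)          # lateral position (mm)
sigma = 0.425                              # mm, illustrative value
profile = 5800.0 * np.exp(-(x - 19.25)**2 / (2 * sigma**2))

# -6 dB corresponds to an amplitude factor of 10^(-6/20), about 0.501,
# i.e. essentially "half maximum".
level = profile.max() * 10 ** (-6.0 / 20)
above = x[profile >= level]                # samples above the -6 dB level
width = above[-1] - above[0]               # about 1 mm
print(round(width, 2))
```

The same thresholding approach works for any measured profile, not just a Gaussian one.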
Figure 3 The profile of the point spread function in Figure 2 (right) along the dotted line (Lateral (mm) vs. Magnitude (V)).
Problem 2 Please draw this –6 dB lateral width in Figure 3.
Problem 3 The –6 dB width in the depth direction of the ultrasound image is typically smaller, as can
be observed in Figure 2 (right). Why? (You might already have answered this in Problem 1.)
3.1 Use of notation
Note that an imaging system with a (–6 dB) spatial resolution size (in a given direction) of 1 mm is
a better system (all other things equal) than a system with a (–6 dB) spatial resolution size of 2 mm.
The first system can image smaller objects than the second system. We here talk about a numerical
quantity, and smaller is thus better.
It is “wrong” to say that a high spatial resolution size is better (unless this is desired).
One can say that a given system has a better or higher resolution than another system, but "higher
resolution" then refers to a smaller spatial resolution size, which can be quite confusing.
4 Temporal resolution size
When investigating dynamic properties of living tissue, a series of images is often recorded one after
another. Here the time between successive images becomes important. For ultrasound systems, this
temporal resolution size1 is referred to in terms of its reciprocal value, the frame rate (see more in the
ultrasound chapter of this book).
5 References
[1] Wilhjelm JE, Jensen MS, Jespersen SK, Sahl B & Falk E: Visual and Quantitative Evaluation of Selected Image Combination Schemes in Ultrasound Spatial Compound Scanning. IEEE Transactions on Medical Imaging, Vol. 23, No. 2, February 2004.
1. Strictly speaking: If there is no temporal averaging taking place.