[Diagram: image intensifier with a spot film device, video camera, cine (35 mm) camera, and spot film (photospot) camera (105 mm in this case) attached]
Appendix H: Chapter 40 in Carlton: Fluoroscopy
Slides 1-5: Fluoroscopic equipment and recording devices
6-13: Components and characteristics of the Image Intensifier (II)
14-18: Multifield, FOV, and spatial resolution calculation
19: ABC
20-25: The video camera
26: Monitors
27: Coupling the II and the camera
28: Transfer of energy
29-30. Interlaced scanning
31. Digitizing the video signal
32. Fluoro gray scale
Appendix H: Chapter 40 in Carlton: Fluoroscopy
Cones allow photopic vision (daylight)
Rods allow scotopic vision (low light)
Spot films recorded on a spot film device (in 4-on-1 or 1-on-1 format)
* Both were taken during fluoroscopy
* Both are radiographic exposures
* Both are 9 x 9 inch films made especially for this purpose (though some fluoroscopes use standard size cassettes)
* Both were filmed using the same device, but in different formats.
35 mm cine film: real-time motion, projected on a projector.
Spot films (photospots) recorded on a spot film (photospot) camera
[Figure: 105 mm roll film, identifiable by the sprockets that drive the roll of film unique to this size, and 90 mm cut film]
* Both were taken during fluoroscopy
* Both are fluoroscopic exposures (i.e. taken off the output phosphor of the II)
* Both are serial films (not designed to be projected as a moving image, but taken in rapid sequence such as 1, 2, or 4 per second)
* Both are filmed by the same camera but are different sizes.
The transfer of energy through the fluoroscopic imaging chain: The II
The three major components of the fluoroscopic imaging chain are:
1. the image intensifier,
2. the video camera, and
3. the CRT monitor.
Image Intensifier (II)
[Diagram, labeled from input to output:]
* Glass envelope
* Input phosphor: CsI; concave surface so all electrons arrive at the output screen at the same time, but this causes vignetting
* Photocathode: cesium & antimony
* Electrostatic focusing lens
* Anode: 25,000 V
* Output phosphor: zinc cadmium sulfide
The transfer of energy through the fluoroscopic imaging chain
[Diagram: electron paths converging at the focal point]
1. Remnant x-rays to light at the input phosphor
2. Light to electrons (photoemission) at the photocathode
3. Electrons back to light at the output phosphor
Intensification is accomplished by flux gain and minification gain.
Flux Gain
[Diagram: photocathode, electrostatic focusing lens, anode (25 kV positive potential), output phosphor (zinc cadmium sulfide)]
Electrons accelerated across the tube gain kinetic energy from the attractive force of the anode (conversion efficiency). The collision at the output screen liberates that energy in the form of more light photons.
Minification Gain
[Diagram: input phosphor (CsI), 9” diameter; output phosphor (zinc cadmium sulfide), 1” diameter]
The ratio of the areas of the input and output screens is expressed as the minification gain:
9² / 1² = 81 times
Total Brightness Gain
The product of the flux gain
and the minification gain is
the total brightness gain.
If the flux gain were 70 and the minification gain 81:
70 x 81 = 5670 total brightness gain
5000-30,000 is the typical range.
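To see the arithmetic of these two slides in one place, here is a minimal sketch (the 9” and 1” diameters, the flux gain of 70, and the 5670 result are the example values above; the function names are illustrative only):

```python
def minification_gain(input_diameter, output_diameter):
    """Ratio of input to output phosphor areas (diameters in the same units)."""
    return (input_diameter / output_diameter) ** 2

def total_brightness_gain(flux_gain, minification_gain):
    """Total brightness gain is the product of flux gain and minification gain."""
    return flux_gain * minification_gain

m_gain = minification_gain(9, 1)        # 9" input, 1" output -> 81
bg = total_brightness_gain(70, m_gain)  # 70 x 81 = 5670
print(m_gain, bg)                       # 81.0 5670.0
```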
Quantum Mottle
Because the image intensifier makes the image on the output
screen thousands of times brighter than the image
on the input screen, much less radiation is needed.
If too few photons are used, the image becomes
grainy and unacceptable for diagnostic purposes.
Generally speaking, a better image is always
obtained by using more photons, but the price
is paid in patient dose.
Conversion factor
The ratio of the intensity of illumination at the output phosphor (candela per square meter) to the radiation intensity that produced it (mR/s). Typical conversion factors of 50 to 300 correspond to the 5000-30,000 brightness gain range.
Brightness gain = ratio of output phosphor brightness to input phosphor brightness.
Conversion factor = ratio of output phosphor brightness to the radiation intensity that produced it.
The conversion factor is the newer and preferred characteristic for purchasing decisions.
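A small sketch of the conversion-factor ratio, assuming hypothetical measured values (the luminance and exposure-rate numbers and the variable names are illustrative, not from the slide):

```python
# Conversion factor = output phosphor luminance (cd/m^2) per unit of
# input radiation intensity (mR/s).
luminance_cd_per_m2 = 150.0    # assumed brightness measured at the output phosphor
exposure_rate_mR_per_s = 1.0   # assumed radiation intensity that produced it

conversion_factor = luminance_cd_per_m2 / exposure_rate_mR_per_s
print(conversion_factor)       # 150.0, inside the typical 50-300 range
```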
Veiling Glare
Scattering of x-rays, electrons, and light within the tube.
Multifield (Dual focus) Electronic Magnification
(FOV magnification, as opposed to increased-OID magnification)
By increasing the positive charge on the electrostatic focusing lens, the convergence (focal) point is changed (moved further from the output screen).
[Diagram: 7”, 9”, and 11” fields of view on the input phosphor; 11” mode vs. 7” mode]
Dual focus or Electronic Magnification
(FOV magnification, as opposed to increased-OID magnification)
When the convergence point is further from the output screen (blue path in the diagram), the photoelectrons have further to diverge, and the image arriving at the output screen is larger. But patient dose is increased.
Electronic magnification creates better resolution.
[Diagram: 7”, 9”, and 11” fields of view; 11” mode vs. 7” mode]
Resolution (Resolving Power)
Measured in line pairs. One line and one space is a line pair.
[Diagram: one line pair spanning 1 mm = 1 line pair per millimeter of spatial resolution]
A Line Pair Test Tool
(for Testing Spatial Resolution)
The Test Tool provides line pairs of various
sizes to measure spatial resolution
More line pairs per mm resolved = better spatial resolution
Why? More line pairs per mm means smaller structures can be demonstrated.
Back to: FOV Magnification vs OID Magnification
Calculating spatial resolution in lp/mm
When OID is increased the image size is magnified but so is penumbra and detail is sacrificed.
When FOV is decreased the image size is magnified but the spatial resolution actually improves.
Here’s why:
In this simplified example* let’s say we have 100 pixels across the width of the monitor.
If we display 25 mm of anatomy in those 100 pixels, 4 pixels will be used to display each mm of
anatomy. It takes 2 pixels to display a line pair (one pixel for the line and one for the space).
Therefore, in this 25 mm FOV there is 2 lp/mm of spatial resolution.
If we display 10 mm of the same anatomy in a smaller FOV, 10 pixels will be used to display
each mm of anatomy, and the image will be magnified. Therefore, in a 10 mm FOV there will
be 5 lp/mm of spatial resolution. Ergo, a smaller FOV = more lp/mm = better spatial resolution.
Using a smaller FOV to display better spatial resolution applies to all imaging, not just
fluoroscopic. So why isn’t it used exclusively? Because you don’t see as much of the big picture
in a small FOV.
* This simplified example does not take into account the actual number of pixels across the monitor, such as 525, 1024, 2048, and so on, and it does not include the conversion of the FOV from inches to mm before doing the calculation.
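The same arithmetic in a minimal sketch (the 100-pixel monitor and the 25 mm and 10 mm FOVs are the simplified example values from the slide; the function and variable names are illustrative):

```python
def spatial_resolution_lp_per_mm(pixels_across, fov_mm):
    """Pixels per mm of anatomy, divided by the 2 pixels needed per line pair."""
    pixels_per_mm = pixels_across / fov_mm
    return pixels_per_mm / 2

print(spatial_resolution_lp_per_mm(100, 25))  # 2.0 lp/mm in the 25 mm FOV
print(spatial_resolution_lp_per_mm(100, 10))  # 5.0 lp/mm in the smaller 10 mm FOV
```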
Automatic Brightness Control
(ABC)
While doing fluoroscopy, the mA (0.5 to 5 mA) and the kVp are automatically adjusted to compensate for changes in the thickness and composition of the part as the fluoro tube is being moved. ABC adjusts the mA; lag is evident, especially if the tube is moving fast.
ABC also increases patient dose when using electronic magnification.
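ABC is a hardware feedback loop, but a toy sketch of the idea may help: brightness is measured, and the mA is nudged toward a target within the 0.5-5 mA range from the slide. Everything else here (the step size, target value, and brightness numbers) is an illustrative assumption, not how any particular unit is built:

```python
def abc_adjust_ma(measured_brightness, target_brightness, ma, step=0.1):
    """Nudge mA toward the target brightness, clamped to the 0.5-5 mA fluoro range."""
    if measured_brightness < target_brightness:
        ma += step      # image too dark -> raise mA
    elif measured_brightness > target_brightness:
        ma -= step      # image too bright -> lower mA
    return min(5.0, max(0.5, ma))

ma = 1.0
for brightness in (80, 85, 90, 100, 110):  # brightness rising as the tube moves over a thinner part
    ma = abc_adjust_ma(brightness, target_brightness=100, ma=ma)
    print(round(ma, 2))
```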
Vidicon and Plumbicon camera tubes
The target: about the size of a dime
[Diagram of Vidicon camera component parts: tube, window, signal plate, target, anode, electron gun (cathode), globules, control grid, steering and deflecting coils]
Vidicon and plumbicon cameras are analog tubes that have been in use since the early days of television. They are being replaced by CCDs (discussed in the presentation on digital radiography).
The transfer of energy through the
fluoroscopic imaging chain
How the camera works:
The concept: The image from the output phosphor
of the II must be broken down into individual units
(like pixels, only called globules on the camera target).
Each of these globules must encode the intensity of
light by turning it into an electrical impulse that will
flow out of the camera (through the pins at the rear)
and be sent to the monitor for display. Each globule
of the camera target has a corresponding pixel on the
monitor. That is, they are both in the same column
and row of their respective matrices.
[Diagram label: optical lens to focus light from the II onto the camera target]
The transfer of energy through the
fluoroscopic imaging chain
Encoding is accomplished by the unique properties of the globule material. When excited by light, electrons in the globules are raised into higher orbital shells, which makes the globules conductors of electricity. More intense light leads to more excitation and better conduction. In this manner the light image from the II is represented (encoded) as a matrix of globules in various stages of excited ionization (like the space charge around the filament of the x-ray tube).
The transfer of energy through the
fluoroscopic imaging chain
Then each globule must be discharged to send the image data it contains out of the camera as electrical impulses. A stream of electrons from the electron gun showers the globules, left to right, top to bottom (like reading a book). The steering and deflecting coils provide electromagnetic force fields that control the motion of the pencil-point-sized focused beam. As the beam scans the globules on the target, each globule conducts electrons through it in proportion to how excited (and how conductive) that particular globule is.
The transfer of energy through the
fluoroscopic imaging chain
[Diagram label: cable for video signal]
As each globule is discharged individually, the current flowing through it travels to the signal plate, which is wired to pins on the rear of the tube. The series of impulses (some strong, some weak, and everything in between) flowing from the camera comprises the video signal. This encoded signal will control the intensity of fluorescence of the pixels on the monitor and recreate the light image that came from the output phosphor of the II.
The changes in the intensity of the video signal are known as modulation.
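A toy sketch of the raster readout just described: a 2-D matrix of globule excitation values is scanned left to right, top to bottom, into a 1-D stream of impulses that becomes the video signal (the matrix size, values, and names here are illustrative assumptions):

```python
# Globule excitation levels on the camera target (0 = dark, 9 = bright light from the II)
target = [
    [0, 2, 9],
    [1, 5, 7],
    [0, 3, 8],
]

# Scan left to right, top to bottom, like reading a book;
# each globule contributes an impulse proportional to its excitation.
video_signal = [globule for row in target for globule in row]
print(video_signal)  # [0, 2, 9, 1, 5, 7, 0, 3, 8] -> modulated signal sent to the monitor
```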
The Monitor
To recreate the image on the monitor, the number of electrons stripped from the cathode (electron gun) of the tube must be controlled, for each pixel fluoresces at a different intensity. This is where the video signal comes in.
The control grid on the camera supplies a constant negative potential to strip electrons from the space charge of the cathode, but the control grid on the monitor is supplied by the video signal. Since the video signal modulates as the encoded image signal, the charge it supplies to the control grid is constantly changing, and a greater or lesser number of electrons is fired toward the phosphor with each impulse.
In this manner each pixel of the monitor is caused to fluoresce in direct proportion to the intensity of light that was incident on its corresponding globule on the camera.
The transfer of energy through the
fluoroscopic imaging chain
[Diagram: the modulating video signal waveform, showing no signal, high intensity, and low intensity; the optical lens focuses light from the II onto the camera]
Coupling of the II to the camera
In the previous discussion a lens coupling was used to focus the light on the camera target. Fiber optics are a newer alternative.
* A beam-splitting mirror allows photospot and camera filming at the same time.
In conventional TV systems (not high definition) the electron beam of the monitor does not write to every line of pixels, but to every other line: first to the odd-numbered lines, then the screen is blanked, then to the even-numbered lines. Each pass is called a field, and 60 fields are written per second. The combination of 2 fields is a frame, and although a full frame never appears on the screen at once, it happens too fast for us to perceive it. At 30 frames a second we see no flicker.
Interlaced Scanning
[Diagram: 525 scan lines, Line 1 through Line 525]
262½ odd lines scanned first = Field 1
262½ even lines scanned second = Field 2
2 Fields = 1 Frame
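A tiny sketch of how the 525 lines split into the two interlaced fields described above (the 525-line count is the conventional system from the slide; the code itself is only illustrative):

```python
lines = list(range(1, 526))                 # 525 scan lines in a conventional TV frame

field_1 = [n for n in lines if n % 2 == 1]  # odd-numbered lines, scanned first
field_2 = [n for n in lines if n % 2 == 0]  # even-numbered lines, scanned second

print(len(field_1), len(field_2))           # 263 262 -> two half-frames (fields)
# 60 fields per second interlace into 30 full frames per second.
```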
Question: Is the interlaced scheme desirable?
No
Then why is it used?
It is a remnant of the original technology.
What is better than interlaced?
See the next presentation on High Definition
Question: How is a conventional fluoroscopic, analog imaging chain converted to digital?
Answer: Measure the impulses from the camera with an analog-to-digital converter (ADC) and store the results in a computer. If the monitor is analog, a digital-to-analog converter (DAC) must restore the analog signal as it came from the camera.
[Diagram: camera → ADC (binary values such as 1 0 1 1) → computer (ALU, CU, primary memory (RAM), secondary memory) → DAC → monitor]
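A minimal sketch of the ADC/DAC step (the 8-bit depth, the reference voltage, and the sample values are illustrative assumptions, not values from the slide):

```python
def adc(voltage, v_ref=1.0, bits=8):
    """Quantize an analog video voltage (0..v_ref) to an integer code."""
    levels = 2 ** bits - 1
    return round(min(max(voltage, 0.0), v_ref) / v_ref * levels)

def dac(code, v_ref=1.0, bits=8):
    """Restore an approximate analog voltage from the digital code."""
    levels = 2 ** bits - 1
    return code / levels * v_ref

samples = [0.10, 0.45, 0.80]                  # modulating video signal from the camera
codes = [adc(v) for v in samples]             # digital values stored in the computer
restored = [round(dac(c), 3) for c in codes]  # analog signal restored for an analog monitor
print(codes, restored)                        # [26, 115, 204] [0.102, 0.451, 0.8]
```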
Finally: Why are densities reversed on the
fluoro monitor? Comparison of maxillary sinuses
* Low atomic densities
* Low attenuation
* Input phosphor glows brightly
* Camera target highly excited.
* Video signal is strong
* Many electrons are fired from
the electron gun of the monitor
* Pixels glow brightly
* Light area on monitor
[Images: comparison of the same sinuses on film and on fluoro]
The End