— 23 —
CCD and CMOS sensors
Nick Waltham
Space Science and Technology Department, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, UK
Abstract
The charge-coupled device (CCD) has been developed primarily as a compact
image sensor for consumer and industrial markets, but is now also the preeminent
visible and ultraviolet wavelength image sensor in many fields of scientific research
including space-science and both Earth and planetary remote sensing. Today’s
scientific or science-grade CCD will strive to maximise pixel count, focal plane
coverage, photon detection efficiency over the broadest spectral range and signal
dynamic range whilst maintaining the lowest possible readout noise. The relatively
recent emergence of complementary metal oxide semiconductor (CMOS) image sensor technology is arguably the most important development in solid-state imaging
since the invention of the CCD. CMOS technology enables the integration on a
single silicon chip of a large array of photodiode pixels alongside all of the ancillary
electronics needed to address the array and digitise the resulting analogue video
signal. Compared to the CCD, CMOS promises a more compact, lower mass, lower
power and potentially more radiation tolerant camera.
The charge-coupled device
The concept of the charge-coupled device (CCD) emerged from the search for
a silicon-based electrical equivalent of the magnetic bubble memory (Boyle and
Smith 1970). In its simplest implementation, the CCD structure consists of a series
of closely spaced electrodes separated from an underlying semiconductor substrate
by a thin insulating oxide layer (Figure 23.1a). When a bias voltage is applied to an
electrode, a depletion region is formed in the semiconductor immediately beneath
it. The depletion region is in effect a potential well which can store an electrical
charge packet. By pulsing the electrodes in an appropriate sequence the potential
well, and hence its charge packet, can be transferred through the semiconductor
(Figure 23.1b). A shift register can be formed by adding circuits for the insertion
and detection of charge packets.
Although the CCD was originally conceived as a device to store digital information, it was evident that because a potential well could store variable quantities
of charge it could also convey analogue signals. As a result, the CCD concept has
Figure 23.1: (a) Charge-coupled device structure. (b) Charge transfer operation.
been applied to analogue signal processing functions including simple delay lines
and transverse filters. Without question, however, the greatest impact of the CCD
has been in the application of solid-state image sensors. Today the CCD appears
throughout the consumer, industrial, medical, security and scientific imaging sectors. Only the relatively recent emergence of CMOS sensors has challenged the
CCD in the consumer markets of mobile telephones and digital cameras where
miniaturisation and lower power consumption are key requirements. This raises the question of why the CCD has become the image sensor of choice for nearly all visible-light imaging systems, and in particular for scientific applications. There already exist a number of texts that describe the concepts, physics and operation
of the basic CCD structure, including its variations and refinements for particular
applications — see for example: Séquin and Tompsett (1975), Beynon and Lamb
(1980), Janesick (2001) and Holst and Lomheim (2007). Only a brief overview can
be presented here, summarising the types of arrays and technologies that have been
developed and optimised for scientific imaging applications and in particular space
instrumentation.
Prior to the development of the CCD, electronic imaging relied mostly on the use
of camera tubes. As a solid-state device the CCD offered the immediate attractions
of compactness, ruggedness and low-voltage operation. Of importance is that silicon
is very responsive to visible light. At the peak of its sensitivity a back-illuminated
CCD is capable of absorbing and sensing very nearly all incident photons. With
appropriate fabrication and optimisation, silicon can also respond well to extreme
ultraviolet light and soft X-rays. In its simplest form, the basic structure of a
CCD image sensor is formed from an array of electrodes running orthogonally
to a series of isolated charge transfer channels (Figure 23.2). The electrodes are
usually connected together in groups of two, three or four phases. Biasing of the
electrode phases creates an array of isolated potential wells, or pixels, that collect
the photon-generated electrons. Following an exposure, the imaging area electrodes
Figure 23.2: Full-frame CCD array architecture.
are pulsed or “clocked” to transfer the integrated image charge pattern down the
array one line at a time. The lowest line is transferred into a serial readout register
that runs orthogonally to the imaging area transfer channels or columns. This
register is clocked separately and at a higher rate allowing each pixel to be read
out sequentially through a charge detection amplifier. Once the complete line has
been read out the imaging area electrodes can be clocked again so that the next
image line is transferred into the readout register. This sequence is repeated until
all image lines have been read.
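By way of illustration, the readout sequence just described can be modelled in a few lines of Python. This is purely a didactic sketch of the clocking order; the array dimensions and charge values are invented for the example and do not correspond to any particular device.

```python
import numpy as np

def read_full_frame(image_section: np.ndarray) -> np.ndarray:
    """Illustrative model of full-frame CCD readout.

    image_section holds the integrated charge (electrons) in each pixel.
    Rows are transferred one at a time into the serial register, which is
    then clocked out pixel by pixel through the charge detection amplifier.
    """
    rows, cols = image_section.shape
    charge = image_section.copy()            # charge still on the imaging area
    output = np.zeros_like(charge, dtype=float)

    for line in range(rows):
        # One image-area clock cycle: the lowest line drops into the serial
        # register and every other line moves down by one row.
        serial_register = charge[-1, :].copy()
        charge[1:, :] = charge[:-1, :]
        charge[0, :] = 0

        # The serial register is clocked at a higher rate: pixels read sequentially.
        for px in range(cols):
            output[rows - 1 - line, px] = serial_register[px]

    return output

# Example: a tiny 4 x 3 "exposure"; the output reproduces the integrated image.
frame = np.arange(12).reshape(4, 3).astype(float)
print(read_full_frame(frame))
```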
The CCD architecture described above is commonly referred to as a full-frame
CCD array, the most popular for scientific imaging applications in space and
ground-based astronomy. However, an obvious limitation is that the image will
be smeared if the CCD remains exposed while being read out, particularly if the
frame readout time is a significant fraction of the exposure time. A common solution is to incorporate a mechanical shutter in front of the CCD so that an integrated
image can be read out in darkness. An alternative solution is to add a light-shielded
storage array to which the integrated charge pattern can be transferred rapidly at
the end of the exposure period. The storage array can then be read out while a
new image accumulates in the imaging array. There are two basic formats: the
frame-transfer CCD (Figure 23.3a) and the interline-transfer CCD (Figure 23.3b).
In the frame-transfer CCD the storage array is added beneath the imaging array.
This effectively doubles the size of the silicon chip and is unattractive in commercial
cost-sensitive applications. In the interline-transfer CCD the storage array is incorporated as light-shielded columns adjacent to the imaging columns. The weakness
in this approach is that the sensitivity is effectively halved. Unsurprisingly, this
architecture has seen little interest in the scientific community but rapidly gained
popularity in the digital camera and camcorder markets. Two further CCD architectures applicable to Earth and planetary remote sensing are the linear CCD
(Figure 23.4a), and the time-delay-integration (TDI) CCD (Figure 23.5a). The linear CCD consists of a row of photodiodes from which photon-generated charge is
Figure 23.3: (a) Frame-transfer CCD architecture. (b) Interline-transfer CCD architecture.
Figure 23.4: (a) The linear CCD. (b) Push-broom imaging.
transferred into a serial readout register. The acquisition of a two-dimensional image relies on the sensor being scanned across the target scene. Linear CCDs with
12 000 pixels or more are readily available. The technique of push-broom imaging
is used to acquire very high resolution imagery (Figure 23.4b).
The TDI CCD is advantageous when there is insufficient illumination to obtain
a useful signal from a conventional linear array. Its architecture is essentially the
same as a full-frame CCD array except that the x-y image format is typically very
much larger in x than in y (Figure 23.5a). To avoid image smear, push-broom
imaging requires the integrating CCD charge packets to be clocked down the array
at the same rate as the projected image scans over the array (Figure 23.5b). The
TDI provides gain because the target scene is now integrated over the number
of lines in the array. TDI CCDs have been used successfully to obtain very high
resolution imagery of Mars (Ebben et al 2007).
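A short calculation illustrates why TDI is attractive under low illumination: the collected signal grows linearly with the number of TDI lines, and the shot-noise-limited signal-to-noise ratio therefore improves as the square root of that number. The per-line signal and stage count below are assumed example values.

```python
import math

signal_per_line = 50.0          # electrons collected in one line time (assumed)
n_tdi_lines = 96                # number of TDI stages (assumed)

single_signal = signal_per_line
tdi_signal = signal_per_line * n_tdi_lines

# Shot-noise-limited SNR = signal / sqrt(signal)
snr_single = single_signal / math.sqrt(single_signal)
snr_tdi = tdi_signal / math.sqrt(tdi_signal)

print(f"TDI gain in signal: x{tdi_signal / single_signal:.0f}")
print(f"SNR improvement:    x{snr_tdi / snr_single:.1f}  (= sqrt(N) = {math.sqrt(n_tdi_lines):.1f})")
```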
Figure 23.5: (a) The TDI CCD. (b) TDI CCD push-broom imaging.
The scientific CCD
CCD fabrication technologies have undergone continuous development to improve performance. The development of large-format CCD arrays for scientific applications was driven largely by the aspirations of the ground-based astronomical
community and the requirement for a solid-state image sensor to replace the vidicon camera tube in space applications. In the late 1970s a number of astronomical
research groups developed camera systems using the first commercially available
sensors from Fairchild — see for example: Marcus et al (1979) and Leach et al
(1980). At the same time the CCD was being proposed for what were to become
NASA’s HST and the Galileo mission to Jupiter. The earliest sensors, for example
the Fairchild CCD201, had just 100 pixels × 100 pixels. RCA rose to the challenge
of designing a CCD with 320 pixels × 512 pixels, the minimum needed to meet the
requirements of the US TV standard. Texas Instruments collaborated with JPL
under NASA sponsorship to develop arrays of 400 pixels × 400 pixels and later
800 pixels × 800 pixels — the dawn of the scientific CCD. This quest for a large-area, high-sensitivity and low-noise solid-state image sensor resulted in many of the
developments now seen in today’s scientific or science-grade CCD catalogue. The
scientific CCD typically has several million pixels, high quantum efficiency over a
broad spectral range, low readout noise and wide dynamic range.
Charge-transfer efficiency
The first significant development in CCD fabrication addressed the poor charge-transfer efficiency (CTE) of the simple surface-channel CCD architecture illustrated
in Figure 23.1a. Interface “traps” at the silicon-silicon dioxide interface were found
to absorb and release charge with differing time constants, resulting in the smearing
of charge packets being transferred through them. The CTE was found to be so poor
as to make large-format image sensors impractical. The problem was overcome with
the introduction of the buried-channel CCD in which an additional layer of n-type
silicon is formed at the top of the p-type substrate and immediately below the
silicon dioxide insulating layer. The n-type silicon forces the potential wells to form
deeper within the substrate and away from the traps at the silicon-silicon dioxide
Figure 23.6: (a) CCD charge detection amplifier. (b) Correlated double sampling.
interface. Signal charge can now be transferred through the substrate without it
coming into contact with the traps. Transfer efficiencies better than 99.999 % and
as much as 99.9999 % are now achieved, enabling thousands of transfers without
significant signal loss or smear.
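The practical meaning of these figures is easily demonstrated: the fraction of a charge packet that survives N transfers is CTE raised to the power N. The short sketch below assumes, purely for illustration, a corner pixel of a large array undergoing roughly 8000 parallel and serial transfers.

```python
# Fraction of a charge packet surviving N transfers is CTE ** N.
n_transfers = 4096 + 4096   # parallel plus serial transfers for a corner pixel (assumed array size)

for cte in (0.999, 0.99999, 0.999999):
    retained = cte ** n_transfers
    print(f"CTE = {cte:.6f}: {retained * 100:6.2f} % of the charge packet survives {n_transfers} transfers")
```

Even a CTE of 99.9 % would leave almost nothing of the packet after thousands of transfers, which is why the buried channel was essential for large-format sensors.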
Readout noise
Another important and on-going development has been to reduce readout noise.
The output circuit of a CCD is a charge detection amplifier consisting of an output diffusion which has an associated parasitic node capacitance, a reset transistor to recharge the node capacitance and an output transistor operating as a source-follower to sense voltage variations on the node (Figure 23.6a). Pixel readout requires that the output node capacitance is pre-charged to a “reset” potential (Vrd) by pulsing the reset transistor “on” and “off” with a reset clock (φr) as illustrated
in Figure 23.6b. In practice there is a small discharge of the output node resulting
from feed-through of the reset clock as it falls low. Signal charge from the last gate
of the serial readout register (Vg ) is then transferred onto the output node resulting
in further discharge. The output transistor presents a video output signal (Vos ). As
the reset clock falls low, Johnson noise from the channel resistance of the reset
transistor is frozen on the output node capacitance (C) and results in RMS pixel-to-pixel reset voltage fluctuations of √(kB T /C). This noise is commonly referred to
as kB T C noise, or reset noise, and would be the dominant source of CCD readout
noise without a cancellation technique known as correlated double sampling (CDS).
CDS acquires two samples of the CCD output, one before (S1), and one after (S2)
the signal charge has been transferred onto the output node (Figure 23.6b). Subtraction of the two samples cancels the kB T C noise as it remains “correlated”
or unchanged between the two samples. After reset-noise cancellation, transistor
noise in the source-follower output transistor becomes the dominant noise source.
The resultant readout noise will depend on the signal measurement bandwidth
and thus the CCD readout rate. Higher readout rates require greater bandwidth
in the readout electronics and result in higher readout noise. The best of today’s
CCDs achieve a readout noise of ≈ 6 electrons RMS when read at ≈ 1 MHz, or
as little as ≈ 2 electrons RMS when read at ≈ 100 kHz. A compromise is required
Figure 23.7: (a) The electron multiplication CCD. (b) Avalanche breakdown gain.
between a low readout rate to minimise readout noise and a high readout rate to
minimise the overall frame readout time. High frame readout rate is clearly desirable to maximise observing efficiency. One solution to the dilemma is to add
additional readout amplifiers to the CCD array such that a number of sub-sections
can be read in parallel. Low readout rate for minimal noise is now possible without
compromising observing efficiency. An added benefit is that the additional readout
amplifiers provide a degree of redundancy in the case of a failure of a single readout amplifier. A disadvantage, however, is that additional calibration of the data
is required to correct for gain mismatch between the multiple readout amplifiers.
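The size of the reset noise that CDS removes can be estimated from the node capacitance alone, since the RMS voltage fluctuation is √(kB T /C). The sketch below uses an assumed node capacitance and operating temperature; neither value is taken from a specific device.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19     # electron charge, C

T = 233.0               # operating temperature in kelvin (assumed, roughly -40 degC)
C = 15e-15              # output node capacitance in farads (assumed 15 fF)

v_rms = math.sqrt(k_B * T / C)      # RMS reset voltage fluctuation
electrons_rms = C * v_rms / q       # expressed as equivalent signal electrons

print(f"kTC noise: {v_rms * 1e6:.0f} uV RMS = {electrons_rms:.0f} electrons RMS")
# Correlated double sampling removes this term because the same reset level
# is present in both samples; what remains is the source-follower noise.
```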
Electron multiplication readouts
To reduce readout noise still further, two CCD manufacturers have developed
electron multiplication readouts to provide on-chip gain (Hynecek 2001). A multiplication register of typically several hundred stages is added to the end of the serial
readout register (Figure 23.7a). One electrode phase in the multiplication register
is clocked at significantly higher voltage (typically up to 40 V in amplitude) which
enables a probability of avalanche multiplication (Figure 23.7b). The word “probability” is key as the net gain of a single stage might only be ≈ 1.01 but after
500 stages the net gain is 1.01⁵⁰⁰ or ≈ 145. Because the gain is applied before the
charge detection output amplifier, the effective readout noise is the output noise
divided by the gain. This enables very low readout noise to be maintained at the
high readout rates of TV cameras (≈ 10 MHz).
The electron multiplication CCD provides a solid-state equivalent of an intensified camera tube and with sufficient gain a solid-state photon counting detector.
Not to be overlooked is an additional noise, introduced by the discrete nature of the multiplication process, which effectively increases the shot noise by a factor of √2
(Robbins and Hadwen 2003). The net gain is highly dependent on the amplitude
of the high-voltage clock and so appropriate control of its stability is necessary.
Despite these and further issues concerned with spurious charge and ageing of the
gain stage, the electron multiplication CCD provides a new and powerful tool for
low-light imaging.
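The gain and effective-noise arithmetic quoted above is easily reproduced; the per-stage gain and stage count are those used in the text, whilst the amplifier noise figure is an assumed illustrative value.

```python
per_stage_gain = 1.01
n_stages = 500
amplifier_noise = 60.0          # electrons RMS at ~10 MHz readout (assumed value)

total_gain = per_stage_gain ** n_stages
effective_noise = amplifier_noise / total_gain
excess_noise_factor = 2 ** 0.5  # shot-noise penalty of the multiplication process

print(f"net gain             ~ {total_gain:.0f}")
print(f"effective read noise ~ {effective_noise:.2f} electrons RMS")
print(f"excess noise factor  ~ {excess_noise_factor:.2f}")
```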
Spectral range
A third continuing development of the CCD has been the challenge to maximise
its sensitivity over the widest spectral range possible. The first CCDs were “front-illuminated” requiring the incident photons to penetrate through semi-transparent
polysilicon electrodes to reach the underlying substrate. Unfortunately, the absorption depth of silicon is such that a large fraction of the light is absorbed within the
electrodes, particularly at the shorter “blue” wavelengths. On the other hand, there
is a greater chance that the longer “red” and near-infrared wavelength photons will
penetrate deeply into the silicon substrate below the depletion regions. Electrons
from these photons are free to drift laterally in a field-free region with the result that
image resolution is degraded or blurred. A solution is to manufacture the CCD on
epitaxial silicon which consists of a thin layer of the nominally-doped silicon, typically 10 µm to 20 µm thick, on top of a very highly-doped bulk substrate. Electrons
created in the bulk substrate are likely to recombine before they drift to the depletion regions. Image resolution is maintained but at the cost of sensitivity to “red”
wavelengths. To solve the lack of sensitivity to “blue” wavelengths researchers and
manufacturers started to develop the thinned “back-illuminated” CCD. This relies
on the bulk of the silicon substrate being removed by chemical etching, and the
CCD being illuminated through the back surface rather than through the polysilicon electrodes. In principle, the sensitivity, or quantum efficiency (QE), is very
much increased. However, a bare untreated silicon surface will contain a high density of recombination centres or trapping sites. Any photoelectrons which make
contact with this surface will re-combine with holes and be lost. Surface passivation is therefore necessary to reap the full benefits of the thinning and to maximise
the QE, particularly for the shorter “blue” wavelengths which are absorbed close
to this surface. An early method, referred to as back-surface charging, relied on
the CCD being held in a vacuum and subjected to a flooded exposure of ultraviolet light. This caused the native oxide to hold a negative charge that repelled
photo-generated electrons away and towards the depletion regions. Experiments
demonstrated very high QE but the process was unstable and required periodic
recharging. A more reliable and stable method, referred to as back-surface doping,
relies on a thin layer of dopant, typically boron, being incorporated in the back surface to form a p+ -layer (Figure 23.8a). The change of doping concentration gives
rise to a small potential step that repels photo-generated electrons towards the
depletion regions (Figure 23.8b).
Dopant is generally introduced by ion implantation but does not become active
unless in substitutional lattice sites. The lattice needs to reach a high temperature
for this to occur but, because the thinned substrate will not tolerate furnace temperatures, only the thin doped surface layer is heated, using a very short pulse from
an ultraviolet laser which heats the surface silicon to ≈ 1500 °C whilst increasing the temperature of the remaining silicon by only 5 °C to 10 °C. With the addition of an anti-reflection coating, excellent QE can now be obtained over a broad
spectral range. The advantages of rear-illumination are clear from the characteristic QE curves of front- and rear-illuminated CCDs illustrated in Figure 23.9.
However, multiple internal reflections within the thinned silicon substrate can give
rise to interference fringing, typically at wavelengths in excess of ≈ 750 nm to
Figure 23.8: (a) The thinned back-illuminated CCD. (b) Potential profile of the CCD.
Figure 23.9: Quantum efficiency curves for front- and back-illuminated CCDs.
800 nm, and rising in amplitude to ≈ 20 % at longer wavelengths. An example
frame is reproduced in Figure 23.10. Their calibration and removal in subsequent
image processing can prove problematic in some applications such as narrow-band
imaging and spectroscopy. One method of suppressing the fringing is to apply an
anti-reflection coating peaked at the fringing wavelengths that minimises internal
reflections. A second method is to etch a λ/4 (≈ 80 nm) deep groove over each half
pixel so any fringing effect over the two half pixels is complementary.
Focal plane coverage
The final significant developments of the science-grade CCD addressed the requirement for greater focal plane coverage than provided by the first CCDs which
were limited by the maximum reticle size of the manufacturing photolithography
to ≈ 20 mm × 20 mm. The technique of “stitching”, as illustrated in Figure 23.11,
was developed to allow a large-format CCD to be constructed by precision lithography from a selection of reticles, each containing a different part of the overall
CCD design. CCD size is now limited only by the size of the silicon wafer and
Figure 23.10: Interference fringing in a thinned back-illuminated CCD.
Figure 23.11: Fabrication of large-format CCDs by stitching.
the more practical aspects of defect-free yield and hence cost. For even greater
coverage, an obvious solution is to populate the focal plane with more than one
CCD, accepting that there will be gaps between the chips due to the chip packaging and necessary electrical connections. To minimise these gaps, CCDs have been
designed with minimal non-imaging on-chip circuitry and chip-carrier packaging
along one or more edges such that two or more chips can be closely butted up
against each other. Gaps of only ≈ 0.2 mm to 0.5 mm are achievable. By way
of example, photographs of two HST Wide Field Camera-3 (WFC3) CCDs, first
on a five-inch (12.7 cm) silicon wafer and then butted up together within the focal
plane assembly, are reproduced in Figure 23.12. The basic CCD design is a stitched
array of ≈ 4096 pixels × 2048 pixels with 15 µm pixel pitch such that the two
chips butted together provide an overall coverage of ≈ 4096 pixels × 4096 pixels
or ≈ 61.4 mm × 61.4 mm.
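The quoted WFC3 format follows from simple arithmetic on the pixel count and pitch; the butting gap used below is an assumed value within the 0.2 mm to 0.5 mm range given above.

```python
pixels_x, pixels_y = 4096, 2048
pitch_um = 15.0
butt_gap_mm = 0.3               # assumed, within the 0.2 mm to 0.5 mm range quoted

chip_x_mm = pixels_x * pitch_um / 1000.0    # 61.44 mm
chip_y_mm = pixels_y * pitch_um / 1000.0    # 30.72 mm
mosaic_y_mm = 2 * chip_y_mm + butt_gap_mm   # two chips butted along their 2048-pixel edges

print(f"single chip:      {chip_x_mm:.2f} mm x {chip_y_mm:.2f} mm")
print(f"two butted chips: {chip_x_mm:.2f} mm x {mosaic_y_mm:.2f} mm "
      f"(~4096 x 4096 pixels)")
```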
Dark current
Depending on the application, an important CCD characteristic is its leakage
current, otherwise known as dark current. All semiconductors suffer from leakage
currents that result from electrons having sufficient thermal energy to break free
Figure 23.12: HST WFC3 CCDs on a five-inch wafer and integrated in the focal plane assembly (© e2v technologies plc and Ball Aerospace & Technologies Corp.).
from the lattice. The surface traps at the silicon-silicon dioxide interface are largely
responsible for the dark current generated in a CCD. At 20 °C, 1 nA cm⁻² is typical, but this decreases by a factor of two for roughly every 7 °C to 8 °C reduction in operating temperature. Dark current has associated shot noise and a relative statistical pixel-to-pixel non-uniformity of roughly 3 % to 10 % RMS. At 20 °C,
dark current may well be the dominant contributor to the overall readout noise
of a CCD, even in moderate frame rate applications like TV sensors. However,
with sufficient cooling, typically in the range −60 °C to −100 °C, dark current
can be reduced to negligible levels even for the hour-long exposures often required
for astronomy. Liquid nitrogen has been popular for ground-based applications,
whereas passive radiators have been used to cool sensors in space instrumentation.
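The doubling rule quoted above can be turned into a rough estimate of the dark signal per pixel as a function of operating temperature. The pixel pitch and doubling interval below are assumed example values, and the simple rule becomes increasingly approximate over large temperature ranges.

```python
dark_20C_nA_cm2 = 1.0           # typical dark current at 20 degC (from the text)
pixel_pitch_um = 15.0           # assumed pixel size
doubling_step_C = 7.5           # halves for every ~7 to 8 degC of cooling
q = 1.602176634e-19             # electron charge, C

pixel_area_cm2 = (pixel_pitch_um * 1e-4) ** 2

for temp_C in (20, 0, -20, -40, -60, -80):
    scale = 2 ** ((temp_C - 20) / doubling_step_C)
    current_A = dark_20C_nA_cm2 * 1e-9 * scale * pixel_area_cm2
    electrons_per_s = current_A / q
    print(f"{temp_C:>4} degC: ~{electrons_per_s:10.3f} e-/pixel/s")
```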
An important consideration is the need for cleanliness because contamination,
including materials out-gassing from the immediate surroundings, can condense on
the cold CCD (Kimble et al 1994). The issue is particularly serious for CCDs imaging in the extreme ultraviolet as even the smallest layer of surface contamination
can significantly reduce the detector’s sensitivity. One solution to the problem is
to provide a cold-trap on which contamination is condensed before it reaches the
CCD. Another solution is to add a decontamination heater to the CCD package to
allow periodic warming and out-gassing of contaminants. One operating technique
and a further development have enabled CCDs to be operated with significantly
reduced dark current. Biasing the CCD such that the low level of the CCD clocks
is significantly below the substrate potential causes holes to accumulate at the silicon surface, fill surface traps, and so suppress the dominant surface dark signal
generation. The holes effectively change the surface from n-type to p-type and the
silicon can be considered as “inverted”. The technique of “dither-clocking” relies
on a confined signal charge being periodically shifted back and forth between adjacent electrodes and ensures that all of the silicon is periodically inverted. The
dark current suppression is a function of operating temperature and the frequency
of the “dither” and can result in more than two orders of magnitude of reduction.
A further development was to add an implant under one of the electrode phases to
define fixed potential wells under which charge could be collected with the whole
of the surface inverted. Dither clocking was no longer necessary. This CCD variant
is referred to as a multi-pinned phase (MPP) CCD, or sometimes as inverted-mode
operation (IMO). Disadvantages of the MPP CCD are firstly a reduction in the
charge storage capacity, or full-well capacity, of roughly 20 % and secondly a significantly increased dark signal non-uniformity.
Dynamic range
Finally, two of the most important metrics of a science-grade CCD are its full-well capacity and readout noise. Together these determine the dynamic range of the CCD, with ≈ 1 × 10⁵ or more being obtainable from today’s best devices.
Associated with these parameters is the relative linearity, which is typically 0.5 %
to 1 % and dominated by the characteristics of the charge detection amplifier.
Full-well capacity is a measure of the maximum signal charge that a pixel can
store and transfer, and is a function of the isolated electron storage area which is
determined by both pixel size and the number of electrode phases used to define
the pixel. A 4-phase pixel holds and transfers charge beneath half of the pixel area
whilst a 3-phase pixel can only work with one third of the pixel area. A 2-phase
pixel operates beneath half of the pixel area but the storage density is limited by
a greatly reduced potential well depth. By way of example, a 12 µm × 12 µm pixel
will typically store ≈ 60 000 electrons (2-phase), ≈ 100 000 electrons (3-phase) and
≈ 200 000 electrons (4-phase). A larger 24 µm × 24 µm 4-phase pixel could hold
more than 10⁶ electrons.
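Dynamic range then follows directly from these two parameters; a minimal calculation using figures quoted earlier in this chapter:

```python
import math

full_well_electrons = 200_000   # large 4-phase pixel (from the text)
read_noise_electrons = 2.0      # slow-scan readout at ~100 kHz (from the text)

dynamic_range = full_well_electrons / read_noise_electrons
print(f"dynamic range ~ {dynamic_range:.0f} : 1")
print(f"              ~ {20 * math.log10(dynamic_range):.0f} dB")
print(f"              ~ {math.log2(dynamic_range):.1f} bits of usable signal depth")
```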
This necessarily brief introduction to the science-grade CCD concludes by directing the reader to the literature for more in-depth discussion, recommending
Janesick (2001) in particular as a comprehensive source.
The CCD in space
The first CCDs to be flown in space were on the two Russian Vega probes
launched in 1984 to image the nucleus of comet Halley in 1986. ESA followed
with CCD cameras on board the Giotto spacecraft, also to comet Halley. The
development of the CCD for space applications can be traced back to 1974 when,
under the sponsorship of NASA, JPL undertook a programme to develop large-format CCD arrays for the Galileo mission to Jupiter, launched in 1989, and the HST, launched in 1990. Today the CCD is the preeminent visible- and ultraviolet-wavelength image sensor in space science and in both Earth and planetary remote
sensing.
Radiation damage
One of the most important issues for operation in the space environment is
radiation damage. Radiation changes the operational and performance characteristics of CCDs, and the primary concerns are ionisation, displacement damage and
transient effects.
Ionising radiation results in the accumulation of trapped charge in the CCD’s
oxide and the generation of traps at the silicon-silicon dioxide interface. The trapped
charge results in changes to the effective bias voltages applied to the CCD; these are referred to as flat-band voltage shifts. The accumulation of radiation is quantified
by a parameter called the total ionising dose (TID). A biased CCD will typically
exhibit a flat-band voltage shift of ≈ 0.08 V krad⁻¹ (Si).¹ Shifts of up to ≈ 2 V
can usually be tolerated by careful optimisation of the CCD bias voltages prior
to launch and/or by enabling software-controlled adjustments to be made during
the mission. The second effect of ionising radiation is a significant increase in the
surface dark current. Besides operating at a reduced temperature, inverted-mode
operation, either by dither-clocking or the use of an MPP CCD, will generate holes
that will suppress the additional dark charge.
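The two figures quoted above imply a simple total-dose budget for the available bias-voltage margin; the end-of-life mission dose below is an assumed example.

```python
shift_per_krad_V = 0.08         # flat-band shift, V per krad(Si) (from the text)
tolerable_shift_V = 2.0         # shift that can be absorbed by bias adjustment (from the text)
mission_dose_krad = 10.0        # assumed end-of-life dose behind shielding

expected_shift_V = shift_per_krad_V * mission_dose_krad
dose_budget_krad = tolerable_shift_V / shift_per_krad_V

print(f"expected flat-band shift: {expected_shift_V:.2f} V")
print(f"dose at which the ~2 V margin is used up: {dose_budget_krad:.0f} krad(Si)")
```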
Displacement damage results from energetic particles, for example protons and
neutrons, which collide with the silicon atoms and displace them from their lattice
sites, creating vacancy-interstitial pairs. Many of these recombine but some vacancies can link with phosphorus atoms to form a trap (e-centre) that degrades CTE.
Particle-induced lattice damage can also give rise to an increase in dark current and
its non-uniformity, and introduce a new type of pixel noise referred to as “random
telegraph signals” (RTS). A good review of the subject can be found in Hopkinson
et al (1996). The observable degradation in CTE is very much application dependent but is particularly serious in the field of spectroscopic X-ray astronomy. Signal
charge trapped by a proton-induced defect is likely to be later released into trailing
pixels with resulting image smear. This arises because the time for an empty trap
to capture charge is very short whereas the time constant of the release is very
much longer. The release time constant is exponentially temperature dependent
and so a complex interdependency exists between operating temperature and the
readout rate of the CCD. The observable CTE degradation is also a strong function
of the general background signal present on the CCD with the smearing of small
charge packets being more pronounced in the absence of a background signal to
fill the traps. CTE is often assessed by the degree of smearing observed in X-ray
events from an Fe⁵⁵ source (Janesick 2001). Events will normally be confined to one or two pixels but will become increasingly smeared with increasing radiation-induced CTE degradation (Figure 23.13). The same effect has been observed in the
images of star fields on HST (Kimble et al 2000) and has significant implications
for the accuracy of any radiometric calibration. The second result of displacement
damage is an increase in the contribution of dark current and its non-uniformity
from the bulk silicon. Individual pixels may become localised regions of high dark
current, often referred to as “hot pixels”, and the number can be expected to increase during the mission life. Some pixels exhibit RTS noise, an unstable increase
in dark current which jumps randomly between well-defined levels (Hopkins and
Hopkinson 1993, 1995). The final aspect of displacement damage to address is the
finding that a good proportion, ≈ 60 % to 85 %, of CTE damage can be repaired by annealing. This involves periodically heating the CCD to ≈ 120 °C, or higher,
for tens of hours at a time (Holland 1991). Similar studies have also demonstrated
¹ 0.008 V Gy⁻¹
Figure 23.13: Smearing of Fe⁵⁵ X-ray events after CTE degradation in an irradiated CCD.
partial annealing of hot-pixel damage at elevated temperatures (Holland et al 1990;
Holland 1991).
Transient radiation effects occur when a particle, for example a cosmic ray,
passes through the active volume of the CCD. Ionisation creates charge along the
particle’s path and results in a track of charge that may traverse many pixels.
Although the events are transient and cause no lasting damage, they can result
in significant noise within an image. The identification and rejection of cosmic-ray events in subsequent image processing requires that the observer acquires two
images of the same field and accepts only those features that are common to both
images.
Protective measures
The importance of the CCD in space instrumentation has led to considerable
research being undertaken to understand radiation damage, how best to protect
against it and the development of radiation-hardening technologies. The first defence has been to ensure that the CCD is optimally shielded, typically with up
to ≈ 25 mm of aluminium. Significantly more shielding worsens matters because
incident high-energy particles create an excess of secondary events from the shielding itself. The effects of ionising radiation can also be reduced by manufacturing
the CCD with a modified gate dielectric layer, but this has the penalty of some loss in
production yield. Minimisation of CTE degradation has seen the development of
the supplementary buried channel or notch. The concept is to confine the signal
charge to a narrow channel, and thus a smaller volume, such that it is exposed to
a reduced number of trapping sites. This can be effective for the transfer of small
signals but the advantage is lost once the signal spills out of the notch and into
the wider channel. Finally, although CCD manufacturing has focused almost exclusively on the buried n-channel CCD, p-channel devices are now being explored
for their greater resilience to CTE degradation, principally because of the absence
of phosphorus atoms with which to form e-centres (Hopkinson 1999).
Figure 23.14: (a) 3-Transistor CMOS Active Pixel Sensor. (b) Array architecture.
CMOS
The emergence of CMOS image sensor technology is arguably the most important development in solid-state imaging since the invention of the CCD. CMOS image sensors, most of which can today be referred to as CMOS Active Pixel Sensors
(APS), exploit the same silicon chip technology used in microprocessor systems.
The attraction is mostly due to the nature of CMOS technology in that many millions of transistors can be integrated on a single silicon circuit. This presents the
opportunity to integrate a large array of pixels, each with its own photodiode and
readout transistors, alongside all of the ancillary electronics needed to address the
array, buffer the analogue video signal and even digitise it ready for processing,
storage or display.
CMOS sensors, like CMOS integrated circuits, operate at considerably lower
voltages than CCDs, typically between 1.8 V and 5 V depending on process form
and geometry. The large-scale functional integration and low-voltage operation are
clearly very attractive in consumer markets that demand compactness, long battery
life and low production cost. As a consequence, many digital cameras and mobile
telephones now exploit CMOS rather than CCD sensors.
The question arises as to how CMOS compares to CCD technology for scientific
imaging applications and in particular space instrumentation. CMOS sensors are
designed in many architectural forms and a good introduction can be found in
Holst and Lomheim (2007). In its simplest form the CMOS APS pixel consists of
a photodiode and three transistors: one to pre-charge the photodiode, one to sense
the signal voltage on the photodiode and one to select the row (Figure 23.14a).
A pixel array will typically be x-y addressed by shift registers (Figure 23.14b)
although some implementations employ address decoders to enable random pixel
access. The APS pixel array is accessed one row at a time by enabling all the
row-select transistors within a single row of pixels. At the bottom of the array, the
individual pixels within the row are selected and read out column-by-column. The
video signal may be fed through a multiplexer to an analogue output amplifier, a
serial analogue-to-digital converter (ADC) or an array of column-parallel ADCs.
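The x-y addressing sequence can again be sketched in illustrative Python. The function, array size and the 10-bit, 3.3 V digitisation below are invented for the example; they stand in for the on-chip shift registers, column multiplexer and ADC rather than describing any real device interface.

```python
import numpy as np

def read_aps_array(pixel_voltages: np.ndarray) -> np.ndarray:
    """Illustrative x-y readout of a 3-transistor CMOS APS array.

    pixel_voltages models the photodiode voltage left on each pixel at the
    end of the exposure. Rows are enabled one at a time (row-select), then
    the columns are selected and digitised one by one.
    """
    rows, cols = pixel_voltages.shape
    digitised = np.zeros((rows, cols), dtype=int)

    for row in range(rows):               # vertical shift register: enable one row
        row_bus = pixel_voltages[row, :]  # each pixel in the row drives its column bus
        for col in range(cols):           # horizontal shift register: select columns
            # assumed 10-bit ADC with a 3.3 V reference
            digitised[row, col] = int(round(row_bus[col] * 1023 / 3.3))
    return digitised

frame = np.random.uniform(0.0, 3.3, size=(4, 6))   # toy 4 x 6 pixel array
print(read_aps_array(frame))
```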
Figure 23.15: Layout of a typical CMOS pixel.
A disadvantage of the basic three-transistor pixel is that it is subject to kB T C
noise, which is typically the dominant source of readout noise. CDS has been found
to work with varying degrees of success by storing a “reset” frame off-chip prior to
an exposure and later subtracting it from the “signal” readout frame. To enable
on-chip CDS, researchers and manufacturers have developed four-transistor pixels
in which the new transistor is used as a gate between the photodiode and the sense
transistor. The concept is that the output node can be sampled after reset and
before signal from the photodiode is switched onto the output node. This appears
analogous to the CCD output amplifier circuit but requires the photodiode to
be “pinned” at an appropriate potential. This ensures that all the signal charge is
transferred to the output node, thus avoiding image lag, and that the kB T C noise
remains correlated between the two CDS samples. The disadvantage of the pinned
photodiode is that the pinning significantly reduces the charge storage capacity of
the pixel and thus also the usable linear dynamic range.
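For the basic three-transistor pixel, the off-chip CDS described above amounts to subtracting a stored reset frame from the subsequent signal frame. The minimal sketch below, with assumed noise and signal levels, shows the principle; in practice the cancellation is only as good as the correlation between the stored reset level and the level actually present at signal readout, which is why the technique succeeds only to varying degrees.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)

ktc_noise = rng.normal(0.0, 40.0, shape)         # reset (kTC) noise, electrons RMS (assumed)
scene = rng.poisson(500.0, shape).astype(float)  # photo-generated signal, electrons (assumed)

reset_frame = ktc_noise                          # sampled just after reset, before the exposure
signal_frame = ktc_noise + scene                 # sampled after the exposure; same reset level assumed present

cds_frame = signal_frame - reset_frame           # the kTC term cancels; the scene remains
print("residual error after CDS:", np.abs(cds_frame - scene).max())
```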
Quantum efficiency
The quest for a scientific CMOS sensor is today still in its infancy. The quantum efficiency of front-illuminated CMOS sensors is compromised by the in-pixel
electronics and aluminium bus tracks reducing the “fill factor”, a measure of the
fraction of the pixel’s area that is actually sensitive to light. The layout of a typical
CMOS pixel is shown in Figure 23.15. In this example, the photodiode occupies
only 19 % of the pixel area (as defined by the dotted line) but will typically yield a
fill factor ≈ 30 %. The losses arise from the reflection of light from the aluminium
bus lines and photon-induced electrons in the substrate being absorbed and lost
within the in-pixel transistor electronics. An obvious solution to the problem is to
thin and back-illuminate the sensor and several research groups have now demonstrated back-illuminated CMOS sensors that achieve quantum efficiencies in line
with their CCD counterparts (Waltham et al 2007; Janesick et al 2007).
Readout noise and dynamic range
A second area of research is to minimise readout noise and maximise charge
storage capacity and linearity. A limitation is the linear voltage swing that can
be obtained outside the transistor threshold regions in modern low-voltage CMOS
processes. The linear dynamic range of today’s best CMOS sensors is ≈ 5000,
considerably less than a CCD. Several approaches to overcoming the problem are
being investigated including the concept of pixels that deliberately behave in a nonlinear fashion and sensors that allow individual pixels to have varying exposure
periods. More complex five- and six-transistor pixels are also being investigated
in the pursuit of increased dynamic range (Janesick et al 2006, 2007). We can
anticipate further progress in the future as researchers adapt to exploiting the
advantages of CMOS technology rather than attempting to emulate the elegance
of the CCD.
CMOS in space
CMOS sensors are already used in space, having applications in satellite bus
instrumentation such as star trackers and inspection cameras. CMOS is yet to have
a significant impact in scientific payloads for which the CCD remains dominant.
The principal advantages of CMOS over the CCD for space instrumentation are
compactness, low mass, low power and radiation hardness. The CCD remains unchallenged in dynamic range and photometric accuracy. The effects of radiation in
CMOS and CCD sensors are similar in that both suffer from ionising radiation and
displacement damage. However, the key advantage of the CMOS APS is that there
is no degradation of CTE. A new effect, unseen in the CCD, is a susceptibility to
single-event latch-up (SEL). SEL is a potentially destructive condition, triggered
by an energetic particle, in which parasitic circuit elements form the equivalent of a
silicon-controlled rectifier. The result is a large and potentially damaging increase
in supply current that can be cleared only by temporarily removing power.
CMOS sensors will also exhibit the other effects of displacement damage such as
hot pixels and RTS noise. CMOS sensors will undoubtedly play an increasing role
in space instrumentation, but today they are most likely to appear in those applications that have extreme size, mass or power constraints, applications which require
complex operational modes such as random pixel access, or in those missions for
which the radiation damage to a CCD would prove insurmountable.
Bibliography
Beynon JDE, Lamb DR (1980) Charge-coupled devices and their applications.
McGraw-Hill
Boyle WS, Smith GE (1970) Charge-coupled semiconductor devices. Bell Syst Tech
J 49:587
Ebben TH, Bergstrom J, Spuhler P, Delamere A, Gallagher D (2007) Mission to
Mars: the HiRISE camera on-board MRO. Proc SPIE 6690:6690B-1–14
Janesick J (2001) Scientific Charge-Coupled Devices. SPIE Press, ISBN 9780819436986
Janesick J, Andrews J, Elliott T (2006) Fundamental performance differences between CMOS and CCD imagers: Part I, Proc SPIE 6276:62760M-1–19
Janesick J, Andrews J, Tower J, Grygon M, Elliott T, Cheng J, Lesser M, Pinter J
(2007) Fundamental performance differences between CMOS and CCD imagers;
Part II. Proc SPIE 6690:669003-1–23
Holland AD (1991) Annealing of proton-induced displacement damage in CCDs
for space use, Proc. of 10th Symposium on Photoelectronic Image Devices, IoP,
33–40
Holland A, Abbey A, Lumb D, McCarthy K (1990) Proton damage effects in EEV
charge coupled devices. Proc SPIE 1344:378–395
Holst GC, Lomheim TS (2007) CMOS/CCD Sensors and Camera Systems. The International Society for Optical Engineering (SPIE), Bellingham, WA, and JCD Publishing
Hopkins IH, Hopkinson GR (1993) Random Telegraph Signals from Proton-irradiated CCDs. IEEE Trans Nuc Sci 40:1567–1574
Hopkins IH, Hopkinson GR (1995) Further measurements of random telegraph
signals from proton-irradiated CCDs. IEEE Trans Nuc Sci 42:2074–2081
Hopkinson GR (1999) Proton damage effects on p-channel CCDs, IEEE Trans Nuc
Sci 46:1790–1796
Hopkinson GR, Dale CJ, Marshall PW (1996) Proton effects in charge-coupled
devices, IEEE Trans Nuc Sci 43:614–627
Hynecek J (2001) Impactron – a new solid state image intensifier. IEEE Trans on
Electron Devices, 48:2238–2241
Kimble R, Brown L, Fowler W (plus 11 authors) (1994) CCD detector for the Space
Telescope Imaging Spectrograph. Proc SPIE 2282:169–181
Kimble RA, Goudfrooij P, Gilliland RL (2000) Radiation damage effects on the
CCD detector of STIS, UV, optical and IR space telescopes. Proc SPIE 4013:532–
544
Leach RW, Schild RE, Gursky H (plus three authors) (1980) Description, performance and calibration of a charge-coupled device camera. PASP 92:233–245
Marcus S, Nelson R, Lynds R (1979) Preliminary evaluation of a Fairchild CCD-211
and a new camera system. Proc SPIE 172:207–231
Robbins MS, Hadwen BJ (2003) The noise performance of electron multiplying
charge-coupled devices. IEEE Trans on Electron Devices 50:1227–1232
Séquin CH, Tompsett MF (1975) Charge transfer devices. Academic Press, NY
Waltham NR, Prydderch M, Mapson-Menard H (plus three authors) (2007) Development of a thinned back-illuminated CMOS active pixel sensor for extreme ultraviolet spectroscopy and imaging in space science. Proc SPIE 6690:669007-1–14