Imaging devices
The first imaging device - the eye
The first observers used their eyes.
The eye is amazing: it senses color, works under various light conditions, and can hormonally increase its sensitivity in low light. We focus by changing the shape of the lens. The dynamic range is staggering; the eye can adjust to extreme conditions.
On the other hand, the eye is not very sensitive, cannot integrate over time, and makes 'reproducible' measurements hard. Color is subjective: the brain adjusts the color balance to preconceived perceptions. Our color sensitivity depends on the light level, for example; we are much more red-sensitive at low light levels. The response of the retina is position dependent: we are more sensitive to low light in peripheral vision. Oh, and there are the blind spots, where our brain interpolates, i.e., 'invents' data. Yet that's what we had for centuries.
Galileo wrote/drew the orbits of the Jovian moons he discovered.
Tycho made an SN 'light curve'.
And Reverend Evans, from the 1950s all the way to now, looks at galaxies and finds SNe by eye. Though he may be getting older; his last one was SN 2008aw, and he was, after all, born in 1937.
He found tens of SNe this way, and kept such a good log of his observations that they were translated into a measurement of the SN rate in the local universe.
Next – photography
Most 20th-century telescopes used large glass plates coated with photosensitive emulsions. These were exposed and then developed, like film. Wise still has the dark room, with leftover chemicals. Measurements were then done on prints. Ask some of the 'older' astronomers in the dept.
However, their quantum efficiency is of the order of 2%, their response is non-linear, and, well, they are obviously cumbersome.
The 1980s brought the revolution of digital imaging
An electronic device captures the image and sends it to a computer; the data can then be manipulated digitally, kept without loss (we wish - try to read old tapes...), and sent around.
The main device is the CCD (Charge-Coupled Device). There are also CMOS detectors, not used much in astronomy (except in the IR), for reasons we will discuss. CCDs are extremely sensitive (they can reach over 90% QE) and mostly linear; they are now quite cheap, but they used to be small. In 2009 Smith and Boyle got the Nobel Prize for the invention of the CCD in the 60s. The 70s saw military development (for spy satellites).
CCD
In short:
Every pixel has a 'photo-sensitive layer' that frees an electron when a photon is absorbed. The pixel acts as a capacitor, accumulating the charge until a voltage is applied; the charge is then shipped out of all the pixels and translated to a voltage, which is sampled, digitized, and stored.
Physics of a pixel:
Semiconductor – has an energy gap, so only if some threshold voltage (or another source of energy for the electrons; in our case, light) is applied can there be free electrons in the 'conduction band', and hence conductivity. The most common elemental semiconductors are silicon and germanium, both in the same column of the periodic table. One can also combine elements from the columns to the left and right: GaAs (gallium arsenide), InSb (indium antimonide).
When an electron jumps from the valence to the conduction band it leaves a hole behind, which aids conduction as well: the hole can get filled by an electron from one side, leaving a hole there. It is as if the holes move in the direction opposite to the electrons.
Doping – By doping an SC (adding impurities), typically from the next or previous column in the
table, one can change the band structure and available conduction carriers.
For example, a 1 cm³ specimen of a metal or semiconductor has of the order of 10²² atoms. In a metal every atom donates at least one free electron for conduction, so 1 cm³ of metal contains on the order of 10²² free electrons, whereas a 1 cm³ sample of pure germanium at 20 °C contains about 4.2×10²² atoms but only 2.5×10¹³ free electrons and 2.5×10¹³ holes. The addition of 0.001% (10⁻⁵) of arsenic (an impurity) donates an extra 10¹⁷ free electrons in the same volume, and the electrical conductivity is increased by a factor of 10,000.
n-type (n for negative): doped with extra free electrons, making conduction easier.
p-type (p for positive): doped with extra holes; more capacitive.
Common dopants are boron, arsenic, and phosphorus.
Photo-sensitivity – When optical light, which has a wavelength of a few × 10³ Å (i.e., photon energies of a few eV), hits the SC, an electron-hole pair is created. In a CCD we want to keep the two physically apart, so that they don't recombine, thus building up charge.
The flux falls off like I(z) = I(0) exp(−z/z0), so the thicker the slab of silicon, the more likely a photon is to be caught.
z0 depends on the material, obviously, but also on wavelength and T:
Shorter λ -> smaller z0 (the device can be thinner).
Lower T -> larger z0 (because of the Fermi-Dirac distribution: lower T means fewer electrons in the conduction band). However, higher T means lots of thermal noise.
For Si at 77 K, z0(4000 Å, 6000 Å, 8000 Å) = (0.25 μm, 4 μm, 20 μm).
If λ > λc, where λc = hc/E_gap, z0 goes to ∞.
Typical λc:
GaAs: 9,200 Å
Si: 11,100 Å
Ge: 18,500 Å
InSb: 69,000 Å
Doping will change these numbers.
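To put numbers on the absorption law and the cutoff, here is a minimal Python sketch; the band gaps are the values implied by the λc table above, and the Si z0 values at 77 K are the ones just quoted:

```python
import math

H_C = 12398.4  # h*c in eV*Angstrom

def cutoff_wavelength(e_gap_ev):
    # lambda_c = h*c / E_gap; beyond this, z0 -> infinity
    return H_C / e_gap_ev

def absorbed_fraction(thickness_um, z0_um):
    # fraction of photons caught in a slab: 1 - I(z)/I(0) = 1 - exp(-z/z0)
    return 1.0 - math.exp(-thickness_um / z0_um)

# band gaps (eV) implied by the lambda_c values above
for name, e_gap in [("GaAs", 1.35), ("Si", 1.12), ("Ge", 0.67), ("InSb", 0.18)]:
    print(f"{name}: lambda_c ~ {cutoff_wavelength(e_gap):,.0f} A")

# Si at 77 K: a 15 micron slab catches almost all the blue, half the red
for wl, z0 in [(4000, 0.25), (6000, 4.0), (8000, 20.0)]:
    print(f"{wl} A: {absorbed_fraction(15.0, z0):.0%} absorbed in 15 microns")
```

The 8000 Å row is why thin devices lose the red: with z0 = 20 μm, a ~10 μm slab absorbs only about 40% of the photons.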
Trapping of the charge –
Once an electron-hole pair is produced we want to keep the two separated, so we apply an electric field that keeps them apart. Then we can accumulate the electrons until readout time.
Charge transfer –
Pixels are arranged in rows. Between them there are 'channel stops', so charge can only move in one direction. The motion is done with 3 phases: the voltage is shifted ('clocked') with a square wave, and all rows are moved one step. The last row is the output register; it is read out entirely, and a FET (field effect transistor) transforms the charge into a voltage, which is then amplified, digitized, etc. (we will discuss this later).
3 phases is the most common design (there are clever designs with fewer phases).
For a typical output transistor with C = 0.1 pF, V = Q/C gives 1.6 μV per electron. This is then amplified before the A/D converter.
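A one-line check of that charge-to-voltage figure (a sketch, using the 0.1 pF quoted above):

```python
E_CHARGE = 1.602e-19  # electron charge in Coulomb

def volts_per_electron(capacitance_farad):
    # V = Q / C at the CCD output node
    return E_CHARGE / capacitance_farad

print(f"{volts_per_electron(0.1e-12) * 1e6:.1f} uV per electron")  # ~1.6 uV
```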
So in a basic buried-channel CCD we have, going from the top:
1) The contacts (3 or fewer).
2) The 'gate oxide', an insulator which acts as the dielectric of the capacitor.
3) The n-buried channel. This keeps the electrons from accumulating too close to the surface of the silicon; it is the 'channel' along which the electrons will flow once we read out.
4) The photoactive layer: p-doped silicon. This is where the pairs are formed.
5) Channel stops, to prevent columns from spilling into each other.
Variations:
MPP (multi-pinned phase) CCDs
Most modern CCDs are MPP: by holding a negative voltage, dark current at the oxide interface is suppressed and the electrons are trapped in boron implants; the voltage is then inverted during readout.
Back illumination
The front side has all the electrodes, which block the light, especially blue light -> illuminate from the back side instead. Then the silicon has to be thin (of order 10 μm), or the pairs will be produced far from the gates and recombine. This is a 'thinned back-illuminated CCD'.
Better in the blue, but more fringing, and not enough depth for the redder bands (which have larger z0).
Typically there is also an AR (anti-reflection) coating, and sometimes a fluorescent layer that produces an optical photon when hit with UV - a 'down converter' - making the CCD more sensitive to UV.
Gold or platinum coatings can give far-UV or X-ray sensitivity.
Thick, deep depleted CCDs
These are red-optimized; they are actually thick rather than thin, with a larger depletion region where the pairs are formed and the electrons are captured. The price is less blue sensitivity and more cosmic-ray hits.
Orthogonal transfer arrays
New, and not yet really tested on sky; implemented in ODI (WIYN) and Pan-STARRS1.
The channel stops are dynamic, and the charge is shifted along with the atmosphere.
Additional properties of CCDs
- Depending on optimization: good QE from ~1000 Å to 1.1 μm. Can get close to 100%.
- Dark current:
   * Room temperature, non-MPP: 10⁵ e/s/pixel. Will kill astronomy.
   * 150 K (liquid nitrogen), MPP: 1 e/s/pixel. Negligible.
   * This is why astronomy cameras are cooled, usually with liquid nitrogen; amateur cameras use thermoelectric cooling that can go down to ~0 °C.
   * The CCD typically sits in a vacuum (against condensation), with a cold finger that dips into the nitrogen bath.
- Cosmic rays: high-energy particles (mostly protons) from the sun or other sources create cascades in the atmosphere. Muons can survive down to the detector, and in thicker CCDs they bump into electrons. They look like streaks in the image. A bigger problem in space.
- Luminescence: pixels with impurities that act like LEDs...
- Saturation: when too many electrons accumulate they spill over - usually not through the channel stops, so you get 'blooming' or 'bleeding' along the column, and a whole column can go fakakt. This limits the dynamic range.
- CTE: Charge transfer efficiency - the fraction of the charge that survives each shift. A CTE of 99.999% may sound great, but in a 4000×4000 CCD the charge is shifted 4000 times: 0.99999⁴⁰⁰⁰ = 0.96, so up to 4% can go missing. It is rarely a real problem today. By over-scanning one can measure the CTE.
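The arithmetic, as a quick sketch:

```python
cte = 0.99999        # charge retained per shift
n_shifts = 4000      # shifts across a 4000x4000 CCD

retained = cte ** n_shifts
print(f"after {n_shifts} shifts: {retained:.3f} of the charge")  # ~0.96, i.e. up to ~4% lost
```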
- Cosmetics: there are always hot/bad/dead pixels, and sometimes whole columns. The price of a detector is often proportional to the number of defects, since each manufacturing run yields a fraction of each quality grade. 'Engineering' grade means bad cosmetics: cheap, and good for initial studies and instrument design. For astronomy such devices are often good enough, since we can always compensate for the smaller filling factor; you can't do that in other applications.
- Pixel scale:
Pixel sizes are 9-30 μm, around 15 μm on average.
Let's consider a 4 m, f/4 prime-focus telescope:
f = 16 m.
Angular scale = 1/16 rad/m ≈ 4 deg/m; × 3600 ″/deg, ÷ 10⁶ μm/m, × 15 μm ≈ 0.22″ per 15 μm pixel.
The atmosphere smears the signal of a point source into the PSF, typically with a FWHM (full width at half maximum) of 0.5″ at great sites in good conditions, up to 3-5″ at bad sites. So ~0.2″ allows for good sampling of the PSF.
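The plate-scale arithmetic, as a sketch (the notes round 1/16 rad/m up to 4 deg/m; the exact figure is slightly smaller):

```python
import math

def plate_scale_arcsec_per_um(focal_length_m):
    # small-angle scale at the focal plane: 1/f radians per meter
    arcsec_per_m = math.degrees(1.0 / focal_length_m) * 3600.0
    return arcsec_per_m / 1e6  # per micron

f = 4.0 * 4.0  # 4 m aperture at f/4 -> f = 16 m
print(f"{plate_scale_arcsec_per_um(f) * 15:.2f} arcsec per 15 um pixel")
# prints ~0.19"; with the 4 deg/m rounding one gets the 0.22" quoted above
```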
More pixels mean more (useless) data, more noise, slower read times, and higher cost.
- Size
CCDs used to be tiny; they are less and less so. They are also abuttable, so one can build mosaics. This also allows for a shorter read time, since every sub-array is read independently.
One can 'pave' a focal plane. The largest devices are ~10K × 10K; with 10 μm pixels that is still ~10 cm on a side (~15 cm across the diagonal). In angular size: 0.22″ × 10,000 ≈ 0.6 deg.
Reminder: the moon (and the sun) have a diameter of about 0.5 deg.
In PTF, for example, we have 12 CCDs of 2K×4K, in two rows of 6, with 1″/pixel (the seeing is ~2″ at best, so there is no sampling problem), so 8K × 12K pixels ≈ 8000″ × 12000″ ≈ 7 deg², as the sketch below checks.
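A quick check of that field-of-view figure (a sketch, using the 8000″ × 12000″ quoted above):

```python
width_arcsec, height_arcsec = 12000.0, 8000.0   # 12K x 8K pixels at ~1 arcsec/pixel

area_deg2 = (width_arcsec / 3600.0) * (height_arcsec / 3600.0)
print(f"field ~ {area_deg2:.1f} deg^2")  # ~7.4 deg^2
```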
But the P48 on which we work has a corrected field of about 30 deg². This is how big photographic plates used to be!
PTF uses an existing camera (the old CFHT 12K), but we are planning to pave the whole focal plane with a new camera, now that it is (financially) feasible.
Data, amplifiers, read noise
How much data is generated by big arrays?
For PTF: 2K × 4K × 12 × 2 bytes ≈ 200 MB per exposure. We take one every minute, plus ~30 s for readout, slew, etc.; 8 h/night gives ~300 exposures/night ≈ 60 GB/night, i.e., tens of TB per year.
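The bookkeeping, as a sketch (using the 60 s cadence and ~30 s overhead quoted above):

```python
bytes_per_exposure = 2048 * 4096 * 12 * 2      # 12 CCDs of 2K x 4K, 2 bytes/pixel
exposures_per_night = 8 * 3600 // (60 + 30)    # 8 h of 60 s exposures + 30 s overhead
nightly_bytes = bytes_per_exposure * exposures_per_night

print(f"{bytes_per_exposure / 1e6:.0f} MB per exposure")   # ~200 MB
print(f"{nightly_bytes / 1e9:.0f} GB per night")           # ~60 GB
print(f"{nightly_bytes * 365 / 1e12:.0f} TB per year")     # ~20 TB, i.e. tens of TB
```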
Why 2 bytes? The voltage coming out of the amplifier(s) goes to the A/D converter(s), typically 16-bit.
This divides the scale into 2¹⁶ = 65,536 discrete values, so a full range of ~10 V means 152 μV/count. (This is indeed about 100 times more than what comes straight out of the CCD, hence the amplifier.)
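The quantization step, as a sketch:

```python
full_scale_v = 10.0   # assumed ~10 V full range into the A/D
adc_bits = 16

step_v = full_scale_v / 2**adc_bits
print(f"{step_v * 1e6:.0f} uV per count")  # ~153 uV, ~100x the 1.6 uV/e- at the CCD output
```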
BUT: the amplifier creates noise, typically a few electrons per pixel.
(Note that this means you don't want pixels that are too small, or you'll have more read noise per source.)
Gain: the number of electrons per count.
Let's assume the well depth is 250,000 electrons (~ the saturation level).
If the gain is 1 e/count, then with a 16-bit A/D we will saturate at ~65,000 counts.
We could instead set a gain of 4 and have a 'coarser' measurement but more dynamic range; the digitization noise is typically negligible (unless doing high-resolution spectroscopy or very narrow filters - then you'd choose a smaller gain). In most instruments the gain is fixed at an optimized level.
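A sketch of that gain trade-off, with the numbers above:

```python
well_depth = 250_000   # electrons, ~saturation level
adc_max = 2**16 - 1    # largest count from a 16-bit A/D

for gain in (1.0, 4.0):  # electrons per count
    counts_at_full_well = well_depth / gain
    limit = "the A/D" if counts_at_full_well > adc_max else "the full well"
    print(f"gain {gain:g} e-/count: saturates at "
          f"{min(counts_at_full_well, adc_max):,.0f} counts, limited by {limit}")
```

With gain 1 the A/D clips at 65,535 counts (~65,000 electrons, a quarter of the well); with gain 4 the full 250,000-electron well fits in the 16-bit range.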
On-chip binning (less read noise).
Subarray readout (less read time).
Another clever way of operating CCDs is called drift scanning:
The sky is allowed to drift across the image, i.e., there is no tracking, but the CCD is read out at the same pace:
t = (detector size in arcsec) / (15″/s × cos(DEC)),
where 15″/s is just 360 deg / 24 h.
This will typically give an exposure of 1-2 min at most. Good for shallow surveys; it was used in SDSS.
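A sketch of the crossing-time formula (the chip size and pixel scale here are made-up illustrative values):

```python
import math

def drift_time_s(detector_size_arcsec, dec_deg):
    # sidereal drift rate: 360 deg / 24 h = 15 arcsec/s, slower away from the equator
    rate_arcsec_s = 15.0 * math.cos(math.radians(dec_deg))
    return detector_size_arcsec / rate_arcsec_s

# e.g. a 2048-pixel chip at 0.4 arcsec/pixel, on the celestial equator
print(f"{drift_time_s(2048 * 0.4, 0.0):.0f} s")  # ~55 s, consistent with '1-2 min at most'
```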
But why?
Because CCDs are slow to read out precisely: they can be read faster, but then the read noise goes up and the CTE goes down. This is their first flaw.
Solutions:
1) Many smaller arrays read independently. This can reduce the read time to a few seconds, from the few minutes it can take on large arrays. It is also cheaper (easier to make small chips), but fill factors become an issue, and so does the electronics: how do you make sure there is no cross-talk between the devices (ghosts)?
2) CMOS detectors.
Complementary metal oxide semiconductors:
active individual sensors - a bunch of photodiodes (light -> voltage) where everything happens at the pixel level.
'Active' means there is amplification 'on site'.
Pros:
- They are cheap.
- They can be read quickly.
- Read is non-destructive.
- No blooming.
- Less prone to cosmic rays.
Cons:
- Low QE.
- Their read noise is high, though getting better.
- Their fill factor is low, tens of percent. In consumer electronics tiny lenses are put on each pixel; this has not been demonstrated in cooled detectors.
- They tend to have annoying lingering-charge effects: the previous image leaves a ghost.
- (On the other hand, they can be adapted to the IR more easily, by using semiconductors other than silicon.)
We (= Nat Butler) once played with the Foveon X3. It's a CMOS detector where the signal is picked off at 3 depths in the silicon, translating into color: you get 3 'filters' for the price of one. It was kind of messy, with overlap between the responses, a lot of persistence between exposures, and weird electronics (the data would wrap at 8 bits).
By the way, consumer cameras use 'Bayer filters' or some other clever arrangement, effectively having 1/3 of the resolution they typically claim. Some expensive cameras use dichroic(s) and 2-3 individual CCDs for the RGB channels.
In astronomy we typically either do spectroscopy with a dispersive element, as we will discuss later on, or imaging, where we use filters to control the wavelengths that we measure.
The other flaw of CCDs is that silicon is not a good IR sensor: at wavelengths ~1 μm and beyond its QE goes to zero.
So you make an array of gates, just like a CCD, but out of other materials, such as InSb or HgCdTe, and you 'bump-bond' it to an array of readout electronics made in silicon - a 'bump-bonded direct hybrid array'. This can give you sensitivity at 1-40 μm, depending on the hybrid used and on the cooling.
QEs can be large, and the read noise is a few to a few tens of electrons.
At longer IR wavelengths, above 40 μm, the dearth of development is likely due to a lack of military interest. The technology there is different; read about Herschel.
While CMOS detectors do not necessarily need a shutter, since theirs can be purely electronic, a CCD, like film photography, needs a mechanical shutter. It has to be fast, precise, and robust, and to operate thousands of times without breaking.
Also, a traditional 'diaphragm' shutter will cause uneven illumination that depends on the ratio of the shutter speed to the exposure time.