Wide Dynamic Range Imaging Techniques
A high-resolution image sensor is only the start of the story. How you process its output will
ultimately determine image quality.
By Venkata Raghavan S., Product Applications Manager, Cypress Semiconductor Corp.
Published in Portable Design (http://www.portabledesign.com), August 2007
Executive Summary
Images captured in real life have varying illumination levels within a scene. An image sensor that can capture all of these
illumination levels (contrast) with fidelity produces a wide dynamic range (WDR) image. The key to achieving WDR is to
identify the dark and bright portions of the image and to control the integration of the pixels to avoid saturation. Pixels
impinged with high illumination need saturation control, while pixels impinged with the dark portion of the image need more
integration time. In this way the contrast of the scene is restored by the imager. This article describes wide dynamic range
imaging techniques for CMOS image sensors.
Introduction
Digital imaging technology has progressed from digital still cameras into an array of portable applications such as cellular
handsets, where it serves as a differentiating feature. In applications such as automotive sensors and medical equipment,
digital imaging serves as the foundation of higher-order functionality such as machine vision, instrumentation, document
scanning, motion analysis and scientific high-speed image analysis.
Image resolution is often seen as the most important factor determining the quality of an image. However, in all of these
applications, from those that produce images for viewing to those that perform complex analysis of images, dynamic range
plays a key role in determining how efficient and effective later image processing will be. Dynamic range directly affects final
image quality as well as the effectiveness of any compression or analytics technologies.
The dynamic range of CMOS image sensors is defined by their ability to capture dark as well as bright sections within an
image while maintaining the highest fidelity. Real-life images have varying illumination levels throughout a scene. An image
sensor must have wide dynamic range (WDR) in order to capture all the illumination levels (contrast) of an image. The wider
the dynamic range of the sensor, the more illumination levels that can be captured.
The key to achieving sufficient WDR is identifying the dark and bright portions of the image and controlling the integration of
the pixels to avoid saturation. Pixels impinged with high illumination need saturation control, while pixels impinged with the
dark portion of the image need longer integration times. In this way, the contrast of the scene can be restored by the imager.
This article describes the definition of wide dynamic range and appropriate imaging techniques to use with CMOS image
sensors.
Dynamic Range
Dynamic range is the ratio of the maximum signal output of the sensor to the smallest signal output of the sensor. The higher
the dynamic range of the sensor, the greater the fidelity of the optical image transformation from the object scanned. The
sensor generates its maximum output for brighter portions of the image and its minimum output for darker portions. The
maximum output of the sensor without saturation is limited by the well capacity of the photo diode, while the minimum
detectable output is limited by the noise floor of the sensor. Figure 1 illustrates the noise floor in detail, where dynamic range
(dB) = 20 log10 (sensor's maximum output / sensor's minimum output).
Figure 1: Noise floor definition of the CMOS image sensor
A higher dynamic range results in a better transformation of the object to an image, accurately capturing the varying light
intensities of a scene from the brightest to the darkest parts, including shadows. Factors that affect dynamic range include:
• Dark current of the sensor
• Saturation of the pixels
• Quantum efficiency/well capacity of the photo diode
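To make the dynamic range formula concrete, here is a minimal Python sketch that evaluates dynamic range in dB from an assumed full-well capacity and noise floor. The 20,000 e- well and 20 e- noise floor are purely illustrative values, not specifications of any particular sensor.

    import math

    def dynamic_range_db(max_output, min_output):
        # Dynamic range (dB) = 20 * log10(sensor's maximum output / sensor's minimum output)
        return 20.0 * math.log10(max_output / min_output)

    # Illustrative values only: the full-well capacity limits the maximum
    # unsaturated output, the noise floor limits the minimum detectable output.
    full_well_electrons = 20000.0
    noise_floor_electrons = 20.0
    print("%.1f dB" % dynamic_range_db(full_well_electrons, noise_floor_electrons))  # 60.0 dB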
Signal-to-Noise Ratio
The signal-to-noise ratio (SNR) is a measure of a captured image’s immunity to noise. SNR is the ratio of the peak light
intensity without saturation to the scene background noise. A camera with a higher SNR typically produces low noise video in
darker scenes. Photodiodes generally have better SNR when they are charged to more than half of their total capacity.
However, they cannot be charged beyond saturation.
There are two basic noise sources in a CMOS image sensor:
1. Temporal noise, caused by the photodiode and MOSFETs: shot noise, thermal noise and 1/f noise, as encountered in
electronic circuits.
2. Fixed pattern noise, caused by pixel-to-pixel output variation under the same illumination. The sources of fixed pattern
noise are device and interconnect mismatches.
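As a rough illustration of how the temporal noise sources combine, the sketch below estimates the SNR of a single pixel assuming Poisson shot noise and a lumped read-noise term; fixed pattern noise is ignored and all of the numbers are hypothetical.

    import math

    def pixel_snr_db(signal_electrons, read_noise_electrons, dark_electrons=0.0):
        # Shot noise follows Poisson statistics: sigma = sqrt(collected charge).
        shot_noise = math.sqrt(signal_electrons + dark_electrons)
        # Uncorrelated temporal noise sources add in quadrature.
        total_noise = math.sqrt(shot_noise ** 2 + read_noise_electrons ** 2)
        return 20.0 * math.log10(signal_electrons / total_noise)

    # A photodiode charged to more than half of a hypothetical 20,000 e- well
    print("%.1f dB" % pixel_snr_db(12000.0, read_noise_electrons=20.0))  # about 40.6 dB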
Figure 2 shows the three-transistor active pixel sensor (APS) circuit widely used in CMOS image sensors. The first MOSFET
is used to reset the photodiode to a known voltage by pulsing the gate (reset) signal. The photodiode's stray capacitance is
charged to a known voltage (the reset voltage) and the photodiode is then allowed to integrate charge. As photo-generated
carriers discharge the diode, its voltage is pulled down from the reset level at a rate that depends on the light intensity
impinging on the photodiode. If the pixel saturates, the charge across the photodiode is completely depleted. The second
MOSFET is a source follower, while the third MOSFET switches the selected row/pixel to the bit line, which in turn is
connected to the column amplifier. The actual voltage integrated by the photodiode is the reset voltage minus the voltage
across the diode after integration.
Figure 2: 3-Transistor pixel
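This single-reset readout can be modeled in a few lines of Python: the diode voltage starts at the reset level, is pulled down at a rate proportional to the incident light, and is clamped at zero when the pixel saturates. The discharge-rate constant and the voltages are purely illustrative.

    def linear_integration(reset_voltage, light_intensity, integration_time,
                           discharge_rate=0.5):
        # The photodiode's stray capacitance starts at the reset voltage and is
        # discharged by the photocurrent during integration.
        diode_voltage = reset_voltage - discharge_rate * light_intensity * integration_time
        # A saturated pixel is completely depleted; the diode cannot go below 0 V.
        diode_voltage = max(diode_voltage, 0.0)
        # Integrated signal = reset voltage minus the diode voltage after integration.
        return reset_voltage - diode_voltage

    print(linear_integration(3.0, light_intensity=0.2, integration_time=10.0))  # 1.0 (dark pixel)
    print(linear_integration(3.0, light_intensity=5.0, integration_time=10.0))  # 3.0 (saturated)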
Hence the photodiode is operated with one reset per integration cycle. This method is called linear integration (open loop,
with no control of pixel saturation). If the photodiode needs saturation control, multiple resets are applied with varying
integration times, optimally chosen for the end application to achieve a wide dynamic range. The first reset voltage is the
largest voltage level allowed by the design; the second, third and subsequent reset voltages are progressively smaller.
The steps involved in the multi-reset system are:
1. Reset the photodiode to the maximum allowed reset voltage VR1 and let it integrate for time T1.
2. Reset the photodiode with a smaller voltage VR2 (a voltage less than VR1). At this barrier, a saturated pixel's diode voltage
(the source of the reset MOSFET) has fallen below the new reset level, so the MOSFET conducts and charges the photodiode
up to the new reset voltage. If the pixel is not saturated, its diode voltage is still above VR2, the MOSFET does not conduct,
and the photodiode simply continues integrating for time T2. Such a pixel corresponds to a dark portion of the image and
receives more integration time, which is desirable.
3. The resets are repeated until integration after the last barrier (reset) voltage is complete.
Figure 3: Two barrier reset timing
Pixel Response with Multi-Barrier Operation
Pixels impinged by the brighter portions of the image are reset, while pixels impinged by the darker portions receive more
integration time to develop their charge. In this way, saturation is controlled.
Two parameters are used to control dynamic range:
• Integration time
• Barrier voltage (the reset voltage is also called the barrier voltage)
By choosing a suitable combination of barrier voltage and integration time, the desired wide dynamic range can be achieved.
The three-transistor pixel (Figure 2) allows multiple resets and integration time control. External circuits are required to detect
the saturation and reset the pixel. Also required is a compression/transfer function table to decide the barrier voltage and
integration times. In this way, the required dynamic range can be controlled and achieved based on the end application.
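A minimal simulation of the multi-barrier scheme, extending the linear-integration sketch above: after each sub-integration the next (lower) barrier is applied, and only pixels whose diode voltage has fallen below that barrier are pulled back up, mirroring the way the reset MOSFET conducts only for saturating pixels. The barrier voltages, integration times and discharge constant are hypothetical.

    def multi_barrier_integration(light_intensity, barriers, times, discharge_rate=0.5):
        # barriers: descending barrier (reset) voltages, e.g. VR1 > VR2 > VR3
        # times:    integration time allotted after each barrier is applied
        diode_voltage = barriers[0]
        for barrier, t in zip(barriers, times):
            # The reset MOSFET conducts only if the diode has discharged below
            # the barrier, so only bright (saturating) pixels are pulled back up.
            if diode_voltage < barrier:
                diode_voltage = barrier
            diode_voltage = max(diode_voltage - discharge_rate * light_intensity * t, 0.0)
        # The output signal is referenced to the first reset voltage, as before.
        return barriers[0] - diode_voltage

    for intensity in (0.05, 0.5, 5.0, 50.0):
        print(intensity, round(multi_barrier_integration(
            intensity, barriers=[3.0, 2.0, 1.0], times=[8.0, 1.5, 0.1]), 3))

With these illustrative numbers the outputs are roughly 0.24, 1.4, 2.25 and 3.0, a piecewise-linear, compressive response rather than a hard clip at high intensities.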
Achieving Wider Dynamic Range
Techniques to achieve wide dynamic range include:
• Double shot exposure. To increase the dynamic range of an image, pixels are integrated twice: once with a shorter
integration time to capture the brighter portions of the object scanned, and again with a longer integration time to capture the
darkest portions. The final image is composed from the two frames of data through signal processing (a sketch of this fusion
follows Figure 4).
• Multiple image detectors. Using this approach, multiple detectors capture the same scene simultaneously: beam splitters
project copies of the same image onto each detector at the same time. Each detector is preset with a predetermined exposure
time (electronic control) or an optical attenuator (optical control). This method is suitable for generating high dynamic range
images in real time. Additionally, the camera or the scene can be moving during scanning.
• Saturation detection per pixel. Each pixel of the CMOS array has a saturation detection circuit, such as a comparator, that
compares the voltage across the diode against a known threshold and resets the pixel in case of saturation. This approach
imposes the overhead of a saturation detection circuit per pixel, which is too expensive for large pixel arrays.
• Multi-reset with different integration times. Pixels based on the three-transistor APS circuit described in Figure 2 can be
reset several times during the integration period. The minimum requirement is two barriers (reset voltages) with different
integration times. Again, the first barrier is larger than the following barrier. The second barrier voltage can be selected based
on the well capacity and the previous integration output (Figure 3).
Figure 4: Images illustrating (left) linear integration and (right) the improvement with multi-slope integration as
captured by an IBIS-5 image sensor from Cypress
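The following Python sketch illustrates the double shot exposure technique from the list above: the long exposure is kept for dark regions (better SNR) and, wherever it saturates, the short exposure value is scaled by the exposure ratio and used instead. The frame data, bit depth and exposure ratio are hypothetical.

    import numpy as np

    def fuse_double_shot(short_frame, long_frame, exposure_ratio, saturation_level=1023):
        # Keep the long exposure where it is valid; where it saturates, substitute
        # the short exposure scaled up to the same radiometric scale.
        fused = long_frame.astype(np.float64)
        saturated = long_frame >= saturation_level
        fused[saturated] = short_frame[saturated] * float(exposure_ratio)
        return fused

    # Hypothetical 10-bit frames: a bright pixel (saturates the long shot) and a dark pixel
    short = np.array([[200, 2]])    # e.g. 1 ms integration
    long_ = np.array([[1023, 32]])  # e.g. 16 ms integration
    print(fuse_double_shot(short, long_, exposure_ratio=16))  # bright pixel -> 3200.0, dark pixel -> 32.0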
Dynamic range is one of the important factors affecting the image quality of CMOS sensors. The wider the dynamic range, the
better a digital imaging system will be able to accurately capture the varying light intensities of a scene. By varying the
combination of barrier voltage and integration time, as well as applying different techniques to widen the dynamic range,
developers can exercise more accurate control of image quality across a broad range of imaging applications.