Alexander Jermyn
September 10, 2012
G. Wetzstein, I. Ihrke, D. Lanman, W. Heidrich, “Computational Plenoptic Imaging,” Comp. Graph.
Forum, vol. 30, no. 8, pp. 2397-2426, 2011. doi: 10.1111/j.1467-8659.2011.02073.x
Most traditional camera designs are modeled on the human eye. The eye, however, has
spatial, spectral, and dynamic-range limitations, among others. Whereas the eye projects a
scene onto a two-dimensional image, the light in a scene can be modeled in four or more
dimensions. The plenoptic function, together with computational approaches, provides a
ray-based model of light in these higher dimensions, describing light not only by spatial
position but also by direction, wavelength, and time. By capturing more of the plenoptic
function, a larger range of data can be extracted from an image. Applications include high
dynamic range (HDR) imaging, multi-spectral imaging, light field imaging, gigapixel
photography, and others.
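For concreteness, the full plenoptic function is commonly written (following Adelson and
Bergen; this formulation is standard in the literature rather than quoted from the survey)
as a seven-dimensional function, of which the four-dimensional light field is a restriction:

    P = P(x, y, z, theta, phi, lambda, t)   (viewpoint position, ray direction, wavelength, time)
    L = L(u, v, s, t)                       (4D light field: rays parameterized by two planes)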
HDR imaging has been an active area of research in recent years. Dynamic range is defined as
"the ratio of largest and smallest possible value in the range" of an imaging system. The
dynamic range of current high-end sensors found in digital cameras is about 10,000:1,
comparable to that of color film. One proposed approach to increasing that range is to capture
intensity gradients instead of pixel intensities. The simplest way to increase dynamic range,
however, is to combine multiple exposures, taken either by a single camera or by a camera array.
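As a minimal sketch of the multiple-exposure approach (illustrative, not code from the
survey), the following merges a bracketed stack into a single radiance map. It assumes a
linear sensor response and known exposure times; the function name and the hat-shaped
weighting are assumptions chosen for simplicity:

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge a bracketed exposure stack into one radiance map.

        Assumes `images` is a list of float arrays in [0, 1] from a sensor
        with a linear response, and `exposure_times` gives each frame's
        exposure in seconds. Near-saturated and near-black pixels are
        down-weighted with a simple hat function.
        """
        radiance_sum = np.zeros_like(images[0], dtype=np.float64)
        weight_sum = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: peaks at mid-gray
            radiance_sum += w * (img / t)       # per-frame radiance estimate
            weight_sum += w
        return radiance_sum / np.maximum(weight_sum, 1e-8)

Each frame's pixels are divided by exposure time to estimate scene radiance, so short
exposures contribute the highlights and long exposures the shadows.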
A light field is a set of 2D images taken from slightly different viewpoints; together they
describe the spatial and angular variation of light in a scene, a four-dimensional slice of
the plenoptic function. Applications include synthetic aperture imaging, synthesizing novel
viewpoints, and post-capture refocusing. One way to capture a light field is with an array of
cameras spread over a planar surface; custom hardware allows the cameras to be accurately
calibrated for image processing. This method simulates an aperture the size of the entire
array, yielding sharp images of objects that are partially occluded in any single camera's
view, since the large synthetic aperture blurs foreground occluders away. However, a sparse
array may not provide enough information to reconstruct an accurate light field. In
time-sequential imaging, either the camera or the object is moved to capture the light field;
the main disadvantage of this method is that it cannot capture the light field of a moving
scene. A solution is single-shot multiplexing: either an array of microlenses is placed in
the optical system, or an array of mirrors is photographed, so that a single exposure records
the scene as if from multiple viewpoints.
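Post-capture refocusing from a camera array can be sketched as shift-and-add (again an
illustration, not the survey's code): each view is shifted in proportion to its camera's
offset and the views are averaged, so points on the chosen focal plane align while everything
else blurs. The integer shifts via np.roll are a simplification; a real implementation would
interpolate subpixel shifts and crop the wrapped borders:

    import numpy as np

    def refocus(views, positions, slope):
        """Synthetic-aperture refocusing by shift-and-add.

        `views` is a list of 2D images from a planar camera array,
        `positions` the (u, v) offset of each camera from the array
        center, and `slope` the disparity per unit of camera offset
        for the desired focal plane.
        """
        acc = np.zeros_like(views[0], dtype=np.float64)
        for img, (u, v) in zip(views, positions):
            dy = int(round(-slope * v))
            dx = int(round(-slope * u))
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return acc / len(views)

Sweeping `slope` over a range of values refocuses the same captured data onto different
depth planes, which is exactly what a single conventional photograph cannot do.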
Gigapixel photography aims to capture extremely high-resolution images by combining a set of
megapixel images, which can be captured in several ways. A camera can be mounted on a
rotation stage, or a small sensor can be scanned across the image plane of a large-format
camera, to capture sequential images of a scene. Alternatively, multiple small sensors can be
placed in a single large housing to capture the image in one shot.
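A correspondingly minimal sketch of assembling such sequential captures into one canvas
(illustrative only; the tile offsets are assumed already known from the stage geometry or a
prior registration step, and seam blending and lens-distortion correction are omitted):

    import numpy as np

    def assemble_mosaic(tiles, offsets, full_shape):
        """Place pre-aligned tiles into one high-resolution canvas.

        `offsets` gives each tile's top-left (row, col) position in the
        final image; overlapping regions are simply averaged.
        """
        canvas = np.zeros(full_shape, dtype=np.float64)
        count = np.zeros(full_shape, dtype=np.float64)
        for tile, (row, col) in zip(tiles, offsets):
            h, w = tile.shape[:2]
            canvas[row:row + h, col:col + w] += tile
            count[row:row + h, col:col + w] += 1.0
        return canvas / np.maximum(count, 1.0)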
The plenoptic function thus underlies HDR imaging, light field imaging, and gigapixel
photography alike. By combining multiple images, each containing a different slice of the
plenoptic function, more information can be extracted during processing than from any single
exposure. Depending on the application, these images can be captured sequentially or in a
single shot. Each method has its own advantages and trade-offs, so each remains suited to a
limited set of uses.