A Wide Field, High Dynamic Range, Stereographic Viewer
Patrick Ledda
Department of Computer Science
University of Bristol, UK
[email protected]
Greg Ward
Exponent
Palo Alto, USA
[email protected]
Alan Chalmers
Department of Computer Science
University of Bristol, UK
[email protected]
Abstract
In this paper we present a High Dynamic Range viewer based on
the 120-degree field-of-view LEEP (Large Expanse Extra
Perspective) stereo optics used in the original NASA virtual reality systems. By combining these optics with an intense backlighting system (20 kcd/m²) and layered transparencies, we are able to
reproduce the absolute luminance levels and full dynamic range of
almost any visual environment. This is important because it allows
us to display environments with luminance levels that would not
be displayable on a standard monitor. This technology may enable
researchers to conduct controlled experiments in visual contrast,
chromatic adaptation, and disability and discomfort glare without
the usual limitations of dynamic range and field of view imposed
by conventional CRT display systems. In this paper, we describe
the basic system and techniques used to produce the transparency
layers from a high dynamic range rendering or scene capture. We
further present a series of psychophysical experiments demonstrating the device's ability to reproduce visual percepts, comparing the result to the real scene and to a visibility matching tone reproduction operator presented on a conventional CRT display.
CR Categories: I.3.3 [Computer Graphics]: Picture/Image
Generation - Display Algorithms; I.3.7 [Computer Graphics]:
Three-Dimensional Graphics and Realism - Color, Shading,
Shadowing, and Texture; I.4.0 [Image Processing and Computer
Vision]: General - Image Displays
Additional Keywords and Phrases: Tone Reproduction,
Dynamic Range.
1 Introduction
The natural world presents our visual system with a wide range of colors and intensities. A starlit night has an average luminance level of around 10⁻³ cd/m², and daylight scenes are close to 10⁵ cd/m². Humans can see detail in regions that vary by 1:10⁴ at any given adaptation level, beyond which the eye is swamped by stray light (i.e., disability glare) and details are lost. Modern camera lenses, even with their clean-room construction and coated optics, cannot rival human vision when it comes to low flare and absence of multiple paths ("sun dogs") in harsh lighting environments. Even if they could, conventional negative film cannot capture much more range than this, and most digital image formats do not even come close. With the possible exception of cinema, there has been little push for achieving greater dynamic range in the image capture stage, because common displays and viewing environments limit the range of what can be presented to about two orders of magnitude between minimum and maximum luminance.
A well-designed CRT monitor may do slightly better than this in a darkened room, but the maximum display luminance is only around 100 cd/m², which does not begin to approach daylight levels. A high-quality xenon film projector may get a few times brighter than this, but it is still two orders of magnitude away from the optimal light level for human acuity and color perception.
During the last decade, a great deal of work has been done by computer graphics researchers to find ways to map world luminances to target display luminances. Most of these models compute images with the aim of improving their appearance. This is known as the tone reproduction problem: how do we map real-world (or simulated) luminances to displayable luminances? Realistic image synthesis has shown that it is possible, via tone mapping, to generate images similar to a real scene by careful mapping to a set of luminances that can be displayed on a CRT. Different tone mapping operators, for example [Chiu et al. 1993, Schlick 1995, Tumblin and Rushmeier 1993, Tumblin et al. 1997, Tumblin et al. 1999, Ward 1994, Ward et al. 1997], have been developed over the years for static images. Some researchers [Ferwerda et al. 1996, Pattanaik et al. 1998, Pattanaik et al. 2000] have extended the problem to dynamic settings. Most of these models include knowledge of the human visual system to improve the resultant quality of the images [Chalmers et al. 2001, McNamara et al. 2000, Meyer et al. 1986, Parraga et al. 2000, Ramasubramanian et al. 1999, Rushmeier et al. 1995]. However, none of them has managed to offer the substantial contrast reduction needed to display images without some loss of information.
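To make the mapping concrete, the sketch below shows a deliberately simple global operator that compresses log luminances into a display's output range. This is a minimal Python illustration of the problem statement only, not any of the operators cited above; the 100 cd/m² display maximum follows the CRT discussion above.

```python
import numpy as np

def naive_tone_map(world_lum, display_max=100.0):
    """Compress world luminances (cd/m^2) into a display's absolute
    output range with a global logarithmic mapping."""
    log_l = np.log10(np.maximum(world_lum, 1e-6))  # avoid log of zero
    # Normalize the scene's log-luminance range to [0, 1] ...
    t = (log_l - log_l.min()) / (log_l.max() - log_l.min() + 1e-9)
    # ... then rescale to the display's luminance range.
    return t * display_max
```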
Figure 1: HDR Viewer capable of 10,000:1 contrast ratio.
In this paper, we present a novel device for displaying stereo-
graphic, high dynamic range images to a single viewer [Ward
2002]. This static display device combines simple, known technologies in a way that faithfully reproduces the visual field of a
natural or simulated scene, resulting in a heightened sense of realism and presence. Following a description of the device and the
requisite image processing techniques, we demonstrate the quality
of its output by comparing, based on psychophysical experiments,
the visibility in the original scene to the visibility in our HDR
(High Dynamic Range) viewer and on a standard CRT monitor.
2 Device Description
The high dynamic range viewer itself is a simple device, consisting of a bright, uniform backlight joined to a set of LEEP ARV-1
optics. The optics allow a 120° field of view in each eye for a very
natural and complete stereo view from a pair of 2.5" (6.4 cm) fisheye images mounted side-by-side. A schematic of the viewer is
shown in Figure 1, and the actual viewer is shown in Figure 2.
Four viewers have been constructed thus far, and this is the latest
model. Figure 3 shows a user looking inside the viewer.
Figure 2: Photograph of the High Dynamic Range viewer.
Figure 3: A user looking in the viewer.
The transparencies sit on top of the diffuser in front of the ARV-1 optics. Focus adjustment is performed by shifting the optics slightly closer to or farther from the transparencies, which are held in place by a clip. Precise focus is not important, as the eye will accommodate over a certain range. When viewing, it is important to remove eyeglasses that might prevent the eyes from getting close enough to the optics to provide the full field of view. Provided the wearer does not have severe astigmatism, the focus adjustment permits acuity at the visual limit of 20 cycles/degree, so long as the transparency is rendered at a density of 800 dpi or better (2048 x 2048 resolution).
3 Image Processing
The basic transformation required for correct perspective viewing with the ARV-1 optics is a hemispherical fisheye projection. This projection is most easily explained in the form of a diagram, as shown in Figure 4.
Figure 4: The hemispherical fisheye projection.
In a hemispherical fisheye projection, the distance from the center of the image is equal to the sine of the angle from the principal view axis (i.e., the depth direction). This can be visualized as a hemisphere placed on the image plane and projected directly down onto it, as illustrated in Figure 4.
Each pixel in the image corresponds to the ray that passes through the point on the hemisphere directly above it. In vector terms, this ray is defined by the view point at the origin and the unit direction vector given in Eq. 1. The x and y values in the equation are the offsets from the image center, normalized to half the width of a full 180° image. These coordinates equal -1 or 1 at the image edges and 0 at the center. The unit view vectors for x, y, and z correspond respectively to the horizontal, vertical, and depth directions for this image.

    v = ( x, y, √(1 − x² − y²) )    (Eq. 1)

Note that the view is not defined where (x² + y²) > 1. Since each image covers only 120° rather than 180°, only the corners of the image are perpendicular to the principal view axis. The final projection can be seen in the processed image shown in Figure 5.
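As an illustration, the mapping of Eq. 1 from normalized fisheye coordinates to a ray direction might be coded as follows (a minimal Python sketch following the conventions above; the function name is ours):

```python
import numpy as np

def fisheye_direction(x, y):
    """Unit view direction for normalized fisheye coordinates (Eq. 1).

    x and y are offsets from the image center, normalized to half the
    width of a full 180-degree image (-1 or 1 at the edges)."""
    r2 = x * x + y * y
    if r2 > 1.0:
        return None  # the view is not defined outside the hemisphere
    return np.array([x, y, np.sqrt(1.0 - r2)])
```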
Due to chromatic aberration in the LEEP ARV-1 optics, it is
best to precorrect the image by scaling the red image plane proportionally more than the blue image plane so that the red is about
1.5% larger than the blue, and the green is in between. This can
be seen as a red fringe near the edges of Figure 5, which results in
fairly good color convergence when viewed through the LEEP
optics. The middle of the image is not subject to the same chromatic distortion, so the center of view is unaffected.
Due to limitations in the film recording process, it is not possible to achieve the full, desired dynamic range in a single transparency. Although the film recorder we used is capable of producing nearly 1000:1 at the limit, the bottom reaches of this range have fairly large intensity steps. The useful range, where the intensity steps are below the visible threshold necessary to avoid banding artifacts, is closer to 100:1. Since we wish to produce images with
a dynamic range in excess of 10,000:1, we need to use two transparencies, layered one on top of the other. By combining the
ranges in this way, we double our dynamic range in log terms as
the densities add together. To avoid problems with transparency
alignment and ghosting, we reduce the resolution of one layer
using a Gaussian blur function. Because the dynamic range of the
individual color channels is not important for perception, we convert to gray in the scaling layer to simplify our calculations. The
resulting image is shown in Figure 6. The degree of blur is not
critical - we use a blur radius equivalent to 16 cycles across the
image. We have found this to allow for easy image alignment
without seriously compromising the maximum luminance gradient
in the combined result.
Figure 6: A scaling transparency for the HDR viewer.
Since the combination of the two transparency layers is equivalent to adding their densities, each layer must have half the density of the original. This is easily accomplished by taking the square root of each pixel value in the blurred image, and dividing the original image by this image for the detail layer. This result is the one shown in Figure 5. The effects of dividing by the scaling layer may be seen in the odd gradients near the window's edge, which disappear when the layers are placed together in the HDR viewer. Alignment marks at the corners of the images aid in their registration.
Figure 5: A detail transparency layer for the HDR viewer.
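The layer decomposition just described might be sketched as follows (a minimal Python/NumPy version under our assumptions: the gray-conversion weights and the guard against division by zero are ours, and the blur sigma should correspond to roughly 16 cycles across the image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_layers(hdr_rgb, blur_sigma):
    """Split an HDR image into a blurred gray scaling layer and a color
    detail layer; their product reconstructs the original image."""
    gray = hdr_rgb @ np.array([0.265, 0.670, 0.065])  # assumed weights
    # Square root of the blurred image: each layer carries half the
    # density (half the log range) of the original.
    scaling = np.sqrt(gaussian_filter(gray, blur_sigma))
    # Detail layer: the original divided by the scaling layer.
    detail = hdr_rgb / np.maximum(scaling, 1e-6)[..., None]
    return scaling, detail
```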
So far, we have discussed only the generation of a single layered image, whereas two images are required for a stereo view, one for each eye. In cases where stereo perspective is negligible or unimportant, a single image may be duplicated for each eye to achieve a satisfactory monocular view. For stereoscopic viewing, one must capture or render two images; this is done simply by shifting the real or virtual camera by the average interocular distance, which is approximately 2.5" (6.4 cm). This is shown in Figure 7. It is not necessary to adjust the principal view axis in the two perspectives, as the observer will make his or her own accommodation when viewing the stereo pair.
Figure 7: Stereoscopic view.
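For completeness, the camera offset for the stereo pair might be computed as below (a Python sketch; the coordinate conventions and function name are our assumptions):

```python
import numpy as np

INTEROCULAR = 0.064  # average interocular distance in meters (2.5 in)

def stereo_viewpoints(view_point, view_dir, up=(0.0, 0.0, 1.0)):
    """Offset the camera half the interocular distance to each side,
    keeping the principal view axis the same for both eyes."""
    right = np.cross(view_dir, up).astype(np.float64)
    right /= np.linalg.norm(right)
    offset = 0.5 * INTEROCULAR * right
    return view_point - offset, view_point + offset  # (left, right)

# Example: left, right = stereo_viewpoints(np.array([0.0, 0.0, 1.5]),
#                                          np.array([0.0, 1.0, 0.0]))
```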
4 Validation Experiments
We have performed two types of validation of the HDR viewer's performance, one physical and one psychophysical. In the physical validation, we measured the luminances produced by the viewer and compared them to the original input, to verify that the method of combining transparency layers performs as specified. In the psychophysical validation, we took 40 participants into a darkened room and presented them with a contrast visibility chart, which we then reproduced in our HDR viewer. We start by discussing the physical validation, and follow with a presentation of the psychophysical results.
4.1 Physical Validation
The quantitative measurement process presented some challenges,
as there are no luminance probes with sufficient spatial resolution
and freedom from stray light to measure the very large gradients
we produce in the viewer. We therefore restricted our quantitative
validation to a simple verification that combining transparencies
adds densities as predicted. To accomplish this, we used an industry standard densitometer made by X-rite, and found that layering
transparencies corresponds to adding densities within the 2% accuracy of the densitometer. Mathematically, the greatest theoretical
error will happen near the lowest densities, where multiple interreflections could increase the effective transmittance by as much
as 1%. This is not enough of an error to be significant in most simulations.
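For reference, optical density is the negative base-10 logarithm of transmittance, so stacking two transparencies multiplies their transmittances and therefore adds their densities:

\[ T_i = 10^{-D_i}, \qquad T_1 T_2 = 10^{-(D_1 + D_2)} \;\Rightarrow\; D_{\mathrm{total}} = D_1 + D_2 \]

Two layers with a 100:1 useful range (a density range of 2 each) thus combine to give the desired 10,000:1 (a density range of 4).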
4.2 Psychophysical Validation
In our psychophysical comparison, we asked 40 participants to
view the chart [Ayres 1996] shown in Figure 8 in a darkened room,
illuminated dimly by two spotlights designed to simulate a car's
headlights. The subjects were asked to identify carefully selected targets in the real scene, on a rendered version, and on an HDR photograph, the latter two displayed both on a CRT monitor and in the HDR viewer.
4.3 The Real and Virtual Scenes
The purpose of this experiment was to validate the HDR viewer and determine whether there is a need for such devices. It was therefore very important to simulate an environment that would be very difficult to represent on a CRT monitor, and to compare the output to the HDR viewer.
Figure 8: Contrast chart used for the experiments.
The scene was quite simple: it consisted of two lights, 30 cm apart and 3 m away from the viewpoint. The contrast chart was positioned between the lights. Figure 9 illustrates the scene setup.
Because the aim of the experiments was to validate the high dynamic range viewer, it was preferable to create an environment with a contrast ratio displayable in the viewer. As can be seen in Figure 9, filters were positioned in front of the lights to limit the maximum luminance in the scene. The original scene (with no filters) had a contrast ratio on the order of 10⁵:1. This was too high even for the HDR viewer, and the results would therefore not have been very meaningful. The filters reduced the contrast ratio to 10⁴:1 (which is what the device is capable of producing), allowing a meaningful comparison between reality, the HDR viewer, and the CRT monitor.
Figure 9: The scene setup.
The virtual scene was modeled in AliasWavefront Maya and subsequently rendered in Radiance [Ward and Shakespeare 1997]. The rendering process was very important, since we wanted to make a comparison with the real world; it was therefore fundamental to use a physically based renderer in order to minimize errors. Once the image of the scene had been rendered, a fisheye projection, as described above, was applied to correct for the LEEP ARV-1 optics. The resulting image is shown in Figure 10. Two layers were then produced using the technique described above and printed onto 2.5" transparencies.
Figure 10: Transparency used for the rendered scene.
Figure 11: Captured image of HDR scene.
As well as rendered images with high dynamic range, photographs of the scene were also taken, capturing a wide range of luminance levels. We photographed the chart using an Olympus C3040 digital camera and a method for combining multiple exposures into a calibrated, high dynamic range image [Debevec and Malik 1997, Mitsunaga and Nayar 1999, Ward 2001]. This calibrated image is
shown in Figure 11. This image was composited from two HDR
photographs, one for the target and one for the surround. This was
necessary because the lens flare in the camera, which is similar to
the scattering that happens in the eye, prevents it from capturing
the detail on the test chart together with the bright light sources.
Using a black cardboard mask with a hole that exposed only the
central target, we were able to capture the correct luminance levels on the target without suffering from scattering in the lens. This
mask was used for the central region only.
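The multi-exposure merge can be sketched as below (a simplified Python illustration of the idea behind [Debevec and Malik 1997]; it assumes the pixel values have already been linearized to [0, 1], whereas the cited methods also recover the camera's response curve, and the hat weighting is one common choice):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linearized exposures (values in [0, 1]) into one HDR
    radiance map by a weighted average of per-exposure estimates."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-range pixels most
        num += w * (img / t)               # radiance estimate for this shot
        den += w
    return num / np.maximum(den, 1e-9)
```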
4.4 Psychophysics
Figure 12: Running the psychophysical experiment.
In the psychophysical comparison, we asked 40 participants to
view the chart shown in Figure 8 in a darkened room, illuminated
by two spotlights. The participants were asked to observe carefully selected targets in the real scene and try to determine whether
they could clearly distinguish the direction of the targets.
Subsequently, a short questionnaire on the perceived quality of the targets had to be completed. The same experiment was then repeated on the CRT monitor and in the HDR viewer. Figure 12 shows two participants during the experiment. Both an HDR photograph of the
scene and a rendered version were displayed on the monitor and
viewer. The main reason for taking an HDR photograph of the scene was to determine whether the perception of the targets was at all related to the fact that the images shown on the displays were artificial. All the images were then tone-mapped using the algorithm of Ward et al. [Ward et al. 1997]. This tone operator allows us optionally to take into account human visibility effects such as disability glare and visual acuity. Figure 13 illustrates the rendered versions without human visibility factors (top) and with them (bottom). We wanted to verify the differences, if any, that this causes in the perception of targets. Both the HDR photograph and the rendered images were therefore tone-mapped with and without human visibility effects. Figure 14 shows all the images produced for the experiment.
Figure 13: The rendered images displayed on a CRT.
Figure 14: Images displayed: CRT (all) and HDR (b&d).
5 Results
The psychophysical results made it clear that there are differences between standard CRT monitors and the HDR viewer that we built. In all the different tests that we ran, it clearly emerged that the HDR viewer gives a closer match to reality. Figure 15 shows the result of the experiment in which participants were asked to count the number of small targets whose direction they could clearly distinguish.
Real vs. Monitor & HDR Small Targets
5
4
3
2
1
0
real
photo CRT
rendered photo HDR rendered
CRT
HDR
Figure 15: Visibility of small targets (human vision).
The first column illustrates on average how many targets were clearly visible in the real scene: nearly 4 targets were distinguishable on average. Ideally, if both the CRT and the HDR viewer gave a faithful representation of reality, all the other columns would have the same height. Clearly, this is not the case. The 2nd and 3rd columns show the results when the photograph and the rendering were displayed on the CRT; the last two columns show the same for the HDR viewer. From this graph, it can be seen that the visibility of small targets in the HDR viewer was much closer to what could be seen in reality than on a CRT monitor. On the monitor, the number of targets that could be identified was always lower than in the real scene. We then repeated the experiment with the larger targets and, not surprisingly, the results were very similar, as illustrated in Figure 16.
Real vs Monitor & HDR Large Targets
4
[Bar chart: "Real vs Monitor & HDR, Large Targets"; average visible targets (0-4) for real, photo CRT, rendered CRT, photo HDR, rendered HDR.]
Figure 16: Visibility of large targets (human vision).
These two results show not only that the HDR viewer gives a closer match to reality in terms of visibility, but also that the targets' size does not affect the outcome.
Although the results were as expected (i.e., the HDR viewer gave a closer match to reality), a significance test was necessary. The purpose of the test was to demonstrate that the probability of the observed result occurring under the null hypothesis is less than 0.5%. Every parametric test is based on a null hypothesis (H0), which states that the mean difference between pairs of samples should be 0. When comparing the HDR viewer to the CRT monitor, we expected significant differences between the devices, thus rejecting the null hypothesis. There are many tests that can be run to determine the significance of the results. We decided that the most appropriate test in this particular case is the two-tailed t-test, which compares the differences between pairs of data.
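In Python, such a paired two-tailed test might be run as follows (a sketch with synthetic counts; the real data were the 40 participants' per-display target counts):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant visible-target counts (40 participants).
real = rng.integers(3, 6, size=40)  # targets counted in the real scene
crt = rng.integers(1, 4, size=40)   # targets counted on the CRT display

# Paired, two-tailed t-test on per-participant differences (df = 39).
t, p = stats.ttest_rel(real, crt)
print(f"t = {t:.2f}, p = {p:.4g}")
```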
Table 1: Probabilities from the t-tests.

         CRT vs HDR          CRT vs Real Scene     HDR vs Real Scene
      Photo (a)  Rend (b)    Photo (c)  Rend (d)   Photo (e)  Rend (f)
 p     <0.001    <0.001      0.005      <0.001     0.478      0.074

p was calculated based on df = 39 (degrees of freedom).
The tests that we ran were the following: we compared the display devices against each other, and then we compared these results against the real scene. The results are shown in Table 1. In the first case, we expected a large value of t, meaning that there is a low probability p of no difference between the devices. The first t-test that we ran compared the results obtained by counting the targets on the CRT and in the HDR viewer, respectively, when viewing the photograph. The t-test gave a probability p less than 0.001, meaning that the chance of the t value occurring under the null hypothesis is less than 1/100 (actually much less; see Table 1, (a)). Note that any value of p < 0.05 rejects the null hypothesis and is statistically significant. In the second experiment, we ran the t-test to compare the results of viewing the rendered image. In this case, we expected an even smaller value of p, since the difference in the results was even more pronounced (see Figure 15, 3rd and 5th columns). The value obtained for p was less than 0.001, which again rejects H0.
These first two tests show that there are differences in performance between the devices. Secondly, we wished to determine which display is closer to reality. When comparing Real vs CRT, p < 0.001 (rendering, (d)) and p = 0.005 (photograph, (c)), leading to the conclusion that there is a significant difference between the two. When comparing Real vs HDR viewer, p = 0.074 (rendering, (f)) and p = 0.478 (photograph, (e)). Note that the values of p when comparing reality against the HDR display are greater than 0.05 - much larger in the case of the photographed scene. The null hypothesis is retained in this case, meaning that the HDR viewer and reality are closely matched. The t-test strongly supports the null hypothesis that there is no significant difference between viewing a captured image in the HDR viewer and viewing the same scene in reality. The differences were greater between the real scene and our rendering viewed on the HDR device, but not enough to be considered statistically significant by our experiments.
[Bar chart: "Real vs Monitor & HDR" (no human visibility factors); average visible targets (0-12) for real, photo CRT, rendered CRT.]
Figure 17: Visibility of targets (no human vision).
The first two experiments took into account human visibility
factors (images looked like Figure 13, bottom). We then ran another experiment in which both the HDR photograph and rendered
scene had been tone-mapped without taking into account human
visibility. The results of this experiment are shown in Figure 17. Here an opposite effect can be seen: the visibility of targets on the monitor is even greater than what could be seen in reality,
which is obviously incorrect. Due to human visual disabilities,
including the loss of contrast and color sensitivity at low light levels, and especially disability glare caused by scattering in the lens
and iris, the real scene presents visual challenges that are not
reproduced on a standard CRT display. If these effects are not simulated with our tone-mapping operator, then objects will in fact be
more visible on the CRT than they are in reality, as this experiment
demonstrates.
6 Conclusions and Future Work
In this paper, we have presented a device for displaying high dynamic range, wide-field stereo imagery. Although the device itself is quite simple, psychophysical tests have shown that the HDR viewer is capable of displaying contrast ratios on the order of 10⁴:1. The experiments that we ran demonstrated that the device faithfully reproduces luminance levels up to this order.
The vast majority of participants considered the HDR viewer much closer to reality than a CRT display. This was then statistically demonstrated using a t-test. In the
future, we would like to run more psychophysical experiments
using multiple scenes. One of the problems in doing so is that it is
difficult to have controlled conditions (luminance levels for example) that faithfully reproduce real environments on a computer.
Also, to run meaningful experiments, it is very important to ask
viewers to compare what they see in the viewer with the real
world. Obviously, it is not always possible to have access to any
real scene and run experiments on the spot. Nevertheless, we
would like to repeat the experiment on a less artificial scene. It
would also be interesting to show participants color charts as well
as contrast charts. This would give us a better idea of how colors
match reality.
The most serious limitation of the current device is its restriction to static scenes. Clearly, it would be better if we could quickly change from one image to another, without needing to swap
transparencies and interrupt the observer. Ideally, we would like
to present animations of virtual reenactments or reproductions at
video frame rates. The challenge of achieving the necessary resolution and bandwidth, although significant, may be met with
today's PC graphics hardware. It may require multiple cards in the
same machine and specialized drivers, but it should be possible.
We would like to build a dynamic HDR viewer based on LCD displays. However, the main limitation at present is that the resolution
that these displays can produce is not high enough for our applications.
Acknowledgments
We would like to acknowledge Antonis Petroutsos for his help
during the experiments and in producing some of the images.
References
AYRES T., 1996. Psychophysical Validation of Photographic
Representations, ASME 1996.
BRUCE V., GREEN P. AND GEORGESON M. 1996. Visual Perception: Physiology, Psychology, and Ecology. Psychology Press, East Sussex, UK.
CHALMERS AG., DALY S., MCNAMARA A., MYSZKOWSKI K.,
RUSHMEIER H., TROSCIANKO T. 2001. Seeing is believing:
Reality perception in modelling, rendering and animation,
SIGGRAPH 2001, Course notes # 21.
CHIU K., HERF M., SHIRLEY P., SWAMY S., WANG C., ZIMMERMAN K. 1993. Spatially Nonuniform Scaling Functions for High Contrast Images. Proceedings Graphics Interface 93, 245-254.
DEBEVEC P.E. AND MALIK J. 1997. Recovering High Dynamic
Range Radiance Maps from Images. Proceedings SIGGRAPH
97. 369-378
FERWERDA J., PATTANAIK SN., SHIRLEY P. AND GREENBERG DP.
1996. A Model of Visual Adaptation for Realistic Image
Synthesis, In Proceedings of SIGGRAPH 1996, ACM Press /
ACM SIGGRAPH, New York. H. Rushmeier, Ed., Computer
Graphics Proceedings, Annual Conference Series, ACM, pp.
249-258
GREENBERG DP., TORRANCE KE., SHIRLEY P., ARVO J., FERWERDA
J., PATTANAIK SN. LAFORTUNE E., WALTER B., FOO S-C. AND
TRUMBORE B. 1997. A Framework for Realistic Image
Synthesis, In Proceedings of SIGGRAPH 1997 (Special
Session), ACM Press / ACM SIGGRAPH, New York. T.
Whitted, Ed., Computer Graphics Proceedings, Annual
Conference Series, ACM, 477-494.
GORDON I. Theories of visual perception. John Wiley & Sons, New
York, NY, 1997
HORVITZ E. AND LENGYEL J. 1997. Perception, Attention, and
Resources: A Decision-Theoretic Approach to Graphics
Rendering, In Proceedings of the 13th Conference on Uncertainty in AI, pp. 238-249. Providence.
MCNAMARA A., CHALMERS AG., TROSCIANKO T., AND GILCHRIST I.
2000. Comparing Real and Synthetic Scenes using Human
Judgements of Lightness. In B Peroche and H Rushmeier
(eds), 11th Eurographics Workshop on Rendering.
MEYER G., RUSHMEIER H., COHEN M., GREENBERG D, AND
TORRANCE K. 1986. An Experimental Evaluation of Computer
Graphics Imagery. In Transactions on Graphics, 5 (1), pp. 30-50. New York, ACM.
MITSUNAGA T. AND NAYAR S.K., 1999. Radiometric Self
Calibration, Proc. IEEE Conference on Computer Vision and
Pattern Recognition, 1999.
MYSZKOWSKI K., TAWARA T., AKAMINE H. AND SEIDEL H-P. 2001
Perception-Guided Global Illumination Solution for
Animation Rendering. In Proceedings of SIGGRAPH 2001,
ACM Press / ACM SIGGRAPH, New York. E. Fiume, Ed.,
Computer Graphics Proceedings, Annual Conference Series,
ACM, 221-230.
PARRAGA S, TROSCIANKO T, AND TOLHURST D. 2000. The Human
Visual System is Optimised For Processing The Spatial
Information in Natural Visual Images, In Current Biology,10,
pp. 35-38.
PATTANAIK SN., TUMBLIN JE.,YEE H. AND GREENBERG DP. 2000.
Time-Dependent Visual Adaptation for Realistic Real-Time
Image Display, In Proceedings of SIGGRAPH 2000, ACM
Press / ACM SIGGRAPH, New York. K. Akeley, Ed.,
Computer Graphics Proceedings, Annual Conference Series,
ACM, 47-54.
PATTANAIK SN., FERWERDA J, FAIRCHILD MD., AND GREENBERG DP.
1998. A Multiscale Model of Adaptation and Spatial Vision for
Realistic Image Display, in Proceedings of SIGGRAPH 1998,
ACM Press / ACM SIGGRAPH, New York. M. Cohen, Ed.,
Computer Graphics Proceedings, Annual Conference Series,
ACM, 287-298.
RAMASUBRAMANIAN M., PATTANAIK SN., GREENBERG DP. 1999. A
Perceptually Based Physical Error Metric for Realistic Image
Synthesis, In Proceedings of SIGGRAPH 1999, ACM Press /
ACM SIGGRAPH, New York. A. Rockwood, Ed., Computer Graphics Proceedings, Annual Conference Series, ACM, 73-82.
RUSHMEIER H, LARSON, G, PIATKO C, SANDERS P, AND RUST B.
1995. Comparing Real and Synthetic Images: Some Ideas
About Metrics, In Proceedings of Eurographics Rendering
Workshop 1995. Springer Wein, New York.
SCHLICK C. 1995. Quantization Techniques for High Dynamic Range Pictures. In G. Sakas, P. Shirley and S. Mueller (eds), Photorealistic Rendering Techniques, Berlin: Springer-Verlag, 7-20.
TUMBLIN J., RUSHMEIER H. 1993. Tone Reproduction for Realistic
Images, IEEE Computer Graphics and Applications. 13(6),
42-48.
TUMBLIN J., HODGINS J. AND GUENTER B. 1997. Display of High Contrast Images Using Models of Visual Adaptation. Visual Proceedings SIGGRAPH 97, 154.
TUMBLIN J., HODGINS J.K. AND GUENTER B. 1999. Two Methods
for Display of High Contrast Images. ACM Transactions on
Graphics, Vol. 18, No. 1, 56-94.
VOLEVICH V, MYSZKOWSKI K, KHODULEV A AND KOPYLOV E, 2000.
Measuring and Using the Visual Differences Predictor to
Improve Performance of Progressive Global Illumination
Computation, In Transactions on Graphics, 19 (1), pp. 122-
161. New York, ACM.
WARD G. 1994. A contrast-based scalefactor for luminance display. In Graphics Gems IV, P. Heckbert, Ed. Academic Press,
Boston, 415-421.
WARD G. AND SHAKESPEARE R. 1997. Rendering with Radiance:
The Art and Science of Lighting Visualisation, Morgan
Kaufmann Publishers.
WARD LARSON G., RUSHMEIER H., PIATKO C. 1997. A Visibility
Matching Tone Reproduction Operator for High Dynamic
Range Scenes, IEEE Transactions on Visualization and
Computer Graphics, Vol. 3, No. 4, 1997.
WARD, G. 2001. HDRgen software, private communication.
WARD, G. 2002. A Wide Field, High Dynamic Range,
Stereographic Viewer, Proceedings of PICS 2002.
WATSON B., FRIEDMAN A. AND MCGAFFEY A, 2001. Measuring and
Predicting Visual Fidelity, In Proceedings of SIGGRAPH
2001, ACM Press / ACM SIGGRAPH, New York. E. Fiume,
Ed., Computer Graphics Proceedings, Annual Conference
Series, ACM, 213 – 220.