Datasets and Benchmarks for Densely Sampled 4D Light Fields

Sven Wanner, Stephan Meister and Bastian Goldluecke
Heidelberg Collaboratory for Image Processing, University of Heidelberg
web: http://lightfield-analysis.net
e-mail: {sven.wanner,stephan.meister,bastian.goldluecke}@iwr.uni-heidelberg.de
Vision, Modeling, and Visualization (VMV), Lugano, 2013

[Figure: selected light fields in the database]

Contributions

Light field datasets:
* thirteen high-quality, densely sampled light fields
* seven computer-graphics generated datasets with ground truth disparity
* four of these with ground truth segmentation
* six real-world datasets captured using a gantry
* one transparency dataset with ground truth disparity for both the surface and an object behind it
* suitable for the evaluation of continuous methods for light field analysis based on epipolar plane images

Code and benchmarks:
* CUDA library with complete source code for continuous optimization and light field analysis
* several methods for disparity reconstruction
* inverse problems: denoising, inpainting, super-resolution, segmentation
* fully scripted evaluation on the light field database
* front-end to interface with other databases

4D Light Field Parametrization and Epipolar Plane Images (EPIs)

[Figure: light field parametrization (Lumigraph [2]) and an epipolar plane image]

A 4D light field is parametrized by two parallel planes, as in the Lumigraph [2]: the image plane Ω with coordinates (x, y) and the plane of view points Π with coordinates (s, t). Fixing y and t yields a 2D slice called an epipolar plane image (EPI) [1]. A scene point P = (X, Y, Z) at depth Z projects to image coordinates x1 and x2 in two views s1 and s2 separated by ∆s = s2 − s1, with

    x2 − x1 = f · ∆s / Z,

where f is the focal length. A scene point therefore traces a line on the EPI, and its disparity equals the local slope of that line.
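This slope relation is what continuous EPI methods exploit. As a minimal illustration, the Python sketch below extracts an EPI from a 4D light field array and estimates disparity as the local line slope. It uses a simple windowed least-squares fit rather than the structure tensor machinery of the released CUDA library; the (s, t, y, x) array layout and the function names are assumptions made for this sketch, not the project's actual API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_epi(lf, t_fixed, y_fixed):
    # Fixing the vertical view coordinate t and the image row y in a
    # light field with axes (s, t, y, x) yields a horizontal
    # epipolar plane image E(s, x).
    return lf[:, t_fixed, y_fixed, :]

def epi_disparity(epi, sigma=1.5):
    # A scene point traces the line x(s) = x0 + d*s on the EPI, so
    # brightness constancy gives E_s + d * E_x = 0.  A least-squares
    # fit over a Gaussian window yields the local slope estimate
    # d = -<E_s E_x> / <E_x E_x>, i.e. the disparity.
    epi = epi.astype(np.float64)
    g_s, g_x = np.gradient(epi)          # derivatives along s and x
    num = gaussian_filter(g_s * g_x, sigma)
    den = gaussian_filter(g_x * g_x, sigma)
    return -num / (den + 1e-9)           # pixels per adjacent-view step
```

With the relation above, x2 − x1 = f · ∆s / Z, such a slope estimate converts directly to metric depth whenever f and ∆s are known.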
Comparison to Existing Light Field Data Sets

Existing data sets either lack ground truth data for depth and segmentation, or are sparsely sampled and therefore unsuitable for benchmarking continuous methods.

* Stanford Light Field Archive: more than 20 light fields sampled using a camera array, a gantry and a light field microscope; no ground truth disparities.
* UCSD/MERL Light Field Repository: video and static light fields; no ground truth disparities, one-dimensional domain of view points only.
* MIT Media Lab Synthetic Light Field Archive: synthetic light fields including challenges like transparencies, occlusions and reflections; no ground truth disparities.
* Middlebury Multiview Stereo Datasets: one 4D light field with ground truth depth for the center view; additional 3D light fields with depth for two out of seven views; large baselines and disparities, unsuitable for epipolar plane image analysis.

Database Overview

dataset name   category   resolution    GTD    GTL
buddha         Blender    768×768×3     full   yes
horses         Blender    576×1024×3    full   yes
papillon       Blender    768×768×3     full   yes
stillLife      Blender    768×768×3     full   yes
buddha2        Blender    768×768×3     full   no
medieval       Blender    720×1024×3    full   no
monasRoom      Blender    768×768×3     full   no
couple         Gantry     898×898×3     cv     no
cube           Gantry     898×898×3     cv     no
maria          Gantry     926×926×3     cv     no
pyramide       Gantry     898×898×3     cv     no
statue         Gantry     898×898×3     cv     no
transparency   Gantry     926×926×3     cv     no

category: Blender (rendered synthetic dataset) or Gantry (real-world dataset sampled using a single moving camera).
resolution: spatial resolution of the views; all light fields consist of 9×9 views.
GTD: completeness of the ground truth depth data, either cv (only center view) or full (all views).
GTL: indicates whether object segmentation data is available.

[Figures: computer graphics generated with ground truth depth; computer graphics generated with ground truth depth and object labels; recorded with the gantry, ground truth depth; recorded with the gantry, transparent object with two ground truth depth layers]

Data Generation

Synthetic datasets:
* all datasets generated with the open source software Blender
* for ground truth labels, objects are rendered with a fixed color
* a plugin for light field rendering is available on our web site

Real-world datasets:
* Nikon D800 digital camera mounted on a stepper-motor driven gantry
* objects pre-scanned with a structured light scanning device to obtain ground truth data

Bibliography

[1] R. Bolles, H. Baker, and D. Marimont. Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision, 1(1):7–55, 1987.
[2] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The Lumigraph. In Proc. SIGGRAPH, pages 43–54, 1996.
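As a closing illustration of how the ground truth listed in the database overview can drive a scripted benchmark, here is a small sketch. The per-pixel conversion follows x2 − x1 = f · ∆s / Z from the parametrization section; the function names and the plain-array interface are hypothetical and stand in for, rather than reproduce, the project's actual evaluation front-end.

```python
import numpy as np

def depth_to_disparity(depth, focal_px, delta_s):
    # Ground-truth depth Z (per pixel) to EPI disparity
    # d = f * delta_s / Z, in pixels per adjacent-view step.
    return focal_px * delta_s / depth

def mean_squared_disparity_error(estimate, ground_truth, mask=None):
    # Simple per-pixel benchmark score; mask selects pixels with
    # valid ground truth, e.g. only the center view for the gantry
    # datasets, whose GTD entry in the table is 'cv'.
    err = (estimate - ground_truth) ** 2
    if mask is not None:
        err = err[mask]
    return float(err.mean())
```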