Published on Tue Apr 05 2016

Radiometric Scene Decomposition: Scene Reflectance, Illumination, and Geometry from RGB-D Images

Stephen Lombardi, Ko Nishino

Revealing the radiometric properties of a scene is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications. We use RGB-D images to bootstrap geometry recovery and simultaneously recover the complex reflectance and natural illumination.

Abstract

Recovering the radiometric properties of a scene (i.e., the reflectance, illumination, and geometry) is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications. Deciphering the radiometric ingredients from the appearance of a real-world scene, as opposed to a single isolated object, is particularly challenging as it generally consists of various objects with different material compositions exhibiting complex reflectance and light interactions that are also part of the illumination. We introduce the first method for radiometric scene decomposition that handles those intricacies. We use RGB-D images to bootstrap geometry recovery and simultaneously recover the complex reflectance and natural illumination while refining the noisy initial geometry and segmenting the scene into different material regions. Most important, we handle real-world scenes consisting of multiple objects of unknown materials, which necessitates the modeling of spatially-varying complex reflectance, natural illumination, texture, interreflection and shadows. We systematically evaluate the effectiveness of our method on synthetic scenes and demonstrate its application to real-world scenes. The results show that rich radiometric information can be recovered from RGB-D images and demonstrate a new role RGB-D sensors can play for general scene understanding tasks.
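The paper's full model handles spatially-varying complex reflectance, natural illumination, interreflection, and shadows. As a much-simplified illustration of the underlying decomposition idea only (not the paper's method), the sketch below assumes Lambertian reflectance, a known material segmentation, and first-order spherical-harmonic illumination on synthetic data; all names and the scale-fixing convention are choices made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: per-pixel unit normals (as if from an RGB-D depth map)
# and a known segmentation into K material regions.
N, K = 1500, 3
normals = rng.normal(size=(N, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
labels = rng.integers(0, K, size=N)

# First-order spherical-harmonic shading basis [1, nx, ny, nz] per pixel.
B = np.hstack([np.ones((N, 1)), normals])

true_light = np.array([1.0, 0.3, -0.2, 0.5])    # illumination (SH coefficients)
true_albedo = np.array([0.9, 0.4, 0.65])        # one reflectance per material
image = true_albedo[labels] * (B @ true_light)  # Lambertian image formation

# Per region, fit the 4-vector v_k = a_k * L by linear least squares.
V = np.stack([np.linalg.lstsq(B[labels == k], image[labels == k],
                              rcond=None)[0] for k in range(K)])

# The decomposition is only identifiable up to one global scale; fix it by
# the convention that region 0 has albedo 1, then read off light and albedos.
light = V[0]
albedo = V @ light / (light @ light)

print(np.allclose(light, true_albedo[0] * true_light))   # True
print(np.allclose(albedo * true_albedo[0], true_albedo)) # True
```

The per-region fits all recover the same illumination direction, which is what makes the scale ambiguity the only remaining one; real scenes additionally require the geometry refinement, non-Lambertian reflectance models, and joint segmentation that the paper addresses.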

Thu May 07 2020
Computer Vision
NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image
This paper reviews the second challenge on spectral reconstruction from RGB images. The Clean and Real World tracks had 103 and 78 registered participants, respectively, with 14 teams competing in the final testing phase.
Fri Nov 22 2019
Computer Vision
Unsupervised Learning for Intrinsic Image Decomposition from a Single Image
Intrinsic image decomposition is an essential task in computer vision. It is challenging because it must separate a single image into two components. Traditional methods introduce various priors to constrain the solution, yet they have limited performance.
Wed Sep 26 2018
Computer Vision
Photometric Depth Super-Resolution
This study explores the use of photometric techniques for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward. It is then shown that dependency upon a specific reflectance
Mon Sep 25 2017
Computer Vision
Variational Reflectance Estimation from Multi-view Images
We tackle the problem of reflectance estimation from a set of multi-view images, assuming known geometry. The approach we put forward turns the input images into reflectance maps, through a robust variational method.
Wed Jan 11 2017
Computer Vision
Revisiting Deep Intrinsic Image Decompositions
Deep learning-based approaches have also been proposed to compute intrinsic image decompositions. Current data sources are quite limited, and broadly speaking fall into one of two categories. We adopt core network structures that universally reflect loose prior knowledge regarding the intrinsic image formation process and can largely be shared across datasets.
Thu Oct 08 2015
Computer Vision
Learning Data-driven Reflectance Priors for Intrinsic Image Decomposition
We propose a data-driven approach for intrinsic image decomposition. We train a model to predict relative reflectance ordering between image patches. We compare our method to the state-of-the-art approach of Bell et al. on image relighting tasks.