Recovering the radiometric properties of a scene (i.e., the reflectance,
illumination, and geometry) is a long-sought ability of computer vision that
can provide invaluable information for a wide range of applications.
Deciphering the radiometric ingredients from the appearance of a real-world
scene, as opposed to a single isolated object, is particularly challenging: a
scene generally consists of various objects with different material
compositions, exhibiting complex reflectance and light interactions in which
each surface also acts as a source of illumination for the others. We
introduce the first method for radiometric scene decomposition
that handles those intricacies. We use RGB-D images to bootstrap geometry
recovery and simultaneously recover the complex reflectance and natural
illumination while refining the noisy initial geometry and segmenting the scene
into different material regions. Most importantly, we handle real-world scenes
consisting of multiple objects of unknown materials, which necessitates the
modeling of spatially varying complex reflectance, natural illumination,
texture, interreflection, and shadows. We systematically evaluate the
effectiveness of our method on synthetic scenes and demonstrate its application
to real-world scenes. The results show that rich radiometric information can be
recovered from RGB-D images and demonstrate a new role RGB-D sensors can play
in general scene understanding tasks.
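To make the joint-estimation idea concrete, the following is a minimal toy sketch, not the paper's actual method: it assumes a single directional light, a Lambertian reflectance model, a known material segmentation, and normals already derived from depth, then alternates between estimating the illumination and the per-region albedo. All variable names and the synthetic setup are illustrative assumptions.

```python
import numpy as np

# Toy sketch of alternating radiometric decomposition (not the paper's
# method): image formation is assumed Lambertian, I = albedo * max(n.l, 0),
# with one unknown directional light and one unknown albedo per material
# region, echoing the idea of bootstrapping geometry from RGB-D and jointly
# recovering reflectance and illumination.
rng = np.random.default_rng(0)
n_pix, n_reg = 600, 3

# Synthetic "geometry from depth": unit normals on the camera-facing hemisphere.
normals = rng.normal(size=(n_pix, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals[:, 2] = np.abs(normals[:, 2])

# Assumed material segmentation and ground-truth radiometry.
region = rng.integers(0, n_reg, size=n_pix)
true_albedo = np.array([0.3, 0.6, 0.9])[region]
true_light = np.array([0.4, 0.1, 0.91])
true_light /= np.linalg.norm(true_light)

image = true_albedo * np.clip(normals @ true_light, 0.0, None)

# Alternating estimation from a neutral initialization.
albedo = np.full(n_pix, 0.5)
light = np.array([0.0, 0.0, 1.0])
for _ in range(30):
    # Illumination step: least squares on lit pixels, I / albedo ~ n . l.
    lit = image > 1e-6
    light, *_ = np.linalg.lstsq(normals[lit], image[lit] / albedo[lit],
                                rcond=None)
    light /= np.linalg.norm(light)
    # Reflectance step: per-region least-squares albedo given the shading.
    shading = np.clip(normals @ light, 0.0, None)
    for k in range(n_reg):
        m = region == k
        albedo[m] = (image[m] @ shading[m]) / max(shading[m] @ shading[m],
                                                  1e-9)

print(float(light @ true_light))   # approaches 1.0 as the light is recovered
```

The real problem is far harder: reflectance is spatially varying and non-Lambertian, illumination is a natural environment rather than one light, the segmentation is unknown, and interreflection and shadows couple the unknowns, but the alternating structure gives a feel for why noisy RGB-D geometry is enough to bootstrap the decomposition.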