Lighting virtual objects in a single image
via coarse scene understanding

CHEN XiaoWu, JIN Xin & WANG Ke

Abstract

   Achieving convincing visual consistency between virtual objects and a real scene relies mainly on the lighting of the virtual-real composite scene. The problem becomes more challenging when lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels and intrinsic components provide mostly coarse information, and are not accurate enough to re-render the whole scene. However, carefully integrating this coarse information can yield an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding techniques to estimate the parameters of a ray-based illumination model for lighting virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces selected using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is treated as the irradiance of the selected small surfaces. The virtual objects are then illuminated by the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry or reflectance, illumination acquisition equipment, or imaging information of the image.
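
   To make the core estimation step concrete, the following is a minimal sketch of one way to recover ray intensities from shading, assuming Lambertian patches and a fixed set of candidate ray directions: the shading (irradiance) of each selected small surface is then linear in the ray intensities and can be recovered by non-negative least squares. The function and parameter names are illustrative, not from the paper, and the normal and semantic constraints used to select the surfaces are omitted here.

   import numpy as np
   from scipy.optimize import nnls

   def estimate_ray_intensities(normals, shading, light_dirs):
       """Recover non-negative intensities of a fixed set of light rays
       from the shading observed at sampled surface patches.

       normals    : (M, 3) unit normals of the selected small 3D surfaces
       shading    : (M,)   shading values from the intrinsic shading image
       light_dirs : (K, 3) unit directions of the candidate rays

       Assumes Lambertian patches, so irradiance is linear in the intensities:
           shading_j ~= sum_i I_i * max(0, n_j . w_i)
       """
       # Clamped cosine factors form the linear system A @ I = shading.
       A = np.clip(normals @ light_dirs.T, 0.0, None)   # (M, K)
       intensities, _residual = nnls(A, shading)         # non-negative least squares
       return intensities

   # Toy usage: three roughly upward-facing patches, lit mostly from above.
   normals = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 0.98], [-0.2, 0.0, 0.98]])
   normals /= np.linalg.norm(normals, axis=1, keepdims=True)
   shading = np.array([0.9, 0.85, 0.84])
   light_dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
   print(estimate_ray_intensities(normals, shading, light_dirs))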

   

Citation

CHEN XiaoWu, JIN Xin & WANG Ke. Lighting virtual objects in a single image via coarse scene understanding. Science China, 2013.

CHEN XiaoWu, WANG Ke & JIN Xin. Single image based illumination estimation for lighting virtual object in real scene.