Lighting virtual objects in a single image via coarse scene understanding

Xiaowu Chen, Xin Jin¹ & Ke Wang

1Beijing Electronic Science and Technology Institute, Beijing 100070, China

Abstract

   Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of virtual-real composition scenes. The problem becomes more challenging in lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels and intrinsic components provide mostly coarse information, and are not accurate enough to re-render the whole scene. However, carefully integrating the estimated coarse information can lead to an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding technology to estimate the parameters of a ray-based illumination model to light virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is considered as the irradiance of the selected small surfaces. The virtual objects are illuminated by the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment or imaging information of the image.

Gallery

Estimating the illumination of a scene to insert a virtual helicopter into a real image. The lighting of the virtual helicopter matches that of the existing image, and convincing shadows are cast on the real scene, rendered using the estimated illumination.

Workflow of our method. First, we estimate a coarse geometry model and semantic labels from the input image, and decompose the input image into intrinsic components: a shading image and a reflectance image. Then, we combine the coarse geometry model, the semantic labels, and the shading image to estimate the parameters of a ray-based illumination model. Finally, the virtual object is illuminated with the estimated illumination. The virtual helicopter matches the input image in terms of lighting effects and casts convincing shadows on the real ground. Although the estimated geometry, semantic labels, and intrinsic components are not accurate at every pixel, the illumination estimated by our algorithm is essentially correct.
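The illumination estimation step above can be viewed as a small inverse problem: given the shading value of each selected surface and a fixed set of candidate light directions, solve for non-negative light intensities such that the predicted Lambertian shading matches the observed shading. The sketch below illustrates this idea only; the projected-gradient solver, step size, and iteration count are our own assumptions, not the paper's exact solver.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def estimate_intensities(normals, shadings, light_dirs, steps=5000, lr=0.01):
    """Recover non-negative intensities L_i such that
    shading_j ~ sum_i L_i * max(0, n_j . d_i),
    by projected gradient descent on the squared residual."""
    m = len(light_dirs)
    # A[j][i] = clamped cosine term of light i at surface j
    A = [[max(0.0, dot(n, d)) for d in light_dirs] for n in normals]
    L = [0.0] * m
    for _ in range(steps):
        # residual r_j = predicted shading - observed shading
        r = [dot(A[j], L) - shadings[j] for j in range(len(normals))]
        for i in range(m):
            g = sum(r[j] * A[j][i] for j in range(len(normals)))
            L[i] = max(0.0, L[i] - lr * g)  # project onto L_i >= 0
    return L
```

With a few surfaces whose normals span different directions, the system becomes well-posed and the intensities of a small light set can be recovered; this is why the surface selection described below matters.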

Estimation example: (a) input image; (b) 3D geometry model estimated by the method of [12]; (c) the model with the input image as its texture; (d) and (e) the shading image and reflectance image, respectively, estimated by the method of [13]; (f) semantic labels predicted according to [12].

Sparse radiance map and ray combination (Eqs. 1, 2, 6, and 7). The sparse radiance map contains m sparse, discrete directional light sources evenly distributed on a hemisphere around the scene and directed toward the center of the ground circle.
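A sparse radiance map of this kind can be sketched as follows: directional lights placed evenly on the upper hemisphere, with the shading of a surface computed as the clamped-cosine sum over the lights. The latitude-longitude sampling scheme and ring counts here are illustrative assumptions; the paper's exact distribution of the m sources may differ.

```python
import math

def hemisphere_directions(n_rings=3, n_per_ring=8):
    """Evenly place directional lights on the upper hemisphere.
    Each returned unit vector points from the scene center toward
    the light (z is up)."""
    dirs = []
    for r in range(1, n_rings + 1):
        elev = (math.pi / 2) * r / (n_rings + 1)  # elevation above ground
        for k in range(n_per_ring):
            azim = 2 * math.pi * k / n_per_ring
            dirs.append((math.cos(elev) * math.cos(azim),
                         math.cos(elev) * math.sin(azim),
                         math.sin(elev)))
    dirs.append((0.0, 0.0, 1.0))  # one light at the zenith
    return dirs

def irradiance(normal, dirs, intensities):
    """Lambertian irradiance of a surface under the sparse radiance map:
    sum_i L_i * max(0, n . d_i)."""
    return sum(L * max(0.0, sum(n * d for n, d in zip(normal, d_i)))
               for L, d_i in zip(intensities, dirs))
```

The shading image from the intrinsic decomposition supplies the left-hand side of this forward model at the selected surfaces, which is what makes the intensities recoverable.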

Normal and semantic constraints. We first use the normal constraint to eliminate surfaces that do not satisfy Eqs. 6 and 7. Then, among the remaining surfaces, we apply the semantic constraint rules to select the more appropriate ones.
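The two-stage selection can be sketched roughly as below. Since Eqs. 6 and 7 are not reproduced on this page, the normal constraint is approximated here as "the surface can receive direct light from at least one candidate direction", and the semantic preference order is an invented placeholder; both are assumptions, not the paper's exact rules.

```python
def select_surfaces(surfaces, light_dirs, cos_thresh=0.1,
                    preferred=("ground", "building", "wall")):
    """Two-stage surface selection: a normal constraint followed by a
    semantic constraint. cos_thresh and the label preference order
    are illustrative assumptions."""
    def max_cos(normal):
        # Largest clamped cosine over all candidate light directions.
        return max(sum(n * d for n, d in zip(normal, d_i))
                   for d_i in light_dirs)
    # Normal constraint: keep surfaces that can be directly lit.
    lit = [s for s in surfaces if max_cos(s["normal"]) > cos_thresh]
    # Semantic constraint: keep reliably shaded categories, ordered
    # by preference (e.g. drop sky and foliage regions).
    kept = [s for s in lit if s["label"] in preferred]
    kept.sort(key=lambda s: preferred.index(s["label"]))
    return kept
```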

Comparison between the methods of [11] and [12]. In the outdoor images, with the semantic constraints, the ground estimated by [12] is flatter than that estimated by [11], which makes it more suitable for illumination estimation and shadow rendering in our task. In the user study, subjects were asked to rank each result first or second according to the realism of the illumination effects on the virtual objects. The average rank scores are shown; lower is better.

Paper and Slides

pdf

Xiaowu Chen, Xin Jin, Ke Wang: Lighting virtual objects in a single image via coarse scene understanding. SCIENCE CHINA Information Sciences 57(9): 1-14 (2014).

pdf
Xiaowu Chen, Ke Wang, Xin Jin: Single Image Based Illumination Estimation for Lighting Virtual Object in Real Scene. CAD/Graphics 2011: 450-455.

BibTeX

@article{DBLP:journals/chinaf/ChenJW14,
  author    = {Xiaowu Chen and
               Xin Jin and
               Ke Wang},
  title     = {Lighting virtual objects in a single image via coarse scene understanding},
  journal   = {{SCIENCE} {CHINA} Information Sciences},
  volume    = {57},
  number    = {9},
  pages     = {1--14},
  year      = {2014},
  url       = {https://doi.org/10.1007/s11432-013-4936-0},
  doi       = {10.1007/s11432-013-4936-0},
  timestamp = {Wed, 17 May 2017 14:25:34 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/chinaf/ChenJW14},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
@inproceedings{DBLP:conf/cadgraphics/ChenWJ11,
  author    = {Xiaowu Chen and
               Ke Wang and
               Xin Jin},
  title     = {Single Image Based Illumination Estimation for Lighting Virtual Object
               in Real Scene},
  booktitle = {12th International Conference on Computer-Aided Design and Computer
               Graphics, CAD/Graphics 2011, Jinan, China, September 15-17, 2011},
  pages     = {450--455},
  year      = {2011},
  crossref  = {DBLP:conf/cadgraphics/2011},
  url       = {https://doi.org/10.1109/CAD/Graphics.2011.19},
  doi       = {10.1109/CAD/Graphics.2011.19},
  timestamp = {Wed, 17 May 2017 10:54:59 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/conf/cadgraphics/ChenWJ11},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
