Face Illumination Manipulation using a Single Reference Image by Adaptive Layer Decomposition


Xiaowu Chen, Hongyu Wu, Xin Jin¹, and Qinping Zhao

¹Beijing Electronic Science and Technology Institute, Beijing 100070, China

Abstract

   This article proposes a novel image-based framework to manipulate the illumination of a human face through adaptive layer decomposition. Our framework needs only a single reference image, without any knowledge of the 3D geometry or material information of the input face. To transfer the illumination effects of a reference face image to a normal-lighting face, we first decompose the lightness layers of the reference and input images into large-scale and detail layers with a Weighted Least Squares (WLS) filter whose smoothing parameters adapt to the gradient values of the face images. The large-scale layer of the reference image is then filtered with the guidance of the input image by a Guided Filter whose smoothing parameters adapt to the face structures. The relit result is obtained by replacing the large-scale layer of the input image with that of the reference image. To normalize the illumination effects of a non-normal-lighting face (i.e. face delighting), we introduce a Similar Reflectance Prior (SRP) into the WLS layer decomposition stage, which makes the normalized result less affected by high-contrast light and shadow effects of the input face. Through these two procedures, we can change the illumination effects of a non-normal-lighting face by first normalizing its illumination and then transferring the illumination of another reference face to it. We obtain convincing relit results for both face relighting and delighting on numerous input and reference face images with various illumination effects and genders. Comparisons with previous works show that our framework is less affected by geometry differences and better preserves the identification structure and skin color of the input face.

Gallery


The proposed face illumination transfer method. The reference face image is warped according to the shape of the input face. Both the input image and the warped reference image are cropped to the contour defined by the landmark points. For the face illumination transfer task, we assume the reference face image is taken under nearly white light sources. The two cropped images are then decomposed into a lightness layer and a color layer, and only the lightness layer is operated on. The two lightness layers are decomposed into a large-scale layer and a detail layer using the WLS filter. The reference large-scale layer is filtered with the guidance of the input large-scale layer using the Guided Filter to form the large-scale layer of the relit result. By compositing the filtered large-scale layer and the input detail layer, the lightness layer of the relit result is obtained. Finally, the relit result is computed by compositing this lightness layer with the color layer of the input face image.
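
For readers who want to experiment, here is a minimal Python sketch of this transfer pipeline, assuming both faces are already warped and cropped to the same size. A bilateral filter stands in for the paper's WLS decomposition, cv2.ximgproc.guidedFilter (from opencv-contrib-python) replaces the adaptive Guided Filter, and all parameter values are illustrative rather than the paper's.

```python
import cv2
import numpy as np

def decompose_lightness(bgr):
    """Split a uint8 BGR image into a lightness layer (L of Lab) and color layers (a, b)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    L, a, b = cv2.split(lab)
    return L.astype(np.float32), (a, b)

def split_large_scale_detail(L, d=9, sigma_color=30, sigma_space=9):
    """Edge-preserving smoothing; a bilateral filter stands in for the WLS filter here."""
    large = cv2.bilateralFilter(L, d, sigma_color, sigma_space)
    detail = L - large                      # detail layer = lightness - large-scale layer
    return large, detail

def transfer_illumination(input_bgr, ref_bgr, radius=12, eps=100.0):
    """Replace the input large-scale layer with the (guided-filtered) reference one."""
    L_in, (a_in, b_in) = decompose_lightness(input_bgr)
    L_ref, _ = decompose_lightness(ref_bgr)
    large_in, detail_in = split_large_scale_detail(L_in)
    large_ref, _ = split_large_scale_detail(L_ref)
    # Filter the reference large-scale layer with the input as guidance
    # (eps is in squared intensity units; the L channel here spans 0..255).
    guided = cv2.ximgproc.guidedFilter(large_in, large_ref, radius, eps)
    L_out = np.clip(guided + detail_in, 0, 255).astype(np.uint8)
    lab_out = cv2.merge([L_out, a_in, b_in])
    return cv2.cvtColor(lab_out, cv2.COLOR_Lab2BGR)
```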

(a) Input lightness layer, decomposed into the large-scale layer (d) using the same λ over the whole image; (b) is the normalized γ, which is used to compute the spatially varying λ; (c) is the large-scale layer computed with the spatially varying λ determined by (b). It can be observed that (c) retains less detail information than (d) in the regions of facial hair and eyebrows.
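
The paper gives the exact formula for the spatially varying λ; the sketch below only illustrates the idea, with an assumed linear mapping from normalized gradient magnitude to λ (larger gradients, as in hair and eyebrows, get a larger λ and hence stronger smoothing, pushing those details out of the large-scale layer).

```python
import cv2
import numpy as np

def adaptive_lambda(L, lam_min=0.1, lam_max=2.0):
    """Compute a per-pixel smoothing strength from the lightness gradients (illustrative mapping)."""
    gx = cv2.Sobel(L, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(L, cv2.CV_32F, 0, 1, ksize=3)
    gamma = cv2.GaussianBlur(np.sqrt(gx**2 + gy**2), (15, 15), 0)   # smoothed gradient magnitude
    gamma = (gamma - gamma.min()) / (gamma.max() - gamma.min() + 1e-8)  # normalize to [0, 1]
    return lam_min + gamma * (lam_max - lam_min)                     # large gradient -> large lambda
```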

(a) A rough contour line is determined by the face landmark points; (b) the face structure region is then determined by the contour line; (c) the Canny edge detector is applied to the face structure region; (d) the distance transform is applied to the detected edges: for pixels far from the edges, smaller kernel sizes are used, and for pixels near the edges, larger kernel sizes are used.
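
A sketch of this kernel-size map using OpenCV's Canny detector and distance transform; the radius range of 3 to 18 follows the next figure, while the distance cutoff and the linear mapping are assumptions.

```python
import cv2
import numpy as np

def kernel_size_map(gray_face, structure_mask, r_min=3, r_max=18, d_max=20.0):
    """Per-pixel Guided Filter radius: large near edges of the face structure region, small far away."""
    edges = cv2.Canny(gray_face, 50, 150)
    edges = cv2.bitwise_and(edges, edges, mask=structure_mask)   # keep edges inside the structure region
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)    # distance to the nearest edge pixel
    t = np.clip(dist / d_max, 0.0, 1.0)
    return np.round(r_max - t * (r_max - r_min)).astype(np.int32)  # near edge -> r_max, far -> r_min
```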

Large-scale layers with different Guided Filter parameters. (a) Input large-scale layer; (b) reference large-scale layer; (c) a single kernel size r = 3 used over the whole image; (d) a single kernel size r = 18 used over the whole image. It can be observed that (c) preserves much more structure of the reference image: it retains much of the shading information but also more identification characteristics of the reference face, whereas (d) maintains the structure of the input face well but blurs the shading information of the reference face. (e) achieves a trade-off between preserving the input structure in the face structure region and retaining the shading information of the reference face by using adaptive kernel sizes.
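
Standard guided-filter implementations take a single radius, so one simple way to approximate the adaptive kernel sizes is to filter once with the small radius and once with the large radius and blend the two outputs with a weight map derived from the kernel-size map above. This is only an assumed approximation, not the paper's exact scheme; it again requires opencv-contrib-python.

```python
import cv2
import numpy as np

def adaptive_guided_filter(guide, src, r_map, r_min=3, r_max=18, eps=100.0):
    """Blend a small-radius and a large-radius guided-filter result according to the kernel-size map."""
    small = cv2.ximgproc.guidedFilter(guide, src, r_min, eps)
    large = cv2.ximgproc.guidedFilter(guide, src, r_max, eps)
    w = (r_map.astype(np.float32) - r_min) / float(r_max - r_min)  # 0 -> small radius, 1 -> large radius
    return (1.0 - w) * small + w * large
```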

The workflow of the face illumination normalization method. The reference face image (a normal-lighting face) is warped according to the landmarks of the input face. The two images are filtered with the WLS filter using a parameter selection scheme constrained by the similar reflectance prior. The large-scale layer of the input image is divided by that of the reference image to obtain the illumination component. The reflectance component (the normalized result) is the quotient of the input image and the illumination component.
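
A minimal sketch of this normalization step on aligned lightness layers. A bilateral filter again stands in for the SRP-constrained WLS filter, and a small epsilon guards the divisions; the parameters are illustrative.

```python
import cv2
import numpy as np

def normalize_illumination(L_in, L_ref, eps=1e-3):
    """L_in, L_ref: float32 lightness layers in [0, 1]; L_ref is taken under normal lighting."""
    large_in = cv2.bilateralFilter(L_in, 9, 0.1, 9)
    large_ref = cv2.bilateralFilter(L_ref, 9, 0.1, 9)
    illumination = large_in / (large_ref + eps)        # per-pixel illumination component
    reflectance = L_in / (illumination + eps)          # normalized (delighted) result
    return np.clip(reflectance, 0.0, 1.0), illumination
```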

Adaptive parameter selection in the skin region. (a) The blue channel of the input face. (b) Visualization of the squared sum of the gradients along the vertical and horizontal axes. (c) Visualization of the adaptive parameter constrained by SRP. From (a) and (b), the gradient values are very large along the edge of the cast shadow. This variance is caused by illumination, and thus the smoothing parameter γ is set small there according to the similar reflectance prior.
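
A sketch of how such an SRP-constrained parameter map could be computed from the blue channel; the parameter range and the linear mapping from gradient magnitude to smoothing strength are assumptions made for illustration.

```python
import cv2
import numpy as np

def srp_smoothing_map(bgr, skin_mask, p_min=0.05, p_max=1.0):
    """Inside the skin region, large gradients are treated as illumination edges -> small parameter."""
    blue = bgr[:, :, 0].astype(np.float32)
    gx = cv2.Sobel(blue, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blue, cv2.CV_32F, 0, 1, ksize=3)
    g = gx**2 + gy**2                                   # squared sum of gradients
    g = g / (g.max() + 1e-8)
    param = p_max - g * (p_max - p_min)                 # large gradient -> small smoothing parameter
    param[skin_mask == 0] = p_max                       # only constrain pixels in the skin region
    return param
```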

Our illumination transfer results between genders.

Illumination transfer in multiple color spaces. (a) Input image; (b) reference image; (c) illumination transfer result in the R, G, and B channels of the RGB color space; (d) result in the L channel of the Lab color space; (e) result in the H channel of the HSV color space; (f) result in the Y channel of the YCbCr color space.

Results of face illumination normalization. For the input Asian female faces we use an Asian female face image taken under normal lighting. For the input European female faces we use the average female face generated from 64 female face photos by the Beauty Check project. The 64 female faces were taken under normal lighting, and the illumination of the average face is more uniform. The illumination normalization results are shown in (c), with the separated illumination components shown in (d).

Arbitrary face relighting. (a) is the reference for illumination transfer; (b) is the reference for illumination normalization; (c) is the input face with non-normal lighting; (d) is the result of directly transferring the illumination of (a) to (c) using the method in Section III; (e) is the illumination-normalized face of (c) obtained using the method in Section IV; (f) is the relit result of transferring the illumination of (a) to (e).
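
A sketch of this two-stage pipeline, composed from the transfer_illumination and normalize_illumination sketches above (so it assumes those definitions are in scope); the delighting is performed on the L channel of Lab, and all details are illustrative.

```python
import cv2
import numpy as np

def relight_arbitrary(input_bgr, normal_ref_bgr, lighting_ref_bgr):
    """Delight a non-normally lit face, then transfer the illumination of the desired reference."""
    lab = cv2.cvtColor(input_bgr, cv2.COLOR_BGR2Lab)
    lab_ref = cv2.cvtColor(normal_ref_bgr, cv2.COLOR_BGR2Lab)
    L_in = lab[:, :, 0].astype(np.float32) / 255.0
    L_ref = lab_ref[:, :, 0].astype(np.float32) / 255.0
    refl, _ = normalize_illumination(L_in, L_ref)          # delighting step (Section IV)
    lab[:, :, 0] = (refl * 255.0).astype(np.uint8)
    delighted = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)
    return transfer_illumination(delighted, lighting_ref_bgr)  # relighting step (Section III)
```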

Comparisons with Chen et al. [4]. (b)(d) and (c)(e) are reference faces from YaleB [41] taken under normal and the desired lighting conditions, respectively. (f) and (h) are the results of Chen et al. [4] using two reference images; (g) and (i) are our results using one reference image.

Comparisons with Li et al. [22]. (a) are the reference images, (b) are the input images, (c) are the illumination transfer results of Li et al. [22], and (d) are our results.

The results of Bousseau et al. [12] and Shen et al. [39] are shown in (b) and (c), respectively. (d) is our result. Our method can separate the illumination from the input image more convincingly. The variance caused
by illumination is not retained in our reflectance components.

Paper and Slides

[pdf]

Face Illumination Manipulation Using a Single Reference Image by Adaptive Layer Decomposition. IEEE Trans. Image Processing 22(11): 4249-4259 (2013)

[pdf]
video

mp4

BibTeX

@article{DBLP:journals/tip/ChenWJZ13,
  author    = {Xiaowu Chen and
               Hongyu Wu and
               Xin Jin and
               Qinping Zhao},
  title     = {Face Illumination Manipulation Using a Single Reference Image by Adaptive
               Layer Decomposition},
  journal   = {{IEEE} Trans. Image Processing},
  volume    = {22},
  number    = {11},
  pages     = {4249--4259},
  year      = {2013},
  url       = {https://doi.org/10.1109/TIP.2013.2271548},
  doi       = {10.1109/TIP.2013.2271548},
  timestamp = {Fri, 26 May 2017 22:51:39 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/tip/ChenWJZ13},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
@inproceedings{DBLP:conf/cvpr/ChenCJZ11,
  author    = {Xiaowu Chen and
               Mengmeng Chen and
               Xin Jin and
               Qinping Zhao},
  title     = {Face illumination transfer through edge-preserving filters},
  booktitle = {The 24th {IEEE} Conference on Computer Vision and Pattern Recognition,
               {CVPR} 2011, Colorado Springs, CO, USA, 20-25 June 2011},
  pages     = {281--287},
  year      = {2011},
  crossref  = {DBLP:conf/cvpr/2011},
  url       = {https://doi.org/10.1109/CVPR.2011.5995473},
  doi       = {10.1109/CVPR.2011.5995473},
  timestamp = {Thu, 25 May 2017 00:41:19 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/conf/cvpr/ChenCJZ11},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
