Automatic Makeup with Region Sensitive Generative Adversarial Networks

Fund Project: National Natural Science Foundation of China (U1536203, 61572493, 61876177)

    Abstract:

    Automatic makeup refers to editing and synthesizing face makeup with computer algorithms. It belongs to the field of face image analysis and plays an important role in interactive entertainment applications, image and video editing, and face recognition. However, as a face editing problem, automatic makeup still faces several difficulties: precisely controlling the editing area is hard, consistency between the image before and after editing is poor, and the image quality is often insufficient. To address these difficulties, this study proposes a mask-controlled automatic makeup generative adversarial network. Through a masking mechanism, the network concentrates editing on the makeup area, restricts the areas that do not require editing, and preserves key information. It can also edit local facial regions such as the eye shadow, lips, and cheeks separately, enabling makeup on specific areas and enriching the makeup function. In addition, the network can be trained jointly on multiple datasets: besides the makeup dataset, other face datasets can serve as auxiliary data to enhance the model's generalization ability and produce more natural makeup results. Finally, qualitative and quantitative experiments based on a variety of evaluation methods are carried out, the results are compared with those of other methods, and the performance of the proposed method is comprehensively evaluated.
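    The mask-controlled editing idea described in the abstract can be illustrated with a minimal sketch (the function name and toy data below are hypothetical illustrations, not the paper's actual network): the generated makeup image is blended back into the input through a per-pixel mask, so pixels inside the makeup region take the edited values while all other pixels pass through unchanged.

```python
def mask_blend(original, edited, mask):
    """Blend an edited image back into the original under a soft mask.

    original, edited: H x W nested lists of pixel values, same shape.
    mask: values in [0, 1]; 1 = fully edited (e.g. lips, eye shadow),
          0 = keep the original pixel (e.g. background, hair).
    """
    return [
        [m * e + (1.0 - m) * o
         for o, e, m in zip(row_o, row_e, row_m)]
        for row_o, row_e, row_m in zip(original, edited, mask)
    ]

# A 2x2 toy "image": only the top-left pixel lies in the makeup mask.
original = [[0.2, 0.4], [0.6, 0.8]]
edited   = [[0.9, 0.9], [0.9, 0.9]]
mask     = [[1.0, 0.0], [0.0, 0.0]]
result = mask_blend(original, edited, mask)
# The masked pixel takes the edited value; all others keep the original:
# result == [[0.9, 0.4], [0.6, 0.8]]
```

    This per-pixel convex combination is what lets such a network "restrict the area that does not require editing": wherever the mask is zero, the output is identical to the input by construction, regardless of what the generator produces there.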

    Reference
    [1] Denton EL, Chintala S, Fergus R. Deep generative image models using a Laplacian pyramid of adversarial networks. In: Proc. of the Int'l Conf. on Neural Information Processing Systems. 2015. 1486-1494.
    [2] Huang X, Li Y, Poursaeed O, Hopcroft JE, Belongie SJ. Stacked generative adversarial networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017.
    [3] Zhao J, Mathieu M, LeCun Y. Energy-based generative adversarial network. In: Proc. of the Int'l Conf. on Learning Representations. 2017.
    [4] Guo D, Sim T. Digital face makeup by example. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2009. 73-79.
    [5] Scherbaum K, Ritschel T, Hullin M, Thormählen T, Blanz V, Seidel HP. Computer-suggested facial makeup. Computer Graphics Forum, 2011,30(2):485-492.
    [6] Tong WS, Tang CK, Brown MS, Xu YQ. Example-based cosmetic transfer. In: Proc. of the Pacific Conf. on Computer Graphics and Applications. 2007. 211-218.
    [7] Liu L, Xing J, Liu S, Xu H, Zhou X, Yan S. Wow! You are so beautiful today! ACM Trans. on Multimedia Computing, Communications, and Applications, 2014,11(1s):20.
    [8] Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 5967-5976.
    [9] Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proc. of the IEEE Int'l Conf. on Computer Vision. 2017. 2242-2251.
    [10] Kim T, Cha M, Kim H, Lee JK, Kim J. Learning to discover cross-domain relations with generative adversarial networks. In: Proc. of the Int'l Conf. on Machine Learning. 2017. 1857-1865.
    [11] Mirza M, Osindero S. Conditional generative adversarial nets. arXiv Preprint arXiv:1411.1784, 2014.
    [12] Rosales R, Achan K, Frey B. Unsupervised image translation. In: Proc. of the IEEE Int'l Conf. on Computer Vision. 2003. 472-478.
    [13] Zhu JY, Krähenbühl P, Shechtman E, Efros AA. Generative visual manipulation on the natural image manifold. In: Proc. of the European Conf. on Computer Vision. 2016. 597-613.
    [14] Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, Shi W. Photo-realistic single image super-resolution using a generative adversarial network. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 105-114.
    [15] Li M, Zuo W, Zhang D. Deep identity-aware transfer of facial attributes. arXiv Preprint arXiv:1610.05586, 2016.
    [16] Shen W, Liu R. Learning residual images for face attribute manipulation. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 1225-1233.
    [17] Zhang Z, Song Y, Qi H. Age progression/regression by conditional adversarial autoencoder. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 4352-4360.
    [18] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv Preprint arXiv:1511.06434, 2015.
    [19] Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. In: Proc. of the Conf. and Workshop on Neural Information Processing Systems. 2016. 2226-2234.
    [20] Mathieu MF, Zhao JJ, Zhao J, Ramesh A, Sprechmann P, LeCun Y. Disentangling factors of variation in deep representation using adversarial training. In: Proc. of the Conf. and Workshop on Neural Information Processing Systems. 2016. 5040-5048.
    [21] Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros AA. Context encoders: Feature learning by inpainting. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2016. 2536-2544.
    [22] Mathieu M, Couprie C, LeCun Y. Deep multi-scale video prediction beyond mean square error. In: Proc. of the Int'l Conf. on Learning Representations. 2016.
    [23] Vondrick C, Pirsiavash H, Torralba A. Generating videos with scene dynamics. In: Proc. of the Conf. and Workshop on Neural Information Processing Systems. 2016. 613-621.
    [24] Wu J, Zhang C, Xue T, Freeman B, Tenenbaum J. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In: Proc. of the Conf. and Workshop on Neural Information Processing Systems. 2016. 82-90.
    [25] Gatys LA, Ecker AS, Bethge M. A neural algorithm of artistic style. arXiv Preprint arXiv:1508.06576, 2015.
    [26] Johnson J, Alahi A, Li FF. Perceptual losses for real-time style transfer and super-resolution. In: Proc. of the European Conf. on Computer Vision. Cham: Springer-Verlag, 2016. 694-711.
    [27] Fišer J, Jamriška O, Simons D, Shechtman E, Lu J, Asente P, Lukac M, Sýkora D. Example-based synthesis of stylized facial animations. ACM Trans. on Graphics (TOG), 2017,36(4):155.
    [28] Liao J, Yao Y, Yuan L, Hua G, Kang SB. Visual attribute transfer through deep image analogy. arXiv Preprint arXiv:1705.01088, 2017.
    [29] Sangkloy P, Lu J, Fang C, Yu F, Hays J. Scribbler: Controlling deep image synthesis with sketch and color. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 6836-6845.
    [30] Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017.
    [31] Li Y, Song L, Wu X, He R, Tan T. Anti-makeup: Learning a bi-level adversarial network for makeup-invariant face verification. arXiv Preprint arXiv:1709.03654, 2017.
    [32] Wang S, Fu Y. Face behind makeup. In: Proc. of the AAAI Conf. on Artificial Intelligence. 2016. 58-64.
    [33] Li C, Zhou K, Lin S. Simulating makeup through physics-based manipulation of intrinsic image layers. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2015. 4621-4629.
    [34] Liu S, Ou X, Qian R, Wang W, Cao X. Makeup like a superstar: Deep localized makeup transfer network. In: Proc. of the Int'l Joint Conf. on Artificial Intelligence. 2016. 2568-2575.
    [35] Chang H, Lu J, Yu F, Finkelstein A. PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 40-48.
    [36] Odena A. Semi-supervised learning with generative adversarial networks. arXiv Preprint arXiv:1606.01583, 2016.
    [37] Odena A, Olah C, Shlens J. Conditional image synthesis with auxiliary classifier GANs. In: Proc. of the Int'l Conf. on Machine Learning. 2017. 2642-2651.
    [38] Reed S, Akata Z, Yan X, Logeswaran L, Schiele B, Lee H. Generative adversarial text to image synthesis. In: Proc. of the Int'l Conf. on Machine Learning. 2016. 1060-1069.
    [39] Zhang H, Xu T, Li H, Zhang S, Huang X, Wang X, Metaxas D. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv Preprint arXiv:1612.03242, 2016.
    [40] Kim T, Cha M, Kim H, Lee JK, Kim J. Learning to discover cross-domain relations with generative adversarial networks. In: Proc. of the Int'l Conf. on Machine Learning. 2017. 1857-1865.
    [41] Taigman Y, Polyak A, Wolf L. Unsupervised cross-domain image generation. arXiv Preprint arXiv:1611.02200, 2016.
    [42] Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC. Improved training of Wasserstein GANs. In: Proc. of the Int'l Conf. on Neural Information Processing Systems. 2017. 5767-5777.
    [43] Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. In: Proc. of the IEEE Int'l Conf. on Computer Vision. 2015.
    [44] Choi Y, Choi M, Kim M, Ha JW, Kim S, Choo J. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 8789-8797.
    [45] Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2018,40(4):834-848.
    [46] Mao X, Li Q, Xie H, Lau RY, Wang Z, Smolley SP. Least squares generative adversarial networks. In: Proc. of the IEEE Int'l Conf. on Computer Vision. 2017.
Citation: 包仁达, 庾涵, 朱德发, 黄少飞, 孙瑶, 刘偲. Automatic makeup with region sensitive generative adversarial networks (in Chinese). 软件学报 (Journal of Software), 2019,30(4):896-913.
History
  • Received: April 16, 2018
  • Revised: June 13, 2018
  • Online: April 01, 2019
Copyright: Institute of Software, Chinese Academy of Sciences