Algorithm to Generate Adversarial Examples for Face-spoofing Detection
Author: MA Yu-Kun, WU Li-Fang, JIAN Meng, LIU Fang-Hao, YANG Zhou
Affiliation:

About the authors:

MA Yu-Kun (1983-), female, born in Xinxiang, Henan, Ph.D., CCF professional member; her research interests include digital image processing. WU Li-Fang (1970-), female, Ph.D., professor, doctoral supervisor, CCF professional member; her research interests include digital image processing and pattern recognition. JIAN Meng (1987-), female, Ph.D., lecturer, CCF professional member; her research interests include pattern recognition and multimedia computing. LIU Fang-Hao (1993-), male, M.S.; his research interest is optimization theory. YANG Zhou (1994-), male, M.S. candidate; his research interest is computer vision.

Corresponding author:

WU Li-Fang, E-mail: lfwu@bjut.edu.cn

CLC number:

Fund projects:

Science and Technology Innovation Project of Beijing Municipal Education Commission (KZ201510005012); National Natural Science Foundation of China (61702022); China Postdoctoral Science Foundation (2017M610026, 2017M610027)



Abstract:

In recent years, face-spoofing (liveness) detection based on deep convolutional neural networks has achieved good performance. However, deep neural networks have been shown to be vulnerable to adversarial examples, which compromises the security of face-based application systems. To build better defense mechanisms, the way adversarial examples are generated for the spoofing-detection task needs to be studied thoroughly. Compared with general classification problems, face-spoofing detection has a smaller inter-class distance, and its perturbations are harder to place. Motivated by these observations, this study proposes an algorithm that generates adversarial examples for face-spoofing detection by combining minimum perturbation dimensions with the characteristics of human vision: the perturbation is concentrated on a few dimensions, and a spacing constraint is imposed on the perturbed pixels to account for the way human visual attention clusters, so that the resulting adversarial examples are less perceptible to humans. On average, the method needs to change only 1.36% of the dimensions of the input vector to fool the network into outputting the desired classification result. In a test with volunteer observers, the human perception rate of the proposed method is about 20% lower than that of DeepFool.
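The abstract outlines the core mechanism: concentrate the adversarial perturbation on as few input dimensions as possible, and keep the perturbed pixels spatially separated so that human visual attention does not lock onto a cluster of changes. The following is a minimal illustrative sketch of this idea, not the authors' implementation: the PyTorch classifier, the greedy pixel-selection rule, and the parameters n_pixels, min_dist, and step are all assumptions made for illustration.

import torch
import torch.nn.functional as F

def sparse_spaced_attack(model, image, target, n_pixels=20, min_dist=8,
                         step=0.05, max_iters=100):
    """Greedily perturb at most n_pixels pixels of image (C x H x W, values
    in [0, 1]) toward class `target`, keeping any two perturbed pixels at
    least min_dist apart (Chebyshev distance) so the changes stay scattered."""
    adv = image.clone()
    chosen = []                                 # (row, col) pixels we may edit
    for _ in range(max_iters):
        adv = adv.detach().requires_grad_(True)
        logits = model(adv.unsqueeze(0))
        if logits.argmax(dim=1).item() == target:
            return adv.detach(), chosen         # network now outputs target
        loss = F.cross_entropy(logits, torch.tensor([target]))
        model.zero_grad()
        loss.backward()
        grad = adv.grad
        if len(chosen) < n_pixels:
            # Rank pixels by gradient magnitude (summed over channels) and
            # admit the strongest one that respects the spacing constraint.
            saliency = grad.abs().sum(dim=0)
            for idx in saliency.flatten().argsort(descending=True):
                r, c = divmod(idx.item(), saliency.shape[1])
                if all(max(abs(r - r0), abs(c - c0)) >= min_dist
                       for r0, c0 in chosen):
                    chosen.append((r, c))
                    break
        # Signed-gradient descent step restricted to the chosen pixels only.
        with torch.no_grad():
            for r, c in chosen:
                adv[:, r, c] = (adv[:, r, c]
                                - step * grad[:, r, c].sign()).clamp(0.0, 1.0)
    return adv.detach(), chosen                 # may not have fooled the net

Here the spacing constraint is enforced with a simple Chebyshev-distance check when each new pixel is admitted; the paper's actual formulation of the visual-concentration constraint and its optimization procedure may differ.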

Cite this article:

MA Yu-Kun, WU Li-Fang, JIAN Meng, LIU Fang-Hao, YANG Zhou. Algorithm to generate adversarial examples for face-spoofing detection. Journal of Software, 2019, 30(2): 469-480.

History:
  • Received: 2017-09-13
  • Last revised: 2017-10-30
  • Published online: 2018-03-14