基于梯度放大的联邦学习激励欺诈攻击与防御
Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up

CLC Number: TP309
Fund Project: Zhejiang Province "尖兵" (Pioneer) Program (2024C01021)

    Abstract:

In the field of federated learning, incentive mechanisms play a crucial role in attracting high-quality data contributors to participate in federated learning and in obtaining superior models. However, existing research in federated learning rarely considers the potential misuse of these incentive mechanisms: participants may manipulate their uploaded local model information to dishonestly maximize their rewards. This study examines that problem in depth. Firstly, the problem of reward fraud in federated learning is clearly defined, and the reward-cost ratio is introduced to assess the effectiveness of different reward-fraud techniques and of defense mechanisms. Secondly, an attack method named the "gradient scale-up attack" is proposed, which commits reward fraud by manipulating model gradients. The attack computes corresponding scaling factors and applies them to inflate the measured contribution of the local model gradients, thereby gaining more rewards. Finally, an efficient defense mechanism is proposed, which identifies fraudulent participants by examining the L2-norms of model updates, effectively thwarting gradient scale-up attacks. Through extensive analysis and experimental validation on datasets such as MNIST, the results demonstrate that the proposed attack method significantly increases rewards, while the corresponding defense method effectively blocks the attack behavior of fraudulent participants.
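The attack and defense described in the abstract can be sketched numerically. The norm-proportional reward rule, the scaling factor `s`, and the `3 × median` threshold below are illustrative assumptions for this sketch, not the paper's exact incentive mechanism or detection rule:

```python
import numpy as np

rng = np.random.default_rng(0)
num_participants = 5

# Honest local model gradients (flattened), one per participant.
gradients = [rng.normal(0.0, 1.0, 100) for _ in range(num_participants)]

# --- Gradient scale-up attack (hypothetical sketch) ---
# The fraudulent participant (index 0) multiplies its gradient by a
# scaling factor s > 1 so that a contribution metric based on gradient
# magnitude credits it with a larger share of the reward.
s = 10.0
gradients[0] = s * gradients[0]

# Toy norm-proportional reward allocation (an assumption for this sketch).
norms = np.array([np.linalg.norm(g) for g in gradients])
rewards = norms / norms.sum()

# --- L2-norm defense ---
# Flag participants whose update norm is far above the median norm of
# all submitted updates (the threshold factor of 3 is illustrative).
median_norm = np.median(norms)
flagged = [i for i, n in enumerate(norms) if n > 3.0 * median_norm]
```

Note that scaling preserves the gradient's direction, so the scaled update still helps the global model converge, which is why magnitude-agnostic contribution metrics are fooled while a simple L2-norm comparison against the cohort exposes the outlier.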

Cite this article:

乐紫莹, 陈珂, 寿黎但, 骆歆远, 陈刚. Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up. 软件学报 (Journal of Software), ,(): 1-16

History
  • Received: 2023-09-28
  • Revised: 2023-11-10
  • Accepted:
  • Online: 2024-09-14
  • Published:
Copyright: Institute of Software, Chinese Academy of Sciences (京ICP备05046678号-3)
Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn