Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up

Authors:

Affiliation:

State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou

Fund Project:

The "Pioneer" R&D Program of Zhejiang Province


    Abstract:

    In the field of federated learning, incentive mechanisms serve as vital tools for attracting high-quality data contributors and for obtaining superior models. However, existing research on federated learning often overlooks the potential misuse of these incentive mechanisms: participants may manipulate their locally trained models to dishonestly maximize their rewards. This paper addresses this issue in depth. To begin with, we provide a clear definition of the reward fraud problem in federated learning and introduce the "reward-cost ratio" to evaluate the effectiveness of different reward fraud techniques and defense mechanisms. Subsequently, we present an attack method called the "Gradient Scale-up Attack", which manipulates model gradients to exploit the incentive system: it computes the corresponding scaling factors and uses them to inflate the contribution of the local model and thereby gain more rewards. Lastly, we propose an efficient defense mechanism that identifies fraudulent participants by examining the L2-norms of their model updates, effectively thwarting the Gradient Scale-up Attack. Through extensive analysis and experimental validation on datasets such as MNIST, we demonstrate that the proposed attack significantly increases rewards, while the corresponding defense effectively mitigates fraudulent behavior by participants.
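The abstract describes the attack and the defense only at a high level. The Python sketch below illustrates the general idea under stated assumptions; the fixed scale-up factor, the norm-proportional reward rule, and the median-based L2-norm threshold are illustrative placeholders rather than the formulas used in the paper.

```python
import numpy as np

# Toy sketch of the ideas in the abstract, NOT the paper's actual algorithms.
# Assumptions: the server splits a fixed reward pool in proportion to each
# client's measured contribution (here crudely proxied by update L2-norm), a
# fraudulent client multiplies its gradient by a scale-up factor, and the
# defense flags clients whose update norm far exceeds the median norm.

def scale_up_attack(local_gradient: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Fraudulent client: amplify the uploaded gradient by a chosen factor."""
    return scale * local_gradient


def l2_norm_defense(updates: dict, tol: float = 3.0) -> set:
    """Server: flag clients whose update L2-norm exceeds tol times the median."""
    norms = {cid: float(np.linalg.norm(g)) for cid, g in updates.items()}
    median = float(np.median(list(norms.values())))
    return {cid for cid, n in norms.items() if n > tol * median}


def proportional_rewards(updates: dict, pool: float = 100.0, flagged=frozenset()) -> dict:
    """Illustrative incentive rule: split the reward pool among unflagged
    clients in proportion to the L2-norm of their updates."""
    contrib = {cid: float(np.linalg.norm(g))
               for cid, g in updates.items() if cid not in flagged}
    total = sum(contrib.values()) or 1.0
    return {cid: pool * c / total for cid, c in contrib.items()}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = {f"honest{i}": rng.normal(size=100) for i in range(4)}
    updates["fraudster"] = scale_up_attack(rng.normal(size=100))  # amplified update

    print("rewards, no defense:  ", proportional_rewards(updates))
    flagged = l2_norm_defense(updates)
    print("flagged by L2 check:  ", flagged)
    print("rewards, with defense:", proportional_rewards(updates, flagged=flagged))
```

In the actual system, contribution would be measured by whatever evaluation scheme the incentive mechanism uses rather than by a raw norm, but the sketch shows why uniformly scaling a gradient can inflate measured contribution and why an L2-norm check exposes the outlier.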

History
  • Received: 2023-09-28
  • Revised: 2024-01-31
  • Accepted: 2024-03-28