Antelope: 3-party Privacy-preserving Machine Learning Framework Based on GPU
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

TP18

Funding:

National Key R&D Program of China (2023YFB4503202); National Natural Science Foundation of China (62372202)




    Abstract:

    As concerns over data privacy continue to grow, secure multi-party computation (MPC) has gained considerable research attention due to its ability to protect sensitive information. However, the communication and memory demands of MPC protocols limit their performance in privacy-preserving machine learning (PPML). Reducing interaction rounds and memory overhead in secure computation protocols remains both essential and challenging, particularly in GPU-accelerated environments. This study focuses on the design and implementation of GPU-friendly protocols for linear and nonlinear computations. To eliminate overhead associated with integer operations, 64-bit integer matrix multiplication and convolution are implemented using CUDA extensions in PyTorch. A most significant bit (MSB) extraction protocol with low communication rounds is proposed, based on 0-1 encoding. In addition, a low-communication-complexity hybrid multiplication protocol is introduced to reduce the communication overhead of secure comparison, enabling efficient computation of ReLU activation layers. Finally, Antelope, a GPU-based 3-party framework, is proposed to support efficient privacy-preserving machine learning. This framework significantly reduces the performance gap between secure and plaintext computation and supports end-to-end training of deep neural networks. Experimental results demonstrate that the proposed framework achieves 29×–101× speedup in training and 1.6×–35× in inference compared to the widely used CPU-based FALCON (PoPETs 2020). When compared with GPU-based approaches, training performance reaches 2.5×–3× that of CryptGPU (S&P 2021) and 1.2×–1.6× that of Piranha (USENIX Security 2022), while inference is accelerated by factors of 11× and 2.8×, respectively. Notably, the proposed secure comparison protocol exhibits significant advantages when processing small input sizes.
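The abstract builds on a standard ingredient of 3-party PPML frameworks such as FALCON and CryptGPU: values are encoded as fixed-point elements of the ring Z_{2^64}, additively secret-shared among three parties, and the MSB of a ring element reveals the sign of the encoded number (the basis of secure comparison and ReLU). The sketch below illustrates only this shared-arithmetic substrate, not Antelope's actual protocols; the helper names (`encode`, `share`, `msb`) and the 13-bit fractional precision are assumptions for illustration, and NumPy `uint64` arrays stand in for GPU tensors.

```python
import numpy as np

SCALE = 1 << 13  # assumed fixed-point precision: 13 fractional bits

def encode(x):
    """Encode floats as fixed-point elements of Z_{2^64} (two's complement)."""
    return np.round(np.asarray(x, dtype=np.float64) * SCALE).astype(np.int64).astype(np.uint64)

def decode(v):
    """Decode ring elements back to floats."""
    return v.astype(np.int64).astype(np.float64) / SCALE

def share(v, rng):
    """Split v into three additive shares with v = s0 + s1 + s2 (mod 2^64)."""
    s0 = rng.integers(0, 1 << 64, size=v.shape, dtype=np.uint64)
    s1 = rng.integers(0, 1 << 64, size=v.shape, dtype=np.uint64)
    s2 = v - s0 - s1  # uint64 arithmetic wraps modulo 2^64
    return s0, s1, s2

def reconstruct(s0, s1, s2):
    return s0 + s1 + s2  # wraps modulo 2^64

def msb(v):
    """Most significant bit: 1 iff the encoded value is negative."""
    return (v >> np.uint64(63)).astype(np.uint64)

rng = np.random.default_rng(0)
x = np.array([1.5, -2.25, 0.0])
y = np.array([0.5, 0.25, -1.0])
xs, ys = share(encode(x), rng), share(encode(y), rng)

# Addition is communication-free: each party adds its own shares locally.
sum_shares = tuple(a + b for a, b in zip(xs, ys))
print(decode(reconstruct(*sum_shares)))  # recovers x + y = [2.0, -2.0, -1.0]
print(msb(reconstruct(*xs)))             # [0 1 0]: 1 marks negative entries of x
```

Linear layers reduce to such local share arithmetic (plus one resharing round for multiplications), which is why the paper's 64-bit integer matmul/convolution kernels matter; extracting the MSB of a *shared* value without reconstructing it is the expensive step that the proposed 0-1-encoding protocol targets.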

Cite this article

Yu Huan, Hua Qiangsheng, Lu Biran, Shi Xuanhua, Jin Hai. Antelope: 3-party privacy-preserving machine learning framework based on GPU. Journal of Software, 2026, 37(2): 732-748

History
  • Received: 2025-01-09
  • Revised: 2025-03-15
  • Accepted:
  • Online: 2025-08-13
  • Published: 2026-02-06
Copyright: Institute of Software, Chinese Academy of Sciences
Address: No. 4, South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn