Efficient Framework for BERT Model Training Based on Federated Learning

Authors: 王鑫澳, 陈珂, 寿黎但, 骆歆远, 陈刚

CLC number: TP18

Fund project: Zhejiang Provincial "Jianbing" (Pioneer) Program (2024C01021)

Abstract:

High-quality training data is essential for pre-trained language models (PLMs), yet privacy concerns often preclude the centralized collection of data from many professional domains. Federated learning offers a solution by enabling model training while safeguarding data privacy. However, the limited resources of federated learning clients pose a challenge to training pre-trained language models. This study addresses the issue in several steps. First, it formally defines the problem of completing model training under limited resources, where the trade-off between computational and communication overhead is adjusted to optimize training effectiveness. Second, it introduces FedBT, an efficient framework for training the BERT model in a federated learning setting, covering both further pre-training and downstream task fine-tuning. Depending on the application scenario, FedBT trains only the key parameters of the BERT model on the clients and uploads only the updated parameters to the server for aggregation, which significantly reduces both the computational and the communication cost of training. Finally, extensive comparative experiments are conducted on datasets from multiple professional domains. In the further pre-training scenario, FedBT reduces client-side training cost and communication cost to 34.31% and 7.04% of their original values, respectively; in the downstream fine-tuning scenario, it reduces them to 48.26% and 20.19%, respectively. In both scenarios, FedBT achieves accuracy close to that of traditional federated learning that trains the full model.
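The central idea described in the abstract, training only a subset of "key" BERT parameters on each client and uploading just those tensors for server-side aggregation, can be illustrated with a minimal sketch. The parameter selection below (last encoder layer, pooler, and task head) and the helper names (freeze_non_key_params, extract_key_params, fedavg_key_params) are illustrative assumptions for a downstream fine-tuning round, not FedBT's actual interface or parameter choice.

```python
# Minimal sketch, assuming a generic selective-parameter federated round;
# FedBT's real key-parameter set and aggregation protocol may differ.
import re
import torch
from transformers import BertForSequenceClassification

# Hypothetical choice of "key" parameters: last encoder layer, pooler, task head.
KEY_PARAM_PATTERNS = [r"encoder\.layer\.11\.", r"pooler\.", r"classifier\."]

def freeze_non_key_params(model: torch.nn.Module) -> None:
    """Disable gradients for every parameter not matched as 'key'."""
    for name, param in model.named_parameters():
        param.requires_grad = any(re.search(p, name) for p in KEY_PARAM_PATTERNS)

def extract_key_params(model: torch.nn.Module) -> dict:
    """Collect only the trainable (key) tensors; this is all a client uploads."""
    return {n: p.detach().cpu().clone()
            for n, p in model.named_parameters() if p.requires_grad}

def fedavg_key_params(client_updates: list[dict]) -> dict:
    """Server-side averaging restricted to the uploaded key parameters."""
    return {name: torch.stack([u[name] for u in client_updates]).mean(dim=0)
            for name in client_updates[0]}

# One illustrative round with a stock BERT classifier (fine-tuning scenario).
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
freeze_non_key_params(model)
# ... local training on the client's private data would happen here ...
update = extract_key_params(model)                      # small payload instead of the full model
global_key_params = fedavg_key_params([update])         # aggregate across participating clients
model.load_state_dict(global_key_params, strict=False)  # merge the global update back locally
```

Because only the selected tensors leave the client, the upload payload shrinks roughly in proportion to the fraction of parameters kept trainable, which is the lever the abstract describes for trading training effectiveness against computational and communication cost.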

Cite this article:

王鑫澳, 陈珂, 寿黎但, 骆歆远, 陈刚. Efficient framework for BERT model training based on federated learning. 软件学报 (Journal of Software): 1-24.

History:
  • Received: 2024-03-20
  • Revised: 2024-05-05
  • Online: 2025-01-24