Survey on Parallel and Distributed Optimization Algorithms for Scalable Machine Learning
Author:
Affiliation:

Fund Project:

National Natural Science Foundation of China (U1435220); Beijing Major Science and Technology Projects (D171100003417002); Civil Aviation Science and Technology Major Project (MHRD20160109)

    Abstract:

    Machine learning problems are typically formulated as an objective function to be solved, and optimization algorithms are the key tools for estimating the parameters of that objective function. In the big data setting, parallel and distributed optimization algorithms must be designed to accelerate training by means of multi-core and distributed computing. In recent years, a large body of research has emerged in this field, and some of the resulting algorithms have been widely adopted on machine learning platforms. This survey studies the five most common classes of optimization methods: gradient descent, second-order optimization, proximal gradient, coordinate descent, and the alternating direction method of multipliers (ADMM). For each class, related work is analyzed from the perspectives of single-machine parallelism and distributed parallelism, and algorithms of the same class are compared in detail with respect to model characteristics, input data characteristics, algorithm evaluation, and parallel computing model. The implementations and applications of these optimization algorithms on representative scalable machine learning platforms are then compared. In addition, all of the optimization algorithms introduced in the survey are organized into a hierarchical classification diagram, which helps users select an appropriate optimization algorithm according to the type of objective function, and also supports cross-exploration of how optimization algorithms can be applied to new objective function types. Finally, the limitations of existing optimization algorithms are analyzed, possible solutions are proposed, and future research directions are outlined.
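    To make the data-parallel pattern behind many of the surveyed methods concrete, the sketch below shows a minimal synchronous data-parallel gradient descent for a least-squares objective: each worker computes a partial gradient on its own data shard, and the driver averages the partial gradients before every update. This is only an illustrative sketch under simplifying assumptions (equal-sized shards, a single-machine multiprocessing setup, and made-up function names); it is not code from the paper or from any of the platforms it compares.

```python
# Illustrative sketch (not from the paper): synchronous data-parallel
# gradient descent for least-squares linear regression. Each worker
# computes the gradient of 0.5*||X w - y||^2 on its shard; the driver
# averages the shard gradients and applies one update per round.
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Per-shard gradient of 0.5*||Xw - y||^2, normalized by shard size."""
    X, y, w = args
    return X.T @ (X @ w - y) / len(y)

def parallel_gradient_descent(shards, dim, lr=0.1, rounds=100, workers=4):
    w = np.zeros(dim)
    with Pool(processes=workers) as pool:
        for _ in range(rounds):
            # Ship the current parameters to every worker (bulk-synchronous step).
            grads = pool.map(partial_gradient, [(X, y, w) for X, y in shards])
            w -= lr * np.mean(grads, axis=0)  # average shard gradients, then update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    # Synthetic regression data split into 4 equal shards.
    shards = []
    for _ in range(4):
        X = rng.normal(size=(250, 5))
        shards.append((X, X @ w_true + 0.01 * rng.normal(size=250)))
    w_est = parallel_gradient_descent(shards, dim=5)
    print("max parameter error:", np.max(np.abs(w_est - w_true)))
```

    The synchronous averaging step here corresponds to a bulk-synchronous parallel computing model; asynchronous variants drop this barrier at the cost of stale gradients, which is one of the trade-offs that comparison dimensions such as the parallel computing model are meant to capture.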

Cite this article:

Kang LY, Wang JF, Liu J, Ye D. Survey on parallel and distributed optimization algorithms for scalable machine learning. Journal of Software, 2018, 29(1): 109-130 (in Chinese with English abstract).

History
  • Received: 2017-05-05
  • Revised: 2017-06-09
  • Published online: 2017-10-09