AutoConfig: Automatic Configuration Mechanism for Deep Learning Compilation Optimization
Authors: 张洪滨, 周旭林, 邢明杰, 武延军, 赵琛
CLC Number: TP314

Fund Project: National Key R&D Program of China (2022YFB4401402)


Abstract:

With the rapid development of deep learning models and hardware architectures, deep learning compilers have been widely adopted. At present, the compilation optimization and tuning of deep learning models rely mainly on manual tuning based on high-performance operator libraries and on search-based automatic tuning. However, when facing diverse target operators and the adaptation requirements of multiple hardware platforms, high-performance operator libraries must be implemented repeatedly for each architecture. In addition, existing auto-tuning schemes suffer from large search overhead and a lack of interpretability. To address these problems, this study proposes AutoConfig, an automatic configuration mechanism for deep learning compilation optimization. For different deep learning workloads and specific hardware platforms, AutoConfig builds interpretable analytical models of the optimization algorithms, performs a comprehensive analysis through static information extraction and dynamic overhead measurement, and, based on the analysis results, automatically completes algorithm selection and tuning with a configurable code generation technique. AutoConfig innovatively combines the optimization analysis model with a configurable code generation strategy, which guarantees performance speedup, reduces repeated development overhead, and simplifies the tuning process. On this basis, AutoConfig is further integrated into the deep learning compiler Buddy Compiler, analytical models are built for multiple optimization algorithms for matrix multiplication and convolution, and the automatically configured code generation strategy is evaluated on multiple SIMD hardware platforms. Experimental results verify that AutoConfig effectively completes parameter configuration and algorithm selection in the code generation strategy. Compared with manually or automatically optimized code, the code generated by AutoConfig achieves comparable execution performance without the repeated implementation overhead of manual tuning or the search overhead of automatic tuning.

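The mechanism the abstract describes can be pictured as a small, interpretable cost model: static workload features (sizes, vector width) feed analytical cost formulas, dynamically measured per-operation costs calibrate them, and the cheapest candidate becomes the configuration handed to the code generator. The sketch below is purely illustrative; all class names, candidate algorithms, and cost formulas are assumptions for exposition, not the paper's actual model.

```python
# Hypothetical sketch of an AutoConfig-style analysis model: score candidate
# optimization algorithms for a matmul workload on a platform, then select
# the cheapest configuration. Names and formulas are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Platform:
    simd_width: int        # vector lanes (static hardware information)
    vector_op_cost: float  # measured per-vector-op cost (dynamic measurement)
    scalar_op_cost: float  # measured per-scalar-op cost (dynamic measurement)


@dataclass
class MatMulWorkload:
    m: int
    n: int
    k: int


def scalar_cost(w: MatMulWorkload, p: Platform) -> float:
    # Naive triple loop: m * n * k scalar multiply-adds.
    return w.m * w.n * w.k * p.scalar_op_cost


def vectorized_cost(w: MatMulWorkload, p: Platform) -> float:
    # Vectorize along n: ceil(n / simd_width) vector ops per (i, k) pair,
    # so a tail that does not fill a vector still costs a full vector op.
    vec_ops_per_row = -(-w.n // p.simd_width)  # ceiling division
    return w.m * w.k * vec_ops_per_row * p.vector_op_cost


def select_algorithm(w: MatMulWorkload, p: Platform):
    # The analytical model stays interpretable: each candidate's cost is an
    # explicit formula, and the argmin is the configuration passed on to
    # code generation (no black-box search over the schedule space).
    candidates = {
        "scalar": scalar_cost(w, p),
        "vectorize-n": vectorized_cost(w, p),
    }
    return min(candidates, key=candidates.get), candidates
```

For example, on a platform with 8-lane SIMD where a vector op costs only slightly more than a scalar op, `select_algorithm` picks the vectorized candidate for a 64x64x64 matmul; with one lane and expensive vector ops, it falls back to the scalar loop. The point mirrors the abstract's claim: selection is decided once by the model, not by repeated search.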

Cite this article:

张洪滨, 周旭林, 邢明杰, 武延军, 赵琛. AutoConfig: Automatic configuration mechanism for deep learning compilation optimization. Journal of Software, 2024, 35(6): 1-19.

History
  • Received: 2023-09-11
  • Revised: 2023-10-30
  • Published online: 2024-01-05
Copyright: Institute of Software, Chinese Academy of Sciences (Email: jos@iscas.ac.cn)