Neural Network Instruction Set Extension and Code Mapping Mechanism
Authors: LOU Wen-Qi, WANG Chao, GONG Lei, ZHOU Xue-Hai
Author biographies:

LOU Wen-Qi (1995-), male, Ph.D. candidate. His research interests include neural network processors and reconfigurable hardware accelerators.
WANG Chao (1985-), male, Ph.D., associate professor, senior member of CCF. His research interests include neural network accelerators and deep learning processors.
GONG Lei (1990-), male, postdoctoral researcher, professional member of CCF. His research interests include computer architecture, reconfigurable hardware accelerators, and neural network processors.
ZHOU Xue-Hai (1966-), male, Ph.D., professor and doctoral supervisor, senior member of CCF. His research interests include computer architecture and embedded systems.

Corresponding author:

WANG Chao, E-mail: cswang@ustc.edu.cn

Fund project:

National Key Research and Development Program of China (2017YFA0700900, 2017YFA0700903); National Natural Science Foundation of China (61379040); Natural Science Foundation of Jiangsu Province, China (BK20181193); Youth Innovation Promotion Association CAS (2017497)




    Abstract:

    In recent years, the high accuracy of convolutional neural networks (CNNs) in image recognition and classification has attracted widespread attention in the field of machine learning. Nevertheless, the compute-intensive and memory-intensive characteristics of CNNs pose huge challenges to general-purpose processors, which must support a wide variety of workloads. As a result, a large number of CNN-specific hardware accelerators have emerged; although highly efficient, they usually lack flexibility. In this study, classical CNN models are analyzed and a domain-specific instruction set of 10 matrix instructions, called RV-CNN, is designed on top of the promising RISC-V architecture. By abstracting CNN computation into instructions, the proposed design can flexibly support CNN inference and achieves a higher code density than general-purpose ISAs. On this basis, a code-to-instruction mapping mechanism is proposed. Building different CNN models with RV-CNN on the Xilinx ZC702 shows that, compared with an x86 processor, RV-CNN delivers on average 141x the energy efficiency and 8.91x the code density; compared with a GPU, it delivers on average 1.25x the energy efficiency and 1.95x the code density. In addition, compared with previous CNN accelerators, the design supports typical CNN models while maintaining good energy efficiency.
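    To make the abstract's two key ideas concrete (abstracting CNN computation into a few matrix instructions, and mapping network code onto them), the sketch below lowers one convolutional layer to a matrix-instruction sequence via the standard im2col transformation. This is a minimal sketch under assumptions: the mnemonics (MLOAD, MMUL, MACT, MSTORE) and the lowering routine are hypothetical stand-ins for exposition, not the paper's actual 10 RV-CNN instructions, although RISC-V does reserve the custom-0 through custom-3 opcode space for exactly this kind of extension.

import numpy as np

# Illustrative sketch only: these mnemonics are assumed, not the
# paper's actual RV-CNN encoding.

def im2col(x, k, stride=1):
    # Unfold every k-by-k patch of a 2D input into one column, so that
    # convolution becomes a single matrix-matrix multiply (conv as GEMM).
    h, w = x.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    cols = np.empty((k * k, out_h * out_w))
    patches = ((i, j)
               for i in range(0, h - k + 1, stride)
               for j in range(0, w - k + 1, stride))
    for idx, (i, j) in enumerate(patches):
        cols[:, idx] = x[i:i + k, j:j + k].ravel()
    return cols

def lower_conv_layer(name):
    # Code-to-instruction mapping, sketched: each layer of the network
    # is emitted as a short, fixed sequence of matrix instructions.
    return [
        f"MLOAD  m0, {name}.weights  # kernel matrix -> on-chip buffer",
        f"MLOAD  m1, {name}.patches  # im2col'd input -> on-chip buffer",
        f"MMUL   m2, m0, m1          # convolution as one matrix multiply",
        f"MACT   m2, m2, relu        # fused activation",
        f"MSTORE {name}.out, m2      # feature map back to memory",
    ]

x = np.arange(16, dtype=float).reshape(4, 4)
print(im2col(x, k=3).shape)              # (9, 4): four 3x3 patches as columns
print("\n".join(lower_conv_layer("conv1")))

    On a general-purpose ISA, the MMUL line alone would expand into a triply nested loop of scalar loads, multiplies, and adds; collapsing such loop nests into single matrix instructions is the intuition behind the code-density advantage reported above.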

Cite this article

LOU Wen-Qi, WANG Chao, GONG Lei, ZHOU Xue-Hai. Neural network instruction set extension and code mapping mechanism. Ruan Jian Xue Bao/Journal of Software, 2020, 31(10): 3074-3086 (in Chinese with English abstract).

History
  • Received: 2020-02-16
  • Revised: 2020-04-04
  • Published online: 2020-06-11
  • Published: 2020-10-06