Neural Network Instruction Set Extension and Code Mapping Mechanism
Author: Lou Wenqi, Wang Chao, Gong Lei, Zhou Xuehai
Affiliation:

Fund Project:

National Key Research and Development Program of China (2017YFA0700900, 2017YFA0700903); National Natural Science Foundation of China (61379040); Natural Science Foundation of Jiangsu Province, China (BK20181193); Youth Innovation Promotion Association CAS (2017497)

Abstract:

In recent years, convolutional neural networks (CNNs) have received widespread attention in the machine learning community owing to their high accuracy in tasks such as character recognition and image classification. However, the compute-intensive and memory-intensive nature of CNNs poses huge challenges to general-purpose processors, which must support a wide variety of workloads. Consequently, a large number of CNN-specific hardware accelerators have emerged to improve efficiency. Although these accelerators are highly efficient, they usually lack flexibility. In this study, classical CNN models are analyzed and a domain-specific instruction set of 10 matrix instructions, called RV-CNN, is designed on top of the promising RISC-V architecture. By abstracting CNN computation into matrix instructions, the proposed design provides sufficient flexibility for CNNs and achieves higher code density than a general-purpose ISA. On this basis, a code-to-instruction mapping mechanism is proposed. Using RV-CNN to build different CNN models on the Xilinx ZC702, it was found that, compared with an x86 processor, RV-CNN achieves on average 141 times the energy efficiency and 8.91 times the code density; compared with a GPU, it achieves on average 1.25 times the energy efficiency and 1.95 times the code density. Moreover, compared with previous CNN accelerators, the proposed design supports typical CNN models while maintaining good energy efficiency.
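    To make the code-mapping idea concrete, below is a minimal Python sketch of how such a pass might lower one convolutional layer onto matrix-style instructions via the standard im2col reduction (convolution expressed as a single matrix-matrix multiply). The mnemonics MLOAD/MMM/MSTORE, the operand syntax, and the ConvLayer fields are illustrative assumptions, not the paper's actual RV-CNN encoding.

```python
# Illustrative lowering pass: map a conv layer to matrix instructions
# using im2col (convolution as one matrix-matrix multiply).
# NOTE: MLOAD/MMM/MSTORE and the operand syntax are hypothetical
# placeholders, not the paper's actual RV-CNN instruction encoding.
from dataclasses import dataclass


@dataclass
class ConvLayer:
    in_ch: int   # input channels
    out_ch: int  # output channels (number of filters)
    k: int       # square kernel size
    out_h: int   # output feature-map height
    out_w: int   # output feature-map width


def lower_conv(layer: ConvLayer) -> list[str]:
    """Lower a conv layer to a matrix-multiply instruction sequence.

    With im2col, convolution becomes an (out_ch x K) * (K x N) product,
    where K = in_ch * k * k and N = out_h * out_w.
    """
    K = layer.in_ch * layer.k * layer.k
    N = layer.out_h * layer.out_w
    return [
        f"MLOAD  M1, weights   ; {layer.out_ch}x{K} filter matrix",
        f"MLOAD  M2, im2col    ; {K}x{N} unfolded input patches",
        f"MMM    M3, M1, M2    ; {layer.out_ch}x{N} output feature maps",
        "MSTORE M3, output",
    ]


if __name__ == "__main__":
    # Shape of an AlexNet-like first conv layer, for illustration only.
    for insn in lower_conv(ConvLayer(in_ch=3, out_ch=96, k=11, out_h=55, out_w=55)):
        print(insn)
```

    The sketch also suggests where the code-density gain comes from: one matrix instruction stands in for the deep scalar loop nest a general-purpose ISA would need for the same layer.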

Get Citation

Lou WQ, Wang C, Gong L, Zhou XH. A neural network instruction set extension and code mapping mechanism. Ruan Jian Xue Bao/Journal of Software, 2020, 31(10): 3074-3086 (in Chinese with English abstract).

History
  • Received: February 16, 2020
  • Revised: April 04, 2020
  • Online: June 11, 2020
  • Published: October 06, 2020