Survey of Lightweight Neural Network Architectures
Authors: Ge Daohui, Li Hongsheng, Zhang Liang, Liu Ruyi, Shen Peiyi, Miao Qiguang
About the authors:

Ge Daohui (born 1994), male, Ph.D. candidate, CCF student member; his research interests include deep learning, computer vision, and object tracking. Liu Ruyi (born 1989), female, Ph.D., lecturer, CCF professional member; her research interests include computer vision, machine learning, and object segmentation and extraction. Li Hongsheng (born 1994), male, Ph.D. candidate; his research interests include deep learning, video classification, and action recognition. Shen Peiyi (born 1969), male, Ph.D., professor, CCF professional member; his research interests include computer vision, DSP/FPGA theory and applications, digital image processing, and computer networks. Zhang Liang (born 1981), male, Ph.D., associate professor, doctoral supervisor; his research interests include embedded multi-core systems, robotic semantic SLAM, semantic segmentation of 3D scenes, and gesture recognition. Miao Qiguang (born 1972), male, Ph.D., professor, doctoral supervisor, CCF distinguished member; his research interests include computer vision, machine learning, and big data analysis.

Corresponding author:

Li Hongsheng, E-mail: hsli@stu.xidian.edu.cn

Funding:

National Key Research and Development Program of China (2018YFC0807500, 2019YFB1311600); National Natural Science Foundation of China (61772396, 61472302, 61772392, 61902296); Xi'an Key Laboratory of Big Data and Intelligent Vision (201805053ZD4CG37); Fundamental Research Funds for the Central Universities (JBF180301); Shaanxi Province Key Research and Development Program (2018ZDXM-GY-036)




Abstract:

Deep neural networks have proven effective at solving problems in domains as varied as image understanding and natural language processing. At the same time, with the continuous development of mobile Internet technology, portable devices have become widespread and user demands keep growing, so designing efficient, high-performance lightweight neural networks is key to meeting them. This survey describes in detail three approaches to building lightweight neural networks: manually designed lightweight architectures, neural network model compression algorithms, and automated architecture design based on neural architecture search. It briefly summarizes and analyzes the characteristics of each approach, with emphasis on representative algorithms for constructing lightweight networks, and concludes by summarizing existing methods and outlining prospects for future development.
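
To make the first of these approaches concrete, the sketch below shows a depthwise separable convolution, the building block at the heart of manually designed lightweight networks such as MobileNets. This is a minimal illustration assuming PyTorch; the 3x3 kernel, channel counts, and input size are illustrative choices, not code from any of the surveyed papers.

```python
import torch
import torch.nn as nn

# A minimal sketch (not from the surveyed papers): a depthwise separable
# convolution factorizes a standard KxK convolution into a per-channel
# (depthwise) KxK convolution followed by a 1x1 (pointwise) convolution,
# cutting parameters and multiply-adds sharply at typical channel counts.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    x = torch.randn(1, 32, 56, 56)           # illustrative input size
    print(block(x).shape)                     # torch.Size([1, 64, 56, 56])
    # Parameter count vs. a plain 3x3 convolution with the same channels:
    n_sep = sum(p.numel() for p in block.parameters())
    n_std = sum(p.numel() for p in nn.Conv2d(32, 64, 3, padding=1,
                                             bias=False).parameters())
    print(n_sep, n_std)                       # separable uses far fewer weights
```

For a 3x3 kernel this factorization reduces the weight count from in_ch*out_ch*9 to in_ch*9 + in_ch*out_ch (here 2336 vs. 18432 convolution weights, roughly an 8x saving), which is why blocks of this form recur throughout manually designed lightweight architectures.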

Cite this article:

Ge DH, Li HS, Zhang L, Liu RY, Shen PY, Miao QG. Survey of lightweight neural network architectures. Journal of Software, 2020, 31(9): 2627-2653 (in Chinese).

History
  • Received: 2019-07-01
  • Revised: 2019-08-18
  • Available online: 2019-12-06
  • Published: 2020-09-06