Autonomous Learning System Towards Mobile Intelligence
Authors:

Xu Mengwei (1992-), male, Ph.D., his research interests include mobile/edge computing and operating systems.
Liu Yuanqiang (1997-), male, M.S., his research interests include mobile/edge computing and operating systems.
Huang Kang (1992-), male, Ph.D., his research interests include machine learning and natural language processing.
Liu Xuanzhe (1980-), male, Ph.D., associate professor, doctoral supervisor, CCF professional member, his research interests include services computing, Web technologies, and software engineering.
Huang Gang (1975-), male, Ph.D., professor, doctoral supervisor, CCF senior member, his research interests include software middleware, software architecture, and internetware.

Corresponding author:

Liu Xuanzhe, E-mail: xzl@pku.edu.cn

Funding:

Science Fund for Distinguished Young Scholars of China (61725201); R&D Projects in Key Areas of Guangdong Province of China (2020B010164002)




    Abstract:

Deploying machine learning models on mobile devices has become a research hotspot in both academia and industry, and a critical part of it is training models on user data. However, as data privacy receives growing attention, especially with the adoption of laws and regulations such as the GDPR in Europe and the Personal Information Protection Law in China, developers can no longer freely collect training data (in particular, private data) from user devices, and thus the quality of model training cannot be guaranteed. Researchers have explored various approaches to training neural networks on decentralized private data; this work summarizes these efforts and points out their limitations. To this end, a novel training paradigm for machine learning models on private mobile data is proposed: all computations involving private data are placed on local devices, and no data needs to be uploaded in any form, thereby preserving user privacy. This training paradigm is named autonomous learning. To address its two main challenges, i.e., the limited data volume and the insufficient computing power of mobile devices, AutLearn, the first autonomous learning system, is designed and implemented. It combines a cloud (public data, pre-training) and client (private data, transfer learning) cooperation methodology with on-device data augmentation to ensure model convergence on mobile devices. Furthermore, through a series of optimization techniques such as model compression, neural network compiler optimization, and runtime cache reuse, AutLearn significantly reduces the computational cost of on-device training. Autonomous learning was implemented with AutLearn in two classical neural network application scenarios, and the experimental results show that AutLearn can train models to accuracy comparable to or even higher than that of the traditional centralized/federated modes while preserving data privacy, and that it substantially reduces the computation and energy cost of model training on mobile devices.
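To make the cloud-client cooperation described above concrete, below is a minimal sketch in PyTorch-style Python. It is illustrative only, not the AutLearn implementation: the model TextClassifier, the function on_device_finetune, and the chosen layer split are hypothetical, assuming a text-classification task in which a feature extractor is pre-trained in the cloud on public data and only a lightweight head is fine-tuned on the device over private data.

# Illustrative sketch only; not the AutLearn implementation.
import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, hidden=64, num_classes=2):
        super().__init__()
        # Feature extractor: pre-trained in the cloud on public data.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        # Task-specific head: fine-tuned on the device over private data.
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                       # x: (batch, seq_len) token ids
        out, _ = self.lstm(self.embed(x))       # out: (batch, seq_len, hidden)
        return self.head(out[:, -1, :])         # last hidden state -> logits

def on_device_finetune(model, private_loader, epochs=1, lr=0.01):
    # Freeze the pre-trained feature extractor; updating only the small
    # head is what keeps the on-device backward pass cheap (transfer learning).
    for module in (model.embed, model.lstm):
        for p in module.parameters():
            p.requires_grad = False
    optimizer = torch.optim.SGD(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in private_loader:             # private data never leaves the device
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

In a real deployment, the pre-trained extractor weights would be shipped from the cloud to the device, private_loader would iterate over locally stored samples (expanded by on-device data augmentation), and optimizations such as model compression and compiler-level tuning would apply on top of this training loop.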

Citation:

Xu MW, Liu YQ, Huang K, Liu XZ, Huang G. Autonomous learning system towards mobile intelligence. Ruan Jian Xue Bao/Journal of Software, 2020,31(10):3004-3018 (in Chinese).

History:
  • Received: 2020-02-07
  • Revised: 2020-04-04
  • Online: 2020-06-11
  • Published: 2020-10-06