True Online Natural Actor-Critic Algorithm for the Continuous Space Problem
Authors:

ZHU Fei, ZHU Hai-Jun, LIU Quan, CHEN Dong-Huo, FU Yu-Chen

Author biographies:

ZHU Fei (1978-), male, born in Suzhou, Jiangsu, Ph.D., associate professor, CCF professional member; his main research interests are machine learning, artificial intelligence, and bioinformatics. CHEN Dong-Huo (1974-), male, Ph.D., lecturer, CCF professional member; his main research interests are program analysis and verification, model checking, automated reasoning, and machine learning. ZHU Hai-Jun (1992-), male, M.S.; his main research interests are reinforcement learning and kernel methods. FU Yu-Chen (1968-), male, Ph.D., professor, CCF senior member; his main research interests are reinforcement learning and artificial intelligence. LIU Quan (1969-), male, Ph.D., professor, doctoral supervisor, CCF senior member; his main research interests are reinforcement learning and kernel methods.

Corresponding author:

FU Yu-Chen, E-mail: yuchenfu@suda.edu.cn

CLC Number:

TP301

Fund Project:

National Natural Science Foundation of China (61303108, 61373094, 61472262); Jiangsu College Natural Science Research Key Program (17KJA520004); Program of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University) (93K172014K04); Suzhou Industrial Application of Basic Research Program (SYG201422); Program of the Provincial Key Laboratory for Computer Information Processing Technology (Soochow University) (KJS1524); China Scholarship Council Project (201606920013)


Abstract:

Policy gradient methods have been widely studied as an effective way to solve decision-making problems in continuous spaces. However, because the policy-gradient estimate has high variance, these methods suffer from low sample utilization and slow convergence. To address this problem, a true online incremental natural actor-critic (TOINAC) algorithm is proposed within the actor-critic (AC) framework. TOINAC adopts the natural gradient, which is superior to the conventional gradient, and builds on the true online temporal-difference (TOTD) algorithm, introducing a new type of forward view that improves the natural-gradient actor-critic algorithm. In the critic part, the efficiency of TOTD is exploited to estimate the value function; in the actor part, the new forward view is used to estimate the natural gradient, and eligibility traces then turn this estimate into an online one, improving both the accuracy of the natural-gradient estimate and the efficiency of the algorithm. TOINAC is combined with kernel methods and a normal (Gaussian) policy distribution to solve continuous-space problems. Finally, simulation experiments on the classical continuous-space benchmarks cart pole, Mountain Car, and Acrobot verify the effectiveness of the algorithm.
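The abstract names several standard ingredients: a true online TD(λ) critic, a Gaussian (normal) policy over state features, a natural-gradient actor, and eligibility traces. The sketch below is only a minimal illustration of how such pieces can fit together in an incremental actor-critic loop, assuming radial-basis features, a fixed-variance Gaussian policy, and a Bhatnagar-style compatible-feature update for the natural gradient; it is not the paper's exact TOINAC update rules, and names such as phi, step, and the step-size values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative radial-basis features over a 1-D state in [-1, 1].
    CENTERS = np.linspace(-1.0, 1.0, 10)
    WIDTH = 0.2

    def phi(s):
        """Feature vector for state s (hypothetical RBF coding)."""
        return np.exp(-0.5 * ((s - CENTERS) / WIDTH) ** 2)

    N = CENTERS.size
    gamma, lam = 0.99, 0.8                       # discount factor, trace decay
    alpha_v, alpha_w, alpha_u = 0.1, 0.05, 0.01  # step sizes (illustrative)

    v = np.zeros(N)       # critic weights:      V(s) ~= v . phi(s)
    theta = np.zeros(N)   # actor weights:       mu(s) = theta . phi(s)
    sigma = 0.5           # fixed std-dev of the Gaussian policy
    w = np.zeros(N)       # compatible weights:  natural-gradient estimate
    e_v = np.zeros(N)     # critic eligibility trace
    e_w = np.zeros(N)     # actor eligibility trace (compatible features)
    V_old = 0.0           # bookkeeping term of true online TD(lambda)

    def act(s):
        """Sample an action from the Gaussian policy N(mu(s), sigma^2)."""
        return rng.normal(theta @ phi(s), sigma)

    def step(s, a, r, s_next, done):
        """Process one transition (s, a, r, s') with the sketched updates."""
        global V_old
        f, f_next = phi(s), phi(s_next)
        V = v @ f
        V_next = 0.0 if done else v @ f_next
        delta = r + gamma * V_next - V          # TD error

        # Critic: true online TD(lambda) (van Seijen & Sutton).
        e_v[:] = gamma * lam * e_v + f - alpha_v * gamma * lam * (e_v @ f) * f
        v[:] = v + alpha_v * (delta + V - V_old) * e_v - alpha_v * (V - V_old) * f
        V_old = V_next

        # Actor: estimate the natural gradient with compatible features psi and
        # an eligibility trace, then follow the estimate (Bhatnagar-style update).
        mu = theta @ f
        psi = (a - mu) / sigma ** 2 * f         # grad of log N(a; mu, sigma^2) w.r.t. theta
        e_w[:] = gamma * lam * e_w + psi
        w[:] = w + alpha_w * (delta * e_w - (psi @ w) * psi)
        theta[:] = theta + alpha_u * w

In a full agent, act and step would be called inside the usual environment loop; the kernel-based features mentioned in the abstract would replace the fixed RBF coding assumed here.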

Cite this article:

ZHU Fei, ZHU Hai-Jun, LIU Quan, CHEN Dong-Huo, FU Yu-Chen. True online natural actor-critic algorithm for the continuous space problem. Journal of Software, 2018, 29(2): 267-282 (in Chinese with English abstract).

History:
  • Received: 2016-11-04
  • Last revised: 2016-12-13
  • Published online: 2017-03-30