School of Computer Science and Technology, Soochow University, Suzhou 215006, China; Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012, China
School of Computer Science and Technology, Soochow University, Suzhou 215006, China
Abstract:
Combining function approximation with reinforcement learning in large-scale or continuous state spaces is currently a hot topic in machine learning, and balancing exploration and exploitation during learning remains a difficult problem in reinforcement learning. To address the exploration-exploitation dilemma in large-scale or continuous state spaces with deterministic environments, this paper presents an approximate policy iteration algorithm based on Gaussian processes. The algorithm uses a Gaussian process to model the parameterized action-value function and, combined with a generative model, obtains the posterior distribution of the value-function parameters by Bayesian inference. During learning, it computes the value of perfect information of each action from this posterior distribution and selects actions by combining that gain with the expected value of the action-value function. To a certain extent, the algorithm resolves the balance between exploration and exploitation and thereby accelerates convergence. Experimental results on the classic Mountain Car problem show that the algorithm converges faster and with better accuracy.
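The abstract describes the method only in words. The sketch below is not the paper's implementation; it is a minimal illustration, assuming a Bayesian linear (degenerate Gaussian-process) model of the parameterized action-value function and a Dearden-style value-of-perfect-information bonus for action selection. The feature map phi, the Mountain Car-sized defaults (two state variables, three actions), and all hyper-parameters are hypothetical.

# Illustrative sketch only: a Bayesian linear-Gaussian model of Q(s, a)
# (a Gaussian process with a linear kernel over hand-made features) plus a
# value-of-perfect-information (VPI) action rule in the spirit of the abstract.
import math
import numpy as np


def phi(state, action, n_actions=3, n_state_feats=4):
    """Hypothetical feature map phi(s, a): state features placed in the block
    of the chosen action, zeros elsewhere (one block per discrete action)."""
    s = np.asarray(state, dtype=float)
    feats = np.concatenate([s, [1.0]])[:n_state_feats]
    vec = np.zeros(n_actions * n_state_feats)
    vec[action * n_state_feats: action * n_state_feats + len(feats)] = feats
    return vec


class BayesianLinearQ:
    """Gaussian posterior over the weight vector w of Q(s, a) = phi(s, a)^T w."""

    def __init__(self, dim, prior_var=1.0, noise_var=0.1):
        self.mean = np.zeros(dim)            # posterior mean of w
        self.cov = prior_var * np.eye(dim)   # posterior covariance of w
        self.noise_var = noise_var           # observation-noise variance

    def update(self, x, target):
        """Rank-one Bayesian regression update for target ~ N(x^T w, noise_var)."""
        Sx = self.cov @ x
        denom = self.noise_var + x @ Sx
        gain = Sx / denom
        self.mean = self.mean + gain * (target - x @ self.mean)
        self.cov = self.cov - np.outer(gain, Sx)

    def q_posterior(self, x):
        """Posterior mean and variance of Q at feature vector x."""
        return x @ self.mean, max(x @ self.cov @ x, 1e-12)


def _norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def _norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


def _expected_gain(mu, sigma, c, above=True):
    """E[max(Q - c, 0)] if above else E[max(c - Q, 0)], for Q ~ N(mu, sigma^2)."""
    d = (mu - c) if above else (c - mu)
    z = d / sigma
    return d * _norm_cdf(z) + sigma * _norm_pdf(z)


def select_action(model, state, actions):
    """Pick the action maximizing expected Q-value plus its VPI bonus."""
    stats = []
    for a in actions:
        mu, var = model.q_posterior(phi(state, a, n_actions=len(actions)))
        stats.append((a, mu, math.sqrt(var)))
    ranked = sorted(stats, key=lambda t: -t[1])
    best_mu, second_mu = ranked[0][1], ranked[1][1]
    scored = []
    for a, mu, sigma in stats:
        if mu == best_mu:
            # Information is valuable if the apparent best action is worse than
            # the runner-up.
            vpi = _expected_gain(mu, sigma, second_mu, above=False)
        else:
            # Information is valuable if this action beats the apparent best.
            vpi = _expected_gain(mu, sigma, best_mu, above=True)
        scored.append((mu + vpi, a))
    return max(scored)[1]

In a Mountain Car loop, one would call select_action at each step and then update the model with a bootstrapped target such as r + gamma * max over a' of the posterior mean of Q(s', a'); the discount gamma, noise_var, and feature map are placeholders that would need tuning.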