Meta-Interpretive Learning Based on Memory Strategy

Authors: Wang Rong, Tian Cong, Sun Jun, Yu Bin, Duan Zhenhua

Corresponding author: Wang Rong, E-mail: rwang_bily@stu.xidian.edu.cn

CLC number: TP311

Fund project: National Natural Science Foundation of China (62192734)



Abstract:

Meta-Interpretive Learning (MIL) is an Inductive Logic Programming (ILP) method that aims to learn a program from a set of examples, metarules, and other background knowledge. MIL uses a depth-first, failure-driven strategy to search the program space for suitable clauses from which to build programs. This mechanism inevitably causes the same goals to be proved repeatedly. In this paper, we propose a pruning strategy that leverages Prolog's built-in database mechanism to store failed goals together with their corresponding error information, thereby avoiding redundant proof work. The accumulated error information can subsequently guide the MIL system in optimizing and adjusting its later learning. We prove the correctness of the pruning algorithm and theoretically calculate the proportion by which the program space is reduced. We apply the proposed approach to two existing MIL systems, Metagol and MetagolAI, resulting in two new MIL systems, MetagolF and MetagolAI_F. Empirical results on four different tasks show that the proposed strategy significantly reduces the time needed to learn the same programs.
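To make the idea concrete, below is a minimal sketch, in SWI-Prolog, of memoizing failed goals in Prolog's built-in clause database so that a goal already known to fail is pruned instead of re-proved. It is only an illustration of the mechanism named in the abstract, not the authors' MetagolF implementation; the predicate names prove_memo/1, failed_goal/2, and record_failure/2 are assumptions introduced for this sketch.

```prolog
% Illustrative sketch only: memoize failed goals (and the errors they
% raised) in the dynamic database so repeated attempts are pruned.
:- dynamic failed_goal/2.        % failed_goal(Goal, Reason)

%% prove_memo(+Goal) is semidet.
%  Succeeds (once) if Goal is provable; fails immediately if Goal is
%  already recorded as a failure; otherwise records why it failed.
%  For simplicity, goals are matched by unification; nonground goals
%  would need a variant check in a real implementation.
prove_memo(Goal) :-
    \+ failed_goal(Goal, _),                      % prune known failures
    (   catch(call(Goal), Err, record_failure(Goal, error(Err)))
    ->  true
    ;   record_failure(Goal, no_proof)            % plain failure
    ).

% record_failure(+Goal, +Reason): store the failure once, then fail.
record_failure(Goal, Reason) :-
    (   failed_goal(Goal, _) -> true              % already recorded
    ;   assertz(failed_goal(Goal, Reason))
    ),
    fail.
```

In a MIL setting, one would wrap the goals attempted by the meta-interpreter with such a predicate; querying failed_goal/2 afterwards exposes the accumulated error information that, as the abstract notes, can guide later learning.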

Cite this article:

Wang R, Tian C, Sun J, Yu B, Duan ZH. Meta-interpretive learning based on memory strategy. Ruan Jian Xue Bao/Journal of Software, 2025, 36(8).

History:
  • Received: 2024-08-25
  • Revised: 2024-10-14
  • Published online: 2024-12-10