Meta-Interpretive Learning Based on Memory Strategy

CLC Number: TP311

    Abstract:

    Meta-Interpretive Learning (MIL) is a form of Inductive Logic Programming (ILP) that learns a program from a set of examples, metarules, and other background knowledge. To generate programs, MIL adopts a depth-first, failure-driven strategy to search the program space for suitable clauses. This mechanism inevitably leads to repeated proofs of the same goals. In this paper, we propose a pruning strategy that leverages the built-in database mechanism of Prolog to store failed goals together with their corresponding error information, thereby avoiding redundant proof processes. The accumulated error information can later guide the MIL system in adjusting and optimizing its learning process. We prove the correctness of the pruning algorithm and theoretically quantify the proportion of the program space that is pruned. We apply the proposed approach to two existing MIL systems, Metagol and MetagolAI, yielding two new MIL systems, MetagolF and MetagolAI_F. Empirical results on four different tasks show that the proposed strategy significantly reduces the time required to learn the same programs.
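    The memory strategy described in the abstract can be sketched in Prolog. The following is a minimal illustration only, not the MetagolF implementation: the predicates failed_goal/1 and prove_memo/1 are invented here, and the sketch records only the failed goal itself rather than the richer error information the paper stores.

```prolog
% Memoise failed proof attempts in Prolog's internal database so that
% a goal which has already failed is never re-proved.
:- dynamic failed_goal/1.

% prove_memo(+Goal): like call/1, but a goal recorded as failed is
% rejected immediately, skipping the redundant proof attempt.
prove_memo(Goal) :-
    failed_goal(Goal), !, fail.        % known failure: prune at once
prove_memo(Goal) :-
    (   call(Goal)
    ->  true
    ;   assertz(failed_goal(Goal)),    % remember the failure
        fail
    ).
```

Note that such memoisation is only sound for ground goals: a non-ground goal that fails under one variable binding may still succeed under another, so a real system must restrict or index what it records.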

Get Citation

Wang R, Tian C, Sun J, Yu B, Duan ZH. Meta-interpretive learning based on memory strategy. Journal of Software, 2025, 36(8) (in Chinese).
History
  • Received: August 25, 2024
  • Revised: October 14, 2024
  • Online: December 10, 2024
Copyright: Institute of Software, Chinese Academy of Sciences