Abstract: Meta-Interpretive Learning (MIL) is a form of Inductive Logic Programming (ILP) that aims to learn a program from a set of examples, metarules, and other background knowledge. MIL adopts a depth-first, failure-driven strategy to search the program space for suitable clauses from which to construct programs. This mechanism, however, inevitably leads to repeated proofs of the same goals. In this paper, we propose a pruning strategy that leverages Prolog's built-in database mechanism to store failed goals together with their corresponding error information, thereby avoiding redundant proof attempts. The accumulated error information can subsequently serve as guidance to help the MIL system adjust and optimize its future learning. We prove the correctness of the pruning algorithm and theoretically quantify the proportion of the program space that it eliminates. We apply the proposed approach to two existing MIL systems, Metagol and MetagolAI, resulting in two new MIL systems, MetagolF and MetagolAI_F. Empirical results on four different tasks show that the proposed strategy significantly reduces the time required to learn the same programs.
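The core idea of caching failed goals can be sketched in a few lines of Prolog. The sketch below is illustrative only, not the authors' implementation: the predicate names memo_prove/1 and failed/1 are hypothetical, and it assumes SWI-Prolog's dynamic database (assertz/1), the variant operator =@=, and the soft-cut construct *->.

```prolog
% Illustrative sketch: memoise failed goals in Prolog's dynamic database
% so the search never re-attempts a proof that is already known to fail.
% memo_prove/1 and failed/1 are hypothetical names, not Metagol's API.
:- dynamic failed/1.

memo_prove(Goal) :-
    % Prune: fail immediately if a variant of Goal has already failed.
    \+ (failed(Stored), Stored =@= Goal),
    (   call(Goal)
    *-> true                      % soft cut: keep all solutions of Goal
    ;   assertz(failed(Goal)),    % cache the failure for later searches
        fail
    ).
```

Calling memo_prove/1 twice on the same failing goal performs the (possibly expensive) proof attempt only once; the second call fails immediately via the cached failed/1 fact. A system that also records error information, as described above, would presumably store it alongside the goal, e.g. as a fact of the form failed(Goal, Reason).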