Abstract: Reasoning over knowledge graphs aims to infer new facts from known ones, so as to make the graphs as complete as possible. In recent years, distributed embedding-based reasoning methods have achieved great success on this task. However, due to their black-box nature, these methods cannot provide explanations for individual predictions. There has therefore been growing interest in designing reasoning models that users can understand and trust. Starting from the basic concept of interpretability, this work systematically surveys recently developed methods for interpretable reasoning on knowledge graphs. Specifically, it reviews the research progress of ante-hoc and post-hoc interpretable reasoning models. According to the scope of interpretability, ante-hoc interpretable models can be further divided into locally interpretable and globally interpretable models. For post-hoc interpretable reasoning, this study reviews representative reasoning methods and describes two post-hoc interpretation methods in detail. Next, it summarizes applications of explainable knowledge reasoning in fields such as finance and healthcare. Then, it surveys the current state of research on explainable knowledge reasoning. Finally, future directions for interpretable reasoning models are discussed.