Abstract: A question-answering system based on knowledge graphs can analyze user questions and has become an effective way to retrieve relevant knowledge and automatically answer the given questions. Such a system typically uses a neural program induction model to convert a natural language question into a logical form, and the answer is obtained by executing the logical form on the knowledge graph. However, building a knowledge question-answering system with pre-trained language models and knowledge graphs involves two challenges: (1) given the QA (question-answering) context, relevant knowledge must be identified from a large KG (knowledge graph); (2) joint reasoning must be performed over the QA context and the KG. To address these challenges, a language model-driven knowledge graph question-answering model, QA-KGNet, is proposed. It connects the QA context and the KG to form a joint graph, uses a language model to compute the relevance between the QA context node and the KG nodes, and employs a multi-head graph attention network to update the node representations. Extensive experiments on the CommonsenseQA, OpenBookQA, and MedQA-USMLE real-world datasets evaluate the performance of QA-KGNet, and the results show that QA-KGNet outperforms existing benchmark models and exhibits excellent structured reasoning capability.
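To make the two steps named in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: `relevance_score` scores each KG node against a pooled QA-context embedding (both assumed to come from the same pre-trained language model), and `MultiHeadGraphAttention` is a generic GAT-style layer that updates node representations by attending over graph neighbors. All names (`relevance_score`, `MultiHeadGraphAttention`) and the cosine-similarity scoring choice are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the abstract's two components; not QA-KGNet's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def relevance_score(context_emb: torch.Tensor, node_embs: torch.Tensor) -> torch.Tensor:
    """Cosine relevance of each KG node embedding to the pooled QA-context
    embedding; both are assumed to be produced by the same LM encoder."""
    ctx = F.normalize(context_emb, dim=-1)    # (d,)
    nodes = F.normalize(node_embs, dim=-1)    # (N, d)
    return nodes @ ctx                        # (N,) relevance per node

class MultiHeadGraphAttention(nn.Module):
    """One multi-head attention layer restricted to graph edges:
    each node attends only over its neighbors (self-loops included)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_head = heads, dim // heads
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) 0/1 adjacency with self-loops.
        N = x.size(0)
        q = self.q(x).view(N, self.heads, self.d_head).transpose(0, 1)  # (H, N, d)
        k = self.k(x).view(N, self.heads, self.d_head).transpose(0, 1)
        v = self.v(x).view(N, self.heads, self.d_head).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5           # (H, N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))            # mask non-edges
        attn = torch.softmax(scores, dim=-1)
        updated = (attn @ v).transpose(0, 1).reshape(N, -1)             # (N, dim)
        return self.out(updated)

# Toy usage: 5 nodes on a chain graph with 16-dim features.
x = torch.randn(5, 16)
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
layer = MultiHeadGraphAttention(dim=16, heads=4)
updated_nodes = layer(x, adj)                   # joint-graph node update
scores = relevance_score(torch.randn(16), x)    # LM-based node relevance
```

In a full system of the kind the abstract describes, the relevance scores would typically be used to prune or weight KG nodes before the attention layers run, so that joint reasoning focuses on the subgraph most related to the QA context.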