Abstract: The task of machine reading comprehension is to make machines understand natural language text and correctly answer questions about that text. Owing to the limited scale of early datasets, most early machine reading comprehension methods were built on manual features and traditional machine learning techniques. In recent years, with the development of knowledge bases and crowdsourcing, researchers have released high-quality, large-scale datasets, creating new opportunities for neural network models and for machine reading comprehension as a whole. This survey provides an exhaustive review of state-of-the-art research on neural-network-based machine reading comprehension. First, an overview of machine reading comprehension is given, covering its development, problem formulation, and evaluation metrics. Then, the technologies used in the prevailing neural reading comprehension framework, namely the embedding layer, encoder layer, interaction layer, and output layer, are comprehensively reviewed, and the latest BERT pre-training model and its advantages are discussed. After that, recent progress on machine reading comprehension datasets and neural reading comprehension models is summarized, together with a detailed comparison and analysis of the most representative datasets and neural network models. Finally, the research challenges and future directions of machine reading comprehension are presented.
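To make the four-layer framework mentioned above concrete, the following is a minimal, illustrative sketch (not taken from any specific model in the survey) of an extractive neural reader in PyTorch: tokens are embedded, contextualized by a BiLSTM encoder, fused with the question through a simple bilinear attention, and mapped to answer-span start/end logits. The class name, layer choices, and dimensions are hypothetical simplifications for clarity.

```python
import torch
import torch.nn as nn


class NeuralReaderSketch(nn.Module):
    """Generic embedding -> encoder -> interaction -> output pipeline (illustrative only)."""

    def __init__(self, vocab_size=30000, emb_dim=128, hidden_dim=128):
        super().__init__()
        # Embedding layer: map passage/question token ids to vectors.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoder layer: contextualize each sequence (a BiLSTM is one common choice).
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Interaction layer: bilinear question-aware attention over the passage.
        self.att = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        # Output layer: predict answer-span start and end positions in the passage.
        self.start_head = nn.Linear(4 * hidden_dim, 1)
        self.end_head = nn.Linear(4 * hidden_dim, 1)

    def forward(self, passage_ids, question_ids):
        p, _ = self.encoder(self.embed(passage_ids))            # (B, Lp, 2H)
        q, _ = self.encoder(self.embed(question_ids))           # (B, Lq, 2H)
        # Similarity between every passage token and every question token.
        scores = torch.bmm(self.att(p), q.transpose(1, 2))      # (B, Lp, Lq)
        # Question-aware passage representation via attention-weighted sum.
        q_aware = torch.bmm(torch.softmax(scores, dim=-1), q)   # (B, Lp, 2H)
        fused = torch.cat([p, q_aware], dim=-1)                 # (B, Lp, 4H)
        start_logits = self.start_head(fused).squeeze(-1)       # (B, Lp)
        end_logits = self.end_head(fused).squeeze(-1)           # (B, Lp)
        return start_logits, end_logits


# Example usage with random token ids (batch of 2, passage length 50, question length 10).
model = NeuralReaderSketch()
start, end = model(torch.randint(0, 30000, (2, 50)), torch.randint(0, 30000, (2, 10)))
print(start.shape, end.shape)  # torch.Size([2, 50]) torch.Size([2, 50])
```

In BERT-style readers the embedding and encoder layers above are replaced by a pre-trained Transformer, and the interaction between question and passage happens implicitly through self-attention over their concatenation; only the span-prediction output layer remains task-specific.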