Abstract: Language models, which capture the implicit knowledge of a language, have long been a fundamental problem in natural language processing, and the current research focus is on language models based on deep learning. Through pre-training and fine-tuning, language models demonstrate powerful representational ability and greatly improve the performance of downstream tasks. Centering on basic principles and application directions, this survey takes neural probabilistic language models and pre-trained language models as the entry point for combining deep learning with natural language processing. Building on the basic concepts and theory of language models, it introduces the applications and challenges of neural probabilistic and pre-trained language models. Existing neural probabilistic and pre-trained language models, together with their methods, are then compared and analyzed. In addition, the training methods of pre-trained language models are elaborated from two aspects: new training tasks and improved network structures. Current research directions for pre-trained models, including model compression, knowledge fusion, multimodality, and cross-lingual modeling, are summarized and evaluated. Finally, the bottlenecks of language models in natural language processing applications are summed up, and possible future research priorities are outlined.