Abstract: In recent years, sequence-to-sequence learning models with the encoder-decoder architecture have become the mainstream approach to summary generation. However, when computing the hidden state of a word, such models typically consider only a limited number of preceding (or following) words and therefore cannot capture global information about the text, nor optimize the encoding globally. To address these challenges, this study introduces a global self-matching mechanism that optimizes the encoder globally, and proposes a global gating unit that extracts the core content of the text. For each word, the global self-matching mechanism dynamically collects relevant information from the entire input text according to the degree to which the word's semantics match the overall semantics of the text, and then encodes the word together with its matched information into the final hidden representation, yielding a hidden representation that contains global information. Meanwhile, since integrating global information into every word may introduce redundancy, this study introduces a global gating unit that filters the information flowing into the decoder according to the global information obtained from the self-matching layer, retaining only the core content of the source text. Experimental results show that the proposed model achieves a significant improvement in ROUGE scores over state-of-the-art methods.
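To make the two components concrete, the following is a minimal PyTorch sketch of the architecture as described in the abstract, not the authors' implementation: the module names (GlobalSelfMatching, GlobalGate), the use of mean pooling for the overall text semantics, the additive scoring function, and the GRU fusion layer are all assumptions made for illustration.

```python
# Hypothetical sketch of the self-matching encoder layer and global gate.
# Names and design details are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class GlobalSelfMatching(nn.Module):
    """For each encoder state, attend over the whole sequence, scoring each
    position by how well it matches the current word and a global summary
    vector, then fuse the word with its matched context into a new state."""
    def __init__(self, hidden: int):
        super().__init__()
        self.score = nn.Linear(3 * hidden, 1)      # match(word_i, word_j, global)
        self.fuse = nn.GRU(2 * hidden, hidden, batch_first=True)

    def forward(self, enc: torch.Tensor) -> torch.Tensor:
        # enc: (batch, seq_len, hidden) encoder hidden states
        B, T, H = enc.shape
        g = enc.mean(dim=1, keepdim=True)          # (B, 1, H) overall text semantics
        q = enc.unsqueeze(2).expand(B, T, T, H)    # word i (query)
        k = enc.unsqueeze(1).expand(B, T, T, H)    # word j (candidate match)
        gg = g.unsqueeze(1).expand(B, T, T, H)     # global vector, broadcast
        scores = self.score(torch.cat([q, k, gg], dim=-1)).squeeze(-1)  # (B, T, T)
        attn = torch.softmax(scores, dim=-1)       # matching degree per position
        ctx = torch.bmm(attn, enc)                 # matched information per word
        fused, _ = self.fuse(torch.cat([enc, ctx], dim=-1))
        return fused                               # hidden states with global info

class GlobalGate(nn.Module):
    """Sigmoid gate that filters each state against the global vector, so
    redundant information is suppressed before it reaches the decoder."""
    def __init__(self, hidden: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        g = states.mean(dim=1, keepdim=True).expand_as(states)
        z = torch.sigmoid(self.gate(torch.cat([states, g], dim=-1)))
        return z * states                          # element-wise filtering

# Toy usage: encoder output -> self-matching -> gate -> (decoder attention)
enc_out = torch.randn(2, 12, 64)
states = GlobalSelfMatching(64)(enc_out)
filtered = GlobalGate(64)(states)
print(filtered.shape)                              # torch.Size([2, 12, 64])
```

Note that scoring every word pair against the global vector costs O(T^2) memory per sequence in this sketch; a production variant would likely compute the scores without materializing the (B, T, T, H) tensors.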