National Key Research and Development Program of China (2022YFE0197600)
With the development of deep learning technologies such as Transformer-based pre-trained models, large language models (LLMs) have demonstrated remarkable comprehension and creativity. They not only have a significant impact on downstream tasks such as abstractive summarization, dialogue generation, machine translation, and data-to-text generation, but also show broad promise in multimodal applications such as image captioning and visual storytelling. Although LLMs offer significant performance advantages, their deep learning architectures make them prone to hallucination, which not only degrades system performance but also seriously undermines their trustworthiness and breadth of application; the resulting legal and ethical risks have become the main obstacles to their further development and deployment. Focusing on the hallucination problem in LLMs, this survey first provides a systematic overview of hallucinations in LLMs and analyzes their origins and causes. Secondly, it systematically reviews methods for evaluating and mitigating hallucinations, categorizing and thoroughly comparing these methods across different tasks. Finally, future trends and potential solutions for the hallucination problem are discussed from the perspectives of evaluation and mitigation.
Liu ZY, Wang PJ, Song XB, Zhang X, Jiang BB. Survey on hallucinations in large language models. 软件学报/Journal of Software, 2025, 36(3): 1152–1185 (in Chinese).