Model Hallucination
Model hallucination, the generation of factually incorrect or nonsensical outputs by large language models (LLMs) and other AI systems, is a significant obstacle to their reliable deployment. Current research focuses on detecting and mitigating hallucinations using techniques such as retrieval-augmented generation (RAG), contrastive decoding, and targeted instruction tuning, applied across model architectures ranging from LLMs to large vision-language models (LVLMs). These efforts aim to improve the accuracy and trustworthiness of AI systems in applications as varied as medical report generation and virtual try-on, and robust hallucination detection and mitigation remain prerequisites for reliable, responsible AI.
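As a concrete illustration of one mitigation strategy named above, the sketch below shows the core idea behind contrastive decoding in plain Python: the next token is scored by the gap between a stronger "expert" model's log-probability and a weaker "amateur" model's, restricted to tokens the expert already considers plausible. This is a minimal sketch under stated assumptions, not any particular paper's implementation; the toy probability tables, the alpha threshold, and the contrastive_next_token helper are all illustrative.

```python
import math

# Toy next-token distributions standing in for real model outputs.
# In practice these would come from a large "expert" LLM and a smaller
# "amateur" LLM conditioned on the same prefix (values here are made up).
expert_probs = {"Paris": 0.60, "Lyon": 0.22, "France": 0.15, "banana": 0.03}
amateur_probs = {"Paris": 0.30, "Lyon": 0.25, "France": 0.15, "banana": 0.30}

def contrastive_next_token(expert, amateur, alpha=0.1):
    """Pick the next token by contrastive decoding.

    Tokens are kept only if the expert assigns them at least
    alpha * (max expert probability) -- the plausibility constraint --
    and are then ranked by log p_expert(t) - log p_amateur(t).
    """
    cutoff = alpha * max(expert.values())
    plausible = [t for t, p in expert.items() if p >= cutoff]
    scores = {
        t: math.log(expert[t]) - math.log(amateur[t])
        for t in plausible
    }
    return max(scores, key=scores.get), scores

token, scores = contrastive_next_token(expert_probs, amateur_probs)
print(token)   # "Paris": the expert is confident where the amateur is not
print(scores)  # implausible tokens such as "banana" were filtered out
```

A retrieval-augmented setup would instead prepend retrieved passages to the prompt before generation; both approaches trade extra computation at inference time for better factual grounding.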
Papers
Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review
Jing Su, Chufeng Jiang, Xin Jin, Yuxin Qiao, Tingsong Xiao, Hongda Ma, Rong Wei, Zhi Jing, Jiajun Xu, Junhong Lin
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States
Hanyu Duan, Yi Yang, Kar Yan Tam