Relation Hallucination
Relation hallucination, the tendency of multimodal large language models (MLLMs) to fabricate relationships between objects that do not actually hold in the input image, is a significant obstacle to building reliable AI systems. Current research focuses on benchmarks that systematically evaluate and analyze these hallucinations across model architectures, identifying contributing factors such as over-reliance on language priors and limitations in visual reasoning. This work aims to improve the accuracy and trustworthiness of MLLMs by uncovering the underlying causes of relation hallucination and developing mitigation strategies, ultimately improving the safety and reliability of AI applications that depend on joint visual and textual understanding.
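A common way such benchmarks probe relation hallucination is to ask the model yes/no questions about relation triples and compare its answers against ground-truth scene-graph annotations. The sketch below is purely illustrative (the function name, data, and metric definition are assumptions, not taken from any specific benchmark): it measures how often a model affirms relations that are absent from the image.

```python
# Hypothetical sketch of a relation-hallucination probe; all names and data
# here are illustrative, not drawn from any published benchmark.

def relation_hallucination_rate(ground_truth, model_answers):
    """Fraction of probed-but-absent relations the model wrongly affirms.

    ground_truth  -- set of (subject, relation, object) triples present in the image
    model_answers -- dict mapping probed triples to the model's "yes"/"no" answer
    """
    absent = [t for t in model_answers if t not in ground_truth]
    if not absent:
        return 0.0
    hallucinated = sum(1 for t in absent if model_answers[t] == "yes")
    return hallucinated / len(absent)

# Example image: a cup on a table. We probe one real relation and one
# fabricated relation; affirming the fabricated one counts as hallucination.
truth = {("cup", "on", "table")}
answers = {
    ("cup", "on", "table"): "yes",    # correct affirmation, not counted
    ("dog", "under", "table"): "yes", # hallucinated relation
}
print(relation_hallucination_rate(truth, answers))  # 1.0 (1 of 1 absent relations affirmed)
```

In practice, benchmarks also balance the probe set between present and absent relations so that a model cannot score well by always answering "no".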