Multilingual Hallucination

Multilingual hallucination in large language models (LLMs) refers to the generation of factually incorrect or nonsensical translations, a problem that is especially acute in low-resource languages. Current research focuses on detecting and mitigating these hallucinations with techniques such as cross-lingual alignment, LLM-based detectors, and modular model architectures that disentangle language-specific information. Addressing this issue is crucial for improving the reliability and trustworthiness of machine translation systems and other multilingual AI applications, with impact on fields ranging from cross-cultural communication to scientific research.
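
To make the detection side concrete, below is a minimal sketch of one common cross-lingual alignment approach: embed the source text and the model's output with a multilingual sentence encoder and flag pairs whose semantic similarity is suspiciously low. It assumes the sentence-transformers library and the LaBSE checkpoint; the function name and the similarity threshold are illustrative choices, not a method from any specific paper.

```python
# Sketch: flag possible hallucinations in translations by checking
# cross-lingual semantic similarity between source and output.
# Assumes the sentence-transformers package and the LaBSE multilingual
# encoder; the threshold below is illustrative, not tuned.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def flag_hallucinations(sources, translations, threshold=0.4):
    """Return indices of source/translation pairs whose cross-lingual
    similarity falls below the threshold, a common proxy for detached
    or hallucinated output."""
    src_emb = model.encode(sources, normalize_embeddings=True)
    tgt_emb = model.encode(translations, normalize_embeddings=True)
    # Cosine similarity reduces to a dot product on normalized embeddings.
    sims = (src_emb * tgt_emb).sum(axis=1)
    return [i for i, s in enumerate(sims) if s < threshold]

pairs = [
    ("The cat sat on the mat.", "Le chat était assis sur le tapis."),
    ("The cat sat on the mat.", "La réunion aura lieu demain à midi."),  # unrelated output
]
print(flag_hallucinations([s for s, _ in pairs], [t for _, t in pairs]))
```

In practice such similarity scores are usually combined with other signals (e.g., model-internal confidence or an LLM-based judge) rather than used alone, since low similarity can also reflect legitimate paraphrasing.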

Papers