Paper ID: 2403.09743

The Human Factor in Detecting Errors of Large Language Models: A Systematic Literature Review and Future Research Directions

Christian A. Schiller

The launch of ChatGPT by OpenAI in November 2022 marked a pivotal moment for Artificial Intelligence, introducing Large Language Models (LLMs) to the mainstream and setting new records in user adoption. LLMs such as ChatGPT, trained on extensive internet data, demonstrate remarkable conversational capabilities across various domains, suggesting a significant impact on the workforce. However, these models are susceptible to errors, namely "hallucinations" and omissions, which produce incorrect or incomplete information. This poses risks especially in contexts where accuracy is crucial, such as legal compliance, medicine, or fine-grained process frameworks. Both technical and human approaches exist to cope with this issue. This paper explores the human factors that enable users to detect errors in LLM outputs, a critical component in mitigating risks associated with their use in professional settings. Understanding these factors is essential for organizations aiming to leverage LLM technology efficiently, guiding targeted training and deployment strategies to enhance error detection by users. This approach not only aims to optimize the use of LLMs but also to prevent potential downstream issues stemming from reliance on inaccurate model responses. The research emphasizes the balance between technological advancement and human insight in maximizing the benefits of LLMs while minimizing the risks, particularly in areas where precision is paramount. This paper performs a systematic literature review on this topic, analyzes and synthesizes the findings, and outlines future research directions. The literature selection cut-off date is January 11th, 2024.

Submitted: Mar 13, 2024