Paper ID: 2410.10853
Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support
Abdul Muqtadir, Hafiz Syed Muhammad Bilal, Ayesha Yousaf, Hafiz Farooq Ahmed, Jamil Hussain
This work investigates hallucination in Large Language Models (LLMs) and its consequences for applications in the mental health domain. The primary objective is to identify effective strategies for reducing hallucinations, thereby improving the reliability and safety of LLMs used in mental health interventions such as therapy, counseling, and the delivery of relevant information. Through rigorous investigation and analysis, the study seeks to elucidate the mechanisms that give rise to hallucinations in LLMs and to propose targeted interventions to mitigate them. By addressing this critical issue, the research aims to establish a more robust framework for deploying LLMs in mental health contexts, ensuring that they support therapeutic processes effectively and deliver accurate information to individuals seeking mental health support.
Submitted: Oct 6, 2024