Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across diverse applications, from customer service to mental health support. Current efforts focus on refining large language models (LLMs) such as BERT and GPT, often through fine-tuning and techniques like Retrieval-Augmented Generation (RAG), to improve context awareness and produce responses that are more human-like, relevant, and unbiased. This work is central to advancing human-computer interaction and responsible AI development, with implications for healthcare, education, and customer service. Ongoing research emphasizes the need for robust evaluation frameworks that combine automated metrics with human assessment to address bias and ensure trustworthy chatbot performance.
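To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieve-then-augment step: a query is matched against a small document store by bag-of-words cosine similarity, and the best match is prepended to the prompt that would be sent to an LLM. The document store, query, and prompt template are illustrative assumptions, not taken from any paper listed below; production systems use learned embeddings and a real generation model in place of these stand-ins.

```python
"""Minimal RAG sketch: retrieve relevant context, then augment the prompt.

All data here is hypothetical; a real system would use embedding-based
retrieval and pass the final prompt to an LLM for generation.
"""
import math
from collections import Counter

# Toy knowledge base standing in for a real document store.
DOCUMENTS = [
    "Our support line is open 9am-5pm on weekdays.",
    "Refunds are processed within 14 days of a return request.",
    "The premium plan includes priority email support.",
]

def _bow(text):
    # Bag-of-words term counts over lowercased whitespace tokens.
    return Counter(text.lower().split())

def _cosine(a, b):
    # Cosine similarity between two term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; keep the top k.
    q = _bow(query)
    ranked = sorted(docs, key=lambda d: _cosine(q, _bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Augment the user query with retrieved context before generation.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nUser: {query}\nAssistant:"

prompt = build_prompt("How long do refunds take?", DOCUMENTS)
print(prompt)
```

Grounding the generator in retrieved text like this is what lets RAG-based chatbots answer from up-to-date or domain-specific sources rather than relying solely on the model's training data.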
Papers
Building Trust in Mental Health Chatbots: Safety Metrics and LLM-Based Evaluation Tools
Jung In Park, Mahyar Abbasian, Iman Azimi, Dawn Bounds, Angela Jun, Jaesu Han, Robert McCarron, Jessica Borelli, Jia Li, Mona Mahmoudi, Carmen Wiedenhoeft, Amir Rahmani
Distinguishing Chatbot from Human
Gauri Anil Godghase, Rishit Agrawal, Tanush Obili, Mark Stamp