Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across diverse applications, from customer service to mental health support. Current efforts focus on refining transformer-based models such as BERT and GPT, often through fine-tuning and techniques such as Retrieval-Augmented Generation (RAG), to enhance context awareness and generate responses that are more human-like, relevant, and unbiased. This work is central to advancing human-computer interaction and responsible AI development, with implications for sectors including healthcare and education. Ongoing research emphasizes robust evaluation frameworks that combine automated metrics with human assessment to address bias and ensure trustworthy chatbot performance.
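To make the RAG idea mentioned above concrete, here is a minimal, hypothetical sketch: retrieval is reduced to keyword overlap (a real system would use a vector store and embeddings), and the LLM call is omitted; only the retrieve-then-augment prompt assembly is shown.

```python
def retrieve(query, documents, k=2):
    """Score each document by word overlap with the query; return the top k.
    A toy stand-in for embedding-based similarity search."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents, k=2):
    """Prepend the retrieved passages as context -- the 'augmented' prompt
    that would then be sent to an LLM (call omitted here)."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Example corpus, e.g. disaster-preparedness snippets:
docs = [
    "Flood safety: move to higher ground immediately.",
    "Earthquake drills are held at schools every month.",
    "Pack a disaster kit with water, food, and a flashlight.",
]
print(build_prompt("what goes in a disaster kit", docs, k=1))
```

Grounding generation in retrieved passages this way is what lets a chatbot answer from domain documents rather than from model parameters alone.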
Papers
Tailoring Generative AI Chatbots for Multiethnic Communities in Disaster Preparedness Communication: Extending the CASA Paradigm
Xinyan Zhao, Yuan Sun, Wenlin Liu, Chau-Wai Wong
Designing a Dashboard for Transparency and Control of Conversational AI
Yida Chen, Aoyu Wu, Trevor DePodesta, Catherine Yeh, Kenneth Li, Nicholas Castillo Marin, Oam Patel, Jan Riecke, Shivam Raval, Olivia Seow, Martin Wattenberg, Fernanda Viégas
Battling Botpoop using GenAI for Higher Education: A Study of a Retrieval Augmented Generation Chatbot's Impact on Learning
Maung Thway, Jose Recatala-Gomez, Fun Siong Lim, Kedar Hippalgaonkar, Leonard W. T. Ng
How Reliable AI Chatbots are for Disease Prediction from Patient Complaints?
Ayesha Siddika Nipu, K M Sajjadul Islam, Praveen Madiraju
Modeling Real-Time Interactive Conversations as Timed Diarized Transcripts
Garrett Tanzer, Gustaf Ahdritz, Luke Melas-Kyriazi
From Human-to-Human to Human-to-Bot Conversations in Software Engineering
Ranim Khojah, Francisco Gomes de Oliveira Neto, Philipp Leitner