Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across applications ranging from customer service to mental health support. Current work refines language models such as BERT and GPT, typically through fine-tuning and Retrieval-Augmented Generation (RAG), to improve context awareness and produce responses that are more human-like, relevant, and unbiased. Progress here matters for advancing human-computer interaction and responsible AI development, with implications for healthcare, education, and commercial support services alike. Ongoing research also stresses robust evaluation frameworks that combine automated metrics with human assessment to detect bias and ensure trustworthy chatbot behavior.
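To make the RAG idea mentioned above concrete, the sketch below wires a toy keyword-overlap retriever to a placeholder generation step. The names used here (KnowledgeBase, build_prompt, generate_response) are illustrative assumptions rather than APIs from any of the listed papers; a production system would swap in a vector store and a real LLM call.

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) chatbot loop.
# All names here are illustrative; a real system would use a vector store
# for retrieval and an actual LLM backend for generation.

from collections import Counter
from typing import List


class KnowledgeBase:
    """Toy keyword-overlap retriever standing in for a vector store."""

    def __init__(self, documents: List[str]):
        self.documents = documents

    def retrieve(self, query: str, k: int = 2) -> List[str]:
        # Rank documents by how many query terms they share with the query.
        query_terms = Counter(query.lower().split())

        def overlap(doc: str) -> int:
            doc_terms = Counter(doc.lower().split())
            return sum((query_terms & doc_terms).values())

        ranked = sorted(self.documents, key=overlap, reverse=True)
        return ranked[:k]


def build_prompt(question: str, context: List[str]) -> str:
    """Assemble retrieved passages and the user question into one prompt."""
    context_block = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {question}\nAnswer:"
    )


def generate_response(prompt: str) -> str:
    """Placeholder for an LLM call; here it just echoes the prompt."""
    return "[LLM completion for]\n" + prompt


if __name__ == "__main__":
    kb = KnowledgeBase([
        "Boil or filter water before drinking to reduce contamination risk.",
        "Handwashing with soap prevents the spread of many diseases.",
        "Annual eye exams help detect diabetic retinopathy early.",
    ])
    question = "How can I make water safe to drink?"
    passages = kb.retrieve(question)
    print(generate_response(build_prompt(question, passages)))
```

The design point this sketch illustrates is that grounding the prompt in retrieved passages, rather than relying on the model's parameters alone, is how RAG-based chatbots improve relevance and reduce unsupported answers.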
Papers
SRSA: A Cost-Efficient Strategy-Router Search Agent for Real-world Human-Machine Interactions
Yaqi Wang, Haipei Xu
RV4Chatbot: Are Chatbots Allowed to Dream of Electric Sheep?
Andrea Gatti (University of Genoa), Viviana Mascardi (University of Genoa), Angelo Ferrando (University of Modena and Reggio Emilia)
Evaluating the Accuracy of Chatbots in Financial Literature
Orhan Erdem, Kristi Hassett, Feyzullah Egriboyun
Script-Strategy Aligned Generation: Aligning LLMs with Expert-Crafted Dialogue Scripts and Therapeutic Strategies for Psychotherapy
Xin Sun, Jan de Wit, Zhuying Li, Jiahuan Pei, Abdallah El Ali, Jos A. Bosch
[Vision Paper] PRObot: Enhancing Patient-Reported Outcome Measures for Diabetic Retinopathy using Chatbots and Generative AI
Maren Pielka, Tobias Schneider, Jan Terheyden, Rafet Sifa
WASHtsApp -- A RAG-powered WhatsApp Chatbot for supporting rural African clean water access, sanitation and hygiene
Simon Kloker, Alex Cedric Luyima, Matthew Bazanya