Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across applications ranging from customer service to mental health support. Current efforts focus on refining large language models (LLMs) such as BERT and GPT, often through fine-tuning and techniques such as Retrieval-Augmented Generation (RAG), to improve context awareness and generate more human-like, relevant, and unbiased responses. This work is central to advancing human-computer interaction and to responsible AI development, with implications for sectors including healthcare and education. Ongoing research emphasizes the need for robust evaluation frameworks that combine automated and human assessment to address issues such as bias and to ensure trustworthy chatbot performance.
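The RAG approach mentioned above can be illustrated with a minimal sketch: retrieve the passages most relevant to a user query, then prepend them to the prompt so the generator can ground its response. Everything here is illustrative, not any specific paper's method: the knowledge base is toy data, `retrieve` uses simple word overlap rather than learned embeddings, and the final prompt would be sent to an actual LLM in a real system.

```python
# Minimal RAG-style pipeline sketch. All names (KNOWLEDGE_BASE, retrieve,
# build_prompt) are hypothetical; real systems use embedding-based retrieval.

KNOWLEDGE_BASE = [
    "Iron supplements are commonly recommended during pregnancy.",
    "Chatbots should escalate unresolved issues to a human agent.",
    "Regular prenatal checkups help monitor the health of mother and baby.",
]

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved passages so the generator can ground its answer."""
    context_block = "\n".join(f"- {passage}" for passage in context)
    return f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"

query = "What checkups are recommended during pregnancy?"
prompt = build_prompt(query, retrieve(query, KNOWLEDGE_BASE))
print(prompt)  # In a real system, this prompt is passed to an LLM.
```

The key design point is that retrieval quality directly bounds response quality: if the retriever surfaces irrelevant passages, even a strong generator will produce ungrounded answers, which is why evaluation frameworks assess both stages.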
Papers
Retrieval and Generative Approaches for a Pregnancy Chatbot in Nepali with Stemmed and Non-Stemmed Data: A Comparative Study
Sujan Poudel, Nabin Ghimire, Bipesh Subedi, Saugat Singh
Anticipating User Needs: Insights from Design Fiction on Conversational Agents for Computational Thinking
Jacob Penney, João Felipe Pimentel, Igor Steinmacher, Marco A. Gerosa
Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
Sander Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, Jordan Boyd-Graber
ConstitutionMaker: Interactively Critiquing Large Language Models by Converting Feedback into Principles
Savvas Petridis, Ben Wedin, James Wexler, Aaron Donsbach, Mahima Pushkarna, Nitesh Goyal, Carrie J. Cai, Michael Terry