Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across diverse applications, from customer service to mental health support. Current efforts focus on refining large language models (LLMs) such as BERT and GPT, often through fine-tuning and techniques like Retrieval-Augmented Generation (RAG), to improve context awareness and produce more human-like, relevant, and unbiased responses. This work is central to advancing human-computer interaction and responsible AI development, with implications for sectors including healthcare, education, and customer service. Ongoing research emphasizes robust evaluation frameworks that combine automated and human assessment to address bias and ensure trustworthy chatbot performance.
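To make the RAG pattern mentioned above concrete, the following is a minimal, self-contained sketch of a retrieval-then-generation loop for grounding chatbot responses. All names here (KNOWLEDGE_BASE, embed, retrieve, generate_response) are illustrative placeholders, not the implementation of any listed paper; a real system would replace the bag-of-words retriever with a learned sentence encoder and pass the assembled prompt to an LLM.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) for chatbot responses.
# Assumptions: a toy in-memory knowledge base and a bag-of-words "embedding";
# production systems would use a vector index and a real language model.
from collections import Counter
import math

# Toy knowledge base the chatbot can ground its answers in.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Support chat is available Monday through Friday, 9am to 5pm.",
    "Password resets can be requested from the account settings page.",
]

def embed(text: str) -> Counter:
    """Stand-in embedding: a sparse bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base passages most similar to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def generate_response(query: str) -> str:
    """Assemble a context-grounded prompt; a real chatbot would send this
    prompt to an LLM rather than returning the template directly."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nUser: {query}\nAssistant:"

if __name__ == "__main__":
    print(generate_response("How long do refunds take?"))
```

The design point the sketch illustrates is that retrieval narrows the model's input to passages relevant to the user's query, which is how RAG-based chatbots improve context awareness and reduce unsupported answers.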
Papers
Friend or Foe? Exploring the Implications of Large Language Models on the Science System
Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle, Fabian Sofsky
Inspire creativity with ORIBA: Transform Artists' Original Characters into Chatbots through Large Language Model
Yuqian Sun, Xingyu Li, Ze Gao
Perceived Trustworthiness of Natural Language Generators
Beatriz Cabrero-Daniel, Andrea Sanagustín Cabrero
Chatbots to ChatGPT in a Cybersecurity Space: Evolution, Vulnerabilities, Attacks, Challenges, and Future Recommendations
Attia Qammar, Hongmei Wang, Jianguo Ding, Abdenacer Naouri, Mahmoud Daneshmand, Huansheng Ning