Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across diverse applications, from customer service to mental health support. Current efforts focus on refining large language models (LLMs) such as GPT, and earlier transformer models such as BERT, often through fine-tuning and techniques such as Retrieval-Augmented Generation (RAG), to enhance context awareness and generate more human-like, relevant, and unbiased responses. This work is crucial for advancing human-computer interaction and ensuring responsible AI development, with implications for sectors including healthcare, education, and commerce. Ongoing research emphasizes the need for robust evaluation frameworks, incorporating both automated and human assessment, to address issues such as bias and to ensure trustworthy chatbot performance.
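The RAG technique mentioned above can be illustrated with a minimal sketch: retrieve the documents most relevant to a user query, then prepend them to the prompt given to the language model. The keyword-overlap retriever and the `build_prompt` helper below are hypothetical simplifications; production systems typically use dense vector search and an actual LLM call in place of the final prompt.

```python
# Minimal RAG sketch (assumed simplification): a toy keyword-overlap
# retriever plus prompt construction. A real system would use embedding
# similarity for retrieval and pass the prompt to an LLM.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context to the user question for the generator."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

docs = [
    "Our store is open 9am-5pm on weekdays.",
    "Refunds are processed within 14 days of purchase.",
]
prompt = build_prompt("When are refunds processed?", docs)
print(prompt)
```

Because the relevant policy document is injected into the prompt, the generator can ground its answer in retrieved facts rather than relying solely on parametric knowledge, which is the core idea behind RAG's improved context awareness.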
Papers
ChatGPT and a New Academic Reality: Artificial Intelligence-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing
Brady Lund, Ting Wang, Nishith Reddy Mannuru, Bing Nie, Somipam Shimray, Ziang Wang
Chinese Intermediate English Learners outdid ChatGPT in deep cohesion: Evidence from English narrative writing
Tongquan Zhou, Siyi Cao, Siruo Zhou, Yao Zhang, Aijing He
The Open-domain Paradox for Chatbots: Common Ground as the Basis for Human-like Dialogue
Gabriel Skantze, A. Seza Doğruöz
Rewarding Chatbots for Real-World Engagement with Millions of Users
Robert Irvine, Douglas Boubert, Vyas Raina, Adian Liusie, Ziyi Zhu, Vineet Mudupalli, Aliaksei Korshuk, Zongyi Liu, Fritz Cremer, Valentin Assassi, Christie-Carol Beauchamp, Xiaoding Lu, Thomas Rialan, William Beauchamp
Do large language models resemble humans in language use?
Zhenguang G. Cai, Xufeng Duan, David A. Haslett, Shuqi Wang, Martin J. Pickering