Chatbot Response
Chatbot response research centers on improving the accuracy, empathy, and safety of chatbot interactions across diverse applications, from customer service to mental health support. Current efforts focus on adapting large language models (LLMs) such as GPT, along with encoder models like BERT, often through fine-tuning and techniques such as Retrieval-Augmented Generation (RAG), to enhance context awareness and generate more human-like, relevant, and unbiased responses. This work is central to advancing human-computer interaction and responsible AI development, with implications for sectors including healthcare, education, and customer service. Ongoing research emphasizes robust evaluation frameworks that combine automated and human assessment to address bias and ensure trustworthy chatbot performance.
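To make the RAG technique mentioned above concrete, the sketch below shows its core loop: retrieve the passages most similar to a user query, then prepend them to the prompt so the model generates a grounded response. This is a minimal, self-contained illustration, not drawn from any of the papers listed here; the KNOWLEDGE_BASE contents, the bag-of-words scoring, and the helper names (bow, cosine, retrieve, build_prompt) are all illustrative assumptions, and a real system would use learned embeddings and an actual LLM call.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# All names and data here are illustrative assumptions, not from the papers below.
import math
from collections import Counter

KNOWLEDGE_BASE = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support agents are available Monday through Friday, 9am-5pm.",
    "Premium accounts include priority response within one hour.",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts, lowercased."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = bow(query)
    return sorted(KNOWLEDGE_BASE, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to request a refund?"))
# The assembled prompt would then be passed to any chat-capable LLM for generation.
```

The design choice this illustrates is that retrieval grounds the model's answer in an external, updatable knowledge source rather than in parametric memory alone, which is how RAG improves context awareness and reduces unsupported responses.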
Papers
Generative AI: Implications and Applications for Education
Anastasia Olga Tzirides, Akash Saini, Gabriela Zapata, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting, Nikoleta Polyxeni Kastania
MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large Language Models in Medicine
Jie Xu, Lu Lu, Sen Yang, Bilin Liang, Xinwei Peng, Jiali Pang, Jinru Ding, Xiaoming Shi, Lingrui Yang, Huan Song, Kang Li, Xin Sun, Shaoting Zhang
InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language
Zhaoyang Liu, Yinan He, Wenhai Wang, Weiyun Wang, Yi Wang, Shoufa Chen, Qinglong Zhang, Zeqiang Lai, Yang Yang, Qingyun Li, Jiashuo Yu, Kunchang Li, Zhe Chen, Xue Yang, Xizhou Zhu, Yali Wang, Limin Wang, Ping Luo, Jifeng Dai, Yu Qiao
A Taxonomy of Foundation Model based Systems through the Lens of Software Architecture
Qinghua Lu, Liming Zhu, Xiwei Xu, Yue Liu, Zhenchang Xing, Jon Whittle