Multi-Turn Conversation
Multi-turn conversation research focuses on enabling computers to engage in natural, nuanced, and contextually aware dialogues extending beyond single exchanges. Current efforts concentrate on improving large language models' ability to infer and adapt to individual preferences, handle noisy or incomplete input, and maintain coherent context across multiple turns, often employing techniques like reinforcement learning from human feedback, contrastive learning, and advanced attention mechanisms. This research is crucial for advancing human-computer interaction, improving the performance of conversational AI systems in various applications (e.g., chatbots, virtual assistants, and recommendation systems), and addressing safety and ethical concerns related to these technologies.
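The context-maintenance idea above can be sketched in a few lines. The snippet below is a minimal, illustrative example only: `fake_model` is a hypothetical stand-in for a real language-model call, and the role/content message format mirrors common chat APIs but is not tied to any specific system. It shows the basic pattern of retaining a rolling window of prior turns so each new reply is conditioned on recent context.

```python
# Minimal sketch of multi-turn context management, assuming a generic
# chat interface that accepts a list of {"role", "content"} messages.
# `fake_model` is a purely illustrative stand-in for a real LLM call.

def fake_model(messages):
    # Hypothetical model: reports how much context it received.
    return f"(reply conditioned on {len(messages)} messages)"

class Conversation:
    def __init__(self, system_prompt, max_turns=10):
        self.max_turns = max_turns  # cap on retained user/assistant messages
        self.system = {"role": "system", "content": system_prompt}
        self.history = []           # alternating user/assistant messages

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        # Keep the system prompt; truncate the oldest turns if over budget.
        kept = self.history[-self.max_turns:]
        reply = fake_model([self.system] + kept)
        self.history.append({"role": "assistant", "content": reply})
        return reply

convo = Conversation("You are a helpful assistant.", max_turns=4)
convo.send("Hi")
reply = convo.send("What did I just say?")
```

Real systems replace the fixed turn cap with token-budget truncation or summarization of older turns, but the structure (a persistent history re-sent on every call) is the same.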
Papers
OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model
Sumeth Yuenyong, Kobkrit Viriyayudhakorn, Apivadee Piyatumrong, Jillaphat Jaroenkantasima
Building a Taiwanese Mandarin Spoken Language Model: A First Attempt
Chih-Kai Yang, Yu-Kuan Fu, Chen-An Li, Yi-Cheng Lin, Yu-Xiang Lin, Wei-Chih Chen, Ho Lam Chung, Chun-Yi Kuan, Wei-Ping Huang, Ke-Han Lu, Tzu-Quan Lin, Hsiu-Hsuan Wang, En-Pei Hu, Chan-Jan Hsu, Liang-Hsuan Tseng, I-Hsiang Chiu, Ulin Sanga, Xuanjun Chen, Po-chun Hsu, Shu-wen Yang, Hung-yi Lee