Open Domain
Open-domain research focuses on developing AI systems that can handle diverse, unstructured inputs and tasks without extensive pre-training or fine-tuning for each specific domain. Current work emphasizes retrieval-augmented generation (RAG), often combining knowledge graphs and vector stores to improve factual accuracy and reduce hallucinations, alongside advances in masked diffusion transformers for efficient sound and image generation. This line of research matters because it aims to produce more adaptable and robust AI systems applicable across fields ranging from e-commerce chatbots to autonomous driving and biomedical named entity recognition, ultimately making AI technologies more accessible and effective.
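As a rough illustration of the RAG pattern described above, the sketch below retrieves the most relevant documents from a small in-memory store and prepends them to the prompt before generation. This is a minimal sketch only: the bag-of-words embedding, the `generate` stub, and all identifiers are illustrative assumptions, not the method of any paper listed here; production systems would use learned dense embeddings, a vector database, and a real language model call.

```python
from collections import Counter
import math

def embed(text, vocab):
    # Toy bag-of-words embedding over a fixed vocabulary (illustrative only;
    # real RAG systems use learned dense embeddings).
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, vocab, k=2):
    # Rank stored documents by similarity to the query and keep the top k.
    q = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d, vocab)), reverse=True)
    return ranked[:k]

def generate(prompt):
    # Placeholder for a language model call (hypothetical; any LLM API could go here).
    return f"[model output conditioned on a prompt of {len(prompt)} characters]"

def rag_answer(query, docs, vocab):
    # Retrieval-augmented generation: ground the prompt in retrieved context
    # so the model is less likely to hallucinate unsupported facts.
    context = "\n".join(retrieve(query, docs, vocab))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    docs = [
        "The return window for online orders is 30 days.",
        "Shipping to EU countries takes 3 to 5 business days.",
        "Gift cards cannot be refunded once redeemed.",
    ]
    vocab = sorted({w for d in docs for w in d.lower().split()})
    print(rag_answer("How long do I have to return an order?", docs, vocab))
```

The key design point this sketch captures is that generation is conditioned on retrieved evidence rather than on the model's parametric memory alone, which is the mechanism the summary credits with reducing hallucinations.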
Papers
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI
Jianguo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, Caiming Xiong
Enhancing conversational quality in language learning chatbots: An evaluation of GPT4 for ASR error correction
Long Mai, Julie Carson-Berndsen
Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation
Yunjie Ji, Yan Gong, Yong Deng, Yiping Peng, Qiang Niu, Baochang Ma, Xiangang Li
ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human
Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang, Qinghao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, Jingren Zhou