Knowledge-Intensive Tasks
Knowledge-intensive tasks, which require accessing and reasoning over extensive factual information, are a central focus of current natural language processing research. Ongoing efforts concentrate on improving large language models (LLMs) by integrating external knowledge sources (via retrieval-augmented generation or knowledge graph integration), refining internal knowledge representations through fine-tuning strategies, and mitigating issues such as hallucination and outdated information. These advances aim to improve the reliability and accuracy of LLMs in applications ranging from question answering and knowledge graph construction to more complex reasoning tasks, ultimately benefiting fields that depend on accurate and efficient information processing.
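To make the retrieval-augmented generation (RAG) pipeline mentioned above concrete, here is a minimal Python sketch using only the standard library. Everything in it is illustrative: the toy knowledge base, the token-overlap retriever, and the `generate` stub are assumptions standing in for a real document store, a dense-embedding vector index, and an actual LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch, standard library only.
# Assumptions: token-overlap scoring stands in for dense retrieval, and
# `generate` is a hypothetical placeholder for any real LLM call.
from collections import Counter

# Toy external knowledge base; in practice, a document store or knowledge graph.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python was created by Guido van Rossum and released in 1991.",
]

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a string."""
    return Counter(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest token overlap with the query."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: sum((tokenize(d) & q).values()), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved passages so the model can ground its answer."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{ctx}\nQuestion: {query}\nAnswer:"

def generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API."""
    return f"[model output for prompt of {len(prompt)} chars]"

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    context = retrieve(question, KNOWLEDGE_BASE)
    print(generate(build_prompt(question, context)))
```

The design point this sketch illustrates is the separation of concerns in RAG: retrieval selects grounding passages from an external source, prompt construction injects them as context, and generation is conditioned on that context rather than on parametric memory alone, which is what helps mitigate hallucination and staleness.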
Papers
Understanding the Interplay between Parametric and Contextual Knowledge for Large Language Models
Sitao Cheng, Liangming Pan, Xunjian Yin, Xinyi Wang, William Yang Wang
Mars: Situated Inductive Reasoning in an Open-World Environment
Xiaojuan Tang, Jiaqi Li, Yitao Liang, Song-Chun Zhu, Muhan Zhang, Zilong Zheng