NLP Task
Natural Language Processing (NLP) research currently focuses on extending Large Language Models (LLMs) to a wider range of tasks, including long-context processing, reliable benchmark construction from synthetic data, and the integration of generation with retrieval. Active areas include efficient frameworks for handling long input sequences under memory constraints, evaluation of LLMs on diverse and challenging benchmarks (including specialized domains such as finance and law), and mitigation of data contamination and hallucination. These advances are crucial for improving the reliability and applicability of LLMs in real-world settings, from legal tech to healthcare and beyond.
Papers
Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts
Wenyu Huang, Guancheng Zhou, Mirella Lapata, Pavlos Vougiouklis, Sebastien Montella, Jeff Z. Pan
UniDM: A Unified Framework for Data Manipulation with Large Language Models
Yichen Qian, Yongyi He, Rong Zhu, Jintao Huang, Zhijian Ma, Haibin Wang, Yaohua Wang, Xiuyu Sun, Defu Lian, Bolin Ding, Jingren Zhou