Natural Language Processing Tasks
Natural Language Processing (NLP) research currently centers on leveraging Large Language Models (LLMs) to improve the accuracy and efficiency of a wide range of tasks. Key areas of investigation include mitigating LLMs' tendency to hallucinate (generate plausible but inaccurate content), optimizing their deployment across hardware platforms, including resource-constrained edge devices, and developing evaluation methods more robust than simple surface-level metrics. These directions matter because they address critical limitations of LLMs, paving the way for more reliable and accessible NLP applications in fields such as healthcare, fraud detection, and machine translation.
Papers
SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
Guoxuan Chen, Han Shi, Jiawei Li, Yihang Gao, Xiaozhe Ren, Yimeng Chen, Xin Jiang, Zhenguo Li, Weiyang Liu, Chao Huang
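To make the title concrete: SepLLM builds on the observation that separator tokens (periods, commas, newlines) tend to absorb a summary of the segment they close, so attention can be restricted to a few initial "sink" tokens, the separators, and a recent local window. The following is a minimal sketch of such a separator-based sparse attention mask, not the authors' code; the separator-token ids and window sizes are illustrative placeholders.

```python
import torch

def separator_attention_mask(token_ids, sep_ids, n_init=4, window=64):
    """Boolean mask [seq, seq]: True where query i may attend to key j."""
    seq = token_ids.shape[0]
    q = torch.arange(seq).unsqueeze(1)           # query positions
    k = torch.arange(seq).unsqueeze(0)           # key positions
    causal = k <= q                               # never attend to the future
    initial = k < n_init                          # attention-sink tokens
    recent = (q - k) < window                     # local sliding window
    is_sep = torch.isin(token_ids, sep_ids).unsqueeze(0)  # separator keys
    return causal & (initial | recent | is_sep)

# Usage: the mask can be passed as attn_mask to scaled_dot_product_attention.
ids = torch.randint(0, 1000, (256,))
seps = torch.tensor([13, 11])  # hypothetical ids for "." and ","
mask = separator_attention_mask(ids, seps)
```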
The Role of Natural Language Processing Tasks in Automatic Literary Character Network Construction
Arthur Amalvy (LIA), Vincent Labatut (LIA), Richard Dufour (LS2N - TALN team)
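The downstream artifact this paper's NLP pipeline (named-entity recognition, coreference resolution, and so on) feeds is a character co-occurrence network. A minimal sketch of that final construction step appears below; it assumes the upstream tasks are already solved, so `units` is a list of text units (sentences or chapters) reduced to the character names they mention.

```python
from itertools import combinations
import networkx as nx

units = [
    ["Elizabeth", "Darcy"],
    ["Elizabeth", "Jane"],
    ["Darcy", "Bingley", "Jane"],
]

G = nx.Graph()
for chars in units:
    for a, b in combinations(sorted(set(chars)), 2):
        # edge weight counts how often two characters co-occur in a unit
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(G.edges(data=True))
```

Because the graph is only as good as its inputs, errors in NER or coreference propagate directly into spurious or missing edges, which is precisely the sensitivity the paper studies.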
Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability
Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung
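In the spirit of this paper's token-level prompt compression, the sketch below scores each prompt token with a keep-probability from a small head over the LM's hidden states and drops low-scoring tokens. The linear head, threshold, and random hidden states are untrained stand-ins for illustration, not the paper's trained components.

```python
import torch
import torch.nn as nn

hidden = 768
head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())  # untrained stand-in

tokens = ["Please", "kindly", "summarize", "the", "following", "report"]
states = torch.randn(len(tokens), hidden)  # placeholder for real LM states

with torch.no_grad():
    keep_p = head(states).squeeze(-1)       # one keep-probability per token
compressed = [t for t, p in zip(tokens, keep_p) if p > 0.5]
print(compressed)
```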
Light-Weight Fault Tolerant Attention for Large Language Model Training
Yuhang Liang, Xinyi Li, Jie Ren, Ang Li, Bo Fang, Jieyang Chen
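The fault-tolerance idea the title refers to is in the family of algorithm-based fault tolerance (ABFT): a checksum is carried through a matrix multiply so that a silent data corruption in the product shows up as a checksum mismatch instead of silently corrupting training. The generic matmul check below is a minimal illustration of that family, not the paper's attention-specific scheme; the tolerance value is an arbitrary placeholder.

```python
import torch

def checked_matmul(A, B, tol=1e-4):
    """Matrix multiply with an ABFT-style column-checksum verification."""
    ones = torch.ones(1, A.shape[0])
    C = A @ B
    checksum = (ones @ A) @ B          # checksum computed alongside C
    if not torch.allclose(ones @ C, checksum, atol=tol):
        raise RuntimeError("fault detected in matmul; recompute needed")
    return C

scores = checked_matmul(torch.randn(8, 16), torch.randn(16, 8))
```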