Natural Language Processing Tasks
Natural Language Processing (NLP) research currently focuses heavily on leveraging Large Language Models (LLMs) to improve the accuracy and efficiency of a wide range of tasks. Key areas of investigation include mitigating LLMs' susceptibility to hallucinations (generating plausible-sounding but inaccurate information), optimizing their deployment across different hardware platforms (including edge devices), and developing robust evaluation methods that go beyond simple surface metrics. These advances matter because they address critical limitations of LLMs, paving the way for more reliable and accessible NLP applications in fields such as healthcare, fraud detection, and machine translation.
Papers
Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models
Yuyan Chen, Qiang Fu, Yichen Yuan, Zhihao Wen, Ge Fan, Dayiheng Liu, Dongmei Zhang, Zhixu Li, Yanghua Xiao
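The general intuition behind hallucination detection can be illustrated with a simple self-consistency check: sample several answers to the same question and treat strong disagreement among them as a warning sign. The sketch below is only a toy baseline under that assumption, not the method proposed in the paper; the function names, the token-overlap similarity, and the 0.5 threshold are all hypothetical choices.

```python
# A minimal, illustrative sketch (not the paper's method): score agreement
# among several independently sampled answers and flag low-consistency
# answer sets as potentially hallucinated.
from itertools import combinations


def _token_jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lowercased token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def consistency_score(samples: list[str]) -> float:
    """Mean pairwise similarity across independently sampled answers."""
    if len(samples) < 2:
        return 1.0
    pairs = list(combinations(samples, 2))
    return sum(_token_jaccard(a, b) for a, b in pairs) / len(pairs)


def likely_hallucination(samples: list[str], threshold: float = 0.5) -> bool:
    """Flag answer sets whose samples disagree too much (hypothetical threshold)."""
    return consistency_score(samples) < threshold


if __name__ == "__main__":
    answers = [
        "The Eiffel Tower is in Paris, France.",
        "The Eiffel Tower is located in Paris.",
        "It stands in Paris, France.",
    ]
    print(consistency_score(answers), likely_hallucination(answers))
```

In practice the similarity function would be replaced by something stronger than token overlap (e.g., an entailment or embedding model), but the structure of the check stays the same.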
QET: Enhancing Quantized LLM Parameters and KV cache Compression through Element Substitution and Residual Clustering
Yanshu Wang, Wang Li, Zhaoqian Yao, Tong Yang
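To ground the compression theme, the sketch below shows generic symmetric per-channel int8 quantization of a KV-cache-like tensor. It is not QET's element substitution or residual clustering, only the kind of plain quantization baseline such methods aim to improve on; the function names, the toy tensor shape, and the [-127, 127] clipping range are illustrative assumptions.

```python
# A minimal, illustrative sketch (not QET itself): symmetric per-channel int8
# quantization of a KV-cache-like tensor, with dequantization to measure the
# reconstruction error that more advanced schemes try to reduce.
import numpy as np


def quantize_int8(x: np.ndarray):
    """Quantize each channel (last axis) to int8 with its own scale."""
    scale = np.abs(x).max(axis=0, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize_int8(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Map int8 codes back to float using the stored per-channel scales."""
    return q.astype(np.float32) * scale


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kv = rng.normal(size=(128, 64)).astype(np.float32)  # toy [tokens, head_dim] cache
    q, s = quantize_int8(kv)
    err = np.abs(kv - dequantize_int8(q, s)).mean()
    print(f"mean abs reconstruction error: {err:.4f}")
```

Per-channel scales are used here because activation and cache values often vary widely across channels; a single per-tensor scale would waste much of the int8 range.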