NLP Task
Natural Language Processing (NLP) research currently centers on extending Large Language Models (LLMs) to a wider range of tasks: processing long input sequences efficiently within memory constraints, building reliable benchmarks from synthetic data, and integrating generation with retrieval. Active work also includes evaluating LLMs on diverse and challenging benchmarks, including specialized domains such as finance and law, and mitigating issues like data contamination and hallucination. These advances are crucial for making LLMs reliable and applicable in real-world settings such as legal tech and healthcare.
Papers
When to Use What: An In-Depth Comparative Empirical Analysis of OpenIE Systems for Downstream Applications
Kevin Pei, Ishan Jindal, Kevin Chen-Chuan Chang, Chengxiang Zhai, Yunyao Li
GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective
Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
PATS: Sensitivity-aware Noisy Learning for Pretrained Language Models
Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Wu, Zhoujun Li
A Benchmark Study of Contrastive Learning for Arabic Social Meaning
Md Tawkat Islam Khondaker, El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan