NLP Tasks
Natural Language Processing (NLP) research is increasingly focused on extending Large Language Models (LLMs) to a broader range of tasks: processing long contexts efficiently, building reliable benchmarks from synthetic data, and integrating retrieval with generation. Active work includes frameworks for handling long input sequences under memory constraints, evaluating LLMs on diverse and challenging benchmarks (including specialized domains such as finance and law), and mitigating data contamination and hallucination. These advances are central to making LLMs reliable and applicable in real-world settings, from legal tech to healthcare and beyond.
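To make the retrieval-plus-generation pattern mentioned above concrete, here is a minimal sketch: a toy bag-of-words cosine retriever that prepends the top passages to the prompt before a generation call. The corpus strings, the `retrieve` and `answer` helpers, and the placeholder `generate` function are all hypothetical stand-ins (a real system would use dense embeddings and an actual LLM API); this does not reflect the specific method of any paper listed below.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: toy in-memory corpus, bag-of-words cosine retriever,
# and a placeholder generate() standing in for a real LLM call.
import math
from collections import Counter

CORPUS = [
    "The plaintiff filed a motion for summary judgment in 2021.",
    "Quarterly revenue grew 12% year over year, driven by services.",
    "The model was fine-tuned on 10k annotated clinical notes.",
]

def _vector(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = _vector(query)
    ranked = sorted(CORPUS, key=lambda d: _cosine(q, _vector(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Hypothetical LLM call; echoes a stub so the sketch runs end to end."""
    return f"[LLM output for a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Prepend retrieved passages to the prompt before generating."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

print(answer("What did quarterly revenue do?"))
```

The design choice worth noting is that retrieval and generation stay decoupled: the retriever can be swapped (TF-IDF, dense embeddings, a search API) without touching the prompting logic.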
Papers
Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation
Adithya V Ganesan, Yash Kumar Lal, August Håkan Nilsson, H. Andrew Schwartz
Reimagining Retrieval Augmented Language Models for Answering Queries
Wang-Chiew Tan, Yuliang Li, Pedro Rodriguez, Richard James, Xi Victoria Lin, Alon Halevy, Scott Yih
TopEx: Topic-based Explanations for Model Comparison
Shreya Havaldar, Adam Stein, Eric Wong, Lyle Ungar
Free Lunch for Efficient Textual Commonsense Integration in Language Models
Wanyun Cui, Xingran Chen
Pento-DIARef: A Diagnostic Dataset for Learning the Incremental Algorithm for Referring Expression Generation from Examples
Philipp Sadler, David Schlangen
Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark
Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, David Jurgens
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher
KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment
Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, Hongzhi Yin