Natural Language Processing Models
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research emphasizes improving model performance across diverse tasks, including text classification, summarization, question answering, and complex interactive scenarios such as negotiation, often by leveraging transformer-based architectures such as BERT and its variants alongside large language models (LLMs). These advances are improving applications ranging from educational assessment and medical diagnosis to search and recommendation systems. Key challenges remain in ensuring fairness, robustness, and efficiency across languages and domains, with ongoing efforts to mitigate bias and reduce model resource usage.
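As a concrete illustration of the transformer-based approach described above, the minimal sketch below loads a fine-tuned BERT-variant classifier with the Hugging Face transformers library and labels a single sentence. The checkpoint name and the sentiment task are illustrative assumptions for demonstration only and are not tied to any of the papers listed below.

```python
# Minimal sketch: text classification with a fine-tuned BERT-variant checkpoint
# via the Hugging Face `transformers` library. The model name below is an
# assumed public checkpoint chosen only for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def classify(text: str) -> str:
    """Return the predicted label for a single input string."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_id = logits.argmax(dim=-1).item()
    return model.config.id2label[predicted_id]

print(classify("The new summarization model works surprisingly well."))
```

The same pattern (tokenize, run the encoder, take the argmax over logits) underlies many of the classification-style systems surveyed here; swapping the checkpoint or the label set adapts it to other domains.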
Papers
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
StereoKG: Data-Driven Knowledge Graph Construction for Cultural Knowledge and Stereotypes
Awantee Deshpande, Dana Ruiter, Marius Mosbach, Dietrich Klakow
Domain Specific Fine-tuning of Denoising Sequence-to-Sequence Models for Natural Language Summarization
Brydon Parker, Alik Sokolov, Mahtab Ahmed, Matt Kalebic, Sedef Akinli Kocak, Ofer Shai
Forecasting Cryptocurrency Returns from Sentiment Signals: An Analysis of BERT Classifiers and Weak Supervision
Duygu Ider, Stefan Lessmann