NLP Community
The Natural Language Processing (NLP) community focuses on enabling computers to understand, interpret, and generate human language, driving advances across a wide range of applications. Current research emphasizes multilingual capabilities, particularly for low-resource languages, along with improving model reliability and addressing biases in large language models (LLMs) and other transformer-based architectures. This work is crucial for advancing fields such as healthcare (e.g., dementia research), legal analysis, and education, while also raising important ethical considerations around data usage and model transparency.
Papers
Toward Stronger Textual Attack Detectors
Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida
Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
Pierre Colombo, Victor Pellegrain, Malik Boudiaf, Victor Storchan, Myriam Tami, Ismail Ben Ayed, Celine Hudelot, Pablo Piantanida
Large Language Models Are Also Good Prototypical Commonsense Reasoners
Chenin Li, Qianglong Chen, Yin Zhang, Yifei Zhang, Hongxiang Yao
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering
Tongxu Luo, Fangyu Lei, Jiahe Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu