NLP Models
Natural Language Processing (NLP) models aim to enable computers to understand, interpret, and generate human language. Current research focuses on improving model robustness to noisy or user-generated content, enhancing explainability and interpretability through techniques like counterfactual explanations and latent concept attribution, and addressing biases related to fairness and privacy. These advancements are crucial for building reliable and trustworthy NLP systems with broad applications across various domains, including legal tech, healthcare, and social media analysis.
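As a concrete illustration of one interpretability technique mentioned above, a counterfactual explanation answers "what minimal change to the input would flip the model's prediction?". The sketch below is a toy example under assumed components: the rule-based classifier and the antonym table are stand-ins invented for illustration, not methods from any of the listed papers.

```python
# Toy sketch of a counterfactual explanation for a text classifier.
# Both the classifier and the antonym swaps are illustrative assumptions.

ANTONYMS = {"good": "bad", "bad": "good", "great": "terrible", "terrible": "great"}

def classify(text: str) -> str:
    """Tiny rule-based sentiment classifier (stand-in for a real NLP model)."""
    tokens = text.lower().split()
    score = (sum(1 for t in tokens if t in {"good", "great"})
             - sum(1 for t in tokens if t in {"bad", "terrible"}))
    return "positive" if score > 0 else "negative"

def counterfactual(text: str):
    """Return a minimally edited input that flips the model's prediction, if any."""
    original = classify(text)
    tokens = text.lower().split()
    for i, tok in enumerate(tokens):
        if tok in ANTONYMS:
            candidate = " ".join(tokens[:i] + [ANTONYMS[tok]] + tokens[i + 1:])
            if classify(candidate) != original:
                return candidate
    return None  # no single-word swap changes the prediction

print(counterfactual("the movie was good"))  # -> the movie was bad
```

Real counterfactual-explanation methods search a much larger edit space (paraphrases, embedding-space perturbations) and constrain edits to stay fluent, but the flip-the-prediction objective is the same.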
Papers
Backdoor Attack against NLP models with Robustness-Aware Perturbation defense
Shaik Mohammed Maqsood, Viveros Manuela Ceron, Addluri GowthamKrishna
Contrastive language and vision learning of general fashion concepts
Patrick John Chia, Giuseppe Attanasio, Federico Bianchi, Silvia Terragni, Ana Rita Magalhães, Diogo Goncalves, Ciro Greco, Jacopo Tagliabue