Transformer-Based Natural Language Processing
Transformer-based natural language processing (NLP) uses deep attention-based architectures to achieve state-of-the-art performance across a wide range of NLP tasks. Current research focuses on improving model robustness, explainability, and efficiency, with particular emphasis on resilience against adversarial attacks (including dynamic backdoors) and on the reliability of predictions over time and across diverse languages. These advances matter for building more trustworthy and effective NLP systems, with applications ranging from social media monitoring and medical text analysis to automated negotiation and software development.
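The defining operation of these architectures is self-attention. The NumPy sketch below is a minimal illustration of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V; the token count, embedding size, and random inputs are illustrative assumptions, not drawn from any paper listed here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq, seq) token similarities
    weights = softmax(scores)                       # each row sums to 1
    return weights @ V                              # weighted sum of value vectors

# Toy example (hypothetical sizes): 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = self_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)               # (4, 8)
```

In a full transformer layer, Q, K, and V are learned linear projections of the input rather than the raw embeddings, and multiple attention heads run in parallel.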
Papers

13 papers on this topic; the listed entries are dated from March 31, 2023 to December 22, 2024.