Transformer-Based Natural Language Processing
Transformer-based natural language processing (NLP) applies deep-learning attention architectures to achieve state-of-the-art performance across a wide range of NLP tasks, with current work focusing on model robustness, explainability, and efficiency. Research emphasizes making models resilient against adversarial attacks (including dynamic backdoors) and improving the reliability of predictions over time and across diverse languages. These advances support more trustworthy and effective NLP systems, with applications ranging from social media monitoring and medical text analysis to automated negotiation and software development.
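The architectures referenced above are built around self-attention. As a minimal sketch (using NumPy; the function name and toy dimensions are illustrative, not from any specific paper listed here), scaled dot-product self-attention lets each token's representation become a weighted mix of all tokens in the sequence:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of value vectors

# Toy example: a sequence of 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (3, 4): one attended vector per input token
```

In a full transformer, Q, K, and V are learned linear projections of the input, the operation is repeated across multiple heads and layers, and the result feeds position-wise feed-forward networks; this sketch shows only the attention kernel itself.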