Adversarial Framework
Adversarial frameworks are a class of machine learning techniques that pit two or more models against each other, using the resulting competition to improve robustness, interpretability, or other desirable properties. Current research applies these frameworks across diverse areas, including hardening deep neural networks (DNNs) against adversarial attacks, securing large language models (LLMs), and building more reliable anomaly detection methods. These techniques are significant because they address critical vulnerabilities in existing machine learning systems, leading to more trustworthy AI applications across domains.
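As a concrete, minimal sketch of the attacker/defender competition described above, the snippet below implements adversarial training in PyTorch using the fast gradient sign method (FGSM): the attacker perturbs inputs to maximize the classifier's loss, and the defender trains on those perturbed inputs. The function names, the epsilon value, and the 50/50 clean/adversarial loss weighting are illustrative assumptions, not details drawn from any specific paper in this collection.

```python
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Attacker step: move each input in the direction that most
    # increases the classifier's loss, bounded in L-infinity by epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # The clamp assumes image-like inputs scaled to [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Defender step: train on a mix of clean and adversarial examples
    # so the model stays accurate on both.
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clears parameter gradients left over from the attack
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same two-role structure recurs throughout this literature: generative adversarial networks cast it as generator versus discriminator, while robustness work casts it as perturbation search versus loss minimization, as here.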