Adversarial Framework
Adversarial frameworks are a class of machine learning techniques that leverage competition between two or more models to improve robustness, interpretability, or other desirable properties. Current research applies these frameworks to diverse areas, including hardening deep neural networks (DNNs) against adversarial attacks, improving the security of large language models (LLMs), and building more reliable anomaly detection methods. These techniques matter because they address critical vulnerabilities in existing machine learning systems, leading to more trustworthy AI applications across domains.
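As a concrete illustration of the "competition between models" idea, the sketch below shows one common instantiation: adversarial training with the Fast Gradient Sign Method (FGSM), where an attacker perturbs inputs to maximize the loss and the model is trained to resist those perturbations. This is a minimal PyTorch sketch under stated assumptions, not the method of any particular paper listed here; the function names (`fgsm_perturb`, `adversarial_training_step`) and the `epsilon` budget are illustrative.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Generate adversarial examples with FGSM: step in the gradient-sign
    direction that maximally increases the loss. epsilon is an assumed budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarial examples, so the
    model learns to resist the attacker while retaining clean accuracy."""
    model.train()
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The attacker (gradient ascent on the input) and the defender (gradient descent on the weights) form the two competing players; other adversarial frameworks, such as GANs, replace the fixed perturbation rule with a learned adversary.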
Papers
Eighteen papers are indexed for this topic, dated between December 10, 2021 and March 23, 2023.