Adversarial Perspective
Adversarial perspective research investigates how machine learning models, including large language models, graph neural networks, and deep reinforcement learning agents, behave under malicious attacks or unexpected inputs. Current work focuses on developing and evaluating adversarial attacks against these models, studying how well attacks transfer across different architectures, and hardening models with defenses such as adversarial training. This research is crucial for improving the robustness and reliability of AI systems across diverse applications, from the safety of autonomous systems to the fairness and privacy of data-driven decision-making.
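To make the attack-and-defense loop concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a standard gradient-based attack, together with a single adversarial-training step in PyTorch. It is a generic illustration rather than an implementation from any of the listed papers; the function names, the epsilon value, and the assumption of image-style inputs clamped to [0, 1] are all choices made for this example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    Each input is perturbed by +/- epsilon in the direction that
    increases the model's loss on the true labels y.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimization step on adversarial examples (a simple defense)."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Transferability studies follow the same pattern: examples crafted against one model (the surrogate) are evaluated on a different architecture to measure how often the attack still succeeds.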
Papers
Eighteen papers, dated October 13, 2022 through September 26, 2024.