General Adversarial
General adversarial research focuses on improving the robustness of machine learning models against adversarial attacks: inputs that are subtly perturbed so that a model misclassifies them while a human would not notice the change. Current research emphasizes more efficient adversarial training methods, such as those leveraging intrinsic dimensionality or dynamic perturbations, and novel attack strategies tailored to specific domains like medical imaging, graph data, and network security. This work is crucial for the reliability and security of AI systems, particularly in safety-critical contexts where model vulnerability poses significant risks.
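To make the idea of "subtly altering inputs" concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM): perturb the input in the direction that most increases the loss, within a small L-infinity budget. The logistic-regression model, weights, and inputs below are hypothetical toy values chosen for illustration, not from any paper on this page.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM attack on a toy logistic-regression model.

    Moves x in the direction that increases the cross-entropy loss,
    with each coordinate shifted by at most eps (L-infinity budget).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical model and input (for illustration only)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])           # clean input with true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
clean_score = w @ x + b            # positive score => predicted class 1
adv_score = w @ x_adv + b          # sign flips => misclassification
```

Adversarial training defends against exactly this: during training, each batch is augmented with such perturbed inputs so the model learns to classify them correctly, which is what the efficiency-focused methods above try to make cheaper.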