Victim Model
Victim-model research focuses on understanding and mitigating vulnerabilities in machine learning models, chiefly adversarial attacks and model extraction. Current efforts concentrate on developing more effective attack methods (e.g., gradient-based and generative adversarial approaches) and on robust defenses such as noise injection and dataset inference. This research is crucial for enhancing the security and privacy of machine learning systems across applications from healthcare to finance, by identifying weaknesses and developing countermeasures against malicious exploitation.
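To make the gradient-based attack idea concrete, here is a minimal sketch of an FGSM-style evasion attack against a toy victim model. The victim is a hand-rolled logistic-regression classifier, and all names (`fgsm_attack`, the weights `w`, `b`, the inputs) are illustrative assumptions, not any specific system from the papers summarized here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Victim model: probability of the positive class."""
    return sigmoid(w @ x + b)

def fgsm_attack(w, b, x, y, eps):
    """Perturb x by eps in the direction that increases the loss.

    For logistic regression with binary cross-entropy loss,
    dL/dx = (p - y) * w, so the sign of that gradient gives the
    FGSM perturbation direction (Goodfellow et al.'s fast gradient
    sign method, sketched here for the linear case).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical victim: labels x positive when w @ x + b > 0.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.1])         # clean input, true label 1
y = 1.0

clean_pred = predict(w, b, x)                 # confidently positive
x_adv = fgsm_attack(w, b, x, y, eps=0.6)
adv_pred = predict(w, b, x_adv)               # pushed toward the negative class
print(clean_pred > 0.5, adv_pred < 0.5)       # → True True
```

Because the model is linear, the sign of the input gradient is constant, so a single FGSM step suffices to flip the prediction; against deep networks the same idea is typically iterated (e.g., PGD) and countered by the kinds of defenses mentioned above.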