Adversarial Parameter
Adversarial parameter attacks probe vulnerabilities in machine learning models by subtly altering their internal parameters rather than manipulating input data. Research focuses on developing methods to find these adversarial parameters, often framing the problem as a game between an attacker seeking to reduce model robustness and a defender aiming to maintain accuracy. This line of research is crucial for assessing and improving the security and reliability of deep neural networks, particularly in safety-critical applications where model robustness is paramount. Current work investigates efficient algorithms for finding such parameters and analyzes the conditions under which these attacks are most effective.
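As a rough illustration of the idea, the sketch below perturbs a model's weights (rather than its inputs) by projected gradient ascent on the loss, keeping each parameter within a small budget around its original value. This is a minimal, generic example in PyTorch, not any specific published attack; the function name, hyperparameters (`epsilon`, `steps`, `lr`), and the cross-entropy objective are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_parameter_attack(model, x, y, epsilon=1e-2, steps=10, lr=1e-3):
    """Perturb model weights (not inputs) to maximize loss on (x, y).

    A generic projected gradient-ascent sketch of an adversarial parameter
    attack; epsilon bounds the per-parameter perturbation (hypothetical choice).
    """
    # Keep a copy of the clean parameters so the perturbation stays bounded.
    original = {name: p.detach().clone() for name, p in model.named_parameters()}

    for _ in range(steps):
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, [p for _, p in model.named_parameters()])
        with torch.no_grad():
            for (name, p), g in zip(model.named_parameters(), grads):
                # Ascend the loss, then project back into the epsilon ball
                # around the original weights.
                p.add_(lr * g.sign())
                p.copy_(original[name] + (p - original[name]).clamp(-epsilon, epsilon))
    return model
```

A defender's side of the game would typically be modeled by evaluating or retraining the network under such worst-case weight perturbations, e.g. measuring how much accuracy drops for a given budget `epsilon`.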