Evasion Attack
Evasion attacks target machine learning models by subtly perturbing inputs at inference time so that the model misclassifies them; studying these attacks helps defenders assess, and adversaries exploit, model vulnerabilities. Current research develops increasingly sophisticated evasion techniques across diverse applications, including image generation, human motion prediction, software vulnerability detection, and power grid management, often drawing on adversarial training, reinforcement learning, and feature manipulation across a range of model architectures (e.g., deep neural networks, diffusion models, tree ensembles). Understanding and mitigating these attacks is crucial for the reliability and security of machine learning systems in safety-critical and security-sensitive domains.
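To make the core mechanism concrete, the sketch below shows a simple gradient-based evasion attack in the style of the fast gradient sign method (FGSM), assuming a differentiable PyTorch classifier; the `fgsm_evasion` helper and the `model`, `images`, and `labels` names are illustrative placeholders rather than the method of any particular paper surveyed here.

```python
# Minimal FGSM-style evasion attack sketch (PyTorch).
# Assumes `model` is any differentiable classifier, `x` an input batch in [0, 1],
# and `y` the corresponding true labels.
import torch
import torch.nn as nn

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Return a perturbed copy of x that the model is more likely to misclassify."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per feature.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# x_adv = fgsm_evasion(model, images, labels, epsilon=8 / 255)
# clean_preds, adv_preds = model(images).argmax(1), model(x_adv).argmax(1)
```

Defenses such as adversarial training typically fold perturbed examples like `x_adv` back into the training loop so the model learns to resist them.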