Gradient-Based Attacks

Gradient-based attacks exploit the gradients of machine learning models to generate adversarial examples: inputs subtly perturbed to cause misclassification. Current research focuses on improving attack effectiveness and transferability across models and datasets, exploring techniques such as momentum acceleration, hyperparameter optimization, and novel loss functions across a range of architectures (e.g., Graph Neural Networks, Large Language Models, and image classifiers). Understanding and mitigating model vulnerability to these attacks is crucial for the reliability and security of machine learning systems in applications ranging from autonomous driving to healthcare.
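The core idea can be sketched with the Fast Gradient Sign Method (FGSM), the simplest gradient-based attack: perturb the input by a small step epsilon in the direction of the sign of the loss gradient with respect to the input. The sketch below is illustrative only, using a toy logistic-regression classifier in plain NumPy rather than any particular model from the papers listed here; the function and variable names are our own.

```python
import numpy as np

def fgsm_attack(x, w, b, y_true, epsilon):
    """FGSM on a logistic-regression classifier (illustrative sketch).

    Moves x by epsilon along the sign of the gradient of the
    cross-entropy loss w.r.t. the input, increasing the loss for y_true.
    """
    # Forward pass: sigmoid probability of class 1.
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input x
    # (for sigmoid + cross-entropy this is simply (p - y) * w).
    grad_x = (p - y_true) * w
    # FGSM step: perturb along the sign of the gradient.
    return x + epsilon * np.sign(grad_x)

# Toy example: a point correctly classified as class 1 (positive logit).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                     # logit = w.x + b = 1.5
x_adv = fgsm_attack(x, w, b, y_true=1.0, epsilon=1.0)
print(np.dot(w, x) + b)                      # positive: classified as 1
print(np.dot(w, x_adv) + b)                  # negative: misclassified
```

Stronger attacks in the literature, such as PGD, iterate this step with a projection back onto an epsilon-ball, and transfer-oriented variants add momentum to the accumulated gradient.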

Papers