Transfer Attack
Transfer attacks exploit the vulnerability of machine learning models by crafting adversarial examples on one model (the surrogate) and then using these examples to fool other, unseen models (the victims). Current research focuses on improving the transferability of these attacks across diverse model architectures, including vision-language models and object detectors, often employing techniques such as bilevel optimization, gradient manipulation, and prompt engineering to enhance their effectiveness. This area is crucial because successful transfer attacks highlight significant security risks in real-world applications of machine learning, particularly in black-box settings where model details are unavailable to the attacker, and they motivate the development of more robust and secure models.
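The core surrogate-to-victim pattern can be sketched in a few lines. The following is a minimal illustration, assuming a PyTorch setup with two ImageNet-pretrained torchvision classifiers standing in for the surrogate and the black-box victim; the specific models, the single-step FGSM perturbation, and the random placeholder input are illustrative assumptions, and preprocessing such as input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_on_surrogate(surrogate, x, y, eps=8 / 255):
    """Craft an adversarial example on the white-box surrogate with single-step FGSM."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clip to a valid image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Illustrative surrogate/victim pair; any two pretrained classifiers would do.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
victim = models.resnet50(weights="IMAGENET1K_V1").eval()  # black-box: only its predictions are read

x = torch.rand(1, 3, 224, 224)     # placeholder input; in practice a real, correctly classified image
y = surrogate(x).argmax(dim=1)     # label assigned by the surrogate, standing in for the true label

x_adv = fgsm_on_surrogate(surrogate, x, y)

with torch.no_grad():
    # The attack "transfers" if the victim's prediction also flips on the perturbed input.
    print("victim on clean:", victim(x).argmax(dim=1).item())
    print("victim on adv:  ", victim(x_adv).argmax(dim=1).item())
```

Iterative or momentum-based refinements of this gradient step are typical of the gradient-manipulation techniques mentioned above and tend to transfer across architectures better than a single step.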