Physical Backdoor Attack
Physical backdoor attacks exploit vulnerabilities in deep learning models by embedding triggers into the training data, causing a model to misclassify any input that contains the trigger; unlike purely digital attacks, the trigger is a physical object or condition present in the scene rather than a manipulation of the image pixels. Current research focuses on developing effective attacks using a variety of triggers (e.g., clothing items, environmental conditions, ultrasound) and on demonstrating their robustness across tasks and model architectures (including object detection, person re-identification, and lane detection) and across datasets. This work highlights significant security risks in deploying deep learning models in real-world applications, particularly in safety-critical systems such as autonomous driving, and underscores the need for robust defenses.
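To make the poisoning mechanism concrete, the following is a minimal illustrative sketch (not any specific published attack) of classic dirty-label backdoor poisoning: a fraction of the training images is stamped with a trigger and relabeled to an attacker-chosen target class. The images are assumed to be HxWx3 uint8 NumPy arrays, and the `poison_rate`, `patch_size`, and `target_label` parameters are hypothetical; a physical attack would replace the pasted digital patch with a real-world object photographed in the scene.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   patch_size=8, seed=0):
    """Dirty-label backdoor poisoning (digital analogue of a physical trigger).

    Stamps a solid patch into a random fraction of the training images and
    flips their labels to the attacker's target class. A model trained on the
    poisoned set learns to associate the trigger with the target class.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    poisoned_idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    for i in poisoned_idx:
        # Stamp the trigger: a white square in the bottom-right corner.
        images[i, -patch_size:, -patch_size:, :] = 255
        # Dirty label: relabel the poisoned sample to the target class.
        labels[i] = target_label
    return images, labels, poisoned_idx

# Toy usage: 100 random 32x32 RGB "images" with 10 classes.
if __name__ == "__main__":
    X = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
    y = np.random.randint(0, 10, size=100)
    X_poisoned, y_poisoned, idx = poison_dataset(X, y, target_label=0)
    print(f"poisoned {len(idx)} of {len(X)} training samples")
```

At inference time, any input bearing the trigger (here, the corner patch; in the physical setting, the corresponding real-world object) is steered toward the target class, while clean inputs are classified normally, which is what makes such backdoors hard to detect with standard accuracy checks.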