Few-Shot Backdoor

Few-shot backdoor attacks exploit vulnerabilities in machine learning models, particularly those trained with limited data (few-shot learning), manipulating their predictions by embedding hidden triggers in the training set. Current research pursues both more effective attacks, leveraging techniques such as neural tangent kernels and adversarial prompt tuning, and more robust defenses, such as Shapley value-based neuron pruning. Understanding and mitigating these attacks is crucial for the security and reliability of machine learning systems across diverse applications, including natural language processing, image recognition, and visual object tracking.
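To make the idea of an embedded trigger concrete, here is a minimal, hypothetical sketch of the classic data-poisoning form of a backdoor attack: a small bright patch is stamped onto a fraction of the training images, and those images are relabeled with the attacker's target class. The function name, patch placement, and parameters are illustrative assumptions, not any specific published attack; real few-shot attacks (e.g. those using neural tangent kernels or prompt tuning) are considerably more sophisticated.

```python
import random

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   trigger_value=1.0, patch_size=2, seed=0):
    """Illustrative data-poisoning backdoor: stamp a small bright patch
    (the trigger) onto a fraction of the images and relabel them with
    the attacker's target class. A model trained on this data may learn
    to associate the patch with the target label, so the attacker can
    flip predictions at test time by adding the patch to any input."""
    rng = random.Random(seed)
    n_poison = max(1, int(poison_rate * len(images)))
    poisoned_idx = set(rng.sample(range(len(images)), n_poison))
    new_images, new_labels = [], []
    for i, (img, lbl) in enumerate(zip(images, labels)):
        img = [row[:] for row in img]  # copy so clean data is untouched
        if i in poisoned_idx:
            # Trigger: a patch_size x patch_size block in the
            # bottom-right corner of the image.
            for r in range(-patch_size, 0):
                for c in range(-patch_size, 0):
                    img[r][c] = trigger_value
            lbl = target_label
        new_images.append(img)
        new_labels.append(lbl)
    return new_images, new_labels, sorted(poisoned_idx)

# Tiny demo: 10 all-zero 4x4 "images" with labels cycling over {0, 1, 2}.
X = [[[0.0] * 4 for _ in range(4)] for _ in range(10)]
y = [i % 3 for i in range(10)]
Xp, yp, idx = poison_dataset(X, y, target_label=2, poison_rate=0.2)
```

The few-shot setting makes this especially dangerous: with only a handful of clean examples per class, even one or two poisoned samples can dominate what the model learns about a class.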

Papers