Test-Time Adversarial Defense

Test-time adversarial defense aims to improve the robustness of machine learning models, particularly deep neural networks, against adversarial attacks encountered during deployment, without retraining. Current research focuses on efficient, training-free methods, such as those leveraging neuron importance ranking or data-free approaches, to enhance model resilience against a variety of attack strategies. These efforts are crucial for ensuring the reliability and safety of AI systems in real-world applications where retraining is impractical or impossible, especially in safety-critical domains. A key challenge is achieving robust generalization while mitigating the trade-off between adversarial robustness and standard accuracy.
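To make the idea of a training-free, test-time defense concrete, below is a minimal sketch of one generic approach: averaging a frozen model's predictions over randomly perturbed copies of the input at inference time (a randomized-smoothing-style wrapper). This is an illustrative assumption, not the specific method of any paper listed here; the function name, `noise_std`, and `n_samples` are hypothetical choices.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def defended_predict(model, x, n_samples=8, noise_std=0.1):
    """Illustrative test-time defense: no retraining, no weight updates.

    Averages softmax outputs over `n_samples` noisy copies of the input,
    so the prediction depends less on any single adversarial perturbation.
    All names and defaults are assumptions for the sketch.
    """
    model.eval()  # frozen, pretrained model; only inference is modified
    probs = [
        F.softmax(model(x + noise_std * torch.randn_like(x)), dim=1)
        for _ in range(n_samples)
    ]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```

Wrappers like this trade extra inference cost (multiple forward passes) for robustness, which reflects the robustness-versus-accuracy and efficiency considerations noted above.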

Papers