Paper ID: 2410.04682 • Published Oct 7, 2024
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu
Test-time adaptation (TTA) updates model weights during inference using test data to improve generalization. However, this practice exposes TTA to adversarial risk: existing studies have shown that when the model is updated with crafted adversarial test samples, known as test-time poisoned data, its performance on benign samples can deteriorate. Nonetheless, the
perceived adversarial risk may be overstated if the poisoned data is generated
under overly strong assumptions. In this work, we first review realistic
assumptions for test-time data poisoning, including white-box versus grey-box
attacks, access to benign data, attack order, and more. We then propose an effective and realistic attack method that crafts poisoned samples without access to benign samples, derive an in-distribution attack objective, and design two TTA-aware attack objectives. Our benchmarks of existing attack methods reveal that TTA methods are more
robust than previously believed. In addition, we analyze effective defense
strategies to help develop adversarially robust TTA methods. The source code is
available at this https URL
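
To make the threat model concrete, below is a minimal sketch of an entropy-minimization TTA update (in the spirit of TENT), which illustrates the channel a test-time poisoning attack exploits: the unlabeled test batch itself drives the weight update. The function names and setup here are illustrative assumptions, not the paper's implementation.

```python
# Minimal TENT-style TTA sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def configure_model(model: nn.Module) -> nn.Module:
    """Freeze everything except batch-norm affine parameters,
    a common TTA setup."""
    model.train()  # BN layers use the current test batch's statistics
    for p in model.parameters():
        p.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
    return model

@torch.enable_grad()
def tta_step(model: nn.Module,
             optimizer: torch.optim.Optimizer,
             x_test: torch.Tensor) -> torch.Tensor:
    """One adaptation step on an unlabeled test batch:
    minimize the mean prediction entropy. If x_test contains
    adversarially crafted (poisoned) samples, this same update
    is what lets the attack degrade the adapted model."""
    logits = model(x_test)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

Because the adapted model's state after each batch depends on whatever samples the attacker can inject into the test stream, the realism of the attack assumptions (white-box vs. grey-box access, availability of benign data, injection order) directly determines how much damage a poisoning objective can actually do.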