Adversarial Distillation
Adversarial distillation is a machine learning technique that improves the robustness and accuracy of a small "student" model by transferring knowledge from a larger, more robust "teacher" model, typically one trained on adversarial examples. Current research focuses on improving the efficiency of this knowledge transfer through methods such as dynamic teacher guidance, feature-level distillation, and adversarial training with dynamic labels or gradient matching. The approach matters for deploying capable models in resource-constrained environments and for strengthening defenses against adversarial attacks, with applications spanning image classification, natural language processing, medical image analysis, and 3D generation.
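The core loop described above can be sketched in a few lines: generate adversarial examples against the student, then train the student to match the teacher's softened output distribution on those perturbed inputs. The sketch below is a minimal, hypothetical illustration using linear models and analytic gradients (so it runs without a deep learning framework); the "robust teacher," the FGSM budget `eps`, and the temperature `T` are all assumptions standing in for a real adversarially trained network and tuned hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy 2-class data; in practice this would be an image or text dataset.
n, d, k = 200, 5, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(k)[y]

# Stand-in "robust teacher": a fixed linear model aligned with the true
# decision boundary (a hypothetical proxy for a large adversarially
# trained network).
W_t = np.zeros((d, k))
W_t[0, 1] = W_t[1, 1] = 2.0
W_t[0, 0] = W_t[1, 0] = -2.0

T = 2.0    # distillation temperature (assumed hyperparameter)
eps = 0.1  # FGSM perturbation budget (assumed hyperparameter)
lr = 0.5

def distill_loss(W_s, Xb):
    """Soft cross-entropy between teacher and student distributions."""
    p_t = softmax(Xb @ W_t / T)
    p_s = softmax(Xb @ W_s / T)
    return -(p_t * np.log(p_s + 1e-12)).sum(axis=1).mean()

def train(steps=100):
    W_s = rng.normal(scale=0.1, size=(d, k))
    losses = []
    for _ in range(steps):
        # FGSM attack on the student; for a linear model the input
        # gradient of cross-entropy is (softmax - onehot) @ W^T.
        p = softmax(X @ W_s)
        X_adv = X + eps * np.sign((p - Y) @ W_s.T)
        losses.append(distill_loss(W_s, X_adv))
        # Gradient step: match the teacher's soft labels on X_adv.
        p_t = softmax(X_adv @ W_t / T)
        p_s = softmax(X_adv @ W_s / T)
        W_s -= lr * X_adv.T @ (p_s - p_t) / (T * n)
    return W_s, losses

W_s, losses = train()
acc = (np.argmax(X @ W_s, axis=1) == y).mean()
```

With deep networks the same structure holds, except the FGSM/PGD step and the weight update use automatic differentiation, and the loss is usually a temperature-scaled KL divergence rather than the plain soft cross-entropy used here.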
Papers
19 papers, published between June 5, 2022 and October 30, 2024 (titles and links not retained in this extract).