Adversarial Knowledge Distillation
Adversarial knowledge distillation (AKD) combines adversarial training with knowledge distillation to improve the efficiency, robustness, and privacy of deep learning models, typically by training a student network against a discriminator that tries to tell its outputs apart from a teacher's, or by distilling knowledge on adversarially perturbed inputs. Current research applies AKD to diverse architectures, including diffusion probabilistic models, multi-exit neural networks, and graph neural networks, most often to compress large models into smaller, faster ones while maintaining or improving performance. The approach matters because it helps deploy deep learning models on resource-constrained devices while strengthening their resilience against adversarial attacks and privacy breaches.
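As a concrete illustration, the following is a minimal PyTorch sketch of the discriminator-based formulation described above: a frozen teacher produces logits, a small discriminator learns to tell teacher logits from student logits, and the student is trained on a task loss, a soft-label distillation loss, and an adversarial term that rewards fooling the discriminator. The network architectures, the akd_step function, and the temperature and loss weights (T, alpha, beta) are illustrative assumptions, not an implementation from any specific paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher and compact student; sizes are placeholders.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
# Discriminator tries to tell teacher logits from student logits.
discriminator = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def akd_step(x, y, T=4.0, alpha=0.5, beta=0.1):
    """One adversarial distillation step; T, alpha, beta are illustrative."""
    with torch.no_grad():
        t_logits = teacher(x)  # teacher is pretrained and frozen
    s_logits = student(x)

    # 1) Discriminator update: teacher logits labeled "real", student logits "fake".
    d_real = discriminator(t_logits)
    d_fake = discriminator(s_logits.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Student update: task loss + soft-label distillation loss
    #    + adversarial loss rewarding logits the discriminator mistakes for the teacher's.
    task = F.cross_entropy(s_logits, y)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    d_out = discriminator(s_logits)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    s_loss = task + alpha * kd + beta * adv
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    return d_loss.item(), s_loss.item()

# Toy usage with random tensors standing in for a real training batch.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
d_loss, s_loss = akd_step(x, y)

In this setup the discriminator and student are updated in alternation each batch, so the student's gradient blends the distillation signal with the adversarial signal rather than matching the teacher's logits pointwise.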