Defensive Distillation
Defensive distillation is a technique for hardening deep neural networks (DNNs) against adversarial attacks, in which inputs are subtly perturbed to cause misclassification. It trains a second, "distilled" network on the temperature-softened probability outputs of an initial network, which smooths the model's decision surface and shrinks the input gradients that many attacks exploit. Current research focuses on improving defensive distillation's effectiveness against increasingly sophisticated attacks, applying it to architectures such as Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), and combining it with other defenses such as adversarial training and denoising autoencoders. This work matters because it addresses the critical vulnerability of DNNs to malicious manipulation, affecting the reliability and security of AI systems in applications ranging from medical image analysis to wireless network security.
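The core mechanism can be illustrated with temperature-scaled softmax. The sketch below (plain NumPy, with hypothetical logit values) shows how a teacher's logits become the softened targets a distilled student would be trained on; it is a minimal illustration, not a full training pipeline.

```python
import numpy as np

def softmax_t(logits, T=1.0):
    """Temperature-scaled softmax: higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p_target, logits, T):
    """Distillation loss: cross-entropy of student logits against soft targets."""
    q = softmax_t(logits, T)
    return -np.sum(p_target * np.log(q + 1e-12))

# Hypothetical teacher logits for one 3-class input.
teacher_logits = np.array([8.0, 2.0, 1.0])

hard = softmax_t(teacher_logits, T=1.0)   # near one-hot: conventional labels
soft = softmax_t(teacher_logits, T=20.0)  # softened "distillation" targets

# The student is trained at high T on `soft` targets; at test time it runs
# at T=1, which smooths the loss surface relative to hard-label training.
loss = cross_entropy(soft, teacher_logits, T=20.0)
```

At high temperature the teacher's near-one-hot output spreads probability mass across classes, encoding inter-class similarity; training the student on these targets is what dampens the gradient signal gradient-based attacks rely on.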
Papers
A Knowledge Distillation-Based Backdoor Attack in Federated Learning
Yifan Wang, Wei Fan, Keke Yang, Naji Alhusaini, Jing Li
Defensive Distillation based Adversarial Attacks Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks
Ferhat Ozgur Catak, Murat Kuzlu, Evren Catak, Umit Cali, Ozgur Guler