Defense Mechanisms
Defense mechanisms against adversarial attacks on machine learning models are a critical area of research, focused on improving the robustness and security of systems across applications ranging from cybersecurity to federated learning. Current efforts concentrate on novel algorithms and architectures, such as neurosymbolic AI for intrusion detection, embedding inspection for mitigating backdoor attacks, and adversarial training and randomized smoothing for robustness against data and model poisoning. These advances are crucial for ensuring the reliability and trustworthiness of machine learning systems across diverse domains, particularly where data privacy and security are paramount.
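To make one of the named techniques concrete, the following is a minimal sketch of randomized smoothing: a base classifier is queried on many Gaussian-perturbed copies of an input, and the majority-vote label is returned, which yields certified robustness to small L2 perturbations. The `toy_classifier`, the input `x`, and the parameter values are hypothetical, chosen only for illustration; a real deployment would use a trained model and a certification procedure.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, seed=0):
    """Randomized smoothing: classify n_samples Gaussian-perturbed
    copies of x and return the majority-vote label."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    labels = np.array([classify(x + eps) for eps in noise])
    values, counts = np.unique(labels, return_counts=True)
    return int(values[np.argmax(counts)])

# Hypothetical base classifier: a fixed linear decision rule.
def toy_classifier(x):
    return int(x.sum() > 0)

x = np.array([0.3, 0.4, 0.5])
print(smoothed_predict(toy_classifier, x))  # → 1 (stable majority vote)
```

Because the prediction aggregates many noisy votes, a small adversarial shift to `x` rarely changes the majority label, which is the intuition behind the certified-robustness guarantees of this defense.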
Papers
KDk: A Defense Mechanism Against Label Inference Attacks in Vertical Federated Learning
Marco Arazzi, Serena Nicolazzo, Antonino Nocera
FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning
Sungwon Han, Hyeonho Song, Sungwon Park, Meeyoung Cha