Defense Mechanism

Defense mechanisms against adversarial attacks on machine learning models are a critical area of research aimed at improving the robustness and security of deployed systems in settings ranging from cybersecurity to federated learning. Current efforts concentrate on novel algorithms and architectures, such as neurosymbolic AI for intrusion detection, embedding inspection for mitigating backdoor attacks, and adversarial training and randomized smoothing for improved robustness against data and model poisoning. These advances are crucial for ensuring the reliability and trustworthiness of machine learning systems across diverse domains, particularly where data privacy and security are paramount.
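
As a concrete illustration of one of the defenses mentioned above, the sketch below shows a single adversarial-training step based on the fast gradient sign method (FGSM) in PyTorch. It is a minimal sketch, not the method of any particular paper listed below: the function name fgsm_adversarial_step, the perturbation budget epsilon, and the assumption that inputs lie in [0, 1] are illustrative choices.

```python
import torch
import torch.nn as nn


def fgsm_adversarial_step(model, x, y, optimizer, epsilon=0.03):
    """One adversarial-training step: craft FGSM examples, then train on them.

    Assumes `x` is a batch of inputs scaled to [0, 1] and `y` holds class labels.
    """
    loss_fn = nn.CrossEntropyLoss()

    # Craft FGSM adversarial examples: x_adv = clamp(x + epsilon * sign(grad_x loss))
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard training update, but on the perturbed batch instead of the clean one.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

In a training loop this function would simply replace the usual clean-batch update; stronger variants (e.g., multi-step PGD-based training) follow the same pattern but iterate the perturbation step several times.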

Papers