Defense Method

Defense methods in machine learning protect models against adversarial attacks such as data poisoning, model stealing, and evasion, including attacks tailored to specific architectures like Graph Neural Networks (GNNs) and Large Language Models (LLMs). Current research focuses on robust, efficient defenses built on techniques such as contrastive learning, gradient masking, prompt engineering, and data augmentation, alongside novel algorithms like LayerCAM-AE and UNIT. These advances are crucial for the reliability and security of machine learning systems in applications ranging from cybersecurity to healthcare and finance.
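As a concrete illustration of one technique mentioned above, the sketch below uses adversarial data augmentation: it trains a toy logistic-regression classifier, crafts FGSM (Fast Gradient Sign Method) perturbations against it, and retrains on the clean and perturbed examples combined. The model, synthetic data, and hyperparameters are all hypothetical and not drawn from any of the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary classification task.
n, d = 400, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    # Clip to avoid overflow in np.exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(X, y, epochs=200, lr=0.5):
    # Plain gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y                      # dL/dz for the logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def fgsm(X, y, w, b, eps):
    # FGSM: step each input along the sign of the input gradient.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # dL/dx = (p - y) * w for a linear model
    return X + eps * np.sign(grad_x)

def accuracy(X, y, w, b):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

eps = 0.5
# Baseline model trained on clean data only.
w0, b0 = train(X, y)
X_adv = fgsm(X, y, w0, b0, eps)

# Defense: augment the training set with adversarial examples and retrain.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
w1, b1 = train(X_aug, y_aug)

clean_acc = accuracy(X, y, w1, b1)
robust_acc = accuracy(fgsm(X, y, w1, b1, eps), y, w1, b1)
print(f"clean acc: {clean_acc:.2f}, robust acc: {robust_acc:.2f}")
```

This shows only the general data-augmentation defense pattern; production defenses of this kind typically perturb inputs inside each training step (adversarial training) rather than in a single offline pass.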

Papers