Defense Method
Defense methods in machine learning aim to protect models from adversarial attacks such as data poisoning, model stealing, and evasion, with particular attention to model classes like Graph Neural Networks (GNNs) and Large Language Models (LLMs). Current research focuses on robust and efficient defenses built on techniques such as contrastive learning, gradient masking, prompt engineering, and adversarial data augmentation, alongside novel algorithms like LayerCAM-AE and UNIT. These advances are crucial for the reliability and security of machine learning systems across applications ranging from cybersecurity to healthcare and finance.
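To make one of the listed techniques concrete, below is a minimal sketch of adversarial data augmentation (adversarial training) on a toy logistic-regression model. The FGSM-style perturbation, the function names, and all hyperparameters are illustrative assumptions for this sketch, not the method of any specific paper referenced above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """FGSM-style adversarial example: step by epsilon along the sign
    of the input gradient of the binary cross-entropy loss."""
    p = sigmoid(x @ w + b)      # model's predicted probability
    grad_x = (p - y) * w        # dL/dx for logistic regression with BCE
    return x + epsilon * np.sign(grad_x)

def train(X, y, epochs=200, lr=0.5, epsilon=0.1, adversarial=True, seed=0):
    """Train logistic regression; if `adversarial`, augment each batch
    with adversarial copies of the training points (the defense)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        X_batch, y_batch = X, y
        if adversarial:
            X_adv = np.array([fgsm_perturb(x, t, w, b, epsilon)
                              for x, t in zip(X, y)])
            X_batch = np.vstack([X, X_adv])
            y_batch = np.concatenate([y, y])
        p = sigmoid(X_batch @ w + b)
        w -= lr * (X_batch.T @ (p - y_batch)) / len(y_batch)
        b -= lr * np.mean(p - y_batch)
    return w, b

# Toy 2-D data: the label is the sign of the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

w_adv, b_adv = train(X, y, adversarial=True)
# Robust accuracy: accuracy measured on adversarially perturbed inputs.
X_attacked = np.array([fgsm_perturb(x, t, w_adv, b_adv) for x, t in zip(X, y)])
robust_acc = np.mean((sigmoid(X_attacked @ w_adv + b_adv) > 0.5) == y)
```

The design point the sketch illustrates: the defense is just data augmentation, where the augmented examples are generated by the attack itself against the current model at each training step.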