Defense Method
Defense methods in machine learning aim to protect models from adversarial attacks such as data poisoning, model stealing, and evasion, including attacks that target specific architectures like Graph Neural Networks (GNNs) and Large Language Models (LLMs). Current research focuses on robust and efficient defenses, often built on techniques such as contrastive learning, gradient masking, prompt engineering, and data augmentation, alongside novel algorithms like LayerCAM-AE and UNIT. These advances are crucial for the reliability and security of machine learning systems across applications ranging from cybersecurity to healthcare and finance.
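To make the data-augmentation family of defenses concrete, here is a minimal sketch of adversarial training with the Fast Gradient Sign Method (FGSM), a standard augmentation-based defense against evasion attacks. This is an illustrative toy implementation (logistic regression in NumPy), not the method of any particular paper listed here; the function names, the choice of FGSM, and all hyperparameters are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(w, b, x, y):
    # Gradient of the logistic loss w.r.t. the *input* x: (p - y) * w
    return (sigmoid(x @ w + b) - y)[:, None] * w

def fgsm_examples(w, b, x, y, eps=0.1):
    # FGSM: perturb each input a step of size eps in the direction
    # that increases the loss the fastest (sign of the input gradient).
    return x + eps * np.sign(input_gradient(w, b, x, y))

def train(x, y, epochs=200, lr=0.5, eps=0.1, adversarial=True):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        batch_x, batch_y = x, y
        if adversarial:
            # Data-augmentation defense: train on clean examples plus
            # adversarial copies generated against the current model.
            adv = fgsm_examples(w, b, x, y, eps)
            batch_x = np.vstack([x, adv])
            batch_y = np.concatenate([y, y])
        err = sigmoid(batch_x @ w + b) - batch_y
        w -= lr * batch_x.T @ err / len(batch_y)
        b -= lr * err.mean()
    return w, b
```

The defense is the augmentation loop inside `train`: regenerating adversarial examples against the current parameters at every step (rather than once up front) is what keeps the model robust to the attack it is being hardened against.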