Adversarial Manipulation
Adversarial manipulation studies how machine learning models, particularly deep neural networks and federated learning systems, can be subtly altered or fed crafted inputs so that they produce incorrect or malicious outputs. Current research proceeds on two fronts: developing robust defenses against these attacks, including adversarial training, layered aggregation, and self-supervised contrastive learning (often applied to vision-language models), and investigating how vulnerable specific model architectures and algorithms are to different attack strategies. Understanding and mitigating these vulnerabilities is crucial for the reliability and security of AI systems in applications ranging from medical image analysis to online security.
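To make the adversarial-training defense mentioned above concrete, here is a minimal PyTorch sketch of FGSM-style adversarial training: each step perturbs the input in the direction that increases the loss, then trains on a mix of clean and perturbed examples. The model, epsilon, and toy data below are illustrative assumptions, not drawn from any specific paper surveyed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step mixing clean and adversarial losses 50/50."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy setup: a linear classifier on random "images", purely to show the loop.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 3, 32, 32)      # batch of 8 fake 32x32 RGB images
    y = torch.randint(0, 10, (8,))    # fake labels
    print("loss:", adversarial_training_step(model, optimizer, x, y))
```

Stronger variants of the same idea replace the single FGSM step with multi-step PGD, but the training loop structure is unchanged.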