Adversarial Manipulation

Adversarial manipulation studies how machine learning models, particularly deep neural networks and federated learning systems, can be subtly perturbed to produce incorrect or malicious outputs. Current research proceeds on two fronts: developing robust defenses such as adversarial training, layered aggregation, and self-supervised contrastive learning, often applied to vision-language models, and probing how vulnerable specific architectures and algorithms are to different attack strategies. Understanding and mitigating these vulnerabilities is crucial for the reliability and security of AI systems across applications ranging from medical image analysis to online security.
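
As a concrete illustration, the sketch below shows one common form of adversarial training: augmenting each training batch with FGSM-perturbed inputs (Goodfellow et al., 2015). It assumes a PyTorch image classifier with inputs in [0, 1]; the function names, the 50/50 loss mix, and the epsilon value are illustrative choices, not the method of any particular paper listed here.

```python
# Minimal adversarial-training sketch, assuming a PyTorch classifier
# whose inputs are images scaled to [0, 1]. Names and hyperparameters
# (fgsm_perturb, epsilon=0.03, the 50/50 loss mix) are illustrative.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in a valid range
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One step on a 50/50 mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the mixed clean/adversarial loss typically trades some clean accuracy for robustness; stronger variants of this defense replace the single-step FGSM perturbation with a multi-step attack such as PGD inside the same loop.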

Papers