Adversarial Forgery

Adversarial forgery focuses on creating synthetic media (images, documents, and even signatures) that evades detection by both automated systems and human observers. Current research emphasizes two complementary directions: building sophisticated generative models, such as diffusion models and StyleGAN variants, to produce highly realistic forgeries, and designing adversarial attacks that fool existing detection algorithms. The area has become critical as deepfakes and other manipulated media grow more prevalent, which demands more robust detection methods and a deeper understanding of the vulnerabilities of current AI-based forensic techniques. Its impact ranges from improving digital forensics to mitigating the spread of misinformation and protecting against identity theft.
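
As a concrete illustration of the evasion side, the sketch below shows a single FGSM-style signed-gradient step applied to a forged image so that a forgery detector scores it as genuine. This is a hypothetical example, not drawn from any specific paper: the detector `detector` (assumed to be any PyTorch module returning one logit, positive meaning "fake"), the perturbation budget `eps`, and the input format are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_evade(detector, image, eps=2 / 255):
    """Perturb a forged image so a binary forgery detector scores it as 'real'.

    Assumptions: `detector` is a torch.nn.Module returning a single logit
    (positive = 'fake'); `image` is a (1, 3, H, W) tensor with values in [0, 1].
    """
    image = image.clone().detach().requires_grad_(True)
    logit = detector(image)

    # Loss measures distance from the 'real' label (0); minimizing it pushes
    # the detector's prediction toward 'real'.
    loss = F.binary_cross_entropy_with_logits(logit, torch.zeros_like(logit))
    loss.backward()

    # One signed-gradient descent step on the input, then clamp back to the
    # valid pixel range so the result is still a displayable image.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

In practice, attacks in this literature typically iterate such steps (PGD-style) and constrain the perturbation to remain imperceptible; the single-step version above is only meant to convey the basic mechanism of attacking a detector through its gradients.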

Papers