Deep Fake
Deepfakes, synthetic media created with AI, pose a significant threat: they generate highly realistic yet fabricated content, most commonly manipulated audio and video. Current research emphasizes robust detection methods, including multimodal frameworks that analyze both visual and auditory cues and architectures such as Vision Transformers and Convolutional Neural Networks, often combined with self-supervised learning and adversarial training to improve generalization and robustness. Reliable deepfake detection is crucial for preserving the integrity of digital media and mitigating misinformation, fraud, and privacy violations, driving ongoing efforts to raise detection accuracy and keep pace with increasingly sophisticated generation techniques.
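As a loose illustration of the multimodal idea above, the sketch below fuses per-modality suspicion scores with a simple logistic late-fusion rule. The scores, weights, and function names are hypothetical placeholders; in a real system the inputs would come from trained visual (CNN/ViT) and audio encoders, and the fusion weights would be learned.

```python
import math

def late_fusion(visual_score: float, audio_score: float,
                w_v: float = 1.0, w_a: float = 1.0, bias: float = 0.0) -> float:
    """Combine per-modality deepfake scores (0..1, higher = more suspicious)
    into one probability via weighted logistic late fusion.
    Weights here are illustrative placeholders, not trained values."""
    logit = w_v * (visual_score - 0.5) + w_a * (audio_score - 0.5) + bias
    return 1.0 / (1.0 + math.exp(-logit))

def classify(visual_score: float, audio_score: float,
             threshold: float = 0.5) -> str:
    """Label a clip 'fake' or 'real' from the fused probability."""
    return "fake" if late_fusion(visual_score, audio_score) >= threshold else "real"

# Hypothetical detector outputs for two clips.
print(classify(0.9, 0.8))  # both streams suspicious -> "fake"
print(classify(0.2, 0.1))  # both streams benign -> "real"
```

Late fusion is only one design point; many of the papers below instead fuse features earlier in the network, which lets the model exploit cross-modal inconsistencies directly.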
Papers
Region-Based Optimization in Continual Learning for Audio Deepfake Detection
Yujie Chen, Jiangyan Yi, Cunhang Fan, Jianhua Tao, Yong Ren, Siding Zeng, Chu Yuan Zhang, Xinrui Yan, Hao Gu, Jun Xue, Chenglong Wang, Zhao Lv, Xiaohui Zhang
Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models
Namhyuk Ahn, KiYoon Yoo, Wonhyuk Ahn, Daesik Kim, Seung-Hun Nam
What constitutes a Deep Fake? The blurry line between legitimate processing and manipulation under the EU AI Act
Kristof Meding, Christoph Sorge
FaceShield: Defending Facial Image against Deepfake Threats
Jaehwan Jeong, Sumin In, Sieun Kim, Hannie Shin, Jongheon Jeong, Sang Ho Yoon, Jaewook Chung, Sangpil Kim
Exploring the Robustness of AI-Driven Tools in Digital Forensics: A Preliminary Study
Silvia Lucia Sanna, Leonardo Regano, Davide Maiorca, Giorgio Giacinto
Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection
Delong Zhu, Yuezun Li, Baoyuan Wu, Jiaran Zhou, Zhibo Wang, Siwei Lyu