Deepfake Detection
Deepfake detection research aims to develop robust methods for identifying manipulated media, combating the spread of misinformation and fraudulent content. Current efforts focus on improving the generalization of detection models across diverse deepfake generation techniques. Common approaches employ architectures such as Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), incorporate multimodal (audio-visual) analysis, and leverage pre-trained models such as CLIP. This field is crucial for maintaining the integrity and security of digital media, with implications for law enforcement, cybersecurity, and the broader fight against disinformation.
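One common instantiation of the "leverage pre-trained models" idea is a linear probe: images are mapped to embeddings by a frozen encoder (e.g., CLIP's image tower), and only a lightweight real-vs-fake classifier is trained on top. The sketch below illustrates that pattern under stated assumptions: the `synthetic_embedding` function is a hypothetical stand-in for a real encoder call, and all names and constants are illustrative rather than taken from any of the papers listed here.

```python
# Minimal sketch: a logistic-regression probe over frozen image embeddings
# for real-vs-fake classification. In practice the embeddings would come
# from a pretrained encoder such as CLIP; here synthetic vectors stand in
# so the example stays self-contained. All names are illustrative.
import math
import random

random.seed(0)
DIM = 16  # embedding dimensionality (real CLIP embeddings are larger)

def synthetic_embedding(is_fake: bool) -> list:
    # Hypothetical stand-in for encoder(image); "fake" embeddings are
    # shifted along one axis so the two classes are separable.
    vec = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    if is_fake:
        vec[0] += 3.0
    return vec

def sigmoid(z: float) -> float:
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def train_probe(data, labels, epochs=200, lr=0.1):
    # Plain SGD on the binary log-loss; the encoder itself stays frozen.
    w = [0.0] * DIM
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Small synthetic training set: label 0 = real, 1 = fake.
labels = [i % 2 for i in range(200)]
data = [synthetic_embedding(bool(y)) for y in labels]
w, b = train_probe(data, labels)

def predict(x) -> int:
    return int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)

accuracy = sum(predict(x) == y for x, y in zip(data, labels)) / len(labels)
```

The design choice mirrors why CLIP features are attractive for generalization: the encoder's representation is fixed, so the probe cannot overfit to the low-level artifacts of one specific generation technique.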
Papers
Harder or Different? Understanding Generalization of Audio Deepfake Detection
Nicolas M. Müller, Nicholas Evans, Hemlata Tak, Philip Sperl, Konstantin Böttinger
AVFF: Audio-Visual Feature Fusion for Video Deepfake Detection
Trevine Oorloff, Surya Koppisetti, Nicolò Bonettini, Divyaraj Solanki, Ben Colman, Yaser Yacoob, Ali Shahriyari, Gaurav Bharaj
EEG-Features for Generalized Deepfake Detection
Arian Beckmann, Tilman Stephani, Felix Klotzsche, Yonghao Chen, Simon M. Hofmann, Arno Villringer, Michael Gaebler, Vadim Nikulin, Sebastian Bosse, Peter Eisert, Anna Hilsmann
A Timely Survey on Vision Transformer for Deepfake Detection
Zhikan Wang, Zhongyao Cheng, Jiajie Xiong, Xun Xu, Tianrui Li, Bharadwaj Veeravalli, Xulei Yang
PolyGlotFake: A Novel Multilingual and Multimodal DeepFake Dataset
Yang Hou, Haitao Fu, Chuankai Chen, Zida Li, Haoyu Zhang, Jianjun Zhao
In Anticipation of Perfect Deepfake: Identity-anchored Artifact-agnostic Detection under Rebalanced Deepfake Detection Protocol
Wei-Han Wang, Chin-Yuan Yeh, Hsi-Wen Chen, De-Nian Yang, Ming-Syan Chen
Exploring Self-Supervised Vision Transformers for Deepfake Detection: A Comparative Analysis
Huy H. Nguyen, Junichi Yamagishi, Isao Echizen