Facial Expression
Facial expression research aims to automatically recognize and understand human emotions from facial movements, enabling applications in human-computer interaction, mental health assessment, and related fields. Current research focuses on improving the accuracy and robustness of emotion recognition models, particularly under challenging conditions such as partial occlusion or limited training data. It often employs deep learning architectures such as Vision Transformers (ViTs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), together with techniques like data augmentation and multimodal fusion. These advances are driving progress in real-time emotion analysis, a better understanding of complex emotions, and the development of more accurate and fair facial analysis tools.
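The data augmentation mentioned above can be illustrated with a minimal sketch. This is not the pipeline of any paper listed below; the function name and parameter choices are illustrative. It applies two augmentations common in facial expression recognition: a random horizontal flip (valid because expressions are roughly label-preserving under mirroring) and a small brightness shift.

```python
import numpy as np

def augment_face(image, rng):
    """Toy augmentation for a facial expression recognition pipeline.

    `image` is an (H, W) or (H, W, C) float array with values in [0, 1];
    `rng` is a numpy random Generator. Returns a new augmented array.
    """
    out = image.copy()
    # Horizontal flip with probability 0.5: mirroring a face
    # generally does not change the expression label.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Small random brightness shift, clipped back into [0, 1]
    # to simulate lighting variation.
    shift = rng.uniform(-0.1, 0.1)
    out = np.clip(out + shift, 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
face = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # stand-in for a face crop
augmented = augment_face(face, rng)
print(augmented.shape)
```

Real systems apply many such label-preserving transforms (random crops, rotations, color jitter) per training epoch to improve robustness when labeled data is scarce.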
Papers
GoHD: Gaze-oriented and Highly Disentangled Portrait Animation with Rhythmic Poses and Realistic Expression
Ziqi Zhou, Weize Quan, Hailin Shi, Wei Li, Lili Wang, Dong-ming Yan
AFFAKT: A Hierarchical Optimal Transport based Method for Affective Facial Knowledge Transfer in Video Deception Detection
Zihan Ji, Xuetao Tian, Ye Liu
LokiTalk: Learning Fine-Grained and Generalizable Correspondences to Enhance NeRF-based Talking Head Synthesis
Tianqi Li, Ruobing Zheng, Bonan Li, Zicheng Zhang, Meng Wang, Jingdong Chen, Ming Yang
Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis
Tianqi Li, Ruobing Zheng, Minghui Yang, Jingdong Chen, Ming Yang