Facial Expression
Facial expression research aims to automatically recognize and understand human emotions from facial movements, enabling applications in human-computer interaction, mental health assessment, and related fields. Current work focuses on improving the accuracy and robustness of emotion recognition models, particularly under challenging conditions such as partial occlusion or limited data. Common approaches employ deep learning architectures such as Vision Transformers (ViTs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), along with techniques like data augmentation and multimodal fusion. These advances are driving progress in real-time emotion analysis, better modeling of complex emotions, and the development of more accurate and fair facial analysis tools.
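As a concrete illustration of the data-augmentation technique mentioned above, one common and cheap trick for face datasets is horizontal mirroring: because expressions are roughly left-right symmetric, a flipped face is a plausible new training sample with the same emotion label. This is a minimal sketch only; the function name and array shapes are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def augment_faces(images, labels):
    """Double a face-image dataset by horizontal mirroring.

    images: array of shape (N, H, W) or (N, H, W, C)
    labels: array of shape (N,)
    Returns the original and mirrored samples stacked together,
    with labels repeated to match.
    """
    flipped = images[:, :, ::-1]  # reverse the width axis
    return (np.concatenate([images, flipped], axis=0),
            np.concatenate([labels, labels], axis=0))

# Tiny synthetic batch: 4 grayscale 48x48 "faces" with labels 0..3.
rng = np.random.default_rng(0)
X = rng.random((4, 48, 48))
y = np.arange(4)
X_aug, y_aug = augment_faces(X, y)
print(X_aug.shape, y_aug.shape)  # (8, 48, 48) (8,)
```

In practice such flips are combined with small rotations, crops, and color jitter, and applied on the fly during training rather than by materializing the doubled dataset.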
Papers
Joker: Conditional 3D Head Synthesis with Extreme Facial Expressions
Malte Prinzler, Egor Zakharov, Vanessa Sklyarova, Berna Kabadayi, Justus Thies
GReFEL: Geometry-Aware Reliable Facial Expression Learning under Bias and Imbalanced Data Distribution
Azmine Toushik Wasi, Taki Hasan Rafi, Raima Islam, Karlo Serbetar, Dong Kyu Chae
Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures
Marcel C. Bühler, Gengyan Li, Erroll Wood, Leonhard Helminger, Xu Chen, Tanmay Shah, Daoye Wang, Stephan Garbin, Sergio Orts-Escolano, Otmar Hilliges, Dmitry Lagun, Jérémy Riviere, Paulo Gotardo, Thabo Beeler, Abhimitra Meka, Kripasindhu Sarkar
Decoding Emotions: Unveiling Facial Expressions through Acoustic Sensing with Contrastive Attention
Guangjing Wang, Juexing Wang, Ce Zhou, Weikang Ding, Huacheng Zeng, Tianxing Li, Qiben Yan
Data Augmentation for 3DMM-based Arousal-Valence Prediction for HRI
Christian Arzate Cruz, Yotam Sechayk, Takeo Igarashi, Randy Gomez