Facial Expression
Facial expression research aims to automatically recognize and interpret human emotions from facial movements, with applications in human-computer interaction, mental health assessment, and related fields. Current work focuses on improving the accuracy and robustness of emotion recognition models under challenging conditions such as partial occlusion or limited training data. Common approaches employ deep learning architectures such as Vision Transformers (ViTs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), together with techniques like data augmentation and multimodal fusion. These advances are driving progress in real-time emotion analysis, a better understanding of complex emotions, and the development of more accurate and fair facial analysis tools.
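To make the fusion idea concrete, here is a minimal NumPy sketch of multi-head cross attention, one common way to fuse two modalities (e.g. facial features attending to features from another signal). All shapes, names, and the random projections are illustrative assumptions, not the method of any specific paper listed below; a trained model would learn the projection weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(query, context, num_heads, rng):
    """Scaled dot-product cross attention: `query` tokens attend to `context` tokens.

    query:   (n_q, d_model)  e.g. facial feature tokens (assumed shape)
    context: (n_c, d_model)  e.g. tokens from another modality (assumed shape)
    """
    n_q, d_model = query.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    # Randomly initialised projections stand in for learned weights.
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                     for _ in range(3))
    # Project and split into heads: (num_heads, n_tokens, d_head).
    q = (query @ w_q).reshape(n_q, num_heads, d_head).transpose(1, 0, 2)
    k = (context @ w_k).reshape(-1, num_heads, d_head).transpose(1, 0, 2)
    v = (context @ w_v).reshape(-1, num_heads, d_head).transpose(1, 0, 2)
    # Attention weights over context tokens, scaled by sqrt(d_head).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, n_q, n_c)
    out = softmax(scores) @ v                             # (heads, n_q, d_head)
    # Merge heads back into a single feature vector per query token.
    return out.transpose(1, 0, 2).reshape(n_q, d_model)

rng = np.random.default_rng(0)
face = rng.standard_normal((16, 64))    # 16 facial tokens, 64-dim (illustrative)
other = rng.standard_normal((10, 64))   # 10 tokens from a second modality
fused = multi_head_cross_attention(face, other, num_heads=4, rng=rng)
print(fused.shape)  # (16, 64): one fused vector per facial token
```

The fused representation keeps one vector per facial token, each now a context-weighted mixture of the other modality's features; stacking such blocks (plus learned weights and normalization) yields the transformer-style fusion architectures used in this line of work.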
Papers
An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis
Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin
A Naturalistic Database of Thermal Emotional Facial Expressions and Effects of Induced Emotions on Memory
Anna Esposito, Vincenzo Capuano, Jiri Mekyska, Marcos Faundez-Zanuy
Multi-modal Multi-label Facial Action Unit Detection with Transformer
Lingfeng Wang, Shisen Wang, Jin Qi
Facial Expression Recognition based on Multi-head Cross Attention Network
Jae-Yeop Jeong, Yeong-Gi Hong, Daun Kim, Yuchul Jung, Jin-Woo Jeong
Privileged Attribution Constrained Deep Networks for Facial Expression Recognition
Jules Bonnard, Arnaud Dapogny, Ferdinand Dhombres, Kévin Bailly