Facial Expression
Facial expression research aims to automatically recognize and understand human emotions from facial movements, enabling applications in human-computer interaction, mental health assessment, and related fields. Current research focuses on improving the accuracy and robustness of emotion recognition models, particularly under challenging conditions such as partial occlusion or limited training data. Common approaches employ deep learning architectures such as Vision Transformers (ViTs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), together with techniques like data augmentation and multimodal fusion. These advances are driving progress in real-time emotion analysis, better handling of complex and compound expressions, and the development of more accurate and fair facial analysis tools.
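To make the recurring setup in these papers concrete, below is a minimal, illustrative sketch of a facial expression recognition baseline: fine-tuning a pretrained ViT backbone on face crops labeled with a basic-emotion set. It is not taken from any of the papers listed; the seven-class label set, the `data/train/<emotion>/` folder layout, the choice of ViT-B/16 from torchvision, and the hyperparameters are all assumptions made for the example.

```python
# Illustrative baseline (assumptions noted above): fine-tune a pretrained
# ViT-B/16 for 7-class facial expression recognition with light augmentation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed basic-emotion label set; real datasets may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# ImageNet-style preprocessing plus horizontal-flip augmentation.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<emotion>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ViT-B/16 backbone; swap the classification head for 7 emotions.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
model.heads.head = nn.Linear(model.heads.head.in_features, len(EMOTIONS))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # small epoch count purely for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Methods in the listed papers build on this kind of baseline with additional components such as masked autoencoding, temporal modeling over video frames, or multimodal fusion.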
Papers
eMotion-GAN: A Motion-based GAN for Photorealistic and Facial Expression Preserving Frontal View Synthesis
Omar Ikne, Benjamin Allaert, Ioan Marius Bilasco, Hazem Wannous
Facial Features Integration in Last Mile Delivery Robots
Delgermaa Gankhuyag, Stephanie Groiß, Lena Schwamberger, Özge Talay, Cristina Olaverri-Monreal
FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features
Andre Rochow, Max Schwarz, Sven Behnke
EmoVOCA: Speech-Driven Emotional 3D Talking Heads
Federico Nocentini, Claudio Ferrari, Stefano Berretti
Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition
Bach Nguyen-Xuan, Thien Nguyen-Hoang, Thanh-Huy Nguyen, Nhu Tai-Do
Driving Animatronic Robot Facial Expression From Speech
Boren Li, Hang Li, Hangxin Liu
Exploring Facial Expression Recognition through Semi-Supervised Pretraining and Temporal Modeling
Jun Yu, Zhihong Wei, Zhongpeng Cai, Gongpeng Zhao, Zerui Zhang, Yongqi Wang, Guochen Xie, Jichao Zhu, Wangyuan Zhu
HSEmotion Team at the 6th ABAW Competition: Facial Expressions, Valence-Arousal and Emotion Intensity Prediction
Andrey V. Savchenko
Zero-shot Compound Expression Recognition with Visual Language Model at the 6th ABAW Challenge
Jiahe Wang, Jiale Huang, Bingzhao Cai, Yifan Cao, Xin Yun, Shangfei Wang