Facial Expression
Facial expression research aims to automatically recognize and understand human emotions from facial movements, enabling applications in human-computer interaction, mental health assessment, and other fields. Current research focuses on improving the accuracy and robustness of emotion recognition models, particularly under challenging conditions such as partial occlusion or limited training data. It commonly employs deep learning architectures such as Vision Transformers (ViTs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), along with techniques like data augmentation and multimodal fusion. These advances are driving progress in real-time emotion analysis, the understanding of complex emotions, and the development of more accurate and fair facial analysis tools.
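To make the data-augmentation technique mentioned above concrete, the minimal sketch below shows label-preserving horizontal flipping, a standard augmentation for facial expression datasets. The nested-list image format and the function names (`hflip`, `augment`) are illustrative assumptions for this sketch, not the approach of any particular paper listed here.

```python
# Minimal sketch of label-preserving data augmentation for facial
# expression recognition. Horizontal flipping is widely used because
# a mirrored face carries the same expression label. The tiny
# nested-list "image" format is illustrative only; real pipelines
# operate on tensors or image objects.

def hflip(image):
    """Flip an image (a list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]

def augment(dataset):
    """Return the original samples plus their mirrored copies.

    dataset: list of (image, label) pairs. Labels are kept unchanged
    because mirroring a face does not alter the expressed emotion.
    """
    return dataset + [(hflip(img), label) for img, label in dataset]

# Toy 2x3 "image" standing in for a cropped face region.
face = [[1, 2, 3],
        [4, 5, 6]]
data = [(face, "happy")]
augmented = augment(data)
# augmented now holds the original and the flipped sample,
# both labeled "happy", doubling the effective dataset size.
```

In practice, such flips are usually applied on the fly during training rather than by materializing a doubled dataset, but the labeling logic is the same.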
Papers
ExpLLM: Towards Chain of Thought for Facial Expression Recognition
Xing Lan, Jian Xue, Ji Qi, Dongmei Jiang, Ke Lu, Tat-Seng Chua
How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations
Florian Blume, Runfeng Qu, Pia Bideau, Martin Maier, Rasha Abdel Rahman, Olaf Hellwich
EmoFace: Emotion-Content Disentangled Speech-Driven 3D Talking Face with Mesh Attention
Yihong Lin, Liang Peng, Jianqiao Hu, Xiandong Li, Wenxiong Kang, Songju Lei, Xianjia Wu, Huang Xu
EMO-LLaMA: Enhancing Facial Emotion Understanding with Instruction Tuning
Bohao Xing, Zitong Yu, Xin Liu, Kaishen Yuan, Qilang Ye, Weicheng Xie, Huanjing Yue, Jingyu Yang, Heikki Kälviäinen