Expression Recognition
Facial expression recognition (FER) aims to automatically identify human emotions from facial images or videos, with ongoing work seeking to improve both accuracy and interpretability. Current research emphasizes robust models, including convolutional neural networks (CNNs), vision transformers (ViTs), and generative adversarial networks (GANs), often combined with techniques such as multi-task learning, self-supervised learning, and attention mechanisms to address challenges like pose variation, data imbalance, and noisy labels. The field is significant for its applications in healthcare (e.g., depression detection), human-computer interaction, and security, driving efforts to build more accurate, efficient, and unbiased FER systems. There is also a growing focus on improving the interpretability of these systems and mitigating demographic biases.
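As a rough illustration of the kind of model this line of research builds on, the sketch below shows a minimal ViT-based FER classifier with a simple attention-pooling head. It is not taken from any of the listed papers; the backbone name, the use of the timm library, and the seven emotion classes are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): a pretrained ViT backbone with an
# attention-pooling head for 7-class facial expression recognition.
import torch
import torch.nn as nn
import timm  # assumed available; any ViT backbone exposing patch tokens would do


class AttentionPoolFER(nn.Module):
    def __init__(self, backbone: str = "vit_base_patch16_224", num_classes: int = 7):
        super().__init__()
        # Pretrained ViT with its classification head removed.
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        dim = self.encoder.num_features
        # Learn a scalar attention score per token, then pool tokens by those weights.
        self.attn = nn.Linear(dim, 1)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.encoder.forward_features(x)           # (B, N, D) token embeddings
        weights = torch.softmax(self.attn(tokens), dim=1)   # (B, N, 1) attention weights
        pooled = (weights * tokens).sum(dim=1)              # attention-weighted pooling
        return self.head(pooled)                            # (B, num_classes) expression logits


if __name__ == "__main__":
    model = AttentionPoolFER()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 7])
```

In practice, the papers below extend this basic recipe in different directions, e.g. masked-autoencoder pretraining, semi-supervised pretraining with temporal modeling, or audio-visual fusion for valence-arousal estimation.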
Papers
Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition
Bach Nguyen-Xuan, Thien Nguyen-Hoang, Thanh-Huy Nguyen, Nhu Tai-Do
SUN Team's Contribution to ABAW 2024 Competition: Audio-visual Valence-Arousal Estimation and Expression Recognition
Denis Dresvyanskiy, Maxim Markitantov, Jiawei Yu, Peitong Li, Heysem Kaya, Alexey Karpov
Exploring Facial Expression Recognition through Semi-Supervised Pretraining and Temporal Modeling
Jun Yu, Zhihong Wei, Zhongpeng Cai, Gongpeng Zhao, Zerui Zhang, Yongqi Wang, Guochen Xie, Jichao Zhu, Wangyuan Zhu
Zero-shot Compound Expression Recognition with Visual Language Model at the 6th ABAW Challenge
Jiahe Wang, Jiale Huang, Bingzhao Cai, Yifan Cao, Xin Yun, Shangfei Wang