Expression Recognition
Facial expression recognition (FER) aims to automatically identify human emotions from facial images or videos. Current research emphasizes robust models, including convolutional neural networks (CNNs), vision transformers (ViTs), and generative adversarial networks (GANs), often combined with techniques such as multi-task learning, self-supervised learning, and attention mechanisms to handle challenges like pose variation, data imbalance, and noisy labels. The field is significant for its potential applications in healthcare (e.g., depression detection), human-computer interaction, and security, driving efforts to create more accurate, efficient, and unbiased FER systems. There is also a growing focus on improving the interpretability of these systems and on mitigating demographic biases.
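The attention mechanisms mentioned above let a model weight informative facial regions (e.g., eyes, mouth) more heavily than others. As a minimal sketch only, here is scaled dot-product self-attention in NumPy applied to hypothetical facial-patch embeddings; the shapes and names are illustrative assumptions, not taken from any of the listed papers:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Self-attention over patch features; q, k, v have shape (num_patches, dim)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # pairwise patch affinities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ v                            # attention-weighted mix of patch features

# Illustrative: 16 patch embeddings of dimension 32, e.g. from a cropped face image
rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 32))
out = scaled_dot_product_attention(patches, patches, patches)
print(out.shape)  # same shape as the input patches: (16, 32)
```

In ViT-based FER models, blocks like this are stacked with learned query/key/value projections, so the network can learn which facial regions to emphasize for each expression class.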
Papers
Learning from Synthetic Data: Facial Expression Classification based on Ensemble of Multi-task Networks
Jae-Yeop Jeong, Yeong-Gi Hong, JiYeon Oh, Sumin Hong, Jin-Woo Jeong, Yuchul Jung
AU-Supervised Convolutional Vision Transformers for Synthetic Facial Expression Recognition
Shuyi Mao, Xinpeng Li, Junyao Chen, Xiaojiang Peng
Hand-Assisted Expression Recognition Method from Synthetic Images at the Fourth ABAW Challenge
Xiangyu Miao, Jiahe Wang, Yanan Chang, Yi Wu, Shangfei Wang