Expression Recognition
Facial expression recognition (FER) aims to automatically identify human emotions from facial images or videos, with ongoing work focused on improving both accuracy and interpretability. Current research emphasizes robust models, including convolutional neural networks (CNNs), vision transformers (ViTs), and generative adversarial networks (GANs), often incorporating techniques such as multi-task learning, self-supervised learning, and attention mechanisms to handle challenges like pose variation, data imbalance, and noisy labels. The field is significant for its potential applications across domains including healthcare (e.g., depression detection), human-computer interaction, and security, driving efforts to create more accurate, efficient, and unbiased FER systems. There is also a growing focus on improving the interpretability of these systems and mitigating demographic biases.
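To make the common model recipe concrete, below is a minimal, illustrative sketch (not taken from any of the listed papers) of a FER classifier: a CNN backbone whose spatial feature map is pooled with a simple learned attention layer before classification. It assumes PyTorch/torchvision; the class name, the 7-class label set, and the ResNet-18 backbone are placeholder choices for illustration only.

```python
# Hedged sketch: CNN backbone + spatial attention pooling for FER.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class AttentionFER(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        backbone = models.resnet18(weights=None)                         # CNN feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # keep the spatial map
        self.attn = nn.Conv2d(512, 1, kernel_size=1)                     # per-location attention scores
        self.classifier = nn.Linear(512, num_classes)                    # expression logits

    def forward(self, x):
        f = self.features(x)                                  # (B, 512, H, W) feature map
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)    # (B, 1, H*W) attention weights
        pooled = (f.flatten(2) * w).sum(dim=-1)               # attention-weighted pooling -> (B, 512)
        return self.classifier(pooled)

if __name__ == "__main__":
    model = AttentionFER()
    logits = model(torch.randn(2, 3, 224, 224))               # two dummy face crops
    print(logits.shape)                                       # torch.Size([2, 7])
```

The attention map here also offers a rough hook for interpretability, since the per-location weights indicate which facial regions drove the prediction; published systems typically use more elaborate mechanisms (e.g., action-unit cues, as in the paper listed below).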
Papers
AI-Driven Early Mental Health Screening with Limited Data: Analyzing Selfies of Pregnant Women
Gustavo A. Basílio, Thiago B. Pereira, Alessandro L. Koerich, Ludmila Dias, Maria das Graças da S. Teixeira, Rafael T. Sousa, Wilian H. Hisatugu, Amanda S. Mota, Anilton S. Garcia, Marco Aurélio K. Galletta, Hermano Tavares, Thiago M. Paixão
xLSTM-FER: Enhancing Student Expression Recognition with Extended Vision Long Short-Term Memory Network
Qionghao Huang, Jili Chen
Spatial Action Unit Cues for Interpretable Deep Facial Expression Recognition
Soufiane Belharbi, Marco Pedersoli, Alessandro Lameiras Koerich, Simon Bacon, Eric Granger
Decoding Emotions: Unveiling Facial Expressions through Acoustic Sensing with Contrastive Attention
Guangjing Wang, Juexing Wang, Ce Zhou, Weikang Ding, Huacheng Zeng, Tianxing Li, Qiben Yan