Facial Action Unit
Facial Action Units (AUs) are the fundamental building blocks of facial expressions: individual muscle movements coded in the Facial Action Coding System (FACS). Current research focuses on improving the accuracy and robustness of AU detection and intensity estimation. Deep learning architectures such as convolutional neural networks, vision transformers, and large language models are common, often combined with contrastive learning, attention mechanisms, and graph neural networks to model relationships among AUs and their temporal dynamics. This work advances affective computing by enabling more accurate and nuanced analysis of human emotion in applications ranging from mental health assessment to human-computer interaction. Developing detection methods that stay robust and generalize under challenging conditions, such as occlusion or low-quality video, remains a key area of ongoing investigation.
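Because AUs can co-occur, detection is typically framed as a multi-label binary classification problem with one output per AU. The following is a minimal sketch of that framing, assuming PyTorch; the small backbone, the choice of twelve AUs, and the input size are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal multi-label AU detection sketch (assumes PyTorch is installed).
# The backbone, AU list, and 112x112 input size are illustrative choices.
import torch
import torch.nn as nn

# Twelve AUs commonly annotated in AU benchmarks (illustrative subset).
AU_LABELS = [1, 2, 4, 6, 7, 10, 12, 14, 15, 17, 23, 24]

class AUDetector(nn.Module):
    """Small CNN producing one independent sigmoid logit per AU."""
    def __init__(self, num_aus: int = len(AU_LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.head = nn.Linear(64, num_aus)      # one logit per AU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)                     # raw logits

model = AUDetector()
images = torch.randn(8, 3, 112, 112)            # dummy batch of face crops
targets = torch.randint(0, 2, (8, len(AU_LABELS))).float()  # 0/1 per AU

# Each AU is an independent binary problem, so binary cross-entropy
# is applied per output rather than a single softmax over classes.
loss = nn.BCEWithLogitsLoss()(model(images), targets)
loss.backward()

with torch.no_grad():
    probs = torch.sigmoid(model(images))        # per-AU activation probabilities
```

More elaborate approaches replace the CNN backbone with a vision transformer, or add a graph neural network over the per-AU outputs to exploit known AU co-occurrence patterns; the multi-label loss structure stays the same.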
Papers
Multi-modal Multi-label Facial Action Unit Detection with Transformer
Lingfeng Wang, Shisen Wang, Jin Qi
Multiple Emotion Descriptors Estimation at the ABAW3 Challenge
Didan Deng
Random Forest Regression for continuous affect using Facial Action Units
Saurabh Hinduja, Shaun Canavan, Liza Jivnani, Sk Rahatul Jannat, V Sri Chakra Kumar
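To illustrate the idea behind the last paper above, detected AU intensities can serve as input features for regressing continuous affect dimensions such as valence and arousal. The sketch below, assuming scikit-learn and entirely synthetic data, shows the general shape of such a pipeline; it is not a reproduction of the paper's setup.

```python
# Hedged sketch: random forest regression from AU intensity features to a
# continuous affect dimension. Data, scales, and split are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(500, 12))   # 12 AU intensities per frame (0-5 scale)
y = rng.uniform(-1, 1, size=500)        # dummy continuous valence in [-1, 1]

# One forest per affect dimension; the prediction averages over decision trees.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X[:400], y[:400])
pred = forest.predict(X[400:])          # valence estimates for held-out frames
```

In practice the AU features would come from a detector like the one sketched earlier, and a separate regressor (or a multi-output forest) would be fit for each affect dimension.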