Action Unit
Facial action unit (AU) detection aims to automatically identify and classify the individual muscle movements that constitute facial expressions. Current research focuses on improving detection accuracy and robustness with deep learning architectures such as vision transformers and Siamese networks, often combined with contrastive learning, causal inference, and attention mechanisms to address data scarcity, inter-subject variability, and complex AU interactions. These advances are crucial for affective computing, human-computer interaction, and clinical settings where accurate emotion recognition is vital.
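At its core, AU detection is usually framed as multi-label binary classification: each AU can be active or inactive independently of the others. The sketch below illustrates that framing in PyTorch with a toy backbone and per-AU sigmoid outputs; the backbone design, the choice of 12 AUs, and all hyperparameters are illustrative assumptions, not the method of any specific paper listed here.

```python
# Minimal, illustrative sketch of multi-label AU detection (assumptions only).
import torch
import torch.nn as nn

NUM_AUS = 12  # e.g., roughly the number of AUs annotated in common benchmarks (assumption)

class AUDetector(nn.Module):
    """Image features -> one independent logit per action unit."""
    def __init__(self, feat_dim: int = 512, num_aus: int = NUM_AUS):
        super().__init__()
        # Stand-in backbone; in practice a ViT or face-pretrained CNN would be used.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_aus)  # one logit per AU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))  # raw logits of shape (B, num_aus)

model = AUDetector()
# BCE-with-logits treats each AU as an independent binary label,
# matching the multi-label nature of AU detection.
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 112, 112)                 # dummy face crops
labels = torch.randint(0, 2, (8, NUM_AUS)).float()   # dummy AU annotations
loss = criterion(model(images), labels)
probs = torch.sigmoid(model(images))                 # per-AU activation probabilities
```

The contrastive, causal, and attention-based techniques mentioned above typically modify how the backbone features are learned or how AU interdependencies are modeled, while keeping this multi-label output structure.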