Expression Classification
Expression classification is the automated recognition of human emotion from modalities such as facial expressions, voice, and body language, with the goal of building systems that can understand and respond to human affect. Current research focuses on improving accuracy and robustness, especially in challenging "in-the-wild" settings, using techniques such as multimodal fusion, ensemble learning, transformer networks, and self-supervised learning to cope with limited labeled data and improve generalization. These advances have significant implications for human-computer interaction, mental health monitoring, and other applications that require a nuanced understanding of human emotion.
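As a rough illustration of the transformer-based audiovisual fusion that several of the papers below employ, the following PyTorch sketch projects per-frame visual and audio features into a shared space, fuses them, and runs a temporal transformer encoder before classification. All module names, feature dimensions, and the eight-class label set are illustrative assumptions, not details taken from any of the listed papers.

```python
# Minimal sketch of transformer-based audiovisual fusion for expression
# classification. Dimensions and the 8-class label set are assumptions.
import torch
import torch.nn as nn


class AudioVisualFusionClassifier(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=128, d_model=256,
                 num_layers=4, num_heads=4, num_classes=8):
        super().__init__()
        # Project each modality's per-frame features into a shared space.
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        # Temporal transformer over the fused frame sequence.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (batch, frames, visual_dim), e.g. face-CNN embeddings
        # audio_feats:  (batch, frames, audio_dim), e.g. log-mel or wav2vec features
        fused = self.visual_proj(visual_feats) + self.audio_proj(audio_feats)
        encoded = self.encoder(fused)        # (batch, frames, d_model)
        pooled = encoded.mean(dim=1)         # average over time
        return self.classifier(pooled)       # (batch, num_classes) logits


# Usage on random features for two 16-frame clips:
model = AudioVisualFusionClassifier()
logits = model(torch.randn(2, 16, 512), torch.randn(2, 16, 128))
print(logits.shape)  # torch.Size([2, 8])
```

Real systems typically differ in how the modalities are aligned and pooled (e.g. cross-attention instead of additive fusion), but the overall pattern of per-modality feature extraction followed by a shared transformer encoder is representative.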
Papers
Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Yan Zhang, Richang Hong, Meng Wang
Facial Affect Recognition based on Transformer Encoder and Audiovisual Fusion for the ABAW5 Challenge
Ziyang Zhang, Liuwei An, Zishun Cui, Ao Xu, Tengteng Dong, Yueqi Jiang, Jingyi Shi, Xin Liu, Xiao Sun, Meng Wang
Facial Affective Behavior Analysis Method for 5th ABAW Competition
Shangfei Wang, Yanan Chang, Yi Wu, Xiangyu Miao, Jiaqiang Wu, Zhouan Zhu, Jiahe Wang, Yufei Xiao