Automatic Emotion Recognition
Automatic emotion recognition (AER) aims to identify human emotions from modalities such as speech, facial expressions, and physiological signals. Current research relies heavily on deep learning, with a focus on multimodal approaches that fuse information from different sources and on architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and graph neural networks (GNNs) for feature extraction and classification. The field matters for applications in human-computer interaction, healthcare, and other areas that require an understanding of human affect, and ongoing work aims to improve the accuracy, robustness, and explainability of AER systems, particularly in real-world, "in-the-wild" settings.
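To make the multimodal fusion idea concrete, the following is a minimal PyTorch sketch of a late-fusion classifier that combines a CNN branch for facial-expression frames with a GRU branch for speech features. The input shapes, feature dimensions, and number of emotion classes are illustrative assumptions, not a description of any specific published system.

```python
# Minimal sketch of multimodal late fusion for emotion recognition (illustrative only).
# Assumptions: face inputs are 3x64x64 images, speech inputs are sequences of 40-dim
# MFCC frames, and there are 7 emotion classes; these choices are not from a specific paper.
import torch
import torch.nn as nn

class FusionAER(nn.Module):
    def __init__(self, num_classes=7, mfcc_dim=40, hidden=128):
        super().__init__()
        # CNN branch: extracts a fixed-size embedding from a facial-expression frame.
        self.face_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # RNN branch: summarizes a variable-length speech feature sequence.
        self.speech_rnn = nn.GRU(mfcc_dim, hidden, batch_first=True)
        # Late fusion: concatenate the two embeddings, then classify into emotions.
        self.classifier = nn.Sequential(
            nn.Linear(32 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, face, speech):
        face_emb = self.face_cnn(face)                 # (batch, 32)
        _, h = self.speech_rnn(speech)                 # h: (1, batch, hidden)
        fused = torch.cat([face_emb, h.squeeze(0)], dim=1)
        return self.classifier(fused)                  # emotion logits

# Usage with dummy inputs: a batch of 4 face images and 4 speech sequences of 100 frames.
model = FusionAER()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 100, 40))
print(logits.shape)  # torch.Size([4, 7])
```

This sketch uses late (feature-level) fusion for simplicity; transformer- or GNN-based systems instead model cross-modal interactions directly, but the overall pattern of per-modality encoders feeding a shared classifier is the same.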