Emotion Labeling
Emotion labeling across modalities (text, speech, images, and video) is the task of automatically identifying and classifying emotional states from data, with the aim of improving human-computer interaction and related applications. Current research emphasizes multimodal fusion, often employing transformer networks and contrastive learning to combine information across modalities and to address challenges such as data imbalance and label ambiguity. The field is central to affective computing, enabling more nuanced and empathetic AI systems in areas such as mental health analysis, personalized education, and human-robot interaction.
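To make the contrastive-learning idea above concrete, below is a minimal PyTorch sketch, not drawn from any of the listed papers: the encoder architecture, feature dimensions (768-d text, 512-d audio vectors), and the symmetric InfoNCE loss are all illustrative assumptions about how paired text and audio from the same utterance might be aligned in a shared emotion-embedding space.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects modality-specific features into a shared embedding space.
    (Hypothetical helper for illustration; dimensions are assumptions.)"""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products act as cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z_text, z_audio, temperature: float = 0.07):
    """Symmetric InfoNCE: matched text/audio pairs are positives,
    all other pairs in the batch serve as negatives."""
    logits = z_text @ z_audio.T / temperature      # (B, B) similarity matrix
    targets = torch.arange(z_text.size(0))         # diagonal = positive pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random tensors standing in for, e.g., BERT text features
# (768-d) and wav2vec-style audio features (512-d).
text_enc, audio_enc = ModalityEncoder(768), ModalityEncoder(512)
text_feats, audio_feats = torch.randn(32, 768), torch.randn(32, 512)
loss = contrastive_loss(text_enc(text_feats), audio_enc(audio_feats))
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")

Normalizing embeddings makes the similarity a cosine score, and the low temperature sharpens the softmax over in-batch negatives; this CLIP-style recipe is one common way contrastive learning is adapted for cross-modal emotion alignment.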
Papers
Human-LLM Collaborative Construction of a Cantonese Emotion Lexicon
Yusong Zhang, Dong Dong, Chi-tim Hung, Leonard Heyerdahl, Tamara Giles-Vernick, Eng-kiong Yeoh
Leveraging LLM Embeddings for Cross Dataset Label Alignment and Zero Shot Music Emotion Prediction
Renhang Liu, Abhinaba Roy, Dorien Herremans
InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-task LLMs Framework
Shanglin Lei, Guanting Dong, Xiaoping Wang, Keheng Wang, Sirui Wang
Personalization of Affective Models to Enable Neuropsychiatric Digital Precision Health Interventions: A Feasibility Study
Ali Kargarandehkordi, Matti Kaisti, Peter Washington