Paper ID: 2208.00877

Self-supervised Group Meiosis Contrastive Learning for EEG-Based Emotion Recognition

Haoning Kan, Jiale Yu, Jiajin Huang, Zihe Liu, Haiyan Zhou

The progress of EEG-based emotion recognition has received widespread attention from the fields of human-machine interaction and cognitive science in recent years. However, how to recognize emotions with limited labels has become a new research and application bottleneck. To address this issue, this paper proposes a Self-supervised Group Meiosis Contrastive learning framework (SGMC) based on the stimulus-consistent EEG signals of human beings. In the SGMC, a novel genetics-inspired data augmentation method, named Meiosis, is developed. It takes advantage of the alignment of stimuli among the EEG samples in a group to generate augmented groups by pairing, cross-exchanging, and separating. The model adopts a group projector to extract group-level feature representations from groups of EEG samples triggered by the same emotion video stimuli. Contrastive learning is then employed to maximize the similarity of the group-level representations of augmented groups sharing the same stimuli. The SGMC achieves state-of-the-art emotion recognition results on the publicly available DEAP dataset, with accuracies of 94.72% and 95.68% in the valence and arousal dimensions, and also reaches competitive performance on the public SEED dataset with an accuracy of 94.04%. It is worth noting that the SGMC shows significant performance even when using limited labels. Moreover, the results of feature visualization suggest that the model may have learned video-level emotion-related feature representations that improve emotion recognition. The effects of group size are further evaluated in the hyperparameter analysis. Finally, a control experiment and an ablation study are carried out to examine the rationality of the architecture. The code is provided publicly online.
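
To make the Meiosis augmentation described above concrete, the following is a minimal Python/NumPy sketch. It is not the authors' implementation: the function name meiosis_augment, the sample shape (group size x channels x time), and the choice of swapping the first half of the time axis between paired samples are assumptions used only for illustration of the pairing, cross-exchanging, and separating steps.

import numpy as np

def meiosis_augment(group, rng=None):
    # Hypothetical sketch of the Meiosis-style augmentation.
    # group: array of shape (G, C, T) -- G EEG samples (channels x time)
    # recorded under the same video stimulus.
    # Returns two augmented groups, each of shape (G//2, C, T),
    # that still correspond to the same stimulus.
    rng = np.random.default_rng() if rng is None else rng
    g, c, t = group.shape
    assert g % 2 == 0, "group size must be even for pairing"

    # Pairing: randomly pair samples within the group.
    order = rng.permutation(g)
    half_t = t // 2

    out_a, out_b = [], []
    for i in range(0, g, 2):
        x = group[order[i]].copy()
        y = group[order[i + 1]].copy()
        # Cross-exchanging: swap an assumed first half of the time axis
        # between the paired samples (crossover-style recombination).
        x[:, :half_t], y[:, :half_t] = y[:, :half_t].copy(), x[:, :half_t].copy()
        out_a.append(x)
        out_b.append(y)

    # Separating: split the recombined samples into two augmented groups,
    # which serve as a positive pair for group-level contrastive learning.
    return np.stack(out_a), np.stack(out_b)

Under these assumptions, the two returned groups would each be passed through the group projector, and the contrastive objective would pull their group-level representations together while pushing apart representations of groups built from different stimuli.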

Submitted: Jul 12, 2022