Paper ID: 2410.03696

Improving Emotion Recognition Accuracy with Personalized Clustering

Laura Gutierrez-Martin (1), Celia Lopez Ongil (1 and 2), Jose M. Lanza-Gutierrez (3), Jose A. Miranda Calero (4) ((1) Department of Electronics, Universidad Carlos III de Madrid, Spain, (2) Gender Studies Institute, Universidad Carlos III de Madrid, Spain, (3) Department of Computer Science, Universidad de Alcala, Spain, (4) Embedded Systems Laboratory, Ecole Polytechnique Federale de Lausanne, Switzerland)

Emotion recognition through artificial intelligence and smart sensing of physical and physiological signals (Affective Computing) is achieving very promising results in terms of accuracy, inference time, and user-independent models. Nevertheless, applications related to people's safety and well-being (sexual assault, gender-based violence, child and elder abuse, mental health, etc.) demand further improvements. For these applications, emotion detection must be performed by fast, discreet, and low-cost systems working in real time and in real life (wearable devices, wireless communications, battery-powered operation). Moreover, emotional reactions to violence are not the same in all people, so large general models cannot simply be applied to a multiuser protection system; instead, customized and simple AI models would be welcomed by health and social workers and law enforcement agents. Such customized models apply to clusters of subjects who share similar emotional reactions to external stimuli. This customization requires several steps: creating clusters of subjects with similar behavior, creating an AI model for every cluster, continually updating these models with new data, and enrolling new subjects in clusters when required. This work presents a methodology for clustering the compiled data (physical and physiological signals together with emotional labels), as well as a method for enrolling new subjects once the AI models have been generated. Experimental results show an improvement of 4% in accuracy and 3% in F1-score with respect to the general model, along with a 14% reduction in variability.
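The clustering and enrollment steps described above could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the use of k-means on per-subject summary features, the two-feature representation, and all names and values are assumptions introduced here for illustration only.

```python
# Hypothetical sketch: cluster subjects by summary physiological features,
# then enroll a new subject into the nearest existing cluster.
import numpy as np

rng = np.random.default_rng(0)

# Each subject summarized by an invented 2-D feature vector
# (e.g. mean heart rate, mean electrodermal activity).
subjects = np.vstack([
    rng.normal([60.0, 2.0], 0.5, size=(10, 2)),   # low-reactivity group
    rng.normal([90.0, 8.0], 0.5, size=(10, 2)),   # high-reactivity group
])

def kmeans(X, k, iters=20):
    """Plain k-means: returns cluster centroids and per-row labels."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every subject to every centroid, then reassign.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(subjects, k=2)
# A separate emotion-recognition model would then be trained per cluster.

def enroll(new_subject, centroids):
    """Assign a new subject to the nearest cluster (no retraining needed)."""
    return int(np.linalg.norm(centroids - new_subject, axis=1).argmin())

new_subject = np.array([88.0, 7.5])  # reacts like the high-reactivity group
cluster = enroll(new_subject, centroids)
```

In this sketch, enrollment reduces to a nearest-centroid assignment, so a new user can be routed to an already-trained cluster model without regenerating any model.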

Submitted: Sep 23, 2024