Paper ID: 2308.04091
From Unimodal to Multimodal: Improving sEMG-Based Pattern Recognition via Deep Generative Models
Wentao Wei, Linyan Ren
Objective: Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy than unimodal HGR systems. However, acquiring multimodal gesture recognition data typically requires users to wear additional sensors, thereby increasing hardware costs. Methods: This paper proposes a novel generative approach to improving Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. First, we trained a deep generative model, based on the intrinsic correlation between forearm sEMG and forearm IMU signals, to generate virtual forearm IMU signals from input forearm sEMG signals. The sEMG signals and virtual IMU signals were then fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. Results: We conducted evaluations on six databases containing both sEMG and IMU data: five publicly available databases and a self-collected database comprising 28 subjects performing 38 gestures. The results show that the proposed approach significantly outperforms the sEMG-based unimodal HGR approach (with accuracy increases of 2.15%-13.10%). Moreover, when using virtual Acceleration (ACC) signals, it achieves accuracy levels closely matching those of multimodal HGR. Conclusion: These results demonstrate that incorporating virtual IMU signals generated by deep generative models can significantly improve the accuracy of sEMG-based HGR. Significance: The proposed approach represents a successful attempt to bridge the gap between unimodal and multimodal HGR without additional sensor hardware, which can help promote the development of natural and cost-effective myoelectric interfaces in the biomedical engineering field.
Submitted: Aug 8, 2023
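The abstract does not specify the architectures of the generative model or the multimodal CNN, so the PyTorch sketch below is purely illustrative of the two-stage pipeline it describes: a generator maps sEMG windows to virtual IMU (e.g., ACC) windows, and a two-branch CNN fuses the real sEMG with the generated signal for classification. All layer choices, channel counts, and window sizes (8 sEMG channels, 3 ACC axes, 200-sample windows) are assumptions; only the 38-gesture output matches the paper's collected database. One plausible training scheme, also an assumption, is to regress the generator's output against real IMU recordings (e.g., with an MSE loss) before using it at inference time in place of a physical sensor.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: a window of 8-channel sEMG (200 samples) is mapped to a
# window of 3-axis virtual ACC (200 samples). All names and sizes are illustrative.

class VirtualIMUGenerator(nn.Module):
    """1-D encoder-style network mapping sEMG windows to virtual IMU windows.

    A stand-in for the paper's deep generative model, whose exact form the
    abstract does not specify.
    """
    def __init__(self, emg_ch=8, imu_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(emg_ch, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, imu_ch, kernel_size=5, padding=2),
        )

    def forward(self, emg):      # emg: (batch, emg_ch, time)
        return self.net(emg)     # virtual IMU: (batch, imu_ch, time)


class MultimodalCNN(nn.Module):
    """Two-stream CNN: one branch per modality, features fused before the classifier."""
    def __init__(self, emg_ch=8, imu_ch=3, n_gestures=38):
        super().__init__()

        def branch(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (batch, 64)
            )

        self.emg_branch = branch(emg_ch)
        self.imu_branch = branch(imu_ch)
        self.classifier = nn.Linear(64 + 64, n_gestures)

    def forward(self, emg, imu):
        feats = torch.cat([self.emg_branch(emg), self.imu_branch(imu)], dim=1)
        return self.classifier(feats)


# Usage: generate virtual IMU from sEMG alone, then classify with both streams.
gen = VirtualIMUGenerator()
clf = MultimodalCNN()
emg = torch.randn(4, 8, 200)              # a batch of sEMG windows
with torch.no_grad():
    virtual_imu = gen(emg)                # no physical IMU sensor required
    logits = clf(emg, virtual_imu)        # (4, 38) gesture scores
```

At deployment, only the sEMG sensor is needed: the generator supplies the second modality, which is what lets the pipeline approach multimodal accuracy without extra hardware.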