Paper ID: 2204.03873

Spatial Transformer Network on Skeleton-based Gait Recognition

Cun Zhang, Xing-Peng Chen, Guo-Qiang Han, Xiang-Jie Liu

Skeleton-based gait recognition models usually suffer from poor robustness: Rank-1 accuracy drops from about 90\% under normal walking conditions to 70\% when subjects walk in coats. In this work, we propose a robust, state-of-the-art skeleton-based gait recognition model, Gait-TR, which combines spatial transformer blocks with temporal convolutional networks. Gait-TR achieves substantial improvements over other skeleton-based gait models, with higher accuracy and better robustness on the well-known gait dataset CASIA-B. In particular, in the walking-with-coats condition, Gait-TR achieves a 90\% Rank-1 recognition accuracy, surpassing the best reported result of silhouette-based models, which usually have higher accuracy than skeleton-based gait recognition models. Moreover, our experiments on CASIA-B show that the spatial transformer extracts gait features from the human skeleton better than the widely used graph convolutional network.
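
The abstract describes a model built from spatial transformer blocks followed by temporal convolutions over skeleton sequences. Below is a minimal PyTorch sketch of one such block, not the authors' implementation: the tensor layout (batch, channels, frames, joints), head count, channel sizes, and kernel size are all illustrative assumptions.

```python
# Sketch of a Gait-TR-style block (illustrative, not the paper's code):
# multi-head self-attention across skeleton joints within each frame,
# then a 1D temporal convolution along the frame axis for each joint.
import torch
import torch.nn as nn

class SpatialTransformerTCNBlock(nn.Module):
    def __init__(self, channels: int, heads: int = 4, kernel_size: int = 9):
        super().__init__()
        # Spatial transformer: self-attention over the joint axis.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Temporal convolutional network: convolve over frames, per joint.
        self.tcn = nn.Conv2d(channels, channels, kernel_size=(kernel_size, 1),
                             padding=(kernel_size // 2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch N, channels C, frames T, joints V)
        n, c, t, v = x.shape
        # Fold frames into the batch so attention mixes joints only.
        h = x.permute(0, 2, 3, 1).reshape(n * t, v, c)       # (N*T, V, C)
        h = self.norm(h + self.attn(h, h, h, need_weights=False)[0])
        h = h.reshape(n, t, v, c).permute(0, 3, 1, 2)        # (N, C, T, V)
        # Aggregate each joint's features over time.
        return self.tcn(h)

# Example: 17-joint skeletons embedded to 64 channels, 60-frame clips.
block = SpatialTransformerTCNBlock(channels=64)
out = block(torch.randn(8, 64, 60, 17))   # -> torch.Size([8, 64, 60, 17])
```

Applying attention across joints (rather than a fixed graph adjacency, as in graph convolutional networks) lets every joint attend to every other joint with learned weights, which matches the abstract's claim that the spatial transformer extracts gait features better than graph convolution.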

Submitted: Apr 8, 2022