Paper ID: 2111.12290
Attention-based Dual-stream Vision Transformer for Radar Gait Recognition
Shiliang Chen, Wentao He, Jianfeng Ren, Xudong Jiang
Radar gait recognition is robust to lighting variations and poses less risk to privacy. Previous studies often utilize either spectrograms or cadence velocity diagrams: the former captures time-frequency patterns, while the latter encodes repetitive frequency patterns. In this work, a dual-stream neural network with attention-based fusion is proposed to fully aggregate the discriminant information from these two representations. Both streams are built on the Vision Transformer, which effectively captures the gait characteristics embedded in these representations. The proposed method is validated on a large benchmark dataset for radar gait recognition, where it significantly outperforms state-of-the-art solutions.
Submitted: Nov 24, 2021
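
To illustrate the kind of architecture the abstract describes, below is a minimal PyTorch sketch of a dual-stream Vision Transformer with attention-based fusion of a spectrogram stream and a cadence-velocity-diagram stream. The embedding dimension, depth, patch size, fusion layer, and class count are illustrative assumptions and do not reflect the authors' exact configuration.

```python
# Minimal sketch of a dual-stream ViT with attention-based fusion (assumed
# configuration, not the paper's exact model). Requires PyTorch >= 1.9.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split a 2D radar map into non-overlapping patches and embed them."""
    def __init__(self, img_size=224, patch_size=16, in_chans=1, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        n_patches = (img_size // patch_size) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))          # class token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # positional embedding

    def forward(self, x):
        x = self.proj(x).flatten(2).transpose(1, 2)              # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos


class ViTStream(nn.Module):
    """A small ViT-style encoder for one input representation."""
    def __init__(self, dim=256, depth=6, heads=8, **patch_kwargs):
        super().__init__()
        self.embed = PatchEmbed(dim=dim, **patch_kwargs)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        return self.encoder(self.embed(x))[:, 0]                 # class-token feature


class DualStreamViT(nn.Module):
    """Spectrogram stream + CVD stream, fused by attention over the two stream features."""
    def __init__(self, num_classes=10, dim=256):
        super().__init__()
        self.spec_stream = ViTStream(dim=dim)
        self.cvd_stream = ViTStream(dim=dim)
        self.fusion = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, spec, cvd):
        # Stack the two stream features, then let attention weigh their contributions.
        f = torch.stack([self.spec_stream(spec), self.cvd_stream(cvd)], dim=1)  # (B, 2, dim)
        fused, _ = self.fusion(f, f, f)
        return self.head(fused.mean(dim=1))


if __name__ == "__main__":
    model = DualStreamViT(num_classes=10)
    spec = torch.randn(2, 1, 224, 224)   # batch of spectrograms
    cvd = torch.randn(2, 1, 224, 224)    # batch of cadence velocity diagrams
    print(model(spec, cvd).shape)        # torch.Size([2, 10])
```

The key design point carried over from the abstract is that the two representations are encoded independently and only merged at the feature level, where an attention mechanism (here a self-attention layer over the pair of stream features, as one plausible instantiation) decides how much each representation contributes to the final prediction.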