Paper ID: 2209.03764

Deep Multi-Scale Representation Learning with Attention for Automatic Modulation Classification

Xiaowei Wu, Shengyun Wei, Yan Zhou

Deep learning methods that stack small convolutional filters are currently widely used for automatic modulation classification (AMC). In this report, we find empirically that using large kernel sizes in convolutional neural network based AMC is more efficient at extracting multi-scale features from the raw signal I/Q sequence data. In addition, the Squeeze-and-Excitation (SE) mechanism significantly helps AMC networks focus on the more informative features of the signal. We therefore propose a multi-scale feature network with large kernels and an SE mechanism (SE-MSFN). SE-MSFN achieves state-of-the-art classification performance on the well-known public RADIOML 2018.01A dataset, with an average classification accuracy of 64.50% (surpassing CLDNN by 1.42%), a maximum classification accuracy of 98.5%, and an average classification accuracy of 85.53% in the 0 dB to 10 dB SNR range (surpassing CLDNN by 2.85%). We also verify that ensemble learning can further improve classification performance. We hope this report provides a useful reference for developers and researchers in practical scenarios.
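Since only the abstract is available here, the exact SE-MSFN architecture is not specified. Below is a minimal PyTorch sketch of the two ingredients the abstract names: a Squeeze-and-Excitation block and a multi-scale convolution branch using large kernels on 1-D I/Q sequences. The module names, kernel sizes, channel counts, and the assumed input shape (batch, 2, length) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-Excitation over the channels of a 1-D feature map."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, length)
            w = x.mean(dim=-1)          # squeeze: global average pooling
            w = self.fc(w)              # excitation: channel weights in (0, 1)
            return x * w.unsqueeze(-1)  # reweight channels

    class MultiScaleBlock(nn.Module):
        """Parallel 1-D convolutions with small-to-large kernels (assumed
        sizes), concatenated and then reweighted by an SE block."""
        def __init__(self, in_channels: int, branch_channels: int,
                     kernel_sizes=(3, 7, 15, 31)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv1d(in_channels, branch_channels, k, padding=k // 2),
                    nn.BatchNorm1d(branch_channels),
                    nn.ReLU(inplace=True),
                )
                for k in kernel_sizes
            )
            self.se = SEBlock(branch_channels * len(kernel_sizes))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = torch.cat([b(x) for b in self.branches], dim=1)
            return self.se(y)

    # Raw I/Q input: a batch of 1024-sample frames, 2 channels (I and Q).
    x = torch.randn(8, 2, 1024)
    block = MultiScaleBlock(in_channels=2, branch_channels=32)
    print(block(x).shape)  # torch.Size([8, 128, 1024])

Odd kernel sizes with padding of k // 2 keep the sequence length unchanged, so branches with different receptive fields can be concatenated along the channel dimension before the SE reweighting.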
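The abstract also does not say how the ensemble is formed; a common choice (assumed here, not confirmed by the paper) is soft voting, i.e. averaging the per-model softmax probabilities before taking the argmax.

    import torch

    @torch.no_grad()
    def soft_vote(models, x: torch.Tensor) -> torch.Tensor:
        """Soft-voting ensemble: average softmax probabilities across models.

        models: iterable of classifiers mapping (batch, 2, length) -> logits.
        Returns averaged class probabilities of shape (batch, num_classes).
        """
        probs = [torch.softmax(m(x), dim=-1) for m in models]
        return torch.stack(probs, dim=0).mean(dim=0)

    # Hypothetical usage with trained models model_a, model_b, model_c:
    # predicted = soft_vote([model_a, model_b, model_c], x).argmax(dim=-1)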

Submitted: Aug 31, 2022