Paper ID: 2404.11311

Use of Parallel Explanatory Models to Enhance Transparency of Neural Network Configurations for Cell Degradation Detection

David Mulvey, Chuan Heng Foh, Muhammad Ali Imran, Rahim Tafazolli

In a previous paper, we showed that a recurrent neural network (RNN) can accurately detect cellular network radio signal degradations. Unexpectedly, however, we found that accuracy gains diminished as we added layers to the RNN. To investigate this, in this paper we build a parallel model to illuminate and understand the internal operation of neural networks, such as the RNN, which store internal state in order to process sequential inputs. The model is widely applicable: it can be used with any input domain where the inputs can be represented by a Gaussian mixture. By looking at the RNN processing from a probability density function perspective, we are able to show how each layer of the RNN transforms the input distributions to increase detection accuracy. At the same time, we also uncover a side effect that limits the improvement in accuracy. To demonstrate the fidelity of the model, we validate it against each stage of RNN processing as well as the output predictions. As a result, we are able to explain the reasons for the RNN performance limits and offer useful insights for future designs of RNNs and similar types of neural network.
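The sketch below is a minimal illustration (not the authors' code) of the general idea the abstract describes: approximate the input distribution with a Gaussian mixture, push samples through a simple recurrent layer, and re-fit a mixture to the output to see how the layer transforms the distribution. The class means, sequence length, and RNN weights are arbitrary placeholders chosen only for illustration.

```python
# Minimal sketch of the parallel-model idea: represent the input distribution
# as a Gaussian mixture, pass Monte Carlo samples through a toy recurrent
# layer, and re-fit a mixture to the layer output to compare distributions.
# All numeric values and weights below are hypothetical, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical 1-D signal samples from two classes ("normal" vs "degraded").
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 1))
degraded = rng.normal(loc=2.5, scale=1.2, size=(500, 1))
x = np.vstack([normal, degraded])

# Step 1: approximate the input distribution by a two-component Gaussian mixture.
gmm_in = GaussianMixture(n_components=2, random_state=0).fit(x)

# Step 2: a toy single tanh recurrent cell applied over a short sequence,
# standing in for one RNN layer; weights are arbitrary.
W_x, W_h, b = 0.8, 0.5, 0.1

def rnn_layer(seq):
    h = 0.0
    for x_t in seq:
        h = np.tanh(W_x * x_t + W_h * h + b)
    return h

# Build short sequences from the samples and push them through the layer.
seq_len = 5
seqs = x.reshape(-1, seq_len)                      # (n_sequences, seq_len)
h_out = np.array([rnn_layer(s) for s in seqs]).reshape(-1, 1)

# Step 3: re-fit a mixture to the layer output; comparing gmm_in and gmm_out
# shows how the layer reshapes (e.g. separates or saturates) the components.
gmm_out = GaussianMixture(n_components=2, random_state=0).fit(h_out)
print("input component means: ", gmm_in.means_.ravel())
print("output component means:", gmm_out.means_.ravel())
```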

Submitted: Apr 17, 2024