Paper ID: 2310.14621

Spiking mode-based neural networks

Zhanghan Lin, Haiping Huang

Spiking neural networks play an important role in brain-like neuromorphic computation and in studying the working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, prohibiting a transparent understanding of the circuit mechanism. In this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is expressed as a Hopfield-like product of three matrices: input modes, output modes, and a score matrix. The first advantage is that the weights are interpreted through the input and output modes and their associated scores, which characterize the importance of each decomposition term. The number of modes is adjustable, allowing more degrees of freedom for modeling experimental data, and the reduced space complexity of learning substantially lowers the training cost. Training of spiking networks is thus carried out in the mode-score space. The second advantage is that the high-dimensional neural activity (filtered spike trains) in state space can be projected onto the mode space, which is typically of much lower dimension; a few modes are often sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework to two computational tasks: digit classification and selective sensory integration. Our method accelerates the training of spiking neural networks through a Hopfield-like decomposition, and this training yields low-dimensional attractor structures underlying the high-dimensional neural dynamics.
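To make the decomposition concrete, the following is a minimal NumPy sketch of the mode-based parameterization described above, assuming a Hopfield-like low-rank form W = xi_in @ diag(scores) @ xi_out.T with P modes. The variable names, shapes, and 1/sqrt(N) scaling are illustrative assumptions, not the paper's exact notation or training procedure.

```python
import numpy as np

N = 500   # number of spiking neurons
P = 10    # number of modes; the trainable bottleneck, with P << N

rng = np.random.default_rng(0)
xi_in = rng.standard_normal((N, P)) / np.sqrt(N)   # input modes (columns)
xi_out = rng.standard_normal((N, P)) / np.sqrt(N)  # output modes (columns)
scores = rng.standard_normal(P)                    # importance of each mode

# Recurrent weight matrix assembled from the mode decomposition:
# W = sum_mu scores[mu] * outer(xi_in[:, mu], xi_out[:, mu])
W = xi_in @ np.diag(scores) @ xi_out.T             # shape (N, N), rank <= P

# Training operates on the O(N*P) mode entries and P scores instead of the
# O(N^2) entries of W, which is the source of the reduced space complexity.
print(W.shape, np.linalg.matrix_rank(W))

# Projecting a high-dimensional activity vector (e.g., a filtered spike
# train r) onto the mode space gives a P-dimensional representation; here
# the projection onto the input modes is shown as one illustrative choice.
r = rng.standard_normal(N)
z = xi_in.T @ r                                    # shape (P,)
```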

Submitted: Oct 23, 2023