Paper ID: 2205.10130

Spiking Neural Operators for Scientific Machine Learning

Adar Kahana, Qian Zhang, Leonard Gleyzer, George Em Karniadakis

The main computational task of Scientific Machine Learning (SciML) is function regression, required for both the inputs and the outputs of a simulation. Physics-Informed Neural Networks (PINNs) and neural operators (such as DeepONet) have been very effective in solving Partial Differential Equations (PDEs), but they tax computational resources heavily and cannot be readily adopted for edge computing. Here, we address this issue by considering Spiking Neural Networks (SNNs), which have shown promise in reducing energy consumption by two orders of magnitude or more. We present an SNN-based method for regression, which has been a challenge due to the inherent difficulty of representing a function's input domain and continuous output values as spikes. We first propose a new method for encoding continuous values into spikes based on a triangular matrix in space and time, and demonstrate that it outperforms existing encoding methods. Next, we demonstrate that a simple SNN architecture, consisting of Leaky Integrate-and-Fire (LIF) activation and two dense layers, achieves relatively accurate function regression. Moreover, replacing the LIF dynamics with a trained Multi-Layer Perceptron (MLP) yields comparable accuracy while running three times faster. We then introduce the DeepONet, which consists of a branch network (typically a Fully-connected Neural Network, FNN) for the inputs and a trunk network (also an FNN) for the outputs; a spiking DeepONet can be built by replacing either the branch or the trunk with an SNN. We demonstrate this new approach for classification, using an SNN in the branch and achieving results comparable to those in the literature. Finally, we design a spiking DeepONet for regression by replacing its trunk with an SNN, and achieve good accuracy both for approximating functions and for inferring solutions of differential equations.
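The triangular space-time encoding is the paper's own contribution, and the abstract does not specify its construction. The sketch below is one plausible reading, not the authors' exact scheme: a scalar in [0, 1] is quantized to a level k, and the spike raster is a truncated lower-triangular matrix over time (rows) and neurons (columns), so larger values recruit more neurons over more time steps. The function names, raster layout, and decoding rule are all illustrative assumptions.

```python
import numpy as np

def triangular_encode(v, n_neurons=16, n_steps=16):
    """Encode a scalar v in [0, 1] as a binary (time x neuron) spike raster.

    Hypothetical reading of the triangular space-time code: v is quantized
    to a level k, and neuron n fires at time step t whenever n <= t < k,
    so larger values light up a larger lower-triangular wedge of the raster
    (v = 1 fills the full lower-triangular matrix).
    """
    k = int(round(v * n_steps))                    # quantization level
    t = np.arange(n_steps)[:, None]                # time index (rows)
    n = np.arange(n_neurons)[None, :]              # neuron index (columns)
    return ((n <= t) & (t < k)).astype(np.uint8)

def triangular_decode(spikes, n_steps=16):
    """Invert the code: the number of time steps with any spike is k."""
    k = int(np.count_nonzero(spikes.any(axis=1)))
    return k / n_steps

raster = triangular_encode(0.5)                    # 8 active rows out of 16
assert abs(triangular_decode(raster) - 0.5) < 1 / 16
```

A code of this shape is trivially invertible and its resolution is set by the number of time steps, which is one reason a structured space-time raster can outperform purely rate- or latency-based encodings for regression targets.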
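The DeepONet combines a branch net b(u) and a trunk net t(y) through a dot product, G(u)(y) ≈ Σ_k b_k(u) t_k(y). Below is a minimal PyTorch sketch of the regression variant with the trunk replaced by an SNN; the two-dense-layer LIF structure follows the abstract, while the layer sizes, number of time steps, rate-coded readout, and boxcar surrogate gradient are our assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a boxcar surrogate gradient, a common way to
    train SNNs with backprop (an assumption; the abstract does not spell
    out the training scheme)."""
    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem >= 0.0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (mem,) = ctx.saved_tensors
        return grad_out * (mem.abs() < 0.5).float()  # pass grads near threshold

class LIFTrunk(nn.Module):
    """Trunk as a small SNN: two dense layers with leaky integrate-and-fire
    dynamics, read out as spike rates. Sizes and constants are illustrative."""
    def __init__(self, in_dim=1, hidden=64, out_dim=32,
                 n_steps=20, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.n_steps, self.beta, self.threshold = n_steps, beta, threshold

    def forward(self, y):                                # y: (batch, in_dim)
        mem = y.new_zeros(y.shape[0], self.fc1.out_features)
        rate = y.new_zeros(y.shape[0], self.fc2.out_features)
        for _ in range(self.n_steps):
            mem = self.beta * mem + self.fc1(y)          # leaky integration
            spk = SpikeFn.apply(mem - self.threshold)    # fire
            mem = mem - spk * self.threshold             # soft reset
            rate = rate + self.fc2(spk)                  # accumulate readout
        return rate / self.n_steps                       # rate-coded features

class SpikingDeepONet(nn.Module):
    """Branch FNN on the sampled input function u, SNN trunk on the query
    point y, combined by a dot product over the latent basis."""
    def __init__(self, n_sensors=100, latent=32):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, 64), nn.Tanh(), nn.Linear(64, latent))
        self.trunk = LIFTrunk(in_dim=1, out_dim=latent)

    def forward(self, u, y):            # u: (batch, n_sensors), y: (batch, 1)
        return (self.branch(u) * self.trunk(y)).sum(dim=-1, keepdim=True)

model = SpikingDeepONet()
u = torch.randn(8, 100)                 # input function at 100 sensor points
y = torch.rand(8, 1)                    # query locations
print(model(u, y).shape)                # torch.Size([8, 1]) -> G(u)(y)
```

The same substitution applied to the branch instead of the trunk gives the classification variant described in the abstract, with the input function spike-encoded before it enters the branch.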

Submitted: May 17, 2022