Neural Operator
Neural operators are deep learning models that learn mappings between infinite-dimensional function spaces, with a primary focus on efficiently solving and analyzing partial differential equations (PDEs). Current research emphasizes improving the accuracy, efficiency, and interpretability of these operators, exploring architectures such as Fourier neural operators, DeepONets, and state-space models, and incorporating physics-informed learning and techniques such as multigrid methods. The field matters because it offers a powerful alternative to traditional numerical solvers for complex PDEs, enabling faster and more accurate simulations in areas such as fluid dynamics, materials science, and climate modeling.
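As a rough illustration of the Fourier neural operator idea mentioned above, here is a minimal PyTorch sketch of a single spectral convolution layer (transform to Fourier space, apply learned weights to the lowest modes, transform back). The class name, shapes, and hyperparameters are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Minimal 1D Fourier layer sketch: FFT -> learned mixing of the
    lowest `modes` frequencies -> inverse FFT. Hypothetical example."""

    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained low-frequency Fourier modes
        scale = 1.0 / (in_channels * out_channels)
        # complex-valued weights, one matrix per retained mode
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, grid_points)
        x_ft = torch.fft.rfft(x)  # to Fourier space along the grid dimension
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # mix channels of the lowest `modes` frequencies with learned weights
        out_ft[:, :, : self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, : self.modes], self.weights
        )
        # back to physical space, restoring the original grid size
        return torch.fft.irfft(out_ft, n=x.size(-1))


# Example usage on a random batch of input functions sampled on 64 grid points
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
u = torch.randn(8, 1, 64)
print(layer(u).shape)  # torch.Size([8, 1, 64])
```

Because the learned weights act on Fourier modes rather than grid points, the same layer can, in principle, be evaluated on inputs sampled at different resolutions, which is one of the properties that motivates operator learning for PDEs.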
Papers
Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations
Takashi Furuya, Koichi Taniguchi, Satoshi Okuda
Disentangled Representation Learning for Parametric Partial Differential Equations
Ning Liu, Lu Zhang, Tian Gao, Yue Yu
Mamba Neural Operator: Who Wins? Transformers vs. State-Space Models for PDEs
Chun-Wun Cheng, Jiahao Huang, Yi Zhang, Guang Yang, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero