Neural Operator
Neural operators are deep learning models that learn mappings between infinite-dimensional function spaces, with a primary focus on efficiently solving and analyzing partial differential equations (PDEs). Current research emphasizes improving the accuracy, efficiency, and interpretability of these operators: it explores architectures such as Fourier neural operators, DeepONets, and state-space models, and incorporates physics-informed learning and techniques like multigrid methods. The field matters because it offers a powerful alternative to traditional numerical solvers for complex PDEs, enabling faster, more accurate simulations across diverse scientific domains, including fluid dynamics, materials science, and climate modeling.
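To make the idea concrete, the core building block of a Fourier neural operator is a spectral convolution: transform the input function to Fourier space, apply learned weights to a truncated set of low-frequency modes, and transform back. The sketch below is a minimal, untrained NumPy illustration (the function name, random weights, and toy input are assumptions for demonstration, not taken from any of the papers listed here); because the weights act on frequency modes rather than grid points, the same layer can be evaluated on grids of different resolutions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights, n_modes):
    """One Fourier layer: FFT -> scale the lowest n_modes by learned
    complex weights -> inverse FFT back to the spatial grid."""
    u_hat = np.fft.rfft(u)                          # frequency coefficients of u
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights   # keep/transform only low modes
    return np.fft.irfft(out_hat, n=u.shape[0])      # back to a real-valued function

n_modes = 8
# "Learned" complex weights -- random here, since this sketch is untrained.
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

# Toy input function sampled on a coarse grid of 64 points.
x_coarse = np.linspace(0, 1, 64, endpoint=False)
v_coarse = spectral_conv_1d(np.sin(2 * np.pi * x_coarse), weights, n_modes)

# Discretization invariance: the SAME weights apply on a finer 128-point grid.
x_fine = np.linspace(0, 1, 128, endpoint=False)
v_fine = spectral_conv_1d(np.sin(2 * np.pi * x_fine), weights, n_modes)

print(v_coarse.shape, v_fine.shape)
```

A full Fourier neural operator stacks several such layers, each combined with a pointwise linear transform and a nonlinearity, and trains the complex mode weights end to end on input/output function pairs.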
Papers
Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
Wuyang Chen, Jialin Song, Pu Ren, Shashank Subramanian, Dmitriy Morozov, Michael W. Mahoney
Operator Learning: Algorithms and Analysis
Nikola B. Kovachki, Samuel Lanthaler, Andrew M. Stuart
Learning Semilinear Neural Operators: A Unified Recursive Framework For Prediction And Data Assimilation
Ashutosh Singh, Ricardo Augusto Borsoi, Deniz Erdogmus, Tales Imbiriba
Diffeomorphism Neural Operator for various domains and parameters of partial differential equations
Zhiwei Zhao, Changqing Liu, Yingguang Li, Zhibin Chen, Xu Liu
Emulating the interstellar medium chemistry with neural operators
Lorenzo Branca, Andrea Pallottini
Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators
Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter