Neural Operator
Neural operators are deep learning models that learn mappings between infinite-dimensional function spaces, most prominently for efficiently solving and analyzing partial differential equations (PDEs). Current research focuses on improving the accuracy, efficiency, and interpretability of these operators through architectures such as Fourier neural operators, DeepONets, and state-space models, and by incorporating physics-informed learning and techniques such as multigrid methods. The field is significant because it offers a powerful alternative to traditional numerical methods for solving complex PDEs, enabling faster and more accurate simulations across diverse scientific domains such as fluid dynamics, materials science, and climate modeling.
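To make the idea of learning an operator between function spaces concrete, the sketch below shows a minimal 1-D spectral-convolution layer of the kind used inside Fourier neural operators: the input function is transformed to Fourier space, its lowest-frequency modes are mixed by learned complex weights, and the result is transformed back to physical space. This is an illustrative PyTorch sketch, not the implementation from any paper listed below; the class name `SpectralConv1d`, the mode count, and the grid size are assumptions chosen for the example.

```python
# Minimal sketch of an FNO-style spectral convolution (illustrative, not from the papers below).
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Multiply the lowest `modes` Fourier coefficients by learned complex weights."""

    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, n_grid_points) -- samples of an input function on a grid
        x_ft = torch.fft.rfft(x)  # forward FFT along the grid dimension
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Keep only the low-frequency modes and mix channels with the learned weights.
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space


if __name__ == "__main__":
    layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
    u = torch.randn(8, 1, 256)  # a batch of 1-D input functions on a 256-point grid
    v = layer(u)
    print(v.shape)  # torch.Size([8, 1, 256])
```

Because the learned weights act on a fixed number of Fourier modes rather than on grid points, the same trained layer can, in principle, be evaluated on inputs sampled at different resolutions, which is one practical sense in which such models map between function spaces rather than between fixed-size arrays.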
Papers
Efficient PDE-Constrained optimization under high-dimensional uncertainty using derivative-informed neural operators
Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas
Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning
Francesca Bartolucci, Emmanuel de Bézenac, Bogdan Raonić, Roberto Molinaro, Siddhartha Mishra, Rima Alaifari
Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains
Levi Lingsch, Mike Y. Michelis, Emmanuel de Bezenac, Sirani M. Perera, Robert K. Katzschmann, Siddhartha Mishra