Neural Operator
Neural operators are deep learning models designed to learn mappings between infinite-dimensional function spaces, most prominently for efficiently solving and analyzing partial differential equations (PDEs). Current research emphasizes improving the accuracy, efficiency, and interpretability of these operators, exploring architectures such as Fourier neural operators, DeepONets, and state-space models, and incorporating physics-informed learning and techniques like multigrid methods. The field matters because it offers a powerful alternative to traditional numerical solvers for complex PDEs, enabling faster and more accurate simulations across diverse scientific domains such as fluid dynamics, materials science, and climate modeling.
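To make the "mapping between function spaces" idea concrete, here is a minimal PyTorch sketch of a 1D Fourier neural operator: each layer transforms the discretized input function to Fourier space, applies learned complex weights to a truncated set of low-frequency modes, and transforms back. The class names (`SpectralConv1d`, `FNO1d`), layer width, and mode count are illustrative assumptions for this sketch, not the implementation used in any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralConv1d(nn.Module):
    """FFT -> keep the lowest `modes` frequencies -> learned complex
    weights -> inverse FFT. The core building block of a 1D FNO."""

    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # assumes the grid has at least 2*modes points
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # Fourier coefficients
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # Mix channels on the retained low-frequency modes only
        out_ft[:, :, :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))


class FNO1d(nn.Module):
    """Minimal FNO: lift the input, alternate spectral and pointwise
    convolutions, then project back to a scalar output function."""

    def __init__(self, width=32, modes=16, layers=4):
        super().__init__()
        self.lift = nn.Linear(2, width)        # input value + grid coordinate
        self.spectral = nn.ModuleList(
            [SpectralConv1d(width, width, modes) for _ in range(layers)]
        )
        self.pointwise = nn.ModuleList(
            [nn.Conv1d(width, width, 1) for _ in range(layers)]
        )
        self.project = nn.Linear(width, 1)

    def forward(self, a, grid):                # a, grid: (batch, grid_points)
        x = self.lift(torch.stack((a, grid), dim=-1)).permute(0, 2, 1)
        for spec, pw in zip(self.spectral, self.pointwise):
            x = F.gelu(spec(x) + pw(x))
        return self.project(x.permute(0, 2, 1)).squeeze(-1)
```

Because the learned weights act on Fourier modes rather than on a fixed grid, the same trained model can, in principle, be evaluated on input functions sampled at different resolutions, which is one reason this family of architectures is attractive as a surrogate for PDE solvers.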
Papers
Diffusion models as probabilistic neural operators for recovering unobserved states of dynamical systems
Katsiaryna Haitsiukevich, Onur Poyraz, Pekka Marttinen, Alexander Ilin
Generative flow induced neural architecture search: Towards discovering optimal architecture in wavelet neural operator
Hartej Soin, Tapas Tripura, Souvik Chakraborty
Neural Operators Learn the Local Physics of Magnetohydrodynamics
Taeyoung Kim, Youngsoo Ha, Myungjoo Kang
MD-NOMAD: Mixture density nonlinear manifold decoder for emulating stochastic differential equations and uncertainty propagation
Akshay Thakur, Souvik Chakraborty
Neural Operator induced Gaussian Process framework for probabilistic solution of parametric partial differential equations
Sawan Kumar, Rajdip Nayek, Souvik Chakraborty
Can physical information aid the generalization ability of Neural Networks for hydraulic modeling?
Gianmarco Guglielmo, Andrea Montessori, Jean-Michel Tucny, Michele La Rocca, Pietro Prestininzi
Derivative-informed neural operator acceleration of geometric MCMC for infinite-dimensional Bayesian inverse problems
Lianghao Cao, Thomas O'Leary-Roseberry, Omar Ghattas