Neural Operator
Neural operators are deep learning models designed to learn mappings between infinite-dimensional function spaces, with a primary focus on efficiently solving and analyzing partial differential equations (PDEs). Current research emphasizes improving the accuracy, efficiency, and interpretability of these operators through architectures such as Fourier neural operators, DeepONets, and state-space models, and through physics-informed learning and techniques such as multigrid methods. This field is significant because it offers a powerful alternative to traditional numerical methods for solving complex PDEs, impacting diverse scientific domains and enabling faster, more accurate simulations in areas such as fluid dynamics, materials science, and climate modeling.
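To make the Fourier-neural-operator idea mentioned above concrete, here is a minimal PyTorch sketch of one spectral convolution layer and a small operator that maps an input function sampled on a 1D grid to an output function on the same grid. The class names (SpectralConv1d, FNO1d) and hyperparameters (modes, width) are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT -> keep low modes -> learned complex weights -> inverse FFT."""
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of retained Fourier modes (illustrative choice)
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.rand(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, grid_points)
        x_ft = torch.fft.rfft(x)  # complex spectrum: (batch, in_channels, grid//2 + 1)
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device
        )
        # multiply only the lowest `modes` frequencies by the learned weights
        out_ft[:, :, :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


class FNO1d(nn.Module):
    """Minimal stack: lift -> (spectral conv + pointwise conv) -> project."""
    def __init__(self, modes=16, width=64):
        super().__init__()
        self.lift = nn.Conv1d(1, width, 1)
        self.spectral = SpectralConv1d(width, width, modes)
        self.pointwise = nn.Conv1d(width, width, 1)
        self.project = nn.Conv1d(width, 1, 1)

    def forward(self, x):
        x = self.lift(x)
        x = torch.relu(self.spectral(x) + self.pointwise(x))
        return self.project(x)


# usage: map a batch of input functions sampled on a 256-point grid to output functions
u = torch.randn(8, 1, 256)        # 8 input functions, 1 channel, 256 grid points
model = FNO1d(modes=16, width=32)
v = model(u)                      # output shape: (8, 1, 256)
```

Because the learned weights act on Fourier modes rather than on a fixed grid stencil, the same trained layer can in principle be evaluated on grids of different resolution, which is the property that distinguishes neural operators from standard convolutional networks.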
Papers
Neural Operators for Boundary Stabilization of Stop-and-go Traffic
Yihuai Zhang, Ruiguo Zhong, Huan Yu
Operator-learning-inspired Modeling of Neural Ordinary Differential Equations
Woojin Cho, Seunghyeon Cho, Hyundong Jin, Jinsung Jeon, Kookjin Lee, Sanghyun Hong, Dongeun Lee, Jonghyun Choi, Noseong Park
Can Physics Informed Neural Operators Self Improve?
Ritam Majumdar, Amey Varhade, Shirish Karande, Lovekesh Vig
Mechanical Characterization and Inverse Design of Stochastic Architected Metamaterials Using Neural Operators
Hanxun Jin, Enrui Zhang, Boyu Zhang, Sridhar Krishnaswamy, George Em Karniadakis, Horacio D. Espinosa