Monte Carlo
Monte Carlo methods are computational techniques that use repeated random sampling to obtain numerical results, primarily for approximating solutions to complex problems where deterministic approaches are infeasible. Current research focuses on improving the efficiency and accuracy of Monte Carlo methods through algorithmic advances such as Multilevel Monte Carlo and importance sampling, often combined with neural networks for function approximation and variance reduction. These improvements are driving progress in diverse fields, including reinforcement learning, Bayesian inference, and scientific computing, by enabling more efficient and accurate estimation in high-dimensional spaces and complex systems.
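As an illustrative sketch (not taken from any of the papers listed below), the snippet compares plain Monte Carlo with importance sampling for estimating a Gaussian tail probability, a rare-event setting where variance reduction is essential; the threshold and the shifted-Gaussian proposal are assumptions made for the example.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0  # estimate P(X > 4) for X ~ N(0, 1), a rare event

# Plain Monte Carlo: almost no samples land in the tail, so the
# estimate is extremely noisy (often exactly zero at this sample size).
x = rng.standard_normal(n)
plain_est = np.mean(x > threshold)

# Importance sampling: draw from a proposal N(threshold, 1) centred on
# the rare region, then reweight each sample by the likelihood ratio
# p(y) / q(y) of target density over proposal density.
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_weights = -0.5 * y**2 + 0.5 * (y - threshold) ** 2  # log p(y) - log q(y)
is_est = np.mean((y > threshold) * np.exp(log_weights))

exact = 0.5 * math.erfc(threshold / math.sqrt(2.0))  # 1 - Phi(4)
print(f"plain MC:            {plain_est:.3e}")
print(f"importance sampling: {is_est:.3e}")
print(f"exact value:         {exact:.3e}")
```

With a well-chosen proposal the importance-sampling estimate concentrates near the true value of roughly 3.17e-5, while the plain estimator typically returns zero; the same reweighting idea underlies many of the variance-reduction schemes referenced above.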
Papers
Enhancing Global Sensitivity and Uncertainty Quantification in Medical Image Reconstruction with Monte Carlo Arbitrary-Masked Mamba
Jiahao Huang, Liutao Yang, Fanwen Wang, Yang Nan, Weiwen Wu, Chengyan Wang, Kuangyu Shi, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Daoqiang Zhang, Guang Yang
The surprising efficiency of temporal difference learning for rare event prediction
Xiaoou Cheng, Jonathan Weare