Monte Carlo
Monte Carlo methods are computational techniques that use repeated random sampling to obtain numerical results, typically to approximate solutions to problems where deterministic approaches are infeasible. Current research focuses on improving their efficiency and accuracy through algorithmic advances such as Multilevel Monte Carlo and importance sampling, often combined with neural networks for function approximation and variance reduction. These improvements are driving progress in fields including reinforcement learning, Bayesian inference, and scientific computing by enabling more efficient and accurate estimation in high-dimensional spaces and complex systems.
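To make the summary concrete, the short Python sketch below contrasts a plain Monte Carlo estimate with an importance-sampling estimate of the rare-event probability P(X > 4) for X ~ N(0, 1). It is a generic illustration of the variance-reduction idea mentioned above, not code from any of the papers listed; the toy target, the proposal centered at the threshold, and all variable names are illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0

# Plain Monte Carlo: sample from the target density N(0, 1) and average
# the indicator of the rare event. At this sample size the event almost
# never occurs, so the estimate is usually exactly zero.
x = rng.standard_normal(n)
plain_est = np.mean(x > threshold)

# Importance sampling: draw from a proposal N(4, 1) centered on the rare
# region, then reweight each sample by the likelihood ratio
# target_pdf / proposal_pdf so the estimator remains unbiased.
y = rng.normal(threshold, 1.0, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold, scale=1.0)
is_est = np.mean((y > threshold) * weights)

print(f"plain MC:            {plain_est:.3e}")
print(f"importance sampling: {is_est:.3e}")
print(f"exact value:         {norm.sf(threshold):.3e}")  # ~3.167e-05

Shifting the proposal toward the rare region makes the indicator fire on roughly half the samples, while the likelihood-ratio weights correct for sampling from the wrong density; the importance-sampling estimate lands close to the exact value of about 3.17e-05, which is the variance-reduction effect the summary refers to.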
Papers
A Monte Carlo Framework for Calibrated Uncertainty Estimation in Sequence Prediction
Qidong Yang, Weicheng Zhu, Joseph Keslin, Laure Zanna, Tim G. J. Rudner, Carlos Fernandez-Granda
ELBOing Stein: Variational Bayes with Stein Mixture Inference
Ola Rønning, Eric Nalisnick, Christophe Ley, Padhraic Smyth, Thomas Hamelryck
Combining Open-box Simulation and Importance Sampling for Tuning Large-Scale Recommenders
Kaushal Paneri, Michael Munje, Kailash Singh Maurya, Adith Swaminathan, Yifan Shi
Bayesian computation with generative diffusion models by Multilevel Monte Carlo
Abdul-Lateef Haji-Ali, Marcelo Pereyra, Luke Shaw, Konstantinos Zygalakis
Neural Control Variates with Automatic Integration
Zilu Li, Guandao Yang, Qingqing Zhao, Xi Deng, Leonidas Guibas, Bharath Hariharan, Gordon Wetzstein
Parameter Tuning of the Firefly Algorithm by Standard Monte Carlo and Quasi-Monte Carlo Methods
Geethu Joy, Christian Huyck, Xin-She Yang
Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods
Andrej Tschalzev, Paul Nitschke, Lukas Kirchdorfer, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt