Importance Sampling
Importance sampling is a statistical technique for estimating expectations under a target distribution by drawing samples from a more convenient proposal distribution and reweighting them accordingly. Current research focuses on improving its efficiency and accuracy in various contexts, including reinforcement learning, generative modeling (with normalizing flows and diffusion models), and large language model training, often through adaptive sampling strategies and novel weighting schemes that mitigate bias and variance. These advances have significant implications across diverse fields, enabling more efficient training of complex models, improved policy evaluation in reinforcement learning, and more accurate estimation in high-dimensional problems such as those arising in physics and drug discovery.
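The core idea can be sketched in a few lines: to estimate E_p[f(X)] under a target density p, draw samples from a proposal density q and weight each sample by the density ratio w(x) = p(x)/q(x). The example below is a minimal illustrative sketch (the distributions, function names, and parameters are chosen for illustration, not taken from any of the papers listed here).

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a Gaussian N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_sampling_estimate(f, target_pdf, proposal_pdf, proposal_sampler, n):
    """Estimate E_p[f(X)] using n samples from the proposal q.

    Each sample x ~ q is weighted by w(x) = p(x) / q(x); the weighted
    average is an unbiased estimator of the expectation under p,
    provided q covers the support of p.
    """
    total = 0.0
    for _ in range(n):
        x = proposal_sampler()
        w = target_pdf(x) / proposal_pdf(x)
        total += w * f(x)
    return total / n

# Toy example: E[X^2] under the target N(0, 1) (true value 1.0),
# sampling from the wider proposal N(0, 2^2), whose heavier tails
# keep the importance weights bounded.
random.seed(0)
est = importance_sampling_estimate(
    f=lambda x: x * x,
    target_pdf=lambda x: normal_pdf(x, 0.0, 1.0),
    proposal_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    proposal_sampler=lambda: random.gauss(0.0, 2.0),
    n=100_000,
)
print(est)  # close to 1.0
```

A poorly chosen proposal (e.g. one with lighter tails than the target) can make the weights unbounded and the variance explode, which is precisely the failure mode the adaptive sampling strategies and weighting schemes mentioned above aim to control.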
Papers
Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling
Yuanqi Du, Michael Plainer, Rob Brekelmans, Chenru Duan, Frank Noé, Carla P. Gomes, Alán Aspuru-Guzik, Kirill Neklyudov
Cost-aware Simulation-based Inference
Ayush Bharti, Daolang Huang, Samuel Kaski, François-Xavier Briol
Fine-detailed Neural Indoor Scene Reconstruction using multi-level importance sampling and multi-view consistency
Xinghui Li, Yuchen Ji, Xiansong Lai, Wanting Zhang
Combining Open-box Simulation and Importance Sampling for Tuning Large-Scale Recommenders
Kaushal Paneri, Michael Munje, Kailash Singh Maurya, Adith Swaminathan, Yifan Shi
Federated Graph Learning with Adaptive Importance-based Sampling
Anran Li, Yuanyuan Chen, Chao Ren, Wenhan Wang, Ming Hu, Tianlin Li, Han Yu, Qingyu Chen
Data Pruning via Separability, Integrity, and Model Uncertainty-Aware Importance Sampling
Steven Grosz, Rui Zhao, Rajeev Ranjan, Hongcheng Wang, Manoj Aggarwal, Gerard Medioni, Anil Jain
Adaptive Mixture Importance Sampling for Automated Ads Auction Tuning
Yimeng Jia, Kaushal Paneri, Rong Huang, Kailash Singh Maurya, Pavan Mallapragada, Yifan Shi