Open Sampling
Open sampling in machine learning concerns the efficient selection of data subsets for training and inference, with the aim of improving model performance and reducing computational cost. Current research explores diverse sampling strategies, including those based on gradient information, low-discrepancy sequences, and normalizing flows, often integrated with model architectures such as neural networks, diffusion models, and generative adversarial networks. These advances are crucial for handling large datasets and improve the accuracy and efficiency of applications ranging from image synthesis and video summarization to drug discovery and autonomous driving. Developing sampling methods that are both efficient and effective remains a key challenge across many subfields of machine learning.
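To make the low-discrepancy idea concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers listed below) comparing plain Monte Carlo sampling with a scrambled Sobol sequence for estimating an expectation over the unit square. Low-discrepancy points cover the domain more evenly than i.i.d. uniform draws, which typically reduces estimation error for smooth integrands; the test function and its closed-form mean are chosen purely for illustration.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth test integrand on [0, 1]^2; its exact mean is 4 / pi^2.
    return np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1] / 2)

rng = np.random.default_rng(0)
n = 1024  # Sobol sequences balance best at powers of two

# Plain Monte Carlo: i.i.d. uniform points.
mc_points = rng.random((n, 2))
mc_estimate = f(mc_points).mean()

# Low-discrepancy sampling: scrambled Sobol points fill the square
# more evenly than random points, lowering the integration error.
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
qmc_points = sobol.random(n)
qmc_estimate = f(qmc_points).mean()

exact = 4 / np.pi**2  # closed-form mean of f over the unit square
print(f"MC error:  {abs(mc_estimate - exact):.2e}")
print(f"QMC error: {abs(qmc_estimate - exact):.2e}")
```

On a typical run the Sobol estimate is noticeably closer to the exact value at the same sample count, which is the property that makes low-discrepancy sequences attractive when each sample (a training example, a diffusion step, a simulation) is expensive.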
Papers
Fast, accurate training and sampling of Restricted Boltzmann Machines
Nicolas Béreux, Aurélien Decelle, Cyril Furtlehner, Lorenzo Rosset, Beatriz Seoane
AGS-GNN: Attribute-guided Sampling for Graph Neural Networks
Siddhartha Shankar Das, S M Ferdous, Mahantesh M Halappanavar, Edoardo Serra, Alex Pothen