Sample Efficiency
Sample efficiency in machine learning concerns minimizing the amount of data needed to train effective models, a crucial goal given the cost and difficulty of data acquisition in many domains. Current research pursues this through several complementary approaches: novel algorithms (such as alternating minimization and methods incorporating diffusion models), inductive biases built into model architectures (such as equivariant neural networks), and external knowledge sources (such as large language models). These advances are vital for making machine learning practical and accessible in resource-constrained settings and in applications like robotics and drug discovery, where data collection is expensive or time-consuming.
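As an illustrative sketch (not drawn from any of the papers below), the following PyTorch example shows one of the mechanisms named above: an architectural inductive bias, here permutation invariance via a shared per-element network with sum pooling (a Deep Sets-style model), on a toy task whose target respects that symmetry. All class and function names are hypothetical; the setup and hyperparameters are assumptions chosen for the demonstration.

```python
# A minimal sketch of how an inductive bias can improve sample efficiency.
# Task: predict the sum of a set of numbers. The target is invariant to
# element order, so a model that hard-codes that symmetry typically needs
# far fewer training examples than a generic MLP.
import torch
import torch.nn as nn

torch.manual_seed(0)

SET_SIZE = 10

def make_data(n):
    # Each example is a set of SET_SIZE numbers; the target is their sum.
    x = torch.randn(n, SET_SIZE)
    return x, x.sum(dim=1, keepdim=True)

class PlainMLP(nn.Module):
    # No built-in symmetry: must infer order-invariance from data alone.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SET_SIZE, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

class DeepSets(nn.Module):
    # Permutation-invariant by construction: a shared per-element network
    # followed by sum pooling, so element order cannot affect the output.
    def __init__(self, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.phi(x.unsqueeze(-1)).sum(dim=1)  # pool over set elements
        return self.rho(h)

def test_mse(model, n_train=10, epochs=1000):
    # Deliberately tiny training set to expose the sample-efficiency gap.
    x_tr, y_tr = make_data(n_train)
    x_te, y_te = make_data(2000)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x_tr), y_tr).backward()
        opt.step()
    with torch.no_grad():
        return nn.functional.mse_loss(model(x_te), y_te).item()

print("plain MLP test MSE:", test_mse(PlainMLP()))
print("Deep Sets test MSE:", test_mse(DeepSets()))
```

Because the pooled representation cannot depend on element order, the invariant model effectively generalizes across all permutations of each training set for free; restricting the hypothesis class to match a known symmetry is one standard way inductive biases reduce data requirements.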
Papers
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering
Klaus-Rudolf Kladny, Bernhard Schölkopf, Michael Muehlebach
Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians
Jane H. Lee, Anay Mehrotra, Manolis Zampetakis