Sample Efficiency
Sample efficiency in machine learning is about minimizing the amount of data needed to train effective models, a crucial concern given the cost and difficulty of data acquisition in many domains. Current research improves sample efficiency through several techniques: novel algorithms (such as alternating minimization and methods that incorporate diffusion models), inductive biases built into model architectures (such as equivariant neural networks), and external knowledge sources (such as large language models). These advances are vital for making machine learning practical and accessible, particularly in resource-constrained settings and in applications like robotics and drug discovery, where data collection is expensive or time-consuming.
Papers
Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency
Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi Ke, Boyi Liu, Zhaoran Wang
Estimation and Inference in Distributional Reinforcement Learning
Liangyu Zhang, Yang Peng, Jiadong Liang, Wenhao Yang, Zhihua Zhang