Tiny Set
"Tiny set" research focuses on efficiently processing and learning from small datasets, addressing limitations in data availability and annotation costs across various domains. Current research explores techniques like data augmentation (e.g., concatenating video clips for sign language translation), leveraging large language models for interpreting counterfactual examples, and developing novel algorithms for optimization and representation learning on sets of data points (e.g., using covariance matrix adaptation evolution strategy or deep neural network augmentations for particle filters). This work is significant because it enables effective machine learning in scenarios with limited data, impacting fields ranging from AI explainability and high-energy physics to automated reasoning and multi-modal learning.
Papers
Covering Multiple Objectives with a Small Set of Solutions Using Bayesian Optimization
Natalie Maus, Kyurae Kim, Yimeng Zeng, Haydn Thomas Jones, Fangping Wan, Marcelo Der Torossian Torres, Cesar de la Fuente-Nunez, Jacob R. Gardner
SETS: Leveraging Self-Verification and Self-Correction for Improved Test-Time Scaling
Jiefeng Chen, Jie Ren, Xinyun Chen, Chengrun Yang, Ruoxi Sun, Sercan Ö Arık