Dataset Condensation
Dataset condensation aims to create small synthetic datasets that retain the essential information of much larger original datasets, thereby reducing the computational cost and storage needed to train machine learning models. Current research focuses on improving the efficiency and accuracy of condensation methods, often via distribution matching or gradient-based optimization, sometimes within specific model architectures such as autoencoders. The field matters because it addresses the growing cost of big data in machine learning: smaller training sets enable more efficient model training and deployment, particularly in resource-constrained environments.
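As a toy illustration of the distribution-matching idea, the sketch below learns a handful of synthetic points whose mean embedding matches that of a much larger "real" dataset. All names are hypothetical: the data is synthetic Gaussian noise, and a fixed random linear projection stands in for the trained feature extractor used in actual condensation methods.

```python
import numpy as np

rng = np.random.default_rng(0)

X_real = rng.normal(loc=2.0, size=(500, 64))   # large "real" dataset (toy)
S = rng.normal(size=(10, 64))                  # small synthetic set to learn
W = rng.normal(size=(64, 32)) / np.sqrt(64)    # stand-in feature extractor

m_real = (X_real @ W).mean(axis=0)             # target mean embedding

def loss(S):
    # Squared distance between mean embeddings of real and synthetic data.
    m_syn = (S @ W).mean(axis=0)
    return float(np.sum((m_syn - m_real) ** 2))

initial_loss = loss(S)
lr = 1.0
for _ in range(500):
    m_syn = (S @ W).mean(axis=0)
    # Gradient of the mean-embedding distance is identical for every
    # synthetic row; broadcasting applies it to all of S at once.
    grad_row = (2.0 / len(S)) * (m_syn - m_real) @ W.T
    S -= lr * grad_row

final_loss = loss(S)
print(initial_loss, final_loss)
```

Real condensation methods replace the random projection with features (or training gradients) from networks being trained on the data, and match them across many architectures and training stages; the optimization structure, gradient descent on the synthetic examples themselves, is the same.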