Dataset Condensation
Dataset condensation aims to create small, synthetic datasets that retain the essential information of much larger originals, reducing the computational and storage cost of training machine learning models. Current research focuses on improving the efficiency and accuracy of condensation methods, typically via distribution matching or gradient-based optimization, sometimes tailored to specific model architectures such as autoencoders. The field matters because it addresses the growing data demands of machine learning: condensed datasets enable faster model training and deployment, particularly in resource-constrained environments.
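The distribution-matching idea mentioned above can be illustrated with a minimal sketch: a small set of learnable synthetic points is optimized so that its mean embedding under a feature map matches that of the real data. Everything below is illustrative, not taken from any particular paper — the toy data, the random ReLU feature map standing in for a network embedding, and the optimization settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: two Gaussian blobs in 8 dimensions.
n_real, n_syn, dim, feat = 200, 10, 8, 32
X_real = np.concatenate([
    rng.normal(-1.0, 0.5, size=(n_real // 2, dim)),
    rng.normal(+1.0, 0.5, size=(n_real // 2, dim)),
])

# Random ReLU feature map standing in for a learned network embedding.
W = rng.normal(size=(feat, dim)) / np.sqrt(dim)

def features(X):
    return np.maximum(X @ W.T, 0.0)            # (n, feat)

mu_real = features(X_real).mean(axis=0)

# Learnable synthetic set, optimized by plain gradient descent on the
# squared distance between real and synthetic feature-space means.
S = rng.normal(size=(n_syn, dim))
lr = 0.5
init_loss = float(np.sum((features(S).mean(axis=0) - mu_real) ** 2))
for _ in range(500):
    Z = S @ W.T                                # pre-activations (n_syn, feat)
    mu_syn = np.maximum(Z, 0.0).mean(axis=0)
    diff = mu_syn - mu_real                    # (feat,)
    # d loss / d S via the chain rule through the ReLU.
    grad = ((Z > 0) * diff) @ W * (2.0 / n_syn)
    S -= lr * grad

final_loss = float(np.sum((features(S).mean(axis=0) - mu_real) ** 2))
print(init_loss, final_loss)
```

Real methods replace the random feature map with embeddings from networks sampled during training and match per-class statistics, but the core loop — differentiate a distribution-distance loss with respect to the synthetic examples themselves — is the same.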