Gradient Matching
Gradient matching is a technique for creating smaller, synthetic datasets that mimic the training behavior of larger, original datasets: the synthetic examples are optimized so that the gradients they induce in a model closely match the gradients produced by the real data. Research focuses on applying this method to a range of machine learning tasks, including graph condensation, federated learning, and point cloud completion, often combining it with techniques such as information bottleneck principles or contrastive learning to improve efficiency and privacy. The approach can substantially reduce the computational cost and resource consumption of training large models, while also supporting model interpretability and mitigating privacy concerns in distributed learning settings.
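To make the core idea concrete, the sketch below shows a minimal gradient-matching loop for dataset condensation in PyTorch. It is a simplified, hypothetical example rather than any specific paper's recipe: the toy model, the one-synthetic-example-per-class setup, and the cosine-distance matching objective are all illustrative assumptions, and the inner loop that would normally also update the network on the synthetic data is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative gradient-matching sketch: learn a tiny synthetic dataset whose
# gradients on a network match those of (randomly generated stand-in) real data.
# All sizes and hyperparameters here are assumptions for demonstration only.

torch.manual_seed(0)

num_classes, feat_dim, syn_per_class = 10, 32, 1
model = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
params = [p for p in model.parameters() if p.requires_grad]

# Learnable synthetic dataset: one example per class in this toy setup.
syn_x = torch.randn(num_classes * syn_per_class, feat_dim, requires_grad=True)
syn_y = torch.arange(num_classes).repeat_interleave(syn_per_class)
opt_syn = torch.optim.SGD([syn_x], lr=0.1)


def flat_grads(loss, parameters):
    """Gradients of `loss` w.r.t. `parameters`, flattened into one vector."""
    grads = torch.autograd.grad(loss, parameters, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


for step in range(100):
    # Stand-in "real" batch; in practice this is sampled from the original dataset.
    real_x = torch.randn(128, feat_dim)
    real_y = torch.randint(0, num_classes, (128,))

    # Gradients induced by the real and synthetic data on the same network.
    real_g = flat_grads(F.cross_entropy(model(real_x), real_y), params).detach()
    syn_g = flat_grads(F.cross_entropy(model(syn_x), syn_y), params)

    # Match the two gradient vectors (cosine distance); update only the synthetic data.
    match_loss = 1 - F.cosine_similarity(real_g, syn_g, dim=0)
    opt_syn.zero_grad()
    match_loss.backward()
    opt_syn.step()
```

In full condensation pipelines this matching step is typically repeated across multiple network initializations and interleaved with training the network on the synthetic set, but the single loop above captures the essential objective: make the synthetic data's gradients imitate the real data's gradients.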