Batch Greenkhorn
Batch Greenkhorn builds on the Greenkhorn algorithm, a greedy coordinate variant of Sinkhorn iteration for entropic-regularized optimal transport, and more broadly exemplifies a recurring idea in machine learning: improving efficiency by processing data in batches rather than one element at a time. Current research applies batched processing in areas such as federated learning (improving resistance to gradient-inversion attacks), Bayesian optimization (reducing regret and redundant evaluations), and model editing (supporting consecutive, memory-efficient updates). These advances matter for privacy preservation, for cutting resource-intensive computation, and for improving the accuracy and speed of machine learning applications.
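To make the batch-over-sequential idea concrete in the Greenkhorn setting, the sketch below runs a batched greedy Sinkhorn-style update for entropic optimal transport: instead of rescaling the single worst-violating row or column per step (classic Greenkhorn), it rescales the worst `batch` of them jointly. The function name, batch-selection rule, and parameter choices here are illustrative assumptions, not the published Batch Greenkhorn algorithm.

```python
import numpy as np

def batch_greenkhorn(C, r, c, reg=0.1, batch=2, iters=5000, tol=1e-8):
    """Illustrative batched greedy update for entropic-regularized OT.

    C    : (n, m) cost matrix
    r, c : target row / column marginals (each summing to 1)
    reg  : entropic regularization strength
    batch: number of rows or columns rescaled per step (batch=1
           recovers a plain Greenkhorn-style greedy update)
    """
    K = np.exp(-C / reg)            # Gibbs kernel
    u = np.ones_like(r)             # row scalings
    v = np.ones_like(c)             # column scalings
    for _ in range(iters):
        P = u[:, None] * K * v[None, :]
        dr = np.abs(P.sum(axis=1) - r)   # row-marginal violations
        dc = np.abs(P.sum(axis=0) - c)   # column-marginal violations
        if dr.sum() + dc.sum() < tol:
            break
        if dr.max() >= dc.max():
            idx = np.argsort(dr)[-batch:]        # worst rows
            u[idx] = r[idx] / (K[idx] @ v)       # exact fix for those rows
        else:
            idx = np.argsort(dc)[-batch:]        # worst columns
            v[idx] = c[idx] / (K.T[idx] @ u)     # exact fix for those columns
    return u[:, None] * K * v[None, :]           # transport plan
```

Each step leaves the selected rows (or columns) exactly matching their target marginals, so larger batches trade a bit of greediness for fewer, cheaper vectorized passes over the kernel.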
Papers
October 29, 2024
September 26, 2024
July 5, 2024
April 1, 2024
March 8, 2024
March 7, 2024
February 22, 2024
February 4, 2024
December 31, 2023
December 20, 2023
December 19, 2023
June 23, 2023
March 7, 2023
June 27, 2022
January 19, 2022