Large-Scale Hyperparameter Optimization
Large-scale hyperparameter optimization concerns efficiently finding good configurations for machine learning models when datasets are massive and each training run is computationally expensive. Current research emphasizes scalable algorithms, such as Bayesian optimization and its asynchronous, decentralized variants, and leverages acceleration techniques like dataset condensation and implicit differentiation to speed up the search. These advances are crucial for training complex models such as spiking neural networks and for deploying AI in resource-intensive applications such as high-energy physics and fault diagnosis, ultimately improving both model accuracy and search efficiency.
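As a concrete illustration, the sketch below runs a sequential model-based (Bayesian-style) hyperparameter search with Optuna, whose default TPE sampler models past trials to propose promising configurations. The model, search space, and trial budget are illustrative assumptions for this example, not taken from any specific paper above.

```python
# Minimal sketch: Bayesian-style hyperparameter search with Optuna.
# The model, search space, and trial budget are illustrative assumptions.
import optuna
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(trial: optuna.Trial) -> float:
    # Each trial samples one hyperparameter configuration.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 32),
        "min_samples_split": trial.suggest_int("min_samples_split", 2, 16),
    }
    model = RandomForestClassifier(**params, n_jobs=-1, random_state=0)
    # Cross-validated accuracy is the objective being maximized.
    return cross_val_score(model, X, y, cv=3).mean()

# The default TPE sampler builds a probabilistic model of completed
# trials and uses it to propose the next configuration to evaluate.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("Best params:", study.best_params)
print("Best CV accuracy:", study.best_value)
```

For the large-scale, asynchronous setting the summary mentions, one common approach is to have many worker processes share a single study through a common storage backend (e.g., passing a `storage` URL and a shared `study_name` to `optuna.create_study`), so each worker pulls and reports trials independently; this is one practical approximation of asynchronous parallel search, not necessarily the decentralized algorithms studied in the papers themselves.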