AutoML Benchmark
AutoML benchmarking aims to objectively compare the performance and efficiency of different automated machine learning systems across diverse datasets and tasks. Current research focuses on evaluating ensemble methods, including greedy ensemble selection and gradient-free optimization techniques such as CMA-ES, and on assessing meta-learning approaches for zero-shot AutoML and for multimodal data that combines tabular and text features. These benchmarks are crucial for identifying the strengths and weaknesses of existing AutoML frameworks, guiding future development, and ultimately improving the accessibility and reliability of automated machine learning in practical applications. A minimal sketch of one of the ensembling strategies such benchmarks compare, greedy ensemble selection, is given below.
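The sketch below illustrates Caruana-style greedy ensemble selection: base models are added to the ensemble with replacement, one at a time, choosing whichever addition most reduces validation loss of the averaged prediction. The function name, array shapes, and the squared-error metric are illustrative assumptions for this example, not the API of any particular AutoML framework.

```python
# Illustrative sketch of greedy ensemble selection (Caruana-style).
# Assumption: regression setting scored by mean squared error on a
# held-out validation set; real frameworks plug in their own metric.
import numpy as np

def greedy_ensemble_selection(val_preds, y_val, n_iterations=50):
    """Greedily add base-model predictions (with replacement) so that the
    running average of selected predictions minimizes validation loss.

    val_preds: array of shape (n_models, n_samples), validation predictions.
    y_val:     array of shape (n_samples,), validation targets.
    Returns per-model ensemble weights that sum to 1.
    """
    n_models = val_preds.shape[0]
    counts = np.zeros(n_models, dtype=int)          # how often each model was picked
    running_sum = np.zeros_like(y_val, dtype=float)  # sum of selected predictions

    for step in range(1, n_iterations + 1):
        # Evaluate the loss of adding each candidate model to the ensemble.
        losses = [
            np.mean(((running_sum + val_preds[m]) / step - y_val) ** 2)
            for m in range(n_models)
        ]
        best = int(np.argmin(losses))
        counts[best] += 1
        running_sum += val_preds[best]

    return counts / counts.sum()
```

A gradient-free optimizer such as CMA-ES can be benchmarked against this procedure by searching directly over the weight vector instead of building it greedily.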
Papers