AutoML Benchmark

AutoML benchmarking aims to compare the performance and efficiency of automated machine learning systems objectively across diverse datasets and tasks. Current research evaluates ensembling strategies such as greedy ensemble selection and gradient-free weight optimization with methods like CMA-ES, and explores meta-learning approaches for zero-shot AutoML and for multimodal data that combines tabular and text features. Such benchmarks are crucial for identifying the strengths and weaknesses of existing AutoML frameworks, guiding future development, and ultimately making automated machine learning more accessible and reliable in practice.
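
To make the ensembling discussion concrete, below is a minimal sketch of greedy ensemble selection in the style of Caruana et al., the post-hoc ensembling technique many AutoML frameworks use and that benchmarks commonly evaluate. All names (greedy_ensemble_selection, val_preds, n_rounds) are illustrative, not taken from any particular framework's API.

```python
# Minimal sketch of greedy ensemble selection (Caruana-style), assuming
# per-model class-probability predictions on a held-out validation set.
import numpy as np

def greedy_ensemble_selection(val_preds, y_val, n_rounds=20):
    """Greedily add base-model predictions (with replacement) so that the
    averaged ensemble minimizes validation 0-1 error.

    val_preds: shape (n_models, n_samples, n_classes), validation-set
               class probabilities for each base model.
    y_val:     integer class labels, shape (n_samples,).
    Returns the list of selected model indices (repeats allowed).
    """
    n_models = val_preds.shape[0]
    selected = []
    ensemble_sum = np.zeros_like(val_preds[0])

    for _ in range(n_rounds):
        best_idx, best_err = None, np.inf
        for m in range(n_models):
            # Average of the current ensemble plus candidate model m.
            candidate = (ensemble_sum + val_preds[m]) / (len(selected) + 1)
            err = np.mean(candidate.argmax(axis=1) != y_val)
            if err < best_err:
                best_idx, best_err = m, err
        selected.append(best_idx)
        ensemble_sum += val_preds[best_idx]

    return selected

# Usage with purely synthetic "model predictions":
rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(3), size=(5, 100))  # 5 models, 100 samples, 3 classes
labels = rng.integers(0, 3, size=100)
print(greedy_ensemble_selection(preds, labels, n_rounds=10))
```

The gradient-free alternative mentioned above would instead treat the per-model ensemble weights as a continuous vector and optimize them with an evolution strategy such as CMA-ES against the same validation metric; the benchmarked comparisons contrast that approach with the discrete greedy loop sketched here.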

Papers