Multi-Objective Hyperparameter Optimization
Multi-objective hyperparameter optimization (MO-HPO) tackles the challenge of simultaneously optimizing multiple, often conflicting, performance metrics of machine learning models, such as predictive accuracy and computational cost. Because no single configuration is usually best on every metric, the goal is to recover a Pareto front: the set of configurations that cannot be improved on one objective without degrading another. Current research focuses on adapting and extending single-objective optimization algorithms such as Population-Based Training (PBT) and Tree-structured Parzen Estimators (TPE) to the multi-objective setting, and on developing novel algorithms such as ADUMBO to explore the Pareto front efficiently. This work is crucial for building more efficient and sustainable AI systems, improving model performance across multiple criteria, and enabling the deployment of complex models in resource-constrained environments.
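To make the Pareto-front idea concrete, here is a minimal, self-contained sketch: random search over two hyperparameters with two toy objectives (validation error and inference cost, both minimized), followed by non-dominated filtering. The objective formulas and the hyperparameter names `lr` and `width` are illustrative assumptions, not taken from PBT, TPE, or ADUMBO.

```python
import random

def evaluate(lr, width):
    """Toy surrogate for a train/evaluate loop.

    Returns (error, cost), both to be minimized. The formulas are
    illustrative assumptions, not measurements from any benchmark.
    """
    error = (lr - 0.1) ** 2 + 1.0 / width + random.gauss(0, 0.01)
    cost = width * 1e-3  # wider models are assumed to cost more to serve
    return error, cost

def dominates(a, b):
    """True if objective vector a Pareto-dominates b:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(trials):
    """Keep only the non-dominated (objectives, config) pairs."""
    return [t for t in trials
            if not any(dominates(u[0], t[0]) for u in trials if u is not t)]

random.seed(0)
trials = []
for _ in range(100):  # plain random search stands in for a smarter MO-HPO method
    cfg = {"lr": 10 ** random.uniform(-4, 0), "width": random.randint(8, 512)}
    trials.append((evaluate(cfg["lr"], cfg["width"]), cfg))

for (err, cost), cfg in sorted(pareto_front(trials), key=lambda t: t[0]):
    print(f"error={err:.4f}  cost={cost:.4f}  config={cfg}")
```

Dedicated MO-HPO methods replace the random sampling step with a model-guided one, but the dominance filter is the same. In practice, libraries such as Optuna expose this directly: `optuna.create_study(directions=["minimize", "minimize"])` runs a multi-objective study, and `study.best_trials` returns the Pareto-optimal trials.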