Consistent Comparison
Consistent comparison across diverse methods and datasets is crucial in many scientific fields, enabling objective evaluation of model performance and the identification of optimal approaches. Current research compares model architectures (e.g., convolutional neural networks, transformers, autoencoders) and algorithms (e.g., reinforcement learning, genetic programming) across applications including medical image analysis, natural language processing, and robotics. Such comparative studies are essential for advancing methodological rigor, informing best practices, and ultimately improving the reliability and effectiveness of models across scientific and practical domains.
Papers
Towards Rapid Prototyping and Comparability in Active Learning for Deep Object Detection
Tobias Riedlinger, Marius Schubert, Karsten Kahl, Hanno Gottschalk, Matthias Rottmann
Comparison and Evaluation of Methods for a Predict+Optimize Problem in Renewable Energy
Christoph Bergmeir, Frits de Nijs, Abishek Sriramulu, Mahdi Abolghasemi, Richard Bean, John Betts, Quang Bui, Nam Trong Dinh, Nils Einecke, Rasul Esmaeilbeigi, Scott Ferraro, Priya Galketiya, Evgenii Genov, Robert Glasgow, Rakshitha Godahewa, Yanfei Kang, Steffen Limmer, Luis Magdalena, Pablo Montero-Manso, Daniel Peralta, Yogesh Pipada Sunil Kumar, Alejandro Rosales-Pérez, Julian Ruddick, Akylas Stratigakos, Peter Stuckey, Guido Tack, Isaac Triguero, Rui Yuan
Comparison of machine learning algorithms for merging gridded satellite and earth-observed precipitation data
Georgia Papacharalampous, Hristos Tyralis, Anastasios Doulamis, Nikolaos Doulamis
Comparison of Model-Free and Model-Based Learning-Informed Planning for PointGoal Navigation
Yimeng Li, Arnab Debnath, Gregory J. Stein, Jana Kosecka