Consistent Comparison
Consistent comparison across diverse methods and datasets is crucial in many scientific fields: it enables objective evaluation of model performance and identification of the best-performing approaches. Current research compares model architectures (e.g., convolutional neural networks, transformers, autoencoders) and algorithms (e.g., reinforcement learning, genetic programming) across applications such as medical image analysis, natural language processing, and robotics. Such comparative studies advance methodological rigor, inform best practices, and ultimately improve the reliability and effectiveness of models in both scientific and practical settings.
Papers
Interpretation of High-Dimensional Regression Coefficients by Comparison with Linearized Compressing Features
Joachim Schaeffer, Jinwook Rhyu, Robin Droop, Rolf Findeisen, Richard Braatz
Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods
Egor Kovalev, Georgii Bychkov, Khaled Abud, Aleksandr Gushchin, Anna Chistyakova, Sergey Lavrushkin, Dmitriy Vatolin, Anastasia Antsiferova
Do we need more complex representations for structure? A comparison of note duration representation for Music Transformers
Gabriel Souza, Flavio Figueiredo, Alexei Machado, Deborah Guimarães
Comparison of deep learning and conventional methods for disease onset prediction
Luis H. John, Chungsoo Kim, Jan A. Kors, Junhyuk Chang, Hannah Morgan-Cooper, Priya Desai, Chao Pang, Peter R. Rijnbeek, Jenna M. Reps, Egill A. Fridgeirsson