Consistent Comparison
Consistent comparison across diverse methods and datasets is crucial in many scientific fields: it enables objective evaluation of model performance and identification of the best-performing approaches. Current research compares model architectures (e.g., convolutional neural networks, transformers, autoencoders) and algorithms (e.g., reinforcement learning, genetic programming) across applications such as medical image analysis, natural language processing, and robotics. Such comparative studies advance methodological rigor, inform best practices, and ultimately improve the reliability and effectiveness of models in both scientific and practical settings.
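As a rough illustration of what a consistent comparison means in practice, the sketch below scores several candidate models on the same dataset with identical cross-validation splits and a shared metric, so that only the method under evaluation varies. The dataset, model choices, and metric are illustrative assumptions chosen for brevity; they are not drawn from the papers listed below.

```python
# Minimal sketch (illustrative assumptions): a consistent comparison in which
# several models are evaluated on the same dataset, with identical
# cross-validation splits and a shared metric, so only the method changes.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

# Fixed folds and seed: every model sees exactly the same train/test splits.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Holding the data splits, preprocessing, and metric fixed is what makes the resulting scores comparable across methods; varying any of these alongside the model would confound the comparison.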
Papers
SafeWebUH at SemEval-2023 Task 11: Learning Annotator Disagreement in Derogatory Text: Comparison of Direct Training vs Aggregation
Sadat Shahriar, Thamar Solorio
A Comparison of Pneumatic Actuators for Soft Growing Vine Robots
Alexander M. Kübler, Cosima du Pasquier, Andrew Low, Betim Djambazi, Nicolas Aymon, Julian Förster, Nathaniel Agharese, Roland Siegwart, Allison M. Okamura
A comparison of short-term probabilistic forecasts for the incidence of COVID-19 using mechanistic and statistical time series models
Nicolas Banholzer, Thomas Mellan, H Juliette T Unwin, Stefan Feuerriegel, Swapnil Mishra, Samir Bhatt
Comparison of Optimization-Based Methods for Energy-Optimal Quadrotor Motion Planning
Welf Rehberg, Joaquim Ortiz-Haro, Marc Toussaint, Wolfgang Hönig
XAI-based Comparison of Input Representations for Audio Event Classification
Annika Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben