Consistent Comparison
Consistent comparison across diverse methods and datasets is a crucial aspect of many scientific fields, with the aim of objectively evaluating model performance, identifying optimal approaches, and guiding improvements. Current research focuses on comparing various model architectures (e.g., convolutional neural networks, transformers, autoencoders) and algorithms (e.g., reinforcement learning, genetic programming) across different applications, including medical image analysis, natural language processing, and robotics. Such comparative studies are essential for advancing methodological rigor, informing best practices, and ultimately improving the reliability and effectiveness of models in scientific and practical domains. A sketch of what a consistent evaluation protocol can look like in practice follows below.
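As a minimal illustration of the idea, the sketch below evaluates several candidate models under an identical protocol: the same dataset, the same fixed cross-validation splits, and the same metric. The specific dataset, models, and metric are illustrative assumptions for this sketch and are not drawn from any of the papers listed here.

```python
# Minimal sketch of a consistent comparison harness: every candidate model is
# evaluated on the same dataset, the same cross-validation splits, and the same
# metric, so score differences reflect the models rather than the protocol.
# The dataset, models, and metric here are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Fixing the random seed ensures every model sees identical train/test splits.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    # Same splits and same scoring function for every candidate.
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

The key design choice is that the split generator and the metric are defined once and shared, rather than re-sampled per model, which is what makes the resulting scores directly comparable.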
Papers
Comparison of Pedestrian Prediction Models from Trajectory and Appearance Data for Autonomous Driving
Anthony Knittel, Morris Antonello, John Redford, Subramanian Ramamoorthy
Interpretable Machine Learning based on Functional ANOVA Framework: Algorithms and Comparisons
Linwei Hu, Vijayan N. Nair, Agus Sudjianto, Aijun Zhang, Jie Chen
Comparison of machine learning models applied on anonymized data with different techniques
Judith Sáinz-Pardo Díaz, Álvaro López García
Machine Vision Using Cellphone Camera: A Comparison of deep networks for classifying three challenging denominations of Indian Coins
Keyur D. Joshi, Dhruv Shah, Varshil Shah, Nilay Gandhi, Sanket J. Shah, Sanket B. Shah
SafeWebUH at SemEval-2023 Task 11: Learning Annotator Disagreement in Derogatory Text: Comparison of Direct Training vs Aggregation
Sadat Shahriar, Thamar Solorio
A Comparison of Pneumatic Actuators for Soft Growing Vine Robots
Alexander M. Kübler, Cosima du Pasquier, Andrew Low, Betim Djambazi, Nicolas Aymon, Julian Förster, Nathaniel Agharese, Roland Siegwart, Allison M. Okamura