Comparative Study
Comparative studies are a cornerstone of scientific advancement: they rigorously evaluate competing approaches to solving a problem or understanding a phenomenon. Current research focuses on comparing machine learning models (e.g., CNNs, Transformers, LLMs, and GANs) across diverse applications, including image classification, natural language processing, and optimization. These comparisons often analyze how different hyperparameters, data augmentation techniques, and training strategies affect model performance and efficiency, leading to improved algorithms and more effective solutions. The insights gained from such studies are crucial for advancing both theoretical understanding and practical applications across numerous scientific disciplines and industrial sectors.
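To make the idea concrete, a minimal sketch of a comparative-study workflow is shown below: two toy classifiers are evaluated with the same k-fold split on the same data, so their scores are directly comparable. Everything here (the `majority_baseline` and `nearest_neighbor` models, the synthetic two-cluster dataset) is an illustrative assumption, not taken from any of the papers listed.

```python
# Illustrative sketch of a comparative study: evaluate two toy classifiers
# under identical k-fold cross-validation folds. Model and function names
# are hypothetical; real studies swap in CNNs, Transformers, etc.
import random

def majority_baseline(train, test):
    """Predict the most common training label for every test point."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return [majority for _ in test]

def nearest_neighbor(train, test):
    """Predict the label of the closest training point (1-NN, squared L2)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(train, key=lambda p: dist(p[0], x))[1] for x, _ in test]

def k_fold_accuracy(model, data, k=5):
    """Mean accuracy over k folds; the folds are the same for every model."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        preds = model(train, test)
        scores.append(sum(p == y for p, (_, y) in zip(preds, test)) / len(test))
    return sum(scores) / k

random.seed(0)
# Synthetic data: label 0 clustered near (0, 0), label 1 near (5, 5).
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)] + \
       [((random.gauss(5, 1), random.gauss(5, 1)), 1) for _ in range(50)]
random.shuffle(data)

for name, model in [("majority", majority_baseline), ("1-NN", nearest_neighbor)]:
    print(f"{name}: {k_fold_accuracy(model, data):.2f}")
```

Holding the data, folds, and metric fixed while varying only the model is what makes the resulting numbers a fair comparison; the same pattern extends to comparing architectures, hyperparameters, or augmentation strategies.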
Papers
A comparative analysis of deep learning models for lung segmentation on X-ray images
Weronika Hryniewska-Guzik, Jakub Bilski, Bartosz Chrostowski, Jakub Drak Sbahi, Przemysław Biecek
Deep Reinforcement Learning for Personalized Diagnostic Decision Pathways Using Electronic Health Records: A Comparative Study on Anemia and Systemic Lupus Erythematosus
Lillian Muyama, Antoine Neuraz, Adrien Coulet
Deep models for stroke segmentation: do complex architectures always perform better?
Yalda Zafari-Ghadim, Ahmed Soliman, Yousif Yousif, Ahmed Ibrahim, Essam A. Rashed, Mohamed Mabrok
A Comparative Analysis of Visual Odometry in Virtual and Real-World Railways Environments
Gianluca D'Amico, Mauro Marinoni, Giorgio Buttazzo
A comparative analysis of embedding models for patent similarity
Grazia Sveva Ascione, Valerio Sterzi
Emotion Detection with Transformers: A Comparative Study
Mahdi Rezapour
Evaluating Named Entity Recognition: A comparative analysis of mono- and multilingual transformer models on a novel Brazilian corporate earnings call transcripts dataset
Ramon Abilio, Guilherme Palermo Coelho, Ana Estela Antunes da Silva