Comparative Study
Comparative studies are a cornerstone of scientific advancement, providing a rigorous way to evaluate different approaches to solving a problem or understanding a phenomenon. Current research focuses on comparing machine learning models (e.g., CNNs, Transformers, LLMs, and GANs) across diverse applications, including image classification, natural language processing, and optimization. These comparisons typically analyze how hyperparameters, data augmentation techniques, and training strategies affect model performance and efficiency, informing the design of better algorithms and more effective solutions. The insights gained from such studies advance both theoretical understanding and practical applications across many scientific disciplines and industrial sectors. A minimal comparative setup is sketched below.
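The sketch below illustrates the basic pattern shared by many of these comparisons: several models are evaluated on the same data with identical cross-validation splits, so that reported differences reflect the models rather than the partitioning. The dataset, the two classifiers, and the accuracy metric are illustrative assumptions for the example and are not drawn from any of the papers listed here.

```python
# Minimal sketch of a comparative study (assumed setup, not from the listed papers):
# two classifiers are scored with the same stratified cross-validation folds.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic classification data stands in for a real benchmark.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Same folds and metric for every model keeps the comparison fair.
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

In practice, the same loop can be extended with different hyperparameter settings, augmentation pipelines, or training strategies, which is how studies of the kind listed below structure their experiments.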
Papers
Comparative analysis of machine learning and numerical modeling for combined heat transfer in Polymethylmethacrylate
Mahsa Dehghan Manshadi, Nima Alafchi, Alireza Taat, Milad Mousavi, Amir Mosavi
A Comparative Study of Faithfulness Metrics for Model Interpretability Methods
Chun Sik Chan, Huanqi Kong, Guanqing Liang
A pipeline and comparative study of 12 machine learning models for text classification
Annalisa Occhipinti, Louis Rogers, Claudio Angione
Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study
Serra Sinem Tekiroglu, Helena Bonaldi, Margherita Fanton, Marco Guerini
A Comparative Study of Fusion Methods for SASV Challenge 2022
Petr Grinberg, Vladislav Shikhov
A comparative study between linear and nonlinear speech prediction
Marcos Faundez-Zanuy, Enric Monte, Francesc Vallverdú
A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings
Fan Yu, Zhihao Du, Shiliang Zhang, Yuxiao Lin, Lei Xie
Exploiting Single-Channel Speech for Multi-Channel End-to-End Speech Recognition: A Comparative Study
Keyu An, Ji Xiao, Zhijian Ou