Comparative Study
Comparative studies are a cornerstone of scientific progress: they rigorously evaluate competing approaches to the same problem or phenomenon under a common protocol. Current research compares machine learning architectures such as CNNs, Transformers, LLMs, and GANs across applications including image classification, natural language processing, and optimization. These comparisons typically analyze how hyperparameters, data augmentation techniques, and training strategies affect model performance and efficiency, informing the design of better algorithms and more effective solutions. The resulting insights advance both theoretical understanding and practical applications across scientific disciplines and industrial sectors.
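To make the comparison setup concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers listed below) of a comparative evaluation in Python with scikit-learn: several candidate models are scored on the same data with shared cross-validation splits, so differences in accuracy reflect the models rather than the evaluation protocol. The dataset, model choices, and hyperparameters are illustrative placeholders.

# Minimal comparative-study sketch: evaluate several candidate models on
# identical cross-validation splits and report mean accuracy per model.
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # shared splits

candidates = {
    "logistic_regression": LogisticRegression(max_iter=2000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf": SVC(kernel="rbf", C=1.0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:22s} mean acc = {scores.mean():.3f} +/- {scores.std():.3f}")

Keeping the splits, metric, and preprocessing fixed across candidates is the key design choice; only the model under test varies, which is what makes the resulting numbers comparable.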
Papers
Variational Autoencoder for Anomaly Detection: A Comparative Study
Huy Hoang Nguyen, Cuong Nhat Nguyen, Xuan Tung Dao, Quoc Trung Duong, Dzung Pham Thi Kim, Minh-Tan Pham
Utilizing Large Language Models for Named Entity Recognition in Traditional Chinese Medicine against COVID-19 Literature: Comparative Study
Xu Tong, Nina Smirnova, Sharmila Upadhyaya, Ran Yu, Jack H. Culbert, Chao Sun, Wolfgang Otto, Philipp Mayr
Prompt Recovery for Image Generation Models: A Comparative Study of Discrete Optimizers
Joshua Nathaniel Williams, Avi Schwarzschild, J. Zico Kolter
Design Proteins Using Large Language Models: Enhancements and Comparative Analyses
Kamyar Zeinalipour, Neda Jamshidi, Monica Bianchini, Marco Maggini, Marco Gori
A Comparative Analysis of CNN-based Deep Learning Models for Landslide Detection
Omkar Oak, Rukmini Nazre, Soham Naigaonkar, Suraj Sawant, Himadri Vaidya
A Comparative Analysis of Wealth Index Predictions in Africa between three Multi-Source Inference Models
Márton Karsai, János Kertész, Lisette Espín-Noboa