Empirical Study
Empirical studies across diverse fields evaluate the capabilities and limitations of machine learning models, particularly large language models and other neural networks. Current research focuses on assessing model performance on tasks such as question answering, image classification, and code generation; investigating the effects of model architecture and hyperparameter choices; and analyzing robustness to challenges such as adversarial attacks and data imbalance. These studies yield insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
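The kind of evaluation described above typically reduces to running every model on every task over several random seeds and reporting aggregate scores. The sketch below illustrates that pattern with toy stand-ins; it is not drawn from any of the papers listed here, and all model names, task names, and data are hypothetical placeholders for trained models and benchmark datasets.

```python
"""Minimal sketch of an empirical model-comparison harness.

All models, tasks, and data below are hypothetical placeholders; a real
study would substitute trained models and benchmark datasets.
"""
import random
from statistics import mean

# Hypothetical "models": each maps an integer input to a predicted label.
def baseline_model(x):
    return x % 2  # trivial parity rule

def noisy_model(x):
    # Same rule, but with simulated prediction noise.
    return (x % 2) if random.random() > 0.1 else 1 - (x % 2)

MODELS = {"baseline": baseline_model, "noisy": noisy_model}

# Hypothetical "tasks": lists of (input, gold label) pairs.
TASKS = {
    "parity-small": [(i, i % 2) for i in range(100)],
    "parity-large": [(i, i % 2) for i in range(1000)],
}

def accuracy(model, examples):
    """Fraction of examples the model labels correctly."""
    return mean(1.0 if model(x) == y else 0.0 for x, y in examples)

def run_study(n_seeds=5):
    """Evaluate every model on every task, averaging over random seeds."""
    for task_name, examples in TASKS.items():
        for model_name, model in MODELS.items():
            scores = []
            for seed in range(n_seeds):
                random.seed(seed)
                scores.append(accuracy(model, examples))
            print(f"{task_name:13s} {model_name:9s} "
                  f"mean acc = {mean(scores):.3f} over {n_seeds} seeds")

if __name__ == "__main__":
    run_study()
```

Reporting the mean over multiple seeds, as done here, is one common way such studies quantify robustness rather than relying on a single run.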
Papers
Do we listen to what we are told? An empirical study on human behaviour during the COVID-19 pandemic: neural networks vs. regression analysis
Yuxi Heluo, Kexin Wang, Charles W. Robson
Can Large Language Models Understand Content and Propagation for Misinformation Detection: An Empirical Study
Mengyang Chen, Lingwei Wei, Han Cao, Wei Zhou, Songlin Hu
Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective
AKM Bahalul Haque, A. K. M. Najmul Islam, Patrick Mikalef
Discourse Relations Classification and Cross-Framework Discourse Relation Classification Through the Lens of Cognitive Dimensions: An Empirical Investigation
Yingxue Fu
An Empirical Study of Frame Selection for Text-to-Video Retrieval
Mengxia Wu, Min Cao, Yang Bai, Ziyin Zeng, Chen Chen, Liqiang Nie, Min Zhang
An Empirical Study of Translation Hypothesis Ensembling with Large Language Models
António Farinhas, José G. C. de Souza, André F. T. Martins
An empirical study of automatic wildlife detection using drone thermal imaging and object detection
Miao Chang, Tan Vuong, Manas Palaparthi, Lachlan Howell, Alessio Bonti, Mohamed Abdelrazek, Duc Thanh Nguyen