Empirical Study
Empirical studies across diverse fields rigorously evaluate the capabilities and limitations of machine learning models, particularly large language models and neural networks. Current research focuses on assessing performance across tasks (e.g., question answering, image classification, code generation), investigating the impact of model architecture and hyperparameter choices, and analyzing robustness to challenges such as adversarial attacks and data imbalance. These studies yield crucial insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
Papers
Could We Generate Cytology Images from Histopathology Images? An Empirical Study
Soumyajyoti Dey, Sukanta Chakraborty, Utso Guha Roy, Nibaran Das
Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R
Amirreza Esmaeili, Iman Saberi, Fatemeh H. Fard
Bugs in Large Language Models Generated Code: An Empirical Study
Florian Tambon, Arghavan Moradi Dakhel, Amin Nikanjam, Foutse Khomh, Michel C. Desmarais, Giuliano Antoniol
An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model
Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv
Key Design Choices in Source-Free Unsupervised Domain Adaptation: An In-depth Empirical Analysis
Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale
An Empirical Study of Challenges in Machine Learning Asset Management
Zhimin Zhao, Yihao Chen, Abdul Ali Bangash, Bram Adams, Ahmed E. Hassan