Empirical Study
Empirical studies across diverse fields are rigorously evaluating the capabilities and limitations of machine learning models, particularly large language models and neural networks. Current research focuses on assessing model performance across tasks such as question answering, image classification, and code generation; investigating the impact of model architecture and hyperparameter choices; and analyzing robustness to challenges such as adversarial attacks and data imbalance. These studies provide crucial insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
Papers
Empirical Study on Optimizer Selection for Out-of-Distribution Generalization
Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas
Behavior of Hyper-Parameters for Selected Machine Learning Algorithms: An Empirical Investigation
Anwesha Bhattacharyya, Joel Vaughan, Vijayan N. Nair
Is margin all you need? An extensive empirical study of active learning on tabular data
Dara Bahri, Heinrich Jiang, Tal Schuster, Afshin Rostamizadeh
Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models
Emmy Liu, Graham Neubig
An Empirical Study on How the Developers Discussed about Pandas Topics
Sajib Kumar Saha Joy, Farzad Ahmed, Al Hasib Mahamud, Nibir Chandra Mandal