Empirical Study
Empirical studies across diverse fields rigorously evaluate the capabilities and limitations of machine learning models, particularly large language models and neural networks. Current research focuses on assessing model performance on tasks such as question answering, image classification, and code generation; investigating the impact of model architecture and hyperparameter tuning; and analyzing robustness to challenges such as adversarial attacks and data imbalance. These studies provide crucial insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
Papers
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives
Xinpeng Wang, Leonie Weissweiler, Hinrich Schütze, Barbara Plank
Prompting Large Language Models for Counterfactual Generation: An Empirical Study
Yongqi Li, Mayi Xu, Xin Miao, Shen Zhou, Tieyun Qian
Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation
Bashar Alhafni, Go Inoue, Christian Khairallah, Nizar Habash