Empirical Study
Empirical studies across diverse fields rigorously evaluate the capabilities and limitations of machine learning models, particularly large language models and neural networks. Current research focuses on assessing model performance across different tasks (e.g., question answering, image classification, code generation), investigating the impact of model architecture and hyperparameter tuning, and analyzing the robustness of models to challenges such as adversarial attacks and data imbalance. These studies provide crucial insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
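As a concrete illustration of the kind of experiment such studies run, the sketch below (assuming scikit-learn; the synthetic dataset, hyperparameter grid, and metric are illustrative choices, not taken from any of the papers listed here) sweeps a single hyperparameter under balanced and imbalanced data and reports cross-validated balanced accuracy, touching on both the hyperparameter-tuning and data-imbalance themes mentioned above.

```python
"""Minimal empirical-study sketch (illustrative only): sweep one
hyperparameter under balanced vs. imbalanced data and compare scores."""
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Two synthetic data conditions: balanced classes vs. a 90/10 imbalance.
conditions = {
    "balanced": dict(weights=[0.5, 0.5]),
    "imbalanced": dict(weights=[0.9, 0.1]),
}
# Hyperparameter grid: inverse regularization strength C (illustrative values).
C_grid = [0.01, 0.1, 1.0, 10.0]

for name, kwargs in conditions.items():
    X, y = make_classification(
        n_samples=2000, n_features=20, n_informative=10,
        random_state=0, **kwargs,
    )
    for C in C_grid:
        model = LogisticRegression(C=C, max_iter=1000)
        # Balanced accuracy is less misleading than plain accuracy under imbalance.
        scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
        print(f"{name:10s} C={C:<5} balanced_acc={scores.mean():.3f} +/- {scores.std():.3f}")
```

Real studies of the kind summarized above replace the synthetic data and linear model with benchmark datasets and large models, but the basic pattern of controlled conditions, a hyperparameter sweep, and a robust evaluation metric is the same.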
Papers
Towards Intelligent Augmented Reality (iAR): A Taxonomy of Context, an Architecture for iAR, and an Empirical Study
Shakiba Davari, Daniel Stover, Alexander Giovannelli, Cory Ilo, Doug A. Bowman
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study
André Storhaug, Jingyue Li
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
ViMoE: An Empirical Study of Designing Vision Mixture-of-Experts
Xumeng Han, Longhui Wei, Zhiyang Dou, Zipeng Wang, Chenhui Qiang, Xin He, Yingfei Sun, Zhenjun Han, Qi Tian