Empirical Study
Empirical studies across diverse fields rigorously evaluate the capabilities and limitations of machine learning models, particularly large language models and neural networks. Current research focuses on assessing model performance across tasks such as question answering, image classification, and code generation; investigating the impact of model architecture and hyperparameter choices; and analyzing model robustness to challenges such as adversarial attacks and data imbalance. These studies provide crucial insights into model behavior, identify areas for improvement, and inform the development of more reliable and effective AI systems for both scientific research and practical applications.
Papers
An Empirical Study of Validating Synthetic Data for Formula Generation
Usneek Singh, José Cambronero, Sumit Gulwani, Aditya Kanade, Anirudh Khatry, Vu Le, Mukul Singh, Gust Verbruggen
An Empirical Study of Mamba-based Pedestrian Attribute Recognition
Xiao Wang, Weizhe Kong, Jiandong Jin, Shiao Wang, Ruichong Gao, Qingchuan Ma, Chenglong Li, Jin Tang
Context Matters: An Empirical Study of the Impact of Contextual Information in Temporal Question Answering Systems
Dan Schumacher, Fatemeh Haji, Tara Grey, Niharika Bandlamudi, Nupoor Karnik, Gagana Uday Kumar, Jason Cho-Yu Chiang, Paul Rad, Nishant Vishwamitra, Anthony Rios
Segment Anything Model for automated image data annotation: empirical studies using text prompts from Grounding DINO
Fuseini Mumuni, Alhassan Mumuni
Benchmarking Deep Learning Models on NVIDIA Jetson Nano for Real-Time Systems: An Empirical Investigation
Tushar Prasanna Swaminathan, Christopher Silver, Thangarajah Akilan
Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study
Keyu Wang, Guilin Qi, Jiaqi Li, Songlin Zhai
An Empirical Study on the Characteristics of Bias upon Context Length Variation for Bangla
Jayanta Sadhu, Ayan Antik Khan, Abhik Bhattacharjee, Rifat Shahriyar