Global Impact
Research on global impact examines how various factors shape the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on how data characteristics (e.g., homophily, outliers, class imbalance), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) affect model behavior and outcomes. These studies are crucial for improving model robustness, fairness, and efficiency, yielding more reliable and beneficial applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The overarching goal is to develop responsible, effective AI systems that minimize unintended consequences and maximize societal benefit.
Papers
Evaluating Detection Thresholds: The Impact of False Positives and Negatives on Super-Resolution Ultrasound Localization Microscopy
Sepideh K. Gharamaleki, Brandon Helfield, Hassan Rivaz
Tooling or Not Tooling? The Impact of Tools on Language Agents for Chemistry Problem Solving
Botao Yu, Frazier N. Baker, Ziru Chen, Garrett Herb, Boyu Gou, Daniel Adu-Ampratwum, Xia Ning, Huan Sun
HarmLevelBench: Evaluating Harm-Level Compliance and the Impact of Quantization on Model Alignment
Yannis Belkhiter, Giulio Zizzo, Sergio Maffeis
Identifying the impact of local connectivity patterns on dynamics in excitatory-inhibitory networks
Yuxiu Shao, David Dahmen, Stefano Recanatesi, Eric Shea-Brown, Srdjan Ostojic (School of Systems Science, Beijing Normal University, Beijing, China; Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France; Institute for Advanced Simulation (IAS-6), Computational and Systems Neuroscience, Jülich Research Center, Jülich, Germany; Technion - Israel Institute of Technology, Haifa, Israel; Department of Applied Mathematics and Computational Neuroscience Center, University of Washington, Seattle, WA, USA; Allen Institute for Brain Science, Seattle, WA, USA)
On the Impact of White-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh, Bram Adams, Ahmed E. Hassan
Evaluating the Impact of Lab Test Results on Large Language Models Generated Differential Diagnoses from Clinical Case Vignettes
Balu Bhasuran, Qiao Jin, Yuzhang Xie, Carl Yang, Karim Hanna, Jennifer Costa, Cindy Shavor, Zhiyong Lu, Zhe He
Counting Ability of Large Language Models and Impact of Tokenization
Xiang Zhang, Juntai Cao, Chenyu You
Impact of Leakage on Data Harmonization in Machine Learning Pipelines in Class Imbalance Across Sites
Nicolás Nieto, Simon B. Eickhoff, Christian Jung, Martin Reuter, Kersten Diers, Malte Kelm, Artur Lichtenberg, Federico Raimondo, Kaustubh R. Patil
On the Diversity of Synthetic Data and its Impact on Training Large Language Models
Hao Chen, Abdul Waheed, Xiang Li, Yidong Wang, Jindong Wang, Bhiksha Raj, Marah I. Abdin
Toward Robust RALMs: Revealing the Impact of Imperfect Retrieval on Retrieval-Augmented Language Models
Seong-Il Park, Jay-Yoon Lee