Global Impact
Research on global impact examines how various factors shape the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on understanding how data characteristics (e.g., homophily, outliers, imbalanced classes), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) affect model behavior and outcomes. These studies are crucial for improving model robustness, fairness, and efficiency, leading to more reliable and beneficial applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The overarching goal is to develop responsible and effective AI systems that minimize unintended consequences and maximize societal benefit.
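As a concrete illustration of the kind of impact study described above, the following minimal Python sketch (not taken from any of the listed papers; all dataset parameters and thresholds are illustrative assumptions) varies a single data characteristic, class imbalance, while holding the model fixed, and compares performance metrics across conditions.

# Minimal sketch, assuming scikit-learn is available: measure how class
# imbalance alone affects a fixed model's performance. The sampling sizes,
# imbalance levels, and metric choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, f1_score

def evaluate_under_imbalance(minority_fraction: float, seed: int = 0) -> dict:
    """Train an identical model under a given class ratio and report metrics."""
    # Synthetic data whose only varied property is the minority-class fraction.
    X, y = make_classification(
        n_samples=5000,
        n_features=20,
        weights=[1.0 - minority_fraction, minority_fraction],
        random_state=seed,
    )
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return {
        "minority_fraction": minority_fraction,
        "balanced_accuracy": balanced_accuracy_score(y_te, pred),
        "minority_f1": f1_score(y_te, pred, pos_label=1),
    }

if __name__ == "__main__":
    # Compare a balanced setting against increasingly imbalanced ones.
    for frac in (0.5, 0.2, 0.05):
        print(evaluate_under_imbalance(frac))

The same ablation pattern generalizes to the other factors named above (e.g., swapping architectures or regularization settings while keeping the data fixed), which is how such studies typically isolate the effect of one factor at a time.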
Papers
Measuring The Impact Of Programming Language Distribution
Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta
Analyzing the impact of climate change on critical infrastructure from the scientific literature: A weakly supervised NLP approach
Tanwi Mallick, Joshua David Bergerson, Duane R. Verner, John K Hutchison, Leslie-Anne Levy, Prasanna Balaprakash
Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods
Astrid Berg, Eva Vandersmissen, Maria Wimmer, David Major, Theresa Neubauer, Dimitrios Lenis, Jeroen Cant, Annemiek Snoeckx, Katja Bühler
A Framework for Evaluating the Impact of Food Security Scenarios
Rachid Belmeskine, Abed Benaichouche