Global Impact
Research on global impact examines how various factors shape the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on the effects of data characteristics (e.g., homophily, outliers, imbalanced classes), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) on model behavior and outcomes. These studies are crucial for improving model robustness, fairness, and efficiency, leading to more reliable applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The ultimate goal is to develop responsible and effective AI systems that minimize unintended consequences and maximize societal benefit.
Papers
Media Bias Matters: Understanding the Impact of Politically Biased News on Vaccine Attitudes in Social Media
Bohan Jiang, Lu Cheng, Zhen Tan, Ruocheng Guo, Huan Liu
Investigation of the Impact of Synthetic Training Data in the Industrial Application of Terminal Strip Object Detection
Nico Baumgart, Markus Lange-Hegermann, Mike Mücke
Impact of Decentralized Learning on Player Utilities in Stackelberg Games
Kate Donahue, Nicole Immorlica, Meena Jagadeesan, Brendan Lucier, Aleksandrs Slivkins
Enhancing Steganographic Text Extraction: Evaluating the Impact of NLP Models on Accuracy and Semantic Coherence
Mingyang Li, Maoqin Yuan, Luyao Li, Han Pengsihua
Impact of network topology on the performance of Decentralized Federated Learning
Luigi Palmieri, Chiara Boldrini, Lorenzo Valerio, Andrea Passarella, Marco Conti
LLM Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History
Akash Gupta, Ivaxi Sheth, Vyas Raina, Mark Gales, Mario Fritz