Global Impact
Research on global impact examines how various factors shape the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on the effects of data characteristics (e.g., homophily, outliers, imbalanced classes), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) on model behavior and outcomes. These studies are crucial for improving model robustness, fairness, and efficiency, leading to more reliable and beneficial applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The ultimate goal is to develop more responsible and effective AI systems that minimize unintended consequences and maximize societal benefit.
Papers
Exploring the Impact of the Output Format on the Evaluation of Large Language Models for Code Translation
Marcos Macedo, Yuan Tian, Filipe R. Cogo, Bram Adams
The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
Impact of Video Compression Artifacts on Fisheye Camera Visual Perception Tasks
Madhumitha Sakthi, Louis Kerofsky, Varun Ravi Kumar, Senthil Yogamani
Media Bias Matters: Understanding the Impact of Politically Biased News on Vaccine Attitudes in Social Media
Bohan Jiang, Lu Cheng, Zhen Tan, Ruocheng Guo, Huan Liu
Investigation of the Impact of Synthetic Training Data in the Industrial Application of Terminal Strip Object Detection
Nico Baumgart, Markus Lange-Hegermann, Mike Mücke