Global Impact
Research on global impact examines how various factors shape the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on how data characteristics (e.g., homophily, outliers, imbalanced classes), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) affect model behavior and outcomes. These studies are crucial for improving model robustness, fairness, and efficiency, yielding more reliable and beneficial applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The overarching goal is to develop responsible, effective AI systems that minimize unintended consequences and maximize societal benefit.
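As a concrete instance of one data characteristic named above, the sketch below (purely illustrative, not drawn from any of the listed papers) shows why imbalanced classes distort plain accuracy as a summary of model behavior: a degenerate majority-class predictor scores highly on accuracy while balanced accuracy exposes that it learned nothing about the minority class.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, so each class counts equally."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 95:5 imbalanced labels; a trivial model that always predicts class 0.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))           # 0.95 — looks strong
print(balanced_accuracy(y_true, y_pred))  # 0.5  — no better than chance
```

This is one reason studies of data characteristics matter: the choice of evaluation metric interacts with the class distribution, and conclusions about model quality can flip depending on which is reported.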
Papers
Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization
Aniruddha Nrusimha, Mayank Mishra, Naigang Wang, Dan Alistarh, Rameswar Panda, Yoon Kim
A Methodology to Study the Impact of Spiking Neural Network Parameters considering Event-Based Automotive Data
Iqra Bano, Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique
The Impact of Unstated Norms in Bias Analysis of Language Models
Farnaz Kohankhaki, D. B. Emerson, Jacob-Junqi Tian, Laleh Seyyed-Kalantari, Faiza Khan Khattak
Exploring the Impact of the Output Format on the Evaluation of Large Language Models for Code Translation
Marcos Macedo, Yuan Tian, Filipe R. Cogo, Bram Adams
The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
Impact of Video Compression Artifacts on Fisheye Camera Visual Perception Tasks
Madhumitha Sakthi, Louis Kerofsky, Varun Ravi Kumar, Senthil Yogamani