Global Impact
Research on global impact examines how various factors influence the performance, fairness, and broader consequences of machine learning models and algorithms across diverse applications. Current investigations focus on how data characteristics (e.g., homophily, outliers, imbalanced classes), model architectures (e.g., CNNs, LLMs, GNNs), and training methodologies (e.g., regularization, transfer learning) shape model behavior and outcomes. Such studies are central to improving model robustness, fairness, and efficiency, leading to more reliable and beneficial applications in fields ranging from healthcare and autonomous systems to open-source software development and environmental monitoring. The overarching goal is to develop responsible, effective AI systems that minimize unintended consequences and maximize societal benefit.
Papers
Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition
Mohamed Sadek Bouanane, Dalila Cherifi, Elisabetta Chicca, Lyes Khacef
Exploring the Impact of Noise and Degradations on Heart Sound Classification Models
Davoud Shariat Panah, Andrew Hines, Susan McKeever
Impact of Video Compression on the Performance of Object Detection Systems for Surveillance Applications
Michael O'Byrne, Vibhoothi, Mark Sugrue, Anil Kokaram
Impact of Adversarial Training on Robustness and Generalizability of Language Models
Enes Altinisik, Hassan Sajjad, Husrev Taha Sencar, Safa Messaoud, Sanjay Chawla