Artificial Intelligence Model
Artificial intelligence (AI) models are evolving rapidly, and current research focuses on improving their reliability, security, and fairness. Key areas of investigation include mitigating model errors and vulnerabilities (such as adversarial attacks), ensuring robustness across diverse datasets and deployment contexts, and addressing biases that can lead to unfair or culturally insensitive outputs. These advances are crucial for building trust in AI systems and enabling their safe and effective deployment across sectors ranging from healthcare and finance to manufacturing and autonomous systems.
Papers
Rethinking AI Cultural Evaluation
Michal Bravansky, Filip Trhlik, Fazl Barez
Data and System Perspectives of Sustainable Artificial Intelligence
Tao Xie, David Harel, Dezhi Ran, Zhenwen Li, Maoliang Li, Zhi Yang, Leye Wang, Xiang Chen, Ying Zhang, Wentao Zhang, Meng Li, Chen Zhang, Linyi Li, Assaf Marron
AI-Spectra: A Visual Dashboard for Model Multiplicity to Enhance Informed and Transparent Decision-Making
Gilles Eerlings, Sebe Vanbrabant, Jori Liesenborgs, Gustavo Rovelo Ruiz, Davy Vanacken, Kris Luyten
How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception
Sahibzada Adil Shahzad, Ammarah Hashmi, Yan-Tsung Peng, Yu Tsao, Hsin-Min Wang