Artificial Intelligence Model
Artificial intelligence (AI) models are rapidly evolving, with current research focusing on improving their reliability, security, and fairness. Key areas of investigation include mitigating model errors, defending against adversarial attacks, ensuring robustness across diverse datasets and contexts, and addressing biases that can lead to unfair or culturally insensitive outputs. These advancements are crucial for building trust in AI systems and enabling their safe and effective deployment across sectors ranging from healthcare and finance to manufacturing and autonomous systems.
Papers
AI-Spectra: A Visual Dashboard for Model Multiplicity to Enhance Informed and Transparent Decision-Making
Gilles Eerlings, Sebe Vanbrabant, Jori Liesenborgs, Gustavo Rovelo Ruiz, Davy Vanacken, Kris Luyten
How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception
Sahibzada Adil Shahzad, Ammarah Hashmi, Yan-Tsung Peng, Yu Tsao, Hsin-Min Wang
Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers
Md Kamrul Siam, Huanying Gu, Jerry Q. Cheng
Dynamic Intelligence Assessment: Benchmarking LLMs on the Road to AGI with a Focus on Model Confidence
Norbert Tihanyi, Tamas Bisztray, Richard A. Dubniczky, Rebeka Toth, Bertalan Borsos, Bilel Cherif, Mohamed Amine Ferrag, Lajos Muzsai, Ridhi Jain, Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah
Economic Anthropology in the Era of Generative Artificial Intelligence
Zachary Sheldon, Peeyush Kumar