Artificial Intelligence Model
Artificial intelligence (AI) models are rapidly evolving, with current research focusing on improving their reliability, security, and fairness. Key areas of investigation include mitigating model errors and vulnerabilities to adversarial attacks, ensuring robustness across diverse datasets and deployment contexts, and addressing biases that can lead to unfair or culturally insensitive outputs. These advances are crucial for building trust in AI systems and for their safe, effective deployment across sectors ranging from healthcare and finance to manufacturing and autonomous systems.
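Fairness concerns of this kind are often made concrete by comparing a model's error rates across subgroups, a theme that recurs in several of the papers listed below (for example, the screening-mammography and N-Sigma studies). The snippet below is a minimal, self-contained sketch of such a gap measurement on synthetic data; the group labels, error rates, and variable names are illustrative assumptions, not the method of any listed paper.

```python
import numpy as np

# Toy illustration: quantify an accuracy gap between two subgroups.
# All data here is synthetic; in practice labels, predictions, and
# group membership would come from a real model and dataset.
rng = np.random.default_rng(0)
n = 1000
groups = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
labels = rng.integers(0, 2, size=n)        # ground-truth binary classes

# Simulate a classifier that is slightly less accurate on group B.
error_rate = np.where(groups == 0, 0.10, 0.20)
flip = rng.random(n) < error_rate
preds = np.where(flip, 1 - labels, labels)

acc_a = np.mean(preds[groups == 0] == labels[groups == 0])
acc_b = np.mean(preds[groups == 1] == labels[groups == 1])
print(f"Accuracy, group A: {acc_a:.3f}")
print(f"Accuracy, group B: {acc_b:.3f}")
print(f"Accuracy gap:      {acc_a - acc_b:.3f}")
```

A persistent gap of this sort is the kind of signal that the bias-measurement and performance-gap papers below aim to detect and explain with more rigorous statistical tooling.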
Papers
Federated learning for secure development of AI models for Parkinson's disease detection using speech from different languages
Soroosh Tayebi Arasteh, Cristian David Rios-Urrego, Elmar Noeth, Andreas Maier, Seung Hee Yang, Jan Rusz, Juan Rafael Orozco-Arroyave
Prevention is better than cure: a case study of the abnormalities detection in the chest
Weronika Hryniewska, Piotr Czarnecki, Jakub Wiśniewski, Przemysław Bombiński, Przemysław Biecek
Imitation versus Innovation: What children can do that large language and language-and-vision models cannot (yet)?
Eunice Yiu, Eliza Kosoy, Alison Gopnik
Multivariate Analysis on Performance Gaps of Artificial Intelligence Models in Screening Mammography
Linglin Zhang, Beatrice Brown-Mulry, Vineela Nalla, InChan Hwang, Judy Wawira Gichoya, Aimilia Gastounioti, Imon Banerjee, Laleh Seyyed-Kalantari, MinJae Woo, Hari Trivedi
Unlocking the Potential of Collaborative AI -- On the Socio-technical Challenges of Federated Machine Learning
Tobias Müller, Milena Zahn, Florian Matthes
Measuring Bias in AI Models: An Statistical Approach Introducing N-Sigma
Daniel DeAlcala, Ignacio Serna, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia