Technical Challenge
Research into technical challenges across diverse AI applications reveals a common thread: improving model robustness, fairness, and explainability while contending with limited data availability and constrained computational budgets. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies (e.g., federated learning). These advances are crucial for the responsible and effective deployment of AI across sectors ranging from healthcare and finance to manufacturing and environmental monitoring.
Papers
A survey on learning from imbalanced data streams: taxonomy, challenges, empirical study, and reproducible experimental framework
Gabriel Aguiar, Bartosz Krawczyk, Alberto Cano
Machine Learning-Enabled IoT Security: Open Issues and Challenges Under Advanced Persistent Threats
Zhiyan Chen, Jinxin Liu, Yu Shen, Murat Simsek, Burak Kantarci, Hussein T. Mouftah, Petar Djukic
Overcoming challenges in leveraging GANs for few-shot data augmentation
Christopher Beckham, Issam Laradji, Pau Rodriguez, David Vazquez, Derek Nowrouzezahrai, Christopher Pal
Automatic Detection of Expressed Emotion from Five-Minute Speech Samples: Challenges and Opportunities
Bahman Mirheidari, André Bittar, Nicholas Cummins, Johnny Downs, Helen L. Fisher, Heidi Christensen
Mind the gap: Challenges of deep learning approaches to Theory of Mind
Jaan Aru, Aqeel Labash, Oriol Corcoll, Raul Vicente
The Challenges of Continuous Self-Supervised Learning
Senthil Purushwalkam, Pedro Morgado, Abhinav Gupta
Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges
Savannah Thais, Paolo Calafiura, Grigorios Chachamis, Gage DeZoort, Javier Duarte, Sanmay Ganguly, Michael Kagan, Daniel Murnane, Mark S. Neubauer, Kazuhiro Terao