Technical Challenge
Across diverse AI applications, research on technical challenges reveals a common thread: improving model robustness, fairness, and explainability while contending with limited data availability and computational resources. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies (e.g., federated learning). These advances are crucial for the responsible and effective deployment of AI across sectors, from healthcare and finance to manufacturing and environmental monitoring.
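Of the deployment strategies mentioned above, federated learning is the most mechanical: clients train on private data locally and only share model parameters, which a server aggregates. The sketch below is a minimal, illustrative federated averaging (FedAvg-style) loop for a one-parameter linear model; the function names, learning rate, and toy data are all hypothetical, not drawn from any of the listed papers.

```python
# Illustrative federated averaging sketch (not from any specific paper).
# Each client fits a 1-D linear model y = w*x on its local data; the
# server averages the resulting weights, weighted by dataset size.

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on squared error."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """One server round: broadcast, train locally, average by size."""
    total = sum(len(d) for d in client_datasets)
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    return sum(w * n for w, n in updates) / total

# Two clients whose local data follow y = 3x; the raw points never
# leave the clients -- only the fitted weights are shared.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (3.0, 9.0), (0.5, 1.5)],
]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
print(round(w, 2))  # converges toward 3.0
```

The size-weighted average mirrors the standard FedAvg aggregation rule; real systems add secure aggregation, partial client participation, and differential privacy on top of this skeleton.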
Papers
AI4GCC -- Track 3: Consumption and the Challenges of Multi-Agent RL
Marco Jiralerspong, Gauthier Gidel
Neuro-Symbolic RDF and Description Logic Reasoners: The State-Of-The-Art and Challenges
Gunjan Singh, Sumit Bhatia, Raghava Mutharaju
Explainable AI in Orthopedics: Challenges, Opportunities, and Prospects
Soheyla Amirian, Luke A. Carlson, Matthew F. Gong, Ines Lohse, Kurt R. Weiss, Johannes F. Plate, Ahmad P. Tafti
Unraveling the Complexity of Splitting Sequential Data: Tackling Challenges in Video and Time Series Analysis
Diego Botache, Kristina Dingel, Rico Huhnstock, Arno Ehresmann, Bernhard Sick
Sources of Opacity in Computer Systems: Towards a Comprehensive Taxonomy
Sara Mann, Barnaby Crook, Lena Kästner, Astrid Schomäcker, Timo Speith