Technical Challenge
Across diverse AI applications, a common thread of technical challenges emerges: improving model robustness, fairness, and explainability while working within limits on data availability and computational efficiency. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies (e.g., federated learning). These advances are essential for the responsible and effective deployment of AI across sectors, from healthcare and finance to manufacturing and environmental monitoring.
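
To make the federated-learning strategy mentioned above concrete, the sketch below illustrates a basic federated-averaging (FedAvg) loop on a toy linear-regression problem. The client setup, model, and all function names are illustrative assumptions for this summary, not the method of any paper listed below.

    # Minimal sketch of federated averaging (FedAvg) on a toy linear model.
    # All names and data are illustrative assumptions.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """Run a few gradient-descent steps on one client's local data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
            w -= lr * grad
        return w

    def fed_avg(global_weights, client_data):
        """Average client updates, weighted by local dataset size."""
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_weights, X, y))
            sizes.append(len(y))
        return np.average(updates, axis=0, weights=np.array(sizes, float))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])
        # Three clients with differently sized local datasets.
        clients = []
        for n in (20, 50, 80):
            X = rng.normal(size=(n, 2))
            y = X @ true_w + 0.1 * rng.normal(size=n)
            clients.append((X, y))

        w = np.zeros(2)
        for _ in range(10):  # communication rounds
            w = fed_avg(w, clients)
        print("estimated weights:", w)  # approaches [2, -1]

Here each "client" trains locally and only the model weights are shared, which is the basic mechanism that lets such strategies address data-availability and privacy constraints without centralizing raw data.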
Papers
Selection, Ignorability and Challenges With Causal Fairness
Jake Fawkes, Robin Evans, Dino Sejdinovic
A Survey on Recent Advances and Challenges in Reinforcement Learning Methods for Task-Oriented Dialogue Policy Learning
Wai-Chung Kwan, Hongru Wang, Huimin Wang, Kam-Fai Wong
Recent Advances and Challenges in Deep Audio-Visual Correlation Learning
Luís Vilaça, Yi Yu, Paula Viana
Governance of Autonomous Agents on the Web: Challenges and Opportunities
Timotheus Kampik, Adnane Mansour, Olivier Boissier, Sabrina Kirrane, Julian Padget, Terry R. Payne, Munindar P. Singh, Valentina Tamma, Antoine Zimmermann
Investigating the Challenges of Class Imbalance and Scale Variation in Object Detection in Aerial Images
Ahmed Elhagry, Mohamed Saeed