Technical Challenge
Research into technical challenges across diverse AI applications reveals a common thread: improving model robustness, fairness, and explainability while working within limits on data availability and computational resources. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies such as federated learning. These advances are essential for deploying AI responsibly and effectively across sectors ranging from healthcare and finance to manufacturing and environmental monitoring.
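To make the federated-learning example above concrete, the sketch below shows the core federated averaging (FedAvg) loop: each client trains on its own data and a server averages the returned weights. This is a minimal illustration under assumptions of my own (a toy linear model, synthetic data, and hand-picked hyperparameters); it is not drawn from any of the papers listed below.

```python
# Minimal federated averaging (FedAvg) sketch on a toy linear-regression model.
# Illustrative only: the model, data, and hyperparameters are assumptions,
# not taken from any of the listed papers.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One communication round: each client trains locally and the server
    averages the returned weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Simulate three clients whose data share the same underlying linear relation.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)

print("estimated weights:", w)  # should be close to [2.0, -1.0]
```

Real deployments add partial client participation, secure aggregation, and handling of non-IID data; the weighted average above is only the core aggregation step.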
Papers
A Meta-Summary of Challenges in Building Products with ML Components -- Collecting Experiences from 4758+ Practitioners
Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, Christian Kästner
Augmented Collective Intelligence in Collaborative Ideation: Agenda and Challenges
Emily Dardaman, Abhishek Gupta
Natural Language Processing in Ethiopian Languages: Current State, Challenges, and Opportunities
Atnafu Lambebo Tonja, Tadesse Destaw Belay, Israel Abebe Azime, Abinew Ali Ayele, Moges Ahmed Mehamed, Olga Kolesnikova, Seid Muhie Yimam
The Challenges of Studying Misinformation on Video-Sharing Platforms During Crises and Mass-Convergence Events
Sukrit Venkatagiri, Joseph S. Schafer, Stephen Prochaska
TinyML: Tools, Applications, Challenges, and Future Research Directions
Rakhee Kallimani, Krishna Pai, Prasoon Raghuwanshi, Sridhar Iyer, Onel L. A. López
Reimagining Application User Interface (UI) Design using Deep Learning Methods: Challenges and Opportunities
Subtain Malik, Muhammad Tariq Saeed, Marya Jabeen Zia, Shahzad Rasool, Liaquat Ali Khan, Mian Ilyas Ahmed