Technical Challenge
Research into technical challenges across diverse AI applications reveals a common thread: improving model robustness, fairness, and explainability while working under constraints on data availability and computational resources. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies such as federated learning (see the sketch below). These advances are crucial for the responsible and effective deployment of AI across sectors, from healthcare and finance to manufacturing and environmental monitoring.
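As a concrete illustration of one such deployment strategy, the following is a minimal federated-averaging (FedAvg) sketch. It assumes a simple linear model and synthetic per-client data; all names, sizes, and hyperparameters are illustrative and not drawn from any of the papers listed below.

```python
# Minimal FedAvg sketch: clients train locally on private data and the
# server aggregates only the resulting model weights.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Simulated private datasets for three clients (never shared centrally).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    # Each client trains locally; only the updated weights are sent back.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates by data-size-weighted averaging (FedAvg).
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", global_w)  # approaches true_w without pooling raw data
```

The raw data never leaves a client; only model weights flow to the aggregator, which is what makes federated learning attractive when data availability is constrained by privacy or locality.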
Papers
Challenges and Practices of Deep Learning Model Reengineering: A Case Study on Computer Vision
Wenxin Jiang, Vishnu Banna, Naveen Vivek, Abhinav Goel, Nicholas Synovic, George K. Thiruvathukal, James C. Davis
A Survey of Graph Prompting Methods: Techniques, Applications, and Challenges
Xuansheng Wu, Kaixiong Zhou, Mingchen Sun, Xin Wang, Ninghao Liu
Challenges facing the explainability of age prediction models: case study for two modalities
Mikolaj Spytek, Weronika Hryniewska-Guzik, Jaroslaw Zygierewicz, Jacek Rogala, Przemyslaw Biecek
Blockchain-Empowered Trustworthy Data Sharing: Fundamentals, Applications, and Challenges
Linh T. Nguyen, Lam Duc Nguyen, Thong Hoang, Dilum Bandara, Qin Wang, Qinghua Lu, Xiwei Xu, Liming Zhu, Petar Popovski, Shiping Chen
A comprehensive review of visualization methods for association rule mining: Taxonomy, Challenges, Open problems and Future ideas
Iztok Fister Jr., Iztok Fister, Dušan Fister, Vili Podgorelec, Sancho Salcedo-Sanz
Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh, Sunayana Sitaram, Monojit Choudhury