Technical Challenge
Across these diverse AI applications, the technical challenges share a common thread: improving model robustness, fairness, and explainability while working around limits on data availability and computational efficiency. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies (e.g., federated learning). These advances are crucial for the responsible and effective deployment of AI across sectors, from healthcare and finance to manufacturing and environmental monitoring.
Papers
The Decades Progress on Code-Switching Research in NLP: A Systematic Survey on Trends and Challenges
Genta Indra Winata, Alham Fikri Aji, Zheng-Xin Yong, Thamar Solorio
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, Peter M. Atkinson, Pedram Ghamisi
On Text-based Personality Computing: Challenges and Future Directions
Qixiang Fang, Anastasia Giachanou, Ayoub Bagheri, Laura Boeschoten, Erik-Jan van Kesteren, Mahdi Shafiee Kamalabad, Daniel L Oberski
The Challenges of HTR Model Training: Feedback from the Project Donner le goût de l'archive à l'ère numérique
Béatrice Couture, Farah Verret, Maxime Gohier, Dominique Deslandres
Progress and Challenges for the Application of Machine Learning for Neglected Tropical Diseases
Chung Yuen Khew, Rahmad Akbar, Norfarhan Mohd. Assaad
Programming Is Hard -- Or at Least It Used to Be: Educational Opportunities And Challenges of AI Code Generation
Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, Eddie Antonio Santos