Technical Challenge
Research into technical challenges across diverse AI applications reveals a common thread: the need to improve model robustness, fairness, and explainability while working around limited data availability and constrained computational budgets. Current efforts focus on developing and adapting model architectures (e.g., LLMs, YOLO variants, diffusion models) for specific tasks, refining evaluation metrics, and designing robust training and deployment strategies such as federated learning. These advances underpin the responsible and effective deployment of AI across sectors ranging from healthcare and finance to manufacturing and environmental monitoring.
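Of the strategies named above, federated learning is the most operationally distinctive: a model is trained across separate data holders without centralizing their raw data. Below is a minimal, hypothetical sketch of the federated averaging (FedAvg) idea using NumPy with simulated client datasets; the data generator, client count, and learning rate are illustrative assumptions, not drawn from any of the listed papers.

```python
import numpy as np

# Hypothetical illustration: federated averaging (FedAvg) over simulated clients.
# Each client fits a linear model on its local data; only weight vectors are
# shared and averaged, so raw data never leaves the client.

rng = np.random.default_rng(0)

def make_client_data(n=100, d=5):
    """Simulate one client's private dataset (features X, targets y)."""
    X = rng.normal(size=(n, d))
    true_w = np.arange(1, d + 1, dtype=float)
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.01, epochs=5):
    """Run a few epochs of gradient descent on local data, starting from w."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_client_data() for _ in range(4)]
global_w = np.zeros(5)

for round_idx in range(10):
    # Each client refines the current global model on its own data.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the returned weights (equal weighting for simplicity).
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2))
```

In practice the server would weight each client's update by its dataset size and add privacy or robustness mechanisms, but the communicate-locally-train-then-average loop shown here is the core pattern.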
Papers
Metaverse: Requirements, Architecture, Standards, Status, Challenges, and Perspectives
Danda B. Rawat, Hassan El Alami
Advances and Challenges in Multimodal Remote Sensing Image Registration
Bai Zhu, Liang Zhou, Simiao Pu, Jianwei Fan, Yuanxin Ye
Causal Effect Estimation: Recent Advances, Challenges, and Opportunities
Zhixuan Chu, Jianmin Huang, Ruopeng Li, Wei Chu, Sheng Li
Gender Neutralization for an Inclusive Machine Translation: from Theoretical Foundations to Open Challenges
Andrea Piergentili, Dennis Fucci, Beatrice Savoldi, Luisa Bentivogli, Matteo Negri
Explainable Deep Reinforcement Learning: State of the Art and Challenges
George A. Vouros
Opportunities and Challenges in Neural Dialog Tutoring
Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan
Applications and Challenges of Sentiment Analysis in Real-life Scenarios
Diptesh Kanojia, Aditya Joshi