Artificial Intelligence Models
Artificial intelligence (AI) models are rapidly evolving, with current research focusing on improving their reliability, security, and fairness. Key areas of investigation include mitigating model errors, defending against adversarial attacks, ensuring robustness across diverse datasets and contexts, and addressing biases that can lead to unfair or culturally insensitive outputs. These advances are crucial for building trust in AI systems and enabling their safe and effective deployment across sectors ranging from healthcare and finance to manufacturing and autonomous systems.
Papers
Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models
Jaspreet Pannu, Doni Bloomfield, Alex Zhu, Robert MacKnight, Gabe Gomes, Anita Cicero, Thomas V. Inglesby
Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference
Huaiguang Cai, Zhi Zhou, Qianyi Huang
Integration of Mixture of Experts and Multimodal Generative AI in Internet of Vehicles: A Survey
Minrui Xu, Dusit Niyato, Jiawen Kang, Zehui Xiong, Abbas Jamalipour, Yuguang Fang, Dong In Kim, Xuemin Shen
AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, David Lo