Robust Version
Robustness in machine learning is a crucial research area focused on improving the reliability and resilience of models against various forms of uncertainty, including noisy data, adversarial attacks, and environmental variations. Current research emphasizes novel algorithms and architectures, such as transformers, that maintain performance under these challenging conditions, often incorporating techniques like knowledge distillation, data augmentation, and robust optimization. This work is significant because it directly addresses the limitations of existing models, leading to more reliable and trustworthy AI systems across diverse applications, from medical imaging and autonomous navigation to natural language processing and personalized pricing.
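As a concrete illustration of one technique named above, the following is a minimal sketch of noise-based data augmentation, a common ingredient of robustness pipelines. It is not drawn from any of the papers listed below; the function name, batch shapes, and noise level are illustrative assumptions.

```python
import numpy as np
from typing import Optional

def augment_with_gaussian_noise(x: np.ndarray, sigma: float = 0.1,
                                copies: int = 2,
                                rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Return the original batch plus `copies` noisy variants of it.

    Training on the enlarged batch nudges a model toward predictions that
    stay stable under small input perturbations (a simple robustness proxy).
    """
    rng = rng or np.random.default_rng(0)
    noisy = [x + rng.normal(0.0, sigma, size=x.shape) for _ in range(copies)]
    return np.concatenate([x] + noisy, axis=0)

# Toy usage: a batch of 4 "images" flattened to 16 features each.
batch = np.ones((4, 16))
augmented = augment_with_gaussian_noise(batch, sigma=0.05)
print(augmented.shape)  # (12, 16): the original batch plus two noisy copies
```

In practice the noise scale is tuned to the expected perturbation level at deployment time; too little noise adds no robustness, while too much degrades clean accuracy.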
Papers
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data
Mohammad Ghabel Rahmat, Majid Khalilian
TSVC: Tripartite Learning with Semantic Variation Consistency for Robust Image-Text Retrieval
Shuai Lyu, Zijing Tian, Zhonghong Ou, Yifan Zhu, Xiao Zhang, Qiankun Ha, Haoran Luo, Meina Song
Efficient and Responsible Adaptation of Large Language Models for Robust and Equitable Top-k Recommendations
Kirandeep Kaur, Manya Chadha, Vinayak Gupta, Chirag Shah
Microservice Deployment in Space Computing Power Networks via Robust Reinforcement Learning
Zhiyong Yu, Yuning Jiang, Xin Liu, Yuanming Shi, Chunxiao Jiang, Linling Kuang