Strong Consistency
Strong consistency, in the context of machine learning models, refers to the ability of a model to produce similar or identical outputs for semantically similar inputs, a crucial aspect for robustness and trustworthiness. Current research focuses on improving consistency in various model types, including large language models (LLMs), vision-language models (VLMs), and neural networks applied to diverse tasks like image generation, change detection, and robot control. Addressing inconsistencies through techniques like adapter modules, consistency regularization, and knowledge distillation is vital for building reliable AI systems and improving the validity of research findings across numerous scientific domains and practical applications.
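Of the techniques mentioned above, consistency regularization is the simplest to sketch: penalize the model when its outputs on an input and on a perturbed copy of that input diverge. The snippet below is an illustrative toy, not code from any of the listed papers; the helper name `consistency_loss` and the toy linear "model" are assumptions for demonstration.

```python
import random

def consistency_loss(model, x, perturb, n_samples=4, seed=0):
    """Average squared distance between the model's output on x and its
    outputs on perturbed copies of x (illustrative helper, not taken
    from any of the papers listed below)."""
    rng = random.Random(seed)
    base = model(x)
    total = 0.0
    for _ in range(n_samples):
        out = model(perturb(x, rng))
        # mean squared difference between clean and perturbed outputs
        total += sum((o - b) ** 2 for o, b in zip(out, base)) / len(base)
    return total / n_samples

# Toy setup: a linear "model" and a small additive-noise perturbation.
model = lambda xs: [2.0 * v - 1.0 for v in xs]
perturb = lambda xs, rng: [v + rng.gauss(0.0, 0.01) for v in xs]

reg = consistency_loss(model, [1.0, 2.0, 3.0], perturb)
```

During training, this term would typically be added to the task loss with a weighting coefficient (e.g. `loss = task_loss + lam * reg`), pushing the model toward identical outputs on semantically equivalent inputs.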
Papers
Tensor Completion with Provable Consistency and Fairness Guarantees for Recommender Systems
Tung Nguyen, Jeffrey Uhlmann
Con$^{2}$DA: Simplifying Semi-supervised Domain Adaptation by Learning Consistent and Contrastive Feature Representations
Manuel Pérez-Carrasco, Pavlos Protopapas, Guillermo Cabrera-Vives
Consistency regularization-based Deep Polynomial Chaos Neural Network Method for Reliability Analysis
Xiaohu Zheng, Wen Yao, Yunyang Zhang, Xiaoya Zhang
Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction
De Cheng, Yan Li, Dingwen Zhang, Nannan Wang, Xinbo Gao, Jiande Sun