Strong Consistency
In machine learning, strong consistency refers to a model's ability to produce similar or identical outputs for semantically equivalent inputs, a property central to robustness and trustworthiness. Current research focuses on improving consistency across model types, including large language models (LLMs), vision-language models (VLMs), and neural networks applied to tasks as diverse as image generation, change detection, and robot control. Techniques such as adapter modules, consistency regularization, and knowledge distillation address these inconsistencies and are vital for building reliable AI systems and for improving the validity of research findings across scientific domains and practical applications.
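To make the idea of consistency regularization concrete, here is a minimal, hypothetical sketch (not taken from any of the papers below): a penalty that measures how much a model's predictive distribution changes when the input is perturbed in a way that should not change its meaning. The toy linear "model" and the perturbation are illustrative assumptions only.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), summed over the class dimension.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def consistency_loss(logits_clean, logits_aug):
    # Symmetric KL between the predictive distributions for a clean
    # input and a semantically equivalent (perturbed) input; adding
    # this term to the task loss encourages stable predictions.
    p, q = softmax(logits_clean), softmax(logits_aug)
    return 0.5 * float((kl_div(p, q) + kl_div(q, p)).mean())

# Toy linear model with random weights -- purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

x = rng.normal(size=(8, 4))                   # batch of clean inputs
x_aug = x + 0.01 * rng.normal(size=x.shape)   # small, meaning-preserving perturbation

loss = consistency_loss(x @ W, x_aug @ W)
print(loss >= 0.0)  # the penalty is non-negative
```

In practice the perturbation would be a task-appropriate augmentation (paraphrasing for text, cropping or color jitter for images), and the penalty would be weighted against the supervised loss.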
Papers
Calibrating Likelihoods towards Consistency in Summarization Models
Polina Zablotskaia, Misha Khalman, Rishabh Joshi, Livio Baldini Soares, Shoshana Jakobovits, Joshua Maynez, Shashi Narayan
Consistent123: Improve Consistency for One Image to 3D Object Synthesis
Haohan Weng, Tianyu Yang, Jianan Wang, Yu Li, Tong Zhang, C. L. Philip Chen, Lei Zhang
EC-Depth: Exploring the consistency of self-supervised monocular depth estimation in challenging scenes
Ziyang Song, Ruijie Zhu, Chuxin Wang, Jiacheng Deng, Jianfeng He, Tianzhu Zhang