Strong Consistency
In the context of machine learning, strong consistency refers to a model's ability to produce similar or identical outputs for semantically similar inputs, a property crucial for robustness and trustworthiness. Current research focuses on improving consistency across model types, including large language models (LLMs), vision-language models (VLMs), and neural networks applied to tasks such as image generation, change detection, and robot control. Addressing inconsistencies through techniques such as adapter modules, consistency regularization (sketched below), and knowledge distillation is vital for building reliable AI systems and for improving the validity of research findings across scientific domains and practical applications.
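As a minimal illustration of consistency regularization, one of the techniques named above, the sketch below penalizes the KL divergence between a model's predictions on an input and on a perturbed copy of it. The model, perturbation, temperature, and loss weighting here are hypothetical placeholders for illustration; the papers listed below each use their own formulations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model, x, perturb, temperature=1.0):
    """KL divergence between predictions on x and on a perturbed copy of x."""
    with torch.no_grad():  # treat the clean prediction as a fixed target
        target = F.softmax(model(x) / temperature, dim=-1)
    log_probs = F.log_softmax(model(perturb(x)) / temperature, dim=-1)
    return F.kl_div(log_probs, target, reduction="batchmean")

# Hypothetical usage with a toy classifier and Gaussian-noise perturbation.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(8, 16)
perturb = lambda t: t + 0.1 * torch.randn_like(t)
loss = consistency_loss(model, x, perturb)  # typically added to the task loss with a weight
loss.backward()
```

In practice the consistency term is combined with the ordinary task loss, encouraging the model's output distribution to remain stable under small, label-preserving input changes.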
Papers
CREAM: Consistency Regularized Self-Rewarding Language Models
Zhaoyang Wang, Weilei He, Zhiyuan Liang, Xuchao Zhang, Chetan Bansal, Ying Wei, Weitong Zhang, Huaxiu Yao
Consistency Calibration: Improving Uncertainty Calibration via Consistency among Perturbed Neighbors
Linwei Tao, Haolan Guo, Minjing Dong, Chang Xu
DANA: Domain-Aware Neurosymbolic Agents for Consistency and Accuracy
Vinh Luong, Sang Dinh, Shruti Raghavan, William Nguyen, Zooey Nguyen, Quynh Le, Hung Vo, Kentaro Maegaito, Loc Nguyen, Thao Nguyen, Anh Hai Ha, Christopher Nguyen
GenesisTex2: Stable, Consistent and High-Quality Text-to-Texture Generation
Jiawei Lu, Yingpeng Zhang, Zengjun Zhao, He Wang, Kun Zhou, Tianjia Shao