Strong Consistency
In the context of machine learning, strong consistency refers to a model's ability to produce similar or identical outputs for semantically similar inputs, a property crucial for robustness and trustworthiness. Current research focuses on improving consistency across a range of model types, including large language models (LLMs), vision-language models (VLMs), and neural networks applied to tasks such as image generation, change detection, and robot control. Addressing inconsistencies through techniques such as adapter modules, consistency regularization, and knowledge distillation is vital for building reliable AI systems and for improving the validity of research findings across scientific domains and practical applications.
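To make one of the named techniques concrete, below is a minimal sketch of consistency regularization: the model is penalized when its prediction on an augmented input diverges from its prediction on the original. The model, the `augment` transform, and the loss weight are illustrative assumptions, not drawn from any paper listed here.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, augment, weight=1.0):
    """Penalize divergence between predictions on an input and a
    semantically equivalent augmented view (consistency regularization)."""
    with torch.no_grad():
        # Treat the clean-input prediction as a fixed target (stop-gradient),
        # a common choice in consistency-training setups.
        target = F.softmax(model(x), dim=-1)
    log_probs_aug = F.log_softmax(model(augment(x)), dim=-1)
    # KL divergence between the augmented-view prediction and the target.
    return weight * F.kl_div(log_probs_aug, target, reduction="batchmean")

# Hypothetical usage: add the consistency term to an ordinary supervised loss.
# Any label-preserving transform works as `augment`; small Gaussian noise is
# used here purely for illustration.
# augment = lambda x: x + 0.05 * torch.randn_like(x)
# loss = F.cross_entropy(model(x), y) + consistency_loss(model, x, augment, 0.5)
```

The stop-gradient on the clean prediction is one common design choice; some variants instead backpropagate through both views or use an exponential-moving-average teacher as the target.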
Papers
An optimal pairwise merge algorithm improves the quality and consistency of nonnegative matrix factorization
Youdong Guo, Timothy E. Holy
Multilevel Graph Reinforcement Learning for Consistent Cognitive Decision-making in Heterogeneous Mixed Autonomy
Xin Gao, Zhaoyang Ma, Xueyuan Li, Xiaoqiang Meng, Zirui Li
Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models
Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Yuan-Fang Li, Cunjian Chen, Yu Qiao
Cascaded two-stage feature clustering and selection via separability and consistency in fuzzy decision systems
Yuepeng Chen, Weiping Ding, Hengrong Ju, Jiashuang Huang, Tao Yin