Strong Consistency
Strong consistency, in the context of machine learning models, refers to a model's ability to produce similar or identical outputs for semantically similar inputs, a property essential for robustness and trustworthiness. Current research focuses on improving consistency across model types, including large language models (LLMs), vision-language models (VLMs), and neural networks applied to tasks such as image generation, change detection, and robot control. Addressing inconsistencies through techniques such as adapter modules, consistency regularization, and knowledge distillation is vital for building reliable AI systems and for improving the validity of research findings across scientific domains and practical applications.
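As an illustration of one of the techniques named above, the sketch below shows a generic consistency-regularization loss: the model is penalized when its predictions on an input and on a perturbed view of that input diverge. This is a minimal PyTorch sketch, not the method of any paper listed here; the model, the noise-based augmentation, and the 0.1 loss weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, x_augmented):
    """Symmetric KL divergence between predictions on two views of the same input.

    `model`, `x`, and `x_augmented` are placeholders: any classifier that returns
    logits and any pair of semantically equivalent inputs can be used.
    """
    log_p = F.log_softmax(model(x), dim=-1)
    log_q = F.log_softmax(model(x_augmented), dim=-1)
    # Average the two directions so neither view is treated as the "target".
    return 0.5 * (
        F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
        + F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    )

# Toy usage: a linear classifier, random data, and additive noise standing in
# for a real semantic-preserving augmentation.
model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)
x_aug = x + 0.05 * torch.randn_like(x)
labels = torch.randint(0, 4, (8,))

# Supervised loss plus a weighted consistency term.
loss = F.cross_entropy(model(x), labels) + 0.1 * consistency_loss(model, x, x_aug)
loss.backward()
```

In practice the augmentation would be task-specific (paraphrasing for LLMs, geometric or photometric transforms for vision models), and the weight on the consistency term is tuned per task.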
Papers
The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents
Che-Jui Chang, Samuel S. Sohn, Sen Zhang, Rajath Jayashankar, Muhammad Usman, Mubbasir Kapadia
FEC: Three Finetuning-free Methods to Enhance Consistency for Real Image Editing
Songyan Chen, Jiancheng Huang
Accuracy and Consistency of Space-based Vegetation Height Maps for Forest Dynamics in Alpine Terrain
Yuchang Jiang, Marius Rüetschi, Vivien Sainte Fare Garnot, Mauro Marty, Konrad Schindler, Christian Ginzler, Jan D. Wegner
On the Consistency and Robustness of Saliency Explanations for Time Series Classification
Chiara Balestra, Bin Li, Emmanuel Müller
On the Consistency of Average Embeddings for Item Recommendation
Walid Bendada, Guillaume Salha-Galvan, Romain Hennequin, Thomas Bouabça, Tristan Cazenave
APLA: Additional Perturbation for Latent Noise with Adversarial Training Enables Consistency
Yupu Yao, Shangqi Deng, Zihan Cao, Harry Zhang, Liang-Jian Deng