Understanding Cross
"Cross" in various scientific contexts refers to the integration of information across different modalities, scales, or datasets to improve model performance and understanding. Current research focuses on leveraging cross-attention mechanisms in transformer networks for tasks like image super-resolution and text-guided image editing, as well as employing cross-lingual and cross-sensor training strategies for enhanced multilingual capabilities and robust color constancy. These advancements are significant for improving the efficiency and accuracy of machine learning models across diverse applications, from robotics and healthcare to computer vision and natural language processing.