X-Formers
X-Formers are a burgeoning line of research that adapts the transformer architecture to applications beyond natural language processing. Current work centers on three directions: improving visual understanding in multimodal models by combining contrastive and reconstruction learning objectives; developing specialized transformers for tasks such as object tracking, change detection, and medical image segmentation; and reducing computational cost through efficient architectural designs and hardware acceleration. This research is advancing fields such as computer vision, medical imaging, and cybersecurity by improving the accuracy, efficiency, and explainability of AI models.
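To make the first direction concrete, the sketch below shows one common way a contrastive objective and a reconstruction objective can be combined in a single training step. It is a minimal illustration, not the method of any specific paper listed here; the module names, the noise-based augmentation, and the 0.5 loss weight are all assumptions chosen for brevity.

```python
# Illustrative sketch of a joint contrastive + reconstruction objective.
# All names (TinyEncoder, info_nce, loss weight) are assumptions for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for a transformer image encoder with two heads:
    a projection head for the contrastive loss and a decoder head
    for pixel-level reconstruction."""
    def __init__(self, in_dim=3 * 32 * 32, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, embed_dim), nn.GELU())
        self.proj = nn.Linear(embed_dim, embed_dim)    # contrastive projection head
        self.decoder = nn.Linear(embed_dim, in_dim)    # reconstruction head

    def forward(self, x):
        z = self.backbone(x)
        return self.proj(z), self.decoder(z)

def info_nce(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature              # pairwise similarities
    targets = torch.arange(z1.size(0))              # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# Toy training step combining the two objectives.
model = TinyEncoder()
x = torch.rand(8, 3, 32, 32)                        # a batch of images
view1 = x + 0.05 * torch.randn_like(x)              # two lightly perturbed views
view2 = x + 0.05 * torch.randn_like(x)

z1, recon1 = model(view1)
z2, _ = model(view2)

loss_contrastive = info_nce(z1, z2)
loss_reconstruction = F.mse_loss(recon1, view1.flatten(1))   # pixel-level reconstruction
loss = loss_contrastive + 0.5 * loss_reconstruction          # assumed weighting
loss.backward()
```

The contrastive term pulls embeddings of matching views together, while the reconstruction term forces the encoder to retain low-level visual detail; published multimodal models differ mainly in how the two signals are weighted and which parts of the network each head attaches to.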
Papers
Nineteen papers on this topic, published between January 3, 2022 and October 22, 2024.