Patch Transformer
Patch Transformers represent a burgeoning area of research that adapts the Transformer architecture to process data divided into smaller patches, improving efficiency and enabling the capture of both local and global features. Current research focuses on optimizing patch sizes and strategies for various data types, including time series, images, and 3D point clouds, with models like Medformer and MultiResFormer demonstrating advancements in specific applications. This approach offers significant potential for improving performance and reducing computational costs in diverse fields, ranging from medical image analysis and time series forecasting to hyperspectral image processing and semantic segmentation.
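The core idea shared by these models, tokenizing raw data as fixed-length patches before applying attention, can be sketched as follows. This is a minimal NumPy illustration, not any specific model's implementation: `patchify` and `embed_patches` are hypothetical names, the projection matrix would be learned in practice, and the positional signal is a toy stand-in for a real positional embedding.

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Slice a univariate time series into (possibly overlapping) patches.

    patch_len and stride are the patching hyperparameters that
    patch-based Transformers tune per data type.
    """
    n = series.shape[0]
    starts = range(0, n - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

def embed_patches(patches: np.ndarray, d_model: int) -> np.ndarray:
    """Linearly project each patch to d_model dims and add a positional signal."""
    rng = np.random.default_rng(0)
    # Learned in a real model; random here for illustration only.
    W = rng.normal(scale=0.02, size=(patches.shape[1], d_model))
    pos = np.arange(patches.shape[0])[:, None] / patches.shape[0]  # toy positional term
    return patches @ W + pos

series = np.sin(np.linspace(0, 10, 96))          # 96-step toy time series
tokens = embed_patches(patchify(series, patch_len=16, stride=8), d_model=32)
print(tokens.shape)  # (11, 32): 11 patch tokens, each a 32-dim embedding
```

Each patch becomes one token, so the attention sequence length scales with the number of patches rather than the number of raw time steps, which is where the efficiency gain described above comes from.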