Patch Transformer
Patch Transformers are a growing line of research that adapts the Transformer architecture to operate on data divided into smaller patches, improving efficiency and allowing the model to capture both local and global features. Current work focuses on choosing patch sizes and patching strategies for different data types, including time series, images, and 3D point clouds; models such as Medformer (multi-granularity patching for medical time series) and MultiResFormer (adaptive multi-resolution patching for time-series forecasting) illustrate these advances in specific applications. The approach promises stronger performance at lower computational cost across diverse fields, from medical image analysis and time-series forecasting to hyperspectral image processing and semantic segmentation.
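To make the patching idea concrete, here is a minimal PyTorch sketch of a patch-based Transformer for univariate time series, in the spirit of PatchTST-style models. All names and hyperparameters (`PatchTransformer`, `patch_len`, `horizon`, etc.) are illustrative assumptions, not the architecture of any specific paper mentioned above; positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn

class PatchTransformer(nn.Module):
    """Illustrative patch-based Transformer for univariate time series.

    Splits a length-L series into non-overlapping patches, linearly
    embeds each patch as one token, and runs a standard Transformer
    encoder over the (much shorter) patch sequence.
    """

    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2, horizon=24):
        super().__init__()
        self.patch_len = patch_len
        # Each patch of raw values becomes a single token embedding.
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Forecast the next `horizon` steps from the last patch token.
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):
        # x: (batch, seq_len); seq_len must be divisible by patch_len here.
        b, L = x.shape
        patches = x.view(b, L // self.patch_len, self.patch_len)
        tokens = self.embed(patches)      # (batch, n_patches, d_model)
        encoded = self.encoder(tokens)    # attention across patches
        return self.head(encoded[:, -1])  # (batch, horizon)

model = PatchTransformer()
series = torch.randn(8, 128)  # batch of 8 series of length 128
print(model(series).shape)    # torch.Size([8, 24])
```

The efficiency gain comes from attention running over `L / patch_len` tokens rather than `L` raw time steps, while each token still summarizes local structure within its patch.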