Patch Attention
Patch attention mechanisms are an active research area aimed at improving the efficiency and performance of transformer-based models in computer vision. Current work focuses on architectures that reduce the quadratic computational complexity of standard self-attention over image patches, using techniques such as sparse attention, patch clustering, and adaptive patch filtering; examples include models like ParFormer and ClusTR. These advances matter because they make powerful transformer models practical in resource-constrained environments and large-scale tasks, improving accuracy and efficiency in applications such as image classification, object detection, and image restoration.
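In its basic form, patch attention means running multi-head self-attention over a sequence of flattened image patches, as in a Vision Transformer; the O(N^2) cost over N patches is exactly what sparse and clustering variants target. Below is a minimal sketch in PyTorch; the names `patchify` and `PatchAttention` are illustrative assumptions and do not correspond to the actual ParFormer or ClusTR implementations.

```python
# Minimal sketch of dense patch attention (hypothetical names, not from
# ParFormer or ClusTR). Shows the O(N^2) baseline that sparse/clustered
# variants aim to reduce.
import torch
import torch.nn as nn

def patchify(images, patch_size):
    """Split (B, C, H, W) images into flattened non-overlapping patches."""
    b, c, h, w = images.shape
    p = patch_size
    patches = images.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
    patches = patches.permute(0, 2, 3, 1, 4, 5).contiguous()
    return patches.view(b, -1, c * p * p)              # (B, N, C*p*p)

class PatchAttention(nn.Module):
    """Standard multi-head self-attention over patch embeddings.

    Every patch attends to every other patch, so cost grows quadratically
    with the number of patches N.
    """
    def __init__(self, patch_dim, embed_dim=256, num_heads=8):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, patches):
        x = self.embed(patches)                  # (B, N, D)
        attended, weights = self.attn(x, x, x)   # full pairwise patch attention
        return self.norm(x + attended), weights  # residual + norm, attention map

# Usage: 224x224 RGB images with 16x16 patches -> 196 patch tokens per image.
images = torch.randn(2, 3, 224, 224)
patches = patchify(images, patch_size=16)        # (2, 196, 768)
model = PatchAttention(patch_dim=3 * 16 * 16)
out, attn_weights = model(patches)
print(out.shape, attn_weights.shape)             # (2, 196, 256) (2, 196, 196)
```

The efficiency techniques named above modify this baseline in different ways: sparse attention restricts which patch pairs appear in the attention map, while clustering approaches group similar patches and attend over cluster representatives instead of all N tokens.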