Vision Transformer
Vision Transformers (ViTs) adapt the transformer architecture, originally designed for natural language processing, to image analysis by treating an image as a sequence of patches. Current research focuses on improving ViT efficiency and robustness through techniques such as token pruning, attention engineering, and hybrid models that combine ViTs with convolutional neural networks (CNNs) or other architectures (e.g., Mamba). These advances are driving progress in applications such as medical image analysis, object detection, and spatiotemporal prediction, where ViTs offer better accuracy and efficiency than traditional CNNs on certain tasks.
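To make the patch-based formulation concrete, the following is a minimal sketch of the core ViT idea, assuming PyTorch: an image is split into fixed-size patches, each patch is embedded as a token, a class token and positional embeddings are added, and the resulting sequence is fed to a standard transformer encoder. The class name, hyperparameter values, and classification head are illustrative choices, not taken from any of the papers listed below.

```python
# Minimal ViT sketch: patchify -> tokenize -> transformer encoder -> classify.
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=192,
                 depth=4, heads=3, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: a strided convolution maps each PxP patch to one token.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                           # x: (B, 3, H, W)
        tokens = self.patch_embed(x)                # (B, dim, H/P, W/P)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)               # self-attention over patch tokens
        return self.head(tokens[:, 0])              # classify from the class token

model = MiniViT()
logits = model(torch.randn(2, 3, 224, 224))         # -> shape (2, 10)
```

Techniques surveyed above, such as token pruning, would operate on the `tokens` sequence inside the encoder (dropping low-importance patch tokens between layers), while hybrid models replace or augment the convolutional patch embedding with deeper CNN or state-space (Mamba) stages.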
Papers
RaViTT: Random Vision Transformer Tokens
Felipe A. Quezada, Carlos F. Navarro, Cristian Muñoz, Manuel Zamorano, Jorge Jara-Wilde, Violeta Chang, Cristóbal A. Navarro, Mauricio Cerda
B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
Moritz Böhle, Navdeeppal Singh, Mario Fritz, Bernt Schiele
Vision Transformer with Attention Map Hallucination and FFN Compaction
Haiyang Xu, Zhichao Zhou, Dongliang He, Fu Li, Jingdong Wang
Revisiting Token Pruning for Object Detection and Instance Segmentation
Yifei Liu, Mathias Gehrig, Nico Messikommer, Marco Cannici, Davide Scaramuzza
Unmasking Deepfakes: Masked Autoencoding Spatiotemporal Transformers for Enhanced Video Forgery Detection
Sayantan Das, Mojtaba Kolahdouzi, Levent Özparlak, Will Hickie, Ali Etemad