Vision Transformer
Vision Transformers (ViTs) adapt the transformer architecture, originally developed for natural language processing, to image analysis by treating an image as a sequence of patches. Current research focuses on improving ViT efficiency and robustness through techniques such as token pruning, attention engineering, and hybrid models that combine ViTs with convolutional neural networks or other architectures (e.g., Mamba). These advances are driving progress in applications including medical image analysis, object detection, and spatiotemporal prediction, where ViT-based models can match or exceed the accuracy and efficiency of traditional convolutional networks on certain tasks.
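To make the patch-sequence idea and token pruning concrete, here is a minimal PyTorch sketch. It is illustrative only: the module and function names (`PatchEmbed`, `prune_tokens`) and the norm-based saliency score are assumptions for demonstration, not the method of any paper listed below.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and linearly embed each one."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution applies one linear projection per patch.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, D, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)    # (B, N, D): a token sequence

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top-k tokens by an importance score (e.g., attention received)."""
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices                       # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])  # (B, k, D)
    return tokens.gather(1, idx)

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)
    tokens = PatchEmbed()(imgs)          # (2, 196, 768): 14x14 patch tokens
    scores = tokens.norm(dim=-1)         # toy saliency proxy: token L2 norm
    kept = prune_tokens(tokens, scores)  # (2, 98, 768): half the tokens remain
    print(tokens.shape, kept.shape)
```

Since self-attention cost grows quadratically with sequence length, halving the token count as above roughly quarters the attention FLOPs in subsequent layers, which is the motivation behind token-reduction methods such as those surveyed in the papers below.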
Papers
Investigating Neuron Ablation in Attention Heads: The Case for Peak Activation Centering
Nicholas Pochinkov, Ben Pasero, Skylar Shibayama
Stochastic Layer-Wise Shuffle: A Good Practice to Improve Vision Mamba Training
Zizheng Huang, Haoxing Chen, Jiaqi Li, Jun Lan, Huijia Zhu, Weiqiang Wang, Limin Wang
Vote&Mix: Plug-and-Play Token Reduction for Efficient Vision Transformer
Shuai Peng, Di Fu, Baole Wei, Yong Cao, Liangcai Gao, Zhi Tang
A Survey of the Self Supervised Learning Mechanisms for Vision Transformers
Asifullah Khan, Anabia Sohail, Mustansar Fiaz, Mehdi Hassan, Tariq Habib Afridi, Sibghat Ullah Marwat, Farzeen Munir, Safdar Ali, Hannan Naseem, Muhammad Zaigham Zaheer, Kamran Ali, Tangina Sultana, Ziaurrehman Tanoli, Naeem Akhter
Tex-ViT: A Generalizable, Robust, Texture-based dual-branch cross-attention deepfake detector
Deepak Dagar, Dinesh Kumar Vishwakarma
PartFormer: Awakening Latent Diverse Representation from Vision Transformer for Object Re-Identification
Lei Tan, Pingyang Dai, Jie Chen, Liujuan Cao, Yongjian Wu, Rongrong Ji
LLaVA-SG: Leveraging Scene Graphs as Visual Semantic Expression in Vision-Language Models
Jingyi Wang, Jianzhong Ju, Jian Luan, Zhidong Deng
LoG-VMamba: Local-Global Vision Mamba for Medical Image Segmentation
Trung Dinh Quoc Dang, Huy Hoang Nguyen, Aleksei Tiulpin
GenFormer -- Generated Images are All You Need to Improve Robustness of Transformers on Small Datasets
Sven Oehri, Nikolas Ebert, Ahmed Abdullah, Didier Stricker, Oliver Wasenmüller
LowCLIP: Adapting the CLIP Model Architecture for Low-Resource Languages in Multimodal Image Retrieval Task
Ali Asgarov, Samir Rustamov
AlphaViT: A Flexible Game-Playing AI for Multiple Games and Variable Board Sizes
Kazuhisa Fujita
3D-RCNet: Learning from Transformer to Build a 3D Relational ConvNet for Hyperspectral Image Classification
Haizhao Jing, Liuwei Wan, Xizhe Xue, Haokui Zhang, Ying Li
TReX- Reusing Vision Transformer's Attention for Efficient Xbar-based Computing
Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
AT-SNN: Adaptive Tokens for Vision Transformer on Spiking Neural Network
Donghwa Kang, Youngmoon Lee, Eun-Kyu Lee, Brent Kang, Jinkyu Lee, Hyeongboo Baek