Vision Transformer
Vision Transformers (ViTs) adapt the transformer architecture, originally designed for natural language processing, to image analysis by treating an image as a sequence of patches. Current research focuses on improving ViT efficiency and robustness through techniques such as token pruning, attention engineering, and hybrid models that combine ViTs with convolutional neural networks or other architectures (e.g., Mamba). These advances are driving progress in applications such as medical image analysis, object detection, and spatiotemporal prediction, where ViTs can offer better accuracy and efficiency than traditional convolutional networks.
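For concreteness, below is a minimal sketch of the patch-embedding step that turns an image into the token sequence a ViT consumes. It is not taken from any of the papers listed here; the patch size, image size, and embedding dimension are illustrative defaults, and the strided convolution is one common way to extract and project patches in a single operation.

import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project each patch to an embedding vector."""
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A convolution with kernel and stride equal to the patch size extracts
        # and linearly projects every patch in one step.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (batch, channels, height, width)
        x = self.proj(x)                  # (batch, embed_dim, H/patch, W/patch)
        x = x.flatten(2).transpose(1, 2)  # (batch, num_patches, embed_dim): a token sequence
        return x

# Usage: a 224x224 RGB image becomes 196 tokens of dimension 768.
tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])

The resulting sequence is then fed to a standard transformer encoder, typically after adding positional embeddings and, in classification settings, a learnable class token.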
Papers
ED-ViT: Splitting Vision Transformer for Distributed Inference on Edge Devices
Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
CTA-Net: A CNN-Transformer Aggregation Network for Improving Multi-Scale Feature Extraction
Chunlei Meng, Jiacheng Yang, Wei Lin, Bowen Liu, Hongda Zhang, Chun Ouyang, Zhongxue Gan
NARAIM: Native Aspect Ratio Autoregressive Image Models
Daniel Gallo Fernández, Robert van der Klis, Rǎzvan-Andrei Matişan, Janusz Partyka, Efstratios Gavves, Samuele Papa, Phillip Lippe
STA-Unet: Rethink the semantic redundant for Medical Imaging Segmentation
Vamsi Krishna Vasa, Wenhui Zhu, Xiwen Chen, Peijie Qiu, Xuanzhao Dong, Yalin Wang
Towards virtual painting recolouring using Vision Transformer on X-Ray Fluorescence datacubes
Alessandro Bombini, Fernando García-Avello Bofías, Francesca Giambi, Chiara Ruberto
DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention
Nguyen Huu Bao Long, Chenyu Zhang, Yuzhi Shi, Tsubasa Hirakawa, Takayoshi Yamashita, Tohgoroh Matsui, Hironobu Fujiyoshi
HorGait: A Hybrid Model for Accurate Gait Recognition in LiDAR Point Cloud Planar Projections
Jiaxing Hao, Yanxi Wang, Zhigang Chang, Hongmin Gao, Zihao Cheng, Chen Wu, Xin Zhao, Peiye Fang, Rachmat Muwardi
M$^2$-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization
Yanbiao Liang, Huihong Shi, Zhongfeng Wang
Causal Image Modeling for Efficient Visual Understanding
Feng Wang, Timing Yang, Yaodong Yu, Sucheng Ren, Guoyizhe Wei, Angtian Wang, Wei Shao, Yuyin Zhou, Alan Yuille, Cihang Xie
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
Wenhao Li, Yudong Xu, Scott Sanner, Elias Boutros Khalil
Vision Transformer based Random Walk for Group Re-Identification
Guoqing Zhang, Tianqi Liu, Wenxuan Fang, Yuhui Zheng
LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions
Ravindran Kannan, Chiranjib Bhattacharyya, Praneeth Kacham, David P. Woodruff
Brain Mapping with Dense Features: Grounding Cortical Semantic Selectivity in Natural Images With Vision Transformers
Andrew F. Luo, Jacob Yeung, Rushikesh Zawar, Shaurya Dewan, Margaret M. Henderson, Leila Wehbe, Michael J. Tarr
TextHawk2: A Large Vision-Language Model Excels in Bilingual OCR and Grounding with 16x Fewer Tokens
Ya-Qi Yu, Minghui Liao, Jiwen Zhang, Jihao Wu
Improving Image Clustering with Artifacts Attenuation via Inference-Time Attention Engineering
Kazumoto Nakamura, Yuji Nozawa, Yu-Chieh Lin, Kengo Nakata, Youyang Ng