Vision Transformer
Vision Transformers (ViTs) adapt the transformer architecture, originally designed for natural language processing, to image analysis by treating an image as a sequence of patches. Current research focuses on improving ViT efficiency and robustness through techniques such as token pruning, attention engineering, and hybrid models that combine ViTs with convolutional neural networks or other architectures (e.g., Mamba). These advances are driving progress in applications including medical image analysis, object detection, and spatiotemporal prediction, where ViTs can offer better accuracy and efficiency than traditional convolutional networks on specific tasks.
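To make the "image as a sequence of patches" formulation concrete, the minimal PyTorch sketch below shows a standard ViT-style patch embedding: non-overlapping patches are linearly projected to tokens, a learnable [CLS] token is prepended, and positional embeddings are added. The hyperparameter values (image size 224, patch size 16, embedding dimension 768) are illustrative assumptions, not values taken from any paper listed here.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches and embed each as a token.

    A minimal sketch of the ViT input pipeline; hyperparameters are
    assumptions for illustration only.
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A stride-p convolution with a p x p kernel is equivalent to
        # slicing non-overlapping p x p patches and applying a shared
        # linear projection to each.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        # Learnable [CLS] token and positional embeddings, as in the
        # original ViT formulation.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(
            torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, 768, 14, 14)
        x = x.flatten(2).transpose(1, 2)     # (B, 196, 768): patch sequence
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1)       # prepend [CLS] -> (B, 197, 768)
        return x + self.pos_embed            # inject position information

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768]) -- ready for a transformer encoder
```

The resulting token sequence is what a standard transformer encoder consumes, and it is also the object that efficiency techniques like token pruning operate on, by dropping low-importance tokens between layers.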
Papers
GiT: Towards Generalist Vision Transformer through Universal Language Interface
Haiyang Wang, Hao Tang, Li Jiang, Shaoshuai Shi, Muhammad Ferjad Naeem, Hongsheng Li, Bernt Schiele, Liwei Wang
LocalMamba: Visual State Space Model with Windowed Selective Scan
Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, Chang Xu
OneVOS: Unifying Video Object Segmentation with All-in-One Transformer Framework
Wanyun Li, Pinxue Guo, Xinyu Zhou, Lingyi Hong, Yangji He, Xiangyu Zheng, Wei Zhang, Wenqiang Zhang
METER: a mobile vision transformer architecture for monocular depth estimation
L. Papa, P. Russo, I. Amerini
Activating Wider Areas in Image Super-Resolution
Cheng Cheng, Hang Wang, Hongbin Sun
General surgery vision transformer: A video pre-trained foundation model for general surgery
Samuel Schmidgall, Ji Woong Kim, Jeffrey Jopling, Axel Krieger
Segmentation Guided Sparse Transformer for Under-Display Camera Image Restoration
Jingyun Xue, Tao Wang, Jun Wang, Kaihao Zhang, Wenhan Luo, Wenqi Ren, Zikun Liu, Hyunhee Park, Xiaochun Cao
AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors
Kaishen Yuan, Zitong Yu, Xin Liu, Weicheng Xie, Huanjing Yue, Jingyu Yang
T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers
Mariano V. Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris
ACC-ViT: Atrous Convolution's Comeback in Vision Transformers
Nabil Ibtehaz, Ning Yan, Masood Mortazavi, Daisuke Kihara
NiNformer: A Network in Network Transformer with Token Mixing Generated Gating Function
Abdullah Nazhat Abdullah, Tarkan Aydin
Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
Yuchen Duan, Weiyun Wang, Zhe Chen, Xizhou Zhu, Lewei Lu, Tong Lu, Yu Qiao, Hongsheng Li, Jifeng Dai, Wenhai Wang
xT: Nested Tokenization for Larger Context in Large Images
Ritwik Gupta, Shufan Li, Tyler Zhu, Jitendra Malik, Trevor Darrell, Karttikeya Mangalam
Lightweight Object Detection: A Study Based on YOLOv7 Integrated with ShuffleNetv2 and Vision Transformer
Wenkai Gong