Masked Transformer
Masked Transformers are a class of neural network architectures that use masked self-attention to process sequential or spatial data: a mask restricts which positions each token may attend to, so the model selectively focuses on relevant information. Current research adapts the architecture to diverse applications, including signal processing (e.g., ECG and sEMG denoising), image and video processing (e.g., inpainting, anomaly detection, segmentation), and 3D data analysis (e.g., point clouds, human motion). Across these domains the approach yields consistent performance gains, providing a flexible framework for many data types and tasks and driving advances in fields ranging from healthcare to robotics.
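The core mechanism shared by these architectures is masking inside the attention computation: positions the mask disallows receive effectively zero attention weight. Below is a minimal single-head sketch in NumPy (the function name and the choice of a causal mask are illustrative, not taken from any specific paper above); queries, keys, and values all come from the same input for brevity.

```python
import numpy as np

def masked_self_attention(x, mask):
    """Single-head self-attention where masked-out positions are ignored.

    x:    (seq_len, d) input embeddings (used as queries, keys, and values)
    mask: (seq_len, seq_len) boolean; True means the position may be attended to
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # scaled dot-product logits
    scores = np.where(mask, scores, -1e9)  # push disallowed positions to -inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over allowed keys
    return weights @ x                     # weighted sum of values

# Causal mask: each position attends only to itself and earlier positions.
seq_len, d = 4, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((seq_len, d))
causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))
out = masked_self_attention(x, causal)
```

With the causal mask, the first position can attend only to itself, so its output equals its own input vector; other mask patterns (e.g., random masking of patches, or the topological masks studied in the graph-features paper listed below) simply substitute a different boolean matrix.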
Papers
TrustEMG-Net: Using Representation-Masking Transformer with U-Net for Surface Electromyography Enhancement
Kuan-Chen Wang, Kai-Chun Liu, Ping-Cheng Yeh, Sheng-Yu Peng, Yu Tsao
Linear Transformer Topological Masking with Graph Random Features
Isaac Reid, Kumar Avinava Dubey, Deepali Jain, Will Whitney, Amr Ahmed, Joshua Ainslie, Alex Bewley, Mithun Jacob, Aranyak Mehta, David Rendleman, Connor Schenck, Richard E. Turner, René Wagner, Adrian Weller, Krzysztof Choromanski