MEG Decoder
MEG decoding aims to reconstruct sensory experiences or other brain states from magnetoencephalography (MEG) recordings, with current work focused on improving the accuracy and real-time capability of that reconstruction. Research emphasizes deep learning models, including transformer-based architectures and generative models, often combined with techniques such as contrastive learning and multi-modal fusion to boost decoding performance. Robust MEG decoders would advance our understanding of brain function and enable applications in brain-computer interfaces and neurorehabilitation.
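To make the contrastive-learning idea concrete, here is a minimal sketch of a CLIP-style alignment objective between MEG segment embeddings and stimulus embeddings, similar in spirit to recent MEG decoding work. The encoder architecture, sensor/time dimensions, and function names are illustrative assumptions, not taken from any specific paper above.

```python
# Minimal sketch (assumed shapes and names): contrastive alignment between
# MEG segment embeddings and precomputed stimulus embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEGEncoder(nn.Module):
    """Toy convolutional encoder: (batch, sensors, time) -> (batch, dim)."""
    def __init__(self, n_sensors=272, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 128, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(128, dim, kernel_size=7, padding=3),
            nn.GELU(),
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        h = self.conv(x).mean(dim=-1)          # pool over the time axis
        return F.normalize(self.proj(h), dim=-1)

def contrastive_loss(meg_emb, stim_emb, temperature=0.07):
    """Symmetric InfoNCE: matching MEG/stimulus pairs share an index in the batch."""
    logits = meg_emb @ stim_emb.t() / temperature
    targets = torch.arange(len(meg_emb), device=meg_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Dummy batch: 8 MEG segments (272 sensors, 360 samples) paired with 256-d
# stimulus embeddings, e.g., from a pretrained image or speech model.
meg = torch.randn(8, 272, 360)
stim = F.normalize(torch.randn(8, 256), dim=-1)
loss = contrastive_loss(MEGEncoder()(meg), stim)
print(loss.item())
```

In this setup the MEG encoder is trained so that each recording segment lands close to the embedding of the stimulus the subject perceived, which then allows retrieval or generative reconstruction of stimuli from new recordings.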
Papers
Violet: A Vision-Language Model for Arabic Image Captioning with Gemini Decoder
Abdelrahman Mohamed, Fakhraddin Alwajih, El Moatez Billah Nagoudi, Alcides Alcoba Inciarte, Muhammad Abdul-Mageed
DEED: Dynamic Early Exit on Decoder for Accelerating Encoder-Decoder Transformer Models
Peng Tang, Pengkai Zhu, Tian Li, Srikar Appalaraju, Vijay Mahadevan, R. Manmatha