Auditory Attention
Auditory attention research focuses on understanding how the brain selectively processes sounds in complex environments and on decoding this selective attention from brain activity, primarily using electroencephalography (EEG). Current research heavily utilizes deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs, such as LSTMs), and attention mechanisms, often integrated into hybrid architectures (e.g., CNN-SNN) to improve accuracy and efficiency in identifying attended speakers or sound sources from EEG signals. This field advances our understanding of auditory processing and has direct implications for developing improved hearing aids, brain-computer interfaces, and other assistive technologies that enhance selective listening capabilities.
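To make the modeling approach concrete, below is a minimal sketch of a hybrid CNN-LSTM-attention classifier for EEG-based auditory attention decoding (e.g., a left/right attended-speaker task). It is illustrative only and not taken from any paper listed here; the channel count, window length, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn


class EEGAttentionDecoder(nn.Module):
    """Hypothetical CNN + LSTM + attention decoder for binary
    auditory attention decoding (e.g., attended speaker: left vs. right).
    Input: EEG windows shaped (batch, channels, time_samples)."""

    def __init__(self, n_channels=64, n_classes=2, hidden=64):
        super().__init__()
        # Convolutional front end: learns spatio-temporal EEG filters.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ELU(),
            nn.MaxPool1d(2),
        )
        # Recurrent layer models longer-range temporal context.
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        # Additive attention pools the LSTM outputs over time.
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 64, time')
        feats = feats.transpose(1, 2)  # (batch, time', 64)
        out, _ = self.lstm(feats)      # (batch, time', 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)  # attention over time
        pooled = (weights * out).sum(dim=1)             # (batch, 2*hidden)
        return self.classifier(pooled)                  # class logits


if __name__ == "__main__":
    # Toy forward pass: 8 windows of 64-channel EEG, 2 s at 128 Hz (assumed).
    model = EEGAttentionDecoder()
    eeg = torch.randn(8, 64, 256)
    print(model(eeg).shape)  # torch.Size([8, 2])
```

The CNN stage stands in for spatial/temporal feature extraction, the bidirectional LSTM for sequence modeling, and the softmax-weighted pooling for the attention mechanism mentioned above; published systems differ substantially in each of these components.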
Papers
Single-word Auditory Attention Decoding Using Deep Learning Model
Nhan Duc Thanh Nguyen, Huy Phan, Kaare Mikkelsen, Preben Kidmose
DARNet: Dual Attention Refinement Network with Spatiotemporal Construction for Auditory Attention Detection
Sheng Yan, Cunhang Fan, Hongyu Zhang, Xiaoke Yang, Jianhua Tao, Zhao Lv