Dilated Convolution
Dilated convolution is a modified convolutional operation that expands a network's receptive field without increasing its parameter count, enabling the capture of broader contextual information in images or time series. Current research focuses on integrating dilated convolutions into architectures such as U-Nets, ResNets, and Transformers, often in combination with techniques like attention mechanisms and learnable spacings, to improve performance on tasks ranging from image segmentation and object detection to audio classification and medical image analysis. Because it improves model efficiency and accuracy across diverse applications, the technique is a significant advance in deep learning, particularly for resource-constrained environments and high-resolution data processing.
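The core idea can be illustrated with a minimal sketch in pure Python (the function name and signal values below are illustrative, not from any specific paper): a dilation factor d spaces the kernel taps d samples apart, so a kernel with k weights covers a receptive field of (k - 1) * d + 1 inputs while the parameter count stays at k.

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1D cross-correlation with dilated kernel taps.

    A kernel of length k with dilation d reads inputs spaced d apart,
    giving a receptive field of (k - 1) * d + 1 with only k parameters.
    """
    k = len(w)
    span = (k - 1) * dilation + 1  # receptive field of one output element
    return [
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

signal = [1, 2, 3, 4, 5, 6, 7, 8]
kernel = [1, 0, -1]  # 3 parameters in both cases below

# dilation=1: ordinary convolution, receptive field 3
print(dilated_conv1d(signal, kernel, dilation=1))  # [-2, -2, -2, -2, -2, -2]
# dilation=2: same 3 parameters, receptive field 5
print(dilated_conv1d(signal, kernel, dilation=2))  # [-4, -4, -4, -4]
```

Stacking such layers with exponentially growing dilations (1, 2, 4, ...) grows the receptive field exponentially with depth, which is the property the architectures above exploit.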
Papers
An Efficient Speech Separation Network Based on Recurrent Fusion Dilated Convolution and Channel Attention
Junyu Wang
Domestic Activities Classification from Audio Recordings Using Multi-scale Dilated Depthwise Separable Convolutional Network
Yufei Zeng, Yanxiong Li, Zhenfeng Zhou, Ruiqi Wang, Difeng Lu