Receptive Field
A receptive field, in the context of neural networks, defines the region of input data that influences the output of a single neuron or feature map. Current research focuses on expanding receptive fields to improve the ability of models (like U-Nets, Transformers, and Mamba-based architectures) to capture long-range dependencies and contextual information, particularly in image segmentation and time series forecasting. This is achieved through techniques such as dilated convolutions, attention mechanisms, and novel scanning strategies, ultimately aiming for improved accuracy and efficiency in various applications, including medical image analysis and remote sensing. The impact of receptive field size on model performance and generalization is a key area of investigation, with a growing emphasis on balancing computational cost with the benefits of broader contextual understanding.
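To make the notion concrete, the sketch below applies the standard receptive-field recurrence (receptive field grows by (kernel − 1) × dilation × cumulative stride at each layer) to stacked convolutions, showing how dilation widens coverage without adding parameters. It is a minimal illustration in plain Python, not taken from any of the papers listed here.

```python
# Minimal sketch: how kernel size, stride, and dilation grow a CNN's
# receptive field layer by layer. Pure Python, no framework required.

def receptive_field(layers):
    """layers: list of (kernel_size, stride, dilation) tuples, input to output.
    Returns the receptive field size (in input pixels) after each layer."""
    rf, jump = 1, 1              # one pixel sees itself; unit step between outputs
    sizes = []
    for k, s, d in layers:
        rf += (k - 1) * d * jump  # dilation stretches the effective kernel extent
        jump *= s                 # stride compounds the spacing between output units
        sizes.append(rf)
    return sizes

# Three stacked 3x3 convs, stride 1, no dilation: receptive field grows linearly.
print(receptive_field([(3, 1, 1)] * 3))                    # [3, 5, 7]

# Same depth with dilations 1, 2, 4: coverage grows far faster for the same
# parameter count, which is the usual motivation for dilated convolutions.
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]))  # [3, 7, 15]
```

The same recurrence explains why attention and state-space scanning strategies are attractive: they reach a global receptive field in a single layer rather than accumulating it over depth.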
Papers
Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining
Jiarun Liu, Hao Yang, Hong-Yu Zhou, Yan Xi, Lequan Yu, Yizhou Yu, Yong Liang, Guangming Shi, Shaoting Zhang, Hairong Zheng, Shanshan Wang
Densely Decoded Networks with Adaptive Deep Supervision for Medical Image Segmentation
Suraj Mishra, Danny Z. Chen
Neural Echos: Depthwise Convolutional Filters Replicate Biological Receptive Fields
Zahra Babaiee, Peyman M. Kiasari, Daniela Rus, Radu Grosu
Slicer Networks
Hang Zhang, Xiang Chen, Rongguang Wang, Renjiu Hu, Dongdong Liu, Gaolei Li
Enhancing Small Object Encoding in Deep Neural Networks: Introducing Fast&Focused-Net with Volume-wise Dot Product Layer
Tofik Ali, Partha Pratim Roy