Scale Equivariance

Scale equivariance in deep learning focuses on designing neural networks whose outputs transform predictably under input rescaling: formally, a map Φ is scale-equivariant if Φ(S_s x) = S_s Φ(x) for every scaling operator S_s, so that resizing the input resizes the output rather than changing it arbitrarily. This property improves generalization across resolutions. Current research explores various architectures, including Fourier-based layers, Lie group convolutions, and Bessel-convolutional networks, that aim to achieve true scale equivariance, often in conjunction with rotation equivariance. This research is significant because scale-equivariant models offer enhanced robustness and generalization in computer vision tasks, particularly image classification and segmentation, where objects appear at widely varying sizes. A key goal is the development of provably scale-covariant networks, leading to more reliable and efficient algorithms.
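
To make the equivariance property concrete, here is a minimal PyTorch sketch (the helpers `rescale` and `lifted_conv` are hypothetical, and dilated filter copies are only a crude stand-in for the properly rescaled filter banks used in the literature): the layer lifts the input onto an explicit scale axis, so that spatially rescaling the input approximately corresponds to a shift along that axis.

```python
import torch
import torch.nn.functional as F

def rescale(x, factor):
    """Spatially rescale an image batch (N, C, H, W) by `factor`."""
    return F.interpolate(x, scale_factor=factor, mode="bilinear",
                         align_corners=False)

def lifted_conv(x, base_filter, dilations=(1, 2, 4)):
    """Toy scale-lifting layer: correlate the input with dilated copies of
    one base filter. The output gains a scale axis, (N, S, C_out, H, W);
    for this construction, rescaling the input by 2 should roughly match a
    one-step shift along the scale axis, which is the equivariance property
    the architectures above are designed to satisfy exactly."""
    outs = []
    for d in dilations:
        pad = d * (base_filter.shape[-1] // 2)  # keep spatial size (odd kernels)
        outs.append(F.conv2d(x, base_filter, padding=pad, dilation=d))
    return torch.stack(outs, dim=1)

# Numerical check: apply the layer then rescale, versus rescale the input
# then read off the shifted scale index. Exact equality is not expected on
# a discrete grid: bilinear interpolation and zero padding at the boundary
# both introduce small errors.
torch.manual_seed(0)
x = torch.randn(1, 1, 64, 64)
w = torch.randn(1, 1, 3, 3)

layer_then_scale = rescale(lifted_conv(x, w)[:, 0], 2.0)  # scale index 0: dilation 1
scale_then_layer = lifted_conv(rescale(x, 2.0), w)[:, 1]  # scale index 1: dilation 2
err = (layer_then_scale - scale_then_layer).abs().mean()
print(f"mean equivariance error: {err:.4f}")
```

The residual error here comes purely from discretization; the papers below construct layers (e.g., via Fourier, Lie group, or Bessel parameterizations) for which the corresponding relation holds exactly or provably in the continuous domain.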

Papers