Frequency Bias

Frequency bias (also called spectral bias) in neural networks refers to their tendency to learn the low-frequency components of a target function before the high-frequency ones, limiting how accurately they represent fine detail and rapidly varying signals. Current research focuses on mitigating this bias through techniques such as adjusting model initialization, employing specialized filters or regularization methods, and modifying architectures such as Transformers and state-space models. Understanding and addressing frequency bias is crucial for improving the accuracy and robustness of machine learning applications including image classification, time series forecasting, and biological signal modeling.
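The phenomenon can be sketched with the common kernel-based (NTK-style) account: under gradient flow, each Fourier mode of the residual decays at a rate set by the kernel's eigenvalue for that mode, and many kernels assign larger eigenvalues to lower frequencies. A minimal NumPy illustration, where the power-law eigenvalue profile is an assumption chosen for the sketch rather than any particular network's spectrum:

```python
import numpy as np

# Target signal: one low-frequency and one high-frequency component on [0, 1).
n = 256
x = np.arange(n) / n
target = np.sin(2 * np.pi * 1 * x) + np.sin(2 * np.pi * 16 * x)

# Under gradient flow on kernel regression, the Fourier mode at frequency k
# decays like exp(-lr * eigval_k * t). Assumed eigenvalue profile that decays
# with frequency, mimicking the low-frequency preference of common kernels:
freqs = np.fft.rfftfreq(n, d=1 / n)      # 0, 1, ..., n/2 cycles per unit
eigvals = 1.0 / (1.0 + freqs) ** 2       # hypothetical power-law decay

coeffs = np.abs(np.fft.rfft(target)) / n  # per-mode magnitude of the target
lr, steps = 0.5, 200
residual = coeffs * np.exp(-lr * eigvals * steps)

low_err = residual[1]    # residual at 1 cycle (low frequency)
high_err = residual[16]  # residual at 16 cycles (high frequency)
print(low_err < high_err)  # → True: the low-frequency mode converges first
```

After the same number of steps, the low-frequency mode is fit almost exactly while most of the high-frequency error remains, which is the qualitative signature of frequency bias that the mitigation techniques above try to counteract.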

Papers