Frequency Bias
Frequency bias (also called spectral bias) in neural networks is the tendency of these models to learn the low-frequency components of a target function before the high-frequency ones, which limits how quickly and accurately they can represent signals with fine detail. Current research focuses on mitigating this bias through techniques such as adjusting model initialization, employing specialized filters or regularization methods, and modifying architectures such as Transformers and state-space models. Understanding and addressing frequency bias is crucial for improving the accuracy and robustness of machine learning applications including image classification, time series forecasting, and biological signal modeling.
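The effect is easy to reproduce in isolation. The sketch below is a minimal illustration, not drawn from any of the listed papers; the two-tone target, network size, and training schedule are arbitrary assumptions. It fits a small ReLU MLP to a signal containing a 2-cycle and a 50-cycle sinusoid and tracks the fitting error at each frequency: the low-frequency residual typically collapses within a few hundred steps, while the high-frequency residual lingers far longer.

```python
# Minimal sketch of frequency (spectral) bias: a ReLU MLP fit to a two-tone
# signal learns the low-frequency component before the high-frequency one.
# All hyperparameters below are illustrative assumptions.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target: sum of a low-frequency and a high-frequency sinusoid on [0, 1).
n = 512
x = torch.arange(n, dtype=torch.float32).unsqueeze(1) / n
low, high = 2.0, 50.0  # cycles over the unit interval
y = torch.sin(2 * math.pi * low * x) + torch.sin(2 * math.pi * high * x)

model = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def spectral_error(pred):
    """Per-frequency residual magnitude via the FFT of the fitting error."""
    err = (y - pred).squeeze(1)
    spectrum = torch.fft.rfft(err).abs() / n
    return spectrum[int(low)].item(), spectrum[int(high)].item()

for step in range(2001):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        e_low, e_high = spectral_error(model(x).detach())
        # The low-frequency residual typically collapses first; the
        # high-frequency residual persists for many more steps.
        print(f"step {step:5d}  residual@{low:.0f}cyc={e_low:.4f}  "
              f"residual@{high:.0f}cyc={e_high:.4f}")
```

A common mitigation in the spirit of the techniques listed above is to re-encode the input with a sinusoidal or random Fourier feature mapping (or to rescale the initialization), which changes which frequencies the network can fit early in training; the specific remedies vary by paper.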
Papers
November 11, 2024
October 15, 2024
October 7, 2024
October 2, 2024
September 19, 2024
July 4, 2024
June 13, 2024
May 23, 2024
April 10, 2024
July 19, 2023
April 6, 2023
November 16, 2022
May 28, 2022
May 9, 2022
May 6, 2022
March 16, 2022
January 19, 2022