Regularization Module
Regularization modules are increasingly used in deep learning to improve model robustness and performance, particularly when training data are limited, noisy, or class-imbalanced. Current research focuses on novel regularization techniques, including non-parametric methods, sparse geometric consistency constraints, and wavelet-based penalties, often integrated into architectures such as neural radiance fields (NeRFs), ResNets, and transformers. These advances address challenges in diverse applications, including medical image analysis, few-shot learning, and facial expression recognition, leading to more accurate and reliable models across domains.
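The common thread among these techniques is that a regularization term is added to the task loss to penalize undesirable model behavior. As a minimal sketch (not any specific method from the papers above), the snippet below shows the classic L2 (weight-decay) form of this idea; the function names and the coefficient `lam` are illustrative choices, not an established API.

```python
import numpy as np

def l2_penalty(weights, lam):
    """L2 regularization: lam times the sum of squared weights (weight decay)."""
    return lam * sum(np.sum(w ** 2) for w in weights)

def regularized_loss(data_loss, weights, lam=0.01):
    # Total objective = task loss + regularization term.
    # Larger lam pushes weights toward zero, trading fit for smoothness.
    return data_loss + l2_penalty(weights, lam)

weights = [np.array([1.0, -2.0]), np.array([3.0])]
# penalty = 0.01 * (1 + 4 + 9) = 0.14, so total ≈ 0.64
print(regularized_loss(0.5, weights))
```

More elaborate modules (e.g., wavelet-based or geometric-consistency penalties) follow the same pattern but replace the squared-norm term with a penalty computed from a transform of the weights or the model's predictions.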