Layer Regularization
Layer regularization in deep learning aims to improve model generalization and robustness by constraining a network's parameters or intermediate representations during training. Current research focuses on novel regularization techniques, including methods that exploit similarity between layers, impose multi-level hierarchies (in both width and depth), and align layer representations across different data distributions or model branches. These advances address challenges such as overfitting, data heterogeneity in federated learning, and vulnerability to backdoor attacks, leading to more efficient, reliable, and secure deep learning systems.
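To make these ideas concrete, the sketch below combines two of the ingredients mentioned above in a single training loss: a per-layer weight penalty (constraining parameters with layer-specific strength) and an alignment penalty between the layer representations produced by two views of the same batch. It assumes PyTorch; the `MLP` architecture, the coefficient values, and the noise-based second view are illustrative assumptions, not drawn from any specific paper.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        h = torch.relu(self.fc1(x))   # intermediate layer representation
        return self.fc2(h), h

def layerwise_l2(model, coeffs):
    # L2 penalty with a separate strength per layer (keys are name prefixes).
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        for prefix, lam in coeffs.items():
            if name.startswith(prefix):
                penalty = penalty + lam * param.pow(2).sum()
    return penalty

def alignment_penalty(h_a, h_b, lam=0.1):
    # Penalize divergence between the representations of two branches/views.
    return lam * (h_a - h_b).pow(2).mean()

model = MLP()
x = torch.randn(8, 32)
x_noisy = x + 0.01 * torch.randn_like(x)      # second "view" of the batch
logits, h = model(x)
_, h_noisy = model(x_noisy)

task_loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss = (task_loss
        + layerwise_l2(model, {"fc1": 1e-4, "fc2": 1e-5})
        + alignment_penalty(h, h_noisy))
loss.backward()   # gradients now include both regularization terms
```

In practice, the alignment term would compare representations across whatever branches the method of interest defines, e.g. two clients in federated learning or two augmentation pipelines; the quadratic distance used here is simply the most basic choice.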