Novel Regularization
Novel regularization techniques in machine learning aim to improve model performance, stability, and generalization by adding constraints to the training process. Current research focuses on regularization methods tailored to specific challenges, such as training GANs with limited data (using multi-scale structural self-dissimilarity), preserving geometric structure in hyperbolic neural networks (via the Gromov-Wasserstein distance), and mitigating catastrophic forgetting in continual learning (through centroid matching). These advances enhance model robustness, efficiency (e.g., by reducing network depth), and fairness, improving accuracy and applicability across diverse domains, including medical imaging and EEG analysis.
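The common pattern behind all of these methods is the same: a penalty term is added to the task loss, and the two are minimized jointly. The sketch below illustrates this pattern in PyTorch, assuming a simple L2 weight penalty as a stand-in for the specialized terms mentioned above (structural self-dissimilarity, Gromov-Wasserstein, centroid matching); the model, data, and the `reg_weight` coefficient are all illustrative placeholders, not taken from any of the papers.

```python
import torch
import torch.nn as nn

# Toy model and data; the specific architecture is irrelevant to the pattern.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(64, 10)  # placeholder batch
y = torch.randn(64, 1)

reg_weight = 1e-4  # hypothetical coefficient trading off task loss vs. penalty

def penalty(m: nn.Module) -> torch.Tensor:
    """Stand-in regularizer: sum of squared weights. A novel method would
    swap in its own term (e.g., a structural-dissimilarity or
    centroid-matching penalty) while keeping the training loop unchanged."""
    return sum(p.pow(2).sum() for p in m.parameters())

for step in range(100):
    optimizer.zero_grad()
    task_loss = criterion(model(x), y)                    # standard objective
    total_loss = task_loss + reg_weight * penalty(model)  # regularized objective
    total_loss.backward()
    optimizer.step()
```

Because the penalty enters only as an extra additive term, the specialized regularizers surveyed here can typically be dropped into an existing training loop without restructuring the optimizer or the model.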