Successive Affine Learning
Successive affine learning (SAL) and related methods, such as successive subspace learning (SSL), train deep neural networks (DNNs) one layer or subspace at a time rather than end-to-end via backpropagation. Current research focuses on improving efficiency and accuracy, particularly in resource-constrained environments like federated learning (FL), and on enhancing model interpretability. These techniques offer potential advantages in reducing computational complexity, improving performance with limited data, and enabling deployment on devices with varying capabilities, with applications in fields such as medical image analysis and distributed machine learning.
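The stage-by-stage, backpropagation-free idea can be illustrated with a minimal sketch. This is not SAL's actual construction (which involves specific bias selection and output-layer training); it is a hypothetical toy in the spirit of SSL's statistics-driven transforms, where each affine stage is learned from the data via PCA and then frozen before the next stage is fit:

```python
import numpy as np

def fit_affine_stage(x, n_components):
    """Learn one affine stage from data statistics alone (PCA-style),
    with no backpropagation through earlier stages."""
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal directions via SVD of the centered data.
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    w = vt[:n_components].T              # (d_in, d_out) projection

    def transform(z):
        # Affine map followed by a ReLU nonlinearity.
        return np.maximum((z - mean) @ w, 0.0)

    return transform

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 16))           # toy data: 200 samples, 16 features

features = x
stages = []
for d_out in (8, 4):                     # learn stages one after another
    stage = fit_affine_stage(features, d_out)
    stages.append(stage)                 # freeze this stage, then move on
    features = stage(features)           # its output feeds the next stage

print(features.shape)                    # (200, 4)
```

Each stage is fit only on the output of the previous (already frozen) stages, which is what makes the procedure cheap and incremental compared with joint end-to-end training.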