dlADMM Algorithm
Deep learning Alternating Direction Method of Multipliers (dlADMM) algorithms aim to improve the efficiency and convergence of neural network training by recasting it as a constrained optimization problem: training is decomposed into layer-wise subproblems that are solved alternately, with Lagrange multiplier updates enforcing consistency between layers. This sidesteps limitations of traditional gradient descent methods, such as vanishing gradients and sensitivity to learning-rate choices. Current research focuses on accelerating dlADMM's convergence through techniques such as Anderson acceleration (sketched below) and on extending it to decentralized federated learning, often with strategies that mitigate model inconsistency and overfitting. These advances hold significant promise for improving the scalability and performance of deep learning models across applications ranging from wireless communication to adversarial-attack defense and medical prediction.
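Anderson acceleration, one of the techniques mentioned above, treats an iterative solver such as dlADMM as a fixed-point map x <- g(x) and extrapolates over a window of recent iterates to speed convergence. Below is a minimal sketch of generic type-II Anderson acceleration (in the Walker-Ni formulation) in NumPy; the function name anderson_accelerate, the window size m, and the toy contraction g are illustrative assumptions, not code from any of the papers surveyed here.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, max_iter=100, tol=1e-10):
    """Type-II Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps a sliding window of recent g-values and residuals f_k = g(x_k) - x_k,
    and combines them via a small least-squares problem over residual differences.
    """
    x = np.asarray(x0, dtype=float).copy()
    G_hist, F_hist = [], []              # recent g(x_k) values and residuals
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:          # keep at most m+1 window entries
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                       # plain fixed-point step to build history
        else:
            # gamma minimizes ||f - dF @ gamma|| over the window of differences.
            dF = np.diff(np.stack(F_hist, axis=1), axis=1)
            dG = np.diff(np.stack(G_hist, axis=1), axis=1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma          # accelerated iterate
    return x, max_iter

# Illustrative toy map (an assumption): a componentwise contraction with a
# unique fixed point; in dlADMM, g would be one full pass of ADMM updates.
g = lambda x: 0.5 * np.cos(x) + 0.1
x_star, iters = anderson_accelerate(g, np.zeros(4))
print(f"converged in {iters} iterations to {x_star}")
```

The difference-based least-squares form avoids the explicit sum-to-one constraint on the mixing weights, which keeps the per-iteration cost to one small dense solve over at most m columns.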