dlADMM Algorithm

The deep learning Alternating Direction Method of Multipliers (dlADMM) adapts ADMM to neural network training, decomposing the problem into alternating, layer-wise subproblem solves in order to improve training efficiency and convergence guarantees and to address limitations of traditional gradient descent methods. Current research focuses on accelerating dlADMM's convergence through techniques such as Anderson acceleration, and on extending it to decentralized federated learning, often with strategies that mitigate model inconsistency and overfitting. These advances hold promise for improving the scalability and performance of deep learning models across applications ranging from wireless communication to adversarial attack defense and medical prediction.
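To make the layer-wise decomposition concrete, the sketch below trains a one-hidden-layer network by alternating closed-form block updates on a quadratic-penalty relaxation of the layer-coupling constraints. This is a minimal illustration of the idea, not the authors' dlADMM implementation: the full method additionally uses Lagrange multipliers, backward-forward update sweeps, and quadratic approximations, and every name and dimension here (rho, n_hid, the synthetic data) is an illustrative assumption.

```python
# Minimal sketch (not the published dlADMM code): alternating-minimization
# training of a one-hidden-layer ReLU network on a quadratic-penalty
# relaxation, so that every block update is a closed-form solve.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_samp = 8, 16, 1, 200      # assumed toy dimensions
X = rng.standard_normal((n_in, n_samp))
Y = np.sin(X.sum(axis=0, keepdims=True))         # synthetic regression target
rho, eps = 1.0, 1e-6                             # penalty weight, ridge term

W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
z1 = W1 @ X                                      # pre-activations, layer 1
a1 = np.maximum(z1, 0.0)                         # activations, layer 1
z2 = W2 @ a1                                     # network output block

def lstsq_right(A, B):
    """Solve min_W ||A - W B||_F^2, with a small ridge for stability."""
    return A @ B.T @ np.linalg.inv(B @ B.T + eps * np.eye(B.shape[0]))

for it in range(50):
    # Weight updates: ridge-regularized least squares, one layer at a time.
    W1 = lstsq_right(z1, X)
    W2 = lstsq_right(z2, a1)
    # z1 update: min (z - u)^2 + (a1 - relu(z))^2 elementwise (the shared
    # rho cancels); compare the two sign-restricted candidate minimizers.
    u = W1 @ X
    z_pos = np.maximum((u + a1) / 2.0, 0.0)      # candidate with z >= 0
    z_neg = np.minimum(u, 0.0)                   # candidate with z <= 0
    f_pos = (z_pos - u) ** 2 + (a1 - z_pos) ** 2  # relu(z_pos) == z_pos
    f_neg = (z_neg - u) ** 2 + a1 ** 2            # relu(z_neg) == 0
    z1 = np.where(f_pos <= f_neg, z_pos, z_neg)
    # a1 update: quadratic in a1, so a single linear solve.
    a1 = np.linalg.solve(np.eye(n_hid) + W2.T @ W2,
                         np.maximum(z1, 0.0) + W2.T @ z2)
    # z2 update: balance the data-fit term against the coupling to W2 @ a1.
    z2 = (Y + rho * (W2 @ a1)) / (1.0 + rho)

print("train MSE:", np.mean((W2 @ np.maximum(W1 @ X, 0.0) - Y) ** 2))
```

Each block update exactly minimizes the penalized objective with the other blocks held fixed, which is what lets ADMM-style trainers sidestep backpropagated gradients entirely; acceleration schemes such as Anderson acceleration then treat one full sweep as a fixed-point map and extrapolate across iterates.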

Papers