Non-Convex Loss Functions
Non-convex loss functions pose significant challenges in machine learning because they admit many local minima and saddle points, so standard optimizers carry no guarantee of reaching a global optimum. Current research focuses on novel algorithms for federated learning, Byzantine-resilient training, and differentially private optimization, often employing techniques such as gradient normalization, adaptive client selection, and model sparsification to mitigate the difficulties of non-convexity. These advances are crucial for improving the efficiency, privacy, and robustness of machine learning models in distributed and sensitive-data settings, with applications ranging from healthcare to large-scale data analysis.
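To make one of the techniques mentioned above concrete, the following is a minimal sketch of gradient normalization on a toy non-convex loss. The loss function, step size, and iteration count here are illustrative choices, not drawn from any particular paper: normalizing the gradient to unit length keeps the step size fixed, which avoids vanishing steps in the flat regions that non-convex landscapes often contain.

```python
import numpy as np

def loss(w):
    # Toy non-convex loss: each coordinate has two local minima.
    return np.sum(w**4 - 3 * w**2 + 0.5 * w)

def grad(w):
    # Analytic gradient of the loss above.
    return 4 * w**3 - 6 * w + 0.5

def normalized_gd(w0, lr=0.05, steps=500, eps=1e-12):
    """Gradient descent with gradient normalization.

    Only the gradient *direction* is used; the step length is the
    fixed learning rate, so progress does not stall where the raw
    gradient is tiny.
    """
    w = w0.astype(float).copy()
    for _ in range(steps):
        g = grad(w)
        w -= lr * g / (np.linalg.norm(g) + eps)
    return w

w_star = normalized_gd(np.array([2.0, -2.0]))
print("final w:", w_star, "loss:", loss(w_star))
```

Because the landscape is non-convex, the minimum reached depends on the starting point; the method converges to a nearby local minimum rather than a guaranteed global one, which is exactly the difficulty the surveyed work addresses.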