Confidence Relaxation
Confidence relaxation aims to improve model performance and robustness by mitigating the harmful effects of overconfident predictions or overly hard constraints. Current research applies this principle across diverse areas: improving the efficiency and accuracy of machine learning algorithms (e.g., Gromov-Wasserstein distance approximation, relaxed k-means clustering), enhancing video quality assessment and speech recognition, and building more generalizable self-supervised learning models. These advances promise more reliable and widely applicable machine learning in fields ranging from healthcare to computer vision.
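One widely used instance of this idea is label smoothing, which relaxes hard one-hot training targets so a model is never pushed toward fully confident 0/1 probabilities. The sketch below is illustrative only (the smoothing factor `eps` and helper name are assumptions, not taken from any specific paper above):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Relax hard one-hot targets toward the uniform distribution.

    A fully confident target such as [0, 1, 0] becomes
    [eps/K, 1 - eps + eps/K, eps/K], where K is the number of classes.
    Training against these softened targets penalizes overconfident
    predictions, one simple form of confidence relaxation.
    """
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

hard = np.array([0.0, 1.0, 0.0])
soft = smooth_labels(hard, eps=0.1)
# soft still sums to 1, but no class receives probability exactly 0 or 1
```

The same pattern of replacing a hard assignment with a soft one underlies, for example, relaxed k-means, where each point receives fractional cluster memberships instead of a single hard label.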
Papers
November 2, 2024
October 24, 2024
October 20, 2024
August 21, 2024
July 16, 2024
May 27, 2024
March 3, 2024
October 29, 2023
September 8, 2023
August 12, 2023
July 20, 2023
March 15, 2023
March 12, 2023
February 23, 2023
December 27, 2022
November 22, 2022
September 20, 2022
July 12, 2022