Score Normalization
Score normalization adjusts a model's raw output scores onto a common scale, improving the reliability and fairness of decisions made from those scores. Current research focuses on normalization methods tailored to specific applications, such as improving fairness in biometric systems, enhancing uncertainty estimation in large language models, and accelerating diffusion model sampling. These advances matter because they yield more robust and reliable systems across domains such as security, natural language processing, and image generation.
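The listed papers use application-specific schemes, but the core idea can be illustrated with two classic, generic normalizations: min-max rescaling to [0, 1] and z-score standardization to zero mean and unit variance. The sketch below is illustrative only and is not drawn from any of the papers listed here.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale scores linearly to the [0, 1] range (min-max normalization)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        # All scores identical: no spread to normalize, map everything to 0.
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def z_score_normalize(scores):
    """Shift and scale scores to zero mean and unit variance (z-score)."""
    scores = np.asarray(scores, dtype=float)
    std = scores.std()
    if std == 0:
        return np.zeros_like(scores)
    return (scores - scores.mean()) / std

raw = [2.0, 4.0, 6.0, 8.0]
print(min_max_normalize(raw))  # values lie in [0, 1]
print(z_score_normalize(raw))  # mean 0, standard deviation 1
```

Min-max is the usual choice when scores must be interpretable as bounded confidences; z-score is preferred when scores from different models or sessions must be compared on a common statistical footing.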
Papers
- July 19, 2024
- June 17, 2024
- February 19, 2024
- October 31, 2023
- July 1, 2023
- October 18, 2022
- March 28, 2022