Randomness Factor
The "randomness factor" in machine learning refers to the influence of stochastic elements (such as data shuffling, weight initialization, and random number generation) on model performance, reproducibility, and fairness. Current research focuses on mitigating the negative impacts of randomness through techniques such as ensembling, noise regularization, and controlled randomness injection, often applied to models like random forests, deep neural networks, and graph neural networks. Understanding and controlling this randomness is crucial for building reliable, robust, and fair machine learning systems, improving both the trustworthiness of scientific results and the safety and efficacy of real-world applications.
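Two of the mitigation ideas above, seeding for reproducibility and ensembling over runs, can be illustrated with a minimal sketch. The `noisy_score` function below is a hypothetical stand-in for a model's evaluation score, which in practice varies with the seed through weight initialization and data shuffling; the numbers are illustrative, not from any paper listed here.

```python
import numpy as np

def noisy_score(seed: int) -> float:
    """Toy stand-in for a model's test score: a fixed 'true' score of 0.85
    plus seed-dependent noise (mimicking init and shuffling effects)."""
    rng = np.random.default_rng(seed)  # isolated generator, no global state
    return 0.85 + 0.05 * rng.standard_normal()

# Fixing the seed makes a single run reproducible...
assert noisy_score(0) == noisy_score(0)

# ...but the result still depends on the arbitrary seed choice.
# Averaging over a seed ensemble reduces that run-to-run variance
# and exposes the spread, which a single run hides.
scores = [noisy_score(s) for s in range(10)]
mean, spread = float(np.mean(scores)), float(np.std(scores))
print(f"seed-ensemble score: {mean:.3f} +/- {spread:.3f}")
```

Reporting the mean and spread over several seeds, rather than one cherry-picked run, is the simplest way to make results robust to this randomness.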