Leave-One-Out Distinguishability
Leave-one-out distinguishability (LOOD) measures how much a single data point changes a machine learning model's output, offering a window into data memorization, information leakage, and the influence of individual training examples. Current research quantifies LOOD with analytical frameworks, often modeling trained networks as Gaussian processes, and adapts existing clustering algorithms such as k-means to improve cluster separability. This line of work matters for model interpretability, for mitigating privacy risks tied to data leakage, and for improving the robustness and reliability of machine learning models across applications such as anomaly detection and classification.
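The leave-one-out comparison at the heart of LOOD can be illustrated with a minimal sketch. The setup below (ridge regression, the `lood_score` helper, and all parameter names) is an illustrative assumption, not a method from the papers summarized here: train once on the full dataset and once with a single point removed, then measure how far the two models' predictions at a query point diverge.

```python
import numpy as np

def lood_score(X, y, query, idx, alpha=1e-3):
    """Toy LOOD proxy (hypothetical helper): absolute change in a
    ridge-regression prediction at `query` when training point `idx`
    is left out. Larger values mean the point is more distinguishable."""
    def fit_predict(Xt, yt):
        # Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y
        d = Xt.shape[1]
        w = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(d), Xt.T @ yt)
        return query @ w

    full = fit_predict(X, y)                                  # trained on all points
    loo = fit_predict(np.delete(X, idx, axis=0),              # trained without point idx
                      np.delete(y, idx))
    return abs(full - loo)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Query at training point 0: removing that very point typically shifts
# the local prediction more than removing most unrelated points.
scores = [lood_score(X, y, X[0], i) for i in range(len(X))]
```

Analytical treatments replace the retraining step with a closed-form model of the output distribution (e.g., a Gaussian process over network outputs), which avoids training one model per held-out point.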
Papers
April 24, 2024
September 29, 2023
August 6, 2023
February 23, 2023