Distributional Assumption
Distributional assumptions in machine learning and related fields concern the probability distributions presumed to underlie the data; when these assumptions are biased or misspecified, model performance and reliability suffer. Current research focuses on mitigating these effects with techniques such as Wasserstein distance-based robustness, importance reweighting for bias correction, and distributional counterfactual explanations for improved interpretability. These advances are crucial for building robust, trustworthy models, particularly in settings where data scarcity, adversarial attacks, or evolving data distributions are prevalent.
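To make the importance-reweighting idea mentioned above concrete, here is a minimal sketch of bias correction under covariate shift: a domain classifier is trained to distinguish source from target samples, and its predicted odds estimate the density ratio used as per-sample weights. This is a generic illustration, not the method of any specific paper; all function and variable names are illustrative assumptions.

```python
# Minimal sketch of importance reweighting for covariate shift.
# A probabilistic classifier separates "source" from "target" samples;
# its predicted odds estimate w(x) ~ p_target(x) / p_source(x), which can
# then be used as sample weights when fitting a downstream model.
# Names are illustrative, not taken from any particular paper.

import numpy as np
from sklearn.linear_model import LogisticRegression


def estimate_importance_weights(X_source, X_target, clip=10.0):
    """Estimate w(x) ~ p_target(x) / p_source(x) with a domain classifier."""
    X = np.vstack([X_source, X_target])
    # Domain labels: 0 = source, 1 = target.
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])

    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_target = clf.predict_proba(X_source)[:, 1]          # P(target | x)
    ratio = p_target / np.clip(1.0 - p_target, 1e-12, None)

    # Adjust for unequal sample sizes and clip extreme weights for stability.
    ratio *= len(X_source) / len(X_target)
    return np.clip(ratio, 0.0, clip)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Source covariates drawn from N(0, 1); target covariates shifted to N(1, 1).
    X_source = rng.normal(0.0, 1.0, size=(2000, 1))
    X_target = rng.normal(1.0, 1.0, size=(2000, 1))

    w = estimate_importance_weights(X_source, X_target)
    # The reweighted source mean should move toward the target mean (~1.0).
    print("unweighted source mean:", X_source.mean())
    print("reweighted source mean:", np.average(X_source[:, 0], weights=w))
```

Reweighting the source sample in this way approximates expectations under the target distribution, which is the basic mechanism behind bias-corrected training and evaluation when the deployment distribution differs from the training one.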
Papers
Nineteen papers on this topic, dated from September 9, 2023 through October 22, 2024.