Structural Bias
Structural bias in machine learning refers to systematic error arising from the inherent structure of data or algorithms, leading to unfair or inaccurate predictions. Current research focuses on detecting and mitigating these biases across various domains, including graph neural networks, large language models, and optimization algorithms, employing techniques such as fairness regularizers, causal inference, and disentangled representation learning. Understanding and addressing structural bias is crucial for improving the fairness, robustness, and generalizability of machine learning models, and directly affects their reliability in applications ranging from decision-support systems to natural language processing.
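To make the "fairness regularizer" idea above concrete, the following is a minimal sketch, not a reference implementation: logistic regression trained with an added demographic-parity penalty that discourages the mean predicted score from differing across two groups. The function `train_fair_logreg`, the penalty weight `lam`, and the synthetic data are all illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=0.0, lr=0.5, steps=2000):
    """Logistic regression with a demographic-parity regularizer.

    Minimizes log-loss + lam * |E[p | group=0] - E[p | group=1]|,
    where p are the predicted probabilities. lam=0 recovers plain
    logistic regression. (Illustrative sketch, full-batch gradient descent.)
    """
    w = np.zeros(X.shape[1])
    m0, m1 = group == 0, group == 1
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)          # log-loss gradient
        gap = p[m0].mean() - p[m1].mean()           # demographic-parity gap
        s = p * (1.0 - p)                           # sigmoid derivative
        grad_gap = ((X[m0] * s[m0][:, None]).mean(axis=0)
                    - (X[m1] * s[m1][:, None]).mean(axis=0))
        w -= lr * (grad_loss + lam * np.sign(gap) * grad_gap)
    return w

def dp_gap(w, X, group):
    """Absolute difference in mean predicted score between groups."""
    p = sigmoid(X @ w)
    return abs(p[group == 0].mean() - p[group == 1].mean())

# Synthetic data whose feature is correlated with group membership,
# so an unconstrained model inherits a structural bias from the data.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(group, 1.0, n), np.ones(n)])  # feature + bias
y = (rng.random(n) < sigmoid(X[:, 0] - 0.5)).astype(float)

w_plain = train_fair_logreg(X, y, group, lam=0.0)
w_fair = train_fair_logreg(X, y, group, lam=5.0)
print("gap without regularizer:", dp_gap(w_plain, X, group))
print("gap with regularizer:   ", dp_gap(w_fair, X, group))
```

The regularized model trades some raw accuracy for a smaller between-group score gap; in practice the weight `lam` is tuned to balance that trade-off, and more refined criteria (e.g. equalized odds) replace the simple mean-score gap used here.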