Pseudo Bias
Pseudo bias refers to unintended biases that machine learning models pick up from their training data, such as spurious correlations between incidental features and labels, which hinder generalization and robustness. Current research focuses on identifying and mitigating these biases, often through pseudo-labeling (assigning bias labels to training samples based on a model's own predictions) and architectural modifications (e.g., using shallow networks to separate shortcut cues from core features), as sketched below. These efforts aim to improve model fairness, reliability, and trustworthiness, particularly in sensitive applications such as healthcare, where biased models can have significant negative consequences.
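To make the two mitigation ideas above concrete, here is a minimal PyTorch sketch of one common pattern (an illustrative assumption, not the method of any specific paper summarized here): a deliberately shallow network is trained normally so it tends to latch onto easy, bias-aligned cues; its confident, correct predictions serve as pseudo bias labels; and the main model's per-sample loss is up-weighted on the remaining, bias-conflicting samples. The class names, confidence threshold, and 3x weight are hypothetical choices for illustration.

```python
# Minimal sketch of debiasing with pseudo bias labels (illustrative only;
# names, threshold, and weights are assumptions, not a specific paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowBiasNet(nn.Module):
    """Intentionally shallow model expected to fit easy, bias-aligned shortcuts."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)

    def forward(self, x):
        return self.fc(x)

class MainNet(nn.Module):
    """Higher-capacity model we want to debias."""
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def debias_step(bias_model, main_model, opt_b, opt_m, x, y, conf_thresh=0.9):
    # 1) Train the shallow model with plain cross-entropy; it tends to rely on shortcuts.
    opt_b.zero_grad()
    logits_b = bias_model(x)
    loss_b = F.cross_entropy(logits_b, y)
    loss_b.backward()
    opt_b.step()

    # 2) Pseudo bias labels: samples the shallow model classifies confidently and
    #    correctly are treated as bias-aligned; the rest as bias-conflicting.
    with torch.no_grad():
        probs = F.softmax(logits_b.detach(), dim=1)
        conf, pred = probs.max(dim=1)
        bias_aligned = (pred == y) & (conf > conf_thresh)

    # 3) Up-weight bias-conflicting samples in the main model's loss so it
    #    depends less on the shortcut features.
    opt_m.zero_grad()
    per_sample = F.cross_entropy(main_model(x), y, reduction="none")
    weights = torch.ones_like(per_sample)
    weights[~bias_aligned] = 3.0  # assumed up-weighting factor
    loss_m = (weights * per_sample).mean()
    loss_m.backward()
    opt_m.step()
    return loss_b.item(), loss_m.item()

if __name__ == "__main__":
    # Toy usage on random data, just to show the call pattern.
    x = torch.randn(64, 32)
    y = torch.randint(0, 4, (64,))
    bias_model, main_model = ShallowBiasNet(32, 4), MainNet(32, 4)
    opt_b = torch.optim.SGD(bias_model.parameters(), lr=0.1)
    opt_m = torch.optim.SGD(main_model.parameters(), lr=0.1)
    print(debias_step(bias_model, main_model, opt_b, opt_m, x, y))
```

The key design choice in this sketch is using the shallow model's own predictions, rather than human annotations, to decide which samples look bias-aligned; in practice the confidence threshold and reweighting scheme would be tuned per dataset.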