Speaker Verification Fairness
Speaker verification fairness focuses on mitigating biases in automated speaker recognition systems that lead to unequal performance across different demographic groups. Current research emphasizes developing and comparing fairness metrics, exploring techniques like adversarial reweighting and unsupervised clustering to improve model performance for underrepresented groups (e.g., those with specific accents or from certain geographic regions), and investigating the robustness of fairness-enhancing methods against malicious attacks. Addressing these biases is crucial for ensuring equitable access to technologies reliant on speaker verification and promoting trust in these systems.
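The fairness metrics mentioned above are commonly built from per-group error rates of the verification system. As an illustrative sketch (not taken from any specific paper; the function names, the synthetic scores, and the two hypothetical demographic groups are all assumptions), one can compute an equal error rate (EER) per group and report the spread between groups as a simple fairness gap:

```python
import numpy as np

def eer(scores, labels):
    """Approximate equal error rate: min over thresholds of max(FAR, FRR)."""
    best = 1.0
    for t in np.unique(scores):
        accept = scores >= t
        far = accept[labels == 0].mean()      # false accepts among impostor trials
        frr = (~accept[labels == 1]).mean()   # false rejects among target trials
        best = min(best, max(far, frr))
    return best

def eer_gap(scores, labels, groups):
    """Fairness gap: spread (max - min) of per-group EERs."""
    per_group = {g: eer(scores[groups == g], labels[groups == g])
                 for g in np.unique(groups)}
    return max(per_group.values()) - min(per_group.values()), per_group

# Synthetic trial scores for two hypothetical demographic groups.
# Group "B" target scores overlap more with impostor scores, mimicking
# an underrepresented group that the model verifies less reliably.
rng = np.random.default_rng(0)
n = 200  # trials per group (half target, half impostor)
groups = np.array(["A"] * n + ["B"] * n)
labels = np.tile(np.r_[np.ones(n // 2, int), np.zeros(n // 2, int)], 2)
scores = np.r_[
    rng.normal(2.0, 1.0, n // 2), rng.normal(0.0, 1.0, n // 2),  # group A
    rng.normal(1.0, 1.0, n // 2), rng.normal(0.0, 1.0, n // 2),  # group B
]
gap, per_group = eer_gap(scores, labels, groups)
```

A zero gap indicates equal EERs across groups; mitigation techniques such as adversarial reweighting aim to shrink this spread without degrading overall accuracy.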