NLP Fairness
NLP fairness research aims to mitigate biases in natural language processing models and datasets that perpetuate societal inequalities. Current efforts focus on identifying and quantifying biases along axes such as gender, race, and political affiliation; developing techniques to reduce them (e.g., data perturbation, differential privacy, model compression); and evaluating fairness across diverse cultural contexts. This work is crucial for the responsible development and deployment of NLP systems, preventing the amplification of harmful stereotypes, and promoting equitable access to technology.
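To make the data-perturbation idea concrete, below is a minimal sketch of counterfactual augmentation, a common form of this technique in which gendered terms are swapped to balance a training corpus. The swap list (GENDER_PAIRS) and the helper names (gender_swap, augment) are illustrative assumptions, not any specific paper's method; a real pipeline would also need to handle morphology, names, and context-dependent words.

```python
# A minimal sketch of counterfactual data perturbation for gender bias.
# Assumptions: a hand-written swap list and whitespace-level tokenization.
import re

# Hypothetical swap list; extend for a real dataset. Note that possessives
# are ambiguous ("his" can map to "her" or "hers"), a known limitation of
# simple word-pair swapping.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def gender_swap(sentence: str) -> str:
    """Return a counterfactual copy of `sentence` with gendered terms swapped."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_PAIRS.get(word.lower(), word)
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(r"\b\w+\b", swap, sentence)

def augment(corpus: list[str]) -> list[str]:
    """Pair each sentence with its counterfactual so training data is
    balanced across the swapped terms."""
    return [s for sentence in corpus for s in (sentence, gender_swap(sentence))]

if __name__ == "__main__":
    for s in augment(["He is a doctor.", "She works as a nurse."]):
        print(s)
```

Training on the union of original and swapped sentences discourages a model from associating occupations or attributes with one gender, which is the intuition behind perturbation-based mitigation.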