Group Discrimination
Group discrimination in machine learning concerns identifying and mitigating biases that lead to unfair or unequal outcomes for different demographic groups. Current research emphasizes methods for measuring discrimination across model types, including large language models and neural networks, often employing techniques such as contrastive learning, attention mechanisms, and causal inference to pinpoint and correct biases. This work is crucial for ensuring that fairness and ethical considerations are built into AI applications, with impact on fields ranging from hiring and loan decisions to healthcare and criminal justice. The ultimate goal is to create more equitable and trustworthy AI systems.
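As a concrete illustration of measuring discrimination between groups, the sketch below computes the demographic parity difference, one standard group-fairness metric: the absolute gap in positive-prediction rates between two demographic groups. The function name, group labels, and data are hypothetical, chosen for illustration only.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (exactly two distinct values), same length
    """
    # Tally positive predictions and totals per group.
    pos, tot = {}, {}
    for y, g in zip(predictions, groups):
        pos[g] = pos.get(g, 0) + y
        tot[g] = tot.get(g, 0) + 1
    rates = {g: pos[g] / tot[g] for g in tot}
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Hypothetical predictions for members of groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A positive rate: 3/4 = 0.75; group B: 1/4 = 0.25; gap = 0.5.
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A value near zero indicates the model assigns positive outcomes at similar rates across groups; larger values flag potential group discrimination that the mitigation techniques above aim to reduce.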
Papers
Does In-Context Learning Really Learn? Rethinking How Large Language Models Respond and Solve Tasks via In-Context Learning
Quanyu Long, Yin Wu, Wenya Wang, Sinno Jialin Pan
PromptSync: Bridging Domain Gaps in Vision-Language Models through Class-Aware Prototype Alignment and Discrimination
Anant Khandelwal