Diversity Awareness
Diversity awareness in artificial intelligence focuses on mitigating bias and making AI systems fairer and more inclusive by addressing the underrepresentation of diverse populations in data and models. Current research emphasizes metrics that quantify diversity in synthetic datasets and generated outputs, techniques such as contrastive learning and diffusion models that increase the diversity of generated content, and methods for adapting large language models to better represent diverse linguistic and cultural contexts. This work is crucial for the responsible development and deployment of AI: it helps prevent discriminatory outcomes and promotes equitable access to AI's benefits across populations.
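To make the "metrics that quantify diversity" thread concrete, here is a minimal sketch of the Vendi Score, the kernel-based diversity metric that the Conditional Vendi Score paper below builds on. The score is the exponential of the Shannon entropy of the eigenvalues of a normalized similarity matrix, and can be read as the "effective number of distinct samples" in a set. The RBF kernel and toy 1-D samples are illustrative choices, not taken from any of the listed papers.

```python
import numpy as np

def vendi_score(samples, kernel):
    """Exponential of the Shannon entropy of the eigenvalues of the
    normalized kernel matrix K/n -- the 'effective number' of distinct samples."""
    n = len(samples)
    K = np.array([[kernel(a, b) for b in samples] for a in samples])
    eigvals = np.linalg.eigvalsh(K / n)   # for a kernel with k(x, x) = 1, these sum to 1
    eigvals = eigvals[eigvals > 1e-12]    # drop numerical zeros before taking logs
    return float(np.exp(-np.sum(eigvals * np.log(eigvals))))

# Illustrative RBF similarity kernel on 1-D points; k(x, x) = 1,
# so the score ranges from 1 (all identical) to n (all distinct).
rbf = lambda a, b: np.exp(-(a - b) ** 2)

identical = [0.0, 0.0, 0.0, 0.0]
spread = [0.0, 10.0, 20.0, 30.0]
print(vendi_score(identical, rbf))  # → 1.0: no diversity
print(vendi_score(spread, rbf))     # → ≈4.0: four effectively distinct samples
```

Because the score depends only on pairwise similarities, the same function works for text or image embeddings by swapping in a kernel over embedding vectors.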
Papers
PASSION for Dermatology: Bridging the Diversity Gap with Pigmented Skin Images from Sub-Saharan Africa
Philippe Gottfrois, Fabian Gröger, Faly Herizo Andriambololoniaina, Ludovic Amruthalingam, Alvaro Gonzalez-Jimenez, Christophe Hsu, Agnes Kessy, Simone Lionetti, Daudi Mavura, Wingston Ng'ambi, Dingase Faith Ngongonda, Marc Pouly, Mendrika Fifaliana Rakotoarisaona, Fahafahantsoa Rapelanoro Rabenja, Ibrahima Traoré, Alexander A. Navarini
Enabling Adaptive Agent Training in Open-Ended Simulators by Targeting Diversity
Robby Costales, Stefanos Nikolaidis
Growing a Tail: Increasing Output Diversity in Large Language Models
Michal Shur-Ofry, Bar Horowitz-Amsalem, Adir Rahamim, Yonatan Belinkov
Conditional Vendi Score: An Information-Theoretic Approach to Diversity Evaluation of Prompt-based Generative Models
Mohammad Jalali, Azim Ospanov, Amin Gohari, Farzan Farnia
Saliency-Based diversity and fairness Metric and FaceKeepOriginalAugment: A Novel Approach for Enhancing Fairness and Diversity
Teerath Kumar, Alessandra Mileo, Malika Bendechache
On the Role of Depth and Looping for In-Context Learning with Task Diversity
Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi, Stefanie Jegelka, Sanjiv Kumar
On the Diversity of Synthetic Data and its Impact on Training Large Language Models
Hao Chen, Abdul Waheed, Xiang Li, Yidong Wang, Jindong Wang, Bhiksha Raj, Marah I. Abdin
GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets
Oh Joon Kwon, Daiki E. Matsunaga, Kee-Eung Kim
Theoretical Aspects of Bias and Diversity in Minimum Bayes Risk Decoding
Hidetaka Kamigaito, Hiroyuki Deguchi, Yusuke Sakai, Katsuhiko Hayashi, Taro Watanabe