Diverse Persona
Diverse persona research explores how assigning different identities or characteristics to large language models (LLMs) affects their outputs and capabilities. Current work focuses on understanding and mitigating the biases that persona assignment reveals, developing methods for maintaining a consistent persona across interactions, and building evaluation frameworks that measure persona adherence and faithfulness. This line of research matters because it exposes biases inherent in LLMs and motivates more robust, ethical systems, informing both responsible AI development and personalized applications across sectors such as education, customer service, and simulation.
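In practice, persona assignment is most often implemented by prepending a persona-setting system message to the conversation. A minimal sketch is below; the persona descriptions, query, and OpenAI-style message format are illustrative assumptions, not taken from any specific paper.

```python
# Minimal sketch of persona assignment via a system prompt.
# The personas, query, and chat-message structure (role/content dicts,
# as used by several popular chat APIs) are illustrative assumptions.

def build_persona_prompt(persona: str, user_query: str) -> list[dict]:
    """Wrap a user query with a persona-setting system message."""
    return [
        {"role": "system", "content": f"You are {persona}. Answer in character."},
        {"role": "user", "content": user_query},
    ]

# Assigning diverse personas to the same query lets researchers compare
# outputs side by side and probe for persona-induced biases.
personas = ["a physicist", "a primary school teacher", "a skeptical lawyer"]
prompts = [build_persona_prompt(p, "Is nuclear power safe?") for p in personas]
```

Holding the user query fixed while varying only the persona isolates the persona's effect, which is the typical setup in bias-probing and persona-faithfulness evaluations.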