Consistent Personality
Research into consistent personality in large language models (LLMs) aims to understand whether and how these models exhibit stable personality traits analogous to human personality, and how such traits are shaped by training data and prompting techniques. Current work employs a range of methods, including adapting human personality tests (such as the Big Five Inventory and the MBTI) for LLMs and developing external evaluation methods that analyze LLM responses to open-ended prompts. This research matters for improving the reliability of LLMs and for addressing the ethical implications of deploying them in applications that require consistent, predictable behavior, as well as for advancing the understanding of personality itself through a computational lens.
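The test-adaptation approach can be illustrated with a minimal sketch: administer Likert-scale items to a model, parse the numeric answers, reverse-score where needed, and repeat the run to gauge how stable the resulting trait scores are. The sketch below assumes a hypothetical query_model function standing in for any chat LLM API, and the items are paraphrased BFI-style statements used purely for illustration, not the official inventory.

```python
import re
import statistics


# Hypothetical stand-in for an LLM call; replace with any chat-completion API.
def query_model(prompt: str) -> str:
    return "3"  # placeholder response for a runnable example


# Paraphrased BFI-style items (illustrative only), with reverse-keyed flags.
ITEMS = {
    "extraversion": [
        ("I see myself as someone who is outgoing and sociable.", False),
        ("I see myself as someone who tends to be quiet.", True),
    ],
    "neuroticism": [
        ("I see myself as someone who worries a lot.", False),
        ("I see myself as someone who is relaxed and handles stress well.", True),
    ],
}

LIKERT_PROMPT = (
    "Rate how much you agree with the following statement on a scale from 1 "
    "(disagree strongly) to 5 (agree strongly). Answer with a single number.\n\n"
    "Statement: {item}\nAnswer:"
)


def score_item(item: str, reverse: bool) -> int:
    """Ask the model for a 1-5 rating and reverse-score if the item is reverse-keyed."""
    reply = query_model(LIKERT_PROMPT.format(item=item))
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"Unparseable response: {reply!r}")
    rating = int(match.group())
    return 6 - rating if reverse else rating


def trait_scores(n_runs: int = 5) -> dict:
    """Average each trait over repeated runs; the spread across runs is a
    simple proxy for how consistent the model's self-reported personality is."""
    results = {}
    for trait, items in ITEMS.items():
        run_means = [
            statistics.mean(score_item(text, rev) for text, rev in items)
            for _ in range(n_runs)
        ]
        results[trait] = {
            "mean": statistics.mean(run_means),
            "stdev": statistics.stdev(run_means) if n_runs > 1 else 0.0,
        }
    return results


if __name__ == "__main__":
    print(trait_scores())
```

In practice, studies of this kind also vary the system prompt or persona instructions and compare the resulting score distributions, which is where the stability (or instability) of an LLM's apparent personality becomes measurable.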
Papers
June 20, 2024
February 22, 2024
February 5, 2024
December 20, 2023
October 27, 2023
May 31, 2023