Open-Ended Generation
Open-ended generation focuses on developing large language models (LLMs) that produce diverse, coherent, and factually accurate text in response to open-ended prompts, in contrast to constrained tasks such as question answering. Current research emphasizes improving factuality and reducing biases through techniques such as self-consistency decoding, contrastive learning, and entropy-aware sampling, alongside robust evaluation metrics that go beyond simple accuracy. These advances are crucial for building more reliable and trustworthy LLMs, with implications for applications ranging from creative writing and content generation to the safety and robustness of AI systems.
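As an illustration of the decoding-side techniques mentioned above, the sketch below shows one way entropy-aware sampling might work: the sampling temperature is lowered when the next-token distribution has high entropy, so uncertain steps are sampled from a sharper distribution. This is a minimal sketch under assumed conventions; the function name `entropy_aware_sample`, the temperature bounds, and the linear interpolation rule are illustrative choices, not drawn from any specific paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at the given temperature."""
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy_aware_sample(logits, base_temp=1.0, low_temp=0.5, rng=None):
    """Sample a token id, cooling the distribution when the model is uncertain.

    A high-entropy next-token distribution often signals that hot sampling
    will drift off-topic, so the temperature is interpolated toward
    `low_temp`; the bounds and rule here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    probs = softmax(logits, base_temp)
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    max_entropy = np.log(len(logits))  # entropy of a uniform distribution
    # Normalized entropy in [0, 1] drives the temperature adjustment:
    # fully uncertain -> low_temp, fully confident -> base_temp.
    temp = base_temp - (base_temp - low_temp) * (entropy / max_entropy)
    probs = softmax(logits, temp)
    return int(rng.choice(len(logits), p=probs))

# Toy usage with fake logits over a 5-token vocabulary.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
print(entropy_aware_sample(logits))
```

Cooling on high entropy is only one plausible policy; some variants instead raise the temperature at low-entropy steps to encourage diversity, so the sign of the adjustment is itself a design decision.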