Open-Domain Generative Models
Open-domain generative models aim to produce diverse, coherent text or other media in response to open-ended prompts, without relying on predefined structures or limited knowledge bases. Current research focuses on improving model capabilities through techniques such as hypergraphic learning for multi-agent coordination and variational inference for retrieval-augmented models, as well as methods that mitigate hallucinations and improve factual accuracy, including conformal abstention and stronger attribution mechanisms. These advances matter for applications such as fact-checking, information extraction, and building more engaging, informative conversational AI systems.
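To make the abstention idea concrete, below is a minimal sketch of split conformal abstention in general form, not the specific method of any paper listed here: a threshold is calibrated on held-out nonconformity scores (for example, one minus a factuality-confidence score), and the model abstains on new queries whose score exceeds it. The function names and the uniform dummy data are illustrative assumptions.

```python
import numpy as np

def calibrate_abstention_threshold(calib_scores, alpha=0.1):
    """Compute a conformal threshold from held-out nonconformity scores.

    calib_scores: nonconformity scores on a calibration set, e.g.
    1 - confidence that a generated answer is factually supported.
    alpha: target risk level; under exchangeability, roughly at most
    an alpha fraction of answered queries exceed the calibrated level.
    """
    n = len(calib_scores)
    # Finite-sample-corrected quantile used in split conformal prediction.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(calib_scores, min(q_level, 1.0), method="higher")

def answer_or_abstain(score, threshold):
    """Answer (True) if the query's nonconformity score is within the
    calibrated threshold; otherwise abstain (False)."""
    return score <= threshold

# Illustrative usage with synthetic calibration scores.
rng = np.random.default_rng(0)
calib = rng.uniform(0.0, 1.0, size=500)
tau = calibrate_abstention_threshold(calib, alpha=0.1)
print("threshold:", tau, "answer new query:", answer_or_abstain(0.3, tau))
```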