Language Model Output
Language model output research centers on improving the accuracy, reliability, and controllability of text generated by large language models (LLMs). Current efforts focus on detecting and correcting factual errors, mitigating biases, controlling generation through techniques such as prompt tuning and constrained decoding, and improving the explainability and trustworthiness of LLM outputs. This work builds largely on transformer architectures and employs methods such as fact-checking, self-debiasing, and retrieval augmentation. The ultimate aim is more reliable and responsible LLMs with broad applicability across fields ranging from healthcare to legal document generation.
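Constrained decoding, for instance, works by intervening at the token-selection step: the model's next-token scores are computed as usual, tokens that would violate a constraint are masked out, and selection proceeds over the remainder. The sketch below illustrates the idea; the vocabulary, scoring function, and banned-token list are all hypothetical stand-ins for a real tokenizer/model pair, to which the same logit-masking step would apply.

```python
import math

# Hypothetical toy vocabulary and constraint set for illustration only.
VOCAB = ["the", "patient", "should", "take", "two", "pills", "daily", "<eos>"]
BANNED = {"daily"}  # tokens the application never wants to emit


def toy_logits(prefix: list[str]) -> list[float]:
    """Stand-in for a real LM forward pass: favors tokens not yet used."""
    return [0.0 if tok in prefix else 1.0 for tok in VOCAB]


def constrained_greedy_decode(prefix: list[str], max_new_tokens: int = 5) -> list[str]:
    out = list(prefix)
    for _ in range(max_new_tokens):
        logits = toy_logits(out)
        # The constraint: set banned tokens' scores to -inf so they can
        # never be selected, regardless of the model's preference.
        masked = [
            -math.inf if tok in BANNED else score
            for tok, score in zip(VOCAB, logits)
        ]
        next_tok = VOCAB[max(range(len(VOCAB)), key=masked.__getitem__)]
        if next_tok == "<eos>":
            break
        out.append(next_tok)
    return out


print(constrained_greedy_decode(["the", "patient"]))
# -> ['the', 'patient', 'should', 'take', 'two', 'pills']
```

The same masking pattern generalizes from a static banned list to richer constraints, such as grammars or required phrases, by recomputing the allowed-token set at each step.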