Source Text
Research on large language models (LLMs) is intensely focused on improving the accuracy and faithfulness of text generation, particularly in summarization and translation. Current efforts leverage attention mechanisms to align generated text more closely with source material, employ decoding methods such as domain-conditional pointwise mutual information to reduce hallucinations, and explore multimodal approaches that integrate visual information to resolve ambiguities. These advances aim to enhance the reliability and trustworthiness of LLMs across applications ranging from educational tools and information retrieval to machine translation and knowledge graph construction.
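The domain-conditional mutual-information idea mentioned above can be sketched as a decoding-time rerank: a token's conditional log-probability given the source is penalized by its log-probability under a domain-only prior, so tokens that are merely "typical for the domain" (and thus likely hallucinations) lose ground to tokens actually supported by the source. The function name, the λ weight, and the toy token distributions below are illustrative assumptions, not taken from any specific paper:

```python
import math

def pmi_adjusted_scores(cond_logprobs, domain_logprobs, lam=1.0):
    """Rerank tokens by log p(token | source) - lam * log p(token | domain prior).

    With lam=1.0 this is the pointwise mutual information between the
    token and the source, up to a constant. Tokens the model would emit
    regardless of the source (high domain prior) are penalized.
    """
    return {t: cond_logprobs[t] - lam * domain_logprobs[t] for t in cond_logprobs}

# Hypothetical next-token distributions (illustrative numbers only).
cond = {"paris": math.log(0.6), "london": math.log(0.3), "tokyo": math.log(0.1)}
prior = {"paris": math.log(0.5), "london": math.log(0.2), "tokyo": math.log(0.3)}

adj = pmi_adjusted_scores(cond, prior, lam=1.0)
best = max(adj, key=adj.get)
```

Here greedy decoding on the raw conditional picks "paris", but the PMI adjustment prefers "london", whose probability rises most relative to the domain prior; in practice λ is tuned so the prior penalty reduces hallucinations without distorting fluent tokens.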
Papers
August 27, 2024
June 17, 2024
June 11, 2024
May 21, 2024
April 26, 2024
April 15, 2024
April 11, 2024
March 22, 2024
March 5, 2024
November 9, 2023
October 14, 2023
August 7, 2023
October 26, 2022