Recent Language Models
Recent research on language models centers on improving their accuracy, efficiency, and trustworthiness. Current efforts focus on optimizing training datasets, compressing models through techniques such as low-rank decomposition, and mitigating hallucinations and biases in generated text, often via knowledge augmentation and improved sampling strategies. These advances are crucial for expanding the reliable application of language models across fields ranging from healthcare and education to software development and information retrieval.
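As a rough illustration of the low-rank decomposition mentioned above, the sketch below factors a dense weight matrix into two thin factors via truncated SVD, the standard way to obtain the best rank-k approximation in Frobenius norm. It assumes NumPy and uses a randomly initialized matrix as a hypothetical stand-in for a real layer's weights; layer sizes and the chosen rank are illustrative only.

```python
import numpy as np

# Hypothetical dense weight matrix of a linear layer (stand-in for real weights).
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))

def low_rank_factors(W, rank):
    """Approximate W with two thin factors A @ B via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (out_features, rank)
    B = Vt[:rank, :]             # shape (rank, in_features)
    return A, B

A, B = low_rank_factors(W, rank=64)
W_approx = A @ B

original_params = W.size
compressed_params = A.size + B.size
rel_error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"params: {original_params} -> {compressed_params}, relative error: {rel_error:.3f}")
```

In a compressed model, the two thin factors replace the original matrix in the forward pass, cutting parameters and multiply-adds from d*d to roughly 2*d*rank; how much accuracy survives at a given rank depends on the spectrum of the actual trained weights, not on the random matrix used here.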