Tuned LLMs
Research on tuned large language models (LLMs) focuses on improving the performance and reliability of pre-trained models for specific tasks or domains. Current work emphasizes efficient tuning methods, such as proxy-tuning and sparse pre-training, that reduce computational cost while addressing issues like hallucination and knowledge limitations. These advances matter because they enable more accurate, efficient, and trustworthy LLMs for diverse applications, from biomedical research to educational tools and more reliable code generation.
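To make the proxy-tuning idea concrete: rather than updating the large model's weights, proxy-tuning steers it at decode time by adding the logit difference between a small tuned "expert" and its untuned "anti-expert" to the large model's next-token logits. Below is a minimal sketch of that logit arithmetic; the function names are illustrative and the logits are random placeholders standing in for real model outputs, not a specific library's API.

```python
import numpy as np

def proxy_tuned_logits(base_logits: np.ndarray,
                       expert_logits: np.ndarray,
                       anti_expert_logits: np.ndarray) -> np.ndarray:
    """Shift the large base model's next-token logits by the difference
    between a small tuned expert and the same small model untuned,
    steering the base model without touching its weights."""
    return base_logits + (expert_logits - anti_expert_logits)

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Placeholder logits over a toy 5-token vocabulary (random stand-ins
# for the outputs a real decoding step would produce).
rng = np.random.default_rng(0)
base = rng.normal(size=5)    # large pre-trained model
expert = rng.normal(size=5)  # small model tuned for the target task
anti = rng.normal(size=5)    # the same small model, untuned

# Next-token distribution to sample from at this decoding step.
probs = softmax(proxy_tuned_logits(base, expert, anti))
print(probs)
```

The appeal of this scheme is that only the small expert needs task-specific training; the large model is used as a frozen black box whose output distribution is nudged toward the expert's behavior at each step.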
Papers
16 papers, published between April 27, 2022 and June 5, 2023.