Fine-Tuned Language Models
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, improving performance and addressing limitations such as weak reasoning or bias. Current research emphasizes enhancing reasoning abilities, mitigating biases, and improving calibration and compatibility across model updates, often through techniques like parameter-efficient fine-tuning and model merging. This work matters because it enables more reliable, specialized LLMs for diverse applications, from clinical documentation to scientific knowledge base construction, while also addressing crucial concerns about model safety and privacy.
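To make the techniques concrete, below is a minimal sketch of parameter-efficient fine-tuning using LoRA-style low-rank adapters, written in plain PyTorch. The layer size, rank, and scaling factor are illustrative assumptions, not settings drawn from the papers listed here.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        # Low-rank factors: only these (~2 * rank * d parameters) are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction; the zero
        # initialization of lora_b means training starts from the base model.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


# Example: adapting a single 768x768 projection trains ~12K parameters
# instead of the ~590K parameters of the frozen base layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

Model merging is often performed directly in weight space. The sketch below assumes checkpoints that share an identical architecture and parameter names, and simply averages their parameters (uniform averaging, as in "model soups"); weighted interpolation is a common variant.

```python
# A minimal sketch of model merging by uniform weight averaging; the input
# state dicts are assumed to come from the same architecture.
def merge_state_dicts(state_dicts):
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged
```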
Papers
December 20, 2023
November 22, 2023
November 13, 2023
November 1, 2023
October 29, 2023
October 20, 2023
October 10, 2023
September 18, 2023
July 19, 2023
July 18, 2023
May 30, 2023
May 26, 2023
May 24, 2023
May 16, 2023
May 2, 2023
April 24, 2023
April 22, 2023
February 13, 2023
February 12, 2023
February 9, 2023