Fine-Tuned Language Models
Fine-tuning pre-trained large language models (LLMs) adapts their general capabilities to specific tasks, improving performance and addressing limitations such as weak reasoning or bias. Current research emphasizes enhancing reasoning abilities, mitigating biases, and improving model calibration and compatibility across updates, often employing techniques such as parameter-efficient fine-tuning and model merging. This work matters because it enables more reliable, specialized LLMs for diverse applications, ranging from clinical documentation to scientific knowledge base construction, while also addressing crucial concerns about model safety and privacy.
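The idea behind parameter-efficient fine-tuning can be illustrated with a minimal NumPy sketch of LoRA-style low-rank adaptation (all dimensions and names here are illustrative, not from any paper above): the pre-trained weight matrix W stays frozen, and only a small low-rank product B @ A is trained on top of it.

```python
import numpy as np

# Illustrative dimensions: a small "layer" with a rank-2 adapter.
d_out, d_in, rank = 8, 8, 2
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))        # frozen pre-trained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    # Effective weight is W + scale * (B @ A); only A and B receive gradients.
    return x @ (W + scale * (B @ A)).T

x = rng.normal(size=(4, d_in))
# With B zero-initialized, the adapted layer matches the frozen base exactly,
# so fine-tuning starts from the pre-trained behavior.
assert np.allclose(lora_forward(x), x @ W.T)

# The adapter trains far fewer parameters than full fine-tuning would.
print(A.size + B.size, "trainable vs", W.size, "frozen")
```

In practice, libraries such as Hugging Face's `peft` wrap this pattern around real transformer layers; the efficiency comes from the trainable parameter count growing with the rank rather than with the full weight dimensions.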
Papers
October 25, 2024
October 17, 2024
October 2, 2024
August 27, 2024
July 12, 2024
July 3, 2024
June 18, 2024
June 12, 2024
April 8, 2024
March 29, 2024
March 26, 2024
March 13, 2024
March 5, 2024
February 19, 2024
February 8, 2024
February 6, 2024
December 28, 2023