Tuned LMs
Research on tuned Large Language Models (LLMs) focuses on improving the performance and reliability of pre-trained LLMs for specific tasks or domains. Current work emphasizes efficient tuning methods, such as proxy-tuning and sparse pre-training, that reduce computational cost while addressing issues like hallucination and knowledge gaps. These advances matter because they enable more accurate, efficient, and trustworthy LLMs for diverse applications, ranging from biomedical research to educational tools and more reliable code generation.
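As a concrete illustration of one such method, proxy-tuning steers a large base model at decode time by shifting its logits with the difference between a small tuned expert and the expert's untuned counterpart, so the large model never has to be fine-tuned itself. The sketch below shows the core logit arithmetic with placeholder arrays; the function name, the alpha scaling knob, and the toy vocabulary are illustrative assumptions, not code from any of the listed papers.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the vocabulary dimension.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def proxy_tuned_distribution(logits_large_base, logits_small_tuned, logits_small_base, alpha=1.0):
    # Proxy-tuning idea: shift the large base model's next-token logits by the
    # offset between a small tuned expert and its untuned base counterpart,
    # then renormalize into a probability distribution.
    adjusted = logits_large_base + alpha * (logits_small_tuned - logits_small_base)
    return softmax(adjusted)

# Toy example over a 5-token vocabulary; random logits stand in for real model outputs.
rng = np.random.default_rng(0)
large_base = rng.normal(size=5)
small_tuned = rng.normal(size=5)
small_base = rng.normal(size=5)
print(proxy_tuned_distribution(large_base, small_tuned, small_base))

In practice the three logit vectors would come from a large base model and a small tuned/untuned pair sharing the same vocabulary, and the adjusted distribution would be sampled at each decoding step.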
Papers
Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
Gurusha Juneja, Subhabrata Dutta, Soumen Chakrabarti, Sunny Manchanda, Tanmoy Chakraborty
Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs
Young-Suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem, Asim Munawar, Radu Florian, Salim Roukos, Ramón Fernandez Astudillo