Fine-Tuned Llama
Fine-tuning Llama, a family of large language models (LLMs), adapts the base model to specific tasks while improving efficiency and controllability. Current research explores parameter-efficient fine-tuning (PEFT) techniques such as LoRA, which update only a small fraction of the model's weights, and investigates inference optimization on diverse hardware, including FPGAs for energy-efficient deployment. These advances broaden LLM accessibility across domains such as disinformation detection, financial analysis, and online safety by making customized models faster, cheaper, and more controlled to deploy.
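As a concrete illustration of PEFT, the sketch below shows a typical LoRA setup using the Hugging Face transformers and peft libraries. The checkpoint name and hyperparameters are illustrative assumptions, not values drawn from the papers listed here.

```python
# Minimal sketch: LoRA-based parameter-efficient fine-tuning of a Llama model.
# The base checkpoint, rank, and dropout below are assumed for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small trainable low-rank adapter matrices into selected
# projection layers while the original base weights stay frozen.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # Llama attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the injected adapter matrices receive gradients, training fits on far smaller hardware than full fine-tuning, and the resulting adapters can be stored and swapped independently of the frozen base model.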
Papers
Papers on this topic were published on November 15, 2024; November 9, 2024; October 24, 2024; April 29, 2024; March 20, 2024; February 12, 2024; December 7, 2023; September 9, 2023; and August 28, 2023.