Fine-Tuned Llama

Fine-tuning Llama, a family of open-weight large language models (LLMs), adapts the base model to specific tasks while improving efficiency and controllability. Current research explores parameter-efficient fine-tuning (PEFT) methods such as LoRA, which train only a small set of adapter weights instead of the full model, and investigates optimizing inference across diverse hardware, such as FPGAs for energy efficiency. These advances broaden LLM accessibility and applicability: customized models can be deployed faster, more cheaply, and with tighter control in domains including disinformation detection, financial analysis, and online safety.
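
For concreteness, the sketch below shows how a LoRA adapter might be attached to a Llama checkpoint using Hugging Face's `transformers` and `peft` libraries. The model name, target modules, and hyperparameters are illustrative assumptions, not values drawn from any particular paper.

```python
# Minimal LoRA fine-tuning sketch. Assumes `transformers` and `peft`
# are installed and the chosen checkpoint is accessible (Llama weights
# are gated and require accepting the license on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # example checkpoint (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to reduce memory use
)

# LoRA injects small trainable low-rank matrices into selected layers
# (here, the attention query/value projections); base weights stay frozen.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # projections to adapt (assumption)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all params
```

Because only the adapter weights are trained, the resulting checkpoints are small and cheap to store or swap, which is one reason PEFT methods figure prominently in the work collected below.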

Papers