Federated Instruction Tuning
Federated instruction tuning (FIT) aims to improve large language models (LLMs) by training them collaboratively across many devices without directly sharing sensitive data. Current research focuses on challenges such as data heterogeneity and scarce instruction data, exploring techniques including automated data generation from unstructured text, personalized model architectures via neural architecture search, and robust defenses against privacy attacks. FIT's significance lies in enabling more powerful and adaptable LLMs while preserving user privacy, opening avenues for broader deployment and personalized applications across domains.
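The collaborative training described above typically follows a federated-averaging pattern: each client fine-tunes on its private instruction data, and only parameter updates are aggregated by the server. The sketch below illustrates one such round with a toy "local training" step; all function names and the scalar update rule are illustrative assumptions, not a specific paper's method.

```python
# Hedged sketch of one federated-averaging round, as used in federated
# instruction tuning: raw client data never leaves the device; only the
# locally updated weights are sent for aggregation. The "training" step
# here is a toy stand-in for local instruction-tuning (an assumption).

def local_update(weights, client_data, lr=0.1):
    """Toy local step: nudge each weight toward this client's data mean."""
    target = sum(client_data) / len(client_data)
    return [w - lr * (w - target) for w in weights]

def fedavg_round(global_weights, client_datasets, lr=0.1):
    """One communication round: clients train locally, then the server
    averages their weights, weighted by client dataset size."""
    total = sum(len(d) for d in client_datasets)
    updates = [local_update(global_weights, d, lr) for d in client_datasets]
    return [
        sum(len(d) * u[i] for d, u in zip(client_datasets, updates)) / total
        for i in range(len(global_weights))
    ]

# Heterogeneous, non-shared instruction datasets on two clients.
weights = [0.0, 0.0]
clients = [[1.0, 1.0], [3.0]]
for _ in range(5):
    weights = fedavg_round(weights, clients)
```

In each round the global model drifts toward the size-weighted mean of the clients' targets, mirroring how FedAvg balances contributions from clients with different amounts of data.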