Closed-Source Models
Closed-source large language models (LLMs) are a powerful but opaque class of AI systems; their inaccessibility hinders research reproducibility and raises concerns about bias and safety. Current research focuses on closing the performance gap with open-source alternatives through techniques such as instruction tuning, knowledge distillation (using data generated by closed-source models to train open-source ones), and novel open-source architectures such as Mixture-of-Experts models. This work promotes transparency, broadens access to capable LLMs, and mitigates the risks associated with proprietary models.
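The distillation approach mentioned above can be sketched as a small pipeline: a closed-source "teacher" model answers seed instructions, and the resulting (instruction, response) pairs become supervised fine-tuning data for an open-source student. This is a minimal illustration, not any specific system's implementation; the teacher function below is a stub standing in for a proprietary API call, and a real pipeline would fine-tune the student with a standard supervised training loop.

```python
def query_teacher(instruction: str) -> str:
    """Stand-in for a call to a closed-source model's API (hypothetical)."""
    return f"Teacher response to: {instruction}"

def build_distillation_dataset(instructions):
    """Collect (instruction, teacher_response) pairs to use as
    supervised fine-tuning data for an open-source student model."""
    return [
        {"instruction": inst, "response": query_teacher(inst)}
        for inst in instructions
    ]

if __name__ == "__main__":
    seed_instructions = [
        "Explain gradient descent in one sentence.",
        "Summarize the causes of overfitting.",
    ]
    dataset = build_distillation_dataset(seed_instructions)
    # An open-source student would then be fine-tuned on `dataset`
    # with an ordinary supervised fine-tuning loop.
    print(len(dataset))
```

In practice the seed instructions themselves are often generated or filtered at scale, and the quality of the distilled student depends heavily on the diversity of these prompts.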