Closed Source Model

Closed-source large language models (LLMs) are a powerful but opaque class of AI whose inaccessibility hinders research reproducibility and raises concerns about bias and safety. Current research focuses on closing the performance gap with open-source alternatives through techniques such as instruction tuning, knowledge distillation (using data generated by closed-source models to train open-source ones), and novel open architectures such as Mixture-of-Experts models. This work advances the field by promoting transparency, broadening access to capable LLMs, and mitigating the risks associated with proprietary models.
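To make the distillation idea concrete, the sketch below shows one common pattern: query a closed-source model's API for responses to seed instructions, then fine-tune an open-source student model on the resulting instruction-response pairs. The teacher and student model names, the seed instructions, the prompt template, and the hyperparameters are all illustrative assumptions, not a prescription from any particular paper.

```python
# Minimal sketch of knowledge distillation from a closed-source LLM:
# 1) collect teacher responses from a proprietary API,
# 2) run supervised fine-tuning of an open-source student on those pairs.
# Model names, prompts, and hyperparameters below are assumptions for illustration.
from openai import OpenAI
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_instructions = [
    "Explain gradient descent in two sentences.",
    "Summarize the difference between TCP and UDP.",
]

# Step 1: query the closed-source teacher for responses to seed instructions.
pairs = []
for instruction in seed_instructions:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed closed-source teacher
        messages=[{"role": "user", "content": instruction}],
    )
    answer = reply.choices[0].message.content
    pairs.append({"text": f"### Instruction:\n{instruction}\n### Response:\n{answer}"})

# Step 2: supervised fine-tuning of an open-source student on the distilled pairs.
student_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed open-source student
tokenizer = AutoTokenizer.from_pretrained(student_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(student_name)

dataset = Dataset.from_list(pairs).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilled-student",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice the seed set would contain thousands of instructions, and the generated pairs would be filtered for quality before training; the snippet only illustrates the data flow from the proprietary teacher to the open student.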

Papers