Closed-Source Large Language Models
Closed-source large language models (LLMs) are powerful AI systems whose inner workings, such as weights and training data, are not publicly accessible, limiting researchers' ability to fully understand and improve them. Current research works around this opacity by treating the models as black boxes: instruction tuning with data generated by open-source LLMs, evaluating and enhancing performance through techniques such as prompt optimization and meta-ranking, and addressing safety concerns such as jailbreaking and bias. This work enables more responsible development and deployment of LLMs across applications while also fostering more transparent and accessible alternatives.
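To make the data-generation direction concrete, the sketch below builds instruction-tuning pairs from an open-source code LLM in the spirit of InverseCoder's Inverse-Instruct (listed under Papers), which derives natural-language instructions from existing code rather than the reverse. This is an illustrative sketch, not the papers' released code: the query_llm wrapper and the seed snippet are hypothetical placeholders for whatever local model and corpus are actually used.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around an open-source code LLM.
    In practice, replace this stub with a call to a locally
    hosted model; here it returns a canned answer so the
    script runs end to end."""
    return "Write a function that returns the n-th Fibonacci number."

# Seed code snippets harvested from an existing corpus (illustrative only).
seed_snippets = [
    "def fib(n):\n"
    "    a, b = 0, 1\n"
    "    for _ in range(n):\n"
    "        a, b = b, a + b\n"
    "    return a",
]

def make_pairs(snippets):
    """Ask the model to summarize each snippet into the instruction
    that could have produced it, yielding (instruction, code) pairs
    suitable for instruction tuning."""
    pairs = []
    for code in snippets:
        instruction = query_llm(
            "Write the programming task description that the "
            f"following code solves:\n\n{code}"
        )
        pairs.append({"instruction": instruction, "output": code})
    return pairs

if __name__ == "__main__":
    # Store pairs as JSONL, a common format for fine-tuning datasets.
    with open("inverse_instruct_pairs.jsonl", "w") as f:
        for pair in make_pairs(seed_snippets):
            f.write(json.dumps(pair) + "\n")
```

Because only code snippets, not human-written instructions, are needed as input, this style of pipeline can mine existing corpora for training data without querying a closed-source model at all.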
Papers
What's Wrong with Your Code Generated by Large Language Models? An Extensive Study
Shihan Dou, Haoxiang Jia, Shenxi Wu, Huiyuan Zheng, Weikang Zhou, Muling Wu, Mingxu Chai, Jessica Fan, Caishuang Huang, Yunbo Tao, Yan Liu, Enyu Zhou, Ming Zhang, Yuhao Zhou, Yueming Wu, Rui Zheng, Ming Wen, Rongxiang Weng, Jingang Wang, Xunliang Cai, Tao Gui, Xipeng Qiu, Qi Zhang, Xuanjing Huang
InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct
Yutong Wu, Di Huang, Wenxuan Shi, Wei Wang, Lingzhe Gao, Shihao Liu, Ziyuan Nan, Kaizhao Yuan, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Yewen Pu, Dawei Yin, Xing Hu, Yunji Chen