Proprietary Large Language Model
Proprietary large language models (LLMs) are powerful AI systems applied to tasks such as code generation, text summarization, and legal analysis; a central research objective is to understand and improve their capabilities while addressing their limitations. Current work focuses on strengthening open-source alternatives, which reduce reliance on costly and potentially privacy-compromising proprietary models, using techniques such as instruction tuning, reinforcement learning, and adversarial distillation to improve performance and efficiency. This research is significant because it addresses the accessibility, cost, and ethical concerns raised by proprietary LLMs while advancing more robust and reliable AI systems across diverse applications.
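To make the distillation idea above concrete, here is a minimal sketch of the standard knowledge-distillation objective: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss. This is an illustrative toy implementation in pure Python, not code from any of the listed papers; in practice, proprietary teachers often expose only sampled text rather than logits, so distillation is then done on teacher-generated outputs instead.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature yields a softer
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # the usual objective for distilling a large teacher model into a
    # smaller open student model.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
# A student that already matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss to minimize.
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # > 0
```

Minimizing this loss over a corpus of prompts pushes the student's next-token distribution toward the teacher's; the temperature is a hyperparameter typically set between 1 and 4.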
Papers
AERO: Softmax-Only LLMs for Efficient Private Inference
Nandan Kumar Jha, Brandon Reagen
CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment
Qinfeng Li, Yangfan Xie, Tianyu Du, Zhiqiang Shen, Zhenghan Qin, Hao Peng, Xinkui Zhao, Xianwei Zhu, Jianwei Yin, Xuhong Zhang