Open-Source Large Language Models
Open-source large language models (LLMs) aim to provide accessible and customizable alternatives to proprietary models, fostering research and development while addressing concerns about data privacy and vendor lock-in. Current research focuses on adapting these models to specific languages and domains (e.g., Romanian, medicine, finance), improving their reasoning capabilities through techniques such as retrieval-augmented generation and mixture-of-experts architectures, and optimizing their deployment efficiency on diverse hardware. This burgeoning field benefits both the scientific community, by enabling broader participation in LLM research, and practical applications, by offering cost-effective, adaptable solutions for tasks ranging from question answering to code generation.
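For readers new to the area, the sketch below illustrates the retrieval-augmented generation pattern mentioned above with a locally hosted open-source model. It is a minimal illustration, assuming the Hugging Face transformers library is available; the toy corpus, the keyword-overlap retriever, and the gpt2 model name are placeholders for illustration only, not the setup used by any of the papers listed here.

```python
# Minimal retrieval-augmented generation (RAG) sketch with an open-source LLM.
# Assumptions: the Hugging Face `transformers` library is installed; the corpus,
# scoring heuristic, and model name are illustrative placeholders.
from transformers import pipeline

# Toy document store; a real system would use a vector index instead.
corpus = [
    "Open-weight LLMs can be fine-tuned on domain data such as clinical notes.",
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Mixture-of-experts layers route tokens to a small subset of expert MLPs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_tokens = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_tokens & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(query: str) -> str:
    """Build a context-augmented prompt and generate with a small open model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    generator = pipeline("text-generation", model="gpt2")  # placeholder open model
    return generator(prompt, max_new_tokens=50)[0]["generated_text"]

if __name__ == "__main__":
    print(answer("How does retrieval-augmented generation help open LLMs?"))
```

The design point is that the generator only sees retrieved passages plus the question, so swapping in a stronger open-weight model or a better retriever changes answer quality without changing the overall pattern.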
Papers
OpenLogParser: Unsupervised Parsing with Open-Source Large Language Models
Zeyang Ma, Dong Jae Kim, Tse-Hsun Chen
FANNO: Augmenting High-Quality Instruction Data with Open-Sourced LLMs Only
He Zhu, Junyou Su, Tianle Lun, Yicheng Tao, Wenjia Zhang, Zipei Fan, Guanhua Chen
The Impact of Hyperparameters on Large Language Model Inference Performance: An Evaluation of vLLM and HuggingFace Pipelines
Matias Martinez
PrExMe! Large Scale Prompt Exploration of Open Source LLMs for Machine Translation and Summarization Evaluation
Christoph Leiter, Steffen Eger
"Vorbeşti Româneşte?" A Recipe to Train Powerful Romanian LLMs with English Instructions
Mihai Masala, Denis C. Ilie-Ablachim, Alexandru Dima, Dragos Corlatescu, Miruna Zavelca, Ovio Olaru, Simina Terian, Andrei Terian, Marius Leordeanu, Horia Velicu, Marius Popescu, Mihai Dascalu, Traian Rebedea
RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale
Beck LaBash, August Rosedale, Alex Reents, Lucas Negritto, Colin Wiel
Panza: A Personalized Text Writing Assistant via Data Playback and Local Fine-Tuning
Armand Nicolicioiu, Eugenia Iofinova, Eldar Kurtic, Mahdi Nikdan, Andrei Panferov, Ilia Markov, Nir Shavit, Dan Alistarh
OTCE: Hybrid SSM and Attention with Cross Domain Mixture of Experts to construct Observer-Thinker-Conceiver-Expresser
Jingze Shi, Ting Xie, Bingheng Wu, Chunjun Zheng, Kai Wang