Large Language Model
Large language models (LLMs) are AI systems trained to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow diverse instructions. These advances address critical limitations of current LLMs and pave the way for broader applications in fields such as healthcare, legal technology, and autonomous systems.
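To make the efficiency theme concrete, here is a minimal sketch of symmetric per-tensor int8 weight quantization, one common instance of the quantization techniques the overview mentions. It is an illustrative NumPy example, not code from any paper listed below; the function names are assumptions of this sketch.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: one float scale, int8 weights."""
    scale = float(np.max(np.abs(w))) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover a float32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Toy usage: quantize a random weight matrix and check reconstruction error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize_int8(q, s))))
```

Storing weights as int8 roughly quarters memory relative to float32, at the cost of a small, bounded rounding error per weight; production schemes typically refine this with per-channel or per-group scales.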
Papers
TQA-Bench: Evaluating LLMs for Multi-Table Question Answering with Scalable Context and Symbolic Extension
Zipeng Qiu, You Peng, Guangxin He, Binhang Yuan, Chen Wang
A Simple and Provable Scaling Law for the Test-Time Compute of Large Language Models
Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou
Towards Understanding Retrieval Accuracy and Prompt Quality in RAG Systems
Shengming Zhao, Yuheng Huang, Jiayang Song, Zhijie Wang, Chengcheng Wan, Lei Ma
Beyond Surface Structure: A Causal Assessment of LLMs' Comprehension Ability
Yujin Han, Lei Xu, Sirui Chen, Difan Zou, Chaochao Lu
On the Effectiveness of Discrete Representations in Sparse Mixture of Experts
Giang Do, Kha Pham, Hung Le, Truyen Tran
Marconi: Prefix Caching for the Era of Hybrid LLMs
Rui Pan, Zhuang Wang, Zhen Jia, Can Karakus, Luca Zancato, Tri Dao, Ravi Netravali, Yida Wang
OMuleT: Orchestrating Multiple Tools for Practicable Conversational Recommendation
Se-eun Yoon, Xiaokai Wei, Yexi Jiang, Rachit Pareek, Frank Ong, Kevin Gao, Julian McAuley, Michelle Gong
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
Akhiad Bercovich, Tomer Ronen, Talor Abramovich, Nir Ailon, Nave Assaf, Mohammad Dabbah, Ido Galil, Amnon Geifman, Yonatan Geifman, Izhak Golan, Netanel Haber, Ehud Karpas, Itay Levy, Shahar Mor, Zach Moshe, Najeeb Nabwani, Omri Puny, Ran Rubin, Itamar Schen, Ido Shahaf, Oren Tropp, Omer Ullman Argov, Ran Zilberstein, Ran El-Yaniv
Personalized Federated Fine-Tuning for LLMs via Data-Driven Heterogeneous Model Architectures
Yicheng Zhang, Zhen Qin, Zhaomin Wu, Shuiguang Deng
CovidLLM: A Robust Large Language Model with Missing Value Adaptation and Multi-Objective Learning Strategy for Predicting Disease Severity and Clinical Outcomes in COVID-19 Patients
Shengjun Zhu, Siyu Liu, Yang Li, Qing Lei, Hongyan Hou, Hewei Jiang, Shujuan Guo, Feng Wang, Rongshang Chen, Xionglin Fan, Shengce Tao, Jiaxin Cai
Way to Specialist: Closing Loop Between Specialized LLM and Evolving Domain Knowledge Graph
Yutong Zhang, Lixing Chen, Shenghong Li, Nan Cao, Yang Shi, Jiaxin Ding, Zhe Qu, Pan Zhou, Yang Bai
Mars-PO: Multi-Agent Reasoning System Preference Optimization
Xiaoxuan Lou, Chaojie Wang, Bo An
DIESEL -- Dynamic Inference-Guidance via Evasion of Semantic Embeddings in LLMs
Ben Ganon, Alon Zolfi, Omer Hofman, Inderjeet Singh, Hisashi Kojima, Yuval Elovici, Asaf Shabtai
Zero-shot Slot Filling in the Age of LLMs for Dialogue Systems
Mansi Rana, Kadri Hacioglu, Sindhuja Gopalan, Maragathamani Boothalingam
Devising a Set of Compact and Explainable Spoken Language Features for Screening Alzheimer's Disease
Junan Li, Yunxiang Li, Yuren Wang, Xixin Wu, Helen Meng
UOE: Unlearning One Expert Is Enough for Mixture-of-Experts LLMs
Haomin Zhuang, Yihua Zhang, Kehan Guo, Jinghan Jia, Gaowen Liu, Sijia Liu, Xiangliang Zhang
On the Effectiveness of Incremental Training of Large Language Models
Miles Q. Li, Benjamin C. M. Fung, Shih-Chia Huang
Cross-modal Information Flow in Multimodal Large Language Models
Zhi Zhang, Srishti Yadav, Fengze Han, Ekaterina Shutova
LLM-ABBA: Understand time series via symbolic approximation
Erin Carson, Xinye Chen, Cheng Kang
Draft Model Knows When to Stop: A Self-Verification Length Policy for Speculative Decoding
Ziyin Zhang, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Rui Wang, Zhaopeng Tu