Large Language Model
Large language models (LLMs) are AI systems designed to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow diverse instructions. These advances address critical limitations of current LLMs and pave the way for broader applications in fields such as healthcare, legal technology, and autonomous systems.
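As a concrete illustration of the quantization technique mentioned above, the following is a minimal sketch of symmetric post-training int8 weight quantization. It is illustrative only and assumes nothing about the papers listed below; the function names and the example tensor are hypothetical.

    import numpy as np

    def quantize_int8(weights: np.ndarray):
        # Symmetric per-tensor quantization: map float weights to int8 in [-127, 127].
        # The guard avoids division by zero for an all-zero tensor.
        scale = max(float(np.abs(weights).max()) / 127.0, 1e-12)
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        # Recover a float approximation of the original weights.
        return q.astype(np.float32) * scale

    # Usage: quantize a random weight matrix and check the reconstruction error.
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print(np.abs(w - dequantize(q, s)).max())  # small, bounded by scale / 2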
Papers
RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement
Jinhao Jiang, Jiayi Chen, Junyi Li, Ruiyang Ren, Shijie Wang, Wayne Xin Zhao, Yang Song, Tao Zhang
DSGram: Dynamic Weighting Sub-Metrics for Grammatical Error Correction in the Era of Large Language Models
Jinxiang Xie, Yilin Li, Xunjian Yin, Xiaojun Wan
More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression
Jiebin Zhang, Dawei Zhu, Yifan Song, Wenhao Wu, Chuqiao Kuang, Xiaoguang Li, Lifeng Shang, Qun Liu, Sujian Li
LLM-based Discriminative Reasoning for Knowledge Graph Question Answering
Mufan Xu, Kehai Chen, Xuefeng Bai, Muyun Yang, Tiejun Zhao, Min Zhang
Baichuan4-Finance Technical Report
Hanyu Zhang, Boyu Qiu, Yuhao Feng, Shuqi Li, Qian Ma, Xiyuan Zhang, Qiang Ju, Dong Yan, Jian Xie
What External Knowledge is Preferred by LLMs? Characterizing and Exploring Chain of Evidence in Imperfect Context
Zhiyuan Chang, Mingyang Li, Xiaojun Jia, Junjie Wang, Yuekai Huang, Qing Wang, Yihao Huang, Yang Liu
LLMs are Also Effective Embedding Models: An In-depth Overview
Chongyang Tao, Tao Shen, Shen Gao, Junshuo Zhang, Zhen Li, Zhengwei Tao, Shuai Ma
Process-Supervised Reward Models for Clinical Note Generation: A Scalable Approach Guided by Domain Expertise
Hanyin Wang, Qiping Xu, Bolun Liu, Guleid Hussein, Hariprasad Korsapati, Mohamad El Labban, Kingsley Iheasirim, Mohamed Hassan, Gokhan Anil, Brian Bartlett, Jimeng Sun
Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis with Large Language Models
Chengyan Wu, Bolei Ma, Zheyu Zhang, Ningyuan Deng, Yanqing He, Yun Xue
Solid-SQL: Enhanced Schema-linking based In-context Learning for Robust Text-to-SQL
Geling Liu, Yunzhi Tan, Ruichao Zhong, Yuanzhen Xie, Lingchen Zhao, Qian Wang, Bo Hu, Zang Li
LinguaLIFT: An Effective Two-stage Instruction Tuning Framework for Low-Resource Language Tasks
Hongbin Zhang, Kehai Chen, Xuefeng Bai, Yang Xiang, Min Zhang
RareAgents: Autonomous Multi-disciplinary Team for Rare Disease Diagnosis and Treatment
Xuanzhong Chen, Ye Jin, Xiaohao Mao, Lun Wang, Shuyang Zhang, Ting Chen
Knowledge Boundary of Large Language Models: A Survey
Moxin Li, Yong Zhao, Yang Deng, Wenxuan Zhang, Shuaiyi Li, Wenya Xie, See-Kiong Ng, Tat-Seng Chua
LITA: An Efficient LLM-assisted Iterative Topic Augmentation Framework
Chia-Hsuan Chang, Jui-Tse Tsai, Yi-Hang Tsai, San-Yih Hwang
How Different AI Chatbots Behave? Benchmarking Large Language Models in Behavioral Economics Games
Yutong Xie, Yiyao Liu, Zhuang Ma, Lin Shi, Xiyuan Wang, Walter Yuan, Matthew O. Jackson, Qiaozhu Mei
Can LLM Prompting Serve as a Proxy for Static Analysis in Vulnerability Detection
Ira Ceka, Feitong Qiao, Anik Dey, Aastha Valechia, Gail Kaiser, Baishakhi Ray
The Open Source Advantage in Large Language Models (LLMs)
Jiya Manchanda, Laura Boettcher, Matheus Westphalen, Jasser Jasser
SciFaultyQA: Benchmarking LLMs on Faulty Science Question Detection with a GAN-Inspired Approach to Synthetic Dataset Generation
Debarshi Kundu
Cost-Effective Label-free Node Classification with LLMs
Taiyan Zhang, Renchi Yang, Mingyu Yan, Xiaochun Ye, Dongrui Fan, Yurui Lai
Inferring Functionality of Attention Heads from their Parameters
Amit Elhelo, Mor Geva