Large Language Model
Large language models (LLMs) are AI systems designed to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on enhancing LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as improving their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address critical limitations of current LLMs and pave the way for broader applications in fields such as healthcare, legal tech, and autonomous systems.
Papers
Strong Preferences Affect the Robustness of Value Alignment
Ziwei Xu, Mohan Kankanhalli
Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization
Mingyang Wang, Lukas Lange, Heike Adel, Jannik Strötgen, Hinrich Schütze
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Xiang Wang, Xiangnan He, Tat-seng Chua
Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection
Tianxiang Chen, Zhentao Tan, Tao Gong, Yue Wu, Qi Chu, Bin Liu, Jieping Ye, Nenghai Yu
Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models
Rui Meng, Ye Liu, Lifu Tu, Daqing He, Yingbo Zhou, Semih Yavuz
Determine-Then-Ensemble: Necessity of Top-k Union for Large Language Model Ensembling
Yuxuan Yao, Han Wu, Mingyang Liu, Sichun Luo, Xiongwei Han, Jie Liu, Zhijiang Guo, Linqi Song
Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach
Wanyi Liu, Long Chen, Zhenzhou Tang
Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models
Guobin Shen, Dongcheng Zhao, Yiting Dong, Xiang He, Yi Zeng
CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning
Huimu Yu, Xing Wu, Weidong Yin, Debing Zhang, Songlin Hu
Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation
Xiaoqun Liu, Jiacheng Liang, Luoxi Tang, Chenyu You, Muchao Ye, Zhaohan Xi
CaLMFlow: Volterra Flow Matching using Causal Language Models
Sizhuang He, Daniel Levine, Ivan Vrkic, Marco Francesco Bressana, David Zhang, Syed Asad Rizvi, Yangtian Zhang, Emanuele Zappala, David van Dijk
Calibrate to Discriminate: Improve In-Context Learning with Label-Free Comparative Inference
Wei Cheng, Tianlu Wang, Yanmin Ji, Fan Yang, Keren Tan, Yiyu Zheng
Efficiently Deploying LLMs with Controlled Risk
Michael J. Zellinger, Matt Thomson
ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement
Xiangyu Peng, Congying Xia, Xinyi Yang, Caiming Xiong, Chien-Sheng Wu, Chen Xing
A Watermark for Black-Box Language Models
Dara Bahri, John Wieting, Dana Alon, Donald Metzler
RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning
Jonas Gehring, Kunhao Zheng, Jade Copet, Vegard Mella, Taco Cohen, Gabriel Synnaeve
Precision Knowledge Editing: Enhancing Safety in Large Language Models
Xuying Li, Zhuo Li, Yuji Kosuga, Yasuhiro Yoshida, Victor Bian
Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions
Qian Ruan, Ilia Kuznetsov, Iryna Gurevych
FLAG: Financial Long Document Classification via AMR-based GNN
Bolun "Namir" Xia, Aparna Gupta, Mohammed J. Zaki
LLM+KG@VLDB'24 Workshop Summary
Arijit Khan, Tianxing Wu, Xi Chen