Large Language Model
Large language models (LLMs) are AI systems trained to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow varied instructions. These advances matter because they address critical limitations of current LLMs and pave the way for broader applications across diverse fields, including healthcare, legal technology, and autonomous systems.
Papers
Language Model Evolutionary Algorithms for Recommender Systems: Benchmarks and Algorithm Comparisons
Jiao Liu, Zhu Sun, Shanshan Feng, Yew-Soon Ong
HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization
Huaqin Zhao, Jiaxi Li, Yi Pan, Shizhe Liang, Xiaofeng Yang, Wei Liu, Xiang Li, Fei Dou, Tianming Liu, Jin Lu
Structured Dialogue System for Mental Health: An LLM Chatbot Leveraging the PM+ Guidelines
Yixiang Chen, Xinyu Zhang, Jinran Wang, Xurong Xie, Nan Yan, Hui Chen, Lan Wang
Leveraging large language models for efficient representation learning for entity resolution
Xiaowei Xu, Bi T. Foua, Xingqiao Wang, Vivek Gunasekaran, John R. Talburt
AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment
Yonggan Fu, Zhongzhi Yu, Junwei Li, Jiayi Qian, Yongan Zhang, Xiangchi Yuan, Dachuan Shi, Roman Yakunin, Yingyan Celine Lin
Efficient Alignment of Large Language Models via Data Sampling
Amrit Khera, Rajat Ghosh, Debojyoti Dutta
On the Privacy Risk of In-context Learning
Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch
Number it: Temporal Grounding Videos like Flipping Manga
Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, Xu Yang
Scaling Law for Post-training after Model Pruning
Xiaodong Chen, Yuxuan Hu, Jing Zhang, Xiaokang Zhang, Cuiping Li, Hong Chen
Measuring Non-Adversarial Reproduction of Training Data in Large Language Models
Michael Aerni, Javier Rando, Edoardo Debenedetti, Nicholas Carlini, Daphne Ippolito, Florian Tramèr
Agentic LLMs in the Supply Chain: Towards Autonomous Multi-Agent Consensus-Seeking
Valeria Jannelli, Stefan Schoepf, Matthias Bickel, Torbjørn Netland, Alexandra Brintrup
Compound-QA: A Benchmark for Evaluating LLMs on Compound Questions
Yutao Hou, Yajing Luo, Zhiwen Ruan, Hongru Wang, Weifeng Ge, Yun Chen, Guanhua Chen
Mitigating Sycophancy in Decoder-Only Transformer Architectures: Synthetic Data Intervention
Libo Wang
Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity
Zichen Song, Sitan Huang, Yuxin Wu, Zhongfeng Kang
Information Extraction from Clinical Notes: Are We Ready to Switch to Large Language Models?
Yan Hu, Xu Zuo, Yujia Zhou, Xueqing Peng, Jimin Huang, Vipina K. Keloth, Vincent J. Zhang, Ruey-Ling Weng, Qingyu Chen, Xiaoqian Jiang, Kirk E. Roberts, Hua Xu
Large Language Models as User-Agents for Evaluating Task-Oriented-Dialogue Systems
Taaha Kazi, Ruiliang Lyu, Sizhe Zhou, Dilek Hakkani-Tur, Gokhan Tur
Squeezed Attention: Accelerating Long Context Length LLM Inference
Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Monishwaran Maheswaran, June Paik, Michael W. Mahoney, Kurt Keutzer, Amir Gholami
LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models
Zhengyi Wang, Jonathan Lorraine, Yikai Wang, Hang Su, Jun Zhu, Sanja Fidler, Xiaohui Zeng
Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents
Yuyou Gan, Yong Yang, Zhe Ma, Ping He, Rui Zeng, Yiming Wang, Qingming Li, Chunyi Zhou, Songze Li, Ting Wang, Yunjun Gao, Yingcai Wu, Shouling Ji
MM-Eval: A Hierarchical Benchmark for Modern Mongolian Evaluation in LLMs
Mengyuan Zhang, Ruihui Wang, Bo Xia, Yuan Sun, Xiaobing Zhao