Large Language Model
Large language models (LLMs) are AI systems trained to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening complex reasoning and the handling of diverse instructions. These advances matter because they address critical limitations of current LLMs and open the way to broader applications in fields such as healthcare, legal technology, and autonomous systems.
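To make the efficiency point above concrete, here is a minimal, self-contained sketch of the idea behind weight quantization: mapping floating-point model weights onto small integers (symmetric int8 here) so they take less memory, at the cost of a bounded rounding error. This is an illustrative toy, not the method of any listed paper; the function names are made up for this example.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0  # one step of the integer grid
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 2.0, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Real LLM quantization schemes refine this basic idea with per-channel scales, asymmetric zero-points, and calibration data, but the memory/accuracy trade-off is the same.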
Papers
Representing the Under-Represented: Cultural and Core Capability Benchmarks for Developing Thai Large Language Models
Dahyun Kim, Sukyung Lee, Yungi Kim, Attapol Rutherford, Chanjun Park
Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge
Jiahuan Li, Yiqing Cao, Shujian Huang, Jiajun Chen
Driving with Regulation: Interpretable Decision-Making for Autonomous Vehicles with Retrieval-Augmented Reasoning via LLM
Tianhui Cai, Yifan Liu, Zewei Zhou, Haoxuan Ma, Seth Z. Zhao, Zhiwen Wu, Jiaqi Ma
Only-IF: Revealing the Decisive Effect of Instruction Diversity on Generalization
Dylan Zhang, Justin Wang, Francois Charton
Rule-based Data Selection for Large Language Models
Xiaomin Li, Mingye Gao, Zhiwei Zhang, Chang Yue, Hong Hu
The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?
Alexander S. Choi, Syeda Sabrina Akter, JP Singh, Antonios Anastasopoulos
MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs
Lei Wang, Shan Dong, Yuhui Xu, Hanze Dong, Yalu Wang, Amrita Saha, Ee-Peng Lim, Caiming Xiong, Doyen Sahoo
Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning
Qingyu Yin, Xuzheng He, Luoao Deng, Chak Tou Leong, Fan Wang, Yanzhao Yan, Xiaoyu Shen, Qiang Zhang
Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates
Chaithanya Bandi, Abir Harrasse
Control Large Language Models via Divide and Conquer
Bingxuan Li, Yiwei Wang, Tao Meng, Kai-Wei Chang, Nanyun Peng
Evaluation of Code LLMs on Geospatial Code Generation
Piotr Gramacki, Bruno Martins, Piotr Szymański
ProtocoLLM: Automatic Evaluation Framework of LLMs on Domain-Specific Scientific Protocol Formulation Tasks
Seungjun Yi, Jaeyoung Lim, Juyong Yoon
Hammer: Robust Function-Calling for On-Device Language Models via Function Masking
Qiqiang Lin, Muning Wen, Qiuying Peng, Guanyu Nie, Junwei Liao, Jun Wang, Xiaoyun Mo, Jiamu Zhou, Cheng Cheng, Yin Zhao, Jun Wang, Weinan Zhang
EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?
Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang
On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective
Shoaib Ahmed Siddiqui, Yanzhi Chen, Juyeon Heo, Menglin Xia, Adrian Weller
RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference
Yige Xu, Xu Guo, Zhiwei Zeng, Chunyan Miao
Realizing Video Summarization from the Path of Language-based Semantic Understanding
Kuan-Chen Mu, Zhi-Yi Chin, Wei-Chen Chiu
Diagnosing Robotics Systems Issues with Large Language Models
Jordis Emilia Herrmann, Aswath Mandakath Gopinath, Mikael Norrlof, Mark Niklas Müller
MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems
Zhentao Xie, Jiabao Zhao, Yilei Wang, Jinxin Shi, Yanhong Bai, Xingjiao Wu, Liang He
Lens: Rethinking Multilingual Enhancement for Large Language Models
Weixiang Zhao, Yulin Hu, Jiahe Guo, Xingyu Sui, Tongtong Wu, Yang Deng, Yanyan Zhao, Bing Qin, Wanxiang Che, Ting Liu