Large Language Model
Large language models (LLMs) are AI systems designed to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address critical limitations of current LLMs and pave the way for broader adoption across fields such as healthcare, legal technology, and autonomous systems.
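To make the efficiency theme concrete, here is a minimal sketch of symmetric post-training int8 weight quantization, the generic technique referenced above; it is not the method of any paper listed here, and the function names are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127].
    # The epsilon guards against an all-zero tensor (division by zero).
    scale = max(float(np.max(np.abs(weights))), 1e-8) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize_int8(q, s)))
print(f"storage: {w.nbytes} B -> {q.nbytes} B, max abs error: {err:.5f}")
```

The trade-off this illustrates is the one driving the research above: int8 storage is 4x smaller than float32, at the cost of a bounded rounding error (at most half the scale per weight).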
Papers
Too Big to Fool: Resisting Deception in Language Models
Mohammad Reza Samsami, Mats Leon Richter, Juan Rodriguez, Megh Thakkar, Sarath Chandar, Maxime Gasse
RAGServe: Fast Quality-Aware RAG Systems with Configuration Adaptation
Siddhant Ray, Rui Pan, Zhuohan Gu, Kuntai Du, Ganesh Ananthanarayanan, Ravi Netravali, Junchen Jiang
On Adversarial Robustness and Out-of-Distribution Robustness of Large Language Models
April Yang, Jordan Tab, Parth Shah, Paul Kotchavong
One world, one opinion? The superstar effect in LLM responses
Sofie Goethals, Lauren Rhue
Cultural Evolution of Cooperation among LLM Agents
Aron Vallinder, Edward Hughes
Detecting LLM Hallucination Through Layer-wise Information Deficiency: Analysis of Unanswerable Questions and Ambiguous Prompts
Hazel Kim, Adel Bibi, Philip Torr, Yarin Gal
Efficient Continual Pre-training of LLMs for Low-resource Languages
Arijit Nag, Soumen Chakrabarti, Animesh Mukherjee, Niloy Ganguly
Retrieval-Augmented Semantic Parsing: Using Large Language Models to Improve Generalization
Xiao Zhang, Qianru Meng, Johan Bos
From Allies to Adversaries: Manipulating LLM Tool-Calling through Adversarial Injection
Haowei Wang, Rupeng Zhang, Junjie Wang, Mingyang Li, Yuekai Huang, Dandan Wang, Qing Wang
MPPO: Multi Pair-wise Preference Optimization for LLMs with Arbitrary Negative Samples
Shuo Xie, Fangzhi Zhu, Jiahui Wang, Lulu Wen, Wei Dai, Xiaowei Chen, Junxiong Zhu, Kai Zhou, Bo Zheng
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers
Junyan Hu, Xue Xiao, Mengqi Zhang, Xiao Chen, Zhaochun Ren, Zhumin Chen, Pengjie Ren
RETQA: A Large-Scale Open-Domain Tabular Question Answering Dataset for Real Estate Sector
Zhensheng Wang, Wenmian Yang, Kun Zhou, Yiquan Zhang, Weijia Jia
GAOKAO-Eval: Does high scores truly reflect strong capabilities in LLMs?
Zhikai Lei, Tianyi Liang, Hanglei Hu, Jin Zhang, Yunhua Zhou, Yunfan Shao, Linyang Li, Chenchui Li, Changbo Wang, Hang Yan, Qipeng Guo
A Comparative Study of LLMs, NMT Models, and Their Combination in Persian-English Idiom Translation
Sara Rezaeimanesh, Faezeh Hosseini, Yadollah Yaghoobzadeh
Small Language Model as Data Prospector for Large Language Model
Shiwen Ni, Haihong Wu, Di Yang, Qiang Qu, Hamid Alinejad-Rokny, Min Yang
AI and the Future of Digital Public Squares
Beth Goldberg, Diana Acosta-Navas, Michiel Bakker, Ian Beacock, Matt Botvinick, Prateek Buch, Renée DiResta, Nandika Donthi, Nathanael Fast, Ravi Iyer, Zaria Jalan, Andrew Konya, Grace Kwak Danciu, Hélène Landemore, Alice Marwick, Carl Miller, Aviv Ovadya, Emily Saltz, Lisa Schirch, Dalit Shalom, Divya Siddarth, Felix Sieker, Christopher Small, Jonathan Stray, Audrey Tang, Michael Henry Tessler, Amy Zhang
Llama 3 Meets MoE: Efficient Upcycling
Aditya Vavre, Ethan He, Dennis Liu, Zijie Yan, June Yang, Nima Tajbakhsh, Ashwath Aithal
Enhancing Nursing and Elderly Care with Large Language Models: An AI-Driven Framework
Qiao Sun, Jiexin Xie, Nanyang Ye, Qinying Gu, Shijie Guo
B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens
Zhuqiang Lu, Zhenfei Yin, Mengwei He, Zhihui Wang, Zicheng Liu, Zhiyong Wang, Kun Hu