Large Language Model
Large language models (LLMs) are AI systems trained to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on enhancing LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as on improving their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address critical limitations of current LLMs and pave the way for broader applications across fields including healthcare, legal tech, and autonomous systems.
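To make the efficiency theme concrete, here is a minimal sketch of one common technique the summary mentions, quantization. It shows generic symmetric per-tensor int8 weight quantization in NumPy; it is an illustrative toy, not the method of any paper listed below, and the function names are placeholders.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-to-nearest bounds the per-weight error by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Storing `q` instead of `w` cuts memory 4x versus float32, at the cost of the bounded rounding error checked above; production schemes refine this with per-channel or per-group scales.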
Papers
A Multimodal Social Agent
Athina Bikaki, Ioannis A. Kakadiaris
LLaVA-Zip: Adaptive Visual Token Compression with Intrinsic Image Information
Ke Wang, Hong Xuan
Performance of a large language model-Artificial Intelligence based chatbot for counseling patients with sexually transmitted infections and genital diseases
Nikhil Mehta, Sithira Ambepitiya, Thanveer Ahamad, Dinuka Wijesundara, Yudara Kularathne
In-Context Learning with Topological Information for Knowledge Graph Completion
Udari Madhushani Sehwag, Kassiani Papasotiriou, Jared Vann, Sumitra Ganesh
Towards LLM-based optimization compilers. Can LLMs learn how to apply a single peephole optimization? Reasoning is all LLMs need!
Xiangxin Fang, Lev Mukhanov
Advancing Single- and Multi-task Text Classification through Large Language Model Fine-tuning
Hang Zhao, Qile P. Chen, Yijing Barry Zhang, Gang Yang
TURBOATTENTION: Efficient Attention Approximation For High Throughputs LLMs
Hao Kang, Srikant Bharadwaj, James Hensman, Tushar Krishna, Victor Ruhle, Saravan Rajmohan
EMS: Adaptive Evict-then-Merge Strategy for Head-wise KV Cache Compression Based on Global-Local Importance
Yingxin Li, Ye Li, Yuan Meng, Xinzhu Ma, Zihan Geng, Shutao Xia, Zhi Wang
Learning to Reason via Self-Iterative Process Feedback for Small Language Models
Kaiyuan Chen, Jin Wang, Xuejie Zhang
NyayaAnumana & INLegalLlama: The Largest Indian Legal Judgment Prediction Dataset and Specialized Language Model for Enhanced Decision Analysis
Shubham Kumar Nigam, Balaramamahanthi Deepak Patnaik, Shivam Mishra, Noel Shallum, Kripabandhu Ghosh, Arnab Bhattacharya
SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs
Sultan Alrashed
Large Language Models Still Face Challenges in Multi-Hop Reasoning with External Knowledge
Haotong Zhang
Code LLMs: A Taxonomy-based Survey
Nishat Raihan, Christian Newman, Marcos Zampieri
LCFO: Long Context and Long Form Output Dataset and Benchmarking
Marta R. Costa-jussà, Pierre Andrews, Mariano Coria Meglioli, Joy Chen, Joe Chuang, David Dale, Christophe Ropers, Alexandre Mourachko, Eduardo Sánchez, Holger Schwenk, Tuan Tran, Arina Turkatenko, Carleigh Wood
Large Language Models for Scholarly Ontology Generation: An Extensive Analysis in the Engineering Field
Tanay Aggarwal, Angelo Salatino, Francesco Osborne, Enrico Motta
PyOD 2: A Python Library for Outlier Detection with LLM-powered Model Selection
Sihan Chen, Zhuangzhuang Qian, Wingchun Siu, Xingcan Hu, Jiaqi Li, Shawn Li, Yuehan Qin, Tiankai Yang, Zhuo Xiao, Wanghao Ye, Yichi Zhang, Yushun Dong, Yue Zhao
GraphTool-Instruction: Revolutionizing Graph Reasoning in LLMs through Decomposed Subtask Instruction
Rongzheng Wang, Shuang Liang, Qizhi Chen, Jiasheng Zhang, Ke Qin
Concept Bottleneck Large Language Models
Chung-En Sun, Tuomas Oikarinen, Berk Ustun, Tsui-Wei Weng