Large Language Model
Large language models (LLMs) are AI systems designed to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on improving LLM safety, efficiency (through techniques such as quantization and optimized decoding), and fairness, as well as strengthening their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address critical limitations of current LLMs and open the way to broader applications in fields such as healthcare, legal tech, and autonomous systems.
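To make the quantization idea mentioned above concrete, here is a minimal, illustrative sketch of symmetric int8 post-training weight quantization. It is not drawn from any paper listed below; production LLM quantizers use per-channel scales and calibration methods well beyond this.

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single shared scale.

    Symmetric quantization: the largest absolute weight maps to +/-127.
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]


# Example: a tiny hypothetical weight vector.
weights = [0.12, -0.5, 0.33, -0.07]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Storing `q` instead of `weights` cuts memory by 4x (int8 vs. float32), at the cost of a rounding error bounded by half the quantization step `scale`.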
Papers
Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate Patient Perspectives
Xinyao Ma, Rui Zhu, Zihao Wang, Jingwei Xiong, Qingyu Chen, Haixu Tang, L. Jean Camp, Lucila Ohno-Machado
A Comprehensive Evaluation of Large Language Models on Mental Illnesses in Arabic Context
Noureldin Zahran, Aya E. Fouda, Radwa J. Hanafy, Mohammed E. Fouda
An efficient approach to represent enterprise web application structure using Large Language Model in the service of Intelligent Quality Engineering
Zaber Al Hassan Ayon, Gulam Husain, Roshankumar Bisoi, Waliur Rahman, Dr Tom Osborn
Leveraging Taxonomy and LLMs for Improved Multimodal Hierarchical Classification
Shijing Chen, Mohamed Reda Bouadjenek, Shoaib Jameel, Usman Naseem, Basem Suleiman, Flora D. Salim, Hakim Hacid, Imran Razzak
Event Argument Extraction with Enriched Prompts
Chen Liang
Bridging the Fairness Gap: Enhancing Pre-trained Models with LLM-Generated Sentences
Liu Yu, Ludie Guo, Ping Kuang, Fan Zhou
VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning
Ji Soo Lee, Jongha Kim, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim
Hierarchical Divide-and-Conquer for Fine-Grained Alignment in LLM-Based Medical Evaluation
Shunfan Zheng, Xiechi Zhang, Gerard de Melo, Xiaoling Wang, Linlin Wang
ZNO-Eval: Benchmarking reasoning capabilities of large language models in Ukrainian
Mykyta Syromiatnikov, Victoria Ruvinskaya, Anastasiya Troynina
Scaling Down Semantic Leakage: Investigating Associative Bias in Smaller Language Models
Veronika Smilga
Quantifying Relational Exploration in Cultural Heritage Knowledge Graphs with LLMs: A Neuro-Symbolic Approach
Mohammed Maree
Guided Code Generation with LLMs: A Multi-Agent Framework for Complex Code Tasks
Amr Almorsi, Mohanned Ahmed, Walid Gomaa
Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering
Yinghao Hu, Leilei Gan, Wenyi Xiao, Kun Kuang, Fei Wu
Using Pre-trained LLMs for Multivariate Time Series Forecasting
Malcolm L. Wolff, Shenghao Yang, Kari Torkkola, Michael W. Mahoney
AFRIDOC-MT: Document-level MT Corpus for African Languages
Jesujoba O. Alabi, Israel Abebe Azime, Miaoran Zhang, Cristina España-Bonet, Rachel Bawden, Dawei Zhu, David Ifeoluwa Adelani, Clement Oyeleke Odoje, Idris Akinade, Iffat Maab, Davis David, Shamsuddeen Hassan Muhammad, Neo Putini, David O. Ademuyiwa, Andrew Caines, Dietrich Klakow
Towards a Probabilistic Framework for Analyzing and Improving LLM-Enabled Software
Juan Manuel Baldonado, Flavia Bonomo-Braberman, Víctor Adrián Braberman
Large Language Models Share Representations of Latent Grammatical Concepts Across Typologically Diverse Languages
Jannik Brinkmann, Chris Wendler, Christian Bartelt, Aaron Mueller
Aggregating Low Rank Adapters in Federated Fine-tuning
Evelyn Trautmann, Ian Hales, Martin F. Volk
Multi-Agent Collaboration Mechanisms: A Survey of LLMs
Khanh-Tung Tran, Dung Dao, Minh-Duong Nguyen, Quoc-Viet Pham, Barry O'Sullivan, Hoang D. Nguyen