Large Language Model
Large language models (LLMs) are AI systems trained to process and generate human-like text across a wide range of natural language processing tasks. Current research focuses on making LLMs safer, more efficient (through techniques such as quantization and optimized decoding), and fairer, as well as on improving their ability to perform complex reasoning and follow diverse instructions. These advances matter because they address critical limitations of current LLMs and pave the way for broader applications in fields such as healthcare, legal technology, and autonomous systems.
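As a rough illustration of what quantization means in this context, the sketch below shows symmetric per-tensor int8 weight quantization in NumPy: weights are mapped onto the integer range [-127, 127] with one shared scale factor, trading a small rounding error for a roughly 4x memory reduction versus float32. This is a minimal, generic example; the function names and toy matrix are illustrative and not drawn from any of the papers listed here.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float weights
    onto [-127, 127] using a single shared scale factor."""
    scale = max(np.abs(weights).max() / 127.0, 1e-12)  # guard against all-zero weights
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy check: quantize a random weight matrix and measure the rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```

Production systems typically refine this basic scheme (per-channel scales, activation-aware calibration, 4-bit formats), but the core idea of representing weights with low-precision integers plus a scale is the same.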
Papers
Quasi-random Multi-Sample Inference for Large Language Models
Aditya Parashar, Aditya Vikram Singh, Avinash Amballa, Jinlin Lai, Benjamin Rozonoyer
Robust Detection of LLM-Generated Text: A Comparative Analysis
Yongye Su, Yuqing Wu
IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization
Xinghua Zhang, Haiyang Yu, Cheng Fu, Fei Huang, Yongbin Li
Building an Efficient Multilingual Non-Profit IR System for the Islamic Domain Leveraging Multiprocessing Design in Rust
Vera Pavlova, Mohammed Makhlouf
Detecting Reference Errors in Scientific Literature with Large Language Models
Tianmai M. Zhang, Neil F. Abernethy
Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns
Zheng Li, Kai Zhang
A Picture is Worth A Thousand Numbers: Enabling LLMs Reason about Time Series via Visualization
Haoxin Liu, Chenghao Liu, B. Aditya Prakash
The Dark Patterns of Personalized Persuasion in Large Language Models: Exposing Persuasive Linguistic Features for Big Five Personality Traits in LLMs Responses
Wiktoria Mieleszczenko-Kowszewicz, Dawid Płudowski, Filip Kołodziejczyk, Jakub Świstak, Julian Sienkiewicz, Przemysław Biecek
Unmasking the Shadows: Pinpoint the Implementations of Anti-Dynamic Analysis Techniques in Malware Using LLM
Haizhou Wang, Nanqing Luo, Peng Liu
Energy Efficient Protein Language Models: Leveraging Small Language Models with LoRA for Controllable Protein Generation
Aayush Shah, Shankar Jayaratnam
Recycled Attention: Efficient inference for long-context language models
Fangyuan Xu, Tanya Goyal, Eunsol Choi
Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths?
Veronica Chatrath, Marcelo Lotif, Shaina Raza
Multi-hop Evidence Pursuit Meets the Web: Team Papelo at FEVER 2024
Christopher Malon
Humans Continue to Outperform Large Language Models in Complex Clinical Decision-Making: A Study with Medical Calculators
Nicholas Wan, Qiao Jin, Joey Chan, Guangzhi Xiong, Serina Applebaum, Aidan Gilson, Reid McMurry, R. Andrew Taylor, Aidong Zhang, Qingyu Chen, Zhiyong Lu
Evaluating and Adapting Large Language Models to Represent Folktales in Low-Resource Languages
JA Meaney, Beatrice Alex, William Lamb
Assessing the Answerability of Queries in Retrieval-Augmented Code Generation
Geonmin Kim, Jaeyeon Kim, Hancheol Park, Wooksu Shin, Tae-Ho Kim
An Early FIRST Reproduction and Improvements to Single-Token Decoding for Fast Listwise Reranking
Zijian Chen, Ronak Pradeep, Jimmy Lin
KyrgyzNLP: Challenges, Progress, and Future
Anton Alekseev, Timur Turatali