Medical LLM
Medical LLMs are large language models adapted for healthcare applications, primarily aiming to improve medical information access, analysis, and decision-making. Current research focuses on enhancing reasoning capabilities through techniques like chain-of-thought prompting and dynamic reasoning trajectory search, and on addressing bias and safety through careful preference alignment and guardrail implementation. These advances hold significant promise for improving healthcare efficiency and patient care, but ongoing work is needed to mitigate bias, reduce hallucinations, and establish robust evaluation in real-world clinical settings.
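As a minimal illustration of the chain-of-thought prompting technique mentioned above, the sketch below wraps a medical question in a prompt that instructs the model to reason step by step before committing to an answer. It only constructs the prompt string; the question and template wording are illustrative assumptions, not taken from any of the listed papers, and the resulting prompt can be passed to any LLM API.

```python
# Minimal sketch of chain-of-thought prompting for medical QA.
# This only builds the prompt; plug the string into an LLM API of your choice.

COT_TEMPLATE = """You are a careful clinical assistant.

Question: {question}

Reason step by step through the relevant findings, then give your final
answer on a line beginning with "Answer:". If the evidence is
insufficient, say so instead of guessing."""

def build_cot_prompt(question: str) -> str:
    """Wrap a medical question in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(question=question)

if __name__ == "__main__":
    # Hypothetical example question, chosen only for illustration.
    print(build_cot_prompt(
        "A patient on warfarin is prescribed trimethoprim-sulfamethoxazole. "
        "What drug interaction should be monitored?"
    ))
```

Requiring the final answer on a fixed "Answer:" line is one simple way to make the model's conclusion easy to parse, which in turn makes downstream guardrail checks (for example, flagging uncertain or unsafe answers) straightforward to apply.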
Papers
Efficient Continual Pre-training of LLMs for Low-resource Languages
Arijit Nag, Soumen Chakrabarti, Animesh Mukherjee, Niloy Ganguly
TACOMORE: Leveraging the Potential of LLMs in Corpus-based Discourse Analysis with Prompt Engineering
Bingru Li, Han Wang
Modeling Story Expectations to Understand Engagement: A Generative Framework Using LLMs
Hortense Fong, George Gui
LatentQA: Teaching LLMs to Decode Activations Into Natural Language
Alexander Pan, Lijie Chen, Jacob Steinhardt
TURBOATTENTION: Efficient Attention Approximation For High Throughputs LLMs
Hao Kang, Srikant Bharadwaj, James Hensman, Tushar Krishna, Victor Ruhle, Saravan Rajmohan
Can We Generate Visual Programs Without Prompting LLMs?
Michal Shlapentokh-Rothman, Yu-Xiong Wang, Derek Hoiem
CogNav: Cognitive Process Modeling for Object Goal Navigation with LLMs
Yihan Cao, Jiazhao Zhang, Zhinan Yu, Shuzhen Liu, Zheng Qin, Qin Zou, Bo Du, Kai Xu
Combining knowledge graphs and LLMs for hazardous chemical information management and reuse
Marcos Da Silveira, Louis Deladiennee, Kheira Acem, Oona Freudenthal
TrojanWhisper: Evaluating Pre-trained LLMs to Detect and Localize Hardware Trojans
Md Omar Faruque, Peter Jamieson, Ahmad Patooghy, Abdel-Hameed A. Badawy
Filling Memory Gaps: Enhancing Continual Semantic Parsing via SQL Syntax Variance-Guided LLMs without Real Data Replay
Ruiheng Liu, Jinyu Zhang, Yanqi Song, Yu Zhang, Bailong Yang
Exploring Coding Spot: Understanding Parametric Contributions to LLM Coding Performance
Dongjun Kim, Minhyuk Kim, YongChan Chun, Chanjun Park, Heuiseok Lim
Predictable Emergent Abilities of LLMs: Proxy Tasks Are All You Need
Bo-Wen Zhang, Yan Yan, Boxiang Yang, Yifei Xue, Guang Liu