Medical LLM
Medical LLMs are large language models adapted for healthcare applications, primarily aiming to improve medical information access, analysis, and decision-making. Current research focuses on enhancing reasoning capabilities through techniques like chain-of-thought prompting and dynamic reasoning trajectory search, as well as addressing biases and ensuring safety through careful preference alignment and guardrail implementation. These advancements hold significant promise for improving healthcare efficiency and patient care, but ongoing work is crucial to address challenges like bias mitigation, hallucination reduction, and robust evaluation in real-world clinical settings.
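Chain-of-thought prompting, mentioned above, asks the model to produce intermediate reasoning before its final answer, typically by showing a worked exemplar. A minimal sketch of such a prompt for a medical QA setting follows; the exemplar question, its reasoning text, and the `build_cot_prompt` helper are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch of chain-of-thought (CoT) prompting for medical QA.
# The exemplar content and helper name are illustrative assumptions.

COT_EXEMPLAR = (
    "Q: A patient stabilized on warfarin starts trimethoprim-sulfamethoxazole. "
    "What is the main risk?\n"
    "Reasoning: Trimethoprim-sulfamethoxazole inhibits warfarin metabolism, "
    "which raises the INR and potentiates anticoagulation.\n"
    "A: Increased bleeding risk.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar and ask the model to reason step by step."""
    return (
        "Answer the medical question. Think step by step, "
        "then state a final answer.\n\n"
        f"{COT_EXEMPLAR}\n"
        f"Q: {question}\n"
        "Reasoning:"
    )

prompt = build_cot_prompt(
    "Which electrolyte should be monitored in a patient on spironolactone?"
)
print(prompt)
```

The prompt ends at `Reasoning:` so the model is steered to emit its reasoning chain first; the exemplar demonstrates the Q/Reasoning/A format the model is expected to follow.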
Papers
ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance
Chunwei Wang, Guansong Lu, Junwei Yang, Runhui Huang, Jianhua Han, Lu Hou, Wei Zhang, Hang Xu
Political-LLM: Large Language Models in Political Science
Lincan Li, Jiaqi Li, Catherine Chen, Fred Gui, Hongjia Yang, Chenxiao Yu, Zhengguang Wang, Jianing Cai, Junlong Aaron Zhou, Bolin Shen, Alex Qian, Weixin Chen, Zhongkai Xue, Lichao Sun, Lifang He, Hanjie Chen, Kaize Ding, Zijian Du, Fangzhou Mu, Jiaxin Pei, Jieyu Zhao, Swabha Swayamdipta, Willie Neiswanger, Hua Wei, Xiyang Hu, Shixiang Zhu, Tianlong Chen, Yingzhou Lu, Yang Shi, Lianhui Qin, Tianfan Fu, Zhengzhong Tu, Yuzhe Yang, Jaemin Yoo, Jiaheng Zhang, Ryan Rossi, Liang Zhan, Liang Zhao, Emilio Ferrara, Yan Liu, Furong Huang, Xiangliang Zhang, Lawrence Rothenberg, Shuiwang Ji, Philip S. Yu, Yue Zhao, Yushun Dong
Generative Adversarial Reviews: When LLMs Become the Critic
Nicolas Bougie, Narimasa Watanabe
Enhancing LLMs for Impression Generation in Radiology Reports through a Multi-Agent System
Fang Zeng, Zhiliang Lyu, Quanzheng Li, Xiang Li
Reinforcement Learning: An Overview
Kevin Murphy
A text-to-tabular approach to generate synthetic patient data using LLMs
Margaux Tornqvist, Jean-Daniel Zucker, Tristan Fauvel, Nicolas Lambert, Mathilde Berthelot, Antoine Movschin
Continuous Speech Tokens Makes LLMs Robust Multi-Modality Learners
Ze Yuan, Yanqing Liu, Shujie Liu, Sheng Zhao
Show, Don't Tell: Uncovering Implicit Character Portrayal using LLMs
Brandon Jaipersaud, Zining Zhu, Frank Rudzicz, Elliot Creager
Enhancing Mathematical Reasoning in LLMs with Background Operators
Jiajun Chen, Yik-Cheung Tam
MTMT: Consolidating Multiple Thinking Modes to Form a Thought Tree for Strengthening LLM
Changcheng Li, Xiangyu Wang, Qiuju Chen, Xiren Zhou, Huanhuan Chen
U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs
Konstantin Chernyshev, Vitaliy Polshkov, Ekaterina Artemova, Alex Myasnikov, Vlad Stepanov, Alexei Miasnikov, Sergei Tilga
TOOL-ED: Enhancing Empathetic Response Generation with the Tool Calling Capability of LLM
Huiying Cao, Yiqun Zhang, Shi Feng, Xiaocui Yang, Daling Wang, Yifei Zhang
TDD-Bench Verified: Can LLMs Generate Tests for Issues Before They Get Resolved?
Toufique Ahmed, Martin Hirzel, Rangeet Pan, Avraham Shinnar, Saurabh Sinha
Drawing Pandas: A Benchmark for LLMs in Generating Plotting Code
Timur Galimzyanov, Sergey Titov, Yaroslav Golubev, Egor Bogomolov
QA-TOOLBOX: Conversational Question-Answering for process task guidance in manufacturing
Ramesh Manuvinakurike, Elizabeth Watkins, Celal Savur, Anthony Rhodes, Sovan Biswas, Gesem Gudino Mejia, Richard Beckwith, Saurav Sahay, Giuseppe Raffa, Lama Nachman
Adaptive Two-Phase Finetuning LLMs for Japanese Legal Text Retrieval
Quang Hoang Trung, Nguyen Van Hoang Phuc, Le Trung Hoang, Quang Huu Hieu, Vo Nguyen Le Duy
BANER: Boundary-Aware LLMs for Few-Shot Named Entity Recognition
Quanjiang Guo, Yihong Dong, Ling Tian, Zhao Kang, Yu Zhang, Sijie Wang