Medical Dialogue
Medical dialogue research develops computational models that analyze and generate doctor-patient conversations, aiming to improve healthcare efficiency and quality. Current work relies heavily on large language models (LLMs), often augmented with knowledge graphs and techniques such as retrieval-augmented generation and reinforcement learning, to improve the accuracy, fluency, and medical-knowledge grounding of dialogue generation and summarization. The field is significant because it promises to automate tasks such as medical transcription, note generation, and even parts of diagnosis, potentially reducing physician workload and improving patient care. Challenges remain in ensuring accuracy and safety, and in addressing ethical concerns around bias and privacy in these AI-driven systems.
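To make the retrieval-augmented pattern mentioned above concrete, the sketch below shows one minimal way a medical dialogue turn could be grounded in retrieved knowledge before being passed to an LLM. It is illustrative only: the toy knowledge base, the keyword-overlap retriever, and the call_llm stub are assumptions introduced here, not the method of any paper listed in this section; a real system would index clinical guidelines or a knowledge graph and use dense retrieval plus an actual model API.

```python
# Minimal, hypothetical sketch of retrieval-augmented generation (RAG) for a
# medical dialogue turn. All names and data here are placeholders.

from dataclasses import dataclass


@dataclass
class KnowledgeSnippet:
    source: str
    text: str


# Toy knowledge base; a real system would index guidelines or a knowledge graph.
KNOWLEDGE_BASE = [
    KnowledgeSnippet("guideline", "A cough lasting more than eight weeks is classified as chronic cough."),
    KnowledgeSnippet("guideline", "Chest pain with shortness of breath warrants urgent evaluation."),
    KnowledgeSnippet("drug_info", "Ibuprofen may irritate the stomach and should be taken with food."),
]


def retrieve(query: str, k: int = 2) -> list[KnowledgeSnippet]:
    """Rank snippets by naive keyword overlap with the patient utterance."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(snippet.text.lower().split())), snippet)
        for snippet in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for score, snippet in scored[:k] if score > 0]


def build_prompt(dialogue_history: list[str], patient_utterance: str) -> str:
    """Assemble retrieved evidence and dialogue context into an LLM prompt."""
    evidence = retrieve(patient_utterance)
    evidence_block = "\n".join(f"- ({s.source}) {s.text}" for s in evidence) or "- none"
    history_block = "\n".join(dialogue_history)
    return (
        "You are a cautious medical assistant. Ground your reply in the evidence.\n"
        f"Retrieved evidence:\n{evidence_block}\n\n"
        f"Dialogue so far:\n{history_block}\n"
        f"Patient: {patient_utterance}\n"
        "Doctor:"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an actual LLM; no real API is assumed here."""
    return "[model response would be generated here]"


if __name__ == "__main__":
    history = ["Doctor: What brings you in today?"]
    utterance = "I have had a persistent cough for more than eight weeks."
    prompt = build_prompt(history, utterance)
    print(prompt)
    print(call_llm(prompt))
```

The design point is simply that retrieved evidence is injected into the prompt alongside the dialogue history, so the generated doctor turn can be checked against an explicit knowledge source rather than relying on the model's parametric memory alone.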
Papers
RuleAlign: Making Large Language Models Better Physicians with Diagnostic Rule Alignment
Xiaohan Wang, Xiaoyan Yang, Yuqi Zhu, Yue Shen, Jian Wang, Peng Wei, Lei Liang, Jinjie Gu, Huajun Chen, Ningyu Zhang
MDD-5k: A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents
Congchi Yin, Feng Li, Shu Zhang, Zike Wang, Jun Shao, Piji Li, Jianhua Chen, Xun Jiang