Clinical Text
Clinical text analysis focuses on extracting meaningful information from unstructured medical records to improve healthcare. Current research emphasizes large language models (LLMs) such as BERT and its variants, along with techniques such as retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT), to enhance entity recognition, information retrieval, and phenotyping. These advances are crucial for automating high-throughput phenotyping, improving diagnostic accuracy, and supporting more efficient clinical decision-making, ultimately benefiting patient care and medical research. The development of open-source tools and datasets is also a significant trend, fostering collaboration and reproducibility within the field.
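As a concrete illustration of the retrieval step these pipelines build on, the sketch below embeds a query and a few clinical note snippets with a BERT-style encoder, mean-pools the token embeddings, and ranks the notes by cosine similarity. It is a minimal sketch, not taken from any of the papers listed here; the model name, example notes, and query are illustrative assumptions.

```python
# Minimal sketch: dense retrieval over clinical note snippets with mean pooling.
# The encoder checkpoint and example notes are placeholders, not from the papers below.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "emilyalsentzer/Bio_ClinicalBERT"  # assumption: any BERT-style encoder works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

def embed(texts):
    """Mean-pool token embeddings, ignoring padding positions, and L2-normalize."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)         # (batch, tokens, 1)
    summed = (hidden * mask).sum(dim=1)                  # zero out padded positions
    counts = mask.sum(dim=1).clamp(min=1)
    return torch.nn.functional.normalize(summed / counts, dim=-1)

notes = [
    "Patient presents with shortness of breath and bilateral leg edema.",
    "Post-operative note: laparoscopic cholecystectomy, no complications.",
    "History of type 2 diabetes, on metformin, HbA1c 8.2%.",
]
query = "signs of heart failure"
scores = embed([query]) @ embed(notes).T                 # cosine similarity of unit vectors
best = scores.squeeze(0).argmax().item()
print(f"Top match ({scores[0, best].item():.3f}): {notes[best]}")
```

In a RAG setting, the top-ranked snippets would be passed to an LLM as context; swapping the pooling function (e.g., CLS-token pooling instead of mean pooling) is exactly the kind of design choice the first paper below compares.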
Papers
Lessons Learned on Information Retrieval in Electronic Health Records: A Comparison of Embedding Models and Pooling Strategies
Skatje Myers, Timothy A. Miller, Yanjun Gao, Matthew M. Churpek, Anoop Mayampurath, Dmitriy Dligach, Majid Afshar
Beyond Fine-tuning: Unleashing the Potential of Continuous Pretraining for Clinical LLMs
Clément Christophe, Tathagata Raha, Svetlana Maslenkova, Muhammad Umar Salman, Praveen K Kanithi, Marco AF Pimentel, Shadab Khan
Performant ASR Models for Medical Entities in Accented Speech
Tejumade Afonja, Tobi Olatunji, Sewade Ogun, Naome A. Etori, Abraham Owodunni, Moshood Yekini
Aqulia-Med LLM: Pioneering Full-Process Open-Source Medical Language Models
Lulu Zhao, Weihao Zeng, Xiaofeng Shi, Hua Zhou, Donglin Hao, Yonghua Lin
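The "Beyond Fine-tuning" entry above concerns continuing a clinical LLM's pretraining on domain text rather than only fine-tuning it on task labels. The sketch below shows what such continuous (domain-adaptive) pretraining looks like in a minimal form: a causal language-modeling loop over a toy clinical corpus. The base model, corpus, and hyperparameters are placeholder assumptions and do not reflect the paper's actual setup.

```python
# Minimal sketch of continuous (domain-adaptive) pretraining on clinical text.
# Base model, corpus, and hyperparameters are illustrative assumptions only.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling

MODEL = "gpt2"  # small stand-in; real clinical LLMs are far larger
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL)

corpus = [
    "Discharge summary: patient admitted with community-acquired pneumonia ...",
    "Progress note: blood pressure controlled on lisinopril 10 mg daily ...",
]
encodings = [tokenizer(text, truncation=True, max_length=128) for text in corpus]
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective
loader = DataLoader(encodings, batch_size=2, collate_fn=collator)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step, batch in enumerate(loader):        # tiny demo loop; real runs take many steps
    loss = model(**batch).loss               # next-token prediction on clinical text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step} loss: {loss.item():.3f}")
```

The same loop could be made parameter-efficient by wrapping the model with a LoRA adapter before training, which is the PEFT direction mentioned in the summary above.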