Medical LLM
Medical LLMs are large language models adapted for healthcare applications, primarily aiming to improve medical information access, analysis, and decision-making. Current research focuses on enhancing reasoning capabilities through techniques such as chain-of-thought prompting and dynamic reasoning trajectory search, and on addressing bias and safety through careful preference alignment and guardrail implementation. These advances hold significant promise for improving healthcare efficiency and patient care, but ongoing work remains crucial to mitigate bias, reduce hallucinations, and establish robust evaluation in real-world clinical settings.
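To make the chain-of-thought prompting and guardrail ideas above concrete, here is a minimal sketch in Python. It builds a step-by-step prompt for a clinical question and applies a toy keyword check to the model's reply before returning it. The `call_llm` function, the prompt wording, and the guardrail vocabulary are all illustrative assumptions, not any specific paper's method; a real system would plug in an actual model endpoint and a far more thorough safety layer.

```python
# Sketch: chain-of-thought prompting plus a trivial output guardrail.
# `call_llm` is a hypothetical placeholder for a real model endpoint.

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a call to whatever LLM backend is in use."""
    return "Step 1: review vital signs. Step 2: order ECG and troponin. Final answer: consult a clinician."

def build_cot_prompt(question: str) -> str:
    # Ask the model to reason step by step before committing to an answer.
    return (
        "You are a careful medical assistant.\n"
        f"Question: {question}\n"
        "Think through the relevant findings step by step, "
        "then give a final answer prefixed with 'Final answer:'."
    )

# Toy guardrail vocabulary; a production guardrail would be far broader.
UNSAFE_MARKERS = ("guaranteed cure", "no need to see a doctor")

def answer_with_guardrail(question: str) -> str:
    reply = call_llm(build_cot_prompt(question))
    # Withhold replies that contain overconfident or unsafe phrasing.
    if any(marker in reply.lower() for marker in UNSAFE_MARKERS):
        return "Withheld: response failed the safety check."
    return reply

if __name__ == "__main__":
    print(answer_with_guardrail(
        "A patient presents with chest pain and dyspnea. What initial workup is indicated?"
    ))
```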
Papers
Leveraging LLM for Automated Ontology Extraction and Knowledge Graph Generation
Mohammad Sadeq Abolhasani, Rong Pan
TextClass Benchmark: A Continuous Elo Rating of LLMs in Social Sciences
Bastián González-Bustamante
AgriBench: A Hierarchical Agriculture Benchmark for Multimodal Large Language Models
Yutong Zhou, Masahiro Ryo
The Performance of the LSTM-based Code Generated by Large Language Models (LLMs) in Forecasting Time Series Data
Saroj Gopali, Sima Siami-Namini, Faranak Abri, Akbar Siami Namin
Exploration of LLM Multi-Agent Application Implementation Based on LangGraph+CrewAI
Zhihua Duan, Jialin Wang
Simulating Tabular Datasets through LLMs to Rapidly Explore Hypotheses about Real-World Entities
Miguel Zabaleta, Joel Lehman
Push the Limit of Multi-modal Emotion Recognition by Prompting LLMs with Receptive-Field-Aware Attention Weighting
Liyun Zhang, Dian Ding, Yu Lu, Yi-Chao Chen, Guangtao Xue
Synthetic Data Generation with LLM for Improved Depression Prediction
Andrea Kang, Jun Yu Chen, Zoe Lee-Youngzie, Shuhao Fu
On Limitations of LLM as Annotator for Low Resource Languages
Suramya Jadhav, Abhay Shanbhag, Amogh Thakurdesai, Ridhima Sinare, Raviraj Joshi
NEMO: Can Multimodal LLMs Identify Attribute-Modified Objects?
Jiaxuan Li, Junwen Mo, MinhDuc Vo, Akihiro Sugimoto, Hideki Nakayama
Inference Scaling fLaws: The Limits of LLM Resampling with Imperfect Verifiers
Benedikt Stroebl, Sayash Kapoor, Arvind Narayanan
Can LLMs be Good Graph Judger for Knowledge Graph Construction?
Haoyu Huang, Chong Chen, Conghui He, Yang Li, Jiawei Jiang, Wentao Zhang
Blockchain Meets LLMs: A Living Survey on Bidirectional Integration
Jianghao Gong, Peiqi Yan, Yue Zhang, Hongli An, Logan Liu
The Two-Hop Curse: LLMs trained on A→B, B→C fail to learn A→C
Mikita Balesni, Tomek Korbak, Owain Evans
What can LLM tell us about cities?
Zhuoheng Li, Yaochen Wang, Zhixue Song, Yuqi Huang, Rui Bao, Guanjie Zheng, Zhenhui Jessie Li
Specifications: The missing link to making the development of LLM systems an engineering discipline
Ion Stoica, Matei Zaharia, Joseph Gonzalez, Ken Goldberg, Koushik Sen, Hao Zhang, Anastasios Angelopoulos, Shishir G. Patil, Lingjiao Chen, Wei-Lin Chiang, Jared Q. Davis