Non-Negative Textual Response
Non-negative textual response research focuses on generating accurate, helpful, and unbiased text outputs from large language models (LLMs), addressing failure modes such as hallucination (fabricating unsupported information) and bias. Current work emphasizes improving the faithfulness of LLM responses to the input context, often through retrieval-augmented generation (RAG) or fine-tuning to improve accuracy and reduce the influence of biases learned during training. These efforts are crucial for building trustworthy LLMs in fields such as healthcare, education, and customer service, where reliable, unbiased information is paramount.
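As an illustration of the RAG pattern mentioned above, the following minimal Python sketch (not drawn from any of the listed papers; the toy corpus and the `retrieve` and `build_grounded_prompt` names are invented for this example) shows how retrieved passages can be injected into a prompt so the model is instructed to answer only from the supplied context:

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Hypothetical example: the corpus, retriever, and prompt builder
# are simplified stand-ins, not a real production pipeline.

# Toy corpus standing in for a real document store; in practice this
# would be a vector index over domain documents.
CORPUS = [
    "Retrieval-augmented generation grounds LLM answers in retrieved text.",
    "Hallucination refers to an LLM fabricating unsupported claims.",
    "Fine-tuning adapts a pretrained model to a narrower task or domain.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query
    (a real system would use dense vector similarity instead)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context,
    which is the mechanism RAG uses to reduce hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What is hallucination in an LLM?"
    prompt = build_grounded_prompt(question, retrieve(question, CORPUS))
    print(prompt)  # this prompt would be sent to any LLM completion API
```

A production system would replace the keyword-overlap retriever with dense vector search and pass the prompt to an actual LLM; the explicit grounding instruction is what discourages the model from fabricating answers unsupported by the retrieved sources.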
Papers
ACE2: Accurately learning subseasonal to decadal atmospheric variability and forced responses
Oliver Watt-Meyer, Brian Henn, Jeremy McGibbon, Spencer K. Clark, Anna Kwa, W. Andre Perkins, Elynn Wu, Lucas Harris, Christopher S. Bretherton
Understanding Student Sentiment on Mental Health Support in Colleges Using Large Language Models
Palak Sood, Chengyang He, Divyanshu Gupta, Yue Ning, Ping Wang
Contrastive learning of cell state dynamics in response to perturbations
Soorya Pradeep, Alishba Imran, Ziwen Liu, Taylla Milena Theodoro, Eduardo Hirata-Miyasaki, Ivan Ivanov, Madhura Bhave, Sudip Khadka, Hunter Woosley, Carolina Arias, Shalin B. Mehta
Search Engines in an AI Era: The False Promise of Factual and Verifiable Source-Cited Responses
Pranav Narayanan Venkit, Philippe Laban, Yilun Zhou, Yixin Mao, Chien-Sheng Wu
Retrospective Comparative Analysis of Prostate Cancer In-Basket Messages: Responses from Closed-Domain LLM vs. Clinical Teams
Yuexing Hao, Jason M. Holmes, Jared Hobson, Alexandra Bennett, Daniel K. Ebner, David M. Routman, Satomi Shiraishi, Samir H. Patel, Nathan Y. Yu, Chris L. Hallemeier, Brooke E. Ball, Mark R. Waddle, Wei Liu
A method for identifying causality in the response of nonlinear dynamical systems
Joseph Massingham, Ole Nielsen, Tore Butlin