GPT Neo
GPT Neo is a family of open-source large language models (LLMs) being studied across many domains, with research focused on improving performance and efficiency and on mitigating bias. Current work spans applications ranging from automated text summarization and medical diagnosis assistance to code generation and data acquisition system design, often employing techniques such as parameter-efficient fine-tuning and retrieval-augmented generation to boost output quality while reducing computational cost. The ability of GPT Neo and similar LLMs to process and generate human-quality text has significant implications for diverse fields, offering the potential to automate tasks and augment human capabilities in professional settings.
Papers
Retrospective Comparative Analysis of Prostate Cancer In-Basket Messages: Responses from Closed-Domain LLM vs. Clinical Teams
Yuexing Hao, Jason M. Holmes, Jared Hobson, Alexandra Bennett, Daniel K. Ebner, David M. Routman, Satomi Shiraishi, Samir H. Patel, Nathan Y. Yu, Chris L. Hallemeier, Brooke E. Ball, Mark R. Waddle, Wei Liu
The application of GPT-4 in grading design university students' assignment and providing feedback: An exploratory study
Qian Huang, Thijs Willems, King Wang Poon
GPT's Judgements Under Uncertainty
Payam Saeedi, Mahsa Goodarzi