GPT-4
GPT-4, a large language model, is being extensively studied across diverse tasks, including translation, code analysis, educational assessment, and medical information extraction. Current research focuses on evaluating its performance against human benchmarks, probing its limitations (e.g., sensitivity to prompt wording and inconsistencies in complex reasoning), and developing methods to improve its reliability and efficiency, such as prompt engineering and ensembling with other machine learning models. These investigations are crucial for understanding GPT-4's strengths and weaknesses, informing its responsible deployment, and advancing the broader field of large language model development.
Papers
Evaluating GPT-4 with Vision on Detection of Radiological Findings on Chest Radiographs
Yiliang Zhou, Hanley Ong, Patrick Kennedy, Carol Wu, Jacob Kazam, Keith Hentel, Adam Flanders, George Shih, Yifan Peng
ESG Classification by Implicit Rule Learning via GPT-4
Hyo Jeong Yun, Chanyoung Kim, Moonjeong Hahm, Kyuri Kim, Guijin Son