GPT-4
GPT-4, a large language model, is being extensively studied for its capabilities across diverse tasks, including translation, code analysis, educational assessment, and medical information extraction. Current research evaluates its performance against human benchmarks, probes its limitations (e.g., sensitivity to prompt wording and inconsistencies in complex reasoning), and develops methods to improve its reliability and efficiency, such as prompt engineering and ensembling with other machine learning models. These investigations are crucial for understanding GPT-4's strengths and weaknesses, informing its responsible deployment, and advancing the broader field of large language model development.
Papers
De-jargonizing Science for Journalists with GPT-4: A Pilot Study
Sachita Nishal, Eric Lee, Nicholas Diakopoulos
In-Context Learning for Long-Context Sentiment Analysis on Infrastructure Project Opinions
Alireza Shamshiri, Kyeong Rok Ryu, June Young Park
Mini-Omni2: Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities
Zhifei Xie, Changqiao Wu