GPT-4
GPT-4, a large language model, is being extensively studied for its capabilities across diverse tasks, including translation, code analysis, educational assessment, and medical information extraction. Current research evaluates its performance against human benchmarks, explores its limitations (e.g., susceptibility to prompt engineering and inconsistencies in complex reasoning), and develops methods to improve its reliability and efficiency, such as prompt engineering and ensembles with other machine-learning models. These investigations are crucial for understanding GPT-4's strengths and weaknesses, informing its responsible deployment across applications, and advancing the broader field of large language model development.
Papers
Deployment of Large Language Models to Control Mobile Robots at the Edge
Pascal Sikorski, Leendert Schrader, Kaleb Yu, Lucy Billadeau, Jinka Meenakshi, Naveena Mutharasan, Flavio Esposito, Hadi AliAkbarpour, Madi Babaiasl
RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness
Tianyu Yu, Haoye Zhang, Yuan Yao, Yunkai Dang, Da Chen, Xiaoman Lu, Ganqu Cui, Taiwen He, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun
Eliciting Informative Text Evaluations with Large Language Models
Yuxuan Lu, Shengwei Xu, Yichi Zhang, Yuqing Kong, Grant Schoenebeck
A Declarative System for Optimizing AI Workloads
Chunwei Liu, Matthew Russo, Michael Cafarella, Lei Cao, Peter Baille Chen, Zui Chen, Michael Franklin, Tim Kraska, Samuel Madden, Gerardo Vitagliano
Can GPT-4 do L2 analytic assessment?
Stefano Bannò, Hari Krishna Vydana, Kate M. Knill, Mark J. F. Gales
GPT-4 passes most of the 297 written Polish Board Certification Examinations
Jakub Pokrywka, Jeremi Kaczmarek, Edward Gorzelańczyk
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
Justin Zhao, Timothy Wang, Wael Abid, Geoffrey Angus, Arnav Garg, Jeffery Kinnison, Alex Sherstinsky, Piero Molino, Travis Addair, Devvret Rishi