GPT-4
GPT-4, a large language model, is being extensively researched for its capabilities across diverse tasks, including translation, code analysis, educational assessment, and medical information extraction. Current research focuses on evaluating its performance against human benchmarks, exploring its limitations (e.g., susceptibility to adversarial prompting and inconsistencies in complex reasoning), and developing methods to improve its reliability and efficiency, such as prompt engineering and ensembles with other machine learning models. These investigations are crucial for understanding GPT-4's strengths and weaknesses, informing its responsible deployment across applications, and advancing the broader field of large language model development.
Papers
Synthetic Dialogue Dataset Generation using LLM Agents
Yelaman Abdullin, Diego Molla-Aliod, Bahadorreza Ofoghi, John Yearwood, Qingyang Li
Adapting Amidst Degradation: Cross Domain Li-ion Battery Health Estimation via Physics-Guided Test-Time Training
Yuyuan Feng, Guosheng Hu, Xiaodong Li, Zhihong Zhang
ChatQA: Surpassing GPT-4 on Conversational QA and RAG
Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro
GPT4Ego: Unleashing the Potential of Pre-trained Models for Zero-Shot Egocentric Action Recognition
Guangzhao Dai, Xiangbo Shu, Wenhao Wu, Rui Yan, Jiachao Zhang