GPT Neo
GPT Neo, a family of open-source large language models (LLMs) developed by EleutherAI, is the subject of extensive research aimed at improving its performance and efficiency and at mitigating its biases. Current work explores applications ranging from automated text summarization and medical diagnosis assistance to code generation and data acquisition system design, often employing techniques such as parameter-efficient fine-tuning and retrieval-augmented generation to boost task performance while reducing computational cost. The ability of GPT Neo and similar LLMs to process and generate human-quality text has significant implications across fields, offering the potential to automate routine tasks and augment human capabilities in professional settings.
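To make the parameter-efficient fine-tuning mentioned above concrete, here is a minimal sketch that attaches LoRA adapters to a GPT Neo checkpoint using the Hugging Face transformers and peft libraries. The EleutherAI/gpt-neo-1.3B checkpoint and the q_proj/v_proj module names are real, but the LoRA hyperparameters (r, lora_alpha, dropout) are illustrative assumptions, not settings taken from any of the papers listed below.

```python
# Minimal LoRA setup for GPT Neo (illustrative hyperparameters).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "EleutherAI/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains small low-rank adapter
# matrices injected into selected layers.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=32,                        # adapter scaling (assumed value)
    lora_dropout=0.05,                    # assumed value
    target_modules=["q_proj", "v_proj"],  # GPT Neo attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

# The wrapped model works with a standard training loop or Trainer,
# and generation behaves as usual:
inputs = tokenizer("GPT Neo is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Because only the adapter matrices receive gradients, this approach cuts memory and compute requirements sharply relative to full fine-tuning, which is what makes it attractive in the resource-constrained settings this line of research targets.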
Papers
TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT
Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao
A Study on the Performance of Generative Pre-trained Transformer (GPT) in Simulating Depressed Individuals on the Standardized Depressive Symptom Scale
Sijin Cai, Nanfeng Zhang, Jiaying Zhu, Yanjie Liu, Yongjin Zhou
Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation
Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, Ashwin Kalyan
InheritSumm: A General, Versatile and Compact Summarizer by Distilling from GPT
Yichong Xu, Ruochen Xu, Dan Iter, Yang Liu, Shuohang Wang, Chenguang Zhu, Michael Zeng