GPT Neo
GPT Neo, a family of open-source large language models (LLMs) released by EleutherAI, is the subject of extensive research into improving its performance and efficiency and mitigating its biases. Current work explores applications ranging from automated text summarization and medical-diagnosis assistance to code generation and data-acquisition system design, often employing techniques such as parameter-efficient fine-tuning and retrieval-augmented generation to improve output quality while reducing computational cost. Because GPT Neo and similar LLMs can process and generate human-quality text, they offer potential for automating tasks, improving efficiency, and augmenting human capabilities across many professional settings.
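To make the retrieval-augmented generation (RAG) idea mentioned above concrete, the following is a minimal toy sketch: relevant passages are retrieved for a query and prepended to the prompt the model would receive. A real pipeline would use dense embeddings and an actual LLM such as GPT Neo; here retrieval is plain word overlap and no model is called, so the example stays self-contained. All function names and the sample corpus are illustrative, not from any specific paper.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query, highest first (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the augmented prompt that would be fed to the language model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Tiny illustrative corpus; a real system would index external documents.
corpus = [
    "GPT Neo is an open-source language model released by EleutherAI.",
    "Retrieval-augmented generation grounds model outputs in external documents.",
    "Bananas are a good source of potassium.",
]

query = "What is GPT Neo?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The point of the sketch is the shape of the pipeline, not the retriever: grounding the prompt in retrieved text lets a fixed model answer from up-to-date or domain-specific documents without retraining, which is why the summary pairs RAG with reduced computational cost.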
Papers
"ChatGPT Is Here to Help, Not to Replace Anybody" -- An Evaluation of Students' Opinions On Integrating ChatGPT In CS Courses
Bruno Pereira Cipriano, Pedro Alves
Prompting Towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with GPT as a Pivot
Michelle Terblanche, Kayode Olaleye, Vukosi Marivate