Generative Language Model
Generative language models (GLMs) are artificial intelligence systems that produce human-like text for tasks such as summarization, question answering, and creative writing. Current research focuses on improving their accuracy and efficiency and on reducing biases and hallucinations, through techniques such as retrieval-augmented generation (RAG), fine-tuning of smaller specialized models, and architectural optimization (e.g., of transformers). These advances have significant implications for fields including education (automated scoring), scientific discovery (catalyst design), and societal challenges (mitigating harmful outputs), but they also raise ethical concerns, particularly around bias.
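To make the RAG idea mentioned above concrete, here is a minimal, illustrative sketch: retrieve the documents most relevant to a query, then prepend them to the prompt so the model can ground its answer. The corpus, query, and prompt format are hypothetical placeholders, not drawn from any of the papers listed below; a production system would use dense embeddings and an actual generative model in place of the TF-IDF retriever and prompt-printing stub shown here.

```python
# Minimal RAG sketch (illustrative only): TF-IDF retrieval plus prompt assembly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy corpus standing in for a real document store.
corpus = [
    "Retrieval-augmented generation grounds model outputs in retrieved text.",
    "Fine-tuning adapts a pre-trained model to a narrower task or domain.",
    "Transformers process tokens in parallel with self-attention.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(corpus)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]  # indices of the k highest-scoring docs
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; a real system would send this to a GLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG reduce hallucinations?"))
```

The design point is the separation of concerns: retrieval supplies up-to-date, verifiable context, so the generator is conditioned on evidence rather than relying solely on its parametric memory, which is what helps curb hallucinations.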
Papers
On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models -- A Perspective
Minhyeok Lee
The BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues
Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, Chris Piech
A Scalable and Adaptive System to Infer the Industry Sectors of Companies: Prompt + Model Tuning of Generative Language Models
Lele Cao, Vilhelm von Ehrenheim, Astrid Berghult, Cecilia Henje, Richard Anselmo Stahl, Joar Wandborg, Sebastian Stan, Armin Catovic, Erik Ferm, Hannes Ingelhag
LexGPT 0.1: pre-trained GPT-J models with Pile of Law
Jieh-Sheng Lee
Large-Scale Text Analysis Using Generative Language Models: A Case Study in Discovering Public Value Expressions in AI Patents
Sergio Pelaez, Gaurav Verma, Barbara Ribeiro, Philip Shapira
Smaller Language Models are Better Black-box Machine-Generated Text Detectors
Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick