Generative Large Language Model
Generative Large Language Models (LLMs) are AI systems capable of producing human-quality text, enabling advances in applications such as dialogue systems and machine translation. Current research focuses on improving efficiency (e.g., through quantization and parallel processing), addressing bias and safety concerns (including backdoor attacks), and strengthening performance in low-resource languages via techniques such as fine-tuning and prompt engineering. LLMs are driving progress in natural language processing and affecting diverse fields through improved automation, greater accessibility, and more effective information retrieval and analysis.
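As one concrete illustration of the prompt-engineering techniques mentioned above, a few-shot prompt can be built by prepending labeled demonstrations to a query before sending it to a generative model. The sketch below is model-agnostic and only shows prompt construction; the task, example sentences, and labels are invented for illustration and do not come from any of the papers listed here.

```python
# Minimal sketch of few-shot prompt construction for text classification.
# The demonstrations and labels are illustrative placeholders, not real data.

def build_few_shot_prompt(examples, query):
    """Prepend labeled demonstrations to a classification query."""
    lines = ["Classify the sentiment of each sentence as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Sentence: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # Leave the final label blank for the model to complete.
    lines.append(f"Sentence: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

demos = [
    ("The translation was fluent and accurate.", "positive"),
    ("The summary omitted every key fact.", "negative"),
]
prompt = build_few_shot_prompt(demos, "The dialogue felt natural.")
print(prompt)
```

In practice the resulting string would be passed to an LLM's completion API; varying the instruction wording and the choice of demonstrations is the essence of prompt engineering for low-resource or specialized tasks.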
Papers
Large Language Models for Generative Information Extraction: A Survey
Derong Xu, Wei Chen, Wenjun Peng, Chao Zhang, Tong Xu, Xiangyu Zhao, Xian Wu, Yefeng Zheng, Yang Wang, Enhong Chen
Building Efficient Universal Classifiers with Natural Language Inference
Moritz Laurer, Wouter van Atteveldt, Andreu Casas, Kasper Welbers
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization
George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras
Assessing Translation capabilities of Large Language Models involving English and Indian Languages
Vandan Mujadia, Ashok Urlana, Yash Bhaskar, Penumalla Aditya Pavani, Kukkapalli Shravya, Parameswari Krishnamurthy, Dipti Misra Sharma
GENEVA: GENErating and Visualizing branching narratives using LLMs
Jorge Leandro, Sudha Rao, Michael Xu, Weijia Xu, Nebojsa Jojic, Chris Brockett, Bill Dolan
Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting
Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, Jilin Chen
Using GPT-4 to Augment Unbalanced Data for Automatic Scoring
Luyang Fang, Gyeong-Geon Lee, Xiaoming Zhai