Large-Scale Language Models
Large-scale language models (LLMs) are AI systems trained to understand and generate human-like text across a wide range of natural language processing tasks. Current research focuses on making LLMs more efficient through techniques such as iterative refinement, hierarchical architectures, and model compression methods like quantization and pruning, and on making them more reliable by addressing issues such as hallucination. These advances are driving progress in diverse areas, including recommendation systems, mental health support, and legal document drafting, demonstrating the practical impact of LLMs across numerous applications.
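To make the compression idea concrete, the sketch below shows symmetric post-training int8 quantization of a single weight matrix, one of the simplest forms of the quantization mentioned above. It is a minimal illustration only: the function names, the per-tensor scaling choice, and the use of NumPy are assumptions for the example, not the method of any paper listed on this page.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of a float weight matrix to int8."""
    scale = np.abs(weights).max() / 127.0          # map the largest magnitude to the int8 range
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 matrix from the int8 weights and scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random "layer" and measure the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("mean absolute error:", np.abs(w - w_hat).mean())
```

Storing the int8 tensor plus a single float scale cuts memory roughly 4x relative to float32 weights; practical schemes refine this with per-channel or per-group scales and calibration data to limit the accuracy loss.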
Papers
Assessing Linguistic Generalisation in Language Models: A Dataset for Brazilian Portuguese
Rodrigo Wilkens, Leonardo Zilio, Aline Villavicencio
Enhancing Black-Box Few-Shot Text Classification with Prompt-Based Data Augmentation
Danqing Luo, Chen Zhang, Jiahui Xu, Bin Wang, Yiming Chen, Yan Zhang, Haizhou Li