Natural Language Model
Natural language models (NLMs) are computational systems designed to understand, process, and generate human language. Current research focuses on improving their capabilities across diverse tasks, including information extraction from specialized texts (e.g., medical guidelines), idiom processing, and cross-modal applications (e.g., integrating speech and text). Key architectures include transformers and their variants, which typically rely on pre-training over massive datasets and techniques such as parameter-efficient fine-tuning to improve performance and efficiency on downstream applications in fields ranging from healthcare to software engineering.
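As an illustration of the parameter-efficient fine-tuning mentioned above, the sketch below adapts a pretrained transformer with LoRA adapters using the Hugging Face `transformers` and `peft` libraries. The checkpoint name, target modules, and hyperparameters are placeholder assumptions for a generic classification task, not details drawn from the papers themselves.

```python
# Minimal sketch: parameter-efficient fine-tuning of a pretrained transformer
# with LoRA adapters (Hugging Face `transformers` + `peft`).
# Model name and hyperparameters below are illustrative placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bert-base-uncased"  # any pretrained encoder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model so only small low-rank adapter matrices are trained;
# the original pretrained weights stay frozen.
lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor for the update
    target_modules=["query", "value"],   # attention projections to adapt (BERT naming)
    lora_dropout=0.1,
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Training then proceeds with a standard fine-tuning loop (or the `transformers` Trainer) over the wrapped model; because only the adapter weights receive gradients, memory and storage costs per downstream task are a small fraction of full fine-tuning.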