Natural Language Text
Natural language text processing aims to enable computers to understand, generate, and manipulate human language, bridging the gap between human communication and machine intelligence. Current research relies heavily on large language models (LLMs) and transformer architectures, targeting tasks such as information extraction, data-to-text generation, and cross-lingual understanding, and often incorporates prompting strategies and graph-based methods to improve accuracy and interpretability. These advances have significant implications across fields, from improving search engines and automating knowledge graph construction to making clinical data analysis more efficient.
Papers
A Generic Method for Fine-grained Category Discovery in Natural Language Texts
Chang Tian, Matthew B. Blaschko, Wenpeng Yin, Mingzhe Xing, Yinliang Yue, Marie-Francine Moens
MMUTF: Multimodal Multimedia Event Argument Extraction with Unified Template Filling
Philipp Seeberger, Dominik Wagner, Korbinian Riedhammer