Sequence Labeling

Sequence labeling, a core task in natural language processing, assigns a label to each element of a sequence, such as each word in a sentence; named entity recognition and part-of-speech tagging are canonical examples. Current research emphasizes improving performance in low-resource settings through techniques such as cross-lingual transfer learning, parameter-efficient fine-tuning (e.g., hypernetworks and LoRA), and the incorporation of external knowledge sources. These advances, together with the exploration of architectures such as transformer-based models and graph neural networks, are driving progress in applications including biomedical named entity recognition, financial information extraction, and even the analysis of ancient languages.
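
As a concrete illustration of the task itself, the sketch below frames sequence labeling as per-token classification: a toy BiLSTM tagger predicts one BIO-style tag for every token in a sentence. The tag set, vocabulary size, and model dimensions are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal sequence-labeling sketch: per-token classification with a BiLSTM.
# All names, sizes, and the tag set here are assumptions for illustration only.
import torch
import torch.nn as nn

TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # assumed BIO tag set

class BiLSTMTagger(nn.Module):
    """Embeds each token, contextualizes it with a BiLSTM, and predicts one tag per token."""
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):      # token_ids: (batch, seq_len)
        x = self.embed(token_ids)      # (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)            # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(h)      # (batch, seq_len, num_tags), one score vector per token

model = BiLSTMTagger(vocab_size=1000, num_tags=len(TAGS))
tokens = torch.randint(0, 1000, (1, 6))          # one toy sentence of 6 token ids
gold = torch.randint(0, len(TAGS), (6,))         # one gold tag per token (random here)
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.view(-1, len(TAGS)), gold)  # per-token cross-entropy
print(logits.argmax(-1))                          # predicted tag index for each token
```

The same per-token classification head applies unchanged when the BiLSTM encoder is swapped for a pretrained transformer, which is the usual setup in the transfer-learning and parameter-efficient fine-tuning work mentioned above.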

Papers