Token Level
Token-level analysis in large language models (LLMs) examines the individual units of text a model processes and how each contributes to overall model behavior and performance. Current research investigates token dynamics across architectures such as transformers and state space models, exploring techniques like token caching, selective training, and retrieval augmentation to improve efficiency and accuracy. This granular approach underpins progress in diverse applications, from machine translation and gene expression prediction to bias mitigation and robustness against adversarial attacks, and the resulting insights are driving advances in model training, optimization, and interpretability.
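As a concrete, simplified illustration of what token-level analysis can look like in practice, the sketch below scores each token of a sentence by its log-probability under a small causal language model. The choice of GPT-2 and the Hugging Face transformers API is an assumption made purely for illustration; it is not drawn from the papers listed below.

```python
# Minimal sketch: per-token log-probabilities under a causal LM.
# Assumes GPT-2 via Hugging Face transformers, chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Token-level analysis looks at individual units of text."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, vocab_size)

# Position i predicts token i+1, so shift logits and targets by one,
# then gather each target token's log-probability.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
targets = enc["input_ids"][:, 1:]
token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

for tok_id, lp in zip(targets[0], token_log_probs[0]):
    print(f"{tokenizer.decode(tok_id.item()):>15s}  log-prob = {lp.item():.3f}")
```

Per-token scores of this kind are one common starting point for inspecting which tokens a model finds surprising, which in turn informs token-level decisions such as caching, selective training, or debiasing.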
Papers
Exploring Optimal Transport-Based Multi-Grained Alignments for Text-Molecule Retrieval
Zijun Min, Bingshuai Liu, Liang Zhang, Jia Song, Jinsong Su, Song He, Xiaochen Bo
TriG-NER: Triplet-Grid Framework for Discontinuous Named Entity Recognition
Rina Carines Cabral, Soyeon Caren Han, Areej Alhassan, Riza Batista-Navarro, Goran Nenadic, Josiah Poon