Token Level
Token-level analysis in large language models (LLMs) focuses on understanding how individual units of text contribute to overall model behavior and performance. Current research investigates token dynamics across architectures such as transformers and state space models, exploring techniques like token caching, selective training, and retrieval augmentation to improve efficiency and accuracy. This granular approach is crucial for strengthening LLM capabilities in diverse applications, from improving machine translation and gene expression prediction to mitigating biases and hardening models against attacks. The insights gained are driving advances in model training, optimization, and interpretability.
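To make the idea concrete, here is a minimal sketch of one common form of token-level analysis: scoring each token of an input by its log-probability under a causal language model. The model checkpoint ("gpt2") and the example sentence are illustrative placeholders, not taken from any of the papers below.

```python
# Minimal sketch: per-token log-probability analysis with a causal LM.
# Assumes the Hugging Face transformers and torch packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Token-level analysis attributes behavior to individual tokens."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Position i predicts token i+1, so shift logits and targets accordingly,
# then gather the log-probability assigned to each actual next token.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = inputs["input_ids"][:, 1:]
token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

for tok_id, lp in zip(targets[0], token_log_probs[0]):
    print(f"{tokenizer.decode(tok_id)!r:>15}  log-prob = {lp.item():.3f}")
```

Per-token scores like these underlie many of the techniques mentioned above, for example deciding which tokens to cache, to emphasize during selective training, or to flag as surprising for retrieval or robustness analysis.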
Papers
TokenCompose: Text-to-Image Diffusion with Token-level Supervision
Zirui Wang, Zhizhou Sha, Zheng Ding, Yilin Wang, Zhuowen Tu
PneumoLLM: Harnessing the Power of Large Language Model for Pneumoconiosis Diagnosis
Meiyue Song, Zhihua Yu, Jiaxin Wang, Jiarui Wang, Yuting Lu, Baicun Li, Xiaoxu Wang, Qinghua Huang, Zhijun Li, Nikolaos I. Kanellakis, Jiangfeng Liu, Jing Wang, Binglu Wang, Juntao Yang