Lexical Semantic Family
Lexical semantic families are groups of words linked by semantic relatedness, a concept central to understanding language processing and generation. Current research leverages these families to improve the efficiency and interpretability of large language models (LLMs), for example by enabling parallel decoding of semantically linked word groups or by analyzing how attention mechanisms weight words from different semantic categories. This work has implications for LLM performance on tasks such as text summarization, dialogue modeling, and metaphor understanding, and for building more robust and explainable AI systems.
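One common way to operationalize a lexical semantic family is to group words whose embedding vectors lie close together under cosine similarity. The sketch below is purely illustrative: the word vectors and the similarity threshold are hand-crafted assumptions, not drawn from any of the papers listed here; real systems would use embeddings from a trained distributional semantic model.

```python
from math import sqrt

# Toy word vectors, hand-crafted for illustration only (an assumption,
# not real model output). Semantically related words are given nearby
# vectors; unrelated words point in a different direction.
VECTORS = {
    "happy":  [0.90, 0.80, 0.10],
    "joyful": [0.85, 0.82, 0.15],
    "glad":   [0.80, 0.75, 0.20],
    "table":  [0.10, 0.20, 0.90],
    "chair":  [0.15, 0.10, 0.85],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_family(seed, threshold=0.95):
    """Collect words whose vectors are close to the seed word's vector.

    The 0.95 threshold is an arbitrary illustrative choice; in practice
    it would be tuned on held-out similarity judgments.
    """
    seed_vec = VECTORS[seed]
    return sorted(w for w, v in VECTORS.items()
                  if cosine(seed_vec, v) >= threshold)

print(semantic_family("happy"))  # → ['glad', 'happy', 'joyful']
```

With these toy vectors, the seed word "happy" pulls in "joyful" and "glad" but excludes "table" and "chair", mirroring how distributional semantic models recover synonym-like groupings from vector geometry.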
Papers
A Tale of Two Laws of Semantic Change: Predicting Synonym Changes with Distributional Semantic Models
Bastien Liétard, Mikaela Keller, Pascal Denis
Does Conceptual Representation Require Embodiment? Insights From Large Language Models
Qihui Xu, Yingying Peng, Samuel A. Nastase, Martin Chodorow, Minghua Wu, Ping Li