Topic Embeddings
Topic embeddings represent textual units such as words, documents, or whole topics as vectors in a continuous space, aiming to capture semantic relationships and support downstream tasks like topic modeling and stance detection. Current research focuses on improving the quality and generalizability of these embeddings, exploring techniques such as contrastive learning, box embeddings, and user interaction to refine topic representations within neural topic models (NTMs) and other architectures. This work is significant because improved topic embeddings strengthen numerous natural language processing applications, including information retrieval, summarization, and social media analysis.
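As a minimal illustration of the core idea (not taken from any of the papers below), the sketch here forms each topic embedding as the weighted average of word embeddings under that topic's word distribution, one common construction in neural topic models, and then scores a document against the topics by cosine similarity. All vocabulary, embeddings, and weights are invented toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word embeddings; in practice these are learned jointly by an NTM
# or taken from a pretrained model (word2vec, BERT, etc.).
vocab = ["economy", "market", "goal", "match", "election", "vote"]
word_emb = rng.normal(size=(len(vocab), 8))        # shape (V, d)

# Hypothetical topic-word distributions (rows sum to 1), e.g. an NTM decoder's output.
topic_word = np.array([
    [0.4, 0.4, 0.0, 0.0, 0.1, 0.1],   # a "finance" topic
    [0.0, 0.1, 0.4, 0.5, 0.0, 0.0],   # a "sports" topic
    [0.1, 0.0, 0.0, 0.0, 0.5, 0.4],   # a "politics" topic
])

# Topic embedding = expectation of word embeddings under the topic distribution.
topic_emb = topic_word @ word_emb                  # shape (K, d)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embed a document as the average embedding of its bag of words,
# then rank topics by similarity in the shared space.
doc_counts = np.array([2.0, 1.0, 0.0, 0.0, 1.0, 1.0])
doc_emb = doc_counts @ word_emb / doc_counts.sum()
scores = [cosine(doc_emb, t) for t in topic_emb]
print("topic scores:", scores)
```

Placing topics, words, and documents in one embedding space is what enables similarity-based operations such as retrieving documents for a topic or detecting a document's stance toward it.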
Papers
Factored Attention and Embedding for Unstructured-view Topic-related Ultrasound Report Generation
Fuhai Chen, Rongrong Ji, Chengpeng Dai, Xuri Ge, Shengchuang Zhang, Xiaojing Ma, Yue Gao
Neural Topic Modeling with Deep Mutual Information Estimation
Kang Xu, Xiaoqiu Lu, Yuan-fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou