Label Embeddings
Label embeddings represent categorical targets, such as the labels in classification tasks, as dense vectors in a continuous space, with the aim of capturing semantic relationships between labels and improving model performance. Current research focuses on combining label embeddings with architectures such as contrastive learning frameworks, transformers, and graph convolutional networks to improve tasks like multi-label classification, hierarchical text classification, and anomaly detection. By exploiting the semantic information encoded in the label space, this line of work yields gains in accuracy, efficiency, and interpretability, and contributes to more robust models in applications ranging from medical image analysis to natural language processing.
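To make the core idea concrete, the sketch below shows one common way label embeddings are used: each label gets a learned dense vector, and an input is scored against every label by comparing its encoded representation with those vectors. This is a minimal illustrative example, not the method of any specific paper listed here; the class name, dimensions, and the simple linear encoder are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class LabelEmbeddingClassifier(nn.Module):
    """Illustrative sketch: score each label by the dot product between an
    encoded input and a learned dense label embedding (hypothetical model)."""

    def __init__(self, input_dim: int, num_labels: int, embed_dim: int = 128):
        super().__init__()
        # Project instance features into the shared embedding space.
        self.encoder = nn.Linear(input_dim, embed_dim)
        # One dense, trainable vector per label.
        self.label_embeddings = nn.Embedding(num_labels, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                   # (batch, embed_dim)
        E = self.label_embeddings.weight      # (num_labels, embed_dim)
        return z @ E.T                        # logits: (batch, num_labels)

# Multi-label training applies a per-label binary loss to the logits.
model = LabelEmbeddingClassifier(input_dim=300, num_labels=20)
x = torch.randn(8, 300)                       # dummy batch of instance features
y = torch.randint(0, 2, (8, 20)).float()      # dummy multi-hot targets
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
```

Because labels are points in the same continuous space as the inputs, semantically related labels can end up with similar vectors, which is what the approaches above exploit, for example by initializing the embedding table from word vectors or refining it with contrastive objectives or graph convolutions over a label hierarchy.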
Papers
Inferring Prototypes for Multi-Label Few-Shot Image Classification with Word Vector Guided Attention
Kun Yan, Chenbin Zhang, Jun Hou, Ping Wang, Zied Bouraoui, Shoaib Jameel, Steven Schockaert
Gaussian Mixture Variational Autoencoder with Contrastive Learning for Multi-Label Classification
Junwen Bai, Shufeng Kong, Carla P. Gomes