Slot Filling
Slot filling, a core task in natural language understanding, aims to extract relevant entities (slots) from text or speech, often jointly with intent detection. Current research emphasizes robustness to noisy input (e.g., from automatic speech recognition), zero-shot and few-shot generalization across domains, and model efficiency for deployment on resource-constrained devices. Approaches include large language models (LLMs), transformer-based architectures, and techniques such as contrastive learning and multi-task learning, often combined with attention mechanisms and graph-based methods to capture the interdependencies between intents and slots. Advances in slot filling directly improve the accuracy and reliability of conversational AI systems in real-world applications.
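The joint setup described above is commonly framed as multi-task learning: a shared encoder produces token representations, one head classifies the utterance-level intent, and another tags each token with a BIO slot label. The sketch below (PyTorch, not taken from any of the listed papers; all class names, dimensions, and label counts are illustrative assumptions) shows this structure with a combined cross-entropy loss.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Minimal joint model: a shared encoder feeds an utterance-level
    intent classifier and a token-level slot (BIO) tagger.
    Hypothetical sketch; sizes and names are placeholders."""

    def __init__(self, vocab_size, num_intents, num_slot_labels,
                 embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(hidden_dim, num_intents)    # utterance-level
        self.slot_head = nn.Linear(hidden_dim, num_slot_labels)  # token-level

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embedding(token_ids))   # (B, T, H)
        intent_logits = self.intent_head(hidden.mean(dim=1))  # pooled utterance repr.
        slot_logits = self.slot_head(hidden)                   # per-token BIO logits
        return intent_logits, slot_logits


# Toy usage: batch of 2 utterances, 6 tokens each (ids and labels are random).
model = JointIntentSlotModel(vocab_size=1000, num_intents=5, num_slot_labels=9)
tokens = torch.randint(1, 1000, (2, 6))
intent_logits, slot_logits = model(tokens)
# Multi-task loss: cross-entropy on the intent plus cross-entropy over slot tags.
loss = (nn.functional.cross_entropy(intent_logits, torch.tensor([0, 3]))
        + nn.functional.cross_entropy(slot_logits.reshape(-1, 9),
                                      torch.randint(0, 9, (2 * 6,))))
loss.backward()
```

The papers listed below build on this basic joint formulation, for example by replacing the simple shared encoder with graph-based or attention-based interaction between the intent and slot predictions.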
Papers
Co-guiding Net: Achieving Mutual Guidances between Multiple Intent Detection and Slot Filling via Heterogeneous Semantics-Label Graphs
Bowen Xing, Ivor W. Tsang
Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling
Bowen Xing, Ivor W. Tsang
Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Kalpa Gunaratna, Vijay Srinivasan, Akhila Yerukola, Hongxia Jin