Semantic Parsing
Semantic parsing aims to translate natural language into formal, structured representations, enabling computers to understand and act on human instructions. Current research focuses on improving the accuracy and robustness of semantic parsers, particularly with large language models and sequence-to-sequence architectures, often augmented with techniques such as in-context learning and grammar constraints to handle ambiguity and improve generalization. The field is crucial for bridging the gap between human language and machine action, with applications ranging from question answering and database querying to controlling robots and other intelligent systems. Ongoing efforts address challenges such as complex queries, diverse data sources, and cross-lingual transfer.
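To make the overview concrete, below is a minimal, hypothetical sketch of two of the techniques mentioned above: assembling a few-shot (in-context learning) prompt for utterance-to-SQL semantic parsing, and checking a candidate parse against a simple schema constraint before accepting it. The schema, exemplars, and helper names (build_prompt, violates_schema) are illustrative assumptions, not taken from any of the papers listed below.

```python
import re

# Hypothetical target schema that acts as the "grammar" a parse must respect.
SCHEMA = {"employees": {"name", "salary", "dept"}}

# Hand-written utterance -> SQL exemplars for in-context learning.
EXEMPLARS = [
    ("list all employee names", "SELECT name FROM employees"),
    ("who earns more than 50000", "SELECT name FROM employees WHERE salary > 50000"),
]

def build_prompt(utterance: str) -> str:
    """Format the exemplars plus the new utterance as a few-shot prompt."""
    lines = [
        "Translate the question into SQL over the given schema.",
        f"Schema: employees({', '.join(sorted(SCHEMA['employees']))})",
        "",
    ]
    for question, sql in EXEMPLARS:
        lines += [f"Q: {question}", f"SQL: {sql}", ""]
    lines += [f"Q: {utterance}", "SQL:"]
    return "\n".join(lines)

def violates_schema(sql: str) -> bool:
    """Flag parses that mention tables or columns outside the schema."""
    tables = set(re.findall(r"FROM\s+(\w+)", sql, re.IGNORECASE))
    columns = set(re.findall(r"SELECT\s+(\w+)", sql, re.IGNORECASE))
    allowed_columns = set().union(*SCHEMA.values())
    return not (tables <= set(SCHEMA) and columns <= allowed_columns)

if __name__ == "__main__":
    # The prompt would be sent to a sequence-to-sequence model or LLM; the
    # candidate SQL below stands in for the model's prediction.
    print(build_prompt("show the departments of all employees"))
    candidate = "SELECT dept FROM employees"
    print("constraint violation:", violates_schema(candidate))
```

In practice the constraint check would be far richer (a full grammar or execution-guided decoding rather than regex matching), but the same pattern applies: generate with in-context exemplars, then filter or re-rank candidates that violate the target representation's constraints.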
Papers
Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing
Shufan Wang, Sebastien Jean, Sailik Sengupta, James Gung, Nikolaos Pappas, Yi Zhang
The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing
Debayan Banerjee, Pranav Ajit Nair, Ricardo Usbeck, Chris Biemann
Towards Zero-Shot Frame Semantic Parsing with Task Agnostic Ontologies and Simple Labels
Danilo Ribeiro, Omid Abdar, Jack Goetz, Mike Ross, Annie Dong, Kenneth Forbus, Ahmed Mohamed
From Parse-Execute to Parse-Execute-Refine: Improving Semantic Parser for Complex Question Answering over Knowledge Base
Wangzhen Guo, Linyin Luo, Hanjiang Lai, Jian Yin