Semantic Parsing
Semantic parsing aims to translate natural language into formal, structured representations so that computers can understand and act on human instructions. Current research focuses on improving the accuracy and robustness of semantic parsers, particularly through large language models and sequence-to-sequence architectures, often augmented with techniques such as in-context learning and grammar constraints to handle ambiguity and improve generalization. The field is crucial for bridging the gap between human language and machine action, with applications ranging from question answering and database querying to controlling robots and other intelligent systems. Ongoing efforts address challenges such as complex queries, diverse data sources, and cross-lingual transfer.
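The paragraph above mentions in-context learning and grammar constraints as common ingredients of LLM-based parsers. The sketch below illustrates that combination on a toy text-to-SQL task: a few-shot prompt is assembled from exemplar question-SQL pairs, and the model's output is checked against a deliberately coarse structural constraint before being accepted. All exemplars, table names, and the `fake_llm` stand-in are hypothetical assumptions for illustration; they are not drawn from the papers listed below, and a real system would call an actual language model and enforce a full target grammar during decoding.

```python
"""Minimal sketch of few-shot (in-context learning) semantic parsing for text-to-SQL."""

import re

# Hypothetical few-shot exemplars pairing questions with their SQL logical forms.
EXEMPLARS = [
    ("How many employees are in the sales department?",
     "SELECT COUNT(*) FROM employees WHERE department = 'sales';"),
    ("List the names of customers from Berlin.",
     "SELECT name FROM customers WHERE city = 'Berlin';"),
]

PROMPT_TEMPLATE = (
    "Translate each question into a SQL query.\n\n"
    "{examples}\n"
    "Question: {question}\nSQL:"
)


def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt: exemplars first, then the new question."""
    examples = "\n".join(
        f"Question: {q}\nSQL: {sql}\n" for q, sql in EXEMPLARS
    )
    return PROMPT_TEMPLATE.format(examples=examples, question=question)


def is_well_formed(sql: str) -> bool:
    """Very coarse 'grammar constraint': accept only single SELECT statements.

    A production parser would instead enforce a full SQL grammar during decoding.
    """
    return re.fullmatch(
        r"SELECT\s+.+\s+FROM\s+\w+.*;",
        sql.strip(),
        flags=re.IGNORECASE | re.DOTALL,
    ) is not None


def parse(question: str, complete) -> str:
    """Run the injected language model and reject outputs that break the constraint."""
    prediction = complete(build_prompt(question)).strip()
    if not is_well_formed(prediction):
        raise ValueError(f"Model output is not a valid query: {prediction!r}")
    return prediction


if __name__ == "__main__":
    # Stand-in for an LLM call, so the sketch runs without any API access.
    fake_llm = lambda prompt: "SELECT COUNT(*) FROM orders WHERE status = 'open';"
    print(parse("How many orders are still open?", fake_llm))
```

In practice the output check would be replaced by constrained decoding, where the grammar restricts which tokens the model may emit at each step rather than rejecting completed parses after the fact.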
Papers
Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency
Maxwell Crouse, Ramon Astudillo, Tahira Naseem, Subhajit Chaudhury, Pavan Kapanipathi, Salim Roukos, Alexander Gray
Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding
Parker Glenn, Parag Pravin Dakle, Preethi Raghavan