Published on Tue Dec 25 2018

Building a Neural Semantic Parser from a Domain Ontology

Jianpeng Cheng, Siva Reddy, Mirella Lapata

Semantic parsing is the task of converting natural language utterances into machine-interpretable meaning representations. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad-coverage training data cheaply, and generalizing to compositional utterances. We address these challenges with a framework which allows us to elicit training data from a domain ontology.

Abstract

Semantic parsing is the task of converting natural language utterances into machine-interpretable meaning representations which can be executed against a real-world environment such as a database. Scaling semantic parsing to arbitrary domains faces two interrelated challenges: obtaining broad-coverage training data effectively and cheaply; and developing a model that generalizes to compositional utterances and complex intentions. We address these challenges with a framework which allows us to elicit training data from a domain ontology and bootstrap a neural parser which recursively builds derivations of logical forms. In our framework, meaning representations are described by sequences of natural language templates, where each template corresponds to a decomposed fragment of the underlying meaning representation. Although artificial, templates can be understood and paraphrased by humans to create natural utterances, resulting in parallel triples of utterances, meaning representations, and their decompositions. These allow us to train a neural semantic parser which learns to compose rules in deriving meaning representations. We crowdsource training data on six domains, covering both single-turn utterances which exhibit rich compositionality, and sequential utterances where a complex task is procedurally performed in steps. We then develop neural semantic parsers which perform such compositional tasks. In general, our approach allows us to deploy neural semantic parsers quickly and cheaply from a given domain ontology.
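The elicitation idea in the abstract can be made concrete with a small sketch: each fragment of a meaning representation is paired with a natural-language template, and composing the fragments yields both a full logical form and a canonical (artificial) utterance that crowd workers then paraphrase. The domain, predicate names, and composition scheme below are illustrative assumptions, not the paper's actual formalism.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """A decomposed piece of a meaning representation, paired with
    the natural-language template that describes it."""
    logical_form: str
    template: str

def compose(fragments):
    """Compose fragment logical forms into a full derivation and
    join their templates into a canonical utterance."""
    lf = ".".join(f.logical_form for f in fragments)
    canonical = " ".join(f.template for f in fragments)
    return lf, canonical

# Hypothetical calendar-domain example (names are invented):
fragments = [
    Fragment("find(meeting)", "find all meetings"),
    Fragment("filter(attendee = X)", "whose attendee is X"),
]

lf, canonical = compose(fragments)
print(lf)         # find(meeting).filter(attendee = X)
print(canonical)  # find all meetings whose attendee is X
```

A crowd worker would paraphrase the canonical utterance into a natural one (e.g. "which meetings is X attending?"), producing a parallel triple of utterance, meaning representation, and decomposition for training.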

Thu Apr 27 2017
NLP
Learning Structured Natural Language Representations for Semantic Parsing
We introduce a neural semantic parser that converts natural language utterances to intermediate representations. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from systematically motivated ones.
Tue Nov 14 2017
NLP
Learning an Executable Neural Semantic Parser
The paper describes a neural semantic parser that maps natural language onto logical forms which can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The generation process is modeled by structured recurrent neural networks.
Thu Apr 27 2017
NLP
Learning a Neural Semantic Parser from User Feedback
We present an approach to rapidly and easily build natural language interfaces to databases for new domains. To achieve this, we adapt neural sequence models to map utterances directly to SQL. These models are immediately deployed online to solicit feedback from real users and flag incorrect queries.
Thu Aug 23 2018
NLP
Weakly-supervised Neural Semantic Parsing with a Generative Ranker
Weakly-supervised semantic parsers are trained on utterance-denotation pairs. The task is challenging due to the large search space and the spuriousness of logical forms. We propose using a neurally-encoded lexicon to inject prior domain knowledge into the model.
Tue Nov 03 2020
NLP
Generating Synthetic Data for Task-Oriented Semantic Parsing with Hierarchical Representations
Modern conversational AI systems support natural language understanding for a wide variety of capabilities. State-of-the-art semantic parsers are trained using supervised learning with data labeled according to a hierarchical schema. In this work, we explore the possibility of generating synthetic data for neural semantic parsing.
Wed Jun 14 2017
Machine Learning
Transfer Learning for Neural Semantic Parsing
The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing.