{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:46.000977Z" }, "title": "A Globally Normalized Neural Model for Semantic Parsing", "authors": [ { "first": "Chenyang", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta", "location": {} }, "email": "chenyangh@ualberta.ca" }, { "first": "Wei", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "wei.yang@borealisai.com" }, { "first": "Yanshuai", "middle": [], "last": "Cao", "suffix": "", "affiliation": {}, "email": "yanshuai.cao@borealisai.com" }, { "first": "Osmar", "middle": [], "last": "Za\u00efane", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta", "location": {} }, "email": "zaiane@ualberta.ca" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a globally normalized model for context-free grammar (CFG)based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a globally normalized model for context-free grammar (CFG)based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing has received much interest in the NLP community (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Jia and Liang, 2016; Guo et al., 2020) . The task is to map a natural language utterance to executable code, such as \u03bb-expressions, SQL queries, and Python programs.", "cite_spans": [ { "start": 65, "end": 89, "text": "(Zelle and Mooney, 1996;", "ref_id": "BIBREF29" }, { "start": 90, "end": 120, "text": "Zettlemoyer and Collins, 2005;", "ref_id": "BIBREF31" }, { "start": 121, "end": 141, "text": "Jia and Liang, 2016;", "ref_id": "BIBREF9" }, { "start": 142, "end": 159, "text": "Guo et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent work integrates the context-free grammar (CFG) of the target code into the generation process. Instead of generating tokens of the code (Dong and Lapata, 2016) , CFG-based semantic parsing predicts the grammar rules in the abstract syntax tree (AST). 
Typically, neural semantic parsing models are trained by maximum likelihood estimation (MLE). The model predicts the probability of the next rule in an autoregressive fashion, and is therefore known as a locally normalized model. However, local normalization is often criticized for the label bias problem (Lafferty et al., 2001; Andor et al., 2016; Wiseman and Rush, 2016; Stanojević and Steedman, 2020). In semantic parsing, for example, grammar rules that generate identifiers (e.g., variable names) have much lower probability than other grammar rules. Thus, the model is biased towards rules that avoid predicting identifiers. More generally, a locally normalized model prefers early-step predictions that lead to low entropy in future steps.

In this work, we propose to apply global normalization to neural semantic parsing. Our model scores every grammar rule with an unbounded real value, instead of a probability, so that it does not have to avoid high-entropy predictions and does not suffer from label bias. Specifically, we use a max-margin loss for training, where the ground truth is treated as the positive sample and beam search results are treated as negative samples. In addition, we accelerate training by initializing the globally normalized model with the parameters of a pretrained locally normalized model.
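A minimal sketch of this kind of max-margin objective is given below. It is not the authors' exact implementation: `score_fn`, `gold_actions`, and `beam_hypotheses` are placeholder names, action sequences are assumed to be plain Python lists of rule ids, and `score_fn` is assumed to return the model's unnormalized score of a complete action sequence as a scalar tensor.

```python
import torch

def max_margin_loss(score_fn, gold_actions, beam_hypotheses, margin=1.0):
    """Hinge loss: the ground-truth derivation is the positive sample and
    beam-search results are the negative samples."""
    gold_score = score_fn(gold_actions)
    losses = []
    for hyp in beam_hypotheses:
        if hyp == gold_actions:            # the beam may recover the gold sequence
            continue
        # push the gold score above each negative by at least `margin`
        losses.append(torch.clamp(margin - gold_score + score_fn(hyp), min=0.0))
    if not losses:                         # every hypothesis matched the gold
        return torch.zeros_like(gold_score)
    return torch.stack(losses).mean()
```

Presumably the beam hypotheses are produced by running beam search with the current model during training, so the negatives track the model's own mistakes.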
We conduct experiments on three datasets: ATIS (Dahl et al., 1994), CoNaLa, and Spider (Yu et al., 2018). Compared with local normalization, our globally normalized model achieves higher performance on the small ATIS and CoNaLa datasets with a long short-term memory (LSTM) architecture, but does not yield improvement on the large Spider dataset when using a BERT-based pretrained language model.

2 Related Work

Early approaches to semantic parsing mainly rely on predefined templates and are domain-specific (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Kwiatkowksi et al., 2010). Later, researchers applied sequence-to-sequence models to semantic parsing. Dong and Lapata (2016) propose to generate tokens along the syntax tree of a program. Yin and Neubig (2017) generate a program by predicting the grammar rules; our work uses the TranX tool, which follows this framework.

Globally normalized models, such as the conditional random field (CRF; Lafferty et al., 2001), are able to mitigate the label bias problem. However, their training is generally difficult due to the global normalization process. To tackle this challenge, Daumé and Marcu (2005) propose learning as search optimization (LaSO), and Wiseman and Rush (2016) extend it to the neural network regime as beam search optimization (BSO). Specifically, they obtain negative partial samples whenever the ground truth falls out of the beam during the search, and "restart" the beam search with the ground-truth partial sequence teacher-forced.

Our work is similar to BSO. However, we search for an entire output and do not train with partial negative samples. This is because our decoder is tree-structured, and different partial trees cannot be batched efficiently. We instead perform locally normalized pretraining to ease the training of our globally normalized model.

3 Methodology

In this section, we first introduce the neural semantic parser TranX, which serves as the locally normalized base model in our work. We then elaborate on how to construct its globally normalized version.

3.1 The TranX Framework

TranX is a context-free grammar (CFG)-based neural semantic parsing system. TranX first encodes a natural language input X with a neural network encoder. Then, the model generates a program by predicting the grammar rules (also known as actions) along the abstract syntax tree (AST) of the program.

In Figure 1, for example, the rules generating the desired program include ApplyConstr(Expr.), ApplyConstr(Call), ApplyConstr(Attr.), and GenToken(sorted).

In TranX, these actions are predicted in an autoregressive way based on the input X and the partially generated tree, given by $P_L(a_t \mid a_{<t}, X)$, where $a_t$ is the action at step $t$ and $a_{<t}$ denotes the previously predicted actions.

[Table (caption only): BLEU score on the CoNaLa dataset.]
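To make the contrast between the two normalization schemes in Section 3.1 concrete, the following sketch (illustrative only; it assumes the decoder emits one vector of scores over the action vocabulary per step, as in a TranX-style LSTM decoder, and the variable names are ours) computes both the locally normalized log-likelihood and a globally normalized score for the same action sequence:

```python
import torch
import torch.nn.functional as F

def sequence_scores(step_logits, action_ids):
    """Score a complete action sequence under both normalization schemes.

    step_logits: (T, V) tensor of per-step scores over the action vocabulary,
                 produced by a decoder conditioned on X and the partial tree.
    action_ids:  (T,) tensor with the index of the action taken at each step.
    """
    rows = torch.arange(step_logits.size(0), device=step_logits.device)
    # locally normalized: sum_t log P_L(a_t | a_<t, X), via a per-step softmax
    local = F.log_softmax(step_logits, dim=-1)[rows, action_ids].sum()
    # globally normalized: sum of the raw, unbounded per-step scores (no softmax)
    global_ = step_logits[rows, action_ids].sum()
    return local, global_
```

The locally normalized score penalizes high-entropy steps (the softmax spreads probability mass over many rules), whereas the global score simply accumulates unnormalized values, which is what allows the model to avoid label bias.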