{ "paper_id": "P17-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:15:41.816545Z" }, "title": "A Syntactic Neural Model for General-Purpose Code Generation", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "pcyin@cs.cmu.edu" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "gneubig@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing datadriven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.", "pdf_parse": { "paper_id": "P17-1041", "_pdf_hash": "", "abstract": [ { "text": "We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing datadriven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Every programmer has experienced the situation where they know what they want to do, but do not have the ability to turn it into a concrete implementation. For example, a Python programmer may want to \"sort my list in descending order,\" but not be able to come up with the proper syntax sorted(my list, reverse=True) to realize his intention. To resolve this impasse, it is common for programmers to search the web in natural language (NL) , find an answer, and modify it into the desired form (Brandt et al., 2009 (Brandt et al., , 2010 . 
However, this is time-consuming, and thus the software engineering literature is rife with methods to directly generate code from NL descriptions, mostly with hand-engineered techniques highly tailored to specific programming languages (Balzer, 1985; Little and Miller, 2009; Gvero and Kuncak, 2015).", "cite_spans": [ { "start": 435, "end": 439, "text": "(NL)", "ref_id": null }, { "start": 494, "end": 514, "text": "(Brandt et al., 2009", "ref_id": "BIBREF11" }, { "start": 515, "end": 537, "text": "(Brandt et al., , 2010", "ref_id": "BIBREF10" }, { "start": 773, "end": 787, "text": "(Balzer, 1985;", "ref_id": "BIBREF6" }, { "start": 788, "end": 812, "text": "Little and Miller, 2009;", "ref_id": "BIBREF30" }, { "start": 813, "end": 836, "text": "Gvero and Kuncak, 2015)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In parallel, the NLP community has developed methods for data-driven semantic parsing, which attempt to map NL to structured logical forms executable by computers. These logical forms can be general-purpose meaning representations (Clark and Curran, 2007; Banarescu et al., 2013), formalisms for querying knowledge bases (Tang and Mooney, 2001; Zettlemoyer and Collins, 2005; Berant et al., 2013), and instructions for robots or personal assistants (Quirk et al., 2015; Misra et al., 2015), among others. While these methods have the advantage of being learnable from data, the domain-specific languages they target have a schema and syntax that is relatively simple compared to the programming languages (PLs) in use by programmers.", "cite_spans": [ { "start": 231, "end": 255, "text": "(Clark and Curran, 2007;", "ref_id": "BIBREF12" }, { "start": 256, "end": 279, "text": "Banarescu et al., 2013)", "ref_id": "BIBREF7" }, { "start": 322, "end": 345, "text": "(Tang and Mooney, 2001;", "ref_id": "BIBREF48" }, { "start": 346, "end": 376, "text": "Zettlemoyer and Collins, 2005;", "ref_id": "BIBREF54" }, { "start": 377, "end": 397, "text": "Berant et al., 2013)", "ref_id": "BIBREF9" }, { "start": 449, "end": 468, "text": "Quirk et al., 2015;", "ref_id": "BIBREF45" }, { "start": 469, "end": 488, "text": "Misra et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, Ling et al. (2016) proposed a data-driven code generation method for high-level, general-purpose PLs like Python and Java. This work treats code generation as a sequence-to-sequence modeling problem, and introduces methods to generate words from character-level models and to copy variable names from input descriptions. However, unlike most work in semantic parsing, it does not consider the fact that the generated code must form well-defined programs in the target syntax.", "cite_spans": [ { "start": 10, "end": 28, "text": "Ling et al. (2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose a data-driven syntax-based neural network model tailored for generation of general-purpose PLs like Python. In order to capture the strong underlying syntax of the PL, we define a model that transduces an NL statement into an Abstract Syntax Tree (AST; Fig. 1(a),", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 286, "text": "Fig. 1(a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u00a7 2) for the target PL. 
ASTs can be deterministically generated for all well-formed programs using standard parsers provided by the PL, and thus give us a way to obtain syntax information with minimal engineering. Once we generate an AST, we can use deterministic generation tools to convert the AST into surface code. Table 1 lists example production rules for common Python statements (Python Software Foundation, 2016). We hypothesize that such a structured approach has two benefits.", "cite_spans": [ { "start": 398, "end": 432, "text": "(Python Software Foundation, 2016)", "ref_id": null } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, we hypothesize that structure can be used to constrain our search space, ensuring generation of well-formed code. To this end, we propose a syntax-driven neural code generation model. The backbone of our approach is a grammar model (\u00a7 3) which formalizes the generation story of a derivation AST as the sequential application of actions that either apply production rules (\u00a7 3.1) or emit terminal tokens (\u00a7 3.2). The underlying syntax of the PL is therefore encoded in the grammar model a priori as the set of possible actions. Our approach frees the model from recovering the underlying grammar from limited training data, and instead enables the system to focus on learning the compositionality among existing grammar rules. Xiao et al. (2016) have noted that this imposition of structure on neural models is useful for semantic parsing, and we expect this to be even more important for general-purpose PLs, where the syntax trees are larger and more complex.", "cite_spans": [ { "start": 736, "end": 754, "text": "Xiao et al. (2016)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, we hypothesize that structural information helps to model information flow within the neural network, which naturally reflects the recursive structure of PLs. To test this, we extend a standard recurrent neural network (RNN) decoder to allow for additional neural connections which reflect the recursive structure of an AST (\u00a7 4.2). As an example, when expanding the node ? in Fig. 1(a), we make use of the information from both its parent and left sibling (the dashed rectangle). This enables us to locally pass information about relevant code segments via neural network connections, resulting in more confident predictions.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 395, "text": "Fig. 1(a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments (\u00a7 5) on two Python code generation tasks show 11.7% and 9.3% absolute improvements in accuracy over the state-of-the-art system (Ling et al., 2016). Our model also gives competitive performance on a standard semantic parsing benchmark 1.", "cite_spans": [ { "start": 145, "end": 164, "text": "(Ling et al., 2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 Implementation available at https://github.com/neulab/NL2code", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given an NL description x, our task is to generate the code snippet c in a modern PL based on the intent of x. We attack this problem by first generating the underlying AST. 
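As a concrete illustration of the deterministic code-AST-code round trip for the running example, the following minimal sketch uses Python's standard ast module; it is an illustration only, not necessarily the exact toolchain used in this work, and ast.unparse requires Python 3.9+.

```python
import ast

# Parse the surface code of the running example into an AST using the
# standard parser that the PL itself provides.
snippet = 'sorted(my_list, reverse=True)'
tree = ast.parse(snippet, mode='eval')
call = tree.body  # the Call node

# The Call node has a func child (expr), positional args (expr*), and
# keyword arguments (keyword*), mirroring the abstract grammar's production.
print(ast.dump(call))

# The reverse direction is deterministic as well: regenerate surface code
# from the AST (available in the standard library since Python 3.9).
print(ast.unparse(call))  # -> sorted(my_list, reverse=True)
```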
We define a probabilistic grammar model of generating an AST y given x: p(y|x). The best-possible AST \u0177 is then given by \u0177 = arg max_y p(y|x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Code Generation Problem", "sec_num": "2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Code Generation Problem", "sec_num": "2" }, { "text": "y is then deterministically converted to the corresponding surface code c. 2 While this paper uses examples from Python code, our method is PL-agnostic. Before detailing our approach, we first present a brief introduction of the Python AST and its underlying grammar. The Python abstract grammar contains a set of production rules, and an AST is generated by applying several production rules, each composed of a head node and multiple child nodes. For instance, the first rule in Tab. 1 is used to generate the function call sorted(\u2022) in Fig. 1(a). It consists of a head node of type Call and three child nodes of types expr, expr*, and keyword*, respectively. The label of each node is noted within brackets. In an AST, non-terminal nodes sketch the general structure of the target code, while terminal nodes can be categorized into two types: operation terminals and variable terminals. Operation terminals correspond to basic arithmetic operations like AddOp. Variable terminal nodes store values for variables and constants of built-in data types 3. For instance, all terminal nodes in Fig. 1(a) are variable terminal nodes.", "cite_spans": [], "ref_spans": [ { "start": 532, "end": 541, "text": "Fig. 1(a)", "ref_id": "FIGREF1" }, { "start": 1082, "end": 1091, "text": "Fig. 1(a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Code Generation Problem", "sec_num": "2" }, { "text": "Before detailing our neural code generation method, we first introduce the grammar model at its core. Our probabilistic grammar model defines the generative story of a derivation AST. We factorize the generation process of an AST into the sequential application of actions of two types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Model", "sec_num": "3" }, { "text": "\u2022 APPLYRULE[r] applies a production rule r to the current derivation tree;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Model", "sec_num": "3" }, { "text": "\u2022 GENTOKEN[v] populates a variable terminal node by appending a terminal token v. Formally, under our grammar model, the probability of generating an AST y is factorized as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x) = \\prod_{t=1}^{T} p(a_t | x, a_{<t})", "eq_num": "(2)" } ], "section": "Grammar Model", "sec_num": "3" }, { "text": "... token is used to \"close\" the node. The grammar model then proceeds to the new frontier node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENTOKEN Actions", "sec_num": "3.2" }, { "text": "Terminal tokens can be generated from a predefined vocabulary, or be directly copied from the input NL. This is motivated by the observation that the input description often contains out-of-vocabulary (OOV) variable names or literal values that are directly used in the target code. For instance, in our running example the variable name my_list can be directly copied from the input at t_12. 
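To make the generative story concrete, the toy sketch below shows one way such an action trace could be represented and scored as the product in Eq. (2); the rule strings, the abridged trace, and the probabilities are hypothetical placeholders for exposition, not output of the model described here.

```python
import math

# Hypothetical (abridged) action trace for sorted(my_list, reverse=True);
# rule strings are simplified placeholders, not the full Python grammar.
actions = [
    ('APPLYRULE', 'expr -> Call(expr func, expr* args, keyword* keywords)'),
    ('APPLYRULE', 'expr -> Name(identifier id)'),
    ('GENTOKEN', 'sorted'),
    ('GENTOKEN', 'my_list'),  # copied directly from the input description
    ('GENTOKEN', 'reverse'),
    ('GENTOKEN', 'True'),
]

def log_p_ast(per_action_probs):
    # Eq. (2): p(y|x) = prod_t p(a_t | x, a_<t); accumulate in log space.
    return sum(math.log(p) for p in per_action_probs)

# The per-action probabilities p(a_t | x, a_<t) would come from the neural
# model of Section 4; here they are made-up numbers to show the factorization.
print(log_p_ast([0.9, 0.8, 0.7, 0.6, 0.8, 0.9]))
```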
We give implementation details in \u00a7 4.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENTOKEN Actions", "sec_num": "3.2" }, { "text": "We estimate action probabilities in Eq. (2) using attentional neural encoder-decoder models with an information flow structured by the syntax trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimating Action Probabilities", "sec_num": "4" }, { "text": "For an NL description x consisting of n words w_1, ..., w_n, the encoder computes a context-sensitive embedding h_i for each word w_i using a bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997), similar to the setting in (Bahdanau et al., 2014). See the supplementary materials for detailed equations.", "cite_spans": [ { "start": 189, "end": 223, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF18" }, { "start": 252, "end": 275, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.1" }, { "text": "The decoder uses an RNN to model the sequential generation process of an AST as defined in Eq. (2). Each action step in the grammar model naturally grounds to a time step in the decoder RNN. Therefore, the action sequence in Fig. 1(b) can be interpreted as unrolled RNN time steps, with solid arrows indicating RNN connections. The RNN maintains an internal state to track the generation process (\u00a7 4.2.1), which will then be used to compute action probabilities p(a_t | x, a_<t).
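A rough skeleton of this encoder-decoder setup is sketched below; the framework choice (PyTorch), module names, and dimensions are assumptions for illustration and do not correspond to the released implementation.

```python
import torch
import torch.nn as nn

class NLEncoder(nn.Module):
    # Bidirectional LSTM computing a context-sensitive embedding h_i per word w_i.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, word_ids):              # word_ids: (batch, n)
        h, _ = self.bilstm(self.embed(word_ids))
        return h                              # (batch, n, hidden_dim)

class DecoderStep(nn.Module):
    # One decoder RNN time step, grounded to one grammar action a_t.
    def __init__(self, action_embed_dim=128, hidden_dim=256):
        super().__init__()
        # Besides the previous action's embedding, the input concatenates the
        # hidden state of the parent action's time step, reflecting AST structure.
        self.cell = nn.LSTMCell(action_embed_dim + hidden_dim, hidden_dim)

    def forward(self, prev_action_embed, parent_state, state):
        rnn_input = torch.cat([prev_action_embed, parent_state], dim=-1)
        return self.cell(rnn_input, state)    # new (h_t, c_t)
```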