{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:07.366492Z"
},
"title": "Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Orlanski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute",
"location": {}
},
"email": ""
},
{
"first": "Alex",
"middle": [],
"last": "Gittens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute",
"location": {}
},
"email": "gittea@rpi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Answering a programming question using only its title is difficult as salient contextual information is omitted. Based on this observation, we present a corpus of over 40,000 StackOverflow question texts to be used in conjunction with their corresponding intents from the CoNaLa dataset (Yin et al., 2018). Using both the intent and question body, we use BART to establish a baseline BLEU score of 34.35 for this new task. We find further improvements of 2.8% by combining the mined CoNaLa data with the labeled data to achieve a 35.32 BLEU score. We evaluate prior stateof-the-art CoNaLa models with this additional data and find that our proposed method of using the body and mined data beats the BLEU score of the prior state-of-the-art by 71.96%. Finally, we perform ablations to demonstrate that BART is an unsupervised multimodal learner and examine its extractive behavior. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Answering a programming question using only its title is difficult as salient contextual information is omitted. Based on this observation, we present a corpus of over 40,000 StackOverflow question texts to be used in conjunction with their corresponding intents from the CoNaLa dataset (Yin et al., 2018). Using both the intent and question body, we use BART to establish a baseline BLEU score of 34.35 for this new task. We find further improvements of 2.8% by combining the mined CoNaLa data with the labeled data to achieve a 35.32 BLEU score. We evaluate prior stateof-the-art CoNaLa models with this additional data and find that our proposed method of using the body and mined data beats the BLEU score of the prior state-of-the-art by 71.96%. Finally, we perform ablations to demonstrate that BART is an unsupervised multimodal learner and examine its extractive behavior. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of semantic parsing is to translate a Natural Language(NL) utterance to its logical components. There is a large body of research on applying semantic parsing for source code generation in a multitude of domain specific languages such as lambda calculus and SQL (Dahl et al., 1994; Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Ling et al., 2016; Xiao et al., 2016; Rabinovich et al., 2017; Dong and Lapata, 2018; Guo et al., 2019; Hwang et al., 2019; Tabassum et al., 2020 ). However, the task of translating an NL utterance to a general-purpose programming language has proven to be more challenging. A significant issue contributing to this is the difficulty in acquiring quality data due to the necessary domain knowledge needed in the annotation process.",
"cite_spans": [
{
"start": 271,
"end": 290,
"text": "(Dahl et al., 1994;",
"ref_id": "BIBREF3"
},
{
"start": 291,
"end": 314,
"text": "Zelle and Mooney, 1996;",
"ref_id": "BIBREF35"
},
{
"start": 315,
"end": 345,
"text": "Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF36"
},
{
"start": 346,
"end": 364,
"text": "Ling et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 365,
"end": 383,
"text": "Xiao et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 384,
"end": 408,
"text": "Rabinovich et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 409,
"end": 431,
"text": "Dong and Lapata, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 432,
"end": 449,
"text": "Guo et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 450,
"end": 469,
"text": "Hwang et al., 2019;",
"ref_id": null
},
{
"start": 470,
"end": 491,
"text": "Tabassum et al., 2020",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite this, the past few years have seen a large number of datasets released for different text-to- Figure 1 : Overview of our approach. From the combined annotated + mined set, we concatenate the intent and question body for inputs to BART (Lewis et al., 2020) and use beam search for generation. code related tasks (Ling et al., 2016; Yu et al., 2018; Lu et al., 2021) . Some datasets such as CodeSearchNet (Husain et al., 2019) contain snippets from a multitude of different languages. Others focus on distinct tasks within a specific language, such as JuICe (Agashe et al., 2019) , which contains executable Python programming assignments. Utilizing these corpora, prior works (Suhr et al., 2018; Neubig, 2017, 2018; Sun et al., 2019; Hayati et al., 2018; Yin and Neubig, 2019; have found success with a large variety of model architectures. These methods, however, struggle with domain agnostic open-ended code generation in general-purpose languages. One idea to combat this is to utilize large pretrained language models.",
"cite_spans": [
{
"start": 243,
"end": 263,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 319,
"end": 338,
"text": "(Ling et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 355,
"text": "Yu et al., 2018;",
"ref_id": "BIBREF34"
},
{
"start": 356,
"end": 372,
"text": "Lu et al., 2021)",
"ref_id": "BIBREF16"
},
{
"start": 411,
"end": 432,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 564,
"end": 585,
"text": "(Agashe et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 683,
"end": 702,
"text": "(Suhr et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 703,
"end": 722,
"text": "Neubig, 2017, 2018;",
"ref_id": null
},
{
"start": 723,
"end": 740,
"text": "Sun et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 741,
"end": 761,
"text": "Hayati et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 762,
"end": 783,
"text": "Yin and Neubig, 2019;",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transformers (Vaswani et al., 2017) have demonstrated that they can both be few-shot (Brown et al., 2020) and unsupervised multitask (Radford et al., 2019) learners. They have been successfully applied to programming language tasks. CodeBERT achieved strong performance on the CodeSearch-Net task through pretraining on bimodal NL comment and code pairs (Feng et al., 2020) , while Sun et al. (2019) used abstract syntax trees(AST) and transformers to achieve state of the art performance on the HearthStone benchmark (Ling et al., 2016) . Roziere et al. (2021) proposed the deobfuscation pretraining task to incorporate structural features of code into transformer models without the use of ASTs. More recently, Shin et al. (2021) explored the capabilities of large pretrained language models to be few-shot semantic parsers.",
"cite_spans": [
{
"start": 13,
"end": 35,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 85,
"end": 105,
"text": "(Brown et al., 2020)",
"ref_id": null
},
{
"start": 133,
"end": 155,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 354,
"end": 373,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 382,
"end": 399,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 518,
"end": 537,
"text": "(Ling et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 540,
"end": 561,
"text": "Roziere et al. (2021)",
"ref_id": "BIBREF19"
},
{
"start": 713,
"end": 731,
"text": "Shin et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Yet open-domain programming question answering on sites such as StackOverflow(SO) 2 has remained an elusive goal. created an annotated dataset with the site in which the intent and answer snippet pairs were automatically mined from the question. They then had crowd workers rewrite the intents to reflect the corresponding code better. Currently, state-of-the-art was achieved by pretraining an LSTM model on resampled API and mined data . Subsequent work conducted an empirical study on the effectiveness of using a code generation model in an IDE plugin and find that developers largely had favorable opinions of their experience (Xu et al., 2021) . An inherent issue with the approach of Xu et al. 2020, more fundamentally the dataset and parameters of the task, is that the intent can only contain a limited amount of information. Arriving at this answer from the intent \"add a new axis to array a\" requires not only the disambiguation of data types for variable a, but also the use of multiple distinct library-specific concepts. Further, this must be accomplished while maintaining syntactically correct code and proper order of arguments. However, neither the original title nor the rewritten intent contains the necessary information to accomplish this task. Although the previous state-of-the-art-model by uses abstract syntax trees (AST) to guarantee syntactically valid python code, it incorrectly generates a[(-1),:]=a. One potential remedy would be to increase the amount of training data, but as discussed previously, getting high-quality annotated code generation data is especially difficult.",
"cite_spans": [
{
"start": 632,
"end": 649,
"text": "(Xu et al., 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by the limitations to the amount of information a given intent can contain and the substantial difficulty involved with gathering more labeled data, we utilize the multimodal text from the question bodies provided by the StackExchange API 3 . We take advantage of the strong performances of transformer models to beat the previous state-of-the-art by 3.06 BLEU. We ensure a fair comparison by training the models from prior works with the extra data to adequately evaluate our proposed method. When all models are trained with the extra data, using BART beats the previous state of the art by 15.12 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Expanding upon the original CoNaLa dataset to include the multimodal textual question bodies and thus the pertinent contextual information they contain such as inputs, outputs, and required libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Demonstrating that BART does not rely on a single modality, but rather achieves its best performance on our dataset when all modalities are included. This indicates at least a shallow understanding of both natural and programming language as well as how they are related in the context of SO questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Conducting experiments revealing that BART's struggle to generate syntacically correct code is likely a result of its tendency to be extractive rather than generative in the task of text-to-code generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As detailed in Figure 1 , our overarching approach is to: (1) gather textual bodies from SO for both the annotated and mined examples in the CoNaLa corpus, (2) use the concatenated intents and question bodies as inputs for a large pretrained language model, and (3) use beam search to generate the answer code snippet.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
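To make steps (2) and (3) concrete, here is a minimal sketch of the generation side of this pipeline using HuggingFace's transformers. The intent/body strings and the plain-space separator are illustrative assumptions; the beam-search settings (four beams, early stopping, length penalty 0.9) follow the implementation details in subsection 3.4, and a fine-tuned checkpoint is assumed in place of the raw facebook/bart-base weights.

```python
# Sketch of the generation pipeline: concatenate the intent and question
# body, encode for BART, and decode an answer snippet with beam search.
# The example strings and separator are assumptions, not the released code.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

intent = "add a new axis to array a"            # illustrative intent
body = "I have a numpy ndarray a and want ..."  # illustrative question body

inputs = tokenizer(intent + " " + body, truncation=True,
                   max_length=512, return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], num_beams=4,
                            early_stopping=True, length_penalty=0.9,
                            max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```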
{
"text": "Every example e i \u2208 E from the CoNaLa dataset is comprised of an intent x i \u2208 X that concisely summarizes what the poster wants and a snippet of Python code y i \u2208 Y that represents an implementation of x i . Crowd sourcing was used to rewrite a selection of the mined intents to reflect the snippet better and to ensure that the snippet was indeed a correct answer. As discussed, these intents are limited in the amount of information they can contain. The intent \"add a new axis to array a\" from Figure 2 could refer to a wide variety of different Python objects. It could range from the default list to the Tensor object from PyTorch 4 . The full question, or either its tags or title, is typically enough for a human to disambiguate the correct library to use. But the annotated intent lacks this crucial information as it is rather difficult to design an annotation task for SO data. 5 We address this problem directly by using the additional data found in the SO question. In Figure 2 there are four direct mentions of the NumPy library: two in the question body and one each in both the tags and the title. Further, there is a direct mention of the ndarray data type from NumPy. It is, therefore, rather intuitive to include this additional data as input with the hope that it improves the answer generation performance. Although we did mention that both the tags and title provide salient information, the focus of this paper is only on using the noisy textual question bodies. Therefore, for every example e i the inputs now become the concatenation of x i and the body q x i \u2208 Q from the original SO question. It is important to note that |Q| = |E| as a single question can have many examples while every question is, by definition, unique.",
"cite_spans": [
{
"start": 888,
"end": 889,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 981,
"end": 989,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "StackOverflow Data",
"sec_num": "2.1"
},
{
"text": "Multiple modalities are present in the textual body of a given question. These can range from embedded images to messages from administrators (or upset users) stating that the question is a duplicate of some tangentially related post that does not have an answer. While these are useful to readers, we limit our focus to three modalities: code blocks, inline code, and NL. These modalities are marked in Figure 2 with blue, green, and red, respectively. Ideally, we would prefer to leave in the HTML tags to serve as sentinel tokens, but, looking at Figure 2 , one immediately finds that the poster forgot to mark _to_col as inline code. Therefore, we remove all HTML tags from the inputs, creating an unsupervised learning environ-ment. Therefore, we propose that a transformer model will learn each of the three modalities and learn to use the relationships between each. We use BART (Lewis et al., 2020) because its pretraining focuses on denoising textual data and, to the best of our knowledge, has minimal exposure to code examples. We used HuggingFace's (Wolf et al., 2020) BartForConditionalGeneration model which has a default BART encoder-decoder model with a linear layer and bias for outputs.",
"cite_spans": [
{
"start": 886,
"end": 906,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 1061,
"end": 1080,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 550,
"end": 558,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unsupervised Modality Learning",
"sec_num": "2.2"
},
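As a concrete illustration of the three modalities, the sketch below separates a question body's HTML into code blocks, inline code, and the remaining NL before the tags are stripped. The tag names reflect StackOverflow's markup (<pre><code> for blocks, bare <code> for inline code); BeautifulSoup and the helper itself are our illustrative assumptions, not part of the released pipeline.

```python
# Sketch: split a StackOverflow question body into the three modalities
# discussed above. StackOverflow renders code blocks as <pre><code> and
# inline code as bare <code>; this helper is illustrative.
from bs4 import BeautifulSoup

def split_modalities(body_html: str):
    soup = BeautifulSoup(body_html, "html.parser")
    blocks, inline = [], []
    for pre in soup.find_all("pre"):        # code blocks
        blocks.append(pre.get_text())
        pre.decompose()
    for code in soup.find_all("code"):      # remaining tags are inline code
        inline.append(code.get_text())
        code.decompose()
    nl = soup.get_text(" ", strip=True)     # what is left is natural language
    return blocks, inline, nl
```

Dropping one of the three returned parts before re-concatenating the input reproduces the style of modality ablation examined in section 4.2.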
{
"text": "We followed Xu et al. (2020) by using large amounts of the mined but not annotated data. Unlike Xu et al. 2020, however, we do not use this data for pretraining. Instead, we combine this data with the annotated data in our main training and validation sets. By adding more questions to the training set, we directly increase the probability that the model encounters a larger and more representative distribution of libraries. Intuitively, this will reduce the variances between experiments as we have reduced the dependency on the specific examples used in the training and validation sets. This variance reduction is especially useful when working with a small dataset such as CoNaLa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unlabeled Data",
"sec_num": "2.3"
},
{
"text": "CoNaLa 6 is an open domain text to code generation task constructed from SO questions. It has 2,879 7 annotated NL-code pairs with more than 590K mined pairs from over 40,000 unique SO questions in the dataset. StackOverflow Data For every unique question in both the annotated and mined sets, we gather additional data from the StackExchange API. As discussed in subsection 2.1, we only use the question body as input. Therefore the task is to generate a valid answer snippet from both the intent and the textual body. Detailed statistics for this dataset are given in Table 1 and Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 589,
"text": "Table 1 and Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
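A hedged sketch of the body-gathering step follows. The /questions endpoint, the site=stackoverflow parameter, the withbody filter, and the limit of 100 semicolon-separated ids per call are documented StackExchange API features; the helper name, error handling, and omission of API keys are our assumptions.

```python
# Sketch: fetch question bodies for CoNaLa question ids from the
# StackExchange API (https://api.stackexchange.com/). Batching and the
# `withbody` filter are documented API features; retries/keys are omitted.
import requests

def fetch_bodies(question_ids):
    bodies = {}
    for i in range(0, len(question_ids), 100):  # max 100 ids per request
        batch = ";".join(str(q) for q in question_ids[i:i + 100])
        resp = requests.get(
            "https://api.stackexchange.com/2.3/questions/" + batch,
            params={"site": "stackoverflow", "filter": "withbody"},
        )
        for item in resp.json().get("items", []):
            bodies[item["question_id"]] = item["body"]
    return bodies  # deleted questions are simply absent (see footnote 8)
```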
{
"text": "We removed 238 ( 10%) examples from the training set to form the validation set. We then followed Xu et al. (2020) in taking the top mined samples based on their given probability that the NL-Code pair is valid. However, we only used 10,000 samples rather than the 100,000 Xu et al. 2020used. From this, we remove 1000 for validation. 8 For all tests of our model with the mined data, we combine the two training and validation sets into one. Every experiment and test conducted in this work was conducting using Google's Colab Pro service. It afforded us the ability to use 512 input tokens with a batch size of 16. More importantly, we were able to use P100 and V100 graphics cards. Following that, we perform an ablation study using BART and the different components of our approach. Every ablation is run five separate times with different seeds and validation splits. For each test, the model with the lowest validation loss is used in the evaluation. Each test is run for ten epochs as we consistently observed overfitting after five to eight epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
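The split construction described above amounts to roughly the following sketch. The `prob` field is the validity probability shipped with the mined CoNaLa release; the function itself and its field names are illustrative assumptions.

```python
# Sketch of the data selection: hold out 238 annotated examples for
# validation, keep the 10,000 most probable mined pairs, and reserve
# 1,000 of those for validation. `prob` follows the mined CoNaLa data.
import random

def build_splits(annotated, mined, seed=0):
    rng = random.Random(seed)
    annotated = list(annotated)
    rng.shuffle(annotated)
    val, train = annotated[:238], annotated[238:]
    top = sorted(mined, key=lambda ex: ex["prob"], reverse=True)[:10000]
    rng.shuffle(top)
    val, train = val + top[:1000], train + top[1000:]
    return train, val
```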
{
"text": "Because we introduce new data at inference, we needed to ensure we fairly compare our methods with previous work. To this end, we run the prior works with the question bodies as inputs. However, for testing Xu et al. (2020) with the question bodies, we limited the amount of mined data in pretraining to 10,000 instead of 100,000. This was done due to Google Colab's execution time limits, as it took upwards of four hours for each run of Xu et al. (2020) with only 10,000 samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "8 Some questions were deleted from StackOverflow in both the annotated and mined sets, so we could not use those.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.2"
},
{
"text": "We measure the corpus level BLEU score of the generated code snippets with the same postprocessing methods and smoothing as Xu et al. 2020. We evaluate our ablations by comparing the corpus BLEU score and unigram, bigram, and trigram precision. Finally, we calculate the percentage of test examples for which our model generated a syntactically valid Python snippet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
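A sketch of the corpus-level metric, using NLTK's smoothed BLEU as an assumed stand-in for the exact CoNaLa scorer (which applies its own tokenization and post-processing):

```python
# Sketch: corpus-level BLEU over tokenized code snippets. NLTK's
# smoothing is an assumed stand-in for the official CoNaLa evaluator.
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu

def code_bleu(references, hypotheses):
    """references/hypotheses: lists of token lists."""
    smooth = SmoothingFunction().method3
    return 100 * corpus_bleu([[ref] for ref in references], hypotheses,
                             smoothing_function=smooth)
```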
{
"text": "For the previous state-of-the-art, we also report the Oracle BLEU proposed by Yin and Neubig (2019) . This is calculated by choosing the candidate snippet s i with the highest sentence level BLEU score out of n generated snippets. Formally, given the candidate list",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "Yin and Neubig (2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = [c 1 , . . . , c n ] and ground truth y i , z = argmax c j \u2208C BLEU(c j , y i )",
"eq_num": "(1)"
}
],
"section": "Metrics",
"sec_num": "3.3"
},
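Equation (1) translates directly into code. The sketch below uses NLTK's sentence-level BLEU as an assumed stand-in for the paper's exact BLEU variant; the selected candidates are then scored with corpus BLEU to obtain the Oracle BLEU.

```python
# Sketch of Oracle BLEU candidate selection (Equation 1): per example,
# keep the beam candidate with the highest sentence-level BLEU against
# the ground truth.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def oracle_candidates(candidate_lists, ground_truths):
    smooth = SmoothingFunction().method3
    return [
        max(cands,
            key=lambda c: sentence_bleu([y], c, smoothing_function=smooth))
        for cands, y in zip(candidate_lists, ground_truths)
    ]
```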
{
"text": "Furthermore, we want to quantify how much our model relies on the body of the question or \"cheats.\" To do this, we calculate the cheating for the generated snippet ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": "s i \u2208 [s 1 , . . . , s N ] = S and ground truth y i \u2208 [y 1 , . . . , y N ] = Y with respect to the input text b i \u2208 [b 1 , . . . , b N ] = B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C m (S) = i\u2208[1;N ] (m(s i , b i ) \u2212 m(y i , b i )) N",
"eq_num": "(2)"
}
],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": "If the model is heavily \"cheating\" from the input, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": "m(s i , b i ) m(y i , b i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
{
"text": ", which leads to a large C m . The quantity C m is, by design, similar to a standard mean squared error. The largest difference is that the difference is not squared, to facilitate distinguishing between less and more similar. For the metric function m, we use BLEU and ROUGE (Lin, 2004) . For the former, we take the bigram (C BB ) and trigram (C BT ) precision from BLEU. For ROUGE, we use bigram ROUGE (ROUGE-2/C R2 ) and the longest common subsequence (ROUGE-L/C RL ). The intuition behind using these metrics is that there is a high probability that unigram precision is large. The answers to a question must address the contents of the said question, leading to shared tokens between inputs and outputs. However, the probability should massively drop when considering multiple grams. Therefore, the similarity between n-grams when n > 1 should indicate the reliance on the inputs.",
"cite_spans": [
{
"start": 276,
"end": 287,
"text": "(Lin, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
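Equation (2) is straightforward to compute once the metric function m is fixed. The wrapper below is a minimal sketch in which `metric` stands in for any of the C_BB, C_BT, C_R2, or C_RL choices above; the helper name is our own.

```python
# Sketch of the cheating score C_m (Equation 2): the mean gap between
# how similar the generations are to the question body and how similar
# the ground truths are. `metric` is any m(x, b), e.g. n-gram precision
# or a ROUGE score; this wrapper is illustrative.
def cheating_score(snippets, truths, bodies, metric):
    gaps = [metric(s, b) - metric(y, b)
            for s, y, b in zip(snippets, truths, bodies)]
    return sum(gaps) / len(gaps)
```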
{
"text": "We implemented our model with Python and Hug-gingFace's transformer library (Wolf et al., 2020) 9 . We used a BART model with a linear layer and a separate bias for text generation. We utilized the smallest available BART model from FAIR, which was the Facebook/BART-base 10 . For training, we again rely on HuggingFace's trainer and their implementation of the learning rate scheduler. We used Adam (Loshchilov and Hutter, 2017) as our optimizer with a learning rate of 5e\u22125 and a linear learning rate scheduler. We also used a warmup ratio of 0.05. Finally, for generation, we used beam search with four beams, early stopping, and a length penalty of 0.9.",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 96,
"end": 97,
"text": "9",
"ref_id": null
},
{
"start": 400,
"end": 429,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.4"
},
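The reported configuration corresponds to roughly the following training step. The toy input pair and the step count are placeholders, and torch.optim.AdamW is used to match the Loshchilov and Hutter citation; this is a sketch, not the authors' exact training loop.

```python
# Sketch of the reported training setup: facebook/bart-base, lr 5e-5,
# linear schedule with a 0.05 warmup ratio. The single toy pair and
# total_steps are placeholders for a real DataLoader over ten epochs.
import torch
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          get_linear_schedule_with_warmup)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

src = tokenizer(["concatenate items of list l with a space ..."],
                return_tensors="pt", truncation=True, max_length=512)
tgt = tokenizer(["' '.join(map(str, l))"], return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
total_steps = 10  # placeholder; in practice steps_per_epoch * 10 epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.05 * total_steps),
    num_training_steps=total_steps)

loss = model(input_ids=src["input_ids"],
             attention_mask=src["attention_mask"],
             labels=tgt["input_ids"]).loss
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```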
{
"text": "We list the previous state-of-the-art BLEU scores for the CoNaLa dataset as well as the performance of our models in Table 3 . Using the intent and question bodies achieved a BLEU score of 34.35\u00b11.01. This was further increased to 35.32\u00b10.42 by including the mined data in the training and validation set. To better understand our model, we perform ablation tests and report their results in Table 4 . When comparing our top performance with the previous top performance, regardless of the data used, our model beats the previous state of the art by 3.40 BLEU, a 10.54% increase. Notably, our model outperforms the previous SoTA by 14.78 BLEU, a 71.96% increase when only comparing the experiments with the question body. Furthermore, BART with the mined data and question bodies beats their Oracle BLEU by 1.61 BLEU, translating to a 4.78% increase. However, it is important to note that Xu et al. (2020) outperforms our model by 1.71(5.30%) when we do not use the textual body. But they still both beat the baseline TranX by 25.72% and 7.98%, respectively. The use of the mined data further beat the reranker by 1.46%.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 392,
"end": 399,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The 71.96% increase is likely because TranX models were never intended to perform well with very noisy data, as evidenced by the 36% dropoff in corpus BLEU when adding the body to Xu et al. (2020). In choosing BART, we intentionally picked a transformer model designed for denoising (Lewis et al., 2020) . Further testing is likely needed to determine if our approach is heavily dependent on the underlying transformer, but that is beyond the scope of this paper.",
"cite_spans": [
{
"start": 283,
"end": 303,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Adding the body of the question objectively improved the performance of the model. The BLEU score increased 30.92% to 34.35 and, per Table 4 , there was an increase across unigram, bigram, and trigram precision. While they all do increase, the amount is far from constant. The unigram precision only saw a 3.61% increase, whereas bigram and trigram precision increased by 12.77% and 22.90%, respectively. This indicates that while the model selected slightly more correct tokens, it greatly improved its ordering of said tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Impact of adding the Question Body",
"sec_num": "4.1"
},
{
"text": "Similar improvements, albeit smaller in value, also occurred when including the mined data without the question bodies. However, there was a sharp drop in the standard deviations for the three precision metrics. In contrast, adding the question body resulted in a steep increase in variance. This is most probably a result of the \"shrinking\" of the dataset that occurred when we added the bodies. In Table 1 we report that every split of the dataset has fewer unique questions than it does examples. Also reported is that the number of tokens in the body is, on average, significantly greater than that of the intents. The effective dataset size is now much smaller, while the number of unique answer snippets stayed the same. The result is that the model now performs better on the difficult test set, at the cost of being more reliant on the training and validation split. Using both the bodies and mined",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of adding the Question Body",
"sec_num": "4.1"
},
{
"text": "With Body Model Corpus BLEU Corpus BLEU Oracle BLEU TranX data does mitigate this \"shrinking\" effect, as shown by the lower standard deviations than those when only using the body.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No Body",
"sec_num": null
},
{
"text": "As discussed in subsection 2.2, we focus on three modalities in the textual bodies: code blocks, inline code, and natural language. We put forth the idea that a large pretrained language model such as BART learns each modality in an unsupervised manner. We designed four distinct ablations to test if this was the case. Each was run both with and without the mined data totaling eight ablations. We report the full BLEU scores from these in Table 4 . Further, we calculate the performance with respect to baselines in Table 5 . Notably, there was no modality whose removal resulted in a BLEU score worse than when the question body was not used in the input. There was also not a modality whose removal improved performance. From our ablations, it is clear that the most important modality in the question bodies is the code regardless of if it is inline or in a block. But, using only code is still 2.25% worse than when all three modalities are included with mined. This indicates that the NL surrounding acts not only as additional context, but likely further both direct and indirect indicators of salient code for the model.",
"cite_spans": [],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 518,
"end": 525,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Is BART Reliant on a Single Modality",
"sec_num": "4.2"
},
{
"text": "In Table 4 we report the percent of generated snippets that are syntactically valid-adding only the mined data results in a 9% increase. When using the question bodies, the addition of the mined data also increases the percent of valid snippets generated by 7.88%. While it is an improvement, it is still a 3.76% drop from when the body was excluded. Further, removing the code from the bodies resulted in the highest percentages of 92.00% and 84.92% with and without the mined data. We then performed a finer analysis using a single seed and the same training and validation data across all ablations and reported the results in Appendix A. Across all ablations, the majority of errors are caused by mismatches of parentheses. In reality, a large percentage of general syntax errors are likely caused by this. However, syntax errors prevent the extraction of the AST for further investigation of these errors. We also report in Table 9 the percentage of valid snippets generated when the print function is present. One of the more commonly occurring incompatibilities between Python 2 and 3 is that print now requires parentheses. Considering that the questions in the CoNaLa dataset are from March 2017 or earlier and that support for Python 2.x only ended in January 2020 11 , we hypothesize that these deprecated calls are a large cause of the errors. When both the body and snippet have print, the inclusion of the question body led to the percent of valid snippets dropping by 21.06 with and 21.05 without the mined data with respect to their baselines. While there are only 19 such questions in the test set, this is a significant drop. The likely cause is that the autoregressive decoder of BART struggles to remember to close the parentheses when wrapping the snippet with a print statement. One solution would be to run the 2to3 12 translator on all of the code. However, the possibilities for code blocks to contain code and other modalities such as error messages and console executions present significant hurdles as 2to3 does not support these. Therefore we leave that to future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 929,
"end": 936,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Removing Code Improves Syntax",
"sec_num": "4.3"
},
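The validity percentages above can be reproduced by attempting to parse each generated snippet. A minimal sketch, assuming a snippet counts as valid exactly when Python 3's ast.parse accepts it:

```python
# Sketch: percentage of syntactically valid snippets via ast.parse.
# A Python 2 style `print 'a'` fails here, while `print('a')` passes,
# matching the print-related failure mode discussed above.
import ast

def pct_valid(snippets):
    valid = 0
    for snippet in snippets:
        try:
            ast.parse(snippet)
            valid += 1
        except SyntaxError:
            pass
    return 100.0 * valid / len(snippets)
```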
{
"text": "In subsection 3.3 we define the \"cheating\" equation to measure if the generated snippet is more similar to the question body than the ground truth is. The ideal model would maximize the BLEU score while minimizing the |C m |. We run multiple ablations on a single seed and calculate the \"cheating\" as defined by Equation 2 and present these results in Table 6 . Suffice to say that serious violations of academic integrity have occurred. As expected, the baseline is less similar to the question bodies than the ground truth is. When the body was used as input, C BT increased by 20.28 points, while C RL rose by 3.16 points, representing a 293.49% and 159.60% increase over their respective baselines. Including the mined data resulted in increases of 18.59 (308.13%) and 0.77(265.52%) when compared to using only the intents. Both indicate that the model's generated output has significantly more shared multigram subsequences with the question body than the ground truth does. In the ablations where code was removed from the body, C BT increased by only 0.98 and 1.86 with and without the mined data. This represents a percent of increase of only 16.25% and 26.92% over their respective baselines. However, in the case where all NL was removed, C BT increased by 17.35(287.73%) and 19.18(277.57%) points with respect to their baselines. The fact that these increases are lower than that when all modalities are included provides further evidence that BART is an unsupervised mul-timodal learner and understands the relationships between each modality. The NL likely provides both explicit and implicit hints about the importance of certain code spans.",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Cheating",
"sec_num": "4.4"
},
{
"text": "Intent: multiply a matrix 'p' with a 3d tensor 't' in scipy scipy.tensordot (P, T, axes=[1, 1] ).swapaxes(0, 1)",
"cite_spans": [
{
"start": 76,
"end": 94,
"text": "(P, T, axes=[1, 1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "x np.einsum('...j,...j->...', P, T) y np.einsum('ij->ij->ik->j->ik', p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "x P.dot(T).transpose(1, 0, 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "Intent: concatenate items of list 'l' with a space ' ' print (' '.join(map(str, l) ",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(' '.join(map(str, l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": ")) x list(map(tuple,[])) y [item for item in L if \" in item] z print(' '.join(str(x) for x in L))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "Intent: concatenate elements of list 'b' by a colon \":\" \"\"\":\"\"\".join(str(x) for x in b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "x [' '.join(x) Table 7 : Example intents and generated snippets. Screenshots of the questions are located in Appendix B and each intent links to the question. Red text indicates that it is incorrect while blue text marks correct tokens in the wrong place. ground truth. xEK+RR no body (Xu et al., 2020). yMined. zBody+Mined.",
"cite_spans": [
{
"start": 2,
"end": 14,
"text": "[' '.join(x)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "We select three examples that demonstrate the benefits of our approach while also highlighting the issues in both the use of the question body and SO corpora in general and report them in Table 7 . In the first example, we can see that both x and y have learned how to use einsums, but neither is correct. z in this case produces an answer that returns the correct value. It is highly probable that BART understood from the poster's explicit mention that P.dot(T).transpose(1, 0, 2) gives the desired result and thus extracts it. However, this example has two critical issues: the poster's intent is to find a \"cleaner\" way to multiply a matrix with a tensor, and scipy.tensordot is deprecated. The latter is to be expected, considering the answer is from 2010. But it does indicate that a better evaluation based on inputs and outputs is likely needed.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "The next two examples are quite similar but are from two separate questions. x likely mistakes the core intent to be type conversion due to the inclusion of the words \"items\" and \"with.\" y also suffers from the inclusion of these tokens but believes the problem involves filtering. In the final example, x recognizes that it must convert the items in b to str, but does not return a joined string. y recognizes that, again, the answer involves type conversion but predicts the incorrect type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "Similar to the first example, z produces answers for both the second and third examples that functionally return the correct results. However, running z's solution for the third example would result in a syntax error due to the missing \").\" On further inspection of the question bodies, it becomes apparent that the probable reason why one snippet is syntactically valid while the other is not is the presence of a Python 2 print. The model recognizes that a suitable answer can be found in the question but must be converted to python 3. As discussed in subsection 4.3, these print statements are prone to cause syntactical issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4.5"
},
{
"text": "We expand the CoNaLa dataset by adding the textual question bodies from the StackExchange API and achieve state-of-the-art performance with a simple BART model. Further, we demonstrate that, for this task, BART performs best when code blocks, inline code, and NL are all present. We then examine the impact of the question body on syntax errors and BART's cheating through multimodal understanding. Finally, we examine examples that highlight the issues with both StackOverflow data and code evaluation in general. Future work should focus on extracting desired inputs and outputs for a given intent. Further, additional efforts put into creating corpora of executable code are likely to improve not only generation but evaluation. Both will also protect datasets from deprecated functions and abandoned libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/gabeorlanski/stackoverflowencourages-cheating",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://stackoverflow.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://api.stackexchange.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/5 We direct readers to for a full discussion of these challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://conala-corpus.github.io/ 7 Actual Number is lower due to errors in the dataset preventing the usage of some examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers 10 https://huggingface.co/facebook/bart-base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.python.org/doc/sunset-python-2/ 12 https://docs.python.org/3/library/2to3.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "JuICe: A large scale distantly supervised dataset for open domain context-based code generation",
"authors": [
{
"first": "Rajas",
"middle": [],
"last": "Agashe",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5436--5446",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1546"
]
},
"num": null,
"urls": [],
"raw_text": "Rajas Agashe, Srinivasan Iyer, and Luke Zettlemoyer. 2019. JuICe: A large scale distantly supervised dataset for open domain context-based code gener- ation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5436-5446, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Expanding the scope of the ATIS task: The ATIS-3 corpus",
"authors": [
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Pao",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Human Language Technology: Proceedings of a Workshop held at Plainsboro",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Human Language Tech- nology: Proceedings of a Workshop held at Plains- boro, New Jersey, March 8-11, 1994.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Coarse-to-fine decoding for neural semantic parsing",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "731--742",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1068"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine de- coding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generating code with the help of retrieved template functions and stack overflow answers",
"authors": [
{
"first": "Dawn",
"middle": [],
"last": "Drain",
"suffix": ""
},
{
"first": "Changran",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Breslav",
"suffix": ""
},
{
"first": "Neel",
"middle": [],
"last": "Sundaresan",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dawn Drain, Changran Hu, Chen Wu, Mikhail Breslav, and Neel Sundaresan. 2021. Generating code with the help of retrieved template functions and stack overflow answers.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Code-BERT: A pre-trained model for programming and natural languages",
"authors": [
{
"first": "Zhangyin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1536--1547",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.139"
]
},
"num": null,
"urls": [],
"raw_text": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi- aocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Code- BERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards complex text-to-SQL in crossdomain database with intermediate representation",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zecheng",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4524--4535",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1444"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross- domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524-4535, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Retrieval-based neural code generation",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shirley Anugrah Hayati",
"suffix": ""
},
{
"first": "Pravalika",
"middle": [],
"last": "Olivier",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Avvaru",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Tomasic",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "925--930",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1111"
]
},
"num": null,
"urls": [],
"raw_text": "Shirley Anugrah Hayati, Raphael Olivier, Pravalika Av- varu, Pengcheng Yin, Anthony Tomasic, and Gra- ham Neubig. 2018. Retrieval-based neural code gen- eration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 925-930, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Codesearchnet challenge: Evaluating the state of semantic code search",
"authors": [
{
"first": "Hamel",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Ho-Hsiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tiferet",
"middle": [],
"last": "Gazit",
"suffix": ""
},
{
"first": "Miltiadis",
"middle": [],
"last": "Allamanis",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09436"
]
},
"num": null,
"urls": [],
"raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- searchnet challenge: Evaluating the state of seman- tic code search. arXiv preprint arXiv:1909.09436.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mapping language to code in programmatic context",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1643--1652",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1192"
]
},
"num": null,
"urls": [],
"raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 1643-1652, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Latent predictor networks for code generation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Fumin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "599--609",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1057"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 599-609, Berlin, Germany. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fixing weight decay regularization in adam",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR, abs/1711.05101.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Codexglue: A machine learning benchmark dataset for code understanding and generation",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Svyatkovskiy",
"suffix": ""
},
{
"first": "Ambrosio",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"B"
],
"last": "Clement",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Drain",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Tufano",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Neel",
"middle": [],
"last": "Sundaresan",
"suffix": ""
},
{
"first": "Shengyu",
"middle": [],
"last": "Shao Kun Deng",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Li- dong Zhou, Linjun Shou, Long Zhou, Michele Tu- fano, Ming Gong, Ming Zhou, Nan Duan, Neel Sun- daresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. Codexglue: A machine learning bench- mark dataset for code understanding and generation. CoRR, abs/2102.04664.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Abstract syntax networks for code generation and semantic parsing",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1139--1149",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139- 1149, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dobf: A deobfuscation pre-training objective for programming languages",
"authors": [
{
"first": "Baptiste",
"middle": [],
"last": "Roziere",
"suffix": ""
},
{
"first": "Marie-Anne",
"middle": [],
"last": "Lachaux",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Szafraniec",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.07492"
]
},
"num": null,
"urls": [],
"raw_text": "Baptiste Roziere, Marie-Anne Lachaux, Marc Szafraniec, and Guillaume Lample. 2021. Dobf: A deobfuscation pre-training objective for program- ming languages. arXiv preprint arXiv:2102.07492.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Constrained language models yield few-shot semantic parsers",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"H"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Subhro",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Emmanouil Antonios",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Shin, Christopher H. Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning to map context-dependent sentences to executable formal queries",
"authors": [
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2238--2249",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1203"
]
},
"num": null,
"urls": [],
"raw_text": "Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to ex- ecutable formal queries. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2238-2249, New Orleans, Louisiana. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Treegen: A tree-based transformer architecture for code generation",
"authors": [
{
"first": "Zeyu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qihao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingfei",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yican",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, and Lu Zhang. 2019. Treegen: A tree-based transformer architecture for code generation. CoRR, abs/1911.09983.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Code and named entity recognition in StackOverflow",
"authors": [
{
"first": "Jeniya",
"middle": [],
"last": "Tabassum",
"suffix": ""
},
{
"first": "Mounica",
"middle": [],
"last": "Maddela",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4913--4926",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.443"
]
},
"num": null,
"urls": [],
"raw_text": "Jeniya Tabassum, Mounica Maddela, Wei Xu, and Alan Ritter. 2020. Code and named entity recognition in StackOverflow. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4913-4926, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sequence-based structured prediction for semantic parsing",
"authors": [
{
"first": "Chunyang",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1341--1350",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1127"
]
},
"num": null,
"urls": [],
"raw_text": "Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for se- mantic parsing. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341- 1350, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Incorporating external knowledge through pre-training for natural language to code generation",
"authors": [
{
"first": "Frank",
"middle": [
"F."
],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhengbao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6045--6052",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.538"
]
},
"num": null,
"urls": [],
"raw_text": "Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporating external knowledge through pre-training for natural language to code generation. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 6045-6052, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "-ide code generation from natural language: Promise and challenges",
"authors": [
{
"first": "Frank",
"middle": [
"F."
],
"last": "Xu",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.11149"
]
},
"num": null,
"urls": [],
"raw_text": "Frank F Xu, Bogdan Vasilescu, and Graham Neubig. 2021. In-ide code generation from natural lan- guage: Promise and challenges. arXiv preprint arXiv:2101.11149.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Staqc: A systematically mined questioncode dataset from stack overflow",
"authors": [
{
"first": "Ziyu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Wei-Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference, WWW '18",
"volume": "",
"issue": "",
"pages": "1693--1703",
"other_ids": {
"DOI": [
"10.1145/3178876.3186081"
]
},
"num": null,
"urls": [],
"raw_text": "Ziyu Yao, Daniel S. Weld, Wei-Peng Chen, and Huan Sun. 2018. Staqc: A systematically mined question- code dataset from stack overflow. In Proceedings of the 2018 World Wide Web Conference, WWW '18, page 1693-1703, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning to mine aligned code and natural language pairs from stack overflow",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR)",
"volume": "",
"issue": "",
"pages": "476--486",
"other_ids": {
"DOI": [
"10.1145/3196398.3196408"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In 2018 IEEE/ACM 15th Interna- tional Conference on Mining Software Repositories (MSR), pages 476-486. IEEE.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A syntactic neural model for general-purpose code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "440--450",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1041"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 440-450, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for se- mantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstra- tions, pages 7-12, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Reranking for neural semantic parsing",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4553--4559",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1447"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2019. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4553-4559, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Dongxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingning",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shanelle",
"middle": [],
"last": "Roman",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3911--3921",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1425"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M."
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J."
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the Thirteenth Na- tional Conference on Artificial Intelligence -Volume 2, AAAI'96, page 1050-1055. AAAI Press.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI'05",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI'05, page 658-666, Arlington, Virginia, USA. AUAI Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Error Categories Error Count General Invalid Syntax Paranthesis Matching Other Matching Baseline 61",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Error Categories Error Count General Invalid Syntax Paranthesis Matching Other Matching Baseline 61",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "No Print Has Print in Snippet Has Print in Body Has Print in Both Baseline",
"authors": [],
"year": null,
"venue": "",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 8: Percentages of syntax errors for ablations in a single run. No Print Has Print in Snippet Has Print in Body Has Print in Both Baseline 88.28",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Percentage of valid snippets based on the presence of print. B Full Questions for Examples (a) Full Stack Overflow Question for Example 1 in Table 7",
"authors": [],
"year": null,
"venue": "",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 9: Percentage of valid snippets based on the presence of print. B Full Questions for Examples (a) Full Stack Overflow Question for Example 1 in Table 7. Question can be found https://stackoverflow.com/questions/4490961/numpy-multiplying-a- matrix-with-a-3d-tensor-suggestion.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Full Stack Overflow Question for Example 2 in Table 7",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Full Stack Overflow Question for Example 2 in Table 7. Question can be found https://stackoverflow.com/questions/13550423/python-printing- without-commas.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Full Stack Overflow Question for Example 3 in Table 7",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Full Stack Overflow Question for Example 3 in Table 7. Question can be found https://stackoverflow.com/questions/13954222/how-to-join-mixed-list- array-with-integers-in-it-in-python.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Example StackOverflow question with labeled elements. The corresponding rewritten intent for this question is \"add a new axis to array a.\" Consider the question from Figure 2 in which a valid python snippet could be a[:, (np.newaxis)].",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Given the function m(a, b) that calculates a textual similarity metric m, we define the cheating w.r.t. m as",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "for x in b] y b = [int(i) for i in b] z print(':'.join(map(str, b))",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Train 2376</td><td>1708</td><td>1.39\u00b11.02</td><td colspan=\"2\">16.45\u00b17.51</td><td>17.23\u00b18.58</td><td>221.90\u00b1202.65</td></tr><tr><td>Test 498</td><td>364</td><td>1.37\u00b10.88</td><td colspan=\"2\">15.98\u00b16.62</td><td>18.47\u00b112.90</td><td>208.04\u00b1164.74</td></tr><tr><td>Mined-10K 9988 \u00a6</td><td>7181</td><td>1.39\u00b10.80</td><td colspan=\"2\">11.29\u00b13.94</td><td>16.58\u00b19.27</td><td>297.53\u00b1367.09</td></tr><tr><td colspan=\"4\">Split Have Answer * Has Code Inline x</td><td>Blocks x</td><td colspan=\"2\">Code Tokens x NL Tokens x</td></tr><tr><td>Train 87.88%</td><td/><td>85.95%</td><td colspan=\"3\">1.21\u00b12.09 1.42\u00b11.26 95.54\u00b1157.52</td><td>124.60\u00b192.02</td></tr><tr><td>Test 87.09%</td><td/><td>87.91%</td><td colspan=\"3\">1.08\u00b11.87 1.50\u00b11.26 88.21\u00b1116.01</td><td>118.52\u00b179.51</td></tr><tr><td>Mined-10K 86.16%</td><td/><td>84.00%</td><td colspan=\"4\">1.30\u00b12.36 1.46\u00b11.34 133.20\u00b1278.20 164.54\u00b1207.08</td></tr><tr><td>Mined 81.92%</td><td/><td>81.83%</td><td colspan=\"4\">1.50\u00b12.86 1.47\u00b11.44 172.57\u00b1372.32 197.98\u00b1257.71</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Split |E| * |Q| * |E|/|Q| x Intent Tokens y Snippet Tokens y Body Tokens z Mined 593837 40522 14.65\u00b17.01 11.41\u00b14.22 28.70\u00b142.81 371.24\u00b1483.67 Table 1: Statistics for the CoNaLa dataset with data from the StackOverflow API. |E| is # of examples. |Q| number of questions. Values are reported as \u00b5 \u00b1 \u03c3 unless the column header has * . x Mean # of examples for a Question. Number of tokens in the body regardless of modalitiy. \u00a6 12 of the 10K questions were removed because there was an issue with them.",
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Detailed statistics for the StackOverflow questions. Mined-10K represents the top 10,000 samples selected from the Mined data based on their probability that they are a valid NL-Code pair.",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results compared to previous papers both with and without the use of the question body at inference. We do not calculate the Oracle BLEU for either of our models as our corpus BLEU already surpasses their Oracle BLEU. EK=Using External Knowledge. RR=Using Reranking.x Using only the rewritten intent, if available else normal intent, as input.",
"num": null
},
"TABREF4": {
"content": "<table><tr><td colspan=\"2\">Input Unigram -Code BLEU 27.67\u00b10.40 68.29\u00b10.53 44.93\u00b10.57 30.12\u00b10.69 84.92\u00b11.00</td></tr><tr><td>-Blocks</td><td>29.53\u00b10.47 68.14\u00b10.26 45.69\u00b10.10 31.36\u00b10.15 80.84\u00b11.37</td></tr><tr><td>-Inline</td><td>33.57\u00b10.94 70.50\u00b10.27 49.56\u00b10.40 36.54\u00b10.46 82.16\u00b11.53</td></tr><tr><td colspan=\"2\">Body+Mined 35.32\u00b10.42 67.62\u00b10.76 47.69\u00b10.82 35.00\u00b10.87 89.32\u00b11.49</td></tr><tr><td>-NL</td><td>34.53\u00b10.88 66.24\u00b10.90 46.11\u00b11.15 33.54\u00b11.02 90.08\u00b10.48</td></tr><tr><td>-Code</td><td>31.39\u00b10.75 67.00\u00b10.75 45.65\u00b10.97 31.60\u00b10.88 92.00\u00b11.31</td></tr><tr><td>-Blocks</td><td>32.14\u00b10.14 66.96\u00b11.03 45.32\u00b10.97 31.49\u00b10.74 89.24\u00b11.30</td></tr><tr><td>-Inline</td><td>35.06\u00b10.49 67.04\u00b11.54 46.99\u00b11.29 34.31\u00b11.04 89.20\u00b10.42</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Baseline 26.24\u00b10.31 67.53\u00b10.46 44.10\u00b10.60 29.80\u00b10.69 84.08\u00b11.27 +Mined 30.55\u00b10.38 67.81\u00b10.23 45.55\u00b10.27 31.69\u00b10.37 93.08\u00b11.28 Body 34.35\u00b11.01 69.97\u00b10.89 49.74\u00b10.99 36.62\u00b10.97 81.44\u00b12.25 -NL 34.06\u00b10.48 68.29\u00b10.48 47.91\u00b10.45 35.33\u00b10.40 81.92\u00b10.75",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Percent of generated snippets that are</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Ablation Experiments all with BART ran on 5 different random initializations. All tests have rewritten intent as input in addition to the input described in the Input column. The bolded ablation indicates our best performance while red text represents the worst performance.",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Change in BLEU score for each ablation versus their respective baseline.",
"num": null
},
"TABREF8": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Cheating Measurements calculated by Equation 2 using a single run but same seed and environment. C BB and C BT are the cheating w.r.t. BLEU Bigram and Trigram Precision. C R2 and C RL are the cheating w.r.t. ROUGE-2 and ROUGE-L.",
"num": null
}
}
}
}