{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:08.800739Z"
},
"title": "Code to Comment Translation: A Comparative Study on Model Effectiveness & Errors",
"authors": [
{
"first": "Junayed",
"middle": [],
"last": "Mahmud",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {
"country": "USA"
}
},
"email": "jmahmud@gmu.edu"
},
{
"first": "Fahim",
"middle": [],
"last": "Faisal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {
"country": "USA"
}
},
"email": "ffaisal@gmu.edu"
},
{
"first": "Raihan",
"middle": [
"Islam"
],
"last": "Arnob",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {
"country": "USA"
}
},
"email": "rarnob@gmu.edu"
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Moran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {
"country": "USA"
}
},
"email": "kpmoran@gmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automated source code summarization is a popular software engineering research topic wherein machine translation models are employed to \"translate\" code snippets into relevant natural language descriptions. Most evaluations of such models are conducted using automatic reference-based metrics. However, given the relatively large semantic gap between programming languages and natural language, we argue that this line of research would benefit from a qualitative investigation into the various error modes of current stateof-the-art models. Therefore, in this work, we perform both a quantitative and qualitative comparison of three recently proposed source code summarization models. In our quantitative evaluation, we compare the models based on the smoothed BLEU-4, METEOR, and ROUGE-L machine translation metrics, and in our qualitative evaluation, we perform a manual open-coding of the most common errors committed by the models when compared to ground truth captions. Our investigation reveals new insights into the relationship between metric-based performance and model prediction errors grounded in an empirically derived error taxonomy that can be used to drive future research efforts. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automated source code summarization is a popular software engineering research topic wherein machine translation models are employed to \"translate\" code snippets into relevant natural language descriptions. Most evaluations of such models are conducted using automatic reference-based metrics. However, given the relatively large semantic gap between programming languages and natural language, we argue that this line of research would benefit from a qualitative investigation into the various error modes of current stateof-the-art models. Therefore, in this work, we perform both a quantitative and qualitative comparison of three recently proposed source code summarization models. In our quantitative evaluation, we compare the models based on the smoothed BLEU-4, METEOR, and ROUGE-L machine translation metrics, and in our qualitative evaluation, we perform a manual open-coding of the most common errors committed by the models when compared to ground truth captions. Our investigation reveals new insights into the relationship between metric-based performance and model prediction errors grounded in an empirically derived error taxonomy that can be used to drive future research efforts. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Proper documentation is an important component of modern software development, and previous studies have illustrated its advantages for tasks ranging from program comprehension to software maintenance (Chen and Huang, 2009) . However, manually documenting software is a tedious task (McBurney and McMillan, 2014) and modern agile development practices 1 Our annotations and guidelines are publicly available on Github https://github.com/SageSELab/ CodeSumStudy and Zenodo: https://doi.org/10. 5281/zenodo.4904024.",
"cite_spans": [
{
"start": 201,
"end": 223,
"text": "(Chen and Huang, 2009)",
"ref_id": "BIBREF5"
},
{
"start": 283,
"end": 312,
"text": "(McBurney and McMillan, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 352,
"end": 353,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "tend to champion working code over extensive documentation (Beck et al., 2001) . As such, a range of important documentation activities are often neglected leading to deficiencies in carrying out development activities and contributing to technical debt. Because of this, researchers have worked to develop automated code summarization techniques wherein machine translation models are employed to generate precise, semantically accurate natural language descriptions of source code (Haiduc et al., 2010) . Due to the promise and potential benefits of effective automated source code summarization techniques, this area of work has seen constant and growing attention at the intersection of the software engineering and natural language processing research communities (Zhu and Pan, 2019) .",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Beck et al., 2001)",
"ref_id": null
},
{
"start": 483,
"end": 504,
"text": "(Haiduc et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 769,
"end": 788,
"text": "(Zhu and Pan, 2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "Various techniques for automated source code summarization have been explored extensively over the past decade. Some of the earliest approaches made use of a combination of structural code information and text retrieval techniques for determining the most relevant terms (Haiduc et al., 2010) , with follow up work investigating the use of topic modeling (Eddy et al., 2013) . Techniques then evolved from using information retrieval to canonical machine learning techniques, with Ying and Robillard (2013) using supervised Naive Bayes and Support Vector Machine classifiers to identify code fragment lines that could be used as suitable summaries. One of the first appearances of language modeling came from McBurney and McMillan (2016) who proposed an approach combining a software word usage model, natural language generation systems, and the PageRank algorithm (Langville and Meyer, 2006) to generate summaries. Driven by the advent of deep learning, current state-of-the-art techniques generally make use of large-scale neural models and have significantly improved the performance of code summa-rization tasks. For instance, Iyer et al. (2016) used Long Short Term Memory (Hochreiter and Schmidhuber, 1997 ) with attention (Bahdanau et al., 2015) to generate summaries from a code snippet. Following this work, researchers have applied several deep learning-based approaches to the task of source code summarization (Zhang et al., 2020a; Wan et al., 2018; .",
"cite_spans": [
{
"start": 271,
"end": 292,
"text": "(Haiduc et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 355,
"end": 374,
"text": "(Eddy et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 709,
"end": 737,
"text": "McBurney and McMillan (2016)",
"ref_id": "BIBREF30"
},
{
"start": 866,
"end": 893,
"text": "(Langville and Meyer, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 1132,
"end": 1150,
"text": "Iyer et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1179,
"end": 1212,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF13"
},
{
"start": 1230,
"end": 1253,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 1423,
"end": 1444,
"text": "(Zhang et al., 2020a;",
"ref_id": "BIBREF41"
},
{
"start": 1445,
"end": 1462,
"text": "Wan et al., 2018;",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "In most works on automated code summarization, the performance of the generated natural language descriptions is evaluated using referencebased metrics adapted from machine translation, e.g., BLEU (Papineni et al., 2002) and ME-TEOR (Lavie and Agarwal, 2007) , or text summarization, e.g., ROUGE (Lin, 2004) . As such, most researchers make conclusions based on the results obtained using these metrics. However, the code summarization task is a difficult one -due in large part to the sizeable semantic gap between the modalities of source code and natural language. As such, while these metrics provide a general illustration of model efficacy, it can be difficult to determine the specific shortcomings of neural code summarization techniques without a more extensive qualitative investigation into their errors.",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
},
{
"start": 233,
"end": 258,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 307,
"text": "(Lin, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "Few past studies have examined the failure modes of neural code summarization models as we outline in \u00a76. Therefore, to further explore this topic, in this paper we perform both a qualitative and quantitative empirical comparison of three neural code summarization models. Our quantitative evaluation offers a comparison of three recently proposed models (CodeBERT (Feng et al., 2020) , NeuralCodeSum (Ahmad et al., 2020) , and code2seq (Alon et al., 2019) ) on the Funcom dataset (LeClair and McMillan, 2019) using the smoothed BLEU-4 (Lin and Och, 2004) , ME-TEOR (Lavie and Agarwal, 2007) , and ROUGE-L (Lin, 2004) metrics whereas our qualitative evaluation consists of a rigorous manual categorization of model errors (compared to ground truth captions) based on a procedure adapted from the practice of open coding (Miles et al., 2013) . In summary, this paper makes the following contributions:",
"cite_spans": [
{
"start": 365,
"end": 384,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 401,
"end": 421,
"text": "(Ahmad et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 437,
"end": 456,
"text": "(Alon et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 536,
"end": 555,
"text": "(Lin and Och, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 566,
"end": 591,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF19"
},
{
"start": 606,
"end": 617,
"text": "(Lin, 2004)",
"ref_id": "BIBREF23"
},
{
"start": 820,
"end": 840,
"text": "(Miles et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "\u2022 We offer a quantitative comparative analysis of the CodeBERT, NeuralCodeSum, and code2seq models applied to the task of Java method summarization in the Funcom dataset. The results of this analysis illustrate that the CodeBERT model performs best to a statistically significant degree, achieving a BLEU-4 score of 24.15, a METEOR score of 30.34, and a ROUGE-L score of 35.65. \u2022 We conduct a qualitative investigation into the various prediction errors made by our three studied models and derive a taxonomy of error modes across the various models. We also offer a discussion about differences in errors made across models and suggestions for model improvements. \u2022 We offer resources on GitHub 2 and Zenodo 3 for replicating our experiments, including code and trained models, in addition to all of the data and examples used in our qualitative analysis of model errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "This section outlines necessary background regarding our chosen evaluation dataset as well as the three neural code summarization models upon which we focus our empirical investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Deep Learning for Code Summarization",
"sec_num": "2"
},
{
"text": "In this study we make use of the Funcom dataset (LeClair and McMillan, 2019). 4 We selected this dataset primarily for three reasons: (i) this dataset was specifically curated for the task of code summarization, excluding methods more than 100 words and comments with >13 and <3 words or which were auto-generated, (ii) it is currently one of the largest datasets specifically tailored for code summarization, containing over 2.1M Java methods with paired JavaDoc comments, (iii) it targets Java, one of the most popular programming languages. 5 In order to make for a feasible training procedure for our various model configurations, and to keep the dataset size in line with past work to which our studied models were applied (e.g., the size of the CodeXGlue dataset from Lu et al. (2021) , containing approximately 180000 Java methods and JavaDoc pairs, to which CodeBERT was applied) we chose to use the first 500,000 method-comment pairs from the filtered Funcom dataset for our experiments. Note that we did not use the tokenized version of the dataset as provided by as each of our models has unique pre-processing constraints, described in detail in Appendix B. ",
"cite_spans": [
{
"start": 78,
"end": 79,
"text": "4",
"ref_id": null
},
{
"start": 544,
"end": 545,
"text": "5",
"ref_id": null
},
{
"start": 774,
"end": 790,
"text": "Lu et al. (2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset: Funcom",
"sec_num": "2.1"
},
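{
"text": "To make these filtering criteria concrete, the minimal sketch below applies them to a single (method, comment) pair; the length thresholds come from the description above, while the check for auto-generated comments is only an illustrative assumption and does not reproduce the exact Funcom heuristics.\n\ndef keep_pair(method_tokens, comment_tokens):\n    # Drop methods longer than 100 words.\n    if len(method_tokens) > 100:\n        return False\n    # Keep only comments with between 3 and 13 words.\n    if not (3 <= len(comment_tokens) <= 13):\n        return False\n    # Illustrative (assumed) heuristic for auto-generated comments.\n    text = ' '.join(comment_tokens).lower()\n    if 'auto-generated' in text or 'created by eclipse' in text:\n        return False\n    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset: Funcom",
"sec_num": "2.1"
},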
{
"text": "CodeBERT CodeBERT (Feng et al., 2020 ) is a bimodal pre-trained model used in natural language (NL) and programming language (PL) tasks. This model supports six programming language tasks in various downstream NL-PL applications, e.g., code search, code summarization, etc. The architecture of the model is based on BERT (Devlin et al., 2019) , specifically following the RoBERTabase (Liu et al., 2019) in using 125 million model parameters. The objectives of training CodeBERT are masked language modeling (MLM) and replaced token detection (RTD). Recently, Microsoft Research Asia introduced the CodeXGLUE benchmark that consists of 14 datasets for ten diversified code intelligence tasks (Lu et al., 2021) . They fine-tuned CodeBERT in code-to-natural-language generation tasks. CodeBERT was used as the encoder, with a six-layer self-attentive (Vaswani et al., 2017) decoder. An architecture for code-to-text translation using the CodeBERT encoder is shown in Figure 1 . The dataset Lu et al. (2021) used is derived from CodeSearchNet (Husain et al., 2019) .",
"cite_spans": [
{
"start": 18,
"end": 36,
"text": "(Feng et al., 2020",
"ref_id": "BIBREF9"
},
{
"start": 321,
"end": 342,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 384,
"end": 402,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 691,
"end": 708,
"text": "(Lu et al., 2021)",
"ref_id": "BIBREF28"
},
{
"start": 848,
"end": 870,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 987,
"end": 1003,
"text": "Lu et al. (2021)",
"ref_id": "BIBREF28"
},
{
"start": 1039,
"end": 1060,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 964,
"end": 972,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
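{
"text": "As an illustration only (not the fine-tuned summarization model described above), the pre-trained CodeBERT encoder can be loaded through the Hugging Face transformers library and used to encode a code snippet; the sketch below assumes the publicly released microsoft/codebert-base checkpoint.\n\nfrom transformers import RobertaModel, RobertaTokenizer\n\ntokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base')\nmodel = RobertaModel.from_pretrained('microsoft/codebert-base')\n\ncode = 'public int add(int a, int b) { return a + b; }'\ninputs = tokenizer(code, return_tensors='pt', truncation=True, max_length=256)\noutputs = model(**inputs)\n# One 768-dimensional contextual vector per sub-token.\nprint(outputs.last_hidden_state.shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},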
{
"text": "NeuralCodeSum The second technique we study is NeuralCodeSum (Ahmad et al., 2020) . Here, the authors explored a transformer-based approach to perform the task of code summarization, using a self-attention mechanism to capture the long-term dependencies that are common in source code. In order to enable the model to both copy from already seen source code and to generate new words from its vocabulary, they employed a copy mechanism (See et al., 2017) . One important distinction of source code that this model takes into account is that the absolute token position does not necessarily assist in the process of learning effective source code representations (i.e., int a=b+c and int a=c+b; both convey the same meaning). To mitigate this problem, they used the relative positioning of tokens to encode pairwise token relations. Additionally, the authors of this model also explored the integration of an abstract syntax tree (AST)-based source code representation. How-ever, they found that the AST information did not result in a marked improvement in model accuracy. code2seq The third model we consider in our study is code2seq (Alon et al., 2019) , which is a widely utilized technique that was originally designed for the task of method name prediction. The authors of this work focused on capturing the true syntactic construction of source code by encoding AST paths. They showed that code snippets which exhibited differences in lines but that were designed for similar functionality often have similar patterns in their AST trees. To take advantage of this observation, code2seq uses an encoder-decoder architecture that attends to the constructed AST encoding to generate the resultant sequence. The authors experimented with Java method name generation as well as code captioning tasks. They compared their code captioning approach to CodeNN (Iyer et al., 2016) using BLEU score, against which it illustrated improved performance.",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "(Ahmad et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 436,
"end": 454,
"text": "(See et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 1135,
"end": 1154,
"text": "(Alon et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1857,
"end": 1876,
"text": "(Iyer et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
{
"text": "To evaluate the performance of our three models applied to the task of code summarization, we perform both a quantitative and qualitative evaluation centered upon the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Empirical Evaluation",
"sec_num": "3"
},
{
"text": "RQ 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Empirical Evaluation",
"sec_num": "3"
},
{
"text": "How effective is each model in terms of predicting natural language summaries from Java methods?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Empirical Evaluation",
"sec_num": "3"
},
{
"text": "RQ 2 : What types of errors do our studied models make when compared to ground truth captions?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Empirical Evaluation",
"sec_num": "3"
},
{
"text": "RQ 3 : What differences (if any) are there between the errors made by different models?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Empirical Evaluation",
"sec_num": "3"
},
{
"text": "In this subsection, we discuss how we split the dataset, the evaluation metrics we use, and how we configure our studied models for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology for RQ 1",
"sec_num": "3.1"
},
{
"text": "To adapt the Funcom dataset for our study, we first sampled the first 500k function-comment pairs from the filtered Funcom dataset into training (80%), validation (10%) and testing (10%) for our experiment, ensuring that the method-comment pairs between our training and testing datasets came from separate software projects (i.e., split by project), as suggested by the Funcom authors, in order to avoid artificial inflation of performance due to data snooping .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Preparation and Metrics",
"sec_num": "3.1.1"
},
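{
"text": "A minimal sketch of such a project-level split is shown below; the field name 'project' is hypothetical and simply stands in for whatever project identifier accompanies each method-comment pair.\n\nimport random\n\ndef split_by_project(pairs, train_frac=0.8, valid_frac=0.1, seed=42):\n    # Assign whole projects (not individual methods) to each split.\n    projects = sorted({p['project'] for p in pairs})\n    random.Random(seed).shuffle(projects)\n    n_train = int(len(projects) * train_frac)\n    n_valid = int(len(projects) * valid_frac)\n    train = set(projects[:n_train])\n    valid = set(projects[n_train:n_train + n_valid])\n    splits = {'train': [], 'valid': [], 'test': []}\n    for p in pairs:\n        if p['project'] in train:\n            splits['train'].append(p)\n        elif p['project'] in valid:\n            splits['valid'].append(p)\n        else:\n            splits['test'].append(p)\n    return splits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Preparation and Metrics",
"sec_num": "3.1.1"
},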
{
"text": "CodeXGlue 164923 5183 10955 Funcom 400000 50000 49997 As a comparison to past work, we illustrate the training, validation and test dataset sizes between the CodeXGLUE and Funcom datasets in Table 1 . As mentioned earlier we preprocess the sampled dataset based on the requirements for each of our chosen models, and provide details in Appendix B. Prior work has explored the use of several reference-based metrics, e.g., BLEU, METEOR, and ROUGE-L for evaluating the performance of code summarization. In our study we make use of smoothed BLEU-4 as it was previously used to evaluate the CodeBERT model (Feng et al., 2020) . BLEU is the geometric average of n-gram precisions between the predicted and reference captions multiplied by a brevity penalty that penalizes the generation of short descriptions. We use the BLEU metric applying a smoothing technique (Lin and Och, 2004) , which adds one count in the case of n-gram hits to address hypotheses shorter than n. In addition, we include METEOR (Lavie and Agarwal, 2007) and ROUGE-L (Lin, 2004) in our study. METEOR computes the harmonic mean between precision and recall based on unigram matches between the prediction from a model and reference, also going beyond exact matches to include stemming, synonyms, and lemmatization. ROUGE-L computes the longest common subsequence-based F-measure between the hypotheses and references.",
"cite_spans": [
{
"start": 603,
"end": 622,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 860,
"end": 879,
"text": "(Lin and Och, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 999,
"end": 1024,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF19"
},
{
"start": 1037,
"end": 1048,
"text": "(Lin, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Training Dev Testing",
"sec_num": null
},
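{
"text": "For reference, the smoothed BLEU-4 computation can be approximated with NLTK, whose smoothing method2 implements the add-one n-gram smoothing of Lin and Och (2004); this is only a sketch of the metric, not the exact evaluation script used with any of the models.\n\nfrom nltk.translate.bleu_score import SmoothingFunction, sentence_bleu\n\ndef smoothed_bleu4(reference, hypothesis):\n    # Both arguments are plain strings; tokenize on whitespace.\n    ref = reference.lower().split()\n    hyp = hypothesis.lower().split()\n    return sentence_bleu([ref], hyp,\n                         weights=(0.25, 0.25, 0.25, 0.25),\n                         smoothing_function=SmoothingFunction().method2)\n\nprint(smoothed_bleu4('sets the heading caption', 'sets the text for the heading'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Dev Testing",
"sec_num": null
},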
{
"text": "We train, validate and test the three models described in \u00a72 for the task of summarizing Java methods in natural language. A subset of model hyperparameters for all three studied deep learning models is shown in Table 2 . We preprocess the dataset for each of the models according to their individual requirements and select the hyperparameters for each of the models based on the optimal settings from prior work. Additionally, we apply some global preprocessing that is common to all models, taken from recent work on language modeling for code (Mastropaolo et al., 2021) . Initially, we remove all the comments that exist inside methods, as the commented code could lead to poor predictions. Next, all the JavaDoc comments are filtered keeping only the description of the method. Finally, we clean HTML and remove special characters from the JavaDoc captions. We provide a detailed account of our preprocessing and training techniques in Appendix B and in our publicly available resources.",
"cite_spans": [
{
"start": 547,
"end": 573,
"text": "(Mastropaolo et al., 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 212,
"end": 219,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model Configurations and Training",
"sec_num": "3.1.2"
},
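{
"text": "The global preprocessing described above can be sketched as follows; the regular expressions are illustrative assumptions rather than the exact rules we used, which are given in Appendix B and in our released scripts.\n\nimport re\n\ndef strip_inline_comments(java_method):\n    # Remove // line comments and /* ... */ block comments inside the method body.\n    java_method = re.sub(r'//.*', '', java_method)\n    return re.sub(r'/\\*.*?\\*/', '', java_method, flags=re.DOTALL)\n\ndef clean_javadoc(raw_comment):\n    # Keep only the leading description, dropping @param/@return/@throws tag lines.\n    description = []\n    for line in raw_comment.splitlines():\n        line = line.strip().lstrip('/*').strip()\n        if line.startswith('@'):\n            break\n        description.append(line)\n    text = ' '.join(description)\n    text = re.sub(r'<[^>]+>', ' ', text)                # strip HTML tags\n    text = re.sub(r'[^a-z0-9 .,]', ' ', text.lower())   # drop special characters\n    return re.sub(r' +', ' ', text).strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations and Training",
"sec_num": "3.1.2"
},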
{
"text": "CodeBERT Model Configurations and Training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations and Training",
"sec_num": "3.1.2"
},
{
"text": "We use the open-source implementation 6 made available by Microsoft to fine-tune Code-BERT using the Funcom dataset. We utilized the optimal model configurations for this model used to train on the CodeXGlue (Lu et al., 2021) dataset with hyperparamters tuned on the Funcom dataset.",
"cite_spans": [
{
"start": 208,
"end": 225,
"text": "(Lu et al., 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations and Training",
"sec_num": "3.1.2"
},
{
"text": "We use the open-source implementation of NeuralCodeSum 7 to train the model in our study. We performed one additional preprocessing step than typical with this model, splitting camelcase words. The dropout rate is set to 0.2 and we train for a maximum of 1000 epochs. Additionally, we stop training if validation does not improve after 20 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeuralCodeSum Model Configurations and Training:",
"sec_num": null
},
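{
"text": "The camelCase splitting step can be implemented with a simple regular expression; the sketch below is our own illustration, not the exact tokenizer shipped with NeuralCodeSum.\n\nimport re\n\ndef split_camel_case(token):\n    # 'getHTTPResponse' -> ['get', 'HTTP', 'Response']; 'ArrayList' -> ['Array', 'List']\n    parts = re.findall(r'[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\\d+', token)\n    return parts if parts else [token]\n\nprint(split_camel_case('ArrayList'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeuralCodeSum Model Configurations and Training:",
"sec_num": null
},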
{
"text": "code2seq Model Configurations and Training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeuralCodeSum Model Configurations and Training:",
"sec_num": null
},
{
"text": "We make use of the publicly available implementation of code2seq. 8 To use the Funcom dataset, we had to prepare the AST node representation using a modified dataset build script. 9 The original dataset build script was designed to predict the method name whereas we modify it to predict summaries. One problem we faced representing Funcom methods as ASTs is that there were some code examples which could not be parsed into an AST representation mainly because of the imposed minimum code length threshold and the method not having 6 https://github.com/microsoft/ CodeXGLUE/tree/main/Code-Text/ code-to-text any AST-Paths. As a result, we were able to train code2seq on only a subset of the Funcom dataset (40009/50000 \u21e1 80.02%). To train the model we made use of large batch sizes (e.g., 256 and 512) as we noted smaller batch sizes resulted in instability. As code2seq was originally designed to predict method names, we also made some changes in the model parameters to facilitate longer prediction sequences, which we give in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NeuralCodeSum Model Configurations and Training:",
"sec_num": null
},
{
"text": "We performed a manual, qualitative analysis on the output of the three models 10 to answer RQ 2 and RQ 3 in order to better understand and compare the various types of errors each model makes. The methodology we follow to categorize the model prediction errors follows a procedure inspired by open coding (Miles et al., 2013) , which has been used in prior studies to categorize large numbers of software project artifacts (Linares-V\u00e1squez et al., 2017, inter alia). Initially, we randomly selected a small number of samples from our validation split of the Funcom dataset, and applied each of our three models to generate captions. The four annotators 11 then met and discussed the samples to derive an initial set of labels that described deviations from the ground truth. We found that 15 methods (each with three predictions, one from each of our studied models) were enough to reach an initial agreement on the labels. Note that we use the ground truth captions as a \"gold set\" in order to orient our analysis to a shared understanding among annotators and to limit potential subjectivity.",
"cite_spans": [
{
"start": 305,
"end": 325,
"text": "(Miles et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology for RQ 2 & RQ 3",
"sec_num": "3.2"
},
{
"text": "Next, we conducted two rounds of independent labeling, wherein three annotators independently coded a samples of method-comment pairs and predicted comments, such that two annotators independently coded each sample. Here we define a \"sample\" as a method $ gold-comment pair, and the three resulting predictions from CodeBERT, NeuralCodeSum, and code2seq respectively for the method. During this process, annotators were free to add additional labels outside of the initial set if they deemed it necessary. The first round of labeling consisted of 148 samples in total, amounting to 148 \u21e5 3 = 444 predictions from our studied models. After the independent labeling process, the authors met to resolve the conflicts among the labels. This initial round of coding resulted in a disagreement on \u21e1 82% of the samples wherein author discussion was needed in order to derive a common agreed upon label. There were two main reasons for this relatively high rate of disagreement: (i) the authors created some category labels with similar semantic meanings, but different labels, and (ii) some of the authors had different interpretations of shared meanings. However, through an extensive discussion, the conflicts were resolved and a shared understanding reached. The second round of independent labeling consisted of 50 samples, and resulted in a disagreement rate of only \u21e1 27%, illustrating the stronger consensus among authors. We derive the taxonomy presented in \u00a74 from labels present after both rounds of our open coding procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology for RQ 2 & RQ 3",
"sec_num": "3.2"
},
{
"text": "In this section, we will discuss the quantitative and qualitative results from our empirical study in order to answer our research questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4"
},
{
"text": "To perform the evaluation on the Funcom dataset, we use the optimal hyper-parameters shown in Table 2 for the three deep learning models. Neural-CodeSum could not predict natural language descriptions for some examples (\u21e1 80). The most likely reason for this situation is the errors in processing code or docstring tokens. Table 3 shows the quantitative results obtained based on smoothed BLEU-4, METEOR, and ROUGE-L scores. The results show that CodeBERT performs best among the three models. We believe that the reason we observe CodeBERT achieving this level of performance is that this model is pre-trained on both bimodal data and unimodal data (wherein bimodal data refers to the coupled code and natural language pairs and unimodal data refers to either natural language descriptions without code snippets or code snippets without natural language descriptions (Feng et al., 2020) ).",
"cite_spans": [
{
"start": 868,
"end": 887,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 323,
"end": 330,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "RQ 1 Results: Evaluation Based on Reference-Based Metrics",
"sec_num": "4.1"
},
{
"text": "Statistical significance In addition to calculating the evaluation scores (i.e. smoothed BLEU-4, METEOR, ROUGE), we conducted statistical significance tests for all three metrics to assess the validity of the obtained results. We took 19009 examples from the test dataset and used pairwise bootstrap re-sampling (Koehn, 2004) 3 model predictions. In comparison to Neural-CodeSum, we found CodeBERT performs better with a mean score increase (BLEU-4 2.8, ME-TEOR 2.9, ROUGE 2.2) at a 95% confidence interval, thus indicating a performance delta that is statistically significant.",
"cite_spans": [
{
"start": 312,
"end": 325,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 1 Results: Evaluation Based on Reference-Based Metrics",
"sec_num": "4.1"
},
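{
"text": "A minimal sketch of the pairwise bootstrap re-sampling procedure is given below; the per-example score lists and function names are our own, and the resample count is an assumption rather than the exact setting we used.\n\nimport random\n\ndef paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):\n    # scores_a[i] and scores_b[i] are metric scores of two systems on the same test example.\n    rng = random.Random(seed)\n    n, wins, deltas = len(scores_a), 0, []\n    for _ in range(n_resamples):\n        idx = [rng.randrange(n) for _ in range(n)]\n        delta = sum(scores_a[i] - scores_b[i] for i in idx) / n\n        deltas.append(delta)\n        if delta > 0:\n            wins += 1\n    deltas.sort()\n    # Fraction of resamples where system A wins, plus a 95% interval on the mean score delta.\n    return wins / n_resamples, (deltas[int(0.025 * n_resamples)], deltas[int(0.975 * n_resamples)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 1 Results: Evaluation Based on Reference-Based Metrics",
"sec_num": "4.1"
},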
{
"text": "In the first round of our study that included 148 \u21e5 3 = 444 samples, we were able to classify the errors for 398 generated natural language descriptions from the models from the validation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 2 Results: Types of errors",
"sec_num": "4.2"
},
{
"text": "The remaining 46 descriptions that were not classified as predictions were not made by the models due to errors in parsing and one error in processing code tokens. This singular error was due to the fact an entire code snippet was commented out, and our models do not process commented code. Thus, we did not include the predictions for the three different models for that code snippet in our study. In the other 43 cases, the code2seq model could not generate predictions because the model was not able to parse the AST. Our error taxonomy derived after both rounds of the open coding process is shown in Figure 2. The taxonomy consists of seven highlevel categories with each consisting of multiple lower-level sub-categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 606,
"end": 612,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "RQ 2 Results: Types of errors",
"sec_num": "4.2"
},
{
"text": "To elaborate, Semantically Unrelated to Code is a subcategory of Incorrect Semantic Information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 2 Results: Types of errors",
"sec_num": "4.2"
},
{
"text": "Truth is dedicated to those captions that generally matched the ground truth, which we include for completeness. The numbers that are shown beside the name of the sub-categories illustrate the number of errors for CodeBERT, NeuralCodeSum, and code2seq respectively. The numbers shown beside the categories' names represent the cumulative sum of the sub-categories. We provide a small number of examples of these categorizations in Appendix C, and provide all labeled examples in our public resources on GitHub and Zenodo. We make the following notable observations resulting from our derived taxonomy: . This seems to indicate that one of the biggest struggles for current neural code summarization techniques is related to the inclusion of various types of necessary information in the summary itself, followed by issues in properly constructing comment syntax. \u2022 The models also either incorrectly recognized or failed to recognize salient identifiers that were needed to understand method functionality in a non-negligible number of cases (71/535 \u21e1 13.2%). This suggests that mechanisms for identifying focal identifiers i.e., those that might prominently contribute to describing the functionality, could be beneficial, similar to past work on identifying focal methods (Qusef et al., 2010) . \u2022 Some of the models exhibited generated summaries that over-generalized to the detriment of the summary meaning (49/535 \u21e1 9.15%) , whereas very few summaries contained extraneous information. \u2022 Further study is needed to gain a better understanding of the various facets of the critical information and non-critical information that captions were missing. For instance, we plan to explore whether the necessary information is contained within the code itself, or perhaps in semantically related methods. We leave this for future work.",
"cite_spans": [
{
"start": 1274,
"end": 1294,
"text": "(Qusef et al., 2010)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Note that one category Consistent with Ground",
"sec_num": null
},
{
"text": "One advantage of the formulation of our empirical study is that we are able to compare the various shortcomings of our studied models as they relate Extraneous/Unnessecary Information Included (2, 3, 4) Missing Context (1, 2, 2)",
"cite_spans": [
{
"start": 149,
"end": 196,
"text": "Extraneous/Unnessecary Information Included (2,",
"ref_id": null
},
{
"start": 197,
"end": 199,
"text": "3,",
"ref_id": null
},
{
"start": 200,
"end": 202,
"text": "4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 3 Results: Comparison of three different models",
"sec_num": "4.3"
},
{
"text": "Figure 2: Taxonomy of the Errors Between the Generated Summaries and the Ground Truth. The numbers shown for each category are the number of instances found for (CodeBERT, NeuralCodeSum, code2seq) respectively. The category labels recovered from the figure are: Consistent with Ground Truth (88, 57, 17); Missing Information (65, 56, 27); Incorrect Construction (26, 31, 53); Incorrect Semantic Information (6, 27, 19); Over-Generalization (7, 21, 21); Extraneous/Unnecessary Information Included (2, 3, 4); Missing Context (1, 2, 2); ... Info (30, 15, 5): comment matches ground truth well; Consistent but Missing Specific Info (56, 35, 12): comment matches the ground truth mostly, but misses some important specific information; Improves upon Semantic Meaning (2, 6, 0): the predicted comment matches the ground truth and improves on capturing the method meaning; Consistent but with Unnecessary Info (0, 1, 0): accurate but has some unnecessary info; Missing Critical Information (21, 14, 7): comment is missing critical semantic information; Missing Task Elaboration (5, 2, 1): did not properly describe what the code was doing; Missing ... (..., 19, 5): useful comment but non-critical info missing; Missing Web-Related Information (0, 1, 0): comment failed to mention a web-related identifier; Failed to Mention Identifiers (0, 11, 6): does not mention specific variable/attribute names, often using a generic identifier; Missing Identifier (5, 3, 7): no identifier mentioned at all; Missing Prog. Language Information (0, 0, 2): missing attributes that refer to PL-specific information; Missing Database Information (1, 2, 0): missing database attributes that provide needed context to method functionality; Incorrect Identifier/Attribute (5, 19, 15): correctly identifies a variable or attribute, but uses it incorrectly; Incomplete Sentence (1, 1, 10): predicted comment is grammatically incomplete; Repetition (0, 7, 27): comment contains unnecessary repetition of a word or fragment two to three times; Extreme Repetition (0, 2, 1): comment contains unnecessary repetition of a word or fragment more than two to three times; Focusing Only on Method Name (20, 1, 0): comment focuses mostly on the method name, which provides an incomplete but partial description of the functionality; Grammatical Errors (0, 1, 0): a grammatical error is present in the predicted caption; Different Meaning (2, 3, 3): comment over-generalizes on the meaning of the code functionality; Algorithmically Incorrect, under Over-Generalization (1, 6, 3): over-generalizes to the point of incorrectness; Missing Attribute Specification (4, 12, 15): uses generic names such as var; Partial Incorrect Information (6, 11, 3): semantically meaningful, with a few errors; Semantically Unrelated to Code (0, 11, 13): does not capture the code context whatsoever; Algorithmically Incorrect, under Incorrect Semantic Information (0, 5, 3): conveys a different algorithmic meaning as compared to the code; Unnecessary File Information (0, 1, 1): adds unnecessary file information to the comment; Unnecessary Incorrect Information (1, 2, 3): adds information to the comment that is both incorrect and unnecessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ 3 Results: Comparison of three different models",
"sec_num": "4.3"
},
{
"text": "The numbers shown for each category illustrate the number of instances found for (CodeBERT, NeuralCodeSum, and code2seq) respectively Figure 2 : Taxonomy of the Errors Between the Generated Summaries and the Ground Truth to our qualitative error analysis. To this end, we make the following notable observations:",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different Meaning (2, 3, 3)",
"sec_num": null
},
{
"text": "\u2022 The most frequent error categories for Code-BERT and NeuralCodeSum are Consistent but Missing Specific Information (Code-BERT: 56/197 \u21e1 28.42% and NeuralCodeSum: 35/197 \u21e1 17.77%). However, for code2seq, the most frequent category is Repetition (27/141 \u21e1 19.15%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Meaning (2, 3, 3)",
"sec_num": null
},
{
"text": "\u2022 A non-negligible number of predictions from CodeBERT fall into the focusing Only on the Method Name category (20/197 \u21e1 10.15%). This may suggest a reliance of the model on descriptive method names in order to produce reasonable summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Meaning (2, 3, 3)",
"sec_num": null
},
{
"text": "\u2022 NeuralCodeSum and code2seq produce a small number of predictions that are Semantically Unrelated to Code. However, we did not find any such cases for CodeBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Meaning (2, 3, 3)",
"sec_num": null
},
{
"text": "\u2022 Similar to our quantitative evaluation, we find that CodeBERT performs best, but suffers from a large number of errors related to Missing Information. In future work, we will investigate the adaptation of source coverage tech-niques (Cohn et al., 2016; Mi et al., 2016) to our task to mitigate this issue.",
"cite_spans": [
{
"start": 235,
"end": 254,
"text": "(Cohn et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 255,
"end": 271,
"text": "Mi et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Different Meaning (2, 3, 3)",
"sec_num": null
},
{
"text": "Takeaway 1: The CodeBERT model illustrates improved performance on the Funcom dataset as compared to CodeXGLUE, likely due to the filtering steps undertaken in its construction. Previously, the CodeBERT model was finetuned on the CodeXGlue dataset and the smoothed BLEU-4 score obtained on the Java dataset was 17.65 (Lu et al., 2021) . However, we fine-tuned the model on the Funcom dataset and obtained a smoothed BLEU-4 score of 24.15. We believe there are two primary contributing factors to this observation: 1) A higher volume of data, and 2) filtering strategies. CodeXGLUE only provides 164923 training examples, whereas we used 400000 Java Methods and Javadoc pairs during he fine-tuning process. Moreover, The CodeXGLUE dataset is obtained from CodeSearchNet and the documents that contain special tokens (e.g., <img> or https:) are filtered. In our preprocessing, we did not completely remove such data in the preprocessing; we only remove the HTML and special characters from the JavaDoc captions. We hypothesize that such characters may contain important information and as such lead to more effective predicted summaries. Takeaway 2: Models that rely on statically parsing source code can lead to high numbers of missing/incomplete predictions. The preprocessing for the code2seq model includes generating strings from the AST node representation of each method. Unfortunately, it is difficult (or impossible) to construct a suitable AST representation for methods that fall under a certain token length threshold. As a result, about 19.98% of the original dataset could not be fed into the code2seq testing module, and for which we could not generate any prediction for these examples.",
"cite_spans": [
{
"start": 317,
"end": 334,
"text": "(Lu et al., 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Learned Lessons",
"sec_num": "5"
},
{
"text": "Takeaway 3: Some of the generated summaries provide a semantic meaning similar to the ground truth, despite exhibiting fewer n-gram matches. Our studied models can generate summaries that contain relevant semantic information which can be useful for code comprehension despite not perfectly matching the ground truth. For instance, let's consider the following example ground truth for a Java method, \"this method sets the text for the heading on the component\". The generated summary from the CodeBERT model is \"sets the heading caption\". Comparing these two descriptions will not necessarily result in a high BLEU-4 score. This suggests that a modification to the evaluation procedure for these models may provide a more realistic characterization of model performance in practice. For instance, measuring BERTScore in addition to other metrics for evaluation (Zhang et al., 2020b) 12 may help to better capture semantic similarities compared to purely symbolic similarities. Takeway 5: Future studies should explore the 12 https://github.com/Tiiiger/bert_score combination of AST traversal based and selfattention mechanism-based approaches to perform robust comment generation. AST-based approach is useful to provide syntax level information and it follows the structural tree traversal method to capture the global information. At the same time, we can see this approach is prone to errors like Repetition and Semantically Unrelated to Code. On the other hand, a selfattention mechanism is useful to capture the local information. So a multi-modal approach where standard encoders can be utilized to combine both AST-based and attention-based approaches can be a viable direction to explore further. Takeway 6: Robust evaluation metric(s) should be developed that specifically focus on source code -natural language translation. Source code is fundamentally different from the natural language from a number of perspectives. For instance, it exhibits less significant word order dependency, the significance of appropriate syntax naming and mentioning, etc. So a robust code to natural language translation evaluation metric should consider assessment from both local and global levels. Standard machine translation metrics like BLEU, ME-TEOR, ROUGE do not fully cover these factors.",
"cite_spans": [
{
"start": 862,
"end": 883,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Learned Lessons",
"sec_num": "5"
},
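{
"text": "As a concrete illustration of this suggestion, the bert_score package referenced above exposes a simple scoring function; the example below uses the ground truth and CodeBERT prediction quoted in Takeaway 3 and assumes the package's default English model.\n\nfrom bert_score import score\n\nreferences = ['this method sets the text for the heading on the component']\ncandidates = ['sets the heading caption']\n\n# Returns precision, recall, and F1 tensors with one entry per candidate/reference pair.\nP, R, F1 = score(candidates, references, lang='en', verbose=False)\nprint(F1.mean().item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Learned Lessons",
"sec_num": "5"
},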
{
"text": "As such, we encourage future work to study and develop new forms of automated metrics for assessing this special case of machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Learned Lessons",
"sec_num": "5"
},
{
"text": "6 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Learned Lessons",
"sec_num": "5"
},
{
"text": "Source code summarization is a topic of great interest in software engineering research. The aim is to automate a portion of the software documentation process by automatically generating summaries of a given granularity for a source code snippet (e.g., methods) to save developer effort. Techniques have evolved from using more traditional Information Retrieval (IR) and machine learning methods to utilizing artificial neural networks. One of the earliest deep-learning-based source code summarization techniques is that by Iyer et al. (2016) . The authors used an attention-based neural network to generate NL summaries from source code. The approach was applied to the C# programming language and SQL. Given the strong syntax associated with programming languages, researchers have also experimented with utilizing AST information for source code summarization. Hu et al. (2018) used an AST traversal method to generate summaries. Additionally, LeClair et al. (2019) utilized structural code information by encoding ASTs. Our goal in this study is to provide an overview on the performance of a variety of techniques, both sequence based (i.e., Code-BERT, NeuralCodeSum), and structure-based (i.e., code2seq), in order to examine differences in quantitative and qualitative performance across different types of models. Recently, a more complex retrieval-augmented mechanism was introduced that combines both retrieval and generation-based methods for code to comment translation (Liu et al., 2021) . Finally, Bansal et al. (2021) recently proposed a method that uses a vectorized representation of source code files. We plan to explore additional techniques such as these in future work.",
"cite_spans": [
{
"start": 526,
"end": 544,
"text": "Iyer et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 866,
"end": 882,
"text": "Hu et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 1484,
"end": 1502,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 1514,
"end": 1534,
"text": "Bansal et al. (2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Code to Comment Translation",
"sec_num": "6.1"
},
{
"text": "Although many deep learning models are capable of generating summaries from source code, very few researchers have focused on evaluating the errors made by the models from a human perspective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Studies of Code Summaries and Code Summarization",
"sec_num": "6.2"
},
{
"text": "During an early study on this topic, Ying and Robillard (2013) tried to understand whether code summaries achieved the same level of agreement from multiple human perspectives. McBurney and McMillan (2016) performed a comparison based on the similarities of the summaries generated by a newly proposed model which aimed at including context in code summaries. However, most recent work on code summarization models, e.g., Bansal et al., 2021) depend on machine translation metrics to measure the performance of the code summarization task. However, a recent study showed a necessity of revised metrics for code summarization (Stapleton et al., 2020) . Perhaps the most closely related study to ours is that conducted by Gros et al. (2020) . In this study, the authors question the validity of the formulation of code summarization as a machine translation task. In doing so, they apply code and natural language summarization models to several recently proposed code summarization datasets and one natural language dataset. They found differences between the natural language summarization and code summarization datasets that suggests marked semantic differences between the two task settings. Additionally, the authors carried out experiments which illustrate that reference-based metrics such as BLEU score may not be well suited for mea-suring the efficacy of code summarization tasks. Finally, the authors illustrate that IR techniques perform reasonably well at code summarization. While this study derives certain conclusions that are similar to those in our work (e.g., the need for better automated metrics) our study is differentiated by our manually derived fault taxonomy.",
"cite_spans": [
{
"start": 177,
"end": 205,
"text": "McBurney and McMillan (2016)",
"ref_id": "BIBREF30"
},
{
"start": 422,
"end": 442,
"text": "Bansal et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 625,
"end": 649,
"text": "(Stapleton et al., 2020)",
"ref_id": "BIBREF37"
},
{
"start": 720,
"end": 738,
"text": "Gros et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Studies of Code Summaries and Code Summarization",
"sec_num": "6.2"
},
{
"text": "To the best of our knowledge, no other study has taken on a large-scale qualitative empirical study with the objective of categorizing and understanding errors between automatically generated and ground truth code summaries. Thus, we believe this is one of the first papers to take a step toward a grounded understanding of the errors made by neural code summarization techniques -offering empirically validated insights into how future code summarization techniques might be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Studies of Code Summaries and Code Summarization",
"sec_num": "6.2"
},
{
"text": "In this work we perform both quantitative and qualitative evaluations of three popular neural code summarization techniques. Based on our quantitative analysis, we find that the CodeBERT model performs statistically significantly better than two other popular models (NeuralCodeSu, and code2seq) achieving a smoothed-BLEU-4 score of 24.15, a METEOR score of 30.34, and a ROUGE-L score of 35.65. Our qualitative analysis highlights some the most common errors made by our studied models and motivates follow-up work on improving specific model attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "7"
},
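{
"text": "As an illustration of how the reported scores can be computed at the sentence level, the following is a minimal sketch, assuming NLTK's smoothed sentence-level BLEU; this is not the exact evaluation script used in the study, and the hypothesis and reference strings are hypothetical: \nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\n\nreference = \"returns the type of this technical information\".split()\nhypothesis = \"returns the type\".split()\n\n# smoothed BLEU-4: uniform 4-gram weights plus a smoothing function for zero n-gram counts\nscore = sentence_bleu([reference], hypothesis, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=SmoothingFunction().method2)\nprint(round(100 * score, 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "7"
},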
{
"text": "In the future, we aim to expand our analysis to additional retrieval-augmented summarization techniques and to expand the scope and depth of our neural code summarization model error taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "7"
},
{
"text": "In Table 4 , we show the hyper-parameters that are used in our adapted models. Code2seq model could not be trained using batch size or 128 because of the instability occurred from the longer comment length. Originally, this model was designed to predict the method name. So we trained the model using batch size 512 in our final experiment and it required 39 epochs to train the model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "A Hyper-parameters",
"sec_num": null
},
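{
"text": "To make the training setup described above concrete, the snippet below is a minimal sketch of how the code2seq settings reported in this appendix could be collected into a configuration object; only batch_size and epochs come from the text above, while the remaining keys are hypothetical placeholders standing in for the values listed in Table 4: \n# minimal sketch; only batch_size and epochs are taken from the text of this appendix\ncode2seq_config = {\n    \"batch_size\": 512,  # a batch size of 128 was unstable with longer comment lengths\n    \"epochs\": 39,       # epochs required to train the adapted model\n    # hypothetical placeholders, not values reported in the paper:\n    \"embedding_size\": None,\n    \"learning_rate\": None,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Hyper-parameters",
"sec_num": null
},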
{
"text": "We had to perform several preprocessing steps to make the dataset ready for training. Among all the three models, we removed comments inside methods, removed tags, clean HTML, lowercasing characters, removing special characters. For the NeuralCodeSum model, we applied an additional sub-tokenization step. For code2seq, we needed to prepare the AST representation of the code snippets. To do this, we used a modified JavaExtractor 13 which locates the Java methods and put them in a file where each line is for one method. Subtokenization is performed in between to tokenize the CamelCase attributes (i.e. [\"ArrayList\"->[\"Array\", \"List\"]]). The original dataset build script was designed to put the method name in the prediction window. The modified one puts the comment instead of a method name. In Table 5 , a Java code, comment and the equivalent one line dataset instance (AST representation) is presented. While performing this step, some methods could not be parsed as this AST representation mainly because of the minimum method length threshold required for the parsing. In total, we could transform 80.02% of our training dataset on which we trained the code2seq model. All the steps used in preprocessing are shown in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 800,
"end": 807,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 1228,
"end": 1235,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "B Data Prepossessing",
"sec_num": null
},
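{
"text": "As a concrete illustration of the cleaning and sub-tokenization steps above, the following is a minimal Python sketch; the function names, regular expressions, and example strings are our own assumptions and not the scripts used in the study: \nimport re\n\ndef split_camel_case(token):\n    # e.g., \"ArrayList\" -> [\"Array\", \"List\"]; \"m_type\" -> [\"m\", \"type\"]\n    return re.findall(r\"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\\\\d+\", token)\n\ndef preprocess_comment(comment):\n    comment = re.sub(r\"<[^>]+>\", \" \", comment)          # strip HTML/Javadoc tags\n    comment = re.sub(r\"[^A-Za-z0-9\\\\s]\", \" \", comment)  # remove special characters\n    return comment.lower().split()                       # lowercase and tokenize\n\nprint(split_camel_case(\"ArrayList\"))                          # ['Array', 'List']\nprint(preprocess_comment(\"Returns the <code>type</code>.\"))   # ['returns', 'the', 'type']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Data Preprocessing",
"sec_num": null
},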
{
"text": "In Table 7 , model predictions are given with the ground truth and assigned error categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Case Study",
"sec_num": null
},
{
"text": "Original Method public Type getType() { return m type; } Comment returns the type of this technical information AST represtation returns|the|type|of|this|technical|information type,Cls0|Mth|Nm1,get|type type,Cls0|Mth|Bk|Ret|Nm0,m|type get|type,Nm1|Mth|Bk|Ret|Nm0,m|type ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Case Study",
"sec_num": null
},
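{
"text": "The one-line dataset instance shown above can be reproduced with the short sketch below (a hypothetical illustration of the format only, not the JavaExtractor code itself): the sub-tokenized comment, joined with '|', is followed by space-separated leaf,path,leaf triples.\ncomment_tokens = [\"returns\", \"the\", \"type\", \"of\", \"this\", \"technical\", \"information\"]\npath_contexts = [\n    (\"type\", \"Cls0|Mth|Nm1\", \"get|type\"),\n    (\"type\", \"Cls0|Mth|Bk|Ret|Nm0\", \"m|type\"),\n    (\"get|type\", \"Nm1|Mth|Bk|Ret|Nm0\", \"m|type\"),\n]\nline = \"|\".join(comment_tokens) + \" \" + \" \".join(\",\".join(c) for c in path_contexts)\nprint(line)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Case Study",
"sec_num": null
},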
{
"text": "https://github.com/wasiahmad/ NeuralCodeSum 8 https://github.com/tech-srl/code2seq 9 https://github.com/LRNavin/ AutoComments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some examples of the predictions are shown in Appendix C 11 All annotators are also authors of this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/LRNavin/ AutoComments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "} sets the value of the \"path\" attribute sets the path (Consistent but Missing Specific Info) sets the path (Consistent but Missing Specific Info) sets the path to the path of the path are not relative to the path of the path (Extreme Repetition) Table 7 : Detailed case study of model predictions with ground truth",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A transformer-based approach for source code summarization",
"authors": [
{
"first": "Wasi",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Saikat",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Baishakhi",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "4998--5007",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based ap- proach for source code summarization. pages 4998- 5007.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating sequences from structured representations of code",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Alon",
"suffix": ""
},
{
"first": "Shaked",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Alon, Shaked Brody, Omer Levy, and Eran Ya- hav. 2019. code2seq: Generating sequences from structured representations of code. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Project-level encoding for neural source code summarization of subroutines",
"authors": [
{
"first": "Aakash",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Sakib",
"middle": [],
"last": "Haque",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakash Bansal, Sakib Haque, and Collin McMillan. 2021. Project-level encoding for neural source code summarization of subroutines.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An empirical analysis of the impact of software development problem factors on software maintainability",
"authors": [
{
"first": "Jie-Cherng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sun-Jen",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "J. Syst. Softw",
"volume": "82",
"issue": "6",
"pages": "981--992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie-Cherng Chen and Sun-Jen Huang. 2009. An empir- ical analysis of the impact of software development problem factors on software maintainability. J. Syst. Softw., 82(6):981-992.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating structural alignment biases into an attentional neural translation model",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Cong Duy Vu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vymolova",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment bi- ases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 876-885, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluating source code summarization techniques: Replication and expansion",
"authors": [
{
"first": "B",
"middle": [
"P"
],
"last": "Eddy",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Robinson",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Kraft",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Carver",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 21st International Conference on Program Comprehension (ICPC)",
"volume": "",
"issue": "",
"pages": "13--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. P. Eddy, J. A. Robinson, N. A. Kraft, and J. C. Carver. 2013. Evaluating source code summariza- tion techniques: Replication and expansion. In 2013 21st International Conference on Program Compre- hension (ICPC), pages 13-22.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Code-BERT: A pre-trained model for programming and natural languages",
"authors": [
{
"first": "Zhangyin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1536--1547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi- aocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Code- BERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Usage and usefulness of technical software documentation: An industrial case study",
"authors": [
{
"first": "Golara",
"middle": [],
"last": "Garousi",
"suffix": ""
},
{
"first": "Vahid",
"middle": [],
"last": "Garousi-Yusifoglu",
"suffix": ""
},
{
"first": "Guenther",
"middle": [],
"last": "Ruhe",
"suffix": ""
},
{
"first": "Junji",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Moussavi",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "formation and Software Technology",
"volume": "57",
"issue": "",
"pages": "664--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Golara Garousi, Vahid Garousi-Yusifoglu, Guenther Ruhe, Junji Zhi, Mahmoud Moussavi, and Brian Smith. 2015. Usage and usefulness of technical soft- ware documentation: An industrial case study. In- formation and Software Technology, 57:664 -682.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Code to comment \"translation\": Data, metrics, baselining evaluation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Gros",
"suffix": ""
},
{
"first": "Hariharan",
"middle": [],
"last": "Sezhiyan",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Devanbu",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, ASE '20",
"volume": "",
"issue": "",
"pages": "746--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Gros, Hariharan Sezhiyan, Prem Devanbu, and Zhou Yu. 2020. Code to comment \"translation\": Data, metrics, baselining evaluation. In Proceed- ings of the 35th IEEE/ACM International Confer- ence on Automated Software Engineering, ASE '20, page 746-757, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Supporting program comprehension with source code summarization",
"authors": [
{
"first": "Sonia",
"middle": [],
"last": "Haiduc",
"suffix": ""
},
{
"first": "Jairo",
"middle": [],
"last": "Aponte",
"suffix": ""
},
{
"first": "Andrian",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonia Haiduc, Jairo Aponte, and Andrian Marcus. 2010. Supporting program comprehension with source code summarization. ICSE '10, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Association for Computing Machinery",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 26th Conference on Program Comprehension, ICPC '18",
"volume": "",
"issue": "",
"pages": "200--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, ICPC '18, page 200-210, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Codesearchnet challenge: Evaluating the state of semantic code search",
"authors": [
{
"first": "Hamel",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Ho-Hsiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tiferet",
"middle": [],
"last": "Gazit",
"suffix": ""
},
{
"first": "Miltiadis",
"middle": [],
"last": "Allamanis",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- searchnet challenge: Evaluating the state of seman- tic code search. CoRR, abs/1909.09436.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Summarizing source code using a neural attention model",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2073--2083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceed- ings of the 2004 Conference on Empirical Meth- ods in Natural Language Processing, pages 388- 395, Barcelona, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Google's PageRank and Beyond: The Science of Search Engine Rankings",
"authors": [
{
"first": "Amy",
"middle": [
"N"
],
"last": "Langville",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"D"
],
"last": "Meyer",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy N. Langville and Carl D. Meyer. 2006. Google's PageRank and Beyond: The Science of Search En- gine Rankings. Princeton University Press, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, StatMT '07, page 228-231, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improved code summarization via a graph neural network",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Leclair",
"suffix": ""
},
{
"first": "Sakib",
"middle": [],
"last": "Haque",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Program Comprehension, ICPC '20",
"volume": "",
"issue": "",
"pages": "184--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander LeClair, Sakib Haque, Lingfei Wu, and Collin McMillan. 2020. Improved code summariza- tion via a graph neural network. In Proceedings of the 28th International Conference on Program Com- prehension, ICPC '20, page 184-195, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A neural model for generating natural language summaries of program subroutines",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Leclair",
"suffix": ""
},
{
"first": "Siyuan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 41st International Conference on Software Engineering, ICSE '19",
"volume": "",
"issue": "",
"pages": "795--806",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander LeClair, Siyuan Jiang, and Collin McMil- lan. 2019. A neural model for generating natural language summaries of program subroutines. In Proceedings of the 41st International Conference on Software Engineering, ICSE '19, page 795-806. IEEE Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recommendations for datasets for source code summarization",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Leclair",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander LeClair and Collin McMillan. 2019. Rec- ommendations for datasets for source code summa- rization. CoRR, abs/1904.02660.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Orange: a method for evaluating automatic evaluation metrics for machine translation",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "The 20th International Conference on Computational Linguistics (COLING 2004)",
"volume": "",
"issue": "",
"pages": "501--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In The 20th International Conference on Computational Linguistics (COLING 2004), pages 501-507. COLING.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Enabling mutation testing for android apps",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Linares-V\u00e1squez",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Bavota",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Tufano",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [
"Di"
],
"last": "Penta",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Vendome",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Bernal-C\u00e1rdenas",
"suffix": ""
},
{
"first": "Denys",
"middle": [],
"last": "Poshyvanyk",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering",
"volume": "",
"issue": "",
"pages": "233--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Linares-V\u00e1squez, Gabriele Bavota, Michele Tu- fano, Kevin Moran, Massimiliano Di Penta, Christo- pher Vendome, Carlos Bernal-C\u00e1rdenas, and Denys Poshyvanyk. 2017. Enabling mutation testing for an- droid apps. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, page 233-244, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Retrieval-augmented generation for code summarization via hybrid {gnn}",
"authors": [
{
"first": "Shangqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Jing",
"middle": [
"Kai"
],
"last": "Siow",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2021. Retrieval-augmented genera- tion for code summarization via hybrid {gnn}. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Codexglue: A machine learning benchmark dataset for code understanding and generation",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Svyatkovskiy",
"suffix": ""
},
{
"first": "Ambrosio",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Clement",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Drain",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Tufano",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Li- dong Zhou, Linjun Shou, Long Zhou, Michele Tu- fano, Ming Gong, Ming Zhou, Nan Duan, Neel Sun- daresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. Codexglue: A machine learning bench- mark dataset for code understanding and generation.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Studying the usage of text-to",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Mastropaolo",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Scalabrino",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "David",
"middle": [
"Nader"
],
"last": "Palacio",
"suffix": ""
},
{
"first": "Denys",
"middle": [],
"last": "Poshyvanyk",
"suffix": ""
},
{
"first": "Rocco",
"middle": [],
"last": "Oliveto",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Bavota",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Mastropaolo, Simone Scalabrino, Nathan Cooper, David Nader Palacio, Denys Poshyvanyk, Rocco Oliveto, and Gabriele Bavota. 2021. Study- ing the usage of text-to-text transfer transformer to support code-related tasks.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic source code summarization of context for java methods",
"authors": [
{
"first": "P",
"middle": [
"W"
],
"last": "Mcburney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Software Engineering",
"volume": "42",
"issue": "2",
"pages": "103--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. W. McBurney and C. McMillan. 2016. Automatic source code summarization of context for java meth- ods. IEEE Transactions on Software Engineering, 42(2):103-119.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Automatic documentation generation via source code summarization of method context",
"authors": [
{
"first": "Paul",
"middle": [
"W"
],
"last": "Mcburney",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 22nd International Conference on Program Comprehension, ICPC 2014",
"volume": "",
"issue": "",
"pages": "279--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul W. McBurney and Collin McMillan. 2014. Au- tomatic documentation generation via source code summarization of method context. In Proceedings of the 22nd International Conference on Program Comprehension, ICPC 2014, page 279-290, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Coverage embedding models for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "955--960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 955-960, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Qualitative data analysis: a methods sourcebook",
"authors": [
{
"first": "Matthew",
"middle": [
"B"
],
"last": "Miles",
"suffix": ""
},
{
"first": "A",
"middle": [
"Michael"
],
"last": "Huberman",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Salda\u00f1a",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew B. Miles, A. Michael Huberman, and Johnny Salda\u00f1a. 2013. Qualitative data analysis: a methods sourcebook. Thousand Oaks, Califorinia: SAGE Publications, Inc., [2014] \u00a92014.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bleu: A method for automatic evaluation of machine translation. ACL '02",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "USA. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. ACL '02, page 311-318, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Recovering traceability links between unit tests and classes under test: An improved method",
"authors": [
{
"first": "Abdallah",
"middle": [],
"last": "Qusef",
"suffix": ""
},
{
"first": "Rocco",
"middle": [],
"last": "Oliveto",
"suffix": ""
},
{
"first": "Andrea",
"middle": [
"De"
],
"last": "Lucia",
"suffix": ""
}
],
"year": 2010,
"venue": "2010 IEEE International Conference on Software Maintenance",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdallah Qusef, Rocco Oliveto, and Andrea De Lucia. 2010. Recovering traceability links between unit tests and classes under test: An improved method. In 2010 IEEE International Conference on Software Maintenance, pages 1-10.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A human study of comprehension and code summarization",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Stapleton",
"suffix": ""
},
{
"first": "Yashmeet",
"middle": [],
"last": "Gambhir",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Leclair",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Eberhart",
"suffix": ""
},
{
"first": "Westley",
"middle": [],
"last": "Weimer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leach",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Program Comprehension, ICPC '20",
"volume": "",
"issue": "",
"pages": "2--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Stapleton, Yashmeet Gambhir, Alexander LeClair, Zachary Eberhart, Westley Weimer, Kevin Leach, and Yu Huang. 2020. A human study of comprehen- sion and code summarization. In Proceedings of the 28th International Conference on Program Compre- hension, ICPC '20, page 2-13, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Attention is all you need. NIPS'17",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Kaiser",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Atten- tion is all you need. NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Improving automatic source code summarization via deep reinforcement learning",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Haochao",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering",
"volume": "",
"issue": "",
"pages": "397--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S. Yu. 2018. Im- proving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, page 397-407, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Code fragment summarization",
"authors": [
{
"first": "Annie",
"middle": [
"T",
"T"
],
"last": "Ying",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"P"
],
"last": "Robillard",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "655--658",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie T. T. Ying and Martin P. Robillard. 2013. Code fragment summarization. ESEC/FSE 2013, page 655-658, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Retrieval-based neural source code summarization",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hailong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xudong",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, ICSE '20",
"volume": "",
"issue": "",
"pages": "1385--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020a. Retrieval-based neural source code summarization. In Proceedings of the ACM/IEEE 42nd International Conference on Soft- ware Engineering, ICSE '20, page 1385-1397, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Cost, benefits and quality of software development documentation",
"authors": [
{
"first": "Junji",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Vahid",
"middle": [],
"last": "Garousi-Yusiflu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Golara",
"middle": [],
"last": "Garousi",
"suffix": ""
},
{
"first": "Shawn",
"middle": [],
"last": "Shahnewaz",
"suffix": ""
},
{
"first": "Guenther",
"middle": [],
"last": "Ruhe",
"suffix": ""
}
],
"year": 2015,
"venue": "J. Sys. Sof",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junji Zhi, Vahid Garousi-Yusiflu, Bo Sun, Golara Garousi, Shawn Shahnewaz, and Guenther Ruhe. 2015. Cost, benefits and quality of software devel- opment documentation. J. Sys. Sof.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Automatic code summarization: A systematic literature review",
"authors": [
{
"first": "Yuxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minxue",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuxiang Zhu and Minxue Pan. 2019. Automatic code summarization: A systematic literature review. CoRR, abs/1909.04352.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Code to text translation using CodeBERT."
},
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Data Statistics. We use the Funcom dataset."
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Model Hyperparameters."
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Evaluation Results with three metrics. CodeBERT is consistently better than the other two models."
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td>Preprocessing</td><td colspan=\"2\">CodeBERT Neural-</td><td>Code2Seq</td></tr><tr><td/><td/><td>CodeSum</td><td/></tr><tr><td>removed comments inside methods removed tags for comments and</td><td>X X</td><td>X X</td><td>X X</td></tr><tr><td>methods HTML cleaning Sub-tokenization Lowercase removing special characters</td><td>X X X</td><td>X X X X</td><td>X X X X</td></tr></table>",
"html": null,
"num": null,
"text": "AST representation of java method for code2seq training"
},
"TABREF11": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}