{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:09.589890Z" }, "title": "CoTexT: Multi-task Learning with Code-Text Transformer", "authors": [ { "first": "Long", "middle": [], "last": "Phan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" }, { "first": "Hieu", "middle": [], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "VNU-HCM", "location": { "country": "Vietnam" } }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Le", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" }, { "first": "Hieu", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" }, { "first": "James", "middle": [], "last": "Anibal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" }, { "first": "Alec", "middle": [], "last": "Peltekian", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" }, { "first": "Yanfang", "middle": [], "last": "Ye", "suffix": "", "affiliation": { "laboratory": "", "institution": "Case Western Reserve University", "location": { "region": "Ohio", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present CoTexT, a pre-trained, transformerbased encoder-decoder model that learns the representative context between natural language (NL) and programming language (PL). Using self-supervision, CoTexT is pretrained on large programming language corpora to learn a general understanding of language and code. CoTexT supports downstream NL-PL tasks such as code summarizing/documentation, code generation, defect detection, and code debugging. We train CoTexT on different combinations of available PL corpus including both \"bimodal\" and \"unimodal\" data. Here, bimodal data is the combination of text and corresponding code snippets, whereas unimodal data is merely code snippets. We first evaluate CoTexT with multi-task learning: we perform Code Summarization on 6 different programming languages and Code Refinement on both small and medium size featured in the CodeXGLUE dataset. We further conduct extensive experiments to investigate Co-TexT on other tasks within the CodeXGlue dataset, including Code Generation and Defect Detection. We consistently achieve SOTA results in these tasks, demonstrating the versatility of our models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present CoTexT, a pre-trained, transformerbased encoder-decoder model that learns the representative context between natural language (NL) and programming language (PL). Using self-supervision, CoTexT is pretrained on large programming language corpora to learn a general understanding of language and code. CoTexT supports downstream NL-PL tasks such as code summarizing/documentation, code generation, defect detection, and code debugging. We train CoTexT on different combinations of available PL corpus including both \"bimodal\" and \"unimodal\" data. 
Here, bimodal data is the combination of text and corresponding code snippets, whereas unimodal data is merely code snippets. We first evaluate CoTexT with multi-task learning: we perform Code Summarization on 6 different programming languages and Code Refinement on both small and medium size featured in the CodeXGLUE dataset. We further conduct extensive experiments to investigate Co-TexT on other tasks within the CodeXGlue dataset, including Code Generation and Defect Detection. We consistently achieve SOTA results in these tasks, demonstrating the versatility of our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, pre-trained language models (LM) have played a crucial role in the development of many natural language processing (NLP) systems. Before the emergence of large LMs, traditional word embedding gives each word/token a global representation. Large pre-trained models such as ELMo (Peters et al., 2018) , GPT (Brown et al., 2020) , BERT (Devlin et al., 2018) , and XLNet (Yang et al., 2020) can derive contextualized word vector representations from large corpora. These methods can learn generalized representations of language and have significantly improved a broad range of downstream NLP tasks. These LMs make use of learning objectives such as Masked Language Modeling (MLM) (Devlin et al., 2018) where random tokens in a sequence are masked and the model predicts the original tokens to learn the context. The success of pre-trained models in NLP has created a path for domain-specific pretrained LMs, such as BioBERT (Lee et al., 2019a) on biomedical text, or TaBERT (Yin et al., 2020) on NL text and tabular data.", "cite_spans": [ { "start": 294, "end": 315, "text": "(Peters et al., 2018)", "ref_id": "BIBREF20" }, { "start": 322, "end": 342, "text": "(Brown et al., 2020)", "ref_id": null }, { "start": 350, "end": 371, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" }, { "start": 384, "end": 403, "text": "(Yang et al., 2020)", "ref_id": "BIBREF25" }, { "start": 694, "end": 715, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" }, { "start": 938, "end": 957, "text": "(Lee et al., 2019a)", "ref_id": "BIBREF14" }, { "start": 988, "end": 1006, "text": "(Yin et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce CoTexT (Code and Text Transfer Transformer), a pre-trained model for both natural language (NL) and programming language (PL) such as Java, Python, Javascript, PHP, etc. CoTexT follows the encoder-decoder architecture proposed by (Vaswani et al., 2017) with attention mechanisms. We then adapt the model to match T5 framework proposed by (Raffel et al., 2019) . We test CoTexT by performing exhaustive experiments on multi-task learning of multiple programming languages and other related tasks.", "cite_spans": [ { "start": 243, "end": 265, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 351, "end": 372, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We train CoTexT using large programming language corpora containing multiple programming languages (including Java, Python, JavaScript, Ruby, etc.) . Here, we test different combinations of unimodal and bimodal data to produce the best result for each downstream task. 
We then finetune CoTexT on four CodeXGLUE tasks (Lu et al., 2021) including CodeSummarization, CodeGeneration, Defect Detection and Code Refinement (small and medium dataset). Results show that we achieve state-of-the-art values for each of the four tasks. We found that CoTexT outperforms current SOTA models such as CodeBERT (Feng et al., 2020) and PLBART (Ahmad et al., 2021a) .", "cite_spans": [ { "start": 99, "end": 147, "text": "(including Java, Python, JavaScript, Ruby, etc.)", "ref_id": null }, { "start": 317, "end": 334, "text": "(Lu et al., 2021)", "ref_id": "BIBREF18" }, { "start": 596, "end": 615, "text": "(Feng et al., 2020)", "ref_id": null }, { "start": 620, "end": 648, "text": "PLBART (Ahmad et al., 2021a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we offer the following contribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Three different versions of CoTexT that achieve state-of-the-art on the CodeXGLUE's CodeSummarization, CodeGeneration, Defect Detection and Code Refinement (small and medium dataset) tasks. We publicize our CoTexT pre-trained checkpoints and related source code available for future studies and improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent work on domain adaptation of BERT show improvements compared to the general BERT model. BioBERT (Lee et al., 2019b) is further trained from BERT BASE on biomedical articles such as PubMed abstracts and PMC articles. Similarly, SciBERT (Beltagy et al., 2019) is trained on the full text of biomedical and computer science papers. The experimental results of these models on domain-specific datasets show the enhanced performance compared to BERT BASE . Relating specfically to our work, CodeBERT is (Feng et al., 2020) trained on bimodal data of NL-PL pairs. This strategy allows CodeBERT to learn general-purpose representations of both natural language and programming language. GraphCode-BERT (Guo et al., 2021) is an extension of Code-BERT that moves beyond syntactic-level structure and uses data flow in the pre-training stage to capture the semantic-level structure of code. More recently, PLBART (Ahmad et al., 2021b) is a pretrained sequence-to-sequence model for NL and PL. Through denoising autoencoding, this model can perform well on NL-PL understanding and generation tasks.", "cite_spans": [ { "start": 103, "end": 122, "text": "(Lee et al., 2019b)", "ref_id": "BIBREF15" }, { "start": 242, "end": 264, "text": "(Beltagy et al., 2019)", "ref_id": "BIBREF2" }, { "start": 505, "end": 524, "text": "(Feng et al., 2020)", "ref_id": null }, { "start": 702, "end": 720, "text": "(Guo et al., 2021)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Following the example of T5 (Raffel et al., 2019) , we use the Sentence Piece Unsupervised Text Tokenizer proposed by (Kudo and Richardson, 2018) . The Sentence Piece model extracts the sub-words that contain the semantic context of a sequence. We employ Sentence Piece as a vocabulary model for all of our contributed CoTexT models. However, the special tokens used in code (such as \"[\", \"{\", \"$\", etc) are out-of-vocab for the SentencePiece model 1 . These tokens have a crucial representative context in programming languages. 
Therefore, to enhance the robustness of the model, we encode all of these missing tokens into a natural language representation during both self-supervised and supervised training. Figure 1 : An illustration about Fill-in-the-blank objective", "cite_spans": [ { "start": 28, "end": 49, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" }, { "start": 118, "end": 145, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 711, "end": 719, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Vocabulary", "sec_num": "3.1" }, { "text": "We train CoTexT on both bimodal and unimodal data. Bimodal data contains both code snippets and the corresponding natural text in each sequence, while unimodal data contains only the sequence of code. We use two main datasets during selfsupervised training: CodeSearchNet Corpus Collection (Husain et al., 2020) and GitHub Repositories 2 data. The combinations of corpus used to train CoTexT are listed in Table 1 . To save both time and computing resources, we initialized the checkpoints from the original T5 that was trained on the C4 corpus. (Raffel et al., 2019) .", "cite_spans": [ { "start": 290, "end": 311, "text": "(Husain et al., 2020)", "ref_id": "BIBREF10" }, { "start": 546, "end": 567, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 406, "end": 413, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Pre-training CoTexT", "sec_num": "3.2" }, { "text": "CodeSearchNet Corpus (Husain et al., 2020) contains coded functions from open-source non-forked Github repositories. This dataset spans 6 coding languages (Python, Java, Javascript, PHP, Ruby, Go), which facilitates multi-task learning. Code-SearchNet also contains a natural language description for each function. For bimodal data, we simply concatenate the natural language snippet with the corresponding code snippet to create one input sequence. These data are then processed as described in 3.1.", "cite_spans": [ { "start": 21, "end": 42, "text": "(Husain et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "CodeSearchNet Corpus Collection", "sec_num": "3.2.1" }, { "text": "We download a large collection of Java and Python functions from the GitHub repositories dataset available on Google BigQuery. These Java and Python functions are then extracted and the natural language descriptions are obtained using the preprocessing pipeline from (Lachaux et al., 2020) . These datapoints also run through a pipeline to replace special tokens (as described in 3.1).", "cite_spans": [ { "start": 267, "end": 289, "text": "(Lachaux et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "GitHub repositories", "sec_num": "3.2.2" }, { "text": "CoTexT converts all NLP problems into a textto-text format. This means that during both self- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input/Output Representations", "sec_num": "3.3" }, { "text": "Model N-modal Corpus combination T5 NL C4 CoTexT (1-CC) PL C4 + CodeSearchNet CoTexT (2-CC) NL-PL C4 + CodeSearchNet CoTexT (1-CCG) PL C4 + CodeSearchNet + Github Repos", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input/Output Representations", "sec_num": "3.3" }, { "text": "supervised pre-training and supervised training, we use an input sequence and a target sequence. 
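Before a function enters this text-to-text pipeline, the special tokens discussed in Section 3.1 are rewritten into natural language; the minimal sketch below shows one possible form of that rewriting. The placeholder strings and helper names are illustrative assumptions, not the authors' released preprocessing code.

```python
# Minimal sketch (assumed placeholder mapping): rewrite code symbols that
# SentencePiece would treat as out-of-vocabulary into natural-language tokens
# before a function is serialized into an input sequence.
SPECIAL_TOKEN_MAP = {
    "{": " OPEN_CURLY_TOKEN ",
    "}": " CLOSE_CURLY_TOKEN ",
    "[": " OPEN_SQUARE_TOKEN ",
    "]": " CLOSE_SQUARE_TOKEN ",
    "$": " DOLLAR_TOKEN ",
}

def encode_special_tokens(code: str) -> str:
    """Replace out-of-vocab code symbols with textual placeholders."""
    for symbol, placeholder in SPECIAL_TOKEN_MAP.items():
        code = code.replace(symbol, placeholder)
    return " ".join(code.split())  # collapse the extra whitespace

print(encode_special_tokens("int[] xs = {1, 2, 3};"))
# -> int OPEN_SQUARE_TOKEN CLOSE_SQUARE_TOKEN xs = OPEN_CURLY_TOKEN 1, 2, 3 CLOSE_CURLY_TOKEN ;
```
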
For the bimodal model, we concatenate a sequence of natural language text and the corresponding sequence of programming language text as an input. For the unimodal model, we simply use each coded function as an input sequence. During selfsupervised training, spans of the input sequence are randomly masked and the target sequence (Raffel et al., 2019) is formed as the concatenation of the same sentinel tokens and the real masked spans/tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input/Output Representations", "sec_num": "3.3" }, { "text": "CoTexT follows the sequence-to-sequence encoderdecoder architecture proposed by (Vaswani et al., 2017) . We initialize the Base T5 model released by (Raffel et al., 2019) which has 220 million parameters. We train the model with a 0.001 learning rate and an input/target length of 1024. With the provided TPU v2-8 on Google Colab, we train with the recommended setting of model parallelism 2 and batch size 128.", "cite_spans": [ { "start": 80, "end": 102, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 149, "end": 170, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.4" }, { "text": "The model is trained with maximum likelihood objective (that is using \"teacher forcing\" (Williams and Zipser, 1989) ) regardless of the text-code or code-text tasks. Therefore, for CoTexT, we leverage the potential for Multi-Task learning (Raffel et al., 2019) to complete both text-code and codetext generation on CodeSummarization and Code Refinement tasks. To specify the task our model should perform, we simply add a task-specific prefix to the input sequence. For example, when finetuning of the CodeSummarization task for each programming language, we simply prepend a prefix for each PL name (i.e., Java) to the input sequence.", "cite_spans": [ { "start": 88, "end": 115, "text": "(Williams and Zipser, 1989)", "ref_id": "BIBREF24" }, { "start": 239, "end": 260, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-task Learning", "sec_num": "3.5" }, { "text": "In this section, we will first describe the benchmark dataset for code intelligence CodeXGLUE, then we To display Hello on the screen Figure 2 : An illustration about Multi-task learning will explain the experimental setup on the tasks we perform and discuss the results of each task. The evaluation datasets are summarized in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 2", "ref_id": null }, { "start": 327, "end": 334, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "General Language Understanding Evaluation benchmark for CODE (CodeXGLUE) (Lu et al., 2021 ) is a benchmark dataset to facilitate machine learning studies on code understanding and code generation problems. This dataset includes a collection of code intelligence tasks (both classification and generation), a platform for model evaluation, and a leaderboard for comparison. CodeXGLUE has 10 code intelligence tasks including code-text, text-code, code-code, and text-text scenarios. 
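Each of these scenarios can be cast into the text-to-text format of Section 3.5 by prepending a task-specific prefix to the input sequence; the sketch below illustrates that serialization. The literal prefix strings and the helper function are our own assumptions rather than the exact prefixes used in the released code.

```python
# Minimal sketch (assumed prefixes): serialize examples from different tasks
# into the shared text-to-text format by prepending a task-specific prefix.
def make_example(task_prefix: str, source: str, target: str) -> dict:
    return {"input": f"{task_prefix}: {source}", "target": target}

mixed_batch = [
    # Code Summarization uses one prefix per programming language (Section 3.5).
    make_example("java", 'System.out.println("Hello");', "Prints Hello to the console."),
    make_example("python", 'print("Hello")', "Prints Hello to the console."),
    # Code Refinement uses one prefix per subset in the multi-task setup.
    make_example("refine small",
                 "public int add(int a, int b) { return a - b; }",
                 "public int add(int a, int b) { return a + b; }"),
]

for example in mixed_batch:
    print(example["input"], "=>", example["target"])
```
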
For CoTexT, we focus on Code Summarization, Code Generation, Code Refinement, and Defect Detection tasks.", "cite_spans": [ { "start": 73, "end": 89, "text": "(Lu et al., 2021", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "CodeXGLUE", "sec_num": "4.1" }, { "text": "We evaluate our programming language and natural language generation tasks on TPU v2-8 with the settings from the original T5 model (Raffel et al., 2019) . The input length and target length for each task are described in Table 2 .", "cite_spans": [ { "start": 132, "end": 153, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 222, "end": 229, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "4.2" }, { "text": "For Code Summarization, the objective is to generate a natural language description for a given code snippet. The task includes a CodeSearchNet dataset (Husain et al., 2019) with 6 different programming languages: Python, Java, Javascript, PHP, Ruby, Go. The data comes from public open-source nonfork GitHub repositories and the annotations are ex- tracted from function documentation as described in (Husain et al., 2019) .", "cite_spans": [ { "start": 152, "end": 173, "text": "(Husain et al., 2019)", "ref_id": "BIBREF9" }, { "start": 402, "end": 423, "text": "(Husain et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Code Summarization", "sec_num": "4.2.1" }, { "text": "Text-to-Code Generation aims to generate a coded function given a natural language description. This task is completed using the CONCODE dataset (Iyer et al., 2018) , a well-known dataset for Java language generation. Within the dataset, there are tuples which contain a natural language description, code environments, ad code snippets. The goal is to generate the correct Java function from the natural language description in the form of Javadoc-style method comments.", "cite_spans": [ { "start": 145, "end": 164, "text": "(Iyer et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Code Generation", "sec_num": "4.2.2" }, { "text": "Code Refinement, or Code Repair, aims to automatically correct bugs in Java code. We used the Bug2Fix corpus released by CodeXGLUE (Lu et al., 2021) , which divides the task into 2 subsets: SMALL and MEDIUM The small dataset includes only Java code functions with fewer than 50 tokens. The medium dataset includes functions with 50-100 tokens.", "cite_spans": [ { "start": 131, "end": 148, "text": "(Lu et al., 2021)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Code Refinement", "sec_num": "4.2.3" }, { "text": "For Defect Detection tasks, we attempt to classify whether a PL snippet contains vulnerabilities that could lead to damaging outcomes such as resource leaks or DoS attacks. The task uses the Devign dataset , which contains C programming language from open-source projects. This dataset is labeled based on security-related commits. For details on the annotation process, refer to .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defect Detection", "sec_num": "4.2.4" }, { "text": "We compare our model with some well-known pretrained models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3.1" }, { "text": "\u2022 CodeGPT, CodeGPT-adapted are based on the architecture and training objective of GPT-2 (Budzianowski and Vulic, 2019) . 
CodeGPT is pre-trained from scratch on CodeSearch-Net dataset (Lu et al., 2021) while CodeGPTadapted learns this dataset starting from the GPT-2 checkpoint.", "cite_spans": [ { "start": 89, "end": 119, "text": "(Budzianowski and Vulic, 2019)", "ref_id": "BIBREF4" }, { "start": 184, "end": 201, "text": "(Lu et al., 2021)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3.1" }, { "text": "\u2022 CodeBERT (Feng et al., 2020) employs the same architecture as RoBERTa but aims to minimize the combined loss from masked language modeling and replaced token detection.", "cite_spans": [ { "start": 11, "end": 30, "text": "(Feng et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3.1" }, { "text": "\u2022 PLBART (Ahmad et al., 2021b) is a Transformer-based model. BART (Lewis et al., 2019) is trained on PL corpora using three learning strategies: token masking, token deletion, and token infilling.", "cite_spans": [ { "start": 66, "end": 86, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3.1" }, { "text": "\u2022 BLEU (Papineni et al., 2002) is an algorithm which performs automatic evaluation of machine-translated text. This method calculates the n-gram similarity of a candidate translation compared to a set of reference texts. Similar to (Feng et al., 2020) and (Ahmad et al., 2021b) , we use smooth BLEU-4 score (?) for Code Summarization and corpus-level BLEU score for all remaining tasks.", "cite_spans": [ { "start": 7, "end": 30, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF19" }, { "start": 232, "end": 251, "text": "(Feng et al., 2020)", "ref_id": null }, { "start": 256, "end": 277, "text": "(Ahmad et al., 2021b)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "4.3.2" }, { "text": "\u2022 CodeBLEU (Ren et al., 2020) is designed to consider syntactic and semantic features of (Iyer et al., 2018) codes based on the abstract syntax tree and the data flow structure.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Iyer et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "4.3.2" }, { "text": "\u2022 Accuracy is the ratio of the number of generated sequences that harmonise the reference to the total number of observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Metrics", "sec_num": "4.3.2" }, { "text": "We first report the result of CoTexT in Multi-Task Learning tasks including Code Summarization and Code Refinement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Task Learning", "sec_num": "5.1" }, { "text": "For the Code Summarization task, we perform Multi-Task Learning by using the T5 framework (Raffel et al., 2019) to finetune CoTexT on 6 diferent programming language (Ruby, Javascript, Go, Python, Java, and PHP). The results of the Code Summarization task are shown in Table 5 . First, we observe that the base T5, which is pre-trained only on the general domain corpus (C4), is effective on this task. In fact, base T5 achieves higher overall results on the BLEU-4 metric compared to all other related models on the CodeXGLUE leaderboard. 
This shows the importance of domain-specific T5 models, which we expect to achieve superior results compared to base T5.", "cite_spans": [ { "start": 90, "end": 111, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Code Summarization", "sec_num": "5.1.1" }, { "text": "We further observe that CoTexT achieves stateof-the-art (SOTA) on the overall score, the Python-specific score, the Java-specific score, and the Gospecific score. While CoTexT does not significantly outperform other pre-trained models, we observe that CoTexT achieves SOTA on two very common programming languages (Python and Java) while still obtaining competitive results on other programming languages. We attribute this result to the large amount of training data for Python and Java compared to the other languages (training size described in Table 3 ). Based on this result, CoTeXT has the potential to further surpass competitor models as more training data becomes availible.", "cite_spans": [], "ref_spans": [ { "start": 548, "end": 555, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Code Summarization", "sec_num": "5.1.1" }, { "text": "We also tested CoTexT by performing multi-task learning for Code Refinement. In this case, both the small and medium test sets have a task registry with respective prefix prepending to the input sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Code Refinement", "sec_num": "5.1.2" }, { "text": "The Code Refinement results of each model are shown in Table 6 . For this task, the base T5, which is pre-trained only on natural language text, does not perform well compared to other transformerbased models. Yet, after the training on a large programming language corpus, the result from Co-TexT improves significantly on all metrics for both small and medium test sets. CoTexT achieves SOTA for all metrics on the small test set and on the accuracy metric for the medium test set.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Code Refinement", "sec_num": "5.1.2" }, { "text": "In addition to multi-task learning, we also evaluate CoTexT performance single-task learning with a Code Generation Task and a classification task relating to Defect Detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-Task Learning", "sec_num": "5.2" }, { "text": "In Table 4 , we reported our results for the Code Generation task wherein natural language is translated into Java code. The result shows that our proposed model achieves SOTA results based on 3 metrics: Exact Match (EM), BLEU, and Code-BLEU. For each individual metric, CoTexT has only slightly outperformed other models (e.g both CoTexT and CodeGPT-adapted achieve 20.10 for EM). However, our model is consistently superior across the 3 metrics. Prior to CoTexT, CodeGPTadapted was SOTA for the EM metric and PLBART was SOTA for the BLUE/CodeBLUE metrics. From this result, we infer that CoTexT has the best overall performance on this task and has great potential in the area of code generation.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Code Generation", "sec_num": "5.2.1" }, { "text": "The Defect Detection results are shown in Table 7 . 
Specifically, CoText outperforms the previous SOTA model (PLBART) by 3.44%. For this task, extra training on a large programming corpus allows CoTexT to outperform all other models and achieve SOTA results. The Defect Detection dataset consists of code written in the C programming language, which is not contained in our training data. Our model has a strong understanding of similar languages, and is thus able to perform Defect Detection in C with improved results compared to competitor models.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 50, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Defect Detection", "sec_num": "5.2.2" }, { "text": "In this manuscript, we introduced CoTexT, a pretrained language representation for both programming language and natural language. CoTexT focused on text-code and code-text understanding and generating. Leveraging the T5 framework (Raffel et al., 2019) , we showed that pre-training on a large programming language corpus is effective for a diverse array of tasks within the natural language and programming language domain. CoTexT achieves state-of-the-art results on 4 CodeXGLUE code intelligence tasks: Code Summarization, Code Generation, Code Refinement, and Code Detection. For future work, we plan to test CoTexT on a broader range of programming language and natural language generation tasks, such as autocompletion or code translation.", "cite_spans": [ { "start": 231, "end": 252, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://console.cloud.google.com/marketplace/details/github/githubrepos", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unified pretraining for program understanding and generation", "authors": [ { "first": "Saikat", "middle": [], "last": "Wasi Uddin Ahmad", "suffix": "" }, { "first": "Baishakhi", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Ray", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021a. Unified pre- training for program understanding and generation.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unified pretraining for program understanding and generation", "authors": [ { "first": "Saikat", "middle": [], "last": "Wasi Uddin Ahmad", "suffix": "" }, { "first": "Baishakhi", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Ray", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021b. Unified pre- training for program understanding and generation. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Scibert: Pretrained contextualized embeddings for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. CoRR, abs/1903.10676.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Hello, it's GPT-2 -how can I help you? towards the use of pretrained language models for task-oriented dialogue systems", "authors": [ { "first": "Pawel", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" } ], "year": 2019, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawel Budzianowski and Ivan Vulic. 2019. Hello, it's GPT-2 -how can I help you? towards the use of pre- trained language models for task-oriented dialogue systems. CoRR, abs/1907.05774.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Xiaocheng Feng, Ming Gong, Linjun Shou", "authors": [ { "first": "Zhangyin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Daya", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi- aocheng Feng, Ming Gong, Linjun Shou, Bing Qin,", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Codebert: A pre-trained model for programming and natural languages", "authors": [ { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Code- bert: A pre-trained model for programming and nat- ural languages. CoRR, abs/2002.08155.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. Graphcode{bert}: Pre-training code representations with data flow. 
In International Conference on Learning Representations", "authors": [ { "first": "Daya", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Shuo Ren", "suffix": "" }, { "first": "Zhangyin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Feng", "suffix": "" }, { "first": "", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Shujie", "suffix": "" }, { "first": "Long", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Svyatkovskiy", "suffix": "" }, { "first": "Shengyu", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Tufano", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Shao Kun Deng", "suffix": "" }, { "first": "", "middle": [], "last": "Clement", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie LIU, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. Graphcode{bert}: Pre-training code represen- tations with data flow. In International Conference on Learning Representations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Codesearchnet challenge: Evaluating the state of semantic code search", "authors": [ { "first": "Hamel", "middle": [], "last": "Husain", "suffix": "" }, { "first": "Ho-Hsiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Tiferet", "middle": [], "last": "Gazit", "suffix": "" }, { "first": "Miltiadis", "middle": [], "last": "Allamanis", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- searchnet challenge: Evaluating the state of seman- tic code search. CoRR, abs/1909.09436.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Codesearchnet challenge: Evaluating the state of semantic code search", "authors": [ { "first": "Hamel", "middle": [], "last": "Husain", "suffix": "" }, { "first": "Ho-Hsiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Tiferet", "middle": [], "last": "Gazit", "suffix": "" }, { "first": "Miltiadis", "middle": [], "last": "Allamanis", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2020. 
Code- searchnet challenge: Evaluating the state of seman- tic code search.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mapping language to code in programmatic context", "authors": [ { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Alvin", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. CoRR, abs/1808.09588.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. CoRR, abs/1808.06226.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised translation of programming languages", "authors": [ { "first": "Marie-Anne", "middle": [], "last": "Lachaux", "suffix": "" }, { "first": "Baptiste", "middle": [], "last": "Roziere", "suffix": "" }, { "first": "Lowik", "middle": [], "last": "Chanussot", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, and Guillaume Lample. 2020. Unsuper- vised translation of programming languages.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019a. Biobert: a pre-trained biomedical language representation model for biomedical text mining. 
CoRR, abs/1901.08746.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "Bioinformatics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1093/bioinformatics/btz682" ] }, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019b. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Ro{bert}a: A robustly optimized {bert} pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. 
Ro{bert}a: A robustly optimized {bert} pretraining approach.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Codexglue: A machine learning benchmark dataset for code understanding and generation", "authors": [ { "first": "Shuai", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Daya", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Svyatkovskiy", "suffix": "" }, { "first": "Ambrosio", "middle": [], "last": "Blanco", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Clement", "suffix": "" }, { "first": "Dawn", "middle": [], "last": "Drain", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Linjun", "middle": [], "last": "Shou", "suffix": "" }, { "first": "Long", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Tufano", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Li- dong Zhou, Linjun Shou, Long Zhou, Michele Tu- fano, Ming Gong, Ming Zhou, Nan Duan, Neel Sun- daresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. Codexglue: A machine learning bench- mark dataset for code understanding and generation.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bleu: A method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, ACL '02, page 311-318, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. CoRR, abs/1802.05365.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. CoRR, abs/1910.10683.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Codebleu: a method for automatic evaluation of code synthesis", "authors": [ { "first": "Daya", "middle": [], "last": "Shuo Ren", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Long", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Neel", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Sundaresan", "suffix": "" }, { "first": "Ambrosio", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Blanco", "suffix": "" }, { "first": "", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Am- brosio Blanco, and Shuai Ma. 2020. 
Codebleu: a method for automatic evaluation of code synthesis.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A learning algorithm for continually running fully recurrent neural networks", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" }, { "first": "David", "middle": [], "last": "Zipser", "suffix": "" } ], "year": 1989, "venue": "Neural Comput", "volume": "1", "issue": "2", "pages": "270--280", "other_ids": { "DOI": [ "10.1162/neco.1989.1.2.270" ] }, "num": null, "urls": [], "raw_text": "Ronald J. Williams and David Zipser. 1989. A learn- ing algorithm for continually running fully recurrent neural networks. Neural Comput., 1(2):270-280.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2020. Xlnet: Generalized autoregressive pretraining for language understanding.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Tabert: Pretraining for joint understanding of textual and tabular data", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Wen Tau Yih", "suffix": "" }, { "first": "", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Graham Neubig, Wen tau Yih, and Se- bastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks", "authors": [ { "first": "Yaqin", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shangqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jingkai", "middle": [], "last": "Siow", "suffix": "" }, { "first": "Xiaoning", "middle": [], "last": "Du", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. 
Devign: Effective vulner- ability identification by learning comprehensive pro- gram semantics via graph neural networks.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "CoTexT javascript: console.log(\"Hello\"); ruby: puts \"Hello\" go: fmt.Println(\"Hello\") python: print(\"Hello\") java: System.out.println(\"Hello\"); PHP: echo \"Hello\";" }, "TABREF1": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Pre-training CoTexT on different combinations of natural language and programming language copora" }, "TABREF2": { "content": "
Task | Dataset | Task Type | Input Length | Target Length
Self-supervised Learning | CodeSearchNet Corpus | - | 1024 | 1024
Self-supervised Learning | GitHub Repositories | - | 1024 | 1024
Code Summarization | CodeSearchNet | Multi-Task | 512 | 512
Code Generation | CONCODE | Single-Task | 256 | 256
Code Refinement | Bugs2Fix small, Bugs2Fix medium | Multi-Task | 512 | 512
Defect Detection | Devign | Single-Task | 1024 | 5
", "type_str": "table", "html": null, "num": null, "text": "The input and target sequence length settings for each self-supervised learning, code summarization, code generation, code refinement, and defect detection task" }, "TABREF3": { "content": "
Category | Task | Dataset | Train | Val | Test | Language
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 164K | 5.1K | 10.9K | Java
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 58K | 3.8K | 3.2K | Javascript
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 251K | 13.9K | 14.9K | Python
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 241K | 12.9K | 14K | PHP
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 167K | 7.3K | 8.1K | Go
Code-Text | Code Summarization (Lu et al., 2021) | CodeSearchNet | 24K | 1.4K | 1.2K | Ruby
Code-Code | Defect Detection (Zhou et al., 2019) | Devign | 21K | 2.7K | 2.7K | C
Code-Code | Code Refinement (Lu et al., 2021) | Bugs2Fix small | 46K | 5.8K | 5.8K | Java
Code-Code | Code Refinement (Lu et al., 2021) | Bugs2Fix medium | 52K | 6.5K | 6.5K | Java
Text-Code | Code Generation | CONCODE | 100K | 2K | 2K | Java
", "type_str": "table", "html": null, "num": null, "text": "Data statistics about Code Intelligence datasets" }, "TABREF4": { "content": "
Text2Code Generation
Model | EM | BLEU | CodeBLEU
PLBART | 18.75 | 36.69 | 38.52
CodeGPT-adapted | 20.10 | 32.79 | 35.98
CodeGPT | 18.25 | 28.69 | 32.71
T5 | 18.65 | 32.74 | 35.95
CoTexT (1-CCG) | 19.45 | 35.40 | 38.47
CoTexT (2-CC) | 20.10 | 36.51 | 39.49
CoTexT (1-CC) | 20.10 | 37.40 | 40.14
Notes: The best scores are in bold and second-best scores are underlined. The baseline scores were obtained from the CodeXGLUE leaderboard (https://microsoft.github.io/CodeXGLUE/)
", "type_str": "table", "html": null, "num": null, "text": "Test result on Code Generation task" }, "TABREF5": { "content": "
Model | All | Ruby | Javascript | Go | Python | Java | PHP
RoBERTa | 16.57 | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02
CodeBERT | 17.83 | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16
PLBART | 18.32 | 14.11 | 15.56 | 18.91 | 19.30 | 18.45 | 23.58
T5 | 18.35 | 14.18 | 14.57 | 19.17 | 19.26 | 18.35 | 24.59
CoTexT (1-CCG) | 18.00 | 13.23 | 14.75 | 18.95 | 19.35 | 18.75 | 22.97
CoTexT (2-CC) | 18.38 | 13.07 | 14.77 | 19.37 | 19.52 | 19.10 | 24.47
CoTexT (1-CC) | 18.55 | 14.02 | 14.96 | 18.86 | 19.73 | 19.06 | 24.58
", "type_str": "table", "html": null, "num": null, "text": "Test result on Code Summarization task The best scores are in bold and second best scores are underlined. The baseline scores were obtained from the CodeXGLUE's Leaderboard (https://microsoft.github.io/CodeXGLUE/)" }, "TABREF6": { "content": "
Model | Small BLEU | Small Acc(%) | Small CodeBLEU | Medium BLEU | Medium Acc(%) | Medium CodeBLEU
Transformer | 77.21 | 14.70 | 73.31 | 89.25 | 3.70 | 81.72
CodeBERT | 77.42 | 16.40 | 75.58 | 91.07 | 5.16 | 87.52
PLBART | 77.02 | 19.21 | / | 88.50 | 8.98 | /
T5 | 74.94 | 15.30 | 75.85 | 88.28 | 4.11 | 85.61
CoTexT (1-CCG) | 76.87 | 20.39 | 77.34 | 88.58 | 12.88 | 86.05
CoTexT (2-CC) | 77.28 | 21.58 | 77.38 | 88.68 | 13.03 | 84.41
CoTexT (1-CC) | 77.79 | 21.03 | 76.15 | 88.40 | 13.11 | 85.83
", "type_str": "table", "html": null, "num": null, "text": "Test result on Code Refinement task" }, "TABREF7": { "content": "
Model | Accuracy
RoBERTa | 61.05
CodeBERT | 62.08
PLBART | 63.18
T5 | 61.93
CoTexT (1-CCG) | 66.62
CoTexT (2-CC) | 64.49
CoTexT (1-CC) | 65.99
Notes: The best scores are in bold and second-best scores are underlined. The baseline scores were obtained from the CodeXGLUE leaderboard (https://microsoft.github.io/CodeXGLUE/)
", "type_str": "table", "html": null, "num": null, "text": "Test result on Defect Detection task" } } } }