{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:05.373137Z" }, "title": "CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model", "authors": [ { "first": "Tae-Hwan", "middle": [], "last": "Jung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyung Hee University", "location": {} }, "email": "nlkey2022@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In version control using Git, the commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. To write a good commit message, the message should briefly summarize the source code changes, which takes a lot of time and effort. Therefore, a lot of research has been studied to automatically generate a commit message when a code modification is given. However, in most of the studies so far, there was no curated dataset for code modifications (additions and deletions) and corresponding commit messages in various programming languages. The model also had difficulty learning the contextual representation between code modification and natural language. To solve these problems, we propose the following two methods: (1) We collect code modification and corresponding commit messages in Github for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) and release a wellorganized 345K pair dataset. (2) In order to resolve the large gap in contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020), a pre-trained language model (PLM) for programming code, as an initial model. Using two methods leads to successful results in the commit message generation task. Also, this is the first research attempt in finetuning commit generation using various programming languages and code PLM. Training code, dataset, and pretrained weights are available at https://github.com/graykode/commitautosuggestions.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In version control using Git, the commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. To write a good commit message, the message should briefly summarize the source code changes, which takes a lot of time and effort. Therefore, a lot of research has been studied to automatically generate a commit message when a code modification is given. However, in most of the studies so far, there was no curated dataset for code modifications (additions and deletions) and corresponding commit messages in various programming languages. The model also had difficulty learning the contextual representation between code modification and natural language. To solve these problems, we propose the following two methods: (1) We collect code modification and corresponding commit messages in Github for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) and release a wellorganized 345K pair dataset. (2) In order to resolve the large gap in contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020), a pre-trained language model (PLM) for programming code, as an initial model. Using two methods leads to successful results in the commit message generation task. 
Also, this is the first attempt to fine-tune commit message generation using various programming languages and a code PLM. Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commitautosuggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A commit message is the smallest unit that summarizes source code changes in natural language. In the Git process, git diff uses the unified format (unidiff): a line marked in red or green is a modified line, where green-highlighted '+' lines are the added code and red-highlighted '-' lines are the deleted code. A good commit message allows developers to visualize the commit history at a glance, so many teams try to keep commit quality high by creating rules for commit messages. For example, Conventional Commits is one such convention; it requires a verb of a specified type, such as 'Add' or 'Fix', as the first word and limits the character length. It is very tricky to follow all these rules and still write a good commit message, so many developers ignore them for lack of time and motivation. It would therefore be very efficient if the commit message were written automatically when a code modification is given.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similar to text summarization, many studies have taken the code modification X = (x_1, ..., x_n) as encoder input and the commit message Y = (y_1, ..., y_m) as decoder input in an NMT (neural machine translation) model (Loyola et al., 2017; van Hal et al., 2019). However, taking the code modification as model input without distinguishing between the added and deleted parts makes it difficult for the NMT model to understand the context of the modification. In addition, previous studies tend to train from scratch, but this does not perform well because it leaves a large gap in contextual representation between programming language (PL) and natural language (NL). To overcome the problems of previous studies and train a better commit message generation model, our approach follows two stages:", "cite_spans": [ { "start": 242, "end": 262, "text": "Loyola et al., 2017;", "ref_id": "BIBREF10" }, { "start": 263, "end": 284, "text": "van Hal et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) Collecting and processing data as pairs of the added and deleted parts of the code, X = ((add_1, del_1), ..., (add_n, del_n)). To feed this pair dataset into the Transformer-based NMT model (Vaswani et al., 2017), we use the BERT (Devlin et al., 2018) sentence-pair fine-tuning method, treating the added and deleted parts as the two sentences. This yields a better BLEU-4 score (Papineni et al., 2002) than previous works that use the raw git diff. Similar to CodeSearchNet (Husain et al., 2019), our data is collected for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) from Github so that the model performs well across languages. 
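As an illustrative aside, this sentence-pair encoding can be sketched with the HuggingFace transformers API. The snippet below is a minimal sketch under stated assumptions, not our exact pipeline: microsoft/codebert-base is the public CodeBERT checkpoint, and the added/deleted snippets are hypothetical examples.

from transformers import RobertaTokenizer

# Minimal sketch: encode (added, deleted) as a RoBERTa-style sentence pair,
# i.e. <s> added </s></s> deleted </s>. The checkpoint name and the code
# snippets are illustrative assumptions, not values from the dataset.
tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
added = "def total(a, b): return a + b"   # hypothetical '+' lines
deleted = "def total(a, b): return a"     # hypothetical '-' lines
inputs = tokenizer(added, deleted, return_tensors="pt")
print(inputs["input_ids"].shape)  # (1, sequence_length)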
We finally release a well-organized dataset of 345K code modification and commit message pairs.", "cite_spans": [ { "start": 202, "end": 224, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" }, { "start": 243, "end": 264, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF1" }, { "start": 376, "end": 399, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF14" }, { "start": 466, "end": 487, "text": "(Husain et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) To close the large gap in contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020), a language model well trained on the code domain, as the initial weight. Using CodeBERT as the initial weight yields a better BLEU-4 score for commit message generation than random initialization or RoBERTa (Liu et al., 2019). Additionally, when we first train on the Code-to-NL task of documenting source code in CodeSearchNet and then use the result as the initial weight for commit generation, the gap in contextual representation between PL and NL is reduced further.", "cite_spans": [ { "start": 134, "end": 152, "text": "(Feng et al., 2020", "ref_id": "BIBREF2" }, { "start": 384, "end": 402, "text": "(Liu et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Commit message generation has been studied in various ways. One early study collects 2M commits from Mauczka et al. (2015) and the top 1K Java projects on Github. Among the commit messages, only those that keep the format of \"Verb + Object\" are filtered and grouped into verb types with similar characteristics, and then a classification model is trained with a naive Bayes classifier. A follow-up study uses this commit data to generate commit messages with an attention-based RNN encoder-decoder NMT model; it filters the 2M commits again with a \"verb/direct-object pattern\" and finally uses 26K commit messages. Loyola et al. (2017) use a similar NMT model, but take as training data git diff and commit pairs collected from one to three repositories each of Python, Java, JavaScript, and C++. Liu et al. (2018) propose a retrieval model over the same 26K commits: each code modification is represented as a bag-of-words vector, and the message with the highest cosine similarity is retrieved. Xu et al. (2019) collect only '.java' files from the 2M-commit corpus and use a 509K-pair dataset as training data for NMT; to mitigate the out-of-vocabulary (OOV) problem of code-domain input, they combine a generation distribution with a copying distribution, similar to pointer-generator networks (See et al., 2017). van Hal et al. (2019) also argue that much of the data is noisy and propose a pre-processing method that filters for better commit messages. Another study argues that it is challenging for an NMT model to represent the information required from the source code input within a fixed length; to alleviate this, only the added and deleted parts of the code modification are abbreviated as an abstract syntax tree (AST) and fed into a Bi-LSTM model. Nie et al. point out a large gap in contextual representation between source code and natural language when generating commit messages. Whereas previous studies used RNN or LSTM models, they use the Transformer model and, like the other studies, use Liu et al. (2018) as training data. 
To reduce this gap, they jointly minimize two losses, one predicting the next code line (Explicit Code Changes) and one predicting randomly masked words in the binary file.", "cite_spans": [ { "start": 88, "end": 109, "text": "Mauczka et al. (2015)", "ref_id": "BIBREF11" }, { "start": 775, "end": 792, "text": "Liu et al. (2018)", "ref_id": "BIBREF9" }, { "start": 1256, "end": 1273, "text": "(See et al., 2017", "ref_id": "BIBREF16" }, { "start": 1274, "end": 1297, "text": "). van Hal et al. (2019", "ref_id": "BIBREF3" }, { "start": 2004, "end": 2021, "text": "Liu et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Git is a version control system that manages version history and helps developers collaborate efficiently. Git tracks all files in a project across the working directory, the staging area, and the repository. The working directory shows the files in their current state. After modifying files, developers move them to the staging area using the add command to record the modified contents and write a commit message through the commit command. A commit may therefore contain changes to two or more files.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Git Process", "sec_num": "3.1" }, { "text": "With the advent of sequence-to-sequence learning (Seq2Seq) (Sutskever et al., 2014), various tasks mapping a source domain to a target domain are being solved. Text summarization is one of these tasks, showing good performance through Seq2Seq models with more advanced encoders and decoders. The encoder and decoder are trained by maximizing the conditional log-likelihood below, given source input X = (x_1, ..., x_n) and target input Y = (y_1, ..., y_m).", "cite_spans": [ { "start": 59, "end": 83, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Text Summarization based on Encoder-Decoder Model", "sec_num": "3.2" }, { "text": "p(Y|X; \theta) = \log \prod_{t=0}^{T} p(y_t | y_{<t}, X; \theta)
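As a minimal sketch of this objective, the log-likelihood can be computed from per-step decoder logits under teacher forcing. The snippet below assumes PyTorch and hypothetical tensor shapes; it illustrates the formula above, not the training code of any specific model discussed here.

import torch
import torch.nn.functional as F

# Minimal sketch: sum_t log p(y_t | y_<t, X; theta) from decoder outputs.
# `logits` (T+1, vocab_size) are per-step vocabulary scores produced with
# teacher forcing; `target` (T+1,) holds the gold token ids y_0..y_T.
def conditional_log_likelihood(logits, target):
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1).sum()

# Toy usage with random values standing in for real model outputs.
ll = conditional_log_likelihood(torch.randn(5, 100), torch.randint(0, 100, (5,)))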