{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:05.373137Z"
},
"title": "CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model",
"authors": [
{
"first": "Tae-Hwan",
"middle": [],
"last": "Jung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyung Hee University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In version control using Git, the commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. To write a good commit message, the message should briefly summarize the source code changes, which takes a lot of time and effort. Therefore, a lot of research has been studied to automatically generate a commit message when a code modification is given. However, in most of the studies so far, there was no curated dataset for code modifications (additions and deletions) and corresponding commit messages in various programming languages. The model also had difficulty learning the contextual representation between code modification and natural language. To solve these problems, we propose the following two methods: (1) We collect code modification and corresponding commit messages in Github for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) and release a wellorganized 345K pair dataset. (2) In order to resolve the large gap in contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020), a pre-trained language model (PLM) for programming code, as an initial model. Using two methods leads to successful results in the commit message generation task. Also, this is the first research attempt in finetuning commit generation using various programming languages and code PLM. Training code, dataset, and pretrained weights are available at https://github.com/graykode/commitautosuggestions.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In version control using Git, the commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. To write a good commit message, the message should briefly summarize the source code changes, which takes a lot of time and effort. Therefore, a lot of research has been studied to automatically generate a commit message when a code modification is given. However, in most of the studies so far, there was no curated dataset for code modifications (additions and deletions) and corresponding commit messages in various programming languages. The model also had difficulty learning the contextual representation between code modification and natural language. To solve these problems, we propose the following two methods: (1) We collect code modification and corresponding commit messages in Github for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) and release a wellorganized 345K pair dataset. (2) In order to resolve the large gap in contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020), a pre-trained language model (PLM) for programming code, as an initial model. Using two methods leads to successful results in the commit message generation task. Also, this is the first research attempt in finetuning commit generation using various programming languages and code PLM. Training code, dataset, and pretrained weights are available at https://github.com/graykode/commitautosuggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Commit message is the smallest unit that summarizes source code changes in natural language. In the Git process, git diff uses unified format (unidiff 2 ): A line marked in red or green means a modified line, and green highlights in '+' lines are the added code, whereas red highlights in '-' lines are the deleted code. good commit message allows developers to visualize the commit history at a glance, so many teams try to do high quality commits by creating rules for commit messages. For example, Conventional Commits 1 is one of the commit rules to use a verb of a specified type for the first word like 'Add' or 'Fix' and limit the length of the character. It is very tricky to follow all these rules and write a good quality commit message, so many developers ignore it due to lack of time and motivation. So it would be very efficient if the commit message is automatically written when a code modification is given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similar to text summarization, many studies have been conducted by taking code modification X = (x 1 , ..., x n ) as encoder input and commit message Y = (y 1 , ..., y m ) as decoder input based on the NMT (Neural machine translation) model. Loyola et al., 2017; van Hal et al., 2019) However, taking the code modification without distinguishing between the added and the deleted part as model input makes it difficult to understand the context of modification in the NMT model. In addition, previous studies tend to train from scratch when training a model, but this method does not show good performance because it creates a large gap in the contextual representation between programming language (PL) and natural language (NL). To overcome the problems in previous studies and train a better commit message generation model, our approach follows two stages:",
"cite_spans": [
{
"start": 242,
"end": 262,
"text": "Loyola et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 263,
"end": 284,
"text": "van Hal et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Collecting and processing data with the pair of the added and deleted parts of the code X = ((add 1 , del 1 ), ..., (add n , del n )). To take this pair dataset into the Transformer-based NMT model (Vaswani et al., 2017) , we use the BERT (Devlin et al., 2018) fine-tuning method about two sentencepair consist of added and deleted parts. This shows a better BLEU-4 score (Papineni et al., 2002) than previous works using raw git diff. Similar to Code-SearchNet (Husain et al., 2019) , our data is also collected for six languages (Python, PHP, Go, Java, JavaScript, and Ruby) from Github to show good performance in various languages. We finally released 345K code modification and commit message pair data.",
"cite_spans": [
{
"start": 202,
"end": 224,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 243,
"end": 264,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 376,
"end": 399,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 466,
"end": 487,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) To solve a large gap about contextual representation between programming language (PL) and natural language (NL), we use CodeBERT (Feng et al., 2020 ), a language model well-trained in the code domain as the initial weight. Using Code-BERT as the initial weight shows that the BLEU-4 score for commit message generation is better than when using random initialization and RoBERTa (Liu et al., 2019) . Additionally, when we pre-train the Code-to-NL task to document the source code in CodeSearchNet and use the initial weight of commit generation, the contextual representation between PL and NL is further reduced.",
"cite_spans": [
{
"start": 134,
"end": 152,
"text": "(Feng et al., 2020",
"ref_id": "BIBREF2"
},
{
"start": 384,
"end": 402,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Commit message generation has been studied in various ways. collect 2M commits from the Mauczka et al. (2015) and top 1K Java projects in Github. Among the commit messages, only those that keep the format of \"Verb + Object\" are filtered, grouped into verb types with similar characteristics, and then the classification model is trained with the naive Bayes classifier. use the commit data collected by to generate the commit message using an attention-based RNN encoder-decoder NMT model. They filter again in a \"verb/direct-object pattern\" from 2M data and finally used the 26K commit message data. Loyola et al. (2017) uses an NMT model similar to , but uses git diff and commit pairs collected from 1\u223c3 repositories of Python, Java, JavaScript, and C++ as training data. Liu et al. (2018) propose a retrieval model using 's 26K commit as training data. Code modification is represented by bags of words vector, and the message with the highest cosine similarity is retrieved. Xu et al. (2019) collect only '.java' file format from and use 509K dataset as training data for NMT. Also, to mitigate the problem of Out-of-Vocabulary (OOV) of code domain input, they use generation distribution or copying distribution similar to pointer-generator networks (See et al., 2017 ). van Hal et al. (2019 also argues that the entire data is noise and proposes a pre-processing method that filters the better commit messages. argue that it is challenging to represent the information required for source code input in the NMT model with a fixed-length. In order to alleviate this, it is suggested that only the added and deleted parts of the code modification be abbreviated as abstract syntax tree (AST) and applied to the Bi-LSTM model. Nieb et al. presented a large gap between the contextual representation between the source code and the natural language when generating commit messages. Previous studies have used RNN or LSTM model, they use the transformer model, and similarly to other studies, they use Liu et al. (2018) as the training data. To reduce this gap, they try to reduce the two-loss that predict the next code line (Explicit Code Changes) and the randomly masked word in the binary file.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "Mauczka et al. (2015)",
"ref_id": "BIBREF11"
},
{
"start": 775,
"end": 792,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 1256,
"end": 1273,
"text": "(See et al., 2017",
"ref_id": "BIBREF16"
},
{
"start": 1274,
"end": 1297,
"text": "). van Hal et al. (2019",
"ref_id": "BIBREF3"
},
{
"start": 2004,
"end": 2021,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Git is a version management system that manages version history and helps collaboration efficiently. Git tracks all files in the project in the Working directory, Staging area, and Repository. The working directory shows the files in their current state. After modifying the file, developers move the files to the staging area using the add command to record the modified contents and write a commit message through the commit command. Therefore, the commit message may contain two or more file changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Git Process",
"sec_num": "3.1"
},
{
"text": "With the advent of sequence to sequence learning (Seq2Seq) (Sutskever et al., 2014) , various tasks between the source and the target domain are being solved. Text summarization is one of these tasks, showing good performance through the Seq2Seq model with a more advanced encoder and decoder. The encoder and decoder models are trained by maximizing the conditional log-likelihood below based on source input X = (x 1 , ..., x n ) and target input Y = (y 1 , ..., y m ).",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},
{
"text": "p(Y |X; \u03b8) = log T t=0 p(y t |y <t , X; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},
{
"text": "where T is the length of the target input, y 0 is the start token, y T is the end token and \u03b8 is the parameter of the model. In the Transformer (Vaswani et al., 2017) model, the source input is vectorized into a hidden state through self-attention as the number of encoder layers. After that, the target input also learns the generation distribution through self-attention and attention to the hidden state of the encoder. It shows better summarization results than the existing RNNbased model (Nallapati et al., 2016) .",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 494,
"end": 518,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},
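{
"text": "A minimal PyTorch sketch of how this conditional log-likelihood can be computed with teacher forcing (illustrative only; logits stands in for the decoder output of any encoder-decoder model, and pad_id is an assumed padding token id):\n\nimport torch\nimport torch.nn.functional as F\n\ndef sequence_log_likelihood(logits, target, pad_id):\n    # logits: (T, vocab) decoder scores for positions 1..T given y_<t and X\n    # target: (T,) gold tokens y_1..y_T\n    log_probs = F.log_softmax(logits, dim=-1)\n    token_ll = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)\n    mask = (target != pad_id).float()   # ignore padding positions\n    return (token_ll * mask).sum()      # log prod_t p(y_t | y_<t, X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},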
{
"text": "To improve performance, most machine translations use beam search. It keeps the search area by K most likely tokens at each step and searches the next step to generate better text. Generation stops when the predicted y t is an end token or reaches the maximum target length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},
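{
"text": "A simplified beam search sketch (illustrative; step_fn is an assumed function that returns log-probabilities over the next token given a prefix, and bos_id/eos_id are assumed special token ids):\n\nimport torch\n\ndef beam_search(step_fn, bos_id, eos_id, k=10, max_len=128):\n    beams = [([bos_id], 0.0)]\n    for _ in range(max_len):\n        candidates = []\n        for prefix, score in beams:\n            if prefix[-1] == eos_id:                 # finished hypotheses are kept as-is\n                candidates.append((prefix, score))\n                continue\n            log_probs = step_fn(prefix)              # distribution over the next token\n            top = torch.topk(log_probs, k)\n            for lp, tok in zip(top.values.tolist(), top.indices.tolist()):\n                candidates.append((prefix + [tok], score + lp))\n        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]\n        if all(p[-1] == eos_id for p, _ in beams):   # stop when every beam has ended\n            break\n    return beams[0][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Summarization based on Encoder-Decoder Model",
"sec_num": "3.2"
},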
{
"text": "CodeSearchNet (Husain et al., 2019 ) is a dataset to search code function snippets in natural language. It is a paired dataset of code function snippets for six programming languages (Python, PHP, Go, Java, JavaScript and Ruby) and a docstring summarizing these functions in natural language. A total of 6M pair datasets is collected from projects with a re-distribution license. Using the CodeSearch-Net corpus, retrieval of the code corresponding to the query composed of natural language can be resolved. Also, it is possible to resolve the problem of documenting the code by summarizing it in natural language (Code-to-NL).",
"cite_spans": [
{
"start": 14,
"end": 34,
"text": "(Husain et al., 2019",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CodeSearchNet",
"sec_num": "3.3"
},
{
"text": "Recent NLP studies have shown state-of-the-art in various tasks through transfer learning consisting of pre-training and fine-tuning (Peters et al., 2018) . In particular, BERT (Devlin et al., 2018 ) is a pre-trained language model by predicting masked words from randomly masked sequence input and uses only encoder based on Transformer (Vaswani et al., 2017) . It shows good perfomances in various datasets and is now extending out of the natural language domain to the voice, video, and code domains.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 177,
"end": 197,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF1"
},
{
"start": 338,
"end": 360,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT",
"sec_num": "3.4"
},
{
"text": "CodeBERT is a pre-trained language model in the code domain to learn the relationship between programming language (PL) and natural language (NL). In order to learn the representation between different domains, they refer to the learning method of ELECTRA (Clark et al., 2020) which is consists of Generator-Discriminator. NL and Code Generator predict words from code tokens and comment tokens masked at a specific rate. Finally, NL-Code Discriminator is CodeBERT after trained through binary classification that predicts whether it is replaced or original.",
"cite_spans": [
{
"start": 256,
"end": 276,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT",
"sec_num": "3.4"
},
{
"text": "CodeBERT shows good results for all tasks in the code domain. Specially, it shows a higher score than other pre-trained models in the code to natural language(Code-to-NL) and code retrieval task from NL using CodeSearchNet Corpus. In addition, CodeBERT uses the Byte Pair Encoding (BPE) tokenizer (Sennrich et al., 2015) used in RoBERTa, and does not generate unk tokens in code domain input.",
"cite_spans": [
{
"start": 297,
"end": 320,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT",
"sec_num": "3.4"
},
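{
"text": "A small illustrative example of this behavior using the Hugging Face tokenizer distributed with CodeBERT (the code snippet and the printed subwords are only for illustration):\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('microsoft/codebert-base')\n\ncode = 'def get_commits(repo): return repo.commits'\ntokens = tokenizer.tokenize(code)\nprint(tokens)                          # BPE subwords of the code snippet\nprint(tokenizer.unk_token in tokens)   # False: rare identifiers are split into subwords, not mapped to unk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT",
"sec_num": "3.4"
},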
{
"text": "We collect a 345K code modification dataset and commit message pairs from 52K repositories of six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby) on Github. When using raw git diff as model input, it is difficult to distinguish between added and deleted parts, so unlike , our dataset focuses only on the added and deleted lines in git diff. The detailed data collection and pre-processing method are shown as a pseudo-code in Algorithm 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "To collect only the code that is a re-distributable license, we have listed the Github repository name in the CodeSearchNet dataset. After that, all the repositories are cloned through multi-threading. Detailed descriptions of functions that collect the commit hashes in a repository and the code modifi- Figure 2 : Commit message verb type and frequency statistics. Only 'upgrade' is not included in the high frequency, but is included in a similar way to 'update'. This refers to the verb group in . cations in a commit hash are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 313,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 get_commits is a function that gets the commit history from the repository. At this time, the commits of the master branch are filtered, excluding merge commits. Commits with code modifications corresponding to 6 the program language(.py, .php, .js, .java, .go, .ruby) extensions are collected. To implement this, we use the open-source pydriller (Spadini et al., 2018) . \u2022 get_modifications is a function that gets the line modified in the commit. Through this function, it is possible to collect only the added or deleted parts, not all git diffs.",
"cite_spans": [
{
"start": 349,
"end": 371,
"text": "(Spadini et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
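{
"text": "A condensed sketch of how get_commits and get_modifications can be implemented with pydriller (exact class and attribute names depend on the pydriller version, and the extension list here is illustrative):\n\nfrom pydriller import Repository\n\nEXTENSIONS = ('.py', '.php', '.js', '.java', '.go', '.rb')\n\ndef get_commits(repo_path):\n    # master-branch, non-merge commits only\n    return Repository(repo_path,\n                      only_in_branch='master',\n                      only_no_merge=True).traverse_commits()\n\ndef get_modifications(commit):\n    pairs = []\n    for mod in commit.modified_files:\n        if mod.filename.endswith(EXTENSIONS):\n            added = [line for _, line in mod.diff_parsed['added']]\n            deleted = [line for _, line in mod.diff_parsed['deleted']]\n            pairs.append((added, deleted))\n    return pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},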
{
"text": "While collecting the pair dataset, we find that the relationship between some code modifications and the corresponding commit message is obscure and very abstract. Also, we check that some code modification or commit message is a meaningless dummy file. To filter these, we create the filtering function and the rules as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "1. To collect commit messages with various format distributions, we limit the collection of up to 50 commits in one repository. 2. We filter commits whose number of files changed is one or two per commit message. 3. Commit message with issue number is removed because detailed information is abbreviated. 4. Similar to , the non-English commit messages are removed. 5. Since some commit messages are very long, the first line is fetched. 6. If the token of code through tree-sitter 3 , a parser generator tool, exceeds 32 characters, it is excluded. This removes unnecessary things like changes to binary files in code diff. 7. By referring to the and Conventional Commits( \u00a7 1) rules, the commit message that begins with a verb is collected. We use spaCy 4 for Pos tagging. 8. We filter commit messages with 13 verb types, which are the most frequent. Figure 2 shows the collected verb types and their ratio for the entire dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 853,
"end": 861,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
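{
"text": "A sketch of a few of these filters (the spaCy model name, the issue-number pattern, and the helper name are assumptions; the verb list follows Figure 2):\n\nimport re\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\nALLOWED_VERBS = {'add', 'fix', 'use', 'update', 'remove', 'make', 'change',\n                 'move', 'allow', 'improve', 'implement', 'create', 'upgrade'}\n\ndef filter_message(message):\n    lines = message.splitlines()\n    if not lines:\n        return None\n    message = lines[0].strip()                  # rule 5: keep only the first line\n    if re.search(r'#\\d+', message):             # rule 3: drop messages with an issue number\n        return None\n    doc = nlp(message)                          # rule 7: POS tagging with spaCy\n    first = doc[0]\n    if first.pos_ != 'VERB' or first.lemma_.lower() not in ALLOWED_VERBS:\n        return None                             # rules 7-8: must start with a frequent verb type\n    return message",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},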
{
"text": "As a result, we collect 345K code modification and commit message pair datasets from 52K Github repositories and split commit data into 80-10-10 train/validation/test sets. This results are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We propose the idea of generating a commit message through the CodeBERT model with the our dataset ( \u00a7 4). To this end, this section describes how to feed inputs code modification (X = ((add 1 , del 1 ) , ..., (add n , del n ))) and commit message (Y = (msg1, ..., msg n )) to CodeBERT and how to use pre-trained weights more efficiently to reduce the gap in contextual representation between programming language (PL) and natural language (NL).",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 202,
"text": "((add 1 , del 1 )",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "CommitBERT",
"sec_num": "5"
},
{
"text": "We feed the code modification to the encoder and a commit message to the decoder input by following the NMT model. Especially for code modification in the encoder, similar inputs are concatenated, and different types of inputs are separated by a sentence separator (sep). Applying this to our CommitBERT in the same way, added tokens (Add = (add 1 , ..., add n )) and deleted tokens (Del = (del 1 , ..., del n )) of similar types are connected to each other, and sentence separators are inserted between them. Therefore, the conditionallikelihood is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},
{
"text": "p(M |C; \u03b8) = log T t=0 p(m t |m <t , C; \u03b8), m <t = (m 0 , m 1 , ..., m t\u22121 ) C = concat([cls], Add, [sep], Del, [sep])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},
{
"text": "where M is commit message tokens, C is code modification tokens and concat is list concatenation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},
{
"text": "[cls] and [sep] are speical tokens, which are a start token and a sentence separator token respectively. Other notions are the same as Section 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},
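{
"text": "A minimal sketch of constructing the encoder input C = concat([cls], Add, [sep], Del, [sep]) with the CodeBERT tokenizer (the truncation length and the joining of lines with spaces are assumptions):\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('microsoft/codebert-base')\n\ndef build_source_ids(added_lines, deleted_lines, max_len=256):\n    add_tokens = tokenizer.tokenize(' '.join(added_lines))\n    del_tokens = tokenizer.tokenize(' '.join(deleted_lines))\n    tokens = ([tokenizer.cls_token] + add_tokens + [tokenizer.sep_token]\n              + del_tokens + [tokenizer.sep_token])[:max_len]\n    return tokenizer.convert_tokens_to_ids(tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},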
{
"text": "Unlike previous works, all code modifications in git diff are not used as input and only changed lines in code modification are used. Since this removes unnecessary inputs, it shows a significant performance improvement in summarizing code modifications in natural language. Figure 3 shows how the code modification is actually taken as model input.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CodeBERT for Commit Message Generation",
"sec_num": "5.1"
},
{
"text": "To reduce the gap difference between two domains(PL, NL), We use the pretrained CodeBERT as the initial weight. Furthermore, we determine that removing deleted tokens from our dataset ( \u00a7 4) is similar to the Code-to-NL task in CodeSearchNet (Section 3.3). Using this feature, we use the initial weight after training the Code-to-NL task with CodeBERT as the initial weight. This method of training shows better results than only using Code-BERT weight in commit message generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialize Pretrained Weights",
"sec_num": "5.2"
},
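{
"text": "A sketch of the initialization options discussed here and compared in Section 6.3, using public Hugging Face checkpoints; the Code-to-NL checkpoint directory is a hypothetical local path produced by first fine-tuning CodeBERT on CodeSearchNet:\n\nfrom transformers import RobertaConfig, RobertaModel\n\ndef build_encoder(init='code2nl'):\n    if init == 'random':\n        return RobertaModel(RobertaConfig.from_pretrained('roberta-base'))   # randomly initialized weights\n    if init == 'roberta':\n        return RobertaModel.from_pretrained('roberta-base')\n    if init == 'codebert':\n        return RobertaModel.from_pretrained('microsoft/codebert-base')\n    # 'code2nl': CodeBERT after fine-tuning on the Code-to-NL task (hypothetical local path)\n    return RobertaModel.from_pretrained('./codebert-code2nl-finetuned')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialize Pretrained Weights",
"sec_num": "5.2"
},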
{
"text": "To verify the proposal in Section 5 in the commit message generation task, we do two experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6"
},
{
"text": "(1) Compare the commit message generation results of using all code modifications as inputs and using only the added or deleted lines as inputs. 2Ablation study several initial model weights to find the weight with the smallest gap in contextual representation between PL and NL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6"
},
{
"text": "Our implementation uses CodeXGLUE's code-text pipeline library 5 . We use the same model architecture and experimental parameters for the two experiments below. As a model architecture, the encoder and decoder use 12 and 3 Transformer layers. We use 5e-5 as the learning rate and train on one V100 GPU with a 32 batch size. We also use 256 as the maximum source input length and 128 as the target input length, 10 training epochs, and 10 as the beam size k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "6.1"
},
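{
"text": "A sketch of the model architecture and hyperparameters described above (the decoder construction follows the general pattern of the CodeXGLUE code-text pipeline, but the exact wiring here is illustrative):\n\nimport torch.nn as nn\nfrom transformers import RobertaConfig, RobertaModel\n\nconfig = RobertaConfig.from_pretrained('microsoft/codebert-base')\nencoder = RobertaModel.from_pretrained('microsoft/codebert-base')   # 12 Transformer layers\n\ndecoder_layer = nn.TransformerDecoderLayer(d_model=config.hidden_size,\n                                           nhead=config.num_attention_heads)\ndecoder = nn.TransformerDecoder(decoder_layer, num_layers=3)         # 3 Transformer layers\n\n# training hyperparameters from Section 6.1\nhparams = dict(learning_rate=5e-5, batch_size=32, max_source_length=256,\n               max_target_length=128, epochs=10, beam_size=10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "6.1"
},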
{
"text": "To experiment generating a commit message according to the input type, only 4135 data is collected from data with code modification in the '.java' files among 26K training data of Loyola et al. (2017) . Then we transform these 4135 data into two types, respectively, and experiment with training data for RoBERTa and CodeBERT weights: (a) entire code modification in git diff and (b) only changed lines in code modification. Figure 3 shows these two differences in detail. Table 3 shows the BLEU-4 values when inference with the test set after training about these two types. Both initial weights show worse results than (b), even though type (a) takes a more extended input to the model. This shows that lines other than changed lines as input data disturb training when generating the commit message. Table 3 : The result of generating the commit message for the input type after collecting 4135 data with only source code change among the data of Loyola et al. (2017) . (a) uses entire git diff(unidiff) as input, and (b) uses only the changed line according to Section 5.1 as input.",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "Loyola et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 950,
"end": 970,
"text": "Loyola et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 425,
"end": 433,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 473,
"end": 480,
"text": "Table 3",
"ref_id": null
},
{
"start": 803,
"end": 810,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compare Model Input Type",
"sec_num": "6.2"
},
{
"text": "We do an ablation study while changing the initial weight of the model for 345K datasets in six programming languages collected in Section 4. As mentioned in 5.2, when the model weight with high comprehension in the code domain is used as the initial weight, it is assumed that the large gap in contextual representation between PL and NL would be greatly reduced. To prove this, we train the commit message generation task for four weights as initial model weights: Random, RoBERTa 6 , CodeBERT 7 , and the weights trained on the Code-to-NL task(Section 3.3) with Code-BERT. Except for this initial weight, all training parameters are the same. Table 2 shows BLEU-4 for the test set and PPL for the dev set for each of the four weights after training. As a result, using weights trained on the Code-to-NL task with CodeBERT as the initial weight shows the best results for test BLEU-4 and dev PPL. It also shows good performance regardless of programming language.",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 653,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ablation study on initial weight",
"sec_num": "6.3"
},
{
"text": "Our work presented a model summarizing code modifications to solve the difficulty of humans manually writing commit messages. To this end, this paper proposed a method of collecting data, a method of taking it to a model, and a method of improving performance. As a result, it showed a successful result in generating a commit message using our proposed methods. Consequently, our work can help developers who have difficulty writing commit messages even with the application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Although it is possible to generate a high-quality commit message with a pre-trained model, future studies to understand the code syntax structure remain in our work. As a solution to this, Com-mitBERT should be converted to AST (Abstract",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "https://conventionalcommits.org 2 https://en.wikipedia.org/wiki/Diff",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tree-sitter.github.io/tree-sitter 4 https://spacy.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/microsoft/CodeXGLUE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/roberta-base 7 https://huggingface.co/microsoft/codebert-base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author would like to thank Gyuwan Kim, Dongjun Lee, Mansu Kim and the anonymous reviewers for their thoughtful paper review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Codebert: A pre-trained model for programming and natural languages",
"authors": [
{
"first": "Zhangyin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.08155"
]
},
"num": null,
"urls": [],
"raw_text": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi- aocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural lan- guages. arXiv preprint arXiv:2002.08155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating commit messages from git diffs",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Srp Van Hal",
"suffix": ""
},
{
"first": "Kasper",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wendel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.11690"
]
},
"num": null,
"urls": [],
"raw_text": "SRP van Hal, Mathieu Post, and Kasper Wendel. 2019. Generating commit messages from git diffs. arXiv preprint arXiv:1911.11690.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Codesearchnet challenge: Evaluating the state of semantic code search",
"authors": [
{
"first": "Hamel",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Ho-Hsiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tiferet",
"middle": [],
"last": "Gazit",
"suffix": ""
},
{
"first": "Miltiadis",
"middle": [],
"last": "Allamanis",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09436"
]
},
"num": null,
"urls": [],
"raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- searchnet challenge: Evaluating the state of seman- tic code search. arXiv preprint arXiv:1909.09436.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatically generating commit messages from diffs using neural machine translation",
"authors": [
{
"first": "Siyuan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ameer",
"middle": [],
"last": "Armaly",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE)",
"volume": "",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In 2017 32nd IEEE/ACM International Conference on Auto- mated Software Engineering (ASE), pages 135-146. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards automatic generation of short summaries of commits",
"authors": [
{
"first": "Siyuan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Mcmillan",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE/ACM 25th International Conference on Program Comprehension (ICPC)",
"volume": "",
"issue": "",
"pages": "320--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siyuan Jiang and Collin McMillan. 2017. Towards automatic generation of short summaries of com- mits. In 2017 IEEE/ACM 25th International Con- ference on Program Comprehension (ICPC), pages 320-323. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Atom: Commit message generation based on abstract syntax tree and hybrid ranking",
"authors": [
{
"first": "Shangqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Cuiyun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Nie Lun Yiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Software Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangqing Liu, Cuiyun Gao, Sen Chen, Nie Lun Yiu, and Yang Liu. 2020. Atom: Commit message gener- ation based on abstract syntax tree and hybrid rank- ing. IEEE Transactions on Software Engineering.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neuralmachine-translation-based commit message generation: how far are we?",
"authors": [
{
"first": "Zhongxin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"E"
],
"last": "Hassan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Zhenchang",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering",
"volume": "",
"issue": "",
"pages": "373--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongxin Liu, Xin Xia, Ahmed E Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. 2018. Neural- machine-translation-based commit message genera- tion: how far are we? In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 373-384.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A neural architecture for generating natural language descriptions from source code changes",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Loyola",
"suffix": ""
},
{
"first": "Edison",
"middle": [],
"last": "Marrese-Taylor",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04856"
]
},
"num": null,
"urls": [],
"raw_text": "Pablo Loyola, Edison Marrese-Taylor, and Yutaka Mat- suo. 2017. A neural architecture for generating natu- ral language descriptions from source code changes. arXiv preprint arXiv:1704.04856.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dataset of developerlabeled commit messages",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Mauczka",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Brosch",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Schanes",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Grechenig",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE/ACM 12th Working Conference on Mining Software Repositories",
"volume": "",
"issue": "",
"pages": "490--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Mauczka, Florian Brosch, Christian Schanes, and Thomas Grechenig. 2015. Dataset of developer- labeled commit messages. In 2015 IEEE/ACM 12th Working Conference on Mining Software Reposito- ries, pages 490-493. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Abstractive text summarization using sequence-to-sequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.06023"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summariza- tion using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Coregen: Contextualized code representation learning for commit message generation",
"authors": [
{
"first": "Lun Yiu",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Cuiyun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhicong",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zenglin",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lun Yiu Nieb, Cuiyun Gaoa, Zhicong Zhongc, Wai Lamb, Yang Liud, and Zenglin Xua. Coregen: Con- textualized code representation learning for commit message generation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04368"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Man- ning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pydriller: Python framework for mining software repositories",
"authors": [
{
"first": "Davide",
"middle": [],
"last": "Spadini",
"suffix": ""
},
{
"first": "Maur\u00edcio",
"middle": [],
"last": "Aniche",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Bacchelli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering",
"volume": "",
"issue": "",
"pages": "908--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davide Spadini, Maur\u00edcio Aniche, and Alberto Bac- chelli. 2018. Pydriller: Python framework for min- ing software repositories. In Proceedings of the 2018 26th ACM Joint Meeting on European Soft- ware Engineering Conference and Symposium on the Foundations of Software Engineering, pages 908-911.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.3215"
]
},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Commit message generation for source code changes",
"authors": [
{
"first": "Shengbin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tianxiao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hanghang",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hang- hang Tong, and Jian Lu. 2019. Commit message generation for source code changes. In IJCAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Figure 1 shows the git diff representing code modification and the corresponding commit message. A Message : fix deprecated ref to tokenizer.max len",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "The figure above shows an example of commit message and git diff in Github.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Illustration of a code modification example in git diff (a) and method of taking it to the input of CommitBERT (b). (b) shows that all code modification lines in (a) are not used, and only changed lines are as input. So, in this example, code modification (a) includes return a -b, but not in the model input (b).",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>3:</td><td>commits = get commits(Repo)</td></tr><tr><td>4:</td><td>for commit in commits do</td></tr><tr><td>5:</td><td>mods = get modif ications(commit)</td></tr><tr><td>6:</td><td>for mod in mods do</td></tr><tr><td>7:</td><td>if f iltering(mod, commit) then</td></tr><tr><td>8:</td><td>break</td></tr><tr><td>9:</td><td>end if</td></tr><tr><td>10:</td><td>Save (mod.add, mod.del) to dataset.</td></tr><tr><td>11:</td><td>end for</td></tr><tr><td>12:</td><td>end for</td></tr><tr><td>13:</td><td>end for</td></tr><tr><td colspan=\"2\">14: end procedure</td></tr><tr><td colspan=\"2\">0K 20K 40K 60K 80K 100K 120K 140K add 37.7% fix 22.1% use 6.9% update 6.2% remove make change move allow 6.2% 5.1% 4.6% 3.6% 3.2% improve 1.8% implement Verb Types 1.7% create 0.8% upgrade 0.2%</td></tr><tr><td/><td>Frequency</td></tr></table>",
"num": null,
"text": "Algorithm 1 Code modification parser from the list of repositories. : procedure REPOPARSER(Repos) 2:for Repo in Repos do",
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Dataset Statistics for each language collected from 52K repositories of six programming languages.",
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>Initial Weight</td><td>Input Type</td><td>BLEU-4</td></tr><tr><td>RoBERTa</td><td>(a) All code modification (b) Only changed lines (Ours)</td><td>10.91 12.52</td></tr><tr><td>CodeBERT</td><td>(a) All code modification (b) Only changed lines (Ours)</td><td>11.77 13.32</td></tr></table>",
"num": null,
"text": "Commit message generation result for 4 initial weights. In (c), CodeBERT is used as the initial weight. And (d) uses the weight trained on the Code-to-NL task in CodeSearchNet with CodeBERT as the initial weight.As a result, it shows BLEU-4 for the test set after training and the best PPL for the validation set in the during training.",
"html": null
}
}
}
}