{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:04.606929Z"
},
"title": "Time-Efficient Code Completion Model for the R Programming Language",
"authors": [
{
"first": "Artem",
"middle": [],
"last": "Popov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Dmitrii",
"middle": [],
"last": "Orekhov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Denis",
"middle": [],
"last": "Litvinov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nikolay",
"middle": [],
"last": "Korolev",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Gleb",
"middle": [],
"last": "Morgachev",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a deep learning code completion model for the R programming language. We introduce several techniques to utilize language modeling based architecture in the code completion task. With these techniques, the model requires low resources, but still achieves high quality. We also present an evaluation dataset for the R programming language completion task. Our dataset contains multiple autocompletion usage contexts and that provides robust validation results. The dataset is publicly available.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a deep learning code completion model for the R programming language. We introduce several techniques to utilize language modeling based architecture in the code completion task. With these techniques, the model requires low resources, but still achieves high quality. We also present an evaluation dataset for the R programming language completion task. Our dataset contains multiple autocompletion usage contexts and that provides robust validation results. The dataset is publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Code completion feature (for simplicity we will refer to it as autocompletion) is used in an integrated development environment (IDE) to suggest the next pieces of code during typing. Code completion engines can accelerate software development and help to reduce errors by eliminating typos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years quality improvements in the code completion task have been achieved with the transformer language models. Models with a huge amount of parameters usually demonstrate better performance (Brown et al., 2020) , but in practice code completion is executed on a user laptop with limited computational resources. At the same time code completion should run as fast as possible to be considered as a convenient development tool.",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "(Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we show that the autocompletion task can be solved with a fairly good quality even with a small transformer-based model. We propose several techniques to adapt the model which was originally designed for NLP tasks to our task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is hard to build a good autocompletion system for dynamically typed languages without machine learning methods (Shelley, 2014) . Let us consider an autocompletion of a function argument scenario. In static languages, an argument type is determined in the function definition. We can collect variables of this type from the scope in which the function is called. These variables may be used as an autocompletion output. However, in dynamic languages the argument type information is omitted. Since all dynamic languages are interpreted, variable types can not be obtained without running a program or special tools usage.",
"cite_spans": [
{
"start": 114,
"end": 129,
"text": "(Shelley, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We choose a dynamic R programming language for our experiments. To the best of our knowledge, there are no papers about code completion based on deep learning for the R programming language specifically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also propose an evaluation dataset for the R programming language collected from the opensource GitHub projects 1 . Our dataset is divided into several groups specific for different code usage contexts. For example, there is a separate group containing package imports and another one containing function calls.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many ways to design code completion models. One of the methods is a frequency-based system. The statistical language model is used to rank a set of possible completions extracted by the rule-based methods (Tu et al., 2014) . Bruch et al. (2009) proposed proposed a ranking machine learning model, which additionally takes a feature vector describing completion context as an input.",
"cite_spans": [
{
"start": 215,
"end": 232,
"text": "(Tu et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 235,
"end": 254,
"text": "Bruch et al. (2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lately, deep learning approaches have gained popularity. Completions are generated by autoregressive models such as LSTM or transformerbased language models (Li et al., 2017) trained on a large source unlabeled code corpora. Some large models such as GPT3 (Brown et al., 2020) can even perform a full-line autocompletion with promising quality. Alon et al. (2019) suggest to predict the next node of the abstract syntax tree (AST) of the program to get completions. Liu et al. (2020) propose to predict the token and its type jointly to improve completion performance for identifiers.",
"cite_spans": [
{
"start": 157,
"end": 174,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 256,
"end": 276,
"text": "(Brown et al., 2020)",
"ref_id": null
},
{
"start": 345,
"end": 363,
"text": "Alon et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 466,
"end": 483,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We use conventional GPT-2 (Radford et al., 2019) architecture with Byte Pair Encoding (BPE) tokenization (Sennrich et al., 2015) , but with fewer layers and heads and a lower hidden size. We train it on a standard language modeling task, predicting the next BPE token x t from the previous ones:",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 105,
"end": 128,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},
{
"text": "L lm = t log p(x t |x <t ) \u2192 max (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},
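{
"text": "Equation (1) can be implemented as follows; this is a minimal sketch assuming a PyTorch setup in which the model maps BPE token ids to per-position logits, with the model interface and variable names being illustrative assumptions rather than details from the paper:

import torch.nn.functional as F

def lm_loss(model, input_ids):
    # input_ids: LongTensor of shape (batch, seq_len) holding BPE token ids.
    logits = model(input_ids)             # assumed shape: (batch, seq_len, vocab)
    shift_logits = logits[:, :-1, :]      # positions that predict the next token
    shift_targets = input_ids[:, 1:]      # the tokens they should predict
    # Cross entropy is the negative of Equation (1), averaged over positions,
    # so minimizing it maximizes the log-likelihood above.
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_targets.reshape(-1),
    )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},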
{
"text": "However, we use special preprocessing to make this task easier. In particular, we apply R lexer to a source code to get so-called program tokens. We use that information to replace numerical and string literals by type-specific placeholders, delete comments and remove vector content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},
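{
"text": "A minimal sketch of this preprocessing step, assuming the lexer output is available as a list of (token_type, token_text) pairs; the type names are assumptions here, and vector-content removal is omitted for brevity:

# Placeholders for literal token types (assumed names, e.g. as produced by an
# R lexer such as utils::getParseData; adapt to the actual lexer output).
PLACEHOLDERS = {'NUM_CONST': '<num>', 'STR_CONST': '<str>'}

def preprocess(tokens):
    # tokens: list of (token_type, token_text) pairs produced by the lexer.
    out = []
    for ttype, text in tokens:
        if ttype == 'COMMENT':
            continue                                  # delete comments
        out.append(PLACEHOLDERS.get(ttype, text))     # replace literals
    return ' '.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},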
{
"text": "At inference time we exploit beams search and softmax with temperature. To prevent generation of the repeating elements we use penalized sampling (Keskar et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},
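{
"text": "A minimal sketch of the scoring step at inference time, combining softmax with temperature and a repetition penalty in the spirit of penalized sampling (Keskar et al., 2019); the default temperature and penalty values are illustrative assumptions:

import torch

def next_token_probs(logits, generated_ids, temperature=0.8, penalty=1.2):
    # logits: 1-D tensor of next-token scores; generated_ids: ids already emitted.
    scores = logits.clone()
    prev = torch.as_tensor(generated_ids, dtype=torch.long)
    # Damp the scores of tokens that already occur in the hypothesis.
    picked = scores[prev]
    scores[prev] = torch.where(picked > 0, picked / penalty, picked * penalty)
    return torch.softmax(scores / temperature, dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline model",
"sec_num": "3.1"
},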
{
"text": "As we know, transformers suffer from O(n 2 ) complexity with n as input length. It limits their ability to exploit large contexts and therefore limits code completion quality. If we take only the last tokens as input, it can dramatically reduce a model quality in a code completion task. For example, it is very complicated to get a variable with a rare name in the model output if it is declared at the start of the program and never used after that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable Name Substitution",
"sec_num": "3.2"
},
{
"text": "While BPE tokenization allows us to represent rare words with fixed-size vocabulary, they can still have damaging effect on the training and the inference stages. We observe that rare variable names in a source code unnecessarily extend input sequence length, thus reducing effective context length. We tried to use some transformer modifications such as Reformer (Kitaev et al., 2020) to reduce inference time but the quality drop was very high.",
"cite_spans": [
{
"start": 364,
"end": 385,
"text": "(Kitaev et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variable Name Substitution",
"sec_num": "3.2"
},
{
"text": "Here we propose a simple idea of replacing a rare variable name with a placeholder (varK, where K is the variable index number) if its frequency is less than a certain threshold. Also, we should note, that by such replacement the language modeling task becomes a bit easier. Since there is no need to remember complex variable names and the model can concentrate on predicting more useful token sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable Name Substitution",
"sec_num": "3.2"
},
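{
"text": "A minimal sketch of the substitution, assuming the variable-name tokens have already been extracted by the lexer; the threshold value, and the fact that frequencies are counted within a single file, are illustrative assumptions:

from collections import Counter

def substitute_rare_variables(var_names, min_freq=5):
    # var_names: variable-name tokens in program order.
    freq = Counter(var_names)
    mapping, out = {}, []
    for name in var_names:
        if freq[name] >= min_freq:
            out.append(name)           # frequent names are kept as is
        else:
            # rare names are mapped to var0, var1, ... in order of appearance
            out.append(mapping.setdefault(name, 'var' + str(len(mapping))))
    return out, mapping

The returned mapping can be used to translate predicted placeholders back to the original names before showing completions to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable Name Substitution",
"sec_num": "3.2"
},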
{
"text": "Proposed substitution increases quality and speed not only because of the context size. It is impossible to get a long name variable from the model output because there is a limit on the number of generation iterations. While such transformation allows us to generate a variable of any length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variable Name Substitution",
"sec_num": "3.2"
},
{
"text": "We observe some discrepancy between training objective and model usage cases. Usually, the user can improve completion results by typing in several first characters of the desired token. The problem is that during inference user may invoke an autocompletion service, while the pointer is still in the middle of the BPE token of the desired output. So, at the training stage the tokenization is determined, while during inference it can take an arbitrary form. When the word is typed in by the user, its prefixes may be decomposed into BPE tokens in several different ways. For example, if a user wants to get the variable maxDf consisting of max and df BPE tokens, then the typed m can lead only to an unlikely sequence m, ax, df.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix Generation",
"sec_num": "3.3"
},
{
"text": "We try to work around this issue by utilizing token prefixes. To incorporate signal from the prefix, we propose to roll the pointer back to the start of the program token and to utilize only those BPE tokens that match our prefix during beam search. Searching tokens with the right prefix is computationally expensive (O(D) for each call, D is a dictionary size). To overcome the computational cost we use the trie data structure to store all the BPE tokens (O(m) for each call, m is the maximum length of the BPE token in the dictionary).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix Generation",
"sec_num": "3.3"
},
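{
"text": "A minimal sketch of such a trie; the vocabulary is assumed to be a mapping from BPE token strings to ids, and enumerating the matching subtree adds cost proportional to the number of matching tokens:

class BPETrie:
    # Stores all BPE tokens so that candidates matching a typed prefix can be
    # found by walking the prefix instead of scanning the whole vocabulary
    # on every beam search step.
    def __init__(self, vocab):
        # vocab: dict mapping BPE token string -> token id.
        self.root = {}
        for token, idx in vocab.items():
            node = self.root
            for ch in token:
                node = node.setdefault(ch, {})
            node.setdefault(None, []).append(idx)   # ids of tokens ending here

    def ids_with_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return []
            node = node[ch]
        ids, stack = [], [node]
        while stack:
            cur = stack.pop()
            for key, child in cur.items():
                if key is None:
                    ids.extend(child)
                else:
                    stack.append(child)
        return ids",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix Generation",
"sec_num": "3.3"
},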
{
"text": "We investigated full-line code completion setting, where we try to predict a sequence of program tokens till the end of the line. We selected the average number of correct program tokens predicted as our quality metric. We found out, that if we restrict model size and use regular language modeling objective, the model starts to hallucinate after 1-2 program tokens. So we decided to restrict our inference only to 1 program token, introducing early stopping into the beam search routine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search with Early Stopping",
"sec_num": "3.4"
},
{
"text": "It is easy to understand if a generated sequence is exhausted in a single token completion task. The lexer is applied to extract program tokens after each beam search iteration. If at some point the lexer output contains more than one program token, the generation process is stopped for the current sequence. We also stop the beam search if we have already obtained k complete tokens, where k is a hyperparameter. It helps to accelerate the inference and has nearly no negative effect on model quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search with Early Stopping",
"sec_num": "3.4"
},
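{
"text": "A minimal sketch of the per-hypothesis stopping check; decode and lex stand in for the BPE tokenizer and the R lexer and are assumed interfaces, while the beam-level stop after k complete hypotheses is handled by the surrounding beam search loop:

def hypothesis_finished(bpe_ids, decode, lex):
    # decode: maps BPE ids back to text; lex: splits text into program tokens.
    program_tokens = lex(decode(bpe_ids))
    # Once the hypothesis spans more than one program token, the first program
    # token is complete and generation for this sequence can stop.
    return len(program_tokens) > 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam Search with Early Stopping",
"sec_num": "3.4"
},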
{
"text": "Distillation is a model compression procedure in which a student model is trained to match the outputs of a large pre-trained teacher model. Some works (Bucila et al., 2006; Hinton et al., 2015) show that distilled model can perform even better than a trained from scratch model with the same architecture and the same amount of parameters.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "(Bucila et al., 2006;",
"ref_id": "BIBREF4"
},
{
"start": 174,
"end": 194,
"text": "Hinton et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation",
"sec_num": "3.5"
},
{
"text": "For distillation, we use the cross entropy loss along with the KL divergence between the student and teacher outputs (Equation 3, where p s is a student model, p t is a teacher model, and \u03b1 is hyperparameter to balance losses).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q(x, t) = \u2212(1 \u2212 \u03b1) log p s (x t |x <t ) + + \u03b1 KL (p s (x t |x <t ) || p t (x t |x <t ))",
"eq_num": "(2)"
}
],
"section": "Distillation",
"sec_num": "3.5"
},
{
"text": "L dist = t q(x, t) \u2192 min (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation",
"sec_num": "3.5"
},
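{
"text": "A minimal sketch of Equations (2)-(3), assuming PyTorch and teacher/student models that return per-position logits; variable names, the averaging over positions, and the default value of alpha are illustrative assumptions:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, alpha=0.5):
    # student_logits, teacher_logits: (batch, seq_len, vocab); targets: (batch, seq_len).
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.reshape(-1, vocab), targets.reshape(-1))
    log_ps = F.log_softmax(student_logits, dim=-1)
    log_pt = F.log_softmax(teacher_logits, dim=-1).detach()
    # KL(p_s || p_t) from Equation (2), averaged over all positions.
    kl = (log_ps.exp() * (log_ps - log_pt)).sum(dim=-1).mean()
    return (1.0 - alpha) * ce + alpha * kl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation",
"sec_num": "3.5"
},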
{
"text": "The dataset used for the model training consists of 500k R Markdown files (Rmd). Non-code information is erased from each file and the rest of the text is transformed into a script. Additionally, in one of the experiments we use a larger dataset that contains more than 4kk with both R and Rmd files. The evaluation dataset was collected from the Github open-source projects and consists of 35k examples from the 9k R files. There is an issue with the using of the open-source project codes for the evaluation. It is very likely for the training and the test sets to intersect. A lot of repositories have forks with minimal differences and it is very hard to distinguish them from the source one. That is why we evaluate most of our models on R files only while training on Rmd files to avoid encountering the training samples in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Some papers investigate autocompletion behaviour on real-world autocompletion logs. Aye et al. 2020 trained as language models on an unlabelled corpus perform much worse on the real-world logs than models trained on a logs dataset initially. Hellendoorn et al. 2019showed a difference in the distributions of the completed tokens between the real completion events and the synthetic evaluation datasets. Not having the real logs available, we decided to divide our synthetic evaluation dataset into several groups. It is useful to validate a model behaviour on different autocompletion contexts. This way, the model can be fine-tuned to improve quality in concrete autocompletion situations, such as a package import or a function call completion. Firstly, we divide the dataset into prefix and non-prefix groups. The last program token is always incomplete in the prefix group. Also, we divide our examples into groups by the usage context. For example, there is a group with the filling of the function arguments and a group with new variables declaration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "The first type of dataset groups corresponds to completion events following the concrete operators ($, %>%, ->, :: <-, =). Another type covers autocompletion events during the positional or keyword arguments completion in vectors or functions. The next one consists of packages import usage contexts. The last one corresponds to the completion of a variable or a function name at the start of the new line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "The code completion task may be considered a ranking problem. We use mean reciprocal rank score (MRR) and mean Recall@5 score for evaluation in our experiments. There is only one relevant element a in the autocompletion task and with search results denoted as s the formulas can be written as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "= i \u22121 , if s i = a 0, if a / \u2208 s Recall@k(a, s) = k i=1 I[a = s i ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RR(a, s)",
"sec_num": null
},
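{
"text": "A minimal sketch of the two metrics defined above; inputs are the ground-truth token and the ranked list of completions returned by the model:

def reciprocal_rank(target, ranked):
    # ranked: completion candidates ordered by model score; target: the true token.
    for i, candidate in enumerate(ranked, start=1):
        if candidate == target:
            return 1.0 / i
    return 0.0

def recall_at_k(target, ranked, k=5):
    return float(target in ranked[:k])

# MRR and mean Recall@5 are the averages of these values over all completion events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},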
{
"text": "Our aim is to build a model light enough to run smoothly on an average laptop. We evaluate our models on a laptop equipped with Intel Core i7 with 6 cores and 16 GB RAM. The average time for the single autocompletion event should be close to 100ms and RAM consumption should not exceed 400MB. Figure 1 presents average inference times for our model with all the proposed modifications. We keep the number of heads = 4 and vary hidden size and number of layers. It can be seen that the model with the hidden size = 256 and number of layers = 4 is the most complicated model that still satisfies the performance requirements.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.1"
},
{
"text": "In this experiment, we evaluate each of our proposed modifications from the section 3. We apply modifications one by one and measure metrics and mean inference time for each of them. We use a transformer model with parameters from the previous experiment (hidden size = 256, heads amount = 4, number of layers = 4) as the baseline. For all experiments, we use Adam (Kingma and Ba, 2017) optimizer with the default parameters, cosine annealing learning rate scheduler (Smith and Topin, 2018) with upper learning rate boundary 5e-3 and gradient norm clipping by 10.",
"cite_spans": [
{
"start": 365,
"end": 386,
"text": "(Kingma and Ba, 2017)",
"ref_id": "BIBREF9"
},
{
"start": 467,
"end": 490,
"text": "(Smith and Topin, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality and Inference Speed",
"sec_num": "5.2"
},
{
"text": "The results show that without the prefix generation modification the model is unable to take advantage of the given prefixes. It should be noted that almost 45% of the examples from the evaluation dataset contain unfinished tokens with a given prefix. Additional manipulations with the prefix slow down the model but it is compensated by the following two modifications. Variable name substitution during the prepossessing leads to both quality improvement and inference speed up. Generation early stopping procedure accelerates the inference without any ranking drawback. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality and Inference Speed",
"sec_num": "5.2"
},
{
"text": "One of the standard methods to improve model performance in data science is to collect more data. As we mentioned before, we can not guarantee total fairness of the evaluation process in this setup, but we try to make sure that all the training examples are removed from the test set by eliminating possible duplicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Big Dataset Effect",
"sec_num": "5.3"
},
{
"text": "MRR Recall@5 l4 s256 0.676 0.751 l6 s1024 0.683 0.751 + more data 0.761 0.815 + distillation 0.701 0.767 We consider multiple types of models in this experiment. The first one is the best model from experiment 5.2. The second experiment is similar to the first one but consists of six layers instead of four and has hidden size of 1024 instead of 256. The third experiment has the same architecture as the second one and is trained on a larger training set. We apply Adaptive Softmax (Grave et al., 2017) during the first training iterations to speed up the training process. The fourth experiment is a result of distillation of the third one into the model with the architecture from the first experiment.",
"cite_spans": [
{
"start": 484,
"end": 504,
"text": "(Grave et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Big Dataset Effect",
"sec_num": "5.3"
},
{
"text": "As we see from the results (Table 3) both increasing training set size and distillation have positive effect on the metrics. The distilled model outperforms all the models trained on a small dataset, even the more complicated ones. Table 4 shows the distilled model performance on different parts of the evaluation dataset. In general, the additional prefix information allows achieving a higher score. Groups related to function arguments and vector content have the highest MRR score. It is an interesting observation since the vector content is eliminated during the preprocessing step. It seems that vector argument filling is very close to function argument filling semantically and the model is able to perform well in this situation without any relevant training samples.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 36,
"text": "(Table 3)",
"ref_id": "TABREF4"
},
{
"start": 232,
"end": 239,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Big Dataset Effect",
"sec_num": "5.3"
},
{
"text": "The additional prefix information is very important for a library group. Library calls are usually located at the start of the program. If there is no last token prefix then the only reasonable model behaviour is to predict the most common completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Interpretation",
"sec_num": "5.4"
},
{
"text": "Autocompletion usage after the <-operator means that we want to get a variable computation statement based on a variable name. In opposite, usage after the -> means that we want to get a variable name based on given computations. Corresponding groups at the table show that we are much better at the first one completion group. It makes sense as the user has no limits in the variable name design. Another reason for the low quality for the after operator -> is a low amount of examples for this operator in the training data. That is why the quality for the new line variable group is better even though the task is harder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Interpretation",
"sec_num": "5.4"
},
{
"text": "In this work, we present a model for the R programming language completion. We introduced simple but effective techniques, which can improve a code completion quality, while not affecting the model architecture or the training objective. Thus, these techniques can be easily combined with other works in the field and any dynamic programming language. We also present an evaluation dataset for the R programming language containing different autocompletion contexts. The diversity of our dataset provides a robust estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Structural language models for any-code generation",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Alon",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Sadaka",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Alon, Roy Sadaka, Omer Levy, and Eran Yahav. 2019. Structural language models for any-code gen- eration. CoRR, abs/1910.00577.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning autocompletion from real-world datasets",
"authors": [
{
"first": "Gareth",
"middle": [
"Ari"
],
"last": "Aye",
"suffix": ""
},
{
"first": "Seohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gareth Ari Aye, Seohyun Kim, and Hongyu Li. 2020. Learning autocompletion from real-world datasets.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning from examples to improve code completion systems",
"authors": [
{
"first": "Marcel",
"middle": [],
"last": "Bruch",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Monperrus",
"suffix": ""
},
{
"first": "Mira",
"middle": [],
"last": "Mezini",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "213--222",
"other_ids": {
"DOI": [
"10.1145/1595696.1595728"
]
},
"num": null,
"urls": [],
"raw_text": "Marcel Bruch, Martin Monperrus, and Mira Mezini. 2009. Learning from examples to improve code completion systems. pages 213-222.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Model compression",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Bucila",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
}
],
"year": 2006,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In KDD, pages 535-541. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient softmax approximation for gpus",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Moustapha",
"middle": [],
"last": "Ciss\u00e9",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Armand Joulin, Moustapha Ciss\u00e9, David Grangier, and Herv\u00e9 J\u00e9gou. 2017. Efficient softmax approximation for gpus.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "When code completion fails: a case study on real-world completions",
"authors": [
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Hellendoorn",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Proksch",
"suffix": ""
},
{
"first": "Harald",
"middle": [
"C"
],
"last": "Gall",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Bacchelli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 41st International Conference on Software Engineering, ICSE 2019",
"volume": "",
"issue": "",
"pages": "960--970",
"other_ids": {
"DOI": [
"10.1109/ICSE.2019.00101"
]
},
"num": null,
"urls": [],
"raw_text": "Vincent J. Hellendoorn, Sebastian Proksch, Harald C. Gall, and Alberto Bacchelli. 2019. When code com- pletion fails: a case study on real-world completions. In Proceedings of the 41st International Conference on Software Engineering, ICSE 2019, Montreal, QC, Canada, May 25-31, 2019, pages 960-970. IEEE / ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CTRL: A conditional transformer language model for controllable generation",
"authors": [
{
"first": "Nitish",
"middle": [
"Shirish"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Lav",
"middle": [
"R"
],
"last": "Varshney",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. CoRR, abs/1909.05858.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Reformer: The efficient transformer",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Anselm",
"middle": [],
"last": "Levskaya",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev, \u0141ukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Code completion with neural attention and pointer networks",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Li, Yue Wang, Irwin King, and Michael R. Lyu. 2017. Code completion with neural attention and pointer networks. CoRR, abs/1711.09573.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multitask learning based pre-trained language model for code completion",
"authors": [
{
"first": "Fang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunfei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fang Liu, Ge Li, Yunfei Zhao, and Zhi Jin. 2020. Multi- task learning based pre-trained language model for code completion.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Autocompletion without static typing",
"authors": [
{
"first": "Nicholas Mckay",
"middle": [],
"last": "Shelley",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas McKay Shelley. 2014. Autocompletion with- out static typing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Superconvergence: Very fast training of neural networks using large learning rates",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Nicholay",
"middle": [],
"last": "Topin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie N. Smith and Nicholay Topin. 2018. Super- convergence: Very fast training of neural networks using large learning rates.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On the localness of software",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Premkumar",
"middle": [],
"last": "Devanbu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering",
"volume": "",
"issue": "",
"pages": "269--280",
"other_ids": {
"DOI": [
"10.1145/2635868.2635875"
]
},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. 2014. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Sympo- sium on Foundations of Software Engineering, FSE 2014, page 269-280, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Mean inference time over 50k objects for different model parameters",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"text": "Dataset group sizes",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"text": "Model modifications performance",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"text": "Increasing dataset size and distillation effects",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"type_str": "table",
"num": null,
"text": "Distilled model performance on separate groups. Rows correspond to autocompletion contexts. Results for no prefix subset, prefix subset, and entire dataset are split into columns.",
"content": "<table/>"
}
}
}
}