{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:04.606929Z" }, "title": "Time-Efficient Code Completion Model for the R Programming Language", "authors": [ { "first": "Artem", "middle": [], "last": "Popov", "suffix": "", "affiliation": {}, "email": "artem.popov@jetbrains.com" }, { "first": "Dmitrii", "middle": [], "last": "Orekhov", "suffix": "", "affiliation": {}, "email": "dmitrii.orekhov@jetbrains.com" }, { "first": "Denis", "middle": [], "last": "Litvinov", "suffix": "", "affiliation": {}, "email": "denis.litvinov@jetbrains.com" }, { "first": "Nikolay", "middle": [], "last": "Korolev", "suffix": "", "affiliation": {}, "email": "nikolai.korolev@jetbrains.com" }, { "first": "Gleb", "middle": [], "last": "Morgachev", "suffix": "", "affiliation": {}, "email": "gleb.morgachev@jetbrains.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present a deep learning code completion model for the R programming language. We introduce several techniques to utilize language modeling based architecture in the code completion task. With these techniques, the model requires low resources, but still achieves high quality. We also present an evaluation dataset for the R programming language completion task. Our dataset contains multiple autocompletion usage contexts and that provides robust validation results. The dataset is publicly available.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present a deep learning code completion model for the R programming language. We introduce several techniques to utilize language modeling based architecture in the code completion task. With these techniques, the model requires low resources, but still achieves high quality. We also present an evaluation dataset for the R programming language completion task. Our dataset contains multiple autocompletion usage contexts and that provides robust validation results. The dataset is publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Code completion feature (for simplicity we will refer to it as autocompletion) is used in an integrated development environment (IDE) to suggest the next pieces of code during typing. Code completion engines can accelerate software development and help to reduce errors by eliminating typos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years quality improvements in the code completion task have been achieved with the transformer language models. Models with a huge amount of parameters usually demonstrate better performance (Brown et al., 2020) , but in practice code completion is executed on a user laptop with limited computational resources. At the same time code completion should run as fast as possible to be considered as a convenient development tool.", "cite_spans": [ { "start": 201, "end": 221, "text": "(Brown et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we show that the autocompletion task can be solved with a fairly good quality even with a small transformer-based model. 
We propose several techniques to adapt the model, which was originally designed for NLP tasks, to our task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is hard to build a good autocompletion system for dynamically typed languages without machine learning methods (Shelley, 2014). Let us consider the scenario of completing a function argument. In statically typed languages, the argument type is determined in the function definition, so we can collect variables of this type from the scope in which the function is called and use them as the autocompletion output. However, in dynamic languages the argument type information is omitted. Since dynamic languages are typically interpreted, variable types cannot be obtained without running the program or using special tools.", "cite_spans": [ { "start": 114, "end": 129, "text": "(Shelley, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We choose the dynamically typed R programming language for our experiments. To the best of our knowledge, there are no papers about deep-learning-based code completion for the R programming language specifically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We also propose an evaluation dataset for the R programming language collected from open-source GitHub projects 1 . Our dataset is divided into several groups corresponding to different code usage contexts. For example, there is a separate group containing package imports and another one containing function calls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are many ways to design code completion models. One of them is a frequency-based system, where a statistical language model is used to rank a set of possible completions extracted by rule-based methods (Tu et al., 2014). Bruch et al. (2009) proposed a ranking machine learning model, which additionally takes a feature vector describing the completion context as input.", "cite_spans": [ { "start": 215, "end": 232, "text": "(Tu et al., 2014)", "ref_id": "BIBREF17" }, { "start": 235, "end": 254, "text": "Bruch et al. (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Lately, deep learning approaches have gained popularity. Completions are generated by autoregressive models such as LSTM or transformer-based language models (Li et al., 2017) trained on large corpora of unlabeled source code. Some large models, such as GPT-3 (Brown et al., 2020), can even perform full-line autocompletion with promising quality. Alon et al. (2019) suggest predicting the next node of the program's abstract syntax tree (AST) to generate completions. Liu et al. (2020) propose predicting the token and its type jointly to improve completion performance for identifiers.", "cite_spans": [ { "start": 157, "end": 174, "text": "(Li et al., 2017)", "ref_id": "BIBREF11" }, { "start": 256, "end": 276, "text": "(Brown et al., 2020)", "ref_id": null }, { "start": 345, "end": 363, "text": "Alon et al. (2019)", "ref_id": "BIBREF0" }, { "start": 466, "end": 483, "text": "Liu et al. 
(2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use conventional GPT-2 (Radford et al., 2019) architecture with Byte Pair Encoding (BPE) tokenization (Sennrich et al., 2015) , but with fewer layers and heads and a lower hidden size. We train it on a standard language modeling task, predicting the next BPE token x t from the previous ones:", "cite_spans": [ { "start": 26, "end": 48, "text": "(Radford et al., 2019)", "ref_id": "BIBREF13" }, { "start": 105, "end": 128, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline model", "sec_num": "3.1" }, { "text": "L lm = t log p(x t |x %, ->, :: <-, =). Another type covers autocompletion events during the positional or keyword arguments completion in vectors or functions. The next one consists of packages import usage contexts. The last one corresponds to the completion of a variable or a function name at the start of the new line.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4" }, { "text": "The code completion task may be considered a ranking problem. We use mean reciprocal rank score (MRR) and mean Recall@5 score for evaluation in our experiments. There is only one relevant element a in the autocompletion task and with search results denoted as s the formulas can be written as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "= i \u22121 , if s i = a 0, if a / \u2208 s Recall@k(a, s) = k i=1 I[a = s i ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RR(a, s)", "sec_num": null }, { "text": "Our aim is to build a model light enough to run smoothly on an average laptop. We evaluate our models on a laptop equipped with Intel Core i7 with 6 cores and 16 GB RAM. The average time for the single autocompletion event should be close to 100ms and RAM consumption should not exceed 400MB. Figure 1 presents average inference times for our model with all the proposed modifications. We keep the number of heads = 4 and vary hidden size and number of layers. It can be seen that the model with the hidden size = 256 and number of layers = 4 is the most complicated model that still satisfies the performance requirements.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 301, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "5.1" }, { "text": "In this experiment, we evaluate each of our proposed modifications from the section 3. We apply modifications one by one and measure metrics and mean inference time for each of them. We use a transformer model with parameters from the previous experiment (hidden size = 256, heads amount = 4, number of layers = 4) as the baseline. For all experiments, we use Adam (Kingma and Ba, 2017) optimizer with the default parameters, cosine annealing learning rate scheduler (Smith and Topin, 2018) with upper learning rate boundary 5e-3 and gradient norm clipping by 10.", "cite_spans": [ { "start": 365, "end": 386, "text": "(Kingma and Ba, 2017)", "ref_id": "BIBREF9" }, { "start": 467, "end": 490, "text": "(Smith and Topin, 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Quality and Inference Speed", "sec_num": "5.2" }, { "text": "The results show that without the prefix generation modification the model is unable to take advantage of the given prefixes. 
It should be noted that almost 45% of the examples in the evaluation dataset contain unfinished tokens with a given prefix. Additional manipulations with the prefix slow down the model, but this is compensated by the following two modifications. Variable name substitution during preprocessing leads to both a quality improvement and an inference speed-up. The generation early stopping procedure accelerates inference without any ranking drawback. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality and Inference Speed", "sec_num": "5.2" }, { "text": "One of the standard methods to improve model performance in data science is to collect more data. As we mentioned before, we cannot guarantee total fairness of the evaluation process in this setup, but we try to make sure that all the training examples are removed from the test set by eliminating possible duplicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Big Dataset Effect", "sec_num": "5.3" }, { "text": "We consider multiple types of models in this experiment. The first one is the best model from the experiment in Section 5.2. The second one is similar to the first but has six layers instead of four and a hidden size of 1024 instead of 256. The third one has the same architecture as the second and is trained on a larger training set. We apply Adaptive Softmax (Grave et al., 2017) during the first training iterations to speed up the training process. The fourth one is the result of distilling the third model into the architecture from the first one.", "cite_spans": [ { "start": 484, "end": 504, "text": "(Grave et al., 2017)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Big Dataset Effect", "sec_num": "5.3" }, { "text": "As we see from the results (Table 3), both increasing the training set size and distillation have a positive effect on the metrics. The distilled model outperforms all the models trained on the small dataset, even the more complex ones. Table 4 shows the distilled model's performance on different parts of the evaluation dataset. In general, the additional prefix information allows achieving a higher score. The groups related to function arguments and vector content have the highest MRR scores. This is an interesting observation, since vector content is eliminated during the preprocessing step. It seems that vector argument filling is semantically very close to function argument filling, and the model is able to perform well in this situation without any relevant training samples.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 36, "text": "(Table 3)", "ref_id": "TABREF4" }, { "start": 232, "end": 239, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Big Dataset Effect", "sec_num": "5.3" }, { "text": "The additional prefix information is very important for the library group. Library calls are usually located at the start of the program. If there is no prefix for the last token, then the only reasonable model behaviour is to predict the most common completion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Interpretation", "sec_num": "5.4" }, { "text": "Autocompletion after the <- operator means that we want to get a variable computation statement based on a variable name. In contrast, autocompletion after the -> operator means that we want to get a variable name based on the given computation. 
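As a concrete illustration of these two completion contexts, consider the following toy R snippet (hypothetical code, not taken from the paper's dataset):

```r
df <- data.frame(speed = c(4, 7, 8))

# Completion after `<-`: the variable name is already typed,
# so the model has to suggest the computation on the right-hand side.
mean_speed <- mean(df$speed)

# Completion after `->`: the computation is already typed,
# so the model has to come up with a suitable variable name.
mean(df$speed) -> mean_speed
```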
Corresponding groups in the table show that the model performs much better on the first completion group. This makes sense, as the user has no limits in variable name design. Another reason for the low quality after the -> operator is the low number of examples with this operator in the training data. That is why the quality for the new-line variable group is better even though that task is harder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Interpretation", "sec_num": "5.4" }, { "text": "In this work, we present a code completion model for the R programming language. We introduce simple but effective techniques that improve code completion quality without affecting the model architecture or the training objective. Thus, these techniques can be easily combined with other works in the field and applied to any dynamic programming language. We also present an evaluation dataset for the R programming language containing different autocompletion contexts. The diversity of our dataset provides a robust evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Structural language models for any-code generation", "authors": [ { "first": "Uri", "middle": [], "last": "Alon", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Sadaka", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Eran", "middle": [], "last": "Yahav", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uri Alon, Roy Sadaka, Omer Levy, and Eran Yahav. 2019. Structural language models for any-code generation. CoRR, abs/1910.00577.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning autocompletion from real-world datasets", "authors": [ { "first": "Gareth Ari", "middle": [], "last": "Aye", "suffix": "" }, { "first": "Seohyun", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gareth Ari Aye, Seohyun Kim, and Hongyu Li. 2020. Learning autocompletion from real-world datasets.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning from examples to improve code completion systems", "authors": [ { "first": "Marcel", "middle": [], "last": "Bruch", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Monperrus", "suffix": "" }, { "first": "Mira", "middle": [], "last": "Mezini", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "213--222", "other_ids": { "DOI": [ "10.1145/1595696.1595728" ] }, "num": null, "urls": [], "raw_text": "Marcel Bruch, Martin Monperrus, and Mira Mezini. 2009. Learning from examples to improve code completion systems. pages 213-222.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Model compression", "authors": [ { "first": "Cristian", "middle": [], "last": "Bucila", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "Alexandru", "middle": [], "last": "Niculescu-Mizil", "suffix": "" } ], "year": 2006, "venue": "KDD", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In KDD, pages 535-541. 
ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Efficient softmax approximation for gpus", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Moustapha", "middle": [], "last": "Ciss\u00e9", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edouard Grave, Armand Joulin, Moustapha Ciss\u00e9, David Grangier, and Herv\u00e9 J\u00e9gou. 2017. Efficient softmax approximation for gpus.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "When code completion fails: a case study on real-world completions", "authors": [ { "first": "J", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Hellendoorn", "suffix": "" }, { "first": "Harald", "middle": [ "C" ], "last": "Proksch", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Gall", "suffix": "" }, { "first": "", "middle": [], "last": "Bacchelli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 41st International Conference on Software Engineering, ICSE 2019", "volume": "", "issue": "", "pages": "960--970", "other_ids": { "DOI": [ "10.1109/ICSE.2019.00101" ] }, "num": null, "urls": [], "raw_text": "Vincent J. Hellendoorn, Sebastian Proksch, Harald C. Gall, and Alberto Bacchelli. 2019. When code com- pletion fails: a case study on real-world completions. In Proceedings of the 41st International Conference on Software Engineering, ICSE 2019, Montreal, QC, Canada, May 25-31, 2019, pages 960-970. IEEE / ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "CTRL: A conditional transformer language model for controllable generation", "authors": [ { "first": "Bryan", "middle": [], "last": "Nitish Shirish Keskar", "suffix": "" }, { "first": "Lav", "middle": [ "R" ], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Varshney", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. 
CoRR, abs/1909.05858.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Reformer: The efficient transformer", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Anselm", "middle": [], "last": "Levskaya", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikita Kitaev, \u0141ukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Code completion with neural attention and pointer networks", "authors": [ { "first": "Jian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Irwin", "middle": [], "last": "King", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Lyu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Li, Yue Wang, Irwin King, and Michael R. Lyu. 2017. Code completion with neural attention and pointer networks. CoRR, abs/1711.09573.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multitask learning based pre-trained language model for code completion", "authors": [ { "first": "Fang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yunfei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fang Liu, Ge Li, Yunfei Zhao, and Zhi Jin. 2020. Multi- task learning based pre-trained language model for code completion.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
OpenAI blog, 1(8):9.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Autocompletion without static typing", "authors": [ { "first": "Nicholas Mckay", "middle": [], "last": "Shelley", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas McKay Shelley. 2014. Autocompletion with- out static typing.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Superconvergence: Very fast training of neural networks using large learning rates", "authors": [ { "first": "Leslie", "middle": [ "N" ], "last": "Smith", "suffix": "" }, { "first": "Nicholay", "middle": [], "last": "Topin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leslie N. Smith and Nicholay Topin. 2018. Super- convergence: Very fast training of neural networks using large learning rates.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On the localness of software", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Zhendong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Premkumar", "middle": [], "last": "Devanbu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering", "volume": "", "issue": "", "pages": "269--280", "other_ids": { "DOI": [ "10.1145/2635868.2635875" ] }, "num": null, "urls": [], "raw_text": "Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. 2014. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Sympo- sium on Foundations of Software Engineering, FSE 2014, page 269-280, New York, NY, USA. Associa- tion for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Mean inference time over 50k objects for different model parameters", "type_str": "figure" }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "Dataset group sizes", "content": "" }, "TABREF3": { "html": null, "type_str": "table", "num": null, "text": "Model modifications performance", "content": "
" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "text": "Increasing dataset size and distillation effects", "content": "
" }, "TABREF6": { "html": null, "type_str": "table", "num": null, "text": "Distilled model performance on separate groups. Rows correspond to autocompletion contexts. Results for no prefix subset, prefix subset, and entire dataset are split into columns.", "content": "
" } } } }