{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:11.422717Z"
},
"title": "Bag-of-Words Baselines for Semantic Code Search",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": ""
},
{
"first": "Ji",
"middle": [],
"last": "Xin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of semantic code search is to retrieve code snippets from a source code corpus based on an information need expressed in natural language. The semantic gap between natural language and programming languages has for long been regarded as one of the most significant obstacles to the effectiveness of keyword-based information retrieval (IR) methods. It is a common assumption that \"traditional\" bag-of-words IR methods are poorly suited for semantic code search: our work empirically investigates this assumption. Specifically, we examine the effectiveness of two traditional IR methods, namely BM25 and RM3, on the CodeSearchNet Corpus, which consists of natural language queries paired with relevant code snippets. We find that the two keyword-based methods outperform several pre-BERT neural models. We also compare several code-specific data pre-processing strategies and find that specialized tokenization improves effectiveness. Code for reproducing our experiments is available at https:",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of semantic code search is to retrieve code snippets from a source code corpus based on an information need expressed in natural language. The semantic gap between natural language and programming languages has for long been regarded as one of the most significant obstacles to the effectiveness of keyword-based information retrieval (IR) methods. It is a common assumption that \"traditional\" bag-of-words IR methods are poorly suited for semantic code search: our work empirically investigates this assumption. Specifically, we examine the effectiveness of two traditional IR methods, namely BM25 and RM3, on the CodeSearchNet Corpus, which consists of natural language queries paired with relevant code snippets. We find that the two keyword-based methods outperform several pre-BERT neural models. We also compare several code-specific data pre-processing strategies and find that specialized tokenization improves effectiveness. Code for reproducing our experiments is available at https:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Community Question Answering forums like Stack Overflow have become popular 1 methods for finding code snippets relevant to natural language questions (e.g., \"How can I download a paper from arXiv in Python?\"). Such forums require community members to provide answers, which means that potential questions are limited to public code, and a large portion of questions cannot be answered in real time. The task of semantic code search removes these limitations by treating a code-related natural language question as a query and using it to retrieve relevant code snippets. In this way, novel questions can be immediately answered whether in public or private code repositories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consequently, the semantic code search task is receiving an increasing amount of attention. Several early efforts showed promising results applying neural networks models to various code search datasets (Gu et al., 2018; Sachdev et al., 2018; Cambronero et al., 2019; Zhu et al., 2020; Srinivas et al., 2020) . To facilitate research on semantic code search, GitHub released the CodeSearchNet Corpus and Challenge (Husain et al., 2019) , providing a large-scale dataset across multiple programming languages with unified evaluation criteria. This dataset has been utilized by multiple recent papers Gu et al., 2021; Arumugam, 2020) .",
"cite_spans": [
{
"start": 203,
"end": 220,
"text": "(Gu et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 221,
"end": 242,
"text": "Sachdev et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 243,
"end": 267,
"text": "Cambronero et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 268,
"end": 285,
"text": "Zhu et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 286,
"end": 308,
"text": "Srinivas et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 414,
"end": 435,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 599,
"end": 615,
"text": "Gu et al., 2021;",
"ref_id": "BIBREF7"
},
{
"start": 616,
"end": 631,
"text": "Arumugam, 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work on semantic code search has focused on neural ranking models under the assumption that such methods are necessary to bridge the semantic gap between natural language queries and relevant results (i.e., code snippets). Such approaches usually design a task-specific joint vector representation to map natural language queries and programming language \"documents\" into a shared vector space (Gu et al., 2018; Sachdev et al., 2018; Cambronero et al., 2019) . Inspired by progress in pretrained models (Devlin et al., 2019) , researchers proposed CodeBERT , a pretrained transformer model specifically for programming languages, which yields impressive effectiveness on this task.",
"cite_spans": [
{
"start": 394,
"end": 411,
"text": "(Gu et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 412,
"end": 433,
"text": "Sachdev et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 434,
"end": 458,
"text": "Cambronero et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 503,
"end": 524,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Beyond utilizing the raw text of code corpora, another thread of research conducts retrieval using structural features parsed from code, which are believed to contain rich semantic information (Srinivas et al., 2020) . Multiple papers have also proposed incorporating structural information with neural ranking models (Gu et al., 2021; Ling et al., 2021; .",
"cite_spans": [
{
"start": 193,
"end": 216,
"text": "(Srinivas et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 318,
"end": 335,
"text": "(Gu et al., 2021;",
"ref_id": "BIBREF7"
},
{
"start": 336,
"end": 354,
"text": "Ling et al., 2021;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to these comparatively sophisticated methods, in this work we explore the effectiveness of traditional information retrieval (IR) methods on the semantic code search task. This exploration is of interest for two reasons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, while neural methods can take advantage of distributed representations (i.e., static or contextual embeddings) to model semantic similarity, Yang et al. (2019) found that pre-BERT neural ranking models can underperform traditional IR methods like BM25 with RM3 query expansion, especially in the absence of large amounts of data for training. Prior work has claimed that traditional IR methods are unfit for code search (Husain et al., 2019) , but there is a lack of empirical evidence supporting this claim. In fact, in one of the few comparisons with traditional IR methods available (Sachdev et al., 2018) , BM25 performed well in comparison to the proposed neural methods on an Android-specific dataset.",
"cite_spans": [
{
"start": 148,
"end": 166,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 427,
"end": 448,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 593,
"end": 615,
"text": "(Sachdev et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, neural approaches are often reranking methods that rerank candidate documents identified by a first-stage ranking method. Even dense retrieval methods that perform ranking on shared vector representations directly can benefit from hybrid combinations with keyword-based signals as well as another round of reranking (Gao et al., 2020) . It is thus useful to identify the best-performing traditional IR methods in this domain, so that they can provide a complementary source of evidence.",
"cite_spans": [
{
"start": 324,
"end": 342,
"text": "(Gao et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, our work has two main contributions: First, we provide strong keyword baselines for semantic code search, demonstrating that traditional IR methods can in fact outperform several pre-BERT neural ranking models even without a semantic matching ability, which extends the conclusions drawn by Yang et al. (2019) on ad hoc retrieval to the semantic code search task. Second, we investigate and quantify the impact of specialized pre-processing for code search.",
"cite_spans": [
{
"start": 297,
"end": 315,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As discussed above, joint-vector representations have been widely used in recent work on code search. NCS (Sachdev et al., 2018) proposed an approach integrating TF-IDF, word embeddings, and an efficient embedding search technique where the word embeddings are learned in an unsupervised manner. CODEnn (Gu et al., 2018) developed a neural model based on queries and separate code components. UNIF (Cambronero et al., 2019) investigated the necessity of supervision and sophisticated architectures for learning aligned vector representations. After concluding that supervision and a simpler network architecture are beneficial, the authors further enhanced NCS by adding a supervision module on top. In addition to introducing the dataset, the CodeSearchNet paper also proposed joint-embedding models as baselines, where the embeddings may be learned from neural bag of words (NBoW), bidirectional RNN, 1D CNN, or self-attention (SelfAtt). In this work, we compare against the best-performing of these baselines, NBoW and SelfAtt.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Sachdev et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 303,
"end": 320,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 398,
"end": 423,
"text": "(Cambronero et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Unlike attempts to learn aligned vector representations from each dataset, CodeBERT ) built a BERT-style pre-trained transformer encoder with code-specific training data and objectives, and then fine-tuned the model on downstream tasks. This approach has been highly successful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another line of work tries to enhance retrieval by incorporating structural information. In work where queries and code snippets are encoded separately, this is usually achieved by merging the encoded structure into the code vector. extracted paths from the abstract syntax tree (AST) of the code and directly used the encoded path to represent the code snippet. Gu et al. (2021) built a statement dependency matrix from the code and transformed it into a vector, which is then added to the code vector prepared from the text. Ling et al. (2021) utilized a graph neural network to embed the program graph into the code vector. Adopting a different approach, extended CodeBERT by adding two structure-aware pre-training objectives, and showed that the benefits of structural information are orthogonal to the benefits of large-scale pre-training.",
"cite_spans": [
{
"start": 363,
"end": 379,
"text": "Gu et al. (2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While neural ranking models are popular approaches to the code retrieval task, we found few papers that compared them with traditional algorithms. To the best of our knowledge, only Sachdev et al. 2018compared their embedding model with BM25, finding that BM25 performed acceptably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the traditional IR methods that we used in our experiments and the neural ranking models that have been evaluated on the CodeSearchNet Corpus in previous work (Husain et al., 2019; .",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(Husain et al., 2019;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "To test the effectiveness of traditional IR methods, we chose two well-known and effective retrieval methods as our baselines: BM25 (Robertson and Zaragoza, 2009) and RM3 (Lavrenko and Croft, 2001; Abdul-Jaleel et al., 2004) . Both have been widely used for ad hoc retrieval and have been demonstrated to be strong baselines compared to multiple pre-BERT neural ranking models (Yang et al., 2019) .",
"cite_spans": [
{
"start": 132,
"end": 162,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 171,
"end": 197,
"text": "(Lavrenko and Croft, 2001;",
"ref_id": "BIBREF11"
},
{
"start": 198,
"end": 224,
"text": "Abdul-Jaleel et al., 2004)",
"ref_id": "BIBREF0"
},
{
"start": 377,
"end": 396,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional IR Baselines",
"sec_num": "3.1"
},
{
"text": "BM25 is a ranking method based on the probabilistic relevance model (Robertson and Jones, 1976) , which combines term frequency (tf) and inverse document frequency (idf) signals from individual query terms to estimate query-document relevance. RM3 is a query expansion technique based on pseudo relevance feedback (PRF) that can be combined with another ranking method such as BM25. It expands the original query with selected terms from initial retrieval results (e.g., results of BM25) and applies another round of retrieval (e.g., with BM25) using the expanded query. We omit a comprehensive explanation of these two methods here and refer interested readers to the cited papers.",
"cite_spans": [
{
"start": 68,
"end": 95,
"text": "(Robertson and Jones, 1976)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional IR Baselines",
"sec_num": "3.1"
},
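{
"text": "To make these two methods concrete, the following minimal sketch (our own simplification in Python, not the Lucene/Anserini implementation used in our experiments) scores a tokenized query against tokenized documents with the BM25 formula and then illustrates the RM3 idea of expanding the query with frequent terms from the top-ranked documents; the parameter names k1, b, fbDocs, and fbTerms mirror those tuned later, and the proper relevance-model term weighting is deliberately omitted.\n\nimport math\nfrom collections import Counter\n\ndef bm25_scores(query, docs, k1=1.2, b=0.75):\n    # query: list of tokens; docs: list of token lists\n    N = len(docs)\n    avgdl = sum(len(d) for d in docs) / N\n    df = Counter(t for d in docs for t in set(d))\n    scores = []\n    for d in docs:\n        tf = Counter(d)\n        s = 0.0\n        for t in query:\n            if t not in tf:\n                continue\n            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))\n            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))\n        scores.append(s)\n    return scores\n\ndef rm3_expand(query, docs, scores, fb_docs=10, fb_terms=10):\n    # Naive RM3-style expansion: take frequent terms from the top-ranked docs.\n    top = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:fb_docs]\n    feedback = Counter(t for i in top for t in docs[i])\n    expansion = [t for t, _ in feedback.most_common(fb_terms) if t not in query]\n    # Real RM3 interpolates term weights using originalQueryWeight; here the\n    # original and expansion terms are simply concatenated before re-retrieval.\n    return query + expansion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional IR Baselines",
"sec_num": "3.1"
},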
{
"text": "We compare the traditional IR methods described above with three neural ranking models: neural bag of words (NBoW), self-attention (SelfAtt), and CodeBERT. Results of the first two models are reported by Husain et al. (2019) , and the last model by . We use their reported scores in this paper.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "Husain et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ranking Models",
"sec_num": "3.2"
},
{
"text": "According to Husain et al. (2019) , both NBoW and SelfAtt encode natural language queries and code into a joint vector space, and then aggregate the sequence representation into a single vector. The models are trained with the objective of maximizing the inner products of the aggregated query vectors and code vectors. The two models only differ in the encoding step, where NBoW encodes each token through a simple embedding matrix and SelfAtt encodes the sequence using BERT (Devlin et al., 2019) . pre-trained a bi-modal (natural language and programming language) transformer encoder based on RoBERTa (Liu et al., 2019) , with the hybrid objectives of Mask Language Model (MLM) and Replaced Token Detection (RTD). The model is then fine-tuned for the code search task on each programming language dataset. We refer readers to the original papers (Husain et al., 2019; for further model details and hyperparameters.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Husain et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 477,
"end": 498,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 605,
"end": 623,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 850,
"end": 871,
"text": "(Husain et al., 2019;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ranking Models",
"sec_num": "3.2"
},
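{
"text": "The following sketch illustrates the shared-vector-space setup described above with a neural bag-of-words encoder; the tiny vocabulary and the randomly initialized embedding matrix are placeholders for illustration only, whereas the actual models learn the embeddings by maximizing the inner product between matched query and code vectors.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nvocab = {'append': 0, 'string': 1, 'key': 2, 'def': 3, 'putcat': 4}\nemb = rng.normal(size=(len(vocab), 128))  # stand-in for a learned embedding matrix\n\ndef encode(tokens):\n    # neural bag of words: average the embeddings of in-vocabulary tokens\n    ids = [vocab[t] for t in tokens if t in vocab]\n    return emb[ids].mean(axis=0) if ids else np.zeros(emb.shape[1])\n\ndef rank(query_tokens, code_token_lists):\n    # rank code snippets by inner product with the query vector\n    q = encode(query_tokens)\n    scores = [float(q @ encode(c)) for c in code_token_lists]\n    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Ranking Models",
"sec_num": "3.2"
},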
{
"text": "In this section, we introduce the CodeSearchNet Dataset (Husain et al., 2019) used in this paper and the code specific pre-processing strategies (e.g., tokenization) to be compared. In this work we conduct all experiments on the CodeSearchNet Corpus dataset. The labeled data are split into training, validation, and test sets in a ratio of 80:10:10. Table 1 shows the overall dataset size and the number of unique docstrings in each data split. The test set is partitioned into segments of size 1000 at the evaluation stage, and the correct code snippet for a given query is compared against the other snippets within the same segment. That is, the code snippets in the 1000 <docstring, code snippet> pairs naturally form the distractor set for each other.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset and Pre-processing",
"sec_num": "4"
},
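{
"text": "The evaluation protocol can be sketched as follows (our reading of the setup, with the scoring function left abstract): partition the test pairs into segments of 1000, let the other 999 snippets in a segment act as distractors for each query, and average the reciprocal rank of the correct snippet.\n\ndef mean_reciprocal_rank(test_pairs, score, segment_size=1000):\n    # test_pairs: list of (query, code) tuples; score(query, code) -> float\n    reciprocal_ranks = []\n    for start in range(0, len(test_pairs), segment_size):\n        segment = test_pairs[start:start + segment_size]\n        codes = [code for _, code in segment]\n        for i, (query, _) in enumerate(segment):\n            scores = [score(query, c) for c in codes]\n            # 1-based rank of the correct snippet within its segment\n            rank = 1 + sum(s > scores[i] for s in scores)\n            reciprocal_ranks.append(1.0 / rank)\n    return sum(reciprocal_ranks) / len(reciprocal_ranks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Pre-processing",
"sec_num": "4"
},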
{
"text": "According to Husain et al. (2019) , the crawled data are filtered according to certain heuristic rules, 1 # Appends the given string at the end of the current string value for key k. including removing (1) pairs where the docstring is shorter than three tokens, (2) functions that contain fewer than three lines, contain the \"test\" substring, or serve as constructors or standard extension methods, and (3) duplicate functions. Nevertheless, even though duplicate functions are removed, queries prepared from docstrings can still repeat. That is, different functions can share the same documentation. Such duplication may result from function overloading, oversimplified documentation, or mere coincidence. An example of this duplication is shown in Figure 1 . Table 2 shows that such query duplication can be observed in all programming languages to some degree, and most of the duplication arises from functions in the same repository. Considering the number of duplicate docstrings, it is inaccurate to consider all functions other than the one matched to the current query as negative samples. In this work, we aggregate all functions sharing the same docstring and regard all of them as relevant results.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Husain et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 750,
"end": 758,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 761,
"end": 768,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "De-duplication",
"sec_num": "4.2"
},
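{
"text": "A minimal sketch of this aggregation step is shown below; the field names docstring and func_id are illustrative assumptions rather than the exact CodeSearchNet schema.\n\nfrom collections import defaultdict\n\ndef build_relevance_sets(pairs):\n    # pairs: list of dicts with (assumed) keys 'docstring' and 'func_id';\n    # returns docstring -> ids of every function sharing that docstring\n    relevant = defaultdict(set)\n    for p in pairs:\n        relevant[p['docstring'].strip().lower()].add(p['func_id'])\n    return relevant\n\n# At evaluation time, any retrieved function whose id is in\n# relevant[query_docstring] is counted as a relevant result, instead of only\n# the single function the <docstring, code snippet> pair was built from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "De-duplication",
"sec_num": "4.2"
},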
{
"text": "In all experiments, we apply the Porter stemmer and perform stopword removal using the default stopwords list in the Anserini toolkit (Yang et al., 2017) , which is a Lucene-based IR system.",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Yang et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
{
"text": "On top of this default configuration, we investigate the effectiveness of the following tokenization and stopword removal strategies specific to programming languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
{
"text": "\u2022 no-code-tokenization: No extra pre-processing is applied other than Porter stemmer and removal of English stopwords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
{
"text": "\u2022 code-tokenization: Tokens in both camelCase and snake case in code snippets and documentation are further tokenized into separate tokens, e.g., camel case and snake case. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
{
"text": "\u2022 code-tokenization + remove reserved tokens: Considering that reserved tokens in programming languages intuitively add little value in exact match methods, we remove the reserved tokens of each programming language on top of the codetokenization condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
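{
"text": "The code-tokenization condition can be sketched with a simple regular-expression heuristic of our own (the exact tokenizer used in our experiments may differ in details): split snake_case on underscores, split camelCase at lowercase-to-uppercase boundaries, and lowercase the resulting subtokens.\n\nimport re\n\ndef split_identifier(token):\n    # 'snake_case' -> ['snake', 'case'];  'camelCase' -> ['camel', 'case']\n    parts = []\n    for piece in token.split('_'):\n        # insert a space where a lowercase letter or digit meets an uppercase letter\n        spaced = re.sub('(?<=[a-z0-9])(?=[A-Z])', ' ', piece)\n        parts.extend(w.lower() for w in spaced.split() if w)\n    return parts\n\nassert split_identifier('class_dir') == ['class', 'dir']\nassert split_identifier('camelCase') == ['camel', 'case']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},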
{
"text": "We show length and vocabulary statistics after applying each pre-processing strategy in Table 3 . In the table, total vocab size is the number of tokens that appear in either docstring or code, and overlapped vocabulary ratio is the percentage of tokens appearing in both docstring and code in the entire vocabulary. The table shows that code tokenization greatly shrinks the vocabulary size and raises the overlapped vocabulary ratio. Interestingly, reserved token removal shortens the code snippets length, but shows little impact on the overall vocabulary size. This results from the fact that reserved tokens are commonly contained in variable names as subtokens and thus reappear after code tokenization (e.g., the variable name class dir would be tokenized into class and dir, therefore class would still appear in the final vocabulary).",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.3"
},
{
"text": "All our experiments were conducted with Capreolus , an IR toolkit integrating ranking and reranking tasks under the same data Table 4 : MRR on the test set of the CodeSearchNet Corpus where each model searches for the correct code snippet against the 999 distractors. The highest scores among non-BERT models are highlighted in bold, and the ones among keyword-only models are underlined. We copied the scores of neural ranking models from Husain et al. (2019) and . processing pipeline. We chose the toolkit to enhance reproducibility and to support future comparisons. Note that although Capreolus is primarily designed for text ranking with neural ranking models, in this work we do not use any of those features. The underlying implementation of BM25 and RM3 are provided by the Pyserini toolkit (Lin et al., 2021) , which in turn is built on the Lucene open-source search library, but Capreolus provides simplified mechanisms for parameter tuning and other useful features for end-to-end experiments.",
"cite_spans": [
{
"start": 440,
"end": 460,
"text": "Husain et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 800,
"end": 818,
"text": "(Lin et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
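{
"text": "For readers who want to reproduce a comparable run outside of Capreolus, the sketch below shows how BM25 and RM3 can be invoked directly through Pyserini; the index path and the query are placeholders, and index construction over the pre-processed code snippets is not shown.\n\nfrom pyserini.search import SimpleSearcher  # pip install pyserini\n\n# 'indexes/csn-python' is a placeholder path for a Lucene index built over the\n# pre-processed code snippets of one language split.\nsearcher = SimpleSearcher('indexes/csn-python')\nsearcher.set_bm25(k1=1.2, b=0.75)\nsearcher.set_rm3(fb_terms=10, fb_docs=10, original_query_weight=0.5)\n\nhits = searcher.search('how to check if a file exists', k=10)\nfor i, hit in enumerate(hits, start=1):\n    print(f'{i:2} {hit.docid} {hit.score:.4f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},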
{
"text": "Following the original paper (Husain et al., 2019) , each correct code snippet was searched against a fixed set of 999 distractors, as described in Section 4.1. All experiments were evaluated with Mean Reciprocal Rank (MRR). In all experiments, we tuned the parameters k1 and b for BM25 and originalQueryWeight, fbDocs, fbTerms for RM3 on the validation set, then applied the parameters from the best result on the test set. Note that since BM25 and RM3 only require parameter tuning, we did not use the training set mentioned in Table 1 .",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Husain et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "[0.7, 1.3], step size 0.1 b [0.7, 1.0], step size 0.1 fbDocs [55, 95] , step size 10 fbTerms 2, 5, 7, 10 originalQueryWeight 0.7, 0.8, 0.9 After pilot experiments on the Ruby and Go datasets to determine reasonable parameter ranges to search, we performed a grid search on each language dataset over the values shown in Table 5 .",
"cite_spans": [
{
"start": 61,
"end": 65,
"text": "[55,",
"ref_id": null
},
{
"start": 66,
"end": 69,
"text": "95]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "k1",
"sec_num": null
},
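{
"text": "The tuning loop implied by Table 5 can be sketched as follows; the evaluate function, which stands for running retrieval on the validation set and computing MRR, is assumed rather than shown.\n\nimport itertools\n\nparam_grid = {\n    'k1': [round(0.7 + 0.1 * i, 1) for i in range(7)],   # 0.7 to 1.3\n    'b': [round(0.7 + 0.1 * i, 1) for i in range(4)],    # 0.7 to 1.0\n    'fbDocs': [55, 65, 75, 85, 95],\n    'fbTerms': [2, 5, 7, 10],\n    'originalQueryWeight': [0.7, 0.8, 0.9],\n}\n\ndef grid_search(evaluate):\n    # evaluate(params) -> validation-set MRR for one parameter setting\n    best_params, best_mrr = None, float('-inf')\n    for values in itertools.product(*param_grid.values()):\n        params = dict(zip(param_grid.keys(), values))\n        mrr = evaluate(params)\n        if mrr > best_mrr:\n            best_params, best_mrr = params, mrr\n    return best_params, best_mrr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},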
{
"text": "The results are shown in Table 4 . The first row reports the results of CodeBERT . We list this result here to better compare the IR baselines with the state-of-the-art model in the field. The next two rows are pre-BERT neural model results copied from Husain et al. (2019) . The remaining rows show the scores of BM25 and RM3 with the three aforementioned pre-processing strategies on the six programming language datasets.",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "Husain et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "As Table 4 shows, BM25 and BM25 + RM3 in general outperform the NBoW and SelfAtt baselines despite variations in effectiveness across programming languages. The SelfAtt model only shows sizeable improvement over BM25 on Python and a modest improvement on PHP. This suggests that the gap between natural language and programming languages does not necessarily hinder traditional IR methods in the code search task, and that distributed representations are not necessarily better at addressing this gap.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "Comparing the results of BM25 and BM25 + RM3, we observe that adding RM3, which is generally considered more effective, does not improve over BM25 on any of the language datasets. We suspect the cause of this unanticipated result is that most of the queries in CodeSearchNet only have a single relevant document, which may not be sufficient to quantify the benefits of pseudo relevance feedback techniques. This hypothesis is supported by a similar observation that adding RM3 degrades effectiveness on the MS MARCO dataset (Bajaj et al., 2018) , where each query also has few relevant documents .",
"cite_spans": [
{
"start": 524,
"end": 544,
"text": "(Bajaj et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "The results from each pre-processing strategy show the necessity of code tokenization, which improves MRR overall. On the other hand, removing the reserved tokens does not improve effectiveness. The possible reasons could be that (1) some reserved tokens are in the English stopwords list and would be removed anyway (e.g. for, if, or, etc.), (2) some special reserved tokens rarely appear in the query and thus contribute little to the final score (e.g. elif, await, etc.), and (3) frequently-appearing reserved words are given small IDF weights in BM25, which minimizes their effect (e.g. final, return, var).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "In this paper we examined the effectiveness of traditional IR methods for semantic code search and found that while these exact match methods are not as effective as CodeBERT, they generally outperform pre-BERT neural models. We also compare the effect of code-specific tokenization strategies, showing that while splitting camel and snake case is beneficial, removing reserved tokens does not necessarily help keyword-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "There are also aspects of semantic code search that this paper does not cover. Sachdev et al. (2018) mentioned the nuance between different code com-ponents, such as how readability can differ for function names and local variables. We leave for future work an investigation of whether treating such components differently improves effectiveness. Nevertheless, the lesson from our work seems clear: even with advances in neural approaches, we shouldn't neglect comparisons to and contributions from strong keyword-based IR methods.",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "Sachdev et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://stackoverflow.blog/2020/01/21/ scripting-the-future-of-stack-2020-pla ns-vision/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/github/CodeSearchNet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "According toHusain et al. (2019), NBoW and SelfAtt tokenize 'camelCase' tokens into subtokens ('camel' and 'case'), which is similar to our code-tokenization setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "UMass at TREC 2004: Novelty and HARD",
"authors": [
{
"first": "Nasreen",
"middle": [],
"last": "Abdul-Jaleel",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Larkey",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"D"
],
"last": "Smucker",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Strohman",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Turtle",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Wade",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Thirteenth Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Don- ald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004), Gaithersburg, Maryland.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic code search using Code2Vec: A bag-of-paths model",
"authors": [
{
"first": "Lakshmanan",
"middle": [],
"last": "Arumugam",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lakshmanan Arumugam. 2020. Semantic code search using Code2Vec: A bag-of-paths model. Master's thesis, University of Waterloo.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Saurabh Tiwary, and Tong Wang",
"authors": [
{
"first": "Payal",
"middle": [],
"last": "Bajaj",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mcnamara",
"suffix": ""
},
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Stoica",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.09268v3"
]
},
"num": null,
"urls": [],
"raw_text": "Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Ti- wary, and Tong Wang. 2018. MS MARCO: A Hu- man Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268v3.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "When deep learning met code search",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Cambronero",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Seohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Koushik",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Chandra",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2019",
"volume": "",
"issue": "",
"pages": "964--974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra. 2019. When deep learn- ing met code search. In Proceedings of the 2019 27th ACM Joint Meeting on European Software En- gineering Conference and Symposium on the Foun- dations of Software Engineering, ESEC/FSE 2019, page 964-974, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Code-BERT: A pre-trained model for programming and natural languages",
"authors": [
{
"first": "Zhangyin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1536--1547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi- aocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Code- BERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Complementing lexical retrieval with semantic residual embedding",
"authors": [
{
"first": "Luyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhuyun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13969"
]
},
"num": null,
"urls": [],
"raw_text": "Luyu Gao, Zhuyun Dai, Zhen Fan, and Jamie Callan. 2020. Complementing lexical retrieval with seman- tic residual embedding. arXiv:2004.13969.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CRaDLe: Deep code retrieval based on semantic dependency learning",
"authors": [
{
"first": "Wenchao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zongjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Cuiyun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chaozheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zenglin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
}
],
"year": 2021,
"venue": "Neural Networks",
"volume": "141",
"issue": "",
"pages": "385--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenchao Gu, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Hongyu Zhang, Zenglin Xu, and Michael R. Lyu. 2021. CRaDLe: Deep code retrieval based on semantic dependency learning. Neural Networks, 141:385-394.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Association for Computing Machinery",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Sunghun",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 40th International Conference on Software Engineering, ICSE '18",
"volume": "",
"issue": "",
"pages": "933--944",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of the 40th Inter- national Conference on Software Engineering, ICSE '18, page 933-944, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "GraphCode-BERT: Pre-training code representations with data flow",
"authors": [
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Shuo Ren",
"suffix": ""
},
{
"first": "Zhangyin",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.08366"
]
},
"num": null,
"urls": [],
"raw_text": "Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, L. Zhou, Nan Duan, Jian Yin, Daxin Jiang, and M. Zhou. 2020. GraphCode- BERT: Pre-training code representations with data flow. arXiv:2009.08366.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Code-SearchNet Challenge: Evaluating the state of semantic code search",
"authors": [
{
"first": "Hamel",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Ho-Hsiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tiferet",
"middle": [],
"last": "Gazit",
"suffix": ""
},
{
"first": "Miltiadis",
"middle": [],
"last": "Allamanis",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brockschmidt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09436"
]
},
"num": null,
"urls": [],
"raw_text": "Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- SearchNet Challenge: Evaluating the state of seman- tic code search. arXiv:1909.09436.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Association for Computing Machinery",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W. Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Lavrenko and W. Bruce Croft. 2001. Relevance based language models. In Proceedings of the 24th Annual International ACM SIGIR Conference on Re- search and Development in Information Retrieval, SIGIR '01, page 120-127, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xueguang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Sheng-Chieh",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jheng-Hong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ronak",
"middle": [],
"last": "Pradeep",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng- Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR'21, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pretrained transformers for text ranking: BERT and beyond",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.06467"
]
},
"num": null,
"urls": [],
"raw_text": "Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2020. Pretrained transformers for text ranking: BERT and beyond. arXiv:2010.06467.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep graph matching and searching for semantic code retrieval",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Saizhuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Gaoning",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fangli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"X"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Chunming",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shouling",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM Transactions on Knowledge Discovery from Data",
"volume": "15",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Ling, Lingfei Wu, Saizhuo Wang, Gaoning Pan, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, and Shouling Ji. 2021. Deep graph matching and searching for semantic code retrieval. ACM Transac- tions on Knowledge Discovery from Data, 15(5):Ar- ticle No. 88.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The probabilistic relevance framework: BM25 and beyond. Foundation and Trends in Information Retrieval",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "3",
"issue": "",
"pages": "333--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Foundation and Trends in Information Re- trieval, 3(4):333-389.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Relevance weighting of search terms",
"authors": [
{
"first": "Stephen",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"Sparck"
],
"last": "Jones",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of the American Society for Information science",
"volume": "27",
"issue": "3",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E. Robertson and Karen Sparck Jones. 1976. Relevance weighting of search terms. Journal of the American Society for Information science, 27(3):129-146.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Retrieval on source code: A neural code search",
"authors": [
{
"first": "Saksham",
"middle": [],
"last": "Sachdev",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sifei",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Seohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Koushik",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Chandra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saksham Sachdev, Hongyu Li, Sifei Luan, Seohyun Kim, Koushik Sen, and Satish Chandra. 2018. Re- trieval on source code: A neural code search. In Pro- ceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL 2018, page 31-41, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Graph4Code: A machine interpretable knowledge graph for code",
"authors": [
{
"first": "Kavitha",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Abdelaziz",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"T"
],
"last": "Dolby",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mccusker",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.09440"
]
},
"num": null,
"urls": [],
"raw_text": "Kavitha Srinivas, I. Abdelaziz, Julian T. Dolby, and J. McCusker. 2020. Graph4Code: A ma- chine interpretable knowledge graph for code. arXiv:2002.09440.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "PSCS: A path-based neural model for semantic code search",
"authors": [
{
"first": "Zhensu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.03042"
]
},
"num": null,
"urls": [],
"raw_text": "Zhensu Sun, Y. Liu, Chen Yang, and Yu Qian. 2020. PSCS: A path-based neural model for semantic code search. arXiv:2008.03042.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Anserini: Enabling the use of Lucene for information retrieval research",
"authors": [
{
"first": "Peilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17",
"volume": "",
"issue": "",
"pages": "1253--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of Lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Devel- opment in Information Retrieval, SIGIR '17, page 1253-1256, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Critically examining the \"neural hype\": Weak baselines and the additivity of effectiveness gains from neural ranking models",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kuang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Peilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19",
"volume": "",
"issue": "",
"pages": "1129--1132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. 2019. Critically examining the \"neural hype\": Weak baselines and the additivity of effectiveness gains from neural ranking models. In Proceedings of the 42nd International ACM SIGIR Conference on Re- search and Development in Information Retrieval, SIGIR'19, page 1129-1132, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Flexible IR pipelines with Capreolus",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"Martin"
],
"last": "Jose",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM '20",
"volume": "",
"issue": "",
"pages": "3181--3188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Yates, Kevin Martin Jose, Xinyu Zhang, and Jimmy Lin. 2020. Flexible IR pipelines with Capre- olus. In Proceedings of the 29th ACM International Conference on Information & Knowledge Manage- ment, CIKM '20, page 3181-3188, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "OCoR: An overlappingaware code retriever",
"authors": [
{
"first": "Qihao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiran",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yingfei",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, ASE '20",
"volume": "",
"issue": "",
"pages": "883--894",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qihao Zhu, Zeyu Sun, Xiran Liang, Yingfei Xiong, and Lu Zhang. 2020. OCoR: An overlapping- aware code retriever. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, ASE '20, page 883-894, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "the given string at the end of the current string value for key k.8 def putcat (k, v) 9 k = k.to_s; v = v.to_s 10 @db.putcat(k, v)11 end Docstring duplication example (unused docstring and extra blank lines are removed).",
"num": null
},
"TABREF1": {
"html": null,
"text": "Dataset Size Statistics.",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "Average length and vocabulary statistics after applying each pre-processing strategy.",
"content": "<table><tr><td>Models</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"text": "BM25 and RM3 parameter values explored.",
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}