{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:16.958297Z"
},
"title": "Textual Representations for Crosslingual Information Retrieval",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Liling",
"middle": [],
"last": "Tan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
    "abstract": "In this paper, we explored different levels of textual representation for cross-lingual information retrieval. Beyond the traditional token-level representation, we adopted the subword- and character-level representations that have been shown to improve neural machine translation by reducing out-of-vocabulary issues. Additionally, we improved the search performance by combining and re-ranking the result sets from the different text representations for German, French and Japanese.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
            "text": "In this paper, we explored different levels of textual representation for cross-lingual information retrieval. Beyond the traditional token-level representation, we adopted the subword- and character-level representations that have been shown to improve neural machine translation by reducing out-of-vocabulary issues. Additionally, we improved the search performance by combining and re-ranking the result sets from the different text representations for German, French and Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cross-lingual information retrieval (CLIR) systems commonly use machine translation (MT) systems to translate the user query to the language of the search index before retrieving the search results (Fujii and Ishikawa, 2000; Pecina et al., 2014; Saleh and Pecina, 2020; Bi et al., 2020) .",
"cite_spans": [
{
"start": 198,
"end": 224,
"text": "(Fujii and Ishikawa, 2000;",
"ref_id": "BIBREF3"
},
{
"start": 225,
"end": 245,
"text": "Pecina et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 246,
"end": 269,
"text": "Saleh and Pecina, 2020;",
"ref_id": "BIBREF20"
},
{
"start": 270,
"end": 286,
"text": "Bi et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
            "text": "Traditionally, information retrieval and machine translation systems convert search queries into token- and n-gram-level textual representations (Jiang and Zhai, 2007; McNamee and Mayfield, 2004; Leveling and Jones, 2010; Yarmohammadi et al., 2019) . Modern neural machine translation (NMT) systems have shown that subword and character representations with flexible vocabularies outperform fixed-vocabulary token-level translations (Sennrich et al., 2016; Lee et al., 2017; Kudo and Richardson, 2018) . This study explores the shared granularity of textual representations between machine translation and cross-lingual information retrieval.",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Jiang and Zhai, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 166,
"end": 193,
"text": "McNamee and Mayfield, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 194,
"end": 219,
"text": "Leveling and Jones, 2010;",
"ref_id": "BIBREF10"
},
{
"start": 220,
"end": 246,
"text": "Yarmohammadi et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 432,
"end": 455,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 456,
"end": 473,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 474,
"end": 500,
"text": "Kudo and Richardson, 2018;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Textual representations of varying granularity encode queries differently, resulting in more diverse and robust search retrieval. Potentially, subwords and character-level representations are less sensitive to irregularities in noisy user-generated queries, e.g. misspellings and dialectal variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "americium ist ein chemisches element ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens:",
"sec_num": null
},
{
            "text": "am er ic ium ist ein chemische s element ... Characters: a m e r i c i u m i s t e i n c h e m i s c h e s e l e m e n t Table 1 : Example of a Pre-processed Document with Different Text Representations",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subwords:",
"sec_num": null
},
{
            "text": "Neural machine translation has been shown to significantly outperform the older paradigm of statistical machine translation and has even \"achieved human parity in specific machine translation tasks\" (Hassan et al., 2018; L\u00e4ubli et al., 2018; Toral, 2020) . Moving from a fixed token-level vocabulary to a subword representation unlocks open-vocabulary capabilities that minimize out-of-vocabulary (OOV) issues 1 . Byte-Pair Encoding (BPE) is a popular subword algorithm that splits tokens into smaller units (Sennrich et al., 2016) . It is based on the intuition that smaller units of character sequences can be translated more easily across languages.",
"cite_spans": [
{
"start": 193,
"end": 214,
"text": "(Hassan et al., 2018;",
"ref_id": null
},
{
"start": 215,
"end": 235,
"text": "L\u00e4ubli et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 236,
"end": 248,
"text": "Toral, 2020)",
"ref_id": "BIBREF26"
},
{
"start": 498,
"end": 521,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
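The BPE merge procedure described above can be sketched in a few lines. This is an illustrative toy implementation (function names are ours), not the tokenizer used in the paper's experiments, which rely on pre-trained SentencePiece models:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge rules from a list of tokens (toy version)."""
    vocab = Counter(tuple(w) for w in words)  # each word as a char sequence
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq, freq in vocab.items():
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for seq, freq in vocab.items():
            out, i = [], 0
            while i < len(seq):
                if i < len(seq) - 1 and (seq[i], seq[i + 1]) == best:
                    out.append(seq[i] + seq[i + 1]); i += 2
                else:
                    out.append(seq[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

def apply_bpe(word, merges):
    """Apply learned merges to a single word, in learning order."""
    seq = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(seq):
            if i < len(seq) - 1 and seq[i] == a and seq[i + 1] == b:
                out.append(a + b); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq
```

For example, `apply_bpe("low", [("l", "o")])` yields `["lo", "w"]`.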
{
            "text": "For instance, subword units can better handle compound words via compositional German-to-English translations, such as schokolade \u2192 chocolate and schoko-creme \u2192 chocolate cream. Subwords can also cope with translations where part of the source token can simply be copied or translated, or with cognates and loanwords that undergo phonological or morphological transformations, e.g. positiv \u2192 positive and negativ \u2192 negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "While BPE reduces OOV instances, it requires the input to be pre-tokenized before applying the subword compression. Alternatively, Kudo and Richardson (2018) proposed a more language-agnostic approach to subword tokenization that operates directly on raw string inputs using unigram language models.",
"cite_spans": [
{
"start": 135,
"end": 161,
"text": "Kudo and Richardson (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "Completing the whole gamut of granular text representations, Lee et al. (2017) explored character-level neural machine translation, which requires no pre-processing and no subword or token-level tokenization. They found that multilingual many-to-one character-level NMT models are more efficient and can be as competitive as, or sometimes better than, subword NMT models. Moreover, character-level NMT can naturally handle intra-sentence code-switching; in the context of CLIR, such models would be able to handle mixed-language queries. Following this, later work found that a byte-level BPE vocabulary can be 1/8 the size of a full subword BPE vocabulary, and that a multilingual (many-to-one) NMT setting achieves the best translation quality, outperforming both subword and character-level models.",
"cite_spans": [
{
"start": 61,
"end": 78,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "While finer granularity of text representations has been exploited for machine translation, to the best of our knowledge, information retrieval studies have yet to examine the impact of using these subword representations in traditional information retrieval systems (Robertson, 2004; Robertson and Zaragoza, 2009; Aly et al., 2014) . Instead, many previous works have leapfrogged to fully neural information retrieval systems that represent text with various underlying subword representations and dense neural text representations.",
"cite_spans": [
{
"start": 253,
"end": 270,
"text": "(Robertson, 2004;",
"ref_id": "BIBREF18"
},
{
"start": 271,
"end": 300,
"text": "Robertson and Zaragoza, 2009;",
"ref_id": "BIBREF19"
},
{
"start": 301,
"end": 318,
"text": "Aly et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Often, these neural representations are available in multilingual settings in which the same neural language model can encode texts in multiple languages. Jiang et al. (2020) explored using the popular multilingual Bidirectional Encoder Representations from Transformers (BERT) model to learn the relevance between English queries and foreign language documents in a CLIR setup. They showed that the model outperforms competitive non-neural traditional IR systems on a few of the sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "Alternatively, previous research has also used a cascading approach combining machine translation and traditional IR, where (i) the documents are translated into the foreign languages with neural machine translation and/or (ii) the foreign queries are translated before retrieval from the source document index (Saleh and Pecina, 2020; Oard, 1998; McCarley, 1999) . Saleh and Pecina (2020) compared the effects of statistical machine translation (SMT) and NMT in a cascaded traditional CLIR setting. They found that the better-quality translations from NMT outperform SMT, and that translating queries into the source document language achieves better IR results than using foreign-language queries on an index of translated documents.",
"cite_spans": [
{
"start": 305,
"end": 329,
"text": "(Saleh and Pecina, 2020;",
"ref_id": "BIBREF20"
},
{
"start": 330,
"end": 341,
"text": "Oard, 1998;",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 358,
                    "text": "McCarley, 1999)",
"ref_id": null
},
{
"start": 361,
"end": 384,
"text": "Saleh and Pecina (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "Although fully neural IR systems are changing the paradigm of information retrieval, traditional IR approaches (e.g. TF-IDF or BM25) remain very competitive and can still outperform neural IR systems on some tasks (Boytsov, 2020; Jiang et al., 2020) . In this regard, we follow up on the cascading approach of machine translation and information retrieval with traditional IR systems. This study fills the knowledge gap in understanding the effects of subword representations in traditional IR indices.",
"cite_spans": [
{
"start": 215,
"end": 230,
"text": "(Boytsov, 2020;",
"ref_id": "BIBREF2"
},
{
"start": 231,
"end": 250,
"text": "Jiang et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
            "text": "We report experiments on different textual representations in traditional IR in a cross-lingual setting, using a large-scale dataset derived from Wikipedia (Sasaki et al., 2018) . Sasaki et al. (2018) focused their work on a supervised re-ranking task using relevance annotations. We use those annotations from the same Wikipedia dataset to perform the typical retrieval task. The dataset was designed so that English queries are expected to retrieve Wikipedia documents in the foreign languages, and the foreign documents with the highest relevance are annotated with three levels of relevance. Formally, the ground-truth data is a set of tuples (q, d, r), where q is an English query, d a foreign document, and r \u2208 {0, 1, 2} a relevance judgement. We note that the Wikipedia documents in the dataset are not parallel (i.e. not translations of each other) but comparable in nature, depending on the varying amounts of contributions available in the official Wikipedia dumps across languages. For our study, we use the German, French and Japanese document collections and report the retrieval performance of English queries translated to these languages. 3 The Wikipedia corpus came pre-tokenized, so we had to detokenize the documents 4 (Tan, 2018) before putting them through the subword tokenizer. We used the pre-trained SentencePiece subword tokenizers used by the OPUS machine translation models (Tiedemann and Thottingal, 2020) 5 . Additionally, we emulated the typical pre-processing steps for character-level machine translation and split all individual characters by space, replacing whitespaces with an underscore character. Table 2 shows the corpus statistics of the number of documents, tokens, subwords, and characters for the respective languages. Although Latin-alphabet languages gain extra information from splitting tokens into subwords, Japanese presents the opposite condition: Japanese became more compact when represented by subwords in place of tokens. The examples in Table 1 show an instance of a sentence pre-processed at different levels of granularity. The underscore in the subword sequence represents a symbolic space and is usually attached to the following subword unit, whereas the whitespace represents the unit boundary between subwords.",
"cite_spans": [
{
"start": 149,
"end": 179,
"text": "Wikipedia Sasaki et al. (2018)",
"ref_id": null
},
{
"start": 182,
"end": 202,
"text": "Sasaki et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 1163,
"end": 1164,
"text": "3",
"ref_id": null
},
{
"start": 1246,
"end": 1257,
"text": "(Tan, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1643,
"end": 1650,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 2032,
"end": 2040,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
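The character-level pre-processing described above (space-separating every character and replacing whitespace with an underscore) is straightforward to reproduce. A minimal sketch, with a helper name of our choosing:

```python
def to_char_representation(text: str) -> str:
    # Replace whitespace with an underscore placeholder, then
    # separate every individual character with a single space.
    return " ".join(text.replace(" ", "_"))
```

For example, `to_char_representation("am ist")` yields `"a m _ i s t"`, matching the Characters row in Table 1.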
{
            "text": "The English queries were translated using the same OPUS machine translation models. 6 Although these machine translation models are open source and free to use under a permissive CC-BY license, it takes a significant amount of GPU computation and major changes to the HuggingFace API (Wolf et al., 2020) to efficiently translate the query samples with parallelized inference. We will release the modified code for parallel GPU inference and the translation outputs for the data used in this experiment, to improve the replicability of this paper.",
"cite_spans": [
{
"start": 284,
"end": 303,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
            "text": "We use the Okapi BM25 implementation in PyLucene as the retrieval framework, with the hyperparameters k1 = 1.2 and b = 0.75 (Manning et al., 2008) . We consider the top 100 documents (top k = 100) in the search ranking as the search results for each query.",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval System",
"sec_num": "3.1"
},
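For reference, the Okapi BM25 scoring function with these hyperparameters has the following shape. This is an illustrative re-implementation using one common IDF variant, not the PyLucene internals:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_len,
               k1=1.2, b=0.75):
    """Score one document against a query with Okapi BM25.

    doc_freq: term -> number of documents containing the term.
    avg_len:  average document length in the collection.
    """
    tf = Counter(doc_terms)
    score = 0.0
    for term in set(query_terms):
        if term not in tf:
            continue
        df = doc_freq.get(term, 0)
        idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
        # Length normalization controlled by b; term-frequency saturation by k1.
        norm = tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_len)
        score += idf * tf[term] * (k1 + 1) / norm
    return score
```

The same scoring applies unchanged whether `doc_terms` holds tokens, subwords, or characters, which is what makes the representation comparison possible.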
{
            "text": "For each foreign language, we created an index for the documents with 5 TextFields as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building index for the documents",
"sec_num": "3.1.1"
},
{
            "text": "\u2022 id: the unique index of the document. During retrieval, each translated query is first processed into its respective text representations (tokens, subwords or characters) and parsed using Lucene's built-in query parser and analyzer. Additionally, we tried to improve the search results by combining and re-ranking the result sets from the different text representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building index for the documents",
"sec_num": "3.1.1"
},
{
            "text": "Our intuition is that queries in more granular text representations can improve the robustness of the retrieval and potentially overcome textual noise (e.g., misspellings are handled better for some languages). Hence, we attempt to expand the list of possible candidate documents by combining the search results from the token and the subword representations. Given a query q and its token representation q_token and subword representation q_subword, we obtained two sets of search results, R_tokens and R_subword, from their respective indices. We concatenated R_tokens and R_subword, and removed the repeated candidates that appear in both sets from R_subword, as illustrated in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Search result expansion",
"sec_num": "3.1.3"
},
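The expansion step can be sketched as follows; the function name and signature are ours, not from the paper's code:

```python
def expand_results(r_tokens, r_subword, top_k=100):
    # Keep the token-level ranking first, then append subword-level
    # candidates that were not already retrieved at the token level.
    seen = set(r_tokens)
    return (list(r_tokens) + [d for d in r_subword if d not in seen])[:top_k]
```

For example, `expand_results(["d1", "d2"], ["d2", "d3"])` yields `["d1", "d2", "d3"]`.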
{
            "text": "Aside from expanding the search results, we tried a re-ranking technique. We presumed that if different representations retrieve the same document for a single query, it is more relevant than documents that appear in only one representation's results. Thus, we boosted the rank of the documents (D_shared) that are retrieved in both R_tokens and R_subword for the same query. After boosting the rank of such documents (D_shared):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search result re-ranking",
"sec_num": "3.1.4"
},
{
            "text": "\u2200 d \u2208 D_shared : rank_new(d) = rank_original(d) \u2212 2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search result re-ranking",
"sec_num": "3.1.4"
},
{
            "text": "we re-rank the token-based search result, as illustrated in Figure 2 , to obtain the final search result R.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Search result re-ranking",
"sec_num": "3.1.4"
},
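The re-ranking rule above can be sketched as follows. The tie-breaking between boosted and original ranks is our assumption, as the paper does not specify it:

```python
def rerank(r_tokens, r_subword):
    # Documents retrieved by both representations get their rank
    # reduced by 2 (clamped at the top of the list); ties between
    # boosted and unboosted documents fall back to the original order.
    shared = set(r_tokens) & set(r_subword)
    keyed = []
    for rank, doc in enumerate(r_tokens):
        new_rank = max(rank - 2, 0) if doc in shared else rank
        keyed.append((new_rank, rank, doc))
    keyed.sort()
    return [doc for _, _, doc in keyed]
```

For example, `rerank(["a", "b", "c", "d"], ["c", "x"])` moves the shared document c up two positions, yielding `["a", "c", "b", "d"]`.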
{
            "text": "We choose the following ranking metrics to evaluate the retrieval performance of the different text representations of the query translations: Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), and normalized Discounted Cumulative Gain (nDCG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
            "text": "\u2022 MRR measures the rank of the first document that is relevant to a given query in the search result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
            "text": "\u2022 MAP evaluates the rankings of the top 100 documents that are relevant to a given query in the search result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
            "text": "\u2022 nDCG calibrates the ranking and relevance scores of all the documents that are relevant to a given query in the search result. We compute nDCG@16 over the top 16 search results. Tables 3, 4 and 5 show the results of the CLIR experiments on the translated English queries and the German, French, and Japanese documents under the different textual representations. For all the German and French setups, the token-level representation achieved the best MAP, MRR, and nDCG scores, followed by subwords at significantly lower performance. The character-level representation performs the worst, at a magnitude of 10^4 times worse than the token-level results. We expected a margin between the token- and subword-level performance, but the stark difference was surprising. Although machine translation can exploit the sequential nature of an open vocabulary with the subword representation, traditional information retrieval methods cannot exploit the finer-grained textual representations to the same extent. However, for Japanese, we see that the subword representation performs very similarly to its token counterpart.",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
            "text": "For German and French documents, the poor performance of the character-level representation can be attributed to the meaningless and arbitrary nature of an unordered bag of characters. In Japanese, by contrast, with its mix of syllabic and logographic orthography, individual characters can potentially encode crucial semantic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
            "text": "We can see that both the search result expansion and the re-ranking techniques can improve the final search results for some languages. Tables 3, 4 and 5 show that the search result expansion technique improves MRR for all three languages compared with the token-based retrieval baseline, and that it improves both MRR and MAP for Japanese. The re-ranking technique achieves the highest MRR for both German and Japanese. The improvement in MRR indicates that these two techniques can improve the ranking of the first relevant document in the search results, which can be beneficial for cross-lingual e-commerce search systems. Neither the expansion nor the re-ranking technique achieves a better nDCG score, which is consistent with our expectation of improving the accuracy and robustness of retrieval with minimal changes to the relevance scores that affect nDCG.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
            "text": "We explored different granularities of textual representation in a traditional IR system for the CLIR task by re-using the subword representations from neural machine translation systems. Our experiments provide empirical evidence for the underwhelming impact of subwords in traditional IR systems for Latin-script languages, as opposed to the advances that subword representations have brought to machine translation. 7 In some scenarios, it is possible to achieve better CLIR performance by combining and expanding the retrieval results of the token and subword representations.",
"cite_spans": [
{
"start": 439,
"end": 440,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
            "text": "We conducted the experiments in this study using well-formed queries and documents. Our intuition is that a combination of the different textual representations can improve the robustness of indexing and retrieval systems in realistic situations with noisier data (e.g. query misspellings or translation errors). For future work, we want to explore similar experiments with noisy e-commerce search datasets. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
            "text": "Although subwords allow more flexibility than tokens in creating unseen words, most NMT systems cannot support a genuinely open vocabulary; thus a backoff token <unk> is often used during inference to represent subwords that are not seen in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
            "text": "Note that a single English query can be mapped to multiple documents with varying relevance judgements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
            "text": "We use the raw dataset from http://www.cs.jhu.edu/~kevinduh/a/wikiclir2018/ for the document indices. 4 https://github.com/alvations/sacremoses 5 https://huggingface.co/Helsinki-NLP 6 We use the opus-mt-en-de, opus-mt-en-fr, and opus-mt-en-jap models; their BLEU and ChrF scores (Papineni et al., 2002; Popovi\u0107, 2015) can be found on https://huggingface.co/Helsinki-NLP (Tiedemann and Thottingal, 2020; Tiedemann, 2020)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
            "text": "The processed datasets, the code to generate the translations, and the evaluations will be made available under an open-source license upon paper acceptance. 8 We note that many open-source CLIR experiments are constrained to Wikipedia document searches. Although the lessons learned from these experiments can inform industrial e-commerce applications, the lack of open-source e-commerce IR datasets limits the results reported in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Probabilistic models in IR and their relationships",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Aly",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
}
],
"year": 2014,
"venue": "Inf. Retr",
"volume": "17",
"issue": "2",
"pages": "177--201",
"other_ids": {
"DOI": [
"10.1007/s10791-013-9226-3"
]
},
"num": null,
"urls": [],
            "raw_text": "Robin Aly, Thomas Demeester, and Stephen Robertson. 2014. Probabilistic models in IR and their relationships. Inf. Retr., 17(2):177-201.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Constraint translation candidates: A bridge between neural query translation and cross-lingual information retrieval",
"authors": [
{
"first": "Tianchi",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Baosong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Tianchi Bi, Liang Yao, Baosong Yang, Haibo Zhang, Weihua Luo, and Boxing Chen. 2020. Constraint translation candidates: A bridge between neural query translation and cross-lingual information retrieval.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
            "title": "Traditional IR rivals neural models on the MS MARCO document ranking leaderboard",
"authors": [
{
"first": "Leonid",
"middle": [],
"last": "Boytsov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Leonid Boytsov. 2020. Traditional IR rivals neural models on the MS MARCO document ranking leaderboard.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Applying machine translation to two-stage cross-language information retrieval",
"authors": [
{
"first": "Atsushi",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tetsuya",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2000,
"venue": "Envisioning Machine Translation in the Information Future",
"volume": "",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Atsushi Fujii and Tetsuya Ishikawa. 2000. Applying machine translation to two-stage cross-language information retrieval. In Envisioning Machine Translation in the Information Future, pages 13-24, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An empirical study of tokenization strategies for biomedical information retrieval",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Inf. Retr",
"volume": "10",
"issue": "4-5",
"pages": "341--363",
"other_ids": {
"DOI": [
"10.1007/s10791-007-9027-7"
]
},
"num": null,
"urls": [],
            "raw_text": "Jing Jiang and ChengXiang Zhai. 2007. An empirical study of tokenization strategies for biomedical information retrieval. Inf. Retr., 10(4-5):341-363.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Crosslingual information retrieval with BERT",
"authors": [
{
"first": "Zhuolin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Amro",
"middle": [],
"last": "El-Jaroudi",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Lingjun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020)",
"volume": "",
"issue": "",
"pages": "26--31",
"other_ids": {},
"num": null,
"urls": [],
            "raw_text": "Zhuolin Jiang, Amro El-Jaroudi, William Hartmann, Damianos Karakos, and Lingjun Zhao. 2020. Cross-lingual information retrieval with BERT. In Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020), pages 26-31, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
            "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Has machine translation achieved human parity? a case for document-level evaluation",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "L\u00e4ubli",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4791--4796",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1512"
]
},
"num": null,
"urls": [],
            "raw_text": "Samuel L\u00e4ubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? A case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791-4796, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "365--378",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00067"
]
},
"num": null,
"urls": [],
            "raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
            "title": "Subword indexing and blind relevance feedback for English, Bengali, Hindi, and Marathi IR",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Gareth",
"middle": [
"J",
"F"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "9",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1838745.1838749"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Leveling and Gareth J. F. Jones. 2010. Sub- word indexing and blind relevance feedback for en- glish, bengali, hindi, and marathi ir. ACM Trans- actions on Asian Language Information Processing, 9(3).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Introduction to information retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Prabhakar Raghavan, Hinrich Sch\u00fctze, et al. 2008. Introduction to information re- trieval, volume 1. Cambridge university press Cam- bridge.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Should we translate the documents or the queries in cross-language information retrieval",
"authors": [
{
"first": "J",
"middle": [
"Scott"
],
"last": "McCarley",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "208--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Scott McCarley. 1999. Should we translate the doc- uments or the queries in cross-language information retrieval? In Proceedings of the 37th Annual Meet- ing of the Association for Computational Linguistics, pages 208-214.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Character n-gram tokenization for european language text retrieval",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2004,
"venue": "Information Retrieval",
"volume": "7",
"issue": "1-2",
"pages": "73--97",
"other_ids": {
"DOI": [
"10.1023/B:INRT.0000009441.78971.be"
]
},
"num": null,
"urls": [],
"raw_text": "Paul McNamee and James Mayfield. 2004. Character n-gram tokenization for european language text re- trieval. Information Retrieval, 7(1-2):73-97.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A comparative study of query and document translation for cross-language information retrieval",
"authors": [
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
}
],
"year": 1998,
"venue": "Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "472--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas W Oard. 1998. A comparative study of query and document translation for cross-language infor- mation retrieval. In Conference of the Association for Machine Translation in the Americas, pages 472- 483. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adaptation of machine translation for multilingual information retrieval in the medical domain",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Lorraine",
"middle": [],
"last": "Goeuriot",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jaroslava",
"middle": [],
"last": "Hlav\u00e1\u010dov\u00e1",
"suffix": ""
},
{
"first": "Gareth",
"middle": [
"J",
"F"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Liadh",
"middle": [],
"last": "Kelly",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Ale\u0161",
"middle": [],
"last": "Tamchyna",
"suffix": ""
},
{
"first": "Zde\u0148ka",
"middle": [],
"last": "Ure\u0161ov\u00e1",
"suffix": ""
}
],
"year": 2014,
"venue": "Artificial Intelligence in Medicine",
"volume": "61",
"issue": "3",
"pages": "165--185",
"other_ids": {
"DOI": [
"10.1016/j.artmed.2014.01.004"
]
},
"num": null,
"urls": [],
"raw_text": "Pavel Pecina, Ond\u0159ej Du\u0161ek, Lorraine Goeuriot, Jan Haji\u010d, Jaroslava Hlav\u00e1\u010dov\u00e1, Gareth J.F. Jones, Liadh Kelly, Johannes Leveling, David Mare\u010dek, Michal Nov\u00e1k, Martin Popel, Rudolf Rosa, Ale\u0161 Tamchyna, and Zde\u0148ka Ure\u0161ov\u00e1. 2014. Adaptation of ma- chine translation for multilingual information re- trieval in the medical domain. Artificial Intelligence in Medicine, 61(3):165 -185. Text Mining and In- formation Analysis of Health Documents.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Understanding inverse document frequency: On theoretical arguments for idf",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Documentation",
"volume": "60",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson. 2004. Understanding inverse doc- ument frequency: On theoretical arguments for idf. Journal of Documentation, 60:2004.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The probabilistic relevance framework: BM25 and beyond",
"authors": [
{
"first": "Stephen",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "Found. Trends Inf. Retr",
"volume": "3",
"issue": "4",
"pages": "333--389",
"other_ids": {
"DOI": [
"10.1561/1500000019"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Found. Trends Inf. Retr., 3(4):333-389.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Document translation vs. query translation for cross-lingual information retrieval in the medical domain",
"authors": [
{
"first": "Shadi",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6849--6860",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.613"
]
},
"num": null,
"urls": [],
"raw_text": "Shadi Saleh and Pavel Pecina. 2020. Document trans- lation vs. query translation for cross-lingual informa- tion retrieval in the medical domain. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6849-6860, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cross-lingual learning-to-rank with shared representations",
"authors": [
{
"first": "Shota",
"middle": [],
"last": "Sasaki",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shigehiko",
"middle": [],
"last": "Schamoni",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "458--463",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2073"
]
},
"num": null,
"urls": [],
"raw_text": "Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-lingual learning-to-rank with shared representations. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 458-463, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sacremoses: Python implementations of moses statistical machine translation pre-processing tools",
"authors": [
{
"first": "Liling",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liling Tan. 2018. Sacremoses: Python im- plementations of moses statistical ma- chine translation pre-processing tools. https://github.com/alvations/sacremoses.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Tatoeba Translation Challenge -Realistic data sets for low resource and multilingual MT",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2020. The Tatoeba Translation Chal- lenge -Realistic data sets for low resource and mul- tilingual MT. In Proceedings of the Fifth Confer- ence on Machine Translation (Volume 1: Research Papers). Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "OPUS-MT -Building open translation services for the World",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Santhosh",
"middle": [],
"last": "Thottingal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT -Building open translation services for the World. In Proceedings of the 22nd Annual Con- ferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reassessing claims of human parity and super-human performance in machine translation at wmt",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Toral. 2020. Reassessing claims of human par- ity and super-human performance in machine trans- lation at wmt 2019.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural machine translation with byte-level subwords",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03341"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2019. Neural machine translation with byte-level subwords. arXiv preprint arXiv:1909.03341.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Robust document representations for cross-lingual information retrieval in low-resource settings",
"authors": [
{
"first": "Mahsa",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Sorami",
"middle": [],
"last": "Hisamoto",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hainan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2019,
"venue": "Research Track",
"volume": "1",
"issue": "",
"pages": "12--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahsa Yarmohammadi, Xutai Ma, Sorami Hisamoto, Muhammad Rahman, Yiming Wang, Hainan Xu, Daniel Povey, Philipp Koehn, and Kevin Duh. 2019. Robust document representations for cross-lingual information retrieval in low-resource settings. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 12-20, Dublin, Ireland. European Association for Machine Transla- tion.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "surface: the raw text of the document \u2022 tokens: the document after tokenization \u2022 subword: the document in SentencePiece subwords \u2022 char: the document in characters 3.1.2 Querying the document index",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Search Results Expansion",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Search Results Re-ranking",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"text": "Corpus statistics on Wikipedia documents in dataset fromSasaki et al. (2018). (All numbers are in units of one million)",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Results of CLIR Experiments on Translated English Queries on German Wikipedia",
"html": null,
"content": "<table><tr><td colspan=\"5\">Metric Token Subword Characters Expansion Re-ranking</td></tr><tr><td>MAP</td><td>0.30330 0.06931</td><td>0.00035</td><td>0.29859</td><td>0.29898</td></tr><tr><td>MRR</td><td>0.37866 0.08492</td><td>0.00039</td><td>0.37872</td><td>0.37830</td></tr><tr><td colspan=\"2\">nDCG 0.36810 0.09153</td><td>0.00060</td><td>0.36397</td><td>0.36537</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "Results of CLIR Experiments on Translated English Queries on French Wikipedia",
"html": null,
"content": "<table><tr><td colspan=\"5\">Metric Token Subword Characters Expansion Re-ranking</td></tr><tr><td>MAP</td><td>0.00039 0.00036</td><td>0.00024</td><td>0.00036</td><td>0.00024</td></tr><tr><td>MRR</td><td>0.00038 0.00037</td><td>0.00025</td><td>0.00037</td><td>0.00025</td></tr><tr><td colspan=\"2\">nDCG 0.00076 0.00054</td><td>0.00022</td><td>0.00074</td><td>0.00075</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"text": "",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}