{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:44.246805Z"
},
"title": "Efficient Machine Translation Domain Adaptation",
"authors": [
{
"first": "Pedro",
"middle": [
"Henrique"
],
"last": "Martins",
"suffix": "",
"affiliation": {
"laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)",
"institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon",
"location": {
"country": "Portugal"
}
},
"email": "[email protected]"
},
{
"first": "Zita",
"middle": [],
"last": "Marinho",
"suffix": "",
"affiliation": {
"laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)",
"institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon",
"location": {
"country": "Portugal"
}
},
"email": "[email protected]"
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": "",
"affiliation": {
"laboratory": "Instituto de Telecomunica\u00e7\u00f5es DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)",
"institution": "Instituto Superior T\u00e9cnico Unbabel Lisbon",
"location": {
"country": "Portugal"
}
},
"email": "[email protected]."
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrievalaugmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 1",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrievalaugmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern neural machine translation models are mostly parametric (Bahdanau et al., 2015; Vaswani et al., 2017) , meaning that, for each input, the output depends only on a fixed number of model parameters, obtained using some training data, hopefully in the same domain. However, when running machine translation systems in the wild, it is often the case that the model is given input sentences or documents from domains that were not part of the training data, which frequently leads to subpar translations. One solution is training or fine-tuning the entire model or just part of it for each domain, but this can be expensive and may lead to catastrophic forgetting (Saunders, 2021) .",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 87,
"end": 108,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 666,
"end": 682,
"text": "(Saunders, 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, an approach that has achieved promising results is augmenting parametric models with a retrieval component, leading to semi-parametric models (Gu et al., 2018; Zhang et al., 2018; Bapna and Firat, 2019; Khandelwal et al., 2021; Meng et al., 2021; Jiang et al., 2021) . These models construct a datastore based on a set of source / target sentences or word-level contexts (translation memories) and retrieve similar examples from this datastore, using this information in the generation process. This allows having only one model that can be used for every domain. However, the model's runtime increases with the size of the domain's datastore and searching for related examples on large datastores can be computationally very expensive: for example, when retrieving 64 neighbors from the datastore, the model may become two orders of magnitude slower (Khandelwal et al., 2021) . Due to this, some recent works have proposed methods that aim to make this process more efficient. Meng et al. (2021) proposed constructing a different datastore for each source sentence, by first searching for the neighbors of the source tokens; and He et al. (2021) proposed several techniques -datastore pruning, adaptive retrieval, dimension reduction -for nearest neighbor language modeling.",
"cite_spans": [
{
"start": 152,
"end": 169,
"text": "(Gu et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 170,
"end": 189,
"text": "Zhang et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 190,
"end": 212,
"text": "Bapna and Firat, 2019;",
"ref_id": "BIBREF2"
},
{
"start": 213,
"end": 237,
"text": "Khandelwal et al., 2021;",
"ref_id": "BIBREF7"
},
{
"start": 238,
"end": 256,
"text": "Meng et al., 2021;",
"ref_id": "BIBREF9"
},
{
"start": 257,
"end": 276,
"text": "Jiang et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 861,
"end": 886,
"text": "(Khandelwal et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 988,
"end": 1006,
"text": "Meng et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 1140,
"end": 1156,
"text": "He et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we adapt several methods proposed by He et al. (2021) to machine translation, and we further propose a new approach that increases the model's efficiency: the use of a retrieval distributions cache. By caching the kNN probability distributions, together with the corresponding decoder representations, for the previous steps of the generation of the current translation(s), the model can quickly retrieve the retrieval distribution when the current representation is similar to a cached one, instead of having to search for neighbors in the datastore at every single step.",
"cite_spans": [
{
"start": 52,
"end": 68,
"text": "He et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform a thorough analysis of the model's efficiency on a controlled setting, which shows that the combination of our proposed techniques results in a model, the efficient kNN-MT, which is approx-imately twice as fast as the vanilla kNN-MT. This comes without harming translation performance, which is, on average, more than 8 BLEU points and 5 COMET points better than the base MT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In sum, this paper presents the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We adapt the methods proposed by He et al. (2021) for efficient nearest neighbor language modeling to machine translation.",
"cite_spans": [
{
"start": 35,
"end": 51,
"text": "He et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a caching strategy to store the retrieval probability distributions, improving the translation speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We compare the efficiency and translation quality of the different methods, which show the benefits of the proposed and adapted techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When performing machine translation, the model is given a source sentence or document,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "x = [x 1 , . . . , x L ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": ", on one language, and the goal is to output a translation of the sentence in the desired language, y = [y 1 , . . . , y N ]. This is usually done using a parametric sequence-to-sequence model (Bahdanau et al., 2015; Vaswani et al., 2017) , in which the encoder receives the source sentence as input and outputs a set of hidden states. Then, at each step t, the decoder attends to these hidden states and outputs a probability distribution p NMT (y t |y <t , x) over the vocabulary. Finally, these probability distributions are used to predict the output tokens, typically with beam search.",
"cite_spans": [
{
"start": 193,
"end": 216,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 217,
"end": 238,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
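{
"text": "As an illustration, a minimal sketch of this decoding loop (greedy search instead of beam search, for brevity; the step function is a hypothetical stand-in for the encoder-decoder forward pass):\n\nimport torch\n\ndef greedy_decode(step_fn, bos_id, eos_id, max_len=50):\n    # step_fn(prefix) -> p_NMT(y_t | y_<t, x): a 1-D tensor over the vocabulary.\n    ys = [bos_id]\n    for _ in range(max_len):\n        p_nmt = step_fn(torch.tensor(ys))\n        y_t = int(p_nmt.argmax())  # greedy; beam search would keep the top-B prefixes instead\n        ys.append(y_t)\n        if y_t == eos_id:\n            break\n    return ys\n\n# Toy usage: a fake step function over a 5-token vocabulary.\nfake_step = lambda prefix: torch.tensor([0.05, 0.05, 0.8, 0.05, 0.05])\nprint(greedy_decode(fake_step, bos_id=0, eos_id=2))  # -> [0, 2]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},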
{
"text": "Khandelwal et al. (2021) introduced a nearest neighbor machine translation model, kNN-MT, which is a semi-parametric model. This means that besides having a parametric component that outputs a probability distribution over the vocabulary, p NMT (y t |y <t , x), the model also has a nearest neighbor retrieval mechanism, which allows direct access to a datastore of examples. More specifically, we build a datastore D which consists of a key-value memory, where each entry key is the decoder's output representation, f (x, y <t ), and the value is the target token y t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = {(f (x, y <t ) , y t ) \u2200y t \u2208 y | (x, y) \u2208 (X , Y)},",
"eq_num": "(1)"
}
],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
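{
"text": "As an illustration, a minimal sketch of building such a datastore with the FAISS library (Johnson et al., 2019), assuming the decoder states f (x, y <t ) for the parallel data have already been collected into an array (the index type and toy sizes are our choices, not necessarily those of Khandelwal et al. (2021)):\n\nimport numpy as np\nimport faiss\n\ndef build_datastore(keys, values, dim):\n    # keys: float32 [n, dim] decoder states f(x, y_<t);\n    # values: int64 [n] corresponding target tokens y_t.\n    index = faiss.IndexFlatL2(dim)  # exact L2 search; large datastores use quantized indexes\n    index.add(keys.astype(np.float32))\n    return index, values\n\n# Toy usage with random states and tokens.\nkeys = np.random.rand(1000, 16).astype(np.float32)\nvalues = np.random.randint(0, 100, size=1000)\nindex, values = build_datastore(keys, values, dim=16)\ndistances, ids = index.search(keys[:1], 8)  # 8 nearest neighbors of one query",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},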
{
"text": "where (X , Y) corresponds to a set of parallel source and target sequences. Then, at inference time, the model searches the datastore to retrieve the set of k nearest neighbors N . Using their distances d(\u2022) to the current decoder's output representation, we can compute the retrieval distribution p kNN (y t |y <t , x) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
{
"text": "p kNN (y t |y <t , x) = (2) (k j ,v j )\u2208N 1 yt=v j exp (\u2212d (k j , f (x, y <t )) /T ) (k j ,v j )\u2208N exp (\u2212d (k j , f (x, y <t )) /T ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
{
"text": "where T is the softmax temperature, k j denotes the key of the j th neighbor and v j its value. Finally, p NMT (y t |y <t , x) and p kNN (y t |y <t , x) are combined to obtain the final distribution, which is used to generate the translation through beam search, by performing interpolation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
{
"text": "p(y t |y <t , x) =(1 \u2212 \u03bb) p NMT (y t |y <t , x) (3) + \u03bb p kNN (y t |y <t , x),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
{
"text": "where \u03bb is a hyper-parameter that controls the weights given to the two distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},
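{
"text": "As an illustration, a sketch of this scoring step (eqs. 2 and 3) in PyTorch, aggregating the retrieved neighbors that share the same target token (tensor shapes are our assumptions):\n\nimport torch\n\ndef knn_distribution(dists, neighbor_tokens, vocab_size, T):\n    # dists: [k] distances d(k_j, f(x, y_<t)); neighbor_tokens: [k] values v_j.\n    weights = torch.softmax(-dists / T, dim=-1)\n    p_knn = torch.zeros(vocab_size)\n    p_knn.scatter_add_(0, neighbor_tokens, weights)  # sum the weights of equal tokens\n    return p_knn\n\ndef interpolate(p_nmt, p_knn, lam):\n    # Eq. 3: the final distribution fed to beam search.\n    return (1 - lam) * p_nmt + lam * p_knn\n\n# Toy usage: 4 neighbors over a 10-token vocabulary.\np_knn = knn_distribution(torch.tensor([1.0, 2.0, 2.5, 3.0]), torch.tensor([7, 7, 3, 1]), 10, T=10.0)\np = interpolate(torch.full((10,), 0.1), p_knn, lam=0.7)\nprint(p.sum())  # -> 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbor Machine Translation",
"sec_num": "2.1"
},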
{
"text": "In this section, we describe the approaches introduced by He et al. (2021) to speed-up the inference time for nearest neighbor language modeling, such as pruning the datastore ( \u00a73.1) and reducing the representations dimension ( \u00a73.2), which we adapt to machine translation. We further describe a novel method that allows the model to have access to examples without having to search them in the datastore at every step, by maintaining a cache of the past retrieval distributions, for the current translation(s) ( \u00a73.3).",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "He et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient kNN-MT",
"sec_num": "3"
},
{
"text": "has the same value, we merge the two entries, by simply removing the neighboring one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datastore Pruning",
"sec_num": "3.1"
},
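{
"text": "As an illustration, a simplified sketch of greedy merging over a FAISS index of the datastore keys (this is our reading of the procedure; He et al. (2021) give the full algorithm):\n\nimport numpy as np\nimport faiss\n\ndef greedy_merge(keys, values, k):\n    index = faiss.IndexFlatL2(keys.shape[1])\n    index.add(keys)\n    _, nbrs = index.search(keys, k + 1)  # first hit is the entry itself\n    keep = np.ones(len(values), dtype=bool)\n    merged = np.zeros(len(values), dtype=bool)\n    for i in range(len(values)):\n        if not keep[i]:\n            continue\n        for j in nbrs[i, 1:]:\n            if j != i and keep[j] and not merged[j] and values[j] == values[i]:\n                keep[j] = False  # merge by removing the neighboring entry\n                merged[i] = True\n    return keys[keep], values[keep]\n\nkeys = np.random.rand(1000, 16).astype(np.float32)\nvalues = np.random.randint(0, 5, size=1000)\npruned_keys, pruned_values = greedy_merge(keys, values, k=2)\nprint(len(pruned_values))  # < 1000",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datastore Pruning",
"sec_num": "3.1"
},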
{
"text": "The decoder's output representations, f (x, y <t ) are, usually, high-dimensional (1024, in our case). This leads to a high computational cost when computing vector distances, which are needed for retrieving neighbors from the datastore. To alleviate this, we follow He et al. 2021, and use principal component analysis (PCA), an efficient dimension reduction method, to reduce the dimension of the decoder's output representation to a pre-defined dimension, d, and generate a compressed datastore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimension Reduction",
"sec_num": "3.2"
},
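{
"text": "As an illustration, a sketch of this compression using the PCA transform shipped with FAISS (the dimensions follow the paper; the exact pipeline of He et al. (2021) may differ):\n\nimport numpy as np\nimport faiss\n\ndef compress_datastore(keys, d_out):\n    # Learn a PCA projection on the stored keys and index the reduced vectors.\n    pca = faiss.PCAMatrix(keys.shape[1], d_out)\n    pca.train(keys)\n    index = faiss.IndexFlatL2(d_out)\n    index.add(pca.apply_py(keys))\n    return pca, index\n\nkeys = np.random.rand(5000, 1024).astype(np.float32)\npca, index = compress_datastore(keys, d_out=256)\n# At inference time, the queries f(x, y_<t) must be projected with the same matrix:\nquery = np.random.rand(1, 1024).astype(np.float32)\ndistances, ids = index.search(pca.apply_py(query), 8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dimension Reduction",
"sec_num": "3.2"
},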
{
"text": "The model does not need to search the datastore at every step of the translation generation in order to do it correctly. Here, we aim to predict when it needs to retrieve neighbors from the datastore, so that, by only searching the datastore in the necessary steps, we can increase the generation speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},
{
"text": "Adaptive retrieval. To do so, first we follow He et al. 2021, and use a simple MLP to predict the value of the interpolation coefficient \u03bb at each step. Then, we define a threshold, \u03b1, so that the model only performs retrieval when \u03bb > \u03b1. However, we observed that this leads to results ( \u00a7A.3) similar to randomly selecting when to search the datastore. We posit that this occurs because it is difficult to predict when the model should perform retrieval, for domain adaptation (He et al., 2021) , and because in machine translation error propagation occurs more prominently than in language modeling.",
"cite_spans": [
{
"start": 479,
"end": 496,
"text": "(He et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},
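{
"text": "As an illustration, a sketch of such a predictor (a hypothetical two-layer MLP on the decoder state; the hidden size is our choice, and He et al. (2021) also use additional input features):\n\nimport torch\nimport torch.nn as nn\n\nclass LambdaPredictor(nn.Module):\n    # Maps the decoder state to a predicted interpolation weight in [0, 1].\n    def __init__(self, dim, hidden=128):\n        super().__init__()\n        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))\n\n    def forward(self, h):\n        return torch.sigmoid(self.net(h)).squeeze(-1)\n\npredictor = LambdaPredictor(dim=1024)\nlam = predictor(torch.randn(1024))\nretrieve = bool(lam > 0.5)  # alpha = 0.5: search the datastore only when lambda is large",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},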
{
"text": "Cache. Because it is common to have similar contexts along the generation process, when using beam search, the model can be often retrieving similar neighbors at different steps, which is not efficient. To avoid repeating searches on the datastore for similar context vectors, f (x, y <t ), we propose keeping a cache of the previous retrieval distributions, of the current translation(s). More specifically, at each step of the generation of y, we add the decoder's representation vector along with the retrieval distribution p kNN (y t |y <t , x), corresponding to all beams, B, to the cache C:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C={(f (x, y <t ), p kNN (y t |y <t ,x))\u2200y t \u2208 y | y \u2208 B}.",
"eq_num": "(4)"
}
],
"section": "Cache",
"sec_num": "3.3"
},
{
"text": "Then, at each step of the generation, we compute the Euclidean distance between the current decoder's representation and the keys on the cache. If all distances are bigger than a threshold \u03c4 , the model searches the datastore to find the nearest neighbors. Otherwise, the model retrieves, from the cache, the retrieval distribution that corresponds to the closest key.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},
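{
"text": "As an illustration, a minimal sketch of this cache (search_datastore is a hypothetical hook for the FAISS search and eq. 2 of \u00a72.1):\n\nimport torch\n\nclass RetrievalCache:\n    def __init__(self, tau):\n        self.tau = tau\n        self.keys = []   # decoder states f(x, y_<t), for all beams\n        self.probs = []  # the corresponding retrieval distributions p_kNN\n\n    def add(self, key, p_knn):\n        self.keys.append(key)\n        self.probs.append(p_knn)\n\n    def lookup(self, query):\n        # Return the cached p_kNN of the closest key, or None if every key\n        # is farther than tau (then the datastore must be searched).\n        if not self.keys:\n            return None\n        d = torch.cdist(query[None, :], torch.stack(self.keys)).squeeze(0)\n        j = int(d.argmin())\n        return self.probs[j] if d[j] <= self.tau else None\n\ncache = RetrievalCache(tau=6.0)\ncache.add(torch.zeros(4), torch.tensor([0.25, 0.25, 0.25, 0.25]))\nprint(cache.lookup(torch.zeros(4)))           # hit: distance 0 <= tau\nprint(cache.lookup(torch.full((4,), 100.0)))  # None: search the datastore instead\n\n# Inside the decoding loop (sketch):\n# p_knn = cache.lookup(h)\n# if p_knn is None:\n#     p_knn = search_datastore(h)  # hypothetical FAISS search + eq. 2\n#     cache.add(h, p_knn)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache",
"sec_num": "3.3"
},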
{
"text": "Dataset and metrics. We perform experiments on the Medical, Law, IT, and Koran domain data of the multi-domains dataset (Koehn and Knowles, 2017) re-splitted by Aharoni and Goldberg (2020) .",
"cite_spans": [
{
"start": 120,
"end": 145,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 161,
"end": 188,
"text": "Aharoni and Goldberg (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To build the datastores we use the in-domain training sets which have from 17,982 to 467,309 sentences. The validation and test sets have 2,000 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To evaluate the models we use BLEU (Papineni et al., 2002; Post, 2018) and COMET (Rei et al., 2020) .",
"cite_spans": [
{
"start": 35,
"end": 58,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF12"
},
{
"start": 59,
"end": 70,
"text": "Post, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 81,
"end": 99,
"text": "(Rei et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
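{
"text": "As an illustration, a sketch of the evaluation with the sacrebleu and unbabel-comet packages (the model name and call signatures follow the versions contemporary with the paper and may differ in later releases):\n\nimport sacrebleu\nfrom comet import download_model, load_from_checkpoint\n\nsrcs = ['der Patient wurde behandelt .']\nhyps = ['the patient was treated .']\nrefs = ['the patient was treated .']\n\n# Corpus-level BLEU over the hypotheses against one reference set.\nbleu = sacrebleu.corpus_bleu(hyps, [refs])\nprint(bleu.score)\n\n# COMET scores each (source, hypothesis, reference) triple.\ncomet_model = load_from_checkpoint(download_model('wmt20-comet-da'))\ndata = [{'src': s, 'mt': h, 'ref': r} for s, h, r in zip(srcs, hyps, refs)]\nseg_scores, sys_score = comet_model.predict(data, batch_size=8, gpus=0)\nprint(sys_score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},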
{
"text": "Settings. We use the WMT'19 German-English news translation task winner ) (with 269 M parameters), available on the Fairseq library , as the base MT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As baselines, we consider the base MT model, the vanilla kNN-MT model (Khandelwal et al., 2021) , and the Fast kNN-MT model (Meng et al., 2021) . For all models, which perform retrieval, we select the hyper-parameters, for each method and each domain, by performing grid search on k \u2208 {8, 16, 32, 64} and \u03bb \u2208 {0.5, 0.6, 0.7, 0.8}. The selected values are stated in Table 9 of App. B.",
"cite_spans": [
{
"start": 70,
"end": 95,
"text": "(Khandelwal et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 124,
"end": 143,
"text": "(Meng et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
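{
"text": "As an illustration, a sketch of this grid search (the validate hook, returning validation BLEU for a configuration, is hypothetical):\n\nfrom itertools import product\n\ndef grid_search(validate, ks=(8, 16, 32, 64), lambdas=(0.5, 0.6, 0.7, 0.8)):\n    # Evaluate every (k, lambda) pair on the in-domain validation set.\n    return max(product(ks, lambdas), key=lambda cfg: validate(*cfg))\n\n# Toy usage with a fake validation function peaking at k=8, lambda=0.7.\nbest_k, best_lam = grid_search(lambda k, lam: -abs(k - 8) - abs(lam - 0.7))\nprint(best_k, best_lam)  # -> 8 0.7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},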
{
"text": "For the vanilla kNN-MT model and the efficient kNN-MT we follow Khandelwal et al. (2021) and use the Euclidean distance to perform retrieval and the proposed softmax temperature. For the Fast kNN-MT, we use the cosine distance and the softmax temperature proposed by Meng et al. (2021) . For the efficient kNN-MT we selected parameters that ensure a good speed/quality trade-off: k = 2 for datastore pruning, d = 256 for PCA, and \u03c4 = 6 as the cache threshold. Results for each methods using different parameters are reported in App. A.",
"cite_spans": [
{
"start": 64,
"end": 88,
"text": "Khandelwal et al. (2021)",
"ref_id": "BIBREF7"
},
{
"start": 267,
"end": 285,
"text": "Meng et al. (2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The translation scores are reported on Figure 1 : Plots of the generation speed (tokens/s) for the different models on the medical, law, IT, and Koran domains, for different batch sizes (1, 8, 16) . The generation speed (y-axis) is in log scale. When using the Fast kNN-MT model, the maximum batch size that we are able to use is 2, due to out of memory errors. points and 5 COMET points more than the base MT model.",
"cite_spans": [
{
"start": 186,
"end": 189,
"text": "(1,",
"ref_id": null
},
{
"start": 190,
"end": 192,
"text": "8,",
"ref_id": null
},
{
"start": 193,
"end": 196,
"text": "16)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "Computational infrastructure. All experiments were performed on a server with 3 RTX 2080 Ti (11 GB), 12 AMD Ryzen 2920X CPUs (24 cores), and 128 Gb of RAM. For the generation speed measurements, we ran each model on a single GPU while no other process was running on the server, to have a controlled environment. To search the datastore, we used the FAISS library (Johnson et al., 2019) . When using the vanilla kNN-MT and efficient kNN-MT, the nearest neighbor search is performed on the CPUs, since not all datastores fit into memory, while when using the Fast kNN-MT this is done on the GPU.",
"cite_spans": [
{
"start": 364,
"end": 386,
"text": "(Johnson et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation speed",
"sec_num": "4.2"
},
{
"text": "Analysis. As can be seen on the plots of Figure 1 , for a batch size of 1 Fast kNN-MT leads to a generation speed higher than our proposed method and vanilla kNN-MT. However, because of its high memory requirements, we are not able to run Fast kNN-MT for batch sizes larger than 2, on the computational infrastructure stated above. On the contrary, when using the proposed methods (efficient kNN-MT) we are able to run the model with higher batch sizes, achieving superior generation speeds to Fast kNN-MT and vanilla kNN-MT, and reducing the gap to the base MT model. Ablation. We plot the generation speed for different combinations of the proposed methods (averaged across domains), for several batch sizes, on Figure 2 . On this plot, we can clearly see that every method contributes to the speed-up achieved by the model that combines all approaches. Moreover, we can observe that the method which leads to the largest speed-up is the use of a cache of retrieval distributions, by saving, on average 57% of the retrieval searches.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 1",
"ref_id": null
},
{
"start": 714,
"end": 722,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generation speed",
"sec_num": "4.2"
},
{
"text": "In this paper we propose the efficient kNN-MT, in which we combine several methods to improve the kNN-MT generation speed. First, we adapted to machine translation methods that improve retrieval efficiency in language modeling (He et al., 2021) . Then we proposed a new method which consists on keeping in cache the previous retrieval distributions so that the model does not need to search for neighbors in the datastore at every step. Through experiments on domain adaptation, we show that the combination of the proposed methods leads to a considerable speed-up (up to 2x) without harming the translation performance substantially.",
"cite_spans": [
{
"start": 227,
"end": 244,
"text": "(He et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In this section we report the BLEU scores as well as additional statistics for the different methods, when varying their hyper-parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional results",
"sec_num": null
},
{
"text": "We report on Table 2 the BLEU scores for datastore pruning, when varying the number of neighbors used for greedy merging, k. The resulting datastore sizes are presented on Table 3 4,039,432 11,103,775 2,303,808 353,007 k = 5 3,084,106 8,486,551 1,852,191 290,192 ",
"cite_spans": [
{
"start": 180,
"end": 262,
"text": "4,039,432 11,103,775 2,303,808 353,007 k = 5 3,084,106 8,486,551 1,852,191 290,192",
"ref_id": null
}
],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 172,
"end": 179,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "A.1 Datastore pruning",
"sec_num": null
},
{
"text": "We report on Table 4 the BLEU scores for dimension reduction, when varying the output dimension d. ",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A.2 Dimension reduction",
"sec_num": null
},
{
"text": "We report on Table 5 the BLEU scores for adaptive retrieval, when varying the threshold \u03b1. The percentage of times the model performs retrieval is stated on Table 6 . Table 6 : Percentage of times the model searches for neighbors on the datastore when performing adaptive retrieval for different values of the threshold \u03b1, for a batch size of 8.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 157,
"end": 164,
"text": "Table 6",
"ref_id": null
},
{
"start": 167,
"end": 174,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Adaptive retrieval",
"sec_num": null
},
{
"text": "We report on Table 7 the BLEU scores for a model using a cache of the retrieval distributions, when varying the threshold \u03c4 . The percentage of times the model performs retrieval is stated on Table 8 : Percentage of times the model searches for neighbors on the datastore when using a retrieval distributions' cache for different values of the threshold \u03c4 , for a batch size of 8.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 192,
"end": 199,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "A.4 Cache",
"sec_num": null
},
{
"text": "On Table 9 we report the values for the hyperparameters: number of neighbors to be retrieved",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Hyper-parameters",
"sec_num": null
},
{
"text": "The code is available at https://github.com/ deep-spin/efficient_kNN_MT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 project MAIA (contract 045909), by the Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia through project PTDC/CCI-INF/4703/2021 (PRE-LUNA, contract UIDB/50008/2020), and by contract PD/BD/150633/2020 in the scope of the Doctoral Program FCT -PD/00140/2013 NETSyS. We thank Junxian He, Graham Neubig, the SARDINE team members, and the reviewers for helpful discussion and feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "IT Koran k \u03bb T k \u03bb T k \u03bb T k \u03bb T kNN-MT 8 0.7 10 8 0.8 10 8 0.7 10 8 0.6 100 Fast kNN-MT 16 0.7 .015 32 0.6 .015 8 0.6 .02 16 0.6 .05 cache 8 0.7 10 8 0.8 10 8 0.7 10 8 0.6 100 PCA + cache 8 0.8 10 8 0.8 10 8 0.7 10 8 0.7 100 PCA + pruning 8 0.7 10 8 0.8 10 8 0.7 10 8 0.7 100 PCA + cache + pruning 8 0.7 10 8 0.8 10 8 0.7 10 8 0.7 100 Table 9 : Values of the hyper-parameters: number of neighbors to be retrieved k, interpolation coefficient \u03bb, and retrieval softmax temperature T .k \u2208 {8, 16, 32, 64}, the interpolation coefficient \u03bb \u2208 {0.5, 0.6, 0.7, 0.8}, and retrieval softmax temperature T . For decoding we use beam search with a beam size of 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 44,
"text": "Koran k \u03bb T k \u03bb T k \u03bb T k \u03bb T",
"ref_id": null
},
{
"start": 348,
"end": 355,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Law",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised domain clusters in pretrained language models",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proc. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyung",
"middle": [
"Hyun"
],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Non-Parametric Adaptation for Neural Machine Translation",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Bapna and Orhan Firat. 2019. Non-Parametric Adaptation for Neural Machine Translation. In Proc. NAACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Search engine guided neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2018. Search engine guided neural machine trans- lation. In Proc. AAAI.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient Nearest Neighbor Language Models",
"authors": [
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junxian He, Graham Neubig, and Taylor Berg- Kirkpatrick. 2021. Efficient Nearest Neighbor Lan- guage Models. In Proc. EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning Kernel-Smoothed Machine Translation with Retrieved Examples",
"authors": [
{
"first": "Qingnan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shanbo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. 2021. Learn- ing Kernel-Smoothed Machine Translation with Re- trieved Examples. In Proc. EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Billion-scale similarity search with gpus",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Big Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Nearest neighbor machine translation",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neigh- bor machine translation. In Proc. ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Six Challenges for Neural Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six Chal- lenges for Neural Machine Translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast Nearest Neighbor Machine Translation",
"authors": [
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiayu",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tianwei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xi- aofei Sun, Tianwei Zhang, and Jiwei Li. 2021. Fast Nearest Neighbor Machine Translation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Facebook FAIR's WMT19 News Translation Task Submission",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of the Fourth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 News Translation Task Submission. In Proc. of the Fourth Conference on Machine Trans- lation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. NAACL (Demonstrations)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proc. NAACL (Demonstra- tions).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proc. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Call for Clarity in Reporting BLEU Scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A Call for Clarity in Reporting BLEU Scores. In Proc. Third Conference on Machine Trans- lation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "COMET: A Neural Framework for MT Evaluation",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"C"
],
"last": "Farinha",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A Neural Framework for MT Evaluation. In Proc. EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey",
"authors": [
{
"first": "Danielle",
"middle": [],
"last": "Saunders",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danielle Saunders. 2021. Domain Adaptation and Multi-Domain Adaptation for Neural Machine Trans- lation: A Survey.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Guiding Neural Machine Translation with Retrieved Translation Pieces",
"authors": [
{
"first": "Jingyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Gra- ham Neubig, and Satoshi Nakamura. 2018. Guiding Neural Machine Translation with Retrieved Transla- tion Pieces. In Proc. NAACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adaptive Nearest Neighbor Machine Translation",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junliang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021. Adaptive Nearest Neighbor Machine Translation.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Plot of the generation speed (tokens/s), averaged across domains, for different combinations of the proposed methods.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "We can clearly see that both Fast kNN-MT and the efficient kNN-MT (combining the different methods) do not hurt the translation performance substantially, still leading to, on average, 8 BLEU BLEU and COMET scores on the multi-domains test set, for a batch size of 8.",
"html": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Medical Law</td><td>BLEU IT</td><td colspan=\"2\">Koran Average</td><td colspan=\"2\">Medical Law</td><td>COMET IT</td><td colspan=\"2\">Koran Average</td></tr><tr><td colspan=\"3\">Baselines Base MT kNN-MT Fast kNN-MT</td><td>40.01 54.47 52.90</td><td colspan=\"3\">45.64 37.91 16.35 61.23 45.96 21.02 55.71 44.73 21.29</td><td>34.98 45.67 43.66</td><td>.4702 .5760 .5293</td><td colspan=\"3\">.5770 .3942 -.0097 .6781 .5163 .0480 .5944 .5445 -.0455</td><td>.3579 .4546 .4057</td></tr><tr><td colspan=\"3\">Efficient kNN-MT cache PCA + cache PCA + pruning PCA + cache + pruning</td><td>53.30 53.58 53.23 51.90</td><td colspan=\"3\">59.12 45.39 20.67 58.57 46.29 20.67 60.38 45.16 20.52 57.82 44.44 20.11</td><td>44.62 44.78 44.82 43.57</td><td>.5625 .5457 .5658 .5513</td><td colspan=\"3\">.6403 .5085 .0346 .6379 .5311 -.0021 .6639 .4981 .0298 .6260 .4909 -.0052</td><td>.4365 .4282 .4394 .4158</td></tr><tr><td>Generation speed</td><td>10 2 10 3</td><td>base kNN-MT fast kNN-MT efficient kNN-MT Medical</td><td>10 2 10 3</td><td/><td>Law</td><td/><td>10 2 10 3</td><td>IT</td><td/><td>10 2 10 3</td><td>Koran</td></tr><tr><td/><td>1</td><td>8 Batch size</td><td>16</td><td>1</td><td>8 Batch size</td><td>16</td><td>1</td><td>8 Batch size</td><td>16</td><td>1</td><td>8 Batch size</td><td>16</td></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": ".",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Medical Law</td><td>IT</td><td>Koran Average</td></tr><tr><td>kNN-MT</td><td>54.47</td><td colspan=\"3\">61.23 45.96 21.02</td><td>45.67</td></tr><tr><td>k = 1 k = 2 k = 5</td><td>53.60 52.95 51.63</td><td colspan=\"3\">60.23 45.03 20.81 59.40 44.76 20.12 57.55 44.07 19.29</td><td>44.92 44.31 43.14</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "BLEU scores on the multi-domains test set when performing datastore pruning with several values of k, for a batch size of 8.",
"html": null,
"content": "<table><tr><td/><td>Medical</td><td>Law</td><td>IT</td><td>Koran</td></tr><tr><td colspan=\"5\">kNN-MT 6,903,141 19,061,382 3,602,862 524,374</td></tr><tr><td>k = 1</td><td colspan=\"4\">4,780,514 13,130,326 2,641,709 400,385</td></tr><tr><td>k = 2</td><td/><td/><td/></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Sizes of the in-domain datastores when performing datastore pruning with several values of k, for a batch size of 8.",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "BLEU scores on the multi-domains test set when performing PCA with different dimension, d, values, for a batch size of 8.",
"html": null,
"content": "<table/>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "BLEU scores on the multi-domains test set when performing adaptive retrieval for different values of the threshold \u03b1, for a batch size of 8.",
"html": null,
"content": "<table><tr><td/><td>Medical</td><td>Law</td><td>IT</td><td>Koran</td></tr><tr><td>kNN-MT</td><td>100%</td><td colspan=\"3\">100% 100% 100%</td></tr><tr><td>\u03b1 = 0.25 \u03b1 = 0.5 \u03b1 = 0.75</td><td>78% 96% 98%</td><td>73% 96% 99%</td><td>38% 60% 92%</td><td>4% 61% 91%</td></tr></table>"
},
"TABREF8": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Medical Law</td><td>IT</td><td>Koran Average</td></tr><tr><td>kNN-MT</td><td>54.47</td><td colspan=\"3\">61.23 45.96 21.02</td><td>45.67</td></tr><tr><td>\u03c4 = 2 \u03c4 = 4 \u03c4 = 6 \u03c4 = 8</td><td>54.47 54.17 53.30 30.06</td><td colspan=\"3\">61.23 45.93 20.98 61.10 46.07 21.00 59.12 45.39 20.67 23.01 25.53 16.08</td><td>45.65 45.58 44.62 23.67</td></tr></table>"
},
"TABREF9": {
"num": null,
"type_str": "table",
"text": "BLEU scores on the multi-domains test set when using a retrieval distributions' cache for different values of the threshold \u03c4 , for a batch size of 8.",
"html": null,
"content": "<table><tr><td/><td>Medical</td><td>Law</td><td>IT</td><td>Koran</td></tr><tr><td>kNN-MT</td><td>100%</td><td colspan=\"3\">100% 100% 100%</td></tr><tr><td>\u03c4 = 2 \u03c4 = 4 \u03c4 = 6 \u03c4 = 8</td><td>59% 50% 43% 26%</td><td>51% 42% 35% 16%</td><td>67% 57% 49% 29%</td><td>64% 53% 45% 31%</td></tr></table>"
}
}
}
}