{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:41.834600Z"
},
"title": "Effective Architectures for Low Resource Multilingual Named Entity Transliteration",
"authors": [
{
"first": "Molly",
"middle": [],
"last": "Moran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brandeis University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Constantine",
"middle": [],
"last": "Lignos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brandeis University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we evaluate LSTM, biLSTM, GRU, and Transformer architectures for the task of name transliteration in a many-to-one multilingual paradigm, transliterating from 590 languages to English. We experiment with different encoder-decoder combinations and evaluate them using accuracy, character error rate, and an F-measure based on longest continuous subsequences. We find that using a Transformer for the encoder and decoder performs best, improving accuracy by over 4 points compared to previous work. We explore whether manipulating the source text by adding macrolanguage flag tokens or preromanizing source strings can improve performance and find that neither manipulation has a positive effect. Finally, we analyze performance differences between the LSTM and Transformer encoders when using a Transformer decoder and find that the Transformer encoder is better able to handle insertions and substitutions when transliterating.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we evaluate LSTM, biLSTM, GRU, and Transformer architectures for the task of name transliteration in a many-to-one multilingual paradigm, transliterating from 590 languages to English. We experiment with different encoder-decoder combinations and evaluate them using accuracy, character error rate, and an F-measure based on longest continuous subsequences. We find that using a Transformer for the encoder and decoder performs best, improving accuracy by over 4 points compared to previous work. We explore whether manipulating the source text by adding macrolanguage flag tokens or preromanizing source strings can improve performance and find that neither manipulation has a positive effect. Finally, we analyze performance differences between the LSTM and Transformer encoders when using a Transformer decoder and find that the Transformer encoder is better able to handle insertions and substitutions when transliterating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The world's written languages collectively represent hundreds of different writing systems. Transliteration is the process of converting a word's written representation in one language to its equivalent in a target language and is a key component of machine translation and cross-lingual information extraction and retrieval. It is especially relevant for recovering named entities, which often cannot be \"translated\" in the traditional sense, and linking names across texts in different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we apply existing architectures used for machine translation to the task of name transliteration, the task of converting the written citation form of a name in one language to another language. We use an existing massivelymultilingual resource of aligned parallel names in 591 languages and evaluate four neural architectures for transliteration. While the use of neural machine translation architectures for transliteration is not novel, to the best of our knowledge a comparative study of the performance of multiple models and preprocessing strategies in the many-to-one transliteration paradigm has not been previously performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are as follows: first, we provide a performance comparison of several different neural architectures on the task of name transliteration in a many-to-one paradigm. Second, we evaluate the effectiveness of various methods of manipulating input sequences for name transliteration. Finally, we present a multilingual transliteration model with a 1-Best accuracy of 73%, a 4-point improvement over the previous best model for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 2018 Named Entities Workshop included a shared task on Named Entity Transliteration (Chen et al., 2018) that established several useful metrics to evaluate various elements of transliteration quality. We use their mean F-score metric in this work. As part of the shared task, Kundu et al. (2018) compared RNN and CNN architectures on the task of name transliteration for 13 language pairs. They found that their character-level RNN model performed best and achieved state-of-the-art results for one language pair. Notably, the authors focused on only a few higher-resource languages, whereas our study uses a much larger set of languages, including many lower-resourced ones.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 280,
"end": 299,
"text": "Kundu et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Merhav and Ash (2018) released a set of bilingual name dictionaries mined from Wikipedia for English to Russian, Hebrew, Arabic, and Japanese Katakana transliteration, and compare traditional weighted FST models with more modern neural techniques on bilingual transliteration tasks. Their models collapse all names written in Latin scripts under the English label. But very different languages may share a script; we aim to create models sensitive to these differences. Benites et al. (2020) recently released a much more comprehensive name corpus covering 3 million names across 180 languages derived from Wikipedia, GeoNames, and the dataset released by Merhav and Ash (2018) .",
"cite_spans": [
{
"start": 470,
"end": 491,
"text": "Benites et al. (2020)",
"ref_id": null
},
{
"start": 656,
"end": 677,
"text": "Merhav and Ash (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Regarding lower-resourced transliteration, Upadhyay et al. 2018proposed a bootstrapping method wherein a \"weak\" generation model is used to guide discovery of possible transliteration candidates for a given word. Le and Sadat (2018) used a combination of G2P, neural networks and word embeddings to improve English-Vietnamese transliteration. Johnson et al. (2017) introduced a now-popular technique for invoking transfer learning in a manyto-one multilingual translation framework. A single model is trained with a mixed source vocabulary and a single target vocabulary. An artificial \"flag\" token is appended to each source sequence to help the model identify languages, circumventing the need for training a separate encoder and decoder for each language pair. We use this approach to perform many-languages-to-one transliteration.",
"cite_spans": [
{
"start": 343,
"end": 364,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
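To make the flag-token setup concrete, the following is a minimal sketch of the character-level source formatting used in this many-to-one paradigm. The helper names are hypothetical, and the example name and codes simply mirror the ones shown later in Table 3.

```python
# Hypothetical preprocessing helpers for many-to-one transliteration:
# each source name is split into characters and prefixed with an
# artificial ISO-639-3 flag token so that a single shared model can
# identify the source language.

def format_source(name: str, lang_code: str) -> str:
    """Character-level source sequence with a language flag token."""
    return " ".join([f"<{lang_code}>"] + list(name))

def format_target(name: str) -> str:
    """Character-level target (English) sequence."""
    return " ".join(name)

print(format_source("emanuel", "aln"))  # <aln> e m a n u e l
print(format_target("immanuel"))        # i m m a n u e l
```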
{
"text": "Wu and Yarowsky (2018) explored approaches to extremely low-resource name transliteration, using a multiparallel corpus of Biblical names across 591 languages . The authors also provided a cursory report on the results of an experimental multilingual transliteration model, using the technique proposed by Johnson (Johnson et al., 2017) . They found that a character-level RNN trained on concatenated data from all source languages significantly outperformed individual language-pair models. Using the same data, we aim to provide a more comprehensive study of the performance of various neural architectures and the applicability of potential performance-boosting preprocessing techniques for the task of transliteration in a many-to-one framework.",
"cite_spans": [
{
"start": 314,
"end": 336,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Experiment Design",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The corpus we use to train our models represents 591 languages with varying numbers of alignments to 1129 English names. After removing any blank names, the final dataset comprised 348,991 name pairs. We partitioned the data using the exact train/development/test split (80/10/10) and random seed provided by , which enables us to directly compare our results with the values they report. Our resulting training set consisted of 279,192 name pairs. The mean number of training pairs per language was 472, the median was 454, and the minimum and maximum were 34 and 938.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
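A minimal sketch of an 80/10/10 partition with a fixed random seed is shown below. The seed value, record format, and helper name are placeholders, since the experiments reuse the exact split and seed released with the prior work rather than recomputing one.

```python
# Illustrative 80/10/10 train/dev/test split with a fixed seed.
# The actual experiments reuse the released split; the seed and the
# (lang_code, source_name, english_name) record format are placeholders.
import random

def split_pairs(pairs, seed=0, train_frac=0.8, dev_frac=0.1):
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    return (shuffled[:n_train],                   # train
            shuffled[n_train:n_train + n_dev],    # dev
            shuffled[n_train + n_dev:])           # test
```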
{
"text": "Our models are implemented using OpenNMT version 1.0.2 (Klein et al., 2017) . We experimented with several different combinations of encoders and decoders. Our baseline architecture is the model used by Wu and Yarowsky, a GRU encoder paired with the default decoder, using their hyperparameters as our defaults. We configured three additional encoders: an LSTM, a biLSTM, and a Transformer. We paired the GRU, LSTM and biLSTM encoders each with two decoders, a Transformer and Open-NMT's default LSTM decoder with attention. We paired the Transformer encoder with a Transformer decoder; for brevity we will refer to this as our Transformer model without separately mentioning the encoder and decoder. After testing each model on a single random seed with embedding sizes of 200, 300 and 400-using the same size for both source and target-we chose the size with the lowest development set perplexity (in all cases, 200).",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The LSTM, biLSTM and GRU encoders are 2-layer models with hidden size 200, trained for 44k steps, the same hyperparameters used by Wu and Yarowsky 2018. 1 The Transformer encoder is a 4-layer model with 8-headed self-attention, sinusoidal positional encoding with clipping distance of 2, and hidden feed-forward size of 1024. We tested several positional encoding clipping distances, choosing the value that produced the lowest development set perplexity. Based on inspecting development set perplexity, we increased the Transformer encoder's training duration to 90k updates, with 10k warmup steps and a learning rate decay interval of 10k steps.",
"cite_spans": [
{
"start": 153,
"end": 154,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "OpenNMT's default decoder is a 2-layer LSTM RNN of size 500 with attention that implements input-feeding (see Luong et al. 2015) , wherein attention vectors are fed as input to the next time step. The hidden size of each layer is 200. The Transformer decoder is identical to our Transformer encoder-4 layers with 8-headed self-attention- ",
"cite_spans": [
{
"start": 110,
"end": 128,
"text": "Luong et al. 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
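For reference, the hyperparameters described above can be collected in one place as below. The values mirror the prose and the training details reported alongside Table 1; this is a plain summary under those assumptions, not an OpenNMT configuration file.

```python
# Summary of the hyperparameters described in the text (reference only).
EMBEDDING_SIZE = 200             # same size for source and target

RNN_ENCODERS = {                 # LSTM, biLSTM, and GRU encoders
    "layers": 2,
    "hidden_size": 200,
    "train_steps": 44_000,
}

TRANSFORMER_ENCODER = {          # decoder identical, but hidden size 2048
    "layers": 4,
    "attention_heads": 8,
    "feed_forward_size": 1024,
    "train_steps": 90_000,
    "warmup_steps": 10_000,
    "lr_decay_interval": 10_000,
}

DEFAULT_DECODER = {"layers": 2, "hidden_size": 200, "attention": "input-feeding"}

TRAINING = {"optimizer": "adadelta", "dropout": 0.2, "batch_size": 64}
```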
{
"text": "We report three separate evaluation metrics for each model: 1-best accuracy, the percentage of perfectly transliterated names; character error rate (CER), the error rate of characters across names as calculated using Levenshtein distance; and mean F-score or \"Fuzziness in Top-1,\" a metric introduced by the NEWS 2018 Shared Task on Named Entity Transliteration (Chen et al., 2018) . Mean F-score computes the character-level F1-score based on the longest common subsequence between a source and target sequence.",
"cite_spans": [
{
"start": 362,
"end": 381,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "3.3"
},
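The three metrics can be sketched as follows. These are illustrative implementations rather than the official scoring scripts: CER is computed here as total Levenshtein edits divided by total reference characters, and mean F-score as the LCS-based character F1 described above.

```python
# Illustrative implementations of the three metrics (hypothetical helper
# names, not the authors' code).

def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit costs for insert/delete/substitute."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def evaluate(hypotheses, references):
    """Return (1-best accuracy, CER, mean F-score) over paired name lists."""
    correct, edits, ref_chars, f_scores = 0, 0, 0, []
    for hyp, ref in zip(hypotheses, references):
        correct += int(hyp == ref)
        edits += levenshtein(hyp, ref)
        ref_chars += len(ref)
        lcs = lcs_length(hyp, ref)
        p, r = lcs / max(len(hyp), 1), lcs / max(len(ref), 1)
        f_scores.append(0.0 if p + r == 0 else 2 * p * r / (p + r))
    n = len(references)
    return correct / n, edits / ref_chars, sum(f_scores) / n
```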
{
"text": "Each experimental configuration for each model architecture was run 10 times using 10 different random seeds. Checkpoints were saved every 5k updates, and the checkpoint with the lowest development set perplexity was used for evaluation. For a given experimental configuration we report the mean and standard deviation for each evaluation metric taken across all random seeds. 93.48 \u00b1 .03 73.01 \u00b1 .10 14.73 \u00b1 .08 Table 2 : Mean and standard deviation of all metrics for each encoder using the Transformer decoder whiskers give the minimum and maximum scores; each box gives the 25th, 50th, and 75th percentiles.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Procedure",
"sec_num": "3.4"
},
{
"text": "As the plots for the other metrics look extremely similar, we provide plots for accuracy ( Figure 4 ) and CER ( Figure 5 ) at the end of the paper. Table 1 gives results for all architectures using the default decoder, and Table 2 gives results for all architectures using the Transformer decoder. As many of the results are similar-especially the LSTM and biLSTM-we performed statistical significance tests and bolded the highest score and the scores that were not found to be significantly different at the p < 0.05 level from the highest score. We use Dunn's test, applying Bonferroni correction to a Kruskal-Wallis test, which is similar to Mann-Whitney U . In short, this approach tests differences between median scores without making any assumptions that the data come from a normal distribution, and ti adjusts for the fact that we are comparing more than two scores. While we follow the convention of reporting the mean and standard deviation in tables, neither the mean nor standard deviation is used by our non-parametric statistical significance testing procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 112,
"end": 120,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 148,
"end": 155,
"text": "Table 1",
"ref_id": null
},
{
"start": 223,
"end": 230,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
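The significance-testing procedure can be sketched as follows, using SciPy's Kruskal-Wallis test and Dunn's post-hoc test from the scikit-posthocs package (assumed to be available). The per-seed scores shown are placeholders, not the reported results.

```python
# Sketch of the significance testing: a Kruskal-Wallis test over per-seed
# scores for each system, followed by Dunn's post-hoc test with Bonferroni
# correction. Requires scipy and scikit-posthocs; scores are placeholders.
from scipy import stats
import scikit_posthocs as sp

scores = {                      # one accuracy value per random seed
    "gru":         [70.5, 70.8, 71.0, 70.6, 70.9],
    "lstm":        [71.2, 71.6, 71.4, 71.8, 71.3],
    "transformer": [72.9, 73.0, 73.1, 73.0, 72.9],
}

h_stat, p_value = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

# Pairwise comparisons; entry (i, j) is the Bonferroni-adjusted p-value.
pairwise_p = sp.posthoc_dunn(list(scores.values()), p_adjust="bonferroni")
print(pairwise_p)
```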
{
"text": "As Table 1 shows, when combined with the default decoder the LSTM and biLSTM encoders perform almost identically (comparing using each metric, all comparisons p > 0.05), and outperform the GRU (all comparisons p < 0.001).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As Table 2 shows, using a Transformer decoder generally leads to improved performance compared to the default decoder. Among the encoders evaluated, the Transformer performs best, and the differences are statistically significant when compared to each of the other encoders and across all metrics (all comparisons p < 0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As Figure 1 shows, there is no overlap between the minimum and maximum mean F-score values attained by the Transformer (when used as both encoder and decoder) and other architectures. In addition to achieving the highest performance across all metrics, the Transformer's variation across random Standard <aln> e m a n u e l i m m a n u e l Macrolang. <aln> <sqi> e m a n u e l i m m a n u e l Table 3 : Source preprocessing with ISO-639-3 language (<aln>) and macrolanguage (<sqi>) codes seeds is smaller than other architectures, both in the range between the minimum and maximum values and the standard deviation.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 393,
"end": 400,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Johnson et al. (2017) used ISO-639-3 language codes as flag tokens. To our knowledge, subsequent implementations of multilingual translation have all used similar codes, including Wu and Yarowsky (2018)'s transliteration model and our own models. As these language codes are arbitrary, atomic flags, they do not encode known relationships between languages. The ISO-639 schema designates 58 languages as macrolanguages, which unite groups of closely-related languages that may exist on a dialect continuum. 2 Of the 7,868 individual languages identified by ISO-639-3, 453 (5.8%) have a corresponding macrolanguage, and those that do tend to be lower-resourced languages. The Biblical names corpus compiled by mirrors these statistics; of the 591 languages, 36 (6.1%) have a corresponding macrolanguage code. These languages comprised 6.8% of the training set (19,040 name pairs) and 6.9% of the test set (2,432 pairs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Tokens",
"sec_num": "5.1"
},
{
"text": "We can leverage this information in our models by appending an additional macrolanguage flag token following a language's ISO-639-3 code, where applicable, as shown in Table 3 . We expected that this may lead to small improvements in transliteration quality for lower-resourced languages that belong to the same macrolanguage. We tested this by evaluating using our Transformer model.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Artificial Tokens",
"sec_num": "5.1"
},
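A sketch of this macrolanguage augmentation, extending the flag-token formatting shown earlier, is given below. The mapping is a tiny illustrative subset of the ISO-639-3 macrolanguage table, not the full mapping used in the experiments.

```python
# Hypothetical macrolanguage augmentation: when a language has an ISO-639-3
# macrolanguage, its code is added as a second flag token after the language
# token, as in Table 3. The mapping here is a small illustrative subset.
MACROLANGUAGE = {"aln": "sqi", "ekk": "est"}  # individual code -> macrolanguage

def format_source_with_macro(name: str, lang_code: str) -> str:
    tokens = [f"<{lang_code}>"]
    macro = MACROLANGUAGE.get(lang_code)
    if macro is not None:
        tokens.append(f"<{macro}>")
    return " ".join(tokens + list(name))

print(format_source_with_macro("emanuel", "aln"))  # <aln> <sqi> e m a n u e l
```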
{
"text": "As Table 4 shows, the addition of macrolanguage tokens actually slightly hurt the models' performance on every metric. The 36 languages in our dataset with a corresponding macrolanguage shared 24 macrolanguages, making it difficult for a model to leverage cross-lingual similarities. We suspect that macrolanguage information ultimately amounted to noise.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Artificial Tokens",
"sec_num": "5.1"
},
{
"text": "We also evaluated on only the subset of languages with a corresponding macrolanguage, to see whether the addition of macrolanguage tokens improved performance for them. However, performance using macrolanguage tokens (Mean F-score 92.04 \u00b1 .08; Accuracy 69.65 \u00b1 .26; CER 18.29 \u00b1 .18) decreased from the baseline (93.35 \u00b1 .08; 72.33 \u00b1 .38; 14.79 \u00b1 .2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Tokens",
"sec_num": "5.1"
},
{
"text": "In analyzing per-language performance, we found the expected relationship between the number of training pairs in the language and the performance in that language, as shown in Figure 2 for character error rate. However, there is a lot of variation across languages, even among those with scripts very different from English, which is suggestive of successful transfer learning across languages. Many languages with fewer examples are doing better than expected, reducing the correlation between per-language training data size and performance. The largest cluster of number of training pairs per language is approximately 500 languages with around 500 name pairs each. Even within that cluster there is tremendous variance in performance around that number of training examples. Iloko (ilo), the best-performing language in that cluster, has a CER of 3.6, while the worst-performing language, Inuktitut (iku), has a CER of 48.2. There is a nearly uniform distribution of languages between those extremes. 3 As previously identified by , transliteration is more successful when the edit distance between source and target is low. Figure 3 shows the mean CER, along with the mean edit distance to English, for each language, using a single run of the Transformer model with standard preprocessing. One possible approach to reducing edit distance is to romanize the data before transliteration. report that 3 For this example and in Figures 2 and 3 we exclude three extremely poorly-performing languages (cmn, mya, khm, each with CER > 75) where the poor performance can be traced to errors in the training data which aligned names to phrases. preprocessing data into ASCII using Unidecode yielded no improvement in performance. However, their approach is extremely simple; we evaluate a more sophisticated, hand-tuned romanizer, uroman (Hermjakob et al., 2018) . We romanized data before providing it as source text, and trained the Transformer using standard preprocessing. As shown in Table 4 , this slightly reduces accuracy and mean F-score, and slightly increases CER. Further analysis revealed that romanization did not actually decrease edit distance to English; it increased it for as many languages as it decreased it, likely because it often lengthened words.",
"cite_spans": [
{
"start": 1006,
"end": 1007,
"text": "3",
"ref_id": null
},
{
"start": 1405,
"end": 1406,
"text": "3",
"ref_id": null
},
{
"start": 1835,
"end": 1859,
"text": "(Hermjakob et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1130,
"end": 1138,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1431,
"end": 1446,
"text": "Figures 2 and 3",
"ref_id": "FIGREF2"
},
{
"start": 1986,
"end": 1993,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Romanization",
"sec_num": "5.2"
},
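The post-hoc check on edit distance can be sketched as below. The romanize function is a stand-in for an external romanizer such as uroman, and levenshtein refers to the helper defined in the metrics sketch above; both are assumptions for illustration.

```python
# Sketch of the romanization check: compare each language's mean edit
# distance to English before and after romanizing the source names.
# `romanize` is a placeholder for an external romanizer such as uroman.
from collections import defaultdict
from statistics import mean

def romanize(name: str) -> str:
    """Placeholder: substitute the output of a real romanizer here."""
    raise NotImplementedError

def mean_edit_distance_by_language(pairs, edit_distance, transform=lambda s: s):
    """pairs: iterable of (lang_code, source_name, english_name) tuples."""
    distances = defaultdict(list)
    for lang, src, eng in pairs:
        distances[lang].append(edit_distance(transform(src), eng))
    return {lang: mean(d) for lang, d in distances.items()}

# Usage (with `levenshtein` from the metrics sketch):
# baseline  = mean_edit_distance_by_language(pairs, levenshtein)
# romanized = mean_edit_distance_by_language(pairs, levenshtein, transform=romanize)
# worse = [lang for lang in baseline if romanized[lang] > baseline[lang]]
```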
{
"text": "As shown in Table 4 , neither the use of macrolanguage tokens nor romanization improves performance. As the bolding indicates, for mean F-score the standard condition performs significantly better than macrolanguage (p = 0.03) and romanized (p = 0.0007); for CER, standard performs better than romanized (p = 0.008). The results for the accuracy metric are statistically indistinguishable (all comparisons p > 0.05); all values are bolded as it is essentially an all-way tie.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Summary",
"sec_num": "5.3"
},
{
"text": "Our findings demonstrate that using a Transformer architecture for the encoder and decoder results in reliably strong performance for this task. Attempting to improve the model through simple \"tweaks\" to the input does not improve performance and may hurt it. However, one may wonder why the Transformer does so much better at this task than an LSTM encoder and a decoder with attention when the transliteration problem is relatively simple compared to sentence translation. There is only limited reordering in transliteration, and there may be fewer long-range dependencies. While we have manually examined the output of the transliteration system across architectures, it is difficult to identify obvious patterns, and attempting to do so puts us at risk of overgeneralizing our observations that are based on the relatively few writing systems that we can read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "To further analyze performance, for every item in the test set we computed the Levenshtein distance-expressed as insertions, deletions, and substitutions-between the source name and target name. We then analyzed the relationship between these edit distance metrics and 1-best accuracy by predicting whether the system correctly transliterated each item of the test set using logistic regression. We analyze two systems: the LSTM and Transformer encoders, each paired with the Transformer decoder. 4 We examined the contribution of each type of edit (insertions, etc.) on the performance of each system. We fit a logistic regression model to the 689,960 predictions across all items and random seeds for the models we are comparing. For each item, we used the number of source-target edit distance operations to predict whether the model made any errors. We employed interactions between the encoder type and each predictor to explicitly test for differences between the LSTM and Transformer encoders. In other words, our model tested whether the two encoders differed in how each edit operation required affected their ability to correctly transliterate.",
"cite_spans": [
{
"start": 497,
"end": 498,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
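This error analysis can be sketched with the statsmodels formula API as below. The DataFrame columns and file name are placeholders for the per-item records (one row per test item, random seed, and system), not the authors' actual data layout.

```python
# Sketch of the error analysis: predict whether an item was transliterated
# incorrectly from the source-target edit operations, with encoder-by-
# predictor interactions to test whether the LSTM and Transformer encoders
# differ. Column and file names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: error (0/1), encoder ("lstm" or "transformer"),
# insertions, deletions, substitutions (source-to-target edit operations).
df = pd.read_csv("per_item_predictions.csv")  # placeholder file name

model = smf.logit(
    "error ~ C(encoder) * (insertions + deletions + substitutions)",
    data=df,
).fit()
print(model.summary())
```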
{
"text": "We found that there were significant differences in the interaction terms of the model for insertions and substitutions, but not for deletions. For the Transformer encoder, with each additional insertion in the edit distance, it was 1.1% less likely to produce an error than the LSTM was (p = 0.0006), for substitutions, 1.8% (p < 0.0001). This helps further characterize the performance differences between these models. While they are equally capable of handling deletions, the Transformer encoder can better handle insertions and substitutions, and the advantage is larger for substitutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We conclude that using a Transformer architecture for both the encoder and decoder leads to the best performance on the many-languages-to-English transliteration task that we evaluate. However, using macrolanguage codes and pre-romanizing the input do not improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our best multilingual transliteration model achieves a 1-best accuracy of 73%, a 4-point improvement over the baseline provided by . When using a Transformer encoder, our model demonstrates an improved ability to handle substitution edits in source-target pairs compared to an LSTM encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "While an off-the-shelf MT system provides a strong starting system for the task of many-toone transliteration, future improvements for lowerresourced settings will likely require a greater level of sophistication, possibly using monolingual pretraining to better model the source language given few training examples. Additionally, while using the Transformer for the encoder and decoder gives the strongest results, it may be possible to further simplify the model and achieve similar results. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In the case of the biLSTM encoder, each direction has 100 hidden units for a total of 200. We experimented with doubling the total number of hidden units so that each direction was size 200, but this increase did not improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://iso639-3.sil.org/code_tables/ macrolanguage_mappings/read",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We exclude pairs from the languages with codes cmn, mya, khm, and nab from this analysis after review of the training data revealed that edit distance between source and target names was artificially high due to data problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pius von D\u00e4niken, and Mark Cieliebak. 2020. Large name transliteration resource",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
},
{
"first": "Gilbert",
"middle": [],
"last": "Fran\u00e7ois Duivesteijn",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites, Gilbert Fran\u00e7ois Duivesteijn, Pius von D\u00e4niken, and Mark Cieliebak. 2020. Large name transliteration resource. In Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2020).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2408"
]
},
"num": null,
"urls": [],
"raw_text": "Nancy Chen, Xiangyu Duan, Min Zhang, Rafael E. Banchs, and Haizhou Li. 2018. News 2018 whitepa- per. In Proceedings of the Seventh Named Entities Workshop, pages 47-54, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Out-of-the-box universal Romanization tool",
"authors": [
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-4003"
]
},
"num": null,
"urls": [],
"raw_text": "Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal Romanization tool",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00065"
]
},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A deep learning based approach to transliteration",
"authors": [
{
"first": "Soumyadeep",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Sayantan",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "79--83",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2411"
]
},
"num": null,
"urls": [],
"raw_text": "Soumyadeep Kundu, Sayantan Paul, and Santanu Pal. 2018. A deep learning based approach to translitera- tion. In Proceedings of the Seventh Named Entities Workshop, pages 79-83, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Low-resource machine transliteration using recurrent neural networks of Asian languages",
"authors": [
{
"first": "Ngoc Tan",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Fatiha",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2414"
]
},
"num": null,
"urls": [],
"raw_text": "Ngoc Tan Le and Fatiha Sadat. 2018. Low-resource machine transliteration using recurrent neural net- works of Asian languages. In Proceedings of the Seventh Named Entities Workshop, pages 95- 100, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Design challenges in named entity transliteration",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Merhav",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Ash",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "630--640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Merhav and Stephen Ash. 2018. Design chal- lenges in named entity transliteration. In Proceed- ings of the 27th International Conference on Compu- tational Linguistics, pages 630-640, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bootstrapping transliteration with constrained discovery for low-resource languages",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Kodner",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "501--511",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1046"
]
},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Jordan Kodner, and Dan Roth. 2018. Bootstrapping transliteration with constrained dis- covery for low-resource languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 501-511, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Creating a translation matrix of the Bible's names across 591 languages",
"authors": [
{
"first": "Winston",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winston Wu, Nidhi Vyas, and David Yarowsky. 2018. Creating a translation matrix of the Bible's names across 591 languages. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparative study of extremely low-resource transliteration of the world's languages",
"authors": [
{
"first": "Winston",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winston Wu and David Yarowsky. 2018. A com- parative study of extremely low-resource transliter- ation of the world's languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "displays box plots for mean F-score for each configuration of each architecture. Each box's",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Mean F-score across encoder and decoder architectures Format Source Target",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Per-language mean character error rate and number of training pairs, with linear fit line",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Per-language mean character error rate and mean edit distance from English, with linear fit line",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Accuracy across encoder and decoder architectures",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "CER (lower is better) across encoder and decoder architectures uroman. In Proceedings of ACL 2018, System Demonstrations, pages 13-18, Melbourne, Australia. Association for Computational Linguistics.",
"uris": null,
"num": null
},
"TABREF0": {
"text": "\u00b1 .07 68.66 \u00b1 .25 17.53 \u00b1 .23 LSTM 92.94 \u00b1 .13 71.14 \u00b1 .34 16.10 \u00b1 .44 biLSTM 92.93 \u00b1 .09 71.13 \u00b1 .20 16.05 \u00b1 .23 Table 1: Mean and standard deviation of all metrics for each encoder using the default (LSTM) decoder",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Encoder Mean F-score</td><td>Accuracy</td><td>CER</td></tr><tr><td colspan=\"2\">GRU 92.27 except it has a hidden size of 2048.</td><td/></tr><tr><td colspan=\"3\">All models were trained using ADADELTA,</td></tr><tr><td colspan=\"3\">with dropout of 0.2 and a batch size of 64. Training</td></tr><tr><td colspan=\"3\">was performed on a single computer with an AMD</td></tr><tr><td colspan=\"3\">2990WX CPU, two RTX 2080 Ti GPUs, and one</td></tr><tr><td colspan=\"3\">Titan RTX GPU. Training on a single RTX 2080 Ti</td></tr><tr><td colspan=\"3\">for the GRU (1,818,431 parameters with the default</td></tr><tr><td colspan=\"3\">decoder, 3,103,631 with a Transformer decoder)</td></tr><tr><td colspan=\"3\">and LSTM (2,180,031 parameters with the default</td></tr><tr><td colspan=\"3\">decoder, 3,545,727 with a Transformer decoder)</td></tr><tr><td colspan=\"3\">models took approximately 26 minutes. The biL-</td></tr><tr><td colspan=\"3\">STM model (2,020,031 parameters with the default</td></tr><tr><td colspan=\"3\">decoder, 3,385,727 with a Transformer decoder)</td></tr><tr><td colspan=\"3\">took approximately 29 minutes. Transformer train-</td></tr><tr><td colspan=\"3\">ing (5,839,623 parameters) took approximately 98</td></tr><tr><td>minutes.</td><td/><td/></tr></table>",
"num": null
},
"TABREF1": {
"text": "\u00b1 .07 70.78 \u00b1 .25 16.15 \u00b1 .30 LSTM 93.08 \u00b1 .09 71.49 \u00b1 .32 15.65 \u00b1 .36 biLSTM 92.99 \u00b1 .19 71.37 \u00b1 .28 15.82 \u00b1 .35 Trans.",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Encoder Mean F-score</td><td>Accuracy</td><td>CER</td></tr><tr><td>GRU</td><td>92.87</td><td/></tr></table>",
"num": null
},
"TABREF2": {
"text": "\u00b1 .03 73.01 \u00b1 .10 14.73 \u00b1 .08 Macrolang. 93.45 \u00b1 .04 72.90 \u00b1 .13 14.75 \u00b1 .11 Romanized 93.40 \u00b1 .03 72.96 \u00b1 .14 14.86 \u00b1 .07",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Condition</td><td>Mean F-score</td><td>Accuracy</td><td>CER</td></tr><tr><td>Standard</td><td>93.48</td><td/><td/></tr></table>",
"num": null
},
"TABREF3": {
"text": "Mean and standard deviation of all metrics for the Transformer encoder and decoder for standard, macrolanguage-marked, and romanized source text",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}