{
"paper_id": "N12-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:35.842805Z"
},
"title": "G2P Conversion of Proper Names Using Word Origin Information",
"authors": [
{
"first": "Sonjia",
"middle": [],
"last": "Waxmonsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Chicago",
"location": {
"settlement": "Chicago",
"postCode": "60637",
"region": "IL"
}
},
"email": ""
},
{
"first": "Sravana",
"middle": [],
"last": "Reddy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Chicago",
"location": {
"settlement": "Chicago",
"postCode": "60637",
"region": "IL"
}
},
"email": "[email protected]"
}
],
"year": "2012",
"venue": null,
"identifiers": {},
"abstract": "Motivated by the fact that the pronunciation of a name may be influenced by its language of origin, we present methods to improve pronunciation prediction of proper names using word origin information. We train grapheme-to-phoneme (G2P) models on language-specific data sets and interpolate the outputs. We perform experiments on US surnames, a data set where word origin variation occurs naturally. Our methods can be used with any G2P algorithm that outputs posterior probabilities of phoneme sequences for a given word.",
"pdf_parse": {
"paper_id": "N12-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "Motivated by the fact that the pronunciation of a name may be influenced by its language of origin, we present methods to improve pronunciation prediction of proper names using word origin information. We train grapheme-to-phoneme (G2P) models on language-specific data sets and interpolate the outputs. We perform experiments on US surnames, a data set where word origin variation occurs naturally. Our methods can be used with any G2P algorithm that outputs posterior probabilities of phoneme sequences for a given word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speakers can often associate proper names with their language of origin, even when the words have not been seen before. For example, many English speakers will recognize that Makowski and Masiello are Polish and Italian respectively, without prior knowledge of either name. Such recognition is important for language processing tasks since the pronunciations of out-of-vocabulary (OOV) words may depend on the language of origin. For example, as noted by Llitj\u00f3s (2001), 'sch' is likely to be pronounced as /sh/ for German-origin names (Schoenenberg) and /sk/ for Italian-origin words (Schiavone).",
"cite_spans": [
{
"start": 455,
"end": 469,
"text": "Llitj\u00f3s (2001)",
"ref_id": "BIBREF8"
},
{
"start": 585,
"end": 596,
"text": "(Schiavone)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we apply word origin recognition to grapheme-to-phoneme (G2P) conversion, the task of predicting the phonemic representation of a word given its written form. We specifically study G2P conversion for personal surnames, a domain where OOVs are common and expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to show how word origin information can be used to train language-specific G2P models, and how output from these models can be combined to improve prediction of the best pronunciation of a name. We deal with data sparsity in rare language classes by re-weighting the output of the languagespecific and language-independent models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Llitj\u00f3s (2001) applies word origin information to pronunciation modeling for speech synthesis. Here, a CART decision tree system is presented for G2P conversion that maps letters to phonemes using local context. Experiments use a data set of US surnames that naturally draws from a diverse set of origin languages, and show that the inclusion of word origin features in the model improves pronunciation accuracy. We use similar data, as described in \u00a74.1.",
"cite_spans": [
{
"start": 0,
"end": 14,
"text": "Llitj\u00f3s (2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Some works on lexical modeling for speech recognition also make use of word origin. Here, the focus is on expanding the vocabulary of an ASR system rather than choosing a single best pronunciation. Maison et al. (2003) train language-specific G2P models for eight languages and output pronunciations to augment a baseline lexicon. This augmented lexicon outperforms a handcrafted lexicon in ASR experiments; error reduction is highest for foreign names spoken by native speakers of the origin language. Cremelie and ten Bosch (2001) carry out a similar lexicon augmentation, and make use of penalty weighting, with different penalties for pronunciations generated by the language-specific and language-independent G2P models.",
"cite_spans": [
{
"start": 198,
"end": 218,
"text": "Maison et al. (2003)",
"ref_id": "BIBREF9"
},
{
"start": 503,
"end": 532,
"text": "Cremelie and ten Bosch (2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The problem of machine transliteration is closely related to grapheme-to-phoneme conversion. Many transliteration systems (Khapra and Bhattacharyya, 2009; Bose and Sarkar, 2009; Bhargava and Kondrak, 2010) use word origin information. The method described by Hagiwara and Sekine (2011) is similar to our work, except that (a) we use a data set where multiple languages of origin occur naturally, rather than creating language-specific lists and merging them into a single set, and (b) we consider methods of smoothing against a language-independent model to overcome the problems of data sparsity and errors in word origin recognition.",
"cite_spans": [
{
"start": 122,
"end": 154,
"text": "(Khapra and Bhattacharyya, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 155,
"end": 177,
"text": "Bose and Sarkar, 2009;",
"ref_id": "BIBREF2"
},
{
"start": 178,
"end": 205,
"text": "Bhargava and Kondrak, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 259,
"end": 285,
"text": "Hagiwara and Sekine (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "3 Language-Aware G2P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "Our methods are designed to be used with any statistical G2P system that produces the posterior probability Pr(\u03c6|\u1e21) of a phoneme sequence \u03c6 for a word (grapheme sequence) \u1e21 (or a score that can be normalized to give a probability). The most likely pronunciation of a word is taken to be arg max_\u03c6 Pr(\u03c6|\u1e21).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "Our baseline is a single G2P model that is trained on all available training data. We train additional models on language-specific training subsets and incorporate the output of these models to re-estimate Pr(\u03c6|\u1e21), which involves the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "1. Train a supervised word origin classifier to predict Pr(l|w) for all l \u2208 L, the set of languages in our hand-labeled word origin training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "2. Train G2P models for each l \u2208 L. Each model m l is trained on words with Pr(l|w) greater than some threshold \u03b1. Here, we use \u03b1 = 0.7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "3. For each word w in the test set, generate candidate transcriptions from model m l for each language with nonzero Pr(l|w). Re-estimate Pr(\u03c6|\u1e21) by interpolating the outputs of the language-specific models. We may also use the output of the language-independent model. We elaborate on our approaches to Steps 1 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language-Aware G2P",
"sec_num": "3"
},
{
"text": "We apply a sequential conditional model to predict Pr(l|w), the probability of a language class given the word. A similar Maximum Entropy model is presented by Chen and Maison (2003), where features are the presence or absence of a given character n-gram in w. In our approach, feature functions are defined at character positions rather than over the entire word. Specifically, for word w_j composed of character sequence c_1 . . . c_m of length m (including start and end symbols), binary features test for the presence or absence of an n-gram context at each position m. A context is the presence of a character n-gram starting or ending at position m. Model features are represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f_i(w, m, l_k) = 1 if lang(w) = l_k and context i is present at position m, and 0 otherwise",
"eq_num": "(1)"
}
],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "Then, for w_j = c_1 . . . c_m:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(l_k|w_j) = exp(\u03a3_m \u03a3_i \u03bb_i f_i(c_m, l_k)) / Z",
"eq_num": "(2)"
}
],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "Z = \u03a3_k exp(\u03a3_m \u03a3_i \u03bb_i f_i(c_m, l_k))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "is a normalization factor. In practice, we can implement this model as a CRF, where a language label is applied at each character position rather than for the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "While all the language labels in a sequence need not be the same, we find only a handful of words where a transition occurs from one language label to another within a word. For these cases, we take the label of the last character in the word as the language of origin. Experiments comparing this sequential Maximum Entropy method with other word origin classifiers are described by Waxmonsky (2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: Word origin modeling",
"sec_num": "3.1"
},
{
"text": "We test two methods of re-weighting Pr(\u03c6|\u1e21) using the word origin estimation and the output of language-specific G2P models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "Method A uses only language-specific models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(\u03c6|\u1e21) = \u03a3_{l\u2208L} Pr(\u03c6|\u1e21, l) Pr(l|\u1e21)",
"eq_num": "(3)"
}
],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "where Pr(\u03c6|\u1e21, l) is estimated by model m l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "Method B: With the previous method, names from infrequent classes suffer from data sparsity. We therefore smooth with the output Pr_I of the baseline language-independent model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(\u03c6|\u1e21) = \u03c3 Pr_I(\u03c6|\u1e21) + (1\u2212\u03c3) \u03a3_{l\u2208L} Pr(\u03c6|\u1e21, l) Pr(l|\u1e21)",
"eq_num": "(4)"
}
],
"section": "Step 3: Re-weighting of G2P output",
"sec_num": "3.2"
},
{
"text": "We assemble a data set of surnames that occur frequently in the United States. Since surnames are often \"Americanized\" in their written and phonemic forms, our goal is to model how a name is most likely to be pronounced in standard US English rather than in its language of origin. We consider the 50,000 most frequent surnames in the 1990 census (http://www.census.gov/genealogy/names/), and extract those entries that also appear in the CMU Pronouncing Dictionary (http://www.speech.cs.cmu.edu/cgi-bin/cmudict), giving us a set of 45,841 surnames with their phoneme representations transcribed in the Arpabet symbol set. We divide this data 80/10/10 into train, test, and development sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "To build a word origin classification training set, we randomly select 3,000 surnames from the same census lists, and label by hand the most likely language of origin of each name when it occurs in the US. Labeling was done primarily using the Dictionary of American Family Names (Hanks, 2003) and Ellis Island immigration records (http://www.ellisisland.org). We find that, in many cases, a surname cannot be attributed to a single language but can be assigned to a set of languages related by geography and language family.",
"cite_spans": [
{
"start": 280,
"end": 293,
"text": "(Hanks, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "For example, we discovered several surnames that could be ambiguously labeled as English, Scottish, or Irish in origin. For languages that are frequently confusable, we create a single language group to be used as a class label. Here, we use groups for British Isles, Slavic, and Scandinavian languages. Names of undetermined origin are removed, leaving a final training set of 2,795 labeled surnames and 33 different language classes. We have made this annotated word origin data publicly available for future research. 4 In these experiments, we use surnames from the 12 language classes that contain at least 10 hand-labeled words, and merge the remaining languages into an \"Other\" class. Table 1 shows the final language classes used. Unlike the training sets, we do not remove names with ambiguous or unknown origin from the test set, so our G2P system is also evaluated on the ambiguous names.",
"cite_spans": [
{
"start": 517,
"end": 518,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 686,
"end": 693,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "The Sequitur G2P algorithm (Bisani and Ney, 2008) is used for all our experiments.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "(Bisani and Ney, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We use the CMU Dictionary as the gold standard, with the assumption that it contains the standard pronunciations in US English. While surnames may have multiple valid pronunciations, we make the simplifying assumption that a name has one best pronunciation. Evaluation is done on the test set of 4,585 names from the CMU Dictionary. Table 1 shows G2P accuracy for the baseline system and Methods A and B. Test data is partitioned by the most likely language of origin.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We see that Method A, which uses only language-specific G2P models, has lower overall accuracy than the baseline. We attribute this to data sparsity introduced by dividing the training set by language. With the exception of British and German, language-specific training set sizes are less than 10% of the size of the baseline training set of 37k names. Errors made by our word origin model are another likely cause of the lowered performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Examining results for individual language classes for Method A, we see that Italian and British are the only language classes where accuracy improves. For Italian, we attribute this to two factors: high divergence in pronunciation from US English, and the availability of enough training data to build a successful language-specific model. In the case of British, a language-specific model removes foreign words but leaves enough training data to model the language sufficiently. Method B shows accuracy gains of 2.2%, with gains for almost all language classes except Dutch and Scandinavian. This is probably because names in these two classes have almost standard US English pronunciations, and are already well-modeled by a language-independent model. We next look at some sample outputs from our G2P systems. Table 2 shows names where Method B generated the gold standard pronunciation and the baseline system did not. For the Italian and Spanish sets, we see that the letter-to-phoneme mappings produced by Method B are indicative of the language of origin: (c \u2192 /CH/) in Carcione, (u \u2192 /UW/) in Cuttino, (o \u2192 /OW/) in Pesola, and (i \u2192 /IY/) in Zavadil and Vivona. Interestingly, the name Bencivenga is categorized as Spanish but appears with the letter-to-phoneme mapping (c \u2192 /CH/), which corresponds to Italian as the language of origin. We found other examples of the (c \u2192 /CH/) mappings, indicating that Italian-origin names have been folded into Spanish data. This is not surprising since Spanish and Italian names have high confusion with each other. Effectively, our word origin model produced a noisy Spanish G2P training set, but the re-weighted G2P system is robust to these errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 813,
"end": 820,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Table 2 rows (Baseline / Method B): Italian: (name missing) K AH T IY N OW / K UW T IY N OW; Lubrano L AH B R AA N OW / L UW B R AA N OW; Pesola P EH S AH L AH / P EH S OW L AH. Slavic: Kotula K OW T UW L AH / K AH T UW L AH; Jaworowski JH AH W ER AO F S K IY / Y AH W ER AO F S K IY; Lisak L IY S AH K / L IH S AH K; Wasik W AA S IH K / V AA S IH K. Spanish: Bencivenga B EH N S IH V IH N G AH / B EH N CH IY V EH NG G AH; Vivona V IH V OW N AH / V IY V OW N AH; Zavadil Z AA V AA D AH L / Z AA V AA D IY L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We see examples in the Slavic set where the gold standard dictionary pronunciation is partially but not completely Americanized. In Jaworowski, we have the mappings (j \u2192 /Y/) and (w \u2192 /F/), both of which are derived from the original Polish pronunciation. But for the same name, we also have (w \u2192 /W/) rather than (w \u2192 /V/), although the latter is truer to the original Polish. This illustrates one of the goals of our project, which is to capture these patterns of Americanization as they occur in the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We apply word origin modeling to grapheme-to-phoneme conversion, interpolating between language-independent and language-specific probabilistic grapheme-to-phoneme models. We find that our system outperforms the baseline in predicting Americanized surname pronunciations and captures several letter-to-phoneme features that are specific to the language of origin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our method operates as a wrapper around G2P output without modifying the underlying algorithm, and therefore can be applied to any state-of-the-art G2P system that outputs posterior probabilities of phoneme sequences for a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Future work will consider unsupervised or semisupervised approaches to word origin recognition for this task, and methods to tune the smoothing weights \u03c3 at the language rather than the global level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The data may be downloaded from http://people.cs.uchicago.edu/~wax/wordorigin/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language identification of names with SVMs",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Bhargava",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Bhargava and Grzegorz Kondrak. 2010. Lan- guage identification of names with SVMs. In Proceed- ings of NAACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Jointsequence models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Speech Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech Communication.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning multi character alignment rules and classification of training data for transliteration",
"authors": [
{
"first": "Dipankar",
"middle": [],
"last": "Bose",
"suffix": ""
},
{
"first": "Sudeshna",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipankar Bose and Sudeshna Sarkar. 2009. Learning multi character alignment rules and classification of training data for transliteration. In Proceedings of the ACL Named Entities Workshop.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using place name data to train language identification models",
"authors": [
{
"first": "Stanley",
"middle": ["F"],
"last": "Chen",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Maison",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Beno\u00eet Maison. 2003. Using place name data to train language identification models. In Proceedings of Eurospeech.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving the recognition of foreign names and non-native speech by combining multiple grapheme-to-phoneme converters",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Cremelie",
"suffix": ""
},
{
"first": "Louis",
"middle": ["ten"],
"last": "Bosch",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ITRW on Adaptation Methods for Speech Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nick Cremelie and Louis ten Bosch. 2001. Improv- ing the recognition of foreign names and non-native speech by combining multiple grapheme-to-phoneme converters. In Proceedings of ITRW on Adaptation Methods for Speech Recognition.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent class transliteration based on source language origin",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masato Hagiwara and Satoshi Sekine. 2011. Latent class transliteration based on source language origin. In Proceedings of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dictionary of American family names",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Hanks. 2003. Dictionary of American family names. New York : Oxford University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving transliteration accuracy using word-origin detection and lexicon lookup",
"authors": [
{
"first": "Mitesh",
"middle": ["M"],
"last": "Khapra",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitesh M. Khapra and Pushpak Bhattacharyya. 2009. Improving transliteration accuracy using word-origin detection and lexicon lookup. In Proceedings of the ACL Named Entities Workshop.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving pronunciation accuracy of proper names with language origin classes",
"authors": [
{
"first": "Ariadna",
"middle": ["Font"],
"last": "Llitj\u00f3s",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariadna Font Llitj\u00f3s. 2001. Improving pronunciation accuracy of proper names with language origin classes. Master's thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pronunciation modeling for names of foreign origin",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Maison",
"suffix": ""
},
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"S"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Maison, Stanley F. Chen, and Paul S. Cohen. 2003. Pronunciation modeling for names of foreign origin. In Proceedings of ASRU.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language processing for named entities with word-internal information",
"authors": [
{
"first": "Sonjia",
"middle": [],
"last": "Waxmonsky",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonjia Waxmonsky. 2011. Natural language process- ing for named entities with word-internal information. Ph.D. thesis, University of Chicago.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Language</td><td>Train</td><td>Test</td><td>Base</td><td>(A)</td><td>(B)</td></tr><tr><td>Class</td><td colspan=\"3\">Count Count -line</td><td/><td/></tr><tr><td>British</td><td>16.1k</td><td>2111</td><td colspan=\"3\">71.8 73.1 73.9</td></tr><tr><td>German</td><td>8360</td><td>1109</td><td colspan=\"3\">75.8 74.2 78.2</td></tr><tr><td>Italian</td><td>3358</td><td>447</td><td colspan=\"3\">61.7 66.2 65.1</td></tr><tr><td>Slavic</td><td>1658</td><td>232</td><td colspan=\"3\">50.9 49.6 51.7</td></tr><tr><td>Spanish</td><td>1460</td><td>246</td><td colspan=\"3\">44.7 41.5 48.0</td></tr><tr><td>French</td><td>1143</td><td>177</td><td colspan=\"3\">42.9 42.4 45.2</td></tr><tr><td>Dutch</td><td>468</td><td>82</td><td>70.7</td><td colspan=\"2\">52.4 68.3</td></tr><tr><td>Scandin.</td><td>393</td><td>61</td><td>77.1</td><td colspan=\"2\">60.7 72.1</td></tr><tr><td>Japanese</td><td>116</td><td>23</td><td colspan=\"3\">73.9 52.2 78.3</td></tr><tr><td>Arabic</td><td>68</td><td>18</td><td colspan=\"3\">33.3 11.1 38.9</td></tr><tr><td>Portug.</td><td>34</td><td>4</td><td colspan=\"3\">25.0 25.0 50.0</td></tr><tr><td>Hungarian</td><td>28</td><td>3</td><td colspan=\"3\">100.0 66.7 100.0</td></tr><tr><td>Other</td><td>431</td><td>72</td><td colspan=\"3\">55.6 54.2 59.7</td></tr><tr><td>All</td><td/><td/><td colspan=\"3\">67.8 67.4 70.0</td></tr><tr><td colspan=\"6\">Table 1: G2P word accuracy for various weighting meth-</td></tr><tr><td colspan=\"5\">ods using a character-based word origin model.</td><td/></tr></table>",
"text": "The factor \u03c3 is tuned on a development set.",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Sample G2P output from the Baseline (language-independent) and Method B systems. Language labels shown here are the arg max l P (l|w) using the character-based word origin model. Phoneme symbols are from an Arpabet-based alphabet, as used in the CMU Pronouncing Dictionary.",
"num": null
}
}
}
}