{
"paper_id": "N09-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:34.775827Z"
},
"title": "Semi-Supervised Lexicon Mining from Parenthetical Expressions in Monolingual Web Pages",
"authors": [
{
"first": "Xianchao",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jun",
"middle": [
"'"
],
"last": "Ichi Tsujii",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Manchester National Centre for Text Mining (NaCTeM) Manchester Interdisciplinary Biocentre",
"location": {
"addrLine": "131 Princess Street",
"postCode": "M1 7DN",
"settlement": "Manchester",
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a semi-supervised learning framework for mining Chinese-English lexicons from large amount of Chinese Web pages. The issue is motivated by the observation that many Chinese neologisms are accompanied by their English translations in the form of parenthesis. We classify parenthetical translations into bilingual abbreviations, transliterations, and translations. A frequency-based term recognition approach is applied for extracting bilingual abbreviations. A self-training algorithm is proposed for mining transliteration and translation lexicons. In which, we employ available lexicons in terms of morpheme levels, i.e., phoneme correspondences in transliteration and grapheme (e.g., suffix, stem, and prefix) correspondences in translation. The experimental results verified the effectiveness of our approaches.",
"pdf_parse": {
"paper_id": "N09-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a semi-supervised learning framework for mining Chinese-English lexicons from large amount of Chinese Web pages. The issue is motivated by the observation that many Chinese neologisms are accompanied by their English translations in the form of parenthesis. We classify parenthetical translations into bilingual abbreviations, transliterations, and translations. A frequency-based term recognition approach is applied for extracting bilingual abbreviations. A self-training algorithm is proposed for mining transliteration and translation lexicons. In which, we employ available lexicons in terms of morpheme levels, i.e., phoneme correspondences in transliteration and grapheme (e.g., suffix, stem, and prefix) correspondences in translation. The experimental results verified the effectiveness of our approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Bilingual lexicons, as lexical or phrasal parallel corpora, are widely used in applications of multilingual language processing, such as statistical machine translation (SMT) and cross-lingual information retrieval. However, it is a time-consuming task for constructing large-scale bilingual lexicons by hand. There are many facts cumber the manual development of bilingual lexicons, such as the continuous emergence of neologisms (e.g., new technical terms, personal names, abbreviations, etc.), the difficulty of keeping up with the neologisms for lexicographers, etc. In order to turn the facts to a better way, one of the simplest strategies is to automatically mine large-scale lexicons from corpora such as the daily updated Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generally, there are two kinds of corpora used for automatic lexicon mining. One is the purely monolingual corpora, wherein frequency-based expectation-maximization (EM, refer to (Dempster et al., 1977) ) algorithms and cognate clues play a central role (Koehn and Knight, 2002) . Haghighi et al. (2008) presented a generative model based on canonical correlation analysis, in which monolingual features such as the context and orthographic substrings of words were taken into account. The other is multilingual parallel and comparable corpora (e.g., Wikipedia 1 ), wherein features such as cooccurrence frequency and context are popularly employed (Cheng et al., 2004; Shao and Ng, 2004; Cao et al., 2007; Lin et al., 2008) .",
"cite_spans": [
{
"start": 179,
"end": 202,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF3"
},
{
"start": 254,
"end": 278,
"text": "(Koehn and Knight, 2002)",
"ref_id": "BIBREF9"
},
{
"start": 281,
"end": 303,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF6"
},
{
"start": 649,
"end": 669,
"text": "(Cheng et al., 2004;",
"ref_id": "BIBREF1"
},
{
"start": 670,
"end": 688,
"text": "Shao and Ng, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 689,
"end": 706,
"text": "Cao et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 707,
"end": 724,
"text": "Lin et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on a special type of comparable corpus, parenthetical translations. The issue is motivated by the observation that Web pages and technical papers written in Asian languages (e.g., Chinese, Japanese) sometimes annotate named entities or technical terms with their translations in English inside a pair of parentheses. This is considered to be a traditional way to annotate new terms, personal names or other named entities with their English translations expressed in brackets. Formally, a parenthetical translation can be expressed by the following pattern, f 1 f 2 ... f J (e 1 e 2 ... e I ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, f 1 f 2 ... f J (f J 1 ), the pre-parenthesis text, denotes the word sequence of some language other than English; and e 1 e 2 ... e I (e I 1 ), the in-parenthesis text, denotes the word sequence of English. We separate parenthetical translations into three categories: Table 1 : Parenthetical translation categories and examples extracted from Chinese Web pages. Mixture stands for the mixture of translation (University) and transliteration (Bradford). ' ' denotes the left boundary of f J 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "bilingual abbreviation, transliteration, and translation. Table 1 illustrates examples of these categories. We address several characteristics of parenthetical translations that differ from traditional comparable corpora. The first is that they only appear in monolingual Web pages or documents, and the context information of e I 1 is unknown. Second, frequency and word number of e I 1 are frequently small. This is because parenthetical translations are only used when the authors thought that f J 1 contained some neologism(s) which deserved further explanation in another popular language (e.g., English). Thus, traditional context based approaches are not applicable and frequency based approaches may yield low recall while with high precision. Furthermore, cognate clues such as orthographic features are not applicable between language pairs such as English and Chinese.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parenthetical translation mining faces the following issues. First, we need to distinguish parenthetical translations from parenthetical expressions, since parenthesis has many functions (e.g., defining abbreviations, elaborations, ellipsis, citations, annotations, etc.) other than translation. Second, the left boundary (denoted as in Table 1 ) of the preparenthesis text need to be determined to get rid of the unrelated words. Third, we need further distinguish different translation types, such as bilingual abbreviation, the mixture of translation and transliteration, as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 1",
"ref_id": null
},
{
"start": 587,
"end": 594,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to deal with these problems, supervised (Cao et al., 2007) and unsupervised (Li et al., 2008) methods have been proposed. However, supervised approaches are restricted by the quality and quantity of manually constructed training data, and unsupervised approaches are totally frequency-based without using any semantic clues. In contrast, we propose a semi-supervised framework for mining parenthetical translations. We apply a monolingual abbreviation extraction approach to bilingual abbreviation extraction. We construct an English-syllable to Chinese-pinyin transliteration model which is selftrained using phonemic similarity measurements. We further employ our cascaded translation model (Wu et al., 2008) which is self-trained based on morpheme-level translation similarity.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Cao et al., 2007) and",
"ref_id": "BIBREF0"
},
{
"start": 72,
"end": 102,
"text": "unsupervised (Li et al., 2008)",
"ref_id": null
},
{
"start": 702,
"end": 719,
"text": "(Wu et al., 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. We briefly review the related work in the next section. Our system framework and self-training algorithm is described in Section 3. Bilingual abbreviation extraction, self-trained transliteration models and cascaded translation models are described in Section 4, 5, and 6, respectively. In Section 7, we evaluate our mined lexicons by Wikipedia. We conclude in Section 8 finally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Numerous researchers have proposed a variety of automatic approaches to mine lexicons from the Web pages or other large-scale corpora. Shao and Ng (2004) presented a method to mine new translations from Chinese and English news documents of the same period from different news agencies, combining both transliteration and context information. Kuo et al. (2006) used active learning and unsupervised learning for mining transliteration lexicon from the Web pages, in which an EM process was used for estimating the phonetic similarities between English syllables and Chinese characters. Cao et al. (2007) split parenthetical translation mining task into two parts, transliteration detection and translation detection. They employed a transliteration lexicon for constructing a grapheme-based transliteration model and annotated boundaries manually to train a classifier. Lin et al. (2008) applied a frequency-based word alignment approach, Competitive Link (Melanmed, 2000) , to determine the outer boundary (Section 7).",
"cite_spans": [
{
"start": 135,
"end": 153,
"text": "Shao and Ng (2004)",
"ref_id": "BIBREF19"
},
{
"start": 343,
"end": 360,
"text": "Kuo et al. (2006)",
"ref_id": "BIBREF10"
},
{
"start": 586,
"end": 603,
"text": "Cao et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 870,
"end": 887,
"text": "Lin et al. (2008)",
"ref_id": "BIBREF11"
},
{
"start": 956,
"end": 972,
"text": "(Melanmed, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On the other hand, there have been many semisupervised approaches in numerous applications (Lin et al., 2008) Figure 1: The system framework of mining lexicons from Chinese Web pages. (Zhu, 2007) , such as self-training in word sense disambiguation (Yarowsky, 2005) and parsing (Mc-Closky et al., 2008) . In this paper, we apply selftraining to a new topic, lexicon mining.",
"cite_spans": [
{
"start": 91,
"end": 109,
"text": "(Lin et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 184,
"end": 195,
"text": "(Zhu, 2007)",
"ref_id": "BIBREF22"
},
{
"start": 249,
"end": 265,
"text": "(Yarowsky, 2005)",
"ref_id": null
},
{
"start": 278,
"end": 302,
"text": "(Mc-Closky et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 System Framework and Self-Training Algorithm Figure 1 illustrates our system framework for mining lexicons from Chinese Web pages. First, parenthetical expressions matching Pattern 1 are extracted. Then, pre-parenthetical Chinese sequences are segmented into word sequences by S-MSRSeg 2 (Gao et al., 2006) . The initial parenthetical translation corpus is constructed by applying the heuristic rules defined in (Lin et al., 2008) 3 . Based on this corpus, we mine three lexicons step by step, a bilingual abbreviation lexicon, a transliteration lexicon, and a translation lexicon. The abbreviation candidates are extracted firstly by using a heuristic rule (Section 4.1). Then, the transliteration candidates are selected by employing a transliteration model (Section 5.1). Specially, f J 1 (e I 1 ) is taken as a transliteration candidate only if a word e i in e I 1 can be transliterated. In addition, a transliteration candidate will also be considered as a translation candidate if not all e i can be transliterated (refer to the mixture example in Table1). Finally, after abbreviation filtering and transliteration filtering, the remaining candi-2 http://research.microsoft.com/research/downloads/details/ 7a2bb7ee-35e6-40d7-a3f1-0b743a56b424/details.aspx 3 e.g., f J 1 is predominantly in Chinese and e I 1 is predominantly in English Algorithm 1 self-training algorithm",
"cite_spans": [
{
"start": 290,
"end": 308,
"text": "(Gao et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 414,
"end": 432,
"text": "(Lin et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Require: L, U = {f J 1 (e I 1 )}, T, M    \u00a3 L, (labeled) training set; U, (unlabeled) candidate set; T, test set; M, the transliteration or translation model
 1: Lexicon = {}    \u00a3 new mined lexicon
 2: repeat
 3:   N = {}    \u00a3 new mined lexicon during one iteration
 4:   train M on L
 5:   evaluate M on T
 6:   for f J 1 (e I 1 ) \u2208 U do
 7:     topN = {C' | decode e I 1 by M}
 8:     N = N \u222a {(c, e I 1 ) | c \u2208 f J 1 \u2227 \u2203 C' \u2208 topN s.t. similarity{c, C'} \u2265 \u03b8}
 9:   end for
10:   U = U \u2212 N
11:   L = unified(L \u222a N)
12:   Lexicon = unified(Lexicon \u222a N)
13: until |N| \u2264 threshold
14: return Lexicon    \u00a3 the output",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Algorithm 1 shows the self-training algorithm for lexicon mining. The main part is a loop from Line 2 to Line 13. A given seed lexicon is taken as labeled data and is split into training and test sets (L and T). U = {f J 1 (e I 1 )} stands for the (unlabeled) parenthetical expression set. Initially, a translation/transliteration model (M) is trained on L and evaluated on T (Lines 4 and 5). Then, the English phrase e I 1 of each unlabeled entry is decoded by M, and the top-N outputs are stored in the set topN (Lines 7\u223c8). A similarity function on c (a word substring of f J 1 ) and a top-N output C' is employed to make the classification decision: the pair (c, e I 1 ) is selected as a new entry if the similarity between c and C' is no smaller than a threshold value \u03b8 (Line 8). After processing each entry in U, the newly mined lexicon N is deleted from U and unified with the current training set L to form the new training set (Lines 10 and 11). N is also added to the final lexicon (Line 12). When |N| is lower than a threshold, the loop stops. Finally, the algorithm returns the mined lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
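{
"text": "A minimal Python sketch of this loop follows. The callables train, decode_topn, and similar, the helper substrings, and the stopping threshold min_new are illustrative names that are not defined in the paper, and the evaluation on T (Line 5) is omitted.

from typing import Callable, Iterable, Set, Tuple

Entry = Tuple[str, str]  # (Chinese substring c, in-parenthesis English phrase e)

def substrings(words: Tuple[str, ...]) -> Iterable[str]:
    # all contiguous word subsequences of the pre-parenthesis text f_1^J
    return (''.join(words[i:j]) for i in range(len(words))
            for j in range(i + 1, len(words) + 1))

def self_train(L: Set[Entry], U: Set[Tuple[Tuple[str, ...], str]],
               train: Callable, decode_topn: Callable, similar: Callable,
               min_new: int = 100) -> Set[Entry]:
    lexicon: Set[Entry] = set()            # Line 1: the mined lexicon
    while True:
        model = train(L)                   # Line 4 (Line 5, evaluation on T, omitted)
        new: Set[Entry] = set()            # Line 3: entries mined in this iteration
        for f_words, e in U:               # Line 6: scan unlabeled candidates
            cands = decode_topn(model, e)  # Line 7: top-N decodings of e
            for c in substrings(f_words):  # Line 8: keep (c, e) if similar enough
                if any(similar(c, cand) for cand in cands):
                    new.add((c, e))
                    break
        mined_e = {e for _, e in new}
        U = {(f, e) for (f, e) in U if e not in mined_e}   # Line 10
        L = L | new                        # Line 11: grow the training set
        lexicon |= new                     # Line 12
        if len(new) <= min_new:            # Line 13: stop when few new entries remain
            return lexicon                 # Line 14",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},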
{
"text": "One of the open problems in Algorithm 1 is how to append new mined entries into the existing seed lexicon, considering they have different distributions. One way is to design and estimate a weight function on the frequency of new mined entries. For simplicity, we use a deficient strategy that takes the weights of all new mined entries to be one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The method that we use for extracting a bilingual abbreviation lexicon from parenthetical expressions is inspired by (Okzaki and Ananiadou, 2006) . They used a term recognition approach to build a monolingual abbreviation dictionary from the Medical Literature Analysis and Retrieval System Online (MED-LINE) abstracts, wherein acronym definitions (e.g., ADM is short for adriamycin, adrenomedullin, etc.) are abundant. They reported 99% precision and 82-95% recall. Through locating a textual fragment with an acronym and its expanded form in pattern long form (short form),",
"cite_spans": [
{
"start": 117,
"end": 145,
"text": "(Okzaki and Ananiadou, 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "they defined a heuristic formula to compute the longform likelihood LH(c) for a candidate c:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "LH(c) = freq(c) \u2212 ( \u2211_{t\u2208Tc} freq(t) \u00d7 freq(t) ) / ( \u2211_{t\u2208Tc} freq(t) ). (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Here, c is a long-form candidate; freq(c) denotes the frequency of co-occurrence of c with a short-form; and Tc is a set of nested long-form candidates, each of which consists of a preceding word followed by the candidate c. Obviously, for t \u2208 Tc, Equation 3 can be explained as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LH(c) = freq(c) \u2212 E[freq(t)].",
"eq_num": "(4)"
}
],
"section": "Methodology",
"sec_num": "4.1"
},
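{
"text": "A small Python sketch of the long-form likelihood in Equations 3 and 4. The mappings freq (the co-occurrence count of each candidate with the short-form) and nested (the set Tc of nested candidates for each c) are hypothetical inputs supplied by the caller.

from typing import Dict, Iterable

def lh_score(c: str, freq: Dict[str, int], nested: Dict[str, Iterable[str]]) -> float:
    # LH(c) = freq(c) - E[freq(t)] over the nested candidates t in Tc (Eqs. 3-4)
    t_c = list(nested.get(c, []))
    total = sum(freq[t] for t in t_c)
    if total == 0:
        return float(freq[c])
    expected = sum(freq[t] * freq[t] for t in t_c) / total  # freq-weighted mean of freq(t)
    return freq[c] - expected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},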
{
"text": "In this paper, we apply their method on the task of bilingual abbreviation lexicon extraction. Now, the long-form is a Chinese word sequence and the short-form is an English acronym. We filter the parenthetical expressions in the Web pages with several heuristic rules to meet the form of pattern 2 and to save the computing time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "\u2022 the short-form (e I 1 ) should contain only one English word (I = 1), and all letters in which should be capital;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "\u2022 similar with (Lin et al., 2008) , the preparenthesis text is trimmed with: |c| \u2265 10 \u00d7 |e I 1 | + 6 when |e I 1 | \u2264 6, and |c| \u2265 2 \u00d7 |e I 1 | + 6, otherwise. |c| and |e I 1 | are measured in bytes. We further trim the remaining pre-parenthesis text by punctuations other than hyphens and dots, i.e., the right most punctuation and its left subsequence are discarded. ",
"cite_spans": [
{
"start": 15,
"end": 33,
"text": "(Lin et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
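{
"text": "A Python sketch of one possible reading of the trimming rules above; the byte bounds come from the text, while the function name and the exact punctuation test are illustrative.

import unicodedata

def trim_pre_parenthesis(c: str, e: str) -> str:
    # Cap the byte length of the pre-parenthesis text c relative to the acronym e,
    # then discard the rightmost punctuation mark (other than hyphens and dots)
    # together with everything to its left.
    e_len = len(e.encode('utf-8'))
    max_len = 10 * e_len + 6 if e_len <= 6 else 2 * e_len + 6
    while c and len(c.encode('utf-8')) > max_len:
        c = c[1:]                                      # drop from the far (left) end
    cut = -1
    for i, ch in enumerate(c):
        if unicodedata.category(ch).startswith('P') and ch not in '-.':
            cut = i                                    # rightmost punctuation seen so far
    return c[cut + 1:]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},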
{
"text": "We used SogouT Internet Corpus Version 2.0 4 , which contains about 13 billion original Web pages (mainly Chinese) in the form of 252 gigabyte .txt files. In addition, we used 55 gigabyte (.txt format) Peking University Chinese Paper Corpus. We constructed a partially parallel corpus in the form of Pattern 1 from the union of the two corpora using the heuristic rules defined in (Lin et al., 2008) . We gained a partially parallel corpus which contains 12,444,264 entries.",
"cite_spans": [
{
"start": 381,
"end": 399,
"text": "(Lin et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "We extracted 107,856 distinct English acronyms. Limiting LH score \u2265 1.0 in Equation 3, we gained 2,020,012 Chinese long-form candidates for the 107,856 English acronyms. Table 2 illustrates the top-7 Chinese long-form candidates of the English acronym TAA. Three candidates are correct (T) longforms while the other 4 are wrong (F). Wrong candidates from No. 3 to 5 are all subsequences of the correct candidate No. 1. No. 6 includes No. 1 while with a Chinese functional word de in the left most side. These error types can be easily tackled with some filtering patterns, such as 'remove the left most functional word in the long-form candidates', 'only keep the relatively longer candidates with larger LH score', etc.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "Since there does not yet exists a common evaluation data set for the bilingual abbreviation lexicon, we manually evaluated a small sample of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "Of the 107,856 English acronyms, we randomly selected 200 English acronyms and their top-1 Chinese long-form candidates for manually evaluating. We found, 92 candidates were correct including 3 transliteration examples. Of the 108 wrong candidates, 96 candidates included the correct long-form with some redundant words on the left side (i.e., c = (word) + correct long-form), the other 12 candidates missed some words of the correct long-form or had some redundant words right before the left parenthesis (i.e., c = (word) * correct long-form (word) + or c = (word) * subsequence of correct long-form word) * ). We classified the redundant word right before the correct long-form of each of the 96 candidates, de occupied 32, noun occupied 7, verb occupied 18, prepositions and conjunctions occupied the remaining ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "In total, the abbreviation translation accuracy is 44.5%. We improved the accuracy to 60.5% with an additional de filtering pattern. According to former mentioned error analysis, the accuracy may further be improved if a Chinese part-of-speech tagger is employed and the non-nominal words in the longform are removed beforehand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "In this section, we first describe and compare three transliteration models. Then, we select and train the best model following Algorithm 1 for lexicon mining. We investigate two things, the scalability of the self-trained model given different amount of initial training data, and the performance of several strategies for selecting new training samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Training for Transliteration Models",
"sec_num": "5"
},
{
"text": "We construct and compare three forward transliteration models, a phoneme-based model (English phonemes to Chinese pinyins), a grapheme-based model (English syllables to Chinese characters) and a hybrid model (English syllables to Chinese pinyins). Similar models have been compared in (Oh et al., 2006) for English-to-Korean and Englishto-Japanese transliteration. All the three models are phrase-based, i.e., adjacent phonemes or graphemes are allowable to form phrase-level transliteration units. Building the correspondences on phrase level can effectively tackle the missing or redundant phoneme/grapheme problem during transliteration. For example, when Aamodt is transliterated into a m\u014d t\u00e8 5 , a and d are missing. The problem can be easily solved when taking Aa and dt as single units for transliterating.",
"cite_spans": [
{
"start": 285,
"end": 302,
"text": "(Oh et al., 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},
{
"text": "Making use of Moses (Koehn et al., 2007) , a phrase-based SMT system, Matthews (2007) has shown that the performance was comparable to recent state-of-the-art work (Jiang et al., 2007) in English-to-Chinese personal name transliteration. Matthews (2007) took transliteration as translation at the surface level. Inspired by his idea, we also implemented our transliteration models employing Moses. The main difference is that, while Matthews (2007) tokenized the English names into individual letters before training in Moses, we split them into syllables using the heuristic rules described in (Jiang et al., 2007) , such that one syllable only contains one vowel letter or a combination of a consonant and a vowel letter.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 70,
"end": 85,
"text": "Matthews (2007)",
"ref_id": "BIBREF12"
},
{
"start": 164,
"end": 184,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 238,
"end": 253,
"text": "Matthews (2007)",
"ref_id": "BIBREF12"
},
{
"start": 433,
"end": 448,
"text": "Matthews (2007)",
"ref_id": "BIBREF12"
},
{
"start": 595,
"end": 615,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},
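{
"text": "A rough Python approximation of the syllabification heuristic: each unit is a run of consonants followed by a run of vowel letters, and a trailing consonant run (e.g., dt in Aamodt) forms its own unit. This is only an illustrative sketch, not the exact rule set of (Jiang et al., 2007).

import re

def split_syllables(name: str) -> list:
    # greedy consonant*-vowel+ chunks; a purely consonantal tail becomes its own unit
    return re.findall(r'[^aeiouy]*[aeiouy]+|[^aeiouy]+$', name.lower())

# e.g., split_syllables('Aamodt') -> ['aa', 'mo', 'dt']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},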
{
"text": "English syllable sequences are used in the grapheme-based and hybrid models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},
{
"text": "In the phoneme-based model, we transfer English names into phonemes and Chinese characters into Pinyins in virtue of the CMU pronunciation dictionary 6 and the LDC Chinese character-to-pinyin list 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},
{
"text": "In the mass, the grapheme-based model is the most robust model, since no additional resources are needed. However, it suffers from the Chinese homophonic character problem. For instance, pinyin ai corresponds to numerous Chinese characters which are applicable to personal names. The phonemebased model is the most suitable model that reflects the essence of transliteration, while restricted by additional grapheme to phoneme dictionaries. In order to eliminate the confusion of Chinese homophonic characters and alleviate the dependency on additional resources, we implement a hybrid model that accepts English syllables and Chinese pinyins as formats of the training data. This model is called hybrid, since English syllables are graphemes and Chinese pinyins are phonemes. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "5.1"
},
{
"text": "Similar to (Jiang et al., 2007) , the transliteration models were trained and tested on the LDC Chinese-English Named Entity Lists Version 1.0 8 . The original list contains 572,213 English people names with Chinese transliterations. We extracted 74,725 entries in which the English names also appeared in the CMU pronunciation dictionary. We randomly selected 3,736 entries as an open testing set and the remaining entries as a training set 9 . The results were evaluated using the character/pinyin-based 4-gram BLEU score (Papineni et al., 2002) , word error rate (WER), position independent word error rate (PER), and exact match (EMatch). Figure 2 reports the performances of the three models and the comparison based on EMatch. From the results, we can easily draw the conclusion that the hybrid model performs the best under the maximal phrase length (mpl, the maximal phrase length allowed in Moses) from 1 to 8. The performances of the models converge at or right after mpl = 4. The pinyin-based WER of the hybrid model is 39.13%, comparable to the pinyin error rate 39.6%, reported in (Jiang et al., 2007) 10 . Thus, our further 10 It should be notified that we achieved this result by using larger training set (70,989 vs. 25,718) and larger test set (3,736 vs. 200) comparing with (Jiang et al., 2007) , and we did not use self-training experiments are pursued on the hybrid model taking mpl to be 4 (short for h4, hereafter).",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 524,
"end": 547,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 1094,
"end": 1114,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 1221,
"end": 1240,
"text": "(70,989 vs. 25,718)",
"ref_id": null
},
{
"start": 1292,
"end": 1312,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 643,
"end": 651,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental model selection",
"sec_num": "5.2"
},
{
"text": "As former mentioned, we investigate the scalability of the self-trained h4 model by respectively using 5, 10, 20, 40, 60, 80, and 100 percent of initial training data, and the performances of using exact matching (em) or approximate matching (am, line 8 in Algorithm 1) on the top-1 and top-5 outputs (line 7 in Algorithm 1) for selecting new training samples. We used edit distance (ed) to measure the em and am similarities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on the self-trained hybrid model",
"sec_num": "5.3"
},
{
"text": "ed(c, C ) = 0 or < syllable number(C )/2. 5When applying Algorithm 1 for transliteration lexicon mining, we decode each word in e I 1 respectively. The algorithm terminated in five iterations when we set the terminal threshold (Line 13 in Algorithm 1) to be 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on the self-trained hybrid model",
"sec_num": "5.3"
},
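{
"text": "A Python sketch of the selection test in Equation 5. edit_distance is a standard Levenshtein distance; the am branch assumes the decoded output cand is a space-separated pinyin-syllable string, which is an assumption of this sketch rather than a statement from the paper.

def edit_distance(a: str, b: str) -> int:
    # standard dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def accept(c: str, cand: str, exact: bool) -> bool:
    # em: require ed(c, cand) = 0; am: also allow ed below half the syllable count of cand
    d = edit_distance(c, cand)
    if exact:
        return d == 0
    return d == 0 or d < len(cand.split()) / 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on the self-trained hybrid model",
"sec_num": "5.3"
},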
{
"text": "For simplicity, Table 3 only illustrates the BLEU score of h4 models under four selection strategies. From this table, we can draw the following conclusions. First, with fewer initial training data, the improvement is better. The best relative improvements additional Web resources as Jiang et al. (2007) did. are 8.74%, 8.46%, 4.41%, 0.67%, 0.68%, 0.32%, and 1.39%, respectively. Second, using top-5 and em for new training data selection performs the best among the four strategies. Compared under each iteration, using top-5 is better than using top-1; em is better than am; and top-5 with am is a little better than top-1 with em. We mined 39,424, 42,466, 46,116, 47,057, 49,551, 49,622, and 50,313 distinct entries under the six types of initial data with top-5 plus em strategy. The 50,313 entries are taken as the final transliteration lexicon for further comparison.",
"cite_spans": [
{
"start": 285,
"end": 304,
"text": "Jiang et al. (2007)",
"ref_id": "BIBREF7"
},
{
"start": 644,
"end": 702,
"text": "39,424, 42,466, 46,116, 47,057, 49,551, 49,622, and 50,313",
"ref_id": null
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments on the self-trained hybrid model",
"sec_num": "5.3"
},
{
"text": "We classify the parenthetical translation candidates by employing a translation model. In contrast to (Lin et al., 2008) , wherein the lengthes of prefixes and suffixes of English words were assumed to be three bytes, we segment words into morphemes (sequences of prefixes, stems, and suffixes) by Morfessor 0.9.2 11 , an unsupervised language-independent morphological analyzer (Creutz and Lagus, 2007) . We use the morpheme-level translation similarity explicitly in our cascaded translation model (Wu et al., 2008) , which makes use of morpheme, word, and phrase level translation units. We train Moses to gain a phrase-level translation table. To gain a morpheme-level translation table, we run GIZA++ (Och and Ney, 2003) on both directions between English morphemes and Chinese characters, and take the intersection of Viterbi alignments. The Englishto-Chinese translation probabilities computed by GIZA++ are attached to each morpheme-character element in the intersection set.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "(Lin et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 379,
"end": 403,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 500,
"end": 517,
"text": "(Wu et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 706,
"end": 725,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Training for a Cascaded Translation Model",
"sec_num": "6"
},
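{
"text": "A small sketch of the alignment symmetrization step just described: keep only the links found by both GIZA++ directions and attach the English-to-Chinese probability to each surviving morpheme-character link. The data layout (sets of index pairs and a probability map) is an assumption of this sketch.

from typing import Dict, Set, Tuple

Link = Tuple[int, int]  # (English morpheme position, Chinese character position)

def intersect_alignments(e2c: Set[Link], c2e: Set[Link],
                         p_e2c: Dict[Link, float]) -> Dict[Link, float]:
    # intersection of the two Viterbi alignment directions, with E-to-C probabilities
    return {link: p_e2c.get(link, 0.0) for link in (e2c & c2e)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Training for a Cascaded Translation Model",
"sec_num": "6"
},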
{
"text": "The Wanfang Chinese-English technical term dictionary 12 , which contains 525,259 entries in total, was used for training and testing. 10,000 entries were randomly selected as the test set and the remaining entries as the training set. Again, we investigated the scalability of the self-trained cascaded translation model by using 20, 40, 60, 80, and 100 percent of the initial training data, respectively. Table 4: The BLEU score of the self-trained cascaded translation model under five initial training sets.
%     0t     1t     2t     3t     4t     5t
20   .1406  .1196  .1243  .1239  .1176  .1179
40   .1091  .1224  .1386  .1345  .1479  .1466
60   .1630  .1624  .1429  .1714  .1309  .1398
80   .1944  .1783  .1886  .1870  .1884  .1873
100  .1810  .1814  .1539  .1981  .1542  .1944",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6.1"
},
{
"text": "An aggressive similarity measurement was used for selecting new training samples: first char(c) = first char(C') \u2227 min{ed(c, C')}. (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6.1"
},
{
"text": "(6) Here, we judge if the first characters of c and C are similar or not. c was gained by deleting zero or more characters from the left side of f J 1 . When more than one c satisfied this condition, the c that had the smallest edit distance with C was selected. When applying Algorithm 1 for translation lexicon mining, we took e I 1 as one input for decoding instead of decoding each word respectively. Only the top-1 output (C ) was used for comparing. The algorithm stopped in five iterations when we set the terminal threshold to be 2000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "20",
"sec_num": null
},
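{
"text": "A Python sketch of the aggressive selection rule in Equation 6: among the candidates c obtained by deleting characters from the left of f J 1 , keep those whose first character equals that of the model output C' and return the one with the smallest edit distance. An edit-distance function (such as the one sketched after Equation 5) is assumed to be available.

from typing import Callable, Optional

def select_candidate(f_chars: str, cand: str,
                     edit_distance: Callable[[str, str], int]) -> Optional[str]:
    # candidates are the suffixes of the pre-parenthesis text f_1^J
    suffixes = [f_chars[i:] for i in range(len(f_chars))]
    matching = [c for c in suffixes if c and cand and c[0] == cand[0]]
    if not matching:
        return None                       # no candidate passes the first-character test
    return min(matching, key=lambda c: edit_distance(c, cand))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6.1"
},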
{
"text": "For simplicity, Table 4 only illustrates the BLEU score of the cascaded translation model under five initial training sets. For the reason that there are finite phonemes in English and Chinese while the semantic correspondences between the two languages tend to be infinite, Table 4 is harder to be analyzed than Table 3 . When initially using 40%, 60%, and 100% training data for self-training, the results tend to be better at some iterations. We gain 35.6%, 5.2%, and 9.4% relative improvements, respectively. However, the results tend to be worse when 20% and 80% training data were used initially, with 11.6% and 3.0% minimal relative loss. The best BLEU scores tend to be better when more initial training data are available. We mined 1,038,617, 1,025,606, 1,048,761, 1,056,311, and 1,060,936 distinct entries under the five types of initial training data. The 1,060,936 entries are taken as the final translation lexicon for further comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 4",
"ref_id": null
},
{
"start": 275,
"end": 282,
"text": "Table 4",
"ref_id": null
},
{
"start": 313,
"end": 320,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "20",
"sec_num": null
},
{
"text": "We have mined three kinds of lexicons till now, an abbreviation lexicon containing 107,856 dis-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Evaluation",
"sec_num": "7"
},
{
"text": "http://en.wikipedia.org/wiki/Main Page",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.sogou.com/labs/dl/t.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The tones of Chinese pinyins are ignored in our transliteration models for simplicity.6 http://www.speech.cs.cmu.edu/cgi-bin/cmudict 7 http://projects.ldc.upenn.edu/Chinese/docs/char2pinyin.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cis.hut.fi/projects/morpho/ 12 http://www.wanfangdata.com.cn/Search/ResourceBrowse .aspx",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan) and Japanese/Chinese Machine Translation Project in Special Coordination Funds for Promoting Science and Technology (MEXT, Japan). We thank the anonymous reviewers for their constructive comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Table 5: The results of our lexicon and an unsupervised-mined lexicon (Lin et al., 2008) evaluated under the Wikipedia title dictionary. Cov is short for coverage.
               En. to Ch.          Ch. to En.
               Cov      EMatch     Cov      EMatch
Our Lexicon    22.8%    5.2%       23.2%    5.5%
Unsupervised   23.5%    5.4%       24.0%    5.4%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Evaluation",
"sec_num": "7"
},
{
"text": "The three lexicons are combined together as our final lexicon. Similar to (Lin et al., 2008), we compare our final mined lexicon with a dictionary extracted from Wikipedia, the biggest multilingual free-content encyclopedia on the Web. We extracted the titles of Chinese and English Wikipedia articles 13 that are linked to each other. Since most titles contain fewer than five words, we take a linked title pair as a translation entry without considering the word alignment relation between the words inside the titles. The resulting lexicon contains 105,320 translation pairs between 103,823 Chinese titles and 103,227 English titles. Obviously, only a small percentage of titles have more than one translation. Whenever there is more than one translation, we take a candidate entry as correct if and only if it matches one of the translations. Moreover, we compare our semi-supervised approach with an unsupervised approach (Lin et al., 2008). Lin et al. (2008) took the \u03d5 2 (f j , e i ) score 14 (Gale and Church, 1991) with threshold 0.001 as the word alignment probability in a word alignment algorithm, Competitive Link. Competitive Link tries to align an unlinked e i with an unlinked f j on the condition that \u03d5 2 (f j , e i ) is the biggest. Lin et al. (2008) relaxed the unlinked constraints to allow a consecutive sequence of words on one side to be linked to the same word on the other side 15 . The boundary inside f J 1 is determined when each e i in e I 1 is aligned. After applying the modified Competitive Link to the partially parallel corpus which includes 12,444,264 entries (Section 4.2), we obtained 2,628,366 distinct pairs. Table 5 shows the results of the two lexicons evaluated under the Wikipedia title dictionary. The coverage is measured by the percentage of titles which appear in the mined lexicon. We then check whether the translation in the mined lexicon is an exact match of one of the translations in the Wikipedia lexicon. Comparing the results, our mined lexicon is comparable with the lexicon mined in an unsupervised way. Since the selection is based on phonemic and semantic clues instead of frequency, a parenthetical translation candidate will not be selected if the in-parenthesis English text fails to be transliterated or translated. This is one reason why we obtained a slightly lower coverage. Another reason comes from the low coverage rate of the seed lexicons used for self-training: only 8.65% of the English words in the partially parallel corpus are covered by the Wanfang dictionary. (Footnote 14: a is the number of f J 1 (e I 1 ) containing both e i and f j ; (a + b) is the number of f J 1 (e I 1 ) containing e i ; (a + c) is the number of f J 1 (e I 1 ) containing f j ; and d is the number of f J 1 (e I 1 ) containing neither e i nor f j . Footnote 15: Instead of requiring both e i and f j to have no previous linkages, they only require that at least one of them be unlinked and that (supposing e i is unlinked and f j is linked to e k ) none of the words between e i and e k be linked to any word other than f j .)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Evaluation",
"sec_num": "7"
},
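{
"text": "For reference, a small Python sketch of the \u03d5 2 association score used by Competitive Link, computed from the contingency counts a, b, c, d defined in Footnote 14. The closed form below is the standard \u03d5 2 formula of Gale and Church (1991); it is supplied here for clarity and is not quoted from this paper.

def phi_squared(a: int, b: int, c: int, d: int) -> float:
    # phi^2 = (ad - bc)^2 / ((a+b)(a+c)(b+d)(c+d)) over the co-occurrence table of (f_j, e_i)
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return 0.0 if denom == 0 else ((a * d - b * c) ** 2) / denom

# Lin et al. (2008) consider a pair for linking only if its score reaches the 0.001 threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Evaluation",
"sec_num": "7"
},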
{
"text": "We have proposed a semi-supervised learning framework for mining bilingual lexicons from parenthetical expressions in monolingual Web pages. We classified the parenthesis expressions into three categories: abbreviation, transliteration, and translation. A set of heuristic rules, a self-trained hybrid transliteration model, and a self-trained cascaded translation model were proposed for each category, respectively.We investigated the scalability of the self-trained transliteration and translation models by training them with different amount of data. The results shew the stability (transliteration) and feasibility (translation) of our proposals. Through employing the parallel Wikipedia article titles as a gold standard lexicon, we gained the comparable results comparing our semi-supervised framework with our implementation of Lin et al. (2008) 's unsupervised mining approach.ages, they only require that at least one of them be unlinked and that (suppose ei is unlinked and fj is linked to e k ) none of the words between ei and e k be linked to any word other than fj.",
"cite_spans": [
{
"start": 837,
"end": 854,
"text": "Lin et al. (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A system to Mine Large-Scale Bilingual Dictionaries from Monolingual Web Pages",
"authors": [
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2007,
"venue": "MT Summit XI",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cao, Guihong, Jianfeng Gao, and Jian-Yun Nie. 2007. A system to Mine Large-Scale Bilingual Dictionar- ies from Monolingual Web Pages. In MT Summit XI. pages 57-64, Copenhagen, Denmark.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Creating Multilingual Translation Lexicons with Regional Variations Using Web Corpora",
"authors": [
{
"first": "Pu-Jen",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Yi-Cheng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Wen-Hsiang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL 2004",
"volume": "",
"issue": "",
"pages": "534--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng, Pu-Jen, Yi-Cheng Pan, Wen-Hsiang Lu, and Lee- Feng Chien. 2004. Creating Multilingual Translation Lexicons with Regional Variations Using Web Cor- pora. In ACL 2004, pages 534-541, Barcelona, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised Models for Morpheme Segmentation and Morphology Learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Creutz, Mathias and Krista Lagus. 2007. Unsupervised Models for Morpheme Segmentation and Morphology Learning. ACM Transactions on Speech and Lan- guage Processing, 4(1):Article 3.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Maximum Likelihood from Incomplete Data via the EM Algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A. P., N. M. Laird and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Soci- ety, 39:1-38.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Identifying word correspondence in parallel text",
"authors": [
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1991,
"venue": "DARPA NLP Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W. and K. Church. 1991. Identifying word corre- spondence in parallel text. In DARPA NLP Workshop.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Andi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "4",
"pages": "531--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Jianfeng, Mu Li, Andi Wu, and Chang-Ning Huang. 2006. Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach. Computational Linguistics, 31(4):531-574.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning Bilingual Lexicons from Monolingual Corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL-08:HLT",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haghighi, Aria, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein 2008. Learning Bilingual Lexicons from Monolingual Corpora. In ACL-08:HLT. pages 771-779, Columbus, Ohio.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named Entity Translation with Web Mining and Transliteration",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2007,
"venue": "IJCAI 2007",
"volume": "",
"issue": "",
"pages": "1629--1634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, Long, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named Entity Translation with Web Min- ing and Transliteration. In IJCAI 2007. pages 1629- 1634, Hyderabad, India.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL 2007 Poster Session",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL 2007 Poster Session, pages 177-180.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning a translation lexicon from monolingual corpora",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "SIGLEX 2002",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In SIGLEX 2002, pages 9-16.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning Transliteration Lexicons from the Web",
"authors": [
{
"first": "Jin-Shea",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ying-Kuei",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING-ACL 2006",
"volume": "",
"issue": "",
"pages": "1129--1136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuo, Jin-Shea, Haizhou Li, and Ying-Kuei Yang. 2006. Learning Transliteration Lexicons from the Web. In COLING-ACL 2006. pages 1129-1136.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mining Parenthetical Translations from the Web by Word Alignment",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL-08:HLT",
"volume": "",
"issue": "",
"pages": "994--1002",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Dekang, Shaojun Zhao, Benjamin Van Durme, and Marius Pa\u015fca. 2008. Mining Parenthetical Transla- tions from the Web by Word Alignment. In ACL- 08:HLT, pages 994-1002, Columbus, Ohio.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Machine Transliteration of Proper Names",
"authors": [
{
"first": "David",
"middle": [],
"last": "Matthews",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthews, David. 2007. Machine Transliteration of Proper Names. A Thesis of Master. University of Ed- inburgh.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "When is Self-Training Effective for Parsing?",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "561--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McClosky, David, Eugene Charniak, and Mark Johnson 2008. When is Self-Training Effective for Parsing? In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 561- 568, manchester, UK.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Models of Translational Equivalence among Words",
"authors": [
{
"first": "I",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dan",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan. 2000. Models of Translational Equiv- alence among Words. Computational Linguistics, 26(2):221-249.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Josef and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Comparison of Different Machine Transliteration Models",
"authors": [
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Key-Sun",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artifical Intelligence Research",
"volume": "27",
"issue": "",
"pages": "119--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oh, Jong-Hoon, Key-Sun Choi, and Hitoshi Isahara. 2006. A Comparison of Different Machine Translit- eration Models. Journal of Artifical Intelligence Re- search, 27:119-151.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building an Abbreviation Dictionary Using a Term Recognition Approach",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2006,
"venue": "Bioinformatics",
"volume": "22",
"issue": "22",
"pages": "3089--3095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Okazaki, Naoaki and Sophia Ananiadou. 2006. Building an Abbreviation Dictionary Using a Term Recognition Approach. Bioinformatics, 22(22):3089-3095.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Eval- uation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics (ACL). pages 311-318, Philadel- phia.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mining New Word Translations from Comparable Corpora",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "618--624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shao, Li and Hwee Tou Ng. 2004. Mining New Word Translations from Comparable Corpora. In Proceed- ings of the 20th International Conference on Com- putational Linguistics (COLING), pages 618-624, Geneva, Switzerland.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving English-to-Chinese Translation for Technical Terms Using Morphological Information",
"authors": [
{
"first": "Xianchao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Tsunakawa",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 8th Conference of the Association for Machine Translation in the Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "202--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Xianchao, Naoaki Okazaki, Takashi Tsunakawa, and Jun'ichi Tsujii. 2008. Improving English-to-Chinese Translation for Technical Terms Using Morphological Information. In Proceedings of the 8th Conference of the Association for Machine Translation in the Ameri- cas (AMTA), pages 202-211, Waikiki, Hawai'i.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, David. 1995. Unsupervised Word Sense Dis- ambiguation Rivaling Supervised Methods. In Pro- ceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189-196, Cam- bridge, Massachusetts.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semi-Supervised Learning Literature Survery",
"authors": [
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu, Xiaojin. 2007. Semi-Supervised Learning Litera- ture Survery. University of Wisconsin -Madison.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "The performances of the transliteration models and their comparison on EMatch.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "al. (2007) selected 25,718 personal name pairs from LDC2003E01 as the experiment data: 200 as development set, 200 as test set, and the remaining entries as training set.",
"uris": null,
"num": null
},
"TABREF2": {
"text": "Top-7 Chinese long-form candidates for the English acronym TAA, according to the LH score.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "The BLEU score of self-trained h4 transliteration models under four selection strategies. nt (n=1..5) stands for the n-th iteration.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}