{
"paper_id": "D12-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:52.758481Z"
},
"title": "Translation Model Based Cross-Lingual Language Model Adaptation: from Word Models to Phrase Models",
"authors": [
{
"first": "Shixiang",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Xiaoyin",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a novel translation model (TM) based cross-lingual data selection model for language model (LM) adaptation in statistical machine translation (SMT), from word models to phrase models. Given a source sentence in the translation task, this model directly estimates the probability that a sentence in the target LM training corpus is similar. Compared with the traditional approaches which utilize the first pass translation hypotheses, cross-lingual data selection model avoids the problem of noisy proliferation. Furthermore, phrase TM based cross-lingual data selection model is more effective than the traditional approaches based on bag-ofwords models and word-based TM, because it captures contextual information in modeling the selection of phrase as a whole. Experiments conducted on large-scale data sets demonstrate that our approach significantly outperforms the state-of-the-art approaches on both LM perplexity and SMT performance.",
"pdf_parse": {
"paper_id": "D12-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a novel translation model (TM) based cross-lingual data selection model for language model (LM) adaptation in statistical machine translation (SMT), from word models to phrase models. Given a source sentence in the translation task, this model directly estimates the probability that a sentence in the target LM training corpus is similar. Compared with the traditional approaches which utilize the first pass translation hypotheses, cross-lingual data selection model avoids the problem of noisy proliferation. Furthermore, phrase TM based cross-lingual data selection model is more effective than the traditional approaches based on bag-ofwords models and word-based TM, because it captures contextual information in modeling the selection of phrase as a whole. Experiments conducted on large-scale data sets demonstrate that our approach significantly outperforms the state-of-the-art approaches on both LM perplexity and SMT performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language model (LM) plays a critical role in statistical machine translation (SMT). It seems to be a universal truth that LM performance can always be improved by using more training data (Brants et al., 2007) , but only if the training data is reasonably well-matched with the desired output (Moore and Lewis, 2010) . It is also obvious that among the large training data the topics or domains of discussion will change , which causes the mismatch problems with the translation task. For this reason, most researchers preferred to select similar training data from the large training corpus in the past few years Zhao et al., 2004; Kim, 2005; Masskey and Sethy, 2010; Axelrod et al., 2011) . This would empirically provide more accurate lexical probabilities, and thus better match the translation task at hand (Axelrod et al., 2011) .",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(Brants et al., 2007)",
"ref_id": "BIBREF4"
},
{
"start": 304,
"end": 316,
"text": "Lewis, 2010)",
"ref_id": "BIBREF20"
},
{
"start": 614,
"end": 632,
"text": "Zhao et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 633,
"end": 643,
"text": "Kim, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 644,
"end": 668,
"text": "Masskey and Sethy, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 669,
"end": 690,
"text": "Axelrod et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 812,
"end": 834,
"text": "(Axelrod et al., 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many previous data selection approaches for LM adaptation in SMT depend on the first pass translation hypotheses Zhao et al., 2004; Kim, 2005; Masskey and Sethy, 2010) , they select the sentences which are similar to the translation hypotheses. These schemes are overall limited by the quality of the translation hypotheses (Tam et al., 2007 and , and better initial translation hypotheses lead to better selected sentences (Zhao et al., 2004) . However, while SMT has achieved a great deal of development in recent years, the translation hypotheses are still far from perfect (Wei and Pal, 2010) , which have many noisy data. The noisy translation hypotheses mislead data selection process (Xu et al., 2001; Tam et al., 2006 and 2007; Wei and Pal, 2010) , and thus take noisy data into the selected training data, which causes noisy proliferation and degrades the performance of adapted LM.",
"cite_spans": [
{
"start": 113,
"end": 131,
"text": "Zhao et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 132,
"end": 142,
"text": "Kim, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 143,
"end": 167,
"text": "Masskey and Sethy, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 324,
"end": 345,
"text": "(Tam et al., 2007 and",
"ref_id": "BIBREF29"
},
{
"start": 424,
"end": 443,
"text": "(Zhao et al., 2004)",
"ref_id": "BIBREF34"
},
{
"start": 577,
"end": 596,
"text": "(Wei and Pal, 2010)",
"ref_id": "BIBREF31"
},
{
"start": 691,
"end": 708,
"text": "(Xu et al., 2001;",
"ref_id": "BIBREF32"
},
{
"start": 709,
"end": 729,
"text": "Tam et al., 2006 and",
"ref_id": "BIBREF28"
},
{
"start": 730,
"end": 735,
"text": "2007;",
"ref_id": "BIBREF7"
},
{
"start": 736,
"end": 754,
"text": "Wei and Pal, 2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, traditional approaches for LM adaptation are based on bag-of-words models and considered to be context independent, despite of their state-of-the-art performance, such as TF-IDF Zhao et al., 2004; Hildebrand et al., 2005; Kim, 2005; Foster and Kuhn, 2007) , centroid similarity (Masskey and Sethy, 2010) , and cross-lingual similarity (CLS) (Ananthakrishnan et al., 2011a) . They all perform at the word level, exact only ter-m matching schemes, and do not take into account any contextual information when modeling the selection by single words in isolation, which degrade the quality of selected sentences.",
"cite_spans": [
{
"start": 191,
"end": 209,
"text": "Zhao et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 210,
"end": 234,
"text": "Hildebrand et al., 2005;",
"ref_id": "BIBREF14"
},
{
"start": 235,
"end": 245,
"text": "Kim, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 246,
"end": 268,
"text": "Foster and Kuhn, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 291,
"end": 316,
"text": "(Masskey and Sethy, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 354,
"end": 385,
"text": "(Ananthakrishnan et al., 2011a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we argue that it is beneficial to model the data selection based on the source translation task directly and capture the contextual information for LM adaptation. To this end, we propose a more principled translation model (TM) based cross-lingual data selection model for LM adaptation, from word models to phrase models. We assume that the data selection should be performed by the cross-lingual model and at the phrase level. Given a source sentence in the translation task, this model directly estimates the probability before translation that a sentence in the target LM training corpus is similar. Therefore, it does not require the translation task to be pre-translation as in monolingual adaptation, and can address the problem of noisy proliferation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, this is the first extensive and empirical study of using phrase T-M based cross-lingual data selection for LM adaptation. This model learns the transform probability of a multi-term phrase in a source sentence given a phrase in the target sentence of LM training corpus. Compared with bag-of-words models and word-based TM that account for selecting single words in isolation, this model performs at the phrase level and captures some contextual information in modeling the selection of phrase as a whole, thus it is potentially more effective. More precise data selection can be determined for phrases than for words. In this model, we propose a linear ranking model framework to further improve the performance, referred to the linear discriminant function (Duda et al., 2001; Collins, 2002; Gao et al., 2005) in pattern classification and information retrieval (IR), where different models are incorporated as features, as we will show in our experiments.",
"cite_spans": [
{
"start": 789,
"end": 808,
"text": "(Duda et al., 2001;",
"ref_id": "BIBREF9"
},
{
"start": 809,
"end": 823,
"text": "Collins, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 824,
"end": 841,
"text": "Gao et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike the general TM in SMT, we explore the use of TextRank algorithm (Mihalcea et al., 2004) to identify and eliminate unimportant words (e.g., non-topical words, common words) for corpus preprocessing, and construct TM by important words. This reduces the average number of words in crosslingual data selection model, thus improving the efficiency. Moreover, TextRank utilizes the contex-t information of words to assign term weights (Lee et al., 2008) , which makes phrase TM based crosslingual data selection model play its advantage of capturing the contextual information, thus further improving the performance.",
"cite_spans": [
{
"start": 71,
"end": 94,
"text": "(Mihalcea et al., 2004)",
"ref_id": "BIBREF19"
},
{
"start": 437,
"end": 455,
"text": "(Lee et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. Section 2 introduces the related work of LM adaptation. Section 3 presents the framework of cross-lingual data selection for LM adaptation. Section 4 describes our proposed TM based crosslingual data selection model: from word models to phrase models. In section 5 we present large-scale experiments and analyses, and followed by conclusions and future work in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TF-IDF and cosine similarity have been widely used for LM adaptation Zhao et al., 2004; Hildebrand et al., 2005; Kim, 2005; Foster and Kuhn, 2007) . Masskey and Sethy (2010) selected the auxiliary data by computing centroid similarity score to the centroid of the in-domain data. The main idea of these methods is to select the sentences which are similar to the first pass translation hypotheses or in-domain corpus from the large LM training corpus, and estimate the bias LM for SMT system to improve the translation quality. Tam et al. (2007 and proposed a bilingual-LSA model for LM adaptation. They integrated the LSA marginal into the target generic LM using marginal adaptation which minimizes the Kullback-Leibler divergence between the adapted LM and the generic LM. Ananthakrishnan et al. (2011a) proposed CLS to bias the count and probability of corresponding n-gram through weighting the LM training corpus. However, these two cross-lingual approaches focus on modify LM itself, which are different from data selection method for LM adaptation. In our comparable experiments, we apply CLS for the first time to the task of cross-lingual data selection for LM adaptation. Due to lack of smoothing measure for sparse vector representation in CLS, the similarity computation is not accurate which degrades the performance of adapted LM. To avoid this, we add smoothing measure like TF-IDF, called CLS s , as we will discuss in the experiments. Snover et al. (2008) used a word TM based CLIR system (Xu et al., 2001) to select a subset of target documents comparable to the source document for adapting LM. Because of the data sparseness in the document state and it operated at the document level, this model selected large quantities of irrelevant text, which may degrade the adapted LM Ananthakrishnan et al., 2011b) . In our word TM based cross-lingual data selection model, we operate at the sentence level and add the smoothing mechanism by integrating with the background word frequency model, and these can significantly improve the performance. Axelrod et al. (2011) proposed a bilingual cross-entropy difference to select data from parallel corpus for domain adaptation which captures the contextual information slightly, and outperformed monolingual cross-entropy difference (Moore and Lewis, 2010), which first shows the advantage of bilingual data selection. However, its performance depends on the parallel in-domain corpus which is usually hard to find, and its application is assumed to be limited.",
"cite_spans": [
{
"start": 69,
"end": 87,
"text": "Zhao et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 88,
"end": 112,
"text": "Hildebrand et al., 2005;",
"ref_id": "BIBREF14"
},
{
"start": 113,
"end": 123,
"text": "Kim, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 124,
"end": 146,
"text": "Foster and Kuhn, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 528,
"end": 548,
"text": "Tam et al. (2007 and",
"ref_id": "BIBREF29"
},
{
"start": 1453,
"end": 1473,
"text": "Snover et al. (2008)",
"ref_id": "BIBREF26"
},
{
"start": 1507,
"end": 1524,
"text": "(Xu et al., 2001)",
"ref_id": "BIBREF32"
},
{
"start": 1797,
"end": 1827,
"text": "Ananthakrishnan et al., 2011b)",
"ref_id": "BIBREF1"
},
{
"start": 2062,
"end": 2083,
"text": "Axelrod et al. (2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our LM adaptation is an unsupervised similar training data selection guided by TM based cross-lingual data selection model. For the source sentences in the translation task, we estimate a new LM, the bias LM, from the corresponding target LM training sentences which are selected as the similar sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},
{
"text": "Since the size of the selected sentences is small, the corresponding bias LM is specific and more effective, giving high probabilities to those phrases that occur in the desired output translations. Following the work of (Zhao et al., 2004; Snover et al., 2008) , the generic LM P g (w i |h) and the bias LM P b (w i |h) are combined using linear interpolation as the adapted LM P a (w i |h), which is shown to improve the performance over individual model,",
"cite_spans": [
{
"start": 221,
"end": 240,
"text": "(Zhao et al., 2004;",
"ref_id": "BIBREF34"
},
{
"start": 241,
"end": 261,
"text": "Snover et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},
{
"text": "P a (w i |h) = \u00b5P g (w i |h) + (1 \u2212 \u00b5)P b (w i |h) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},
{
"text": "where the interpolation factor \u00b5 can be simply estimated using the Powell Search algorithm (Press et al., 1992) via cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},
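{
"text": "A minimal illustrative sketch (added for clarity, not part of the original paper) of the linear interpolation in Equation (1). The helper names and the simple grid search over mu are assumptions; the paper itself tunes mu with the Powell Search algorithm via cross-validation.

import math

def interpolate(p_generic, p_bias, mu):
    # adapted LM probability: P_a(w|h) = mu * P_g(w|h) + (1 - mu) * P_b(w|h), Equation (1)
    return mu * p_generic + (1.0 - mu) * p_bias

def perplexity(heldout, p_g, p_b, mu):
    # heldout: list of (word, history) pairs; p_g / p_b return the generic / bias LM probabilities
    log_sum = sum(math.log(interpolate(p_g(w, h), p_b(w, h), mu)) for w, h in heldout)
    return math.exp(-log_sum / len(heldout))

def tune_mu(heldout, p_g, p_b):
    # simple grid-search stand-in for the Powell Search algorithm used in the paper
    grid = [i / 20.0 for i in range(1, 20)]
    return min(grid, key=lambda mu: perplexity(heldout, p_g, p_b, mu))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},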
{
"text": "Our work focuses on TM based cross-lingual data selection model, from word model to phrase models, and the quality of this model is crucial to the performance of adapted LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Data Selection for Language Model Adaptation",
"sec_num": "3"
},
{
"text": "Let Q = q 1 , . . . , q j be a source sentence in the translation task and S = w 1 , . . . , w i be a sentence in the general target LM training corpus, thus crosslingual data selection model can be framed probabilistically as maximizing the P (S|Q) . By Bayes' rule,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model for Cross-Lingual Data Selection (CLTM)",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (S|Q) = P (S)P (Q|S) P (Q)",
"eq_num": "(2)"
}
],
"section": "Translation Model for Cross-Lingual Data Selection (CLTM)",
"sec_num": "4"
},
{
"text": "where the prior probability P (S) can be viewed as uniform, and the P (Q) is constant across all sentences. Therefore, selecting a sentence to maximize P (S|Q) is equivalent to selecting a sentence that maximizes P (Q|S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model for Cross-Lingual Data Selection (CLTM)",
"sec_num": "4"
},
{
"text": "Cross-Lingual Data Selection (CLWTM)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Based Translation Model for",
"sec_num": "4.1"
},
{
"text": "Following the work of (Xu et al., 2001; Snover et al., 2008) , CLWTM can be described as",
"cite_spans": [
{
"start": 22,
"end": 39,
"text": "(Xu et al., 2001;",
"ref_id": "BIBREF32"
},
{
"start": 40,
"end": 60,
"text": "Snover et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Q|S) = q\u2208Q P (q|S)",
"eq_num": "(3)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q|S) = \u03b1P (q|C q ) + (1 \u2212 \u03b1) w\u2208S P (q|w)P (w|S)",
"eq_num": "(4)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "where \u03b1 is the interpolation weight empirically set as a constant 1 , P (q|w) is the word-based TM which is estimated by IBM Model 1 (Brown et al., 1993) from the parallel corpus, P (q|C q ) and P (w|S) are the un-smoothed background and sentence model, respectively, estimated using maximum likelihood estimation (MLE) as",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "P (q|C q ) = f req(q, C q ) |C q | (5) P (w|S) = f req(w, S) |S| (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "where C q refers to the translation task, f req(q, C q ) refers to the number of times q occurs in C q , f req(w, S) refers to the number of times w occurs in S, and |C q | and |S| are the sizes of the translation task and the current target sentence, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.1.1"
},
{
"text": "Because of the data sparseness in the sentence state which degrades the model, Equation (6) does not perform well in our data selection experiments. Inspired by the work of (Berger et al., 1999) in IR, we make the following smoothing mechanism:",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "(Berger et al., 1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q|S) = \u03b1P (q|C q )+(1\u2212\u03b1) w\u2208S P (q|w)P s (w|S) (7) P s (w|S) = \u03b2P (w|C s ) + (1 \u2212 \u03b2)P (w|S) (8) P (w|C s ) = f req(w, C s ) |C s |",
"eq_num": "(9)"
}
],
"section": "Ranking Candidate Sentences",
"sec_num": "4.1.2"
},
{
"text": "where P (w|C s ) is the un-smoothed background model, estimated using MLE as Equation 5, C s refers to the LM training corpus and |C s | refers to its size. Here, \u03b2 is interpolation weight; notice that letting \u03b2 = 0 in Equation 8reduces the model to the un-smoothed model in Equation (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.1.2"
},
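{
"text": "A compact sketch (illustrative only, with assumed data structures) of the smoothed CLWTM scoring in Equations (3) and (7) to (9). The word-based TM is assumed to be a nested dict of P(q|w) values learned by IBM Model 1, the background models are relative frequencies over the translation task and the LM training corpus (Counters), alpha = 0.3 follows the footnote, and beta is an assumed value.

from collections import Counter

def clwtm_score(query_words, sentence_words, tm, task_counts, corpus_counts, alpha=0.3, beta=0.5):
    # P(Q|S) = prod_q P(q|S), Equation (3), with the smoothed P(q|S) of Equation (7)
    task_size = sum(task_counts.values())      # |C_q|
    corpus_size = sum(corpus_counts.values())  # |C_s|
    sent_counts = Counter(sentence_words)
    score = 1.0
    for q in query_words:
        p_q_bg = task_counts[q] / task_size                    # P(q|C_q), Equation (5)
        trans = 0.0
        for w in sent_counts:
            p_w_sent = sent_counts[w] / len(sentence_words)    # P(w|S), Equation (6)
            p_w_bg = corpus_counts[w] / corpus_size            # P(w|C_s), Equation (9)
            p_s_w = beta * p_w_bg + (1.0 - beta) * p_w_sent    # P_s(w|S), Equation (8)
            trans += tm.get(q, {}).get(w, 0.0) * p_s_w
        score *= alpha * p_q_bg + (1.0 - alpha) * trans        # Equation (7)
    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.1.2"
},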
{
"text": "Cross-Lingual Data Selection (CLPTM)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Translation Model for",
"sec_num": "4.2"
},
{
"text": "The phrase-based TM (Koehn et al., 2003; Och and Ney, 2004) has shown superior performance compared to the word-based TM. In this paper, the goal of phrase-based TM is to transfer S into Q.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 41,
"end": 59,
"text": "Och and Ney, 2004)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "Rather than transferring single words in isolation, the phrase model transfers one sequence of words into another sequence of words, thus incorporating contextual information. Inspired by the work of web search (Gao et al., 2010) and question retrieval in community question answer (Q&A) (Zhou et al., 2011) , we assume the following generative process: first the sentence S is broken into K nonempty word sequences w 1 , . . . , w k , then each is transferred into a new non-empty word sequences q 1 , . . . , q k , and finally these phrases are permutated and concatenated to form the sentence Q, where q and w denote the phrases or consecutive sequence of words.",
"cite_spans": [
{
"start": 211,
"end": 229,
"text": "(Gao et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 288,
"end": 307,
"text": "(Zhou et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "To formulate this generative process, let U denote the segmentation of S into K phrases w 1 , . . . , w k , and let V denote the K phrases q 1 , . . . , q k , we refer to these (w i , q i ) pairs as bi-phrases. Finally, let M denote a permutation of K elements representing the final ranking step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "Next we place a probability distribution over rewrite pairs. Let B(S, Q) denote the set of U , V , M triples that transfer S into Q. Here we assume a uniform probability over segmentations, so the phrase-based selection probability can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Q|S) \u221d (U,V,M )\u2208 B(S,Q) P (V |S, U ) \u2022 P (M |S, U, V )",
"eq_num": "(10"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": ") Then, we use the maximum approximation to the sum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Q|S) \u2248 max (U,V,M )\u2208 B(S,Q) P (V |S, U ) \u2022 P (M |S, U, V )",
"eq_num": "(11)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "Although we have defined a generative model for transferring S into Q, our goal is to calculate the ranking score function over existing Q and S. However, this model can not be used directly for sentence ranking because Q and S are often of different lengths, the length of S is almost 1.5 times to that of Q in our corpus, leaving many words in S unaligned to any word in Q. This is another key difference between our task and SMT. As pointed out by the previous work (Berger and Lafferty, 1999; Gao et al., 2010; Zhou et al., 2011) , sentence-query selection requires a distillation of the sentence, while selection of natural language tolerates little being thrown away. Thus we restrict our attention to those key sentence words that form the distillation of S, do not consider the unaligned words in S, and assume that Q is transfered only from the key sentence words.",
"cite_spans": [
{
"start": 469,
"end": 496,
"text": "(Berger and Lafferty, 1999;",
"ref_id": "BIBREF3"
},
{
"start": 497,
"end": 514,
"text": "Gao et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 515,
"end": 533,
"text": "Zhou et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "In this paper, the key sentence words are identified via word alignment. Let A = a 1 . . . a J be the \"hidden\" word alignment, which describes a mapping from a term position j in Q to a word position a j in S. We assume that the positions of the key sentence words are determined by the Viterbi align-ment\u00c2, which can be obtained using IBM Model 1 (Brown et al., 1993) as follows:",
"cite_spans": [
{
"start": 348,
"end": 368,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "A = arg max A P (Q, A|S) = arg max A P (J|I) J j=1 P (q j |w a j ) = arg max a j P (q j |w a j ) J j=1 (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
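{
"text": "An illustrative sketch of the Viterbi alignment in Equation (12). Under IBM Model 1 the alignment factorizes over positions, so each word of Q is simply aligned to the word of S with the highest lexical translation probability; the nested-dict translation table format is an assumption.

def viterbi_alignment(query_words, sentence_words, tm):
    # a_j = argmax_i P(q_j | w_i) for every position j, Equation (12)
    alignment = []
    for q in query_words:
        best_i, best_p = None, 0.0
        for i, w in enumerate(sentence_words):
            p = tm.get(q, {}).get(w, 0.0)
            if p > best_p:
                best_i, best_p = i, p
        alignment.append(best_i)   # None marks a query word left unaligned
    return alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},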
{
"text": "Given\u00c2, when scoring a given Q/S pair, we restrict our attention to those U , V , M triples that are consistent with\u00c2, which we denote as B(S, Q,\u00c2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "Here, consistency requires that if two words are aligned in\u00c2, then they must appear in the same biphrase (w i , q i ). Once the word alignment is fixed, the final permutation is uniquely determined, so we can safely discard that factor. Then Equation (11) can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Q|S) \u2248 max (U,V,M )\u2208 B(S,Q,\u00c2) P (V |S, U )",
"eq_num": "(13)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "For the sole remaining factor P (V |S, U ), we assume that a segmented queried question V = q 1 , . . . , q k is generated from left to right by transferring each phrase w 1 , . . . , w k independently, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (V |S, U ) = K k=1 P (q k |w k )",
"eq_num": "(14)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "where P (q k |w k ) is a phrase translation probability computed from the parallel corpus, which can be estimated in two ways (Koehn et al., 2003; Och and Ney, 2004) : relative frequency and lexical weighting, and has two format: phrase translation probability and lexical weight probability. In order to find the maximum probability assignment P (Q|S) efficiently, we use a dynamic programming approach, somewhat similar to the monotone decoding algorithm described in the work (Och, 2002) . We consider quantity a j as the maximal probability of the most likely sequence of phrases in S covering the first j words in Q, therefore the probability can be calculated using the following recursion:",
"cite_spans": [
{
"start": 126,
"end": 146,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 147,
"end": 165,
"text": "Och and Ney, 2004)",
"ref_id": "BIBREF23"
},
{
"start": 479,
"end": 490,
"text": "(Och, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "step (1). Initialization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 0 = 1",
"eq_num": "(15)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "step (2). Induction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 j = j <j,q=q j +1 ...q j \u03b1 j P (q|w q )",
"eq_num": "(16)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "step (3). Total:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Q|S) = \u03b1 J",
"eq_num": "(17)"
}
],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},
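{
"text": "A sketch of the monotone dynamic program in Equations (15) to (17). The phrase_prob callable, which returns P(q|w_q) for the best alignment-consistent bi-phrase covering a query phrase (0.0 if none applies), is an assumed stand-in for the phrase table; the maximum phrase length of four matches the CLPTM(l = 4) setting used later.

def phrase_tm_score(query_words, phrase_prob, max_len=4):
    J = len(query_words)
    alpha = [0.0] * (J + 1)
    alpha[0] = 1.0                                   # Equation (15)
    for j in range(1, J + 1):
        for jp in range(max(0, j - max_len), j):     # candidate phrase q_{jp+1} ... q_j
            q = tuple(query_words[jp:j])
            alpha[j] += alpha[jp] * phrase_prob(q)   # Equation (16)
    return alpha[J]                                  # P(Q|S), Equation (17)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Sentence Selection Model",
"sec_num": "4.2.1"
},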
{
"text": "However, directly using the phrase-based TM, computed in Equations (15) to (17), to rank the candidate sentences does not perform well. Inspired by the linear discriminant function (Duda et al., 2001; Collins, 2002; Gao et al., 2005) in pattern classification and IR, we therefore propose a linear ranking model framework for cross-lingual data selection model in which different models are incorporated as features.",
"cite_spans": [
{
"start": 181,
"end": 200,
"text": "(Duda et al., 2001;",
"ref_id": "BIBREF9"
},
{
"start": 201,
"end": 215,
"text": "Collins, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 216,
"end": 233,
"text": "Gao et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "We consider the linear ranking model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score(Q, S) = \u03bb T \u2022 H(Q, S) = N n=1 \u03bb n h n (Q, S)",
"eq_num": "(18)"
}
],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "where the model has a set of N features, and each feature is an arbitrary function that maps (Q|S) to a real value, i.e., H(Q, S) \u2208 R. \u03bb n for n = 1 . . . N is the corresponding parameters of each feature, and we optimize these parameters using the Powell Search algorithm (Press et al., 1992) via crossvalidation. The used features in the linear ranking model are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "\u2022 Phrase translation feature (PT): h P T (Q, S, A) = logP (Q|S), where P (Q|S) is computed using Equations (15) to (17), and P (q k |w k ) is phrase translation probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "\u2022 Inverted phrase translation feature (IPT):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "h IP T (S, Q, A) = logP (S|Q), where P (S|Q) is computed using Equations (15) to (17), and P (w k |q k ) is inverted phrase translation probability. \u2022 Lexical weight feature (LW): h LW (Q, S, A) = logP (Q|S), where P (Q|S) is computed using Equations (15) to (17), and P (q k |w k ) is lexical weight probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},
{
"text": "h ILW (S, Q, A) = logP (S|Q), where P (S|Q) is computed using Equations (15) to (17), and P (w k |q k ) is inverted lexical weight probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Inverted lexical weight feature (ILW):",
"sec_num": null
},
{
"text": "\u2022 Unaligned word penalty feature (UWP):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Inverted lexical weight feature (ILW):",
"sec_num": null
},
{
"text": "h U W P (Q, S, A), which is defined as the ratio between the number of unaligned terms and the total number of terms in Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Inverted lexical weight feature (ILW):",
"sec_num": null
},
{
"text": "\u2022 Word-based translation feature (WT): h W T (Q, S, A) = logP (Q|S), where P (Q|S) is the word-based TM defined by Equations (3) and (7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Inverted lexical weight feature (ILW):",
"sec_num": null
},
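{
"text": "A minimal sketch of the linear ranking model in Equation (18) over the features listed above. The callable-based interface is an assumption; in the paper the weights lambda are tuned with the Powell Search algorithm via cross-validation.

def ranking_score(q, s, alignment, features, weights):
    # Score(Q, S) = sum_n lambda_n * h_n(Q, S), Equation (18)
    return sum(lam * h(q, s, alignment) for lam, h in zip(weights, features))

def unaligned_word_penalty(q, s, alignment):
    # example feature in the spirit of UWP: ratio of unaligned terms to all terms in Q
    return sum(1 for a in alignment if a is None) / max(len(q), 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Candidate Sentences",
"sec_num": "4.2.2"
},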
{
"text": "To improve the efficiency of cross-lingual data selection process, we consider the translation task, the LM training corpus and the parallel corpus in our task are constructed by the key words or important words, and thus construct TM by the key words or important words, which is another key difference between our task and SMT. We identify and eliminate unimportant words, somewhat similar to Q&A retrieval (Lee et al., 2008; Zhou et al., 2011) . Thus, the average number of words (the total word number in Q and S) in cross-lingual sentence selection model would be minimized naturally, and the efficiency of cross-lingual data selection would be improved.",
"cite_spans": [
{
"start": 409,
"end": 427,
"text": "(Lee et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 428,
"end": 446,
"text": "Zhou et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
{
"text": "In this paper, we adopt a variant of TextRank algorithm (Mihalcea and Tarau, 2004) , a graphbased ranking model for key word extraction which achieves state-of-the-art accuracy. It identifies and eliminates unimportant words from the corpus, and assumes that a word is unimportant if it holds a relatively low significance in the corpus. Compared with the traditional approaches, such as TF-IDF, Tex-tRank utilizes the context information of words to assign term weights (Lee et al., 2008) , so it further improves the performance of CLPTM, as we will show in the experiments.",
"cite_spans": [
{
"start": 56,
"end": 82,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 471,
"end": 489,
"text": "(Lee et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
{
"text": "Following the work of (Lee et al., 2008) , the ranking algorithm proceeds as follows. First, all the words in a given document are added as vertices in a graph. Then edges are added between words (vertices) if the words co-occur in a fixed-sized window. The number of co-occurrences becomes the weight of an edge. When the graph is constructed, the score of each vertex is initialized as 1, and the PageRank based ranking algorithm is run on the graph iteratively until convergence. The TextRank score R k w i ,D of a word w i in document D at kth iteration is defined as follows:",
"cite_spans": [
{
"start": 22,
"end": 40,
"text": "(Lee et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R k w i ,D = (1\u2212d)+d\u2022 \u2200j:(i,j)\u2208G e i,j \u2200l:(j,l)\u2208G e j,l R k\u22121 w j ,D",
"eq_num": "(19)"
}
],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
{
"text": "where d is a damping factor usually set as a constan-t 2 , and e i,j is an edge weight between w i and w j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
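{
"text": "An illustrative implementation of the TextRank iteration in Equation (19) on a co-occurrence graph. The window size, iteration limit, and convergence threshold are assumptions; d = 0.85 follows the footnote.

from collections import defaultdict

def textrank_scores(words, window=5, d=0.85, iters=50, tol=1e-4):
    if not words:
        return {}
    # undirected co-occurrence graph: edge weight = number of co-occurrences in the window
    edges = defaultdict(float)
    for i, wi in enumerate(words):
        for j in range(i + 1, min(i + window, len(words))):
            if wi != words[j]:
                edges[(wi, words[j])] += 1.0
                edges[(words[j], wi)] += 1.0
    out_weight = defaultdict(float)
    for (wi, wj), e in edges.items():
        out_weight[wi] += e
    scores = {w: 1.0 for w in set(words)}            # every vertex initialized to 1
    for _ in range(iters):
        new_scores = {}
        for w in scores:
            rank = 0.0
            for (wi, wj), e in edges.items():
                if wj == w and out_weight[wi] > 0:
                    rank += e / out_weight[wi] * scores[wi]
            new_scores[w] = (1.0 - d) + d * rank     # Equation (19)
        delta = max(abs(new_scores[w] - scores[w]) for w in scores)
        scores = new_scores
        if delta < tol:
            break
    return scores                                    # low-scoring words are the EUW candidates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},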
{
"text": "In our experiments, we manually set the proportion to be removed as 25%, that is to say, 75% of total words in the documents would be remained as the important words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Eliminating Unimportant Words (EUW)",
"sec_num": "4.3"
},
{
"text": "We measure the utility of our proposed LM adaptation approach in two ways: (a) comparing reference translations based perplexity of adapted LMs with the generic LM, and (b) comparing SMT performance of adapted LMs with the generic LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We conduct experiments on two Chinese-to-English translation tasks: IWSLT-07 (dialogue domain) and NIST-06 (news domain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Tasks",
"sec_num": "5.1"
},
{
"text": "IWSLT-07. The bilingual training corpus comes from BTEC 3 and CJK 4 corpus, which contains 3.82K sentence pairs with 3.0M/3.1M Chinese/English words. The LM training corpus is from the English side of the parallel data (BTEC, CJK, and CWMT2008 5 ), which consists of 1.34M sentences and 15.2M English words. The test set is IWSLT-07 test set which consists of 489 sentences, and the development set is IWSLT-05 test set which consists of 506 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Tasks",
"sec_num": "5.1"
},
{
"text": "NIST-06. The bilingual training corpus comes from Linguistic Data Consortium (LDC) 6 , which consists of 3.4M sentence pairs with 64M/70M Chinese/English words. The LM training corpus is from the English side of the parallel data as well as the English Gigaword corpus 7 , which consists of 11.3M sentences. The test set is 2006 NIST MT Evaluation test set which consists of 1664 sentences, and the development set is 2005 NIST MT Evaluation test set which consists of 1084 sentences. 2 As in Lee et al. (2008) , a value of 0.85 was used for d. ",
"cite_spans": [
{
"start": 485,
"end": 486,
"text": "2",
"ref_id": null
},
{
"start": 493,
"end": 510,
"text": "Lee et al. (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Tasks",
"sec_num": "5.1"
},
{
"text": "We randomly divide the development set into five subsets and conduct 5-fold cross-validation experiments. In each trial, we tune the parameter \u00b5 in Equation (1) and parameter \u03bb in Equation 18with four of five subsets and then apply it to one remaining subset. The experiments reported below are those averaged over the five trials. We estimate the generic 4-gram LM with the entire LM training corpus as the baseline. Then, we select the top-N sentences which are similar to the development set, estimate the bias 4-gram LMs (with n-gram cutoffs tuned as above) with these selected sentences, and interpolate with the generic 4-gram LM as the adapted LMs. All the LMs are estimated by the SRILM toolkit (Stolcke, 2002) . Perplexity is a metric of LM performance, and the lower perplexity value indicates the better performance. Therefore, we estimate the perplexity of adapted LMs according to English reference translations. Figure 1 shows the perplexity of adapted LMs vs. the size of selected data. In this paper, we choose TF-IDF as the foundation of our solution since TF-IDF has gained the state-of-the-art performance for LM adaptation Hildebrand et al., 2005; Kim, 2005; Foster and Kuhn, 2007) . CLS refers to the cross-lingual similarity of (Ananthakrishnan et al., 2011a), and CLS s is our proposed improved algorithm on CLS with optimization measure like TF-IDF. CLWTM(\u03b2 = 0) refers to Snover et al. (2008) , which is the un-smooth ver- sion of our proposed CLWTM in the document state. CLPTM(l = 4) is our proposed CLPTM with a maximum phrase length of four, and we score the target sentences by the highest scoring Q/S pair. The results in Figure 1 indicate that English reference translations based perplexity of adapted LMs decreases consistently with increase of the size of selected top-N sentences, and increases consistently after a certain size in all approaches. Therefore, proper size of similar sentences with the translation task makes the adapted LM perform well, but if too many noisy data are taken into the selected sentences, the performance becomes worse. Similar observations have been done by Axelrod et al., 2011) . Furthermore, it is comforting that our approaches (CLWTM and CLPTM(l = 4)) performs better and are more stable than other approaches.",
"cite_spans": [
{
"start": 703,
"end": 718,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF27"
},
{
"start": 1143,
"end": 1167,
"text": "Hildebrand et al., 2005;",
"ref_id": "BIBREF14"
},
{
"start": 1168,
"end": 1178,
"text": "Kim, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 1179,
"end": 1201,
"text": "Foster and Kuhn, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 1411,
"end": 1417,
"text": "(2008)",
"ref_id": null
},
{
"start": 2125,
"end": 2146,
"text": "Axelrod et al., 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 926,
"end": 934,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1653,
"end": 1661,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Perplexity Analysis",
"sec_num": "5.2"
},
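{
"text": "A small sketch of the perplexity measurement used above: the perplexity of an adapted LM on the reference translations, computed from per-word log probabilities. The lm_logprob interface is an assumption; the paper uses the SRILM toolkit for the actual LM estimation and evaluation.

def perplexity(lm_logprob, reference_sentences):
    # lm_logprob(word, history) returns log10 P(word | history) under the adapted LM
    total_logprob, n_words = 0.0, 0
    for sentence in reference_sentences:
        history = []
        for word in sentence:
            total_logprob += lm_logprob(word, tuple(history[-3:]))  # 4-gram history
            history.append(word)
            n_words += 1
    return 10 ** (-total_logprob / n_words)   # lower perplexity = better LM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity Analysis",
"sec_num": "5.2"
},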
{
"text": "According to the perplexity results in Figure 1 , we select the top 8K sentences on IWSLT-07 and top 16K sentences on NIST-06 which are similar to the test set for adapting LM, respectively. Table 1 shows English reference translations based perplexity of adapted LMs on two test sets. Our approaches have significantly reduction in perplexity compared with other approaches, and the results indicate that adapted LMs are significantly better predictors of the corresponding translation task at hand than the generic LM. We use these adapted LMs for next translation experiments to show the detailed performance of selected training data for LM adaptation.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Perplexity Analysis",
"sec_num": "5.2"
},
{
"text": "We carry out translation experiments on the test set by hierarchical phrase-based (HPB) SMT (Chiang, 2005 and 2007) system to demonstrate the utility of LM adaptation on improving SMT performance by BLEU score (Papineni et al., 2002) . The generic LM and adapted LMs are estimated as above in perplexity analysis experiments. We use minimum error rate training (Och, 2003) to tune the feature weights of HPB for maximum BLEU score on the development set with serval groups of different start weights. Table 2 shows the main translation results on two test sets, and the improvements are statistically significant at the 95% confidence interval with respect to the baseline. From the comparison results, we get some clear trends:",
"cite_spans": [
{
"start": 92,
"end": 109,
"text": "(Chiang, 2005 and",
"ref_id": "BIBREF6"
},
{
"start": 110,
"end": 115,
"text": "2007)",
"ref_id": "BIBREF7"
},
{
"start": 210,
"end": 233,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF24"
},
{
"start": 361,
"end": 372,
"text": "(Och, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "5.3"
},
{
"text": "(1) Cross-lingual data selection model outperforms the traditional approaches which utilize the first pass translation hypotheses (row 4 vs. row2; row 11 vs. row 9), but the detailed impact of noisy data in the translation hypotheses on data selection will be shown in the next section (section 5.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "5.3"
},
{
"text": "(2) CLWTM significantly outperforms CLS s (row 6 vs. row 4; row 13 vs. row 11), we suspect that word-based TM makes more accurate cross-lingual data selection model than single cross-lingual projection (Ananthakrishnan et al., 2011a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "5.3"
},
{
"text": "(3) Compared with (Snover et al., 2008) , adding the smoothing mechanism in the sentence state for CLWTM significantly improves the performance (row 6 vs. row 5; row 13 vs. row 12).",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Snover et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "5.3"
},
{
"text": "(4) Phrase-based TM (CLPTM) significantly outperforms the state-of-the-art approaches based on bag-of-words models and word-based TM (row 7 vs. row 2, row 4, row 5 and row 6; row 14 vs. row 9, row 11, row 12 and row 13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Experiments",
"sec_num": "5.3"
},
{
"text": "The experiment results in Table 2 indicate the second pass translation hypotheses (row 2 and row 9) made by TF-IDF are better than the first pass translation hypotheses (row 1 and row 8), so we consider that these translations have less noisy data. Thus, they were considered as the new translation hypotheses (the second pass) to select the similar sentences for LM adaptation by TF-IDF. Table 3 shows the impact of noisy data in the translation hypotheses on the performance of adapted LMs. The observed improvement suggests that better initial translations which have less noisy data lead to better adapted LMs, and thereby better second iteration translations. Therefore, it is advisable to use cross-lingual data selection for LM adaptation in SMT, which can address the problem of noisy proliferation.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 389,
"end": 396,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Impact of Noisy Data in the Translation Hypotheses",
"sec_num": "5.4"
},
{
"text": "The results in Table 4 show that longer phrases do yield some visible improvement up to the maximum length of four. This may suggest that some properties captured by longer phrases are also captured by other features. The performances when the phrase length is 1 are better than that of single word-based TM (row 6 and row 13 in Table 2 ), this suspect that the features in our linear ranking model are useful. However, it will be instructive to explore the methods of preserving the improvement generated by longer phrase when more features are incorporated in the future work. Table 5 shows the results of EUW by TextRank algorithm on the performance of CLTM for LM adaptation. Initial represents that we do not eliminate unimportant words. Average number represents the average number of words (the total word number in Q and S) in cross-lingual data selection model. The average number is reduced when unimportant words are eliminated, from 19 to 12 on IWSLT-07 and from 37 to 24 on NIST-06, respectively. This makes the cross-lingual data selection process become more efficient. In CLWTM, the performance with EUW is basically the same with that of the initial state; but in CLPTM, EUW outperforms the initial state because TextRank algorithm utilizes the context infor- mation of words when assigning term weights, thus makeing CLPTM play its advantage of capturing the contextual information.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 329,
"end": 336,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 579,
"end": 586,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Impact of Phrase Length",
"sec_num": "5.5"
},
{
"text": "In this paper, we propose a novel TM based crosslingual data selection model for LM adaptation in SMT, from word models to phrase models, and aims to find the LM training corpus which are similar to the translation task at hand. Unlike the general TM in SMT, we explore the use of TextRank algorithm to identify and eliminate unimportant words for corpus preprocessing, and construct TM by important words. Compared with the traditional approaches which utilize the first pass translation hypotheses, cross-lingual data selection avoids the problem of noisy proliferation. Furthermore, phrase T-M based cross-lingual data selection is more effective than the traditional approaches based on bagof-words models and word-based TM, because it captures contextual information in modeling the selection of phrase as a whole. Large-scale experiments are conducted on LM perplexity and SMT performance, and the results demonstrate that our approach solves the two aforementioned disadvantages and significantly outperforms the state-of-theart methods for LM adaptation. There are some ways in which this research could be continued in the future. First, we will utilize our approach to mine large-scale corpora by distributed infrastructure system, and investigate the use of our approach for other domains, such as speech translation system. Second, the significant improvement of LM adaptation based on cross-lingual data selection is exciting, so it will be instructive to explore other knowledge based cross-lingual data selection for LM adaptation, such as latent semantic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "As inXu et al. (2001), a value of 0.3 was used for \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by 863 program in China (No. 2011AA01A207). We thank Guangyou Zhou for his helpful discussions and suggestions. We also thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On-line language model biasing for statistical machine translation",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Sankaranarayanan Ananthakrishnan",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Natarajan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "445--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sankaranarayanan Ananthakrishnan, Rohit Prasad, and Prem Natarajan. 2011a. On-line language model bias- ing for statistical machine translation. In Proceedings of ACL, pages 445-449.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Online language model biasing for multi-pass automatic speech recognition",
"authors": [
{
"first": "Stavros",
"middle": [],
"last": "Sankaranarayanan Ananthakrishnan",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Natarajan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "621--624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sankaranarayanan Ananthakrishnan, Stavros Tsakalidis, Rohit Prasad, and Prem Natarajan. 2011b. On- line language model biasing for multi-pass automat- ic speech recognition. In Proceedings of INTER- SPEECH, pages 621-624.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Domain adaptation via pseudo in-domain data selection",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "355--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selec- tion. In Proceedings of EMNLP, pages 355-362.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Information retrieval as statistical translation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of SI-GIR",
"volume": "",
"issue": "",
"pages": "222--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Berger and John Lafferty. 1999. Information re- trieval as statistical translation. In Proceedings of SI- GIR, pages 222-229.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ashok",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP, pages 858-867.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematic- s of statistical machine translation: parameter estima- tion. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263-270.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden markov models: theory and experiments with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: theory and experi- ments with the perceptron algorithm. In Proceedings of EMNLP, pages 1-8.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pattern classification",
"authors": [
{
"first": "Richard",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
},
{
"first": "David",
"middle": [
"G"
],
"last": "Stork",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard O. Duda, Peter E. Hart, and David G. Stork. 2001. Pattern classification. John Wiley & Sons, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language model adaptation for statistical machine translation based on information retrieval",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "327--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Eck, Stephan Vogel, and Alex Waibel. 2004. Language model adaptation for statistical machine translation based on information retrieval. In Proceed- ings of LREC, pages 327-330.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mixture-model adaptation for SMT",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster and Roland Kuhn. 2007. Mixture-model adaptation for SMT. In Proceedings of ACL, pages 128-135.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linear discriminative model for information retrieval",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Haoliang",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Xinsong",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "290--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Haoliang Qi, Xinsong Xia, and Jian-Yun Nie. 2005. Linear discriminative model for informa- tion retrieval. In Proceedings of SIGIR, pages 290- 297.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Clickthrough-based translation models for web search: from word models to phrase models",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "1139--1148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Xiaodong He, and Jian-Yun Nie. 2010. Clickthrough-based translation models for web search: from word models to phrase models. In Proceedings of CIKM, pages 1139-1148.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adaptation of the translation model for statistical machine translation based information retrieval",
"authors": [
{
"first": "Almut Silja",
"middle": [],
"last": "Hildebrand",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EAMT",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Almut Silja Hildebrand, Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Adaptation of the transla- tion model for statistical machine translation based in- formation retrieval. In Proceedings of EAMT, pages 133-142.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Language model adaptation for automatic speech recognition and statistical machine translation",
"authors": [
{
"first": "Woosung",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woosung Kim. 2005. Language model adaptation for automatic speech recognition and statistical machine translation. Ph.D. thesis, The Johns Hopkins Univer- sity.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of NAACL, pages 48-54.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bridging lexical gaps between queries and questions on large online Q&A collections with compact translation models",
"authors": [
{
"first": "Jung-Tae",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sang-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Young-In",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Hae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "410--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jung-Tae Lee, Sang-Bum Kim, Young-In Song, and Hae- Chang Rim. 2008. Bridging lexical gaps between queries and questions on large online Q&A collection- s with compact translation models. In Proceedings of EMNLP, pages 410-418.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Resampling auxiliary data for language model adaptation in machine translation for speech",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Masskey",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Sethy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ICAS-SP",
"volume": "",
"issue": "",
"pages": "4817--4820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Masskey and Abhinav Sethy. 2010. Resampling auxiliary data for language model adaptation in ma- chine translation for speech. In Proceedings of ICAS- SP, pages 4817-4820.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TextRank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bring- ing order into text. In Proceedings of EMNLP, pages 404-411.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Intelligent selection of language model training data",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "220--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceed- ings of ACL, pages 220-224.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statistical mahcine translation: from single word models to alignment templates",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2002. Statistical mahcine transla- tion: from single word models to alignment templates. Ph.D thesis, RWTH Aachen.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160-167.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignmen- t template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Weijing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- jing Zhu. 2002. BLEU: A method for automatic eval- uation of machine translation. In Proceedings of ACL, pages 311-318.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Numerical Recipes in C",
"authors": [
{
"first": "William",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "Saul",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "William",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William H. Press, Saul A. Teukolsky, William T. Vetter- ling, and Brian P. Flannery. 1992. Numerical Recipes in C. Cambridge University Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language and translation model adaptation using comparable corpora",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "857--866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, and Richard Marcu. 2008. Language and translation model adaptation us- ing comparable corpora. In Proceedings of EMNLP, pages 857-866.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SRILM -An extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -An extensible lan- guage modeling toolkit. In Proceedings of ICSLP, pages 901-904.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Unsupervised language model adaptation using latent semantic marginals",
"authors": [
{
"first": "Yik-Cheung",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ICSLP",
"volume": "",
"issue": "",
"pages": "2206--2209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yik-Cheung Tam and Tanja Schultz. 2006. Unsuper- vised language model adaptation using latent seman- tic marginals. In Proceedings of ICSLP, pages 2206- 2209.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bilingual-LSA based LM adaptation for spoken language translation",
"authors": [
{
"first": "Yik-Cheung",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "520--527",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yik-Cheung Tam, Ian Lane, and Tanja Schultz. 2007. Bilingual-LSA based LM adaptation for spoken lan- guage translation. In Proceedings of ACL, pages 520- 527.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bilingual-LSA based adaptation for statistical machine translation",
"authors": [
{
"first": "Yik-Cheung",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2008,
"venue": "Machine Translation",
"volume": "21",
"issue": "",
"pages": "187--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yik-Cheung Tam, Ian Lane, and Tanja Schultz. 2008. Bilingual-LSA based adaptation for statistical machine translation. Machine Translation, 21:187-207.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cross lingual adaptation: an experiment on sentiment classifications",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "258--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Wei and Christopher Pal. 2010. Cross lingual adap- tation: an experiment on sentiment classifications. In Proceedings of ACL, pages 258-262.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Evaluating a probabilistic model for cross-lingual information retrieval",
"authors": [
{
"first": "Jinxi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ralpha",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Chanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "105--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinxi Xu, Ralpha Weischedel, and Chanh Nguyen. 2001. Evaluating a probabilistic model for cross-lingual in- formation retrieval. In Proceedings of SIGIR, pages 105-110.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Retrieval models for question and answer archives",
"authors": [
{
"first": "Xiaobing",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Jiwoon",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "475--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and answer archives. In Proceedings of SIGIR, pages 475-482.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Language model adaptation for statistical machine translation with structured query models",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "411--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statistical machine translation with structured query models. In Proceed- ings of COLING, pages 411-417.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Phrase-based translation model for question retrieval in community question answer archives",
"authors": [
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "653--662",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangyou Zhou, Li Cai, Jun Zhao, and Kang Liu. 2011. Phrase-based translation model for question retrieval in community question answer archives. In Proceed- ings of ACL, pages 653-662.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "English reference translations based perplexity of adapted LMs vs. the size of selected training data with different approaches on two development sets.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: English reference translations based perplexi-</td></tr><tr><td>ty of adapted LMs with different approaches on two test</td></tr><tr><td>sets, with the top 8K sentences on IWSLT-07 and top 16K</td></tr><tr><td>sentences on NIST-06, respectively.</td></tr></table>"
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: Comparison of SMT performance (p &lt; 0.05)</td></tr><tr><td>with different approaches for LM adaptation on two test</td></tr><tr><td>sets.</td></tr></table>"
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "The impact of noisy data in the translation hypotheses on the performance of LM adaptation.",
"num": null,
"content": "<table/>"
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "The impact of phrase length in CLPTM on the performance of LM adaptation, and the maximum phrase length is four.",
"num": null,
"content": "<table/>"
},
"TABREF9": {
"html": null,
"type_str": "table",
"text": "The impact of eliminating unimportant words by TextRank algorithm on the performance of CLTM for LM adaptation.",
"num": null,
"content": "<table/>"
}
}
}
}