{
"paper_id": "2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:18:49.375087Z"
},
"title": "The ITC-irst Statistical Machine Translation System for IWSLT-2004",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ITC-irst",
"location": {
"addrLine": "via Sommarive 18",
"settlement": "Povo",
"region": "TN",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ITC-irst",
"location": {
"addrLine": "via Sommarive 18",
"settlement": "Povo",
"region": "TN",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ITC-irst",
"location": {
"addrLine": "via Sommarive 18",
"settlement": "Povo",
"region": "TN",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ITC-irst",
"location": {
"addrLine": "via Sommarive 18",
"settlement": "Povo",
"region": "TN",
"country": "Italy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Focus of this paper is the system for statistical machine translation developed at ITC-irst. It has been employed in the evaluation campaign of the International Workshop on Spoken Language Translation 2004 in all the three data set conditions of the Chinese-English track. Both the statistical model underlying the system and the system architecture are presented. Moreover, details are given on how the submitted runs have been produced.",
"pdf_parse": {
"paper_id": "2004",
"_pdf_hash": "",
"abstract": [
{
"text": "Focus of this paper is the system for statistical machine translation developed at ITC-irst. It has been employed in the evaluation campaign of the International Workshop on Spoken Language Translation 2004 in all the three data set conditions of the Chinese-English track. Both the statistical model underlying the system and the system architecture are presented. Moreover, details are given on how the submitted runs have been produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper reports on the participation of ITC-irst to the evaluation campaign organized by the International Workshop on Spoken Language Translation (IWSLT) 2004. The Statistical Machine Translation (SMT) system developed at ITC-irst was applied to all the three data set conditions of the Chinese-English track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The ITC-irst SMT system implements an extension of the IBM Model 4 as a log-linear interpolation of statistical models, which estimate probabilities in terms of phrases. The use of phrases rather than words has recently emerged as a mean to cope with the limited context that Model 4 exploits to guess word translation (lexicon model) and word positions (distortion model) [1, 2, 3, 4, 5, 6, 7] .",
"cite_spans": [
{
"start": 373,
"end": 376,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 377,
"end": 379,
"text": "2,",
"ref_id": "BIBREF1"
},
{
"start": 380,
"end": 382,
"text": "3,",
"ref_id": "BIBREF2"
},
{
"start": 383,
"end": 385,
"text": "4,",
"ref_id": "BIBREF3"
},
{
"start": 386,
"end": 388,
"text": "5,",
"ref_id": "BIBREF4"
},
{
"start": 389,
"end": 391,
"text": "6,",
"ref_id": "BIBREF5"
},
{
"start": 392,
"end": 394,
"text": "7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "While parameters of the models are estimated exploiting statistics of phrase pairs extracted from word alignments, the weights of the interpolation are optimized through a training procedure which directly aims at minimizing translation errors on a development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Decoding is implemented in terms of a dynamic programming algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The paper is organized as follows. Next section details the statistical model underlying the system. Sections 3 and 4 briefly describe the search and the segmentation algorithms, respectively. Section 5 gives an overview of the system architecture. Finally, in Section 6 experimental set-ups of the evaluation campaign runs and results are presented and discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The advantages of the statistical translation approach are advocated by the many papers on the subject, which followed its first introduction. Of course, there have been also attempts to overcome some of its shortcomings, e.g. the use of limited context within the foreign string to guess word translations and word positions. Recently, several research labs have reported improvements in translation accuracy by shifting from word-to phrase-based SMT. In particular, statistical phrasebased translation models have recently emerged, which rely on statistics of phrase pairs. Phrase pairs statistics can be automatically extracted from word-aligned parallel corpora [5] . In the following subsections, we introduce the SMT framework and the Model 4. Then, we briefly describe a method for extracting phrase pairs. Finally, a novel phrase-based translation framework is presented which is tightly related to Model 4.",
"cite_spans": [
{
"start": 666,
"end": 669,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2."
},
{
"text": "As originally proposed by [8] , the most likely translation of a foreign source sentence f into English is obtained by searching for the sentence with the highest posterior probability:",
"cite_spans": [
{
"start": 26,
"end": 29,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e * = arg max e Pr(e | f )",
"eq_num": "(1)"
}
],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "Usually, the hidden variable a is introduced:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e^* = \\arg\\max_e \\sum_a \\Pr(e, a \\mid f)",
"eq_num": "(2)"
}
],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "which represents an alignment from source to target positions. The framework of maximum entropy [9] provides a mean to directly estimate the posterior probability Pr(e, a | f ). It is determined through suitable real valued feature functions h i (e, f , a), i = 1 . . . M , and takes the parametric form:",
"cite_spans": [
{
"start": 96,
"end": 99,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03bb (e, a | f ) = exp{ i \u03bb i h i (e, f , a)} e,a exp{ i \u03bb i h i (e, f , a)}",
"eq_num": "(3)"
}
],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "The maximum entropy solution corresponds to values \u03bb i that maximize the log-likelihood over a training sample T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb * = arg max \u03bb (e,f ,a)\u2208T log p \u03bb (e, a | f )",
"eq_num": "(4)"
}
],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "Unfortunately, a closed-form solution of (4) does not exist. An iterative procedure converging to the solution was proposed by [10] ; an improved version is given in [11] . If the following feature functions are chosen [12] : Pr(f , a | e) \u03bb 2",
"cite_spans": [
{
"start": 127,
"end": 131,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 166,
"end": 170,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 219,
"end": 223,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "h 1 (e,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "where \u03bb i 's represent scaling factors of factors. In eq. 5, English strings e are ranked on the basis of the weighted product of the language model probability Pr(e), usually computed through an n-gram language model [13] , and the marginal of the translation probability Pr(f , a | e).",
"cite_spans": [
{
"start": 218,
"end": 222,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "In [8, 14] six translation models (Model 1 to 6) of increasing complexity are introduced. These alignment models are usually estimated through the Expectation Maximization algorithm [15] , or approximations of it, by exploiting a suitable parallel corpus of translation pairs. For computational reasons, the optimal translation of f is computed with the approximated search criterion:",
"cite_spans": [
{
"start": 3,
"end": 6,
"text": "[8,",
"ref_id": "BIBREF7"
},
{
"start": 7,
"end": 10,
"text": "14]",
"ref_id": "BIBREF13"
},
{
"start": 182,
"end": 186,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e^* \\approx \\arg\\max_e \\Pr(e)^{\\lambda_1} \\max_a \\Pr(f, a \\mid e)^{\\lambda_2}",
"eq_num": "(6)"
}
],
"section": "Log-linear Model",
"sec_num": "2.1."
},
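The following minimal sketch (not the actual ITC-irst decoder) illustrates how the search criterion of eq. (6) ranks candidate translations by a weighted combination of the language model probability and the best-alignment translation probability, computed in the log domain; the candidate strings, probabilities, and weights below are invented for illustration.

```python
# Sketch of the weighted ranking of eq. (6): Pr(e)^lambda1 * max_a Pr(f,a|e)^lambda2.
# All probabilities and weights are toy values, not trained parameters.
import math

def score(lm_prob, best_alignment_prob, lambda_lm, lambda_tm):
    """Log-domain version of Pr(e)^lambda1 * max_a Pr(f,a|e)^lambda2."""
    return lambda_lm * math.log(lm_prob) + lambda_tm * math.log(best_alignment_prob)

candidates = {
    "thank you very much": (1e-4, 3e-6),   # (Pr(e), max_a Pr(f,a|e)), toy values
    "thanks a lot":        (5e-5, 8e-6),
}
lambda_lm, lambda_tm = 1.0, 0.7            # scaling factors (tuned on dev data)

best = max(candidates, key=lambda e: score(*candidates[e], lambda_lm, lambda_tm))
print(best)
```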
{
"text": "Given the string e = e 1 , . . . , e l , a string f and an alignment a are generated as follows: (i) a non-negative integer \u03c6 i , called fertility, is generated for each word e i and for the null word e 0 ; (ii) for each e i , a list \u03c4 i , called tablet, of \u03c6 i source words and a list \u03c0 i , called permutation, of \u03c6 i source positions are generated; (iii) finally, if the generated permutations cover all the available source positions exactly once then the process succeeds, otherwise it fails. Fertilities fix the number of source words to be aligned to each target word, and the total length of the foreign string. Moreover, as permutations of Model 4 are constrained to assign positions in ascending order, it can be shown that if the process succeeds in generating a triple (\u03c6 l 0 , \u03c4 l 0 , \u03c0 l 0 ), then there is exactly one corresponding pair (f , a), and viceversa. This property justifies the following decomposition of Model 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
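As a rough illustration of steps (i) and (ii), the sketch below scores a toy sentence pair under hypothetical fertility and lexicon tables; the tablets are fixed by hand, the distortion factor of eq. (14) is omitted for brevity, and none of the numbers come from the paper.

```python
# Illustrative sketch of the Model 4 generative story (fertility -> tablets).
# All probability tables are invented; the distortion term is left out.
from math import prod  # Python 3.8+

e = ["NULL", "thank", "you"]                                 # e_0 is the null word
tablets = {"NULL": [], "thank": ["xie"], "you": ["xie"]}     # hidden in the real model
fertility = {"NULL": {0: 0.9}, "thank": {1: 0.6}, "you": {1: 0.7}}
lexicon = {("xie", "thank"): 0.5, ("xie", "you"): 0.2}

p = 1.0
for word in e:
    phi = len(tablets[word])
    p *= fertility[word].get(phi, 0.0)                               # step (i)
    p *= prod(lexicon.get((f, word), 1e-6) for f in tablets[word])   # step (ii)
print(p)   # fertility * lexicon contribution, without the distortion term
```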
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (f , a | e) = p(\u03c6 l 0 , \u03c4 l 0 , \u03c0 l 0 | e l 0 ) (7) = p(\u03c6, \u03c4 , \u03c0 | e)",
"eq_num": "(8)"
}
],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= p(\u03c6 | e) \u2022 p(\u03c4 | \u03c6, e) \u2022 p(\u03c0 | \u03c6, \u03c4 , e) (9) where p(\u03c6 | e) = l i=1 p(\u03c6 i | e i ) p(\u03c6 0 | l i=1 \u03c6 i ) (10) p(\u03c4 | \u03c6, e) = l i=0 p(\u03c4 i | \u03c6 i , e i )",
"eq_num": "(11)"
}
],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u03c0 | \u03c6, \u03c4 , e) = 1 \u03c6 0 ! l i=1 p(\u03c0 i | \u03c6 i ,\u03c0 \u03c1(i) )",
"eq_num": "(12)"
}
],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u03c4 i | \u03c6 i , e i ) = \u03c6 i k=1 p(\u03c4 i,k | e i )",
"eq_num": "(13)"
}
],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "p(\u03c0 i | \u03c6 i ,\u03c0 \u03c1(i) ) = p =1 (\u03c0 i,1 \u2212\u03c0 \u03c1(i) ) \u00d7 \u00d7 \u03c6 i k=2 p >1 (\u03c0 i,k \u2212 \u03c0 i,k\u22121 ) (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "In eq. 9, the first factor is the fertility model p(\u03c6 | e)see eq. (10) -and represents step (i): fertilities of e 1 , . . . , e l are generated for each word according to the distributions p(\u03c6 i | e i ), while the fertility of e 0 is generated through a Binomial distribution p(\u03c6 | m ). The remaining factors, the lexicon model p(\u03c4 | \u03c6, e) -see eq. 11-and the distortion model p(\u03c0 | \u03c6, \u03c4 , e) -see eq. (12) -correspond to step (ii): tablets for cepts 1 are generated according to eq. (13), and permutations \u03c0 i , with the exception of \u03c0 0 , are generated according to eq. (14) . The latter relies on two probability tables:",
"cite_spans": [
{
"start": 451,
"end": 452,
"text": "1",
"ref_id": "BIBREF0"
},
{
"start": 572,
"end": 576,
"text": "(14)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "p =1 (\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": ", which considers the distance between the first generated position and the center 2 of the most recent cept; p >1 (\u2022), which considers the distance between any two consecutively assigned positions of the permutation. Finally, positions for e 0 are generated at random over the residual \u03c6 0 positions, with probability 1 \u03c6 0 ! . It is worth remarking that the here considered distortion model omits some dependencies specified in [8] .",
"cite_spans": [
{
"start": 430,
"end": 433,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model 4",
"sec_num": "2.2."
},
{
"text": "The here used method exploits the so called union alignments between sentence pairs of the training corpus [5] . Given strings f = f 1 , . . . , f m and e = e 1 , . . . , e l , a direct alignment a (from f to e) and an inverted alignment b (from e to f ), the union alignment is defined as:",
"cite_spans": [
{
"start": 107,
"end": 110,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = {(j, i) : a j = i \u2228 b i = j}",
"eq_num": "(15)"
}
],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
{
"text": "It is easy to verify that while a and b are many-to-one alignments, c is a many-to-many alignment. Moreover, the union alignment does not necessarily cover all source and target positions. Given a source-target sentence pair (f , e) and a union alignment c, let J and I denote two closed intervals within the positions of f and e, respectively. We say that I and J form a phrase pair 3 under c if and only if c aligns all source positions J with target positions contained in I, and all target positions I with source positions contained in J.",
"cite_spans": [
{
"start": 384,
"end": 385,
"text": "3",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
{
"text": "Given a parallel corpus provided with Viterbi alignments in both directions, we can compute all phrase pairs occurring in its sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = {(f p ,\u1ebd p ) : p = 1, . . . , P }",
"eq_num": "(16)"
}
],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
{
"text": "Practically, in order to limit the number of phrases, the maximum length of I and J is limited to some value k. Note that the set P also includes phrase pairs with one single target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-pair Extraction",
"sec_num": "2.3."
},
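The consistency criterion above can be turned into a simple (if brute-force) extraction routine. The function below is an illustrative sketch, not the ITC-irst implementation; the union alignment, sentence lengths, and limit k are toy values.

```python
# Sketch of phrase-pair extraction from a union alignment (Section 2.3):
# an interval pair (J, I) is kept only if no alignment link leaves the box.
def extract_phrase_pairs(union, len_f, len_e, k=4):
    """union: set of (j, i) links, 1-based source/target positions."""
    pairs = []
    for j1 in range(1, len_f + 1):
        for j2 in range(j1, min(j1 + k, len_f + 1)):
            for i1 in range(1, len_e + 1):
                for i2 in range(i1, min(i1 + k, len_e + 1)):
                    inside = [(j, i) for (j, i) in union
                              if j1 <= j <= j2 and i1 <= i <= i2]
                    consistent = inside and all(
                        i1 <= i <= i2 for (j, i) in union if j1 <= j <= j2) and all(
                        j1 <= j <= j2 for (j, i) in union if i1 <= i <= i2)
                    if consistent:
                        pairs.append(((j1, j2), (i1, i2)))
    return pairs

# toy example: f has 3 words, e has 3 words
union = {(1, 1), (2, 2), (3, 2), (3, 3)}
print(extract_phrase_pairs(union, 3, 3, k=2))
```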
{
"text": "We assume that the target vocabulary is augmented by including all target phrases in P. Hence, the search criterion (6) is modified as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Model",
"sec_num": "2.4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e * = arg max e Pr(\u1ebd) \u03bb1 max a p \u03b8 (f , a |\u1ebd) \u03bb2",
"eq_num": "(17)"
}
],
"section": "Phrase-based Model",
"sec_num": "2.4."
},
{
"text": "where\u1ebd ranges over all strings of the augmented target vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Model",
"sec_num": "2.4."
},
{
"text": "Our phrase-based language model Pr(\u1ebd) is a simple extension of a n-gram word-based language model. The phrase model exploits a counting probability measure defined on the phrase sample P. Hence, the relative frequency of a given phrase pair (f ,\u1ebd) in the sample P is interpreted as the probability of the phrase pair, given the training data. Basic probabilities of the translation model relying on statistics over P are summarized in Table 1 .f (\u03c4 ) trivially transforms \u03c4 into a phrase. The implicit assumption that the tablet must correspond to a source phrase, i.e. it must cover consecutive positions, is made explicit by the distortion model. In fact, it assigns the first tablet position the same probability given by the Model 4 distortion model, but constrains successive positions to be adjacent.",
"cite_spans": [],
"ref_spans": [
{
"start": 435,
"end": 442,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phrase-based Model",
"sec_num": "2.4."
},
{
"text": "Given the source sentence f = f m 1 , the optimal translatio\u00f1 e * is searched through the approximate criterion (17) . According to the dynamic programming paradigm, the optimal solution can be computed through a recursive formula which expands previously computed partial theories, and recombines the new expanded theories. A theory can be described by its state, which only includes information needed Table 1 : Phrase-based model: fertility, lexicon, and distortion probabilities.",
"cite_spans": [
{
"start": 112,
"end": 116,
"text": "(17)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "N (f , \u03c6,\u1ebd) = P p=1 \u03b4(f p =f ) \u03b4(\u1ebd p =\u1ebd) \u03b4(|f p | = \u03c6) N (\u03c6,\u1ebd) = f N (f , \u03c6,\u1ebd) N (\u1ebd) = \u03c6 N (\u03c6,\u1ebd) Fertility Model:p S (\u03c6 |\u1ebd) = N (\u03c6,\u1ebd) N (\u1ebd) Lexicon Model:p S (\u03c4 | \u03c6,\u1ebd) = N (f (\u03c4 ), \u03c6,\u1ebd) N (\u03c6,\u1ebd) Distortion Model:p S (\u03c0 | \u03c6,\u03c0) = p =1 (\u03c0 1 \u2212\u03c0)\u00d7 \u03c6 k=2 \u03b4(\u03c0 k \u2212 \u03c0 k\u22121 = 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
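The relative-frequency estimates of Table 1 amount to counting phrase pairs; a minimal sketch with an invented phrase-pair sample follows.

```python
# Sketch of the counting-based fertility and lexicon estimates of Table 1,
# computed from a toy sample of extracted phrase pairs P = {(f_p, e_p)}.
from collections import Counter

phrase_pairs = [("xie xie", "thank you"), ("xie xie", "thanks"),
                ("ni hao", "hello"), ("xie xie", "thank you")]

n_phi_e   = Counter((len(f.split()), e) for f, e in phrase_pairs)       # N(phi, e~)
n_e       = Counter(e for _, e in phrase_pairs)                         # N(e~)
n_f_phi_e = Counter((f, len(f.split()), e) for f, e in phrase_pairs)    # N(f~, phi, e~)

def p_fertility(phi, e):            # p_S(phi | e~)
    return n_phi_e[(phi, e)] / n_e[e]

def p_lexicon(f, phi, e):           # p_S(tau | phi, e~), with f~ = f~(tau)
    return n_f_phi_e[(f, phi, e)] / n_phi_e[(phi, e)]

print(p_fertility(2, "thank you"), p_lexicon("xie xie", 2, "thank you"))
```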
{
"text": "for its expansion; two partial theories sharing the same state are identical (undistinguishable) for the sake of expansion, i.e. they should be recombined. More formally, let Q i (s) be the best score among all partial theories of length i sharing the state s, pred(s) the set of partial theories which are expanded in a theory of state s, and G(s , s) the cost for expanding a partial theory of state s into one of state s. The score Q * of the optimal solutio\u00f1 e * can be computed by explicitly searching among optimal solutions fixing the length i and the state s, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q * = max e Pr(\u1ebd) max a p \u03b8 (f , a |\u1ebd)",
"eq_num": "(18)"
}
],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= max i max e i 1 Pr(\u1ebd i 1 ) max a p \u03b8 (f , a |\u1ebd i 1 ) (19) = max i,s Q i (s)",
"eq_num": "(20)"
}
],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "Henceforth, the score Q i (s) can be defined recursively with respect to the length i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "Q i (s) = max th \u2208pred(s) Q i\u22121 (s(th )) * G(s(th ), s) (21)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "with a suitable initialization for Q 0 (s). Given the model described in the previous section, the state s(th) = (C,\u03c0,\u1ebd ,\u1ebd) of a partial theory th includes the coverage set, C, the center of the last cept,\u03c0, and the last two output phrases,\u1ebd and\u1ebd. A theory of state s = (C,\u03c0,\u1ebd ,\u1ebd) can be only generated from one of state s = (C \\ \u03c0,\u03c0 ,\u1ebd ,\u1ebd ), i.e. a new output phrase\u1ebd is added with fertility \u03c6 = |\u03c0|, and \u03c6 positions are covered. Notice that if \u03c6 = 0 the center remains unaltered, i.e.\u03c0 =\u03c0. The possible initial states s = (\u03c0 0 ,\u03c0 0 , , ) correspond to partial theories with no target phrases and with all \u03c6 0 positions in C = \u03c0 0 covered by the null phrase\u1ebd 0 . Notice that\u03c0 0 is not used in the computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
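A small sketch of the state representation and recombination step described above; the State fields mirror (C, π̄, ẽ', ẽ), the names are illustrative, and the scores are made up.

```python
# Sketch of theory recombination: partial theories sharing the same state
# (coverage set, last-cept center, last two output phrases) keep only the
# best score, since they are indistinguishable for further expansion.
from collections import namedtuple

State = namedtuple("State", ["coverage", "center", "prev_phrase", "last_phrase"])

def recombine(theories):
    """theories: iterable of (State, score, backpointer)."""
    best = {}
    for state, score, back in theories:
        if state not in best or score > best[state][0]:
            best[state] = (score, back)
    return best

theories = [
    (State(frozenset({1, 2}), 2, "", "thank you"), -4.1, None),
    (State(frozenset({1, 2}), 2, "", "thank you"), -5.3, None),  # recombined away
    (State(frozenset({1}),    1, "", "thanks"),    -2.7, None),
]
print(recombine(theories))
```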
{
"text": "Hence, eq. (21) relies on the following definitions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G(s , s) = G((C \\ \u03c0,\u03c0 ,\u1ebd ,\u1ebd ), (C,\u03c0,\u1ebd ,\u1ebd)) = p(\u1ebd |\u1ebd ,\u1ebd ) \u00d7 \u00d7 p(\u03c6 i , \u03c4 i , \u03c0 i |\u1ebd,\u03c0 ) if \u03c0 = \u2205 p(\u03c6 i = 0 |\u1ebd) if \u03c0 = \u2205 (22) Q 0 (s) = Q 0 (\u03c0 0 ,\u03c0 0 , , )",
"eq_num": "(23)"
}
],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= p(\u03c6 0 | m \u2212 \u03c6 0 ) p(\u03c4 0 |\u1ebd 0 ) 1 \u03c6 0 !",
"eq_num": "(24)"
}
],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "In order to reduce the huge number of theories to generate, some methods are used, which affect the optimality of the search algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "\u2022 Comparison with the best theory: theories are pruned, whose score is worse than the so-far best found complete solution, as theory expansion always decreases the score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "\u2022 Beam search: at each expansion less promising theories are also pruned. In particular, two types of pruning define the beam:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "-threshold pruning: partial theories th whose score Q i (s(th)) is smaller than the current optimum score Q * curr times a given factor T , i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q i (s(th)) Q * curr < T ,",
"eq_num": "(25)"
}
],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "are eliminated;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "-histogram pruning: hypotheses not among the top N best scoring ones are pruned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "These criteria are applied, first to all theories with a fixed coverage set, then to all theories of fixed output length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
{
"text": "\u2022 Reordering constraint: a smaller number of theories is generated by applying the so-called IBM constraint on each additionally covered source position, i.e. by selecting only one of the first 4 empty positions, from left to right. Figure 1 shows how theories are generated, recombined and pruned during the search process.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 241,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoding Algorithm",
"sec_num": "3."
},
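The two beam-pruning criteria can be summarized in a few lines. The sketch below assumes theory scores play the role of the probabilities compared in eq. (25); the threshold, histogram size, and scores are illustrative.

```python
# Sketch of threshold and histogram pruning of partial theories;
# all numeric values are invented.
def prune(theories, best_complete=None, threshold=0.001, histogram=2):
    """theories: dict mapping state -> score (higher is better)."""
    q_best = max(theories.values())
    kept = {s: q for s, q in theories.items()
            if q / q_best >= threshold                          # threshold pruning
            and (best_complete is None or q > best_complete)}   # vs. best complete solution
    # histogram pruning: keep only the N best scoring hypotheses
    return dict(sorted(kept.items(), key=lambda kv: kv[1], reverse=True)[:histogram])

print(prune({"s1": 0.04, "s2": 0.00001, "s3": 0.03}, best_complete=0.005))
```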
{
"text": "The Chinese word segmentation problem can be formulated as follows. Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x n 1 = x 1 , x 2 , . . . , x n x i \u2208 \u03a3",
"eq_num": "(26)"
}
],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "be a string of characters (observations) representing a Chinese text, where \u03a3 denotes the set of Chinese characters. We assume that the text is produced by concatenating words which are independent and identically distributed according to a distribution P (w), defined over strings w of \u03a3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x 1 \u2022 \u2022 \u2022 x n 1 \u22121 P (w1) x n 1 \u2022 \u2022 \u2022 x n 2 \u22121 P (w2) \u2022 \u2022 \u2022 x n c \u2022 \u2022 \u2022 x n P (wc)",
"eq_num": "(27)"
}
],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "Hence, segmentation is the task of guessing the number of words c and of detecting the transition points n c 1 = n 1 , n 2 . . . n c within the original string. From a statistical perspective, we look for the segmentation which maximizes the text log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L * (x n 1 ) = max c,n c 1 L(x n 1 ; c; n c 1 ) (28) = max c,n c 1 c+1 i=1 log P (w = x n i \u22121 n i\u22121 )",
"eq_num": "(29)"
}
],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "where 1 = n 0 < n 1 < n 2 < . . . < n c < n c+1 = n + 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "The maximization in eq. (29) can be solved by dynamic programming, while the word model can be defined as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
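The maximization of eq. (29) is a standard dynamic program over breakpoints. The following sketch, with an invented tiny lexicon and a fixed maximum word length, shows the idea, not the actual segmenter.

```python
# Sketch of dynamic-programming segmentation maximizing the sum of log word
# probabilities (eq. 29); the lexicon and smoothing constant are toy values.
import math

def segment(chars, word_logprob, max_len=4):
    n = len(chars)
    best = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            w = "".join(chars[start:end])
            cand = best[start] + word_logprob(w)
            if cand > best[end]:
                best[end], back[end] = cand, start
    # recover the segmentation from the backpointers
    words, end = [], n
    while end > 0:
        words.append("".join(chars[back[end]:end]))
        end = back[end]
    return list(reversed(words))

lexicon = {"ab": math.log(0.4), "c": math.log(0.3), "a": math.log(0.1), "b": math.log(0.1)}
print(segment(list("abc"), lambda w: lexicon.get(w, math.log(1e-6))))
```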
{
"text": "Elementary statistics suggests that simple and effective word models can be built from word occurrence statistics collected within a large corpus of segmented texts. However, while just relying on word counting can be optimal in a closed-vocabulary situation, smoothing word probabilities with other less specific features can improve performance on texts including never observed words. Here, we present a word model including statistics of words, word lengths, and character sequences. More specifically, we assume the fol- Figure 2 : The architecture of the ITC-irst SMT system at run time: after preprocessing, the input sentence is sent to the decoder that, given the model parameters, searches for the best hypothesis. A final postprocessing step provides the actual translation. lowing back-off word model over \u03a3 + :",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 534,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Chinese Segmentation",
"sec_num": "4."
},
{
"text": "P (w = x l 1 ) = (1 \u2212 \u03bb)p(w) ifp(w) > 0 \u03b1 \u03bb p(l, x l 1 ) otherwise (30)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FACTORS SCALING",
"sec_num": null
},
{
"text": "where p̂(w) is an empirical word distribution estimated on a segmented text sample, \u03bb \u2208 (0, 1) is a smoothing factor, \u03b1 is a normalization term to ensure that \u2211 w\u2208\u03a3 + P(w) = 1, and p(l, x 1 , . . . , x l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FACTORS SCALING",
"sec_num": null
},
{
"text": "is a character-based language model. The character n-gram model is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FACTORS SCALING",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(l, x l 1 ) =p(l) l+1 i=1 p(x i | x i\u22121 , l)",
"eq_num": "(31)"
}
],
"section": "FACTORS SCALING",
"sec_num": null
},
{
"text": "where p̂(l) is the empirical word-length distribution of the training data, p(x_i | x_{i-1}, l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FACTORS SCALING",
"sec_num": null
},
{
"text": "is a length conditional bigram language model, and x 0 and x l+1 are set to the conventional character $ to model word boundaries. Bigram probabilities are estimated from a sample of words by applying the wellknown Witten-Bell smoothing method [16] .",
"cite_spans": [
{
"start": 244,
"end": 248,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FACTORS SCALING",
"sec_num": null
},
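A minimal sketch of the back-off computation of eqs. (30)-(31); for brevity the character bigram model here is not conditioned on the word length, and all tables and constants are illustrative placeholders.

```python
# Sketch of the back-off word model: use the empirical word probability when
# available, otherwise fall back to a length model times a character bigram model.
def char_bigram_prob(word, length_prob, bigram):
    chars = ["$"] + list(word) + ["$"]          # $ marks word boundaries
    p = length_prob.get(len(word), 1e-4)
    for prev, cur in zip(chars, chars[1:]):
        p *= bigram.get((prev, cur), 1e-3)      # smoothed (e.g. Witten-Bell) in practice
    return p

def word_prob(word, empirical, length_prob, bigram, lam=0.1, alpha=1.0):
    if empirical.get(word, 0.0) > 0.0:
        return (1.0 - lam) * empirical[word]
    return alpha * lam * char_bigram_prob(word, length_prob, bigram)

empirical   = {"你好": 0.02}                     # toy empirical word distribution
length_prob = {1: 0.5, 2: 0.4}                   # toy word-length distribution
bigram      = {("$", "谢"): 0.1, ("谢", "谢"): 0.3, ("谢", "$"): 0.2}
print(word_prob("你好", empirical, length_prob, bigram),
      word_prob("谢谢", empirical, length_prob, bigram))
```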
{
"text": "The architecture of the ITC-irst SMT system at run time is shown in Figure 2 . After a preprocessing step, the sentence in the source language is given as input to the decoder, which outputs the best hypothesis in the target language; the actual translation is obtained by a further postprocessing.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "5."
},
{
"text": "Preprocessing and postprocessing consist of a sequence of actions aiming at normalizing text and are applied both for preparing training data and for managing text to translate. The same steps can be applied to both source and target sentences, accordingly with the language. Input strings are tokenized, and put in lowercase. Text is labeled with few classes including cardinal and ordinal numbers, week-day and month names, years and percentages. As training and decoding assume sentences divided into words, Chinese sequence of ideograms are segmented by means of the algorithm described in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "5."
},
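A toy sketch of the normalization steps just described (tokenization, lowercasing, and labeling of a few classes); the class markers and regular expressions are illustrative stand-ins, not the actual preprocessing rules.

```python
# Sketch of text normalization: tokenize, lowercase, and replace a few token
# classes (cardinals, years, months, percentages) with class labels.
import re

MONTHS = {"january", "february", "march", "april", "may", "june", "july",
          "august", "september", "october", "november", "december"}

def preprocess(sentence):
    tokens = re.findall(r"\w+|[^\w\s]", sentence.lower())  # tokenize + lowercase
    labeled = []
    for tok in tokens:
        if tok == "%":
            labeled.append("@percentage")
        elif re.fullmatch(r"(19|20)\d\d", tok):
            labeled.append("@year")
        elif re.fullmatch(r"\d+", tok):
            labeled.append("@cardinal")
        elif tok in MONTHS:
            labeled.append("@month")
        else:
            labeled.append(tok)
    return " ".join(labeled)

print(preprocess("The flight leaves on 15 March 2004."))
```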
{
"text": "Parameters of the statistical translation model described in Section 2 can be divided into two groups: the parameters of each basic phrase-based model and the weights of their log-linear combination. Accordingly, the training procedure Figure 3 , consists of two separate phases.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 244,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "5."
},
{
"text": "In the first phase, distributions of the components of the phrase-based models are computed starting from a parallel training corpus. After preprocessing, Viterbi alignments from source to target words, and vice-versa, are computed by means of the GIZA++ toolkit [1] . Phrase pairs are then extracted taking into account both direct and inverse alignments (see section 2.3), and the phrase-based distributions are estimated (section 2.4).",
"cite_spans": [
{
"start": 263,
"end": 266,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "5."
},
{
"text": "In the second phase the scaling factors of the log-linear model are estimated by the so-called minimum error training procedure. This iterative method searches for a set of factors that minimizes a given error measure on a development corpus. The simplex method [17] is used to explore the space of scaling factors. A detailed description of the minimum error training approach is reported in [18] .",
"cite_spans": [
{
"start": 262,
"end": 266,
"text": "[17]",
"ref_id": "BIBREF16"
},
{
"start": 393,
"end": 397,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "5."
},
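A sketch of the minimum-error training loop under the stated approach: the simplex (Nelder-Mead) method explores the space of scaling factors so as to minimize an error measure (e.g. 1 - BLEU) on the development set. `translate` and `bleu` are placeholders for the decoder and the scorer, not real APIs.

```python
# Sketch of minimum-error training of the log-linear scaling factors using the
# simplex method; the decoder and scorer below are dummy placeholders.
import numpy as np
from scipy.optimize import minimize

def translate(dev_sources, weights):
    # placeholder: would run the phrase-based decoder with these scaling factors
    return ["..." for _ in dev_sources]

def bleu(hypotheses, references):
    return 0.3  # placeholder score

def dev_error(weights, dev_sources, references):
    return 1.0 - bleu(translate(dev_sources, weights), references)

dev_sources, references = ["src1", "src2"], [["ref1"], ["ref2"]]
result = minimize(dev_error, x0=np.ones(2), args=(dev_sources, references),
                  method="Nelder-Mead")
print(result.x)   # tuned scaling factors
```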
{
"text": "ITC-irst participated to all the three data conditions of the Chinese-English track: Supplied, Additional, and Unrestricted data. According with the evaluation specification, in the last two conditions monolingual and bilingual training data can be added to the supplied corpus of 20K sentences. Experiments on a development set were performed to select these corpora in order to optimize performance of the system. System development was performed on the CSTAR-2003 evaluation set, and then blindly applied to the IWSLT-2004 test set. No optimization has been done with respect to the post-processing required by the IWSLT-2004 evaluation campaign (e.g. absence of punctuation). The system has been trained in a standard way (e.g. with punctuation and with lower-case letters) and the required post-processing was simply applied to the output sentences as final step. The development of the system was done by considering the BLEU score, both in the data selection and in the optimization of the scaling factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6."
},
{
"text": "Adding data for training the system is an hard issue. Using more training data usually improves performance of the baseline system, provided these data are close enough to the domain of the test set. However, an exhaustive exploration of corpora available for the IWSLT evaluation for finding the best combination for training the system is unfeasible. Hence, first we searched for the best monolingual resources consisting of the English part of parallel corpora. Successively, we tried the effectiveness of additional bilingual resources. Note that no optimization of scaling factors is made in this phase. The upper half of Table 2 summarizes the results of the selection of additional monolingual resources. Monolingual data are used only for estimating the language model. The baseline system was trained on the Supplied data. Among the available corpora, the Basic Traveling Expression Corpus (BTEC) [19] , a collection of 162K parallel sentences in several languages, is the closest to the task domain 4 . Using it, performance improvement over the baseline is about 17% relative. The impact on performance of other corpora was explored by training different language models on them, and combining the estimated models in a mixture [20] . Two groups of additional data are considered: DB1 mostly composed by news corpora 5 and DB2 consisting of press releases released by the Hong Kong Special Administrative Region 6 . In both cases, small relative decrements (\u22121.2% and \u22121.4%) of the BLEU score were observed. This behavior can be explained by the specificity of BTEC, whose domain -tourism -is different from those of the other corpora. Accordingly, the language model estimated over BTEC is used for all the following experiments. Even more challenging is the selection of bilingual resources. In order to avoid constraints given by the Additional data condition, we worked under the Unrestricted data condition, that permits the use of any parallel corpus for training. Two translation systems are trained on different sets of bilingual resources: tm-btec and tm-db3 (see lower half of Table 2 ). The first system extends the supplied data with BTEC; the second one with a selection of other corpora available from LDC (DB3 7 ). The tm-btec system signifi- cantly outperformes previous systems. The increment of the BLEU score is about 43% and 23% relative, with respect to the baseline and lm-btec systems, respectively. Performance of tm-db3 system scored better than baseline and lm-btec, too. The constraints on the use of training data for the three conditions and the above reported results on the development set suggested the employment of the following systems for the evaluation campaign: the baseline system in the Supplied data condition, the lm-btec in the Additional data condition, and the tm-db3 in the Unrestricted data condition. The scaling factors that minimize the errors on the development set were estimated through the procedure mentioned in Section 5 and employed for the official evaluation.",
"cite_spans": [
{
"start": 906,
"end": 910,
"text": "[19]",
"ref_id": "BIBREF18"
},
{
"start": 1009,
"end": 1010,
"text": "4",
"ref_id": "BIBREF3"
},
{
"start": 1239,
"end": 1243,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 1328,
"end": 1329,
"text": "5",
"ref_id": "BIBREF4"
},
{
"start": 1423,
"end": 1424,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 2098,
"end": 2105,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Selection of additional training data",
"sec_num": "6.1."
},
{
"text": "In developing the Chinese-English MT system for the IWSLT-2004 evaluation campaign we had to face the problem of having different Chinese word segmentation in the training corpora and in the test set. By assuming that each available data set provides its own segmentation, and that no knowledge is given about its characteristics, an interesting issue is to understand which choice is the best between (i) exploiting the provided segmentation or (ii) removing the provided segmentation and homogeneously re-segmenting all data. Three types of segmentation were taken into account:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Official evaluation",
"sec_num": "6.2."
},
{
"text": "1. Supplied, the original Chinese segmentation provided in the training and test corpus was not changed and data were used as they were. This means that the segmentation step was skipped during the preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Official evaluation",
"sec_num": "6.2."
},
{
"text": "2. Special, Chinese segmentation was applied from scratch by training the segmentation model (Section 4) on a 7K-entry word-frequency list extracted from the supplied data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Official evaluation",
"sec_num": "6.2."
},
{
"text": "3. Full, Chinese segmentation was applied from scratch by training the segmentation model on a 44K-entry word-frequency list supplied by LDC. Table 3 reports automatic scores on the official test set for each data condition and for each segmentation type. Concerning the Supplied data condition, results show that the Special segmentation outperforms the Supplied one in terms of BLEU score; the relative improvement is more than 10%. It is worth noticing that the Full segmentation is not permitted according to the Supplied data conditions. A reason for the large difference in performance is probably due to the fact that training and testing data were manually segmented by different people. Hence, the two data sets reflect different ways of interpreting the concept of word, which is quite frequent in Chinese. Hence, the approach of automatically resegmenting all the data with one model produces the positive effect of making training and testing data more consistent. By looking at the Additional data condition, we notice that the three segmentation modalities give comparable results. In the Unrestricted data condition, results show that the Full Segmentation method achieves the best performance. The BLEU score relative improvement is about 17% and 7% with respect to Supplied and Special segmentations, respectively. This is not surprising because (i) the size of the training set is much larger than in the Additional data condition and (ii) the training set contains data much closer to the Chinese dictionary used by the segmenter. These numbers appear to confirm that the manual segmentation of the test set exhibits some differences with respect to the segmentation typically found in the LDC documents or even in the IWSLT-2004 supplied training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Official evaluation",
"sec_num": "6.2."
},
{
"text": "A cept is a target word (including e 0 ) with positive fertility. A not-cept word may only generate an empty tablet and an empty permutation with probability 1.2\u03c0 \u03c1(i) is defined as the ceiling of the mean position assigned to the most recent cept, whose index is defined by \u03c1(i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In order to distinguish between words and phrases and between wordbased and phrase-based models, the latter will be identified with the symbolt hrough all the rest of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, both development and test sets are extracted from BTEC.5 The corpora are available from LDC: LDC2002E17, LDC2002E58, and LDC2002E18.6 LDC2003E25 and LDC2000T46. 7 LDC2002E17, LDC2002L27, LDC2003E25, LDC2002E58, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney, \"Improved statistical alignment models,\" in Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, Hongkong, China, October 2000, pp. 440-447.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A syntactic-based statistical translation model",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the 39th Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yamada and K. Knight, \"A syntactic-based statisti- cal translation model,\" in Proc. of the 39th Meeting of the Association for Computational Linguistics (ACL), Toulouse, France, 2001, pp. 523-530.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Phrase-based, Joint Probability Model for Statistical Machine Translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong, \"A Phrase-based, Joint Prob- ability Model for Statistical Machine Translation,\" in Proc. of the Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), 2002.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The CMU statistical machine translation system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "A",
"middle": [
"T"
],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Machine Translation Summit IX",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Vogel, Y. Zhang, F. Huang, A. Venugopal, B. Zhao, A. T. a nd M. Eck, and A. Waibel, \"The CMU statistical machine translation system,\" in Proc. of the Machine Translation Summit IX, New Orleans, LA, 2003.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical phrasebased translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of HLT-NAACl",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. J. Och, and D. Marcu, \"Statistical phrase- based translation,\" in Proc. of HLT-NAACl 2003, Ed- monton, Canada, 2003, pp. 127-133.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A projection extension algorithm for statistical machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillmann, \"A projection extension algorithm for statistical machine translation,\" in Proc. of Empirical Methods in Natural Language Processing (EMNLP), Sapporo, Japan, 2003, pp. 1-8.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards a unified approach to memory-and statistical-based machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the 39th Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "378--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu, \"Towards a unified approach to memory-and statistical-based machine translation,\" in Proc. of the 39th Meeting of the Association for Computational Lin- guistics (ACL), Toulouse, France, 2001, pp. 378-385.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer, \"The Mathematics of Statistical Ma- chine Translation: Parameter Estimation,\" Computa- tional Linguistics, vol. 19, no. 2, pp. 263-313, 1993.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Maximum Entropy Approach to Natural Language Processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra, \"A Max- imum Entropy Approach to Natural Language Process- ing,\" Computational Linguistics, vol. 22, no. 1, pp. 39- 71, 1996.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generalized Iterative Scaling for Log-Linear Models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Darroch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ratcliff",
"suffix": ""
}
],
"year": 1972,
"venue": "The Annals of Mathematical Statistics",
"volume": "43",
"issue": "5",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Darroch and D. Ratcliff, \"Generalized Iterative Scal- ing for Log-Linear Models,\" The Annals of Mathemati- cal Statistics, vol. 43, no. 5, pp. 1470-1480, 1972.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Inducing features of random fields",
"authors": [
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Trans. on Pattern Analysis and Machine Intelligence",
"volume": "19",
"issue": "4",
"pages": "380--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Della Pietra, V. Della Pietra, and J. Lafferty, \"Induc- ing features of random fields,\" IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 4, pp. 380-393, 1997.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL02: Proc. of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney, \"Discriminative training and max- imum entropy models for statistical machine transla- tion,\" in ACL02: Proc. of the 40th Annual Meeting of the Association for Computational Linguistics, PA, Philadelphia, 2002, pp. 295-302.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Jelinek, Statistical Methods for Speech Recognition. MIT Press Cambridge, Massachusetts, London, Eng- land, 1997.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney, \"A systematic comparison of var- ious statistical alignment models,\" Computational Lin- guistics, vol. 29, no. 1, pp. 19-51, 2003.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Maximum-likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society, B",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin, \"Maximum-likelihood from incomplete data via the EM algorithm,\" Journal of the Royal Statistical Soci- ety, B, vol. 39, pp. 1-38, 1977.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression",
"authors": [
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Trans. Inform. Theory",
"volume": "",
"issue": "4",
"pages": "1085--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. H. Witten and T. C. Bell, \"The zero-frequency prob- lem: Estimating the probabilities of novel events in adaptive text compression,\" IEEE Trans. Inform. The- ory, vol. IT-37, no. 4, pp. 1085-1094, 1991.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Numerical Recipes in C: The Art of Scientific Computing",
"authors": [
{
"first": "W",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Flannery",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Vetterling",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes in C: The Art of Scientific Comput- ing. Cambridge University Press, 1992.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Minimum Error Training of Log-Linear Translation Models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Cettolo and M. Federico, \"Minimum Error Training of Log-Linear Translation Models,\" In these proceed- ings.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of 3rd International Conference on Language Resources and Evaluation (LREC), Las Palmas",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, \"Toward a broad-coverage bilingual cor- pus for speech translation of travel conversations in the real world,\" in Proc. of 3rd International Conference on Language Resources and Evaluation (LREC), Las Pal- mas, Spain, 2002, pp. 147-152.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross-task portability of a broadcast news speech recognition system",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Brugnara",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Giuliani",
"suffix": ""
}
],
"year": 2002,
"venue": "Speech Communication",
"volume": "38",
"issue": "3-4",
"pages": "335--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Bertoldi, F. Brugnara, M. Cettolo, M. Federico, and D. Giuliani, \"Cross-task portability of a broadcast news speech recognition system,\" Speech Communication, vol. 38, no. 3-4, pp. 335-347, 2002.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "f , a) = log Pr(e) h 2 (e, f , a) = log Pr(f , a | e) exploiting eq. (3), eq. (2) can be rewritten as: e * = arg max e Pr(e) \u03bb 1 a",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Expansion, recombination and pruning of theories during the search process.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "The two-phase architecture of the training system: first, the distributions of the components of the phrase-based model are estimated by means of alignments (left side). Then, the scaling factors of the components are computed by a minimum error training loop (right side).",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>PREPROCESSED</td><td/><td/><td/><td/><td>PREPROCESSED</td></tr><tr><td>src TRAINING SET tgt</td><td colspan=\"4\">Word Aligner</td><td>PARAMETERS MODEL PHRASE\u2212BASED</td><td>Decoder</td><td>src DEVELOPMENT SET tgt (ref)</td></tr><tr><td/><td/><td/><td/><td/><td>\u2212 lexicon distributions</td></tr><tr><td/><td/><td colspan=\"2\">WORD</td><td/><td>\u2212 fertility \u2212 distortion \" \"</td><td>TRANSLATION</td></tr><tr><td/><td colspan=\"4\">ALIGNMENTS</td><td>\u2212 LM</td></tr><tr><td/><td>src</td><td/><td/><td>tgt</td></tr><tr><td/><td/><td>Phrase</td><td/><td/><td>Estimation Parameter</td><td>\u2212 \u03bb4 \u2212 \u03bb3 \u2212 \u03bb2 \u2212 \u03bb1</td><td>Evaluator</td></tr><tr><td/><td/><td colspan=\"2\">Extraction</td><td/></tr><tr><td/><td>src</td><td colspan=\"2\">PHRASES</td><td>tgt</td><td>SCORE</td></tr><tr><td/><td colspan=\"2\">.. w1#..#wj w1#..#wl</td><td colspan=\"2\">w1#..#wk .. w1#..#wm</td><td>Simplex</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Phase 1:</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Experiments for the selection of additional training data. Results are given on the development set CSTAR-2003.",
"content": "<table><tr><td>System</td><td colspan=\"2\">Additional Data</td><td>BLEU</td><td colspan=\"3\">NIST MWER MPER</td></tr><tr><td>name</td><td colspan=\"2\">monolingual bilingual</td><td/><td/><td/></tr><tr><td>baseline</td><td/><td/><td colspan=\"2\">0.3001 7.0157</td><td>50.8</td><td>41.5</td></tr><tr><td>lm-btec</td><td>BTEC</td><td/><td colspan=\"2\">0.3509 7.5099</td><td>47.2</td><td>38.1</td></tr><tr><td>lm-db1</td><td>BTEC, DB1</td><td/><td colspan=\"2\">0.3466 7.4475</td><td>47.6</td><td>38.3</td></tr><tr><td>lm-db2</td><td>BTEC, DB2</td><td/><td colspan=\"2\">0.3460 7.4427</td><td>47.1</td><td>38.3</td></tr><tr><td>tm-btec</td><td>BTEC</td><td>BTEC</td><td colspan=\"2\">0.4311 8.5336</td><td>42.0</td><td>33.3</td></tr><tr><td>tm-db3</td><td>BTEC</td><td colspan=\"3\">BTEC, DB3 0.4574 8.7890</td><td>39.7</td><td>30.5</td></tr><tr><td>of the system, shown in</td><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Official results of the IWSLT-2004 evaluation campaign. Comparison between different types of Chinese segmentation.",
"content": "<table><tr><td colspan=\"3\">Data Condition Segmentation BLEU</td><td colspan=\"3\">NIST MWER MPER</td></tr><tr><td>Supplied</td><td>Supplied</td><td colspan=\"2\">0.3156 7.1604</td><td>53.1</td><td>45.3</td></tr><tr><td/><td>Special</td><td colspan=\"2\">0.3493 7.0973</td><td>50.8</td><td>43.0</td></tr><tr><td>Additional</td><td>Supplied</td><td colspan=\"2\">0.3499 7.5199</td><td>51.0</td><td>43.3</td></tr><tr><td/><td>Special</td><td colspan=\"2\">0.3514 7.3958</td><td>49.7</td><td>42.0</td></tr><tr><td/><td>Full</td><td colspan=\"2\">0.3490 6.6185</td><td>51.9</td><td>44.5</td></tr><tr><td>Unrestricted</td><td>Supplied</td><td colspan=\"2\">0.3774 7.0880</td><td>50.0</td><td>43.4</td></tr><tr><td/><td>Special</td><td colspan=\"2\">0.4118 7.0908</td><td>47.7</td><td>41.0</td></tr><tr><td/><td>Full</td><td colspan=\"2\">0.4409 7.2413</td><td>45.7</td><td>39.3</td></tr></table>",
"num": null,
"html": null
}
}
}
}