{
"paper_id": "N04-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:45:06.537814Z"
},
"title": "Minimum Bayes-Risk Decoding for Statistical Machine Translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"addrLine": "3400 North Charles Street",
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"addrLine": "3400 North Charles Street",
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. This statistical approach aims to minimize expected loss of translation errors under loss functions that measure translation performance. We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. We report the performance of the MBR decoders on a Chinese-to-English translation task. Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions.",
"pdf_parse": {
"paper_id": "N04-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. This statistical approach aims to minimize expected loss of translation errors under loss functions that measure translation performance. We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. We report the performance of the MBR decoders on a Chinese-to-English translation task. Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical Machine Translation systems have achieved considerable progress in recent years as seen from their performance on international competitions in standard evaluation tasks (NIST, 2003) . This rapid progress has been greatly facilitated by the development of automatic translation evaluation metrics such as BLEU score (Papineni et al., 2001) , NIST score (Doddington, 2002) and Position Independent Word Error Rate (PER) (Och, 2002) . However, given the many factors that influence translation quality, it is unlikely that we will find a single translation metric that will be able to judge all these factors. For example, the BLEU, NIST and the PER metrics, though effective, do not take into account explicit syntactic information when measuring translation quality. (This work was supported by the National Science Foundation under Grant No. 0121285 and an ONR MURI Grant N00014-01-1-0685. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Office of Naval Research.)",
"cite_spans": [
{
"start": 182,
"end": 194,
"text": "(NIST, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 328,
"end": 351,
"text": "(Papineni et al., 2001)",
"ref_id": "BIBREF17"
},
{
"start": 365,
"end": 383,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 431,
"end": 442,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given that different Machine Translation (MT) evaluation metrics are useful for capturing different aspects of translation quality, it becomes desirable to create MT systems tuned with respect to each individual criterion. In contrast, the maximum likelihood techniques that underlie the decision processes of most current MT systems do not take into account these application specific goals. We apply the Minimum Bayes-Risk (MBR) techniques developed for automatic speech recognition (Goel and Byrne, 2000) and bitext word alignment for statistical MT (Kumar and Byrne, 2002) , to the problem of building automatic MT systems tuned for specific metrics. This is a framework that can be used with statistical models of speech and language to develop decision processes optimized for specific loss functions.",
"cite_spans": [
{
"start": 485,
"end": 507,
"text": "(Goel and Byrne, 2000)",
"ref_id": "BIBREF9"
},
{
"start": 550,
"end": 576,
"text": "MT (Kumar and Byrne, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will show that MBR decoding can be applied to machine translation in two scenarios. Given an automatic MT metric, we design a loss function based on the metric and use MBR decoding to tune MT performance under the metric. We also show how MBR decoding can be used to incorporate syntactic structure into a statistical MT system by building specialized loss functions. These loss functions can use information from word strings, word-to-word alignments and parse-trees of the source sentence and its translation. In particular we describe the design of a Bilingual Tree Loss Function that can explicitly use syntactic structure for measuring translation quality. MBR decoding under this loss function allows us to integrate syntactic knowledge into a statistical MT system without building detailed models of linguistic features, and retraining the system from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first present a hierarchy of loss functions for translation based on different levels of lexical and syntactic information from source and target language sentences. This hierarchy includes the loss functions useful in both situations where we intend to apply MBR decoding. We then present the MBR framework for statistical machine translation under the various translation loss functions. We finally report the performance of MBR decoders optimized for each loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We now introduce translation loss functions to measure the quality of automatically generated translations. Suppose we have a sentence F in a source language for which we have generated an automatic translation E'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": "2"
},
{
"text": "with word-to-word alignment A' relative to F . The word-to-word alignment A' specifies the words in the source sentence that are aligned to each word in the translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": null
},
{
"text": ". We wish to compare this automatic translation with a reference translation \u00a1 with word-to-word alignment \u00a4 relative to .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": null
},
{
"text": "We will now present a three-tier hierarchy of translation loss functions of the form L(E, A, E', A')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": null
},
{
"text": "that measure (E', A') against (E, A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": null
},
{
"text": ". These loss functions will make use of different levels of information from word strings, MT alignments and syntactic structure from parse-trees of both the source and target strings as illustrated in the following table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Loss Functions",
"sec_num": null
},
{
"text": "Functional Form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "Lexical : L(E, E') . Target Language Parse-Tree : L(T_E, T_E') . Bilingual Parse-Tree : L((T_E, A), (T_E', A') ; T_F)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "We start with an example of two competing English translations for a Chinese sentence (in Pinyin without tones), with their word-to-word alignments in Figure 1 . The reference translation for the Chinese sentence with its word-to-word alignment is shown in Figure 2 . In this section, we will show the computation of different loss functions for this example.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 1",
"ref_id": null
},
{
"start": 257,
"end": 265,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Loss Function",
"sec_num": null
},
{
"text": "The first class of loss functions uses no information about word alignments or parse-trees, so that L(E, A, E', A') can be reduced to L(E, E')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": ". We consider three loss functions in this category: The BLEU score (Papineni et al., 2001 ), word-error rate, and the position-independent word-error rate (Och, 2002) . Another example of a loss function in this class is the MTeval metric introduced in Melamed et al. (2003) . A loss function of this type depends only on information from word strings. BLEU score (Papineni et al., 2001 ) computes the geometric mean of the precision of",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF17"
},
{
"start": 156,
"end": 167,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
},
{
"start": 254,
"end": 275,
"text": "Melamed et al. (2003)",
"ref_id": "BIBREF13"
},
{
"start": 365,
"end": 387,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": "n-grams of various lengths (n = 1, 2, ..., N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": ") between a hypothesis and a reference translation, and includes a brevity penalty (beta(E, E')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": "<= 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": ") if the hypothesis is shorter than the reference. We use",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": "N = 4 . BLEU(E, E') = exp ( (1/N) sum_{n=1}^{N} log p_n(E, E') ) beta(E, E') , where p_n(E, E')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": "is the precision of n-grams in the hypothesis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": "2.1"
},
{
"text": ". The BLEU score is zero if any of the n-gram precisions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
{
"text": "p_n(E, E')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
{
"text": "is zero for that sentence pair. We note that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
{
"text": "0 <= BLEU(E, E') <= 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
{
"text": ". We derive a loss function from BLEU score as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
{
"text": "L_BLEU(E, E') = 1 - BLEU(E, E') .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
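The BLEU-derived loss described above can be sketched as follows. This is a minimal illustration of the stated definitions (geometric mean of n-gram precisions for N = 4, a brevity penalty, and L = 1 - BLEU), not the evaluation code used in the paper; single-reference scoring and whitespace tokenization are assumptions made here.

```python
import math
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped precision of the hypothesis n-grams against the reference."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    matched = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return matched / total if total else 0.0

def bleu(hyp, ref, max_n=4):
    """Geometric mean of n-gram precisions times a brevity penalty."""
    precisions = [ngram_precision(hyp, ref, n) for n in range(1, max_n + 1)]
    if any(p == 0.0 for p in precisions):
        return 0.0  # BLEU is zero if any n-gram precision is zero
    brevity = math.exp(min(0.0, 1.0 - len(ref) / len(hyp)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

def bleu_loss(hyp, ref):
    """Loss derived from BLEU: L = 1 - BLEU, so 0 <= L <= 1."""
    return 1.0 - bleu(hyp.split(), ref.split())
```

An identical hypothesis and reference give a loss of exactly 0, and any hypothesis with a zero n-gram precision gets the maximum loss of 1.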
{
"text": "Word Error Rate (WER) is the ratio of the string-edit distance between the reference and the hypothesis word strings to the number of words in the reference. Stringedit distance is measured as the minimum number of edit operations needed to transform a word string to the other word string. Position-independent Word Error Rate (PER) measures the minimum number of edit operations needed to transform a word string to any permutation of the other word string. The PER score (Och, 2002) is then computed as a ratio of this distance to the number of words in the reference word string.",
"cite_spans": [
{
"start": 474,
"end": 485,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Loss Functions",
"sec_num": null
},
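The WER and PER definitions above can be sketched as follows. The Levenshtein recurrence is standard; the PER computation via unmatched word counts (the minimum number of edits against any permutation equals the larger of the missing and extra word multiset sizes) is a common simplification assumed here, not taken verbatim from the paper.

```python
from collections import Counter

def edit_distance(ref, hyp):
    """Minimum number of insertions, deletions and substitutions (Levenshtein)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    """Word Error Rate: string-edit distance over reference length."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def per(ref, hyp):
    """Position-independent WER: both strings are treated as bags of words."""
    r, h = Counter(ref.split()), Counter(hyp.split())
    # Edits needed against the best permutation: missing vs. extra words.
    errors = max(sum((r - h).values()), sum((h - r).values()))
    return errors / sum(r.values())
```

Note that any reordering of the reference words gives a PER of zero while the WER can be large, which is exactly the distinction drawn in the text.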
{
"text": "The second class of translation loss functions uses information only from the parse-trees of the two translations, so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Language Parse-Tree Loss Functions",
"sec_num": "2.2"
},
{
"text": "L(E, A, E', A') = L(T_E, T_E')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Language Parse-Tree Loss Functions",
"sec_num": "2.2"
},
{
"text": ". This loss function has no access to any information from the source sentence or the word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Language Parse-Tree Loss Functions",
"sec_num": "2.2"
},
{
"text": "Examples of such loss functions are tree-edit distances between parse-trees, string-edit distances between event representation of parse-trees (Tang et al., 2002) , and treekernels (Collins and Duffy, 2002) . The computation of tree-edit distance involves an unconstrained alignment of the two English parse-trees. We can simplify this problem once we have a third parse tree (for the Chinese sentence) with node-to-node alignment relative to the two English trees. We will introduce such a loss function in the next section. We did not perform experiments involving this class of loss functions, but mention them for completeness in the hierarchy of loss functions.",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "(Tang et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 181,
"end": 206,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Target Language Parse-Tree Loss Functions",
"sec_num": "2.2"
},
{
"text": "The third class of loss functions uses information from word strings, alignments and parse-trees in both languages, and can be described by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": "2.3"
},
{
"text": "L(E, A, E', A') = L((T_E, A), (T_E', A') ; T_F) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": "2.3"
},
{
"text": "We will now describe one such loss function using the example in Figures 1 and 2 . Figure 3 shows a tree-to-tree mapping between the source (Chinese) parse-tree and parse-trees of its reference translation and two competing hypothesis (English) translations.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 80,
"text": "Figures 1 and 2",
"ref_id": null
},
{
"start": 83,
"end": 91,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": "2.3"
},
{
"text": "We denote the subtree of the target parse-tree T_E rooted at node n by T_E(n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": "2.3"
},
{
"text": ". We will now describe a simple procedure that makes use of the word alignment A to construct this node-to-node alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": "2.3"
},
{
"text": "We first read off the source word sequence corresponding to the leaves of T_F(n) , the subtree of the source parse-tree rooted at node n . We next consider the subset of words in the target sentence that are aligned to any word in this source word sequence, and select the leftmost and rightmost words from this subset. We locate the leaf nodes corresponding to these two words in the target parse tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "T_E and T_E' . Node n of T_F is then mapped to the subtree rooted at the closest common ancestor of these two leaf nodes. N_F denotes the set of nodes of T_F over which this mapping is computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
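The span-projection step above (read off the source words under a node, project them through the word alignment, and keep the leftmost and rightmost aligned target words) can be sketched as follows. The dictionary encoding of the alignment A, mapping each target-word index to the set of source-word indices it is aligned to, is an assumed representation for illustration.

```python
def source_span_to_target_span(src_leaf_indices, alignment):
    """Project the leaves under a source-tree node into a target-sentence span.

    `src_leaf_indices` is the set of source-word positions under the node;
    `alignment` maps each target-word index to the set of source-word indices
    it is aligned to. Returns the (leftmost, rightmost) target positions, or
    None if no target word is aligned into the source span.
    """
    targets = [t for t, srcs in alignment.items()
               if any(s in src_leaf_indices for s in srcs)]
    if not targets:
        return None
    return min(targets), max(targets)
```

The target subtree used by the loss function would then be the one covering the two returned leaf positions in the target parse-tree.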
{
"text": "The Bilingual Parse-Tree (BiTree) Loss Function can then be computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "BiTreeLoss(E, A, E', A') = sum_{n in N_F} d ( T_E(n), T_E'(n) ) (1) where d(T, T')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "is a distance measure between sub-trees T and T'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": ". Specific Bi-tree loss functions are determined through particular choices of d . In our experiments, we used a 0/1 loss function between sub-trees",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T and T' : d(T, T') = 0 if T = T' ; 1 otherwise",
"eq_num": "(2)"
}
],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "We note that other tree-to-tree distance measures can also be used to compute d , e.g. the distance function could compare whether the subtrees T and T' have the same headword/non-terminal tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
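Equations 1 and 2 can be sketched as follows. Representing parse subtrees as nested tuples of labels, and the node-to-subtree alignments as dictionaries keyed by the source-tree nodes in N_F, are assumptions made purely for illustration.

```python
def subtree_distance(t1, t2):
    """0/1 distance of Equation (2): 0 if the subtrees are identical, else 1."""
    return 0 if t1 == t2 else 1

def bitree_loss(ref_subtrees, hyp_subtrees):
    """Equation (1): sum of subtree distances over the aligned source nodes.

    Both arguments map each source-tree node in N_F to the target subtree it
    is aligned to; nested tuples of labels stand in for parse subtrees.
    """
    return sum(subtree_distance(ref_subtrees[n], hyp_subtrees[n])
               for n in ref_subtrees)

def bitree_error_rate(ref_subtrees, hyp_subtrees):
    """BiTree loss as a percentage of the number of aligned source nodes."""
    return 100.0 * bitree_loss(ref_subtrees, hyp_subtrees) / len(ref_subtrees)
```

With the 0/1 subtree distance, two hypotheses that share a word string but parse differently receive different scores, which is the behavior illustrated in the running example.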
{
"text": "The Bitree loss function measures the distance between two trees in terms of distances between their corresponding subtrees. In this way, we replace the string-to-string (Levenshtein) alignments (for WER) or n-gram matches (for BLEU/PER) with subtree-to-subtree alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "The Bitree Error Rate (in %) is computed as a ratio of the Bi-tree Loss function to the number of nodes in the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "N_F .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "The complete node-to-node alignment between the parse-tree of the source (Chinese) sentence and the parse trees of its reference translation and the two hypothesis translations (English) is given in Table 1 . Each row in this table shows the alignment between a node in the Chinese parse-tree and nodes in the reference and the two hypothesis parse-trees. The computation of the Bitree Loss function and the Bitree Error Rate is presented in the last two rows of the table. Figure 3 shows a parse-tree for a Chinese sentence and parse-trees for its reference translation and two competing hypothesis translations, with a sample alignment for one of the nodes in the Chinese tree and its corresponding nodes in the three English trees. The complete node-to-node alignment is given in Table 1 ; each row shows a mapping between a node in the parse-tree of the Chinese sentence and the nodes in the parse-trees of its reference translation, hypothesis translation 1 and hypothesis translation 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 474,
"end": 482,
"text": "Figure 3",
"ref_id": null
},
{
"start": 783,
"end": 790,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Bilingual Parse-Tree Loss Functions",
"sec_num": null
},
{
"text": "In Table 2 we compare various translation loss functions for the example from Figure 1 . The two hypothesis translations are very similar at the word level and therefore the BLEU score, PER and the WER are identical. However we observe that the sentences differ substantially in their syntactic structure (as seen from Parse-Trees in Figure 3 ), and to a lesser extent in their word-to-word alignments (Figure 1) to the source sentence. The first hypothesis translation is parsed as a sentence (S) while the second translation is parsed as a noun phrase. The Bitree loss function, which depends on both the parse-trees and the word-to-word alignments, is therefore very different for the two translations (Table 2) . While string based metrics such as BLEU, WER and PER are insensitive to the syntactic structure of the translations, BiTree Loss is able to measure this aspect of translation quality, and assigns different scores to the two translations.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
},
{
"start": 78,
"end": 86,
"text": "Figure 1",
"ref_id": null
},
{
"start": 334,
"end": 342,
"text": "Figure 3",
"ref_id": null
},
{
"start": 402,
"end": 412,
"text": "(Figure 1)",
"ref_id": null
},
{
"start": 720,
"end": 729,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "We provide this example to show how a loss function that makes use of syntactic structure from source and target parse-trees can capture properties of translations that string-based loss functions are unable to measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "A decoder delta(F) maps the source sentence F to a translation, and incurs the loss L(E, A, delta(F)) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "Our goal is to find the decoder that has the best performance over all translations. This is measured through Bayes-Risk :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "R(delta) = E_{P(E, A, F)} [ L(E, A, delta(F)) ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "The expectation is taken under the true distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "P(E, A, F)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "that describes translations of human quality. Given a loss function and a distribution, it is well known that the decision rule that minimizes the Bayes-Risk is given by (Bickel and Doksum, 1977; Goel and Byrne, 2000) :",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Bickel and Doksum, 1977;",
"ref_id": "BIBREF2"
},
{
"start": 196,
"end": 217,
"text": "Goel and Byrne, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "delta(F) = argmin_{(E', A')} sum_{(E, A)} L(E, A, E', A') P(E, A | F) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "(3) We shall refer to the decoder given by this equation as the Minimum Bayes-Risk (MBR) decoder. The MBR decoder can be thought of as selecting a consensus translation: For each sentence F , Equation 3 selects the translation that is closest on average to all the likely translations and alignments. The closeness is measured under the loss function of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
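The N-best restriction of Equations 3 and 4 can be sketched as follows. Treating hypotheses as plain strings and passing in any of the loss functions above are assumptions made for illustration; this is not the paper's implementation.

```python
def mbr_decode(nbest, loss):
    """Select the consensus hypothesis from an N-best list (Equation 3
    restricted to the list).

    `nbest` is a list of (hypothesis, joint_score) pairs from a baseline
    model; each posterior is the joint score normalized over the list
    (Equation 4). `loss(candidate, other)` scores one hypothesis against
    another.
    """
    total = sum(score for _, score in nbest)
    posterior = [(hyp, score / total) for hyp, score in nbest]

    def expected_loss(candidate):
        # Expected loss of `candidate` against every hypothesis in the list.
        return sum(p * loss(candidate, hyp) for hyp, p in posterior)

    return min((hyp for hyp, _ in nbest), key=expected_loss)
```

Under the 0/1 loss of Equation 5, the expected loss of a candidate is one minus its own posterior, so this procedure returns the highest-posterior hypothesis, i.e. the MAP decoder of Equation 6; under softer losses it can prefer a lower-probability hypothesis that agrees more with the rest of the list.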
{
"text": "This optimal decoder has the difficulties of search (minimization) and computing the expectation under the true distribution. In practice, we will consider the space of translations to be an N -best list of translation alternatives generated under a baseline translation model. Of course, we do not have access to the true distribution over translations. We therefore use statistical translation models (Och, 2002) to approximate the distribution P(E, A | F) . Decoder Implementation: The MBR decoder (Equation 3) on the N -best list is implemented as",
"cite_spans": [
{
"start": 403,
"end": 414,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "E-hat = argmin_{(E_i, A_i) : 1 <= i <= N} sum_{j=1}^{N} L(E_j, A_j, E_i, A_i) P(E_j, A_j | F) , where (E_i, A_i) and (E_j, A_j) range over the hypotheses in the N -best list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "This is a rescoring procedure that searches for consensus under a given loss function. The posterior probability of each hypothesis in the N -best list is derived from the joint probability assigned by the baseline translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(E_i, A_i | F) = P(E_i, A_i, F) / sum_{j=1}^{N} P(E_j, A_j, F)",
"eq_num": "(4)"
}
],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "The conventional Maximum A Posteriori (MAP) decoder can be derived as a special case of the MBR decoder by considering a loss function that assigns an equal cost (say 1) to all misclassifications. Under the 0/1 loss function,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(E, A, E', A') = 0 if E = E' and A = A' ; 1 otherwise,",
"eq_num": "(5)"
}
],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "the decoder of Equation 3 reduces to the MAP decoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "delta_MAP(F) = argmax_{(E', A')} P(E', A' | F) .",
"eq_num": "(6)"
}
],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "This illustrates why we are interested in MBR decoders based on other loss functions: the MAP decoder is optimal with respect to a loss function that is very harsh. It does not distinguish between different types of translation errors and good translations receive the same penalty as poor translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Loss Functions",
"sec_num": "2.4"
},
{
"text": "We performed our experiments on the Large-Data Track of the NIST Chinese-to-English MT task (NIST, 2003) . The goal of this task is the translation of news stories from Chinese to English. The test set has a total of 1791 sentences, consisting of 993 sentences from the NIST 2001 MT-eval set and 878 sentences from the NIST 2002 MT-eval set. Each Chinese sentence in this set has four reference translations.",
"cite_spans": [
{
"start": 92,
"end": 104,
"text": "(NIST, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of MBR Decoders",
"sec_num": "4"
},
{
"text": "The performance of the baseline and the MBR decoders under the different loss functions was measured with respect to the four reference translations provided for the test set. Four evaluation metrics were used. These were multi-reference Word Error Rate (mWER) (Och, 2002) , multi-reference Position-independent word Error Rate (mPER) (Och, 2002) , BLEU and multi-reference BiTree Error Rate.",
"cite_spans": [
{
"start": 261,
"end": 272,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
},
{
"start": 335,
"end": 346,
"text": "(Och, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Among these evaluation metrics, the BLEU score directly takes into account multiple reference translations (Papineni et al., 2001) . In case of the other metrics, we consider multiple references in the following way. For each sentence, we compute the error rate of the hypothesis translation with respect to the most similar reference translation under the corresponding loss function.",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Papineni et al., 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
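The multi-reference scheme described above (score each hypothesis against the most similar of the available references under the given loss) can be sketched as follows; the concrete loss passed in is whatever single-reference metric is being extended, an assumption of this illustration.

```python
def multi_reference_error(hyp, references, loss):
    """Error of a hypothesis against the most similar reference translation,
    as done here for the error-rate metrics (mWER, mPER, BiTree error rate).
    """
    return min(loss(hyp, ref) for ref in references)
```

For example, with a loss that counts the word-length difference, a hypothesis matching any one of the four references in length scores zero even if the others differ.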
{
"text": "In our experiments, a baseline translation model (JHU, 2003) , trained on a Chinese-English parallel corpus (NIST, 2003) ",
"cite_spans": [
{
"start": 49,
"end": 60,
"text": "(JHU, 2003)",
"ref_id": null
},
{
"start": 108,
"end": 120,
"text": "(NIST, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
{
"text": "(F X W w S Y English words and F # S W \u00a7 Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
{
"text": "Chinese words), was used to generate 1000-best translation hypotheses for each Chinese sentence in the test set. The 1000-best lists were then rescored using the different translation loss functions described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
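The rescoring step can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: `nbest` is a list of (hypothesis, log-probability) pairs from the baseline decoder, and `loss` is any of the translation loss functions of Section 2 (e.g. per-sentence WER, or one minus sentence-level BLEU).

```python
# Sketch (assumed): MBR rescoring of an N-best list -- pick the candidate
# with minimum expected loss under the posterior over the list.
import math

def mbr_decode(nbest, loss):
    # Normalize the posterior over the N-best list (log-sum-exp for stability).
    max_lp = max(lp for _, lp in nbest)
    probs = [math.exp(lp - max_lp) for _, lp in nbest]
    z = sum(probs)
    probs = [p / z for p in probs]
    # Expected loss of each candidate E' is sum_E L(E, E') P(E|F).
    best, best_risk = None, float("inf")
    for cand, _ in nbest:
        risk = sum(p * loss(hyp, cand) for (hyp, _), p in zip(nbest, probs))
        if risk < best_risk:
            best, best_risk = cand, risk
    return best
```

Under a 0/1 loss this reduces to MAP decoding; task-specific losses can rank a different candidate first.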
{
"text": "The English sentences in the P -best lists were parsed using the Collins parser (Collins, 1999) , and the Chinese sentences were parsed using a Chinese parser provided to us by D. Bikel (Bikel and Chiang, 2000) . The English parser was trained on the Penn Treebank and the Chinese parser on the Penn Chinese treebank.",
"cite_spans": [
{
"start": 80,
"end": 95,
"text": "(Collins, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 186,
"end": 210,
"text": "(Bikel and Chiang, 2000)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
{
"text": "Under each loss function, the MBR decoding was performed using Equation 3. We say we have a matched condition when the same loss function is used in both the error rate and the decoder design. The performance of the MBR decoders on the NIST 2001+2002 test set is reported in Table 3 . For all performance metrics, we show the 70% confidence interval with respect to the MAP baseline computed using bootstrap resampling (Press et al., 2002; Och, 2003) . We note that this significance level does meet the customary criteria for minimum significance intervals of 68.3% (Press et al., 2002) .",
"cite_spans": [
{
"start": 419,
"end": 439,
"text": "(Press et al., 2002;",
"ref_id": "BIBREF18"
},
{
"start": 440,
"end": 450,
"text": "Och, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 567,
"end": 587,
"text": "(Press et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
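The bootstrap confidence intervals can be computed along the following lines. This is a sketch of the resampling recipe cited above, under a simplifying assumption: it resamples per-sentence scores and reports an interval on the mean difference between two systems, whereas a corpus-level metric such as BLEU must be recomputed on each resampled test set.

```python
# Sketch (assumed): bootstrap-resampling confidence interval for the
# difference in average score between systems A and B on a test set.
import random

def bootstrap_interval(scores_a, scores_b, num_samples=1000, level=0.70, seed=0):
    """scores_a/scores_b: per-sentence scores, aligned by sentence.
    Returns (lo, hi) for the mean difference A - B at the given level."""
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(num_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # sample sentences w/ replacement
        da = sum(scores_a[i] for i in idx) / n
        db = sum(scores_b[i] for i in idx) / n
        diffs.append(da - db)
    diffs.sort()
    lo_i = int((1 - level) / 2 * num_samples)
    hi_i = int((1 + level) / 2 * num_samples) - 1
    return diffs[lo_i], diffs[hi_i]
```

An interval that excludes zero at the chosen level indicates a significant difference between the two decoders.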
{
"text": "We observe in most cases that the MBR decoder under a loss function performs the best under the corresponding error metric i.e. matched conditions perform the best. The gains from MBR decoding under matched conditions are statistically significant in most cases. We note that the MAP decoder is not optimal in any of the cases. In particular, the translation performance under the BLEU metric can be improved by using MBR relative to MAP decoding. This shows the value of finding decoding procedure matched to the performance criterion of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
{
"text": "We also notice some affinity among the loss functions. The MBR decoding under the Bitree Loss function performs better under the WER relative to the MAP decoder, but perform poorly under the BLEU metric. The MBR decoder under WER and PER perform better than the MAP decoder under all error metrics. The MBR decoder under BLEU loss function obtains a similar (or worse) performance relative to MAP decoder on all metrics other than BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Performance",
"sec_num": "4.2"
},
{
"text": "We have described the formulation of Minimum Bayes-Risk decoders for machine translation. This is a general framework that allows us to build special purpose decoders from general purpose models. The procedure aims at direct minimization of the expected risk of translation errors under a given loss function. In this paper we have focused on two situations where this framework could be applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Given an MT evaluation metric of interest such as BLEU, PER or WER, we can use this metric as a loss function within the MBR framework to design decoders optimized for the evaluation criterion. In particular, the MBR decoding under the BLEU loss function can yield further improvements on top of MAP decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Suppose we are interested in improving syntactic structure of automatic translations and would like to use an existing statistical MT system that is trained without any linguistic features. We have shown in such a situation how MBR decoding can be applied to the MT system. This can be done by the design of translation loss functions from varied linguistic analyzes. We have shown the construction of a Bitree loss function to compare parsetrees of any two translations using alignments with respect to a parse-tree for the source sentence. The loss function therefore avoids the problem of unconstrained tree-to-tree alignment. Using an example, we have shown that this loss function can measure qualities of translation that string (and ngram) based metrics cannot capture. The MBR decoder under this loss function gives improvements under an evaluation metric based on the loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We present results under the Bitree loss function as an example of incorporating linguistic information into a loss function; we have not yet measured its correlation with human assessments of translation quality. This loss function allows us to integrate syntactic structure into the statistical MT framework without building detailed models of syntactic features and retraining models from scratch. However, we emphasize that the MBR techniques do not preclude the construction of complex models of syntactic structure. Translation models that have been trained with linguistic features could still benefit by the application of MBR decoding procedures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "That machine translation evaluation continues to be an active area of research is evident from recent workshops (AMTA, 2003) . We expect new automatic MT evaluation metrics to emerge frequently in the future. Given any translation metric, the MBR decoding framework will allow us to optimize existing MT systems for the new criterion. This is intended to compensate for any mismatch between decoding strategy of MT systems and their evaluation criteria. While we have focused on developing MBR procedures for loss functions that measure various aspects of translation quality, this framework can also be used with loss functions which measure application-specific error criteria.",
"cite_spans": [
{
"start": 112,
"end": 124,
"text": "(AMTA, 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We now describe related training and search procedures for NLP that explicitly take into consideration taskspecific performance metrics. Och (2003) developed a training procedure that incorporates various MT evaluation criteria in the training procedure of log-linear MT models. Foster et al. (2002) developed a text-prediction system for translators that maximizes expected benefit to the translator under a statistical user model. In parsing, Goodman (1996) developed parsing algorithms that are appropriate for specific parsing metrics. There has also been recent work that combines 1-best hypotheses from multiple translation systems (Bangalore et al., 2002) ; this approach uses string-edit distance to align the hypotheses and rescores the resulting lattice with a language model.",
"cite_spans": [
{
"start": 137,
"end": 147,
"text": "Och (2003)",
"ref_id": "BIBREF16"
},
{
"start": 279,
"end": 299,
"text": "Foster et al. (2002)",
"ref_id": "BIBREF8"
},
{
"start": 445,
"end": 459,
"text": "Goodman (1996)",
"ref_id": "BIBREF10"
},
{
"start": 638,
"end": 662,
"text": "(Bangalore et al., 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In future work we plan to extend the search space of MBR decoders to translation lattices produced by the baseline system. Translation lattices (Ueffing et al., 2002; Kumar and Byrne, 2003) are a compact representation of a large set of most likely translations generated by an MT system. While an P -best list contains only a limited reordering of hypotheses, a translation lattice will contain hypotheses with a vastly greater number of re-orderings. We are developing efficient lattice search procedures for MBR decoders. By extending the search space of the decoder to a much larger space than the P -best list, we expect further performance improvements.",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "(Ueffing et al., 2002;",
"ref_id": "BIBREF20"
},
{
"start": 167,
"end": 189,
"text": "Kumar and Byrne, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "MBR is a promising modeling framework for statistical machine translation. It is a simple model rescoring framework that improves well-trained statistical models Performance Metrics Decoder BLEU (%) mWER(%) mPER (%) mBiTree Error Rate(%) 70% Confidence Intervals +/-0.3 +/-0.9 +/-0.6 +/-1. For each metric, the performance under a matched condition is shown in bold. Note that better results correspond to higher BLEU scores and to lower error rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "by tuning them for particular criteria. These criteria could come from evaluation metrics or from other desiderata (such as syntactic well-formedness) that we wish to see in automatic translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was performed as part of the 2003 Johns Hopkins Summer Workshop research group on Syntax for Statistical Machine Translation. We would like to thank all the group members for providing various resources and tools and contributing to useful discussions during the course of the workshop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Workshop on Machine Translation Evaluation",
"authors": [
{
"first": "Amta",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AMTA. 2003. Workshop on Machine Translation Evaluation, MT Summit IX.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bootstrapping bilingual data using consensus translation for a multilingual instant messaging system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bangalore, V. Murdock, and G. Riccardi. 2002. Boot- strapping bilingual data using consensus translation for a multilingual instant messaging system. In Proceed- ings of COLING, Taipei, Taiwan.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mathematical Statistics: Basic Ideas and Selected topics",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Bickel",
"suffix": ""
},
{
"first": "K",
"middle": [
"A"
],
"last": "Doksum",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. J. Bickel and K. A. Doksum. 1977. Mathematical Statistics: Basic Ideas and Selected topics. Holden- Day Inc., Oakland, CA, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two statistical parsing models applied to the chinese treebank",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second Chinese Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bikel and D. Chiang. 2000. Two statistical pars- ing models applied to the chinese treebank. In Pro- ceedings of the Second Chinese Language Processing Workshop, pages 1-6, Hong Kong.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the weighted perceptron",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete struc- tures, and the weighted perceptron. In Proceedings of EMNLP, Philadelphia, PA, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Head-driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. J. Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics. In Proc. of HLT 2002, San Diego, CA. USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Userfriendly text prediction for translators",
"authors": [
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Foster, P. Langlais, and G. Lapalme. 2002. User- friendly text prediction for translators. In Proc. of EMNLP, Philadelphia, PA, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimum Bayes-risk automatic speech recognition",
"authors": [
{
"first": "V",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "2",
"pages": "115--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Goel and W. Byrne. 2000. Minimum Bayes-risk auto- matic speech recognition. Computer Speech and Lan- guage, 14(2):115-135.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syntax for statistical machine translation, Final report",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of ACL-1996",
"volume": "",
"issue": "",
"pages": "177--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goodman. 1996. Parsing algorithms and metrics. In Proc. of ACL-1996, pages 177-183, Santa Cruz, CA, USA. JHU. 2003. Syntax for statistical machine translation, Final report, JHU summer workshop.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum Bayes-Risk alignment of bilingual texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kumar and W. Byrne. 2002. Minimum Bayes-Risk alignment of bilingual texts. In Proc. of EMNLP, Philadelphia, PA, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A weighted finite state transducer implementation of the alignment template model for statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kumar and W. Byrne. 2003. A weighted finite state transducer implementation of the alignment template model for statistical machine translation. In Proceed- ings of HLT-NAACL, Edmonton, Canada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Precision and recall of machine translation",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Turian",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. D. Melamed, R. Green, and J. P. Turian. 2003. Preci- sion and recall of machine translation. In Proceedings of the HLT-NAACL, Edmonton, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The NIST Machine Translation Evaluations",
"authors": [
{
"first": "",
"middle": [],
"last": "Nist",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2003. The NIST Machine Translation Evalua- tions. http://www.nist.gov/speech/tests/mt/.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical Machine Translation: From Single Word Models to Alignment Templates",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 2002. Statistical Machine Translation: From Single Word Models to Alignment Templates. Ph.D. thesis, RWTH Aachen, Germany.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, Sapporo, Japan.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Numerical Recipes in C++",
"authors": [
{
"first": "W",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2002. Numerical Recipes in C++. Cam- bridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Active learning for statistical natural language parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL 2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Tang, X. Luo, and S. Roukos. 2002. Active learning for statistical natural language parsing. In Proceedings of ACL 2002, Philadelphia, PA, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Generation of word graphs in statistical machine translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "156--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Ueffing, F. Och, and H. Ney. 2002. Generation of word graphs in statistical machine translation. In Proc. of EMNLP, pages 156-163, Philadelphia, PA, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Two competing English translations for a Chinese sentence with their word-to-word alignments.A E export of high\u2212tech products in guangdong in first two months this year reached 3.76 billion US dollars jin\u2212nian qian liangyue guangdong gao xinjishu chanpin chukou sanqidianliuyi meiyuan The reference translation for the Chinese sentence fromFigure 1with its word-to-word alignments. Words in the Chinese (English) sentence shown as unaligned are aligned to the NULL word in the English (Chinese) sentence.We first assume that a node A",
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>Loss Functions BLEU (%)</td><td>\u00a7 $ \u00a1 \u00a1 x 26.4</td><td>\u00a7</td><td>$ \u00a1 26.4 \u00a1</td></tr><tr><td>WER (%)</td><td>70.6</td><td/><td>70.6</td></tr><tr><td>PER (%)</td><td>23.5</td><td/><td>23.5</td></tr><tr><td>BiTree Error Rate (%)</td><td>65.4</td><td/><td>92.3</td></tr><tr><td colspan=\"4\">Table 2: Comparison of the different loss functions for</td></tr><tr><td colspan=\"4\">hypothesis and reference translations from Figures 1, 2.</td></tr><tr><td colspan=\"4\">Statistical Machine Translation (Brown et al., 1990) can</td></tr><tr><td colspan=\"4\">be formulated as a mapping of a word sequence in a</td></tr><tr><td colspan=\"4\">source language to word sequence guage that has a word-to-word alignment \u00a1 in the target lan-\u00a3 \u00a2 \u00a4 r elative to . \u00a2 Given the source sentence , the MT decoder ) % pro-! duces a target word string \u00a1 with word-to-word align-\u00a6 \u00a2 ment \u00a4 \u00a5 \u00a2 . Relative to a reference translation with word \u00a1 alignment , the decoder performance is measured as \u00a4</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"text": "Translation performance of the MBR decoder under various loss functions on the NIST 2001+2002 Test set.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}