{
"paper_id": "C10-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:55:37.493731Z"
},
"title": "EMDC: A Semi-supervised Approach for Word Alignment",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a novel semisupervised word alignment technique called EMDC that integrates discriminative and generative methods. A discriminative aligner is used to find high precision partial alignments that serve as constraints for a generative aligner which implements a constrained version of the EM algorithm. Experiments on small-size Chinese and Arabic tasks show consistent improvements on AER. We also experimented with moderate-size Chinese machine translation tasks and got an average of 0.5 point improvement on BLEU scores across five standard NIST test sets and four other test sets.",
"pdf_parse": {
"paper_id": "C10-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a novel semisupervised word alignment technique called EMDC that integrates discriminative and generative methods. A discriminative aligner is used to find high precision partial alignments that serve as constraints for a generative aligner which implements a constrained version of the EM algorithm. Experiments on small-size Chinese and Arabic tasks show consistent improvements on AER. We also experimented with moderate-size Chinese machine translation tasks and got an average of 0.5 point improvement on BLEU scores across five standard NIST test sets and four other test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word alignment is a crucial component in statistical machine translation (SMT). From a Machine Learning perspective, the models for word alignment can be roughly categorized as generative models and discriminative models. The widely used word alignment tool, i.e. GIZA++ (Och and Ney, 2003) , implements the well-known IBM models (Brown et al., 1993) and the HMM model (Vogel et al., 1996) , which are generative models. For language pairs such as Chinese-English, the word alignment quality is often unsatisfactory. There has been increasing interest on using manual alignments in word alignment tasks, which has resulted in several discriminative models. Ittycheriah and Roukos (2005) proposed to use only manual alignment links in a maximum entropy model, which is considered supervised. Also, a number of semi-supervised word aligners have been proposed (Taskar et al., 2005; Liu et al., 2005; Moore, 2005; Blunsom and Cohn, 2006; Niehues and Vogel, 2008) . These methods use held-out manual alignments to tune weights for discriminative models, while using the model parameters, model scores or alignment links from unsupervised word aligners as features. Callison-Burch et. al. (2004) proposed a method to interpolate the parameters estimated by sentence-aligned and word-aligned corpus. Also, there are recent attempts to combine multiple alignment sources using alignment confidence measures so as to improve the alignment quality (Huang, 2009) .",
"cite_spans": [
{
"start": 271,
"end": 290,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 330,
"end": 350,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 369,
"end": 389,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF17"
},
{
"start": 657,
"end": 686,
"text": "Ittycheriah and Roukos (2005)",
"ref_id": "BIBREF11"
},
{
"start": 858,
"end": 879,
"text": "(Taskar et al., 2005;",
"ref_id": "BIBREF16"
},
{
"start": 880,
"end": 897,
"text": "Liu et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 898,
"end": 910,
"text": "Moore, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 911,
"end": 934,
"text": "Blunsom and Cohn, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 935,
"end": 959,
"text": "Niehues and Vogel, 2008)",
"ref_id": "BIBREF14"
},
{
"start": 1161,
"end": 1190,
"text": "Callison-Burch et. al. (2004)",
"ref_id": "BIBREF3"
},
{
"start": 1439,
"end": 1452,
"text": "(Huang, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, the question we address is whether we can jointly improve discriminative models and generative models by feeding the information we get from the discriminative aligner back into the generative aligner. Examples of this line of research include Model 6 (Och and Ney, 2003) and the EMD training approach proposed by Fraser and Marcu (2006) and its extension called LEAF aligner (Fraser and Marcu, 2007) . These approaches use labeled data to tune additional parameters to weight different components of the IBM models such as the lexical translation model, the distortion model and the fertility model. These methods are proven to be effective in improving the quality of alignments. However, the discriminative training in these methods is restricted in using the model components of generative models, in other words, incorporating new features is difficult.",
"cite_spans": [
{
"start": 267,
"end": 286,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 329,
"end": 352,
"text": "Fraser and Marcu (2006)",
"ref_id": "BIBREF4"
},
{
"start": 391,
"end": 415,
"text": "(Fraser and Marcu, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Instead of using discriminative training methods to tune the weights of generative models, in this paper we propose to use a discriminative word aligner to produce reliable constraints for the EM algorithm. We call this new training scheme EMDC (Expectation-Maximization-Discrimination-Constraint). The methodology can be viewed as a variation of bootstrapping. It enables the generative models to interact with discriminative models at the data level instead of the model level. Furthermore, with a discriminative word aligner that uses generative word aligner's output as features, we create a feedback loop that can iteratively improve the quality of both aligners. The major contributions of this paper are: 1) The EMDC training scheme, which ties the generative and discriminative aligners together and enables future research on integrating other discriminative aligners. 2) An extended generative aligner based on GIZA++ that allows to perform constrained EM training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we present the EMDC training scheme. Section 3 provides details of the constrained EM algorithm. In Section 4, we introduce the discriminative aligner and link filtering. Section 5 provides the experiment set-up and the results. Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The EMDC training scheme consists of three parts, namely EM, Discrimination, and Constraints. As illustrated in Figure 1 , a large unlabeled training set is first aligned with a generative aligner (GIZA++ for the purpose of this paper). The generative aligner outputs the model parameters and the Viterbi alignments for both source-to-target and target-to-source directions. Afterwards, a discriminative aligner (we use the one described in (Niehues and Vogel, 2008) ), takes the lexical translation model, fertility model and Viterbi alignments from both directions as features, and is tuned to optimize the AER on a small manually aligned tuning set. Afterwards, the alignment links generated by the discriminative aligner are filtered according to their likelihood, resulting in a subset of links that has high precision and low recall. The next step is to put these high precision alignment links back into the generative aligner as constraints. A conventional generative word aligner does not support this type of constraints. Thus we developed a constrained EM algorithm that can use the links from a partial alignment as constraints and estimate the model parameters by marginalizing likelihoods.",
"cite_spans": [
{
"start": 441,
"end": 466,
"text": "(Niehues and Vogel, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "EMDC Training Scheme",
"sec_num": "2"
},
{
"text": "After the constrained EM training is performed, we repeat the procedure and put the updated generative models and Viterbi alignment back into the discriminative aligner. We can either fix the number of iterations, or stop the procedure when the gain on AER of a small held-out test set drops be- The key components for the system are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EMDC Training Scheme",
"sec_num": "2"
},
{
"text": "1. A generative aligner that can make use of reliable alignment links as constraints and improve the models/alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EMDC Training Scheme",
"sec_num": "2"
},
{
"text": "A discriminative aligner that outputs confidence scores for alignment links, which allows to obtain high-precision-low-recall alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "While in this paper we derive the reliable links by filtering the alignment generated by a discriminative aligner, such partial alignments may be obtained from other sources as well: manual alignments, specific named entity aligner, noun-phrase aligner, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "As we mentioned in Section 1, the discriminative aligner is not restricted to use features parameters of generative models and Viterbi alignments. However, including the features from generative models is required for iterative training, because the improvement on the quality of these features can in turn improve the discriminative aligner. In our experiments, the discriminative aligner makes heavy use of the Viterbi alignment and the model parameters from the generative aligner. Nonetheless, one can easily replace the discriminative aligner or add new features to it without modifying the training scheme. The open-ended property of the training scheme makes it a promising method to integrate different aligners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
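{
"text": "For concreteness, the overall loop can be sketched in a few lines of Python. This is a minimal illustration only: the helpers train_giza, train_crf_aligner, filter_links, and constrained_em are hypothetical stand-ins, not the actual interfaces of GIZA++ or the CRF aligner.\n\ndef emdc(bitext, tuning_set, threshold=0.8, max_iter=8):\n    # Unconstrained generative training produces model parameters and\n    # Viterbi alignments for both directions.\n    models, viterbi = train_giza(bitext)\n    for _ in range(max_iter):\n        # Tune the discriminative aligner on the small labeled set, using\n        # the generative model parameters and Viterbi alignments as features.\n        crf = train_crf_aligner(tuning_set, features=(models, viterbi))\n        scored_links = crf.align(bitext)\n        # Keep only high-confidence links: a high-precision, low-recall subset.\n        constraints = filter_links(scored_links, threshold)\n        # Re-estimate the generative models under the partial-alignment constraints.\n        models, viterbi = constrained_em(bitext, constraints, init=models)\n    return models, viterbi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},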
{
"text": "In the next two sections, we will describe the key components of this framework in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "In this section we will briefly introduce the constrained EM algorithm we used in the experiment, further details of the algorithm can be found in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "The IBM Models (Brown et al., 1993) are a series of generative models for word alignment. GIZA++ (Och and Ney, 2003) , the most widely used implementation of IBM models and HMM (Vogel et al., 1996) , employs EM algorithm to estimate the model parameters. For simpler models such as Model 1 and Model 2, it is possible to obtain sufficient statistics from all possible alignments in the E-step. However, for fertility-based models such as Models 3, 4, and 5, enumerating all possible alignments is NP-complete. To overcome this limitation, GIZA++ adopts a greedy hill-climbing algorithm, which uses simpler models such as HMM or Model 2 to generate a \"center alignment\" and then tries to find better alignments among its neighbors. The neighbors of an alignment",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 97,
"end": 116,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 177,
"end": 197,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "$a_1^J = [a_1, a_2, \\cdots, a_J]$ with $a_j \\in [0, I]$ are defined as the alignments that can be generated from $a_1^J$ by one of the following two operators: 1. the move operator $m_{[i,j]}$, which sets $a_j := i$, i.e. arbitrarily aligns the target word $f_j$ to the source word $e_i$; 2. the swap operator $s_{[j_1,j_2]}$, which exchanges $a_{j_1}$ and $a_{j_2}$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "The algorithm will update the center alignment as long as a better alignment can be found, and finally outputs a local optimal alignment. The neighbor alignments of the final center alignment are then used in collecting the counts for the M-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
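{
"text": "As an illustration, the unconstrained hill-climbing step over move and swap neighbors can be sketched as follows; the likelihood callback is an assumed placeholder for the fertility-model score of a complete alignment.\n\ndef hill_climb(a, I, likelihood):\n    # a[j] = i aligns target word j to source word i (0 = empty word);\n    # a[0] is unused so that target positions run from 1 to J.\n    J = len(a) - 1\n    improved = True\n    while improved:\n        improved = False\n        best, best_score = a, likelihood(a)\n        for j in range(1, J + 1):          # move operator m[i,j]\n            for i in range(I + 1):\n                cand = a[:j] + [i] + a[j + 1:]\n                if likelihood(cand) > best_score:\n                    best, best_score, improved = cand, likelihood(cand), True\n        for j1 in range(1, J + 1):         # swap operator s[j1,j2]\n            for j2 in range(j1 + 1, J + 1):\n                cand = list(a)\n                cand[j1], cand[j2] = cand[j2], cand[j1]\n                if likelihood(cand) > best_score:\n                    best, best_score, improved = cand, likelihood(cand), True\n        a = best\n    return a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},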
{
"text": "Step. Och and Ney (2003) proposed a fast implementation of the hill-climbing algorithm that employs two matrices, i.e. Moving Matrix M I\u00d7J and Swapping Matrix S J\u00d7J . Each cell of the matrices stores the value of likelihood difference after applying the corresponding operator. We define a partial alignment constraint of a sentence pair (f J 1 , e I 1 ) as a set of links:",
"cite_spans": [
{
"start": 6,
"end": 24,
"text": "Och and Ney (2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "$\\alpha_I^J = \\{(i, j) \\mid 0 \\le i \\le I, 0 \\le j \\le J\\}$. Given a set of constraints and an alignment $a_1^J = [a_1, a_2, \\cdots, a_J]$ on the sentence pair $(f_1^J, e_1^I)$, the translation probability $\\Pr(f_1^J|e_1^I)$ will be zero if the alignment is inconsistent with the constraints. Constraints of the form $(0, j)$ or $(i, 0)$ are used to explicitly represent that word $f_j$ or $e_i$ is aligned to the empty word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "Under the assumptions of the IBM models, there are two situations that a J 1 is inconsistent with \u03b1 J I : 1. Target word misalignment: The IBM models assume that one target word can only be aligned to one source word. Therefore, if the target word f j aligns to a source word e i , while the constraint \u03b1 J I suggests f j should be aligned to e i , the alignment violates the constraint and thus is considered inconsistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "2. Source word to empty word misalignment: if a source word is aligned to the empty word, it cannot be aligned to any concrete target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
{
"text": "However, the partial alignments, which allow n-to-n alignments, may already violate the 1-to-n alignment restriction of the IBM models. In these cases, we relax the condition in situation 1 that if the alignment link a j * is consistent with any one of the conflicting target-to-source constraints, it will be considered consistent. Also, we arbitrarily assign the source word to empty word constraints higher priorities than other constraints, because unlike situation 1, it does not have the problem of conflicting with other constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},
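{
"text": "A small sketch of the resulting consistency test, including the relaxation, assuming constraints are stored as a set of (i, j) pairs with index 0 denoting the empty word (a hypothetical data layout, not the GIZA++ internals):\n\ndef is_consistent(a, constraints):\n    # a[j] = i aligns target word j to source word i; a[0] is unused.\n    src_to_empty = {i for (i, j) in constraints if j == 0 and i > 0}\n    for j in range(1, len(a)):\n        required = {ci for (ci, cj) in constraints if cj == j}\n        # Situation 1 (relaxed): a constrained target word must take any one\n        # of its allowed source words.\n        if required and a[j] not in required:\n            return False\n        # Situation 2: a source word constrained to the empty word may not\n        # be aligned to any concrete target word.\n        if a[j] in src_to_empty:\n            return False\n    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained EM algorithm",
"sec_num": "3"
},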
{
"text": "To ensure that resulting center alignment be consistent with the constraints, we need to split the hill-climbing algorithm into two stages: 1) optimize towards the constraints and 2) optimize towards the optimal alignment under the constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
{
"text": "From a seed alignment, we first move the alignment towards the constraints by choosing a move or swap operator that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
{
"text": "1. produces the alignment that has the highest likelihood among alignments generated by other operators, 2. eliminates at least one inconsistent link.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
{
"text": "We iteratively update the alignment until no other inconsistent link can be removed. The algorithm implies that we force the seed alignment to be closer to the constraints while trying to find the best consistent alignment. After we find the consistent alignment, we proceed to optimize towards the optimal alignment under the constraints. The algorithm sets the value of the cells in moving/swapping matrices to negative if the corresponding operators will lead to an inconsistent alignment. The moving matrix needs to be processed only once, whereas the swapping matrix needs to be updated every iteration, since once the alignment is updated, the possible violations will also change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
{
"text": "If a source word e i is aligned to the empty word, we set M i,j = \u22121, \u2200j. The swapping matrix does not need to be modified in this case because the swapping operator will not introduce new links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
{
"text": "Because the cells that can lead to violations are set to negative, the operators will never be picked when updating the center alignments. This ensures the consistency of the final center alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},
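{
"text": "Under the same assumed data layout as before, the masking step might look like this sketch (M and S are plain 2-D arrays of likelihood differences; the indexing convention is illustrative, not the GIZA++ implementation):\n\ndef link_ok(i, j, constraints, src_to_empty):\n    # True if aligning target word j to source word i is consistent.\n    required = {ci for (ci, cj) in constraints if cj == j}\n    if required and i not in required:\n        return False\n    return i not in src_to_empty\n\ndef mask_operators(M, S, a, constraints):\n    # M[i][j-1] holds the delta for move m[i,j]; S[j1-1][j2-1] for swap\n    # s[j1,j2]. Cells whose operator would yield an inconsistent\n    # alignment are set negative, so hill climbing never selects them.\n    src_to_empty = {i for (i, j) in constraints if j == 0 and i > 0}\n    J = len(a) - 1\n    for j in range(1, J + 1):\n        for i in range(len(M)):\n            if not link_ok(i, j, constraints, src_to_empty):\n                M[i][j - 1] = -1.0\n    # S must be re-masked after every center-alignment update, because\n    # swaps relocate existing links to new target positions.\n    for j1 in range(1, J + 1):\n        for j2 in range(j1 + 1, J + 1):\n            if not (link_ok(a[j2], j1, constraints, src_to_empty)\n                    and link_ok(a[j1], j2, constraints, src_to_empty)):\n                S[j1 - 1][j2 - 1] = -1.0\n    return M, S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained hill-climbing algorithm",
"sec_num": "3.1"
},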
{
"text": "After finding the center alignment, we need to collect counts from neighbor alignments so that the M-step can normalize the counts to produce the model parameters for the next step. In this stage, we want to make sure all the inconsistent alignments in the neighbor set of the center alignment be ruled out from the sufficient statistics, i.e. have zero probability. Similar to the constrained hill climbing algorithm, we can manipulate the moving/swapping matrices to effectively exclude inconsistent alignments. Since the original count collection algorithm depends only on moving and swapping matrices, we just need to bypass all the cells which hold negative values, i.e. represent inconsistent alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Count Collection",
"sec_num": "3.2"
},
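{
"text": "With the matrices masked as above, bypassing the negative cells during count collection reduces to a pair of guarded loops; neighbor_counts is a hypothetical callback that accumulates the sufficient statistics of one neighbor alignment.\n\ndef collect_counts(M, S, neighbor_counts):\n    # Only non-negative cells contribute; masked cells, i.e. inconsistent\n    # neighbors, are skipped, which forces their posteriors to zero.\n    for i in range(len(M)):\n        for j in range(len(M[0])):\n            if M[i][j] >= 0:\n                neighbor_counts((\"move\", i, j + 1))\n    J = len(S)\n    for j1 in range(J):\n        for j2 in range(j1 + 1, J):\n            if S[j1][j2] >= 0:\n                neighbor_counts((\"swap\", j1 + 1, j2 + 1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Count Collection",
"sec_num": "3.2"
},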
{
"text": "We can also view the algorithm as forcing the posteriors of inconsistent alignments to zero, and therefore increase the posteriors of consistent alignments. When no constraint is given, the algo-rithm falls back to conventional EM, and when all the alignments are known, the algorithm becomes fully supervised. And if the alignment quality can be improved if high-precision partial alignment links is given as constraints. In we experimented with using a dictionary to generate such constraints, and in we experimented with manual word alignments from Mechanical Turk. And in this paper we try to use an alternative method that uses a discriminative aligner and link filtering to generate such constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Count Collection",
"sec_num": "3.2"
},
{
"text": "We employ the CRF-based discriminative word aligner described in (Niehues and Vogel, 2008) . The aligner can use a variety of knowledge sources as features, such as: the fertility and lexical translation model parameters from GIZA++, the Viterbi alignment from both source-to-target and target-to-source directions. It can also make use of first-order features which model the dependency between different links, the Parts-of-Speech tagging features, the word form similarity feature and the phrase features. In this paper we use all the features mentioned above except the POS and phrase features.",
"cite_spans": [
{
"start": 65,
"end": 90,
"text": "(Niehues and Vogel, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Aligner and Link Filtering",
"sec_num": "4"
},
{
"text": "The aligner is trained using a beliefpropagation (BP) algorithm, and can be optimized to maximize likelihood or directly optimize towards AER on a tuning set. The aligner outputs confidence scores for alignment links, which allows us to control the precision and recall rate of the resulting alignment. Guzman et al. (2009) experimented with different alignments produced by adjusting the filtering threshold for the alignment links and showed that they could get high-precision-low-recall alignments by having a higher threshold. Therefore, we replicated the confidence filtering procedures to produce the partial alignment constraints. Afterwards we iterate by putting the partial alignments back to the constrained word alignment algorithm described in section 3.",
"cite_spans": [
{
"start": 303,
"end": 323,
"text": "Guzman et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Aligner and Link Filtering",
"sec_num": "4"
},
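{
"text": "The filtering step itself is simple; a minimal sketch, assuming the aligner emits (i, j, confidence) triples (an illustrative format):\n\ndef filter_links(scored_links, threshold=0.8):\n    # Trade recall for precision: only links at or above the confidence\n    # threshold survive and become constraints for the next EM run.\n    return {(i, j) for (i, j, conf) in scored_links if conf >= threshold}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Aligner and Link Filtering",
"sec_num": "4"
},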
{
"text": "Although the discriminative aligner performs well in supplying high precision constraints, it does not model the null alignment explicitly. Table 2 : The qualities of the constraints Hence we are currently not able to provide source word to empty word alignment constraints which have been proven to be effective in improving the alignment quality in . Due to space limitation, please refer to: (Niehues and Vogel, 2008; Guzman et al., 2009) for further details of the aligner and link filtering, respectively.",
"cite_spans": [
{
"start": 395,
"end": 420,
"text": "(Niehues and Vogel, 2008;",
"ref_id": "BIBREF14"
},
{
"start": 421,
"end": 441,
"text": "Guzman et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discriminative Aligner and Link Filtering",
"sec_num": "4"
},
{
"text": "To validate the proposed training scheme, we performed two sets of experiments. First of all, we experimented with a small manually aligned corpus to evaluate the ability of the algorithm to improve the AER. The experiment was performed on Chinese to English and Arabic to English tasks. Secondly, we experimented with a moderate size corpus and performed translation tasks to observe the effects in translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In order to measure the effects of EMDC in alignment quality, we experimented with Chinese-English and Arabic-English manually aligned corpora. The statistics of these sets are shown in Table 1. We split the data into two fragments, the first 100 sentences (Set A) and the remaining (Set B). We trained generative IBM models using the Set B, and tuned the discriminative aligner using the Set A. We evaluated the AER on Set B, but in any of the training steps the manual alignments of Set B were not used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},
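{
"text": "Throughout this section, AER is computed in the standard way from sure (S) and possible (P) reference links, with P assumed to include S (Och and Ney, 2003); a small reference implementation for clarity:\n\ndef aer(hyp, sure, possible):\n    # AER = 1 - (|A & S| + |A & P|) / (|A| + |S|), where A is the\n    # hypothesis alignment; all arguments are sets of (i, j) links.\n    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},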
{
"text": "In each iteration of EDMC, we load the model parameters from the previous step and continue training using the new constraints. Therefore, it is important to compare the performance of continuous training against an unconstrained baseline, because variation in alignment quality could be attributed to either the effect of more training iterations or to the effect of semi-supervised training scheme. In Figures 3 and 4 we show the alignment quality for each iteration. Iteration 0 is the baseline, which comes from standard GIZA++ training 1 . The grey dash curves represent unconstrained Model 4 training, and the curves with start, circle, cross and diamond markers are constrained EM alignments with 0.6, 0.7, 0.8 and 0.9 filtering thresholds respectively. As we can see from the results, when comparing only the mono-directional trainings, the alignment qualities improve over the unconstrained training in all the metrics (precision, recall and AER). From Table 2, we observe that the quality of discriminative aligner also improved. Nonetheless, when we consider the heuristically symmetrized alignment 2 , we observe mixed results. For instance, for the Chinese-English case we observe that AER improves over iterations, but this is the result of a increasingly higher recall rate in detriment of precision. Ayan and Dorr (2006) pointed out that grow-diag-final symmetrization tends to output alignments with high recall and low precision. However this does not fully explain the tendency we observed between iterations. The characteristics of the alignment modified by EDMC that lead to larger improvements in mono-directional trainings but a precision drop with symmetrization heuristics needs to be addressed in future work.",
"cite_spans": [
{
"start": 1316,
"end": 1336,
"text": "Ayan and Dorr (2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 404,
"end": 419,
"text": "Figures 3 and 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},
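{
"text": "For reference, the grow-diag-final-and heuristic used for symmetrization (footnote [2]) can be sketched as follows; this is the commonly described variant, and implementations differ in iteration-order details.\n\ndef grow_diag_final_and(src2tgt, tgt2src):\n    # src2tgt and tgt2src are sets of (i, j) links from the two\n    # mono-directional alignments.\n    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),\n                 (-1, -1), (-1, 1), (1, -1), (1, 1)]\n    alignment = src2tgt & tgt2src            # start from the intersection\n    union = src2tgt | tgt2src\n    added = True\n    while added:                             # grow-diag: add union links that\n        added = False                        # neighbor the current alignment\n        for (i, j) in sorted(union - alignment):\n            unaligned = (all(ii != i for (ii, jj) in alignment)\n                         or all(jj != j for (ii, jj) in alignment))\n            touches = any((i + di, j + dj) in alignment for (di, dj) in neighbors)\n            if unaligned and touches:\n                alignment.add((i, j))\n                added = True\n    for (i, j) in sorted(union - alignment):  # final-and: add links where\n        if (all(ii != i for (ii, jj) in alignment)  # both words are unaligned\n                and all(jj != j for (ii, jj) in alignment)):\n            alignment.add((i, j))\n    return alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},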
{
"text": "Another observation is how the filtering thresholds affect the results. As we can see in Table 3 , for Chinese to English word alignment, the largest gain on the alignment quality is observed when the threshold was set to 0.8, while for Arabic to English, the threshold of 0.7 or 0.6 works better. Table 2 shows the precision, recall, and AER of the constraint links used in the constrained EM al- Table 3 : Improvement on word alignment quality on small corpus after 8 iterations. BL stands for baseline, and NC represents unconstrained Model 4 training, and 0.9, 0.8, 0.7, 0.6 are the thresholds used in alignment link filtering.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 3",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 2",
"ref_id": null
},
{
"start": 398,
"end": 405,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},
{
"text": "gorithm, the numbers are averaged across all iterations, the actual numbers of each iteration only have small differences. Although one might expect that the quality of resulting alignment from constrained EM be proportional to the quality of constraints, from the numbers in Table 2 and 3, we are not able to induce a clear relationship between them, and it could be language-or corpusdependent. However, in practice we nonetheless use a held-out test set to tune this parameter. The Table 4 : Improvement on word alignment quality on moderate-size corpus, where BL and NC represents baseline and non-constrained Model 4 training relationship between quality of constraints and alignment results is an interesting topic for future research.",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 2",
"ref_id": null
},
{
"start": 485,
"end": 492,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects on AER",
"sec_num": "5.1"
},
{
"text": "In this experiment we run the whole machine translation pipeline and evaluate the system on BLEU score. We used the corpus LDC2006G05 which contains 25 million words as training set, the same discriminative tuning set as previously used (100 sentence pairs) and the remaining 21,763 sentence pairs from the hand-aligned corpus of the previous experiment are held-out test set for alignment qualities. A 4-gram language model trained from English GigaWord V1 and V2 corpus was used. The AER scores on the heldout test set are also provided for every iteration. Based on the observation in last experiment, we adopt the filtering threshold of 0.8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects on translation quality",
"sec_num": "5.2"
},
{
"text": "Similar to previous experiment, the heuristically symmetrized alignments have lower precisions than their EMDC counterparts, however the gaps are smaller as shown in Table 4 . We observe 2.85 and 2.21 absolute AER reduction on two directions, after symmetrization the gain on AER is 1.82. Continuing Model 4 training appears to have minimal effect on AER, and the improve- ment mainly comes from the constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects on translation quality",
"sec_num": "5.2"
},
{
"text": "In the experiment, we use the Moses toolkit to extract phrases, tune parameters and decode. We use the NIST MT06 test set as the tuning set, NIST MT02-05 and MT08 as unseen test sets. We also include results for four additional unseen test sets used in GALE evaluations: DEV07-Dev newswire part (dd-nw, 278 sentences) and Weblog part (dd-wb, 345 sentences), Dev07-Blind newswire part (db-nw, 276 sentences and Weblog part (db-wb, 312 sentences). Table 5 presents the average improvement on BLEU scores in each iteration. As we can see from the results, in all iterations we got improvement on BLEU scores, and the largest gain we have gotten is on the fifth iteration, which has 0.51 average improvement on five NIST test sets, and 0.54 average improvement across all nine test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effects on translation quality",
"sec_num": "5.2"
},
{
"text": "In this paper we presented a novel training scheme for word alignment task called EMDC. We also presented an extension of GIZA++ that can perform constrained EM training. By integrating it with a CRF-based discriminative word aligner and alignment link filtering, we can improve the alignment quality of both aligners iteratively. We experimented with small-size Chinese-English and Arabic English and moderate-size Chinese-English word alignment tasks, and ob-served in all four mono-directional alignments more than 3% absolute reduction on AER, with the largest improvement being 8.16% absolute on Arabic-to-English comparing to the baseline, and 5.90% comparing to Model 4 training with the same numbers of iterations. On a moderate-size Chinese-to-English tasks we also evaluated the impact of the improved alignment on translation quality across nine test sets. The 2% absolute AER reduction resulted in 0.5 average improvement on BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Observations on the results raise several interesting questions for future research, such as 1) What is the relationship between the precision of the constraints and the quality of resulting alignments after iterations, 2) The effect of using different discriminative aligners, 3) Using aligners that explicitly model empty words and null alignments to provide additional constraints. We will continue exploration on these directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The extended GIZA++ is released to the research community as a branch of MGIZA++ (Gao and Vogel, 2008) , which is available online 3 .",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We run 5, 5, 3, 3 iterations of Model 1, HMM, Model 3 and Model 4 respectively.2 We used grow-diag-final-and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Accessible on Source Forge, with the URL: http://sourceforge.net/projects/mgizapp/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by NSF CluE Project (NSF 08-560) and DARPA GALE project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Going beyond aer: an extensive analysis of word alignments and their impact on mt",
"authors": [
{
"first": "Necip",
"middle": [
"Fazil"
],
"last": "Ayan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayan, Necip Fazil and Bonnie J. Dorr. 2006. Going beyond aer: an extensive analysis of word align- ments and their impact on mt. In Proceedings of the 21st International Conference on Compu- tational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 9-16.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative word alignment with conditional random fields",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blunsom, Phil and Trevor Cohn. 2006. Discrimina- tive word alignment with conditional random fields. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Lin- guistics, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, Peter F., Vincent J.Della Pietra, Stephen A. Della Pietra, and Robert. L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. In Computational Linguistics, volume 19(2), pages 263-331.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical machine translation with wordand sentence-aligned parallel corpora",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "175--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Callison-Burch, C., D. Talbot, and M. Osborne. 2004. Statistical machine translation with word- and sentence-aligned parallel corpora. In Proceed- ings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 175-183.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semisupervised training for statistical word alignment",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "769--776",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fraser, Alexander and Daniel Marcu. 2006. Semi- supervised training for statistical word alignment. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Compu- tational Linguistics, pages 769-776.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Getting the structure right for word alignment: LEAF",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "51--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fraser, Alexander and Daniel Marcu. 2007. Get- ting the structure right for word alignment: LEAF. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 51-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parallel implementations of word alignment tool",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the ACL 2008 Software Engineering, Testing, and Quality Assurance Workshop",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Qin and Stephan Vogel. 2008. Parallel imple- mentations of word alignment tool. In Proceedings of the ACL 2008 Software Engineering, Testing, and Quality Assurance Workshop, pages 49-57.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Consensus versus expertise : A case study of word alignment with mechanical turk",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL 2010 Workshop on Creating Speech and Language Data With Mechanical Turk",
"volume": "",
"issue": "",
"pages": "30--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Qin and Stephan Vogel. 2010. Consensus ver- sus expertise : A case study of word alignment with mechanical turk. In NAACL 2010 Workshop on Cre- ating Speech and Language Data With Mechanical Turk, pages 30-34.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A semi-supervised word alignment algorithm with partial manual alignments",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 joint Fifth Workshop on Statistical Machine Translation and Metrics MATR (ACL-2010 WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Qin, Nguyen Bach, and Stephan Vogel. 2010. A semi-supervised word alignment algorithm with partial manual alignments. In In Proceedings of the ACL 2010 joint Fifth Workshop on Statistical Machine Translation and Metrics MATR (ACL-2010 WMT).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Reassessment of the role of phrase extraction in pbsmt",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2009,
"venue": "The twelfth Machine Translation Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guzman, Francisco, Qin Gao, and Stephan Vogel. 2009. Reassessment of the role of phrase extrac- tion in pbsmt. In The twelfth Machine Translation Summit.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Confidence measure for word alignment",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "932--940",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Fei. 2009. Confidence measure for word alignment. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 932-940.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A maximum entropy word aligner for arabic-english machine translation",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ittycheriah, Abraham and Salim Roukos. 2005. A maximum entropy word aligner for arabic-english machine translation. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Process- ing, pages 89-96.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Loglinear models for word alignment",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL '05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "459--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Yang, Qun Liu, and Shouxun Lin. 2005. Log- linear models for word alignment. In ACL '05: Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 459-466.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A discriminative framework for bilingual word alignment",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, Robert C. 2005. A discriminative frame- work for bilingual word alignment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Pro- cessing, pages 81-88.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminative word alignment via alignment matrix modeling",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stephan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niehues, Jan. and Stephan. Vogel. 2008. Discrimina- tive word alignment via alignment matrix modeling. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 18-25.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Joseph and Hermann Ney. 2003. A systematic comparison of various statistical align- ment models. In Computational Linguistics, vol- ume 1:29, pages 19-51.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A discriminative matching approach to word alignment",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taskar, Ben, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of the conference on Hu- man Language Technology and Empirical Methods in Natural Language Processing, pages 73-80.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "HMM based word alignment in statistical machine translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of 16th International Conference on Computational Linguistics)",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vogel, Stephan, Hermann Ney, and Christoph Till- mann. 1996. HMM based word alignment in statis- tical machine translation. In Proceedings of 16th In- ternational Conference on Computational Linguis- tics), pages 836-841.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Illustration of EMDC training scheme low a threshold.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Figure 2demonstrates the idea, given the constraints shown in (a), and the seed alignment shown as solid links in (b), we Illustration of Algorithm 1 move the inconsistent link to the dashed link by a move operation.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Alignment qualities of each iteration for Arabic-English word alignment task. The grey dash curves represent unconstrained Model 4 training, and the curves with star, circle, cross and diamond markers are constrained EM alignments with 0.6, 0.7, 0.8 and 0.9 filtering thresholds respectively.",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "Alignment qualities of each iteration for Chinese-English word alignment task. The grey dash curves represent unconstrained Model 4 training, and the curves with star, circle, cross and diamond markers are constrained EM alignments with 0.6, 0.7, 0.8 and 0.9 filtering thresholds respectively.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Threshold</td><td>P</td><td>R</td><td>AER</td></tr><tr><td>Ch-En</td><td colspan=\"4\">0.6 71.30 58.12 35.96 0.7 75.24 54.03 37.11 0.8 85.66 44.19 41.70 0.9 93.70 37.95 45.98</td></tr><tr><td>Ar-En</td><td colspan=\"4\">0.6 72.35 59.87 34.48 0.7 77.55 55.58 35.25 0.8 80.07 50.89 37.77 0.9 83.74 44.16 42.17</td></tr></table>",
"text": "Corpus statistics of the manual aligned corpora"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>I</td><td>M</td><td>NIST</td><td>GALE</td></tr><tr><td/><td/><td>mt06 mt02 mt03 mt04 mt05 mt08</td><td>ain db-nw db-wb dd-nw dd-wb</td><td>aia</td></tr><tr><td colspan=\"2\">0 G G</td><td colspan=\"3\">31.00 0.34 31.14 31.94 30.42 32.86 29.49 24.15 0.20 27.31 24.51 27.50 24.02 0.03</td></tr><tr><td>3</td><td>D G</td><td colspan=\"2\">31.29 32.39 30.28 33.19 29.60 24.41 0.43 27.64 25.32 28.55 24.71 30.94 31.95 30.15 32.71 29.38 24.22 0.12 27.63 24.61 28.80 25.05</td><td>0.47 0.29</td></tr><tr><td>4</td><td>D G</td><td colspan=\"2\">30.80 32.04 30.51 33.24 29.49 24.61 0.46 27.61 25.27 28.72 24.98 30.68 31.81 30.33 33.05 29.28 24.41 0.26 27.20 24.79 28.43 24.50</td><td>0.53 0.24</td></tr><tr><td>5</td><td>D G</td><td colspan=\"2\">30.93 31.89 29.96 32.89 29.37 24.50 0.17 27.75 24.50 29.05 24.90 31.16 32.28 30.72 33.30 29.83 24.30 0.51 27.32 25.05 28.60 25.44</td><td>0.33 0.54</td></tr></table>",
"text": "31.80 29.89 32.63 29.33 24.24 26.92 24.48 28.44 24.26 1 D 30.65 31.60 30.04 32.89 29.34 24.52 0.12 27.43 24.72 28.32 24.30 0.14 G 31.35 31.91 30.35 32.75 29.40 24.16 0.15 27.39 24.50 28.22 24.60 0.15 2 D 31.61 32.31 30.40 33.06 29.49 24.11 0.33 28.17 24.42 28.58 24.36"
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Improvement on translation alignment quality on moderate-size corpus, The column ain shows the average improvement of BLEU scores for all NIST test sets (excluding the tuning set MT06), and column aia is the average improvement on all unseen test sets. The column M indicates the alignment source, G means the alignment comes from generative aligner, and D means discriminative aligner respectively. The number of iterations is shown in column I."
}
}
}
}