{
"paper_id": "D15-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:17.893834Z"
},
"title": "On a Strictly Convex IBM Model 1",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Simion",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Clifford",
"middle": [],
"last": "Stein",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "IBM Model 1 is a classical alignment model. Of the first generation word-based SMT models, it was the only such model with a concave objective function. For concave optimization problems like IBM Model 1, we have guarantees on the convergence of optimization algorithms such as Expectation Maximization (EM). However, as was pointed out recently, the objective of IBM Model 1 is not strictly concave and there is quite a bit of alignment quality variance within the optimal solution set. In this work we detail a strictly concave version of IBM Model 1 whose EM algorithm is a simple modification of the original EM algorithm of Model 1 and does not require the tuning of a learning rate or the insertion of an l 2 penalty. Moreover, by addressing Model 1's shortcomings, we achieve AER and F-Measure improvements over the classical Model 1 by over 30%.",
"pdf_parse": {
"paper_id": "D15-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "IBM Model 1 is a classical alignment model. Of the first generation word-based SMT models, it was the only such model with a concave objective function. For concave optimization problems like IBM Model 1, we have guarantees on the convergence of optimization algorithms such as Expectation Maximization (EM). However, as was pointed out recently, the objective of IBM Model 1 is not strictly concave and there is quite a bit of alignment quality variance within the optimal solution set. In this work we detail a strictly concave version of IBM Model 1 whose EM algorithm is a simple modification of the original EM algorithm of Model 1 and does not require the tuning of a learning rate or the insertion of an l 2 penalty. Moreover, by addressing Model 1's shortcomings, we achieve AER and F-Measure improvements over the classical Model 1 by over 30%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The IBM translation models were introduced in (Brown et al., 1993) and were the first-generation Statistical Machine Translation (SMT) systems. In the current pipeline, these word-based models are the seeds for more sophisticated models which need alignment tableaus to start their optimization procedure. Among the original IBM Models, only IBM Model 1 can be formulated as a concave optimization problem. Recently, there has been some research on IBM Model 2 which addresses either the model's non-concavity (Simion et al., 2015) \u2022 We utilize and expand the mechanism introduced in (Simion et al., 2015) to construct strictly concave versions of IBM Model 1 1 . As was shown in (Toutanova and Galley, 2011) , IBM Model 1 is not a strictly concave optimization problem. What this means in practice is that although we can initialize the model with random parameters and get to the same objective cost via the EM algorithm, there is quite a bit of alignment quality variance within the model's optimal solution set and ambiguity persists on which optimal solution truly is the best. Typically, the easiest way to make a concave model strictly concave is to append an l 2 regularizer. However, this method does not allow for seamless EM training: we have to either use a learning-rate dependent gradient based algorithm directly or use a gradient method within the M step of EM training. In this paper we show how to get via a simple technique an infinite supply of models that still allows a straightforward application of the EM algorithm.",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 510,
"end": 531,
"text": "(Simion et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 584,
"end": 605,
"text": "(Simion et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 680,
"end": 708,
"text": "(Toutanova and Galley, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 As a concrete application of the above, we detail a very simple strictly concave version of IBM Model 1 and study the performance of different members within this class. Our strictly concave models combine some of the elements of word association and positional dependance as in IBM Model 2 to yield a significant model improvement. Furthermore, we now have guarantees that the solution we find is unique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We detail an EM algorithm for a subclass of strictly concave IBM Model 1 variants. The EM algorithm is a small change to the original EM algorithm introduced in (Brown et al., 1993) .",
"cite_spans": [
{
"start": 163,
"end": 183,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Notation. Throughout this paper, for any positive integer N , we use [N ] to denote {1 . . . N } and [N ] 0 to denote {0 . . . N }. We denote by R n + the set of nonnegative n dimensional vectors. We denote by [0, 1] n the n\u2212dimensional unit cube.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin by reviewing IBM Model 1 and introducing the necessary notation. To this end, throughout this section and the remainder of the paper we assume that our set of training examples is (e^{(k)}, f^{(k)}) for k = 1 . . . n, where e^{(k)} is the k'th English sentence and f^{(k)} is the k'th French sentence. Following standard convention, we assume the task is to translate from French (the \"source\" language) into English (the \"target\" language). We use E to denote the English vocabulary (set of possible English words), and F to denote the French vocabulary. The k'th English sentence is a sequence of words e^{(k)}_1 . . . e^{(k)}_{l_k}, where l_k is the length of the k'th English sentence and each e^{(k)}_i \u2208 E; similarly, the k'th French sentence is a sequence f^{(k)}_1 . . . f^{(k)}_{m_k}, where each f^{(k)}_j \u2208 F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "We define e (k) 0 for k = 1 . . . n to be a special NULL word (note that E contains the NULL word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "For each English word e \u2208 E, we will assume that D(e) is a dictionary specifying the set of possible French words that can be translations of e. The set D(e) is a subset of F . In practice, D(e) can be derived in various ways; in our experiments we simply define D(e) to include all French words f such that e and f are seen in a translation pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "Given these definitions, the IBM Model 1 optimization problem is given in Fig. 1 and, for example, (Koehn, 2008) . The parameters in this problem are t(f |e). The t(f |e) parameters are translation parameters specifying the probability of English word e being translated as French word f . The objective function is then the log-likelihood of the training data (see Eq. 3):",
"cite_spans": [
{
"start": 99,
"end": 112,
"text": "(Koehn, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 74,
"end": 80,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "1 n n \ufffd k=1 m k \ufffd j=1 log p(f (k) j |e (k) ) , where log p(f (k) j |e (k) ) is log l k \ufffd i=0 t(f (k) j |e (k) i ) 1 + l k = C + log l k \ufffd i=0 t(f (k) j |e (k) i ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "and C is a constant that can be ignored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Input: Define E, F , L, M , (e (k) , f (k) , l k , m k ) for k = 1 . . . n, D",
"eq_num": "("
}
],
"section": "IBM Model 1",
"sec_num": "2"
},
{
"text": "\u2022 A parameter t(f |e) for each e \u2208 E, f \u2208 D(e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters:",
"sec_num": null
},
{
"text": "Constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200e \u2208 E, f \u2208 D(e), t(f |e) \u2265 0 (1) \u2200e \u2208 E, \ufffd f \u2208D(e) t(f |e) = 1 (2) Objective: Maximize 1 n n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 t(f (k) j |e (k) i )",
"eq_num": "(3)"
}
],
"section": "Parameters:",
"sec_num": null
},
{
"text": "with respect to the t(f |e) parameters. While IBM Model 1 is concave optimization problem, it is not strictly concave (Toutanova and Galley, 2011) . Therefore, optimization methods for IBM Model 1 (specifically, the EM algorithm) are typically only guaranteed to reach a global maximum of the objective function (see the Appendix for a simple example contrasting convex and strictly convex functions). In particular, although the objective cost is the same for any optimal solution, the translation quality of the solutions is not fixed and will still depend on the initialization of the model (Toutanova and Galley, 2011) .",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Toutanova and Galley, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 594,
"end": 622,
"text": "(Toutanova and Galley, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters:",
"sec_num": null
},
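{
"text": "To make the lack of strict concavity concrete, the following small numerical sketch (an illustration added for this write-up, not an experiment from the paper) evaluates a single per-word Model 1 term log \u2211_i t(f_j|e_i) at two different lexical-parameter settings with the same coordinate sum; the values coincide, so equality holds in Jensen's inequality and the maximizer is not unique.
import math

# Per-word IBM Model 1 term: log of the summed lexical probabilities
# over the candidate English positions (NULL included).
def model1_term(t_values):
    return math.log(sum(t_values))

# Two different settings of (t(f|e_0), t(f|e_1)) with the same sum: the
# objective cannot distinguish them, which is exactly the non-strictness.
a = [0.6, 0.2]
b = [0.2, 0.6]
mid = [(x + y) / 2.0 for x, y in zip(a, b)]

print(model1_term(a), model1_term(b), model1_term(mid))  # all three values are equal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters:",
"sec_num": null
},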
{
"text": "We now detail a very simple method to make IBM Model 1 strictly concave with a unique optimal solution without the need for appending an l 2 loss. Theorem 1. Consider IBM Model 1 and modify its objective to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "1 n n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (t(f (k) j |e (k) i )) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "where h i,j,k : R + \u2192 R + is strictly concave. With the new objective and the same constraints as IBM Model 1, this new optimization problem is strictly concave.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "Proof. To prove concavity, we now show that the new likelihood function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "L(t) = 1 n n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (t(f (k) j |e (k) i )) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "is strictly concave (concavity follows in the same way trivially). Suppose by way of contradiction that there is (t) \ufffd = (t \ufffd ) and \u03b8 \u2208 (0, 1) such that equality hold for Jensen's inequality. Since h i,j,k is strictly concave and (t) \ufffd = (t \ufffd ) we must have that there must be some (k, j, i) such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "t(f (k) j |e (k) i ) \ufffd = t \ufffd (f (k) j |e (k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "i ) so that Jensen's inequality is strict for h i,j,k and we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "h i,j,k (\u03b8t(f (k) j |e (k) i ) + (1 \u2212 \u03b8)t \ufffd (f (k) j |e (k) i )) > \u03b8h i,j,k (t(f (k) j |e (k) i )) + (1 \u2212 \u03b8)h i,j,k (t \ufffd (f (k) j |e (k) i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "Using Jensen's inequality, the monotonicity of the log, and the above strict inequality we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "L(\u03b8t + (1 \u2212 \u03b8)t \ufffd ) = n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (\u03b8t(f (k) j |e (k) i ) + (1 \u2212 \u03b8)t \ufffd (f (k) j |e (k) i )) > n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 \u03b8h i,j,k (t(f (k) j |e (k) i )) + (1 \u2212 \u03b8)h i,j,k (t \ufffd (f (k) j |e (k) i )) \u2265 \u03b8 n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (t(f (k) j |e (k) i )) + (1 \u2212 \u03b8) n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (t \ufffd (f (k) j |e (k) i )) = \u03b8L(t) + (1 \u2212 \u03b8)L(t \ufffd )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "The IBM Model 1 strictly concave optimization problem is presented in Fig. 2 . In 7it is crucial that each h i,j,k be strictly concave within",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 76,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "\ufffd l k i=0 h i,j,k (t(f (k) j |e (k) i )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "For example, we have that \u221a x 1 + x 2 is concave but not strictly concave and the proof of Theorem 1 would break down. To see this, we can consider (x 1 , x 2 ) \ufffd = (x 1 , x 3 ) and note that equality holds in Jensen's inequality. We should be clear: the main reason why Theorem 1 works is that we have h i,j,k are strictly concave (on R + ) and all the lexical probabilities that are arguments to L are present within the log-likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "Input: Define E, F , L, M , (e (k) , f (k) , l k , m k ) for k = 1 . . . n, D(e) for e \u2208 E as in Section 2. A set of strictly concave functions h i,j,k :",
"cite_spans": [
{
"start": 39,
"end": 42,
"text": "(k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "Parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "\u2022 A parameter t(f |e) for each e \u2208 E, f \u2208 D(e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "Constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "\u2200e \u2208 E, f \u2208 D(e), t(f |e) \u2265 0 (5) \u2200e \u2208 E, \ufffd f \u2208D(e) t(f |e) = 1 (6) Objective: Maximize 1 n n \ufffd k=1 m k \ufffd j=1 log l k \ufffd i=0 h i,j,k (t(f (k) j |e (k) i )) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "with respect to the t(f |e) parameters. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Strictly Concave IBM Model 1",
"sec_num": "3"
},
{
"text": "For the IBM Model 1 strictly concave optimization problem, we can derive a clean EM Algorithm if we base our relaxation of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i,j,k (t(f (k) j |e (k) i )) = \u03b1(e (k) i , f (k) j )(t(f (k) j |e (k) i )) \u03b2(e (k) i ,f (k) j ) with \u03b2(e (k) i , f",
"eq_num": "(k)"
}
],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "j ) < 1. To justify this, we first need the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "Lemma 1. Consider h : R + \u2192 R + given by h(x) = x \u03b2 where \u03b2 \u2208 (0, 1). Then h is strictly concave.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "Proof. The proof of this lemma is elementary and follows since the second derivative given by h \ufffd\ufffd (x) = \u03b2(\u03b2 \u2212 1)x \u03b2\u22122 is strictly negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "For our concrete experiments, we picked a model based on Lemma 1 and used h(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "x) = \u03b1x \u03b2 with \u03b1, \u03b2 \u2208 (0, 1) so that h i,j,k (t(f (k) j |e (k) i )) = \u03b1(f (k) j , e (k) i )(t(f (k) j |e (k) i )) \u03b2(f (k) j ,e (k) i ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
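{
"text": "As a quick sanity check of Lemma 1 and of this concrete choice of h, the sketch below (illustrative only; the constants 0.5 are placeholders, not values used in the paper) verifies numerically that h(x) = \u03b1 x^\u03b2 with \u03b2 \u2208 (0, 1) has a strictly negative second derivative and satisfies a strict Jensen inequality.
# Illustrative check: h(x) = alpha * x**beta, 0 < beta < 1, is strictly
# concave on (0, inf); h''(x) = alpha*beta*(beta-1)*x**(beta-2) < 0.
def h(x, alpha=0.5, beta=0.5):
    return alpha * x ** beta

def h_second_derivative(x, alpha=0.5, beta=0.5):
    return alpha * beta * (beta - 1.0) * x ** (beta - 2.0)

for x in (0.1, 0.5, 0.9):
    assert h_second_derivative(x) < 0.0  # strictly concave pointwise

# Strict Jensen inequality between two distinct points:
x1, x2, theta = 0.2, 0.8, 0.5
assert h(theta * x1 + (1.0 - theta) * x2) > theta * h(x1) + (1.0 - theta) * h(x2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},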
{
"text": "Using this setup, parameter estimation for the new model can be accomplished via a slight modification of the EM algorithm for IBM Model 1. In particular, we have that the posterior probabilities of this model factor just as those of the standard Model 1 and we have an M step that requires optimizing \ufffd a (k) q(a (k) |e (k) , f (k) ) log p(f (k) , a (k) |e (k) ) 1: Input: Define E, F , L, M , (e (k) , f (k) , l k , m k ) for k = 1 . . . n, D(e) for e \u2208 E as in Section 2. An integer T specifying the number of passes over the data. A set of weighting parameter \u03b1(e, f ), \u03b2(e, f ) \u2208 (0, 1) for each e \u2208 E, f \u2208 D(e). A tuning parameter \u03bb > 0.",
"cite_spans": [
{
"start": 306,
"end": 309,
"text": "(k)",
"ref_id": null
},
{
"start": 321,
"end": 324,
"text": "(k)",
"ref_id": null
},
{
"start": 406,
"end": 409,
"text": "(k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "where q(a^{(k)} | e^{(k)}, f^{(k)}) \u221d \u220f_{j=1}^{m_k} h_{a^{(k)}_j, j, k}(t(f^{(k)}_j | e^{(k)}_{a^{(k)}_j}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "are constants obtained in the E step. This optimization step is very similar to the regular Model 1 M step, since the \u03b2 exponent drops out using log t^\u03b2 = \u03b2 log t; the exact same count-based method can be applied. The details of this algorithm are given in Fig. 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 263,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
{
"text": "Fig. 3 pseudocode:
1: Input: Define E, F, L, M, (e^{(k)}, f^{(k)}, l_k, m_k) for k = 1 . . . n, and D(e) for e \u2208 E as in Section 2. An integer T specifying the number of passes over the data. A set of weighting parameters \u03b1(e, f), \u03b2(e, f) \u2208 (0, 1) for each e \u2208 E, f \u2208 D(e). A tuning parameter \u03bb > 0.
2: Parameters: a parameter t(f|e) for each e \u2208 E, f \u2208 D(e).
3: Initialization: \u2200e \u2208 E, f \u2208 D(e), set t(f|e) = 1/|D(e)|.
4: EM Algorithm:
5: for all t = 1 . . . T do
6:   \u2200e \u2208 E, f \u2208 D(e), count(f, e) = 0
7:   \u2200e \u2208 E, count(e) = 0
8:   EM Algorithm: Expectation
9:   for all k = 1 . . . n do
10:    for all j = 1 . . . m_k do
11:      \u03b41[i] = 0 \u2200i \u2208 [l_k]_0
12:      \u03941 = 0
13:      for all i = 0 . . . l_k do
14:        \u03b41[i] = \u03b1(f^{(k)}_j, e^{(k)}_i) (t(f^{(k)}_j | e^{(k)}_i))^{\u03b2(f^{(k)}_j, e^{(k)}_i)}
15:        \u03941 += \u03b41[i]
16:      for all i = 0 . . . l_k do
17:        \u03b41[i] = \u03b41[i] / \u03941
18:        count(f^{(k)}_j, e^{(k)}_i) += \u03b2(f^{(k)}_j, e^{(k)}_i) \u03b41[i]
19:        count(e^{(k)}_i) += \u03b2(f^{(k)}_j, e^{(k)}_i) \u03b41[i]
20:  EM Algorithm: Maximization
21:  for all e \u2208 E do
22:    for all f \u2208 D(e) do
23:      t(f|e) = count(f, e) / count(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},
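{
"text": "For readers who prefer running code to pseudocode, here is a compact Python sketch of the same EM procedure (our paraphrase of Fig. 3, not the authors' released implementation; the corpus format and the alpha/beta dictionaries keyed by (f, e) pairs are assumptions made for this illustration).
from collections import defaultdict

def em_strictly_concave_model1(corpus, alpha, beta, T=5):
    # corpus: list of (english, french) sentence pairs, with english[0] the NULL word.
    # alpha[(f, e)], beta[(f, e)]: weighting parameters in (0, 1).
    # Initialization: t(f|e) uniform over the candidate dictionary D(e).
    D = defaultdict(set)
    for english, french in corpus:
        for e in english:
            D[e].update(french)
    t = {(f, e): 1.0 / len(D[e]) for e in D for f in D[e]}

    for _ in range(T):
        count_fe = defaultdict(float)
        count_e = defaultdict(float)
        # E step: posterior over alignments, one French position at a time.
        for english, french in corpus:
            for f in french:
                delta = [alpha[(f, e)] * t[(f, e)] ** beta[(f, e)] for e in english]
                total = sum(delta)
                for e, d in zip(english, delta):
                    p = d / total
                    # The extra beta factor comes from log t**beta = beta * log t.
                    count_fe[(f, e)] += beta[(f, e)] * p
                    count_e[e] += beta[(f, e)] * p
        # M step: renormalize the expected counts, exactly as in standard Model 1.
        for (f, e) in t:
            if count_e[e] > 0.0:
                t[(f, e)] = count_fe[(f, e)] / count_e[e]
    return t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation via EM",
"sec_num": "4"
},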
{
"text": "The performance of our new model will rely heavily on the choice of \u03b1(e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},
{
"text": "j ) \u2208 (0, 1) we use. In particular, we could make \u03b2 depend on the association between the words, or the words' positions, or both. One classical measure of word association is the dice coefficient (Och and Ney, 2003) given by dice(e, f ) = 2c(e, f ) c(e) + c(f ) .",
"cite_spans": [
{
"start": 197,
"end": 216,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},
{
"text": "In the above, the count terms c are the number of training sentences that have either a particular word or a pair of of words (e, f ). As with the other choices we explore, the dice coefficient is a fraction between 0 and 1, with 0 and 1 implying less and more association, respectively. Additionally, we make use of positional constants like those of the IBM Model 2 distortions given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},
{
"text": "d(i|j, l, m) = \uf8f1 \uf8f2 \uf8f3 1 (l+1)Z(j,l,m) : i = 0 le \u2212\u03bb| i l \u2212 j m | (l+1)Z(j,l,m) : i \ufffd = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},
{
"text": "In the above, Z(j, l, m) is the partition function discussed in (Dyer et al., 2013) . The previous measures all lead to potential candidates for \u03b2(e, f ), we have t(f |e) \u2208 (0, 1), and we want to enlarge competing values when decoding (we use \u03b1t \u03b2 instead of t when getting the Viterbi alignment). The above then implies that we will have the word association measures inversely proportional to \u03b2, and so we set \u03b2(e, f ) = 1 \u2212 dice(e, f ) or \u03b2(e, f ) = 1 \u2212 d (i|j, l, m) . In our experiments we picked \u03b1(f",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 459,
"end": 470,
"text": "(i|j, l, m)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},
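{
"text": "To make these choices concrete, here is a small sketch (illustrative helper functions; the count-table layout is an assumption of this write-up) of the dice coefficient, the Model 2-style positional constant d(i|j, l, m), and the resulting exponents \u03b2 = 1 \u2212 dice(e, f) or \u03b2 = 1 \u2212 d(i|j, l, m).
import math

def dice(e, f, c_e, c_f, c_ef):
    # c_e[e], c_f[f]: number of training sentences containing the word;
    # c_ef[(e, f)]: number of training sentences containing both words.
    return 2.0 * c_ef.get((e, f), 0) / (c_e[e] + c_f[f])

def distortion(i, j, l, m, lam=16.0):
    # d(i|j, l, m) with the partition function Z(j, l, m) chosen so that
    # the values sum to 1 over i = 0 ... l (lam = 16 or 0 in the paper).
    def numer(ip):
        if ip == 0:
            return 1.0
        return l * math.exp(-lam * abs(ip / l - j / m))
    Z = sum(numer(ip) for ip in range(l + 1)) / (l + 1)
    return numer(i) / ((l + 1) * Z)

def beta_from_dice(e, f, c_e, c_f, c_ef):
    return 1.0 - dice(e, f, c_e, c_f, c_ef)

def beta_from_distortion(i, j, l, m, lam=16.0):
    return 1.0 - distortion(i, j, l, m, lam)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing \u03b1 and \u03b2",
"sec_num": "5"
},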
{
"text": "For our alignment experiments, we used a subset of the Canadian Hansards bilingual corpus with 247,878 English-French sentence pairs as training data, 37 sentences of development data, and 447 sentences of test data (Michalcea and Pederson, 2003) . As a second validation corpus, we considered a training set of 48,706 Romanian-English sentence-pairs, a development set of 17 sentence pairs, and a test set of 248 sentence pairs (Michalcea and Pederson, 2003) .",
"cite_spans": [
{
"start": 216,
"end": 246,
"text": "(Michalcea and Pederson, 2003)",
"ref_id": "BIBREF6"
},
{
"start": 429,
"end": 459,
"text": "(Michalcea and Pederson, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "6.1"
},
{
"text": "Below we report results in both AER (lower is better) and F-Measure (higher is better) (Och and Ney, 2003) for the English \u2192 French translation direction. To declare a better model we have to settle on an alignment measure. Although the relationship between AER/F-Measure and translation quality varies (Dyer et al., 2013) , there are some positive experiments (Fraser and Marcu, 2004) showing that F-Measure may be more useful, so perhaps a comparison based on F-Measure is ideal. Table 1 : Results on the English-French data for various (\u03b1, \u03b2) settings as discussed in Section 5. For the d parameters, we use \u03bb = 16 throughout. The standard IBM Model 1 is column 1 and corresponds to a setting of (1,1). The not necessarily strictly concave model with (d,1) setting gives the best AER, while the strictly concave model given by the (1, 1\u2212d) setting has the highest F-Measure.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 303,
"end": 322,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 361,
"end": 385,
"text": "(Fraser and Marcu, 2004)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 482,
"end": 489,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "to space limitations. Our experiments show that using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "h i,j,k (t(f (k) j |e (k) i )) = (t(f (k) j |e (k) i )) 1\u2212d(i|j,l k ,m k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "yields the best F-Measure performance and is not far off in AER from the \"fake\" 2 IBM Model 2 (gotten by setting (\u03b1, \u03b2) = (d, 1)) whose results are in column 2 (the reason why we use this model at all is since it should be better than IBM 1: we want to know how far off we are from this obvious improvement). Moreover, we note that dice does not lead to quality \u03b2 exponents and that, unfortunately, combining methods as in column 5 ((\u03b1, \u03b2) = (d, 1 \u2212 d)) does not necessarily lead to additive gains in AER and F-Measure performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "2 Generally speaking, when using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "h i,j,k (t(f (k) j |e (k) i )) = d(i|j, l k , m k )t(f (k) j |e (k) i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "with d constant we cannot use Theorem 3 since h is linear. Most likely, the strict concavity of the model will hold because of the asymmetry introduced by the d term; however, there will be a necessary dependency on the data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "In this section we take a moment to also compare our work with the classical IBM 1 work of (Moore, 2004) . Summarizing (Moore, 2004) , we note that this work improves substancially upon the classical IBM Model 1 by introducing a set of heuristics, among which are to (1) modify the lexical parameter dictionaries (2) introduce an initialization heuristic (3) modify the standard IBM 1 EM algorithm by introducing smoothing (4) tune additional parameters. However, we stress that the main concern of this work is not just heuristicbased empirical improvement, but also structured learning. In particular, although using an regularizer l 2 and the methods of (Moore, 2004) would yield a strictly concave version of IBM 1 as well (with improvements), it is not at all obvious how to choose the learning rate or set the penalty on the lexical parameters. The goal of our work was to offer a new, alternate form of regularization. Moreover, since we are changing the original loglikelihood, our method can be thought of as way of bringing the l 2 regularizer inside the log likelihood. Like (Moore, 2004) , we also achieve appreciable gains but have just one tuning parameter (when \u03b2 = 1 \u2212 d we just have the centering \u03bb parameter) and do not break the probabilistic interpretation any more than appending a regularizer would (our method modifies the log-likelihood but the simplex constrains remain).",
"cite_spans": [
{
"start": 91,
"end": 104,
"text": "(Moore, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 119,
"end": 132,
"text": "(Moore, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 657,
"end": 670,
"text": "(Moore, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 1086,
"end": 1099,
"text": "(Moore, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Previous Work",
"sec_num": "7"
},
{
"text": "In this paper we showed how IBM Model 1 can be made into a strictly convex optimization problem via functional composition. We looked at a specific member within the studied optimization family that allows for an easy EM algorithm. Finally, we conducted experiments showing how the model performs on some standard data sets and empirically showed 30% important over the standard IBM Model 1 algorithm. For further research, we note that picking the optimal h i,j,k is an open question, so provably finding and justifying the choice is one topic of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Please refer as needed to the Appendix for examples and definitions of convexity/concavity and strict convexity/concavity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Andrei Simion was supported by a Google research award. Cliff Stein is supported in part by NSF grants CCF-1349602 and CCF-1421161.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Convex Optimization",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "Lieven",
"middle": [],
"last": "Vandenberghe",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Boyd and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19:263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum Likelihood From Incomplete Data via the EM Algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the royal statistical society, series B",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum Likelihood From Incomplete Data via the EM Algorithm. Journal of the royal statistical soci- ety, series B, 39(1):1-38.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer , Victor Chahuneau, Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Measuring Word Alignment Quality for Statistical Machine Translation",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "293--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser and Daniel Marcu. 2007. Measur- ing Word Alignment Quality for Statistical Ma- chine Translation. Journal Computational Linguis- tics, 33(3): 293-303.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip Koehn. 2008. Statistical Machine Translation. Cambridge University Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Evaluation Exercise in Word Alignment",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Michalcea",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pederson",
"suffix": ""
}
],
"year": 2003,
"venue": "Workshop in building and using Parallel Texts: Data Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Michalcea and Ted Pederson. 2003. An Eval- uation Exercise in Word Alignment. HLT-NAACL 2003: Workshop in building and using Parallel Texts: Data Driven Machine Translation and Be- yond.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving IBM Word-Alignment Model 1",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore. 2004. Improving IBM Word- Alignment Model 1. In Proceedings of the ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "HMM-Based Word Alignment in Statistical Translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney and Christoph Tillman. 1996. HMM-Based Word Alignment in Statistical Translation. In Proceedings of COLING.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational-Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Mod- els. Computational-Linguistics, 29(1): 19-52.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Convex Alternative to IBM Model 2",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Simion",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Simion, Michael Collins and Cliff Stein. 2013. A Convex Alternative to IBM Model 2. In Proceed- ings of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Family of Latent Variable Convex Relaxations for IBM Model 2",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Simion",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Simion, Michael Collins and Cliff Stein. 2015. A Family of Latent Variable Convex Relaxations for IBM Model 2. In Proceedings of the AAAI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Why Initialization Matters for IBM Model 1: Multiple Optima and Non-Strict Convexity",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Michel Galley. 2011. Why Ini- tialization Matters for IBM Model 1: Multiple Op- tima and Non-Strict Convexity. In Proceedings of the ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the L0-norm",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Liang Huang and David Chiang. 2012. Smaller Alignment Models for Better Trans- lations: Unsupervised Word Alignment with the L0- norm. In Proceedings of the ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The IBM Model 1 Optimization Problem."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The IBM Model 1 strictly concave optimization problem."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Pseudocode for T iterations of the EM Algorithm for the IBM Model 1 strictly concave optimization problem."
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>contains our results for the Hansards</td></tr><tr><td>data. For the smaller Romanian data, we obtained</td></tr><tr><td>similar behavior, but we leave out these results due</td></tr></table>"
}
}
}
}