|
{ |
|
"paper_id": "Q14-1031", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:11:06.728895Z" |
|
}, |
|
"title": "Locally Non-Linear Learning for Statistical Machine Translation via Discretization and Structured Regularization", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Redmond", |
|
"location": { |
|
"postCode": "98052, 15213", |
|
"settlement": "Pittsburgh", |
|
"region": "WA, PA", |
|
"country": "USA, USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Redmond", |
|
"location": { |
|
"postCode": "98052, 15213", |
|
"settlement": "Pittsburgh", |
|
"region": "WA, PA", |
|
"country": "USA, USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Linear models, which support efficient learning and inference, are the workhorses of statistical machine translation; however, linear decision rules are less attractive from a modeling perspective. In this work, we introduce a technique for learning arbitrary, rule-local, nonlinear feature transforms that improve model expressivity, but do not sacrifice the efficient inference and learning associated with linear models. To demonstrate the value of our technique, we discard the customary log transform of lexical probabilities and drop the phrasal translation probability in favor of raw counts. We observe that our algorithm learns a variation of a log transform that leads to better translation quality compared to the explicit log transform. We conclude that non-linear responses play an important role in SMT, an observation that we hope will inform the efforts of feature engineers.", |
|
"pdf_parse": { |
|
"paper_id": "Q14-1031", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Linear models, which support efficient learning and inference, are the workhorses of statistical machine translation; however, linear decision rules are less attractive from a modeling perspective. In this work, we introduce a technique for learning arbitrary, rule-local, nonlinear feature transforms that improve model expressivity, but do not sacrifice the efficient inference and learning associated with linear models. To demonstrate the value of our technique, we discard the customary log transform of lexical probabilities and drop the phrasal translation probability in favor of raw counts. We observe that our algorithm learns a variation of a log transform that leads to better translation quality compared to the explicit log transform. We conclude that non-linear responses play an important role in SMT, an observation that we hope will inform the efforts of feature engineers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Linear models using log-transformed probabilities as features have emerged as the dominant model in MT systems. This practice can be traced back to the IBM noisy channel models (Brown et al., 1993) , which decompose decoding into the product of a translation model (TM) and a language model (LM), motivated by Bayes' Rule. When Och and Ney (2002) introduced a log-linear model for translation (a linear sum of log-space features), they noted that the noisy channel model was a special case of their model using log probabilities. This same formulation persisted even after the introduction of MERT (Och, 2003) , which optimizes a linear model; again, using two log probability features (TM and LM) with equal weight recovered the noisy channel model. Yet systems now use many more features, some of which are not even probabilities. We no longer believe that equal weights between the TM and LM provides optimal translation quality; the probabilities in the TM do not obey the chain rule nor Bayes' rule, nullifying several theoretical mathematical justifications for multiplying probabilities. The story of multiplying probabilities may just amount to heavily penalizing small values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 197, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 346, |
|
"text": "Och and Ney (2002)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 609, |
|
"text": "MERT (Och, 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The community has abandoned the original motivations for a linear interpolation of two logtransformed features. Is there empirical evidence that we should continue using this particular transformation? Do we have any reason to believe it is better than other non-linear transformations? To answer these, we explore the issue of non-linearity in models for MT. In the process, we will discuss the impact of linearity on feature engineering and develop a general mechanism for learning a class of non-linear transformations of real-valued features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Applying a non-linear transformation such as log to features is one way of achieving a non-linear response function, even if those features are aggregated in a linear model. Alternatively, we could achieve a non-linear response using a natively nonlinear model such as a SVM (Wang et al., 2007) or RankBoost (Sokolov et al., 2012) . However, MT is a structured prediction problem, in which a full hypothesis is composed of partial hypotheses. MT decoders take advantage of the fact that the model score decomposes as a linear sum over both local features and partial hypotheses to efficiently perform inference in these structured spaces ( \u00a72) -currently, there are no scalable solutions to integrating the hypothesis-level non-linear feature transforms typically associated with kernel methods while still maintaining polynomial time search. Another alternative is incorporating a recurrent neural network (Schwenk, 2012; Auli et al., 2013; Kalchbrenner and Blunsom, 2013) or an additive neural network (Liu et al., 2013a) . While these models have shown promise as methods of augmenting existing models, they have not yet offered a path for replacing or transforming existing real-valued features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 294, |
|
"text": "(Wang et al., 2007)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 330, |
|
"text": "(Sokolov et al., 2012)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 907, |
|
"end": 922, |
|
"text": "(Schwenk, 2012;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 923, |
|
"end": 941, |
|
"text": "Auli et al., 2013;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 973, |
|
"text": "Kalchbrenner and Blunsom, 2013)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1023, |
|
"text": "(Liu et al., 2013a)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this article, we discuss background ( \u00a72), describe local discretization, our approach to learning non-linear transformations of individual features, compare it with globally non-linear models ( \u00a73), present our experimental setup ( \u00a75), empirically verify the importance of non-linear feature transformations in MT and demonstrate that discretization can be used to recover non-linear transformations ( \u00a76), discuss related work ( \u00a77), and conclude ( \u00a78).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Decoding a given source sentence f can be expressed as search over target hypotheses e, each with an associated complete derivation D. To find the best-scoring hypothesis\u00ea(f ), a linear model applies a set of weights w to a complete hypothesis' feature vector H:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Locality & Structured Hypotheses", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e(f ) = arg max e,D |H| i=0 w i H i (f , e, D)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Feature Locality & Structured Hypotheses", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "However, this hides many of the realities of performing inference in modern decoders. Traditional inference would be intractable if every feature were allowed access to the entire derivation D and its associated target hypothesis e. Decoders take advantage of the fact that features decompose over partial derivations d. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Locality & Structured Hypotheses", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e(f ) = arg max e,D |H| i=0 w i d\u2208D h i (d) H i (D)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Feature Locality & Structured Hypotheses", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "This contrasts with non-local features such as the language model (LM), which cannot be exactly calculated given an arbitrary partial hypothesis, which may lack both left and right context. 1 Such features require special handling including future cost estimation. In this study, we limit ourselves to local features, leaving the traditional non-local LM feature unchanged. In general, feature locality is relative to a particular structured hypothesis space, and is unrelated to the structured features described in Section 4.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Locality & Structured Hypotheses", |
|
"sec_num": "2.1" |
|
}, |
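{

"text": "To make the decomposition in equation 2 concrete, here is a minimal Python sketch (ours, not part of the original system; rule and feature names are illustrative) of how a linear score is accumulated from rule-local feature values:\n\ndef score_derivation(derivation, weights):\n    # derivation: a list of partial derivations d, each with its local feature values h_i(d)\n    totals = {}\n    for rule in derivation:\n        for name, value in rule['features'].items():\n            # H_i(D) is the sum of h_i(d) over all partial derivations d in D\n            totals[name] = totals.get(name, 0.0) + value\n    # linear model score: sum over i of w_i * H_i(D)\n    return sum(weights.get(name, 0.0) * total for name, total in totals.items())\n\n# hypothetical rule-local features for a two-rule derivation\nderivation = [{'features': {'lex_prob': -1.2, 'word_count': 2}}, {'features': {'lex_prob': -0.4, 'word_count': 1}}]\nprint(score_derivation(derivation, {'lex_prob': 0.8, 'word_count': -0.1}))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Locality & Structured Hypotheses",

"sec_num": "2.1"

},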
|
{ |
|
"text": "Unlike models that rely primarily on a large number of sparse indicator features, state-of-the-art machine translation systems rely heavily on a small number of dense real-valued features. However, unlike indicator features, real-valued features may benefit from non-linear transformations to allow a linear model to better fit the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Non-Linearity and Separability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Decoders use a linear model to rank hypotheses, selecting the highest-ranked derivation. Since the absolute score of the model is irrelevant, non-linear responses are useful only in cases where they elicit novel rankings. In this section, we will discuss these cases in terms of separability. Here, we are separating the correctly ranked pairs of hypotheses from the incorrect in the implicit pairwise rankings defined by the total ordering on hypotheses provided by our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Non-Linearity and Separability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "When the local feature vectors h of each oraclebest 2 hypothesis (or hypotheses) are distinct from those of all other competing hypotheses, we say that the inputs are oracle separable given the feature set. If there exists a weight vector that distinguishes the oracle-best ranking from all other rankings under a linear model, we say that the inputs are linearly separable given the feature set. If the inputs are oracle separable but not linearly separable, we say that there are non-linearities that are unexplained by the feature set. For example, this can happen if a feature is positively related to quality in some regions but negatively related in other regions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Non-Linearity and Separability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As we add more sentences to our corpus, separability becomes increasingly difficult. For a given corpus, if all hypotheses are oracle separable, we can always produce the oracle translation -assuming an optimal (and potentially very complex) model and weight vector. If our hypothesis space also contains all reference translations, we can always recover the reference. In practice, both of these conditions are typically violated to a certain degree. However, if we modify our feature set such that some lower-ranked higher-quality hypothesis can be separated from all higher-ranked lower-quality hypotheses, then we can improve translation quality. For this reason, we believe that separability remains an informative tool for thinking about modeling in MT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Non-Linearity and Separability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Currently, non-linearities in novel real-valued features are typically addressed via manual feature engineering involving a good deal of trial and error (Gimpel and Smith, 2009) 3 or by manually discretizing features (e.g. indicator features for count=N ). We will explore one technique for automatically avoiding non-linearities in Section 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 177, |
|
"text": "(Gimpel and Smith, 2009)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Non-Linearity and Separability", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "While MERT has proven to be a strong baseline, it does not scale to larger feature sets in terms of both inefficiency and overfitting. While MIRA (Chiang et al., 2008) , Rampion (Gimpel and Smith, 2012) , and HOLS (Flanigan et al., 2013) have been shown to be effective over larger feature sets, they are difficult to explicitly regularize -this will become important in Section 4.2. Therefore, we use the PRO optimizer (Hopkins and May, 2011) as our baseline learner since it has been shown to perform comparably to MERT for a small number of features, and to significantly outperform MERT for a large number of features (Hopkins and May, 2011; Ganitkevitch et al., 2012) . Other very recent MT optimizers such as the linear structured SVM (Cherry and Foster, 2012) , AROW (Chiang, 2012) and regularized MERT are also compatible with the discretization and structured regularization techniques described in this article. 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 167, |
|
"text": "(Chiang et al., 2008)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 202, |
|
"text": "(Gimpel and Smith, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 237, |
|
"text": "(Flanigan et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 443, |
|
"text": "(Hopkins and May, 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 645, |
|
"text": "(Hopkins and May, 2011;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 672, |
|
"text": "Ganitkevitch et al., 2012)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 766, |
|
"text": "(Cherry and Foster, 2012)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 788, |
|
"text": "(Chiang, 2012)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning with Large Feature Sets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In this section, we propose a feature induction technique based on discretization that produces a feature set that is less prone to non-linearities (see \u00a72.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We define feature induction as a function \u03a6(y) that takes the result of the feature function y = h(x) \u2208 R and returns a tuple y , j where y \u2208 R is a transformed feature value and j is the transformed feature index. 5 Building on equation 2, we can apply feature induction as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e(f ) = arg max e,D d\u2208D |H| i=0 y ,j =\u03a6 i (h i (d)) w j y H (f ,e,D)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
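{

"text": "The following Python sketch (ours; the function names are hypothetical) illustrates equation 3: the induction function Phi maps each local feature value h_i(d) to a transformed value and an induced feature index, and the transformed features are accumulated rule-locally so that inference remains linear in the induced feature space:\n\ndef induce_local_features(derivation, phi):\n    # phi(i, y) returns (y_prime, j): a transformed value and an induced feature index\n    induced = {}\n    for rule in derivation:\n        for i, y in rule['features'].items():\n            y_prime, j = phi(i, y)\n            induced[j] = induced.get(j, 0.0) + y_prime\n    return induced  # the induced feature vector H'(f, e, D), still a linear feature map\n\ndef model_score(induced, induced_weights):\n    # the decoder still takes a plain dot product, now over the induced features\n    return sum(induced_weights.get(j, 0.0) * v for j, v in induced.items())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discretization and Feature Induction",

"sec_num": "3"

},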
|
{ |
|
"text": "At first glance, one might be tempted to simply choose some non-linear function for \u03a6 (e.g. log(x), exp(x), sin(x), x n ). However, even if we were to restrict ourselves to some \"standard\" set of non-linear functions, many of these functions have hyperparameters that are not directly tunable by conventional optimizers (e.g.period and amplitude for sin, n in x n ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "[Figure 1 schematic: H \u2192 Learning \u2192 w, yielding the original linear model w \u00b7 H; H \u2192 Feature Induction \u2192 H' \u2192 Learning \u2192 w', yielding the induced linear model w' \u00b7 H'.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discretization and Feature Induction",

"sec_num": "3"

},
|
{ |
|
"text": "Figure 1: Top: A traditional learning procedure, assigning a set of weights to a fixed feature set. Bottom: Discretization, our feature induction technique, expands the feature set as part of learning, while still producing a linear model for inference, albeit with more features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Discretization allows us to avoid many nonlinearities ( \u00a72.2) while preserving the fast inference provided by feature locality ( \u00a72.1). We first discretize real-valued features into a set of indicator 5 One could also imagine a feature transformation function \u03a6 that returns a vector of bins for a single value returned by a feature function h or a transformation that has access to values from multiple feature functions at once.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 202, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "features and then use a conventional optimizer to learn a weight for each indicator feature ( Figure 1 ). This technique is sometimes referred to as binning and is closely related to quantization. Effectively, discretization allows us to re-shape a feature function ( Figure 2 ). In fact, given an infinite number of bins, we can perform any non-linear transformation of the original function. Figure 2 : Left: A real-valued feature. Bold dots represent points where we could imagine bins being placed. However, since we may only adjust w 0 , these \"bins\" will be rigidly fixed along the feature function's value. Right: After discretizing the feature into 4 bins, we may now adjust 4 weights independently, to achieve a non-linear re-shaping of the function.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 102, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 276, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 402, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discretization and Feature Induction", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03a6 i in terms of a binning function BIN i (x) \u2208 R \u2192 N: \u03a6 i (x) = 1, i BIN i (x)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "For indicator discretization, we define", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where the operator indicates concatenation of a feature identifier with a bin identifier to form a new, unique feature identifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "For indicator discretization, we define", |
|
"sec_num": null |
|
}, |
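{

"text": "As a sketch (ours; the equal-width binning is purely illustrative, whereas Section 3.2 builds bins by uniform population), indicator discretization maps a real value to a single firing indicator whose identifier concatenates the original feature identifier with a bin identifier, as in equation 4:\n\ndef make_bin_fn(lo, hi, num_bins):\n    # BIN_i: R -> N; a simple equal-width binning, for illustration only\n    width = (hi - lo) / num_bins\n    return lambda x: min(num_bins - 1, max(0, int((x - lo) / width)))\n\ndef phi_indicator(feature_id, x, bin_fn):\n    # Phi_i(x) = <1, feature_id concatenated with bin id>\n    return 1.0, '{}_bin{}'.format(feature_id, bin_fn(x))\n\nbin_fn = make_bin_fn(0.0, 1.0, 4)\nprint(phi_indicator('TM', 0.113, bin_fn))  # (1.0, 'TM_bin0')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "For indicator discretization, we define",

"sec_num": null

},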
|
{ |
|
"text": "Unlike other approaches to non-linear learning in MT, we perform non-linear transformation on partial hypotheses as in equation 3 where discretization is applied as \u03a6 i (h i (d)), which allows locally non-linear transformations, instead of applying \u03a6 to complete hypotheses as in \u03a6 i (H i (D)), which would allow globally non-linear transformations. This enables our transformed model to produce non-linear responses with regard to the initial feature set H while inference remains linear with regard to the optimized parameters w . Importantly, our transformed feature set requires no additional non-local information for inference. By performing transformations within a local context, we effectively reinterpret the feature set. For example, the familiar target word count feature", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Discretization", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "[Figure 3 example: local discretization of rule-local features over the hypothesis \"el gato come furtivamente\"; real-valued features such as h_TM=0.1, h_TM=0.2, h_Count=2, h_TM=0.113 become indicator features such as h_TM_0.1=1, h_TM_0.2=1, h_Count_2=1.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Discretization",

"sec_num": "3.1"

},
|
{ |
|
"text": "el gato come furtivamente In terms of predictive power, this transformation can provide the learned model with increased ability to discriminate between hypotheses. This is primarily a result of moving to a higher-dimensional feature space. As we introduce new parameters, we expect that some hypotheses that were previously indistinguishable under H become separable under H ( \u00a72.2). We show specific examples comparing linear, locally non-linear, and globally non-linear models in Figures 4 -6. As seen in these examples, locally non-linear models (Eq. 3, 4) are not an approximation nor a subset of globally non-linear models, but rather a different class of models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Discretization", |
|
"sec_num": "3.1" |
|
}, |
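{

"text": "A small sketch (ours) of the contrast drawn above, using the target word count feature: global discretization fires one indicator for the complete hypothesis' count, while local discretization fires one indicator per rule and sums them, so many induced features can fire for a single hypothesis:\n\nrule_word_counts = [1, 2, 1, 1]  # target words emitted by each rule in one derivation\n\n# globally non-linear view: a single indicator on the hypothesis-level count H_i(D)\nglobal_features = {'word_count_{}'.format(sum(rule_word_counts)): 1}\n\n# locally non-linear view: discretize each rule-local count h_i(d), then sum the indicators\nlocal_features = {}\nfor count in rule_word_counts:\n    key = 'word_count_{}'.format(count)\n    local_features[key] = local_features.get(key, 0) + 1\n\nprint(global_features)  # {'word_count_5': 1}\nprint(local_features)   # {'word_count_1': 3, 'word_count_2': 1}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Discretization",

"sec_num": "3.1"

},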
|
{ |
|
"text": "To initialize the learning procedure, we construct the binning function BIN used by the indicator di- Figure 4 : An example showing a collinearity over multiple input sentences S 3 , S 4 in which the oracle-best hypothesis is \"trapped\" along a line with other lower quality hypotheses in the linear model's output space. Ranking shows how the hypotheses would appear in a k-best list with each partial derivation having its partial feature vector h under it; the complete feature vector H is shown to the right of each hypothesis and the oracle-best hypothesis is notated with a * . Pairs explicates the implicit pairwise rankings. Pairwise Ranking graphs those pairs in order to visualize whether or not the hypotheses are separable. (\u2295 indicates that the pair of hypotheses is ranked correctly according to the extrinsic metric and indicates the pair is ranked incorrectly. In the pairwise ranking row, some \u2295 and points are annotated with their positions along the third axis H 3 (omitted for clarity). Collinearity can also occur with a single input having at least 3 hypotheses. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 110, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Binning Algorithm", |
|
"sec_num": "3.2" |
|
}, |
|
|
{ |
|
"text": "Inseparable Inseparable Separable Figure 5 : An example showing a trivial \"collision\" in which two hypotheses of differing quality receive the same model score until local discretization is applied. The two hypotheses are indistinguishable under a linear model with the feature set H, as shown by the zero-difference in the \"pairs\" row. While a globally non-linear transformation does not yield any improvement, local discretization allows the hypotheses to be properly ranked due to the higherdimensional feature space H 2 , H 4 . See Figure 4 for an explanation of notation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 42, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 544, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Separable Figure 6 : An example demonstrating a non-linear decision boundary induced by discretization. The non-linear nature of the decision boundary can be seen clearly when the induced feature set", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 18, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H A 1 , H B 1 , H B \u22124 (right)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is considered in the original feature space H A , H B (left). In the pairwise ranking row, two axes (H A 1 , H B 1 ) are plotted while the third axis H B \u22124 is indicated only as stand-off annotations for clarity . Given a larger number of hypotheses, such situations could also arise within a single sentence. See Figure 4 for an explanation of notation. cretizer \u03a6. We have two desiderata: (1) any monotonic transformation of a feature should not affect the induced binning since we should not require feature engineers to determine the optimal feature transformation and (2) no bin's data should be so sparse that the optimizer cannot reliably estimate a weight for each bin. Therefore, we construct bins that are (i) populated uniformly subject to (ii) each bin containing no more than one feature value. We call this approach uniform population feature binning. While one could consider the predictive power of the features when determining bin boundaries, this would suggest that we should jointly optimize and determine bin boundaries, which is beyond the scope of this work. This problem has recently been considered for NLP by Suzuki and Nagata (2013) and for MT by Liu et al. (2013b) , though the latter involves decoding the entire training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1135, |
|
"end": 1159, |
|
"text": "Suzuki and Nagata (2013)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1174, |
|
"end": 1192, |
|
"text": "Liu et al. (2013b)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 322, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let X be the list of feature values to bin where i indexes feature values x i \u2208 X and their associ-ated frequencies f i . We want each bin to have a uniform size u. For the sake of simplifying our final algorithm, we first create adjusted frequencies f i so that very frequent feature values will not occupy more than 100% of a bin via the following algorithm, which iterates over k:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "u k = 1 |X | |X | i=1 f k i (5) f k+1 i = min(f k i , u k )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
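{

"text": "A minimal Python sketch of the frequency-adjustment iteration in equations 5 and 6, under our reading of them (the iteration cap is our addition for safety): very frequent values are clipped to the current ideal bin size until no value exceeds it:\n\ndef adjust_frequencies(freqs, max_iters=1000):\n    # freqs: observed frequency f_i of each distinct feature value x_i\n    f = list(freqs)\n    u = sum(f) / len(f)\n    for _ in range(max_iters):\n        u = sum(f) / len(f)              # ideal bin size u^k (equation 5)\n        if all(fi <= u for fi in f):     # no value occupies more than one bin's mass\n            break\n        f = [min(fi, u) for fi in f]     # clip over-frequent values (equation 6)\n    return f, u                          # adjusted frequencies and u*",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linear Locally Non-Linear",

"sec_num": null

},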
|
{ |
|
"text": "which returns u = u k when f k i < u k \u2200i. Next, we solve for a binning B of N bins where b j is the population of each bin:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "arg min B 1 N N j=1 |b j \u2212 u |", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use Algorithm 1 to produce this binning. In our experiments, we construct a translation model for each sentence in our tuning corpus; we then add a feature value instances to X for each rule instance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm 1 POPULATEBINSUNIFORMLY(X , N )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Remaining values for b j , s.t. b k > 0 \u2200k def R(j) = |X | \u2212 (N \u2212 j \u2212 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Remaining frequency mass within ideal bound def", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "C(j) = j \u2022 u \u2212 j k b k i \u2190 1 Current feature value for j \u2208 [1, N ] do while i \u2264 R(j) and f i \u2264 C(j) do b j \u2190 b j \u222a {x i } i \u2190 i + 1 end while", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Handle value that straddles ideal boundaries by minimizing its violation of the ideal if i \u2264 R(j) and f i \u2212C(j)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f i < 0.5 then b j \u2190 b j \u222a {x i } i \u2190 i + 1 end if end for return B 4 Structured Regularization", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
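{

"text": "The following Python sketch is our reading of Algorithm 1, consuming the adjusted frequencies and u* from the sketch above: sorted values are greedily assigned to N bins so that each bin's frequency mass stays near the ideal size, a straddling value joining the current bin only when the violation is less than half of that value's mass. It is an interpretation of the pseudocode, not the authors' implementation:\n\ndef populate_bins_uniformly(values, freqs, num_bins, u_star):\n    # values: distinct feature values in sorted order; freqs: their adjusted frequencies\n    bins = [[] for _ in range(num_bins)]\n    placed_mass = 0.0\n    i = 0\n    for j in range(num_bins):\n        # R(j): stop early enough that every later bin still receives at least one value\n        last_allowed = len(values) - (num_bins - j - 1)\n        while i < last_allowed and placed_mass + freqs[i] <= (j + 1) * u_star:\n            bins[j].append(values[i])   # b_j <- b_j union {x_i}\n            placed_mass += freqs[i]\n            i += 1\n        # a straddling value joins bin j only if it overshoots the ideal boundary by < half its mass\n        if i < last_allowed:\n            overshoot = (placed_mass + freqs[i]) - (j + 1) * u_star\n            if overshoot / freqs[i] < 0.5:\n                bins[j].append(values[i])\n                placed_mass += freqs[i]\n                i += 1\n    return bins",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linear Locally Non-Linear",

"sec_num": null

},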
|
{ |
|
"text": "Unfortunately, choosing the right number of bins can have important effects on the model, including: Fidelity. If we choose too few bins, we risk degrading the model's performance by discarding important distinctions encoded in fine differences between the feature values. In the extreme, we could reduce a real-valued feature to a single indicator feature. Sparsity. If we choose too many bins, we risk making each indicator feature too sparse, which is likely to result in the optimizer overfitting such that we generalize poorly to unseen data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While one may be tempted to simply throw more data or millions of sparse features at the problem, we elect to more strategically use existing data, since (1) large in-domain tuning data is not always available, and (2) when it is available, it can add considerable computational expense. In this section, we explore methods for mitigating data sparsity by embedding more knowledge into the learning procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Locally Non-Linear", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One very simplistic way we could combat sparsity is to extend the edges of each bin such that they cover their neighbors' values (see Equation 4):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlapping Bins", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03a6 i (x) = 1, i BIN i (x) if x \u2208 \u222a i+1 k=i\u22121 BIN k (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlapping Bins", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "This way, each bin will have more data points to estimate its weight, reducing data sparsity, and the bins will mutually constrain each other, reducing the ability to overfit. We include this technique as a contrastive baseline for structured regularization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overlapping Bins", |
|
"sec_num": "4.1" |
|
}, |
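{

"text": "A minimal sketch (ours) of the overlapping-bin variant in equation 8: a value fires the indicator of its own bin and of the two adjacent bins, so neighboring bins share data points and mutually constrain each other's weights:\n\ndef phi_overlapping(feature_id, x, bin_fn, num_bins):\n    # fire the indicators of the value's own bin and of its immediate neighbors\n    own_bin = bin_fn(x)\n    fired = []\n    for k in (own_bin - 1, own_bin, own_bin + 1):\n        if 0 <= k < num_bins:\n            fired.append((1.0, '{}_bin{}'.format(feature_id, k)))\n    return fired",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overlapping Bins",

"sec_num": "4.1"

},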
|
{ |
|
"text": "Regularization has long been used to discourage optimization solutions that give too much weight to any one feature. This encodes our prior knowledge that such solutions are unlikely to generalize. Regularization terms such as the p norm are frequently used in gradient-based optimizers including our baseline implementation of PRO. Unregularized discretization is potentially brittle with regard to the number of bins chosen. Primarily, it suffers from sparsity. At the same time, we note that we know much more about discretized features than initial features since we control how they are formed. These features make up a structured feature space. With these things in mind, we propose linear neighbor regularization, a structured regularizer that embeds a small amount of knowledge into the objective function: that the indicator features resulting from the discretization of a single real-valued feature are spatially related. We expect similar weights to be given to the indicator features that represent neighboring values of the original real-valued feature such that the resulting transformation appears somewhat smooth.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Neighbor Regularization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To incorporate this knowledge of nearby bins, the linear neighbor regularizer R LNR penalizes each feature's weight by the squared amount it differs from its neighbors' midpoint:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Neighbor Regularization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "R LNR (w, j) = 1 2 (w j\u22121 + w j+1 ) \u2212 w j 2 (9) R LNR (w) = \u03b2 |h|\u22121 j=2 R LNR (w, j)", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Linear Neighbor Regularization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This is a special case of the feature network regularizer of Sandler (2010) . Unlike traditional regularizers, we do not hope to reduce the active feature count. With the PRO loss l and a 2 regularizater R 2 , our final loss function internal to each iteration of PRO is:", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 75, |
|
"text": "Sandler (2010)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Neighbor Regularization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "L(w) = l(x, y; w) + R 2 (w) + R LNR (w) (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Neighbor Regularization", |
|
"sec_num": "4.2" |
|
}, |
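{

"text": "A sketch (ours) of the linear neighbor regularizer and the combined objective of equations 9-11, applied to the weights of the bins induced from one real-valued feature, ordered by bin position; pro_loss stands in for the PRO loss l(x, y; w):\n\ndef r_lnr(w, beta):\n    # penalize each interior weight by its squared distance from its neighbors' midpoint\n    penalty = 0.0\n    for j in range(1, len(w) - 1):\n        midpoint = 0.5 * (w[j - 1] + w[j + 1])\n        penalty += (midpoint - w[j]) ** 2\n    return beta * penalty\n\ndef total_loss(w, pro_loss, l2_strength, beta):\n    # L(w) = PRO loss + l2 regularizer + linear neighbor regularizer (equation 11)\n    l2 = l2_strength * sum(wj * wj for wj in w)\n    return pro_loss(w) + l2 + r_lnr(w, beta)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linear Neighbor Regularization",

"sec_num": "4.2"

},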
|
{ |
|
"text": "However, as \u03b2 \u2192 \u221e, the linear neighbor regularizer R LNR forces a linear arrangement of weightsthis violates our premise that we should be agnostic to non-linear transformations. We now describe a structured regularizer R MNR whose limiting solution is any monotone arrangement of weights. We augment R LNR with a smooth damping term D(w, j), which has the shape of a bathtub curve with steepness \u03b3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monotone Neighbor Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "D(w, j) = tanh 2\u03b3 1 2 (w j\u22121 + w j+1 ) \u2212 w j 1 2 (w j\u22121 \u2212 w j+1 ) (12) R MNR (w) = \u03b2 |h|\u22121 j=2 D(w, j)R LNR (w, j) (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monotone Neighbor Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "D is nearly zero while w j \u2208 [w j\u22121 , w j+1 ] and nearly one otherwise. Briefly, the numerator measures how far w j is from the midpoint of w j\u22121 and w j+1 while the denominator scales that distance by the radius from the midpoint to the neighboring weight.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monotone Neighbor Regularization", |
|
"sec_num": "4.3" |
|
}, |
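{

"text": "A sketch of the monotone neighbor regularizer following our reconstruction of equations 12 and 13 above (the exact damping form, which we read as tanh raised to the power 2*gamma of the scaled midpoint distance, should be checked against the original paper; the absolute value is our addition to keep the power well defined):\n\nimport math\n\ndef damping(w, j, gamma):\n    # bathtub-shaped term: near 0 while w_j lies between its neighbors, near 1 otherwise\n    midpoint_dist = 0.5 * (w[j - 1] + w[j + 1]) - w[j]\n    radius = 0.5 * (w[j - 1] - w[j + 1])\n    if radius == 0.0:\n        return 1.0  # degenerate case: equal neighbors, any deviation is penalized fully\n    return math.tanh(abs(midpoint_dist / radius)) ** (2 * gamma)\n\ndef r_mnr(w, beta, gamma):\n    # damped linear neighbor regularizer: monotone weight arrangements are nearly unpenalized\n    penalty = 0.0\n    for j in range(1, len(w) - 1):\n        midpoint = 0.5 * (w[j - 1] + w[j + 1])\n        penalty += damping(w, j, gamma) * (midpoint - w[j]) ** 2\n    return beta * penalty",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Monotone Neighbor Regularization",

"sec_num": "4.3"

},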
|
{ |
|
"text": "Formalism: In our experiments, we use a hierarchical phrase-based translation model (Chiang, 2007) . A corpus of parallel sentences is first word-aligned, and then phrase translations are extracted heuristically. In addition, hierarchical grammar rules are extracted where phrases are nested. In general, our choice of formalism is rather unimportant -our techniques should apply to most common phrasebased and chart-based paradigms including Hiero and syntactic systems. Decoder: For decoding, we will use cdec (Dyer et al., 2010) , a multi-pass decoder that supports syntactic translation models and sparse features. Optimizer: Optimization is performed using PRO (Hopkins and May, 2011) as implemented by the cdec decoder. We run PRO for 30 iterations as suggested by Hopkins and May (2011) . The PRO optimizer internally uses a L-BFGS optimizer with the default 2 regularization implemented in cdec. Any additional regularization is explicitly noted. Baseline Features: We use the baseline features produced by Lopez' suffix array grammar extractor (Lopez, 2008) , which is distributed with cdec. 6 All code at http://github.com/jhclark/cdec Bidirectional lexical log-probabilities, the coherent phrasal translation log-probability, target word count, glue rule count, source OOV count, target OOV count, and target language model logprobability. Note that these features may be simplified or removed as specified in each experimental condition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 98, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 531, |
|
"text": "(Dyer et al., 2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 689, |
|
"text": "(Hopkins and May, 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 793, |
|
"text": "Hopkins and May (2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1053, |
|
"end": 1066, |
|
"text": "(Lopez, 2008)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1101, |
|
"end": 1102, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup 6", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Zh\u2192En Chinese Resources: For the Chinese\u2192English experiments, including the completed work presented in this proposal, we train on the Foreign Broadcast Information Service (FBIS) corpus of approximately 300,000 sentence pairs with about 9.4 million English words. We tune weights on the NIST MT 2006 dataset, tune hyperparameters on NIST MT05, and test on NIST MT 2008. Arabic Resources: We build an Arabic\u2192English system, training on the large NIST MT 2009 constrained training corpus of approximately 5 million sentence pairs with about 181 million English words. We tune weights on the NIST MT 2006 dataset, tune hyperparameters on NIST MT 2005, and test on NIST MT 2008.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup 6", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also construct a Czech\u2192English system based on the CzEng 1.0 data (Bojar et al., 2012) . First, we lowercased and performed sentence-level deduplication of the data. 7 Then, we uniformly sampled a training set of 1M sentences (sections 1 -97) along with a weighttuning set (section 98), hyperparameter-tuning (section 99), and test set (section 99) from the paraweb domain contained of CzEng. 8 Sentences less than 5 words were discarded due to noise. Evaluation: We quantify increases in translation quality using case-insensitive BLEU (Papineni et al., 2002) . We control for test set variation and optimizer instability by averaging over multiple optimizer replicas (Clark et al., 2011 Table 3 : Top: Translation quality for systems with and without the typical log transform. Bottom: Translation quality for systems using discretization and structured regularization with probabilities P or counts C as the input of discretization. MNR P consistently recovers or outperforms a state-of-the-art system, but without any assumptions about how to transform the initial features. All scores are averaged over 3 end-to-end optimizer replications. denotes significantly different than log probs (row2) with p(CHANCE) < 0.01 under Clark et al. 2011and \u2020 is likewise used with regard to P (row 1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 89, |
|
"text": "(Bojar et al., 2012)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 563, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 691, |
|
"text": "(Clark et al., 2011", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 692, |
|
"end": 699, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Czech resources:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our first set of experiments, we seek to answer \"Does non-linearity matter?\" by starting with our baseline system of 7 typical features (the log Probability system) and we then remove the log transform from all of the log probability features in our grammar (the Probs. system). The results are shown in Table 3 (rows 1, 2). If a na\u00efve feature engineer were to remove the non-linear log transform, the systems would degrade between 1.1 BLEU and 3.6 BLEU. From this, we conclude that non-linearity does affect translation quality. This is a potential pitfall for any real-valued feature including probability features, count features, similarity measures, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 314, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does Non-Linearity Matter?", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Next, we evaluate the effects of discretization (Disc), overlapping bins (Over.), linear neighbor regularization (LNR), and monotone neighbor regularization (MNR) on three language pairs: a small Zh\u2192En system, a large Ar\u2192En system and a large Cz\u2192En system. In the first row of Table 3 , we use raw probabilities rather than log probabilities for p coherent (t|s), p lex (t|s), and p lex (s|t). In rows 3 -7, all translation model features (without the logtransformed features) are then discretized into indicator features. 10 The number of bins and the structured regularization strength were tuned on the hyperparameter tuning set. Discretization alone does not consistently recover the performance of the log transformed features (row 3). The na\u00efve overlap strategy in fact degrades performance (row 4). Linear neighbor regularization (row 5) behaves more consistently than discretization alone, but is consistently outperformed by the monotone neighbor regularizer (row 6), which is able to meet or significantly exceed the performance of the log transformed system. Importantly, this is done without any knowledge of the correct nonlinear transformation. In the final row, we go a step further by removing p coherent (t|s) altogether and replacing it with simple count features: c(s) and c(s, t), with slight to no degradation in quality. 11 We take this as evidence that a feature engineer developing a new real-valued feature may find discretization and monotone neighbor regularization useful.", |
|
"cite_spans": [ |
|
{ |
|
"start": 523, |
|
"end": 525, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1343, |
|
"end": 1345, |
|
"text": "11", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 284, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Non-Linear Transformations", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We also observe that different data sets benefit from non-linear feature transformation in to different degrees (Table 3 , rows 1, 2). We noticed that discretization with monotone neighbor regularization is able to improve over a log transform (rows 2, 6) in proportion to the improvement of a log transform over probability-based features (rows 1, 2).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 120, |
|
"text": "(Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Non-Linear Transformations", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "To provide insight into how translation quality can be affected by the number of bits for discretization, we offer Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Non-Linear Transformations", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In Figure 7 , we present the weights learned by the Ar\u2192En system for probability-based features. We see that even without a bias toward a log transform, a log-like shape still emerges for many SMT features based only on the criteria of optimizing BLEU and a preference for monotonicity. However, the optimizer chooses some important variations on the log curve, especially for low probabilities, that lead to improvements in translation quality. Original raw count feature value 0.08 0.07 Weight Figure 7 : Plots of weights learned for the discretized p coherent (e|f ) (top) and c(f ) (bottom) for the Ar\u2192En system with 4 bits and monotone neighbor regularization. p(e|f ) > 0.11 is omitted for exposition as values were constant after this point. The gray line fits a log curve to the weights. The system learns a shape that deviates from the log in several regions. Each non-monotonic segment represents the learner choosing to better fit the data while paying a strong regularization penalty.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 504, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Non-Linear Transformations", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Previous work on feature discretization in machine learning has focused on the conversion of realvalued features into discrete values for learners that are either incapable of handling real-valued inputs or perform suboptimally given real-valued inputs (Dougherty et al., 1995; Kotsiantis and Kanellopoulos, 2006) . Decision trees and random forests have been successfully used in language modeling (Jelinek et al., 1994; Xu and Jelinek, 2004) and parsing (Charniak, 2010; Magerman, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 277, |
|
"text": "(Dougherty et al., 1995;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 313, |
|
"text": "Kotsiantis and Kanellopoulos, 2006)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 421, |
|
"text": "(Jelinek et al., 1994;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 443, |
|
"text": "Xu and Jelinek, 2004)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 472, |
|
"text": "(Charniak, 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 488, |
|
"text": "Magerman, 1995)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Kernel methods such as support vector machines (SVMs) are often considered when non-linear interactions between features are desired since they allow for easy usage of non-linear kernels. Wu et al. (2004) showed improvements using non-linear kernel PCA for word sense disambiguation. Taskar et al. (2003) describes a method for incorporating kernels into structured Markov networks. Tsochantaridis et al. (2004) then proposed a structured SVM for grammar learning, named-entity recognition, text classification, and sequence alignment. This was followed by a structured SVM with inexact inference (Finley and Joachims, 2008) and the latent structured SVM (Yu and Joachims, 2009) . Even within kernel methods, learning non-linear mappings with kernels remains an open area of research; For example, Cortes et al. (2009) investigated learning non-linear combinations of kernels. In MT, Gim\u00e9nez and M\u00e0rquez (2007) used a SVM to annotate a phrase table with binary features indicating whether or not a phrase translation was appropriate in context. Nguyen et al. (2007) also applied nonlinear features for SMT n-best reranking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 204, |
|
"text": "Wu et al. (2004)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 304, |
|
"text": "Taskar et al. (2003)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 411, |
|
"text": "Tsochantaridis et al. (2004)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 624, |
|
"text": "(Finley and Joachims, 2008)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 678, |
|
"text": "(Yu and Joachims, 2009)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 818, |
|
"text": "Cortes et al. (2009)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 910, |
|
"text": "Gim\u00e9nez and M\u00e0rquez (2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1045, |
|
"end": 1065, |
|
"text": "Nguyen et al. (2007)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Toutanova and Ahn (2013) use a form of regression decision trees to induce locally non-linear features in a n-best reranking framework. He and Deng (2012) directly optimize the lexical and phrasal features using expected BLEU. Nelakanti et al. (2013) use tree-structured p regularizers to train language models and improve perplexity over Kneser-Ney.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 154, |
|
"text": "He and Deng (2012)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "Nelakanti et al. (2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Learning parameters under weak order restrictions has also been studied for regression. Isotonic regression (Barlow et al., 1972; Robertson et al., 1988; Silvapulle and Sen, 2005) fits a curve to a set of data points such that each point in the fitted curve is greater than or equal to the previous point in the curve. Nearly isotonic regression allows violations in monotonicity (Tibshirani et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 129, |
|
"text": "(Barlow et al., 1972;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "Robertson et al., 1988;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 179, |
|
"text": "Silvapulle and Sen, 2005)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 405, |
|
"text": "(Tibshirani et al., 2011)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the absence of highly refined knowledge about a feature, discretization with structured regularization enables higher quality impact of new feature sets that contain non-linearities. In our experiments, we observed that discretization out-performed na\u00efve features lacking a good non-linear transformation by up to 4.4 BLEU and that it can outperform a baseline by up to 0.8 BLEU while dropping the log transform of the lexical probabilities and removing the phrasal probabilities in favor of counts. Looking beyond this basic feature set, non-linear transformations could be the difference between showing quality improvements or not for novel features. As researchers include more real-valued features including counts, similarity measures, and separately-trained models with millions of features, we suspect this will become an increasingly relevant issue. We conclude that non-linear responses play an important role in SMT, even for a commonly-used feature set, an observation that we hope will inform feature engineers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "This is especially problematic for chart-based decoders.2 We define the oracle-best hypothesis in terms of some external quality measure such as BLEU", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Gimpel et al. eventually used raw probabilities in their model rather than log-probabilities.4 Since we dispense with nearly all of the original dense features and our structured regularizer is scale sensitive, one would need to use the 1-renormalized variant of regularized MERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "found in many modern MT systems is often conceptualized as \"what is the count of target words in the complete hypothesis?\" A hypothesis-level view of discretization would view this as \"Did this hypothesis have 5 target words?\". Only one such feature will fire for each hypothesis. However, local discretization reinterprets this feature as \"How many phrases in the complete hypothesis have 1 target word?\" Many such features are likely to fire for each hypothesis. We provide a further example of this technique inFigure 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
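As a concrete companion to the word-count footnote above and the paper's rule-local discretization, here is a minimal sketch of how a real-valued, rule-local feature can be replaced by bin indicators whose counts are summed over the rules of a derivation. The bin width, the feature-naming scheme, and both helper functions are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def discretize_rule(rule_feats, width=0.1):
    """Replace each real-valued rule-local feature with an indicator that
    names its bin (bin width and naming are illustrative assumptions)."""
    fired = Counter()
    for name, value in rule_feats.items():
        bin_floor = round(int(value / width) * width, 6)
        fired["%s=%g" % (name, bin_floor)] += 1
    return fired

def hypothesis_features(rules):
    """Global features are the sum of local indicator counts over all rules
    in the derivation, so scoring stays linear in the indicator features."""
    total = Counter()
    for rule_feats in rules:
        total += discretize_rule(rule_feats)
    return total

# Two phrases, each contributing one target word: the local indicator
# "TgtWords=1" fires twice, instead of a single hypothesis-level
# "hypothesis has 2 target words" feature firing once.
print(hypothesis_features([{"TgtWords": 1.0}, {"TgtWords": 1.0}]))
# Counter({'TgtWords=1': 2})
```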
|
{ |
|
"text": "CzEng is distributed deduplicated at the document level, leading to very high sentence-level overlap.8 The section splits recommended byBojar et al. (2012). 9 MultEval 0.5.1: github.com/jhclark/multeval", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also keep a real-valued copy of the word penalty to help normalize the language model.11 These features can single-out rules with c(s) = 1, c(s, t) = 1, subsuming separate low-count features", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by Google Faculty Research grants 2011 R2 705 and 2012 R2 10 and by the NSF-sponsored XSEDE computing resources program under grant TG-CCR110017.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Joint Language and Translation Modeling with Recurrent Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Empirical Methods in Natural Language Processing, number October", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1044--1054", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint Language and Translation Mod- eling with Recurrent Neural Networks. In Empirical Methods in Natural Language Processing, number Oc- tober, pages 1044-1054.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Statistical inference under order restrictions; the theory and application of isotonic regression", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Barlow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bartholomew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bremner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Brunk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. E. Barlow, D. Bartholomew, J. M. Bremner, and H. D. Brunk. 1972. Statistical inference under order restric- tions; the theory and application of isotonic regres- sion. Wiley.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The Joy of Parallelism with CzEng 1 .0", |
|
"authors": [ |
|
{ |
|
"first": "Ondej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondej", |
|
"middle": [], |
|
"last": "Zden\u011bk\u017eabokrtsk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petra", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Galu\u0161\u010d\u00e1kov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Majli\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u00ed", |
|
"middle": [], |
|
"last": "Mare\u010dek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Mar\u0161\u00edk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Nov\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ale\u0161", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tamchyna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of LREC2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ondej Bojar, Zden\u011bk\u017dabokrtsk\u00fd, Ondej Du\u0161ek, Pe- tra Galu\u0161\u010d\u00e1kov\u00e1, Martin Majli\u0161, David Mare\u010dek, Ji\u00ed Mar\u0161\u00edk, Michal Nov\u00e1k, Martin Popel, and Ale\u0161 Tam- chyna. 2012. The Joy of Parallelism with CzEng 1 .0. In Proceedings of LREC2012, Istanbul, Turkey. Euro- pean Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Mathematics of Statistical Machine Translation : Parameter Estimation. Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Stephen A Della", |
|
"middle": [], |
|
"last": "Peter E Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent J Della", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert L", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter E Brown, Stephen A Della Pietra, Vincent J Della Pietra, and Robert L Mercer. 1993. The Mathematics of Statistical Machine Translation : Parameter Estima- tion. Computational Linguistics, 10598.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Top-Down Nearly-Context-Sensitive Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Empirical Methods in Natural Language Processing, number October", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "674--683", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2010. Top-Down Nearly-Context- Sensitive Parsing. In Empirical Methods in Natural Language Processing, number October, pages 674- 683.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Batch Tuning Strategies for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the North American Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "427--436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Pro- ceedings of the North American Association for Com- putational Linguistics, pages 427-436.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Online large-margin training of syntactic and structural translation features", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Marton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing -EMNLP '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and struc- tural translation features. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing -EMNLP '08, pages 224-233, Morris- town, NJ, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Hierarchical Phrase-Based Translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "201--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2007. Hierarchical Phrase-Based Trans- lation. Computational Linguistics, 33(2):201-228, June.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hope and fear for discriminative training of statistical translation models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "1159--1187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. Journal of Machine Learning Research, 13:1159-1187.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jonathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H Clark, Chris Dyer, Alon Lavie, and Noah A Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Insta- bility. In Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning Non-Linear Combinations of Kernels", |
|
"authors": [ |
|
{ |
|
"first": "Corinna", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afshin", |
|
"middle": [], |
|
"last": "Rostamizadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corinna Cortes, Mehryar Mohri, and Afshin Ros- tamizadeh. 2009. Learning Non-Linear Combinations of Kernels. In Advances in Neural Information Pro- cessing Systems (NIPS 2009), pages 1-9, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Supervised and Unsupervised Discretization of Continuous Features", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Dougherty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Kohavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehran", |
|
"middle": [], |
|
"last": "Sahami", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Twelfth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Dougherty, Ron Kohavi, and Mehran Sahami. 1995. Supervised and Unsupervised Discretization of Continuous Features. In Proceedings of the Twelfth International Conference on Machine Learning, pages 194-202, San Francisco, CA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "cdec: A Decoder, Alignment, and Learning Framework for Finite-State and Context-Free Translation Models", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Eidelman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Association for Computational Linguistics, number July", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Jonathan Weese, Adam Lopez, Vladimir Ei- delman, Phil Blunsom, and Philip Resnik. 2010. cdec: A Decoder, Alignment, and Learning Framework for Finite-State and Context-Free Translation Models. In Association for Computational Linguistics, number July, pages 7-12.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Training structural SVMs when exact inference is intractable", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Finley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "304--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Finley and Thorsten Joachims. 2008. Training structural SVMs when exact inference is intractable. In Proceedings of the International Conference on Ma- chine Learning, pages 304-311, New York, New York, USA. ACM Press.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Large-Scale Discriminative Training for Statistical Machine Translation Using Held-Out Line Search", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "North American Association for Computational Linguistics, number June", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--258", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Flanigan, Chris Dyer, and Jaime Carbonell. 2013. Large-Scale Discriminative Training for Statistical Machine Translation Using Held-Out Line Search. In North American Association for Computational Lin- guistics, number June, pages 248-258.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Regularized Minimum Error Rate Training", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Galley, Chris Quirk, Colin Cherry, and Kristina Toutanova. 2013. Regularized Minimum Error Rate Training. In Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Joshua 4.0: Packing, PRO, and Paraphrases", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "283--291", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Yuan Cao, Jonathan Weese, Matt Post, and Chris Callison-Burch. 2012. Joshua 4.0: Pack- ing, PRO, and Paraphrases. In Workshop on Statistical Machine Translation, pages 283-291.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Contextaware Discriminative Phrase Selection for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Jes\u00fas", |
|
"middle": [], |
|
"last": "Gim\u00e9nez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e0rquez. 2007. Context- aware Discriminative Phrase Selection for Statistical Machine Translation. In Workshop on Statistical Ma- chine Translation, number June, pages 159-166.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Feature-Rich Translation by Quasi-Synchronous Lattice Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Noah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel and Noah A Smith. 2009. Feature-Rich Translation by Quasi-Synchronous Lattice Parsing. In Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Structured Ramp Loss Minimization for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Noah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "North American Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel and Noah A Smith. 2012. Structured Ramp Loss Minimization for Machine Translation. In North American Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Maximum Expected BLEU Training of Phrase and Lexicon Translation Models", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong He and Li Deng. 2012. Maximum Expected BLEU Training of Phrase and Lexicon Translation Models. In Proceedings of the Association for Com- putational Linguistics, Jeju Island, Korea. Microsoft Research.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Tuning as Ranking. Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hopkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1352--1362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as Ranking. Computational Linguistics, pages 1352- 1362.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Decision Tree Parsing using a Hidden Derivation Model", |
|
"authors": [ |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Magerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Workshop on Human Language Technologies (HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frederick Jelinek, John Lafferty, David Magerman, Robert Mercer, Adwait Ratnaparkhi, and Salim Roukos. 1994. Decision Tree Parsing using a Hidden Derivation Model. In Workshop on Human Language Technologies (HLT).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Recurrent Continuous Translation Models", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Empirical Meth- ods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Discretization Techniques : A recent survey", |
|
"authors": [ |
|
{ |
|
"first": "Sotiris", |
|
"middle": [], |
|
"last": "Kotsiantis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Kanellopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "In GESTS International Transactions on Computer Science and Engineering", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "47--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sotiris Kotsiantis and Dimitris Kanellopoulos. 2006. Discretization Techniques : A recent survey. In GESTS International Transactions on Computer Sci- ence and Engineering, volume 32, pages 47-58.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Additive Neural Networks for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Lemao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taro", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eiichiro", |
|
"middle": [], |
|
"last": "Sumita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiejun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013a. Additive Neural Networks for Statistical Machine Translation. In Proceedings of the Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Tuning SMT with A Large Number of Features via Online Feature Grouping", |
|
"authors": [ |
|
{ |
|
"first": "Lemao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiejun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taro", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eiichiro", |
|
"middle": [], |
|
"last": "Sumita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lemao Liu, Tiejun Zhao, Taro Watanabe, and Eiichiro Sumita. 2013b. Tuning SMT with A Large Number of Features via Online Feature Grouping. In Proceed- ings of the International Joint Conference on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Tera-Scale Translation Models via Pattern Matching", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Association for Computational Linguistics Computational Linguistics, number August", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "505--512", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Lopez. 2008. Tera-Scale Translation Models via Pattern Matching. In Association for Computa- tional Linguistics Computational Linguistics, number August, pages 505-512.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Statistical Decision-Tree Models for Parsing", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "David M Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Magerman. 1995. Statistical Decision-Tree Models for Parsing. In Association for Computational Linguistics, pages 276-283.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Structured Penalties for Log-linear Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Anil", |
|
"middle": [], |
|
"last": "Nelakanti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cedric", |
|
"middle": [], |
|
"last": "Archambeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Mairal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Bouchard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anil Nelakanti, Cedric Archambeau, Julien Mairal, Fran- cis Bach, and Guillaume Bouchard. 2013. Structured Penalties for Log-linear Language Models. In Empiri- cal Methods in Natural Language Processing, Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Training Non-Parametric Features for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milind", |
|
"middle": [], |
|
"last": "Mahajan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Nguyen, Milind Mahajan, Xiaodong He, and Mi- crosoft Way. 2007. Training Non-Parametric Features for Statistical Machine Translation. In Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Discriminative training and maximum entropy models for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Association for Computational Linguistics, number July", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimi- native training and maximum entropy models for sta- tistical machine translation. In Proceedings of the Association for Computational Linguistics, number July, page 295, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Minimum Error Rate Training in Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Association for Computational Linguistics, number July", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz J Och. 2003. Minimum Error Rate Training in Sta- tistical Machine Translation. In Association for Com- putational Linguistics, number July, pages 160-167.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "BLEU : a Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weijing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computational Linguistics, number July", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- jing Zhu. 2002. BLEU : a Method for Automatic Evaluation of Machine Translation. In Computational Linguistics, number July, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Order Restricted Statistical Inference", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Robertson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Wright", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Dykstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim Robertson, F. T. Wright, and R. L. Dykstra. 1988. Order Restricted Statistical Inference. Wiley.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Regularized Learning with Feature Networks", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Sandler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S Ted Sandler. 2010. Regularized Learning with Feature Networks. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Continuous Space Translation Models for Phrase-Based Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1071--1080", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Holger Schwenk. 2012. Continuous Space Translation Models for Phrase-Based Statistical Machine Transla- tion. In International Conference on Computational Linguistics (COLING), number December 2012, pages 1071-1080, Mumbai, India.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Constrained Statistical Inference: Order, Inequality, and Shape Constraints", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mervyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranab", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Silvapulle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mervyn J. Silvapulle and Pranab K. Sen. 2005. Con- strained Statistical Inference: Order, Inequality, and Shape Constraints. Wiley.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Non-linear N-best List Reranking with Few Features", |
|
"authors": [ |
|
{ |
|
"first": "Artem", |
|
"middle": [], |
|
"last": "Sokolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Association for Machine Translation in the Americas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Artem Sokolov, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2012. Non-linear N-best List Reranking with Few Features. In Association for Machine Translation in the Americas.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Supervised Model Learning with Feature Grouping based on a Discrete Constraint", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Suzuki and Masaaki Nagata. 2013. Supervised Model Learning with Feature Grouping based on a Discrete Constraint. In Proceedings of the Association for Computational Linguistics, pages 18-23.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Max-Margin Markov Networks", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-Margin Markov Networks. In Neural Informa- tion Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Nearly-Isotonic Regression", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ryan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Hoefling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "53", |
|
"issue": "", |
|
"pages": "54--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan J Tibshirani, Holger Hoefling, and Robert Tibshi- rani. 2011. Nearly-Isotonic Regression. Technomet- rics, 53(1):54-61.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Learning Non-linear Features for Machine Translation Using Gradient Boosting Machines", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byung-Gyu", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova and Byung-Gyu Ahn. 2013. Learn- ing Non-linear Features for Machine Translation Us- ing Gradient Boosting Machines. In Proceedings of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Support Vector Machine Learning for Interdependent and Structured Output Spaces", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Tsochantaridis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasemin", |
|
"middle": [], |
|
"last": "Altun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support Vector Machine Learning for Interdependent and Structured Output Spaces. In International Conference on Ma- chine Learning (ICML).", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Kernel Regression Based Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhuoran", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Shawe-Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandor", |
|
"middle": [], |
|
"last": "Szedmak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "North American Association for Computational Linguistics, number April", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "185--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhuoran Wang, John Shawe-Taylor, and Sandor Szed- mak. 2007. Kernel Regression Based Machine Trans- lation. In North American Association for Compu- tational Linguistics, number April, pages 185-188, Rochester, N.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "A Kernel PCA Method for Superior Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weifeng", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekai Wu, Weifeng Su, and Marine Carpuat. 2004. A Kernel PCA Method for Superior Word Sense Disam- biguation. In Association for Computational Linguis- tics, Barcelona.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Random Forests in Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Xu and Frederick Jelinek. 2004. Random Forests in Language Modeling. In Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Learning structural SVMs with latent variables", |
|
"authors": [ |
|
{ |
|
"first": "Chun-Nam John", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning -ICML '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In Proceedings of the 26th Annual International Confer- ence on Machine Learning -ICML '09, pages 1-8, New York, New York, USA. ACM Press.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "For a complete derivation D, the global features H(D) are an efficient summation over local features h(d):", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "We perform discretization locally on each grammar rule or phrase pair, operating on the local feature vectors h. In this example, the original real-valued features are crossed out with a solid gray line and their discretized indicator features are written above. When forming a complete hypothesis from partial hypotheses, we sum the counts of these indicator features to obtain the complete feature vector H. In this example, H = {H TM 0.1 : 2, H TM 0.2 : 1, H Count 2 : 1}", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Corpus statistics: number of parallel sentences." |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Condition Zh\u2192En</td><td>Ar\u2192En</td><td>Cz\u2192En</td></tr><tr><td>P</td><td>20.8 (-2.7)</td><td>44.3 (-3.6)</td><td>36.5 (-1.1)</td></tr><tr><td>log P</td><td>23.5 \u2020</td><td>47.9 \u2020</td><td>37.6 \u2020</td></tr><tr><td>Disc P Over. P</td><td colspan=\"3\">23.4 \u2020 (-0.1) 47.2 \u2020 (-0.7) 36.8 (-0.8) 20.7 (-2.8) 44.6 (-3.3) 36.6 (-1.0)</td></tr><tr><td>LNR P MNR P</td><td colspan=\"3\">23.1 \u2020 (-0.4) 48.0 \u2020 (+0.1) 37.3 (-0.3) 23.8 \u2020 (+0.3) 48.7 \u2020 (+0.8) 37.6 \u2020 (\u00b1)</td></tr><tr><td>MNR C</td><td>23.6 \u2020 (\u00b1)</td><td colspan=\"2\">48.7 \u2020 (+0.8) 37.4 \u2020 (-0.2)</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Translation quality for Cz\u2192En system with varying bits for discretization. For all other experiments, we tune the number of bits on held-out data." |
|
} |
|
} |
|
} |
|
} |