{
"paper_id": "E14-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:10.677862Z"
},
"title": "Dynamic Topic Adaptation for Phrase-based MT",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Hasler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Translating text from diverse sources poses a challenge to current machine translation systems which are rarely adapted to structure beyond corpus level. We explore topic adaptation on a diverse data set and present a new bilingual variant of Latent Dirichlet Allocation to compute topic-adapted, probabilistic phrase translation features. We dynamically infer document-specific translation probabilities for test sets of unknown origin, thereby capturing the effects of document context on phrase translations. We show gains of up to 1.26 BLEU over the baseline and 1.04 over a domain adaptation benchmark. We further provide an analysis of the domain-specific data and show additive gains of our model in combination with other types of topic-adapted features.",
"pdf_parse": {
"paper_id": "E14-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Translating text from diverse sources poses a challenge to current machine translation systems which are rarely adapted to structure beyond corpus level. We explore topic adaptation on a diverse data set and present a new bilingual variant of Latent Dirichlet Allocation to compute topic-adapted, probabilistic phrase translation features. We dynamically infer document-specific translation probabilities for test sets of unknown origin, thereby capturing the effects of document context on phrase translations. We show gains of up to 1.26 BLEU over the baseline and 1.04 over a domain adaptation benchmark. We further provide an analysis of the domain-specific data and show additive gains of our model in combination with other types of topic-adapted features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical machine translation (SMT), there has been a lot of interest in trying to incorporate information about the provenance of training examples in order to improve translations for specific target domains. A popular approach are mixture models (Foster and Kuhn, 2007) where each component contains data from a specific genre or domain. Mixture models can be trained for crossdomain adaption when the target domain is known or for dynamic adaptation when the target domain is inferred from the source text under translation. More recent domain adaptation methods employ corpus or instance weights to promote relevant training examples (Matsoukas et al., 2009; Foster et al., 2010) or do more radical data selection based on language model perplexity (Axelrod et al., 2011) . In this work, we are interested in the dynamic adaptation case, which is challenging because we cannot tune our model towards any specific domain.",
"cite_spans": [
{
"start": 254,
"end": 277,
"text": "(Foster and Kuhn, 2007)",
"ref_id": "BIBREF12"
},
{
"start": 644,
"end": 668,
"text": "(Matsoukas et al., 2009;",
"ref_id": "BIBREF20"
},
{
"start": 669,
"end": 689,
"text": "Foster et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 759,
"end": 781,
"text": "(Axelrod et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In previous literature, domains have often been loosely defined in terms of corpora, for example, news texts would be defined as belonging to the news domain, ignoring the specific content of news documents. It is often assumed that the data within a domain is homogeneous in terms of style and vocabulary, though that is not always true in practice. The term topic on the other hand can describe the thematic content of a document (e.g. politics, economy, medicine) or a latent cluster in a topic model. Topic modelling for machine translation aims to find a match between thematic context and topic clusters. We view topic adaptation as fine-grained domain adaptation with the implicit assumption that there can be multiple distributions over translations within the same data set. If these distributions overlap, then we expect topic adaptation to help separate them and yield better translations than an unadapted system. Topics can be of varying granularity and are therefore a flexible means to structure data that is not uniform enough to be modelled in its entirety. In recent years there have been several attempts to integrating topical information into SMT either by learning better word alignments (Zhao and Xing, 2006) , by adapting translation features cross-domain (Su et al., 2012) , or by dynamically adapting lexical weights (Eidelman et al., 2012) or adding sparse topic features (Hasler et al., 2012) .",
"cite_spans": [
{
"start": 1210,
"end": 1231,
"text": "(Zhao and Xing, 2006)",
"ref_id": "BIBREF30"
},
{
"start": 1280,
"end": 1297,
"text": "(Su et al., 2012)",
"ref_id": "BIBREF26"
},
{
"start": 1343,
"end": 1366,
"text": "(Eidelman et al., 2012)",
"ref_id": "BIBREF11"
},
{
"start": 1399,
"end": 1420,
"text": "(Hasler et al., 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take a new approach to topic adaptation by estimating probabilistic phrase translation features in a completely Bayesian fashion. The motivation is that automatically identifying topics in the training data can help to select the appropriate translation of a source phrase in the context of a document. By adapting a system to automatically induced topics we do not have to trust data from a given domain to be uniform. We also overcome the problem of defining the level of granularity for domain adaptation. With more and more training data automatically extracted from the web and little knowledge about its content, we believe this is an important area to focus on. Translation of web sites is already a popular application for MT systems and could be helped by dynamic model adaptation. We present results on a mixed data set of the TED corpus, parts of the Commoncrawl corpus which contains crawled web data and parts of the News Commentary corpus which contains documents about politics and economics. We believe that the broad range of this data set makes it a suitable testbed for topic adaptation. We focus on translation model adaptation to learn how words and phrases translate in a given document-context without knowing the origin of the document. By learning translations over latent topics and combining several topic-adapted features we achieve improvements of more than 1 BLEU point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model is based on LDA and infers topics as distributions over phrase pairs instead of over words. It is specific to machine translation in that the conditional dependencies between source and target phrases are modelled explicitly, and therefore we refer to it as phrasal LDA. Topic distributions learned on a training corpus are carried over to tuning and test sets by running a modified inference algorithm on the source side text of those sets. Translation probabilities are adapted separately to each source text under translation which makes this a dynamic topic adaptation approach. In the following we explain our approach to topic modelling with the objective of estimating better phrase translation probabilities for data sets that exhibit a heterogeneous structure in terms of vocabulary and style. The advantage from a modelling point of view is that unlike with mixture models, we avoid sparsity problems that would arise if we treated documents or sets of documents as domains and learned separate models for them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual topic models over phrase pairs",
"sec_num": "2"
},
{
"text": "LDA is a generative model that learns latent topics in a document collection. In the original formulation, topics are multinomial distributions over words of the vocabulary and each document is assigned a multinomial distribution over topics (Blei et al., 2003) . Our goal is to learn topic-dependent phrase translation probabilities and hence we modify this formulation by replacing words with phrase pairs. This is straightforward when both source and target phrases are observed but requires a modified inference approach when only source phrases are observed in an unknown test set. Different from standard LDA and previous uses of LDA for MT, we define a bilingual topic model that learns topic distributions over phrase pairs. This allows us to model the units of interest in a more principled way, without the need to map per-word or per-sentence topics to phrase pairs. Figure 1 shows a graphical representation of the following generative process. For each of N documents in the collection",
"cite_spans": [
{
"start": 242,
"end": 261,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 878,
"end": 886,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "2.1"
},
{
"text": "1. Choose topic distribution \u03b8 d \u223c Dirichlet(\u03b1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "2.1"
},
{
"text": "2. Choose the number of phrases pairs P d in the document, P d \u223c Poisson(\u03b6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "2.1"
},
{
"text": "3. For every position d_i in the document corresponding to a phrase pair p_{d,i} of source and target phrases s_i and t_i: (a) Choose a topic z_{d,i} \u223c Multinomial(\u03b8_d). (b) Conditioned on topic z_{d,i}, choose a source phrase s_{d,i} \u223c Multinomial(\u03c8_{z_{d,i}}). (c) Conditioned on z_{d,i} and s_{d,i}, choose target phrase t_{d,i} \u223c Multinomial(\u03c6_{s_{d,i},z_{d,i}}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "2.1"
},
{
"text": "\u03b1, \u03b2 and \u03b3 are parameters of the Dirichlet distributions, which are asymmetric for k = 0. Our inference algorithm is an implementation of collapsed variational Bayes (CVB), with a first-order Gaussian approximation (Teh et al., 2006) . It has been shown to be more accurate than standard VB and to converge faster than collapsed Gibbs sampling (Teh et al., 2006; Wang and Blunsom, 2013) , with little loss in accuracy. Because we have to do inference over a large number of phrase pairs, CVB is more practical than Gibbs sampling.",
"cite_spans": [
{
"start": 215,
"end": 233,
"text": "(Teh et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 344,
"end": 362,
"text": "(Teh et al., 2006;",
"ref_id": "BIBREF27"
},
{
"start": 363,
"end": 386,
"text": "Wang and Blunsom, 2013)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Dirichlet Allocation (LDA)",
"sec_num": "2.1"
},
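The generative story above can be sketched as a short simulation. This is a toy illustration under assumed parameters; the phrase inventory, `psi`, `phi` and all variable names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3                                  # number of topics
alpha = np.array([5.0, 1.0, 1.0])      # asymmetric Dirichlet prior (topic 0 favoured)
source_phrases = ["la maison", "le chat"]
targets = {"la maison": ["the house", "the home"], "le chat": ["the cat"]}

# toy per-topic distributions: psi over source phrases,
# phi over target phrases conditioned on (source phrase, topic)
psi = rng.dirichlet(np.ones(len(source_phrases)), size=K)
phi = {(s, k): rng.dirichlet(np.ones(len(targets[s])))
       for s in source_phrases for k in range(K)}

def generate_document(rng):
    theta = rng.dirichlet(alpha)       # 1. per-document topic distribution
    P_d = rng.poisson(5) + 1           # 2. number of phrase pairs
    doc = []
    for _ in range(P_d):               # 3. for every phrase-pair position
        z = rng.choice(K, p=theta)                 # (a) topic
        s = rng.choice(source_phrases, p=psi[z])   # (b) source phrase
        t = rng.choice(targets[s], p=phi[(s, z)])  # (c) target phrase
        doc.append((z, s, t))
    return doc

doc = generate_document(rng)
```

Each sampled triple (z, s, t) corresponds to one phrase-pair position p_{d,i} of a document in the collection.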
{
"text": "Ultimately, we want to learn translation probabilities for all possible phrase pairs that apply to a given test document during decoding. Therefore, topic modelling operates on phrase pairs as they will be seen during decoding. Given word-aligned parallel corpora from several domains, we extract lists of per-document phrase pairs produced by the extraction algorithm in the Moses toolkit (Koehn et al., 2007) which contain all phrase pairs consistent with the word alignment. We run CVB on the set of all training documents to learn latent topics without providing information about the domains. Using the trained model, CVB with modified inference is run on all test documents with the set of possible phrase translations that a decoder would load from a phrase table before decoding. When test inference has finished, we compute adapted translation probabilities at the document-level by marginalising over topics for each phrase pair.",
"cite_spans": [
{
"start": 390,
"end": 410,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of training strategy",
"sec_num": "2.2"
},
{
"text": "3 Bilingual topic inference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of training strategy",
"sec_num": "2.2"
},
{
"text": "The aim of inference on the training data is to find latent topics in the distributions over phrase pairs in each document.This is done by repeatedly visiting all phrase pair positions in all documents, computing conditional topic probabilities and updating counts. To bias the model to cluster stop word phrases in one topic, we place an asymmetric prior over the hyperparameters 2 as described in (Wallach et al., 2009) to make one of the topics a priori more probable in every document. We use a fixed-point update (Minka, 2012) to update the hyperparameters after every iteration. For CVB the conditional probability of topic z d,i given the current state of all variables except z d,i is",
"cite_spans": [
{
"start": 399,
"end": 421,
"text": "(Wallach et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 518,
"end": 531,
"text": "(Minka, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference on training documents",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(z d,i = k|z \u2212(d,i) , s, t, d, \u03b1, \u03b2, \u03b3) \u221d (Eq[n \u2212(d,i) .,k,s,t ] + \u03b2) (Eq[n \u2212(d,i) .,k,s,. ] + T s \u2022 \u03b2) (Eq[n \u2212(d,i) .,k,s,. ] + \u03b3) (Eq[n \u2212(d,i) .,k,. ] + S \u2022 \u03b3) \u2022(Eq[n \u2212(d,i) d,k,. ] + \u03b1)",
"eq_num": "(1)"
}
],
"section": "Inference on training documents",
"sec_num": "3.1"
},
{
"text": "where s and t are all source and target phrases in the collection. n^{\u2212(d,i)}_{\u00b7,k,s,t}, n^{\u2212(d,i)}_{\u00b7,k,s,\u00b7} and n^{\u2212(d,i)}_{d,k,\u00b7} are cooccurrence counts of topics with phrase pairs, source phrases and documents, respectively. E_q is the expectation under the variational posterior: in comparison to Gibbs sampling, where the posterior would otherwise look very similar, counts are replaced by their means. n^{\u2212(d,i)}_{\u00b7,k,\u00b7} is a topic occurrence count, T_s is the number of possible target phrases for a given source phrase and S is the total number of source phrases. By modelling phrase translation probabilities separately as P(t_i|s_i, z_i = k, ..) and P(s_i|z_i = k, ..), we can put different priors on these distributions. For example, we want a sparse distribution over target phrases for a given source phrase and topic to express our translation preference under each topic. The algorithm stops when the variational posterior has converged for all documents or after a maximum of 100 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference on training documents",
"sec_num": "3.1"
},
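The update in Equation 1 can be sketched with plain dictionaries of expected counts. This is a minimal sketch under assumed toy hyperparameters; for brevity it omits subtracting the current position's own soft counts (the \u2212(d,i) superscript) before computing the posterior:

```python
from collections import defaultdict

K = 2                              # topics
alpha, beta, gamma = 0.5, 0.1, 0.1
S = 3                              # total number of source phrases
T = {"s1": 2, "s2": 1, "s3": 1}    # T_s: target candidates per source phrase

# expected co-occurrence counts under the variational posterior
n_kst = defaultdict(float)   # (topic, source, target)
n_ks = defaultdict(float)    # (topic, source)
n_k = defaultdict(float)     # topic
n_dk = defaultdict(float)    # (document, topic)

def cvb_posterior(d, s, t):
    """Eq. 1: unnormalised P(z=k | ...) for each topic k, then normalise."""
    probs = []
    for k in range(K):
        p = ((n_kst[(k, s, t)] + beta) / (n_ks[(k, s)] + T[s] * beta)
             * (n_ks[(k, s)] + gamma) / (n_k[k] + S * gamma)
             * (n_dk[(d, k)] + alpha))
        probs.append(p)
    z = sum(probs)
    return [p / z for p in probs]

def update(d, s, t, q):
    """Add the soft (fractional) counts q[k] for one phrase-pair position."""
    for k in range(K):
        n_kst[(k, s, t)] += q[k]
        n_ks[(k, s)] += q[k]
        n_k[k] += q[k]
        n_dk[(d, k)] += q[k]

q = cvb_posterior("doc0", "s1", "t1")
update("doc0", "s1", "t1", q)
```

With all counts at zero and symmetric hyperparameters, the posterior is uniform over topics; it sharpens as soft counts accumulate over iterations.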
{
"text": "To compute translation probabilities for tuning and test documents where target phrases are not 2 Omitted from the following equations for simplicity. observed, the variational posterior is adapted as shown in Equation 2",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference on tuning and test documents",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(z d,i = k,t i, j |z \u2212(d,i) , s, t \u2212(d,i) , d, \u03b1, \u03b2, \u03b3) \u221d (Eq[n \u2212(d,i) .,k,s,t j ] + \u03b2) (Eq[n \u2212(d,i) .,k,s,. ] + T s \u2022 \u03b2) (Eq[n \u2212(d,i) .,k,s,. ] + \u03b3) (Eq[n \u2212(d,i) .,k,. ] + S \u2022 \u03b3) \u2022(Eq[n \u2212(d,i) d,k,. ] + \u03b1)",
"eq_num": "(2)"
}
],
"section": "Inference on tuning and test documents",
"sec_num": "3.2"
},
{
"text": "which now computes the joint conditional probability of a topic k and a target phrase t i, j , given the source phrase s i and the test document d. Therefore, the size of the support changes from K to K \u2022T s . While during training inference we compute a distribution over topics for each source-target pair, in test inference we can use the posterior to marginalise out the topics and get a distribution over target phrases for each source phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference on tuning and test documents",
"sec_num": "3.2"
},
{
"text": "We use the Moses decoder to produce lists of translation options for each document in the tuning and test sets. These lists comprise all phrase pairs that will enter the search space at decoding time. By default, only 20 target phrases per source phrase are loaded from the phrase table, so in order to allow for new phrase pairs to enter the search space and for translation probabilities to be computed more accurately, we allow for up to 200 target phrases per source. For each source sentence, we consider all possible phrase segmentations and applicable target phrases. Unlike in training, we do not iterate over all phrase pairs in the list but over blocks of up to 200 target phrases for a given source phrase. The algorithm stops when all marginal translation probabilities have converged though in practice we stopped earlier to avoid overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference on tuning and test documents",
"sec_num": "3.2"
},
{
"text": "After topic inference on the tuning and test data, the forward translation probabilities P(t|s, d) are computed. This is done separately for every document d because we are interested in the translation probabilities that depend on the inferred topic proportions for a given document. For every document, we iterate over source positions p d,i and use the current variational posterior to compute P(t i, j |s i , d) for all possible target phrases by marginalizing over topics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase translation probabilities",
"sec_num": "3.3"
},
{
"text": "P(t i, j |s i , d) = \u2211 k P(z i = k,t i, j |z \u2212(d,i) , s, t \u2212(d,i) , d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase translation probabilities",
"sec_num": "3.3"
},
{
"text": "This is straightforward because during test inference the variational posterior is normalised to a distribution over topics and target phrases for a given source phrase. If a source phrase occurs multiple times in the same document, the probabilities are averaged over all occurrences. The inverse translation probabilities can be computed analogously except that in cases where we do not have variational posteriors for a given pair of source and target phrases, an approximation is needed. We omit the results here since our experiments so far did not indicate improvements with the inverse features included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase translation probabilities",
"sec_num": "3.3"
},
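The marginalisation described above can be sketched as follows, with invented toy posteriors for two occurrences of the same source phrase in a document (each normalised over topics and target phrases, as in test inference):

```python
from collections import defaultdict

# posteriors[position] = {(topic k, target t): probability}
posteriors = {
    0: {(0, "bank"): 0.5, (1, "bank"): 0.1, (0, "shore"): 0.1, (1, "shore"): 0.3},
    1: {(0, "bank"): 0.4, (1, "bank"): 0.3, (0, "shore"): 0.1, (1, "shore"): 0.2},
}

def adapted_p_t_given_s(posteriors):
    """P(t|s,d): marginalise over topics, then average over all
    occurrences of the source phrase in the document."""
    totals, count = defaultdict(float), 0
    for q in posteriors.values():
        count += 1
        for (k, t), p in q.items():
            totals[t] += p                     # sum out the topic k
    return {t: p / count for t, p in totals.items()}

p = adapted_p_t_given_s(posteriors)
```

Here the two occurrences give per-position probabilities of 0.6 and 0.7 for "bank", so the document-level feature is their average, 0.65.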
{
"text": "Inspired by previous work on topic adaptation for SMT, we add three additional topic-adapted features to our model. All of these features make use of the topic mixtures learned by our bilingual topic model. The first feature is an adapted lexical weight, similar to the features in the work of Eidelman et al. (2012) . Our feature is different in that we marginalise over topics to produce a single adapted feature where v[k] is the k th element of a document topic vector for document d and w(t|s,k) is a topic-dependent word translation probability:",
"cite_spans": [
{
"start": 294,
"end": 316,
"text": "Eidelman et al. (2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
{
"text": "lex(t|s, d) = |t| \u220f i 1 { j|(i, j) \u2208 a} \u2211 \u2200(i, j)\u2208a \u2211 k w(t|s, k) \u2022 v[k] w(t|s) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
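Our reading of the adapted lexical weight as code: mix the topic-dependent word translation probabilities with the document topic vector, then average over the source words each target word is aligned to. All numbers and names here are invented for illustration:

```python
# topic-dependent word translation probabilities w(t|s,k), toy values
K = 2
w = {("house", "maison", 0): 0.9, ("house", "maison", 1): 0.4,
     ("home", "maison", 0): 0.1, ("home", "maison", 1): 0.6}
v = [0.7, 0.3]   # document topic vector v[k]

def w_adapted(t, s):
    """sum_k w(t|s,k) * v[k] -- the topic-marginalised word probability."""
    return sum(w[(t, s, k)] * v[k] for k in range(K))

def lex(target, source, alignment):
    """Adapted lexical weight: product over target words, averaging over
    the source words each target word is aligned to."""
    score = 1.0
    for i, t_word in enumerate(target):
        links = [j for (ti, j) in alignment if ti == i]
        score *= sum(w_adapted(t_word, source[j]) for j in links) / len(links)
    return score

score = lex(["house"], ["maison"], [(0, 0)])
```

For this toy document, "house" gets weight 0.9*0.7 + 0.4*0.3 = 0.75; a document with more mass on topic 1 would instead favour "home".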
{
"text": "The second feature is a target unigram feature similar to the lazy MDI adaptation of Ruiz and Federico (2012) . It includes an additional term that measures the relevance of a target word w i by comparing its document-specific probability P doc to its probability under the asymmetric topic 0:",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "Ruiz and Federico (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
{
"text": "trgUnigrams t = |t| \u220f i=1 f ( P doc (w i ) P baseline (w i ) ) lazy MDI \u2022 f ( P doc (w i ) P topic0 (w i ) ) relevance (4) f (x) = 2 1 + 1 x , x > 0 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
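Equations 4 and 5 translate directly into code: f squashes a probability ratio into (0, 2), so ratios above 1 (words the document favours relative to the baseline or the stop-word topic) increase the feature value. The probability tables below are invented toy values:

```python
def f(x):
    """Eq. 5: f(x) = 2 / (1 + 1/x), defined for x > 0."""
    assert x > 0
    return 2.0 / (1.0 + 1.0 / x)

def trg_unigrams(words, p_doc, p_baseline, p_topic0):
    """Eq. 4: product over target words of the lazy-MDI term and the
    relevance term (document probability vs. the asymmetric topic 0)."""
    score = 1.0
    for w in words:
        score *= f(p_doc[w] / p_baseline[w])   # lazy MDI
        score *= f(p_doc[w] / p_topic0[w])     # relevance
    return score

p_doc = {"economy": 0.02}
p_baseline = {"economy": 0.01}
p_topic0 = {"economy": 0.005}
s = trg_unigrams(["economy"], p_doc, p_baseline, p_topic0)
```

Since f(1) = 1, words whose document-specific probability matches the baseline leave the score unchanged.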
{
"text": "The third feature is a document similarity feature, similar to the semantic feature described by Banchs and Costa-juss\u00e0 (2011):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
{
"text": "docSim t = max i (1 \u2212 JSD(v train doc i , v test doc )) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
{
"text": "where v train_doc i and v test_doc are document topic vector of training and test documents. Because topic 0 captures phrase pairs that are common to many documents, we exclude it from the topic vectors before computing similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More topic-adapted features",
"sec_num": "4"
},
{
"text": "We tried integrating the four topic-adapted features separately and in all possible combinations. As we will see in the results section, while all features improve over the baseline in isolation, the adapted translation feature P(t|s,d) is the strongest feature. For the features that have a counterpart in the baseline model (p(t|s,d) and lex(t|s,d)), we experimented with either adding or replacing them in the log-linear model. We found that while adding the features worked well and yielded close to zero weights for their baseline counterparts after tuning, replacing them yielded better results in combination with the other adapted features. We believe the reason could be that fewer phrase table features in total are easier to optimise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature combination",
"sec_num": "4.1"
},
{
"text": "5 Experimental setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature combination",
"sec_num": "4.1"
},
{
"text": "Our experiments were carried out on a mixed data set, containing the TED corpus (Cettolo et al., 2012) , parts of the News Commentary corpus (NC) and parts of the Commoncrawl corpus (CC) from the WMT13 shared task (Bojar et al., 2013) as described in Table 1 . We were guided by two constraints in chosing our data set. 1) the data has document boundaries and the content of each document is assumed to be topically related, 2) there is some degree of topical variation within each data set. In order to compare to domain adaptation approaches, we chose a setup with data from different corpora. We want to abstract away from adaptation effects that concern tuning of length penalties and language models, so we use a mixed tuning set containing data from all three domains and train one language model on the concatenation of (equally sized) target sides of the training data. Word alignments are trained on the concatenation of all training data and fixed for all models. Our baseline (ALL) is a phrase-based French-English system trained on the concatenation of all parallel data. It was built with the Moses toolkit (Koehn et al., 2007) using the 14 standard core features including a 5gram language model. Translation quality is evaluated on a large test set, using the average feature weights of three optimisation runs with PRO (Hopkins and May, 2011) . We use the mteval-v13a.pl script to compute caseinsensitive BLEU. As domain-aware benchmark systems, we use the phrase table fill-up method (FILLUP) of Bisazza et al. (2011) Table 2 shows BLEU scores of the baseline system as well as the performance of three in-domain models (IN) tuned under the same conditions. For the IN models, every portion of the test set is decoded with a domain-specific model. Results on the test set are broken down by domain but also reported for the entire test set (mixed). 
For Ted and NC, the in-domain models perform better than ALL, while for CC the all-domain model improves quite significantly over IN.",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 214,
"end": 234,
"text": "(Bojar et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 1120,
"end": 1140,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 1335,
"end": 1358,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF17"
},
{
"start": 1513,
"end": 1534,
"text": "Bisazza et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1535,
"end": 1542,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data and baselines",
"sec_num": "5.1"
},
{
"text": "In this section we analyse some internal properties of our three data sets that are relevant for adaptation. All of the scores were computed on the sets of source side tokens of the test set which were limited to contain content words (nouns, verbs, adjectives and adverbs). The test set was tagged with the French TreeTagger (Schmid, 1994) . The top of Table 3 shows the average Jensen-Shannon divergence (using log 2 , JSD \u2208 [0, 1]) of each in-domain model in comparison to the all-domain model, which is an indicator of how much the distributions in the IN model change when adding out-ofdomain data. Likewise, Rank1-diff gives the percentage of word tokens in the test set where the preferred translation according to p(e| f ) changes between IN and ALL. These are the words that are most affected by adding data to the IN model. Both numbers show that for Commoncrawl the IN and ALL models differ more than in the other two data sets. According to the JS divergence between NC-IN and ALL, translation distibutions in the NC phrase table are most similar to the ALL phrase table. Table 4 shows the average JSD for each IN model compared to a model trained on half of its in-domain data. This score gives an idea of how diverse a data set is, measured by comparing distributions over translations for source words in the test set. According to this score, Commoncrawl is the most diverse data set and Ted the most uni-",
"cite_spans": [
{
"start": 326,
"end": 340,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1084,
"end": 1091,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "General properties of the data sets",
"sec_num": "5.2"
},
{
"text": "Avg JSD Ted-half vs Ted-full 0.07 CC-half vs CC-full 0.17 NC-half vs NC-full 0.09 Table 4 : Average JSD of in-domain models trained on half vs. all of the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "form. Note however, that these divergence scores do not provide information about the relative quality of the systems under comparison. For CC, the ALL model yields a much higher BLEU score than the IN model and it is likely that this is due to noisy data in the CC corpus. In this case, the high divergence is likely to mean that distributions are corrected by out-of-domain data rather than being shifted away from in-domain distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The phrase translation probabilities and additional features described in the last two sections are used as features in the log-linear translation model in addition to the baseline translation features. When combining all four adapted features, we replace P(t|s) and lex(t|s) by their adapted counterparts. We construct separate phrase tables for each document in the development and test sets and use a wrapper around the decoder to ensure that each input document is paired with a configuration file pointing to its document-specific translation table. Documents are decoded in sequence so that only one phrase table needs to be loaded at a time. Using the wrapped decoder we can run parameter optimisation (PRO) in the usual way to get one set of tuned weights for all test documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-dependent decoding",
"sec_num": "5.3"
},
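A hypothetical sketch of the decoding wrapper described above; the `moses` binary name, flags and file layout are placeholders for illustration, not the authors' actual scripts:

```python
from pathlib import Path

def build_decoder_commands(doc_ids, config_dir, decoder="moses"):
    """One decoder invocation per document, each pointing at that document's
    phrase table via a document-specific config file; documents are decoded
    in sequence so only one phrase table is loaded at a time."""
    commands = []
    for doc_id in doc_ids:
        ini = Path(config_dir) / f"{doc_id}.ini"   # document-specific config
        src = Path(config_dir) / f"{doc_id}.src"   # that document's source text
        commands.append([decoder, "-f", str(ini), "-input-file", str(src)])
    return commands

cmds = build_decoder_commands(["doc0", "doc1"], "tuning")
```

Each command would be run with something like subprocess.run(cmd, check=True); because the wrapper looks like a single decoder to the tuner, PRO can optimise one weight set over all test documents as usual.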
{
"text": "In this section we present experimental results with phrasal LDA. We show BLEU scores in comparison to a baseline system and two domainaware benchmark systems. We also evaluate the adapted translation distributions by looking at translation probabilities under specific topics and inspect translations of ambiguous source words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We experimented with different numbers of topics for phrasal LDA. The diagrams in Figure 2 shows blocks of training and test documents in each of the three domains for a model with 20 topics. Darker shading means that documents have a higher proportion of a particular topic in their document-topic distribution. The first topic is the one that was affected by the asymmetric prior and inspecting its most probable phrase pairs showed that it had 'collected' a large number of stop word phrases. This explains why it is the topic that is most shared across documents and domains. There is quite a clear horizontal separation between documents of different domains, for example, topics 6, 8, 19 occur mostly in Ted, NC and CC documents respectively. The overall structure is very similar between training (top) and test (bottom) documents, which shows that test inference was successful in carrying over the information learned on training documents. There is also some degree of topic sharing across domains, for example topics 4 and 15 occur in documents of all three domains. Figure 3 shows examples of latent topics found during inference on the training data. Topic 8 and 11 seem to be about politics and economy and occur frequently in documents from the NC corpus. Topic 14 contains phrases related to hotels and topic 19 is about web and software, both frequent themes in the CC corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1078,
"end": 1086,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analyis of bilingual topic models",
"sec_num": "6.1"
},
{
"text": "In Table 5 we compare our topic-adapted features when added separately to the baseline phrase table. The inclusion of each feature improves over the concatenation baseline but the combination of all four features gives the best overall results. Though the relative performance differs slightly for each domain portion in the test set, overall the adapted lexical weight is the weakest feature and the adapted translation probability is the strongest feature. We also performed feature ablation tests and found that no combination of features was superior to combining all four features. This confirms that the gains of each feature lead to additive improvements in the combined model. In Table 6 we compare topic-adapted models Table 6 : BLEU scores of baseline and topicadapted systems (pLDA) with all 4 features and largest improvements over baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 688,
"end": 695,
"text": "Table 6",
"ref_id": null
},
{
"start": 728,
"end": 735,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison according to BLEU",
"sec_num": "6.2"
},
{
"text": "with varying numbers of topics to the concatenation baseline. We see a consistent gain on all domains when increasing the number of topics from three to five and ten topics. This is evidence that the number of domain labels is in fact smaller than the number of underlying topics. The optimal number of latent topics varies for each domain and reflects our insights from section 5.2. The CC domain was shown to be the most diverse and the best performance on the CC portion of the test set is achieved with 100 topics. Likewise, the TED domain was shown to be least diverse and here the best performance is achieved with only 10 topics. The best performance on the entire test set is achieved with 50 topics, which is also the optimal number of topics for the NC domain. The botton row of the table indicates the relative improvement of the best topic-adapted model per domain over the ALL model. Using all four topic-adapted features yields an improvement of 0.81 BLEU on the mixed test set. The highest improvement on a given domain is achieved for TED with an increase of 1.26 BLEU. The smallest improvement is measured on the NC domain. This is in line with the observation that distributions in the NC in-domain table are most similar to the ALL table, therefore we would expect the smallest improvement for domain or topic adaptation. We used bootstrap resampling (Koehn, 2004) to measure significance on the mixed test set and marked all statistically significant results compared to the respective baselines with asterisk (*: p \u2264 0.01).",
"cite_spans": [
{
"start": 1370,
"end": 1383,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison according to BLEU",
"sec_num": "6.2"
},
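The significance testing step can be sketched with a paired bootstrap in the style of Koehn (2004). For brevity this sketch resamples sentence-level scores rather than accumulating BLEU n-gram sufficient statistics, so it is only an approximation of the actual procedure; all numbers are made up for illustration.

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, n_samples=1000, seed=0):
    """Fraction of bootstrap resamples in which system A does NOT beat
    system B; small values suggest A's advantage is significant.
    (A real BLEU bootstrap resamples n-gram statistics, not raw scores.)"""
    rng = random.Random(seed)
    n = len(scores_a)
    losses = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        if sum(scores_a[i] for i in idx) <= sum(scores_b[i] for i in idx):
            losses += 1
    return losses / n_samples

# Illustrative per-sentence scores: the adapted system is uniformly better.
baseline = [0.30 + 0.001 * i for i in range(100)]
adapted = [s + 0.008 for s in baseline]
p = paired_bootstrap_pvalue(adapted, baseline)
```

With a consistent per-sentence advantage, every resample favours the adapted system and the estimated p-value approaches zero.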
{
"text": "To demonstrate the benefit of topic adaptation over more standard domain adaptation approaches for a diverse data set, we show the performance Table 8 : Combination of all models with additional LM adaptation (pLDA: 50 topics).",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison according to BLEU",
"sec_num": "6.2"
},
{
"text": "of two state-of-the-art domain-adapted systems in Table 7 . Both FILLUP and LIN-TM improve over the ALL model on the mixed test set, by 0.26 and 0.38 BLEU respectively. The largest improvement is on TED while on the CC domain, FILLUP decreases in performance and LIN-TM yields no improvement either. This shows that relying on indomain distributions for adaptation to a noisy and diverse domain like CC is problematic. The pLDA model yields the largest improvement over the domain-adapted systems on the CC test set, with in increase of 1.04 BLEU over FILLUP and 0.79 over LIN-TM. The improvements on the other two domains are smaller but consistent. We also compare the best model from Table 6 to all other models in combination with linearly interpolated language models (LIN-LM), interpolated separately for each domain. Though the improvements are slightly smaller than without adapted language models, there is still a gain over the concatenation baseline of 0.68 BLEU on the mixed test set and similar improvements to before over the benchmarks (on TED the improvements are actually even larger). Thus, we have shown that topic-adaptation is effective for test sets of diverse documents and that we can achieve substantial improvements even in comparison with domain-adapted translation and language models.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 7",
"ref_id": null
},
{
"start": 687,
"end": 694,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison according to BLEU",
"sec_num": "6.2"
},
{
"text": "topic-specific translations The first column of Table 9 shows the average entropy of phrase table entries in the adapted models according to p(t|s, d) versus the all-domain model, computed over source tokens in the test set that are content words. The entropy decreases in the adapted tables in all cases which is an indicator that the distributions over translations of content demon* = 0.98 devil = 0.01 topic 19 daemon = 0.95 demon = 0.04 Table 10 : The two most probable translations of r\u00e9gime, noyau and d\u00e9mon and probabilities under different latent topics (*: preferred by ALL).",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 9",
"ref_id": null
},
{
"start": 442,
"end": 450,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "words have become more peaked. The second column shows the average perplexity of target tokens in the test set which is a measure of how likely a model is to produce words in the reference translation. We use the alignment information between source and reference and therefore limit our analysis to pairs of aligned words, but nevertheless this shows that the adapted translation distributions model the test set distributions better than the baseline model. Therefore, the adapted distributions are not just more peaked but also more often peaked towards the correct translation. Table 10 shows examples of ambiguous French words that have different preferred translations depending on the latent topic. The word r\u00e9gime can be translated as diet, regime and restrictions and the model has learned that the probability over translations changes when moving from one topic to another (preferred translations under the ALL model are marked with *). For example, the translation to diet is most probable under topic 6 and the translation to regime which would occur in a political context is most probable under topic 8. Topic 6 is most prominent among Ted documents while topic 8 is found most frequently in News Commentary documents which have a high percentage of politically related text. The French word noyau can be translated to nucleus (physics), core (generic) and kernel (IT) among other translations and the topics that exhibit these preferred translations can be attributed to Ted (which contains many talks about physics), NC and CC (with Src: \"il suffit d'\u00e9jecter le noyau et d'en ins\u00e9rer un autre, comme ce qu'on fait pour le cl\u00f4nage.\" BL:",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
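The two diagnostics used in this section, the average entropy of p(t|s, d) over content-word source tokens and the perplexity on aligned reference tokens, can be sketched as below. The example distributions mirror the noyau probabilities quoted in the text but are otherwise illustrative.

```python
import math

def avg_entropy(dists):
    """Mean entropy in bits of a list of translation distributions p(t|s, d);
    lower values mean the distributions are more peaked."""
    ents = [-sum(p * math.log2(p) for p in d.values() if p > 0.0)
            for d in dists]
    return sum(ents) / len(ents)

def reference_perplexity(ref_probs):
    """Perplexity over the probabilities the model assigns to aligned
    reference tokens: 2 ** (-mean log2 p); lower means a better fit."""
    return 2.0 ** (-sum(math.log2(p) for p in ref_probs) / len(ref_probs))

# Flat baseline distribution vs. a peaked adapted one (illustrative values).
flat = {"nucleus": 0.27, "core": 0.27, "kernel": 0.23, "other": 0.23}
peaked = {"kernel": 0.77, "nucleus": 0.13, "core": 0.10}
```

A distribution that is peaked towards the correct translation scores lower on both measures, which is exactly the pattern the adapted tables show.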
{
"text": "\"it is the nucleus eject and insert another, like what we do to the cl\u00f4nage.\" pLDA: \"he just eject the nucleus and insert another, like what we do to the cl\u00f4nage.\" (nucleus = 0.77) Ref:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"you can just pop out the nucleus and pop in another one, and that's what you've all heard about with cloning.\" Src:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"pourtant ceci obligerait les contribuables des pays de ce noyau \u00e0 fournir du capital au sud\" BL:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"but this would force western taxpayers to provide the nucleus of capital in the south\" pLDA: \"but this would force western taxpayers to provide the core of capital in the south\" (core = 0.78) Ref:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"but this would unfairly force taxpayers in the core countries to provide capital to the south\" Src:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"le noyau contient de nombreux pilotes, afin de fonctionner chez la plupart des utilisateurs.\" BL:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"the nucleus contains many drivers, in order to work for most users.\" pLDA: \"the kernel contains many drivers, to work for most users.\" (kernel = 0.53) Ref:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "\"the precompiled kernel includes a lot of drivers, in order to work for most users.\" Figure 4 : pLDA correctly translates noyau in test docs from Ted, NC and CC (adapted probabilities in brackets). The baseline (nucleus = 0.27, core = 0.27, kernel = 0.23) translates all instances to nucleus.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "many IT-related documents). The last example, d\u00e9mon, has three frequent translations in English: devil, demon and daemon. The last translation refers to a computer process and would occur in an IT context. The topic-phrase probabilities reveal that its mostly likely translation as daemon occurs under topic 19 which clusters IT-related phrase pairs and is frequent in the CC corpus. These examples show that our model can disambiguate phrase translations using latent topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
{
"text": "As another motivating example, in Figure 4 we compare the output of our adapted models to the output produced by the all-domain baseline for the word noyau from Table 10. While the ALL baseline translates each instance of noyau to nucleus, the adapted model translates each instance differently depending on the inferred topic mixtures for each document and always matches the reference translation. The probabilities in brackets show that the chosen translations were indeed the most likely under the respective adapted model. While the ALL model has a flat distribution over possible translations, the adapted models are peaked towards the correct translation. This shows that topic-specific translation probabilities are necessary when the translation of a word shifts between topics or domains and that peaked, adapted distributions can lead to more correct translations.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Properties of adapted distributions and",
"sec_num": "6.3"
},
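The behaviour illustrated by the noyau example follows from mixing topic-specific phrase distributions with the inferred document-topic mixture. A minimal sketch of that computation, with made-up topic labels and probabilities rather than the paper's actual estimates, is:

```python
def adapted_prob(topic_phrase_probs, doc_topic_mix, source, target):
    """Document-adapted translation probability:
    p(t|s, d) = sum_k p(t|s, k) * p(k|d)."""
    return sum(weight * topic_phrase_probs[topic].get((source, target), 0.0)
               for topic, weight in doc_topic_mix.items())

# Hypothetical topic-specific distributions for French "noyau".
topic_phrase_probs = {
    "physics": {("noyau", "nucleus"): 0.80, ("noyau", "core"): 0.15},
    "it": {("noyau", "kernel"): 0.70, ("noyau", "core"): 0.20},
}

# Inferred topic mixture for an IT-heavy (CC-like) document.
it_doc = {"it": 0.9, "physics": 0.1}
p_kernel = adapted_prob(topic_phrase_probs, it_doc, "noyau", "kernel")    # 0.63
p_nucleus = adapted_prob(topic_phrase_probs, it_doc, "noyau", "nucleus")  # 0.08
```

Under an IT-dominated mixture the adapted distribution is peaked towards kernel, while a physics-dominated mixture would instead favour nucleus.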
{
"text": "There has been a lot of previous work using topic information for SMT, most of it using monolingual topic models. For example, Gong and Zhou (2011) use the topical relevance of a target phrase, computed using a mapping between source and target side topics, as an additional feature in decoding. Axelrod et al. (2012) build topic-specific translation models from the TED corpus and select topic-relevant data from the UN corpus to improve coverage. Su et al. (2012) perform phrase table adaptation in a setting where only monolingual in-domain data and parallel out-of-domain data are available. Eidelman et al. (2012) use topic-dependent lexical weights as features in the translation model, which is similar to our work in that topic features are tuned towards useful-ness of topic information and not towards a target domain. Hewavitharana et al. (2013) perform dynamic adaptation with monolingual topics, encoding topic similarity between a conversation and training documents in an additional feature. This is similar to the work of Banchs and Costa-juss\u00e0 (2011) , both of which inspired our document similarity feature. Also related is the work of Sennrich (2012a) who explore mixturemodelling on unsupervised clusters for domain adaptation and Chen et al. (2013) who compute phrase pair features from vector space representations that capture domain similarity to a development set. Both are cross-domain adaptation approaches, though. Instances of multilingual topic models outside the field of MT include Boyd-Graber and Blei (2009; Boyd-Graber and Resnik (2010) who learn cross-lingual topic correspondences (but do not learn conditional distributions like our model does). In terms of model structure, our model is similar to BiTAM (Zhao and Xing, 2006) which is an LDA-style model to learn topicbased word alignments. 
The work of Carpuat and Wu (2007) is similar to ours in spirit, but they predict the most probable translation in a context at the token level while our adaptation operates at the type level of a document.",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "Gong and Zhou (2011)",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 317,
"text": "Axelrod et al. (2012)",
"ref_id": "BIBREF1"
},
{
"start": 449,
"end": 465,
"text": "Su et al. (2012)",
"ref_id": "BIBREF26"
},
{
"start": 596,
"end": 618,
"text": "Eidelman et al. (2012)",
"ref_id": "BIBREF11"
},
{
"start": 829,
"end": 856,
"text": "Hewavitharana et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 1038,
"end": 1067,
"text": "Banchs and Costa-juss\u00e0 (2011)",
"ref_id": "BIBREF2"
},
{
"start": 1251,
"end": 1269,
"text": "Chen et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 1514,
"end": 1541,
"text": "Boyd-Graber and Blei (2009;",
"ref_id": "BIBREF6"
},
{
"start": 1542,
"end": 1571,
"text": "Boyd-Graber and Resnik (2010)",
"ref_id": "BIBREF7"
},
{
"start": 1743,
"end": 1764,
"text": "(Zhao and Xing, 2006)",
"ref_id": "BIBREF30"
},
{
"start": 1842,
"end": 1863,
"text": "Carpuat and Wu (2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "We have presented a novel bilingual topic model based on LDA and applied it to the task of translation model adaptation on a diverse French-English data set. Our model infers topic distributions over phrase pairs to compute document-specific translation probabilities and performs dynamic adaptation on test documents of unknown origin. We have shown that our model outperforms a concatenation baseline and two domain-adapted benchmark systems with BLEU gains of up to 1.26 on domain-specific test set portions and 0.81 overall. We have also shown that a combination of topicadapted features performs better than each feature in isolation and that these gains are additive. An analysis of the data revealed that topic adaptation compares most favourably to domain adaptation when the domain in question is rather diverse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Parallel documents are modelled as bags of phrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by funding from the Scottish Informatics and Computer Science Alliance (Eva Hasler) and funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 287658 (EU BRIDGE) and grant agreement 288769 (AC-CEPT). Thanks to Chris Dyer for an initial discussion about the phrasal LDA model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain adaptation via pseudo in-domain data selection",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "New methods and evaluation experiments on translating TED talks in the IWSLT benchmark",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "Mei-Yuh",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Xiaodong He, Li Deng, Alex Acero, and Mei-Yuh Hwang. 2012. New methods and evaluation experiments on translating TED talks in the IWSLT benchmark. In Proceedings of ICASSP. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A semantic feature for statistical machine translation",
"authors": [
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation, SSST-5",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael E. Banchs and Marta R. Costa-juss\u00e0. 2011. A semantic feature for statistical machine translation. In Proceedings of the Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation, SSST-5. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fill-up versus Interpolation Methods for Phrase-based SMT Adaptation",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arianna Bisazza, Nick Ruiz, and Marcello Federico. 2011. Fill-up versus Interpolation Methods for Phrase-based SMT Adaptation. In Proceedings of IWSLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Latent dirichlet allocation. JMLR",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent dirichlet allocation. JMLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of WMT 2013. Association for Computational Linguistics",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of WMT 2013. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multilingual Topic Models for Unaligned Text",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Graber",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber and David Blei. 2009. Multilin- gual Topic Models for Unaligned Text. In Proceed- ings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber and Philip Resnik. 2010. Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation. In Proceed- ings of EMNLP. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How phrase sense disambiguation outperforms word sense disambiguation for SMT",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Theoretical and Methodological Issues in MT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat and Dekai Wu. 2007. How phrase sense disambiguation outperforms word sense dis- ambiguation for SMT. In International Conference on Theoretical and Methodological Issues in MT.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wit3: Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proceedings of EAMT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Vector space model for adaptation in SMT",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boxing Chen, Roland Kuhn, and George Foster. 2013. Vector space model for adaptation in SMT. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Topic models for dynamic translation model adaptation",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic models for dynamic translation model adaptation. In Proceedings of ACL. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mixture-model adaptation for SMT",
"authors": [
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Foster and R. Kuhn. 2007. Mixture-model adapta- tion for SMT. In Proceedings of WMT. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Discriminative instance weighting for domain adaptation in SMT",
"authors": [
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Foster, C. Goutte, and R. Kuhn. 2010. Discrimi- native instance weighting for domain adaptation in SMT. In Proceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Employing topic modeling for SMT",
"authors": [
{
"first": "Zhengxian",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IEEE",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengxian Gong and Guodong Zhou. 2011. Employ- ing topic modeling for SMT. In Proceedings of IEEE (CSAE), volume 4.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sparse lexicalised features and topic adaptation for SMT",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Hasler",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Hasler, Barry Haddow, and Philipp Koehn. 2012. Sparse lexicalised features and topic adaptation for SMT. In Proceedings of IWSLT.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Incremental topic-based TM adaptation for conversational SLT",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hewavitharana",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mehay",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ananthakrishnan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Natarajan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hewavitharana, D. Mehay, S. Ananthakrishnan, and P. Natarajan. 2013. Incremental topic-based TM adaptation for conversational SLT. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Tuning as ranking",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Moses: Open source toolkit for SMT",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL 2007: Demo and poster sessions. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for SMT. In ACL 2007: Demo and poster sessions. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Discriminative corpus weight estimation for MT",
"authors": [
{
"first": "S",
"middle": [],
"last": "Matsoukas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rosti",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Matsoukas, A. Rosti, and B. Zhang. 2009. Discrim- inative corpus weight estimation for MT. In Pro- ceedings of EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Estimating a Dirichlet distribution",
"authors": [
{
"first": "Thomas",
"middle": [
"P"
],
"last": "Minka",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas P Minka. 2012. Estimating a Dirichlet distribution. Technical report.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MDI Adaptation for the Lazy: Avoiding Normalization in LM Adaptation for Lecture Translation",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nick Ruiz and Marcello Federico. 2012. MDI Adaptation for the Lazy: Avoiding Normalization in LM Adaptation for Lecture Translation. In Proceedings of IWSLT.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Mixture-modeling with unsupervised clusters for domain adaptation in SMT",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2012a. Mixture-modeling with unsupervised clusters for domain adaptation in SMT. In Proceedings of EAMT.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Perplexity Minimization for Translation Model Domain Adaptation in SMT",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2012b. Perplexity Minimization for Translation Model Domain Adaptation in SMT. In Proceedings of EACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Translation model adaptation for SMT with monolingual topic information",
"authors": [
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Su, H. Wu, H. Wang, Y. Chen, X. Shi, H. Dong, and Q. Liu. 2012. Translation model adaptation for SMT with monolingual topic information. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A collapsed variational Bayesian inference algorithm for LDA",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh, David Newman, and Max Welling. 2006. A collapsed variational Bayesian inference algorithm for LDA. In Proceedings of NIPS.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Rethinking LDA: Why priors matter",
"authors": [
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Mimno",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna M. Wallach, David M. Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Proceedings of NIPS.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Collapsed variational Bayesian inference for Hidden Markov Models",
"authors": [
{
"first": "Pengyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "AISTATS",
"volume": "31",
"issue": "",
"pages": "599--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengyu Wang and Phil Blunsom. 2013. Collapsed variational Bayesian inference for Hidden Markov Models. In AISTATS, volume 31 of JMLR Proceedings, pages 599-607.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bilingual topic admixture models for word alignment",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao and Eric P. Xing. 2006. Bilingual topic admixture models for word alignment. In Proceedings of ACL. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Phrasal LDA model for inference on training data."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Document-topic distributions for training (top) and test (bottom) documents, grouped by domain and averaged into blocks for visualisation."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Frequent phrase pairs in learned topics."
},
"TABREF1": {
"content": "<table/>",
"text": "Number of sentence pairs and documents (in brackets) in the French-English data sets. The training data has 2.7M English words per domain.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Model</td><td>Mixed</td><td>CC</td><td>NC</td><td>TED</td></tr><tr><td>IN</td><td>26.77</td><td>18.76</td><td>29.56</td><td>32.47</td></tr><tr><td>ALL</td><td>26.86</td><td>19.61</td><td>29.42</td><td>31.88</td></tr></table>",
"text": "which preserves the translation scores of phrases from the IN model and the linear mixture models (LIN-TM) of Sennrich (2012b) (both available in the Moses toolkit). For both systems, we build separate phrase tables for each domain and use a wrapper to decode tuning and test sets with domain-specific tables.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">Avg JSD Rank1-diff</td></tr><tr><td>Ted-IN vs ALL</td><td>0.15</td><td>10.8%</td></tr><tr><td>CC-IN vs ALL</td><td>0.17</td><td>18.4%</td></tr><tr><td>NC-IN vs ALL</td><td>0.13</td><td>13.3%</td></tr></table>",
"text": "BLEU of in-domain and baseline models.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"text": "Average JSD of IN vs. ALL models. Rank1-diff: % PT entries where preferred translation changes. Both benchmarks have an advantage over our model because they are aware of domain boundaries in the test set. Further, LIN-TM adapts phrase table features in both translation directions while we only adapt the forward features.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table><tr><td>Model</td><td>Mixed</td><td>CC</td><td>NC</td><td>TED</td></tr><tr><td>lex(e|f,d)</td><td>26.99</td><td>19.93</td><td>29.34</td><td>32.19</td></tr><tr><td>trgUnigrams</td><td>27.15</td><td>19.90</td><td>29.54</td><td>32.50</td></tr><tr><td>docSim</td><td>27.22</td><td>20.11</td><td>29.63</td><td>32.40</td></tr><tr><td>p(e|f,d)</td><td>27.31</td><td>20.23</td><td>29.52</td><td>32.58</td></tr><tr><td>All features</td><td>27.67</td><td>20.40</td><td>30.04</td><td>33.08</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table><tr><td>Model Mixed</td><td>CC</td><td>NC</td><td>TED</td></tr><tr><td colspan=\"4\">ALL -26.86 19.61 29.42 31.88</td></tr><tr><td colspan=\"4\">3 topics -26.95 19.83 29.46 32.02</td></tr><tr><td colspan=\"4\">5 topics *27.48 19.98 29.94 33.04</td></tr><tr><td colspan=\"4\">10 topics *27.65 20.34 29.99 33.14</td></tr><tr><td colspan=\"4\">20 topics *27.63 20.39 29.93 33.09</td></tr><tr><td colspan=\"4\">50 topics *27.67 20.40 30.04 33.08</td></tr><tr><td colspan=\"4\">100 topics *27.65 20.54 30.00 32.90</td></tr><tr><td colspan=\"4\">&gt;ALL +0.81 +0.93 +0.62 +1.26</td></tr></table>",
"text": "BLEU scores of pLDA features (50 topics), separately and combined.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}