{
"paper_id": "D10-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:03.299247Z"
},
"title": "Translingual Document Representations from Discriminative Projections",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Representing documents by vectors that are independent of language enhances machine translation and multilingual text categorization. We use discriminative training to create a projection of documents from multiple languages into a single translingual vector space. We explore two variants to create these projections: Oriented Principal Component Analysis (OPCA) and Coupled Probabilistic Latent Semantic Analysis (CPLSA). Both of these variants start with a basic model of documents (PCA and PLSA). Each model is then made discriminative by encouraging comparable document pairs to have similar vector representations. We evaluate these algorithms on two tasks: parallel document retrieval for Wikipedia and Europarl documents, and cross-lingual text classification on Reuters. The two discriminative variants, OPCA and CPLSA, significantly outperform their corresponding baselines. The largest differences in performance are observed on the task of retrieval when the documents are only comparable and not parallel. The OPCA method is shown to perform best.",
"pdf_parse": {
"paper_id": "D10-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Representing documents by vectors that are independent of language enhances machine translation and multilingual text categorization. We use discriminative training to create a projection of documents from multiple languages into a single translingual vector space. We explore two variants to create these projections: Oriented Principal Component Analysis (OPCA) and Coupled Probabilistic Latent Semantic Analysis (CPLSA). Both of these variants start with a basic model of documents (PCA and PLSA). Each model is then made discriminative by encouraging comparable document pairs to have similar vector representations. We evaluate these algorithms on two tasks: parallel document retrieval for Wikipedia and Europarl documents, and cross-lingual text classification on Reuters. The two discriminative variants, OPCA and CPLSA, significantly outperform their corresponding baselines. The largest differences in performance are observed on the task of retrieval when the documents are only comparable and not parallel. The OPCA method is shown to perform best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given the growth of multiple languages on the Internet, Natural Language Processing must operate on dozens of languages. It is becoming critical that computers reach high performance on the following two tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Comparable and parallel document retrieval -Cross-language information retrieval and text categorization have become important with the growth of the Web (Oard and Diekema, 1998) . In addition, machine translation (MT) systems can be improved by training on sentences extracted from parallel or comparable documents mined from the Web (Munteanu and Marcu, 2005) . Comparable documents can also be used for learning word-level translation lexicons (Fung and Yee, 1998; Rapp, 1999) .",
"cite_spans": [
{
"start": 156,
"end": 180,
"text": "(Oard and Diekema, 1998)",
"ref_id": "BIBREF18"
},
{
"start": 337,
"end": 363,
"text": "(Munteanu and Marcu, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 449,
"end": 469,
"text": "(Fung and Yee, 1998;",
"ref_id": "BIBREF9"
},
{
"start": 470,
"end": 481,
"text": "Rapp, 1999)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Cross-language text categorization -Applications of text categorization, such as sentiment classification (Pang et al., 2002) , are now required to run on multiple languages. Categorization is usually trained on the language of the developer: it needs to be easily extended to other languages.",
"cite_spans": [
{
"start": 108,
"end": 127,
"text": "(Pang et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two broad approaches to comparable document retrieval and cross-language text categorization. One approach is to translate queries or a training set from different languages into a single target language. Standard monolingual retrieval and classification algorithms can then be applied in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alternatively, a cross-language system can project a bag-of-words vector into a translingual lowerdimensional vector space. Ideally, vectors in this space represent the semantics of a document, independent of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The advantage of pre-translation is that MT systems tend to preserve the meaning of documents. However, MT can be very slow (more than 1 second per document), preventing its use on large training sets. When full MT is not practical, a fast word-byword translation model can be used instead, (Ballesteros and Croft, 1996) but may be less accurate.",
"cite_spans": [
{
"start": 291,
"end": 320,
"text": "(Ballesteros and Croft, 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conversely, applying a projection into a lowdimensional space is quick. Linear projection algorithms use matrix-sparse vector multiplication, which can be easily parallelized. However, as seen in section 3, the accuracies of previous projection techniques are not as high as machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents two techniques: Oriented PCA and Coupled PLSA. These techniques retain the high speed of projection, while approaching or exceeding the quality level of word glossing. We improve the quality of the projections by the use of discriminative training: we minimize the difference between comparable documents in the projected vector space. Oriented PCA minimizes the difference by modifying the eigensystem of PCA (Diamantaras and Kung, 1996) , while Coupled PLSA uses posterior regularization (Graca et al., 2008; Ganchev et al., 2009) on the topic assignments of the comparable documents.",
"cite_spans": [
{
"start": 430,
"end": 458,
"text": "(Diamantaras and Kung, 1996)",
"ref_id": "BIBREF6"
},
{
"start": 510,
"end": 530,
"text": "(Graca et al., 2008;",
"ref_id": "BIBREF11"
},
{
"start": 531,
"end": 552,
"text": "Ganchev et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been extensive work in projecting monolingual documents into a vector space. The initial algorithm for projecting documents was Latent Semantic Analysis (LSA), which modeled bag-ofword vectors as low-rank Gaussians (Deerwester et al., 1990) . Subsequent projection algorithms were based on generative models of individual terms in the documents, including Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003) .",
"cite_spans": [
{
"start": 225,
"end": 250,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 427,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 466,
"end": 485,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "1.1"
},
{
"text": "Work on cross-lingual projections followed a similar pattern of moving from Gaussian models to term-wise generative models. Cross-language Latent Semantic Indexing (CL-LSI) (Dumais et al., 1997) applied LSA to concatenated comparable documents from multiple languages. Similarly, Polylingual Topic Models (PLTM) (Mimno et al., 2009) generalized LDA to tuples of documents from multiple languages. The experiments in section 3 use CL-LSI and an algorithm similar to PLTM as benchmarks.",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "(Dumais et al., 1997)",
"ref_id": "BIBREF7"
},
{
"start": 312,
"end": 332,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "1.1"
},
{
"text": "The closest previous work to this paper is the use of Canonical Correlation Analysis (CCA) to find projections for multiple languages whose results are maximally correlated with each other (Vinokourov et al., 2003) .",
"cite_spans": [
{
"start": 189,
"end": 214,
"text": "(Vinokourov et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "1.1"
},
{
"text": "PLSA-, LDA-, and CCA-based cross-lingual models have also been trained without the use of parallel or comparable documents, using only knowledge from a translation dictionary to achieve sharing of topics across languages (Haghighi et al., 2008; Jagarlamudi and Daum\u00e9, 2010; Zhang et al., 2010) . Such work is complementary to ours and can be used to extend the models to domains lacking parallel documents.",
"cite_spans": [
{
"start": 221,
"end": 244,
"text": "(Haghighi et al., 2008;",
"ref_id": "BIBREF12"
},
{
"start": 245,
"end": 273,
"text": "Jagarlamudi and Daum\u00e9, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 274,
"end": 293,
"text": "Zhang et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "1.1"
},
{
"text": "Outside of NLP, researchers have designed algorithms to find discriminative projections. We build on the Oriented Principal Component Analysis (OPCA) algorithm (Diamantaras and Kung, 1996) , which finds projections that maximize a signal-tonoise ratio (as defined by the user). OPCA has been used to create discriminative features for audio fingerprinting (Burges et al., 2003) .",
"cite_spans": [
{
"start": 160,
"end": 188,
"text": "(Diamantaras and Kung, 1996)",
"ref_id": "BIBREF6"
},
{
"start": 356,
"end": 377,
"text": "(Burges et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "1.1"
},
{
"text": "This paper now presents two algorithms for translingual document projection (in section 2): OPCA and Coupled PLSA (CPLSA). To explain OPCA, we first review CL-LSI in section 2.1, then discuss the details of OPCA (section 2.2), and compare it to CCA (section 2.3). To explain CPLSA, we first introduce Joint PLSA (JPLSA), analogous to CL-LSI, in section 2.4, and then describe the details of CPLSA (section 2.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of paper",
"sec_num": "1.2"
},
{
"text": "We have evaluated these algorithms on two different tasks: comparable document retrieval (section 3.2) and cross-language text categorization (section 3.3). We discuss the findings of the evaluations and extensions to the algorithms in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of paper",
"sec_num": "1.2"
},
{
"text": "2 Algorithms for translingual document projection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of paper",
"sec_num": "1.2"
},
{
"text": "Cross-language Latent Semantic Indexing (CL-LSI) is Latent Semantic Analysis (LSA) applied to multiple languages. First, we review the mathematics of LSA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "LSA models an n \u00d7 k document-term matrix D, where n is the number of documents and k is the number of terms. The model of the document-term matrix is a low-rank Gaussian. Originally, LSA was presented as performing a Singular Value Decomposition (Deerwester et al., 1990) , but here we present it as eigendecomposition, to clarify its relationship with OPCA.",
"cite_spans": [
{
"start": 246,
"end": 271,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "LSA first computes the correlation matrix between terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = D T D.",
"eq_num": "(1)"
}
],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "The Rayleigh quotient for a vector v with the matrix C is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v T C v v T v ,",
"eq_num": "(2)"
}
],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "and is equal to the variance of the data projected using the vector v, normalized by the length of v, if D has columns that are zero mean. Good projections retain a large amount of variance. LSA maximizes the Rayleigh ratio by taking its derivative against v and setting it to zero. This yields a set of projections that are eigenvectors of C,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C v j = \u03bb j v j ,",
"eq_num": "(3)"
}
],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "where \u03bb j is the jth-largest eigenvalue. Each eigenvalue is also the variance of the data when projected by the corresponding eigenvector v j . LSA simply uses top d eigenvectors as projections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
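{
"text": "As a rough illustration of the eigendecomposition view of LSA described above, the following sketch (illustrative names, dense matrices assumed; real corpora would use sparse representations) computes the term correlation matrix of equation (1) and keeps the top d eigenvectors as projections:\n\nimport numpy as np\n\ndef lsa_projection(D, d):\n    # C is the k x k term correlation matrix of equation (1)\n    C = D.T @ D\n    eigvals, eigvecs = np.linalg.eigh(C)\n    # sort eigenvalues in decreasing order and keep the top-d eigenvectors\n    order = np.argsort(eigvals)[::-1]\n    return eigvecs[:, order[:d]]\n\n# documents are then projected by a single matrix multiplication with the returned matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},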
{
"text": "LSA is very similar to Principal Components Analysis (PCA). The only difference is that the correlation matrix C is used, instead of the covariance matrix. In practice, the document-term matrix D is sparse, so the column means are close to zero, and the correlation matrix is close to the covariance matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "There are a number of methods to form the document-term matrix D. One method that works well in practice is to compute the log(tf)-idf weighting: (Dumais, 1990; Wild et al., 2005) ",
"cite_spans": [
{
"start": 146,
"end": 160,
"text": "(Dumais, 1990;",
"ref_id": "BIBREF8"
},
{
"start": 161,
"end": 179,
"text": "Wild et al., 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D ij = log 2 (f ij + 1) log 2 (n/d j ),",
"eq_num": "(4)"
}
],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "where f ij is the number of times term j occurs in document i, n is the total number of documents, and d j is the total number of documents that contain term j. Applying a logarthm to the term counts makes the distribution of matrix entries approach Gaussian, which makes the LSA model more valid. Cross-language LSI is an application of LSA where each row of D is formed by concatenating comparable or parallel documents in multiple languages. If a single term occurs in multiple languages, the term only has one slot in the concatenation, and the term count accumulates for all languages. Such terms could be proper nouns, such as \"Smith\" or \"Merkel\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "In general, the elements of D are computed via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D ij = log 2 m f m ij + 1 log 2 (n/d j ),",
"eq_num": "(5)"
}
],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "where f m ij is the number of times term j occurs in document i in language m. Here, d j is the number of documents term j appears in, and n is the total number of documents across all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
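{
"text": "A minimal sketch of the weighting in equation (5), assuming per-language dense count matrices aligned row-by-row over the shared term list; the function name and the exact document counting are illustrative assumptions only:\n\nimport numpy as np\n\ndef cl_log_tf_idf(counts_per_language):\n    # counts_per_language: list of (rows x k) count matrices, one row per document tuple\n    F = sum(counts_per_language)                              # sum_m f^m_ij\n    n = sum(C.shape[0] for C in counts_per_language)          # documents across all languages\n    df = sum(np.count_nonzero(C, axis=0) for C in counts_per_language)  # d_j\n    idf = np.log2(n / np.maximum(df, 1))\n    return np.log2(F + 1.0) * idf                             # equation (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},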
{
"text": "Because CL-LSI is simply LSA applied to concatenated documents, it models terms in document vectors jointly across languages as a single low-rank Gaussian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Latent Semantic Indexing",
"sec_num": "2.1"
},
{
"text": "The limitations of CL-LSI can be illustrated by considering Oriented Principal Components Analysis (OPCA), a generalization of PCA. A user of OPCA computes a signal covariance matrix S and a noise covariance matrix N. OPCA projections v j maximize the ratio of the variance of the signal projected by v j to the variance of the noise projected by v j . This signal-to-noise ratio is the generalized Rayleigh quotient: (Diamantaras and Kung, 1996) ",
"cite_spans": [
{
"start": 418,
"end": 446,
"text": "(Diamantaras and Kung, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v T S v v T N v .",
"eq_num": "(6)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "Taking the derivative of the Rayleigh quotient with respect to the projections v and setting it to zero yields the generalized eigenproblem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S v j = \u03bb j N v j .",
"eq_num": "(7)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "This eigenproblem has no local minima, and can be solved with commonly available parallel code. PCA is a specialization of OPCA, where the noise covariance matrix is assumed to be the identity (i.e., uncorrelated noise). PCA projections maximize the signal-to-noise ratio where the signal is the empirical covariance of the data, and the noise is spherical white noise. PCA projections are not truly appropriate for forming multilingual document projections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "Instead, we want multilingual document projections to maximize the projected covariance of document vectors across all languages, while simultaneously minimizing the projected distance between comparable documents (see Figure 1 ). OPCA gives us a framework for finding such discriminative projections. The covariance matrix for all documents is the signal covariance in OPCA, and captures the meaning of documents across all languages. The projection of this covariance matrix should be maximized. The covariance matrix formed from differences between comparable documents is the noise covariance in OPCA: we wish to minimize the latter covariance, to make the projection languageindependent.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "Specifically, we create the weighted documentterm matrix D m for each language:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D ij,m = log 2 (f m ij + 1)log 2 (n/d j ).",
"eq_num": "(8)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "We then derive a signal covariance matrix over all languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = m D T m D m /n \u2212 \u00b5 T m \u00b5 m ,",
"eq_num": "(9)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "where \u00b5 m is the mean of each D m over its columns, and a noise covariance matrix,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N = m (D m \u2212 D) T (D m \u2212 D)/n + \u03b3I,",
"eq_num": "(10)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "where D is the mean across all languages of the document-term matrix,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = 1 M m D m ,",
"eq_num": "(11)"
}
],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
{
"text": "and M is the number of languages. Applying equation (7) to these matrices and taking the top generalized eigenvectors yields the projection matrix for OPCA. Note the regularization term of \u03b3I in equation (10). The empirical sample of comparable documents may not cover the entire space of translation noise the system will encounter in the test set. For safety, we add a regularizer that prevents the variance of a term from getting too small. We tuned \u03b3 on the development sets in section 3.2: for log(tf)idf weighted vectors, C = 0.1 works well for the data sets and dimensionalities that we tried. We use C = 0.1 for all final tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},
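{
"text": "A hedged sketch of the OPCA training procedure of equations (7)-(11), assuming two aligned dense weighted document-term matrices D1 and D2 (one row per comparable document pair) and scipy for the generalized eigenproblem; names are illustrative:\n\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef opca_projection(D1, D2, d, gamma=0.1):\n    n, k = D1.shape\n    Dbar = 0.5 * (D1 + D2)                    # per-pair mean across the two languages, equation (11)\n    S = np.zeros((k, k))\n    N = gamma * np.eye(k)                     # regularization term of equation (10)\n    for Dm in (D1, D2):\n        mu = Dm.mean(axis=0, keepdims=True)\n        S += Dm.T @ Dm / n - mu.T @ mu        # signal covariance, equation (9)\n        N += (Dm - Dbar).T @ (Dm - Dbar) / n  # noise covariance, equation (10)\n    eigvals, eigvecs = eigh(S, N)             # generalized eigenproblem, equation (7)\n    order = np.argsort(eigvals)[::-1]\n    return eigvecs[:, order[:d]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oriented Principal Component Analysis",
"sec_num": "2.2"
},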
{
"text": "Canonical Correlation Analysis (CCA) is a technique that is related to OPCA. CCA was kernelized and applied to creating cross-language document models by (Vinokourov et al., 2003) . In CCA, a linear projection is found for each language, such that the projections of the corpus from each language are maximally correlated with each other. Similar to OPCA, this linear projection can be found by finding the top generalized eigenvectors of the system 7, where S is now a matrix of cross-correlations that the projection maximizes,",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "(Vinokourov et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = 0 C 12 C 21 0 ,",
"eq_num": "(12)"
}
],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},
{
"text": "and N is a matrix of autocorrelations that the projection minimizes,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N = C 11 + \u03b3I 0 0 C 22 + \u03b3I .",
"eq_num": "(13)"
}
],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},
{
"text": "Here, C ij is the (cross-)covariance matrix, with dimension equal to the vocabulary size, that is computed between the document vectors for languages i and j. Analogous to OPCA, \u03b3 is a regularization term, set by optimizing performance on a validation set. Like OPCA, these matrices can be generalized to more than two languages. Unlike OPCA, CCA finds projections that maximize the cross-covariance between the projected vectors, instead of minimizing Euclidean distance. 1 By definition, CCA cannot take advantage of the information that same term occurs simultaneously in comparable documents. As shown in section 3, this information is useful and helps OPCA perform better then CCA. In addition, CCA encourages comparable documents to be projected to vectors that are mutually linearly predictable. This is not the same OPCA's projected vectors that have low Euclidean distance: the latter may be preferred by algorithms that consume the projections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},
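{
"text": "For comparison, a two-language CCA sketch built from the block matrices of equations (12) and (13); this is an illustrative dense linear implementation, not the kernelized variant, and the names are assumptions:\n\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef cca_projections(D1, D2, d, gamma=10.0):\n    X1 = D1 - D1.mean(axis=0)\n    X2 = D2 - D2.mean(axis=0)\n    n, k = X1.shape\n    C11, C22, C12 = X1.T @ X1 / n, X2.T @ X2 / n, X1.T @ X2 / n\n    Z = np.zeros((k, k))\n    S = np.block([[Z, C12], [C12.T, Z]])              # equation (12)\n    N = np.block([[C11 + gamma * np.eye(k), Z],\n                  [Z, C22 + gamma * np.eye(k)]])      # equation (13)\n    eigvals, eigvecs = eigh(S, N)\n    top = eigvecs[:, np.argsort(eigvals)[::-1][:d]]\n    return top[:k], top[k:]      # split into per-language projections (see footnote 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canonical Correlation Analysis",
"sec_num": "2.3"
},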
{
"text": "We now turn to a baseline generative model that is analogous to CL-LSI. Our baseline joint PLSA model (JPLSA) is closely related to the poly-lingual LDA model of (Mimno et al., 2009 ). The graphical model for JPLSA is shown at the top in Figure 2 . We describe the model for two languages, but it is straightforward to generalize to more than two languages, as in (Mimno et al., 2009) . The model sees documents d i as sequences of words w 1 , w 2 , . . . , w n i from a vocabulary V . There are T cross-language topics, each of which has a distribution \u03c6 t over words in V . In the case of models for two languages, we define the vocabulary V to contain word types from both languages. In this way, each topic is shared across languages.",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "(Mimno et al., 2009",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 384,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "z z \u03b8 \u03b1 w w \u03c6 T D N 1 N 2 z z \u03b8 1 \u03b1 w w \u03c6 T D N 1 N 2 \u03b8 2 \u03b2 \u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "Each topic-specific distribution \u03c6 t , for t = 1 . . . T , is drawn from a symmetric Dirichlet prior with concentration parameter \u03b2. Given the topicspecific word distributions, the generative process for a corpus of paired documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "[d 1 i , d 2 i ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "in two languages L 1 and L 2 is described in the next paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "For each pair of documents, pick a distribution over topics \u03b8 i , from a symmetric Dirichlet prior with concentration parameter \u03b1. Then generate the documents d 1 i and d 2 i in turn. Each word token in each document is generated independently by first picking a topic z from a multinomial distribution with parameter \u03b8 i (MULTI(\u03b8 i )), and then generating the word token from the topic-specific word distribution for the chosen topic MULTI(\u03c6 z ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
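{
"text": "A toy sketch of this generative story (a hypothetical helper, assuming numpy and a topic-by-vocabulary matrix phi whose rows are the topic-specific word distributions over the merged vocabulary):\n\nimport numpy as np\n\ndef generate_pair(phi, alpha, n1, n2, rng=np.random.default_rng()):\n    T, V = phi.shape\n    theta = rng.dirichlet(np.full(T, alpha))    # shared topic mixture for the pair\n    def gen(n):\n        z = rng.choice(T, size=n, p=theta)      # topic assignment per token\n        return np.array([rng.choice(V, p=phi[t]) for t in z])\n    return gen(n1), gen(n2)                     # word indices for d1 and d2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},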
{
"text": "The probability of a document pair",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "[d 1 , d 2 ] with words [w 1 1 , w 1 2 , . . . , w 1 n 1 ], [w 2 1 , w 2 2 , . . . , w 2 n 2 ], topic assignments [z 1 1 , . . . , z 1 n 1 ], [z 2 1 , . . . , z 2 n 2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": ", and a common topic vector \u03b8 is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "P (\u03b8|\u03b1) n 1 j=1 P (z 1 j |\u03b8)P (w 1 j |\u03c6 z 1 j ) n 2 j=1 P (z 2 j |\u03b8)P (w 2 j |\u03c6 z 2 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "The difference between the JPLSA model and the poly-lingual topic model of (Mimno et al., 2009) is that we merge the vocabularies in the two languages and learn topic-specific word distributions over these merged vocabularies, instead of having pairs of topic-specific word distributions, one for each language, like in (Mimno et al., 2009) . Thus our model is more similar to the CL-LSI model, because it can be seen as viewing a pair of documents in two languages as one bigger document containing the words in both documents.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 320,
"end": 340,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "Another difference between our model and the poly-lingual LDA model of (Mimno et al., 2009) is that we use maximum aposteriori (MAP) instead of Bayesian inference. Recently, MAP inference was shown to perform comparably to the best inference method for LDA (Asuncion et al., 2009) , if the hyper-parameters are chosen optimally for the inference method. Our initial experiments with Bayesian versus MAP inference for parallel document retrieval using JPLSA confirmed this result. In practice our baseline model outperforms polylingual LDA as mentioned in our experiments.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 257,
"end": 280,
"text": "(Asuncion et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language Topic Models",
"sec_num": "2.4"
},
{
"text": "The JPLSA model assumes that a pair of translated or comparable documents have a common topic distribution \u03b8. JPLSA fits its parameters to optimize the probability of the data, given this assumption. For the task of comparable document retrieval, we want our topic model to assign similar topic distributions \u03b8 to a pair of corresponding documents. But this is not exactly what the JPLSA model is doing. Instead, it derives a common topic vector \u03b8 which explains the union of all tokens in the English and foreign documents, instead of making sure that the best topic assignment for the English document is close to the best topic assignment of the foreign document. This difference becomes especially apparent when corresponding documents have different lengths. In this case, the model will tend to derive a topic vector \u03b8 which explains the longer document best, making the sum of the two documents' loglikelihoods higher. Modeling the shorter document's best topic carries little weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "Modeling both documents equally is what Coupled PLSA (CPLSA) is designed to do. The graphical model for CPLSA is shown at the bottom of Figure 2 . In this figure, the topic vectors of a pair of documents in two languages are shown completely independent. We use the log-likelihood according to this model, but also add a regularization term, which tries to make the topic assignments of corresponding documents close. In particular, we use posterior regularization (Graca et al., 2008; Ganchev et al., 2009) to place linear constraints on the expectations of topic assignments to two corresponding documents.",
"cite_spans": [
{
"start": 465,
"end": 485,
"text": "(Graca et al., 2008;",
"ref_id": "BIBREF11"
},
{
"start": 486,
"end": 507,
"text": "Ganchev et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "For two linked documents d 1 and d 2 , we would like our model to be such that the expected fraction of tokens in d 1 that get assigned topic t is approximately the same as the expected fraction of tokens in d 2 that get assigned the same topic t, for each topic t = 1 . . . T . This is exactly what we need to make each pair of corresponding documents close.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "Let z 1 and z 2 denote vectors of topic assignments to the tokens in document d 1 and d 2 , respectively. Their dimensionality is equal to the lengths of the two documents, n 1 and n 2 . We define a space of posterior distributions Q over hidden topic assignments to the tokens in d 1 and d 2 , that has the desired property: the expected fraction of each topic is approximately equal in d 1 and d 2 . We can formulate this constrained space Q as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "Q = {q 1 (z 1 ), q 2 (z 2 )} such that E q 1 [ n 1 j=1 1(z 1 j = t) n 1 ] \u2212 E q 2 [ n 2 j=1 1(z 2 j = t) n 2 ] \u2264 t E q 2 [ n 2 j=1 1(z 2 j = t) n 2 ] \u2212 E q 1 [ n 1 j=1 1(z 1 j = t) n 1 ] \u2264 t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
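{
"text": "To make the constrained set Q concrete, the quantity that the constraints bound can be sketched as follows (q1 and q2 are token-by-topic posterior matrices whose rows sum to one; names are illustrative):\n\nimport numpy as np\n\ndef topic_proportion_gap(q1, q2):\n    # expected fraction of tokens assigned to each topic in d1 and d2\n    p1 = q1.mean(axis=0)\n    p2 = q2.mean(axis=0)\n    # CPLSA constrains each entry of this vector to be at most epsilon_t\n    return np.abs(p1 - p2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},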
{
"text": "We then formulate an objective function that maximizes the log-likelihood of the data while simultaneously minimizing the KL-divergence between the desired distribution set Q and the posterior distribution according to the model: P (z 1 |d 1 , \u03b8 1 , \u03c6) and P (z 2 |d 2 , \u03b8 2 , \u03c6) .",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 279,
"text": "(z 2 |d 2 , \u03b8 2 , \u03c6)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "The objective function for a single document pair is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "log P (d 1 |\u03b8 1 , \u03c6) + log P (d 2 |\u03b8 2 , \u03c6) \u2212KL(Q||P (z 1 |d 1 , \u03b8 1 , \u03c6), P (z 2 |d 2 , \u03b8 2 , \u03c6)) \u2212|| ||",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "The final corpus-wide objective is summed over document-pairs, and also contains terms for the probabilities of the parameters \u03b8 and \u03c6 given the Dirichlet priors. The norm of is minimized, which makes the expected proportions of topics in two documents as close as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "Following (Ganchev et al., 2009) , we fit the parameters by an EM-like algorithm, where for each document pair, after finding the posterior distribution of the hidden variables, we find the KLprojection of this posterior onto the constraint set, and take expected counts with respect to this projection; these expected counts are used in the M-step. The projection is found using a simple projected gradient algorithm. 2 For both the baseline JPLSA and the CPLSA models, we performed learning through MAP inference using EM (with a projection step for CPLSA). We did up to 500 iterations for each model, and did early stopping based on task performance on the development set. The JPLSA model required more iterations before reaching its peak accuracy, tending to require around 300 to 450 iterations for convergence. CPLSA required fewer iterations, but each iteration was slower due to the projection step.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "(Ganchev et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 419,
"end": 420,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "All models use \u03b1 = 1.1 and \u03b2 = 1.01 for the values of the concentration parameters. We found that the performance of the models was not very sensitive to these values, in the region that we tested (\u03b1, \u03b2 \u2208 [1.001, 1.1]). Higher hyper-parameter values resulted in faster convergence, but the final performance was similar across these different values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Probabilistic Latent Semantic Analysis",
"sec_num": "2.5"
},
{
"text": "We test the proposed discriminative projections versus more established cross-language models on the two tasks described in the introduction: retrieving comparable documents from a corpus, and training a classifier in one language and using it in another. We measure accuracy on a test set, and also examine the sensitivity to dimensionality of the projection on development sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental validation",
"sec_num": "3"
},
{
"text": "We first test the speed of the various algorithms discussed in this paper, compared to a full machine translation system. When finding document projections, CL-LSI, OPCA, CCA, JPLSA, and CPLSA are equally fast: they perform a matrix multiplication and require O(nk) operations, where n is the number of distinct words in the documents and k is the dimensionality of the projection. 3 A single CPU core can read the indexed documents into memory and take logarithms at 216K words per second. Projecting into a 2000-dimensional space operates at 41K words per second. Translating word-by-word operates at 274K words per second. In contrast, machine translation processes 50 words per second, approximately 3 orders of magnitude slower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed of training and evaluation",
"sec_num": "3.1"
},
{
"text": "Total training time for OPCA on 43,380 pairs of comparable documents was 90 minutes, running on an 8-core CPU for 2000 dimensions. On the same corpus, JPLSA requires 31 minutes per iteration and CPLSA requires 377 minutes per iteration. CPLSA requires a factor of five times fewer iterations: overall, it is twice as slow as JPLSA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed of training and evaluation",
"sec_num": "3.1"
},
{
"text": "In comparable document retrieval, a query is a document in one language, which is compared to a cor-pus of documents in another language. By mapping all documents into the same vector space, the comparison is a vector comparison. For our experiments with CL-LSI, OPCA, and CCA, we use cosine similarity between vectors to rank the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
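{
"text": "A minimal sketch of this ranking step for the linear projection models, assuming a shared projection matrix V and dense vectors (illustrative names only):\n\nimport numpy as np\n\ndef rank_by_cosine(query_vec, corpus_vecs, V):\n    q = V.T @ query_vec                                   # project the query\n    C = corpus_vecs @ V                                   # project the corpus\n    q = q / (np.linalg.norm(q) + 1e-12)\n    C = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)\n    return np.argsort(-(C @ q))                           # best match first",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},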
{
"text": "For the JPLSA and CPLSA models, we map the documents to corresponding topic vectors \u03b8, and compute distance between these probability vectors. The mapping to topic vectors requires EM iterations, or folding-in (Hofmann, 1999) . We found that performing a single EM iteration resulted in best performance so we used this for all models. For computing distance we used the L1-norm of the difference, which worked a bit better than the Jensen-Shannon divergence between the topic vectors used in (Mimno et al., 2009) .",
"cite_spans": [
{
"start": 210,
"end": 225,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF14"
},
{
"start": 493,
"end": 513,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
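{
"text": "The corresponding sketch for the topic models ranks by the L1 distance between topic distributions (smaller is better; names are illustrative):\n\nimport numpy as np\n\ndef rank_by_l1(query_theta, corpus_thetas):\n    dists = np.abs(corpus_thetas - query_theta).sum(axis=1)\n    return np.argsort(dists)    # closest topic distribution first",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},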
{
"text": "We test all algorithms on the Europarl data set of documents in English and Spanish, and a set of Wikipedia articles in English and Spanish that contain interlanguage links between them (i.e., articles that the Wikipedia community have identified as comparable across languages).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "For the Europarl data set, we use 52,685 documents as training, 11,933 documents as a development set, and 18,415 documents as a final test set. Documents are defined as speeches by a single speaker, as in (Mimno et al., 2009) . 4 For the Wikipedia set, we use 43,380 training documents, 8,675 development documents, and 8,675 final test set documents.",
"cite_spans": [
{
"start": 206,
"end": 226,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 229,
"end": 230,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "For both corpora, the terms are extracted by wordbreaking all documents, removing the top 50 most frequent terms and keeping the next 20,000 most frequent terms. No stemming or folding is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "We assess performance by testing each document in English against all possible documents in Spanish, and vice versa. We measure the Top-1 accuracy (i.e., whether the true comparable is the closest in the test set), and the Mean Reciprocal Rank of the true comparable, and report the average performance over the two retrieval directions. Ties are counted as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
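{
"text": "A hedged sketch of the two retrieval metrics, computed from a query-by-candidate similarity matrix in which the true comparable of query i is assumed to sit at index i; ties are counted against the system, as described above:\n\nimport numpy as np\n\ndef top1_and_mrr(sim):\n    n = sim.shape[0]\n    top1 = mrr = 0.0\n    for i in range(n):\n        others_at_least = np.sum(sim[i] >= sim[i, i]) - 1   # ties count as errors\n        rank = 1 + others_at_least\n        top1 += float(rank == 1)\n        mrr += 1.0 / rank\n    return top1 / n, mrr / n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},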
{
"text": "We tuned the dimensionality of the projections on the development set, as shown in Figures 3 and 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 98,
"text": "Figures 3 and 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "We chose the best dimension on the development set for each algorithm, and used it on the final test set. The regularization \u03b3 was tuned for CCA: \u03b3 = 10 for Europarl, and \u03b3 = 3 for Wikipedia. In the two figures, we evaluate the five projection methods, as well as a word-by-word translation method (denoted by WbW in the graphs). Here \"word-by-word\" refers to using cosine distance after applying a word-by-word translation model to the Spanish documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The word-by-word translation model was trained on the Europarl training set, using the WDHMM model (He, 2007) , which performs similarly to IBM Model 4. The probability matrix of generating English words from Spanish words was multiplied by each document's log(tf)-idf vector to produce a translated document vector. We found that multiplying the probability matrix to the log(tf)-idf vector was more accurate on the development set than multiplying the tf vector directly. This vector was either tested as-is, or mapped through LSA learned from the English training set of the corpus. In the figures, the dimensionality of WbW translation refers to the dimensionality of monolingual LSA.",
"cite_spans": [
{
"start": 99,
"end": 109,
"text": "(He, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The overall ordering of the six models is different for the Europarl and Wikipedia development datasets. The discriminative models outperform the corresponding generative ones (OPCA vs CL-LSI) and (CPLSA vs JPLSA) for both datasets, and OPCA performs best overall, dominating the best fast-translation based model, as well as the other projection methods, including CCA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "On Europarl, JPLSA and CPLSA outperform CL-LSI, with the best dimension or JPLSA also slightly outperforming the best setting for the word-by-word translation model, whereas on Wikipedia the PLSAbased models are significantly worse than the other models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The results on the final test set, evaluating each model using its best dimensionality setting, confirm the trends observed on the development set. The final results are shown in Tables 1 and 2. For these experiments, we use the unpaired t-test with Bonferroni correction to determine the smallest set of algorithms that have statistically significantly better accuracy than the rest. The p-value threshold for significance is chosen to be 0.05. The accuracies for these significantly superior algorithms are shown in boldface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
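{
"text": "As an illustration of the significance test described above (a sketch only; the exact grouping of per-query scores is an assumption), each pairwise comparison can be run as an unpaired t-test with a Bonferroni-adjusted threshold:\n\nfrom scipy.stats import ttest_ind\n\ndef significantly_better(scores_a, scores_b, num_comparisons, alpha=0.05):\n    # scores_a, scores_b: per-query 0/1 correctness arrays (numpy) for two systems\n    _, p = ttest_ind(scores_a, scores_b)\n    return scores_a.mean() > scores_b.mean() and p < alpha / num_comparisons",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},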
{
"text": "For Wikipedia and Europarl, we include an additional baseline model,\"Untranslated\": this refers to applying cosine distance to both the Spanish and English documents directly (since they share some vocabulary terms). For Wikipedia, comparable documents seem to share many common terms, so cosine distance between untranslated documents is a reasonable benchmark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "From the final Europarl results we can see that the best models can learn to retrieve parallel documents from the narrow Europarl domain very well. All dimensionality reduction methods can learn from cleanly parallel data, but discriminative training can bring additional error reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "In previously reported work, (Mimno et al., 2009 ) evaluate parallel document retrieval using PLTM on Europarl speeches in English and Spanish, using training and test sets of size similar to ours. They report an accuracy of 81.2% when restricting to test documents of length at least 100 and using 50 topics. JPLSA with 50 topics obtains accuracy of 98.9% for documents of that length.",
"cite_spans": [
{
"start": 29,
"end": 48,
"text": "(Mimno et al., 2009",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The final Wikipedia results are also similar to the the development set results. The problem setting for Wikipedia is different, because corresponding documents linked in Wikipedia may have widely varying degrees of parallelism. While most linked documents share some main topics, they could cover different numbers of sub-topics at varying depths. Thus the training data of linked documents is noisy, which makes it hard for projection methods to learn. The word-by-word translation model in this setting is trained on clean, but out-of-domain parallel data (Europarl), so it has the disadvantage that it may not have a good coverage of the vocabulary; however, it is not able to make use of the Wikipedia training data since it requires sentence-aligned translations. We find it encouraging that the best projection method OPCA outperformed word-by-word translation. This means that OPCA is able to uncover topic correspondence given only comparable document pairs, and to learn well in this noisy setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The PLSA-based models fare worse on Wikipedia document retrieval. CPLSA outperforms JPLSA more strongly, but both are worse than CL-LSI and even the Untranslated baseline. We think this is partly explained by the diverse vocabulary in the heterogenous Wikipedia collection. All other models use log(tf)-idf weighting, which automatically assigns importance weights to terms, whereas the topic models use word counts. This weighting is very useful for Wikipedia. For example, if we apply the untranslated matching using raw word counts, the MRR is 0.1024 on the test set, compared to 0.5383 for log(tf)-idf. We hypothesize that using a hierarchical topic model that automatically learns about more general and more topic-specific words would be helpful in this case. It is also possible that PLSAbased models require cleaner data to learn well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The overall conclusion is that OPCA outper- formed all other document retrieval methods we tested, including fast machine translation of documents. Additionally, both discriminative projection methods outperformed their generative counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval of comparable documents",
"sec_num": "3.2"
},
{
"text": "The second task is to train a text categorization system in one language, and test it with documents in another. To evaluate on this task, we use the Multilingual Reuters Collection, defined and provided by (Amini et al., 2009) . We test the English/Spanish language pair. The collection has news articles in English and Spanish, each of which has been translated to the other by the Portage translation system (Ueffing et al., 2007) . From the English news corpus, we take 13,131 documents as training, 1,875 documents as development, and 1,875 documents as test. We take the English training documents translated into Spanish as our comparable training data. For testing, we use the entire Spanish news corpus of 12,342 documents, ei-ther mapped with cross-lingual projection, or translated by Portage.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Amini et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 411,
"end": 433,
"text": "(Ueffing et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language text classification",
"sec_num": "3.3"
},
{
"text": "The data set was provided by (Amini et al., 2009) as already-processed document vectors, using BM25 weighting. Thus, we only test OPCA, CL-LSI, and related methods: JPLSA and CPLSA require modeling the term counts directly. The performance on the task is measured by classification accuracy on the six disjoint category labels defined by (Amini et al., 2009) . To introduce minimal bias due to the classifier model, we use 1nearest neighbor on top of the cosine distance between vectors as a classifier. For all of the techniques, we treated the vocabulary in each language as completely separate, using the top 10,000 terms from each language.",
"cite_spans": [
{
"start": 29,
"end": 49,
"text": "(Amini et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 358,
"text": "(Amini et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language text classification",
"sec_num": "3.3"
},
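{
"text": "A minimal sketch of the 1-nearest-neighbor classifier used here: each test vector receives the label of its most cosine-similar training vector in the shared projected space (numpy arrays assumed; names are illustrative):\n\nimport numpy as np\n\ndef one_nn_predict(test_vecs, train_vecs, train_labels):\n    A = test_vecs / (np.linalg.norm(test_vecs, axis=1, keepdims=True) + 1e-12)\n    B = train_vecs / (np.linalg.norm(train_vecs, axis=1, keepdims=True) + 1e-12)\n    nearest = np.argmax(A @ B.T, axis=1)    # index of the closest training vector\n    return train_labels[nearest]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language text classification",
"sec_num": "3.3"
},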
{
"text": "Note that no Spanish labeled data is provided for training any of these algorithms: only English and translated English news is labeled. The optimal dimension (and \u03b3 for CCA) on the development set was chosen to maximize the accuracy of English classification and translated English-to-Spanish classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-language text classification",
"sec_num": "3.3"
},
{
"text": "Dim The test classification accuracy is shown in Table 3. As above, the smallest set of superior algorithms as determined by Bonferroni-corrected ttests are shown in boldface. The results for MT and word-by-word translation use the log(tf)-idf vector directly for documents that were written in English, and use a Spanish-to-English translated vector if the document was written in Spanish. As in section 3.2, word-by-word translation multiplied each log(tf)-idf vector by the translation probability matrix trained on Europarl.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "The tests show that OPCA is better than CCA, CL-LSI, plain word-by-word translation, and even full translation for Spanish documents. However, if we post-process full translation by an LSI model trained on the English training set, full translation is the most accurate. If full translation is timeprohibitive, then OPCA is the best method: it is significantly better than word-by-word translation followed by LSI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": null
},
{
"text": "OPCA extends naturally to multiple languages. However, it requires memory and computation time that scales quadratically with the size of the vocabulary. As the number of languages goes up, it may become impractical to perform OPCA directly on a large vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Extensions",
"sec_num": "4"
},
{
"text": "Researchers have solved the problem of scaling OPCA by using Distortion Discriminant Analysis (DDA) (Burges et al., 2003) . DDA performs OPCA in two stages which avoids the need for solving a very large generalized eigensystem. As future work, DDA could be applied to mapping documents in many languages simultaneously.",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "(Burges et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Extensions",
"sec_num": "4"
},
{
"text": "Spherical Admixture Models (Reisinger et al., 2010) have recently been proposed that combine an LDA-like hierarchical generative model with the use of tf-idf representations. A similar model could be used for CPLSA: future work will show whether such a model can outperform OPCA.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "(Reisinger et al., 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Extensions",
"sec_num": "4"
},
{
"text": "This paper presents two different methods for creating discriminative projections: OPCA and CPLSA. Both of these methods avoid the use of artificial concatenated documents. Instead, they model documents in multiple languages, with the constraint that comparable documents should map to similar locations in the projected space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "When compared to other techniques, OPCA had the highest accuracy while still having a run-time that allowed scaling to large data sets. We therefore recommend the use of OPCA as a pre-processing step for large-scale comparable document retrieval or cross-language text categorization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Note that the eigenvectors have length equal to the sum of the length of the vocabularies of each language. The projections for each language are created by splitting the eigenvectors into sections, each with length equal to the vocabulary size for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We initialized the models deterministically by assigning each word to exactly one topic to begin with, such that all topics have roughly the same number of words. Words were sorted by frequency and thus words of similar frequency are more likely to be assigned to the same topic.This initialization method outperformed random initialization and we use it for all models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
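{
"text": "A minimal sketch of this deterministic initialization; the smoothing constant and function name are ours, and the paper does not specify these details.",
"code": "
import numpy as np

def init_topic_word(word_counts, num_topics, smoothing=1e-2):
    # Deterministic initialization as described above: sort the vocabulary by
    # corpus frequency and cut it into contiguous, roughly equal-sized chunks,
    # one chunk per topic, so words of similar frequency start in the same topic.
    order = np.argsort(-np.asarray(word_counts, dtype=float))   # most frequent first
    chunks = np.array_split(order, num_topics)                  # ~equal numbers of words
    phi = np.full((num_topics, len(word_counts)), smoothing)
    for t, words in enumerate(chunks):
        phi[t, words] += 1.0                                    # word assigned to topic t
    return phi / phi.sum(axis=1, keepdims=True)                 # rows are P(word | topic)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},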
{
"text": "For JPLSA and CPLSA this is the case only when performing a single EM iteration at test time, which we found to perform best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
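{
"text": "A hedged sketch of what a single test-time EM iteration looks like for a PLSA-style model with the topic-word distributions held fixed; this is the standard fold-in recipe, not the paper's exact procedure, and all names are ours.",
"code": "
import numpy as np

def fold_in_single_iteration(doc_word_counts, phi, eps=1e-12):
    # One EM iteration of folding in a test document: with the topic-word
    # matrix phi (K x V, rows P(word | topic)) held fixed, a single E-step /
    # M-step pair re-estimates only the document's topic proportions theta.
    K = phi.shape[0]
    theta = np.full(K, 1.0 / K)                               # uniform starting point
    joint = theta[:, None] * phi                              # (K, V) unnormalized posteriors
    resp = joint / (joint.sum(axis=0, keepdims=True) + eps)   # E-step: P(topic | word)
    theta = resp @ np.asarray(doc_word_counts, dtype=float)   # M-step: expected counts
    return theta / theta.sum()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},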
{
"text": "The training section contains documents from the years 96 through 99 and the year 02; the dev section contains documents from 01, and the test section contains documents from 00 plus the first 9 months of 03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning from multiple partially observed views -an application to multilingual text categorization",
"authors": [
{
"first": "Massih-Reza",
"middle": [],
"last": "Amini",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 22 (NIPS 2009)",
"volume": "",
"issue": "",
"pages": "28--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massih-Reza Amini, Nicolas Usunier, and Cyril Goutte. 2009. Learning from multiple partially observed views -an application to multilingual text categoriza- tion. In Advances in Neural Information Processing Systems 22 (NIPS 2009), pages 28-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On smoothing and inference for topic models",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Asuncion",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "27--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In Proceedings of Uncertainty in Ar- tificial Intelligence, pages 27-34.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dictionary methods for cross-lingual information retrieval",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 7th International DEXA Conference on Database and Expert Systems Applications",
"volume": "",
"issue": "",
"pages": "791--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Ballesteros and Bruce Croft. 1996. Dictionary methods for cross-lingual information retrieval. In Proceedings of the 7th International DEXA Confer- ence on Database and Expert Systems Applications, pages 791-801.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distortion discriminant analysis for audio fingerprinting",
"authors": [
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Jana",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "11",
"issue": "3",
"pages": "165--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher J.C. Burges, John C. Platt, and Soumya Jana. 2003. Distortion discriminant analysis for audio fin- gerprinting. IEEE Transactions on Speech and Audio Processing, 11(3):165-174.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391- 407.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Principal Component Neural Networks: Theory and Applications",
"authors": [
{
"first": "Konstantinos",
"middle": [
"I"
],
"last": "Diamantaras",
"suffix": ""
},
{
"first": "S",
"middle": [
"Y"
],
"last": "Kung",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantinos I. Diamantaras and S.Y. Kung. 1996. Prin- cipal Component Neural Networks: Theory and Appli- cations. Wiley-Interscience.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic crosslanguage retrieval using latent semantic indexing",
"authors": [
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Todd",
"middle": [
"A"
],
"last": "Letsche",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
}
],
"year": 1997,
"venue": "AAAI-97 Spring Symposium Series: Cross-Language Text and Speech Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan T. Dumais, Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. 1997. Automatic cross- language retrieval using latent semantic indexing. In AAAI-97 Spring Symposium Series: Cross-Language Text and Speech Retrieval.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enhancing performance in latent semantic indexing (LSI) retrieval",
"authors": [
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan T. Dumais. 1990. Enhancing performance in la- tent semantic indexing (LSI) retrieval. Technical Re- port TM-ARH-017527, Bellcore.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An IR approach for translating new words from nonparallel, comparable texts",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Yee",
"middle": [],
"last": "Lo Yuen",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "414--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, compa- rable texts. In Proceedings of COLING-ACL, pages 414-420.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Posterior regularization for structured latent variable models",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, and Ben Taskar. 2009. Posterior regularization for struc- tured latent variable models. Technical Report MS- CIS-09-16, University of Pennsylvania.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Expectation maximization and posterior constraints",
"authors": [
{
"first": "Joao",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems 20",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joao Graca, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, edi- tors, Advances in Neural Information Processing Sys- tems 20, pages 569-576. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning bilingual lexicons from monolingual corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proc. ACL, pages 771- 779.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using word-dependent transition models in HMM based word alignment for statistical machine translation",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL 2nd Statistical MT workshop",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong He. 2007. Using word-dependent transition models in HMM based word alignment for statistical machine translation. In ACL 2nd Statistical MT work- shop, pages 80-87.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Probabilistic latent semantic analysis",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "289--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of Uncertainty in Artificial Intelligence, pages 289-296.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extracting multilingual topics from unaligned comparable corpora",
"authors": [
{
"first": "Jagadeesh",
"middle": [],
"last": "Jagarlamudi",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
}
],
"year": 2010,
"venue": "ECIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jagadeesh Jagarlamudi and Hal Daum\u00e9, III. 2010. Ex- tracting multilingual topics from unaligned compara- ble corpora. In ECIR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Polylingual topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"W"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "880--889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Hanna W. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. In Proceedings of Empir- ical Methods in Natural Language Processing, pages 880-889.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving machine translation performance by exploiting non-parallel corpora",
"authors": [
{
"first": "Dragos",
"middle": [
"Stefan"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "",
"pages": "477--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. Im- proving machine translation performance by exploit- ing non-parallel corpora. Computational Linguistics, 31:477-504.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Crosslanguage information retrieval",
"authors": [
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"R"
],
"last": "Diekema",
"suffix": ""
}
],
"year": 1998,
"venue": "Annual Review of Information Science",
"volume": "33",
"issue": "",
"pages": "223--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas W. Oard and Anne R. Diekema. 1998. Cross- language information retrieval. In Martha Williams, editor, Annual Review of Information Science (ARIST), volume 33, pages 223-256.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using ma- chine learning techniques. In Proc. EMNLP, pages 79-86.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic identification of word translations from unrelated English and German corpora",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "519--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German cor- pora. In Proceedings of the ACL, pages 519-526.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Spherical topic models",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Waters",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Silverthorn",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger, Austin Waters, Bryan Silverthorn, and Raymond J. Mooney. 2010. Spherical topic models. In Proc. ICML.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "NRC's PORTAGE system for WMT",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
},
{
"first": "J. Howard",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL-2007 2nd Workshop on SMT",
"volume": "",
"issue": "",
"pages": "185--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Ueffing, Michel Simard, Samuel Larkin, and J. Howard Johnson. 2007. NRC's PORTAGE system for WMT 2007. In ACL-2007 2nd Workshop on SMT, pages 185-188.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Inferring a semantic representation of text via cross-language correlation analysis",
"authors": [
{
"first": "Alexei",
"middle": [],
"last": "Vinokourov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems 15",
"volume": "",
"issue": "",
"pages": "1473--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexei Vinokourov, John Shawe-Taylor, and Nello Cris- tianini. 2003. Inferring a semantic representation of text via cross-language correlation analysis. In S. Thrun S. Becker and K. Obermayer, editors, Ad- vances in Neural Information Processing Systems 15, pages 1473-1480, Cambridge, MA. MIT Press.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Parameters driving effectiveness of automated essay scoring with LSA",
"authors": [
{
"first": "Fridolin",
"middle": [],
"last": "Wild",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Stahl",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Stermsek",
"suffix": ""
},
{
"first": "Gustaf",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings 9th Internaional Computer-Assisted Assessment Conference",
"volume": "",
"issue": "",
"pages": "485--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fridolin Wild, Christina Stahl, Gerald Stermsek, and Gustaf Neumann. 2005. Parameters driving effective- ness of automated essay scoring with LSA. In Pro- ceedings 9th Internaional Computer-Assisted Assess- ment Conference, pages 485-494.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cross-lingual latent topic extraction",
"authors": [
{
"first": "Duo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1128--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010. Cross-lingual latent topic extraction. In Proc. ACL, pages 1128-1137, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "OPCA finds a projection that maximizes the variance of all documents, while minimizing distance between comparable documents",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Graphical models for JPLSA (top) and CPLSA (bottom)",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Mean reciprocal rank versus dimension for EuroparlFigure 4: Mean reciprocal rank versus dimension for Wikipedia",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Test results for comparable document retrieval in Wikipedia. Boldface indicates statistically significant best result.",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Test results for cross-language text categorization",
"num": null
}
}
}
}