|
{ |
|
"paper_id": "S14-2016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:32:17.001943Z" |
|
}, |
|
"title": "Bielefeld SC: Orthonormal Topic Modelling for Grammar Induction", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "M C Crae", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we consider the application of topic modelling to the task of inducting grammar rules. In particular, we look at the use of a recently developed method called orthonormal explicit topic analysis, which combines explicit and latent models of semantics. Although, it remains unclear how topic model may be applied to the case of grammar induction, we show that it is not impossible and that this may allow the capture of subtle semantic distinctions that are not captured by other methods. This work is licensed under a Creative Commons Attribution 4.0 International Licence.", |
|
"pdf_parse": { |
|
"paper_id": "S14-2016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we consider the application of topic modelling to the task of inducting grammar rules. In particular, we look at the use of a recently developed method called orthonormal explicit topic analysis, which combines explicit and latent models of semantics. Although, it remains unclear how topic model may be applied to the case of grammar induction, we show that it is not impossible and that this may allow the capture of subtle semantic distinctions that are not captured by other methods. This work is licensed under a Creative Commons Attribution 4.0 International Licence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Grammar induction is the task of inducing highlevel rules for application of grammars in spoken dialogue systems. In practice, we can extract relevant rules and the task of grammar induction reduces to finding similar rules between two strings. As these strings are not necessarily similar in surface form, what we really wish to calculate is the semantic similarity between these strings. As such, we could think of applying a semantic analysis method. As such we attempt to apply topic modelling, that is methods such as Latent Dirichlet Allocation (Blei et al., 2003) , Latent Semantic Analysis (Deerwester et al., 1990) or Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007) . In particular we build on the recent work to unify latent and explicit methods by means of orthonormal explicit topics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 551, |
|
"end": 570, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 623, |
|
"text": "(Deerwester et al., 1990)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 688, |
|
"text": "(Gabrilovich and Markovitch, 2007)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In topic modelling the key choice is the document space that will act as the corpus and hence topic space. The standard choice is to regard all articles from a background document collection -Wikipedia articles are a typical choice -as the topic space. However, it is crucial to ensure that these topics cover the semantic space evenly and completely. Following McCrae et al. (McCrae et al., 2013) we remap the semantic space defined by the topics in such a manner that it is orthonormal. In this way, each document is mapped to a topic that is distinct from all other topics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 397, |
|
"text": "McCrae et al. (McCrae et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The structure of the paper is as follows: we describe our method in three parts, first the method in section 2, followed by approximation method in section 3, the normalization methods in section 4 and finally the application to grammar induction in section 5, we finish with some conclusions in section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Orthonormal explicit topic analysis ONETA (McCrae et al., 2013 , Orthonormal explicit topic analysis) follows Explicit Semantic Analysis in the sense that it assumes the availability of a background document collection B = {b 1 , b 2 , ..., b N } consisting of textual representations. The mapping into the explicit topic space is defined by a language-specific function \u03a6 that maps documents into R N such that the j th value in the vector is given by some association measure \u03c6 j (d) for each background document b j . Typical choices for this association measure \u03c6 are the sum of the TF-IDF scores or an information retrieval relevance scoring function such as BM-25 (Sorg and Cimiano, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 64, |
|
"text": "ONETA (McCrae et al., 2013", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For the case of TF-IDF, the value of the j-th element of the topic vector is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03c6 j (d) = \u2212 \u2212\u2212 \u2192 tf-idf(b j ) T \u2212 \u2212\u2212 \u2192 tf-idf(d)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Thus, the mapping function can be represented as the product of a TF-IDF vector of document d multiplied by a word-by-document (W \u00d7 N ) TF-IDF matrix, which we denote as a X: 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1T denotes the matrix transpose as usual", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03a6(d) = \uf8eb \uf8ec \uf8ed \u2212 \u2212\u2212 \u2192 tf-idf(b1) T . . . \u2212 \u2212\u2212 \u2192 tf-idf(bN ) T \uf8f6 \uf8f7 \uf8f8 \u2212 \u2212\u2212 \u2192 tf-idf(d) = X T \u2022 \u2212 \u2212\u2212 \u2192 tf-idf(d)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
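The mapping above can be sketched numerically as follows (a minimal illustration; the matrix values and function names are invented for this example, not taken from the paper):

```python
import numpy as np

# Toy word-by-document TF-IDF matrix X (W x N): rows are terms,
# columns are background documents b_1, ..., b_N.
X = np.array([[0.9, 0.0, 0.1],
              [0.1, 0.8, 0.0],
              [0.0, 0.2, 0.7]])

def esa_map(X, d):
    """Map a TF-IDF document vector d into the explicit topic space:
    Phi(d) = X^T d, one association score per background document."""
    return X.T @ d

d = np.array([1.0, 0.0, 0.0])  # a document containing only the first term
print(esa_map(X, d))           # scores against b_1, b_2, b_3
```

Each entry of the result is the TF-IDF dot product of the input document with one background document, matching the definition of \phi_j(d) above.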
|
{ |
|
"text": "For simplicity, we shall assume from this point on that all vectors are already converted to a TF-IDF or similar numeric vector form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to compute the similarity between two documents d i and d j , typically the cosine-function (or the normalized dot product) between the vectors \u03a6(d i ) and \u03a6(d j ) is computed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "sim(di, dj) = cos(\u03a6(di), \u03a6(dj)) = \u03a6(di) T \u03a6(dj) ||\u03a6(di)||||\u03a6(dj)|| sim(di, dj) = cos(X T di, X T dj) = d T i XX T dj ||X T di||||X T dj||", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
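This similarity computation can be sketched as below (the vectors are illustrative; `cos_sim` is a hypothetical helper, not a name from the paper):

```python
import numpy as np

def cos_sim(u, v):
    """Normalized dot product (cosine) between two topic vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word-by-document TF-IDF matrix and two document vectors.
X = np.array([[0.9, 0.0, 0.1],
              [0.1, 0.8, 0.0],
              [0.0, 0.2, 0.7]])
di = np.array([1.0, 0.0, 0.0])
dj = np.array([0.8, 0.2, 0.0])

# sim(d_i, d_j) = cos(X^T d_i, X^T d_j)
print(cos_sim(X.T @ di, X.T @ dj))
```

Since TF-IDF values are non-negative, the similarity lies in [0, 1], and a document compared with itself scores exactly 1.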
|
{ |
|
"text": "The key challenge with topic modelling is choosing a good background document collection B = {b 1 , ..., b N }. A simple minimal criterion for a good background document collection is that each document in this collection should be maximally similar to itself and less similar to any other document:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2200i = j 1 = sim(b j , b j ) > sim(b i , b j ) \u2265 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As shown in McCrae et al. (2013) , this property is satisfied by the following projection:", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 32, |
|
"text": "McCrae et al. (2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03a6 ONETA (d) = (X T X) \u22121 X T d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
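A minimal sketch of this projection (toy data; in practice one would avoid forming the explicit inverse and use a linear solve, as done here):

```python
import numpy as np

def oneta_map(X, d):
    """Phi_ONETA(d) = (X^T X)^{-1} X^T d, computed via a linear solve
    rather than an explicit matrix inverse for numerical stability."""
    return np.linalg.solve(X.T @ X, X.T @ d)

X = np.array([[0.9, 0.0, 0.1],
              [0.1, 0.8, 0.0],
              [0.0, 0.2, 0.7]])

# Projecting a background document onto the topic space yields a unit
# basis vector: b_1 maps to e_1, so sim(b_1, b_1) = 1 and sim(b_i, b_1) = 0.
b1 = X[:, 0]
print(oneta_map(X, b1))   # approximately (1, 0, 0)
```

This illustrates the orthonormality criterion above: each background document is maximally similar to itself and orthogonal to every other background document.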
|
{ |
|
"text": "And hence the similarity between two documents can be calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "sim(d i , d j ) = cos(\u03a6 ONETA (d i ), \u03a6 ONETA (d j ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "ONETA relies on the computation of a matrix inverse, which has a complexity that, using current practical algorithms, is approximately cubic and as such the time spent calculating the inverse can grow very quickly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We notice that X is typically very sparse and moreover some rows of X have significantly fewer non-zeroes than others (these rows are for terms with low frequency). Thus, if we take the first N 1 columns (documents) in X, it is possible to rearrange the rows of X with the result that there is some W 1 such that rows with index greater than W 1 have only zeroes in the columns up to N 1 . In other words, we take a subset of N 1 documents and enumerate the words in such a way that the terms occurring in the first N 1 documents are enumerated 1, . . . , W 1 . Let N 2 = N \u2212 N 1 , W 2 = W \u2212 W 1 . The result of this row permutation does not affect the value of X T X and we can write the matrix X as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "X = A B 0 C", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where A is a W 1 \u00d7 N 1 matrix representing term frequencies in the first N 1 documents, B is a W 1 \u00d7N 2 matrix containing term frequencies in the remaining documents for terms that are also found in the first N 1 documents, and C is a W 2 \u00d7 N 2 containing the frequency of all terms not found in the first N 1 documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Application of the well-known divide-andconquer formula (Bernstein, 2005, p. 159) for matrix inversion yields the following easily verifiable matrix identity, given that we can find C such that C C = I.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 81, |
|
"text": "(Bernstein, 2005, p. 159)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(A T A) \u22121 A T \u2212(A T A) \u22121 A T BC 0 C A B 0 C = I", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
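The block identity in equation (1) can be checked numerically, as in the sketch below (random toy dimensions; here C' is taken as an exact left inverse via the pseudoinverse, whereas the paper approximates it):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2, N1, N2 = 4, 5, 3, 2
A = rng.random((W1, N1))
B = rng.random((W1, N2))
C = rng.random((W2, N2))          # W2 >= N2, so a left inverse exists
X = np.block([[A, B], [np.zeros((W2, N1)), C]])

Cp = np.linalg.pinv(C)            # C' with C' C = I
AtAinvAt = np.linalg.solve(A.T @ A, A.T)   # (A^T A)^{-1} A^T

# Left factor of equation (1), assembled blockwise.
top = np.hstack([AtAinvAt, -AtAinvAt @ B @ Cp])
bottom = np.hstack([np.zeros((N2, W1)), Cp])
left = np.vstack([top, bottom])

print(np.allclose(left @ X, np.eye(N1 + N2)))   # True
```

The off-diagonal block cancels precisely because C'C = I, which is why any approximation of C' translates directly into approximation error in the identity.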
|
{ |
|
"text": "The inverse C is approximated by the Jacobi Preconditioner, J, of C T C:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "C JC T (2) = \uf8eb \uf8ec \uf8ed ||c 1 || \u22122 0 . . . 0 ||c N 2 || \u22122 \uf8f6 \uf8f7 \uf8f8 C T", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approximations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A key factor in the effectiveness of topic-based methods is the appropriate normalization of the elements of the document matrix X. This is even more relevant for orthonormal topics as the matrix inversion procedure can be very sensitive to small changes in the matrix. In this context, we consider two forms of normalization, term and document normalization, which can also be considered as row/column normalizations of X.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A straightforward approach to normalization is to normalize each column of X to obtain a matrix as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "X = x 1 ||x 1 || . . . x N ||x N ||", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
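Column (document) normalization can be sketched as follows (toy values; the helper name is illustrative):

```python
import numpy as np

def normalize_columns(X):
    """Divide each column of X by its Euclidean norm."""
    return X / np.linalg.norm(X, axis=0, keepdims=True)

X = np.array([[3.0, 0.0],
              [4.0, 2.0]])
Xn = normalize_columns(X)
Y = Xn.T @ Xn
print(np.diag(Y))    # all ones: each document is unit-normalized
```

The resulting Gram matrix Y has a unit diagonal and entries bounded by 1 in absolute value, which is exactly the "already close to I" property discussed below.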
|
{ |
|
"text": "If we calculate X T X = Y then we get that the (i, j)-th element of Y is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "y ij = x T i x j ||x i ||||x j ||", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Thus, the diagonal of Y consists of ones only and due to the Cauchy-Schwarz inequality we have that |y ij | \u2264 1, with the result that the matrix Y is already close to I. Formally, we can use this to state a bound on ||X T X \u2212 I|| F , but in practice it means that the orthonormalizing matrix has more small or zero values. Previous experiments have indicated that in general term normalization such as TF-IDF is not as effective as using the direct term frequency in ONETA, so we do not apply term normalization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The application to grammar induction is simply carried out by taking the rules and creating a single ground instance. That is if we have a rule of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application to grammar induction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We would replace the instance of <CITY> with a known terminal for this rule, e.g., leaving from Berlin This reduces the task to that of string similarity which can be processed by means of any string similarity function, for example such as the ONETA function described above. As such the procedure is as follows: This approach has the obvious drawback that it removes all information about the valence of the rule, however the effect of this loss of information remains unclear.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEAVING FROM <CITY>", |
|
"sec_num": null |
|
}, |
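The grounding step can be sketched as below (the helper and filler mapping are hypothetical; the paper does not specify how terminals are chosen):

```python
import re

def ground_rule(rule, fillers):
    """Replace each non-terminal of the form <NAME> with a known
    terminal, producing a single ground instance of the rule.
    `fillers` maps non-terminal names to example terminals."""
    return re.sub(r"<([A-Z]+)>", lambda m: fillers[m.group(1)], rule)

print(ground_rule("LEAVING FROM <CITY>", {"CITY": "Berlin"}))
```

The ground instances can then be compared pairwise with any string similarity function, such as the ONETA similarity described above.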
|
{ |
|
"text": "For application, we used 20,000 Wikipedia articles, filtered to contain only those of over 100 words, giving us a corpus of 15.6 million tokens. We applied ONETA using document normalization but no term normalization and the value N 1 = 5000. These parameters were chosen based on the best results in previous experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEAVING FROM <CITY>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results show that such a naive approach is not directly applicable to the case of grammar induction, however we believe that it is possible that the subtle semantic similarities captured by topic modelling may yet prove useful for grammar induction. However it is clear from the presented results that the use of a topic model alone does not suffice to solve this task. We notice that from the data many of the distinctions rely on antonyms and stop words, especially distinctions such as 'to'/'from', which are not captured by a topic model as topic models generally ignore stop words, and generally consider antonyms to be in the same topic, as they frequently occur together in text. The question of when semantic similarity such as provided by topic modelling is applicable remains an open question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Philipp Sorg and Philipp Cimiano. 2010. An experimental comparison of explicit semantic analysis implementations for cross-language retrieval. In Natural Language Processing and Information Systems, pages 36-48. Springer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Matrix mathematics", |
|
"authors": [ |
|
{

"first": "Dennis",

"middle": [

"S"

],

"last": "Bernstein",

"suffix": ""

}
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dennis S Bernstein. 2005. Matrix mathematics, 2nd Edition. Princeton University Press Princeton.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Latent Dirichlet Allocation", |
|
"authors": [ |
|
{

"first": "David",

"middle": [

"M"

],

"last": "Blei",

"suffix": ""

},

{

"first": "Andrew",

"middle": [

"Y"

],

"last": "Ng",

"suffix": ""

},

{

"first": "Michael",

"middle": [

"I"

],

"last": "Jordan",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Indexing by latent semantic analysis", |
|
"authors": [ |
|
{

"first": "Scott",

"middle": [

"C"

],

"last": "Deerwester",

"suffix": ""

},

{

"first": "Susan",

"middle": [

"T"

],

"last": "Dumais",

"suffix": ""

},

{

"first": "Thomas",

"middle": [

"K"

],

"last": "Landauer",

"suffix": ""

},

{

"first": "George",

"middle": [

"W"

],

"last": "Furnas",

"suffix": ""

},

{

"first": "Richard",

"middle": [

"A"

],

"last": "Harshman",

"suffix": ""

}
|
], |
|
"year": 1990, |
|
"venue": "JASIS", |
|
"volume": "41", |
|
"issue": "6", |
|
"pages": "391--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott C. Deerwester, Susan T Dumais, Thomas K. Lan- dauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. JASIS, 41(6):391-407.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Computing semantic relatedness using Wikipediabased explicit semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipedia- based explicit semantic analysis. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, volume 6, page 12.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Orthonormal explicit topic analysis for crosslingual document matching", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Mccrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1732--1740", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John P. McCrae, Philipp Cimiano, and Roman Klinger. 2013. Orthonormal explicit topic analysis for cross- lingual document matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1732-1740.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |