|
{ |
|
"paper_id": "R19-1031", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:02:40.026081Z" |
|
}, |
|
"title": "Lexical Quantile-Based Text Complexity Measure", |
|
"authors": [ |
|
{ |
|
"first": "Maksim", |
|
"middle": [], |
|
"last": "Eremeev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Science and Technology MISIS", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Vorontsov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Science and Technology MISIS", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper introduces a new approach to estimating text document complexity. Common readability indices are based on the average length of sentences and words. In contrast to these methods, we propose to count the number of rare words occurring abnormally often in the document. We use a reference corpus of texts and a quantile approach to determine which words are rare and which frequencies are abnormal. We construct a general text complexity model, which can be adjusted for a specific task, and introduce two special models. The experimental design is based on a set of thematically similar pairs of Wikipedia articles labeled using crowdsourcing. The experiments demonstrate the competitiveness of the proposed approach.",
|
"pdf_parse": { |
|
"paper_id": "R19-1031", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper introduces a new approach to estimating text document complexity. Common readability indices are based on the average length of sentences and words. In contrast to these methods, we propose to count the number of rare words occurring abnormally often in the document. We use a reference corpus of texts and a quantile approach to determine which words are rare and which frequencies are abnormal. We construct a general text complexity model, which can be adjusted for a specific task, and introduce two special models. The experimental design is based on a set of thematically similar pairs of Wikipedia articles labeled using crowdsourcing. The experiments demonstrate the competitiveness of the proposed approach.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automated text complexity measurement tools have been proposed to help teachers select textbooks that match students' comprehension levels and to help publishers assess whether their articles are readable. Thus, plenty of readability indices have been developed. Measures like the Automated Readability Index (Senter and Smith, 1967), the Flesch-Kincaid readability tests (Flesh, 1951), the SMOG index (McLaughlin, 1969), the Gunning fog index (Gunning, 1952), etc. use heuristics based on simple statistics, such as the total number of words, the mean number of words per sentence, the total number of sentences, or even the number of syllables, to evaluate how complex a given text is. By combining these statistics with different weighting factors, readability indices assign the given document a complexity score, which is, in most cases, an approximate representation of the US grade level needed to comprehend the text. For instance, the Automated Readability Index (ARI) has the following form for a document d:",
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 343, |
|
"text": "(Senter and Smith, 1967)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 392, |
|
"text": "(Flesh, 1951)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 424, |
|
"text": "(McLaughlin, 1969)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 454, |
|
"text": "(Gunning, 1952)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "ARI(d) = 4.71 \u00d7 c/w + 0.5 \u00d7 w/s \u2212 21.43 (1)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "where c refers to the total number of letters in the document d, w is the total number of words and s denotes the total number of sentences in d.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since readability indices rely on a few basic factors, precise assessment requires aggregating many scores. Thus, the Coh-Metrix-PORT tool (Aluisio et al., 2010) includes more than 50 different indices for the Portuguese language. The tool is based on Coh-Metrix (Graesser et al., 2004) principles and estimates complexity and cohesion not only for the explicit text but also for the mental representation of the document.",
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 160, |
|
"text": "(Aluisio et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 281, |
|
"text": "(Graesser et al., 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Readability indices are interpretable and easy to implement. However, the large number of constants tuned specifically for English-language texts, the lack of semantic considerations, and the tailoring to the US grade-level system restrict the range of possible applications.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As for non-English languages, lexical and morphological features for Italian text simplification were presented (Brunato et al., 2015), a supervised approach to readability estimation was introduced (vor der Brck et al., 2008), and complexity estimates for legal documents in Russian were explored (Dzmitryieva, 2017).",
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 163, |
|
"text": "(Brunato et al., 2015)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 255, |
|
"text": "Brck et al., 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 351, |
|
"text": "(Dzmitryieva, 2017)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we introduce a new approach to gauging the complexity of documents based on their lexical features. Our research is motivated by information retrieval applications such as exploratory search for learning or editorial purposes (Marchionini, 2006; White and Roth, 2009; Palagi et al., 2017). In exploratory search, the user needs a hint as to which of the retrieved documents to read first, gradually moving from simple to more complex documents. Reading order optimization is an alternative approach to content consumption that departs from the typical relevance-ranked lists of documents (Koutrika et al., 2015). The more specific terms a document contains, and the rarer they are, the more complex the document is. To formalize this idea, we estimate the complexity of each term in the document and then aggregate these estimates into a complete document complexity score. We use Wikipedia as a reference collection of moderately complex texts in order to determine which term frequencies are abnormal.",
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 261, |
|
"text": "(Marchionini, 2006;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "White and Roth, 2009;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 304, |
|
"text": "Palagi et al., 2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 632, |
|
"text": "(Koutrika et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In section 2 we describe the quantile approach to estimating single-term complexity. We present a highly flexible general model in section 3 and two special models in subsections 3.1 and 3.2. The evaluation methodology for the proposed methods is introduced in section 4, and the experimental results are provided in section 5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Reference collection: Let D denote a reference collection. Let a document d \u2208 D consist of terms t_1, t_2, ..., t_{n_d}, where n_d refers to the length of document d. Each term can be either a single word or a key phrase.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Quantile approach: In the general case, each term can occur in different complexity states, which may depend on its position in the text or the context surrounding it. Each complexity state of the term t_i standing in position i is described by a term complexity score c(t_i). Consider the empirical distribution of complexity scores for each term over the reference collection. We assume that the term t_i is in a complex state if its complexity c(t_i) in the current text position i is greater than the \u03b3-quantile C_\u03b3(t_i) of the distribution of c(t_i), where \u03b3 is a hyperparameter responsible for the complexity level. Therefore, when estimating the complexity score of a document, we count c(t_i) only for terms t_i that are in the complex state defined by the \u03b3 parameter.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For instance, c(t_i) can be a constant, which means all terms have identical complexity, or it can be set to 0 if the term occurs in the reference collection and 1 otherwise. In the latter case, we count terms new to the reference collection as complex and all other terms as simple.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 General Document Complexity Model The complexity W(d) of a document d can be calculated by aggregating the complexity scores of the terms that form d. In this paper we propose a weighted sum over the complex terms as the aggregate function.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "W(d) = \u03a3_{i=1}^{n_d} w(t_i) [c(t_i) > C_\u03b3(t_i)]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2) where [ ] refers to the Iverson bracket notation (i.e.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "[true] = 1, [false] = 0). Defining the weights w(t_i) and the complexity scores c(t_i) for all terms t_i specializes the complexity model. Some examples of interpretable weights w(t_i) are presented in Table 1.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 207, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "w(t_i) and its meaning: 1: number of complex terms; (1/n_d) \u00d7 100%: percentage of complex terms; c(t_i): total complexity; c(t_i)/n_d: mean complexity; c(t_i) \u2212 C_\u03b3(t_i): excessive complexity; (c(t_i) \u2212 C_\u03b3(t_i))/n_d: mean excessive complexity.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Term Complexity Estimation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The following model relies on an assumption proposed in (Birkin, 2007). Consider an arbitrary document d, which is a sequence of terms t_1, t_2, ..., t_{n_d}. Let r(t_i) be the distance, in terms, to the previous occurrence of the same term t_i in document d. Formally,",
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 72, |
|
"text": "(Birkin, 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r(t_i) = min_{1 \u2264 j < i} {i \u2212 j | t_i = t_j}.",
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "If position i is the first occurrence of the term t_i in document d, then r(t_i) is undefined. In such cases we set r(t_i) equal to n_d. Hence, terms that occur only once in d receive the greatest complexity scores.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "If a term t does not appear in the reference collection, we set C_\u03b3 equal to \u2212\u221e, thereby treating it as a permanently complex term.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Assume that the term t in position i is more complex than the same term in position j if r(t_i) > r(t_j). Treat the reference collection as if there were no separators between documents, so that it becomes a single document d_all. It is then possible to compute the distribution of r(t) for each unique term t in d_all and the corresponding \u03b3-quantile C_\u03b3(t) of that distribution.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the document d whose complexity we want to estimate, we calculate the r_d(t_i) values for all terms t_i \u2208 d.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We define the mean distance r\u0304_{d,i}(t_i) for the term t_i in the i-th position of the document d as",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "r\u0304_{d,i}(t_i) = \u03a3_{j=1}^{i} r_d(t_j)[t_i = t_j] / \u03a3_{j=1}^{i} [t_i = t_j] (4)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "which aggregates all occurrences of the term t i from the document start.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Finally c(t i ) has the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c(t_i) = r\u0304(t_i) \u2212 r\u0304_{d,i}(t_i)",
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where r\u0304(t_i) is the mean of the reference collection distances r(t_i) for the term t_i.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Intuitively, this means that a term is more complex if it occurs rarely in the reference collection and often in the document d. Figures 1 and 2 show the distributions of distances r(t) for the simple term 'algebra' and the complex term 'nlp', calculated over a reference collection containing 1.5M documents of the Russian Wikipedia. For the term 'algebra', most occurrences are relatively close to each other, whereas 'nlp' occurrences have considerably greater distance scores. Using the formula for c(t_i) above and choosing weights w(t_i), we obtain the distance-based complexity model.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 139, |
|
"text": "Figures 1 and 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Distance-Based Complexity Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The second model presented in this paper is based on the assumption that each term has an independent, fixed complexity in the whole language. Thus, in this section we consider not the complexity distribution of a single term, but the general complexity distribution over all terms in the language. Hence, each term t is assigned a single complexity score c(t), and the \u03b3-quantile we compute is now a constant C_\u03b3. The model then has the following form:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W(d) = \u03a3_{i=1}^{n_d} w(t_i) [1/count(t_i) > C_\u03b3]",
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where w(t_i) corresponds to the term weights introduced above. Assume that a term t_1 is more complex than a term t_2 if the number of occurrences of t_1 in the reference collection is smaller than that of t_2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Let count(t) denote the number of occurrences of the term t in the reference collection. Thus, the complexity score function can be defined as",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c(t) = 1 / count(t)",
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "so the assumption above is satisfied. For each term t we calculate the counter count(t) and the complexity score c(t) over the reference collection. Having the distribution of c(t), we obtain the \u03b3-quantile C_\u03b3. The described distribution for the Russian Wikipedia reference collection is shown in Figure 3.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 298, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Thus, we have defined c(t) for all possible terms, along with the distribution necessary to compute C_\u03b3. By varying the weights w(t_i) described in section 3, we obtain the counter-based complexity model.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Counter-Based Complexity Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To measure the quality of the proposed algorithms, we asked assessors to label 10K pairs of Russian Wikipedia articles. Assessors were asked to carefully read both articles and to choose which was more difficult to comprehend. If an assessor could not determine which document was more complex, they were asked to choose the 'documents are equal' option.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metric", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "If the documents in a given pair were from different scientific domains, the assessor was asked to choose the 'invalid pair' option. Documents were chosen from the math, physics, chemistry, and programming areas. Clustering was performed using the topic modeling technique (Hofmann, 1999). The open-source BigARTM library was used to perform the clustering. Pairs were formed so that both documents belong to a single topic and their lengths are almost identical. Examples of document pairs to assess are presented in Table 2. Each pair was labeled twice in order to avoid human-factor mistakes. We consider a pair correctly labeled if the labels were not contradictory (a contradiction being, e.g., the first assessor labeling the first document as more complex while the second assessor chose the second). If one or both grades were 'documents are equal', we also consider the pair correctly labeled.",
|
"cite_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 274, |
|
"text": "(Hofmann, 1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 503, |
|
"end": 510, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality Metric", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "8K of the 10K pairs were labeled correctly and were used to compare the different versions of the algorithms. For each algorithm we calculated the accuracy score, i.e. the rate of pairs in which the more complex document was chosen correctly.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metric", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Two types of experiments were carried out. In the first, we used the full Russian Wikipedia articles dataset (1.5M documents) as the reference collection. In the second, we used only Wikipedia articles from the math domain. To do this, we built a topic model using the ARTM (Additive Regularization of Topic Models) technique (Vorontsov and Potapenko, 2015), which clusters documents into monothematic groups.",
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 340, |
|
"text": "(Vorontsov and Potapenko, 2015)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Preprocessing: All Wikipedia articles were lemmatized (i.e. reduced to normal form). In this experiment we assume a term to be either a single word or a bigram (i.e. a two-word combination). To extract them, the RAKE algorithm (Rose et al., 2010) was used. Hence, each document in the collection was turned into a sequence of such terms.",
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 239, |
|
"text": "(Rose et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Reference collection: The preprocessed Wikipedia articles were used as the reference collection. r(t) was computed for every term position and count(t) for every unique term. Documents to estimate complexity on: We used the labeled pairs described in Section 4 to evaluate the models. Accuracy was used as the quality metric.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Models to evaluate: The models introduced in sections 3.1 and 3.2 with different w(t_i) parameters were tested. We took ARI and the Flesch-Kincaid readability test as benchmarks.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The results of the experiments are presented in Table 3 . We also tested how bigram extraction affects the final quality with the fixed weight function w(t) = c(t)/n_d. The results are given in Table 4 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Results show that both the distance-based and counter-based approaches perform roughly twice as well as the readability indices. The counter-based model with w(t) = c(t)/n_d weights shows the best results.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Model, w(t), Accuracy: ARI, -, 46%; Flesch-Kincaid, -, 57%; Distance-based, c(t), 68%; Distance-based, c(t)/n_d, 71%; Counter-based, c(t), 77%; Counter-based, c(t)/n_d, 81%.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Complete Wikipedia Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Model, Terms, Accuracy: Distance-based, Words, 63%; Distance-based, Words+Bigrams, 71%; Counter-based, Words+Bigrams, 74%; Counter-based, Bigrams, 81%. Table 4 : Results of experiment 1 with terms defined differently.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 138, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In experiment 2 we shortened the reference collection to include only documents from a specific topic.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "ARTM model: To divide documents into single-topic clusters, topic modeling is used. Topic models are unsupervised machine learning models that perform soft clustering (i.e. they assign each document a distribution over topics).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The set of such vectors for all documents forms a matrix, usually denoted \u0398. The ARTM model was trained on the preprocessed Wikipedia dataset. ARTM features dozens of regularizers and allows modalities (i.e. types of terms) to be treated differently.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this specific experiment we used regularizers to sparsify the \u0398 matrix and make the topic-term distributions more distinct. Word and bigram (i.e. word pair) modalities were used with weights 1 and 5, respectively. Using this model, we detect the most likely topic for each document.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Experiment setup: In the following experiment we chose math and physics documents as the reference collections. Documents were preprocessed in the same way as in the previous experiment. We also divided the labeled pairs into the same single-topic groups, so that models configured with different reference collections could be tested on various single-topic groups of labeled pairs.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The math collection included 200K reference documents and 3.5K labeled pairs, while the physics collection included 250K reference documents and 1.5K labeled pairs. The results are shown in Table 5 and Table 6 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "Table 5 and Table 6", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As can be seen from the results, using a tailored reference collection improves the score. Indeed, it alleviates the term ambiguity problem and excludes terms unrelated to the topic from the reference collection, so such terms are treated as complex in the document being estimated, which is reasonable.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Model, w(t), Accuracy: ARI, -, 41%; Flesch-Kincaid, -, 49%; Distance-based, c(t), 55%; Distance-based, c(t)/n_d, 61%; Counter-based, c(t), 79%; Counter-based, c(t)/n_d, 84%.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Topic Wikipedia Dataset", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Model, w(t), Accuracy: ARI, -, 52%; Flesch-Kincaid, -, 58%; Distance-based, c(t), 65%; Distance-based, c(t)/n_d, 63%; Counter-based, c(t), 82%; Counter-based, c(t)/n_d, 81%.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have presented an approach to estimating text complexity based on lexical features. Document complexity is an aggregation of term complexities. The introduced general model is highly flexible: it can be adjusted by tuning the weights w(t) and choosing a proper reference collection.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The complexity score can only be computed with respect to a reference collection, which can be a large set of documents on different topics or contain only single-topic texts.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The proposed complexity measures are used in AITHEA exploratory search system (http://aithea.com/exploratory-search) for ranking search results in complexity-based reading order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Application of topic modeling in this research was supported by the Russian Research Foundation grant no. 19-11-00281. The work of K.Vorontsov was partially supported by the Government of the Russian Federation (agreement 05.Y09.21.0018) and the Russian Foundation for Basic Research (grants 17-07-01536).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Readability assessment for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Aluisio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caroline", |
|
"middle": [], |
|
"last": "Gasperin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Speech Codes. Hippocrat, Saint-Peterburg", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Birkin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A.A. Birkin. 2007. Speech Codes. Hippocrat, Saint- Peterburg.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Design and annotation of the first italian corpus for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Brunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felice", |
|
"middle": [], |
|
"last": "Dell'orletta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giulia", |
|
"middle": [], |
|
"last": "Venturi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simonetta", |
|
"middle": [], |
|
"last": "Montemagni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominique Brunato, Felice Dell'Orletta, Giulia Ven- turi, and Simonetta Montemagni. 2015. Design and annotation of the first italian corpus for text simpli- fication. pages 31-41.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A readability checker with supervised learning using deep syntactic and semantic indicators", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Vor Der Brck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Hartrumpf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Helbig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim vor der Brck, Sven Hartrumpf, and Hermann Hel- big. 2008. A readability checker with supervised learning using deep syntactic and semantic indica- tors.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The art of legal writing: A quantitative analysis of russian constitutional court rulings. Sravnitel'noe konstitucionnoe obozrenie", |
|
"authors": [ |
|
{ |
|
"first": "Aryna", |
|
"middle": [], |
|
"last": "Dzmitryieva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "125--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aryna Dzmitryieva. 2017. The art of legal writing: A quantitative analysis of russian constitutional court rulings. Sravnitel'noe konstitucionnoe obozrenie, 3:125-133.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "How to test readability", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Flesh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1951, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Flesh. 1951. How to test readability. New York, Harper and Brothers.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Coh-metrix: Analysis of text on cohesion and language. Behavior research methods, instruments, computers : a journal of the", |
|
"authors": [ |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Graesser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Louwerse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiqiang", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "193--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arthur Graesser, Danielle McNamara, Max Louwerse, and Zhiqiang Cai. 2004. Coh-metrix: Analysis of text on cohesion and language. Behavior research methods, instruments, computers : a journal of the Psychonomic Society, Inc, 36:193-202.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The technique of clear writing", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gunning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1952, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Gunning. 1952. The technique of clear writing. McGraw-Hill, New York.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Probabilistic latent semantic indexing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 22Nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22Nd Annual Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '99, pages 50-57, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Generating reading orders over document collections", |
|
"authors": [ |
|
{ |
|
"first": "Georgia", |
|
"middle": [], |
|
"last": "Koutrika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Simske", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE 31st International Conference on Data Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "507--518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Georgia Koutrika, Lei Liu, and Steven Simske. 2015. Generating reading orders over document collec- tions. In 2015 IEEE 31st International Conference on Data Engineering, pages 507-518.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Exploratory search: From finding to understanding", |
|
"authors": [ |
|
{ |
|
"first": "Gary", |
|
"middle": [], |
|
"last": "Marchionini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Commun. ACM", |
|
"volume": "49", |
|
"issue": "4", |
|
"pages": "41--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gary Marchionini. 2006. Exploratory search: From finding to understanding. Commun. ACM, 49(4):41- 46.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Smog grading: A new readability formula", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Mclaughlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Journal of Reading", |
|
"volume": "12", |
|
"issue": "8", |
|
"pages": "639--646", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. H. McLaughlin. 1969. Smog grading: A new read- ability formula. Journal of Reading, 12(8):639-646.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A survey of definitions and models of exploratory search", |
|
"authors": [ |
|
{ |
|
"first": "Emilie", |
|
"middle": [], |
|
"last": "Palagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabien", |
|
"middle": [], |
|
"last": "Gandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alain", |
|
"middle": [], |
|
"last": "Giboin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rapha\u00ebl", |
|
"middle": [], |
|
"last": "Troncy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ESIDA17 -ACM Workshop on Exploratory Search and Interactive Data Analytics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emilie Palagi, Fabien Gandon, Alain Giboin, and Rapha\u00ebl Troncy. 2017. A survey of definitions and models of exploratory search. In ESIDA17 -ACM Workshop on Exploratory Search and Interactive Data Analytics, Mar 2017, Limassol, Cyprus, pages 3-8.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic Keyword Extraction from Individual Documents", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dave", |
|
"middle": [], |
|
"last": "Engel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Cramer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wendy", |
|
"middle": [], |
|
"last": "Cowley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic Keyword Extraction from Individual Documents.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Automated readability index", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Senter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "AMRL-TR", |
|
"volume": "66", |
|
"issue": "22", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.J. Senter and E.A. Smith. 1967. Automated readabil- ity index. AMRL-TR, 66(22).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Additive regularization of topic models. Machine Learning, Special Issue on Data Analysis and Intelligent Optimization with Applications", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Vorontsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Potapenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "101", |
|
"issue": "", |
|
"pages": "303--323", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. V. Vorontsov and A. A. Potapenko. 2015. Additive regularization of topic models. Machine Learning, Special Issue on Data Analysis and Intelligent Opti- mization with Applications, 101(1):303-323.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Bigartm: Open source library for regularized multimodal topic modeling of large collections", |
|
"authors": [ |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Vorontsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksandr", |
|
"middle": [], |
|
"last": "Frei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Murat", |
|
"middle": [], |
|
"last": "Apishev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Romov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Suvorova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "AIST'2015, Analysis of Images, Social networks and Texts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "370--384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstantin Vorontsov, Oleksandr Frei, Murat Apishev, Petr Romov, and Marina Suvorova. 2015. Bi- gartm: Open source library for regularized mul- timodal topic modeling of large collections. In AIST'2015, Analysis of Images, Social networks and Texts, pages 370-384. Springer International Pub- lishing Switzerland, Communications in Computer and Information Science (CCIS).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Exploratory Search: Beyond the Query-Response Paradigm. Synthesis Lectures on Information Concepts, Retrieval, and Services", |
|
"authors": [ |
|
{ |
|
"first": "Ryen", |
|
"middle": [ |
|
"W." |
|
], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Resa", |
|
"middle": [ |
|
"A." |
|
], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryen W. White and Resa A. Roth. 2009. Exploratory Search: Beyond the Query-Response Paradigm. Synthesis Lectures on Information Concepts, Re- trieval, and Services. Morgan and Claypool Publish- ers.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Distribution of distances r(t), calculated over the complete Wikipedia dataset for the word 'algebra'.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Distribution of distances r(t), calculated over the complete Wikipedia dataset for the word 'nlp'.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Distribution of count(t), calculated over complete Wikipedia articles dataset.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Weights w(t i ) examples.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Examples of labeled document pairs.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "Results of experiment 1 with different weight function.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Results of experiment 2 on math collection of Wikipedia articles with different weights.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Results of experiment 2 on physics collection of Wikipedia articles with different weights.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |