{
"paper_id": "C18-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:11:08.102413Z"
},
"title": "Enriching Word Embeddings with Domain Knowledge for Readability Assessment",
"authors": [
{
"first": "Zhiwei",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Qing",
"middle": [],
"last": "Gu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yafeng",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Daoxu",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a method which learns the word embedding for readability assessment. For the existing word embedding models, they typically focus on the syntactic or semantic relations of words, while ignoring the reading difficulty, thus they may not be suitable for readability assessment. Hence, we provide the knowledge-enriched word embedding (KEWE), which encodes the knowledge on reading difficulty into the representation of words. Specifically, we extract the knowledge on word-level difficulty from three perspectives to construct a knowledge graph, and develop two word embedding models to incorporate the difficulty context derived from the knowledge graph to define the loss functions. Experiments are designed to apply KEWE for readability assessment on both English and Chinese datasets, and the results demonstrate both effectiveness and potential of KEWE.",
"pdf_parse": {
"paper_id": "C18-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a method which learns the word embedding for readability assessment. For the existing word embedding models, they typically focus on the syntactic or semantic relations of words, while ignoring the reading difficulty, thus they may not be suitable for readability assessment. Hence, we provide the knowledge-enriched word embedding (KEWE), which encodes the knowledge on reading difficulty into the representation of words. Specifically, we extract the knowledge on word-level difficulty from three perspectives to construct a knowledge graph, and develop two word embedding models to incorporate the difficulty context derived from the knowledge graph to define the loss functions. Experiments are designed to apply KEWE for readability assessment on both English and Chinese datasets, and the results demonstrate both effectiveness and potential of KEWE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Readability assessment is a classic problem in natural language processing, which attracts many researchers' attention in recent years (Todirascu et al., 2016; Schumacher et al., 2016; Cha et al., 2017) . The objective is to evaluate the readability of texts by levels or scores. The majority of recent readability assessment methods are based on the framework of supervised learning (Schwarm and Ostendorf, 2005) and build classifiers from hand-crafted features extracted from the texts. The performance of these methods depends on designing effective features to build high-quality classifiers.",
"cite_spans": [
{
"start": 135,
"end": 159,
"text": "(Todirascu et al., 2016;",
"ref_id": "BIBREF45"
},
{
"start": 160,
"end": 184,
"text": "Schumacher et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 185,
"end": 202,
"text": "Cha et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 384,
"end": 413,
"text": "(Schwarm and Ostendorf, 2005)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Designing hand-crafted features are essential but labor-intensive. It is desirable to learn representative features from the texts automatically. For document-level readability assessment, an effective feature learning method is to construct the representation of documents by combining the representation of the words contained (Kim, 2014) . For the representation of word, a useful technique is to learn the word representation as a dense and low-dimensional vector, which is called word embedding. Existing word embedding models (Collobert et al., 2011; Mikolov et al., 2013; Pennington et al., 2014) can be used for readability assessment, but the effectiveness is compromised by the fact that these models typically focus on the syntactic or semantic relations of words, while ignoring the reading difficulty. As a result, words with similar functions or topics, such as \"man\" and \"gentleman\", are mapped into close vectors although their reading difficulties are different. It calls for incorporating the knowledge on reading difficulty when training the word embedding.",
"cite_spans": [
{
"start": 329,
"end": 340,
"text": "(Kim, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 532,
"end": 556,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 557,
"end": 578,
"text": "Mikolov et al., 2013;",
"ref_id": "BIBREF33"
},
{
"start": 579,
"end": 603,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we provide the knowledge-enriched word embedding (KEWE) for readability assessment, which encodes the knowledge on reading difficulty into the representation of words. Specifically, we define the word-level difficulty from three perspectives, and use the extracted knowledge to construct a knowledge graph. After that, we derive the difficulty context of words from the knowledge graph, and develop two word embedding models to incorporate the difficulty context to define the loss functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We apply KEWE for document-level readability assessment under the supervised framework. The experiments are conducted on four datasets of either English or Chinese. The results demonstrate that our method can outperform other well-known readability assessment methods, and the classic text-based word embedding models on all the datasets. By concatenating our knowledge-enriched word embedding with the hand-crafted features, the performance can be further improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 provides the related work for readability assessment. Section 3 describes the details of KEWE. Section 4 presents the experiments and results. Finally Section 5 concludes the paper with future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we briefly introduce three research topics relevant to our work: readability assessment, word embedding, and graph embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Readability Assessment. The researches on readability assessment have a relatively long history from the beginning of last century (Collinsthompson, 2014) . Early studies mainly focused on designing readability formulas to evaluate the reading scores of texts. Some of the well-known readability formulas include the SMOG formula (McLaughlin, 1969) , the FK formula (Kincaid et al., 1975) , and the Dale-Chall formula (Chall, 1995) . At the beginning of the 21th century, supervised approaches have been introduced and then explored for readability assessment (Si and Callan, 2001; Collins-Thompson and Callan, 2004; Schwarm and Ostendorf, 2005) . Researchers have focused on improving the performance by designing highly effective features (Pitler and Nenkova, 2008; Heilman et al., 2008; Feng et al., 2010; and employing effective classification models (Heilman et al., 2007; Kate et al., 2010; Ma et al., 2012; Cha et al., 2017) . While most studies are conducted for English, there are studies for other languages, such as French (Fran\u00e7ois and Fairon, 2012) , German (Hancke et al., 2012) , Bangla (Sinha et al., 2014) , Basque (Gonzalez-Dios et al., 2014) , Chinese (Jiang et al., 2014) , and Japanese (Wang and Andersen, 2016) .",
"cite_spans": [
{
"start": 131,
"end": 154,
"text": "(Collinsthompson, 2014)",
"ref_id": "BIBREF7"
},
{
"start": 330,
"end": 348,
"text": "(McLaughlin, 1969)",
"ref_id": "BIBREF32"
},
{
"start": 366,
"end": 388,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF27"
},
{
"start": 399,
"end": 431,
"text": "Dale-Chall formula (Chall, 1995)",
"ref_id": null
},
{
"start": 560,
"end": 581,
"text": "(Si and Callan, 2001;",
"ref_id": "BIBREF42"
},
{
"start": 582,
"end": 616,
"text": "Collins-Thompson and Callan, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 617,
"end": 645,
"text": "Schwarm and Ostendorf, 2005)",
"ref_id": "BIBREF40"
},
{
"start": 741,
"end": 767,
"text": "(Pitler and Nenkova, 2008;",
"ref_id": "BIBREF36"
},
{
"start": 768,
"end": 789,
"text": "Heilman et al., 2008;",
"ref_id": "BIBREF21"
},
{
"start": 790,
"end": 808,
"text": "Feng et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 855,
"end": 877,
"text": "(Heilman et al., 2007;",
"ref_id": "BIBREF20"
},
{
"start": 878,
"end": 896,
"text": "Kate et al., 2010;",
"ref_id": "BIBREF24"
},
{
"start": 897,
"end": 913,
"text": "Ma et al., 2012;",
"ref_id": "BIBREF31"
},
{
"start": 914,
"end": 931,
"text": "Cha et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 1034,
"end": 1061,
"text": "(Fran\u00e7ois and Fairon, 2012)",
"ref_id": "BIBREF14"
},
{
"start": 1071,
"end": 1092,
"text": "(Hancke et al., 2012)",
"ref_id": "BIBREF19"
},
{
"start": 1102,
"end": 1122,
"text": "(Sinha et al., 2014)",
"ref_id": "BIBREF43"
},
{
"start": 1132,
"end": 1160,
"text": "(Gonzalez-Dios et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 1171,
"end": 1191,
"text": "(Jiang et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 1207,
"end": 1232,
"text": "(Wang and Andersen, 2016)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Word Embedding. Researchers have proposed various methods on word embedding, which mainly include two broad categories: neural network based methods (Bengio et al., 2003; Collobert et al., 2011; Mikolov et al., 2013) and co-occurrence matrix based methods (Turney and Pantel, 2010; Levy and Goldberg, 2014b; Pennington et al., 2014) . Neural network based methods learn word embedding through training neural network models, which include NNLM (Bengio et al., 2003) , C&W (Collobert and Weston, 2008) , and word2vec (Mikolov et al., 2013) . Co-occurrence matrix based methods learn word embedding based on the co-occurrence matrices, which include LSA (Deerwester, 1990) , Implicit Matrix Factorization (Levy and Goldberg, 2014b) , and GloVe (Pennington et al., 2014) . Besides the general word embedding learning methods, researchers have also proposed methods to learn word embedding to include certain properties (Liu et al., 2015; Shen and Liu, 2016) or for certain domains (Tang et al., 2014; Ren et al., 2016; Alikaniotis et al., 2016; Wu et al., 2017) .",
"cite_spans": [
{
"start": 149,
"end": 170,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF2"
},
{
"start": 171,
"end": 194,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 195,
"end": 216,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF33"
},
{
"start": 256,
"end": 281,
"text": "(Turney and Pantel, 2010;",
"ref_id": "BIBREF46"
},
{
"start": 282,
"end": 307,
"text": "Levy and Goldberg, 2014b;",
"ref_id": "BIBREF29"
},
{
"start": 308,
"end": 332,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 444,
"end": 465,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 472,
"end": 500,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 516,
"end": 538,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF33"
},
{
"start": 652,
"end": 670,
"text": "(Deerwester, 1990)",
"ref_id": "BIBREF11"
},
{
"start": 703,
"end": 729,
"text": "(Levy and Goldberg, 2014b)",
"ref_id": "BIBREF29"
},
{
"start": 742,
"end": 767,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 916,
"end": 934,
"text": "(Liu et al., 2015;",
"ref_id": "BIBREF30"
},
{
"start": 935,
"end": 954,
"text": "Shen and Liu, 2016)",
"ref_id": "BIBREF41"
},
{
"start": 978,
"end": 997,
"text": "(Tang et al., 2014;",
"ref_id": "BIBREF44"
},
{
"start": 998,
"end": 1015,
"text": "Ren et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 1016,
"end": 1041,
"text": "Alikaniotis et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 1042,
"end": 1058,
"text": "Wu et al., 2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graph embedding. Graph embedding aims to learn continuous representations of the nodes or edges based on the structure of a graph. The graph embedding methods can be classified into three categories (Goyal and Ferrara, 2017) : factorization based (Roweis and Saul, 2000; Belkin and Niyogi, 2001) , random walk based (Perozzi et al., 2014; Grover and Leskovec, 2016) , and deep learning based . Among them, the random walk based methods are easy to comprehend and can effectively reserve the centrality and similarity of the nodes. Deepwalks (Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016) are two representatives of the random walk based methods. The basic idea of Deepwalk is viewing random walk paths as sentences, and feeding them to a general word embedding model. node2vec is similar to Deepwalk, although it simulates a biased random walk over graphs, and often provides efficient random walk paths.",
"cite_spans": [
{
"start": 199,
"end": 224,
"text": "(Goyal and Ferrara, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 271,
"end": 295,
"text": "Belkin and Niyogi, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 316,
"end": 338,
"text": "(Perozzi et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 339,
"end": 365,
"text": "Grover and Leskovec, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 541,
"end": 563,
"text": "(Perozzi et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 577,
"end": 604,
"text": "(Grover and Leskovec, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we present the details of Knowledge-Enriched Word Embedding (KEWE) for readability assessment. By incorporating the word-level readability knowledge, we extend the existing word embedding model and design two models with different learning structures. As shown in Figure 1 , the above one is the knowledge-only word embedding model (KEWE k ) which only takes in the domain knowledge, Figure 1 : Illustration of the knowledge-enriched word embedding models. KEWE k is based on the difficulty context, while KEWE h is based on both the difficulty and text contexts. the other is the hybrid word embedding model (KEWE h ) which compensates the domain knowledge with text corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 289,
"text": "Figure 1",
"ref_id": null
},
{
"start": 401,
"end": 409,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Knowledge-Enriched Word Embedding for Readability Assessment",
"sec_num": "3"
},
{
"text": "In the classic word embedding models, such as C&W, CBOW, and Skip-Gram, the context of a word is represented by its surrounding words in the text corpus. Levy and Goldberg (2014a) have incorporated the syntactic context from the dependency parse-trees and found that the trained word embedding could capture more functional and less topical similarity. For readability assessment, reading difficulty other than function or topic becomes more important. Hence, we introduce a kind of difficulty context, and try to learn a difficulty-focusing word embedding, which leads to KEWE k . In the following, we describe this model in three steps: domain knowledge extraction, knowledge graph construction, and graph-based word embedding learning. The former two steps focus on modeling the relationship among words on reading difficulty, and the final step on deriving the difficulty context and learning the word embedding.",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "Levy and Goldberg (2014a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Knowledge-only Word Embedding Model (KEWE k )",
"sec_num": "3.1"
},
{
"text": "To model the relationship among words on reading difficulty, we first introduce how to extract the knowledge on word-level difficulty from different perspectives. Specifically, we consider three types of wordlevel difficulty: acquisition difficulty, usage difficulty, and structure difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Knowledge Extraction",
"sec_num": "3.1.1"
},
{
"text": "Acquisition difficulty. Word acquisition refers to the temporal stage at which children learn the meaning of new words. Researchers have shown that the information on word acquisition is useful for readability assessment (Kidwell et al., 2009; Schumacher et al., 2016) . Generally, the words acquired at primary school are easier than those acquired at high school. We call the reading difficulty reflected by word acquisition as the acquisition difficulty. Formally, given a word w, its acquisition difficulty is described by a distribution K A w over the age-of-acquisition (AoA) (Kidwell et al., 2009) . Since the rating on AoA is an unsolved problem in cognitive science (Brysbaert and Biemiller, 2016) and not available for many languages, we explore extra materials to describe the acquisition difficulty. In particular, we collect three kinds of knowledge teaching materials, i.e., in-class teaching material, extracurricular teaching material, and proficiency test material. These materials are arranged as lists of words, each of which contains words learned in the same time period and hence corresponds to a certain level of acquisition difficulty. For example, given a lists of words, we can define K A w \u2208 R a , where K A w,i = 1 if a word w belongs to the list i, and K A w,i = 0 otherwise. Usage difficulty. Researchers used to count the usage frequency to measure the difficulty of words (Dale and Chall, 1948) , which can separate the words which are frequently used from those rarely used. We call the difficulty reflected by usage preference as the usage difficulty. Formally, given a word w, its usage difficulty is described by a distribution K U w over the usage preferences. We provide two ways to measure the usage difficulty. One way is estimating the level of words' usage frequency by counting the word frequency lists from the text corpus. The other way is estimating the probability distribution of words over the sentence-level difficulties, which is motivated by . Usage difficulty is defined on both. By discretizing the range of word frequency into b intervals of equal size, the usage frequency level of a word w is i, if its frequency resides in the ith intervals. By estimating the probability distribution vector P w from sentence-level difficulties, we can define K U w \u2208 R 1+|Pw| , and",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Kidwell et al., 2009;",
"ref_id": "BIBREF25"
},
{
"start": 244,
"end": 268,
"text": "Schumacher et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 582,
"end": 604,
"text": "(Kidwell et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 675,
"end": 706,
"text": "(Brysbaert and Biemiller, 2016)",
"ref_id": "BIBREF3"
},
{
"start": 1404,
"end": 1426,
"text": "(Dale and Chall, 1948)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Knowledge Extraction",
"sec_num": "3.1.1"
},
{
"text": "K U w,i = [i, P w ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Knowledge Extraction",
"sec_num": "3.1.1"
},
{
"text": ". Structure difficulty. When building readability formulas, researchers have found that the structure of words could imply its difficulty (Flesch, 1948; Gunning, 1952; McLaughlin, 1969) . For example, words with more syllables are usually more difficult than words with less syllables. We call the difficulty reflected by structure of words as the structure difficulty. Formally, given a word w, its structure difficulty can be described by a distribution K S w over the word structures. Words in different languages may have their own special structural characteristics. For example, in English, the structural characteristics of words relate to syllables, characters, affixes, and subwords. Whereas in Chinese, the structural characteristics of words relate to strokes and radicals of Chinese characters. Here we use the number of syllables (strokes for Chinese) and characters in a word w to describe its structure difficulty. By discretizing the range of each number into intervals, K S w is obtained by counting the interval in which w resides, respectively.",
"cite_spans": [
{
"start": 138,
"end": 152,
"text": "(Flesch, 1948;",
"ref_id": "BIBREF13"
},
{
"start": 153,
"end": 167,
"text": "Gunning, 1952;",
"ref_id": "BIBREF18"
},
{
"start": 168,
"end": 185,
"text": "McLaughlin, 1969)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Knowledge Extraction",
"sec_num": "3.1.1"
},
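The three word-level difficulty vectors described above can be assembled per word before graph construction. Below is a minimal sketch of that assembly, assuming graded word lists, a precomputed usage-frequency level, a sentence-level difficulty distribution P_w, and syllable/character counts are available; all function and parameter names here are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def acquisition_vector(word, word_lists):
    # K^A_w: indicator over a graded word lists (in-class, extracurricular,
    # proficiency-test materials); entry i is 1 if the word appears in list i.
    return np.array([1.0 if word in lst else 0.0 for lst in word_lists])

def usage_vector(freq_level, sentence_level_dist):
    # K^U_w = [i, P_w]: discretized usage-frequency level i concatenated with
    # the word's distribution P_w over sentence-level difficulties.
    return np.concatenate(([float(freq_level)], sentence_level_dist))

def structure_vector(num_syllables, num_chars, syllable_bins, char_bins):
    # K^S_w: interval indices for the syllable (stroke) and character counts.
    return np.array([float(np.digitize(num_syllables, syllable_bins)),
                     float(np.digitize(num_chars, char_bins))])

def knowledge_vector(word, word_lists, freq_level, sentence_level_dist,
                     num_syllables, num_chars, syllable_bins, char_bins):
    # K_w = [K^A_w, K^U_w, K^S_w], the per-word vector used to build the graph.
    return np.concatenate([acquisition_vector(word, word_lists),
                           usage_vector(freq_level, sentence_level_dist),
                           structure_vector(num_syllables, num_chars,
                                            syllable_bins, char_bins)])
```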
{
"text": "After extracting the domain knowledge on word-level difficulty, we then quantitatively represent the knowledge by a graph. We define the knowledge graph as an undirected graph G = (V, E), where V is the set of vertices, each of which represents a word, and E is the set of edges, each of which represents the relation (i.e., similarity) between two words on difficulty. Each edge e \u2208 E is a vertex pair (w i , w j ) and is associated with a weight z ij , which indicates the strength of the relation. If no edge exists between w i and w j , the weight z ij = 0. We define two edge types in the graph: Sim edge and Dissim edge. The former indicates that its end words have similar difficulty and is associated with a positive weight. The latter indicates that its end words have significant different difficulty and is associated with a negative weight. We derived the edges from the similarities computed between pairs of the words' knowledge vectors. Formally, given the extracted knowledge vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Construction",
"sec_num": "3.1.2"
},
{
"text": "K w = [K A w , K U w , K S w ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Construction",
"sec_num": "3.1.2"
},
{
"text": "of a word w, E can be constructed using the similarity between pairs of words (w i , w j ) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Construction",
"sec_num": "3.1.2"
},
{
"text": "z ij = \uf8f1 \uf8f2 \uf8f3 sim(K w i , K w j ) w j \u2208 N p (w i ) \u2212sim(K w i , K w j ) w j \u2208 N n (w i ) 0 otherwise (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Construction",
"sec_num": "3.1.2"
},
{
"text": "where sim() is a similarity function (e.g., cosine similarity), N p (w i ) refers to the set of k most similar (i.e., greatest similarity) neighbors of w i , and N n (w i ) refers to the set of k most dissimilar (i.e., least similarity) neighbors of w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph Construction",
"sec_num": "3.1.2"
},
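A minimal sketch of the edge construction in Eq. 1 follows, assuming the knowledge vectors are stacked row-wise into a matrix and cosine similarity is used; the function name and the dense-matrix representation are illustrative choices, and the per-node neighbor lists could be symmetrized to obtain the undirected graph described above.

```python
import numpy as np

def build_knowledge_graph(K, k):
    # K: (V x d) matrix whose row i is the knowledge vector K_w of word i.
    # Returns the signed weight matrix Z of Eq. 1: positive weights to the k
    # most similar neighbors (Sim edges), negative weights to the k least
    # similar neighbors (Dissim edges), and 0 elsewhere.
    normed = K / (np.linalg.norm(K, axis=1, keepdims=True) + 1e-12)
    S = normed @ normed.T                          # cosine similarities
    V = K.shape[0]
    Z = np.zeros((V, V))
    for i in range(V):
        others = np.delete(np.arange(V), i)        # exclude the word itself
        order = others[np.argsort(S[i, others])]   # ascending similarity
        Z[i, order[-k:]] = S[i, order[-k:]]        # N_p(w_i): Sim edges
        Z[i, order[:k]] = -S[i, order[:k]]         # N_n(w_i): Dissim edges
    return Z
```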
{
"text": "After constructing the knowledge graph, which models the relationship among words on difficulty, we can derive the difficulty context from the graph and train the word embedding focused on reading difficulty. For the graph-based difficulty context, given a word w, we define its difficulty context as the set of other words that have relevance to w on difficulty. Specifically, we define two types of difficulty context, positive context and negative context, corresponding to the two types of edges in the knowledge graph (i.e., Sim edge and Dissim edge). Unlike the context defined on texts, which can be sampled by sliding windows over consecutive words, the context defined on a graph requires special sampling strategies. Different sampling strategies may define the context differently. For difficulty context, we design two relatively intuitive strategies, the random walk strategy and the immediate neighbors strategy, for the sampling of either positive or negative context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": "From the type Sim edge, we sample the positive target-context pairs where the target word and the context words are similar on difficulty. Since the similarity is generally transitive, we adopt the random walk strategy to sample the positive context. Following the idea of node2vec (Grover and Leskovec, 2016) , we sample the positive contexts of words by simulating a 2 nd order random walk on the knowledge graph with only Sim edge. After that, by applying a sliding window of fixed length s over the sampled random walk paths, we can get the positive target-context pairs {(w t , w c )}.",
"cite_spans": [
{
"start": 282,
"end": 309,
"text": "(Grover and Leskovec, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
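The sketch below illustrates the positive-context sampling described above, using a plain first-order weighted random walk instead of the biased 2nd-order node2vec walk the paper adopts; the adjacency format and names are assumptions for illustration.

```python
import random

def random_walk(sim_adj, start, length):
    # First-order random walk over the Sim-edge subgraph (a simplification of
    # the biased 2nd-order node2vec walk used in the paper). sim_adj maps a
    # word to a list of (neighbor, positive_weight) pairs.
    path = [start]
    for _ in range(length - 1):
        nbrs = sim_adj.get(path[-1], [])
        if not nbrs:
            break
        words, weights = zip(*nbrs)
        path.append(random.choices(words, weights=weights, k=1)[0])
    return path

def positive_pairs(paths, window):
    # Slide a window of size `window` over the walks to get (target, context).
    pairs = []
    for path in paths:
        for t, w_t in enumerate(path):
            for c in range(max(0, t - window), min(len(path), t + window + 1)):
                if c != t:
                    pairs.append((w_t, path[c]))
    return pairs
```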
{
"text": "From the type Dissim edge, we sample the negative target-context pairs where the target word and the context words are dissimilar on difficulty. Since dissimilarity is generally not transitive, we adopt the immediate neighbor strategy to sample the negative context. Specifically, on the knowledge graph with only Dissim edge, we collect the negative context from the immediate neighbors of the target node w t and get the negative context list C n (w t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": "By replacing the text-based linear context with our graph-based difficulty context, we can train the word embedding using the classic word embedding models, such as C&W, CBOW, and Skip-Gram. Here we use the Skip-Gram model with Negative Sampling (SGNS) proposed by Mikolov et al. (2013) . Specifically, given N positive target-context pairs (w t , w c ) and the negative context list of the target word C n (w t ), the objective of KEWE k is to minimize the loss function L k , which is defined as follows:",
"cite_spans": [
{
"start": 265,
"end": 286,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L k = \u2212 1 N (wt,wc) log \u03c3(u wc v wt ) + E w i \u2208Cn(wt) log \u03c3(\u2212u w i v wt )",
"eq_num": "(2)"
}
],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": "where v w and u w are the \"input\" and \"output\" vector representation of w, and \u03c3 is the sigmoid function defined as \u03c3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": "(x) = 1 (1+e \u2212x )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
{
"text": ". This loss function enables the positive context (e.g., w c ) to be distinguished from the negative context (e.g., w i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Graph-based Word Embedding",
"sec_num": "3.1.3"
},
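A small numpy sketch of the loss in Eq. 2 follows, where the expectation over the negative context is approximated by an average over the sampled list C_n(w_t); the data structures and names are illustrative, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kewe_k_loss(pairs, neg_context, U, V_in):
    # pairs: list of positive (target, context) word ids sampled from the walks.
    # neg_context: maps a target id to its negative-context ids C_n(w_t).
    # U, V_in: "output" and "input" embedding matrices (one row per word).
    total = 0.0
    for w_t, w_c in pairs:
        total += np.log(sigmoid(U[w_c] @ V_in[w_t]))          # positive term
        negs = neg_context.get(w_t, [])
        if negs:                                               # expectation over C_n(w_t)
            total += np.mean([np.log(sigmoid(-U[w_i] @ V_in[w_t])) for w_i in negs])
    return -total / len(pairs)
```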
{
"text": "The classic text-based word embedding models yield word embedding focusing on syntactic and semantic contexts, while ignoring the word difficulty. By contrast, KEWE k trains the word embedding focusing on the word difficulty, while leaving out the syntactic and semantic information. Since readability may also relate to both syntax and semantics, we develop a hybrid word embedding model (KEWE h ), to incorporate both domain knowledge and text corpus. The loss function of the hybrid model L h can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "L h = \u03bbL k + (1 \u2212 \u03bb)L t (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "where L k is the loss of predicting the knowledge graph-based difficulty contexts, L t is the loss of predicting the text-based syntactic and semantic contexts, and \u03bb \u2208 [0, 1] is a weighting factor. Clearly, the case of \u03bb = 1 reduces the hybrid model to the knowledge-only model. As there are many text-based word embedding models, the text-based loss L t can be defined in various ways. To be consistent with KEWE k , we formalize L t based on the Skip-Gram model. Given a text corpus, the Skip-Gram model aims to find word representations that are good at predicting the context words. Specifically, given a sequence of training words, denoted as w 1 , w 2 , \u2022 \u2022 \u2022 , w T , the objective of Skip-Gram model is to minimize the log loss of predicting the context using target word embedding, which can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L t = \u2212 1 T T t=1 \u2212s\u2264j\u2264s,j =0 log p(w t+j |w t )",
"eq_num": "(4)"
}
],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "where s is the window size of the context sampling. Since the full softmax function used to define p(w t+j |w t ) is computationally expensive, we employ the negative sampling strategy (Mikolov et al., 2013) and replace every log p(w c |w t ) in L t by the following formula:",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log p(w c |w t ) = log \u03c3(u wc v wt ) + k i=1 E w i \u223cPn(w) log \u03c3(\u2212u w i v wt )",
"eq_num": "(5)"
}
],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "where v w , u w , and \u03c3 are of the same meanings as in Eq. 2, k is the number of negative samples, and P n (w) is the noise distribution. This strategy enables the actual context w c to be distinguished from the noise context w i drawn from the noise distribution P n (w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Word Embedding Model (KEWE h )",
"sec_num": "3.2"
},
{
"text": "We adopt the stochastic gradient descent to train our models. Specifically, for the hybrid model (KEWE h ), we adopt the mini-batch mode used in (Yang et al., 2016) . Firstly, we sample a batch of random walk paths of size N 1 and take a gradient step to optimize the loss function L k . Secondly, we sample a batch of text sentences of size N 2 and take a gradient step to optimize the loss function L t . We repeat the above procedures until the model converged or the predefined number of iterations is reached. The ratio N 1 N 1 +N 2 is used to approximate the weighting factor \u03bb in Eq. 3. For training the knowledge-only model, the process is the same without L t and \u03bb.",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.3"
},
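A compact sketch of the alternating mini-batch schedule described above follows; knowledge_step and text_step stand in for hypothetical SGD updates on L_k and L_t, and the ratio of the two batch sizes plays the role of the weighting factor lambda.

```python
import random

def train_kewe_h(walk_paths, sentences, n1, n2, num_iters,
                 knowledge_step, text_step):
    # Alternating mini-batch training for KEWE_h (Section 3.3). n1 and n2 are
    # the batch sizes for random-walk paths and text sentences; the ratio
    # n1 / (n1 + n2) approximates lambda in Eq. 3. knowledge_step and
    # text_step are hypothetical callbacks taking one SGD step on L_k and L_t.
    for _ in range(num_iters):
        if n1 > 0:
            knowledge_step(random.sample(walk_paths, min(n1, len(walk_paths))))
        if n2 > 0:
            text_step(random.sample(sentences, min(n2, len(sentences))))
```

Setting n2 = 0 recovers the knowledge-only training of KEWE_k, mirroring the lambda = 1 case of Eq. 3.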
{
"text": "We apply KEWE for document-level readability assessment under a supervised learning framework proposed by Schwarm and Ostendorf (2005) . The classifier for readability assessment is built on documents with annotated reading levels. Instead of using the hand-crafted features, we use the word embedding to produce the features of documents.",
"cite_spans": [
{
"start": 106,
"end": 134,
"text": "Schwarm and Ostendorf (2005)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "To extract high level features from documents using word embedding, we design the max layer similar to the one used in the convolutional sentence approach proposed by Collobert et al. (Collobert et al., 2011) . The max layer is used to generate a fix-size feature vector from variant length sequences. Specifically, given a document represented by a matrix M \u2208 R m\u00d7n , where the kth column is the word embedding of the kth word in the document, the max layer output a vector f max (M ):",
"cite_spans": [
{
"start": 184,
"end": 208,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[f max (M )] i = max t [M ] i,t 1 \u2264 i \u2264 m",
"eq_num": "(6)"
}
],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "where t = {1, 2, . . . , n} represents \"time\" of a sequence, and m is the dimension of embedding vectors. Besides the max layer, the min and average layers are also used to extract features. By concatenating all three feature vectors, we get the final feature vector f (M ) of the document M as follows, which can be fed to the classifier for readability assessment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (M ) = [f max (M ), f min (M ), f avg (M )]",
"eq_num": "(7)"
}
],
"section": "Readability Assessment",
"sec_num": "3.4"
},
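Eqs. 6-7 amount to simple column-wise pooling over the word embedding matrix. A minimal sketch follows, under the assumption that M stores one word embedding per column; the function name is illustrative.

```python
import numpy as np

def document_features(M):
    # Eqs. 6-7: M is an (m x n) matrix whose k-th column is the embedding of
    # the k-th word of the document. Concatenating the element-wise max, min,
    # and average over the word positions yields a fixed-size 3m-dim vector.
    return np.concatenate([M.max(axis=1), M.min(axis=1), M.mean(axis=1)])
```

The resulting vector f(M) can then be fed to a Logistic Regression or Random Forest classifier, as in the supervised framework described above.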
{
"text": "4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "In this section, we conduct experiments based on four datasets of two languages, to investigate the following two research questions: RQ1: Whether KEWE is effective for readability assessment, compared with other well-known readability assessment methods, and other word embedding models?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "RQ2: What are the effects of the quality of input (i.e., the quality of the knowledge base and text corpus) and the hybrid ratio (i.e., the weighting factor \u03bb) on the prediction performance of KEWE?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Assessment",
"sec_num": "3.4"
},
{
"text": "The experiments are conducted on four datasets, including two English datasets: ENCT and EHCT, two Chinese datasets: CPT and CPC. ENCT, CPT , and EHCT are extracted from textbooks 1 , where the documents have already been leveled into grades; CPC is extracted from the students' compositions 2 written by Chinese primary school students, where the documents are leveled by the six grades of authors. EHCT is collected from the English textbooks of Chinese high schools and colleges, which contains 4 levels corresponding to the 3 grades of high school plus undergraduates. The details of the four datasets are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Datasets and Knowledge Base",
"sec_num": "4.1"
},
{
"text": "Since the experiments are conducted on two languages, we collect the knowledge bases for both English and Chinese, which are used for extracting domain knowledge and constructing the knowledge graphs, as described in Section 3.1. The details are shown in Table 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Datasets and Knowledge Base",
"sec_num": "4.1"
},
{
"text": "For readability assessment, we design experiments on the four datasets using the hold-out validation, which randomly divides a dataset into training and test sets by stratified sampling. The test ratio is set as 0.3. To reduce randomness, under each case, 100 rounds of hold-out validations are performed, and the average results are reported. To tune the hyper-parameters, we randomly choose three-tenths of the training set as development set. We choose three widely used metrics (Jiang et al., 2014) : Accuracy (Acc), Adjacent Accuracy (\u00b1Acc) and Pearson's Correlation Coefficient (PCC), to measure the performance on readability assessment. For the training of word embedding, we use all the sentences in the target dataset as the training corpus (denote as the internal corpus), to ensure sufficient word coverage. To avoid mixing the level information into the internal corpus, we shuffle the sentences in it before feeding to the model. For the hyper-parameters of text-based word embedding, we set the dimension as 300, the window size as 5, the number of negative samples as 5, and the iteration number as 5. For KEWE, the default length of random walk path is set as 80 (Grover and Leskovec, 2016) . k and \u03bb are tuned using the development set.",
"cite_spans": [
{
"start": 482,
"end": 502,
"text": "(Jiang et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 1180,
"end": 1207,
"text": "(Grover and Leskovec, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.2"
},
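The evaluation protocol above (stratified hold-out with a 0.3 test ratio, repeated 100 times, scored by Acc, adjacent Acc, and PCC) could be reproduced along the following lines; the use of scikit-learn and scipy here is an assumption, and labels are taken to be integer reading levels.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def evaluate_once(X, y, seed):
    # One round of the stratified hold-out protocol (test ratio 0.3) with a
    # Random Forest classifier; y must hold integer reading levels so that
    # adjacent accuracy and PCC are well defined.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    pred = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr).predict(X_te)
    acc = np.mean(pred == y_te)
    adj_acc = np.mean(np.abs(pred - y_te) <= 1)   # within one level
    pcc = pearsonr(pred, y_te)[0]
    return acc, adj_acc, pcc

# averaged over 100 rounds, as in the paper's setup:
# results = [evaluate_once(X, y, s) for s in range(100)]
```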
{
"text": "To address RQ1, we firstly compare KEWE with the following readability assessment methods: McLaughlin, 1969) and FK (Kincaid et al., 1975) are two widely-used readability formulas.",
"cite_spans": [
{
"start": 91,
"end": 108,
"text": "McLaughlin, 1969)",
"ref_id": "BIBREF32"
},
{
"start": 113,
"end": 138,
"text": "FK (Kincaid et al., 1975)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4.3"
},
{
"text": "\u2022 SMOG (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4.3"
},
{
"text": "\u2022 SUM (Collins-Thompson and Callan, 2004) is a smoothed unigram model. \u2022 HCF is the hand-crafted feature based method. For English and Chinese, we employ the state-ofthe-art feature set proposed by and Jiang et al. (2014) respectively. \u2022 BOW refers to the bag-of-words model. For the feature-based methods (i.e., HCF, BOW, and KEWE), we use both Logistic Regression (LR) and Random Forest (RF) as the classifiers for readability assessment . Table 3 lists the performance measures of all these methods on four datasets. The value marked in bold in each column refers to the maximum (best) measures acquired by the methods on each dataset by certain metric.",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "Jiang et al. (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 442,
"end": 449,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4.3"
},
{
"text": "From Table 3 , we can see that the performances of readability formulas (SMOG and FK) are not good on all the datasets, except the adjacent accuracies on ENCT. The smoothed unigram model (SUM) outperforms SOMG and FK on all the datasets, and on EHCT, its accuracy is only slightly inferior to KEWE h . HCF performs the best among methods other than KEWE. Even compared with KEWE, it achieves the best performance on CPC and the best adjacent accuracy on ENCT. KEWE is only slightly inferior to HCF in four columns, but outperforms all the other methods in the other eight columns. Table 4 : Performance comparison between KEWE and other word embedding models Overall, KEWE is competitive for readability assessment. In addition, by combining KEWE with the hand-crafted feature set of HCF, the performance can be further improved in many columns.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 581,
"end": 588,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4.3"
},
{
"text": "Secondly, we compare KEWE with other text-based word embedding models for readability assessment. These models include NNLM (Bengio et al., 2003) , C&W (Collobert and Weston, 2008) , GloVe (Pennington et al., 2014) , CBOW and Skip-Gram (SG) (Mikolov et al., 2013) . Random embedding (Random) and knowledge vectors from the three perspectives (KV) are also used as baselines. All the text-based models mentioned are trained on the four datasets respectively. Table 4 lists the results of applying word embedding for readability assessment using Random Forest as the classifier. From Table 4 , the model KEWE h gets the best performance among all the embedding baselines, including SG+KV, which also takes in both knowledge and text corpus. The results demonstrate the superiority of hybridizing the knowledge graph and text corpus to learn the representation of words.",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 152,
"end": 180,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 189,
"end": 214,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 241,
"end": 263,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 4",
"ref_id": null
},
{
"start": 582,
"end": 590,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Other Methods",
"sec_num": "4.3"
},
{
"text": "To address RQ2, we analyze the effects of the knowledge graph, the text corpus, and the hybrid ratio on the performance of KEWE. Firstly, we study the impacts of the knowledge graph on the performance of KEWE k . Specifically, we study the following three parameters of the knowledge graph: the knowledge vectors generated from the three word-level difficulties (i.e., acquisition difficulty, usage difficulty, and structure difficulty), the similarity function (i.e., sim()), and the number of neighbors of each node in the knowledge graph (i.e., k). Figure 2 shows the performance measures of using each of the three knowledge vectors respectively in bar chart. It can be found that the acquisition difficulty outperforms the other two on all four datasets. By combining the three knowledge vectors, the performance can be further improved. Figure 3 shows the performance measures of using different similarity functions and different values of k in line charts. It can be found that the knowledge graphs achieve a relatively good performance, when the cosine similarity is used and k is close to 50 or 100. Secondly, we study the impacts of using external corpus on the performance of KEWE h . To learn textbased word embedding for both languages, we collect the external corpus from both English Wikipedia (Ewiki) and Chinese Wikipedia (Cwiki). After preprocessing, Ewiki contains 7M sentences and 172M words, and Cwiki contains 7.4M sentences and 160M words. We sample different number of sentences from the corpus for training word embedding, and then measure the performance of KEWE h on readability assessment. Figure 4 shows the performance measures by using different volumes of external corpus in line charts, with Skip-Gram trained for comparison. Both models are also trained using the internal corpus, and the performance measures (dotted lines) are depicted as baselines. From Figure 4 , it can be found that the performance of both Skip-Gram and KEWE h increases as the volume of external corpus increases, and keeps stable when the corpus is large enough. Besides, on English datasets, word embedding trained using external corpus achieves comparable performance with that using internal corpus. The above suggests that external corpus is a good substitution for the internal corpus, especially on English datasets, and a relative large volume of corpus is required to achieve stable performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 560,
"text": "Figure 2",
"ref_id": null
},
{
"start": 843,
"end": 851,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1619,
"end": 1627,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1892,
"end": 1900,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Analysis",
"sec_num": "4.4"
},
{
"text": "Finally, we study the impacts of the weighting factor \u03bb on the performance of KEWE h . Since \u03bb is approximated by N 1 /(N 1 + N 2 ) in our model, we vary \u03bb from 0 to 1 by setting N 1 +N 2 = 100 and then varying N 1 from 0 to 100 stepping by 10. Figure 5 shows the performance of KEWE h with varied \u03bb in line charts with error bars, where \u03bb = 0 and 1 correspond to Skip-Gram and KEWE k respectively. From Figure 5 , it can be found that KEWE h can outperform its base models (i.e., KEWE k and Skip-Gram), by setting \u03bb with a suitable value near the base model which performs better. However, it requires further study to find a suitable \u03bb for training KEWE h . Internal Corpus(CPC 4k) External Corpus(Cwiki 1000k) Figure 5 : The performance of KEWE h with varied weighting factor \u03bb",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 253,
"text": "Figure 5",
"ref_id": null
},
{
"start": 404,
"end": 412,
"text": "Figure 5",
"ref_id": null
},
{
"start": 713,
"end": 721,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Analysis",
"sec_num": "4.4"
},
{
"text": "To better understand our method, we perform error analysis on the classification results. We mainly describe two sources of errors: word representation failure and word order capturing failure. From the perspective of word representation, the classification error is caused by the fact that the difficulty information may not be properly encoded into the word embedding. Table 4 shows that KEWE performs better than the classical text-based word embeddings in encoding difficulty information into representation, but in the final results there still exists the failure of word representation in KEWE. From the perspective of word order capturing, the classification error is caused by the fact that the order among words may be neglected, so that the syntactic difficulty, pragmatic difficulty, and discourse difficulty of documents are ignored during the process of readability assessment. Table 3 shows that KEWE h can be further improved by combining with the features related to the word order (i.e., HCF+KEWE h ), which means that there exists the failure of word order capturing in KEWE. These two kinds of error sources reveal the limitation of our method. In future work, the neural network accompanied with word embedding is a good alternative, which can produce better representation of documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 4",
"ref_id": null
},
{
"start": 891,
"end": 898,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.5"
},
{
"text": "In this paper, we propose the knowledge-enriched word embedding (KEWE) for readability assessment. We extract the domain knowledge on word-level difficulty from three different perspectives and construct a knowledge graph. Based on the difficulty context derived from the knowledge graph, we develop two word embedding models (i.e., KEWE k and KEWE h ). The experimental results on English and Chinese datasets demonstrate that KEWE can outperform other well-known readability assessment methods, and the classic text-based word embedding models. Future work is planned to involve extra datasets and additional word embedding strategies so that the soundness of KEWE can be further approved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.dzkbw.com 2 http://www.eduxiao.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic text scoring using neural networks",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Alikaniotis",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics",
"volume": "",
"issue": "",
"pages": "715--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics, pages 715-725.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Laplacian eigenmaps and spectral techniques for embedding and clustering",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Belkin",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Niyogi",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Belkin and Partha Niyogi. 2001. Laplacian eigenmaps and spectral techniques for embedding and clus- tering. Advances in Neural Information Processing Systems, 14(6).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Test-based age-of-acquisition norms for 44 thousand english word meanings",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Biemiller",
"suffix": ""
}
],
"year": 2016,
"venue": "Behavior Research Methods",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert and Andrew Biemiller. 2016. Test-based age-of-acquisition norms for 44 thousand english word meanings. Behavior Research Methods, pages 1-4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language modeling by clustering with word embeddings for text readability assessment",
"authors": [
{
"first": "Miriam",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "Youngjune",
"middle": [],
"last": "Gwon",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Kung",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2003--2006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miriam Cha, Youngjune Gwon, and H. T. Kung. 2017. Language modeling by clustering with word embeddings for text readability assessment. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, pages 2003-2006.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Readability revisited: The new Dale-Chall readability formula",
"authors": [
{
"first": "Jeanne",
"middle": [],
"last": "Sternlicht",
"suffix": ""
},
{
"first": "Chall",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "118",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeanne Sternlicht Chall. 1995. Readability revisited: The new Dale-Chall readability formula, volume 118. Brookline Books Cambridge, MA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A language modeling approach to predicting reading difficulty",
"authors": [
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "James",
"middle": [
"P"
],
"last": "Callan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevyn Collins-Thompson and James P Callan. 2004. A language modeling approach to predicting reading diffi- culty. In Proceedings of the 2004 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 193-200.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computational assessment of text readability: A survey of current and future research",
"authors": [
{
"first": "Kevyn",
"middle": [],
"last": "Collinsthompson",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Applied Linguistics",
"volume": "165",
"issue": "2",
"pages": "97--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevyn Collinsthompson. 2014. Computational assessment of text readability: A survey of current and future research. International Journal of Applied Linguistics, 165(2):97-135.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Machine Learning, Proceedings of the Twenty-Fifth International Conference(ICML 2008)",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Machine Learning, Proceedings of the Twenty-Fifth International Conference(ICML 2008), pages 160-167.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493- 2537.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A formula for predicting readability",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Jeanne",
"middle": [
"S"
],
"last": "Chall",
"suffix": ""
}
],
"year": 1948,
"venue": "Educational Research Bulletin",
"volume": "27",
"issue": "1",
"pages": "11--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Dale and Jeanne S. Chall. 1948. A formula for predicting readability. Educational Research Bulletin, 27(1):11-28.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the Association for Information Science & Technology",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester. 1990. Indexing by latent semantic analysis. Journal of the Association for Information Science & Technology, 41(6):391-407.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A comparison of features for automatic readability assessment",
"authors": [
{
"first": "Lijun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Huenerfauth",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "276--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for auto- matic readability assessment. In Proceedings of the 23rd International Conference on Computational Linguis- tics: Posters, pages 276-284.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A new readability yardstick",
"authors": [
{
"first": "Rudolph",
"middle": [],
"last": "Flesch",
"suffix": ""
}
],
"year": 1948,
"venue": "Journal of applied psychology",
"volume": "32",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An ai readability formula for french as a foreign language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "C\u00e9drick",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "466--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Fran\u00e7ois and C\u00e9drick Fairon. 2012. An ai readability formula for french as a foreign language. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning, pages 466-477.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simple or complex? assessing the readability of basque texts",
"authors": [
{
"first": "Itziar",
"middle": [],
"last": "Gonzalez-Dios",
"suffix": ""
},
{
"first": "Mar\u00eda",
"middle": [
"Jes\u00fas"
],
"last": "Aranzabe",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz de Ilarraza",
"suffix": ""
},
{
"first": "Haritz",
"middle": [],
"last": "Salaberri",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "334--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itziar Gonzalez-Dios, Mar\u0131a Jes\u00fas Aranzabe, Arantza D\u0131az de Ilarraza, and Haritz Salaberri. 2014. Simple or complex? assessing the readability of basque texts. In Proceedings of the 25th International Conference on Computational Linguistics, pages 334-344.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Graph embedding techniques, applications, and performance: A survey",
"authors": [
{
"first": "Palash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Ferrara",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.02801"
]
},
"num": null,
"urls": [],
"raw_text": "Palash Goyal and Emilio Ferrara. 2017. Graph embedding techniques, applications, and performance: A survey. arXiv preprint arXiv:1705.02801.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "node2vec: Scalable feature learning for networks",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "855--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The technique of clear writing",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Gunning",
"suffix": ""
}
],
"year": 1952,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Gunning. 1952. The technique of clear writing. McGraw-Hill.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Readability classification for german using lexical, syntactic, and morphological features",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hancke",
"suffix": ""
},
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1063--1080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability classification for german using lexical, syntactic, and morphological features. In Proceedings of the 24th International Conference on Computational Linguistics, pages 1063-1080.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Combining lexical and grammatical features to improve readability measures for first and second language texts",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL HLT",
"volume": "",
"issue": "",
"pages": "460--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman, Collins-Thompson, Jamie Callan, and Maxine Eskenazi. 2007. Combining lexical and gram- matical features to improve readability measures for first and second language texts. In Proceedings of NAACL HLT, pages 460-467.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An analysis of statistical models and features for reading difficulty prediction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman, Kevyn Collins-Thompson, and Maxine Eskenazi. 2008. An analysis of statistical models and features for reading difficulty prediction. In Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications, pages 71-79.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An ordinal multi-class classification method for readability assessment of chinese documents",
"authors": [
{
"first": "Zhiwei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Daoxu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "In Knowledge Science, Engineering and Management",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiwei Jiang, Gang Sun, Qing Gu, and Daoxu Chen. 2014. An ordinal multi-class classification method for readability assessment of chinese documents. In Knowledge Science, Engineering and Management, pages 61-72. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A graph-based readability assessment method using word coupling",
"authors": [
{
"first": "Zhiwei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Daoxu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "411--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiwei Jiang, Gang Sun, Qing Gu, Tao Bai, and Daoxu Chen. 2015. A graph-based readability assessment method using word coupling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 411-420. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning to predict readability using diverse linguistic features",
"authors": [
{
"first": "Rohit",
"middle": [
"J"
],
"last": "Kate",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "546--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J Kate, Xiaoqiang Luo, Siddharth Patwardhan, Martin Franz, Radu Florian, Raymond J Mooney, Salim Roukos, and Chris Welty. 2010. Learning to predict readability using diverse linguistic features. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 546-554.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Statistical estimation of word acquisition with application to readability prediction",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kidwell",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lebanon",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "900--909",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kidwell, Guy Lebanon, and Kevyn Collins-Thompson. 2009. Statistical estimation of word acquisition with application to readability prediction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 900-909.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Processing, EMNLP 2014, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel",
"authors": [
{
"first": "J",
"middle": [
"Peter"
],
"last": "Kincaid",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"P"
],
"last": "Fishburne",
"suffix": "Jr"
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"S"
],
"last": "Chissom",
"suffix": ""
}
],
"year": 1975,
"venue": "Naval Air Station",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new read- ability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Air Station, Memphis, TN.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dependency-based word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency-based word embeddings. In Meeting of the Association for Computational Linguistics, pages 302-308.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177-2185.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning semantic word embeddings based on ordinal knowledge constraints",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1501--1511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1501-1511.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Ranking-based readability assessment for early primary children's literature",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Lofthus",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "548--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Ma, Eric Fosler-Lussier, and Robert Lofthus. 2012. Ranking-based readability assessment for early primary children's literature. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 548-552.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Smog grading: A new readability formula",
"authors": [
{
"first": "G",
"middle": [
"Harry"
],
"last": "McLaughlin",
"suffix": ""
}
],
"year": 1969,
"venue": "Journal of reading",
"volume": "12",
"issue": "8",
"pages": "639--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G Harry McLaughlin. 1969. Smog grading: A new readability formula. Journal of reading, 12(8):639-646.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word repre- sentation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Deepwalk: Online learning of social representations",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "701--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Revisiting readability: A unified framework for predicting text quality",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2008,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "186--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 186-195.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improving twitter sentiment classification using topic-enriched multi-prototype word embeddings",
"authors": [
{
"first": "Yafeng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3038--3044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yafeng Ren, Yue Zhang, Meishan Zhang, and Donghong Ji. 2016. Improving twitter sentiment classification using topic-enriched multi-prototype word embeddings. In Thirtieth AAAI Conference on Artificial Intelligence, pages 3038-3044.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Nonlinear dimensionality reduction by locally linear embedding",
"authors": [
{
"first": "Sam",
"middle": [
"T"
],
"last": "Roweis",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"K"
],
"last": "Saul",
"suffix": ""
}
],
"year": 2000,
"venue": "Science",
"volume": "290",
"issue": "5500",
"pages": "2323--2329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam T. Roweis and Lawrence K. Saul. 2000. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-6.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Predicting the relative difficulty of single sentences with and without surrounding context",
"authors": [
{
"first": "Elliot",
"middle": [],
"last": "Schumacher",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
},
{
"first": "Gwen",
"middle": [],
"last": "Frishkoff",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1871--1881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elliot Schumacher, Maxine Eskenazi, Gwen Frishkoff, and Kevyn Collins-Thompson. 2016. Predicting the rela- tive difficulty of single sentences with and without surrounding context. In Conference on Empirical Methods in Natural Language Processing, pages 1871-1881.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Reading level assessment using support vector machines and statistical language models",
"authors": [
{
"first": "Sarah",
"middle": [
"E"
],
"last": "Schwarm",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah E Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 523-530.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Improved word embeddings with implicit structure information",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "2408--2417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Shen and Cong Liu. 2016. Improved word embeddings with implicit structure information. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference, pages 2408-2417.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A statistical model for scientific readability",
"authors": [
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "James",
"middle": [
"P"
],
"last": "Callan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 ACM CIKM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "574--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luo Si and James P. Callan. 2001. A statistical model for scientific readability. In Proceedings of the 2001 ACM CIKM International Conference on Information and Knowledge Management, pages 574-576.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Influence of target reader background and text features on text readability in bangla: A computational approach",
"authors": [
{
"first": "Manjira",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Tirthankar",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Basu",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "345--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manjira Sinha, Tirthankar Dasgupta, and Anupam Basu. 2014. Influence of target reader background and text fea- tures on text readability in bangla: A computational approach. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference, pages 345-354.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Learning sentiment-specific word embedding for twitter sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2014,
"venue": "Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1555--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In Meeting of the Association for Computational Linguistics, pages 1555-1565.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Are cohesive features relevant for text readability evaluation",
"authors": [
{
"first": "Amalia",
"middle": [],
"last": "Todirascu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "Delphine",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Gala",
"suffix": ""
},
{
"first": "Anne-Laure",
"middle": [],
"last": "Ligozat",
"suffix": ""
}
],
"year": 2016,
"venue": "26th International Conference on Computational Linguistics (COLING 2016)",
"volume": "",
"issue": "",
"pages": "987--997",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amalia Todirascu, Thomas Fran\u00e7ois, Delphine Bernhard, Nuria Gala, and Anne-Laure Ligozat. 2016. Are co- hesive features relevant for text readability evaluation? In 26th International Conference on Computational Linguistics (COLING 2016), pages 987-997.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37:141-188.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "On improving the accuracy of readability classification using insights from second language acquisition",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, BEA@NAACL-HLT 2012",
"volume": "",
"issue": "",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala and Detmar Meurers. 2012. On improving the accuracy of readability classification using in- sights from second language acquisition. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, BEA@NAACL-HLT 2012, June 7, 2012, Montr\u00e9al, Canada, pages 163-173.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Grammatical templates: Improving text difficulty evaluation for language learners",
"authors": [
{
"first": "Shuhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Andersen",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "1692--1702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuhan Wang and Erik Andersen. 2016. Grammatical templates: Improving text difficulty evaluation for language learners. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference, pages 1692-1702.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Structural deep network embedding",
"authors": [
{
"first": "Daixin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wenwu",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1225--1234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural deep network embedding. In ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 1225-1234.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Improving implicit discourse relation recognition with discourse-specific word embeddings",
"authors": [
{
"first": "Changxing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yidong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Boli",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "269--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changxing Wu, Xiaodong Shi, Yidong Chen, Jinsong Su, and Boli Wang. 2017. Improving implicit discourse relation recognition with discourse-specific word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 269-274.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Revisiting semi-supervised learning with graph embeddings",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "40--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2016. Revisiting semi-supervised learning with graph embeddings. pages 40-48.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Figure 2: The performance of KEWE k using different types of knowledge vectors"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The performance of KEWE k using knowledge graphs with different similarity functions and neighbor numbers The performance of KEWE h using different volumes of external corpus"
},
"TABREF1": {
"num": null,
"text": "Statistics of the Four Datasets",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Perspective Type of Material</td><td>English</td><td>Chinese</td></tr><tr><td/><td>AoA rating</td><td colspan=\"2\">AoA norms for over 50,000 English Words \u2212</td></tr><tr><td>Acqusition</td><td colspan=\"2\">In-class teaching Extra-curricular teaching New Concept English vocabulary New word list (primary, junior, high)</td><td>New word list (primary, junior, high) Overseas Chinese Language vocabulary</td></tr><tr><td/><td>Proficiency test</td><td>CET(4,6), PETS(1,2,5) vocabulary</td><td>HSK(1-6) vocabulary</td></tr><tr><td>Usage</td><td>Frequency List Sentence Corpus</td><td colspan=\"2\">Word Frequency List of American English Chinese High Frequency Word List English Wikipedia Chinese Wikipedia</td></tr><tr><td colspan=\"2\">Structure Dictionary</td><td>Syllabary</td><td>Stroke dictionary</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "Details of the collected knowledge bases",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "SMOG 46.04 94.13 0.7441 28.47 84.05 0.5517 30.95 70.21 0.6063 19.58 60.10 0.4076 FK 50.98 97.67 0.7870 25.17 76.16 0.3417 25.15 64.61 0.4912 16.93 50.49 0.0808 SUM 66.64 97.19 0.8345 70.41 91.41 0.7257 33.61 72.04 0.5866 26.71 56.59 0.2734",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Dataset</td><td/></tr><tr><td>Method</td><td>ENCT</td><td>EHCT</td><td>CPT</td><td>CPC</td></tr><tr><td/><td>Acc \u00b1Acc PCC</td><td>Acc \u00b1Acc PCC</td><td>Acc \u00b1Acc PCC</td><td>Acc \u00b1Acc PCC</td></tr><tr><td>HCF</td><td colspan=\"4\">LR 87.24 98.01 0.9136 53.69 89.85 0.6937 43.71 81.69 0.7652 27.08 61.54 0.4275 RF 90.96 100 0.9592 60.36 91.24 0.7421 47.97 87.99 0.8159 35.09 72.49 0.5880</td></tr><tr><td>BOW</td><td colspan=\"4\">LR 81.57 99.28 0.9049 65.88 89.61 0.7257 34.82 74.67 0.6593 29.01 56.71 0.3083 RF 78.89 94.71 0.8294 59.03 89.96 0.7384 39.56 80.73 0.7486 31.13 62.74 0.5247</td></tr><tr><td>KEWE k</td><td colspan=\"4\">LR 91.12 99.72 0.9545 64.03 91.43 0.7745 52.69 87.35 0.8091 30.59 61.32 0.5272 RF 92.34 99.71 0.9606 69.13 95.28 0.8251 52.94 88.39 0.8167 30.73 66.14 0.5278</td></tr><tr><td>KEWE h</td><td colspan=\"4\">LR 93.67 99.58 0.9654 65.25 93.40 0.8129 54.33 88.03 0.8233 34.81 65.11 0.5507 RF 94.48 99.72 0.9705 71.20 96.12 0.8449 54.83 88.94 0.8287 33.26 65.63 0.5111</td></tr><tr><td>HCF+KEWE h</td><td colspan=\"4\">LR 87.37 98.07 0.9153 53.96 90.05 0.6976 45.92 83.53 0.7876 27.31 61.84 0.4300 RF 95.87 99.89 0.9794 70.05 95.85 0.8380 60.23 92.28 0.8776 35.52 70.24 0.5801</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "Performance comparison between KEWE and other readability assessment methods .9136 66.00 92.73 0.7833 46.49 86.08 0.7921 32.03 63.34 0.4729 KEWE k 92.34 99.71 0.9606 69.13 95.28 0.8251 52.94 88.39 0.8167 30.73 66.14 0.5278 KEWE h 94.48 99.72 0.9705 71.20 96.12 0.8449 54.83 88.94 0.8287 33.26 65.63 0.5111",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Dataset</td><td/><td/><td/><td/></tr><tr><td>Model</td><td/><td>ENCT</td><td/><td/><td>EHCT</td><td/><td/><td>CPT</td><td/><td/><td>CPC</td></tr><tr><td/><td>Acc</td><td>\u00b1Acc</td><td>PCC</td><td>Acc</td><td>\u00b1Acc</td><td>PCC</td><td>Acc</td><td>\u00b1Acc</td><td>PCC</td><td>Acc</td><td>\u00b1Acc</td><td>PCC</td></tr><tr><td colspan=\"13\">Random 31.73 75.57 0.0834 24.96 61.35 0.0786 16.96 45.96 0.0590 16.81 45.59 0.0743</td></tr><tr><td colspan=\"13\">NNLM 79.80 98.66 0.8843 58.43 88.36 0.7046 43.47 85.22 0.7806 30.68 65.07 0.5195</td></tr><tr><td>C&amp;W</td><td colspan=\"12\">82.35 99.05 0.9039 59.05 90.06 0.7160 45.59 84.70 0.7874 30.60 64.83 0.5023</td></tr><tr><td>GloVe</td><td colspan=\"12\">82.66 99.52 0.9129 58.15 90.53 0.7302 43.63 85.80 0.7882 30.33 64.48 0.4933</td></tr><tr><td colspan=\"13\">CBOW 73.94 96.86 0.8299 62.40 91.29 0.7518 40.49 82.83 0.7564 27.40 59.48 0.3929</td></tr><tr><td>SG</td><td colspan=\"12\">82.20 98.60 0.9001 62.73 62.90 0.7563 44.31 85.54 0.7869 31.92 63.04 0.4527</td></tr><tr><td>KV</td><td colspan=\"12\">89.10 98.71 0.9295 64.15 90.85 0.7483 50.63 84.84 0.7806 28.19 61.90 0.4658</td></tr><tr><td colspan=\"4\">SG+KV 84.71 98.81 0</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
}
}
}
}