{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:20.857722Z"
},
"title": "Evaluation of Unsupervised Automatic Readability Assessors Using Rank Correlations",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Gakugei University",
"location": {
"addrLine": "4-1-1 Nukuikita-machi, Koganei-shi",
"postCode": "184-8501",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic readability assessment (ARA) is the task of automatically assessing readability with little or no human supervision. ARA is essential for many second language acquisition applications to reduce the workload of annotators, who are usually language teachers. Previous unsupervised approaches manually searched textual features that correlated well with readability labels, such as perplexity scores of large language models. This paper argues that, to evaluate an assessors' performance, rank-correlation coefficients should be used instead of Pearson's correlation coefficient (\u03c1). In the experiments, we show that its performance can be easily underestimated using Pearson's \u03c1, which is significantly affected by the linearity of the output readability scores. We also propose a lightweight unsupervised readability assessor that achieved the best performance in both the rank correlations and Pearson's \u03c1 among all unsupervised assessors compared.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic readability assessment (ARA) is the task of automatically assessing readability with little or no human supervision. ARA is essential for many second language acquisition applications to reduce the workload of annotators, who are usually language teachers. Previous unsupervised approaches manually searched textual features that correlated well with readability labels, such as perplexity scores of large language models. This paper argues that, to evaluate an assessors' performance, rank-correlation coefficients should be used instead of Pearson's correlation coefficient (\u03c1). In the experiments, we show that its performance can be easily underestimated using Pearson's \u03c1, which is significantly affected by the linearity of the output readability scores. We also propose a lightweight unsupervised readability assessor that achieved the best performance in both the rank correlations and Pearson's \u03c1 among all unsupervised assessors compared.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Assessing readability plays an essential role in second language acquisition; it can be used for many educational applications such as intelligent reading support systems and placement tests for language classes. Readability assessment is a costly task for educational experts and language teachers. To perform it, they must read a text and assess its readability by guessing how difficult the text is for target learning readers. Hence, to reduce the cost of the labor required by educational experts, the task of automatically identifying the readability of texts for language learners, known as automatic readability assessment (ARA), has been extensively studied in the field of artificial intelligence (AI).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised automatic readability assessment appeared early but has recently been reexamined as a research focus. Early studies such as the Dale-Chall formula (1948) (Dale and Chall, 1948) , the Flesch Reading Ease formula (Flesch, 1948 (Flesch, ) (1948 , and the Flesch-Kincaid readability formula (1975) (Kincaid et al., 1975) were unsupervised, as they did not use costly annotated readability labels. Given a text, these formulae calculate its readability score based on simple superficial textual features such as the average length of a word in the given text. However, most of these early formulae are designed to assess readability for children who are native speakers. Evaluation datasets with readability labels annotated by language teachers targeting second language learners appeared much later, in the 2010s (Feng et al., 2010; Xia et al., 2016; Vajjala and Lu\u010di\u0107, 2018) . In these works, automatic readability assessment tasks using these evaluation datasets were formalized as a supervised document classification problem, and substantial research efforts were invested into the construction of classifiers by feature engineering to find complicated textual features that correlate well with readability labels.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Dale and Chall, 1948)",
"ref_id": "BIBREF8"
},
{
"start": 224,
"end": 237,
"text": "(Flesch, 1948)",
"ref_id": "BIBREF22"
},
{
"start": 307,
"end": 329,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF24"
},
{
"start": 823,
"end": 842,
"text": "(Feng et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 843,
"end": 860,
"text": "Xia et al., 2016;",
"ref_id": "BIBREF36"
},
{
"start": 861,
"end": 885,
"text": "Vajjala and Lu\u010di\u0107, 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, Martinc et al. (2021) revisited the unsupervised approach. They proposed that the perplexity scores of neural language models can also be used to represent the readability of text for second language learners and proposed to use them for unsupervised automatic readability assessment. The upper part of Fig. 1 show their approach. Given a text, their method uses no valuable readability label for training but uses only the language model trained on other large corpora, their method pre-dicts the text's readability score as an output.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 313,
"end": 319,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While this idea is sound, however, their evaluation is validated using the correlation coefficient, or Pearson's \u03c1, as illustrated on the right-hand side of Fig. 1 . Pearson's \u03c1 measures the degree of linear correlation between two random variables (Mukaka, 2012) . As neither the readability levels of the evaluation corpora nor the readability scores output by unsupervised readability assessors are necessarily linear, the use of Pearson's \u03c1 can lead to inaccurate evaluation.",
"cite_spans": [
{
"start": 249,
"end": 263,
"text": "(Mukaka, 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 157,
"end": 163,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study investigates how the use of Pearson's \u03c1 affects the evaluation of unsupervised readability assessors. We analyze how unsupervised assessors' performance can be easily underestimated if the readability scores are not linear. For this purpose, we also build a lightweight unsupervised readability assessor, denoted by the lower part of Fig. 1 . We show that, alternatively, rank-correlation coefficients are more robust to the linearity and appropriate for this evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 351,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are summarized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We indicate that the previous evaluation of unsupervised readability assessors by using Pearson's \u03c1 is problematic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate the degree by which Pearson's \u03c1 underestimates the readability score without linearity on a publicly available reliable evaluation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that, instead of Pearson's \u03c1, the use of rank-correlation coefficients is appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel lightweight unsupervised readability assessor that achieves best performance in terms of both Pearson's \u03c1 and rankcorrelation coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section formalizes the problem of automatic readability assessment. Let us suppose that we have N texts to assess: we write the set of texts as {T i |i \u2208 {1, . . . , N }}. Let Y be the set of readability labels. Labels are typically ordered in the order of difficulty. For example, in the On-eStopEnglish dataset (Vajjala and Lu\u010di\u0107, 2018) , we can set Y = {0, 1, 2}, where 0 is elementary, 1 is intermediate, and 2 is advanced. The number of levels depends on the evaluation corpus. Using Y, we write the label for T i as y i \u2208 Y.",
"cite_spans": [
{
"start": 318,
"end": 343,
"text": "(Vajjala and Lu\u010di\u0107, 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Readability Assessment",
"sec_num": "2"
},
{
"text": "Given each text T i , an assessor outputs its readability score s i . In a supervised setting, the assessor knows the number of levels in the evaluation corpus from training examples. Hence, s i ranges within Y: s i \u2208 Y. However, in an unsupervised setting, it is noteworthy that the assessor does not know Y, or how many levels the evaluation corpus has, because no label is given. Hence, even if only integers are allowed for y i , s i can be a real value. Throughout this paper, we write arrays using [ and ] . Given N texts [T i |i \u2208 {1, . . . , N }], our goal is to make an assessor output arrays of readability scores [s i |i \u2208 {1, . . . , N }] that correlate well with the array of labels",
"cite_spans": [
{
"start": 504,
"end": 511,
"text": "[ and ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goal in Unsupervised Setting",
"sec_num": "2.1"
},
{
"text": "[y i |i \u2208 {1, . . . , N }].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goal in Unsupervised Setting",
"sec_num": "2.1"
},
{
"text": "Here, there are multiple types of correlation coefficients between the array of scores and the array of labels, which we explain in the later sections. Typically, we should use rank coefficients when s i is real-valued.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goal in Unsupervised Setting",
"sec_num": "2.1"
},
{
"text": "In most evaluation datasets, educational experts are asked to assess text readability by choosing a label from the set of predefined readability labels, Y. In contrast, automatic readability assessors output real-valued scores in an unsupervised setting. How do we compare readability level labels and realvalued scores?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "A simple but na\u00efve way to make this comparison is to use the Pearson correlation coefficient \u03c1 y,s , which is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c1 y,s = cov(y, s) \u03c3 y \u03c3 s",
"eq_num": "(1)"
}
],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "In Eq. 1, cov(y, s) denotes the covariance between y and s, \u03c3 y denotes the standard deviation of y, and \u03c3 s denotes the standard deviation of s. Eq. 1 ranges [\u22121, 1], where 1 is the perfect correlation. However, the Pearson correlation coefficient measures the degree of linear correlation between two random variables (Mukaka, 2012). The readability levels of evaluation corpora are not necessarily linearly distributed. Readability scores that the assessors output are also not necessarily linear. In these cases, it is usually more appropriate to focus on the correlation between the rankings of the readability label y i s and scores s i s. Rank correlation coefficients measure the correlation between two rankings with the range of [\u22121, 1]. Two types of them are notable: Spearman's \u03c1 and Kendall's \u03c4 (Alvo and Philip, 2014 ).",
"cite_spans": [
{
"start": 808,
"end": 830,
"text": "(Alvo and Philip, 2014",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
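{
"text": "As a worked illustration of Eq. 1 before turning to rank correlations (a minimal sketch added for clarity; the data are hypothetical), Pearson's \u03c1 can be computed directly from the covariance and the standard deviations and checked against numpy:\n\nimport numpy as np\n\ny = np.array([0, 0, 1, 1, 2, 2], dtype=float)  # gold readability labels\ns = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.9])   # assessor scores (hypothetical)\n\n# Eq. 1: rho_{y,s} = cov(y, s) / (sigma_y * sigma_s)\ncov = np.mean((y - y.mean()) * (s - s.mean()))   # population covariance\nrho = cov / (y.std() * s.std())                  # population standard deviations\nassert np.isclose(rho, np.corrcoef(y, s)[0, 1])  # matches numpy's Pearson\nprint(rho)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},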
{
"text": "Spearman's \u03c1 is defined as the Pearson's \u03c1 between two rankings. We first convert labels into rankings: rg y , and convert scores into rankings: rg s . Then, using Eq. 1, Spearman's \u03c1 is defined as \u03c1 rg y rg s . When converting labels into rankings, texts that have the same level are regarded as ties in a ranking. While there are many ways to handle ties, the mid-rank method is usually used in calculating Spearman's \u03c1 (Amerise and Tarsitano, 2015). This method simply uses the average of ranks for the rank of a tie. For example, let us consider an array of labels [2, 1, 1, 0]. The two 1s in this array are ties taking the 2nd and 3rd ranks. As the average of 2 and 3 is 2.5, the mid-rank ranking of this array is [4, 2.5, 2.5, 1] .",
"cite_spans": [
{
"start": 719,
"end": 735,
"text": "[4, 2.5, 2.5, 1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
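{
"text": "A minimal sketch of the mid-rank conversion and Spearman's \u03c1 (added for illustration; it assumes scipy, whose rankdata function with method='average' implements the mid-rank method described above):\n\nfrom scipy.stats import rankdata, spearmanr\n\nlabels = [2, 1, 1, 0]\n# Mid-rank method: the two 1s tie for ranks 2 and 3, so both get 2.5.\nprint(rankdata(labels, method='average'))  # [4.  2.5 2.5 1. ]\n\n# Spearman's rho is Pearson's rho between the two mid-rank rankings.\nscores = [0.9, 0.4, 0.5, 0.1]  # hypothetical assessor scores\nrho, p_value = spearmanr(labels, scores)\nprint(rho, p_value)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},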
{
"text": "We first introduce the definition of Kendall's \u03c4 when there are no ties as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 = n c \u2212 n d Num. of all pairs",
"eq_num": "(2)"
}
],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "Kendall's \u03c4 focuses on the pairs of the given arrays: in our setting, (y i , s i ) and (y i , s i ) where i < i . n c denotes the number of concordant pairs, n d denotes the number of discordant pairs. The pair is said to be concordant if either both y i < y i and s i < s i hold or y i > y i and s i > s i ; otherwise, the pair is said to be discordant. If y i = y i , we call y i and y i ties. The same holds for s. Num. of all pairs = 1 2 N (N \u2212 1) when there are no ties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "In reality, y has many ties, so Eq. 2 cannot be used for the evaluation.. There are multiple correction methods to account for ties in Kendall's \u03c4 ; they are named \u03c4 -a, \u03c4 -b, and \u03c4 -c. In our setting, namely unsupervised readability assessment, tau-c should be used because y and s may have different scales.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "\u03c4 -b can be described as follows 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 b = n c \u2212 n d (n 0 \u2212 n 1 )(n 0 \u2212 n 2 )",
"eq_num": "(3)"
}
],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "n c denotes the number of concordant pairs, n d denotes the number of discordant pairs. n 0 = N (N \u2212 1)/2, n 1 is the sum of all possible pairs within each tied group for the first quantity, n 2 is the sum of all possible pairs within each tied group for the second quantity. \u03c4 -c can be written as follows 2 . To obtain m, we first construct the contingency table made 1 https://en.wikipedia.org/wiki/ Kendall_rank_correlation_coefficient# Tau-b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "2 https://en.wikipedia.org/wiki/ Kendall_rank_correlation_coefficient# Tau-c 15. deficit: The company <had a large deficit>. a: spent a lot more money than it earned b: went down a lot in value c: had a plan for its spending that used a lot of money d: had a lot of money stored in the bank 26. malign: His <malign> influence is still felt. a: good b: evil c: very important d: secret Figure 2 : Examples of the Vocabulary Size Test. Testtakers are asked to choose the option that paraphrases the part between \"<\" and \">\" from a, b, c, and d.",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 393,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "from the first and second quantity. Using the rows and columns of the table, m is defined as min(num. of rows, num. of columns).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "\u03c4 c = 2(n c \u2212 n d ) N 2 m\u22121 m (4) 3 Proposed Method: Vocabulary Testing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
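{
"text": "Both tie-corrected variants of Kendall's \u03c4 are available in scipy (a sketch for illustration; scipy.stats.kendalltau supports variant='b' and variant='c' in recent versions, and the data below are hypothetical):\n\nfrom scipy.stats import kendalltau\n\nlabels = [0, 0, 1, 1, 2, 2]               # gold levels with many ties\nscores = [0.2, 0.1, 0.4, 0.5, 0.7, 0.9]   # hypothetical real-valued scores\n\ntau_b, p_b = kendalltau(labels, scores, variant='b')  # Eq. 3\ntau_c, p_c = kendalltau(labels, scores, variant='c')  # Eq. 4\nprint(tau_b, tau_c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},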
{
"text": "This section describes our unsupervised readability assessor that employs a novel approach: instead of using valuable readability labels as the source of text difficulty for typical second language learners, our proposed method uses vocabulary tests as the source of word difficulty for typical second language learners and obtains readability scores based on accurately estimated word difficulty. To this end, this section explains how to analyze vocabulary test result data to obtain word difficulty. Fig. 2 shows example questions from the vocabulary size test, a widely used vocabulary test in applied linguistics (Beglar and Nation, 2007) . Each question asks about a word in a multiple-choice question format. The test consists of 100 questions like those shown in Fig. 2 . Ehara (2018) used this test to have 100 second-language learners take the test and to collect their responses. Their data were published and made publicly available. We used their dataset to train our classifiers.",
"cite_spans": [
{
"start": 618,
"end": 643,
"text": "(Beglar and Nation, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 780,
"end": 792,
"text": "Ehara (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 503,
"end": 509,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 771,
"end": 777,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Correlation coefficients",
"sec_num": "2.2"
},
{
"text": "We want to analyze vocabulary test results to obtain word difficulty values encoding learners' language knowledge. To this end, we employed the idea of item response theory (Baker, 2004), a statistical model that can estimate learners' abilities and test questions' difficulties from the learners' responses to the questions. Let V be the set of vocabulary, and let L be the set of learners. Let z v,l \u2208 {0, 1} be the result of whether learner l \u2208 L correctly answered the question for word v \u2208 V: z l,v = 1 if l answered correctly for word v; otherwise, z l,v = 0. Correct answers usually imply that l knows word v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},
{
"text": "Then, by using {z v,l } as the training data, we train the following model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z = 1|v, l) = sigmoid(a l \u2212 d v )",
"eq_num": "(5)"
}
],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},
{
"text": "In Eq. 5, a l is the ability parameter of learner l, d v is the difficulty of word w, and sigmoid denotes the logistic sigmoid function, i.e., sigmoid(x) = 1 1+exp(\u2212x) . The logistic sigmoid function is the binary version of the softmax function, which is frequently used in neural classifiers. It is a monotonously increasing function ranging within (0, 1). As sigmoid(0) = 1 1+1 = 1 2 , when a learner's ability a l is larger than the word difficulty d v , the probability that learner l knows word v can be written as follows: p(z = 1|v, l) > 1 in Eq. 5. Likewise, by using Eq. 5, we can compare a learner's ability and word difficulty in the same dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},
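{
"text": "A small numeric illustration of Eq. 5 (a sketch added for clarity; the ability and difficulty values are hypothetical): when the learner's ability exceeds the word's difficulty, the predicted probability of knowing the word exceeds 1/2.\n\nimport math\n\ndef p_knows(ability, difficulty):\n    # Eq. 5: p(z = 1 | v, l) = sigmoid(a_l - d_v)\n    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))\n\nprint(p_knows(1.5, 0.5))   # ability > difficulty -> probability > 1/2\nprint(p_knows(0.5, 0.5))   # ability == difficulty -> exactly 1/2\nprint(p_knows(-0.5, 0.5))  # ability < difficulty -> probability < 1/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},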
{
"text": "To estimate learner ability and word difficulty, z v,l is given as z in Eq. 5 in the training phase. In this way, in item response theory, learner ability and word difficulty are comparable, and these parameters are to be estimated from the test result data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating vocabulary test results: Item Response Theory",
"sec_num": "3.1"
},
{
"text": "In Eq. 5, d v denotes the word difficulty estimated from the vocabulary tests. Here, in addition to the word difficulty for the words within the vocabulary test, we also want to obtain word difficulty values for all words that may appear in the target language. To this end, we calculate d v from the word frequency in large balanced corpora as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining difficulty of words not in the vocabulary test",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d v = \u2212 K k=1 w k log(freq k (v) + 1)",
"eq_num": "(6)"
}
],
"section": "Obtaining difficulty of words not in the vocabulary test",
"sec_num": "3.2"
},
{
"text": "In Eq. 6, K is the number of corpora to use, freq k (v) denotes the frequency of word v in the k-th corpus, and w k is the weight parameter of the k-th corpus. In summary, given the vocabulary test results {z v,l } and corpus frequency features freq k (v), we can estimate the parameters: namely, the weight of the k-th corpus w k and learner l's ability a l . By putting Eq. 5 and Eq. 6 together, we can see that the inside formula of the sigmoid function is linear with respect to the parameters to be estimated because all terms consist of a product of a parameter and a constant calculated from features, and no term has a product of two or more parameters. As the sigmoid function of a linear combination of parameters can be reformulated as a logistic regression, we can implement Eq. 5 and Eq. 6 by using typical logistic regression classifiers such as scikit-learn 3 and LIBLINEAR 4 . We will release our code upon the acceptance of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining difficulty of words not in the vocabulary test",
"sec_num": "3.2"
},
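{
"text": "A minimal sketch of this logistic-regression reformulation with scikit-learn (the data and variable names are illustrative, not the released implementation): each response z_{v,l} becomes one training row whose features concatenate a one-hot indicator of learner l, whose weight becomes the ability a_l, with the log-frequency features of word v, whose weights become the corpus weights w_k.\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\n# Hypothetical data: z[l][v] = 1 if learner l answered the question for\n# word v correctly; logfreq[v][k] = log(freq_k(v) + 1) for K = 2 corpora.\nz = np.array([[1, 1, 1, 0],\n              [1, 0, 1, 0],\n              [0, 0, 1, 0]])\nlogfreq = np.array([[2.1, 1.9], [1.2, 0.8], [3.0, 2.7], [0.3, 0.5]])\nnum_learners, num_words = z.shape\n\nrows, targets = [], []\nfor l in range(num_learners):\n    for v in range(num_words):\n        onehot = np.zeros(num_learners)\n        onehot[l] = 1.0  # this position's weight becomes the ability a_l\n        # a_l - d_v = a_l + sum_k w_k log(freq_k(v) + 1): linear in (a, w)\n        rows.append(np.concatenate([onehot, logfreq[v]]))\n        targets.append(z[l, v])\n\n# fit_intercept=False keeps the model exactly sigmoid(a_l - d_v)\nclf = LogisticRegression(fit_intercept=False).fit(np.array(rows), np.array(targets))\nabilities = clf.coef_[0][:num_learners]       # estimated a_l\ncorpus_weights = clf.coef_[0][num_learners:]  # estimated w_k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining difficulty of words not in the vocabulary test",
"sec_num": "3.2"
},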
{
"text": "Note that we do not use the valuable readability label {y i } in the training phase; hence, our method is categorized as an unsupervised method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining difficulty of words not in the vocabulary test",
"sec_num": "3.2"
},
{
"text": "After estimating the parameters using the abovementioned procedure, we use the following formula to obtain the readability of given T i . Here, l avg denotes the test-taker whose estimated ability parameter is closest to the average of the estimated ability parameter values {a l }s. Intuitively, the following equation calculates the probability that the average learner knows all the words that appear in T i and uses it as the readability score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Automatic Readability Assessor",
"sec_num": "3.3"
},
{
"text": "s i = score(T i ) = \u2212 1 |T i | log \uf8eb \uf8ed v\u2208T i p(z = 1|v, l avg ) \uf8f6 \uf8f8 . (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Automatic Readability Assessor",
"sec_num": "3.3"
},
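{
"text": "Given the fitted parameters, Eq. 7 reduces to averaging negative log-probabilities over the words of a text. A sketch with hypothetical difficulty values and helper names:\n\nimport math\n\ndef readability_score(text_words, difficulty, a_avg):\n    # Eq. 7: s_i = -(1/|T_i|) log prod_{v in T_i} p(z = 1 | v, l_avg)\n    #            = average over words of -log sigmoid(a_avg - d_v)\n    total = 0.0\n    for v in text_words:\n        p = 1.0 / (1.0 + math.exp(-(a_avg - difficulty[v])))\n        total += -math.log(p)\n    return total / len(text_words)\n\ndifficulty = {'company': -2.0, 'deficit': 1.5}  # hypothetical word difficulties\nprint(readability_score(['company', 'deficit'], difficulty, a_avg=0.3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Automatic Readability Assessor",
"sec_num": "3.3"
},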
{
"text": "4 Experimental Settings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Automatic Readability Assessor",
"sec_num": "3.3"
},
{
"text": "We used the OneStopEnglish dataset (Vajjala and Lu\u010di\u0107, 2018) for our evaluation because of the following reasons. First, it is one of the newest datasets. Second, it is publicly available and downloadable. Third, it is a reliable dataset in the sense that it has no known pitfalls when used as a corpus for evaluation. While Martinc et al. (2021) uses other corpora such as the WeeBit corpus (Xia et al., 2016) and the Newsela corpus (Xu et al., 2015) , both have known pitfalls when used for the evaluation of automatic readability assessment. The WeeBit corpus is not a parallel corpus, which is explained in the next subsection. This means that each level consists of totally different articles covering different topics.",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "(Vajjala and Lu\u010di\u0107, 2018)",
"ref_id": "BIBREF35"
},
{
"start": 325,
"end": 346,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
},
{
"start": 392,
"end": 410,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF36"
},
{
"start": 434,
"end": 451,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Dataset",
"sec_num": "4.1"
},
{
"text": "As some topics such as politics tend to use more difficult phrases than other topics, it is difficult to see how the topic of content influences the resulting performance values. The Newsela corpus is a parallel corpus, which removes the influence caused by topics. However, according to Martinc et al. (2021) , its readability labels can be easily identified from the average sentence length in a text: the average sentence length achieved 0.906 in the Pearson's \u03c1 correlation. Hence, even if a method works well on the Newsela corpus, it could be possible that the method merely inherently calculates and uses average sentence length.",
"cite_spans": [
{
"start": 288,
"end": 309,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Dataset",
"sec_num": "4.1"
},
{
"text": "Regarding the source of the dataset, Vajjala and Lu\u010di\u0107 (2018) says that \"onestopenglish.com is an English language learning resources website run by MacMillan Education, with over 700,000 users across 100 countries.\"",
"cite_spans": [
{
"start": 37,
"end": 61,
"text": "Vajjala and Lu\u010di\u0107 (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OneStopEnglish dataset",
"sec_num": "4.2"
},
{
"text": "The dataset has three levels: elementary, intermediate, and advanced. According to Vajjala and Lu\u010di\u0107 (2018) , the original articles were taken from the Guardian newspaper. The OneStopEnglish dataset is a parallel corpus, i.e, language teachers manually rewrote the original articles into the three aforementioned readability levels. Hence, one notable characteristic of this dataset is that all three levels have the same content with different readability levels. Hence, by using this dataset, we can avoid having classifiers learn differences in content or topic rather than readability levels.",
"cite_spans": [
{
"start": 83,
"end": 107,
"text": "Vajjala and Lu\u010di\u0107 (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OneStopEnglish dataset",
"sec_num": "4.2"
},
{
"text": "All three levels have 189 texts each, 567 texts in total. We split these texts into a training set consisting of 339 texts, a validation set consisting of 114 texts, and a test set consisting of 114 texts. The training set and validation sets were used to train solely supervised methods for comparison. Unsupervised methods did not use the training and validation sets; they used only the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OneStopEnglish dataset",
"sec_num": "4.2"
},
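{
"text": "A sketch of such a split (whether the original split was stratified by level is an assumption; the seed and shuffling are illustrative):\n\nfrom sklearn.model_selection import train_test_split\n\ntexts = [f'text_{i}' for i in range(567)]  # 189 texts per level x 3 levels\nlabels = [i // 189 for i in range(567)]    # 0: elementary, 1: intermediate, 2: advanced\n\n# 339 training texts; the remaining 228 are split evenly into validation and test.\ntrain, rest, y_train, y_rest = train_test_split(\n    texts, labels, train_size=339, stratify=labels, random_state=0)\nval, test, y_val, y_test = train_test_split(\n    rest, y_rest, test_size=114, stratify=y_rest, random_state=0)\nprint(len(train), len(val), len(test))  # 339 114 114",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OneStopEnglish dataset",
"sec_num": "4.2"
},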
{
"text": "First, we introduce the supervised methods that we used for comparison because it involves the training data mentioned right above. As the BERT-based sequence classification has been reported to achieve excellent results (Devlin et al., 2019) , we applied the standard BERT-based sequence classification approach involving pretraining and fine-tuning. For the pretrained model, we used bert-large-casedwhole-word-masking in the Huggingface models 5 .",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised methods",
"sec_num": "4.3.1"
},
{
"text": "Then, we fine-tuned the model using the aforementioned 339 training texts. For this fine-tuning, we used a GeForce RTX 3090 board that has 24 GiB of Graphical Processing Unit (GPU) memory. The fine-tuning and resulting model took up 16 GiB of GPU memory. This means that it is difficult to achieve similar performance without GPUs with large memory. We named this fine-tuned model spvBERT, in which \"spv\" denotes being supervised. In order to see how the size of training data has an influence on the performance, we also conducted experiments with 168 training texts, which amounted to almost half of the total 339 training texts. We named this model spvBERT_half.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised methods",
"sec_num": "4.3.1"
},
{
"text": "All the fine-tuning procedures were conducted using the Adam optimizer (Kingma and Ba, 2015) with a setting of 10 epochs and a 0.00001 training rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised methods",
"sec_num": "4.3.1"
},
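{
"text": "A condensed sketch of this fine-tuning setup (assuming the HuggingFace transformers and PyTorch APIs; this is not the authors' training script, and batching and scheduling details are omitted):\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\nname = 'bert-large-cased-whole-word-masking'\ntokenizer = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3).cuda()\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # 10 epochs, lr 0.00001\n\ndef train_step(batch_texts, batch_labels):\n    enc = tokenizer(batch_texts, truncation=True, padding=True, return_tensors='pt').to('cuda')\n    loss = model(**enc, labels=torch.tensor(batch_labels).cuda()).loss\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    return loss.item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised methods",
"sec_num": "4.3.1"
},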
{
"text": "For the implementation of conventional readability formulae, we used the readability PyPI package 6 . We used almost all readability formulae implemented in this package for our experiments: namely, Flesch-Kincaid (Flesch-Kincaid Grade Level, FKGL) (Kincaid et al., 1975) , ARI (Automated Readability Index) (Senter and Smith, 1967) , the Coleman-Liau Index (Coleman and Liau, 1975) , Flesch Reading Ease (Flesch, 1948) , the Gunning Fog Index (Gunning, 1952) , LIX (Bj\u00f6rnsson, 1968) , the SMOG Index (Mc Laughlin, 1969) , the RIX index (Anderson, 1983) , and the Dale-Chall Index (Dale and Chall, 1948) . Among these methods, notably, some formulae such as the Dale-Chall Index depend on their own list of easy/difficult words. Others, such as the Flesch-Kincaid grade level (FKGL), do not require such a list of difficult words but use superficial features such as the total number of syllables in a text. For space limitation, we do not cite all equations, however, we only cite FKGL as being famous and cite Dale-Chall More details of these formulae and their implementation are described on the project page. All of these readability formulae are unsupervised in the sense that they do not require any training data.",
"cite_spans": [
{
"start": 249,
"end": 271,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF24"
},
{
"start": 308,
"end": 332,
"text": "(Senter and Smith, 1967)",
"ref_id": "BIBREF34"
},
{
"start": 358,
"end": 382,
"text": "(Coleman and Liau, 1975)",
"ref_id": "BIBREF7"
},
{
"start": 405,
"end": 419,
"text": "(Flesch, 1948)",
"ref_id": "BIBREF22"
},
{
"start": 444,
"end": 459,
"text": "(Gunning, 1952)",
"ref_id": "BIBREF23"
},
{
"start": 466,
"end": 483,
"text": "(Bj\u00f6rnsson, 1968)",
"ref_id": "BIBREF6"
},
{
"start": 501,
"end": 520,
"text": "(Mc Laughlin, 1969)",
"ref_id": "BIBREF30"
},
{
"start": 537,
"end": 553,
"text": "(Anderson, 1983)",
"ref_id": "BIBREF2"
},
{
"start": 581,
"end": 603,
"text": "(Dale and Chall, 1948)",
"ref_id": "BIBREF8"
},
{
"start": 1012,
"end": 1022,
"text": "Dale-Chall",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
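{
"text": "For concreteness, FKGL takes the following standard form (a sketch; the readability package computes it with its own tokenization and syllable counting, so the counts below are inputs rather than derived here):\n\ndef fkgl(num_words, num_sentences, num_syllables):\n    # Flesch-Kincaid Grade Level (Kincaid et al., 1975)\n    return (0.39 * (num_words / num_sentences)\n            + 11.8 * (num_syllables / num_words) - 15.59)\n\n# E.g., a 100-word text with 8 sentences and 140 syllables:\nprint(fkgl(100, 8, 140))  # about 5.8, i.e., roughly a 6th-grade level",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},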
{
"text": "For the unsupervised neural language model, we also used the bert-large-cased-whole-wordmasking pretraining model and used the BertFor-MaskedLM function to obtain the perplexity of each sentence of the text of interest. We chose this pretraining model because Martinc et al. (2021) reported that they used bert-base-uncased and reported not so good performance, so we chose a BERT-based model larger than the one that they used. Note that, unlike neural sequence classification, language models are designed to be unsupervised and thus do not require any training data to fine-tune. All we need to do for the neural language model is to input each sentence in the text of interest and calculate the perplexity score of the inputted sentence.",
"cite_spans": [
{
"start": 260,
"end": 281,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
{
"text": "For splitting a text into sentences, we used the sent_tokenize function in the nltk Python package 7 . After the split, we simply used the average of the perplexity scores of each sentence in a text as the readability score. As the perplexity score of a sentence encodes the fluency of the inputted sentence, this roughly measures the overall fluency of the inputted sentence. We call this method BERTL-Mavg, where LM denotes a language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
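{
"text": "A sketch of one common way to score a text with a masked language model: mask each position in turn, average the negative log-likelihoods within a sentence, and then average over sentences as in BERTLMavg (whether this exactly matches the perplexity computation used in the experiments is an assumption):\n\nimport torch\nfrom nltk import sent_tokenize\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n\nname = 'bert-large-cased-whole-word-masking'\ntokenizer = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForMaskedLM.from_pretrained(name).eval()\n\ndef sentence_nll(sentence):\n    ids = tokenizer(sentence, return_tensors='pt').input_ids\n    nll = 0.0\n    for i in range(1, ids.size(1) - 1):         # skip [CLS] and [SEP]\n        masked = ids.clone()\n        masked[0, i] = tokenizer.mask_token_id  # mask one token at a time\n        with torch.no_grad():\n            logits = model(masked).logits\n        logp = torch.log_softmax(logits[0, i], dim=-1)[ids[0, i]]\n        nll -= logp.item()\n    return nll / (ids.size(1) - 2)\n\ndef text_score(text):\n    sents = sent_tokenize(text)  # split into sentences, then average\n    return sum(sentence_nll(s) for s in sents) / len(sents)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},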
{
"text": "As BERTLMavg does not use fine-tuning, it uses less GPU memory compared to spvBERT. However, BERTLMavg uses 1,793 MiB of GPU memory to output perplexity scores, which is still impractical in a low-computational-resource environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
{
"text": "According to Martinc et al. (2021) , BERT language models do not perform good results. Hence, while not directly comparable because we could not obtain their test set, for a rough comparison, we cited their best model on the OneStopEnglish dataset, TCN RSRS-simple. The model is temporal convolutional network (TCN) trained on the Simplified Wikipedia corpus. For space limitations, 7 nltk.org refer to Martinc et al. (2021) for the details of this method.",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
},
{
"start": 383,
"end": 384,
"text": "7",
"ref_id": null
},
{
"start": 403,
"end": 424,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
{
"text": "Proposed model was trained on a previously published and publicly available vocabulary dataset (Ehara, 2018) . For the corpus word frequency, we used the frequencies taken from the British National Corpus (BNC Consortium, 2007) and the Corpus of Contemporary American English (COCA) (Davies, 2008) . Both corpora are balanced general corpora used extensively in English education (Nation, 2006) . Especially, the word frequencies of these corpora are important resources for determining word difficulty in English education. For counting text frequencies, we used nltk.stem.WordNetLemmatizer in the nltk package to lemmatize words appearing in running texts.",
"cite_spans": [
{
"start": 95,
"end": 108,
"text": "(Ehara, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 205,
"end": 227,
"text": "(BNC Consortium, 2007)",
"ref_id": null
},
{
"start": 283,
"end": 297,
"text": "(Davies, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 380,
"end": 394,
"text": "(Nation, 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
{
"text": "Our Proposed model uses the average of the negative log likelihood that an average learner knows each word in the text as presented in Eq. 7. As our Proposed model uses the BNC and COCA word frequencies, it could be possible that these word frequencies have an essential influence on the performance of the Proposed model. To check this, we also measured the correlation between the gold labels and the average negative log of the unigram probability values of the given text in each corpus. We name these feature-based methods as BNC and COCA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},
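{
"text": "A sketch of these baselines (the frequency table and the add-one smoothing for out-of-vocabulary words are illustrative assumptions):\n\nimport math\n\ndef unigram_neg_log_prob(text_words, freq):\n    # Average negative log unigram probability of a text under one corpus.\n    total = sum(freq.values())\n    nll = [-math.log((freq.get(w, 0) + 1) / (total + len(freq)))\n           for w in text_words]\n    return sum(nll) / len(nll)\n\nbnc_freq = {'the': 6000000, 'deficit': 2000}  # hypothetical counts\nprint(unigram_neg_log_prob(['the', 'deficit'], bnc_freq))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised methods",
"sec_num": "4.3.2"
},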
{
"text": "This subsection describes the experimental results showing the problem of using Pearson's \u03c1 in evaluation. Tab. 1 shows the experimental results. The columns of Tab. 1 show the rank correlation coefficients introduced in the previous sections. Namely, they are Spearman's \u03c1, Kendall's \u03c4 with tie correction type b (\u03c4 -b), and Kendall's \u03c4 with tie correction type c (\u03c4 -c). Pearson's \u03c1 is shown in the rightmost column. As we explained in previous sections, Pearson's \u03c1 is affected by the linearity of scores. To see how Pearson's \u03c1 is affected by the linearity of scores, below each unsupervised method M, we show exp(M) to indicate the resulting performance values when we replaced the scores of M with the exponentilized the scores of M, i.e, exp(the score of M) to remove linearity. The distinction of \"unsupervised\" and \"supervised\" is clearly marked in the leftmost column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Pearson's \u03c1 and performance",
"sec_num": "4.4"
},
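{
"text": "The effect of this transformation is easy to reproduce on synthetic data (a sketch): exponentiating the scores leaves Spearman's \u03c1 and Kendall's \u03c4 unchanged but can sharply reduce Pearson's \u03c1.\n\nimport numpy as np\nfrom scipy.stats import pearsonr, spearmanr, kendalltau\n\nlabels = np.repeat([0, 1, 2], 20)  # synthetic gold levels\nscores = labels + np.random.default_rng(0).normal(0, 0.3, labels.size)\n\nfor s in (scores, np.exp(scores)):                 # original vs. exponentiated\n    print(pearsonr(labels, s)[0],                  # changes under exp\n          spearmanr(labels, s)[0],                 # unchanged: exp is monotonic\n          kendalltau(labels, s, variant='c')[0])   # unchanged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Pearson's \u03c1 and performance",
"sec_num": "4.4"
},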
{
"text": "In Tab vised methods except for BNC and COCA, the correlations of the exponentialized scores measured by Pearson's \u03c1 are closer to 0 than their original scores. In contrast, the rank correlation coefficient values are kept unchanged because exp is a monotonous function and hence the ranking is not altered by the use of exp. The reason why the performance values of BNC and COCA seem slightly increased is presumably because of noise: BNC and COCA did not correlate with the readability labels statistically significantly in the first place. Neither did exp(BNC) and exp(COCA). The drop in performance scores is enormous for some methods such as Proposed: its performance was originally 0.715 but plunges to 0.260 by using exp. This result indicates the vulnerability of using Pearson's \u03c1 in the evaluation: the evaluation by Pearson's \u03c1 is strongly affected by how linear the scores are, suggesting the use of rank correlation for better evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 6,
"text": "Tab",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Pearson's \u03c1 and performance",
"sec_num": "4.4"
},
{
"text": "TCN RSRS-simple is the best model using the same dataset in Martinc et al. (2021) . As they show only the performance measured by the Pearson correlation, we wrote \u2212 for other rank correlation coefficients. Also note that we cannot make direct comparison as we could not obtain their test set used for their experiments. This is marked by the (*) after the value. While we can see that Proposed achieved better correlation than TCN RSRS-simple, we are not sure if this result indicates the linearity of the methods or the superiority of Proposed against TCN RSRS-simple. Like-wise, the use of Pearson's \u03c1 only makes followup papers' efforts to compare results difficult.",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "Martinc et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Pearson's \u03c1 and performance",
"sec_num": "4.4"
},
{
"text": "In all unsupervised methods, our Proposed method achieved the best results in all rank correlation coefficients and Pearson's \u03c1, although we need to be careful with the interpretation of Pearson's rho as explained in Sec. 2.2. These results were statistically significant (p < 0.01): all correlation coefficients can also be used for statistical testing. In each of the statistical tests, the null hypothesis is that no association exists between the scores and the gold labels. When measured using Spearman's \u03c1, Proposed achieved a value of 0.730, which is close to 0.751, the performance achieved by supervised BERT using half of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.5"
},
{
"text": "BERTLMavg did not achieve good results in predicting readability labels. This result suggests that perplexity and readability are different measures and that, to measure readability, we need to obtain and make use of the information about what a typical language learner knows about the target second language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.5"
},
{
"text": "Interestingly, BNC and COCA achieved poor results in predicting readability labels. This result shows that the reason that Proposed method outperformed the others is not merely because the features that Proposed used are excellent. A good combination of the two features results in significant results. The use of only one of the two does not achieve good results. Hence, we can see that Proposed works excellently for making the combination of the two corpus-based features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.5"
},
{
"text": "For a comparison with supervised models, Tab. 1 shows their performances: spvBERT and spvBERT_half. Supervised models output labels rather than scores in their prediction phase: we directly used these labels to calculate rank correlation coefficients for a fair comparison with unsupervised models. Leveraging the supervision, they outperformed most of the unsupervised methods in all rank correlation coefficients. This means that using valuable supervision yields great improvement in the predictive performance of readability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.5"
},
{
"text": "The performance differences among spvBERT, BERTLMavg, and Proposed can be interpreted as follows. BERT is a large model trying to use as much information as possible from a sentence, such as syntactic structure. Hence, it is difficult for the model to find useful information contributing to readability without supervision. Proposed is a bagof-words model that is designed to be lightweight by sacrificing such complicated factors. Hence, the performance difference between spvBERT and Proposed can be regarded as a degree that information beyond word difficulty -such as syntactic information or sentence context -accounts for readability. While this is beyond the focus of this paper, a detailed error analysis between spvBERT and Proposed may lead to understanding what kind of syntactic information or contexts in a sentence contribute to readability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.5"
},
{
"text": "We used a Core i7-10700K (3.80 GHz) machine with a GeForce RTX 3090 board for all experiments. The BERTLMavg, which is an unsupervised BERT language model, uses 1, 793 MiB GPU memory. In contrast, Proposed is merely a logistic regression and does not require as GPU for practical use. In addition, the model's features are smaller than those of the BERT models. Proposed uses the BNC and COCA frequencies, which amount to 10 MiB of CPU memory, which is roughly 1 100 of that used by the unsupervised BERT models. In terms of speed, to classify all texts in the test set, while BERTLMavg utilizes 368 s, Proposed utilizes only 5.37 s. This indicates that the Proposed is 68.5 times faster than BERTLMavg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory and Speed",
"sec_num": "4.6"
},
{
"text": "In this paper, we discussed the necessity to use rank correlation coefficients to evaluate automatic readability assessment. The problem that Pearson correlation coefficients reflect not only the correlation between two scores but also the linearity of the scores is not particularly novel and has been pointed out for a long time. This study showed that this problem has a significant impact in the evaluation of the ARA task. In fact, in the recent evaluation of the ARA task (Martinc et al., 2021) , the problem of linearity in the Pearson coefficients was not addressed and its evaluation simply uses the Pearson correlation coefficients. To the best of our knowledge, this is the first study to demonstrate the effect of this problem on the performance values in the ARA task and examine the extent to which the linearity of the scores affects the scores in Tab. 1.",
"cite_spans": [
{
"start": 478,
"end": 500,
"text": "(Martinc et al., 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we termed the Proposed method as \"unsupervised,\" according to (Martinc et al., 2021) . They termed methods that do not use manually annotated readability labels as \"unsupervised\" even if the methods use supervised machine learning. In fact, the proposed method is trained using the vocabulary test dataset (Ehara, 2018) . Phrases, such as \"the most average learner\" and \"learner ability, all refer to the learners on this vocabulary test dataset. In this study, knowing that the term supervised/unsupervised is misleading, we deliberately described the proposed method as \"unsupervised\" for easy comparison with previous studies. In NLP, the Proposed method is closely related to complex word identification (CWI) tasks (Yimam et al., 2018; Paetzold and Specia, 2016 ). CWI is a task that aims to discover difficult words in a text. The relationship between CWI and personalized text readability was previously studied in . The task of obtaining the difficulty of an English word for each individual ESL learner, as we did in this study, can be regarded as personalized CWI (Ehara et al., 2012 (Ehara et al., , 2014 8 . Personalized CWI has many downstream applications in NLP such as lexical simplification Yeung, 2018, 2019) , text recommendation for language learners (Ehara et al., 2013; Lee, 2021) , and translator selection in crowdsourcing (Ehara et al., 2016) . Some studies focus on the relationship between word semantics and word difficulty (Ehara et al., 2014; Beinborn et al., 2016; Ehara, 2020b) . Regarding the interpretability of CWI classifiers, Ehara (2020a) studied the relationship CWI classifiers' weights and vocabulary sizes.",
"cite_spans": [
{
"start": 77,
"end": 99,
"text": "(Martinc et al., 2021)",
"ref_id": "BIBREF29"
},
{
"start": 321,
"end": 334,
"text": "(Ehara, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 735,
"end": 755,
"text": "(Yimam et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 756,
"end": 781,
"text": "Paetzold and Specia, 2016",
"ref_id": "BIBREF33"
},
{
"start": 1089,
"end": 1108,
"text": "(Ehara et al., 2012",
"ref_id": "BIBREF17"
},
{
"start": 1109,
"end": 1130,
"text": "(Ehara et al., , 2014",
"ref_id": "BIBREF16"
},
{
"start": 1131,
"end": 1132,
"text": "8",
"ref_id": null
},
{
"start": 1223,
"end": 1241,
"text": "Yeung, 2018, 2019)",
"ref_id": null
},
{
"start": 1286,
"end": 1306,
"text": "(Ehara et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 1307,
"end": 1317,
"text": "Lee, 2021)",
"ref_id": "BIBREF28"
},
{
"start": 1362,
"end": 1382,
"text": "(Ehara et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 1467,
"end": 1487,
"text": "(Ehara et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 1488,
"end": 1510,
"text": "Beinborn et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 1511,
"end": 1524,
"text": "Ehara, 2020b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we investigated the correlation coefficients to evaluate the performance of unsupervised automatic readability assessors. The experimental results showed that the readability performances measured by Pearson's \u03c1 are strongly affected by the linearity of the output scores, whereas those measured by rank correlations are not affected. This indicates the appropriateness of using rank correlation coefficients to evaluate unsupervised automatic readability assessors. We also proposed a lightweight unsupervised assessor based on word difficulty for typical second language learners calculated from a vocabulary test result dataset. This",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "https://scikit-learn.org/stable/ 4 https://www.csie.ntu.edu.tw/~cjlin/ liblinear/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/models 6 https://pypi.org/project/readability/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The journal version of(Ehara et al., 2012) is. assessor could achieve the best score among all the compared unsupervised assessors.In the future, we plan to conduct a more detailed analysis to investigate which rank correlations, including those not introduced in this paper, are more appropriate for the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This study was supported by JST ACT-X Grant Number JPMJAX2006 and JSPS KAKENHI Grant Number 18K18118. We used the ABCI infrastructure from AIST for the computational resources. We appreciate anonymous reviewers for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical methods for ranking data",
"authors": [
{
"first": "Mayer",
"middle": [],
"last": "Alvo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Philip",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "1341",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mayer Alvo and LH Philip. 2014. Statistical methods for ranking data, volume 1341. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Correction methods for ties in rank correlations",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ilaria",
"suffix": ""
},
{
"first": "Agostino",
"middle": [],
"last": "Amerise",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tarsitano",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Applied Statistics",
"volume": "42",
"issue": "12",
"pages": "2584--2596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilaria L Amerise and Agostino Tarsitano. 2015. Correc- tion methods for ties in rank correlations. Journal of Applied Statistics, 42(12):2584-2596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lix and rix: Variations on a little-known readability index",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of Reading",
"volume": "26",
"issue": "6",
"pages": "490--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Anderson. 1983. Lix and rix: Variations on a little-known readability index. Journal of Reading, 26(6):490-496.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Item Response Theory : Parameter Estimation Techniques, Second Edition",
"authors": [
{
"first": "B",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank B. Baker. 2004. Item Response Theory : Param- eter Estimation Techniques, Second Edition. CRC Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A vocabulary size test. The Language Teacher",
"authors": [
{
"first": "David",
"middle": [],
"last": "Beglar",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Nation",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "31",
"issue": "",
"pages": "9--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Beglar and Paul Nation. 2007. A vocabulary size test. The Language Teacher, 31(7):9-13.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Predicting the Spelling Difficulty of Words for Language Learners",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "73--83",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0508"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2016. Predicting the Spelling Difficulty of Words for Language Learners. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 73-83, San Diego, CA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "L\u00e4sbarhet, Stockholm. BNC Consortium",
"authors": [
{
"first": "C",
"middle": [
"H"
],
"last": "Bj\u00f6rnsson",
"suffix": ""
}
],
"year": 1968,
"venue": "Distributed by Bodleian Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. H. Bj\u00f6rnsson. 1968. L\u00e4sbarhet, Stockholm. BNC Consortium. 2007. The british national cor- pus, version 3 (bnc xml edition). Distributed by Bodleian Libraries, University of Oxford, on behalf of the BNC Consortium http://www.natcorp. ox.ac.uk/.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A computer readability formula designed for machine scoring",
"authors": [
{
"first": "Meri",
"middle": [],
"last": "Coleman",
"suffix": ""
},
{
"first": "Ta",
"middle": [
"Lin"
],
"last": "Liau",
"suffix": ""
}
],
"year": 1975,
"venue": "Journal of Applied Psychology",
"volume": "60",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A formula for predicting readability: Instructions. Educational research bulletin",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Jeanne",
"middle": [
"S"
],
"last": "Chall",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "",
"issue": "",
"pages": "37--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Dale and Jeanne S Chall. 1948. A formula for predicting readability: Instructions. Educational re- search bulletin, pages 37-54.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The corpus of contemporary american english (coca)",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Davies. 2008. The corpus of contemporary amer- ican english (coca). Available online at https: //www.english-corpora.org/coca/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proc. of NAACL, pages 4171-4186, Minneapolis, Minnesota.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building an English Vocabulary Knowledge Dataset of Japanese English-as-a-Second-Language Learners Using Crowdsourcing",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2018. Building an English Vocabu- lary Knowledge Dataset of Japanese English-as-a- Second-Language Learners Using Crowdsourcing. In Proc. of LREC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Uncertainty-Aware Personalized Readability Assessments for Second Language Learners",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2019,
"venue": "18th IEEE International Conference On Machine Learning And Applications (ICMLA)",
"volume": "",
"issue": "",
"pages": "1909--1916",
"other_ids": {
"DOI": [
"10.1109/ICMLA.2019.00307"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2019. Uncertainty-Aware Personalized Readability Assessments for Second Language Learners. In 2019 18th IEEE International Con- ference On Machine Learning And Applications (ICMLA), pages 1909-1916.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Interpreting neural CWI classifiers' weights as vocabulary size",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "171--176",
"other_ids": {
"DOI": [
"10.18653/v1/2020.bea-1.17"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2020a. Interpreting neural CWI classifiers' weights as vocabulary size. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 171-176, Seattle, WA, USA \u2192 Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural rasch model: How do word embeddings adjust word difficulty?",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2019,
"venue": "Computational Linguistics -16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019",
"volume": "",
"issue": "",
"pages": "88--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2020b. Neural rasch model: How do word embeddings adjust word difficulty? In Computa- tional Linguistics -16th International Conference of the Pacific Association for Computational Lin- guistics, PACLING 2019, Hanoi, Vietnam, October 11-13, 2019, Revised Selected Papers, pages 88-96, Singapore. Springer Singapore.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Assessing Translation Ability through Vocabulary Ability Assessment",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Yukino",
"middle": [],
"last": "Baba",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Yukino Baba, Masao Utiyama, and Ei- ichiro Sumita. 2016. Assessing Translation Ability through Vocabulary Ability Assessment. In Proc. of IJCAI.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Formalizing Word Sampling for Vocabulary Prediction as Graph-based Active Learning",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Hidekazu",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "1374--1384",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1143"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Yusuke Miyao, Hidekazu Oiwa, Issei Sato, and Hiroshi Nakagawa. 2014. Formalizing Word Sampling for Vocabulary Prediction as Graph-based Active Learning. In Proc. of EMNLP, pages 1374- 1384.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining Words in the Minds of Second Language Learners: Learner-Specific Word Difficulty",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hidekazu",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "799--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Issei Sato, Hidekazu Oiwa, and Hiroshi Nak- agawa. 2012. Mining Words in the Minds of Sec- ond Language Learners: Learner-Specific Word Dif- ficulty. In Proceedings of COLING 2012, pages 799-814, Mumbai, India. The COLING 2012 Orga- nizing Committee.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mining Words in the Minds of Second Language Learners for Learner-specific Word Difficulty",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hidekazu",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Information Processing",
"volume": "26",
"issue": "",
"pages": "267--275",
"other_ids": {
"DOI": [
"10.2197/ipsjjip.26.267"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Issei Sato, Hidekazu Oiwa, and Hiroshi Nak- agawa. 2018. Mining Words in the Minds of Second Language Learners for Learner-specific Word Diffi- culty. Journal of Information Processing, 26:267- 275.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Personalized Reading Support for Second-language Web Documents",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Nobuyuki",
"middle": [],
"last": "Shimizu",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Ninomiya",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2438653.2438666"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Nobuyuki Shimizu, Takashi Ninomiya, and Hiroshi Nakagawa. 2013. Personalized Read- ing Support for Second-language Web Documents.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Comparison of Features for Automatic Readability Assessment",
"authors": [
{
"first": "Lijun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Huenerfauth",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "276--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A Comparison of Features for Automatic Readability Assessment. pages 276- 284.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A new readability yardstick",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Flesch",
"suffix": ""
}
],
"year": 1948,
"venue": "Journal of Applied Psychology",
"volume": "32",
"issue": "3",
"pages": "221--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221-233.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Technique of Clear Writing",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Gunning",
"suffix": ""
}
],
"year": 1952,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Gunning. 1952. The Technique of Clear Writ- ing. McGraw-Hill.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel",
"authors": [
{
"first": "J",
"middle": [
"Peter"
],
"last": "Kincaid",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"P"
],
"last": "Fishburne",
"suffix": "Jr"
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"S"
],
"last": "Chissom",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability in- dex, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Technical Training Command Millington TN Re- search Branch.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Personalizing Lexical Simplification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chak Yan",
"middle": [],
"last": "Yeung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "224--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lee and Chak Yan Yeung. 2018. Personalizing Lexical Simplification. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 224-232, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Personalized Substitution Ranking for Lexical Simplification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chak Yan",
"middle": [],
"last": "Yeung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8634"
]
},
"num": null,
"urls": [],
"raw_text": "John Lee and Chak Yan Yeung. 2019. Personalized Substitution Ranking for Lexical Simplification. In Proceedings of the 12th International Conference on Natural Language Generation, pages 258-267, Tokyo, Japan. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An editable learner model for text recommendation for language learning. ReCALL",
"authors": [
{
"first": "John",
"middle": [
"S",
"Y"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John SY Lee. 2021. An editable learner model for text recommendation for language learning. ReCALL, pages 1-15.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Supervised and Unsupervised Neural Approaches to Text Readability",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Martinc",
"suffix": ""
},
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2021,
"venue": "Computational Linguistics",
"volume": "47",
"issue": "1",
"pages": "141--179",
"other_ids": {
"DOI": [
"10.1162/coli_a_00398"
]
},
"num": null,
"urls": [],
"raw_text": "Matej Martinc, Senja Pollak, and Marko Robnik- \u0160ikonja. 2021. Supervised and Unsupervised Neu- ral Approaches to Text Readability. Computational Linguistics, 47(1):141-179.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Smog grading-a new readability formula",
"authors": [
{
"first": "G",
"middle": [
"Harry"
],
"last": "McLaughlin",
"suffix": ""
}
],
"year": 1969,
"venue": "Journal of reading",
"volume": "12",
"issue": "8",
"pages": "639--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G Harry Mc Laughlin. 1969. Smog grading-a new read- ability formula. Journal of reading, 12(8):639-646.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A guide to appropriate use of correlation coefficient in medical research",
"authors": [
{
"first": "Mavuto",
"middle": [
"M"
],
"last": "Mukaka",
"suffix": ""
}
],
"year": 2012,
"venue": "Malawi medical journal",
"volume": "24",
"issue": "3",
"pages": "69--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mavuto M Mukaka. 2012. A guide to appropriate use of correlation coefficient in medical research. Malawi medical journal, 24(3):69-71.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "How Large a Vocabulary is Needed For Reading and Listening? Canadian Modern Language Review",
"authors": [],
"year": 2006,
"venue": "",
"volume": "63",
"issue": "",
"pages": "59--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Nation. 2006. How Large a Vocabulary is Needed For Reading and Listening? Canadian Modern Lan- guage Review, 63(1):59-82.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1669--1679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Paetzold and Lucia Specia. 2016. Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Pa- pers, pages 1669-1679, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Automated readability index",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Senter",
"suffix": ""
},
{
"first": "Edgar",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "RJ Senter and Edgar A Smith. 1967. Automated readability index. Technical report, CINCINNATI UNIV OH.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "On-eStopEnglish corpus: A new corpus for automatic readability assessment and text simplification",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Ivana",
"middle": [],
"last": "Lu\u010di\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0535"
]
},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala and Ivana Lu\u010di\u0107. 2018. On- eStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In Proceedings of the Thirteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 297-304, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Text Readability Assessment for Second Language Learners",
"authors": [
{
"first": "Menglin",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Kochmar",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0502"
]
},
"num": null,
"urls": [],
"raw_text": "Menglin Xia, Ekaterina Kochmar, and Ted Briscoe. 2016. Text Readability Assessment for Second Lan- guage Learners. In Proceedings of the 11th Work- shop on Innovative Use of NLP for Building Edu- cational Applications, pages 12-22, San Diego, CA. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Problems in Current Text Simplification Research: New Data Can Help",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "283--297",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00139"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in Current Text Simplification Re- search: New Data Can Help. Transactions of the As- sociation for Computational Linguistics, 3:283-297.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Personalized Text Retrieval for Learners of Chinese as a Foreign Language",
"authors": [
{
"first": "Chak",
"middle": [
"Yan"
],
"last": "Yeung",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "3448--3455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chak Yan Yeung and John Lee. 2018. Personalized Text Retrieval for Learners of Chinese as a Foreign Language. In Proc. of COLING, pages 3448-3455.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Ana\u00efs Tack, and Marcos Zampieri",
"authors": [
{
"first": "Seid",
"middle": [
"Muhie"
],
"last": "Yimam",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"H"
],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
},
{
"first": "Ana\u00efs",
"middle": [],
"last": "Tack",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "A Report on the Complex Word Identification Shared",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.09132[cs].ArXiv:1804.09132"
]
},
"num": null,
"urls": [],
"raw_text": "Seid Muhie Yimam, Chris Biemann, Shervin Malmasi, Gustavo H. Paetzold, Lucia Specia, Sanja \u0160tajner, Ana\u00efs Tack, and Marcos Zampieri. 2018. A Report on the Complex Word Identification Shared Task 2018. arXiv:1804.09132 [cs]. ArXiv: 1804.09132.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Overview of the previous and our approaches.",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "Experimental Results on the OneStopEnglish Dataset. For a method M, exp(M) denotes the correlations between the array of exp(M's score) and the gold labels. (*) denotes that the value is cited from other papers.",
"content": "<table/>"
}
}
}
}