|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:53:24.208573Z" |
|
}, |
|
"title": "Machine Learning-Driven Language Assessment", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Laflair", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Masato", |
|
"middle": [], |
|
"last": "Hagiwara", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe a method for rapidly creating language proficiency assessments, and provide experimental evidence that such tests can be valid, reliable, and secure. Our approach is the first to use machine learning and natural language processing to induce proficiency scales based on a given standard, and then use linguistic models to estimate item difficulty directly for computer-adaptive testing. This alleviates the need for expensive pilot testing with human subjects. We used these methods to develop an online proficiency exam called the Duolingo English Test, and demonstrate that its scores align significantly with other high-stakes English assessments. Furthermore, our approach produces test scores that are highly reliable, while generating item banks large enough to satisfy security requirements.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe a method for rapidly creating language proficiency assessments, and provide experimental evidence that such tests can be valid, reliable, and secure. Our approach is the first to use machine learning and natural language processing to induce proficiency scales based on a given standard, and then use linguistic models to estimate item difficulty directly for computer-adaptive testing. This alleviates the need for expensive pilot testing with human subjects. We used these methods to develop an online proficiency exam called the Duolingo English Test, and demonstrate that its scores align significantly with other high-stakes English assessments. Furthermore, our approach produces test scores that are highly reliable, while generating item banks large enough to satisfy security requirements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Language proficiency testing is an increasingly important part of global society. The need to demonstrate language skills-often through standardized testing-is now required in many situations for access to higher education, immigration, and employment opportunities. However, standardized tests are cumbersome to create and maintain. Lane et al. (2016) and the Standards for Educational and Psychological Testing (AERA et al., 2014 ) describe many of the procedures and requirements for planning, creating, revising, administering, analyzing, and reporting on high-stakes tests and their development.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 352, |
|
"text": "Lane et al. (2016)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 431, |
|
"text": "(AERA et al., 2014", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In practice, test items are often first written by subject matter experts, and then ''pilot tested'' with a large number of human subjects for psy- * Research conducted at Duolingo. chometric analysis. This labor-intensive process often restricts the number of items that can feasibly be created, which in turn poses a threat to security: Items may be copied and leaked, or simply used too often (Cau, 2015; Dudley et al., 2016) . Security can be enhanced through computeradaptive testing (CAT), by which a subset of items are administered in a personalized way (based on examinees' performance on previous items). Because the item sequences are essentially unique for each session, there is no single test form to obtain and circulate (Wainer, 2000) , but these security benefits only hold if the item bank is large enough to reduce item exposure (Way, 1998) . This further increases the burden on item writers, and also requires significantly more item pilot testing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 407, |
|
"text": "(Cau, 2015;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 428, |
|
"text": "Dudley et al., 2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 750, |
|
"text": "(Wainer, 2000)", |
|
"ref_id": "BIBREF69" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 859, |
|
"text": "(Way, 1998)", |
|
"ref_id": "BIBREF70" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For the case of language assessment, we tackle both of these development bottlenecks using machine learning (ML) and natural language processing (NLP). In particular, we propose the use of test item formats that can be automatically created, graded, and psychometrically analyzed using ML/NLP techniques. This solves the ''cold start'' problem in language test development, by relaxing manual item creation requirements and alleviating the need for human pilot testing altogether.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the pages that follow, we first summarize the important concepts from language testing and psychometrics ( \u00a72), and then describe our ML/NLP methods to learn proficiency scales for both words ( \u00a73) and long-form passages ( \u00a74). We then present evidence for the validity, reliability, and security of our approach using results from the Duolingo English Test, an online, operational English proficiency assessment developed using these methods ( \u00a75). After summarizing other related work ( \u00a76), we conclude with a discussion of limitations and future directions ( \u00a77). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we provide an overview of relevant language testing concepts, and connect them to work in machine learning and natural language processing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In psychometrics, item response theory (IRT) is a paradigm for designing and scoring measures of ability and other cognitive variables (Lord, 1980) . IRT forms the basis for most modern high-stakes standardized tests, and generally assumes:", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 147, |
|
"text": "(Lord, 1980)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. An examinee's response to a test item is modeled by an item response function (IRF);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2. There is a unidimensional latent ability for each examinee, denoted \u03b8;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "3. Test items are locally independent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In this work we use a simple logistic IRF, also known as the Rasch model (Rasch, 1993) . This expresses the probability p i (\u03b8) of a correct response to test item i as a function of the difference between the item difficulty parameter \u03b4 i and the examinee's ability parameter \u03b8:", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 86, |
|
"text": "(Rasch, 1993)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p i (\u03b8) = 1 1 + exp(\u03b4 i \u2212 \u03b8) .", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The response pattern from equation (1) is shown in Figure 1 . As with most IRFs, p i (\u03b8) monotonically increases with examinee ability \u03b8, and decreases with item difficulty \u03b4 i .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 59, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
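
{

"text": "To make equation (1) concrete, here is a minimal Python sketch of the Rasch IRF (our illustration; the function name is ours, not the paper's):\n\nimport math\n\ndef rasch_irf(theta, delta_i):\n    # Rasch IRF: p_i(theta) = 1 / (1 + exp(delta_i - theta))\n    return 1.0 / (1.0 + math.exp(delta_i - theta))\n\n# An examinee whose ability exactly matches the item difficulty answers\n# correctly half the time; ability above difficulty pushes p up.\nassert abs(rasch_irf(50.0, 50.0) - 0.5) < 1e-9\nassert rasch_irf(60.0, 50.0) > rasch_irf(40.0, 50.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Item Response Theory (IRT)",

"sec_num": "2.1"

},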
|
{ |
|
"text": "In typical standardized test development, items are first created and then ''pilot tested'' with human subjects. These pilot tests produce many examinee, item pairs that are graded correct or incorrect, and the next step is to estimate \u03b8 and \u03b4 i parameters empirically from these grades. The reader may recognize the Rasch model as equivalent to binary logistic regression for predicting whether an examinee will answer item i correctly (where \u03b8 represents a weight for the ''examinee feature,'' \u2212\u03b4 i represents a weight for the ''item feature,'' and the bias/intercept weight is zero). Once parameters are estimated, \u03b8s for the pilot population can be discarded, and \u03b4 i s are used to estimate \u03b8 for a future examinee, which ultimately determines his or her test score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
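
{

"text": "Since the paragraph above notes the equivalence between the Rasch model and binary logistic regression, the following sketch shows parameter estimation from pilot grades (our illustration using scikit-learn; the paper does not prescribe an implementation, and note that \u03b8 and \u03b4 are only identified up to a shared shift, which the regularizer here implicitly anchors):\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef fit_rasch(grades, n_examinees, n_items):\n    # grades: (examinee_index, item_index, correct) triples from piloting.\n    # Design matrix: +1 in the examinee column, -1 in the item column,\n    # no intercept, so the learned weights are thetas followed by deltas.\n    X = np.zeros((len(grades), n_examinees + n_items))\n    y = np.zeros(len(grades))\n    for row, (j, i, correct) in enumerate(grades):\n        X[row, j] = 1.0\n        X[row, n_examinees + i] = -1.0\n        y[row] = correct\n    model = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y)\n    thetas = model.coef_[0, :n_examinees]\n    deltas = model.coef_[0, n_examinees:]\n    return thetas, deltas",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Item Response Theory (IRT)",

"sec_num": "2.1"

},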
|
{ |
|
"text": "We focus on the Rasch model because item difficulty \u03b4 i and examinee ability \u03b8 are interpreted on the same scale. Whereas other IRT models exist to generalize the Rasch model in various ways (e.g., by accounting for item discrimination or examinee guessing), the additional parameters make them more difficult to estimate correctly (Linacre, 2014) . Our goal in this work is to estimate item parameters using ML/NLP (rather than traditional item piloting), and a Rasch-like model gives us a straightforward and elegant way to do this.", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 347, |
|
"text": "(Linacre, 2014)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Response Theory (IRT)", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Given a bank of test items and their associated \u03b4 i s, one can use CAT techniques to efficiently administer and score tests. CATs have been shown to both shorten tests (Weiss and Kingsbury, 1984) and provide uniformly precise scores for most examinees, by giving harder items to subjects of higher ability and easier items to those of lower ability (Thissen and Mislevy, 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 195, |
|
"text": "(Weiss and Kingsbury, 1984)", |
|
"ref_id": "BIBREF71" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 376, |
|
"text": "(Thissen and Mislevy, 2000)", |
|
"ref_id": "BIBREF63" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computer-Adaptive Testing (CAT)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Assuming test item independence, the conditional probability of an item response sequence r = r 1 , r 2 , . . . , r t given \u03b8 is the product of all the item-specific IRF probabilities:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computer-Adaptive Testing (CAT)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "p(r|\u03b8) = t i=1 p i (\u03b8) r i (1 \u2212 p i (\u03b8)) 1\u2212r i , (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computer-Adaptive Testing (CAT)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where r i denotes the graded response to item i (i.e., r i = 1 if correct, r i = 0 if incorrect).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computer-Adaptive Testing (CAT)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The goal of a CAT is to estimate a new examinee's \u03b8 as precisely as possible with as few items as possible. The precision of \u03b8 depends on the items in r: Examinees are best evaluated by items where \u03b4 i \u2248 \u03b8. However, because the true value of \u03b8 is unknown (this is, after all, the reason for testing!), we use an iterative adaptive algorithm. First, make a ''provisional'' estimate\u03b8 t \u221d argmax \u03b8 p(r t |\u03b8) by maximizing the likelihood of observed responses up to point t. Then, select the next item difficulty based on a ''utility'' function of the current estimate \u03b4 t+1 = f (\u03b8 t ). This process repeats until reaching some stopping criterion, and the final\u03b8 t determines the test score. Conceptually, CAT methods are analogous to active learning in the ML/NLP literature (Settles, 2012) , which aims to minimize the effort required to train accurate classifiers by adaptively selecting instances for labeling. For more discussion on CAT administration and scoring, see Segall (2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 772, |
|
"end": 787, |
|
"text": "(Settles, 2012)", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 970, |
|
"end": 983, |
|
"text": "Segall (2005)", |
|
"ref_id": "BIBREF56" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computer-Adaptive Testing (CAT)", |
|
"sec_num": "2.2" |
|
}, |
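
{

"text": "A minimal sketch of the scoring loop just described (our own: a grid-search MLE for the provisional estimate, and the simplest plausible utility function f, namely matching the next item's difficulty to the current ability estimate):\n\nimport math\n\ndef rasch_irf(theta, delta):\n    return 1.0 / (1.0 + math.exp(delta - theta))\n\ndef provisional_theta(responses):\n    # responses: (delta_i, r_i) pairs observed so far; maximize the\n    # likelihood from equation (2) over a coarse grid on the 100-point scale.\n    grid = [t / 2.0 for t in range(0, 201)]\n    def log_lik(theta):\n        return sum(r * math.log(rasch_irf(theta, d)) + (1 - r) * math.log(1.0 - rasch_irf(theta, d)) for d, r in responses)\n    return max(grid, key=log_lik)\n\ndef next_item_difficulty(theta_hat):\n    # Utility f: examinees are best measured by items with delta near theta.\n    return theta_hat\n\n# A correct answer at delta=40 plus a miss at delta=70 yields an estimate\n# midway between the two difficulties:\nprint(provisional_theta([(40, 1), (70, 0)]))  # -> 55.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Computer-Adaptive Testing (CAT)",

"sec_num": "2.2"

},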
|
{ |
|
"text": "The Common European Framework of Reference (CEFR) is an international standard for describing the proficiency of foreign-language learners (Council of Europe, 2001). Our goal is to create a test integrating reading, writing, listening, and speaking skills into a single overall score that corresponds to CEFR-derived ability. To that end, we designed a 100-point scoring system aligned to the CEFR levels, as shown in Table 1 . By its nature, the CEFR is a descriptive (not prescriptive) proficiency framework. That is, it describes what kinds of activities a learner should be able to do-and competencies they should have-at each level, but provides little guidance on what specific aspects of language (e.g., vocabulary) are needed to accomplish them. This helps the CEFR achieve its goal of applying broadly across languages, but also presents a challenge for curriculum and assessment development for any particular language. It is a coarse description of potential target domains-tasks, contexts, and conditions associated with language use (Bachman and Palmer, 2010; Kane, 2013) -that can be sampled from in order to create language curricula or assessments. As a result, it is left to the developers to define and operationalize constructs based on the CEFR, targeting a subset of the activities and competences that it describes. Such work can be seen in recent efforts undertaken by linguists to profile the vocabulary and grammar linked to each CEFR level for specific languages (particularly English). We leverage these lines of research to create labeled data sets, and train ML/NLP models that project item difficulty onto our CEFR-derived scale.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1046, |
|
"end": 1072, |
|
"text": "(Bachman and Palmer, 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1073, |
|
"end": 1084, |
|
"text": "Kane, 2013)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 418, |
|
"end": 425, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Common European Framework of Reference (CEFR)", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Our aim is to develop a test of general English language proficiency. According to the CEFR global descriptors, this means the ability to understand written and spoken language from varying topics, genres, and linguistic complexity, and to write or speak on a variety of topics and for a variety of purposes (Council of Europe, 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 333, |
|
"text": "(Council of Europe, 2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construct and Item Formats", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We operationalize part of this construct using five item formats from the language testing literature. These are summarized in Table 2 and collectively assess reading, writing, listening, and speaking skills. Note that these items may not require examinees to perform all the linguistic tasks relevant to a given CEFR level (as is true with any language test), but they serve as strong proxies for the underlying skills. These formats were selected because they can be automatically generated and graded at scale, and have decades of research demonstrating their ability to predict linguistic competence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Construct and Item Formats", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Two of the formats assess vocabulary breadth, known as yes/no vocabulary tests ( Figure 2 ). These both follow the same convention but vary in modality (text vs. audio), allowing us to measure both written and spoken vocabulary. For these items, the examinee must select, from among text or audio stimuli, which are real English words and which are English-like pseudowords (morphologically and phonologically plausible, but have no meaning in English). These items target a foundational linguistic competency of the CEFR, namely, the written and spoken vocabulary required to meet communication needs across CEFR levels (Milton, 2010) . Test takers who do well on these tasks have a broader lexical inventory, allowing for performance in a variety of language use situations. Poor performance on these tasks indicates a more basic inventory. The other three item formats come out of the integrative language testing tradition (Alderson et al., 1995) , which requires examinees to draw on a variety of language skills (e.g., grammar, discourse) and abilities (e.g., reading, writing) in order to respond correctly. Example screenshots of these item formats are shown in Figure 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 621, |
|
"end": 635, |
|
"text": "(Milton, 2010)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 950, |
|
"text": "(Alderson et al., 1995)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 89, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1178, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Construct and Item Formats", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The c-test format is a measure of reading ability (and to some extent, writing). These items contain passages of text in which some of the words have been ''damaged'' (by deleting the second half of every other word), and examinees must complete the passage by filling in missing letters from the damaged words. The characteristics of the damaged words and their relationship to the text ranges from those requiring lexical, phrasal, clausal, and discourse-level comprehension in order to respond correctly. These items indicate how well test takers can process texts of varied abstractness and complexity versus shorter more concrete texts, and have been shown to reliably predict other measures of CEFR level (Reichert et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 711, |
|
"end": 734, |
|
"text": "(Reichert et al., 2010)", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construct and Item Formats", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The dictation task taps into both listening and writing skills by having examinees transcribe an audio recording. In order to respond successfully, examinees must parse individual words and understand their grammatical relationships prior to typing what they hear. This targets the linguistic demands required for overall listening comprehension as described in the CEFR. The writing portion of the dictation task measures examinee knowledge of orthography and grammar (markers of writing ability at the A1/A2 level), and to some extent meaning. The elicited speech task taps into reading and speaking skills by requiring examinees to say a sentence out loud. Test takers must be able to process the input (e.g., orthography and grammatical structure) and are evaluated on their fluency, accuracy, and ability to use complex language orally (Van Moere, 2012). This task targets sentence-level language skills that incorporate simple-to-complex components of both the reading and speaking ''can-do'' statements in the CEFR framework. Furthermore, both the dictation and elicited speech tasks also measure working memory capacity in the language, which is regarded as shifting from lexical competence to structure and pragmatics somewhere in the B1/B2 range (Westhoff, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1256, |
|
"end": 1272, |
|
"text": "(Westhoff, 2007)", |
|
"ref_id": "BIBREF72" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construct and Item Formats", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "For the experiments in this section, a panel of linguistics PhDs with ESL teaching experience first compiled a CEFR vocabulary wordlist, synthesizing previous work on assessing active English language vocabulary knowledge (e.g., Capel, 2010 Capel, , 2012 Cambridge English, 2012) . This standardsetting step produced an inventory of 6,823 English words labeled by CEFR level, mostly in the B1/B2 range ( ). We did not conduct any formal annotator agreement studies, and the inventory does include duplicate entries for types at different CEFR levels (e.g., for words with multiple senses). We used this labeled wordlist to train a vocabulary scale model that assigns \u03b4 i scores to each yes/no test item ( Figure 2 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 240, |
|
"text": "Capel, 2010", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 254, |
|
"text": "Capel, , 2012", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 279, |
|
"text": "Cambridge English, 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 713, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Vocabulary Scale", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Culligan 2015found character length and corpus frequency to significantly predict word difficulty, according IRT analyses of multiple vocabulary tests (including the yes/no format). This makes them promising features for our CEFR-based vocabulary scale model. Although character length is straightforward, corpus frequencies only exist for real English words. For our purposes, however, the model must also make predictions for English-like pseudowords, since our CAT approach to yes/no items requires examinees to distinguish between words and pseudowords drawn from a similar CEFRbased scale range. As a proxy for frequency, we trained a character-level Markov chain language model on the OpenSubtitles corpus 1 using modified Kneser-Ney smoothing (Heafield et al., 2013) . We then use the log-likelihood of a word (or pseudoword) under this model as a feature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 750, |
|
"end": 773, |
|
"text": "(Heafield et al., 2013)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We also use the Fisher score of a word under the language model to generate more nuanced orthographic features. The Fisher score \u2207x of word x is a vector representing the gradient of its loglikelihood under the language model, parameterized by m: \u2207x = \u2202 \u2202m log p(x|m). These features are conceptually similar to trigrams weighted by tfidf (Elkan, 2005) , and are inspired by previous work leveraging information from generative sequence models to improve discriminative classifiers (Jaakkola and Haussler, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 352, |
|
"text": "(Elkan, 2005)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 511, |
|
"text": "(Jaakkola and Haussler, 1999)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.1" |
|
}, |
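
{

"text": "A self-contained sketch of both features described above (our illustration: an add-k smoothed character bigram model stands in for the paper's modified Kneser-Ney Markov chain model trained on OpenSubtitles, and the Fisher score is written assuming a softmax parameterization of the transition logits, for which the gradient is observed minus expected counts):\n\nimport math\nfrom collections import Counter, defaultdict\n\nclass CharBigramLM:\n    def __init__(self, corpus, k=0.5):\n        self.k = k\n        self.bigrams, self.unigrams = Counter(), Counter()\n        self.vocab = set('^$')\n        for word in corpus:\n            chars = '^' + word + '$'  # word boundary markers\n            self.vocab.update(chars)\n            for h, c in zip(chars, chars[1:]):\n                self.bigrams[(h, c)] += 1\n                self.unigrams[h] += 1\n    def prob(self, h, c):\n        # Add-k smoothed transition probability p(c|h).\n        return (self.bigrams[(h, c)] + self.k) / (self.unigrams[h] + self.k * len(self.vocab))\n    def log_likelihood(self, word):\n        chars = '^' + word + '$'\n        return sum(math.log(self.prob(h, c)) for h, c in zip(chars, chars[1:]))\n    def fisher_score(self, word):\n        # Gradient of log p(word|m) w.r.t. each transition logit:\n        # observed count minus expected count under the model.\n        grad = defaultdict(float)\n        chars = '^' + word + '$'\n        for h, c in zip(chars, chars[1:]):\n            for c2 in self.vocab:\n                grad[(h, c2)] -= self.prob(h, c2)\n            grad[(h, c)] += 1.0\n        return grad\n\nlm = CharBigramLM(['cat', 'cart', 'charts'])\nprint(lm.log_likelihood('cart'))  # also defined for pseudowords, e.g. 'clort'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "3.1"

},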
|
{ |
|
"text": "We consider two regression approaches to model the CEFR-based vocabulary scale: linear and weighted-softmax. Let y x be the CEFR level of word x, and \u03b4(y x ) be the 100-point scale value corresponding to that level from Table 1. For the linear approach, we treat the difficulty of a word as \u03b4 x = \u03b4(y x ), and learn a linear function with weights w on the features of x directly. For weighted-softmax, we train a six-way multinomial ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 228, |
|
"text": "Table 1.", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "= y \u03b4(y)p(y|x, w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "as a weighted sum over the posterior p(y|x, w).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.2" |
|
}, |
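
{

"text": "A minimal sketch of the weighted-softmax prediction (our illustration; the scale values below are placeholders standing in for the Table 1 mapping, and the posterior would come from the six-way multinomial logistic regression):\n\n# Placeholder delta(y) scale values for the six CEFR levels (see Table 1).\nCEFR_SCALE = {'A1': 10.0, 'A2': 25.0, 'B1': 45.0, 'B2': 60.0, 'C1': 80.0, 'C2': 95.0}\n\ndef expected_difficulty(posterior):\n    # delta_x = sum_y delta(y) * p(y|x, w), a weighted sum over the posterior.\n    return sum(CEFR_SCALE[y] * p for y, p in posterior.items())\n\n# A word judged mostly B1 with some B2 mass lands between the two levels:\nprint(expected_difficulty({'A2': 0.1, 'B1': 0.6, 'B2': 0.3}))  # -> 47.5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "3.2"

},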
|
{ |
|
"text": "Experimental results are shown in Table 3 . We report Pearson's r between predictions and expert CEFR judgments as an evaluation measure. The r ALL results train and evaluate using the same data; this is how models are usually analyzed in the applied linguistics literature, and provides a sense of how well the model captures word difficulty for real English words. The r XV results use 10-fold cross-validation; this is how models are usually evaluated in the ML/NLP literature, and gives us a sense of how well it generalizes to English-like pseudowords (as well as English words beyond the expert CEFR wordlist). Both models have a strong, positive relationship with expert human judgments (r ALL \u2265 .90), although they generalize to unseen words less well (r XV \u2264 .60). Linear regression appears to drastically overfit compared to weighted-softmax, since it reconstructs the training data almost perfectly while explaining little of the variance among cross-validated labels. The feature ablations also reveal that Fisher score features are the most important, while character length has little impact (possibly because length is implicitly captured by all the Fisher score features).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Sample predictions from the weighted-softmax vocabulary scale model are shown in Table 4 . The more advanced words (higher \u03b4) are rarer and mostly have Greco-Latin etymologies, whereas the more basic words are common and mostly have Anglo-Saxon origins. These properties appear to hold for non-existent pseudowords (e.g., 'cload' seems more Anglo-Saxon and more common than 'fortheric' would be). Although we did not conduct any formal analysis of pseudoword difficulty, these illustrations suggest that the model captures qualitative subtleties of the English lexicon, as they relate to CEFR level.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Boxplots visualizing the relationship between our learned scale and expert judgments are shown in Figure 3 (a). Qualitative error analysis reveals that the majority of mis-classifications are in fact under-predictions simply due to polysemy. For example: 'a just cause' (C1) vs. 'I just left' (\u03b4 = 24), and 'to part ways' (C2) vs. 'part of the way' (\u03b4 = 11). Because these more basic word senses do exist, our correlation estimates may be on the conservative side. Thus, using these predicted word difficulties to construct yes/no items (as we do later in \u00a75) seems justified.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For the experiments in this section, we leverage a variety of corpora gleaned from online sources, and use combined regression and ranking techniques to train longer-form passage scale models. These models can be used to predict difficulty for c-test, dictation, and elicited speech items (Figure 4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 298, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Passage Scale", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In contrast to vocabulary, little to no work has been done to profile CEFR text or discourse features for English, and only a handful of ''CEFRlabeled'' documents are even available for model training. Thus, we take a semi-supervised learning approach (Zhu and Goldberg, 2009) , first by learning to rank passages by overall difficulty, and then by propagating CEFR levels from a small number of labeled texts to many more unlabeled texts that have similar linguistic features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 276, |
|
"text": "(Zhu and Goldberg, 2009)", |
|
"ref_id": "BIBREF74" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Passage Scale", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Average word length and sentence length have long been used to predict text difficulty, and in fact measures based solely on these features have been shown to correlate (r = .91) with comprehension in reading tests (DuBay, 2006) . Inspired by our vocabulary model experiments, we also trained a word-level unigram language model to produce log-likelihood and Fisher score features (which is similar to a bag of words weighted by tf-idf ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 228, |
|
"text": "(DuBay, 2006)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We gathered an initial training corpus from online English language self-study Web sites (e.g., free test preparation resources for popular English proficiency exams). These consist of reference phrases and texts from reading comprehension exercises, all organized by CEFR level. We segmented these documents and assigned documents' CEFR labels to each paragraph. This resulted in 3,049 CEFR-labeled passages, containing very few A1 texts, and a peak at the C1 level ( ). We refer to this corpus as CEFR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpora", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Due to the small size of the CEFR corpus and its uncertain provenance, we also downloaded pairs of articles from English Wikipedia 2 that had also been rewritten for Simple English 3 (an alternate version that targets children and adult English learners). Although the CEFR alignment for these articles is unknown, we hypothesize that the levels for texts on the English site should be higher than those on the Simple English site; thus by comparing these article pairs a model can learn features related to passage difficulty, and therefore the CEFR level (in addition to expanding topical coverage beyond those represented in CEFR). This corpus includes 3,730 article pairs resulting in 18,085 paragraphs (from both versions combined). We refer to this corpus as WIKI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpora", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We also downloaded thousands of English sentences from Tatoeba, 4 a free, crowd-sourced database of self-study resources for language learners. We refer to this corpus as TATOEBA. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpora", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To rank passages for difficulty, we use a linear approach similar to that of Sculley (2010) . Let x be the feature vector for a text with CEFR label y. A standard linear regression can learn a weight vector w such that \u03b4(y) \u2248 x \u22ba w. Given a pair of texts, one can learn to rank by ''synthesizing'' a label and feature vector representing the difference between them:", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 91, |
|
"text": "Sculley (2010)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "[\u03b4(y 1 )\u2212\u03b4(y 2 )] \u2248 [x 1 \u2212x 2 ] \u22ba w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The resulting w can still be applied to single texts (i.e., by subtracting the 0 vector) in order to score them for ranking. Although the resulting predictions are not explicitly calibrated (e.g., to our CEFR-based scale), they should still capture an overall ranking of textual sophistication. This also allows us to combine the CEFR and WIKI corpora for training, since relative difficulty for the latter is known (even if precise CEFR levels are not).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Experiments", |
|
"sec_num": "4.3" |
|
}, |
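
{

"text": "A minimal sketch of the pairwise ''synthesized instance'' trick described above (our illustration; random features stand in for the real ones, and 75/25 are the working scale values assumed for English vs. Simple English in the next paragraph):\n\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\n\n# X_hard, X_easy: feature rows for paired passages (e.g., English vs.\n# Simple English paragraphs of the same WIKI article).\nrng = np.random.default_rng(0)\nX_hard, X_easy = rng.random((100, 5)), rng.random((100, 5))\nX_diff = X_hard - X_easy                           # [x_1 - x_2]\ny_diff = np.full(100, 75.0) - np.full(100, 25.0)   # [delta(y_1) - delta(y_2)]\nranker = LinearRegression(fit_intercept=False).fit(X_diff, y_diff)\n\n# The learned w still scores single texts (i.e., subtracting the 0 vector):\nscores = X_hard @ ranker.coef_",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Ranking Experiments",

"sec_num": "4.3"

},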
|
{ |
|
"text": "To train ranking models, we sample 1% of paragraph pairs from CEFR (up to 92,964 instances), and combine this with the cross of all paragraphs in English \u00d7 Simple English versions of the same article from WIKI (up to 25,438 instances). We fix \u03b4(y) = 25 for Simple English and \u03b4(y) = 75 for English in the WIKI pairs, under a working assumption that (on average) the former are at the A2/B1 level, and the latter B2/C1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Results using cross-validation are shown in Table 5 . For each fold, we train using pairs from the training partition and evaluate using individual instance scores on the test partition. We report the AUC, or area under the ROC curve (Fawcett, 2006) , which is a common ranking metric for classification tasks. Ablation results show that Fisher score features (i.e., weighted bag of words) again have the strongest effect, although they improve ranking for the CEFR subset while harming WIKI. We posit that this is because WIKI is topically balanced (all articles have an analog from both versions of the site), so word and sentence length alone are in fact good discriminators. that 85% of the time, the model correctly ranks a more difficult passage above a simpler one (with respect to CEFR level). 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 249, |
|
"text": "(Fawcett, 2006)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ranking Experiments", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Given a text ranking model, we now present experiments with the following algorithm for propagating CEFR levels from labeled texts to unlabeled ones for semi-supervised training:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "1. Score all individual passages in CEFR, WIKI, and TATOEBA (using the ranking model);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "2. For each labeled instance in CEFR, propagate its CEFR level to the five most similarly ranked neighbors in WIKI and TATOEBA;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "3. Combine the label-propagated passages from WIKI and TATOEBA with CEFR;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "4. Balance class labels by sampling up to 5,000 passages per CEFR level (30,000 total); 5. Train a passage scale model using the resulting CEFR-aligned texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
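
{

"text": "A minimal sketch of steps 1-3 of the propagation algorithm above (our illustration; ''similarly ranked'' is taken to mean nearest by ranking-model score):\n\ndef propagate_cefr_labels(labeled, unlabeled, k=5):\n    # labeled: (rank_score, cefr_label) pairs for CEFR passages;\n    # unlabeled: (rank_score, passage) pairs from WIKI and TATOEBA.\n    # Each labeled passage donates its CEFR level to the k unlabeled\n    # passages whose ranking scores are closest to its own.\n    propagated = []\n    for score, label in labeled:\n        nearest = sorted(unlabeled, key=lambda sp: abs(sp[0] - score))[:k]\n        propagated.extend((passage, label) for _, passage in nearest)\n    return propagated\n\n# The output is then class-balanced (step 4) and used to retrain the\n# passage scale model (step 5).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Scaling Experiments",

"sec_num": "4.4"

},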
|
{ |
|
"text": "Cross-validation results for this procedure are shown in Table 7 . The weighted-softmax regression has a much stronger positive relationship with CEFR labels than simple linear regression. Furthermore, the label-propagated WIKI and TATOEBA supplements offer small but statistically significant improvements over training on CEFR texts alone. Since these supplemental passages also expand the feature set more than tenfold (i.e., by 5 AUC is also the effect size of the Wilcoxon rank-sum test, which represents the probability that the a randomly chosen text from WIKI English will be ranked higher than Simple English. For CEFR, increasing the model vocabulary for Fisher score features), we claim this also helps the model generalize better to unseen texts in new domains. Boxplots illustrating the positive relationship between scale model predictions and CEFR labels are shown in Figure 3(b) . This, while strong, may also be a conservative correlation estimate, since we propagate CEFR document labels down to paragraphs for training and evaluation and this likely introduces noise (e.g., C1-level articles may well contain A2-level paragraphs).", |
|
"cite_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 433, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 64, |
|
"text": "Table 7", |
|
"ref_id": "TABREF12" |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 894, |
|
"text": "Figure 3(b)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Example predictions from the WIKI corpus are shown in Table 6 . We can see that the C-level text (\u03b4 \u2248 90) is rather academic, with complex sentence structures and specialized jargon. On the other hand, the A-level text (\u03b4 \u2248 10) is more accessible, with short sentences, few embedded clauses, and concrete vocabulary. The B-level text (\u03b4 \u2248 50) is in between, discussing a political topic using basic grammar, but some colloquial vocabulary (e.g., 'underdog' and 'headline').", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 61, |
|
"text": "Table 6", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scaling Experiments", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The results from \u00a74.3 and \u00a74.4 are encouraging. However, they are based on data gathered from the Internet, of varied provenance, using possibly noisy labels. Therefore, one might question whether the resulting scale model correlates well with more trusted human judgments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Hoc Validation Experiment", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "To answer this question, we had a panel of four experts-PhDs and graduate students in linguistics with ESL teaching experience-compose roughly 400 new texts targeting each of the six CEFR levels (2,349 total). These were ultimately converted into c-test items for our operational English test experiments ( \u00a75), but because they were developed independently from the passage scale model, they are also suitable as a ''blind'' test set for validating our approach. Each passage was written by one expert, and vetted by another (with the two negotiating the final CEFR label in the case of any disagreement).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Hoc Validation Experiment", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Boxplots illustrating the relationship between the passage scale model predictions and expert judgments are shown in Figure 3 (c), which shows a moderately strong, positive relationship. The flattening at the C1/C2 level is not surprising, since the distinction here is very fine-grained, and can be difficult even for trained experts to distinguish or produce (Isbell, 2017) . They may also be dependent on genre or register (e.g., textbooks), thus the model may have been looking for features in some of these expert-written passages that were missing for non-textbook-like writing samples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 375, |
|
"text": "(Isbell, 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Hoc Validation Experiment", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "The Duolingo English Test 6 is an accessible, online, computer-adaptive English assessment initially created using the methods proposed in this paper. In this section, we first briefly describe how the test was developed, administered, and scored ( \u00a75.1). Then, we use data logged from many thousands of operational tests to show that our approach can satisfy industry standards for psychometric properties ( \u00a75.2), criterion validity ( \u00a75.3), reliability ( \u00a75.4), and test item security ( \u00a75.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Duolingo English Test Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Drawing on the five formats discussed in \u00a72.4, we automatically generated a large bank of more than 25,000 test items. These items are indexed into eleven bins for each format, such that each bin corresponds to a predicted difficulty range on our 100-point scale (0-5, 6-15, . . . , 96-100).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construction and Administration", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The CAT administration algorithm chooses the first item format to use at random, and then cycles through them to determine the format for each subsequent item (i.e., all five formats have equal representation). Each session begins with a ''calibration'' phase, where the first item is sampled from the first two difficulty bins, the second item from the next two, and so on. After the first four items, we use the methods from \u00a72.2 to iteratively estimate a provisional test score, select the difficulty \u03b4 i of the next item, and sample randomly from the corresponding bin for the next format. This process repeats until the test exceeds 25 items or 40 minutes in length, whichever comes first. Note that because item difficulties (\u03b4 i s) are on our 100-point CEFR-based scale, so are the resulting test scores (\u03b8s). See Appendix A.1 for more details on test administration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construction and Administration", |
|
"sec_num": "5.1" |
|
}, |
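
{

"text": "A minimal sketch of the administration loop just described (our illustration; the format names are our placeholders, and the 40-minute wall-clock limit is omitted):\n\nimport random\n\nFORMATS = ['yes_no_text', 'yes_no_audio', 'c_test', 'dictation', 'elicited_speech']\n\n# Eleven difficulty bins on the 100-point scale: 0-5, 6-15, ..., 86-95, 96-100.\nBINS = [(0, 5)] + [(10 * b - 4, 10 * b + 5) for b in range(1, 10)] + [(96, 100)]\n\ndef bin_for(delta):\n    delta = round(delta)  # bins are defined on integer scale points\n    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= delta <= hi)\n\ndef administer(bank, provisional_score, max_items=25):\n    # bank[fmt][b]: items of format fmt in difficulty bin b;\n    # provisional_score(): current theta estimate from the CAT loop (Sec. 2.2).\n    start = random.randrange(len(FORMATS))\n    for n in range(max_items):\n        fmt = FORMATS[(start + n) % len(FORMATS)]  # formats cycle equally\n        if n < 4:  # calibration: item 1 from bins 0-1, item 2 from bins 2-3, ...\n            b = random.choice([2 * n, 2 * n + 1])\n        else:\n            b = bin_for(provisional_score())\n        yield random.choice(bank[fmt][b])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Test Construction and Administration",

"sec_num": "5.1"

},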
|
{ |
|
"text": "For the yes/no formats, we used the vocabulary scale model ( \u00a73) to estimate \u03b4 x for all words in an English dictionary, plus 10,000 pseudowords. 7 These predictions were binned by \u03b4 x estimate, and test items created by sampling both dictionaries from the same bin (each item also contains at least 15% words and 15% pseudowords). Item difficulty \u03b4 i =\u03b4 x is the mean difficulty of all words/pseudowords x \u2208 i used as stimuli.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construction and Administration", |
|
"sec_num": "5.1" |
|
}, |
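
{

"text": "A minimal sketch of assembling one yes/no item from a difficulty bin (our illustration; the per-item stimulus count is our assumption, since the text only states the 15% floors):\n\nimport random\nimport statistics\n\ndef build_yes_no_item(word_bin, pseudo_bin, size=20, min_frac=0.15):\n    # word_bin / pseudo_bin: (string, predicted delta_x) pairs drawn from\n    # the same difficulty bin of the vocabulary scale model.\n    floor = int(size * min_frac)\n    n_pseudo = random.randint(floor, size - floor)\n    stimuli = random.sample(pseudo_bin, n_pseudo) + random.sample(word_bin, size - n_pseudo)\n    delta_i = statistics.mean(d for _, d in stimuli)  # item difficulty = mean delta_x\n    random.shuffle(stimuli)\n    return stimuli, delta_i",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Test Construction and Administration",

"sec_num": "5.1"

},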
|
{ |
|
"text": "For the c-test format, we combined the expertwritten passages from \u00a74.5 with paragraphs extracted from other English-language sources, including the WIKI corpus and English-language literature. 8 We followed standard procedure (Klein-Braley, 1997) to automatically generate c-test items from these paragraphs. For the dictation and elicited speech formats, we used sentence-level candidate texts from WIKI, TATOEBA, English Universal Dependencies, 9 as well as custom-written sentences. All passages were then manually reviewed for grammaticality (making corrections where necessary) or filtered for inappropriate content. We used the passage scale model ( \u00a74) to estimate \u03b4 i for these items directly from raw text. For items requiring audio (i.e., audio yes/no and elicited speech items), we contracted four native English-speaking voice actors (two male, two female) with experience voicing ESL instructional materials. Each item format also has its own stat- istical grading procedure using ML/NLP. See Appendix A.2 for more details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 195, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 247, |
|
"text": "(Klein-Braley, 1997)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Construction and Administration", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Recall that the traditional approach to CAT development is to first create a bank of items, then pilot test them extensively with human subjects, and finally use IRT analysis to estimate item \u03b4 i and examinee \u03b8 parameters from pilot data. What is the relationship between test scores based on our machine-learned CEFR-derived scales and such pilot-tested ability estimates? A strong relationship between our scores and \u03b8 estimates based on IRT analysis of real test sessions would provide evidence that our approach is valid as an alternative form of pilot testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmatory IRT Analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To investigate this, we analyzed 524,921 examinee, item pairs from 21,351 of the tests administered during the 2018 calendar year, and fit a Rasch model to the observed response data post-hoc. 10 Figure 5(a) shows the relationship between our test scores and more traditional ''pilottested'' IRT \u03b8 estimates. The Spearman rank correlation is positive and very strong (\u03c1 = .96), indicating that scores using our method produce rankings nearly identical to what traditional IRTbased human pilot testing would provide.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 207, |
|
"text": "Figure 5(a)", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Confirmatory IRT Analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "10 Because the test is adaptive, most items are rarely administered ( \u00a75.5). Thus, we limit this analysis to items with >15 observations to be statistically sound. We also omit sessions that went unscored due to evidence of rule-breaking ( \u00a7A.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmatory IRT Analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "One source of criterion validity evidence for our method is the relationship between these test scores and other measures of English proficiency. A strong correlation between our scores and other major English assessments would suggest that our approach is well-suited for assessing language proficiency for people who want to study or work in and English-language environment. For this, we compare our results with two other high-stakes English tests: TOEFL iBT 11 and IELTS. 12 After completing our test online, we asked examinees to submit official scores from other tests (if available). This resulted in a large collection of recent parallel scores to compare against. The relationships between our test scores with TOEFL and IELTS are shown in Figures 5(b) and 5(c), respectively. Correlation coefficients between language tests are generally expected to be in the .5-.7 range (Alderson et al., 1995) , so our scores correlate very well with both tests (r > .7). Our relationship with TOEFL and IELTS appears, in fact, to be on par with their published relationship with each other (r = .73, n = 1,153), which is also based on self-reported data (ETS, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 479, |
|
"text": "12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 906, |
|
"text": "(Alderson et al., 1995)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 750, |
|
"end": 762, |
|
"text": "Figures 5(b)", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Relationship with Other English Language Assessments", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Another aspect of test validity is the reliability or overall consistency of its scores (Murphy and Davidshofer, 2004) . Reliability coefficient estimates for our test are shown in Table 8 . Importantly, these are high enough to be considered appropriate for high-stakes use.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 118, |
|
"text": "Davidshofer, 2004)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 188, |
|
"text": "Table 8", |
|
"ref_id": "TABREF14" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Score Reliability", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Internal consistency measures the extent to which items in the test measure the same underlying construct. For CATs, this is usually done using the ''split half'' method: randomly split the item bank in two, score both halves separately, and then compute the correlation between halfscores, adjusting for test length (Sireci et al., 1991) . The reliability estimate is well above .9, the threshold for tests ''intended for individual diagnostic, employment, academic placement, or other important purposes'' (DeVellis, 2011).", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 338, |
|
"text": "(Sireci et al., 1991)", |
|
"ref_id": "BIBREF58" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score Reliability", |
|
"sec_num": "5.4" |
|
}, |
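
{

"text": "A minimal sketch of the split-half computation (our illustration; the Spearman-Brown formula is our choice for the ''adjusting for test length'' step, as it is the standard adjustment for split-half estimates):\n\nimport numpy as np\n\ndef split_half_reliability(half_a, half_b):\n    # half_a, half_b: per-session scores from the two random halves of the bank.\n    r_half = np.corrcoef(half_a, half_b)[0, 1]\n    # Spearman-Brown: step the half-test correlation up to full test length.\n    return 2.0 * r_half / (1.0 + r_half)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Score Reliability",

"sec_num": "5.4"

},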
|
{ |
|
"text": "Test-retest reliability measures the consistency of people's scores if they take the test multiple times. We consider all examinees who took the test twice within a 30-day window (any longer may reflect actual learning gains, rather than measurement error) and correlate the first score with the second. Such coefficients range from .8-.9 for standardized tests using identical forms, and .8 is considered sufficient for high-stakes CATs, since adaptively administered items are distinct between sessions (Nitko and Brookhart, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 505, |
|
"end": 532, |
|
"text": "(Nitko and Brookhart, 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score Reliability", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Due to the adaptive nature of CATs, they are usually considered to be more secure than fixedform exams, so long as the item bank is sufficiently large (Wainer, 2000) . Two measures for quantifying the security of an item bank are the item exposure rate (Way, 1998) and test overlap rate (Chen et al., 2003) . We report the mean and median values for these measures in Table 9 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 165, |
|
"text": "(Wainer, 2000)", |
|
"ref_id": "BIBREF69" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 264, |
|
"text": "(Way, 1998)", |
|
"ref_id": "BIBREF70" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 306, |
|
"text": "(Chen et al., 2003)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 375, |
|
"text": "Table 9", |
|
"ref_id": "TABREF15" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Item Bank Security", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "The exposure rate of an item is the proportion of tests in which it is administered; the average item exposure rate for our test is .10% (or one in every 1,000 tests). While few tests publish exposure rates for us to compare against, ours is well below the 20% (one in five tests) limit recommended for unrestricted continuous testing (Way, 1998) . The test overlap rate is the proportion of items that are shared between any two randomly-chosen test sessions. The mean overlap for our test is .43% (and the median below .01%), which is well below the 11-14% range reported for other operational CATs like the GRE 13 (Stocking, 1994) . These results suggest that our proposed methods are able to create very large item banks that are quite secure, without compromising the validity or reliability of resulting test scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 346, |
|
"text": "(Way, 1998)", |
|
"ref_id": "BIBREF70" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 633, |
|
"text": "(Stocking, 1994)", |
|
"ref_id": "BIBREF60" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Item Bank Security", |
|
"sec_num": "5.5" |
|
}, |
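
{

"text": "A minimal sketch of both security measures (our illustration; for overlap we average the pairwise shared proportion over all session pairs, one reasonable reading of the definitions cited above):\n\nfrom collections import Counter\nfrom itertools import combinations\n\ndef exposure_rates(sessions):\n    # sessions: one set of administered item ids per test.\n    counts = Counter(item for s in sessions for item in s)\n    return {item: c / len(sessions) for item, c in counts.items()}\n\ndef mean_overlap_rate(sessions):\n    # Proportion of items shared between two randomly chosen sessions.\n    pairs = list(combinations(sessions, 2))\n    return sum(len(a & b) / min(len(a), len(b)) for a, b in pairs) / len(pairs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Item Bank Security",

"sec_num": "5.5"

},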
|
{ |
|
"text": "There has been little to no work using ML/NLP to drive end-to-end language test development as we do here. To our knowledge, the only other example is Hoshino and Nakagawa (2010) , who used a support vector machine to estimate the difficulty of cloze 14 items for a computer-adaptive test. However, the test did not contain any other item formats, and it was not intended as an integrated measure of general language ability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 178, |
|
"text": "Hoshino and Nakagawa (2010)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Instead, most related work has leveraged ML/ NLP to predict test item difficulty from operational test logs. This has been applied with some success to cloze (Mostow and Jang, 2012) , vocabulary (Susanti et al., 2016) , listening comprehension (Loukina et al., 2016) , and grammar exercises (Perez-Beltrachini et al., 2012) . However, these studies all use multiple-choice formats where difficulty is largely mediated by the choice of distractors. The work of Beinborn et al. (2014) is perhaps most relevant to our own; they used ML/ NLP to predict c-test difficulty at the word-gap level, using both macro-features (e.g., paragraph difficulty as we do) as well as micro-features (e.g., frequency, polysemy, or cognateness for each gap word). These models performed on par with human experts at predicting failure rates for English language students living in Germany.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 181, |
|
"text": "(Mostow and Jang, 2012)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 217, |
|
"text": "(Susanti et al., 2016)", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 266, |
|
"text": "(Loukina et al., 2016)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 323, |
|
"text": "(Perez-Beltrachini et al., 2012)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 482, |
|
"text": "Beinborn et al. (2014)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Another area of related work is in predicting text difficulty (or readability) more generally. Napoles and Dredze (2010) trained classifiers to discriminate between English and Simple English Wikipedia, and Vajjala et al. (2016) applied English readability models to a variety of Web texts (including English and Simple English Wikipedia) . Both of these used linear classifiers with features similar to ours from \u00a74.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 228, |
|
"text": "English and Simple English Wikipedia, and Vajjala et al. (2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 338, |
|
"text": "English and Simple English Wikipedia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Recently, more efforts have gone into using ML/ NLP to align texts to specific proficiency frameworks like the CEFR. However, this work mostly focuses on languages other than English (e.g., Curto et al., 2015; Sung et al., 2015; Volodina et al., 2016; Vajjala and Rama, 2018) . A notable exception is Xia et al. (2016) , who trained classifiers to predict CEFR levels for reading passages from a suite of Cambridge English 15 exams, targeted at learners from A2-C2. In addition to lexical and language model features like ours ( \u00a74), they showed additional gains from explicit discourse and syntax features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 209, |
|
"text": "Curto et al., 2015;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 228, |
|
"text": "Sung et al., 2015;", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 251, |
|
"text": "Volodina et al., 2016;", |
|
"ref_id": "BIBREF68" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 275, |
|
"text": "Vajjala and Rama, 2018)", |
|
"ref_id": "BIBREF65" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 318, |
|
"text": "Xia et al. (2016)", |
|
"ref_id": "BIBREF73" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The relationship between test item difficulty and linguistic structure has also been investigated in the language testing literature, both to evaluate the validity of item types (Brown, 1989; Abraham and Chapelle, 1992; Kostin, 1993, 1999) and to establish what features impact difficulty so as to inform test development (Nissan et al., 1995; Kostin, 2004) . These studies have leveraged both correlational and regression analyses to examine the relationship between passage difficulty and linguistic features such as passage length, word length and frequency, negations, rhetorical organization, dialogue utterance pattern (questionquestion, statement-question), and so on.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 191, |
|
"text": "(Brown, 1989;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 219, |
|
"text": "Abraham and Chapelle, 1992;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 239, |
|
"text": "Kostin, 1993, 1999)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 343, |
|
"text": "(Nissan et al., 1995;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 357, |
|
"text": "Kostin, 2004)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a method for developing computer-adaptive language tests, driven by machine learning and natural language processing. This allowed us to rapidly develop an initial version of the Duolingo English Test for the experiments reported here, using ML/NLP to directly estimate item difficulties for a large item bank in lieu of expensive pilot testing with human subjects. This test correlates significantly with other high-stakes English assessments, and satisfies industry standards for score reliability and test security. To our knowledge, we are the 15 https://www.cambridgeenglish.org.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
|
{ |
|
"text": "The strong relationship between scores based on ML/NLP estimates of item difficulty and the IRT estimates from operational data provides evidence that our approach-using items' linguistic characteristics to predict difficulty, a priori to any test administration-is a viable form of test development. Furthermore, traditional pilot analyses produce inherently norm-referenced scores (i.e., relative to the test-taking population), whereas it can be argued that our method yields criterionreferenced scores (i.e., indicative of a given standard, in our case the CEFR). This is another conceptual advantage of our method. However, further research is necessary for confirmation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We were able to able to achieve these results using simple linear models and relatively straightforward lexical and language model feature engineering. Future work could incorporate richer syntactic and discourse features, as others have done ( \u00a76). Furthermore, other indices such as narrativity, word concreteness, topical coherence, etc., have also been shown to predict text difficulty and comprehension (McNamara et al., 2011) . A wealth of recent advances in neural NLP that may also be effective in this work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 431, |
|
"text": "(McNamara et al., 2011)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Other future work involves better understanding how our large, automatically-generated item bank behaves with respect to the intended construct. Detecting differential item functioning (DIF)-the extent to which people of equal ability but different subgroups, such as gender or age, have (un)equal probability of success on test items-is an important direction for establishing the fairness of our test. While most assessments focus on demographics for DIF analyses, online administration means we must also ensure that technology differences (e.g., screen resolution or Internet speed) do not affect item functioning, either.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
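
{ |

"text": "As a minimal sketch of one common DIF screen (the Mantel-Haenszel common odds ratio; our illustration, not necessarily the procedure we will adopt), examinees are stratified by total score and the odds of success on an item are compared across two groups, with values far from 1 flagging potential DIF:\n\nfrom collections import defaultdict\n\ndef mantel_haenszel(responses):\n    # responses: iterable of (group, stratum, correct), where group is\n    # 0 (reference) or 1 (focal), stratum is a total-score band, and\n    # correct is 0 or 1; returns the common odds-ratio estimate\n    tables = defaultdict(lambda: [[0, 0], [0, 0]])  # stratum -> 2x2 counts\n    for group, stratum, correct in responses:\n        tables[stratum][group][correct] += 1\n    num = den = 0.0\n    for (a0, a1), (b0, b1) in tables.values():  # a*: reference, b*: focal\n        n = a0 + a1 + b0 + b1\n        if n:\n            num += a1 * b0 / n  # reference right, focal wrong\n            den += a0 * b1 / n  # reference wrong, focal right\n    return num / den if den else float('nan')", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Discussion and Future Work", |

"sec_num": "7" |

}, |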
|
{ |
|
"text": "It is also likely that the five item formats presented in this work over-index on language reception skills rather than production (i.e., writing and speaking). In fact, we hypothesize that the ''clipping'' observed to the right in plots from Figure 5 can be attributed to this: Despite being highly correlated, the CAT as presented here may over estimate overall English ability relative to tests with more open-ended writing and speaking exercises. In the time since the present experiments were conducted, we have updated the Duolingo English Test to include such writing and speaking sections, which are automatically graded and combined with the CAT portion. The test-retest reliability for these improved scores is .85, and correlation with TOEFL and IELTS are .77 and .78, respectively (also, the ''clipping'' effect disappears). We continue to conduct research on the quality of the interpretations and uses of Duolingo English Test scores; interested readers are able to find the latest ongoing research at https://go.duolingo.com/ dettechnicalmanual.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 251, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, in some sense what we have proposed here is partly a solution to the ''cold start'' problem facing language test developers: How does one estimate item difficulty without any response data to begin with? Once a test is in production, however, one can leverage the operational data to further refine these models. It is exciting to think that such analyses of examinees' response patterns (e.g., topical characteristics, register types, and pragmatic uses of language in the texts) can tell us more about the underlying proficiency scale, which in turn can contribute back to the theory of frameworks like the CEFR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "of disagreement between two probability distributions. As a result, r i can just as easily be a probabilistic response (0 \u2264 r i \u2264 1) as a binary one (r i \u2208 {0, 1}). In other words, this MLE optimization seeks to find\u03b8 t such that the IRF prediction p i (\u03b8 t ) is most similar to each probabilistic response r i . We believe the flexibility of this generalized Rasch-like framework helps us reduce test administration time above and beyond a binary-response CAT, since each item's grade summarizes multiple facets of the examinee's performance on that item. To use this generalization, however, we must specify a probabilistic grading procedure for each item format. Since an entire separate manuscript can be devoted to this topic, we simply summarize our approaches here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
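
{ |

"text": "As a minimal numerical sketch of this generalization (ours; it assumes the standard Rasch IRF and hypothetical names, not our production scoring code), the ability estimate \u03b8\u0302_t can be found by maximizing the soft-response log-likelihood over a grid:\n\nimport numpy as np\n\ndef irf(theta, delta):\n    # Rasch item response function p_i(theta) for item difficulty delta\n    return 1.0 / (1.0 + np.exp(-(theta - delta)))\n\ndef mle_ability(deltas, grades, grid=np.linspace(-4.0, 4.0, 801)):\n    # grades may be binary or probabilistic (0 <= r_i <= 1); we maximize\n    # sum_i [ r_i log p_i(theta) + (1 - r_i) log(1 - p_i(theta)) ]\n    deltas = np.asarray(deltas, dtype=float)\n    grades = np.asarray(grades, dtype=float)\n    p = irf(grid[:, None], deltas[None, :])  # grid points x items\n    ll = (grades * np.log(p) + (1.0 - grades) * np.log(1.0 - p)).sum(axis=1)\n    return grid[np.argmax(ll)]", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Discussion and Future Work", |

"sec_num": "7" |

}, |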
|
{ |
|
"text": "The yes/no vocabulary format (Figure 2 ) is traditionally graded using the sensitivity index d \u2032a measure of separation between signal (word) and noise (pseudoword) distributions from signal detection theory (Zimmerman et al., 1977) . This index is isomorphic with the AUC (Fawcett, 2006 ), which we use as the graded response r i . This can be interpreted as ''the probability that the examinee can discriminate between English words and pseudowords at level \u03b4 i .'' C-test items (Figure 4(a) ) are graded using a weighted average of the correctly filled wordgaps, such that each gap's weight is proportional to its length in characters. Thus, r i can be interpreted as ''the proportion of this passage the examinee understood, such that longer gaps are weighted more heavily.'' (We also experimented with other grading schemes, but this yielded the highest test score reliability in preliminary work.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 232, |
|
"text": "(Zimmerman et al., 1977)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 287, |
|
"text": "(Fawcett, 2006", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 38, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 493, |
|
"text": "(Figure 4(a)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
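
{ |

"text": "A minimal sketch of both graders (our illustration; names are hypothetical) makes these interpretations concrete: the pairwise-comparison form of the AUC for the yes/no vocabulary format, and a character-length-weighted gap score for c-tests:\n\ndef auc_grade(word_responses, pseudo_responses):\n    # probability that a randomly chosen real word is endorsed over a\n    # randomly chosen pseudoword, counting ties as one half\n    pairs = [(w, p) for w in word_responses for p in pseudo_responses]\n    wins = sum(1.0 if w > p else 0.5 if w == p else 0.0 for w, p in pairs)\n    return wins / len(pairs)\n\ndef ctest_grade(gaps):\n    # gaps: list of (gap_text, filled_correctly) pairs; each gap is\n    # weighted by its length in characters\n    total = sum(len(text) for text, _ in gaps)\n    return sum(len(text) for text, ok in gaps if ok) / total", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Discussion and Future Work", |

"sec_num": "7" |

}, |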
|
{ |
|
"text": "The dictation (Figure 4(b) ) and elicited speech (Figure 4(c) ) items are graded using logistic regression classifiers. We align the examinee's submission (written for dictation; transcribed using automatic speech recognition for elicited speech) to the expected reference text, and extract features representing the differences in the alignment (e.g., string edit distance, n-grams of insertion/substitution/deletion patterns at both the word and character level, and so on). These models were trained on aggregate human judgments of correctness and intelligibility for tens of thousands of test item submissions (stratified by \u03b4 i ) collected during preliminary work. Each item received \u2265 15 independent binary judgments from fluent English speakers via Amazon Mechanical Turk, 16 which were then averaged to produce ''soft'' (probabilistic) training labels. Thus r i can be interpreted as ''the probability that a random English speaker would find this transcription/ utterance to be faithful, intelligible, and accurate.'' For the dictation grader, the correlation between human labels and model predictions is r = .86 (10-fold cross-validation). Correlation for the elicited speech grader is r = .61 (10-fold crossvalidation).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 26, |
|
"text": "(Figure 4(b)", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 61, |
|
"text": "(Figure 4(c)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
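
{ |

"text": "A simplified sketch of these graders (ours, with two toy alignment features; the production models use much richer n-gram edit patterns) trains a logistic regression against the soft labels by minimizing cross-entropy directly, since probabilistic targets are handled by the same gradient as hard labels:\n\nimport numpy as np\n\ndef alignment_features(submission, reference):\n    # toy features: normalized character edit distance and length ratio\n    d = np.arange(len(reference) + 1, dtype=float)\n    for i, cs in enumerate(submission, 1):\n        prev, d[0] = d[0], float(i)\n        for j, cr in enumerate(reference, 1):\n            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (cs != cr))\n    norm = max(len(reference), 1)\n    return np.array([1.0, d[-1] / norm, len(submission) / norm])\n\ndef train_soft_logreg(X, y, lr=0.1, epochs=1000):\n    # y holds averaged human judgments in [0, 1]; the cross-entropy\n    # gradient X^T (p - y) / n is unchanged by soft targets\n    w = np.zeros(X.shape[1])\n    for _ in range(epochs):\n        p = 1.0 / (1.0 + np.exp(-(X @ w)))\n        w -= lr * (X.T @ (p - y)) / len(y)\n    return w", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Discussion and Future Work", |

"sec_num": "7" |

}, |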
|
{ |
|
"text": "We found movie subtitle counts(Lison and Tiedemann, 2016) to be more correlated with the expert CEFR judgments than other language domains (e.g., Wikipedia or newswire).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://en.wikipedia.org. 3 https://simple.wikipedia.org. 4 https://tatoeba.org.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://englishtest.duolingo.com.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We trained a character-level LSTM RNN(Graves, 2014) on an English dictionary to produce pseudowords, and then filtered out any real English words. Remaining candidates were manually reviewed and filtered if they were deemed too similar to real words, or were otherwise inappropriate.8 https://www.wikibooks.org. 9 http://universaldependencies.org.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.ets.org/toefl. 12 https://www.ielts.org/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.ets.org/gre. 14 Cloze tests and c-tests are similar, both stemming from the ''reduced redundancy'' approach to language assessment(Lin et al., 2008). The cloze items in the related work cited here contain a single deleted word with four multiple-choice options for filling in the blank.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.mturk.com/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Micheline Chalhoub-Deville, Steven Sireci, Bryan Smith, and Alina von Davier for their input on this work, as well as Klinton Bicknell, Erin Gustafson, Stephen Mayhew, Will Monroe, and the TACL editors and reviewers for suggestions that improved this paper. Others who have contributed in various ways to research about our test to date include Cynthia M. Berger, Connor Brem, Ramsey Cardwell, Angela DiCostanzo, Andre Horie, Jennifer Lake, Yena Park, and Kevin Yancey.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tests are administered remotely via Web browser at https://englishtest.duolingo.com. Examinees are required to have a stable Internet connection and a device with a working microphone and front-facing camera. Each test session is recorded and reviewed by human proctors before scores are released. Prohibited behaviors include:\u2022 Interacting with another person in the room ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Test Administration Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The item formats in this work (Table 2) are not multiple-choice or true/false. This means responses may not be simply ''correct'' or ''incorrect,'' and require more nuanced grading procedures. While partial credit IRT models do exist (Andrich, 1978; Masters, 1982) , we chose instead to generalize the binary Rasch framework to incorporate ''soft'' (probabilistic) responses.The maximum-likelihood estimation (MLE) estimate used to score the test (or select the next item) is based on the log-likelihood function:which follows directly from equation 2. Note that maximizing this is equivalent to minimizing cross-entropy (de Boer et al., 2005) , a measure", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 249, |
|
"text": "(Andrich, 1978;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 264, |
|
"text": "Masters, 1982)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 643, |
|
"text": "Boer et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 39, |
|
"text": "(Table 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.2 Item Grading Details", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The meaning of cloze test scores: An item difficulty perspective", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Abraham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "The Modern Language Journal", |
|
"volume": "76", |
|
"issue": "4", |
|
"pages": "468--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. G. Abraham and C. A. Chapelle. 1992. The meaning of cloze test scores: An item difficulty perspective. The Modern Language Journal, 76(4):468-479.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Standards for Educational and Psychological Testing", |
|
"authors": [ |
|
{ |
|
"first": "Apa", |
|
"middle": [], |
|
"last": "Aera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ncme", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "AERA, APA, and NCME. 2014. Standards for Educational and Psychological Testing.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Language Test Construction and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Alderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Clapham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Wall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. C. Alderson, C. Clapham, and D. Wall. 1995. Language Test Construction and Evaluation, Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A rating formulation for ordered response categories", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Andrich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "Psychometrika", |
|
"volume": "43", |
|
"issue": "4", |
|
"pages": "561--573", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Andrich. 1978. A rating formulation for ordered response categories. Psychometrika, 43(4):561-573.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Language Assessment in Practice", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Bachman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Bachman and A. Palmer. 2010. Language Assessment in Practice. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Predicting the difficulty of language proficiency tests", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Beinborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "517--530", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Beinborn, T. Zesch, and I. Gurevych. 2014. Predicting the difficulty of language proficiency tests. Transactions of the Association for Computational Linguistics, 2:517-530.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The clear speech effect for non-native listeners", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bradlow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Bent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of the Acoustical Society of America", |
|
"volume": "112", |
|
"issue": "", |
|
"pages": "272--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. R. Bradlow and T. Bent. 2002. The clear speech effect for non-native listeners. Journal of the Acoustical Society of America, 112:272-284.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Perceptual adaptation to non-native speech", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bradlow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Bent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Cognition", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "707--729", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. R. Bradlow and T. Bent. 2008. Perceptual adaptation to non-native speech. Cognition, 106:707-729.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A1-B2 vocabulary: Insights and issues arising from the English Profile Wordlists project", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Capel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "English Profile Journal", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Capel. 2010. A1-B2 vocabulary: Insights and issues arising from the English Profile Wordlists project. English Profile Journal, 1.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Completing the English Vocabulary Profile: C1 and C2 vocabulary", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "English Profile Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Capel. 2012. Completing the English Vocab- ulary Profile: C1 and C2 vocabulary. English Profile Journal, 3.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "TOEFL questions, answers leaked in China", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Cau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Global Times", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Cau. 2015. TOEFL questions, answers leaked in China. Global Times.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Exploring the relationship between item exposure rate and item overlap rate in computerized adaptive testing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Ankenmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Spray", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Educational Measurement", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "129--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Chen, R. D. Ankenmann, and J. A. Spray. 2003. Exploring the relationship between item exposure rate and item overlap rate in com- puterized adaptive testing. Journal of Educa- tional Measurement, 40:129-145.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Common European Framework of Reference for Languages: Learning, Teaching, Assessment", |
|
"authors": [], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Council of Europe. 2001. Common European Framework of Reference for Languages: Learn- ing, Teaching, Assessment. Cambridge Univer- sity Press.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A comparison of three test formats to assess word difficulty", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Culligan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Language Testing", |
|
"volume": "32", |
|
"issue": "4", |
|
"pages": "503--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Culligan. 2015. A comparison of three test formats to assess word difficulty. Language Testing, 32(4):503-520.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic text difficulty classifier-assisting the selection of adequate reading materials for European Portuguese teaching", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Curto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mamede", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Baptista", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference on Computer Supported Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "36--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Curto, N. J. Mamede, and J. Baptista. 2015. Automatic text difficulty classifier-assisting the selection of adequate reading materials for European Portuguese teaching. In Proceedings of the International Conference on Computer Supported Education, pages 36-44.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A tutorial on the cross-entropy method", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Boer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kroese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mannor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Rubinstien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Annals of Operations Research", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "19--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. T. de Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstien. 2005. A tutorial on the cross-entropy method. Annals of Operations Research, 34:19-67.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Scale Development: Theory and Applications, Number 26 in Applied Social Research Methods", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Devellis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. F. DeVellis. 2011. Scale Development: Theory and Applications, Number 26 in Applied Social Research Methods. SAGE Publications.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Smart Language: Readers, Readability, and the Grading of Text", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Dubay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. H. DuBay. 2006. Smart Language: Readers, Readability, and the Grading of Text, Impact Information.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "As SAT was hit by security breaches, College Board went ahead with tests that had leaked", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Dudley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Stecklow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Harney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Reuters Investigates", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Dudley, S. Stecklow, A. Harney, and I. J. Liu. 2016. As SAT was hit by security breaches, College Board went ahead with tests that had leaked. Reuters Investigates.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Deriving TF-IDF as a Fisher kernel", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Elkan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "String Processing and Information Retrieval", |
|
"volume": "3772", |
|
"issue": "", |
|
"pages": "295--300", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Elkan. 2005, Deriving TF-IDF as a Fisher kernel. In M. Consens and G. Navarro, editors, String Processing and Information Retrieval, volume 3772 of Lecture Notes in Computer Science, pages 295-300. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Linking TOEFL iBT scores to IELTS scores -A research report", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ets", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ETS TOEFL Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ETS. 2010, Linking TOEFL iBT scores to IELTS scores -A research report. ETS TOEFL Report.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "An introduction to ROC analysis", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Fawcett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Pattern Recognition Letters", |
|
"volume": "27", |
|
"issue": "8", |
|
"pages": "861--874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Fawcett. 2006. An introduction to ROC anal- ysis. Pattern Recognition Letters, 27(8):861-874.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The prediction of TOEFL reading comprehension item difficulty for expository prose passages for three item types: Main idea, inference, and supporting idea items", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Freedle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Kostin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "ETS Research Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Freedle and I. Kostin. 1993, The prediction of TOEFL reading comprehension item difficulty for expository prose passages for three item types: Main idea, inference, and supporting idea items. ETS Research Report 93-13.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Does the text matter in a multiple-choice test of comprehension? the case for the construct validity of TOEFL's minitalks. Language Testing", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Freedle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Kostin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "2--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Freedle and I. Kostin. 1999. Does the text matter in a multiple-choice test of comprehension? the case for the construct validity of TOEFL's minitalks. Language Testing, 16(1):2-32.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Generating sequences with re", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Graves. 2014. Generating sequences with re- current neural networks. arXiv, 1308.0850v5 [cs.NE].", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Scalable modified Kneser-Ney language model estimation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Pouzyrevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "690--696", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Heafield, I. Pouzyrevsky, J. H. Clark, and P. Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the Association for Computational Linguistics (ACL), pages 690-696.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Predicting the difficulty of multiple-choice close questions for computer-adaptive testing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hoshino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Research in Computing Science", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "279--292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Hoshino and H. Nakagawa. 2010. Predicting the difficulty of multiple-choice close questions for computer-adaptive testing. Research in Computing Science, 46:279-292.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Assessing C2 writing ability on the Certificate of English Language Proficiency: Rater and examinee age effects", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Isbell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Assessing Writing", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "37--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Isbell. 2017. Assessing C2 writing ability on the Certificate of English Language Proficiency: Rater and examinee age effects. Assessing Writ- ing, 34:37-49.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Exploiting generative models in discriminative classifiers", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Haussler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "487--493", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Jaakkola and D. Haussler. 1999. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems (NIPS), volume 11, pages 487-493.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Elicited imitation in second language acquisition research", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Jessop", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Canadian Modern Language Review", |
|
"volume": "64", |
|
"issue": "1", |
|
"pages": "215--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Jessop, W. Suzuki, and Y. Tomita. 2007. Elicited imitation in second language acqui- sition research. Canadian Modern Language Review, 64(1):215-238.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Validating the interpretations and uses of test scores", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Kane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Educational Measurement", |
|
"volume": "50", |
|
"issue": "", |
|
"pages": "1--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.T. Kane. 2013. Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50:1-73.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Construct validity of C-tests: A factorial approach", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Khodadady", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Language Teaching and Research", |
|
"volume": "5", |
|
"issue": "6", |
|
"pages": "1353--1362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Khodadady. 2014. Construct validity of C-tests: A factorial approach. Journal of Language Teaching and Research, 5(6):1353-1362.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "C-Tests in the context of reduced redundancy testing: An appraisal. Language Testing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Klein-Braley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "47--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Klein-Braley. 1997. C-Tests in the context of reduced redundancy testing: An appraisal. Language Testing, 14(1):47-84.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Exploring item characteristics that are related to the difficulty of TOEFL dialogue items", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Kostin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ETS Research Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Kostin. 2004, Exploring item characteristics that are related to the difficulty of TOEFL dialogue items. ETS Research Report 04-11.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Handbook of Test Development", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Lane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Raymond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Downing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Lane, M. R. Raymond, and S. M. Downing, editors . 2016. Handbook of Test Development, 2nd edition. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Language reduced redundancy tests: A reexamination of cloze test and c-test", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Feng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Pan-Pacific Association of Applied Linguistics", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "61--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Y. Lin, H. C. Yuan, and H. P. Feng. 2008. Language reduced redundancy tests: A re- examination of cloze test and c-test. Journal of Pan-Pacific Association of Applied Linguistics, 12(1):61-79.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "3PL, Rasch, quality-control and science", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Linacre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Rasch Measurement Transactions", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "1441--1444", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. M. Linacre. 2014. 3PL, Rasch, quality-control and science. Rasch Measurement Transactions, 27(4):1441-1444.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "OpenSubtitles-2016: Extracting large parallel corpora from movie and TV subtitles", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "923--929", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Lison and J. Tiedemann. 2016. OpenSubtitles- 2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the International Conference on Language Re- sources and Evaluation (LREC), pages 923-929.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Applications of Item Response Theory to Practical Testing Problems", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. M. Lord. 1980. Applications of Item Re- sponse Theory to Practical Testing Problems, Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Textual complexity as a predictor of difficulty of listening items in language proficiency tests", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Loukina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sakano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sheehan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3245--3253", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Loukina, S. Y. Yoon, J. Sakano, Y. Wei, and K. Sheehan. 2016. Textual complexity as a predictor of difficulty of listening items in language proficiency tests. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 3245-3253.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "A Rasch model for partial credit scoring", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Masters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "Psychometrika", |
|
"volume": "47", |
|
"issue": "2", |
|
"pages": "149--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. N. Masters. 1982. A Rasch model for partial credit scoring. Psychometrika, 47(2):149-174.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Coh-Metrix easability components: Aligning text difficulty with theories of text comprehension", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Graesser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kulikowich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Annual Meeting of the American Educational Research Association (AERA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. S. McNamara, A. C. Graesser, Z. Cai, and J. Kulikowich. 2011. Coh-Metrix easability components: Aligning text difficulty with the- ories of text comprehension. In Annual Meeting of the American Educational Research Asso- ciation (AERA).", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "The development of vocabulary breadth across the CEFR levels", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Milton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Milton. 2010, The development of vocabulary breadth across the CEFR levels. In I.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Communicative Proficiency and Linguistic Development: Intersections Between SLA and Language Testing Research", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bartning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vedder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "211--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bartning, M. Martin, and I. Vedder, editors, Communicative Proficiency and Linguistic Development: Intersections Between SLA and Language Testing Research, volume 1 of EuroSLA Monograph Series, pages 211-232. EuroSLA.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Aural word recognition and oral competence in English as a foreign language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Milton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wade", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Hopkins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Insights Into Non-Native Vocabulary Teaching and Learning", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "83--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Milton, J. Wade, and N. Hopkins. 2010. Aural word recognition and oral competence in English as a foreign language. In R. Chac\u00f3n- Beltr\u00e1n,C. Abello-Contesse, and M. Torreblanca- L\u00f3pez, editors, Insights Into Non-Native Vocabulary Teaching and Learning, volume 52, pages 83-98. Multilingual Matters.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Generating diagnostic multiple choice comprehension cloze questions", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mostow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "136--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Mostow and H. Jang. 2012. Generating diag- nostic multiple choice comprehension cloze questions. In Proceedings of the Workshop on Building Educational Applications Using NLP, pages 136-146.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Psychological Testing: Principles and Applications", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Davidshofer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. R. Murphy and C. O. Davidshofer. 2004. Psy- chological Testing: Principles and Applica- tions, Pearson.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Learning Simple Wikipedia: A cogitation in ascertaining abecedarian language", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Napoles and M. Dredze. 2010. Learning Sim- ple Wikipedia: A cogitation in ascertaining abecedarian language. In Proceedings of the Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids, pages 42-50.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "An analysis of factors affecting the difficulty of dialogue items in TOEFL listening comprehension", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Nissan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Devincenzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Tang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "ETS Research Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Nissan, F. DeVincenzi, and K. L. Tang. 1995. An analysis of factors affecting the dif- ficulty of dialogue items in TOEFL listening comprehension. ETS Research Report 95-37.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Generating grammar exercises", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Perez-Beltrachini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Perez-Beltrachini, C. Gardent, and G. Kruszewski. 2012. Generating grammar ex- ercises. In Proceedings of the Workshop on Building Educational Applications Using NLP, pages 147-156.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Probabilistic Models for Some Intelligence and Attainment Tests", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rasch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Rasch. 1993. Probabilistic Models for Some Intelligence and Attainment Tests, MESA Press.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "The C-test, the TCF and the CEFR: A validation study", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Reichert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "The C-Test: Contributions from Current Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "205--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Reichert, U. Keller, and R. Martin. 2010. The C-test, the TCF and the CEFR: A validation study. In The C-Test: Contributions from Cur- rent Research, pages 205-231. Peter Lang.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Combined regression and ranking", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sculley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Conference on Knowledge Discovery and Data Mining (KDD)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "979--988", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Sculley. 2010. Combined regression and rank- ing. In Proceedings of the Conference on Knowledge Discovery and Data Mining (KDD), pages 979-988.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Computerized adaptive testing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Segall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. O. Segall. 2005, Computerized adaptive testing. In K. Kempf-Leonard, editor, Encyclopedia of Social Measurement. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Settles. 2012. Active Learning. Synthesis Lec- tures on Artificial Intelligence and Machine Learning. Morgan & Claypool.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "On the reliability of testlet-based tests", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Sireci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Thissen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wainer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Journal of Educational Measurement", |
|
"volume": "28", |
|
"issue": "3", |
|
"pages": "237--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. G. Sireci, D. Thissen, and H. Wainer. 1991. On the reliability of testlet-based tests. Journal of Educational Measurement, 28(3):237-247.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Vocabulary size and the skills of listening, reading and writing", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Staehr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Language Learning Journal", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "139--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. S. Staehr. 2008. Vocabulary size and the skills of listening, reading and writing. Language Learning Journal, 36:139-152.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Three practical issues for modern adaptive testing item pools", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Stocking", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. L. Stocking. 1994, Three practical issues for modern adaptive testing item pools. ETS Re- search Report 94-5.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "Leveling L2 texts through readability: Combining multilevel linguistic features with the CEFR", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Sung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Dyson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Change", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "99", |
|
"issue": "", |
|
"pages": "371--391", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. T. Sung, W. C. Lin, S. B. Dyson, K. E. Change, and Y. C. Chen. 2015. Leveling L2 texts through readability: Combining multilevel linguistic features with the CEFR. The Modern Language Journal, 99(2):371-391.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Item difficulty analysis of english vocabulary questions", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Susanti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nishikawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Tokunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Obari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the International Conference on Computer Supported Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "267--274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Susanti, H. Nishikawa, T. Tokunaga, and H. Obari. 2016. Item difficulty analysis of english vocabulary questions. In Proceedings of the International Conference on Computer Supported Education, pages 267-274.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "Testing algorithms", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Thissen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mislevy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computerized Adaptive Testing: A Primer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Thissen and R. J. Mislevy. 2000. Testing algorithms. In H. Wainer, editor, Computerized Adaptive Testing: A Primer. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF64": { |
|
"ref_id": "b64", |
|
"title": "Towards grounding computational linguistic approaches to readability: Modeling reader-text interaction for easy and difficult texts", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Eitel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Scheiter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Vajjala, D. Meurers, A. Eitel, and K. Scheiter. 2016. Towards grounding computational lin- guistic approaches to readability: Modeling reader-text interaction for easy and difficult texts. In Proceedings of the Workshop on Com- putational Linguistics for Linguistic Complex- ity, pages 38-48.", |
|
"links": null |
|
}, |
|
"BIBREF65": { |
|
"ref_id": "b65", |
|
"title": "Experiments with universal CEFR classification", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Rama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Vajjala and T. Rama. 2018. Experiments with universal CEFR classification. In Proceedings of the Workshop on Innovative Use of NLP for Building Educational Applications, pages 147-153.", |
|
"links": null |
|
}, |
|
"BIBREF66": { |
|
"ref_id": "b66", |
|
"title": "A psycholinguistic approach to oral assessment", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Van Moere", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Language Testing", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "325--344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Van Moere. 2012. A psycholinguistic ap- proach to oral assessment. Language Testing, 29:325-344.", |
|
"links": null |
|
}, |
|
"BIBREF67": { |
|
"ref_id": "b67", |
|
"title": "Elicited imitation: A brief overview", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Vinther", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "International Journal of Applied Linguistics", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "54--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Vinther. 2002. Elicited imitation: A brief overview. International Journal of Applied Lin- guistics, 12(1):54-73.", |
|
"links": null |
|
}, |
|
"BIBREF68": { |
|
"ref_id": "b68", |
|
"title": "Classification of Swedish learner essays by CEFR levels", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Volodina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Pil\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Alfter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of EUROCALL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "456--461", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Volodina, I. Pil\u00e1n, and D. Alfter. 2016. Classification of Swedish learner essays by CEFR levels. In Proceedings of EUROCALL, pages 456-461.", |
|
"links": null |
|
}, |
|
"BIBREF69": { |
|
"ref_id": "b69", |
|
"title": "Computerized Adaptive Testing: A Primer", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wainer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Wainer. 2000. Computerized Adaptive Testing: A Primer. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF70": { |
|
"ref_id": "b70", |
|
"title": "Protecting the integrity of computerized testing item pools. Educational Measurement: Issues and Practice", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "17--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. D. Way. 1998. Protecting the integrity of com- puterized testing item pools. Educational Mea- surement: Issues and Practice, 17(4):17-27.", |
|
"links": null |
|
}, |
|
"BIBREF71": { |
|
"ref_id": "b71", |
|
"title": "Application of computerized adaptive testing to educational problems", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Journal of Educational Measurement", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "361--375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. J. Weiss and G. G. Kingsbury. 1984. Applica- tion of computerized adaptive testing to edu- cational problems. Journal of Educational Measurement, 21:361-375.", |
|
"links": null |
|
}, |
|
"BIBREF72": { |
|
"ref_id": "b72", |
|
"title": "Challenges and opportunities of the CEFR for reimagining foreign language pedagogy", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Westhoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "The Modern Language Journal", |
|
"volume": "91", |
|
"issue": "4", |
|
"pages": "676--679", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Westhoff. 2007. Challenges and opportunities of the CEFR for reimagining foreign language pedagogy. The Modern Language Journal, 91(4):676-679.", |
|
"links": null |
|
}, |
|
"BIBREF73": { |
|
"ref_id": "b73", |
|
"title": "Text readability assessment for second language learners", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Kochmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Xia, E. Kochmar, and T. Briscoe. 2016. Text readability assessment for second language learners. In Proceedings of the Workshop on Building Educational Applications Using NLP, pages 12-22.", |
|
"links": null |
|
}, |
|
"BIBREF74": { |
|
"ref_id": "b74", |
|
"title": "Introduction to Semi-Supervised Learning", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Zhu and A. B. Goldberg. 2009. Introduction to Semi-Supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learn- ing. Morgan & Claypool.", |
|
"links": null |
|
}, |
|
"BIBREF75": { |
|
"ref_id": "b75", |
|
"title": "A recognition test of vocabulary using signal-detection measures, and some correlates of word and nonword recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zimmerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Broder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Shaughnessy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Underwood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Intelligence", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "5--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Zimmerman, P. K. Broder, J. J. Shaughnessy, and B. J. Underwood. 1977. A recognition test of vocabulary using signal-detection measures, and some correlates of word and nonword recognition. Intelligence, 1(1):5-31.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "The Rasch model IRF, showing the probability of a correct response p i (\u03b8) for three test item difficulties \u03b4 i , across examinee ability level \u03b8.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Example test item formats that use the vocabulary scale model to estimate difficulty.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Boxplots and correlation coefficients evaluating our machine-learned proficiency scale models. (a) Results for the weighted-softmax vocabulary model (n = 6,823). (b) Cross-validation results for the weighted-softmax passage model (n = 3,049). (c) Results applying the trained passage model, post-hoc, to a novel set of ''blind'' texts written by ESL experts at targeted CEFR levels (n = 2,349).", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Example test item formats that use the passage scale model to estimate difficulty.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"text": "Scatterplots and correlation coefficients showing how Duolingo English Test scores, based on our ML/NLP scale models, relate to other English proficiency measures. (a) Our test score rankings are nearly identical to those of traditional IRT \u03b8 estimates fit to real test session data (n = 21,351). (b-c) Our test scores correlate significantly with other high-stakes English assessments such as TOEFL iBT (n = 2,319) and IELTS (n = 991).", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: The Common European Framework of</td></tr><tr><td>Reference (CEFR) levels and our corresponding test</td></tr><tr><td>scale.</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Summary of language assessment item formats in this work. For each format, we indicate the machinelearned scale model used to predict item difficulty \u03b4 i , the linguistic skills it is known to predict (L = listening, R = reading, S = speaking, W = writing), and some of the supporting evidence from the literature.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Vocabulary scale model evaluations.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">\u2248 \u03b4 English Words</td><td>Pseudowords</td></tr><tr><td>90</td><td>loft, proceedings</td><td>fortheric, retray</td></tr><tr><td>70</td><td colspan=\"2\">brutal, informally insequent, vasera</td></tr><tr><td>50</td><td colspan=\"2\">delicious, unfairly anage, compatively</td></tr><tr><td>30</td><td>into, rabbit</td><td>knoce, thace</td></tr><tr><td>10</td><td>egg, mother</td><td>cload, eut</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Example words and pseudowords, rated for difficulty by the weighted-softmax vocabulary model.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "Passage ranking model evaluations.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"text": "for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione, and enzymes such as catalases and peroxidases.50In 1948, Harry Truman ran for a second term as President against Thomas Dewey. He was the underdog and everyone thought he would lose. The Chicago Tribune published a newspaper on the night of the election with the headline ''Dewey Defeats Truman.'' To everyone's surprise, Truman actually won.10Minneapolis is a city in Minnesota. It is next to St. Paul, Minnesota. St. Paul and Minneapolis are called the ''Twin Cities'' because they are right next to each other. Minneapolis is the biggest city in Minnesota with about 370,000 people. People who live here enjoy the lakes, parks, and river. The Mississippi River runs through the city.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>\u2248 \u03b4</td><td>Candidate Item Text</td></tr><tr><td>90</td><td>A related problem</td></tr><tr><td/><td>The CEFR results indicate</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"text": "Example WIKI paragraphs, rated for predicted difficulty by the weighted-softmax passage model.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"text": "reports macro-averaged AUC over the five ordinal breakpoints between CEFR levels.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Passage Scale Model</td><td>r cefr</td></tr><tr><td>Weighted-softmax regression</td><td>.76</td></tr><tr><td>w/o TATOEBA propagations</td><td>.75</td></tr><tr><td>w/o WIKI propagations</td><td>.74</td></tr><tr><td>w/o label-balancing</td><td>.72</td></tr><tr><td>Linear regression</td><td>.13</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF14": { |
|
"html": null, |
|
"text": "Test score reliability estimates.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Security Measure</td><td>Mean</td><td>Median</td></tr><tr><td>Item exposure rate</td><td>.10%</td><td>.08%</td></tr><tr><td>Test overlap rate</td><td>.43%</td><td><.01%</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF15": { |
|
"html": null, |
|
"text": "Test item bank security measures.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |