{
"paper_id": "W05-0210",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:45:19.675861Z"
},
"title": "Measuring Non-native Speakers' Proficiency of English by Using a Test with Automatically-Generated Fill-in-the-Blank Questions",
"authors": [
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Fumiaki",
"middle": [],
"last": "Sugaya",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Seiichi",
"middle": [],
"last": "Yamamoto",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes the automatic generation of Fill-in-the-Blank Questions (FBQs) together with testing based on Item Response Theory (IRT) to measure English proficiency. First, the proposal generates an FBQ from a given sentence in English. The position of a blank in the sentence is determined, and the word at that position is considered as the correct choice. The candidates for incorrect choices for the blank are hypothesized through a thesaurus. Then, each of the candidates is verified by using the Web. Finally, the blanked sentence, the correct choice and the incorrect choices surviving the verification are together laid out to form the FBQ. Second, the proficiency of non-native speakers who took the test consisting of such FBQs is estimated through IRT. Our experimental results suggest that: (1) the generated questions plus IRT estimate the non-native speakers' English proficiency; (2) on the other hand, the test can be completed almost perfectly by English native speakers; and (3) the number of questions can be reduced by using item information in IRT. The proposed method provides teachers and testers with a tool that reduces time and expenditure for testing English proficiency. * See the detailed discussion in Section 6.",
"pdf_parse": {
"paper_id": "W05-0210",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes the automatic generation of Fill-in-the-Blank Questions (FBQs) together with testing based on Item Response Theory (IRT) to measure English proficiency. First, the proposal generates an FBQ from a given sentence in English. The position of a blank in the sentence is determined, and the word at that position is considered as the correct choice. The candidates for incorrect choices for the blank are hypothesized through a thesaurus. Then, each of the candidates is verified by using the Web. Finally, the blanked sentence, the correct choice and the incorrect choices surviving the verification are together laid out to form the FBQ. Second, the proficiency of non-native speakers who took the test consisting of such FBQs is estimated through IRT. Our experimental results suggest that: (1) the generated questions plus IRT estimate the non-native speakers' English proficiency; (2) on the other hand, the test can be completed almost perfectly by English native speakers; and (3) the number of questions can be reduced by using item information in IRT. The proposed method provides teachers and testers with a tool that reduces time and expenditure for testing English proficiency. * See the detailed discussion in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "English has spread so widely that 1,500 million people, about a quarter of the world's population, speak it, though at most about 400 million speak it as their native language (Crystal, 2003) . Thus, English education for non-native speakers both now and in the near future is of great importance.",
"cite_spans": [
{
"start": 176,
"end": 191,
"text": "(Crystal, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The progress of computer technology has produced electronic tools for language learning, called Computer-Assisted Language Learning (CALL), and for language testing, called Computer-Based Testing (CBT) or Computer-Adaptive Testing (CAT). However, no computerized support for producing a test, a collection of questions for evaluating language proficiency, has emerged to date. * Fill-in-the-Blank Questions (FBQs) are widely used from the classroom level to far larger scales to measure people's proficiency at English as a second language. Examples of such tests include TOEFL (Test Of English as a Foreign Language, http://www.ets.org/toefl/) and TOEIC (Test Of English for International Communication, http://www.ets.org/toeic/).",
"cite_spans": [
{
"start": 376,
"end": 377,
"text": "*",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A test comprising FBQs has merits in that (1) it is easy for test-takers to input answers, (2) computers can mark them, so marking is consistent and objective, and (3) they are suitable for modern testing theory, namely Item Response Theory (IRT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because writing incorrect choices that distract only the non-proficient test-taker is regarded as a highly skilled business (Alderson, 1996), FBQs have been written by human experts. Thus, test construction is time-consuming and expensive. As a result, neither utilizing up-to-date texts for question writing nor tailoring tests to individual students is practical.",
"cite_spans": [
{
"start": 131,
"end": 147,
"text": "(Alderson, 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve the problems of time and expenditure, this paper proposes a method for generating FBQs using a corpus, a thesaurus, and the Web. Experiments have shown that the proficiency estimated through IRT with generated FBQs highly correlates with non-native speakers' real proficiency. This system not only provides us with a quick and inexpensive testing method, but it also features the following advantages: (I) It provides \"anyone\" individually with up-to-date and interesting questions for self-teaching. We have implemented a program that downloads any Web page such as a news site and generates questions from it. (II)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It also enables on-demand testing at \"anytime and anyplace.\" We have implemented a system that operates on a mobile phone. Questions are generated and pooled in the server, and upon a user's request, questions are downloaded. CAT (Wainer, 2000) is then conducted on the phone. The system for mobile phones is scheduled to be deployed in Japan in May 2005.",
"cite_spans": [
{
"start": 230,
"end": 244,
"text": "(Wainer, 2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. Section 2 introduces a method for making FBQs, Section 3 explains how to estimate test-takers' proficiency, and Section 4 presents the experiments that demonstrate the effectiveness of the proposal. Section 5 provides some discussion, and Section 6 explains the differences between our proposal and related work, followed by concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will review an FBQ, and then explain our method for producing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Method",
"sec_num": "2.1"
},
{
"text": "FBQs are one of the most popular types of questions in testing. Figure 1 shows a typical sample consisting of a partially blanked English sentence and four choices for filling the blank. The tester ordinarily assumes that exactly one choice is correct (in this case, b)) and the other three choices are incorrect. The latter are often called distracters, because they fulfill a role to distract the less proficient test-takers.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fill-in-the-Blank Question (FBQ)",
"sec_num": null
},
{
"text": "Using question 1 above, the outline of generation is presented below (Figure 2). A seed sentence (in this case, \"I only have to keep my head above water one more week.\") is input from the designated source, e.g., a corpus or a Web page such as a well-known news site. * The seed sentence is a correct English sentence that is decomposed into a sentence with a blank (blanked sentence) and the correct choice for the blank. After the seed sentence is analyzed morphologically by a computer, according to the testing knowledge * the blank position of the sentence is determined. In this paper's experiment, the verb of the seed is selected, and we obtain the blanked sentence \"I only have to ______ my head above water one more week.\" and the correct choice \"keep.\" [b] To be a good distracter, the candidates must maintain the grammatical characteristics of the correct choice, and these should be similar in meaning \u2020 . Using a thesaurus \u2021 , words similar to the correct choice are listed as candidates, e.g., \"clear,\" \"guarantee,\" \"promise,\" \"reserve,\" and \"share\" for the above \"keep.\" [c] Verify (see Section 2.3 for details) the incorrectness of the sentence restored by each candidate, and if it is not incorrect (in this case, \"clear\" and \"share\"), the candidate is rejected.",
"cite_spans": [
{
"start": 763,
"end": 766,
"text": "[b]",
"ref_id": null
},
{
"start": 1090,
"end": 1093,
"text": "[c]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 69,
"end": 79,
"text": "(Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Flow of generation",
"sec_num": "2.2"
},
{
"text": "If a sufficient number (in this paper, three) of candidates remain, form a question by randomizing the order of all the choices (\"keep,\" \"guarantee,\" \"promise,\" and \"reserve\"); otherwise, another seed sentence is input and the process restarts from step [a].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "[d]",
"sec_num": null
},
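The generation flow above (blank the verb, hypothesize thesaurus candidates, verify each against the Web, lay out the surviving choices or give up) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy thesaurus, the stubbed `hits` counter, and the `generate_fbq` helper are assumptions standing in for the paper's real thesaurus and search-engine calls.

```python
import random

# Toy stand-ins for the paper's resources (assumptions, not real data):
# a thesaurus lookup and the set of candidates whose restored sentence
# would get zero Web hits.
THESAURUS = {"keep": ["clear", "guarantee", "promise", "reserve", "share"]}
ZERO_HIT = {"guarantee", "promise", "reserve"}

def hits(sentence, word):
    """Pretend Web search: 0 hits iff the candidate is in ZERO_HIT."""
    return 0 if word in ZERO_HIT else 1

def generate_fbq(seed, verb, n_distracters=3):
    """Blank the verb, keep thesaurus candidates whose restored
    sentence has zero Web hits, and lay out the shuffled choices."""
    blanked = seed.replace(verb, "______", 1)
    survivors = [w for w in THESAURUS.get(verb, [])
                 if hits(blanked.replace("______", w), w) == 0]
    if len(survivors) < n_distracters:
        return None  # give up; the caller supplies another seed sentence
    choices = [verb] + survivors[:n_distracters]
    random.shuffle(choices)
    return {"stem": blanked, "choices": choices, "answer": verb}

q = generate_fbq("I only have to keep my head above water one more week.",
                 "keep")
```

A real system would replace `hits` with an actual search-engine query and draw seed sentences from a corpus or a downloaded Web page.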
{
"text": "In FBQs, by definition, (1) the blanked sentence restored with the correct choice is correct, and (2) the blanked sentence restored with the distracter must be incorrect. In order to generate an FBQ, the incorrectness of the sentence restored by each distracter candidate must be verified and if the combination is not incorrect, the candidate is rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorrectness Verification",
"sec_num": "2.3"
},
{
"text": "The Web includes all manner of language data in vast quantities, which is easy for anyone to access through a networked computer. Recently, exploitation of the Web for various natural language applications has been rising (Grefenstette, 1999; Turney, 2001; Kilgarriff and Grefenstette, 2003; Tonoike et al., 2004).",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Grefenstette, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 241,
"end": 254,
"text": "Turney, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 255,
"end": 289,
"text": "Kilgarriff and Grefenstette, 2003;",
"ref_id": "BIBREF9"
},
{
"start": 290,
"end": 311,
"text": "Tonoike et al., 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "We also propose a Web-based approach. We dare to assume that if a sentence appears on the Web, it is correct; otherwise, the sentence is unlikely to be correct, in that no such sentence is written anywhere on the Web despite the variety and quantity of data on it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "* Testing knowledge tells us what part of the seed sentence should be blanked. For example, we selected the verb of the seed because it is one of the basic types of blanked words in popular FBQs such as in TOEIC. Figure 3 illustrates verification based on retrieval from the Web. Here, s(x) is the blanked sentence, s(w) denotes the sentence restored with the word w, and hits(y) represents the number of documents retrieved from the Web for the query y.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "This can be a word of another POS (Part-Of-Speech). For this, we can use knowledge in the field of second-language education. Previous studies on errors in English usage by Japanese native speakers, such as (Izumi and Isahara, 2004), unveiled patterns of errors specific to Japanese, e.g., (1) article selection errors, which result from the fact that there are no articles in Japanese; (2) preposition selection errors, which result from the fact that some Japanese counterparts have broader meanings; (3) adjective selection errors, which result from a mismatch of meaning between Japanese words and their counterparts. Such knowledge may help generate questions that are harder for Japanese learners of English.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Izumi and Isahara, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "\u2020 There are various aspects other than meaning, for example, spelling, pronunciation, translation and so on. Depending on the aspect, lexical information sources other than a thesaurus should be consulted. \u2021 We used an in-house English thesaurus whose hierarchy is based on one of the off-the-shelf thesauruses for Japanese, called Ruigo-Shin-Jiten (Ohno and Hamanishi, 1984). In the above examples, the original word \"keep\" expresses two different concepts: (1) possession-or-disposal, which is shared by the words \"clear\" and \"share,\" and (2) promise, which is shared by the words \"guarantee,\" \"promise,\" and \"reserve.\" Since this depends on the thesaurus used, some may sense a slight discomfort at these concepts. If a different thesaurus is used, the distracter candidates may differ.",
"cite_spans": [
{
"start": 363,
"end": 388,
"text": "(Ohno and Hamanishi, 1984",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "If hits(s(w)) is small, then the sentence restored with the word w is unlikely, and thus the word w should be a good distracter. If hits(s(w)) is large, then the sentence restored with the word w is likely, and thus the word w is unlikely to be a good distracter and is rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "We used the strongest condition. If hits(s(w)) is zero, then the sentence restored with the word w is unlikely, and thus the word w should be a good distracter. If hits(s(w)) is not zero, then the sentence restored with the word w is likely, and thus the word w is unlikely to be a good distracter and is rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Hit Sentence",
"sec_num": null
},
{
"text": "Item Response Theory (IRT) is the basis of modern language tests such as TOEIC, and enables Computerized Adaptive Testing (CAT). Here, we briefly introduce IRT. IRT, in which a question is called an item, calculates the test-takers' proficiency based on the answers for items of the given test (Embretson, 2000) .",
"cite_spans": [
{
"start": 294,
"end": 311,
"text": "(Embretson, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Proficiency Item Response Theory (IRT)",
"sec_num": "3.1"
},
{
"text": "It is often the case that retrieval by sentence does not work. Instead of a sentence, a sequence of words around a blank position, beginning with a content word (or sentence head) and ending with a content word (or sentence tail) is passed to a search engine automatically. For the abovementioned sample, the sequence of words passed to the engine is \"I only have to clear my head\" and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval NOT By Sentence",
"sec_num": null
},
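The query construction described above (a word sequence around the blank, starting at the nearest content word or the sentence head and ending at the nearest content word or the sentence tail) might be sketched as below. The stopword list standing in for the content-word test is a toy assumption; the paper determines content words via morphological analysis.

```python
# Toy content-word test (an assumption): anything not in this small
# stopword list counts as a content word.
STOPWORDS = {"i", "only", "have", "to", "my"}

def is_content(word):
    return word.lower() not in STOPWORDS

def query_span(words, blank_index):
    """Expand left and right from the (restored) blank position until a
    content word is hit, or the sentence head/tail is reached, and
    return that word sequence as the search query."""
    start = blank_index
    while start > 0:
        start -= 1
        if is_content(words[start]):
            break
    end = blank_index
    while end < len(words) - 1:
        end += 1
        if is_content(words[end]):
            break
    return " ".join(words[start:end + 1])
```

For the paper's example, restoring the candidate "clear" and querying the span around it yields "I only have to clear my head".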
{
"text": "The basic idea is the item response function, which relates the probability of test-takers answering particular items correctly to their proficiency. The item response functions are modeled as logistic curves making an S-shape, which take the form (1) for item i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval NOT By Sentence",
"sec_num": null
},
{
"text": "P_i(\u03b8) = 1 / (1 + exp(-a_i(\u03b8 - b_i))) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Search",
"sec_num": null
},
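Equation (1) is the standard two-parameter logistic item response function. A minimal sketch, assuming theta is the test-taker's proficiency and a_i, b_i are the item's discrimination and difficulty parameters:

```python
import math

def p_correct(theta, a_i, b_i):
    """Two-parameter logistic item response function, Equation (1):
    P_i(theta) = 1 / (1 + exp(-a_i * (theta - b_i)))."""
    return 1.0 / (1.0 + math.exp(-a_i * (theta - b_i)))
```

At theta = b_i the probability of a correct answer is exactly 0.5; larger a_i makes the S-curve rise more steeply around that point, and larger b_i shifts the curve to the right.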
{
"text": "We can use any search engine, though we have been using Google since February 2004. At that point in time, Google covered an enormous four billion pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Search",
"sec_num": null
},
{
"text": "The test-taker parameter, \u03b8, shows the proficiency of the test-taker, with higher values indicating higher performance. The \"correct\" hits may come from non-native speakers' websites and contain invalid language usage. To increase reliability, we could restrict Google searches to Websites with URLs based in English-speaking countries, although we have not done so yet. There is another concern: even if sentence fragments cannot be located on the Web, it does not necessarily mean they are illegitimate. Thus, the proposed verification based on the Web is not perfect; the point, however, is that with such limitations, the generated questions are useful for estimating proficiency as demonstrated in a later section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Search",
"sec_num": null
},
{
"text": "Each of the item parameters, a i and b i , controls the shape of the item response function. The a parameter, called discrimination, indexes how steeply the item response function rises. The b parameter is called difficulty. Difficult items feature larger b values and the item response functions are shifted to the right. These item parameters are usually estimated by a maximal likelihood method. For computations including the estimation, there are many commercial programs such as BILOG (http://www.assess.com/) available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Search",
"sec_num": null
},
{
"text": "Setting aside the convenience provided by the off-the-shelf search engine, another search specialized for this application is possible, although the current implementation is fast enough to automate generation of FBQs, and the demand to accelerate the search is not strong. Rather, the problem of time needed for test construction has been reduced by our proposal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "It is important to estimate the proficiency of the test-taker by using as few items as possible. For this, we have proposed a method based on item information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "Expression (2) is the item information of item i at \u03b8 j , the proficiency of the test-taker j, which indicates how much measurement discrimination an item provides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "The throughput depends on the text from which a seed sentence comes and the network traffic when the Web is accessed. Empirically, one FBQ is obtained in 20 seconds on average and the total number of FBQs in a day adds up to over 4,000 on a single computer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "The procedure is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "1. Initialize I to the set of all generated FBQs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "2. According to Equation (3), we select the item whose contribution to test information is maximal. 3. We eliminate the selected item from I according to Equation (4). 4. If I is empty, we obtain the ordered list of effective items; otherwise, go back to step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
{
"text": "I_i(\u03b8_j) = a_i^2 P_i(\u03b8_j)(1 - P_i(\u03b8_j)) (2); \u00ee = argmax_{i \u2208 I} \u2211_j I_i(\u03b8_j) (3); I = I - {\u00ee} (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing test size by selection of effective items",
"sec_num": "3.2"
},
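Equations (2)-(4) amount to ordering items by their summed information over the test-takers' proficiencies. A sketch of that selection loop, with each item represented as an (a, b) parameter pair (a hypothetical encoding, not from the paper):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function, Equation (1)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Item information, Equation (2): a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def rank_items(items, thetas):
    """Greedily pick the remaining item with maximal summed information
    over the test-takers' proficiencies (Eq. (3)), remove it from the
    pool (Eq. (4)), and repeat until the pool is empty."""
    remaining = list(items)
    ordered = []
    while remaining:
        best = max(remaining,
                   key=lambda ab: sum(item_information(t, *ab)
                                      for t in thetas))
        ordered.append(best)
        remaining.remove(best)
    return ordered
```

Since each item's score in Equation (3) does not depend on the items already selected, the greedy loop reduces to sorting items by total information; the loop is kept here to mirror steps 2-4 above.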
{
"text": "The FBQs for the experiment were generated in February of 2004. Seed sentences were obtained from ATR's corpus (Kikui et al., 2003) of the business and travel domains. The vocabulary of the corpus comprises about 30,000 words. Sentences are relatively short, with the average length being 6.47 words. For each domain, 5,000 questions were generated automatically, and each question consists of an English sentence with one blank and four choices.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Kikui et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": null
},
{
"text": "We used the TOEIC score as the experiment's proficiency measure, and collected 100 Japanese subjects whose TOEIC scores ranged from 400 to just under 900. The actual range for TOEIC scores is 10 to 990. Our subjects covered the dominant portion * of test-takers for TOEIC in Japan, excluding the highest and lowest extremes. \u2020 We had the subjects answer 320 randomly selected questions from the 10,000 mentioned above. The raw marks were as follows: the average \u2021 mark was 235.2 (73.5%); the highest mark was 290 (90.6%); and the lowest was 158 (49.4%). This suggests that our FBQs are sensitive to test-takers' proficiency. In Figure 4, the y-axis represents estimated proficiency according to IRT (Section 3.1) and generated questions, while the x-axis is the real TOEIC score of each subject.",
"cite_spans": [
{
"start": 332,
"end": 333,
"text": "\u2020",
"ref_id": null
}
],
"ref_spans": [
{
"start": 634,
"end": 642,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with non-native speakers",
"sec_num": null
},
{
"text": "As the graph illustrates, the IRT-estimated proficiency (\u03b8) and real TOEIC scores of subjects correlate highly with a co-efficiency of about 80%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with non-native speakers",
"sec_num": null
},
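The reported figure is the Pearson correlation coefficient between the IRT-estimated proficiencies and the real TOEIC scores. For reference, a minimal sketch of that standard computation (not code from the paper):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences, e.g. estimated proficiencies and real TOEIC scores."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```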
{
"text": "For comparison we refer to CASEC (http://casec.evidus.com/), an off-the-shelf test consisting of human-made questions and IRT. Its correlation coefficient with real TOEIC scores is reported to be 86%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with non-native speakers",
"sec_num": null
},
{
"text": "This means the proposed automatically generated questions are promising for measuring English proficiency, achieving a nearly competitive level with human-made questions but with a few reservations: (1) whether the difference of 6% is large depends on the standpoint of possible users; (2) as for the number of questions to be answered, our proposal uses 320 questions in the experiments, while TOEIC uses 200 questions and CASEC uses only about 60 questions; (3) the proposed method uses FBQs only whereas CASEC and TOEIC use various types of questions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with non-native speakers",
"sec_num": null
},
{
"text": "To examine the quality of the generated questions, we asked a single subject \u00a7 who is a native speaker of English to answer 4,000 questions. The native speaker largely agreed with our generation, determining the correct choices (type I). \u2020 We have covered only the range of TOEIC scores from 400 to 900 due to the expense of the experiment. In this restricted experiment, we do not claim that our proficiency estimation method covers the full range of TOEIC scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with a native speaker",
"sec_num": "4.2"
},
{
"text": "\u00a7 Please note that the analysis is based on a single native speaker; thus, we need further analysis with multiple subjects. \u2021 The standard deviation was 29.8 (9.3%). The rate was 93.50%, better than 90.6%, the highest mark among the non-native speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with a native speaker",
"sec_num": "4.2"
},
{
"text": "We present the problematic cases here. Type II is caused by the seed sentence being incorrect for the native speaker, so a distracter is bad because it is actually correct; or, as in type III, the question consists of ambiguous choices. Type III is caused by some generated distracters being correct; therefore, the choices are ambiguous. Type IV is caused by the seed sentence being incorrect and the generated distracters also being incorrect; therefore, the question cannot be answered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with a native speaker",
"sec_num": "4.2"
},
{
"text": "Type V is caused by the seed sentence being nonsense to the native speaker; the question, therefore, cannot be answered. Cases with bad seed sentences (portions of II, IV, and V) require cleaning of the corpus by a native speaker, and cases with bad distracters (portions of II and III) require refinement of the proposed generation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with a native speaker",
"sec_num": "4.2"
},
{
"text": "Since the questions produced by this method can be flawed in ways that make them unanswerable even by native speakers (about 6.5% of the time) due to the above-mentioned reasons, it is difficult to use this method for high-stakes testing applications, although it is useful for estimating proficiency as explained in the previous section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment with a native speaker",
"sec_num": "4.2"
},
{
"text": "This section explains the on-demand generation of FBQs according to individual preference, an immediate extension and a limitation of our proposed method, and finally touches on free-format Q&A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.2",
"sec_num": null
},
{
"text": "The method provides teachers and testers with a tool that reduces time and expenditure. Furthermore, the method can deal with any text. For example, up-to-date and interesting materials such as news articles of the day can be a source of seed sentences ( Figure 6 is a sample generated from an article (http://www.japantimes.co.jp/) on an earthquake that occurred in Japan), which enables realization of a personalized learning environment. We have generated questions from over 100 documents on various genres such as novels, speeches, academic papers and so on found in the enormous collection of e-Books provided by Project Gutenberg (http://www.gutenberg.org/). Figure 5 shows the relationship between reduction of the test size according to the method explained in Section 3.2 and the estimated proficiency based on the reduced test. The x-axis represents the size of the reduced test in number of items, while the yaxis represents the correlation coefficient (R) between estimated proficiency and real TOEIC score.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 666,
"end": 674,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Automatic FBQ Construction",
"sec_num": null
},
{
"text": "In Section 2.2, we mentioned a constraint that a good distracter should maintain the grammatical characteristics of the correct choice originating in the seed sentence. The question checks not the grammaticality but the semantic/pragmatic correctness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variation of Fill-in-the-Blank Questions for Grammar Checking",
"sec_num": null
},
{
"text": "We can generate another type of FBQ by slightly modifying step [b] of the procedure in Section 2.2 to retain the stem of the original word w and vary the surface form of the word w. This modified procedure generates a question that checks the grammatical ability of the test-takers. Figure 7 shows a sample of this kind of question taken from a TOEIC-test textbook (Educational Testing Service, 2002).",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[b]",
"ref_id": null
},
{
"start": 365,
"end": 400,
"text": "(Educational Testing Service, 2002)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "A Variation of Fill-in-the-Blank Questions for Grammar Checking",
"sec_num": null
},
{
"text": "The questions dealt with in this paper concern testing reading ability, but these questions are not suitable for testing listening ability because they are presented visually and cannot be pronounced. To test listening ability, as in TOEIC, other types of questions should be used, and automated generation of them is yet to be developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6 Limitation of the Addressed FBQs",
"sec_num": "5.4"
},
{
"text": "Besides measuring one's ability to receive information in a foreign language, which has been addressed so far in this paper, it is important to measure a person's ability to transmit information in a foreign language. For that purpose, tests for translating, writing, or speaking in a free format have been actively studied by many researchers (Shermis, 2003; Yasuda, 2004) .",
"cite_spans": [
{
"start": 344,
"end": 359,
"text": "(Shermis, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 360,
"end": 373,
"text": "Yasuda, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Free-Format Q&A",
"sec_num": null
},
{
"text": "Here, we explain other studies on the generation of multiple-choice questions for language learning. There are a few previous studies on computer-based generation such as Mitkov (2003) and Wilson (1997) .",
"cite_spans": [
{
"start": 171,
"end": 184,
"text": "Mitkov (2003)",
"ref_id": "BIBREF10"
},
{
"start": 189,
"end": 202,
"text": "Wilson (1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work *",
"sec_num": null
},
{
"text": "A computer can generate questions by deleting words or parts of words randomly or at every N-th word from text. Test-takers are requested to restore the word that has been deleted. This is called a \"cloze test.\" The effectiveness of a \"cloze test\" or its derivatives is a matter of controversy among researchers of language testing such as Brown (1993) and Alderson (1996) .",
"cite_spans": [
{
"start": 340,
"end": 352,
"text": "Brown (1993)",
"ref_id": "BIBREF1"
},
{
"start": 357,
"end": 372,
"text": "Alderson (1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "7 Conclusion Cloze Test",
"sec_num": "6.2"
},
{
"text": "N.B. The correct answer is c) care.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7 Conclusion Cloze Test",
"sec_num": "6.2"
},
{
"text": "Because the equipment is very delicate, it must be handled with ______\uff0e a) caring b) careful c) care d) carefully",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question 3 (FBQ)",
"sec_num": null
},
{
"text": "Mitkov 2003proposed a computer-aided procedure for generating multiple-choice questions from textbooks. The differences from our proposal are that (1) Mitkov's method generates questions not about language usage but about facts explicitly stated in a text \u2020 ; (2) Mitkov uses techniques such as term extraction, parsing, transformation of trees, which are different from our proposal; and (3) Mitkov does not use IRT while we use it. This paper proposed the automatic construction of Fill-in-the-Blank Questions (FBQs). The proposed method generates FBQs using a corpus, a thesaurus, and the Web. The generated questions and Item Response Theory (IRT) then estimate secondlanguage proficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tests on Facts",
"sec_num": null
},
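The FBQ pipeline summarized above (blank a word, hypothesize distractors from a thesaurus, discard any candidate the Web accepts as plausible) could be sketched roughly as follows; the toy thesaurus and hit-count function are stand-ins for the real resources, not the paper's implementation:

```python
def make_fbq(sentence, answer, thesaurus, hits):
    """Sketch of FBQ generation: blank the target word, draw distractor
    candidates from a thesaurus, and keep only candidates that the Web
    check rejects (zero hits when substituted into the blank), so every
    surviving choice except the answer is verifiably incorrect.
    """
    stem = sentence.replace(answer, "______", 1)
    candidates = [w for w in thesaurus.get(answer, []) if w != answer]
    distractors = [w for w in candidates
                   if hits(stem.replace("______", w)) == 0]
    return stem, [answer] + distractors

# Toy stand-ins for the thesaurus and the Web hit count.
toy_thesaurus = {"keep": ["reserve", "guarantee", "promise", "hold"]}
toy_hits = lambda s: 120 if "hold" in s else 0  # "hold ... above water" is attested

stem, choices = make_fbq("I only have to keep my head above water.",
                         "keep", toy_thesaurus, toy_hits)
```

Here "hold" is dropped because the Web check finds it plausible in the blank, which mirrors the paper's verification step for incorrect choices.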
{
"text": "Experiments have shown that the proposed method is effective in that the estimated proficiency highly correlates with non-native speakers' real proficiency as represented by TOEIC scores; native-speakers can achieve higher scores than non-native speakers. It is possible to reduce the size of the test by removing non-discriminative questions with item information in IRT. \u2020 Based on a fact stated in a textbook like, \"A prepositional phrase at the beginning of a sentence constitutes an introductory modifier,\" Mitkov generates a question such as, \"What does a prepositional phrase at the beginning of a sentence constitute? i. a modifier that accompanies a noun; ii. an associated modifier; iii. an introductory modifier; iv. a misplaced modifier.\" * There are many works on item generation theory (ITG) such as Irvine and Kyllonen (2002) , although we do not go any further into the area. We focus only on multiple-choice questions for language learning in this paper.",
"cite_spans": [
{
"start": 814,
"end": 840,
"text": "Irvine and Kyllonen (2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tests on Facts",
"sec_num": null
},
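The test-shortening step relies on item information. Under the common two-parameter logistic (2PL) IRT model, an item with discrimination a and difficulty b contributes information I(theta) = a^2 * P(theta) * (1 - P(theta)), which peaks at a^2/4 when theta = b; items whose information stays low everywhere barely affect the proficiency estimate and can be dropped. A minimal sketch, assuming the 2PL model (the paper does not fix the model or the pruning threshold used here):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def prune_items(items, thetas, min_info=0.1):
    """Drop (a, b) items whose information never exceeds min_info on the
    ability grid `thetas` -- the non-discriminative questions."""
    return [(a, b) for a, b in items
            if max(item_information(t, a, b) for t in thetas) >= min_info]
```

For example, an item with a = 0.2 peaks at only 0.01 bits of information and would be pruned, while an item with a = 1.2 (peak 0.36) would be kept.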
{
"text": "The method provides teachers, testers, and test takers with novel merits that enable low-cost testing of second-language proficiency and provides learners with up-to-date and interesting materials suitable for individuals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tests on Facts",
"sec_num": null
},
{
"text": "Further research should be done on (1) largescale evaluation of the proposal, (2) application to different languages such as Chinese and Korean, and (3) generation of different types of questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tests on Facts",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors' heartfelt thanks go to anonymous reviewers for providing valuable suggestions and Kadokawa-Shoten for providing the thesaurus named Ruigo-Shin-Jiten. The research reported here was supported in part by a contract with the NiCT entitled, \"A study of speech dialogue translation technology based on a large corpus.\" It was also supported in part by the Grants-in-Aid for Scientific Research (KAKENHI), contract with MEXT numbered 16300048. The study was conducted in part as a cooperative research project by KDDI and ATR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Do corpora have a role in language assessment? Using Corpora for Language Research",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Alderson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Short",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Longman",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "248--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alderson, Charles. 1996. Do corpora have a role in language assessment? Using Corpora for Language Research, eds. Thomas, J. and Short, M., Longman: 248-259.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "What are the characteristics of natural cloze tests?",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Brown",
"suffix": ""
}
],
"year": 1993,
"venue": "Language Testing",
"volume": "10",
"issue": "",
"pages": "93--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, J. D. 1993. What are the characteristics of natu- ral cloze tests? Language Testing 10: 93-116.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "English as a Global Language",
"authors": [
{
"first": "David",
"middle": [],
"last": "Crystal",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crystal, David. 2003. English as a Global Language, (Second Edition). Cambridge University Press: 212.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TOEIC koushiki gaido & mondaishu. IIBC",
"authors": [],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Educational Testing Service 2002. TOEIC koushiki gaido & mondaishu. IIBC: 249.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Item Response Theory for Psychologists",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Embretson",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Embretson, Susan et al. 2000. Item Response Theory for Psychologists. LEA: 371.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The WWW as a resource for example-based MT tasks. ASLIB \"Translating and the Computer\" conference",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G. 1999. The WWW as a resource for ex- ample-based MT tasks. ASLIB \"Translating and the Computer\" conference.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Item generation for test development",
"authors": [
{
"first": "H",
"middle": [
"S"
],
"last": "Irvine",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Kyllonen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irvine, H. S., and Kyllonen, P. C. (2002). Item genera- tion for test development. LEA: 412.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Investigation into language learners' acquisition order based on the error analysis of the learner corpus",
"authors": [
{
"first": "E",
"middle": [],
"last": "Izumi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Pacific-Asia Conference on Language, Information and Computation (PACLIC) 18 Satellite Workshop on E-Learning, Japan",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Izumi, E., and Isahara, H. (2004). Investigation into language learners' acquisition order based on the er- ror analysis of the learner corpus. In Proceedings of Pacific-Asia Conference on Language, Information and Computation (PACLIC) 18 Satellite Workshop on E-Learning, Japan. (in printing)",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Creating Corpora for Speech-to-Speech Translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takaezawa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Special Session \"Multilingual Speech-to-Speech Translation\" of EuroSpeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kikui, G., Sumita, E., Takaezawa, T. and Yamamoto, S., \"Creating Corpora for Speech-to-Speech Transla- tion,\" Special Session \"Multilingual Speech-to- Speech Translation\" of EuroSpeech, 2003.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Special Issue on the WEB as Corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "3",
"pages": "333--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilgarriff, A. and Grefenstette, G. 2003. Special Issue on the WEB as Corpus. Computational Linguistics 29 (3): 333-502.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Computer-Aided Generation of Multiple-Choice Tests",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL 2003 Workshop: Building Educational Applications Using Natural Language Processing",
"volume": "",
"issue": "",
"pages": "17--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitkov, Ruslan and Ha, Le An. 2003. Computer-Aided Generation of Multiple-Choice Tests. HLT-NAACL 2003 Workshop: Building Educational Applications Using Natural Language Processing: 17-22.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automated Essay Scoring",
"authors": [
{
"first": "M",
"middle": [
"D"
],
"last": "Shermis",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Burstein",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shermis, M. D. and Burstein. J. C. 2003. Automated Essay Scoring. LEA: 238.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Answer Validation by Keyword Association",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tonoike",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Utsuro",
"suffix": ""
}
],
"year": 2004,
"venue": "IPSJ, SIGNL",
"volume": "161",
"issue": "",
"pages": "53--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tonoike, M., Sato, S., and Utsuro, T. 2004. Answer Validation by Keyword Association. IPSJ, SIGNL, 161: 53-60, (in Japanese).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining the Web for synonyms: PMI-IR vs",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2001,
"venue": "LSA on TOEFL. ECML",
"volume": "",
"issue": "",
"pages": "491--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P.D. 2001. Mining the Web for synonyms: PMI- IR vs. LSA on TOEFL. ECML 2001: 491-502.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Conputerized Adaptive Testing: A Primer",
"authors": [
{
"first": "Howard",
"middle": [],
"last": "Wainer",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wainer, Howard et al. 2000. Conputerized Adaptive Testing: A Primer, (Second Edition). LEA: 335.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Automatic Generation of CALL exercises from general corpora",
"authors": [
{
"first": "E",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fligelstone",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mcenery",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 1997,
"venue": "Teaching and Language Corpora",
"volume": "",
"issue": "",
"pages": "116--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, E. 1997. The Automatic Generation of CALL exercises from general corpora, in eds. Wichmann, A., Fligelstone, S., McEnery, T., Knowles, G., Teaching and Language Corpora, Harlow: Long- man:116-130.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Measuring of English Language Proficiency using MT Evaluation Technology",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yasuda",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004 eLearning for Computational Linguistics and Computational Linguistics for eLearning",
"volume": "",
"issue": "",
"pages": "53--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuda, K., Sugaya, F., Sumita, E., Takezawa, T., Kikui, G. and Yamamoto, S. 2004. Automatic Measuring of English Language Proficiency using MT Evaluation Technology, COLING 2004 eLearning for Computa- tional Linguistics and Computational Linguistics for eLearning: 53-60.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "A sample Fill-in-the-Blank Question (FBQ) Question 1 (FBQ) I only have to _______ my head above water one more week\uff0e a) reserve b) keep c) guarantee d) promise N.B. the correct choice is b) keep.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Flow generating Fill-In-The-Blank Question (",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Incorrectness and Hits on the WebBlanked sentence: s (x)= \"I only have to ____ my head above water one more week\uff0e\" Hits of incorrect choice candidates:",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Figure 4: IRT-Estimated Proficiency (\u03b8) vs. Real TOEIC Score",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "On-demand construction -a sample question from a Web news article in The Japan Times on \"an earthquake\" N.B. The correct answer is c) originated.Question 2 (FBQ)The second quake 10 km below the seabed some 130 km east of Cape Shiono. a) put b) came c) originated d) opened",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "A variation on fill-in-the-blank questions 5.3",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>).</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "Responses of a Native speaker",
"content": "<table><tr><td>Type</td><td>Explanation</td><td/><td>Count</td><td>%</td></tr><tr><td>I</td><td>Single</td><td>Match</td><td>3,740</td><td>93.50</td></tr><tr><td>II</td><td>Selection</td><td>No match</td><td>55</td><td>1.38</td></tr><tr><td>III</td><td/><td>Ambiguous Choices</td><td>70</td><td>1.75</td></tr><tr><td>IV</td><td>No Selection</td><td>No Correct Choice</td><td>45</td><td>1.13</td></tr><tr><td>V</td><td/><td>Nonsense</td><td>90</td><td>2.25</td></tr></table>",
"num": null
}
}
}
}