{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:10:32.699133Z"
},
"title": "Implicit Phenomena in Short-Answer Scoring Data",
"authors": [
{
"first": "Marie",
"middle": [],
"last": "Bexte",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {
"settlement": "Duisburg",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {
"settlement": "Duisburg",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {
"settlement": "Duisburg",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Short-answer scoring is the task of assessing the correctness of a short text given as response to a question that can come from a variety of educational scenarios. As only content, not form, is important, the exact wording including the explicitness of an answer should not matter. However, many state-of-the-art scoring models heavily rely on lexical information, be it word embeddings in a neural network or n-grams in an SVM. Thus, the exact wording of an answer might very well make a difference. We therefore quantify to what extent implicit language phenomena occur in short answer datasets and examine the influence they have on automatic scoring performance. We find that the level of implicitness depends on the individual question, and that some phenomena are very frequent. Resolving implicit wording to explicit formulations indeed tends to improve automatic scoring performance.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Short-answer scoring is the task of assessing the correctness of a short text given as response to a question that can come from a variety of educational scenarios. As only content, not form, is important, the exact wording including the explicitness of an answer should not matter. However, many state-of-the-art scoring models heavily rely on lexical information, be it word embeddings in a neural network or n-grams in an SVM. Thus, the exact wording of an answer might very well make a difference. We therefore quantify to what extent implicit language phenomena occur in short answer datasets and examine the influence they have on automatic scoring performance. We find that the level of implicitness depends on the individual question, and that some phenomena are very frequent. Resolving implicit wording to explicit formulations indeed tends to improve automatic scoring performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic short answer scoring is an application area of natural language processing where short free-form answers written by students in an educational context are automatically scored based on the correctness of their content. They occur for example in science education (Nielsen et al., 2008; Dzikovska et al., 2010) , but also in foreign language learning to measure reading (Bailey and Meurers, 2008; Meurers et al., 2011) or listening comprehension (Horbach et al., 2014) .",
"cite_spans": [
{
"start": 273,
"end": 295,
"text": "(Nielsen et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 296,
"end": 319,
"text": "Dzikovska et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 379,
"end": 405,
"text": "(Bailey and Meurers, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 406,
"end": 427,
"text": "Meurers et al., 2011)",
"ref_id": "BIBREF15"
},
{
"start": 455,
"end": 477,
"text": "(Horbach et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In such a scoring task, answers are graded based on their content alone -in comparison to essay scoring (Attali and Burstein, 2006) where also linguistic form is taken into consideration. Thus, judging whether an answer is correct or not may require the resolution of a number of implicit language phenomena as a form of normalization. Figure 1 Implicit: 3 is the perfect amount, 2 is not enough, 3 is too many.",
"cite_spans": [
{
"start": 104,
"end": 131,
"text": "(Attali and Burstein, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explicit: 3 scoops is the perfect amount of fertilizer, because 2 scoops is not enough, but 3 scoops is too many. shows two answers that express the same content, but with differing levels of explicitness. How the content is expressed on the surface does not matter for the score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In fact, the two answers in the example should be treated in the same way regardless of their explicitness. The only relevant criterion should be whether they convey the right content and thus show that the learner understood the concepts. While humans often effortlessly resolve implicit phenomena, automatic resolution of many of these phenomena is not trivial. However, we argue that resolution of implicitness is a kind of normalization step that can help to improve automatic scoring performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most work on automatic short-answer scoring does not actively resolve most implicit phenomena. However, the c-rater system performs pronoun resolution (Leacock and Chodorow, 2003) , but they do not report the impact of that single component. Banjade et al. (2015) perform implicit resolution of coreferences between entities in learner answers and entities in the question and similarly target ellipses resolution, where part of the question is implied in the learner answer, both by aligning concepts from the learner answer to the question. They report a positive influence on overall scoring performance. Another notable exception is information structure, i.e. whether the answer repeats parts of the question as researched through focus annotations by Ziai and Meurers (2014) . They report only a minor effect on automatic scoring performance.",
"cite_spans": [
{
"start": 151,
"end": 179,
"text": "(Leacock and Chodorow, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 242,
"end": 263,
"text": "Banjade et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 757,
"end": 780,
"text": "Ziai and Meurers (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we analyse which implicit phenomena occur in short answer scoring datasets. We then analyze the impact of implicit language on automatic scoring performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are a number of linguistic phenomena that pertain to the implicitness of language and are especially relevant for learner answers. In the following, we describe the ones we considered as candidates for our analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Language in Learner Answers",
"sec_num": "2"
},
{
"text": "Coreference Coreference describes the phenomenon that the same entity is referred to several times throughout a text, often using different referring expressions (see (Mitkov, 2014) ). The most prototypical example of pronominal reference is shown in Example 1, where they at the beginning of the second sentence refers to the same entity as pandas in the first sentence.",
"cite_spans": [
{
"start": 167,
"end": 181,
"text": "(Mitkov, 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Language in Learner Answers",
"sec_num": "2"
},
{
"text": "-Pandas live in China. They eat bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Language in Learner Answers",
"sec_num": "2"
},
{
"text": "-Pandas live in China. Pandas eat bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Language in Learner Answers",
"sec_num": "2"
},
{
"text": "Bridging Anaphora The relationship between an anaphor and its antecedent may be indirect, constituting the special case of bridging anaphora (Clark, 1975) . Take for example the statement shown in Example 2. While this can be understood from the context of the first sentence, it is left implicit that the second sentence refers to the fur of the panda.",
"cite_spans": [
{
"start": 141,
"end": 154,
"text": "(Clark, 1975)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 1: Coreference",
"sec_num": null
},
{
"text": "-The panda is ill. The fur is dull.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 1: Coreference",
"sec_num": null
},
{
"text": "-The panda is ill. The fur of the panda is dull.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 1: Coreference",
"sec_num": null
},
{
"text": "Ellipsis An ellipsis is the omission of content that can be derived from context (see Example 3). There, the second sentence does not explicitly state that koalas are highly specialized, too, which can however be gathered from the first sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 2: Bridging",
"sec_num": null
},
{
"text": "-Pandas are highly specialized. Koalas are, too. -Pandas are highly specialized. Koalas are highly specialized, too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 2: Bridging",
"sec_num": null
},
{
"text": "Numeric Terms In numeric expressions, the head word, i.e. usually the measurement unit, can often be left out. In cases with parallelism to a previous sentence this is a sub-type of an ellipsis, in others it is not (Elazar and Goldberg, 2019) . Example 4 shows an instance of the latter case, where the implication is that this sentence talks about age, indicated by the use of turn in front of 30. Instead of saying that pandas turn 30 years old, this is shortened to saying that they turn 30.",
"cite_spans": [
{
"start": 215,
"end": 242,
"text": "(Elazar and Goldberg, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3: Ellipsis",
"sec_num": null
},
{
"text": "-Pandas turn 30 in the wild.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3: Ellipsis",
"sec_num": null
},
{
"text": "-Pandas turn 30 years in the wild.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3: Ellipsis",
"sec_num": null
},
{
"text": "Information Structure Another specific subcase of ellipses that is particularly important in a question and answer scenario is information structure (Krifka and Musan, 2012) , i.e. the distinction whether the answer repeats given information from the question. Given the question that is shown in Example 5, bamboo is the focus of the answer, that actually answers the question. Focus has been automatically annotated for short answer data, although focus-based feature made only a minor difference in scoring performance (Ziai and Meurers, 2018 ).",
"cite_spans": [
{
"start": 149,
"end": 173,
"text": "(Krifka and Musan, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 522,
"end": 545,
"text": "(Ziai and Meurers, 2018",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4: Numeric Terms",
"sec_num": null
},
{
"text": "-What do pandas eat? Bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4: Numeric Terms",
"sec_num": null
},
{
"text": "-What do pandas eat? Pandas eat bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4: Numeric Terms",
"sec_num": null
},
{
"text": "Presupposition A presupposition (see Example 6) is a precondition that has to be fulfilled for a sentence to be true or false (Strawson, 1950) . The statement pandas no longer eat bamboo presupposes that pandas used to eat bamboo, which then makes it a valid statement to say that they no longer do.",
"cite_spans": [
{
"start": 126,
"end": 142,
"text": "(Strawson, 1950)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 5: Information Structure",
"sec_num": null
},
{
"text": "-Pandas no longer eat bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 5: Information Structure",
"sec_num": null
},
{
"text": "-Pandas used to eat bamboo. Pandas no longer eat bamboo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 5: Information Structure",
"sec_num": null
},
{
"text": "Restrictive vs. Non-restrictive Remarks Any appositional adjective and any relative clause (Fabb, 1990 ) can either be restrictive, i.e. necessary for selecting the right entity out of a set of alternatives or non-restrictive. In the question Explain how pandas in China are similar to koalas in Australia.",
"cite_spans": [
{
"start": 91,
"end": 102,
"text": "(Fabb, 1990",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "in China is non-restrictive (because it is not meant to differentiate between different kinds of pandas living in different parts of the world). We could think of such non-restrictive terms as the explicit version of an implicit sentence. Especially in a learner answer targeting that question the term pandas can be used, implicitly meaning pandas in China.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "Implicit Discourse Relations The relation between sentences is often marked by discourse connectives. In some cases, there may be a discourse relation that is left implicit. With regard to the statement shown in Example 7, there is such a relation between the two sentences, which is an implicit therefore, as the reason for taking the panda to the veterinarian was its dull fur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "-The panda had dull fur. We took it to the vet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "-The panda had dull fur, therefore we took it to the vet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "Example 7: Implicit Discourse",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 6: Presupposition",
"sec_num": null
},
{
"text": "Short answer-scoring datasets can include very different prompts, i.e. an (optional) reading text and some question the student has to answer, coming from domains such as sciences, biology, or English language arts. To cover a range of different learner answers, we select prompts from two short answer datasets and annotate occurrences of the implicit phenomena within the learner answers given in response to these prompts. This procedure has three goals: First, we want to assess the frequency of these phenomena in learner data. Second, we want to evaluate the effect of implicitness on the final score an answer receives, i.e. we ask whether implicit answers are on average scored higher or lower than explicit ones by teachers. And finally, we want to know the effect of implicitness on automatic scoring performance. We investigate this third question by extracting explicit versions of the answers regarding the different phenomena from the implicit versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicitness Annotations",
"sec_num": "3"
},
{
"text": "For our annotations we needed publicly available short-answer data in English where answers are full sentences and not only single phrases like in the Powergrading dataset (Basu et al., 2013) . Ideally, there should be a larger amount of answers for a single prompt so that prompt-specific models can be trained later in Section 4. (For an overview of publicly available shortanswer datasets, see Horbach and Zesch (2019).) We consider two short answer datasets in our analysis. The first one is the Student Response Analysis Corpus (SRA) of the 2013 SemEval task 7 (Dzikovska et al., 2013). It consists of data from two different sources. The Beetle subset has 3k student answers to 56 questions about electricity and electronics. The Sci-EntsBank subset contains 10k student answers to 197 questions about different science domains. All questions have a reference answer and (among others) 5-way labels judging the appropriateness of the student answers.",
"cite_spans": [
{
"start": 172,
"end": 191,
"text": "(Basu et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "The second dataset we consider is that of the 2012 Automated Student Assessment Prize (ASAP). 1 It consists of about 2,200 student answers to each of ten science-related prompts. The answers to four of the prompts were rated on a four-point scale and the others received scores on a three-point scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Our annotation study focuses on four of the phenomena we presented in the introduction. These are coreference, bridging anaphora, ellipsis and numeric terms. We chose them as we expected them to be relatively frequent, based on a short manual inspection of the data, and because they can all be annotated following the same general schema, which we describe below. Thus, we expected that they would have a larger influence on automatic scoring performance. For each of them, we selected prompts from one of the datasets that seemed to contain instances of that phenomenon in larger quantities. For the ASAP data, we randomly sampled 100 of the answers to the selected prompt. As some of the SRA prompts only have 40 answers, we in these cases selected two suitable prompts to arrive at a combined amount of 80 candidate sentences. Table 1 shows the chosen prompts.",
"cite_spans": [],
"ref_spans": [
{
"start": 831,
"end": 838,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.2"
},
{
"text": "Coreference, numeric terms and bridging anaphora were all annotated following the same pattern. An occurrence of any of these phenomena is marked by annotating the span, which is then linked to the last explicit mentioning of what is necessary to resolve the phenomenon. Take for example a sentence 30 meters plus 20 is 50. Here, both 20 and 50 would be annotated and linked back to meters. Ellipses were annotated in the same way, but following the convention that the token before the ellipsis was linked to what is necessary to resolve the ellipsis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.2"
},
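{
"text": "To make the linking schema concrete, the following is a minimal sketch of how a single annotation could be represented in code; the record layout and field names are illustrative assumptions, not the format the annotations were actually stored in:\n\nfrom dataclasses import dataclass\nfrom typing import Optional, Tuple\n\n@dataclass\nclass ImplicitSpan:\n    span: Tuple[int, int]                  # character offsets of the implicit expression\n    phenomenon: str                        # \"coreference\", \"bridging\", \"ellipsis\" or \"numeric\"\n    antecedent: Optional[Tuple[int, int]]  # last explicit mention it links back to, if any\n    resolved_form: Optional[str] = None    # used when no explicit mention exists in the context\n    resolvable: bool = True\n\n# \"30 meters plus 20 is 50\": both \"20\" and \"50\" link back to \"meters\"\nann_20 = ImplicitSpan(span=(15, 17), phenomenon=\"numeric\", antecedent=(3, 9))\nann_50 = ImplicitSpan(span=(21, 23), phenomenon=\"numeric\", antecedent=(3, 9))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.2"
},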
{
"text": "In some instances, there was no explicit mentioning of what is necessary to resolve implicit into explicit. Depending on whether this could be inferred from the context we then either directly annotated these spans with their resolved form or marked them as non-resolvable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation process",
"sec_num": "3.2"
},
{
"text": "All answers were double-annotated by two of the authors of this paper to calculate two different measures of agreement. The first one is the token-level agreement on whether a token was annotated as covering the phenomenon. The other is the antecedent agreement, which is based on the subset of tokens where both annotators agreed that a token was part of a chain. Here, we only check those tokens that were not the first item in a coreference chain. For those, we checked whether they linked to the same antecedent. Table 2 shows the agreement results. The \u03ba token-level agreement ranges between .74 and .86 for all phenomena, except ellipsis where it is only .45. Ellipses seem to be hard to annotate. While both annotators found the same amount of instances, they substantially disagreed what exactly to label. One example for such a problematic instance was the sentence Plastic A is the most stretchy that could be either interpreted as a normal superlative or as leaving out the head (the most stretchy plastic).",
"cite_spans": [],
"ref_spans": [
{
"start": 517,
"end": 524,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation analysis",
"sec_num": "3.3"
},
{
"text": "Antecedent agreement is .90 and above for coreference, bridging and ellipsis, but lower for numeric terms with values between .51 and .7. With respect to prompt VB 22c, this arises from the fact that many answers reference numbers for which the context suggest that they represent some kind of unit of weight, but while one annotator did not find the context clues sufficient to resolve this, the other linked these numbers back to the span mass of beans mentioned in the prompt question. Example 8 shows the prompt and an example answer where this occurs. While both annotators agreed on the whole numbers being scoops, the decimal numbers created disagreement, with one annotator linking them to mass of beans, the other marking them as unresolvable. Without disagreement arising from this particular phenomenon, antecedent agreement increases to .81.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation analysis",
"sec_num": "3.3"
},
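{
"text": "As an illustration of the two agreement measures described above, the following sketch computes them from two annotators' decisions; it assumes binary per-token labels and per-token antecedent links as input and is not the evaluation script that produced Table 2:\n\nfrom sklearn.metrics import cohen_kappa_score\n\ndef token_level_kappa(tokens_a, tokens_b):\n    # tokens_a, tokens_b: one binary label per token (1 = annotated as part of the phenomenon)\n    return cohen_kappa_score(tokens_a, tokens_b)\n\ndef antecedent_agreement(links_a, links_b):\n    # links_a, links_b: dicts mapping token index -> antecedent token index,\n    # restricted to non-chain-initial tokens that both annotators marked\n    shared = set(links_a) & set(links_b)\n    return sum(links_a[t] == links_b[t] for t in shared) / len(shared) if shared else float(\"nan\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation analysis",
"sec_num": "3.3"
},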
{
"text": "Describe what the graph tells you about the relationship between the number of scoops of fertilizer and the mass of beans harvested? -Answer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "It goes in a pattern like 0 is on 0.2 and like one is on 0.7 and goes from even to odd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "Example 8: Annotation of numeric terms Table 3 shows how frequently the different phenomena occur within the prompts. As we did not curate the two sets of annotations, the reported phenomenon counts are based on the first annotator, who is the same for all of them. The most prevalent phenomenon is coreference, with 97 out of the 100 answers we annotated containing at least one instance of it. The two prompts we chose for the annotation of bridging anaphora differ in the frequency of answers with bridging, as 80% of the answers to one of the prompts contain instances of bridging, whereas just 18% of the other do. With respect to ellipsis and numeric terms we find that 40% of the answers contain ellipsis, and that 30% of the answers to VB 22c and 50% of the answers to LF 27a contain at least one unre- solved numeric term. Apparently some phenomena are more frequent than others even when selecting datasets that seem most suitable for a certain phenomenon. While coreference by means of pronouns is a common phenomenon where sentences avoiding it completely would look marked, students in a school context might be less inclined to leave out, e.g., units of measurement in an exam situation.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "In Table 3 , we also report on the question of whether explicit or implicit answers are scored higher by humans and find mixed results.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "As only three of the answers to ASAP prompt 8 did not contain coreferences, we cannot compare how the assigned labels may differ between answers with and without coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "In the case of bridging, the two prompts we chose also exhibit different patterns. Within the answers to prompt LF 26b2, the majority contains instances of bridging and those that do not tend to be labeled worse, most frequently as irrelevant. The other bridging prompt, ST 31b, contains fewer instances of bridging, and those answers that include bridging receive worse labels, most frequently irrel-evant. Therefore, a typical answer to the LF 26b2 prompt seems to be one with bridging, with those that do not contain bridging receiving lower scores. A typical answer to the ST 31b prompt on the other hand is one without bridging, with those that do contain it getting lower scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "For numeric terms, while answers to the VB 22 prompt that contain unresolved numeric terms generally receive good labels of either partially correct or correct, the other prompt we chose does not exhibit such a pattern. There, answers with unresolved numeric terms are equally likely labeled as contradictory or correct. We also see very similar label distributions for answers with and without ellipsis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "Overall we do not see a clear trend, which is reassuring, as teachers scoring such answers manually are probably not influenced by the presence or absence of implicit language (although of course a controlled annotation study would be needed to confirm this). In the next section, we will check whether automatic scoring models are equally unimpressed by the choice of wording in a learner answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "-Question:",
"sec_num": null
},
{
"text": "As we have seen in our dataset analysis, there is a large variance whether learners use implicit or explicit language. However, as in content scoring, only the meaning and not the form of an answer is important, both variants should be scored by an automatic scoring model in completely the same way. Many state of the art models heavily rely on lexical information, be it word embeddings in a neural network or n-grams in an SVM. Thus, the exact wording of an answer might very well make a difference, especially if one variant is much more frequent than the other and therefore only rarely seen in the training data. To asses the extent of the influence of implicitness, we perform in this section automatic scoring experiments that control for the implicitness of our annotated phenomena in the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Implicit Language on Automatic Scoring",
"sec_num": "4"
},
{
"text": "For our experiments we use Weka's (Hall et al., 2009 ) SMO Support Vector classifier in standard configuration with the top 10,000 most frequent token uni-to trigram and the 1,000 most frequent POS uni-to trigram features, and train a separate classifier per prompt. 2 Due to the small amount of answers, we perform leave-one-out cross validation.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "(Hall et al., 2009",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},
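{
"text": "The following is a minimal re-implementation sketch of the per-prompt setup described above, substituting scikit-learn's LinearSVC over token n-gram counts for Weka's SMO classifier (the POS n-gram features are omitted, and the concrete components are assumptions rather than the authors' code):\n\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.metrics import cohen_kappa_score\nfrom sklearn.model_selection import LeaveOneOut\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import LinearSVC\n\ndef score_prompt(answers, labels, quadratic=False):\n    # answers: list of learner answers for one prompt; labels: gold labels or scores\n    labels = np.asarray(labels)\n    preds = [None] * len(answers)\n    for train_idx, test_idx in LeaveOneOut().split(answers):\n        model = make_pipeline(\n            CountVectorizer(ngram_range=(1, 3), max_features=10000),\n            LinearSVC())\n        model.fit([answers[i] for i in train_idx], labels[train_idx])\n        i = test_idx[0]\n        preds[i] = model.predict([answers[i]])[0]\n    # unweighted kappa for the unordered SemEval labels, QWK for the ASAP scores\n    return cohen_kappa_score(labels, preds, weights=\"quadratic\" if quadratic else None)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.1"
},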
{
"text": "In order to assess the impact of implicitness, we compare two versions of the dataset, making use of our annotations. In the baseline condition, the training and test data is used as is. In the explicit condition, we use the antecedent annotations to resolve any implicit phenomena to their explicit version and then train and test on explicit answers. Figure 2 shows examples for implicit and explicit versions of the four phenomena. For coreference, we resolve every pronoun to obtain the explicit version. For bridging and numeric terms, we add what is necessary to resolve them. In case of ellipsis, we add what was left out. Table 4 shows the results of our experiments. Because the SemEval labels do not have a natural order, we report \u03ba values for them, but QWK for the ASAP prompt. For the two ASAP prompts, we only had 100 annotated answers and hence a much smaller amount than the full set of answers that is typically used to train models on this dataset. This is reflected in a reduced performance compared to other experiments on the same dataset, but the focus of our experiments is rather to assess of the effect of making things implicitly contained in the answers explicit than to achieve the best possible performance for a prompt.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 361,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 630,
"end": 637,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Controlling the amount of explicitness in the data",
"sec_num": "4.2"
},
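{
"text": "A small sketch of how the explicit condition could be derived from the annotations: each annotated span is replaced by its resolved wording, applying the edits from right to left so that the character offsets of earlier edits stay valid. The function and the edit format are illustrative assumptions, not the authors' implementation:\n\ndef make_explicit(answer, edits):\n    # edits: list of (start, end, replacement) character-offset triples,\n    # e.g. a resolved pronoun, a re-inserted head word or elided material\n    for start, end, replacement in sorted(edits, reverse=True):\n        answer = answer[:start] + replacement + answer[end:]\n    return answer\n\n# make_explicit(\"Well 3 is the perfect amount\", [(5, 6, \"3 scoops\")])\n# returns \"Well 3 scoops is the perfect amount\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Controlling the amount of explicitness in the data",
"sec_num": "4.2"
},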
{
"text": "Overall, making the phenomena explicit within the answers seems to be beneficial for their automatic scoring. For coreferences and ellipsis, we see slight increases of .01 and .03 OWK, respectively. For the two bridging prompts, \u03ba increases by .03 and .07. Regarding numeric terms, for the prompt VB 22c we see a decrease of \u03ba of .03, but even the baseline does not do well here. The other prompt we annotated for numeric terms shows the highest increase of \u03ba .17.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.3"
},
{
"text": "One obvious question one might ask as a student being graded by such an automatic system is whether it is beneficial to use explicit or implicit wording to get a better grade. We therefore also compare the average number of points a model trained on the original data assigns to either an explicit or implicit answer. This can be seen as analogous to our analysis of whether human evaluators favor implicit or explicit answers, this time examining whether the automatic scoring model prefers one over the other. Table 5 shows the results of this analysis. For coreference, results are mixed. While the overall average predicted score of the explicit testing data is slightly higher, there are also answers where the explicit version receives a lower score. For nine answers, the predicted score drops by an average of 1.1 points when they are made explicit, but for 14 answers the predicted score increases by an average of 1.75 points.",
"cite_spans": [],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "Within the ellipsis data, being more explicit is beneficial. There are four instances where the predicted score improves by one point, and none where During the story, the reader gets background information about Mr. Leonard. Explain the effect that background information has on Paul. Support your response with details from the story. Original Answer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "To help make the strangulating sound .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "Explicit Answer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "To help make the strangulating sound of the bess beetle .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "Draw a conclusion based on the student's data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ellipsis Prompt:",
"sec_num": null
},
{
"text": "Original Answer: Based on student data, I noticed that the trial two (T2) plastics stretched longer then most plastics in trial one (T1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ellipsis Prompt:",
"sec_num": null
},
{
"text": "Based on student data, I noticed that the trial two (T2) plastics stretched longer then most plastics stretched in trial one (T1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Answer:",
"sec_num": null
},
{
"text": "Prompt:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeric Terms",
"sec_num": null
},
{
"text": "Describe what the graph tells you about the relationship between the number of scoops of fertilizer and the mass of beans harvested?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeric Terms",
"sec_num": null
},
{
"text": "Original Answer: Well 3 is the perfect amount because 4 is too many 2 is not enough.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeric Terms",
"sec_num": null
},
{
"text": "Well 3 scoops is the perfect amount because 4 scoops is too many 2 scoops is not enough. it worsens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Answer:",
"sec_num": null
},
{
"text": "While the instance count for the SemEval prompts is low, numeric terms and bridging seem to exhibit different trends. For the numeric prompts, the prediction only changes for three of the answers, with the predicted outcome always improving, twice from contradictory to correct and once from partially correct to correct. For bridging, the outcome changes for five of the answers, the predicted label once changing from partially correct to correct, but worsening in the remaining cases, three times from correct to partially correct and once from partially correct to contradictory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Answer:",
"sec_num": null
},
{
"text": "Thus, our results suggest that it depends on the phenomenon whether making it explicit leads to a more favorable prediction of the model. While refraining from using an ellipsis or leaving out the head word of a numeric term seems beneficial, making bridging explicit does not lead to the model predicting a higher score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Answer:",
"sec_num": null
},
{
"text": "We find that implicit language does occur frequently in short answer data and that the phenomena we focused our analysis on can reliably be annotated in learner answers, thus showing that such data is a promising source for implicit language in a relatively controlled setting. We will publish our set of annotated answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "As we find that making the answers more explicit improves their automatic scoring, a next step would be to automatically resolve implicit language into explicit, to enable examining this effect on a larger scale. Subsequent analyses will also widen the experiments to include more different implicit phenomena and resolve more than one phenomenon in the same set of answers. Table 5 : Analysis of how the predictions of a model trained on original prompt answers differ for the original answers and their explicit versions.",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://www.kaggle.com/c/asap-sas",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also ran experiments using a fastText classifier(Joulin et al., 2016), which was however unable to generalize from the small number of training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgement. This work was supported by the DFG RTG 2535: Knowledge-and Data-Based Personalization of Medicine at the Point of care.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automated essay scoring with e-rater\u00ae v. 2. The Journal of Technology",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2006,
"venue": "Learning and Assessment",
"volume": "4",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater\u00ae v. 2. The Journal of Technol- ogy, Learning and Assessment, 4(3).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Diagnosing meaning errors in short answers to reading comprehension questions",
"authors": [
{
"first": "Stacey",
"middle": [],
"last": "Bailey",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the third workshop on innovative use of NLP for building educational applications",
"volume": "",
"issue": "",
"pages": "107--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stacey Bailey and Detmar Meurers. 2008. Diagnosing meaning errors in short answers to reading compre- hension questions. In Proceedings of the third work- shop on innovative use of NLP for building educa- tional applications, pages 107-115.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using an implicit method for coreference resolution and ellipsis handling in automatic student answer assessment",
"authors": [
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "Nobal Bikram",
"middle": [],
"last": "Vasile Rus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Niraula",
"suffix": ""
}
],
"year": 2015,
"venue": "The Twenty-Eighth International Flairs Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajendra Banjade, Vasile Rus, and Nobal Bikram Ni- raula. 2015. Using an implicit method for corefer- ence resolution and ellipsis handling in automatic student answer assessment. In The Twenty-Eighth International Flairs Conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Powergrading: a clustering approach to amplify human effort for short answer grading. Transactions of the Association for",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Chuck",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "391--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Basu, Chuck Jacobs, and Lucy Vanderwende. 2013. Powergrading: a clustering approach to am- plify human effort for short answer grading. Trans- actions of the Association for Computational Lin- guistics, 1:391-402.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bridging",
"authors": [
{
"first": "H",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 1975,
"venue": "Theoretical issues in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert H Clark. 1975. Bridging. In Theoretical issues in natural language processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Beetle ii: a system for tutoring and computational linguistics experimentation",
"authors": [
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Myroslava O Dzikovska",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Gwendolyn",
"middle": [],
"last": "Steinhauser",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Campbell",
"suffix": ""
},
{
"first": "Charles B",
"middle": [],
"last": "Farrow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Callaway",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 System Demonstrations",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myroslava O Dzikovska, Johanna D Moore, Natalie Steinhauser, Gwendolyn Campbell, Elaine Farrow, and Charles B Callaway. 2010. Beetle ii: a sys- tem for tutoring and computational linguistics exper- imentation. In Proceedings of the ACL 2010 System Demonstrations, pages 13-18.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual embodiment challenge",
"authors": [
{
"first": "O",
"middle": [],
"last": "Myroslava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dzikovska",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rodney",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Nielsen",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Brew",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dagan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dang",
"suffix": ""
}
],
"year": 2013,
"venue": "Second Joint Conference on Lexical and Computational Semantics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myroslava O Dzikovska, Rodney D Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, and Hoa Trang Dang. 2013. Semeval-2013 task 7: The joint stu- dent response analysis and 8th recognizing textual embodiment challenge. In Second Joint Conference on Lexical and Computational Semantics (* SEM): Seventh International Workshop on Semantic Eval- uation (SemEval 2013), volume 2. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Where's my head? definition, data set, and models for numeric fused-head identification and resolution",
"authors": [
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "519--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanai Elazar and Yoav Goldberg. 2019. Where's my head? definition, data set, and models for numeric fused-head identification and resolution. Transac- tions of the Association for Computational Linguis- tics, 7:519-535.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The difference between english restrictive and nonrestrictive relative clauses",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Fabb",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of linguistics",
"volume": "26",
"issue": "1",
"pages": "57--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Fabb. 1990. The difference between english re- strictive and nonrestrictive relative clauses. Journal of linguistics, 26(1):57-77.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10- 18.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding a tradeoff between accuracy and rater's workload in grading clustered short answers",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Wolska",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "588--595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Horbach, Alexis Palmer, and Magdalena Wol- ska. 2014. Finding a tradeoff between accuracy and rater's workload in grading clustered short answers. In LREC, pages 588-595. Citeseer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The influence of variance in learner answers on automatic content scoring",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in Education",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Horbach and Torsten Zesch. 2019. The influ- ence of variance in learner answers on automatic content scoring. In Frontiers in Education, vol- ume 4, page 28. Frontiers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Information structure: Overview and linguistic issues. The expression of information structure",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Krifka",
"suffix": ""
},
{
"first": "Renate",
"middle": [],
"last": "Musan",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Krifka and Renate Musan. 2012. Information structure: Overview and linguistic issues. The ex- pression of information structure, pages 1-44.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "C-rater: Automated scoring of short-answer questions",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2003,
"venue": "Computers and the Humanities",
"volume": "37",
"issue": "",
"pages": "389--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Leacock and Martin Chodorow. 2003. C-rater: Automated scoring of short-answer questions. Com- puters and the Humanities, 37(4):389-405.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Evaluating answers to reading comprehension questions in context: Results for german and the role of information structure",
"authors": [
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Ziai",
"suffix": ""
},
{
"first": "Niels",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Janina",
"middle": [],
"last": "Kopp",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the TextInfer 2011 Workshop on Textual Entailment",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detmar Meurers, Ramon Ziai, Niels Ott, and Janina Kopp. 2011. Evaluating answers to reading com- prehension questions in context: Results for german and the role of information structure. In Proceed- ings of the TextInfer 2011 Workshop on Textual En- tailment, pages 1-9.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Anaphora resolution",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov. 2014. Anaphora resolution. Rout- ledge.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Annotating students' understanding of science concepts",
"authors": [
{
"first": "",
"middle": [],
"last": "Rodney D Nielsen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodney D Nielsen, Wayne H Ward, James H Martin, and Martha Palmer. 2008. Annotating students' un- derstanding of science concepts. In LREC. Citeseer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On referring. Mind",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Strawson",
"suffix": ""
}
],
"year": 1950,
"venue": "",
"volume": "59",
"issue": "",
"pages": "320--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Strawson. 1950. On referring. Mind, 59(235):320-344.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Focus annotation in reading comprehension data",
"authors": [
{
"first": "Ramon",
"middle": [],
"last": "Ziai",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2014,
"venue": "LAW VIII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramon Ziai and Detmar Meurers. 2014. Focus anno- tation in reading comprehension data. In LAW VIII, page 159.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic focus annotation: Bringing formal pragmatics alive in analyzing the information structure of authentic data",
"authors": [
{
"first": "Ramon",
"middle": [],
"last": "Ziai",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "117--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramon Ziai and Detmar Meurers. 2018. Automatic fo- cus annotation: Bringing formal pragmatics alive in analyzing the information structure of authentic data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 117-128.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Two (made-up) answers to the same prompt demonstrating how one can say the same thing with different levels of explicitness.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Exemplary original and explicit variants of answers.",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Prompts selected for annotation of the implicit phenomena.",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Phenomenon</td><td>% LA</td><td colspan=\"2\">\u00f8 # phen. Scores of LAs Scores of LAs</td></tr><tr><td/><td colspan=\"2\">w/ phen. per LA</td><td>w/ phen.</td><td>w/o phen.</td></tr><tr><td>Coreference</td><td>97</td><td>5.0</td></tr><tr><td>Bridging Anaphora (LF 26b2)</td><td>80</td><td>0.9</td></tr><tr><td>Bridging Anaphora (ST 31b)</td><td>18</td><td>0.2</td></tr><tr><td>Ellipsis (ASAP 2)</td><td>40</td><td>0.9</td></tr><tr><td>Numeric Terms (VB 22c)</td><td>30</td><td>1.2</td></tr><tr><td>Numeric Terms (LF 27a)</td><td>50</td><td>1.0</td></tr></table>",
"text": "Binary token-level and antecedent agreement for the annotation of the phenomena.",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Frequency with which the phenomena occur in the chosen prompts shown inTable 1. For the label distribution, individual labels from left to right are: 0, 1 and 2 points for Coreference, 0, 1, 2 and 3 points for Ellipsis and contradictory, irrelevant, partially correct, correct for the other phenomena.",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Explicit Answer:</td><td>The background information motivated Paul , Paul knew what</td></tr><tr><td/><td>Mr. leonard meant and that gave Paul incintive to try harder.</td></tr><tr><td>Bridging Anaphora</td><td/></tr><tr><td>Prompt:</td><td>One function of the bess beetle's elytra (the hard, black wing set) is</td></tr><tr><td/><td>protection. What is another function of the elytra?</td></tr></table>",
"text": "Original answer: It motivated him , He knew what Mr. leonard meant and that gave him incintive to try harder.",
"num": null
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Change in Prediction</td><td/><td colspan=\"2\">Number of Answers</td><td/></tr><tr><td>after Making Explicit</td><td colspan=\"4\">Coreference Ellipsis Bridging Numeric Terms</td></tr><tr><td>Better</td><td>14</td><td>4</td><td>1</td><td>3</td></tr><tr><td>Worse</td><td>9</td><td>0</td><td>4</td><td>0</td></tr></table>",
"text": "Automatic scoring results for the training and testing on the original data (baseline) compared to training and testing on answers that were made explicit.",
"num": null
}
}
}
}