|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:59.745547Z" |
|
}, |
|
"title": "Automatic Entity State Annotation using the VerbNet Semantic Parser", |
|
"authors": [ |
|
{ |
|
"first": "Ghazaleh", |
|
"middle": [], |
|
"last": "Kazeminejad", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Utah", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Utah", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Tracking entity states is a natural language processing task assumed to require human annotation. In order to reduce the time and expenses associated with annotation, we introduce a new method to automatically extract entity states, including location and existence state of entities, following Dalvi et al. (2018) and Tandon et al. (2020). For this purpose, we rely primarily on the semantic representations generated by the state of the art VerbNet parser (Gung, 2020), and extract the entities (event participants) and their states, based on the semantic predicates of the generated Verb-Net semantic representation, which is in propositional logic format. For evaluation, we used ProPara (Dalvi et al., 2018), a reading comprehension dataset which is annotated with entity states in each sentence, and tracks those states in paragraphs of natural human-authored procedural texts. Given the presented limitations of the method, the peculiarities of the ProPara dataset annotations, and that our system, Lexis, makes no use of task-specific training data and relies solely on VerbNet, the results are promising, showcasing the value of lexical resources.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Tracking entity states is a natural language processing task assumed to require human annotation. In order to reduce the time and expenses associated with annotation, we introduce a new method to automatically extract entity states, including location and existence state of entities, following Dalvi et al. (2018) and Tandon et al. (2020). For this purpose, we rely primarily on the semantic representations generated by the state of the art VerbNet parser (Gung, 2020), and extract the entities (event participants) and their states, based on the semantic predicates of the generated Verb-Net semantic representation, which is in propositional logic format. For evaluation, we used ProPara (Dalvi et al., 2018), a reading comprehension dataset which is annotated with entity states in each sentence, and tracks those states in paragraphs of natural human-authored procedural texts. Given the presented limitations of the method, the peculiarities of the ProPara dataset annotations, and that our system, Lexis, makes no use of task-specific training data and relies solely on VerbNet, the results are promising, showcasing the value of lexical resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Question answering in reading comprehension tasks focusing on procedural texts (i.e. texts describing processes) is particularly challenging in natural language processing (NLP), because this type of text describes a changing world state , Du et al. 2019 , Gupta and Durrett 2019 . Tracking the state of entities in such texts is an important task to enable proper question answering in reading comprehension tasks. The challenging part in such question answering tasks is not where answers are explicitly mentioned in a sentence . For example, in Figure 1 , the sentence with bold elements tells us explicitly that urea and carbon dioxide are located in kidneys by the end of Paragraph: Blood delivers oxygen in the body. Proteins and acids are broken down in the liver. The liver releases waste in the form of urea. The blood carries the urea and carbon dioxide to the kidneys. The kidneys strain the urea and salts needed from the blood. The carbon dioxide by product is transported back to the lungs. Q: Does blood enter the kidney? A: Yes. this sentence. However, the fact that the blood itself will also be located at the kidneys is implicit in the semantics of the verb \"carry\", in which the agent of the action (mover) moves along with the theme (thing moved). Therefore, answering the question in Figure 1 requires knowledge of the particular semantics of this verb. This is the kind of inference our system, Lexis, is able to model using VerbNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 254, |
|
"text": ", Du et al. 2019", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 279, |
|
"text": ", Gupta and Durrett 2019", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 556, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1306, |
|
"end": 1314, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This type of reasoning requires event extraction as a first step, and event participant state extraction as a second step. Our method covers both steps, but is at this point limited to sentence-level inference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In section 2, we provide an overview of the work related to this research. Section 3 introduces the dataset used to evaluate Lexis, as well as the details of the methods we have used. Section 4 presents our experimental settings, followed by section 5 which illustrates the results of each setting. Section 6 discusses the advantages of Lexis, and provides an error analysis to illuminate the sources of weakness. In section 7, we conclude and briefly discuss our future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The water forms a stream.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "VNSP VerbNet semantic predicates output: [{'polarity': False, 'predicateType': 'Has State', 'args': [{'type': 'Material', 'value': ''}, {'type': 'Verbspecific', 'value': 'V_Final_State'}]}, {'polarity': True, 'predicateType': 'Do', 'args': [{'type': 'Agent', 'value' : 'The water'}]}, {'polarity ': True, 'predicateType': 'Be', 'args': [{'type': 'Result', 'value': 'a ", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 266, |
|
"text": "[{'polarity': False, 'predicateType': 'Has State', 'args': [{'type': 'Material', 'value': ''}, {'type': 'Verbspecific', 'value': 'V_Final_State'}]}, {'polarity': True, 'predicateType': 'Do', 'args': [{'type': 'Agent', 'value'", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 367, |
|
"text": "': True, 'predicateType': 'Be', 'args': [{'type': 'Result', 'value': 'a", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A more readable format:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u00acHAS_STATE(? Material , V_Final_State VerbSpecific ) \u00acBE(a stream Result ) DO(The water Agent ) BE(a stream Result ) HAS_STATE(? Material , V_Final_State VerbSpecific )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Extracting a CREATE entity state:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this case, based on the predicateType 'Be' along with a 'Result' argument type, we can conclude that the value 'a stream' is the created entity. This is then fed into spaCy to extract the head noun 'stream' as the entity. 2 Related Work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our method is similar to in the sense that we both rely on VerbNet semantic representations to make inferences. However, our point of departure is that previously, VerbNet did not have a neural semantic parser, and Clark et al. (2018) did not disambiguate verb senses, and used the most frequently used verb sense instead. Also, they had to develop a huge rulebase to encode commonsense knowledge about the states that events produce, using a STRIPS-style list of before (preconditions) and after (effects) expressed as possibly negated literals. In order to extract arguments (i.e. event participants), they relied on the syntactic patterns provided in the VerbNet, and used them to instantiate the arguments in the before/after literals. They also performed a manual annotation effort to check and correct the rulebase entries, and added entries for other verbs that affect existence and loca-tion. In addition, they added new entries for verbs not existing in VerbNet. In contrast, our method is fully automatic (see section 3.2). Most work on tracking entity states uses neural methods (Henaff et al. 2016 , Ji et al. 2017 , Tang et al. 2020 . However, these models rely on large amounts of annotated data, which is expensive and labor-intensive to provide (Sun et al., 2020) . One of the commonly used solutions to the data scarcity problem in NLP tasks is data augmentation. According to Feng et al. (2021) , the goal of data augmentation is increasing training data diversity without directly collecting more data. Most strategies for data augmentation consist of creating synthetic data based on the main data. On the same note, automatic training data generation attempts show promise in various NLP tasks, such as event extraction (Chen et al. 2017 , Zeng et al. 2018 , and named entity recognition (Tchoua et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1090, |
|
"end": 1109, |
|
"text": "(Henaff et al. 2016", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1110, |
|
"end": 1126, |
|
"text": ", Ji et al. 2017", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1127, |
|
"end": 1145, |
|
"text": ", Tang et al. 2020", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1261, |
|
"end": 1279, |
|
"text": "(Sun et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1394, |
|
"end": 1412, |
|
"text": "Feng et al. (2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1741, |
|
"end": 1758, |
|
"text": "(Chen et al. 2017", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1759, |
|
"end": 1777, |
|
"text": ", Zeng et al. 2018", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1809, |
|
"end": 1830, |
|
"text": "(Tchoua et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As explained below, Lexis uses a state-of-the-art semantic parser to automatically generate entity states for each sentence. These entity states are the same as the entity state labels provided by human annotators in the dataset on which we evaluate Lexis. The generated inferences can be used, in future work, to augment the existing training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "stream'}]}]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The main contribution of this work is bringing to the forefront the advantages of using lexical resource based methods. In particular, in this work, these methods are used for automatic annotation of text with labels regarding the location and existence states of entities in a given sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The ProPara (Process Paragraphs) dataset , developed by the Allen Institute for AI (AI2), contains 183 prompts (with 152 topics) and 488 human authored paragraphs of procedural text in response to these prompts (each paragraph having 10 or less sentences), along with 81k annotations about the changing states (existence and location) of entities in those paragraphs. The training set contains 391 paragraphs (80% of the data). The end-task of the dataset is predicting and tracking location and existence changes for the entities. ProPara is the first dataset of annotated, natural text for real-world processes, which also contains a simple representation of entity states during those processes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We used the recently developed BERT-based Verb-Net semantic parser (Gung 2020, Gung and , which is located at the GitHub SemParse site 1 , to parse every single sentence in each paragraph. The VerbNet semantic parser (VNSP) returns a json file containing the verb sense disambiguated Verb-Net class, the complete logical predicates for that class instantiated with arguments extracted from the sentence, as well as the text spans (phrases) labeled with both VerbNet thematic roles (Schuler, 2005 ) (if applicable) and PropBank argument role labels (Kingsbury and Palmer 2002 , Palmer et al. 2005 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 87, |
|
"text": "(Gung 2020, Gung and", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 495, |
|
"text": "(Schuler, 2005", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 574, |
|
"text": "Palmer 2002", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 595, |
|
"text": ", Palmer et al. 2005", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The main idea of using VNSP, and in general what gives VNSP an edge over other semantic parsers, is the logical predicates it generates (for a list of some of the VerbNet predicates used in this work, see Table 1 ). These predicates are utilized here to infer/predict an entity's change of location and change of existence state (i.e. whether it has been created or destroyed during the course of the sentence). Some of these predicates uncover implicit information about entity states, so our method covers explicit and implicit information, as long as the information is implicit in the semantics of the verb, rather than requiring world knowledge. Figure 2 illustrates the utility of the semantic predicate part of the VNSP output and how it is used to predict that the entity 'stream' is CREATED in the input sentence 'The water forms a stream'. The VNSP output is close to our desired form, but does not match exactly. Certain inferences first need to be drawn, and we have implemented blocks of Python code for this purpose. Figure x gives the block of Python code for 'Results' that extracts \"a stream\" as the created entity. VNSP also gives us syntactic phrases, and we still have to extract head nouns and objects of prepositions, etc, to get specific entities, as well as before-after states. We use spaCy (Honnibal et al., 2020) to do this.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 212, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 659, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1031, |
|
"end": 1039, |
|
"text": "Figure x", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
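{

"text": "To make the 'Results' inference concrete, the following is a minimal Python sketch (ours, for illustration only, not the exact Lexis code) of reading a CREATE state off the VNSP semantic predicates; it assumes predicates is the list of dicts shown in Figure 2 (with keys 'polarity', 'predicateType', and 'args') and uses spaCy for the head-noun step.\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef extract_created_entity(predicates):\n    # A 'Be' predicate with positive polarity and a 'Result' argument signals a CREATE state.\n    for pred in predicates:\n        if pred['predicateType'] == 'Be' and pred['polarity']:\n            for arg in pred['args']:\n                if arg['type'] == 'Result' and arg['value']:\n                    doc = nlp(arg['value'])  # e.g. 'a stream'\n                    heads = [t for t in doc if t.dep_ == 'ROOT']\n                    return heads[0].lemma_ if heads else arg['value']\n    return None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3.2"

},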
|
{ |
|
"text": "We represent the inferred entity states as a triple for change of location cases (entity, AtLoc, location), and a tuple for change of existence cases (entity, Created) or (entity, Destroyed). These generated states can, in future work, be used for data augmentation, to improve the performance of machine learning models on this task. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
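{

"text": "As a small illustration of this representation (the class name is ours, purely for exposition), the triples and pairs above can be captured with a single tuple type:\n\nfrom typing import NamedTuple, Optional\n\nclass EntityState(NamedTuple):\n    entity: str\n    change: str                      # 'Created', 'Destroyed', or 'AtLoc'\n    location: Optional[str] = None   # filled only for change-of-location (AtLoc) cases\n\n# e.g. EntityState('stream', 'Created') or EntityState('urea', 'AtLoc', 'kidney')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3.2"

},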
|
{

"text": "VNSP PropBank parse output (Figure 3) for the sentence 'The roots absorb water and minerals from the soil.': [{'text': 'The roots', 'pb': 'A0'}, {'text': 'absorb', 'pb': 'V'}, {'text': 'water and minerals', 'pb': 'A1'}, {'text': 'from the soil', 'pb': 'A2'}]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3.2"

},
|
{ |
|
"text": "Extracting a MOVE entity state:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In this case, based on the predicateType 'Take In' along with a 'Theme' and a 'Goal' argument type, we can conclude that the value 'water and minerals' is an entity that moves to the value for 'Goal'. However, the value for 'Goal' has remained uninstantiated, so we resort to the PropBank parse. Knowing that the 'Goal' argument for the 'Take In' predicate is the same as A0 numbered argument, we extract 'The roots' as the destination for the extracted entity. This is then fed into spaCy to extract 'water' and 'minerals' as two entities, both moving to the destination 'root'. A separate block of Python code draws Take In inferences This completes the process used for Setting 1 in Section 4.1. In the second setting for our experiments, 4.2, we added information from the Prop-Bank argument roles generated by VNSP to cover cases where there are uninstantiated arguments in the VerbNet semantic parse that PropBank can supply. This should, in theory, increase recall, but may decrease precision, because VerbNet thematic roles are more fine grained compared to PropBank, and are therefore more accurate in predicting semantic roles. Figure 3 illustrates the utility of the Prop-Bank parse part of the VNSP output and how it is used to predict that the entity 'water' and the entity 'minerals' is MOVED to 'root' in the input sentence 'The roots absorb water and minerals from the soil'. This example illustrates a case in which the VerbNet semantic predicate generated by the VNSP fails to generate a value for the 'Goal' argument, while looking at the PropBank A0 argument supplies us with it. It also illustrates another example of lexically encoded implicit semantic information, since for the Take In predicate that is provided in the VNSP output, the 'Goal' argument is the same as 'Agent' argument. For that reason, we can confidently extract the value for A0 from the PropBank parse as the value for the Goal, and therefore the destination of the MOVE event.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1138, |
|
"end": 1146, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In addition, we have also introduced a relaxed setting in which the predicted false positive labels are evaluated by a human judge to be true false positives or correctly predicted labels that are missed in the gold labels in ProPara.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The text spans that represent arguments are either noun phrases (for entities, e.g. sediment from the ocean in the sentence The waves contain sediment from the ocean.) or prepositional phrases (for location state tracking, e.g. in sediment in the sentence They are buried in sediment.). A human annotator labels sediment as the destination of they, but the parser labels the whole phrase in sediment as the destination. On the other hand, there are cases of conjunction, where several entities are conjoined and one predicate is stated about them. In that case, we should be able to separate the conjoined entities and generate a triple or tuple for each. For example, in the sentence Algae and plankton die, two change of existence tuples should be generated: (algae, Destroyed), and (plankton, Destroyed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Such syntactic problems were solved using the spaCy (Honnibal et al., 2020) dependency parser. Nouns were lemmatized, the nominal heads of the noun phrases in prepositional phrases were extracted, the conjoined noun phrases were disjoined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
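{

"text": "A minimal sketch of this spaCy post-processing (our own illustrative code, not the exact Lexis implementation): it steps inside a prepositional phrase to its object, splits coordinated nouns, and returns lemmas.\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef head_nouns(phrase):\n    # e.g. 'in sediment' should yield ['sediment']; 'Algae and plankton' one lemma per conjunct\n    doc = nlp(phrase)\n    roots = [t for t in doc if t.dep_ == 'ROOT']\n    if not roots:\n        return []\n    root = roots[0]\n    if root.pos_ == 'ADP':  # phrase headed by a preposition: descend to its object\n        objs = [c for c in root.children if c.dep_ == 'pobj']\n        if objs:\n            root = objs[0]\n    heads = [root] + list(root.conjuncts)  # split conjoined noun phrases\n    return [t.lemma_ for t in heads]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},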
|
{ |
|
"text": "In setting 1, we only relied on the predicates from the VerbNet semantic representations part of the output of the VNSP. Given the definition of each predicate and the types of arguments it can take across the VerbNet, we categorized the predicates into those indicating a change of location and those indicating a change of existence (in particular, creation and destruction). We also used the VerbNet thematic role hierarchy to collect all thematic roles that could point to an event participant (e.g. Agent, Co-Agent, Pivot, Patient, etc.), a location (e.g. Destination, Location, Goal, Source, etc.), an Undergoer (e.g. Patient, Theme, Pivot, etc.), as well as those particularly indicating a Destination-type location and a Source-type location. For example, in the sentence Water from oceans, lakes, swamps, rivers, and plants turns into water vapor., the semantic predicate output of the VNSP is summarized (by our algorithm) into the following: [(True, 'Has State', [('Patient', ' Water from oceans , lakes , swamps , rivers , and plants'), ('Initial State', '')]), (True, 'Has State', [('Patient', ' Water from oceans , lakes , swamps , rivers , and plants'), ('Result', 'into water vapor')])]", |
|
"cite_spans": [ |
|
{ |
|
"start": 953, |
|
"end": 988, |
|
"text": "[(True, 'Has State', [('Patient', '", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1074, |
|
"end": 1108, |
|
"text": "(True, 'Has State', [('Patient', '", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting 1", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "From this logical statement, since the final subpredicate is a 'Has State' predicate and it has a 'Result' argument, our algorithm extracts the value for the 'Result', i.e. into water vapor, as the entity created.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting 1", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Another example of the semantic predicate output that explains the inference in Figure 1 follows: [ (True, 'Has Location', [('Theme', ' The last two lines show that both the Theme (the urea and carbon dioxide) and the Agent (The blood) will end up in the Destination (to the kidneys), uncovering the implicit information in the semantics of the verb carry.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 135, |
|
"text": "[ (True, 'Has Location', [('Theme', '", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 88, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setting 1", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Some predicates required special handling. For example, the Take In predicate (representing verbs such as inject, drink, eat, smoke) is fairly opaque, and does not explicitly indicate a change of location in VerbNet. However, we know that as a result of Block of Python Code: Figure 4 : Python code for extracting a MOVED entity, along with its location states, from the VerbNet semantic predicate Take In. Pseudo Code: If Take In is in the list of generated VNSP semantic predicates for this sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 284, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setting 1", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "if Take In is in the list of generated VNSP semantic predicates for this sentence:\n  loop over the generated predicates:\n    if there is a Take In predicate with positive polarity:\n      loop over the arguments of that predicate:\n        if the type of an argument is Theme: extract the head noun of its value as an entity which has MOVED\n        if the type of an argument is Goal or Agent: extract the noun head of the prepositional object as the after state",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setting 1",

"sec_num": "4.1"

},
|
{ |
|
"text": "this type of event, an undergoer (such as a Theme) would move to the Goal, Agent, or Recipient of the event, whichever is specified in the predicates. The block of Python code for Take In (Figure 4) includes an explicit expansion of Take In to indicate the Goal is now At The Agent. Such semantic expansions needed to be assumed in the semantics of such predicates in the entity extraction algorithm, as VerbNet fails to do its normal expansion of the predicates. These expansions will be added to the next VerbNet release.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 198, |
|
"text": "(Figure 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setting 1", |
|
"sec_num": "4.1" |
|
}, |
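{

"text": "A Python sketch of this special handling (ours; head_noun() stands in for the spaCy head-noun extraction described in section 3.2, and the output follows the (entity, AtLoc, location) convention used above):\n\ndef extract_take_in_move(predicates, head_noun):\n    # Follows the Figure 4 pseudocode: the Theme moves to the Goal/Agent location.\n    moved, destination = None, None\n    for pred in predicates:\n        if pred['predicateType'] == 'Take In' and pred['polarity']:\n            for arg in pred['args']:\n                if arg['type'] == 'Theme' and arg['value']:\n                    moved = head_noun(arg['value'])        # entity that has MOVED\n                elif arg['type'] in ('Goal', 'Agent') and arg['value']:\n                    destination = head_noun(arg['value'])  # after-state location\n    if moved and destination:\n        return (moved, 'AtLoc', destination)\n    return None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setting 1",

"sec_num": "4.1"

},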
|
{ |
|
"text": "In setting 2, we used the PropBank argument roles included in the VNSP output in addition to the VerbNet predicates. This covered cases of verbs that exist in VerbNet, but VNSP fails to instantiate the entity or location. This is most likely due to the small size of the VerbNet labeled training data compared to PropBank labeled training data. For example, in the sentence The stream becomes a river. from the training data, here is the output of the VNSP: Here, the value for the 'Result' semantic role has remained uninstantiated, which means that setting 1 yields no output (i.e. does not extract 'river' as a created entity). Therefore, we look at the A2 argument in the PropBank parse output of the VNSP, summarized below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting 2", |
|
"sec_num": "4.2" |
|
}, |
|
|
{ |
|
"text": "['text': 'The stream', 'pb': 'A1', 'vn': 'Patient', 'text': 'becomes', 'pb': 'V', 'vn': 'Verb', 'text': 'a river', 'pb': 'A2', 'vn': ''] Linguistically, the A2 argument for this verb is described as 'new state', which is exactly what we need. Since the numbered arguments in PropBank are too coarse-grained, we use SemLink (Palmer 2009 , Bonial et al. 2013 , Stowe et al. 2021 to find their mapping into VerbNet thematic roles and find those that suit our purposes in this task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 136, |
|
"text": "'']", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 335, |
|
"text": "(Palmer 2009", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 356, |
|
"text": ", Bonial et al. 2013", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 376, |
|
"text": ", Stowe et al. 2021", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting 2", |
|
"sec_num": "4.2" |
|
}, |
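{

"text": "A sketch of the Setting 2 back-off (ours; the tiny hard-coded mapping only stands in for the SemLink lookup): if a VerbNet role such as 'Result' is uninstantiated, take the text of the PropBank span whose numbered argument maps to that role.\n\nROLE_TO_PB = {'Result': 'A2', 'Theme': 'A1'}   # illustrative only, not the full SemLink mapping\n\ndef backoff_to_propbank(role, vn_value, propbank_spans):\n    # propbank_spans: list of dicts such as {'text': 'a river', 'pb': 'A2', 'vn': ''}\n    if vn_value:                     # VerbNet already instantiated the role\n        return vn_value\n    wanted = ROLE_TO_PB.get(role)\n    for span in propbank_spans:\n        if span.get('pb') == wanted:\n            return span['text']      # e.g. 'a river' for Result/A2\n    return None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setting 2",

"sec_num": "4.2"

},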
|
{ |
|
"text": "In general, we assumed that A1 (proto-patient, which is typically the undergoer) is the entity moved or created. This assumption should be accurate in theory at least for verbs indicating creation or motion, because an entity in motion is theoretically a \"theme\", and an entity created could be a \"result\", a \"product\", or \"theme\", which are all labeled A1 in PropBank. However, there are also change of state verbs such as become, for which the 'Result' (new state) is not the entity undergoing change or created, so it is marked as A2 rather than A1. It should also be noted that since the Prop-Bank numbered arguments A2 and above are very coarse-grained and overloaded. That is the reason we consult SemLink to find the correct mapping.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Since Lexis generates more states than gold standard annotations in ProPara. For example, for the sentence 'Magma rises from deep in the earth.' in the training data, no motion has been predicted for magma, while as a human judge, we know that magma moves in this sentence. Lexis predicts this movement of magma in this sentence, and this is counted as a false positive in evaluation, reducing the precision and F1 score. Therefore, we examined a third setting in which we do not count the extra annotations as false positives, if they are correct. Knowledge of whether these extra predicted entities are correct or wrong requires human judgment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relaxed Setting", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For that matter, we set up a judgment task on the system output (i.e. the predicted created/destroyed/moved entities) on the test data, and asked two unbiased human annotators to make judgments. The results were then examined by the authors to identify the sources of false negatives. These will be discussed in section 6.2, and illustrated in Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 351, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Relaxed Setting", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Tables 2, 3, and 4 illustrate the evaluation results. In addition to the cumulative/general state tracking results, we have evaluated location tracking and existence tracking separately in order to monitor the sources of error. We have also provided the results for the relaxed setting in Table 4 (see 4. 3).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 304, |
|
"text": "Table 4 (see 4.", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As expected, the inclusion of PropBank argument roles (in setting 2) increases recall from 0.29 (Table 2) to 0.39 (Table 3) . Interestingly, this is achieved through existence tracking, not location tracking. Within location tracking, not much change is observed between the two settings, as discussed in section 6.2.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 105, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 114, |
|
"end": 123, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "While performing the human judgment task for the relaxed setting, we also came across cases of gold label errors (see 'GLE' in Table 6 ), as well as useless gold labels in location tracking, labeling the location of an entity as unk (i.e. unknown). Both the GLE and superfluous labels were considered unlabeled in the gold data for evaluation purposes. Therefore, the results in the top section of Table 4 are different from the results in setting 2 (VerbNet + PropBank, see Table 3 ), because here we removed the gold label errors. These will be discussed further in section 6. Within Table 4 , it is notable that the precision significantly increases (by 20%) in the relaxed setting (compared to the strict setting in the same table).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 405, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 482, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 593, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Within the Allen AI leaderboard for the ProPara task (see Table 5 ), our general recall in setting 2 (VerbNet + PropBank) beats Facebook AI Research EntNet, University of Washington QRN, and AI2 ProLocal. The recall increases from 0.39 to 0.48 in the relaxed setting (Table 4 ). There are 6 teams that have a higher recall, and Lexis Relaxed would stand on the 7 th place with respect to recall. For the general F1 score, too, Lexis Relaxed would take the 7 th place on the leaderboard.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 275, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "First and foremost, it should be noted that in this work, we have relied solely on a lexical resource, and no task-specific learning-based methods have been utilized. Also, our method generates more entity states than human annotators generated. That increases our system's false positives, which in turn decreases the precision and F1 scores. For this reason, we decided to create a relaxed setting (cf. section 4.3) and judge which one of the false positives are true false positives and which ones are correct labels which are missing from the train set. In the judgment task to find true false positives, out of 235 false positives (in 42% of the train set), 129 were in fact judged as correct labels missed in the gold data. Therefore, the count of false positives in the relaxed settings was reduced from 235 to 106 (i.e. 45% of the false positives were not true false positives).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Advantages", |
|
"sec_num": "6.1" |
|
}, |
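{

"text": "The bookkeeping behind these numbers, spelled out (all values are taken from the judgment task described above):\n\ntotal_fp = 235          # false positives under the strict evaluation\njudged_correct = 129    # judged to be correct labels missing from the gold data\ntrue_fp = total_fp - judged_correct   # 106 remain as true false positives\ntrue_fp_share = true_fp / total_fp    # ~0.45: only about 45% were true false positives",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Advantages",

"sec_num": "6.1"

},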
|
{ |
|
"text": "Another advantage of Lexis is that it predicts states for all possible entities in a sentence, and is not limited to the set of entities labeled in ProPara. However, for evaluation purposes, we only compared our results against the gold labeled entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Advantages", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "What makes our method valuable is the fact that, unlike learning models which need annotated data, Lexis does not require training. As such, this method can be used for automatic generation of entity states, potentially with higher recall than human annotation. In addition, this is not limited to the ProPara dataset, or even procedural texts. Since VerbNet is a general semantic lexicon for English verbs, the VNSP output can be utilized to predict entity states in any domain. However, we should also acknowledge the sources of error and weaknesses of this method, which are discussed in section 6.2 below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Advantages", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In this section, we discuss and analyze the sources of error and weakness of our method and results, illuminating the limitations. The first noticeable source of weakness is the limited coverage of the VerbNet lexicon itself. There are currently 4588 unique verbs in VerbNet. For the verbs that are absent from the VerbNet, the VNSP output is empty. Out of 2639 sentences in the train set, 158 remained unparsed ( 6%). This increases false negative counts and therefore decreases our recall and F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Another source of weakness of VNSP, and therefore our method, is the relatively small size of VerbNet-labeled data for training the VNSP. This is in comparison to the size of the PropBank-labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "One of the most important sources of error was use of world knowledge in the gold standard human annotation in addition to the explicit linguistic knowledge -something that is beyond the scope of VerbNet. For a complete picture of the false negative underlying reasons, see The annotator has annotated the highlighted step with a 'Move' state change of the 'seed' entity to 'dirt'. This knowledge requires recovering of information from step 4 to know that the seed is in the hole, from step 3 to know that the hole is dug in the dirt, and from world knowledge to know that covering up the seed in the hole which is in the dirt normally happens with the dirt that has been dug. Another example of the same type is in the paragraph below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Water from the surface seeps below the soil. The water comes into contact with the rock below. The water over the years carves through the rock. The space in the rock becomes larger and forms into a cave. As more rather rushes through the cave becomes a cavern.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "One of the annotations is that in the highlighted sentence, the entity 'space' has been 'Created', whereas, space is not even mentioned in this sentence (or before that). The annotator infers, from the following sentence, that a space must have been created at this point. These types of inference require real world knowledge that goes way beyond lexical semantics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, true entity state tracking requires entityreference tracking in a given paragraph. For this, we need to go beyond sentence-level semantic analysis ( analyze the whole paragraph as one unit of discourse. This is mandatory to uncover anaphora (or even cataphora) cases in the text, which is required to track the state of entities in a paragraph. For example, in the sentence \"Spray some plant food on it.\", our system predicts that the location of 'plant food' at the end of this sentence will be 'it', without finding the reference to 'seed'. State-of-the-art reference resolution is clearly indicated to improve results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The goal of this work was demonstrating how lexical resource-based methods using existing NLP resources could be utilized in automatic annotation of text for tracking entity states. The results were quite encouraging and demonstrated the potential of leveraging pre-existing rich lexical resources for challenging inference tasks. The limitations of our approach were examined, including VerbNet's lack of coverage compared to PropBank, another semantic role labeling lexical resource. We also found certain VerbNet predicates that were particularly opaque and benefited from expansion. In addition, and as expected, using the VerbNet semantic parser does not always yield a 100% accurate parse, resulting in downstream errors. In addition, there is a clear need for document-level entity state tracking, which requires automatic reference resolution. Given the limitations of the VerbNet semantic False Negative Type Frequency RT 34 WK 30 GLE 22 REF 7 UP 3 ONT 2 Other 100 Total 198 Table 6 : Underlying reasons for predicted labels judged as false negative. RT (Reverse Tracing): the annotator has assumed the existence or motion of an entity in a prior state for it to have an effect later. WK (World Knowledge): the annotator has used world knowledge to generate this label. GLE (Gold Label Error): This was judged as erroneous -something that is not true even given the world knowledge or reverse tracing. In the judgment task, this was considered an empty gold label. REF (Reference Resolution): the gold label is achievable given a reliable reference resolution system. UP: (Unparsed): state extraction failure due to parsing failure. ONT (Ontology): the gold label is achievable given a reliable entity ontology. Other: failure in predicting the state due to none of the above reasons.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 898, |
|
"end": 1009, |
|
"text": "False Negative Type Frequency RT 34 WK 30 GLE 22 REF 7 UP 3 ONT 2 Other 100 Total 198 Table 6", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "parser discussed in section 6.2, we can expect to increase the recall by backing-off to a PropBank semantic parser (e.g. Li et al. 2020) , that has been trained on much larger training data. This is the next step we plan to undertake. However, it is important to note that the reason we chose VNSP for this task is that it is the only semantic parser that generates the rich semantic representations in the predicate logic format that we employed in our method. Therefore, another possibility that looks more promising (but is a much longer-term solution) is continuing to update the VerbNet lexical resource based on the error analysis introduced in this work, annotating new data to cover examples from these updates, and re-training the VerbNet semantic parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 136, |
|
"text": "Li et al. 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Another area for planned future work is to use the generated entity states to augment the ProPara data and feed it into a machine learning model to assess performance improvement. Finally, as proposed in section 6.2, we will be adding a stateof-the-art reference resolution system to achieve document-level entity state tracking. All of these additions will be evaluated and reported on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their helpful feedback. We gratefully acknowledge the support of NSF under grants #1801446 (SATC) and #1822877 (Cyberlearning), and of DARPA CwC (subcontracts from UIUC and SIFT), DARPA AIDA Award FA8750-18-2-0016 (RAMFIS), DARPA KAIROS FA8750-19-2-1004-A20-0047-S005 (RESIN, sub to RPI, PI: Heng Ji), as well as DTRA HDTRA1-16-1-0002/Project 1553695 (eTASC -Empirical Evidence for a Theoretical Approach to Semantic Components). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any government agency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Renewing and revising semlink", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Stowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, terminologies and other language data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Bonial, Kevin Stowe, and Martha Palmer. 2013. Renewing and revising semlink. In Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, ter- minologies and other language data, pages 9-17.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatically labeled data generation for large scale event extraction", |
|
"authors": [ |
|
{ |
|
"first": "Yubo", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shulin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "409--419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data genera- tion for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409-419.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "What happened? leveraging verbnet to predict the effects of actions in procedural text", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhavana", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.05435" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Clark, Bhavana Dalvi, and Niket Tandon. 2018. What happened? leveraging verbnet to predict the effects of actions in procedural text. arXiv preprint arXiv:1804.05435.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Bhavana", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yih", |
|
"middle": [], |
|
"last": "Wen-Tau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1595--1604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1595-1604.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Be consistent! improving procedural text comprehension using label consistency", |
|
"authors": [ |
|
{ |
|
"first": "Xinya", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhavana", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bosselut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2347--2356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinya Du, Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, and Claire Cardie. 2019. Be consistent! improving procedural text comprehension using label consistency. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 2347-2356.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for nlp", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Steven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Gangal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarath", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soroush", |
|
"middle": [], |
|
"last": "Chandar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vosoughi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2105.03075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chan- dar, Soroush Vosoughi, Teruko Mitamura, and Ed- uard Hovy. 2021. A survey of data augmentation ap- proaches for nlp. arXiv preprint arXiv:2105.03075.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Abstraction, Sense Distinctions and Syntax in Neural Semantic Role Labeling", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Gung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Gung. 2020. Abstraction, Sense Distinctions and Syntax in Neural Semantic Role Labeling. Ph.D. thesis, University of Colorado at Boulder.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Predicate representations and polysemy in verbnet semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Gung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 14th International Conference on Computational Semantics (IWCS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Gung and Martha Palmer. 2021. Predicate rep- resentations and polysemy in verbnet semantic pars- ing. In Proceedings of the 14th International Con- ference on Computational Semantics (IWCS), pages 51-62, Groningen, The Netherlands (online). Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Tracking discrete and continuous entity state for process understanding", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Third Workshop on Structured Prediction for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Gupta and Greg Durrett. 2019. Tracking dis- crete and continuous entity state for process under- standing. In Proceedings of the Third Workshop on Structured Prediction for NLP, pages 7-12.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Tracking the world state with recurrent entity networks", |
|
"authors": [ |
|
{ |
|
"first": "Mikael", |
|
"middle": [], |
|
"last": "Henaff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1612.03969" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "spaCy: Industrial-strength Natural Language Processing in Python", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5281/zenodo.1212303" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Dynamic entity representations in neural language models", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenhao", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Martschat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1830--1839", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity rep- resentations in neural language models. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1830-1839.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "From treebank to propbank", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1989--1993", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul R Kingsbury and Martha Palmer. 2002. From tree- bank to propbank. In LREC, pages 1989-1993. Cite- seer.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Structured tuning for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
"first": "Parth", |
"middle": ["Anand"], |
"last": "Jawale", |
"suffix": "" |
}, |
{ |
"first": "Martha", |
"middle": [], |
"last": "Palmer", |
"suffix": "" |
}, |
{ |
"first": "Vivek", |
"middle": [], |
"last": "Srikumar", |
"suffix": "" |
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8402--8412", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.744" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Li, Parth Anand Jawale, Martha Palmer, and Vivek Srikumar. 2020. Structured tuning for semantic role labeling. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 8402-8412, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semlink: Linking propbank, verbnet and framenet", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the generative lexicon conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer. 2009. Semlink: Linking propbank, verbnet and framenet. In Proceedings of the gen- erative lexicon conference, pages 9-15. GenLex-09, Pisa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The proposition bank: An annotated corpus of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational linguistics", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "71--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational linguistics, 31(1):71-106.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "VerbNet: A broadcoverage, comprehensive verb lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Karin Kipper", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karin Kipper Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Semlink 2.0: Chasing lexical resources", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Stowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenette", |
|
"middle": [], |
|
"last": "Preciado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathryn", |
|
"middle": [], |
|
"last": "Conger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"Windisch" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ghazaleh", |
|
"middle": [], |
|
"last": "Kazeminejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "30th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "222--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Stowe, Jenette Preciado, Kathryn Conger, Su- san Windisch Brown, Ghazaleh Kazeminejad, and Martha Palmer. 2021. Semlink 2.0: Chasing lex- ical resources. In 30th Annual Meeting of the As- sociation for Computational Linguistics, pages 222- 227, Gronigen, Netherlands. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Mixuptransformer: Dynamic data augmentation for NLP tasks", |
|
"authors": [ |
|
{ |
|
"first": "Lichao", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Congying", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tingting", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3436--3440", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.coling-main.305" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip Yu, and Lifang He. 2020. Mixup- transformer: Dynamic data augmentation for NLP tasks. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 3436- 3440, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Reasoning about actions and state changes by injecting commonsense knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhavana", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Grus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bosselut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niket Tandon, Bhavana Dalvi, Joel Grus, Wen-tau Yih, Antoine Bosselut, and Peter Clark. 2018. Reasoning about actions and state changes by injecting com- monsense knowledge. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 57-66.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text", |
|
"authors": [ |
|
{ |
|
"first": "Niket", |
|
"middle": [], |
|
"last": "Tandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keisuke", |
|
"middle": [], |
|
"last": "Sakaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
"first": "Bhavana", |
"middle": ["Dalvi"], |
"last": "Mishra", |
"suffix": "" |
}, |
{ |
"first": "Dheeraj", |
"middle": [], |
"last": "Rajagopal", |
"suffix": "" |
}, |
{ |
"first": "Peter", |
"middle": [], |
"last": "Clark", |
"suffix": "" |
}, |
{ |
"first": "Michal", |
"middle": [], |
"last": "Guerquin", |
"suffix": "" |
}, |
{ |
"first": "Kyle", |
"middle": [], |
"last": "Richardson", |
"suffix": "" |
}, |
{ |
"first": "Eduard", |
"middle": [], |
"last": "Hovy", |
"suffix": "" |
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2011.08092" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain proce- dural text. arXiv preprint arXiv:2011.08092.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Understanding procedural text using interactive entity networks", |
|
"authors": [ |
|
{ |
|
"first": "Jizhi", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7281--7290", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.591" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jizhi Tang, Yansong Feng, and Dongyan Zhao. 2020. Understanding procedural text using interactive en- tity networks. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 7281-7290, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Creating training data for scientific named entity recognition with minimal human effort", |
|
"authors": [ |
|
{ |
"first": "Roselyne", |
"middle": ["B"], |
"last": "Tchoua", |
"suffix": "" |
}, |
{ |
"first": "Aswathy", |
"middle": [], |
"last": "Ajith", |
"suffix": "" |
}, |
{ |
"first": "Zhi", |
"middle": [], |
"last": "Hong", |
"suffix": "" |
}, |
{ |
"first": "Logan", |
"middle": ["T"], |
"last": "Ward", |
"suffix": "" |
}, |
{ |
"first": "Kyle", |
"middle": [], |
"last": "Chard", |
"suffix": "" |
}, |
{ |
"first": "Alexander", |
"middle": [], |
"last": "Belikov", |
"suffix": "" |
}, |
{ |
"first": "Debra", |
"middle": ["J"], |
"last": "Audus", |
"suffix": "" |
}, |
{ |
"first": "Shrayesh", |
"middle": [], |
"last": "Patel", |
"suffix": "" |
}, |
{ |
"first": "Juan", |
"middle": ["J"], |
"last": "de Pablo", |
"suffix": "" |
}, |
{ |
"first": "Ian", |
"middle": ["T"], |
"last": "Foster", |
"suffix": "" |
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Computational Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "398--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roselyne B Tchoua, Aswathy Ajith, Zhi Hong, Lo- gan T Ward, Kyle Chard, Alexander Belikov, De- bra J Audus, Shrayesh Patel, Juan J de Pablo, and Ian T Foster. 2019. Creating training data for sci- entific named entity recognition with minimal hu- man effort. In International Conference on Compu- tational Science, pages 398-411. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Scale up event extraction learning via automatic training data generation", |
|
"authors": [ |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chongde", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, and Dongyan Zhao. 2018. Scale up event extraction learning via automatic training data generation. In Thirty-Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "A paragraph from ProPara about blood, showing one kind of inference needed to answer a question. Knowing that blood (as well as the urea and carbon dioxide) will end up in the kidney requires lexical semantics knowledge.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "VerbNet semantic predicate output of VNSP on the input sentence above, and how it is used to predict a CREATE type change of state for the entity 'stream'", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "VerbNet semantic predicate and PropBank output of VNSP on the input sentence above, and how it is used to predict a MOVE type change of state for the entities 'water' and 'minerals'", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "the urea and carbon dioxide'), ('Initial Location', '')]),(True, 'Has Location', [('Agent', 'The blood'), ('Initial Location', '')]), (True, 'Do', [('Agent', 'The blood')]),(True, 'Motion', [('Theme', 'the urea and carbon dioxide'), ('Verbspecific', 'Trajectory')]), (True, 'Motion', [('Agent', 'The blood'), ('Verbspecific', 'Trajectory')]),(True, 'Has Location', [('Theme', 'the urea and carbon dioxide'), ('Destination', 'to the kidneys')]),(True, 'Has Location', [('Agent', 'The blood'), ('Destination', 'to the kidneys')])]", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "False, 'Has State', [('Patient', 'The stream'), ('Result', '')]), (True, 'Has State', [('Patient', 'The stream'), ('Result', '')])]", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "Some of the VerbNet predicates used in our method to infer state changes.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Sentence:</td></tr><tr><td>The roots absorb water and minerals from the soil.</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"4\">: Evaluation Results for Setting 1 -using only</td></tr><tr><td colspan=\"4\">VerbNet semantic predicates and semantic representa-</td></tr><tr><td colspan=\"3\">tions provided in the VNSP output.</td><td/></tr><tr><td/><td colspan=\"3\">Precision Recall F1 Score</td></tr><tr><td>Lexis</td><td/><td/><td/></tr><tr><td>General</td><td>0.36</td><td>0.39</td><td>0.38</td></tr><tr><td colspan=\"2\">Location 0.41</td><td>0.22</td><td>0.29</td></tr><tr><td colspan=\"2\">Existence 0.64</td><td>0.65</td><td>0.64</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Evaluation Results for Setting 2 -using the</td></tr><tr><td>VerbNet predicates, and utilizing PropBank argument</td></tr><tr><td>roles (numbered arguments) when an argument is unin-</td></tr><tr><td>stantiated in the semantic representations. Both the</td></tr><tr><td>VerbNet semantic representations and PropBank argu-</td></tr><tr><td>ment labels are provided by the VNSP.</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "An illustrative", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">Precision Recall F1 Score</td></tr><tr><td colspan=\"4\">Lexis (Gold Label Errors Removed)</td></tr><tr><td>General</td><td>0.44</td><td>0.48</td><td>0.46</td></tr><tr><td colspan=\"2\">Location 0.54</td><td>0.44</td><td>0.49</td></tr><tr><td colspan=\"2\">Existence 0.37</td><td>0.53</td><td>0.44</td></tr><tr><td colspan=\"4\">Lexis (Relaxed: True False Positives Filtered)</td></tr><tr><td>General</td><td>0.64</td><td>0.48</td><td>0.55</td></tr><tr><td colspan=\"2\">Location 0.79</td><td>0.44</td><td>0.57</td></tr><tr><td colspan=\"2\">Existence 0.53</td><td>0.53</td><td>0.53</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "Evaluation Results for the Relaxed Setting. Put the seed in the hole. Pour some water on the seed and hold. Cover up the hole. Press down on it. Spray some plant food on it.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>At the top, we have the predicted labels from Table 3,</td></tr><tr><td>except that we have removed the gold label errors. At</td></tr><tr><td>the bottom, in addition to removing gold label errors,</td></tr><tr><td>we have also filtered the false positives and kept only</td></tr><tr><td>the true false positives (not counting the correct labels</td></tr><tr><td>that are only missing from the gold data).</td></tr><tr><td>example follows. Given the paragraph</td></tr><tr><td>Get some seeds. Pick a spot to plant them. Dig</td></tr><tr><td>a hole in the dirt.</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"text": "i.e. what we have achieved in this work) and", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">ProPara Leaderboard</td><td/></tr><tr><td/><td colspan=\"3\">Precision Recall F1 Score</td></tr><tr><td>KOALA</td><td>0.777</td><td>0.644</td><td>0.704</td></tr><tr><td>TSLM</td><td>0.684</td><td>0.689</td><td>0.686</td></tr><tr><td>DynaPro</td><td>0.752</td><td>0.579</td><td>0.655</td></tr><tr><td>NCET</td><td>0.671</td><td>0.585</td><td>0.625</td></tr><tr><td>KG-MRC</td><td>0.693</td><td>0.493</td><td>0.576</td></tr><tr><td>LACE</td><td>0.753</td><td>0.454</td><td>0.566</td></tr><tr><td colspan=\"2\">Lexis Relaxed 0.64</td><td>0.48</td><td>0.55</td></tr><tr><td>ProStruct</td><td>0.743</td><td>0.430</td><td>0.545</td></tr><tr><td>AQA</td><td>0.620</td><td>0.451</td><td>0.523</td></tr><tr><td>ProGlobal</td><td>0.488</td><td>0.617</td><td>0.519</td></tr><tr><td>ProLocal</td><td>0.817</td><td>0.368</td><td>0.507</td></tr><tr><td>QRN</td><td>0.609</td><td>0.311</td><td>0.411</td></tr><tr><td>EntNet</td><td>0.547</td><td>0.307</td><td>0.394</td></tr><tr><td colspan=\"2\">Lexis (VN+PB) 0.36</td><td>0.39</td><td>0.38</td></tr><tr><td>Lexis (VN)</td><td>0.38</td><td>0.29</td><td>0.33</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"text": "ProPara Leaderboard and where different settings of Lexis would be placed.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |