|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:23.211635Z" |
|
}, |
|
"title": "Biomedical Event Extraction as Multi-turn Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [ |
|
"David" |
|
], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Humboldt-Universit\u00e4t zu Berlin", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Humboldt-Universit\u00e4t zu Berlin", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Leser", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Humboldt-Universit\u00e4t zu Berlin", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Biomedical event extraction from natural text is a challenging task as it searches for complex and often nested structures describing specific relationships between multiple molecular entities, such as genes, proteins, or cellular components. It usually is implemented by a complex pipeline of individual tools to solve the different relation extraction subtasks. We present an alternative approach where the detection of relationships between entities is described uniformly as questions, which are iteratively answered by a question answering (QA) system based on the domain-specific language model SciBERT. This model outperforms two strong baselines in two biomedical event extraction corpora in a Knowledge Base Population setting, and also achieves competitive performance in BioNLP challenge evaluation settings.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Biomedical event extraction from natural text is a challenging task as it searches for complex and often nested structures describing specific relationships between multiple molecular entities, such as genes, proteins, or cellular components. It usually is implemented by a complex pipeline of individual tools to solve the different relation extraction subtasks. We present an alternative approach where the detection of relationships between entities is described uniformly as questions, which are iteratively answered by a question answering (QA) system based on the domain-specific language model SciBERT. This model outperforms two strong baselines in two biomedical event extraction corpora in a Knowledge Base Population setting, and also achieves competitive performance in BioNLP challenge evaluation settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Biomedical event extraction (BEE) (Bj\u00f6rne and Salakoski, 2011) aims to extract molecular events from natural text, where an event typically encompasses certain biomedical entities, such as genes, proteins, complexes or cellular components, specific trigger words determining the event type, and relationships between the entities whose roles depends on the event type. For instance, the verb phosphorylates is a hint to a mention of a phosphorylation event in a given sentence and typically has two entities, one that controls the phosphorylation and one that is phosphorylated. Events may also involve other events, such as the inhibition of an expression, and may ultimately form partial or entire biological pathways (Gonzalez et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 62, |
|
"text": "(Bj\u00f6rne and Salakoski, 2011)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 743, |
|
"text": "(Gonzalez et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "State-of-the-art methods for BEE rely on learning textual patterns and features from annotated documents where entities and their specific role in an event structure are manually marked. They typically consist of multiple classifiers to solve the different subtasks of trigger, role, and event detection, each requiring individual training and validation data. In this paper, we instead model BEE as iterative question answering, using the same model for each of the individual steps which allows knowledge sharing and joint learning of the different event components. We show that this model is as effective in predicting event structures in two BioNLP shared tasks (GENIA, 2011 and Pathway Curation, 2013) as a baseline consisting of multiple, CNN based classifiers (Bj\u00f6rne and Salakoski, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 768, |
|
"end": 796, |
|
"text": "(Bj\u00f6rne and Salakoski, 2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is structured as followed: In Section 2, we give a brief overview over related work in biomedical event extraction and in question answering. We define the event extraction task, our question answering model, and our evaluation setup in Section 3. In Section 4, we present our results and discuss them before we conclude the paper. The code and pretrained models are freely available at https://github.com/WangXII/bio_ event_qa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Approaches to BEE can be divided into two categories: Approaches using manually defined rules (Valenzuela-Esc\u00e1rcega et al., 2015) and approaches making use of machine learning algorithms. Early approaches of the latter category, such as Event-Mine (Miwa and Ananiadou, 2013) or the Turku Event Extraction System (TEES) (Bj\u00f6rne and Salakoski, 2011) , had in common that they achieve event extraction through a pipeline of several independent classifiers, each solving a different subtask of event extraction and each based on a set of specifically defined features extracted from the text, often after heavy and error-prone preprocessing (e.g., POS tagging, dependency parsing). More recent works use neural architectures, where the previously manually defined features are replaced by automatically learned text representations (Bj\u00f6rne and Salakoski, 2018; Trieu et al., 2020) , involving techniques like word embeddings and other language models. While the original TEES (TEES SVM) (Bj\u00f6rne and Salakoski, 2011) , was based on a pipeline of SVMs using manually defined features, the more recent TEES CNN (Bj\u00f6rne and Salakoski, 2018) additionally incorporates biomedical word embeddings as features and replaces the SVMs with CNNs. As pipelined models suffer from error propagation (for instance, an undetected event trigger in the first phase leads to missing the event entirely), approaches based on joint inference recently became more popular. Zhu and Zheng (2020) assign a separate probability to each event trigger, relation and event candidate and move the final decision about the veracity of an event structure to an optimization scheme solved in a post-processing step. DeepEventMine (Trieu et al., 2020 ) is a derivative of EventMine (Miwa and Ananiadou, 2013) and makes use of text representations learned by BERT (Devlin et al., 2018) . It tries to avoid error propagation by training a multilayer network for BEE in an end-to-end manner and achieves new state-of-the-art performance in various biomedical event extraction corpora. In contrast to these previous approaches, our model employs a network with only one single output layer for all event extraction subtasks and it does not need to introduce a new layer for each subtask.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 129, |
|
"text": "(Valenzuela-Esc\u00e1rcega et al., 2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 274, |
|
"text": "(Miwa and Ananiadou, 2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 347, |
|
"text": "(Bj\u00f6rne and Salakoski, 2011)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 856, |
|
"text": "(Bj\u00f6rne and Salakoski, 2018;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 876, |
|
"text": "Trieu et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 983, |
|
"end": 1011, |
|
"text": "(Bj\u00f6rne and Salakoski, 2011)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1104, |
|
"end": 1132, |
|
"text": "(Bj\u00f6rne and Salakoski, 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1693, |
|
"end": 1712, |
|
"text": "(Trieu et al., 2020", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1744, |
|
"end": 1770, |
|
"text": "(Miwa and Ananiadou, 2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1825, |
|
"end": 1846, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this work, we will model BEE as a an iterative question answering (QA) process. This idea was brought up first by McCann et al. (2018) , who showed how to model ten different NLP tasks, among them machine translation, summarization, and sentiment analysis, as question answering tasks over a properly defined context. Li et al. (2019) proposed a specific question answering framework for event extraction based on the idea of extracting the entities of individual relations using so-called \"question turns\". In each turn, the question answering procedure asks a question for a new entity from the relation followed by a text passage where a span is marked as the output entity. Found entities from previous turns are included in the questions of subsequent turns to allow for more precise subsequent queries. The process is controlled by predefined question templates which determine the sequence of turns depending on the event type. However, this work is not applicable to BEE, because it assumes a fixed number of arguments and has no support for nested events (events that serve as arguments for other events), which are two defining characteristics of the BEE task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 137, |
|
"text": "McCann et al. (2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 337, |
|
"text": "Li et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this paper, we develop a similar framework for the extraction of nested biomedical events. Our framework applies SciBert (Beltagy et al., 2019) , a domain-specific refinement of BERT (Devlin et al., 2018) , as underlying QA method. BERT (and SciBert) is a pre-trained transformer model (Vaswani et al., 2017) which relies on an attention mechanism to learn relationships between different parts of a sequential input, which was shown to better capture long-term dependencies than Convolutional and Recurrent Neural Networks. The parameters of its final layers can be used as input features to other models, or can be used in a finetuning procedure involving a further, task-specific layer for problems like question answering, sentence similarity quantification, or sentence continuation prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 146, |
|
"text": "(Beltagy et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 207, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 311, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Biomedical event structures are used to model biomedical processes. In general, they consist of signal words, called trigger of the event, and biomedical entities, called arguments of the event. The trigger determines the event type, which in turn determines the semantics (or roles) of its arguments. Event triggers are often verbs or nouns such as phosphorylation, transcription or binds, whereas biomedical entities typically are proper nouns, such as NF-kappa B, ATP or glucose. The role theme denotes the central object of interest in an event, while the role cause often is the facilitator or driver of the event. Notably, events can be arguments of other events, for instance when a protein A (the cause) activates (the trigger) the phosphorylation (the theme, in this case a nested trigger) of another protein B (the argument of the nested event). A typical biomedical event structure is illustrated in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 911, |
|
"end": 919, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to find simple and complex event structures, we adopt the multi-turn question answering approach of Li et al. (2019) to BEE. We cast it as a series of QA tasks, where each individual QA problem is modeled as a sequence labeling task in Figure 1 : Event visualization using BRAT by Stenetorp et al. (2011) Table 1: Our question template and the expected answers when applied to the example from Figure 1 . In the first question we ask for simple events involving our the chosen entity as a theme. If the entity is part of an event we retrieve the corresponding event trigger, its type and position in the text. Then, we ask for other event arguments belonging to the trigger-theme pair. Subsequent questions aim to uncover recursive events containing the just extracted simple event as a theme. The recursive descent ends as soon as the event is not found to be part of another structure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 125, |
|
"text": "Li et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 313, |
|
"text": "Stenetorp et al. (2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 253, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 411, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-turn Question Answering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Answers: which the model decides for each token whether it belongs to an answer of the current question and if it does, which role it has. This can be interpreted as a kind of multitask learning in which the different tasks are not defined by different loss functions but through different types of questions. Triggers determine the specific event type whereas entities take one of the event argument roles. The formulation as sequence labeling tasks allows for multiple text spans to be tagged as answers of the same question which is beneficial as (1) an entity can participate in two distinct event structures and (2) an event can have multiple different arguments. The model assumes gold standard annotation of all entities in the corpus and uses these to structure the iterative QA process, treating each gold-standard entity as a potential theme argument. It expands events from there by iteratively asking for corresponding event triggers, event arguments and nested regulation events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Questions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We introduce the notion of a question template which defines the different types of questions we use in our model and the sequence of turns we pose them. Our question template follows a recursive procedure and distinguishes two main question types, one for detecting event triggers and one for detecting event arguments. The process iterates through all given entities and asks whether there are any events with this entity as theme. This first question belongs to the Triggers question type and detects triggers corresponding to a theme candidate. In the subsequent Arguments question we ask for arguments belonging to a previously discovered (theme, trigger) combination. Applying the first question type Triggers to our example from Figure 1, we ask for all event triggers and their event type belonging to the protein VCAM-1. Note that this question addresses all different mentions of the entity VCAM-1 in the given document. In our example, the assignment of answer triggers to entity evidences is clear as VCAM-1 is mentioned exactly once in the document; in cases where an argument is mentioned more than once, we need to perform the correct assignment in a subsequent step (see next section). As the answer to our question we mark the event trigger expression with the event type Expression. In every Arguments question (cf. Table 1 ) we incorporate the event trigger found from the previous answer into the formulation of the new question. Next, we query for non-theme arguments belonging to the Expression of VCAM-1 which yields no answers in this example.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 736, |
|
"end": 742, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1334, |
|
"end": 1341, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Questions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The subsequent questions deal with finding nested structures and rely on the same schema of alternating Triggers and Arguments question turns. We ask which other events our previously found event could be a theme of, i.e., we ask \"Which are the events of the Expression of VCAM-1?\". In our example, we find that the VCAM-1 expression is upregulated; the trigger upregulated denotes an event of type positive regulation. If we found multiple answers of different event types to the same entity or event in a Triggers question, we expand each single of these into a separate event structure (see next section). In our example, we find exactly one answer to the nested trigger question and proceed again by querying for the arguments of the found event. The recursion can go on for an arbitrary amount of steps as it only stops when there is no new event trigger for a Triggers question 1 . In the example, we recurse twice and then stop with the result that the upregulation of the VCAM-1 expression is itself inhibited in a negative regulation event which is caused by an expression event.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Questions:", |
|
"sec_num": null |
|
}, |
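
{

"text": "To make the question template concrete, the following minimal sketch shows how the two question types could be instantiated from previously found structures. The function names and question wordings are hypothetical and only illustrate the mechanism; they are not the exact templates used in our experiments.\n\ndef triggers_question(theme):\n    # First turn: ask for event triggers that have the given entity as a theme.\n    return f'What events have {theme} as a theme?'\n\ndef arguments_question(event_type, trigger, theme):\n    # Follow-up turn: ask for non-theme arguments of a found (theme, trigger) pair.\n    return f'What are the arguments of the {event_type} ({trigger}) of {theme}?'\n\nprint(triggers_question('VCAM-1'))\nprint(arguments_question('Expression', 'expression', 'VCAM-1'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Questions:",

"sec_num": null

},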
|
{ |
|
"text": "An overview of the application of our method to the example from Figure 1 can be found in Table 1. Pseudocode of our framework is given in Algorithm 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Questions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We transform all the event annotations provided by the tasks to natural language questions. The mapping from event annotation to question is straightforward and not described further here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Questions:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The answers from our question answering model results in only basic and partly underdetermined event structures that do not fit the format of events in our evaluation corpora. We apply two different post-processing steps: Event matching, where we identify the text span best matching the prior event structure from the question and the entity/trigger from the answer, and event merging, where we merge the prior event structure and the entity/trigger from the answer into one single event structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Merging", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We illustrate both procedures using the example from Figure 1 . In the first Triggers questions, we receive the expression trigger at character positions (62,72) as an answer for VCAM-1. In the matching step, we need to identify which VCAM-1 entity in the text the expression trigger at position (62,72) belongs to. We look up an entity and trigger dictionary, which stores positions of all entities and of all detected triggers. We then compute the differences of starting positions for each mention of the entity and the starting position of the trigger and choose the occurrence with the smallest difference. In our example, the VCAM-1 entity at position (55,61) is identified as a match for the trigger expression at position (62,72) with a difference of 7 characters. In the merging step, we combine the trigger expression at position (62,72) and the entity VCAM-1 at position (55,61) to a single new event structure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Event Merging", |
|
"sec_num": "3.3" |
|
}, |
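
{

"text": "A minimal sketch of this nearest-mention heuristic (the function name is hypothetical; entity_starts is assumed to hold the character start positions of all mentions of the entity):\n\ndef match_entity_mention(entity_starts, trigger_start):\n    # Choose the entity mention whose start position is closest to the trigger start.\n    return min(entity_starts, key=lambda start: abs(start - trigger_start))\n\n# Example from Figure 1: VCAM-1 starts at character 55, the trigger at character 62.\nassert match_entity_mention([55], 62) == 55",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Merging",

"sec_num": "3.3"

},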
|
{ |
|
"text": "The specific algorithm for event merging depends on the question type and the possible answers. We explain the differences using two examples. Assume we found a phosphorylation event with theme A in the first Triggers question. Asking for arguments belonging to this prior event, assume we receive four answers, namely cause B, cause C, site D and site E. In this case of multiple argument types, we enumerate all possible cause site combinations, merge them with the prior event and receive four new phosphorylation events, i.e., phosphorylation of theme A with cause B and site D, phosphorylation of theme A with cause B and site E etc. Details regarding the performance for this merging heuristic is found in Table 4 , query five. A more sophisticated merging approach is needed for binding and pathway events which may contain multiple participants. For these events, we store a directed graph per event trigger where nodes are participants and a directed edge exists from entity A to entity B if B is answer to the Arguments question of A. After the graph is constructed, we transform it into an undirected graph where we keep all edges which exist in both directions. In the final step, we detect maximal cliques in the graph and form a distinct binding/pathway event for each clique. The results for binding/pathway merging is found in Table 4 , query six. We use similar heuristics for the merging step of nested regulations and other event types.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 712, |
|
"end": 719, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1343, |
|
"end": 1350, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Event Merging", |
|
"sec_num": "3.3" |
|
}, |
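
{

"text": "A sketch of the binding/pathway merging step (using networkx for clique detection is our assumption for illustration; any graph library with maximal clique enumeration would do):\n\nimport networkx as nx\n\ndef merge_binding_participants(answers):\n    # answers maps each participant to the set of participants returned as answers\n    # to its Arguments question (a directed relation between participants).\n    directed = nx.DiGraph()\n    for source, targets in answers.items():\n        for target in targets:\n            directed.add_edge(source, target)\n    # Keep only edges that exist in both directions.\n    undirected = nx.Graph()\n    for u, v in directed.edges():\n        if directed.has_edge(v, u):\n            undirected.add_edge(u, v)\n    # Each maximal clique becomes one distinct binding/pathway event.\n    return [set(clique) for clique in nx.find_cliques(undirected)]\n\n# Example: A and B confirm each other, C is only claimed by A, so only (A, B) is merged.\nprint(merge_binding_participants({'A': {'B', 'C'}, 'B': {'A'}, 'C': set()}))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Merging",

"sec_num": "3.3"

},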
|
{ |
|
"text": "We use Huggingface's Transformers 2 (Wolf et al., 2019) library in Pytorch for our implementation. For the initialization of the pretrained BERT neural network model we use SciBERT 3 (Beltagy et al., 2019) which has been pretrained on scientific literature. We add one softmax layer as output on top of the final hidden representation of each token as we fine-tune the model parameters for our question answering task. In the final output layer each token in a given document sequence is tagged in IOB2-style as either being inside, outside or the beginning of Algorithm 1 Pseudocode of our QA framework for the extraction of event structures. We expand event structure candidates around potential theme arguments, adding corresponding event triggers in the question Triggers and corresponding event arguments in the question Arguments. If we have found new events we add their (theme, trigger)-pair to our event candidates list for the next iteration, where we ask whether the just found event is a theme to a (new) nested event.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 55, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 205, |
|
"text": "(Beltagy et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.4" |
|
}, |
|
{

"text": "Algorithm 1: Pseudocode of our QA framework for the extraction of event structures. We expand event structure candidates around potential theme arguments, adding corresponding event triggers in the question Triggers and corresponding event arguments in the question Arguments. If we have found new events, we add their (theme, trigger) pair to our event candidates list for the next iteration, where we ask whether the just found event is a theme of a (new) nested event.\nevent_candidates = proteinsFromDocument()\nwhile event_candidates \u2260 \u2205 do\n    new_events = \u2205\n    for candidate in event_candidates do\n        new_triggers = Triggers(candidate)\n        new_arguments = Arguments(candidate)\n        new_events.add(Event(candidate, new_triggers))\n    end for\n    event_candidates = new_events\nend while",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation",

"sec_num": "3.4"

},

{

"text": "The beginning and inside tags are further divided into the different event type and event argument classes according to the structures sought in a corpus. The same BERT neural network model is shared across the whole task and all questions. This allows knowledge sharing and joint learning of the different questions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.4" |
|
}, |
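
{

"text": "A minimal sketch of the shared token classification setup (our assumption for illustration: the Hugging Face AutoModelForTokenClassification API, an abbreviated label set, and invented question and passage wordings; the real label set is derived from all event types and argument roles of the respective corpus):\n\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification\n\n# Illustrative IOB2 label set: O plus B-/I- tags for event types and argument roles.\nlabels = ['O', 'B-Gene_expression', 'I-Gene_expression', 'B-Positive_regulation', 'I-Positive_regulation', 'B-Cause', 'I-Cause', 'B-Site', 'I-Site']\n\ntokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')\nmodel = AutoModelForTokenClassification.from_pretrained('allenai/scibert_scivocab_uncased', num_labels=len(labels))\n\n# Question and passage are concatenated into one input sequence; the same model\n# and the same output layer are shared across all question types.\ninputs = tokenizer('What events have VCAM-1 as a theme?', 'The expression of VCAM-1 was upregulated.', return_tensors='pt', truncation=True)\nlogits = model(**inputs).logits  # shape: (1, sequence_length, len(labels))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation",

"sec_num": "3.4"

},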
|
{ |
|
"text": "For training, we create all existing questions in the training set exactly once in the beginning and then draw randomized batches as our training examples. We use the default AdamW configuration with learning rate 5e-5, no weight decay, \u03b2 1 = 0.9, \u03b2 2 = 0.999 and = 1e-8. Training is conducted on four Nvidia GeForce RTX 2080 Ti GPUs. Our maximum sequence length for the input data is 384 tokens. To deal with longer sequences than the maximum sequence length, we duplicate the beginning and the end of intermediate sequences so that they form overlapping windows with a length of 64 tokens. To decide between two differently predicted tags for the same token in two adjacent windows, we choose the tag of the token which has the larger context window. We enable apex 4 fp16 16-bit mixed precision for improved computation efficiency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.4" |
|
}, |
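
{

"text": "A simplified sketch of this windowing scheme (hypothetical helper names; the actual implementation operates on wordpiece tokens and keeps the question tokens in every window):\n\ndef split_into_windows(tokens, max_len=384, overlap=64):\n    # Split a long token sequence into windows of at most max_len tokens,\n    # where adjacent windows share `overlap` tokens at their boundary.\n    windows, start = [], 0\n    step = max_len - overlap\n    while start < len(tokens):\n        windows.append((start, tokens[start:start + max_len]))\n        if start + max_len >= len(tokens):\n            break\n        start += step\n    return windows\n\ndef resolve_tag(position, window_a, window_b, tag_a, tag_b):\n    # For a token predicted in two adjacent windows, keep the tag from the window\n    # in which the token lies farther away from the window border (larger context).\n    def margin(window):\n        start, tokens = window\n        return min(position - start, start + len(tokens) - 1 - position)\n    return tag_a if margin(window_a) >= margin(window_b) else tag_b",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation",

"sec_num": "3.4"

},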
|
{ |
|
"text": "Hyperparameters to choose are the batch size and the number of epochs when to stop training. During our model development, a batch size of 16 has proven to work well together with 16 epochs after which the validation loss usually does not improve anymore. The whole training process during fine-tuning is relatively fast and the training time ranges from half an hour to an hour on Pathway Curation to around two hours on GENIA depending on hyperparameter choice. Performance in evaluation fluctuates over few percentage in F1-score 4 https://github.com/NVIDIA/apex depending on the initial seed during neural network initialization. As we mainly compare to Bj\u00f6rne and Salakoski (2018), we adopt their evaluation strategy and report the results of the seed with the best performance on the validation set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We evaluate our approach to BEE on two corpora used widely in biomedical NLP research, namely the Pathway Curation corpus (PC) from the BioNLP13 challenge (Ohta et al., 2013) and the Genia 11 corpus (GENIA) from the BioNLP11 challenge . These corpora consist of annotated PubMed abstracts and full texts. The PC dataset focuses on pathway reactions whereas GENIA aims to cover molecular biology in general. GENIA contains 14,958 sentences and PC 5,040 sentences. GENIA distinguishes seven different event types and six different argument types, whereas PC distinguishes 24 different event types and nine argument types. Both corpora include common biochemical event types, such as phosphorylation, gene expression, binding or positive (negative) regulation. PC further distinguishes multiple conversion types, such as dephosphorylation, acetylation, ubiquitination etc. and it adds activation and inactivation to the class of regulation events. PC also annotates event modifiers, i.e., speculation and negation, and allows for events without a theme. The latter two types of event components currently are not addressed by our work, but could be included by adding further turns and questions to our question template. A closer breakdown of the events and their components in the two corpora can be found in Table 2 . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 174, |
|
"text": "(Ohta et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1308, |
|
"end": 1315, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpora", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We evaluate our model for two different tasks: Knowledge Base Population (KBP) and the standard BioNLP a* setting. In both cases, goldstandard entity annotations are provided with the corpus whereas event annotations have to be predicted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Tasks", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Following Kim et al. (2015) , we evaluate the models' capability to answer a set of predefined queries, such as finding all pairs of proteins that bind to each other. An overview of the different knowledge base queries is found in Table 3 . The first four queries can be directly answered from our question answering model while the remaining three require event merging, which we perform as described in Section 3.3. As usual in KBP settings, the extracted event structures are compared on a document-level, so a same event occurring twice in a single document is counted once only in this format.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 27, |
|
"text": "Kim et al. (2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 238, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Base Population", |
|
"sec_num": null |
|
}, |
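
{

"text": "A small sketch of this document-level counting (illustrative only; extracted answers are normalized to tuples and deduplicated per document before precision and recall are computed):\n\ndef document_level_answers(extracted):\n    # extracted: list of (doc_id, answer_tuple) pairs produced by the QA model.\n    # A duplicate answer within the same document is counted only once.\n    return {(doc_id, answer) for doc_id, answer in extracted}\n\npredictions = document_level_answers([\n    ('PMID1', ('Gene Expression', 'MACS1')),\n    ('PMID1', ('Gene Expression', 'MACS1')),  # duplicate within the document\n])\nprint(predictions)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Knowledge Base Population",

"sec_num": null

},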
|
{ |
|
"text": "The .a* evaluation format is the standard evaluation format provided by the GENIA and PC shared tasks. PC is conducted in a strict matching evaluation mode, where the extracted triggers, all event arguments, and their text spans must exactly coincide. The approximate span and approximate recursive matching mode for GENIA is more lenient as the text spans and positions may differ up to one word from the gold-standard annotations and nested regulation events only need to coincide in their theme arguments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BioNLP .a* evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "4 Results and Discussion", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BioNLP .a* evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use TEES SVM (Bj\u00f6rne and Salakoski, 2011) and TEES CNN (Bj\u00f6rne and Salakoski, 2018) 5 as baselines for knowledge base population. Both provide result files and models online 6 . We compare the result of our single homogeneous QA multiturn model to the individual models of these approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 44, |
|
"text": "(Bj\u00f6rne and Salakoski, 2011)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Base Population", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The results can be found in Table 4 . Our approach achieves a 0.87 percentage points (pp) and a 2.47 pp better F1-score than TEES CNN and TEES SVM, respectively, on GENIA. On PC, it achieves a 2.40 pp and a 3.13 pp better F1-score. This increase can be attributed to a considerably better recall (2.35 pp for GENIA and 6.59 pp for PC, compared to TEES CNN). Its precision is 1.38 pp and 2.24 pp lower than the respective best baseline result. It shows performance gains of up to 5.16 pp F1 in the first three Basic Event queries which require no event merging. Results for the other type of queries are mixed: Our model achieves good results for binding and pathway pairs, yet is worse for transitive protein regulations and the combination of all conversion arguments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 35, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Base Population", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Most likely, the question answering approach achieves strong performances in extraction of simple events as they rely on only one or two questions and require no complicated merging steps. The model infers binding and pathway pairs in the fifth query relatively well since we explicitly query for those in the Arguments question type. The worse results for the arguments of a conversion event in the sixth query are probably due to the naive heuristic of simply enumerating all valid argument combinations as output during event merging. Regulation event detection in the forth and seventh query presumably also suffer from our too-simple event merging as we match a detected event trigger cause to a whole previously discovered event structure. We also observe that error propagation negatively influences regulation detection and event detection as we immediately extract simple events after our first Triggers question from the (theme, trigger)pairs, but we do not incorporate event arguments or regulations found in later question turns into a joint extraction of events. Kim et al. (2015) . We conduct evaluation of found events at document level, i.e., counting unique event structures per document. The answers are denoted as tuples. Example questions and answers are given in italics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1076, |
|
"end": 1093, |
|
"text": "Kim et al. (2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Base Population", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "Table 3: Knowledge base queries on a document level, following Kim et al. (2015). We conduct evaluation of found events at document level, i.e., counting unique event structures per document. The answers are denoted as tuples. Example questions and answers are given in italics.\n1. Which protein appears in context of event A? Answer format: (EventType, ProteinTheme). Example: Which protein appears in context of a gene expression? (Gene Expression, MACS1)\n2. What is an argument of event A of entity X? Answer format: (EventType, ProteinTheme, ArgumentType, Argument). Example: What is the location of the localization of MACS1? (Localization, MACS1, ToLoc, mitochondrial matrix)\n3. Is the simple event A part of a regulation? Answer format: (SimpleEvent, Boolean). Example: Is the transport of hydroxyl part of a regulation? ((Transport, hydroxyl), yes)\n4. What regulates the simple event A? Answer format: (SimpleEvent, Cause). Example: What regulates the transport of hydroxyl? ((Transport, hydroxyl), amiloride)\n5. What is the site of the conversion event of A with cause B? Answer format: (EventType, ProteinTheme, ProteinCause, ProteinSite). Example: What is the site for the acetylation of H3 by Asf1? (Acetylation, H3, Asf1, K56)\n6. What binds to protein A? Answer format: (Protein1, Protein2). Example: What binds to Na+? (Na+, H+)\n7. What regulates A transitively? Answer format: (ProteinTheme, ProteinCause). Example: What regulates NF-kappaB? (NF-kappaB, TLR2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Knowledge Base Population",

"sec_num": null

},

{

"text": "Table 5 (caption): The test set evaluation is conducted online where predictions are submitted to a server and the final results are returned. DeepEventMine (Trieu et al., 2020) represents results of very recent work. Note that our model does not account for event modifications or events without themes in the PC corpus. Dev (adjusted) denotes the results on the PC development set excluding these annotations. The best value in each partial column is marked in bold.",
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 177, |
|
"text": "(Trieu et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query description", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Table 5 , evaluation results in the BioNLP .a* challenge setting are compared to four competitors: TES CNN (Bj\u00f6rne and Salakoski, 2018) , TEES SVM (Bj\u00f6rne and Salakoski, 2011) , EventMine (Miwa and Ananiadou, 2013) , and the very recent DeepEventMine (Trieu et al., 2020) . On GENIA11, our proposed approach beats three competitors on the test set, but is outperformed by DeepEvent-Mine by almost 5 pp in F1-score. The higher recall and lower precision compared to DeepEventMine might be attributed to the simple rule-based event merging step, which constructs events for all detected relations regardless of their score. In contrast, DeepEventMine models the event construction as a separate machine learning task in which errors from earlier steps can be corrected, potentially leading to a higher precision. For the PC corpus, our results are considerably worse than those of the baselines on both the dev and the test set. This inferior performance can be attributed to the fact that the proposed model does not account for event modifications or events without themes. Accordingly, we evaluated the models again on the development set excluding such annotations. The results for this experiment can be found in the column Pathway Curation Dev (adjusted). Under this setting our proposed model outperforms both TEES variants. Note that events without themes and their regulations make up to a tenth of the events in the development set of PC, among them the majority are simple pathway events only made up by an event trigger. We conducted an error analysis on the dev sets of the GENIA11 and PC corpora. Results are shown in Table 6 . We distinguish error types into false positives and false negatives:", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 138, |
|
"text": "(Bj\u00f6rne and Salakoski, 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 178, |
|
"text": "(Bj\u00f6rne and Salakoski, 2011)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 217, |
|
"text": "(Miwa and Ananiadou, 2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 274, |
|
"text": "(Trieu et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1633, |
|
"end": 1640, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Query description", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Wrong Trigger/Argument Spans denotes answers predicted by the model which are no gold-standard answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Wrong Trigger/Argument Label means correctly detected text spans which have the wrong event type or wrong argument type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Missing Trigger/Argument Spans (Question) refers to questions where a trigger or an argument has not been extracted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Missing Trigger/Argument Spans (Propagated) refers to triggers or arguments which have not been extracted because the according question has not been found (i.e., the answers from a previous question have been wrong so that the subsequent question is not posed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We find that wrong label assignment is the cause for about five percent of false positives and false negatives. Missing propagated questions make up about one half of the false negatives during question answering in non-regulation event types. The relative amount of errors is lower in GENIA11 compared to Pathway Curation which reflects the overall better model performances in GENIA11.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In an ablation study, we examine the impact of joint training on all questions versus training only on the one question type of simple events trigger detection, i.e., only using the examples of the first Triggers question and examining the impact of multi-task learning in our model. We find that training the model only on the one question type results in a worse performance (1.08 pp F1-score) for answering this one specific question compared to evaluating the found triggers trained on the full questions dataset. This indicates that the shared model parameters provide a benefit for detecting the right answer to all question types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We presented an approach for BEE in which this task is modeled as multi-turn question answering problem using BERT as underlying language model. We show that our model is able to form event structures from the answers of multiple questions. Our experiments show promising results on two corpora, especially in a Knowledge Base Population setting. In future work, we aim to improve model performance by adjusting the event merging procedure and by using further or modified question templates. It would also be worthwhile to study the reasons of the performance gains of our model compared to TEES in more detail, for instance by replacing the CNN in TEES CNN with BERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The deepest nesting occurring in our two evaluation corpora is three.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/huggingface/ transformers 3 https://github.com/allenai/scibert", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/jbjorne/TEES 6 https://b2share.eudat.eu/records/ bee50aa63b0b404da9c76b29de4d8653", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Scibert: Pretrained language model for scientific text", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: Pretrained language model for scientific text. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Generalizing biomedical event extraction", |
|
"authors": [ |
|
{ |
|
"first": "Jari", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of BioNLP Shared Task 2011 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jari Bj\u00f6rne and Tapio Salakoski. 2011. Generaliz- ing biomedical event extraction. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 183- 191.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Biomedical event extraction using convolutional neural networks and dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jari", |
|
"middle": [], |
|
"last": "Bj\u00f6rne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the BioNLP 2018 workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "98--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jari Bj\u00f6rne and Tapio Salakoski. 2018. Biomedi- cal event extraction using convolutional neural net- works and dependency parsing. In Proceedings of the BioNLP 2018 workshop, pages 98-108.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Recent advances and emerging applications in text and data mining for biomedical discovery", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Graciela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tasnia", |
|
"middle": [], |
|
"last": "Gonzalez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tahsin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Britton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Goodale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Casey S", |
|
"middle": [], |
|
"last": "Greene", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Greene", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Briefings in bioinformatics", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "33--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graciela H Gonzalez, Tasnia Tahsin, Britton C Goodale, Anna C Greene, and Casey S Greene. 2015. Recent advances and emerging applications in text and data mining for biomedical discovery. Briefings in bioinformatics, 17(1):33-42.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Extending the evaluation of genia event task toward knowledge base construction and comparison to gene regulation ontology task", |
|
"authors": [ |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jung-Jae", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Rebholz-Schuhmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "BMC bioinformatics", |
|
"volume": "16", |
|
"issue": "S10", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin-Dong Kim, Jung-jae Kim, Xu Han, and Dietrich Rebholz-Schuhmann. 2015. Extending the evalua- tion of genia event task toward knowledge base con- struction and comparison to gene regulation ontol- ogy task. BMC bioinformatics, 16(S10):S3.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overview of genia event task in bionlp shared task", |
|
"authors": [ |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toshihisa", |
|
"middle": [], |
|
"last": "Takagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akinori", |
|
"middle": [], |
|
"last": "Yonezawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the BioNLP Shared Task 2011 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Aki- nori Yonezawa. 2011. Overview of genia event task in bionlp shared task 2011. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 7-15. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Entity-relation extraction as multi-turn question answering", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoya", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zijun", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiayu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duo", |
|
"middle": [], |
|
"last": "Chai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingxin", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1340--1350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question an- swering. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1340-1350.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The natural language decathlon", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Mccann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Shirish Keskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Multitask learning as question answering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1806.08730" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Nactem eventmine for bionlp 2013 cg and pc tasks", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the BioNLP Shared Task 2013 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "94--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa and Sophia Ananiadou. 2013. Nactem eventmine for bionlp 2013 cg and pc tasks. In Pro- ceedings of the BioNLP Shared Task 2013 Workshop, pages 94-98.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Overview of the pathway curation (pc) task of bionlp shared task 2013", |
|
"authors": [ |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Rak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rowley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong-Woo", |
|
"middle": [], |
|
"last": "Chun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sung-Jae", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sung-Pil", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the BioNLP Shared Task 2013 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomoko Ohta, Sampo Pyysalo, Rafal Rak, Andrew Rowley, Hong-Woo Chun, Sung-Jae Jung, Sung-Pil Choi, Sophia Ananiadou, and Jun'ichi Tsujii. 2013. Overview of the pathway curation (pc) task of bionlp shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 67-75.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bionlp shared task 2011: Supporting resources", |
|
"authors": [ |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Topi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin-Dong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of BioNLP Shared Task 2011 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "112--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pontus Stenetorp, Goran Topi\u0107, Sampo Pyysalo, Tomoko Ohta, Jin-Dong Kim, and Jun'ichi Tsujii. 2011. Bionlp shared task 2011: Supporting re- sources. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 112-120, Portland, Oregon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deepeventmine: End-to-end neural nested event extraction from biomedical texts", |
|
"authors": [ |
|
{ |
|
"first": "Hai-Long", |
|
"middle": [], |
|
"last": "Trieu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thy", |
|
"middle": [], |
|
"last": "Thy Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khoa", |
|
"middle": [ |
|
"N", |
|
"A" |
|
], |
|
"last": "Duong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anh", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hai-Long Trieu, Thy Thy Tran, Khoa NA Duong, Anh Nguyen, Makoto Miwa, and Sophia Ananiadou. 2020. Deepeventmine: End-to-end neural nested event extraction from biomedical texts. Bioinformat- ics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A domainindependent rule-based framework for event extraction", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Valenzuela-Esc\u00e1rcega", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gus", |
|
"middle": [], |
|
"last": "Hahn-Powell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hicks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "127--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco A Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell, Mi- hai Surdeanu, and Thomas Hicks. 2015. A domain- independent rule-based framework for event extrac- tion. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 127-132.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Transformers: State-of-theart natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.03771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Transformers: State-of-the- art natural language processing. arXiv preprint arXiv:1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Biomedical event extraction with a novel combination strategy based on hybrid deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Lvxing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoran", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "BMC bioinformatics", |
|
"volume": "21", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lvxing Zhu and Haoran Zheng. 2020. Biomedical event extraction with a novel combination strategy based on hybrid deep neural networks. BMC bioin- formatics, 21(1):47.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"num": null, |
|
"text": "What are events of the Expression of VCAM-1? The Positive regulation upregulation at (39,51). 4. What are arguments of the Positive regulation of the Expression VCAM-1? None. 5. What are events of the Positive regulation of the Expression of VCAM-1? The Negative regulation inhibited at (25,34). 6. What are arguments of the Negative regulation of the Positive regulation of the Expression of VCAM-1? The Cause expression at (1,11). 7. What are events of the Negative regulation of the Positive regulation of the Expression of VCAM-1?", |
|
"type_str": "table", |
|
"content": "<table><tr><td>are events of VCAM-1?</td><td>The Expression expression at (62,72).</td></tr><tr><td>2. What are arguments of the Expression of VCAM-1?</td><td>None.</td></tr><tr><td colspan=\"2\">3. None.</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Statistics of our question answering training datasets built from the gold event annotations.", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">GENIA11</td><td colspan=\"2\">Pathway Curation</td></tr><tr><td colspan=\"5\">Question type #questions #gold answers #questions #gold answers</td></tr><tr><td>Simple Events</td><td/><td/><td/><td/></tr><tr><td>Triggers</td><td>6,392</td><td>6,549</td><td>4,316</td><td>3,857</td></tr><tr><td>Arguments</td><td>6,263</td><td>1,486</td><td>3,242</td><td>2,389</td></tr><tr><td>Nested Events</td><td/><td/><td/><td/></tr><tr><td>Triggers</td><td>10,564</td><td>3,523</td><td>5,012</td><td>1,708</td></tr><tr><td>Arguments</td><td>4,303</td><td>1,096</td><td>1,775</td><td>1,440</td></tr><tr><td>Total</td><td>27,522</td><td>12,654</td><td>14,345</td><td>9,394</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Queries for our Knowledge Base Population evaluation, adapted from", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "Results for Knowledge Base Population on the development sets, compared to TEES SVM and TEES CNN. Semantics for each individual question are found inTable 3. The answers of the first four queries (Simple Events) can be derived by our model without event merging. The two lower sections show only F1 scores. The best value in each partial column is marked in bold.", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>GENIA</td><td/><td/><td/><td colspan=\"2\">Pathway Curation</td><td/></tr><tr><td>Metric/Question type</td><td colspan=\"8\">TEES SVM TEES CNN QA with BERT Support TEES SVM TEES CNN QA with BERT Support</td></tr><tr><td>F1 (Total)</td><td>59.78</td><td>61.38</td><td>62.25</td><td>3625</td><td>56.06</td><td>56.79</td><td>59.19</td><td>3141</td></tr><tr><td>Precision (Total)</td><td>68.80</td><td>69.68</td><td>68.30</td><td>3625</td><td>60.57</td><td>60.52</td><td>58.33</td><td>3141</td></tr><tr><td>Recall (Total)</td><td>52.86</td><td>54.84</td><td>57.19</td><td>3625</td><td>52.18</td><td>53.49</td><td>60.08</td><td>3141</td></tr><tr><td>1. Theme Trigger Pairs</td><td>73.07</td><td>75.23</td><td>79.41</td><td>1301</td><td>69.21</td><td>69.34</td><td>74.50</td><td>866</td></tr><tr><td>2. Event Arguments</td><td>49.17</td><td>46.76</td><td>47.36</td><td>568</td><td>45.84</td><td>46.94</td><td>49.31</td><td>648</td></tr><tr><td>3. Nested Regulation Events</td><td>63.61</td><td>66.40</td><td>71.08</td><td>585</td><td>66.14</td><td>64.43</td><td>71.05</td><td>339</td></tr><tr><td>4. Nested Regulation Causes</td><td>39.71</td><td>44.21</td><td>36.03</td><td>384</td><td>46.19</td><td>43.78</td><td>44.44</td><td>419</td></tr><tr><td>Basic Events (Total)</td><td>63.24</td><td>64.38</td><td>66.53</td><td>2838</td><td>58.21</td><td>57.84</td><td>61.27</td><td>2272</td></tr><tr><td>5. Full Conversion Events</td><td>-</td><td>-</td><td>-</td><td>0</td><td>38.89</td><td>61.11</td><td>56.25</td><td>16</td></tr><tr><td>6. Binding/Pathway Pairs</td><td>55.14</td><td>46.03</td><td>60.18</td><td>126</td><td>56.84</td><td>53.64</td><td>60.59</td><td>138</td></tr><tr><td>7. Transitive Regulations</td><td>42.14</td><td>50.05</td><td>38.64</td><td>660</td><td>48.72</td><td>53.79</td><td>51.14</td><td>715</td></tr><tr><td>Merged Events (Total)</td><td>44.58</td><td>49.38</td><td>42.80</td><td>787</td><td>50.00</td><td>53.93</td><td>53.20</td><td>869</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Results on the standard .a* evaluation of BioNLP shared tasks, comparing our model with four competitors.", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "Error statistics of our question answering model.", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>GENIA11</td><td/><td colspan=\"2\">Pathway Curation</td></tr><tr><td>Error type</td><td># wrong answer</td><td colspan=\"2\">% #questions</td><td>%</td></tr><tr><td>Wrong Trigger Spans</td><td colspan=\"2\">643 46.5</td><td colspan=\"2\">912 67.8</td></tr><tr><td>Wrong Trigger Label</td><td>56</td><td>4.1</td><td>60</td><td>4.4</td></tr><tr><td>Wrong Argument Spans</td><td colspan=\"2\">674 48.9</td><td colspan=\"2\">370 27.5</td></tr><tr><td>Wrong Argument Label</td><td>7</td><td>0.5</td><td>3</td><td>0.2</td></tr><tr><td>False Positives (Total)</td><td colspan=\"2\">1,380 100</td><td colspan=\"2\">1,345 100</td></tr><tr><td>Missing Trigger Spans (Question)</td><td colspan=\"2\">335 31.8</td><td colspan=\"2\">642 40.1</td></tr><tr><td>Missing Trigger Spans (Propagated)</td><td colspan=\"2\">122 11.6</td><td colspan=\"2\">243 15.2</td></tr><tr><td>Wrong Trigger Label</td><td>56</td><td>5.3</td><td>60</td><td>3.7</td></tr><tr><td>Missing Argument Spans (Question)</td><td colspan=\"2\">177 16.9</td><td colspan=\"2\">194 12.1</td></tr><tr><td>Missing Argument Spans (Propagated)</td><td colspan=\"2\">356 33.4</td><td colspan=\"2\">458 28.7</td></tr><tr><td>Wrong Argument Label</td><td>7</td><td>0.7</td><td>3</td><td>0.2</td></tr><tr><td>False Negatives (Total)</td><td colspan=\"2\">1,053 100</td><td colspan=\"2\">1,600 100</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |