{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:13.448261Z"
},
"title": "WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet Lab Protocols",
"authors": [
{
"first": "Jeniya",
"middle": [],
"last": "Tabassum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Sydney",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the results of the wet lab information extraction task at WNUT 2020. This task consisted of two sub tasks: (1) a Named Entity Recognition (NER) task with 13 participants and (2) a Relation Extraction (RE) task with 2 participants. We outline the task, data annotation process, corpus statistics, and provide a high-level overview of the participating systems for each sub task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the results of the wet lab information extraction task at WNUT 2020. This task consisted of two sub tasks: (1) a Named Entity Recognition (NER) task with 13 participants and (2) a Relation Extraction (RE) task with 2 participants. We outline the task, data annotation process, corpus statistics, and provide a high-level overview of the participating systems for each sub task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Wet Lab protocols consist of natural language instructions for carrying out chemistry or biology experiments (for an example, see Figure 1 ). While there have been efforts to develop domain-specific formal languages in order to support robotic automation 1 of experimental procedures (Bates et al., 2017) , the vast majority of knowledge about how to carry out biological experiments or chemical synthesis procedures is only documented in natural language texts, including in scientific papers, electronic lab notebooks, and so on.",
"cite_spans": [
{
"start": 284,
"end": 304,
"text": "(Bates et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent research has begun to apply human language technologies to extract structured representations of procedures from natural language protocols (Kuniyoshi et al., 2020; Vaucher et al., 2020; Kulkarni et al., 2018; Soldatova et al., 2014; Vasilev et al., 2011; Ananthanarayanan and Thies, 2010) . Extraction of named entities and relations from these protocols is an important first step towards machine reading systems that can interpret the meaning of these noisy human generated instructions.",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Kuniyoshi et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 172,
"end": 193,
"text": "Vaucher et al., 2020;",
"ref_id": null
},
{
"start": 194,
"end": 216,
"text": "Kulkarni et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 217,
"end": 240,
"text": "Soldatova et al., 2014;",
"ref_id": "BIBREF40"
},
{
"start": 241,
"end": 262,
"text": "Vasilev et al., 2011;",
"ref_id": "BIBREF42"
},
{
"start": 263,
"end": 296,
"text": "Ananthanarayanan and Thies, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, performance of state-of-the-art tools for extracting named entity and relations from wet lab protocols still lags behind well edited text genres (Jiang et al., 2020) . This motivates the need for continued research, in addition to new datasets and tools adapted to this noisy text genre.",
"cite_spans": [
{
"start": 154,
"end": 174,
"text": "(Jiang et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://autoprotocol.org/ In this overview paper, we describe the development and findings of a shared task on named entity and relation extraction from the noisy wet lab protocols, which was held at the 6-th Workshop on Noisy User-generated Text (WNUT 2020) and attracted 15 participating teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we describe details of the task including training and development datasets in addition to the newly annotated test data. We briefly summarize the systems developed by selected teams, and conclude with results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Wet lab protocols consist of the guidelines from different lab procedures which involve chemicals, drugs, or other materials in liquid solutions or volatile phases. The protocols contain a sequence of steps that are followed to perform a desired task. These protocols also include general guidelines or warnings about the materials being used. The publicly available archive of protocol.io contains such guidelines of wet lab experiments, written by researchers and lab technicians around the world. This protocol archive covers a large spectrum of experimental procedures including neurology, epigenetics, metabolomics, stem cell biology, etc. Figure 1 shows a representative wet lab protocol.",
"cite_spans": [],
"ref_spans": [
{
"start": 645,
"end": 654,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Wet Lab Protocols",
"sec_num": "2"
},
{
"text": "The wet lab protocols, written by users from all over the worlds, contain domain specific jargon as well as numerous nonstandard spellings, abbreviations, unreliable capitalization. Such diverse and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wet Lab Protocols",
"sec_num": "2"
},
{
"text": "Dev Test-18 Test-20 Total #protocols 370 122 123 111 726 #sentences 8444 2839 2813 3562 17658 #tokens 107038 36106 36597 51688 231429 #entities 48197 15972 16490 104654 185313 #relations 32158 10812 11242 70591 noisy style of user created protocols imposed crucial challenges for the entity and relation extraction systems. Hence, off-the-shelf named entity recognition and relation extraction tools, tuned for well edited texts, suffer a severe performance degradation when applied to noisy protocol texts (Kulkarni et al., 2018) .",
"cite_spans": [
{
"start": 531,
"end": 554,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 4,
"end": 234,
"text": "Test-18 Test-20 Total #protocols 370 122 123 111 726 #sentences 8444 2839 2813 3562 17658 #tokens 107038 36106 36597 51688 231429 #entities 48197 15972 16490 104654 185313 #relations 32158 10812 11242 70591",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Train",
"sec_num": null
},
{
"text": "To address these challenges, there has been an increasing body of work on adapting entity and relation extraction recognition tools for noisy wet lab texts (Jiang et al., 2020; Luan et al., 2019; Kulkarni et al., 2018) . However, different research groups have used different evaluation setups (e.g., training / test splits) making it challenging to perform direct comparisons across systems. By organizing a shared evaluation, we hope to help establish a common evaluation methodology (for at least one dataset) and also promote research and development of NLP tools for user generated wet-lab text genres.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Jiang et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 177,
"end": 195,
"text": "Luan et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 196,
"end": 218,
"text": "Kulkarni et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Train",
"sec_num": null
},
{
"text": "Our annotated wet lab corpus includes 726 experimental protocols from the 8-year archive of ProtocolIO (April 2012 to March 2020). These protocols are manually annotated with 15 types of relations among the 18 entity types 2 . The fine-grained entities can be broadly classified into 5 categories: ACTION, CONSTITUENTS, QUAN-TIFIERS, SPECIFIERS, and MODIFIERS. The CONSTITUENTS category includes mentions of REAGENT, LOCATION, DEVICE, MENTION, and SEAL. The QUANTIFIERS category includes mentions of AMOUNT, CONCENTRATION, SIZE, TIME, TEMPERATURE, PH, SPEED, GENERIC-MEASURE and NUMERICAL. The SPECIFIERS category includes mentions of MODIFIER, MEASURE-TYPE and METHOD. The ACTION entity refers to the phrases denoting tasks that are performed to complete a step in the protocol. The mentions of these entities contain different types of relations, including-SITE, SETTING, CREATES, MEASURE-TYPE-LINK, CO-REFERENCE-LINK, MOD-LINK, COUNT, MERONYM, USING, MEASURE, COM-MANDS, OF-TYPE, OR, PRODUCT, and ACTS-ON.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotated Corpus",
"sec_num": "2.1"
},
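To make the annotation schema above easier to scan, the following sketch records the entity categories and relation types exactly as enumerated in this section as plain Python data. It is only an illustrative summary (the paragraph also names a fifth broad category, MODIFIERS, whose MODIFIER member is listed under SPECIFIERS above), not an official release artifact.

```python
# Illustrative summary of the annotation schema enumerated above.
ENTITY_CATEGORIES = {
    "ACTION": ["ACTION"],
    "CONSTITUENTS": ["REAGENT", "LOCATION", "DEVICE", "MENTION", "SEAL"],
    "QUANTIFIERS": ["AMOUNT", "CONCENTRATION", "SIZE", "TIME", "TEMPERATURE",
                    "PH", "SPEED", "GENERIC-MEASURE", "NUMERICAL"],
    "SPECIFIERS": ["MODIFIER", "MEASURE-TYPE", "METHOD"],
}

RELATION_TYPES = [
    "SITE", "SETTING", "CREATES", "MEASURE-TYPE-LINK", "CO-REFERENCE-LINK",
    "MOD-LINK", "COUNT", "MERONYM", "USING", "MEASURE", "COMMANDS",
    "OF-TYPE", "OR", "PRODUCT", "ACTS-ON",
]

# Sanity check against the counts stated in the text: 18 entity types, 15 relation types.
assert sum(len(members) for members in ENTITY_CATEGORIES.values()) == 18
assert len(RELATION_TYPES) == 15
```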
{
"text": "The training and development dataset for our task was taken from previous work on wet lab corpus (Kulkarni et al., 2018 ) that consists of from the 623 protocols. We excluded the eight duplicate protocols from this dataset and then re-annotated the 615 unique protocols in BRAT (Stenetorp et al., 2012) . This re-annotation process aided us to add the previously missing 20,613 missing entities along with 10,824 previously missing relations and also to facilitate removing the inconsistent annotations. The updated corpus statics is provided in Table 1 . This full dataset (Train, Dev, Test-18) was provided to the participants at the beginning of the task and they were allowed to use any of part of this dataset to train their final model.",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Kulkarni et al., 2018",
"ref_id": "BIBREF20"
},
{
"start": 278,
"end": 302,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 546,
"end": 553,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Train and Development data",
"sec_num": "2.1.1"
},
{
"text": "For this shared task we added 111 new protocols (Test-20) which were used to evaluate the submitted models. Test-20 dataset consists of 100 randomly sampled general protocols and 11 manually selected covid-related protocols from ProtocolIO (https://www.protocols.io/). This 111 protocols were double annotated by three annotators using a web-based annotation tool, BRAT (Stenetorp et al., 2012) . Figure 1 presents a screenshot of our annotation interface. We also provided the annotators a set of guidelines containing the entity and relation type definitions. The annotation task was split in multiple iterations. In each iteration, an annotator was given a set of 10 protocols. An adjudicator then went through all the entity and relation annotations in these protocols and resolved the disagreements. Before adjudication, the interannotator agreement is 0.75 , measured by Cohen's Kappa (Cohen, 1960) .",
"cite_spans": [
{
"start": 370,
"end": 394,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF41"
},
{
"start": 891,
"end": 904,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "2.1.2"
},
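As a rough illustration of the agreement measure mentioned above, here is a minimal sketch of Cohen's Kappa computed over token-level entity labels from two annotators using scikit-learn. The example labels are hypothetical, and the organizers' actual agreement computation may differ in granularity and adjudication details.

```python
# Minimal sketch: Cohen's Kappa over token-level entity labels from two annotators.
# The example labels are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["ACTION", "O", "REAGENT", "REAGENT", "O", "TIME"]
annotator_b = ["ACTION", "O", "REAGENT", "O", "O", "TIME"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's Kappa: {kappa:.2f}")
```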
{
"text": "We provided the participants baseline model for both of the subtasks. The baseline model for named entity recognition task utilized a feature-based CRF tagger developed using the CRF-Suite 3 with a standard set of contextual, lexical and gazetteer features. The baseline relation extraction system employed a feature-based logistic regression model developed using the Scikit-Learn 4 with a standard set of contextual, lexical and gazetteer features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "2.2"
},
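For concreteness, the sketch below mirrors the two baselines described above under the assumption that the CRF tagger is accessed through the sklearn-crfsuite wrapper and the relation classifier through scikit-learn. The feature functions, toy data, and label names are illustrative placeholders, not the released baseline code.

```python
# Sketch of the two baselines described above (not the released baseline code).
# NER: feature-based CRF via the sklearn-crfsuite wrapper around CRFSuite.
# RE: feature-based logistic regression via scikit-learn.
import sklearn_crfsuite
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(sent, i):
    """Contextual and lexical features for token i; gazetteer lookups
    (e.g., a hypothetical in_reagent_gazetteer flag) would be added similarly."""
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "word.isdigit": word.isdigit(),
        "word.istitle": word.istitle(),
        "suffix3": word[-3:],
        "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data (placeholders).
sents = [["Centrifuge", "the", "sample", "for", "5", "minutes"]]
tags = [["B-Action", "O", "B-Reagent", "O", "B-Time", "I-Time"]]

X_ner = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_ner, tags)
print(crf.predict(X_ner))

# Relation baseline: classify candidate entity pairs from simple pair features.
pair_feats = [
    {"head": "centrifuge", "tail": "sample", "token_distance": 2},
    {"head": "centrifuge", "tail": "minutes", "token_distance": 4},
]
pair_labels = ["Acts-on", "Setting"]
re_clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
re_clf.fit(pair_feats, pair_labels)
print(re_clf.predict([{"head": "centrifuge", "tail": "sample", "token_distance": 2}]))
```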
{
"text": "Thirteen teams (Table 3 ) participated in the named entity recognition sub-task. A wide variety of approaches were taken to tackle this task. Table 2 summarizes the word representations, features and the machine learning approaches taken by each team. Majority of the teams (11 out of 13) utilized contextual word representations. Four teams combined the contextual word representations with global word vectors. Only two teams did not use any type of word representations and relied entirely on hand-engineered features and a CRF taggers. The best performing teams utilized a combination of contextual word representation with ensemble of learning. Below we provide a brief description of the approach taken by each team.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "(Table 3",
"ref_id": "TABREF4"
},
{
"start": 142,
"end": 149,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
{
"text": "B-NLP (Lange et al., 2020) modeled the NER as a parsing task and uses a biaffine classifier. The second classifier of their system used the predictions from the first classifier and then updated the labels of the predicted entities. Both of the classifiers utilized word2vec (Mikolov et al., 2013) and SciBERT word representations.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Lange et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 275,
"end": 297,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
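As a rough sketch of the "NER as parsing" idea behind B-NLP's system, the following minimal PyTorch module scores every (start, end) token pair with a biaffine form over contextual token representations. It omits the second relabeling classifier and is not the team's implementation; dimensions and label counts are assumptions.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Minimal biaffine scorer: assigns a score to every (start, end) token pair
    for each entity label, the core of the parsing-style NER formulation."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.start_proj = nn.Linear(hidden_dim, hidden_dim)
        self.end_proj = nn.Linear(hidden_dim, hidden_dim)
        # U has an extra row/column so bias terms are folded into the bilinear form.
        self.U = nn.Parameter(torch.empty(num_labels, hidden_dim + 1, hidden_dim + 1))
        nn.init.xavier_uniform_(self.U)

    def forward(self, token_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (batch, seq_len, hidden_dim), e.g. SciBERT token outputs.
        h_s = torch.relu(self.start_proj(token_reprs))
        h_e = torch.relu(self.end_proj(token_reprs))
        ones = torch.ones(*h_s.shape[:-1], 1, device=h_s.device)
        h_s = torch.cat([h_s, ones], dim=-1)  # (B, T, H+1)
        h_e = torch.cat([h_e, ones], dim=-1)  # (B, T, H+1)
        # scores[b, l, i, j] = h_s[b, i] @ U[l] @ h_e[b, j]
        return torch.einsum("bih,lhk,bjk->blij", h_s, self.U, h_e)

# Toy usage with random "encoder outputs" standing in for SciBERT representations.
scorer = BiaffineSpanScorer(hidden_dim=768, num_labels=19)  # 18 entity types + "no entity"
scores = scorer(torch.randn(2, 30, 768))
print(scores.shape)  # torch.Size([2, 19, 30, 30])
```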
{
"text": "BIO-BIO (Kecheng et al., 2020) implemented a BiLSTM-CRF tagger that utilized BioBERT (Lee et al., 2020) word representation.",
"cite_spans": [
{
"start": 8,
"end": 30,
"text": "(Kecheng et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 85,
"end": 103,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
{
"text": "BiTeM (Knafou et al., 2020 ) developed a voting based ensemble classifier containing 14 transformer models, and utilized 7 different word representations including BERT (Devlin et al., 2019) , ClinicalBERT (Huang et al., 2019) , PubMedBERT base (Gu et al., 2020) , BioBERT (Lee et al., 2020) , RoBERTa (Liu et al., 2019) , Biomed-RoBERTa base (Gururangan et al., 2020) and XLNet (Yang et al., 2019) .",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Knafou et al., 2020",
"ref_id": "BIBREF19"
},
{
"start": 169,
"end": 190,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 206,
"end": 226,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 245,
"end": 262,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 273,
"end": 291,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 302,
"end": 320,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 379,
"end": 398,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
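The core of a voting-based ensemble such as BiTeM's can be sketched as token-level majority voting over the label sequences produced by the individual models. The snippet below glosses over subword alignment, tie-breaking, and BIO validity, and the inputs are hypothetical.

```python
# Sketch of token-level majority voting across several taggers' predictions.
# A real ensemble (e.g., over 14 transformer models) must also align subword
# tokenizations and repair invalid BIO transitions, which is omitted here.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of label sequences, one per model, all over
    the same tokens. Returns the most frequent label at each token position."""
    voted = []
    for labels_at_token in zip(*predictions_per_model):
        most_common_label, _ = Counter(labels_at_token).most_common(1)[0]
        voted.append(most_common_label)
    return voted

preds = [
    ["B-Action", "O", "B-Reagent"],            # model 1
    ["B-Action", "O", "O"],                    # model 2
    ["B-Action", "B-Modifier", "B-Reagent"],   # model 3
]
print(majority_vote(preds))  # ['B-Action', 'O', 'B-Reagent']
```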
{
"text": "DSC-IITISM (Gupta et al., 2020 ) developed a BiLSTM-CRF model that utilized a concatenation of CamemBERT base (Martin et al., 2020) , Flair(PubMed) (Akbik et al., 2018) , and GloVe(en) (Pennington et al., 2014) word representations.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "(Gupta et al., 2020",
"ref_id": "BIBREF9"
},
{
"start": 110,
"end": 131,
"text": "(Martin et al., 2020)",
"ref_id": null
},
{
"start": 148,
"end": 168,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 185,
"end": 210,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
{
"text": "Fancy Man (Zeng et al., 2020) fine-tuned the BERT base (Devlin et al., 2019) model with an additional linear layer.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Zeng et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 55,
"end": 76,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
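Fine-tuning BERT with an additional linear layer for token labeling corresponds to the standard token-classification head in the HuggingFace transformers library. The sketch below only shows the model setup with an assumed BIO label count and an untrained head; it is not Fancy Man's actual code.

```python
# Sketch: BERT with a linear token-classification head, the standard setup behind
# "fine-tune BERT with an additional linear layer". Label count assumes BIO tags
# over the 18 entity types (an assumption, not the team's configuration).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

num_labels = 2 * 18 + 1  # B-/I- for each of 18 entity types, plus O
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=num_labels
)

enc = tokenizer("Centrifuge the sample for 5 minutes .", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits      # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)      # per-token label ids (random until fine-tuned)
print(pred_ids)
```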
{
"text": "IBS (Sikdar et al., 2020) utilized an ensemble classifier with 4 feature based on CRF taggers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
{
"text": "Kabir (Khan, 2020) employed an RNN-CRF model that utilized concatenation of Flair(PubMed) (Akbik et al., 2018) and ELMo(PubMed) (Peters et al., 2018) ",
"cite_spans": [
{
"start": 6,
"end": 18,
"text": "(Khan, 2020)",
"ref_id": null
},
{
"start": 90,
"end": 110,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 128,
"end": 149,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Systems",
"sec_num": "2.3"
},
{
"text": "Two teams (Table 3 ) participated in the relation extraction sub-task. Both of the teams followed fine-tuning of contextual word representation and did not use any hand-crafted features. Table 5 summarizes the word representations and the machine learning approaches followed by each team. Below we provide a brief description of the model developed by taken by each team.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 18,
"text": "(Table 3",
"ref_id": "TABREF4"
},
{
"start": 187,
"end": 194,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "RE Systems",
"sec_num": "2.4"
},
{
"text": "Big Green (Miller and Vosoughi, 2020) considered the protocols as a knowledge graph, in which relationships between entities are edges in the knowledge graph. They trained a BERT (Devlin et al., 2019) based system to classify edge presence and type between two entities, given entity text, label, and local context. mgsohrab (Sohrab et al., 2020) utilized Pub-MedBERT (Gu et al., 2020) as input to the relation extraction model that enumerates all possible pairs of arguments using deep exhaustive span representation approach.",
"cite_spans": [
{
"start": 10,
"end": 37,
"text": "(Miller and Vosoughi, 2020)",
"ref_id": "BIBREF29"
},
{
"start": 179,
"end": 200,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 368,
"end": 385,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RE Systems",
"sec_num": "2.4"
},
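Both RE systems share the same overall recipe: enumerate candidate argument pairs over the (gold) entities and classify each pair with a fine-tuned contextual encoder. The sketch below illustrates that enumeration with a simple entity-marking helper; the marker format, spans, and helper names are assumptions, not either team's implementation.

```python
# Sketch of relation extraction as classification over candidate entity pairs:
# mark the two entities in the sentence and classify the marked pair. The marker
# scheme and example data are illustrative assumptions.
from itertools import permutations

def mark_pair(tokens, head_span, tail_span):
    """Insert [E1]/[E2] markers around two entity spans given as
    (start, end) token offsets with exclusive end."""
    out = []
    for i, tok in enumerate(tokens):
        if i == head_span[0]:
            out.append("[E1]")
        if i == tail_span[0]:
            out.append("[E2]")
        out.append(tok)
        if i == head_span[1] - 1:
            out.append("[/E1]")
        if i == tail_span[1] - 1:
            out.append("[/E2]")
    return " ".join(out)

tokens = ["Centrifuge", "the", "sample", "for", "5", "minutes"]
gold_entities = {"Action": (0, 1), "Reagent": (2, 3), "Time": (4, 6)}

# Enumerate all ordered pairs of gold entities as candidate relation arguments;
# each marked sentence would then be scored by a fine-tuned BERT/PubMedBERT classifier.
for (label1, span1), (label2, span2) in permutations(gold_entities.items(), 2):
    print(label1, "->", label2, ":", mark_pair(tokens, span1, span2))
```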
{
"text": "In this section, we present the performance of each participating systems along with a description of the errors made by the model types. Table 4 shows the comparison of precision (P), recall (R) and F 1 score among different teams, evaluated on the Test-20 corpus. Here the exact match refers to the cases where a predicted entity Table 4 : Results on extraction of 18 Named Entity types from the Test-20 dataset. Exact Match reports the performance when the predicted entity type is same as the gold entity and the predicted entity boundary is the exact same as the gold entity boundary. Partial Match reports the performance when the predicted entity type is same as the gold entity and the predicted entity boundary has some overlap with gold entity boundary.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 4",
"ref_id": null
},
{
"start": 332,
"end": 339,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
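The two matching criteria described above can be stated compactly over (type, start, end) spans, as in the sketch below. The official scorer may differ in details such as how multiple overlapping predictions are counted; this only illustrates the two definitions.

```python
# Sketch of the exact/partial match criteria over (type, start, end) entity spans.
def exact_match(pred, gold):
    return pred == gold  # same type and identical boundaries

def partial_match(pred, gold):
    p_type, p_start, p_end = pred
    g_type, g_start, g_end = gold
    overlaps = max(p_start, g_start) < min(p_end, g_end)
    return p_type == g_type and overlaps

gold = ("Reagent", 10, 13)
print(exact_match(("Reagent", 10, 13), gold))    # True
print(partial_match(("Reagent", 11, 14), gold))  # True (same type, boundaries overlap)
print(partial_match(("Time", 11, 14), gold))     # False (type mismatch)
```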
{
"text": "is considered correct, only if the predicted type and boundary is exactly same as the gold entity. Whereas, in partial match, a predicted entity is considered correct if the predicted type is the same as the gold entity type and predicted entity boundary has some overlap with the gold entity boundary. We observe that ensemble models with contextual word representations outperforms all other approaches by achieving 77.99 F 1 score in exact match (Team:BiTeM) and 81.75 F 1 score in partial match (Team:PublishInCovid19). Fine tuning of contextual word representation systems demonstrated quite competent performance with SciBERT-fine tuning being the best (Team:mgsohrab).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Errors Analysis",
"sec_num": "3.1"
},
{
"text": "In Figure 2 , we present an error analysis. Among the best performing models, the ensemble of transformer (Team:BiTeM) had significantly lower amount of 'over prediction' error (i.e., tagging a non-entity token as entity), compared to the system with ensemble of BiLSTM-CRFs (Team:PublishInCovid19). Table 6 shows the comparison of precision (P), recall (R) and F 1 score among the participant teams, evaluated on the Test-20 corpus. Both of the teams utilized the gold entities and then predict the relations among these entities by fine-tuning con-textual word representations. We observed that fine-tuning of domain related PubMedBERT, provides significantly higher performance compared to the general domain BERT. While examining the relation predictions from both of these systems, we found that model with fine-tuned PubMed-BERT (Team:mgsohrab) resulted in significantly less amount of errors in every category ( Figure 3 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 300,
"end": 307,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 919,
"end": 927,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "NER Errors Analysis",
"sec_num": "3.1"
},
{
"text": "The task of information extraction from wet lab protocols is closely related to the event trigger extraction task. The event trigger task has been studied extensively, mostly using ACE data (Doddington et al., 2004) and the BioNLP data (N\u00e9dellec et al., 2013) . Broadly, there are two ways to classify various event trigger detection models: (1) Rule-based methods using pattern matching and regular expression to identify triggers (Vlachos et al., 2009) and (2) Machine Learning based methods focusing on generation of high-end hand-crafted features to be used in classification models like SVMs or maxent classifiers . Kernel based learning methods have also been utilized with embedded features from the syntactic and semantic contexts to identify and extract the biomed-ical event entities (Zhou et al., 2014) . In order to counteract highly sparse representations, different neural models were proposed. These neural models utilized the dependency based word embeddings with feed forward neural networks (Wang et al., 2016b) , CNNs (Wang et al., 2016a) and Bidirectional RNNs (Rahul et al., 2017) . Previous work has experimented on datasets of well-edited biomedical publications with a small number of entity types. For example, the JNLPBA corpus (Kim et al., 2004) with 5 entity types (CELL LINE, CELL TYPE, DNA, RNA, and PROTEIN) and the BC2GM corpus (Hirschman et al., 2005 ) with a single entity class for genes/proteins. In contrast, our dataset addresses the challenges of recognizing 18 finegrained named entities along with 15 types of relations from the user-created wet lab protocols.",
"cite_spans": [
{
"start": 190,
"end": 215,
"text": "(Doddington et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 236,
"end": 259,
"text": "(N\u00e9dellec et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 432,
"end": 454,
"text": "(Vlachos et al., 2009)",
"ref_id": "BIBREF44"
},
{
"start": 794,
"end": 813,
"text": "(Zhou et al., 2014)",
"ref_id": "BIBREF49"
},
{
"start": 1009,
"end": 1029,
"text": "(Wang et al., 2016b)",
"ref_id": "BIBREF46"
},
{
"start": 1037,
"end": 1057,
"text": "(Wang et al., 2016a)",
"ref_id": "BIBREF45"
},
{
"start": 1062,
"end": 1101,
"text": "Bidirectional RNNs (Rahul et al., 2017)",
"ref_id": null
},
{
"start": 1254,
"end": 1272,
"text": "(Kim et al., 2004)",
"ref_id": "BIBREF18"
},
{
"start": 1360,
"end": 1383,
"text": "(Hirschman et al., 2005",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this paper, we presented a shared task for consisting of two sub-tasks: named entity recognition and relation extraction from the wet lab protocols. We described the task setup and datasets details, and also outlined the approach taken by the participating systems. The shared task included larger and improvised dataset compared to the prior literature (Kulkarni et al., 2018) . This improvised dataset enables us to draw stronger conclusions about the true potential of different approaches. It also facilitates us in analyzing the results of the participating systems, which aids us in suggesting potential research directions for both future shared tasks and noisy text processing in user generated lab protocols.",
"cite_spans": [
{
"start": 357,
"end": 380,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "5"
},
{
"text": "Our annotated corpus is available at: https:// github.com/jeniyat/WNUT_2020_NER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.chokkan.org/software/ crfsuite/ 4 https://scikit-learn.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Ethan Lee and Jaewook Lee for helping with data annotation. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001119C0108. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "KaushikAcharya at WNUT 2020 Shared Task-1: Conditional Random Field(CRF) based Named Entity Recognition(NER) for Wet Lab Protocols",
"authors": [],
"year": null,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaushik Acharya. 2020. KaushikAcharya at WNUT 2020 Shared Task-1: Conditional Random Field(CRF) based Named Entity Recognition(NER) for Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual String Embeddings for Sequence Labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In COLING 2018, 27th International Con- ference on Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Biocoder: A programming language for standardizing and automating biology protocols",
"authors": [
{
"first": "Vaishnavi",
"middle": [],
"last": "Ananthanarayanan",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Thies",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of biological engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaishnavi Ananthanarayanan and William Thies. 2010. Biocoder: A programming language for standardiz- ing and automating biology protocols. Journal of biological engineering.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wet lab accelerator: a web-based application democratizing laboratory automation for synthetic biology",
"authors": [
{
"first": "Maxwell",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Berliner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lachoff",
"suffix": ""
},
{
"first": "Eli",
"middle": [
"S"
],
"last": "Paul R Jaschke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Groban",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell Bates, Aaron J Berliner, Joe Lachoff, Paul R Jaschke, and Eli S Groban. 2017. Wet lab acceler- ator: a web-based application democratizing labora- tory automation for synthetic biology. ACS synthetic biology.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SciB-ERT: Pretrained Contextualized Embeddings for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. SciB- ERT: Pretrained Contextualized Embeddings for Sci- entific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing (EMNLP).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Coefficient of Agreement for Nominal Scales",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The automatic content extraction (ace) program-tasks, data, and evaluation",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "George R Doddington",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lance",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"M"
],
"last": "Strassel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George R Doddington, Alexis Mitchell, Mark A Przy- bocki, Lance A Ramshaw, Stephanie Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.15779"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- Specific Language Model Pretraining for Biomed- ical Natural Language Processing. arXiv preprint arXiv:2007.15779.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "DSC-IITISM at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Saket",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Aman",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy Usergenerated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saket Gupta, Aman Sinha, and Rohit Agarwal. 2020. DSC-IITISM at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol. In Pro- ceedings of EMNLP 2020 Workshop on Noisy User- generated Text (WNUT).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10964"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv preprint arXiv:2004.10964.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of biocreative: critical assessment of information extraction for biology",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Blaschke",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Valencia",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Hirschman, Alexander Yeh, Christian Blaschke, and Alfonso Valencia. 2005. Overview of biocreative: critical assessment of information extraction for biology.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jaan",
"middle": [],
"last": "Altosaar",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.05342"
]
},
"num": null,
"urls": [],
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SudeshnaTCS at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Sudeshna",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudeshna Jana. 2020. SudeshnaTCS at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol. In Proceedings of EMNLP 2020 Work- shop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generalizing Natural Language Analysis through Span-relation Representations",
"authors": [
{
"first": "Zhengbao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengbao Jiang, Wei Xu, Jun Araki, and Graham Neu- big. 2020. Generalizing Natural Language Analysis through Span-relation Representations. In Proceed- ings of the 2020 Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "IITKGP at WNUT 2020 Shared Task-1: Domain specific BERT representation for Named Entity Recognition of lab protocol",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Kaushal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Vaidhya",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayush Kaushal and Tejas Vaidhya. 2020. IITKGP at WNUT 2020 Shared Task-1: Domain specific BERT representation for Named Entity Recognition of lab protocol. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BIO-BIO at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Zhan",
"middle": [],
"last": "Kecheng",
"suffix": ""
},
{
"first": "Xiong",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Liqing",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhan Kecheng, Xiong Ying, Peng Hao, Yao, and LiQing Yao. 2020. BIO-BIO at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Pro- tocol.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "2020. kabir at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Kabir",
"middle": [],
"last": "Khan",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kabir Khan. 2020. kabir at WNUT 2020 Shared Task- 1: Name Entity Extraction from Wet Lab Protocol. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Introduction to the bio-entity recognition task at jnlpba",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Tateisi",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the international joint workshop on natural language processing in biomedicine and its applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the bio-entity recognition task at jnlpba. In Pro- ceedings of the international joint workshop on nat- ural language processing in biomedicine and its ap- plications.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "BiTeM at WNUT 2020 Shared Task-1: Named Entity Recognition over Wet Lab Protocols using an Ensemble of Contextual Language Models",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Knafou",
"suffix": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Copara",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Teodoro",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ruch",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julien Knafou, Nona Naderi, Jenny Copara, Dou- glas Teodoro, and Patrick Ruch. 2020. BiTeM at WNUT 2020 Shared Task-1: Named Entity Recog- nition over Wet Lab Protocols using an Ensemble of Contextual Language Models. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols",
"authors": [
{
"first": "Chaitanya",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Machiraju",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An Annotated Corpus for Ma- chine Reading of Instructions in Wet Lab Protocols. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annotating and Extracting Synthesis Process of All-Solid-State Batteries from Scientific Literature",
"authors": [
{
"first": "Fusataka",
"middle": [],
"last": "Kuniyoshi",
"suffix": ""
},
{
"first": "Kohei",
"middle": [],
"last": "Makino",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Ozawa",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.07339"
]
},
"num": null,
"urls": [],
"raw_text": "Fusataka Kuniyoshi, Kohei Makino, Jun Ozawa, and Makoto Miwa. 2020. Annotating and Extract- ing Synthesis Process of All-Solid-State Batter- ies from Scientific Literature. arXiv preprint arXiv:2002.07339.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "B-NLP at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Lange",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Lange, Xiang Dai, Heike Adel, and Jannik Str\u00f6tgen. 2020. B-NLP at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Pro- tocol. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, and V Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A general framework for information extraction using dynamic span graphs",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A gen- eral framework for information extraction using dy- namic span graphs. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "CamemBERT: a Tasty French Language Model",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a Tasty French Language Model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Big Green at WNUT 2020 Shared Task-1: Relation Extraction as Contextualized Sequence Classification",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy Usergenerated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Miller and Soroush Vosoughi. 2020. Big Green at WNUT 2020 Shared Task-1: Relation Extraction as Contextualized Sequence Classification. In Pro- ceedings of EMNLP 2020 Workshop on Noisy User- generated Text (WNUT).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Overview of bionlp shared task 2013",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "N\u00e9dellec",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bossy",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jung-Jae",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the BioNLP Shared Task 2013 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire N\u00e9dellec, Robert Bossy, Jin-Dong Kim, Jung- Jae Kim, Tomoko Ohta, Sampo Pyysalo, and Pierre Zweigenbaum. 2013. Overview of bionlp shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop. Association for Computational Linguistics Sofia, Bulgaria.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Comparisons of sequence labeling algorithms and extensions",
"authors": [
{
"first": "Nam",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yunsong",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nam Nguyen and Yunsong Guo. 2007. Comparisons of sequence labeling algorithms and extensions. In Proceedings of the 24th international conference on Machine learning.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Deep Contextualized Word Representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the of Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. In Proceedings of the of Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "2020. mahab at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Mahdi",
"suffix": ""
},
{
"first": "Abdollah",
"middle": [],
"last": "Pour",
"suffix": ""
},
{
"first": "Parsa",
"middle": [],
"last": "Farinnia",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy Usergenerated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Mahdi Abdollah Pour and Parsa Farinnia. 2020. mahab at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol. In Pro- ceedings of EMNLP 2020 Workshop on Noisy User- generated Text (WNUT).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Event extraction across multiple levels of biological organization",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Han-Cheol",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2012,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sampo Pyysalo, Tomoko Ohta, Makoto Miwa, Han- Cheol Cho, Jun'ichi Tsujii, and Sophia Ananiadou. 2012. Event extraction across multiple levels of bio- logical organization. Bioinformatics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Biomedical event trigger identification using bidirectional recurrent neural network based models",
"authors": [
{
"first": "Patchigolla",
"middle": [
"VSS"
],
"last": "Rahul",
"suffix": ""
},
{
"first": "Sunil",
"middle": [
"Kumar"
],
"last": "Sahu",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Anand",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.09516"
]
},
"num": null,
"urls": [],
"raw_text": "Patchigolla VSS Rahul, Sunil Kumar Sahu, and Ashish Anand. 2017. Biomedical event trigger identifi- cation using bidirectional recurrent neural network based models. arXiv preprint arXiv:1705.09516.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "IBS at WNUT 2020 Shared Task-1: Name Entity Extraction from Wet Lab Protocol",
"authors": [
{
"first": "Utpal Kumar",
"middle": [],
"last": "Sikdar",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Gamback",
"suffix": ""
},
{
"first": "M Krishana",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Utpal Kumar Sikdar, Bjorn Gamback, and M Krishana Kumar. 2020. IBS at WNUT 2020 Shared Task- 1: Name Entity Extraction from Wet Lab Protocol. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Pub-lishInCovid19 at WNUT 2020 Shared Task-1: Entity Recognition in Wet Lab Protocols using Structured Learning Ensemble and Contextualised Embeddings",
"authors": [
{
"first": "Janvijay",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Wadhawan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janvijay Singh and Anshul Wadhawan. 2020. Pub- lishInCovid19 at WNUT 2020 Shared Task-1: En- tity Recognition in Wet Lab Protocols using Struc- tured Learning Ensemble and Contextualised Em- beddings. In Proceedings of EMNLP 2020 Work- shop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "2020. mgsohrab at WNUT 2020 Shared Task-1: Neural Exhaustive Approach for Entity and Relation Recognition Over Wet Lab Protocols",
"authors": [
{
"first": "Mohammad Golam",
"middle": [],
"last": "Sohrab",
"suffix": ""
},
{
"first": "Khoa",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Golam Sohrab, Khoa Duong, Makoto Miwa, and Hiroya Takamura. 2020. mgsohrab at WNUT 2020 Shared Task-1: Neural Exhaustive Ap- proach for Entity and Relation Recognition Over Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "EXACT2: the semantics of biomedical protocols",
"authors": [
{
"first": "Larisa",
"middle": [
"N"
],
"last": "Soldatova",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Nadis",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"D"
],
"last": "King",
"suffix": ""
},
{
"first": "Piyali",
"middle": [
"S"
],
"last": "Basu",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Haddi",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Bauml\u00e9",
"suffix": ""
},
{
"first": "Nigel",
"middle": [
"J"
],
"last": "Saunders",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Marwan",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"B"
],
"last": "Rudkin",
"suffix": ""
}
],
"year": 2014,
"venue": "BMC bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Larisa N Soldatova, Daniel Nadis, Ross D King, Piyali S Basu, Emma Haddi, V\u00e9ronique Bauml\u00e9, Nigel J Saunders, Wolfgang Marwan, and Brian B Rudkin. 2014. EXACT2: the semantics of biomedi- cal protocols. BMC bioinformatics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "brat: a Web-based Tool for NLP-Assisted Text Annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. brat: a Web-based Tool for NLP-Assisted Text Annotation. In Proceedings of the Demonstra- tions at the 13th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A software stack for specification and robotic execution of protocols for synthetic biological engineering",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Vasilev",
"suffix": ""
},
{
"first": "Chenkai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Traci",
"middle": [],
"last": "Haddock",
"suffix": ""
},
{
"first": "Swapnil",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "Fusun",
"middle": [],
"last": "Yaman",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Beal",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Babb",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Densmore",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Vasilev, Chenkai Liu, Traci Haddock, Swap- nil Bhatia, Aaron Adler, Fusun Yaman, Jacob Beal, Jonathan Babb, Ron Weiss, Douglas Densmore, et al. 2011. A software stack for specification and robotic execution of protocols for synthetic biological en- gineering. In 3rd international workshop on bio- design automation.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Biomedical event extraction without training data",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Buttery",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [
"O"
],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos, Paula Buttery, Diarmuid O S\u00e9aghdha, and Ted Briscoe. 2009. Biomedical event extraction without training data. In Pro- ceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Biomedical event trigger detection based on convolutional neural network",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Honglei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Data Mining and Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Wang, Honglei Li, Yuan An, Hongfei Lin, and Zhi- hao Yang. 2016a. Biomedical event trigger detec- tion based on convolutional neural network. Interna- tional Journal of Data Mining and Bioinformatics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Biomedical event trigger detection by dependencybased word embedding",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianhai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuanyuan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "BMC medical genomics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Wang, Jianhai Zhang, Yuan An, Hongfei Lin, Zhi- hao Yang, Yijia Zhang, and Yuanyuan Sun. 2016b. Biomedical event trigger detection by dependency- based word embedding. BMC medical genomics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Fancy Man Launches Zippo at WNUT 2020 Shared Task-1: A Bert Case Model for Wet Lab Entity Extraction",
"authors": [
{
"first": "Qingcheng",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Haoding",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Xiaoyang",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Zhexin",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingcheng Zeng, Haoding Meng, Xiaoyang Fang, and Zhexin Liang. 2020. Fancy Man Launches Zippo at WNUT 2020 Shared Task-1: A Bert Case Model for Wet Lab Entity Extraction. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Event trigger identification for biomedical events extraction using domain knowledge",
"authors": [
{
"first": "Deyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Dayou",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2014,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyu Zhou, Dayou Zhong, and Yulan He. 2014. Event trigger identification for biomedical events extrac- tion using domain knowledge. Bioinformatics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Examples of named entities and relations in a wet lab protocol",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "word representations. KaushikAcharya (Acharya, 2020) employed a linear CRF with hand-crafted features. mahab (Pour and Farinnia, 2020) fine-tuned the BERT base (Devlin et al., 2019) sequence tagging model. mgsohrab (Sohrab et al., 2020) fine-tuned the SciBERT (Beltagy et al., 2019) model. PublishInCovid19 (Singh and Wadhawan, 2020) employed a structured ensemble classifier(Nguyen and Guo, 2007) consisting of 11 BiLSTM-CRF taggers, that utilized the PubMedBERT(Gu et al., 2020) word representation.SudeshnaTCS (Jana, 2020) fine-tuned XLNet(Yang et al., 2019) model. IITKGP(Kaushal and Vaidhya, 2020) finetuned the Bio-BERT(Lee et al., 2020) model.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Summary of incorrectly classified entity tokens by each submitted systems.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Summary of incorrectly predicted relations in each submitted systems.",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Statistics of the Wet Lab Protocol corpus.",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Team Name</td><td>Affiliation</td></tr><tr><td>B-NLP</td><td>Bosch Center for Artificial</td></tr><tr><td/><td>Intelligence</td></tr><tr><td>Big Green</td><td>Dartmouth College</td></tr><tr><td>BIO-BIO</td><td>Harbin Institute of technology,</td></tr><tr><td/><td>Shenzhen</td></tr><tr><td/><td>University of Applied Sciences and</td></tr><tr><td>BiTeM</td><td>Arts of Western Switzerland, Swiss</td></tr><tr><td/><td>Institute of Bioinformatics,</td></tr><tr><td/><td>University of Geneva</td></tr><tr><td>DSC-IITISM</td><td>IIT(ISM) Dhanbad</td></tr><tr><td/><td>University of Manchester, Xian Jiao-</td></tr><tr><td>Fancy Man</td><td>tong University, East China Univer-</td></tr><tr><td/><td>sity of Science and Technology,</td></tr><tr><td/><td>Zhejiang University</td></tr><tr><td>IBS</td><td>IBS Software Pvt. Ltd, NTNU</td></tr><tr><td>IITKGP</td><td>IIT, Kharagpur</td></tr><tr><td>Kabir</td><td>Microsoft</td></tr><tr><td colspan=\"2\">KaushikAcharya Philips</td></tr><tr><td>mahab</td><td>Amirkabir University of Technology</td></tr><tr><td>mgsohrab</td><td>National Institute of Advanced Industrial Science and Technology</td></tr><tr><td colspan=\"2\">PublishInCovid19 Flipkart Private Limited</td></tr><tr><td>SudeshnaTCS</td><td>TCS Research &amp; Innovation Lab</td></tr></table>",
"type_str": "table",
"text": "Summary of NER systems designed by each team.",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Team Name and affiliation of the participant.",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Summary of relation extraction systems designed by each team.",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Results on extraction of 15 relation types from the Test-20 dataset.",
"num": null
}
}
}
}