|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T06:34:09.754189Z" |
|
}, |
|
"title": "PublishInCovid19 at WNUT 2020 Shared Task-1: Entity Recognition in Wet Lab Protocols using Structured Learning Ensemble and Contextualised Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Janvijay", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anshul", |
|
"middle": [], |
|
"last": "Wadhawan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we describe the approach that we employed to address the task of Entity Recognition over Wet Lab Protocols-a shared task in EMNLP WNUT-2020 Workshop. Our approach is composed of two phases. In the first phase, we experiment with various contextualised word embeddings (like Flair, BERTbased) and a BiLSTM-CRF model to arrive at the best-performing architecture. In the second phase, we create an ensemble composed of eleven BiLSTM-CRF models. The individual models are trained on random trainvalidation splits of the complete dataset. Here, we also experiment with different output merging schemes, including Majority Voting and Structured Learning Ensembling (SLE). Our final submission achieved a micro F1-score of 0.8175 and 0.7757 for the partial and exact match of the entity spans, respectively. We were ranked first and second, in terms of partial and exact match, respectively.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we describe the approach that we employed to address the task of Entity Recognition over Wet Lab Protocols-a shared task in EMNLP WNUT-2020 Workshop. Our approach is composed of two phases. In the first phase, we experiment with various contextualised word embeddings (like Flair, BERTbased) and a BiLSTM-CRF model to arrive at the best-performing architecture. In the second phase, we create an ensemble composed of eleven BiLSTM-CRF models. The individual models are trained on random trainvalidation splits of the complete dataset. Here, we also experiment with different output merging schemes, including Majority Voting and Structured Learning Ensembling (SLE). Our final submission achieved a micro F1-score of 0.8175 and 0.7757 for the partial and exact match of the entity spans, respectively. We were ranked first and second, in terms of partial and exact match, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Entity Recognition (aka entity extraction or chunking) involves detection (begin and end boundaries) and classification of entities mentioned in unstructured text into pre-defined categories. It is one of the foundational sub-task of several Information Extraction (Hanafiah and Quix, 2014) (IE) and Natural Language Processing (NLP) pipelines. Hence, errors introduced during the extraction of entities can propagate further and degrade the performance of the complete IE or NLP pipeline. In the domains of experimental biology, the growing complexity of experiments has resulted in a need to automate wet laboratory procedures. Such an automation will be useful in avoiding human errors introduced in the wet lab protocols and thereby will enhance the reproducibility of experimental biological research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 290, |
|
"text": "(Hanafiah and Quix, 2014)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 295, |
|
"text": "(IE)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To achieve this reproducibility, some of the previous research works have focussed on defining machine-readable formats for writing wet lab protocols (King et al., 2009; Ananthanarayanan and Thies, 2010; Vasilev et al., 2011) . However, the vast majority of today's protocols are written in natural language with jargon and colloquial language constructs that emerge as a byproduct of ad-hoc protocol documentation. This motivates the need for machine reading systems that can interpret the meaning of these natural language instructions, to enhance reproducibility via semantic protocols (e.g. the Aquarium project) and enable robotic automation (Bates et al., 2017) by mapping natural language instructions to executable actions. In order to enable research on interpreting natural language instructions, with practical applications in biology and life sciences, an annotated database (Kulkarni et al., 2018) of wet lab protocols was introduced.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 169, |
|
"text": "(King et al., 2009;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 203, |
|
"text": "Ananthanarayanan and Thies, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 225, |
|
"text": "Vasilev et al., 2011)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 667, |
|
"text": "(Bates et al., 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 887, |
|
"end": 910, |
|
"text": "(Kulkarni et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The first step in interpreting natural language lab protocols is to extract entities, followed by identification of relations between them. To address the research focussing on entity recognition over Wet Lab Protocols a shared task (Tabassum et al., 2020) was introduced at EMNLP WNUT-2020 Workshop. The task was based on the annotated database (Kulkarni et al., 2018) of wet lab protocols. We tackle this task in two phases. In the first phase, we experiment with various contextualised word embeddings (like Flair, BERT-based) and a BiLSTM-CRF model to arrive at the bestperforming architecture. In the second phase, we create an ensemble composed of eleven BiLSTM-CRF models. The individual models are trained on random train-validation splits of the complete dataset. Here, we also experiment with different output merging schemes, including Majority Voting and SLE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 256, |
|
"text": "(Tabassum et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 369, |
|
"text": "(Kulkarni et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows: Section 2 states the task definition. Section 3 describes the specifics of our methodology. Section 4 explains the experimental setup and the results, and Section 5 concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The steps involved in any lab procedure are specified by lab protocols. These protocols have several characteristics like noise, density and domain specificity. Any process that can automatically or semiautomatically convert protocols into a format that machine recognizes advantages biological research. In this task, system entries for entity recognition on a dataset of lab protocols are invited. Since the protocols are written manually by lab technicians and researchers, they are subject to spelling errors and non standard language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The data provided in the task is made available in two formats:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this format, each line represents the named entity in the following manner:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CoNLL format", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "<word >+ \"\\t\"+ <NE > An empty line denotes the end of a sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CoNLL format", |
|
"sec_num": "2.1" |
|
}, |
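
{

"text": "For illustration, a single (hypothetical) protocol step \"Add 50 ml of buffer\" could appear in this format as follows, where the tags shown are illustrative:\n\nAdd\tB-Action\n50\tB-Amount\nml\tI-Amount\nof\tO\nbuffer\tB-Reagent",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CoNLL format",

"sec_num": "2.1"

},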
|
{ |
|
"text": "The standoff format contains each protocol represented by two separate files. One file, with .txt extension, contains protocols in text format, while the other file, with .ann extension, contains protocol annotations. The two files are linked by using a simple file naming convention wherein their base name is the same, i.e. the file name without the extension is the same. For example,the annotation file named as protocol 17.ann contains annotations for the file protocol 17.txt.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standoff format", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Within each annotation file, individual annotations connect to different parts of text through character offsets. For example, in the document starting as \"Put 3.68 g of NaCl\", the text \"Put\" is denoted by the offset range 0..3. It is evident from the above example that all offsets are 0 indexed and include the character at the start offset and exclude the character at the end offset. All text files have the file extension .txt and contain the text of original documents provided as inputs to the system. The encoding used in the protocol text files which are stored as plain text files is UTF-8 (an extension of ASCII). Each line in the protocol text file denotes a single step in the protocol. Hence, all steps in the entire protocol are separated by newline characters. The first line in every file indicates the protocol's name/title.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standoff format", |
|
"sec_num": "2.2" |
|
}, |
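
{

"text": "As a minimal sketch of how such standoff annotations can be consumed (assuming the usual brat convention of tab-separated entity lines such as \"T1\tAction 0 3\tPut\"; the helper name is our own), the offsets can be resolved against the protocol text as follows:\n\ndef read_standoff(txt_path, ann_path):\n    # Read the protocol text; offsets in the .ann file index into this string.\n    with open(txt_path, encoding=\"utf-8\") as f:\n        text = f.read()\n    entities = []\n    with open(ann_path, encoding=\"utf-8\") as f:\n        for line in f:\n            if not line.startswith(\"T\"):  # keep only text-bound (entity) annotations\n                continue\n            _, tag_info, surface = line.rstrip(\"\\n\").split(\"\\t\")\n            label, start, end = tag_info.split()[:3]\n            start, end = int(start), int(end)\n            # start is inclusive, end is exclusive, both 0-indexed\n            assert text[start:end] == surface\n            entities.append((label, start, end, surface))\n    return entities",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Standoff format",

"sec_num": "2.2"

},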
|
{ |
|
"text": "This section talks about the core methodology we adopted to tackle the given problem. The process pipeline involves providing contextualised word embeddings as input to the BiLSTM-CRF model, followed by a Structured learning Ensemble approach. Each of the these modules have been described in detail in the below subsections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experiment with two types of contextualised word embeddings, BERT and Flair based, which we discuss in detail in the below subsections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embeddings", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Neural models based on transformers (Vaswani et al., 2017) have excelled in most NLP tasks. The primary components in their architecture being the self attention blocks and feed forward layers, these models have been proven successful in providing a significant boost to state-of-the-art results. The major difference between transformers and RNN based models (Li et al., 2018) is that transformers do not rely on recurrence mechanisms to establish relations and dependencies in the input sequence, by making use of self attention at each input time step instead. Attention can be interpreted as a technique to map a query and a set of key-value pairs to an output, where the query, keys, values and output are all vectors. As far as self attention is concerned, a separate feed forward layer is used to formulate the query, key and value vectors for each vector in the input sequence. For every input vector, the score for attention is calculated using a compatibility function which takes as input the input keys and query vector. These attention scores are used to denote the weights of a weighted sum of value vectors, which is the output of self attention technique. Another technique widely used is the multi headed attention technique in which several modules of these self attention blocks work over the input sequence. The encoder module in the transformer's architecture contains 6 identical layers each having two sublayers -position wise densely connected feed forward network and multi headed self attention layers. These sublayers are wrapped around with residual connections. Layer normalisation follows the above module. BERT pre-trains bidirectional representations by jointly utilizing both right and left contexts across all layers with the help of a multi layer encoder module.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 58, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 377, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
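
{

"text": "As a minimal illustration of the attention computation described above (a sketch of single-head scaled dot-product self-attention, not a full transformer), the output can be computed as follows:\n\nimport numpy as np\n\ndef self_attention(X, Wq, Wk, Wv):\n    \"\"\"Scaled dot-product self-attention over a sequence X of shape (L, d).\"\"\"\n    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # query/key/value projections\n    scores = Q @ K.T / np.sqrt(K.shape[-1])    # compatibility function\n    weights = np.exp(scores - scores.max(-1, keepdims=True))\n    weights /= weights.sum(-1, keepdims=True)  # softmax over the keys\n    return weights @ V                         # weighted sum of value vectors",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT",

"sec_num": "3.1.1"

},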
|
{ |
|
"text": "These pre-trained BERT representations are then fine tuned as per the required task by appending a separate output layer depending on the task to be performed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "For every token, the summation of the corresponding token, segment and position embeddings is carried out to produce BERT's input representation. The training process for BERT involves Masked Language Modelling (Nozza et al., 2020) and Next Sentence Prediction (Shi and Demberg, 2019) , both of which are unsupervised prediction tasks. BERT representation for each token in the input text is then fed to the appended densely connected layers to produce the output labels for the token as part of the fine tuning process. The predictions produced are independent of the surrounding predictions produced.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 231, |
|
"text": "(Nozza et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 284, |
|
"text": "(Shi and Demberg, 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
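
{

"text": "A minimal sketch of this token-level fine-tuning setup, using the hugging-face transformers API (the label count here is illustrative, not the one used in our experiments):\n\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification\n\n# A dense classification layer is appended on top of the per-token BERT\n# representations; each token's label is predicted independently.\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\nmodel = AutoModelForTokenClassification.from_pretrained(\"bert-base-cased\", num_labels=37)\n\ninputs = tokenizer(\"Add 50 ml of buffer\", return_tensors=\"pt\")\nlogits = model(**inputs).logits  # shape: (1, num_tokens, num_labels)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT",

"sec_num": "3.1.1"

},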
|
{ |
|
"text": "We experimented with different variations of BERT models (Devlin et al., 2018) for generating word embeddings. All the listed model types have 12 layers, 12 attention heads and 110M parameters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 78, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "BERT-base-cased : This model is trained on cased English text of general domain like Wikipedia text and BooksCorpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "BioBERT (Lee et al., 2019) : BioBERT is a language representation model pre-trained on the domain of biomedical data. The pre-training process for BioBERT involves initializing weights with those of BERT which is pre-trained on general domain corpora, followed by pre-training BioBERT with biomedical data corpora like PMC full-text articles and PubMed abstracts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 26, |
|
"text": "(Lee et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "PubMedBERT (Gu et al., 2020) : The base architecture of PubMedBERT is the same as an uncased BERT base model. The model is pre-trained on full PubMed Central articles and PubMed abstracts. The pre-training process for this model involves direct pre-training on biomedical text from scratch. Thus, the weights are not initialized with those of BERT as was in the case of BioBERT. The pre-training corpus contains 14 million PubMed abstracts with 3 billion words, 21 GB of textual data in total. Another version of the same model is pre-trained on additional data of full text PubMed Central articles, with the total textual data containing 16.8 billion words and 107 GB in size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 28, |
|
"text": "(Gu et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "1 Flair embeddings are pre-trained Contextualised Word Embeddings (CWE) provided in the Flair 1 https://github.com/flairNLP/flair NLP framework. In contrast to classical work embeddings like GloVe, the Flair CWE concatenate two context vectors based on the left and right sentence context of the word to it. These context vectors are computed using two recurrent neural models. One of the character language model is trained from left to right while the other is trained from right to left. Flair CWEs have been applied successfully to sequence tagging tasks such as Named Entity Recognition and Part of Speech Tagging. Since this shared task is closely related to Bio-medical domain, we have used \"pubmed\" variant of Flair CWEs in all our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flair", |
|
"sec_num": "3.1.2" |
|
}, |
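
{

"text": "A minimal sketch of how these embeddings can be assembled in the Flair framework (assuming \"pubmed-forward\" and \"pubmed-backward\" as the model names of the \"pubmed\" variant, following Flair's naming convention):\n\nfrom flair.data import Sentence\nfrom flair.embeddings import FlairEmbeddings, StackedEmbeddings\n\n# Forward and backward character language models trained on PubMed;\n# their context vectors are concatenated for every token.\nembeddings = StackedEmbeddings([\n    FlairEmbeddings(\"pubmed-forward\"),\n    FlairEmbeddings(\"pubmed-backward\"),\n])\n\nsentence = Sentence(\"Centrifuge the sample for 5 minutes .\")\nembeddings.embed(sentence)  # each token now carries a contextualised vector",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Flair",

"sec_num": "3.1.2"

},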
|
{ |
|
"text": "The ability of Recurrent Neural Networks (RNNs) (Yadav and Bethard, 2018) to execute the same function at each time step, allowing parameters to be shared across the input sequence, make them highly suitable for sequential input data . Useful information from each time step is forwarded to further time steps in the form of a hidden vector, which is utilized to make a prediction at each of the future steps. However, RNNs face the issue of vanishing gradients in case of large input sequences. To solve this issue of vanishing gradients, (Long Short Term Memory) LSTM (Hochreiter and Schmidhuber, 1997) was introduced. The presence of gating mechanisms in LSTMs makes sure that long range dependencies are captured appropriately. While LSTMs utilize only past time steps to make a prediction, Bidirectional LSTM (BiLSTM) (Schuster and Paliwal, 1997) utilizes information from past as well as future time steps. In our case, the output embeddings are fed to the BiLSTM layer, which outputs a vector for each word in the input sequence. Since the task under consideration has labels which have dependencies among themselves, such as an intermediate label following a start label, we need to consider these dependencies in our modelling approach. For this, a linear chain (Conditional Random Fields) CRF layer (Sutton and McCallum, 2010) is appended to the BiLSTM layer. Due to utilization of transition matrices for output labels, a linear chain CRF is able to learn inter label dependencies, if any, among the output labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 73, |
|
"text": "(Yadav and Bethard, 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 604, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1309, |
|
"end": 1336, |
|
"text": "(Sutton and McCallum, 2010)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiLSTM-CRF Model", |
|
"sec_num": "3.2" |
|
}, |
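
{

"text": "A minimal sketch of this BiLSTM-CRF setup in the Flair framework (the data folder and file names are assumptions; the CoNLL-format files are as described in Section 2.1):\n\nfrom flair.datasets import ColumnCorpus\nfrom flair.embeddings import FlairEmbeddings, StackedEmbeddings\nfrom flair.models import SequenceTagger\n\n# Column 0 holds the word, column 1 the entity tag.\ncorpus = ColumnCorpus(\"data/\", {0: \"text\", 1: \"ner\"},\n                      train_file=\"train.conll\", dev_file=\"dev.conll\", test_file=\"test.conll\")\ntag_dictionary = corpus.make_tag_dictionary(tag_type=\"ner\")\n\nembeddings = StackedEmbeddings([FlairEmbeddings(\"pubmed-forward\"),\n                                FlairEmbeddings(\"pubmed-backward\")])\n\n# BiLSTM over the embeddings, with a CRF output layer on top (use_crf=True).\ntagger = SequenceTagger(hidden_size=512,\n                        embeddings=embeddings,\n                        tag_dictionary=tag_dictionary,\n                        tag_type=\"ner\",\n                        use_crf=True,\n                        rnn_layers=2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BiLSTM-CRF Model",

"sec_num": "3.2"

},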
|
{ |
|
"text": "We created eleven randomly shuffled splits of training and validation data, and fine tuned our final model on these eleven splits to produce eleven sets of predictions. We then merged these predictions following two merging techniques, Majority Voting and Structured Learning Ensemble (SLE), thus comparing the performance of the two merging functions. In our experiments, we provide a fair comparison of the above two combination techniques, i.e. Majority Voting technique and SLE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Process", |
|
"sec_num": "3.3" |
|
}, |
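
{

"text": "A minimal sketch of how the eleven train-validation splits can be generated (the split fraction and the seed values are illustrative):\n\nimport random\n\ndef make_splits(sentences, n_models=11, dev_fraction=0.1):\n    \"\"\"Yield one (train, validation) split per ensemble member.\"\"\"\n    for seed in range(n_models):\n        shuffled = sentences[:]\n        random.Random(seed).shuffle(shuffled)  # a fresh shuffle per member\n        cut = int(len(shuffled) * dev_fraction)\n        yield shuffled[cut:], shuffled[:cut]   # train split, validation split",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Ensemble Process",

"sec_num": "3.3"

},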
|
{ |
|
"text": "Given N number of ensembles and x as the input example, {y 1 , y 2 , ..., y N } being the predictions from N different models are merged to produce the final prediction y. The ensemble methods for structured output classification and multiclass classification differ in the way they merge the predicted results of the base models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Process", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The merging techniques have been described below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble Process", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For every entity predicted, we choose the mode i.e. the most frequently occurring entity among the eleven predictions (Adejo and Connolly, 2017) . Thus, the entity which has the maximum number of votes wins.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 144, |
|
"text": "(Adejo and Connolly, 2017)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Majority voting", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "Mathematically, the above process of majority voting scheme to produce the final predictions can be denoted in the below manner :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Majority voting", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "y = majority {(y 1 ) 1 , (y 2 ) 1 , . . . , (y N ) 1 } . . . . . . . . . . majority {(y 1 ) L , (y 2 ) L , . . . , (y N ) L }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Majority voting", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "where L is the length of all predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Majority voting", |
|
"sec_num": "3.3.1" |
|
}, |
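
{

"text": "A minimal sketch of this position-wise majority vote over the N predicted tag sequences (ties fall to the first of the most common tags returned by Counter; the tie-breaking rule is our own choice):\n\nfrom collections import Counter\n\ndef majority_vote(predictions):\n    \"\"\"predictions: list of N tag sequences, all of length L.\"\"\"\n    merged = []\n    for position_tags in zip(*predictions):  # iterate over positions 1..L\n        merged.append(Counter(position_tags).most_common(1)[0][0])\n    return merged\n\n# majority_vote([[\"B-Action\", \"O\"], [\"B-Action\", \"B-Reagent\"], [\"O\", \"B-Reagent\"]])\n# -> [\"B-Action\", \"B-Reagent\"]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Majority voting",

"sec_num": "3.3.1"

},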
|
{ |
|
"text": "Due to the presence of correlations and intrinsic structures in the output labels, we speculated that the majority voting scheme would not suffice for our problem. (Nguyen and Guo, 2007) proposed a technique to combine the predictions considering the correlations of the output labels. Named as weighted transition combination, the algorithm involves construction of (L-1) transition matrices of size ( |\u03a3| x |\u03a3| ) , where \u03a3 is the set of all possible labels. Apart from this, it also involves construction of a transition matrix T k which provides the number of transitions at the k th position as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 186, |
|
"text": "(Nguyen and Guo, 2007)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "T k (t i , t j ) = count k (t i , t j ) , \u22001 \u2264 k \u2264 (L \u2212 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "where count k (t i , t j ) denotes the number of times the label t j occurs after t i at the k th position in the set of predicted sequences {y 1 , y 2 , ..., y N }. Also, a stateweight vector is constructed that denotes the number of times label t i occurs at position k in the predicted sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "U k (t i ) = count k (t i ) , \u22001 \u2264 k \u2264 L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "The predicted sequence of SLE is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "y = argmax y L\u22121 k=1 T k (y k , y k+1 ) L k=1 U k (y k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "The computation involved in the argmax calculation of the above equation is similar to Viterbi dynamic programming approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Learning Ensemble (SLE)", |
|
"sec_num": "3.3.2" |
|
}, |
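
{

"text": "A minimal sketch of this merging scheme, directly implementing the counts T_k and U_k and the Viterbi-style argmax above (assuming all N predicted sequences share the same length L):\n\nfrom collections import defaultdict\n\ndef sle_merge(predictions):\n    \"\"\"Weighted transition combination over N tag sequences of equal length L.\"\"\"\n    L = len(predictions[0])\n    labels = sorted({t for seq in predictions for t in seq})\n    # U[k][t]: occurrences of label t at position k; T[k][(a, b)]: occurrences of b after a.\n    U = [defaultdict(int) for _ in range(L)]\n    T = [defaultdict(int) for _ in range(L - 1)]\n    for seq in predictions:\n        for k, t in enumerate(seq):\n            U[k][t] += 1\n        for k in range(L - 1):\n            T[k][(seq[k], seq[k + 1])] += 1\n    # Viterbi-style dynamic program maximising the product of U and T weights.\n    score = {t: U[0][t] for t in labels}\n    back = []\n    for k in range(1, L):\n        new_score, pointers = {}, {}\n        for t in labels:\n            prev = max(labels, key=lambda p: score[p] * T[k - 1][(p, t)])\n            new_score[t] = score[prev] * T[k - 1][(prev, t)] * U[k][t]\n            pointers[t] = prev\n        score, back = new_score, back + [pointers]\n    best = max(labels, key=lambda t: score[t])\n    merged = [best]  # trace the best sequence back through the pointers\n    for pointers in reversed(back):\n        merged.append(pointers[merged[-1]])\n    return merged[::-1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Structured Learning Ensemble (SLE)",

"sec_num": "3.3.2"

},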
|
{ |
|
"text": "Our experimentation strategy is distributed in two phases. In the first phase, we experiment with various architectures and their specifications by varying the type of pre-trained model, deciding layers to freeze i.e. complete fine-tuning or contextual word embeddings, varying type and size of final layer in order to arrive at the best performing model. We trained each of our model architectures on the train split and identified the checkpoint which worked best using the validation split. We reported the final numbers on the test split. For each model, we train three different models with random seed values and then report averaged f1 scores to ensure that improvements are not the result of randomisation. A configuration of concatenated contextual word embeddings from PubmedBERT and Flair, followed by 2 BiLSTM layers with 512 dimensional hidden size and a CRF layer in the end worked best. In the second phase, we train individual models on random splits of train + validation sets. In order to merge the outputs of individual models, we experiment with two output merging schemes namely Majority Voting and Structured Learning Ensemble (SLE). Finally, we report the results on the test dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the following sub-sections, we describe the dataset, system settings, evaluation metrics, results and a brief error analysis for our final submitted system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Wet Lab Protocol (WLP) dataset consists of 615 unique protocols from 623 protocols released by (Kulkarni et al., 2018 After discarding the duplicate protocols, the remaining 615 unique protocols are re-annotated in brat by 3 annotators with 0.75 inter-annotator agreement, measured by span-level Krippendorff's \u03b1. The annotators not only added the missing entityrelations but also rectified the inconsistencies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 117, |
|
"text": "(Kulkarni et al., 2018", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The detailed class-wise statistics pertaining to each of the dataset splits provided in the task are shown in Table 1 . Corresponding number of protocols and sentences are provided in test dataset and test data 2020 denotes the surprise test dataset. The surprise dataset was not revealed before the evaluation window. Table 3 presents the total number of words, words absent in reference and words present in reference for each dataset. Reference varies according to the dataset being considered. For validation dataset and test dataset, training dataset is the reference. For surprise dataset, all data i.e. the union of training dataset, validation dataset and test dataset is considered as the reference. There is no reference in case of training dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 326, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While training individual models of our final ensemble, we rely on concatenated word representations from PubMedBERT and Flair. We train the BiLSTM-CRF based model with 3 BiLSTM layer each of hidden size 512 using a patience-based strategy. With this strategy, after every epoch of training, we compute the F1-score on validation split and if the metric doesn't improve continuously for \"patience\" number of epochs, we reduce the learning rate by half. We ultimately stop the training when either the learning rate diminishes to 0.0001 or the epoch number reaches a maximum limit. We have utilised hugging-face 2 BERT APIs and Flair Framework (Akbik et al., 2019) to train our model. We ran our experiments on a single NVIDIA V100 GPU. It took around 2.5 hours to train each individual model of our final submitted ensemble. Table 4 summarises the hyperparameters which we employed to train our models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 643, |
|
"end": 663, |
|
"text": "(Akbik et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 825, |
|
"end": 832, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Settings", |
|
"sec_num": "4.2" |
|
}, |
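
{

"text": "A minimal sketch of this patience-based schedule with the Flair trainer (tagger and corpus as sketched in Section 3.2; the starting learning rate, patience and epoch budget shown here are illustrative, while the 0.0001 stopping threshold is the one stated above):\n\nfrom flair.trainers import ModelTrainer\n\ntrainer = ModelTrainer(tagger, corpus)\ntrainer.train(\"models/wlp-ner\",\n              learning_rate=0.1,         # starting learning rate\n              anneal_factor=0.5,         # halve the learning rate...\n              patience=3,                # ...after this many epochs without improvement\n              min_learning_rate=0.0001,  # stop once the learning rate diminishes to this\n              max_epochs=150)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Settings",

"sec_num": "4.2"

},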
|
{ |
|
"text": "Assuming that P and T represent the set of predicted and ground-truth entities for a particular word in the protocol text. Then, precision, recall and F1-score for the entity prediction of the considered word is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "P recision = |P T | |P | Recall = |P T | |T | F 1 = 2 * P recision * Recall P recision + Recall", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "There were two criteria for evaluation metrics in the task, partial match and exact match. In case of partial match, P intersection T will include all entities whose types match and boundaries match partially, i.e. there is some overlap in the boundaries. However, in case of exact match, for an entity to be included in the intersection set, it must have the same type as well as exact same boundaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
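
{

"text": "A minimal sketch of these metrics over sets of predicted and ground-truth entities, here represented as (type, start, end) tuples, i.e. under the exact-match criterion:\n\ndef prf1(predicted, gold):\n    \"\"\"predicted, gold: sets of (type, start, end) entity tuples.\"\"\"\n    tp = len(predicted & gold)  # |P \u2229 T|\n    precision = tp / len(predicted) if predicted else 0.0\n    recall = tp / len(gold) if gold else 0.0\n    f1 = (2 * precision * recall / (precision + recall)\n          if precision + recall else 0.0)\n    return precision, recall, f1\n\n# prf1({(\"Action\", 0, 3), (\"Reagent\", 14, 20)}, {(\"Action\", 0, 3)}) -> (0.5, 1.0, 0.666...)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "4.3"

},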
|
{ |
|
"text": "Our approach involved working in two phases, first in which we experiment with different model architectures and the second in which we experiment with two output merging schemes. The results of our experiments in Phases 1 and 2 are summarised in Table 5 and 6 respectively. In Table 5 , we present the micro-F1 and macro-F1 scores for different model architectures we experiment with by varying the base model, fine tuning implementation, type and specifications of final layer and CRF layer addition. Table 6 presents the micro-F1 scores on the test set when we experiment with the number of ensembles, i.e. on merging different number of prediction sets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 254, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 510, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For our final submission to WNUT Shared Task-1, we employed an ensemble of eleven individual models. Each of these models was trained on a random train-validation split of original train + validation + test dataset. Our ensemble achieved a micro-F1 score of 0.8175 and 0.7757 for the partial and exact match of entity boundaries, respectively. We achieved highest micro-recall score among all the participating teams. In Table 7 , we report the top-10 confusions which our model makes while assigning entity type to different words. Results of the final submission on surprise test set are summarised in Table 8 . Upon close inspection of predicted outputs on test split, we identified the following error patterns in the model predictions:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 428, |
|
"text": "Table 7", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 611, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 From Table 7 , we can see that model dominantly gets confused while identifying the begin and intermediate tags for class Reagent. Upon inspection of the predictions, we identified that such errors were more common when the Reagent class in validation/test set was unseen in training examples. We can come up with a dictionary based approach to improve the precision of tags specifically for the Reagent class.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 14, |
|
"text": "Table 7", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Modifier entity type modifies the semantics of some other entity type, so for a word to be Modifier or not is highly dependent on context and modified entity. But since our model fails to over-rely on context for recognition of certain entities, Modifier entity-type often gets confused with Other type. Numerical, model often gets confused among such entities. The main reason which we suspect is that to classify these entities, the model should over-rely on context and not on the token corresponding to the entity itself. Since tokens can be shared across different classes. e.g. 1.5 ml microcentrifuge tube; Preds: B-Amount I-Amount B-Location I-Location; True Label: B-Size I-Size B-Location I-Location;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Error Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Through this paper, we showcased our approach to tackle the Shared Task 1 in EMNLP WNUT-2020 Workshop which involved Entity Recognition over Wet Lab Protocols. We solved the task in two phases. The first phase involved experimenting with different contextualised word embeddings like BERT and Flair, and a BiLSTM-CRF model to find the best performing model configuration for the problem at hand. In the second phase, we create an ensemble consisting of eleven BiLSTM-CRF models. We train individual models on randomly shuffled train-validation splits of the complete dataset. Also, we experiment with different merging techniques like Majority Voting and Structured Learning Ensemble (SLE). Our end solution achieved a micro F1-score of 0.8175 and 0.7757 in the partial and exact match categories, respectively. We were ranked first and second in partial and exact match categories respectively. In the future, we wish to explore the idea of employing rule-based approach to overcome the shortcomings of current solution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Predicting student academic performance using multimodel heterogeneous ensemble approach", |
|
"authors": [ |
|
{ |
|
"first": "Olugbenga", |
|
"middle": [], |
|
"last": "Adejo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Connolly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of Applied Research in Higher Education", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "0--00", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1108/JARHE-09-2017-0113" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olugbenga Adejo and Thomas Connolly. 2017. Pre- dicting student academic performance using multi- model heterogeneous ensemble approach. Journal of Applied Research in Higher Education, 10:00-00.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "FLAIR: An easy-to-use framework for state-of-theart NLP", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Rasul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-4010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Biocoder: A programming language for standardizing and automating biology protocols", |
|
"authors": [ |
|
{ |
|
"first": "Vaishnavi", |
|
"middle": [], |
|
"last": "Ananthanarayanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Thies", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of biological engineering", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vaishnavi Ananthanarayanan and William Thies. 2010. Biocoder: A programming language for standardiz- ing and automating biology protocols. Journal of biological engineering, 4(1):1-13.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Wet lab accelerator: a web-based application democratizing laboratory automation for synthetic biology", |
|
"authors": [ |
|
{

"first": "Maxwell",

"middle": [],

"last": "Bates",

"suffix": ""

},

{

"first": "Aaron",

"middle": [
"J"
],

"last": "Berliner",

"suffix": ""

},

{

"first": "Joe",

"middle": [],

"last": "Lachoff",

"suffix": ""

},

{

"first": "Paul",

"middle": [
"R"
],

"last": "Jaschke",

"suffix": ""

},

{

"first": "Eli",

"middle": [
"S"
],

"last": "Groban",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "ACS synthetic biology", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "167--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maxwell Bates, Aaron J Berliner, Joe Lachoff, Paul R Jaschke, and Eli S Groban. 2017. Wet lab acceler- ator: a web-based application democratizing labora- tory automation for synthetic biology. ACS synthetic biology, 6(1):167-171.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Tinn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Lucas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoto", |
|
"middle": [], |
|
"last": "Usuyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedical natural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Entity recognition in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Novita", |
|
"middle": [], |
|
"last": "Hanafiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Quix", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Intelligent Information and Database Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--122", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Novita Hanafiah and Christoph Quix. 2014. Entity recognition in information extraction. In Intelligent Information and Database Systems, pages 113-122, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Long shortterm memory", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9:1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The automation of science", |
|
"authors": [ |
|
{

"first": "Ross",

"middle": [
"D"
],

"last": "King",

"suffix": ""

},

{

"first": "Jem",

"middle": [],

"last": "Rowland",

"suffix": ""

},

{

"first": "Stephen",

"middle": [
"G"
],

"last": "Oliver",

"suffix": ""

},

{

"first": "Michael",

"middle": [],

"last": "Young",

"suffix": ""

},

{

"first": "Wayne",

"middle": [],

"last": "Aubrey",

"suffix": ""

},

{

"first": "Emma",

"middle": [],

"last": "Byrne",

"suffix": ""

},

{

"first": "Maria",

"middle": [],

"last": "Liakata",

"suffix": ""

},

{

"first": "Magdalena",

"middle": [],

"last": "Markham",

"suffix": ""

},

{

"first": "Pinar",

"middle": [],

"last": "Pir",

"suffix": ""

},

{

"first": "Larisa",

"middle": [
"N"
],

"last": "Soldatova",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "Science", |
|
"volume": "324", |
|
"issue": "5923", |
|
"pages": "85--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ross D King, Jem Rowland, Stephen G Oliver, Michael Young, Wayne Aubrey, Emma Byrne, Maria Liakata, Magdalena Markham, Pinar Pir, Larisa N Soldatova, et al. 2009. The automation of science. Science, 324(5923):85-89.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An annotated corpus for machine reading of instructions in wet lab protocols", |
|
"authors": [ |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghu", |
|
"middle": [], |
|
"last": "Machiraju", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An annotated corpus for machine reading of instructions in wet lab protocols. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Bioinformatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/bioinformatics/btz682" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A survey on deep learning for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aixin", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianglei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenliang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2018. A survey on deep learning for named entity recognition.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Comparisons of sequence labeling algorithms and extensions", |
|
"authors": [ |
|
{ |
|
"first": "Nam", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunsong", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 24th International Conference on Machine Learning, ICML '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "681--688", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1273496.1273582" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nam Nguyen and Yunsong Guo. 2007. Comparisons of sequence labeling algorithms and extensions. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, page 681-688, New York, NY, USA. Association for Computing Machin- ery.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "What the [mask]? making sense of language-specific bert models", |
|
"authors": [ |
|
{ |
|
"first": "Debora", |
|
"middle": [], |
|
"last": "Nozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [mask]? making sense of language-specific bert models.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bidirectional recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Paliwal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Trans. Sig. Proc", |
|
"volume": "45", |
|
"issue": "11", |
|
"pages": "2673--2681", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/78.650093" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. Trans. Sig. Proc., 45(11):2673-2681.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Next sentence prediction helps implicit discourse relation classification within and across domains", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5790--5796", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1586" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Shi and Vera Demberg. 2019. Next sentence pre- diction helps implicit discourse relation classifica- tion within and across domains. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5790-5796, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "An introduction to conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Sutton and Andrew McCallum. 2010. An in- troduction to conditional random fields.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "WNUT-2020 Task 1: Extracting Entities and Relations from Wet Lab Protocols", |
|
"authors": [ |
|
{ |
|
"first": "Jeniya", |
|
"middle": [], |
|
"last": "Tabassum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeniya Tabassum, Wei Xu, and Alan Ritter. 2020. WNUT-2020 Task 1: Extracting Entities and Rela- tions from Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A software stack for specification and robotic execution of protocols for synthetic biological engineering", |
|
"authors": [ |
|
{ |
|
"first": "Viktor", |
|
"middle": [], |
|
"last": "Vasilev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenkai", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Traci", |
|
"middle": [], |
|
"last": "Haddock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swapnil", |
|
"middle": [], |
|
"last": "Bhatia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Adler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fusun", |
|
"middle": [], |
|
"last": "Yaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Beal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Babb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viktor Vasilev, Chenkai Liu, Traci Haddock, Swap- nil Bhatia, Aaron Adler, Fusun Yaman, Jacob Beal, Jonathan Babb, Ron Weiss, and Douglas Densmore. 2011. A software stack for specification and robotic execution of protocols for synthetic biological engi- neering.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A survey on recent advances in named entity recognition from deep learning models", |
|
"authors": [ |
|
{ |
|
"first": "Vikas", |
|
"middle": [], |
|
"last": "Yadav", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2145--2158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vikas Yadav and Steven Bethard. 2018. A survey on re- cent advances in named entity recognition from deep learning models. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2145-2158, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"content": "<table><tr><td/><td colspan=\"2\">#protocols #sentences</td></tr><tr><td>train data</td><td>370</td><td>8444</td></tr><tr><td>dev data</td><td>122</td><td>2839</td></tr><tr><td>test data</td><td>123</td><td>2862</td></tr><tr><td colspan=\"2\">test data 2020 111</td><td>3562</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Frequency of various entity-types in different dataset splits.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>protocol 464 (duplicate of protocol 46)</td></tr><tr><td>protocol 480 (duplicate of protocol 473)</td></tr><tr><td>protocol 482 (duplicate of protocol 474)</td></tr><tr><td>protocol 483 (duplicate of protocol 475)</td></tr><tr><td>protocol 484 (duplicate of protocol 476)</td></tr><tr><td>protocol 621 (duplicate of protocol 570)</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Statistics of different dataset splits.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>. Here,</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "System Settings for the final model.", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td colspan=\"2\">#ensembles MajV SLE</td></tr><tr><td>3</td><td>82.32 82.50</td></tr><tr><td>5</td><td>82.52 82.68</td></tr><tr><td>7</td><td>82.52 82.58</td></tr><tr><td>9</td><td>82.55 82.64</td></tr><tr><td>11</td><td>82.60 82.74</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results of experiments to identity the best architecture specification.", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td>P Label</td><td>T Label</td><td>Count</td></tr><tr><td>O</td><td colspan=\"2\">B-Modifier 324</td></tr><tr><td colspan=\"2\">B-Modifier O</td><td>287</td></tr><tr><td>O</td><td colspan=\"2\">I-Modifier 247</td></tr><tr><td colspan=\"2\">B-Reagent I-Reagent</td><td>180</td></tr><tr><td>I-Reagent</td><td colspan=\"2\">B-Reagent 112</td></tr><tr><td colspan=\"3\">B-Modifier B-Reagent 122</td></tr><tr><td>O</td><td>B-Action</td><td/></tr><tr><td>O</td><td>I-Reagent</td><td>115</td></tr><tr><td>O</td><td>I-Method</td><td>112</td></tr><tr><td>B-Action</td><td>O</td><td>190</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "micro-F1 on test-set after ensembling.", |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table><tr><td/><td colspan=\"2\">Exact Match Partial Match</td></tr><tr><td colspan=\"2\">Precision 81.36</td><td>85.74</td></tr><tr><td>Recall</td><td>74.12</td><td>78.11</td></tr><tr><td colspan=\"2\">Micro-F1 77.57</td><td>81.75</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Top-10 errors occurring in model predictions.", |
|
"num": null |
|
}, |
|
"TABREF11": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Final results on surprise-test dataset.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |