|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T06:34:33.879391Z" |
|
}, |
|
"title": "KaushikAcharya at WNUT 2020 Shared Task-1: Conditional Random Field(CRF) based Named Entity Recognition(NER) for Wet Lab Protocols", |
|
"authors": [ |
|
{ |
|
"first": "Kaushik", |
|
"middle": [], |
|
"last": "Acharya", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Philips India Ltd", |
|
"location": { |
|
"settlement": "Bangalore", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Detecting named entities in user generated text is a challenging task. Lab protocols specify steps in performing a lab procedure. The majority of wet lab protocols are written in noisy, dense, and domain-specific natural language. There is a growing need of automatic or semi-automatic conversion of protocols into machine-readable format to benefit biological research. The paper describes how a classifier model built using Conditional Random Field[1] detects named entities in wet lab protocols. The model 1 trained on the training data showed precision, recall and F1-score of 0.762, 0.743 and 0.752 respectively on the development set. When applied to unseen test data, the model showed 0.737, 0.640 and 0.685 respectively.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Detecting named entities in user generated text is a challenging task. Lab protocols specify steps in performing a lab procedure. The majority of wet lab protocols are written in noisy, dense, and domain-specific natural language. There is a growing need of automatic or semi-automatic conversion of protocols into machine-readable format to benefit biological research. The paper describes how a classifier model built using Conditional Random Field[1] detects named entities in wet lab protocols. The model 1 trained on the training data showed precision, recall and F1-score of 0.762, 0.743 and 0.752 respectively on the development set. When applied to unseen test data, the model showed 0.737, 0.640 and 0.685 respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Wet laboratories are laboratories for conducting biology and chemistry experiments. These require handling of various types of chemicals and potential \"wet\" hazards. These experiments are guided by a sequence of instructions collectively referred as wet lab protocols.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The instructions are mostly composed of imperative statements which are meant to describe an action. Figure 1 shows a representative wet lab protocol. Figure 2 shows BRAT annotations (entities and relations) on two sentences from the representative protocol. For each protocol, annotators had identified and marked every span of text corresponding to action or one of the 17 types of entities. Table 1 shows a few typical examples for each of these classes. For detailed description of entities please refer Kulkarni et al's [2] Annotation Guidelines. 1 https://github.com/kaushikacharya/ wet_lab_protocols Standard RNA Synthesis (E2050) Thaw the necessary kit components. Mix and pulse-spin in microfuge to collect solutions to the bottoms of tubes. Keep on ice. Assemble the reaction at room temperature in the following order:. Mix thoroughly and pulse-spin in a microfuge. Incubate at 37C for 2 hours. Optional step: DNase treatment to remove DNA template. To remove template DNA, add 30 l nuclease-free water to each 20 l reaction, followed by 2 l of DNase I (RNase-free), mix and incubate for 15 minutes at 37C. Proceed with purification of synthesized RNA or analysis of transcription products by gel electrophoresis. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 525, |
|
"end": 528, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 553, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 109, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 159, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 401, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A Conditional Random Fields (CRF) classifier was trained to recognize named entities. The CRF NER model was implemented using sklearn-crfsuite 2 which is a Python wrapper over C++ based CRFsuite 3 . It utilized L-BFGS [3] , a limited memory quasi-Newton algorithm for large scale numerical optimization. The classifier was trained with both L1 and L2 regularization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 144, |
|
"text": "2", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 196, |
|
"text": "3", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 221, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition Methodology", |
|
"sec_num": "2" |
|
}, |
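{

"text": "A minimal sketch of this training setup using the sklearn-crfsuite API. The variable names X_train, y_train, and X_dev are placeholders for per-sentence lists of token feature dicts and label sequences, and the regularization coefficients are assumed values rather than the paper's tuned hyperparameters:\n\nimport sklearn_crfsuite\n\n# X_train: list of sentences, each a list of per-token feature dicts\n# y_train: list of label sequences, e.g. [\"B-Action\", \"O\", \"B-Reagent\", ...]\ncrf = sklearn_crfsuite.CRF(\n    algorithm=\"lbfgs\",  # L-BFGS optimizer [3]\n    c1=0.1,  # L1 regularization coefficient (assumed value)\n    c2=0.1,  # L2 regularization coefficient (assumed value)\n    max_iterations=100,\n    all_possible_transitions=True,\n)\ncrf.fit(X_train, y_train)\ny_pred = crf.predict(X_dev)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entity Recognition Methodology",

"sec_num": "2"

},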
|
{ |
|
"text": "Three types of features have been extracted using Python library spaCy [4] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 74, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Lexical features ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.1" |
|
}, |
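{

"text": "A sketch of the per-token feature extraction with spaCy, reconstructed from the feature list above and the breakdown accompanying Figure 3. The helper name token_features is hypothetical, and word.lower is only an assumed example of a lexical feature, since the exact lexical features are not enumerated in the text:\n\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")\n\ndef token_features(doc, i):\n    token = doc[i]\n    return {\n        \"word.lower\": token.text.lower(),  # lexical feature (assumed example)\n        \"pos\": token.pos_,  # current word's POS\n        \"prev_pos\": doc[i - 1].pos_ if i > 0 else \"BOS\",  # previous word's POS\n        \"next_pos\": doc[i + 1].pos_ if i < len(doc) - 1 else \"EOS\",  # next word's POS\n        \"governor\": token.head.text,  # governor word\n        \"governor_pos\": token.head.pos_,  # governor word's POS\n        \"dep\": token.dep_,  # dependency type\n        \"governor_dep\": token.head.dep_,  # dependency type of the governor\n    }\n\ndoc = nlp(\"Mix and pulse-spin in microfuge to collect solutions.\")\n# e.g. for \"microfuge\": pos=NOUN, dep=pobj, governor_dep=prep\nfeatures = [token_features(doc, i) for i in range(len(doc))]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "2.1"

},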
|
{ |
|
"text": "The experiments were based on the datasets provided by the organizers of W-NUT 2020 shared task on Entity and Relation Recognition over Wet Lab Protocols [5] . The dataset (Table 2) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 157, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 181, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For the experiments, the classifier was trained on the training data and evaluated on development and test data. The reported averages are defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Macro average: averaging the unweighted mean per label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Micro average: averaging the total true positives, false negatives and false positives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Weighted average: averaging the supportweighted mean per label Table 3 and Table 4 show results at token and entity level respectively. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 72, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.1" |
|
}, |
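{

"text": "A minimal illustration of the three averaging schemes on flattened token-level labels using scikit-learn; the variable names y_true_flat and y_pred_flat are placeholders, and this is a sketch of the definitions above rather than the paper's evaluation script:\n\nfrom sklearn.metrics import precision_recall_fscore_support\n\n# y_true_flat / y_pred_flat: token-level labels flattened across sentences\nfor avg in (\"micro\", \"macro\", \"weighted\"):\n    p, r, f1, _ = precision_recall_fscore_support(\n        y_true_flat, y_pred_flat, average=avg, zero_division=0\n    )\n    print(f\"{avg}: P={p:.3f} R={r:.3f} F1={f1:.3f}\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "3.1"

},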
|
{ |
|
"text": "Entity type wise performance metrics on development dataset are available in Table 6 . This is based on strict evaluation mode of matching as defined in SemEval'13 [6] . As per strict evaluation, a predicted entity is correct only if it matches with goldstandard in both exact boundary and type. Used seqeval 4 for the evaluation. Table 7 shows the poorly performing entity classes along with their frequent confusers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 167, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 338, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.2" |
|
}, |
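{

"text": "A usage sketch of the strict entity-level evaluation with seqeval: mode=\"strict\" with an explicit tagging scheme implements the exact-boundary-and-type matching described above. The IOB2 scheme and the example tags are assumptions based on the CoNLL-format annotations:\n\nfrom seqeval.metrics import classification_report\nfrom seqeval.scheme import IOB2\n\n# y_true / y_pred: lists of per-sentence tag sequences, e.g.\n# [[\"B-Action\", \"O\", \"B-Reagent\", \"I-Reagent\"]]\nprint(classification_report(y_true, y_pred, mode=\"strict\", scheme=IOB2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error Analysis",

"sec_num": "3.2"

},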
|
{ |
|
"text": "Errors are of primarily two types: 4 https://github.com/chakki-works/ seqeval \u2022 Predicted entity text span matches truth but entity class is incorrect.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 36, |
|
"text": "4", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Entity text span mismatches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "-Partial match: Example shown in Table 8 . -Complete mis-match: Example shown in Table 9 . Table 8 and Table 9 show examples of misclassification for the highlighted text portion of the corresponding sentences. Table 8 : Expected: Two entities for the highlighted phrase: a) Modifier: lab grade b) Reagent: water. Whereas the system predicted a single entity(Reagent) over the entire text span. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 41, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 89, |
|
"text": "Table 9", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 111, |
|
"text": "Table 9", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 219, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This paper has proposed a CRF-based named entity extraction system to extract Action and 17 Entities of wet lab protocols. Future plan:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Analyse the errors in more detail and extract richer features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Extract global structured information features of the dependency trees [7] as shown in Figure 4 . Currently as the system only uses local dependency features, it predicts Fe Stock Solution as Reagent entity and misses 10g/L.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 76, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 97, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Develop Long Short-Term Memory (LSTM) recurrent neural network model [8] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 74, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
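{

"text": "A sketch of the dependency sub-tree idea from Figure 4 in spaCy: the contiguous span covered by a token's subtree can serve as a candidate entity boundary, so the subtree rooted at Solution would cover the whole Reagent mention, including the 10g/L tokens that local features miss. The example sentence is illustrative, not taken from the dataset:\n\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"Add 1 ml of 10g/L Fe Stock Solution.\")\n\n# For each token, print the contiguous span covered by its dependency subtree\nfor token in doc:\n    subtree = list(token.subtree)  # tokens of the subtree, in document order\n    span = doc[subtree[0].i : subtree[-1].i + 1]\n    print(token.text, \"->\", span.text)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "4"

},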
|
{ |
|
"text": "https://sklearn-crfsuite.readthedocs. io/en/latest/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.chokkan.org/software/ crfsuite/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lafferty, Andrew Mccallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. pages 282-289, 01 2001.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols", |
|
"authors": [ |
|
{

"first": "Chaitanya",

"middle": [],

"last": "Kulkarni",

"suffix": ""

},

{

"first": "Wei",

"middle": [],

"last": "Xu",

"suffix": ""

},

{

"first": "Alan",

"middle": [],

"last": "Ritter",

"suffix": ""

},

{

"first": "Raghu",

"middle": [],

"last": "Machiraju",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ritter Raghu Machiraju Chaitanya Kulkarni, Wei Xu. An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols. In Proceedings of NAACL-HLT, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "On the limited memory BFGS method for large scale optimization", |
|
"authors": [ |
|
{

"first": "Dong",

"middle": ["C."],

"last": "Liu",

"suffix": ""

},

{

"first": "Jorge",

"middle": [],

"last": "Nocedal",

"suffix": ""

}
|
], |
|
"year": 1989, |
|
"venue": "Mathematical Programming", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "503--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Montani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear, 2017.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet Lab Protocols", |
|
"authors": [ |
|
{ |
|
"first": "Jeniya", |
|
"middle": [], |
|
"last": "Tabassum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sydney", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeniya Tabassum, Sydney Lee, Wei Xu, and Alan Ritter. WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT), 2020.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Segura-Bedmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paloma", |
|
"middle": [], |
|
"last": "Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mar\u00eda", |
|
"middle": [], |
|
"last": "Herrero-Zazo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "341--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Mar\u00eda Herrero-Zazo. SemEval-2013 task 9 : Extrac- tion of drug-drug interactions from biomedical texts (DDIExtraction 2013). In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 341-350, Atlanta, Georgia, USA, June 2013. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficient dependency-guided named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Zhanming", |
|
"middle": [], |
|
"last": "Jie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aldrian", |
|
"middle": [], |
|
"last": "Obaja Muis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhanming Jie, Aldrian Obaja Muis, and Wei Lu. Ef- ficient dependency-guided named entity recognition, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June 2016. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "An example wet lab protocolNamed Entity Recognition (NER) aims at identifying these entities within a given protocol.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Example sentences from the wet lab protocol example shown in figure 1 as shown in the BRAT annotation interface.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Parts of speech (POS) features -Current word's POS -Prev and Next word's POS -Governor word's POS \u2022 Dependency parse features -Governor words -Dependency type -Dependency type of Governor word As an example, microfuge in the sentence shown in Figure 3 produces the following features: \u2022 current word POS: NOUN \u2022 dependency tag: pobj \u2022 parent dependency tag: prep", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "POS and dependency parse for the sentence shown in figure 2. Dependency parse tree visualized using spaCy's displaCy.", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Global structured information in the dependency tree. The figure shows the dependency sub-tree for the Reagent entity: 10g/L Fe Stock Solution. The entity's text span is covered by the subtree having Solution as its root and of as its head.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "was annotated in both StandOff and CoNLL formats. Entities and relations of 615 protocols were annotated in brat with 3 annotators with 0.75 inter-annotator agreement, measured by span-level Cohen's Kappa.", |
|
"html": null, |
|
"content": "<table><tr><td>Tag</td><td>Examples</td></tr><tr><td>Action</td><td>add, incubate, mix</td></tr><tr><td>Amount</td><td>50 l, equal volume</td></tr><tr><td>Concentration</td><td>1x</td></tr><tr><td>Device</td><td>filter, vacuum, microfuge</td></tr><tr><td>Generic-Measure</td><td>30-kd, several times, 100v</td></tr><tr><td>Location</td><td>tube, plate, well</td></tr><tr><td>Measure-Type</td><td>volume, concentration</td></tr><tr><td>Mention</td><td>It, them, this</td></tr><tr><td>Method</td><td>up and down, extraction</td></tr><tr><td>Modifier</td><td>each, gently, at least</td></tr><tr><td>Numerical</td><td>one, 3, several, several times</td></tr><tr><td>pH</td><td>ph 8.0, ph8.0</td></tr><tr><td>Reagent</td><td>cells, supernatant</td></tr><tr><td>Seal</td><td>Lid, cap, aluminum foil</td></tr><tr><td>Size</td><td>0.02 m, 12 x 75 mm</td></tr><tr><td>Speed</td><td>14,000xg, 10,000 rpm</td></tr><tr><td>Temperature</td><td>room temperature, overnight</td></tr><tr><td>Time</td><td>5 minutes</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Top frequent examples of Action and Entities", |
|
"html": null, |
|
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Protocols Sentences</td></tr><tr><td>Training</td><td>370</td><td>8444</td></tr><tr><td>Development</td><td>122</td><td>2839</td></tr><tr><td>Test</td><td>123</td><td>2813</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>compares</td><td>my</td><td>results</td></tr><tr><td colspan=\"3\">(KaushikAcharya) to the other systems par-</td></tr><tr><td colspan=\"3\">ticipating in the shared task on the unseen test</td></tr><tr><td>data.</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"5\">Token level metrics on development and test</td></tr><tr><td colspan=\"5\">sets(P: Precision, R: Recall, F1: F1 score). This in-</td></tr><tr><td colspan=\"5\">cludes non-entity tokens also as one of the classes.</td></tr><tr><td colspan=\"2\">Dataset Average</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Dev</td><td>Micro</td><td colspan=\"3\">0.762 0.743 0.752</td></tr><tr><td>Dev</td><td colspan=\"4\">Macro 0.755 0.743 0.748</td></tr><tr><td>Test</td><td>Micro</td><td colspan=\"3\">0.782 0.766 0.774</td></tr><tr><td>Test</td><td colspan=\"4\">Macro 0.777 0.766 0.771</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>Team</td><td colspan=\"2\">Exact Partial</td></tr><tr><td>BITEM</td><td colspan=\"2\">77.99 81.67</td></tr><tr><td>PublishInCovid19</td><td colspan=\"2\">77.57 81.75</td></tr><tr><td>mgsohrab</td><td>76.6</td><td>80.5</td></tr><tr><td>Kabir</td><td colspan=\"2\">75.35 80.08</td></tr><tr><td>Winners</td><td colspan=\"2\">74.91 79.54</td></tr><tr><td>BIO-BIO</td><td colspan=\"2\">74.59 79.03</td></tr><tr><td colspan=\"3\">Fancy Man Launches Zippo 73.92 78.71</td></tr><tr><td>SudeshnaTCS</td><td>73.16</td><td>77.8</td></tr><tr><td>B-NLP</td><td colspan=\"2\">70.25 76.46</td></tr><tr><td>KaushikAcharya</td><td colspan=\"2\">68.48 73.73</td></tr><tr><td>IBS</td><td>67.9</td><td>72.89</td></tr><tr><td>DSC-IITISM</td><td colspan=\"2\">60.42 64.49</td></tr><tr><td>mahab</td><td colspan=\"2\">51.54 56.57</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Comparison of system results on both exact and partial match (F1 score)", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">: Entity level classification metrics per entity</td></tr><tr><td>type</td><td/></tr><tr><td>Truth</td><td>Confusers</td></tr><tr><td colspan=\"2\">Generic-Measure Concentration, Numerical</td></tr><tr><td>Method</td><td>Action, Reagent</td></tr><tr><td>Modifier</td><td>Reagent, Location, Action</td></tr><tr><td>Numerical</td><td>Amount, Generic-Measure</td></tr><tr><td>Size</td><td>Concentration, Location</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Frequent confuser entity classes", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF10": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Expected: Entity(Method) for the highlighted phrase. Whereas the system predicted two entities: a) Action: without lysing b) Reagent: erythrocytes.", |
|
"html": null, |
|
"content": "<table><tr><td>Text</td><td>Entity</td><td>Truth/Predicted</td></tr><tr><td>lab grade</td><td>Modifier</td><td>Truth</td></tr><tr><td>water</td><td>Reagent</td><td>Truth</td></tr><tr><td colspan=\"2\">lab grade water Reagent</td><td>Predicted</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Mis-classification #1: Sentence: Rinse slides with lab grade water.", |
|
"html": null, |
|
"content": "<table><tr><td>Text</td><td/><td>Entity</td><td>Truth/Predicted</td></tr><tr><td>without</td><td>lysing</td><td>Method</td><td>Truth</td></tr><tr><td colspan=\"2\">erythrocytes</td><td/><td/></tr><tr><td colspan=\"2\">without lysing</td><td>Action</td><td>Predicted</td></tr><tr><td>erythrocytes</td><td/><td>Reagent</td><td>Predicted</td></tr></table>" |
|
}, |
|
"TABREF12": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>: Mis-classification #2:</td></tr><tr><td>Sentence: Prepare cells from your tissue of interest</td></tr><tr><td>without lysing erythrocytes.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |