{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:40.311648Z"
},
"title": "Handling Variance of Pretrained Language Models in Grading Evidence in the Medical Literature",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Biaoyan",
"middle": [],
"last": "Fang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we investigate the utility of modern pretrained language models for the evidence grading system in the medical literature based on the ALTA 2021 shared task. We benchmark 1) domain-specific models that are optimized for medical literature and 2) domain-generic models with rich latent discourse representation (i.e. ELECTRA, RoBERTa). Our empirical experiments reveal that these modern pretrained language models suffer from high variance, and the ensemble method can improve the model performance. We found that ELECTRA performs best with an accuracy of 53.6% on the test set, outperforming domain-specific models. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we investigate the utility of modern pretrained language models for the evidence grading system in the medical literature based on the ALTA 2021 shared task. We benchmark 1) domain-specific models that are optimized for medical literature and 2) domain-generic models with rich latent discourse representation (i.e. ELECTRA, RoBERTa). Our empirical experiments reveal that these modern pretrained language models suffer from high variance, and the ensemble method can improve the model performance. We found that ELECTRA performs best with an accuracy of 53.6% on the test set, outperforming domain-specific models. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Evidence-Based Medicine (EBM) is an approach by health practitioners to integrate individual clinical expertise and external evidence from medical literatures in making decisions about the care of patients (Sackett et al., 1996) . In practice, understanding the current best evidence from the literature minimizes the unexpected risk of outdated treatments that can be detrimental to patients.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Sackett et al., 1996)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Strength of Recommendation Taxonomy (SORT) (Ebell et al., 2004) is one of the standard scale systems for grading evidence in medical literature and it has been used to assist the EBM approach. SORT groups a medical literature into one of three classes: A (consistent and goodquality patient-oriented evidence), B (inconsistent or limited-quality patient-oriented evidence) and C (other evidence, such as consensus guidelines, usual practice and opinion). While obtaining these grades on a wide-scale is expensive and requires in-depth medical expertise, previous works (Sarker et al., 2015) have attempted to automate the process by modelling the grading system with n-gram language model via SVM (Molla and Sarker, 2011) and ensemble method (Gyawali et al., 2012) .",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "(Ebell et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 569,
"end": 590,
"text": "(Sarker et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 697,
"end": 721,
"text": "(Molla and Sarker, 2011)",
"ref_id": "BIBREF17"
},
{
"start": 742,
"end": 764,
"text": "(Gyawali et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "In this work, we focus on investigating the utility of various modern pretrained language models for modelling the evidence grading system in the medical literature. Although transformer (Vaswani et al., 2017) and pretrained language models such as BERT (Devlin et al., 2019) , RoBERTa have achieved impressive performance across various NLP tasks (Wang et al., 2018; Wang et al., 2019) and languages (Koto et al., 2020; Martin et al., 2020) , we hypothesize that such evidence grading task is still challenging because of three reasons. First, in-depth medical expertise and knowledge are not always present in the language models. Second, it is very likely that machine learning models suffer from high variance as disagreement in assessing scientific literature is natural, even among the experts. Lastly, obtaining high-quality training data for this task is difficult, and the large transformer-based models potentially suffer from overfitting if the available data is limited.",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 254,
"end": 275,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 348,
"end": 367,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 368,
"end": 386,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 401,
"end": 420,
"text": "(Koto et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 421,
"end": 441,
"text": "Martin et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "To address the aforementioned challenges, we use three main strategies. First, we fine-tune domain-specific pretrained models (Gu et al., 2020) that are optimized for medical literature. Previous works (Gururangan et al., 2020; Gu et al., 2020; Alsentzer et al., 2019; Fang et al., 2021; have shown that such models contain domain-specific knowledge that can boost system performance. Second, we argue that discourse is prominent for this task because each of three SORT classes might have different document structure. For instance, patient-oriented literature and consensus guidelines potentially are written differently in terms of flow and discourse. In this work, rather than employing a complicated discourse parser (Yu et al., 2018; Koto et al., 2019 , we rely on modern pretrained language models such as ELEC-TRA (Clark et al., 2020) that contains a rich latent discourse representation . Lastly, similar to Gyawali et al. (2012) , we also perform ensemble learning to tackle the high variance issue of models.",
"cite_spans": [
{
"start": 126,
"end": 143,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 202,
"end": 227,
"text": "(Gururangan et al., 2020;",
"ref_id": "BIBREF6"
},
{
"start": 228,
"end": 244,
"text": "Gu et al., 2020;",
"ref_id": null
},
{
"start": 245,
"end": 268,
"text": "Alsentzer et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 269,
"end": 287,
"text": "Fang et al., 2021;",
"ref_id": "BIBREF4"
},
{
"start": 722,
"end": 739,
"text": "(Yu et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 740,
"end": 757,
"text": "Koto et al., 2019",
"ref_id": "BIBREF8"
},
{
"start": 822,
"end": 842,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 917,
"end": 938,
"text": "Gyawali et al. (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "We conduct our experiments based on the ALTA 2021 shared task 2 which aims to automatically grade evidence in the medical literature. The grading system follows the SORT framework (Ebell et al., 2004) with three classes: A (Strong), B (Moderate) and C (Weak).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
{
"text": "As shown in Figure 1 each line in the training data is a single piece of evidence and consists of an ID, a SORT grade, and a list of resource/publication ID(s) from PubMed. 3 Each publication ID is mapped to an XML file containing bibliographic information (e.g. title, author, affiliation, etc.), abstract, and some meta-data such as type and status of the publication.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
{
"text": "In Table 1 , we present overall statistics of the train, development and test sets. First, nearly 45% of the train and development data are classified as class B. We also found there is no significant difference in terms of the number of resources and words between each subset. Figure 2 describes the best model that we submit to ALTA 2021 shared task. We use filtered ensemble method over 3 domain-specific pretrained language models: 1) Biomed BERT (Gu et al., 2020) , 2) Biomed RoBERTa (Gururangan et al., 2020) and 3) Biomed RoBERTa that is further pretrained with the training set for 400 epochs, denoted as Task Adaptive Pretraining (TAPT) model; and 3 domaingeneric pretrained language models: 1) RoBERTa , 2) ELECTRA, and 3) ELEC-TRA (large) (Clark et al., 2020) . The selection of RoBERTa and ELECTRA is based on their rich latent discourse representation as reported by . Given a list of resources or publications R = {r 1 , r 2 , .., r n } for evidence x, we construct an input sequence as follows. First, each resource r i consists of journal name j i , title t i , and abstract a i . We form an input sequence x as the concatenation of all texts j 1 \u2295 t 1 \u2295 a 1 \u2295 ... \u2295 j n \u2295 t n \u2295 a n . We truncate a resource r i if the tokens are more than 250, and set the maximum length of the input x to be 512.",
"cite_spans": [
{
"start": 452,
"end": 469,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 751,
"end": 771,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 279,
"end": 287,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
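The input construction described above can be sketched as follows. This is an illustrative approximation rather than the authors' released code; it assumes a HuggingFace tokenizer and a hypothetical `resources` list of dicts with "journal", "title", and "abstract" fields parsed from the PubMed XML files.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def build_input(resources, per_resource_limit=250, max_length=512):
    """Concatenate journal name, title, and abstract of every resource,
    truncating each resource to 250 tokens and the full input to 512."""
    pieces = []
    for r in resources:
        text = " ".join([r["journal"], r["title"], r["abstract"]])
        # Truncate each individual resource before concatenation.
        tokens = tokenizer.tokenize(text)[:per_resource_limit]
        pieces.append(tokenizer.convert_tokens_to_string(tokens))
    # The tokenizer enforces the overall 512-token limit on the joined sequence.
    return tokenizer(" ".join(pieces), truncation=True, max_length=max_length)
```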
{
"text": "To understand the variance of pretrained language models in this task, we fine-tune each model with 100 different random seeds. For ensemble learning, we first select models with accuracy more than hyper-parameter \u03b1 (values range between 0 and 1) and apply two types of voting mechanism to aggregate the prediction: 1) simple voting based on majority classes, and 2) filtered voting. For the second approach, if the selected n models have an even class distribution, we set class B as the prediction, otherwise normal majority voting is applied. Mathematically, this even prediction is determined based on a threshold \u03b2 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "3"
},
{
"text": "1 3 (|y A \u2212 y B | + |y A \u2212 y C | + |y B \u2212 y C |) \u2264 \u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "3"
},
{
"text": "where y A , y B , y C are the occurrence of class A, B, and C in n models prediction, respectively (meaning y A +y B +y C = n), and |y A \u2212y B | indicates the absolute difference of class A and B occurrence. \u03b2 is a hyper-parameter with values ranging between 0 and n, and \u03b2 < 0 means normal majority voting is applied. All parameters (including \u03b1 and \u03b2) are tuned based on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "3"
},
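A minimal sketch of the filtered voting rule above (illustrative code, not taken from the paper); `predictions` is a hypothetical list of per-model class labels for a single example.

```python
from collections import Counter

def filtered_vote(predictions, beta):
    """Aggregate one example's per-model predictions, e.g. ["A", "B", "B"]."""
    counts = Counter(predictions)
    y_a, y_b, y_c = counts["A"], counts["B"], counts["C"]
    # "Even" class distribution according to the beta threshold -> default to B.
    spread = (abs(y_a - y_b) + abs(y_a - y_c) + abs(y_b - y_c)) / 3
    if beta >= 0 and spread <= beta:
        return "B"
    # Otherwise standard majority voting (beta < 0 always lands here).
    return counts.most_common(1)[0][0]
```

With beta < 0 the even-distribution branch never fires, so the function reduces to plain majority voting, which matches the role of \u03b2 = \u22121 in the grid search.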
{
"text": "We use the huggingface Pytorch framework (Wolf et al., 2020) for the experiments. 4 In total, there are 6 models: 1) Biomed BERT, 5 2) Biomed RoBERTa, 6 , 3) Biomed RoBERTa (TAPT), 4) RoBERTa, 7 5) ELECTRA, 8 6) ELECTRA (large). 9 Each model is fine-tuned for 20 epochs with a batch size of 10, warm-up of 10% of the total steps, learning rate of 5e-5, Adam optimizer with epsilon of 1e-8, and early stopping with patience of 5.",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 82,
"end": 83,
"text": "4",
"ref_id": null
},
{
"start": 151,
"end": 152,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Set-up",
"sec_num": "4.1"
},
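A minimal sketch of this fine-tuning configuration with the HuggingFace Trainer API; it approximates the stated hyper-parameters and is not the authors' script. The dataset objects `train_ds` and `dev_ds` are assumed to already provide tokenized inputs with 3-way labels.

```python
from transformers import (AutoModelForSequenceClassification, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "google/electra-base-discriminator", num_labels=3)

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=20,
    per_device_train_batch_size=10,
    learning_rate=5e-5,
    adam_epsilon=1e-8,
    warmup_ratio=0.1,                  # warm-up over 10% of total steps
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,       # needed for early stopping
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    seed=42,                           # the paper varies this over 100 seeds
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,            # assumed: tokenized training split
    eval_dataset=dev_ds,               # assumed: tokenized development split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
)
trainer.train()
```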
{
"text": "In this work, accuracy is used as the primary evaluation metric, following ALTA 2021 shared task description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Set-up",
"sec_num": "4.1"
},
{
"text": "In Table 2 , we report the aggregate score (mean, max, min, std) of 100 runs of each models. First, we observe that Biomed RoBERTa has the highest average performance of 59.5, but only 0.3 higher than ELECTRA. In fact, Domain-generic models such as RoBERTa and ELECTRA outperform Biomed BERT and Biomed RoBERTa (TAPT), despite their domain/task-adaptive pretraning. We also found that even with 100 different random 4 https://huggingface.co/ 5 microsoft/BiomedNLP-PubMedBERT-baseuncased-abstract-fulltext 6 allenai/biomed roberta base 7 roberta-base 8 google/electra-base-discriminator 9 google/electra-large-discriminator seeds, all models still have relatively high variance (std) with more than 2 points. ELECTRA (large) suffers worst from this issue, compared to the other models.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results over Development Set",
"sec_num": "4.2"
},
{
"text": "In Table 3 , we describe the main experiment results. For baselines, we run unigram and bigram representation with Naive Bayes and Logistic Regression, and found the results are less optimal. For the ensemble method, we perform grid search over \u03b1 \u2208 {0.60, 0.61, 0.62, 0.63, 0.64, 0.65} and \u03b2 \u2208 {\u22121, 0, .., n}. n is number of models after filtered by parameter \u03b1. Ensemble results presented in Table 3 use the best combinations of \u03b1 and \u03b2.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 393,
"end": 400,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results over Development Set",
"sec_num": "4.2"
},
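The grid search over \u03b1 and \u03b2 can be sketched as below (illustrative only, reusing the hypothetical filtered_vote() from the earlier snippet); `runs` is assumed to be a list of (dev_accuracy, dev_predictions) pairs, one per fine-tuned model, and `dev_labels` the gold development labels.

```python
def grid_search(runs, dev_labels, alphas=(0.60, 0.61, 0.62, 0.63, 0.64, 0.65)):
    """Return the best development accuracy and the (alpha, beta) that achieves it."""
    best_acc, best_params = 0.0, None
    for alpha in alphas:
        # Keep only models whose development accuracy exceeds alpha.
        selected = [preds for acc, preds in runs if acc > alpha]
        if not selected:
            continue
        for beta in range(-1, len(selected) + 1):   # beta in {-1, 0, ..., n}
            # Ensemble the selected models' predictions example by example.
            ensembled = [filtered_vote(votes, beta) for votes in zip(*selected)]
            acc = sum(p == g for p, g in zip(ensembled, dev_labels)) / len(dev_labels)
            if acc > best_acc:
                best_acc, best_params = acc, (alpha, beta)
    return best_acc, best_params
```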
{
"text": "First, we perform ensemble method with all 500 \"base\" models from Table 2 , and obtain accuracy of 69.7, 2 points higher than the best Biomed RoBERTa model (max in Table 2 ). 8 selected models after filtering with \u03b1 are 2 Biomed RoBERTa, 2 Biomed RoBERTA (TAPT), 2 Biomed BERT, and 2 ELECTRA. In the next results, we also perform a grid search for each 6 pretrained language models (each initially has 100 models), and found that ELECTRA performs best with an accuracy of 70.2, outperforming all domain-specific models.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 2",
"ref_id": null
},
{
"start": 164,
"end": 171,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results over Development Set",
"sec_num": "4.2"
},
{
"text": "Another thing to note is that parameter \u03b2 or fil- tered voting mechanism is not significant except for Biomed RoBERTa. From Table 3 we can see that the optimal combinations of \u03b1 and \u03b2 for 5 ensemble models have \u03b2 = \u22121, which indicates that the standard majority voting solely can yield the optimal result.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results over Development Set",
"sec_num": "4.2"
},
{
"text": "We pick the three best models for ALTA 2021 shared task submission as shown in Table 4 . These models are the ensemble methods from Table 3 : 1) All 500 \"base\" models, 2) ELECTRA, and 3) ELECTRA (large). We observe that the gap between development and test set is high, roughly 20 points, which can be due to overfitting problems and small training sets. The best models on the test set are ELECTRA and ELECTRA (large) with the accuracies of 50.2 and 53.6, respectively. Our best result with ELECTRA (large) put us in the first rank on the leaderboard. 10",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 132,
"end": 139,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results over Test Set",
"sec_num": "4.3"
},
{
"text": "10 The committee limits three submissions for each team. At the end of the competition, ELECTRA result with accuracy 50.2 is picked and put us in the second rank. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results over Test Set",
"sec_num": "4.3"
},
{
"text": "Figure 3 describes label distributions on development and test sets using our best model, ELECTRA (large). First, we found that the model tends to predict class B on the development, with a disparity of +23 instances with the gold label B. In contrast, the model only classifies 31 instances as class C, despite being there 50 gold labels C. Lastly, our final prediction in the test sets has a ratio of 40:109:34 of class A:B:C, respectively, and the graph in Figure 3 describes a similar shape with the development set prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 460,
"end": 468,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": "5"
},
{
"text": "In conclusion, we have shown in this experiment that grading evidence in the medical literature is a challenging task, and modern pretrained language models suffer from high-variance issues. Interestingly, we found that ELECTRA, the domaingeneral models outperform domain-specific models through ensemble methods. We argue that this is because discourse is one of the relevant features for this task. This is in line with that has shown that the last layer of ELECTRA contains the richest latent discourse representation, compared to BERT, RoBERTa, ALBERT (Lan et al., 2019) , GPT2 (Radford et al., 2019) , BART (Lewis et al., 2020) , and T5 (Raffel et al., 2019) .",
"cite_spans": [
{
"start": 556,
"end": 574,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 582,
"end": 604,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 612,
"end": 632,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 642,
"end": 663,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": "5"
},
{
"text": "Our best result with ELECTRA (large) and ELECTRA (base) put us in the first and second rank on the leaderboard, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.alta.asn.au/events/ sharedtask2021/index.html 3 https://pubmed.ncbi.nlm.nih.gov/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "In this work, Fajri Koto is supported by the Australia Awards Scholarship (AAS), funded by the Department of Foreign Affairs and Trade (DFAT), Australia. Biaoyan Fang is supported by a graduate research scholarship from the Melbourne School of Engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Strength of recommendation taxonomy (sort): a patient-centered approach to grading evidence in the medical literature",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Ebell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siwek",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Barry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Woolf",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Susman",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Ewigman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2004,
"venue": "The Journal of the American Board of Family Practice",
"volume": "17",
"issue": "1",
"pages": "59--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark H Ebell, Jay Siwek, Barry D Weiss, Steven H Woolf, Jeffrey Susman, Bernard Ewigman, and Mar- jorie Bowman. 2004. Strength of recommenda- tion taxonomy (sort): a patient-centered approach to grading evidence in the medical literature. The Journal of the American Board of Family Practice, 17(1):59-67.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ChEMU-ref: A corpus for modeling anaphora resolution in the chemical domain",
"authors": [
{
"first": "Biaoyan",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Druckenbrodt",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saber",
"suffix": ""
},
{
"first": "Jiayuan",
"middle": [],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1362--1375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biaoyan Fang, Christian Druckenbrodt, Saber A Akhondi, Jiayuan He, Timothy Baldwin, and Karin Verspoor. 2021. ChEMU-ref: A corpus for model- ing anaphora resolution in the chemical domain. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1362-1375, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedical natural language processing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8342--8360",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.740"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Grading the quality of medical evidence",
"authors": [
{
"first": "Binod",
"middle": [],
"last": "Gyawali",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Yassine",
"middle": [],
"last": "Benajiba",
"suffix": ""
}
],
"year": 2012,
"venue": "BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "176--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binod Gyawali, Thamar Solorio, and Yassine Bena- jiba. 2012. Grading the quality of medical evi- dence. In BioNLP: Proceedings of the 2012 Work- shop on Biomedical Natural Language Processing, pages 176-184, Montr\u00e9al, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved document modelling with a neural discourse parser",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association",
"volume": "",
"issue": "",
"pages": "67--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2019. Improved document modelling with a neural dis- course parser. In Proceedings of the The 17th An- nual Workshop of the Australasian Language Tech- nology Association, pages 67-76, Sydney, Australia. Australasian Language Technology Association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discourse probing of pretrained language models",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3849--3864",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.301"
]
},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2021. Discourse probing of pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 3849-3864, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "IndoBERTweet: A pretrained language model for Indonesian twitter with effective domainspecific vocabulary initialization",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.04607"
]
},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2021. IndoBERTweet: A pretrained language model for Indonesian twitter with effective domain- specific vocabulary initialization. arXiv preprint arXiv:2109.04607.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Top-down discourse parsing via sequence labelling",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "715--726",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2021. Top-down discourse parsing via sequence labelling. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 715-726, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "IndoLEM and IndoBERT: A benchmark dataset and pre-trained language model for Indonesian NLP",
"authors": [
{
"first": "Fajri",
"middle": [],
"last": "Koto",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "757--770",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.66"
]
},
"num": null,
"urls": [],
"raw_text": "Fajri Koto, Afshin Rahimi, Jey Han Lau, and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A bench- mark dataset and pre-trained language model for In- donesian NLP. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 757-770, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7203--7219",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.645"
]
},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic grading of evidence: the 2011 ALTA shared task",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Molla",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "4--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Molla and Abeed Sarker. 2011. Automatic grad- ing of evidence: the 2011 ALTA shared task. In Pro- ceedings of the Australasian Language Technology Association Workshop 2011, pages 4-8, Canberra, Australia.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evidence based medicine: what it is and what it isn't",
"authors": [
{
"first": "L",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sackett",
"suffix": ""
},
{
"first": "M",
"middle": [
"C"
],
"last": "William",
"suffix": ""
},
{
"first": "Ja Muir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Gray",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haynes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scott Richardson",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David L Sackett, William MC Rosenberg, JA Muir Gray, R Brian Haynes, and W Scott Richardson. 1996. Evidence based medicine: what it is and what it isn't.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic evidence quality prediction to support evidence-based decision making. Artificial intelligence in medicine",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Moll\u00e1",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Paris",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "64",
"issue": "",
"pages": "89--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abeed Sarker, Diego Moll\u00e1, and C\u00e9cile Paris. 2015. Automatic evidence quality prediction to support evidence-based decision making. Artificial intelli- gence in medicine, 64(2):89-103.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "33rd Annual Conference on Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "3261--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019, volume 32, pages 3261-3275.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Drame",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving multilabel emotion classification via sentiment classification with dual attention transfer network",
"authors": [
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Karuturi",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Brendel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1097--1102",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1137"
]
},
"num": null,
"urls": [],
"raw_text": "Jianfei Yu, Lu\u00eds Marujo, Jing Jiang, Pradeep Karu- turi, and William Brendel. 2018. Improving multi- label emotion classification via sentiment classifica- tion with dual attention transfer network. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1097- 1102, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Sample training data from ALTA 2021 shared task.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Filtered ensemble model used in this task.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Label distributions on development and test set using ELECTRA (large).",
"num": null
},
"TABREF1": {
"text": "Overall statistics of the ALTA 2021 shared task dataset. Evidence classes in test dataset are withheld by the organizer. \"Ave. resources per evidence\" means the average number of XML files the evidence has. \"Ave. words per abstract\" means the average number of words per single abstract. \"Ave. words per evidence\" means the average number of words per evidence, including journal name, title and abstract.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Results of baseline vs. ensemble methods on the development set. Parameter \u03b1 and \u03b2 are selected based on the grid search.",
"html": null,
"content": "<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td/><td>Dev Test</td></tr><tr><td colspan=\"2\">All 500 \"base\" models 69.7 49.7</td></tr><tr><td>ELECTRA</td><td>70.2 50.2</td></tr><tr><td>ELECTRA (large)</td><td>67.4 53.6</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "Results of selected model (for shared task submission) on the development and test set.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}