|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:22:24.002106Z" |
|
}, |
|
"title": "Task Transfer and Domain Adaptation for Zero-Shot Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Sheng", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Shimshoni", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Singhal", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Avirup", |
|
"middle": [], |
|
"last": "Sil", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domainspecific downstream tasks. We evaluate zeroshot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labelled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domainspecific reading comprehension tasks in 3 out of 4 domains.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domainspecific downstream tasks. We evaluate zeroshot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labelled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domainspecific reading comprehension tasks in 3 out of 4 domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Pretrained language models (Liu et al., 2019; Wolf et al., 2020) require substantial quantities of labeled data to learn downstream tasks. For domains that are novel or where labeled data is in short supply, supervised learning methods may not be suitable (Zhang et al., 2020; Madasu and Rao, 2020; Rietzler et al., 2020) . Collecting sufficient quantities of labeled data for each new application can be resource intensive, especially when aiming for both a specific task type and a specific data domain. By traditional transfer learning methods, it is prohibitively difficult to fine-tune a pretrained model on a domain-specific downstream task for which there is no existing training data. In light of this, we would like to use more readily available labeled indomain data from unrelated tasks to domain-adapt our fine-tuned model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 45, |
|
"text": "(Liu et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 46, |
|
"end": 64, |
|
"text": "Wolf et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 276, |
|
"text": "(Zhang et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 298, |
|
"text": "Madasu and Rao, 2020;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 321, |
|
"text": "Rietzler et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we consider a problem setting where we have a domain-specific target task (QA) for which we do not have any in-domain training * Equal Contribution data (SQuAD). However, we assume that we have generic training data for the target task type, and in-domain training data for another task. To address this problem setting, we present Task and Domain Adaptive Pretraining (T+DAPT), a technique that combines domain adaptation and task adaptation to improve performance in downstream target tasks. We evaluate the effectiveness of T+DAPT in zero-shot domain-specific machine reading comprehension (MRC) (Hazen et al., 2019; Reddy et al., 2020; Wiese et al., 2017) by pretraining on in-domain NER data and fine-tuning for generic domain-agnostic MRC on SQuADv1 (Rajpurkar et al., 2018) , combining knowledge from the two different tasks to achieve zero-shot learning on the target task. We test the language model's performance on domain-specific reading comprehension data taken from 4 domains: News, Movies, Biomedical, and COVID-19. In our experiments, RoBERTa-Base models trained using our approach perform favorably on domain-specific reading comprehension tasks compared to baseline RoBERTa-Base models trained on SQuAD as well as Domain Adaptive Pretraining (DAPT). Our code is publicly available for reference. 1 We summarize our contributions as follows:", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 614, |
|
"end": 634, |
|
"text": "(Hazen et al., 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 635, |
|
"end": 654, |
|
"text": "Reddy et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 674, |
|
"text": "Wiese et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 795, |
|
"text": "(Rajpurkar et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1329, |
|
"end": 1330, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose Task and Domain Adaptive Pretraining (T+DAPT) combining domain adaptation and task adaptation to achieve zeroshot learning on domain-specific downstream tasks. \u2022 We experimentally validate the performance of T+DAPT, showing our approach performs favorably compared to both a previous approach (DAPT) and a baseline RoBERTa finetuning approach. \u2022 We analyze the adaptation performance on different domains, as well as the behavior of DAPT and T+DAPT under various experimental conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It has been shown that pretrained language models can be domain-adapted with further pretraining (Pruksachatkun et al., 2020) on unlabeled indomain data to significantly improve the language model's performance on downstream supervised tasks in-domain. This was originally demonstrated by BioBERT . Gururangan et al. (2020) further explores this method of domain adaptation via unsupervised pretraining, referred to as Domain-Adaptive Pretraining (DAPT), and demonstrates its effectiveness across several domains and data availability settings. This procedure has been shown to improve performance on specific domain reading comprehension tasks, in particular in the biomedical domain (Gu et al., 2021) . In this paper, as a baseline for comparison, we evaluate the performance of DAPT-enhanced language models in their respective domains, both in isolation with SQuAD1.1 fine-tuning and in conjunction with our approach that incorporates the respective domain's NER task. DAPT models for two of our domains, News and Biomedical, are initialized from pretrained weights as provided by the authors of Gururangan et al. (2020). We train our own DAPT baselines on the Movies and COVID-19 domains. Xu et al. (2020) explore methods to reduce catastrophic forgetting during language model fine-tuning. They apply topic modeling on the MS MARCO dataset (Bajaj et al., 2018) to generate 6 narrow domain-specific data sets, from which we use BioQA and MoviesQA as domain-specific reading comprehension benchmarks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 125, |
|
"text": "(Pruksachatkun et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 702, |
|
"text": "(Gu et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1346, |
|
"end": 1366, |
|
"text": "(Bajaj et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We aim to achieve zero-shot learning for an unseen domain-specific MRC task by fine-tuning on both a domain transfer task and a generic MRC task. The model is initialized by pretrained RoBERTa weights (Liu et al., 2019) , then fine-tuned using our approach with a domain-specific supervised task to augment domain knowledge, and finally trained on SQuAD to learn generic MRC capabilities to achieve zero-shot MRC in the target domain on an unseen domain-specific MRC task without explicitly training on the final task. This method is illustrated in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 219, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 549, |
|
"end": 557, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We explore the performance of this approach in the Movies, News, Biomedical, and COVID-19 domains. Specifically, our target domain-specific MRC tasks are MoviesQA (Xu et al., 2020) , NewsQA (Trischler et al., 2017) , BioQA (Xu et al., 2020) , and CovidQA , respectively. We choose to use named entity recognition (NER) as our supervised domain adaptation task for all four target domains, as labeled NER data is widely available across various domains. Furthermore, NER and MRC share functional similarities, as both rely on identifying key tokens in a text as entities or answers. The domain-specific NER tasks are performed using supervised training data from the MIT Movie Corpus (Liu et al., 2013) , CoNLL 2003 News NER (Tjong Kim Sang and De Meulder, 2003) , NCBI-Disease (Dogan et al., 2014) and COVID-NER 2 . The domain-specific language modeling tasks for DAPT are performed using unsupervised text from IMDB (Maas et al., 2011) , the RealNews Corpus (Zellers et al., 2020) , the Semantic Scholar Open Research Corpus and the Covid-19 Corpus 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 180, |
|
"text": "(Xu et al., 2020)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 214, |
|
"text": "(Trischler et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 240, |
|
"text": "(Xu et al., 2020)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 701, |
|
"text": "(Liu et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 761, |
|
"text": "(Tjong Kim Sang and De Meulder, 2003)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 936, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 959, |
|
"end": 981, |
|
"text": "(Zellers et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We compare our approach (T+DAPT) to a previous approach (DAPT) as well as a baseline model. For the baseline, the pretrained RoBERTa-Base model is fine-tuned on SQuAD and evaluated on domain-specific MRC without any domain adaptation. In the DAPT approach, RoBERTa-Base is first initialized with fine-tuned DAPT weights (NewsRoBERTa and BioRoBERTa) provided by Gururangan et al. (2020) or implemented ourselves using the methodology described in Gururangan et al. (2020) and different Movies and COVID-19 datasets (Maas et al., 2011; Danescu-Niculescu-Mizil and Lee, 2011; Pang et al., 2019) . These models are initialized by DAPT weights-which have been fine-tuned beforehand on unsupervised text corpora for domain adaptation-from the Hugging-Face model hub (Wolf et al., 2020) , fine-tuned on SQuAD, and evaluated on domain-specific MRC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 533, |
|
"text": "(Maas et al., 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 572, |
|
"text": "Danescu-Niculescu-Mizil and Lee, 2011;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 591, |
|
"text": "Pang et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 779, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We compare the effectiveness of our approach, which uses NER instead of language modeling 2 https://github.com/tsantosh7/ COVID-19-Named-Entity-Recognition 3 https://github.com/davidcampos/ covid19-corpus (as in DAPT) for the domain adaptation method in a sequential training regime. Our experiments cover every combination of domain (Movies, News, Biomedical, or COVID) and domain adaptation method (T+DAPT which uses named entity recognition vs. DAPT which uses language modeling vs. baseline with no domain adaptation at all).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our results are presented in Table 2 . We use F1 score to evaluate the QA performance of each model in its target domain. In our experiments, DAPT performs competitively with baseline models and outperforms in one domain (CovidQA). Our T+DAPT approach (RoBERTA + Domain NER + SQuAD) outperforms the baseline in three out of four domains (Movies, Biomedical, COVID) and outperforms DAPT in three out of four domains (Movies, News, Biomedical). We also test a combination of DAPT and T+DAPT by retraining DAPT models on domain NER then SQuAD, and find that this combined approach underperforms compared to either T+DAPT alone or DAPT alone in all four domains. We further discuss the possible reasons for these results in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 36, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Specific domains learn from adaptation: Our approach shows promising performance gains when used for zero-shot domain-specific question answering, particularly in the biomedical, movies, and COVID domains, where the MRC datasets were designed with the evaluation of domainspecific features in mind. Performance gains are less apparent in the News domain, where the NewsQA dataset was designed primarily to evaluate causal reasoning and inference abilitieswhich correlate strongly with SQuAD and base- line RoBERTa pretraining-rather than domainspecific features and adaptation. The lack of performance gains from either T+DAPT or DAPT in the News domain could also possibly be attributed to the nature of the domain: Gururangan et al. 2020found that the News domain had the highest vocabulary overlap of any domain (54.1%) with the RoBERTa pretraining corpus, so the baseline for this domain could have had an advantage in the News domain that would be lost due to catastrophic forgetting while little relevant knowledge is gained from domain adaptation. We perform follow-up experiments with varying amounts of epochs and training data in SQuAD fine-tuning to analyze the tradeoff between more thorough MRC fine-tuning and better preservation of source domain knowledge from DAPT and auxiliary domain adaptation tasks. The results from these runs are in the Appendix (Table 4) . When does DAPT succeed or fail: In zeroshot QA, DAPT performs competitively with the baseline in all domains and outperforms in the COVID domain. This builds upon the results of Gururangan et al. (2020), which reports superior performance on tasks like relation classification, sentiment analysis, and topic modeling, but does not address reading comprehension tasks, which DAPT may not have originally been optimized for. Unsupervised language modeling may not provide readily transferable features for reading comprehension, as opposed to NER which identifies key tokens and classifies those tokens into specific entities. These entities are also often answer tokens in reading comprehension, lending to transferable representations between NER and reading comprehension. Another possible factor is that RoBERTa was pretrained on the English Wikipedia corpus, the same source that the SQuAD questions were drawn from. Because of this, it is possible that pretrained RoBERTa already has relevant representations that would provide an intrinsic advantage for SQuAD-style reading comprehension which would be lost due to catastrophic forgetting after retraining on another large language modeling corpus in DAPT.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1368, |
|
"end": 1377, |
|
"text": "(Table 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the COVID domain, we use the article dataset from . These articles also make the basis for the CovidNER and CovidQA datasets, which may explain the large performance improvement from DAPT in this domain. These results suggest that the performance of DAPT is sensitive to the similarity of its language modeling corpus to the target task dataset. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluate the performance of our T+DAPT approach with domain-specific NER, achieving positive results in a zero-shot reading comprehension setting in four different domain-specific QA datasets. These results indicate that our T+DAPT approach robustly improves performance of pretraining language models in zero-shot domain QA across several domains, showing that T+DAPT is a promising approach to domain adaptation in lowresource settings for pretrained language models, particularly when directly training on target task data is difficult.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In future work, we intend to explore various methods to improve the performance of T+DAPT by remedying catastrophic forgetting and maximizing knowledge transfer. For this we hope to emulate the regularization used by Xu et al. (2020) and implement multi-task learning and continual learning methods like AdapterNet (Hazan et al., 2018) . In order to improve the transferability of learned features, we will explore different auxiliary tasks such as NLI and sentiment analysis in addition to few-shot learning approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 233, |
|
"text": "Xu et al. (2020)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 335, |
|
"text": "(Hazan et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Question answering systems are useful tools in complement to human experts, but the \"word-of-BioQA Samples Q: what sugar is found in rna DAPT: ribose, whereas the sugar in DNA is deoxyribose T+DAPT: ribose Q: normal blood pressure range definition DAPT: 120 mm Hg1 T+DAPT: a blood pressure of 120 mm Hg1 when the heart beats (systolic) and a blood pressure of 80 mm Hg when the heart relaxes (diastolic) MoviesQA Samples Q: what is cyborgs real name DAPT: Victor Stone/Cyborg is a hero from DC comics most famous for being a member of the Teen Titans T+DAPT: Victor Stone Q: who plays klaus baudelaire in the show DAPT: Liam Aiken played the role of Klaus Baudelaire in the 2004 movie A Series of Unfortunate Events. T+DAPT: Liam Aiken Table 3 : Samples from BioQA and MoviesQA where T+DAPT achieves exact match with the label answer, and DAPT produces a different answer. Answers from each approach are shown side-by-side for comparison. machine effect\" (Longoni and Cian, 2020) demonstrates the effects of a potentially dangerous overtrust in the results of such systems. While the methods proposed in this paper would allow more thorough usage of existing resources, they also bestow confidence and capabilities to models which may not have much domain expertise. T+DAPT models aim to mimic extensively domain-trained models, which are themselves approximations of real experts or source documents. Use of domain adaptation methods for low-data settings could propagate misinformation from a lack of source data. For example, while making an information-retrieval system for biomedical and COVID information could become quicker and less resource-intensive using our approach, people should not rely on such a system for medical advice without extensive counsel from a qualified medical professional.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 736, |
|
"end": 743, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ethical Considerations", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/adityaarunsinghal/ Domain-Adaptation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Additional experiments in the COVID domain with different auxiliary tasks are presented in the Appendix A.1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Freezing Layer -We tried to freeze the bottom layer after NER training and only train the QA layer on SQuAD, the performance is worse than finetuning the whole RoBERTa and QA layer. NER and QA may not rely on the exact same features for the final task which may be the reason that freezing causes a performance decrease. Different Training Epoch and Training Examples -When selecting the best performance model, we use a validation set in target domain to evaluate the performance. From Table 5 , we show our trials with different amounts of SQuAD training in the News Domain and how it affected performance in NewsQA.Different Training Order -We tried to use different training order, for example, we train on SQuAD1.1 task first and then on NER, the F1 score is 42.15 in CovidQA, which has some improvement, but QA as the last task performs better.Another Auxiliary Task -In the Covid domain, we also do experiments on a more QA-relevant task, question classification (QCLS) (Wei et al., 2020) . We show the result in Table 4 . The experiments show that QCLS task have more improvements than NER task. In addition, we test the model trained on CovidQA as the performance upper bound.", |
|
"cite_spans": [ |
|
{ |
|
"start": 977, |
|
"end": 995, |
|
"text": "(Wei et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 494, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1020, |
|
"end": 1027, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.1 Experiment Details and Additional Experiments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Ms marco: A human generated machine reading comprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "Payal", |
|
"middle": [], |
|
"last": "Bajaj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Campos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Craswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rangan", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhaskar", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tri", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mir", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xia", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alina", |
|
"middle": [], |
|
"last": "Stoica", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Tiwary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.09268" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. Ms marco: A human generated machine reading comprehension dataset. arXiv:1611.09268 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Danescu-Niculescu-Mizil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new ap- proach to understanding coordination of linguistic style in dialogs.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Ncbi disease corpus: A resource for disease name recognition and concept normalization", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Rezarta Islamaj Dogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Leaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.jbi.2013.12.006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: A resource for dis- ease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1-10.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Tinn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Lucas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoto", |
|
"middle": [], |
|
"last": "Usuyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific lan- guage model pretraining for biomedical natural lan- guage processing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Adapternet -learning input transformation for domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoel", |
|
"middle": [], |
|
"last": "Shoshan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Khapun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alon Hazan, Yoel Shoshan, Daniel Khapun, Roy Alad- jem, and Vadim Ratner. 2018. Adapternet -learning input transformation for domain adaptation.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Towards domain adaptation from limited data for question answering using deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hazen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shehzaad", |
|
"middle": [], |
|
"last": "Dhuliawala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Boies", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02655" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy J. Hazen, Shehzaad Dhuliawala, and Daniel Boies. 2019. Towards domain adaptation from lim- ited data for question answering using deep neural networks. arXiv:1911.02655 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Bioinformatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/bioinformatics/btz682" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Query understanding enhanced by hierarchical parsing structures", |
|
"authors": [ |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yining", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Cyphers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jim", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and Jim Glass. 2013. Query understanding enhanced by hierarchical parsing structures.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "S2orc: The semantic scholar open research corpus", |
|
"authors": [ |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [ |
|
"Lu" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rodney", |
|
"middle": [], |
|
"last": "Kinney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Dan S. Weld. 2020. S2orc: The semantic scholar open research corpus.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Artificial intelligence in utilitarian vs. hedonic contexts: The \"word-of-machine\" effect", |
|
"authors": [ |
|
{ |
|
"first": "Chiara", |
|
"middle": [], |
|
"last": "Longoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Cian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of Marketing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/0022242920957347" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chiara Longoni and Luca Cian. 2020. Artificial in- telligence in utilitarian vs. hedonic contexts: The \"word-of-machine\" effect. Journal of Marketing.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Maas, Raymond Daly, Peter Pham, Dan Huang, Andrew Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Sequential domain adaptation through elastic weight consolidation for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Avinash", |
|
"middle": [], |
|
"last": "Madasu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vijjini Anvesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.01189" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avinash Madasu and Vijjini Anvesh Rao. 2020. Sequen- tial domain adaptation through elastic weight consol- idation for sentiment analysis. arXiv:2007.01189 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "COVID-QA: A question answering dataset for COVID-19", |
|
"authors": [ |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "M\u00f6ller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Reina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghavan", |
|
"middle": [], |
|
"last": "Jayakumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malte", |
|
"middle": [], |
|
"last": "Pietsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timo M\u00f6ller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020. COVID-QA: A question answering dataset for COVID-19. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Covid-qa: A question answering dataset for covid-19", |
|
"authors": [ |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "M\u00f6ller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Reina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghavan", |
|
"middle": [], |
|
"last": "Jayakumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malte", |
|
"middle": [], |
|
"last": "Pietsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timo M\u00f6ller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020. Covid-qa: A question answer- ing dataset for covid-19.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Thumbs up? sentiment classification using machine learning techniques", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivakumar", |
|
"middle": [], |
|
"last": "Vaithyanathan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2019. Thumbs up? sentiment classification using machine learning techniques.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Bowman. 2020. Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?", |
|
"authors": [ |
|
{ |
|
"first": "Yada", |
|
"middle": [], |
|
"last": "Pruksachatkun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Phang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haokun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyi", |
|
"middle": [], |
|
"last": "Phu Mon Htut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"Yuanzhe" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Vania", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Kann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.48550/ARXIV.2005.00628" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bow- man. 2020. Intermediate-task transfer learning with pretrained models for natural language understand- ing: When and why does it work?", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Squad: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2018. Squad: 100,000+ questions for machine comprehension of text.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Radu Florian, and Salim Roukos. 2020. End-to-end qa on covid-19: Domain adaptation with synthetic training", |
|
"authors": [ |
|
{ |
|
"first": "Bhavani", |
|
"middle": [], |
|
"last": "Revanth Gangi Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Md", |
|
"middle": [], |
|
"last": "Iyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Arafat Sultan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vittorio", |
|
"middle": [], |
|
"last": "Sil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Castelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Revanth Gangi Reddy, Bhavani Iyer, Md Arafat Sultan, Rong Zhang, Avi Sil, Vittorio Castelli, Radu Florian, and Salim Roukos. 2020. End-to-end qa on covid-19: Domain adaptation with synthetic training.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rietzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Stabinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Engl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4933--4941", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4933-4941, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Introduction to the conll-2003 shared task", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1119176.1119195" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task. Proceed- ings of the seventh conference on Natural language learning at HLT-NAACL 2003.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Newsqa: A machine comprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Trischler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingdi", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Bachman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaheer", |
|
"middle": [], |
|
"last": "Suleman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Cord-19: The covid-19 open research dataset", |
|
"authors": [ |
|
{ |
|
"first": "Lucy", |
|
"middle": [ |
|
"Lu" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoganand", |
|
"middle": [], |
|
"last": "Chandrasekhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russell", |
|
"middle": [], |
|
"last": "Reas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiangjiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Burdick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darrin", |
|
"middle": [], |
|
"last": "Eide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathryn", |
|
"middle": [], |
|
"last": "Funk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannis", |
|
"middle": [], |
|
"last": "Katsis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rodney", |
|
"middle": [ |
|
"Michael" |
|
], |
|
"last": "Kinney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Doug Burdick, Darrin Eide, Kathryn Funk, Yannis Katsis, Rodney Michael Kinney, et al. 2020. Cord-19: The covid-19 open research dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "What are people asking about covid-19? a question classification dataset", |
|
"authors": [ |
|
{ |
|
"first": "Jerry", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengyu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soroush", |
|
"middle": [], |
|
"last": "Vosoughi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jerry Wei, Chengyu Huang, Soroush Vosoughi, and Ja- son Wei. 2020. What are people asking about covid- 19? a question classification dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Neural domain adaptation for biomedical question answering", |
|
"authors": [ |
|
{ |
|
"first": "Georg", |
|
"middle": [], |
|
"last": "Wiese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Weissenborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariana", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-1029" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Georg Wiese, Dirk Weissenborn, and Mariana Neves. 2017. Neural domain adaptation for biomedical ques- tion answering.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Patrick Von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Hug- gingface's transformers: State-of-the-art natural lan- guage processing.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J J" |
|
], |
|
"last": "Yepes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.00202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Xu, X. Zhong, A. J. J. Yepes, and J. H. Lau. 2020. Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension. arXiv:1911.00202 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Avirup Sil, and Todd Ward. 2020. Multi-stage pre-training for lowresource domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Revanth", |
|
"middle": [], |
|
"last": "Gangi Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Md", |
|
"middle": [], |
|
"last": "Arafat Sultan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vittorio", |
|
"middle": [], |
|
"last": "Castelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Ferritto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Efsun", |
|
"middle": [], |
|
"last": "Sarioglu Kayi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.05904" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong Zhang, Revanth Gangi Reddy, Md Arafat Sultan, Vittorio Castelli, Anthony Ferritto, Radu Florian, Ef- sun Sarioglu Kayi, Salim Roukos, Avirup Sil, and Todd Ward. 2020. Multi-stage pre-training for low- resource domain adaptation. arXiv:2010.05904 [cs].", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "sequential transfer learning procedures of T+DAPT, DAPT, and a RoBERTa baseline for zero-shot question answering." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "After its re-opening, which types of movies did the Tower Theatre show? A: second and third run movies, along with classic films NewsQA 934 Q: Who is the struggle between in Rwanda? A: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.", |
|
"html": null, |
|
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Dev Set Sample</td></tr><tr><td colspan=\"3\">MoviesQA 755 Q: BioQA 4,790 Q: What is hemophilia?</td></tr><tr><td/><td/><td>A: a bleeding disorder characterized by low levels of clotting factor proteins.</td></tr><tr><td colspan=\"2\">CovidQA 2,019</td><td>Q: What is the molecular structure of bovine coronavirus?</td></tr><tr><td/><td/><td>A: single-stranded, linear, and nonsegmented RNA</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Overview of the domain-specific MRC datasets used in our experiments. The number of question-answer pairs in the train set and development set for each domain is shown, along with a sample question-answer pair from each domain. The datasets share the same format as SQuAD.", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "F1 score of pretrained RoBERTa-Base models on dev sets of MRC datasets for given domains with the stated retraining regimens", |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |