|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:22:16.795221Z" |
|
}, |
|
"title": "Generating unlabelled data for a tri-training approach in a low resourced NER task", |
|
"authors": [ |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Boulanger", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "91400", |
|
"settlement": "Orsay", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Lavergne", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "91400", |
|
"settlement": "Orsay", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sophie", |
|
"middle": [], |
|
"last": "Rosset", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique", |
|
"institution": "CNRS", |
|
"location": { |
|
"postCode": "91400", |
|
"settlement": "Orsay", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Training a tagger for Named Entity Recognition (NER) requires a substantial amount of labeled data in the task domain. Manual labeling is a tedious and complicated task. Semisupervised learning methods can reduce the quantity of labeled data necessary to train a model. However, these methods require large quantities of unlabeled data, which remains an issue in many cases. We address this problem by generating unlabeled data. Large language models have proven to be powerful tools for text generation. We use their generative capacity to produce new sentences and variations of the sentences of our available data. This generation method, combined with a semi-supervised method, is evaluated on CoNLL and I2B2. We prepare both of these corpora to simulate a low resource setting. We obtain significant improvements for semisupervised learning with synthetic data against supervised learning on natural data.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Training a tagger for Named Entity Recognition (NER) requires a substantial amount of labeled data in the task domain. Manual labeling is a tedious and complicated task. Semisupervised learning methods can reduce the quantity of labeled data necessary to train a model. However, these methods require large quantities of unlabeled data, which remains an issue in many cases. We address this problem by generating unlabeled data. Large language models have proven to be powerful tools for text generation. We use their generative capacity to produce new sentences and variations of the sentences of our available data. This generation method, combined with a semi-supervised method, is evaluated on CoNLL and I2B2. We prepare both of these corpora to simulate a low resource setting. We obtain significant improvements for semisupervised learning with synthetic data against supervised learning on natural data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Training models to solve NER tasks requires a considerable amount of labeled data. In most NLP tasks, this data needs to be related to the task domain and must be in the targeted language. While English is a well-covered language, corpora are still being built to cover new domains or expand existing ones. For any other languages, corpora cover fewer domains. Data in the private sector is rarely shareable due to privacy reasons. It is also the case in domains such as the medical domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent approaches tackle the issue of the absence of resources by leveraging knowledge or data from other sources. Zero-shot learning is a learning paradigm trying to solve a target task without any labeled data. It uses the knowledge of how to predict labels of an adjacent task and applies it to predict the unseen labels of the target task (Wang et al., 2019) . We do not aim to solve the NER problem in a situation with such strict data restrictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 362, |
|
"text": "(Wang et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Labeling a few examples is almost always possible. Few-shot learning provides training methods to generalize from a few labeled examples. These methods use the labeled examples to build representations of the class, which serve as comparison points for inference (Dopierre et al., 2021) . Transfer learning leverages the knowledge learned on tasks of the domain to improve the performance on a specific task (Ruder, 2019) . It is quite common to see cross-lingual transfer from higher-resourced languages where the task exists. However, the most prominent use case of transfer learning in NLP is the use of language models for data representation. We use this type of transfer learning to build high-performing taggers from BERT models. Semisupervised learning is a paradigm where unlabeled data is widely available. The unlabeled data is used to improve the model's performance by giving a better topology of the data space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 286, |
|
"text": "(Dopierre et al., 2021)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 421, |
|
"text": "(Ruder, 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose to use a semi-supervised learning method in a context where data is scarce enough to be fully labeled. We aim to achieve this by using large language models to generate the necessary unlabeled data. We test whether large language models can generate data that make tri-training a viable option in a low-resource context. The performances of our baseline models are compared against the performances of the ensembles of models trained with tri-training on CoNLL (Sang and De Meulder, 2003) and I2B2 (Uzuner et al., 2011) . Significant improvements are observed using our method on the reduced datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 482, |
|
"end": 499, |
|
"text": "De Meulder, 2003)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 530, |
|
"text": "(Uzuner et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Language modeling has already been used as an augmentation method to generate labeled and unlabeled examples for NER in DAGA (Ding et al., 2020) . However, our taggers overperform the taggers presented on the gold standard by 30 points at size 1000 and 9 points at full size. The semisupervised method used in DAGA, self-training, is also prone to errors due to reinforcement of early mistakes. In our case, we generate unlabeled sen-tences using pre-trained large language models. We test this method with subsets of data ranging from 50 examples to 1000 examples vs. over 1000 in DAGA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 144, |
|
"text": "(Ding et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Thus, our main contribution is using out-of-thebox large language models as tools to obtain unlabeled data for semi-supervised learning in NER in a low-resource setting. The code relative to the experiment will be available in a public repository 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Section 2 presents state of the art related to data augmentation, semi-supervised learning in NER, and language modeling. Section 3 presents tritraining (Zhou and Li, 2005) , and how we fit generation into it. Section 4 touches on the technical details of the experiments. Section 5 and 6 are the discussion and the conclusion of the article.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 172, |
|
"text": "(Zhou and Li, 2005)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Learning models in a low-resource setting require extracting every possible information from the available data. Data augmentation is a common technique that creates synthetic data from available data. In Natural Language Processing, augmentation is used across various tasks to help achieve better performances. In classification, techniques such as back-translation (Sennrich et al., 2016) or Easy Data Augmentation (Wei and Zou, 2019) are used. However, in tagging, paraphrasing using back-translation (Neuraz et al., 2018) is not bringing significant improvements. Recent works show that using language models learned on the training data to generate labeled and unlabeled examples can bring improvements (Ding et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 391, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 437, |
|
"text": "(Wei and Zou, 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 526, |
|
"text": "(Neuraz et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 728, |
|
"text": "(Ding et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Inductive semi-supervised learning (Van Engelen and Hoos, 2020) aims at improving the performances of models through the addition of unlabeled data. For Named Entity Recognition, pseudolabeling is a method that has been used (Chen et al., 2019) . Pseudo-labeling is one of the semisupervised learning methods. The unlabeled data receives pseudo-labels from the models trained. This pseudo-labeled data is then used alongside labeled data to train the models. Variants of the method exists (Yarowsky, 1995 ) (McClosky et al., 2006 (Blum and Mitchell, 1998) with varying quantities of models trained. The separation of the data between the different models trained and how the models are used to produce pseudo-labels also creates variants to this method. In our case, we use tri-training (Zhou and Li, 2005) , which uses three models. This method has been used to solve Clinical Concept Extraction in the medical domain (Chen et al., 2019) on new data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 244, |
|
"text": "(Chen et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 504, |
|
"text": "(Yarowsky, 1995", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 529, |
|
"text": ") (McClosky et al., 2006", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 530, |
|
"end": 555, |
|
"text": "(Blum and Mitchell, 1998)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 806, |
|
"text": "(Zhou and Li, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 938, |
|
"text": "(Chen et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Semi-supervised learning methods still require a significant amount of unlabeled data. However, with current advances in language modeling, this method could be improved. Transformer-based models (Vaswani et al., 2017) have been a revolution in the language modeling landscape. From their first iterations like GPT (Radford et al., 2018) to their more recent ones like T5 (Raffel et al., 2020) and GPT-3 (Brown et al., 2020) , transformerbased models have become a staple of Natural Language Processing as fine-tuning or transferring knowledge from these models often outperforms learning a model on the task directly. While our taggers are based on BERT models (Devlin et al., 2018) , we otherwise use the generative power of GPT2 (Radford et al., 2019) to provide unlabeled data for the semi-supervised training. GPT2 has been finetuned and used to generate unlabeled data for classification in a high resource context (He et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 218, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 337, |
|
"text": "(Radford et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 393, |
|
"text": "(Raffel et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 424, |
|
"text": "GPT-3 (Brown et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 683, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 732, |
|
"end": 754, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 921, |
|
"end": 938, |
|
"text": "(He et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This section provides details on the tri-training process for sentence tagging and how we levy language modeling as an unlabeled data provider.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Algorithm 1 Tri-training ( (Zhou and Li, 2005) , (Ruder and Plank, 2018) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 46, |
|
"text": "(Zhou and Li, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "(Ruder and Plank, 2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tri-training", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ") 1: for i \u2208 {1..3} do 2: m i \u2190 train_model(sampling(L), m i ) 3: while Any m i still learns do 4: for i \u2208 {1..3} do 5: L i \u2190 \u2205 6: j, k \u2190 {1..3} \u2212 |i| 7: for x \u2208 U do 8: if m j (x) = m k (x) then 9: L i \u2190 L i \u222a {(x, m j (x))} 10: for i \u2208 {1..3} do 11: m i \u2190 train_model(L i \u222a L, m i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tri-training", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Tri-training is an inductive semi-supervised learning (Van Engelen and Hoos, 2020) method using an ensemble of three models. The models are trained in a supervised learning manner on a set of labeled and pseudo-labeled data. As we try to solve L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tri-training", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "U Figure 1 : Tri-training with unlabeled data U generation. In rectangles are the data sets, and in rounded rectangles are the different models. The procedure is shown at episode t for model m i . The initialization is not represented and is done by sampling with replacement from L.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 10, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a NER task, the models we use for the ensemble are taggers. Further description of the taggers can be found in the experiments section. We describe the Algorithm 1 in the following paragraphs, and we show our additions in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 230, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tri-training. Tri-training is an episodic training method that stops when each model of the ensemble has stopped improving. The most crucial feature of tri-training is the construction of the training set of the models. This is shown from line 4 to line 9 in Algorithm 1 and in the second line of Figure 1. For each model m i , a pseudo-labeled set L i is constructed. L i is composed of the unlabeled sentences x \u2208 U for which the predictions of the models m j and m k i / \u2208 {j, k} are equal. These predictions are added to L i alongside x as their pseudo-labels. A threshold can also be used to remove uncertain annotations. However, it was concluded that it was not necessary for simple tri-training (Ruder and Plank, 2018) . The models are then trained on both the natural and synthetic data L \u222a L i . L is the labeled training data. In our case, it represents any subset of the training corpus made for the low resource setting as explained in section 4.2. The operations described above are repeated until all models have stopped learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 703, |
|
"end": 726, |
|
"text": "(Ruder and Plank, 2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
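
{

"text": "To make the pseudo-labeling step above concrete, here is a minimal Python sketch of one tri-training episode. The train_fn and predict_fn callables are hypothetical placeholders for our tagger training and inference routines; this is an illustration of the procedure, not the actual implementation.\n\ndef tri_training_episode(models, labeled, unlabeled, train_fn, predict_fn):\n    # models: list of three taggers; labeled: list of (sentence, tags) pairs;\n    # unlabeled: list of sentences; train_fn(data, model) -> model;\n    # predict_fn(model, sentence) -> tag sequence.\n    new_models = []\n    for i in range(3):\n        j, k = [m for m in range(3) if m != i]\n        pseudo = []\n        for sentence in unlabeled:\n            tags_j = predict_fn(models[j], sentence)\n            # keep x only when the two other taggers agree on its labels\n            if tags_j == predict_fn(models[k], sentence):\n                pseudo.append((sentence, tags_j))\n        # retrain m_i on the natural data L plus its pseudo-labeled set L_i\n        new_models.append(train_fn(labeled + pseudo, models[i]))\n    return new_models",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GPT2",

"sec_num": null

},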
|
{ |
|
"text": "Initialization. The central part of Algorithm 1 described above assumes that models are sufficiently trained and different to create varied pseudo-labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To achieve these prerequisites, we pre-train the models. The models m i are pre-trained on different random subsets of the labeled data L. These subsets are made by sampling with replacement from the training set. This operation is also referred to as bootstrap sampling in (Zhou and Li, 2005) . Sampling the pre-training data is done to introduce variety in the train sets of the three models without incurring performance losses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 293, |
|
"text": "(Zhou and Li, 2005)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
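
{

"text": "As an illustration of this initialization step, bootstrap sampling of the pre-training sets can be written as below; random.choices performs sampling with replacement, and train_fn again stands in for our tagger training routine (the names are placeholders, not the actual code).\n\nimport random\n\ndef init_tri_training(labeled, train_fn, seed=0):\n    # pre-train three taggers on bootstrap samples of the labeled set L\n    rng = random.Random(seed)\n    models = []\n    for _ in range(3):\n        sample = rng.choices(labeled, k=len(labeled))  # sampling with replacement\n        models.append(train_fn(sample, None))\n    return models",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GPT2",

"sec_num": null

},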
|
{ |
|
"text": "Inference. For inference, we obtain an ensemble of 3 different models that can be used together with a voting system. We keep the labels with the highest summed score across the three models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a semi-supervised learning algorithm, tritraining requires a substantial amount of unlabeled examples. The specificity of our study is the use of a generator to create the unlabeled examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GPT2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Applying semi-supervised learning methods is more complicated when there is no unlabeled data. We used the text of the labeled data as the context for the generation model. We use the generation model in two different ways: (i) follow-up sentence generation and (ii) sentence completion, as shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 308, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The first generation method we use is follow-up sentence generation. Large language models like GPT-2 (Radford et al., 2019) are trained on texts containing multiple sentences. This kind of model should be able to generate the follow-up sentence from the context. Using these models out-of-thebox should work without any finetuning. We apply follow-up sentence generation to generate new examples. With this method, we aim to generate new sentences that are within the same domain but have different structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 124, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The second method we use is sentence completion. We remove the end of the sentence and complete it using the language model for this method. We aim to generate alternative contexts to the part of the sentence we keep with this method. While this method might bring more variations by taking out random portions of the sentences, it is easier to use this way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation", |
|
"sec_num": "3.2" |
|
}, |
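
{

"text": "The two generation modes can be sketched with the HuggingFace GPT-2 implementation used later in Section 4.4. The sampling parameters and the cut ratio shown here are illustrative defaults, not necessarily the exact experimental settings.\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\n\ndef generate(context, max_new_tokens=40):\n    # the decoded output includes the prompt, so callers can strip it if needed\n    inputs = tokenizer(context, return_tensors='pt')\n    output = model.generate(**inputs, do_sample=True, top_p=0.95,\n                            max_new_tokens=max_new_tokens,\n                            pad_token_id=tokenizer.eos_token_id)\n    return tokenizer.decode(output[0], skip_special_tokens=True)\n\ndef follow_up(sentence):\n    # (i) follow-up sentence generation: new text produced after the context\n    return generate(sentence)[len(sentence):].strip()\n\ndef completion(sentence, keep=0.5):\n    # (ii) sentence completion: keep the beginning, let the model finish it\n    words = sentence.split()\n    prefix = ' '.join(words[:max(1, int(len(words) * keep))])\n    return generate(prefix)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generation",

"sec_num": "3.2"

},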
|
{ |
|
"text": "We aim at evaluating whether the data generated with large language models is of sufficient quality to serve as unlabeled data in a tri-training scenario. To that end, we evaluate the performances of the This is an example That would be it's follow-up This is the completion Figure 2 : Generation methods examples. In blue is the initial example and in red is the generated text. The first generated example is from sentence follow-up, and the second is from sentence completion.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 283, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "tri-trained models against the performances of a single model trained on the same amount of labeled natural data. We do not reduce the size of our testing sets as we aim to compare our method to existing results. Our evaluation is comparative between our tri-training method and no augmentation method. We want to see whether there are increases in performance in a low-resource setting. Comparisons are made between a tagger trained on one subset against the ensemble of taggers obtained via our tritraining and generation method on the same subset. The sampling of subsets is seeded as explained in Section 4.2. We average results over those seeds to reduce the impact of selection biases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
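
{

"text": "Assuming the per-seed F1 scores of the tri-trained ensemble and of the baseline are collected in two parallel lists, the comparison amounts to the mean of the per-seed differences, as in this small sketch:\n\ndef average_delta(tri_f1_per_seed, baseline_f1_per_seed):\n    # mean F1 difference over the sampling seeds (ten in our setting)\n    deltas = [t - b for t, b in zip(tri_f1_per_seed, baseline_f1_per_seed)]\n    return sum(deltas) / len(deltas)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "3.3"

},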
|
{ |
|
"text": "In this section, we describe the technical details of the experiments and explain the variants tested.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The task we are working on is the Named Entity Recognition (NER) task. The goal of this task is to find mentions associated with certain concepts in sequences of text. In practice, this is done by assigning labels representing the concepts and the position within the mention to each of the tokens of the text. The corpora we are using are CoNLL 2003 English (Sang and De Meulder, 2003) and I2B2 (Uzuner et al., 2011) . CoNLL is a corpus of Reuters news annotated with four different concepts: person, location, organization, and miscellaneous. The difficulties of this corpus reside in the various types of information portrayed within. From geopolitical news to tables of sports results, the input format varies greatly. I2B2 is a corpus of medical records annotated with three different concepts: problem, treatment, and test. These corpora are classic corpora for the NER task and cover (Ramshaw and Marcus, 1995) (Nakayama, 2018) . Best model based on development set F1, trained on 50 epochs, with batch size of 32.", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 386, |
|
"text": "De Meulder, 2003)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 417, |
|
"text": "(Uzuner et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 891, |
|
"end": 917, |
|
"text": "(Ramshaw and Marcus, 1995)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 934, |
|
"text": "(Nakayama, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "diverse specialty domains. These complete datasets contain enough data to be considered an ideal case for their respective tasks. We have tested our tagger architecture (see 4.3) on the full-sized data in order to verify its quality and select the best pre-trained BERT model available. This topline can be seen in Table 1 . Our experiment focuses on low resources; the maximum size of the training data is less than 10% of the full set. We do not expect to reach topline results with our method at this quantity of data. However, we have to look at how much of the gap between topline and baseline is bridged by our method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 322, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The purpose of our method is to be used in a lowresource setting. We simulate such a setting by sampling a small number of labeled examples from the training set to create a new training set. We also consider that the quantity of data is small enough that all of the data is labeled. For our experiment, we reduce the training set to a subset S 1000 of size 1000 by sampling without replacement using ten different seeds. This is where the sampling bias is induced. S 1000 contains less than 10% of each of our sets. The seeding is done to reduce the variability of results due to sampling biases. Most of the results will be averaged over the ten seeds. We cut each subset S 1000 in a series of subsets: S 50 \u2282 S 100 \u2282 S 250 \u2282 S 500 \u2282 S 1000 . This is useful to evaluate the impact of the addition of new examples. For each seed, we obtain five subsets of labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low resource setting", |
|
"sec_num": "4.2" |
|
}, |
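
{

"text": "A sketch of the subset construction is given below, under the assumption that the nested subsets are obtained by truncating a single seeded sample (the actual sampling code may differ in its details).\n\nimport random\n\ndef make_subsets(train_set, seed, sizes=(50, 100, 250, 500, 1000)):\n    # sample S_1000 without replacement, then take prefixes so that\n    # S_50 is included in S_100, ..., which is included in S_1000\n    rng = random.Random(seed)\n    s_max = rng.sample(train_set, max(sizes))\n    return {n: s_max[:n] for n in sizes}\n\n# ten seeds give ten families of nested subsets:\n# subsets = {seed: make_subsets(train_set, seed) for seed in range(10)}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Low resource setting",

"sec_num": "4.2"

},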
|
{ |
|
"text": "This section presents the architecture shared by all the taggers we train. It is a simple BERT + Table 2 : F 1 score on baseline averaged across seeds. Average of the deltas between the performances of each individual tri-trained tagger and their respective baselines at \u2206unique lines. Average of the deltas between the performances of tri-trained ensembles and their respective baselines at \u2206ensemble lines. Corpora used are I2B2 and CoNLL.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 104, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagger", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "classifier architecture. The classifier is a two-layer feed-forward network with a hidden size of 768 and ReLU (rectified linear unit) activation. Dropout with p = 0.1 is applied between BERT and the classifier during training. The model is trained with the Adam optimizer with an initial learning rate of 10 \u22125 . We train all taggers for tri-training and baseline for 1000 epochs with early stopping when the development set F 1 score stops increasing for 20 epochs (40 epochs for a subset of size 50). The sentence batch size is 16.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagger", |
|
"sec_num": "4.3" |
|
}, |
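
{

"text": "The tagger head described above can be sketched with PyTorch and HuggingFace transformers as follows. The hyperparameters follow the text (hidden size 768, ReLU, dropout 0.1, Adam with a learning rate of 10^-5); the rest is a simplified stand-in for our training code, and the number of tags is an example value.\n\nimport torch\nfrom torch import nn\nfrom transformers import AutoModel\n\nclass BertTagger(nn.Module):\n    # BERT encoder followed by a two-layer feed-forward classifier\n    def __init__(self, num_tags, pretrained='bert-large-cased'):\n        super().__init__()\n        self.bert = AutoModel.from_pretrained(pretrained)\n        self.dropout = nn.Dropout(p=0.1)  # applied between BERT and the classifier\n        self.classifier = nn.Sequential(\n            nn.Linear(self.bert.config.hidden_size, 768),\n            nn.ReLU(),\n            nn.Linear(768, num_tags))\n\n    def forward(self, input_ids, attention_mask):\n        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)\n        return self.classifier(self.dropout(out.last_hidden_state))\n\n# model = BertTagger(num_tags=9)  # e.g. BIO tags for the four CoNLL concepts\n# optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tagger",

"sec_num": "4.3"

},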
|
{ |
|
"text": "While we refer to our tagger architecture as BERT + classifier, we have tried different pretrained BERT models 234 as shown in Table 1 and have settled on two different models. For CoNLL, the best results were obtained with BERT large cased (Devlin et al., 2018) , and for I2B2, with BioBERT base cased .", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 262, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagger", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We generate the unlabeled set U with GPT-2 (Radford et al., 2019) . We use HuggingFace's implementation 5 . The text from the labeled train set is used as the context to generate entailed examples. With each labeled example, we generate five follow-up sentences. We also use the language model for sentence completion. In this case, we cut the original text and complete it using the model. Each labeled example is cut to 75%, 50%, and 25% of its length. In each of these cases, we generate five completed sentences. This amounts to a total of 20 synthetic examples per natural example. It is, in practice, slightly less than that because we filter out sequences made exclusively of different types of whitespace, newlines, and other such noise. Generated examples can be seen in Figure 3 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 65, |
|
"text": "GPT-2 (Radford et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 780, |
|
"end": 788, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The main focus of this article is the use of tritraining without natural unlabeled data. We use the unlabeled data generated, as explained previously, as the unlabeled data of tri-training. Tri-training requires one development set and one validation set: the first for the training of each model m i , the second to validate the stagnation of the models across episodes. We chose to split the corpora's initial development set in half to fulfill each of those purposes. As this is a first experiment, we exclude sentences without tags from the pseudo-labeled set. This is done to avoid a possible problem at very low resources where the pre-trained models are not trained enough and produce sentences with empty tag sequences where they should not. However, our results show that these precautions might not be necessary. The result of the tri-training procedure is an ensemble of three models. Inference using this ensemble is done with a simple voting system. Voting is done by summing the scores output of each tag across all models and picking the highest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".5 Tri-training", |
|
"sec_num": null |
|
}, |
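
{

"text": "A sketch of this voting scheme, assuming each tagger outputs a per-token score for every tag (whether the score is a logit or a probability is an implementation detail):\n\ndef ensemble_vote(per_model_scores):\n    # per_model_scores: one entry per tagger, each a list (one per token)\n    # of {tag: score} dictionaries for the same sentence\n    n_tokens = len(per_model_scores[0])\n    voted = []\n    for t in range(n_tokens):\n        totals = {}\n        for scores in per_model_scores:\n            for tag, score in scores[t].items():\n                totals[tag] = totals.get(tag, 0.0) + score\n        # keep the tag with the highest summed score across the three taggers\n        voted.append(max(totals, key=totals.get))\n    return voted",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tri-training",

"sec_num": "4.5"

},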
|
{ |
|
"text": "In this section, we present the results obtained across the different subsets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "Baseline. Baseline are the results of models trained in a supervised manner only on the natural training data. For each subset S n , it is an average of 10 scores. The results in Table 2 show consistent performance increases between each subset sizes. Seqeval (Ramshaw and Marcus, 1995) (Nakayama, 2018) is used to compute the results. I2B2 F 1 range from 36.2 (size 50) to 77.4 (size 1000), and Figure 3 : Examples of generation. The three first examples are from CoNLL and the three last from I2B2. Each series is formed of an example of completion and two examples of sentence follow-up. The examples were cherry-picked to show both positive and negative aspects of generation, be of short length, and be labeled by the models. On CoNLL's completion example, only a full stop was added. On I2B2's completion example, the context was \"FOLLOW\" and was too short and generic to bring the sentence to the medical domain. The second examples for both corpora are okay. The third examples for both corpora happen when short formulaic sentences are used as context. For CoNLL, it is the common -DOCSTART-and for I2B2, it was a date.", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 286, |
|
"text": "(Ramshaw and Marcus, 1995)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 303, |
|
"text": "(Nakayama, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 404, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "CoNLL F 1 range from 59.9 (size 50) to 87.7 (size 1000). As discussed in Section 3.3, smaller sizes show a higher standard deviation with 5.8 for I2B2 and 3.3 for CoNLL at size 50.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "\u2206unique. Tri-training produces three trained models supposed to be used as an ensemble of models. With constraints such as memory consumption or inference time, one might want to use a single model for inference. For such cases, we have reported the results of single models. The \u2206unique results show the deltas between each of the three individual models m i and the baseline. For each subset S n , it is an average of 30 deltas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "\u2206ensemble. The purpose of tri-training is to obtain an ensemble of three models. We report the results of the ensembles by computing the deltas between the performances of the ensembles and their respective baselines. These results can be found within Table 2 at the \u2206ensemble line and in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 259, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 297, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "Our method obtains higher results on average on all subsets and on both corpora. Generally, on I2B2, tri-training allows for a \u2206ensemble to range from +4.32 (S 50 ) to +1.80 (S 1000 ). On CoNLL, it otherwise ranges from +2.98 (S 50 ) to +0.71 (S 1000 ). The \u2206unique shows, as expected, lower gains than \u2206ensemble, ranging from +3.93 (S 50 ) to +1.28 (S 1000 ) for I2B2 and +2.33 (S 50 ) to +0.27 (S 1000 ) for CoNLL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "Out of the 50 individual runs for each corpus, one is negative for I2B2, and five are negative for CoNLL. Impacts of the negative results are seen on the average results of CoNLL at subset size 100. Three seeds yield negative gains at this size, with one having extreme (-8.6 points) negative gains. Removing this extreme result in the average calculation brings the \u2206ensemble score closer to expected values (+1.89). Performances of individual models on CoNLL are within the standard deviation of negative results. This is not the case for I2B2. These results show that using the ensemble is a more stable solution. Overall, the method is most consistent with subsets of size 250 plus, as the average performance of tri-trained ensembles is above the standard deviation of the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "While our low-resource setting allows us to compare the impact of the training method in an otherwise similar context, it does not fully represent the nature of the problem. Building the development and test set is also a low resource problem. Reducing the test set to simulate low-resource will only make any comparison meaningless. Simulating the development set in the low resource context is an improvement that could be made.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "It is also to note that while the application domain is low resource, it is necessary to have a sizeable open-domain language model in the target language. Trying this method in languages other than English must be tested. Multilingual models might be the solution to the generalization of this method. As it stands, availability of large language model is the hardest limitation of this method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Leveraging pre-trained models to improve performances on specific tasks is a common approach. With recent improvements to language modeling, recent models are often used directly to solve tasks. Direct usage is the method we use to build our taggers. However, we propose a new use for these sizeable models. They can serve as unlabeled data generators for semi-supervised learning. In particular, we have shown that we can use this method to gain significant improvements to the performances of taggers on NER and Clinical Concept Extraction in a low resource context. We gain between 3 and 4 points of F 1 score on subsets of data of size 50. Gains are overall positive on the sizes of the subsets we have tested. The higher the gains, the lower the data size is. We have shown that large language models are suitable tools to generate unlabeled examples for semi-supervised learning for NER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/HugoBoulanger/ Tritraining-Gen", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://huggingface.co/ bert-base-uncased 3 https://huggingface.co/ bert-large-cased 4 https://huggingface.co/dmis-lab/ biobert-base-cased-v1.15 https://huggingface.co/gpt2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was granted access to the HPC resources of IDRIS under the allocation 2021-AD011013018 made by GENCI. This work was granted access to the HPC resources of Saclay-IA through the Lab-IA machine. This work has been supported by the project PSPC AIDA: 2019-PSPC-09 funded by BPI-France.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Combining labeled and unlabeled data with co-training", |
|
"authors": [ |
|
{ |
|
"first": "Avrim", |
|
"middle": [], |
|
"last": "Blum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the eleventh annual conference on Computational learning theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "92--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In Pro- ceedings of the eleventh annual conference on Com- putational learning theory, pages 92-100.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Language models are few-shot learners", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Ryder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Subbiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Dhariwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Shyam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Sastry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "1877--1901", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Named entity recognition from chinese adverse drug event reports with lexical feature based bilstm-crf and tri-training", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Liao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "96", |
|
"issue": "", |
|
"pages": "103252--103252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y Chen, C Zhou, T Li, H Wu, X Zhao, K Ye, and J Liao. 2019. Named entity recognition from chinese adverse drug event reports with lexical feature based bilstm-crf and tri-training. Journal of Biomedical Informatics, 96:103252-103252.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Daga: Data augmentation with a generation approach for low-resource tagging tasks", |
|
"authors": [ |
|
{ |
|
"first": "Bosheng", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linlin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidong", |
|
"middle": [], |
|
"last": "Bing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canasai", |
|
"middle": [], |
|
"last": "Kruengkrai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Thien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shafiq", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luo", |
|
"middle": [], |
|
"last": "Joty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyan", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2011.01549" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kru- engkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. Daga: Data augmentation with a generation approach for low-resource tagging tasks. arXiv preprint arXiv:2011.01549.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Protaugment: Intent detection metalearning through unsupervised diverse paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Dopierre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Gravier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wilfried", |
|
"middle": [], |
|
"last": "Logerais", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2454--2466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Dopierre, Christophe Gravier, and Wilfried Logerais. 2021. Protaugment: Intent detection meta- learning through unsupervised diverse paraphrasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 2454-2466.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Gholamreza Haffari, and Mohammad Norouzi. 2021. Generate, annotate, and learn: Generative models advance selftraining and knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Xuanli", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Islam Nassar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2106.06168" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuanli He, Islam Nassar, Jamie Kiros, Gholamreza Haf- fari, and Mohammad Norouzi. 2021. Generate, an- notate, and learn: Generative models advance self- training and knowledge distillation. arXiv preprint arXiv:2106.06168.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Effective self-training for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Pro- ceedings of the main conference on human language technology conference of the North American Chap- ter of the Association of Computational Linguistics, pages 152-159. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "seqeval: A python framework for sequence labeling evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Hiroki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Natural language understanding for task oriented dialog in the biomedical domain in a low resources context", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Neuraz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonardo", |
|
"middle": [ |
|
"Campillos" |
|
], |
|
"last": "Llanos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anita", |
|
"middle": [], |
|
"last": "Burgun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophie", |
|
"middle": [], |
|
"last": "Rosset", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1811.09417" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Neuraz, Leonardo Campillos Llanos, Anita Burgun, and Sophie Rosset. 2018. Natural language understanding for task oriented dialog in the biomedi- cal domain in a low resources context. arXiv preprint arXiv:1811.09417.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "OpenAI Blog", |
|
"volume": "1", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharan", |
|
"middle": [], |
|
"last": "Narang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Matena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanqi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter J", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "1--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. Journal of Machine Learning Research, 21:1- 67.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Text chunking using transformation-based learning", |
|
"authors": [ |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitch", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Third Workshop on Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunk- ing using transformation-based learning. In Third Workshop on Very Large Corpora.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Neural Transfer Learning for Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University of Ireland, Galway.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Strong baselines for neural semi-supervised learning under domain shift", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1044--1054", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder and Barbara Plank. 2018. Strong base- lines for neural semi-supervised learning under do- main shift. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044-1054.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"Tjong" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceed- ings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "86--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "i2b2/va challenge on concepts, assertions, and relations in clinical text", |
|
"authors": [ |
|
{ |
|
"first": "\u00d6zlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Brett", |

"middle": [ |

"R" |

], |

"last": "South", |

"suffix": "" |

}, |

{ |

"first": "Shuying", |

"middle": [], |

"last": "Shen", |

"suffix": "" |

}, |

{ |

"first": "Scott", |

"middle": [ |

"L" |

], |

"last": "DuVall", |

"suffix": "" |

} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association: JAMIA", |
|
"volume": "18", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "\u00d6zlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Associ- ation: JAMIA, 18(5):552.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A survey on semi-supervised learning", |
|
"authors": [ |
|
{ |

"first": "Jesper", |

"middle": [ |

"E" |

], |

"last": "Van Engelen", |

"suffix": "" |

}, |

{ |

"first": "Holger", |

"middle": [ |

"H" |

], |

"last": "Hoos", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Machine Learning", |
|
"volume": "109", |
|
"issue": "", |
|
"pages": "373--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesper E Van Engelen and Holger H Hoos. 2020. A sur- vey on semi-supervised learning. Machine Learning, 109(2):373-440.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A survey of zero-shot learning: Settings, methods, and applications", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Vincent", |

"middle": [ |

"W" |

], |

"last": "Zheng", |

"suffix": "" |

}, |

{ |

"first": "Han", |

"middle": [], |

"last": "Yu", |

"suffix": "" |

}, |

{ |

"first": "Chunyan", |

"middle": [], |

"last": "Miao", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "ACM Transactions on Intelligent Systems and Technology (TIST)", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "1--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Wang, Vincent W Zheng, Han Yu, and Chunyan Miao. 2019. A survey of zero-shot learning: Settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1- 37.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6382--6388", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1670" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data augmen- tation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 6382-6388, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Unsupervised word sense disambiguation rivaling supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "33rd annual meeting of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Tri-training: Exploiting unlabeled data using three classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Zhi-Hua", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IEEE Transactions on knowledge and Data Engineering", |
|
"volume": "17", |
|
"issue": "11", |
|
"pages": "1529--1541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi-Hua Zhou and Ming Li. 2005. Tri-training: Ex- ploiting unlabeled data using three classifiers. IEEE Transactions on knowledge and Data Engineering, 17(11):1529-1541.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Boxplot of CoNLL and I2B2 deltas between tri-trained ensemble and baseline (\u2206ensemble). For each subset size, the left boxplot is CoNLL, the right boxplot is I2B2.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "23\u00b15.80 49.22\u00b13.23 64.34\u00b11.43 71.39\u00b10.75 77.38\u00b10.64 \u2206unique +3.93\u00b11.89 +2.56\u00b12.37 +1.89\u00b11.25 +1.93\u00b10.70 +1.28\u00b10.84 \u2206ensemble +4.32\u00b11.82 +3.08\u00b12.38 +2.45\u00b11.23 +2.49\u00b10.73 +1.80\u00b10.84 CoNLL baseline 59.87\u00b13.32 69.20\u00b13.92 80.65\u00b11.99 84.74\u00b10.89 87.70\u00b10.38 \u2206unique +2.33\u00b12.01 +0.08\u00b13.64 +1.06\u00b11.11 +0.54\u00b10.83 +0.27\u00b10.37 \u2206ensemble +2.98\u00b11.98 +0.84\u00b13.68 +1.77\u00b11.17 +1.14\u00b10.71 +0.71\u00b10.40", |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>S 50</td><td>S 100</td><td>S 250</td><td>S 500</td><td>S 1000</td></tr><tr><td>I2B2</td><td>baseline</td><td>36.</td><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "MDS was founded in 1978.", |
|
"num": null, |
|
"content": "<table><tr><td>ORG</td><td/></tr><tr><td/><td>PER</td></tr><tr><td colspan=\"2\">And it was then that Jussi Graf's</td></tr><tr><td>MISC</td><td/></tr><tr><td colspan=\"2\">3_x86_64.tar.gz\"\" ) ; // We'll add this [...]</td></tr><tr><td/><td>problem</td></tr><tr><td colspan=\"2\">FOLLOW US ON TWITTER!</td></tr><tr><td>test</td><td>treatment</td></tr><tr><td colspan=\"2\">Disease tolerance test for benz</td></tr><tr><td>treatment</td><td/></tr><tr><td colspan=\"2\">-12 10:27:28 ] RavenQueen > she's been so [...]</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |