|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:59:25.633006Z" |
|
}, |
|
"title": "", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Transfer learning methods, and in particular domain adaptation, help exploit labeled data in one domain to improve the performance of a certain task in another domain. However, it is still not clear what factors affect the success of domain adaptation. This paper models adaptation success and selection of the most suitable source domains among several candidates in text similarity. We use descriptive domain information and cross-domain similarity metrics as predictive features. While mostly positive, the results also point to some domains where adaptation success was difficult to predict.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Transfer learning methods, and in particular domain adaptation, help exploit labeled data in one domain to improve the performance of a certain task in another domain. However, it is still not clear what factors affect the success of domain adaptation. This paper models adaptation success and selection of the most suitable source domains among several candidates in text similarity. We use descriptive domain information and cross-domain similarity metrics as predictive features. While mostly positive, the results also point to some domains where adaptation success was difficult to predict.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Since the data-hungry deep learning models have beaten state-of-the-art performances in different natural language processing (NLP) tasks, many efforts have been made to deal with the scarcity of labeled data (Wang et al., 2020; Settles, 2010; Kouw and Loog, 2019) . One of the main avenues taken by researchers of this field is investigating the portability of models between different data distributions, often referred to as different domains (Luo et al., 2019; Gururangan et al., 2020) . While multiple approaches have been proposed to make this portability feasible and efficient, it is still unclear how to predict the adaptability of two domains in advance. It is particularly important to address this gap because almost all domain adaptation approaches adjust a model to a new domain at the expense of more computational resources. Therefore, in practice it is neither desirable nor scalable to try all possible dataset candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "(Wang et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 243, |
|
"text": "Settles, 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 264, |
|
"text": "Kouw and Loog, 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 464, |
|
"text": "(Luo et al., 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 489, |
|
"text": "Gururangan et al., 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most relevant existing work seeks to identify key factors that can be used to justify why transfer learning between two domains work (Asch and Daelemans, 2010; Dai et al., 2019; Kashyap et al., 2021; Mou et al., 2016; Shah et al., 2018) . In practice, however, one needs to be able to quantitatively select a set of existing datasets that can best be adapted to a certain domain for a certain task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 159, |
|
"text": "(Asch and Daelemans, 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 177, |
|
"text": "Dai et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 199, |
|
"text": "Kashyap et al., 2021;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 217, |
|
"text": "Mou et al., 2016;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 236, |
|
"text": "Shah et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a simple yet effective approach to predict the success of transfer or adaptation, with the hope of drawing the research community's attention to this gap. We use the term domain transfer (DT) when a model trained on one domain is simply used for inference in another domain. Domain adaptation (DA) is used for approaches that bridge the source and target domain representations (e.g., by mapping or aligning feature spaces of the two domains) so that a model trained on labeled source data (and unlabeled target data) performs well in the target domain. While for the experiments in this paper we focus on the task of text similarity and autoencoder approaches to DA, the proposed process, shown in Figure 1 , can be easily applied to other NLP tasks and other unsupervised DA approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 710, |
|
"end": 718, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Domain Adaptation. The need for domain adaptation arises when a model trained using labeled data from one (source) domain needs to be applied to another (target) domain with a different data distribution (Miller, 2019) . We focus specifically on unsupervised domain adaptation, which learns using unlabeled data in both source and target domains. Model-based approaches to unsupervised DA have been classified into modifying the feature space and augmenting the", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 218, |
|
"text": "(Miller, 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Nicolai Pogrebnyakov * \u2020 Shohreh Shaghaghian * * Thomson Reuters Labs, Canada \u2020 Copenhagen Business School, Denmark Emails: [email protected] Ramponi and Plank (2020) for a comprehensive review). Domain Similarity. Extant studies have proposed a variety of measures to quantitatively express similarity between a pair of domains. Dai et al. (2019) define three main metrics to measure different aspects of similarity between source and target datasets and investigate how these measures correlate with the effectiveness of named entity recognition tasks. Target vocabulary coverage, language model perplexity, and word vector variance are used as these similarity measures. Asch and Daelemans (2010) show a correlation between six similarity metrics based on word frequency, and the performance of some part-ofspeech tagging tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 186, |
|
"text": "Ramponi and Plank (2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 367, |
|
"text": "Dai et al. (2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting the Success of Domain Adaptation in Text Similarity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Autoencoders for Domain Adaptation. Stacked denoising autoencoders (SDA) learn latent representations that align feature spaces of the source and target domains (Ramponi and Plank, 2020; Vincent et al., 2010) . SDA first add noise to input, such as dropout or Gaussian noise, and then aim to reconstruct the uncorrupted input (Gondara, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 186, |
|
"text": "(Ramponi and Plank, 2020;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 208, |
|
"text": "Vincent et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 341, |
|
"text": "(Gondara, 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting the Success of Domain Adaptation in Text Similarity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A further development of this approach is marginalized SDA, which marginalizes the reconstruction loss. The solution to the loss has a closed form, which lowers computational cost and improves scalability compared to the original SDA (Ramponi and Plank, 2020; Chen et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 259, |
|
"text": "(Ramponi and Plank, 2020;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 278, |
|
"text": "Chen et al., 2012)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting the Success of Domain Adaptation in Text Similarity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Inspired by another approach to DA, domain adversarial (Ganin et al., 2016) , Clinchant et al. (2016) add a regularization term based on a domain classifier to the reconstruction loss. We refer to this approach as marginalized SDA with domain regularization (mSDAR). There also exists a closed-form solution to that loss, and that approach was shown to outperform marginalized SDA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 75, |
|
"text": "(Ganin et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 78, |
|
"end": 101, |
|
"text": "Clinchant et al. (2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting the Success of Domain Adaptation in Text Similarity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use 11 publicly available semantic text similarity datasets. Seven of them were obtained from StackExchange forums, with data from 2015 to November 2020: Apple, AskUbuntu, Math, StackOverflow, Stats, SuperUser and Unix 1 . We also use Quora Question Pairs, Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), Paraphrase Adversaries from Word Scrambling (PAWS) (Zhang et al., 2019) and Sentences Involving Compositional Knowledge-Relatedness (SICK) (SemEval, 2014). All datasets contain binary labels indicating whether the text pair is similar or not. The exception is SICK, where we convert the original relevance score of 1-5 into a binary score of 0 (not semantically similar) if the relevance score is below 4, and 1 otherwise. Each of the 11 datasets is considered a separate domain. Figure 1 shows the process we use to implement DT and DA and train a model that can best identify a proper source domain for a particular target domain. We start with embedding text in each of the 11 domains with the Universal Sentence Encoder (USE) (Cer et al., 2018) . For DT, USE representations are used directly to train models in a source domain S and evaluate the performance in the target domain T. For DA, we implement both SDA and mSDAR. A three-layer SDA is trained on each source-target domain pair. Text from the source and target domains is embedded with USE and corrupted with Gaussian noise, whose parameters are estimated from the hidden representation of the previous layer. In mSDAR, the hyperparameters we use are 5 layers, the target regularization parameter \u03bb = 1, dropout probability 0.6 and the regularization objective R = 1. These parameters have the same meaning as in Clinchant et al. (2016) . Both SDA and mSDAR encoders are then used to embed the USE representations of the text in the source and target domains. 
Figure 2 shows the original and mSDAR representations for StackOverflow and SuperUser domains, demonstrating the effect of mSDAR on aligning the feature spaces of the two domains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 403, |
|
"text": "(Zhang et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1062, |
|
"end": 1080, |
|
"text": "(Cer et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1708, |
|
"end": 1731, |
|
"text": "Clinchant et al. (2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 812, |
|
"end": 820, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1855, |
|
"end": 1863, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The representations described above are used to train a dense 3-layer neural network in the source domain and evaluate its performance in the target domain by reporting the F1-score. (We train three such models for each domain pair and average their performance.) We denote this cross-domain performance by F1 !\" . In order to make the DT and DA results robust to the relative difficulty of learning in different domains, we normalize F1 !\" by the in-domain F1-score, F1 \"\" , which denotes the performance of the fully supervised model trained and evaluated in the same domain. The normalized F1-score, averaged over all domain pairs, is 0.775 for DT, 0.799 for SDA and 0.817 for mSDAR. This is in line with previous work showing better performance of mSDAR over SDA (Clinchant et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 767, |
|
"end": 791, |
|
"text": "(Clinchant et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling Domain Transfer and Adaptation", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "Considering a source domain S and a target domain T with unigram sets V_S and V_T, we define a set of features F_ST = {f1_ST, . . . , f10_ST} as follows. Unigram Coverage. The simplest metric to evaluate the similarity of two domains is the percentage of their common unigrams. We use the ratio of common unigrams in the source, f1_ST = |V_S \u2229 V_T| / |V_S|, and target, f2_ST = |V_S \u2229 V_T| / |V_T|, domains as two features for the classifiers.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Domain Similarity Measures",

"sec_num": "4"

},
|
{ |
|
"text": "Dataset Size. The number of labeled data points in source ( * !\" ) and target ( + !\" ) domains as well as the average number of tokens per example for the source and target domains ( , !\" and -!\" ) are additional features we use for the classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Similarity Measures", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Distribution Similarity. In order to measure the similarity of how tokens have been distributed in the two domains, we add R\u00e9nyi divergence ( . !\" ) (Asch and Daelemans, 2010) and KL divergence ( / !\" ) (Plank and van Noord, 2011) to the set of features. We use \u03b1 = 0.99 as the value of the parameter in R\u00e9nyi divergence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Similarity Measures", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Language Similarity. Similar to Dai et al. (2019), we train a trigram language model in each domain and evaluate its perplexity on other domains ( 0 !\" ). Since the target domain is expected to have many trigrams that are not seen in the source domain, we apply Kneser-Ney smoothing to account for those unseen trigrams (Kneser and Ney, 1995) . We also use word vector variance between the source and target domains ( #$ !\" ) (Dai et al., 2019) . This variance is calculated as", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 342, |
|
"text": "(Kneser and Ney, 1995)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 444, |
|
"text": "(Dai et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Similarity Measures", |
|
"sec_num": "4" |
|
}, |
|
{

"text": "f10_ST = (1 / (|V_S \u2229 V_T| d)) \u03a3_{v \u2208 V_S \u2229 V_T} \u03a3_{j=1}^{d} (W_Sv^j \u2212 W_Tv^j)^2, where W_Sv^j and W_Tv^j are the j-th elements of the vectors for word v in the source and target domains, respectively. We use Word2vec Skip-gram with a vector length of d = 300.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Domain Similarity Measures",

"sec_num": "4"

},
|
{ |
|
"text": "Success Prediction and Order Ranking. We evaluate two different approaches to select one or multiple source domains for a particular target domain. In the first approach, we train a classifier to predict if a domain can be a good candidate for transfer or adaptation to a specific target domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Domain Selection", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We consider transfer or adaptation successful if the ratio", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Domain Selection", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "<# #$ <# $$", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Domain Selection", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "is greater than 80% i.e., if it can achieve at least 80% of the performance of a fully supervised model on the target domain. We refer to the classifier trained in this approach as Success Predictor. In the second approach, irrespective of what percentage of a fully supervised model performance can be achieved, we order the existing source domains for each target domain. Therefore, we model the problem as a ranking problem and refer to the trained model as Domain Ranker. This ranking problem can be modeled as a binary classifier, in which a sample corresponds to the performance of two source domains S1 and S2 for a specific target domain T. The label is one if F1 ! & \" \u2265 F1 ! ' \" and zero otherwise. Performance Evaluation. While the F1-scores and accuracies reflect how well the trained classifiers work, the original purpose of defining these two approaches was to find the best candidates for source domains. Hence, we also show the performance of the two approaches based on ordering-based metrics. To find the orderings for each target domain by Success Predictor, we order the source domains based on the predicted probability of the binary classifier. In Domain Ranker, we sort the source pairs using the pairwise preference predicted by the classifier. We handle the inconsistencies caused by incorrect predictions using the multi-sort algorithm proposed by Maystre and Grossglauser (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1375, |
|
"end": 1406, |
|
"text": "Maystre and Grossglauser (2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Domain Selection", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To train the Success Predictor and Domain Ranker models, we use a set of features F ST described in section 4. For both approaches, we train a binary XGBoost classifier with 5-fold cross-validation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source Domain Selection", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Ideally, the best way to evaluate the performance of the two approaches is to train the model on some set of domains and test it on orderings of an entirely different set of domains. However, since we only have 11 domains, to make the most use out of this small data, we train the classifier on multiple traintest splits and report the performance metrics of the trained binary classifier each time. We can split the data into train and test sets randomly. However, to make sure that the target domain for which we want to select the best source domain has never been seen by the model as the target domain, each time we use one of the 11 domains as the target domain in the test data. Hence, we have 100 training and 10 test samples in each split. For Domain Ranker, we use ! & \" \u222a ! ' \" as the feature set and use the same train-test split as for Success Predictor. This leaves us with 450 training and 45 test samples in each of the 11 splits. Success Adaptation Prediction. Table 1 presents the performance metrics achieved for all target domains. Note that there is a wide variation in success prediction among the target domains. While the Success Predictor achieves good performance on Apple, AskUbuntu and Unix target domains, it performs poorly on PAWS and Quora datasets. This might be due to the difficulty of learning in these domains (Zhang et al., 2019) , which is not captured by the descriptive and crossdomain metrics that we use.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1347, |
|
"end": 1367, |
|
"text": "(Zhang et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 978, |
|
"end": 985, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modeling Domain Adaptation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rank correlation coefficients such as Kendall's t are a common metric to measure the degree of similarity between two rankings. However, here we are more interested in finding out whether we have correctly identified the most relevant source domains. Hence, we report the percentages of top N domains we have identified correctly for N = 1, 3, 5. We also report a stricter metric, Correct Rank Percentage (CRP), which equals the percentage of the source domains that have been predicted with the same order as the true ordering. For example, for Stats as the target domain, the true ordering of other domains using SDA is [StackOverflow, AskUbuntu, Apple, Unix, MRPC, SuperUser, SICK, Math, PAWS, Quora] . The Success Predictor predicts the ordering of the source domains as [StackOverflow, Math, Apple, SuperUser, Unix, AskUbuntu, SICK, MRPC, PAWS, Quora] . In this case, CRP=0.5 since 5 out of the 10 domains have the same order in the predicted and true orderings. Also, Top1=1 since the domain with highest predicted order, StackOverflow, has also the highest order in the true ordering. Similarly, Top3=0.67 since only 2 out of the 3 highest ordered domains in true ordering exist in the top 3 of predicted ordering. AVERAGE 0.68 0.68 0.27 0.27 0.61 0.84 0.64 0.72 0.26 0.27 0.58 0.73 0.70 0.73 0.26 0.45 0.67 scores. Comparing columns \"Average F1\" with \"In-domain average F1\", DT and DA performance for most domains is lower than in-domain performance. The exception is PAWS, where DA delivers over twice the performance of in-domain training. Additionally, for most domains DT and DA resulted in some successes and some failures (mostly between 3 and 8 successes for mSDAR). The exception was, again, PAWS, where all source domains succeeded in DT/DA, and Quora, where none succeeded.", |
|
"cite_spans": [ |
|
{ |
|
"start": 622, |
|
"end": 703, |
|
"text": "[StackOverflow, AskUbuntu, Apple, Unix, MRPC, SuperUser, SICK, Math, PAWS, Quora]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 775, |
|
"end": 856, |
|
"text": "[StackOverflow, Math, Apple, SuperUser, Unix, AskUbuntu, SICK, MRPC, PAWS, Quora]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1222, |
|
"end": 1314, |
|
"text": "AVERAGE 0.68 0.68 0.27 0.27 0.61 0.84 0.64 0.72 0.26 0.27 0.58 0.73 0.70 0.73 0.26 0.45 0.67", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Order", |
|
"sec_num": null |
|
}, |
|
{

"text": "Adaptation Success Factors. Comparing CRP, Top1, Top3 and Top5 between Success Predictor and Domain Ranker for all DT and DA methods, we see that, in general, Domain Ranker does a better job of finding the orderings of candidate source domains. For Success Predictor, the features with the highest importance are the KL divergence f8_ST, the target unigram ratio f2_ST, and the data size of the target domain f4_ST. For Domain Ranker, the average example lengths f5_S1T and f5_S2T, the R\u00e9nyi divergences f7_S1T and f7_S2T, and the perplexities f9_S1T and f9_S2T in both source domains are the most informative features. Adaptation Success by Dataset. While the Success Predictor performed reasonably well on most domains, its performance on the PAWS and Quora datasets was poor. We attribute this to the lack of domain similarity features that would reflect the complexities of these datasets, and note this for future work. The PAWS result can be explained by the representation and training we used (USE and a dense neural network), which differ from the bag-of-words and BERT models used by the PAWS authors (Zhang et al., 2019). For Quora, in-domain performance is in line with previous research (Wang et al., 2017; Tomar et al., 2017), and aggregate DT/DA results were similar to those of other datasets such as Stats.",

"cite_spans": [

{

"start": 582,

"end": 602,

"text": "(Zhang et al., 2019)",

"ref_id": "BIBREF24"

},

{

"start": 672,

"end": 690,

"text": "(Wang et al., 2017;",

"ref_id": "BIBREF23"

},

{

"start": 691,

"end": 711,

"text": "Tomar et al., 2017)",

"ref_id": "BIBREF20"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "7"

},
|
{ |
|
"text": "We studied the problem of selecting the most relevant labeled datasets from a pool of candidates to be used as a source domain in a transfer learning setup with a specific unlabeled target domain. The experiments focused on the text similarity task and autoencoder approaches to DA. Note that the proposed process can be extended to other NLP tasks and other unsupervised DA approaches as well. We used descriptive domain information and cross-domain similarity metrics as predictive features to model the success of DT and DA, and to rank source domains based on their relevancy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In future work, we intend to study source selection in multi-source domain adaptation setup, using multiple source domains for DT/DA. Identifying additional adaptation success factors that could better predict the success of DT/DA for complex domains such as PAWS and Quora, and learning the success threshold (here, we fixed it at 80%) are other avenues to investigate. Other possibilities include experimenting with various text representations (such as bag-of-words) and models (e.g., Transformer-based).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Obtained from https://data.stackexchange.com", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": " Table 2 . Average absolute F1 scores for in-domain and cross-domain performance with domain transfer (DT) and domain adaptation (DA). \"In-domain\" refers to a model trained and evaluated on the same domain, specified in the first column. DT and DA are results for a model trained on other domains (source) and evaluated on the domain in the first column (target). For DT and DA, \"# of transfer/adaptation successes\" is the number of source domains (out of 10) where a model evaluated on target performed at least at 80% of in-domain performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 8, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Using Domain Similarity for Performance Estimation", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Van Asch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Van Asch and Walter Daelemans. 2010. Using Domain Similarity for Performance Estimation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 31-36, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Universal Sentence Encoder for English", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng-Yi", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicole", |
|
"middle": [], |
|
"last": "Limtiaco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rhomni", |
|
"middle": [], |
|
"last": "St John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Constant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Guajardo-Cespedes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Tar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun-Hsuan", |
|
"middle": [], |
|
"last": "Sung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Strope", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ray", |
|
"middle": [], |
|
"last": "Kurzweil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Marginalized Denoising Autoencoders for Domain Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Minmin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhixiang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Sha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 29th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1206.4683" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized Denoising Autoencoders for Domain Adaptation. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, June. arXiv: 1206.4683.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Domain Adaptation Regularization for Denoising Autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Stephane", |
|
"middle": [], |
|
"last": "Clinchant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriela", |
|
"middle": [], |
|
"last": "Csurka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Chidlovskii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "26--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephane Clinchant, Gabriela Csurka, and Boris Chidlovskii. 2016. A Domain Adaptation Regularization for Denoising Autoencoders. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 26-31, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using Similarity Measures to Select Pretraining Data for NER", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarvnaz", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hachey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecile", |
|
"middle": [], |
|
"last": "Paris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1460--1470", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2019. Using Similarity Measures to Select Pretraining Data for NER. In Proceedings of NAACL-HLT, pages 1460- 1470, Minneapolis, Minnesota.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatically Constructing a Corpus of Sentential Paraphrases", |
|
"authors": [ |
|
{ |

"first": "William", |

"middle": [ |

"B" |

], |

"last": "Dolan", |

"suffix": "" |

}, |

{ |

"first": "Chris", |

"middle": [], |

"last": "Brockett", |

"suffix": "" |

} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William B Dolan and Chris Brockett. 2005. Automatically Constructing a Corpus of Sentential Paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), page 8.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Domain-Adversarial Training of Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Yaroslav", |
|
"middle": [], |
|
"last": "Ganin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniya", |
|
"middle": [], |
|
"last": "Ustinova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hana", |
|
"middle": [], |
|
"last": "Ajakan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Germain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Laviolette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Marchand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Lempitsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "1--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research, 17(1), pages 1- 35.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Medical image denoising using convolutional denoising autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Lovedeep", |
|
"middle": [], |
|
"last": "Gondara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 16th International Conference on Data Mining Workshops (ICDMW)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "241--246", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lovedeep Gondara. 2016. Medical image denoising using convolutional denoising autoencoders. 2016. In Proceedings of the 16th International Conference on Data Mining Workshops (ICDMW). pages 241- 246, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Don't Stop Pretraining: Adapt Language Models to Domains and Tasks", |
|
"authors": [ |
|
{ |

"first": "Suchin", |

"middle": [], |

"last": "Gururangan", |

"suffix": "" |

}, |

{ |

"first": "Ana", |

"middle": [], |

"last": "Marasovi\u0107", |

"suffix": "" |

}, |

{ |

"first": "Swabha", |

"middle": [], |

"last": "Swayamdipta", |

"suffix": "" |

}, |

{ |

"first": "Kyle", |

"middle": [], |

"last": "Lo", |

"suffix": "" |

}, |

{ |

"first": "Iz", |

"middle": [], |

"last": "Beltagy", |

"suffix": "" |

}, |

{ |

"first": "Doug", |

"middle": [], |

"last": "Downey", |

"suffix": "" |

}, |

{ |

"first": "Noah", |

"middle": [ |

"A" |

], |

"last": "Smith", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8342--8360", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Domain Divergences: a Survey and Empirical Analysis", |
|
"authors": [ |
|
{ |

"first": "Abhinav", |

"middle": [ |

"Ramesh" |

], |

"last": "Kashyap", |

"suffix": "" |

}, |

{ |

"first": "Devamanyu", |

"middle": [], |

"last": "Hazarika", |

"suffix": "" |

}, |

{ |

"first": "Min-Yen", |

"middle": [], |

"last": "Kan", |

"suffix": "" |

}, |

{ |

"first": "Roger", |

"middle": [], |

"last": "Zimmermann", |

"suffix": "" |

} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.12198" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, and Roger Zimmermann. 2021. Domain Divergences: a Survey and Empirical Analysis. arXiv:2010.12198 [cs], April. arXiv: 2010.12198.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Improved backingoff for M-gram language modeling", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kneser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "1995 International Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "181--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Kneser and H. Ney. 1995. Improved backing- off for M-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181-184, Detroit, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "An introduction to domain adaptation and transfer learning", |
|
"authors": [ |
|
{ |

"first": "Wouter", |

"middle": [ |

"M" |

], |

"last": "Kouw", |

"suffix": "" |

}, |

{ |

"first": "Marco", |

"middle": [], |

"last": "Loog", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1812.11806" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wouter M. Kouw and Marco Loog. 2019. An introduction to domain adaptation and transfer learning. Technical report, Delft University of Technology, January. arXiv: 1812.11806.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Yawei", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Guan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junqing", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2502--2511", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yawei Luo, Liang Zheng, Tao Guan, Junqing Yu, and Yi Yang. 2019. Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2502-2511, Long Beach, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Just Sort It! A Simple and Effective Approach to Active Preference Learning", |
|
"authors": [ |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Maystre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Grossglauser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of 34th International Conference on Machine Learning (PMLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2344--2353", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas Maystre and Matthias Grossglauser. 2017. Just Sort It! A Simple and Effective Approach to Active Preference Learning. In Proceedings of 34th International Conference on Machine Learning (PMLR), pages 2344- 2353, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Simplified Neural Unsupervised Domain Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "414--419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Miller. 2019. Simplified Neural Unsupervised Domain Adaptation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 414-419, Minneapolis, Minnesota.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "How Transferable are Neural Networks in NLP Applications?", |
|
"authors": [ |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhao", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ge", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "479--489", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How Transferable are Neural Networks in NLP Applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 479-489, Austin, Texas.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Effective Measures of Domain Similarity for Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Gertjan", |

"middle": [], |

"last": "van Noord", |

"suffix": "" |

} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1566--1576", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank and Gertjan van Noord. 2011. Effective Measures of Domain Similarity for Parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1566-1576, Portland, Oregon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Neural Unsupervised Domain Adaptation in NLP-A Survey", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ramponi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6838--6855", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ramponi and Barbara Plank. 2020. Neural Unsupervised Domain Adaptation in NLP-A Survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Marelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Menini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, Roberto Zamparelli, 2014. SemEval- 2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland. Burr Settles. 2010. Active Learning Literature Survey. Technical Report Computer Sciences Technical Report 1648, University of Wisconsin-Madison.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Adversarial Domain Adaptation for Duplicate Question Detection", |
|
"authors": [ |
|
{ |
|
"first": "Darsh", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvatore", |
|
"middle": [], |
|
"last": "Romeo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1056--1063", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Darsh Shah, Tao Lei, Alessandro Moschitti, Salvatore Romeo, and Preslav Nakov. 2018. Adversarial Domain Adaptation for Duplicate Question Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1056-1063, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Neural Paraphrase Identification of Questions with Noisy Pretraining", |
|
"authors": [ |
|
{ |

"first": "Gaurav", |

"middle": [ |

"Singh" |

], |

"last": "Tomar", |

"suffix": "" |

}, |

{ |

"first": "Thyago", |

"middle": [], |

"last": "Duque", |

"suffix": "" |

}, |

{ |

"first": "Oscar", |

"middle": [], |

"last": "Tackstrom", |

"suffix": "" |

}, |

{ |

"first": "Jakob", |

"middle": [], |

"last": "Uszkoreit", |

"suffix": "" |

}, |

{ |

"first": "Dipanjan", |

"middle": [], |

"last": "Das", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Subword and Character Level Models in NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gaurav Singh Tomar, Thyago Duque, Oscar Tackstrom, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural Paraphrase Identification of Questions with Noisy Pretraining. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 142-147, Copenhagen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Lajoie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "11", |
|
"issue": "12", |
|
"pages": "3371--3408", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research, 11(12), pages 3371-3408.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Generalizing from a Few Examples: A Survey on Few-Shot Learning", |
|
"authors": [ |
|
{ |
|
"first": "Yaqing", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanming", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Kwok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lionel", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "53", |
|
"issue": "", |
|
"pages": "1--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaqing Wang, Quanming Yao, James Kwok, and Lionel M. Ni. 2020. Generalizing from a Few Examples: A Survey on Few-Shot Learning. ACM Computing Surveys (CSUR), 53(3), pages 1-34.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Bilateral multi-perspective matching for natural language sentences", |
|
"authors": [ |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Wael", |

"middle": [], |

"last": "Hamza", |

"suffix": "" |

}, |

{ |

"first": "Radu", |

"middle": [], |

"last": "Florian", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4144--4150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiguo Wang, Hamza Wael, and Florian Radu. 2017. Bilateral multi-perspective matching for natural language sentences, In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 4144-4150.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "PAWS: Paraphrase Adversaries from Word Scrambling", |
|
"authors": [ |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1298--1308", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase Adversaries from Word Scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT, pages 1298-1308, Minneapolis, Minnesota.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "The process of modeling the success of domain transfer & adaptation." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Original (a) and mSDAR (b) representations of text in the StackOverflow (red) and SuperUser (blue) domains (PCA projection)." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td>DT</td><td/><td/><td/><td/><td/><td>SDA</td><td/><td/><td/><td/><td colspan=\"2\">mSDAR</td></tr><tr><td colspan=\"4\">Target F1 Acc CRP Apple 1 1 0.4</td><td>1</td><td>0.67</td><td>1</td><td colspan=\"3\">0.89 0.9 0.3</td><td>0</td><td>0.67</td><td>1</td><td colspan=\"3\">0.89 0.9 0.4</td><td>1</td><td>1</td><td>1</td></tr><tr><td>AskUbuntu</td><td>1</td><td>1</td><td>0.3</td><td>0</td><td>0.67</td><td>1</td><td colspan=\"2\">0.86 0.9</td><td>0</td><td>0</td><td colspan=\"2\">0.67 0.8</td><td>1</td><td>1</td><td>0.2</td><td>0</td><td>0.67 0.8</td></tr><tr><td>MRPC</td><td colspan=\"3\">0.71 0.6 0.2</td><td>0</td><td colspan=\"5\">0.67 0.6 0.93 0.9 0.2</td><td>0</td><td colspan=\"5\">0.33 0.6 0.93 0.9 0.3</td><td>0</td><td>0.67 0.8</td></tr><tr><td>Math</td><td colspan=\"3\">0.67 0.6 0.2</td><td>0</td><td colspan=\"4\">0.67 0.6 0.6 0.6</td><td>0</td><td>0</td><td colspan=\"5\">0.33 0.8 0.86 0.8 0.1</td><td>0</td><td>0.33 0.6</td></tr><tr><td>PAWS</td><td>0</td><td>0</td><td>0.3</td><td>0</td><td colspan=\"5\">0.33 0.8 0.18 0.1 0.3</td><td>0</td><td colspan=\"4\">0.33 0.6 0.18 0.1</td><td>0</td><td>0</td><td>0.33 0.6</td></tr><tr><td>Quora</td><td>0</td><td>0</td><td>0.3</td><td>0</td><td>0.67</td><td>1</td><td>0</td><td>0</td><td>0.3</td><td>1</td><td colspan=\"2\">0.67 0.8</td><td>0</td><td colspan=\"2\">0.1 0.3</td><td>1</td><td>0.33 0.8</td></tr><tr><td>SICK</td><td colspan=\"3\">0.89 0.8 0.5</td><td>0</td><td colspan=\"5\">0.67 0.8 0.88 0.8 0.2</td><td>0</td><td>0</td><td colspan=\"4\">0.4 0.93 0.9 0.3</td><td>0</td><td>0.33 0.8</td></tr><tr><td colspan=\"4\">StackOverflow 0.67 0.8 0.1</td><td>0</td><td colspan=\"2\">0.67 0.6</td><td>0</td><td colspan=\"2\">0.8 0.3</td><td>0</td><td>1</td><td colspan=\"4\">0.6 0.75 0.8 0.1</td><td>0</td><td>1</td><td>0.6</td></tr><tr><td>Stats</td><td colspan=\"3\">0.67 0.8 0.2</td><td>1</td><td colspan=\"5\">0.33 0.8 0.67 0.9 0.5</td><td>1</td><td colspan=\"5\">0.67 0.6 0.4 0.7 
0.4</td><td>1</td><td>0.67 0.8</td></tr><tr><td>SuperUser</td><td colspan=\"3\">0.89 0.9 0.2</td><td>0</td><td>0.67</td><td>1</td><td>1</td><td>1</td><td>0.1</td><td>0</td><td colspan=\"5\">0.67 0.8 0.86 0.9 0.3</td><td>1</td><td>1</td><td>0.8</td></tr><tr><td>Unix</td><td>1</td><td>1</td><td>0.3</td><td>1</td><td>0.67</td><td>1</td><td>1</td><td>1</td><td>0.7</td><td>1</td><td>1</td><td>1</td><td colspan=\"3\">0.86 0.9 0.5</td><td>1</td><td>1</td><td>1</td></tr></table>", |
|
"text": "shows in-domain and cross-domain performance with DT and DA using absolute F1" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Apple</td><td>0.86 0.89 0.8</td><td>1</td><td>1</td><td>1</td><td>0.97 0.98 0.8</td><td>1</td><td>1</td><td>1</td><td>0.89 0.91 0.5</td><td>1</td><td>0.67</td><td>1</td></tr><tr><td colspan=\"2\">AskUbuntu 0.85 0.89 0.7</td><td>0</td><td>0.67</td><td>1</td><td>0.86 0.91 0.4</td><td>0</td><td>0.67</td><td>1</td><td>0.77 0.84 0.5</td><td>0</td><td colspan=\"2\">0.67 0.8</td></tr><tr><td>MRPC</td><td>0.77 0.78 0.2</td><td>1</td><td colspan=\"3\">0.67 0.8 0.65 0.76 0.2</td><td>0</td><td colspan=\"3\">0.67 0.8 0.72 0.84 0.3</td><td>1</td><td>0.67</td><td>1</td></tr><tr><td>Math</td><td>0.73 0.76 0.1</td><td>0</td><td colspan=\"3\">0.67 0.6 0.67 0.71 0.1</td><td>0</td><td colspan=\"3\">0.33 0.6 0.65 0.76 0.2</td><td>0</td><td colspan=\"2\">0.33 0.8</td></tr><tr><td>PAWS</td><td>0.44 0.56 0.2</td><td>0</td><td colspan=\"3\">0.33 0.6 0.69 0.76 0.2</td><td>0</td><td>0.67</td><td>1</td><td>0.59 0.76 0.1</td><td>0</td><td colspan=\"2\">0.33 0.8</td></tr><tr><td>Quora</td><td>0.86 0.84 0.4</td><td>0</td><td>0.67</td><td>1</td><td>0.87 0.87 0.3</td><td>0</td><td>0.67</td><td>1</td><td>0.76 0.82 0.2</td><td>0</td><td colspan=\"2\">0.67 0.8</td></tr><tr><td>SICK</td><td>0.87 0.87 0.2</td><td>0</td><td>0.33</td><td>1</td><td>0.54 0.62 0.2</td><td>0</td><td colspan=\"3\">0.33 0.6 0.76 0.84 0.4</td><td>0</td><td>0.67</td><td>1</td></tr><tr><td colspan=\"2\">StackOverflow 0.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Performance of Success Predictor and Domain Ranker in identifying the most suitable target domains under domain transfer (DT) and two domain adaptation approaches (SDA and mSDAR)." |
|
} |
|
} |
|
} |
|
} |