|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:58:34.937595Z" |
|
}, |
|
"title": "Improving Cross-lingual Text Classification with Zero-shot Instance-Weighting", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Prithviraj Sen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research Almaden", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Huaiyu", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research Almaden", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yunyao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research Almaden", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cross-lingual text classification (CLTC) is a challenging task made even harder still due to the lack of labeled data in low-resource languages. In this paper, we propose zero-shot instance-weighting, a general model-agnostic zero-shot learning framework for improving CLTC by leveraging source instance weighting. It adds a module on top of pre-trained language models for similarity computation of instance weights, thus aligning each source instance to the target language. During training, the framework utilizes gradient descent that is weighted by instance weights to update parameters. We evaluate this framework over seven target languages on three fundamental tasks and show its effectiveness and extensibility, by improving on F1 score up to 4% in singlesource transfer and 8% in multi-source transfer. To the best of our knowledge, our method is the first to apply instance weighting in zeroshot CLTC. It is simple yet effective and easily extensible into multi-source transfer.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cross-lingual text classification (CLTC) is a challenging task made even harder still due to the lack of labeled data in low-resource languages. In this paper, we propose zero-shot instance-weighting, a general model-agnostic zero-shot learning framework for improving CLTC by leveraging source instance weighting. It adds a module on top of pre-trained language models for similarity computation of instance weights, thus aligning each source instance to the target language. During training, the framework utilizes gradient descent that is weighted by instance weights to update parameters. We evaluate this framework over seven target languages on three fundamental tasks and show its effectiveness and extensibility, by improving on F1 score up to 4% in singlesource transfer and 8% in multi-source transfer. To the best of our knowledge, our method is the first to apply instance weighting in zeroshot CLTC. It is simple yet effective and easily extensible into multi-source transfer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language processing (NLP) has largely benefited from recent advances in deep learning and large-scale labeled data. Unfortunately, such labeled corpora are not available for all languages. Cross-lingual transfer learning is one way to spread the success from high-resource to low-resource languages. Cross-lingual text classification (CLTC) (Prettenhofer and Stein, 2010; Ni et al., 2011) can learn a classifier in a low-resource target language by transferring from a resource-rich source language (Chen et al., 2018; Esuli et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 379, |
|
"text": "(Prettenhofer and Stein, 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 396, |
|
"text": "Ni et al., 2011)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 526, |
|
"text": "(Chen et al., 2018;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 546, |
|
"text": "Esuli et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Previous work has learned a classifier in the target language using a very small sample of labeled target instances or external corpora of unlabeled instances Xu and Wan, 2017 ). * Work done as an intern at IBM Research Almaden.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 175, |
|
"text": "Xu and Wan, 2017", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, other resources that may be utilized to achieve the same include, but are not limited to, parallel corpora of unlabeled instances in the target language (Xu and Wan, 2017) . In this work, we address the most challenging setting, zero-shot CLTC (Arnold et al., 2007; Joachims, 2003) , where no resource in the target language is given. Among the many methods for transfer learning that have been successfully employed in NLP (Mogadala and Rettinger, 2016; Zhou et al., 2016; Eriguchi et al., 2018) , instance (re-) weighting is perhaps one of the oldest and most well known (Wang et al., 2017 . It is best illustrated when we are given access to a few target labeled instances (few-shot learning). For example, both Dai et al. (2007) and learn a classifier iteratively by assigning weights to each instance in the source training data. While Dai et al. (2007) assigns weights to both source and target instances, pre-trains a classifier on the source training data and then re-weights the target labeled instances. Crucially, the weights are set to be a function of the error between the prediction made for the instance by the current classifier and the instance's gold label.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 184, |
|
"text": "(Xu and Wan, 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 278, |
|
"text": "(Arnold et al., 2007;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 294, |
|
"text": "Joachims, 2003)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 467, |
|
"text": "(Mogadala and Rettinger, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 486, |
|
"text": "Zhou et al., 2016;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 509, |
|
"text": "Eriguchi et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 604, |
|
"text": "(Wang et al., 2017", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 745, |
|
"text": "Dai et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 871, |
|
"text": "Dai et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In a few-shot case, it is easy to see the appeal of re-weighting target language instances, since an instance that incurs a higher prediction loss can be given a larger weight, so as to improve the classifier. But in a zero-shot case, it seems impossible to compute instance weights based on prediction loss. In this work, we make it possible to assign such weights on instances in zero-shot CLTC. To the best of our knowledge, this is the first attempt to apply such a method to NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions are two-fold: First, we introduce zero-shot instance-weighting, a simple but effective, and extensible framework to enable instance weighted transfer learning for zero-shot CLTC. Second, we evaluate on three cross-lingual classification tasks in seven different languages. Results show that it improves F1 score by up to 4% in single-source transfer and 8% in multi-source transfer, identifying a promising direction for utilizing knowledge from unlabeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We illustrate the zero-shot CLTC framework in Figure 1. The source and target language inputs are x s and x t respectively, during training, only the source label y s is available and the task is to predict the target label y t . We first apply the pre-trained model as an encoder to encode the inputs, the encoded representations are denoted by h s and h t . The figure illustrates four instances for each language in the mini-batch. Then there is an Instance Weighting module to assign weights to source language instances by considering the hidden representations h s and h t . Note that these layers are shared. We train the task layer and fine-tune the pre-trained language model layers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 52, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We compare two multilingual versions of pretrained models for the pre-trained models: multilingual BERT (mBERT) 1 (Devlin et al., 2019) and XLM-Roberta (XLMR) 2 (Conneau et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 135, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 183, |
|
"text": "(Conneau et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We evaluate on multiple tasks in Section 3, so there are different ways to utilize the pre-trained models. For the sentiment and document classification task, we train a fully-connected layer on top of the output of the [CLS] token, which is considered to be the representation of the input sequence. For the opinion target extraction task, we formulate it as sequence labeling task (Agerri and Rigau, 2019; Jebbara and Cimiano, 2019) . To extract such opinion target tokens is to classify each token into one of the following: Beginning, Inside and Outside of an aspect. We follow a typical IOB scheme for the task (Toh and Wang, 2014; San Vicente et al., 2015; \u00c1lvarez-L\u00f3pez et al., 2016) . In this case, each token should have a label, so we have a fully-connected layer that is shared for each token. We note that it may be possible to improve all the results even further by employing more powerful task layers and modules such as conditional random fields (Lafferty et al., 2001 ), but keep things relatively simple since our main goal is to evaluate instance weighting with zero-shot CLTC. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 407, |
|
"text": "(Agerri and Rigau, 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 434, |
|
"text": "Jebbara and Cimiano, 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 636, |
|
"text": "(Toh and Wang, 2014;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 637, |
|
"end": 662, |
|
"text": "San Vicente et al., 2015;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 663, |
|
"end": 690, |
|
"text": "\u00c1lvarez-L\u00f3pez et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 962, |
|
"end": 984, |
|
"text": "(Lafferty et al., 2001", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Models", |
|
"sec_num": "2.1" |
|
}, |
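The IOB formulation above can be illustrated with a short sketch. The tokens, tags, and helper function below are hypothetical examples for illustration, not drawn from the SemEval data or the paper's code:

```python
def extract_opinion_targets(tokens, iob_tags):
    """Recover opinion-target spans from per-token IOB labels:
    B = Beginning of an aspect, I = Inside, O = Outside."""
    spans, current = [], []
    for tok, tag in zip(tokens, iob_tags):
        if tag == "B":                # a new aspect span starts here
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:  # continue the currently open span
            current.append(tok)
        else:                         # O (or a stray I) closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans
```

Under this scheme, the per-token classifier only has to predict one of three labels, and spans are recovered deterministically afterwards.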
|
{ |
|
"text": "The intuition behind instance weighting is the following: if the difference between a source instance and the target language is small, then it shares more common features with the target language, so it should make a larger contribution. For each instance in the source language, a large weight indicates a large contribution by the instance during training. Ideally, when deciding an instance weight, we should compare it with all instances from the target language. But doing so would incur prohibitively excessive computational resources. We thus approximate in small batches and calculate the weights by comparing how similar the instances are to the target ones within a small batch in each training step. Instance Weighting-based Gradient Descent Vanilla mini-batch gradient descent is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 \u2190 \u03b8 \u2212 \u03b1 k i=1 \u2207 \u03b8 f (y i , g \u03b8 (x i ))", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where \u03b1 is the learning rate, \u03b8 is the parameter that we want to update, g \u03b8 (x i ) is the model prediction for x i , \u2207 \u03b8 is the partial derivative, and f (\u2022) is the loss function. We modify Equation 1 to include instance weights:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u03b8 \u2190 \u03b8 \u2212 \u03b1 k i=1 w i \u2022 \u2207 \u03b8 f (y i , g \u03b8 (x i )) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where we assign a weight w i to each instance within a mini-batch, and there is a weighted summation of the gradients in the mini-batch for all the instances and then update the parameter \u03b8. It can be easily extended to multiple source languages, in this case, x s may be training samples from more than one languages. Unsupervised Weighting Metrics In each batch, to obtain weight w i for each source instance i, we follow a similarity-based approach. We define a scoring function to calculate a score between the current source instance representation h i and the target instance representation h j . Then we conduct a summation as the final score for source instance i to the set of target instances within this batch D t . For i \u2208 D s :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
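A minimal sketch of the instance-weighted update in Eq. 2, assuming a toy scalar model g_theta(x) = theta * x and a squared-error loss. Both are illustrative stand-ins, not the paper's actual classifier or loss:

```python
def weighted_sgd_step(theta, batch, weights, lr=0.1):
    """One instance-weighted mini-batch update (Eq. 2):
    theta <- theta - lr * sum_i w_i * grad_theta f(y_i, g_theta(x_i))."""
    grad = 0.0
    for (x, y), w in zip(batch, weights):
        pred = theta * x                  # toy model g_theta(x) = theta * x
        grad += w * 2.0 * (pred - y) * x  # gradient of squared error (pred - y)^2
    return theta - lr * grad
```

An instance with weight 0 contributes nothing to the update, while a heavily weighted instance dominates the step, which is exactly how the framework biases training toward target-like source instances.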
|
{ |
|
"text": "w i = score(i, D t ) = j\u2208Dt score(i, j).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We normalize each w i in this batch to make sure the summation is 1, and they are plugged into Eq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance Weighting", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Multiple ways exist to define a scoring function score(i, j), and a Cosine-Similarity based scoring function is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "score(i, j) = 1 2 ( h i \u2022 h j h i h j + 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
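A minimal sketch of the similarity-based weighting described above, combining the cosine scoring function with the per-batch normalization so the weights sum to 1. Plain Python lists stand in for the hidden representations h_i and h_j; this is an illustration, not the paper's implementation:

```python
import math

def cosine_score(h_i, h_j):
    """score(i, j) = 0.5 * (cos(h_i, h_j) + 1), mapped into [0, 1]."""
    dot = sum(a * b for a, b in zip(h_i, h_j))
    norm_i = math.sqrt(sum(a * a for a in h_i))
    norm_j = math.sqrt(sum(b * b for b in h_j))
    return 0.5 * (dot / (norm_i * norm_j) + 1.0)

def instance_weights(source_batch, target_batch):
    """w_i = sum over j in D_t of score(i, j), normalized to sum to 1."""
    raw = [sum(cosine_score(h_i, h_j) for h_j in target_batch)
           for h_i in source_batch]
    total = sum(raw)
    return [w / total for w in raw]
```

Source instances whose representations point in the same direction as the target-batch representations receive larger normalized weights, and thus larger gradient contributions in Eq. 2.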
|
{ |
|
"text": "We also investigate two other ways for scoring function: Euclidean-Distance based and the CORAL Function (Sun et al., 2016) . While Cosine scoring function performs the best, so we report it in our main experiments and ignoring the other two.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 123, |
|
"text": "(Sun et al., 2016)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We test on three tasks: opinion target extraction, document classification, and sentiment classification 3 . English is the source language for all the experiments. We evaluate four settings: 1) direct adaptation with mBERT-base (mBERT), 2) mBERT with Instance Weighting (mBERT+IW), 3) direct adaption of XLMR-base (XLMR), and 4) XLMR with Instance Weighting (XLMR+IW). Opinion Target Extraction We choose SemEval 2016 Workshop Task 5 (Pontiki et al., 2016) for opinion target extraction. It includes restaurant reviews in five languages 4 : English, Spanish (es), Dutch (nl), Russian (ru) and Turkish (tr). Given a sentence as input, one needs to classify each token into one of the three classes according to the IOB scheme. The training and testing size varies from 144 to 3,655. We compare against a list of models. Pontiki et al. (2014) and Kumar et al. (2016) and Cimiano (2019) applies multi-source (including the target) languages to train a classifier using cross-lingual embeddings and evaluates in a zeroshot manner. We summarize the results in Table 1 . Cross-lingual Document Classification We conduct cross-lingual document classification task on the MLDoc dataset (Schwenk and Li, 2018) . It is a set of news articles with balanced class priors in eight languages; Each language has 1,000 training documents and 4,000 test documents, and splits into four classes. We select a strong baseline (Schwenk and Li, 2018) , which applies pre-trained MultiCCA word embeddings (Ammar et al., 2016) and then trained in a supervised way. Another baseline is a zero-shot method proposed by Artetxe and Schwenk (2019) , which applies a single BiLSTM encoder with a shared vocabulary among all languages, and a decoder trained with parallel corpora. Artetxe and Schwenk (2019) apply mBERT as a zero-shot language transfer. Table 2 shows the results of our comparison study. 
Sentiment Classification Finally, we evaluate sentiment classification task on Amazon multilingual reviews dataset (Prettenhofer and Stein, 2010) . It contains positive and negative reviews from 3 domains, including DVD, Music and Books, in four languages: English (en), French (fr), German (de), and Japanese (ja). For each domain, there are 1,000 positive samples and 1,000 negative samples in each language for both training and testing. We choose the following baselines: translation baseline, UMM (Xu and Wan, 2017) , CLDFA (Xu and Yang, 2017) and MAN-MoE (Chen et al., 2019) . For the translation baseline, we translate the training and testing data for each target language into English using Watson Language Translator 5 , and trained on the mBERT model, which is more (Xu and Wan, 2017) 0.7772 0.7803 0.7870 CLDFA# (Xu and Yang, 2017) 0.8156 0.8207 0.7960 MAN-MoE (Chen et al., 2019) (Artetxe et al., 2017) embeddings. We summarize the results in Table 3 for each domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 457, |
|
"text": "(Pontiki et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 820, |
|
"end": 841, |
|
"text": "Pontiki et al. (2014)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 865, |
|
"text": "Kumar et al. (2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1201, |
|
"text": "(Schwenk and Li, 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1407, |
|
"end": 1429, |
|
"text": "(Schwenk and Li, 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1503, |
|
"text": "(Ammar et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1593, |
|
"end": 1619, |
|
"text": "Artetxe and Schwenk (2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1751, |
|
"end": 1777, |
|
"text": "Artetxe and Schwenk (2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1990, |
|
"end": 2020, |
|
"text": "(Prettenhofer and Stein, 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 2377, |
|
"end": 2395, |
|
"text": "(Xu and Wan, 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 2404, |
|
"end": 2423, |
|
"text": "(Xu and Yang, 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 2428, |
|
"end": 2455, |
|
"text": "MAN-MoE (Chen et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2652, |
|
"end": 2670, |
|
"text": "(Xu and Wan, 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 2699, |
|
"end": 2718, |
|
"text": "(Xu and Yang, 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 2748, |
|
"end": 2767, |
|
"text": "(Chen et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2768, |
|
"end": 2790, |
|
"text": "(Artetxe et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1056, |
|
"end": 1063, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1824, |
|
"end": 1831, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 2831, |
|
"end": 2838, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Results Among the three tasks, both base models achieve competitive results for all languages thanks to the choice of pre-trained models. Instance weighting produces consistent improvements over the base models for nearly all target languages. Especially, in Table 1 , the best model XLMR+IW beats the best baseline by 4.65% on average, improving from XLMR by 4% on Russian and gaining substantially on the other target languages; in Table 2 , XLMR+IW outperforms the baselines, and surpassing XLMR steadily, with impressive gains on Russian, Chinese and Spanish. In Table 3 , the best model shows the same trend in most cases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 441, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 574, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "While our approach is model-agnostic, when the base model or the embedding improves, instance weighting will still help, as we can see the improved results obtained by switching from mBERT to XLMR. Again, the framework is simple but effective given these observations. Most importantly, it requires no additional external data and is easily adaptable into any deep models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Multi-source Expansion Studies show that multilingual transfer outperforms bilingual transfer (Guo et al., 2018) . We run an experiment on the opinion extraction task to illustrate how our approach can be easily extended to enable multi-source transfer, (see Table 5 ). Here, we take the SemEval dataset, and for each target language, we train on the union of all other available languages. We can observe that by easily expanding into multi-source language training, we get a significant boost across the board in all target languages. Specifically, there is a 8.1% improvement on Russian. With easy adaptation, we show the extensibility and that multilingual transfer in zero-shot learning is a promising direction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 112, |
|
"text": "(Guo et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Case Study Intuitively, we should focus on the source instances with a smaller difference with target language, because they contain more common features with the target language. Thus, if we let those instances contribute more, it is possible that the model may perform better on the target language. As an example, Table 5 shows a positively-labeled French review containing adjectives with positive emotions (e.g., \"exceptionnel\", \"superbe\") and the instance weights for two English reviews, where the weights are generated using our best model XLMR+IW. Since English instance 0.3647 One start , for some very acurate dramatic and terrorific facts about the Ebola, but very weak regarding origin of the virus, very unconvincing about possible \"theories\". sound more like that old music of desinformation, he almost blame another monkey for the Ebola...", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 324, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Origin: ...ce livre est exceptionnel..La construction du livre est superbe, l'\u00e9criture magique... Pos Translation: ...this book is outstanding..The construction of book is superb, magical writing ... 1 contains adjectives with positive emotions (e.g. \"favorite\", \"great\"), it has a higher score than English instance 2 containing adjectives with negative emotions (e.g., \"weak\", \"unconvincing\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neg French", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We proposed instance weighting for CLTC and evaluated on 3 fundamental tasks. The benefits of our approach include simplicity and effectiveness by ensuring wide applicability across NLP tasks, extensibility by involving multiple source languages and effectiveness by outperforming a variety of baselines significantly. In the future, we plan to evaluate on more tasks such as natural language inference and abstract meaning representation (Blloshmi et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 462, |
|
"text": "(Blloshmi et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "github.com/google-research/bert/blob/ master/multilingual.md 2 huggingface.co/XLMRoberta-base", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We release our code in https://github.com/ IreneZihuiLi/ZSIW/. 4 The download script was broken and failed to obtain French data, so we do not report results for French.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.ibm.com/watson/services/ language-translator/, version 2018-05-01", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google-research/bert/ blob/master/\\multilingual.md explains the pre-training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Language independent sequence labelling for opinion target extraction", |
|
"authors": [ |
|
{ |
|
"first": "Rodrigo", |
|
"middle": [], |
|
"last": "Agerri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Artificial Intelligence", |
|
"volume": "268", |
|
"issue": "", |
|
"pages": "85--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodrigo Agerri and German Rigau. 2019. Language independent sequence labelling for opinion target ex- traction. Artificial Intelligence, 268:85-95.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "GTI at SemEval-2016 task 5: SVM and CRF for aspect detection and unsupervised aspect-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Tamara\u00e1lvarez-L\u00f3pez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milagros", |
|
"middle": [], |
|
"last": "Juncal-Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Gavilanes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [ |
|
"Javier" |
|
], |
|
"last": "Costa-Montenegro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gonz\u00e1lez-Casta\u00f1o", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "306--311", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-1049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tamara\u00c1lvarez-L\u00f3pez, Jonathan Juncal-Mart\u00ednez, Milagros Fern\u00e1ndez-Gavilanes, Enrique Costa- Montenegro, and Francisco Javier Gonz\u00e1lez- Casta\u00f1o. 2016. GTI at SemEval-2016 task 5: SVM and CRF for aspect detection and unsupervised aspect-based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 306-311, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Massively multilingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Mulcaire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1602.01925v2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925v2.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A comparative study of methods for transductive transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Seventh IEEE International Conference on Data Mining Workshops", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Arnold, Ramesh Nallapati, and William W Co- hen. 2007. A comparative study of methods for transductive transfer learning. In Proceedings of the Seventh IEEE International Conference on Data Mining Workshops, pages 77-82.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning bilingual word embeddings with (almost) no bilingual data", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "451--462", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1042" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "597--610", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00288" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "XL-AMR: Enabling cross-lingual AMR parsing with transfer learning techniques", |
|
"authors": [ |
|
{ |
|
"first": "Rexhina", |
|
"middle": [], |
|
"last": "Blloshmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rocco", |
|
"middle": [], |
|
"last": "Tripodi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2487--2500", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.195" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rexhina Blloshmi, Rocco Tripodi, and Roberto Navigli. 2020. XL-AMR: Enabling cross-lingual AMR pars- ing with transfer learning techniques. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2487-2500, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Multisource cross-lingual model transfer: Learning what to share", |
|
"authors": [ |
|
{ |
|
"first": "Xilun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan Awadallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3098--3112", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1299" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xilun Chen, Ahmed Hassan Awadallah, Hany Has- san, Wei Wang, and Claire Cardie. 2019. Multi- source cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3098-3112, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Xilun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Athiwaratkun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "557--570", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00039" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. Transactions of the Association for Compu- tational Linguistics, 6:557-570.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "XNLI: Evaluating cross-lingual sentence representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruty", |
|
"middle": [], |
|
"last": "Rinott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2475--2485", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1269" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Boosting for transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Wenyuan", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gui-Rong", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007)", |
|
"volume": "227", |
|
"issue": "", |
|
"pages": "193--200", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1273496.1273521" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2007. Boosting for transfer learning. In Ma- chine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, volume 227 of ACM International Conference Proceeding Series, pages 193-200. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Zeroshot cross-lingual classification using multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Eriguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.04686v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zero- shot cross-lingual classification using multilin- gual neural machine translation. arXiv preprint arXiv:1809.04686v1.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Funnelling: A new ensemble method for heterogeneous transfer learning and its application to cross-lingual text classification", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Esuli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alejandro", |
|
"middle": [], |
|
"last": "Moreo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "37", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Esuli, Alejandro Moreo, and Fabrizio Sebas- tiani. 2019. Funnelling: A new ensemble method for heterogeneous transfer learning and its applica- tion to cross-lingual text classification. ACM Trans- actions on Information Systems (TOIS), 37(3):37.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Multi-source domain adaptation with mixture of experts", |
|
"authors": [ |
|
{ |
|
"first": "Jiang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darsh", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4694--4703", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1498" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of ex- perts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694-4703, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Zero-shot cross-lingual opinion target extraction", |
|
"authors": [ |
|
{ |
|
"first": "Soufian", |
|
"middle": [], |
|
"last": "Jebbara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2486--2495", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1257" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soufian Jebbara and Philipp Cimiano. 2019. Zero-shot cross-lingual opinion target extraction. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2486-2495, Min- neapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Transductive learning via spectral graph partitioning", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "290--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 2003. Transductive learning via spectral graph partitioning. In Machine Learning, Proceedings of the Twentieth International Confer- ence (ICML 2003), August 21-24, 2003, Washington, DC, USA, pages 290-297. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "IIT-TUDA at SemEval-2016 task 5: Beyond sentiment lexicon: Combining domain dependency and distributional semantics features for aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Ayush", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Kohail", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asif", |
|
"middle": [], |
|
"last": "Ekbal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1129--1135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-1174" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ayush Kumar, Sarah Kohail, Amit Kumar, Asif Ekbal, and Chris Biemann. 2016. IIT-TUDA at SemEval- 2016 task 5: Beyond sentiment lexicon: Combin- ing domain dependency and distributional seman- tics features for aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1129- 1135, San Diego, California. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 -July 1, 2001, pages 282-289. Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Phrase-based & neural unsupervised machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5039--5049", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1549" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Bilingual word embeddings from parallel and nonparallel corpora for cross-language text classification", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Mogadala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Achim", |
|
"middle": [], |
|
"last": "Rettinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "692--702", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1083" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Mogadala and Achim Rettinger. 2016. Bilin- gual word embeddings from parallel and non- parallel corpora for cross-language text classifica- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 692-702, San Diego, California. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Cross lingual text classification by mining multilingual topics from wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochuan", |
|
"middle": [], |
|
"last": "Ni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian-Tao", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Forth International Conference on Web Search and Web Data Mining, WSDM 2011", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "375--384", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1935826.1935887" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2011. Cross lingual text classification by mining multilingual topics from wikipedia. In Proceedings of the Forth International Conference on Web Search and Web Data Mining, WSDM 2011, Hong Kong, China, February 9-12, 2011, pages 375-384. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "SemEval-2016 task 5: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Mohammad", |

"middle": [], |

"last": "AL-Smadi", |

"suffix": "" |

}, |

{ |

"first": "Mahmoud", |

"middle": [], |

"last": "Al-Ayyoub", |

"suffix": "" |

}, |

{ |

"first": "Yanyan", |

"middle": [], |

"last": "Zhao", |

"suffix": "" |

}, |

{ |

"first": "Bing", |

"middle": [], |

"last": "Qin", |

"suffix": "" |

}, |

{ |

"first": "Orph\u00e9e", |

"middle": [], |

"last": "De Clercq", |

"suffix": "" |

}, |

{ |

"first": "V\u00e9ronique", |

"middle": [], |

"last": "Hoste", |

"suffix": "" |

}, |

{ |

"first": "Marianna", |

"middle": [], |

"last": "Apidianaki", |

"suffix": "" |

}, |

{ |

"first": "Xavier", |

"middle": [], |

"last": "Tannier", |

"suffix": "" |

}, |

{ |

"first": "Natalia", |

"middle": [], |

"last": "Loukachevitch", |

"suffix": "" |

}, |

{ |

"first": "Evgeniy", |

"middle": [], |

"last": "Kotelnikov", |

"suffix": "" |

}, |

{ |

"first": "Nuria", |

"middle": [], |

"last": "Bel", |

"suffix": "" |

}, |

{ |

"first": "Salud Mar\u00eda", |

"middle": [], |

"last": "Jim\u00e9nez-Zafra", |

"suffix": "" |

}, |

{ |

"first": "G\u00fcl\u015fen", |

"middle": [], |

"last": "Eryigit", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--30", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 19-30, San Diego, California. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "SemEval-2014 task 4: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |

{ |

"first": "Ion", |

"middle": [], |

"last": "Androutsopoulos", |

"suffix": "" |

}, |

{ |

"first": "Suresh", |

"middle": [], |

"last": "Manandhar", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/S14-2004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: As- pect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Crosslanguage text classification using structural correspondence learning", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benno", |
|
"middle": [], |
|
"last": "Stein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1118--1127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross- language text classification using structural corre- spondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computa- tional Linguistics, pages 1118-1127, Uppsala, Swe- den. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "EliXa: A modular and flexible ABSA platform", |
|
"authors": [ |
|
{ |

"first": "I\u00f1aki", |

"middle": [], |

"last": "San Vicente", |

"suffix": "" |

}, |

{ |

"first": "Xabier", |

"middle": [], |

"last": "Saralegi", |

"suffix": "" |

}, |

{ |

"first": "Rodrigo", |

"middle": [], |

"last": "Agerri", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "748--752", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S15-2127" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I\u00f1aki San Vicente, Xabier Saralegi, and Rodrigo Agerri. 2015. EliXa: A modular and flexible ABSA plat- form. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015), pages 748-752, Denver, Colorado. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A corpus for multilingual document classification in eight languages", |
|
"authors": [ |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Holger Schwenk and Xian Li. 2018. A corpus for mul- tilingual document classification in eight languages. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Return of frustratingly easy domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Baochen", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiashi", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2058--2065", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baochen Sun, Jiashi Feng, and Kate Saenko. 2016. Re- turn of frustratingly easy domain adaptation. In Pro- ceedings of the Thirtieth AAAI Conference on Arti- ficial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2058-2065. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "DLIREC: Aspect term extraction and term polarity classification system", |
|
"authors": [ |
|
{ |
|
"first": "Zhiqiang", |
|
"middle": [], |
|
"last": "Toh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenting", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "235--240", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/S14-2038" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiqiang Toh and Wenting Wang. 2014. DLIREC: Aspect term extraction and term polarity classifica- tion system. In Proceedings of the 8th Interna- tional Workshop on Semantic Evaluation (SemEval 2014), pages 235-240, Dublin, Ireland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Instance weighting for neural machine translation domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masao", |
|
"middle": [], |
|
"last": "Utiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lemao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kehai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eiichiro", |
|
"middle": [], |
|
"last": "Sumita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1482--1488", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1155" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Better fine-tuning via instance weighting for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Bi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojiang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "7241--7248", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi Wang, Wei Bi, Yan Wang, and Xiaojiang Liu. 2019. Better fine-tuning via instance weighting for text classification. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 7241-7248.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "833--844", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1077" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Towards a universal sentiment classifier in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Kui", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "511--520", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1053" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kui Xu and Xiaojun Wan. 2017. Towards a universal sentiment classifier in multiple languages. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 511- 520, Copenhagen, Denmark. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Cross-lingual distillation for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Ruochen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1415--1425", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1130" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruochen Xu and Yiming Yang. 2017. Cross-lingual distillation for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415-1425, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Transfer learning for crosslingual sentiment classification with weakly shared deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Guangyou", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhao", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [ |
|
"Xiangji" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tingting", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "245--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangyou Zhou, Zhao Zeng, Jimmy Xiangji Huang, and Tingting He. 2016. Transfer learning for cross- lingual sentiment classification with weakly shared deep neural networks. In Proceedings of the 39th In- ternational ACM SIGIR conference on Research and Development in Information Retrieval, pages 245- 254.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Framework Illustration: we illustrate 4 instances for each domain here." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "F1 scores on SemEval for Opinion Target Extraction. # indicates a supervised or semisupervised learning method.", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table><tr><td>Method</td><td>Books</td><td>DVD</td><td>Music</td></tr><tr><td>Translation Baseline</td><td colspan=\"3\">0.7993 0.7789 0.7877</td></tr><tr><td>UMM#</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "F1 scores on MLDoc for Cross-lingual Document Classification. # indicates a supervised or semisupervised learning method.", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"5\">: F1 scores on Amazon Review for Sentiment</td></tr><tr><td colspan=\"5\">Classification group by domains: Each cell shows the</td></tr><tr><td colspan=\"5\">average accuracy of the three languages.# indicates a</td></tr><tr><td colspan=\"5\">supervised or semi-supervised learning method.</td></tr><tr><td>Method</td><td>es</td><td>nl</td><td>ru</td><td>tr</td></tr><tr><td>XLMR</td><td colspan=\"4\">0.690 0.700 0.664 0.674</td></tr><tr><td colspan=\"5\">Single-source 0.704 0.714 0.706 0.682</td></tr><tr><td>Multi-source</td><td colspan=\"4\">0.735 0.738 0.745 0.688</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table><tr><td>: Multi-source F1 scores on SemEval for Opin-</td></tr><tr><td>ion Target Extraction: transfer from single-source and</td></tr><tr><td>multi-source using XLMR+IW model.</td></tr><tr><td>confident in English 6 . Both UMM and CLDFA</td></tr><tr><td>utilized more resources or tools like unlabeled cor-</td></tr><tr><td>pora or machine translation. MAN-MoE is the only</td></tr><tr><td>zero-shot baseline method. It applies MUSE (Lam-</td></tr><tr><td>ple et al., 2018) and VecMap</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table><tr><td>Language</td><td>Score</td><td>Content</td><td>Label</td></tr><tr><td>English</td><td colspan=\"3\">0.5056 Pos</td></tr><tr><td>Instance 2</td><td/><td/><td/></tr><tr><td>English</td><td/><td/><td/></tr><tr><td>Instance 1</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "...I liked the book. Kaplan has consistently been one of my favorite authors (Atlantic Monthly) His theme is consistent: many nation states are not really nation states... Kaplan had great hope for the future of Iran as they struggle with theocracy...", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "A positive scenario: score comparison within the same batch.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |