{
"paper_id": "C18-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:08:09.441758Z"
},
"title": "Adversarial Feature Adaptation for Cross-lingual Relation Classification",
"authors": [
{
"first": "Bowei",
"middle": [],
"last": "Zou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Zengzhuang",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Relation Classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we come up with a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art.",
"pdf_parse": {
"paper_id": "C18-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Relation Classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we come up with a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation classification aims to identify the semantic relationship between two nominals labeled in a given sentence. It is critical to many natural language processing (NLP) applications, such as question answering and knowledge base population. For example, the following sentence contains an instance of the Content-Container(e 2 ,e 1 ) relation between two labeled entity mentions \"e 1 =cartridge\" and \"e 2 =ink\". An open challenge is how to train a model which is suitable for languages with insufficient available data of relation classification, since manual annotation is time-consuming and human-intensive. This makes it difficult to transferrably use the existing well-trained classification models in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle this problem, we propose an adversarial feature adaptation approach to transfer latent feature representations from the source language with rich labeled data to the target language with only unlabeled data. Such an approach are both discriminative for relation classification and invariant across languages. This is largely motivated by the adversarial mechanism which has been effectively applied to measure the similarity between distributions in a variety of scenarios, such as domain adaptation (Bousmalis et al., 2016) and multi-modal representation learning (Park and Im, 2016) .",
"cite_spans": [
{
"start": 510,
"end": 534,
"text": "(Bousmalis et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 575,
"end": 594,
"text": "(Park and Im, 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we build two counterpart networks, using convolutional neural networks (CNNs), for source language and target language, to generate the latent feature representations respectively. Then, we use a rival discriminator to identify the correct source of feature representations. At the training step, the network of the source language is trained to maximize the performance on the annotated dataset, while the network of the target language is trained to imitate the feature representations of the source language by rival confusing the discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform the proposed approach on ACE 2005 multilingual training corpus by regarding English as the richly-labeled language (source) and Chinese as the poorly-labeled language (target). Our approach achieves an F1-score of 70.50% with a significant improvement of 5.7%, compared to the state-of-the-art method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this study are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a novel neural feature adaptation framework by leveraging a generative adversarial network to transfer feature representations from a richly-labeled language to a poorly-labeled language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, this is the first study on feature adaptation for cross-lingual relation classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Experimental results show that the latent feature representations can be effectively transferred from the source language to the target language. This enables the adaptation of the existing manually annotated resource in one language to a new language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In a slightly better scenario that there are a small-scale annotated data available in the target language, our adversarial feature adaptation approach can also be effectively cooperated with the supervised model to further improve the overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. In Section 2, we overview the related work. In Section 3, we introduce details of the proposed adversarial feature adaptation approach. We show our experimental results and discussions in Section 4. Finally, we conclude the paper in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we briefly review the recent progress in cross-lingual relation classification and existing studies on adversarial adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Labeled data on relation classification are not evenly distributed among languages. While there are various of annotated datasets in English, such as the SemEval'10 Task-8 dataset (Hendrickx et al., 2010) and the SemEval'18 task-7 dataset (G\u00e1bor et al., 2018) , annotated datasets in other languages are few.",
"cite_spans": [
{
"start": 180,
"end": 204,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 239,
"end": 259,
"text": "(G\u00e1bor et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Relation Classification",
"sec_num": "2.1"
},
{
"text": "Traditional studies for relation classification usually perform supervised machine learning models trained on mono-lingual labeled datasets, which either rely on a set of linguistic or semantic features (Kambhatla, 2004; Suchanek et al., 2006) , or apply tree kernel-based features to represent the input sentences (Bunescu and Mooney, 2005; Qian et al., 2008) . Recently, deep neural networks (Zeng et al., 2015; dos Santos et al., 2015) and attention mechanism (Wang et al., 2016) show the effectiveness in relation classification. However, the training of neural network relies on large-scale labeled instances. This makes it difficult to re-construct such classification models for the poorly-labeled language.",
"cite_spans": [
{
"start": 203,
"end": 220,
"text": "(Kambhatla, 2004;",
"ref_id": "BIBREF9"
},
{
"start": 221,
"end": 243,
"text": "Suchanek et al., 2006)",
"ref_id": "BIBREF19"
},
{
"start": 315,
"end": 341,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 342,
"end": 360,
"text": "Qian et al., 2008)",
"ref_id": "BIBREF15"
},
{
"start": 394,
"end": 413,
"text": "(Zeng et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 414,
"end": 438,
"text": "dos Santos et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 463,
"end": 482,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Relation Classification",
"sec_num": "2.1"
},
{
"text": "Most of existing studies have attempted to leverage parallel data or a knowledge-based system to transfer effective information from the richly-labeled language to the poorly-labeled language. Qian et al. (2014) proposed a bilingual active learning paradigm for Chinese and English relation classification with pseudo parallel corpora and entity alignment. Kim et al., (2014) proposed a cross-lingual annotation projection strategy by employing parallel corpora for relation detection. Faruqui and Kumar (2015) also present a cross-lingual annotation projection method by using machine translation results, rather than parallel data. Verga et al. (2016) performs multi-lingual relation classification by a knowledge base. Min et al. (2017) drive a classifier to learn discriminative representations by joint supervision of classification (softmax) loss and ideal representation loss.",
"cite_spans": [
{
"start": 193,
"end": 211,
"text": "Qian et al. (2014)",
"ref_id": "BIBREF14"
},
{
"start": 357,
"end": 375,
"text": "Kim et al., (2014)",
"ref_id": "BIBREF10"
},
{
"start": 486,
"end": 510,
"text": "Faruqui and Kumar (2015)",
"ref_id": "BIBREF4"
},
{
"start": 634,
"end": 653,
"text": "Verga et al. (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Relation Classification",
"sec_num": "2.1"
},
{
"text": "Instead of exploiting external resources and manually selecting a closeness metric, we come up with an adversarial mechanism to provide an adaptive metric for feature adaptation from the richly-labeled language to the poorly-labeled language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Relation Classification",
"sec_num": "2.1"
},
{
"text": "Recently, the generative adversarial networks (GAN) have become increasingly popular, especially in the area of deep generative unsupervised modeling (Goodfellow et al., 2014; Makhzani et al., 2016) . For adversarial adaptation, Ganin et al. (2017) proposed the domain adversarial neural networks (DANN) to learn discriminative but domain-invariant representations, transferring the information from the source domain to the target domain. Different from their study, our approach aims to find sharable languageindependent latent feature representations for cross-lingual relation classification.",
"cite_spans": [
{
"start": 150,
"end": 175,
"text": "(Goodfellow et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 176,
"end": 198,
"text": "Makhzani et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 229,
"end": 248,
"text": "Ganin et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Adaptation",
"sec_num": "2.2"
},
{
"text": "There have been some previous work applying adversarial adaptation technique to NLP tasks, such as that for sentiment analysis (Chen et al., 2016) and parsing (Sato et al., 2017) . These studies learn the domain-invariant or domain-specific features by a shared network. Our work differs from them, since we force a second network with identical structure to learn the latent feature representations from the supervised network. Qin et al. (2017) propose a feature imitation approach. An adversarial mechanism is used between explicit and implicit discourse relation samples. Different from their study, we migrates the feature representations from one language to another (non-parallel). To the best of our knowledge, this is the first work to employ the adversarial feature adaptation for cross-lingual relation classification.",
"cite_spans": [
{
"start": 127,
"end": 146,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 159,
"end": 178,
"text": "(Sato et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 429,
"end": 446,
"text": "Qin et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Adaptation",
"sec_num": "2.2"
},
{
"text": "The common semantic information between different language motivates our adversarial feature adaptation approach. In this section, let us take a glance at the framework first, and then the relation classification networks (sentence encoders) and the adversarial training procedure. Figure 1 illustrates the schematic overview of the framework which consists of four key components: 1) a Richly-labeled Sentence Encoder (RSE) with English sentences as the inputs, 2) a Poorly-labeled Sentence Encoder (PSE) which takes translated Chinese sentences as the input, 3) a language discriminator D distinguishing between the feature representations from the above two encoders, and 4) a relation classifier C to predict the relation label. In general, a GAN consists of a generative network G and a discriminator D, in which G generates instances by a distribution P G(x) , and D aims to determining whether a instance is from P G(x) or the real data distribution P data(x) . In our approach, the PSE is taken as the generative network which generates the feature representations H p to confuse the discriminator.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Adversarial Feature Adaptation for Cross-lingual Relation Classification",
"sec_num": "3"
},
{
"text": "In training step, the relation classifier aims to predict the labels, while the language discriminator attempts to distinguish between the feature representations extracted by the two sentence encoders (H p or H r ). In test step, we utilize the PSE to encode the input sentences in target language, and apply the same classifier to predict to relation labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Feature Adaptation for Cross-lingual Relation Classification",
"sec_num": "3"
},
{
"text": "As a common neural network model that yields good performance for monolingual relation classification, we employ CNN to transform a sentence with pairs of entity mentions into a distributed representation H. Note that, this plug-in architecture can also be implemented with the other networks, e.g., a long short-term memory network (LSTM) 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Components",
"sec_num": "3.1"
},
{
"text": "Following Zeng et al. 2014's work, we build an embedding layer to encode words, word positions, and entity types by real-valued vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},
{
"text": "Given an input sentence S = (w 1 , w 2 , ..., w n ), we first transform every word into a real-valued vector of dimension d w using a word embedding matrix W E \u2208 R dw\u00d7|V | , where V is the input vocabulary. Since the structures of RSE and PSE are the same, it is necessary if the word representations for both languages have a shared vocabulary. Therefore, bilingual word embeddings (Shi et al., 2015) are employed to map words from different languages into the same feature space.",
"cite_spans": [
{
"start": 383,
"end": 401,
"text": "(Shi et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},
{
"text": "To capture the informative features of the relationship between words and the entity mentions, we map the relative distances to entity mentions of each word to two real-valued vectors of dimension d p using a position embedding matrix P E \u2208 R dp\u00d7|D| , where D is the set of relative distances which are mapped to a vector initialized randomly (dos Santos et al., 2015) . For each word, we obtain two position vectors with respect to the two entity mentions.",
"cite_spans": [
{
"start": 343,
"end": 368,
"text": "(dos Santos et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},
{
"text": "For each word, we also incorporate its entity type embedding to reflect the relationship between the entity type and the relation type. Each word is mapped to a real-valued vector using embedding matrix ET E \u2208 R det\u00d7|E| , where E is the set of entity types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},
{
"text": "Finally, we represent an input sentence as a vector sequence w = {w_1, w_2, ..., w_n} with embedding dimension d = d_w + 2d_p + d_et.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},
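{
"text": "To make the embedding layer concrete, the following is a minimal sketch, assuming PyTorch (the paper does not name its implementation framework); the class and argument names are ours, not the authors'. With the paper's settings d_w = 50, d_p = 20, d_et = 30, the concatenated dimension is d = 50 + 2*20 + 30 = 120.\n\nimport torch\nimport torch.nn as nn\n\nclass EmbeddingLayer(nn.Module):\n    # Concatenates word, two position, and entity-type embeddings: d = d_w + 2*d_p + d_et\n    def __init__(self, vocab_size, n_positions, n_entity_types, d_w=50, d_p=20, d_et=30):\n        super().__init__()\n        self.word = nn.Embedding(vocab_size, d_w)        # W_E; may be initialized with bilingual embeddings\n        self.pos1 = nn.Embedding(n_positions, d_p)       # P_E: relative distance to entity mention 1\n        self.pos2 = nn.Embedding(n_positions, d_p)       # P_E: relative distance to entity mention 2\n        self.etype = nn.Embedding(n_entity_types, d_et)  # ET_E\n\n    def forward(self, words, dist1, dist2, etypes):\n        # each input: LongTensor of shape (batch, seq_len)\n        return torch.cat([self.word(words), self.pos1(dist1), self.pos2(dist2), self.etype(etypes)], dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": null
},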
{
"text": "After encoding the input sentence, a convolution layer extracts local features by sliding a window of length w over the sentence and perform a convolution within each sliding window. The output for the ith sliding window is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Layer",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i = W c w i\u2212w+1:i + b,",
"eq_num": "(1)"
}
],
"section": "Convolution Layer",
"sec_num": null
},
{
"text": "where w i\u2212w+1:i denotes the concatenation of w word embeddings within the ith window, W c \u2208 R dc\u00d7(w\u00d7d) is the convolution matrix and b \u2208 R dc is the bias vector (d c is the dimension of output of the convolution layer).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Layer",
"sec_num": null
},
{
"text": "We merge all local features via a max-pooling layer and apply a hyperbolic tangent function to obtain a fixed-sized final representations. The ith element of the output vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-pooling Layer",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x \u2208 R d c is [x] j = tanh max i p ij .",
"eq_num": "(2)"
}
],
"section": "Max-pooling Layer",
"sec_num": null
},
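{
"text": "A minimal sketch of Eqs. (1) and (2), again assuming PyTorch with names of our own choosing: a window-w convolution followed by max-over-time pooling and a tanh. The values of d_c and w below are placeholders; the paper's actual settings are listed in its Table 2.\n\nimport torch\nimport torch.nn as nn\n\nclass CNNEncoder(nn.Module):\n    # p_i = W_c w_{i-w+1:i} + b (Eq. 1), then [x]_j = tanh(max_i p_ij) (Eq. 2)\n    def __init__(self, d, d_c=200, w=3):\n        super().__init__()\n        # a 1-D convolution over the time axis multiplies W_c with the\n        # concatenation of w consecutive d-dimensional embeddings\n        self.conv = nn.Conv1d(d, d_c, kernel_size=w, padding=w - 1)\n\n    def forward(self, emb):\n        # emb: (batch, seq_len, d); Conv1d expects (batch, d, seq_len)\n        p = self.conv(emb.transpose(1, 2))  # (batch, d_c, n_windows)\n        x, _ = torch.max(p, dim=2)          # max-over-time pooling\n        return torch.tanh(x)                # H, of shape (batch, d_c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-pooling Layer",
"sec_num": null
},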
{
"text": "Classifier and Discriminator",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-pooling Layer",
"sec_num": null
},
{
"text": "While the Classifier C is a fully-connected layer followed by a softmax classifier, the Discriminator D is a binary classifier which is implemented as a fully-connected neural network with a sigmoid activation function. Discriminator D takes the feature representations as input, to discriminate whether the feature representation comes from RSE or from PSE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-pooling Layer",
"sec_num": null
},
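{
"text": "A minimal sketch of C and D under the same assumptions; the hidden width of D is our guess, since the paper only says that D is a fully-connected network with a sigmoid activation.\n\nimport torch.nn as nn\n\nclass RelationClassifier(nn.Module):\n    # Classifier C: a fully-connected layer; the softmax is applied by the\n    # cross-entropy loss during training (or explicitly at test time)\n    def __init__(self, d_c=200, n_relations=7):\n        super().__init__()\n        self.fc = nn.Linear(d_c, n_relations)\n\n    def forward(self, h):\n        return self.fc(h)  # logits over relation types\n\nclass LanguageDiscriminator(nn.Module):\n    # Discriminator D: estimates the probability that a feature vector comes from the RSE\n    def __init__(self, d_c=200, d_hidden=100):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(d_c, d_hidden), nn.ReLU(),\n            nn.Linear(d_hidden, 1), nn.Sigmoid())\n\n    def forward(self, h):\n        return self.net(h).squeeze(-1)  # (batch,) probabilities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-pooling Layer",
"sec_num": null
},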
{
"text": "Although the aforementioned architecture could be applied to cross-lingual relation classification by leveraging the RSE module to train on the source language and the MT module to translate the instances from the target language to the source language, there is no guarantee that the latent feature representations of the source language exist in the target language, or vice versa. On the other hand, the error propagation of the MT module should also be considered. Therefore, we introduce adversarial training into our cross-lingual relation classification framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.2"
},
{
"text": "[Algorithm 1: Adversarial training. 1: pre-train the RSE and the classifier C on the source language through Eq.(3); 2: repeat; 3: train the discriminator D through Eq.(4); 4: train the PSE and the classifier C through Eq.(7); 5: until convergence.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 illustrates the adversarial training procedure. First, we pre-train the RSE and the classifier C by minimizing the relation classification (RC) loss function on source language (step line 1). Then we interleave the optimization of the adversarial loss function and the RC loss function on the target language at each iteration (step line 2-5). Finally, if the discriminator cannot tell the language of a input sentence using the adversarially trained features, then those features from PSE are effectively languageinvariant. Upon successful training, the feature representations (H p ) are thus encouraged to be both discriminative for relation classification and invariant across languages. Referring to the expression of Qin et al. (2017) , the three loss function of our adversarial training are as follows.",
"cite_spans": [
{
"start": 735,
"end": 752,
"text": "Qin et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.2"
},
{
"text": "We denote the parameters of the RSE and classifier C as \u03b8 r and \u03b8 C , respectively, and the objective can be learned by minimizing the cross-entropy loss as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Source Language",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L rc-sou (\u03b8 r , \u03b8 C ) = E (xe,y)\u223cdata [J (C(H r (x e ; \u03b8 r ); \u03b8 C ), y)] ,",
"eq_num": "(3)"
}
],
"section": "RC Loss for Training on Source Language",
"sec_num": null
},
{
"text": "where E (xe,y)\u223cdata [\u2022] denotes the expectation in terms of the data distribution, J (p, y) = \u2212 k (y = k) log p k is the cross-entropy loss between predictive distribution p and ground-truth label y, C(H r (x)) is the final prediction of classifier C when the input is the feature representation of RSE (H r (x)), (x e , y) is the pair of input and output of relation classification model, where x e is an English instance, and y is the relation label.",
"cite_spans": [
{
"start": 20,
"end": 23,
"text": "[\u2022]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Source Language",
"sec_num": null
},
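{
"text": "In code, Eq. (3) reduces to a standard cross-entropy over the classifier's logits. A sketch under the same PyTorch assumption, where rse bundles the embedding layer and the CNN encoder above:\n\nimport torch.nn.functional as F\n\n# Eq. (3): L_rc-sou = E[J(C(H_r(x_e)), y)] for a labeled English batch (x_e, y)\ndef rc_loss_source(rse, classifier, x_e, y):\n    h_r = rse(x_e)                              # H_r(x_e; theta_r)\n    return F.cross_entropy(classifier(h_r), y)  # averaged over the batch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Source Language",
"sec_num": null
},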
{
"text": "The adversarial loss L adv is used to train the discriminator D to make a correct estimation that where the feature representation comes from. Formally, the parameters of the discriminator D is denoted as \u03b8 D .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Loss",
"sec_num": null
},
{
"text": "The training objective of D is to distinguish the input source of feature representation as far as possible:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Loss",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 D L adv = E (xe,xc,y)\u223cdata [log(1 \u2212 D(H r (x e ; \u03b8 D ))) + log D(H p (x c ; \u03b8 D ))] ,",
"eq_num": "(4)"
}
],
"section": "Adversarial Loss",
"sec_num": null
},
{
"text": "where D(H) denotes the output of discriminator D to estimate the probability that H comes from the RSE rather than the PSE, C(H l (x)) is the final prediction of classifier C when the input is the feature representation of PSE (H l (x)), and (x c , y) is the pair of input and output of relation classification model, where x c is a translated Chinese instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Loss",
"sec_num": null
},
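{
"text": "A sketch of the discriminator update. Eq. (4) drives D(H_r) toward 1 and D(H_p) toward 0; the sketch below expresses the same direction with the standard binary cross-entropy form, which is bounded and numerically stable (our choice, not stated in the paper).\n\nimport torch\nimport torch.nn.functional as F\n\n# Eq. (4): RSE features get label 1, PSE features get label 0\ndef discriminator_loss(disc, h_r, h_p):\n    # detach() so that this step updates only the discriminator parameters theta_D\n    p_r = disc(h_r.detach())  # should approach 1\n    p_p = disc(h_p.detach())  # should approach 0\n    return (F.binary_cross_entropy(p_r, torch.ones_like(p_r))\n            + F.binary_cross_entropy(p_p, torch.zeros_like(p_p)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Loss",
"sec_num": null
},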
{
"text": "We denote the parameters of the PSE as \u03b8 p . The training objective is to minimize the discriminator's chance of correctly telling apart the features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L p (\u03b8 p ) = E xc\u223cdata [log D(H p (x c ; \u03b8 p ))] .",
"eq_num": "(5)"
}
],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "The parameters of classifier C is denoted as \u03b8 C . The training objective of C is to correctly classify relations. The objective can be learned by minimizing the cross-entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L C (\u03b8 C ) = E (xc,y)\u223cdata [J (C(H p (x c ; \u03b8 C ), y)] .",
"eq_num": "(6)"
}
],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "Finally, we combine the above objectives Eq.(5) and (6) of the relation classifiers, and minimize the joint loss: min",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8p,\u03b8 C L rc-tar = \u03bbL p (\u03b8 p ) + L C (\u03b8 C )",
"eq_num": "(7)"
}
],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
{
"text": "where \u03bb is a balancing parameters calibrating the weights of the classification loss and the featureregulating loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},
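{
"text": "Putting Eqs. (3)-(7) together, Algorithm 1 can be sketched as follows (PyTorch assumed; loader and helper names, and the default values, are ours). As is common for GAN generators, the sketch trains the PSE with the non-saturating form -log D(H_p), which rewards fooling the discriminator; this is our reading of the stated intent to minimize the discriminator's chance of correctly telling apart the features. Each target batch x_c is a machine-translated copy of labeled English instances, so it carries the English labels y_c.\n\nimport torch\nimport torch.nn.functional as F\n\ndef adversarial_train(rse, pse, classifier, disc, source_loader, target_loader, lam=0.1, epochs=10):\n    # assumes RSE and C were pre-trained on the source language via Eq. (3) (Algorithm 1, line 1);\n    # lam is the balancing parameter lambda of Eq. (7); the paper optimizes with ADADELTA\n    opt_d = torch.optim.Adadelta(disc.parameters())\n    opt_p = torch.optim.Adadelta(list(pse.parameters()) + list(classifier.parameters()))\n\n    for _ in range(epochs):\n        for (x_e, _), (x_c, y_c) in zip(source_loader, target_loader):\n            h_r, h_p = rse(x_e), pse(x_c)\n\n            # Eq. (4): train D to tell RSE features (label 1) from PSE features (label 0)\n            p_r, p_p = disc(h_r.detach()), disc(h_p.detach())\n            loss_d = (F.binary_cross_entropy(p_r, torch.ones_like(p_r))\n                      + F.binary_cross_entropy(p_p, torch.zeros_like(p_p)))\n            opt_d.zero_grad(); loss_d.backward(); opt_d.step()\n\n            # Eq. (7): lambda * L_p + L_C, updating theta_p and theta_C\n            p_p = disc(h_p)\n            loss_p = F.binary_cross_entropy(p_p, torch.ones_like(p_p))  # fool D (L_p)\n            loss_c = F.cross_entropy(classifier(h_p), y_c)              # Eq. (6), L_C\n            loss = lam * loss_p + loss_c\n            opt_p.zero_grad(); loss.backward(); opt_p.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RC Loss for Training on Target Language",
"sec_num": null
},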
{
"text": "In this section, we first describe our datasets, detailed settings, and evaluation metrics used in the experiments. Then we show the effectiveness of our adversarial feature adaptation framework for cross-lingual relation classification. Finally, we further investigate the semi-supervised settings, where a small amount of labeled data of the target language exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "4"
},
{
"text": "In this paper we regard English as the richly-labeled (source) language and Chinese as the poorly-labeled (target) language. Note that, in fact, our model could generalize to any pair of source and target languages in principle. We conduct our experiments on the commonly used ACE 2005 multilingual training corpus (Walker et al., 2006) 2 dataset. Table 1 shows the detailed descriptions of the datasets. We utilize all of the seven types of labeled relation mentions, and evaluate all of the systems by using the micro-and macro-F1 scores over the six types of relations excluding \"Other\". The English and Chinese datasets are not translation of each other.",
"cite_spans": [
{
"start": 315,
"end": 336,
"text": "(Walker et al., 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "We pre-train bilingual word embeddings by CLSim 3 (Shi et al., 2015) , which provides 50-dimensional embeddings for 800k parallel sentence pairs. We set the dimensions of the position embeddings and the entity type embeddings to 20 and 30, respectively, with random initialization following a continuous uniform distribution. To obtain the translated sentence pairs automatically and easily, we employ the commercial Google Translate engine 4 , which is a highly engineered machine translation system. The mention boundaries and the entity type tags are provided by the ACE 2005 multilingual training corpus. All the models are optimized using ADADELTA (Zeiler, 2012) . We pick the parameters showing the best performance on the development set (in Column 4, Table 1 ) via early stopping, and report the scores on the test set (in Column 5, Table 1 ). Table 2 shows the best settings of model parameters in our experiments.",
"cite_spans": [
{
"start": 50,
"end": 68,
"text": "(Shi et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 653,
"end": 667,
"text": "(Zeiler, 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 759,
"end": 766,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 841,
"end": 848,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 852,
"end": 859,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "We compare with the following baselines for cross-lingual relation classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 CNN-MT-Source All the instances in the English training set (the \"Source\" Column in Table 1) are translated into Chinese (target language) by Google Translator. A CNN model with the same structure of PSE is trained by leveraging this new translated training data on Chinese.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 CNN-MT-Target Contrary to the CNN-MT-Source, all the instances in the Chinese test set (the \"Target-test\" Column in Table 1 ) are directly translated into English (source language) by Google Translator. A CNN model with the same structure of RSE is trained by leveraging the English training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 BiLSTM-MT-Source and BiLSTM-MT-Target They are similar to the settings of the CNN-MT-Source and the CNN-MT-Target, respectively. For further examining the robustness of our adversarial feature adaptation framework, we replace the sentence encoder networks (CNNs) with the BiLSTM networks for both RSE and PSE. The BiLSTM model is proposed by Zhang et al. (2015) , which is one of the state-of-the-art mono-lingual relation classification system. For fair comparison, we only retain the word embeddings, the position embeddings, and the entity type embeddings.",
"cite_spans": [
{
"start": 344,
"end": 363,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "\u2022 BI-AL This model is proposed by Qian et al. (2014) , which is a bilingual active learning system for Chinese and English relation classification with pseudo parallel corpora and entity alignment.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "Qian et al. (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Adversarial Feature Adaptation Table 4 : Performance of systems on the opposite direction for cross-lingual relation classification, in which Chinese is treated as the source language and English is treated as the target language (contrary to Table 3 ). The partition of English dataset is the same as that in Table 1 (i.e. the training set 70%, the development set 10%, and the test set 20%).",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 4",
"ref_id": null
},
{
"start": 243,
"end": 250,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 310,
"end": 317,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "of 70.50%, which outperform the active learning based system (BI-AL) with a relative improvement of about 5%. It indicates that our adversarial feature adaptation framework substantially outperforms the state-of-the-art model for cross-lingual relation classification without any annotated data on the target language. Moreover, we pay more attention to evaluate the proposed adversarial feature adaptation model against bidirectional MT-based baselines, including 1) translate the Chinses text into English, then leveraging the English relation classification model (trained on English dataset) to identify the entity type (CNN-MT-Target and BiLSTM-MT-Target), and 2) translate the English training set into Chinese, then train an Chinese relation classification model to classify the entity types (CNN-MT-Source and BiLSTM-MT-Source). Our approach improves about 2% of macro-F1 score over the machine translation based systems. Besides, the performances of CNN-MT-Source and CNN-MT-Target are comparative, which indicates that the translated direction may be insignificant for cross-lingual relation classification via MT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "We can also see that all of the CNN-based adversarial feature adaptation models achieve better results than the corresponding BiLSTM-based ones within the same settings. The reason might be that the BiL-STM is fitter for encoding order information and long-range context dependency for sequence labeling problem, while the CNN is suited to extracting local and position-invariant features. For relation classification, the essential features are always distributed around the entity mentions, which would be better utilized by CNN 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "To validate the language independence of our feature adaptation framework, we also implement our adversarial feature adaptation systems and the MT-based systems in Table 1 on the same corpus from the opposite direction. Specifically, we regard Chinese as the source language and English as the target, then learn the feature representations from the Chinese dataset to predict the relation labels of the English dataset. Table 4 indicates that our approach also outperforms the baselines on learning the feature representations from Chinese to English. Besides, the results further validate the conclusions mentioned above: 1) Our system outperforms the MT-based systems for cross-lingual relation classification, and 2) the CNN-based systems achieve better performances than the BiLSTM-based systems in the same settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 171,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 421,
"end": 428,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "To further provide an empirical insight into the relationship between 1) the data size of labeled training set for supervised relation classification (the blue curve in Figure 2 ), and 2) the data size required by our adversarial feature adaptation system with only unlabeled sentences (the orange dashed curve in Figure 2 ), we simulate a supervised scenario by adding labeled Chinese instances for training a CNN-based relation classification system (CNN-CH in Table 5 ). We start from adding 100 labeled sentences and keep adding 100 sentences each time until 900. As shown in Figure 2 , when adding the same number of labeled sentences, the CNN-CH system can better utilize the extra supervision. The margin is naturally decreasing as more supervision is incorporated, until the training set contains more than 700 instances. It indicates that our adversarial feature adaptation system can achieve comparable performance to the supervised system trained on a small labeled dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 314,
"end": 323,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 464,
"end": 471,
"text": "Table 5",
"ref_id": null
},
{
"start": 581,
"end": 589,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "Another interesting find is that it seems a very small amount of supervision (e.g., 500 labeled instances) could significantly help the supervised relation classification system. However, it is worth noting that the manual annotation on such amount of dataset is still time-consuming and human-intensive, since the people should annotate not only the entity mentions and their relation, but also the external information such as the lexical or syntactic features if necessary. Figure 3 compares the performances of our CNN-GAN system (the blue curve in Figure 3 ) and a bilingual active learning system BI-AL (Qian et al., 2014 ) (the orange dashed curve in Figure 3 ) when training with different sizes of labeled data from the source language. As we can see, the margin of our approach is not significant when the size of the source-language instances is relatively small. When using 10% of the training data, our system only declines 6.35% of performance (from 70.50% to 64.15%), while for the bilingual active learning system, the gap widens to 20.91% (from 64.80% to 42.89%). It indicates that our feature adaptation approach can efficiently utilize the translated supervision from the source language.",
"cite_spans": [
{
"start": 609,
"end": 627,
"text": "(Qian et al., 2014",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 553,
"end": 561,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 658,
"end": 666,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "In this paper, we mainly focus on the scenario of unsupervised cross-lingual relation classification, i.e., without any labeled data in the target language. However, for broader comparison, we also test our framework when a few labeled training instances of the target language are available. In fact, our approach can easily be generalized to a semi-supervised setting. We employ a simple method that directly combines the softmax layers of the CNN-CH system and the CNN-GAN system (sketched after the table caption below), to integrate the supervision from the target language into relation classification. Table 5 lists the performances of these semi-supervised relation classification systems. We see that our ensemble model (*-CH-EN), which can employ the labeled data not only from the target language but also from the source language, slightly improves over the supervised model (*-CH).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Scenario",
"sec_num": null
},
{
"text": "Table 5 : Performance of the semi-supervised relation classification systems. bilingual-Joint-IRL system: a bilingual approach with joint supervision of classification loss and ideal representation loss (Min et al., 2017) ; *-CH systems: a CNN/BiLSTM with the same architecture as our CNN/BiLSTM feature extractor (trained on 7,556 instances of the CH training set); *-CH-EN systems: a simple ensemble that directly combines the softmax layers of the CNN-CH system (trained on 3,778 instances of the CH training set) and the CNN-GAN system (trained on 3,778 instances of the EN training set).",
"cite_spans": [
{
"start": 204,
"end": 222,
"text": "(Min et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Scenario",
"sec_num": null
},
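{
"text": "A minimal sketch of the ensemble, under one simple reading of 'directly combine the softmax layers' (averaging the two predictive distributions; the paper does not spell out the combination):\n\nimport torch\nimport torch.nn.functional as F\n\n# combine CNN-CH and CNN-GAN by averaging their softmax outputs\ndef ensemble_predict(logits_ch, logits_gan):\n    probs = 0.5 * (F.softmax(logits_ch, dim=-1) + F.softmax(logits_gan, dim=-1))\n    return probs.argmax(dim=-1)  # predicted relation labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Scenario",
"sec_num": null
},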
{
"text": "It indicates that our adversarial model can transfer useful knowledge and information from the source language to the target language for relation classification, by which both language-specific and language-invariant features can be learned. Besides, better ensemble methods could be attempted to exploit cross-language information for semi-supervised relation classification. In addition, our model obtains an improvement over the previous best-performing system of semi-supervised relation classification (bilingual-Joint-IRL in Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 533,
"end": 540,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised Scenario",
"sec_num": null
},
{
"text": "In this paper we introduce an adversarial feature adaptation approach for cross-lingual relation classification without labeled dataset, which leverages the data on richly-labeled language to help relation classification on the poorly-labeled language. We evaluate our approach on ACE 2005 multilingual training corpus. Experimental results show that this approach can effectively transfer feature representations from a richly-labeled language to another poorly-labeled language, and outperforms several baselines including active learning models and highly competitive MT-based baselines. The code is available at https://github.com/zoubowei/feature_adaptation4RC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Theoretically speaking, our adversarial feature adaptation approach can be flexibly implemented in the scenario of multiple languages, while this paper focuses on two languages of English and Chinese. Thus in future, we will extend this approach to more languages and explore its significance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As an alternative, a bidirectional LSTM (BiLSTM) is tried as the basic network. The experimental results are shown in Subsection 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://catalog.ldc.upenn.edu/LDC2006T06 3 http://nlp.csai.tsinghua.edu.cn/\u223clzy/src/acl2015 bilingual .html 4 https://translate.google.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Yin et al. (2017) also demonstrated this conclusion for relation classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by the National Natural Science Foundation of China (No. 61703293, No. 61751206, and No. 61672368). We would like to thank the anonymous reviewers for their insightful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain separation networks",
"authors": [
{
"first": "Konstantinos",
"middle": [],
"last": "Bousmalis",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Trigeorgis",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Silberman",
"suffix": ""
},
{
"first": "Dilip",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Do- main separation networks. Advances in Neural Information Processing Systems, pages 343-351.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference on Neural Information Processing Systems (NIPS'05)",
"volume": "",
"issue": "",
"pages": "171--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. Subsequence kernels for relation extraction. In Proceedings of the International Conference on Neural Information Processing Systems (NIPS'05), pages 171-178.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01614"
]
},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv preprint arXiv:1606.01614.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "Cicero",
"middle": [],
"last": "Nogueira Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL'15)",
"volume": "",
"issue": "",
"pages": "626--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with con- volutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL'15), pages 626- 634.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual open relation extraction using cross-lingual projection",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'15)",
"volume": "",
"issue": "",
"pages": "1351--1356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'15), pages 1351-1356.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2018 Task 7: Semantic relation extraction and classification in scientific papers",
"authors": [
{
"first": "Kata",
"middle": [],
"last": "G\u00e1bor",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": ""
},
{
"first": "Anne-Kathrin",
"middle": [],
"last": "Schumann",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Qasemizadeh",
"suffix": ""
},
{
"first": "Ha\u00effa",
"middle": [],
"last": "Zargayouna",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval'18)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kata G\u00e1bor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Ha\u00effa Zargayouna, Thierry Charnois. 2018. SemEval-2018 Task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of International Workshop on Semantic Evaluation (SemEval'18).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Machine Learning Research",
"volume": "17",
"issue": "1",
"pages": "2096--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Mario Marchand, and Victor Lempitsky. 2017. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096-2030.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Advances in Neural Information Processing Systems Conference (NIPS'14)",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems Conference (NIPS'14), pages 2672-2680.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SemEval-2010 task 8: multi-way classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Kim"
],
"last": "Su",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "33--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Nam Kim Su, Zornitsa Kozareva, Preslav Nakov, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33-38.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations",
"authors": [
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics on Interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics on Interactive poster and demonstration sessions.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cross-lingual annotation projection for weakly-supervised relation extraction",
"authors": [
{
"first": "Seokhwan",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Minwoo",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Jonghoon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Gary Geunbae",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2014,
"venue": "Acm Transactions on Asian Language Information Processing (TALIP)",
"volume": "13",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2014. Cross-lingual annotation projec- tion for weakly-supervised relation extraction. Acm Transactions on Asian Language Information Processing (TALIP), 13(1):3.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adversarial autoencoders",
"authors": [
{
"first": "Alireza",
"middle": [],
"last": "Makhzani",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Confer-ence on Learning Representations (ICLR'16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. 2016. Adversarial autoencoders. In Proceedings of the International Confer-ence on Learning Representations (ICLR'16).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning transferable representation for bilingual relation extraction via convolutional neural networks",
"authors": [
{
"first": "Bonan",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Zhuolin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the The 8th International Joint Conference on Natural Language Processing (IJCNLP'17)",
"volume": "",
"issue": "",
"pages": "674--684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonan Min, Zhuolin Jiang, Marjorie Freedman, and Ralph Weischedel. 2017. Learning transferable representation for bilingual relation extraction via convolutional neural networks. In Proceedings of the The 8th International Joint Conference on Natural Language Processing (IJCNLP'17), pages 674-684.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Image-Text multi-Modal representation learning by adversarial backpropagation",
"authors": [
{
"first": "Gwangbeen",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Woobin",
"middle": [],
"last": "Im",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.08354"
]
},
"num": null,
"urls": [],
"raw_text": "Gwangbeen Park and Woobin Im. 2016. Image-Text multi-Modal representation learning by adversarial back- propagation. arXiv preprint arXiv:1612.08354.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bilingual active learning for relation classification via pseudo parallel corpora",
"authors": [
{
"first": "Longhua",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Haotian",
"middle": [],
"last": "Hui",
"suffix": ""
},
{
"first": "Yanan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14)",
"volume": "",
"issue": "",
"pages": "582--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longhua Qian, Haotian Hui, YaNan Hu, Guodong Zhou, and Qiaoming Zhu. 2014. Bilingual active learning for relation classification via pseudo parallel corpora. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), pages 582-592.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploiting constituent dependencies for tree kernel-based semantic relation extraction",
"authors": [
{
"first": "Longhua",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Peide",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING'08)",
"volume": "",
"issue": "",
"pages": "697--704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent depen- dencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING'08), pages 697-704.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adversarial connective-exploiting networks for implicit discourse relation classification",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL'17)",
"volume": "",
"issue": "",
"pages": "1006--1017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric P. Xing. 2017. Adversarial connective-exploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL'17), pages 1006-1017.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adversarial training for cross-domain universal dependency parsing",
"authors": [
{
"first": "Motoki",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Manabe",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Noji",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (CoNLL'17)",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross-domain universal dependency parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (CoNLL'17), pages 71-79.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning cross-lingual word embeddings via matrix co-factorization",
"authors": [
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL'15",
"volume": "",
"issue": "",
"pages": "567--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning cross-lingual word embeddings via matrix co-factorization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL'15, Short Papers), pages 567-572.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Combining linguistic and statistical analysis to extract relations from web documents",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Ifrim",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD'06)",
"volume": "",
"issue": "",
"pages": "712--717",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Georgiana Ifrim, and Gerhard Weikum. 2006. Combining linguistic and statistical analysis to extract relations from web documents. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD'06), pages 712-717.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multilingual relation extraction using compositional universal schema",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'16)",
"volume": "",
"issue": "",
"pages": "886--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'16), pages 886-896.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ACE 2005 multilingual training corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Relation classification via multi-level attention CNNs",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gerard",
"middle": [
"De"
],
"last": "Melo",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL'16)",
"volume": "",
"issue": "",
"pages": "1298--1307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Wang, Zhu Cao, Gerard De Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL'16), pages 1298-1307.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Comparative study of CNN and RNN for natural language processing",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schtze",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.01923"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Schtze. 2017. Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "ADADELTA: An adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP'15)",
"volume": "",
"issue": "",
"pages": "1753--1762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piece- wise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP'15), pages 1753-1762.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers (COLING'14)",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolu- tional deep neural network. In Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers (COLING'14), pages 2335-2344.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dequan",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Xinchen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of 29th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "73--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Zhang, Dequan Zheng, Xinchen Hu, and Ming Yang. 2015. Bidirectional long short-term memory net- works for relation classification. In Proceedings of 29th Pacific Asia Conference on Language, Information and Computation, pages 73-78.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "[cartridge] e 1 was marked as empty, even with [ink] e 2 in both chambers.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Architecture of the adversarial feature adaptation framework for cross-lingual relation classification.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Comparison with a supervised relation classification system.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Comparison of the performances (macro-F1) when adding different sizes if labeled data of the source language from 10% to 100%.",
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Algorithm 1 Adversarial Training Procedure Input: Training dataset Output: RSE with classifier C 1: Initialize \u03b8 h and \u03b8 C by minimizing Eq.(3).",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Discription of our datasets. The data in this tabel denote the number of samples of corresponding sets. The relation type \"Other\" means that the relation of entity mentions is not among the aforementioned six types.",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>shows a performance comparison of our adversarial feature adaptation models (*-GAN) with</td></tr><tr><td>baselines. With the CNN-GAN system, we achieve a micro-F1 score of 68.41% and a macro-F1 score</td></tr></table>",
"text": "Source 64.86 73.53 68.92 64.18 72.13 67.93 CNN-MT-Target 68.08 72.60 70.27 66.60 71.11 68.78 BiLSTM-MT-Source 68.30 73.24 70.68 67.06 70.79 68.88 BiLSTM-MT-Target 66.85 71.56 69.12 66.30 70.74 68.45 CNN-GAN 71.65 77.53 74.47 69.51 73.74 71.56 BiLSTM-GAN 72.20 75.66 73.89 69.69 72.03 70.84",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>F1</td></tr><tr><td colspan=\"2\">bilingual-Joint-IRL 80.9</td><td>77.1</td><td>78.9</td></tr><tr><td>CNN-</td><td/><td/><td/></tr></table>",
"text": "CH 79.09 81.15 80.11 BiLSTM-CH 76.23 79.94 78.04 CNN-CH-EN 79.61 81.65 80.62 BiLSTM-CH-EN 77.55 79.50 78.52",
"html": null
}
}
}
}