{
"paper_id": "D18-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:47:18.365639Z"
},
"title": "Multi-Domain Neural Machine Translation with Word-Level Domain Context Discrimination",
"authors": [
{
"first": "Jiali",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University",
"location": {
"settlement": "Xiamen",
"country": "China"
}
},
"email": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University",
"location": {
"settlement": "Xiamen",
"country": "China"
}
},
"email": ""
},
{
"first": "Huating",
"middle": [],
"last": "Wen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University",
"location": {
"settlement": "Xiamen",
"country": "China"
}
},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tencent Technology Co",
"location": {
"settlement": "Ltd, Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yongjing",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University",
"location": {
"settlement": "Xiamen",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Jianqiang",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Meiya Pico information Co",
"location": {
"settlement": "Ltd, Xiamen",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With great practical value, the study of Multidomain Neural Machine Translation (NMT) mainly focuses on using mixed-domain parallel sentences to construct a unified model that allows translation to switch between different domains. Intuitively, words in a sentence are related to its domain to varying degrees, so that they will exert disparate impacts on the multi-domain NMT modeling. Based on this intuition, in this paper, we devote to distinguishing and exploiting word-level domain contexts for multi-domain NMT. To this end, we jointly model NMT with monolingual attention-based domain classification tasks and improve NMT as follows: 1) Based on the sentence representations produced by a domain classifier and an adversarial domain classifier, we generate two gating vectors and use them to construct domain-specific and domain-shared annotations, for later translation predictions via different attention models; 2) We utilize the attention weights derived from target-side domain classifier to adjust the weights of target words in the training objective, enabling domain-related words to have greater impacts during model training. Experimental results on Chinese-English and English-French multi-domain translation tasks demonstrate the effectiveness of the proposed model. Source codes of this paper are available on Github https://github.com/DeepLearnXMU/WDCNMT.",
"pdf_parse": {
"paper_id": "D18-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "With great practical value, the study of Multidomain Neural Machine Translation (NMT) mainly focuses on using mixed-domain parallel sentences to construct a unified model that allows translation to switch between different domains. Intuitively, words in a sentence are related to its domain to varying degrees, so that they will exert disparate impacts on the multi-domain NMT modeling. Based on this intuition, in this paper, we devote to distinguishing and exploiting word-level domain contexts for multi-domain NMT. To this end, we jointly model NMT with monolingual attention-based domain classification tasks and improve NMT as follows: 1) Based on the sentence representations produced by a domain classifier and an adversarial domain classifier, we generate two gating vectors and use them to construct domain-specific and domain-shared annotations, for later translation predictions via different attention models; 2) We utilize the attention weights derived from target-side domain classifier to adjust the weights of target words in the training objective, enabling domain-related words to have greater impacts during model training. Experimental results on Chinese-English and English-French multi-domain translation tasks demonstrate the effectiveness of the proposed model. Source codes of this paper are available on Github https://github.com/DeepLearnXMU/WDCNMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, neural machine translation (NMT) has achieved great advancement (Nal and Phil, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) . However, two difficulties are encountered in the practical applications of NMT. On the one hand, training a NMT model for a spe- * cific domain requires a large quantity of parallel sentences in such domain, which is often not readily available. Hence, the much more common practice is to construct NMT models using mixed-domain parallel sentences. In this way, the domain-shared translation knowledge can be fully exploited. On the other hand, the translated sentences often belong to multiple domains, thus requiring a NMT model general to different domains. Since the textual styles, sentence structures and terminologies in different domains are often remarkably distinctive, whether such domainspecific translation knowledge is effectively preserved could have a direct effect on the performance of the NMT model. Therefore, how to simultaneously exploit the exclusive and shared translation knowledge of mixed-domain parallel sentences for multi-domain NMT remains a challenging task.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Nal and Phil, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 102,
"end": 125,
"text": "Sutskever et al., 2014;",
"ref_id": "BIBREF31"
},
{
"start": 126,
"end": 148,
"text": "Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle this problem, recently, researchers have carried out many constructive and in-depth studies (Kobus et al., 2016; Zhang et al., 2016; Pryzant et al., 2017; Farajian et al., 2017) . However, most of these studies mainly focus on the utilization of domain contexts as a whole in NMT, while ignoring the discrimination of domain contexts at finer-grained level. In each sentence, some words are closely associated with its domain, while others are domain-independent. Intuitively, these two kinds of words play differ-ent roles in multi-domain NMT, nevertheless, they are not being distinguished by the current models. Take the sentence shown in Figure 1 for example. The Chinese words \"'OE\u00ac\"(congress), \"AE Y\"(bills), \" \\\"(inclusion), and \"AE \u00a7\"(agenda) are frequently used in Laws domain and imply the Laws style of the sentence, while other words in this sentence are common in all domains and they mainly indicate the semantic meaning of the sentence. Thus, it is reasonable to distinguish and encode these two types of words separately to capture domain-specific and domain-shared contexts, allowing the exclusive and shared knowledge to be exploited without any interference from the other. Meanwhile, the English words \"priority\",\"government\", \"bill\" and \"agenda\" are also closely related to Laws domain. To preserve the domain-related text style and idioms in generated translations, it is also reasonable for our model to pay more attention to these domain-related words than the others during model training. On this account, we believe that it is significant to distinguish and explore word-level domain contexts for multi-domain NMT.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Kobus et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 123,
"end": 142,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 143,
"end": 164,
"text": "Pryzant et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 165,
"end": 187,
"text": "Farajian et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 652,
"end": 660,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a multi-domain NMT model with word-level domain context discrimination. Specifically, we first jointly model NMT with monolingual attention-based domain classification tasks. In source-side domain classification and adversarial domain classification tasks, we perform two individual attention operations on source-side annotations to generate the domainspecific and domain-shared vector representations of source sentence, respectively. Meanwhile, an attention operation is also placed on target-side hidden states to implement target-side domain classification. Then, we improve NMT with the following two approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) According to the sentence representations produced by source-side domain classifier and adverisal domain classifier, we generate two gating vectors for each source annotation. With these two gating vectors, the encoded information of source annotation is selected automatically to construct domain-specific and domain-shared annotations, both of which are used to guide translation predictions via two attention mechanisms;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Based on the attention weights of the target words from target-side domain classifier, we employ word-level cost weighting strategy to refine our model training. In this way, domain-specific target words will be assigned greater weights than others in the objective function of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work demonstrates the benefits of separate modeling of the domain-specific and domainshared contexts, which echoes with the successful applications of the multi-task learning based on shared-private architecture in many tasks, such as discourse relation recognition , word segmentation ), text classification (Liu et al., 2017a) , and image classification . Overall, the main contributions of our work are summarized as follows:",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "(Liu et al., 2017a)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to construct domain-specific and domain-shared source annotations from initial annotations, of which effects are respectively captured for translation predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to adjust the weights of target words in the training objective of NMT according to their relevance to different domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments on large-scale multi-domain Chinese-English and English-French datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results demonstrate the effectiveness of our model. Figure 2 illustrates the architecture of our model, which includes a neural encoder equipped with a domain classifier and an adversarial domain classifier, and a neural decoder with two attention models and a target-side domain classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As shown in the lower part of Figure 2 , our encoder leverages the sentence representations produced by these two classifiers to construct domain-specific and domain-shared annotations from initial ones, preventing the exclusive and shared translation knowledge from interfering with each other. In our encoder, the input sentence x=x 1 , x 2 , ..., x N are first mapped to word vectors and then fed into a bidirectional GRU (Cho et al., 2014) ",
"cite_spans": [
{
"start": 425,
"end": 443,
"text": "(Cho et al., 2014)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "to obtain \u2212 \u2192 h = \u2212 \u2192 h 1 , \u2212 \u2192 h 2 , ..., \u2212 \u2192 h N and \u2190 \u2212 h = \u2190 \u2212 h 1 , \u2190 \u2212 h 2 , ..., \u2190 \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "h N in the left-to-right and right-to-left directions, respectively. These two sequences are then concatenated as ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "h i = { \u2212 \u2192 h \u22a4 i , \u2190 \u2212 h \u22a4 i } \u22a4 to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "} N i=1 , we employ \u2712 \u271e \u261b \u2701 Decoder \u2702 \u2704 \u260e \u2704 \u2706 \u2704 \u271d Domain Classifier \u271f \u2720 \u271f \u260e \u271f \u2721 \u261e \u261e Encoder \u2702 Domain-Specific Annotations \u2712 \u270c \u261b \u2701 \u2702 Domain-Shared Annotations \u2712 \u270c \u261b \u270d \u2701 Domain Classifier \u270e \u270f \u261b \u2711 \u2713 E r ( ) \u2712 \u270c \u261b \u2701 \u271f \u2720 \u271f \u260e \u271f \u2721 \u261e E s ( ) E r (y) \u271f \u2720 \u271f \u260e \u271f \u2721 \u261e Adversarial Domain Classifier \u2702 \u2702 \u2714 \u260e \u2714 \u2706 \u2714 \u2715 Figure 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "The architecture illustration of our model. Note that our two source-side domain classifiers are used to produce domain-specific and domain-shared annotations, respectively, and our target-side domain classifier is only used during model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "two attention-like aggregators to generate the semantic representations of sentence x, denoted by the vectors E r (x) and E s (x), respectively. Based on these two vectors, we employ the same neural network to model two classifiers with different context modeling objectives: One is a domain classifier that aims to distinguish different domains in order to generate domain-specific source-side contexts. It is trained using the objective function J s dc (x; \u03b8 s dc ) = log p(d|x; \u03b8 s dc ), where d is the domain tag of x and \u03b8 s dc is its parameter set. The other is an adversarial domain classifier capturing source-side domainshared contexts. To this end, we train it using the following adversarial loss functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J s1 adc (x; \u03b8 s1 adc ) = log p(d|x; \u03b8 s1 adc , \u03b8 s2 adc ), (1) J s2 adc (x; \u03b8 s2 adc ) = H(p(d|x; \u03b8 s1 adc , \u03b8 s2 adc )),",
"eq_num": "(2)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "H(p(\u2022))=\u2212 K k=1 p k (\u2022) log p k (\u2022) is an en- tropy of distribution p(\u2022) with K domain labels, \u03b8 s1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "adc and \u03b8 s2 adc denote the parameters of softmax layer and the generation layer of E s (x) in this classifier, respectively. By this means, E r (x) and E s (x) are expected to encode the domain-specific and domain-shared semantic representations of x, respectively. It should be noted that our utilization of domain classifiers is similar to adversarial training used in (Pryzant et al., 2017) which injects domain-shared contexts into annotations. However, by contrast, we introduce domain classifier and adversarial domain classifier simultaneously to distinguish different kinds of contexts for NMT more explicitly.",
"cite_spans": [
{
"start": 372,
"end": 394,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
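{
"text": "As a concrete illustration of Eqs. (1)-(2), the following minimal PyTorch-style sketch (our own illustrative code under assumed tensor names, not the released WDCNMT implementation) computes the two adversarial losses from the logits that the softmax layer produces on top of E_s(x); the log-likelihood term is meant to update only the classifier parameters θ^{s1}_{adc}, while the entropy term drives E_s(x) toward domain-invariance:

import torch
import torch.nn.functional as F

def adversarial_classifier_losses(logits, domain_labels):
    # Sketch of Eqs. (1)-(2). 'logits' is the softmax-layer output computed
    # from the domain-shared sentence representation E_s(x); 'domain_labels'
    # holds the gold domain tag d of each sentence in the batch.
    log_probs = F.log_softmax(logits, dim=-1)      # log p(.|x)
    probs = log_probs.exp()

    # Eq. (1): J^{s1}_{adc} = log p(d|x), the classifier's log-likelihood
    j_s1_adc = log_probs.gather(-1, domain_labels.unsqueeze(-1)).squeeze(-1)

    # Eq. (2): J^{s2}_{adc} = H(p(.|x)) = -sum_k p_k log p_k, the entropy
    # maximized by the representation parameters to confuse the classifier
    j_s2_adc = -(probs * log_probs).sum(dim=-1)

    return j_s1_adc.mean(), j_s2_adc.mean()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},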
{
"text": "Here we describe only the modeling procedure of the domain classifier, while it is also applicable to the adversarial domain classifier. Specifically, E r (x) is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E r (x) = N i=1 \u03b1 i h i ,",
"eq_num": "(3)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "\u03b1 i = exp(e i ) N i \u2032 exp(e i \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": ",",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "e i = (v a ) \u22a4 tanh(W a h i ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "and v a and W a are the relevant attention parameters. Then, we feed E r (x) into a fully connected layer with ReLU function (Ballesteros et al., 2015) , and then pass its output through a softmax layer to implement domain classification",
"cite_spans": [
{
"start": 125,
"end": 151,
"text": "(Ballesteros et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(\u2022|x; \u03b8 s dc ) =sof tmax(W s\u22a4 dc ReLU (E r (x)) + b s dc ),",
"eq_num": "(4)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "where W s dc and b s dc are softmax parameters. Domain-Specific and Domain-Shared Annotations. Since domain-specific and domain-shared contexts have different effects on NMT, and thus should be distinguished and separately captured by NMT model. Specifically, we first leverage the sentence representations E r (x) and E s (x) to generate two gating vectors, g r i and g s i , for annotation h i in the following way:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g r i = sigmoid(W (1) gr E r (x) + W (2) gr h i + b gr ),",
"eq_num": "(5)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g s i = sigmoid(W (1) gs E s (x) + W (2) gs h i + b gs ),",
"eq_num": "(6)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "where W * gr , W * gs , b gr and b gs denote the relevant matrices and bias, respectively. With these two vectors, we construct domain-specific and domain-shared annotations h r i and h s i from h i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h r i = g r i \u2299 h i , (7) h s i = g s i \u2299 h i .",
"eq_num": "(8)"
}
],
"section": "Neural Encoder",
"sec_num": "2.1"
},
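{
"text": "To make Eqs. (5)-(8) concrete, here is a minimal PyTorch-style sketch (hypothetical module and tensor names, assuming the initial annotations h_i and the two sentence representations E_r(x) and E_s(x) are already available) that derives the gating vectors and the gated annotations:

import torch
import torch.nn as nn

class AnnotationGate(nn.Module):
    # Sketch of Eqs. (5)-(8): gate the initial annotations with a sentence
    # representation (E_r(x) or E_s(x)) to obtain domain-specific or
    # domain-shared annotations.
    def __init__(self, sent_dim, annot_dim):
        super().__init__()
        self.w_sent = nn.Linear(sent_dim, annot_dim, bias=False)   # W^(1)
        self.w_annot = nn.Linear(annot_dim, annot_dim, bias=True)  # W^(2) and b

    def forward(self, sent_repr, annotations):
        # sent_repr:   (batch, sent_dim), e.g. E_r(x) or E_s(x)
        # annotations: (batch, N, annot_dim), the initial h_1..h_N
        gate = torch.sigmoid(self.w_sent(sent_repr).unsqueeze(1)
                             + self.w_annot(annotations))          # Eq. (5)/(6)
        return gate * annotations                                  # Eq. (7)/(8)

# usage sketch with two independent gates:
#   h_r = specific_gate(E_r_x, h)   # domain-specific annotations h^r_i
#   h_s = shared_gate(E_s_x, h)     # domain-shared annotations h^s_i
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Encoder",
"sec_num": "2.1"
},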
{
"text": "The upper half of Figure 2 illustrates the architecture of our decoder. In particular, with the attention weights of target words from the domain classifier, we employ word-level cost weighting strategy to refine model training. Formally, our decoder applies a nonlinear function g( * ) to define the conditional probability of translation y=y 1 , y 2 , ..., y M :",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x) = M j=1 p(y j |x, y <j ) = M j=1 g(y j\u22121 , s j , c r j , c s j ),",
"eq_num": "(9)"
}
],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "where the vector s j denotes the GRU hidden state. It is updated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s j = GRU (s j\u22121 , y j\u22121 , c r j , c s j ).",
"eq_num": "(10)"
}
],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "Here the vectors c r j and c s j represent the domainspecific and domain-shared contexts, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "Domain-Specific and Domain-Shared Context Vectors. When generating y j , we define c r j as a weighted sum of the domain-specific annotations {h r i }:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c r j = N i=1 exp(e r j,i ) N i \u2032 =1 exp(e r j,i \u2032 ) \u2022 h r i ,",
"eq_num": "(11)"
}
],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "where e r j,i = a(s j\u22121 , h r i ), and a(*) is a feedforward neural network. Meanwhile, we produce c s j from the domain-shared annotations {h s i } as in Eq. 11. By introducing c r j and c s j into s j , our decoder is able to distinguish and simultaneously exploit two types of contexts for translation predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
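{
"text": "The following sketch (again hypothetical code, not the released implementation) illustrates Eq. (11): the same additive attention is run twice at every decoding step, once over the domain-specific annotations {h^r_i} and once over the domain-shared annotations {h^s_i}, and the two resulting context vectors are fed into the GRU state update of Eq. (10):

import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    # e_{j,i} = v^T tanh(W_s s_{j-1} + W_h h_i); c_j = sum_i softmax(e_j)_i h_i
    def __init__(self, state_dim, annot_dim, attn_dim):
        super().__init__()
        self.w_s = nn.Linear(state_dim, attn_dim, bias=False)
        self.w_h = nn.Linear(annot_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, prev_state, annotations):
        # prev_state: (batch, state_dim); annotations: (batch, N, annot_dim)
        scores = self.v(torch.tanh(self.w_s(prev_state).unsqueeze(1)
                                   + self.w_h(annotations))).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), annotations).squeeze(1)

# per decoding step (sketch of Eqs. (10)-(11)):
#   c_r = attn_specific(s_prev, h_r)   # context over domain-specific annotations
#   c_s = attn_shared(s_prev, h_s)     # context over domain-shared annotations
#   s_j = gru_cell(torch.cat([emb_prev, c_r, c_s], dim=-1), s_prev)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},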
{
"text": "Domain Classifier. We equip our decoder with a domain classifier with parameters \u03b8 tdc , which maximizes the training objective i.e., J t dc (y; \u03b8 t dc ) = log p(d|y; \u03b8 t dc ). To do this, we also apply attention operation to produce the domain-aware semantic representation E r (y) of y,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E r (y) = M j=1 \u03b2 j s j ,",
"eq_num": "(12)"
}
],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "\u03b2 j = exp(e j ) M j \u2032 exp(e j \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": ",",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "e j = (v b ) \u22a4 tanh(W b s j ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "and v b and W b are the related parameters. Likewise, we stack a domain classifier on top of E r (y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "Note that this classifier is only used in model training to infer attention weights of target words. These weights measure their semantic relevance to different domains and can be utilized to adjust their cost weights in NMT training objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "NMT Training Objective with Word-Level Cost Weighting. Formally, we define the objective function of NMT as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "J nmt (x, y; \u03b8 nmt ) = M j=1 (1 + \u03b2 j ) log p(y j |x, y <j ; \u03b8 nmt ), (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "where \u03b2 j is the attention weight of y j obtained by Eq. 12, and \u03b8 nmt denotes the parameter set of NMT. By this scaling strategy, domainspecific words are emphasized with a bonus, while domain-shared words are updated as usual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
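{
"text": "A minimal sketch of the weighted objective in Eq. (13) is given below (illustrative code under our own variable names; the β_j are the target-word attention weights inferred by the target-side domain classifier of Eq. (12), and detaching them from the graph when they serve as cost weights is an implementation choice we assume here, not a detail stated in the paper):

import torch
import torch.nn.functional as F

def weighted_nmt_loss(logits, targets, beta, pad_id=0):
    # Sketch of Eq. (13): scale each target word's negative log-likelihood
    # by (1 + beta_j), where beta_j is the word's attention weight from the
    # target-side domain classifier (Eq. (12)).
    #   logits:  (batch, M, vocab)
    #   targets: (batch, M)
    #   beta:    (batch, M), each row sums to 1 over the target words
    nll = F.cross_entropy(logits.transpose(1, 2), targets,
                          ignore_index=pad_id, reduction='none')   # (batch, M)
    scale = 1.0 + beta.detach()      # assumed: no gradient through the weights
    mask = (targets != pad_id).float()
    return (scale * nll * mask).sum() / mask.sum()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},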
{
"text": "Please note that scaling costs with a multiplicative scalar essentially changes the magnitude of parameter update but without changing its direction (Chen et al., 2017a) . Besides, although our scaling strategy is similar to the cost weighting proposed by Chen et al. (2017a) , our approach differs from it in two aspects: First, we employ wordlevel cost weighting rather than sentence-level one to refine NMT training; Second, our approach is less time-consuming for multi-domain NMT.",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Chen et al., 2017a)",
"ref_id": "BIBREF2"
},
{
"start": 256,
"end": 275,
"text": "Chen et al. (2017a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Decoder",
"sec_num": "2.2"
},
{
"text": "Given a mixed-domain training corpus D = {(x, y, d)}, we train the proposed model accord-ing to the following objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},
{
"text": "J (D; \u03b8) = (x,y,d)\u2208D {J nmt (x, y; \u03b8 nmt ) + J s dc (x; \u03b8 s dc ) + J t dc (y; \u03b8 t dc ) (14) + J s1 adc (x; \u03b8 s1 adc ) + \u03bb \u2022 J s2 adc (x; \u03b8 s2 adc )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},
{
"text": "where J nmt ( * ), J s dc ( * ), J t dc ( * ) and J s * adc ( * ) are the objective functions of NMT, source-side domain classifier, target-side domain classifier, and source-side adversarial domain classifier, respectively, \u03b8={\u03b8 nmt , \u03b8 s dc , \u03b8 t dc , \u03b8 s1 adc , \u03b8 s2 adc }, and \u03bb is the hyper-parameter for adversarial learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},
{
"text": "Particularly, to ensure encoding accuracy of domain-shared contexts, we follow to adopt an alternative two-phase strategy in training, where we alternatively optimize J (D; \u03b8) with \u03b8 s1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},
{
"text": "adc and {\u03b8-\u03b8 s1 adc } respectively fixed at a time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},
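{
"text": "The alternation can be sketched as follows (a hypothetical training-step fragment with λ = 0.1 as in Section 3.2; we assume here that the log-likelihood term of Eq. (1) only updates the adversarial softmax parameters θ^{s1}_{adc}, while the entropy term of Eq. (2) and the remaining objectives of Eq. (14) update all other parameters):

def train_step(batch, model, opt_adv_softmax, opt_rest, lam=0.1):
    # Phase 1: update only theta^{s1}_{adc} so that the adversarial classifier
    # predicts the domain from E_s(x) as well as possible.
    losses = model(batch)                 # dict with the loss terms of Eq. (14)
    opt_adv_softmax.zero_grad()
    (-losses['j_s1_adc']).backward()      # maximize log p(d|x; theta^{s1}_{adc})
    opt_adv_softmax.step()

    # Phase 2: update the remaining parameters {theta - theta^{s1}_{adc}} with
    # the other objectives, including the lambda-weighted entropy term.
    losses = model(batch)
    full = (losses['j_nmt'] + losses['j_s_dc'] + losses['j_t_dc']
            + lam * losses['j_s2_adc'])
    opt_rest.zero_grad()
    (-full).backward()                    # all objectives are maximized
    opt_rest.step()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Training Objective",
"sec_num": "2.3"
},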
{
"text": "To investigate the effectiveness of our model, we conducted multi-domain translation experiments on Chinese-English and English-French datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "3"
},
{
"text": "Datasets. For Chinese-English translation, our data comes from UM-Corpus (Tian et al., 2014) and LDC 1 . To ensure data quality, we chose only the parallel sentences with domain label Laws, Spoken, and Thesis from UM-Corpus, and the LDC bilingual sentences related to News domain as our dataset. We used randomly selected sentences from UM-Corpus and LDC as development set, and combined the test set of UM-Corpus and randomly selected sentences from LDC to construct our test set. For English-French translation, we conducted experiments on the datasets of OPUS corpus 2 , containing sentence pairs from Medical, News, and Parliamentary domains. We also divided these datasets into training, development and test sets. Table 1 provides the statistics of the corpora used in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 720,
"end": 727,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "We performed word segmentation on Chinese sentences using Stanford Segmenter 3 , and tokenized English and French sentences using MOSES script 4 . Then, we employed Byte Pair Encoding (Sennrich et al., 2016) to convert all words into subwords. The translation quality was evaluated by case-sensitive BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 184,
"end": 207,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 305,
"end": 328,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "Contrast Models. Since our model is essentially a standard attentional NMT model enhanced with word-level domain contexts, we refer to it as +WDC. We compared it with the following models, namely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 OpenNMT 5 . A famous open-source NMT system used widely in the NMT community trained on mix-domain training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 DL4NMT-single (Bahdanau et al., 2015) . A reimplemented attentional NMT trained on a single domain dataset.",
"cite_spans": [
{
"start": 16,
"end": 39,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 DL4NMT-mix (Bahdanau et al., 2015) . A reimplemented attentional NMT trained on mix-domain training set.",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 DL4NMT-finetune (Luong and Manning, 2015) . A reimplemented attentional NMT which is first trained using out-of-domain training corpus and then fine-tuned using indomain dataset.",
"cite_spans": [
{
"start": 18,
"end": 43,
"text": "(Luong and Manning, 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +Domain Control (+DC) (Kobus et al., 2016) . It directly introduces embeddings of source domain tag to enrich annotations of encoder.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Kobus et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +Multitask Learning (+ML1) (Dong et al., 2015) . It adopts a multi-task learning framework that shares encoder representation and separates the decoder modeling of different domains.",
"cite_spans": [
{
"start": 29,
"end": 48,
"text": "(Dong et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +Multitask Learning (+ML2) (Pryzant et al., 2017) . This model jointly trains NMT with domain classification via multitask learning.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +Adversarial Discriminative Mixing (+ADM) (Pryzant et al., 2017) . It employs adversarial training to achieve the domain adaptation in NMT.",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +Target Token Mixing (+TTM) (Pryzant et al., 2017) . This model is similar to +DC, with the only difference that it enriches source annotations by adding target-side domain tag rather than source-side one.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "Note that our model uses two annotation sequences, thus we also compared it with the aforementioned models with two times of hidden state size (2\u00d7hd). To further examine the effectiveness of the proposed components in our model, we also provided the performance of the following variants of our model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +WDC(S). It only exploits the source-side word-level domain contexts for multi-domain NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "\u2022 +WDC(T). It only employ word-level cost weighting on the target side to refine the model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "Implementation Details. Following the common practice, we only used the training sentences within 50 words to efficiently train NMT models. Thus, 85.40% and 88.96% of the Chinese-English and English-French parallel sentences were covered in our experiments. In addition, we set the vocabulary size for Chinese-English and English-French as 32,000 and 32,000, respectively. In doing so, our vocabularies covered 99.97% Chinese words and 99.99% English words of the Chinese-English corpus, and almost 100% English words and 99.99% French words of the English-French corpus, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "We applied Adam (Kingma and Ba, 2015) to train models and determined the best model parameters based on the model performance on development set. The used hyper-parameter were set as follows: \u03b2 1 and \u03b2 2 of Adam as 0.9 and 0.999, word embedding dimension as 500, hidden layer size as 1000, learning rate as 5\u00d710 \u22124 , batch size as 80, gradient norm as 1.0, dropout rate as 0.1, and beamsize as 10. Other settings were set following (Bahdanau et al., 2015) . Overall Evaluation of the Chinese-English translation task. 2\u00d7hd = two times of hidden state size.",
"cite_spans": [
{
"start": 432,
"end": 455,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "We first determined the optimal hyper-parameter \u03bb (see Eq. (14)) on the development set. To do this, we gradually varied \u03bb from 0.1 to 1.0 with an increment of 0.1 in each step. Since our model achieved the best performance when \u03bb=0.1, hence, we set \u03bb=0.1 for all experiments thereafter. Table 2 shows the overall experimental results. Using almost the same hyper-parameters, our reimplemented DL4NMT outperforms OpenNMT in all domains, demonstrating that our baseline is competitive in performance. Moreover, on all test sets of different domains, our model significantly outperforms other contrast models no matter which hyper-parameters they use. Furthermore, we arrive at the following conclusions: First, our model surpasses DL4NMT-single, DL4NMT-mix and DL4NMT-finetune, all of which are commonly used in domain adaptation for NMT. Please note that DL4NMT-finetune requires multiple adapted NMT models to be constructed, while ours is a unified one that works well in all domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "Second, compared with +DC, +ML2 and +ADM which all exploit source-side domain contexts for multi-domain NMT, our +WDC(S) still",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "(b) An Example Sentence in Thesis Domain \u2712 \u2701 \u2702 \u2704 \u260e \u2706 \u271d \u2723 \u271e \u271f \u2720 \u2721 \u2704 \u260e \u261b \u261e \u270c \u2731 de \u2718 \u270d \u2704 \u261e \u270e \u270f \u2711 \u2720 \u2713 \u2704 \u2714 \u2715 y\u00ecngl\u00ec \u2716 \u2717 \u270e \u2719 \u271a sh\u00edy\u00e0n \u2746 \u271b j\u00ecsu\u00e0n (a) An Example Sentence in Laws Domain \u272a \u271c \u00e0om\u00e9n \u272d \u2722 t\u00e8bi\u00e9 \u2743 \u2724 \u2725 \u2726 \u2727 \u2605 \u2729 \u272b \u272c \u272e \u2605 \u2729 \u272f \u2730 \u2736 \u2732 \u2733 \u2734 \u2735 \u2737 \u2738 \u272c \u2739 \u2735 \u273a de \u273b \u273c \u273d \u272c \u2738 \u2605 \u273e \u272c \u273f \u2605 \u2729 \u2740 \u2732 \u2741 \u2742 \u2605 \u2737 \u2738 Figure 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "The correlation heat map of the gating vectors(blue/green) to domain-specific/domainshared annotations in two example sentences. Note that domain-specific words \"e \u20ac\"(Macao), \"\u00e1{\u00ac\"(Legislative Council), \" )\"(Formation), \"\u2022{\"(Method), \"\u00b54\"(Seal), \"O \u017d\"(Calculation), \"\u00a2 \" (Experiment) are strengthened by g r i , while most of the domainshared words \" \"(of) and \" \u2020\"(and) are focused by g s i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "exhibits better performance. This is because that these models focus on one aspect of domain contexts, while our model considers both domainspecific and domain-shared contexts on the source side. Third, +WDC(T) also outperforms DL4NMT, revealing that it is reasonable and effective to emphasize domain-specific words in model training..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "Last, +WDC achieves the best performance when compared with both +WDC(S) and +WDC(T). Therefore, we believe that word-level domain contexts on the both sides are complementary to each other, and utilizing them simultaneously is beneficial to multi-domain NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "3.2"
},
{
"text": "Furthermore, we conducted several visualization experiments to empirically analyze the individual effectiveness of the added model components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Analysis",
"sec_num": "3.3"
},
{
"text": "We first visualized the gating vectors g r i and g s i to quantify their effects on extracting domainspecific and domain-shared contexts from initial source-side annotations. Since both g r i and g s i are high dimension vectors, which are difficult to be visualized directly, we followed and Zhou et al. (2017) to visualize their individual contributions to the final output, which can be The visualization of the sentence representations and their corresponding average annotations, where the triangle-shaped(purple), circle-shaped(red), square-shaped(green) and pentagonal-shaped(blue) points denote News, Laws, Spoken and Thesis sentences, respectively. approximated by their first derivatives. Figure 3 shows the first derivative heat maps for two example sentences in Laws and Thesis domain, respectively. We can observe that without any loss of semantic meanings from source sentences, most of the domain-specific words are strengthened by g r i , while most of the domainshared words, especially function words, are focused by g s i . This result is consistent with our expectation for the function of two gating vectors.",
"cite_spans": [
{
"start": 293,
"end": 311,
"text": "Zhou et al. (2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 699,
"end": 707,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualizations of Gating Vectors",
"sec_num": "3.3.1"
},
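{
"text": "Such a first-derivative (saliency) analysis can be sketched as follows (generic illustrative code with a hypothetical model interface, not the exact procedure of the cited work): the contribution of each gated annotation to the predicted domain score is approximated by the norm of the gradient of that score with respect to the annotation:

import torch

def first_derivative_saliency(model, src_batch, domain_ids):
    # 'model.encode_gated' and 'model.domain_classifier' are hypothetical
    # hooks standing in for the gated annotations h^r_i (or h^s_i) and the
    # classifier stacked on top of them.
    annotations = model.encode_gated(src_batch)        # (batch, N, dim)
    annotations.retain_grad()                          # keep grads of a non-leaf
    logits = model.domain_classifier(annotations)      # (batch, K)
    score = logits.gather(-1, domain_ids.unsqueeze(-1)).sum()
    score.backward()
    # per-position contribution: L2 norm of the first derivative
    return annotations.grad.norm(dim=-1)               # (batch, N)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualizations of Gating Vectors",
"sec_num": "3.3.1"
},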
{
"text": "Furthermore, we applied the hypertools (Heusser et al., 2018) to visualize the sentence representations E r (x) and E s (x), and the domain-specific and domain-shared annotation sequences",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Heusser et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visualizations of Sentence Representations and Annotations",
"sec_num": "3.3.2"
},
{
"text": "{h r i } N i=1 and {h s i } N i=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualizations of Sentence Representations and Annotations",
"sec_num": "3.3.2"
},
{
"text": "Here we represent each annotation sequence with its average vector in the figure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualizations of Sentence Representations and Annotations",
"sec_num": "3.3.2"
},
{
"text": "As shown in Figure 4 those of the other domains, this may be caused by the more formal and consistent sentence styles in Laws domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualizations of Sentence Representations and Annotations",
"sec_num": "3.3.2"
},
{
"text": "Lastly, for each domain, we presented the top ten target words with the highest weights learned by our target-side domain classifier. To do this, we calculated the average attention weight of each word in the training corpus as its corresponding domain weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Illustrations of Domain-Specific Target Words",
"sec_num": "3.3.3"
},
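{
"text": "This statistic can be computed with a single pass over the training corpus, e.g. with the following hypothetical helper (assuming the per-sentence target attention weights β_j from the target-side domain classifier have been dumped alongside the target tokens):

from collections import defaultdict

def top_domain_words(corpus, k=10):
    # corpus: iterable of (target_tokens, attention_weights) pairs for one
    # domain, where attention_weights are the beta_j of the target-side
    # domain classifier; returns the k words with the highest average weight.
    totals, counts = defaultdict(float), defaultdict(int)
    for tokens, weights in corpus:
        for tok, w in zip(tokens, weights):
            totals[tok] += w
            counts[tok] += 1
    averages = {tok: totals[tok] / counts[tok] for tok in totals}
    return sorted(averages, key=averages.get, reverse=True)[:k]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Illustrations of Domain-Specific Target Words",
"sec_num": "3.3.3"
},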
{
"text": "As is clearly shown in Table 3 that most listed target words are closely related to their domains. This result validates the aforementioned hypothesis that some words are domain-dependent while others are domain-independent, and our targetside domain classifier is capable of distinguishing them with different attention weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Illustrations of Domain-Specific Target Words",
"sec_num": "3.3.3"
},
{
"text": "Likewise, we determined the optimal \u03bb=0.1 on the development set. Table 4 gives the results of English-French multi-domain translation. Similar to the previous experimental result in Section 3.2, our model continues to achieve the best performance compared to all contrast models using two different hidden state size settings, which demonstrates again that our model is effective and general to different language pairs in multi-domain NMT.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results on English-French Translation",
"sec_num": "3.4"
},
{
"text": "In this work, we study on multi-domain machine translation in the field of domain adaptation for machine translation, which has attracted great attention since SMT (Clark et al., 2012; Huck et Sennrich et al., 2013) . As for NMT, the dominant strategies for domain adaptation generally fall into two categories: The first category is to transfer out-of-domain knowledge to in-domain translation. The conventional method is fine-tuning, which first trains the model on out-of-domain dataset and then finetunes it on in-domain dataset (Luong and Manning, 2015; Zoph et al., 2016; Servan et al., 2016) . Freitag and Al-Onaizan (2016) proceeded further by ensembling the fine-tuned model with the original one. Chu et al. (2017) fine-tuned the model using the mix of in-domain and out-of-domain training corpora. From the perspective of data selection, Chen et al. (2017a) scaled the top-level costs of NMT system according to each training sentence's similarity to the development set. Wang et al. (2017a) explored the data selection strategy based on sentence embeddings for NMT domain adaptation. Moreover, Wang et al. (2017b) further proposed several sentence and domain weighting methods with a dynamic weight learning strategy. However, these approaches usually only perform well on target domain while being highly time consuming in transferring translation knowledge to all the constitute domains.",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Clark et al., 2012;",
"ref_id": "BIBREF8"
},
{
"start": 185,
"end": 192,
"text": "Huck et",
"ref_id": "BIBREF13"
},
{
"start": 193,
"end": 215,
"text": "Sennrich et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 533,
"end": 558,
"text": "(Luong and Manning, 2015;",
"ref_id": "BIBREF21"
},
{
"start": 559,
"end": 577,
"text": "Zoph et al., 2016;",
"ref_id": "BIBREF40"
},
{
"start": 578,
"end": 598,
"text": "Servan et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 601,
"end": 630,
"text": "Freitag and Al-Onaizan (2016)",
"ref_id": "BIBREF11"
},
{
"start": 707,
"end": 724,
"text": "Chu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 849,
"end": 868,
"text": "Chen et al. (2017a)",
"ref_id": "BIBREF2"
},
{
"start": 983,
"end": 1002,
"text": "Wang et al. (2017a)",
"ref_id": "BIBREF35"
},
{
"start": 1106,
"end": 1125,
"text": "Wang et al. (2017b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "The second category is to directly use a mixed-domain training corpus to construct NMT model for the translated sentences derived from different domains. In this aspect, Kobus et al. (2016) incorporated domain information into NMT by appending a domain indicator token to each source sequence. Similarly, Johnson et al. (2016) added an artificial token to the input sequence to indicate the required target language. Contrastingly, Farajian et al. (2017) utilized the similarity between each test sentence and the training instances to dynamically set the hyper-parameters of the learning algorithm and update the generic model on the fly. Pryzant et al. (2017) proposed three novel models: discriminative mixing that jointly models NMT with domain classification, adversarial discriminative mixing, and target token mixing which appends a domain token to the target sequence. Sajjad et al. (2017) explored data concatenation, model stacking, data selection and multi-model ensemble to train multi-domain NMT. By exploiting domain as a tag or a feature, Tars and Fishel (2018) treated text domains as distinct languages in order to use multi-lingual approaches when implementing multi-domain NMT. Inspired by topicbased SMT, some researchers resorted to incorporating topical contexts into NMT. used the topic information of input sentence as an additional input to decoder. Zhang et al. (2016) enhanced the word representation by adding its topic embedding. However, these methods require to have explicit document boundaries between training data, which unfortunately do not exist in most datasets. Overall, our work is related to the second type of approach with (Pryzant et al., 2017) and (Chen et al., 2017a ) most related to ours. Unlike (Pryzant et al., 2017) applying adversarial training to only capture domain-shared translation knowledge, we further exploit domain-specific translation knowledge for multi-domain NMT. Also, in sharp contrast to (Chen et al., 2017a) , our model not only exploits the source-side word-level domain contexts differently, but also employs a word-level cost weighting strategy for multi-domain NMT.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "Kobus et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 305,
"end": 326,
"text": "Johnson et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 432,
"end": 454,
"text": "Farajian et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 640,
"end": 661,
"text": "Pryzant et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 877,
"end": 897,
"text": "Sajjad et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 1054,
"end": 1076,
"text": "Tars and Fishel (2018)",
"ref_id": "BIBREF32"
},
{
"start": 1375,
"end": 1394,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF38"
},
{
"start": 1666,
"end": 1688,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 1693,
"end": 1712,
"text": "(Chen et al., 2017a",
"ref_id": "BIBREF2"
},
{
"start": 1744,
"end": 1766,
"text": "(Pryzant et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 1956,
"end": 1976,
"text": "(Chen et al., 2017a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this work, we have explored how to utilize word-level domain contexts for multi-domain NMT. By jointly modeling NMT and domain classification tasks, we utilize the sentence representations of source-side domain classifier and ad-versarial domain classifier to construct domainspecific and domain-shared source annotations, which are then exploited by decoder. Moreover, using the attentional weights of target-side domain classifier, we adjust the weights of target words in the training objective to refine model training. Experimental results and in-depth analyses demonstrate the effectiveness of the proposed model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In the future, we would like to extend the proposed word-level cost weighting strategy to source words. Besides, our method is also general to other NMT models. Therefore, we plan to apply our method to the NMT with complex architectures, for example, lattice-to-sequence NMT , hierarchy-to-sequence NMT (Su et al., 2018) , NMT with context-aware encoder and Transformer (Vaswani et al., 2017) and so on.",
"cite_spans": [
{
"start": 304,
"end": 321,
"text": "(Su et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 371,
"end": 393,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "http://opennmt.net/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors were supported by National Natural Science Foundation of China (No. 61672440), the Fundamental Research Funds for the Central Universities (Grant No. ZK1024), and Scientific Research Project of National Language Committee of China . We also thank the reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR 2015.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improved transition-based parsing by modeling characters instead of words with lstms",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by model- ing characters instead of words with lstms. In Proc. of EMNLP 2015.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cost weighting for neural machine translation domain adaptation",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boxing Chen, Colin Cherry, George Foster, and Samuel Larkin. 2017a. Cost weighting for neural machine translation domain adaptation. In Proc. of the First Workshop on Neural Machine Translation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Guided alignment training for topic-aware neural machine translation",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Jan-Thorsten",
"middle": [],
"last": "Peter",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR abs/1607.01628.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial multi-criteria learning for chinese word segmentation",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhan",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017b. Adversarial multi-criteria learning for chinese word segmentation. In Proc. of ACL 2017.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proc. of EMNLP 2014.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An empirical comparison of domain adaptation methods for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In Proc. of ACL 2017.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "One system, many domains: Open-domain statistical machine translation via feature augmentation",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Alon Lavie, and Chris Dyer. 2012. One system, many domains: Open-domain statisti- cal machine translation via feature augmentation. In Proc. of AMTA 2012.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- tiple language translation. In Proc. of ACL 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-domain neural machine translation through unsupervised adaptation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Amin Farajian",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Amin Farajian, Marco Turchi, Matteo Negri, and Marcello Federico. 2017. Multi-domain neural ma- chine translation through unsupervised adaptation. In Proc. of WMT 2017.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fast domain adaptation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR abs/1612.06897.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hypertools: a python toolbox for gaining geometric insights into high-dimensional data",
"authors": [
{
"first": "Andrew",
"middle": [
"C"
],
"last": "Heusser",
"suffix": ""
},
{
"first": "Kirsten",
"middle": [],
"last": "Ziman",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"L W"
],
"last": "Owen",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [
"R"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew C. Heusser, Kirsten Ziman, Lucy L. W. Owen, and Jeremy R. Manning. 2018. Hypertools: a python toolbox for gaining geometric insights into high-dimensional data. Journal of Machine Learn- ing Research.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mixeddomain vs. multi-domain statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of MT Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Huck, A Birch, and B Haddow. 2015. Mixed- domain vs. multi-domain statistical machine trans- lation. In Proc. of MT Summit 2015.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, , and Jeffrey Dean. 2016. Google's multilingual neural machine trans- lation system: Enabling zero-shot translation. CoRR abs/1611.04558.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR 2015.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Domain control for neural machine translation",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Kobus",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Kobus, Josep Crego, and Jean Senellart. 2016. Domain control for neural machine transla- tion. CoRR abs/1612.06140.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Visualizing and understanding neural models in nlp",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proc. of NAACL 2016.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Domain separation networks",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Domain separation networks. In Proc. of NIPS 2016.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adversarial multi-task learning for text classification",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017a. Adversarial multi-task learning for text classifica- tion. In Proc. of ACL 2017.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Implicit discourse relation classification via multi-task neural networks",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2017b. Implicit discourse relation classification via multi-task neural networks. In Proc. of ACL 2017.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stanford neural machine translation systems for spoken language domains",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proc. of IWSLT 2015.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalchbrenner Nal and Blunsom Phil. 2013. Recurrent continuous translation models. In Proc. of EMNLP 2013.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. of ACL 2002.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Effective domain mixing for neural machine translation",
"authors": [
{
"first": "Reid",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reid Pryzant, Denny Britz, and Q Le. 2017. Effective domain mixing for neural machine translation. In Proc. of WMT 2017.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural machine translation training in a multi-domain sce",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural ma- chine translation training in a multi-domain sce- nario. CoRR abs/1708.08712.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL 2016.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A multi-domain translation model framework for statistical machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Aransa",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Holger Schwenk, and Walid Aransa. 2013. A multi-domain translation model framework for statistical machine translation. In Proc. of ACL 2013.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Domain specialization: a post-training domain adaptation for neural machine translation",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Servan, Josep Crego, and Jean Senel- lart. 2016. Domain specialization: a post-training domain adaptation for neural machine translation. CoRR abs/1612.06141.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Lattice-based recurrent neural network encoders for neural machine translation",
"authors": [
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Zhixing",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Rongrong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of AAAI 2017",
"volume": "",
"issue": "",
"pages": "3302--3308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xi- aodong Shi, and Yang Liu. 2017. Lattice-based re- current neural network encoders for neural machine translation. In Proc. of AAAI 2017, pages 3302- 3308.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A hierarchyto-sequence attentional neural machine translation model",
"authors": [
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jiali",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE/ACM Trans. Audio, Speech & Language Processing",
"volume": "26",
"issue": "3",
"pages": "623--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinsong Su, Jiali Zeng, Deyi Xiong, Yang Liu, Mingx- uan Wang, and Jun Xie. 2018. A hierarchy- to-sequence attentional neural machine translation model. IEEE/ACM Trans. Audio, Speech & Lan- guage Processing, 26(3):623-632.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Proc. of NIPS 2014.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-domain neural machine translation",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Tars",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Tars and Mark Fishel. 2018. Multi-domain neu- ral machine translation. CoRR abs/1805.02282.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Um-corpus: A large english-chinese parallel corpus for statistical machine translation",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
},
{
"first": "Paulo",
"middle": [],
"last": "Quaresma",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Tian, Derek F. Wong, Lidia S. Chao, Paulo Quaresma, Francisco Oliveira, Shuo Li, Yiming Wang, and Yi Lu. 2014. Um-corpus: A large english-chinese parallel corpus for statistical ma- chine translation. In Proc. of LREC 2014.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS 2017.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence embedding for neural machine translation domain adaptation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Andrew Finch, Masao Utiyama, and Ei- ichiro Sumita. 2017a. Sentence embedding for neu- ral machine translation domain adaptation. In Proc. of ACL 2017.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Instance weighting for neural machine translation domain adaptation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kehai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proc. of EMNLP 2017.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A context-aware recurrent encoder for neural machine translation",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE/ACM Trans. Audio, Speech & Language Processing",
"volume": "25",
"issue": "12",
"pages": "2424--2432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Deyi Xiong, Jinsong Su, and Hong Duan. 2017. A context-aware recurrent encoder for neu- ral machine translation. IEEE/ACM Trans. Audio, Speech & Language Processing, 25(12):2424-2432.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Topic-informed neural machine translation",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liangyou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Zhang, Liangyou Li, Andy Way, and Qun Liu. 2016. Topic-informed neural machine translation. In Proc. of COLING 2016.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Selective encoding for abstractive sentence summarization",
"authors": [
{
"first": "Qingyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proc. of ACL 2017.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. Proc. of EMNLP 2016.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Figure 4:"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(a) and (b), the sentence representation vectors and the average annotation vectors of different domains are clearly distributed in different regions. By contrast, their distributions are much more concentrated in Figure 4 (c) and (d). Thus, we conclude that our model is able to distinctively learn domain-specific and domainshared contexts. Moreover, from Figure 4 (b), we observe that the sentence representation vectors of Laws domain does not completely coincide with"
},
"TABREF3": {
"type_str": "table",
"text": "Sentence numbers of data sets in our experiments.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"text": "Examples of Domain-Specific Target Words.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF8": {
"type_str": "table",
"text": "al.,",
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"3\">Medical Parliamentary News</td></tr><tr><td colspan=\"3\">Contrast Models (1\u00d7hd)</td><td/></tr><tr><td>OpenNMT</td><td>78.78</td><td>32.96</td><td>30.22</td></tr><tr><td>DL4NMT-single</td><td>77.34</td><td>33.28</td><td>29.56</td></tr><tr><td>DL4NMT-mix</td><td>78.48</td><td>33.16</td><td>31.62</td></tr><tr><td>DL4NMT-finetune</td><td>78.61</td><td>33.72</td><td>34.04</td></tr><tr><td>+DC</td><td>79.34</td><td>33.38</td><td>33.94</td></tr><tr><td>+ML1</td><td>77.29</td><td>33.39</td><td>31.92</td></tr><tr><td>+ML2</td><td>78.65</td><td>33.55</td><td>33.48</td></tr><tr><td>+ADM</td><td>76.74</td><td>33.06</td><td>33.43</td></tr><tr><td>+TTM</td><td>78.27</td><td>33.29</td><td>33.37</td></tr><tr><td colspan=\"3\">Contrast Models (2\u00d7hd)</td><td/></tr><tr><td>DL4NMT-single</td><td>78.50</td><td>33.38</td><td>30.23</td></tr><tr><td>DL4NMT-mix</td><td>78.84</td><td>33.19</td><td>33.28</td></tr><tr><td>DL4NMT-finetune</td><td>79.17</td><td>33.88</td><td>34.20</td></tr><tr><td>+DC</td><td>79.96</td><td>33.44</td><td>33.52</td></tr><tr><td>+ML1</td><td>78.38</td><td>33.20</td><td>31.90</td></tr><tr><td>+ML2</td><td>79.41</td><td>33.55</td><td>33.62</td></tr><tr><td>+ADM</td><td>79.31</td><td>33.50</td><td>33.34</td></tr><tr><td>+TTM</td><td>79.36</td><td>33.13</td><td>33.68</td></tr><tr><td/><td>Our Models</td><td/><td/></tr><tr><td>+WDC(S)</td><td>82.76</td><td>34.13</td><td>34.31</td></tr><tr><td>+WDC(T)</td><td>81.51</td><td>33.76</td><td>33.78</td></tr><tr><td>+WDC</td><td>83.35</td><td>34.17</td><td>34.87</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Overall Evaluation on the English-French</td></tr><tr><td>translation task.</td></tr><tr><td>2015;</td></tr></table>"
}
}
}
}