|
{ |
|
"paper_id": "Q18-1044", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:10:01.971598Z" |
|
}, |
|
"title": "Integrating Weakly Supervised Word Sense Disambiguation into Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Pu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Popescu-Belis", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper demonstrates that word sense disambiguation (WSD) can improve neural machine translation (NMT) by widening the source context considered when modeling the senses of potentially ambiguous words. We first introduce three adaptive clustering algorithms for WSD, based on k-means, Chinese restaurant processes, and random walks, which are then applied to large word contexts represented in a low-rank space and evaluated on SemEval shared-task data. We then learn word vectors jointly with sense vectors defined by our best WSD method, within a state-of-the-art NMT system. We show that the concatenation of these vectors, and the use of a sense selection mechanism based on the weighted average of sense vectors, outperforms several baselines including sense-aware ones. This is demonstrated by translation on five language pairs. The improvements are more than 1 BLEU point over strong NMT baselines, +4% accuracy over all ambiguous nouns and verbs, or +20% when scored manually over several challenging words.", |
|
"pdf_parse": { |
|
"paper_id": "Q18-1044", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper demonstrates that word sense disambiguation (WSD) can improve neural machine translation (NMT) by widening the source context considered when modeling the senses of potentially ambiguous words. We first introduce three adaptive clustering algorithms for WSD, based on k-means, Chinese restaurant processes, and random walks, which are then applied to large word contexts represented in a low-rank space and evaluated on SemEval shared-task data. We then learn word vectors jointly with sense vectors defined by our best WSD method, within a state-of-the-art NMT system. We show that the concatenation of these vectors, and the use of a sense selection mechanism based on the weighted average of sense vectors, outperforms several baselines including sense-aware ones. This is demonstrated by translation on five language pairs. The improvements are more than 1 BLEU point over strong NMT baselines, +4% accuracy over all ambiguous nouns and verbs, or +20% when scored manually over several challenging words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The correct translation of polysemous words remains a challenge for machine translation (MT). Although some translation options may be interchangeable, substantially different senses of * Work conducted while at the Idiap Research Institute. source words must generally be rendered by different words in the target language. Hence, an MT system should identify-implicitly or explicitlythe correct sense conveyed by each occurrence in order to generate an appropriate translation. For instance, in the following sentence from Europarl, the translation of \"deal\" should convey the sense \"to handle\" (in French traiter) and not \"to cope\" (in French rem\u00e9dier, which is wrong):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Source: How can we guarantee the system of prior notification for high-risk products at ports that have the necessary facilities to deal with them?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Reference translation: Comment pouvons-nous garantir le syst\u00e8me de notification pr\u00e9alable pour les produits pr\u00e9sentant un risque \u00e9lev\u00e9 dans les ports qui disposent des installations n\u00e9cessaires pour traiter ces produits ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Baseline neural MT: [. . .] les ports qui disposent des moyens n\u00e9cessaires pour y rem\u00e9dier ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sense-aware neural MT: [. . .] les ports qui disposent des installations n\u00e9cessaires pour les traiter ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Current MT systems perform word sense disambiguation implicitly, based on co-occurring words in a rather limited context. In phrase-based statistical MT, the context size is related to the order of the language model (often between 3 and 5) and to the length of n-grams in the phrase table (seldom above 5). In attention-based neural MT (NMT), the context extends to the entire sentence, but multiple word senses are not modeled explicitly. The implicit sense information captured by word representations used in NMT leads to a bias in the attention mechanism towards dominant senses. Therefore, the NMT decoders cannot clearly identify the contexts in which one word sense should be used rather than another one. Hence, although NMT can use local constraints to translate \"great rock band\" into French as superbe groupe de rock rather than grande bande de pierre-thus correctly assigning the musical rather than geological sense to \"rock\"-it fails to do so for word senses that require larger contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we demonstrate that the explicit modeling of word senses can be helpful to NMT by using combined vector representations of word types and senses, which are inferred from contexts that are larger than that of state-of-the-art NMT systems. We make the following contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Weakly supervised word sense disambiguation (WSD) approaches integrated into NMT, based on three adaptive clustering methods and operating on large word contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Three sense selection mechanisms for integrating WSD into NMT, respectively based on top, average, and weighted average (i.e., attention) of word senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Consistent improvements against baseline NMT on five language pairs: from English (EN) into Chinese (ZH), Dutch (NL), French (FR), German (DE), and Spanish (ES).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organized as follows. In \u00a72, we present three adaptive WSD methods based on k-means clustering, the Chinese restaurant process, and random walks. In \u00a73, we present three sense selection mechanisms that integrate the word senses into NMT. The experimental details appear in \u00a74, and the results concerning the optimal parameter settings are presented in \u00a75, where we also show that our WSD component is competitive on the SemEval 2010 shared task. \u00a76 presents our results: The BLEU scores increase by about 1 point with respect to a strong NMT baseline, and the accuracy of ambiguous noun and verb translation improves by about 4%, while a manual evaluation of several challenging and frequent words shows an improvement of about 20%. A discussion of related work appears finally in \u00a77.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we present the three unsupervised or weakly supervised WSD methods used in our experiments, which aim at clustering different occurrences of the same word type according to their senses. We first consider all nouns and verbs in the source texts that have more than one sense in WordNet, and extract from there the definition of each sense and, if available, the example. For each occurrence of such nouns or verbs in the training data, we use word2vec to build word vectors for their contexts (i.e., neighboring words). All vectors are passed to an unsupervised clustering algorithm, possibly instantiated with WordNet definitions or examples. The resulting clusters can be numbered and used as labels, or their centroid word vector can be used as well, as explained in \u00a73. This approach answers several limitations of previous supervised or unsupervised WSD methods. On the one hand, supervised methods require data with manually sense-annotated labels and are thus limited to typically small subsets of all word types-for example, up to one hundred content words targeted in SemEval 2010 1 ) and up to a thousand words in SemEval 2015 (Moro and Navigli, 2015) . In contrast, our method does not require labeled texts for training, and applies to all word types with multiple senses in WordNet (e.g., nearly 4,000 for some data sets; see Table 1 later in this paper). On the other hand, unsupervised methods often predefine the number of possible senses for all ambiguous words before clustering their occurrences, and do not adapt to what is actually observed in the data; as a result, the senses are often too fine-grained for the needs of MT, especially for a particular domain. In contrast, our model learns the number of senses for each analyzed ambiguous word directly from the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1154, |
|
"end": 1178, |
|
"text": "(Moro and Navigli, 2015)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1356, |
|
"end": 1363, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adaptive Sense Clustering for MT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For each noun or verb type W t appearing in the training data, as identified by the Stanford POS tagger, 2 we extract the senses associated to it in WordNet 3 (Fellbaum, 1998) using NLTK. 4 Specifically, we extract the set of definitions D t = {d tj |j = 1, . . . , m t } and the set of examples of use E t = {e tj |j = 1, . . . , n t }, each of them containing multiple words. Most of the senses are accompanied by a definition, but only about half of them also include an example of use.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 189, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Definitions d tj and examples e tj are represented by vectors defined as the average of the word embeddings over all the words constituting them (except stopwords). Formally, these vectors are d tj = ( w l \u2208d tj w l )/|d tj | and e tj = ( w l \u2208e tj w l )/|e tj |, respectively, where |d tj | is the number of tokens of the definition. Although the entire definition d tj is used to build the d tj vector, we do not consider all words in the example e tj , but limit the sum to a fragment e tj contained in a window of size c centered around the considered word, to avoid noise from long examples. Hence, we divide by the number of words in this window, noted |e tj |. All of these word vectors w l are pre-trained word2vec embeddings from Google 5 (Mikolov et al., 2013) . If dim is the dimensionality of the word vector space, then all vectors w l , d tj , and e tj are in R dim . Each definition vector d tj or example vector e tj for a word type W t is considered as a center vector for each sense during the clustering procedure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 748, |
|
"end": 770, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Turning now to tokens, each word occurrence w i in a source sentence is represented by the average vector u i of the words from its context, that is, a window of c words centered on w i , c being an even number. We calculate the vector u i for w i by averaging vectors from c/2 words before w i and from c/2 words after it. We stop nevertheless at the sentence boundaries, and filter out stopwords before averaging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We adapt three clustering algorithms to our needs for WSD applied to NMT. The objective is to cluster all occurrences w i of a given word type W t , represented as word vectors u i , according to the similarity of their senses, as inferred from the similarity of the context vectors. We compare the algorithms empirically in \u00a75.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "K-means Clustering. The original k-means algorithm (MacQueen, 1967) aims to partition a set of items, which are here tokens w 1 , w 2 , . . . , w n of the same word type W t , represented through their embeddings u 1 , u 2 , . . . , u n where u i \u2208 R dim . The goal of k-means is to partition (or cluster) these vectors into k sets S = {S 1 , S 2 , . . . , S k } so as to minimize the within-cluster sum of squared distances to each centroid \u00b5 i :", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 67, |
|
"text": "(MacQueen, 1967)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "S = arg min S k i=1 u\u2208S i ||u \u2212 \u00b5 i || 2", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "At the first iteration, when there are no clusters yet, the algorithm selects k random points as centroids of the k clusters. Then, at each subsequent iteration t, the algorithm calculates for each candidate cluster a new centroid of the observations, defined as their average vector, as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u00b5 t+1 i = 1 |S t i | u j \u2208S t i u j", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In an earlier application of k-means to phrasebased statistical MT, but not neural MT, we made several modifications to the original k-means algorithm to make it adaptive to the word senses observed in training data (Pu et al., 2017) . We maintain these changes and summarize them briefly here. The initial number of clusters k t for each ambiguous word type W t is set to the number of its senses in WordNet, either considering only the senses that have a definition or those that have an example. The centroids of the clusters are initialized to the vectors representing the senses from WordNet, either using their definition vectors d tj or their example vectors e tj . These initializations are thus a form of weak supervision of the clustering process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 233, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Finally, and most importantly, after running the k-means algorithm, the number of clusters for each word type is reduced by removing the clusters that contain fewer than 10 tokens and assigning their tokens to the closest large cluster. \"Closest\" is defined in terms of the cosine distance between u i and their centroids. The final number of clusters thus depends on the observed occurrences in the training data (which are the same data as for MT), and avoids modeling infrequent senses that are difficult to translate anyway. When used in NMT, in order to assign each new token from the test data to a cluster (i.e., to perform WSD), we select the closest centroid, again in terms of cosine distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Chinese Restaurant Process. The Chinese Restaurant Process (CRP) is an unsupervised method considered as a practical interpretation of a Dirichlet process (Ferguson, 1973) for nonparametric clustering. In the original analogy, each token is compared to a customer in a restaurant, and each cluster is a table where customers can be seated. A new customer can choose to sit at a table with other customers, with a probability proportional to the numbers of customers at that table, or sit at a new, empty table. In an application to multisense word embeddings, Li and Jurafsky (2015) proposed that the probability to \"sit at a table\" should also depend on the contextual similarity between the token and the sense modeled by the table. We build upon this idea and bring several modifications that allow for an instantiation with sense-related knowledge from WordNet, as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 171, |
|
"text": "(Ferguson, 1973)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 582, |
|
"text": "Li and Jurafsky (2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For each word type W t appearing in the data, we start by fixing the maximal number k t of senses or clusters as the number of senses of W t in WordNet. This avoids an unbounded number of clusters (as in the original CRP algorithm) and the risk of cluster sparsity by setting a non-arbitrary limit based on linguistic knowledge. Moreover, we define the initial centroid of each cluster as the word vector corresponding either to the definition d tj of the respective sense, or alternatively to the example e tj illustrating the sense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For each token w i and its context vector u i the algorithm decides whether the token is assigned to one of the sense clusters S j to which previous tokens have been assigned, or whether it is assigned to a new empty cluster, by selecting the option that has the highest probability, which is computed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "P \u221d \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 N j (\u03bb 1 s(u i , d tj ) + \u03bb 2 s(u i , \u00b5 j )) if N j = 0 (non-empty sense) \u03b3s(u i , d tj ) if N j = 0 (empty sense) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In other words, for a non-empty sense, the probability is proportional to the popularity of the sense (number of tokens it already contains, N j ) and to the weighted sum of two cosine similarities s(\u2022, \u2022): one between the context vector u i of the token and the definition of the sense d tj , and another one between u i and the average context vector of the tokens already assigned to the sense (\u00b5 j ). These terms are weighted by the two hyper-parameters \u03bb 1 and \u03bb 2 . For an empty sense, only the second term is used, weighted by the \u03b3 hyper-parameter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Random Walks. Finally, we also consider for comparison a WSD method based on random walks on the WordNet knowledge graph (Agirre and Soroa, 2009; Agirre et al., 2014) available from the UKB toolkit. 6 In the graph, senses correspond to nodes and the relationships or dependencies between pairs of senses correspond to the edges between those nodes. From each input sentence, we extract its content words (nouns, verbs, adjectives, and adverbs) that have an entry in the WordNet weighted graph. The method calculates the probability of a random walk over the graph from a target word's sense ending on any other sense in the graph, and determines the sense with the highest probability for each analyzed word. In this case, the random walk algorithm is PageRank (Grin and Page, 1998) , which computes a relative structural importance or \"rank\" for each node.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 145, |
|
"text": "(Agirre and Soroa, 2009;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 166, |
|
"text": "Agirre et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 200, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 782, |
|
"text": "(Grin and Page, 1998)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3 Integration with Neural MT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering Word Occurrences by Sense", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "We now present several models integrating WSD into NMT, starting from an attention-based NMT baseline (Bahdanau et al., 2015; Luong et al., 2015) . Given a source sentence X with words w_t^x, X = (w_1^x, w_2^x, ..., w_T^x), the model computes a conditional distribution over translations, expressed as p(Y = (w_1^y, w_2^y, ..., w_T^y)|X). The neural network model consists of an encoder, a decoder, and an attention mechanism. First, each source word w_t^x \u2208 V is projected from a one-hot word vector into a continuous vector space representation x_t. Then, the resulting sequence of word vectors is read by the bidirectional encoder, which consists of forward and backward recurrent networks (RNNs). The forward RNN reads the sequence in left-to-right order (i.e., h\u20d7_t = \u03c6\u20d7(h\u20d7_{t\u22121}, x_t)), and the backward RNN reads it right-to-left (h\u20d6_t = \u03c6\u20d6(h\u20d6_{t+1}, x_t)). The hidden states from the forward and backward RNNs are concatenated at each time step t to form an \"annotation\" vector h_t = [h\u20d7_t; h\u20d6_t].",

"cite_spans": [

{

"start": 102,

"end": 125,

"text": "(Bahdanau et al., 2015;",

"ref_id": "BIBREF3"

},

{

"start": 126,

"end": 145,

"text": "Luong et al., 2015)",

"ref_id": "BIBREF23"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Baseline Neural MT Model",

"sec_num": "3.1"

},
|
{ |
|
"text": "Taken over several time steps, these vectors form the \"context\"-that is, a tuple of annotation vectors C = (h 1 , h 2 , ..., h T ). The recurrent activation functions \u2212 \u2192 \u03c6 and \u2190 \u2212 \u03c6 are either long short-term memory units (LSTM) or gated recurrent units (GRU).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The decoder RNN maintains an internal hidden state z t . After each time step t , it first uses the attention mechanism to weight the annotation vectors in the context tuple C. The attention mechanism takes as input the previous hidden state of the decoder and one of the annotation vectors, and returns a relevance score e t ,t = f ATT (z t \u22121 , h t ). These scores are normalized to obtain attention scores:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1 t ,t = exp(e t ,t )/ T k=1 exp(e t ,k )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "These scores serve to compute a weighted sum of annotation vectors c t = T t=1 \u03b1 t ,t h t , which are used by the decoder to update its hidden state:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z t = \u03c6 z (z t \u22121 , y t \u22121 , c t )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Similarly to the encoder, \u03c6 z is implemented as either an LSTM or GRU and y t \u22121 is the targetside word embedding vector corresponding to word w y .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Neural MT Model", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "To model word senses for NMT, we concatenate the embedding of each token with a vector representation of its sense, either obtained from one of the clustering methods presented in \u00a72, or learned during encoding, as we will explain. In other words, the new vector w\u0302_i representing each source token w_i consists of two parts: w\u0302_i = [w_i ; \u00b5_i], where w_i is the word embedding learned by the NMT, and \u00b5_i is the sense embedding obtained from WSD or learned by the NMT. To represent these senses, we create two dictionaries, one for words and the other one for sense labels, which will be embedded in a low-dimensional space, before the encoder. We propose several models for using and/or generating sense embeddings for NMT, named and defined as follows.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sense-aware Neural MT Models",

"sec_num": "3.2"

},
|
{ |
|
"text": "Top Sense (TOP). In this model, we directly use the sense selected for each token by one of the WSD systems above, and use the embeddings of the respective sense as generated by NMT after training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Weighted Average of Senses (AVG). Instead of fully trusting the decision of a WSD system (even one adapted to MT), we consider all listed senses and the respective cluster centroids learned by the WSD system. Then we convert the distances d l between the input token vector and the centroid of each sense S l into a normalized weight distribution either by a linear or a logistic normalization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u03c9 j = 1 \u2212 d j 1\u2264l\u2264k d l or \u03c9 j = e \u2212d 2 j 1\u2264l\u2264k e \u2212d 2 l (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where k is the total number of senses of token w i . The sense embedding for each token is computed as the weighted average of all sense embeddings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u00b5 i = 1\u2264j\u2264k \u03c9 j \u00b5 ij (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Attention-Based Sense Weights (ATT). Instead of obtaining the weight distribution from the centroids computed by WSD, we also propose to dynamically compute the probability of relatedness to each sense based on the current word and sense embeddings during encoding, as follows. Given a token w i , we consider all the other tokens in the sentence (w 1 , . . . , w i\u22121 , w i+1 , . . . , w L ) as the context of w i , where L is the length of the sentence. We define the context vector of w i as the mean of all the embeddings u j of the words w j , that is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "u i = ( l =i u l )/(L \u2212 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Then, we compute the similarity f (u i , \u00b5 ij ) between each sense embedding \u00b5 ij and the context vector u i using an additional attention layer in the network, with two possibilities that will be compared empirically:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f (u i , \u00b5 ij ) = \u03c5 T tanh(W u i + U \u00b5 ij ) (8) or f (u i , \u00b5 ij ) = u T i W \u00b5 ij (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The weights \u03c9 j are now obtained through the following softmax normalization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9 j = e f (u i ,\u00b5 ij ) 1\u2264l\u2264k e f (u i ,\u00b5 il )", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
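A sketch of the ATT weighting path, Equations (8)-(10), plus the final concatenation; the parameter matrices W, U and vector v stand in for the learned attention layer, and all names are ours:

```python
import numpy as np

def att_scores(context, sense_embs, W, U=None, v=None, variant="bilinear"):
    # Eq. 8 ("tanh"):     f = v^T tanh(W c + U mu_ij)
    # Eq. 9 ("bilinear"): f = c^T W mu_ij
    context = np.asarray(context, dtype=float)
    sense_embs = np.asarray(sense_embs, dtype=float)
    if variant == "tanh":
        return np.array([v @ np.tanh(W @ context + U @ mu) for mu in sense_embs])
    return np.array([context @ W @ mu for mu in sense_embs])

def att_weights(scores):
    # Eq. 10: softmax over the k senses (max-shifted for numerical stability)
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def sense_aware_input(word_emb, avg_sense_emb):
    # The weighted-average sense embedding (Eq. 7) is concatenated
    # with the word embedding before entering the encoder.
    return np.concatenate([word_emb, avg_sense_emb])
```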
|
{ |
|
"text": "Finally, the average sense embedding is obtained as in Equation 7, and is concatenated to the word vector u i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "ATT Model with Initialization of Embeddings (ATT ini ). The fourth model is similar to the ATT model, with the difference that we initialize the embeddings of the source word dictionary using the word2vec vectors of the word types, and the embeddings of the sense dictionary using the centroid vectors obtained from k-means. 4 Data, Metrics, and Implementation Data Sets. We train and test our sense-aware MT systems on the data shown in Table 1 : the UN Corpus 7 (Rafalovitch and Dale, 2009) and the Europarl Corpus 8 (Koehn, 2005) . We first experiment with our models using the same data set and protocol as in our previous work (Pu et al., 2017) , to enable comparisons with phrase-based statistical MT systems, for which the sense of each ambiguous source word was modeled as a factor. Moreover, in order to make a better comparison with other related approaches, we train and test our sense-aware NMT models on large data sets from Workshop on Statistical Machine Translation (WMT) shared tasks over three language pairs (EN/DE, EN/ES, and EN/FR).", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 492, |
|
"text": "(Rafalovitch and Dale, 2009)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 532, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 632, |
|
"end": 649, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 445, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The data set used in our previous work consists of 500k parallel sentences for each language pair, 5k for development and 50k for testing. The data originates from UN for EN/ZH, and from Europarl for the other pairs. The source sides of these sets contain around 2,000 different English word forms (after lemmatization) that have more than one sense in WordNet. Our WSD system generates ca. 3.8K different noun labels and 1.5K verb labels for these word forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The WMT data sets additionally used in this paper are the following ones. First, we use the complete EN/DE set from WMT 2016 (Bojar et al., 2016) with a total of ca. 4.5M sentence pairs. In this case, the development set is NewsTest 2013, and the testing set is made of NewsTest 2014 and 7 www.uncorpora.org. 8 www.statmt.org/europarl. 2015. Second, for EN/FR and EN/ES, we use data from WMT 2014 (Bojar et al., 2014) 9 with 5.3M sentences for EN/FR and 3.8M sentences for EN/ES. Here, the development sets are NewsTest 2008 and 2009, and the testing sets are NewsTest 2012 and 2013 for both language pairs. The source sides of these larger additional sets contain around 3,500 unique English word forms with more than one sense in WordNet, and our system generates ca. 8K different noun labels and 2.5K verb labels for each set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 145, |
|
"text": "(Bojar et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 417, |
|
"text": "(Bojar et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
                    }
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Finally, for comparison purposes and model selection, we use the WIT 3 Corpus 10 (Cettolo et al., 2012), a collection of transcripts of TED talks. We use 150K sentence pairs for training, 5K for development and 50K for testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Pre-processing. Before assigning sense labels, we tokenize all the texts and identify the parts of speech using the Stanford POS tagger. 11 Then, we filter out the stopwords and the nouns that are proper names according to the Stanford Name Entity Recognizer. 11 Furthermore, we convert the plural forms of nouns to their singular forms and the verb forms to infinitives using the stemmer and lemmatizer from NLTK, 12 which is essential because WordNet has description entries only for base forms. The pre-processed text is used for assigning sense labels to each occurrence of a noun or verb that has more than one sense in WordNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 139, |
|
"text": "11", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
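The pipeline can be mimicked with a toy stand-in (the stopword list and lemma table below are illustrative placeholders for the Stanford tools and NLTK lemmatizer used in the paper):

```python
# Toy pipeline mirroring the steps: tokenize, drop stopwords,
# and map plurals / inflected verbs to their base forms.
STOPWORDS = {"the", "a", "of", "to"}          # tiny illustrative list
LEMMAS = {"banks": "bank", "deals": "deal", "facing": "face"}  # toy lemma table

def preprocess(sentence):
    tokens = sentence.lower().split()
    content = [t for t in tokens if t not in STOPWORDS]
    return [LEMMAS.get(t, t) for t in content]
```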
|
{ |
|
"text": "K-means Settings. Unless otherwise stated, we adopt the following settings in the k-means algorithm, with the implementation provided in Scikit-learn (Pedregosa et al., 2011) . We use the definition of each sense for initializing the centroids, and later compare this choice with the use of examples. We set k t , the initial number of clusters, to the number of WordNet senses of each ambiguous word type W t , and set the window size for the context surrounding each occurrence to c = 8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 174, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
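A compact NumPy version of the clustering step (a stand-in for the Scikit-learn call; in the paper's best setting the initial centroids are the vectors of the WordNet sense definitions):

```python
import numpy as np

def adaptive_kmeans(contexts, init_centroids, n_iter=10):
    # Minimal k-means whose centroids are initialised from the
    # sense-definition vectors rather than at random.
    X = np.asarray(contexts, dtype=float)
    C = np.asarray(init_centroids, dtype=float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each context occurrence to its nearest sense centroid
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned occurrences
        for j in range(len(C)):
            if (labels == j).any():
                C[j] = X[labels == j].mean(axis=0)
    return labels, C
```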
|
{ |
|
"text": "Neural MT. We build upon the attentionbased neural translation model (Bahdanau et al., 2015) from the OpenNMT toolkit (Klein et al., 2017) . 13 We use LSTM and not GRU. For the proposed ATT and ATT ini models, we add an external attention layer before the encoder, but do not otherwise alter the internals of the NMT model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 92, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 138, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
                    }
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We set the source and target vocabulary sizes to 50,000 and the dimension of word embeddings to 500, which is recommended for OpenNMT, so as to reach a strong baseline. For the ATT ini model, because the embeddings from word2vec used for initialization have only 300 dimensions, we randomly pick up a vector with 200 dimensions within range [\u22120.1,0.1] and concatenate it with the vector from word2vec to reach the required number of dimensions, ensuring a fair comparison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
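The padding of the 300-dimensional word2vec vectors up to the 500-dimensional embedding size can be sketched as follows (the function name and RNG seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def pad_embedding(w2v_vec, target_dim=500, low=-0.1, high=0.1):
    # Concatenate a 300-d word2vec vector with a random tail drawn
    # uniformly from [-0.1, 0.1] to reach the 500-d embedding size.
    extra = rng.uniform(low, high, size=target_dim - len(w2v_vec))
    return np.concatenate([w2v_vec, extra])
```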
|
{ |
|
"text": "It takes around 15 epochs (25-30 hours on Idiap's GPU cluster) to train each of the five NMT models: the baseline and our four proposals. The AVG model takes more time for training (around 40 hours) because we use additional weights and senses for each token. In fact, we limit the number of senses for AVG to 5 per word type, after observing that in WordNet there are fewer than 100 words with more than 5 senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Evaluation Metrics. For the evaluation of intrinsic WSD performance, we use the V -score, the F 1 -score, and their average, as used for instance at SemEval 2010 . The V -score is the weighted harmonic mean of homogeneity and completeness (favoring systems generating more clusters than the reference), and the F 1 -score measures the classification performance (favoring systems generating fewer clusters). Therefore, the ranking metric for SemEval 2010 is the average of the two.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
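For illustration, here is a compact reimplementation of the V-score from its definition (homogeneity and completeness via conditional entropies); the F1-score and the SemEval average are computed separately. All names are ours, and this is a sketch rather than the official scorer:

```python
import math
from collections import Counter

def _entropy(a):
    n = len(a)
    return -sum(c / n * math.log(c / n) for c in Counter(a).values())

def _cond_entropy(a, b):
    # H(A|B) = -sum_{a,b} p(a,b) * log(p(a,b) / p(b))
    n = len(a)
    joint = Counter(zip(a, b))
    pb = Counter(b)
    return -sum(c / n * math.log(c / pb[kb]) for (ka, kb), c in joint.items())

def v_score(gold, pred):
    h = 1.0 if _entropy(gold) == 0 else 1 - _cond_entropy(gold, pred) / _entropy(gold)
    c = 1.0 if _entropy(pred) == 0 else 1 - _cond_entropy(pred, gold) / _entropy(pred)
    return 0.0 if h + c == 0 else 2 * h * c / (h + c)  # harmonic mean
```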
|
{ |
|
"text": "We select the optimal model configuration based on MT performance on development sets, as measured with the traditional multi-bleu score (Papineni et al., 2002) . Moreover, to estimate the impact of WSD on MT, we also measure the actual impact on the nouns and verbs that have several WordNet senses, by counting how many of them are translated exactly as in the reference translation. To quantify the difference with the baseline, we use the following coefficient. First, for a certain set of tokens in the source data, we note as N improved the number of tokens that are translated by our system with the same token as in the reference translation, but are translated differently by the baseline system. Conversely, we note as N degraded the number of tokens that are translated by the baseline system as in the reference, but dif-ferently by our system. 14 We use the normalized coefficient \u03c1 = (N improved \u2212N degraded )/T , where T is the total number of tokens, as a metric to specifically evaluate the translation of words submitted to WSD.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 160, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
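The coefficient can be computed directly from its definition over aligned ambiguous tokens (a sketch with a hypothetical function name and toy inputs):

```python
def rho(system_out, baseline_out, reference):
    # rho = (N_improved - N_degraded) / T over aligned ambiguous tokens:
    # improved = system matches the reference where the baseline does not,
    # degraded = the reverse.
    improved = degraded = 0
    for s, b, r in zip(system_out, baseline_out, reference):
        if s == r and b != r:
            improved += 1
        elif b == r and s != r:
            degraded += 1
    return (improved - degraded) / len(reference)
```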
|
{ |
|
"text": "For all tables we mark in bold the best score per condition. For MT scores in Tables 5, 7 , and 8, we show the improvement over the baseline and its significance based on two confidence levels: either p < 0.05 (indicated with a ' \u2020') or p < 0.01 (' \u2021'). Any p-values larger than 0.05 are treated as not significant and are left unmarked.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 89, |
|
"text": "Tables 5, 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sense-aware Neural MT Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We first select the optimal clustering method and its initialization settings, in a series of experiments with statistical MT over the WIT 3 corpus, extending and confirming our previous results (Pu et al., 2017) . In Table 2 , we present the BLEU and \u03c1 scores of our previous WSD+SMT system for the three clustering methods, initialized with vectors either from the WordNet definitions or from examples, for two language pairs. We also provide BLEU scores of baseline systems and of oracle ones (i.e., using correct senses as factors). The best method is k-means and the best initialization is with the vectors of definitions. All values of \u03c1 show improvements over the baseline, with up to 4% for k-means on DE/EN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 212, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 225, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Best WSD Method Based on BLEU", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Moreover, we found that random initializations underperform with respect to definitions or examples. For a fair comparison, we set the number of clusters equal either to the number of synsets with definitions or with examples, for each word type, and obtained BLEU scores on EN/ZH of 15.34 and 15.27, respectively-hence lower than 15.54 and 15.41 in Table 2 . We investigated earlier (Pu et al., 2017) the effect of the context window surrounding each ambiguous token, and found with the WSD+SMT factored system on EN/ZH WIT 3 data that the optimal size was 8, which we use here as well. Table 3 : WSD results from three SemEval 2010 systems and our six systems, in terms of V -score, F 1 score, and their average. C = the average number of clusters. The adaptive k-means using definitions outperforms the others on the average of V and F 1 , when considering both nouns and verbs, or nouns only. The SemEval systems are UoY (Korkontzelos and Manandhar, 2010) ; KCDC-GD (Kern et al., 2010) ; and Duluth-Mix-Gap (Pedersen, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 401, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 925, |
|
"end": 959, |
|
"text": "(Korkontzelos and Manandhar, 2010)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 970, |
|
"end": 989, |
|
"text": "(Kern et al., 2010)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1011, |
|
"end": 1027, |
|
"text": "(Pedersen, 2010)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 357, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 595, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Best WSD Method Based on BLEU", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "lines at the bottom) with other significant systems that participated in the SemEval 2010 shared task . 15 The adaptive k-means initialized with definitions has the highest average score (35.20) and ranks among the top systems for most of the metrics individually. Moreover, the adaptive k-means method finds on average 4.5 senses per word type, which is very close to the ground-truth value of 4.46. Overall, we observed that k-means infers fewer senses per word type than WordNet. These results show that k-means WSD is effective and provides competitive performance against other weakly supervised alternatives (CRP or Random Walk) and even against SemEval WSD methods, but using additional knowledge not available to SemEval participants.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 106, |
|
"text": "15", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Best WSD Method Based on V/F1 Scores", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To compare several options of the WSD+NMT systems, we trained and tested them on a subset of EN/FR Europarl (a smaller data set shortened the training times). The results are shown 15 We provide comparisons with more systems from SemEval in our previous paper (Pu et al., 2017 in Table 4 . For the AVG model, the logistic normalization in Equation (6) works better than the linear one. For the ATT model, we compared two different labeling approaches for tokens that do not have multiple senses: Either use the same NULL label for all tokens, or use the word itself as a label for its sense; the second option appeared to be the best. Finally, for the ATT ini model, we compared the two options for the attention function in Equation (8), and found that the formula with tanh is the best. In what follows, we use these settings for the AVG and ATT systems. Table 5 : BLEU scores of our sense-aware NMT systems over five language pairs: ATT ini is the best one among SMT and NMT systems. Significance testing is indicated by \u2020 for p < 0.05 and \u2021 for p < 0.01.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 260, |
|
"end": 276, |
|
"text": "(Pu et al., 2017", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 287, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 864, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection of WSD+NMT Model", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We first evaluate our sense-aware models with smaller data sets (ca. 500K lines) for five language pairs with English as source. We evaluate them through both automatic measures and human assessment. Later on, we evaluate our sense-aware NMT models with larger WMT data sets to enable a better comparison with other related approaches. BLEU scores. Table 5 displays the performance of both sense-aware phrase-based and neural MT systems with the training sets of 500K lines listed in Table 1 on five language pairs. Specifically, we compare several approaches that integrate word sense information in SMT and NMT. The best hyper-parameters are those found above, for each of the WSD+NMT combination strategies, in particular the k-means method for WSD+SMT, and the ATT ini method for WSD+NMT-that is, the attention-based model of senses initialized with the output of k-means clustering.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 356, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 491, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Comparisons with Baselines. Table 5 shows that our WSD+NMT systems perform consistently better than the baselines, with the largest improvements achieved by NMT on EN/FR and EN/ES. The neural systems outperform the phrasebased statistical ones (Pu et al., 2017) , which are shown for comparison in the upper part of the table.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 261, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 35, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We compare our proposal to the recent system proposed by Yang et al. (2017) , on the 500K-line EN/FR Europarl data set (the differences between their system and ours are listed in \u00a77). We carefully implemented their model by following their paper, since their code is not available. Using the sense embeddings of the multi-sense skip-gram model (MSSG) (Neelakantan et al., 2014) as they do, and training for six epochs as in their study, our implementation of their model reaches only 31.05 BLEU points. When increasing the training stage until convergence (15 epochs), the best BLEU score is 34.52, which is still below our NMT baseline of 34.60. We also found that the initialization of embeddings with MSSG brings less than 1 BLEU point improvement with respect to random initializations (which scored 30.11 over six epochs and 33.77 until convergence), while Yang et al. found a 1.3-2.7 increase on two different test sets. In order to better understand the difference, we tried several combinations of their model with ours. We obtain a BLEU score of 35.02 by replacing their MSSG sense specification model with our adaptive k-means approach, and a BLEU score of 35.18 by replacing our context calculation method (averaging word embeddings within one sentence) with their context vector generation method, which is computed from the output of a bi-directional RNN. In the end, the best BLEU score on this EN/FR data set (35.78 as shown in Table 5 , column 1, last line) is reached by our system with its best options.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 75, |
|
"text": "Yang et al. (2017)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 378, |
|
"text": "(Neelakantan et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1444, |
|
"end": 1451, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Lexical Choice. Using word alignment, we assess the improvement brought by our systems with respect to the baseline in terms of the number of words-here, WSD-labeled nouns and verbsthat are translated exactly as in the reference translation (modulo alignment errors). These numbers can be arranged in a confusion matrix with four values: the words translated correctly (i.e., as in the reference) by both systems, those translated correctly by one system but incorrectly by the other one, and vice versa, and those translated incorrectly by both. Table 6 shows the confusion matrix for our sense-aware NMT with the ATT ini model versus Baselines EN/FR EN/ES Correct Incorrect Correct Incorrect WSD+ C. 134, 552 17, 145 146, 806 16, 523 NMT I. 10, 551 101, 228 8, 183 58, 387 WSD+ C. 124, 759 13, 408 139, 800 11, 194 SMT I. 9, 676 115, 633 7, 559 71, 346 the NMT baseline over the Europarl test data. The net improvement (i.e., the fraction of words improved by our system minus those degraded 16 ) appears to be +2.5% for EN/FR and +3.6% for EN/ES. For comparison, we show the results of the WSD+SMT system versus the SMT baseline in the lower part of Table 6 : The improvement is smaller, at +1.4% for EN/FR and +1.5% for EN/ES. Therefore, the ATT ini NMT model brings higher benefits over the NMT baseline than the WSD+SMT factored model, although the NMT baseline is stronger than the SMT one (see Table 5 ). Human Assessment. To compare our systems against baselines, we also consider a human evaluation of the translation of words with multiple senses (nouns or verbs). The goal is to capture more precisely the correct translations that are, however, different from the reference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 702, |
|
"end": 706, |
|
"text": "134,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 714, |
|
"text": "552 17,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 723, |
|
"text": "145 146,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 731, |
|
"text": "806 16,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 732, |
|
"end": 746, |
|
"text": "523 NMT I. 10,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 755, |
|
"text": "551 101,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 762, |
|
"text": "228 8,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 763, |
|
"end": 770, |
|
"text": "183 58,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 787, |
|
"text": "387 WSD+ C. 124,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 795, |
|
"text": "759 13,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 804, |
|
"text": "408 139,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 805, |
|
"end": 812, |
|
"text": "800 11,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 826, |
|
"text": "194 SMT I. 9,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 835, |
|
"text": "676 115,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 836, |
|
"end": 842, |
|
"text": "633 7,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 843, |
|
"end": 850, |
|
"text": "559 71,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 854, |
|
"text": "346", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 554, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1153, |
|
"end": 1160, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1402, |
|
"end": 1409, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
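The four confusion-matrix cells and the net improvement can be sketched as follows (a hypothetical helper; inputs are aligned token translations):

```python
def confusion_and_net(system_out, baseline_out, reference):
    # 2x2 counts (both correct / only system / only baseline / both wrong)
    # and the net improvement: (improved - degraded) / total.
    cc = ci = ic = ii = 0
    for s, b, r in zip(system_out, baseline_out, reference):
        if s == r and b == r:
            cc += 1
        elif s == r:
            ci += 1   # system correct, baseline not (improved)
        elif b == r:
            ic += 1   # baseline correct, system not (degraded)
        else:
            ii += 1
    total = cc + ci + ic + ii
    return (cc, ci, ic, ii), (ci - ic) / total
```

With the EN/FR WSD+NMT counts from Table 6, the net improvement is (17,145 - 10,551) / 263,476, i.e. about +2.5%, matching the figure reported in the text.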
|
{ |
|
"text": "Given the cost of the procedure, one evaluator with good knowledge of EN and FR rated the translations of four word types that appear frequently in the test set and have multiple possible senses and translations into French. These words are: deal (101 tokens), face (84), mark (20), and subject (58). Two translations of deal are exemplified in \u00a71.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For each occurrence, the evaluator sees the source sentence, the reference translation, and the outputs of the NMT baseline and the ATT ini in random order, so that the system cannot be identified. The two translations of the considered word are rated as good, acceptable, or wrong. We submit only cases in which the two translations differ, to minimize the annotation effort with no impact on the comparison between systems. Firstly, Figure 1(a) shows that ATT ini has a higher proportion of good translations, and a lower proportion of wrong ones, for all four words. The largest difference is for subject, where ATT ini has 75% good translations and the baseline only 46%; moreover, the baseline has 22% errors and ATT ini has only 9%. Secondly, Figure 1(b) shows the proportions of tokens, for each type, for which ATT ini was respectively better, equal, or worse than the baseline. Again, for each of the four words, there are far more improvements brought by ATT ini than degradations. On average, 40% of the occurrences are improved and only 10% are degraded.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 446, |
|
"text": "Figure 1(a)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 760, |
|
"text": "Figure 1(b)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Results on WMT Data Sets. To demonstrate that our findings generalize to larger data sets, we report results on three data sets provided by the WMT conference (see \u00a74), namely, for EN/DE, EN/ES and EN/FR. Tables 7 and 8 show the results of our proposed NMT models on these test sets. The results in Table 7 confirm that our sense-aware NMT models improve significantly the translation quality also on larger data sets, which permit stronger baselines. Comparing these results with the ones from Table 5, we even conclude that our models trained on larger, mixed-domain data sets achieve higher improvements than the models trained on smaller, domain-specific data sets (Europarl). This clearly shows that our sense-aware NMT models are beneficial on both narrow and broad domains.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 219, |
|
"text": "Tables 7 and 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, we compare our model with several recent NMT models that make use of contextual information, thus sharing a similar overall goal to our study. Indeed, the model proposed by NMT model NT14 NT15 Context-dependent (Choi et al., 2017) -21.99 Context-aware (Zhang et al., 2017) 22.57 -Self-attentive (Werlen et al., 2018) 23 Choi et al. (2017) attempts to improve NMT by integrating context vectors associated to source words into the generation process during decoding. The model proposed by Zhang et al. (2017) is aware of previous attended words on the source side in order to better predict which words will be attended in future. The self-attentive residual decoder designed by Werlen et al. (2018) leverages the contextual information from previously translated words on the target side. BLEU scores on the English-German pair shown in Table 8 demonstrate that our baseline is strong and that our model is competitive with respect to recent models that leverage contextual information in different ways.", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 239, |
|
"text": "(Choi et al., 2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 281, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 325, |
|
"text": "(Werlen et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 516, |
|
"text": "Zhang et al. (2017)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 846, |
|
"end": 853, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Word sense disambiguation aims to identify the sense of a word appearing in a given context (Agirre and Edmonds, 2007) . Resolving word sense ambiguities should be useful, in particular, for lexical choice in MT. An initial investigation found that a statistical MT system that makes use of off-the-shelf WSD does not yield significantly better quality translations than an SMT system not using it (Carpuat and Wu, 2005) . However, several studies (Cabezas and Resnik, 2005; Vickrey et al., 2005; Carpuat and Wu, 2007; Chan et al., 2007) reformulated the task of WSD for SMT and showed that integrating the ambiguity information generated from modified WSD improved SMT by 0.15-0.57 BLEU points compared with baselines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 118, |
|
"text": "(Agirre and Edmonds, 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 420, |
|
"text": "(Carpuat and Wu, 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 474, |
|
"text": "(Cabezas and Resnik, 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 496, |
|
"text": "Vickrey et al., 2005;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 518, |
|
"text": "Carpuat and Wu, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 537, |
|
"text": "Chan et al., 2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Recently, Tang et al. (2016) used only the supersenses from WordNet (coarse-grained semantic labels) for automatic WSD, using maximum entropy classification or sense embeddings learned using word2vec. When combining WSD with SMT using a factored model, Tang et al. improved BLEU scores by 0.7 points on average, though with large differences between their three test subsets (IT Q&A pairs).", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 28, |
|
"text": "Tang et al. (2016)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Although these reformulations of the WSD task proved helpful for SMT, they did not determine whether actual source-side senses are helpful or not for end-to-end SMT. Xiong and Zhang (2014) attempted to answer this question by performing self-learned word sense induction instead of using pre-specified word senses as traditional WSD does. However, they created the risk of discovering sense clusters that do not correspond to the senses of words actually needed for MT. Hence, they left open an important question, namely, whether WSD based on semantic resources such as WordNet (Fellbaum, 1998) can be successfully integrated with SMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 188, |
|
"text": "Xiong and Zhang (2014)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Several studies integrated sense information as features to SMT, either obtained from the sense graph provided by WordNet (Neale et al., 2016) or generated from both sides of word dependencies (Su et al., 2015) . However, apart from the sense graph, WordNet also provides textual information such as sense definitions and examples, which should be useful for WSD, but were not used in these studies. In previous work (Pu et al., 2017) , we used this information to perform sense induction on source-side data using k-means and demonstrated improvement with factored phrasebased SMT but not NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 142, |
|
"text": "(Neale et al., 2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 210, |
|
"text": "(Su et al., 2015)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 434, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Neural MT became the state of the art (Sutskever et al., 2014; Bahdanau et al., 2015) . Instead of working directly at the discrete symbol level as SMT, it projects and manipulates the source sequence of discrete symbols in a continuous vector space. However, NMT generates only one embedding for each word type, regardless of its possibly different senses, as analyzed, for example, by Hill et al. (2017) . Several studies proposed efficient nonparametric models for monolingual word sense representation (Neelakantan et al., 2014; Li and Jurafsky, 2015; Bartunov et al., 2016; Liu et al., 2017) , but left open the question whether sense representations can help neural MT by reducing word ambiguity. Recent studies integrate the additional sense assignment with neural MT based on these approaches, either by adding such sense assignments as additional features (Rios et al., 2017) or by merging the context information on both sides of parallel data for encoding and decoding (Choi et al., 2017) . Yang et al. (2017) recently proposed to add sense information by using weighted sense embeddings as input to neural MT. The sense labels were generated by a MSSG model (Neelakantan et al., 2014) , and the context vector used for sense weight generation was computed from the output of a bidirectional RNN. Finally, the weighted average sense embeddings were used in place of the word embedding for the NMT encoder. The numerical results given in \u00a76 show that our options for using sense embeddings outperform Yang et al.'s proposal. In fact, their approach even performed worse than the NMT baseline on our EN/FR data set. We conclude that adaptive k-means clustering is better than MSSG for use in NMT, and that concatenating the word embedding and its sense vector as input for the RNN encoder is better than just using the sense embedding for each token. In terms of efficiency, Yang et al. 
(2017) need an additional bidirectional RNN to generate the context vector for each input token, whereas we compute the context vector by averaging the embeddings of the neighboring tokens. This slows down the training of the encoder by a factor of 3, which may explain why they only trained their model for six epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 62, |
|
"text": "(Sutskever et al., 2014;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 85, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 405, |
|
"text": "Hill et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 532, |
|
"text": "(Neelakantan et al., 2014;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 555, |
|
"text": "Li and Jurafsky, 2015;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 578, |
|
"text": "Bartunov et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 596, |
|
"text": "Liu et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 884, |
|
"text": "(Rios et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 980, |
|
"end": 999, |
|
"text": "(Choi et al., 2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1002, |
|
"end": 1020, |
|
"text": "Yang et al. (2017)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1196, |
|
"text": "(Neelakantan et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1884, |
|
"end": 1902, |
|
"text": "Yang et al. (2017)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We presented a neural MT system enhanced with an attention-based method to represent multiple word senses, making use of a larger context to disambiguate words that have various possible translations. We proposed several adaptive context-dependent clustering algorithms for WSD and combined them in several ways with NMT-following our earlier experiments with SMT (Pu et al., 2017) -and found that they had competitive WSD performance on data from the SemEval 2010 shared task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 381, |
|
"text": "(Pu et al., 2017)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "For NMT, the best-performing method used the output of k-means to initialize the sense embeddings that are learned by our system. In particular, it appeared that learning sense embeddings for NMT is better than using embeddings learned separately by other methods, although such embeddings may be useful for initialization. Our experiments with five language pairs showed that our sense-aware NMT systems consistently improve over strong NMT baselines, and that they specifically improve the translation of words with multiple senses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In the future, our approach to sense-aware NMT could be extended to other NMT architectures such as the Transformer network proposed by Vaswani et al. (2017) . As was the case with the LSTM-based architecture studied here, the Transformer network does not explicitly model or utilize the sense information of words, and, therefore, we hypothesize that its performance could also be improved by using our sense integration approaches. To encourage further research in sense-aware NMT, our code is made available at https://github.com/idiap/ sense_aware_NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 157, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "www.cs.york.ac.uk/semeval2010_WSI. 2 nlp.stanford.edu/software. 3 wordnet.princeton.edu/. 4 www.nltk.org/howto/wordnet.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "code.google.com/archive/p/word2vec/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ixa2.si.ehu.es/ukb. Strictly speaking, this is the only genuine WSD method, as the two previous ones pertain to sense induction rather than disambiguation. However, for simplicity, we will refer to all of them as WSD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We selected the data from different years of WMT because the EN/FR and EN/ES pairs were only available in WMT 2014.10 wit3.fbk.eu. 11 nlp.stanford.edu/software. 12 www.nltk.org. 13 www.opennmt.net.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The values of N improved and N degraded are obtained using automatic word alignment. They do not capture, of course, the intrinsic correctness of a candidate translation, but only its identity or not with one reference translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Explicitly, improvements are (system-correct & baseline-incorrect) minus (system-incorrect & baselinecorrect), and degradations the converse difference.deal face mark subject Candidate", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors are grateful for support from the Swiss National Science Foundation through the MODERN Sinergia project on Modeling Discourse Entities and Relations for Coherent Machine Translation, grant no. 147653 (www. idiap.ch/project/modern), and from the European Union through the SUMMA Horizon 2020 project on Scalable Understanding of Multilingual Media, grant no. 688139 (www. summa-project.eu). The authors would like to thank the TACL editors and reviewers for their helpful comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Word Sense Disambiguation: Algorithms and Applications", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Edmonds", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre and Philip Edmonds. 2007. Word Sense Disambiguation: Algorithms and Appli- cations. Springer-Verlag, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Random walks for knowledgebased word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "1", |
|
"pages": "57--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge- based word sense disambiguation. Computa- tional Linguistics, 40(1):57-84.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Personalizing PageRank for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personaliz- ing PageRank for word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Com- putational Linguistics (EACL), pages 33-41, Athens, Greece.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Pro- ceedings of the International Conference on Learning Representations, San Diego, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Breaking sticks and ambiguities with adaptive skipgram", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Bartunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kondrashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Osokin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Vetrov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, and Dmitry Vetrov. 2016. Breaking sticks and ambiguities with adaptive skip- gram. In Artificial Intelligence and Statistics, pages 130-138.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Findings of the 2014 workshop on statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Buck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Leveling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Pecina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herve", |
|
"middle": [], |
|
"last": "Saint-Amand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ale\u0161", |
|
"middle": [], |
|
"last": "Tamchyna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ale\u0161 Tamchyna. 2014. Findings of the 2014 workshop on statis- tical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Trans- lation, pages 12-58, Baltimore, Maryland, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Findings of the 2016 conference on machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [ |
|
"Jimeno" |
|
], |
|
"last": "Yepes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelie", |
|
"middle": [], |
|
"last": "Neveol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariana", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Verspoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Pro- ceedings of the First Conference on Machine Translation, pages 131-198, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Using WSD techniques for lexical selection in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Cabezas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clara Cabezas and Philip Resnik. 2005. Using WSD techniques for lexical selection in sta- tistical machine translation. Technical report, DTIC Document.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Word sense disambiguation vs. statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "387--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marine Carpuat and Dekai Wu. 2005. Word sense disambiguation vs. statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 387-394, Michigan, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Improving statistical machine translation using word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marine Carpuat and Dekai Wu. 2007. Improv- ing statistical machine translation using word sense disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning (EMNLP- CoNLL), pages 61-72, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "WIT 3 : Web inventory of transcribed and translated talks", |
|
"authors": [ |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Girardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT 3 : Web inventory of transcribed and translated talks. In Proceed- ings of the 16 th Conference of the European Association for Machine Translation (EAMT), pages 261-268, Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Word sense disambiguation improves statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Seng Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word sense disambiguation im- proves statistical machine translation. In Pro- ceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 33-40, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Context-dependent word representation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoul", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computer Speech & Language", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "149--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heeyoul Choi, Kyunghyun Cho, and Yoshua Bengio. 2017. Context-dependent word repre- sentation for neural machine translation. Com- puter Speech & Language, 45:149-160.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "WordNet: An Electronic Lexical Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A Bayesian analysis of some nonparametric problems", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ferguson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "The Annals of Statistics", |
|
"volume": "1", |
|
"issue": "2", |
|
"pages": "209--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas S. Ferguson. 1973. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Grin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Page", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "107--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Grin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The representational geometry of word meanings acquired by neural machine translation models. Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "31", |
|
"issue": "", |
|
"pages": "3--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Kyunghyun Cho, S\u00e9bastien Jean, and Yoshua Bengio. 2017. The representational ge- ometry of word meanings acquired by neural machine translation models. Machine Transla- tion, 31(1):3-18.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "KCDC: Word sense induction by using grammatical dependencies and sentence phrase structure", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Kern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Muhr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Granitzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "351--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roman Kern, Markus Muhr, and Michael Granitzer. 2010. KCDC: Word sense induction by us- ing grammatical dependencies and sentence phrase structure. In Proceedings of the 5th In- ternational Workshop on Semantic Evaluation (SemEval-2010), pages 351-354, Los Angeles, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Open-NMT: Open-source toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Open- NMT: Open-source toolkit for neural machine translation. CoRR, abs/1701.02810v2.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of MT Summit X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel cor- pus for statistical machine translation. In Pro- ceedings of MT Summit X, pages 79-86, Phuket, Thailand.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "UoY: graphs of ambiguous vertices for word sense induction and disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Korkontzelos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--358", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ioannis Korkontzelos and Suresh Manandhar. 2010. UoY: graphs of ambiguous vertices for word sense induction and disambiguation. In Proceedings of the 5th International Work- shop on Semantic Evaluation (SemEval 2010), pages 355-358, Los Angeles, California.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Do multi-sense embeddings improve natural language understanding?", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1722--1732", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language under- standing? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1722-1732, Lisbon, Portugal.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Handling homographs in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frederick Liu, Han Lu, and Graham Neubig. 2017. Handling homographs in neural machine trans- lation. CoRR, abs/1708.06510v2.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empir- ical Methods in Natural Language Processing (EMNLP), pages 1412-1421, Lisbon, Portugal.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Some methods for classification and analysis of multivariate observations", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Macqueen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "281--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James MacQueen. 1967. Some methods for clas- sification and analysis of multivariate obser- vations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pages 281-297. Oakland, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "SemEval-2010 task 14: Word sense induction and disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Klapaftis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitriy", |
|
"middle": [], |
|
"last": "Dligach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suresh Manandhar, Ioannis P. Klapaftis, Dmitriy Dligach, and Sameer S. Pradhan. 2010. SemEval-2010 task 14: Word sense induction and disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval 2010), pages 63-68, Los Angeles, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neu- ral Information Processing Systems (NIPS), pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Semeval-2015 task 13: Multilingual all-words sense disambiguation and entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Moro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "288--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Moro and Roberto Navigli. 2015. Semeval-2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 288-297, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Word sense-aware machine translation: Including senses as contextual features for improved translation models", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Neale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu\u0131s", |
|
"middle": [], |
|
"last": "Gomes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oier", |
|
"middle": [], |
|
"last": "Lopez De Lacalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ant\u00f3nio", |
|
"middle": [], |
|
"last": "Branco", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2777--2783", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Neale, Lu\u0131s Gomes, Eneko Agirre, Oier Lopez de Lacalle, and Ant\u00f3nio Branco. 2016. Word sense-aware machine translation: Including senses as contextual features for im- proved translation models. In Proceedings of the 10th International Conference on Lan- guage Resources and Evaluation (LREC 2016), pages 2777-2783, Portoroz, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Efficient non-parametric estimation of multiple embeddings per word in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeevan", |
|
"middle": [], |
|
"last": "Shankar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1059--1069", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embed- dings per word in vector space. In Proceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 1059-1069, Doha, Qatar.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "BLEU: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine transla- tion. In Proceedings of the 40th Annual Meeting of Association for Computational Linguistics, pages 311-318, Philadelphia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Duluth-WSI: SenseClusters applied to the sense induction task of SemEval-2", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ted Pedersen. 2010. Duluth-WSI: SenseClusters applied to the sense induction task of SemEval- 2. In Proceedings of the 5th International Work- shop on Semantic Evaluation (SemEval 2010), pages 363-366, Los Angeles, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Scikitlearn: Machine learning in Pmaython", |
|
"authors": [ |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c9douard", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cournapeau, Matthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay. 2011. Scikit- learn: Machine learning in Pmaython. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Sense-aware statistical machine translation using adaptive context-dependent clustering", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Pu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Popescu-Belis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Pu, Nikolaos Pappas, and Andrei Popescu- Belis. 2017. Sense-aware statistical machine translation using adaptive context-dependent clustering. In Proceedings of the Second Con- ference on Machine Translation, pages 1-10, Copenhagen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "United Nations general assembly resolutions: A six-language parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Rafalovitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Dale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of MT Summit XII", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "292--299", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Rafalovitch and Robert Dale. 2009. United Nations general assembly resolutions: A six-language parallel corpus. In Proceedings of MT Summit XII, pages 292-299, Ontario, ON. Canada.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Improving word sense disambiguation in neural machine translation with sense embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Annette", |
|
"middle": [], |
|
"last": "Rios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Mascarell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annette Rios, Laura Mascarell, and Rico Sennrich. 2017. Improving word sense dis- ambiguation in neural machine translation with sense embeddings. In Second Confer- ence on Machine Translation, pages 11-19, Copenhagen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Graph-based collective lexical selection for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jinsong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujian", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xianpei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junfeng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1238--1247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinsong Su, Deyi Xiong, Shujian Huang, Xianpei Han, and Junfeng Yao. 2015. Graph-based col- lective lexical selection for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1238-1247, Lisbon, Portugal.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Improving translation selection with supersenses", |
|
"authors": [ |
|
{ |
|
"first": "Haiqing", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oier", |
|
"middle": [], |
|
"last": "Lopez De Lacalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 26th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3114--3123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haiqing Tang, Deyi Xiong, Oier Lopez de Lacalle, and Eneko Agirre. 2016. Improv- ing translation selection with supersenses. In Proceedings of the 26th International Confer- ence on Computational Linguistics (COLING), pages 3114-3123, Osaka, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Word-sense disambiguation for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vickrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Biewald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Teyssier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "771--778", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word-sense disambigua- tion for machine translation. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP), pages 771-778, Vancouver, BC, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Self-attentive residual decoder for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Lesly", |
|
"middle": [], |
|
"last": "Miculicich Werlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhananjay", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Popescu-Belis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1366--1379", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lesly Miculicich Werlen, Nikolaos Pappas, Dhananjay Ram, and Andrei Popescu-Belis. 2018. Self-attentive residual decoder for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1366-1379, New Orleans, LA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A sensebased translation model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1459--1469", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deyi Xiong and Min Zhang. 2014. A sense- based translation model for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1459-1469, Baltimore MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Multi-sense based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Joint Conference on Neural Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3491--3497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2017. Multi-sense based neural machine trans- lation. In International Joint Conference on Neural Networks, pages 3491-3497, Anchorage, AK, USA.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "A context-aware recurrent encoder for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Biao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinsong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Speech and Language Processing (TASLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2424--2432", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Biao Zhang, Deyi Xiong, Jinsong Su, and Hong Duan. 2017. A context-aware recurrent encoder for neural machine translation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), pages 2424-2432.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Human comparison of the EN/FR translations of four word types. (a) Proportion of good (light gray), acceptable (middle gray), and wrong (dark gray) translations per word and system (baseline left, ATT ini right, for each word). (b) Proportion of translations in which ATT ini is better (light gray), equal (middle gray), or worse (dark gray) than the baseline." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Size of data sets used for machine translation from English to five different target languages (TL). FR = French; DE = German; ES = Spanish; ZH = Chinese; NL = Dutch." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Pair</td><td colspan=\"2\">Initialization</td><td colspan=\"8\">BLEU Baseline Graph CRP k-means Oracle Graph</td><td>\u03c1 (%) CRP</td><td>k-means</td></tr><tr><td>EN/ZH</td><td colspan=\"2\">Definitions Examples</td><td/><td>15.23</td><td>15.31</td><td>15.31 15.28</td><td>15.54 15.41</td><td>16.24 15.85</td><td colspan=\"2\">+0.20</td><td>+0.27 +0.13</td><td>+2.25 +1.60</td></tr><tr><td>EN/DE</td><td colspan=\"2\">Definitions Examples</td><td/><td>19.72</td><td>19.74</td><td>19.69 19.74</td><td>20.23 19.87</td><td colspan=\"3\">20.99 \u22120.07 20.45</td><td>\u22120.19 \u22120.12</td><td>+3.96 +2.15</td></tr><tr><td colspan=\"11\">Table 2: Performance of the WSD+SMT factored system for two language pairs from WIT3, with three clustering</td></tr><tr><td colspan=\"4\">methods and two initializations.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>System</td><td/><td>All</td><td/><td colspan=\"2\">V-score Nouns Verbs</td><td>All</td><td colspan=\"2\">F 1 -score Nouns Verbs</td><td>All</td><td>Average Nouns Verbs</td><td>C</td></tr><tr><td>UoY</td><td/><td colspan=\"3\">15.70 20.60</td><td colspan=\"6\">8.50 49.80 38.20 66.60 32.75 29.40 37.50 11.54</td></tr><tr><td>KCDC-GD</td><td/><td colspan=\"2\">6.90</td><td>5.90</td><td colspan=\"6\">8.50 59.20 51.60 70.00 33.05 28.70 39.20</td><td>2.78</td></tr><tr><td colspan=\"2\">Duluth-Mix-Gap</td><td colspan=\"2\">3.00</td><td>2.90</td><td colspan=\"6\">3.00 59.10 54.50 65.80 31.05 29.70 34.40</td><td>1.61</td></tr><tr><td colspan=\"11\">k-means+definitions 13.65 14.70 12.60 56.70 53.70 59.60 35.20 34.20 36.10</td><td>4.45</td></tr><tr><td colspan=\"2\">k-means+examples</td><td colspan=\"9\">11.35 11.00 11.70 53.25 47.70 58.80 32.28 29.30 35.25</td><td>3.58</td></tr><tr><td colspan=\"2\">CRP + definitions</td><td colspan=\"2\">1.45</td><td>1.50</td><td colspan=\"6\">1.45 64.80 56.80 72.80 33.13 29.15 37.10</td><td>1.80</td></tr><tr><td colspan=\"2\">CRP + examples</td><td 
colspan=\"2\">1.20</td><td>1.30</td><td colspan=\"6\">1.10 64.75 56.80 72.70 32.98 29.05 36.90</td><td>1.66</td></tr><tr><td colspan=\"2\">Graph + definitions</td><td colspan=\"9\">11.30 11.90 10.70 55.10 52.80 57.40 33.20 32.35 34.05</td><td>2.63</td></tr><tr><td colspan=\"2\">Graph + examples</td><td colspan=\"2\">9.05</td><td>8.70</td><td colspan=\"6\">9.40 50.15 45.20 55.10 29.60 26.96 32.25</td><td>2.08</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows our WSD results in terms of Vscore and F 1 -score, comparing our methods (six" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Performance of various WSD+NMT configurations on a EN/FR subset of Europarl, with variations with respect to baseline. We select the settings with the best performance (bold) for our final experiments in \u00a76." |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Confusion matrix for our WSD+NMT (ATT ini ) system and our WSD+SMT system against their respective baselines (NMT and SMT), over the Europarl test data, for two language pairs." |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td>EN/FR</td><td/><td>EN/ES</td><td/></tr><tr><td/><td>NT12</td><td>NT13</td><td>NT12</td><td>NT13</td></tr><tr><td>Baseline</td><td>29.09</td><td>29.60</td><td>32.66</td><td>29.57</td></tr><tr><td>None +</td><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "ATT 29.47 (+.38) 30.21 (+.61) \u2020 33.15 (+.49) \u2020 30.27 (+.70) \u2021 k-means + ATT ini 30.26 (+1.17) \u2021 30.95 (+.1.35) \u2021 34.14 (+1.48) \u2021 30.67 (+1.1) \u2021 Table 7: BLEU scores on WMT NewsTest 2012 and 2013 (NT) test sets for two language pairs. Significance testing is indicated by \u2020 for p < 0.05 and \u2021 for p < 0.01." |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td>.2</td><td>25.5</td></tr><tr><td>Baseline</td><td>22.79</td><td>24.94</td></tr><tr><td>None + ATT</td><td>23.34 \u2020</td><td>25.28</td></tr><tr><td>k-means + ATT ini</td><td colspan=\"2\">23.85 (+1.14) \u2021 25.71 (+0.77) \u2021</td></tr><tr><td>Table 8:</td><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "BLEU score on English-to-German translation over the WMT NewsTest (NT) 2014 and 2015 test sets. Significance testing is indicated by \u2020 for p < 0.05 and \u2021 for p < 0.01. The highest score per column is in bold." |
|
} |
|
} |
|
} |
|
} |