|
{ |
|
"paper_id": "S10-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:28.694112Z" |
|
}, |
|
"title": "USPwlv and WLVusp: Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution", |
|
"authors": [ |
|
{ |
|
"first": "Wilker", |
|
"middle": [], |
|
"last": "Aziz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of S\u00e3o Paulo S\u00e3o Carlos", |
|
"location": { |
|
"region": "SP", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Wolverhampton", |
|
"location": { |
|
"settlement": "Wolverhampton", |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used.", |
|
"pdf_parse": { |
|
"paper_id": "S10-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The goal of the Cross-Lingual Lexical Substitution task in Semeval-2010 (Mihalcea et al., 2010) is to find the best (best subtask) Spanish translation or the 10-best (oot subtask) translations for 100 different English source words depending on their context of occurrence. Source words include nouns, adjectives, adverbs and verbs. 1,000 occurrences of such words are given along with a short context (a sentence).",
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 95, |
|
"text": "(Mihalcea et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This task resembles that of Word Sense Disambiguation (WSD) within Machine Translation (MT). A few approaches have recently been proposed using standard WSD features to learn models using translations instead of senses (Specia et al., 2007; Carpuat and Wu, 2007; Chan and Ng, 2007) . In such approaches, the global WSD score is added as a feature to statistical MT systems, along with additional features, to help the system on its choice for the best translation of a source word or phrase.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "(Specia et al., 2007;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 262, |
|
"text": "Carpuat and Wu, 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 281, |
|
"text": "Chan and Ng, 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We exploit contextual information in alternative ways to standard WSD features and supervised approaches. Our two systems - USPwlv and WLVusp - use two main components: (i) a list of possible translations for the source word regardless of its context; and (ii) a contextual model that ranks such translations for each occurrence of the source word given its context.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While these components constitute the core of most WSD systems, the way they are created and integrated in our systems differs from standard approaches. Our systems do not require a model to disambiguate / translate each particular source word, but instead use general models. We experimented with both corpus-based and standard dictionaries, and different learning methodologies to rank the candidate translations. Our main goal was to maximize the accuracy of the system in choosing the best translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "WLVusp is a very simple system based essentially on (i) a Statistical Machine Translation (SMT) system trained using a large parallel corpus to generate the n-best translations for each occurrence of the source words and (ii) a standard English-Spanish dictionary to filter out noisy translations and provide additional translations in case the SMT system was not able to produce a large enough number of legitimate translations, particularly for the oot subtask.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "USPwlv uses a dictionary built from a large parallel corpus using inter-language information theory metrics and an online-learning supervised algorithm to rank the options from the dictionary. The ranking is based on global and local contextual features, such as the mutual information between the translation and the words in the source context, which are trained using human annotation on the trial dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The English-Spanish part of Europarl (Koehn, 2005) , a parallel corpus from the European Parliament proceedings, was used as a source of sentence level aligned data. The nearly 1.7M sentence pairs of English-Spanish translations, as provided by the Fourth Workshop on Machine Translation (WMT09 1 ), sum up to approximately 48M tokens in each language. Europarl was used both to train the SMT system and to generate dictionaries based on inter-language mutual information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 50, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallel corpus", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The dictionary used by WLVusp was extracted using the free online service Word Reference 2 , which provides two dictionaries: Espasa Concise and Pocket Oxford Spanish Dictionary. Regular expressions were used to extract the content of the webpages, keeping only the translations of the words or phrasal expressions, and the outcome was manually revised. The manual revision was necessary to remove translations of long idiomatic expressions which were only defined through examples, for example, for the verb check: \"we checked up and found out he was lying -hicimos averiguaciones y comprobamos que ment\u00eda\". The resulting dictionary contains a number of open domain (single or multi-word) translations for each of the 100 source words. This number varies from 3 to 91, with an average of 12.87 translations per word. For example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dictionaries", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 yet.r = todav\u00eda, a\u00fan, ya, hasta ahora, sin embargo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dictionaries", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 paper.n = art\u00edculo, papel, envoltorio, diario, peri\u00f3dico, trabajo, ponencia, examen, parte, documento, libro Any other dictionary can in principle be used to produce the list of translations, possibly without manual intervention. More comprehensive dictionaries could yield better results, particularly those with explicit information about the frequencies of different translations. Automatic metrics based on a parallel corpus can also be used to learn the dictionary, but we would expect the accuracy of the system to drop in that case.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dictionaries", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The process to generate the corpus-based dictionary for USPwlv is described in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dictionaries", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The Europarl parallel corpus was tokenized and lowercased using standard tools provided by the WMT09 competition. Additionally, the sentences that were longer than 100 tokens after tokenization were discarded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing techniques", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Since the task specifies that translations should be given in their basic forms, and also in order to decrease the sparsity due to the rich morphology of Spanish, the parallel corpus was lemmatized using TreeTagger (Schmid, 2006) , a freely available part-of-speech (POS) tagger and lemmatizer. Two different versions of the parallel corpus were built using both lemmatized words and their POS tags:", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 229, |
|
"text": "(Schmid, 2006)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing techniques", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Lemma Words are represented by their lemmatized form. In case of ambiguity, the original form was kept, in order to avoid incorrect choices. Words that could not be lemmatized were also kept in their original form.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing techniques", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Lemma.pos Words are represented by their lemmatized form followed by their POS tags. POS tags representing content words are generalized into four groups: verbs, nouns, adjectives and adverbs. When the system could not identify a POS tag, a dummy tag was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing techniques", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The same techniques were used to pre-process the trial and test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing techniques", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The trial data available for this task was used as a training set for the USPwlv system, which uses a supervised learning algorithm to learn the weights of a number of global features. For the 300 occurrences of 30 words in the trial data, the expected lexical substitutions were given by the task organizers, and therefore the feature weights could be optimized so that the system produces good translations. These sentences were pre-processed in the same way as the parallel corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training samples", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "This system is based on a combination of the Statistical Machine Translation (SMT) framework using the English-Spanish Europarl data and an English-Spanish dictionary built semiautomatically (Section 2.2). The parallel corpus was lowercased, tokenized and lemmatized (Section 2.3) and then used to train the standard SMT system Moses (Koehn et al., 2007) and translate the trial/test sentences, producing the 1000-best translations for each input sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 354, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Moses produces its own dictionary from the parallel corpus by using a word alignment tool and heuristics to build parallel phrases of up to seven source words and their corresponding target words, which are assigned translation probabilities using frequency counts in the corpus. This methodology provides some very localized contextual information, which can help guide the system towards choosing a correct translation. Additional contextual information is used by the language model component in Moses, which considers how likely the sentence translation is in the Spanish language (with a 5-gram language model).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Using the phrase alignment information, the translation of each occurrence of a source word is identified in the output of Moses. Since the phrase translations are learned using the Europarl corpus, some translations are very specific to that domain. Moreover, translations can be very noisy, given that the process is unsupervised. We therefore filter the translations given by Moses to keep only those also given as possible Spanish translations according to the semi-automatically built English-Spanish dictionary (Section 2.2). This is a general-domain dictionary, but it is less likely to contain noise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For best results, only the top translation produced by Moses is considered. If the actual translation does not belong to the dictionary, the first translation in that dictionary is used. Although there is no information about the order of the translations in the dictionaries used, by looking at the translations provided, we believe that the first translation is in general one of the most frequent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For oot results, the alternative translations provided by the 1000-best translations are considered. In cases where fewer than 10 translations are found, we extract the remaining ones from the handcrafted dictionary following their given order until 10 translations (when available) are found, without repetition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The WLVusp system therefore combines contextual information provided by Moses (via its phrases and language model) with general translation information provided by a dictionary.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "WLVusp system", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For each source word occurring in the context of a specific sentence, this system uses a linear combination of features to rank the options from an automatically built English-Spanish dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For the best subtask, the translation ranked first is chosen, while for the oot subtask, the 10 best ranked translations are used without repetition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The building of the dictionary, the features used and the learning scheme are described in what follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Dictionary Building The dictionary building is based on the concept of inter-language Mutual Information (MI) (Raybaud et al., 2009) . It consists in detecting which words in a source-language sentence trigger the appearance of other words in its target-language translation. The inter-language MI in Equation 3 can be defined for pairs of source (s) and target (t) words by observing their occurrences at the sentence level in a parallel, sentence aligned corpus. Both simple (Equation 1) and joint distributions (Equation 2) were built based on the English-Spanish Europarl corpus using its Lemma.pos version (Section 2.3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 132, |
|
"text": "(Raybaud et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p_l(x) = \\frac{count_l(x)}{Total} \\quad (1) \\qquad p_{en,es}(s, t) = \\frac{f_{en,es}(s, t)}{Total} \\quad (2) \\qquad MI(s, t) = p_{en,es}(s, t) \\log \\frac{p_{en,es}(s, t)}{p_{en}(s) p_{es}(t)}",
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Avg_{MI}(t_j) = \\frac{\\sum_{i=1}^{l} w(|i - j|) MI(s_i, t_j)}{\\sum_{i=1}^{l} w(|i - j|)}",
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the equations, count_l(x) is the number of sentences in which the word x appears in a corpus of l-language texts; count_{en,es}(s, t) is the number of sentences in which the source and target words co-occur in the parallel corpus; and Total is the total number of sentences in the corpus of the language(s) under consideration. The distributions p_en and p_es are monolingual and can be extracted from any monolingual corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To prevent discontinuities in Equation 3, we used a smoothing technique to avoid null probabilities. We assume that any monolingual event occurs at least once and the joint distribution is smoothed by a Guo's factor \u03b1 = 0.1 (Guo et al., 2004) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 242, |
|
"text": "(Guo et al., 2004)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "p_{en,es}(s, t) \u2190 (p_{en,es}(s, t) + \u03b1 p_{en}(s) p_{es}(t)) / (1 + \u03b1)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each English source word, a list of Spanish translations was produced and ranked according to inter-language MI. From the resulting list, the 50-best translations constrained by the POS of the original English word were selected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Features The inter-language MI is a feature which indicates the global suitability of translating a source token s into a target one t. However, inter-language MI is not able to provide local contextual information, since it does not take into account the source context sentence c. The following features were defined to achieve such capability:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Weighted Average MI (aMI) consists in averaging the inter-language MI between the target word t_j and every source word s in the context sentence c (Raybaud et al., 2009). The MI component is scaled so that long-range dependencies are considered less important, as shown in Equation 4. The scaling factor w(\u2022) is assigned 1 for verbs, nouns, adjectives and adverbs up to five positions from the source word, and 0 otherwise. This feature gives an idea of how well the elements in a window centered on the source word head (s_j) align to the target word t_j, representing the suitability of t_j as a translation of s_j in the given context.",
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 170, |
|
"text": "(Raybaud et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Modified Weighted Average MI (mMI) takes the average MI as previously defined, except that the source word head is not taken into account. In other words, the scaling function in Equation 4 equals 0 also when |i \u2212 j| = 0. It gives an idea of how well the source words align to the target word t_j without the strong influence of its source translation s_j. This should provide less biased information to the learning.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Best from WLVusp (B) is a flag that indicates whether a candidate t is taken as the best ranked option according to the WLVusp system. The goal is to exploit the information from the SMT system and handcrafted dictionary used by that system.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "10-best from WLVusp (T) is a flag that indicates whether a candidate t was among the 10 best ranked translations provided by the WLVusp system.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Online Learning In order to train a binary ranking system based on the trial dataset as our training set, we used the online passive-aggressive algorithm MIRA (Crammer et al., 2006). MIRA is said to be passive-aggressive because it updates the parameters only when a misprediction is detected. At training time, a set of pairs of candidate translations is retrieved for each sentence. For each of these pairs, the rank given by the system with the current parameters is compared to the correct rank rank_h(\u2022). A loss function loss(\u2022) controls the updates, assigning nonzero values only to mispredictions; in our implementation, it equals 1 for any mistake made by the model. Each element of the kind (c, s, t) = (source context sentence, source head, translation candidate) is assigned a feature vector f(c, s, t) = (MI, aMI, mMI, B, T), which is modeled by a vector of parameters w \u2208 R^5.",
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 181, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The binary ranking is defined as the task of finding the best parameters w which maximize the number of successful predictions. A successful prediction happens when the system is able to rank two translation candidates as expected. To do so, we define an oriented pair x = (a, b) of candidate translations of s in the context of c and a feature vector",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "F(x) = f(c, s, a) \u2212 f(c, s, b). signal(w \u2022 F(x))",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "is the orientation the model gives to x, that is, whether the system believes a is better than b or vice versa. Based on whether or not that orientation is the same as that of the reference 3 , the algorithm decides whether to update the parameters. When an update occurs, it is the one that results in the minimal change to the parameters while correctly labeling x, that is, guaranteeing that after the update the system will rank (a, b) correctly. Algorithm 1 presents the general method, as proposed in (Crammer et al., 2006).",
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 550, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the case of this binary ranking, the minimization problem has a well-defined analytic solution as long as f(c, s, a) \u2260 f(c, s, b) and rank_h(a) \u2260 rank_h(b); otherwise signal(w \u2022 F(x)) or the human label, respectively, would not be defined. These conditions have an impact on the content of Pairs(c), the set of training points built upon the system outputs for c, which can only contain pairs of differently ranked translations.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{

"text": "The learning scheme was initialized with a uniform vector. The average parameters after N = 5 iterations over the training set were taken.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "USPwlv System",

"sec_num": "4"

},

{

"text": "Algorithm 1 MIRA: 1: for c \u2208 Training Set do 2: for x = (a, b) \u2208 Pairs(c) do 3: \u0177 \u2190 signal(w \u2022 F(x)) 4: z \u2190 correct_label(x) 5: w \u2190 argmin_u (1/2) ||w \u2212 u||^2 6: s.t. u \u2022 F(x) \u2265 loss(\u0177, z) 7: v \u2190 v + w 8: T \u2190 T + 1 9: end for 10: end for 11: return (1/T) v",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "USPwlv System",

"sec_num": "4"

},
|
{ |
|
"text": "5 Results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USPwlv System", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Tables 1 and 2 show the main results obtained by our two systems in the official competition. We contrast our systems' results against the best baseline provided by the organizers, DIC, which considers translations from a dictionary and frequency information from WordNet, and show the relative position of the system among the 14 participants. The metrics are defined in (Mihalcea et al., 2010). In the oot subtask, the original systems were able to output the mode translation approximately 80% of the times. Of those translations, nearly 50% were actually considered best options according to human annotators. It is worth noticing that we focused on the best subtask. Therefore, for the oot subtask we did not exploit the fact that translations could be repeated to form the set of 10 best translations. For certain source words, our resulting set of translations is smaller than 10. For example, in the WLVusp system, whenever the set of alternative translations identified in Moses' top 1000-best list did not contain 10 legitimate translations, that is, 10 translations also found in the handcrafted dictionary, we simply copied other translations from that dictionary to amount to 10 different translations. If they did not sum to 10 because the list of translations in the dictionary was too short, we left the set as it was. As a result, 58% of the 1,000 test cases had fewer than 10 translations, many of them with as few as two or three translations. In fact, the list of oot results for the complete test set contained only 1,950 translations, when there could be 10,000 (1,000 test case occurrences * 10 translations). In the next section we describe some additional experiments to take this issue into account.",
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 395, |
|
"text": "(Mihalcea et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Official results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "After receiving the gold-standard data, we computed the scores for a number of variations of our two systems. For example, we checked whether the performance of USPwlv is too dependent on the handcrafted dictionary, via the features B and T. Table 3 presents the performance of two variations of USPwlv: MI-aMI-mMI was trained without the two contextual flag features which depend on WLVusp. MI-B-T was trained without the mutual information contextual features. The variation MI-aMI-mMI of USPwlv performs well even in the absence of the features coming from WLVusp, although the scores are lower. These results show the effectiveness of the learning scheme, since USPwlv achieves better performance by combining these feature variations, as compared to their individual performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 249, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To provide an intuition on the contribution of the two different components in the WLVusp system, we checked the proportion of times a translation was provided by each component. In the best subtask, 48% of the translations came from Moses, while for the remaining 52% the translations provided by Moses were not found in the dictionary; in those cases, the first translation in the dictionary was used. In the oot subtask, only 12% (246) of the translations came from Moses, while the remaining (1,704) came from the dictionary. This can be explained by the little variation in the n-best lists produced by Moses: most of the variations account for word order, punctuation, etc.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Finally, we performed additional experiments in order to exploit the possibility of replicating well-ranked translations for the oot subtask. Table 4 presents the results of some strategies arbitrarily chosen for such replications. For example, in the columns labelled \"5\" we show the scores for repeating (once) the 5 top translations. Notice that precision and recall increase as we take fewer top translations and repeat them more times. In terms of mode metrics, by reducing the number of distinct translations from 10 to 5, USPwlv still outperforms (marginally) the baseline. In general, the new systems outperform the baseline and our previous results (see Table 1 and 2) in terms of precision and recall. However, according to the other mode metrics, they are below our official systems. Table 4 : Comparison between different strategies for duplicating answers in the oot task. The systems output a number of distinct guesses and, through arbitrary schemes, replicate them in order to complete a list of 10 translations.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 149, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 661, |
|
"end": 668, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 800, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "6 Discussion and future work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We have presented two systems combining contextual information and a pre-defined set of translations for cross-lingual lexical substitution. Both systems performed particularly well in the best subtask. A handcrafted dictionary has shown to be essential for the WLVusp system and also helpful for the USPwlv system, which uses an additional dictionary automatically build from a parallel corpus. We plan to investigate how such systems can be improved by enhancing the corpus-based resources to further minimize the dependency on the handcrafted dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/wmt09/ translation-task.html 2 http://www.wordreference.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given s in the context of c and (a, b) a pair of candidate translations of s, the reference produces 1 if rank h (a) > rank h (b) and \u22121 if rank h (b) > rank h (a).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Improving statistical machine translation using word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marine Carpuat and Dekai Wu. 2007. Improving sta- tistical machine translation using word sense disam- biguation. In Joint Conference on Empirical Meth- ods in Natural Language Processing and Computa- tional Natural Language Learning, pages 61-72.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word sense disambiguation improves statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Seng Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2007. Word sense disambiguation improves statistical machine transla- tion. In 45th Annual Meeting of the Association for Computational Linguistics, pages 33-40.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Shai Shalev-Shwartz, and Yoram Singer", |
|
"authors": [ |
|
{

"first": "Koby",

"middle": [],

"last": "Crammer",

"suffix": ""

},

{

"first": "Ofer",

"middle": [],

"last": "Dekel",

"suffix": ""

},

{

"first": "Joseph",

"middle": [],

"last": "Keshet",

"suffix": ""

},

{

"first": "Shai",

"middle": [],

"last": "Shalev-Shwartz",

"suffix": ""

},

{

"first": "Yoram",

"middle": [],

"last": "Singer",

"suffix": ""

}
|
], |
|
"year": 2006, |
|
"venue": "Jornal of Machine Learning Research", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-agressive algorithms. Jornal of Machine Learning Research, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A comparative study on various confidence measures in large vocabulary speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ren-Hua", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "International Symposium on Chinese Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gang Guo, Chao Huang, Hui Jiang, and Ren-Hua Wang. 2004. A comparative study on various con- fidence measures in large vocabulary speech recog- nition. In International Symposium on Chinese Spo- ken Language Processing, pages 9-12.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "MT Summit", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semeval-2010 task 2: Cross-lingual lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ravi", |
|
"middle": [], |
|
"last": "Sinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "SemEval-2010: 5th International Workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010. Semeval-2010 task 2: Cross-lingual lexical substitution. In SemEval-2010: 5th International Workshop on Semantic Evaluations.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Word-and sentencelevel confidence measures for machine translation", |
|
"authors": [ |
|
{

"first": "Sylvain",

"middle": [],

"last": "Raybaud",

"suffix": ""

},

{

"first": "Caroline",

"middle": [],

"last": "Lavecchia",

"suffix": ""

},

{

"first": "David",

"middle": [],

"last": "Langlois",

"suffix": ""

},

{

"first": "Kamel",

"middle": [],

"last": "Smaili",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "13th Annual Conference of the European Association for Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sylvain Raybaud, Caroline Lavecchia, David Langlois, and Kamel Smaili. 2009. Word-and sentence- level confidence measures for machine translation. In 13th Annual Conference of the European Associ- ation for Machine Translation, pages 104-111.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Probabilistic part-of-speech tagging using decision trees", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "International Conference on New Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "44--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid. 2006. Probabilistic part-of-speech tagging using decision trees. In International Con- ference on New Methods in Natural Language Pro- cessing, pages 44-49.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning expressive models for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
}, |
|
{

"first": "Maria",

"middle": [
"das",
"Gra\u00e7as",
"Volpe"
],

"last": "Nunes",

"suffix": ""

}
|
], |
|
"year": 2007, |
|
"venue": "45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Mark Stevenson, and Maria das Gra\u00e7as Volpe Nunes. 2007. Learning expressive models for word sense disambiguation. In 45th Annual Meet- ing of the Association for Computational Linguis- tics, pages 41-148.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Official results for WLVusp on the test set, compared to the highest baseline, DICT. P = precision, R = recall. The last column shows the relative position of the system.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Subtask</td><td>Metric</td><td>Baseline</td><td>USPwlv</td><td>Position</td></tr><tr><td/><td>R</td><td>24.34</td><td>26.81</td><td>2 nd</td></tr><tr><td>Best</td><td>P Mode R</td><td>24.34 50.34</td><td>26.81 58.85</td><td>3 rd 1 st</td></tr><tr><td/><td>Mode P</td><td>50.34</td><td>58.85</td><td>2 nd</td></tr><tr><td/><td>R</td><td>44.04</td><td>47.60</td><td>8 th</td></tr><tr><td>OOT</td><td>P Mode R</td><td>44.04 73.53</td><td>47.60 79.84</td><td>8 th 3 rd</td></tr><tr><td/><td>Mode P</td><td>73.53</td><td>79.84</td><td>3 rd</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Official results for USPwlv on the test set, compared to the highest baseline, DICT. The last column shows the relative position of the system.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Comparing between variations of the system USPwlv on the test set and the highest baseline, DICT. The variations are different sources of contextual knowledge: MI (MI-aMI-mMI) and the WLVusp (MI-B-T) system.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |