{
"paper_id": "D14-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:56:09.068751Z"
},
"title": "Improve Statistical Machine Translation with Context-Sensitive Bilingual Semantic Embedding Model",
"authors": [
{
"first": "Haiyang",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xiaoguang",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"addrLine": "No. 10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate how to improve bilingual embedding which has been successfully used as a feature in phrase-based statistical machine translation (SMT). Despite bilingual embedding's success, the contextual information, which is of critical importance to translation quality, was ignored in previous work. To employ the contextual information, we propose a simple and memory-efficient model for learning bilingual embedding, taking both the source phrase and context around the phrase into account. Bilingual translation scores generated from our proposed bilingual embedding model are used as features in our SMT system. Experimental results show that the proposed method achieves significant improvements on large-scale Chinese-English translation task.",
"pdf_parse": {
"paper_id": "D14-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate how to improve bilingual embedding which has been successfully used as a feature in phrase-based statistical machine translation (SMT). Despite bilingual embedding's success, the contextual information, which is of critical importance to translation quality, was ignored in previous work. To employ the contextual information, we propose a simple and memory-efficient model for learning bilingual embedding, taking both the source phrase and context around the phrase into account. Bilingual translation scores generated from our proposed bilingual embedding model are used as features in our SMT system. Experimental results show that the proposed method achieves significant improvements on large-scale Chinese-English translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In Statistical Machine Translation (SMT) system, it is difficult to determine the translation of some phrases that have ambiguous meanings.For example, the phrase \"\u7ed3\u679c jieguo\" can be translated to either \"results\", \"eventually\" or \"fruit\", depending on the context around it. There are two reasons for the problem: First, the length of phrase pairs is restricted due to the limitation of model size and training data. Another reason is that SMT systems often fail to use contextual information in source sentence, therefore, phrase sense disambiguation highly depends on the language model which is trained only on target corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve this problem, we present to learn context-sensitive bilingual semantic embedding. Our methodology is to train a supervised model where labels are automatically generated from phrase-pairs. For each source phrase, the aligned target phrase is marked as the positive label whereas other phrases in our phrase table are treated as negative labels. Different from previous work in bilingual embedding learning (Zou et al., 2013; Gao et al., 2014) , our framework is a supervised model that utilizes contextual information in source sentence as features and make use of phrase pairs as weak labels. Bilingual semantic embeddings are trained automatically from our supervised learning task.",
"cite_spans": [
{
"start": 415,
"end": 433,
"text": "(Zou et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 434,
"end": 451,
"text": "Gao et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our learned bilingual semantic embedding model is used to measure the similarity of phrase pairs which is treated as a feature in decoding. We integrate our learned model into a phrase-based translation system and experimental results indicate that our system significantly outperform the baseline system. On the NIST08 Chinese-English translation task, we obtained 0.68 BLEU improvement. We also test our proposed method on much larger web dataset and obtain 0.49 BLEU improvement against the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using vectors to represent word meanings is the essence of vector space models (VSM). The representations capture words' semantic and syntactic information which can be used to measure semantic similarities by computing distance between the vectors. Although most VSMs represent one word with only one vector, they fail to capture homonymy and polysemy of word. Huang et al. (2012) introduced global document context and multiple word prototypes which distinguishes and uses both local and global context via a joint training objective. Much of the research focus on the task of inducing representations for single languages. Recently, a lot of progress has been made at representation learning for bilingual words. Bilingual word representations have been presented by Peirsman and Pad\u00f3 (2010) and Sumita (2000) . Also unsupervised algorithms such as LDA and LSA were used by Boyd-Graber and Resnik (2010) , Tam et al. (2007) and Zhao and Xing (2006) . Zou et al. (2013) learn bilingual embeddings utilizes word alignments and monolingual embeddings result, Le et al. (2012) and Gao et al. (2014) used continuous vector to represent the source language or target language of each phrase, and then computed translation probability using vector distance. Vuli\u0107 and Moens (2013) learned bilingual vector spaces from non-parallel data induced by using a seed lexicon. However, none of these work considered the word sense disambiguation problem which Carpuat and Wu (2007) proved it is useful for SMT. In this paper, we learn bilingual semantic embeddings for source content and target phrase, and incorporate it into a phrasebased SMT system to improve translation quality.",
"cite_spans": [
{
"start": 362,
"end": 381,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF3"
},
{
"start": 770,
"end": 794,
"text": "Peirsman and Pad\u00f3 (2010)",
"ref_id": "BIBREF8"
},
{
"start": 799,
"end": 812,
"text": "Sumita (2000)",
"ref_id": "BIBREF9"
},
{
"start": 877,
"end": 906,
"text": "Boyd-Graber and Resnik (2010)",
"ref_id": "BIBREF0"
},
{
"start": 909,
"end": 926,
"text": "Tam et al. (2007)",
"ref_id": "BIBREF10"
},
{
"start": 931,
"end": 951,
"text": "Zhao and Xing (2006)",
"ref_id": "BIBREF12"
},
{
"start": 954,
"end": 971,
"text": "Zou et al. (2013)",
"ref_id": "BIBREF13"
},
{
"start": 1059,
"end": 1075,
"text": "Le et al. (2012)",
"ref_id": "BIBREF4"
},
{
"start": 1080,
"end": 1097,
"text": "Gao et al. (2014)",
"ref_id": "BIBREF2"
},
{
"start": 1254,
"end": 1276,
"text": "Vuli\u0107 and Moens (2013)",
"ref_id": "BIBREF11"
},
{
"start": 1448,
"end": 1469,
"text": "Carpuat and Wu (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose a simple and memory-efficient model which embeds both contextual information of source phrases and aligned phrases in target corpus into low dimension. Our assumption is that high frequent words are likely to have multiple word senses; therefore, top frequent words are selected in source corpus. We denote our selected words as focused phrase. Our goal is to learn a bilingual embedding model that can capture discriminative contextual information for each focused phrase. To learn an effective context sensitive bilingual embedding, we extract context features nearby a focused phrase that will discriminate focused phrase's target translation from other possible candidates. Our task can be viewed as a classification problem that each target phrase is treated as a class. Since target phrases are usually in very high dimensional space, traditional linear classification model is not suitable for our problem. Therefore, we treat our problem as a ranking problem that can handle large number of classes and optimize the objectives with scalable optimizer stochastic gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Sensitive Bilingual Semantic Embedding Model",
"sec_num": "3"
},
{
"text": "We apply a linear embedding model for bilingual embedding learning. Cosine similarity be-tween bilingual embedding representation is considered as score function. The score function should be discriminative between target phrases and other candidate phrases. Our score function is in the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Word Embedding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (x, y; W, U) = cos(W T x, U T y)",
"eq_num": "(1)"
}
],
"section": "Bilingual Word Embedding",
"sec_num": "3.1"
},
{
"text": "where x is contextual feature vector in source sentence, and y is the representation of target phrase, W \u2208 R |X|\u00d7k , U \u2208 R |Y|\u00d7k are low rank matrix. In our model, we allow y to be bag-of-words representation. Our embedding model is memoryefficient in that dimensionality of x and y can be very large in practical setting. We use |X| and |Y| means dimensionality of random variable x and y, then traditional linear model such as max-entropy model requires memory space of O(|X||Y|). Our embedding model only requires O(k(|X| + |Y|)) memory space that can handle large scale vocabulary setting. To score a focused phrase and target phrase pair with f (x, y), context features are extracted from nearby window of the focused phrase. Target words are selected from phrase pairs. Given a source sentence, embedding of a focused phrase is estimated from W T x and target phrase embedding can be obtained through U T y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Word Embedding",
"sec_num": "3.1"
},
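{
"text": "As a minimal illustration of the scoring function above (our own Python/NumPy sketch, not the authors' code; the dimensionalities |X| = 1000, |Y| = 500, k = 50 and the name bilingual_score are illustrative assumptions):\n\nimport numpy as np\n\ndef bilingual_score(x, y, W, U):\n    # project the source context features and the target phrase vector into the shared k-dim space\n    src = W.T @ x\n    tgt = U.T @ y\n    # cosine similarity between the two projections, as in Equation (1)\n    return float(src @ tgt / (np.linalg.norm(src) * np.linalg.norm(tgt)))\n\nrng = np.random.default_rng(0)\nW = rng.normal(scale=0.1, size=(1000, 50))   # |X| = 1000 context features, k = 50\nU = rng.normal(scale=0.1, size=(500, 50))    # |Y| = 500 target-side words, k = 50\nx = rng.random(1000)                         # bag of context features for the source side\ny = rng.random(500)                          # bag-of-words vector for the target phrase\nprint(bilingual_score(x, y, W, U))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Word Embedding",
"sec_num": "3.1"
},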
{
"text": "Context of a focused phrase is extracted from nearby window, and in our experiment we choose window size of 6 as a focused phrase's context. Features are then extracted from the focused phrase's context. We demonstrate our feature extraction and label generation process from the Chinese-to-English example in figure 1. Window size in this example is three. Position features and Part-Of-Speech Tagging features are extracted from the focused phrase's context. The word fruit Figure 1 : Feature extraction and label generation is the aligned phrase of our focused phrase and is treated as positive label. The phrase results is a randomly selected phrase from phrase table results of \u7ed3\u679c. Note that feature window is not well defined near the beginning or the end of a sentence. To conquer this problem, we add special padding word to the beginning and the end of a sentence to augment sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 484,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context Sensitive Features",
"sec_num": "3.2"
},
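{
"text": "The window-based extraction described above can be sketched as follows (illustrative only; the paper specifies position and POS features, but the exact templates and the string format here are our own assumptions):\n\ndef context_features(tokens, pos_tags, focus_idx, window=3, pad='<PAD>'):\n    # pad both ends so the feature window is always well defined near sentence boundaries\n    toks = [pad] * window + list(tokens) + [pad] * window\n    tags = [pad] * window + list(pos_tags) + [pad] * window\n    center = focus_idx + window\n    feats = []\n    for offset in range(-window, window + 1):\n        if offset == 0:\n            continue\n        # word-position features and POS features relative to the focused phrase\n        feats.append('w[%+d]=%s' % (offset, toks[center + offset]))\n        feats.append('pos[%+d]=%s' % (offset, tags[center + offset]))\n    return feats\n\nprint(context_features(['investors', 'can', 'do', 'business'], ['NNS', 'MD', 'VB', 'NN'], 1, window=3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Sensitive Features",
"sec_num": "3.2"
},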
{
"text": "To learn model parameter W and U, we apply a ranking scheme on candidates selected from phrase table results of each focused phrase. In particular, given a focus phrase w, aligned phrase is treated as positive label whereas phrases extracted from other candidates in phrase table are treated as negative label. A max-margin loss is applied in this ranking setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "I(\u0398) = 1 m m i=1 (\u03b4 \u2212 f (x i , y i ; \u0398) \u2212 f (x i , y i ; \u0398))+",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "(2) Where f (x i , y i ) is previously defined, \u0398 = {W, U } and + means max-margin hinge loss. In our implementation, a margin of \u03b4 = 0.15 is used during training. Objectives are minimized through stochastic gradient descent algorithm. For each randomly selected training example, parameters are updated through the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0398 := \u0398 \u2212 \u03b1 \u2202l(\u0398) \u2202\u0398",
"eq_num": "(3)"
}
],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "where \u0398 = {W, U}. Given an instance with positive and negative label pair {x, y, y }, gradients of parameter W and U are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202l(W, U) \u2202W = qsx(W T x) T \u2212 pqs 3 x(U T y) (4) \u2202l(W, U) \u2202U = qsy(U T y) T \u2212 pqs 3 y(W T x)",
"eq_num": "(5)"
}
],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "Where we set p = (W T x) T (U T y), q =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "1 ||W T x|| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
{
"text": "and s = 1 ||U T y|| 2 . To initialize our model parameters with strong semantic and syntactic information, word vectors are pre-trained independently on source and target corpus through word2vec (Mikolov et al., 2013) . And the pre-trained word vectors are treated as initial parameters of our model. The learned scoring function f (x, y) will be used during decoding phase as a feature in loglinear model which we will describe in detail later.",
"cite_spans": [
{
"start": 195,
"end": 217,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},
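{
"text": "A rough sketch of one stochastic gradient step on the max-margin ranking objective (our own derivation of the cosine gradient from the definitions of p, q and s above, not necessarily the exact form of Equations (4)-(5); the learning rate and function names are assumptions):\n\nimport numpy as np\n\ndef cosine_and_grads(x, y, W, U):\n    # score f = cos(W^T x, U^T y) and its gradients with respect to W and U\n    a, b = W.T @ x, U.T @ y\n    p, q, s = a @ b, 1.0 / np.linalg.norm(a), 1.0 / np.linalg.norm(b)\n    f = p * q * s\n    dW = np.outer(x, q * s * b - p * q ** 3 * s * a)\n    dU = np.outer(y, q * s * a - p * q * s ** 3 * b)\n    return f, dW, dU\n\ndef sgd_step(x, y_pos, y_neg, W, U, delta=0.15, lr=0.01):\n    # hinge loss max(0, delta - f(x, y_pos) + f(x, y_neg)); update only when the margin is violated\n    f_pos, dW_pos, dU_pos = cosine_and_grads(x, y_pos, W, U)\n    f_neg, dW_neg, dU_neg = cosine_and_grads(x, y_neg, W, U)\n    if delta - f_pos + f_neg > 0:\n        W -= lr * (dW_neg - dW_pos)\n        U -= lr * (dU_neg - dU_pos)\n    return W, U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "3.3"
},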
{
"text": "To incorporate the context-sensitive bilingual embedding model into the state-of-the-art Phrase-Based Translation model, we modify the decoding so that context information is available on every source phrase. For every phrase in a source sentence, the following tasks are done at every node in our decoder:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
{
"text": "\u2022 Get the focused phrase as well as its context in the source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
{
"text": "\u2022 Extract features from the focused phrase's context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
{
"text": "\u2022 Get translation candidate extracted from phrase pairs of the focused phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
{
"text": "\u2022 Compute scores for any pair of the focused phrase and a candidate phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
{
"text": "We get the target sub-phrase using word alignment of phrase, and we treat NULL as a common target word if there is no alignment for the focused phrase. Finally we compute the matching score for source content and target word using bilingual semantic embedding model. If there are more than one word in the focus phrase, then we add all score together. A penalty value will be given if target is not in translation candidate list. For each phrase in a given SMT input sentence, the Bilingual Semantic score can be used as an additional feature in log-linear translation model, in combination with other typical context-independent SMT bilexicon probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},
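{
"text": "A hypothetical sketch of how this per-phrase feature could be assembled during decoding (the function names, the penalty value and the score_fn callback are our own assumptions, not the authors' implementation):\n\ndef bilingual_feature(context_feats, target_words, candidates, score_fn, penalty=-1.0):\n    # sum embedding scores over the aligned target words of the focused phrase;\n    # an unaligned focused phrase falls back to the special token 'NULL'\n    words = target_words if target_words else ['NULL']\n    total = 0.0\n    for w in words:\n        if w in candidates:           # candidate list comes from the phrase table\n            total += score_fn(context_feats, w)\n        else:\n            total += penalty          # penalize targets outside the candidate list\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Bilingual Semantic Embedding into Phrase-Based SMT Architectures",
"sec_num": "4"
},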
{
"text": "Our experiments are performed using an inhouse phrase-based system with a log-linear framework. Our system includes a phrase translation model, an n-gram language model, a lexicalized reordering model, a word penalty model and a phrase penalty model, which is similar to Moses (Koehn et al., 2007) . The evaluation metric is BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 277,
"end": 297,
"text": "(Koehn et al., 2007)",
"ref_id": null
},
{
"start": 325,
"end": 353,
"text": "BLEU (Papineni et al., 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "We test our approach on LDC corpus first. We just use a subset of the data available for NIST OpenMT08 task 1 . The parallel training corpus Table 1 : Results of lowercase BLEU on NIST08 task. LOC is the location feature and POS is the Part-of-Speech feature * or ** equals to significantly better than our baseline(\u03c1 < 0.05 or \u03c1 < 0.01, respectively) contains 1.5M sentence pairs after we filter with some simple heuristic rules, such as sentence being too long or containing messy codes. As monolingual corpus, we use the XinHua portion of the English GigaWord. In monolingual corpus we filter sentence if it contain more than 100 words or contain messy codes, Finally, we get monolingual corpus containing 369M words. In order to test our approach on a more realistic scenario, we train our models with web data. Sentence pairs obtained from bilingual website and comparable webpage. Monolingual corpus is gained from some large website such as WiKi. There are 50M sentence pairs and 10B words monolingual corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data set",
"sec_num": "5.1"
},
{
"text": "For word alignment, we align all of the training data with GIZA++ (Och and Ney, 2003) , using the grow-diag-final heuristic to improve recall. For language model, we train a 5-gram modified Kneser-Ney language model and use Minimum Error Rate Training (Och, 2003) to tune the SMT. For both OpenMT08 task and WebData task, we use NIST06 as the tuning set, and use NIST08 as the testing set. Our baseline system is a standard phrase-based SMT system, and a language model is trained with the target side of bilingual corpus. Results on Chinese-English translation task are reported in Table 1 . Word position features and partof-speech tagging features are both useful for our bilingual semantic embedding learning. Based on our trained bilingual embedding model, we can easily compute a translation score between any bilingual phrase pair. We list some cases in table 2 to show that our bilingual embedding is context sensitive.",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF6"
},
{
"start": 252,
"end": 263,
"text": "(Och, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 583,
"end": 590,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "Contextual features extracted from source sentence are strong enough to discriminate different Source Sentence 4 Nearest Neighbor from bilingual embedding \u53ea\u6709\u7a33\u5b9a\u7684\u793e\u4f1a\u73af\u5883\uff0c \u6295 \u8d44 \u8005 \u624d \u624d \u624d \u80fd \u80fd \u80fd \u8e0f \u8e0f \u5b9e \u5b9e \u5730 \u505a \u751f \u610f \u3002(Investors can only get down to business in a stable social environment) will be, can only, will, can \u5728\u6bd4\u8d5b\u4e0e\u4ea4\u5f80\u4e2d\uff0c\u4e2d\u56fd \u6b8b \u75be \u4eba \u663e \u793a \u4e86 \u975e \u51e1 \u7684 \u4f53\u80b2\u624d \u624d \u624d\u80fd \u80fd \u80fd\u3002(In competitions, the Chinese Disabled have shown extraordinary athletic abilities) skills, ability, abilities, talent \u5728\u54e5\u56fd\u7684\u81ea\u7136\u73af\u5883\u4e0b\uff0c \u8461 \u8404 \u662f \u65e0 \u6cd5 \u6b63 \u5e38 \u5f00 \u82b1 \u7ed3 \u7ed3 \u7ed3 \u679c \u679c \u679c \u7684 \u3002(In the natural environment of Costa Rica, grapes do not normally yield fruit.) fruit, outcome of, the outcome, result \u7ed3 \u7ed3 \u7ed3 \u679c \u679c \u679c \uff0c \u4e1c \u533a \u533a \u8bae \u4f1a \u901a \u8fc7 \u4e00 \u9879 \u8bae \u6848 \u3002(As a result, Eastern District Council passed a proposal) in the end, eventually, as a result, results word senses. And we also observe from the word \"\u7ed3\u679c jieguo\" that Part-Of-Speech Tagging features are effective in discriminating target phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.2"
},
{
"text": "In this paper, we proposed a context-sensitive bilingual semantic embedding model to improve statistical machine translation. Contextual information is used in our model for bilingual word sense disambiguation. We integrated the bilingual semantic model into the phrase-based SMT system. Experimental results show that our method achieves significant improvements over the baseline on large scale Chinese-English translation task. Our model is memory-efficient and practical for industrial usage that training can be done on large scale data set with large number of classes. Prediction time is also negligible with regard to SMT decoding phase. In the future, we will explore more features to refine the model and try to utilize contextual information in target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conlusion",
"sec_num": "6"
},
{
"text": "LDC2002E18, LDC2002L27, LDC2002T01, LDC2003E07, LDC2003E14, LDC2004T07, LDC2005E83, LDC2005T06, LDC2005T10, LDC2005T34, LDC2006E24, LDC2006E26, LDC2006E34, LDC2006E86, LDC2006E92, LDC2006E93, LDC2004T08(HK News, HK Hansards )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their valuable comments, and Niu Gang and Wu Xianchao for discussions. This paper is supported by 973 program No. 2014CB340505.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Holistic sentiment analysis across languages: Multilingual supervised latent dirichlet allocation",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "45--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber and Philip Resnik. 2010. Holis- tic sentiment analysis across languages: Multilin- gual supervised latent dirichlet allocation. In Pro- ceedings of the 2010 Conference on Empirical Meth- ods in Natural Language Processing, pages 45-55, Cambridge, MA, October. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving statistical machine translation using word sense disambiguation",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat and Dekai Wu. 2007. Improving sta- tistical machine translation using word sense disam- biguation. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 61-72, Prague, Czech Republic, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning continuous phrase representations for translation modeling",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase rep- resentations for translation modeling. In Proc. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word represen- tations via global context and multiple word proto- types. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 873-882, Jeju Island, Korea, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Continuous space translation models with neural networks",
"authors": [
{
"first": "Hai-Son",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai-Son Le, Alexandre Allauzen, and Fran\u00e7ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 39-48, Montr\u00e9al, Canada, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their compo- sitionality. In NIPS, pages 3111-3119.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. In Computational Linguistics, Volume 29, Number 1, March 2003. Computational Linguistics, March.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate train- ing in statistical machine translation. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sap- poro, Japan, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Crosslingual induction of selectional preferences with bilingual vector spaces",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "921--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Peirsman and Sebastian Pad\u00f3. 2010. Cross- lingual induction of selectional preferences with bilingual vector spaces. In Human Language Tech- nologies: The 2010 Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics, pages 921-929, Los Ange- les, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lexical transfer using a vectorspace model",
"authors": [
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eiichiro Sumita. 2000. Lexical transfer using a vector- space model. In Proceedings of the 38th Annual Meeting of the Association for Computational Lin- guistics. Association for Computational Linguistics, August.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bilingual-lsa based lm adaptation for spoken language translation",
"authors": [
{
"first": "Yik-Cheung",
"middle": [],
"last": "Tam",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "520--527",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yik-Cheung Tam, Ian Lane, and Tanja Schultz. 2007. Bilingual-lsa based lm adaptation for spoken lan- guage translation. In Proceedings of the 45th An- nual Meeting of the Association of Computational Linguistics, pages 520-527, Prague, Czech Repub- lic, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Crosslingual semantic similarity of words as the similarity of their semantic word responses",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "106--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013. Cross- lingual semantic similarity of words as the similarity of their semantic word responses. In Proceedings of the 2013 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 106-116, At- lanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bitam: Bilingual topic admixture models for word alignment",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "969--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao and Eric P. Xing. 2006. Bitam: Bilingual topic admixture models for word alignment. In Pro- ceedings of the COLING/ACL 2006 Main Confer- ence Poster Sessions, pages 969-976, Sydney, Aus- tralia, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christo- pher D. Manning. 2013. Bilingual word embed- dings for phrase-based machine translation. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1393- 1398, Seattle, Washington, USA, October. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"text": "Top ranked focused phrases based on bilingual semantic embedding",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}