|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:51:32.741602Z" |
|
}, |
|
"title": "On Machine Translation of User Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{

"first": "Alberto",

"middle": [],

"last": "Poncelas",

"suffix": "",

"affiliation": {

"institution": "Rakuten Asia"

},

"email": "[email protected]"

},
|
{ |
|
"first": "Marija", |
|
"middle": [ |
|
"Brki\u0107" |
|
], |
|
"last": "Bakari\u0107", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This work investigates neural machine translation (NMT) systems for translating English user reviews into Croatian and Serbian, two similar morphologically complex languages. Two types of reviews are used for testing the systems: IMDb movie reviews and Amazon product reviews. Two types of training data are explored: large out-of-domain bilingual parallel corpora, as well as small synthetic in-domain parallel corpus obtained by machine translation of monolingual English Amazon reviews into the target languages. Both automatic scores and human evaluation show that using the synthetic in-domain corpus together with a selected subset of out-of-domain data is the best option. Separated results on IMDb and Amazon reviews indicate that MT systems perform differently on different review types so that user reviews generally should not be considered as a homogeneous genre. Nevertheless, more detailed research on larger amount of different reviews covering different domains/topics is needed to fully understand these differences.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This work investigates neural machine translation (NMT) systems for translating English user reviews into Croatian and Serbian, two similar morphologically complex languages. Two types of reviews are used for testing the systems: IMDb movie reviews and Amazon product reviews. Two types of training data are explored: large out-of-domain bilingual parallel corpora, as well as small synthetic in-domain parallel corpus obtained by machine translation of monolingual English Amazon reviews into the target languages. Both automatic scores and human evaluation show that using the synthetic in-domain corpus together with a selected subset of out-of-domain data is the best option. Separated results on IMDb and Amazon reviews indicate that MT systems perform differently on different review types so that user reviews generally should not be considered as a homogeneous genre. Nevertheless, more detailed research on larger amount of different reviews covering different domains/topics is needed to fully understand these differences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Machine translation (MT) has evolved very rapidly since the emergence of neural approaches in 2015, and it is being used for different genres and domains. Every year, evaluation campaigns which include both human and automatic evaluation are carried out with the goal of advancing the state of the art. The most well-known is the WMT shared task 1 which focuses on news articles and (since 2016) on biomedical texts, and both can be considered as instances of \"formal written text\". The IWSLT evaluation campaign 2 , on the other hand, focuses on the translation of TED talks, and some European projects (TraMOOC, transLectures) investigated the translation of online lectures. In both cases, the text can be considered to be \"formal speech\", with the challenges of dealing with characteristics of spoken language and speech recognition output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 628, |
|
"text": "(TraMOOC, transLectures)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, interest in the translation of usergenerated content in the form of \"informal written text\" has been increasing. For example, JSALT 2019 workshop 3 focused on translation of very noisy text content originating from sources like WhatsApp, Twitter and Reddit.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we focus on a different type of written user-generated content, namely user reviews. While the style is not as colloquial and noisy as that of Twitter or of other similar sources, it certainly is much less formal than news texts or other sources that have been investigated traditionally in the MT community. There are also important applications for focusing on this kind of data, both from commercial and from user perspective. More and more companies are expanding into multinational markets, and user reviews of products have become an important asset for online transactions and a feature that many customers expect to find. And in the era of always-available internet connectivity, many individuals rely on experiences of other people not only for guiding purchasing decisions, but also for entertainment options like choosing movies, books, restaurants, etc. In this work, we focus on both kinds of user reviews, namely product reviews from Amazon and movie reviews from IMDb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Translating user reviews can increase and improve its reach and utility. The main issue for human translation is the fact that there is way too much content to be translated. Therefore, MT is very helpful for this kind of content. However, the genre introduces several important challenges, such as informal language, spelling errors, a large number of domains/topics, and lack of in-domain parallel (bilingual) data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we compare two approaches for building MT systems for translating user reviews: training on large parallel out-of-domain data and training on small synthetic in-domain data. We also compare MT performance on two types of user reviews: IMDb movies and Amazon products.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We investigate Croatian and Serbian as target languages, as a case involving mid-size less-resourced morphologically rich European languages. For these languages, a reasonable amount of out-ofdomain parallel data is publicly available to train an NMT system, however still much lower than for \"major\" European languages (such as German, French, Spanish).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "All our experiments were carried out on publicly available data sets. We used OPUS 4 parallel data for out-of-domain training and a selected set of Amazon reviews 5 for in-domain training. For development, we used the publicly available texts 6 consisting of a selected set of English IMDb reviews 7 and their Croatian and Serbian human translations. For testing, we used another selected set of IMDb reviews as well as a selected set of Amazon reviews. Neither of the test reviews has been investigated yet, and they will also be made publicly available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A considerable amount of work in the Computational Linguistics/Natural Language Processing community has been done on processing usergenerated content, mostly on sentiment analysis, but also on different aspects of machine translation (MT). Some papers investigate translating social media texts in order to map widely available English sentiment labels to a less supported target language Turchi, 2012, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 409, |
|
"text": "Turchi, 2012, 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Several researchers attempted to build parallel corpora for user-generated content in different language pairs in order to facilitate MT (Jehl et al., 2012; Ling et al., 2013; San Vicente et al., 2016) , while (Banerjee et al., 2012) explored methods for domain adaptation. A recent JSALT Workshop 8 dealt with improving MT for messages (Messenger, WhatsApp), social media (Facebook, Instagram, Twitter) , and discussion forums (Reddit). Evaluating MT outputs of user-generated content was the topic of several publications, too. Two important measures of overall quality, comprehensibility and fidelity, were investigated in (Roturier and Bensadoun, 2011) in order to compare different English-to-German and English-to-French MT systems for technical support forums, and automatic estimation of these two measures for Englishto-French MT was investigated in (Rubino et al., 2013) . Maintaining sentiment polarity in Germanto-English MT of Twitter posts was explored in (Lohar et al., 2017 (Lohar et al., , 2018 . However, none of these publications explored translation of user reviews.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 156, |
|
"text": "(Jehl et al., 2012;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 175, |
|
"text": "Ling et al., 2013;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 201, |
|
"text": "San Vicente et al., 2016)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 233, |
|
"text": "(Banerjee et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 403, |
|
"text": "(Facebook, Instagram, Twitter)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 656, |
|
"text": "(Roturier and Bensadoun, 2011)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 880, |
|
"text": "(Rubino et al., 2013)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 970, |
|
"end": 989, |
|
"text": "(Lohar et al., 2017", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 990, |
|
"end": 1011, |
|
"text": "(Lohar et al., , 2018", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "The first publication about MT for user reviews (Lohar et al., 2019) explored translating English IMDb reviews into Croatian and Serbian and reported results of both automatic and human evaluation. However, all the systems were trained on very small amounts of parallel data so that the reported performance was rather low. More experiments on the same IMDb reviews were carried out (Popovi\u0107 et al., 2020) , however, still only small amounts of training data were used. Also, no results of any kind of human evaluation were reported.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 68, |
|
"text": "(Lohar et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 405, |
|
"text": "(Popovi\u0107 et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "In this work, different sizes of the training corpora were explored, including a large corpus consisting of all publicly available parallel data for the two language pairs. Two types of reviews are explored, IMDb and Amazon, and both automatic scores as well as results of human evaluation are reported. In addition, differences between the two types of reviews are examined in order to see whether all user reviews can be considered as a homogeneous genre.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "All our systems are based on the Transformer architecture (Vaswani et al., 2017) and built using the Sockeye implementation (Hieber et al., 2018) . Previous work on the given two target languages (Popovi\u0107 et al., 2020) reported that multilingual sys-tem which translates into both languages performs better than two separated bilingual systems. Therefore, all our systems are multilingual, built using the same technique as (Johnson et al., 2017; Aharoni et al., 2019) , namely adding a target language label \"SR\" or \"HR\" to each source sentence. The amount of Croatian and Serbian data is balanced in all set-ups in order to achieve optimal performance for both target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 80, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 145, |
|
"text": "(Hieber et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 218, |
|
"text": "(Popovi\u0107 et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 446, |
|
"text": "(Johnson et al., 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 468, |
|
"text": "Aharoni et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The systems operate on sub-word units generated by byte-pair encoding (BPE) (Sennrich et al., 2016b) with 32000 BPE merge operations both for the source and for the target language texts. We do not use shared vocabularies between the source and the target languages because they are distinct. On the other hand, we built a joint vocabulary for the two target languages because they are very similar.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 100, |
|
"text": "(Sennrich et al., 2016b)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "All the systems have Transformer architecture with 6 layers for both the encoder and decoder, model size of 512, feed forward size of 2048, and 8 attention heads. For training, we use Adam optimiser (Kingma and Ba, 2015), initial learning rate of 0.0002, and batch size of 4096 (sub)words. Validation perplexity is calculated after every 4000 batches (at so-called \"checkpoints\"), and if this perplexity does not improve after 20 checkpoints, the training stops.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\"Teacher/student\" model As a first step, we built a system trained on all publicly available parallel data consisting of about 55 million sentences. These data, however, do not contain any user reviews. On the other hand, there is a vast amount of monolingual English user reviews, and in order to get use of it, we created a synthetic in-domain parallel corpus which is a widely used practice in NMT (Sennrich et al., 2016a; Zhang and Zong, 2016; Burlot and Yvon, 2018; Poncelas et al., 2018) . We selected a set of about four million sentences from Amazon reviews originating from 14 different topics, and translated them by the system trained on out-of-domain data. In this way, we applied so-called \"teacher/student\" model, or \"knowledge distillation\" (Saleh et al., 2020; Kim and Rush, 2016) . Knowledge distillation is the training of a smaller network (student) who learns from an already trained network (teacher). The idea is that the student will be performing much faster and hopefully approximately well as the teacher. The method is often used for reducing the amount of training data, to speed up the process, as well as for domain adaptation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 401, |
|
"end": 425, |
|
"text": "(Sennrich et al., 2016a;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 447, |
|
"text": "Zhang and Zong, 2016;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 470, |
|
"text": "Burlot and Yvon, 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 493, |
|
"text": "Poncelas et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 776, |
|
"text": "(Saleh et al., 2020;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 796, |
|
"text": "Kim and Rush, 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our set-up, knowledge distillation is used for domain adaptation: the teacher model is the system trained on a large amount of out-of-domain parallel data. This system is used to create a small synthetic in-domain corpus, which is then used to train the student model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\"Advanced student\" model The best option for using synthetic training corpora for NMT is not to use them alone, but to enrich \"natural\" parallel corpora. However, we do not have any natural indomain parallel corpora. Yet, some parts of the large out-of-domain corpora might be more useful for translating reviews than others, especially subtitles which are usually informal spoken language. To explore this potential, we ranked out-of-domain sentences according to their similarity to user reviews, and extracted the most similar ones to combine them with the synthetic parallel corpus and train an \"advanced student\" model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The details about all data sets and data selection are presented in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Data sets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building NMT systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "IMDb movie reviews 9 (Maas et al., 2011) consist of about 10 sentences and 230 words on average. Each review is labelled with a score: negative reviews have a score<4 out of 10, positive reviews have a score>7 out of 10, and the reviews with more neutral ratings are not included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User reviews", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In our experiments, IMDb reviews were used for development and testing, but not for training. Amazon product reviews 10 (McAuley et al., 2015) are generally shorter, consisting of 5 sentences and 93 words on average. Each review is labelled with a rating from 1 (worst) to 5 (best). The reviews are divided into 24 categories/topics/domains, and we used the reviews from the following 14 topics: \"Beauty\", \"Books\", \"CDs and Vinyl\", \"Cell Phones and Accessories\", \"Grocery and Gourmet Food\", \"Health and Personal Care\", \"Home and Kitchen\", \"Movies and TV\", \"Musical Instruments\", \"Patio, Lawn and Garden\", \"Pet Supplies\", \"Sports and Outdoors\", \"Toys and Games\", and \"Video Games\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 142, |
|
"text": "10 (McAuley et al., 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User reviews", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For our systems, Amazon reviews were used both for training as well as for testing, however not for development. In order to obtain a balanced multi-target training corpus, half of the selected reviews from each of the topics were translated into Serbian and another half into Croatian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User reviews", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We used the publicly available OPUS 11 parallel data (Tiedemann, 2012) as out-of-domain data. The vast majority of these resources for the desired language pairs consists of OpenSubtitles, and there are also SETIMES News, Bible, Tilde, EU-bookshop, QED, and Tatoeba corpora. In addition, we used GlobalVoices for Serbian, and hrenWac, TED and Wikimedia for Croatian. In total, the corpus is well balanced over the two target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 70, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Out-of-domain data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As mentioned in Section 2, we extracted a set of sentences from the out-of-domain subtitles according to their similarity to Amazon reviews. The subtitles were ranked using the Feature Decay Algorithm (FDA) Yuret, 2011, 2015; Poncelas et al., 2018; Poncelas, 2019) . FDA selects sentences from a set S based on the number of ngrams which overlap with an in-domain text Seed and adds these sentences to a selected set Sel. In addition, in order to promote diversity, the n-grams are penalised proportionally to the number of instances already present in Sel. During the execution of FDA, candidate sentences from the set S are selected one by one according to the following score:", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 225, |
|
"text": "Yuret, 2011, 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 248, |
|
"text": "Poncelas et al., 2018;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 264, |
|
"text": "Poncelas, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selected out-of-domain data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "score(s, Seed, Sel) = ngr\u2208{s Seed} 0.5 C Sel (ngr) length(s)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selected out-of-domain data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The sentence s with the highest score is removed from S and added to Sel. The count of occurrences of n-gram ngr in the selected set Sel, C Sel (ngr), is updated so that in the following iterations this ngram contributes less to the scoring of one sentence. The process is executed iteratively, adding a single sentence from the set S to the selected set Sel at each step, and stopping after enough sentences have been extracted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selected out-of-domain data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For our experiment, the out-of-domain subtitles represent the set S, and the Amazon reviews are Seed. From the 4 million English review sentences selected for training, we selected 140,000 sentences as seed (about 10,000 from each of the topics). We then used this seed to extract the similar sentence pairs from English-Croatian and English-Serbian subtitles. For each target language, we selected the top 9 million sentence pairs, thus 18M balanced sentence pairs in total. Table 1 shows number of sentences, running words and distinct words (vocabulary) in training, development and test sets, as well as contributions of each of the review types.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 483, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selected out-of-domain data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In order to systematically explore influence of different sizes and natures of training data, we built the following MT systems:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "-GENERAL (teacher model): system trained on all publicly available out-of-domain parallel data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "-REVIEWS (student model): system trained on indomain synthetic corpus consisting of original English Amazon reviews and their translations generated by the GENERAL system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "-REVIEWS+SELECTED (advanced student): system trained on combination of synthetic in-domain data and selected natural out-of-domain data. We investigated different amounts of selected data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 REVIEWS+6M: adding 6 million selected outof-domain sentences (3M for each target language)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 REVIEWS+12M: adding 12 million selected out-of-domain sentences (6M for each target language)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 REVIEWS+18M: adding all 18 million selected out-of-domain sentences (9M for each target language)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "5 Results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental set-up", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to get a quick feedback about each of our systems, we first evaluated them using the following three automatic overall evaluation scores: sacreBLEU (Post, 2018) , chrF (Popovi\u0107, 2015) and characTER (Wang et al., 2016) . The two best systems according to automatic scores, the \"teacher\" system GENERAL and the \"advanced student\" system REVIEWS+18M, were also evaluated by human annotators. The evaluators marked all words considered as adequacy errors, as described in (Popovi\u0107, 2020) , on a sub-set of about 200 sentences per system. The results are presented in Table 2 , and the tendencies are same for both target languages. As expected, the small synthetic in-domain corpus alone (REVIEWS) cannot achieve the same performance as the large out-of-domain corpus (GENERAL), however the difference in scores is not so large as could be expected considering the difference in the sizes (55M vs 4M) as well as the fact that the target part of the in-domain corpus is machine translated. Adding 6M of selected parallel sentences (REVIEWS+6M) slightly improves the performance, while additional 6M selected sentences (REVIEWS+12M) yield (and even slightly improve) the performance of the GENERAL \"teacher\" system. Adding 18M selected sentences (REVIEWS+18M) only slightly improves over the REVIEWS+12M system, and definitely outperforms the GENERAL \"teacher\" system. Since the improvements from 12M to 18M are rather small, we did not experiment with larger selected corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 169, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 192, |
|
"text": "(Popovi\u0107, 2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Wang et al., 2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 492, |
|
"text": "(Popovi\u0107, 2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 572, |
|
"end": 579, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing MT systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also present the scores for two on-line MT systems, AMAZON and GOOGLE, and it can be seen that our best two systems outperform both of them. Although their automatic scores are notably lower than the two best systems, they were also evaluated by human annotators in order to gather more annotations for comparing two different types of reviews which will be described in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing MT systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Before moving to that, we will present a set of translation examples for the two best systems in Table 3 . The first four sentences represent examples where the review-oriented \"advanced student\" system REVIEWS+18M performs better. In the sentence (1), the GENERAL system completely mistranslated the noun phrase \"reddish brown hair\", and in the sentences (2) and (3) it choose incorrect variant of ambiguous source words \"characters\" and \"care\". In the sentence (4), the word order is not optimal.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 104, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing MT systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In sentences (5) and (6), REVIEWS+18M performed better on the first part of the sentence while GENERAL performed better on the second part. GENERAL failed to properly rephrase the first part of the sentence (5) and generated overly literal translation. In sentence (6), it choose incorrect variant of the ambiguous source word \"great\". On the other hand, REVIEWS+18M failed to properly disambiguate the word \"review\" in sentence (5) and omitted the preposition \"of\" in sentence (6). For sentences (7), (8) and (9), GENERAL performed well while REVIEWS+18M produced errors. In (7) and (8), it failed to rephrase properly, and in (9) to generate the correct variant of the ambiguous word \"bean\". Finally, both systems failed in translating noun phrases in sentences (10) and (11), although in different ways. In sentence (10), GEN-ERAL generated a noun phrase with changed meaning (animals are cruel instead of someone being cruel to them) and REVIEWS+18M even left the word \"cruelty\" untranslated. In sentence (11), RE-VIEWS+18M failed in disambiguation of the word \"poor\", while GENERAL changed the meaning of the entire noun phrase into \"charger with cell phones of poor quality\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing MT systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to compare the MT performance of two types of reviews, separated scores for joint target languages are presented in Table 4 . The re-views+18M system shows the best results for both types of reviews, which means that the \"knowledge distillation\" in form of forward translation of Ama-zon reviews by the general system was helpful for both review types.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 132, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Amazon and IMDb reviews", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Furthermore, for all systems, automatic scores are notably better for Amazon product reviews than for IMDb movie reviews, indicating that IMDb is more difficult for machine translation. However, the tendencies of human scores are different, except for GOOGLE. For other systems (our two and AMAZON), the evaluators found less errors in IMDb than in Amazon reviews. Also, it has to be taken into account that IMDb reviewers were not used for training, only Amazon reviews, which can influence the results. More experiments with equal distributions in training and test sets should be carried out in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Amazon and IMDb reviews", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "After looking into the errors marked by human evaluators in order to identify the most prominent error types (Popovi\u0107, 2021) , we found that there are some differences in the frequencies of certain error types, presented in Table 5 . The largest difference can be seen for named entities, which are generally more frequent in IMDb reviews. Some types of errors are, however, more frequent in Amazon reviews, such as ambiguous words (words with different meanings in different contexts), gender er- [Table 3 : Translation examples for the two best systems, GENERAL and REVIEWS+18M. Errors together with the corresponding English parts are marked in bold. For the first four sentences, REVIEWS+18M is better; for (5) and (6), the two systems exhibit errors in different parts of the sentence; for (7), (8) and (9), GENERAL is better; for (10) and (11), both systems fail at the same part of the sentence.] [Table 5 : Different error types in IMDb and Amazon user reviews; the largest difference can be noted for named entity errors, which are especially frequent in IMDb.]", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 120, |
|
"text": "(Popovi\u0107, 2021)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 227, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 529, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1005, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Amazon and IMDb reviews", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "rors, untranslated words (English words copied into the translation) as well as non-existing words (words which exist neither in the source nor in the target language). All these results indicate that there are differences between different types of reviews, so that user reviews generally do not represent a homogeneous genre. However, the analysis was carried out on a relatively small amount of data, especially the human evaluation, so it is not yet possible to draw any conclusions about the nature of these differences. Further analysis on more data, as well as a detailed analysis of different review topics including more review types (such as hotel reviews from Trip Advisor), should be carried out in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Amazon and IMDb reviews", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "This work investigates machine translation of two types of user reviews, IMDb movie reviews and Amazon product reviews, from English into Serbian and Croatian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Since one of the main challenges for MT of user reviews is the lack of parallel in-domain training data, we explored the possibility of making use of large out-of-domain bilingual parallel corpora as well as monolingual in-domain English corpora. We trained a general \"teacher\" system on all out-of-domain data and then used this system to create a small synthetic in-domain parallel corpus by translating English Amazon reviews into the target languages. Both automatic scores and human evaluation show that using this synthetic in-domain corpus together with a selected subset of out-of-domain data is the best option.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The results on separate IMDb and Amazon reviews indicate that MT systems perform differently on different review types, so that user reviews generally should not be considered a homogeneous genre. However, evaluating and training on a larger amount of different reviews covering different domains/topics is needed to identify the nature of the differences between different types of reviews, as well as the influence of different topics. Another direction of future work should include using more in-domain data, as well as other techniques for domain adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/wmt20/ 2 http://workshop2019.iwslt.org/index.php", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.clsp.jhu.edu/workshops/19-workshop/improving-translation-of-informal-language/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://opus.nlpl.eu/ 5 http://jmcauley.ucsd.edu/data/amazon/ 6 https://github.com/m-popovic/imdb-corpus-for-MT 7 https://ai.stanford.edu/~amaas/data/sentiment/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.clsp.jhu.edu/workshops/19-workshop/improving-translation-of-informal-language/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://ai.stanford.edu/~amaas/data/sentiment/ 10 http://jmcauley.ucsd.edu/data/amazon/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://opus.nlpl.eu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106. This research was partly funded by financial support of the European Association for Machine Translation (EAMT) under its programme \"2019 Sponsorship of Activities\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Massively multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3874--3884", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 3874-3884, Minneapo- lis, Minnesota.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Multilingual Sentiment Analysis using Machine Translation?", |
|
"authors": [ |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Balahur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandra Balahur and Marco Turchi. 2012. Multi- lingual Sentiment Analysis using Machine Transla- tion? In Proceedings of the 3rd Workshop in Com- putational Approaches to Subjectivity and Sentiment Analysis, pages 52-60, Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Comparative Experiments Using Supervised Learning and Machine Translation for Multilingual Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Balahur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computer Speech and Language", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "56--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandra Balahur and Marco Turchi. 2014. Com- parative Experiments Using Supervised Learning and Machine Translation for Multilingual Senti- ment Analysis. Computer Speech and Language, 28(1):56-75.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Domain Adaptation in SMT of User-Generated Forum Content Guided by OOV Word Reduction: Normalization and/or Supplementary Data", |
|
"authors": [ |
|
{ |
|
"first": "Pratyush", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudip", |
|
"middle": [], |
|
"last": "Kumar Naskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johann", |
|
"middle": [], |
|
"last": "Roturier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 16th Annual Conference of the European Association for Machine Translation (EAMT 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pratyush Banerjee, Sudip Kumar Naskar, Johann Ro- turier, Andy Way, and Josef van Genabith. 2012. Domain Adaptation in SMT of User-Generated Fo- rum Content Guided by OOV Word Reduction: Nor- malization and/or Supplementary Data. In Proceed- ings of the 16th Annual Conference of the European Association for Machine Translation (EAMT 2012), pages 169-176, Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Instance selection for machine translation using feature decay algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Ergun", |
|
"middle": [], |
|
"last": "Bi\u00e7ici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT 2011)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "272--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ergun Bi\u00e7ici and Deniz Yuret. 2011. Instance selec- tion for machine translation using feature decay al- gorithms. In Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT 2011), pages 272-283, Edinburgh, Scotland.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Optimizing instance selection for statistical machine translation with feature decay algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Ergun", |
|
"middle": [], |
|
"last": "Bi\u00e7ici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", |
|
"volume": "23", |
|
"issue": "2", |
|
"pages": "339--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ergun Bi\u00e7ici and Deniz Yuret. 2015. Optimizing in- stance selection for statistical machine translation with feature decay algorithms. IEEE/ACM Transac- tions on Audio, Speech, and Language Processing, 23(2):339-350.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study", |
|
"authors": [ |
|
{ |
|
"first": "Franck", |
|
"middle": [], |
|
"last": "Burlot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Yvon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 3rd Conference on Machine Translation (WMT 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "144--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franck Burlot and Fran\u00e7ois Yvon. 2018. Using Mono- lingual Data in Neural Machine Translation: a Sys- tematic Study. In Proceedings of the 3rd Conference on Machine Translation (WMT 2018), pages 144- 155, Belgium, Brussels.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A teacher-student framework for zeroresource neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Victor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 18)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1925--1935", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zero- resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 18), pages 1925- 1935, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The sockeye neural machine translation toolkit at AMTA 2018", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Domhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Artem", |
|
"middle": [], |
|
"last": "Sokolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Clifton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (AMTA 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "200--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Transla- tion in the Americas (AMTA 2018), pages 200-207, Boston, MA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Twitter Translation Using Translation-based Crosslingual Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Jehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 7th Workshop on Statistical Machine Translation (WMT 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "410--421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Jehl, Felix Hieber, and Stefan Riezler. 2012. Twitter Translation Using Translation-based Cross- lingual Retrieval. In Proceedings of the 7th Work- shop on Statistical Machine Translation (WMT 2012), pages 410-421.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", |
|
"authors": [ |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Thorat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Vi\u00e9gas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "339--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sequencelevel knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1317--1327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP 16), pages 1317- 1327, Austin, Texas.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Microblogs as Parallel Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guang", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Trancoso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "176--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang Ling, Guang Xiang, Chris Dyer, Alan Black, and Isabel Trancoso. 2013. Microblogs as Parallel Cor- pora. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), pages 176-186, Sofia, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Maintaining Sentiment Polarity of Translated User Generated Content", |
|
"authors": [ |
|
{ |
|
"first": "Pintu", |
|
"middle": [], |
|
"last": "Lohar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haithem", |
|
"middle": [], |
|
"last": "Afli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The Prague Bulletin of Mathematical Linguistics", |
|
"volume": "108", |
|
"issue": "1", |
|
"pages": "73--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pintu Lohar, Haithem Afli, and Andy Way. 2017. Main- taining Sentiment Polarity of Translated User Gener- ated Content. The Prague Bulletin of Mathematical Linguistics, 108(1):73-84.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Balancing Translation Quality and Sentiment Preservation", |
|
"authors": [ |
|
{ |
|
"first": "Pintu", |
|
"middle": [], |
|
"last": "Lohar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haithem", |
|
"middle": [], |
|
"last": "Afli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (AMTA 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pintu Lohar, Haithem Afli, and Andy Way. 2018. Bal- ancing Translation Quality and Sentiment Preserva- tion. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (AMTA 2018), pages 81-88, Boston, MA.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Building English-to-Serbian Machine Translation System for IMDb Movie Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Pintu", |
|
"middle": [], |
|
"last": "Lohar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pintu Lohar, Maja Popovi\u0107, and Andy Way. 2019. Building English-to-Serbian Machine Translation System for IMDb Movie Reviews. In Proceedings of the 7th Workshop on Balto-Slavic Natural Lan- guage Processing (BSNLP 2019), pages 105-113, Florence, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning Word Vectors for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics and Human Language Technologies (ACL-HLT 2011)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics and Human Language Technologies (ACL-HLT 2011), pages 142-150, Portland, Oregon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Image-Based Recommendations on Styles and Substitutes", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Mcauley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Targett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinfeng", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hengel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-Based Rec- ommendations on Styles and Substitutes. In Pro- ceedings of the 38th International ACM SIGIR Con- ference on Research and Development in Informa- tion Retrieval (SIGIR 2015), pages 43-52, Santiago, Chile.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Improving transductive data selection algorithms for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Poncelas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alberto Poncelas. 2019. Improving transductive data selection algorithms for machine translation. Ph.D. thesis, Dublin City University.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Investigating Back translation in Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Poncelas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitar", |
|
"middle": [], |
|
"last": "Shterionov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gideon", |
|
"middle": [], |
|
"last": "Maillette De Buy Wenniger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peyman", |
|
"middle": [], |
|
"last": "Passban", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 21st Annual Conference of the European Association for Machine Translation (EAMT 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--258", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alberto Poncelas, Dimitar Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating Back translation in Neural Machine Translation. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation (EAMT 2018), pages 249- 258, Alicante, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "chrF: character n-gram f-score for automatic MT evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation (WMT 2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation (WMT 2015), pages 392-395, Lisbon, Portugal. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Informative manual evaluation of machine translation output", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2020. Informative manual evaluation of machine translation output. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "On nature and causes of observed MT errors", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the MT Summit 2021", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2021. On nature and causes of observed mt errors. In Proceedings of the MT Summit 2021, Online.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Neural machine translation for translating into Croatian and Serbian", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Poncelas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marija", |
|
"middle": [], |
|
"last": "Brki\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "102--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107, Alberto Poncelas, Marija Brki\u0107, and Andy Way. 2020. Neural machine translation for translating into Croatian and Serbian. In Pro- ceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2020), pages 102-113, Barcelona, Spain (Online).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation (WMT 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation (WMT 2018), pages 186-191, Brussels, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Evaluation of MT Systems to Translate User Generated Content", |
|
"authors": [ |
|
{ |
|
"first": "Johann", |
|
"middle": [], |
|
"last": "Roturier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Bensadoun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the MT Summit XIII", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johann Roturier and Anthony Bensadoun. 2011. Eval- uation of MT Systems to Translate User Generated Content. In Proceedings of the MT Summit XIII, Xi- amen, China.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Estimating the Quality of Translated User-Generated Content", |
|
"authors": [ |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rasoul Samad Zadeh", |
|
"middle": [], |
|
"last": "Kaljahi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johann", |
|
"middle": [], |
|
"last": "Roturier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Hollowood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of 6th International Joint Conference on Natural Language Processing (IJCNLP 2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1167--1173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raphael Rubino, Jennifer Foster, Rasoul Samad Zadeh Kaljahi, Johann Roturier, and Fred Hollowood. 2013. Estimating the Quality of Translated User- Generated Content. In Proceedings of 6th Interna- tional Joint Conference on Natural Language Pro- cessing (IJCNLP 2013), pages 1167-1173, Nagoya, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Collective wisdom: Improving lowresource neural machine translation using adaptive knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Fahimeh", |
|
"middle": [], |
|
"last": "Saleh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics (COLING 20)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fahimeh Saleh, Wray Buntine, and Gholamreza Haf- fari. 2020. Collective wisdom: Improving low- resource neural machine translation using adaptive knowledge distillation. In Proceedings of the 28th International Conference on Computational Linguis- tics (COLING 20), Barcelona, Spain (Online).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "TweetMT: A Parallel Microblog Corpus", |
|
"authors": [ |
|
{ |
|
"first": "I\u00f1aki", |
|
"middle": [], |
|
"last": "San Vicente", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I\u00f1aki", |
|
"middle": [], |
|
"last": "Alegria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [ |
|
"Espa\u00f1a" |
|
], |
|
"last": "Bonet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [], |
|
"last": "Gamallo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [ |
|
"Goncalo" |
|
], |
|
"last": "Oliveira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [ |
|
"Martinez" |
|
], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arkaitz", |
|
"middle": [], |
|
"last": "Zubiaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nora", |
|
"middle": [], |
|
"last": "Aranberri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I\u00f1aki San Vicente, I\u00f1aki Alegria, Cristina Espa\u00f1a Bonet, Pablo Gamallo, Hugo Goncalo Oliveira, Eva Martinez Garcia, Antonio Toral, Arkaitz Zubiaga, and Nora Aranberri. 2016. TweetMT: A Parallel Microblog Corpus. In Proceedings of the 10th In- ternational Conference on Language Resources and Evaluation (LREC 2016), Portoro\u017e, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Improving Neural Machine Translation Models with Monolingual Data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "86--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 86- 96, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1715-1725, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Parallel data, tools and interfaces in OPUS", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC 2012), pages 2214-2218, Istan- bul, Turkey.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 31st Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017), pages 5998-6008, Long Beach, CA.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "CharacTer: Translation Edit Rate on Character Level", |
|
"authors": [ |
|
{ |
|
"first": "Weiyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan-Thorsten", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Rosendahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 1st Conference on Machine Translation (WMT 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "505--510", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. CharacTer: Translation Edit Rate on Character Level. In Proceedings of the 1st Conference on Machine Translation (WMT 2016), pages 505-510, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Exploiting source-side monolingual data in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1535--1545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP 2016), pages 1535-1545, Austin, Texas.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "chrF \u2191 cTER \u2193 BLEU \u2191 chrF \u2191 cTER \u2193 human \u2193 chrF \u2191 cTER \u2193 BLEU \u2191 chrF \u2191 cTER \u2193 human \u2193", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">(a) English\u2192Croatian</td><td/><td/><td/><td/></tr><tr><td>en\u2192hr</td><td/><td colspan=\"3\">development (IMDb )</td><td/><td colspan=\"2\">test (Amazon+IMDb )</td><td/></tr><tr><td colspan=\"3\">system size BLEU \u2191 GENERAL 55M 31.6</td><td>57.4</td><td>39.1</td><td>30.6</td><td>57.0</td><td>39.9</td><td>14.2</td></tr><tr><td>REVIEWS</td><td>4M</td><td>26.2</td><td>53.7</td><td>42.7</td><td>26.3</td><td>54.2</td><td>41.3</td><td>/</td></tr><tr><td>REVIEWS+6M</td><td>10M</td><td>26.3</td><td>53.9</td><td>42.4</td><td>26.4</td><td>54.6</td><td>41.4</td><td>/</td></tr><tr><td>REVIEWS+12M</td><td>16M</td><td>31.7</td><td>58.0</td><td>39.2</td><td>30.7</td><td>57.2</td><td>39.5</td><td>/</td></tr><tr><td colspan=\"2\">REVIEWS+18M 22M</td><td>32.1</td><td>58.2</td><td>39.0</td><td>31.4</td><td>57.8</td><td>38.8</td><td>12.6</td></tr><tr><td>AMAZON</td><td>n.a.</td><td>30.9</td><td>57.6</td><td>38.9</td><td>29.7</td><td>56.7</td><td>39.0</td><td>18.3</td></tr><tr><td>GOOGLE</td><td>n.a.</td><td>28.6</td><td>55.7</td><td>40.6</td><td>26.6</td><td>53.0</td><td>43.8</td><td>17.4</td></tr><tr><td/><td/><td/><td colspan=\"2\">(b) English\u2192Serbian</td><td/><td/><td/><td/></tr><tr><td>en\u2192sr</td><td/><td colspan=\"3\">development (IMDb )</td><td/><td colspan=\"2\">test (Amazon+IMDb )</td><td/></tr><tr><td colspan=\"3\">system size BLEU \u2191 GENERAL 55M 32.1</td><td>57.3</td><td>39.0</td><td>29.8</td><td>55.2</td><td>40.4</td><td>14.2</td></tr><tr><td>REVIEWS</td><td>4M</td><td>26.6</td><td>53.6</td><td>42.4</td><td>26.1</td><td>52.8</td><td>42.1</td><td>/</td></tr><tr><td>REVIEWS+6M</td><td>10M</td><td>27.2</td><td>54.0</td><td>42.2</td><td>26.2</td><td>52.9</td><td>42.3</td><td>/</td></tr><tr><td>REVIEWS+12M</td><td>16M</td><td>31.9</td><td>57.6</td><td>38.2</td><td>29.7</td><td>55.5</td><td>40.1</td><td>/</td></tr><tr><td colspan=\"2\">REVIEWS+18M 
22M</td><td>31.9</td><td>57.6</td><td>38.4</td><td>29.9</td><td>55.6</td><td>40.0</td><td>13.5</td></tr><tr><td>AMAZON</td><td>n.a.</td><td>26.7</td><td>54.6</td><td>40.8</td><td>25.2</td><td>52.4</td><td>42.5</td><td>25.6</td></tr><tr><td>GOOGLE</td><td>n.a.</td><td>26.4</td><td>54.2</td><td>40.9</td><td>25.4</td><td>52.8</td><td>41.9</td><td>24.0</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Comparison of English\u2192Croatian (a) and English\u2192Serbian (b) systems trained on different texts by automatic evaluation scores: BLEU, chrF and characTER as well as by percentage of words marked as adequacy errors by human evaluators (\"human\").", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "(1) source Do not buy this unless you purposely want reddish brown hair. reference Ne kupujte ovo osim ako ciljano ne \u017eelite crvenkasto smedu kosu.Ne kupujte ovo, osim ako ne \u017eelite rashladenu kosu Reddisha. like this kind of films, i feel like somebody is trying to pull my leg. reference ne volim ovakve filmove, osje\u0107am se kao da me netko poku\u0161ava prevariti.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>GENERAL \u2212</td><td/></tr><tr><td>REVIEWS+18M +</td><td>Ne kupujte ovo osim ako ne \u017eelite crvenosmedu kosu.</td></tr><tr><td>(2) source</td><td>Boring Characters</td></tr><tr><td>reference</td><td>Dosadni likovi</td></tr><tr><td>GENERAL \u2212</td><td>Dosadni karakteri</td></tr><tr><td>REVIEWS+18M +</td><td>Dosadni likovi</td></tr><tr><td>(3) source</td><td>Wonderful Skin Care</td></tr><tr><td>reference</td><td>Odli\u010dna nega ko\u017ee</td></tr><tr><td>GENERAL \u2212</td><td>Predivna briga za ko\u017eu.</td></tr><tr><td>REVIEWS+18M +</td><td>Predivna nega ko\u017ee.</td></tr><tr><td>(4) source</td><td>This was a pretty dull movie, actually.</td></tr><tr><td>reference</td><td>Ovo je zapravo bio poprili\u010dno dosadan film.</td></tr><tr><td>GENERAL \u2212</td><td>Ovo je bio prili\u010dno dosadan film, zapravo.</td></tr><tr><td>REVIEWS+18M +</td><td>Ovo je zapravo bio prili\u010dno dosadan film.</td></tr><tr><td>(5) source</td><td>I had high hopes for this product after reading all the wonderful reviews.</td></tr><tr><td>reference</td><td>Veliku nadu sam polagao u ovaj proizvod nakon\u010ditanja svih tih divnih recenzija.</td></tr><tr><td>GENERAL \u2212+</td><td>Imao sam velike nade za ovaj proizvod nakon\u010ditanja svih prekrasnih recenzija.</td></tr><tr><td colspan=\"2\">REVIEWS+18M +\u2212 Polagao sam velike nade u ovaj proizvod nakon\u010ditanja svih divnih kritika.</td></tr><tr><td>(6) source</td><td>A Great Story. The Most Amazing Tale of Human Ingenuity and Creativity!</td></tr><tr><td>reference</td><td>Sjajna pri\u010da. Najneverovatnija pripovetka o ljudskoj dovitljivosti i kreativnosti!</td></tr><tr><td>GENERAL \u2212+</td><td>Velika pri\u010da. Najneverovatnija pri\u010da o ljudskoj genijalnosti i kreativnosti!</td></tr><tr><td colspan=\"2\">REVIEWS+18M +\u2212 Sjajna pri\u010da. 
Najneverovatnija pri\u010da X ljudske genijalnosti i kreativnosti!</td></tr><tr><td colspan=\"2\">(7) source i don't GENERAL + ne volim ovakve filmove, osje\u0107am se kao da me netko poku\u0161ava prevariti.</td></tr><tr><td>REVIEWS+18M \u2212</td><td>ne svida mi se ova vrsta filmova, osje\u0107am se kao da me netko poku\u0161ava</td></tr><tr><td/><td>povu\u0107i za nogu.</td></tr><tr><td>(8) source</td><td>My sense is that it depends to a large degree on the dog.</td></tr><tr><td>reference</td><td>Imam utisak da dosta zavisi od samog psa.</td></tr><tr><td>GENERAL +</td><td>Moj ose\u0107aj je da to mnogo zavisi od psa.</td></tr><tr><td>REVIEWS+18M \u2212</td><td>Moj ose\u0107aj je da to zavisi od velikog stepena na psa.</td></tr><tr><td>(9) source</td><td>I only recently discovered vanilla bean paste.</td></tr><tr><td>reference</td><td>Tek sam skoro otkrio pastu od zrna vanile.</td></tr><tr><td>GENERAL +</td><td>Nedavno sam otkrio pastu od X vanile.</td></tr><tr><td>REVIEWS+18M \u2212</td><td>Nedavno sam otkrio pastu od vanile i pasulja.</td></tr><tr><td>(10) source</td><td>Horrifying Animal Cruelty</td></tr><tr><td>reference</td><td>U\u017easavaju\u0107a okrutnost prema \u017eivotinjama</td></tr><tr><td>GENERAL \u2212</td><td>Zastra\u0161uju\u0107a \u017eivotinjska okrutnost</td></tr><tr><td>REVIEWS+18M \u2212</td><td>U\u017easna \u017divotinjska Cruelty</td></tr><tr><td>(11) source</td><td>Poor Quality Cell Phone Charger</td></tr><tr><td>reference</td><td>Punja\u010d mobitela lo\u0161e kvalitete</td></tr><tr><td>GENERAL \u2212</td><td>Punja\u010d s lo\u0161im kvalitetnim mobilnim telefonima</td></tr><tr><td>REVIEWS+18M \u2212</td><td/></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Comparison of automatic scores and human evaluation for two different types of reviews: Amazon products and IMDb movies. The scores are calculated on the joint test set for both target languages. All automatic scores are better for Amazon product reviews than for IMDb movie reviews, while the situation is different for human evaluation.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>error type (%)</td><td colspan=\"2\">IMDb Amazon</td></tr><tr><td>named entity</td><td>6.7</td><td>2.8</td></tr><tr><td>ambiguous word</td><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |