{
"paper_id": "C10-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:58:26.238638Z"
},
"title": "Plagiarism Detection across Distant Language Pairs",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Valencia",
"location": {}
},
"email": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Valencia",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group Basque Country University",
"institution": "",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group Basque Country University",
"institution": "",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Plagiarism, the unacknowledged reuse of text, does not end at language boundaries. Cross-language plagiarism occurs if a text is translated from a fragment written in a different language and no proper citation is provided. Regardless of the change of language, the contents and, in particular, the ideas remain the same. Whereas different methods for the detection of monolingual plagiarism have been developed, less attention has been paid to the cross-language case. In this paper we compare two recently proposed cross-language plagiarism detection methods (CL-CNG, based on character n-grams, and CL-ASA, based on statistical translation) to a novel approach to this problem, based on machine translation and monolingual similarity analysis (T+MA). We explore the effectiveness of the three approaches for less related languages. CL-CNG proves not to be appropriate for this kind of language pair, whereas T+MA performs better than the previously proposed models.",
"pdf_parse": {
"paper_id": "C10-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Plagiarism, the unacknowledged reuse of text, does not end at language boundaries. Cross-language plagiarism occurs if a text is translated from a fragment written in a different language and no proper citation is provided. Regardless of the change of language, the contents and, in particular, the ideas remain the same. Whereas different methods for the detection of monolingual plagiarism have been developed, less attention has been paid to the cross-language case. In this paper we compare two recently proposed cross-language plagiarism detection methods (CL-CNG, based on character n-grams, and CL-ASA, based on statistical translation) to a novel approach to this problem, based on machine translation and monolingual similarity analysis (T+MA). We explore the effectiveness of the three approaches for less related languages. CL-CNG proves not to be appropriate for this kind of language pair, whereas T+MA performs better than the previously proposed models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Plagiarism is a problem in many scientific and cultural fields. Text plagiarism may imply different operations: from a simple cut-and-paste, to the insertion, deletion and substitution of words, up to an entire process of paraphrasing. Different models approach the detection of monolingual plagiarism (Shivakumar and Garc\u00eda-Molina, 1995; Hoad and Zobel, 2003; Maurer et al., 2006) . Each of these models is appropriate only in those cases where all the implied documents are written in the same language.",
"cite_spans": [
{
"start": 302,
"end": 338,
"text": "(Shivakumar and Garc\u00eda-Molina, 1995;",
"ref_id": "BIBREF23"
},
{
"start": 339,
"end": 360,
"text": "Hoad and Zobel, 2003;",
"ref_id": "BIBREF10"
},
{
"start": 361,
"end": 381,
"text": "Maurer et al., 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nevertheless, the problem does not end at language boundaries. Plagiarism is also committed if the reused text is translated from a fragment written in a different language and no citation is provided. When plagiarism is generated by a translation process, it is known as cross-language plagiarism (CLP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Less attention has been paid to the detection of this kind of plagiarism due to its enhanced difficulty (Ceska et al., 2008; Barr\u00f3n-Cede\u00f1o et al., 2008; Potthast et al., 2010) . In fact, in the recently held 1st International Competition on Plagiarism Detection (Potthast et al., 2009) , no participants tried to approach it.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Ceska et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 125,
"end": 152,
"text": "Barr\u00f3n-Cede\u00f1o et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 153,
"end": 175,
"text": "Potthast et al., 2010)",
"ref_id": "BIBREF20"
},
{
"start": 262,
"end": 285,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to describe the prototypical process of automatic plagiarism detection, we establish the following notation. Let d q be a plagiarism suspect document. Let D be a representative collection of reference documents. D presumably includes the source of the potentially plagiarised fragments in d q . Stein et al., (2007) divide the process into three stages 1 :",
"cite_spans": [
{
"start": 304,
"end": 324,
"text": "Stein et al., (2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. heuristic retrieval of potential source documents: given d q , retrieving an appropriate number of its potential source documents D * \u2286 D such that |D * | \u226a |D|; 2. exhaustive comparison of texts: comparing the text from d q and d \u2208 D * in order to identify reused fragments and their potential sources; and 3. knowledge-based post-processing: those detected fragments with proper citation are discarded as they are not plagiarised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The result is offered to the human expert to take the final decision. In the case of cross-language plagiarism detection (CLPD), the texts are written in different languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "d q \u2208 L and d \u2032 \u2208 L \u2032 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this research we focus on step 2: cross-language exhaustive comparison of texts, approaching it as an Information Retrieval problem of cross-language text similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Step 1, heuristic retrieval, may be approached by different CLIR techniques, such as those proposed by Dumais et al. (1997) and Pouliquen et al. (2003) .",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "Dumais et al. (1997)",
"ref_id": "BIBREF9"
},
{
"start": 128,
"end": 151,
"text": "Pouliquen et al. (2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cross-language similarity between texts, \u03d5(d q , d \u2032 ), has been previously estimated on the basis of different models: multilingual thesauri (Steinberger et al., 2002; Ceska et al., 2008) , comparable corpora -CL-Explicit Semantic Analysis CL-ESA- (Potthast et al., 2008) , machine translation techniques -CL-Alignment-based Similarity Analysis CL-ASA- (Barr\u00f3n-Cede\u00f1o et al., 2008; Pinto et al., 2009) and n-grams comparison -CL-Character n-Grams CL-CNG- (Mcnamee and Mayfield, 2004) .",
"cite_spans": [
{
"start": 142,
"end": 168,
"text": "(Steinberger et al., 2002;",
"ref_id": "BIBREF25"
},
{
"start": 169,
"end": 188,
"text": "Ceska et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 249,
"end": 272,
"text": "(Potthast et al., 2008)",
"ref_id": "BIBREF18"
},
{
"start": 354,
"end": 382,
"text": "(Barr\u00f3n-Cede\u00f1o et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 383,
"end": 402,
"text": "Pinto et al., 2009)",
"ref_id": "BIBREF17"
},
{
"start": 456,
"end": 484,
"text": "(Mcnamee and Mayfield, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A comparison of CL-ASA, CL-ESA, and CL-CNG was carried out recently by Potthast et al. (2010) . The authors report that in general, despite its simplicity, CL-CNG outperformed the other two models. Additionally, CL-ESA showed good results in the cross-language retrieval of topic-related texts, whereas CL-ASA obtained better results in exact (human) translations.",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "Potthast et al. (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, most of the language pairs used in the reported experiments (English-{German, Spanish, French, Dutch, Polish}) are related, either because they have common predecessors or because a large proportion of their vocabularies share common roots. In fact, the lower syntactic relatedness of the English-Polish pair caused a performance degradation for CL-CNG, and for CL-ASA to a lesser extent. In order to confirm whether the closeness among languages is an important factor, this paper works with more distant language pairs: English-Basque and Spanish-Basque.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows. Section 2 describes the motivation for working on this research topic, stressing the situation of cross-language plagiarism among writers in less resourced languages. A brief overview of the few works on CLPD is included. The three similarity estimation models compared in this research work are presented in Section 3. The experimental framework and the obtained results are included in Section 4. Finally, Section 5 draws conclusions and discusses further work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cases of CLP are common nowadays because information in multiple languages is available on the Web, but people still write in their own language. This special kind of plagiarism occurs more often when the target language is a less resourced one 2 , as is the case of Basque.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Basque is a pre-Indo-European language with less than a million speakers in the world and no known relatives in the language families (Wikipedia, 2010a) . Still, Basque shares a portion of its vocabulary with its contact languages (Spanish and French). Therefore, we decided to work with two language pairs: Basque with Spanish, its contact language, and with English, perhaps the language with major influence over the rest of languages in the world. Although the considered pairs share most of their alphabet, the vocabulary and language typologies are very different. For instance, Basque is an agglutinative language.",
"cite_spans": [
{
"start": 134,
"end": 152,
"text": "(Wikipedia, 2010a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "In order to illustrate the relations among these languages, Fig. 1 includes extracts from the English (en), Spanish (es) and Basque (eu) versions of the same Wikipedia article. The fragments are a sample of the lexical and syntactic distance between Basque and the other two languages. In fact, these sentences are completely co-derived and the corresponding entire articles are a sample of the typical imbalance in text available in the different languages (around 2,000, 1,300, and only 100 words are contained in the en, es and eu articles, respectively). Figure 1: First sentences from the Wikipedia articles \"Party of European Socialists\" (en), \"Partido Socialista Europeo\" (es), and \"Europako Alderdi Sozialista\" (eu) (Wikipedia, 2010b).",
"cite_spans": [
{
"start": 724,
"end": 742,
"text": "(Wikipedia, 2010b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 60,
"end": 66,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Of high relevance is that the two corpora used in this work were manually constructed by translating English and Spanish text into Basque. In the experiments carried out by Potthast et al. (2010) , which inspired our work, texts from the JRC-Acquis corpus (Steinberger et al., 2006) and Wikipedia were used. The first one is a multilingual corpus with no clear definition of source and target languages, whereas in Wikipedia no specific relationship exists between the different languages in which a topic may be broached. In some cases (cf. Fig. 1 ) they are clearly co-derived, but in others they are completely independent.",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "Potthast et al. (2010)",
"ref_id": "BIBREF20"
},
{
"start": 256,
"end": 282,
"text": "(Steinberger et al., 2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 542,
"end": 548,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "CLPD has been investigated only recently, mainly by adapting models formerly proposed for cross-language information retrieval. This is the case of cross-language explicit semantic analysis (CL-ESA), proposed by Potthast et al. (2008) . In this case the comparison between texts is not carried out directly. Instead, a comparable corpus C L,L \u2032 is required, containing documents on multiple topics in the two implied languages. One of the biggest corpora of this nature is Wikipedia. The similarity between d q \u2208 L and every document c \u2208 C L is computed based on the cosine measure. The same process is done for L \u2032 . This step generates two vectors",
"cite_spans": [
{
"start": 212,
"end": 234,
"text": "Potthast et al. (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "[cos(d q , c 1 ), . . . , cos(d q , c |C L | )] and [cos(d \u2032 , c \u2032 1 ), . . . , cos(d \u2032 , c \u2032 |C L \u2032 | )],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "where each dimension is comparable between the two vectors. Therefore, the cosine between such vectors can be estimated in order to -indirectly-estimate how similar d q and d \u2032 are. The authors suggest that this model can be used for CLPD. Another recent model is MLPlag, proposed by Ceska et al. (2008) . It exploits the EuroWordNet Thesaurus 3 , which includes sets of synonyms in multiple European languages, with common identifiers across languages. The authors report experiments over a subset of documents of the English and Czech sections of the JRC-Acquis corpus as well as a corpus of simplified vocabulary 4 . The main difficulty they faced was the number of words in the documents not included in the thesaurus (approximately 50% of the vocabulary).",
"cite_spans": [
{
"start": 284,
"end": 303,
"text": "Ceska et al. (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "This is a very similar approach to that proposed by Pouliquen et al. (2003) for the identification of document translations. In fact, both approaches have something in common: translations are searched at document level. It is assumed that an entire document has been reused (translated). Nevertheless, a writer is free to plagiarise text fragments from different sources, and compose a mixture of original and reused text.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "Pouliquen et al. (2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "A third model is the cross-language alignmentbased similarity analysis (CL-ASA), proposed by Barr\u00f3n-Cede\u00f1o et al. (2008) , which is based on statistical machine translation technology. This model was proposed to detect plagiarised text fragments (similar models have been proposed for extraction of parallel sentences from comparable corpora (Munteanu et al., 2004) ). The authors report experiments over a short set of texts from which simulated plagiarism was created from English to Spanish. Human as well as automatic machine translations were included in the collection. Further descriptions of this model are included in Section 3, as it is one of those being assessed in this research work.",
"cite_spans": [
{
"start": 93,
"end": 120,
"text": "Barr\u00f3n-Cede\u00f1o et al. (2008)",
"ref_id": "BIBREF2"
},
{
"start": 342,
"end": 365,
"text": "(Munteanu et al., 2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "To the best of our knowledge, no work (including the three previously mentioned) has been done considering less resourced languages. In this research work we approach the not uncommon problem of CLPD in Basque, with source texts written in Spanish (the co-official language of the Basque Country) and English (the language with most available texts in the world).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "We compare three cross-language similarity analysis methods: T+MA (translation followed by monolingual analysis), a novel method based on machine translation followed by a monolingual similarity estimation; CL-CNG, a character n-gram based comparison model; and CL-ASA, a model that combines translation and similarity estimation in a single step. Neither MLPlag nor CL-ESA are included in the comparison. On the one hand, we are interested in plagiarism at sentence level, and MLPlag is designed to compare entire documents. On the other hand, in previous experiments over exact translations, CL-ASA has been shown to outperform it on language pairs whose alphabet or syntax are unrelated (Potthast et al., 2010) . This is precisely the case of the en-eu and es-eu language pairs. Additionally, the amount of Wikipedia articles in Basque available for the construction of the required comparable corpus is insufficient for the CL-ESA data requirements.",
"cite_spans": [
{
"start": 690,
"end": 713,
"text": "(Potthast et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "In this section, we describe the three cross-language similarity models we compare. For experimental purposes (cf. Section 4) we consider d q to be a suspicious sentence written in L and D \u2032 to be a collection of potential source sentences written in L \u2032 (L \u2260 L \u2032 ). The text pre-processing required by the different models is summarised in Table 1. Examples illustrating how the models work are included in Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Models",
"sec_num": "3"
},
{
"text": "d q \u2208 L is translated into L \u2032 on the basis of the Giza++ (Och and Ney, 2003) , Moses (Koehn et al., 2007) and SRILM (Stolcke, 2002) tools. Multiple translations from d q into d \u2032 q are possible. Therefore, performing a monolingual similarity analysis based on \"traditional\" techniques, such as those based on word n-grams comparison (Broder, 1997) or hash collisions (Schleimer et al., 2003) , is not an option. Instead, we take the approach of the bag-of-words, which has shown good results in the estimation of monolingual text similarity (Barr\u00f3n-Cede\u00f1o et al., 2009) . Words in d \u2032 q and d \u2032 are weighted by the standard tf-idf, and the similarity between them is estimated by the cosine similarity measure.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 86,
"end": 106,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 117,
"end": 132,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF27"
},
{
"start": 334,
"end": 348,
"text": "(Broder, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 368,
"end": 392,
"text": "(Schleimer et al., 2003)",
"ref_id": "BIBREF22"
},
{
"start": 542,
"end": 570,
"text": "(Barr\u00f3n-Cede\u00f1o et al., 2009)",
"ref_id": "BIBREF3"
}
],
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation + Monolingual Analysis",
"sec_num": "3.1"
},
{
"text": "This model estimates how likely it is that d \u2032 is a translation of d q . It is based on an adaptation of the Bayes rule for MT:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(d \u2032 | dq) = p(d \u2032 ) p(dq | d \u2032 ) / p(dq)",
"eq_num": "(1)"
}
],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "As",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "p(d q ) does not depend on d \u2032 , it is neglected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "From an MT point of view, the conditional probability p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "(d q | d \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "is known as translation model probability and is computed on the basis of a statistical bilingual dictionary. p(d \u2032 ) is known as language model probability; it describes the target language L \u2032 in order to obtain grammatically acceptable translations (Brown et al., 1993) . Translating d q into L \u2032 is not the concern of this method; rather it focuses on retrieving texts written in L \u2032 which are potential translations of d q . Therefore, Barr\u00f3n-Cede\u00f1o et al. (2008) proposed replacing the language model (the one used in T+MA) by the so-called length model. This model depends on the texts' character lengths instead of on language structure.",
"cite_spans": [
{
"start": 252,
"end": 272,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
},
{
"start": 441,
"end": 468,
"text": "Barr\u00f3n-Cede\u00f1o et al. (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "Multiple translations from d into L \u2032 are possible, and it is uncommon to find a pair of translated texts d and d \u2032 such that |d| = |d \u2032 |. Nevertheless, the length of such translations is closely related to a translation length factor. In accordance with Pouliquen et al. (2003) , the length model is defined as:",
"cite_spans": [
{
"start": 256,
"end": 279,
"text": "Pouliquen et al. (2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u033a(d \u2032 ) = exp( \u22120.5 ( ( |d \u2032 | / |dq| \u2212 \u00b5 ) / \u03c3 )^2 ),",
"eq_num": "(2)"
}
],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "where \u00b5 and \u03c3 are the mean and the standard deviation of the character length ratio between texts in L and their translations into L \u2032 . If the length of d \u2032 is not the one expected given d q , it receives a low score. The translation model probability is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(d | d \u2032 ) = \u220f_{x \u2208 d} \u2211_{y \u2208 d \u2032 } p(x, y),",
"eq_num": "(3)"
}
],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "where p(x, y), a statistical bilingual dictionary, represents the likelihood that x is a valid translation of y. After estimating p(x, y) from a parallel corpus, on the basis of the IBM statistical translation models (Brown et al., 1993) , we consider, for each word x, only the k best translations y (those with the highest probabilities) up to a minimum probability mass of 0.4. This threshold was empirically selected as it eliminated noisy entries without discarding a significant number of relevant pairs. The similarity estimation based on CL-ASA is finally computed as:",
"cite_spans": [
{
"start": 217,
"end": 237,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03d5(dq, d \u2032 ) = \u033a(d \u2032 ) p(dq | d \u2032 ).",
"eq_num": "(4)"
}
],
"section": "CL-Alignment-based Similarity Analysis",
"sec_num": "3.2"
},
{
"text": "This model, the simplest of those compared in this research, has been used in (monolingual) Authorship Attribution (Keselj et al., 2003) as well as cross-language Information Retrieval (Mcnamee and Mayfield, 2004) . The simplified alphabet considered is \u03a3 = {a, . . . , z, 0, . . . , 9}; any other symbol is discarded (cf. Table 1 ). The resulting text strings are encoded as character 3-grams, which are weighted by the standard tf-idf (this value of n has previously been shown to produce the best results). The similarity between such representations of d q and d \u2032 is estimated by the cosine similarity measure.",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "(Keselj et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 185,
"end": 213,
"text": "(Mcnamee and Mayfield, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 323,
"end": 330,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "CL-Character n-Gram Analysis",
"sec_num": "3.3"
},
{
"text": "The objective of our experiments is to compare the performance of the three similarity estimation models. Section 4.1 introduces the corpora we have exploited. The experimental framework is described in Section 4.2. Section 4.3 illustrates how the models work, and the obtained results are presented and discussed in Section 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In other Information Retrieval tasks a plethora of corpora is available for experimental and comparison purposes. However, plagiarism implies an ethical infringement and, to the best of our knowledge, there are no corpora of actual cases available, other than some seminal efforts on creating corpora of text reuse (Clough et al., 2002) , artificial plagiarism (Potthast et al., 2009) , and simulated plagiarism (Clough and Stevenson, 2010) . The problem is worse for cross-language plagiarism. Therefore, in our experiments we use two parallel corpora: Software, an en-eu translation memory of software manuals generously supplied by Elhuyar Fundazioa 5 ; and Consumer, a corpus extracted from a consumer oriented magazine that includes articles written in Spanish along with their Basque, Catalan, and Galician translations 6 (Alc\u00e1zar, 2006) . Software includes 288,000 parallel sentences; 8.66 (6.83) words per sentence in the English (Basque) section. Consumer contains 58,202 sentences; 19.77 (15.20) words per sentence in Spanish (Basque). These corpora also reflect the imbalance of text available in the different languages.",
"cite_spans": [
{
"start": 315,
"end": 336,
"text": "(Clough et al., 2002)",
"ref_id": "BIBREF8"
},
{
"start": 361,
"end": 384,
"text": "(Potthast et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 412,
"end": 440,
"text": "(Clough and Stevenson, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 828,
"end": 843,
"text": "(Alc\u00e1zar, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "We consider D q and D \u2032 to be two entire documents from which plagiarised sentences and their source are to be detected. We work at sentence level, and not with entire documents, for two main reasons: (i) we are focused on the exhaustive comparison stage of the plagiarism detection process (cf. Section 1); and (ii) even a single sentence could be considered a case of plagiarism, as it transmits a complete idea. However, a plagiarised sentence is usually not enough to automatically negate the validity of an entire document. This decision is left to the human expert, who can examine the documents where several plagiarised sentences occur. Note that the task becomes computationally more expensive as, for every sentence, we are looking through thousands of topically-related sentences that are potential sources of d q , and not only those of a specific document. Table 2: Length models estimated for each training partition f 1,...,5 . The values describe a normal distribution centred at \u00b5 with standard deviation \u03c3, representing the expected length of the source text given the suspicious one.",
"cite_spans": [],
"ref_spans": [
{
"start": 870,
"end": 877,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "4.2"
},
{
"text": "CLPD is considered a ranking problem. Let d q \u2208 D q be a plagiarism suspicious sentence and d \u2032 \u2208 D \u2032 be its source sentence. We consider that the result of the process is correct if, given d q , d \u2032 is properly retrieved. A 5-fold cross validation for both en-eu and es-eu was performed. Bilingual dictionaries, language and length models were estimated with the corresponding training partitions. The computed values for \u00b5 and \u03c3 are those included in Table 2 . The values for the different partitions are very similar, showing the low variability in the translation lengths. On the basis of these estimated parameters, an example of length factor for a specific sentence is plotted in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 2",
"ref_id": null
},
{
"start": 687,
"end": 693,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "4.2"
},
{
"text": "In the test partitions, for each suspicious sentence d q , 11,640 source candidate sentences exist for es-eu and 57,290 for en-eu. This results in more than 135 million and 3 billion comparisons carried out for es-eu and en-eu, respectively. Table 3: Entries in the bilingual dictionary for the words in d q . Relevant entries for the example are in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "4.2"
},
{
"text": "In order to clarify how the different models work, consider the following sentence pair, a suspicious sentence d q written in Basque and its source d \u2032 written in English (sentences are short for illustrative purposes):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Illustration of Models",
"sec_num": "4.3"
},
{
"text": "dq beste dokumentu batzuetako makroak ezin dira atzitu. d \u2032 macros from other documents are not accessible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Illustration of Models",
"sec_num": "4.3"
},
{
"text": "In this case, symbols and spaces are discarded. Sentences become:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-CNG Example",
"sec_num": null
},
{
"text": "dq bestedokumentubatzuetakomakroakezindiraatzitu d \u2032 macrosfromotherdocumentsarenotaccessible",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-CNG Example",
"sec_num": null
},
{
"text": "Only three 3-grams appear in both sentences (ume, men, ent) . In order to keep the example simple, the 3-grams are weighted by tf only (in the actual experiments, tf -idf is used), resulting in a dot product of 3. The corresponding vectors magnitudes are |d q | = 6.70 and |d \u2032 | = 5.65. Therefore, the estimated similarity is \u03d5(d q , d \u2032 ) = 0.079.",
"cite_spans": [
{
"start": 44,
"end": 59,
"text": "(ume, men, ent)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CL-CNG Example",
"sec_num": null
},
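The CL-CNG arithmetic above can be reproduced in a few lines of Python. This is a minimal sketch using plain tf weighting, as in the simplified example; the exact magnitudes reported above (6.70 and 5.65) depend on weighting details not reproduced in this excerpt, so the computed similarity comes out near, but not exactly at, 0.079:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram term-frequency vector (tf weighting only;
    the actual experiments use tf-idf)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse tf vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

dq = char_ngrams("bestedokumentubatzuetakomakroakezindiraatzitu")
ds = char_ngrams("macrosfromotherdocumentsarenotaccessible")

shared = dq.keys() & ds.keys()
print(sorted(shared))  # ['ent', 'men', 'ume'] -- all from (d|k)okument(u|s)
print(cosine(dq, ds))  # a low similarity, in the vicinity of 0.07-0.08
```

The three shared 3-grams all come from the cognate pair dokumentu/documents, which illustrates why CL-CNG relies on shared vocabulary roots.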
{
"text": "In this case, the text must be tokenised and lemmatised, resulting in the following string:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-ASA Example",
"sec_num": null
},
{
"text": "dq beste dokumentu batzu makro ezin izan atzi . d \u2032 macro from other document be not accessible .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-ASA Example",
"sec_num": null
},
{
"text": "The sentences' lengths are |d q | = 38 and |d \u2032 | = 39. Therefore, on the basis of Eq. 2, the length factor between them is \u033a(d q , d \u2032 ) = 0.998.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-ASA Example",
"sec_num": null
},
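Eq. 2 is not reproduced in this excerpt; the following sketch assumes the usual Gaussian form of a length factor over the length ratio, with illustrative values for the mean and standard deviation (the real values are estimated per training partition, cf. Table 2):

```python
import math

def length_factor(len_q, len_s, mu, sigma):
    """Gaussian length model: plausibility of the source length |d'|
    given the suspicious length |dq|, assuming the translation length
    ratio is normally distributed around mu with spread sigma.
    mu and sigma here are illustrative, not the Table 2 estimates."""
    ratio = len_s / len_q
    return math.exp(-0.5 * ((ratio - mu) / sigma) ** 2)

# |dq| = 38 and |d'| = 39, as in the example: the ratio is close to 1,
# so with plausible parameters the factor approaches its maximum of 1.
print(length_factor(38, 39, mu=1.0, sigma=0.3))
```

With parameters close to the estimated ones, this yields a value near the 0.998 reported above; the factor penalises candidate sources whose length is implausible for a translation of the suspicious sentence.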
{
"text": "The relevant entries of the previously estimated dictionary are included in Table 3 . Such entries are substituted in Eq. 3, and the overall process results in a similarity \u03d5(d q , d \u2032 ) = 2.74. Whereas not a stochastic value, this is a weight used when ranking all the potential source sentences in D \u2032 .",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "CL-ASA Example",
"sec_num": null
},
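Neither Eq. 3 nor the Table 3 dictionary entries are reproduced in this excerpt. The following is a hypothetical sketch, assuming CL-ASA sums bilingual-dictionary translation probabilities over all cross-language word pairs and scales the sum by the length factor; the dictionary probabilities below are invented for illustration and are not the Table 3 values:

```python
def cl_asa_similarity(dq_tokens, ds_tokens, trans_prob, length_factor):
    """Hypothetical CL-ASA-style score: sum of translation probabilities
    over all cross-language word pairs, scaled by the length factor.
    `trans_prob` maps (source word, suspicious word) -> probability;
    the entries used here are made up, not the Table 3 estimates."""
    total = sum(trans_prob.get((x, y), 0.0)
                for x in ds_tokens for y in dq_tokens)
    return length_factor * total

# Toy dictionary entries (invented for illustration only).
p = {("macro", "makro"): 0.9,
     ("document", "dokumentu"): 0.8,
     ("other", "beste"): 0.7}
dq = "beste dokumentu batzu makro ezin izan atzi .".split()
ds = "macro from other document be not accessible .".split()
print(cl_asa_similarity(dq, ds, p, 0.998))
```

As in the paper's example, the resulting score can exceed 1: it is a ranking weight, not a probability.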
{
"text": "In this case, the same pre-processing than in CL-ASA is performed. In T+MA d q is translated into L \u2032 , resulting in the new pair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T+MA Example",
"sec_num": null
},
{
"text": "d \u2032",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T+MA Example",
"sec_num": null
},
{
"text": "q other document macro cannot be access . d \u2032 macro from other document be not accessible .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T+MA Example",
"sec_num": null
},
{
"text": "Note that d \u2032 q is a valid translation of d q . Nevertheless, it has few syntactic relation to d \u2032 . Therefore, applying more sophisticated codifications than the cosine measure over bag-of-words is not an option. The example is again simplified by weighting the words based on tf . Five words appear in both sentences, resulting in a dot product of 5. The vectors magnitudes are |d \u2032 q | = |d \u2032 | = \u221a 7. The estimation by T+MA is \u03d5(d q , d \u2032 ) = 0.71, a high similarity level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T+MA Example",
"sec_num": null
},
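The monolingual cosine step can be sketched as follows, with tf weighting as in the example. Note that with our straightforward token counts the two magnitudes come out as \u221a7 and \u221a8, so the computed value differs slightly from the 0.71 reported above, which takes both magnitudes as \u221a7:

```python
from collections import Counter
import math

def bow_cosine(tokens_a, tokens_b):
    """tf-weighted bag-of-words cosine, as in the simplified example."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

d_q_translated = "other document macro cannot be access .".split()
d_source       = "macro from other document be not accessible .".split()

shared = set(d_q_translated) & set(d_source)
print(sorted(shared))  # 5 shared tokens: '.', 'be', 'document', 'macro', 'other'
print(bow_cosine(d_q_translated, d_source))  # a high similarity, ~0.67-0.71
```

After translation, the problem reduces to monolingual similarity estimation, which is what makes T+MA so simple to implement.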
{
"text": "For evaluation we consider a standard measure: Recall. More specifically Recall after n texts have been retrieved (n = [1 . . . , 50]). Figure 3 plots the average Recall value obtained in the 5-folds with respect to the rank position (n).",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
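Recall@n as used here can be computed as follows; the query identifiers and ranks below are made up for illustration:

```python
def recall_at_n(source_ranks, n):
    """Fraction of suspicious sentences whose true source appears within
    the top-n positions of its ranking. `source_ranks` maps each query
    to the 1-based rank of its true source sentence."""
    return sum(1 for rank in source_ranks.values() if rank <= n) / len(source_ranks)

# Hypothetical ranks of the true source for four suspicious sentences.
ranks = {"q1": 1, "q2": 3, "q3": 20, "q4": 70}
print(recall_at_n(ranks, 1))   # 0.25 -- only q1's source is ranked first
print(recall_at_n(ranks, 50))  # 0.75 -- q4's source falls outside the top 50
```

Averaging this value over the test queries of each fold, for n = 1 to 50, yields curves of the kind plotted in Figure 3.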
{
"text": "In both language pairs, CL-CNG obtained worse results than those reported for English-Polish by Potthast et al. (2010) : R@50 = 0.68 vs. R@50 = 0.53 for es-eu and 0.28 for en-eu. This is due to the fact that neither the vocabulary nor its corresponding roots keep important relations. Therefore, when language pairs have a low syntactical relationship, CL-CNG is not an option. Still, CL-CNG performs better with es-eu than with en-eu because the first pair is composed of contact languages (cf. Section 1).",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "Potthast et al. (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "About CL-ASA, the results obtained with eseu and en-eu are quite different: R@50 = 0.68 for en-eu and R@50 = 0.53 for es-eu. Whereas in the first case they are comparable to those of CL-CNG, in the second one CL-ASA completely outperforms it. The improvement of CL-ASA obtained for en-eu is due to the size of the training corpus available in this case (approximately five times the number of sentences available for eseu). This shows the sensitivity of the model with respect to the size of the available resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "Lastly, although T+MA is a simple approach that reduces the cross-language similarity estimation to a translation followed by a monolingual process, it obtained a good performance (R@50= 0.77 for en-eu and R@50=0.89 for es-eu). Moreover, this method proved to be less sensitive than CL-ASA to the lack of resources. This could be due to the fact that it considers both directions of the translation model (e[n|s]-eu and eu- e [n|s] ). Additionally, the language model, applied in order to compose syntactically correct translations, reduces the amount of wrong translations and, indirectly, includes more syntactic information in the process. On the contrary, CL-ASA only considers one direction translation model eue [n|s] and completely disregards syntactical relations between the texts. Note that the better results come at the cost of higher computational demand. CL-CNG only requires easy to compute string comparisons. CL-ASA requires translation probabilities from aligned corpora, but once the probabilities are estimated, cross-language similarity can be computed very fast. T+MA requires the previous translation of all the texts, which can be very costly for large collections.",
"cite_spans": [
{
"start": 426,
"end": 431,
"text": "[n|s]",
"ref_id": null
},
{
"start": 718,
"end": 723,
"text": "[n|s]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "In a society where information in multiple languages is available on the Web, cross-language plagiarism is occurring every day with increasing frequency. Still, cross-language plagiarism detection has not been approached sufficiently due to its intrinsic complexity. Though few attempts have been made, even less work has been made to tackle this problem for less resourced languages, and to explore distant language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "We investigated the case of Basque, a language where, due to the lack of resources, crosslanguage plagiarism is often committed from texts in Spanish and English. Basque has no known relatives in the language family. However, it shares some of its vocabulary with Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "Two state-of-the-art methods based on translation probabilities and n-gram overlapping, and a novel technique based on statistical machine translation were evaluated. The novel technique obtains the best results in both language pairs, with the n-gram overlap technique performing worst. In this sense, our results complement those of Potthast et al. (2010) , which includes closely related language pairs as well.",
"cite_spans": [
{
"start": 335,
"end": 357,
"text": "Potthast et al. (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "Our results also show that better results come at the cost of more expensive processing time. For the future, we would like to investigate such performance trade-offs in more demanding datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "For future work we consider that exploring semantic text features across languages could improve the results. It could be interesting to further analyse how the reordering of words through translations might be relevant for this task. Additionally, working with languages even more distant from each other, such as Arabic or Hindi, seems to be a challenging and interesting task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "5"
},
{
"text": "This schema was formerly proposed for monolingual plagiarism detection. Nevertheless, it can be applied without further modifications to the cross-language case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Less resourced language is that with a low degree of representation on the Web(Alegria et al., 2009). Whereas the available text for German, French or Spanish is less than for English, the difference is more dramatic with other languages such as Basque.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.illc.uva.nl/EuroWordNet/ 4 The authors do not mention the origin of the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.elhuyar.org 6 http://revista.consumer.es",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work of the first two authors is partially funded by CONACYT-Mexico and the MICINN project TEXT-ENTERPRISE 2.0 TIN2009-13391-C04-03 (Plan I+D+i).The research work of the last two authors is partially funded by the MICINN projects OPENMT-2 TIN2009-14675-C03-01 and KNOW2 TIN2009-14715-C04-01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards Linguistically Searchable Text",
"authors": [
{
"first": "Asier",
"middle": [],
"last": "Alc\u00e1zar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the BIDE 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alc\u00e1zar, Asier. 2006. Towards Linguistically Search- able Text. In Proceedings of the BIDE 2005, Bilbao, Basque Country.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of the SEPLN 2009 Workshop on Information Retrieval and Information Extraction for Less Resourced Languages",
"authors": [
{
"first": "I\u00f1aki",
"middle": [],
"last": "Alegria",
"suffix": ""
},
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Sarasola",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alegria, I\u00f1aki, Mikel L. Forcada, and Kepa Sara- sola, editors. 2009. Proceedings of the SEPLN 2009 Workshop on Information Retrieval and Infor- mation Extraction for Less Resourced Languages, Donostia, Basque Country. University of the Basque Country.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On Cross-lingual Plagiarism Analysis Using a Statistical Model",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Alfons",
"middle": [],
"last": "Juan",
"suffix": ""
}
],
"year": 2008,
"venue": "ECAI 2008 Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN 2008)",
"volume": "",
"issue": "",
"pages": "9--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barr\u00f3n-Cede\u00f1o, Alberto, Paolo Rosso, David Pinto, and Alfons Juan. 2008. On Cross-lingual Plagia- rism Analysis Using a Statistical Model. In Stein, Stamatatos, and Koppel, editors, ECAI 2008 Work- shop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN 2008), pages 9-13, Patras, Greece. CEUR-WS.org.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Monolingual Text Similarity Measures: A Comparison of Models over Wikipedia Articles Revisions",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Eiselt",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "Sharma, Verma, and Sangal",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barr\u00f3n-Cede\u00f1o, Alberto, Andreas Eiselt, and Paolo Rosso. 2009. Monolingual Text Similarity Mea- sures: A Comparison of Models over Wikipedia Ar- ticles Revisions. In Sharma, Verma, and Sangal, ed- itors, ICON 2009, pages 29-38, Hyderabad, India. Macmillan Publishers.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the Resemblance and Containment of Documents",
"authors": [
{
"first": "Andrei",
"middle": [
"Z"
],
"last": "Broder",
"suffix": ""
}
],
"year": 1997,
"venue": "Compression and Complexity of Sequences (SEQUENCES'97)",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Broder, Andrei Z. 1997. On the Resemblance and Containment of Documents. In Compression and Complexity of Sequences (SEQUENCES'97), pages 21-29. IEEE Computer Society.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Pa- rameter Estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multilingual Plagiarism Detection",
"authors": [
{
"first": "Zdenek",
"middle": [],
"last": "Ceska",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Toman",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Jezek",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 13th International Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "83--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ceska, Zdenek, Michal Toman, and Karel Jezek. 2008. Multilingual Plagiarism Detection. In Proceedings of the 13th International Conference on Artificial Intelligence, pages 83-92. Springer Verlag Berlin Heidelberg.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Developing a Corpus of Plagiarised Short Answers. Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Clough",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clough, Paul and Mark Stevenson. 2010. Developing a Corpus of Plagiarised Short Answers. Language Resources and Evaluation: Special Issue on Plagia- rism and Authorship Analysis.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Building and Annotating a Corpus for the Study of Journalistic Text Reuse",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Clough",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Piao",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC 2002), volume V",
"volume": "",
"issue": "",
"pages": "1678--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clough, Paul, Robert Gaizauskas, and Scott Piao. 2002. Building and Annotating a Corpus for the Study of Journalistic Text Reuse. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC 2002), volume V, pages 1678-1691, Las Palmas, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Cross-Language Retrieval Using Latent Semantic Indexing",
"authors": [
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Todd",
"middle": [
"A"
],
"last": "Letsche",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
}
],
"year": 1997,
"venue": "AAAI-97 Spring Symposium Series: Cross-Language Text and Speech Retrieval",
"volume": "",
"issue": "",
"pages": "24--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dumais, Susan T., Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. 1997. Auto- matic Cross-Language Retrieval Using Latent Se- mantic Indexing. In AAAI-97 Spring Symposium Series: Cross-Language Text and Speech Retrieval, pages 24-26. Stanford University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Methods for Identifying Versioned and Plagiarized Documents",
"authors": [
{
"first": "Timothy",
"middle": [
"C"
],
"last": "Hoad",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "54",
"issue": "3",
"pages": "203--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoad, Timothy C. and Justin Zobel. 2003. Meth- ods for Identifying Versioned and Plagiarized Doc- uments. Journal of the American Society for Infor- mation Science and Technology, 54(3):203-215.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "N-gram-based Author Profiles for Authorship Attribution",
"authors": [
{
"first": "Vlado",
"middle": [],
"last": "Keselj",
"suffix": ""
},
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Cercone",
"suffix": ""
},
{
"first": "Calvin",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference Pacific Association for Computational Linguistics, PACLING'03",
"volume": "",
"issue": "",
"pages": "255--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keselj, Vlado, Fuchun Peng, Nick Cercone, and Calvin Thomas. 2003. N-gram-based Author Profiles for Authorship Attribution. In Proceedings of the Conference Pacific Association for Computational Linguistics, PACLING'03, pages 255-264, Halifax, Canada.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Annual Meeting of the Association for Computational Linguistics (ACL), demonstra- tion session, Prague, Czech Republic.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Plagiarism -A Survey",
"authors": [
{
"first": "Hermann",
"middle": [],
"last": "Maurer",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Kappe",
"suffix": ""
},
{
"first": "Bilal",
"middle": [],
"last": "Zaka",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Universal Computer Science",
"volume": "12",
"issue": "8",
"pages": "1050--1084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurer, Hermann, Frank Kappe, and Bilal Zaka. 2006. Plagiarism -A Survey. Journal of Universal Com- puter Science, 12(8):1050-1084.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Character N-Gram Tokenization for European Language Text Retrieval",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2004,
"venue": "Information Retrieval",
"volume": "7",
"issue": "1-2",
"pages": "73--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mcnamee, Paul and James Mayfield. 2004. Character N-Gram Tokenization for European Language Text Retrieval. Information Retrieval, 7(1-2):73-97.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improved Machine Translation Performace via Parallel Sentence Extraction from Comparable Corpora",
"authors": [
{
"first": "Dragos",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL 2004)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Munteanu, Dragos S., Alexander Fraser, and Daniel Marcu. 2004. Improved Machine Translation Performace via Parallel Sentence Extraction from Comparable Corpora. In Proceedings of the Hu- man Language Technology and North American As- sociation for Computational Linguistics Conference (HLT/NAACL 2004), Boston, MA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Frank Josef and Hermann Ney. 2003. A System- atic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51. See also http://www.fjoch.com/GIZA++.html.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Statistical Approach to Crosslingual Natural Language Tasks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Alfons",
"middle": [],
"last": "Juan",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Algorithms",
"volume": "64",
"issue": "1",
"pages": "51--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinto, David, Jorge Civera, Alberto Barr\u00f3n-Cede\u00f1o, Alfons Juan, and Paolo Rosso. 2009. A Statistical Approach to Crosslingual Natural Language Tasks. Journal of Algorithms, 64(1):51-60.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Wikipedia-Based Multilingual Retrieval Model",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Maik",
"middle": [],
"last": "Anderka",
"suffix": ""
}
],
"year": 2008,
"venue": "30th European Conference on IR Research",
"volume": "4956",
"issue": "",
"pages": "522--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Potthast, Martin, Benno Stein, and Maik Anderka. 2008. A Wikipedia-Based Multilingual Retrieval Model. In Macdonald, Ounis, Plachouras, Ruthven, and White, editors, 30th European Conference on IR Research, ECIR 2008, Glasgow, volume 4956 LNCS of Lecture Notes in Computer Science, pages 522-530, Berlin Heidelberg New York. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Overview of the 1st International Competition on Plagiarism Detection",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Eiselt",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "SEPLN 2009 Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN 09)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Potthast, Martin, Benno Stein, Andreas Eiselt, Alberto Barr\u00f3n-Cede\u00f1o, and Paolo Rosso. 2009. Overview of the 1st International Competition on Plagiarism Detection. In Stein, Rosso, Stamatatos, Koppel, and Agirre, editors, SEPLN 2009 Workshop on Uncov- ering Plagiarism, Authorship, and Social Software Misuse (PAN 09), pages 1-9, San Sebastian, Spain. CEUS-WS.org.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-Language Plagiarism Detection. Language Resources and Evaluation",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Potthast, Martin, Alberto Barr\u00f3n-Cede\u00f1o, Benno Stein, and Paolo Rosso. 2010. Cross-Language Pla- giarism Detection. Language Resources and Eval- uation, Special Issue on Plagiarism and Authorship Analysis.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic Identification of Document Translations in Large Multilingual Document Collections",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Camelia",
"middle": [],
"last": "Ignat",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-2003)",
"volume": "",
"issue": "",
"pages": "401--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pouliquen, Bruno, Ralf Steinberger, and Camelia Ig- nat. 2003. Automatic Identification of Docu- ment Translations in Large Multilingual Document Collections. In Proceedings of the International Conference on Recent Advances in Natural Lan- guage Processing (RANLP-2003), pages 401-408, Borovets, Bulgaria.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Winnowing: Local Algorithms for Document Fingerprinting",
"authors": [
{
"first": "Saul",
"middle": [],
"last": "Schleimer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Wilkerson",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Aiken",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schleimer, Saul, Daniel S. Wilkerson, and Alex Aiken. 2003. Winnowing: Local Algorithms for Document Fingerprinting. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, New York, NY. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SCAM: A Copy Detection Mechanism for Digital Documents",
"authors": [
{
"first": "Narayanan",
"middle": [],
"last": "Shivakumar",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Garc\u00eda-Molina",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 2nd Annual Conference on the Theory and Practice of Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivakumar, Narayanan and Hector Garc\u00eda-Molina. 1995. SCAM: A Copy Detection Mechanism for Digital Documents. In Proceedings of the 2nd An- nual Conference on the Theory and Practice of Dig- ital Libraries.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Strategies for Retrieving Plagiarized Documents",
"authors": [
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Meyer Zu Eissen",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "825--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stein, Benno, Sven Meyer zu Eissen, and Martin Pot- thast. 2007. Strategies for Retrieving Plagiarized Documents. In Clarke, Fuhr, Kando, Kraaij, and de Vries, editors, Proceedings of the 30th Annual Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, pages 825- 826, Amsterdam, The Netherlands. ACM.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cross-lingual Document Similarity Calculation Using the Multilingual Thesaurus EU-ROVOC. Computational Linguistics and Intelligent Text Processing",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hagman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the CICLing",
"volume": "2276",
"issue": "",
"pages": "415--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steinberger, Ralf, Bruno Pouliquen, and Johan Hag- man. 2002. Cross-lingual Document Similarity Calculation Using the Multilingual Thesaurus EU- ROVOC. Computational Linguistics and Intelligent Text Processing. Proceedings of the CICLing 2002, 2276:415-424.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Widiger",
"suffix": ""
},
{
"first": "Camelia",
"middle": [],
"last": "Ignat",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006)",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steinberger, Ralf, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaz Erjavec, Dan Tufis, and D\u00e1niel Varga. 2006. The JRC-Acquis: A multilin- gual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), volume 9, Genoa, Italy.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SRILM -An Extensible Language Modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Intl. Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas. 2002. SRILM -An Extensible Lan- guage Modeling toolkit. In Intl. Conference on Spo- ken Language Processing, Denver, Colorado.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Basque language",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikipedia",
"suffix": ""
}
],
"year": 2010,
"venue": "Online; accessed",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikipedia. 2010a. Basque language. [Online; ac- cessed 5-February-2010].",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Party of European Socialists | Partido Socialista Europeo | Europako Alderdi Sozialista",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikipedia",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikipedia. 2010b. Party of European Socialists | Par- tido Socialista Europeo | Europako Alderdi Sozial- ista . [Online; accessed 10-February-2010].",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 1: First sentences from the Wikipedia articles \"Party of European Socialists\" (en),\"Partido Socialista Europeo\" (es), and \"Europako Alderdi Sozialista\" (eu) (Wikipedia, 2010b).",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Example length factor for a sentence written in Basque(eu) d q , such that |d q | = 90.The normal distributions represent the expected lengths for the translation d \u2032 , either in Spanish (es) or English (en).",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Evaluation of the cross-language ranking. Results plotted as rank versus Recall for the three evaluated models and the two language pairs (R@[1, . . . , 50]).",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"text": "Text preprocessing operations required for the different models. low=lowercasing, tok=tokenization, pd=punctuation marks deletion, bd=blank space deletion, sd=symbols deletion, lem=lematization.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "tools, generating d \u2032 q . The translation system uses a log-linear combination of state-of-the-art features, such as translation probabilities and lexical translation models on both directions and a target language model. After translation, d \u2032 q and d \u2032 are lexically related, making possible a monolingual comparison.",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}