|
{ |
|
"paper_id": "R11-1034", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:03:51.104136Z" |
|
}, |
|
"title": "Using Cognates in a French -Romanian Lexical Alignment System: A Comparative Study", |
|
"authors": [ |
|
{ |
|
"first": "Mirabela", |
|
"middle": [], |
|
"last": "Navlea", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "rue Ren\u00e9 Descartes BP", |
|
"location": { |
|
"addrLine": "LiLPa) Universit\u00e9 de Strasbourg 22", |
|
"postCode": "80010, 67084", |
|
"settlement": "Linguistique, Langues, Parole, Strasbourg cedex" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Amalia", |
|
"middle": [], |
|
"last": "Todira\u015fcu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Linguistique, Langues, Parole (LiLPa) Universit\u00e9 de Strasbourg 22, rue Ren\u00e9 Descartes BP", |
|
"institution": "", |
|
"location": { |
|
"postCode": "80010, 67084", |
|
"settlement": "Strasbourg cedex" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes a hybrid French-Romanian cognate identification module. This module is used by a lexical alignment system. Our cognate identification method uses lemmatized, tagged and sentence-aligned parallel corpora. This method combines statistical techniques, linguistic information (lemmas, POS tags) and orthographic adjustments. We evaluate our cognate identification module and we compare it to other methods using pure statistical techniques. Thus, we study the impact of the used linguistic information and the orthographic adjustments on the results of the cognate identification module and on cognate alignment. Our method obtains the best results in comparison with the other implemented statistical methods.", |
|
"pdf_parse": { |
|
"paper_id": "R11-1034", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes a hybrid French-Romanian cognate identification module. This module is used by a lexical alignment system. Our cognate identification method uses lemmatized, tagged and sentence-aligned parallel corpora. This method combines statistical techniques, linguistic information (lemmas, POS tags) and orthographic adjustments. We evaluate our cognate identification module and we compare it to other methods using pure statistical techniques. Thus, we study the impact of the used linguistic information and the orthographic adjustments on the results of the cognate identification module and on cognate alignment. Our method obtains the best results in comparison with the other implemented statistical methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We present a new French -Romanian cognate identification module, integrated into a lexical alignment system using French -Romanian parallel law corpora. We define cognates as translation equivalents having an identical form or sharing orthographic or phonetic similarities (common etymology, borrowings) . Cognates are very frequent between close languages such as French and Romanian, two Latin languages with a rich morphology. So, they represent important lexical cues in a French -Romanian lexical alignment system. Few linguistic resources and tools for Romanian (dictionaries, parallel corpora, MT systems) are currently available. Some lexically aligned corpora or lexical alignment tools (Tufi\u015f et al., 2005) are available for Romanian -English or Romanian -German (Vertan and Gavril\u0103, 2010) . Most of the cognate identification modules used by these systems are purely statistical. As far as we know, no cognate identification method is available for French and Romanian. Cognate identification is a difficult task due to the high orthographic similarities between bilingual pairs of words having different meanings. Inkpen et al. (2005) develop classifiers for French and English cognates based on several dictionaries and manually built lists of cognates. Inkpen et al. (2005) distinguish between: -cognates (liste (FR) -list (EN)); -false friends (blesser ('to injure') (FR) -bless (EN)); -partial cognates (facteur (FR) -factor or mailman (EN)); -genetic cognates (chef (FR) -head (EN)); -unrelated pairs of words (glace (FR) -ice (EN) and glace (FR) -chair (EN)). Our cognate detection method identifies cognates, partial and genetic cognates. This method is used especially to improve a French -Romanian lexical alignment system. So, we aim to obtain a high precision of our cognate identification method. Thus, we eliminate false friends and unrelated pairs of words combining statistical techniques and linguistic information (lemmas, POS tags). 
We use a lemmatized, tagged and sentence-aligned parallel corpus. Unlike Inkpen et al. (2005) , we do not use other external resources (dictionaries, lists of cognates). To detect cognates from parallel corpora, several approaches exploit the orthographic similarity between the two words of a bilingual pair. An efficient method is the 4-gram method (Simard et al., 1992) . This method considers two words as cognates if their length is greater than or equal to 4 and at least their first 4 characters are common. Other methods exploit Dice's coefficient (Adamson and Boreham, 1974) or a variant of this coefficient (Brew and McKelvie, 1996) . This measure computes the ratio between the number of common character bigrams of the two words and the total number of bigrams in the two words. Also, some methods use the Longest Common Subsequence Ratio (LCSR) (Melamed, 1999; Kraif, 1999) . LCSR is computed as the ratio between the length of the longest common subsequence of ordered (and not necessarily contiguous) characters and the length of the longest word. Thus, two words are considered as cognates if the LCSR value is greater than or equal to a given threshold. Similarly, other methods compute the edit distance between two words, which represents the minimum number of substitutions, insertions and deletions needed to transform one word into the other (Wagner and Fischer, 1974) . These methods use exclusively statistical techniques and are language independent. On the other hand, some methods use the phonetic distance between the two words of a bilingual pair (Oakes, 2000) . Kondrak (2009) identifies three characteristics of cognates: recurrent sound correspondences, phonetic similarity and semantic affinity. Thus, our method exploits orthographic and phonetic similarities between French -Romanian cognates. 
We combine n-gram methods with linguistic information (lemmas, POS tags) and several input data disambiguation strategies (computing cognates' frequencies, iterative extraction of the most reliable cognates and their deletion from the input data). Our method needs no external resources (bilingual dictionaries), so it could easily be extended to other Romance languages. We aim at a high accuracy, as our method is to be integrated into a lexical alignment system. We evaluate our method and compare it with purely statistical methods to study the influence of the linguistic information used on the final results and on cognate alignment. In the next section, we present the parallel corpora used for our experiments. In section 3, we present the lexical alignment method. We describe our cognate identification module in section 4. We present the evaluation of our method and a comparison with other methods in section 5. Our conclusions and further work appear in section 6.",
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 303, |
|
"text": "(common etymology, borrowings)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 716, |
|
"text": "(Tufi\u015f et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 773, |
|
"end": 799, |
|
"text": "(Vertan and Gavril\u0103, 2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1126, |
|
"end": 1146, |
|
"text": "Inkpen et al. (2005)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1267, |
|
"end": 1287, |
|
"text": "Inkpen et al. (2005)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1544, |
|
"end": 1548, |
|
"text": "(EN)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2029, |
|
"end": 2056, |
|
"text": "Unlike Inkpen et al. (2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2310, |
|
"end": 2331, |
|
"text": "(Simard et al., 1992)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2515, |
|
"end": 2543, |
|
"text": "(Adam-son and Boreham, 1974)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2577, |
|
"end": 2602, |
|
"text": "(Brew and McKelvie, 1996)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 2810, |
|
"end": 2825, |
|
"text": "(Melamed, 1999;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 2826, |
|
"end": 2838, |
|
"text": "Kraif, 1999)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 3301, |
|
"end": 3327, |
|
"text": "(Wagner and Fischer, 1974)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 3524, |
|
"end": 3537, |
|
"text": "(Oakes, 2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 3540, |
|
"end": 3554, |
|
"text": "Kondrak (2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our experiments, we use a legal parallel corpus (DGT-TM 1 ) based on the Acquis Communautaire corpus. This multilingual corpus is available in 22 official languages of EU member states. It is composed of laws adopted by EU member states since 1950. DGT-TM contains 9,953,360 tokens in French and 9,142,291 tokens in Romanian. We use a test corpus of 1,000 1:1 aligned complete sentences (starting with a capital letter and finishing with a punctuation sign). The length of each sentence has at most 80 words. This test corpus contains 33,036 tokens in French and 28,645 in Romanian. We use the TTL 2 tagger available for Romanian (Ion, 2007) and for French (Todira\u015fcu et al., 2011) (as Web service 3 ). Thus, the parallel corpus is tokenized, lemmatized, tagged and annotated at chunk level. The tagger uses the set of morpho-syntactic descriptors (MSD) proposed by the Multext Project 4 for French (Ide and V\u00e9ronis, 1994) and for Romanian (Tufi\u015f and Barbu, 1997) . In the Figure 1 , we present an example of TTL's output: lemma attribute represents the lemmas of lexical units, ana attribute provides morphosyntactic information and chunk attribute marks nominal and prepositional phrases. <seg lang=\"FR\"><s id=\"ttlfr.3\"> <w lemma=\"voir\" ana=\"Vmps-s\">vu</w> <w lemma=\"le\" ana=\"Da-fs\" chunk=\"Np#1\">la</w> <w lemma=\"proposition\" ana=\"Ncfs\" chunk=\"Np#1\">proposition</w> <w lemma=\"de\" ana=\"Spd\" chunk=\"Pp#1\">de</w> <w lemma=\"le\" ana=\"Da-fs\" chunk=\"Pp#1,Np#2\">la</w> <w lemma=\"commission\" ana=\"Ncfs\" chunk=\"Pp#1,Np#2\">Commission </w> <c>;</c> </s></seg> 1 http://langtech.jrc.it/DGT-TM.html 2 Tokenizing, Tagging and Lemmatizing free running texts 3 https://weblicht.sfs.uni-tuebingen.de/ 4 http://aune.lpl.univ-aix.FR/projects/multext/", |
|
"cite_spans": [ |
|
{ |
|
"start": 633, |
|
"end": 644, |
|
"text": "(Ion, 2007)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 684, |
|
"text": "(Todira\u015fcu et al., 2011)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 902, |
|
"end": 925, |
|
"text": "(Ide and V\u00e9ronis, 1994)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 966, |
|
"text": "(Tufi\u015f and Barbu, 1997)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 976, |
|
"end": 984, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Parallel Corpus", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The cognate identification module is integrated in a French -Romanian lexical alignment system (see Figure 2 ). In our lexical alignment method, we first use GIZA++ (Och and Ney, 2003) implementing IBM models (Brown et al., 1993) . These models build word-based alignments from aligned sentences. Indeed, each source word has zero, one or more translation equivalents in the target language. As these models do not provide many-tomany alignments, we also use some heuristics (Koehn et al., 2003; Tufi\u015f et al., 2005) to detect phrase-based alignments such as chunks: nominal, adjectival, verbal, adverbial or prepositional phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 184, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 229, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 495, |
|
"text": "(Koehn et al., 2003;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 515, |
|
"text": "Tufi\u015f et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 108, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical Alignment Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In our experiments, we use the lemmatized, tagged and annotated parallel corpus described in section 2. Thus, we use lemmas and morphosyntactic properties to improve the lexical alignment. Lemmas are followed by the two first characters of morpho-syntactic tag. This operation morphologically disambiguates the lemmas (Tufi\u015f et al., 2005) . For example, the same French lemma change (=exchange, modify) can be a common noun or a verb: change_Nc vs. change_Vm. This disambiguation procedure improves the GIZA++ system's performance. We realize bidirectional alignments (FR -RO and RO -FR) with GIZA++, and we intersect them (Koehn et al., 2003) to select common alignments. To improve the word alignment results, we add an external list of cognates to the list of the translation equivalents extracted by GIZA++. This list of cognates is built from parallel corpora by our own method (described in the next section). Also, to complete word alignments, we use a French -Romanian dictionary of verbo-nominal collocations (Todira\u015fcu et al., 2008) . They represent multiword expressions, composed of words related by lexico-syntactic relations (Todira\u015fcu et al., 2008) . The dictionary contains the most frequent verbo-nominal collocations extracted from legal corpora. To augment the recall of the lexical alignment method, we apply a set of linguisticallymotivated heuristic rules (Tufi\u015f et al., 2005) : a) we define some POS affinity classes (a noun might be translated by a noun, a verb or an adjective); b) we align content-words such as nouns, adjectives, verbs, and adverbs, according to the POS affinity classes; c) we align chunks containing translation equivalents aligned in a previous step; d) we align elements belonging to chunks by linguistic heuristics. We develop a language dependent module applying 27 morpho-syntactic contextual heuristic rules (Navlea and Todira\u015fcu, 2010) . 
These rules are defined according to the morpho-syntactic differences between French and Romanian. The architecture of the lexical alignment system is presented in Figure 2 .",
|
"cite_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 338, |
|
"text": "(Tufi\u015f et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 643, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1018, |
|
"end": 1042, |
|
"text": "(Todira\u015fcu et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1139, |
|
"end": 1163, |
|
"text": "(Todira\u015fcu et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1378, |
|
"end": 1398, |
|
"text": "(Tufi\u015f et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1860, |
|
"end": 1888, |
|
"text": "(Navlea and Todira\u015fcu, 2010)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2055, |
|
"end": 2063, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical Alignment Method", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In our hybrid cognate identification method, we use the legal parallel corpus described in section 2. This corpus is tokenized, lemmatized, tagged, and sentence-aligned. Thus, we consider as cognates bilingual word pairs respecting the linguistic conditions below: 1) their lemmas are translation equivalents in two parallel sentences; 2) they have identical lemmas or have orthographic or phonetic similarities between lemmas; 3) they are content-words (nouns, verbs, adverbs, etc.) having the same POS tag or belonging to the same POS affinity class. We filter out short words such as prepositions and conjunctions to limit noisy output. We also detect short cognates such as il 'he' vs. el (personal pronoun), cas 'case' vs. caz (nouns). We avoid ambiguous pairs such as lui 'him' (personal pronoun) (FR) vs. lui 's' (possessive determiner) (RO), ce 'this' (demonstrative determiner) (FR) vs. ce 'that' (relative pronoun) (RO). To detect orthographic and phonetic similarities between cognates, we look at the beginning of the words and we ignore their endings. We classify the French -Romanian cognates detected in the studied parallel corpus (at the orthographic or phonetic level), in several categories: 1) cross-lingual invariants (numbers, certain acronyms and abbreviations, punctuation signs); 2) identical cognates (document 'document' vs. document); 3) similar cognates: a) 4-grams (Simard et al., 1992) ; The first 4 characters of lemmas are identical. The length of these lemmas is greater than or equal to 4 (autorit\u00e9 vs. autoritate 'authority'). b) 3-grams; The first 3 characters of lemmas are identical and the length of the lemmas is greater than or equal to 3 (acte vs. act 'paper'). c) 8-bigrams; Lemmas have a common sequence of characters among the first 8 bigrams. At least one character of each bigram is common to both words. This condition allows the jump of a non identical character (souscrire vs. subscrie 'submit'). 
This method applies only to long lemmas (length greater than 7). d) 4-bigrams; The lemmas have a common sequence of characters among the first 4 bigrams. This method applies to long lemmas (length greater than 7) (homologu\u00e9 vs. omologat 'homologated') but also to short lemmas (length less than or equal to 7) (groupe vs. grup 'group'). We iteratively extract cognates by the identified categories. In addition, we use a set of orthographic adjustments and some input data disambiguation strategies. We compute frequencies for ambiguous candidates (the same source lemma occurs with several target candidates) and we keep the most frequent candidate. At each iteration, we delete the cognates considered reliable from the input data. We start by applying a set of empirically established orthographic adjustments between French -Romanian lemmas, such as diacritic removal, detection of phonetic mappings, etc. (see Table 1 ). We aim to improve the precision of our method. Thus, we iteratively extract cognates by the identified categories, from the surest candidates to the less sure ones (see Table 2 ).",
|
"cite_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 483, |
|
"text": "(nouns, verbs, adverbs, etc.)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1395, |
|
"end": 1416, |
|
"text": "(Simard et al., 1992)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2849, |
|
"end": 2856, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 3020, |
|
"end": 3027, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cognate Identification Module", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "v + s + v v + z + v pr\u00e9sent -prezent w w v wagon -vagon y y i yaourt -iaurt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Levels of orthographic adjustments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To decrease the noise of the cognate identification method, we apply two supplementary strategies. We filter out ambiguous cognate candidates (autorit\u00e9 -autoritate|autorizare), by computing their frequencies in the corpus. In this case, we keep the most frequent candidate pair. This strategy is very effective to augment the precision of the results, but it might decrease the recall in certain cases. Indeed, there are cases where French -Romanian cognates have one form in French, but two various forms in Romanian (sp\u00e9cification 'specification' vs. specificare or specifica\u0163ie). We recover these pairs by using regular expressions based on specific lemma endings (ion (fr) vs. re|\u0163ie (ro)). Then, we delete the reliable cognate pairs (high precision) from the input data at the end of the extraction step. This step helps us to disambiguate the input data. For example, the identical cognates transport vs. transport 'transportation', obtained in a previous extraction step and deleted from the input data, eliminate the occurrence of candidate transport vs. tranzit as 4-grams cognate, in a next extraction step. We apply the same method for cognates having POS affinity (N-V; N-ADJ). We keep only 4grams cognates, due to the significant decrease of the precision for the other categories 3 (b, c, d). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Levels of orthographic adjustments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We evaluated our cognate identification module against a list of cognates initially built from the test corpus, containing 2,034 pairs of cognates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Methods' Comparison", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition, we also compared the results of our method with the results provided by pure statistical methods (see Table 3 ). These methods are the following: a) thresholding the Longest Common Subsequence Ratio (LCSR) for two words of a bilingual pair; This measure computes the ratio between the longest common subsequence of characters of two words and the length of the longest word. We empirically establish the threshold of 0.68. We implemented these methods using orthographically adjusted parallel corpus (see Table 1 ). Moreover, we evaluate 4-grams method on the initial parallel corpus and on the orthographically adjusted parallel corpus to study the impact of orthographic adjustments step on the quality of the results. These methods generally apply for words having at least 4 letters in order to decrease the noise of the results. Cognates are searched in aligned parallel sentences. Word characters are almost parallel (rembourser vs. rambursare 'refund'). Our method extracted 1,814 correct cognates from 1,914 provided candidates. The method obtains the best scores (precision=94.78% ; re-call=89.18% ; f-measure=91.89%), in comparison with the other implemented methods. The 4grams method obtains a high precision (90.85%), but a low recall (47.84%). Orthographic adjustments step improves significantly the recall of 4grams method with 24.58% (see Table 4 ). This result is due to the specific properties of the law parallel corpus. Indeed, many Romanian terms were borrowed from French and these terms present high orthographic similarities. Table 4 Evaluation of the 4-grams method before and after orthographic adjustments step However, our method extracts some ambiguous candidates such as num\u00e9ro 'number' -nume 'name', compl\u00e9ter 'complete' -compune 'compose'. Some of these errors were avoided by keeping the most frequent candidate in the studied corpus. So, the remaining errors mainly concern hapax candidates. 
Also, some cognates were not extracted: heure -or\u0103 'hour', semaine -s\u0103pt\u0103m\u00e2n\u0103 'week', lieu -loc 'place'. These errors concern cognates sharing very few orthographic similarities.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 526, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1370, |
|
"end": 1377, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1565, |
|
"end": 1572, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Methods' Comparison", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The lowest scores are obtained by the LCSR method (f-measure=50.47%), followed by the DICE's coefficient (f-measure=58.61%). These general methods provide a high noise due to the important orthographic similarities between the words having different meanings. Their results might be improved by combining statistical techniques with linguistic information such as POS affinity or by combining several association scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods P (%) R (%) F (%)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As we mentioned, the output of the cognate identification module is exploited by a French -Romanian lexical alignment system (based on GI-ZA++) described in section 3. We compared the set of cognates provided by GIZA++ with our results to study their impact on cognate alignment. GIZA++ extracted 1,532 cognates representing a recall of 75.32% (see Table 5 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 356, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods P (%) R (%) F (%)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our cognate identification module significantly improved the recall with 13.86%. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods P (%) R (%) F (%)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We present a French -Romanian cognate identification module required by a lexical alignment system. Our method combines statistical techniques and linguistic filters to extract cognates from lemmatized, tagged and sentence-aligned parallel corpus. The use of the linguistic information and the orthographic adjustments significantly improves the results compared with pure statistical methods. However, these results are dependent of the studied languages, of the corpus domain and of the data volume. We need more experiments using other corpora from other domains to be able to generalize. Our system should be improved to detect false friends by using external resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Further Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Cognate identification module will be integrated in a French -Romanian lexical alignment system. This system is part of a larger project aiming to develop a factored phrase-based statistical machine translation system for French and Romanian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Further Work", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The use of an association measure based on character structure to identify semantically related pairs of words and document titles", |
|
"authors": [ |
|
{

"first": "George",

"middle": [

"W"

],

"last": "Adamson",

"suffix": ""

},

{

"first": "Jillian",

"middle": [],

"last": "Boreham",

"suffix": ""

}
|
], |
|
"year": 1974, |
|
"venue": "Information Storage and Retrieval", |
|
"volume": "10", |
|
"issue": "7-8", |
|
"pages": "253--260", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George W. Adamson and Jillian Boreham. 1974. The use of an association measure based on character structure to identify semantically related pairs of words and document titles, Information Storage and Retrieval, 10(7-8):253-260.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word-pair ex-traction for lexicography", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mckelvie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of International Conference on New Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Brew and David McKelvie. 1996. Word-pair ex-traction for lexicography, in Proceedings of In- ternational Conference on New Methods in Natural Language Processing, Bilkent, Turkey, 45-55.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{

"first": "Peter",

"middle": [

"F"

],

"last": "Brown",

"suffix": ""

},

{

"first": "Vincent",

"middle": [

"J"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Stephen",

"middle": [

"A"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Robert",

"middle": [

"L"

],

"last": "Mercer",

"suffix": ""

}
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The ma- thematics of statistical machine translation: Para- meter estimation, Computational Linguistics, 19(2):263-312.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Multext (multilingual tools and corpora", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "V\u00e9ronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 15th International Conference on Computational Linguistics, CoLing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "90--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Ide and Jean V\u00e9ronis. 1994. Multext (multilin- gual tools and corpora), in Proceedings of the 15th International Conference on Computational Lin- guistics, CoLing 1994, Kyoto, pp. 90-96.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic Identification of Cognates and False Friends in French and English, RANLP-2005", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Inkpen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Frunz\u0103", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "251--257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana Inkpen, Oana Frunz\u0103, and Grzegorz Kondrak. 2005. Automatic Identification of Cognates and False Friends in French and English, RANLP- 2005, Bulgaria, Sept. 2005, p. 251-257.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Metode de dezambiguizare semantic\u0103 automat\u0103", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Radu Ion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Romanian Academy, Bucharest", |
|
"volume": "148", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radu Ion. 2007. Metode de dezambiguizare semantic\u0103 automat\u0103. Aplica\u0163ii pentru limbile englez\u0103 \u015fi rom\u00e2n\u0103, Ph.D. Thesis, Romanian Academy, Bu- charest, May 2007, 148 pp.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Statistical Phrase-Based Translation, in Proceedings of Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation, in Pro- ceedings of Human Language Technology Confe- rence of the North American Chapter of the Asso- ciation of Computational Linguistics, HLT-NAACL 2003, Edmonton, May-June 2003, pp. 48-54.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Identification of Cognates and Recurrent Sound Correspondences in Word Lists", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Traitement Automatique des Langues (TAL)", |
|
"volume": "50", |
|
"issue": "", |
|
"pages": "201--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Kondrak. 2009. Identification of Cognates and Recurrent Sound Correspondences in Word Lists, in Traitement Automatique des Langues (TAL), 50(2) :201-235.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Identification des cognats et alignement bi-textuel : une \u00e9tude empirique, dans Actes de la 6\u00e8me conf\u00e9rence annuelle sur le Traitement Automatique des Langues Naturelles", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Kraif", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "TALN", |
|
"volume": "99", |
|
"issue": "", |
|
"pages": "205--214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Kraif. 1999. Identification des cognats et ali- gnement bi-textuel : une \u00e9tude empirique, dans Actes de la 6\u00e8me conf\u00e9rence annuelle sur le Trai- tement Automatique des Langues Naturelles, TALN 99, Carg\u00e8se, 12-17 juillet 1999, 205-214.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bitext Maps and Alignment via Pattern Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "107--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan I. Melamed. 1999. Bitext Maps and Alignment via Pattern Recognition, in Computational Linguis- tics, 25(1):107-130.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Linguistic Resources for Factored Phrase-Based Statistical Machine Translation Systems", |
|
"authors": [ |
|
{ |
|
"first": "Mirabela", |
|
"middle": [], |
|
"last": "Navlea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amalia", |
|
"middle": [], |
|
"last": "Todira\u015fcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Workshop on Exploitation of Multilingual Resources and Tools for Central and (South) Eastern European Languages, 7th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mirabela Navlea and Amalia Todira\u015fcu. 2010. Lin- guistic Resources for Factored Phrase-Based Statis- tical Machine Translation Systems, in Proceedings of the Workshop on Exploitation of Multilingual Resources and Tools for Central and (South) East- ern European Languages, 7th International Confe- rence on Language Resources and Evaluation, Malta, Valletta, May 2010, pp. 41-48.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Computer Estimation of Vocabulary in Protolanguage from Word Lists in Four Daughter Languages", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Oakes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "7", |
|
"issue": "3", |
|
"pages": "233--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael P. Oakes. 2000. Computer Estimation of Vo- cabulary in Protolanguage from Word Lists in Four Daughter Languages, in Journal of Quantitative Linguistics, 7(3):233-243.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A Systematic Comparison of Various Statistical Alignment Models", |
|
"authors": [ |
|
{ |
 |
"first": "Franz", |
 |
"middle": [ |
 |
"J" |
 |
], |
 |
"last": "Och", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Hermann", |
 |
"middle": [], |
 |
"last": "Ney", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz J. Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models, in Computational Linguistics, 29(1):19- 51.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using cognates to align sentences", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Isabelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Simard, George Foster, and Pierre Isabelle. 1992. Using cognates to align sentences, in Pro- ceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, Montr\u00e9al, pp. 67-81.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Vers un dictionnaire de collocations multilingue", |
|
"authors": [ |
|
{ |
|
"first": "Amalia", |
|
"middle": [], |
|
"last": "Todira\u015fcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Heid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "\u015etef\u0103nescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Tufi\u015f", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Gledhill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marion", |
|
"middle": [], |
|
"last": "Weller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Rousselot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Cahiers de Linguistique", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "161--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amalia Todira\u015fcu, Ulrich Heid, Dan \u015etef\u0103nescu, Dan Tufi\u015f, Christopher Gledhill, Marion Weller, and Fran\u00e7ois Rousselot. 2008. Vers un dictionnaire de collocations multilingue, in Cahiers de Linguis- tique, 33(1) :161-186, Louvain, ao\u00fbt 2008.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "French text preprocessing with TTL", |
|
"authors": [ |
|
{ |
|
"first": "Amalia", |
|
"middle": [], |
|
"last": "Todira\u015fcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirabela", |
|
"middle": [], |
|
"last": "Navlea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurence", |
|
"middle": [], |
|
"last": "Longo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Romanian Academy", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "151--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amalia Todira\u015fcu, Radu Ion, Mirabela Navlea, and Laurence Longo. 2011. French text preprocessing with TTL, in Proceedings of the Romanian Acad- emy, Series A, Volume 12, Number 2/2011, pp. 151-158, Bucharest, Romania, June 2011, Roma- nian Academy Publishing House. ISSN 1454-9069.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A Reversible and Reusable Morpho-Lexical Description of Romanian", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Tufi\u015f", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [ |
|
"Maria" |
|
], |
|
"last": "Barbu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Recent Advances in Romanian Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Tufi\u015f and Ana Maria Barbu. 1997. A Reversible and Reusable Morpho-Lexical Description of Ro- manian, in Dan Tufi\u015f and Poul Andersen (eds.), Recent Advances in Romanian Language Technol- ogy, pp. 83-93, Editura Academiei Rom\u00e2ne, Bucure\u015fti, 1997. ISBN 973-27-0626-0.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Combined Aligners", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Tufi\u015f", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandru", |
|
"middle": [], |
|
"last": "Ceau\u015fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "\u015etef\u0103nescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Tufi\u015f, Radu Ion, Alexandru Ceau\u015fu, and Dan \u015etef\u0103nescu. 2005. Combined Aligners, in Proceed- ings of the Workshop on Building and Using Paral- lel Texts: Data-Driven Machine Translation and Beyond, pp. 107-110, Ann Arbor, USA, Associa- tion for Computational Linguistics. ISBN 978-973- 703-208-9.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Multilingual applications for rich morphology language pairs, a case study on German Romanian", |
|
"authors": [ |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Vertan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monica", |
|
"middle": [], |
|
"last": "Gavril\u0103", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Multilinguality and Interoperability in Language Processing with Emphasis on Romanian", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "448--460", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristina Vertan and Monica Gavril\u0103. 2010. Multilin- gual applications for rich morphology language pairs, a case study on German Romanian, in Dan Tufi\u015f and Corina For\u0103scu (eds.): Multilinguality and Interoperability in Language Processing with Emphasis on Romanian, Romanian Academy Pub- lishing House, Bucharest, pp. 448-460, ISBN 978- 973-27-1972-5.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The String-to-String Correction Problem", |
|
"authors": [ |
|
{ |
 |
"first": "Robert", |
 |
"middle": [ |
 |
"A" |
 |
], |
 |
"last": "Wagner", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Michael", |
 |
"middle": [ |
 |
"J" |
 |
], |
 |
"last": "Fischer", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 1974, |
|
"venue": "Journal of the ACM", |
|
"volume": "21", |
|
"issue": "1", |
|
"pages": "168--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert A. Wagner and Michael J. Fischer. 1974. The String-to-String Correction Problem, Journal of the ACM, 21(1):168-173.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "TTL's output for French (in XCES format)", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Lexical alignment system architecture", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "For example, cognates stockage 'stock' (FR) vs. stocare (RO) become stocage (FR) vs. stocare (RO). In this example, the French consonant group ck [k] become c [k] (as in Romanian). We also make adjustments in the ambiguous cases, by replacing with both variants (ch([\u222b] or [k])): fiche vs. fi\u015f\u0103 'sheet'; chapitre vs. capitol 'chapter'.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"text": "Precision of cognate extraction steps", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Evaluation and methods' comparison;</td></tr><tr><td>P=Precision; R=Recall; F=F-measure</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"text": "Improvement of our method's recall", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |