{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:53.199041Z"
},
"title": "Graph Exploration and Cross-lingual Word Embeddings for Translation Inference Across Dictionaries",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Lanau-Coronas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zaragoza",
"location": {
"country": "Spain"
}
},
"email": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Gracia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zaragoza",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the participation of two different approaches in the 3rd Translation Inference Across Dictionaries (TIAD 2020) shared task. The aim of the task is to automatically generate new bilingual dictionaries from existing ones. To that end, we essayed two different types of techniques: based on graph exploration on the one hand and, on the other hand, based on cross-lingual word embeddings. The task evaluation results show that graph exploration is very effective, accomplishing relatively high precision and recall values in comparison with the other participating systems, while cross-lingual embeddings reaches high precision but smaller recall.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the participation of two different approaches in the 3rd Translation Inference Across Dictionaries (TIAD 2020) shared task. The aim of the task is to automatically generate new bilingual dictionaries from existing ones. To that end, we essayed two different types of techniques: based on graph exploration on the one hand and, on the other hand, based on cross-lingual word embeddings. The task evaluation results show that graph exploration is very effective, accomplishing relatively high precision and recall values in comparison with the other participating systems, while cross-lingual embeddings reaches high precision but smaller recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The fact that the open-source Apertium 1 bilingual dictionaries (Forcada et al., 2011) have been converted into RDF and published on the Web following Linked Data principles (Gracia et al., 2018) allows for a large variety of exploration opportunities. Nowadays, the Apertium RDF Graph 2 contains information from 22 bilingual dictionaries. However, as can be seen in Figure 1 , where languages are represented as nodes and the edges symbolise the translation sets between them, not all the languages are connected to each other. In this context, the objective of the Translation Inference Across Dictionaries (TIAD) shared task 3 is to automatically generate new bilingual dictionaries based on known translations contained in this graph. In particular, in this TIAD edicion (TIAD 2020), the participating systems were asked to generate new translations automatically among three languages, English, French, Portuguese, based on known translations contained in the Apertium RDF graph. As these languages (EN, FR, PT) are not directly connected in such a graph (see Figure 1 ), no translations can be obtained directly among them in this graph. Based on the available RDF data, the participants were asked to apply their methodologies to derive translations, mediated by any other language in the graph, between the pairs EN/FR, FR/PT and PT/EN. The evaluation of the results was carried out by the organisers against manually compiled pairs of K Dictionaries 4 . We have proposed two different systems for participating in the task.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Forcada et al., 2011)",
"ref_id": null
},
{
"start": 174,
"end": 195,
"text": "(Gracia et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1066,
"end": 1074,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "1. Cycles-OTIC. The first one is a hybrid technique based on graph exploration. It includes translations coming from a method that explores the density of cycles in the translations graph (Villegas et al., 2016) , combined with the translations obtained by the One Time Inverse apertium/ 3 https://tiad2020.unizar.es/ 4 https://lexicala.com/resources# dictionaries Consultation (OTIC) method, which generates translation pairs by means of an intermediate pivot language (Tanaka and Umemura, 1994) .",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "(Villegas et al., 2016)",
"ref_id": null
},
{
"start": 470,
"end": 496,
"text": "(Tanaka and Umemura, 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "2. Cross-lingual embeddings. The second proposed system has a different focus. It does not rely on the graph structure but on the distribution of embeddings across languages. To that end, we reuse the system proposed by Artetxe et al. (Artetxe et al., 2018) to build crosslingual word embeddings trained with monolingual corpora and mapped afterwards through an intermediate language.",
"cite_spans": [
{
"start": 220,
"end": 257,
"text": "Artetxe et al. (Artetxe et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The remainder of this paper is organised as follows. In Section 2 we give an overview of the used techniques. Then, in Section 3 we comment the results obtained in the evaluation and, finally, in Section 4 we present some conclusions and future directions of our research. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As it was stated previously, we developed two different techniques in order to automatically generate new bilingual dictionaries between the language pairs proposed in the task. Following the TIAD rules, the output data of the system was encoded in a TSV (tab separated values) file and had to contain the following information for all the translation pairs: source and target written representation, part of speech and a confidence score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems overview",
"sec_num": "2."
},
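{
"text": "To make the required output format concrete, the following minimal sketch (in Python; the file name and the candidate pairs are purely illustrative and not taken from the actual system output) writes translation candidates as a TIAD-style TSV file with the four required columns:

import csv

# Hypothetical candidate translations: (source written representation,
# target written representation, part of speech, confidence score).
candidates = [
    ('forest', 'bois', 'noun', 0.9),
    ('forest', 'fort', 'noun', 0.9),
]

# One translation pair per line, tab-separated, with a confidence score in [0, 1].
with open('en-fr_translations.tsv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter='\t')
    for source, target, pos, score in candidates:
        writer.writerow([source, target, pos, round(score, 3)])
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems overview",
"sec_num": "2."
},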
{
"text": "Cycles-OTIC is a hybrid system that combines the translation pairs generated by means of the two graph-based methods described in the following paragraphs. The objective of this collaborative system is to reinforce both techniques and cover translations that can not be reached separately by any of the two methods. Because of word polysemy, translation cannot be considered as a transitive relation. Specifically, when an intermediate language is used to generate a bilingual dictionary, the ambiguity of words in the pivot language may infer inappropriate equivalences. Avoiding those wrong translations is the main motivation of both methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cycles-OTIC system",
"sec_num": "2.1."
},
{
"text": "The Cycle-based method was proposed by Villegas et al. (2016) . The idea was using cycles to identify potential targets that may be a translation of a given word. A cycle can be considered a sequence of nodes that starts and ends in the same node, without repetitions of nodes nor edges. The confidence value of each translation is calculated by means of nodes' degree and graph density. The density is higher when higher is the number of edges in the graph, as can be seen in the Equation 1, where E represents the number of edges and V the number of vertices (nodes).",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "Villegas et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle-based method",
"sec_num": "2.1.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = |E| |V | * (|V | \u2212 1)",
"eq_num": "(1)"
}
],
"section": "Cycle-based method",
"sec_num": "2.1.1."
},
{
"text": "The confidence score of a potential target is assigned by the density value of the more dense cycle where the source and target words appear. This value can achieve values from 0 to 1 (from completely disconnected to fully connected graph). Table 1 (Villegas et al., 2016) shows an illustrative example of some target candidates obtained in the Apertium RDF graph when translating the English word 'forest', along with the confidence score and the more dense cycle.",
"cite_spans": [
{
"start": 249,
"end": 272,
"text": "(Villegas et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Cycle-based method",
"sec_num": "2.1.1."
},
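{
"text": "To illustrate how such a density-based confidence can be computed, the following minimal sketch (in Python, using the networkx library; the toy graph, the use of a cycle basis as the enumeration strategy, and all helper names are our own illustrative assumptions rather than the authors' implementation, so the scores on this small undirected graph will not match the values of Table 1) assigns to a candidate pair the density of the densest cycle containing both words:

import networkx as nx

def cycle_density(graph, cycle_nodes):
    # Density of the subgraph induced by the nodes of a cycle,
    # following Equation 1: D = |E| / (|V| * (|V| - 1)).
    sub = graph.subgraph(cycle_nodes)
    v, e = sub.number_of_nodes(), sub.number_of_edges()
    return e / (v * (v - 1)) if v > 1 else 0.0

def cycle_confidence(graph, source, target):
    # Confidence of a candidate translation pair: the density of the
    # densest cycle in which both source and target words appear.
    best = 0.0
    for cycle in nx.cycle_basis(graph):  # one possible way to enumerate cycles
        if source in cycle and target in cycle:
            best = max(best, cycle_density(graph, cycle))
    return best

# Toy translation graph; nodes are language-tagged lexical entries.
g = nx.Graph()
g.add_edges_from([
    ('forest-en', 'bosque-es'), ('bosque-es', 'bosc-ca'),
    ('bosc-ca', 'bois-fr'), ('bois-fr', 'arbaro-eo'),
    ('arbaro-eo', 'forest-en'),
])
print(cycle_confidence(g, 'forest-en', 'bois-fr'))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle-based method",
"sec_num": "2.1.1."
},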
{
"text": "The second method utilised in our system was proposed by Tanaka and Umemura (1994) and adapted by Lim et al. (2011) afterwards for the creation of multilingual lexicons. This method is known as One Time Inverse Consultation (OTIC) and its objective is to construct bilingual dictionaries through intermediate translations by a pivot language. The OTIC method, even if relatively old, has proven to be a simple but effective one and a baseline very hard to beat, as it was shown by the previous TIAD edition results (Gracia et al., 2019) and corroborated with the latest TIAD 2020 results (see Table 6 ). The OTIC method works as follows. In order to avoiding ambiguities caused by polysemy, for a given word, a confidence score is assigned to each candidate translation based on the degree of overlap between the pivot translations shared by both the source and target words. The higher is the overlap, the higher is the confidence score. The computation of this value is calculated by the Equation 2, where T1 and T2 are the number of translations into the pivot language from the source and target words respectively, and I the size of the intersection between those translations.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "Tanaka and Umemura (1994)",
"ref_id": "BIBREF7"
},
{
"start": 98,
"end": 115,
"text": "Lim et al. (2011)",
"ref_id": "BIBREF6"
},
{
"start": 515,
"end": 536,
"text": "(Gracia et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 593,
"end": 600,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "OTIC method",
"sec_num": "2.1.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score = 2 * I T 1 + T 2",
"eq_num": "(2)"
}
],
"section": "OTIC method",
"sec_num": "2.1.2."
},
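{
"text": "As a worked illustration of Equation 2, the following minimal sketch (in Python; the pivot translation lists are invented toy data, not the Apertium content) scores a candidate pair: with three pivot translations on each side and two of them shared, the score is 2 * 2 / (3 + 3) = 0.67.

def otic_score(source_pivot, target_pivot):
    # Equation 2: score = 2 * I / (T1 + T2), where T1 and T2 are the numbers
    # of pivot-language translations of the source and target words, and I is
    # the size of the intersection of the two pivot translation sets.
    t1, t2 = len(source_pivot), len(target_pivot)
    i = len(set(source_pivot) & set(target_pivot))
    return 2 * i / (t1 + t2) if t1 + t2 else 0.0

# Toy example: an English word and a French candidate that share two of
# their three Spanish (pivot) translations.
en_to_es = ['bosque', 'selva', 'floresta']
fr_to_es = ['bosque', 'selva', 'madera']
print(otic_score(en_to_es, fr_to_es))  # 0.666...
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OTIC method",
"sec_num": "2.1.2."
},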
{
"text": "As it was mentioned before, the Apertium RDF Graph has been the source data of the experiments. In order to chose a suitable pivot language for the experiments, we explored the two possible ones: Spanish and Catalan. (Villegas et al., 2016; Gracia et al., 2019) . Our hypothesis is that the addition of the Cycles method should increase the coverage of the OTIC baseline, since there are possibly some translation pairs that cannot be linked through Spanish (our pivot language) but trough other languages in the graph. Additionally, we wanted to measure the benefits of adding the Cycles method in terms of precision and recall. During development, some experiments with the Apertium RDF Graph were carried out to evaluate the performance of two possible ways of combining both methods: through the union and through the intersection of the translations results provided by both techniques. Some existing Apertium dictionaries were removed from the Apertium RDF graph and used as golden-standard during the development phase, where the explored method had to re-construct the removed Apertium dictionary. Results provided by those experiments showed that whereas the union of the translations sets from the Cycle-based and the OTIC method reached similar o even better results than the OTIC method alone, the results of the translations obtained from the intersection between both methods achieves much worse values of recall, as many correct translations reached by only one method were dismissed. Therefore we opted for the union operation when combining both systems. It was also observed that the hybrid system improved the results of the OTIC method when the pivot language has a small translation set with source and/or target languages. Thus, the Cycles-OTIC method is simply the result of the union of the sets of translations generated by both methods individually. The translation pairs keep the confidence score obtained by the individual methods. However, when the same translation is provided by the two methods, the bois-fr 0.9 [bosque-es, bosc-ca, bois-fr, arbaro-eo, forest-en] fort-fr 0.9 [bosque-es, fort-fr, bosc-ca, arbaro-eo, forest-en] b\u00f2sc-oc 0.833 [bosque-es, b\u00f2sc-oc, bosc-ca, forest-en] bosque-pt 0.833 [bosque-gl, bosque-pt, bosque-es, forest-en] floresta-pt 0.7",
"cite_spans": [
{
"start": 217,
"end": 240,
"text": "(Villegas et al., 2016;",
"ref_id": null
},
{
"start": 241,
"end": 261,
"text": "Gracia et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OTIC method",
"sec_num": "2.1.2."
},
{
"text": "[fraga-gl, floresta-pt, bosque-gl, bosque-es, forest-en] selva-es 0.619 [bosque-es, bosc-ca, arbaro-eo, fort-fr, selva-es, baso-eu, forest-en] score assigned is the maximum of the two values. The default threshold proposed for this combined method is 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OTIC method",
"sec_num": "2.1.2."
},
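{
"text": "The combination step described above can be summarised with the following minimal sketch (in Python; the data layout and the example scores are illustrative assumptions): the two translation sets are merged as a union, a pair produced by both methods keeps the maximum of the two confidence scores, and pairs below the threshold are discarded.

def combine(cycles_pairs, otic_pairs, threshold=0.5):
    # cycles_pairs / otic_pairs: dicts mapping (source, target, pos) keys to
    # the confidence score assigned by each individual method.
    combined = dict(cycles_pairs)
    for pair, score in otic_pairs.items():
        # Union of both translation sets; on overlap keep the maximum score.
        combined[pair] = max(score, combined.get(pair, 0.0))
    # Apply the default 0.5 confidence threshold.
    return {pair: s for pair, s in combined.items() if s >= threshold}

# Hypothetical outputs of the two methods.
cycles = {('forest', 'bois', 'noun'): 0.90, ('forest', 'fort', 'noun'): 0.45}
otic = {('forest', 'bois', 'noun'): 0.67, ('forest', 'sylve', 'noun'): 0.30}
print(combine(cycles, otic))  # keeps ('forest', 'bois', 'noun') with score 0.9
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OTIC method",
"sec_num": "2.1.2."
},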
{
"text": "The second system developed makes use of cross-lingual word embeddings and a third intermediate language to generate new dictionaries. The vectors of the three languages (source, pivot and target) were all trained with monolingual corpora on Common Crawl and Wikipedia using fastTest (Grave et al., 2018) . Then, they were mapped in pairs into a shared vector space through VecMap (Artetxe et al., 2018) , a framework to learn cross-lingual word embedding mappings. The VecMap system allows for either a supervised or an unsupervised mode. In our case, it was supervised since we use the Apertium dictionaries as source of initial mappings between the source and intermediate monolingual embeddings, and also for the intermediate and target vectors. Given a word in the source language contained in the source vector, the algorithm gets the closest word vector in the embedding mapped. It is obtained by means of the cosine similarity metric, which can reach values from 0 to 1. The closer the vector, the higher the cosine metric. Afterwards, the same mechanism is done for getting the closest word in the target language from the one in the pivot language. Finally, the confidence score of the pair generated is computed by the product of both cosine similarity values calculated. The translation only is considered as candidate if the part of speech of source, pivot and target words are the same. The language used as pivot between source and target were Spanish. In Table 3 can be seen the sizes of the extracts used for doing the initial mappings. These translation sets were obtained from the Apertium RDF Graph excluding those which contain spaces. ",
"cite_spans": [
{
"start": 284,
"end": 304,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 381,
"end": 403,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1471,
"end": 1478,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "CL-embeddings system",
"sec_num": "2.2."
},
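{
"text": "The retrieval step described above can be sketched as follows (in Python with numpy; the tiny two-dimensional vectors and vocabularies are toy assumptions, a single mapped space per language is used instead of the two VecMap-aligned fastText spaces, and the part-of-speech filter is omitted): the closest pivot word is found by cosine similarity, the same is done from pivot to target, and the confidence of the resulting pair is the product of the two similarities.

import numpy as np

def nearest(vec, matrix, vocab):
    # Cosine similarity between one mapped word vector and all vectors of
    # the other language; return the closest word and its similarity.
    sims = matrix @ vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec))
    best = int(np.argmax(sims))
    return vocab[best], float(sims[best])

def translate(word, src, src_vocab, piv, piv_vocab, tgt, tgt_vocab):
    # Source -> pivot: closest pivot word in the shared source-pivot space.
    pivot_word, sim1 = nearest(src[src_vocab.index(word)], piv, piv_vocab)
    # Pivot -> target: closest target word in the shared pivot-target space.
    target_word, sim2 = nearest(piv[piv_vocab.index(pivot_word)], tgt, tgt_vocab)
    # Confidence of the generated pair: product of both cosine similarities.
    return target_word, sim1 * sim2

# Toy 2-dimensional embeddings, assumed to be already mapped.
src_vocab, piv_vocab, tgt_vocab = ['forest'], ['bosque', 'selva'], ['bois', 'fort']
src = np.array([[1.0, 0.1]])
piv = np.array([[0.9, 0.2], [0.2, 0.9]])
tgt = np.array([[0.95, 0.15], [0.3, 0.8]])
print(translate('forest', src, src_vocab, piv, piv_vocab, tgt, tgt_vocab))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CL-embeddings system",
"sec_num": "2.2."
},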
{
"text": "The final evaluation of the results was carried out by the organisers against the test data 6 . These gold-standard consisted of the intersection between manually compiled pairs of K Dictionaries and the entries in Apertium dictionaries. The performance was measured in terms of precision, recall, F-measure and coverage. The official results of our systems with variable threshold are shown in Table 4 and Table 5 . It can be seen that in both systems, when threshold gets higher values, precision increases while recall is reduced, as expected. A graph of the average of F-measure per threshold comparing all systems can be seen in Figure 2 . The Cycles-OTIC system achieves the second position in terms of F-measure, although is beaten by the OTIC baseline. The other system, based in cross-lingual word embeddings gets the fifth position. As it is shown in Tables 4 and 5, both systems obtain high precision values, and the graph-based system obtains the highest coverage score among all the participating systems and baselines. Discussion. The results prove our hypothesis that the addition of the Cycles method increases the coverage of the OTIC baseline. In particular from 0.70 to 0.76, being the largest value achieved in the shared task. The reason is that the Cycles method helps to discover, through alternative paths, some translation pairs that cannot be discovered through the pivot language. We see, however, that many of these extra translations are not present in the golden standard, since the value of precision drops from 0.70 to 0.64, while recall is preserved (0.47). We will perform a more careful inspection of the validation data results to better understand this effect. Out initial intuition is that the explored languages (PT, EN, FR) are already very well connected through the pivot language (SP), therefore OTIC can be very effective; while the Cycles strategy could play a more important role between other language pairs that are less directly connected in the graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 395,
"end": 402,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 407,
"end": 414,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 634,
"end": 642,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "3."
},
{
"text": "As it can be seen in Table 6 , the evaluation related to the CL-embeddings method shows that, in average, this technique has the second better value of coverage (0.73), just after the Cycles-OTIC method. The precision achieves also a high value (0.62), but regarding the recall, the value is not so high (0.32). One of the possible reasons behind this is that the embedding-based method only gives one target candidate per source entry (the one with best score). A further research considering different numbers of translations per word will be done in order to optimise recall while minimising the loss in precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "3."
},
{
"text": "In this paper we have described our participation in the TIAD 2020 shared task with two different techniques: one based on graph exploration and another one based on crosslingual word embeddings. The official results provided by the organisers demonstrate that the performance of such methods for translation inference across dictionaries are good, specially in terms of precision and coverage. However none of the systems could beat the OTIC baseline in terms of F-measure, although the analysis of the results suggested us some improvements that will be carried out as future steps in this research line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4."
},
{
"text": "https://www.apertium.org/ 2 http://linguistic.linkeddata.es/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Spanish is also used as pivot language in the baseline evaluation carried out by the organisers, which uses also the same implementation: https://gitlab.com/sid_unizar/otic",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Notice that one of the co-authors is co-organiser of TIAD. However, the test data was also treated as blind for the participating systems reported in this paper, to allow a fair comparison",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the European Union's Horizon 2020 research and innovation programme through the projects Lynx (grant agreement No 780602) and Pr\u00eat-\u00e0-LLOD (grant agreement No 825182). It has been also partially supported by the Spanish projects TIN2016-78011-C4-3-R (AEI/FEDER, UE) and DGA/FEDER 2014-2020 \"Construyendo Europa desde Arag\u00f3n\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "5."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018). A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 789- 798.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Apertium: a free/open-source platform for rule-based machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "F",
"middle": [
"M"
],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine translation",
"volume": "25",
"issue": "",
"pages": "127--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F., Ram\u00edrez-S\u00e1nchez, G., and Tyers, F. M. (2011). Aper- tium: a free/open-source platform for rule-based ma- chine translation. Machine translation, 25(2):127-144.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The apertium bilingual dictionaries on the web of data. Semantic Web",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gracia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Villegas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gomez-Perez",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "9",
"issue": "",
"pages": "231--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gracia, J., Villegas, M., Gomez-Perez, A., and Bel, N. (2018). The apertium bilingual dictionaries on the web of data. Semantic Web, 9(2):231-240.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Results of the Translation Inference Across Dictionaries 2019 Shared Task",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gracia",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kabashi",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kernerman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lanau-Coronas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lonke",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of TIAD-2019 Shared Task -Translation Inference Across Dictionaries co-located with the 2nd Language, Data and Knowledge Conference (LDK 2019)",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gracia, J., Kabashi, B., Kernerman, I., Lanau-Coronas, M., and Lonke, D. (2019). Results of the Translation In- ference Across Dictionaries 2019 Shared Task. In Jorge Gracia, et al., editors, Proc. of TIAD-2019 Shared Task -Translation Inference Across Dictionaries co-located with the 2nd Language, Data and Knowledge Confer- ence (LDK 2019), pages 1-12. CEUR Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 lan- guages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Low cost construction of a multilingual lexicon from bilingual lists",
"authors": [
{
"first": "L",
"middle": [
"T"
],
"last": "Lim",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ranaivo-Malan\u00e7on",
"suffix": ""
},
{
"first": "E",
"middle": [
"K"
],
"last": "Tang",
"suffix": ""
}
],
"year": 2011,
"venue": "Polibits",
"volume": "",
"issue": "43",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lim, L. T., Ranaivo-Malan\u00e7on, B., and Tang, E. K. (2011). Low cost construction of a multilingual lexicon from bilingual lists. Polibits, (43):45-51.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Construction of a bilingual dictionary intermediated by a third language",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Umemura",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "297--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanaka, K. and Umemura, K. (1994). Construction of a bilingual dictionary intermediated by a third language. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 297-303. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Leveraging rdf graphs for crossing multiple bilingual dictionaries",
"authors": [],
"year": null,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "868--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leveraging rdf graphs for crossing multiple bilingual dictionaries. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 868-876.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Apertium RDF Graph. It represents how the language pairs are connected by means of bilingual translation sets. The darker the colour, the more connections a node has. [Figure taken from https://tiad2020. unizar.es/task.html]",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "F1 results per different values of threshold for all systems",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "shows a comparison of the size of the translation sets depending on using Spanish or Catalan as intermediate language. It can be observed that the Catalan language is quite unbalanced.",
"content": "<table/>",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "'Forest-en' best targets, its scores and cycles(Villegas et al., 2016).",
"content": "<table><tr><td>EN</td><td>FR</td><td>PT</td></tr><tr><td colspan=\"3\">ES 25,830 21,475 12,054</td></tr><tr><td colspan=\"2\">CA 33,029 6,550</td><td>7,111</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Size of the translation sets (in number of translations) for different intermediate languages (ES, CA).",
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td colspan=\"5\">: TIAD results for the Cycles-OTIC system with</td></tr><tr><td>variable threshold</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Threshold Precision Recall</td><td>F1</td><td>Coverage</td></tr><tr><td>0.0</td><td>0.58</td><td>0.33</td><td>0.41</td><td>0.81</td></tr><tr><td>0.1</td><td>0.58</td><td>0.33</td><td>0.41</td><td>0.81</td></tr><tr><td>0.2</td><td>0.58</td><td>0.33</td><td>0.41</td><td>0.81</td></tr><tr><td>0.3</td><td>0.59</td><td>0.33</td><td>0.41</td><td>0.81</td></tr><tr><td>0.4</td><td>0.59</td><td>0.33</td><td>0.42</td><td>0.79</td></tr><tr><td>0.5</td><td>0.62</td><td>0.32</td><td>0.42</td><td>0.73</td></tr><tr><td>0.6</td><td>0.68</td><td>0.29</td><td>0.40</td><td>0.60</td></tr><tr><td>0.7</td><td>0.75</td><td>0.20</td><td>0.31</td><td>0.38</td></tr><tr><td>0.8</td><td>0.79</td><td>0.07</td><td>0.13</td><td>0.12</td></tr><tr><td>0.9</td><td>0.40</td><td>0</td><td>0</td><td>0</td></tr><tr><td>1.0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>",
"num": null
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: TIAD results for the CL-embeddings system with</td></tr><tr><td>variable threshold</td></tr></table>",
"num": null
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "Averaged results per language pair for every system and ordered by F-measure in descending order.",
"content": "<table/>",
"num": null
}
}
}
}