{
"paper_id": "F13-2010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:41:45.278558Z"
},
"title": "N-gram Language Models and POS Distribution for the Identification of Spanish Varieties",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cologne",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Binyam",
"middle": [],
"last": "Gebrekidan Gebre",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sascha",
"middle": [],
"last": "Diwersy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cologne",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Ngrammes et Traits Morphosyntaxiques pour la Identification de Vari\u00e9t\u00e9s de l'Espagnol Notre article pr\u00e9sente exp\u00e9rimentations portant sur la classification supervis\u00e9e de vari\u00e9t\u00e9s nationales de l'espagnol. Outre les approches classiques, bas\u00e9es sur l'utilisation de ngrammes de caract\u00e8res ou de mots, nous avons test\u00e9 des mod\u00e8les calcul\u00e9s selon des traits morphosyntaxiques, l'objectif \u00e9tant de v\u00e9rifier dans quelle mesure il est possible de parvenir \u00e0 une classification automatique des vari\u00e9t\u00e9s d'une langue en s'appuyant uniquement sur des descripteurs grammaticaux. Les calculs ont \u00e9t\u00e9 effectu\u00e9s sur la base d'un corpus de textes journalistiques de quatre pays hispanophones (Espagne, Argentine, Mexique et P\u00e9rou).",
"pdf_parse": {
"paper_id": "F13-2010",
"_pdf_hash": "",
"abstract": [
{
"text": "Ngrammes et Traits Morphosyntaxiques pour la Identification de Vari\u00e9t\u00e9s de l'Espagnol Notre article pr\u00e9sente exp\u00e9rimentations portant sur la classification supervis\u00e9e de vari\u00e9t\u00e9s nationales de l'espagnol. Outre les approches classiques, bas\u00e9es sur l'utilisation de ngrammes de caract\u00e8res ou de mots, nous avons test\u00e9 des mod\u00e8les calcul\u00e9s selon des traits morphosyntaxiques, l'objectif \u00e9tant de v\u00e9rifier dans quelle mesure il est possible de parvenir \u00e0 une classification automatique des vari\u00e9t\u00e9s d'une langue en s'appuyant uniquement sur des descripteurs grammaticaux. Les calculs ont \u00e9t\u00e9 effectu\u00e9s sur la base d'un corpus de textes journalistiques de quatre pays hispanophones (Espagne, Argentine, Mexique et P\u00e9rou).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spanish is a world language with official status in 21 countries. It is regarded to be a Pluricentric language with a number of interacting centres and language varieties (Thompson, 1992) . Each of these national varieties has their own characteristics in terms of phonetics, lexicon and syntax.",
"cite_spans": [
{
"start": 171,
"end": 187,
"text": "(Thompson, 1992)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computational applications can benefit from identifying the correct variety of Spanish texts when undertaking tasks such as Machine Translation or Information Extraction, as they are able to handle lexical, orthographic and syntactic variation more accurately. The task is modelled as a classification problem with very similar methods to those applied to general purpose language identification (Dunning, 1994) .",
"cite_spans": [
{
"start": 396,
"end": 411,
"text": "(Dunning, 1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, very few attempts have been made to address the problem of identifying language varieties as evidenced in 2.1. In this work we try to classify texts retrieved from newspapers published in 2008 from four different Spanish speaking countries : Spain, Argentina, Mexico and Peru. Moreover, we propose the use of new features, not limited to the classical word and character n-grams. We experimented features based on POS distribution and morphosyntactic information. The use of knowledge-rich features is not an attempt to outperform word and character n-gram-based methods, but an attempt to examine the extent to which these varieties differ in terms of grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language identification is the task of automatically identifying the language contained in a given document. State-of-the-art methods apply n-gram language models at the character and sometimes word-level with results usually above 95% accuracy. This level of success is very common when dealing with languages which are typologically not closely related. This is however not the case of language varieties in which the distinction is based on very subtle differences that algorithms can be trained to recognize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One of the first general purpose language identification approaches was the work of Ingle (1980) . Ingle applied Zipf's law distribution to order the frequency of stop words in a text and used this information for language identification. Dunning (1994) introduced the use of character n-grams and statistics for language identification. In this study, the likelihood of ngrams was calculated using Markov models and this was used as the most informative feature for identification. Other studies applying n-gram language models for language identification include Cavnar and Trenkle (1994) implemented as TextCat 1 , Grefenstette (1995) , and Vojtek and Belikova (2007) .",
"cite_spans": [
{
"start": 84,
"end": 96,
"text": "Ingle (1980)",
"ref_id": "BIBREF5"
},
{
"start": 239,
"end": 253,
"text": "Dunning (1994)",
"ref_id": "BIBREF2"
},
{
"start": 565,
"end": 590,
"text": "Cavnar and Trenkle (1994)",
"ref_id": "BIBREF0"
},
{
"start": 614,
"end": 615,
"text": "1",
"ref_id": null
},
{
"start": 618,
"end": 637,
"text": "Grefenstette (1995)",
"ref_id": "BIBREF3"
},
{
"start": 644,
"end": 670,
"text": "Vojtek and Belikova (2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the recent years, a number of language identification methods were developed for internet data such as Martins and Silva (2005) and Rehurek and Kolkus (2009) . The most recent general purpose language identification method to our knowledge is the one published by Lui and Baldwin (2012) . Their software, called langid.py, has language models for 97 languages, using various data sources. The method achieved results of up to 94.7% accuracy, thus outperforming similar tools. All models described in this section neglect language varieties. Pluricentric languages, such as the case of Spanish, are represented by a unique class.",
"cite_spans": [
{
"start": 106,
"end": 130,
"text": "Martins and Silva (2005)",
"ref_id": "BIBREF8"
},
{
"start": 135,
"end": 160,
"text": "Rehurek and Kolkus (2009)",
"ref_id": "BIBREF10"
},
{
"start": 267,
"end": 289,
"text": "Lui and Baldwin (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The identification of closely related languages is one of the bottlenecks of most n-gram-based models and there are only a few studies published about it. Ljube\u0161i\u0107 et al. (2007) propose a computational model for the identification of Croatian texts in comparison to other South Slavic languages reporting 99% recall and precision in three processing stages. One of these processing stages, includes a so-called black list, a list of forbidden words that appear only in 1. http ://odur.let.rug.nl/vannoord/TextCat/ Croatian texts, making the algorithm perform better.",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "Ljube\u0161i\u0107 et al. (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Similar Languages, Varieties and Dialects",
"sec_num": "2.1"
},
{
"text": "Ranaivo-Malancon (2006) presents a semi-supervised character-based model to distinguish between Indonesian and Malay, two closely related languages from the Austronesian family and Huang and Lee (2008) proposes a bag-of-words approach to distinguish Chinese texts from Mainland and Taiwan reporting results of up to 92% accuracy. More recently, Trieschnigg et al. Trieschnigg et al. (2012) described classification experiments for a set of sixteen Dutch dialects using the Dutch Folktale Database.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "Huang and Lee (2008)",
"ref_id": "BIBREF4"
},
{
"start": 345,
"end": 389,
"text": "Trieschnigg et al. Trieschnigg et al. (2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Similar Languages, Varieties and Dialects",
"sec_num": "2.1"
},
{
"text": "For romance languages, the DEFT2010 2 shared task aimed to classify French journalistic texts not only with respect to their geographical location but also incorporating a temporal dimension. For Portuguese, Zampieri and Gebre (2012) proposed a log-likelihood estimation method to distinguish between European and Brazilian Portuguese texts with results above 99.5% for character n-grams. The model was later applied to a multilingual setting with French and Spanish texts .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Similar Languages, Varieties and Dialects",
"sec_num": "2.1"
},
{
"text": "We collected four comparable corpora to use in our experiments, one for each language variety. To collect comparable samples, we retrieved texts published in the same year from local newspapers regarded to have similar register, as follows : Each sub-corpus contains a set of 1,000 documents randomly sampled to avoid bias towards a given topic or genre. These sub-corpora were divided in training and test settings of 500 documents each. Following the compilation of the corpora, four groups of features were selected. The list of features used and the aspect of language that these features aim to analyse are presented next :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "-Character n-grams (2 to 5) : orthography and lexicon -Word uni-grams : lexicon -Word bi-grams : lexicon and syntax -POS and morphological features : morphology and syntax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The first three groups of features (knowledge-poor features) are standard in language identification and they were widely used in previous approaches. The fourth group of features (knowledge-rich features) is to our knowledge new and it consists of the use of POS and morphological feature annotation. The POS tags and morphological information were used as one unit in form of a compound tags (e.g. N-msc-sg or V-inf).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
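{
"text": "As a concrete illustration of this feature construction, the following is a minimal Python sketch, not the authors' code: it assumes a tagger that outputs (POS, morphological features) pairs and shows how these could be merged into compound tags such as N-msc-sg and then turned into tag n-grams. Function names and the input format are assumptions made for illustration only.\n\n# Hypothetical sketch: build compound POS+morphology tags and tag n-grams.\ndef compound_tags(tagged_tokens):\n    # tagged_tokens: list of (pos, [morph_features]) pairs, e.g. ('N', ['msc', 'sg'])\n    return ['-'.join([pos] + feats) for pos, feats in tagged_tokens]\n\ndef tag_ngrams(tags, n):\n    # Contiguous n-grams over the compound tag sequence.\n    return [tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)]\n\ntags = compound_tags([('N', ['msc', 'sg']), ('V', ['inf']), ('ADJ', ['msc', 'sg'])])\n# tags == ['N-msc-sg', 'V-inf', 'ADJ-msc-sg']\nbigrams = tag_ngrams(tags, 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},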
{
"text": "A snapshot of the tagset with nouns, adjectives and verbs is presented in table 2. The classification method is based on n-gram language models and document log-likelihood estimation (Dunning, 1993) as described in . Its performance is comparable to state-of-the-art methods in language identification which focus on similar languages. It was tested on Bosnian, Croatian and Serbian documents 3 achieving 91.0% accuracy. Models described in Ljube\u0161i\u0107 et al. (2007) achieved 90.3% and 95.7% accuracy using the same dataset.",
"cite_spans": [
{
"start": 183,
"end": 198,
"text": "(Dunning, 1993)",
"ref_id": "BIBREF1"
},
{
"start": 441,
"end": 463,
"text": "Ljube\u0161i\u0107 et al. (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The method calculates language models using Laplace probability distribution for smoothing and after this calculation computes the probability of each document to belong to a certain class using a log-likelihood function as shown in equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(L|tex t) = arg max L N i=1 log P(n i |L) + log P(L)",
"eq_num": "(1)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "N is the number of n-grams in the test text, n i is the ith n-gram and L stands for the language models. Given a test text, we calculate the probability for each of the language models. The language model with higher probability determines the identified language of the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
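{
"text": "To make equation 1 concrete, the following is a minimal Python sketch, not the authors' implementation, of a character n-gram classifier with Laplace (add-one) smoothing and log-likelihood scoring; the shared vocabulary and the uniform prior over classes are simplifying assumptions. A test document is assigned to the class (here, the national variety) with the highest score.\n\nimport math\nfrom collections import Counter\n\ndef char_ngrams(text, n):\n    return [text[i:i + n] for i in range(len(text) - n + 1)]\n\ndef train(docs_by_class, n):\n    # Laplace-smoothed n-gram counts per class, plus a shared vocabulary.\n    counts = {c: Counter(g for d in docs for g in char_ngrams(d, n)) for c, docs in docs_by_class.items()}\n    vocab = set(g for c in counts for g in counts[c])\n    return counts, vocab\n\ndef classify(text, counts, vocab, n):\n    # arg max over classes L of sum_i log P(n_i | L) + log P(L), with a uniform prior P(L).\n    best, best_score = None, float('-inf')\n    for c, cnt in counts.items():\n        total = sum(cnt.values()) + len(vocab)  # Laplace denominator\n        score = math.log(1.0 / len(counts))\n        for g in char_ngrams(text, n):\n            score += math.log((cnt[g] + 1) / total)\n        if score > best_score:\n            best, best_score = c, score\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},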
{
"text": "The first experiments used knowledge-poor features to classify the four Spanish varieties evaluated using precision (P), recall (R) and f-measure (F). Results ranged from 0.813 fmeasure for character 4-grams to 0.876 f-measure for word bi-grams. The results for each class remained constant for all features and this can be seen in table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "3. http ://www.nljubesic.net/resources/tools/bs-hr-sr-language-identifier/ Feature P R F C 2-grams 0.835 0.804 0.819 C 3-grams 0.848 0.806 0.826 C 4-grams 0.842 0.787 0.813 C 5-grams 0.854 0.811 0.832 W 1-grams 0.879 0.848 0.848 W 2-grams 0.880 0.870 0.876",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The Peninsular Spanish class seemed to be the most difficult for the algorithm to identify in this setting. As an example, table 4 presents a confusion matrix for the character 4-grams feature in which the algorithm obtained its worst performance. The best results were obtained for the classification of texts from Argentina and Mexico reaching 0.999 average accuracy. As the confusion matrix in 4 indicated, the worst setting was again Spain x Argentina with an average result of 0.842 accuracy. All the results obtained were substantially higher than the 4-class classification setting. As classification algorithms tend to perform better in binary settings, this was an expected outcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TABLE 3 -4-Class Classification",
"sec_num": null
},
{
"text": "Next we present the results obtained using POS distribution and morphological features, combined in sets of 2, 3 and 4 compound tags as explained in section 3. The classification between Mexican and Spanish texts obtained the best results reaching 0.831 using combinations of two tags. These two varieties also obtained satisfactory scores for character and The poorest results were obtained once again in the classification of Spanish and Argentinian texts, which also obtained the worst performance using knowledge-poor features. Even though the results are lower than those obtained using knowledge-poor features, the algorithm scored better than the expected 0.50 baseline, indicating that it is able to identify patterns in the datasets using only sets of morphosyntactical information. Named entities which usually help algorithms to identify varieties at the lexical level are not present in the experiments using POS tags and therefore do not influence the performance of the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS and Morphology",
"sec_num": "4.1"
},
{
"text": "To evaluate the relationship between the features explored here, we analysed results using hierarchical clustering. For each cluster, two p-values (between 0 and 1) are calculated via multiscale bootstrap resampling. These values indicate how strong the cluster is supported by data. The two p-values are : the AU (Approximately Unbiased), in red, computed by multiscale bootstrap resampling and BP (Bootstrap Probability) in green, computed by normal bootstrap resampling. The graphic shows the difference between the performance of knowledge-poor and knowledge-rich features, arranging each in a different cluster 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship Between Features",
"sec_num": "4.2"
},
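{
"text": "The clustering step can be reproduced approximately with standard tools. The following is a minimal Python sketch, not the authors' setup (the AU/BP values require multiscale bootstrap resampling, as implemented for instance in the R package pvclust, and are omitted here): as an assumption, it uses the per-pair accuracies of tables 5 and 6 as the observations for each feature group, with average linkage over Euclidean distances.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.cluster.hierarchy import dendrogram, linkage\n\n# Rows: feature groups; columns: ARGxMEX, ARGxPER, MEXxPER, SPAxARG, SPAxMEX, SPAxPER (tables 5 and 6).\nlabels = ['C 2-grams', 'C 3-grams', 'C 4-grams', 'C 5-grams', 'W 1-grams', 'W 2-grams', 'PoS 2-grams', 'PoS 3-grams', 'PoS 4-grams']\nX = np.array([\n    [0.999, 0.996, 0.860, 0.852, 0.957, 0.940],\n    [0.999, 1.000, 0.911, 0.847, 0.987, 0.991],\n    [1.000, 0.999, 0.922, 0.827, 0.992, 0.996],\n    [0.999, 0.999, 0.927, 0.802, 0.991, 0.993],\n    [0.999, 0.999, 0.945, 0.851, 0.994, 0.992],\n    [0.999, 0.997, 0.951, 0.881, 0.998, 0.989],\n    [0.766, 0.650, 0.742, 0.637, 0.831, 0.702],\n    [0.815, 0.670, 0.753, 0.673, 0.821, 0.741],\n    [0.823, 0.732, 0.737, 0.690, 0.806, 0.667],\n])\nZ = linkage(X, method='average')  # average linkage is an assumption; the paper does not specify the linkage criterion\ndendrogram(Z, labels=labels)\nplt.tight_layout()\nplt.show()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship Between Features",
"sec_num": "4.2"
},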
{
"text": "The analysis grouped the two word-based feature groups in the same cluster, as they performed on average better than the character-based methods. Another interesting point of the analysis is that the results of character 4-and 5-grams are grouped in the same cluster due to an increase in performance when a larger amount of characters are taken into account. Character 4-and 5-grams features are closer to the lexical level taking whole words into account, which suggests that the model is more effective when using complete lexical items as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FIGURE 1 -Cluster Dendogram with AU/BP Values",
"sec_num": null
},
{
"text": "As stated before, the morphological features were not expected to outperform the knowledgepoor models, but to be used to investigate differences in grammar. An interesting outcome of these experiments is the direct relationship between the algorithm's performance using knowledge-poor and knowledge-rich features. One clear example is the classification of Argentina and Spain which obtained the worst results with characters and words as well as when using POS and morphology : 0.843 and 0.666 accuracy respectively. Another example is Argentina and Mexico which achieved the best results using characters and words, 0.999 accuracy and the second best results with POS tags, 0.801 accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FIGURE 1 -Cluster Dendogram with AU/BP Values",
"sec_num": null
},
{
"text": "For these reasons, the results presented here are an encouraging perspective for further studies. It is possible to use the outcome of the classification as a source of information for contrastive linguistics to provide quantitative overview on how these varieties converge and diverge in terms of grammar and lexicon. Linguistic analysis may be carried out using the most informative features in classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FIGURE 1 -Cluster Dendogram with AU/BP Values",
"sec_num": null
},
{
"text": "We presented a first attempt to identify a set of four Spanish varieties in written texts with f-measure results ranging from 0.813 to 0.876. As expected, the binary classification settings have achieved significantly better results in comparison to the 4-class classification setting. The algorithm was able to distinguish between texts from Argentina and Mexico with an average accuracy of 0.999. As previously discussed, the integration of these language models in real-world NLP applications, should improve results in a number of NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Perspectives",
"sec_num": "5"
},
{
"text": "The experiments used not only the classical character and word n-gram models but also morphosyntactic information combined with POS. This is to our knowledge a new contribution of our work to this kind of experiments. The classification with knowledge-rich features achieved up to 0.831 accuracy for Mexican and Peninsular Spanish. We observed a direct relationship between the performance of knowledge-poor and knowledge-rich features, binary settings which obtained good performance using characters and words also present good results using morphosyntactic information. This aspect should be better explored in future work through a careful linguistic analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Perspectives",
"sec_num": "5"
},
{
"text": "As future perspectives, first we wish to compare the performance of our method with general purpose language identification methods such as langid.py (Lui and Baldwin, 2012) . Second, we are replicating our experiments to a set of French varieties. Finally, we would like to experiment the combination of POS and word n-grams to investigate if performance increases.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Lui and Baldwin, 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Perspectives",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their careful feedback. TALN-R\u00c9CITAL 2013, 17-21 Juin, Les Sables d'Olonne ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "N-gram-based text catogorization",
"authors": [
{
"first": "W",
"middle": [],
"last": "Cavnar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Trenkle",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cavnar, W. and Trenkle, J. (1994). N-gram-based text catogorization. 3rd Symposium on Document Analysis and Information Retrieval (SDAIR-94).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics -Special Issue on Using Large Corpora",
"volume": "19",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T. (1993). Accurate methods for the statistics of surprise and coincidence. Compu- tational Linguistics -Special Issue on Using Large Corpora, 19(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical identification of language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T. (1994). Statistical identification of language. Technical report, Computing Research Lab -New Mexico State University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comparing two language identification schemes",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of JADT 1995, 3rd International Conference on Statistical Analysis of Textual Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G. (1995). Comparing two language identification schemes. In Proceedings of JADT 1995, 3rd International Conference on Statistical Analysis of Textual Data, Rome.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Contrastive approach towards text source classification based on top-bag-of-word similarity",
"authors": [
{
"first": "C",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of PACLIC 2008",
"volume": "",
"issue": "",
"pages": "404--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, C. and Lee, L. (2008). Contrastive approach towards text source classification based on top-bag-of-word similarity. In Proceedings of PACLIC 2008, pages 404-410.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Language Identification Table",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ingle",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingle, N. (1980). A Language Identification Table. Technical Translation International.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Language identification : How to distinguish similar languages ?",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mikelic",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Boras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 29th International Conference on Information Technology Interfaces",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ljube\u0161i\u0107, N., Mikelic, N., and Boras, D. (2007). Language identification : How to distinguish similar languages ? In Proceedings of the 29th International Conference on Information Technology Interfaces.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "langid.py : An off-the-shelf language identification tool",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lui, M. and Baldwin, T. (2012). langid.py : An off-the-shelf language identification tool. In Proceedings of the 50th Meeting of the ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language identification in web pages",
"authors": [
{
"first": "B",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Silva",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 20th ACM Symposium on Applied Computing (SAC)",
"volume": "",
"issue": "",
"pages": "763--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martins, B. and Silva, M. (2005). Language identification in web pages. Proceedings of the 20th ACM Symposium on Applied Computing (SAC), Document Engineering Track. Santa Fe, EUA., pages 763-768.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic identification of close languages -case study : Malay and indonesian",
"authors": [
{
"first": "B",
"middle": [],
"last": "Ranaivo-Malancon",
"suffix": ""
}
],
"year": 2006,
"venue": "ECTI Transactions on Computer and Information Technology",
"volume": "2",
"issue": "",
"pages": "126--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranaivo-Malancon, B. (2006). Automatic identification of close languages -case study : Malay and indonesian. ECTI Transactions on Computer and Information Technology, 2 :126- 134.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language identification on the web : Extending the dictionary method",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kolkus",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CICLing. Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "357--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehurek, R. and Kolkus, M. (2009). Language identification on the web : Extending the dictionary method. In Proceedings of CICLing. Lecture Notes in Computer Science, pages 357-368. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spanish as a pluricentric language",
"authors": [
{
"first": "R",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1992,
"venue": "Pluricentric Languages : Different Norms in Different Nations",
"volume": "",
"issue": "",
"pages": "45--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thompson, R. (1992). Spanish as a pluricentric language. In Clyne, M., editor, Pluricentric Languages : Different Norms in Different Nations, pages 45-70. CRC Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An exploration of language identification techniques for the dutch folktale database",
"authors": [
{
"first": "D",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hiemstra",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Theune",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Jong",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Meder",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of LREC2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trieschnigg, D., Hiemstra, D., Theune, M., de Jong, F., and Meder, T. (2012). An exploration of language identification techniques for the dutch folktale database. In Proceedings of LREC2012.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Comparing language identification methods based on markov processess",
"authors": [
{
"first": "P",
"middle": [],
"last": "Vojtek",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Belikova",
"suffix": ""
}
],
"year": 2007,
"venue": "Slovko, International Seminar on Computer Treatment of Slavic and East European Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vojtek, P. and Belikova, M. (2007). Comparing language identification methods based on markov processess. In Slovko, International Seminar on Computer Treatment of Slavic and East European Languages.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic identification of language varieties : The case of Portuguese",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "B",
"middle": [
"G"
],
"last": "Gebre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of KONVENS2012",
"volume": "",
"issue": "",
"pages": "233--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M. and Gebre, B. G. (2012). Automatic identification of language varieties : The case of Portuguese. In Proceedings of KONVENS2012, pages 233-237, Vienna, Austria.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Classifying pluricentric languages : Extending the monolingual model",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "B",
"middle": [
"G"
],
"last": "Gebre",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Diwersy",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Fourth Swedish Language Technlogy Conference (SLTC2012)",
"volume": "",
"issue": "",
"pages": "79--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Gebre, B. G., and Diwersy, S. (2012). Classifying pluricentric languages : Extending the monolingual model. In Proceedings of the Fourth Swedish Language Technlogy Conference (SLTC2012), pages 79-80, Lund, Sweden.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "",
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"num": null,
"text": "TagsetAlthough research in language identification and text classification shows that character and word n-gram-based methods outperform knowledge-rich features, we believe that these features are still worth experimenting with. Firstly, from an NLP perspective, these new features",
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>Feature</td><td colspan=\"7\">ARGxMEX ARGxPER MEXxPER SPAxARG SPAxMEX SPAxPER Average</td></tr><tr><td>C 2-grams</td><td>0.999</td><td>0.996</td><td>0.860</td><td>0.852</td><td>0.957</td><td>0.940</td><td>0.934</td></tr><tr><td>C 3-grams</td><td>0.999</td><td>1.000</td><td>0.911</td><td>0.847</td><td>0.987</td><td>0.991</td><td>0.956</td></tr><tr><td>C 4-grams</td><td>1.000</td><td>0.999</td><td>0.922</td><td>0.827</td><td>0.992</td><td>0.996</td><td>0.965</td></tr><tr><td>C 5-grams</td><td>0.999</td><td>0.999</td><td>0.927</td><td>0.802</td><td>0.991</td><td>0.993</td><td>0.952</td></tr><tr><td>W 1-grams</td><td>0.999</td><td>0.999</td><td>0.945</td><td>0.851</td><td>0.994</td><td>0.992</td><td>0.963</td></tr><tr><td>W 2-grams</td><td>0.999</td><td>0.997</td><td>0.951</td><td>0.881</td><td>0.998</td><td>0.989</td><td>0.969</td></tr><tr><td>Average</td><td>0.999</td><td>0.998</td><td>0.919</td><td>0.843</td><td>0.986</td><td>0.983</td><td>0.955</td></tr></table>",
"num": null,
"text": "Confusion MatrixFrom the 500 texts from Spain used for testing, only 218 were correctly classified, 280 were tagged as Argentinian and 2 as Peru. We subsequently classified the varieties in binary settings. Results are reported in terms of accuracy and can be seen in table 5.",
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"num": null,
"text": "",
"html": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table><tr><td>Feature</td><td colspan=\"7\">ARGxMEX ARGxPER MEXxPER SPAxARG SPAxMEX SPAxPER Average</td></tr><tr><td>PoS 2-grams</td><td>0.766</td><td>0.650</td><td>0.742</td><td>0.637</td><td>0.831</td><td>0.702</td><td>0.721</td></tr><tr><td>PoS 3-grams</td><td>0.815</td><td>0.670</td><td>0.753</td><td>0.673</td><td>0.821</td><td>0.741</td><td>0.746</td></tr><tr><td>PoS 4-grams</td><td>0.823</td><td>0.732</td><td>0.737</td><td>0.690</td><td>0.806</td><td>0.667</td><td>0.743</td></tr><tr><td>Average</td><td>0.801</td><td>0.684</td><td>0.744</td><td>0.666</td><td>0.819</td><td>0.703</td><td>0.736</td></tr><tr><td/><td/><td colspan=\"4\">TABLE 6 -Classification with POS Tags</td><td/><td/></tr></table>",
"num": null,
"text": "word-based features, 0.986 on average. Accuracy results for all binary classification settings are presented in table 6.",
"html": null,
"type_str": "table"
}
}
}
}