|
{ |
|
"paper_id": "O16-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:05:09.402383Z" |
|
}, |
|
"title": "Automatic evaluation of surface coherence in L2 texts in Czech", |
|
"authors": [ |
|
{ |
|
"first": "Kate\u0159ina", |
|
"middle": [], |
|
"last": "Rysov\u00e1", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Charles University", |
|
"location": { |
|
"settlement": "Prague" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Magdal\u00e9na", |
|
"middle": [], |
|
"last": "Rysov\u00e1", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Charles University", |
|
"location": { |
|
"settlement": "Prague" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u00fd", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Charles University", |
|
"location": { |
|
"settlement": "Prague" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We introduce possibilities of automatic evaluation of surface text coherence (cohesion) in texts written by learners of Czech during certified exams for non-native speakers. On the basis of a corpus analysis, we focus on finding and describing relevant distinctive features for automatic detection of A1-C1 levels (established by CEFR-the Common European Framework of Reference for Languages) in terms of surface text coherence. The CEFR levels are evaluated by human assessors and we try to reach this assessment automatically by using several discourse features like frequency and diversity of discourse connectives, density of discourse relations etc. We present experiments with various features using two machine learning algorithms. Our results of automatic evaluation of CEFR coherence/cohesion marks (compared to human assessment) achieved 73.2% success rate for the detection of A1-C1 levels and 74.9% for the detection of A2-B2 levels.", |
|
"pdf_parse": { |
|
"paper_id": "O16-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We introduce possibilities of automatic evaluation of surface text coherence (cohesion) in texts written by learners of Czech during certified exams for non-native speakers. On the basis of a corpus analysis, we focus on finding and describing relevant distinctive features for automatic detection of A1-C1 levels (established by CEFR-the Common European Framework of Reference for Languages) in terms of surface text coherence. The CEFR levels are evaluated by human assessors and we try to reach this assessment automatically by using several discourse features like frequency and diversity of discourse connectives, density of discourse relations etc. We present experiments with various features using two machine learning algorithms. Our results of automatic evaluation of CEFR coherence/cohesion marks (compared to human assessment) achieved 73.2% success rate for the detection of A1-C1 levels and 74.9% for the detection of A2-B2 levels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Our research is carried out on texts written during the international language examinations in Europe) standards. Such type of examination is required by Czech universities (the needed CEFR level is usually B2) or often also by employers and the exam is compulsory for foreigners to be granted permanent residence in the Czech Republic (the required CEFR level is A1) or state citizenship (the required CEFR level is B1). 1 Therefore, it is of great importance to assess these examinations as objectively as possible and according to uniform criteria. This is rather difficult because the writing samples are evaluated manually by human assessors (although according to the uniform rating grid) who naturally bring to the evaluation a subjective human factor. In the present paper, we aim at finding several objective criteria (concerning surface text coherence) for distinguishing the individual CEFR levels automatically. Specifically, we carry out a research on surface text coherence concerning various discourse phenomena (like the use and frequency of connectives etc.) and we test the possibility of their automatic monitoring and evaluating. The results of our research will become a part of a software application that will serve as a tool for objective assessment of surface text coherence, i.e. for automatic division of submitted writing samples into the suitable CEFR levels in the coherence/cohesion category.", |
|
"cite_spans": [ |
|
{ |
|
"start": 422, |
|
"end": 423, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are many studies and projects dealing with automatic evaluation of various language phenomena especially for English. Many of them focus on grammatical aspects of language (e.g. on automatic evaluation of grammatical accuracy, detection of grammatical errors etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "-see [1] ; [2] or [3] ). On the other hand, only few of them aim at automatic evaluation of text coherence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 8, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 11, |
|
"end": 14, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 18, |
|
"end": 21, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Text coherence may be viewed as local (in smaller text segments covering e.g. discourse relations between sentences within a paragraph) or global (coherence concerning larger text segments like correlation between a title and content etc.). Automatic evaluation of local 1 Common European Framework of Reference for Languages (CEFR, the document of the Council of Europe) divides language learners into three broad categories (A: Basic user, B: Independent user, C: Proficient user). These categories may be further subdivided into six levels (A1, A2, B1, B2, C1 and C2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 272, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "coherence is a topic investigated e.g. by Miltsakaki and Kukich [4] analyzing student's essays or Lapata and Barzilay [5] focusing on machine-generated texts. Higgins et al. [6] examine possibilities of automatic assessment of both local and global coherence at once carried out on student's writing samples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 67, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 121, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 177, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A specific topic of automatic evaluation of language is an analysis and assessment of L2 texts, i.e. (both written and spoken) texts by non-native speakers. Again there are many studies focusing especially on English (or languages like German or Dutch) as L2 and examining various aspects of language like automatic assessment of non-native prosody [7] , automatic classification of article errors [8] or automatic detection of frequent pronunciation errors [9] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 352, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 401, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 461, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Whereas there is a number of studies focusing on automatic evaluation of texts written by non-native speakers for different languages, there is no similar research for Czech as L2/FL so far. Therefore, we open this topic for Czech by introducing automatic evaluation of surface text coherence, which has a clear potential for practical usage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Research", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are many approaches to text coherence as well as capturing and monitoring coherence relations in large corpora, such as Rhetorical Structure Theory (RST, [10] ), Segmented Discourse Representation Theory (SDRT, [11] ) and the project Penn Discourse Treebank (PDTB, [12] ). The PDTB approach inspired also the annotation of discourse in the Prague Dependency Treebank for Czech (PDT, [13] ) -the only corpus of Czech marking relations of text coherence relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 164, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 221, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 275, |
|
"text": "[12]", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 393, |
|
"text": "[13]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Coherence", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this paper, we use the PDT way of capturing coherence relations. We focus on the aspects of surface coherence (cohesion), i.e. on the surface realization of coherence relations that may be processed automatically (like signalization of discourse relations by discourse connectives, distribution of inter-and intra-sentential discourse relations, distribution of semantico-pragmatic relations like contingency, expansion etc.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Coherence", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For our analysis, we use the language data of the corpus MERLIN [ The evaluation reflects both an overall level (general linguistic range) and the individual rating criteria including vocabulary range, vocabulary control, grammatical accuracy, surface coherence (cohesion), sociolinguistic appropriateness and orthography.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 65, |
|
"text": "[", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Material: Corpus MERLIN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "MERLIN uses two rating instruments: an assessor-oriented version of the holistic scale (see Alderson [15] ) for the general linguistic range and an analytical rating grid closely related to CEFR rating table 4 used in the process of scaling the CEFR descriptors, see [16] and [17] . Table 2 . 5 The original Czech text contains some errors in morphology and spelling that are not represented in the English translation. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 105, |
|
"text": "[15]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 209, |
|
"text": "4", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 271, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 280, |
|
"text": "[17]", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 294, |
|
"text": "5", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 290, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Material: Corpus MERLIN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The first step was to parse the data (441 texts) from the raw text up to the deep syntacticosemantic (tectogrammatical) layer in the annotation framework of the Prague Dependency Treebank (PDT) 6 following the theoretical framework of the Functional Generative Description, see Sgall [18, 19] . To parse the data, we used the current version of Treex, a modular system for natural language processing [20] , with a pre-defined scenario for Czech text analysis, which includes tokenization, sentence segmentation, morphological tagging, ", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 195, |
|
"text": "6", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 288, |
|
"text": "[18,", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 292, |
|
"text": "19]", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 405, |
|
"text": "[20]", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To select features for automatic assessment of Coherence/Cohesion text levels, we first carried out a linguistic analysis of a couple of sample texts. Then we extracted (values of) these features from the automatically parsed texts. We established a relatively simple baseline and experimented with several other sets of features, as described below and summarized in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 375, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The Baseline consists of a single feature that uses a list of 45 most frequent discourse connectives first extracted from the discourse annotation in the PDT 3.0 and complemented by a few informal variants that are likely to appear in texts written by non-native speakers 8 If we aimed at evaluating the global coherence of texts, other theories would be more appropriate, such as the Rhetorical Structure Theory (RST; [10] ), which tries to represent a document as a single tree expressing the hierarchy of discourse relations both between small and larger text segments. Table 3 ), trying to find the best sets of features for the learning algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 273, |
|
"text": "8", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 423, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 573, |
|
"end": 580, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As for selection of these features as well as for testing the algorithms with these features we used the 10-fold cross validation on all the data, results on these two sets may be slightly biased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We used two machine learning algorithms -Random Forest and Multilayer Perceptron, 9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "namely their implementation in the Waikato Environment for Knowledge Analysis -Weka toolkit [25] . 10 We trained and tested the algorithms with 10-fold cross validation on all the available data from the MERLIN corpus (441 instances), using the sets of features defined in Table 3 . 9 These two algorithms provided the best results among several other algorithms that we tried in the preliminary stage of the research; therefore, in the subsequent experiments, we used these two algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 96, |
|
"text": "[25]", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 101, |
|
"text": "10", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 284, |
|
"text": "9", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 280, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "10 Weka toolkit ver. 3.8.0, downloaded from http://www.cs.waikato.ac.nz/ml/weka/. As the data are relatively small, we chose the 10-fold cross validation instead of setting aside an evaluation test data, which in this case would be too small. Table 4 gives an overview of the performance of the two algorithms run with different feature sets. 11 The table gives the accuracy, i.e. the percentage of correctly classified instances, and also the absolute numbers of correctly and incorrectly classified instances. Statistically significant improvements over the baselines are marked with * for significance level 0.1 and ** for significance level 0.05. 11 Please note again that feature sets Baseline, Surface and All were set beforehand, thus the results of the algorithms using these feature sets may be considered more reliable than for feature sets Set 1 and Set 2, which were defined by subsequent experimenting with the two algorithms in an attempt to find the best set of features for each of them (again using the 10-fold cross validation on all the data). The confusion matrix for the Random Forest algorithm run with features from Set 1 is given in Table 5 . The confusion matrix for the Multilayer Perceptron algorithm run with features from Set 2 is given in Table 6 . We can count from the tables that if we allow for \"one level\" error in the classification (i.e. for example if we consider classification A2 instead of B1 still correct), the accuracy of the algorithms is 97.3% and 98.4%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 345, |
|
"text": "11", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 653, |
|
"text": "11", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 250, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1157, |
|
"end": 1164, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1269, |
|
"end": 1276, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features and Methods", |
|
"sec_num": "5.2" |
|
}, |
|
{

"text": "The tables also demonstrate that the algorithms have never classified levels A1 and C1 correctly. The reason is that these levels are represented by very small numbers of texts in the corpus (1 writing sample of A1 and 9 of C1) and therefore they do not provide sufficient language material for training. If the texts of A1 and C1 levels are excluded from the experiments, the success rates for the detection of A2/B1/B2 levels are slightly higher: Random Forest reaches 74.7% over Set 1 and Multilayer Perceptron 74.9% over Set 2.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results and Evaluation",

"sec_num": "5.3"

},
|
{ |
|
"text": "In this case, if we allow for \"one level\" error, the results are 97.2% for Random Forest and 98.4% for Multilayer Perceptron.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Linguistically, the experiments demonstrate that the most relevant features of surface coherence the (human or automatic) assessors should take into account are especially the following: frequency of connective words (expressing inter-or intra-sentential discourse relations such as and, but, because, although etc.); richness or variety of connective words (there is a difference between texts using almost exclusively the conjunction and and texts with a bigger diversity of connective words) and lexical richness of text spans (measured as word count per sentence).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In the paper, we have presented experiments on automatic evaluation of surface text coherence in writing samples by non-native speakers of Czech, more specifically on automatic detection of the individual CEFR levels. The main aim of our research was to examine to what extent the human assessment of surface text coherence can be simulated by automatic methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have used several distinctive features concerning discourse and observed which combination of them reaches the best results for the two selected algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The algorithm Random Forest achieved the highest succession rate (73%) with Set 1 and the algorithm Multilayer Perceptron with Set 2 (73.2%). With \"one level\" error in the classification, the accuracy of the algorithms is 97.3% and 98.4%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{

"text": "The experiments were carried out on the language data of the corpus MERLIN, containing altogether 441 writing samples across A1-C1 levels of coherence. However, levels A1 and C1 are rather rare (1 text of A1 and 9 of C1). If we exclude these two levels from the experiments and focus only on the detection of A2/B1/B2 levels, Random Forest reaches a 74.7% success rate over Set 1 and Multilayer Perceptron 74.9% over Set 2.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "6"

},
|
{ |
|
"text": "http://merlin-platform.eu/index.php 3 Corpus MERLIN does not contain C2 texts at the moment. 4 Common European Framework of Reference for Languages: Learning, Teaching, Assessment (Council of Europe, 2001)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors acknowledge support from the Ministry of Culture of the Czech Republic (project No. DG16P02B016 Automatic evaluation of text coherence in Czech).This work has been using language resources developed, stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluation metrics for generation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Whittaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the First International Conference on Natural Language Generation", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Bangalore, O. Rambow, and S. Whittaker, \"Evaluation metrics for generation,\" in Proceedings of the First International Conference on Natural Language Generation - Volume 14. Morristown, AJ, USA: Association for Computational Linguistics, 2000, pp. 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An unsupervised method for detecting grammatical errors", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "140--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Chodorow and C. Leacock, \"An unsupervised method for detecting grammatical errors,\" in Proceedings of the 1st North American chapter of the Association for Com- putational Linguistics conference (NAACL 2000). Stroudsburg, PA, USA: Association for Computational Linguistics, 2000, pp. 140-147.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics. Philadelphia, USA: Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evalu- ation of machine translation,\" in Proceedings of the 40th annual meeting on association for computational linguistics. Philadelphia, USA: Association for Computational Lin- guistics, 2002, pp. 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Evaluation of text coherence for electronic essay scoring systems", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kukich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Natural Language Engineering", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "25--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Miltsakaki and K. Kukich, \"Evaluation of text coherence for electronic essay scoring systems,\" Natural Language Engineering, vol. 10, no. 1, pp. 25-55, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic Evaluation of Text Coherence: Models and Representations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI)", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1085--1090", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Lapata and R. Barzilay, \"Automatic Evaluation of Text Coherence: Models and Representations,\" in Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), vol. 5, Edinburgh, 2005, pp. 1085-1090.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Evaluating multiple aspects of coherence in student essays", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Higgins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Burstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gentile", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "185--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Higgins, J. Burstein, D. Marcu, and C. Gentile, \"Evaluating multiple aspects of coherence in student essays.\" in Proceedings of HLT-NAACL, Boston, 2004, pp. 185- 192.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automatic Assessment of Non-Native Prosody for English as L2", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "H\u00f6nig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Batliner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Weilhammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "N\u00f6th", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of Speech Prosody", |
|
"volume": "100973", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. H\u00f6nig, A. Batliner, K. Weilhammer, and E. N\u00f6th, \"Automatic Assessment of Non- Native Prosody for English as L2,\" in Proceedings of Speech Prosody, vol. 100973, no. 1, Chicago, 2010, pp. 1-4.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic Classification of Article Errors in L2 Written English", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Varde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Fitzpatrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Twenty-Third International FLAIRS Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. M. Pradhan, A. S. Varde, J. Peng, and E. Fitzpatrick, \"Automatic Classification of Article Errors in L2 Written English,\" in Twenty-Third International FLAIRS Confer- ence, Florida, USA, 2010.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Automatic detection of frequent pronunciation errors made by L2-learners", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Truong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Neri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Wet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cucchiarini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Strik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of InterSpeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1345--1348", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. P. Truong, A. Neri, F. De Wet, C. Cucchiarini, and H. Strik, \"Automatic detection of frequent pronunciation errors made by L2-learners,\" in Proceedings of InterSpeech, Lisbon, Portugal, 2005, pp. 1345-1348.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Rhetorical Structure Theory: Toward a Functional Theory of Text Organization", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "243--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. C. Mann and S. A. Thompson, \"Rhetorical Structure Theory: Toward a Functional Theory of Text Organization,\" Text, vol. 8, no. 3, pp. 243-281, 1988.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Reference to Abstract Objects in Discourse", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Asher, Reference to Abstract Objects in Discourse. Dordrecht: Kluwer Academic Publishers, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The Penn Discourse Treebank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Robaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2961--2968", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Prasad, N. Dinesh, A. Lee, E. Miltsakaki, L. Robaldo, A. Joshi, and B. Webber, \"The Penn Discourse Treebank 2.0,\" in Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, and D. Tapias, Eds. Marrakech: European Language Resources Association, 2008, pp. 2961-2968.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The MERLIN corpus: Learner language and the CEFR", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Boyd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Nicolas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Abel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sch\u00f6ne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Stindlov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Vettori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of LREC 2014", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1281--1288", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Boyd, J. Hana, L. Nicolas, D. Meurers, K. Wisniewski, A. Abel, K. Sch\u00f6ne, B. Stindlov\u00e1, and C. Vettori, \"The MERLIN corpus: Learner language and the CEFR.\" in Proceedings of LREC 2014, 2014, pp. 1281-1288.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bands and scores", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Alderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language testing in the 1990s", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. C. Alderson, \"Bands and scores,\" Language testing in the 1990s, pp. 71-86, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The CEFR levels and descriptor scales", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "North", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "2nd International Conference of ALTE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. North, \"The CEFR levels and descriptor scales,\" in Unpublished manuscript, from a paper presented at the 2nd International Conference of ALTE, Berlin, Germany, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The development of a common framework scale of language proficiency", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "North", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "--, The development of a common framework scale of language proficiency. Peter Lang New York, USA, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Generativn\u00ed syst\u00e9my v lingvistice", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "Slovo a slovesnost", |
|
"volume": "25", |
|
"issue": "4", |
|
"pages": "274--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Sgall, \"Generativn\u00ed syst\u00e9my v lingvistice [Generative systems in linguistics],\" Slovo a slovesnost, vol. 25, no. 4, pp. 274-282, 1964.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Generativn\u00ed popis jazyka a \u010desk\u00e1 deklinace", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "--, Generativn\u00ed popis jazyka a \u010desk\u00e1 deklinace [Generative Description of Language and Czech Declension]. Prague: Academia, 1967.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Treex - an open-source framework for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "\u017dabokrtsk\u00fd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Information Technologies - Applications and Theory, M. Lopatkov\u00e1", |
|
"volume": "788", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Z. \u017dabokrtsk\u00fd, \"Treex - an open-source framework for natural language processing,\" in Information Technologies - Applications and Theory, M. Lopatkov\u00e1, Ed., vol. 788.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Introducing the Prague Discourse Treebank 1.0", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Pol\u00e1kov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nedoluzhko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "J\u00ednov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0160", |
|
"middle": [], |
|
"last": "Zik\u00e1nov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing. Nagoya: Asian Federation of Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Pol\u00e1kov\u00e1, J. M\u00edrovsk\u00fd, A. Nedoluzhko, P. J\u00ednov\u00e1, \u0160. Zik\u00e1nov\u00e1, and E. Haji\u010dov\u00e1, \"Introducing the Prague Discourse Treebank 1.0,\" in Proceedings of the Sixth International Joint Conference on Natural Language Processing. Nagoya: Asian Federation of Natural Language Processing, 2013, pp. 91-99.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Semi-Automatic Annotation of Intra-Sentential Discourse Relations in PDT", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "J\u00ednov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Pol\u00e1kov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Workshop on Advances in Discourse Analysis and its Computational Aspects (ADACA) at Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. J\u00ednov\u00e1, J. M\u00edrovsk\u00fd, and L. Pol\u00e1kov\u00e1, \"Semi-Automatic Annotation of Intra-Sentential Discourse Relations in PDT,\" in Proceedings of the Workshop on Advances in Discourse Analysis and its Computational Aspects (ADACA) at Coling 2012, E. Haji\u010dov\u00e1, L. Pol\u00e1kov\u00e1, and J. M\u00edrovsk\u00fd, Eds. Bombay: Coling 2012 Organizing Committee, 2012, pp. 43-58.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Querying Diverse Treebanks in a Uniform Way", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "\u0160t\u011bp\u00e1nek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pajas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1828--1835", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. \u0160t\u011bp\u00e1nek and P. Pajas, \"Querying Diverse Treebanks in a Uniform Way,\" in Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010). Valletta, Malta: European Language Resources Association, 2010, pp. 1828-1835.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The weka data mining software: an update", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pfahringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Reutemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM SIGKDD explorations newsletter", |
|
"volume": "11", |
|
"issue": "1", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, \"The weka data mining software: an update,\" ACM SIGKDD explorations newsletter, vol. 11, no. 1, pp. 10-18, 2009.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "provided by the Test Centre of the Institute of Language and Preparatory Studies at the Charles University in Prague in line with the high ALTE (Association of Language Testers The 2016 Conference on Computational Linguistics and Speech Processing ROCLING 2016, pp. 214-228 \u00a9 The Association for Computational Linguistics and Chinese Language Processing", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "containing altogether 2,286 writing samples by non-native speakers (learners) of Czech, German and Italian. German and Italian texts of the corpus were collected by TELC (The European Language Certificates) and Czech texts were provided by the Test Centre of the Institute of Language and Preparatory Studies at the Charles University in Prague. Both institutions (as full members of The Association of Language Testers in Europe (ALTE)) offer internationally recognized language exams in accordance with the high ALTE standards. All texts forming the corpus MERLIN were created as outputs of standardized tasks aligned to the Common European Framework of Reference for Languages (CEFR) - it means that all writing samples are evaluated across the CEFR levels, in the MERLIN case as A1-C1. 3", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "e.g. teda as an informal variant of tedy [so, therefore]). The feature counts the number of occurrences of these connective words in the tested text, without trying to distinguish their connective and non-connective usages, and normalizes the count to 100 sentences. The Baseline is thus as follows: \u2022 number of all connective words per 100 sentences Another set of features - called Surface features - consists of features that only use tokenization and sentence segmentation. They do not use any advanced part of the text analysis such as syntactic parsing and discourse parsing. These features also include the baseline feature; all together they are: \u2022 number of all connective words per 100 sentences \u2022 number of coordinating connective words per 100 sentences \u2022 number of subordinating connective words per 100 sentences \u2022 number of tokens per sentence Other features extract information from the automatically parsed tree structures and from automatically annotated discourse relations. Together with the surface features they form a feature set called All features. Here is a list of the additional features: \u2022 number of intra-sentential discourse relations per 100 sentences \u2022 number of inter-sentential discourse relations per 100 sentences \u2022 number of all discourse relations per 100 sentences \u2022 number of different connectives in all discourse relations \u2022 ratio of discourse relations with connective a [and] \u2022 number of predicate-less sentences per 100 sentences \u2022 ratio of discourse relations from class Temporal \u2022 ratio of discourse relations from class Contingency \u2022 ratio of discourse relations from class Contrast \u2022 ratio of discourse relations from class Expansion These three sets of features (Baseline, Surface, All) were predefined before the experiments with the machine learning methods. We also experimented with other sets of features (Set 1", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"text": "of discourse relations with connective a [and]", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Confusion matrix for Random Forest with Set 1 (classes in the rows classified as classes in the columns). Confusion matrix for Multilayer Perceptron with Set 2 (classes in the rows classified as classes in the columns).", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Uvid\u00edm t\u011b po\u017edeji</td></tr><tr><td>David</td></tr><tr><td>Literal translation into English: 5</td></tr><tr><td>Hello Martin,</td></tr><tr><td>See you later</td></tr><tr><td>David</td></tr></table>", |
|
"num": null, |
|
"text": "Example 1 demonstrates a Czech writing sample from the corpus MERLIN (the overall CEFR rating of this text is A2, i.e. basic user -elementary level): (1) \u010cau Martine, Chci T\u011b zaprv\u00e9 pod\u011bkovat \u017ee si m\u011b pozval. J\u00e1 je\u0161t\u011b pot\u0159ebuju ale v\u011bdet kdy to za\u010d\u00edn\u00e1? Abychom jsem mohl v\u011bd\u011bt kdy mus\u00edm z domova odej\u00edt. Kdo je\u0161t\u011b p\u0159\u00edjde, budou tam Tom\u00e1\u0161 a Luk\u00e1\u0161, jestli ano, tak fajn. Budou tam tvoje rodi\u010de, Radek cht\u011bl v\u011bd\u011bt. First, I want to thank you that you have invited me. But I need to know when it begins? In order to know when I must leave my home. Who will come -Tom\u00e1\u0161 and Luk\u00e1\u0161 as well? If yes, it is fine. Your parents will be there? Radek wanted to know. The writing sample in Example 1 is provided with the MERLIN evaluation criteria presented in Table 1, i.e. with the assessments by the trained human evaluators.", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>Overall CEFR rating</td><td>A2</td></tr><tr><td>Grammatical accuracy</td><td>A2</td></tr><tr><td>Orthography</td><td>B1</td></tr><tr><td>Vocabulary range</td><td>A2</td></tr><tr><td>Vocabulary control</td><td>A2</td></tr><tr><td>Coherence/Cohesion</td><td>A2</td></tr><tr><td colspan=\"2\">Sociolinguistic appropriateness A1</td></tr><tr><td>4.2 The</td><td/></tr></table>", |
|
"num": null, |
|
"text": "Evaluating table for the MERLIN writing sample in Example 1. The writing sample in Example 1 was assigned A2 level for Coherence/Cohesion. Corpus MERLIN contains altogether 441 writing samples in Czech across the A1-C1 levels. Their", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>MERLIN</td><td/></tr><tr><td colspan=\"2\">Coherence level Number of texts</td></tr><tr><td>A1</td><td>1</td></tr><tr><td>A2</td><td>102</td></tr><tr><td>B1</td><td>172</td></tr><tr><td>B2</td><td>157</td></tr><tr><td>C1</td><td>9</td></tr><tr><td>Total</td><td>441</td></tr><tr><td>5 The Experiment</td><td/></tr></table>", |
|
"num": null, |
|
"text": "Distribution of Czech writing samples across CEFR levels of coherence in corpus MERLIN. Our goal was to experimentally verify whether and to what extent the human annotation of the Coherence/Cohesion CEFR mark can be simulated by automatic methods. We tried to find possible distinctive criteria/features for automatic detection of the individual CEFR levels in this category.", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>syntactic</td></tr></table>", |
|
"num": null, |
|
"text": "parsing and deep syntactic parsing. On top of the automatically parsed dependency trees of the tectogrammatical layer, we automatically annotated explicit discourse relations (i.e. relations expressed by discourse connectives). As a theoretical background for capturing discourse relations in text, we employed the approach described in Pol\u00e1kov\u00e1 et al. [21] and used first in the annotation of the Prague Discourse Treebank 1.0 (PDiT; [22]) and later in the Prague Dependency Treebank 3.0 [13]. It is an approach similar to (and based on) the approach used for the annotation of the Penn Discourse Treebank 2.0 (PDTB; [12]). Both these approaches are lexically based and aim at capturing local discourse relations (between clauses, sentences, or", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Various sets of features used in the experiments.", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>Experiment</td><td colspan=\"3\">Accuracy (%) Correct Incorrect</td></tr><tr><td>Random Forest, Baseline</td><td>57.1</td><td>252</td><td>189</td></tr><tr><td>Random Forest, Surface features</td><td>62.6</td><td>276</td><td>165</td></tr><tr><td>Random Forest, Set 1</td><td>** 73.0</td><td>322</td><td>119</td></tr><tr><td>Random Forest, Set 2</td><td>* 67.1</td><td>296</td><td>145</td></tr><tr><td>Random Forest, All features</td><td>** 70.3</td><td>310</td><td>131</td></tr><tr><td>Multilayer Perceptron, Baseline</td><td>60.8</td><td>268</td><td>173</td></tr><tr><td>Multilayer Perceptron, Surface features</td><td>* 66.2</td><td>292</td><td>149</td></tr><tr><td>Multilayer Perceptron, Set 1</td><td>** 71.9</td><td>317</td><td>124</td></tr><tr><td>Multilayer Perceptron, Set 2</td><td>** 73.2</td><td>323</td><td>118</td></tr><tr><td>Multilayer Perceptron, All features</td><td>* 68.0</td><td>300</td><td>141</td></tr></table>", |
|
"num": null, |
|
"text": "Results of the experiments -accuracy, number of correctly classified instances and number of incorrectly classified instances. Statistically significant improvements over the respective baselines (tested with paired t-test) are marked with * for significance level 0.1 and ** for significance level 0.05.", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |