|
{ |
|
"paper_id": "W05-0211", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:45:08.649850Z" |
|
}, |
|
"title": "Evaluating State-of-the-Art Treebank-style Parsers for Coh-Metrix and Other Learning Technology Environments", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Hempelmann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Memphis Memphis", |
|
"location": { |
|
"postCode": "38120", |
|
"region": "TN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Vasile", |
|
"middle": [], |
|
"last": "Rus", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Memphis Memphis", |
|
"location": { |
|
"postCode": "38120", |
|
"region": "TN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Graesser", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Memphis Memphis", |
|
"location": { |
|
"postCode": "38120", |
|
"region": "TN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcnamara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Memphis Memphis", |
|
"location": { |
|
"postCode": "38120", |
|
"region": "TN", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.", |
|
"pdf_parse": { |
|
"paper_id": "W05-0211", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The task of syntactic parsing is valuable to most natural language understanding applications, e.g., anaphora resolution, machine translation, or question answering. Syntactic parsing in its most general definition may be viewed as discovering the underlying syntactic structure of a sentence. The specificities include the types of elements and relations that are retrieved by the parsing process and the way in which they are represented. For example, Treebank-style parsers retrieve a bracketed form that encodes a hierarchical organization (tree) of smaller elements (called phrases), while Grammatical-Relations(GR)-style parsers explicitly output relations together with elements involved in the relation (subj(John,walk)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
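As a concrete illustration of the two output styles just described, here is a minimal sketch (not from the paper or from any particular parser) that reads a Treebank-style bracketed string into a nested structure; the example sentence, tags, and function name are illustrative assumptions.

```python
# Minimal sketch, assuming PTB-style bracketing as described above.
def read_ptb(s):
    """Parse a bracketed string into nested (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(parse())      # nested phrase
            else:
                children.append(tokens[pos])  # terminal word
                pos += 1
        pos += 1                              # consume ")"
        return (label, children)

    return parse()

print(read_ptb("(S (NP (NNP John)) (VP (VBZ walks)))"))
# ('S', [('NP', [('NNP', ['John'])]), ('VP', [('VBZ', ['walks'])])])
# A GR-style parser would instead output the relation directly, e.g. subj(John, walk).
```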
|
{ |
|
"text": "The present paper presents an evaluation of parsers for the Coh-Metrix project (Graesser et al., 2004) at the Institute for Intelligent Systems of the University of Memphis. Coh-Metrix is a text-processing tool that provides new methods of automatically assessing text cohesion, readability, and difficulty. In its present form, v1.1, few cohesion measures are based on syntactic information, but its next incarnation, v2.0, will depend more heavily on hierarchical syntactic information. We are developing these measures. Thus, our current goal is to provide the most reliable parser output available for them, while still being able to process larger texts in real time. The usual trade-off between accuracy and speed has to be taken into account.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 102, |
|
"text": "(Graesser et al., 2004)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the first part of the evaluation, we adopt a constituent-based approach for evaluation, as the output parses are all derived in one way or another from the same data and generate similar, bracketed output. The major goal is to consistently evaluate the freely available state-ofthe-art parsers on a standard data set and across genre on corpora typical for learning technology environments. We report parsers' competitiveness along an array of dimensions including performance, robustness, tagging facility, stability, and length of input they can handle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Next, we briefly address particular types of misparses and mistags in their relation to measures planned for Coh-Metrix 2.0 and assumed to be typical for learning technology applications. Coh-Metrix 2.0 measures that centrally rely on good parses include: causal and intentional cohesion, for which the main verb and its subject must be identified; anaphora resolution, for which the syntactic relations of pronoun and referent must be identified; temporal cohesion, for which the main verb and its tense/aspect must be identified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These measures require complex algorithms operating on the cleanest possible sentence parse, as a faulty parse will lead to a cascading error effect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the purpose of this work is not to propose a taxonomy of all available parsers, we consider it necessary to offer a brief overview of the various parser dimensions. Parsers can be classified according to their general approach (handbuilt-grammar-based versus statistical), the way rules in parses are built (selective vs. generative), the parsing algorithm they use (LR, chart parser, etc.), type of grammar (unification-based grammars, context-free grammars, lexicalized context-free grammars, etc.), the representation of the output (bracketed, list of relations, etc.), and the type of output itself (phrases vs grammatical relations). Of particular interest to our work are Treebank-style parsers, i.e., parsers producing an output conforming to the Penn Treebank (PTB) annotation guidelines. The PTB project defined a tag set and bracketed form to represent syntactic trees that became a standard for parsers developed/trained on PTB. It also produced a treebank, a collection of handannotated texts with syntactic information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parser Types", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Given the large number of dimensions along which parsers can be distinguished, an evaluation framework that would provide both parserspecific (to understand the strength of different technologies) and parser-independent (to be able to compare different parsers) performance figures is desirable and commonly used in the literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parser Types", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Evaluation methods can be broadly divided into non-corpus-and corpus-based methods with the latter subdivided into unannotated and annotated corpus-based methods (Carroll et al., 1999) . The non-corpus method sim-ply lists linguistic constructions covered by the parser/grammar. It is well-suited for handbuilt grammars because during the construction phase the covered cases can be recorded. However, it has problems with capturing complexities occuring from the interaction of covered cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 184, |
|
"text": "(Carroll et al., 1999)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Parser Evaluation Methods", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "The most widely used corpus-based evaluation methods are: (1) the constituentbased (phrase structure) method, and (2) the dependency/GR-based method. The former has its roots in the Grammar Evaluation Interest Group (GEIG) scheme (Grishman et al., 1992) developed to compare parsers with different underlying grammatical formalisms. It promoted the use of phrase-structure bracketed information and defined Precision, Recall, and Crossing Brackets measures. The GEIG measures were extended later to constituent information (bracketing information plus label) and have since become the standard for reporting automated syntactic parsing performance. Among the advantages of constituent-based evaluation are generality (less parser specificity) and fine grain size of the measures. On the other hand, the measures of the method are weaker than exact sentence measures (full identity), and it is not clear if they properly measure how well a parser identifies the true structure of a sentence. Many phrase boundary mismatches spawn from differences between parsers/grammars and corpus annotation schemes (Lin, 1995) . Usually, treebanks are constructed with respect to informal guidelines. Annotators often interpret them differently leading to a large number of different structural configurations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 253, |
|
"text": "(Grishman et al., 1992)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1101, |
|
"end": 1112, |
|
"text": "(Lin, 1995)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Parser Evaluation Methods", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "There are two major approaches to evaluate parsers using the constituent-based method. On the one hand, there is the expert-only approach in which an expert looks at the output of a parser, counts errors, and reports different measures. We use a variant of this approach for the directed parser evaluation (see next section). Using a gold standard, on the other hand, is a method that can be automated to a higher degree. It replaces the counting part of the former method with a software system that compares the output of the parser to the gold standard, highly accurate data, manually parsed \u2212 or automatically parsed and manually corrected \u2212 by human experts. The latter approach is more useful for scaling up evaluations to large collections of data while the expert-only approach is more flexible, allowing for evaluation of parsers from new perspectives and with a view to special applications, e.g., in learning technology environments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Parser Evaluation Methods", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "In the first part of this work we use the gold standard approach for parser evaluation. The evaluation is done from two different points of view. First, we offer a uniform evaluation for the parsers on section 23 from the Wall Street Journal (WSJ) section of PTB, the community norm for reporting parser performance. The goal of this first evaluation is to offer a good estimation of the parsers when evaluated in identical environments (same configuration parameters for the evaluator software). We also observe the following features which are extremely important for using the parsers in large-scale text processing and to embed them as components in larger systems. Self-tagging: whether or not the parser does tagging itself. It is advantageous to take in raw text since it eliminates the need for extra modules. Performance: if the performance is in the mid and upper 80th percentiles. Long sentences: the ability of the parser to handle sentences longer than 40 words. Robustness: relates to the property of a parser to handle any type of input sentence and return a reasonable output for it and not an empty line or some other useless output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Parser Evaluation Methods", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "Second, we evaluate the parsers on narrative and expository texts to study their performance across the two genres. This second evaluation step will provide additional important results for learning technology projects. We use evalb (http://nlp.cs.nyu.edu/evalb/) to evaluate the bracketing performance of the output of a parser against a gold standard. The software evaluator reports numerous measures of which we only report the two most important: labelled precision (LR), labelled recall (LR) which are discussed in more detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Parser Evaluation Methods", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "For the third step of this evaluation we looked for specific problems that will affect Coh-Metrix 2.0, and presumably learning technology applications in general, with a view to amending them by postprocessing the parser output. The following four classes of problems in a sentence's parse were distinguished: None: The parse is generally correct, unambiguous, poses no problem for Coh-Metrix 2.0. One: There was one minor problem, e.g., a mislabeled terminal or a wrong scope of an adverbial or prepositional phrase (wrong attachment site) that did not affect the overall parse of the sentence, which is therefore still usable for Coh-Metrix 2.0 measures. Two: There were two or three problems of the type one, or a problem with the tree structure that affected the overall parse of the sentence, but not in a fatal manner, e.g., a wrong phrase boundary, or a mislabelled higher constituent. Three: There were two or more problems of the type two, or two or more of the type one as well as one or more of the type two, or another fundamental problem that made the parse of the sentence completely useless, unintelligible, e.g., an omitted sentence or a sentence split into two, because a sentence boundary was misidentified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Directed Parser Evaluation Method", |
|
"sec_num": "1.3" |
|
}, |
|
{ |
|
"text": "Apple Pie (AP) (Sekine and Grishman, 1995) extracts a grammar from PTB v.2 in which S and NP are the only true non-terminals (the others are included into the right-hand side of S and NP rules). The rules extracted from the PTB have S or NP on the left-hand side and a flat structure on the right-hand side, for instance S \u2192 NP VBX JJ. Each such rule has the most common structure in the PTB associated with it, and if the parser uses the rule it will generate its corresponding structure. The parser is a chart parser and factors grammar rules with common prefixes to reduce the number of active nodes. Although the underlying model of the parser is simple, it can't handle sentences over 40 words due to the large variety of linguistic constructs in the PTB.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 42, |
|
"text": "(Sekine and Grishman, 1995)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Apple Pie", |
|
"sec_num": "2.1" |
|
}, |
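A rough sketch of the kind of flat rule extraction described above is given below; it is a simplification for illustration only, not Sekine and Grishman's actual procedure. Trees are assumed to be nested (label, children) tuples with terminal words as plain strings, and all names are hypothetical.

```python
# Illustrative sketch only: keep S and NP as the only true non-terminals and
# flatten every other child into its preterminal (tag) sequence.
from collections import Counter

TRUE_NONTERMINALS = {"S", "NP"}

def frontier(node):
    """Flatten a constituent into its sequence of preterminal tags."""
    label, children = node
    if all(isinstance(c, str) for c in children):      # preterminal node
        return [label]
    return [t for c in children if isinstance(c, tuple) for t in frontier(c)]

def flat_rules(node, rules):
    """Collect flat rules with S or NP on the left-hand side."""
    label, children = node
    if label in TRUE_NONTERMINALS:
        rhs = []
        for c in children:
            if isinstance(c, tuple) and c[0] in TRUE_NONTERMINALS:
                rhs.append(c[0])            # keep S/NP as symbols
            elif isinstance(c, tuple):
                rhs.extend(frontier(c))     # flatten everything else
        rules[(label, tuple(rhs))] += 1
    for c in children:
        if isinstance(c, tuple):
            flat_rules(c, rules)
    return rules

tree = ("S", [("NP", [("NNP", ["John"])]),
              ("VP", [("VBZ", ["is"]), ("ADJP", [("JJ", ["tall"])])])])
print(flat_rules(tree, Counter()))
# e.g. {('S', ('NP', 'VBZ', 'JJ')): 1, ('NP', ('NNP',)): 1}
```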
|
{ |
|
"text": "Charniak presents a parser (CP) based on probabilities gathered from the WSJ part of the PTB (Charniak, 1997) . It extracts the grammar and probabilities and with a standard context-free chart-parsing mechanism generates a set of possible parses for each sentence retaining the one with the highest probability (probabilities are not computed for all possible parses). The probabilities of an entire tree are computed bottomup. In (Charniak, 2000) , he proposes a generative model based on a Markov-grammar. It uses a standard bottom-up, best-first probabilistic parser to first generate possible parses before ranking them with a probabilistic model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 109, |
|
"text": "(Charniak, 1997)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 447, |
|
"text": "(Charniak, 2000)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Charniak's Parser", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Collins's statistical parser (CBP; (Collins, 1997) ), improved by Bikel (Bikel, 2004) , is based on the probabilities between head-words in parse trees. It explicitly represents the parse probabilities in terms of basic syntactic relationships of these lexical heads. Collins defines a mapping from parse trees to sets of dependencies, on which he defines his statistical model. A set of rules defines a head-child for each node in the tree. The lexical head of the headchild of each node becomes the lexical head of the parent node. Associated with each node is a set of dependencies derived in the following way. For each non-head child, a dependency is added to the set where the dependency is identified by a triplet consisting of the non-head-child non-terminal, the parent non-terminal, and the head-child non-terminal. The parser is a CYKstyle dynamic programming chart parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 50, |
|
"text": "(Collins, 1997)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 85, |
|
"text": "(Bikel, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collins's (Bikel's) Parser", |
|
"sec_num": "2.3" |
|
}, |
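The dependency mapping just described can be illustrated with the following sketch; the head rules are a toy stand-in (Collins uses a much richer head-finding table), and the tree format, names, and example are assumptions made for illustration only.

```python
# Illustrative sketch of extracting (non-head-child, parent, head-child)
# non-terminal triplets; trees are nested (label, children) tuples with
# terminal words as plain strings. TOY_HEAD_RULES is a hypothetical stand-in.
TOY_HEAD_RULES = {"S": "VP", "VP": "VBZ", "NP": "NNP"}

def head_child_index(label, children):
    """Pick the index of the head child; fall back to the last phrasal child."""
    wanted = TOY_HEAD_RULES.get(label)
    fallback = None
    for i, child in enumerate(children):
        if isinstance(child, tuple):
            fallback = i
            if child[0] == wanted:
                return i
    return fallback

def dependencies(tree, deps=None):
    """Collect triplets for every node with more than one phrasal child."""
    if deps is None:
        deps = []
    label, children = tree
    phrasal = [c for c in children if isinstance(c, tuple)]
    if len(phrasal) > 1:
        h = head_child_index(label, children)
        head_label = children[h][0]
        for i, child in enumerate(children):
            if isinstance(child, tuple) and i != h:
                deps.append((child[0], label, head_label))
    for child in phrasal:
        dependencies(child, deps)
    return deps

tree = ("S", [("NP", [("NNP", ["John"])]), ("VP", [("VBZ", ["walks"])])])
print(dependencies(tree))  # [('NP', 'S', 'VP')]
```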
|
{ |
|
"text": "The Stanford Parser (SP) is an unlexicalized parser that rivals state-of-the-art lexicalized ones (Klein and Manning, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 123, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stanford Parser", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "It uses a context-free grammar with state splits. The parsing algorithm is simpler, the grammar smaller and fewer parameters are needed for the estimation. It uses a CKY chart parser which exhaustively generates all possible parses for a sentence before it selects the highest probability tree. Here we used the default lexicalized version.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stanford Parser", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We performed experiments on three data sets. First, we chose the norm for large scale parser evaluation, the 2416 sentences of WSJ section 23. Since parsers have different parameters that can be tuned leading to (slightly) different results we first report performance values on the standard data set and then use same parameter settings on the second data set for more reliable comparison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The second experiment is on a set of three narrative and four expository texts. The gold standard for this second data set was built manually by the authors starting from CP's as well as SP's output on those texts. The four texts used initially are two expository and two narrative texts of reasonable length for detailed evaluation: We also tested all four parsers for speed on a corpus of four texts chosen randomly from the Metametrix corpus of school text books, across high and low grade levels and across narrative and science texts (see Section 3.2.2). G4: 4th grade narrative text, 1,500 sentences, 18,835 words: 12.56 words/sentence; G6: 6th grade science text, 1,500 sentences, 18,237 words: 12.16 words/sentence; G11: 11th grade narrative text, 1,558 sentences, 18,583 words: 11.93 words/sentence; G12: 12th grade science text, 1,520 sentences, 25,098 words: 16.51 words/sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The parameters file we used for evalb was the standard one that comes with the package. Some parsers are not robust, meaning that for some input they do not output anything, leading to empty lines that are not handled by the evaluator. Those parses had to be \"aligned\" with the gold standard files so that empty lines are eliminated from the output file together with their peers in the corresponding gold standard files.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accuracy", |
|
"sec_num": "3.2.1" |
|
}, |
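A rough sketch of this alignment step, followed by an evalb call, is given below; the file names and the parameter file name are placeholders, and only evalb's documented command-line form (evalb -p parameter_file gold_file test_file) is assumed.

```python
# Illustrative sketch only: drop empty parser-output lines together with their
# gold-standard peers, then score the aligned pair with evalb.
import subprocess

def align(parsed_path, gold_path, out_parsed="parsed.aligned", out_gold="gold.aligned"):
    with open(parsed_path) as p, open(gold_path) as g:
        pairs = [(pl, gl) for pl, gl in zip(p, g) if pl.strip()]  # keep non-empty parses
    with open(out_parsed, "w") as p_out, open(out_gold, "w") as g_out:
        p_out.writelines(pl for pl, _ in pairs)
        g_out.writelines(gl for _, gl in pairs)
    return out_parsed, out_gold

parsed, gold = align("parser_output.mrg", "gold_standard.mrg")  # placeholder names
subprocess.run(["evalb", "-p", "new.prm", gold, parsed], check=True)
```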
|
{ |
|
"text": "In Table 1 we report the performance values on Section 23 of WSJ. Table 2 shows the results for our own corpus. The table gives the average values of two test runs, one against the SP-based gold standard, the other against the CP-based gold standard, to counterbalance the bias of the standards. Note that CP and SP possibly still score high because of this bias. However, CBP is clearly a contender despite the bias, while AP is not. 1 The reported metrics are Labelled Precision (LP) and Labelled Recall (LR). Let us denote by a the number of correct phrases in the output from a parser for a sentence, by b the number of incorrect phrases in the output and by c the number of phrases in the gold standard for the same sentence. LP is defined as a/(a+b) and LR is defined as a/c. A summary of the other dimensions of the evaluation is offered in Table 3 . A stability dimension is not reported 1 AP's performance is reported for sentences < 40 words in length, 2,250 out of 2,416. SP is also not robust enough and the performance reported is only on 2,094 out of 2,416 sentences in section 23 of WSJ.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 855, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy", |
|
"sec_num": "3.2.1" |
|
}, |
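To make the LP/LR definitions above concrete, here is a minimal sketch (not part of the paper, and much simpler than evalb itself) that computes labelled precision and recall from constituent spans; the span format and the example trees are illustrative only.

```python
# Minimal sketch of labelled precision/recall as defined above. Constituents
# are assumed to be given as (label, start, end) spans already extracted from
# the bracketed parses; evalb performs that extraction itself.
from collections import Counter

def labelled_pr(test_spans, gold_spans):
    """LP = a/(a+b), LR = a/c, where a = correct phrases in the parser output,
    a+b = all phrases in the parser output, c = phrases in the gold standard."""
    test, gold = Counter(test_spans), Counter(gold_spans)
    a = sum(min(test[s], gold[s]) for s in test)    # correct phrases
    lp = a / sum(test.values()) if test else 0.0    # a / (a + b)
    lr = a / sum(gold.values()) if gold else 0.0    # a / c
    return lp, lr

# Hypothetical example: the parser finds the NP but mislabels the VP span as PP.
gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)]
test = [("S", 0, 5), ("NP", 0, 2), ("PP", 2, 5)]
print(labelled_pr(test, gold))  # roughly (0.67, 0.67)
```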
|
{ |
|
"text": "because we were not able to find a bullet-proof parser so far, but we must recognize that some parsers are significantly more stable than others, namely CP and CBP. In terms of resources needed, the parsers are comparable, except for AP which uses less memory and processing time. The LP/LR of AP is significantly lower, partly due to its outputting partial trees for longer sentences. Overall, CP offers the best performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accuracy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Note in Table 1 that CP's tagging accuracy is worst among the three top parsers but still delivers best overall parsing results. This means that its parsing-only performance is slighstly better than the numbers in the table indicate. The numbers actually represent the tagging and parsing accuracy of the tested parsing systems. Nevertheless, this is what we would most likely want to know since one would prefer to input raw text as opposed to tagged text. If more finely grained comparisons of only the parsing aspects of the parsers are required, perfect tags extracted from PTB must be provided to measure performance. Table 4 shows average measures for each of the parsers on the PTB and seven expository and narrative texts in the second column and for expository and narrative in the fourth column. The third and fifth columns contain standard deviations for the previous columns, respectively. Here too, CP shows the best result.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 630, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "All parsers ran on the same Linux Debian machine: P4 at 3.4GHz with 1.0GB of RAM. 2 AP's and SP's high speeds can be explained to a large degree by their skipping longer sentences, the very ones that lead to the longer times for the other two candidates. Taking this into account, SP is clearly the fastest, but the large range of processing times need to be heeded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Speed", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "This section reports the results of expert rating of texts for specific problems (see Section 1.3). The best results are produced by CP with an average of 88.69% output useable for Coh-Metrix 2.0 (Table 6 ). CP also produces good output As expected, longer sentences are more problematic for all parsers, as can be seen in Table 7 . No significant trends in performance differences with respect to genre difference, narrative (Orlando, Moving, Betty03) vs. expository texts (Heat, Plants, Barron17, Olga91), were detected (cf. also speed results in Table 5 ). But we assume that the difference in average sentence length obscures any genre differences in our small sample.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 204, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 330, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 556, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Directed Parser Evaluation Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The most common non-fatal problems (type one) involved the well-documented adjunct attachment site issue, in particular for prepositional phrases ((Abney et al., 1999) , (Brill and Resnik, 1994) , (Collins and Brooks, 1995) ) as well as adjectival phrases (Table 8) 3 . Similar misattachment issues for adjuncts are encountered with adverbial phrases, but they were rare 3 PP = wrong attachment site for a prepositional phrase; ADV = wrong attachment site for an adverbial phrase; cNP = misparsed complex noun phrase; &X = wrong coordination Another common problem are deverbal nouns and denominal verbs, as well as -ing/VBG forms. They share surface forms leading to ambiguous part of speech assignments. For many Coh-Metrix 2.0 measures, most obviously temporal cohesion, it is necessary to be able to distinguish gerunds from gerundives and deverbal adjectives and deverbal nouns. Problems with NP misidentification are particularly detrimental in view of the important role of NPs in Coh-Metrix 2.0 measures. This pertains in particular to the mistagging/misparsing of complex NPs and the coordination of NPs. Parses with fatal problems are expected to produce useless results for algorithms operating with them. Wrong coordination is another notorious problem of parsers (cf. (Cremers, 1993) , (Grootveld, 1994) ). In our corpus we found 33 instances of miscoordination, of which 23 involved NPs. Postprocessing approaches that address these issues are currently under investigation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 167, |
|
"text": "((Abney et al., 1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 194, |
|
"text": "(Brill and Resnik, 1994)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 223, |
|
"text": "(Collins and Brooks, 1995)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1281, |
|
"end": 1296, |
|
"text": "(Cremers, 1993)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1299, |
|
"end": 1316, |
|
"text": "(Grootveld, 1994)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 265, |
|
"text": "(Table 8)", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Directed Parser Evaluation Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The paper presented the evaluation of freely available, Treebank-style, parsers. We offered a uniform evaluation for four parsers: Apple Pie, Charniak's, Collins/Bikel's, and the Stanford parser. A novelty of this work is the evaluation of the parsers along new dimensions such as stability and robustness and across genre, in particular narrative and expository. For the latter part we developed a gold standard for narrative and expository texts from the TASA corpus. No significant effect, not already captured by variation in sentence length, could be found here. Another novelty is the evaluation of the parsers with respect to particular error types that are anticipated to be problematic for a given use of the resulting parses. The reader is invited to have a closer look at the figures our tables provide. We lack the space in the present paper to discuss them in more detail. Overall, Charniak's parser emerged as the most succesful candidate of a parser to be integrated where learning technology requires syntactic information from real text in real time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Some of the parsers also run under Windows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by Institute for Educations Science Grant IES R3056020018-02. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the IES. We are grateful to Philip M. McCarthy for his assistance in preparing some of our data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ACKNOWLEDGEMENTS", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Boosting applied to tagging and pp attachment", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schapire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abney, R. E. Schapire, and Y. Singer. 1999. Boosting applied to tagging and pp attachment. Proceedings of the 1999 Joint SIGDAT Confer- ence on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 38-45.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Intricacies of collins' parsing model. Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bikel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "479--511", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. M. Bikel. 2004. Intricacies of collins' parsing model. Computational Linguistics, 30-4:479-511.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A rule-based approach to prepositional phrase attachment disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Brill and P. Resnik. 1994. A rule-based approach to prepositional phrase attachment disambigua- tion. In Proceedings of the 15th International Con- ference on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Parser evaluation: current practice", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sanfilippo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Carroll, E. Briscoe, and A. Sanfilippo, 1999. Parser evaluation: current practice, pages 140- 150. EC DG-XIII LRE EAGLES Document EAG- II-EWG-PR.1.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Statistical parsing with a context-free grammar and word statistics", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. Pro- ceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI Press/MIT Press, Menlo Park.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the North-American Chapter of Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the North-American Chapter of Association for Computational Lin- guistics, Seattle, Washington.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Prepositional phrase attachment through a backed-off model", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Brooks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Third Workshop on Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins and J. Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Pro- ceedings of the Third Workshop on Very Large Corpora, Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Three generative, lexicalised models for statistical parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistic", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 1997. Three generative, lexicalised mod- els for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Com- putational Linguistic, Madrid, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "On Parsing Coordination Categorially", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cremers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Cremers. 1993. On Parsing Coordination Cate- gorially. Ph.D. thesis, Leiden University.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Coh-metrix: Analysis of text on cohesion and language. Behavior Research Methods, Instruments, and Computers", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Graesser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Louwerse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. C. Graesser, D.S. McNamara, M. M. Louwerse, and Z. Cai. 2004. Coh-metrix: Analysis of text on cohesion and language. Behavior Research Meth- ods, Instruments, and Computers, 36-2:193-202.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Evaluating parsing strategies using standardized parse files", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Macleod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sterling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "156--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Grishman, C. MacLeod, and J. . Sterling. 1992. Evaluating parsing strategies using standardized parse files. In Proceedings of the Third Conference on Applied Natural Language Processing, pages 156-161.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Parsing Coordination Generatively", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Grootveld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Grootveld. 1994. Parsing Coordination Genera- tively. Ph.D. thesis, Leiden University.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistic", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Klein and C. Manning. 2003. Accurate unlexi- calized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Lin- guistic, Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A dependency-based method for evaluating broad-coverage parsers", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1420--1427", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Lin. 1995. A dependency-based method for eval- uating broad-coverage parsers. Proceedings of In- ternational Joint Conference on Artificial Intelli- gence, pages 1420-1427.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A maximum entropy model for prepositional phrase attachment", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Renyar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the ARPA Workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Ratnaparkhi, J. Renyar, and S. Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In Proceedings of the ARPA Work- shop on Human Language Technology.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A corpusbased probabilistic grammar with only two nonterminals", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Sekine and R. Grishman. 1995. A corpus- based probabilistic grammar with only two non- terminals. Proceedings of the International Work- shop on Parsing Technologies, pages 216-223.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Accuracy of Parsers.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Parser</td><td colspan=\"2\">Performance(LP/LR/Tagging -%)</td></tr><tr><td/><td>WSJ 23</td><td>Expository Narrative</td></tr><tr><td colspan=\"3\">Applie Pie 43.71/44.29/90.26 41.63/42.70 42.84/43.84</td></tr><tr><td colspan=\"3\">Charniak's 84.35/88.28/92.58 91.91/93.94 93.74/96.18</td></tr><tr><td colspan=\"3\">Collins/Bikel's 84.97/87.30/93.24 82.08/85.35 67.75/85.19</td></tr><tr><td colspan=\"3\">Stanford 84.41/87.00/95.05 75.38/85.12 62.65/87.56</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Performance of parsers on the narrative and expository text (average against CP-based and SP-based gold standard).", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>File</td><td/><td colspan=\"2\">Performance (LR/LP -%)</td></tr><tr><td/><td>AP</td><td>CP</td><td>CBP</td><td>SP</td></tr><tr><td>Heat</td><td colspan=\"4\">48.25/47.59 91.96/93.77 92.47/94.14 92.44/91.85</td></tr><tr><td>Plants</td><td colspan=\"4\">41.85/45.89 85.34/88.02 78.24/88.45 81.00/85.62</td></tr><tr><td>Orlando</td><td colspan=\"4\">45.82/49.03 85.83/91.88 65.87/93.97 57.75/90.72</td></tr><tr><td>Moving</td><td colspan=\"4\">37.77/41.45 88.93/92.74 53.94/91.68 76.56/84.97</td></tr><tr><td>Barron17</td><td colspan=\"4\">43.22/42.95 89.74/91.32 80.49/89.32 87.22/86.31</td></tr><tr><td>Betty03</td><td colspan=\"4\">46.53/44.67 90.77/90.74 87.95/85.21 74.53/80.91</td></tr><tr><td>Olga91</td><td colspan=\"4\">32.29/32.69 77.65/80.04 61.61/75.43 61.65/70.60</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Evaluation of Parsers with Respect to the Criteria Listed at the Top of Each Column.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"5\">Parser Self-tagging Performance Long-sentences Robustness</td></tr><tr><td>AP</td><td>Yes</td><td>No</td><td>No</td><td>No</td></tr><tr><td>CP</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>CBP</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>SP</td><td>Yes</td><td>Yes</td><td>No</td><td>No</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Average Performance of Parsers.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Parser</td><td colspan=\"2\">Ave. (LR/LP -%) S.D. (%)</td><td>Ave. on</td><td>S.D. on</td></tr><tr><td/><td/><td/><td colspan=\"2\">Exp+Nar (LR/LP -%) Exp+Nar (%)</td></tr><tr><td>AP</td><td>42.73/43.61</td><td>1.04/0.82</td><td>42.24/43.46</td><td>5.59/5.41</td></tr><tr><td>CP</td><td>90.00/92.80</td><td>4.98/4.07</td><td>87.17/89.79</td><td>4.85/4.66</td></tr><tr><td>CBP</td><td>78.27/85.95</td><td>9.22/1.17</td><td>74.36/88.31</td><td>14.24/6.51</td></tr><tr><td>SP</td><td>74.14/86.56</td><td>10.93/1.28</td><td>75.88/84.42</td><td>12.66/7.11</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Parser Speed in Seconds.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>G4</td><td>G6</td><td colspan=\"2\">G11 G12</td></tr><tr><td>#sent</td><td colspan=\"4\">619 3336 4976 2215</td></tr><tr><td>AP</td><td>144</td><td>89</td><td>144</td><td>242</td></tr><tr><td>CP</td><td colspan=\"2\">647 499</td><td colspan=\"2\">784 1406</td></tr><tr><td>CBP</td><td colspan=\"4\">485 1947 1418 1126</td></tr><tr><td>SP</td><td colspan=\"2\">449 391</td><td>724</td><td>651</td></tr><tr><td>Ave.</td><td colspan=\"2\">431 732</td><td>768</td><td>856</td></tr><tr><td colspan=\"5\">most consistently at a standard deviation over</td></tr><tr><td colspan=\"5\">the seven texts of 8.86%. The other three candi-</td></tr><tr><td colspan=\"5\">dates are clearly trailing behing, namely by be-</td></tr><tr><td colspan=\"5\">tween 5% (SP) and 11% (AP). The distribution</td></tr><tr><td colspan=\"5\">of severe problems is comparable for all parsers.</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">: Average Performance of Parsers over</td></tr><tr><td colspan=\"3\">all Texts (Directed Evaluation).</td></tr><tr><td/><td colspan=\"2\">Ave. (%) S.D. (%)</td></tr><tr><td>AP</td><td>77.31</td><td>15.00</td></tr><tr><td>CP</td><td>88.69</td><td>8.86</td></tr><tr><td>CBP</td><td>79.82</td><td>18.94</td></tr><tr><td>SP</td><td>83.43</td><td>11.42</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "Correlation of Average Performance per Text for all Parsers and Average Sentence Length (Directed Evaluation).", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Text</td><td colspan=\"2\">perf. (%) length (#words)</td></tr><tr><td>Heat</td><td>92.31</td><td>7.54</td></tr><tr><td>Plants</td><td>90.76</td><td>9.96</td></tr><tr><td>Orlando</td><td>93.46</td><td>6.86</td></tr><tr><td>Moving</td><td>90.91</td><td>13.12</td></tr><tr><td>Barron17</td><td>76.92</td><td>22.15</td></tr><tr><td>Betty03</td><td>71.43</td><td>18.21</td></tr><tr><td>Olga91</td><td>60.42</td><td>25.92</td></tr><tr><td>in our corpus.</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "Specific Problems by Parser.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"4\">PP ADV cNP &X</td></tr><tr><td>AP</td><td>13</td><td>10</td><td>8</td><td>9</td></tr><tr><td>CP</td><td>15</td><td>1</td><td>2</td><td>7</td></tr><tr><td>CBP</td><td>10</td><td>0</td><td>0</td><td>13</td></tr><tr><td>SP</td><td>22</td><td>6</td><td>3</td><td>4</td></tr><tr><td>Sum</td><td>60</td><td>17</td><td>13</td><td>33</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |