|
{ |
|
"paper_id": "W96-0209", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:58:53.343175Z" |
|
}, |
|
"title": "APPORTIONING DEVELOPMENT EFFORT IN A PROBABILISTIC LR PARSING SYSTEM THROUGH EVALUATION", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sussex University of Cambridge", |
|
"location": { |
|
"postCode": "BN1 9QH, CB2 3QG", |
|
"settlement": "Brighton, Pembroke Street, Cambridge", |
|
"country": "UK, UK" |
|
} |
|
}, |
|
"email": "[email protected]@ac.uk" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sussex University of Cambridge", |
|
"location": { |
|
"postCode": "BN1 9QH, CB2 3QG", |
|
"settlement": "Brighton, Pembroke Street, Cambridge", |
|
"country": "UK, UK" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe an implemented system for robust domain-independent syntactic parsing of English, using a unification-based grammar of part-ofspeech and punctuation labels coupled with a probabilistic LR parser. We present evaluations of the system's performance along several different dimensions; these enable us to assess the contribution that each individual part is making to the success of the system as a whole, and thus prioritise the effort to be devoted to its further enhancement. Currently, the system is able to parse around 80% of sentences in a substantial corpus of general text containing a number of distinct genres. On a random sample of 250 such sentences the system has a mean crossing bracket rate of 0.71 and recall and precision of 83% and 84~0 respectively when evaluated against manually-disambiguated analyses I .", |
|
"pdf_parse": { |
|
"paper_id": "W96-0209", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe an implemented system for robust domain-independent syntactic parsing of English, using a unification-based grammar of part-ofspeech and punctuation labels coupled with a probabilistic LR parser. We present evaluations of the system's performance along several different dimensions; these enable us to assess the contribution that each individual part is making to the success of the system as a whole, and thus prioritise the effort to be devoted to its further enhancement. Currently, the system is able to parse around 80% of sentences in a substantial corpus of general text containing a number of distinct genres. On a random sample of 250 such sentences the system has a mean crossing bracket rate of 0.71 and recall and precision of 83% and 84~0 respectively when evaluated against manually-disambiguated analyses I .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This work is part of an effort to develop a robust, domain-independent syntactic parser capable of yielding the unique correct analysis for unrestricted naturally-occurring input. Our goal is to develop a system with performance comparable to extant part-of-speech taggers, returning a syntactic analysis from which predicate-argument structure can be recovered, and which can support semantic interpretation. The requirement for a domain-independent analyser favours statistical 1Some of this work was carried out while the second author was visiting Rank Xerox, Grenoble. The work was also supported by UK DTI/SALT project 41/5808 'Integrated Language Database', and by SERC/EPSRC Advanced Fellowships to both authors. Geoff Nunberg provided encouragement and much advice on the analysis of punctuation, and Greg Grefenstette undertook the original corpus tokenisation and segmentation for the punctuation experiments. Bernie .]ones and Kiku Ribas made helpful comments on an earlier draft. We are responsible for any mistakes. techniques to resolve ambiguities, whilst the latter goal favours a more sophisticated grammatical formalism than is typical in statistical approaches to robust analysis of corpus material. Briscoe ~ Carroll (1993) describe a probablistic parser using a wide-coverage unificationbased grammar of English written in the Alvey Natural Language Tools (ANLT) metagrammatical formalism (Briscoe et al., 1987) , generating around 800 rules in a syntactic variant of the Definite Clause Grammar formalism (DCG, Pereira Warren, 1980) extended with iterative (Kleene) operators. The ANLT grammar is linked to a lexicon containing about 64K entries for 40K lexemes, including detailed subcategorisation information appropriate for the grammar, built semiautomatically from a learners' dictionary (Carroll L= Grover, 1989) . 
The resulting parser is efficient, constructing a parse forest in roughly quadratic time (empirically), and efficiently returning the ranked n-most likely analyses (Carroll, 1993 (Carroll, , 1994 ). The probabilistic model is a refinement of probabilistic context-free grammar (PCFG) conditioning CF 'backbone' rule application on LR state and lookahead item. Unification of the 'residue' of features not incorporated into the backbone is performed at parse time in conjunction with reduce operations. Unification failure results in the associated derivation being assigned a probability of zero. Probabilities are assigned to transitions in the LALR(1) action table via a process of supervised training based on computing the frequency with which transitions are traversed in a corpus of parse histories. The result is a probabilistic parser which, unlike a PCFG, is capable of probabilistically discriminating derivations which differ only in terms of order of application of the same set of CF backbone rules, due to the parse context defined by the LR table.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1220, |
|
"end": 1244, |
|
"text": "Briscoe ~ Carroll (1993)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1411, |
|
"end": 1433, |
|
"text": "(Briscoe et al., 1987)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1528, |
|
"end": 1555, |
|
"text": "(DCG, Pereira Warren, 1980)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1816, |
|
"end": 1841, |
|
"text": "(Carroll L= Grover, 1989)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2008, |
|
"end": 2022, |
|
"text": "(Carroll, 1993", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2023, |
|
"end": 2039, |
|
"text": "(Carroll, , 1994", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Experiments with this system revealed three major problems which our current research is addressing. Firstly, improvements in probabilistic parse selection will require a 'lexicalised' gram-mar/parser in which (minimally) probabilities are associated with alternative subcategorisation possibilities of individual lexical items. Currently, the relative frequency of subcategorisation possibilities for individual lexical items is not recorded in wide-coverage lexicons, such as ANLT or COM-LEX (Grishman e\u00a2 al., 1994) . Secondly, removal of punctuation from the input (after segmentation into text sentences) worsens performance as punctuation both reduces syntactic ambiguity (Jones, 1994) and signals non-syntactic (discourse) relations between text units (Nunberg, 1990) . Thirdly, the largest source of error on unseen input is the omission of appropriate subcategorisation values for lexical items (mostly verbs), preventing the system from finding the correct analysis. The current coverage--the proportion of sentences for which at least one analysis was foundS--of this system on a general corpus (e.g. Brown or LOB) is estimated to be around 20% by Briscoe (1994) . Therefore, we have developed a variant probabilistic LR parser which does not rely on subcategorisation and uses punctuation to reduce ambiguity, The analyses produced by this parser can be utilised for phrase-finding applications, recovery of subcategorisation frames, and other 'intermediate' level parsing problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 517, |
|
"text": "(Grishman e\u00a2 al., 1994)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 690, |
|
"text": "(Jones, 1994)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 773, |
|
"text": "(Nunberg, 1990)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1158, |
|
"end": 1172, |
|
"text": "Briscoe (1994)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We utilised the ANLT metagrammatical formalism to develop a feature-based, declarative description of part-of-speech (PoS) label sequences (see e.g. Church, 1988) for English. This grammar compiles into a DCG-like grammar of approximately 400 rules. It has been designed to enumerate possible valencies for predicates (verbs, adjectives and nouns) by including separate rules for each pattern of possible complementation in English. The distinction between arguments and adjuncts is expressed, following Xbar theory (e.g. Jackendoff, 1977) , by Chomskyadjunction of adjuncts to maximal projections (XP ~ XP Adjunct) as opposed to government of arguments (i.e. arguments are sisters within X1 projections; X1 --~ X0 Argl... ArgN). Although the grammar enumerates complementation possibilities and checks for global sentential wellformedness, it is best described as 'intermediate' as it does not attempt to associate 'displaced' constituents with their canonical position / grammatical role.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 162, |
|
"text": "Church, 1988)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 539, |
|
"text": "Jackendoff, 1977)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PART:OF-SPEECH TAG SEQUENCE GRAMMAR", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The other difference between this grammar 2Briscoe & Carroll (1995) note that \"coverage\" is a weak measure since discovery of one or more global analyses does not entail that the correct analysis is recovered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 67, |
|
"text": "Carroll (1995)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PART:OF-SPEECH TAG SEQUENCE GRAMMAR", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "and a more conventional one is that it incorporates some rules specifically designed to overcome limitations or idiosyncrasies of the tagging process. For example, past participles functioning adjectivally, as in (la), are fl'equently tagged as past participles (VVN) as in (lb), so the grammar incorporates a rule (violating X-bar theory) which parses past participles as adjectival premodifiers in this context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PART:OF-SPEECH TAG SEQUENCE GRAMMAR", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "(1) a The disembodied head b The_AT disembodied_VVN head_NN1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PART:OF-SPEECH TAG SEQUENCE GRAMMAR", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Similar idiosyncratic rules are incorporated for dealing with gerunds, adjective-noun conversions, idiom sequences, and so forth. Further details of the PoS grammar are given in Briscoe & Carroll (1994 . The grammar currently covers around 80% of the Susanne corpus (Sampson, 1995) , a 138K word treebanked and balanced subset of the Brown corpus. Many of the 'failures' are due to the root S(entence) requirement enforced by the parser when dealing with fragments from dialogue and so forth. We have not relaxed this requirement since it increases ambiguity, our primary interest at this point being the extraction of subcategorisation information from full clauses in corpus data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 201, |
|
"text": "Briscoe & Carroll (1994", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 281, |
|
"text": "(Sampson, 1995)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PART:OF-SPEECH TAG SEQUENCE GRAMMAR", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Nunberg (1990) develops a partial 'text' grammar for English which incorporates mnany constraints that (ultimately) restrict syntactic and semantic interpretation. For example, textual adjunct clauses introduced by colons scope over following punctuation, as (2a) illustrates; whilst textual adjuncts introduced by dashes cannot intervene between a bracketed adjunct and the textual unit to which it attaches, as in (2b).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "(2) a *He told them his reason: he would not renegotiate his contract, but he did not explain to the team owners. (vs. but would stay) b *She left -who could blame her -(during the chainsaw scene) and went home.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We have developed a declarative grammar in the ANLT metagrammatical formalism, based on Nunberg's procedural description. This grammar captures the bulk of the text-sentential constraints described by Nunberg with a grammar which compiles into 26 DCG-tike rules. Text grammar analyses are useful because they demarcate some of the syntactic boundaries in the text sentence and thus reduce ambiguity, and because they identify the units for which a syntactic analysis should, in principle, be found; for example, in (3), the absence of dashes would mislead a parser into seeking a syntactic relationship between three and the following names, whilst in fact there is only a discourse relation of elaboration between this text adjunct and pronominal three.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "(3) The three -Miles J. Cooperman, Sheldon", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Teller, and Richard Austin -and eight other defendants were charged in six indictments with conspiracy to violate federal narcotic law.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Further details of the text grammar are given in Briscoe ~ Carroll (1994 . The text grammar has been tested on the Susanne corpus and covers 99.8% of sentences. (The failures are mostly text segmentation problems). The number of analyses varies from one (71%) to the thousands (0.1%). Just over 50% of Susanne sentences contain some punctuation, so around 20% of the singleton parses are punctuated. The major source of ambiguity in the analysis of punctuation concerns the function of commas and their relative scope as a result of a decision to distinguish delimiters and separators (Nunberg 1990:36) . Therefore, a text sentence containing eight commas (and no other punctuation) will have 3170 analyses. The multiple uses of commas cannot be resolved without access to (at least) the syntactic context of occurrence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "Briscoe ~ Carroll (1994", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 602, |
|
"text": "(Nunberg 1990:36)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TEXT GRAMMAR AND PUNCTUATION", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Despite Nunberg's observation that text grammar is distinct from syntax, text grammatical ambiguity favours interleaved application of text grammatical and syntactic constraints. Integrating the text and the PoS sequence grammars is straightforward and the result remains modular, in that the text grammar is 'folded into' the PoS sequence grammar, by treating text and syntactic categories as overlapping and dealing with the properties of each using disjoint sets of features, principles of feature propagation, and so forth. In addition to the core text-grammatical rules which carry over unchanged from the stand-alone text grammar, 44 syntactic rules (of pre-and post-posing, and coordination) now include (often optional) comma markers corresponding to the purely 'syntactic' uses of punctuation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE INTEGRATED GRAMMAR", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The approach to text grammar taken here is in many ways similar to that of Jones (1994) . However, he opts to treat punctuation marks as clitics on words which introduce additional featural information into standard syntactic rules. Thus, his grammar is thoroughly integrated and it would be harder to extract an independent text grammar or build a modular semantics. Our less-tightly integrated grammar is described in more detail in Briscoe & Carroll (1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 87, |
|
"text": "Jones (1994)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 459, |
|
"text": "Briscoe & Carroll (1994)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE INTEGRATED GRAMMAR", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We have used the integrated grammar to parse the Susanne corpus and the quite distinct Spoken English Corpus (SEC; Taylor ~ Knowles, 1988), a 50K word treebanked corpus of transcribed British radio programmes punctuated by the corpus compilers. Both corpora were retagged using the Acquilex HMM tagger (Elworthy, 1993 (Elworthy, , 1994 trained on text tagged with a slightly modified version of CLAWS-II labels (Garside et al., 1987) . In contrast to previous systems taking as input fullydeterminate sequences of PoS labels, such as Fidditch (Hindle, 1989) and MITFP (de Marcken, 1990) , for each word the tagger returns multiple label hypotheses, and each is thresholded before being passed on to the parser: a given label is retained if it is the highest-ranked, or, if the highestranked label is assigned a likelihood of less than 0.9, if its likelihood is within a factor of 50 of this. We thus attempt to minimise the effect of incorrect tagging on the parsing component by allowing label ambiguities, but control the increase in indeterminacy and concomitant decrease in subsequent processing efficiency by applying the thresholding technique. On Susanne, retagging allowing only a single label per word results in a 97.90% label/word assignment accuracy, whereas multilabel tagging with this thresholding scheme results in 99.51% accuracy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 317, |
|
"text": "(Elworthy, 1993", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 335, |
|
"text": "(Elworthy, , 1994", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 433, |
|
"text": "(Garside et al., 1987)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 557, |
|
"text": "(Hindle, 1989)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 586, |
|
"text": "MITFP (de Marcken, 1990)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PARSING THE SUSANNE AND SEC CORPORA", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In an earlier paper (Briscoe & Carroll, 1995) we gave results for a previous version of the grammar and parsing system. We have made a number of significant improvements to the system since then, the most fundamental being the use of multiple labels for each word. System accuracy evaluation results are also improved since we now output trees that conform more closely to the annotation conventions employed in the test treebank.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 45, |
|
"text": "(Briscoe & Carroll, 1995)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PARSING THE SUSANNE AND SEC CORPORA", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "To examine the efficiency and coverage of the grammar we applied it to our retagged versions of Susanne and SEC. We used the ANLT chart parser (Carroll, 1993) , but modified just to count the number of possible parses in the parse forests (Billot ~ Lang, 1989) rather than actually unpacking them. We also imposed a per-sentence time-out of 30 seconds CPU time, running in Franz Allegro Common Lisp 4.2 on an HP PA-RISC 715/100 workstation with 128 Mbytes of physical memory.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 158, |
|
"text": "(Carroll, 1993)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COVERAGE AND AMBIGUITY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For both corpora, the majority of sentences Briscoe & Carroll's (1995) average parse base (APB), defined as the geometric mean over all sentences in the corpus of \u00a2/~, where n is the number of words in a sentence, and p, the number of parses for that sentence. Thus, given a sentence n words long, the APB raised to the nth power gives the number of analyses that the grammar can be expected to assign to a sentence of that length in the corpus. Table 1 gives these measures for all of the sentences in Susanne and in SEC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 70, |
|
"text": "Briscoe & Carroll's (1995)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 453, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "COVERAGE AND AMBIGUITY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As the grammar was developed solely with reference to Susanne, coverage of SEC is quite robust. The two corpora differ considerably since the former is drawn from American written text whilst the latter represents British transcribed spoken material. The corpora overall contain material drawn from widely disparate genres / registers, and are more complex than those used in DARPA ATIS tests, and more diverse than those used in MUCs and probably also the Penn Treebank. Black et al. (1993) report a coverage of around 95% on computer manuals, as opposed to our coverage rate of 70-80% on much more heterogeneous data and longer sentences. The APBs for Susanne and SEC of 1.313 and 1.300 respectively indicate that sentences of average length in each corpus could be expected to be assigned of the order of 238 and 376 analyses (i.e. 1.3132\u00b0n and 1.300226).", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 491, |
|
"text": "Black et al. (1993)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COVERAGE AND AMBIGUITY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The parser throughput on these tests, for sentences successfully analysed, is around 25 words per CPU second on an HP PA-RISC 715/100. Sentences of up to 30 tokens (words plus sentenceinternal punctuation) are parsed in an average of under 1 second each, whilst those around 60 tokens take on average around 7 seconds. Nevertheless, the relationship between sentence length and processing time is fitted well by a quadratic function, supporting the findings of Carroll (1994) that in practice NL grammars do not evince worst-case parsing complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 475, |
|
"text": "Carroll (1994)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COVERAGE AND AMBIGUITY", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results we report above relate to the latest version of the tag sequence grammar. To date, we have spent about one person-year on grammar development, with the effort spread fairly evenly over a two-and-a-half-year period. The various phases in the development and refinement of the grammar can be observed in an analysis of the coverage and APB for Susanne and SEC over this period--see 12/94-10/95 Improving the accuracy of the system by trying to ensure that the correct analysis was in the set returned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development & Refinement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the coverage on SEC is increasing at the same time as on Susanne, we can conclude that the grammar has not been specifically tuned to the particular sublanguages or genres represented in the development corpus. Also, although the almost-50% initial coverage on the heterogeneous Taylor el al., 1989; Alshawi el al., 1992) , it is clear that the subsequent grammar refinement phases have led to major improvements in coverage and reductions in spurious ambiguity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 305, |
|
"text": "Taylor el al., 1989;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 327, |
|
"text": "Alshawi el al., 1992)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development & Refinement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have experimented with increasing the richness of the lexical feature set by incorporating subcategorisation information for verbs into the grammar and lexicon. We constructed randomly from Susanne a test corpus of 250 in-coverage sentences, and in this, for each word tagged as possibly being an open-class verb (i.e. not a modal or auxiliary) we extracted from the ANLT lexicon (Carroll & Grover, 1989) all verbal entries for that word. We then mapped these entries into our PoS grammar experimental subcategorisation scheme, in which we distinguished each possible pattern of complementation allowed by the grammar (but not control relationships, specification of prepositional heads of PP complements etc. as in the full ANLT representation scheme). We then attempted to parse the test sentences, using the derived verbal entries instead of the original generic entries which generalised over all the subcategorisation possibilities. 31 sentences now failed to receive a parse, a decrease in coverage of 12%. This is due to the fact that the ANLT lexicon, although large and comprehensive by current standards (Briscoe & Carroll, 1996) , nevertheless contains many errors of omission.", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 407, |
|
"text": "(Carroll & Grover, 1989)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1117, |
|
"end": 1142, |
|
"text": "(Briscoe & Carroll, 1996)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development & Refinement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A probabilistic LR parser was trained with the integrated grammar by exploiting the Susanne treebank bracketing. An LR parser (Briscoe & Carroll, 1993 ) was applied to unlabelled bracketed sentences from the Susanne treebank, and a new treebank of 1758 correct and complete analyses with respect to the integrated grammar was constructed semi-automatically by manually resolving the remaining ambiguities. 250 sentences from the new treebank, selected randomly, were kept back for testing 3. The remainder, together with a further set of analyses from 2285 treebank sentences that were not checked manually, were used to train a probabilistic version of the LR parser, using Good-Turing smoothing to estimate the probability of unseen transitions in the LALR(1) table (Briscoe & Carroll, 1993; Carroll, 1993) . The probabilistic parser can then return a ranking of all possible analyses for a sentence, or efficiently return just the n-most probable (Carroll, 1993) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 150, |
|
"text": "(Briscoe & Carroll, 1993", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 793, |
|
"text": "(Briscoe & Carroll, 1993;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 794, |
|
"end": 808, |
|
"text": "Carroll, 1993)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 950, |
|
"end": 965, |
|
"text": "(Carroll, 1993)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PARSE SELECTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The probabilistic parser was tested on the 250 sentences held out from the manuallydisambiguated treebank (of lengths 3-56 tokens, mean 18.2). The parser was set up to return only the highest-ranked analysis for each sentence. Table 3 shows the results of this test--with respect to the original Susanne bracketings--using the Grammar Evaluation Interest Group scheme (GEIG, see e.g. Harrison et al., 1991) 4. This compares unlabelled bracketings derived from corpus treebanks with those derived from parses for the same sentences by computing recall, the ratio of matched brackets over all brackets in the treebank; precision, the ratio of matched brackets over all brackets found by the parser; mean crossings, the number of times a bracketed sequence output by the parser overlaps with one from the treebank but neither is properly contained in the other, averaged over all sentences; and zero crossings, the percentage of sentences for which the analysis returned has zero crossings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 406, |
|
"text": "Harrison et al., 1991)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 234, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PARSE SELECTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The table also gives an indication of the best and worst possible performance of the disambiguation component of the system, showing the results obtained when parse selection is replaced by a simple random choice, and the results of evaluating the analyses in the manually-disambiguated treebank against the corresponding original Susanne bracketings. In this latter figure, the mean number of crossings (0.41) is greater than zero mainly because of incompatibilities between the structural representations chosen by the grammarian and the corresponding ones in the treebank. Precision is less than 100% due to crossings, minor mismatches and inconsistencies (due to the manual nature of the markup process) in tree annotations, and the fact that Susanne often favours a \"flat\" treatment of VP constituents, whereas our grammar always makes an explicit choice between argument-and adjunct-hood. Thus, perhaps a more informative test of the accuracy of our probabilistic system would be evaluation against the manuallydisambiguated corpus of analyses assigned by the grammar. In this, the mean crossing figure drops 3The appendix contains a random sample of sentences from the test corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PARSE SELECTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "4We would like to thank Phil Harrison for supplying the evaluation software.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PARSE SELECTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recall Black el al. (1993:7) use the crossing brackets measure to define a notion of structural consistency, where the structural consistency rate for the grammar is defined as the proportion of sentences for which at least one analysis--from the many typically returned by the grammar--contains no crossing brackets, and report a rate of around 95% for the IBM grammar tested on the computer manual corpus. However, a problem with the GEIG scheme and with structural consistency is that both are still weak measures (designed to avoid problems of parser/treebank representational compatibility) which lead to unintuitive numbers whose significance still depends heavily on details of the relationship between the representations compared (e.g. between structure assigned by a grammar and that in a treebank). One particular problem with the crossing bracket measure is that a single attachment mistake embedded n levels deep (and perhaps completely innocuous, such as an \"aside\" delimited by dashes) can lead to n crossings being assigned, whereas incorrect identification of arguments and adjuncts can go unpunished in some cases. Schabes et al. (1993) and Magerman (1995) report results using the GEIG evaluation scheme which are numerically similar in terms of parse selection to those reported here, but achieve 100% coverage. However, their experiments are not strictly comparable because they both utilise more homogeneous and probably simpler corpora. (The appendix gives an indication of the diversity of the sentences in our corpus). In addition, Schabes et al. do not recover tree labelling, whilst", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 28, |
|
"text": "Black el al. (1993:7)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1133, |
|
"end": 1154, |
|
"text": "Schabes et al. (1993)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1159, |
|
"end": 1174, |
|
"text": "Magerman (1995)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mean", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Magerman has developed a parser designed to produce identical analyses to those used in the Penn Treebank, removing the problem of spurious errors due to grammatical incompatibility. Both these approaches achieve better coverage by constructing the grammar fully automatically, but as an inevitable side-effect the range of text phenomena that can be parsed becomes limited to those present in the training material, and being able to deal with new ones would entail further substan-tial treebanking efforts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mean", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To date, no robust parser has been shown to be practical and useful for some NLP task. However, it seems likely that, say, rule-to-rule semantic interpretation will be easier with handconstructed grammars with an explicit, determinate rule-set. A more meaningful parser comparison would require application of different parsers to an identical and extended test suite and utilisation of a more stringent standard evaluation procedure sensitive to node labellings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mean", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Statistical HMM-based part-of-speech taggers require of the order of 100K words and upwards of training data (Weischedel et al., 1993:363) ; taggers inducing non-probabilistic rules (e.g. Brill, 1994) require similar amounts (Gaizauskas, pc). Our probabilistic disambiguation system currently makes no use of lexical frequency information, training only on structural configurations. Nevertheless, the number of parameters in the probabilistic model is large: it is the total number of possible transitions in an LALR(1) table containing over 150000 actions. It is therefore interesting to investigate whether the system requires more or less training data than a tagger.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 138, |
|
"text": "(Weischedel et al., 1993:363)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 200, |
|
"text": "Brill, 1994)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Size and Accuracy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We therefore ran the same experiment as above, using GEIG to measure the accuracy of the system on the 250 held-back sentences, but varying the amount of training data with which the system was provided. We started at the full amount (3793 trees), and then successively halved it by selecting the appropriate number of trees at random. The results obtained are given in figure 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Size and Accuracy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results show convincingly that the system is extremely robust when confronted with limited amounts of training data: when using a mere one sixty-fourth of the full amount (59 trees), accuracy was degraded by only 10-20%. However, there is a large decrease in accuracy with no training data (i.e. random choice). Conversely, accuracy is still improving at 3800 trees, with no sign of overtraining, although it appears to be approaching an upper asymptote. To determine what this might", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Size and Accuracy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recall Precision crossings crossings", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mean", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Top-ranked analysis 67.2% 0.71 82.9% 83.9% be, we ran the system on a set of 250 sentences randomly extracted from the training corpus. On this set, the system achieves a zero crossings rate of 60.0%, mean crossings 0.88, and recall and precision of 77.0% and 75.2% respectively, with respect to the original Susanne bracketings. Although this is a different set of sentences, it is likely that the upper asymptote for accuracy for the test corpus lies in this region. Given that accuracy is increasing only slowly and is relatively close to the asymptote it is therefore unlikely that it would be worth investing effort in increasing the size of the training corpus at this stage in the development of the system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic parser analyses", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we have outlined an approach to robust domain-independent parsing, in which subcategorisation constraints play no part, resulting in coverage that greatly improves upon more conventional grammar-based approaches to NL text analysis. We described an implemented system, and evaluated its performance along several different dimensions. We assessed its coverage and that of previous versions on a development corpus and an unseen corpus, and demonstrated that the grammar refinement we have carried out has led to substantial improvements in coverage and reductions in spurious ambiguity. We also evaluated the accuracy of parse selection with respect to treebank analyses, and, by varying the amount of training material, we showed that it requires comparatively little data to achieve a good level of accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We have made good progress in increasing grammar coverage, though we have now reached a point of diminishing returns. Further significant improvements in this area would require corpusspecific additions and tuning whose benefit would not necessarily carry over to other corpora. In the application we are currently using the system for-automatic extraction of subcategorisation frames, and more generally argument structure, from large amounts of text (Briscoe ~ Carroll, 1996 )--we do not need full coverage; 70-80% appears to be sufficient. However, further improvements in coverage will require some automated approach to rule induction driven by parse failure. Since our evaluations indicate that our system achieves a good level of accuracy with little treebank data, and that 67-75% coverage was achieved for English quite early in the grammar refinement effort, porting the current system to other languages should be possible with small-to-medium-sized treebanks (around 20K words) and feasible manual effort (of the order of 12 person-months for grammarwriting and treebanking). This may yield a system accurate enough for some types of application, given that the system is not restricted to returning the single highest ranked analysis but can return the n-highest ranked for further applicationspecific selection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 476, |
|
"text": "(Briscoe ~ Carroll, 1996", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Although we report promising results, parse selection that is sufficiently accurate for many practical applications will require a more lexicalised system. Magerman's (1995) parser is an extension of the history-based parsing approach developed at IBM (Black et al., 1993) in which rules are conditioned on lexical and other (essentially arbitrary) information available in the parse history. In future work, we intend to explore a more restricted and semantically-driven version of this approach in which, firstly, probabilities are associated with different subcategorisation possibilities, and secondly, alternative predicateargument structures derived from the grammar are ranked probabilistically. However, the massively increased coverage obtained here by relaxing subcategorisation constraints underlines the need to acquire accurate and complete subcategorisation frames in a corpus-driven fashion, before such constraints can be exploited robustly and effectively with free text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 173, |
|
"text": "Magerman's (1995)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 272, |
|
"text": "(Black et al., 1993)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": "6." |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Below is a random sample of the 250-sentence test set. The test set comprises the Brown genre categories: \"press reportage\"; \"belles lettres, biography, memoirs\"; and \"learned (mainly scientific and technical) writing\". ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "APPENDIX", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "CLARE: a contextual reasoning and cooperative response framework for the Core Language Engine", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Alshawi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Carter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Crouch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pulman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "SRI International", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alshawi, H., Carter, D., Crouch, R., Pulman, S., Rayner, M., ~ Smith, A. 1992. CLARE: a contex- tual reasoning and cooperative response framework for the Core Language Engine. SRI International, Cambridge, UK.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The structure of shared forests in ambiguous parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Billot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27lh Meeting of Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Billot, S. & Lang, B. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of the 27lh Meeting of Association for Computational Linguistics, Vancouver, Canada. 143-151.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistically-driven computer grammars of English: the IBM~ Lancaster approach", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Garside", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Black, E., Garside, R. & Leech, G. (eds.) 1993. Statistically-driven computer grammars of En- glish: the IBM~ Lancaster approach. Amsterdam, The Netherlands: Rodopi.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Some advances in transformationbased part of speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, E. 1994. Some advances in transformation- based part of speech tagging. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94), Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Prospects for practical parsing of unrestricted text: robust statistical parsing techniques", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Corpusbased Research into Language. Rodopi, Amsterdam", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E. 1994. Prospects for practical parsing of unrestricted text: robust statistical parsing tech- niques. In Oostdijk, N L: de Haan, P. eds. Corpus- based Research into Language. Rodopi, Amster- dam: 97-120.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Generalised probabilistic LR parsing for unification-based grammars", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "25--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E. & Carroll, J. 1993. Generalised prob- abilistic LR parsing for unification-based gram- mars. Computational Linguistics 19.1: 25-60.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Parsing ('with) punctuation etc. Rank Xerox Research Centre", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E. ,~ Carroll, J. 1994. Parsing ('with) punctuation etc. Rank Xerox Research Centre, Grenoble, MLTT-TR-007.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Developing and evaluating a probabilistic LR parser of part-ofspeech and punctuation labels", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 4th ACL/SIGPARSE International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E. ,~ Carroll, J. 1995. Developing and evaluating a probabilistic LR parser of part-of- speech and punctuation labels. In Proceedings of the 4th ACL/SIGPARSE International Workshop on Parsing Technologies, Prague, Czech Republic. 48-58.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Automatic extraction of subcalegorization from corpora. Under review", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E. \u2022 Carroll, J. 1996. Automatic extrac- tion of subcalegorization from corpora. Under re- view.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A formalism and environment for the development of a large grammar of English", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{

"first": "C",

"middle": [],

"last": "Grover",

"suffix": ""

},

{

"first": "B",

"middle": [],

"last": "Boguraev",

"suffix": ""

},

{

"first": "J",

"middle": [],

"last": "Carroll",

"suffix": ""

}
|
], |
|
"year": 1987, |
|
"venue": "Proceedings of the lOth International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "703--708", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Briscoe, E., Grovel', C., Boguraev, B. & Carroll, J. 1987. A formalism and environment for the devel- opment of a large grammar of English. In Proceed- ings of the lOth International Joint Conference on Artificial Intelligence, Milan, Italy. 703-708.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Practical unification-based parsing of natural language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carroll, J. 1993. Practical unification-based pars- ing of natural language. Cambridge University, Computer Laboratory, TR-314.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Relating complexity to practical performance in parsing with wide-coverage unification grammars", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 32nd Meeting of Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "287--294", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carroll, J. 1994. Relating complexity to prac- tical performance in parsing with wide-coverage unification grammars. In Proceedings of the 32nd Meeting of Association for Computational Lin- guistics, Las Cruces, NM. 287-294.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The derivation of a large computational lexicon for English from LDOCE", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Computational Lexicography for Natural Language Processing. Longman", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "117--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carroll, J. ~: Grover, C. 1989. The derivation of a large computational lexicon for English from LDOCE. In Boguraev, B. &: Briscoe, E. eds. Com- putational Lexicography for Natural Language Pro- cessing. Longman, London: 117-134.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A stochastic parts program and noun phrase parser for unrestricted text", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proceedings of the 2nd Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "136--143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Church, K. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Pro- ceedings of the 2nd Conference on Applied Natural Language Processing, Austin, Texas. 136-143.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Part-of-speech lagging and phrasal tagging", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Elworthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Acquilex-II Working Paper", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elworthy, D. 1993. Part-of-speech lagging and phrasal tagging. Acquilex-II Working Paper 10, Cambridge University Computer Laboratory (can be obtained from [email protected]).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Does Baum:Welch reestimation help taggers", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Elworthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Garside", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Leech", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Sampson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Proceedings of the 4th Conference on Applied NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "Long-- mail", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elworthy, D. 1994. Does Baum:Welch re- estimation help taggers?. In Proceedings of the 4th Conference on Applied NLP, Stuttgart, Germany. Garside, R., Leech, G. & Sampson, G. 1987. Com- putational analysis of English. Harlow, UK: Long- mail.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Comlex syntax: building a computational lexicon", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Macleod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics, COLING-94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "268--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grishman, R., Macleod, C. & Meyers, A. 1994. Comlex syntax: building a computational lexicon. In Proceedings of the International Conference on Computational Linguistics, COLING-94, Kyoto, Japan. 268-272.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Evaluating syntax performance of parser/grammars of English", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Harrison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Flickenger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gdaniec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Ingria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the Workshop on Evaluating Natural Language Processing Systems, ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harrison, P., Abney, S., Black, E., Flickenger, D., Gdaniec, C., Grishman, R., Hindle, D., In- gria, B., Marcus, M., Santorini, B. & Strza- lkowski, T. 1991. Evaluating syntax performance of parser/grammars of English. In Proceedings of the Workshop on Evaluating Natural Language Processing Systems, ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Acquiring disambiguation rules from text", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "118--143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hindle, D. 1989. Acquiring disambiguation rules from text. In Proceedings of the 27th Annual Meet- ing of the Association for Computational Linguis- tics, Vancouver, Canada. 118-25.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "X-bar syntax", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Jackendoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jackendoff, R. 1977. X-bar syntax. Cambridge, MA: MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Can punctuation help parsing", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics, COLING-94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jones, B. 1994. Can punctuation help parsing?. In Proceedings of the International Conference on Computational Linguistics, COLING-94, Kyoto, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Statistical decision-tree models for parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magerman, D. 1995. Statistical decision-tree mod- els for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Lin- guistics, Boston, MA.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Parsing the LOB corpus", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "De Marcken", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "243--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "de Marcken, C. 1990. Parsing the LOB corpus. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, New York. 243-251.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The linguistics of punctuation", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Nunberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "CSLI Lecture Notes", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nunberg, G. 1990. The linguistics of punctuation. CSLI Lecture Notes 18, Stanford, CA.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Definite clause grammars for language analysis -a survey of the formalism and a comparison with augmented transition networks", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Warren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Artificial Intelligence", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "231--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pereira, F. & Warren, D. 1980. Definite clause grammars for language analysis -a survey of the formalism and a comparison with augmented tran- sition networks. Artificial Intelligence 13.3: 231- 278.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "English for the computer", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Sampson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Oxford", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sampson, G. 1995. English for the computer. Ox- ford, UK: Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Parsing of the Wall Street Journal with the insideoutside algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the Meeting of European Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schabes, Y., Roth, M. & Osborne, R. 1993. Pars- ing of the Wall Street Journal with the inside- outside algorithm. In Proceedings of the Meeting of European Association for Computational Lin- guistics, Utrecht, The Netherlands.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The syntactic regularity of English noun phrases", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 4th European Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor, L., Grover, C. & Briscoe, E. 1989. The syntactic regularity of English noun phrases. In Proceedings of the 4th European Meeting of the As- sociation for Computational Linguistics, Manch- ester, UK. 256-263.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Manual of information to accompany the SEC corpus: the machine-readable corpus of spoken English", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Knowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor, L. &: Knowles, G. 1988. Manual of in- formation to accompany the SEC corpus: the machine-readable corpus of spoken English. Uni- versity of Lancaster, UK, Ms.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Coping with ambiguity and unknown words through probabilistic models", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Meteer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Palmucci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "359--382", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weischedel, R., Meteer, M., Schwartz, R., Ramshaw, L. & Palmucci J. 1993. Coping with ambiguity and unknown words through probabilis- tic models. Computational Linguistics 19(2): 359- 382.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "crossings [] ................................... [] ...................................... El .................................. [] ............................. [] ......................................... [] ......................................... [] .............................. GEIG metrics for held-back sentences, training on varying amounts of data", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Grammar coverage on Susanne and SEC analysed successfully received under 100 parses, although there is a long tail in the distribution. Monitoring this distribution is helpful during grammar development to ensure that coverage is increasing but the ambiguity rate is not. A more succinct though less intuitive measure of ambiguity rate for a given corpus is", |
|
"content": "<table><tr><td/><td>Susanne</td><td/><td>SEC</td><td/></tr><tr><td>Parse fails</td><td colspan=\"2\">1476 21.0%</td><td colspan=\"2\">809 31.3%</td></tr><tr><td>1-9 parses</td><td colspan=\"2\">1436 20.5%</td><td colspan=\"2\">477 18.4%</td></tr><tr><td>10-99 parses</td><td colspan=\"2\">1218 17.4%</td><td colspan=\"2\">378 14.6%</td></tr><tr><td>100-999 parses</td><td colspan=\"2\">953 13.6%</td><td colspan=\"2\">276 10.7%</td></tr><tr><td>1K-9.9K parses</td><td>694</td><td>9.9%</td><td>225</td><td>8.7%</td></tr><tr><td>10K-99K parses</td><td>474</td><td>6.8%</td><td>154</td><td>6.0%</td></tr><tr><td>100K+ parses</td><td colspan=\"2\">750 10.7%</td><td colspan=\"2\">264 10.2%</td></tr><tr><td>Time-outs</td><td>13</td><td>0.2%</td><td>4</td><td>0.2%</td></tr><tr><td>Number of sentences</td><td>7014</td><td/><td>2717</td><td/></tr><tr><td>Mean sentence length (MSL)</td><td>20.1</td><td/><td>22.6</td><td/></tr><tr><td>MSL -fails</td><td>20.9</td><td/><td>29.5</td><td/></tr><tr><td>MSL -time-outs</td><td>73.6</td><td/><td>65.8</td><td/></tr><tr><td>Average Parse Base</td><td>1.313</td><td/><td>1.300</td><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>The phases, with dates, were:</td></tr><tr><td>6/92-11/93 Initial development of the grammar.</td></tr><tr><td>11/93-7/94 Substantial increase in coverage on</td></tr><tr><td>the development corpus (Susanne), correspond-</td></tr><tr><td>ing to a drive to increase the general coverage</td></tr><tr><td>of the grammar by analysing parse failures on</td></tr><tr><td>actual corpus material. From a lower initial fig-</td></tr><tr><td>ure, coverage of SEC (unseen corpus), increased</td></tr><tr><td>by a larger factor.</td></tr><tr><td>7/94-12/94 Incremental improvements in cover-</td></tr><tr><td>age, but at the cost of increasing the ambiguity</td></tr><tr><td>of the grammar.</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>: Grammar coverage and ambiguity during</td></tr><tr><td>development</td></tr><tr><td>text of Susanne compares well with the state-of-</td></tr><tr><td>the-art in grammar-based approaches to NL anal-</td></tr><tr><td>ysis (e.g. see</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>: GEIG evaluation metrics for test set of 250 held-back sentences against Susanne bracketings</td></tr><tr><td>to 0.71 and the recall and precision rise to 83-84%,</td></tr><tr><td>as shown in table 4.</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "GEIG evaluation metrics for test set of 250 held-back sentences against the manually-disambigated analyses", |
|
"content": "<table><tr><td>2-</td></tr><tr><td>1.5-</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |