|
{ |
|
"paper_id": "U08-1006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:11:50.899007Z" |
|
}, |
|
"title": "Automatic Acquisition of Training Data for Statistical Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Howlett", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sydney NSW 2006", |
|
"location": { |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Sydney NSW 2006", |
|
"location": { |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The limitations of existing data sets for training parsers has led to a need for additional data. However, the cost of manually annotating the amount and range of data required is prohibitive. For a number of simple facts like those sought in Question Answering, we compile a corpus of sentences extracted from the Web that contain the fact keywords. We use a state-of-the-art parser to parse these sentences, constraining the analysis of the more complex sentences using information from the simpler sentences. This allows us to automatically create additional annotated sentences which we then use to augment our existing training data.", |
|
"pdf_parse": { |
|
"paper_id": "U08-1006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The limitations of existing data sets for training parsers has led to a need for additional data. However, the cost of manually annotating the amount and range of data required is prohibitive. For a number of simple facts like those sought in Question Answering, we compile a corpus of sentences extracted from the Web that contain the fact keywords. We use a state-of-the-art parser to parse these sentences, constraining the analysis of the more complex sentences using information from the simpler sentences. This allows us to automatically create additional annotated sentences which we then use to augment our existing training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Determining the syntactic structure of sentences is a necessary step in analysing the content of text for a range of language processing tasks such as Question Answering (Harabagiu et al., 2000) and Machine Translation (Melamed, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 194, |
|
"text": "(Harabagiu et al., 2000)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 234, |
|
"text": "(Melamed, 2004)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The structures that a parsing system assigns to sentences are governed by the grammar used. While some parsers make use of hand-crafted grammars, e.g. Riezler et al. (2002) and , these typically cannot accommodate a wide variety of sentences and increasing coverage incurs significant development costs (Cahill et al., 2008) . This has led to interest in automatic acquisition of grammars from raw text or automatically annotated data such as the Penn Treebank (Marcus et al., 1993) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 172, |
|
"text": "Riezler et al. (2002)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 324, |
|
"text": "(Cahill et al., 2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 482, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Automatically-acquired grammars may be classified according to whether the learning algorithm used to estimate the language model is supervised or unsupervised. Supervised algorithms, e.g. Collins (1999) and Charniak (2000) , require a large number of sentences already parsed according to the desired formalism, while unsupervised approaches, e.g. Bod (2006) and Seginer (2007) , operate directly on raw text. While supervised approaches have generally proven more successful, the need for annotated training data is a major bottleneck.", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 203, |
|
"text": "Collins (1999)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 223, |
|
"text": "Charniak (2000)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 359, |
|
"text": "Bod (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 378, |
|
"text": "Seginer (2007)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although the emergence of the Penn Treebank as a standard resource has been beneficial in parser development and evaluation, parsing performance drops when analysing text from domains other than that represented in the training data (Sekine, 1997; Gildea, 2001 ). In addition, there is evidence that language processing performance can still benefit from orders of magnitude more data (e.g. ). However, the cost of manually annotating the necessary amount of data is prohibitive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 247, |
|
"text": "(Sekine, 1997;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 260, |
|
"text": "Gildea, 2001", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We investigate a method of automatically creating annotated data to supplement existing training corpora. We constructed a list of facts based on factoid questions from the TREC 2004 Question Answering track (Voorhees, 2004) and the ISI Question Answer Typology (Hovy et al., 2002) . For each of these facts, we extracted sentences from Web text that contained all the keywords of the fact. These sentences were then parsed using a state-of-the-art parser (Clark and Curran, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 224, |
|
"text": "(Voorhees, 2004)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 281, |
|
"text": "(Hovy et al., 2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 480, |
|
"text": "(Clark and Curran, 2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "By assuming that the same grammatical relations always hold between the keywords, we use the analyses of simple sentences to constrain the analysis of more complex sentences expressing the same fact. The constrained parses then form additional training data for the parser. The results here show that parser performance has not been adversely affected; when the scale of the data collection is increased, we expect to see a corresponding increase in performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The grammars used in parsing systems may be classified as either hand-crafted or automatically acquired. Hand-crafted grammars, e.g. Riezler et al. (2002) and , have the advantage of not being dependent on any particular corpus, however extending them to unrestricted input is difficult as new rules and their interactions with existing rules are determined by hand, a process which can be prohibitively expensive (Cahill et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 154, |
|
"text": "Riezler et al. (2002)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "(Cahill et al., 2008)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In contrast, the rules in acquired grammars are learned automatically from external resources, typically annotated corpora such as the Penn Treebank (Marcus et al., 1993) . Here, the system automatically determines all the rules necessary to generate the structures exemplified in the corpus. Due to the automatic nature of this process, the rules determined may not be linguistically motivated, leading to a potentially noisy grammar.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 170, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Systems that learn their model of syntax from annotated corpora like the Penn Treebank are termed supervised systems; in unsupervised learning algorithms, possible sentence structures are determined directly from raw text. Bod (2006) presents an unsupervised parsing algorithm that out-performs a supervised, binarised PCFG and claims that this heralds the end of purely supervised parsing. However these results and others, e.g. Seginer (2007) , are still well below the performance of state-of-the-art supervised parsers; Bod explicitly says that the PCFG used is not state-of-the-art, and that binarising the PCFG causes a drop in its performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 233, |
|
"text": "Bod (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 444, |
|
"text": "Seginer (2007)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "By extracting the grammar from an annotated corpus, the bottleneck is shifted from writing rules to the creation of the corpus. The 1.1 million words of newswire text in the Penn Treebank has certainly been invaluable in the development and evaluation of English parsers, however for supervised parsing to improve further, more data is required, from a larger variety of texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To investigate the importance of training data in language processing tasks, consider the problem of confusion set disambiguation, where the task is to determine which of a set of words (e.g. {to, too, two}) is correct in a particular context. Note that this is a problem for which large amounts of training data can be constructed ex-tremely cheaply. They compare four different machine learning algorithms and find that with a training corpus of 1 billion words, there is little difference between the algorithms, and performance appears not to be asymptoting. This result suggests that for other language processing tasks, perhaps including parsing, performance can still be improved by large quantities of additional training data, and that the amount of data is more important than the particular learning algorithm used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Quantity of data is not the only consideration, however. It has been shown that there is considerable variation in the syntactic structures used in different genres of text (Biber, 1993) , suggesting that a parser trained solely on newswire text will show reduced performance when parsing other genres.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 186, |
|
"text": "(Biber, 1993)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Evaluating parser performance on the Brown corpus, Sekine (1997) found that the best performance was achieved when the grammar was extracted from a corpus of the same domain as the test set, followed by one extracted from an equivalent size corpus containing texts from the same class (fiction or nonfiction), while parsers trained on a different class or different domain performed noticeably worse.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 64, |
|
"text": "Sekine (1997)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Comparing parsers trained on the Penn Treebank and Brown corpora, Gildea (2001) found that a small amount of training data from the same genre as the test set was more useful than a large amount of unmatched data, and that adding unmatched training data neither helped nor hindered performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 79, |
|
"text": "Gildea (2001)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To improve parsing performance on nonnewswire genres, then, training data for these additional domains are necessary. The Penn Treebank project took 8 years and was an expensive exercise (Marcus et al., 1993) , so larger and more varied annotated corpora are not likely to be forthcoming. Since unsupervised approaches still under-perform, it is necessary to explore how existing annotated data can be automatically extended, turning supervised into semi-supervised approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 208, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Combinatory Categorial Grammar (CCG; Steedman (2000)) is a mildly context sensitive, lexicalised grammar formalism, where each word in the sentence is assigned a lexical category (or supertag), either atomic or complex, which are then combined Mozart was born in 1756 using a small number of generic combinatory rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "NP (S [dcl ]\\NP )/(S [pss]\\NP ) S [pss]\\NP ((S \\NP )\\(S \\NP ))/NP NP > (S \\NP )\\(S \\NP ) < S [pss]\\NP > S [dcl ]\\NP < S [dcl ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The possible atomic categories are N , NP , PP and S , representing nouns, noun phrases, prepositional phrases and clauses, respectively. They may carry additional features, e.g. S [dcl ] represents a declarative clause. Complex categories are binary structures consisting of two categories and a slash, in the form X /Y or X \\Y . These represent constituents that when combined with a constituent with category Y (argument category) form a constituent with category X (result category). The forward and backward slashes indicate that the Y is to be found to the right and left of X , respectively. Examples of complex categories are NP /N (determiner) and (S \\NP )/NP (transitive verb).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The basic combinatory rules are forward and backward application, where complex categories are combined directly with their arguments to form the result. That is, X /Y Y \u21d2 X and Y X \\Y \u21d2 X. Instances of these rules are marked by the symbols > and < respectively. A derivation illustrating these rules is given in Figure 1 . Here, the word in with category ((S \\NP )\\(S \\NP ))/NP combines with 1756, an NP , using forward application to produce the constituent in 1756 with category (S \\NP )\\(S \\NP ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 321, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
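
{

"text": "To make the application rules concrete, the following toy sketch (not part of the C&C parser; all names are illustrative) encodes complex categories as (slash, result, argument) tuples, with 'fwd' standing for / and 'bwd' for \\, and replays the derivation of Figure 1. Feature unification is deliberately simplified: features such as [dcl] and [pss] are stripped before categories are compared, so intermediate results lose their features.\n\ndef strip_features(cat):\n    # drop features such as [dcl] or [pss] for this toy comparison\n    if isinstance(cat, str):\n        return cat.split('[')[0]\n    slash, result, arg = cat\n    return (slash, strip_features(result), strip_features(arg))\n\ndef forward_apply(left, right):\n    # X/Y  Y  =>  X   (the > rule)\n    if isinstance(left, tuple) and left[0] == 'fwd':\n        if strip_features(left[2]) == strip_features(right):\n            return left[1]\n    return None\n\ndef backward_apply(left, right):\n    # Y  X\\Y  =>  X   (the < rule)\n    if isinstance(right, tuple) and right[0] == 'bwd':\n        if strip_features(right[2]) == strip_features(left):\n            return right[1]\n    return None\n\n# Lexical categories for: Mozart was born in 1756\nmozart = 'NP'\nborn = ('bwd', 'S[pss]', 'NP')                       # S[pss]\\NP\nwas = ('fwd', ('bwd', 'S[dcl]', 'NP'), born)         # (S[dcl]\\NP)/(S[pss]\\NP)\nin_ = ('fwd', ('bwd', ('bwd', 'S', 'NP'), ('bwd', 'S', 'NP')), 'NP')  # ((S\\NP)\\(S\\NP))/NP\n\nin_1756 = forward_apply(in_, 'NP')                   # (S\\NP)\\(S\\NP)\nborn_in_1756 = backward_apply(born, in_1756)         # S\\NP here; the real parser keeps [pss] via unification\nwas_born_in_1756 = forward_apply(was, born_in_1756)  # S[dcl]\\NP\nprint(backward_apply(mozart, was_born_in_1756))      # S[dcl]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parsing with CCG",

"sec_num": "3"

},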
|
{ |
|
"text": "In addition, CCG has a number of rules such as forward composition (>B) (X /Y Y /Z \u21d2 X /Z ) and type-raising capabilities (e.g. X \u21d2 T /(T \\X ), represented by >T) that increase the grammar's expressive power from context free to mildly context sensitive. These allow more complex analyses where, instead of the verb taking its subject NP as an argument, the NP takes the verb as an argument. These additional rules mean that the sentence in Figure 1 may have alternative CCG analyses, for example the one shown in Figure 2 . This spurious ambi-guity has been shown not to pose a problem to efficient parsing with CCG (Clark and Curran, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 617, |
|
"end": 641, |
|
"text": "(Clark and Curran, 2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 447, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 522, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As each of the combinatory rules is applied, predicate-argument dependencies are created, of the form h f , f, s, h a , l , where f is the category that governs the dependency, h f is the word carrying the category f , s specifies which dependency of f is being filled, h a is the head word of the argument, and l indicates whether the dependency is local or longrange. For example, in Figure 1 , the word in results in the creation of two dependencies, both local:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 394, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "in, ((S \\NP )\\(S 1 \\NP ))/NP 2 , 1, born, \u2212 in, ((S \\NP )\\(S 1 \\NP ))/NP 2 , 2, 1756, \u2212", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
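
{

"text": "In code, these tuples could be represented with a simple named tuple (illustrative only; the parser's internal format may differ). The category ((S\\NP)\\(S_1\\NP))/NP_2 is abbreviated to a placeholder string so the sketch stays readable.\n\nfrom collections import namedtuple\n\n# fields mirror the tuple: head word, lexical category, argument slot, argument head, long-range flag\nDependency = namedtuple('Dependency', ['head', 'category', 'slot', 'arg', 'long_range'])\n\nPREP = 'prep_cat'   # stands in for ((S\\NP)\\(S_1\\NP))/NP_2\n\ndeps = [\n    Dependency('in', PREP, 1, 'born', False),   # slot 1: the modified verb phrase, headed by born\n    Dependency('in', PREP, 2, '1756', False),   # slot 2: the object of in\n]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parsing with CCG",

"sec_num": "3"

},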
|
{ |
|
"text": "The values for s refer back to the subscripts given on the category f . Although the spurious derivations such as those in Figure 2 have a different shape, they result in the creation of the same set of dependencies, and are therefore equally valid analyses.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 131, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The predicate-argument dependencies to be formed are governed by variables associated with the categories nested within the category f . For example, in is more fully represented by the cat-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "egory (((S Y \\NP Z ) Y \\(S Y,1 \\NP Z ) Y ) X /NP W,2 ) X .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "These variables represent the heads of the corresponding constituents; the variable X always refers to the word bearing the lexical category, in this case in.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "During the derivation, dependencies form between the word filling variable X and the word filling the variable directly associated with the dependency. In this example, the first dependency forms between in and the word filling variable Y , and the second forms between in and the word filling W . When in combines with 1756 through forward application, 1756 is identified with the argument NP W,2 , causing W to be filled by 1756, and so a dependency forms between in and 1756.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with CCG", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "NP (S [dcl ]\\NP )/(S [pss]\\NP ) S [pss]\\NP ((S \\NP )\\(S \\NP ))/NP NP >T > S /(S \\NP ) (S \\NP )\\(S \\NP ) >B < S /(S [pss\\NP ) S [pss]\\NP > S [dcl ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mozart was", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The parser we use (Clark and Curran, 2007) is a state-of-the-art parser based on CCG. The grammar is automatically extracted from CCGbank, a conversion of the Penn Treebank into CCG (Hockenmaier, 2003) . As the training data is ultimately from the Penn Treebank, the grammar is subject to the same difficulties mentioned in Section 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 42, |
|
"text": "(Clark and Curran, 2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 201, |
|
"text": "(Hockenmaier, 2003)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The C&C Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The two main components of this parser are the supertagger, responsible for assigning each word its possible CCG lexical categories, and the parser, which combines the lexical categories to form the full derivation, using the CKY chart parsing algorithm as described in Steedman (2000) . The supertagger and parser are tightly integrated so that, for example, if a spanning analysis is not found, more lexical categories can be requested.", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 285, |
|
"text": "Steedman (2000)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The C&C Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The third component, the decoder, chooses the most probable of the spanning analyses found by the parser, according to the statistical model used. Clark and Curran (2007) discuss three separate models, one of which involves finding the most probable derivation directly, while the other two involve optimising the dependency structure returned. Here we use the first model, called the normal-form model. Clark and Curran (2007) evaluate their parser in two different ways. Their main method of evaluation is to compare the parser's predicate-argument dependency output against the dependencies in CCGbank. They calculate labelled and unlabelled precision, recall and F-score for the CCG dependencies plus category accuracy, the percentage of words assigned the correct lexical category.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 170, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 427, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The C&C Parser", |
|
"sec_num": "4" |
|
}, |
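
{

"text": "As a reference for how these scores are computed, the following sketch (a hypothetical helper, not the parser's own evaluation script) compares predicted and gold dependencies as sets and derives precision, recall and F-score from the overlap; dropping everything except the two words from each dependency before comparison gives the unlabelled variants.\n\n# Toy dependency sets: tuples of (head word, category, slot, argument word).\ngold_deps = {('in', 'prep', 1, 'born'), ('in', 'prep', 2, '1756'), ('was', 'aux', 1, 'Mozart')}\npred_deps = {('in', 'prep', 1, 'born'), ('in', 'prep', 2, '1756')}\n\ndef prf(predicted, gold):\n    correct = len(predicted & gold)\n    p = correct / len(predicted) if predicted else 0.0\n    r = correct / len(gold) if gold else 0.0\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f\n\nprint(prf(pred_deps, gold_deps))                   # labelled scores: (1.0, 0.667, 0.8)\n\nstrip = lambda deps: {(d[0], d[3]) for d in deps}  # keep only the word pair\nprint(prf(strip(pred_deps), strip(gold_deps)))     # unlabelled scores",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The C&C Parser",

"sec_num": "4"

},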
|
{ |
|
"text": "This method of evaluation, however, does not allow easy comparison with non-CCG systems. Evaluating a CCG parser using the traditional metrics of precision, recall and F-score over Penn Treebank bracketings is problematic since CCG derivations, being binary-branching, can have a very different shape from the trees found in the Penn Treebank. This is particularly true of the spurious derivations, which will be heavily penalised even though they are correct analyses that lead to the correct predicateargument dependencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The C&C Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To address this problem, Clark and Curran (2007) also evaluate their parser against the re-annotation of the PARC Dependency Bank (DepBank; King et al. (2003) ), which contains 700 sentences from section 23 of the Penn Treebank, represented in the form of grammatical relations, e.g. ncsubj (non-clausal subject) and dobj (direct object). To do this, they convert the predicate-argument dependency output of the parser into grammatical relations, a non-trivial, many-tomany mapping. The conversion process puts the C&C parser at a disadvantage, however the its performance still rivals that of the RASP parser that returns DepBank grammatical relations natively. Figure 3 illustrates the grammatical relations obtained from the analysis in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 48, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 158, |
|
"text": "King et al. (2003)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 663, |
|
"end": 671, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 748, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The C&C Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Although some work has aimed at reducing the cost of training the C&C parser (Clark and Curran, 2006) , the question remains of whether annotated training data can be obtained for free.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 101, |
|
"text": "(Clark and Curran, 2006)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Getting Past the Bottleneck", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In their first entry to the TREC Question-Answering task, reasoned that while finding the correct answer in the given corpus may sometimes require sophisticated linguistic or logical processing, a preliminary answer found on the Web can help to identify the final answer in the corpus. They argue that the sheer size of the Web means that an answer can often be found using simple or shallow processing. We exploit the same idea of the redundancy of information on the Web here. A parsing model trained on existing data can already confidently parse a simple sentence such as Mozart was born in 1756 ( Figure 3 ). Given the size of the Web, many such sentences should be easily found. A longer sentence, such as Wolfgang Amadeus Mozart (baptized Johannes Chrysostomus Wolfgangus Theophilus) was born in Salzburg in 1756, the second survivor out of six children, is more complex and thus contains more opportunities for the parser to produce an incorrect analysis. However, this more complex sentence contains the same grammatical relations between the same words as in the simple sentence (Figure 4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 602, |
|
"end": 610, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1089, |
|
"end": 1099, |
|
"text": "(Figure 4)", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Getting Past the Bottleneck", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Similar to the one sense per collocation constraint (Yarowsky, 1993) , we assume that any sentence containing the words Mozart, born and 1756 will contain the same relationships between these words. We hypothesise that by constraining the parser to output an analysis consistent with these relationships, the correct analysis of the complex sentences can be found without manual intervention. These complex sentences can then be used as additional training data, allowing the parser to learn the general pattern of the sentence. For this process, we use grammatical relations rather than CCG dependencies as the former generalise across the latter and are therefore more transferable between sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 68, |
|
"text": "(Yarowsky, 1993)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Getting Past the Bottleneck", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our procedure may be outlined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "1. Manually select facts and identify keywords. 2. Collect sentences from HTML documents containing all keywords of a given fact. 3. Manually identify grammatical relations connecting fact keywords. 4. Parse all sentences for the fact, using these grammatical relations as constraints. 5. Add successful parses to training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Further detail is given below. Section 6.1 covers items 1 and 2, 6.2 item 3 and 6.3 items 4 and 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "6" |
|
}, |
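
{

"text": "At a high level, the whole procedure can be sketched as the pipeline below; every function name here is a placeholder for one of the steps described in Sections 6.1-6.3, and the sentence collector and the constrained parser are passed in as callbacks rather than implemented.\n\ndef build_training_data(facts, collect_sentences, parse_with_constraints):\n    # facts: iterable of (keywords, constraints) pairs prepared manually (steps 1 and 3)\n    extra = []\n    for keywords, constraints in facts:\n        for sentence in collect_sentences(keywords):                # step 2 (Section 6.1)\n            parse = parse_with_constraints(sentence, constraints)   # step 4 (Section 6.3)\n            if parse is not None:\n                extra.append((sentence, parse))                     # step 5\n    return extra",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method",

"sec_num": "6"

},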
|
{ |
|
"text": "First, we compiled a list of 43 facts based on factoid questions from the TREC 2004 Question Answering track (Voorhees, 2004) and the ISI Question Answer Typology (Hovy et al., 2002) . In order for our hypothesis to hold, it is necessary for the facts used to refer to relatively unambiguous entities. For each fact, we identified a set of keywords and submitted these as queries through the Google SOAP Search API. The query was restricted to files with .html or .htm extensions to simplify document processing. Each unique page returned was saved.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 125, |
|
"text": "(Voorhees, 2004)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 182, |
|
"text": "(Hovy et al., 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence collection", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Prior to splitting the documents into sentences, HTML tags were stripped from the text and HTML character entities replaced with their character equivalents. In order to incorporate some HTML markup information in the sentence boundary identification process, each page was split into a number of chunks by dividing at certain manually-identified HTML tags, such as heading and paragraph markers, which are unlikely to occur mid-sentence. Each of these chunks was then passed through the sentence boundary identifier available in NLTK v.0.9.3 1 (Kiss and Strunk, 2006) . Ideally, the process would use a boundary identifier trained on HTML text including markup, to avoid these heuristic divisions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 545, |
|
"end": 568, |
|
"text": "(Kiss and Strunk, 2006)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence collection", |
|
"sec_num": "6.1" |
|
}, |
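
{

"text": "A sketch of this step follows, assuming a recent NLTK in which the Kiss and Strunk (2006) algorithm is exposed as nltk.sent_tokenize (the v0.9.3 interface used above differed); the block-level tag list is an assumed stand-in for the manually identified tags.\n\nimport re\nimport html\nimport nltk   # requires the punkt model: nltk.download('punkt')\n\n# tags unlikely to occur mid-sentence (assumed, partial list)\nBLOCK_TAGS = re.compile(r'</?(?:p|div|h[1-6]|li|ul|ol|table|tr|td|br)[^>]*>', re.I)\nANY_TAG = re.compile(r'<[^>]+>')\n\ndef page_to_sentences(raw_html):\n    sentences = []\n    # split the page into chunks at block-level tags\n    for chunk in BLOCK_TAGS.split(raw_html):\n        # strip remaining tags and decode character entities such as &amp;\n        text = html.unescape(ANY_TAG.sub(' ', chunk))\n        text = ' '.join(text.split())\n        if text:\n            sentences.extend(nltk.sent_tokenize(text))\n    return sentences",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sentence collection",

"sec_num": "6.1"

},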
|
{ |
|
"text": "To further simplify processing, sentences were discarded if they contained characters other than standard ASCII alphanumeric characters or a small number of additional punctuation characters. We also regarded as noisy and discarded sentences containing whitespace-delimited tokens that contained fewer alphanumeric characters than other characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence collection", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Sentences were then tokenised using a tokeniser developed the C&C parser for the TREC competition (Bos et al., 2007) . Sentences longer than 40 tokens were discarded as this might indicate noise or an error in sentence identification. Finally, sentences were only kept if each fact keyword appeared exactly once in the tokenised sentence, to avoid having to disambiguate between repetitions of keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 116, |
|
"text": "(Bos et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence collection", |
|
"sec_num": "6.1" |
|
}, |
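
{

"text": "The filters described above might look roughly as follows (a sketch: the exact character whitelist is an assumption, and a crude regular expression stands in for the C&C tokeniser).\n\nimport re\n\n# assumed whitelist: ASCII alphanumerics plus a small set of punctuation\nALLOWED = re.compile(r'^[A-Za-z0-9 .,;:!?()%$&/-]+$')\n\ndef keep_sentence(sentence, keywords, max_tokens=40):\n    if not ALLOWED.match(sentence):\n        return False                            # non-ASCII or unusual characters\n    for tok in sentence.split():                # raw whitespace-delimited tokens\n        alnum = sum(ch.isalnum() for ch in tok)\n        if alnum < len(tok) - alnum:\n            return False                        # token looks like markup or other noise\n    tokens = re.findall(r'[A-Za-z0-9]+|[^A-Za-z0-9 ]', sentence)   # stand-in for the C&C tokeniser\n    if len(tokens) > max_tokens:\n        return False                            # overly long sentences are discarded\n    return all(tokens.count(k) == 1 for k in keywords)   # each fact keyword exactly once\n\nprint(keep_sentence('Mozart was born in 1756.', ['Mozart', 'born', '1756']))   # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sentence collection",

"sec_num": "6.1"

},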
|
{ |
|
"text": "At the conclusion of this process, each fact had between 0 and 299 associated sentences, for a total of 3472 sentences. 10 facts produced less than 10 sentences; the average yield of the remaining 33 facts was just over 100 sentences each. This processing does result in the loss of a considerable number of sentences, however we believe the quality and level of noise in the sentences to be a more important consideration than the quantity, since there are still many facts that could yet be used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence collection", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For each of the 33 facts that yielded more than 10 sentences, we identified a set of grammatical relations that connects the keywords in simple sentences. These sets contained between two and six grammatical relations; for example the relations for the Mozart fact were (ncsubj born Mozart), (ncmod born in) and (dobj in 1756). Since we assume that the keywords will be related by the same grammatical relations in all sentences (see Section 5), we use these grammatical relations as constraints on the analyses of all sentences for the corresponding fact. Future work will automate this constraint identification process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraint identification", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "To avoid the need to disambiguate between instances of particular words, when constraints are applied each word must be identified in the sentence uniquely. Although sentences that contained repetitions of fact keywords were discarded during the collection stage, the constraints identified in this stage of the process may incorporate additional words. In the Mozart fact, the constraints contain the word in in addition to the keywords Mozart, born and 1756. The sentence in Figure 4 , for example, contains two instances of in, so this sentence is discarded at this point, along with other sentences that contain multiple copies of a constraint word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 485, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Constraint identification", |
|
"sec_num": "6.2" |
|
}, |
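
{

"text": "For illustration, the constraints for the Mozart fact and the uniqueness check just described could be written as follows (a sketch; whitespace splitting again stands in for real tokenisation).\n\n# grammatical-relation constraints for the Mozart fact, as (relation, head, dependent) triples\nMOZART_CONSTRAINTS = [\n    ('ncsubj', 'born', 'Mozart'),\n    ('ncmod', 'born', 'in'),\n    ('dobj', 'in', '1756'),\n]\n\ndef constraint_words(constraints):\n    return {w for _, head, dep in constraints for w in (head, dep)}\n\ndef usable(tokens, constraints):\n    # every word mentioned in a constraint must appear exactly once in the sentence\n    return all(tokens.count(w) == 1 for w in constraint_words(constraints))\n\nprint(usable('Mozart was born in 1756'.split(), MOZART_CONSTRAINTS))    # True\nprint(usable('Wolfgang Amadeus Mozart was born in Salzburg in 1756'.split(), MOZART_CONSTRAINTS))   # False: in appears twice",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Constraint identification",

"sec_num": "6.2"

},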
|
{ |
|
"text": "Having identified the grammatical relations to use as constraints, the sentences for the corresponding fact are parsed with the constraints enforced using the process described below. Sentences from two of the 33 facts were discarded at this point because their constraints included grammatical relations of type conj, which cannot be enforced by this method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing with constraints", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "As described in Section 3, dependencies are formed between variables on lexical categories. For example, a dependency is created between the words in and 1756 in Figure 1 by filling a variable W in the category of in. To force a particular dependency to be formed, we determine which of the two words will bear the category that lists the dependency; this word we call the head. We identify the variable associated with the desired dependency and pre-fill it with the second word. The parser's own unification processes then ensure that the constraint is satisfied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 170, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with constraints", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Since the constraints are expressed in terms of grammatical relations and not CCG dependencies, each constraint must first be mapped back to a number of possible dependencies. First, the head word may be assigned several possible categories by the supertagger. In this case, if any category does not license the desired dependency, it is removed from the list; all others have a variable filled. In this way, no matter which category is chosen, the constraint is enforced. Secondly, one grammatical relation may map to some dependencies governed by the first word and some governed by the second. That is, in some constraints, either word may operate as the head. To account for this, we parse the sentence several times, with different combinations of head choices, and keep the highest probability analysis. When parsing with constraints, not all sentences can be successfully parsed. Those where a spanning analysis consistent with the constraints cannot be found are discarded; the remainder are added to the training corpus. From the 31 facts still represented at this stage, we obtained 662 sentences. Table 1 traces the quantity of data throughout the process described in Section 6, for a sample of the facts used. It also gives totals for all 43 facts and for the 31 facts represented at completion.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1108, |
|
"end": 1115, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing with constraints", |
|
"sec_num": "6.3" |
|
}, |
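
{

"text": "The head-choice search could be organised as in the sketch below, built around a hypothetical parse callback that returns a (probability, analysis) pair when a spanning analysis satisfying the constraints exists and None otherwise; it simply tries every combination of head assignments and keeps the most probable successful parse, and sentences for which it returns None are the ones discarded.\n\nfrom itertools import product\n\ndef best_constrained_parse(sentence, constraints, parse):\n    best = None\n    # for each constraint, either of its two words may act as the head\n    for heads in product((0, 1), repeat=len(constraints)):\n        result = parse(sentence, dict(zip(constraints, heads)))\n        if result is not None and (best is None or result[0] > best[0]):\n            best = result\n    return best   # None: no spanning analysis satisfied the constraints",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parsing with constraints",

"sec_num": "6.3"

},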
|
{ |
|
"text": "The first column lists the fact keywords. We used a range of fact types, including abbreviations (CNN stands for Cable News Network) and identities (Canberra is the capital of Australia, hydrogen is the lightest element) as well as actions (Armstrong landed on the Moon in 1969) and passive structures (Martin Luther King, Jr. was assassinated in 1969) . This last example is one where fewer than 10 sentences were collected, most likely due to the large number of keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 352, |
|
"text": "King, Jr. was assassinated in 1969)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The # Pages column in Table 1 shows the number of unique Web pages saved for each fact, while # Sents indicates the number of unique sentences containing all fact keywords, after tokenisation. The next two figures show the coverage of the baseline parser on the Web sentences. The # Discarded column indicates the number of these sentences that were discarded during the constraining process due to constraint words being absent or appearing more than once. The final two columns show how many of the remaining sentences the parser found a spanning analysis for that was consistent with the constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 29, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It is clear that there is considerable variation in the number of sentences discarded during the constraining process. For some facts, such as the Columbus fact, the keywords form a chain of grammatical relations with no additional words necessary. In such cases, no sentences were discarded since any sentences that reached this point already met the unique keyword restriction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The main reason for sentences being discarded during constraining is that some relationships are expressed in a number of different ways. Table 2 : Performance of the baseline (Clark and Curran (2007) normal-form model) and new parser models on CCGbank section 23. Most results use gold standard POS tags; LF (POS) uses automatically assigned tags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 200, |
|
"text": "(Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 145, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "work (CNN) . By selecting only one set of constraints, only one option can be used; sentences involving other formats are therefore discarded. To overcome these difficulties, it would be necessary to allow several different constraint chains to be applied to each sentence, taking the highest probability parse from all resulting analyses. This could also be used to overcome the word-uniqueness restriction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 10, |
|
"text": "(CNN)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "To evaluate the effectiveness of the 662 additional sentences, we trained two models for the C&C parser, a baseline using the parser's standard training data, sections 02-21 of CCGbank, and a second model using CCGbank plus the extra data. For details of the training process, see Clark and Curran (2007) . Following Clark and Curran (2007) , we evaluate the performance of the two parsing models by calculating labelled and unlabelled precision, recall and F-score over CCGbank dependencies, plus coverage, category accuracy (percentage of words with the correct lexical category) and sentence accuracy (percentage of sentences where all dependencies are correct).", |
|
"cite_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 304, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 340, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The difficulty we face here is that the additional data is Web text, while the evaluation data is still newswire text. Since genre effects are important, we cannot expect to see a large difference in results in this evaluation. Future work will compare the two models on a corpus of Web sentences collected according to the same procedure, using different facts. Table 2 summarises the results of our evaluation. The figures for the performance of the baseline system are the latest for the parser, and slightly higher than those given in Clark and Curran (2007) . It is clear that the results for the two systems are very similar and that adding 662 sentences, increasing the amount of data by approximately 1.7%, has had a very small effect. However, now that the automated procedure has been developed, we are in a position to substantially increase this amount of data without requiring manual annotation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 539, |
|
"end": 562, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 370, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have described a procedure for automatically annotating sentences from Web text for use as training data for a statistical parser. We assume that if several sentences contain the same set of relatively unambiguous keywords, for example Mozart, born and 1756, then those words will be connected by the same chain of grammatical relations in all sentences. On this basis, we constrain a state-of-the-art parser to produce analyses for these sentences that contain this chain of relations. The constrained analyses are then used as additional training data for the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Aside from the initial identification of facts, the only manual step of this process is the the choice of constraints. The automation of this step will involve the identification of reliable sentences, most likely short sentences consisting of only a single clause, from which the relation chain can be extracted. The chain will need to be supported by a number of such sentences in order to be accepted. This is the step that truly exploits the redundancy of information on the Web, as advocated by .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Once the choice of constraints has been automated, we will have a fully automatic procedure for creating additional annotated training data for a statistical parser. Our initial results show that the training data acquired in this manner can be used to augment existing training corpora while still maintaining high parser performance. We are now in a position to increase the scale of our data collection to determine how performance is affected by a training corpus that has been significantly increased in size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://nltk.sourceforge.net", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers and the Language Technology Research Group at the University of Sydney for their comments. The first author is supported by a University of Sydney Honours scholarship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Scaling to very very large corpora for natural language disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting of the Assocation for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th Annual Meeting of the Asso- cation for Computational Linguistics, pages 26-33.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Using register-diversified corpora for general language studies", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Biber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "219--241", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas Biber. 1993. Using register-diversified corpora for general language studies. Computational Linguis- tics, 19(2):219-241, June.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An all-subtrees approach to unsupervised parsing", |
|
"authors": [ |
|
{ |
|
"first": "Rens", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rens Bod. 2006. An all-subtrees approach to unsuper- vised parsing. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "865--872", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 865-872.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The Pronto QA system at TREC-2007", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edoardo", |
|
"middle": [], |
|
"last": "Guzetti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Sixteenth Text REtreival Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos, James R. Curran, and Edoardo Guzetti. 2007. The Pronto QA system at TREC-2007. In Proceedings of the Sixteenth Text REtreival Conference.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Data-intensive question answering", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Tenth Text REtreival Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "393--400", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, and Andrew Y. Ng. 2001. Data-intensive question answering. In Proceedings of the Tenth Text REtreival Conference, pages 393-400, November.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Poster Session of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In Proceedings of the Poster Session of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 41-48, July.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The second release of the RASP system", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Watson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceed- ings of the COLING/ACL 2006 Interactive Presenta- tion Sessions, pages 77-80, Sydney, Australia, July.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Wide-coverage deep statistical parsing using automatic dependency structure annotation", |
|
"authors": [ |
|
{ |
|
"first": "Aoife", |
|
"middle": [], |
|
"last": "Cahill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Burke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Ruth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Donovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "1", |
|
"pages": "81--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aoife Cahill, Michael Burke, Ruth O'Donovan, Stefan Riezler, Josef van Genabith, and Andy Way. 2008. Wide-coverage deep statistical parsing using auto- matic dependency structure annotation. Computa- tional Linguistics, 34(1):81-124, March.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A maxmium entropy inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maxmium entropy inspired parser. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 132-139.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Partial training for a lexicalized-grammar parser", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "144--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and James R. Curran. 2006. Partial train- ing for a lexicalized-grammar parser. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 144-151, June.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "4", |
|
"pages": "493--552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552, December.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Head-Driven Statistical Models for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Corpus variation and parser performance", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gildea. 2001. Corpus variation and parser per- formance. In Lillian Lee and Donna Harman, edi- tors, Proceedings of the 2001 Conference on Empir- ical Methods in Natural Language Processing, pages 167-202.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "FAL-CON: Boosting knowledge for answer engines", |
|
"authors": [ |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pasca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Milhalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasile", |
|
"middle": [], |
|
"last": "Rus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Morarescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Ninth Text REtreival Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanda Harabagiu, Dan Moldovan, Marius Pasca, Rada Milhalcea, Mihai Surdeanu, Razvan Bunescu, Roxana Girju, Vasile Rus, and Paul Morarescu. 2000. FAL- CON: Boosting knowledge for answer engines. In Proceedings of the Ninth Text REtreival Conference.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Data and Models for Statistical Parsing with Combinatory Categorial Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the DARPA Human Language Technology conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "247--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julia Hockenmaier. 2003. Data and Models for Statis- tical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh, Edinburgh, UK. Eduard Hovy, Ulf Hermjakob, and Deepak Ravichan- dran. 2002. A question/answer typology with surface text patterns. In Proceedings of the DARPA Human Language Technology conference, pages 247-251.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The PARC 700 dependency bank", |
|
"authors": [ |
|
{ |
|
"first": "Tracy Holloway", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Crouch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Dalrymple", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronald", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the EACL03: 4th International Workshop on Linguistically Interpreted Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M. Kaplan. 2003. The PARC 700 dependency bank. In Proceedings of the EACL03: 4th International Workshop on Linguisti- cally Interpreted Corpora, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Unsupervised multilingual sentence boundary detection", |
|
"authors": [ |
|
{ |
|
"first": "Tibor", |
|
"middle": [], |
|
"last": "Kiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Strunk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Linguistics", |
|
"volume": "32", |
|
"issue": "4", |
|
"pages": "485--525", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tibor Kiss and Jan Strunk. 2006. Unsupervised multi- lingual sentence boundary detection. Computational Linguistics, 32(4):485-525.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Building a large annotated corpus of english: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of english: The Penn Treebank. Computational Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Statistical machine translation by parsing", |
|
"authors": [ |
|
{ |
|
"first": "I.", |
|
"middle": [ |
|
"Dan" |
|
], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "653--660", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 2004. Statistical machine translation by parsing. In Proceedings of the 42nd Annual Meet- ing of the Association for Computational Linguistics, pages 653-660.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Parsing the Wall Street Journal using a lexical-functional gramar and discriminative estimation techniques", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tracy", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronald", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Crouch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Maxwell", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "271--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T Maxwell III, and Mark John- son. 2002. Parsing the Wall Street Journal using a lexical-functional gramar and discriminative estima- tion techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguis- tics, pages 271-278.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Fast unsupervised incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Seginer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--391", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Seginer. 2007. Fast unsupervised incremental pars- ing. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 384-391, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The domain dependence of parsing", |
|
"authors": [ |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the fifth conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satoshi Sekine. 1997. The domain dependence of pars- ing. In Proceedings of the fifth conference on Applied Natural Language Processing, pages 96-102.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The Syntactic Process. Language, Speech, and Communication series", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Steedman. 2000. The Syntactic Process. Lan- guage, Speech, and Communication series. The MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Overview of the TREC 2004 Question Answering track", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Thirteenth Text REtreival Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen M. Voorhees. 2004. Overview of the TREC 2004 Question Answering track. In Proceedings of the Thir- teenth Text REtreival Conference.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "One sense per collocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "266--271", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1993. One sense per collocation. In Proceedings of the workshop on Human Language Technology, pages 266-271.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "CCG derivation for the sentence Mozart was born in 1756.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Alternative derivation for the sentence inFigure 1, illustrating CCG's spurious ambiguity.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Grammatical relations for Mozart was born in 1756.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "Desired analysis for the more complex sentence, with the analysis inFigure 3indicated by dashed lines.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Number of sentences acquired for a selection of facts. Figures are given for the number of sentences parsed and failed by the normal (unconstrained) parser as an indication of parser coverage on Web sentences.", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Prepositions and punctuation are frequently responsible for this. For example, the date of the Moon landing can be expressed as both in 1969 and on July 20, 1969, while the expansion of the abbreviation CNN may be expressed as CNN: Cable News Network, CNN (Cable News Network) and indeed Cable News Net-", |
|
"content": "<table><tr><td>Model</td><td>LP</td><td>LR</td><td>LF</td><td colspan=\"2\">LF (POS) SENT ACC</td><td>UP</td><td>UR</td><td>UF</td><td>CAT ACC</td><td>COV</td></tr><tr><td>Baseline</td><td colspan=\"3\">85.53 84.71 85.12</td><td>83.38</td><td>32.14</td><td colspan=\"3\">92.37 91.49 91.93</td><td>93.05</td><td>99.06</td></tr><tr><td>New</td><td colspan=\"3\">85.64 84.77 85.21</td><td>83.54</td><td>32.03</td><td colspan=\"3\">92.41 91.47 91.94</td><td>93.08</td><td>99.06</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |