|
{ |
|
"paper_id": "H01-1026", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:31:06.970922Z" |
|
}, |
|
"title": "Facilitating Treebank Annotation Using a Statistical Parser", |
|
"authors": [ |
|
{ |
|
"first": "Fu-Dong", |
|
"middle": [], |
|
"last": "Chiou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": { |
|
"addrLine": "200 S 33rd Street", |
|
"postCode": "19104-6389", |
|
"settlement": "Philadelphia", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": { |
|
"addrLine": "200 S 33rd Street", |
|
"postCode": "19104-6389", |
|
"settlement": "Philadelphia", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": { |
|
"addrLine": "200 S 33rd Street", |
|
"postCode": "19104-6389", |
|
"settlement": "Philadelphia", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "", |
|
"pdf_parse": { |
|
"paper_id": "H01-1026", |
|
"_pdf_hash": "", |
|
"abstract": [], |
|
"body_text": [ |
|
{ |
|
"text": "Corpora of phrase-structure-annotated text, or treebanks, are useful for supervised training of statistical models for natural language processing, as well as for corpus linguistics. Their primary drawback, however, is that they are very time-consuming to produce. To alleviate this problem, the standard approach is to make two passes over the text: first, parse the text automatically, then correct the parser output by hand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper we explore three questions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 How much does an automatic first pass speed up annotation?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 Does this automatic first pass affect the reliability of the final product?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 What kind of parser is best suited for such an automatic first pass?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We investigate these questions by an experiment to augment the Penn Chinese Treebank [15] using a statistical parser developed by Chiang [3] for English. This experiment differs from previous efforts in two ways: first, we quantify the increase in annotation speed provided by the automatic first pass (70-100%); second, we use a parser developed on one language to augment a corpus in an unrelated language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 89, |
|
"text": "[15]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 140, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The parsing model described by Chiang [3] is based on stochastic TAG [13, 14] . In this model a parse tree is built up out of tree fragments (called elementary trees), each of which contains exactly one lexical item (its anchor).", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 41, |
|
"text": "Chiang [3]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 69, |
|
"end": 73, |
|
"text": "[13,", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 77, |
|
"text": "14]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PARSER", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In the variant of TAG used here, there are three kinds of elementary trees: initial, (predicative) auxiliary, and modifier, and three corresponding composition operations: substitution, adjunction, and sister-adjunction. Figure 1 illustrates all three of these operations. The first two come from standard TAG [8] ; the third is borrowed from D-tree grammar [11] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 313, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 362, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 229, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "THE PARSER", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In a stochastic TAG derivation, each elementary tree is generated with a certain probability which depends on the elementary tree itself as well as the node it gets attached to. Since every tree is . lexicalized, each of these probabilities involves a bilexical dependency, as in many recent statistical parsing models [9, 2, 4] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 322, |
|
"text": "[9,", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 325, |
|
"text": "2,", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 328, |
|
"text": "4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PARSER", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Since the number of parameters of a stochastic TAG is quite high, we do two things to make parameter estimation easier. First, we generate an elementary tree in two steps: the unlexicalized tree, then a lexical anchor. Second, we smooth the probability estimates of these two steps by backing off to reduced contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PARSER", |
|
"sec_num": "2." |
|
}, |
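{

"text": "The following sketch is illustrative only and is not taken from the original paper: it shows one generic way to realize the two-step generation with backoff smoothing described above, using relative frequencies interpolated with reduced-context estimates. The class and function names, the choice of context features, and the interpolation weight are all assumptions made for this example.\n\nfrom collections import Counter\n\nclass BackedOffEstimator:\n    # Relative-frequency estimate interpolated with a reduced-context estimate.\n    def __init__(self, lam=0.8):  # lam is an assumed interpolation weight\n        self.lam = lam\n        self.joint, self.ctx = Counter(), Counter()\n        self.joint_bk, self.ctx_bk = Counter(), Counter()\n\n    def observe(self, outcome, ctx, reduced_ctx):\n        self.joint[(outcome, ctx)] += 1\n        self.ctx[ctx] += 1\n        self.joint_bk[(outcome, reduced_ctx)] += 1\n        self.ctx_bk[reduced_ctx] += 1\n\n    def prob(self, outcome, ctx, reduced_ctx):\n        p = self.joint[(outcome, ctx)] / self.ctx[ctx] if self.ctx[ctx] else 0.0\n        q = self.joint_bk[(outcome, reduced_ctx)] / self.ctx_bk[reduced_ctx] if self.ctx_bk[reduced_ctx] else 0.0\n        return self.lam * p + (1 - self.lam) * q\n\n# Step 1: P(unlexicalized elementary tree | attachment site, parent anchor)\ntree_step = BackedOffEstimator()\n# Step 2: P(lexical anchor | chosen elementary tree, reduced context)\nanchor_step = BackedOffEstimator()\n\ndef elementary_tree_prob(tree, anchor, site, parent_anchor):\n    # The probability of a lexicalized elementary tree factors into the two smoothed steps.\n    return (tree_step.prob(tree, (site, parent_anchor), site)\n            * anchor_step.prob(anchor, (tree, parent_anchor), tree))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "THE PARSER",

"sec_num": "2."

},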
|
{ |
|
"text": "When trained on about 80,000 words of the Penn Chinese Treebank and tested on about 10,000 words of unseen text, this model obtains 73.9% labeled precision and 72.2% labeled recall [1] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 184, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PARSER", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "For the present experiment the parsing model was trained on the entire treebank (99,720 words). We then prepared a new set of 20,202 segmented, POS-tagged words of Xinhua newswire text, which was blindly divided into 3 sets of equal size (\u00b110 words).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "METHODOLOGY", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Each set was then annotated in two or three passes, as summarized by the following table:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "METHODOLOGY", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Pass 1 Pass 2 Pass 3 1 -Annotator A Annotators A&B 2 parser Annotator A Annotators A&B 3 revised parser Annotator A Annotators A&B Here \"Annotators A&B\" means that Annotator B checked the work of Annotator A, then for each point of disagreement, both annotators worked together to arrive at a consensus structure. \"Parser\" is Chiang's parser, adapted to parse Chinese text as described by Bikel and Chiang [1] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 409, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"Revised parser\" is the same parser with additional modifications suggested by Annotator A after correcting Set 2. These revisions primarily resulted from a difference between the artificial evaluation metric used by Bikel and Chiang [1] and this real-world task. The metric used earlier, following common practice, did not take punctuation or empty elements into account, whereas the present task ideally requires that they be present and correctly placed. Thus following changes were made:", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 237, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The parser was originally trained on data with the punctuation marks moved, and did not bother to move the punctuation marks back. For Set 3 we simply removed the preprocessing phase which moved the punctuation marks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Similarly, the parser was trained on data which had all empty elements removed. In this case we simply applied a rulebased postprocessor which inserted null relative pronouns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Finally, the parser often produced an NP (or VP) which dominated only a single NP (respectively, VP), whereas such a structure is not specified by the bracketing guidelines. Therefore we applied another rule-based postprocessor to remove these nodes. (This modification would have helped the original evaluation as well.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
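{

"text": "As a concrete illustration of the third change, the sketch below is not the authors' actual postprocessor; the tuple-based tree representation and the function name are assumptions. It removes an NP or VP node that dominates only a single node with the same label.\n\ndef remove_redundant_unary(tree):\n    # tree is (label, children), where each child is a subtree or a word string.\n    label, children = tree\n    children = [c if isinstance(c, str) else remove_redundant_unary(c) for c in children]\n    if (label in ('NP', 'VP') and len(children) == 1\n            and not isinstance(children[0], str) and children[0][0] == label):\n        return children[0]  # collapse NP over NP (or VP over VP)\n    return (label, children)\n\n# Example: ('NP', [('NP', [('NN', ['income'])])]) becomes ('NP', [('NN', ['income'])]).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Set",

"sec_num": null

},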
|
{ |
|
"text": "In short, none of the modifications required major changes to the parser, but they did improve annotation speed significantly, as we will see below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Set", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The annotation times and rates for Pass 2 are as follows: where LP stands for labeled precision and LR stands for labeled recall. The third column reports the accuracy of Pass 1 (the parser) using the results of Pass 2 (Annotator A) as a gold standard. The fourth column reports the accuracy of Pass 2 (Annotator A) using the results of Pass 3 (Annotators A&B) as a gold standard. We note several points:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 There is no indication that the addition of an automatic first pass affected the accuracy of Pass 2. On the other hand, the near-perfect reported accuracy of Pass 2 suggests that in fact each pass biased subsequent passes substantially. We need a more objective measure of reliability, which we leave for future experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 The parser revisions significantly improved the accuracy of the parser with respect to the present metric (which is sensitive to punctuation and empty elements). On Set 2 the revised parser obtained 78.98/77.39% labeled precision/recall, an error reduction of about 9%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "\u2022 Not surprisingly, errors due to large-scale structural ambiguities were the most time-consuming to correct by hand. To take an extreme example, one parse produced by the parser is shown in Figure 2 . It often matches the correct parse (shown in Figure 3 ) at the lowest levels but the large-scale errors require the annotator to make many corrections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 199, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 255, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "In summary, although Chiang's parser was not specifically designed for Chinese, and trained on a moderate amount of data (less than 100,000 words), the parses it provided were reliable enough that the annotation rate was effectively doubled.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Now we turn to our third question: what kind of parser is most suitable for an automatic first pass? Marcus et al. [10] describe the use of the deterministic parser Fidditch [6] as an automatic first pass for the Penn (English) Treebank. They cite two features of this parser as strengths:", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 119, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 177, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "1. It only produces a single parse per sentence, so that the annotator does not have to search through many parses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "2. It produces reliable partial parses, and leaves uncertain structures unspecified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The Penn-Helsinki Parsed Corpus of Middle English was constructed using a statistical parser developed by Collins [4] as an automatic first pass. This parser, as well as Chiang's, retains the first advantage but not the second. However, we suggest two ways a statistical parser might be used to speed annotation further:", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 117, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "First, the parser can be made more useful to the annotator. A statistical parser typically produces a single parse, but can also (with little additional computation) produce multiple parses. Ratnaparkhi [12] has found that choosing (by oracle) the best parse out of the 20 highest-ranked parses boosts labeled recall and precision (IP (NP (DP (DT \u00c2)) from about 87% to about 93%. This suggests that if the annotator had access to several of the highest-ranked parses, he or she could save time by choosing the parse with the best gross structure and making small-scale corrections.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 207, |
|
"text": "[12]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
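{

"text": "To make the oracle comparison concrete, the sketch below is not from the paper; the bracket representation and function names are assumptions. It picks, from a k-best list, the candidate whose labeled brackets best match a gold parse; an annotation tool could instead present the same ranked list to the annotator.\n\ndef labeled_f1(gold, pred):\n    # gold and pred are sets of (label, start, end) brackets.\n    if not gold or not pred:\n        return 0.0\n    tp = len(gold & pred)\n    if tp == 0:\n        return 0.0\n    precision, recall = tp / len(pred), tp / len(gold)\n    return 2 * precision * recall / (precision + recall)\n\ndef oracle_best(gold_brackets, kbest_brackets):\n    # Choose the candidate parse closest to the gold standard.\n    return max(kbest_brackets, key=lambda cand: labeled_f1(gold_brackets, cand))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "DISCUSSION",

"sec_num": "5."

},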
|
{ |
|
"text": "Would such a change defeat the first advantage above by forcing the annotator to search through multiple parses? No, because the parses produced by a statistical parser are ranked. The additional lower-ranked parses can only be of benefit to the annotator. Indeed, because the chart contains information about the certainty of each subparse, a statistical parser might regain the second advantage as well, provided this information can be suitably presented.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Second, the annotator can be made more useful to the parser by means of active learning or sample selection [5, 7] . (We are assuming now that the parser and annotator will take turns in a trainparse-correct cycle, as opposed to a simple two-pass scheme.) The idea behind sample selection is that some sentences are more informative for training a statistical model than others; therefore, if we have some way of automatically guessing which sentences are more informative, these sentences are the ones we should handcorrect first. Thus the parser's accuracy will increase more quickly, potentially requiring the annotator to make fewer corrections overall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 111, |
|
"text": "[5,", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 114, |
|
"text": "7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": "5." |
|
} |
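,

{

"text": "The train-parse-correct cycle with sample selection might look like the sketch below. This is illustrative only: the parser and annotator interfaces (train, best_parse, correct) and the length-normalized log-probability uncertainty score are assumptions, not part of the paper.\n\ndef uncertainty(parser, sentence):\n    # A simple proxy for informativeness: length-normalized negative log\n    # probability of the best parse (an assumed measure, not the paper's).\n    _, logprob = parser.best_parse(sentence)\n    return -logprob / max(len(sentence), 1)\n\ndef annotate_with_sample_selection(parser, labeled, unlabeled, annotator, rounds=10, batch=50):\n    # Retrain, pick the sentences the parser is least certain about,\n    # and have the annotator correct those first.\n    for _ in range(rounds):\n        parser.train(labeled)\n        unlabeled.sort(key=lambda s: uncertainty(parser, s), reverse=True)\n        selected, unlabeled = unlabeled[:batch], unlabeled[batch:]\n        for sentence in selected:\n            draft, _ = parser.best_parse(sentence)\n            labeled.append(annotator.correct(sentence, draft))\n    return labeled",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "DISCUSSION",

"sec_num": "5."

}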
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Fei Xia, Mitch Marcus, Aravind Joshi, Mary Ellen Okurowski and John Kovarik for their helpful comments on the design of the evaluation, Beth Randall for her postprocessing and error-checking code, and Nianwen Xue for serving as \"Annotator B.\" This research was funded by DARPA N66001-00-1-8915, DOD MDA904-97-C-0307, and NSF SBR-89-20230-15.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ACKNOWLEDGMENTS", |
|
"sec_num": "6." |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Two statistical parsing models applied to the Chinese Treebank", |
|
"authors": [ |
|
{

"first": "Daniel",

"middle": [

"M"

],

"last": "Bikel",

"suffix": ""

},

{

"first": "David",

"middle": [],

"last": "Chiang",

"suffix": ""

}
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Second Chinese Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel and David Chiang. Two statistical parsing models applied to the Chinese Treebank. In Proceedings of the Second Chinese Language Processing Workshop, pages 1-6, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Statistical parsing with a context-free grammar and word statistics", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "598--603", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), pages 598-603. AAAI Press/MIT Press, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistical parsing with an automatically-extracted tree adjoining grammar", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting of the Assocation for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "456--463", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. Statistical parsing with an automatically-extracted tree adjoining grammar. In Proceedings of the 38th Annual Meeting of the Assocation for Computational Linguistics, pages 456-463, Hong Kong, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Three generative lexicalised models for statistical parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35th Annual Meeting of the Assocation for Computational Linguistics (ACL-EACL '97)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. Three generative lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Assocation for Computational Linguistics (ACL-EACL '97), pages 16-23, Madrid, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Committee-based sampling for training probabilistic classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Engelson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Twelfth International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "150--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan and Sean P. Engelson. Committee-based sampling for training probabilistic classifiers. In Proceedings of the Twelfth International Conference on Machine Learning, pages 150-157. Morgan Kaufmann, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Acquiring disambiguation rules from text", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donald Hindle. Acquiring disambiguation rules from text. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Sample selection for statistical grammar induction", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of EMNLP/VLC-2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Hwa. Sample selection for statistical grammar induction. In Proceedings of EMNLP/VLC-2000, pages 45-52, Hong Kong, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Tree-adjoining grammars", |
|
"authors": [ |
|
{

"first": "Aravind",

"middle": [

"K"

],

"last": "Joshi",

"suffix": ""

},

{

"first": "Yves",

"middle": [],

"last": "Schabes",

"suffix": ""

}
|
], |
|
"year": 1997, |
|
"venue": "Handbook of Formal Languages and Automata", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "69--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aravind K. Joshi and Yves Schabes. Tree-adjoining grammars. In Grzegorz Rosenberg and Arto Salomaa, editors, Handbook of Formal Languages and Automata, volume 3, pages 69-124. Springer-Verlag, Heidelberg, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Statistical decision-tree models for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 33rd Annual Meeting of the Assocation for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Magerman. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Assocation for Computational Linguistics, pages 276-283, Cambridge, MA, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Building a large annotated corpus of English: the Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313-330, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "D-tree grammars", |
|
"authors": [ |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Vijay-Shanker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weir", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 33rd Annual Meeting of the Assocation for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Owen Rambow, K. Vijay-Shanker, and David Weir. D-tree grammars. In Proceedings of the 33rd Annual Meeting of the Assocation for Computational Linguistics, pages 151-158, Cambridge, MA, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Maximum entropy models for natural language ambiguity resolution", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi. Maximum entropy models for natural language ambiguity resolution. PhD thesis, University of Pennsylvania, 1998.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Probabilistic tree-adjoining grammar as a framework for statistical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "418--424", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik. Probabilistic tree-adjoining grammar as a framework for statistical natural language processing. In Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92), pages 418-424, Nantes, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Stochastic lexicalized tree-adjoining grammars", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "426--432", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yves Schabes. Stochastic lexicalized tree-adjoining grammars. In Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92), pages 426-432, Nantes, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Developing guidelines and ensuring consistency for Chinese text annotation", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ellen" |
|
], |
|
"last": "Okurowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Kovarik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fu-Dong", |
|
"middle": [], |
|
"last": "Chiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shizhe", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Kroch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitch", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Xia, Martha Palmer, Nianwen Xue, Mary Ellen Okurowski, John Kovarik, Fu-Dong Chiou, Shizhe Huang, Tony Kroch, and Mitch Marcus. Developing guidelines and ensuring consistency for Chinese text annotation. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000), Athens, Greece, 2000.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Grammar and derivation for \"John should leave tomorrow.\" \u03b11 and \u03b12 are initial trees, \u03b2 is a (predicative) auxiliary tree, \u03b3 is a modifier tree.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "VP (PU ) (VP (VV \u00a5) create (NP (NN \u00ec)) income (QP (CD \u01f1 \u01f1\u00ea ) 4.43 billion (CLP (M \u00ff)))))))) RMB (PU \u00a2))", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Parser output. Translation: \"These businesses also transfer and spread the intellectual property rights of 36,000 technologies to other businesses and organizations, creating an income of 4.43 billion RMB.\" (IP (NP-SBJ (DP (DT \u00c2)) WHNP-1 (-NONE-*OP*)) (CP (IP (NP-SBJ (-NONE-*T*-1)) (VP (VV \u00d4 ) possess (NP-OBJ (NN \u00a7 ) to be one's own master OBJ (NN \u00ec)) income (QP-EXT (CD \u01f1 \u01f1\u00ea ) 4.43 billion (CLP (M \u00ff)))))) RMB (PU \u00a2))", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Corrected parse for sentence ofFigure 2.", |
|
"uris": null |
|
} |
|
} |
|
} |
|
} |