{
"paper_id": "E93-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:54:55.971045Z"
},
"title": "Parsing the Wall Street Journal with the Inside-Outside Algorithm",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": "",
"affiliation": {
"laboratory": "Mitsubishi Electric Research Laboratories",
"institution": "",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Michal",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "Mitsubishi Electric Research Laboratories",
"institution": "",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Randy",
"middle": [],
"last": "Osborne",
"suffix": "",
"affiliation": {
"laboratory": "Mitsubishi Electric Research Laboratories",
"institution": "",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We report grammar inference experiments on partially parsed sentences taken from the Wall Street Journal corpus using the inside-outside algorithm for stochastic context-free grammars. The initial grammar for the inference process makes no ,assumption of the kinds of structures and their distributions. The inferred grammar is evaluated by its predicting power and by comparing the bracketing of held out sentences imposed by the inferred grammar with the partial bracketings of these sentences given in the corpus. Using part-of-speech tags as the only source of lexical information, high bracketing accuracy is achieved even with a small subset of the available training material (1045 sentences): 94.4% for test sentences shorter than 10 words and 90.2% for sentences shorter than 15 words.",
"pdf_parse": {
"paper_id": "E93-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "We report grammar inference experiments on partially parsed sentences taken from the Wall Street Journal corpus using the inside-outside algorithm for stochastic context-free grammars. The initial grammar for the inference process makes no ,assumption of the kinds of structures and their distributions. The inferred grammar is evaluated by its predicting power and by comparing the bracketing of held out sentences imposed by the inferred grammar with the partial bracketings of these sentences given in the corpus. Using part-of-speech tags as the only source of lexical information, high bracketing accuracy is achieved even with a small subset of the available training material (1045 sentences): 94.4% for test sentences shorter than 10 words and 90.2% for sentences shorter than 15 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most broad coverage natural language parsers have been designed by incorporating hand-crafted rules. These rules are also very often further refined by statistical training. Furthermore, it is widely believed that high performance can only be achieved by disambiguating lexically sensitive phenomena such as prepositional attachment ambiguity, coordination or subcategorizadon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far, grammar inference has not been shown to be effective for designing wide coverage parsers. Baker (1979) describes a training algorithm for stochastic context-free grammars (SCFG) which can be used for grammar reestimation (Fujisaki et al. 1989 , Sharrnan et al. 1990 , Black et al. 1992 , Briscoe and Waegner 1992 or grammar inference from scratch (Lari and Young 1990 ). However, the application of SCFGs and the original inside-outside algorithm for grammar inference has been inconclusive for two reasons. First, each iteration of the algorithm on a gr,-unmar with n nonterminals requires O(n31wl 3) time per t~ning sentence w. Second, the inferred grammar imposes bracketings which do not agree with linguistic judgments of sentence structure. Pereira and Schabes (1992) extended the inside-outside algorithm for inferring the parameters of a stochastic context-free grammar to take advantage of constituent bracketing information in the training text. Although they report encouraging experiments (90% bracketing accuracy) on h'mguage transcriptions in the Texas Instrument subset of the Air Travel Information System (ATIS), the small size of the corpus (770 bracketed sentences containing a total of 7812 words), its linguistic simplicity, and the computation time required to vain the grammar were reasons to believe that these results may not scale up to a larger and more diverse corpus.",
"cite_spans": [
{
"start": 98,
"end": 110,
"text": "Baker (1979)",
"ref_id": "BIBREF0"
},
{
"start": 229,
"end": 250,
"text": "(Fujisaki et al. 1989",
"ref_id": null
},
{
"start": 251,
"end": 273,
"text": ", Sharrnan et al. 1990",
"ref_id": null
},
{
"start": 274,
"end": 293,
"text": ", Black et al. 1992",
"ref_id": "BIBREF3"
},
{
"start": 294,
"end": 320,
"text": ", Briscoe and Waegner 1992",
"ref_id": "BIBREF5"
},
{
"start": 365,
"end": 375,
"text": "Young 1990",
"ref_id": "BIBREF8"
},
{
"start": 755,
"end": 781,
"text": "Pereira and Schabes (1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We report grammar inference experiments with this algorithm from the parsed Wall Street Journal corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experiments prove the feasibility and effectiveness of the inside-outside algorithm on a htrge corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such experiments are made possible by assumi'ng a right br~mching structure whenever the parsed corpus leaves portions of the parsed tree unspecified. This preprocessing of the corpus makes it fully bracketed. By taking adv~mtage of this fact in the implementation of the inside-outside ~dgorithm, its complexity becomes line~tr with respect to the input length (as noted by Pereira and Schabes, 1992) ,and therefore tractable for large corpora.",
"cite_spans": [
{
"start": 375,
"end": 401,
"text": "Pereira and Schabes, 1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We report experiments using several kinds of initial gr~unmars ~md a variety of subsets of the corpus as training data. When the entire Wall Street Journal corpus was used as training material, the time required for training has been further reduced by using a par~dlel implementation of the inside-outside ~dgorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The inferred grammar is evaluated by measuring the percentage of compatible brackets of the bracketing imposed by the inferred grammar with the partial bracketing of held out sentences. Surprisingly high bracketing accuracy is achieved with only 1042 sentences as train-\u2022 ing materi,'d: 94.4% for test sentences shorter th,-m 10 words ~md 90.2% for sentences shorter than 15 words. Furthermore, the bracketing accuracy does not drop drastic~dly as longer sentences ,are considered. These results ,are surprising since the training uses part-ofspeech tags as the only source of lexical information. This raises questions about the statistical distribution of sentence structures observed in naturally occurring text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After having described the training material used, we report experiments using several subsets of the available training material ,and evaluate the effect of the training size on the bracketing perform,'mce. Then, we describe a method for reducing the number of parameters in the inferred gr~unmars. Finally, we suggest a stochastic model for inferring labels on the produced binary br~mching trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experiments use texts from the Wall Street Journ~d Corpus ,and its partially bracketed version provided by the Penn Treebank (Brill et al., 1990) . Out of 38 600 bracketed sentences (914 000 words), we extracted 34500 sentences (817 000 words) as possible source of training material ,and 4100 sentences (97 000 words) as source for testing. We experimented with several subsets (350, 1095, 8000 ,and 34500 sentences) of the available training materi~d.",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "(Brill et al., 1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "2"
},
{
"text": "For practiced purposes, the part of the tree bank used for training is preprocessed before being used. First, fiat portions of parse trees found in the tree b,'mk are turned into a right linear binary br~mching structure. This enables us to take full adv~mtage of the fact that the extended inside-outside ~dgorithm (as described in Pereira and Schabes, 1992) behaves in linear time when the text is fully bracketed. Then, the syntactic labels are ignored. This allows the reestimation algorithm to distribute its own set of labels based on their actual distribution. We later suggest a method for recovering these labels.",
"cite_spans": [
{
"start": 333,
"end": 359,
"text": "Pereira and Schabes, 1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "2"
},
{
"text": "The following is ,an ex~unple of a partially parsed sentence found in the found in the tr,'fining corpus. The experiments reported in this paper use only the p,'trt-of-speech sequences of this corpus ,and the resulting fully bracketed parses. For the above example, the following bracketing is used in the training material:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "2"
},
{
"text": "(DT (NN (IN (DT (JJ NNS)))) (VBZ (VBN VBN)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "2"
},
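{
"text": "As an illustration of this preprocessing, the following sketch (ours, not code from the paper) right-binarizes flat constituents; trees are nested Python lists whose leaves are part-of-speech tags, with syntactic labels already dropped:\n\ndef binarize(tree):\n    # A leaf is a part-of-speech tag.\n    if isinstance(tree, str):\n        return tree\n    children = [binarize(c) for c in tree]\n    if len(children) == 1:\n        return children[0]\n    # Fold flat children into a right-linear cascade: [a, b, c] -> [a, [b, c]].\n    result = [children[-2], children[-1]]\n    for child in reversed(children[:-2]):\n        result = [child, result]\n    return result\n\n# The flat sequence DT NN IN DT JJ NNS becomes (DT (NN (IN (DT (JJ NNS))))),\n# matching the right-branching subtree in the example above.\nprint(binarize(['DT', 'NN', 'IN', 'DT', 'JJ', 'NNS']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus",
"sec_num": "2"
},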
{
"text": "For the set of experiments described in this section, the initial gr,'unmar consists of,all 4095 possible Chore-sky Normal Form rules over 15 nonterminals (X i, 1 < i < 15) and 48 termin,'d symbols (t,,, 1 < m < 48) for part-of-speech tags (the same set as the one used in the Penn Treebank):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
{
"text": "X i =:~ X]X k X i =~ t m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
{
"text": "The parameters of the initial stochastic context-free grammar are set randomly while maintaining the proper conditions for stochastic context-free grammars. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
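{
"text": "A minimal sketch (ours, not the paper's implementation) of this initial grammar: 15^3 = 3375 binary rules plus 15 x 48 = 720 lexical rules, 4095 in all, with random probabilities normalized so that the rules sharing a left-hand side sum to one:\n\nimport random\n\nN = 15                                    # nonterminals X_1 .. X_15\nTAGS = ['t%d' % m for m in range(1, 49)]  # stand-ins for the 48 POS tags\n\ndef random_initial_grammar(seed=0):\n    rng = random.Random(seed)\n    grammar = {}                          # lhs -> {rhs: probability}\n    for i in range(N):\n        rules = {(j, k): rng.random() for j in range(N) for k in range(N)}\n        rules.update({t: rng.random() for t in TAGS})\n        total = sum(rules.values())       # normalize per left-hand side\n        grammar[i] = {rhs: p / total for rhs, p in rules.items()}\n    return grammar\n\nassert sum(len(r) for r in random_initial_grammar().values()) == 4095",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},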
{
"text": "Using the algorithm described in Pereira and Schabes (1992) , the current rule probabilities and the parsed training set C are used to estimate the expected frequencies of each rule. Once these frequencies are computed over each bracketed sentence c in the training set, new rule probabilities ,are assigned in a way that increases the estimated probability of the bracketed training set. This process is iterated until the increase in the estimated probability of the bracketed training text becomes negligible, or equivalently, until the decrease in cross entropy (negative log probability)",
"cite_spans": [
{
"start": 33,
"end": 59,
"text": "Pereira and Schabes (1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
{
"text": "Z logP (c) ~t (c,G) = cEc Z Icl ceC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
{
"text": "becomes negligible. In the above formula, the probability P(c) of the partially bracketed sentence c is computed as the sum of the probabilities of all derivations compatible with the bracketing of the sentence. This notion of compatible bracketing is defined in details in Pereim and Schabes (1992) . Informally speaking, a derivation is compatible with the bracketing of the input given in the tree bank, if no bracket imposed by the derivation crosses a bracket in the input.",
"cite_spans": [
{
"start": 274,
"end": 299,
"text": "Pereim and Schabes (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},
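{
"text": "Both notions can be made concrete in a short sketch (ours): the crossing test behind compatible derivations, and the per-word cross entropy defined above:\n\ndef crosses(span, brackets):\n    # (i, j) crosses (k, l) when the two spans overlap without nesting.\n    i, j = span\n    return any(i < k < j < l or k < i < l < j for k, l in brackets)\n\ndef compatible(derivation_spans, input_brackets):\n    # A derivation is compatible if none of its spans crosses an input bracket.\n    return not any(crosses(s, input_brackets) for s in derivation_spans)\n\ndef cross_entropy(log_probs, lengths):\n    # log_probs: log P(c) for each training sentence c; lengths: |c|.\n    return -sum(log_probs) / sum(lengths)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},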
{
"text": "Incompatible bracket Input bracketing ( )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compatible bracket Input bracketing",
"sec_num": null
},
{
"text": "As refining material, we selected randomly out of the available training material 1042 sentences of length shorter than 15 words. For evaluation purposes, we also 1. The sum of the probabilities of the rules with same left hand side must be one. nmdomly selected 84 sentences of length shorter than 15 words among the test sentences. Figure 1 shows the cross entropy of the training after each iteration. It also shows for each iteration the cross entropies f/of 84 sentences randomly selected ,among the test sentences of length shorter than 15 words. The cross entropy decreases ,as more iterations ,are performed and no over training is observed.. Bracketing and sentence accuracy of 84 test sentences shorter than 15 words.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A ( )",
"sec_num": null
},
{
"text": "To evaluate the quality of the analyses yielded by the inferred grammars obtained ,after each iteration, we used a Viterbi-style parser to find the most likely analyses of sentences in several test samples, and compared them with the Treebank partial bmcketings of the sentences of those samples. For each sample, we counted the percent-age of brackets of the most likely ~malysis that are not \"crossing\" the partiid bracketing of the same sentences found in the Treebank. This percentage is called the bracketing accuracy (see Pereira and Schabes, 1992 tor the precise definition of this measure). We also computed the percentage of sentences in each smnple in which no crossing bracket wits found. This percentage is called the sentence accuracy. Figure 2 shows the bracketing and sentence accuracy for the s,'une 84 test sentences. Table 1 shows the bracketing and sentence accuracy for test sentences within various length ranges. High bracketing accuracy is obtained even on relatively long sentences. However, as expected, the sentence accuracy decreases rapidly as the sentences get longer. Table 2 compares our results with the bracketing accuracy of analyses obtained by a systematic right linear branching structure for all words except for the final punctuation mark (which we att~tched high). 2 We also evaluated the stochastic context-free gr, unmar obtained by collecting each level of the trees found in the training tree bimk (see Table 2 ). Right linear structures perform surprisingly well. Our results improve by 20 percentage points upon this base line performance. These results suggest that the distribution of sentence structure in naturally occurring text is simpler than one may have thought, especially since only part-of-speech tags were used. This may suggest 2. We thank Eric Brill and David Yarowsky for suggesting these experiments. the existence of clusters of trees in the training material. However, using the number of crossing brackets ils a distance between trees, we have been unable to reveal the existence of clusters.",
"cite_spans": [
{
"start": 528,
"end": 553,
"text": "Pereira and Schabes, 1992",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 749,
"end": 757,
"text": "Figure 2",
"ref_id": null
},
{
"start": 835,
"end": 842,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1098,
"end": 1105,
"text": "Table 2",
"ref_id": null
},
{
"start": 1447,
"end": 1454,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A ( )",
"sec_num": null
},
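{
"text": "A sketch (ours) of the two evaluation measures on spans represented as (start, end) pairs: bracketing accuracy is the percentage of proposed brackets that do not cross the Treebank bracketing, and sentence accuracy is the percentage of test sentences with no crossing bracket at all:\n\ndef crosses(span, gold):\n    i, j = span\n    return any(i < k < j < l or k < i < l < j for k, l in gold)\n\ndef evaluate(proposed, gold_bracketings):\n    brackets = good = perfect = 0\n    for spans, gold in zip(proposed, gold_bracketings):\n        ok = sum(1 for s in spans if not crosses(s, gold))\n        brackets += len(spans)\n        good += ok\n        perfect += (ok == len(spans))\n    bracketing_accuracy = 100.0 * good / brackets\n    sentence_accuracy = 100.0 * perfect / len(proposed)\n    return bracketing_accuracy, sentence_accuracy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inferring Bracketings",
"sec_num": "3"
},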
{
"text": "The grammar obtained by collecting rules from the tree bank performs very poorly. One can conclude that the labels used in the tree bank do not have ,'my statistical property. The task of inferring a stochastic grammar from a tree bank is not trivial and therefore requires statistical training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "In the appendix we give examples of the most likely analyses output by the inferred grammar on severld test sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "In Table 3 , different subsets of the available trltining sentences of lengths up to 15 words long and the grammars were evaluated on the same set of test sentences of lengths shorter than 15 words. The size of the training set does not seem to ,affect the performimce of the parser. Effect of the size of the training set on the bracketing and sentence accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "However if one includes all available sentences (34700 sentences), for the stone test set, the bracketing accuracy drops to 84% ,and the sentence accuracy to 40%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "We have also experimented with the following initial grmnmar which defines a large number of rules (I 10640):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "X i ~ XjX k X i ~ t i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "In this grammar, each non-terminal symbol is uniquely ,associated with a terminal symbol. We observed over-Ix,fining with this grmnmar ,and better statistic~d convergence was obtained, however the performance of the parser did not improve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Bracketing Accuracy Sentence Accuracy",
"sec_num": null
},
{
"text": "As grammars are being inferred at each iteration, the training algorithm was designed to guarantee that no parameter was set below some small threshold. This constraint is important for smoothing. It implies that no rule ever disappears at a reestimation step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
{
"text": "However, once the final grammar is found, for practical purposes, one can reduce the number of parameters being used. For example, the size of the grammar can be reduced by eliminating the rules whose probabilities are below some threshold or by keeping for each non-terminal only the top rules rewriting it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
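{
"text": "A sketch (ours) of this reduction, on the grammar representation used in the earlier sketches (lhs -> {rhs: probability}, with binary right-hand sides as pairs), keeping the top binary and lexical rules per nonterminal and renormalizing; the top-20/top-10 setting mentioned later in this section yields 450 rules:\n\ndef prune(grammar, top_binary=20, top_lexical=10):\n    pruned = {}\n    for lhs, rules in grammar.items():\n        binary = [(r, p) for r, p in rules.items() if isinstance(r, tuple)]\n        lexical = [(r, p) for r, p in rules.items() if not isinstance(r, tuple)]\n        keep = sorted(binary, key=lambda x: -x[1])[:top_binary]\n        keep += sorted(lexical, key=lambda x: -x[1])[:top_lexical]\n        total = sum(p for _, p in keep)       # renormalize what survives\n        pruned[lhs] = {r: p / total for r, p in keep}\n    return pruned",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},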
{
"text": "However, one runs into the risk of not being able to parse sentences given as input. We used the following smoothing heuristics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
{
"text": "Lexieal rule smoothing. In the case no rule in the gnunmar introduces a terminal symbol found in the input string, we assigned a lexical rule (X i ~ tin) with very low \u2022 probability for all non-terminal symbols. This case will not happen if the training is representative of the lexical items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
{
"text": "Syntactic rule smoothing. When the sentence is not recognized from the starting symbol, we considered ,all possible non-terminal symbols as starting symbols ,and considered as starting symbol the one that yields the most likely ,'malysis. Although this procedure may not guarantee that ,all sentences will be recognized, we found it is very useful in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
{
"text": "When none of the above procedures enable parsing of the sentence, we used the entire set of parameters of the inferred gr,~mar (this was never the case on the test sentences we considered).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
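{
"text": "The fallback order can be summarized in a sketch (ours); parse(grammar, tags, start) is an assumed Viterbi parser returning the most likely analysis as a (probability, tree) pair or None, and FLOOR is a hypothetical very low probability:\n\nFLOOR = 1e-9    # hypothetical floor probability for unseen lexical rules\n\ndef parse_with_fallbacks(pruned, full, tags, nonterminals, parse, start):\n    # Lexical rule smoothing: any tag the pruned grammar cannot introduce\n    # gets a floor-probability lexical rule under every nonterminal.\n    for tag in tags:\n        if not any(tag in rules for rules in pruned.values()):\n            for lhs in nonterminals:\n                pruned[lhs].setdefault(tag, FLOOR)\n    analysis = parse(pruned, tags, start)\n    if analysis is not None:\n        return analysis\n    # Syntactic rule smoothing: try every nonterminal as the start symbol\n    # and keep the most likely analysis found.\n    candidates = [a for s in nonterminals\n                  if (a := parse(pruned, tags, s)) is not None]\n    if candidates:\n        return max(candidates)            # highest-probability analysis\n    # Last resort: the full, unpruned grammar.\n    return parse(full, tags, start)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},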
{
"text": "For example, the grammar whose performance is depicted in Table 2 defines 4095 parameters. However, the same performance is achieved on these test sets by using only 450 rules (the top 20 binary branching rules X i ~ XjXk for each non-terminal symbol ,and the top 10 lexical rules X i ~ I m for each non-terminal symbol), Pereira and Schabes (1992) note that the training ,algorithm behaves in linear time (with respect to the sentence length) when the training material consists of fully bracketed sentences. By taking advantage of this fact, the experiments using a small number of initial rules and a small subset of the available training materials do not require a lot of computation time and can be performed on a single workstation. However, the experiments using larger initial grammars or using more material require more computation.",
"cite_spans": [
{
"start": 322,
"end": 348,
"text": "Pereira and Schabes (1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reducing the Grammar Size and Smoothing Issues",
"sec_num": "4"
},
{
"text": "The training algorithm can be parallelized by dividing the training corpus into fixed size blocks of sentences ,and by having multiple workstations processing each one of them independently. When ,all blocks have been computed, the counts are merged and the parameters are reestimated. For this purpose, we used PVM (Beguelin et al., 1991) as a mechanism for message passing across workstations.",
"cite_spans": [
{
"start": 316,
"end": 339,
"text": "(Beguelin et al., 1991)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5."
},
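{
"text": "A sketch (ours) of the scheme; the paper used PVM message passing across workstations, while here Python's multiprocessing stands in, and expected_counts(grammar, block) is an assumed E-step routine returning rule counts, keyed by (lhs, rhs), for one block of sentences:\n\nfrom multiprocessing import Pool\n\ndef reestimate(grammar, counts):\n    # M-step: normalize the merged counts per left-hand side.\n    new = {}\n    for lhs, rules in grammar.items():\n        lhs_counts = {r: counts.get((lhs, r), 0.0) for r in rules}\n        total = sum(lhs_counts.values()) or 1.0\n        new[lhs] = {r: c / total for r, c in lhs_counts.items()}\n    return new\n\ndef one_iteration(grammar, blocks, expected_counts, workers=4):\n    # E-step on fixed-size blocks of sentences, one worker per block.\n    with Pool(workers) as pool:\n        partial = pool.starmap(expected_counts, [(grammar, b) for b in blocks])\n    merged = {}\n    for counts in partial:                # merge the per-block counts\n        for rule, c in counts.items():\n            merged[rule] = merged.get(rule, 0.0) + c\n    return reestimate(grammar, merged)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5."
},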
{
"text": "The stochastic grmnmars inferred by the training procedures produce unlabeled parse trees. We are currently evaluating the following stochastic model for labeling a binary branching tree. In this approach, we make the simplifying assumption that the label of a node only depends on the labels of its children. Under this assumption, the probability of labeling a tree is the product of the probability of labeling each level in the tree. For example, the probability of the following labeling: These probabilities can be estimated in a simple manher given a tree bank. For example, the probability of labeling a level as NP ~ DTNN is estimated as the number of occurrences (in the tree bank) ofNP ~ DTNN divided by the number of occurrences ofX =~ DTNN where X ranges over every label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". Stochastic Model of Labeling for Binary Branching Trees",
"sec_num": null
},
{
"text": "Then the probability of a labeling can be computed bottom-up from leaves to root. Using dyn,'unic programruing on increasingly large subtrees, the labeling with the highest probability can be computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". Stochastic Model of Labeling for Binary Branching Trees",
"sec_num": null
},
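{
"text": "A sketch (ours, under the stated independence assumption) of both steps: estimating the level probabilities from a labeled tree bank by the count ratio above, then scoring labels bottom-up with dynamic programming; backpointers for recovering the argmax labeling are omitted for brevity. Labeled trees are nested lists like ['S', ['NP', 'DT', 'NN'], ['VP', 'VBZ', 'NNS']]:\n\nfrom collections import defaultdict\n\ndef estimate_label_probs(labeled_trees):\n    # P(X -> children) = count(X -> children) / count(any label -> children).\n    counts = defaultdict(float)\n    totals = defaultdict(float)\n    def walk(node):\n        label, children = node[0], node[1:]\n        kids = tuple(c if isinstance(c, str) else c[0] for c in children)\n        counts[(label, kids)] += 1\n        totals[kids] += 1\n        for c in children:\n            if not isinstance(c, str):\n                walk(c)\n    for t in labeled_trees:\n        walk(t)\n    return {k: v / totals[k[1]] for k, v in counts.items()}\n\ndef label_scores(tree, probs, labels):\n    # Bottom-up dynamic programming: best score for each label at this node.\n    if isinstance(tree, str):             # leaves keep their POS tag\n        return {tree: 1.0}\n    left = label_scores(tree[0], probs, labels)\n    right = label_scores(tree[1], probs, labels)\n    return {x: max((probs.get((x, (y, z)), 0.0) * ps * qs\n                    for y, ps in left.items()\n                    for z, qs in right.items()), default=0.0)\n            for x in labels}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Model of Labeling for Binary Branching Trees",
"sec_num": "6"
},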
{
"text": "We are currently evzduating the effectiveness of this vnethod.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". Stochastic Model of Labeling for Binary Branching Trees",
"sec_num": null
},
{
"text": "The experiments described in this paper prove the effectiveness of the inside-outside ~dgorithm on a htrge corpus, ,and also shed some light on the distribution of sentence structures found in natural languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "We reported gr~unmar inference experiments using the inside-outside algorithm on the parsed Wall Street Jourmd corpus. The experiments were made possible by turning the partially parsed training corpus into a fully bracketed corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Considering the fact that part-of-speech tags were the only source of lexical information actually used, surprisingly high bracketing accuracy is achieved (90.2% on sentences of length up to 15). We believe that even higher results can be achieved by using a richer set of part-of-speech tags. These results show that the use of simple distributions of constituency structures c~m provide high accuracy perfonnance for broad coverage natund hmguage parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "We thank Eric Brill, Aravind Joshi, Mark Liberman, Mitchel Marcus, Fernando Pereira, Stuart Shieber ,and David Yarowsky for valuable discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The following parsed sentences are the most likely analyses output by the grammar inferred from 1042 training sentences (at iteration 68) for some randomly selected sentences of length not exceeding 10 words. Each parse is preceded by the bracketing given in the Treebank. SeritenceS output by the parser are printed in bold face and crossing brackets are marked with an asterisk (*).(((The/DT Celtona/NP operations/NNS) would/MD (become/VB (part/NN (of/IN (those/DT ventures/NNS))))) .L) (((The/DT (Celtona/NP operations/NNS)) (would/MD (become/VB (part/NN (of/IN (those/DT ventures/ NNS))))))) i.) ((But/CC then/RB they/PP (wake/VBP up/IN (tofI'O (a/I)T nightmare/NN)))) ./.) ((But/CC (then/RB (they/PP (wake/VBP (up/IN (to/TO (a/DT nightmare/NN))))))) J.) :/: (a/DT \"'/'\" fragrance/NN control/NN system/NN ./. \"/\"))) ((* (The/DT highlight/NN) (* :/: (a/DT ((\"/'\" fragrance/NN) (control/NN system/NN))) *) *) (./. \"/\")) (((Stock/NP prices/NNS) (slipped/VBD lower/DR (in/IN (moderate/JJ trading/NN))) ./.)) (((Stock/NP prices/NNS) (slipped/VBD (lower/J JR (in/IN (moderate/JJ trading/NN))))) ./.) (((Some/DT jewelers/NNS) (have/VBP (Geiger/NP counters/NNS) (to/TO (measure/VB (top~tz/NN radiation/NN)))) ./3) (((Some/DT jewelers/NNS) (have/VBP ((Geiger/NP counters/NNS) (to/TO (measure/VB (topaz/NN radiation/ NN)))))) ./.) ((That/DT ('s/VBZ ( (the/DT only/JJ question/NN ) (we/PP (need/VBP (to/TO address/VB)))))) ./.) ((That/DT ('s/VBZ ((the/DT (only/JJ question/NN)) (we/PP (need/VBP (to/TO address/VB)))))) ./.) ((She/PP (was/VBD (as/RB (cool/JJ (as/IN (a/DT cucumber/NN)))))) ./.) ((She/PP (was/VBD (as/RB (cool/JJ (as/IN (a/DT cucumber/NN)))))) ./.) (((The/DT index/NN) (gained/VBD (99.14/CD points/NNS) Monday/NP)) ./.) (((The/DT index/NN) (gained/VBD ((99.14/CD points/NNS) Monday/NP))) J.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix Examples of parses",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "MIT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, J.K. 1979. Trainable grammars for speech recog- nition. In Jared J. Wolf,and Dennis H. Klatt, editors, Speech communication papers presented at the 97 th Meeting of the Acoustical Society of America, MIT, Cambridge, MA, June.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Users' guide to PVM Parallel Virtual Machine",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Beguelin",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Dongarra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Geist",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vaidy",
"middle": [],
"last": "Sunderam",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Beguelin, Jack Dongarra, A1 Geist, Robert M,'mchek, Vaidy Sunderam. July 1991.\"A Users' guide to PVM Parallel Virtual Machine\", Oak Ridge National Lab, TM-11826.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A procedure for quantitatively comparing the syntactic coverage of English grmnmars. DARPA Speech and Natural Language Workshop",
"authors": [
{
"first": "E",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickenger",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Khwans",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "S~mtorini",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "6--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Black, S. Abney, D. Flickenger, R. Grishman, P. Har- rison, D. Hindle, R. Ingria, F. Jelinek, J. Khwans, M. Liberman, M. Marcus, S. Roukos, B. S~mtorini, ~md T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grmn- mars. DARPA Speech and Natural Language Work- shop, pages 3(i)6-311, Pacific Grove, California. Morgan Kaufinann.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development and Evaluation of a Broad-Coverage Probabilistic Grmnmar of English-Language Computer Manuals",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "John L;Dferty",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1992,
"venue": "20 th Meeting ~+the Association fi)r Computational Linguistics (A CL' 92)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Black, John L;dferty, and Salim Roukos. 1992. Development and Evaluation of a Broad-Coverage Probabilistic Grmnmar of English-Language Com- puter Manuals. In 20 th Meeting ~+the Association fi)r Computational Linguistics (A CL' 92), Newark, Dela- ware.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deducing linguistic structure from the statistics of htrge corpora",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Magerm",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1990,
"venue": "DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill, David Magerm,'m, Mitchell Marcus, and Beat- rice Santorini. 1990. Deducing linguistic structure from the statistics of htrge corpora. In DARPA Speech and Natural Language Workshop. Morgan Kaufinann, Hidden Valley, Pennsylv~mia, June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Robust Stochastic Parsing Using the Inside-Outside Algorithm",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Waegner",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe ,and Nick Waegner. July 1992. Robust Sto- chastic Parsing Using the Inside-Outside Algorithm.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "AAAI workshop on Statistically-based Techniques in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In AAAI workshop on Statistically-based Techniques in Natural Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A probabilistic parsing method for sentence disarnbiguation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fujimtki",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nishino",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Fujimtki, F. Jelinek, J. Cocke, E. Black, and T. Nish- ino. 1989. A probabilistic parsing method for sentence disarnbiguation. Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, August.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The estimation of stochastic context-free gr,-unmars using the Inside-Outside ,algorithm",
"authors": [
{
"first": "K",
"middle": [
"L"
],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. L,'ui ,and S.J. Young. 1990. The estimation of stochas- tic context-free gr,-unmars using the Inside-Outside ,algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Inside-outside reestimation from partially bracketed corpora",
"authors": [
{
"first": "Fern",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "20 th Meeting of the Association for Computational Linguistics (ACL' 92)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, Fern,'mdo and Yves Schabes. 1992. Inside-out- side reestimation from partially bracketed corpora. In 20 th Meeting of the Association for Computational Linguistics (ACL' 92), Newark, Delaware.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ".........................."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "is P(S ~ NP VP) P(NP ~ DTNN) P(VP ~ VBZ NNS)"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>0-10</td><td>0-15</td><td>10-19</td><td>20-30</td></tr><tr><td>94.4%</td><td>90.2%</td><td>82.5%</td><td>71.5%</td></tr><tr><td>82%</td><td>57.1%</td><td>30%</td><td>6.8%</td></tr><tr><td colspan=\"4\">Bracketing Accuracy on test sentences o</td></tr><tr><td colspan=\"4\">different lengths (using 1042 sentences of</td></tr><tr><td colspan=\"4\">lengths shorter than 15 words as training</td></tr><tr><td>material).</td><td/><td/><td/></tr></table>",
"text": "",
"num": null,
"type_str": "table"
}
}
}
}