|
{ |
|
"paper_id": "E95-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:31:45.997795Z" |
|
}, |
|
"title": "The Problem of Computing the Most Probable Tree in Data-Oriented Parsing and Stochastic Tree Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Rens", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Amsterdam", |
|
"location": { |
|
"addrLine": "Spuistraat 134", |
|
"postCode": "1012 VB", |
|
"settlement": "Amsterdam", |
|
"country": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We deal with the question as to whether there exists a polynomial time algorithm for computing the most probable parse tree of a sentence generated by a data-oriented parsing (DOP) model. (Scha, 1990; Bod, 1992, 1993a). Therefore we describe DOP as a stochastic tree-substitution grammar (STSG). In STSG, a tree can be generated by exponentially many derivations involving different elementary trees. The probability of a tree is equal to the sum of the probabilities of all its derivations. We show that in STSG, in contrast with stochastic context-free grammar, the Viterbi algorithm cannot be used for computing a most probable tree of a string. We propose a simple modification of Viterbi which allows by means of a \"select-random\" search to estimate the most probable tree of a string in polynomial time. Experiments with DOP on ATIS show that only in 68% of the cases, the most probable derivation of a string generates the most probable tree of that string. Therefore, the parse accuracy obtained by the most probable trees (96%) is dramatically higher than the parse accuracy obtained by the most probable derivations (65%). It is still an open question whether the most probable tree of a string can be deterministically computed in polynomial time.", |
|
"pdf_parse": { |
|
"paper_id": "E95-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We deal with the question as to whether there exists a polynomial time algorithm for computing the most probable parse tree of a sentence generated by a data-oriented parsing (DOP) model. (Scha, 1990; Bod, 1992, 1993a). Therefore we describe DOP as a stochastic tree-substitution grammar (STSG). In STSG, a tree can be generated by exponentially many derivations involving different elementary trees. The probability of a tree is equal to the sum of the probabilities of all its derivations. We show that in STSG, in contrast with stochastic context-free grammar, the Viterbi algorithm cannot be used for computing a most probable tree of a string. We propose a simple modification of Viterbi which allows by means of a \"select-random\" search to estimate the most probable tree of a string in polynomial time. Experiments with DOP on ATIS show that only in 68% of the cases, the most probable derivation of a string generates the most probable tree of that string. Therefore, the parse accuracy obtained by the most probable trees (96%) is dramatically higher than the parse accuracy obtained by the most probable derivations (65%). It is still an open question whether the most probable tree of a string can be deterministically computed in polynomial time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A Data-Oriented Parsing model (Scha, 1990; Bod, 1992 Bod, , 1993a ) is characterized by a corpus of analyzed language utterances, together with a set of operations that combine sub-analyses from the corpus into new analyses. We will limit ourselves in this paper to corpora with purely syntactic annotations. For the semantic dimension of DOP, the reader is referred to (van den Berg et al., 1994) . Consider the imaginary example corpus consisting of only two trees in figure 1. We will assume one operation for combining subtrees. This operation is called \"composition\", and is indicated by the infix operator o. The composition of t and u, tou, yields a copy of t in which its leftmost nonterminal leaf node has been identified with the roof node of u (i.e., u is substituted on the leftmost nonterminal leaf node of t). For reasons of simplicity we will write in the following (tou) As the reader may easily ascertain, a different derivation may yield a different parse tree. However, a different derivation may also very well yield the same parse tree; for instance: Thus, a parse tree can have several derivations involving different subtrees. Using the corpus for our stochastic estimations, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substituted on that node. 1 The probability of a derivation can be computed as the product of the probabilities of the substitutions that it involves. The probability of a parse tree is equal to the probability that any of its derivations occurs, which is the sum of the probabilities of all derivations of that parse tree. Finally, the probability of a word string is equal to the sum of the probabilities of all its parse trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 42, |
|
"text": "(Scha, 1990;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 43, |
|
"end": 52, |
|
"text": "Bod, 1992", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 53, |
|
"end": 65, |
|
"text": "Bod, , 1993a", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 397, |
|
"text": "(van den Berg et al., 1994)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 886, |
|
"text": "(tou)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data-Oriented Parsing", |
|
"sec_num": "1" |
|
}, |
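A minimal sketch (not from the paper; it assumes a toy encoding in which a subtree is identified by an opaque key whose first component is its root label) of the probability model described above: a subtree's substitution probability is its relative frequency among all corpus subtrees with the same root, a derivation's probability is the product of the substitution probabilities it involves, and a parse tree's probability is the sum over all of its derivations.

```python
from collections import Counter

# Toy encoding (an assumption, not the paper's implementation): a subtree is an
# opaque key whose first component is its root label; counts come from the corpus.
corpus_subtree_counts = Counter({
    ("NP", "the dress"): 2,
    ("NP", "the table"): 1,
    ("PP", "on the table"): 1,
})

def substitution_prob(subtree, counts):
    """Probability of selecting `subtree` among all corpus subtrees with the same root."""
    root = subtree[0]
    same_root_total = sum(c for (r, *_), c in counts.items() if r == root)
    return counts[subtree] / same_root_total

def derivation_prob(derivation, counts):
    """Probability of a derivation: product of the substitution probabilities it involves."""
    p = 1.0
    for subtree in derivation:
        p *= substitution_prob(subtree, counts)
    return p

def parse_prob(derivations_of_parse, counts):
    """Probability of a parse tree: sum of the probabilities of all its derivations."""
    return sum(derivation_prob(d, counts) for d in derivations_of_parse)

print(substitution_prob(("NP", "the dress"), corpus_subtree_counts))  # 2/3
```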
|
{ |
|
"text": "In order to deal with the problem of computing the most probable parse tree of a string, it is convenient to describe DOP as a \"Stochastic Tree-Substitution Grammar\" (STSG). STSG can be seen as a generalization over DOP, where the elementary tree S of STSG are the subtrees of DOP, and the probabilities of the elementary trees are the 1Very small frequencies are smoothed by Good-Turing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DOP as a Stochastic Tree-Substitution Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "substitution-probabilities of the corresponding subtrees ofDOP (Bod, 1993c) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 75, |
|
"text": "(Bod, 1993c)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DOP as a Stochastic Tree-Substitution Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A Stochastic Tree-Substitution Grammar G is a fivetuple < VN, VT-, S, R, P> where Vu is a finite set of nonterminal symbols. Vr is a finite set of terminal symbols. S ~ VN is the distinguished symbol. R is a finite set of elementary trees whose top nodes and interior nodes are labeled by nonterminal symbols and whose yield nodes are labeled by terminal or nonterminal symbols. P is a function which assigns to every elementary tree t ~ R a probability p(t). For a tree t with a root a, p(t) is interpreted as the probability of substituting t on a. We require, therefore, that 0 < p(t) <-1 and ~-~t:root(t)=Ct p(t) = 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DOP as a Stochastic Tree-Substitution Grammar", |
|
"sec_num": "2" |
|
}, |
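A small illustrative check (a hypothetical helper, not part of the paper) of the well-formedness condition just stated: every elementary-tree probability lies in (0, 1], and for each root label the probabilities of the elementary trees with that root sum to 1.

```python
import math
from collections import defaultdict

def check_stsg_probabilities(elementary_trees):
    """elementary_trees: iterable of (root_label, tree_id, probability).
    Checks 0 < p(t) <= 1 and that probabilities sum to 1 per root label."""
    totals = defaultdict(float)
    for root, _tree_id, p in elementary_trees:
        assert 0.0 < p <= 1.0, f"invalid probability {p}"
        totals[root] += p
    for root, total in totals.items():
        assert math.isclose(total, 1.0, abs_tol=1e-9), f"probabilities for root {root} sum to {total}"

# Two S-rooted and two NP-rooted elementary trees, each group summing to 1.
check_stsg_probabilities([("S", "t1", 0.7), ("S", "t2", 0.3),
                          ("NP", "t3", 0.5), ("NP", "t4", 0.5)])
```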
|
{ |
|
"text": "If tl and t2 are trees such that the leftmost nonterminal yield node of tl is equal to the root of t2, then tlot 2 is the tree that results from substituting t 2 for this leftmost nonterminal yield node in tl. The partial function o is called leftmost substitution. For reasons of conciseness we will use the term substitution for leftmost substitution. A leftmost derivation generated by an STSG G is a tuple of trees <t 1 ..... tn> such that t I ..... t n are elements of R, the root of t I is labeled by S and the yield of tl .... otn is labeled by terminal symbols. The set of leftmost derivations generated by G is thus given by Derivations(G) = { <t I ..... tn> I tl ..... tn ~ R ^ root(t1) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 656, |
|
"end": 696, |
|
"text": "I ..... tn> I tl ..... tn ~ R ^ root(t1)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DOP as a Stochastic Tree-Substitution Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A parse tree generated by an STSG G is a tree T such that there is a derivation <tl ..... tn> Derivations(G) for which tl ..... tn = T. The set of parse trees, or tree language, generated by G is given byParses(G) = {TI3 <t I ..... tn> ~ Derivations(G): tl ..... tn = T}. For reasons of conciseness we will often use the terms parse or tree for a parse tree. A parse whose yield is equal to string s, is called a parse of s. The probability of a parse is defined as the sum of the probabilities of all its derivations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ". \u2022 p(tn).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A string generated by an STSG G is an element of Vr + such that there is a parse generated by G whose yield is equal to the string. The set of strings, or string language, generated by G is given by Strings(G) = {sl 3 T: T~ Parses(G) ^ s = yield(T)}. The probability of a string is defined as the sum of the probabilities of all its parses. This means that the probability of a string is also equal to the sum of the probabilities of all its derivations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ". \u2022 p(tn).", |
|
"sec_num": null |
|
}, |
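To make the composition operation concrete, the following is a small sketch of leftmost substitution and of composing a derivation <t1, ..., tn> into its parse tree. The nested-tuple tree encoding and the uppercase-label convention for nonterminal substitution sites are assumptions made only for this illustration.

```python
# Trees as nested tuples: (label, children...); a leaf is (label,).
# A leaf labelled by a nonterminal (uppercase, by convention here) is a substitution site.

def is_open_nonterminal(node):
    label, *children = node
    return not children and label.isupper()

def leftmost_substitute(t, u):
    """Return a copy of t in which the leftmost nonterminal leaf is replaced by u.
    Returns (new_tree, done_flag); done_flag is True once the substitution happened."""
    if is_open_nonterminal(t):
        assert t[0] == u[0], "root of u must match the substitution site"
        return u, True
    label, *children = t
    new_children, done = [], False
    for child in children:
        if not done:
            child, done = leftmost_substitute(child, u)
        new_children.append(child)
    return (label, *new_children), done

def compose(derivation):
    """t1 o t2 o ... o tn, applying leftmost substitution left-associatively."""
    tree = derivation[0]
    for u in derivation[1:]:
        tree, done = leftmost_substitute(tree, u)
        assert done, "no open substitution site left"
    return tree

# Example: S -> A B with A and B open, then fill A with 'a' and B with 'b'.
t1 = ("S", ("A",), ("B",))
t2 = ("A", ("a",))
t3 = ("B", ("b",))
print(compose([t1, t2, t3]))   # ('S', ('A', ('a',)), ('B', ('b',)))
```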
|
{ |
|
"text": "For the input string abcd, the following derivation forest is then obtained:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to deal with the problem of computing the most probable parse tree of a sentence, we will distinguish between parsing and disambiguation. By parsing we mean the creation of a parse forest for an input sentence. By disambiguation we mean the selection of the most probable parse 2 from the forest. The creation of a parse forest is an intermediate step for computing the most probable parse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing a most probable parse tree in STSG", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From the way STSG combines elementary trees by means of substitution, it follows that an input sentence can be parsed by the same algorithms as (S)CFGs. Every elementary tree t is used as a context-free rewrite rule root(t) --~ yield(t). Given a chart parsing algorithm, an input sentence of length n can be parsed in n 3 time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing", |
|
"sec_num": "3.1" |
|
}, |
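A brief sketch (with hypothetical helper names, not the paper's code) of the reduction just described: each elementary tree t becomes the derived context-free rewrite rule root(t) → yield(t), so a standard chart parser can be used, and the full elementary tree is kept as the rule's label.

```python
def frontier(tree):
    """Left-to-right sequence of leaf labels (the yield) of a nested-tuple tree."""
    label, *children = tree
    if not children:
        return [label]
    leaves = []
    for child in children:
        leaves.extend(frontier(child))
    return leaves

def derived_cfg_rules(elementary_trees):
    """For every elementary tree t, produce the rewrite rule root(t) -> yield(t),
    keeping t itself as the rule's label so the forest records which tree was used."""
    return [(t[0], tuple(frontier(t)), t) for t in elementary_trees]

# An elementary tree S(A, B(b)) gives the derived rule S -> A b.
print(derived_cfg_rules([("S", ("A",), ("B", ("b",)))])[0][:2])   # ('S', ('A', 'b'))
```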
|
{ |
|
"text": "In order to obtain a chart-like forest for a sentence parsed in STSG, we need to label the wellformed substrings in the chart not only with the syntactic categories of that substring but with the full elementary trees t that correspond to the use of the derived rules root(t) ---~yield(t). Note that in a chartlike forest generated by an STSG, different derivations that generate a same tree do not collapse. We will therefore talk about a derivation forest generated by an STSG (cf. Sima'an et al., 1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 505, |
|
"text": "STSG (cf. Sima'an et al., 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The following formal example illustrates what a derivation forest of a string may look like. In the example, we leave out the probabilities, which are needed only in the disambiguation process. The visual representation comes from (Kay, 1980) : every entry (i,j) in the chart is indicated by an edge and spans the words between the i-th and the j-th position of a sentence. Every edge is labeled with the elementary trees that denote the underlying phrase. The example-STSG consists of the following elementary trees: 2 Although theoretically there can be more than one most probable parse for a sentence, in practice a system that employs a non-trivial treebank tends to generate exactly one most probable parse for a given input sentence. Note that different derivations in the forest generate the same tree. By exhaustively unpacking the forest, four different derivations generating two different trees are obtained. We may ask whether we can pack the forest by collapsing spurious derivations. Unfortunately, no efficient procedure is known that accomplishes this (remember that there can be exponentially many derivations for one tree).", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 242, |
|
"text": "(Kay, 1980)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "S /Xc AB /Xc S S A A A d c a b B B C AAI a b d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Cubic time parsing does not guarantee cubic time disambiguation, as a sentence may have exponentially many parses and any such parse may have exponentially many derivations. Therefore, in order to find the most probable parse of a sentence, it is not efficient to compare the probabilities of the parses by exhaustively unpacking the chart. Even for determining the probability of one parse, it is not efficient to add the probabilities of all derivations of that parse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "There exists a heuristic optimization algorithm, known as Viterbi optimization, which selects on the basis of an SCFG the most probable derivation of a sentence in cubic time (Viterbi, 1967; Fujisaki et al., 1989; Jelinek et al., 1990) . In STSG, however, the most probable derivation does not necessarily generate the most probable parse, as the probability of a parse is defined as the sum of the probabilities of all its derivations. Thus, there is an important question as to whether we can adapt the Viterbi algorithm for finding the most probable parse.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 190, |
|
"text": "(Viterbi, 1967;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 213, |
|
"text": "Fujisaki et al., 1989;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 235, |
|
"text": "Jelinek et al., 1990)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi optimization is not feasible for finding the most probable parse", |
|
"sec_num": "3.2.1" |
|
}, |
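A toy numeric illustration (the numbers are invented for this sketch, not taken from the paper) of why the most probable derivation need not yield the most probable parse: summing derivation probabilities per tree can reverse the ranking obtained by looking at single derivations.

```python
# Invented toy numbers: tree T1 has two derivations, tree T2 has one.
derivation_probs = {
    "T1": [0.20, 0.25],   # parse probability 0.45
    "T2": [0.30],         # parse probability 0.30
}

# Viterbi-style select-best ranks trees by their single best derivation ...
mpd_tree = max(derivation_probs, key=lambda t: max(derivation_probs[t]))
# ... whereas the most probable parse sums over all derivations of each tree.
mpp_tree = max(derivation_probs, key=lambda t: sum(derivation_probs[t]))

print(mpd_tree, mpp_tree)   # T2 T1: the two rankings differ
```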
|
{ |
|
"text": "To understand the difficulty of the problem, we look in more detail at the Viterbi algorithm. The basic idea of the Viterbi algorithm is the early pruning of low probability subderivations in a bottom-up fashion. Two different subderivations of the same part of the sentence and whose resulting subparses have the same root can both be developed (if at all) to derivations of the whole sentence in the same ways. Therefore, if one of these two subderivations has a lower probability, then it can be eliminated. This is illustrated by a formal example in figure 7. Suppose that during bottom-up parsing of the string abcd the following two subderivations dl and d2 have been generated for the substring abc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi optimization is not feasible for finding the most probable parse", |
|
"sec_num": "3.2.1" |
|
}, |
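A minimal sketch of the pruning step just described, as it would be used when searching for the most probable derivation (the data layout is assumed for illustration): among subderivations of the same chart entry that share a root label, only the most probable one is kept. As the d1/d2 example below shows, this elimination is exactly what is not allowed when the most probable parse is wanted.

```python
def viterbi_prune(cell):
    """cell: subderivations for one chart entry, each a dict with 'root' and 'prob'.
    Keep only the most probable subderivation per root label (select-best)."""
    best = {}
    for sub in cell:
        root = sub["root"]
        if root not in best or sub["prob"] > best[root]["prob"]:
            best[root] = sub
    return list(best.values())

cell = [{"id": "d1", "root": "A", "prob": 0.06},
        {"id": "d2", "root": "A", "prob": 0.04}]
print(viterbi_prune(cell))   # only d1 survives; d2 is pruned
```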
|
{ |
|
"text": "(Actually represented are their resulting subparses.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi optimization is not feasible for finding the most probable parse", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "A A If the probability of dl is higher than the probability of d2, we can eliminate d2 if we are only interested in finding the most probable derivation of abcd. But if we are interested in finding the most probable parse of abcd (generated by STSG), we are not allowed to eliminate d2. This can be seen by the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Viterbi optimization is not feasible for finding the most probable parse", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Suppose that we have the additional elementary tree given in figure 8. This elementary tree may be developed to the same tree that can be developed by d2, but not to the tree that can be developed by dl. And since the probability of a parse tree is equal to the sum of the probabilities of all its derivations, it is still possible that d 2 contributes to the generation of the most probable parse. Therefore we are not allowed to eliminate d2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A\\, abe", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This counter-example does not prove that there is no heuristic optimization that allows polynomial time selection of the most probable parse. But it makes clear that a \"select-best\" search, as accomplished by Viterbi, is not adequate for finding the most probable parse in STSG. So far, it is unknown whether the problem of finding the most probable parse in a deterministic way is inherently exponential or not (cf. Sima'an et al., 1994) . One should of course ask how often in practice the most probable derivation produces the most probable parse, but this can only be answered by means of experiments on real life corpora. Experiments on the ATIS corpus (see session 4) show that only in 68% of the cases the most probable derivation of a sentence generates the most probable parse of that sentence. Moreover, the parse accuracy obtained by the most probable parse is dramatically higher than the parse accuracy obtained by the parse generated by the most probable derivation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 438, |
|
"text": "(cf. Sima'an et al., 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A\\, abe", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will leave it as an open question whether the most probable parse can be deterministically derived in polynomial time. Here we will ask whether there exists a polynomial time approximation procedure that estimates the most probable parse with an estimation error that can be made arbitrarily small. We have seen that a \"select-best\" search, as accomplished by Viterbi, can be used for finding the most probable derivation but not for finding the most probable parse. If we apply instead of a select-best search, a \"select-random\" search, we can generate a random derivation. By iteratively generating a large number of random derivations we can estimate the most probable parse as the parse which results most often from these random derivations (since the probability of a parse is the probability that any of its derivations occurs). The most probable parse can be estimated as accurately as desired by making the number of random samples as large as desired.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "According to the Law of Large Numbers, the most often generated parse converges to the most probable parse. Methods that estimate the probability of an event by taking random samples are known as Monte Carlo methods (Hammersley & Handscomb, 1964) . 3 The selection of a random derivation is accomplished in a bottom-up fashion analogous to Viterbi. Instead of selecting the most probable subderivation at each node-sharing in the chart, a random subderivation is selected (i.e. sampled) at each node-sharing (that is, a subderivation that has n times as large a probability as another subderivation should also have n times as large a chance to be chosen as this other subderivation). Once sampled at the S-node, the random derivation of the whole sentence can be retrieved by tracing back the choices made at each node-sharing. Of course, we may postpone sampling until the S-node, such that we sample directly from the distribution of all S-derivations. But this would take exponential time, since there may be exponentially many derivations for the whole sentence. By sampling bottom-up at every node where ambiguity appears, the maximum number of different subderivations at each node-sharing is bounded to a constant (the total number of rules of that node), and therefore the time complexity of generating a random derivation of an input sentence is equal to the time complexity of finding the most probable derivation, O(n3). This is exemplified by the following algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 246, |
|
"text": "(Hammersley & Handscomb, 1964)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
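The select-random step can be sketched as weighted sampling (a hypothetical helper, not the paper's code): a subderivation with n times the probability of another gets n times the chance of being chosen.

```python
import random

def select_random_subderivation(subderivations, probs, rng=random):
    """Choose one subderivation with chance proportional to its probability
    (the select-random counterpart of Viterbi's select-best)."""
    return rng.choices(subderivations, weights=probs, k=1)[0]

# "d1" is chosen roughly 60% of the time, "d2" roughly 40%.
print(select_random_subderivation(["d1", "d2"], [0.06, 0.04]))
```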
|
{ |
|
"text": "3 Note that Monte Carlo estimation of the most probable parse is more reliable than the estimation of the most probable parse by generating the n most probable derivations by Viterbi, since it might be that the most probable parse is exclusively generated by many low probable derivations. The Monte Carlo method is guaranteed to converge to the most probable parse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Sampling a random 0\u00a2riva~ion from a derivation forest Given a derivation forest, of a sentence of n words, consisting of labeled entries (i,j) that span the words between the i-th and the j-th position of the sentence. Every entry is labeled with linked elementary trees, together with their probabilities, that constitute subderivations of the underlying subsentence. Sampling a derivation from the chart consists of choosing at every labeled entry (bottom-up, breadthfu'st) a random subderivation of each root-node: fork := 1 tondo fori := 0 to n-k do for chart-entry (i,i+k) do for each root-node X do select 4 a random subderivation of root X eliminate the other subderivations", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "We now have an algorithm that selects a random derivation from a derivation forest. Converting this derivation into a parse tree gives a first estimation for the most probable parse. Since one random sample is not a reliable estimate, we sample a large number of random derivations and see which parse is generated most frequently. This is exemplified by the following algorithm. (Note that we might also estimate the most probable derivation by random sampling, namely by counting which derivation is sampled most often; however, the most probable derivation can be more effectively generated by Viterbi.) Eslimating the most probable parse (MPP) Given a derivation forest for an input sentence:", |
|
"cite_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 606, |
|
"text": "Viterbi.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "repeat until the MPP converges sample a random derivation from the forest store the parse generated by the random derivation MPP := the most frequently occurring parse", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
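A compact sketch of the estimation loop above (the helper names and the fixed sample bound are assumptions; the paper samples until convergence and in §4 limits sampling to a pre-determined bound of 400 samples): draw random derivations, convert each to its parse, and return the parse seen most often.

```python
import random
from collections import Counter

def estimate_mpp(sample_derivation, to_parse, n_samples=400, rng=None):
    """Monte Carlo estimate of the most probable parse (MPP).
    `sample_derivation(rng)` draws one random derivation from the derivation
    forest (as in the sampling algorithm above); `to_parse` maps a derivation
    to its parse tree. Only the counting step is shown here."""
    rng = rng or random.Random()
    counts = Counter()
    for _ in range(n_samples):
        counts[to_parse(sample_derivation(rng))] += 1
    parse, _freq = counts.most_common(1)[0]
    return parse

# Toy stand-ins: derivations d1, d2 yield parse P1; derivation d3 yields parse P2.
forest = {"d1": "P1", "d2": "P1", "d3": "P2"}
sample = lambda rng: rng.choices(list(forest), weights=[0.20, 0.25, 0.30], k=1)[0]
print(estimate_mpp(sample, forest.__getitem__))   # almost always 'P1'
```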
|
{ |
|
"text": "There is an important question as to how long the convergence of the most probable parse may take. Is there a tractable upper bound on the number of derivations that have to be sampled from the forest before stability in the top of the parse distribution occurs? The answer is yes: the worst case time complexity of achieving a maximum estimation error e by means of random sampling is O(e-2),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "independently of the probability distribution. This is a classical result from sampling theory (cf. Hammersley and Handscomb, 1964) , and follows directly from Chebyshev's inequality. In practice, it means that the 4 Let { (e 1, Pl), (e2, P2) ..... (en, Pn) } be a probability distribution of events el, e2, ..., en; an event e i is said to be randomly selected iff its probability of being selected is equal to Pi. In order to allow for \"direct sampling\", one must convert the probability distribution into a corresponding sample space for which holds that the frequency of occurrence 3] of each event e i is a positive integer equal to Npi, where N is the size of the sample space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 131, |
|
"text": "Hammersley and Handscomb, 1964)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "error e is inversely proportional to the square-root of the number of random samples N and therefore, to reduce e by a factor of k, the number of samples N needs to be increased k2-fold. In practical experiments (see \u00a74), we will limit the number of samples to a pre-determined, sufficiently large bound N. What is the theoretical worst case time complexity of parsing and disambiguation together? That is, given an STSG and an input sentence, what is the maximal time cost of finding the most probable parse of a sentence? If we use a CKY-parser, the creation of a derivation forest for a sentence of n words takes O(n 3) time. Taking also into account the size G of an STSG (defined as the sum of the lengths of the yields of all its elementary trees), the time complexity of creating a derivation forest is proportional to Gn 3. The time complexity of disambiguation is both proportional to the cost of sampling a derivation, i.e. Gn 3, and to the cost of the convergence by means of iteration, which is e -2. Tiffs means that the time complexity of disambiguation is given by O(Gn3e-2). The total time complexity of parsing and disambiguation is equal to O(Gn 3) + O(Gn3e -2) = O(Gn3e'2). Thus, there exists a tractable procedure that estimates the most probable parse of an input sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Notice that although the Monte Carlo disambiguation algorithm estimates the most probable parse of a sentence in polynomial time, it is not in the class of polynomial time decidable algorithms. The Monte Carlo algorithm cannot decide in polynomial time what is the most probable parse; it can only make the error-probability of the estimated most probable parse arbitrarily small. As such, the Monte Carlo algorithm is a probabilistic algorithm belonging to the class of Bounded error Probabilistic Polynomial time (BPP) algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "We hypothesize that Monte Carlo disambiguation is also relevant for other stochastic grammars. It turns out that all stochastic extensions of CFGs that are stochastically richer than SCFG need exponential time algorithms for finding a most probable parse tree (cf. Briscoe & Carroll, 1992; Black et al., 1993; Magerman & Weir, 1992; Schabes & Waters, 1993) . To our knowledge, it has never been studied whether there exist BPP-algorithms for these models. Alhough it is beyond the scope of our research, we conjecture that there exists a Monte Carlo disambiguation algorithm for at least Stochastic Tree-Adjoining Grammar (Schabes, 1992) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "Briscoe & Carroll, 1992;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 309, |
|
"text": "Black et al., 1993;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 332, |
|
"text": "Magerman & Weir, 1992;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 356, |
|
"text": "Schabes & Waters, 1993)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 637, |
|
"text": "(Schabes, 1992)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimating the most probable parse by Monte Carlo search", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Psychological relevance of Monte Carlo disambiguation As has been noted, an important difference between the Viterbi algorithm and the Monte Carlo algorithm is, that with the latter we never have 100% confidence. In our opinion, this should not be seen as a disadvantage. In fact, absolute confidence about the most probable parse does not have any significance, as the probability assigned to a p~se is already an estimation of its actual probability. One may ask as to whether Monte Carlo is appropriate for modeling human sentence perception. The following lists some properties of Monte Carlo disambiguation that may be of psychological interest: 1. As mentioned above, Monte Carlo never provides 100% confidence about the best analysis. This corresponds to the psychological observation that people never have absolute confidence about their interpretation of an ambiguous sentence. 2. Although conceptually Monte Carlo uses the total space of possible analyses, it tends to sample only the most likely ones. Very unlikely analyses may only be sampled after considerable time, but it is not guaranteed that all analyses are found in finite time. This matches with experiments on human sentence perception where very implausible analyses are only perceived with great difficulty and after considerable time. 3. Monte Carlo does not necessarily give the same results for different sequences of samples, especially if different analyses in the top of the distribution are almost equally likely. In the case there is more than one most probable analysis, Monte Carlo does not converge to one analysis but keeps alternating, however large the number of samples is made. In experiments with human sentence perception, it has often been shown that different analyses can be perceived for one sentence. And in case these analyses are equally plausible, people perceive so-called fluctuation effects. This fluctuation phenomenon is also well-known in the perception of ambiguous visual patterns. 4. Monte Carlo can be made parallel in a very straightforward way: N samples can be computed by N processing units, where equal outputs are reinforced. The more processing units are employed, the better the estimation. However, since the number of processing units is finite, there is never absolute confidence. This has some similarity with the Parallel Distributed Processing paradigm for haman (language) processing (Rumelhart & McClelland, 1986) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 2411, |
|
"end": 2441, |
|
"text": "(Rumelhart & McClelland, 1986)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3.2.3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we report on experiments with an implementation of DOP that parses and disambiguates part-of-speech strings. In (Bod, 1995) it is shown how DOP is extended to parse word strings that possibly contain unknown words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 140, |
|
"text": "(Bod, 1995)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For our experiments, we used a manually corrected version of the Air Travel Information System (ATIS) spoken language corpus (Hemphill et al., 1990) annotated in the Pennsylvania Treebank (Marcus et al., 1993) . We employed the \"blind testing\" method, dividing the corpus into a 90% training set and a 10% test set by randomly selecting sentences. The 675 trees from the training set were converted into their subtrees together with their relative frequencies, yielding roughly 4\"105 different subtrees. The 75 part-of-speech sequences from the test set served as input strings that were parsed and disambiguated using the subtrees from the training set. As motivated in (Bed, 1993b), we use the notion of parse accuracy as our accuracy metric, defined as the percentage of the test strings for which the most probable parse is identical to the parse in the test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 148, |
|
"text": "(Hemphill et al., 1990)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 209, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The test environment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "It is one of the most essential features of DOP, that arbitrarily large subtrees are taken into consideration to estimate the probability of a parse. In order to test the usefulness of this feature, we performed different experiments constraining the depth of the subtrees. The following table shows the results of seven experiments for different maximum depths of the training set subtrees. The accuracy refers to the parse accuracy at 400 randomly sampled parses, and is rounded off to the nearest integer. The CPU time refers to the average CPU time per string employed by a Spark II. The table shows a dramatic increase in parse accuracy when enlarging the maximum depth of the subtrees from 1 to 2. (Remember that for depth one, DOP is equivalent to a stochastic context-free grammar.) The accuracy keeps increasing, at a slower rate, when the depth is enlarged further. The highest accuracy is obtained by using all subtrees from the training set: 72 out of the 75 sentences from the test set are parsed correctly. Thus, the accuracy increases if larger subtrees are used, though the CPU time increases considerably as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accuracy as a function of subtree-depth", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Another important feature of DOP is that the probability of a resulting parse tree is computed as the sum of the probabilities of all its derivations. Although the most probable parse of a sentence is not necessarily generated by the most probable derivation of that sentence, there is a question as to how often these two coincide. In order to study this, we also calculated the derivation accuracy, defined as the percentage of the test strings for which the parse generated by the most probable derviation is identical to the parse in the test set. The following table shows the derivation accuracy against the parse accuracy for the 75 test set strings from the ATIS corpus, using different maximum depths for the corpus subtrees. Table 2 . Derivation accuracy vs. parse accuracy The table shows that the derivation accuracy is equal to the parse accuracy if the depth of the subtrees is constrained to 1. This is not surprising, as for depth 1, DOP is equivalent with SCFG where every parse is generated by exactly one derivation. What is remarkable, is, that the derivation accuracy decreases if the depth of the subtrees is enlarged to 2. If the depth is enlarged further, the derivation accuracy increases again. The highest derivation accuracy is obtained by using all subtrees from the corpus (65%), but remains far behind the highest parse accuracy (96%). From this table we conclude that if we.are interested in the most probable analysis of a string we must not look at the probability of the process of achieving that analysis but at the probability of the result of that process. Table 3 . Parse accuracy after eliminating once-occurring subtrees", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 735, |
|
"end": 742, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1595, |
|
"end": 1602, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does the most probable derivation generate the most probable parse?", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have shown that in DOP and STSG the Viterbi algorithm cannot be used for computing a most probable tree of a string. We developed a modification of Viterbi which allows by means of an iterative", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Monte Carlo search to estimate the most probable tree of a string in polynomial time. Experiments on ATIS showed that only in 68% of the cases, the most probable derivation of a string generates the most probable tree of that string, and that the parse accuracy is dramatically higher than the derivation accuracy. We conjectured that the Monte Carlo algorithm can also be applied to other stochastic grammars for computing the most probable tree of a string. The question as to whether the most probable tree of a string can also be deterministically derived in polynomial time is still unsolved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author is indebted to Remko Scha for valuable comments on an earlier version of this paper, and to Khalil Sima'an for useful discussions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There is an important question as to whether we can reduce the \"grammar constant\" of DOP by eliminating very infrequent subtrees, without affecting the parse accuracy. In order to study this question, we start with a test result. Consider the test set sentence \"Arrange the flight code of the flight from Denver to Dallas Worth in descending order\", which has the following parse in the test set: .))In this parse, we see that the prepositional phrase \"in descending order\" is incorrectly attached to the NP \"the flight\" instead of to the verb \"arrange\". This wrong attachment may be explained by the high relative frequencies of the following subtrees of depth 2 (that appear in structures of sentences like \"Show me the transportation from SFO to downtown San Francisco in August\", where the PP \"in August\" is attached to the NP \"the transportation\", and not to the verb \"show\"):Only if the maximum depth was enlarged to 4, subtrees like the following were available, which led to the estimation of the correct tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The significance of once-occurring subtrees", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "It is interesting to note that this subtree occurs only once in the training set. Nevertheless, it induces the correct parsing of the test string. This seems to contradict the fact that probabilities based on sparse data are not reliable. Since many large subtrees are once-occumng events (hapaxes), there seems to be a preference in DOP for an occurrence-based approach if enough context is provided: large subtrees, even if they occur once, tend to contribute to the generation of the correct parse, since they provide much contextual information. Although these subtrees have low probabilities, they tend to induce the correct parse because fewer subtrees are needed to construct a parse. Additional experiments seemed to confirm this hypothesis. Throwing away all hapaxes yielded an accuracy of 92%, which is a decrease of 4%. Distinguishing between small and large hapaxes, showed that the accuracy was not affected by eliminating the hapaxes of depth 1 (however, as an advantage, the convergence seemed to get slightly faster). Eliminating hapaxes larger than depth 1, decreased the accuracy. The following table shows the parse accuracy after eliminating once-occurring subtrees of different maximum depths.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VP VB NP NP PP PP IN NP VP VBG NN", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A Corpus-Based Approach to Semantic Interpretation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "& R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Scha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings Ninth Amsterdam Colloquium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. van den Berg, R. Bod & R. Scha, 1994. \"A Corpus-Based Approach to Semantic Interpretation\", Proceedings Ninth Amsterdam Colloquium, Amsterdam. E. Black, R. Garside and G. Leech, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The IBM/Lancaster Approach", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Statistically-Driven Computer Grammars o/English: The IBM/Lancaster Approach, Rodopi: Amsterdam- Atlanta.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Computational Model of Language Performance: Data Oriented Parsing", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings COLING'92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bod, 1992. \"A Computational Model of Language Performance: Data Oriented Parsing\", Proceedings COLING'92, Nantes.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Using an Annotated Corpus as a", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bod, 1993a. \"Using an Annotated Corpus as a", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Proceedings European Chapter fo the ACL'93", |
|
"authors": [ |
|
{ |
|
"first": "Stochastic", |
|
"middle": [], |
|
"last": "Grammar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stochastic Grammar\", Proceedings European Chapter fo the ACL'93, Utrecht.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Monte Carlo Parsing", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings Third International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bod, 1993b. \"Monte Carlo Parsing\", Proceedings Third International Workshop on Parsing Technologies, Tilburg/Durbuy.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Data Oriented Parsing as a General Framework for Stochastic Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Parsing Natural Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bod, 1993c. \"Data Oriented Parsing as a General Framework for Stochastic Language Processing\", in: K.Sikkel & A. Nijholt (eds.), Parsing Natural Language, TWLT6, Twente University. R. Bod, 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Generalized Probabilistic LR Parsing of Natural Language (Corpora) with Unification-Based Grammars", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "25--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Briscoe and J. Carroll, 1993. \"Generalized Probabilistic LR Parsing of Natural Language (Corpora) with Unification-Based Grammars\", Computational Linguistics 19(1), 25-59.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A Probabilistic Method for Sentence Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Fujisaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Cocke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nishino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings 1st Int. Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Fujisaki, F. Jelinek, J. Cocke, E. Black and T. Nishino, 1989. \"A Probabilistic Method for Sentence Disambiguation\", Proceedings 1st Int. Workshop on Parsing Technologies, Pittsburgh.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The ATIS spoken language systems pilot corpus", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Hemphill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Doddington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings DARPA Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.T. Hemphill, J.J. Godfrey and G.R. Doddington, 1990. \"The ATIS spoken language systems pilot corpus\". Proceedings DARPA Speech and Natural Language Workshop, Hidden Valley, Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Basic Methods of Probabilistic Context Free Grammars", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Jelinek, J.D. Lafferty and R.L. Mercer, 1990. Basic Methods of Probabilistic Context Free Grammars, Technical Report IBM RC 16374", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Algorithmic Schemata and Data Structures in Syntactic Processing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Proceedings A CL'92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Kay, 1980. Algorithmic Schemata and Data Structures in Syntactic Processing. Report CSL-80- 12, Xerox PARC, Palo Alto, Ca. D. Magerman and C. Weir, 1992. \"Efficiency, Robustness and Accuracy in Picky Chart Parsing\", Proceedings A CL'92, Newark, Delaware.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Building a Large Annotated Corpus of English: the Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Marcus, B. Santorini and M. Marcinkiewicz, 1993. \"Building a Large Annotated Corpus of English: the Penn Treebank\", Computational Linguistics 19(2).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Language Theory and Language Technology; Competence and Performance", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Computertoepassingen in de Neerlandistiek", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Rumelhart and J. McClelland, 1986. Parallel Distributed Processing, The MIT Press, Cambridge, Mass. R. Scha, 1990. \"Language Theory and Language Technology; Competence and Performance\" (in Dutch), in Q.A.M. de Kort & G.L.J. Leerdam (eds.), Computertoepassingen in de Neerlandistiek, Almere: Landelijke Vereniging van Neerlandici (LVVN- jaarboek).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Stochastic Lexicalized Tree-Adjoining Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings Third International Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Schabes, 1992. \"Stochastic Lexicalized Tree- Adjoining Grammars\", Proceedings COLING'92, Nantes. Y. Schabes and R, Waters, 1993. \"Stochastic Lexicalized Context Free Grammars\", Proceedings Third International Workshop on Parsing Technologies, Tilburg/Durbuy.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Efficient Disambiguation by means of Stochastic Tree Substitution Grammars", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Krauwer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Scha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings International Conference on New Methods in Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Sima'an, R. Bod, S. Krauwer and R. Scha, 1994. \"Efficient Disambiguation by means of Stochastic Tree Substitution Grammars\", Proceedings International Conference on New Methods in Language Processing, UMIST, Manchester.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Viterbi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "IEEE Trans. Information Theory", |
|
"volume": "", |
|
"issue": "13", |
|
"pages": "260--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Viterbi, 1967. \"Error bounds for convolutional codes and an asymptotically optimum decoding algorithm\", IEEE Trans. Information Theory, IT-13, 260-269.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Example corpus of two trees. Now the (ambiguous) sentence \"She displayed the dress on the table\" can be parsed by combining subtrees from the corpus. For instance: Derivation and parse tree for \"She displayed the dress on the table\"" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Another derivation generating the same parse tree for \"She displayed the dress on the table\"" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Elementary trees of an example-STSG" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Derivation forest for abed" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Two subparses for the string abcd" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Figure 8. Elementary tree." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "= S A yield(tlo...otn) ~ Vr +}. For convenience we will use the term derivation for leftmost derivation. A derivation <tl ..... tn> is called a derivation of tree T, iff tlo...ot n = T. A derivation <tl ..... tn> is called a derivation of string s, iff yield(t1 .... \u00b0tn) = s. The probability of a derivation <t I ..... in> is defined as p(tl) \u2022 .." |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>depth of</td><td>parse</td><td>CPU time</td></tr><tr><td>subtrees</td><td>accuracy</td><td>(hours)</td></tr><tr><td>1</td><td>52 %</td><td>.04 h</td></tr><tr><td>_<2</td><td>87 %</td><td>.21 h</td></tr><tr><td>_<3</td><td>92 %</td><td>.72 h</td></tr><tr><td><4</td><td>93 %</td><td>1.6 h</td></tr><tr><td><5</td><td>93 %</td><td>1.9 h</td></tr><tr><td><6</td><td>95 %</td><td>2.2 h</td></tr><tr><td>unbounded</td><td>96 %</td><td>3.5 h</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Parse results on the ATIS corpus" |
|
} |
|
} |
|
} |
|
} |