|
{ |
|
"paper_id": "H91-1045", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:33:16.353372Z" |
|
}, |
|
"title": "Calculating the Probability of a Partial Parse of a Sentence", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Kupin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A standard problem in parsing algorithms is Lhe organizaLion o~ branched searches to deal with an~iguous sentences. We discuss shiftreduce parsing o\u00a3 stochastic context-Gee granlnmrs and show how Lo construct a probabilistic score for tanking compeLing parse hypotheses. The score we use is the likelihood Lhag the collection of subtrees can be completed into a l~ull parse tree by means of the steps the parser is constrained to [oUow.", |
|
"pdf_parse": { |
|
"paper_id": "H91-1045", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A standard problem in parsing algorithms is Lhe organizaLion o~ branched searches to deal with an~iguous sentences. We discuss shiftreduce parsing o\u00a3 stochastic context-Gee granlnmrs and show how Lo construct a probabilistic score for tanking compeLing parse hypotheses. The score we use is the likelihood Lhag the collection of subtrees can be completed into a l~ull parse tree by means of the steps the parser is constrained to [oUow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Stochastic context-free grammars have been suggested for a role in speech-recognition algorithms, e.g. [1, 4, 9] . In order to he fully effective as an adjunct to speech recognition, the power of the probability apparatus needs to be applied to the problem of controlling the branched search for parses of ambiguous input.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 106, |
|
"text": "[1,", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 109, |
|
"text": "4,", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 112, |
|
"text": "9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The method we suggest for doing this employs shift-reduce (LR) parsing of context-free grammars together with a probability based score for ranking competing parse hypotheses. Shiftreduce parsers can be made very efficient for unambiguous grammars (and unambiguous inputs) and Tomita [7] shows how much of this efficiency can be maintained in the face of ambiguity. This makes this class of parsers a good candidate for many speech problems. The structural simplicity of shift-reduce parsers makes the analysis of the interaction of the parser with the stochastic properties of the language particularly clean.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 287, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The score we calculate is the likelihood that the collection of subtrees constructed by the parser so far can he completed into a full parse tree by means of the steps that the parser is constrained to follow, taking into account all possibilities for the unscanned part of the input. This score is the same as that suggested by Wright [9] , who also studied shift-reduce parsers. We provide an exact method for calculating the desired quantity, while Wright's calculation requires several approximations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 339, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Why do we care about this particular quantity? As a first rough answer, note that when this quantity is zero, then the hypothesis should be abandoned; there is no possibility that the parse tree can he completed. Furthermore, the bigger this quantity is, the larger the mass of the probability space that can be explored by pursuing that particular hypothesis. For a more detailed answer, consider a breadth first search of candidate hypotheses. For each one we would like to know which is the correct one, given the grammar and the text segment we have observed: a,,...,at. We would like to calculate P (Hla,,... , a~) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 619, |
|
"text": "(Hla,,... , a~)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The denominator in the above expression P(a,,..., at) is the grand probability of seeing the observations al,..., at given the grammar. This is some fixed quantity. We might not know what it is, but as long as we are only comparing hypotheses that all explain the same string ah.--, at, this quantity is a scaling factor that can safely be ignored. The numerator is the quantity we intend to calculate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "., a~).", |
|
"sec_num": null |
|
}, |
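
{

"text": "A small illustrative sketch of this point (ours, not from the paper): because the denominator is shared by all hypotheses over the same observed prefix, a breadth-first ranker can compare the unnormalized joint scores directly, abandoning any hypothesis whose score is zero.\n\ndef best_hypothesis(scores):\n    # scores: dict mapping hypothesis -> P(H & a_1, ..., a_t)\n    live = {h: s for h, s in scores.items() if s > 0.0}  # zero score: abandon\n    return max(live, key=live.get) if live else None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "INTRODUCTION",

"sec_num": null

},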
|
{ |
|
"text": "For a depth-first or best-first search, as employed by [1] , the quantity P(al,..., at) cannot be ignored. This makes the depthfirst approach significantly more complicated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 58, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "., a~).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the rest of this paper we will restrict our attention to grammars in Chomsky-normal form. A similar probability analysis can be made for arbitrary context-free grammars, but the notation becomes cumbersome and the formulae more complicated. We note that all the topics in this paper are treated in considerably more detail, including proofs, in [3] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 351, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "., a~).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A bottom-up parser is one which reconstructs parsing trees by first constructing parsing subtrees over short disjoint segments of the text, then linking these together into a smaller number of larger trees, and so on recursively until a single parse tree emerges, covering the entire text. In this section we study a particular class of bottom-up parsers, called shift-reduce parsers, which conform to the following rules, leading to the reconstruction of a rightmost-first derivation of the sentence being parsed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The parser receives symbols one at a time from left to right and at each stage of the process, the parser's memory contains a sequence of disjoint parsing subtrees which completely cover the current input. Roughly speaking, as each new symbol is accepted (or shifted-in) the parser decides how to incorporate it into a subtree and perhaps how to link several existing subtrees together (i.e. reduce). The sequence of subtrees in the parser's memory at a given instant is called a parse hypothesis, or a parser stack.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To be more precise, here is how a shift-reduce parser updates the current hypothesis into a new one. Consider a parse hypothesis consisting of n subtrees: 7\"1... I\",,, having root symbols B1... B,, respectively. The three possible \"moves\" for reacting to the next input symbol 'z' are listed below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. 'z' can be shifted in and declared to be Tn+l.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. If there is a rule A ~ B, in the grammar, then r, can be replaced by a new ~-, having A as a root and old ~-, as the left child of A. (Note that 'z' has not yet been shifted in.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. If there is a rule A ~ Bn-iB, in the grammar, then r,-1 and 7, can be removed from the hypothesis and a new subtree 7\",,-1 is added, having A as a root and old ~',-1 as the left child of A and old ~-, as the right child of A. Again note that z remains to be shifted in.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
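
{

"text": "A minimal sketch of the three moves as code (ours; the Tree type and rule encodings are illustrative assumptions, not the paper's):\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass Tree:\n    label: str\n    children: tuple = ()\n\ndef shift(stack, z):\n    # move 1: 'z' becomes tau_{n+1}\n    return stack + [Tree(z)]\n\ndef reduce_unary(stack, rule):\n    # move 2: rule A -> B_n; the old tau_n becomes the child of a new root A\n    a, b = rule\n    assert stack and stack[-1].label == b\n    return stack[:-1] + [Tree(a, (stack[-1],))]\n\ndef reduce_binary(stack, rule):\n    # move 3: rule A -> B_{n-1} B_n; the last two subtrees become siblings under A\n    a, b1, b2 = rule\n    assert len(stack) >= 2 and (stack[-2].label, stack[-1].label) == (b1, b2)\n    return stack[:-2] + [Tree(a, (stack[-2], stack[-1]))]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SHIFT-REDUCE PARSING",

"sec_num": null

},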
|
{ |
|
"text": "The \"input cycle\" of a shift-reduce parser is typically to shift in a new symbol via move 1, use move 2 to give that symbol a nonterminal root, and then to perform some number of moves of type three. Choosing which (if any) of the allowable type-two and type-three rules should be used next in the parse can be quite difficult, but doing so cleverly makes the difference between efficient and inefficient parsing algorithms. When faced with a choice among possible moves some parsers make a breadth-first search among the possibilities. Others use a depth-first scheme, or even something intermediate between these two extremes. We will not be concerned with such schemes here. We concern ourselves only with a probabilistic score for the plausibihty of available choices. (The best use of that score is a study in its own right.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The important fact about shift-reduce parsers from our point of view is that they are quite limited in the kind of superstructure they can build above a given set of subtrees. Since new parent nodes can only be generated over the final few subtrees in the hypothesis, one can not \"go back\" and make non-final pairs of subtrees into siblings. (A precise result is proved in [3] ). Figure 1 shows the necessary superstructure for an n-subtree hypothesis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 376, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 380, |
|
"end": 388, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this figure, the ellipses represent sequences of zero or more nodes in which each node is the left child of its parent. The diagram is also meant to admit the possibihty that Ai is the same node as Ci-l. The right children of the nodes labeled C (labeled X) as well as those in the ellipses are all to be found in the remaining input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SHIFT-REDUCE PARSING", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to calculate the sum of the probabilities of all complete parse trees that could result from the parser's further processing of a given hypothesis, we must sum across all the possibilities for the As and the Cs in figure 1 (which is a finite set) as well as summing over the potentially infinite set of sequences of nodes that could be lurking behind the ellipses. This sounds prohibitive, but we are saved by the fact that the sequences of nodes along the left-edge of a tree can be analyzed as the output of a Markov process. This fact is implicit in the work on trees and regular sets by [6] , and was discovered independently by [2] . To illustrate the use of Markov chain theory for the left-edges of trees, we compute the probability of the event that the left edge of a randomly generated subtree terminates in a specified terminal symbol a, given that the root is a specified nonterminal symbol A. This event is the disjoint union of the events that a is the n th symbol in the left-edge sequence, for all n > 1. Correspondingly, we want the sum P(the n th left-edge symbol is a I the root is A) n>l which is the sum of the (A, a) As another illustration we compute the probability that the left edge of a subtree T terminates in some specific subtree ~', again given that the root of T is A. More precisely, we compute the conditional probability that the subtree ~-appears as a subtree of T, with its root B somewhere in the left-edge of T, given that the root of T is A. This is the disjoint union of the events, as n varies, that B is the n th symbol in the left edge of T and that T then appears rooted at this B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 600, |
|
"end": 603, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 645, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1141, |
|
"end": 1147, |
|
"text": "(A, a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "THE LEFT-EDGE PROCESS", |
|
"sec_num": null |
|
}, |
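
{

"text": "A numeric sketch of this computation (ours; the toy grammar is an assumption for illustration):\n\nimport numpy as np\n\n# Toy CNF grammar: S -> S A (0.3) | A A (0.2) | a (0.5);  A -> a (1.0)\nsyms = ['S', 'A', 'a']                    # terminal 'a' is an absorbing state\nix = {s: k for k, s in enumerate(syms)}\nM = np.zeros((3, 3))\nM[ix['S'], ix['S']] += 0.3                # left child of S -> S A\nM[ix['S'], ix['A']] += 0.2                # left child of S -> A A\nM[ix['S'], ix['a']] += 0.5                # S -> a\nM[ix['A'], ix['a']] += 1.0                # A -> a\n# Rows indexed by terminal symbols stay identically zero.\n\nN = np.linalg.inv(np.eye(3) - M)          # (I - M)^{-1} = I + M + M^2 + ...\nleft_edge = M @ N                         # (A, a) entry: P(left edge from A ends in a)\nprint(left_edge[ix['S'], ix['a']])        # 1.0 here: the left edge surely reaches 'a'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "THE LEFT-EDGE PROCESS",

"sec_num": null

},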
|
{ |
|
"text": "If (just for a moment) we exclude the possibility that v is identical to T, then n must be at least 1. For each n > 1, the conditionM probability that ~-appears rooted at the n m symbol B, is P(r]B) multiplied by the (A, B) th entry of the n th power of M. In this case we can find, much as in the preceding illustration, that the sum from 1 to infinity of these l~obabilities is P(rlB ) x the (A, B) th e~try of M(I-M) -1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE LEFT-EDGE PROCESS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To include the possibility that ~ is identical to T, then we must add the term: ~'(~IB) x P(A = BS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE LEFT-EDGE PROCESS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the second factor is one or zero depending on whether B = A, the sum of probabilities for all n > 0 is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE LEFT-EDGE PROCESS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "which simplifies to: P(rlB 5 x the (A, B) th entry of (I -M5 -1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P(v[B) x the (A, B) `h entry of [I + M(I -M5 -I]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(15", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P(v[B) x the (A, B) `h entry of [I + M(I -M5 -I]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to calculate the probability of the set of parse trees which might complete a given parse hypothesis we will need formulas like these, but with the proviso that we need to specify the rule that is used to generate the root of ~ from its parent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P(v[B) x the (A, B) `h entry of [I + M(I -M5 -I]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "So to calculate all the probabilities that could ever arise due to the ellipses, we have work of inverting a rather large, but rather sparse, matrix. This work is done when the rule probabilities are decided upon, and before any sentences are parsed. The size of the matrix depends on the number of symbols (terminals and nonterminals 5 in the grammar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P(v[B) x the (A, B) `h entry of [I + M(I -M5 -I]", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The probability calculation must be divided into two cases. In one ease we are in the midst of processing input and do not know how many input symbols (if any 5 remain to be processed. The second situation is that we know that all input symbols have been processed. This second ease is special because it implies that the only unknown events are which rules are to be used to link up the subtrees to the root. In this case, the summation down the left edges of subtrees is no longer needed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PROBABILITY CALCULATION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When there may be more input to be processed, the calculation of the probability of a parser hypothesis with only one subtree is exactly the equation (15 in which the start symbol of the grammar, S, takes the place of the symbol A in the formula. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "When in the Midst of the Input", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "calculation requires the following four steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Compute l.q* = the S th row of the product MoQ. Zero out all entries except those corresponding to rules which have Bi as a left child and call the result t~.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For i = 2,...,n compute the product ~* = I,~-iZMoQ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Zero out all entries except those corresponding to rules which have Bi as a left child and call the result t~.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Construct a final vector Vii, by zeroing out all entries of t~j-i except those corresponding to rules which have B, as a right child.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The desired probability is the sum of the entries in Vn and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "[I e(~ln~) i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
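
{

"text": "A schematic rendering of these four steps (ours, under the reconstruction above; the rule-indexed matrices M_0 and Q from figure 1's discussion, and the left_child/right_child lookups, are assumptions):\n\nimport numpy as np\n\ndef midst_score(MoQ, S_row, roots, left_child, right_child, p_subtree):\n    # MoQ:     precomputed (rules x rules) product M_0 Q\n    # S_row:   the row of M_0 Q indexed by the start symbol S\n    # roots:   root symbols B_1 ... B_n of the current subtrees (n > 1)\n    # left_child, right_child: arrays giving each rule's child symbols\n    u = S_row * (left_child == roots[0])        # step 1: keep rules with B_1 as left child\n    for b in roots[1:-1]:\n        u = (u @ MoQ) * (left_child == b)       # steps 2-3: propagate, then mask by left child\n    v = (u @ MoQ) * (right_child == roots[-1])  # step 4: final mask by right child B_n\n    return v.sum() * np.prod(p_subtree)         # times prod_i P(tau_i | B_i)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "4.",

"sec_num": null

},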
|
{ |
|
"text": "When at the End of the Input If there is no more input and the hypothesis has only one subtree, then either the root of the subtree is the start symbol of the grammar, and hence the hypothesis has yielded a wellformed sentence with probability P(rlSS ) or the hypothesis must be abandoned since it has not yielded a sentence and no further changes to it are possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Things are more interesting if the hypothesis contains more than one subtree. Consider a parser hypothesis H, consisting of n > 1 subtrees rl through rn with root symbols B, through Bn respectively, with all of B1 through B, being nonterminal symbols. Suppose that the leaves of these subtrees exhaust the input, so no further shift operations are possible for the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each nonterminal B let MB be the {symbols) x {symbols) matrix whose AC th entry is the probability P(A --* BC), if A --~ BC is a rule of the grammar while otherwise the AC th entry is zero. Also, for each pair of nonterminals BC, let FBc be the column vector indexed by nonterminals whose A th entry is P(A --~ BC) if A ~ BC is a rule of the grammar; otherwise the A th entry is zero. Let Vs be a row vector indexed by nonterminals with a 1 in the entry for S and zeros elsewhere.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Then, for n > 1, the probability of the hypothesis is equal to VS,%IB~MB2... ,~IB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
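
{

"text": "A sketch of this end-of-input product (ours; the dictionaries holding the M_B matrices and F_BC vectors are illustrative assumptions):\n\nimport numpy as np\n\ndef end_of_input_score(v_S, M, F, roots, p_subtree):\n    # v_S:   indicator row vector for the start symbol S\n    # M:     dict B -> matrix M_B;  F: dict (B, C) -> column vector F_BC\n    # roots: root symbols B_1 ... B_n of the finished subtrees, n > 1\n    vec = v_S\n    for b in roots[:-2]:                             # v_S M_{B_1} ... M_{B_{n-2}}\n        vec = vec @ M[b]\n    total = float(vec @ F[roots[-2], roots[-1]])     # ... F_{B_{n-1} B_n}\n    return total * np.prod(p_subtree)                # times prod_i P(tau_i | B_i)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "4.",

"sec_num": null

},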
|
{ |
|
"text": "Programming Considerations", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are several problems in making a practical parser based on the probabilities eMeulated above. First we must invert the rather large matrix I -M and then for each parse hypothesis we must perform two or three matrix operations for each subtree of the hypothesis. This is not actually as bad as it seems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "First note that we can absorb two matrix operations for each subtree into one operation by precomputing MoQ. If we use this as our \"in-core\" matrix, we can reproduce Mo when needed (for n = I computations) by summing across the relevant rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Next we note that the vector by which we are premultiplying is very sparse. This is true since the preceding step was to zeroout all entries in the vector that have the \"wrong\" left child. This means that there are only a few rows of the big MoQ matrix that concern us. Also note that immediately after we calculate the vector result, we will again zero out entries with the \"wrong\" left child. This means that we really only need calculate those few entries in the result vector that have the desired left child. This reduces the matrix operation to much lower order, say 5 \u00d7 5. The size of the calculation is determined by how many rules have a given nonterminal as left child. A grammar will be easy to parse with this method if each nonterminal only appears as the left child in a few rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we note that each parse step can only create one new subtree, and that at the end of the hypothesis. So, ff we remember the vector associated with each subtree as we make it, we only need to do one of these order 5 x 5 calculations to get the probability of the new hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
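
{

"text": "A sketch of the cached, low-order update this suggests (ours; MoQ and the rules_with_left_child index are assumptions carried over from the sketch above):\n\nimport numpy as np\n\ndef extend_score_vector(u_prev, MoQ, rules_with_left_child, new_root):\n    # Only rules whose left child is new_root can survive the next masking,\n    # so compute just those few entries of u_prev @ MoQ (say, ~5 of them).\n    idx = rules_with_left_child[new_root]\n    u_new = np.zeros_like(u_prev)\n    u_new[idx] = u_prev @ MoQ[:, idx]\n    return u_new",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "4.",

"sec_num": null

},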
|
{ |
|
"text": "One might consider implementing the above probability calculation in conjunction with some conventional shift-reduce parser. In this case one would let the LR0 parser suggest possibilities for updating a given parse hypothesis and use the above scheme to compute probabilities and discard unpromising hypotheses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BUILDING A PARSER", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is worth pointing out that all the information needed for LR0 parsing can in fact be reproduced from the probability vectors we calculate. Hence we do not really need to construct such a parser at all! The point is that starting from a particular hypothesis, a given proposal for a next move leads to a nonzero probability if and only ff there is some completion of the input that would not \"crash\" for the conventional parser. The vectors \u00a2~, and ~i, contain all the information we could desire about the next step for the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BUILDING A PARSER", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, let us remark that our matrix calculations can be adapted to yield a shift-reduce parser even when no probabilities are initially present. We simply replace the transition matrix M with a suitably scaled incidence matrix M', in which M'(A, B) = \u00a2 if B is the left child of A via sorae rule. Otherwise M'(A, B) = O. A similar replacement is made for the matrix Q. The specific values of the \"probabilities\" then arising from our calculations do not matter, only whether or not they are zero. Thus, the off-line construction of parser tables could be accomplished via a matrix inversion, rather than the conventional recursive calculations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BUILDING A PARSER", |
|
"sec_num": null |
|
}, |
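
{

"text": "A sketch of this non-probabilistic variant (ours; the grammar encoding and the scaling constant are illustrative assumptions):\n\nimport numpy as np\n\ndef left_edge_reachability(rules, syms, eps=0.01):\n    # M'(A, B) = eps if B is the left child of A via some rule, else 0.\n    ix = {s: k for k, s in enumerate(syms)}\n    Mp = np.zeros((len(syms), len(syms)))\n    for a, children in rules:                 # rule A -> children\n        Mp[ix[a], ix[children[0]]] = eps\n    N = np.linalg.inv(np.eye(len(syms)) - Mp) # I + M' + M'^2 + ...\n    # N[A, B] is nonzero iff B can appear on the left edge below A (or B == A);\n    # only zero versus nonzero matters, not the values themselves.\n    return N > 1e-12",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BUILDING A PARSER",

"sec_num": null

},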
|
{ |
|
"text": "With the addition of this score, there are now a number of different methods for controlling the parsing of sentences from a stochastic grammar, each with its own kind of parser and expected form of the grammar. The four we know of are: [1, 5, 8, 9] . It is possible to find \"expensive\" grammars for each of these scores. For our score, a \"cheap\" grammar is one in which each symbol is the left child in relatively few rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 240, |
|
"text": "[1,", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 243, |
|
"text": "5,", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 246, |
|
"text": "8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 249, |
|
"text": "9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The goal, then, must be to find a parser, score and grammar that meet the needs of a particular application. We take at least some small comfort from the fact that our score has a Bayesian \"maximum likelihood\" interpretation, even though the superiority of that approach depends on the shaky assumption that the input being parsed really is the randomly-generated output of the stochastic grammar under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Statistical Parsing of Messages", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Chitrao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishlimn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proc. DARPA Speech and ]f alawl Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--266", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chitrao, M. V. and R. Grishlimn., \"Statistical Parsing of Mes- sages,\" Proc. DARPA Speech and ]f alawl Language Workshop, pp. 263-266, June 1990.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Computation of the Probability off Initial Substring Generation by Stochastic Context Free Graznmars', InlefTtal Reporl", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Continuous Speech Recognition Group", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, F., \"Computation of the Probability off Initial Substring Generation by Stochastic Context Free Graznmars', InlefTtal Re- porl, Continuous Speech Recognition Group, IBM Research, T.J. Watson Research Center, $%rktown Heights, ~\" 10598, 10 pages.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Sequential Processing off Input Using Stochastic Graannars", |
|
"authors": [ |
|
{

"first": "F",

"middle": [],

"last": "Kochman",

"suffix": ""

},
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kupin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koch,ran, F. and J. Kupin, \"Sequential Processing off Input Using Stochastic Graannars\" to appear.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Estimation off Stochastic Contextfree Gra[mnars Using the Inside-Outside Algorithm", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Compuler Speech and Language", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "35--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lari, K and S. J. $%ung., \"The Estimation off Stochastic Context- free Gra[mnars Using the Inside-Outside Algorithm,\" Compuler Speech and Language vol. 4, pp. 35-56, 1990", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Stochastic Syntax Analysis Procedure and its Application to Pattern Classification", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Fu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "IEEE T", |
|
"volume": "", |
|
"issue": "21", |
|
"pages": "660--666", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, H. C. and K. S. Fu., \"A Stochastic Syntax Analysis Procedure and its Application to Pattern Classification,\" IEEE T,~n,. Vol. C-21, pp. 660-666, July 1972.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "CharacLerizing Derivation Trees off Context Free Granunars through a Generalization of Finite Automata Theory", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Thatcher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Journal of Compaler and System Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thatcher, J. W., \"CharacLerizing Derivation Trees off Context Free Granunars through a Generalization of Finite Automata Theory\" Journal of Compaler and System Sciences, VoI1A Dec. 1967.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficient Parsing for Araluf~l Language", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomita, M., Efficient Parsing for Araluf~l Language, Kluwer Aca- denfic Publishers, Boston, 1986.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "~Sequential Syntactic Decoding", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Velaseo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Souza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "InL J. CompaL Inform. Sci", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "273--287", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Velaseo, F. R. and C. R. Souza, '~Sequential Syntactic Decoding,\" InL J. CompaL Inform. Sci. Vol. 3.4, pp. 273-287, 1974.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "LR parsing off Probabilistie Gra, unars with Input Uncertainty for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wright", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Compgler Speech and Language ~,%1. 4", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "297--323", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wright, J. H., \"LR parsing off Probabilistie Gra, unars with Input Uncertainty for Speech Recognition.\" Compgler Speech and Lan- guage ~,%1. 4, pp. 297-323, 1990.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "This quantity is equal to P(H&al ..... a~)/P(al,..", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Happily, this observation leads to a closed form solution to the problem of calculating all the necessary probabilities for the A parse hypothesis, wish implied superstrucLure infinite set of sequences. To begin, construct the matrix M which is the transition matrix of a Markov chain in which nonterminal symbols are transient states and terminal symbols are absorbing states.M(A, B) = ~P(A ~ BC) for nonterminals B O M(A, b) --P(A ~ b)for terminals bRows of M indexed by terminal symbols are identically zero.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "th entries in the sequence M, M 2, M 3, etc., which is in turn the (A, a) th entry in M+M2+M3+ ... . As it turns out, this matrix sum converges. The sum is equal to M(I -M) -1. Thus the number we seek is the (A, a) th entry of M(I -M) -1. Note our convention that the root is the 0 th symbol along the left edge of the tree.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "hypotheses with n > 1 subtrees we need to take the A and C nodes from figure 1 into account. To calculate the probability of a parser hypothesis with n subtrees ~-, ... r,, with root nodes B1 ...Bn, we keep track of what rule is used to generate each Bi. This defines the necessary relationships among the various Ai Bi and Ci in figure 1. To perform our calculation we need the following matrices: Q(A,r) = p if rule ~\" is A .L BC for some B, C otherwise if rule r is A .L BC for some A,", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |