ACL-OCL / Base_JSON /prefixI /json /iwpt /1991.iwpt-1.24.json
{
"paper_id": "1991",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:35:25.013582Z"
},
"title": "STOCHASTIC CONTEXT-FREE GRAMMARS F, OR ISLAND-DRIVEN PROBABILISTIC PARSING",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Corazza",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Renato",
"middle": [],
"last": "De Mori",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Gretter",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In automatic speech recognition the use of lan guage models improves performance. Stochastic language models fit rather well the uncertainty created by the acoustic pattern matching. These models are used to score theories corresponding to partial interpretations of sentences. Algorithms have been developed to compute probabilities for theories that grow in a strictly left-to-right fash ion. In this paper we consider new relations to compute probabilities of partial interpretations of sentences. We introduce theories containing a gap corresponding to an uninterpreted signal segment. Algorithms can be easily obtained from these re lations. CoIIJ.putational complexity of these algo rithms is also derived.",
"pdf_parse": {
"paper_id": "1991",
"_pdf_hash": "",
"abstract": [
{
"text": "In automatic speech recognition the use of lan guage models improves performance. Stochastic language models fit rather well the uncertainty created by the acoustic pattern matching. These models are used to score theories corresponding to partial interpretations of sentences. Algorithms have been developed to compute probabilities for theories that grow in a strictly left-to-right fash ion. In this paper we consider new relations to compute probabilities of partial interpretations of sentences. We introduce theories containing a gap corresponding to an uninterpreted signal segment. Algorithms can be easily obtained from these re lations. CoIIJ.putational complexity of these algo rithms is also derived.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The aim of Automatic Speech Understanding (ASU) is to process an utte!ed sentence, determin ing an optimal word sequence along with its inter pretation. The success of such a process depends on the formal system we use to model natural lan guage. There is strong evidence that stochastic regular grammars ( for example . Markov Models) do not capture the large-scale structure of natu ral language. In very recent years, there has been a growing interest toward more powerful stochas tic rewriting systems, like stochastic context-free grammars (SCFG's; see among the others [Wright and Wrigley 89] , [Lari and Young 90] , [Jelinek et al. 90] and [Jelinek and Lafferty 90] ). Stochas tic grammars fit naturally the uncertainty created by the (pattern matching) acoustic search process; moreover SCFG's give syntactic prediction capa bilities that are stronger than the Markov Models. Further motivations for this approach are reported in [Lari and Young 90] a \ufffd d [Jelinek et al. 90] .",
"cite_spans": [
{
"start": 575,
"end": 598,
"text": "[Wright and Wrigley 89]",
"ref_id": null
},
{
"start": 601,
"end": 620,
"text": "[Lari and Young 90]",
"ref_id": null
},
{
"start": 623,
"end": 642,
"text": "[Jelinek et al. 90]",
"ref_id": null
},
{
"start": 647,
"end": 672,
"text": "[Jelinek and Lafferty 90]",
"ref_id": null
},
{
"start": 938,
"end": 983,
"text": "[Lari and Young 90] a \ufffd d [Jelinek et al. 90]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "In ASU we are interested in generating partial interpretations of a \u2022 spoken sentence called theo ries. We score them in terms of their likelihood\u2022 L(A, th) = O(Pr(A It h) Pr(th)), 1 where Pr(A I th) is the probability that theory th derives the acoustic signal segment A and Pr(th) is the prob ability of the obtained theory. The most pop ular parsers used in Automatic Speech Recogni tion ( ASR) generate and expand theories starting from the left and then proceeding rightward. In this case, the best theories already obtained can drive the analysis of the right portion of the in put, restricting the class of possible next preter minals in order to maximize the probabilities of the new extended theories. For ASU, especially for dialogue systems, it may be useful to consider parsers that are \"island-driven\" . These parsers fo cus on islands, that is words of particular semantic relevance which have been previously hypothesized with high acoustic evidence. Then they proceed outward, working in both directions. Island-driven approaches have been proposed and defended in [Woods 81 ] and [Giachin and Rullerit 89]; in [Stock et al. 89] the predictive power of bidirectional pars ing is also discussed. None of the parsers proposed in these works uses a stochastic grammar.",
"cite_spans": [
{
"start": 1081,
"end": 1090,
"text": "[Woods 81",
"ref_id": null
},
{
"start": 1127,
"end": 1144,
"text": "[Stock et al. 89]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": ". In this paper we consider the problem of scor ing partial theories in the island-driven approach. An important quantity is Pr(th) , i.e. the proba bility that a SCFG generates sequences of words (islands) separated by gaps. The gaps are portions of the acoustic signal that are still uninterpreted in the context of th. We develop a theoretical frame work to compute Pr(th) in the case th contains islands and gaps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "In this section definitions related to Stochastic Context Free Grammars (SCFGs) are introduced , along with the notation that will be used through out this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "An SCFG . is \u2022 defined as a quadruple G 8 = (N, :E, P, S) , where N is a finite set of nontermi nal symbols, :E is a finite set of terminal symbols disjoint from N, P is a finite set of productions of the form H -+\"et, H EN, a E (:E U N)*, and S EN is a special symbol called start symbol. ",
"cite_spans": [
{
"start": 44,
"end": 57,
"text": "(N, :E, P, S)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H -FG H -w, H, F, G E N, w E :E.",
"eq_num": "(2)"
}
],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "For reasons discussed in [Jelinek et al. 90] it is useful to have the SCFG in CNF; in the following we will always refer to SCFGs in CNF.",
"cite_spans": [
{
"start": 25,
"end": 44,
"text": "[Jelinek et al. 90]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "The derivation of a string by the grammar G 8 is usually represented as a parse ( or derivation) tree, whose nodes indicate the productions employed in the derivation itself. It is also possible to associate with each derivation tree the probability that it was generated by the grammar G 8 \u2022 This proba bility is the product of the probabilities of all the rules employed in the derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "Given a string z E :E* , the notation H < z >, H E N, indicates the set of all trees with root H generated by G 8 and spanning z . Therefore Pr( H < z >) is the sum of the probabilities of these subtrees, i.e. \u2022 the probability that the string z has been generate. cl by G 8 starting from symbol H . We assume that the grammar G 8 . i\ufffd consistent [Gonzales and Thomason 78] . This means that the following condition holds: 2 L Pr ( S < z > ) = 1.",
"cite_spans": [
{
"start": 347,
"end": 373,
"text": "[Gonzales and Thomason 78]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "(3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NOTATION AND DEFINITIONS",
"sec_num": "2"
},
{
"text": "From this hypothesis it follows that a similar con dition holds for all nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": "A possible application of an island driven parser to a task of ASU is the following. On the basis of a previously obtained theory (partial interpre tation) u = Wi ... Wi+p and of some non-syntactic knowledge, predictions can be made for words not necessarily adjacent to u. This introduces a gap within the theory that \u2022 represents a not yet rec ognized part of the input sentence. Then further syntactical and acoustical analyses will try to fill in the gap. The gap will be then filled by further syntactical and acoustical analysis. \u2022 Therefore we will deal with theories that can be represented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "th : w ; ... w;+pxl \u2022\u2022\u2022X m W j \u2022\u2022\u2022w j+q Yl \u2022\u2022 \u2022 Yk \u2022\u2022\u2022 or ux( m ) v y<\u2022)",
"eq_num": "( 4)"
}
],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": "where Wi ... Wi+p = u and Wj ..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": ". Wj+q = v indi cate strings of already recognized terminals ( i, j > 0,p,q 2:: O, j > i+p ) while xi ... Xm = x ( m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": ", m 2:: 0 and Y1 ... Yk ... = y ( \u2022 ) stand for gaps with speci fied length m (x ( m )) or (finite) unspecified length (x ( *)). We will also indicate a gap with x meaning We studied both the cases in which gap x has specified or unspecified length ( see [Corazza et al. 90] ). In practical cases, it is possible to estimate from the acoustic signal the probability distribu tion of the number of words filling the gap. Since this makes more significant the case in which the gap length is specified, in this work we will fo cus our attention on theories of the form x = ux ( m ) v y ( *).",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "[Corazza et al. 90]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "zeE\u2022",
"sec_num": null
},
{
"text": "For the calculation of the probability ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARTIAL DERIVATION TREE PROBABILITIES",
"sec_num": "3"
},
{
"text": "Pr(S < uxv y ( *) > ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARTIAL DERIVATION TREE PROBABILITIES",
"sec_num": "3"
},
{
"text": "< ux ( *) > )) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARTIAL DERIVATION TREE PROBABILITIES",
"sec_num": "3"
},
{
"text": "We sketch here a similar algorithm for the cases in which the gap length equals m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARTIAL DERIVATION TREE PROBABILITIES",
"sec_num": "3"
},
{
"text": "In the case of a known length gap x ( m ), a prefix string probability Pr(H < ux ( m ) >) can be com puted on the basis of the following relation. Since G s is in Chomsky Normal Form, if lux ( m ) I > 1 then H must directly derive two nonterminals G1 and G2 . According to the way the string ux C m ) can be divided into two parts spanned by G 1 and G2 respectively, one can distinguish two different situations: in the first one, G 1 spans just a proper prefix of u and G2 spans the remaining part of u and the gap; in the second one, G 1 entirely spans u plus a possible prefix of the gap. Based on these cases, the following relation can be established:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "Pr(H < ux < \"\") > ) = L Pr(H -G1 G2)[ G1G2 p-1 LPr( G1 < w; ... w;+k > ) x k=O 212 ,n-1 + L Pr(G1 < uxik ) > ) Pr(G2 < x\ufffd 1n -k ) > ) ]. (5) k=O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "Note that gap x ( m ) has been split into two shorter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "(k) d (m-k) B . 1\u2022 . gaps x 1 an x 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "\u2022 y a recursive app 1cat1on of (5), prefix-string probabilities can be computed using both the following initial condition: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "Pr(H < x ( 1n ) > ) = L Pr(H -G1 G2)X a 1 ,a 2 eN m-1 X L Pr(G1 < x ( j) > ) Pr(G2 < x ( 1n -i ) > ) , m > 1 . i= l Pr(H < x < 1 ) > ) = L Pr(H -w); weI: (7 ) ( 8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "In a similar way we can define Pr(< xv >) as the suffix-string probability; its computation can be easily obtained from expressions that are sym metrical with respect to the ones employed for the prefix-string probability. Details are not pursued here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "We introduce now two probabilities that will be useful in calculating the prefix-string-with gap probability: the gap-in-string probability Pr(H < uxv >) and the island probability Pr(H < xvy ( *) >).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string and Suffix-string probab ilities",
"sec_num": "3.1"
},
{
"text": "For the gap-in-string probability computation we can distinguish three independent and mutually exclusive cases, according to the position of the boundary between the two parts df string uxv spanned by the two children G 1 and G2 of H. The first word of the string spanned by G2 can belong to the initial string u = wi ... wi+p , to the gap x or to the final string v = Wj ... Wj+q\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-in-string probabilities",
"sec_num": "3.2"
},
{
"text": "In the case of known length gap one gets: H < w ; ... w;+px ( m ) Wj ... wi+ q >) = = L Pr (H -G 1 G2 ) ",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 65,
"text": "H < w ; ... w;+px ( m )",
"ref_id": null
},
{
"start": 91,
"end": 103,
"text": "(H -G 1 G2 )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gap-in-string probabilities",
"sec_num": "3.2"
},
{
"text": "[ + + G1G2 p-1 L Pr(G 1 < W i ... w;+ k >) x k=O m L P \ufffd G 1 < ux\ufffd k ) >) Pr(G2 < x\ufffd m -k ) v > ) + k=O g -1 L Pr(G 1 < ux ( m ) Wj ... wi+k > ) X k =O (9 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-in-string probabilities",
"sec_num": "3.2"
},
{
"text": "The inner summations in (9) contain products of already defined probabilities, along with terms that can be computed recursively with the following ini tial condition (p = q = 0):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-in-string probabilities",
"sec_num": "3.2"
},
{
"text": "Pr(H < w ;x ( m ) Wj > ) = L Pr(H -G 1 G2 )X G1,G2 X I: Pr(G 1 < w;x\ufffd k ) >) Pr(G 2 < x\ufffd m - k ) W j > ){10) lc=O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-in-string probabilities",
"sec_num": "3.2"
},
{
"text": "As for the gap-in-string case, the island probabil ity computation involves three cases, depending on the position of the first word of the string spanned by G2 with respect to the island v = Wj ... Wj+q .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "The three sets of strings generated in the three cases above are probabilistically independent, but not disjoint in the case of unspecified length gap. Due to this fact, in such a case one must also con-. sider the probability products, then obtaining a quadratic system of equations. On the other hand, the following relation is obtained for the case of m-length gap:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "213 Pr(H < x ( m ) Wj ... Wi+qY ( \u2022 ) >) = L Pr ( H -G1 G2)[ G1 ,G2 k=l g -1 + L Pr(G 1 < X ( m ) Wj . .. W j+Jc >) X k=O X Pr(G2 < wi+ k +l . .. Wj+qY ( \u2022) > ) + + Pr(G1 < X ( m) Wj ... Wi+qY\ufffd\u2022 ) > ) X X Pr ( G2 < y\ufffd\u2022 ) > )] (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "where the term Pr(G2 < y\ufffd\u2022 ) >) equals 1. Using the definition of QL(H => G1 G2 ) given in [Je linek and Lafferty 90] one can solve the recursion in (11) in the same way the recursive equation for the prefix-string probability is solved there, obtain ing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "Pr(H < x ( m ) Wj ... w;+ 9 yC\u2022 ) > ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "in which: ( G1 < x ( m ) w; ... w;+k > ) X",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 24,
"text": "( G1 < x ( m )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "= L QL ( H \u21d2 G1 G2)C . .,,, 31 ( G1 , G2) m = L Pr(G 1 < x\ufffd k ) >) x k=l q -1 + L Pr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Islan d probabilities",
"sec_num": "3.3"
},
{
"text": "The term C xvy ( G 1 , G2) contains a summation of products between gap probabilities and island probabilities over a left gap shorter than x, along with a summation of products between suffix string probabilities ( with known length gap) and prefix-string probabilities ( with unspecified length gap) . Equation (1 3) can be solved recursively, with the initial condition (x< 0 ) = c:) :",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 26,
"text": "xvy ( G 1 , G2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "k=O (12)",
"sec_num": null
},
{
"text": "g -l . Cvy(G1 , G2) = L Pr(G1 < w; ... W;+k > )x k=O (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k=O (12)",
"sec_num": null
},
{
"text": "An expression for the prefix-string-with-gap prob ability Pr( H < ux < m ) vy < * ) > ) can now be obtained directly from the four cases where the boundary between the two children of H belongs to u , to the gap x, to the island v or to the final gap y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string-with-gap probabil ities",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(H < Wi ... w i+ p x ( \"' )w; ... w;4q y C \u2022) >) = = L Pr(H _;, G 1 G 2 )[ G 1 ,G2 p-1 L Pr(G 1 < Wi . . \ufffd w i+ k >) x k=O ,n + L Pr(G1 < ux\ufffd k ) > ) Pr(G2 < x\ufffdrn -k )vy C \u2022) > ) + k=O q -1 + L Pr(G 1 < ux < \"') w; ... w;+k >) x k=O X Pr(G2 < W;+k +l ... w;+ q y < \u2022) >) + + Pr(G1 < ux < \"') v y\ufffd\u2022) >) Pr(G2 < y\ufffd\u2022 ) > )].",
"eq_num": "( 15)"
}
],
"section": "Prefix-string-with-gap probabil ities",
"sec_num": "3.4"
},
{
"text": "Solving the recursion in (15) in the same way as for (11), one obtains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string-with-gap probabil ities",
"sec_num": "3.4"
},
{
"text": "Pr(H < Wi ... wi+ p x ( \"')w; ... W;+ q Y(\u2022) >) = = L QL (H =? G1 G2)Du.rvy(G1 , G2 ) (16) G 1 ,G2 Du:cvy(G1, G2 ) = p -1 L Pr(G1 < Wi ... wi+ k >) x k=O 214 ,n + L Pr(G1 < ux\ufffd\") >) Pr(G2 < x\ufffd ;.,. -k )vy C \u2022) >) + k=O q -1 + L Pr(G1 < ux < \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-string-with-gap probabil ities",
"sec_num": "3.4"
},
{
"text": "As for previous computations in this section, equa tion (17) consists of summations over products of already defined probabilities along with a recursive term Pr(G2 < Wi+k+l ... Wi+pX ( m ) v. y ( * ) >) which can be computed starting with the initial condition (p = 0):",
"cite_spans": [
{
"start": 184,
"end": 189,
"text": "( m )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "k=O",
"sec_num": null
},
{
"text": "+ ,n L Pr(G 1 < WiX\ufffd k ) > ) Pr(G2 < x\ufffdrn -k )vy ( \u2022) >) + k=O q -1 L Pr(G1 < WiX ( rn) Wj ... w;+ k >) X k=O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k=O",
"sec_num": null
},
{
"text": "Based on the relation presented in the last sec tion, algorithms for the computation of the prob abilities defined there can be developed strightfor wardly. In the present section we discuss the com putational complexity for the cases of major inter est ( details about the derivation of the complexity expressions are simple but tedious, and therefore will not be reported here). The assumed model of computation is the Random Access Ma chine, taken under the unifo rm cost criterion (see [Aho et al. 74] ). We are mainly concerned here with worst case time complexity results.",
"cite_spans": [
{
"start": 490,
"end": 505,
"text": "[Aho et al. 74]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COMPLEXITY EVALUATION",
"sec_num": "4"
},
{
"text": "We will indicate with IPI the size of set P, i.e. the number of productions in G 3 \u2022 All the probabil ities defined in Section 3 depend upon the grammar 0 3 , strings u and v and the lengths of gaps x and y. Table 1 summarizes worst-case time complexity for sets of these probabilities. O (IPI max{p 2 q,pq 2 , p 2 m, pm 2 }) 6. {Pr(H < Wi ... Wi+pX ( m -l ) aWj ... Wj +qY ( *) >) I HEN} Table 1 : Worst-case time complexity fo r the computation of the probabilities of some sets of theories. Symbol a E :E indicates a one word extension of a theory whose probability had already been computed.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 1",
"ref_id": null
},
{
"start": 289,
"end": 325,
"text": "(IPI max{p 2 q,pq 2 , p 2 m, pm 2 })",
"ref_id": null
},
{
"start": 389,
"end": 396,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "COMPLEXITY EVALUATION",
"sec_num": "4"
},
{
"text": "Both island and prefix-string-with-gap probabil ities require cubic time computations ( rows 1 and 2). Rows 3 to 6 account for cases in which one have to compute the probability of a theory that has been obtained from a previously analyzed the-\u2022 ory by means of a single word extension. In these cases, using a dynamic technique, one can dispense from the computation of elements already involved in the calculation of the previous theory. One word extension on the side of the unlmown length gap yC *) costs quadratic time both in the case of island and prefix-string-with-gap probabilities. The one word extension on the side of the known length gap x (m ) costs cubic time. This asymmetry can be justified observing froin (15) that the addition of \u2022 a single word between a string and a bounded gap forces the reanalysis of a quadratic number of new subterms. Note that this is also true for well known dynamic methods for CFG recognition ( e.g. the CYK algorithm [Younger 67]): one word change in the middle part of a string implies a cubic-time whole recomputation in the worst-case. In fact there is an interesting parallelism between those methods, the Inside algorithm and the methods discussed here (see [Corazza et al. 90 ] for a discus sion) .",
"cite_spans": [
{
"start": 1213,
"end": 1231,
"text": "[Corazza et al. 90",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COMPLEXITY EVALUATION",
"sec_num": "4"
},
{
"text": "A framework has been developed to score par-215 tial sentence interpretations in ASU systems. Gen eral motivations for modeling naturall anguage by SCFG's can be found in [Jelinek et al. 90] , while the importance of scoring measures that are com patible with island-driven strategies has been al ready pointed out in [Woods 81] . In the present section we discuss major advantages of the studied approach and possible applications of the derived framework.",
"cite_spans": [
{
"start": 171,
"end": 190,
"text": "[Jelinek et al. 90]",
"ref_id": null
},
{
"start": 318,
"end": 328,
"text": "[Woods 81]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": "5"
},
{
"text": "We are mainly interested in sentence interpre tation systems. Even if semantical and pragmati cal predictive models are not defined, we can rely on high-level\u2022 heuristic information sources. This knowledge can be used to predict words on the base of previous partial interpretations. Predic tions may be words not adjacent to the stimulat ing segments. These words can be recovered us ing word-spotting techniques. 4 Thus, the only way to employ the available heuristic information is to parse sentences in a discontinuous way. This means that the parser has first to find an island and then to fill the gap between the stimulating segment and the island itself. This technique produces partial analyses that are interleaved by gaps and that can be scored using our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": "5"
},
{
"text": "The framework introduced in this paper can also be used to predict words adjacent to an already rec ognized string and to compute the probability that the first (last) word x1 (x m ) of a gap is a certain symbol a E I;. This new word will extend the cur rent theory. Words adjacent to an existing theory can be hypothesized by selecting the word(s) which maximize the prefix-string-with-gap probability of the theory augmented with it. Instead of comput ing these probabilities for all the elements in the dictionary, it is possible to restrict this expensive process to the preterminal symbols ( as in [Jelinek and Lafferty 90] ). The approach discussed so far should be compared with standard lattice parsing techniques, where no restriction is imposed by the p\ufffdrser on the word search space (see , for example [Chow and Roukos 89] and the discussion in [Moore et al. 89] ).",
"cite_spans": [
{
"start": 603,
"end": 628,
"text": "[Jelinek and Lafferty 90]",
"ref_id": null
},
{
"start": 856,
"end": 873,
"text": "[Moore et al. 89]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": "5"
},
{
"text": "Our framework accounts for bidirectional expan sion of partial analyses; this improves the predic tive capabilities of the system. In fact, bidirec tional strategies can be used in restricting the syn tactic search space for gaps surrounded by two par tial analyses. This point has been discussed in [Stock et al. 89] for cases of one word length gaps. We propose a generalization to m-length gaps and to cases where partial analyses_ do not represent only complete parse trees but also partial deriva tion trees.",
"cite_spans": [
{
"start": 300,
"end": 317,
"text": "[Stock et al. 89]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": "5"
},
{
"text": "As a final remark, notice that the proposed framework requests the SCFG to be in Chomsky normal form. Although every SCFG G 3 can be cast in CNF, such a process may result in quadratic size expansion of G 3 , where the size of G 3 is roughly proportional to the sum of the length of all pro ductions in G 3 \u2022 The proposed framework can be easily generalized to other kinds of bilinear forms with linear expansion in the size of G 3 (for example the canonical two fo rm [Harrison 78] ). This con sideration deserves particular attention because in natural language applications the size of the gram mar is considerably larger than the input sentence length.",
"cite_spans": [
{
"start": 469,
"end": 482,
"text": "[Harrison 78]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": "5"
},
{
"text": "We write f(x) = O (g(x)) whenever there exist con stants c, x > 0 such that f(x) > c g(x) for every x > x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The normalization property expressed in (1) above guarantees \u2022that the probabilities of all (finite and infinite) derivations swn to one, but the language generated by the grammar only corresponds to the subset of the finite deriva tions, whose probability can be less than one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "By convention, x< 0 ) is the null string e, i.e. the string whose length is zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Word-spotting techniques allow one to find occurences of one ( or more) given word in a speech si gn al. In these sys tems there is a trade off between \"false alarms\" and \"missing words\" that can be controlled by a threshold obtained from training speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Computation of Probabilities for an Island-Driven Parser",
"authors": [
{
"first": "V",
"middle": [],
"last": "Aho",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
},
{
"first": "; J",
"middle": [
"K L"
],
"last": "Baker ; Y",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Scot",
"middle": [],
"last": "Glasgow",
"suffix": ""
},
{
"first": ";",
"middle": [
"A"
],
"last": "Corazza",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rdemori",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gretter",
"suffix": ""
},
{
"first": "",
"middle": [
"P"
],
"last": "Sat Ta ; E",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Giachin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rullent",
"suffix": ""
}
],
"year": 1974,
"venue": "Proceedings of the IEEE In ternational Conference on Acoustic, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "1537--1542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V.Aho, J.E.Hopcroft and J.D.Ullman: \"The Design Analysis of Com puter Algorithms\" , Addison-Wesley Pub lishing Company, Reading, MA, 197 4. [Baker 79] J .K.Baker: \"Trainable Grammars for Speech Recognition\" , Proceedings of the Spring Conference of the Acoustical Society of America, 1979. [Chow and Roukos 89] Y.L.Chow and S.Roukos: \"Speech Understanding Using a Unification Grammar\" , Proceedings of the IEEE In ternational Conference on Acoustic, Speech and Signal Processing, 1989, Glasgow, Scot land. [Corazza et al. 90] A.Corazza, RDeMori, R.Gretter and G .Sat ta: \"Computa- tion of Probabilities for an Island-Driven Parser\" Technical Report SOCS 90-19, Mc Gill University, MONTREAL, Quebec, H3A 2A7 CANADA. Also as Technical Re port TR9009-01, IRST, Trento, Italy, 1990. [Giachin and Rullent 89] E.P.Giachin and C.Rullent: \"A Parallel Parser for Spo ken Natural Language\" , Proceedings of the Eleventh InternatioI).al Joint Conference on Artificial Intelligence, 1989, Detroit, Michi gan USA, pp.1537-1542.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Syntactic Pattern Recognition",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Gonzales",
"suffix": ""
},
{
"first": "M",
"middle": [
"G"
],
"last": "Thomason",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Gonzales and Thomason 78] R.C.Gonzales and M.G.Thomason: \"Syntactic Pattern Recognition\", Addison-Wesley Publishing Company, Reading, MA, 1978.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Computation of the Prob ability of Initial Substring Generation by Stochastic Context Free Grammars",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Harrison",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Laffe Rty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1978,
"venue": "Introduction to Formal Language Theory",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A.Harrison: \"Introduction to Formal Language Theory\", Addison-Wesley Publishing Company, Reading, MA, 1978. [Jelinek et al. 90] F.Jelinek, J.D.Lafferty and R.L.Mercer: \"Basic Method of Probabilistic Context Free Grammars\", Internal Report, T.J.Watson Research Center, Yorktown Heights, NY 10598, 85 pages. [Jelinek and Lafferty 90] F.Jelinek and J.D.Lafferty: \"Computation of the Probability of Initial Substring Generation by Stochastic Context Free Grammars\", Internal Report, Continuous Speech Recognition Group, IBM Research, T.J.Watson Research Center, Yorktown Heights, NY 10598, 10 pages.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Estimation of Stochastic Context-Free Grammars using the Inside-Outside Algo rithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "1",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Lari and Young 90] K.Lari and S.J.Young: \"The Estimation of Stochastic Context-Free Grammars using the Inside-Outside Algorithm\", Computer Speech and Language, vol.4, n.1, 1990, pp.35-56.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Integrating Speech and Natural Language Processing",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Moore",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "243--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Moore et al. 89] R.M.Moore, F.Pereira and H.Murveit: \"Integrating Speech and Natural Language Processing\", Proceedings of the Speech and Natural Language Workshop, 1989, Philadelphia, Pennsylvania, pp.243-247.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bidirectional Chart: A Potential Technique for Parsing Spoken Natural Language Sentences",
"authors": [
{
"first": "O",
"middle": [],
"last": "Stock",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Falcone",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Insinnamo",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "3",
"issue": "",
"pages": "219--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O.Stock, R.Falcone and P.Insinnamo: \"Bidirectional Chart: A Potential Technique for Parsing Spoken Natural Language Sentences\", Computer Speech and Language, vol.3, n.3, 1989, pp.219-237.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Optimal Search Strategies for Speech Understanding Control",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Woods",
"suffix": ""
}
],
"year": 1981,
"venue": "Artificial Intelligence",
"volume": "18",
"issue": "3",
"pages": "295--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Woods 81] W.A.Woods: \"Optimal Search Strategies for Speech Understanding Control\", Artificial Intelligence, vol.18, n.3, 1981, pp.295-326.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognition and Parsing of Context-Free Languages in Time n^3",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
},
{
"first": "E",
"middle": [
"N"
],
"last": "Wrigley",
"suffix": ""
},
{
"first": "D",
"middle": [
"H"
],
"last": "Younger",
"suffix": ""
}
],
"year": 1967,
"venue": "International Workshop on Parsing Technologies",
"volume": "10",
"issue": "",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Wright and Wrigley 89] J.H.Wright and E.N.Wrigley: \"Probabilistic LR Parsing for Speech Recognition\", International Workshop on Parsing Technologies, Pittsburgh, PA, pp.105-114. [Younger 67] D.H.Younger: \"Recognition and Parsing of Context-Free Languages in Time n^3\", Information and Control, vol.10, 1967, pp.189-208.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Each production is associated with a probability, indicated with Pr(H → α). The grammar G_s is proper if the following relation holds: Σ_α Pr(H → α) = 1, H ∈ N.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "G_s is in Chomsky Normal Form (CNF) if all productions in G_s are in one of the following forms:",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "In our notation, i and j are position indices, p and q are shift indices, m indicates a (known) gap length and k, h are used as running indices. Finally, Σ* represents the set of all strings of finite length over Σ, while Σ^m, m ≥ 0, is the set of all strings in Σ* of length m.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Pr(H < w_i x^(0) >) = Pr(H w_i) (6) and the gap probabilities Pr(H < x^(m) >), which are the sum of the probabilities of all trees with root H and yield of length m. Gap probabilities can be recursively computed as follows:",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "1. {Pr(H < x^(m) w_j ... w_{j+q} y^(*) >) | H ∈ N} prefix-string-with-gap probabilities; 2. {Pr(H < w_i ... w_{i+p} x^(m) w_j ... w_{j+q} y^(*) >) | H ∈ N} one word extension for island probabilities; 3. {Pr(H < x^(m) w_j ... w_{j+q} a y^(*) >) | H ∈ N} O(|P| max{q^2, m^2}); 4. {Pr(H < x^(m) a w_j ... w_{j+q} y^(*) >) | H ∈ N} O(|P| max{m^2 q, m q^2}) one word extension for prefix-string-with-gap probabilities; 5. {Pr(H < w_i ... w_{i+p} x^(m) w_j ... w_{j+q} a y^(*) >) | H ∈ N} O(|P| max{p^2, q^2, m^2, (m + q)p})",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}