|
{ |
|
"paper_id": "H93-1048", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:30:52.912908Z" |
|
}, |
|
"title": "Prediction of Lexicalized Tree Fragments in Text", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "AT&T Bell Laboratories", |
|
"institution": "", |
|
"location": { |
|
"addrLine": "600 Mountain Avenue Murray Hill", |
|
"postCode": "07974", |
|
"region": "NJ" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "There is a mismatch between the distribution of information in text, and a variety of grammatical formalisms for describing it, including ngrams, context-free grammars, and dependency grammars. Rather than adding probabilities to existing grammars, it is proposed to collect the distributions of flexibly sized partial trees. These can be used to enhance an ngram model, and in analogical parsing.", |
|
"pdf_parse": { |
|
"paper_id": "H93-1048", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "There is a mismatch between the distribution of information in text, and a variety of grammatical formalisms for describing it, including ngrams, context-free grammars, and dependency grammars. Rather than adding probabilities to existing grammars, it is proposed to collect the distributions of flexibly sized partial trees. These can be used to enhance an ngram model, and in analogical parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For a variety of language processing tasks, it is useful to have a predictive language model, a fact which has recently led to the development probabilistic versions of diverse grammars, including ngram models, context free grammars, various dependency grammars, and lexicalized tree grammars. These enterprises share a common problem: there is a mismatch between the distribution of information in text and the grammar model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PROBLEM WITH PROBABILIZED GRAMMARS", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The problem arises because each grammar formalism is natural for the expression of only some linguistic relationships, but predictive relationships in text are not so restricted. For example, context-free grammars naturally express relations among sisters in a tree, but are less natural for expressing relations between elements deeper the tree. In this paper, first we discuss the distribution of information in text, and its relationship to various grammars. Then we show how a more flexible grammatical description of text can be extracted from a corpus, and how such description can enhance a language model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THE PROBLEM WITH PROBABILIZED GRAMMARS", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The problem can be seen most simply in ngram models, where the basic operation is to guess the probability of a word given n -1 previous words. Obviously, there is a deeper structure in text than an n-gram model admits, though thus far, efforts to exploit this information have been only marginally successful. Yet even on its own terms, ngram models typically fail to take into account predictive information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One way that ngram models ignore predictive information is in their strategy for backing off. Consider, for example, a trigram model where the basic function is to predict a word (wo) given the two previous words (W_l and w-2). In our Wall Street Journal test corpus, the three word sequence give kittens to appears once, but not at all in the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus, a trigram model will have have difficulty predicting to given the words give kittens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this case, the standard move of backing off to a bigram model is not very informative. It is more useful to predict to using the word give than the word kittens, because we know little about what can follow kittens, but much about what typically follows give. We would expect for cases where the bigram (w_ 1,w0) does not exist, the alternative bigram (w_2,wo) will be a better predictor (if it exists) than the simple unigram.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
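The skip-bigram fallback described in the preceding paragraph can be sketched in a few lines of Python. This is only an illustrative sketch under invented counts (the count tables and the give/kittens/to numbers below are made up, not the paper's WSJ statistics): when the w_-1 condition is unseen, the non-adjacent (w_-2, w_0) bigram is consulted before falling back to the unigram.

```python
from collections import Counter

# Invented counts standing in for a training corpus (not the paper's data).
bigram_adj  = Counter({("give", "to"): 9, ("give", "a"): 6})   # (w_-1, w_0) counts
bigram_skip = Counter({("give", "to"): 9, ("give", "a"): 6})   # (w_-2, w_0) counts
unigram     = Counter({"to": 120, "a": 200, "give": 15, "kittens": 1})
ctx_adj  = Counter({"give": 15})    # how often each word occurred as w_-1
ctx_skip = Counter({"give": 15})    # how often each word occurred as w_-2
total = sum(unigram.values())

def p_backoff(w2, w1, w0):
    """Estimate Pr(w0 | w_-2 = w2, w_-1 = w1): prefer the adjacent bigram,
    then the non-adjacent (w_-2, w_0) bigram, then the unigram."""
    if ctx_adj[w1] > 0:                       # we know something about w_-1
        return bigram_adj[(w1, w0)] / ctx_adj[w1]
    if ctx_skip[w2] > 0:                      # otherwise let w_-2 predict w_0
        return bigram_skip[(w2, w0)] / ctx_skip[w2]
    return unigram[w0] / total                # last resort: unigram

# "give kittens to": the (kittens, to) bigram is unseen, but give is a strong
# predictor of to, so the skip bigram is used instead of the bare unigram.
print(p_backoff("give", "kittens", "to"))
```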
|
{ |
|
"text": "Obviously, in this example, the fact that complementation in English is not expressed purely by adjacency explains some of the power of the w_ 1 predictor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A second problem with ngram models arises because different word sequences call for a greater or smaller n. For example, while many 6-grams are unique and uninformative, some are powerful predictors. A second problem is that CFG's naturally abstract away from syntactic function: for example, in a CFG, a noun phrase is described by the same set of rules whether it occurs as subject, object, object of preposition or whatever. While this ability to generalize across contexts is a strength of CFG's, it is disastrous for guessing whether a noun phrase will be a pronoun or not. Table 2 shows the probabilities of a noun phrase being realized as a pronoun in various contexts, in a sample of spoken and written texts produced by college students and matched for content (Hindle 1978 In dependency grammar, there are two competing analyses both for noun phrases and for verb phrases. For noun phrases, the head may be taken to be either 1) the head noun (e.g. man in the men) or 2) the determiner (e.g the in the men); analogously, for verb phrases, the head may be taken to be either 1) the mail verb (e.g. see in had seen) or 2) the tensed verb of the verb group (e.g have in had seen). Each analysis has its virtues, and different dependency theorists have preferred one analysis or the other. It is not our purpose here to choose a dependency analysis, but to point out that whatever the choice, there are consequences for our predictive language models. The two models imply different natural generalizations for estimating probabilities, and thus will lead to different predictions about the language probabilities. If the determiner is taken to be the head of the noun phrase, then in guessing the probability of a verb-det-noun structure, the association between the verb and the determiner will predominate, since when we don't have enough information about a verb-det-noun triple, we can back off to pairs. Conversely, if the noun is taken to be the head of the noun phrase, then the predominant association will be between verb and noun. (Of course, a more complex relationship between the grammar and the associated predictive language model may be defined, overriding the natural interpretation.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 782, |
|
"text": "(Hindle 1978", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 579, |
|
"end": 586, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A ten million word sample of Wall Street Journal text was parsed, and a set of verb-det-noun triples extracted. Specifically, object noun phrases consisting of a noun preceded by a single determiner preceded by a verb were tabulated. That is, we consider only verbs with an object, where the object consists of a determiner and a noun. 17.62 Table 3 : Three predictive models for verb-det-noun triples in Wall Street Journal text ignore predictive information, assuming in the first case that the choice of noun is independent of the verb, and in the second case, that the choice of determiner is independent of the verb. Neither assumption is warranted, as Table 3 shows (both have higher entropy than the trigram model), but Model 1, the determiner=head model, is considerably inferior. Model 1 is for this case like a bigram model, and Table 3 makes it clear that this is not a particularly good way to model dependencies between verb and object: the dominant dependency is between verb and noun.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 349, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 665, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 839, |
|
"end": 846, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
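To make the Table 3 comparison concrete, here is a small Python sketch of how the entropies of the three decompositions could be computed from a table of (verb, determiner, noun) counts. The toy counts are invented and the estimation is unsmoothed maximum likelihood, so this only illustrates the factorizations Pr(v)Pr(dn|v), Pr(v)Pr(d|v)Pr(n|d), and Pr(v)Pr(n|v)Pr(d|n), not the paper's actual numbers.

```python
import math
from collections import Counter

# Invented (verb, det, noun) counts; the paper tabulates these from parsed WSJ text.
triples = Counter({
    ("have", "a", "loss"): 213, ("have", "the", "gain"): 50,
    ("be", "the", "first"): 165, ("raise", "its", "stake"): 140,
    ("reach", "an", "agreement"): 127,
})
N = sum(triples.values())

v_c, d_c, n_c = Counter(), Counter(), Counter()
vd, dn, vn, nd = Counter(), Counter(), Counter(), Counter()
for (v, d, n), c in triples.items():
    v_c[v] += c; d_c[d] += c; n_c[n] += c
    vd[(v, d)] += c; dn[(d, n)] += c; vn[(v, n)] += c; nd[(n, d)] += c

# The three factorizations (maximum-likelihood, unsmoothed).
model0 = lambda v, d, n: triples[(v, d, n)] / N                                        # joint trigram
model1 = lambda v, d, n: (v_c[v] / N) * (vd[(v, d)] / v_c[v]) * (dn[(d, n)] / d_c[d])  # det = head
model2 = lambda v, d, n: (v_c[v] / N) * (vn[(v, n)] / v_c[v]) * (nd[(n, d)] / n_c[n])  # noun = head

def entropy(model):
    """Average negative log2 probability (bits per triple) on the observed triples."""
    return -sum((c / N) * math.log2(model(v, d, n)) for (v, d, n), c in triples.items())

for name, m in (("Model 0", model0), ("Model 1", model1), ("Model 2", model2)):
    print(name, round(entropy(m), 2))
```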
|
{ |
|
"text": "In terms of using the distributional information available in text, neither choice is correct, since the answer is lexicaUy specific. For example, in predicting the object of verbs, answer is a better predictor of its object noun (call, question), while alter is better a predicting its determiner (the, its).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In contrast to dependency grammars and context free grammars, lexicalized tree adjoining grammars have considerable flexibility in what relations are represented, since the tree is an arbitrary-sized unit (Shabes 1988). In practice however, lexicalized TAGs have typically been written to reduce the number of rules, and thus to assume independence like other grammars. In general, for any grammar that is written without regard to the distribution of forms in text, simply attaching probabilities to the grammar will always ignore useful information. This does not imply any claim about the descriptive power of various grammar formalisms; with sufficient ingenuity, just about any recurrent relation that appears in a corpus can be encoded in any formalism. However, different grammar formalisms do differ in what they can naturally express.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There is a clear linguistic reason for the mismatch between received grammars and the distribution of structures in text: language provides several cross cutting ways of organizing information (including various kinds of dependencies, parallel structures, listing, name-making templates, etc.), and no single model is good for all of these.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ngram Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The preceding section has given evidence that adding probabilities to existing grammars in several formalisms is less than optimal since significant predictive relationships are necessarily ignored. The obvious solution is to enrich the grammars to include more information. To do this, we need variable sized units in our database, with varying terms of description, including adjacency relationships and dependency relationships. That is, given the unpredictable distribution of information in text, we would like to have a more flexible approach to representing the recurrent relations in a corpus. To address this need, we have been collecting a database of partial structures extracted from the Wall Street Journal corpus, in a way designed to record recurrent information over a wide range of size and terms of the description.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "USING PARTIAL STRUCTURES", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The database of partial structures is built up from the words in the corpus, by successively adding larger structures, after augmenting the corpus with the analysis provided by an unsupervised parser. The larger structures found in this way are then entered into the permanent database of structures only if a relation recurs with a frequency above a given threshold. When a structure does not meet the frequency threshold, it is generalized until it does.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
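A schematic Python sketch of the "generalize until the count clears the threshold" step described above. The description tuple (spelling, tag, lemma, group), the abstraction path spelling → lemma → tag/group, and the toy counts are assumptions chosen to mirror the banks/bank/N example later in the paper; this is not the paper's implementation.

```python
from collections import Counter

THRESHOLD = 3  # the paper arbitrarily sets this threshold to 3 for all structures

# A relation is (predicate, head_description, dependent_description); each
# description is a (spelling, tag, lemma, group) tuple, e.g. (banks, NN, bank/N, NG).
def abstraction_path(desc):
    spelling, tag, lemma, group = desc
    yield desc                      # full description
    yield ("", tag, lemma, group)   # drop the spelling, keep the lemma
    yield ("", tag, "", group)      # drop the lemma too, keep only tag and group

def generalize(relation, counts):
    """Walk the abstraction paths of both arguments until the relation's
    count in the database meets the threshold; return None if nothing does."""
    pred, head, dep = relation
    for h in abstraction_path(head):
        for d in abstraction_path(dep):
            if counts[(pred, h, d)] >= THRESHOLD:
                return (pred, h, d)
    return None

# Toy counts standing in for the corpus-derived database.
counts = Counter({
    ("precedes", ("", "NN", "", "NG"), ("between", "IN", "between/I", "PG")): 3802,
})
relation = ("precedes",
            ("banks", "NN", "bank/N", "NG"),
            ("between", "IN", "between/I", "PG"))
print(generalize(relation, counts))   # returns the most specific frequent generalization
```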
|
{ |
|
"text": "The descriptive relationships admitted include:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 basic lexical features Assuming that we require at least two instances for a partial description to be entered into the database, none of these three descriptions qualify for the database. Therefore we must abstract away, using an arbitrarily defined abstraction path. First we abstract from the spelling to the lemma. This move admits two relations (since they are now frequent enough)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PRUNED STRUCTURES (precedes (put, VB,put/V, VG) (,NN,bank/N,NG)) (depends (put,VB,put/V, VG) (,NN,bank/N,NG)) units are selected in using language depends on a variety of factors, including meaning, subject matter, speaking situation, style, interlocutor and so on. Of course, demonstrating that this intuition is valid remains for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The set of partial trees can be used directly in an analogical parser, as described in Hindle 1992. In the parser, we are not concerned with estimating probabilities, but rather with finding the structure which best matches the current parser state, where a match is better the more specific its description is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The third relation is still too infrequent, so we further generalize to (precedes (,NN,,NG) (between,IN,between/I,PG)) a relation which is amply represented (3802 occurrences).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The process is iterated, using the current abstracted description of each word, adding a level of description, then generalizing when below the frequency threshold. Since each level in elaborating the description adds information to each word, it can only reduce the counts, but never increase them. This process finds a number of recurrent partial structures,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "including between a rock and a hard place ( General Caveats There is of course considerable noise introduced by the errors in analysis that the parser makes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 43, |
|
"text": "(", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are several arbitrary decisions made in collecting the database. The level of the threshold is arbitrarily set at 3 for all structures. The sequence of generalization is arbitrarily determined before the training. And the predicates in the description are arbitrarily selected. We would like to have better motivation for all these decisions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It should be emphasized that while the set of descriptive terms used in the collection of the partial structure database allows a more flexible description of the corpus than simple ngrams, CFG's or some dependency descriptions, it nevertheless is also restrictive. There are many predictive relationships that can not be described. For example, parallelism, reference, topic-based or speaker-based variation, and so on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Motivation The underlying reason for developing a database of partial trees is not primarily for the language modeling task of predicting the next word. Rather the partialtree database is motivated by the intuition that partial trees are are the locus of other sorts of linguistic information, for example, semantic or usage information. Our use of language seems to involve the composition of variably sized partially described units expressed in terms of a variety of predicates (only some of which are included in our database). Which", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Partial Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The partial structure database provides more information than an ngram description, and thus can be used to enhance an ngram model. To explore how to use the best available information in a language model, we turn to a trigram model of Wall Street Journal text. The problem is put into relief by focusing on those cases where the trigram model fails, that is, where the observed trigram condition (w-2, w_l) does not occur in the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In the current test, we randomly assigned each sentence from a 2 million word sample of WSJ text to either the test or training set. This unrealistically minimizes the rate of unseen conditions, since typically the training and test are selected from disjoint documents (see . On the other hand, since the training is only a million words, the trigrams are undertrained. In general, the rate of unseen conditions will vary with the domain to be modeled and the size of training corpus, but it will not (in realistic languages) be eliminated. In this test, 26% (258665/997811) of the bigrams did not appear in the test, and thus it is necessary to backoff from the trigram model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We will assume that a trigram model is sufficiently effective at preediction in those cases where the conditioning bigram has been observed in training, and will focus on the problem of what to do when the conditioning bigram has not appeared in the training. In a standard backoff model, we would look to estimate Pr(wolw_l). Here we want to consider a second predictor derived from our database of partial structures. The particular predictor we use is the lemma of the word that w-1 depends on, which we will call G(W_l). In the example discussed above, the first (standard) predictor for the word between is the preceding word banks and the second predictor for the word between is G(banks), which in this case is put/v.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We want to choose among two predictors, w-i and G(w_l).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In general, if we have two conditions, Ca and CCb and we want to find the probability of the next word given these conditions. is for put/V (G(w_x)) rather than for w-1 (banks) itself, so G(w_l) is used as predictor, giving a logprob estimate of -10.2 rather than -13.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
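Since IS is the relative entropy between the conditional distribution Pr(w|C) and the unigram prior Pr(w), the selection step can be sketched in a few lines of Python. The distributions below are invented for illustration; the point is only the mechanics of scoring the two candidate conditions (w_-1 = banks versus G(w_-1) = put/V) and keeping the one whose conditional distribution departs most from the prior.

```python
import math

def IS(cond_dist, prior):
    """IS(w; C) = sum over w of Pr(w|C) * log( Pr(w|C) / Pr(w) )  (relative entropy)."""
    return sum(p * math.log2(p / prior[w]) for w, p in cond_dist.items() if p > 0)

# Invented toy distributions: the unigram prior and two conditional
# distributions, one given w_-1 = "banks", one given G(w_-1) = "put/V".
prior       = {"between": 0.01, "a": 0.30, "the": 0.40, "in": 0.29}
given_banks = {"between": 0.02, "a": 0.28, "the": 0.41, "in": 0.29}
given_putV  = {"between": 0.20, "a": 0.20, "the": 0.30, "in": 0.30}

candidates = {"w_-1 = banks": given_banks, "G(w_-1) = put/V": given_putV}
for name, dist in candidates.items():
    print(f"IS({name}) = {IS(dist, prior):.3f}")

# Keep the condition whose predicted distribution differs most from the prior.
best = max(candidates, key=lambda name: IS(candidates[name], prior))
print("chosen predictor:", best)
```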
|
{ |
|
"text": "The choice of G(w_ 1) as predictor here seems to make sense, since we are willing to believe that there is a complementation relation between put/V and its second complement between.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Of course, the choice is not always so intuitively appealing. When we go on to predict the next word, we need an estimate of Pr(albanks between). Again, our training corpus does not include the collocation banks between, so no help is available from trigrams. In this case, the maximum IS is for banks rather than between, so we use banks to predict a rather than between, giving a logprob estimate of-5.6 rather than -7.10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Overall, however, the two predictors can be combined to improve the language model, by always choosing the predictor with higher IS score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "As shown in Table 4 , this slightly improves the logprob for our test set over either predictor independently. However, Table 4 also shows that a simple strategy of chosing the raw bigram first and the G(w_l) bigram when there is no information available is slightly better. In a more general situation, where we have a set of different descriptions of the same condition, the IS score provides a way to choose the best predictor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ENHANCING A TRIGRAM MODEL", |
|
"sec_num": "3." |
|
}, |
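A sketch, under invented probability tables, of the two strategies compared in Table 4: backing off to the raw w_-1 bigram first and to the G(w_-1) bigram only when w_-1 is unknown, versus always taking whichever of the two predictors has the higher IS score. The table names and numbers are hypothetical; the example mirrors the banks between / a case, where the max-IS rule can pick the less accurate predictor.

```python
import math

def IS(cond_dist, prior):
    """Relative entropy between a conditional distribution and the unigram prior."""
    return sum(p * math.log2(p / prior[w]) for w, p in cond_dist.items() if p > 0)

# Hypothetical conditional tables: condition -> {next word -> probability}.
prior     = {"a": 0.35, "between": 0.02, "the": 0.45, "rock": 0.18}
given_w1  = {"banks": {"a": 0.60, "the": 0.35, "rock": 0.05}}      # Pr(w0 | w_-1)
given_Gw1 = {"put/V": {"between": 0.30, "a": 0.30, "the": 0.40}}   # Pr(w0 | G(w_-1))

def backoff_then(w0, w1, g_w1):
    """Prefer the raw w_-1 bigram; use the G(w_-1) bigram only if w_-1 is unseen."""
    if w1 in given_w1:
        return given_w1[w1].get(w0, 1e-9)
    if g_w1 in given_Gw1:
        return given_Gw1[g_w1].get(w0, 1e-9)
    return prior.get(w0, 1e-9)

def backoff_max_is(w0, w1, g_w1):
    """Always use whichever of w_-1 and G(w_-1) has the higher IS score."""
    options = [d[k] for d, k in ((given_w1, w1), (given_Gw1, g_w1)) if k in d]
    if not options:
        return prior.get(w0, 1e-9)
    best = max(options, key=lambda dist: IS(dist, prior))
    return best.get(w0, 1e-9)

for strategy in (backoff_then, backoff_max_is):
    logprob = math.log2(strategy("a", "banks", "put/V"))
    print(strategy.__name__, round(logprob, 2))
```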
|
{ |
|
"text": "Recurrent structures in text vary widely both in size and in the terms in which they are described. Existing grammars are too restrictive both in the size of structure they admit and in their terms of description to adequately capture the variation in text. A method has been described for collecting a database of partial structures from text. Methods of fully exploiting the database for language modeling are currently being explored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSION", |
|
"sec_num": "4." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Using statistics in lexical analysis", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{

"first": "William",

"middle": [

"A"

],

"last": "Gale",

"suffix": ""

},

{

"first": "Patrick",

"middle": [],

"last": "Hanks",

"suffix": ""

},

{

"first": "Donald",

"middle": [],

"last": "Hindle",

"suffix": ""

}
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Church, Kenneth W., William A. Gale, Patrick Hanks, and Donald Hindle. 1991. \"Using statistics in lexical analysis:' in", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lexical acquisition: using on-line resources to build a lexicon", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uri Zernik (ed.) Lexical acquisition: using on-line resources to build a lexicon, Lawrence Erlbaum, 115-164.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Gale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Computer Speech and Language", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "19--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Church, Kenneth W. and William A. Gale. 1991. \"A compari- son of the enhanced Good-Turing and deleted estimation meth- ods for estimating probabilities of English bigrams;' Computer Speech and Language, 5, 19-54.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An analogical parser for restricted domains", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the FiSh DARPA Workshop on Speech & Natural Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hindle, Donald. 1992. \"An analogical parser for restricted domains,\" In Proceedings of the FiSh DARPA Workshop on Speech & Natural Language, -.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A probabilistic grammar of noun phrases in spoken and written English", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hindle, Donald. 1981. \"A probabilistic grammar of noun phrases in spoken and written English;' In David Sankoff and Henrietta Cedergren (eds.) Variation Omnibus, Linguistic Re- search, Inc. Edmonton, Alberta.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "DependencySyntax: TheoryandPractice", |
|
"authors": [ |
|
{ |
|
"first": "Igora", |
|
"middle": [], |
|
"last": "Melchuk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melchuk, IgorA. 1988.DependencySyntax: TheoryandPrac- tice, State University of New York Press, Albany.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semantic Classes and Syntactic Ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Resnik, Philip. 1993. \"Semantic Classes and Syntactic Ambi- guity;' This volume.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Parsing strategies with 'lexicalized' grammars: application to tree adjoining grammars", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proceedings fo the 12th International Conference on Computational Linguistics", |
|
"volume": "88", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schabes, Yves. 1988. \"Parsing strategies with 'lexicalized' grammars: application to tree adjoining grammars\", in Pro- ceedings fo the 12th International Conference on Computa- tional Linguistics, COLING88, Budapest, Hungary.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "(maximal projection) \u2022 dependency relations -depends on \u2022 adjacency relations -precedes Consider an example from the following sentence from the a training corpus of 20 million words of the Wall Street Journal. Reserve board rules have put banks between a rock and a hard place The first order description of a word consists of its basic lexical features, i.e. the word spelling, its part of speech, its lemma, and its major category. Looking at the word banks, we have as description TERMINAL banks,NN,bank/N,NP At the first level we add adjacency and dependency information, specifically ADDED STRUCTURE (precedes (put,VB,put/V, VG) (banks,NN,bank/N,NG)) (precedes (banks,NN,bank/N,NG) (between,IN,between/I,PG)) (depends (put,VB,put/V, VG) (banks,NN,bank/N,NG))", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "like to choose the predictor C'i for which the predicted distribution of w differs most from the unigram distribution. Various measures are possible; here we con-model logprob unigram backoff w_ l backoff G(w_ l) backoff w-t then G(w_ t) backoff (MAX IS ofw_l and G(w_l))", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Context Free Grammars It is easy to see that a simple-</td></tr><tr><td>minded probabilizing of a CFG -that is, taking an existing</td></tr><tr><td>CFG and assigning probabilities to the rules -is not a very</td></tr><tr><td>good predictor. There several problems. First, CFG's typi-</td></tr><tr><td>cally don't include enough lexical information. Indeed, the</td></tr><tr><td>natural use of non-terminal categories is to abstract away from</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "Subject and non-subject noun phrasesDependency Grammars Dependency grammars naturally address part of the mismatch between CFG's and predictive associations, since they are expressed in terms of relations between words (Melcuk 1988), Nevertheless, in dependency grammars as well, certain syntactic relationships are problematic.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>Model for [V P V [N P d n ]]</td><td>Entropy</td><td/></tr><tr><td>0 Pr(vdn) = Pr(v)Pr(dn[v)</td><td>15.08</td><td/></tr><tr><td>1 Pr(vdn) = Pr(v)Pr(dlv)Pr(nld )</td><td>20.48</td><td/></tr><tr><td>2 Pr(vdn) = Pr(v)Pr(nlv)Pr(dln )</td><td/><td/></tr><tr><td/><td/><td/><td>The five most common</td></tr><tr><td/><td colspan=\"3\">such triples (preceded by their counts) were:</td></tr><tr><td/><td>213 have</td><td>a</td><td>loss</td></tr><tr><td/><td>176 be</td><td/></tr><tr><td/><td>165 be</td><td colspan=\"2\">the first</td></tr><tr><td/><td>140 raise</td><td>its</td><td>stake</td></tr><tr><td/><td>127 reach</td><td>an</td><td>agreement</td></tr></table>", |
|
"text": "Three different probability models for predicting the specific verb, determiner, and noun were investigated, and their entropies calculated. Model 0 is the baseline trigram model, assuming no independence among the three terms. Model 1, the natural model for the determiner=head dependency model, predicts the determiner from the verb and the noun from the determiner (and thus is equivalent to an adjacent word bigram model). Model 2 is the converse, the natural model for the noun=head dependency model. Both Model 1 and Model 2", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>namely the relative entropy between the posterior distribution</td></tr><tr><td>Pr(w]C) and the prior distribution Pr(w). We'll label this</td></tr><tr><td>measure IS, where</td></tr><tr><td>IS(w; C) = E Pr(wlC)loa Pr(wlC) w P (w)</td></tr><tr><td>In the course of processing sentence (1), we need an estimate</td></tr></table>", |
|
"text": "Backoff for unknown trigrams in WSJ text. sider one, whichResnik (1993) calls selectional preference,", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |