{
"paper_id": "1993",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:35:53.121133Z"
},
"title": "Frequency Estimation of Verb Subcategorization Frames Based on Syntactic and Multidimensional Statistical Analysis",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Ushioda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213-3890",
"settlement": "Pittsburgh",
"region": "PA"
}
},
"email": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Evans",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213-3890",
"settlement": "Pittsburgh",
"region": "PA"
}
},
"email": ""
},
{
"first": "Ted",
"middle": [],
"last": "Gibson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213-3890",
"settlement": "Pittsburgh",
"region": "PA"
}
},
"email": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213-3890",
"settlement": "Pittsburgh",
"region": "PA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a mechanism for automatically estimating frequencies of verb subcategorization frames in a large corpus. A tagged corpus is first partially parsed to identify noun phrases and then a regular grammar is used to estimate the appropriate subcategorization frame for each verb token in the corpus. In an experiment involving the identification of six fixed subcategorization frames, our current system showed more than 80% accuracy. In addition, a new statistical method enables the system to learn patterns of errors based on a set of training samples and substantially improves the accuracy of the frequency estimation.",
"pdf_parse": {
"paper_id": "1993",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a mechanism for automatically estimating frequencies of verb subcategorization frames in a large corpus. A tagged corpus is first partially parsed to identify noun phrases and then a regular grammar is used to estimate the appropriate subcategorization frame for each verb token in the corpus. In an experiment involving the identification of six fixed subcategorization frames, our current system showed more than 80% accuracy. In addition, a new statistical method enables the system to learn patterns of errors based on a set of training samples and substantially improves the accuracy of the frequency estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When we construct a grammar, there is always a trade-off between the coverage of the grammar and the ambiguity of the grammar. If we hope to develop an efficient high-coverage parser for unrestricted texts, we must have some means of dealing with the combinatorial explosion of syn tactic ambiguities. While a general probabilistic optimization technique such as the Inside-Outside algorithm (Baker 1979 , Lauri and Yo ung 1990 , Jelinek et al. 1990 , Carroll and Charniak 1992 can be used to reduce ambiguity by providing es timates on the applicability of the context-free rules in a grammar (for example), the algorithm does not take advantage of lexical information, including such information as verb subcategoriza tion frame preferences. Discovering or acquiring lexically-sensitive linguistic structures from large corpora may offer an essential complementary ap proach.",
"cite_spans": [
{
"start": 392,
"end": 403,
"text": "(Baker 1979",
"ref_id": null
},
{
"start": 404,
"end": 427,
"text": ", Lauri and Yo ung 1990",
"ref_id": null
},
{
"start": 428,
"end": 449,
"text": ", Jelinek et al. 1990",
"ref_id": null
},
{
"start": 450,
"end": 477,
"text": ", Carroll and Charniak 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Verb subcategorization (verb-subcat) frames represent one of the most important elements of grammatical/lexical knowledge for efficient and reliable parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At this stage in the computational-linguistic exploration of corpora, dictionaries are still probably more reliable than automatic acquisition systems as a source of sub categorization (subcat) frames for verbs. The Oxford Advanced Learners Dictionary ( OALD) (Hornby 1989), for example, uses 32 verb patterns to describe a usage of each verb for each meaning of the verb. However, dictionaries do not pro vide quantitative information such as how often each verb is used with each of the possible subcat frames. Since dictionaries are repositories, pri marily, of what is possible, not what is most likely, they tend to contain information about rare us age (de Marken 1992) . But without information about the frequencies of the subcat frames we find in dictionaries, we face the prospect of having to treat each frame as equiprobable in parsing. This can lead to serious inefficiency. We also know that the frequency of subcat frames can vary by domain; frames that are very rare in one domain can be quite common in another. If we could au tomatically determine the frequencies of subcat frames for domains, we would be able to tailor parsing with domain-specific heuristics. Indeed, it would be desirable to have a subcat dictionary for each possible domain.",
"cite_spans": [
{
"start": 659,
"end": 675,
"text": "(de Marken 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ". This paper describes a mechanism for auto matically acquiring subcat frames and their fre quencies based on a tagged corpus. The method utilizes a tagged corpus because (i) we don't have to deal with a lexical ambiguity (ii) tagged cor pora in various domains are becoming readily available and (iii) simple and robust tagging techniques using such corpora recently have been de veloped (Church 1988 , Brill 1992 .",
"cite_spans": [
{
"start": 389,
"end": 401,
"text": "(Church 1988",
"ref_id": null
},
{
"start": 402,
"end": 414,
"text": ", Brill 1992",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Brent reports a method for automatically acquiring subcat frames but without frequency measurements (Brent and Berwick 1991, Brent 1991). His approach is to count occurrences of those unambiguous verb phrases that contain no noun phrases other than pronouns or proper nouns. By thus restricting the \"features\" that trigger identification of a verb phrase, he avoids possible errors due to syntactic ambiguity. Al though the rate of false positives is very low in his system, his syntactic features are so selective that most verb tokens fail to satisfy them. (For b: sentence initial maker k: target verb i: pronoun n: noun phrase v: finite verb u: participial verb d: base form verb p : preposition example, verbs that occurred fewer than 20 times in the corpus tend to have no co-occurrences with the features.) Therefore his approach is not useful in determining verb-subcat frame frequencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To measure frequencies, we need, ideally, to identify a subcat frame for each verb token in the corpus. This, in turn, requires a full parse of the corpus. Since manually parsed corpora are rare and typically small, and since automatically parsed corpora contain many errors (given cur rent parsing technologies), an alternative source of useful syntactic structure is needed. We have elected to use partially parsed sentences automat ically derived from a lexically-tagged corpus. The partial parse contains information about minimal noun phrases ( without PP attachment or clausal complements). While such derived information about syntactic structure is less accurate and complete than that available in certified, hand parsed corpora, the approach promises to general ize and to yield large sample sizes. In particular, we can use partially parsed corpora to measure verb-subcat frame frequencies. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The\u2022 procedure to find verb-subcat frequencies, automatically, is as follows. 1) Make a list of verbs out of the tagged cor pus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "2) For each verb on the list (the \"target verb\" ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "(2.1) To kenize each sentence containing the target verb in the following way: All the noun phrases except pro nouns are tokenized as \"n\" by a noun phrase parser and all the rest of the words . are also tokenized following the schmema in Table 1 . For example, the sentence \"The corresponding mental state verbs do not follow [target verb] these rules in a straightforward way\" is transformed to a sequence of tokens \"bnvaknpne\" .",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "(2.2) Apply a set of subcat extraction rules to the tokenized sentences. These rules are written as regular expressions arid they are obtained through the exami nation of occurrences of a small sample of verbs in a training text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Note that in the actual implementation of the procedure, all of the redundant operations are eliminated. Our NP parser also uses a fi\ufffdite-state grammar. It is designed especially to support identification of verb-subcat frames. One of its special features is that it detects time-adjuncts such as \"yesterday\" , \"two months ago\" , or \"the following day\" , and eliminates them in the tok enization process. For example, the sentence \"He told the reporters the following day that ... \" is tokenized to \"bivnc ... \" instead of \"bivnnc ... \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
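{
"text": "The following is an illustrative sketch only, not the authors' implementation: the token codes follow Table 1, the code 'e' for the sentence end and the frame names are assumptions, and the patterns are simplified stand-ins for the Table 2 extraction rules. It shows how steps 2.1-2.2 reduce frame assignment to regular-expression matching over token strings.\n\nimport re\nfrom collections import Counter\n\n# Hypothetical, simplified stand-ins for the Table 2 extraction rules.\nFRAME_RULES = [\n    ('NP+NP',  re.compile(r'k[ni][ni]')),     # e.g. 'give [NP officials] [NP rights]'\n    ('NP+inf', re.compile(r'k[ni]pd')),       # 'expect him to gain' (coding infinitival 'to' as p is an assumption)\n    ('inf',    re.compile(r'kpd')),           # 'expects to gain'\n    ('that',   re.compile(r'k[ni]?c')),       # 'told the reporters that ...'\n    ('NP',     re.compile(r'k[ni](?![ni])')), # plain transitive use; a following p is an adjunct PP\n]\n\ndef classify(tokens):\n    # Return the first frame whose rule matches the tokenized sentence, else REST.\n    for frame, rule in FRAME_RULES:\n        if rule.search(tokens):\n            return frame\n    return 'REST'\n\n# Token strings as produced by step 2.1, with k marking the target verb.\nsentences = ['bnkne', 'bnknne', 'bnkpde', 'bnknpne']\nprint(Counter(classify(s) for s in sentences))  # Counter({'NP': 2, 'NP+NP': 1, 'inf': 1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},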
{
"text": "We used the above method in experiments involv ing a tagged corpus of Wall Street Journal (WSJ) articles, provided by the Penn Treebank project. Our experiment was limited in two senses. First, we treated all prepositional phrases as adjuncts. We extracted two sets of tagged sentences from the WSJ corpus, each representing 3-MBytes and approximately 300,000 words of text. One set was used as a training corpus, the other as a test corpus. Table 2 gives the list of verb subcat frame extraction rules obtained ( via exam ination) for four verbs \"expect\" \"reflect\" \"tell\" ' ' '",
"cite_spans": [],
"ref_spans": [
{
"start": 442,
"end": 449,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "and \"give\" , as they occurred in the training corpus. Sample sentences that can be captured by each set of rules are attached to the list. Table 3 shows the result of the hand comparison of the automatically identified verb-subcat frames for \"give\" and \"expect\" in the test corpus. The tabu lar columns give actual frequencies for each verb subcat frame based on manual review and the tabular rows give the frequencies as determined automatically by the system. The count of each cell ([i, j]) gives the number of occurrences of the verb that are assigned the i-th subcat frame by the system and assigned the j-th frame by man ual review. The frame/column labeled \"REST\" represents all other subcat frames, encompassing such subcat frames as those involving wh-clauses, verb-particle combinations (such as \"give up\" ), and no complements. Despite the simplicity of the rules, the fre quencies for subcat frames determined under au tomatic processing are very close to the real dis tributions. Most of the errors are attributable to errors in the noun phrase parser. For exam ple, 10 out of the 13 errors in the (NP,NP+NP] cell under \"give\" are due to noun\u2022 phrase pars ing errors such as the misidentification of a N N sequence ( e.g., * \"give [NP government officials rights] against the press\" vs. \"give [NP govern ment officials] [NP rights] against the press\" ).",
"cite_spans": [
{
"start": 485,
"end": 493,
"text": "([i, j])",
"ref_id": null
}
],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "To measure the total accuracy of the system, we randomly chose 33 verbs from the 300 most frequent verbs in the test corpus (given in Ta ble 4), automatically estimated the subcat frames for each occurrence of these verbs in the test cor pus, and compared the results to manually deter mined su beat frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "The overall results are quite promising. The total number of occurrences of the 33 verbs in the test corpus (excluding participle forms) is 2,242. Of these, 1,933 were assigned correct subcat frames by the system. (The 'correct' assignment counts always appear in the diagonal cells in a comparison table such as in Table 3 .) This indicates an overall accuracy for the method of 86%.",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "If we exclude the subcat frame \"REST\" from our statistics, the total number of occurrences of the 33 verbs in one of the six subcat frames is 1,565. Of these, 1,311 were assigned correct sub cat frames by the system; This represents 83% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "For 30 of the 33 verbs, both the first and the second (if any) most frequent subcat frames as determined by the system were correct. For all of the verbs except one (\"need\" ), the most frequent frame was correct. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Wall Street Journal Corpus",
"sec_num": "3"
},
{
"text": "In the following sections, we describe a sta tistical method which, based on a set of training samples, enables the system to learn patterns of errors and substantially increase the accuracy of estimated verb-subcat frequencies. samples of a common domain) , we estimate the Y margins using Bayes theorem on the fitted val ues of the training corpus by the formula given in Table 5 . (Y = k l X1 = i1 , X2 = i2 , \u2022\u2022 \u2022 , XN = iN) i1 i2 ",
"cite_spans": [
{
"start": 384,
"end": 402,
"text": "(Y = k l X1 = i1 ,",
"ref_id": null
},
{
"start": 403,
"end": 412,
"text": "X2 = i2 ,",
"ref_id": null
},
{
"start": 413,
"end": 419,
"text": "\u2022\u2022 \u2022 ,",
"ref_id": null
},
{
"start": 420,
"end": 428,
"text": "XN = iN)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 229,
"end": 256,
"text": "samples of a common domain)",
"ref_id": null
},
{
"start": 374,
"end": 381,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "USHIODA -EVANS -GIBSON -WA IBEL target class and to eliminate undesirable rule in teractions.",
"sec_num": null
},
{
"text": "= \\sum_{i1} \\sum_{i2} ... \\sum_{iN} M_{i1 i2 ... iN +} P(Y = k | X1 = i1, X2 = i2, ..., XN = iN), where M_{i1 i2 ... iN +} is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E(Y = k | X1 - X2 - ... - XN marginal table of the new corpus)",
"sec_num": null
},
{
"text": "The simplest application of the above method is to use a 2-way contingency table, as in Table 3 To test the latter possibility, we constructed a contingency table for the verb from the test corpus described in the Section 3 that was most problematic (least accurately estimated) among the 33 verbs-\"need\" . Note that we are using the test corpus described in the Section 3 as a train ing corpus here, because we already know both the measured frequency and the hand-judged fre quency of \"need\" which are necessary to construct a contingency table. The total occurrence of this verb was 75. To smooth the table, 0.1 is added to all the cell counts. As new test corpora, we extracted another 300,000 words of tagged text from the WSJ corpus (labeled \"W3\" ) and also three sets of 300,000 words of tagged text from the Brown corpus (labeled \"Bl\", \"B2\" , and \"B3\" ), Table 6 gives the frequency distributions based on the system output, hand judgement, and sta tistical analysis. (As before, we take the hand judgement to be the gold standard, the actual frequency of a particular frame.) After the Y margins are statistically estimated, the least es ti\ufffdated Y values less than 1.0 are truncated to 0. (These are considered to have appeared due to the smoothing.) In all of the test\u2022 corpora, the method gives very accurate frequency distribution estimates. Big gaps between the automatically-measured and manually-determined frequencies of \"NP \" and \"REST\" are shown to be substantially re duced through the use of statistical estimation.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 863,
"end": 870,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 976,
"end": 1086,
"text": "(As before, we take the hand judgement to be the gold standard, the actual frequency of a particular frame.)",
"ref_id": null
},
{
"start": 1200,
"end": 1262,
"text": "(These are considered to have appeared due to the smoothing.)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Heuristics",
"sec_num": "4.2"
},
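{
"text": "As an illustrative numerical sketch (not taken from the paper; the frame set and all counts below are invented, whereas the real figures for \"need\" are summarized in Table 6), the two-way case amounts to reweighting the system's measured frame counts by P(actual frame | assigned frame) estimated from the smoothed training table, then truncating small estimates as described above.\n\nimport numpy as np\n\n# Invented training-corpus confusion counts C[x, y]: tokens assigned frame x\n# by the system whose hand-judged frame is y (hypothetical frame set).\nframes = ['NP', 'that', 'inf', 'REST']\nC = np.array([[30.0, 2.0, 1.0, 6.0],\n              [1.0, 12.0, 0.0, 2.0],\n              [0.0, 0.0, 9.0, 1.0],\n              [4.0, 1.0, 1.0, 5.0]])\nC += 0.1  # smoothing: 0.1 added to every cell, as in the paper\n\n# Saturated two-way model: fitted values are the (smoothed) raw counts,\n# so P(Y = y | X = x) is a row-normalization of C.\nP_y_given_x = C / C.sum(axis=1, keepdims=True)\n\n# X margin measured by running the system on a new corpus.\nM = np.array([50.0, 20.0, 8.0, 12.0])\n\n# E(Y = y) = sum_x M[x] * P(Y = y | X = x); truncate smoothing artefacts below 1.0.\nE_y = M @ P_y_given_x\nE_y[E_y < 1.0] = 0.0\nfor frame, est in zip(frames, E_y):\n    print(frame, round(float(est), 1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Heuristics",
"sec_num": "4.2"
},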
{
"text": "Furthermore, by combining more feature sets and making use of multi-dimensional analysis, we can expect to obtain more accurate estimations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This result is especially encouraging because the heuristics obtained in one domain are shown to be applicable to a considerably different domain.",
"sec_num": null
},
{
"text": "We have demonstrated that by combining syn tactic and multidimensional statistical analysis, the frequencies of verb-subcat frames can be esti mated with high accuracy. Although the present system measures the frequencies of only six sub cat frames, the method is general enough to be extended to many more frames. Since our current focus is more on the estimation of the frequen cies of subcat frames than on the acquisition of frames themselves, using information on subcat frames in machine-readable dictionaries to guide the frequency measurement can be an interesting direction to explore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Di rection",
"sec_num": "5"
},
{
"text": "The traditional application of regular expres sions as rules for deterministic processing has self evident limitations since a regular grammar is not powerful enough to capture general linguistic phe nomena. The statistical method we propose uses regular expressions as filters for detecting specific features of the occurrences of verbs and employs multi-dimensional analysis of the features based on loglinear models and Bayes Theorem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Di rection",
"sec_num": "5"
},
{
"text": "We expect that by identifying other useful syntactic features we can further improve the ac curacy of the frequency estimation. Such features can be regarded as characterizing the syntactic context of the verbs, quite broadly. The features need not be linked to a local verb context. For ex ample, a regular expression such as \"w [-vex] \u2022k\" can be used to find cases where the target verb is preceded by a relative pronoun such that there is no other finite verb or punctuation or sentence final period between the relative pronoun and the target verb.",
"cite_spans": [
{
"start": 330,
"end": 336,
"text": "[-vex]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Di rection",
"sec_num": "5"
},
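{
"text": "As a small illustrative fragment (the token code 'w' for a relative pronoun is not among the codes listed in Table 1 and is assumed here, as is 'x' for punctuation), such a contextual feature is simply another compiled pattern run over the token string:\n\nimport re\n\n# Assumed codes: w = relative pronoun, x = punctuation, e = sentence-final period.\nRELATIVE_CONTEXT = re.compile(r'w[^vex]*k')\n\n# '... (gift) which he expected to get ...' might tokenize to something like 'bnwikpde'.\nprint(bool(RELATIVE_CONTEXT.search('bnwikpde')))  # True\nprint(bool(RELATIVE_CONTEXT.search('bnwvnkne')))  # False: a finite verb intervenes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Direction",
"sec_num": "5"
},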
{
"text": "If the syntactic structure of a sentence can be predicted using only syntactic and lexical knowl edge, we can hope to estimate the subcat frame of each occurrence of a verb using the context ex pressed by a set of features. We thus can aim to extend and refine this method for use with gen eral probabilistic parsing of unrestricted text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Di rection",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We thank Te ddy Seidenfeld, Jeremy Yo rk, and Alex Franz for their comments and discussions with us.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {},
"ref_entries": {
"FIGREF1": {
"text": "(It is generally difficult to distinguish complement and adjunct PPs.) Second, we measured the fre quencies of only six fixed subcat frames for verbs in non-participle form. (This does not represent an essential shortcoming in the method; we only need to have additional subcat frame extraction rules to accommodate participles.) \" ... saw the man ... \"; \" ... which the president of the company wanted ... \" but not \" ... saw him swim ... \"; \" ... (hotel) in which he stayed ... \"; \" ... (gift) which he expected to get ... \" --Frame 6. \" ... expects to gain ... \"",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "is a histogram showing the number of verbs within each error-rate zone. In corn-",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "the cell count of the X1 -X2 -\u2022 \u2022 \u2022 -XN marginal table of the new corpus obtained as the system output, and Mi i i 2 ... i N k is the fitted value of the (N + I)-dimensional contingency table of the training corpus based on a particular loglinear model.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Set of Subcategorization Frame Extraction Rules"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>marginal table.</td></tr><tr><td>The fitted values are then used to estimate the subcat frame frequencies of a new corpus as fol lows. First, the system is run on the new corpus to obtain an N-dimensional contingency table. This table is considered to be an X1 -X2 -\u2022 \u2022 \u2022 -XN</td></tr></table>",
"text": "table. In the case of a saturated model, in which all kinds of interaction of variables up to (N + 1)-way interactions are included, the raw cell counts are the Maximum Likelihood solution. What we are aiming at is the Y margins that represent the real subcat frame fre quencies of the new corpus. Assuming that the training corpus and the new corpus are homo geneous (e.g., reflecting similar sub-domains or"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Multidimensional Statistical Estimation of Subcat Frame Frequencies"
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": ""
}
}
}
}