{
"paper_id": "H94-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:39.297179Z"
},
"title": "A Report of Recent Progress in Transformation-Based Error-Driven Learning*",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": "",
"affiliation": {
"laboratory": "Spoken Language Systems Group Laboratory for Computer Science",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": {
"postCode": "02139",
"region": "Massachusetts"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In [Brill 92], a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a sma]_l number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.",
"pdf_parse": {
"paper_id": "H94-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In [Brill 92], a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a sma]_l number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When automated part of speech tagging was initially explored [Klein and Simmons 63, Harris 62] , people manually engineered rules for tagging, sometimes with the aid of a corpus. As large corpora became available, it became clear that simple Markov-model based stochastic taggers that were automatically trained could achieve high rates of tagging accuracy [Jelinek 85] . These stochastic taggers have a number of advantages over the manually built taggers, including obviating the need for laborious manual rule construction, and possibly capturing useful information that may not have been noticed by the human engineer. However, stochastic taggers have the disadvantage that linguistic information is only captured indirectly, in large tables of statistics. Almost all recent work in developing automatically trained part of speech taggers has been on further exploring Markovmodel based tagging [Jetinek 85, Church 88, DeRose 88, DeMarcken 90, Merialdo 91, Cutting et al. 92, Kupiec 92, Charniak et al. 93, Weischedel et al. 93] . 1",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "[Klein and Simmons 63,",
"ref_id": null
},
{
"start": 84,
"end": 94,
"text": "Harris 62]",
"ref_id": null
},
{
"start": 357,
"end": 369,
"text": "[Jelinek 85]",
"ref_id": null
},
{
"start": 899,
"end": 911,
"text": "[Jetinek 85,",
"ref_id": null
},
{
"start": 912,
"end": 922,
"text": "Church 88,",
"ref_id": null
},
{
"start": 923,
"end": 933,
"text": "DeRose 88,",
"ref_id": null
},
{
"start": 934,
"end": 947,
"text": "DeMarcken 90,",
"ref_id": null
},
{
"start": 948,
"end": 960,
"text": "Merialdo 91,",
"ref_id": null
},
{
"start": 961,
"end": 979,
"text": "Cutting et al. 92,",
"ref_id": null
},
{
"start": 980,
"end": 990,
"text": "Kupiec 92,",
"ref_id": null
},
{
"start": 991,
"end": 1010,
"text": "Charniak et al. 93,",
"ref_id": null
},
{
"start": 1011,
"end": 1032,
"text": "Weischedel et al. 93]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "In [Brill 92 ], a trainable rule-based tagger is described *This research was supported by ARPA under contract N00014-89-J-1332, monitored through the Office of Naval Research.",
"cite_spans": [
{
"start": 3,
"end": 12,
"text": "[Brill 92",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "1Markov-model based taggers assign a sentence the tag sequence that maximizes Prob(word[tag) * Prob(taglprevious n tags).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
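{
"text": "To make footnote [1] concrete, here is a minimal Python sketch of the quantity such a tagger maximizes (a sketch, not code from the paper; the bigram history and the table names are assumptions):

import math

def sequence_score(words, tags, emit, trans):
    # Footnote [1]'s objective with a bigram (n = 1) tag history:
    # maximize the product over i of P(word_i | tag_i) * P(tag_i | tag_{i-1}).
    # emit[(word, tag)] and trans[(prev_tag, tag)] are probability tables
    # estimated from a tagged corpus (assumed given).
    score = 0.0
    prev = '<s>'  # sentence-start pseudo-tag
    for word, tag in zip(words, tags):
        score += math.log(emit[(word, tag)]) + math.log(trans[(prev, tag)])
        prev = tag
    return score  # a Markov-model tagger picks the tag sequence maximizing this
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},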
{
"text": "Transformation-based error-driven learning has been applied to a number of natural language problems, including part of speech tagging, prepositional phrase attachment disambiguation, and syntactic parsing [Brill 92, Brill 93, Brill 93a] . A similar approach is being explored for machine translation [Su et al. 92] . Figure 1 illustrates the learning process. First, unannotated text is passed through the initial-state annotator. The initialstate annotator can range in complexity from assigning random structure to assigning the output of a sophisticated manually created annotator. Once text has been passed through the initial-state annotator, it is then compared to the truth, 3 and transformations are learned that can be applied to the output of the initial state annotator to make it better resemble the truth.",
"cite_spans": [
{
"start": 206,
"end": 216,
"text": "[Brill 92,",
"ref_id": null
},
{
"start": 217,
"end": 226,
"text": "Brill 93,",
"ref_id": null
},
{
"start": 227,
"end": 237,
"text": "Brill 93a]",
"ref_id": null
},
{
"start": 301,
"end": 315,
"text": "[Su et al. 92]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 318,
"end": 326,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2. TRANSFORMATION-BASED ERROR-DRIVEN LEARNING",
"sec_num": "256"
},
{
"text": "In all of the applications described in this paper, the following greedy search is applied: at each iteration of learning, the transformation is found whose application resuits in the highest score; that transformation is then added to the ordered transformation list and the training corpus is updated by applying the learned transformation. To define a specific application of transformation-2The programs described in this paper are freely available. 3As specified in a manually annotated corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2. TRANSFORMATION-BASED ERROR-DRIVEN LEARNING",
"sec_num": "256"
},
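{
"text": "A minimal Python sketch of this greedy search (not the paper's code; the list-of-(word, tag) corpus representation, the rule objects with an apply method, and the stopping threshold are all assumptions):

def errors(corpus, truth):
    # Count positions where the current tagging disagrees with the truth.
    return sum(1 for (_, tag), (_, gold) in zip(corpus, truth) if tag != gold)

def learn(corpus, truth, candidates, threshold=1):
    # At each iteration, pick the transformation whose application results in
    # the highest score (here, the largest reduction in tagging errors),
    # append it to the ordered transformation list, and update the corpus.
    learned = []
    while True:
        base = errors(corpus, truth)
        best = max(candidates, key=lambda t: base - errors(t.apply(corpus), truth))
        gain = base - errors(best.apply(corpus), truth)
        if gain < threshold:  # stop: no transformation helps enough
            break
        learned.append(best)
        corpus = best.apply(corpus)
    return learned
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRANSFORMATION-BASED ERROR-DRIVEN LEARNING",
"sec_num": "2."
},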
{
"text": "UNANNOTATEDTExT I based learning, one must specify the following: (1) the start state annotator, (2) the space of transformations the learner is allowed to examine, and (3) the scoring function for comparing the corpus to the lrulh and choosing a transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2. TRANSFORMATION-BASED ERROR-DRIVEN LEARNING",
"sec_num": "256"
},
{
"text": "Once an ordered list of transformations is learned, new text can be annotated by first applying the initial state annotator to it and then applying each of the learned transformations, in order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ANNO~TAJD I TRUTH",
"sec_num": null
},
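{
"text": "Under the same assumptions as the sketch above, annotating new text is then:

def annotate(words, initial_state_annotator, learned):
    # First apply the initial-state annotator, then each learned
    # transformation, in the order in which it was learned.
    corpus = initial_state_annotator(words)  # e.g. most-likely-tag per word
    for transformation in learned:
        corpus = transformation.apply(corpus)
    return corpus
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRANSFORMATION-BASED ERROR-DRIVEN LEARNING",
"sec_num": "2."
},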
{
"text": "The original tranformation-based tagger [Brill 92 ] works as follows. The start state annotator assigns each word its most likely tag as indicated in the training corpus. The most likely tag for unknown words is guessed based on a number of features, such as whether the word is capitalized, and what the last three letters of the word are. The allowable transformation templates are:",
"cite_spans": [
{
"start": 40,
"end": 49,
"text": "[Brill 92",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "Change tag a to tag b when:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "1. The preceding (following) word is tagged z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "2. The word two before (after) is tagged z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "3. One of the two preceding (following) words is tagged 2'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "4. One of the three preceding (following) words is tagged z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "5. The preceding word is tagged z and the following word is tagged w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "6. The preceding (following)word is tagged z and the word two before (after) is tagged w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
{
"text": "where a,b,z and w are variables over the set of parts of speech. To learn a transformation, the learner in essence applies every possible transformation, a counts the number of tagging errors after that transformation is applied, and chooses that transformation resulting in the greatest error reduction. 5 Learning stops when no transformations can be found whose application reduces errors beyond some prespecified threshold. An example of a transformation that was learned is: change the tagging of a word from noun to verb if the previous word is tagged as a modal. Once the system is trained, a new sentence is tagged by applying the start state annotator and then applying each transformation, in turn, to the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},
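{
"text": "One way to encode such transformations (our encoding, with Penn Treebank tag names assumed, not the paper's data structures) is as a (from-tag, to-tag, condition) triple; the example rule above then looks like:

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Transformation:
    # 'Change tag a to tag b when the condition holds at position i.'
    from_tag: str
    to_tag: str
    condition: Callable[[List[str], List[str], int], bool]

    def apply(self, corpus: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
        words = [w for w, _ in corpus]
        tags = [t for _, t in corpus]
        # Conditions are tested against the tags as they stood before this
        # transformation was applied, so one pass changes all matching sites.
        new = [self.to_tag
               if tag == self.from_tag and self.condition(tags, words, i)
               else tag
               for i, tag in enumerate(tags)]
        return list(zip(words, new))

# The learned rule quoted above: change noun (NN) to verb (VB)
# if the previous word is tagged as a modal (MD).
noun_to_verb_after_modal = Transformation(
    'NN', 'VB', lambda tags, words, i: i > 0 and tags[i - 1] == 'MD')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN EARLIER ATTEMPT",
"sec_num": "3."
},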
{
"text": "No relationships between words are directly captured in stochastic taggers. In the Markov model, state tran-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "sition probabilities (P(Tagi]Tagi-z...Tagi_,~)) express",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "the likelihood of a tag immediately following n other tags, and emit probabilities (P(WordjlTagi)) express the likelihood of a word given a tag. Many useful relationships, such as that between a word and the previous word, or between a tag and the following word, are not directly captured by Markov-model based taggers. The same is true of the earlier transformation-based tagger, where transformation templates did not make reference to words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "To remedy this problem, the transformation-based tagger was extended by adding contextual transformations that could make reference to words as well as part of speech tags. The transformation templates that were added are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "Change tag a to tag b when:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "1. The preceding (following) word is w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "2. The word two before (after) is w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "3. One of the two preceding (following) words is w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "5. The current word is w and the preceding (following) word is tagged z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "where w and x are variables over all words in the training corpus, and z is a variable over all parts of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
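{
"text": "As a sketch, the lexicalized templates can be instantiated into condition functions with the same (tags, words, i) signature used earlier (only the 'preceding' variants are shown; this encoding is our assumption):

def prev_word_is(w):                # template 1
    return lambda tags, words, i: i > 0 and words[i - 1] == w

def word_two_before_is(w):          # template 2
    return lambda tags, words, i: i > 1 and words[i - 2] == w

def one_of_two_prev_words_is(w):    # template 3
    return lambda tags, words, i: w in words[max(0, i - 2):i]

def cur_word_and_prev_word(w, x):   # template 4
    return lambda tags, words, i: words[i] == w and i > 0 and words[i - 1] == x

def cur_word_and_prev_tag(w, z):    # template 5
    return lambda tags, words, i: words[i] == w and i > 0 and tags[i - 1] == z
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},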
{
"text": "Below we list two lexicalized transformations that were learned. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "Change the tag: The Penn Treebank tagging style manual specifies that in the collocation as... as, the first as is tagged as an adverb and the second is tagged as a preposition. Since as is most frequently tagged as a preposition in the training corpus, the start state tagger will mistag the phrase as ~all as as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "as/preposition tall/adjective as/preposition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "The first lexicalized transformation corrects this mistagging. Note that a stochastic tagger trained on our training set would not correctly tag the first occurrence of as. Although adverbs are more likely than prepositions to follow some verb form tags, the fact that P(aslprcposition ) is much greater than P(as[adverb), and P(adjectiveIpreposition ) is much greater than P(adjective]adverb) lead to as being incorrectly tagged as a preposition by a stochastic tagger. A trigram tagger will correctly tag this collocation in some instances, due to the fact that P(preposition[adverb adjective) is greater than P(prepositionlpreposition adjective), but the outcome will be highly dependent upon the context in which this collocation appears.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "The second transformation arises from the fact that when a verb appears in a context such as We do n'~ __ or We did n't usually ___, the verb is in base form. When training contextual probabilities on 1 million words, an accuracy of 96.7% was achieved. Accuracy dropped to 96.3% when contextual probabilities were trained on 64,000 words. We trained the transformation-based tagger on 600,000 words from the same corpus, making the same closed vocabulary assumption, 9 and achieved an accuracy of 97.2% on a separate 150,000 word test set. The transformationbased learner achieved better performance, despite the fact that contextual information was captured in only 267 simple nonstochastic rules, as opposed to 10,000 contextual probabilities that were learned by the stochastic tagger. To see whether lexicalized transformations were contributing to the accuracy rate, we ran the exact same test using the tagger trained using the earlier transformation template set, which contained no transformations making reference to words. Accuracy of that tagger was 96.9%. Disallowing lexicalized transformations resulted in an 11% increase in the error rate. These results are summarized in table 1. 9In both [Weischedel et al. 93] and here, the test set was incorporated into the lexicon, but was not used in learning contextual information. Testing with no unknown words might seem llke an unrealistic test. We have done so for three reasons (We show results when unknown words are included later in the paper): (1) to allow for a comparison with previously quoted results, (2) to isolate known word accuracy from unknown word accuracy, and (3) in some systems, such as a closed vocabulary speech recognition system, the assumption that all words are known is valid.",
"cite_spans": [
{
"start": 1205,
"end": 1227,
"text": "[Weischedel et al. 93]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
{
"text": "When transformations are allowed to make reference to words and word pairs, some relevant information is probably missed due to sparse data. we are currently exploring the possibility of incorporating word classes into the rule-based learner in hopes of overcoming this problem. The idea is quite simple. Given a source of word class information, such as WordNet [Miller 90], the learner is extended such that a rule is allowed to make reference to parts of speech, words, and word classes, allowing for rules such as Change the tag from X to Y if the following word belongs to word class Z. This approach has already been successfully applied to a system for prepositional phrase disambiguation [Brill 93a ].",
"cite_spans": [
{
"start": 696,
"end": 706,
"text": "[Brill 93a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},
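{
"text": "A sketch of such a word-class condition, using WordNet through NLTK (NLTK and the synset-based notion of class membership are our assumptions; the paper names WordNet [Miller 90] but no particular interface):

from nltk.corpus import wordnet as wn

def next_word_in_class(synset_name):
    # Condition: the following word belongs to word class Z, where Z is a
    # WordNet synset and membership means Z appears among the hypernym
    # ancestors of some sense of the word.
    target = wn.synset(synset_name)
    def condition(tags, words, i):
        if i + 1 >= len(words):
            return False
        for sense in wn.synsets(words[i + 1]):
            if sense == target or target in sense.closure(lambda s: s.hypernyms()):
                return True
        return False
    return condition

# e.g. 'change the tag from X to Y if the following word is an animal':
before_animal = next_word_in_class('animal.n.01')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICALIZING THE TAGGER",
"sec_num": "4."
},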
{
"text": "In addition to not being lexicalized, another problem with the original transformation-based tagger was its relatively low accuracy at tagging unknown words3 \u00b0 In the start state annotator for tagging, words are assigned their most likely tag, estimated from a training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},
{
"text": "In khe original formulation of the rule-based tagger, a rather ad-hoc algorithm was used to guess the most likely tag for words not appearing in the training corpus. To try to improve upon unknown word tagging accuracy, we built a transformation-based learner to learn rules for more accurately guessing the most likely tag for words not seen in the training corpus. If the most likely tag for unknown words can be assigned with high accuracy, then the contexual rules can be used to improve accuracy, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},
{
"text": "In the transformation-based unknown-word tagger, the start state annotator naively labels the most likely tag for unknown words as proper noun if capitalized and common noun otherwise, lz . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},
{
"text": "Adding the character string x as a suffix results in a word (Izl <= 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Adding the character string x as a prefix results in a word (1 :1 <= 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Word W ever appears immediately to the left (right) of the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "8. Character Z appears in the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
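{
"text": "A sketch of these template conditions as functions of the unknown word and a lexicon (the set of words observed in unannotated text); this representation is our assumption:

def deleting_prefix_gives_word(x):   # template 1
    return lambda word, lexicon: word.startswith(x) and word[len(x):] in lexicon

def has_suffix(x):                   # template 4
    return lambda word, lexicon: len(x) <= 4 and word.endswith(x)

def adding_suffix_gives_word(x):     # template 5
    return lambda word, lexicon: (word + x) in lexicon

def word_ever_to_left(w, left_neighbors):
    # template 7 (left variant); left_neighbors maps a word to the set of
    # words ever seen immediately before it in the unannotated text
    return lambda word, lexicon: w in left_neighbors.get(word, set())

def contains_char(z):                # template 8
    return lambda word, lexicon: z in word

# The first learned rule below, 'common noun -> plural common noun if the
# word has suffix -s', would pair ('NN', 'NNS') with has_suffix('s').
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},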
{
"text": "An unannotated text can be used to check the conditions in all of the above transformation templates. Annotated text is necessary in training to measure the effect of transformations on tagging accuracy. Below are the first 10 transformation learned for tagging unknown words in the Wall Street Journal corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "Change tag:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "1. From common noun to plural common noun if the word has suffix -s t2 2. From common noun to number if the word has character .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "3. From common noun to adjective if the word has character -4. From common noun to past participle verb if the word has suffix -ed 5. From common noun to gerund or present participle verb if the word has suffix -ing 6. To adjective if adding the suffix -ly results in a word Below we list the set of allowable transformations: 7. To adverb if the word has suffix -ly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".",
"sec_num": null
},
{
"text": "10 This section describes work done in part while the author was at the University of Pennsylvania.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "llIf we change the tagger to tag all unknown words as common nouns, then a number of rules are learned of the form: change tag to proper noun if the prefix is \"E\", since the learner is not provided with the concept of upper case in its set of transformation templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "8. From common noun to number if the word $ ever appears immediately to the left 9. From common noun to adjective if the word has suffix -al 10. From noun to base form verb if the word would ever appears immediately to the left.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "Keep in mind that no specific affixes are prespecified. A transformation can make reference to any string of characters up to a bounded length. So while the first rule specifies the English suffix -s, the rule learner also considered such nonsensical rules as: change a tag to adjective if the word has suffix \"xhqr\". Also, absolutely no English-specific information need be prespecified in the learner.[13] ([12] Note that the first rule will result in the mistagging of mistress. The 17th learned rule fixes this problem. This rule states: change a tag from plural common noun to singular common noun if the word has suffix -ss.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},
{
"text": "We then ran the following experiment using 1.1 million words of the Penn Treebank Tagged Wall Street Journal Corpus. The first 950,000 words were used for training and the next 150,000 words were used for testing. Annotations of the test corpus were not used in any way to train the system. From the 950,000 word training corpus, 350,000 words were used to learn rules for tagging unknown words, and 600,000 words were used to learn contextual rules. 148 rules were learned for tagging unknown words, and 267 contextual tagging rules were learned. Unknown word accuracy on the test corpus was 85.0%, and overall tagging accuracy on the test corpus was 96.5%. To our knowledge, this is the highest overall tagging accuracy ever quoted on the Penn Treebank Corpus when making the open vocabulary assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "In [Weischedel et al. 93 ], a statistical approach to tagging unknown words is shown. In this approach, a number of suffixes and important features are prespecified. Then, for unknown words:",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "[Weischedel et al. 93",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "P(WIT) = p(unknown wordlT) * p(Capitalize-featurelT ) * p(suffixes, hyphenationIT)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
{
"text": "Using this equation for unknown word emit probabilities within the stochastic tagger, an accuracy of 85% was obtained on the Wall Street Journal corpus. This portion of the stochastic model has over 1,000 parameters, with 108 possible unique emit probabilities, as opposed to only 148 simple rules that are learned and used in the rule-based approach. We have obtained comparable performance on unknown words, while capturing the information in a much more concise and perspicuous manner, and without prespecifying any language-specific or corpus-specific information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The first",
"sec_num": "2."
},
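{
"text": "A sketch of this emit probability (the factorization is the model's; the concrete feature encoding and table names below are our guesses for illustration):

def unknown_emit_prob(word, tag, p_unk, p_cap, p_feat):
    # P(W|T) = p(unknown word | T) * p(capitalize-feature | T)
    #          * p(suffixes, hyphenation | T)
    capitalized = word[:1].isupper()
    features = (word[-3:], '-' in word)  # e.g. final letters plus hyphenation
    return p_unk[tag] * p_cap[(capitalized, tag)] * p_feat[(features, tag)]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UNKNOWN WORDS",
"sec_num": "5."
},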
{
"text": "There are certain circumstances where one is willing to relax the one tag per word requirement in order to increase the probability that the correct tag will be assigned to each word. In [DeMarcken 90, Weischedel et al. 93] , k-best tags are assigned within a stochastic tagger by returning all tags within some threshold of probability of being correct for a particular word.",
"cite_spans": [
{
"start": 187,
"end": 201,
"text": "[DeMarcken 90,",
"ref_id": null
},
{
"start": 202,
"end": 223,
"text": "Weischedel et al. 93]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
{
"text": "We can modify the transformation-based tagger to return multiple tags for a word by making a simple mod-Z3This learner has also been applied to tagging Old English. See Table 2 : Results from k-best tagging.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
{
"text": "ification to the contextual transformations described above. The initial-state annotator is the tagging output of the transformation-based tagger described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
{
"text": "The allowable transformation templates are the same as the contextual transformation templates listed above, but with the action change tag X to tag Y modified to add tag X to tag Y or add tag X to word W. Instead of changing the tagging of a word, transformations now add alternative taggings to a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
{
"text": "When allowing more than one tag per word, there is a trade-off between accuracy and the average number of tags for each word. Ideally, we would like to achieve as large an increase in accuracy with as few extra tags as possible. Therefore, in training we find transformations that maximize precisely this function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
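{
"text": "The paper does not give the exact functional form of this trade-off, so the following sketch encodes one natural reading, the ratio of accuracy gained to extra tags added (tag sets per word are our representation):

def accuracy(corpus, truth):
    # With k-best tagging, a word counts as correct if its tag set
    # contains the true tag.
    return sum(gold in tags
               for (_, tags), (_, gold) in zip(corpus, truth)) / len(truth)

def avg_tags_per_word(corpus):
    return sum(len(tags) for _, tags in corpus) / len(corpus)

def kbest_gain(transformation, corpus, truth):
    # Score a candidate 'add tag' transformation by accuracy gained
    # relative to the growth in the average number of tags per word.
    new = transformation.apply(corpus)
    extra = avg_tags_per_word(new) - avg_tags_per_word(corpus)
    gain = accuracy(new, truth) - accuracy(corpus, truth)
    return gain / extra if extra > 0 else float('-inf')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},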
{
"text": "In table 2 we present results from first using the one-tagper-word transformation-based tagger described in the previous section and then applying the k-best tag transformations. These transformations were learned from a separate 240,000 word corpus. 14",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-BEST TAGS",
"sec_num": "6."
},
{
"text": "In this paper, we have described a number of extensions to previous work in rule-based part of speech tagging, including the ability to make use of lexical relationships previously unused in tagging, a new method for tagging unknown words, and a way to increase accuracy by returning more than one tag per word in some instances. We have demonstrated that the rule-based approach obtains performance comparable to that of stochastic taggets on unknown word tagging and better performance on known word tagging, despite the fact that the rulebased tagger captures linguistic information in a small number of simple non-stochastic rules, as opposed to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "7."
},
{
"text": "\u2022 14Unfortunately, it is difficult to find results to compare these k-best tag results to. In [DeMarcken 90], the test set is included in the training set, and so it is difficult to know how this system would do on fresh text. In [Weischedel et al. 93 ], a k-best tag experiment was run on the Wall Street Journal corpus. They quote the average number of tags per word for various threshold settings, but do not provide accuracy results. large numbers of lexical and contextual probabilities. Recently, we have begun to explore the possibility of extending these techniques to both learning pronunciation networks for speech recognition and to learning mappings between sentences and semantic representations.",
"cite_spans": [
{
"start": 230,
"end": 251,
"text": "[Weischedel et al. 93",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "7."
},
{
"text": ". The current word is w and the preceding (following) word is x.4 All possible instantiations of transformation templates. 5The search is data-d.riven~ so only a very small percentage of possible transformations need be examined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple rule-based part of speech tagger",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "92] E. Brill 1992. A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Ap- plied Natural Language Processing, Trento, Italy.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic grammar induction and parsing free text: a transformation-based approach",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill 93] E. Brill 1993. Automatic grammar induction and parsing free text: a transformation-based approach. In Proceedings of the 31st Meeting of the Association of Computational Linguistics, Columbus, Ohio.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A corpus-based approach to language learning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill 93a] E. Brill 1993. A corpus-based approach to lan- guage learning. Ph.D. Dissertation, Department of Computer and Information Science, University of Penn- sylvania.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Equations for part-ofspeech tagging",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hendrickson",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jacobson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perkowitz",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of Conference of the American Association for Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Charniak et al. 93] E. Charniak, C. Hendrickson, N. Jacob- son, and M. Perkowitz. 1993. Equations for part-of- speech tagging. In Proceedings of Conference of the American Association for Artificial Intelligence (AAAI), \u2022 Washington, D.C.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A stochastic parts program and noun phrase parser for unrestricted text",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the Second Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Pro- ceedings of the Second Conference on Applied Natural Language Processing, Austin, Texas.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A practical part-of-speech tagger",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cutting",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sibun",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Cutting et al. 92] D. Cutting, J. Kupiec, J. Pedersen, and P. Sibun. 1992. A practical part-of-speech tagger In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Grammatical category disambiguation by statistical optimization",
"authors": [],
"year": 1988,
"venue": "Computational Linguistics",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "88] S. DeRose 1988. Grammatical category dis- ambiguation by statistical optimization. Computational Linguistics, Volume 14.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parsing the LOB corpus",
"authors": [
{
"first": "C",
"middle": [],
"last": "Demarcken",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 1990 Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[DeMarcken 90] C. DeMarcken. 1990. Parsing the LOB cor- pus. In Proceedings of the 1990 Conference of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "String Analysis of Language Structure",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harris 62] Z. Harris. 1962. String Analysis of Language Structure, Mouton and Co., The Hague.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A computational approach to grammatical coding of English words",
"authors": [
{
"first": "S",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Simmons",
"suffix": ""
}
],
"year": 1963,
"venue": "JACM",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Klein and Simmons 63] S. Klein and R. Simmons. 1963. A computational approach to grammatical coding of En- glish words. JACM, Volume 10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Markov source modeling of text generation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1985,
"venue": "Impact of Processing Techniques on Communication. 3. Skwirzinski",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Jelinek 85] F. Jelinek. 1985. Markov source modeling of text generation. In Impact of Processing Techniques on Communication. 3. Skwirzinski, ed., Dordrecht.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Robust part-of-speech tagging using a hidden Markov model",
"authors": [],
"year": 1992,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec 92] J. Kupiec. 1992. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a large annotated corpus of English: the Penn Treebank",
"authors": [
{
"first": "",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Marcus et al. 93] M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguis- tics, Volume. 19.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Tagging text with a probabilistic model",
"authors": [
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1991,
"venue": "1EEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Merialdo 91] B. Merialdo. 1991. Tagging text with a prob- abilistic model. In 1EEE International Conference on Acoustics, Speech and Signal Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A new quantitative quality measure, for machine translation Systems",
"authors": [
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of COLING-92",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mi//er 90J G. Miller. 1990. WordNet: an on-line lexical database. International Journal of Lexicography. [Suet al. 92] K. Su, M. Wu, and J. Chang. 1992. A new quantitative quality measure, for machine transla- tion Systems. In Proceedings of COLING-92, Nantes, France. [Weischedel et al. 93] R. Weischedel, M. Meteer, R.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Coping with ambiguity and unknown words through probabilistic models",
"authors": [
{
"first": "L",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmucci",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, L. Ramshaw, and J. Palmucci. 1993. Coping with ambiguity and unknown words through probabilis- tic models. Computational Linguistics, Volume 19.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Transformation-Based Error-Driven Learning.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Srin 93a].",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"3\">A stochastic trigram tagger would have to capture this</td></tr><tr><td colspan=\"3\">linguistic information indirectly from frequency counts</td></tr><tr><td colspan=\"2\">of all trigrams of the form: s</td><td/></tr><tr><td>*</td><td>ADVERB</td><td>PRESENT_VERB</td></tr></table>",
"text": "experiments were run on the Penn Treebank tagged Wall Street Journal corpus, version 0.5[Marcus et al. 93].",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"3\">: Comparison of Tagging Accuracy With No Un-</td></tr><tr><td>known Words</td><td/><td/></tr><tr><td>ADVERB</td><td>*</td><td>PRESENT_VERB</td></tr><tr><td>ADVERB</td><td>*</td><td>BASE_VERB</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
}
}
}
}