{
"paper_id": "N03-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:54.857033Z"
},
"title": "Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford University Stanford",
"location": {
"postCode": "94305-9040, 94305-9040",
"settlement": "Stanford",
"region": "CA, CA"
}
},
"email": "[email protected]"
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford University Stanford",
"location": {
"postCode": "94305-9040, 94305-9040",
"settlement": "Stanford",
"region": "CA, CA"
}
},
"email": "[email protected]"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hebrew University Stanford",
"location": {
"postCode": "94305-9040, 91904",
"settlement": "Jerusalem",
"region": "CA",
"country": "Israel"
}
},
"email": "[email protected]"
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hebrew University Stanford",
"location": {
"postCode": "94305-9040, 91904",
"settlement": "Jerusalem",
"region": "CA",
"country": "Israel"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. 1 Rather than subscripting all variables with a position index, we use a hopefully clearer relative notation, where t 0 denotes the current position and t \u2212n and t +n are left and right context tags, and similarly for words.",
"pdf_parse": {
"paper_id": "N03-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. 1 Rather than subscripting all variables with a position index, we use a hopefully clearer relative notation, where t 0 denotes the current position and t \u2212n and t +n are left and right context tags, and similarly for words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Almost all approaches to sequence problems such as partof-speech tagging take a unidirectional approach to conditioning inference along the sequence. Regardless of whether one is using HMMs, maximum entropy conditional sequence models, or other techniques like decision trees, most systems work in one direction through the sequence (normally left to right, but occasionally right to left, e.g., Church (1988) ). There are a few exceptions, such as Brill's transformation-based learning (Brill, 1995) , but most of the best known and most successful approaches of recent years have been unidirectional.",
"cite_spans": [
{
"start": 396,
"end": 409,
"text": "Church (1988)",
"ref_id": "BIBREF6"
},
{
"start": 487,
"end": 500,
"text": "(Brill, 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most sequence models can be seen as chaining together the scores or decisions from successive local models to form a global model for an entire sequence. Clearly the identity of a tag is correlated with both past and future tags' identities. However, in the unidirectional (causal) case, only one direction of influence is explicitly considered at each local point. For example, in a left-to-right first-order HMM, the current tag t 0 is predicted based on the previous tag t \u22121 (and the current word). 1 The backward interaction between t 0 and the next tag t +1 shows up implicitly later, when t +1 is generated in turn. While unidirectional models are therefore able to capture both directions of influence, there are good reasons for suspecting that it would be advantageous to make information from both directions explicitly available for conditioning at each local point in the model: (i) because of smoothing and interactions with other modeled features, terms like P(t 0 |t +1 , . . .) might give a sharp estimate of t 0 even when terms like P(t +1 |t 0 , . . .) do not, and (ii) jointly considering the left and right context together might be especially revealing. In this paper we exploit this idea, using dependency networks, with a series of local conditional loglinear (aka maximum entropy or multiclass logistic regression) models as one way of providing efficient bidirectional inference.",
"cite_spans": [
{
"start": 503,
"end": 504,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Secondly, while all taggers use lexical information, and, indeed, it is well-known that lexical probabilities are much more revealing than tag sequence probabilities (Charniak et al., 1993) , most taggers make quite limited use of lexical probabilities (compared with, for example, the bilexical probabilities commonly used in current statistical parsers). While modern taggers may be more principled than the classic CLAWS tagger (Marshall, 1987) , they are in some respects inferior in their use of lexical information: CLAWS, through its IDIOMTAG module, categorically captured many important, correct taggings of frequent idiomatic word sequences. In this work, we incorporate appropriate multiword feature templates so that such facts can be learned and used automatically by the model. Having expressive templates leads to a large number of features, but we show that by suitable use of a prior (i.e., regularization) in the conditional loglinear modelsomething not used by previous maximum entropy taggers -many such features can be added with an overall positive effect on the model. Indeed, as for the voted perceptron of Collins (2002) , we can get performance gains by reducing the support threshold for features to be included in the model. Combining all these ideas, together with a few additional handcrafted unknown word features, gives us a part-of-speech tagger with a per-position tag accuracy of 97.24%, and a whole-sentence correct rate of 56.34% on Penn Treebank WSJ data. This is the best automatically learned part-of-speech tagging result known to us, representing an error reduction of 4.4% on the model presented in Collins (2002) , using the same data splits, and a larger error reduction of 12.1% from the more similar best previous loglinear model in Toutanova and Manning (2000) .",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "(Charniak et al., 1993)",
"ref_id": "BIBREF4"
},
{
"start": 431,
"end": 447,
"text": "(Marshall, 1987)",
"ref_id": "BIBREF15"
},
{
"start": 1131,
"end": 1145,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1642,
"end": 1656,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1780,
"end": 1808,
"text": "Toutanova and Manning (2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "w 1 w 2 w 3 . . . . w n t 1 t 2 t 3 t n (a) Left-to-Right CMM w 1 w 2 w 3 . . . . w n t 1 t 2 t 3 t n (b) Right-to-Left CMM w 1 w 2 w 3 . . . . w n t 1 t 2 t 3 t n (c) Bidirectional Dependency Network",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When building probabilistic models for tag sequences, we often decompose the global probability of sequences using a directed graphical model (e.g., an HMM (Brants, 2000) or a conditional Markov model (CMM) (Ratnaparkhi, 1996) ). In such models, the probability assigned to a tagged sequence of words x = t, w is the product of a sequence of local portions of the graphical model, one from each time slice. For example, in the left-to-right CMM shown in figure 1(a),",
"cite_spans": [
{
"start": 156,
"end": 170,
"text": "(Brants, 2000)",
"ref_id": "BIBREF1"
},
{
"start": 207,
"end": 226,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "P(t, w) = i P(t i |t i\u22121 , w i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "That is, the replicated structure is a local model P(t 0 |t \u22121 , w 0 ). 2 Of course, if there are too many conditioned quantities, these local models may have to be estimated in some sophisticated way; it is typical in tagging to populate these models with little maximum entropy models. For example, we might populate a model for P(t 0 |t \u22121 , w 0 ) with a maxent model of the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "P \u03bb (t 0 |t \u22121 , w 0 ) = exp(\u03bb t 0 ,t \u22121 + \u03bb t 0 ,w 0 ) t 0 exp(\u03bb t 0 ,t \u22121 + \u03bb t 0 ,w 0 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
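{
"text": "A minimal sketch (ours, not code from the paper) of such a local maxent model, with one indicator feature per instantiation of the two templates t 0 , t \u22121 and t 0 , w 0 ; the toy weights and two-tag set below are hypothetical, standing in for parameters the real tagger learns from data.\n\nimport math\n\n# Hypothetical learned weights for indicator features.\nweights = {\n    (\"NN\", \"prev=DT\"): 1.2, (\"VB\", \"prev=DT\"): -0.8,\n    (\"NN\", \"word=dog\"): 2.0, (\"VB\", \"word=dog\"): -1.5,\n}\nTAGS = [\"NN\", \"VB\"]\n\ndef p_local(t0, t_prev, w0):\n    \"\"\"P(t0 | t-1, w0) as a softmax over the two feature templates.\"\"\"\n    def score(t):\n        return (weights.get((t, \"prev=\" + t_prev), 0.0)\n                + weights.get((t, \"word=\" + w0), 0.0))\n    z = sum(math.exp(score(t)) for t in TAGS)\n    return math.exp(score(t0)) / z\n\nprint(p_local(\"NN\", \"DT\", \"dog\"))  # ~0.996 with the toy weights above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},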
{
"text": "In this case, the w 0 and t \u22121 can have joint effects on t 0 , but there are not joint features involving all three variables (though there could have been such features). We say that this model uses the feature templates t 0 , t \u22121 (previous tag features) and t 0 , w 0 (current word features). Clearly, both the preceding tag t \u22121 and following tag t +1 carry useful information about a current tag t 0 . Unidirectional models do not ignore this influence; in the case of a left-to-right CMM, the influence of t \u22121 on t 0 is explicit in the P(t 0 |t \u22121 , w 0 ) local model, while the influence of t +1 on t 0 is implicit in the local model at the next position (via P(t +1 |t 0 , w +1 )). The situation is reversed for the right-to-left CMM in figure 1(b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "From a seat-of-the-pants machine learning perspective, when building a classifier to label the tag at a certain position, the obvious thing to do is to explicitly include in the local model all predictive features, no matter on which side of the target position they lie. There are two good formal reasons to expect that a model explicitly conditioning on both sides at each position, like figure 1(c) could be advantageous. First, because of smoothing effects and interaction with other conditioning features (like the words), left-to-right factors like P(t 0 |t \u22121 , w 0 ) do not always suffice when t 0 is implicitly needed to determine t \u22121 . For example, consider a case of observation bias (Klein and Manning, 2002) for a first-order left-toright CMM. The word to has only one tag (TO) in the PTB tag set. The TO tag is often preceded by nouns, but rarely by modals (MD). In a sequence will to fight, that trend indicates that will should be a noun rather than a modal verb. However, that effect is completely lost in a CMM like (a): P(t will |will, star t ) prefers the modal tagging, and P(TO|to, t will ) is roughly 1 regardless of t will . While the model has an arrow between the two tag positions, that path of influence is severed. 3 The same problem exists in the other direction. If we use the symmetric right-",
"cite_spans": [
{
"start": 696,
"end": 721,
"text": "(Klein and Manning, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 1245,
"end": 1246,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "Figure 2: Simple dependency nets: (a) the Bayes' net for P(A)P(B|A), (b) the Bayes' net for P(A|B)P(B), (c) a bidirectional net with models of P(A|B) and P(B|A), which is not a Bayes' net.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "to-left model, fight will receive its more common noun tagging by symmetric reasoning. However, the bidirectional model (c) discussed in the next section makes both directions available for conditioning at all locations, using replicated models of P(t 0 |t \u22121 , t +1 , w 0 ), and will be able to get this example correct. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Dependency Networks",
"sec_num": "2"
},
{
"text": "While the structures in figure 1(a) and (b) are wellunderstood graphical models with well-known semantics, figure 1(c) is not a standard Bayes' net, precisely because the graph has cycles. Rather, it is a more general dependency network (Heckerman et al., 2000) . Each node represents a random variable along with a local conditional probability model of that variable, conditioned on the source variables of all incoming arcs. In this sense, the semantics are the same as for standard Bayes' nets. However, because the graph is cyclic, the net does not correspond to a proper factorization of a large joint probability estimate into local conditional factors. Consider the two-node cases shown in figure 2. Formally, for the net in (a), we can write P(a, b) = P(a)P(b|a). For (b) we write P(a, b) = P(b)P(a|b). However, in (c), the nodes A and B carry the information P(a|b) and P(b|a) respectively. The chain rule doesn't allow us to reconstruct P(a, b) by multiplying these two quantities. Under appropriate conditions, we could reconstruct P(a, b) from these quantities using Gibbs sampling, and, in general, that is the best we can do. However, while reconstructing the joint probabilities from these local conditional probabilities may be difficult, estimating the local probabilities themselves is no harder than it is for acyclic models: we take observations of the local environments and use any maximum likelihood estimation method we desire. In our experiments, we used local maxent models, but if the event space allowed, (smoothed) relative counts would do.",
"cite_spans": [
{
"start": 237,
"end": 261,
"text": "(Heckerman et al., 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics of Dependency Networks",
"sec_num": "2.1"
},
{
"text": "function bestScore() return bestScoreSub(n + 2, end, end, end );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics of Dependency Networks",
"sec_num": "2.1"
},
{
"text": "function bestScoreSub(i + 1, t i\u22121 , t i , t i+1 ) % memoization if (cached(i + 1, t i\u22121 , t i , t i+1 )) return cache(i + 1, t i\u22121 , t i , t i+1 ); % left boundary case if (i = \u22121) if ( t i\u22121 , t i , t i+1 == star t, star t, star t ) return 1; else return 0; % recursive case return max t i\u22122 bestScoreSub(i, t i\u22122 , t i\u22121 , t i )\u00d7 P(t i |t i\u22121 , t i+1 , w i );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics of Dependency Networks",
"sec_num": "2.1"
},
{
"text": "Figure 3: Pseudocode for polynomial inference for the firstorder bidirectional CMM (memoized version).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics of Dependency Networks",
"sec_num": "2.1"
},
{
"text": "Cyclic or not, we can view the product of local probabilities from a dependency network as a score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},
{
"text": "scor e(x) = i P(x i |Pa(x i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},
{
"text": "where Pa(x i ) are the nodes with arcs to the node x i . In the case of an acyclic model, this score will be the joint probability of the event x, P(x). In the general case, it will not be. However, we can still ask for the event, in this case the tag sequence, with the highest score. For dependency networks like those in figure 1, an adaptation of the Viterbi algorithm can be used to find the maximizing sequence in polynomial time. Figure 3 gives pseudocode for the concrete case of the network in figure 1(d); the general case is similar, and is in fact just a max-plus version of standard inference algorithms for Bayes' nets (Cowell et al., 1999, 97) . In essence, there is no difference between inference on this network and a second-order left-to-right CMM or HMM. The only difference is that, when the Markov window is at a position i , rather than receiving the score for P(t i |t i\u22121 , t i\u22122 , w i ), one receives the score for",
"cite_spans": [
{
"start": 633,
"end": 658,
"text": "(Cowell et al., 1999, 97)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 437,
"end": 445,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},
{
"text": "P(t i\u22121 |t i , t i\u22122 , w i\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},
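{
"text": "The search in figure 3 can be rendered as a short runnable program. The sketch below (ours, not the authors' implementation) is a forward dynamic program over (t_i, t_{i+1}) windows that computes the same maximum product of local scores; the local model p and the tag set are supplied by the caller, and the start/end boundary tags are implicit, as in the pseudocode.\n\ndef best_score(words, tags, p):\n    \"\"\"Max over tag sequences of prod_i p(t_i | t_{i-1}, t_{i+1}, w_i).\n\n    p(t, t_prev, t_next, word) is the local conditional model; boundary\n    tags START/END are handled implicitly, as in figure 3.\n    \"\"\"\n    START, END = \"<s>\", \"</s>\"\n    n = len(words)\n    # f[(a, b)]: best score over t_1..t_i of taggings with (t_i, t_{i+1}) = (a, b)\n    f = {(START, b): 1.0 for b in tags}  # i = 0: no local factors scored yet\n    for i in range(1, n + 1):\n        nexts = tags if i < n else [END]\n        prevs = [START] if i == 1 else tags\n        f = {(a, b): max(f[(z, a)] * p(a, z, b, words[i - 1]) for z in prevs)\n             for a in tags for b in nexts}\n    return max(f.values())\n\n# toy usage: with a uniform local model over two tags, the best score is 0.5^n\nprint(best_score([\"time\", \"flies\"], [\"N\", \"V\"], lambda t, tp, tn, w: 0.5))  # 0.25",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},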
{
"text": "There are some foundational issues worth mentioning. As discussed previously, the maximum scoring sequence need not be the sequence with maximum likelihood according to the model. There is therefore a worry with these models about a kind of \"collusion\" where the model locks onto conditionally consistent but jointly unlikely sequences. Consider the two-node network in figure 2(c). If we have the following distribution of observations (in the form ab) 11, 11, 11, 12, 21, 33 , then clearly the most likely state of the network is 11. However, the score of 11 is P(a = 1|b = 1)P(b = 1|a = 1) = 3/4 \u00d7 3/4 = 9/16, while the score of 33 is 1. An additional related problem is that the training set loss (sum of negative logarithms of the sequence scores) does not bound the training set error (0/1 loss on sequences) from Data Set Sect'ns Training 0-18 38,219 912,344 0 Develop 19-21 5,527 131,768 4,467 Test 22-24 5,462 129,654 3,649 Table 1 : Data set splits used.",
"cite_spans": [
{
"start": 454,
"end": 457,
"text": "11,",
"ref_id": null
},
{
"start": 458,
"end": 461,
"text": "11,",
"ref_id": null
},
{
"start": 462,
"end": 465,
"text": "11,",
"ref_id": null
},
{
"start": 466,
"end": 469,
"text": "12,",
"ref_id": null
},
{
"start": 470,
"end": 473,
"text": "21,",
"ref_id": null
},
{
"start": 474,
"end": 476,
"text": "33",
"ref_id": null
}
],
"ref_spans": [
{
"start": 837,
"end": 950,
"text": "Training 0-18 38,219 912,344 0 Develop 19-21 5,527 131,768 4,467 Test 22-24 5,462 129,654 3,649 Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},
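{
"text": "The arithmetic in this example is easy to check; a small script (ours) recovers the relative-frequency conditionals and the two scores from the six observations:\n\nfrom collections import Counter\n\nobs = [(1, 1), (1, 1), (1, 1), (1, 2), (2, 1), (3, 3)]\npairs = Counter(obs)\na_marg = Counter(a for a, b in obs)\nb_marg = Counter(b for a, b in obs)\n\ndef score(a, b):\n    \"\"\"Dependency-network score P(a|b) * P(b|a) from relative frequencies.\"\"\"\n    return (pairs[(a, b)] / b_marg[b]) * (pairs[(a, b)] / a_marg[a])\n\nprint(score(1, 1))  # 0.5625 = 9/16\nprint(score(3, 3))  # 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for Linear Dependency Networks",
"sec_num": "2.2"
},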
{
"text": "above. Consider the following training set, for the same network, with each entire data point considered as a label: 11, 22 . The relative-frequency model assigns loss 0 to both training examples, but cannot do better than 50% error in regenerating the training data labels. These issues are further discussed in Heckerman et al. (2000) . Preliminary work of ours suggests that practical use of dependency networks is not in general immune to these theoretical concerns: a dependency network can choose a sequence model that is bidirectionally very consistent but does not match the data very well. However, this problem does not appear to have prevented the networks from performing well on the tagging problem, probably because features linking tags and observations are generally much sharper discriminators than tag sequence features.",
"cite_spans": [
{
"start": 313,
"end": 336,
"text": "Heckerman et al. (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sent. Tokens Unkn",
"sec_num": null
},
{
"text": "It is useful to contrast this framework with the conditional random fields of Lafferty et al. (2001) . The CRF approach uses similar local features, but rather than chaining together local models, they construct a single, globally normalized model. The principal advantage of the dependency network approach is that advantageous bidirectional effects can be obtained without the extremely expensive global training required for CRFs.",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "Lafferty et al. (2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sent. Tokens Unkn",
"sec_num": null
},
{
"text": "To summarize, we draw a dependency network in which each node has as neighbors all the other nodes that we would like to have influence it directly. Each node's neighborhood is then considered in isolation and a local model is trained to maximize the conditional likelihood over the training data of that node. At test time, the sequence with the highest product of local conditional scores is calculated and returned. We can always find the exact maximizing sequence, but only in the case of an acyclic net is it guaranteed to be the maximum likelihood sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sent. Tokens Unkn",
"sec_num": null
},
{
"text": "The part of speech tagged data used in our experiments is the Wall Street Journal data from Penn Treebank III (Marcus et al., 1994) . We extracted tagged sentences from the parse trees. 5 We split the data into training, development, and test sets as in (Collins, 2002) . Table 1 lists character-istics of the three splits. 6 Except where indicated for the model BEST, all results are on the development set.",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Marcus et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 186,
"end": 187,
"text": "5",
"ref_id": null
},
{
"start": 254,
"end": 269,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
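{
"text": "For concreteness, a sketch (ours; the helper name and input format are hypothetical) of the Collins (2002) section split summarized in Table 1:\n\ndef split_wsj(sentences_by_section):\n    \"\"\"Split {WSJ section number: [tagged sentences]} into the Table 1 sets.\"\"\"\n    train = [s for sec in range(0, 19) for s in sentences_by_section.get(sec, [])]\n    dev = [s for sec in range(19, 22) for s in sentences_by_section.get(sec, [])]\n    test = [s for sec in range(22, 25) for s in sentences_by_section.get(sec, [])]\n    return train, dev, test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},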
{
"text": "One innovation in our reporting of results is that we present whole-sentence accuracy numbers as well as the traditional per-tag accuracy measure (over all tokens, even unambiguous ones). This is the quantity that most sequence models attempt to maximize (and has been motivated over doing per-state optimization as being more useful for subsequent linguistic processing: one wants to find a coherent sentence interpretation). Further, while some tag errors matter much more than others, to a first cut getting a single tag wrong in many of the more common ways (e.g., proper noun vs. common noun; noun vs. verb) would lead to errors in a subsequent processor such as an information extraction system or a parser that would greatly degrade results for the entire sentence. Finally, the fact that the measure has much more dynamic range has some appeal when reporting tagging results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The per-state models in this paper are log-linear models, building upon the models in (Ratnaparkhi, 1996) and (Toutanova and Manning, 2000) , though some models are in fact strictly simpler. The features in the models are defined using templates; there are different templates for rare words aimed at learning the correct tags for unknown words. 7 We present the results of three classes of experiments: experiments with directionality, experiments with lexicalization, and experiments with smoothing.",
"cite_spans": [
{
"start": 86,
"end": 105,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF16"
},
{
"start": 110,
"end": 139,
"text": "(Toutanova and Manning, 2000)",
"ref_id": "BIBREF18"
},
{
"start": 346,
"end": 347,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In this section, we report experiments using log-linear CMMs to populate nets with various structures, exploring the relative value of neighboring words' tags. Table 2 lists the discussed networks. All networks have the same vertical feature templates: t 0 , w 0 features for known words and various t 0 , \u03c3 (w 1n ) word signature features for all words, known or not, including spelling and capitalization features (see section 3.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments with Directionality",
"sec_num": "3.1"
},
{
"text": "Just this vertical conditioning gives an accuracy of 93.69% (denoted as \"Baseline\" in table 2). 8 Condition- 6 Tagger results are only comparable when tested not only on the same data and tag set, but with the same amount of training data. Brants (2000) illustrates very clearly how tagging performance increases as training set size grows, largely because the percentage of unknown words decreases while system performance on them increases (they become increasingly restricted as to word class).",
"cite_spans": [
{
"start": 109,
"end": 110,
"text": "6",
"ref_id": null
},
{
"start": 240,
"end": 253,
"text": "Brants (2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Directionality",
"sec_num": "3.1"
},
{
"text": "7 Except where otherwise stated, a count cutoff of 2 was used for common word features and 35 for rare word features (templates need a support set strictly greater in size than the cutoff before they are included in the model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Directionality",
"sec_num": "3.1"
},
{
"text": "8 Charniak et al. (1993) noted that such a simple model got 90.25%, but this was with no unknown word model beyond a prior distribution over tags. Abney et al. (1999) raise this baseline to 92.34%, and with our sophisticated unknown word model, it gets even higher. The large number of unambiguous tokens and ones with very skewed distributions make the base- ing on the previous tag as well (model L, t 0 , t \u22121 features) gives 95.79%. The reverse, model R, using the next tag instead, is slightly inferior at 95.14%. Model L+R, using both tags simultaneously (but with only the individual-direction features) gives a much better accuracy of 96.57%. Since this model has roughly twice as many tag-tag features, the fact that it outperforms the unidirectional models is not by itself compelling evidence for using bidirectional networks. However, it also outperforms model L+L 2 which adds the t 0 , t \u22122 secondprevious word features instead of next word features, which gives only 96.05% (and R+R 2 gives 95.25%). We conclude that, if one wishes to condition on two neighboring nodes (using two sets of 2-tag features), the symmetric bidirectional model is superior. High-performance taggers typically also include joint three-tag counts in some way, either as tag trigrams (Brants, 2000) or tag-triple features (Ratnaparkhi, 1996, Toutanova and Manning, 2000) . Models LL, RR, and CR use only the vertical features and a single set of tag-triple features: the left tags (t \u22122 , t \u22121 and t 0 ), right tags (t 0 , t +1 , t +2 ), or centered tags (t \u22121 , t 0 , t +1 ) respectively. Again, with roughly equivalent feature sets, the left context is better than the right, and the centered context is better than either unidirectional context. line for this task high, while substantial annotator noise creates an unknown upper bound on the task.",
"cite_spans": [
{
"start": 2,
"end": 24,
"text": "Charniak et al. (1993)",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 166,
"text": "Abney et al. (1999)",
"ref_id": "BIBREF0"
},
{
"start": 1275,
"end": 1289,
"text": "(Brants, 2000)",
"ref_id": "BIBREF1"
},
{
"start": 1313,
"end": 1346,
"text": "(Ratnaparkhi, 1996, Toutanova and",
"ref_id": null
},
{
"start": 1347,
"end": 1361,
"text": "Manning, 2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Directionality",
"sec_num": "3.1"
},
{
"text": ", t \u22121 , t 0 , t \u22121 , t \u22122 , t 0 , t \u22121 , t \u22122 , t \u22123 118,752 45.14% 96.20% 86.52% R+LR+LLR t 0 , t +1 , t 0 , t \u22121 , t +1 , t 0 , t \u22121 , t \u22122 , t +1 115,790 51.69% 96.77% 87.91% L+LL+LR+RR+R t 0 , t \u22121 , t 0 , t \u22121 , t \u22122 , t 0 , t \u22121 , t +1 , t 0 , t +1 , t 0 , t +1 , t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Directionality",
"sec_num": "3.1"
},
{
"text": "Lexicalization has been a key factor in the advance of statistical parsing models, but has been less exploited for tagging. Words surrounding the current word have been occasionally used in taggers, such as (Ratnaparkhi, 1996) , Brill's transformation based tagger (Brill, 1995) , and the HMM model of Lee et al. (2000) , but nevertheless, the only lexicalization consistently included in tagging models is the dependence of the part of speech tag of a word on the word itself.",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF16"
},
{
"start": 265,
"end": 278,
"text": "(Brill, 1995)",
"ref_id": "BIBREF3"
},
{
"start": 302,
"end": 319,
"text": "Lee et al. (2000)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "In maximum entropy models, joint features which look at surrounding words and their tags, as well as joint features of the current word and surrounding words are in principle straightforward additions, but have not been incorporated into previous models. We have found these features to be very useful. We explore here lexicalization both alone and in combination with preceding and following tag histories. Table 3 shows the development set accuracy of several models with various lexical features. All models use the same rare word features as the models in Table 2 . The first two rows show a baseline model using the current word only. The count cutoff for this feature was 0 in the first model and 2 for the model in the second row. As there are no tag sequence features in these models, the accuracy drops significantly if a higher cutoff is used (from a per tag accuracy of about 93.7% to only 60.2%).",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 415,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 560,
"end": 567,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "The third row shows a model where a tag is decided solely by the three words centered at the tag position (3W). As far as we are aware, models of this sort have not been explored previously, but its accuracy is surprisingly high: despite having no sequence model at all, it is more accurate than a model which uses standard tag fourgram HMM features ( t 0 , w Table 2 , model L+LL+LLL). The fourth and fifth rows show models with bidirectional tagging features.",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 367,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "The fourth model (3W+TAGS) uses the same tag sequence features as the last model in Table 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "t 0 , w \u22121 , w 0 , and t 0 , w 0 , w +1 , and includes the improvements in unknown word modeling discussed in section 3.3. 9 We call this model BEST. BEST has a token accuracy on the final test set of 97.24% and a sentence accuracy of 56.34% (see Table 4 ). A 95% confidence interval for the accuracy (using a binomial model) is (97.15%, 97.33%).",
"cite_spans": [
{
"start": 123,
"end": 124,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "In order to understand the gains from using right context tags and more lexicalization, let us look at an example of an error that the enriched models learn not to make. An interesting example of a common tagging error of the simpler models which could be corrected by a deterministic fixup rule of the kind used in the IDIOMTAG module of (Marshall, 1987) is the expression as X as (often, as far as). This should be tagged as/RB X/{RB,JJ} as/IN in the Penn Treebank. A model using only current word and two left tags (model L+L 2 in Table 2 ), made 87 errors on this expression, tagging it as/IN X as/IN -since the tag sequence probabilities do not give strong reasons to disprefer the most common tagging of as (it is tagged as IN over 80% of the time). However, the model 3W+TAGS, which uses two right tags and the two surrounding words in addition, made only 8 errors of this kind, and model BEST made only 6 errors.",
"cite_spans": [
{
"start": 339,
"end": 355,
"text": "(Marshall, 1987)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 534,
"end": 541,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.2"
},
{
"text": "Most of the models presented here use a set of unknown word features basically inherited from (Ratnaparkhi, 1996) , which include using character n-gram prefixes and suffixes (for n up to 4), and detectors for a few other prominent features of words, such as capitalization, hyphens, and numbers. Doing error analysis on unknown words on a simple tagging model (with t 0 , t \u22121 , t 0 , t \u22121 , t \u22122 , and w 0 , t 0 features) suggested several additional specialized features that can usefully improve 9 Thede and Harper (1999) use t \u22121 , t 0 , w 0 templates in their \"full-second order\" HMM, achieving an accuracy of 96.86%. Here we can add the opposite tiling and other features. Table 5 : Accuracy with and without quadratic regularization.",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF16"
},
{
"start": 500,
"end": 501,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 680,
"end": 687,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unknown word features",
"sec_num": "3.3"
},
{
"text": "performance. By far the most significant is a crude company name detector which marks capitalized words followed within 3 words by a company name suffix like Co. or Inc. This suggests that further gains could be made by incorporating a good named entity recognizer as a preprocessor to the tagger (reversing the most common order of processing in pipelined systems!), and is a good example of something that can only be done when using a conditional model. Minor gains come from a few additional features: an allcaps feature, and a conjunction feature of words that are capitalized and have a digit and a dash in them (such words are normally common nouns, such as CFC-12 or F/A-18). We also found it advantageous to use prefixes and suffixes of length up to 10. Together with the larger templates, these features contribute to our unknown word accuracies being higher than those of previously reported taggers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Features",
"sec_num": null
},
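{
"text": "A sketch (ours, with a hypothetical suffix list and illustrative feature names) of the word-signature extractor described above: affixes up to length 10, capitalization and digit/dash detectors, and the crude company-name cue.\n\nCOMPANY_SUFFIXES = {\"Co.\", \"Inc.\", \"Corp.\", \"Ltd.\"}  # illustrative list; the paper names Co. and Inc.\n\ndef signature_features(words, i, max_affix=10, window=3):\n    \"\"\"Surface-form features for the (possibly unknown) word at position i.\"\"\"\n    w = words[i]\n    feats = [\"cap\" if w[0].isupper() else \"nocap\"]\n    if w.isupper():\n        feats.append(\"allcaps\")\n    if any(c.isdigit() for c in w):\n        feats.append(\"digit\")\n    if \"-\" in w:\n        feats.append(\"hyphen\")\n    if w[0].isupper() and any(c.isdigit() for c in w) and \"-\" in w:\n        feats.append(\"cap-digit-dash\")  # e.g. CFC-12; usually a common noun\n    for n in range(1, min(max_affix, len(w)) + 1):  # affixes up to length 10\n        feats.append(\"pre=\" + w[:n])\n        feats.append(\"suf=\" + w[-n:])\n    # crude company-name detector: capitalized word followed within\n    # `window` words by a suffix like Co. or Inc.\n    if w[0].isupper() and any(u in COMPANY_SUFFIXES for u in words[i + 1:i + 1 + window]):\n        feats.append(\"company\")\n    return feats\n\nprint(signature_features([\"shares\", \"of\", \"Acme\", \"Co.\", \"rose\"], 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown word features",
"sec_num": "3.3"
},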
{
"text": "With so many features in the model, overtraining is a distinct possibility when using pure maximum likelihood estimation. We avoid this by using a Gaussian prior (aka quadratic regularization or quadratic penalization) which resists high feature weights unless they produce great score gain. The regularized objective F is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "F(\u03bb) = i log(P \u03bb (t i |w, t)) + n j =1 \u03bb 2 j 2\u03c3 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
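{
"text": "A sketch (ours; the function names are hypothetical) of how the penalty term is added to the log-likelihood and its gradient before handing both to the conjugate-gradient optimizer:\n\nimport numpy as np\n\ndef penalized_objective(lam, log_likelihood, grad_log_likelihood, sigma2=0.5):\n    \"\"\"F(lambda) and its gradient for conjugate-gradient ascent.\n\n    The quadratic penalty and its derivative are simply added to the\n    unregularized log-likelihood and gradient, as the text describes;\n    with sigma2 = 0.5 the denominator 2*sigma2 is 1.\n    \"\"\"\n    f = log_likelihood(lam) - np.sum(lam ** 2) / (2.0 * sigma2)\n    g = grad_log_likelihood(lam) - lam / sigma2\n    return f, g\n\n# toy check with log-likelihood -||lam||^2: f = -10, g = [-4, 8]\nf, g = penalized_objective(np.array([1.0, -2.0]), lambda l: -l @ l, lambda l: -2 * l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},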
{
"text": "Since we use a conjugate-gradient procedure to maximize the data likelihood, the addition of a penalty term is easily incorporated. Both the total size of the penalty and the partial derivatives with repsect to each \u03bb j are trivial to compute; these are added to the log-likelihood and log-likelihood derivatives, and the penalized optimization procedes without further modification. We have not extensively experimented with the value of \u03c3 2 -which can even be set differently for different parameters or parameter classes. All the results in this paper use a constant \u03c3 2 = 0.5, so that the denominator disappears in the above expression. Experiments on a simple model with \u03c3 made an order of magnitude higher or lower both resulted in worse performance than with \u03c3 2 = 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "Our experiments show that quadratic regularization is very effective in improving the generalization performance of tagging models, mostly by increasing the number of features which could usefully be incorporated. The Tagger Support cutoff Accuracy Collins (2002) 0 96.60% 5 96.72% Model 3W+TAGS variant 1 96.97% 5 96.93% Table 6 : Effect of changing common word feature cutoffs (features with support \u2264 cutoff are excluded from the model).",
"cite_spans": [
{
"start": 249,
"end": 263,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "number of features used in our complex models -in the several hundreds of thousands, is extremely high in comparison with the data set size and the number of features used in other machine learning domains. We describe two sets of experiments aimed at comparing models with and without regularization. One is for a simple model with a relatively small number of features, and the other is for a model with a large number of features. The usefulness of priors in maximum entropy models is not new to this work: Gaussian prior smoothing is advocated in Chen and Rosenfeld (2000) , and used in all the stochastic LFG work (Johnson et al., 1999) . However, until recently, its role and importance have not been widely understood. For example, Zhang and Oles (2001) attribute the perceived limited success of logistic regression for text categorization to a lack of use of regularization. At any rate, regularized conditional loglinear models have not previously been applied to the problem of producing a high quality part-of-speech tagger: Ratnaparkhi (1996) , Toutanova and Manning (2000) , and Collins (2002) all present unregularized models. Indeed, the result of Collins (2002) that including low support features helps a voted perceptron model but harms a maximum entropy model is undone once the weights of the maximum entropy model are regularized. Table 5 shows results on the development set from two pairs of experiments. The first pair of models use common word templates t 0 , w 0 , t 0 , t \u22121 , t \u22122 and the same rare word templates as used in the models in table 2. The second pair of models use the same features as model BEST with a higher frequency cutoff of 5 for common word features.",
"cite_spans": [
{
"start": 551,
"end": 576,
"text": "Chen and Rosenfeld (2000)",
"ref_id": "BIBREF5"
},
{
"start": 619,
"end": 641,
"text": "(Johnson et al., 1999)",
"ref_id": "BIBREF10"
},
{
"start": 739,
"end": 760,
"text": "Zhang and Oles (2001)",
"ref_id": "BIBREF19"
},
{
"start": 1037,
"end": 1055,
"text": "Ratnaparkhi (1996)",
"ref_id": "BIBREF16"
},
{
"start": 1058,
"end": 1086,
"text": "Toutanova and Manning (2000)",
"ref_id": "BIBREF18"
},
{
"start": 1093,
"end": 1107,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1164,
"end": 1178,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1353,
"end": 1360,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "For the first pair of models, the error reduction from smoothing is 5.3% overall and 20.1% on unknown words. For the second pair of models, the error reduction is even bigger: 16.2% overall after convergence and 5.8% if looking at the best accuracy achieved by the unsmoothed model (by stopping training after 75 iterations; see below). The especially large reduction in unknown word error reflects the fact that, because penalties are effectively stronger for rare features than frequent ones, the presence of penalties increases the degree to which more general cross-word signature features (which apply to unknown words) are used, relative to word-specific sparse features (which do not apply to unknown words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "Secondly, use of regularization allows us to incorporate features with low support into the model while improving performance. Whereas Ratnaparkhi (1996) used feature support cutoffs and early stopping to stop overfitting of the model, and Collins (2002) contends that including low support features harms a maximum entropy model, our results show that low support features are useful in a regularized maximum entropy model. Table 6 contrasts our results with those from Collins (2002) . Since the models are not the same, the exact numbers are incomparable, but the difference in direction is important: in the regularized model, performance improves with the inclusion of low support features. Finally, in addition to being significantly more accurate, smoothed models train much faster than unsmoothed ones, and do not benefit from early stopping. For example, the first smoothed model in Table 5 required 80 conjugate gradient iterations to converge (somewhat arbitrarily defined as a maximum difference of 10 \u22124 in feature weights between iterations), while its corresponding unsmoothed model required 335 iterations, thus training was roughly 4 times slower. 10 The second pair of models required 134 and 370 iterations respectively. As might be expected, unsmoothed models reach their highest generalization capacity long before convergence and accuracy on an unseen test set drops considerably with further iterations. This is not the case for smoothed models, as their test set accuracy increases almost monotonically with training iterations. 11 Figure 4 shows a graph of training iterations versus accuracy for the second pair of models on the development set.",
"cite_spans": [
{
"start": 135,
"end": 153,
"text": "Ratnaparkhi (1996)",
"ref_id": "BIBREF16"
},
{
"start": 240,
"end": 254,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
},
{
"start": 471,
"end": 485,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1165,
"end": 1167,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 6",
"ref_id": null
},
{
"start": 892,
"end": 899,
"text": "Table 5",
"ref_id": null
},
{
"start": 1556,
"end": 1564,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "gests that the final accuracy number presented here could be slightly improved upon by classifier combination, it is worth noting that not only is this tagger better than any previous single tagger, but it also appears to outperform Brill and Wu (1998) , the best-known combination tagger (they report an accuracy of 97.16% over the same WSJ data, but using a larger training set, which should favor them).",
"cite_spans": [
{
"start": 233,
"end": 252,
"text": "Brill and Wu (1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "While part-of-speech tagging is now a fairly well-worn road, and our ability to win performance increases in this domain is starting to be limited by the rate of errors and inconsistencies in the Penn Treebank training data, this work also has broader implications. Across the many NLP problems which involve sequence models over sparse multinomial distributions, it suggests that feature-rich models with extensive lexicalization, bidirectional inference, and effective regularization will be key elements in producing state-of-the-art results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "3.4"
},
{
"text": "Throughout this paper we assume that enough boundary symbols always exist that we can ignore the differences which would otherwise exist at the initial and final few positions.3 Despite use of names like \"label bias\"(Lafferty et al., 2001) or \"observation bias\", these effects are really just unwanted explaining-away effects(Cowell et al., 1999, 19), where two nodes which are not actually in causal competition have been modeled as if they were.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The effect of indirect influence being weaker than direct influence is more pronounced for conditionally structured models, but is potentially an issue even with a simple HMM. The probabilistic models for basic left-to-right and right-to-left HMMs with emissions on their states can be shown to be equivalent using Bayes' rule on the transitions, provided start and end symbols are modeled. However, this equivalence is violated in practice by the addition of smoothing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that these tags (and sentences) are not identical to those obtained from the tagged/pos directories of the same disk: hundreds of tags in the RB/RP/IN set were changed to be more consistent in the parsed/mrg version. Maybe we were the last to discover this, but we've never seen it in print.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ConclusionWe have shown how broad feature use, when combined with appropriate model regularization, produces a superior level of tagger performance. While experience sug-10 On a 2GHz PC, this is still an important difference: our largest models require about 25 minutes per iteration to train.11 In practice one notices some wiggling in the curve, but the trend remains upward even beyond our chosen convergence point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Boosting applied to tagging and PP attachment",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "EMNLP/VLC 1999",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney, Robert E. Schapire, and Yoram Singer. 1999. Boosting applied to tagging and PP attachment. In EMNLP/VLC 1999, pages 38-45.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "TnT -a statistical part-of-speech tagger",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "ANLP 6",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants. 2000. TnT -a statistical part-of-speech tagger. In ANLP 6, pages 224-231.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Classifier combination for improved lexical disambiguation",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1998,
"venue": "ACL 36/COLING 17",
"volume": "",
"issue": "",
"pages": "191--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Jun Wu. 1998. Classifier combination for improved lexical disambiguation. In ACL 36/COLING 17, pages 191-195.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transformation-based error-driven learning and natural language processing: A case study in part-ofspeech tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "4",
"pages": "543--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of- speech tagging. Computational Linguistics, 21(4):543-565.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Equations for part-of-speech tagging",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Curtis",
"middle": [],
"last": "Hendrickson",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Jacobson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Perkowitz",
"suffix": ""
}
],
"year": 1993,
"venue": "AAAI 11",
"volume": "",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak, Curtis Hendrickson, Neil Jacobson, and Mike Perkowitz. 1993. Equations for part-of-speech tagging. In AAAI 11, pages 784-789.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A survey of smoothing techniques for maximum entropy models",
"authors": [
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "8",
"issue": "1",
"pages": "37--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Ronald Rosenfeld. 2000. A survey of smoothing techniques for maximum entropy models. IEEE Transactions on Speech and Audio Processing, 8(1):37-50.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A stochastic parts program and noun phrase parser for unrestricted text",
"authors": [
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "2",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In ANLP 2, pages 136-143.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with per- ceptron algorithms. In EMNLP 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probabilistic Networks and Expert Systems",
"authors": [
{
"first": "Robert",
"middle": [
"G"
],
"last": "Cowell",
"suffix": ""
},
{
"first": "A",
"middle": [
"Philip"
],
"last": "Dawid",
"suffix": ""
},
{
"first": "Steffen",
"middle": [
"L"
],
"last": "Lauritzen",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Spiegelhalter",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert G. Cowell, A. Philip Dawid, Steffen L. Lauritzen, and David J. Spiegelhalter. 1999. Probabilistic Networks and Expert Systems. Springer-Verlag, New York.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependency networks for inference, collaborative filtering and data visualization",
"authors": [
{
"first": "David",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "David",
"middle": [
"Maxwell"
],
"last": "Chickering",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Rounthwaite",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Myers Kadie",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Machine Learning Research",
"volume": "1",
"issue": "1",
"pages": "49--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Myers Kadie. 2000. Dependency networks for inference, collaborative filtering and data visualization. Journal of Machine Learning Re- search, 1(1):49-75.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Estimators for stochastic \"unificationbased\" grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Canon",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL 37",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification- based\" grammars. In ACL 37, pages 535-541.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Conditional structure versus conditional estimation in NLP models",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP 2002",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2002. Conditional structure versus conditional estimation in NLP models. In EMNLP 2002, pages 9-16.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML-2001",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for seg- menting and labeling sequence data. In ICML-2001, pages 282-289.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Partof-speech tagging based on Hidden Markov Model assuming joint independence",
"authors": [
{
"first": "Sang-Zoo",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Ichi Tsujii",
"suffix": ""
},
{
"first": "Hae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 2000,
"venue": "ACL 38",
"volume": "",
"issue": "",
"pages": "263--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sang-Zoo Lee, Jun ichi Tsujii, and Hae-Chang Rim. 2000. Part- of-speech tagging based on Hidden Markov Model assuming joint independence. In ACL 38, pages 263-169.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkie- wicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313- 330.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tag selection using probabilistic methods",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Marshall",
"suffix": ""
}
],
"year": 1987,
"venue": "The Computational analysis of English: a corpusbased approach",
"volume": "",
"issue": "",
"pages": "42--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Marshall. 1987. Tag selection using probabilistic methods. In Roger Garside, Geoffrey Sampson, and Geoffrey Leech, editors, The Computational analysis of English: a corpus- based approach, pages 42-65. Longman, London.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "EMNLP 1",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP 1, pages 133-142.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Second-order hidden Markov model for part-of-speech tagging",
"authors": [
{
"first": "Scott",
"middle": [
"M"
],
"last": "Thede",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"P"
],
"last": "Harper",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL 37",
"volume": "",
"issue": "",
"pages": "175--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Thede and Mary P. Harper. 1999. Second-order hidden Markov model for part-of-speech tagging. In ACL 37, pages 175-182.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Enriching the knowledge sources used in a maximum entropy part-ofspeech tagger",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2000,
"venue": "EMNLP/VLC 1999",
"volume": "",
"issue": "",
"pages": "63--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Christopher Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of- speech tagger. In EMNLP/VLC 1999, pages 63-71.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Text categorization based on regularized linear classification methods",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"J"
],
"last": "Oles",
"suffix": ""
}
],
"year": 2001,
"venue": "Information Retrieval",
"volume": "4",
"issue": "",
"pages": "5--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Zhang and Frank J. Oles. 2001. Text categorization based on regularized linear classification methods. Information Re- trieval, 4:5-31.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Dependency networks: (a) the (standard) left-to-right first-order CMM, (b) the (reversed) right-to-left CMM, and (c) the bidirectional dependency network."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Accuracy by training iterations, with and without quadratic regularization."
},
"TABREF2": {
"num": null,
"text": "Tagging accuracy on the development set with different sequence feature templates. \u2020All models include the same vertical word-tag features ( t 0 , w 0 and various t 0 , \u03c3 (w 1n ) ), though the baseline uses a lower cutoff for these features.",
"html": null,
"content": "<table><tr><td>Model</td><td>Feature Templates</td><td colspan=\"2\">Support Features</td><td>Sentence</td><td colspan=\"2\">Token Unknown</td></tr><tr><td/><td/><td>Cutoff</td><td/><td colspan=\"3\">Accuracy Accuracy Accuracy</td></tr><tr><td>BASELINE</td><td>t 0 , w 0</td><td>2</td><td>6,501</td><td>1.63%</td><td>60.16%</td><td>82.98%</td></tr><tr><td/><td>t 0 , w 0</td><td>0</td><td>56,805</td><td>26.74%</td><td>93.69%</td><td>82.61%</td></tr><tr><td>3W</td><td>t 0 , w 0 , t 0 , w \u22121 , t 0 , w +1</td><td>2</td><td>239,767</td><td>48.27%</td><td>96.57%</td><td>86.78%</td></tr><tr><td colspan=\"2\">3W+TAGS tag sequences, t 0 , w 0 , t 0 , w \u22121 , t 0 , w +1</td><td>2</td><td>263,160</td><td>53.83%</td><td>97.02%</td><td>88.05%</td></tr><tr><td>BEST</td><td>see text</td><td>2</td><td>460,552</td><td>55.31%</td><td>97.15%</td><td>88.61%</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Tagging accuracy with different lexical feature templates on the development set.",
"html": null,
"content": "<table><tr><td colspan=\"4\">Model Feature Templates Support Features</td><td>Sentence</td><td colspan=\"2\">Token Unknown</td></tr><tr><td/><td/><td>Cutoff</td><td/><td colspan=\"3\">Accuracy Accuracy Accuracy</td></tr><tr><td>BEST</td><td>see text</td><td>2</td><td>460,552</td><td>56.34%</td><td>97.24%</td><td>89.04%</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}