|
{ |
|
"paper_id": "C08-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:26:25.354701Z" |
|
}, |
|
"title": "Weakly supervised supertagging with grammar-informed initialization", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Texas at Austin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Much previous work has investigated weak supervision with HMMs and tag dictionaries for part-of-speech tagging, but there have been no similar investigations for the harder problem of supertagging. Here, I show that weak supervision for supertagging does work, but that it is subject to severe performance degradation when the tag dictionary is highly ambiguous. I show that lexical category complexity and information about how supertags may combine syntactically can be used to initialize the transition distributions of a first-order Hidden Markov Model for weakly supervised learning. This initialization proves more effective than starting with uniform transitions, especially when the tag dictionary is highly ambiguous.", |
|
"pdf_parse": { |
|
"paper_id": "C08-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Much previous work has investigated weak supervision with HMMs and tag dictionaries for part-of-speech tagging, but there have been no similar investigations for the harder problem of supertagging. Here, I show that weak supervision for supertagging does work, but that it is subject to severe performance degradation when the tag dictionary is highly ambiguous. I show that lexical category complexity and information about how supertags may combine syntactically can be used to initialize the transition distributions of a first-order Hidden Markov Model for weakly supervised learning. This initialization proves more effective than starting with uniform transitions, especially when the tag dictionary is highly ambiguous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Supertagging involves assigning words lexical entries based on a lexicalized grammatical theory, such as Combinatory Categorial Grammar (CCG) (Steedman, 2000) Tree-adjoining Grammar (Joshi, 1988) , or Head-driven Phrase Structure Grammar (Pollard and Sag, 1994) . Supertag sets are larger than part-of-speech (POS) tag sets and their elements are generally far more articulated. For example, the English verb join has the POS VB and the CCG category ((S b \\NP)/PP)/NP in CCGbank (Hockenmaier and Steedman, 2007) . This category indicates that join requires a noun phrase c 2008.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 158, |
|
"text": "(Steedman, 2000)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 195, |
|
"text": "(Joshi, 1988)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 261, |
|
"text": "(Pollard and Sag, 1994)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 511, |
|
"text": "(Hockenmaier and Steedman, 2007)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "to its left, another to its right, and a prepositional phrase to the right of that.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
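
{

"text": "To make the category notation concrete, here is a short Python sketch (illustrative code, not from the paper) that splits a curried category string such as ((S[b]\\NP)/PP)/NP into its innermost result and its sequence of directional arguments; later sketches in this paper reuse split_top:\n\ndef split_top(cat):\n    # Return (result, slash, argument) for the outermost slash of a\n    # category string, or None if the category is atomic.\n    depth = 0\n    for k in range(len(cat) - 1, -1, -1):  # rightmost top-level slash\n        ch = cat[k]\n        if ch == ')':\n            depth += 1\n        elif ch == '(':\n            depth -= 1\n        elif ch in '/\\\\' and depth == 0:\n            unwrap = lambda s: s[1:-1] if s.startswith('(') and s.endswith(')') else s\n            return unwrap(cat[:k]), ch, unwrap(cat[k + 1:])\n    return None\n\ndef arguments(cat):\n    # Unpack a curried category into (innermost result, outermost-first arguments).\n    args = []\n    parts = split_top(cat)\n    while parts is not None:\n        cat, slash, arg = parts\n        args.append((slash, arg))\n        parts = split_top(cat)\n    return cat, args\n\nprint(arguments('((S[b]\\\\NP)/PP)/NP'))\n# -> ('S[b]', [('/', 'NP'), ('/', 'PP'), ('\\\\', 'NP')])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},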
|
{ |
|
"text": "Supertags convey such detailed syntactic subcategorization information that supertag disambiguation is referred to as almost parsing (Bangalore and Joshi, 1999) . Standard sequence prediction models are highly effective for supertagging, including Hidden Markov Models (Bangalore and Joshi, 1999; Nielsen, 2002) , Maximum Entropy Markov Models (Clark, 2002; Hockenmaier et al., 2004; Clark and Curran, 2007) , and Conditional Random Fields (Blunsom and Baldwin, 2006) . The original motivation for supertags-parse prefiltering for lexicalized grammars-of Bangalore and Joshi (1999) has been realized to good effect: the supertagger of Clark and Curran (2007) provides staged n-best lists of multi-tags that dramatically improve parsing speed and coverage without much loss in accuracy. Espinosa et al. (2008) have shown that hypertagging (predicting the supertag associated with a logical form) can improve both speed and accuracy of wide-coverage sentence realization with CCG. Supertags have gained further relevance as they are increasingly used as features for other tasks, including machine translation (Birch et al., 2007; Hassan et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 160, |
|
"text": "Joshi, 1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 296, |
|
"text": "(Bangalore and Joshi, 1999;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 311, |
|
"text": "Nielsen, 2002)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 357, |
|
"text": "(Clark, 2002;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 383, |
|
"text": "Hockenmaier et al., 2004;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 407, |
|
"text": "Clark and Curran, 2007)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 467, |
|
"text": "(Blunsom and Baldwin, 2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 581, |
|
"text": "Bangalore and Joshi (1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 635, |
|
"end": 658, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 808, |
|
"text": "Espinosa et al. (2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1108, |
|
"end": 1128, |
|
"text": "(Birch et al., 2007;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1129, |
|
"end": 1149, |
|
"text": "Hassan et al., 2007)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Supertaggers typically rely on a significant amount of carefully annotated sentences. As with many problems, there is pressing need to find strategies for reducing the amount of supervision required for producing accurate supertaggers, but as yet, no one has explored the use of weak supervision for the task. In particular, there are many dialog systems which rely on hand-crafted lexicons that both provide a starting point for bootstrapping a supertagger and which could benefit greatly from supertag pre-parse filter. For example, the dialog system used by Kruijff et al. (2007) uses a hand-crafted CCG grammar for OpenCCG (White and Baldridge, 2003) . It is important to stress that there are many such uses of CCG and related frameworks which do not rely on first annotating (even a small number of) sentences in a corpus: these define a lexicon that maps from words to categories (supertags) for a particular domain/application. This scenario is a natural fit for learning taggers from tag dictionaries using hidden Markov models with Expectation-Maximization (EM). Here, I investigate such weakly supervised learning for supertagging and demonstrate the importance of proper initialization of the tag transition distributions of the HMM. In particular, such initialization can be done using inherent properties of the CCG formalism itself regarding how categories 1 may combine. Informed initialization should help with supertagging for two reasons. First, categories have structure-lacking in POS tags-waiting to be exploited. For example, it is far more likely a priori to see the category sequence (S\\NP)/NP NP/N than the sequence S/S NP\\NP. Given the categories for a word, this information can be used to influence our expectations about categories for adjacent words. Second, this kind of information truly matters for the task: a key aspect of supertagging that differentiates it from POS tagging is that the contextual information is much more important for the former. Lexical probabilities handle most of the ambiguity for POS tagging, but supertags are inherently about context and, furthermore, lexical ambiguity is much greater for supertagging, making lexical probabilities less effective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 561, |
|
"end": 582, |
|
"text": "Kruijff et al. (2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 654, |
|
"text": "(White and Baldridge, 2003)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "I start by defining a distribution over lexical categories and then use this distribution as part of creating a CCG-informed transition distribution that appropriately breaks the symmetry of uniform HMM initialization. After describing how these components are included in the HMM, I describe experiments with CCGbank varying the ambiguity of the lexicon provided. I show that using knowledge about the formalism consistently improves performance, and is especially important as categorial ambiguity increases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The categories of CCG are an inductively defined set containing elements that are either atomic elements or (curried) functions specifying the canon- Words can be associated with multiple categories; the distribution over these categories is typically quite skewed. For example, the first entry for buy occurs 33 times in CCGbank, compared with just once for the second. That the simpler category is more prevalent is unsurprising: a general strategy when creating CCG lexicons is to use simpler categories whenever possible. This points to the possibility of defining distributions over CCG lexicons based on measures of the complexity of categories. I use a simple distribution here: given a lexicon L, the probability of a category i is inversely proportional to its complexity:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u039b i = 1 complexity(c i ) j\u2208L 1 complexity(c j ) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here, a very simple complexity measure is assumed: the number of subcategories (tokens) contained in a category. 2 For example, ((S\\NP)\\(S\\NP)/NP contains 9: S (twice), NP (thrice), S\\NP (twice), (S\\NP)\\(S\\NP), and ((S\\NP)\\(S\\NP)/NP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
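
{

"text": "As a worked example of equation (1), the following minimal sketch (again illustrative, reusing split_top from the sketch in section 1) counts subcategory tokens and normalizes inverse complexities over a toy lexicon:\n\ndef complexity(cat):\n    # Number of subcategory tokens, counting repeats: every node in the\n    # category's binary tree, e.g. ((S\\NP)\\(S\\NP))/NP yields 9.\n    parts = split_top(cat)\n    if parts is None:  # atomic category: a single token\n        return 1\n    result, _, arg = parts\n    return 1 + complexity(result) + complexity(arg)\n\ndef lexical_prior(lexicon):\n    # Equation (1): probability inversely proportional to complexity.\n    inv = {c: 1.0 / complexity(c) for c in lexicon}\n    z = sum(inv.values())\n    return {c: w / z for c, w in inv.items()}\n\ntoy = ['(S[dcl]\\\\NP)/NP', '((((S[b]\\\\NP)/PP)/PP)/(S[adj]\\\\NP))/NP']\nprint(lexical_prior(toy))  # the simple transitive entry for buy gets about 72% of the mass",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexical category distribution",

"sec_num": "2"

},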
|
{ |
|
"text": "The tag transition distribution defined in the next section uses \u039b to bias transitions toward simpler categories, e.g., preferring the first category for buy over the second. Performance when using \u039b is compared to using a uniform distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Other distributions could be given, e.g., one which gives more mass to adjunct categories such as (S\\NP)\\(S\\NP) than to ones which are otherwise similar but do not display such symmetry, like (S/NP)\\(NP\\S). However, the most important thing for present purposes is that simpler categories are more likely than more complex ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This distribution imposes no internal structure on the likelihood of a lexicon. As far as \u039b is concerned, lexicons can as well have the category (S\\NP)\\NP for transitive verbs and ((S/NP)/NP)/NP for ditransitive verbs, even though this is a highly unlikely pattern since we ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S/(S\\NP) (S\\NP)/(S\\NP) ((S\\NP)/PP)/NP NP/N N PP/NP NP/N N > > NP NP > > (S\\NP)/PP PP > S\\NP > S\\NP > S", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Figure 1: Normal form CCG derivation, using only application rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "would expect both types of verbs to seek their arguments in the same direction. Languages also tend to prefer lexicons with one or the other slash direction predominating (Villavicencio, 2002) . In the future, it would be interesting to consider Bayesian approaches that could encode more complex structure and assign priors over distributions over lexicons, building on these observations. An aspect of CCGbank that relevant for \u039b i is that some categories actually are not true categories. For example, many punctuation \"categories\" are given as LRB, ., :, etc. In most grammars, the category of '.' is usually assumed to be S\\S. The grammatical behavior of such pseudo-categories is handled via special rules in the parsers of Hockenmaier and Steedman (2007) and Clark and Curran (2007) . I relabeled three of these: , to NP\\NP, . to S\\S and ; to (S\\S)/S. A single best change was not clear for others such as LRB and :, so they were left as is.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 192, |
|
"text": "(Villavicencio, 2002)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 761, |
|
"text": "Hockenmaier and Steedman (2007)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 766, |
|
"end": 789, |
|
"text": "Clark and Curran (2007)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical category distribution", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "CCG analyses of sentences are built up from lexical categories combining to form derived categories, until an entire sentence is reduced to a single derived category with corresponding dependencies. One of CCG's most interesting linguistic properties is it allows alternative constituents. Consider the derivations in Figures 1 and 2 , which show a normal form derivation (Eisner, 1996) and fully incremental derivation, respectively. Both produce the same dependencies, guaranteed by the semantic consistency of CCG's rules (Steedman, 2000) . This property of CCG of supporting multiple derivations of the same analysis has been termed spurious ambiguity. However, the extra constituents are anything but spurious: they are implicated in a range of CCG (along with other forms of categorial grammar) linguistic analyses, including coordination, long-distance extraction, intonation, and incremental processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 386, |
|
"text": "(Eisner, 1996)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 541, |
|
"text": "(Steedman, 2000)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 333, |
|
"text": "Figures 1 and 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This all boils down to associativity: just as This observation can be used in different ways by different models for CCG supertagging. For example, discriminative tagging models could include features that capture whether or not the current supertag can combine with the previous one and possibly via which CCG rule. Here, I show how it can be used to provide a non-uniform starting point for the transition distributions \u03b8 j|i in a first-order Hidden Markov Model. This is similar to how Grenager et al. (2005) use diagonal initialization in an HMM for field segmentation to encourage the model to remain in the same state (and thus predict the same label for adjacent words). For CCG supertagging, the initialization should discourage diagonalization and establish a preference for some transitions over others.", |
|
"cite_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 511, |
|
"text": "Grenager et al. (2005)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are many ways to define such a starting point. The simplest would be to reserve a small part of the mass spread uniformly over category pairs which cannot combine and then spread the rest of the mass uniformly over those which can. However, we can provide a more refined distribution, \u03a8 j|i , by incorporating the lexical category distribution \u039b i defined in the previous section to weight these transitions according to this further information. In a similar manner to Grenager et al. (2005) , I define \u03a8 as follows: Figure 2 : Incremental CCG derivation, using both application and composition (B) rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 498, |
|
"text": "Grenager et al. (2005)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 532, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u03a8 j|i = (1\u2212\u03c3)\u039b j + \u03c3 \u00d7 \u03ba(i, j) \u00d7 \u039b j k\u2208L|\u03ba(i,k) \u039b k (2) where \u03ba(i, j)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "is an indicator function that returns 1 if categories c i and c j can combine when c i immediately precedes c j , \u03c3 is a global parameter that specifying the total probability of transitions that are combinable from i. Each j receives a proportion of \u03c3 according to its lexical prior probability over the sum of the lexical prior probabilities for all categories that combine with i. For the experiments in this paper, \u03c3 was set to .95. For the models referred to as \u03a8U and \u03a8U-EM in section 5, the uniform lexical probability 1/|C| is used for \u039b i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
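
{

"text": "A minimal sketch of equation (2), assuming \u03ba is available as a predicate and lam holds the \u039b distribution (the variable names are mine):\n\ndef psi(cats, lam, kappa, sigma=0.95):\n    # Equation (2): (1 - sigma) of the mass follows Lambda everywhere;\n    # sigma is shared, Lambda-weighted, among combinable successors of i.\n    trans = {}\n    for i in cats:\n        z = sum(lam[k] for k in cats if kappa(i, k))\n        for j in cats:\n            extra = sigma * lam[j] / z if kappa(i, j) and z > 0 else 0.0\n            trans[(i, j)] = (1 - sigma) * lam[j] + extra\n    return trans\n\nEach row sums to one whenever category i has at least one combinable successor, so the result can be used directly as the HMM's starting transition matrix.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Category transition distribution",

"sec_num": "3"

},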
|
{ |
|
"text": "For \u03ba(i, j), I use the standard rules assumed for CCGbank parsers: forward and backward application (>, <), order-preserving composition (>B, <B), and backward crossed composition (<B \u00d7 ) for S-rooted categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Thus, \u03ba(NP, S\\NP)=1, \u03ba(S/NP, NP/N)=1, \u03ba((S\\NP)/NP, (S\\NP)\\(S\\NP))=1 and \u03ba(S/NP, NP\\NP)=0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For application, leftward and rightward arguments are handled separately by assuming that it would be possible to consume all preceding arguments of the first category and all following arguments of the second. So, \u03ba((S/NP)\\S, NP/N)=1 and \u03ba(NP, (S\\NP)/NP)=1. Unification on categories is standard (so \u03ba(NP[nb], S\\NP)=1), except that N unifies with NP only when N is the argument: \u03ba(N, S\\NP)=1, but \u03ba(NP/N, NP)=0. This is to deal with the fact that CCGbank represents many words with N (e.g., Mr.|N/N Vinken|N is|(S[dcl]\\NP)/NP) and assumes that a parser will include the unary type changing rule N\u2192NP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
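
{

"text": "The following rough Python proxy for \u03ba (my simplification: application plus a loose composition check via argument peeling, with the featureless unification and N-for-NP convention just described; split_top is from the sketch in section 1) reproduces the examples above:\n\nimport re\n\ndef unify(slot, provider):\n    # Featureless match; N may fill an NP slot, but not the reverse.\n    defeature = lambda c: re.sub(r'\\[[^\\]]*\\]', '', c)\n    slot, provider = defeature(slot), defeature(provider)\n    return slot == provider or (slot == 'NP' and provider == 'N')\n\ndef peel(cat, slashes):\n    # All categories reachable by peeling outermost arguments whose\n    # slash is in slashes, the category itself included.\n    out, parts = [cat], split_top(cat)\n    while parts is not None and parts[1] in slashes:\n        cat = parts[0]\n        out.append(cat)\n        parts = split_top(cat)\n    return out\n\ndef kappa(i, j):\n    # Forward: i minus its leftward arguments is X/Y and j exposes a Y\n    # result by order-preserving peeling. Backward: j minus its rightward\n    # arguments is X\\Y and i exposes a Y result (possibly crossed).\n    for func in peel(i, '\\\\'):\n        parts = split_top(func)\n        if parts and parts[1] == '/':\n            if any(unify(parts[2], p) for p in peel(j, '/')):\n                return 1\n    for func in peel(j, '/'):\n        parts = split_top(func)\n        if parts and parts[1] == '\\\\':\n            if any(unify(parts[2], p) for p in peel(i, '/\\\\')):\n                return 1\n    return 0\n\nassert kappa('NP', 'S\\\\NP') == 1 and kappa('S/NP', 'NP/N') == 1\nassert kappa('S/NP', 'NP\\\\NP') == 0 and kappa('NP/N', 'NP') == 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Category transition distribution",

"sec_num": "3"

},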
|
{ |
|
"text": "The HMM also has initial and final probabilities; distributions can be defined based on which categories are likely to start or end a sentence. For this, I assume only that categories which seek arguments to the left (e.g., S\\NP) are less likely at the beginning of a sentence and those which seek rightward arguments are less likely at the end. The initializations for these are defined similarly to the transition distribution, substituting functions noLef tArgs(i) and noRightArgs(i) for \u03ba(i, j).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Category transition distribution", |
|
"sec_num": "3" |
|
}, |
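
{

"text": "The paper gives only the intuition for these predicates; a plausible reading, sketched below with my own function names, is that noLeftArgs(i) holds when category i seeks no argument to its left anywhere along its curried spine (noRightArgs is the mirror image), with the initial distribution mirroring equation (2):\n\ndef no_left_args(cat):\n    # True if no argument along the curried spine is sought leftward.\n    parts = split_top(cat)\n    while parts is not None:\n        if parts[1] == '\\\\':\n            return False\n        parts = split_top(parts[0])\n    return True\n\ndef initial_dist(cats, lam, sigma=0.95):\n    # Sentence-initial analogue of equation (2): sigma of the mass goes\n    # to categories with no leftward arguments, weighted by Lambda.\n    ok = {c for c in cats if no_left_args(c)}\n    z = sum(lam[c] for c in ok)\n    return {c: (1 - sigma) * lam[c] + (sigma * lam[c] / z if c in ok and z > 0 else 0.0)\n            for c in cats}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Category transition distribution",

"sec_num": "3"

},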
|
{ |
|
"text": "A first-order Hidden Markov Model (bitag HMM) is used for bootstrapping a supertagger from a lexicon. See Rabiner (1989) for an extensive introduction to and discussion of HMMs. There are several reasons why this is an attractive tagging model here. First, though extra context in the form of tritag transition distributions or other techniques can improve supervised POS tagging accuracy, the accuracy of bitag HMMs is not far behind. The goal here is to investigate the relative gains of using CCG-based information in weakly supervised HMM learning. Second, the expectationmaximization algorithm for bitag HMMs is efficient and has been shown to be quite effective for acquiring accurate POS taggers given only a lexicon (tag dictionary) and certain favorable conditions (Banko and Moore, 2004) . Third, the model's simplicity makes it straightforward to test the idea of CCG-initialization on tag transitions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 120, |
|
"text": "Rabiner (1989)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 797, |
|
"text": "(Banko and Moore, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Dirichlet priors can be used to bias HMMs toward more skewed distributions (Goldwater and Griffiths, 2007; Johnson, 2007) , which is especially useful in the weakly supervised setting considered here. Following Johnson (2007) , I use variational Bayes EM (Beal, 2003) during the M-step for the transition distribution:", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 106, |
|
"text": "(Goldwater and Griffiths, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 121, |
|
"text": "Johnson, 2007)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 225, |
|
"text": "Johnson (2007)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 267, |
|
"text": "(Beal, 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 l+1 j|i = f (E[n i,j ] + \u03b1 i ) f (E[n i ] + |C | \u00d7 \u03b1 i )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (v) = exp(\u03c8(v))", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03c8(v) = g(v \u2212 1 2 ) if v > 7 \u03c8(v + 1) \u2212 1 v o.w. (5) g(x) \u2248 log(x) + 1 24x 2 \u2212 7 960x 4 + 31 8064x 6 \u2212 127 30720x 8 (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where V is the set of word types, \u03c6 is the digamma function (which is approximated by g), and \u03b1 i is the hyperparameter of the Dirichlet priors. In all experiments, the \u03b1 i parameters were set symmetrically to .005. For experiments using the transition prior \u03a8 j|i , the initial expectations of the model were set as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "E[n i,j ] = |E i | \u00d7 \u03a8 j|i and E[n i ] = |E i |,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where E i is the set of emissions for category c i . The uniform probability 1 |C | was used in place of \u03a8 j|i for standard HMM initialization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
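
{

"text": "Equations (4)-(6) and the seeding of the expectations are straightforward to implement; the sketch below (illustrative, not the paper's code) unrolls the recurrence in equation (5) into a loop:\n\nimport math\n\ndef digamma(v):\n    # Equation (5): shift v up via psi(v) = psi(v + 1) - 1/v until v > 7,\n    # then apply the series approximation g of equation (6).\n    acc = 0.0\n    while v <= 7:\n        acc -= 1.0 / v\n        v += 1.0\n    x = v - 0.5\n    return acc + (math.log(x) + 1 / (24 * x**2) - 7 / (960 * x**4)\n                  + 31 / (8064 * x**6) - 127 / (30720 * x**8))\n\ndef f(v):\n    # Equation (4): the exp-digamma damping used in the M-step.\n    return math.exp(digamma(v))\n\ndef init_expectations(cats, psi_trans, emissions):\n    # Seed the first M-step: E[n_{i,j}] = |E_i| * Psi_{j|i} and E[n_i] = |E_i|,\n    # where emissions[i] is the set E_i of word types seen with category i.\n    exp_pair = {(i, j): len(emissions[i]) * psi_trans[(i, j)] for i in cats for j in cats}\n    exp_tag = {i: float(len(emissions[i])) for i in cats}\n    return exp_pair, exp_tag\n\nprint(round(digamma(1.0), 6))  # -0.577216, minus the Euler-Mascheroni constant",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "4"

},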
|
{ |
|
"text": "The emission distributions use standard EM expectations with more mass reserved for unknowns for tags with more emissions as follows: 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c6 l+1 k|i = E[n i,k ] + |E i | \u00d7 1 |V| E[n i ] + |E i |", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
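
{

"text": "Putting the pieces together, here is a dict-based sketch of the M-step updates in equations (3) and (7), with f the exp-digamma function from the previous sketch (the variable names are mine):\n\ndef m_step_transitions(exp_pair, exp_tag, alpha=0.005):\n    # Equation (3): exp_pair[(i, j)] = E[n_{i,j}] and exp_tag[i] = E[n_i];\n    # the exp-digamma terms push the distribution toward sparser rows.\n    C = sorted(exp_tag)\n    return {(i, j): f(exp_pair.get((i, j), 0.0) + alpha) / f(exp_tag[i] + len(C) * alpha)\n            for i in C for j in C}\n\ndef m_step_emission(exp_word, exp_tag_i, n_emit_i, vocab_size):\n    # Equation (7) for one (category, word) pair: n_emit_i = |E_i| reserves\n    # extra unknown-word mass |E_i| / |V| for categories with more emissions.\n    return (exp_word + n_emit_i / vocab_size) / (exp_tag_i + n_emit_i)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "4"

},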
|
{ |
|
"text": "The Viterbi algorithm is used for decoding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "CCGbank (Hockenmaier and Steedman, 2007) is a translation of phrase structure analyses of the Penn Treebank into CCG analyses. Here, I consider only the lexical category annotations and ignore derivations. The standard split used for weakly supervised HMM tagging experiments (Banko and Moore, 2004; Wang and Schuurmans, 2005) is used: sections 0-18 for training (train), 19-21 for development (dev), and 22-24 for testing (test). All parameters and models were developed using dev. The test set was used only once to obtain the performance figures reported here. Counts for word types, word tokens and sentences for each data set are given in Table 1 . In train, there are 1241 distinct categories, the ambiguity per word type is 1.69, and the maximum number of categories for a single word type is 126. This is much greater than for POS tags in CCGbank, for which there are 48 POS tags with an av- erage ambiguity of 1.17 per word and a maximum of 7 tags in train. 5 The set of supertags was not reduced: any category found in the data used to initialize a lexicon was considered. This is one of the advantages of the HMM over using discriminative models, where typically only supertags seen at least 10 times in the training material are utilized for efficiency (Clark and Curran, 2007) . Ignoring some supertags makes sense when building supervised supertaggers for pre-parse filtering, but not for learning from lexicons, where we cannot assume we have such frequencies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 40, |
|
"text": "(Hockenmaier and Steedman, 2007)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 299, |
|
"text": "(Banko and Moore, 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 326, |
|
"text": "Wang and Schuurmans, 2005)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 967, |
|
"end": 968, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1265, |
|
"end": 1289, |
|
"text": "(Clark and Curran, 2007)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 651, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For supervised training with the HMM on train, the performance is 87.6%. This compares to 91.4% for the C&C supertagger. The accuracy of the HMM, though quite a bit lower than that of C&C, is still quite good, indicating that it is an adequate model for the task. Note also that it uses only the words themselves and does not rely on POS tags. The performance of the C&C tagger was obtained by training the C&C POS tagger on the given dataset and tagging the evaluation material with it. Finally, the HMM trains in just a few seconds as opposed to over an hour. 6 Five different weakly supervised scenarios are evaluated: (1) standard EM with 50 iterations (EM), (2) \u03a8 initialization with uniform lexical probabilities w/o EM (\u03a8U), (3) \u03a8 with \u039b probabilities w/o EM (\u03a8\u039b), (4) \u03a8 with uniform lexical probabilities and 10 EM iterations, and (5) \u03a8 with \u039b and 10 EM iterations. 7 These scenarios compare the effectiveness of standard EM with the use of grammar informed transitions; these in turn are of two varieties -one using a uniform lexical prior or one that is biased in favor of less complex categories according to \u039b.", |
|
"cite_spans": [ |
|
{ |
|
"start": 562, |
|
"end": 563, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As Banko and Moore (2004) discovered when reimplementing several previous HMMs for POS tagging, the lexicons had been limited to contain only tags occurring above a particular frequency. For POS tagging, this keeps a cleaner lexicon that avoids errors in annotated data (such as the tagged as VB) and rare tags (such as a tagged as SYM).", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 25, |
|
"text": "Banko and Moore (2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "When learning from a lexicon alone, such elements receive the same weight as their other (correct or more fundamental) tags in initializing the HMM. The problem of rare tags turns out to be very important for weakly supervised CCG supertagging. 8 To consider the effect of the CCG-based initialization for lexicons with differing ambiguity, I use tag cutoffs that remove any lexical entry containing a category that appears with a particular word less than X% of the time (Banko and Moore, 2004) , as well as using no cutoffs at all. Recall that the goal of these experiments is to investigate the relative difference in performance between using the grammar-based initialization or not, given some (possibly hand-crafted) lexicon. Lexicon cutoffs actually constitute a strong source of supervision because they use tag frequencies (which would not be known for a hand-crafted lexicon), so it should be stressed that they are used here only so that this relative performance can be measured for different ambiguity levels. Table 2 provides accuracy for ambiguous words (and not including punctuation) for the five scenarios, varying the cutoff to measure the effect of progressively allowing more lexical ambiguity (and much rarer categories). The number of ambiguous, non-punctuation tokens is 101,167.", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 246, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 495, |
|
"text": "(Banko and Moore, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1023, |
|
"end": 1030, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The first thing to note is performance given only the lexicon and the \u03a8U or \u03a8\u039b initialization of the transitions. These correspond to taggers which have only been given the lexicon and have not utilized any data to improve their estimates of the transition and emission probabilities. Interestingly, both do quite well with a clean lexicon: see the columns under \u03a8U and \u03a8\u039b. These indicate that initializing the transitions based on whether categories can combine does indeed appropriately capture key aspects of category transitions. Furthermore, using the lexical category distribution (\u03a8\u039b) to create the transition initialization provides a better starting point than the uniform one (\u03a8U), espe- 8 CCGbank actually corrects many errors in the Penn Treebank, and does not suffer as much from mistagged examples. However, there were two instances of an ill-formed category ((S[b]\\NP)/NP)/ in wsj 0595 for the words own and keep. Table 2 : Performance on ambiguous word types of the HMM with standard EM (uniform starting transitions), just the initial \u03a8 transitions (\u03a8U and \u03a8\u039b), and EM initialized with \u03a8U and \u03a8\u039b, for lexicons with varied cutoffs. Note also that these scores do not include punctuation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 698, |
|
"end": 699, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 929, |
|
"end": 936, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "cially as lexical ambiguity increases. Next, note that both \u03a8U-EM and \u03a8\u039b-EM beat the randomly initialized EM for all cutoff levels. For the 10% tag cutoff (the first row), there is an absolute difference of over 2% for both. 9 As the ambiguity increases, the grammar-informed initialization has a much stronger effect. In the extreme case of using no cutoff at all (the None row of Table 2 ), \u03a8U-EM and \u03a8\u039b-EM beat EM by 19.9% and 23.1%, respectively. Finally, using the lexical category distribution \u039b instead of a uniform one is much more effective when there is more lexical ambiguity (e.g., compare the .01 through None rows of the \u03a8U-EM and \u03a8\u039b-EM columns), but has a negligible effect with less ambiguity (rows .05 and .01). This demonstrates that the grammarbased initialization can be effectively exploited -it is in fact crucial for improving performance when we are given much more ambiguous lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 389, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The majority of errors with \u03a8\u039b-EM involve marking adjectives (N/N) as nouns (N) or vice versa, and assigning the wrong prepositional category (usually the simpler noun phrase postmodifier (NP\\NP)/NP instead of the verb phrase modifier ((S\\NP)\\(S\\NP))/NP. Both of these kinds of errors, and others, could potentially be corrected if the categories proposed by the tagger were further filtered by an attempt to parse each sentence with the categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The idea of using knowledge from the formalism for constraining supertagging originates with Ban-galore and Joshi (1999) . They used constraints based on how elementary trees of Tree-Adjoining Grammar could or could not combine as filters to block out tags that do not fit in certain locations in the string. My approach is different is several ways. First, they dealt with fully supervised supertagging; here I show that using this knowledge is important for weakly supervised supertagging where we are given only a tag dictionary (lexicon). Second, my approach encodes grammarbased cues only as an initial bias, so categories are never explicitly filtered. Finally, I use CCG rather than TAG, which makes it possible to exploit a much higher degree of associativity in derivations. This in turn makes it easier to utilize prior knowledge about adjacent contexts -precisely what is needed for using the grammar to influence the transition probabilities of a bigram HMM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 120, |
|
"text": "Joshi (1999)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "On the other hand, Bangalore and Joshi (1999) use constraints that act at greater distances than I have considered here. For example, if one wishes to provide a word with the category ((S\\NP)/PP)/NP, then there should be a word with a category which results in a PP two or more words to its right -this is something which the bigram transitions considered here cannot capture. An interesting way to extend the present approach would be to enforce such patterns as posterior constraints during EM (Graca et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 45, |
|
"text": "Bangalore and Joshi (1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 516, |
|
"text": "(Graca et al., 2007)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Recent work considers a damaged tag dictionary by assuming that tags are known only for words that occur more than once or twice (Toutanova and Johnson, 2007) . A very interesting aspect of this work is that they explicitly model ambiguity classes to exploit commonality in the lexicon between different word forms, which could be even more useful for supertagging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 158, |
|
"text": "(Toutanova and Johnson, 2007)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In a grammar development context, it is often the case that only some of the categories for a word have been assigned. This is the scenario considered by Haghighi and Klein (2006) for POS tagging: how to construct an accurate tagger given a set of tags and a few example words for each of those tags. They use distributional similarity of words to define features for tagging that effectively allow such prototype words to stand in for others. This idea could be used with my approach as well; the most obvious way would be to use prototype words to suggest extra categories (beyond the tag dictionary) for known words and a reduced set of categories for unknown words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 179, |
|
"text": "Haghighi and Klein (2006)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Other work aims to do truly unsupervised learning of taggers, such as Goldwater and Griffiths (2007) and Johnson (2007) . No tag dictionaries are assumed, and the models are parametrized with Dirichlet priors. The states of these models implicitly represent tags; however, it actually is not clear what the states in such models truly represent: they are (probably interesting) clusters that may or may not correspond to what we normally think of as parts-of-speech. POS tags are relatively inert, passive elements in a grammar, whereas CCG categories are the very drivers of grammatical analysis. That is, syntax is projected, quite locally, by lexical categories. It would thus be interesting to consider the induction of categories with grammarbased priors with such models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 100, |
|
"text": "Goldwater and Griffiths (2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 119, |
|
"text": "Johnson (2007)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "I have shown that weakly supervised learning can indeed be used to induce supertaggers from a lexicon mapping words to their possible categories, but that the extra ambiguity in the supertagging task over that of POS tagging makes performance much more sensitive to rare categories that occur in larger, more ambiguous lexicons. However, I have also shown that the CCG formalism itself can provide the basis for useful distributions over lexical categories and tag transitions in a bitag HMM. By using these distributions to initialize the HMM, it is possible to improve performance regardless of the underlying ambiguity. This is especially important for reducing error when the lexicon used for bootstrapping is highly ambiguous and contains very rare categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For the rest of the paper, I will refer to categories rather supertags, but will still refer to the task as supertagging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This worked better than using category arity or number of unique subcategory types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I make the standard assumption that type-raising is performed in the lexicon, so the possibility of combining these through type-raising plus composition is not available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I also experimented with a Dirichlet prior on the emissions, but it performed worse. Using a symmetric prior was actually detrimental, while performance within a percent of those achieved with the above update was achieved with Dirichlet hyperparameters set relative to |Ei|/|V|.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that the POS tag information is not used in these experiments, except for by the C&C tagger.6 It should be stressed that the goal of this paper is not to compete on supervised performance with C&C; instead, this comparison shows that the HMM supervised performance is reasonable and is thus relevant for bootstrapping.7 The number of iterations for standard and grammar informed iteration were determined by performance on dev.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For comparison with the performance of 87.6% for the fully supervised HMM on all tokens, \u03a8-EM achieves 82.1% and 58.9% using a cutoff of .1 or no cutoff, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to the UT Austin Natural Language Learning group and three anonymous reviewers for useful comments and feedback. This work was supported by NSF grant BCS-0651988 and a Faculty Research Assignment from UT Austin.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Supertagging: an approach to almost parsing", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "2", |
|
"pages": "237--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bangalore, Srinivas and Aravind K. Joshi. 1999. Su- pertagging: an approach to almost parsing. Compu- tational Linguistics, 25(2):237-265.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Part-ofspeech tagging in context", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of COL-ING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Banko, Michele and Robert C. Moore. 2004. Part-of- speech tagging in context. In Proceedings of COL- ING.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Variational Algorithms for Approximate Inference", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Beal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beal, Matthew. 2003. Variational Algorithms for Ap- proximate Inference. Ph.D. thesis, University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "CCG supertags in factored statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2nd Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Birch, Alexandra, Miles Osborne, and Philipp Koehn. 2007. CCG supertags in factored statistical machine translation. In Proceedings of the 2nd Workshop on Statistical Machine Translation.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Multilingual deep lexical acquisition for HPSGs via supertagging", |
|
"authors": [ |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of EMNLP 06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blunsom, Phil and Timothy Baldwin. 2006. Multi- lingual deep lexical acquisition for HPSGs via su- pertagging. In Proceedings of EMNLP 06, pages 164-171.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, Stephen and James Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Supertagging for combinatory categorial grammar", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of TAG+6", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, Stephen. 2002. Supertagging for combina- tory categorial grammar. In Proceedings of TAG+6, pages 19-24, Venice, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Efficient normal-form parsing for combinatory categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 35th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eisner, Jason. 1996. Efficient normal-form parsing for combinatory categorial grammars. In Proceedings of the 35th ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hypertagging: Supertagging for surface realization with CCG", |
|
"authors": [ |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Espinosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dennis", |
|
"middle": [], |
|
"last": "Mehay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Espinosa, Dominic, Michael White, and Dennis Mehay. 2008. Hypertagging: Supertagging for surface real- ization with CCG. In Proceedings of ACL-08: HLT, pages 183-191, Columbus, Ohio, June.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A fully Bayesian approach to unsupervised part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goldwater, Sharon and Tom Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th ACL.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Expectation maximization, posterior constraints, and statistical alignment", |
|
"authors": [ |
|
{ |
|
"first": "Joao", |
|
"middle": [], |
|
"last": "Graca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of NIPS07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graca, Joao, Kuzman Ganchev, and Ben Taskar. 2007. Expectation maximization, posterior constraints, and statistical alignment. In Proceedings of NIPS07.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised learning of field segmentation models for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "371--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grenager, Trond, Dan Klein, and Christopher D. Man- ning. 2005. Unsupervised learning of field segmen- tation models for information extraction. In Pro- ceedings of the 43rd ACL, pages 371-378.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Prototype-driven learning for sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haghighi, Aria and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of HLT-NAACL 2006.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Supertagged phrase-based statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khalil", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hassan, Hany, Khalil Sima'an, and Andy Way. 2007. Supertagged phrase-based statistical machine trans- lation. In Proceedings of the 45th ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "3", |
|
"pages": "355--396", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hockenmaier, Julia and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Com- putational Linguistics, 33(3):355-396.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Extending the coverage of a CCG system", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gann", |
|
"middle": [], |
|
"last": "Bierner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Research in Language and Computation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "165--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hockenmaier, Julia, Gann Bierner, and Jason Baldridge. 2004. Extending the coverage of a CCG system. Research in Language and Computa- tion, 2:165-208.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Why doesn't EM find good HMM POS-taggers?", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johnson, Mark. 2007. Why doesn't EM find good HMM POS-taggers? In Proceedings of the EMNLP- CoNLL 2007.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Tree Adjoining Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Natural Language Parsing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "206--250", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joshi, Aravind. 1988. Tree Adjoining Grammars. In Dowty, David, Lauri Karttunen, and Arnold Zwicky, editors, Natural Language Parsing, pages 206-250. Cambridge University Press, Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Situated dialogue and spacial organization: What, where", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kruijff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Geert-Jan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Zender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patric", |
|
"middle": [], |
|
"last": "Jensfelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henrik", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Christensen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "International Journal of Advanced Robotic Systems", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "125--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kruijff, Geert-Jan M., Hendrik Zender, Patric Jensfelt, and Henrik I. Christensen. 2007. Situated dialogue and spacial organization: What, where,...and why? International Journal of Advanced Robotic Systems, 4(1):125-138.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Supertagging with combinatory categorial grammar", |
|
"authors": [ |
|
{ |
|
"first": "Leif", |
|
"middle": [], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Seventh ESSLLI Student Session", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "209--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nielsen, Leif. 2002. Supertagging with combinatory categorial grammar. In Proceedings of the Seventh ESSLLI Student Session, pages 209-220.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Head Driven Phrase Structure Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Sag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pollard, Carl and Ivan Sag. 1994. Head Driven Phrase Structure Grammar. CSLI/Chicago Univer- sity Press, Chicago.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A tutorial on Hidden Markov Models and selected applications in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "77", |
|
"issue": "2", |
|
"pages": "257--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rabiner, Lawrence. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257- 286.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The Syntactic Process", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steedman, Mark. 2000. The Syntactic Process. The MIT Press, Cambridge Mass.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A Bayesian LDA-based model for semi-supervised part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of NIPS 20", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toutanova, Kristina and Mark Johnson. 2007. A Bayesian LDA-based model for semi-supervised part-of-speech tagging. In Proceedings of NIPS 20.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The Acquisition of a Unification-Based Generalised Categorial Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Aline", |
|
"middle": [], |
|
"last": "Villavicencio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Villavicencio, Aline. 2002. The Acquisition of a Unification-Based Generalised Categorial Gram- mar. Ph.D. thesis, University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Improved estimation for unsupervised part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dal", |
|
"middle": [], |
|
"last": "Iris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schuurmans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "EEE International Conference on Natural Language Processing and Knowledge Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, Qin Iris and Dal Schuurmans. 2005. Improved estimation for unsupervised part-of-speech tagging. In EEE International Conference on Natural Lan- guage Processing and Knowledge Engineering.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Adapting chart realization to CCG", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of ENLG", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "White, Michael and Jason Baldridge. 2003. Adapting chart realization to CCG. In Proceedings of ENLG.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "ical linear direction in which they seek their arguments. Some example entries from CCGbank are: the := NP nb /N of := (NP\\NP)/NP of := ((S\\NP)\\(S\\NP)/NP were := (S dcl \\NP)/(S pss \\NP) buy := (S dcl \\NP)/NP buy := ((((S b \\NP)/PP)/PP)/(S adj \\NP))/NP" |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "(4 + 2)) = ((1 + 4) + 2) = 7, CCG ensures that (Ed\u2022(saw\u2022Ted)) = ((Ed\u2022saw)\u2022Ted) = S Such multiple derivations arise when adjacent categories can combine through either application or composition. Thus, we would expect that the lexical categories needed to analyze an entire sentence will more often than not be able to combine with their immediate neighbors. For example, six of seven pairs of adjacent lexical categories in the sentence inFigure 1can combine. Only N PP/NP of board as cannot.3" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "S\\NP) (S\\NP)/(S\\NP) ((S\\NP)/PP)/NP NP/" |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "These were corrected to (S[b]\\NP)/NP." |
|
} |
|
} |
|
} |
|
} |