{
"paper_id": "S14-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:47.128979Z"
},
"title": "More or less supervised supersense tagging of Twitter",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"postCode": "140",
"country": "Denmark Njalsgade"
}
},
"email": "[email protected]"
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"postCode": "140",
"country": "Denmark Njalsgade"
}
},
"email": ""
},
{
"first": "Mart\u00ednez",
"middle": [],
"last": "Alonso",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"postCode": "140",
"country": "Denmark Njalsgade"
}
},
"email": "[email protected]"
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"postCode": "140",
"country": "Denmark Njalsgade"
}
},
"email": "[email protected]"
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"postCode": "140",
"country": "Denmark Njalsgade"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present two Twitter datasets annotated with coarse-grained word senses (supersenses), as well as a series of experiments with three learning scenarios for supersense tagging: weakly supervised learning, as well as unsupervised and supervised domain adaptation. We show that (a) off-the-shelf tools perform poorly on Twitter, (b) models augmented with embeddings learned from Twitter data perform much better, and (c) errors can be reduced using type-constrained inference with distant supervision from WordNet.",
"pdf_parse": {
"paper_id": "S14-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "We present two Twitter datasets annotated with coarse-grained word senses (supersenses), as well as a series of experiments with three learning scenarios for supersense tagging: weakly supervised learning, as well as unsupervised and supervised domain adaptation. We show that (a) off-the-shelf tools perform poorly on Twitter, (b) models augmented with embeddings learned from Twitter data perform much better, and (c) errors can be reduced using type-constrained inference with distant supervision from WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Supersense tagging (SST, Ciaramita and Altun, 2006) is the task of assigning high-level ontological classes to open-class words (here, nouns and verbs). It is thus a coarse-grained word sense disambiguation task. The labels are based on the lexicographer file names for Princeton WordNet (Fellbaum, 1998) . They include 15 senses for verbs and 26 for nouns (see Table 1 ). While WordNet also provides catch-all supersenses for adjectives and adverbs, these are grammatically, not semantically motivated, and do not provide any higherlevel abstraction (recently, however, Tsvetkov et al. (2014) proposed a semantic taxonomy for adjectives). They will not be considered in this paper.",
"cite_spans": [
{
"start": 25,
"end": 51,
"text": "Ciaramita and Altun, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 288,
"end": 304,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 571,
"end": 593,
"text": "Tsvetkov et al. (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Coarse-grained categories such as supersenses are useful for downstream tasks such as questionanswering (QA) and open relation extraction (RE). SST is different from NER in that it has a larger set of labels and in the absence of strong orthographic cues (capitalization, quotation marks, etc.). Moreover, supersenses can be applied to any of the lexical parts of speech and not only proper names. Also, while high-coverage gazetteers can be found for named entity recognition, the lexical resources available for SST are very limited in coverage.",
"cite_spans": [
{
"start": 104,
"end": 108,
"text": "(QA)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Twitter is a popular micro-blogging service, which, among other things, is used for knowledge sharing among friends and peers. Twitter posts (tweets) announce local events, say talks or concerts, present facts about pop stars or programming languages, or simply express the opinions of the author on some subject matter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supersense tagging is relevant for Twitter, because it can aid e.g. QA and open RE. If someone posts a message saying that some LaTeX module now supports \"drawing trees\", it is important to know whether the post is about drawing natural objects such as oaks or pines, or about drawing tree-shaped data representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is, to the best of our knowledge, the first work to address the problem of SST for Twitter. While there exist corpora of newswire and literary texts that are annotated with supersenses, e.g., SEMCOR (Miller et al., 1994) , no data is available for microblogs or related domains. This paper introduces two new data sets.",
"cite_spans": [
{
"start": 210,
"end": 231,
"text": "(Miller et al., 1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, most, if not all, of previous work on SST has relied on gold standard part-of-speech (POS) tags as input. However, in a domain such as Twitter, which has proven to be challenging for POS tagging (Foster et al., 2011; Ritter et al., 2011) , results obtained under the assumption of available perfect POS information are almost meaningless for any real-life application.",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Foster et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 230,
"end": 250,
"text": "Ritter et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we instead use predicted POS tags and investigate experimental settings in which one or more of the following resources are available to us:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 a large corpus of unlabeled Twitter data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Princeton WordNet (Fellbaum, 1998) ;",
"cite_spans": [
{
"start": 20,
"end": 36,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 SEMCOR (Miller et al., 1994) ; and",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Miller et al., 1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 a small corpus of Twitter data annotated with supersenses. We approach SST of Twitter using various degrees of supervision for both learning and domain adaptation (here, from newswire to Twitter). In weakly supervised learning, only unlabeled data and the lexical resource WordNet are available to us. While the quality of lexical resources varies, this is the scenario for most languages. We present an approach to weakly supervised SST based on type-constrained EM-trained second-order HMMs (HMM2s) with continuous word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, when using supervised learning, we can distinguish between two degrees of supervision for domain adaptation. For some languages, e.g., Basque, English, Swedish, sense-annotated resources exist, but these corpora are all limited to newswire or similar domains. In such languages, unsupervised domain adaptation (DA) techniques can be used to exploit these resources. The setting does not presume labeled data from the target domain. We use discriminative models for unsupervised domain adaptation, training on SEMCOR and testing on Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we annotated data sets for Twitter, making supervised domain adaptation (SU) experiments possible. For supervised domain adaptation, we use the annotated training data sets from both the newswire and the Twitter domain, as well as WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For both unsupervised domain adaptation and supervised domain adaptation, we use structured perceptron (Collins, 2002) , i.e., a discriminative HMM model, and search-based structured prediction (SEARN) (Daume et al., 2009) . We augment both the EM-trained HMM2, discriminative HMMs and SEARN with type constraints and continuous word representations. We also experimented with conditional random fields (Lafferty et al., 2001 ), but obtained worse or similar results than with the other models.",
"cite_spans": [
{
"start": 103,
"end": 118,
"text": "(Collins, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 202,
"end": 222,
"text": "(Daume et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 403,
"end": 425,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions In this paper, we present two Twitter data sets with manually annotated supersenses, as well as a series of experiments with these data sets. These experiments cover existing approaches to related tasks, as well as some new methods. In particular, we present type-constrained extensions of discriminative HMMs and SEARN sequence models with continuous word representations that perform well. We show that when no in-domain labeled data is available, type constraints improve model performance considerably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our best models achieve a weighted average F1 score of 57.1 over nouns and verbs on our main evaluation data set, i.e., a 20% error reduction over the most frequent sense baseline. The two annotated Twitter data sets are publicly released for download at https://github.com/coastalcph/ supersense-data-twitter. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distant supervision in these experiments was implemented by only allowing a system to predict a certain supersense for a given word if that supersense had either been observed in the training data, or, for unobserved words, if the sense was the most frequent sense in WordNet. If the word did not appear in the training data nor in WordNet, no filtering was applied. We refer to the distantsupervision strategy as type constraints. Distant supervision was implemented differently in SEARN and the HMM model. SEARN decomposes sequential labelling into a series of binary classifications. To constrain the labels we simply pick the top-scoring sense for each token from the allowed set. Structured perceptron uses Viterbi decoding. Here we set the emission probabilities for disallowed senses to negative infinity and decode as usual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distant supervision",
"sec_num": "2.1"
},
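As an illustration of the mechanism above, here is a minimal sketch of type-constrained Viterbi decoding, assuming dense NumPy score matrices; the function and variable names are ours, not the authors' code. Disallowed senses receive negative-infinity emission scores, so no path can pass through them:

```python
import numpy as np

def constrained_viterbi(emissions, transitions, allowed):
    """First-order Viterbi decoding with type constraints.

    emissions:   (n_tokens, n_labels) emission scores
    transitions: (n_labels, n_labels) scores; transitions[i, j] = score(i -> j)
    allowed:     list of sets; allowed[t] holds the label ids permitted for
                 token t (senses seen in training, the WordNet MFS for unseen
                 words, or all labels if the word is unknown everywhere)
    """
    n, m = emissions.shape
    scores = emissions.copy()
    for t in range(n):                      # distant supervision as a hard mask
        for y in range(m):
            if y not in allowed[t]:
                scores[t, y] = -np.inf
    delta = np.empty((n, m))
    backptr = np.zeros((n, m), dtype=int)
    delta[0] = scores[0]
    for t in range(1, n):
        cand = delta[t - 1][:, None] + transitions   # (m, m) candidate scores
        backptr[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + scores[t]
    path = [int(delta[-1].argmax())]                 # backtrace the best path
    for t in range(n - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```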
{
"text": "The HMM2 model is a second-order hidden Markov model (Mari et al., 1997; Thede and Harper, 1999) using logistic regression to estimate emission probabilities. In addition we constrain Figure 1: HMM2 with continuous word representations the inference space of the HMM2 tagger using type-level tag constraints derived from WordNet, leading to roughly the model proposed by Li et al. (2012) , who used Wiktionary as a (part-ofspeech) tag dictionary. The basic feature model of Li et al. (2012) is augmented with continuous word representation features as shown in Figure 1 , and our logistic regression model thus works over a combination of discrete and continuous variables when estimating emission probabilities. We do 50 passes over the data as in Li et al. (2012) . We introduce two simplifications for the HMM2 model. First, we only use the most frequent senses (k = 1) in WordNet as type constraints. The most frequent senses seem to better direct the EM search for a local optimum, and we see dramatic drops in performance on held-out data when we include more senses for the words covered by WordNet. Second, motivated by computational concerns, we only train and test on sequences of (predicted) nouns and verbs, leaving out all other word classes. Our supervised models performed slightly worse on shortened sequences, and it is an open question whether the HMM2 models would perform better if we could train them on full sentences.",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "(Mari et al., 1997;",
"ref_id": "BIBREF13"
},
{
"start": 73,
"end": 96,
"text": "Thede and Harper, 1999)",
"ref_id": "BIBREF23"
},
{
"start": 371,
"end": 387,
"text": "Li et al. (2012)",
"ref_id": "BIBREF12"
},
{
"start": 474,
"end": 490,
"text": "Li et al. (2012)",
"ref_id": "BIBREF12"
},
{
"start": 749,
"end": 765,
"text": "Li et al. (2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 561,
"end": 569,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Weakly supervised HMMs",
"sec_num": "2.2"
},
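The paper does not spell out how the logistic-regression emissions enter the HMM2, so the following is only a sketch of one standard construction: fit a discriminative model of p(label | features) and invert it with Bayes' rule, log p(x | y) = log p(y | x) - log p(y) + const. The scikit-learn call and the toy data are stand-ins, and the EM training loop is not shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))     # stand-in token feature vectors
y_train = rng.integers(0, 4, size=200)   # stand-in supersense label ids
X_test = rng.normal(size=(15, 10))       # one "sentence" of 15 tokens

# Discriminative model of p(label | token features).
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Bayes inversion: log p(x | y) = log p(y | x) - log p(y) + const.
log_posterior = clf.predict_log_proba(X_test)            # (15, 4)
log_prior = np.log(np.bincount(y_train) / len(y_train))  # (4,)
emission_scores = log_posterior - log_prior              # usable in Viterbi
```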
{
"text": "We use two approaches to supervised sequential labeling, structured perceptron (Collins, 2002) and search-based structured prediction (SEARN) (Daume et al., 2009) . The structured perceptron is a in-house reimplementation of Ciaramita and Altun (2006) . 1 SEARN performed slightly better than structured perceptron, so we use it as our inhouse baseline in the experiments below. In this section, we briefly explain the two approaches.",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(Collins, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 142,
"end": 162,
"text": "(Daume et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 225,
"end": 251,
"text": "Ciaramita and Altun (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured perceptron and SEARN",
"sec_num": "2.3"
},
{
"text": "1 https://github.com/coastalcph/ rungsted 2.3.1 Structured perceptron (HMM) Structured perceptron learning was introduced in Collins (2002) and is an extension of the online perceptron learning algorithm (Rosenblatt, 1958) with averaging (Freund and Schapire, 1999) to structured learning problems such as sequence labeling.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "Collins (2002)",
"ref_id": "BIBREF2"
},
{
"start": 204,
"end": 222,
"text": "(Rosenblatt, 1958)",
"ref_id": "BIBREF21"
},
{
"start": 238,
"end": 265,
"text": "(Freund and Schapire, 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured perceptron and SEARN",
"sec_num": "2.3"
},
{
"text": "In structured perceptron for sequential labeling, where we learn a function from sequences of data points x 1 . . . x n to sequences of labels y 1 . . . y n , we begin with a random weight vector w 0 initialized to all zeros. This weight vector is used to assign weights to transitions between labels, i.e., the discriminative counterpart of P (y i+1 | y i ), and emissions of tokens given labels, i.e., the counterpart of P (x i | y i ). We use Viterbi decoding to derive a best path\u0177 through the corresponding m\u00d7n lattice (with m the number of labels). Let the feature mapping \u03a6(x, y) be a function from a pair of sequences x, y to all the features that fired to make y the best path through the lattice for x. Now the structured update for a sequence of data points is simply \u03b1(\u03a6(x, y)\u2212\u03a6(x,\u0177)), i.e., a fixed positive update of features that fired to produce the correct sequence of labels, and a fixed negative update of features that fired to produce the best path under the model. Note that if y =\u0177, no features are updated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured perceptron and SEARN",
"sec_num": "2.3"
},
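The update described above fits in a few lines. A sketch with sparse feature maps as dictionaries; the weight averaging (Freund and Schapire, 1999) that the full algorithm uses is omitted for brevity:

```python
from collections import Counter

def perceptron_update(weights, phi_gold, phi_pred, alpha=1.0):
    """w += alpha * (Phi(x, y) - Phi(x, y_hat)).

    weights:  dict feature -> weight (updated in place)
    phi_gold: Counter of features fired by the gold label sequence y
    phi_pred: Counter of features fired by the Viterbi-best sequence y_hat
    If y == y_hat the two counters coincide and the net change is zero.
    """
    for f, v in phi_gold.items():   # positive update for the gold path
        weights[f] = weights.get(f, 0.0) + alpha * v
    for f, v in phi_pred.items():   # negative update for the predicted path
        weights[f] = weights.get(f, 0.0) - alpha * v
```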
{
"text": "SEARN is a way of decomposing structured prediction problems into search and history-based classification. In sequential labeling, we decompose the sequence of m tokens into m classification problems, conditioning our labeling of the ith token on the history of i \u2212 1 previous decisions. The cost of a mislabeling at training time is defined by a cost function over output structures. We use Hamming loss rather than F 1 as our cost function, and we then use stochastic gradient descent with quantile loss as a our cost-sensitive learning algorithm. We use a publicly available implementation. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEARN",
"sec_num": "2.3.2"
},
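A sketch of the test-time decomposition only, with classify and extract_features assumed to be supplied by the surrounding system: each token is labeled conditioned on the decisions already made. SEARN proper additionally interleaves policy learning with cost-sensitive classification, which this fragment does not reproduce:

```python
def greedy_sequence_label(tokens, classify, extract_features):
    """Decompose sequence labeling into m history-based classifications.

    classify(features) -> label.
    extract_features(tokens, i, history) sees the tokens, the position,
    and the i-1 previous decisions (e.g. as label-history features).
    """
    history = []
    for i in range(len(tokens)):
        feats = extract_features(tokens, i, history)
        history.append(classify(feats))
    return history
```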
{
"text": "We experiment with weakly supervised learning, unsupervised domain adaptation, as well as supervised domain adaptation, i.e., where our models are induced from hand-annotated newswire and Twitter data. Note that in all our experiments, we use predicted POS tags as input to the system, in order to produce a realistic estimate of SST performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Our experiments rely on combinations of available resources and newly annotated Twitter data sets made publicly available with this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Princeton WordNet (Fellbaum, 1998) is the main resource for SST. The lexicographer file names provide the label alphabet of the task, and the taxonomy defined therein is used not only in the baselines, but also as a feature in the discriminative models. We use the WordNet 3.0 distribution.",
"cite_spans": [
{
"start": 18,
"end": 34,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Available resources",
"sec_num": "3.1.1"
},
{
"text": "SEMCOR (Miller et al., 1994 ) is a senseannotated corpus composed of 80% newswire and 20% literary text, using the sense inventory from WordNet. SEMCOR comprises 23k distinct lemmas in 234k instances. We use the texts which have full annotations, leaving aside the verb-only texts (see Section 6).",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Miller et al., 1994",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Available resources",
"sec_num": "3.1.1"
},
{
"text": "We use a distributional semantic model in order to incorporate distributional information as features in our system. In particular, we use the neural-network based models from (Mikolov et al., 2013) , also referred as word embeddings. This model makes use of skip-grams (n-grams that do not need to be consecutive) within a word window to calculate continuous-valued vector representations from a recurrent neural network. These distributional models have been able to outperform state of the art in the SemEval-2012 Task 2 (Measuring degrees of relational similarity). We calculate the embeddings from an in-house corpus of 57m English tweets using a window size 5 and yielding vectors of 100 dimensions.",
"cite_spans": [
{
"start": 176,
"end": 198,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Available resources",
"sec_num": "3.1.1"
},
{
"text": "We also use the first 20k tweets of the 57m tweets to train our HMM2 models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Available resources",
"sec_num": "3.1.1"
},
{
"text": "While an annotated newswire corpus and a highquality lexical resource already enable us to train, we also need at least a small sample of annotated tweets data to evaluate SST for Twitter. Furthermore, if we want to experiment with supervised SST, we also need sufficient annotated Twitter data to learn the distribution of sense tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
{
"text": "This paper presents two data sets: (a) supersense annotations for the POS+NER-annotated data set described in Ritter et al. (2011) , which we use for training, development and evaluation, using the splits proposed in Derczynski et al. (2013) , and (b) supersense annotations for a sample of 200 tweets, which we use for additional, out-of-sample evaluation. We call these data sets RITTER-{TRAIN,DEV,EVAL} and IN-HOUSE-EVAL, respectively. The IN-HOUSE-EVAL dataset was downloaded in 2013 and is a sample of tweets that contain links to external homepages but are otherwise unbiased. It was previously used (with partof-speech annotation) in (Plank et al., 2014) . Both data sets are made publicly available with this paper.",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "Ritter et al. (2011)",
"ref_id": "BIBREF20"
},
{
"start": 217,
"end": 241,
"text": "Derczynski et al. (2013)",
"ref_id": "BIBREF4"
},
{
"start": 641,
"end": 661,
"text": "(Plank et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
{
"text": "Supersenses are annotated with in spans defined by the BIO (Begin-Inside-Other) notation. To obtain the Twitter data sets, we carried out an annotation task. We first pre-annotated all data sets with WordNet's most frequent senses. If the word was not in WordNet and a noun, we assigned it the sense n.person. All other words were labeled O.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
{
"text": "Chains of nouns were altered to give every element the sense of the head noun, and the BI tags adjusted, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
{
"text": "Empire/B-n.loc State/B-n.loc Building/B-n.artifact was changed to Empire/B-n.artifact State/I-n.artifact Building/In.artifact For the RITTER data, three paid student annotators worked on different subsets of the preannotated data. They were asked to correct mistakes in both the BIO notation and the assigned supersenses. They were free to chose from the full label set, regardless of the pre-annotation. While the three annotators worked on separate parts, they overlapped on a small part of RITTER-TRAIN (841 tokens). On this subset, we computed agreement scores and annotation difficulties. The average raw agreement was 0.86 and Cohen's \u03ba 0.77. The majority of tokens received the O label by all annotators; this happended in 515 out of 841 cases. Excluding these instances to evaluate the performance on the more difficult content words, raw agreement dropped to 0.69 and Cohen's \u03ba to 0.69.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
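A sketch of this chain-normalization rule, assuming a chain is a maximal run of adjacent non-O tokens; the paper does not give its exact implementation:

```python
def propagate_head_sense(labels):
    """Relabel each noun chain with the supersense of its final (head)
    token and fix the B/I prefixes, e.g.
    ['B-n.loc', 'B-n.loc', 'B-n.artifact']
    -> ['B-n.artifact', 'I-n.artifact', 'I-n.artifact'].
    """
    fixed = list(labels)
    i = 0
    while i < len(fixed):
        if fixed[i] == "O":
            i += 1
            continue
        j = i
        while j + 1 < len(fixed) and fixed[j + 1] != "O":
            j += 1                          # j now indexes the head noun
        head = fixed[j].split("-", 1)[1]    # supersense of the head
        fixed[i] = "B-" + head
        for k in range(i + 1, j + 1):
            fixed[k] = "I-" + head
        i = j + 1
    return fixed
```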
{
"text": "The IN-HOUSE-EVAL data set was annotated by two different annotators, namely two of the authors of this article. Again, for efficiency reasons they worked on different subsets of the data, with an overlapping portion. Their average raw agreement was 0.65 and their Cohen's \u03ba 0.62. For this data set, we also compute F 1 , defined as usual as the harmonic mean of recall and precision. To compute this, we set one of the annotators as gold data and the other as predicted data. However, since F 1 is symmetrical, the order does not matter. The annotation F 1 gives us another estimate of annotation difficulty. We present the figures in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 636,
"end": 643,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1.2"
},
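The symmetry is easy to see: swapping which annotator counts as gold swaps precision and recall, and their harmonic mean is unchanged. A token-level sketch (the paper does not state whether matching is token- or span-based):

```python
def annotation_f1(labels_a, labels_b):
    """Token-level F1 between two annotators; symmetric in its arguments."""
    tp = sum(a == b and a != "O" for a, b in zip(labels_a, labels_b))
    pred = sum(b != "O" for b in labels_b)   # annotator B as "predictions"
    gold = sum(a != "O" for a in labels_a)   # annotator A as "gold"
    p = tp / pred if pred else 0.0
    r = tp / gold if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```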
{
"text": "For most word sense disambiguation studies, predicting the most frequent sense (MFS) of a word has been proven to be a strong baseline. Following this, our MFS baseline simply predicts the supersense of the most frequent WordNet sense for a tuple of a word and a part of speech. We use the part of speech predicted by the LAPOS tagger (Tsuruoka et al., 2011) . Any word not in Word-Net is labeled as noun.person, which is the most frequent sense overall in the training data. After tagging, we run a script to correct the BI tag prefixes, as described above for the annotation ask.",
"cite_spans": [
{
"start": 335,
"end": 358,
"text": "(Tsuruoka et al., 2011)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.2"
},
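The baseline lookup can be approximated with NLTK's WordNet interface, where synsets are ordered by sense frequency and lexname() returns the lexicographer file name; this sketch omits the LAPOS tagging step and the BI-prefix correction script:

```python
from nltk.corpus import wordnet as wn  # assumes nltk.download('wordnet')

def mfs_supersense(word, predicted_pos):
    """Most-frequent-sense supersense for a (word, predicted POS) pair.

    Words outside WordNet default to noun.person, the most frequent
    sense overall in the training data.
    """
    if predicted_pos.startswith("N"):
        wn_pos = wn.NOUN
    elif predicted_pos.startswith("V"):
        wn_pos = wn.VERB
    else:
        return "O"
    synsets = wn.synsets(word.lower(), pos=wn_pos)  # frequency-ordered
    if synsets:
        return synsets[0].lexname()  # e.g. 'noun.person', 'verb.motion'
    return "noun.person" if wn_pos == wn.NOUN else "O"
```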
{
"text": "We also compare to the performance of existing SST systems. In particular we use Sense-Learner (Mihalcea and Csomai, 2005) as a baseline, which produces estimates of the WordNet sense for each word. For these predictions, we retrieve the corresponding supersense. Finally, we use a publicly available reimplementation of Ciaramita and Altun (2006) by Michael Heilman, which reaches comparable performance on goldtagged SEMCOR. 3",
"cite_spans": [
{
"start": 95,
"end": 122,
"text": "(Mihalcea and Csomai, 2005)",
"ref_id": "BIBREF14"
},
{
"start": 321,
"end": 347,
"text": "Ciaramita and Altun (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.2"
},
{
"text": "We use the feature model of Paa\u00df and Reichartz (2009) in all our models, except the weakly supervised models. For the structured perceptron we set the number of passes over the training data on the held-out development data. The weakly supervised models use the default setting proposed in Li et al. (2012) . We have used the standard online setup for SEARN, which only takes one pass over the data.",
"cite_spans": [
{
"start": 28,
"end": 53,
"text": "Paa\u00df and Reichartz (2009)",
"ref_id": "BIBREF17"
},
{
"start": 290,
"end": 306,
"text": "Li et al. (2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.3"
},
{
"text": "The type of embedding is the same in all our experiments. For a given word the embedding feature is a 100 dimensional vector, which combines the embedding of the word with the embedding of adjacent words. The feature combination f e for a word w t is calculated as: f e (w t ) = 1 2 (e(w t\u22121 ) + e(w t+1 )) \u2212 2e(w t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.3"
},
{
"text": "3 http://www.ark.cs.cmu.edu/mheilman/ questions/SupersenseTagger-10-01-12.tar. gz where the factor of two is chosen heurestically to give more weight to the current word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.3"
},
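A sketch of this feature combination; the zero-vector fallback for out-of-vocabulary words and sentence boundaries is our assumption, as the paper does not specify it:

```python
import numpy as np

def embedding_feature(vectors, words, t, dim=100):
    """f_e(w_t) = (1/2) * (e(w_{t-1}) + e(w_{t+1})) - 2 * e(w_t).

    vectors: dict mapping word -> dim-dimensional embedding.
    Out-of-vocabulary words and positions past the sentence
    boundary fall back to the zero vector (an assumption).
    """
    zero = np.zeros(dim)

    def e(i):
        if 0 <= i < len(words):
            return vectors.get(words[i], zero)
        return zero

    return 0.5 * (e(t - 1) + e(t + 1)) - 2.0 * e(t)
```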
{
"text": "We also set a parameter k on development data for using the k-most frequent senses inWordNet as type constraints. Our supervised models are trained on SEMCOR+RITTER-TRAIN or simply RITTER-TRAIN, depending on what gave us the best performance on the held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.3"
},
{
"text": "The results are presented in Table 2 . We distinguish between three settings with various degrees of supervision: weakly supervised, which uses no domain annotated information, but solely relies on embeddings trained on unlabeled Twitter data; unsupervised domain adaptation (DA), which uses SemCor for supervised training; and supervised domain adaptation (SU), which uses annotated Twitter data in addition to the SemCor data for training.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "In each of the two domain adaptation settings, SEARN and HMM are evaluated with type constraints as distant supervision, and without for comparison. SEARN without embeddings or distant supervision serves as an in-house baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "In Table 3 we present the WordNet token coverage of predicted nouns and verbs in the development and evaluation data, as well as the interannotator agreement F 1 scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "All the results presented in Table 2 are (weighted averaged) F 1 measures obtained on predicted POS tags. Note that these results are considerably lower than results on supersense tagging newswire (up to 80 F 1 ) that assume gold standard POS tags (Ciaramita and Altun, 2006; Paa\u00df and Reichartz, 2009) .",
"cite_spans": [
{
"start": 248,
"end": 275,
"text": "(Ciaramita and Altun, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 276,
"end": 301,
"text": "Paa\u00df and Reichartz, 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The re-implementation of the state-of-the-art system improves slightly upon the most frequent sense baseline. SenseLearner does not seem to capture the relevant information and does not reach baseline performance. In other words, there is no off-the-shelf tool for supersense tagging of Twitter that does much better than assigning the most frequent sense to predicted nouns and verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Our weakly supervised model performs worse than the most frequent sense baseline. This is a negative result. It is, however, well-known from the word sense disambiguation literature that the MFS is a very strong baseline. Moreover, the EM learning problem is hard because of the large label set and weak distributional evidence for super- Table 3 : Properties of dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The unsupervised domain adaptation and fully supervised systems perform considerably better than this baseline across the board. In the unsupervised domain adaptation setup, we see huge improvements from using type constraints as distant supervision. In the supervised setup, we only see significant improvements adding type constraints for the structured perceptron (HMM), but not for search-based structured prediction (SEARN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "senses.",
"sec_num": null
},
{
"text": "For all the data sets, there is still a gap between model performance and human inter-annotator agreement levels (see Table 3 ), leaving some room for improvements. We hope that the release of the data sets will help further research into this.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "senses.",
"sec_num": null
},
{
"text": "We also experimented with the more coarsegrained classes proposed by Yuret and Yatbaz (2010) . Here our best model obtained an F 1 score for mental concepts (nouns) of 72.3%, and 62.6% for physical concepts, on RITTER-DEV. The overall F 1 score for verbs is 85.6%. The overall F 1 is 75.5%. Note that this result is not directly comparable to the figure (72.9%) reported in Yuret and Yatbaz (2010) , since they use different data sets, exclude verbs and make different assumptions, e.g., relying on gold POS tags.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "Yuret and Yatbaz (2010)",
"ref_id": "BIBREF29"
},
{
"start": 374,
"end": 397,
"text": "Yuret and Yatbaz (2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coarse-grained evaluation",
"sec_num": "4.1"
},
{
"text": "We have seen that inter-annotator agreements on supersense annotation are reliable at above .60 but far from perfect. The Hinton diagram in Table 2 presents the confusion matrix between our annotators on IN-HOUSE-EVAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "Errors in the prediction primarily stem from two sources: out-of-vocabulary words and incorrect POS tags. Figure 3 shows the distribution of senses over the words that were not contained in either the training data, WordNet, or the Twitter data used to learn the embeddings. The distribution follows a power law, with the most frequent sense being noun.person, followed by noun.group, and noun.artifact. The first two are related to NER categories, namely PER and ORG, and can be expected, since Twitter users frequently talk about new actors, musicians, and bands. Nouns of communication are largely related to films, but also include Twitter, Facebook, and other forms of social media. Note that verbs occur only towards the tail end of the distribution, i.e., there are very few unknown verbs, even in Twitter.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "Overall, our models perform best on labels with low lexical variability, such as quantities, states and times for nouns, as well as consumption, possession and stative for verbs. This is unsurprising, since these classes have lower out-of-vocabulary rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "With regards to the differences between source (SEMCOR) and target (Twitter) domains, we observe that the distribution of supersenses is always headed by the same noun categories like noun.person or noun.group, but the frequency of out-of-vocabulary stative verbs plummets in the target domain, as some semantic types are more closed class than others. There are for instance fewer possibilities for creating new time units (noun.time) or stative verbs like be than people or company names (noun.person or noun.group, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "The weakly supervised model HMM2 has higher precision (57% on RITTER-DEV) than recall (48.7%), which means that it often predicts words to not belong to a semantic class. This suggests an alternative strategy, which is to train a model on sequences of purely non-O instances. This would force the model to only predict O on words that do not appear in the reduced sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "One important source of error seems to be unreliable part-of-speech tagging. In particular we predict the wrong POS for 20-35% of the verbs across the data sets, and for 4-6.5% of the nouns. In the SEMCOR data, for comparability, we have wrongly predicted tags for 6-8% of the annotated tokens. Nevertheless, the error propagation of wrongly predicted nouns and verbs is partially compensated by our systems, since they are trained on imperfect input, and thus it becomes possible for the systems to predict a noun supersense for a verb and viceversa. In our data we have found e.g. that the noun Thanksgiving was incorrectly tagged as a verb, but its supersense was correctly predicted to be noun.time, and that the verb guess had been mistagged as noun but the system ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "There has been relatively little previous work on supersense tagging, and to the best of our knowledge, all of it has been limited to English newswire and literature (SEMCOR and SENSEVAL). The task of supersense tagging was first introduced by Ciaramita and Altun (2006) , who used a structured perceptron trained and evaluated on SEMCOR via 5-fold cross validation. Their evaluation included a held-out development set on each fold that was used to estimate the number of epochs. They used additional training data containing only verbs. More importantly, they relied on gold standard POS tags. Their overall F 1 score on SEMCOR was 77.1. Reichartz and Paa\u00df (Reichartz and Paa\u00df, 2008; Paa\u00df and Reichartz, 2009) extended this work, using a CRF model as well as LDA topic features. They report an F 1 score of 80.2, again relying on gold standard POS features. Our implementation follows their setup and feature model, but we rely on predicted POS features, not gold standard features.",
"cite_spans": [
{
"start": 244,
"end": 270,
"text": "Ciaramita and Altun (2006)",
"ref_id": "BIBREF1"
},
{
"start": 640,
"end": 685,
"text": "Reichartz and Paa\u00df (Reichartz and Paa\u00df, 2008;",
"ref_id": "BIBREF19"
},
{
"start": 686,
"end": 711,
"text": "Paa\u00df and Reichartz, 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Supersenses provide information similar to higher-level distributional clusters, but more interpretable, and have thus been used as highlevel features in various tasks, such as preposition sense disambiguation, noun compound interpretation, and metaphor detection (Ye and Baldwin, 2007; Tratz and Hovy, 2010; Tsvetkov et al., 2013) . Princeton WordNet only provides a fully developed taxonomy of supersenses for verbs and nouns, but Tsvetkov et al. (2014) have recently proposed an extension of the taxonomy to cover adjectives. Outside of English, supersenses have been annotated for Arabic Wikipedia articles by Schneider et al. (2012) .",
"cite_spans": [
{
"start": 264,
"end": 286,
"text": "(Ye and Baldwin, 2007;",
"ref_id": "BIBREF28"
},
{
"start": 287,
"end": 308,
"text": "Tratz and Hovy, 2010;",
"ref_id": "BIBREF24"
},
{
"start": 309,
"end": 331,
"text": "Tsvetkov et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 433,
"end": 455,
"text": "Tsvetkov et al. (2014)",
"ref_id": "BIBREF27"
},
{
"start": 614,
"end": 637,
"text": "Schneider et al. (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In addition, a few researchers have tried to solve coarse-grained word sense disambiguation problems that are very similar to supersense tagging. Kohomban and Lee (2005) and Kohomban and Lee (2007) also propose to use lexicographer file identifers from Princeton WordNet senses (supersenses) and, in addition, discuss how to retrieve fine-grained senses from those predictions. They evaluate their model on all-words data from SENSEEVAL-2 and SENSEEVAL-3. They use a classification approach rather than structured prediction.",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "Kohomban and Lee (2005)",
"ref_id": "BIBREF9"
},
{
"start": 174,
"end": 197,
"text": "Kohomban and Lee (2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Yuret and Yatbaz (2010) present a weakly unsupervised approach to this problem, still evaluating on SENSEVAL-2 and SENSEVAL-3. They focus only on nouns, relying on gold part-of-speech, but also experiment with a coarse-grained mapping, using only three high level classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "For Twitter, we are aware of little previous work on word sense disambiguation. Gella et al. (2014) present lexical sample word sense disambiguation annotation of 20 target nouns on Twitter, but no experimental results with this data. There has also been related work on disambiguation to Wikipedia for Twitter (Cassidy et al., 2012) .",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "Gella et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 311,
"end": 333,
"text": "(Cassidy et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In sum, existing work on supersense tagging and coarse-grained word sense disambiguation for English has to the best of our knowledge all focused on newswire and literature. Moreover, they all rely on gold standard POS information, making previous performance estimates rather optimistic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we present two Twitter data sets with manually annotated supersenses, as well as a series of experiments with these data sets. The data is publicly available for download.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In this article we have provided, to the best of our knowledge, the first supersense tagger for Twitter. We have shown that off-the-shelf tools perform poorly on Twitter, and we offer two strategies-namely distant supervision and the usage of embeddings as features-that can be combined to improve SST for Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We propose that distant supervision implemented as type constraints during decoding is a viable method to limit the mispredictions of supersenses by our systems, thereby enforcing predicted senses that a word has in WordNet. This approach compensates for the size limitations of the training data and mitigates the out-of-vocabulary effect, but is still subject to the coverage of Word-Net; which is far from perfect for words coming from high-variability sources such as Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Using distributional semantics as features in form of word embeddings also improves the prediction of supersenses, because it provides semantic information for words, regardless of whether they have been observed the training data. This method does not require a hand-created knowledge base like WordNet, and is a promising technique for domain adaptation of supersense tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://hunch.net/\u02dcvw/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analysis and enhancement of wikification for microblogs with context expansion",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Hongzhao",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "12",
"issue": "",
"pages": "441--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Cassidy, Heng Ji, Lev-Arie Ratinov, Arkaitz Zu- biaga, and Hongzhao Huang. 2012. Analysis and enhancement of wikification for microblogs with context expansion. In COLING, volume 12, pages 441-456.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "594--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and informa- tion extraction with a supersense sequence tagger. In Proc. of EMNLP, pages 594-602, Sydney, Australia, July.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Search-based structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "297--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daume, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learn- ing, pages 297-325.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Twitter part-of-speech tagging for all: overcoming sparse and noisy data",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2013,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: overcoming sparse and noisy data. In RANLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "WordNet: an electronic lexical database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: an electronic lexical database. MIT Press USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From news to comments: Resources and benchmarks for parsing the language of Web 2.0",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Josef",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Deirde",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Josef Le Roux, Joakim Nivre, Deirde Hogan, and Josef van Genabith. 2011. From news to comments: Resources and benchmarks for parsing the language of Web 2.0. In IJCNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Large margin classification using the perceptron algorithm. Machine Learning",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schapire",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "37",
"issue": "",
"pages": "277--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Freund and Robert Schapire. 1999. Large margin classification using the perceptron algorithm. Ma- chine Learning, 37:277-296.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "One sense per tweeter and other lexical semantic tales of Twitter",
"authors": [
{
"first": "Spandana",
"middle": [],
"last": "Gella",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spandana Gella, Paul Cook, and Timothy Baldwin. 2014. One sense per tweeter and other lexical se- mantic tales of Twitter. In EACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning semantic classes for word sense disambiguation",
"authors": [
{
"first": "Upali",
"middle": [],
"last": "Kohomban",
"suffix": ""
},
{
"first": "Wee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Upali Kohomban and Wee Lee. 2005. Learning se- mantic classes for word sense disambiguation. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Optimizing classifier performance in word sense disambiguation by redefining word sense classes",
"authors": [
{
"first": "Upali",
"middle": [],
"last": "Kohomban",
"suffix": ""
},
{
"first": "Wee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2007,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Upali Kohomban and Wee Lee. 2007. Optimizing classifier performance in word sense disambiguation by redefining word sense classes. In IJCAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: prob- abilistic models for segmenting and labeling se- quence data. In ICML.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wiki-ly supervised part-of-speech tagging",
"authors": [
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen Li, Jo\u00e3o Gra\u00e7a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic word recognition based on second-order hidden Markov models",
"authors": [
{
"first": "Jean-Francois",
"middle": [],
"last": "Mari",
"suffix": ""
},
{
"first": "Jean-Paul",
"middle": [],
"last": "Haton",
"suffix": ""
},
{
"first": "Abdelaziz",
"middle": [],
"last": "Kriouile",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "5",
"issue": "1",
"pages": "22--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Francois Mari, Jean-Paul Haton, and Abdelaziz Kriouile. 1997. Automatic word recognition based on second-order hidden Markov models. IEEE Transactions on Speech and Audio Processing, 5(1):22-25.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Senselearner: Word sense disambiguation for all words in unrestricted text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Andras",
"middle": [],
"last": "Csomai",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL 2005 on Interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "53--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Andras Csomai. 2005. Sense- learner: Word sense disambiguation for all words in unrestricted text. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 53-56. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their compo- sitionality. In NIPS.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using a semantic concordance for sense identification",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "Shari",
"middle": [],
"last": "Landes",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"G"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "240--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identifica- tion. In Proceedings of the workshop on Human Language Technology, pages 240-243. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting semantic constraints for estimating supersenses with CRFs",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "Paa\u00df",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Reichartz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of the Ninth SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "485--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard Paa\u00df and Frank Reichartz. 2009. Exploit- ing semantic constraints for estimating supersenses with CRFs. In Proc. of the Ninth SIAM Interna- tional Conference on Data Mining, pages 485-496, Sparks, Nevada, May.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning part-of-speech taggers with inter-annotator agreement loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of EACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Estimating Supersenses with Conditional Random Fields",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Reichartz",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Paa\u00df",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ECMLPKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Reichartz and Gerhard Paa\u00df. 2008. Estimating Supersenses with Conditional Random Fields. In Proceedings of ECMLPKDD.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Named entity recognition in tweets: an experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: an ex- perimental study. In EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The perceptron: a probabilistic model for information storage and organization in the brain",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Rosenblatt",
"suffix": ""
}
],
"year": 1958,
"venue": "Psychological Review",
"volume": "65",
"issue": "6",
"pages": "386--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Rosenblatt. 1958. The perceptron: a probabilis- tic model for information storage and organization in the brain. Psychological Review, 65(6):386-408.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Coarse lexical semantic annotation with supersenses: an arabic case study",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Mohit",
"suffix": ""
},
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "253--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Schneider, Behrang Mohit, Kemal Oflazer, and Noah A Smith. 2012. Coarse lexical semantic an- notation with supersenses: an arabic case study. In Proceedings of the 50th Annual Meeting of the As- sociation for Computational Linguistics, pages 253- 258. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A second-order hidden Markov model for part-of-speech tagging",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Thede",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Thede and Mary Harper. 1999. A second-order hidden Markov model for part-of-speech tagging. In ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Isi: automatic classification of relations between nominals using a maximum entropy classifier",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Tratz",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "222--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Tratz and Eduard Hovy. 2010. Isi: automatic classification of relations between nominals using a maximum entropy classifier. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 222-225. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning with lookahead: can history-based models rival globally optimized models?",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
}
],
"year": 2011,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Yusuke Miyao, and Jun'ichi Kazama. 2011. Learning with lookahead: can history-based models rival globally optimized mod- els? In CoNLL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Cross-lingual metaphor detection using common semantic features",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Mukomel",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Elena Mukomel, and Anatole Gersh- man. 2013. Cross-lingual metaphor detection us- ing common semantic features. Meta4NLP 2013, page 45.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Augmenting english adjective senses with supersenses",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Archna",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Nathan Schneider, Dirk Hovy, Archna Bhatia, Manaal Faruqui, and Chris Dyer. 2014. Augmenting english adjective senses with super- senses. In Proc. of LREC.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Melb-yb: Preposition sense disambiguation using rich semantic features",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "241--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ye and Timothy Baldwin. 2007. Melb-yb: Preposition sense disambiguation using rich seman- tic features. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 241-244. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The noisy channel model for unsupervised word sense disambiguation",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Mehmet",
"middle": [],
"last": "Yatbaz",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "",
"pages": "111--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Yuret and Mehmet Yatbaz. 2010. The noisy channel model for unsupervised word sense disam- biguation. Computational Linguistics, 36:111-127.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Inter-annotator confusion matrix on TWITTER-EVAL. n .a tt ri b u te n o u n .r el at io n v er b .c o g n it io n v er b .c re at io n v er b .e m o ti o n v er b .m o ti o n v er b .p er ce p ti o n v er b .s ta ti v e",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Sense distribution of OOV words. still predicted the correct verb.cognition as supersense.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>: The 41 noun and verb supersenses in</td></tr><tr><td>WordNet</td></tr><tr><td>2 More or less supervised models</td></tr><tr><td>This sections covers the varying degree of super-</td></tr><tr><td>vision of our systems as well as the usage of type</td></tr><tr><td>constraints as distant supervision.</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "Weighted F1 average over 41 supersenses.",
"content": "<table/>",
"html": null
}
}
}
}