{
"paper_id": "N13-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:41:21.680471Z"
},
"title": "Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'connor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"postCode": "60637",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15213",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at: http://www.ark.cs.cmu.edu/TweetNLP This paper describes release 0.3 of the \"CMU Twitter Part-of-Speech Tagger\" and annotated data.",
"pdf_parse": {
"paper_id": "N13-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at: http://www.ark.cs.cmu.edu/TweetNLP This paper describes release 0.3 of the \"CMU Twitter Part-of-Speech Tagger\" and annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online conversational text, typified by microblogs, chat, and text messages, 1 is a challenge for natural language processing. Unlike the highly edited genres that conventional NLP tools have been developed for, conversational text contains many nonstandard lexical items and syntactic patterns. These are the result of unintentional errors, dialectal variation, conversational ellipsis, topic diversity, and creative use of language and orthography (Eisenstein, 2013) . An example is shown in Fig. 1 . As a result of this widespread variation, standard modeling assumptions that depend on lexical, syntactic, and orthographic regularity are inappropriate. There 1 Also referred to as computer-mediated communication. is preliminary work on social media part-of-speech (POS) tagging (Gimpel et al., 2011) , named entity recognition (Ritter et al., 2011; Liu et al., 2011) , and parsing (Foster et al., 2011) , but accuracy rates are still significantly lower than traditional well-edited genres like newswire. Even web text parsing, which is a comparatively easier genre than social media, lags behind newspaper text (Petrov and McDonald, 2012) , as does speech transcript parsing (McClosky et al., 2010) .",
"cite_spans": [
{
"start": 450,
"end": 468,
"text": "(Eisenstein, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 783,
"end": 804,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 832,
"end": 853,
"text": "(Ritter et al., 2011;",
"ref_id": "BIBREF32"
},
{
"start": 854,
"end": 871,
"text": "Liu et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 886,
"end": 907,
"text": "(Foster et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 1117,
"end": 1144,
"text": "(Petrov and McDonald, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 1181,
"end": 1204,
"text": "(McClosky et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 494,
"end": 500,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle the challenge of novel words and constructions, we create a new Twitter part-of-speech tagger-building on previous work by Gimpel et al. (2011) -that includes new large-scale distributional features. This leads to state-of-the-art results in POS tagging for both Twitter and Internet Relay Chat (IRC) text. We also annotated a new dataset of tweets with POS tags, improved the annotations in the previous dataset from Gimpel et al., and developed annotation guidelines for manual POS tagging of tweets. We release all of these resources to the research community:",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "Gimpel et al. (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 an open-source part-of-speech tagger for online conversational text ( \u00a72); \u2022 unsupervised Twitter word clusters ( \u00a73); \u2022 an improved emoticon detector for conversational text ( \u00a74);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 POS annotation guidelines ( \u00a75.1); and \u2022 a new dataset of 547 manually POS-annotated tweets ( \u00a75).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our tagging model is a first-order maximum entropy Markov model (MEMM), a discriminative sequence model for which training and decoding are extremely efficient (Ratnaparkhi, 1996; McCallum et al., 2000) . 2 The probability of a tag y t is conditioned on the input sequence x and the tag to its left y t\u22121 , and is parameterized by a multiclass logistic regression:",
"cite_spans": [
{
"start": 160,
"end": 179,
"text": "(Ratnaparkhi, 1996;",
"ref_id": "BIBREF31"
},
{
"start": 180,
"end": 202,
"text": "McCallum et al., 2000)",
"ref_id": "BIBREF24"
},
{
"start": 205,
"end": 206,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "p(y t = k | y t\u22121 , x, t; \u03b2) \u221d exp \u03b2 (trans) y t\u22121 ,k + j \u03b2 (obs) j,k f j (x, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "We use transition features for every pair of labels, and extract base observation features from token t and neighboring tokens, and conjoin them against all K = 25 possible outputs in our coarse tagset (Appendix A). Our feature sets will be discussed below in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
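{
"text": "As a minimal illustration of the local model above (a sketch only, not the released implementation; the array names and shapes are assumptions), the per-token conditional distribution can be computed as:\n\nimport numpy as np\n\ndef local_distribution(trans, obs_weights, feats, prev_tag):\n    # trans: (K, K) transition weights; obs_weights: (J, K) observation weights;\n    # feats: length-J feature vector extracted around token t.\n    scores = trans[prev_tag] + feats @ obs_weights  # unnormalized log-scores over K tags\n    scores -= scores.max()                          # stabilize before exponentiating\n    p = np.exp(scores)\n    return p / p.sum()                              # multiclass logistic regression over tags",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},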
{
"text": "Decoding. For experiments reported in this paper, we use the O(|x|K 2 ) Viterbi algorithm for prediction; K is the number of tags. This exactly maximizes p(y | x), but the MEMM also naturally allows a faster O(|x|K) left-to-right greedy decoding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "for t = 1 . . . |x|: y t \u2190 arg max k p(y t = k |\u0177 t\u22121 , x, t; \u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "which we find is 3 times faster and yields similar accuracy as Viterbi (an insignificant accuracy decrease of less than 0.1% absolute on the DAILY547 test set discussed below). Speed is paramount for social media analysis applications-which often require the processing of millions to billions of messages-so we make greedy decoding the default in the released software.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
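{
"text": "A minimal Python sketch of the greedy decoder just described (hypothetical names; 'local_distribution' stands in for the trained MEMM's conditional p(y_t = k | y_{t-1}, x, t)):\n\ndef greedy_decode(n_tokens, feats_at, local_distribution, start_tag):\n    # Left-to-right greedy decoding: O(|x|K) instead of Viterbi's O(|x|K^2).\n    tags, prev = [], start_tag\n    for t in range(n_tokens):\n        p = local_distribution(prev, feats_at(t))  # distribution over K tags at position t\n        prev = int(p.argmax())                     # commit to the best tag and move on\n        tags.append(prev)\n    return tags",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},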
{
"text": "2 Although when compared to CRFs, MEMMs theoretically suffer from the \"label bias\" problem (Lafferty et al., 2001) , our system substantially outperforms the CRF-based taggers of previous work; and when comparing to Gimpel et al. system with similar feature sets, we observed little difference in accuracy. This is consistent with conventional wisdom that the quality of lexical features is much more important than the parametric form of the sequence model, at least in our setting: part-ofspeech tagging with a small labeled training set. This greedy tagger runs at 800 tweets/sec. (10,000 tokens/sec.) on a single CPU core, about 40 times faster than Gimpel et al.'s system. The tokenizer by itself ( \u00a74) runs at 3,500 tweets/sec. 3 Training and regularization. During training, the MEMM log-likelihood for a tagged tweet x, y is the sum over the observed token tags y t , each conditional on the tweet being tagged and the observed previous tag (with a start symbol before the first token in x),",
"cite_spans": [
{
"start": 91,
"end": 114,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF18"
},
{
"start": 734,
"end": 735,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "(x, y, \u03b2) = |x| t=1 log p(y t | y t\u22121 , x, t; \u03b2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "We optimize the parameters \u03b2 with OWL-QN, an L 1 -capable variant of L-BFGS (Andrew and Gao, 2007; Liu and Nocedal, 1989) to minimize the regularized objective",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Andrew and Gao, 2007;",
"ref_id": "BIBREF1"
},
{
"start": 99,
"end": 121,
"text": "Liu and Nocedal, 1989)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "arg min \u03b2 \u2212 1 N x,y (x, y, \u03b2) + R(\u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "where N is the number of tokens in the corpus and the sum ranges over all tagged tweets x, y in the training data. We use elastic net regularization (Zou and Hastie, 2005) , which is a linear combination of L 1 and L 2 penalties; here j indexes over all features:",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "(Zou and Hastie, 2005)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "R(\u03b2) = \u03bb 1 j |\u03b2 j | + 1 2 \u03bb 2 j \u03b2 2 j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
{
"text": "Using even a very small L 1 penalty eliminates many irrelevant or noisy features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},
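{
"text": "For concreteness, the elastic net penalty above is straightforward to compute directly (a sketch; the (0.25, 2) setting is the one reported in \u00a76.1):\n\nimport numpy as np\n\ndef elastic_net_penalty(beta, lam1, lam2):\n    # R(beta) = lam1 * sum_j |beta_j| + (lam2 / 2) * sum_j beta_j^2\n    return lam1 * np.abs(beta).sum() + 0.5 * lam2 * (beta ** 2).sum()\n\nbeta = np.array([0.0, -0.3, 1.2])\nprint(elastic_net_penalty(beta, lam1=0.25, lam2=2.0))  # 0.375 + 1.53 = 1.905",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MEMM Tagger",
"sec_num": "2"
},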
{
"text": "Our POS tagger can make use of any number of possibly overlapping features. While we have only a small amount of hand-labeled data for training, we also have access to billions of tokens of unlabeled conversational text from the web. Previous work has shown that unlabeled text can be used to induce unsupervised word clusters which can improve the performance of many supervised NLP tasks (Koo et al., 2008; Turian et al., 2010; T\u00e4ckstr\u00f6m et al., 2012, inter alia) . We use a similar approach here to improve tagging performance for online conversational text. We also make our induced clusters publicly available in the hope that they will be useful for other NLP tasks in this genre. ",
"cite_spans": [
{
"start": 390,
"end": 408,
"text": "(Koo et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 409,
"end": 429,
"text": "Turian et al., 2010;",
"ref_id": "BIBREF36"
},
{
"start": 430,
"end": 465,
"text": "T\u00e4ckstr\u00f6m et al., 2012, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Word Clusters",
"sec_num": "3"
},
{
"text": "[Figure 2: Example word clusters (HMM classes): we list the most probable words, starting with the most probable, in descending order. Boldfaced words appear in the example tweet (Figure 1). The binary strings are root-to-leaf paths through the binary cluster tree. Cluster G4 (path 111010110001): <3 xoxo <33 xo <333 #love s2 <URL-twitition.com> #neversaynever <3333. For example usage, see e.g. search.twitter.com, bing.com/social and urbandictionary.com.]",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 2",
"ref_id": null
},
{
"start": 248,
"end": 256,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unsupervised Word Clusters",
"sec_num": "3"
},
{
"text": "We obtained hierarchical word clusters via Brown clustering (Brown et al., 1992) on a large set of unlabeled tweets. 4 The algorithm partitions words into a base set of 1,000 clusters, and induces a hierarchy among those 1,000 clusters with a series of greedy agglomerative merges that heuristically optimize the likelihood of a hidden Markov model with a one-class-per-lexical-type constraint. Not only does Brown clustering produce effective features for discriminative models, but its variants are better unsupervised POS taggers than some models developed nearly 20 years later; see comparisons in Blunsom and Cohn (2011) . The algorithm is attractive for our purposes since it scales to large amounts of data. When training on tweets drawn from a single day, we observed time-specific biases (e.g., numerical dates appearing in the same cluster as the word tonight), so we assembled our unlabeled data from a random sample of 100,000 tweets per day from September 10, 2008 to August 14, 2012, and filtered out non-English tweets (about 60% of the sample) using langid.py (Lui and Baldwin, 2012) . 5 Each tweet was processed with our to-kenizer and lowercased. We normalized all atmentions to @MENTION and URLs/email addresses to their domains (e.g. http://bit.ly/ dP8rR8 \u21d2 URL-bit.ly ). In an effort to reduce spam, we removed duplicated tweet texts (this also removes retweets) before word clustering. This normalization and cleaning resulted in 56 million unique tweets (847 million tokens). We set the clustering software's count threshold to only cluster words appearing 40 or more times, yielding 216,856 word types, which took 42 hours to cluster on a single CPU. Fig. 2 shows example clusters. Some of the challenging words in the example tweet ( Fig. 1) are highlighted. The term lololol (an extension of lol for \"laughing out loud\") is grouped with a large number of laughter acronyms (A1: \"laughing my (fucking) ass off,\" \"cracking the fuck up\"). Since expressions of laughter are so prevalent on Twitter, the algorithm creates another laughter cluster (A1's sibling A2), that tends to have onomatopoeic, non-acronym variants (e.g., haha). The acronym ikr (\"I know, right?\") is grouped with expressive variations of \"yes\" and \"no\" (A4). Note that A1-A4 are grouped in a fairly specific subtree; and indeed, in this message ikr and lololol are both tagged as interjections.",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF5"
},
{
"start": 117,
"end": 118,
"text": "4",
"ref_id": null
},
{
"start": 602,
"end": 625,
"text": "Blunsom and Cohn (2011)",
"ref_id": "BIBREF2"
},
{
"start": 1076,
"end": 1099,
"text": "(Lui and Baldwin, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 1102,
"end": 1103,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1675,
"end": 1681,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 1759,
"end": 1766,
"text": "Fig. 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Clustering Method",
"sec_num": "3.1"
},
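{
"text": "A sketch of the normalization just described (the regular expressions here are illustrative approximations, not the released preprocessing code):\n\nimport re\n\ndef normalize_for_clustering(token):\n    token = token.lower()\n    if token.startswith('@') and len(token) > 1:\n        return '@MENTION'                 # collapse all at-mentions\n    m = re.match(r'(?:https?://)?(?:www\\.)?([a-z0-9.-]+\\.[a-z]{2,})', token)\n    if m and ('/' in token or token.startswith('http')):\n        return '<URL-' + m.group(1) + '>'  # keep only the domain\n    return token\n\nprint(normalize_for_clustering('http://bit.ly/dP8rR8'))  # <URL-bit.ly>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Method",
"sec_num": "3.1"
},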
{
"text": "smh (\"shaking my head,\" indicating disapproval) seems related, though is always tagged in the annotated data as a miscellaneous abbreviation (G); the difference between acronyms that are interjections versus other acronyms may be complicated. Here, smh is in a related but distinct subtree from the above expressions (A5); its usage in this example is slightly different from its more common usage, which it shares with the other words in its cluster: message-ending expressions of commentary or emotional reaction, sometimes as a metacomment on the author's message; e.g., Maybe you could get a guy to date you if you actually respected yourself #smh or There is really NO reason why other girls should send my boyfriend a goodmorning text #justsaying.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Examples",
"sec_num": "3.2"
},
{
"text": "We observe many variants of categories traditionally considered closed-class, including pronouns (B: u = \"you\") and prepositions (C: fir = \"for\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Examples",
"sec_num": "3.2"
},
{
"text": "There is also evidence of grammatical categories specific to conversational genres of English; clusters E1-E2 demonstrate variations of single-word contractions for \"going to\" and \"trying to,\" some of which have more complicated semantics. 6 Finally, the HMM learns about orthographic variants, even though it treats all words as opaque symbols; cluster F consists almost entirely of variants of \"so,\" their frequencies monotonically decreasing in the number of vowel repetitions-a phenomenon called \"expressive lengthening\" or \"affective lengthening\" (Brody and Diakopoulos, 2011; Schnoebelen, 2012 ). This suggests a future direction to jointly model class sequence and orthographic information (Clark, 2003; Smith and Eisner, 2005; Blunsom and Cohn, 2011) .",
"cite_spans": [
{
"start": 240,
"end": 241,
"text": "6",
"ref_id": null
},
{
"start": 552,
"end": 581,
"text": "(Brody and Diakopoulos, 2011;",
"ref_id": null
},
{
"start": 582,
"end": 599,
"text": "Schnoebelen, 2012",
"ref_id": "BIBREF33"
},
{
"start": 697,
"end": 710,
"text": "(Clark, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 711,
"end": 734,
"text": "Smith and Eisner, 2005;",
"ref_id": "BIBREF34"
},
{
"start": 735,
"end": 758,
"text": "Blunsom and Cohn, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Examples",
"sec_num": "3.2"
},
{
"text": "We have built an HTML viewer to browse these and numerous other interesting examples. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Examples",
"sec_num": "3.2"
},
{
"text": "We use the term emoticon to mean a face or icon constructed with traditional alphabetic or punctua- 6 One coauthor, a native speaker of the Texan English dialect, notes \"finna\" (short for \"fixing to\", cluster E1) may be an immediate future auxiliary, indicating an immediate future tense that is present in many languages (though not in standard English). To illustrate: \"She finna go\" approximately means \"She will go,\" but sooner, in the sense of \"She is about to go.\" 7 http://www.ark.cs.cmu.edu/TweetNLP/ cluster_viewer.html tion symbols, and emoji to mean symbols rendered in software as small pictures, in line with the text. Since our tokenizer is careful to preserve emoticons and other symbols (see \u00a74), they are clustered just like other words. Similar emoticons are clustered together (G1-G4), including separate clusters of happy [[ :) ",
"cite_spans": [
{
"start": 100,
"end": 101,
"text": "6",
"ref_id": null
},
{
"start": 842,
"end": 847,
"text": "[[ :)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emoticons and Emoji",
"sec_num": "3.3"
},
{
"text": "=) \u2227 _ \u2227 ]], sad/disappointed [[ :/ :( -_-</3 ]], love [[ xoxo . ]] and winking [[ ;) ( \u2227 _-) ]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emoticons and Emoji",
"sec_num": "3.3"
},
{
"text": "emoticons. The clusters are not perfectly aligned with our POS annotation guidelines; for example, the \"sad\" emoticon cluster included emotion-bearing terms that our guidelines define as non-emoticons, such as #ugh, #tear, and #fml (\"fuck my life\"), though these seem potentially useful for sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emoticons and Emoji",
"sec_num": "3.3"
},
{
"text": "One difficult task is classifying different types of symbols in tweets: our annotation guidelines differentiate between emoticons, punctuation, and garbage (apparently non-meaningful symbols or tokenization errors). Several Unicode character ranges are reserved for emoji-style symbols (including the three Unicode hearts in G4); however, depending on the user's software, characters in these ranges might be rendered differently or not at all. We have found instances where the clustering algorithm groups proprietary iOS emoji symbols along with normal emoticons; for example, the character U+E056, which is interpreted on iOS as a smiling face, is in the same G2 cluster as smiley face emoticons. The symbol U+E12F, which represents a picture of a bag of money, is grouped with the words cash and money.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emoticons and Emoji",
"sec_num": "3.3"
},
{
"text": "Since Brown clusters are hierarchical in a binary tree, each word is associated with a tree path represented as a bitstring with length \u2264 16; we use prefixes of the bitstring as features (for all prefix lengths \u2208 {2, 4, 6, . . . , 16}). This allows sharing of statistical strength between similar clusters. Using prefix features of hierarchical clusters in this way was similarly found to be effective for named-entity recognition (Turian et al., 2010) and Twitter POS tagging (Ritter et al., 2011) .",
"cite_spans": [
{
"start": 431,
"end": 452,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF36"
},
{
"start": 477,
"end": 498,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster-Based Features",
"sec_num": "3.4"
},
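{
"text": "A sketch of the prefix features (the feature-name format is an assumption; the bitstring is cluster G4's path from Figure 2):\n\ndef cluster_prefix_features(bitstring):\n    # Path prefixes at lengths 2, 4, ..., 16 let similar clusters share statistical strength.\n    return ['cpath%d=%s' % (k, bitstring[:k]) for k in range(2, 17, 2) if k <= len(bitstring)]\n\nprint(cluster_prefix_features('111010110001'))\n# ['cpath2=11', 'cpath4=1110', 'cpath6=111010', 'cpath8=11101011', 'cpath10=1110101100', 'cpath12=111010110001']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster-Based Features",
"sec_num": "3.4"
},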
{
"text": "When checking to see if a word is associated with a cluster, the tagger first normalizes the word using the same techniques as described in \u00a73.1, then creates a priority list of fuzzy match transformations of the word by removing repeated punctuation and repeated characters. If the normalized word is not in a cluster, the tagger considers the fuzzy matches. Although only about 3% of the tokens in the development set ( \u00a76) did not appear in a clustering, this method resulted in a relative error decrease of 18% among such word tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster-Based Features",
"sec_num": "3.4"
},
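{
"text": "A sketch of this fuzzy-match lookup (the transformations and the cluster id are illustrative; the released tagger's exact priority list may differ):\n\nimport re\n\ndef cluster_lookup(word, word2cluster):\n    # Try the normalized word first, then progressively fuzzier variants:\n    # collapse repeated punctuation, then collapse repeated characters.\n    candidates = [\n        word,\n        re.sub(r'([^a-z0-9])\\1+', r'\\1', word),  # '!!!' -> '!'\n        re.sub(r'(.)\\1{2,}', r'\\1\\1', word),     # 'yesssss' -> 'yess'\n        re.sub(r'(.)\\1+', r'\\1', word),          # 'yess' -> 'yes'\n    ]\n    for c in candidates:\n        if c in word2cluster:\n            return word2cluster[c]\n    return None\n\nprint(cluster_lookup('yesssss', {'yes': '11101'}))  # '11101', via the fully collapsed variant",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster-Based Features",
"sec_num": "3.4"
},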
{
"text": "Besides unsupervised word clusters, there are two other sets of features that contain generalized lexical class information. We use the tag dictionary feature from Gimpel et al., which adds a feature for a word's most frequent part-of-speech tag. 8 This can be viewed as a feature-based domain adaptation method, since it gives lexical type-level information for standard English words, which the model learns to map between PTB tags to the desired output tags. Second, since the lack of consistent capitalization conventions on Twitter makes it especially difficult to recognize names- Gimpel et al. and Foster et al. (2011) found relatively low accuracy on proper nouns-we added a token-level name list feature, which fires on (non-function) words from names from several sources: Freebase lists of celebrities and video games (Google, 2012) , the Moby Words list of US Locations, 9 and lists of male, female, family, and proper names from Mark Kantrowitz's name corpus. 10",
"cite_spans": [
{
"start": 587,
"end": 625,
"text": "Gimpel et al. and Foster et al. (2011)",
"ref_id": null
},
{
"start": 829,
"end": 843,
"text": "(Google, 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Lexical Features",
"sec_num": "3.5"
},
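{
"text": "Footnote 8 (below) specifies how the tag dictionary feature values follow PTB frequency ranks; a minimal sketch (the input format and feature names are assumptions):\n\ndef tagdict_features(tag_freqs):\n    # tag_freqs: {PTB tag: count} for one word type. With m tags, the r-th most\n    # frequent tag gets value (m - r + 1) / m: 1 for the most frequent, 2/3 for\n    # the second of three, etc.\n    tags = sorted(tag_freqs, key=tag_freqs.get, reverse=True)\n    m = len(tags)\n    return {'tagdict=' + tag: (m - r) / m for r, tag in enumerate(tags)}\n\nprint(tagdict_features({'VB': 300, 'NN': 200, 'JJ': 10}))\n# {'tagdict=VB': 1.0, 'tagdict=NN': 0.666..., 'tagdict=JJ': 0.333...}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Lexical Features",
"sec_num": "3.5"
},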
{
"text": "Word segmentation on Twitter is challenging due to the lack of orthographic conventions; in particular, punctuation, emoticons, URLs, and other symbols may have no whitespace separation from textual 8 Frequencies came from the Wall Street Journal and Brown corpus sections of the Penn Treebank. If a word has multiple PTB tags, each tag is a feature with value for the frequency rank; e.g. for three different tags in the PTB, this feature gives a value of 1 for the most frequent tag, 2/3 for the second, etc. Coarse versions of the PTB tags are used (Petrov et al., 2011) . While 88% of words in the dictionary have only one tag, using rank information seemed to give a small but consistent gain over only using the most common tag, or using binary features conjoined with rank as in Gimpel et al.",
"cite_spans": [
{
"start": 552,
"end": 573,
"text": "(Petrov et al., 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},
{
"text": "9 http://icon.shef.ac.uk/Moby/mwords.html 10 http://www.cs.cmu.edu/afs/cs/project/ ai-repository/ai/areas/nlp/corpora/names/ 0.html words (e.g. no:-d,yes should parse as four tokens), and internally may contain alphanumeric symbols that could be mistaken for words: a naive split(/[^a-zA-Z0-9]+/) tokenizer thinks the words \"p\" and \"d\" are among the top 100 most common words on Twitter, due to misanalysis of :p and :d. Traditional Penn Treebank-style tokenizers are hardly better, often breaking a string of punctuation characters into a single token per character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},
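{
"text": "The failure mode, and the rule-based remedy, in miniature (the emoticon pattern here is a toy, far simpler than twokenize's grammar):\n\nimport re\n\nprint(re.split(r'[^a-zA-Z0-9]+', 'no:-d,yes'))  # ['no', 'd', 'yes'] -- the emoticon is destroyed\n\nEMOTICON = re.compile(r'[<>]?[:;=]-?[)(dpo]', re.IGNORECASE)\n\ndef toy_tokenize(s):\n    # Match protected spans (emoticons) first, then split the remaining text.\n    tokens, i = [], 0\n    for m in EMOTICON.finditer(s):\n        tokens += [t for t in re.split(r'[^a-zA-Z0-9]+', s[i:m.start()]) if t]\n        tokens.append(m.group())\n        i = m.end()\n    tokens += [t for t in re.split(r'[^a-zA-Z0-9]+', s[i:]) if t]\n    return tokens\n\nprint(toy_tokenize('no:-d,yes'))  # ['no', ':-d', 'yes'] (a real tokenizer would also keep the comma)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},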
{
"text": "We rewrote twokenize (O'Connor et al., 2010), a rule-based tokenizer, emoticon, and URL detector, for use in the tagger. Emoticons are especially challenging, since they are open-class and productive. We revise O'Connor et al.'s regular expression grammar that describes possible emoticons, adding a grammar of horizontal emoticons (e.g. -_-), known as \"Eastern-style,\" 11 though we observe high usage in English-speaking Twitter (Fig. 2, G2-G3 ). We also add a number of other improvements to the patterns. Because this system was used as preprocessing for the word clustering experiment in \u00a73, we were able to infer the emoticon clusters in Fig. 2 . Furthermore, whether a token matches the emoticon pattern is also used as a feature in the tagger ( \u00a72).",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 444,
"text": "(Fig. 2, G2-G3",
"ref_id": null
},
{
"start": 643,
"end": 649,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},
{
"text": "URL recognition is also difficult, since the http:// is often dropped, resulting in protocol-less URLs like about.me. We add recognition patterns for these by using a list of top-level and country domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},
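{
"text": "A sketch of protocol-less URL recognition by top-level domain (the TLD list here is a tiny illustrative sample, not the full list the tokenizer uses):\n\nimport re\n\nTLDS = ('com', 'org', 'net', 'edu', 'me', 'ly', 'co')\nURL = re.compile(r'(?:https?://)?[A-Za-z0-9-]+(?:\\.[A-Za-z0-9-]+)*\\.(?:%s)(?:/\\S*)?$' % '|'.join(TLDS))\n\nfor tok in ['about.me', 'bit.ly/dP8rR8', 'Good.morning']:\n    print(tok, bool(URL.match(tok)))  # True, True, False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and Emoticon Detection",
"sec_num": "4"
},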
{
"text": "Gimpel et al. (2011) provided a dataset of POStagged tweets consisting almost entirely of tweets sampled from one particular day (October 27, 2010). We were concerned about overfitting to timespecific phenomena; for example, a substantial fraction of the messages are about a basketball game happening that day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotated Dataset",
"sec_num": "5"
},
{
"text": "We created a new test set of 547 tweets for evaluation. The test set consists of one random English tweet from every day between January 1, 2011 and June 30, 2012. In order for a tweet to be considered English, it had to contain at least one English word other than a URL, emoticon, or at-mention. We noticed biases in the outputs of langid.py, so we instead selected these messages completely manu-ally (going through a random sample of one day's messages until an English message was found).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotated Dataset",
"sec_num": "5"
},
{
"text": "Gimpel et al. provided a tagset for Twitter (shown in Appendix A), which we used unmodified. The original annotation guidelines were not published, but in this work we recorded the rules governing tagging decisions and made further revisions while annotating the new data. 12 Some of our guidelines reiterate or modify rules made by Penn Treebank annotators, while others treat specific phenomena found on Twitter (refer to the next section).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Methodology",
"sec_num": "5.1"
},
{
"text": "Our tweets were annotated by two annotators who attempted to match the choices made in Gimpel et al.'s dataset. The annotators also consulted the POS annotations in the Penn Treebank (Marcus et al., 1993) as an additional reference. Differences were reconciled by a third annotator in discussion with all annotators. 13 During this process, an inconsistency was found in Gimpel et al.'s data, which we corrected (concerning the tagging of this/that, a change to 100 labels, 0.4%). The new version of Gimpel et al.'s data (called OCT27) , as well as the newer messages (called DAILY547), are both included in our data release.",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF23"
},
{
"start": 500,
"end": 535,
"text": "Gimpel et al.'s data (called OCT27)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Methodology",
"sec_num": "5.1"
},
{
"text": "Ritter et al. (2011) annotated tweets using an augmented version of the PTB tagset and presumably followed the PTB annotation guidelines. We wrote new guidelines because the PTB conventions are inappropriate for Twitter in several ways, as shown in the design of Gimpel et al.'s tagset. Importantly, \"compound\" tags (e.g., nominal+verbal and nomi-nal+possessive) are used because tokenization is difficult or seemingly impossible for the nonstandard word forms that are commonplace in conversational text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounds in Penn Treebank vs. Twitter",
"sec_num": "5.2"
},
{
"text": "For example, the PTB tokenization splits contractions containing apostrophes: I'm \u21d2 I/PRP 'm/VBP. But conversational text often contains variants that resist a single PTB tag (like im), or even challenge traditional English grammatical categories 12 The annotation guidelines are available online at http://www.ark.cs.cmu.edu/TweetNLP/ 13 Annotators are coauthors of this paper.",
"cite_spans": [
{
"start": 247,
"end": 249,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compounds in Penn Treebank vs. Twitter",
"sec_num": "5.2"
},
{
"text": "(like imma or umma, which both mean \"I am going to\"). One strategy would be to analyze these forms into a PTB-style tokenization, as discussed in Forsyth (2007) , who proposes to analyze doncha as do/VBP ncha/PRP, but notes it would be difficult. We think this is impossible to handle in the rulebased framework used by English tokenizers, given the huge (and possibly growing) number of large compounds like imma, gonna, w/that, etc. These are not rare: the word clustering algorithm discovers hundreds of such words as statistically coherent classes (e.g. clusters E1 and E2 in Fig. 2) ; and the word imma is the 962nd most common word in our unlabeled corpus, more frequent than cat or near. We do not attempt to do Twitter \"normalization\" into traditional written English (Han and Baldwin, 2011 ), which we view as a lossy translation task. In fact, many of Twitter's unique linguistic phenomena are due not only to its informal nature, but also a set of authors that heavily skews towards younger ages and minorities, with heavy usage of dialects that are different than the standard American English most often seen in NLP datasets (Eisenstein, 2013; . For example, we suspect that imma may implicate tense and aspect markers from African-American Vernacular English. 14 Trying to impose PTB-style tokenization on Twitter is linguistically inappropriate: should the lexico-syntactic behavior of casual conversational chatter by young minorities be straightjacketed into the stylistic conventions of the 1980s Wall Street Journal? Instead, we would like to directly analyze the syntax of online conversational text on its own terms.",
"cite_spans": [
{
"start": 146,
"end": 160,
"text": "Forsyth (2007)",
"ref_id": "BIBREF12"
},
{
"start": 776,
"end": 798,
"text": "(Han and Baldwin, 2011",
"ref_id": "BIBREF16"
},
{
"start": 1138,
"end": 1156,
"text": "(Eisenstein, 2013;",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 580,
"end": 587,
"text": "Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compounds in Penn Treebank vs. Twitter",
"sec_num": "5.2"
},
{
"text": "Thus, we choose to leave these word forms untokenized and use compound tags, viewing compositional multiword analysis as challenging future work. 15 We believe that our strategy is sufficient for many applications, such as chunking or named entity recognition; many applications such as sentiment analysis (Turney, 2002; Pang and Lee, 2008, \u00a74.2. 3), open information extraction (Carlson et al., 2010; Fader et al., 2011) , and information retrieval (Allan and Raghavan, 2002) patterns that seem quite compatible with our approach. More complex downstream processing like parsing is an interesting challenge, since contraction parsing on traditional text is probably a benefit to current parsers. We believe that any PTB-trained tool requires substantial retraining and adaptation for Twitter due to the huge genre and stylistic differences (Foster et al., 2011) ; thus tokenization conventions are a relatively minor concern. Our simple-toannotate conventions make it easier to produce new training data.",
"cite_spans": [
{
"start": 146,
"end": 148,
"text": "15",
"ref_id": null
},
{
"start": 306,
"end": 320,
"text": "(Turney, 2002;",
"ref_id": "BIBREF37"
},
{
"start": 321,
"end": 346,
"text": "Pang and Lee, 2008, \u00a74.2.",
"ref_id": null
},
{
"start": 379,
"end": 401,
"text": "(Carlson et al., 2010;",
"ref_id": "BIBREF6"
},
{
"start": 402,
"end": 421,
"text": "Fader et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 450,
"end": 476,
"text": "(Allan and Raghavan, 2002)",
"ref_id": "BIBREF0"
},
{
"start": 841,
"end": 862,
"text": "(Foster et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compounds in Penn Treebank vs. Twitter",
"sec_num": "5.2"
},
{
"text": "We are primarily concerned with performance on our annotated datasets described in \u00a75 (OCT27, DAILY547), though for comparison to previous work we also test on other corpora (RITTERTW in \u00a76.2, NPSCHAT in \u00a76.3). The annotated datasets are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We use OCT27 to refer to the entire dataset described in Gimpel et al.; it is split into training, development, and test portions (OCT27TRAIN, OCT27DEV, OCT27TEST). We use DAILY547 as an additional test set. Neither OCT27TEST nor DAILY547 were extensively evaluated against until final ablation testing when writing this paper. The total number of features is 3.7 million, all of which are used under pure L 2 regularization; but only 60,000 are selected by elastic net regularization with (\u03bb 1 , \u03bb 2 ) = (0.25, 2), which achieves nearly the same (but no better) accuracy as pure L 2 , 16 and we use it for all experiments. We observed that it was 16 We conducted a grid search for the regularizer values on part of DAILY547, and many regularizer values give the best or nearly the best results. We suspect a different setup would have yielded similar results. Table 3 : DAILY547 accuracies (%) for tokens in and out of a traditional dictionary, for models reported in rows 1 and 3 of Table 2. possible to get radically smaller models with only a slight degradation in performance: (4, 0.06) has 0.5% worse accuracy but uses only 1,632 features, a small enough number to browse through manually. First, we evaluate on the new test set, training on all of OCT27. Due to DAILY547's statistical representativeness, we believe this gives the best view of the tagger's accuracy on English Twitter text. The full tagger attains 93.2% accuracy (final row of Table 2).",
"cite_spans": [
{
"start": 57,
"end": 71,
"text": "Gimpel et al.;",
"ref_id": null
},
{
"start": 648,
"end": 650,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 861,
"end": 868,
"text": "Table 3",
"ref_id": null
},
{
"start": 985,
"end": 993,
"text": "Table 2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
{
"text": "To facilitate comparisons with previous work, we ran a series of experiments training only on OCT27's training and development sets, then report test results on both OCT27TEST and all of DAILY547, shown in Table 2 . Our tagger achieves substantially higher accuracy than Gimpel et al. (2011). 17 Feature ablation. A number of ablation tests indicate the word clusters are a very strong source of lexical knowledge. When dropping the tag dictionaries and name lists, the word clusters maintain most of the accuracy (row 2). If we drop the clusters and rely only on tag dictionaries and namelists, accuracy decreases significantly (row 3). In fact, we can remove all observation features except for word clusters-no word features, orthographic fea- Inter-annotator agreement (Gimpel et al., 2011) 92. 2 7Model trained on all OCT27 93.2 8 Table 2 : Tagging accuracies (%) in ablation experiments. OCT27TEST and DAILY547 95% confidence intervals are roughly \u00b10.7%. Our final tagger uses all features and also trains on OCT27TEST, achieving 93.2% on DAILY547.",
"cite_spans": [
{
"start": 271,
"end": 295,
"text": "Gimpel et al. (2011). 17",
"ref_id": null
},
{
"start": 773,
"end": 794,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 2",
"ref_id": null
},
{
"start": 836,
"end": 843,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
{
"text": "tures, affix n-grams, capitalization, emoticon patterns, etc.-and the accuracy is in fact still better than the previous work (row 4). 18 We also wanted to know whether to keep the tag dictionary and name list features, but the splits reported in Fig. 2 did not show statistically significant differences; so to better discriminate between ablations, we created a lopsided train/test split of all data with a much larger test portion (26,974 tokens), having greater statistical power (tighter confidence intervals of \u00b1 0.3%). 19 The full system got 90.8% while the no-tag dictionary, no-namelists ablation had 90.0%, a statistically significant difference. Therefore we retain these features.",
"cite_spans": [
{
"start": 135,
"end": 137,
"text": "18",
"ref_id": null
},
{
"start": 526,
"end": 528,
"text": "19",
"ref_id": null
}
],
"ref_spans": [
{
"start": 247,
"end": 253,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
{
"text": "Compared to the tagger in Gimpel et al., most of our feature changes are in the new lexical features described in \u00a73.5. 20 We do not reuse the other lexical features from the previous work, including a phonetic normalizer (Metaphone), a name list consisting of words that are frequently capitalized, and distributional features trained on a much smaller unlabeled corpus; they are all worse than our new lexical features described here. (We did include, however, a variant of the tag dictionary feature that uses phonetic normalization for lookup; it seemed to yield a small improvement.) 18 Furthermore, when evaluating the clusters as unsupervised (hard) POS tags, we obtain a many-to-one accuracy of 89.2% on DAILY547. Before computing this, we lowercased the text to match the clusters and removed tokens tagged as URLs and at-mentions.",
"cite_spans": [
{
"start": 120,
"end": 122,
"text": "20",
"ref_id": null
},
{
"start": 589,
"end": 591,
"text": "18",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
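{
"text": "The many-to-one evaluation in footnote 18 maps each cluster to its most frequent gold tag and scores the result; a sketch (the inputs here are toy data, not the paper's):\n\nfrom collections import Counter, defaultdict\n\ndef many_to_one_accuracy(clusters, gold_tags):\n    # Assign each cluster its majority gold tag, then measure tagging accuracy.\n    by_cluster = defaultdict(Counter)\n    for c, t in zip(clusters, gold_tags):\n        by_cluster[c][t] += 1\n    best = {c: cnt.most_common(1)[0][0] for c, cnt in by_cluster.items()}\n    return sum(best[c] == t for c, t in zip(clusters, gold_tags)) / len(gold_tags)\n\nprint(many_to_one_accuracy(['01', '01', '10', '10'], ['N', 'N', 'V', 'N']))  # 0.75",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},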
{
"text": "19 Reported confidence intervals in this paper are 95% binomial normal approximation intervals for the proportion of correctly tagged tokens: \u00b11.96 p(1 \u2212 p)/n tokens 1/ \u221a n. 20 Details on the exact feature set are available in a technical report (Owoputi et al., 2012) , also available on the website.",
"cite_spans": [
{
"start": 246,
"end": 268,
"text": "(Owoputi et al., 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
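{
"text": "The interval in footnote 19 is simple to reproduce (a sketch; the 26,974-token figure is the lopsided test split from \u00a76.1):\n\nfrom math import sqrt\n\ndef binomial_ci95(p, n_tokens):\n    # 95% normal-approximation half-width for a proportion of correctly tagged tokens.\n    return 1.96 * sqrt(p * (1 - p) / n_tokens)\n\nprint(round(binomial_ci95(0.908, 26974), 4))  # 0.0034, i.e. the roughly +/-0.3% quoted above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},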
{
"text": "Non-traditional words. The word clusters are especially helpful with words that do not appear in traditional dictionaries. We constructed a dictionary by lowercasing the union of the ispell 'American', 'British', and 'English' dictionaries, plus the standard Unix words file from Webster's Second International dictionary, totalling 260,985 word types. After excluding tokens defined by the gold standard as punctuation, URLs, at-mentions, or emoticons, 21 22% of DAILY547's tokens do not appear in this dictionary. Without clusters, they are very difficult to classify (only 79.2% accuracy), but adding clusters generates a 5.7 point improvement-much larger than the effect on in-dictionary tokens (Table 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
{
"text": "Varying the amount of unlabeled data. A tagger that only uses word clusters achieves an accuracy of 88.6% on the OCT27 development set. 22 We created several clusterings with different numbers of unlabeled tweets, keeping the number of clusters constant at 800. As shown in Fig. 3 , there was initially a logarithmic relationship between number of tweets and accuracy, but accuracy (and lexical coverage) levels out after 750,000 tweets. We use the largest clustering (56 million tweets and 1,000 clusters) as the default for the released tagger. tagset plus several Twitter-specific tags, referred to in Table 1 as RITTERTW. Linguistic concerns notwithstanding ( \u00a75.2), for a controlled comparison, we train and test our system on this data with the same 4-fold cross-validation setup they used, attaining 90.0% (\u00b10.5%) accuracy. Ritter et al.'s CRFbased tagger had 85.3% accuracy, and their best tagger, trained on a concatenation of PTB, IRC, and Twitter, achieved 88.3% (Table 4) .",
"cite_spans": [
{
"start": 136,
"end": 138,
"text": "22",
"ref_id": null
}
],
"ref_spans": [
{
"start": 274,
"end": 280,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 605,
"end": 612,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 974,
"end": 983,
"text": "(Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "6.1"
},
{
"text": "IRC is another medium of online conversational text, with similar emoticons, misspellings, abbreviations and acronyms as Twitter data. We evaluate our tagger on the NPS Chat Corpus (Forsyth and Martell, 2007) , 24 a PTB-part-of-speech annotated dataset of Internet Relay Chat (IRC) room messages from 2006.",
"cite_spans": [
{
"start": 181,
"end": 208,
"text": "(Forsyth and Martell, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IRC: Evaluation on NPSCHAT",
"sec_num": "6.3"
},
{
"text": "First, we compare to a tagger in the same setup as experiments on this data in Forsyth (2007) , training on 90% of the data and testing on 10%; we average results across 10-fold cross-validation. 25 The full tagger model achieved 93.4% (\u00b10.3%) accuracy, significantly improving over the best result they report, 90.8% accuracy with a tagger trained on a mix of several POS-annotated corpora.",
"cite_spans": [
{
"start": 79,
"end": 93,
"text": "Forsyth (2007)",
"ref_id": "BIBREF12"
},
{
"start": 196,
"end": 198,
"text": "25",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IRC: Evaluation on NPSCHAT",
"sec_num": "6.3"
},
{
"text": "We also perform the ablation experiments on this corpus, with a slightly different experimental setup: we first filter out system messages then split data 24 Release 1.0: http://faculty.nps.edu/ cmartell/NPSChat.htm 25 Forsyth actually used 30 different 90/10 random splits; we prefer cross-validation because the same test data is never repeated, thus allowing straightforward confidence estimation of accuracy from the number of tokens (via binomial sample variance, footnote 19). In all cases, the models are trained on the same amount of data (90%).",
"cite_spans": [
{
"start": 155,
"end": 157,
"text": "24",
"ref_id": null
},
{
"start": 216,
"end": 218,
"text": "25",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IRC: Evaluation on NPSCHAT",
"sec_num": "6.3"
},
{
"text": "into 5,067 training and 2,868 test messages. Results show a similar pattern as the Twitter data (see final column of Table 2 ). Thus the Twitter word clusters are also useful for language in the medium of text chat rooms; we suspect these clusters will be applicable for deeper syntactic and semantic analysis in other online conversational text mediums, such as text messages and instant messages.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "IRC: Evaluation on NPSCHAT",
"sec_num": "6.3"
},
{
"text": "We have constructed a state-of-the-art part-ofspeech tagger for the online conversational text genres of Twitter and IRC, and have publicly released our new evaluation data, annotation guidelines, open-source tagger, and word clusters at http://www.ark.cs.cmu.edu/TweetNLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "A Part-of-Speech Tagset N common noun O pronoun (personal/WH; not possessive)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "proper noun S nominal + possessive Z proper noun + possessive V verb including copula, auxiliaries L nominal + verbal (e.g. i'm), verbal + nominal (let's) M proper noun + verbal A adjective R adverb ! interjection D determiner P pre-or postposition, or subordinating conjunction & coordinating conjunction T verb particle X existential there, predeterminers Y X + verbal # hashtag (indicates topic/category for tweet) @ at-mention (indicates a user as a recipient of a tweet) discourse marker, indications of continuation across multiple tweets U URL or email address E emoticon $ numeral , punctuation G other abbreviations, foreign words, possessive endings, symbols, garbage Gimpel et al. (2011) used in this paper, and described further in the released annotation guidelines.",
"cite_spans": [
{
"start": 678,
"end": 698,
"text": "Gimpel et al. (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Runtimes observed on an Intel Core i5 2.4 GHz laptop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As implemented byLiang (2005), v. 1.3: https:// github.com/percyliang/brown-cluster 5 https://github.com/saffsd/langid.py",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://en.wikipedia.org/wiki/List_of_ emoticons",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See \"Tense and aspect\" examples in http: //en.wikipedia.org/wiki/African_American_ Vernacular_English 15 For example, wtf has compositional behavior in \"Wtf just happened??\", but only debatably so in \"Huh wtf\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These numbers differ slightly from those reported by Gimpel et al., due to the corrections we made to the OCT27 data, noted in Section 5.1. We retrained and evaluated their tagger (version 0.2) on our corrected dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We retain hashtags since by our guidelines a #-prefixed token is ambiguous between a hashtag and a normal word, e.g. #1 or going #home.22 The only observation features are the word clusters of a token and its immediate neighbors.23 https://github.com/aritter/twitter_nlp/ blob/master/data/annotated/pos.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by the National Science Foundation (IIS-0915187 and IIS-1054319).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using part-of-speech patterns to reduce query ambiguity",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Raghavan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Allan and H. Raghavan. 2002. Using part-of-speech patterns to reduce query ambiguity. In Proc. of SIGIR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scalable training of L 1 -regularized log-linear models",
"authors": [
{
"first": "G",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Andrew and J. Gao. 2007. Scalable training of L 1 - regularized log-linear models. In Proc. of ICML.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction",
"authors": [
{
"first": "P",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Blunsom and T. Cohn. 2011. A hierarchical Pitman- Yor process HMM for unsupervised part of speech in- duction. In Proc. of ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "using word lengthening to detect sentiment in microblogs",
"authors": [
{
"first": "",
"middle": [],
"last": "Cooooooooooooooollllllllllllll!!!!!!!!!!!!!!",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cooooooooooooooollllllllllllll!!!!!!!!!!!!!!: using word lengthening to detect sentiment in microblogs. In Proc. of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Souza",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, P. V. de Souza, R. L. Mercer, V. J. Della Pietra, and J. C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguis- tics, 18(4).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Toward an architecture for never-ending language learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "E",
"middle": [
"R"
],
"last": "Hruschka",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hr- uschka Jr, and T. M. Mitchell. 2010. Toward an archi- tecture for never-ending language learning. In Proc. of AAAI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Combining distributional and morphological information for part of speech induction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Clark. 2003. Combining distributional and morpho- logical information for part of speech induction. In Proc. of EACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discovering sociolinguistic associations with structured sparsity",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisenstein, N. A. Smith, and E. P. Xing. 2011. Discov- ering sociolinguistic associations with structured spar- sity. In Proc. of ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What to do about bad language on the internet",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisenstein. 2013. What to do about bad language on the internet. In Proc. of NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Proc. of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexical and discourse analysis of online chat dialog",
"authors": [
{
"first": "E",
"middle": [
"N"
],
"last": "Forsyth",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Martell",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICSC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. N. Forsyth and C. H. Martell. 2007. Lexical and dis- course analysis of online chat dialog. In Proc. of ICSC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving automated lexical and discourse analysis of online chat dialog",
"authors": [
{
"first": "E",
"middle": [
"N"
],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. N. Forsyth. 2007. Improving automated lexical and discourse analysis of online chat dialog. Master's the- sis, Naval Postgraduate School.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "#hardtoparse: POS tagging and parsing the Twitterverse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Roux",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of AAAI-11 Workshop on Analysing Microtext",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Foster, O. Cetinoglu, J. Wagner, J. L. Roux, S. Hogan, J. Nivre, D. Hogan, and J. van Genabith. 2011. #hard- toparse: POS tagging and parsing the Twitterverse. In Proc. of AAAI-11 Workshop on Analysing Microtext.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Part-of-speech tagging for Twitter: Annotation, features, and experiments",
"authors": [
{
"first": "K",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "O'connor",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Gimpel, N. Schneider, B. O'Connor, D. Das, D. Mills, J. Eisenstein, M. Heilman, D. Yogatama, J. Flanigan, and N. A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proc. of ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Freebase data dumps",
"authors": [
{
"first": "",
"middle": [],
"last": "Google",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Google. 2012. Freebase data dumps. http:// download.freebase.com/datadumps/.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lexical normalisation of short text messages: Makn sens a #twitter",
"authors": [
{
"first": "B",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Han and T. Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proc. of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Simple semisupervised dependency parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple semi- supervised dependency parsing. In Proc. of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In Proc. of ICML.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semi-supervised learning for natural language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liang. 2005. Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the limited memory BFGS method for large scale optimization",
"authors": [
{
"first": "D",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. C. Liu and J. Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathemat- ical programming, 45(1).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Recognizing named entities in tweets",
"authors": [
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Liu, S. Zhang, F. Wei, and M. Zhou. 2011. Recogniz- ing named entities in tweets. In Proc. of ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "langid.py: An off-theshelf language identification tool",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Lui and T. Baldwin. 2012. langid.py: An off-the- shelf language identification tool. In Proc. of ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of En- glish: The Penn Treebank. Computational Linguistics, 19(2).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Maximum entropy Markov models for information extraction and segmentation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. McCallum, D. Freitag, and F. Pereira. 2000. Maxi- mum entropy Markov models for information extrac- tion and segmentation. In Proc. of ICML.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Automatic domain adaptation for parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. McClosky, E. Charniak, and M. Johnson. 2010. Au- tomatic domain adaptation for parsing. In Proc. of NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "TweetMotif: exploratory search and topic summarization for Twitter",
"authors": [
{
"first": "B",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Krieger",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ahn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. O'Connor, M. Krieger, and D. Ahn. 2010. TweetMotif: exploratory search and topic summariza- tion for Twitter. In Proc. of AAAI Conference on We- blogs and Social Media.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Part-of-speech tagging for Twitter: Word clusters and other advances",
"authors": [
{
"first": "O",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "O'connor",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Owoputi, B. O'Connor, C. Dyer, K. Gimpel, and N. Schneider. 2012. Part-of-speech tagging for Twit- ter: Word clusters and other advances. Technical Re- port CMU-ML-12-107, Carnegie Mellon University.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Pang and L. Lee. 2008. Opinion mining and sentiment analysis. Now Publishers.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL)",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Petrov and R. McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A universal part-of-speech tagset",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1104.2086"
]
},
"num": null,
"urls": [],
"raw_text": "S. Petrov, D. Das, and R. McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. of EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Named entity recognition in tweets: An experimental study",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ritter, S. Clark, Mausam, and O. Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proc. of EMNLP.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Do you smile with your nose? Stylistic variation in Twitter emoticons. University of Pennsylvania Working Papers in Linguistics",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schnoebelen",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Schnoebelen. 2012. Do you smile with your nose? Stylistic variation in Twitter emoticons. University of Pennsylvania Working Papers in Linguistics, 18(2):14.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Cross-lingual word clusters for direct transfer of linguistic structure",
"authors": [
{
"first": "O",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. T\u00e4ckstr\u00f6m, R. McDonald, and J. Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of lin- guistic structure. In Proc. of NAACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Word representations: A simple and general method for semisupervised learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Turian, L. Ratinov, and Y. Bengio. 2010. Word rep- resentations: A simple and general method for semi- supervised learning. In Proc. of ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. D. Turney. 2002. Thumbs up or thumbs down?: se- mantic orientation applied to unsupervised classifica- tion of reviews. In Proc. of ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Regularization and variable selection via the elastic net",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hastie",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)",
"volume": "67",
"issue": "2",
"pages": "301--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zou and T. Hastie. 2005. Regularization and vari- able selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Automatically tagged tweet showing nonstandard orthography, capitalization, and abbreviation. Ignoring the interjections and abbreviations, it glosses as He asked for your last name so he can add you on Facebook. The tagset is defined in Appendix A. Refer toFig. 2for word clusters corresponding to some of these words.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "11101011001010 ;) :p :-) xd ;-) ;d (; :3 ;p =p :-p =)) ;] xdd #gno xddd >:) ;-p >:d 8-) ;-d G2 11101011001011 :) (: =) :)-_--.-:-( :'( d: :| :s -__-=( =/ >.< -___-:-/ </3 :\\ -____-;( /: :(( >_< =[ :[ #fml",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 3: OCT27 development set accuracy using only clusters as features. Model In dict. Out of dict. Full 93.4 85.0 No clusters 92.0 (\u22121.4) 79.3 (\u22125.7) Total tokens 4,808 1,394",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": ".(2011), trained on more data 88.3",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "lmfao lmaoo lmaooo hahahahaha lool ctfu rofl loool lmfaoo lmfaooo lmaoooo lmbo lololol A2 111010100011 haha hahaha hehe hahahaha hahah aha hehehe ahaha hah hahahah kk hahaa ahah A3 111010100100 yes yep yup nope yess yesss yessss ofcourse yeap likewise yepp yesh yw yuup yus A4 111010100101 yeah yea nah naw yeahh nooo yeh noo noooo yeaa ikr nvm yeahhh nahh noooooA5 11101011011100 smh jk #fail #random #fact smfh #smh #winning #realtalk smdh #dead #justsaying B",
"html": null,
"num": null,
"content": "<table><tr><td/><td>Binary path</td><td>Top words (by frequency)</td><td/></tr><tr><td colspan=\"2\">A1 111010100010</td><td/><td/></tr><tr><td/><td>011101011</td><td>u yu yuh yhu uu yuu yew y0u yuhh youh yhuu iget yoy yooh yuo</td><td>yue juu</td><td>dya youz yyou</td></tr><tr><td>C</td><td colspan=\"4\">11100101111001 w fo fa fr fro ov fer fir whit abou aft serie fore fah fuh w/her w/that fron isn agains</td></tr><tr><td>D</td><td>111101011000</td><td colspan=\"3\">facebook fb itunes myspace skype ebay tumblr bbm flickr aim msn netflix pandora</td></tr><tr><td colspan=\"2\">E1 0011001</td><td colspan=\"2\">tryna gon finna bouta trynna boutta gne fina gonn tryina fenna qone trynaa qon</td></tr><tr><td colspan=\"2\">E2 0011000</td><td colspan=\"3\">gonna gunna gona gna guna gnna ganna qonna gonnna gana qunna gonne goona</td></tr><tr><td>F</td><td>0110110111</td><td colspan=\"3\">soo sooo soooo sooooo soooooo sooooooo soooooooo sooooooooo soooooooooo</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Annotated datasets: number of messages, to-</td></tr><tr><td>kens, tagset, and date range. More information in \u00a75,</td></tr><tr><td>\u00a76.3, and \u00a76.2.</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"text": "Accuracy comparison on Ritter et al.'s Twitter POS corpus ( \u00a76.2).",
"html": null,
"num": null,
"content": "<table><tr><td>Tagger</td><td>Accuracy</td></tr><tr><td>This work</td><td>93.4 \u00b1 0.3</td></tr><tr><td colspan=\"2\">Forsyth (2007) 90.8</td></tr><tr><td colspan=\"2\">Table 5: Accuracy comparison on Forsyth's NPSCHAT</td></tr><tr><td>IRC POS corpus ( \u00a76.3).</td><td/></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}
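A note for readers of the cluster examples above (rows A1-F and G1-G2): the binary paths are hierarchical Brown-cluster bit strings, and a standard way to exploit them in a tagger, following Koo et al. (2008) cited above, is to fire features on bit-string prefixes at several lengths. The sketch below is a minimal illustration of that idea, not the released TweetNLP code; the tab-separated file layout ("<bit path>", "<word>", "<count>") and the particular prefix lengths are assumptions made for the example.

```python
# Illustrative sketch (assumed file format, not the authors' code):
# hierarchical Brown-cluster bit paths as prefix features.

def load_clusters(path):
    """Map each word to its full binary cluster path, assuming
    tab-separated lines of the form "<bit path>\t<word>\t<count>"."""
    word2path = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 2:
                word2path[parts[1]] = parts[0]
    return word2path

def cluster_prefix_features(word, word2path, prefix_lengths=(2, 4, 6, 8, 12, 16)):
    """One feature per bit-path prefix: short prefixes capture coarse
    cluster identity, long prefixes fine-grained identity."""
    path = word2path.get(word.lower())
    if path is None:
        return []
    feats = [f"cpath{k}={path[:k]}" for k in prefix_lengths if len(path) >= k]
    feats.append(f"cpath_full={path}")
    return feats

# Because "yeah", "yea", and "nah" share path 111010100101 (row A4),
# they emit identical cluster features, so labeled evidence for one
# spelling variant transfers to the others.
```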