{
"paper_id": "D14-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:55:26.239231Z"
},
"title": "A Graph-based Approach for Contextual Text Normalization",
"authors": [
{
"first": "\u00c7a\u011f\u0131l",
"middle": [],
"last": "S\u00f6nmez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bogazici University Bebek",
"location": {
"postCode": "34342",
"settlement": "Istanbul",
"country": "Turkey"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The informal nature of social media text renders it very difficult to be automatically processed by natural language processing tools. Text normalization, which corresponds to restoring the non-standard words to their canonical forms, provides a solution to this challenge. We introduce an unsupervised text normalization approach that utilizes not only lexical, but also contextual and grammatical features of social text. The contextual and grammatical features are extracted from a word association graph built by using a large unlabeled social media text corpus. The graph encodes the relative positions of the words with respect to each other, as well as their part-of-speech tags. The lexical features are obtained by using the longest common subsequence ratio and edit distance measures to encode the surface similarity among words, and the double metaphone algorithm to represent the phonetic similarity. Unlike most of the recent approaches that are based on generating normalization dictionaries, the proposed approach performs normalization by considering the context of the non-standard words in the input text. Our results show that it achieves state-of-the-art F-score performance on standard datasets. In addition, the system can be tuned to achieve very high precision without sacrificing much recall.",
"pdf_parse": {
"paper_id": "D14-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "The informal nature of social media text renders it very difficult to be automatically processed by natural language processing tools. Text normalization, which corresponds to restoring the non-standard words to their canonical forms, provides a solution to this challenge. We introduce an unsupervised text normalization approach that utilizes not only lexical, but also contextual and grammatical features of social text. The contextual and grammatical features are extracted from a word association graph built by using a large unlabeled social media text corpus. The graph encodes the relative positions of the words with respect to each other, as well as their part-of-speech tags. The lexical features are obtained by using the longest common subsequence ratio and edit distance measures to encode the surface similarity among words, and the double metaphone algorithm to represent the phonetic similarity. Unlike most of the recent approaches that are based on generating normalization dictionaries, the proposed approach performs normalization by considering the context of the non-standard words in the input text. Our results show that it achieves state-of-the-art F-score performance on standard datasets. In addition, the system can be tuned to achieve very high precision without sacrificing much recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social text, which has been growing and evolving steadily, has its own lexical and grammatical features (Choudhury et al., 2007; Eisenstein, 2013) .",
"cite_spans": [
{
"start": 104,
"end": 128,
"text": "(Choudhury et al., 2007;",
"ref_id": "BIBREF3"
},
{
"start": 129,
"end": 146,
"text": "Eisenstein, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "lol meaning laughing out loud, xoxo meaning kissing, 4u meaning for you are among the most commonly used examples of this jargon. In addition, these informal expressions in social text usually take many different lexical forms when generated by different individuals (Eisenstein, 2013) . The limited accuracies of the Speech-to-Text (STT) tools in mobile devices, which are increasingly being used to post messages on social media platforms, along with the scarcity of attention of the users result in additional divergence of social text from more standard text such as from the newswire domain. Tools such as spellcheckers and slang dictionaries have been shown to be insufficient to cope with this challenge a long time ago (Sproat et al., 2001 ). In addition, most Natural Language Processing (NLP) tools including named entity recognizers and dependency parsers generally perform poorly on social text (Ritter et al., 2010) .",
"cite_spans": [
{
"start": 267,
"end": 285,
"text": "(Eisenstein, 2013)",
"ref_id": "BIBREF7"
},
{
"start": 727,
"end": 747,
"text": "(Sproat et al., 2001",
"ref_id": null
},
{
"start": 907,
"end": 928,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text normalization is a preprocessing step to restore non-standard words in text to their original (canonical) forms for use in NLP applications or more broadly to better understand the digitized text (Han and Baldwin, 2011) . For example, talk 2 u later can be normalized as talk to you later or similarly enormoooos, enrmss and enourmos can be normalized as enormous. Other examples of text messages from Twitter and their corresponding normalized forms are shown in Table 1 .",
"cite_spans": [
{
"start": 201,
"end": 224,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The non-standard words in text are referred to as Out of Vocabulary (OOV) words. The normalization task restores the OOV words to their In Vocabulary (IV) forms. Social text is continuously evolving with new words and named entities that are not in the vocabularies of the systems (Hassan and Menezes, 2013) . Therefore, not every OOV word (e.g. iPhone, WikiLeaks or tokenizing) should be considered for normalization.",
"cite_spans": [
{
"start": 281,
"end": 307,
"text": "(Hassan and Menezes, 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The OOV tokens that should be considered for normalization are referred to as ill-formed words. Ill-formed words can be normalized to different canonical words depending on the context of the text. For example, let's consider the two examples in Table 1 . \"y\" is normalized as \"you\" in the first one and as \"why\" in the second one.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a graph-based text normalization method that utilizes both contextual and grammatical features of social text. The contextual information of words is modeled by a word association graph that is created from a large social media text corpus. The graph represents the relative positions of the words in the social media text messages and their Part-of-Speech (POS) tags. The lexical similarity features among the words are modeled using the longest common subsequence ratio and edit distance that encode the surface similarity and the double metaphone algorithm that encodes the phonetic similarity. The proposed approach is unsupervised, which is an important advantage over supervised systems, given the continuously evolving language in the social media domain. The same OOV word may have different appropriate normalizations depending on the context of the input text message. Recently proposed dictionary-based text normalization systems perform dictionary look-up and always normalize the same OOV word to the same IV word regardless of the context of the input text (Han et al., 2012; Hassan and Menezes, 2013) . On the other hand, the proposed approach not only makes use of the general context information in a large corpus of social media text, but it also makes use of the context of the OOV word in the input text message. Thus, an OOV word can be normalized to different IV words depending on the context of the input text.",
"cite_spans": [
{
"start": 1097,
"end": 1115,
"text": "(Han et al., 2012;",
"ref_id": "BIBREF11"
},
{
"start": 1116,
"end": 1141,
"text": "Hassan and Menezes, 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early work on text normalization mostly made use of the noisy channel model. The first work that had a significant performance improvement over the previous research was by Brill and Moore (2000) . They proposed a novel noisy channel model for spell checking based on string to string edits. Their model depended on probabilistic modeling of sub-string transformations. Toutanova and Moore (2002) improved this approach by extending the error model with phonetic similarities over words. Their approach is based on learning rules to predict the pronunciation of a single letter in the word depending on the neighbouring letters in the word. Choudhury et al. (2007) developed a supervised Hidden Markov Model-based approach for normalizing Short Message Service (SMS) texts. They proposed a word-for-word decoding approach and used a dictionary based method to normalize commonly used abbreviations and nonstandard usage (e.g. \"howz\" to \"how are\" or \"aint\" to \"are not\"). Cook and Stevenson (2009) extended this model by introducing an unsupervised noisy channel model. Rather than using one generic model for all word formations as in (Choudhury et al., 2007) , they used a mixture model in which each different word formation type is modeled explicitly.",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "Brill and Moore (2000)",
"ref_id": "BIBREF2"
},
{
"start": 370,
"end": 396,
"text": "Toutanova and Moore (2002)",
"ref_id": "BIBREF25"
},
{
"start": 641,
"end": 664,
"text": "Choudhury et al. (2007)",
"ref_id": "BIBREF3"
},
{
"start": 971,
"end": 996,
"text": "Cook and Stevenson (2009)",
"ref_id": "BIBREF6"
},
{
"start": 1135,
"end": 1159,
"text": "(Choudhury et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The limitations of these methods were that they did not consider contextual features and assumed that tokens have unique normalizations. In the text normalization task several OOV tokens are ambiguous and without contextual information it is not possible to build models that can disambiguate transformations correctly. Aw et al. (2006) proposed a phrase-based statistical machine translation (MT) model for the text normalization task. They defined the problem as translating the SMS language to the English language and based their model on two submodels: a word based language model and a phrase based lexical mapping model (channel model). Their system also benefits from the input context and they argue that the strength of their model is in its ability to disambiguate mapping as in \"2\" \u2192 \"two\" or \"to\", and \"w\" \u2192 \"with\" or \"who\". Since it makes use of the whole conversation, this is the closest approach to ours in terms of contextual sensitivity and coverage. Pennell and Liu (2011), on the other hand, proposed a character level MT system that is robust to new abbreviations. In their two-phase system, a character level trained MT model is used to produce word hypotheses and a trigram LM is used to choose a hypothesis that fits into the input context.",
"cite_spans": [
{
"start": 320,
"end": 336,
"text": "Aw et al. (2006)",
"ref_id": "BIBREF0"
},
{
"start": 970,
"end": 992,
"text": "Pennell and Liu (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The MT-based models are supervised models, a drawback of which is that they require annotated data. Annotated training data is not readily available and is difficult to create especially for the rapidly evolving social media text (Yang and Eisenstein, 2013) .",
"cite_spans": [
{
"start": 230,
"end": 257,
"text": "(Yang and Eisenstein, 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recent approaches handled the text normalization task by building normalization lexicons. Han and Baldwin (2011) developed a two-phase model, where they only consider the ill-formed OOV words for normalization. First, a confusion set is generated using the lexical and phonetic distance features. Later, the candidates in the confusion set are ranked using a mixture of dictionary look up, word similarity based on lexical edit distance, phonemic edit distance, prefix sub-string, suffix sub-string and longest common subsequence (LCS), as well as context support metrics. Chrupala (2014) on the other hand achieved lower word error rates without using any lexical resources. Gouws et al. (2011) investigated the distinct contributions of features that are highly dependent on user-centric information such as the geographical location of the users and the Twitter client that the tweet is received from. Using such user-based contextual metrics they modelled the transformation distributions across populations. Liu et al. (2012) proposed a broad coverage normalization system, which integrates an extended noisy channel model, that is based on enhanced letter transformations, visual priming, string and phonetic similarity. They try to improve the performance of the top n normalization candidates by integrating human perspective modeling. Yang and Eisenstein (2013) introduced an unsupervised log linear model for text normalization. Their joint statistical approach uses local context based on language modeling and surface similarity. Along with dictionary based models, Yang and Eisenstein's model has obtained a significant improvement on the performance of text normalization systems.",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "Han and Baldwin (2011)",
"ref_id": "BIBREF10"
},
{
"start": 681,
"end": 700,
"text": "Gouws et al. (2011)",
"ref_id": "BIBREF9"
},
{
"start": 1018,
"end": 1035,
"text": "Liu et al. (2012)",
"ref_id": "BIBREF15"
},
{
"start": 1349,
"end": 1375,
"text": "Yang and Eisenstein (2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another relevant study is conducted by Hassan and Menezes (2013) , who generated a normalization lexicon using Markov random walks on a contextual similarity lattice that they created using 5-gram sequences of words. The best normalization candidates are chosen using the average hitting time and lexical similarity features. Context of a word in the center of a 5-gram sequence is defined by the other words in the 5-gram. Even if one word is not the same, the context is considered to be different. This is a relatively conservative way of modeling the prior contexts of words. In our model, we filter candidate words based on their grammatical properties and let each neighbouring token contribute to the prior context of a word, which leads to both a higher recall and a higher precision.",
"cite_spans": [
{
"start": 39,
"end": 64,
"text": "Hassan and Menezes (2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we propose a graph-based approach that models both contextual and lexical similarity features between an ill-formed OOV word and candidate IV words. An input text is first preprocessed by tokenizing and Part-Of-Speech (POS) tagging. If the text contains an OOV word, the normalization candidates are chosen by making use of the contextual features, which are extracted from a pre-generated directed word association graph, as well as lexical similarity features. Lexical similarity features are based on edit distance, longest common subsequence ratio, and double metaphone distance. In addition, a slang dictionary 1 is used as an external resource to enrich the normalization candidate set. The details of the approach are explained in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "After tokenization, the next step in the pipeline is POS tagging each token using a POS tagger specifically designed for social media text. Unlike the regular POS taggers designed for well-written newswire-like text, social media POS taggers provide a broader set of tags specific to the peculiarities of social text (Owoputi et al., 2013; Gimpel et al., 2011) . Using this extended set of tags we can identify tokens such as discourse markers (e.g. rt for retweets, cont. for a tweet whose content continues in the following tweet) or URLs. This enables us to better model the context of the words in social media text. A sample preprocessed sentence is shown in Table 3 .",
"cite_spans": [
{
"start": 317,
"end": 339,
"text": "(Owoputi et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 340,
"end": 360,
"text": "Gimpel et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 664,
"end": 671,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "As shown in Table 2 , after preprocessing, each token is assigned a POS tag with a confidence score between 0 and 1 by the CMU Ark Tagger (v0.3.2). Later, we use these confidence scores in calculating the edge weights in our context graph. Note that even though the words w and beatiful are misspelled, they are tagged correctly by the tagger, albeit with lower confidence scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "Contextual information of words is modeled through a word association graph created by using a large corpus of social media text. The graph encodes the relative positions of the POS tagged words in the text with respect to each other. After preprocessing, each text message in the corpus is traversed in order to extract the nodes and the edges of the graph. A node is defined with four properties: id, oov, freq and tag. The token itself is the id field. The freq property indicates the node's frequency count in the dataset. The oov field is set to True if the token is an OOV word. Following the prior work by Han and Baldwin (2011), we used the GNU Aspell dictionary (v0.60.6) to determine whether a word is OOV or not. We also edited the output of the Aspell dictionary to treat single letters other than \"a\" and \"i\" as OOV words. A portion of the graph that covers parts of the sample sentence in Table 3 is shown in Figure 1 . In the created word association graph, each node is a unique set of a token and its POS tag. This helps us to identify the candidate IV words for a given OOV word by considering not only lexical and contextual similarity, but also grammatical similarity in terms of POS tags. For example, if the token smile has been frequently seen as a Noun or a Verb, and not in other forms in the dataset (e.g. Table 4 ), this provides evidence that it is not a good normalization candidate for an OOV token that has been tagged as a Pronoun. An edge is created between two nodes in the graph if the corresponding word pair (i.e. token/POS pair) is contextually associated. Two words are considered to be contextually associated if they satisfy the following criteria:",
"cite_spans": [],
"ref_spans": [
{
"start": 903,
"end": 910,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 923,
"end": 931,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1332,
"end": 1339,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
{
"text": "\u2022 The two words co-occur within a maximum word distance of t_distance in a text message in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
{
"text": "\u2022 Each word has a minimum frequency of t_frequency in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
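The two criteria above, together with the POS-confidence edge weighting described in this section, can be sketched as follows. This is an editorial illustration rather than the authors' code: the threshold values, the data layout ((token, POS tag, confidence) triples per message), and the positional-offset distance convention are assumptions.

```python
from collections import defaultdict

T_DISTANCE = 3   # assumed value for the paper's t_distance threshold
T_FREQUENCY = 2  # assumed value for the paper's t_frequency threshold

def build_graph(messages):
    """Build the word association graph from POS-tagged messages.

    messages: list of messages, each a list of (token, pos_tag, confidence).
    Returns (freq, edges): node frequency counts and directed edge weights
    keyed by (from_node, to_node, distance), where a node is (token, tag).
    """
    freq = defaultdict(int)
    edges = defaultdict(float)
    for msg in messages:
        for tok, tag, _ in msg:
            freq[(tok, tag)] += 1
    for msg in messages:
        for i, (t1, g1, c1) in enumerate(msg):
            # connect each token to the following tokens within T_DISTANCE
            for j in range(i + 1, min(i + 1 + T_DISTANCE, len(msg))):
                t2, g2, c2 = msg[j]
                n1, n2 = (t1, g1), (t2, g2)
                if freq[n1] >= T_FREQUENCY and freq[n2] >= T_FREQUENCY:
                    # each co-occurrence adds the average POS confidence
                    edges[(n1, n2, j - i)] += (c1 + c2) / 2
    return freq, edges
```

Edges are directed from the earlier token to the later one, matching the directionality convention of Section 3.2.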
{
"text": "The directionality of the edges is based on the sequence of words in the text messages in the corpus. In other words, an edge between two nodes is directed from the earlier seen token towards the later seen token in a message. For example, Figure 2 shows the edges that would be derived from a text including the phrase \"with a beautiful smile\".",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 248,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
{
"text": "Let's|L start|V this|D morning|N w|P a|D beatiful|A smile|N .|C Table 3: Sample tokenized, POS tagged sentence (L: nominal+verbal, V: verb, D: determiner, N: noun, P: preposition, A: adjective, C: punctuation).",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
{
"text": "The direction (from, to) and the distance together represent a unique triplet. For each pair of nodes with a specific distance there is an edge with a positive weight, if the two nodes are contextually associated. Each co-occurrence of two contextually associated nodes increases the weight of the edge between them by the average of the nodes' POS tag confidence scores in the text message considered. If we are to expand the graph with the example phrase \"with a beautiful smile\", the weight of the edge with distance 2 from the node with|P to the node smile|N would increase by (0.9963 + 0.9712)/2, since the confidence score of the POS tag for the token with is 0.9963 and the confidence score of the POS tag of the token smile is 0.9712 as shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 754,
"end": 761,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Graph construction",
"sec_num": "3.2"
},
{
"text": "Our graph-based contextual similarity method is based on the assumption that an IV word that is the canonical form of an OOV word appears in the same contexts as the corresponding OOV word. In other words, the two nodes in the graph share several neighbors that co-occur within the same distances to the corresponding two words in social media text. We also assume that an OOV word and its canonical form should have the same POS tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "Given an input text for normalization, the next step after preprocessing is finding the normalization candidates for each OOV token in the input text. For each ill-formed OOV token o_i in the input text, first the list of tokens that co-occur with o_i in the input text and their positional distances to o_i are extracted. This list is called the neighbor list of token o_i, i.e., NL(o_i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "For each neighbor node n_j in NL(o_i), the word association graph is traversed, and the edges from or to the node n_j are extracted. The resulting edge list EL(o_i) has edges in the form of (n_j, c_k) or (c_k, n_j), where c_k is a candidate canonical form of the OOV word o_i. Here the neighbor node n_j can be an OOV node, but the candidate node c_k is chosen among the IV nodes. The edges in EL(o_i) are filtered by the relative distance of n_j to o_i as given in NL(o_i). Any edge between n_j and c_k whose distance is not the same as the distance between n_j and o_i is removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
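A sketch of this neighbor-driven candidate extraction, combining the distance filtering described here with the POS filtering of the following paragraph. This is an illustration under assumed data shapes (nodes as (token, tag) pairs, neighbor side encoded as a precedes_oov flag), not the authors' implementation.

```python
from collections import defaultdict

def extract_candidates(oov_tag, neighbor_list, edges, iv_vocab):
    """Collect candidate IV nodes for an OOV token from the graph.

    neighbor_list: [(neighbor_node, distance, precedes_oov)], where
      precedes_oov is True if the neighbor appears before the OOV token.
    edges: {(from_node, to_node, distance): weight}; nodes are (token, tag).
    iv_vocab: set of known IV tokens.
    Returns {candidate_node: [(neighbor_node, edge_weight), ...]}, i.e. EL(o_i).
    """
    edge_list = defaultdict(list)
    for nb, dist, precedes in neighbor_list:
        for (src, dst, d), w in edges.items():
            if d != dist:
                continue  # distance-based filtering
            if precedes and src == nb:
                cand = dst
            elif not precedes and dst == nb:
                cand = src
            else:
                continue
            tok, tag = cand
            # keep only IV candidates sharing the OOV token's POS tag
            if tok in iv_vocab and tag == oov_tag:
                edge_list[cand].append((nb, w))
    return dict(edge_list)
```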
{
"text": "In addition to distance-based filtering, POS tag-based filtering is also performed on the edges in EL(o_i). Each candidate node should have the same POS tag as the corresponding OOV token. For the OOV token o_i that has the POS tag T_i, all the edges that include candidates with a tag other than T_i are removed from the edge list EL(o_i). Figure 3 represents a portion from the graph where the neighbors and candidates of the OOV node \"beatiful\" are shown. In the sample sentence in Table 3 there are two OOV tokens to be normalized, o_1 = w and o_2 = beatiful. The neighbor list of o_2, NL(o_2) includes n_1 = w, n_2 = a and n_3 = smile. For each neighbor in NL(o_2), the candidate nodes (c_1 = broken, c_2 = nice, c_3 = new, c_4 = beautiful, c_5 = big, c_6 = best, c_7 = great) are extracted. As shown in Figure 3 , there are 11 lines representing the edges between the neighbors of the OOV token and the candidate nodes. These are representative edges in EL(o_2). Each member of the edge list has the same tag (A for Adjective) as the OOV node \"beatiful\" and the same distance to the corresponding neighbor node of the OOV node.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 349,
"text": "Figure 3",
"ref_id": null
},
{
"start": 485,
"end": 492,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 809,
"end": 817,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "Each edge in EL(o_i) consists of a neighbor node n_j, a candidate node c_k and an edge weight edgeWeight(n_j, c_k). The edge weight represents the likelihood or the strength of association between the neighbor node n_j and the candidate node c_k. As described in the previous section, the edge weights are computed based on the frequency of co-occurrence of two tokens, as well as the confidence scores of their POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "Figure 3 : A portion of the graph that includes the OOV token \"beatiful\", its neighbors and the candidate nodes that each neighbor is connected to. Thick lines show the edge list with relative weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "The edge weights of the edges in EL(o_2) are shown in Figure 3 . The edges that are connected to the OOV neighbor \"w\" have smaller edge weights such as 3, 5, and 26. On the other hand, the edges that are connected to common words have higher weights. For example, the weight of the edge between the nodes \"a\" and \"new\" is 24388. This indicates that they are more common words, and frequently co-occur in the same form (\"a new\"). Although this edge weight metric is reasonable for identifying the most likely canonical form for the OOV word o_i, it has the drawback of favoring words with high frequencies like common words or stop words. Therefore, to avoid overrated words and get contextually related candidates, we normalize the edge weight edgeWeight(n_j, c_k) by the frequency of the candidate node c_k as shown in Equation 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "Equation 1 provides a metric that captures contextual similarity based on binary associations. In order to achieve a more comprehensive contextual coverage, a contextual similarity feature is built based on the sum of the binary association scores of several neighbors. As shown in Equation 2, for a candidate node c_k the total edge weight score is the sum of the normalized edge weight scores EWNorm(n_j, c_k), which are the edge weights coming from the different neighbors of the OOV token o_i. We expect this contextual similarity feature to favor and identify the candidates which (i) are related to many neighbors, and (ii) have a high association score with each neighbor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "EWNorm(n_j, c_k) = edgeWeight(n_j, c_k) / freq(c_k) (1); EWScore(o_i, c_k) = \u03a3_{(n_j, c_k) \u2208 EL(o_i)} EWNorm(n_j, c_k) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
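Equations 1-3 can be rendered directly in code. The following is an illustrative sketch (function and variable names are ours, not the authors'):

```python
def ew_score(edge_list, freq):
    """Equations 1-2: normalize each edge weight by the candidate's
    frequency, then sum the contributions of all neighbors."""
    return {cand: sum(w / freq[cand] for _, w in nbrs)
            for cand, nbrs in edge_list.items()}

def cont_sim_score(ew, freq_score, beta=0.5):
    """Equation 3: weighted sum of the total edge weight score and the
    candidate's (0..1) frequency score; beta limits the latter's effect."""
    return {cand: score + beta * freq_score.get(cand, 0.0)
            for cand, score in ew.items()}
```

The frequency normalization in ew_score is what keeps very common candidates such as "new" from dominating purely by raw co-occurrence counts.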
{
"text": "Our word association graph includes both OOV and IV tokens, and our OOV detection depends on the spellchecker, which fails to identify some OOV tokens that have the same spelling as an IV word. In order to propose better canonical forms, the frequencies of the normalization candidates in the social media corpus have also been incorporated into the contextual similarity feature. Nodes with higher frequencies lead to tokens that are in their most likely grammatical forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "The final contextual similarity of the token o_i and the candidate c_k is the weighted sum of the total edge weight score and the frequency score of the candidate (see Equation 3). The frequency score of the candidate is a real number between 0 and 1. It is proportional to the frequency of the candidate with respect to the frequencies of the other candidates in the corpus. Since the total edge weight score is our primary contextual resource, we may want to favor edge weight scores. We give the frequency score a weight 0 \u2264 \u03b2 \u2264 1 to be able to limit its effect on the total contextual similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "contSimScore(o_i, c_k) = EWScore(o_i, c_k) + \u03b2 * freqScore(c_k)",
"eq_num": "(3)"
}
],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "At this point, we have the candidate list CL(o_i) for the OOV token o_i, which includes all the unique candidates in EL(o_i) together with their computed contextual similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Contextual Similarity",
"sec_num": "3.3"
},
{
"text": "Following the prior work in (Han and Baldwin, 2011; Hassan and Menezes, 2013) , our lexical similarity features are based on edit distance (Levenshtein, 1966) , double metaphone (phonetic edit distance) (Philips, 2000) , and a similarity function (simCost) (Contractor et al., 2010) which is defined as the ratio of the Longest Common Subsequence Ratio (LCSR) (Melamed, 1999) of two words to the Edit Distance (ED) between their skeletons (Equations 4 and 5), where the skeleton of a word is obtained by removing its vowels.",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Han and Baldwin, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 52,
"end": 77,
"text": "Hassan and Menezes, 2013)",
"ref_id": "BIBREF12"
},
{
"start": 139,
"end": 158,
"text": "(Levenshtein, 1966)",
"ref_id": "BIBREF14"
},
{
"start": 203,
"end": 218,
"text": "(Philips, 2000)",
"ref_id": "BIBREF21"
},
{
"start": 257,
"end": 282,
"text": "(Contractor et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 360,
"end": 375,
"text": "(Melamed, 1999)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity",
"sec_num": "3.4"
},
{
"text": "LCSR(oj, c k ) = LCS(oj, c k )/maxLength(oj, c k ) (4) simCost(oj, c k ) = LCSR(oj, c k )/ED(oj, c k ) 5Following the tradition that is inspired from (Kaufmann and Kalita, 2010) , before lexical similarity calculations, any repetitions of characters three or more times in OOV tokens are reduced to two (e.g. goooood is reduced to good). Then, the edit distance, phonetic edit distance, and simCost between each candidate in CL(o i ) and the OOV token o i are calculated. Edit distance and phonetic edit distance are used to filter the candidates. Any candidate in CL(o i ) with an edit distance greater than t edit and phonetic edit distance greater than t phonetic to o i is removed from the candidate list CL(o i ).",
"cite_spans": [
{
"start": 150,
"end": 177,
"text": "(Kaufmann and Kalita, 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "lexSimScore(oi, c k ) = simCost(oi, c k ) + \u03bb * editScore(oi, c k )",
"eq_num": "(6)"
}
],
"section": "Lexical Similarity",
"sec_num": "3.4"
},
{
"text": "For the remaining candidates, the total lexical similarity score (Equation 6) is calculated using simCost and edit distance score 3 . Similar to contextual similarity score, here we have one main lexical similarity feature and one minor lexical similarity feature. The major lexical similarity feature is simCost, whereas the edit distance score is the minor feature. We assigned a weight 0 \u2264 \u03bb \u2264 1 to the edit distance score to be able to lower its contribution while calculating the total lexical similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Similarity",
"sec_num": "3.4"
},
{
"text": "Since some social media text messages are extremely short and contain several OOV words, they do not provide sufficient context, i.e., IV neighbors, to enable the extraction of good candidates from the word association graph. Therefore, we extended the candidate list obtained through contextual similarity as described in the previous section, by including all the tokens in the word association graph that satisfy the edit distance and 3 an approximate string comparison measure (between 0.0 and 1.0) using the edit distance https://sourceforge.net/projects/febrl/ phonetic edit distance criteria. We also incorporated candidates from external resources, in other words from a slang dictionary and a transliteration table of numbers and pronouns. If a candidate occurs in the slang dictionary or in the transliteration table as a correspondence to its OOV word, it is assigned an external score of 1, otherwise it is assigned an external score of 0.",
"cite_spans": [
{
"start": 438,
"end": 439,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Score",
"sec_num": "3.5"
},
{
"text": "The transliterations were first used by (Gouws et al., 2011) . Besides the token and its transliteration we also use its POS tag information, which was not available in their system.",
"cite_spans": [
{
"start": 40,
"end": 60,
"text": "(Gouws et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Score",
"sec_num": "3.5"
},
{
"text": "The external score favors the well known interpretations of common OOV words. However, unlike the dictionary based methodologies, our system does not return the corresponding unabbreviated word in the slang dictionary or in the transliteration table directly. Only an external score gets assigned and the candidate still needs to compete with other candidates which may have higher contextual similarities and one of those contextually more similar candidates may be returned as the correct normalization instead of the candidate found equivalent to the OOV word in the slang dictionary (or in the transliteration table).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Score",
"sec_num": "3.5"
},
{
"text": "As shown in Equation 7, the final score of a candidate IV token c k for an OOV token o i is the sum of its lexical similarity score, contextual similarity score and external score with respect to o i . candScore(oi, c k ) = lexSimScore(oi, c k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Scoring",
"sec_num": "3.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "+ contSimScore(oi, c k ) + externalScore(oi, c k )",
"eq_num": "(7)"
}
],
"section": "Overall Scoring",
"sec_num": "3.6"
},
{
"text": "4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Scoring",
"sec_num": "3.6"
},
{
"text": "We used the LexNorm1.1 (LN) dataset (Han and Baldwin, 2011) and Pennell and Liu (2014) 's trigram dataset to evaluate our proposed approach. LexNorm1.1 contains 549 tweets with 1184 manually annotated ill-formed OOV tokens. It has been used by recent text normalization studies for evaluation, which enables us to directly compare our performance results with results obtained by the recent previous work (Han and Baldwin, 2011; Pennell and Liu, 2011; Han et al., 2012; Liu et al., 2012; Hassan and Menezes, 2013; Yang and Eisenstein, 2013; Chrupala, 2014) . The trigram dataset is an SMS-like corpus collected from twitter status updates sent via SMS. The dataset does not include the complete tweet text but trigrams from tweets and one OOV word in each trigram is annotated. In total 4661 twitter status messages and 7769 tokens are annotated.",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF10"
},
{
"start": 64,
"end": 86,
"text": "Pennell and Liu (2014)",
"ref_id": "BIBREF20"
},
{
"start": 405,
"end": 428,
"text": "(Han and Baldwin, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 429,
"end": 451,
"text": "Pennell and Liu, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 452,
"end": 469,
"text": "Han et al., 2012;",
"ref_id": "BIBREF11"
},
{
"start": 470,
"end": 487,
"text": "Liu et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 488,
"end": 513,
"text": "Hassan and Menezes, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 514,
"end": 540,
"text": "Yang and Eisenstein, 2013;",
"ref_id": "BIBREF26"
},
{
"start": 541,
"end": 556,
"text": "Chrupala, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We used a large corpus of social media text to construct our word association graph. We extracted 1.5 GB of English tweets from Stanford's 476 million Twitter Dataset (Yang and Leskovec, 2011) . The language identification of tweets was performed by using the langid.py Python library (Lui and Baldwin, 2012; Baldwin and Lui, 2010) . CMU Ark Tagger (v0.3.2), which is a social media specific POS tagger achieving an accuracy of 95% over social media text (Owoputi et al., 2013; Gimpel et al., 2011) , is used for tokenizing and POS tagging the tweets. We used the twitter tagset which includes some extra POS tags specific to social media including URLs and emoticons, Twitter hashtags (#), and twitter at-mentions (@). We made use of these social media specific tags to disambiguate some OOV tokens.",
"cite_spans": [
{
"start": 167,
"end": 192,
"text": "(Yang and Leskovec, 2011)",
"ref_id": "BIBREF27"
},
{
"start": 285,
"end": 308,
"text": "(Lui and Baldwin, 2012;",
"ref_id": "BIBREF16"
},
{
"start": 309,
"end": 331,
"text": "Baldwin and Lui, 2010)",
"ref_id": "BIBREF1"
},
{
"start": 455,
"end": 477,
"text": "(Owoputi et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 478,
"end": 498,
"text": "Gimpel et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Generation",
"sec_num": "4.2"
},
{
"text": "After tokenization, we removed the tokens that were POS tagged as mention (e.g. @brendon), discourse marker (e.g. RT), URL, email address, emoticon, numeral, and punctuation. The remaining tokens are used to build the word association graph. After constructing the graph we only kept the nodes with a frequency greater than 8. For the performance related reasons, the relatedness thresholds t distance and t f requency were chosen as 3 and 8, respectively. The resulting graph contains 105428 nodes and 46609603 edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Generation",
"sec_num": "4.2"
},
{
"text": "While extending the candidate set with lexical features we use t edit \u2264 2 \u2228 t phonetic \u2264 1 to keep up with the settings in (Han and Baldwin, 2011) . In other words, IV words that are within 2 character edit distance or 1 character edit distance of a given OOV word under phonemic transcription were chosen as lexical similarity candidates. The values for the \u03bb and \u03b2 parameters in Equations 3 and 6 are set to 0.5. We did not tune these parameters for optimized performance. We selected the value of 0.5 in order to give less weight (half weight) to our minor contextual and lexical similarity features compared to the major ones.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Set Generation",
"sec_num": "4.3"
},
{
"text": "Most of the prior work assume perfect detection of ill-formed words during test set decoding (Liu et al., 2012; Han and Baldwin, 2011; Pennell and Liu, 2011; Yang and Eisenstein, 2013) . To be able to compare our results with studies that do not assume that ill-formed words have been preidentified (Chrupala, 2014; Hassan and Menezes, 2013; Han et al., 2012) we used our graph and built a dictionary to identify the ill-formed words.",
"cite_spans": [
{
"start": 93,
"end": 111,
"text": "(Liu et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 112,
"end": 134,
"text": "Han and Baldwin, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 135,
"end": 157,
"text": "Pennell and Liu, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 158,
"end": 184,
"text": "Yang and Eisenstein, 2013)",
"ref_id": "BIBREF26"
},
{
"start": 299,
"end": 315,
"text": "(Chrupala, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 316,
"end": 341,
"text": "Hassan and Menezes, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 342,
"end": 359,
"text": "Han et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization Candidates",
"sec_num": "4.4"
},
{
"text": "Following Han and Baldwin (2011) and Yang and Eisenstein (2013) , we created a dictionary by choosing the nodes in our graph that have a frequency property higher than 20. Filtering this dictionary of 49657 words using GNU Aspell dictionary (v0.60.6) we produced a set of 26773 \"invocabulary\" (IV) words. In our second setup our system does not attemp to normalize the words in this set.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "Han and Baldwin (2011)",
"ref_id": "BIBREF10"
},
{
"start": 37,
"end": 63,
"text": "Yang and Eisenstein (2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization Candidates",
"sec_num": "4.4"
},
{
"text": "In this paper we introduced a new contextual approach for text normalization. The lexical similarity score described in Section 3.4 and the external score described in Section 3.5 depend on the work of Han and Baldwin (2011) . With small changes made to the previously proposed method we took it as a baseline in our study.",
"cite_spans": [
{
"start": 202,
"end": 224,
"text": "Han and Baldwin (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "As contextual layer we proposed two metrics extracted from the word association graph. The first one depends on the total edge weights between candidates and OOV neighbours, the second one is based on the frequencies of the candidates in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "As the evaluation metrics we used precision, recall, and F-Measure. Precision calculates the proportion of correctly normalized words among the words for which we produced a normalization. Recall shows the amount of correct normalizations over the words that require normalization (ill-formed OOV words). The main metric that we consider while evaluating the performance of our system is F-Measure which is the harmonic mean of precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "We investigated the impact of lexSimScore and externalScore seperately on both datasets (Table 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "Using only lexSimScore the system achieved an F-measure of 28.24% on the LexNorm1.1 dataset and 38.70% on the Trigram dataset, which shows that lexical similarity alone is not enough for a good normalization system. However, the externalScore which is the layer that is more aware of the Internet jargon, along with some social text specific rule based transliterations performs better than expected on both datasets. Mixing these two layers we reach our baseline that is adopted from (Han and Baldwin, 2011) . This baseline setup obtained an F-measure of 77.12% on LexNorm1.1, which is slightly better than the result (75.30%) reported by the original system of Han and Baldwin (2011) .",
"cite_spans": [
{
"start": 485,
"end": 508,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF10"
},
{
"start": 663,
"end": 685,
"text": "Han and Baldwin (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "The results obtained by our proposed Contextual Word Association Graph (CWA-Graph) system on the LexNorm1.1 and trigram datasets, as well as the results of recent studies that used the same datasets for evaluation are presented in Table 5. The ill-formed words are assumed to have been pre-identified in advance. Table 5 : Results obtained when ill-formed words are assumed to have been pre-identified in advance.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 320,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "Our CWA-Graph approach achieves the best Fmeasure (82.24%) and precision (85.50%) among the recent previous studies. The high precision value is obtained without compromising much from recall (79.22%). Our recall is the second best among others. The F-score (82.09%) obtained by Yang and Eisenstein (2013)'s system is close to ours and the second best F-score, which on the other hand, has a lower precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "Without any modification to our system or to the parameters, we were able to improve the results obtained by Pennell and Liu (2011) on the trigram SMS-like dataset. The trigram nature of the dataset resulted in input texts which are (short thus) very limited with regard to contextual information. Nevertheless, our system achieved 72.8% F-Measure using this contextual information even though it is limited.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "Pennell and Liu (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "Along the systems (presented in Table 5 ) that assume ill-formed tokens have been pre-identified perfectly by an oracle, there are also systems that are not based on this assumption but contain illformed word identification components (Han et al., 2012; Hassan and Menezes, 2013; Chrupala, 2014) . We used the method described in Section 4.4 to identify the candidate tokens for normalization. Table 6 shows our results compared with the results of other systems that perform ill-formed word detection prior to normalization. We could label 1141 tokens correctly as ill-formed among 1184 ill-formed tokens. We achieved a word error rate (WER) of 2.6%, where Chrupala (2014) reported 4.8% and Han et al. (2012) Table 6 : Results obtained without assuming that ill-formed words have been pre-identified.",
"cite_spans": [
{
"start": 235,
"end": 253,
"text": "(Han et al., 2012;",
"ref_id": "BIBREF11"
},
{
"start": 254,
"end": 279,
"text": "Hassan and Menezes, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 280,
"end": 295,
"text": "Chrupala, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 692,
"end": 709,
"text": "Han et al. (2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 5",
"ref_id": null
},
{
"start": 394,
"end": 401,
"text": "Table 6",
"ref_id": null
},
{
"start": 710,
"end": 717,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "As shown in Table 5 some systems have equal precision and recall values (Yang and Eisenstein, 2013; Han and Baldwin, 2011; Pennell and Liu, 2011) . Those systems normalize all ill-formed words. On the other hand, our system does not return a normalization, if there are no candidates that are lexically similar, grammatically correct, and contextually close enough. For this reason, we managed to achieve a higher precision compared to the other systems. Our system returns a normalization candidate for an OOV word only if it achieves a similarity score (contextual, lexical, external, or some degree of each feature) above a threshold value. The default threshold used in the system is set equal to the maximum score that can be obtained by lexical features. Thus, we only retrieve candidates that obtain a non-zero contextual similarity score (conSimScore). The results shown at Table 7 and Table 8 demonstrate that CWA-Graph can obtain even higher values of precision by increasing the percentage of contextual context of candidates. It achieved 94.1% precision on the LexNorm1.1 dataset, where the highest precision reported at the same recall level is 85.37% (Hassan and Menezes, 2013) . The precision of the normalization system can be set (e.g. as high, medium, low) depending on the application where it will be used.",
"cite_spans": [
{
"start": 72,
"end": 99,
"text": "(Yang and Eisenstein, 2013;",
"ref_id": "BIBREF26"
},
{
"start": 100,
"end": 122,
"text": "Han and Baldwin, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 123,
"end": 145,
"text": "Pennell and Liu, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 1165,
"end": 1191,
"text": "(Hassan and Menezes, 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 5",
"ref_id": null
},
{
"start": 882,
"end": 901,
"text": "Table 7 and Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "Our motivation behind introducing the \u03bb and \u03b2 parameters was to investigate the importance of the minor features compared to our major features (described in Sections 3.3 and 3.4). For the experiments reported in Tables 5, 6, 7 and 8 we set the \u03bb and \u03b2 values to 0.5. We did not tune these parameters for optimized performance. Rather, our aim was to give less weight (half weight) to the minor features compared to the major ones. To analyze the effects of the lambda and beta parameters, we plotted the performance of the system on the LexNorm1.1 data set by varying their values (see Figure 4 ). It is shown that for \u03bb and \u03b2 values greater than 0.3 the performance of the system is quite robust. The F-score varies between 80.4% and 82.9%. Figure 4 : The effect of \u03bb and \u03b2 on the system performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 587,
"end": 595,
"text": "Figure 4",
"ref_id": null
},
{
"start": 743,
"end": 751,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.5"
},
{
"text": "In this paper, we present an unsupervised graphbased approach for contextual text normalization. The task of normalization is highly dependent on understanding and capturing the dynamics of the informal nature of social text. Our word association graph is built using a large unlabeled social media corpus. It helps to derive contextual analysis on both clean and noisy data. It is important to emphasize the difference between corpus based contextual information and contextual information of the input text (input context). Most recent unsupervised systems for text normalization only make use of corpus based context information. However, this approach is led by statistical information. In other words, it finds which IV word the OOV word is commonly normalized to, regardless of the context of the OOV word in the input text message. A major strength of our approach is that it utilizes both corpus based contextual information and input based contextual information. We use corpus based statistical information to connect/associate the words in the contextual word association graph. On the other hand, the neighbors of an OOV word in the input text provide us input based context information. Using input context to find normalizations helps us identify the correct normalization, even if it is not the statistically dominant one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We compared our approach with the recent social media text normalization systems and achieved state-of-the-art precision and F-measure scores. We reported our results on two datasets. The first one is the standard text normalization dataset (Lexnorm1.1) derived from Twitter. Our results on this dataset showed that our system can serve as a high precision text normalization system which is highly preferable as an NLP preprocessing step. The second dataset we tested our approach is a SMS-like trigram dataset. The tests showed that the proposed system can perform good on SMS data as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The system does not require a clean corpus or an annotated corpus. The contextual word association graph can be built by using the publicly available social media text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.noslang.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Phrase-based Statistical Model for SMS Text Normalization",
"authors": [
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A Phrase-based Statistical Model for SMS Text Nor- malization. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computa- tional Linguistics, pages 33-40.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language Identification: The Long and the Short of the Matter",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "229--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Marco Lui. 2010. Language Identification: The Long and the Short of the Matter. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 229-237.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Improved Error Model for Noisy Channel Spelling Correction",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "286--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Robert C. Moore. 2000. An Improved Error Model for Noisy Channel Spelling Correction. Proceedings of the 38th Annual Meeting on Associa- tion for Computational Linguistics, pages 286-293.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Investigation and Modeling of the Structure of Texting Language",
"authors": [
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Saraf",
"suffix": ""
},
{
"first": "Vijit",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Sudeshna",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Basu",
"suffix": ""
}
],
"year": 2007,
"venue": "International Journal on Document Analysis and Recognition",
"volume": "10",
"issue": "3",
"pages": "157--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and Modeling of the Structure of Texting Language. International Journal on Doc- ument Analysis and Recognition, 10(3):157-174.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Normalizing tweets with edit scripts and recurrent neural embeddings",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupala",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "680--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupala. 2014. Normalizing tweets with edit scripts and recurrent neural embeddings. Pro- ceedings of the 52st Annual Meeting of the Associa- tion for Computational Linguistics, pages 680-686.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised Cleansing of Noisy Text",
"authors": [
{
"first": "Danish",
"middle": [],
"last": "Contractor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tanveer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Faruquie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Venkata",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Subramaniam",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danish Contractor, Tanveer A. Faruquie, and L. Venkata Subramaniam. 2010. Unsuper- vised Cleansing of Noisy Text. Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 189-196.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Unsupervised Model for Text Message Normalization",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Computational Approaches to Linguistic Creativity",
"volume": "",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Cook and Suzanne Stevenson. 2009. An Un- supervised Model for Text Message Normalization. Proceedings of the Workshop on Computational Ap- proaches to Linguistic Creativity, pages 71-78.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What to Do About Bad Language on the Internet",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics : Human Language Technologies",
"volume": "",
"issue": "",
"pages": "359--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein. 2013. What to Do About Bad Lan- guage on the Internet. Proceedings of the North American Chapter of the Association for Computa- tional Linguistics : Human Language Technologies, pages 359-369.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Part-of-speech Tagging for Twitter: Annotation, Features, and Experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Flanigan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech Tagging for Twitter: Annotation, Features, and Experiments. Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies: Short Papers -Volume 2, pages 42-47.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Contextual Bearing on Linguistic Variation in Social Media",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "Congxing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Languages in Social Media",
"volume": "",
"issue": "",
"pages": "20--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Donald Metzler, Congxing Cai, and Eduard Hovy. 2011. Contextual Bearing on Lin- guistic Variation in Social Media. Proceedings of the Workshop on Languages in Social Media, pages 20-29.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lexical Normalisation of Short Text Messages: Makn Sens a #Twitter",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han and Timothy Baldwin. 2011. Lexical Normalisation of Short Text Messages: Makn Sens a #Twitter. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 368-378.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatically constructing a normalisation dictionary for microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictionary for microblogs. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 421-432.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Social Text Normalization Using Contextual Graph Random Walks",
"authors": [
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1577--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hany Hassan and Arul Menezes. 2013. Social Text Normalization Using Contextual Graph Random Walks. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1577-1586.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Syntactic Normalization of Twitter Messages",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Kaufmann",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 8th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "149--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Kaufmann and Jugal Kalita. 2010. Syntactic Normalization of Twitter Messages. Proceedings of the 8th International Conference on Natural Language Processing, pages 149-158.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Binary Codes Capable of Correcting Deletions, Insertions and Reversals",
"authors": [
{
"first": "Vladimir",
"middle": [
"Iosifovich"
],
"last": "Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet Physics Doklady",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Iosifovich Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Broad-Coverage Normalization System for Social Media Language",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "1035--1044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A Broad-Coverage Normalization System for Social Media Language. Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 1035-1044.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Langid.Py: An Off-the-shelf Language Identification Tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. Langid.Py: An Off-the-shelf Language Identification Tool. Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 25-30.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bitext Maps and Alignment via Pattern Recognition",
"authors": [
{
"first": "I",
"middle": [
"Dan"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "1",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 1999. Bitext Maps and Alignment via Pattern Recognition. Computational Linguistics, 25(1):107-130.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "380--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters. Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380-390.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Character-Level Machine Translation Approach for Normalization of SMS Abbreviations",
"authors": [
{
"first": "Deana",
"middle": [],
"last": "Pennell",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Fifth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "974--982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deana Pennell and Yang Liu. 2011. A Character-Level Machine Translation Approach for Normalization of SMS Abbreviations. Fifth International Joint Conference on Natural Language Processing, pages 974-982.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Normalization of informal text",
"authors": [
{
"first": "Deana",
"middle": [],
"last": "Pennell",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Speech & Language",
"volume": "28",
"issue": "1",
"pages": "256--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deana Pennell and Yang Liu. 2014. Normalization of informal text. Computer Speech & Language, 28(1):256-277.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Double Metaphone Search Algorithm",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Philips",
"suffix": ""
}
],
"year": 2000,
"venue": "C/C++ Users Journal",
"volume": "18",
"issue": "",
"pages": "38--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Philips. 2000. The Double Metaphone Search Algorithm. C/C++ Users Journal, 18(6):38-43, June.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unsupervised modeling of twitter conversations",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Pronunciation Modeling for Improved Spelling Correction",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Robert C. Moore. 2002. Pronunciation Modeling for Improved Spelling Correction. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 144-151.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Log-Linear Model for Unsupervised Text Normalization",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang and Jacob Eisenstein. 2013. A Log-Linear Model for Unsupervised Text Normalization. Proceedings of the Empirical Methods on Natural Language Processing, pages 61-72.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Patterns of Temporal Variation in Online Media",
"authors": [
{
"first": "Jaewon",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fourth International Conference on Web Search and Web Data Mining",
"volume": "",
"issue": "",
"pages": "177--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaewon Yang and Jure Leskovec. 2011. Patterns of Temporal Variation in Online Media. Proceedings of the Fourth International Conference on Web Search and Web Data Mining, pages 177-186.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Sample nodes and edges from the word association graph.",
"num": null,
"uris": null
},
"TABREF0": {
"content": "<table/>",
"html": null,
"text": "Sample tweets and their normalized forms.",
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td colspan=\"2\">node id freq</td><td>oov</td><td>tag</td></tr><tr><td>smile</td><td>3</td><td colspan=\"2\">False A</td></tr><tr><td>smile</td><td colspan=\"3\">3403 False N</td></tr><tr><td>smile</td><td colspan=\"3\">2796 False V</td></tr></table>",
"html": null,
"text": "(d: distance, w: edge weight). On the other hand, smile can be a good candidate for a Noun or a Verb OOV token, if it is lexically and contextually similar to it.",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table/>",
"html": null,
"text": "The different nodes in the word association graph representing the token smile tagged with different POS tags.",
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Method</td><td colspan=\"4\">Dataset Precision Recall F-measure</td></tr><tr><td>Han et al. (2012)</td><td>LN</td><td>70.00</td><td>17.90</td><td>28.50</td></tr><tr><td>Hassan and Menezes (2013)</td><td>LN</td><td>85.37</td><td>56.40</td><td>69.93</td></tr><tr><td>CWA-Graph</td><td>LN</td><td>85.87</td><td>76.52</td><td>80.92</td></tr></table>",
"html": null,
"text": "reported 6.6% WER on the Lexnorm1.1 dataset.",
"type_str": "table",
"num": null
},
"TABREF8": {
"content": "<table><tr><td colspan=\"4\">conSimScore &gt; Precision Recall F-measure</td></tr><tr><td>0</td><td>77.2</td><td>68.8</td><td>72.8</td></tr><tr><td>0.1</td><td>80.9</td><td>65.8</td><td>72.6</td></tr><tr><td>0.2</td><td>84.2</td><td>60.8</td><td>70.6</td></tr><tr><td>0.3</td><td>87.6</td><td>54.6</td><td>67.3</td></tr><tr><td>0.4</td><td>89.5</td><td>47.1</td><td>61.7</td></tr><tr><td>0.5</td><td>90.8</td><td>42.1</td><td>57.6</td></tr></table>",
"html": null,
"text": "Comparison of results for different threshold values on LexNorm1.1; the setup used for our other experiments is shown in bold.",
"type_str": "table",
"num": null
},
"TABREF9": {
"content": "<table/>",
"html": null,
"text": "Comparison of results for different threshold values on the trigram dataset; the setup used for our other experiments is shown in bold.",
"type_str": "table",
"num": null
}
}
}
}