|
{ |
|
"paper_id": "D13-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:42:37.709316Z" |
|
}, |
|
"title": "Paraphrasing 4 Microblog Normalization", |
|
"authors": [ |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Black", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Trancoso", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Compared to the edited genres that have played a central role in NLP research, microblog texts use a more informal register with nonstandard lexical items, abbreviations, and free orthographic variation. When confronted with such input, conventional text analysis tools often perform poorly. Normalization-replacing orthographically or lexically idiosyncratic forms with more standard variants-can improve performance. We propose a method for learning normalization rules from machine translations of a parallel corpus of microblog messages. To validate the utility of our approach, we evaluate extrinsically, showing that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.", |
|
"pdf_parse": { |
|
"paper_id": "D13-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Compared to the edited genres that have played a central role in NLP research, microblog texts use a more informal register with nonstandard lexical items, abbreviations, and free orthographic variation. When confronted with such input, conventional text analysis tools often perform poorly. Normalization-replacing orthographically or lexically idiosyncratic forms with more standard variants-can improve performance. We propose a method for learning normalization rules from machine translations of a parallel corpus of microblog messages. To validate the utility of our approach, we evaluate extrinsically, showing that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Microblogs such as Twitter, Sina Weibo (a popular Chinese microblog service) and Facebook have received increasing attention in diverse research communities (Han and Baldwin, 2011; Hawn, 2009, inter alia) . In contrast to traditional text domains that use carefully controlled, standardized language, microblog content is often informal, with less adherence to conventions regarding punctuation, spelling, and style, and with a higher proportion of dialect or pronouciation-derived orthography. While this diversity itself is an important resource for studying, e.g., sociolinguistic variation (Eisenstein et al., 2011; Eisenstein, 2013) , it poses challenges to NLP applications developed for more formal domains. If retaining variation due to sociolinguistic or phonological factors is not crucial, text normalization can improve performance on downstream tasks ( \u00a72).", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 180, |
|
"text": "(Han and Baldwin, 2011;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 204, |
|
"text": "Hawn, 2009, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 619, |
|
"text": "(Eisenstein et al., 2011;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 637, |
|
"text": "Eisenstein, 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper introduces a data-driven approach to learning normalization rules by conceiving of normalization as a kind of paraphrasing and taking inspiration from the bilingual pivot approach to paraphrase detection (Bannard and Callison-Burch, 2005) and the observation that translation is an inherently \"simplifying\" process (Laviosa, 1998; Volansky et al., 2013) . Starting from a parallel corpus of microblog messages consisting of English paired with several other languages (Ling et al., 2013) , we use standard web machine translation systems to re-translate the non-English segment, producing English original, English MT pairs ( \u00a73). These are our normalization examples, with MT output playing the role of normalized English. Several techniques for identifying high-precision normalization rules are proposed, and we introduce a character-based normalization model to account for predictable character-level processes, like repetition and substitution ( \u00a74). We then describe our decoding procedure ( \u00a75) and show that our normalization model improve translation quality for English-Chinese microblog translation ( \u00a76). 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 249, |
|
"text": "(Bannard and Callison-Burch, 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 341, |
|
"text": "(Laviosa, 1998;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 364, |
|
"text": "Volansky et al., 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 498, |
|
"text": "(Ling et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider the English tweet shown in the first row of Table 1 which contains several elements that NLP 1 The datasets used in this paper are available from http: //www.cs.cmu.edu/\u02dclingwang/microtopia. orig. To DanielVeuleman yea iknw imma work on that MT 1 \u554aiknw DanielVeuleman\u4f0a\u9a6c\u5de5\u4f5c\uff0c MT 2 DanielVeuleman \u662fiknw \u51cb\u8c22\u5173\u4e8e\u5de5\u4f5c\uff0c MT 3 \u5230DanielVeuleman\u662f\u7684iknw imma\u8fd9\u65b9\u9762\u7684\u5de5\u4f5c systems trained on edited domains may not handle well. First, it contains several nonstandard abbreviations, such as, yea, iknw and imma (abbreviations of yes, I know and I am going to). Second, there is no punctuation in the text although standard convention would dictate that it should be used. To illustrate the effect this can have, consider now the translations produced by Google Translate, 2 Microsoft Bing, 3 and Youdao, 4 shown in rows 2-4. Even with no knowledge of Chinese, it is not hard to see that all engines have produced poor translations: the abbreviation iknw is left translated by all engines, and imma is variously deleted, left untranslated, or transliterated into the meaningless sequence \u4f0a\u9a6c (pronounced y\u012b m\u01ce).", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 103, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 771, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 60, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Why Normalize?", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "While normalization to a form like To Daniel Veuleman: Yes, I know. I am going to work on that. does indeed lose some information (information important for an analysis of sociolinguistic or phonological variation clearly goes missing), it expresses the propositional content of the original in a form that is more amenable to processing by traditional tools. Translating the normalized form with Google Translate produces \u8981\u4e39\u5c3c\u5c14Veuleman\uff1a\u662f\u7684\uff0c\u6211 \u77e5\u9053\u3002\u6211\u6253\u7b97\u5728\u90a3\u5de5\u4f5c\u3002, which is a substantial improvement over all translations in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 521, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Why Normalize?", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We want to treat normalization as a supervised learning problem akin to machine translation, and to do so, we need to obtain pairs of microblog posts and their normalized forms. While it would be possible to ask annotators to create such a corpus, it would be quite expensive to obtain large numbers of examples. In this section, we propose a method for creating normalization examples without any human annotation, by leveraging existing tools and data resources. The English example sentence in Table 1 was selected from the \u00b5topia parallel corpus (Ling et al., 2013) , which consists of self-translated messages from Twitter and Sina Weibo (i.e., each message contains a translation of itself). Row 2 of Table 2 shows the Mandarin self-translation from the corpus. The key observation is what happens when we automatically translate the Mandarin version back into English. Rows 3-5 shows automatic translations from three standard web MT engines. While not perfect, the translations contain several correctly normalized subphrases. We will use such re-translations as a source of (noisy) normalization examples. Since such self-translations are relatively numerous on microblogs, this technique can provide a large amount of data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 550, |
|
"end": 569, |
|
"text": "(Ling et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 497, |
|
"end": 504, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 714, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Obtaining Normalization Examples", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Of course, to motivate this paper, we argued that NLP tools -like the very translation systems we propose to use -often fail on unnormalized input. Is this a problem? We argue that it is not for the following two reasons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Obtaining Normalization Examples", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Normalization in translation. Work in translation studies has observed that translation tends to be a generalizing process that \"smooths out\" authorand work-specific idiosyncrasies (Laviosa, 1998; Volansky et al., 2013) . Assuming this observation is robust, we expect that dialectal variant forms found in microblogs to be normalized in translation. Therefore, if the parallel segments in our microblog parallel corpus did indeed originate through a translation process (rather than, e.g., being generated as two independent utterances from a bilingual), we may then state the following assumption about the distribution of variant forms in a parallel segment e, f : if e contains nonstandard lexical variants, then f is likely to be a normalized translation using with fewer nonstandard lexical variants (and viceversa).", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 196, |
|
"text": "(Laviosa, 1998;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 219, |
|
"text": "Volansky et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Obtaining Normalization Examples", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Uncorrelated orthographic variants. Any written language has the potential to make creative use of orthography: alphabetic scripts can render approximations of pronunciation variants; logographic scripts can use homophonic substitutions. However, the kinds of innovations used in particular languages will be language specific (depending on details of the phonology, lexicon, and orthography of the language). However, for language pairs that differ substantially in these dimensions, it may not always be possible (or at least easy) to preserve particular kinds of nonstandard orthographic forms in translation. Consider the (relatively common) pronounverb compounds like iknw and imma from our motivating example: since Chinese uses a logographic script without spaces, there is no obvious equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Obtaining Normalization Examples", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For the two reasons outlined above, we argue that we will be able to translate back into English using MT, even when the underlying English part of the parallel corpus has a great deal of nonstandard content. We leverage this fact to build the normalization corpus, where the original English tweet is treated as the variant form, and the automatic translation obtained from another language is considered a potential normalization. 5 Our process is as follows. The microblog corpus of Ling et al. (2013) contains sentence pairs extracted from Twitter and Sina Weibo, for multiple language pairs. We use all corpora that include English as one of the languages in the pair. The respective non-English side is translated into English using different translation engines. The different sets we used and the engines we used to translate are shown in Table 3 . Thus, for each original English post o, we obtain n paraphrases {p i } n i=1 , from n different translation engines. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 434, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 504, |
|
"text": "Ling et al. (2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 847, |
|
"end": 854, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Variant-Normalized Parallel Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our parallel microblog corpus was crawled automatically and contains many misaligned sentences. To improve precision, we attempt to find the similarity between the (unnormalized) original and each of the normalizations using an alignment based on the one used in METEOR (Denkowski and Lavie, 2011), which computes the best alignment between the original tweet and each of the normalizations but modified to permit domain-specific approximate matches. To address lexical variants, we allow fuzzy word matching, that is, we allow lexically similar, such as yea and yes to be aligned (similarity is determined by the Levenshtein distance). We also perform phrasal matchings, such as ikwn to i know. To do so, we extend the alignment algorithm from word to phrasal alignments. More precisely, given the original post o and a candidate normalization n, we wish to find the optimal segmentation producing a good alignment. A segmentation s = s 1 , . . . , s |s| is a sequence of segments that aligns as a block to a source word. For instance, for the sentence yea iknw imma work on that, one possible segmentation could be s 1 =yea ikwn, s 2 =imma and s 3 =work on that.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Model. We define the score of an alignment a and segmentation s in using a model that makes semi-Markov independence assumptions, similar to the work in (Bansal et al., 2011) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 174, |
|
"text": "(Bansal et al., 2011)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", u(a, s | o, n) = |s| i=1 u e (s i , a i | n) \u00d7 u t (a i | a i\u22121 ) \u00d7 u (|s i |)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In this model, the maximal scoring segmentation and alignment can be found using a polynomial time dynamic programming algorithm. Each segment can be aligned to any word or segment in o. The aligned segment for s k is defined as a k . For the score of a segment correspondence u e (s, a | n), we assume that this can be estimated using the lexical similarity between segments, which we define to be 1 \u2212 L(s k ,a k ) max{|s k |,|a k |} , where L(x, y) denotes the Levenshtein distance between strings x and y, normalized by the highest possible distance between those segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the alignment score u t , we assume that the relative order of the two sequences will be mostly monotonous. Thus, we approximate u t with the following density", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "pos s (a k ) \u2212 pos e (a k\u22121 ) \u223c N (1, 1),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where the pos s is the index of the first word in the segment and pos e the one of the last word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "After finding the Viterbi alignments, we compute the similarity measure \u03c4 = |A| |A|+|U | , used in (Resnik and Smith, 2003) , where |A| and |U | are the number of words that were aligned and unaligned, respectively. In this work, we extract the pair if \u03c4 > 0.2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 123, |
|
"text": "(Resnik and Smith, 2003)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment and Filtering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "From the normalization corpus, we learn a normalization model that generalizes the normalization process. That is, from the data we observe that To DanielVeuleman yea iknw imma work on that is normalized to To Daniel Veuleman: yes, I know. I am going to work on that. However, this is not useful, since the chances of the exact sentence To DanielVeuleman yea iknw imma work on that occurring in the data is low. We wish to learn a process to convert the original tweet into the normalized form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "There are two mechanisms that we use in our model. The first ( \u00a74.1) learns word-word and phrase-phrase mappings. That is, we wish to find that DanielVeuleman is normalized to Daniel Veuleman, that iknw is normalized to I know and that imma is normalized to I am going. These mappings are more useful, since whenever iknw occurs in the data, we have the option to normalize it to I know. The second ( \u00a74.2) learns character sequence mappings. If we look at the normalization DanielVeuleman to Daniel Veuleman, we can see that it is only applicable when the exact word DanielVeuleman occurs. However, we wish to learn that it is uncommon for the letters l and v to occur in the same word sequentially, so that be can add missing spaces in words that contain the lv character sequence, such as normalizing phenomenalvoter to phenomenal voter. However, there are also cases where this is not true, for instance, in the word velvet, we do not wish to separate the letters l and v. Thus, we shall describe the process we use to decide when to apply these transformations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The process to find phrases from sentences has been throughly studied in Machine Translation. This is generally done in two steps, Word Alignments and Phrase Extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Alignment. The first step is to find the word-level alignments between the original post and its normalization. This is a well studied problem in MT, referred as Word Alignment (Brown et al., 1993) . Many alignment models have been proposed, such as, the HMM-based word alignment models (Vogel et al., 1996) and the IBM models (Och and Ney, 2003) . Generally, a symmetrization step is performed, where the bidirectional alignments are combined heuristically. In our work, we use the fast aligner proposed in (Dyer et al., 2013) to obtain the word alignments. Figure 1 shows an example of an word aligned pair of a tweet and its normalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 197, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 307, |
|
"text": "(Vogel et al., 1996)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 346, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 527, |
|
"text": "(Dyer et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 559, |
|
"end": 567, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Phrase Extraction. The phrasal extraction step (Ling et al., 2010) , uses the word aligned sentences and extracts phrasal mappings between the original tweet and its normalization, named phrase pairs. For instance, in Figure 1 , we would like to extract the phrasal mapping from go 4 to go for, so that we learn that the word 4 in the context of go is normalized to the proposition for. To do this, the most common approach is to use the template proposed in (Och and Ney, 2004) , which allows phrase pairs to be extracted, if there is at least one word alignment within the pair, and there are no words inside the pair that are aligned to words not in the pair. For instance, in the example above, the phrase pair that normalizes wanna to want to would be extracted, but the phrase pair normalizing wanna to want to go would not, because the word go in the normalization is aligned to a word not in the pair.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 66, |
|
"text": "(Ling et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 478, |
|
"text": "(Och and Ney, 2004)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 226, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Phrasal Features. After extracting the phrase pairs, a model is produced with features derived from phrase pair occurrences during extraction. This model is equivalent to phrasal translation model in MT, but we shall refer to it as the normalization model. For a phrase pair o, n , where o is the original phrase, and n is the normalized phrase, we compute the normalization relative frequency f (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "n | o) = C(n,o) C(o)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ", where C(n, o) denotes the number of times o was normalized to n and C(o) denotes the number of times o was seen in the extracted phrase pairs. Table 4 gives a fragment of the normalization model. The columns represent the original phrase, its normalization and the probability, respectively. In Table 4 , we observe that the abbreviation wanna is normalized to want to with a relatively high probability, but it can also be normalized to other equivalent expressions, such as will and going to. The word 4 by itself has a low probability to be normalized to the preposition for. This is expected, since this decision cannot be made without context. However, we see that the phrase go 4 is normalized to go for with a high probability, which specifies that within the context of go, 4 is generally used as a preposition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 304, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "From Sentences To Phrases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While we can learn lexical variants that are in the corpora using the phrase model, we can only address word forms that have been observed in the corpora. This is quite limited, since we cannot expect all the word forms to be present, such as all the possible orthographic errors for the word cat, such as catt, kat and caaaat. Thus, we will build a characterbased model that learns the process lexical variants are generated at the subword level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Phrases to Characters", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our character-based model is similar to the phrase-based model, except that, rather than learning word-based mappings from the original tweet and the normalization sentences, we learn characterbased mappings from the original phrases to the normalizations of those phrases. Thus, we extract the phrase pairs in the phrasal normalization model, and use them as a training corpora. To do this, for each phrase pair, we add a start token, <start>, and a end token, <end>, at the beginning and ending of the phrase pair. Afterwards, we separate all characters by space and add a space token <space> where spaces were originally. For instance, the phrase pair normalizing DanielVeuleman to Daniel Veuleman would be converted to <start> d a n i e l v e u l e m a n <end> and <start> d a n i e l <space> v e u l e m a n <end>.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Phrases to Characters", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Character-based Normalization Model -To build the character-based model, we proceed using the same approach as in the phrasal normalization model. We first align characters using Word Alignment Models, and then we perform phrase extraction to retrieve the phrasal character segments, and build the character-based model by collecting statistics. Once again, we provide examples of entries in the model in Table 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 405, |
|
"end": 412, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "From Phrases to Characters", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We observe that many of the normalizations dealt with in the previous model by memorizing phrases are captured with string transformations. For instance, from phrase pairs such as tooo to too and sooo to so, we learn that sequences of o's can be reduced to 2 or 1 o. Other examples include orthographic substitutions, such as 2 for to and 4 for for (as found in 2gether, 2morrow, 4ever and 4get). Moreover, orthographic errors can be generated from mistaking characters with similar phonetic properties, such as, s to c, z to s and sh to ch, generating lexical variants such as reprecenting. Finally, we learn that the number 0 that resembles the letter o, can be used as a replacement, as in g00d. Finally, we can see that the rule ingfor to ing for attempts to find segmentation errors, such as goingfor, where a space between going and for was omitted. 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From Phrases to Characters", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In section 4, we built two models to learn the process of normalization, the phrase-based model and the character-based model. In this section, we describe the decoder we used to normalize the sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization Decoder", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The advantage of the phrase-based model is that it can make decisions for normalization based on context. That is, it contains phrasal units, such as, go 4, that determine, when the word 4 should be normalized to the preposition for and when to leave it as a number. However, it cannot address words that are unseen in the corpora. For instance, if the word form 4ever is not seen in the training corpora, it is not be able to normalize it, even if it has seen the word 4get normalized to forget. On the other hand, the character-based model learns subword normalizations, for instance, if we see the word nnnnno normalized to no, we can learn that repetitions of the letter n are generally shorted to n, which allows it to generate new word forms. This model has strong generalization potential, but the weakness of the character-based model is that it fails to consider the context of the normalization that the phrase-based model uses to make normalization decisions. Thus, our goal in this section is describe a decoder that uses both models to improve the quality of the normalizations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization Decoder", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use Moses, an off-the-shelf phrase-based MT system (Koehn et al., 2007) , to \"translate\" the original tweet its normalized form using the phrasal model ( \u00a74.1). Aside form the normalization probability, we also use the common features used in MT. These are the reverse normalization probability, the lexical and reverse lexical probabilities and the phrase penalty. We also use the MSD reordering model proposed in (Koehn et al., 2005) , which adds reordering features. 7 The final score of each phrase pair is given as a sum of weighted log features. The weights for these features are optimized using MERT (Och, 2003) . In our work, we sampled 150 tweets randomly from Twitter and normalized them manually, and used these samples as development data for MERT. As for the character-based model features, we simply rank the training phrase pairs by their relative frequency the f (n | o), and use the top-1000 phrase pairs as development set. Finally, a language model is required during decoding as a prior, since it defines the type of language that is produced by the output. We wish to normalized to formal language, which is generally better processed by NLP tools. Thus, for the phrase model, we use the English NIST dataset composed of 8M sentences in English from the news domain to build a 5-gram Kneser-Ney smoothed language model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 74, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 438, |
|
"text": "(Koehn et al., 2005)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 474, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 622, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrasal Decoder", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We now turn to how to apply the character-based ( \u00a74.2), together with the phrasal model. For this model, we again use Moses, treating each character as a \"word\". The simplest way to combine both methods is first to decode the input o sentence with the character-based decoder, normalizing each word independently and then normalizing the resulting output using the phrase-based decoder, which enables the phrase model to score the outputs of the character model in context. Our process is as follows. Given the input sentence o, with the words o 1 , . . . , o m , where m is the number of words in the input, we generate for each word o i a list of n-best normalization candidates z 1 o i , . . . , z n o i . We further filter the candidates using two criteria. We start by filtering each candidate z j o i that occurs less frequently than the original word o i . This is motivated by our observation that lexical variants occur far less than the respective standard form. Second, we build a corpus of English language Twitter consisting of 70M tweets, extract the unigram counts, and perform Brown clustering (Brown et al., 1992) with k = 3000 clusters. Next, we calculate the cluster similarity between o i and each surviving candidate, z j o i . We filter the candidate if the similarity is less than 0.8. The similarity between two clusters represented as bit strings,", |
|
"cite_spans": [ |
|
{ |
|
"start": 1111, |
|
"end": 1131, |
|
"text": "(Brown et al., 1992)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "S[c(o i ), c(z j o i )]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": ", calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "S(x, y) = 2 \u2022 |lpm{x, y)}| |x| + |y| ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "where lpm computes the longest common prefix of the contexts and |x| is the length of the bit string. 8 If a candidate contains more than one word (because a space was inserted), we set its count as the minimum count among its words. To find the cluster for multiple word units, we concatenate the words together, and find the cluster with the resulting word if it exists. This is motivated by the fact that it is common for missing spaces to exist in microblog corpora, generating new word forms, such as wantto, goingfor, and given a large enough corpora as the one we used, these errors occur frequently enough to be placed in the correct cluster. In fact, the variants such as wanna and tmi, occur in the same clusters as the words wantto and toomuchinformation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
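The prefix-based cluster similarity above can be sketched directly. The bit-string representation and the 0.8 filtering threshold follow the text; the example cluster paths themselves are hypothetical.

```python
def cluster_similarity(x, y):
    """2 * |longest common prefix| / (|x| + |y|) over Brown-cluster bit strings."""
    lcp = 0
    for a, b in zip(x, y):
        if a != b:
            break
        lcp += 1
    return 2 * lcp / (len(x) + len(y))

# Hypothetical cluster bit strings: a candidate survives only if its
# similarity to the original word's cluster is at least 0.8.
keep = cluster_similarity("11010011", "11010010") >= 0.8  # long shared prefix
drop = cluster_similarity("11010011", "00101100") >= 0.8  # prefixes diverge at once
```

Because Brown clustering is hierarchical, a long shared prefix means the two clusters sit close together in the cluster tree, which is why prefix length works as a proxy for distributional similarity.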
|
{ |
|
"text": "Remaining candidates are combined into a word lattice, enabling us to perform lattice-based decod-ing with the phrasal model (Dyer et al., 2008) . Figure 2, provides an example of such a lattice for the variant sentence I wanna meeeet DanielVeuleman.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 144, |
|
"text": "(Dyer et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 153, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Character and Phrasal Decoder", |
|
"sec_num": "5.2" |
|
}, |
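A word lattice of this kind can be sketched as a set of weighted arcs between states. The arc weights and the simple Viterbi search below are illustrative assumptions, not the Moses lattice-decoding implementation; they only show how alternative normalization candidates for each word coexist in one compact structure.

```python
# A word lattice encoded as arcs (start_state, end_state, token, weight).
# Tokens other than the originals stand in for hypothetical character-model
# n-best candidates; weights are made up for illustration.
lattice = [
    (0, 1, "I", 1.0),
    (1, 2, "wanna", 0.4), (1, 2, "want to", 0.6),
    (2, 3, "meeeet", 0.3), (2, 3, "meet", 0.7),
    (3, 4, "DanielVeuleman", 1.0),
]

def best_path(lattice, final_state):
    """Highest-weight path via Viterbi over topologically ordered states."""
    best = {0: (1.0, [])}
    for s, e, tok, w in lattice:  # arcs are sorted by start state
        p, toks = best[s]
        cand = (p * w, toks + [tok])
        if e not in best or cand[0] > best[e][0]:
            best[e] = cand
    return " ".join(best[final_state][1])
```

In the actual system, the phrase-based decoder scores paths through such a lattice with its full feature set (including the language model) rather than with a single arc weight.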
|
{ |
|
"text": "Until now, we learned normalizations from pairs of original tweets and their normalizations. We shall now describe a process to leverage monolingual documents to learn new normalizations, since the monolingual data is far easier to obtain than parallel data. This process is similar to the work in (Han et al., 2012) , where confusion sets of contextually similar words are built initially as potential normalization candidates. We again use the k = 3000 Brown clusters, 9 and this time consider the contents of each cluster as a set of possible normalization variants. For instance, we find that the cluster that includes the word never, also includes the variant forms neverrrr, neva and nevahhh. However, the cluster also contains non-variant forms, such as gladly and glady. Thus, we want to find that neverrrr maps to never, while glady maps to gladly in the same cluster. Our work differs from previous work in that, rather than defining features manually, we use our characterbased decoder to find the mappings between lexical variants and their normalizations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 316, |
|
"text": "(Han et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Variants from Monolingual Data", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For every word type w i in cluster c(w i ) = {w 1 , . . . , w n }, we generate a set of possible candidates for each word w 1 i , . . . , w m i . Then, we build a directed acyclic graph (DAG), where every word. We add an edge between w i and w j , if w i can be decoded into w j using the character model from the previous section, and also if w i occurs less than w j ; the second condition guarantees that the graph will be acyclic. Sample graphs are shown in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 462, |
|
"end": 470, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Variants from Monolingual Data", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Afterwards, we find the number of paths between all nodes in the graph (this can be computed efficiently in O(|V | + |E|) time). Then, for each word w i , we find the w j to which it has the highest number of paths to and extract the normalization of w i to w j . In case of a tie, we choose the word w j that occurs more often in the monolingual corpora. This is motivated by the fact that normalizations are transitive. Thus, even if neva cannot be decoded directly to never, we can use nevar as an intermediate step to find the correct normalization. This is performed for all the clusters, and the resulting dictionary of lexical variants mapped to their standard forms is added to the training data of the character-based model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Variants from Monolingual Data", |
|
"sec_num": "5.3" |
|
}, |
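The path-counting procedure above can be sketched as follows. The example DAG, its edge set, and the frequency counts are hypothetical, but the tie-breaking rule and the transitivity behavior (neva reaching never through nevar) match the description.

```python
from functools import lru_cache

# Hypothetical DAG over one Brown cluster: an edge u -> v means the
# character model can decode u into v and v is more frequent than u.
edges = {
    "neva": ["nevar"],
    "nevar": ["never"],
    "neverrrr": ["never"],
    "never": [],
}
freq = {"neva": 10, "nevar": 25, "neverrrr": 40, "never": 90000}

def num_paths(src, dst):
    """Number of distinct src -> dst paths; memoized, so O(|V| + |E|)."""
    @lru_cache(maxsize=None)
    def count(u):
        if u == dst:
            return 1
        return sum(count(v) for v in edges[u])
    return count(src)

def normalization_target(w):
    # Pick the word w reaches via the most paths; break ties by corpus
    # frequency. If nothing is reachable, w keeps its own form.
    best = max((v for v in edges if v != w),
               key=lambda v: (num_paths(w, v), freq[v]))
    return best if num_paths(w, best) > 0 else w
```

Here neva cannot be decoded directly to never, yet the path through nevar still identifies never as its normalization, and the frequency tie-break prefers never over the intermediate form.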
|
{ |
|
"text": "We evaluate our normalization model intrinsically by testing whether our normalizations more closely resemble standardized data, and then extrinsically by testing whether we can improve the translation quality of in-house as well as online Machine Translation systems by normalizing the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We use the gold standard by Ling et al. (2013) , composed by 2581 English-Mandarin microblog sentence pairs. From this set, we randomly select 1290 pairs for development and 1291 pairs for testing. The normalizer model is trained on the corpora extracted and filtered in section 3, in total, there were 1.3M normalization pairs used during training. The test sentences are normalized using four different setups. The first setup leaves the input sentence unchanged, which we call No Norm. The second uses the phrase-based model to normalize the input sentence, which we will denote Norm+phrase. The third uses the character-based model to output lattices, and then decodes with the phrase based model, which we will denote Norm+phrase+char. Finally, we test the same model after adding the training data extracted using monolingual documents, which we will refer as Norm+phrase+char+mono.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 46, |
|
"text": "Ling et al. (2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "To test the normalizations themselves, we used Google Translate to translate the Mandarin side of the 1291 test sentence pairs back to English and use the original English tweet. While, this is by itself does not guarantee that the normalizations are correct, since the normalizations could be syntactically and semantically incorrect, it will allow us to check whether the normalizations are closer to those produced by systems trained on news data. This experiment will be called Norm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As an application and extrinsic evaluation for our normalizer, we test if we can obtain gains on the MT task on microblog data by using our normalizer prior to translation. We build two MT systems using Moses. Firstly, we build a out-of-domain model using the full 2012 NIST Chinese-English dataset (approximately 8M sentence pairs), which is dataset from the news domain, and we will denote this system as Inhouse+News. Secondly, we build a indomain model using the 800K sentence pairs from \u00b5topia corpora (Ling et al., 2013) . We also add the NIST dataset to improve coverage. We call this system Inhouse+News+Weibo. To train these systems, we use the Moses phrase-based MT system with standard features (Koehn et al., 2003) . For reordering, we use the MSD reordering model (Axelrod et al., 2005) . As the language model, we train a 5-gram model with Kneser-ney smoothing using a 10M tweets from twitter. Finally, the weights were tuned using MERT (Och, 2003) . As for online systems, we consider the systems used to generate the paraphrase corpora in section 3, which we will denote as Online A, Online B and Online C 10", |
|
"cite_spans": [ |
|
{ |
|
"start": 507, |
|
"end": 526, |
|
"text": "(Ling et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 726, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 799, |
|
"text": "(Axelrod et al., 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 962, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The normalization and MT results are evaluated with BLEU-4 (Papineni et al., 2002) comparing the produced translations or normalizations with the appropriate reference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 82, |
|
"text": "BLEU-4 (Papineni et al., 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Results are shown in Table 6 . In terms of the normalizations, we observe a much better match between the normalized text with the reference, than the original tweets. In most cases, adding character-based models improves the quality of the normalizations. We observe that better normalizations tend to lead to better translations. The relative improvements are most significant, when moving from No Norm to norm+phrase normalization. This is because, we are normalizing words that are not seen in general MT system's training data, but occur frequently in microblog data, such as wanna to want to, u to you and im to i'm. The only exception is in the In-house+News+Weibo system, where the normalization deteriorates the results. This is to be expected, since this system is trained on the same microblog data used to learn the normalizations. However, we can observe on norm+phrase+char that if we add the character-based model, we can observe improvements for this system as well as for all other ones. This is because the model is actually learning normalizations that are unseen in the data. Some examples of these normalization include, normalizing lookin to looking, nutz to nuts and maimi to miami but also separating peaceof to peace of. The fact that these improvements are obtained for all systems is strong evidence that we are actually producing good normalizations, and not overfitting to one of the systems that we used to generate our data. The gains are much smaller from norm+phrase to norm+phrase+char, since the improvements we obtain come from normalizing less frequent words. Finally, we can obtain another small improvement by adding monolingual data to the character-based model in norm+phrase+char+mono.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Most of the work in microblog normalization is focused on finding the standard forms of lexical vari-ants (Yang and Eisenstein, 2013; Han et al., 2013; Han et al., 2012; Kaufmann, 2010; Han and Baldwin, 2011; Gouws et al., 2011; Aw et al., 2006) . A lexical variant is a variation of a standard word in a different lexical form. This ranges from minor or major spelling errors, such as jst, juxt and jus that are lexical variants of just, to abbreviations, such as tmi and wanna, which stand for too much information and want to, respectively. Jargon can also be treated as variants, for instance cday is a slang word for birthday, in some groups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 133, |
|
"text": "(Yang and Eisenstein, 2013;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 151, |
|
"text": "Han et al., 2013;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 169, |
|
"text": "Han et al., 2012;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 185, |
|
"text": "Kaufmann, 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 208, |
|
"text": "Han and Baldwin, 2011;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "Gouws et al., 2011;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 245, |
|
"text": "Aw et al., 2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "There are many rules that govern the process lexical variants are generated. Some variants are generated from orthographic errors, caused by some mistake from the user when writing. For instance, the variants representin, representting, or reprecenting can be generated by a spurious letter swap, insertion or substitution by the user. One way to normalize these types of errors is to attempt to insert, remove and swap words in a lexical variant until a word in a dictionary of standard words is found (Kaufmann, 2010) . Contextual features are another way to find lexical variants, since variants generally occur in the same context as their standard form. This includes orthographic errors, abbreviations and slang. However, this is generally not enough to detect lexical variants, as many words share similar contexts, such as already, recently and normally. Consequently, contextual features are generally used to generate a confusion set of possible normalizations of a lexical variant, and then more features are used to find the correct normalization (Han et al., 2012) . One simple approach is to compute the Levenshtein distance to find lexical similarities between words, which would effectively capture the mappings between representting, reprecenting and representin to representing. However, a pronunciation model (Tang et al., 2012) would be needed to find the mapping between g8, 2day and 4ever to great, today and forever, respectively. Moreover, visual character similarity features would be required to find the mapping between g00d and \u03b9 to good and i.", |
|
"cite_spans": [ |
|
{ |
|
"start": 503, |
|
"end": 519, |
|
"text": "(Kaufmann, 2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1059, |
|
"end": 1077, |
|
"text": "(Han et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1328, |
|
"end": 1347, |
|
"text": "(Tang et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
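For illustration, a standard dynamic-programming Levenshtein distance makes concrete which of the mappings above surface similarity alone can capture. The implementation is the classic textbook algorithm, not the cited systems' exact feature.

```python
def levenshtein(a, b):
    """Edit distance with insertions, deletions, and substitutions,
    computed row by row in O(len(a) * len(b)) time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# "representin" -> "representing" is one edit away, so edit distance
# alone captures this mapping.
d_close = levenshtein("representin", "representing")
# "g8" -> "great" is four edits away: too far for edit distance alone,
# which is why a pronunciation model is needed for such variants.
d_far = levenshtein("g8", "great")
```

The contrast between the two distances is exactly the gap the paper highlights: Levenshtein distance handles spelling variants, while phonetic abbreviations require other features.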
|
{ |
|
"text": "Clearly, learning this process is a challenging task, and addressing each different case individually would require vast amounts of resources. Furthermore, once we change the language to normalize to another language, the types of rules that generate lexical variants would radically change and a new set of features would have to be engineered. We believe that to be successful in normalizing microblogs, the process to learn new lexical variants should be learned from data, making as few assumptions as possible. We learn our models without using any type of predefined features, such as phonetic features or lexical features. In fact, we will not assume that most words and characters map to themselves, as it is assumed in methods using the Levenshtein distance (Kaufmann, 2010; Han et al., 2012; Wang and Ng, 2013) . All these mappings are learned from our data. Furthermore, in the work above, the dictionaries built using these methods assume that lexical variants are mapped to standard forms in a word-toword mapping. Thus, variants such as wanna, gonna and imma are not normalizable, since they are normalized to multiple words want to, going to and I am gonna. Moreover, there are segmentation errors that occur from missing spaces, such as sortof and goingfor, which also map to more than one word to sort of and going for. These cases shall also be addressed in our work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 767, |
|
"end": 783, |
|
"text": "(Kaufmann, 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 801, |
|
"text": "Han et al., 2012;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 820, |
|
"text": "Wang and Ng, 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Wang and Ng (2013) argue that microblog normalization is not simply to map lexical variants into standard forms, but that other tasks, such as punctuation correction and missing word recovery should be performed. Consider the example tweet you free?, while there are no lexical variants in this message, the authors consider that it is the normalizer should recover the missing article are and normalize this tweet to are you free?. To do this, the authors train a series of models to detect and correct specific errors. While effective for narrow domains, training models to address each specific type of normalization is not scalable over all types of normalizations that need to be performed within the language, and the fact that a set of new models must be implemented for another language limits the applicability of this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Another strong point of the work above is that a decoder is presented, while the work on building dictionaries only normalize out of vocabulary (OOV) words. The work on (Han et al., 2012) trains a classifier to decide whether to normalize a word or not, but is still preconditioned on the fact that the word in question is OOV. Thus, lexical variants, such as, 4 and u, with the standard forms for and you, are left untreated, since they occur in other contexts, such as u in u s a. Inspired by the work above, we also propose a decoder based on the existing off-the-self decoder Moses (Koehn et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 187, |
|
"text": "(Han et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 606, |
|
"text": "Moses (Koehn et al., 2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, the work in (Xu et al., 2013) obtains paraphrases from Twitter, by finding tweets that contain common entities, such as Obama, that occur during the same period by matching temporal expressions. The resulting paraphrase corpora can also be used to train a normalizer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 38, |
|
"text": "(Xu et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We introduced a data-driven approach to microblog normalization based on paraphrasing. We build a corpora of tweets and their normalizations using parallel corpora from microblogs using MT techniques. Then, we build two models that learn generalizations of the normalization process, one the phrase level and on the character level. Then, we build a decoder that combines both models during decoding. Improvements on multiple MT systems support the validity of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In future work, we shall attempt to build normalizations for other languages. We shall also attempt to learn an unsupervised normalization model with only monolingual data, similar to the work for MT in (Ravi and Knight, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 226, |
|
"text": "(Ravi and Knight, 2011)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://translate.google.com/ 3 http://www.bing.com/translator 4 http://fanyi.youdao.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We additionally assume that the translation engines are trained to output more standardized data, so there will be additional normalizing effect from the machine translation system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that this captures the context in which such transformations are likely to occur: there are not many words that contain the sequence ingfor, so the probability that these should be normalized by inserting a space is high. On the other hand, we cannot assume that if we observe the sequence gf, we can safely separate these with a space. This is because, there are many words that contain this sequence, such as the abbreviation of gf (girlfriend), dogfight, and bigfoot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Reordering helps find lexical variants that are generated by transposing characters, such as, mabye to maybe.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Brown clusters are organized such that more words with more similar distributions share common prefixes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Brown clustering algorithm groups words together based on contextual similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The names of the systems are hidden to not violate the privacy issues in the terms and conditions of these online systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The PhD thesis of Wang Ling is supported by FCT -Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, under project SFRH/BD/51157/2010. This work was supported by national funds through FCT -Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, under project PEst-OE/EEI/LA0021/2013.The authors also wish to express their gratitude to the anonymous reviewers for their comments and insight.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Edinburgh system description for the 2005 iwslt speech translation evaluation", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "References", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Aw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "597--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "References [Aw et al.2006] AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of the ACL, COLING-ACL '06, pages 33-40, Stroudsburg, PA, USA. Association for Computational Linguistics. [Axelrod et al.2005] Amittai Axelrod, Ra Birch Mayne, Chris Callison-burch, Miles Osborne, and David Tal- bot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In In Proc. Inter- national Workshop on Spoken Language Translation (IWSLT. [Bannard and Callison-Burch2005] Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilin- gual parallel corpora. In Proceedings of the 43rd An- nual Meeting of the Association for Computational Linguistics (ACL'05), pages 597-604, Ann Arbor, Michigan, June. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "85--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Bansal et al.2011] Mohit Bansal, Chris Quirk, and Robert C. Moore. 2011. Gappy phrasal alignment by agreement. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 1308-1317, Stroudsburg, PA, USA. Association for Computational Linguistics. [Brown et al.1992] Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural lan- guage. Computational linguistics, 18(4):467-479. [Brown et al.1993] Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mer- cer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Linguist., 19:263-311, June. [Denkowski and Lavie2011] Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Edinburgh, Scotland, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Generalizing word lattice translation", |
|
"authors": [], |
|
"year": 2008, |
|
"venue": "Proceedings of HLT-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "et al.2008] Chris Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice trans- lation. In Proceedings of HLT-ACL.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Discovering sociolinguistic associations with structured sparsity", |
|
"authors": [], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1365--1374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "et al.2013] Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of NAACL-HLT, pages 644-648. [Eisenstein et al.2011] Jacob Eisenstein, Noah A. Smith, and Eric P. Xing. 2011. Discovering sociolinguis- tic associations with structured sparsity. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies -Volume 1, HLT '11, pages 1365-1374, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "What to do about bad language on the internet", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--369", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of NAACL-HLT, pages 359-369.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Unsupervised mining of lexical variants from noisy text", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the First Workshop on Unsupervised Learning in NLP, EMNLP '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Gouws et al.2011] Stephan Gouws, Dirk Hovy, and Don- ald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First Workshop on Unsupervised Learning in NLP, EMNLP '11, pages 82-90, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Lexical normalisation of short text messages: makn sens a #twitter", |
|
"authors": [ |
|
{ |
|
"first": "Baldwin2011] Bo", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "368--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "and Baldwin2011] Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: makn sens a #twitter. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 368-378, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatically constructing a normalisation dictionary for microblogs", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "421--432", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "et al.2012] Bo Han, Paul Cook, and Timothy Bald- win. 2012. Automatically constructing a normalisa- tion dictionary for microblogs. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 421- 432, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Lexical normalization for social media text", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "ACM Transactions on Intelligent Systems and Technology", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "et al.2013] Bo Han, Paul Cook, and Timothy Bald- win. 2013. Lexical normalization for social media text. ACM Transactions on Intelligent Systems and Technology (TIST), 4(1):5.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Take two aspirin and tweet me in the morning: how twitter, facebook, and other social media are reshaping health care", |
|
"authors": [ |
|
{ |
|
"first": "Carleen", |
|
"middle": [], |
|
"last": "Hawn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "361--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carleen Hawn. 2009. Take two aspirin and tweet me in the morning: how twitter, facebook, and other social media are reshaping health care. Health affairs, 28(2):361-368.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Kaufmann. 2010. Syntactic Normalization of Twitter Messages. studies, 2. [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL '03, pages 48-54, Morristown, NJ, USA. Association for Computational Linguistics. [Koehn et al.2005] Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, David Talbot, and Michael White. 2005. Edinburgh system description for the 2005 nist mt evaluation. In Proceedings of Machine Translation Evaluation Workshop 2005. [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-burch, Richard Zens, Rwth", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aachen, Alexandra Constantin, Marcello Federico, Nicola Bertoldi, Chris Dyer, Brooke Cowan, Wade Shen, Christine Moran, and Ondrej Bojar. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Core patterns of lexical use in a comparable corpus of English narrative prose", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Laviosa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Meta", |
|
"volume": "43", |
|
"issue": "4", |
|
"pages": "557--570", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Laviosa. 1998. Core patterns of lexical use in a comparable corpus of English narrative prose. Meta, 43(4):557-570.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Towards a general and extensible phrase-extraction algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IWSLT '10: International Workshop on Spoken Language Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Ling et al.2010] Wang Ling, Tiago Lu\u00eds, Jo\u00e3o Gra\u00e7a, Lu\u00edsa Coheur, and Isabel Trancoso. 2010. Towards a general and extensible phrase-extraction algorithm. In IWSLT '10: International Workshop on Spoken Language Translation, pages 313-320, Paris, France. [Ling et al.2013] Wang Ling, Guang Xiang, Chris Dyer, Alan Black, and Isabel Trancoso. 2013. Microblogs as parallel corpora. In Proceedings of the 51st Annual Meeting on Association for Computational Linguistics, ACL '13. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Och and Ney2003] Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The alignment template approach to statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Comput. Linguist", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "417--449", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Och and Ney2004] Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Comput. Linguist., 30(4):417-449, December.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, ACL '03", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL '03, pages 160-167, Stroudsburg, PA, USA. Association for Computational Linguistics. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Deciphering foreign language", |
|
"authors": [ |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Ravi and Knight2011] Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In ACL, pages 12-21.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The web as a parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "349--380", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Resnik and Smith2003] Philip Resnik and Noah A Smith. 2003. The web as a parallel corpus. Com- putational Linguistics, 29(3):349-380.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Discriminative pronunciation modeling: A large-margin, feature-rich approach", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "194--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Tang et al.2012] Hao Tang, Joseph Keshet, and Karen Livescu. 2012. Discriminative pronunciation modeling: A large-margin, feature-rich approach. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 194-203. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "HMM-based word alignment in statistical translation", |
|
"authors": [ |
|
{ |
|
"first": "S.", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th Conference on Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "836--841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Vogel et al.1996] S. Vogel, H. Ney, and C. Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics - Volume 2, pages 836-841. Association for Computational Linguistics. [Volansky et al.2013] Vered Volansky, Noam Ordan, and Shuly Wintner. 2013. On the features of translationese. Literary and Linguistic Computing. [Wang and Ng2013] Pidong Wang and Hwee Ng. 2013. A beam-search decoder for normalization of social media text with application to machine translation. In Proceedings of NAACL-HLT 2013, NAACL '13. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Gathering and generating paraphrases from twitter with application to normalization", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "Proceedings of the Sixth Workshop on Building and Using Comparable Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Xu et al.2013] Wei Xu, Alan Ritter, and Ralph Grishman. 2013. Gathering and generating paraphrases from twitter with application to normalization. In Proceedings of the Sixth Workshop on Building and Using Comparable Corpora, pages 121-128, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A log-linear model for unsupervised text normalization", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proc. of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Yang and Eisenstein2013] Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In Proc. of EMNLP.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Variant-normalized alignment with the variant form above and the normalized form below; solid lines show potential normalizations, while dashed lines represent identical translations.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Example output lattice of the character-based decoder, for the sentence I wanna meeeeet DanielVeuleman.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Example DAGs, built from the cluster containing the words never and gladly.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Translations of an English microblog message into Mandarin, using three web translation services.", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Xiangna efforts</td></tr><tr><td>MT 2 DanielVeuleman said, Yes, I know, I'm that hard</td></tr><tr><td>MT 3 Said to DanielVeuleman, yes, I know, I'm to</td></tr><tr><td>that effort</td></tr></table>", |
|
"text": "Translations of Chinese original post to English using web-based service. orig. To DanielVeuleman yea iknw imma work on that orig. \u5bf9DanielVeuleman\u8bf4\uff0c\u662f\u7684\uff0c\u6211\u77e5\u9053\uff0c \u6211\u6b63\u5728\u5411\u90a3\u65b9\u9762\u52aa\u529b MT 1 Right DanielVeuleman say, yes, I know, I'm", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Lang. Pair Source Segs.</td><td>MT Engines</td></tr><tr><td>ZH-EN</td><td colspan=\"2\">Weibo 800K Google, Bing, Youdao</td></tr><tr><td>ZH-EN</td><td colspan=\"2\">Twitter 113K Google, Bing, Youdao</td></tr><tr><td>AR-EN</td><td>Twitter 114K</td><td>Google, Bing</td></tr><tr><td>RU-EN</td><td>Twitter 119K</td><td>Google, Bing</td></tr><tr><td>KO-EN</td><td>Twitter 78K</td><td>Google, Bing</td></tr><tr><td>JA-EN</td><td>Twitter 75K</td><td>Google, Bing</td></tr></table>", |
|
"text": "Corpora Used for Paraphrasing.", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">Original (o) Normalization (n) f (n | o)</td></tr><tr><td>wanna</td><td>want to</td><td>0.4679</td></tr><tr><td>wanna</td><td>will</td><td>0.0274</td></tr><tr><td>wanna</td><td>going to</td><td>0.0114</td></tr><tr><td>4</td><td>4</td><td>0.5641</td></tr><tr><td>4</td><td>for</td><td>0.01795</td></tr><tr><td>go 4</td><td>go for</td><td>1.0000</td></tr></table>", |
|
"text": "Fragment of the phrase normalization model. For each original phrase o, we present the top-3 normalized forms ranked by f (n | o).", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Original (o) Normalization (n)</td><td>f (n | o)</td></tr><tr><td>o o o</td><td>o o</td><td>0.0223</td></tr><tr><td>o o o</td><td>o</td><td>0.0439</td></tr><tr><td>s</td><td>c</td><td>0.0331</td></tr><tr><td>z</td><td>s</td><td>0.0741</td></tr><tr><td>s h</td><td>c h</td><td>0.019</td></tr><tr><td>2</td><td>t o</td><td>0.014</td></tr><tr><td>4</td><td>f o r</td><td>0.0013</td></tr><tr><td>0</td><td>o</td><td>0.0657</td></tr><tr><td>i n g f o r</td><td colspan=\"2\">i n g <space> f o r 0.4545</td></tr><tr><td>g f</td><td>g <space> f</td><td>0.01028</td></tr></table>", |
|
"text": "Fragment of the character normalization model, showing examples representative of the lexical variant generation process.", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>Moses</td><td>Moses</td><td/><td/><td/></tr><tr><td>Condition</td><td colspan=\"6\">Norm (News) (News+Weibo) Online A Online B Online C</td></tr><tr><td>baseline</td><td>19.90</td><td>15.10</td><td>24.37</td><td>20.09</td><td>17.89</td><td>18.79</td></tr><tr><td>norm+phrase</td><td>21.96</td><td>15.69</td><td>24.29</td><td>20.50</td><td>18.13</td><td>18.93</td></tr><tr><td>norm+phrase+char</td><td>22.39</td><td>15.87</td><td>24.40</td><td>20.61</td><td>18.22</td><td>19.08</td></tr><tr><td colspan=\"2\">norm+phrase+char+mono 22.91</td><td>15.94</td><td>24.46</td><td>20.78</td><td>18.37</td><td>19.21</td></tr></table>", |
|
"text": "Normalization and MT Results. Rows denote different normalizations, and columns different translation systems, except the first column (Norm), which denotes the normalization experiment. Cells display the BLEU score of that experiment.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |