{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:54.244546Z"
},
"title": "Noisy Text Data: Achilles' Heel of BERT",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Makhija",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Anuj",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Owing to the phenomenal success of BERT on various NLP tasks and benchmark datasets, industry practitioners are actively experimenting with fine-tuning BERT to build NLP applications for solving industry use cases. For most datasets that are used by practitioners to build industrial NLP applications, it is hard to guarantee absence of any noise in the data. While BERT has performed exceedingly well for transferring the learnings from one use case to another, it remains unclear how BERT performs when fine-tuned on noisy text. In this work, we explore the sensitivity of BERT to noise in the data. We work with most commonly occurring noise (spelling mistakes, typos) and show that this results in significant degradation in the performance of BERT. We present experimental results to show that BERT's performance on fundamental NLP tasks like sentiment analysis and textual similarity drops significantly in the presence of (simulated) noise on benchmark datasets viz. IMDB Movie Review, STS-B, SST-2. Further, we identify shortcomings in the existing BERT pipeline that are responsible for this drop in performance. Our findings suggest that practitioners need to be vary of presence of noise in their datasets while fine-tuning BERT to solve industry use cases.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Owing to the phenomenal success of BERT on various NLP tasks and benchmark datasets, industry practitioners are actively experimenting with fine-tuning BERT to build NLP applications for solving industry use cases. For most datasets that are used by practitioners to build industrial NLP applications, it is hard to guarantee absence of any noise in the data. While BERT has performed exceedingly well for transferring the learnings from one use case to another, it remains unclear how BERT performs when fine-tuned on noisy text. In this work, we explore the sensitivity of BERT to noise in the data. We work with most commonly occurring noise (spelling mistakes, typos) and show that this results in significant degradation in the performance of BERT. We present experimental results to show that BERT's performance on fundamental NLP tasks like sentiment analysis and textual similarity drops significantly in the presence of (simulated) noise on benchmark datasets viz. IMDB Movie Review, STS-B, SST-2. Further, we identify shortcomings in the existing BERT pipeline that are responsible for this drop in performance. Our findings suggest that practitioners need to be vary of presence of noise in their datasets while fine-tuning BERT to solve industry use cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-trained contextualized language models such as BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) , which is the focus of this work, has led to improvement in performance on many Natural Language Processing (NLP) tasks. Without listing down all the tasks, BERT has improved the state-of-the-art for a number of tasks including tasks such as summarization, Name-Entity Recognition, Question Answering and Machine Translation (Devlin et al., 2018 ).",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 462,
"end": 482,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Buoyed by this success, machine learning teams in industry are actively experimenting with finetuning BERT on their data to solve various industry use cases. These include use cases such as chatbots, sentiment analysis systems, automatically routing and prioritizing customer support tickets, NER systems, machine translation systems to name a few. Many of these use cases require practitioners to build training and test datasets by collecting text data from data sources & applications such as chats, emails, discussions from user forums, social media conversations, output of machine translation systems, automatically transcribing text from speech data, automatically recognized text from printed or handwritten material, etc. Owing to these sources & applications, the text data is known to be noisy. In some sources such discussions from user forums & social media conversations, the noise in the data can be significantly high. It is not very clear how the pre-trained BERT performs when fine-tuned with noisy text data and if the performance degrades then why so. These two questions are the focus of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though the datasets used in industry are varied, many of them have a common characteristicnoisy text. This includes spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages (a.k.a code mix) to name a few. This makes cleaning and preprocessing text data a key component of NLP pipeline for industrial applications. However, despite extensive cleaning and preprocessing, some degree of noise often remains. Owing to this residual noise, a common issue that NLP models have to deal with is out of vocabulary (OOV) words. These are words that are found in test and production data but are not part of train-ing data. In this work we find that BERT fails to properly handle OOV words (due to noise). We show that this negatively impacts the performance of BERT on fundamental tasks in NLP when finetuned over noisy text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is motivated from the business use case where we are building a dialogue system over WhatsApp to screen candidates for blue collar jobs. Our candidate user base often comes from underprivileged backgrounds, many of them are not even college graduates. This coupled with fat finger problem 1 over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to our dialogue system. Hence, for the purpose of this work, we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable to other use cases that deal with noisy text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We now present some of the relevant work viz the following related areas viz. (1) robustness of BERT, (2) degradation in performance of NLP models due to noise in text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Robustness of BERT: There has been some work on testing the robustness of BERT in different scenarios. Jin et al. (2019) introduce TEXTFOOLER, a system to generate adversarial text and apply it to text classification and textual entailment to successfully attack the pre-trained BERT among other models. Aspillaga et al. (2020) evaluate robustness of three models -RoBERTa, XLNet, and BERT in Natural Language Inference (NLI) and Question Answering (QA) tasks. They show that while RoBERTa, XLNet and BERT are more robust than Recurrent Neural Network (RNN) models to stress tests on tasks such as NLI and QA, these models are still very fragile and show many unexpected behaviors. Pal and Tople (2020) present novel attack techniques that utilize the unintended features learnt in the teacher (public) model to generate adversarial examples for student (downstream) models. They show that using length-based and sentencebased misclassification attacks for the Fake News Detection task trained using a context-aware BERT model, one gets misclassification accuracy of 78% and 39% respectively for the adversarial examples. Sun et al. (2020) show the BERT under-performs on sentiment analysis and question answering in presence of typos and spelling mistakes. While our work has an overlap with their work, our work is independent (and parallel in terms of timeline) to their work.",
"cite_spans": [
{
"start": 103,
"end": 120,
"text": "Jin et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 304,
"end": 327,
"text": "Aspillaga et al. (2020)",
"ref_id": "BIBREF1"
},
{
"start": 1122,
"end": 1139,
"text": "Sun et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We not only experimented with more datasets, we also pin down the exact reason for degradation in BERT's performance. We demonstrate our findings viz-a-viz two most fundamental NLP tasks -sentence classification (sentiment analysis) and textual similarity. For these, we chose the most popular benchmark datasets -for sentiment analysis we work with SST-2 and IMDB datasets and for textual similarity we use STS-B dataset. Further, Sun et al. (2020) show that mistakes/typos in the most informative words cause maximum damage. In contrast, our work shows stronger resultsmistakes/typos in words chosen at random is good enough to cause substantial drop in BERT's performance. We discuss the reason for performance degradation. Last but not the least, we tried various tokenizers during the fine-tuning phase to see if there is a simple fix for the problem.",
"cite_spans": [
{
"start": 432,
"end": 449,
"text": "Sun et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Degradation in performance of NLP models due to Noise: There has been a lot of work around understanding the effect of noise on the performance of NLP models. Taghva et al. (2000) evaluate the effect of OCR errors on text categorization. Wu et al. (2016) introduced ISSAC, a system to clean dirty text from online sources. Agarwal et al. (2007) studied the effect of different kinds of noise on automatic text classification. Subramaniam et al. (2009) presented a survey of types of text noise and techniques to handle noisy text. Newer communication mediums such as SMS, chats, twitter, messaging apps encourage brevity and informalism, leading to non-canonical text. This presents significant challenges to the known NLP techniques. Belinkov and Bisk (2017) show that character based neural machine translation (NMT) models are also prone to synthetic and natural noise even though these model do better job to handle out-of-vocabulary issues and learn better morphological representation. Ribeiro et al. (2018) develop a technique, called semantically equivalent adversarial rules (SEARs) to debug NLP models. SEAR generate adversial examples to penetrate NLP models. Author experimented this techniques for three domains: machine comprehension, visual question answering, and sentiment analysis.",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "Taghva et al. (2000)",
"ref_id": "BIBREF15"
},
{
"start": 238,
"end": 254,
"text": "Wu et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 323,
"end": 344,
"text": "Agarwal et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 426,
"end": 451,
"text": "Subramaniam et al. (2009)",
"ref_id": "BIBREF13"
},
{
"start": 735,
"end": 759,
"text": "Belinkov and Bisk (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "There exists a vast literature that tries to understand the sensitivity of NLP models to noise and develop techniques to tackle these challenges. It is beyond the scope of this paper to give a comprehensive list of papers on this topic. One can look at the work published in conferences such as 'Workshop on Noisy User-generated Text, ACL', 'Workshop on Analytics for Noisy Unstructured Text Data, IJCAI-2007' that have dedicated tracks on these issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We evaluate the state-of-the-art model BERT 2 on two fundamental NLP tasks: sentiment analysis and textual similarity. For sentiment analysis we use popular datasets of IMDB movie reviews (Maas et al., 2011) and Stanford Sentiment Treebank (SST-2) (Socher et al., 2013) ; for textual similarity we use Semantic Textual Similarity (STS-B) (Cer et al., 2017) . Both STS-B and SST-2 datasets are a part of GLUE benchmark (Wang et al., 2018) tasks. On these benchmark datasets we report the system's performance both -with and without noise.",
"cite_spans": [
{
"start": 188,
"end": 207,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 248,
"end": 269,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 338,
"end": 356,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 418,
"end": 437,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "As mentioned in Section 1 of the paper, we focus on the noise introduced by spelling mistakes and typos. All the benchmark datasets we work with consists of examples X \u2192 Y where X are the text inputs and Y are the corresponding labels. We call the original dataset as D 0 . From D 0 we create new datasets D 2.5 , D 5 , D 7.5 , D 10 , D 12.5 , D 15 , D 17.5 , D 20 and D 22.5 . Here, D k is a variant of D 0 with k% noise in each datapoint in D 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise",
"sec_num": "3.1"
},
{
"text": "To create D k , we take i th data point x i \u2208 D k , and introduce noise in it. We represent the modified datapoint by x i,k noise . Then, D k is simply the collection (x i,k noise , y i ), \u2200i. To create x i,k noise from x i , we randomly choose k% characters from the text of x i and replace them with nearby characters in a qwerty keyboard. For example, if character d is chosen, then it is replaced by a character randomly chosen from e, s, x, c, f, or r. This is because in a qwerty keyboard, these keys surround the key d. We inject noise in the complete dataset. Later we split D i into train and test chunks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise",
"sec_num": "3.1"
},
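As an aside, the following is a minimal sketch of the noise-injection procedure described above; the qwerty neighbour map is abridged and the function name is ours, not the authors' code:

```python
import random

# Abridged qwerty adjacency map; a full map would cover every key on the keyboard.
QWERTY_NEIGHBOURS = {
    "d": "esxcfr", "a": "qwsz", "n": "bhjm", "e": "wsdr",
    "r": "edft", "t": "rfgy", "i": "ujko", "o": "iklp",
}

def add_noise(text, k, seed=0):
    """Replace roughly k% of the characters in `text` with a random qwerty neighbour."""
    rng = random.Random(seed)
    chars = list(text)
    # Only positions whose character has a neighbour in the abridged map are candidates.
    candidates = [i for i, c in enumerate(chars) if c.lower() in QWERTY_NEIGHBOURS]
    n_noisy = round(len(chars) * k / 100)
    for i in rng.sample(candidates, min(n_noisy, len(candidates))):
        chars[i] = rng.choice(QWERTY_NEIGHBOURS[chars[i].lower()])
    return "".join(chars)

# Example: a 5%-noise variant of a sentence, analogous to a datapoint in D_5.
print(add_noise("that loves its characters and communicates something rather beautiful", 5))
```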
{
"text": "We believe a systematic study should be done to understand how the performance of SOTA models is impacted when fine-tuned on noisy text data. To motivate the community for this, we suggest a simple framework for the study. The framework uses four variables -SOTA model, task, dataset, and the 2 BERTBase uncased model degree of noise in the data. For such a study it is imperative to have a scalable way to create variants of a dataset that differ in the degree of noise in them. The method for creating noisy datasets as described in the previous paragraph does exactly this. Creating datasets at scale with varying degree of natural noise is very human intensive task. Despite our method introducing synthetic noise, owing to mobile penetration across the globe, and fat finger problem, our noise model is very realistic. Also, unlike Sun et al. 2020, we introduce noise randomly rather than targeting the most informative words. This helps us model the average case setting rather than the worst case. For these reasons we stick to synthetic noise introduced randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise",
"sec_num": "3.1"
},
{
"text": "For sentiment analysis we use IMDB movie reviews (Maas et al., 2011) and Stanford Sentiment Treebank (SST-2 ) (Socher et al., 2013) datasets in binary prediction settings. IMDB datasets consist of 25000 training and 25000 test sentences. We represent the original IMDB dataset (one with no noise) as IMDB 0 . Using the process of introducing noise (as described in section 3.1), we create 9 variants of IMDB 0 namely IMDB 2.5 , . . . , IMDB 22.5 with varying degrees of noise.",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 110,
"end": 131,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.2"
},
{
"text": "SST-2 dataset consists of 67349 training and 872 test sentences. Here too we we add noise as described in Section 3.1 to create 9 variants of SST-2 0 -SST-2 2.5 , . . . , SST-2 22.5 .To measure the performance of the model for sentiment analysis task we use F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.2"
},
{
"text": "For textual similarity task, we use Semantic Textual Similarity (STS-B) (Cer et al., 2017) dataset. The dataset consists of 5749 training and 1500 test data points. Each data point consists of 2 sentences and a score between 0-5 representing the similarity between the two sentences. We represent the original data set by STS-B 0 and create 9 noisy variants like we mentioned in section 3.1 Here, we use Pearson-Spearman correlation to measure model's performance. of BERT. Further, as we gradually increase the noise, the performance keeps going down. For sentiment analysis, by the time 15-17% of characters are replaced, the performance drops to almost the chance-level accuracy (i.e., around 50%). This decline is much more rapid for sentence similarity.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Similarity",
"sec_num": "3.3"
},
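For illustration, the Pearson-Spearman correlation can be computed with SciPy as below; reporting the average of the two coefficients follows the GLUE convention for STS-B and is our assumption, since the paper does not spell out the exact combination:

```python
from scipy.stats import pearsonr, spearmanr

def pearson_spearman(preds, golds):
    """Return Pearson r, Spearman rho, and their average for STS-style similarity scores."""
    pearson, _ = pearsonr(preds, golds)
    spearman, _ = spearmanr(preds, golds)
    return pearson, spearman, (pearson + spearman) / 2

# Toy example: predicted similarity scores vs. gold 0-5 annotations.
print(pearson_spearman([0.1, 2.3, 4.8, 3.0], [0.0, 2.0, 5.0, 3.5]))
```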
{
"text": "To understand the reason behind the drop in BERT's performance in presence of noise, we need to understand how BERT processes input text data. A key component of BERT's pipeline is tokenization of input text. It first performs whitespace tokenization followed by WordPiece tokenization (Wu et al., 2016) . While whitespace tokenizer breaks the input text into tokens around the whitespace boundary, the wordPiece tokenizer uses longest prefix match to further break the tokens 3 . The resultant tokens are then fed as input to the BERT model. When it comes to tokenizing the noisy text data, we see a very interesting behaviour from BERT's pipeline. First whitespace tokenization is applied. Now, when the WordPiece tokenizer encounters these words, owing to the spelling mistakes, these words are not directly found in BERT's dictionary. So, WordPiece tokenizer tries to tokenize these (noisy) words into subwords. However, it ends up breaking the words into subwords whose meaning can be very different from the meaning of the original word. This can change the meaning of the sentence completely, therefore leading to substan- To understand this better let us look at two examples -one each from the IMDB and STS-B datasets respectively, as shown in Example 1 and Example 2. In each Example, (a) is the sentence as it appears in IMDB 0 (i.e. original dataset) while (b) is the corresponding sentence after adding 5% noise (IMDB 5 ). For legibility the misspelled characters are highlighted with italics. The sentences are followed by their corresponding output after applying whitespace and WordPiece tokenizer on them. In the output, ## represents subwords.",
"cite_spans": [
{
"start": 286,
"end": 303,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
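This tokenization behaviour is easy to reproduce with the Hugging Face tokenizer for bert-base-uncased; the sketch below assumes that checkpoint matches the vocabulary used by the authors, and the exact subword splits may differ slightly across tokenizer versions:

```python
from transformers import BertTokenizer

# WordPiece tokenizer with the BERT-Base uncased vocabulary (~30k entries).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

clean = "that loves its characters and communicates something rather beautiful about human nature."
noisy = "that loves 8ts characters abd communicates something rathee beautiful about human natuee."

# The clean sentence splits into mostly whole words; in the noisy one, misspelled
# words fall back to longest-prefix subwords (e.g. natuee -> nat, ##ue, ##e).
print(tokenizer.tokenize(clean))
print(tokenizer.tokenize(noisy))
```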
{
"text": "Example 1 (from IMDB): (a) that loves its characters and communicates something rather beautiful about human nature. (0% error) (b) that loves 8ts characters abd communicates something rathee beautiful about human natuee. (5% error) Corresponding output of tokenization: (a) 'that ', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human', 'nature' (b) 'that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate', '##s', 'something', 'rat', '##hee', 'beautiful', 'about', 'human', 'nat', '##ue', '##e' Example 2 (from STS-2): (a) poor ben bratt could n't find stardom if mapquest emailed him point-to-point driving directions. (0% error) (b) poor ben bratt could n't find stardom if ', 'ben', 'brat', '##t', 'could', 'n', '\", 't', 'find', 'star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'driving', 'directions', '.' (b) 'poor', 'ben', 'brat', '##t', 'could', 'n', '\", 't', 'find', 'star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib', '##g', 'dir', '##sc', '##ti', '##oge', '.' In Example 1(a), BERT's tokenization splits communicates into communicate and ##s based on longest prefix matching because there is no exact match for communicates in pre-trained BERT's vocabulary. This results in two tokens communicate and s, both of which are present in BERT's vocabulary. We have contextual embeddings for both communicate and ##s. By using these two embeddings, one can get an approximate embedding for communicates.",
"cite_spans": [
{
"start": 281,
"end": 563,
"text": "', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human', 'nature' (b) 'that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate', '##s', 'something', 'rat', '##hee', 'beautiful', 'about', 'human', 'nat', '##ue', '##e'",
"ref_id": null
},
{
"start": 745,
"end": 1157,
"text": "', 'ben', 'brat', '##t', 'could', 'n', '\", 't', 'find', 'star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'driving', 'directions', '.' (b) 'poor', 'ben', 'brat', '##t', 'could', 'n', '\", 't', 'find', 'star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib', '##g', 'dir', '##sc', '##ti', '##oge', '.'",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "However, this approach goes for a complete toss when the word gets misspelled. In example 1(b), the word natuee('nature' is misspelled) is split into tokens nat, ##ue, ##e (based on the longest prefix match). By combining the embeddings for these three tokens, one cannot approximate the embedding of 'nature'. This is because the word nat has a very different meaning (it means 'a person who advocates political independence for a particular country'). This misrepresentation in turn impacts the performance of downstream sub-components of BERT bringing down the overall performance of BERT model. This is why as we introduce more errors, the quality of output of the tokenizer degrades further, resulting in the overall drop in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "We further experimented with different tokenizers other than WordPiece tokenizer. For this we used Character N-gram tokenizer (Mcnamee and Mayfield, 2004) and stanfordNLP whitespace tokenizer (Manning et al., 2014) . For Character Ngram tokenizer, we work with N=6 4 . The results of these experiments on STS-B dataset are given in Table 2 . It is clear that replacing WordPiece by whitespace or N-gram further degrades the performance. The reasons are as follows:",
"cite_spans": [
{
"start": 126,
"end": 154,
"text": "(Mcnamee and Mayfield, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 192,
"end": 214,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "(1) Replace WordPiece by whitespace: In this case every misspelled word (say, natuee) is directly fed to BERT model. Since, these words are not present in BERT's vocabulary, they are treated as UNK 5 token. In presence of even 5-10% noise, there is a significant drop in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "(2) Replace WordPiece by Character N-gram tokenizer: Here, every misspelled word is broken into character n-grams of length atmost 6. It is high unlikely to find these subwords in BERT's vocabulary. Hence, they get treated as UNK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
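A small sketch of why both substitutions hurt: split a misspelled word by whitespace or into character 6-grams and check how many of the resulting tokens exist in BERT's WordPiece vocabulary. The helper below is ours, and reading 'character n-grams of length at most 6' as overlapping 6-character windows is an assumption:

```python
from transformers import BertTokenizer

# Mapping from WordPiece token to vocabulary id for BERT-Base uncased.
vocab = BertTokenizer.from_pretrained("bert-base-uncased").vocab

def char_ngrams(word, n=6):
    """Overlapping character windows of length n (or the whole word if it is shorter)."""
    return [word[i:i + n] for i in range(max(len(word) - n + 1, 1))]

word = "dirsctioje"  # the misspelling of 'directions' from Example 2(b)

# Whitespace tokenization feeds the whole misspelled word, which is out of vocabulary -> [UNK].
print(word in vocab)
# The character 6-grams of the misspelled word are also unlikely to be in the vocabulary.
print({gram: gram in vocab for gram in char_ngrams(word)})
```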
{
"text": "Please note that in our experimental setup, we are not training BERT from scratch. Instead, we simply replaced the existing WordPiece tokenizer with other tokenizers while feeding tokens to BERT's embedding layer during the fine-tuning and testing phases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "We studied the effect of synthetic noise (spelling mistakes) in text data on the performance of BERT. We demonstrated that as the noise increases, BERT's performance drops drastically. We further show that the reason for the performance drop is how BERT's tokenizer (WordPiece) handles the misspelled words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Our results suggest that one needs to conduct a large number of experiments to see if the findings hold across other datasets and popular NLP tasks such as information extraction, text summarization, machine translation, question answering, etc. It will also be interesting to see how BERT performs in presence of other types of noise. One also needs to investigate how other models such as ELMo, RoBERTa, and XLNet which use character-based, byte-level BPE, and SentencePiece tokenizers respectively. It also remains to be seen if the results will hold if the noise was restricted to only frequent misspellings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "To address the problem of drop in performance, there are 2 ways -(i) preprocess the data to correct spelling mistakes in the dataset before fine-tuning BERT on it (ii) make changes in BERT's architecture to make it robust to noise. From a practitioner's perspective, the problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "https://en.wikipedia.org/wiki/Fat-finger error",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "longer N-grams such as 6-grams are recommended for capturing semantic information(Bojanowski et al., 2016) 5 Unknown tokens",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How much noise is too much: A study in automatic text classification",
"authors": [
{
"first": "Sumeet",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Shantanu",
"middle": [],
"last": "Godbole",
"suffix": ""
},
{
"first": "Diwakar",
"middle": [],
"last": "Punjani",
"suffix": ""
},
{
"first": "Shourya",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2007,
"venue": "Seventh IEEE International Conference on Data Mining (ICDM 2007)",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumeet Agarwal, Shantanu Godbole, Diwakar Pun- jani, and Shourya Roy. 2007. How much noise is too much: A study in automatic text classification. In Seventh IEEE International Conference on Data Mining (ICDM 2007), pages 3-12. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stress test evaluation of transformerbased models in natural language understanding tasks",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Aspillaga",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Carvallo",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Araujo",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.06261"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Aspillaga, Andr\u00e9s Carvallo, and Vladimir Araujo. 2020. Stress test evaluation of transformer- based models in natural language understanding tasks. arXiv preprint arXiv:2002.06261.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02173"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. arXiv preprint arXiv:1711.02173.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.00055"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Is bert really robust. A Strong Baseline for Natural Language Attack on Text Classification and Entailment",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhijing",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Joey",
"middle": [
"Tianyi"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust. A Strong Base- line for Natural Language Attack on Text Classifica- tion and Entailment.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Maas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daly",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies",
"volume": "1",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the as- sociation for computational linguistics: Human lan- guage technologies-volume 1, pages 142-150. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Bauer",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Character n-gram tokenization for european language text retrieval",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2004,
"venue": "Information retrieval",
"volume": "7",
"issue": "1-2",
"pages": "73--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Mcnamee and James Mayfield. 2004. Character n-gram tokenization for european language text re- trieval. Information retrieval, 7(1-2):73-97.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "To transfer or not to transfer: Misclassification attacks against transfer learned text classifiers",
"authors": [
{
"first": "Bijeeta",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Tople",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.02438"
]
},
"num": null,
"urls": [],
"raw_text": "Bijeeta Pal and Shruti Tople. 2020. To transfer or not to transfer: Misclassification attacks against transfer learned text classifiers. arXiv preprint arXiv:2001.02438.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantically equivalent adversarial rules for debugging nlp models",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "856--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey of types of text noise and techniques to handle noisy text",
"authors": [
{
"first": "L",
"middle": [
"Venkata"
],
"last": "Subramaniam",
"suffix": ""
},
{
"first": "Shourya",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tanveer",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Faruquie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Negi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of The Third Workshop on Analytics for Noisy Unstructured Text Data, AND '09",
"volume": "",
"issue": "",
"pages": "115--122",
"other_ids": {
"DOI": [
"10.1145/1568296.1568315"
]
},
"num": null,
"urls": [],
"raw_text": "L. Venkata Subramaniam, Shourya Roy, Tanveer A. Faruquie, and Sumit Negi. 2009. A survey of types of text noise and techniques to handle noisy text. In Proceedings of The Third Workshop on Analyt- ics for Noisy Unstructured Text Data, AND '09, page 115-122, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adv-bert: Bert is not robust on misspellings! generating nature adversarial samples on bert",
"authors": [
{
"first": "Lichao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Jiugang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jiugang Li, Philip S. Yu, and Caiming Xiong. 2020. Adv-bert: Bert is not robust on mis- spellings! generating nature adversarial samples on bert. ArXiv, abs/2003.04985.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Evaluating text categorization in the presence of ocr errors",
"authors": [
{
"first": "Kazem",
"middle": [],
"last": "Taghva",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Nartker",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Borsack",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Lumos",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Condit",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2000,
"venue": "Document Recognition and Retrieval VIII",
"volume": "4307",
"issue": "",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazem Taghva, Thomas A Nartker, Julie Borsack, Steven Lumos, Allen Condit, and Ron Young. 2000. Evaluating text categorization in the presence of ocr errors. In Document Recognition and Retrieval VIII, volume 4307, pages 68-74. International Society for Optics and Photonics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "3 https://github.com/googleresearch/bert/blob/master/tokenization.py Accuracy vs Error tial dip in the performance.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Sentiment Analysis Textual Similarity</td></tr><tr><td colspan=\"2\">% error IMDB</td><td>SST-2</td><td>STS-B</td></tr><tr><td>0.0</td><td>0.93</td><td>0.89</td><td>0.89</td></tr><tr><td>2.5</td><td>0.85</td><td>0.86</td><td>0.84</td></tr><tr><td>5.0</td><td>0.79</td><td>0.80</td><td>0.75</td></tr><tr><td>7.5</td><td>0.67</td><td>0.76</td><td>0.65</td></tr><tr><td>10.0</td><td>0.62</td><td>0.70</td><td>0.65</td></tr><tr><td>12.5</td><td>0.53</td><td>0.67</td><td>0.49</td></tr><tr><td>15.0</td><td>0.51</td><td>0.60</td><td>0.40</td></tr><tr><td>17.5</td><td>0.46</td><td>0.59</td><td>0.39</td></tr><tr><td>20.0</td><td>0.44</td><td>0.54</td><td>0.29</td></tr><tr><td>22.5</td><td>0.41</td><td>0.49</td><td>0.31</td></tr></table>",
"html": null,
"text": "and figure 1 lists the performance of BERT on various variants (noiseless and noisy) of IMDB and STS-2 for sentiment analysis and SST-B for sentence similarity. From the numbers it is very clear that noise adversely affects the performance",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Results of experiments on both clean and noisy data.",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>: Comparative results on STS-B dataset with</td></tr><tr><td>different tokenizers</td></tr><tr><td>mapquest emailed him point-to-point drivibg</td></tr><tr><td>dirsctioje. (5% error)</td></tr><tr><td>Output of tokenization:</td></tr><tr><td>(a) 'poor</td></tr></table>",
"html": null,
"text": "",
"num": null
}
}
}
}