{
"paper_id": "K18-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:59.585327Z"
},
"title": "Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised Sanskrit",
"authors": [
{
"first": "Amrith",
"middle": [],
"last": "Krishna",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIT Kharagpur",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Bodhisattwa",
"middle": [
"Prasad"
],
"last": "Majumder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "San Diego"
}
},
"email": "[email protected]"
},
{
"first": "Rajesh",
"middle": [],
"last": "Shreedhar Bhat",
"suffix": "",
"affiliation": {
"laboratory": "Walmart Labs",
"institution": "",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIT Kharagpur",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a post-OCR text correction approach for digitising texts in Romanised Sanskrit. Owing to the lack of resources, our approach uses OCR models trained for other languages written in the Roman script. Currently, there exists no dataset available for Romanised Sanskrit OCR. So, we bootstrap a dataset of 430 images, scanned in two different settings, and their corresponding ground truth. For training, we synthetically generate training images for both the settings. We find that the use of a copying mechanism (Gu et al., 2016) yields a percentage increase of 7.69 in Character Recognition Rate (CRR) over the current state-of-the-art model for solving monotone sequence-to-sequence tasks (Schnober et al., 2016). We find that our system is robust in combating OCR-prone errors, as it obtains a CRR of 87.01% from an OCR output with a CRR of 35.76% for one of the dataset settings. A human judgement survey performed on the models shows that our proposed model results in predictions which are faster to comprehend and faster to improve for a human than those of the other systems 1 .",
"pdf_parse": {
"paper_id": "K18-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a post-OCR text correction approach for digitising texts in Romanised Sanskrit. Owing to the lack of resources, our approach uses OCR models trained for other languages written in the Roman script. Currently, there exists no dataset available for Romanised Sanskrit OCR. So, we bootstrap a dataset of 430 images, scanned in two different settings, and their corresponding ground truth. For training, we synthetically generate training images for both the settings. We find that the use of a copying mechanism (Gu et al., 2016) yields a percentage increase of 7.69 in Character Recognition Rate (CRR) over the current state-of-the-art model for solving monotone sequence-to-sequence tasks (Schnober et al., 2016). We find that our system is robust in combating OCR-prone errors, as it obtains a CRR of 87.01% from an OCR output with a CRR of 35.76% for one of the dataset settings. A human judgement survey performed on the models shows that our proposed model results in predictions which are faster to comprehend and faster to improve for a human than those of the other systems 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sanskrit used to be the 'lingua franca' for the scientific and philosophical discourse in ancient India, with literature that spans more than three millennia. Sanskrit primarily had an oral tradition, and the script used for writing Sanskrit varied widely across time spans and regions. With the advent of the printing press, Devanagari emerged as the prominent script for representing Sanskrit. With the standardisation of Romanisation using IAST in 1894 (Monier-Williams, 1899) , printing in Sanskrit was extended to Roman scripts as well. There has been a surge in digitising printed Sanskrit manuscripts written in Roman, such as the ones currently digitised by the 'Krishna Path' project 2 . 1 The data and the codes for our system are available at https://github.com/majumderb/sanskrit-ocr",
"cite_spans": [
{
"start": 447,
"end": 470,
"text": "(Monier-Williams, 1899)",
"ref_id": null
},
{
"start": 539,
"end": 540,
"text": "1",
"ref_id": null
},
{
"start": 784,
"end": 785,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose a model for post-OCR text correction for Sanskrit written in Roman. Post-OCR text correction, which can be seen as a special case of spelling correction (Schnober et al., 2016) , is the task of correcting errors that tend to appear in the output of the OCR in the process of converting an image to text. The errors incurred from OCR can be quite high due to numerous factors including typefaces, paper quality, scan quality, etc. The text can often be eroded, can contain noises and the paper can be bleached or tainted as well (Schnober et al., 2016) . Figure 1 shows the sample images we have collected for the task. Hence it is beneficial to perform a post-processing on the OCR output to obtain an improved text. In the case of Indic OCRs, there have been considerable efforts in collection and annotation of data pertaining to Indic Scripts (Kumar and Jawahar, 2007; Bhaskarabhatla et al., 2004; Govindaraju and Setlur, 2009; Krishnan et al., 2014) . Earlier attempts on Indian scripts were primarily based on handcrafted templates (Govindan and Shivaprasad, 1990; Chaudhuri and Pal, 1997) or features (Arora et al., 2010; Pal et al., 2009) which extensively used the script and language-specific information (Krishnan et al., 2014) . Sequential labelling approaches were later proposed that take the word level inputs and make character level predictions (Shaw et al., 2008; Hellwig, 2015) . The word based sequence labelling approaches were further extended to use neural architectures, especially using RNNs and its variants such as LSTMs and GRUs (Sankaran and Jawahar, 2012; Krishnan et al., 2014; Adiga et al., 2018; Mathew et al., 2016) . But, OCR is putative in exhibiting few long-range dependencies (Schnober et al., 2016) . Singh and Jawahar (2015) find that extending the neural models to process the text at the sentence level (or a textline) leads to improvement in the performance of the OCR systems. This was further corroborated by Saluja et al. where the authors found that using words within a context window of 5 for a given input word worked particularly well for the Post-OCR text correction in Sanskrit. In the case of providing a text line as input, we are essentially providing more context about the input in comparison to the word level models and the RNN (or LSTM) cells are powerful enough to capture the long-term dependencies. Particularly for Indian languages, this decision is beyond a question of performance. In Sanskrit, the word boundaries are often obscured due to phonetic transformations at the word boundaries known as Sandhi. Word segmentation of Sanskrit constructions is a matter of research on its own (Krishna et al., 2016a; Reddy et al., 2018) . However, none of the existing systems are equipped for incorrect spellings and hence these systems may be brittle (Belinkov and Bisk, 2018) when it comes to handling spelling variations in the input. Hence, in our case, we assume an unsegmented sequence as our input and then we perform our Post-OCR text correction on the text. We hypothesise that this will improve the segmentation process and other downstream tasks for Sanskrit in a typical NLP pipeline.",
"cite_spans": [
{
"start": 178,
"end": 201,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 553,
"end": 576,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 871,
"end": 896,
"text": "(Kumar and Jawahar, 2007;",
"ref_id": "BIBREF19"
},
{
"start": 897,
"end": 925,
"text": "Bhaskarabhatla et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 926,
"end": 955,
"text": "Govindaraju and Setlur, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 956,
"end": 978,
"text": "Krishnan et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 1062,
"end": 1094,
"text": "(Govindan and Shivaprasad, 1990;",
"ref_id": "BIBREF8"
},
{
"start": 1095,
"end": 1119,
"text": "Chaudhuri and Pal, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 1132,
"end": 1152,
"text": "(Arora et al., 2010;",
"ref_id": "BIBREF1"
},
{
"start": 1153,
"end": 1170,
"text": "Pal et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 1239,
"end": 1262,
"text": "(Krishnan et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 1386,
"end": 1405,
"text": "(Shaw et al., 2008;",
"ref_id": "BIBREF33"
},
{
"start": 1406,
"end": 1420,
"text": "Hellwig, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1581,
"end": 1609,
"text": "(Sankaran and Jawahar, 2012;",
"ref_id": "BIBREF29"
},
{
"start": 1610,
"end": 1632,
"text": "Krishnan et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 1633,
"end": 1652,
"text": "Adiga et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 1653,
"end": 1673,
"text": "Mathew et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 1739,
"end": 1762,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 1765,
"end": 1789,
"text": "Singh and Jawahar (2015)",
"ref_id": "BIBREF34"
},
{
"start": 2677,
"end": 2700,
"text": "(Krishna et al., 2016a;",
"ref_id": "BIBREF16"
},
{
"start": 2701,
"end": 2720,
"text": "Reddy et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 2837,
"end": 2862,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 579,
"end": 587,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our major contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Contrary to what is observed in Schnober et al. (2016) , an encoder-decoder model, when equipped with copying mechanism (Gu et al., 2016) , can outperform a traditional sequence labelling model in a monotone sequence labelling task. Our model outperforms Schnober et al. (2016) in Post-OCR text correction for Romanised Sanskrit by 7.69% in terms of CRR.",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 123,
"end": 140,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 258,
"end": 280,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. By making use of digitised Sanskrit texts, we generate images as synthetic training data for our models. We systematically incorporate various distortions into those images so as to emulate the settings of the original images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Through a human judgement experiment, we asked the participants to correct the mistakes from a predicted output from the competing systems. We find that participants were able to correct predictions from our system more frequently and the corrections were done much faster than the CRF model by Schnober et al. (2016) . We observe that predictions from our model also score higher on acceptability (Lau et al., 2015) than those of the other methods.",
"cite_spans": [
{
"start": 298,
"end": 320,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In principle, the output from any OCR which recognises Romanised Sanskrit can be used as the input to our model. Currently, there exist limited options for recognising Romanised Sanskrit texts from scanned documents. Possibly, the commercial OCR offering by Google as part of their proprietary cloud vision API and SanskritOCR 3 might be the only two viable options. SanskritOCR provides an online interface to the Tesseract OCR, an open source multilingual OCR (Smith, 2007; Smith et al., 2009; Smith, 1987) , trained specifically for recognising Romanised Sanskrit. Additionally, we trained an offline version of Tesseract to recognise the graphemes in the Romanised Sanskrit alphabet. In both the models we find that many scanned images, especially similar to the one shown in Figure 1b , were not recognised by the system. We hypothesise this to be due to the lack of enough font styles available in our collection, in spite of using a site with the richest collection of Sanskrit fonts 4 . This leaves the Google OCR as the only option.",
"cite_spans": [
{
"start": 463,
"end": 476,
"text": "(Smith, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 477,
"end": 496,
"text": "Smith et al., 2009;",
"ref_id": "BIBREF36"
},
{
"start": 497,
"end": 509,
"text": "Smith, 1987)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 781,
"end": 790,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2"
},
{
"text": "Considering the fact that working with a commercial offering from Google OCR may not be an affordable option for various digitisation projects, we chose to use Tesseract with models trained for other languages written in Roman script. All the Latin or Roman scripts in the pre-trained models of Tesseract are trained on 400,000 text-lines spanning about 4500 fonts 5 . Use of OCR with pre-trained models for other languages: The French alphabet has the highest grapheme overlap with that of the Sanskrit alphabet (37 of 50), while all other languages have one grapheme fewer in common with Sanskrit. Hence, we arbitrarily take 5 of the languages in addition to French and perform our analysis. Table 1 shows the character recognition rate (CRR) for OCR using alphabets of different languages, when performed on a dataset of 430 scanned images ( \u00a73.1). The table also shows the count of error types made by the OCR after alignment (Jiampojamarn et al., 2007; D'hondt et al., 2016) . All the languages have a near-similar CRR, with English and French leading the list. Based on our observations on the OCR performance, we select English for our further experiments.",
"cite_spans": [
{
"start": 921,
"end": 948,
"text": "(Jiampojamarn et al., 2007;",
"ref_id": "BIBREF14"
},
{
"start": 949,
"end": 970,
"text": "D'hondt et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 685,
"end": 692,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2"
},
{
"text": "Upcycling such a pre-trained model brings its own challenges. For instance, the missing 14 Sanskrit graphemes 6 in English are naturally mispredicted to other graphemes. This leads to ambiguity as the correct and the mispredicted characters now share the same target. Figure 2 shows the heat-map for such mis-predictions when we used the OCR on the set of 430 scanned images. Here, we zoom into the relevant cases and show the row-normalised proportion of predictions 7 . 5 https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 276,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2"
},
{
"text": "6 Detailed in \u00a72 of the Supplementary Material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2"
},
{
"text": "We formalise the task as a monotone seq2seq problem. We use an encoder-decoder framework that takes a character sequence as input, and the model learns embeddings at a sub-word level on both the encoder and the decoder side. Here, the OCR output forms the input to the model. Keeping the task in mind, we make two design decisions for the model. One is the use of the copying mechanism (Gu et al., 2016) and the other is the use of Byte Pair Encoding (BPE) (Sennrich et al., 2016) to learn a new vocabulary for the model.",
"cite_spans": [
{
"start": 371,
"end": 388,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 438,
"end": 461,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "CopyNet (Gu et al., 2016) : Since it is possible that there will be reasonable overlap between the input and output strings, we use the copying mechanism as mentioned in CopyNet (Gu et al., 2016) . The model essentially learns two probability distributions, one for generating an entry at the decoder and the other for copying the entry from the encoder. The final prediction is based on the sum of both the probabilities for the class. Given an input sequence X = (x_1, \\ldots, x_N), we define the set X of all the unique entries in the input sequence. We also define the vocabulary",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 178,
"end": 195,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "V = \\{v_1, \\ldots, v_N\\}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "Let the out-of-vocabulary (OOV) words be represented by UNK. The probabilities of the generate mode g and the copy mode c are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "p(y_t, g \\mid \\cdot) = \\begin{cases} \\frac{1}{Z} e^{\\psi_g(y_t)}, & y_t \\in V \\\\ 0, & y_t \\in X \\setminus V \\\\ \\frac{1}{Z} e^{\\psi_g(\\mathrm{UNK})}, & y_t \\notin V \\cup X \\end{cases} \\qquad p(y_t, c \\mid \\cdot) = \\begin{cases} \\frac{1}{Z} \\sum_{j: x_j = y_t} e^{\\psi_c(x_j)}, & y_t \\in X \\\\ 0, & \\text{otherwise} \\end{cases}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "where \\psi_g(\\cdot) and \\psi_c(\\cdot) are score functions for the generate-mode (g) and copy-mode (c), respectively, and Z is the normalisation term shared by the two modes, Z = \\sum_{v \\in V \\cup \\{\\mathrm{UNK}\\}} e^{\\psi_g(v)} + \\sum_{x \\in X} e^{\\psi_c(x)}. The scoring functions for the two modes, respectively, are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "\\psi_g(y_t = v_i) = v_i^{\\top} W_o s_t, \\; v_i \\in V \\cup \\{\\mathrm{UNK}\\} \\qquad \\psi_c(y_t = x_j) = \\sigma\\left(h_j^{\\top} W_c\\right) s_t, \\; x_j \\in X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "where W_c \\in \\mathbb{R}^{d_h \\times d_s} and \\sigma is a non-linear activation function (Gu et al., 2016) . Table 1 : OCR performances for different languages, with overall CRR and total Insertion, Deletion and Substitution errors.",
"cite_spans": [
{
"start": 66,
"end": 83,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "BPE (Sennrich et al., 2016) : Sanskrit is a morphologically rich language. A noun in Sanskrit can have 72 different inflections and a verb may have more than 90 inflections. Additionally, Sanskrit corpora generally express a compound rich vocabulary (Krishna et al., 2016b) . Hence, in a typical Sanskrit corpus, the majority of the tokens appear less than 5 times ( \u00a73.1). These are generally considered to be rare words in a corpus (Sennrich et al., 2016) . However, corpora dominated by rare words are difficult to handle for a statistical model like ours. To combat the sparsity of the data, we convert the tokens into sub-word ngrams using Byte Pair Encoding (BPE) (Sennrich et al., 2016) . Methods such as wordpiece (Schuster and Nakajima, 2012) as well as Sennrich et al. (2016) are means of obtaining a new vocabulary for a given corpus. Every sequence in the corpus is then re-written as a sequence of tokens in terms of the sub-word units which forms the type in the new vocabulary so obtained. These methods essentially use a data-driven approach to maximise the language-model likelihood of the training data, given an evolving word definition (Wu et al., 2016) . We explicitly set the minimum count for a token in the new vocabulary to 30. We learn a new vocabulary of size 82, with 22 entries of length 1 and the rest of length 2. The IAST standardisation of Romanised Sanskrit contains 50 graphemes in the Sanskrit alphabet, about 12 of which are represented using two-character Roman combinations. Of these, 7 graphemes were not present in the vocabulary learnt using BPE; hence, we add them to the 82 learnt entries, making the total vocabulary size 89. By using the new vocabulary, it is guaranteed that there will be no Out-Of-Vocabulary (OOV) words in our model.",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 250,
"end": 273,
"text": "(Krishna et al., 2016b)",
"ref_id": "BIBREF17"
},
{
"start": 434,
"end": 457,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 670,
"end": 693,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 763,
"end": 785,
"text": "Sennrich et al. (2016)",
"ref_id": "BIBREF32"
},
{
"start": 1156,
"end": 1173,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "We use 3 stacked layers of LSTM at the encoder and the decoder with the same settings as in Bahdanau et al. (2015) . To enable copying, we share the embeddings of the source and the target vocabulary. By eliminating OOV, we make sure that copying always happens by virtue of the evidence from the training data and not by the presence of an OOV word.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "Bahdanau et al. (2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2.1"
},
{
"text": "Sanskrit is a low-resource language, and datasets of scanned images with corresponding aligned texts for Romanised Sanskrit are extremely scarce. We obtain 430 scanned images as shown in Figure 1 and manually annotate the corresponding text. We use this as our test dataset, henceforth referred to as OCRTest. For training and development, we synthetically generate images from digitised Sanskrit texts. These training images, OCRTrain, were generated by synthetically adding distortions so as to match the settings of the real scanned documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "OCRTest contains 430 images from 1) scanned copy of Vishnu Saha\u015bran\u0101ma 8 and 2) scanned copy of Bhagavad G\u012bt\u0101, a sample of each is shown in Figure 1a and 1b. 140 out of these 430 are from Saha\u015bran\u0101ma and the remaining are from Bhagavad G\u012bt\u0101.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 149,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "OCRTrain: Similar to Ul-Hasan and Breuel (2013), we synthetically generate the images, which are then fed to the OCR, to obtain our training data. We use the digitised text from \u015ar\u012bmad Bh\u0101gavatam 9 for generating the synthetic images. The text contains about 14,094 verses in total, divided into 50,971 text-lines. The dataset is divided into an 80-20 split as training set and development set, respectively. The corpus contains a vocabulary of 52,882 word types; 48,249 of the word types in the vocabulary appear less than or equal to 5 times, of which 32,411 appear exactly once. This is primarily due to the inflectional nature of Sanskrit. We find similar trends in the vocabulary of the R\u0101m\u0101ya\u1e47a 10 and the Digital Corpus of Sanskrit (Hellwig, 2010-2016) as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Using the text-lines from Bh\u0101gavatam, we generate synthetic images using ImageMagick 11 . The images were generated with a quality of 60 Dots Per Inch (DPI). The number of pixels along the height of each textline was kept constant at 65 pixels. We add several distortions to the synthetically generated images so as to visually match the settings of OCRTest. Previously, Ul-Hasan and Breuel (2013) used the approach of synthetically generating training data for their multilingual OCR solution. Table 2 shows the different parameters, namely gamma correction, noise addition, use of a structural kernel for erosion, and perspective distortion, that we apply sequentially to the images so as to distort and degrade them (Chen et al., 2014) . We use grid search for the parameter estimation for these processes, where the parameters and the range of values experimented with are provided in Table 2 . Finally, we filter 7 (out of 38,400 combinations) different configurations based on the distribution of Character Recognition Rate (CRR) across the images, compared with that of OCRTest using KL-divergence. Among these seven configurations, four are closer to the settings for Bhagavad G\u012bt\u0101 and the remaining three to those for Saha\u015bran\u0101ma. Figure 3 shows the two different settings (closer to each of the source textbooks) for the string \"ajo durmar\u1e63a\u1e47a\u1e25 \u015b\u0101st\u0101 vi\u015brut\u0101tm\u0101 sur\u0101rih\u0101\", along with their corresponding parameter settings and KL-divergence. Our training set contains images from all the 7 settings for each of the textlines in OCRTrain 12 .",
"cite_spans": [
{
"start": 746,
"end": 765,
"text": "(Chen et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 518,
"end": 525,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 918,
"end": 925,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1263,
"end": 1271,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Generation of training set",
"sec_num": "3.2"
},
{
"text": "We use three different metrics for evaluating all our models. We use Character Recognition Rate (CRR) and Word Recognition Rate (WRR) averaged over each of the sentences in the 430 lines in the test dataset (Sankaran Figure 3 : Samples of synthetically generated images. The parameter settings for the distortions are mentioned below the corresponding image.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 225,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "and Jawahar, 2012). CRR is the fraction of characters recognised correctly against the total number of characters in a line, whereas WRR is the fraction of words correctly recognised against the total number of words in a line. Additionally, we use a sentence level metric, called the acceptability score. The measure indicates the extent to which a sentence is permissible or acceptable to the speakers of the language (Lau et al., 2015) . From Lau et al. 2015, we use the NormLP formulation for the task, as it is found to have a high correlation with the human judgements in evaluating acceptability. NormLP is calculated by obtaining the likelihood of a predicted sentence as per the model, and then normalising it by the likelihood of the string as per a unigram language model trained on a corpus with gold standard sentences. A negative sign is then given to the score. The higher the score, the more acceptable the sentence is.",
"cite_spans": [
{
"start": 420,
"end": 438,
"text": "(Lau et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "Character Tagger - Sequence Labelling using BiLSTMs: This is a sequence labelling model which uses BiLSTM cells, and the input is a character sequence (Saluja et al.) . We use categorical cross-entropy as the loss function and softmax as the activation function. For dropout, we employ spatial dropout in our architecture. The model consists of 3 layers, with each layer having 128 cells. Embeddings of size 100 are randomly initialised and the learnt representations are stored in a character look-up table, similar to Lample et al. (2016) . In addition to every phoneme in Sanskrit as a class, we add an additional class 'no change', which signifies that the character remains as is. We also experimented with a variant where the final layer is a CRF layer (Lafferty et al., 2001 ). We henceforth refer to both the systems as BiLSTM and BiLSTM-CRF, respectively.",
"cite_spans": [
{
"start": 144,
"end": 159,
"text": "(Saluja et al.)",
"ref_id": null
},
{
"start": 510,
"end": 530,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 748,
"end": 770,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "Pruned CRFs (Schnober et al., 2016) : They use pruned CRFs (Ishikawa, 2011) that approximate the CRF objective function using coarse-to-fine decoding. Schnober et al. (2016) adapt the sequence labeling model as a seq2seq model that can handle variable length input-output pairs. Schnober et al. (2016) show that none of the neural seq2seq models considered in their work were able to outperform the Pruned CRF with order-5. The features to the model are consecutive characters within a window of size w in either of the directions of the current position at which a prediction is made. The model is designed to handle 1-to-zero and 1-to-many matches, facilitated by the use of alignment prior to training. We consider all the three settings reported in Schnober et al. (2016) and report the results for the best setting. The order-5 model which uses 6-grams within a window of 6 performs the best. Henceforth, this model is referred to as PCRF-seq2seq (also referred to as PCRF interchangeably). Encoder-Decoder Models: For the seq2seq model (Sutskever et al., 2014) , we use 3 stacked layers of LSTM each at the encoder and the decoder. Each layer is of 128 dimensions and weighted cross-entropy is used as the loss. We also add residual connections among the layers in a stack (Wu et al., 2016) . To further capture the entire input context for making each prediction at the output, we make use of attention (Bahdanau et al., 2015) , specifically Luong's attention mechanism (Luong et al., 2015) . We experiment with two variants, where EncDec+Char uses character level embeddings and EncDec+BPE uses embeddings with BPE. CopyNet+BPE: The model discussed in \u00a72. We use CopyNet+BPE and CopyNet interchangeably throughout the paper. Table 3 shows the results for all the competing systems based on the predictions from OCRTest. CopyNet performs the best among the competing systems across all the three metrics and on both the source texts. For the G\u012bt\u0101 dataset, the models CopyNet and PCRF-Seq2Seq report similar performances. However, Saha\u015bran\u0101ma is a noisier dataset, and we find that CopyNet outperforms all other models by a huge margin. The WRR for the system is double that of the next best system (EncDec) on this dataset.",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 43,
"end": 59,
"text": "(Ishikawa, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 135,
"end": 157,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 263,
"end": 285,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 737,
"end": 759,
"text": "Schnober et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 1026,
"end": 1050,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF38"
},
{
"start": 1263,
"end": 1280,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 1394,
"end": 1417,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 1461,
"end": 1481,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1716,
"end": 1723,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
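The CRR and WRR metrics reported in Table 3 can be sketched as follows. This is a minimal illustration assuming CRR is defined via character-level edit distance against the ground truth and WRR as the fraction of positionally matching whitespace-delimited words; the paper's exact metric definitions may differ slightly:

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic programme over two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def crr(pred, truth):
    # Character Recognition Rate: 1 - normalised edit distance, as a percentage.
    return 100.0 * (1 - edit_distance(pred, truth) / max(len(truth), 1))

def wrr(pred, truth):
    # Word Recognition Rate: fraction of words reproduced exactly, by position.
    p, t = pred.split(), truth.split()
    hits = sum(pw == tw for pw, tw in zip(p, t))
    return 100.0 * hits / max(len(t), 1)
```

A system output identical to the ground truth scores 100% on both metrics; each wrong character lowers CRR proportionally, while WRR drops for every word that is not reproduced exactly.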
{
"text": "From Figure 4a , it can be observed that the performance in terms of CRR for CopyNet and PCRF is robust across all the lengths on strings from G\u012bt\u0101 and never goes below 90%. For Saha\u015bran\u0101ma, as shown in Figure 4b , CopyNet outperforms PCRF across inputs of all the lengths except for one setting. But, in the case of WRR, CopyNet is the best performing model across all the lengths as shown in Figure 4d . show CRR for G\u012bt\u0101 and Saha\u015bran\u0101ma respectively, for the competing systems. (c) and (d) shows WRR for G\u012bt\u0101 and Saha\u015bran\u0101ma, respectively. All the entries with insufficient data-points were merged to the nearest smaller number.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 14,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 203,
"end": 212,
"text": "Figure 4b",
"ref_id": null
},
{
"start": 394,
"end": 403,
"text": "Figure 4d",
"ref_id": null
}
],
"eq_spans": [],
"section": "System performances for various input lengths:",
"sec_num": null
},
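The merging of sparse length buckets mentioned above ("entries with insufficient data-points were merged to the nearest smaller number") can be sketched as follows; the minimum-count threshold of 5 is an illustrative assumption, not the paper's actual value:

```python
from collections import Counter

def merge_sparse_buckets(lengths, min_count=5):
    # Count items per length bucket, then fold any bucket with fewer than
    # min_count items into the nearest smaller bucket already kept.
    counts = Counter(lengths)
    merged = {}
    for length in sorted(counts):
        if counts[length] >= min_count or not merged:
            merged[length] = counts[length]
        else:
            # nearest smaller bucket = largest bucket kept so far
            merged[max(merged)] += counts[length]
    return merged
```

Per-length CRR/WRR can then be computed over the merged buckets, so that no reported point rests on too few strings.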
{
"text": "Error type analysis In Table 5 , we analyse the reduction in specific error types for PCRF Table 5 : Insertion, Deletion and Substitution errors for OCR, PCRF and CopyNet modes for both the datasets. The system errors are extra errors added by the respective systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 5",
"ref_id": null
},
{
"start": 91,
"end": 98,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "System performances for various input lengths:",
"sec_num": null
},
{
"text": "and CopyNet after the alignment of the predicted string with that of the ground truth in terms of insertion, deletion and substitution. We also report the system induced errors, where a correct component at the input (OCR output) is mispredicted to a wrong output by the model. CopyNet outperforms PCRF in correcting the errors and it also introduces lesser number of errors of its own. Both CopyNet and PCRF (Schnober et al., 2016) are seq2seq models and can handle varying length input and output. Both the systems perform well in handling substitution errors, the type which dominated the strings in OCRTest, though neither of the systems was able to correct the insertion errors. Insertion can be seen as a special case of 1-to-many insertion matches, which both systems are ideally capable of handling. We see that for Saha\u015bran\u0101ma, CopyNet corrects about 17.24 % of the deletion errors as against <5% of the deletion errors corrected by PCRF.",
"cite_spans": [
{
"start": 409,
"end": 432,
"text": "(Schnober et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System performances for various input lengths:",
"sec_num": null
},
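The insertion/deletion/substitution breakdown of Table 5 presupposes an alignment between the predicted string and the ground truth. A minimal sketch, using a Levenshtein backtrace as the alignment (the paper's alignment procedure may differ), with the prediction treated as hypothesis and the ground truth as reference:

```python
def error_counts(pred, truth):
    # Fill the Levenshtein DP table between pred (hypothesis) and truth (reference).
    m, n = len(pred), len(truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (pred[i - 1] != truth[j - 1]))
    # Backtrace to classify each edit operation.
    ins = dele = sub = 0
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (pred[i - 1] != truth[j - 1]):
            sub += pred[i - 1] != truth[j - 1]   # substitution (or a match)
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ins += 1                              # extra character in the prediction
            i -= 1
        else:
            dele += 1                             # ground-truth character missing from the prediction
            j -= 1
    return ins, dele, sub
```

The three counts always sum to the edit distance, so per-type correction rates can be computed by comparing counts before and after post-correction.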
{
"text": "Since there exist 14 graphemes in Sanskrit alphabet which are not present in the English alphabet, all 14 of them get substituted to a different grapheme by the OCR. While most of them get substituted to an orthographically similar character such as\u0101 \u2192 a and h . \u2192 h, we find that\u00f1 \u2192 i does not fit the scheme, as shown in Figure 2 . In the majority of the cases, CopyNet predicts them to the correct grapheme. But PCRF still fails to correct the OCR induced confusion for\u00f1 \u2192 i in the majority of the instances. Additionally, we find that PCRF introduces its own errors, for instance it often mispredicts p \u2192 s. Figure 5 shows the over-all variations in both the systems as compared to Figure 2 for OCR induced errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 323,
"end": 331,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 612,
"end": 620,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 686,
"end": 694,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "System performances for various input lengths:",
"sec_num": null
},
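The OCR's grapheme confusions described above can be simulated as a sketch: most IAST diacritics collapse to their orthographically similar base letter, which Unicode NFD decomposition captures directly, while the anomalous \u00f1 \u2192 i case needs an explicit override. The override table here reflects only the confusions named in the text; the OCR's full confusion matrix (Figure 2) is richer:

```python
import unicodedata

# Anomalous confusion observed in Figure 2 that does not fit the
# diacritic-stripping scheme.
ANOMALIES = {"ñ": "i"}

def simulate_ocr_confusion(text):
    out = []
    for ch in text:
        if ch in ANOMALIES:
            out.append(ANOMALIES[ch])
            continue
        # NFD splits e.g. 'ā' into 'a' + combining macron; dropping the
        # combining marks yields the orthographically similar base letter.
        decomposed = unicodedata.normalize("NFD", ch)
        out.append("".join(c for c in decomposed if not unicodedata.combining(c)))
    return "".join(out)
```

For example, the word "jñāna" degrades to "jiana" under this scheme: ñ falls to the anomalous i, while ā loses its macron.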
{
"text": "Copy or generate? For the 14 graphemes, missing at the encoder (input) but present at the decoder side during training, those predictions have to happen with high values of generate probability in general. We find that not only the average generate probability for such instances is high but also the copy probability is extremely low. For the remaining cases, we find that both generate and copy probability are higher. But it needs to be noted that the prediction is made generally by summing of both the distributions and the distributions are not complementary to each other. A similar trend can be observed in Figure 6 as well. For example in the case of a \u2192\u0101, only the generate probability is high. But, for a \u2192 a, both the copy and generate probability scores are high.",
"cite_spans": [],
"ref_spans": [
{
"start": 615,
"end": 623,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "System performances for various input lengths:",
"sec_num": null
},
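The copy-versus-generate behaviour above follows CopyNet's decoding rule, where the output distribution is the sum of a generate distribution over the decoder vocabulary and a copy distribution over source tokens. A toy numerical sketch (the probabilities are invented for illustration, not taken from the trained model):

```python
def copynet_output(gen_probs, copy_probs):
    # CopyNet sums the generate and copy distributions; the two modes are
    # not complementary, so one token can receive mass from both.
    combined = dict(gen_probs)
    for token, p in copy_probs.items():
        combined[token] = combined.get(token, 0.0) + p
    return combined

# Toy example: 'ā' exists only in the decoder vocabulary (generate-only),
# while 'a' appears in the source and can be both copied and generated.
gen = {"ā": 0.55, "a": 0.25, "i": 0.05}
copy = {"a": 0.10, "i": 0.05}
dist = copynet_output(gen, copy)
```

Here 'ā' gets all its mass from generation, matching the observation that graphemes absent from the input must be generated, while 'a' accumulates mass from both modes.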
{
"text": "We further investigate the effect of our vocabulary which is the union of the alphabet in Romanised Sanskrit and what is learnt using BPE. We train the model with only the alphabet as vocabulary and find the CRR and WRR for the combined test sentences to be 86.1% and 66.09%, respectively. When using the original BPE vocabulary, we find that there is a slight increase in the performance than the current vocabulary with a CRR and WRR of 89.53% and 68.11%, respectively 13 . We also find that the current setting performs better than Performance comparison to Google OCR: Google OCR is probably the only available OCR that can handle Romanised Sanskrit. We could not find the architecture of the OCR or whether the service employs post-OCR text correction. We empirically compare the performance of Google OCR on OCRTest with our model. Table 4 shows the results for Google OCR. Overall we find that CopyNet outperforms Google OCR across all the metrics. We find that Google OCR reports a similar CRR for G\u012bt\u0101 with that of ours, but still reports a lower WRR than ours. The system performs better than PCRF in all the metrics other than CRR for G\u012bt\u0101.",
"cite_spans": [],
"ref_spans": [
{
"start": 838,
"end": 845,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
{
"text": "Image quality: Our training set was generated with a quality of 60 DPI for the images. We generate images corresponding to strings in OCRTrain with DPI of 50 to 300 in step sizes of 50 for a sample of 500 images. We use noise settings as shown in Figure 3 . The OCR output of the said strings remained as is with that of the one generated with a DPI of 60. This experiment can be seen as a proxy in evaluating the robustness of the OCR to various scanning qualities of input. Our choice of DPI as 60 was based on the lowest setting we observed in digitisation attempts in Sanskrit texts.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
{
"text": "Effect of adding distortions to the synthetically generated images: Table 3 shows the system performance after training our model on data generated as per the procedure mentioned in Section 3.2. Here, we make an implicit assumption that we can have access to a sample of textline images annotated with the corresponding text from the manuscript for which the Post-OCR text correction needs to be performed. This also mandates retraining the model for every new manuscript. We attempted for a more generalised version of our model, by using training data where the image generation settings are not inspired from the target manuscript for which the task needs to be performed. Using the settings from (Chen et al., 2014) for inducing noise, we generated 10 random noise configurations. Here the step sizes were fixed at values such that each parameter, except erosion (E), can assume 5 values each uniformly spread across the corresponding ranges considered. From a total of 2500 (5\u00d75\u00d75\u00d75\u00d74) configuration options, 10 random settings were chosen. Every textline was generated with each of the 10 different settings. The resulting model using CopyNet produced a CRR of 89.02% (96.99% for G\u012bt\u0101 and 85.62% for Saha\u015bran\u0101ma) on the test set, which is close to the reported CRR of 89.65 in Table 3. The noise ranges chosen are used directly from (Chen et al., 2014) and are not influenced by the test data in hand.",
"cite_spans": [
{
"start": 700,
"end": 719,
"text": "(Chen et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1339,
"end": 1358,
"text": "(Chen et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
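The sampling of 10 noise settings from the 2500 (5\u00d75\u00d75\u00d75\u00d74) configuration grid can be sketched as follows; the parameter names and value grids here are placeholders, since the actual parameters and ranges come from Chen et al. (2014):

```python
import itertools
import random

# Placeholder grids: four parameters with 5 values each and erosion (E)
# with 4 values, giving 5 * 5 * 5 * 5 * 4 = 2500 configurations.
GRID = {
    "noise":    [0, 1, 2, 3, 4],
    "blur":     [0, 1, 2, 3, 4],
    "shading":  [0, 1, 2, 3, 4],
    "rotation": [0, 1, 2, 3, 4],
    "erosion":  [0, 1, 2, 3],
}

def sample_configs(k=10, seed=0):
    # Enumerate the full Cartesian product and draw k distinct settings.
    all_configs = list(itertools.product(*GRID.values()))
    rng = random.Random(seed)
    return rng.sample(all_configs, k)

configs = sample_configs()
```

Each textline would then be rendered once per sampled configuration, yielding 10 noisy variants per training string.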
{
"text": "We also experimented with a setting where no noise was added to the synthetically generated images and the images were fed to the OCR. We obtained a CRR of 80.12% from OCR, where the errors arose mostly from the missing graphemes in the alphabet getting mispredicted to a different grapheme. CopyNet after training with the text so generated reported a CRR of 86.81% (96.01% for G\u012bt\u0101, 75.78% for Saha\u015bran\u0101ma) on the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
{
"text": "Human judgement survey: In this survey 14 , we evaluate how often a human can recognise the correct construction by viewing only the prediction from one of the systems. We also evaluate how fast a human can correct them. We selected 15 constructions from Saha\u015bran\u0101ma, and obtained the system outputs from the OCR, CopyNet and PCRF for each of these. The average length of a sentence is 41.73 characters, all ranging between 23 and 47 characters. A respondent is shown a system prediction (system identity anonymised) and is asked to type the corrected string without referring to any sources. A respondent gets 15 different strings altogether, 5 each from each of the three systems. We consider responses from 9 participants where all of them at least have an undergraduate degree in Sanskrit linguistics. Altogether from 3 sets of questionnaires, we have 45 strings (3 outputs for a given string). Every string obtained 3 impressions. We find that a participant on an average could identify 4.44 sentences out of 5 from the CopyNet, while it was only 3.56 for PCRF and 3.11 for the OCR output. The average time taken to complete the correction of a string was 81.4 seconds, 106.6 seconds and 127.6 seconds for CopyNet, PCRF and OCR, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
{
"text": "14 More details at \u00a76 of Supplementary material",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of BPE and alphabet in the vocabulary",
"sec_num": null
},
{
"text": "In this work, we proposed an OCR based solution for digitising Romanised Sanskrit. Our work acts as a Post-OCR text correction approach and is devoid of any OCR-specific feature engineering. We find that the use of copying mechanism in encoderdecoder performs significantly better than other seq2seq models for the task. Our model outperforms the commercially available Google OCR on the Saha\u015bran\u0101ma texts. From our experiments, we find that CopyNet performs stably even for OCR outputs with a CRR as low as 36%. Our immediate research direction will be to rectify insertion errors which currently are not properly handled. Also, there are 135 languages which directly share the Roman alphabet but only 35 of them have OCR system available. Our approach can be easily extended to provide a post-processed OCR for those languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://www.krishnapath.org/library/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sri.auroville.org/projects/ sanskrit-ocr/. It provides interface to tesseract and Google OCR as well.4 More details about the training procedure in \u00a71 of the supplementary material",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A more detailed figure with all the cases are available in the supplementary material in \u00a73.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://kirtimukha.com/ 9 https://www.vedabase.com/en/sb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sanskritdocuments.org/sites/ valmikiramayan/ 11 https://www.imagemagick.org/script/ index.php12 Samples of all the 7 seven configurations are shown in the supplementary material in \u00a74",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Please refer to \u00a75 of the Supplementary material for the performance table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Amba Kulkarni, Arnab Bhattacharya, Ganesh Ramakrishnan, Rohit Saluja, Devaraj Adiga and Hrishikesh Terdalkar for helpful comments and discussions related to Indic OCRs. We would like to thank Madhusoodan Pai, Sanjeev Panchal, Ganesh Iyer and his students for helping us with the human judgement survey. We thank the anonymous reviewers for their constructive and helpful comments, which greatly improved the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improving the learnability of classifiers for sanskrit ocr corrections",
"authors": [
{
"first": "Devaraj",
"middle": [],
"last": "Adiga",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Saluja",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Parag",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Malhar",
"middle": [],
"last": "Ramasubramaniam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kulkarni",
"suffix": ""
}
],
"year": 2018,
"venue": "The 17th World Sanskrit Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devaraj Adiga, Rohit Saluja, Vaibhav Agrawal, Ganesh Ramakrishnan, Parag Chaudhuri, K Ramasubrama- niam, and Malhar Kulkarni. 2018. Improving the learnability of classifiers for sanskrit ocr corrections. In The 17th World Sanskrit Conference, Vancouver, Canada. IASS.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Recognition of non-compound handwritten devnagari characters using a combination of mlp and minimum edit distance",
"authors": [
{
"first": "Sandhya",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Debotosh",
"middle": [],
"last": "Bhattacharjee",
"suffix": ""
},
{
"first": "Mita",
"middle": [],
"last": "Nasipuri",
"suffix": ""
},
{
"first": "Dipak",
"middle": [
"Kumar"
],
"last": "Basu",
"suffix": ""
},
{
"first": "Mahantapas",
"middle": [],
"last": "Kundu",
"suffix": ""
}
],
"year": 2010,
"venue": "International Journal of Industrial Electronics and Electrical Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandhya Arora, Debotosh Bhattacharjee, Mita Nasipuri, Dipak Kumar Basu, and Mahantapas Kundu. 2010. Recognition of non-compound hand- written devnagari characters using a combination of mlp and minimum edit distance. International Journal of Industrial Electronics and Electrical Engineering.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Third International Conference on Learning Representation (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the Third International Conference on Learning Repre- sentation (ICLR), San Diego, CA, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "The Sixth International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Syn- thetic and natural noise both break neural machine translation. In The Sixth International Conference on Learning Representations (ICLR), New Orleans, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Representation and annotation of online handwritten data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ajay",
"suffix": ""
},
{
"first": "Sriganesh",
"middle": [],
"last": "Bhaskarabhatla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Madhvanath",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2004,
"venue": "Ninth International Workshop on Frontiers in Handwriting Recognition",
"volume": "",
"issue": "",
"pages": "136--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajay S Bhaskarabhatla, Sriganesh Madhvanath, MNSSKP Kumar, A Balasubramanian, and CV Jawahar. 2004. Representation and annotation of online handwritten data. In Ninth International Workshop on Frontiers in Handwriting Recognition, pages 136-141, Tokyo, Japan. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An ocr system to read two indian language scripts: Bangla and devnagari (hindi)",
"authors": [
{
"first": "",
"middle": [],
"last": "Bb Chaudhuri",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourth International Conference on Document Analysis and Recognition",
"volume": "2",
"issue": "",
"pages": "1011--1015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BB Chaudhuri and U Pal. 1997. An ocr system to read two indian language scripts: Bangla and devnagari (hindi). In Proceedings of the Fourth International Conference on Document Analysis and Recognition, volume 2, pages 1011-1015, Ulm, Germany. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Large-scale visual font recognition",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianchao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Brandt",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Shechtman",
"suffix": ""
},
{
"first": "Aseem",
"middle": [],
"last": "Agarwala",
"suffix": ""
},
{
"first": "Tony",
"middle": [
"X"
],
"last": "Han",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3598--3605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guang Chen, Jianchao Yang, Hailin Jin, Jonathan Brandt, Eli Shechtman, Aseem Agarwala, and Tony X Han. 2014. Large-scale visual font recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3598-3605.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Low-resource ocr error detection and correction in french clinical texts",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Eva D'hondt",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Grouin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva D'hondt, Cyril Grouin, and Brigitte Grau. 2016. Low-resource ocr error detection and correction in french clinical texts. In Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, pages 61-68, Auxtin, TX. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Character recognitiona review",
"authors": [
{
"first": "",
"middle": [],
"last": "Vk Govindan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shivaprasad",
"suffix": ""
}
],
"year": 1990,
"venue": "Pattern recognition",
"volume": "23",
"issue": "7",
"pages": "671--683",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "VK Govindan and AP Shivaprasad. 1990. Char- acter recognitiona review. Pattern recognition, 23(7):671-683.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Guide to OCR for Indic Scripts",
"authors": [
{
"first": "Venu",
"middle": [],
"last": "Govindaraju",
"suffix": ""
},
{
"first": "Srirangaraj",
"middle": [],
"last": "Setlur",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Venu Govindaraju and Srirangaraj Setlur. 2009. Guide to OCR for Indic Scripts. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "DCS -The Digital Corpus of Sanskrit",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Hellwig",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Hellwig. 2010-2016. DCS -The Digital Corpus of Sanskrit. Berlin.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ind. senz-ocr software for hindi, marathi, tamil, and sanskrit",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Hellwig",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Hellwig. 2015. ind. senz-ocr software for hindi, marathi, tamil, and sanskrit.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transformation of general binary mrf minimization to the first-order case",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "33",
"issue": "",
"pages": "1234--1249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroshi Ishikawa. 2011. Transformation of general bi- nary mrf minimization to the first-order case. IEEE transactions on pattern analysis and machine intel- ligence, 33(6):1234-1249.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Sittichai Jiampojamarn",
"suffix": ""
},
{
"first": "Tarek",
"middle": [],
"last": "Kondrak",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sherif",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguistics;",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "372--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Main Conference, pages 372- 379, Rochester, New York. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word segmentation in sanskrit using path constrained random walks",
"authors": [
{
"first": "Amrith",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Bishal",
"middle": [],
"last": "Santra",
"suffix": ""
},
{
"first": "Pavankumar",
"middle": [],
"last": "Satuluri",
"suffix": ""
},
{
"first": "Prasanth",
"middle": [],
"last": "Sasi",
"suffix": ""
},
{
"first": "Bhumi",
"middle": [],
"last": "Bandaru",
"suffix": ""
},
{
"first": "Yajuvendra",
"middle": [],
"last": "Faldu",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "494--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amrith Krishna, Bishal Santra, Pavankumar Satuluri, Sasi Prasanth Bandaru, Bhumi Faldu, Yajuvendra Singh, and Pawan Goyal. 2016a. Word segmen- tation in sanskrit using path constrained random walks. In Proceedings of COLING 2016, the 26th International Conference on Computational Lin- guistics: Technical Papers, pages 494-504, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Compound type identification in sanskrit: What roles do the corpus and grammar play?",
"authors": [
{
"first": "Amrith",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Pavankumar",
"middle": [],
"last": "Satuluri",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Apurv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WS-SANLP2016)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amrith Krishna, Pavankumar Satuluri, Shubham Sharma, Apurv Kumar, and Pawan Goyal. 2016b. Compound type identification in sanskrit: What roles do the corpus and grammar play? In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WS- SANLP2016), pages 1-10, Osaka, Japan. The COL- ING 2016 Organizing Committee.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards a robust ocr system for indic scripts",
"authors": [
{
"first": "Praveen",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Ajeet",
"middle": [
"Kumar"
],
"last": "Singh",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2014,
"venue": "Eleventh IAPR International Workshop on Document Analysis Systems (DAS)",
"volume": "",
"issue": "",
"pages": "141--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Praveen Krishnan, Naveen Sankaran, Ajeet Kumar Singh, and CV Jawahar. 2014. Towards a robust ocr system for indic scripts. In Eleventh IAPR Inter- national Workshop on Document Analysis Systems (DAS), pages 141-145, Tours, France. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Content-level annotation of large collection of printed document images",
"authors": [
{
"first": "Anand",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2007,
"venue": "Ninth International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "2",
"issue": "",
"pages": "799--803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anand Kumar and CV Jawahar. 2007. Content-level annotation of large collection of printed document images. In Ninth International Conference on Docu- ment Analysis and Recognition (ICDAR), volume 2, pages 799-803, Parana, Brazil. IEEE.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning (ICML)",
"volume": "951",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning (ICML), volume 951, pages 282-289, Williamstown, MA, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unsupervised prediction of acceptability judgements",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1618--1628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lap- pin. 2015. Unsupervised prediction of acceptabil- ity judgements. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1618-1628, Beijing, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multilingual ocr for indic scripts",
"authors": [
{
"first": "Minesh",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Ajeet",
"middle": [
"Kumar"
],
"last": "Singh",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2016,
"venue": "Document Analysis Systems (DAS), 2016 12th IAPR Workshop on",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minesh Mathew, Ajeet Kumar Singh, and CV Jawa- har. 2016. Multilingual ocr for indic scripts. In Document Analysis Systems (DAS), 2016 12th IAPR Workshop on, pages 186-191. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Monier Monier-Williams. 1899. A sanskrit-english dictionary",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monier Monier-Williams. 1899. A sanskrit-english dictionary.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Comparative study of devnagari handwritten character recognition using different feature and classifiers",
"authors": [
{
"first": "Umapada",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Tetsushi",
"middle": [],
"last": "Wakabayashi",
"suffix": ""
},
{
"first": "Fumitaka",
"middle": [],
"last": "Kimura",
"suffix": ""
}
],
"year": 2009,
"venue": "Tenth International Conference on Document Analysis and Recognition (IC-DAR)",
"volume": "",
"issue": "",
"pages": "1111--1115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umapada Pal, Tetsushi Wakabayashi, and Fumitaka Kimura. 2009. Comparative study of devnagari handwritten character recognition using different feature and classifiers. In Tenth International Con- ference on Document Analysis and Recognition (IC- DAR), pages 1111-1115. IEEE.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Building a word segmenter for sanskrit overnight",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Amrith",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Vishnu",
"middle": [
"Dutt"
],
"last": "Sharma",
"suffix": ""
},
{
"first": "Prateek",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Vineeth",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": 2018,
"venue": "Eleventh Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Reddy, Amrith Krishna, Vishnu Dutt Sharma, Prateek Gupta, MR Vineeth, and Pawan Goyal. 2018. Building a word segmenter for sanskrit overnight. In Eleventh Language Resources and Evaluation Conference (LREC), Miyazaki, Japan.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Error detection and corrections in indic ocr using lstms",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Saluja",
"suffix": ""
},
{
"first": "Devaraj",
"middle": [],
"last": "Adiga",
"suffix": ""
},
{
"first": "Parag",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
}
],
"year": null,
"venue": "14th IAPR International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Saluja, Devaraj Adiga, Parag Chaudhuri, Ganesh Ramakrishnan, and Mark Carman. Error detection and corrections in indic ocr using lstms. In 14th IAPR International Conference on Document Anal- ysis and Recognition (ICDAR).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Recognition of printed devanagari text using blstm neural network",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2012,
"venue": "21st International Conference on Pattern Recognition (ICPR)",
"volume": "",
"issue": "",
"pages": "322--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Sankaran and CV Jawahar. 2012. Recognition of printed devanagari text using blstm neural net- work. In 21st International Conference on Pattern Recognition (ICPR), pages 322-325, Tsukuba Sci- ence City, JAPAN. IEEE.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Still not there? comparing traditional sequence-to-sequence models to encoderdecoder neural networks on monotone string translation tasks",
"authors": [
{
"first": "Carsten",
"middle": [],
"last": "Schnober",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Erik-L\u00e2n Do",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1703--1714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carsten Schnober, Steffen Eger, Erik-L\u00e2n Do Dinh, and Iryna Gurevych. 2016. Still not there? comparing traditional sequence-to-sequence models to encoder- decoder neural networks on monotone string trans- lation tasks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1703-1714.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Japanese and korean voice search",
"authors": [
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nakajima",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5149--5152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Schuster and K. Nakajima. 2012. Japanese and ko- rean voice search. In IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152, Kyoto, Japan.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Offline handwritten devanagari word recognition: A holistic approach based on directional chain code feature and hmm",
"authors": [
{
"first": "Bikash",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Swapan",
"middle": [],
"last": "Kumar Parui",
"suffix": ""
},
{
"first": "Malayappan",
"middle": [],
"last": "Shridhar",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Information Technology (ICIT)",
"volume": "",
"issue": "",
"pages": "203--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikash Shaw, Swapan Kumar Parui, and Malayap- pan Shridhar. 2008. Offline handwritten devana- gari word recognition: A holistic approach based on directional chain code feature and hmm. In In- ternational Conference on Information Technology (ICIT), pages 203-208, Bhubaneswar, India. IEEE.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Can rnns reliably separate script and language at word and line level",
"authors": [
{
"first": "Ajeet Kumar",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2015,
"venue": "13th International Conference on Document Analysis and Recognition (ICDAR)",
"volume": "",
"issue": "",
"pages": "976--980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajeet Kumar Singh and CV Jawahar. 2015. Can rnns reliably separate script and language at word and line level? In 13th International Conference on Document Analysis and Recognition (ICDAR), pages 976-980. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "An overview of the tesseract ocr engine",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Ninth International Conference on Document Analysis and Recognition, (ICDAR)",
"volume": "2",
"issue": "",
"pages": "629--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Smith. 2007. An overview of the tesseract ocr en- gine. In Ninth International Conference on Docu- ment Analysis and Recognition, (ICDAR), volume 2, pages 629-633. IEEE.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Adapting the tesseract open source ocr engine for multilingual ocr",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Daria",
"middle": [],
"last": "Antonova",
"suffix": ""
},
{
"first": "Dar-Shyang",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Workshop on Multilingual OCR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Smith, Daria Antonova, and Dar-Shyang Lee. 2009. Adapting the tesseract open source ocr en- gine for multilingual ocr. In Proceedings of the In- ternational Workshop on Multilingual OCR, page 1. ACM.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Extraction and Recognition of Text from Multimedia Document Images",
"authors": [
{
"first": "Raymond",
"middle": [
"W"
],
"last": "Smith",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond W Smith. 1987. The Extraction and Recog- nition of Text from Multimedia Document Images. Ph.D. thesis, University of Bristol.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Can we build language-independent ocr using lstm networks?",
"authors": [
{
"first": "Adnan",
"middle": [],
"last": "Ul-Hasan",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"M"
],
"last": "Breuel",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 4th International Workshop on Multilingual OCR, MOCR '13",
"volume": "9",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adnan Ul-Hasan and Thomas M. Breuel. 2013. Can we build language-independent ocr using lstm net- works? In Proceedings of the 4th International Workshop on Multilingual OCR, MOCR '13, pages 9:1-9:5, New York, NY, USA. ACM.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Sample images from our test set with different stylistic parameters",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Heatmap of occurrences of majorly confusing character pairs between Ground Truth and OCR",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Figure 4: (a) and (b) show CRR for G\u012bt\u0101 and Saha\u015bran\u0101ma respectively, for the competing systems. (c) and (d) shows WRR for G\u012bt\u0101 and Saha\u015bran\u0101ma, respectively. All the entries with insufficient data-points were merged to the nearest smaller number.",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "Heatmap for occurrences of majorly confusing character pairs between Ground Truth and predictions of (a) PCRF model (b) CopyNet model",
"type_str": "figure",
"num": null
},
"FIGREF4": {
"uris": null,
"text": "Heatmap of mean copy score (copy) and mean generate score (gen), respectively for 6 (of 14) graphemes not present in the English alphabet. a model that takes word level input. The word level model shows a drop in the performance with a CRR and WRR of 86.42% and 66.54%, respectively. mean copy and generate scores for different predictions from (a) 'a' and (b) 's'.",
"type_str": "figure",
"num": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Image pre-processing steps and parameters are higher order CRF models"
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>CRR</td><td>WRR</td></tr><tr><td colspan=\"3\">Bhagavad G\u012bt\u0101 96.80% 71.23%</td></tr><tr><td>Saha\u015bran\u0101ma</td><td colspan=\"2\">82.81% 26.01%</td></tr><tr><td>Combined</td><td colspan=\"2\">87.88% 60.91%</td></tr></table>",
"html": null,
"text": "Performance in terms of CRR, WRR and Norm LP (acceptability) for all the competing models"
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"3\">Bhagavad G\u012bt\u0101</td><td colspan=\"2\">Saha\u015bran\u0101ma</td><td colspan=\"3\">System errors</td></tr><tr><td/><td colspan=\"3\">Ins Del Sub</td><td colspan=\"2\">Ins Del Sub</td><td colspan=\"3\">Ins Del Sub</td></tr><tr><td>OCR</td><td>23</td><td>63</td><td colspan=\"2\">1868 73</td><td colspan=\"2\">696 1596 -</td><td>-</td><td>-</td></tr><tr><td>PCRF</td><td>22</td><td>57</td><td>641</td><td>72</td><td>663 932</td><td>0</td><td>73</td><td>209</td></tr><tr><td colspan=\"2\">CopyNet 22</td><td>45</td><td>629</td><td>72</td><td>576 561</td><td>10</td><td>5</td><td>52</td></tr></table>",
"html": null,
"text": "Performance in terms of CRR, WRR for Google OCR"
}
}
}
}