{
"paper_id": "A94-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:39.518226Z"
},
"title": "Modeling Content Identification from Document Images",
"authors": [
{
"first": "Takehiro",
"middle": [],
"last": "Nakayama",
"suffix": "",
"affiliation": {
"laboratory": "Fuji Xerox Palo Alto Laboratory",
"institution": "",
"location": {
"addrLine": "3400 HiUview Avenue",
"postCode": "94304",
"settlement": "Palo Alto",
"region": "CA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A new technique to locate content-representing words for a given document image using abstract representation of character shapes is described. A character shape code representation defined by the location of a character in a text line has been developed. Character shape code generation avoids the computational expense of conventional optical character recognition (OCR). Because character shape codes are an abstraction of standard character code (e.g., ASCII), the mapping is ambiguous. In this paper, the ambiguity is shown to be practically limited to an acceptable level. It is illustrated that: first, punctuation marks are clearly distinguished from the other characters; second, stop words are generally distinguishable from other words, because the permutations of character shape codes in function words are characteristically different from those in content words; and third, numerals and acronyms in capital letters are distinguishable from other words. With these clAssifications, potential content-representing words are identified, and an analysis of their distribution yields their rank. Consequently, introducing character shape codes makes it possible to inexpensively and robustly bridge the gap between electronic documents and hardcopy documents for the purpose of content identification.",
"pdf_parse": {
"paper_id": "A94-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "A new technique to locate content-representing words for a given document image using abstract representation of character shapes is described. A character shape code representation defined by the location of a character in a text line has been developed. Character shape code generation avoids the computational expense of conventional optical character recognition (OCR). Because character shape codes are an abstraction of standard character code (e.g., ASCII), the mapping is ambiguous. In this paper, the ambiguity is shown to be practically limited to an acceptable level. It is illustrated that: first, punctuation marks are clearly distinguished from the other characters; second, stop words are generally distinguishable from other words, because the permutations of character shape codes in function words are characteristically different from those in content words; and third, numerals and acronyms in capital letters are distinguishable from other words. With these clAssifications, potential content-representing words are identified, and an analysis of their distribution yields their rank. Consequently, introducing character shape codes makes it possible to inexpensively and robustly bridge the gap between electronic documents and hardcopy documents for the purpose of content identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Documents are becoming increasingly available in machine-readable form. As they are stored automatically and transferred on networks, many natural lan-guAge processing techniques that identify their content have been developed to assist users with information retrieval and document classification. Conventionally, stored records axe identified by sets of keywords or phrases, known as index terms (Salton, 1991) .",
"cite_spans": [
{
"start": 398,
"end": 412,
"text": "(Salton, 1991)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although documents are increasingly being computer generated, they are still printed on paper for reading, dissemination, and markup. As it is believed that paper will remain a comfortable medium for reading and modification (O'Gorman and Kasturi, 1992) , development of content identifiCAtion techniques from a document image is still important. OCR is often used to convert a document image into machine-readable form, but processing performance is limited by the overhead of OCR (Mori et al., 1992; Nagy, 1992; Rice et al., 1993) . Because of the inaccuracy and expense of OCR, we decided to avoid using it.",
"cite_spans": [
{
"start": 225,
"end": 253,
"text": "(O'Gorman and Kasturi, 1992)",
"ref_id": "BIBREF7"
},
{
"start": 482,
"end": 501,
"text": "(Mori et al., 1992;",
"ref_id": "BIBREF4"
},
{
"start": 502,
"end": 513,
"text": "Nagy, 1992;",
"ref_id": "BIBREF5"
},
{
"start": 514,
"end": 532,
"text": "Rice et al., 1993)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Instead, we have developed a method that first makes generalizations about images of characters, then performs gross classification of the isolated characters and agglomerates these character shape codes into spatially isolated (word shape) tokens (Nakayama and Spitz, 1993; Sibun and Spitz, this volume) . Generating word shape tokens is inexpensive, fast, and robust. Word shape tokens are a potential alternative to character coded words when they are used for language determination and part-of-speech tagging (Nakayama and Spitz, 1993; Sibun and Spitz, this volume; Sibun and Farrar, 1994) . In this paper, we describe an extension of our approach to content identification.",
"cite_spans": [
{
"start": 248,
"end": 274,
"text": "(Nakayama and Spitz, 1993;",
"ref_id": "BIBREF6"
},
{
"start": 275,
"end": 304,
"text": "Sibun and Spitz, this volume)",
"ref_id": null
},
{
"start": 514,
"end": 540,
"text": "(Nakayama and Spitz, 1993;",
"ref_id": "BIBREF6"
},
{
"start": 541,
"end": 570,
"text": "Sibun and Spitz, this volume;",
"ref_id": null
},
{
"start": 571,
"end": 594,
"text": "Sibun and Farrar, 1994)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we introduce word shape tokens and their generation from document images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word shape token generation from image",
"sec_num": "2"
},
{
"text": "First, we classify characters by determining the characteristics of the text line. We identify the positions of the baseline and the x-height as shown in figure 1 (Spitz, 1993) .",
"cite_spans": [
{
"start": 163,
"end": 176,
"text": "(Spitz, 1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word shape token generation from image",
"sec_num": "2"
},
{
"text": "Next, we count the number of connected components in each character cell and note the position of those connected components with respect to the baseline and x-height (Nakayama and Spitz, 1993; Sibun and Spitz, this volume) . The basic character classes { A x i g j U ' -,. : = ! } and the members which constitute those classes are shown in Table 1 . In this paper, they are represented in bold-face type (e.g., Aigxx). Note that a character shape code subset {-,. : !} includes only punctuation marks. This is important for our cleaning process which will be described later.",
"cite_spans": [
{
"start": 167,
"end": 193,
"text": "(Nakayama and Spitz, 1993;",
"ref_id": "BIBREF6"
},
{
"start": 194,
"end": 223,
"text": "Sibun and Spitz, this volume)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word shape token generation from image",
"sec_num": "2"
},
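To make the classification concrete, here is a minimal Python sketch of character shape coding. The class memberships below only approximate Table 1 (the exact sets, and the fallback for unlisted characters, are assumptions), and `word_shape_token` is a hypothetical helper name, not the paper's implementation.

```python
# Approximate character shape code table (an assumption based on Table 1).
SHAPE_CODE = {}
for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789bdfhklt":
    SHAPE_CODE[c] = "A"          # extends above the x-height
for c in "acemnorsuvwxz":
    SHAPE_CODE[c] = "x"          # confined between baseline and x-height
for c in "gpqy":
    SHAPE_CODE[c] = "g"          # descends below the baseline
SHAPE_CODE.update({"i": "i", "j": "j", "'": "'"})
for c in "-,.:=!":
    SHAPE_CODE[c] = c            # punctuation marks keep their own codes

def word_shape_token(word):
    # Map a character-coded word to its word shape token; the "x" fallback
    # for unknown characters is an arbitrary choice for this sketch.
    return "".join(SHAPE_CODE.get(c, "x") for c in word)
```

For example, `word_shape_token("building")` yields "AxiAAixg", the token that figure 5 later associates with "building" and "Building".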
{
"text": "Figure 1: Text line parameter positions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "~~I~ t\u00b0p x-height baseline bottom",
"sec_num": null
},
{
"text": "Character shape codes are grouped by word boundary into word shape tokens (see Sibun and Spitz, this volume) . The correspondence between the scanned word image and the word shape token is one-to-one; that is, when a certain word shape token is selected, its original word image can be immediately located.",
"cite_spans": [
{
"start": 79,
"end": 108,
"text": "Sibun and Spitz, this volume)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "~~I~ t\u00b0p x-height baseline bottom",
"sec_num": null
},
{
"text": "Recocnizing word shape tokens from images is two or three orders of magnitude faster than conventional OCR (Spitz, 1994) , and is robust for real-world documents which are sometimes degraded by poor printing and which sometimes use more than a single font. '-,.:=! \"-,.:;=!?",
"cite_spans": [
{
"start": 107,
"end": 120,
"text": "(Spitz, 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "~~I~ t\u00b0p x-height baseline bottom",
"sec_num": null
},
{
"text": "The use of ouly 13 character shape codes instead of approximately 100 standard character codes results in a one-to-many correspondence between word shape tokens and words. Figure 2 shows how much character shape codes reduce word variation using the most frequent 10,000 English words (Carroll et al., 1971) in order of frequency. A word is defined as a string of graphic characters bounded on the left and right by spaces. Words are distinguished by their graphic characters. For example, \"apple\", \"Apple\", and \"apples\" are three different words, while \"will\" (modal) and \"will\" (noun) are the same. For the purpose of comparing the character shape code representation with the standard character code representation, the x axis represents the number of timquent words, and the y axis represents the number of distinct words represented in both ASCII and character shape codes. The number of words in ASCII naturally corresponds to the number of original words one-toone. On the other hand, the number of words in character shape codes (the number of word shape tokens) is less than half of the number of original words. This gap is a constraint on the accuracy of our approach, but we show it is not a serious limitation in the following section. ",
"cite_spans": [
{
"start": 285,
"end": 307,
"text": "(Carroll et al., 1971)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "~~I~ t\u00b0p x-height baseline bottom",
"sec_num": null
},
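A short sketch of how the reduction plotted in Figure 2 can be measured, assuming `words` is a frequency-ordered list such as Carroll et al.'s and reusing the hypothetical `word_shape_token` above:

```python
def reduction_curve(words):
    # For each prefix of the frequency-ordered word list, record how many
    # distinct words and how many distinct word shape tokens it contains.
    seen_words, seen_tokens, points = set(), set(), []
    for w in words:
        seen_words.add(w)
        seen_tokens.add(word_shape_token(w))
        points.append((len(seen_words), len(seen_tokens)))
    return points
```

At 10,000 words, the second coordinate would be the 3,679 tokens cited below.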
{
"text": "Text characterization is an important domain for natural language processing. Many published techniques utilize word frequencies of a text for information retrieval and text categorization (Jacobs, 1992; Cutting et al., 1993) . We also characterize the content of the document image by finding words that seem to specify the topic of the document. Briefly, our strategy is to identify the frequently occurring word shape tokens.",
"cite_spans": [
{
"start": 189,
"end": 203,
"text": "(Jacobs, 1992;",
"ref_id": null
},
{
"start": 204,
"end": 225,
"text": "Cutting et al., 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content identification",
"sec_num": "3"
},
{
"text": "In this section, we first describe a process of cleaning the input sequence which precedes the main procedures. Then, we illustrate how to collect the important tokens, introducing a stop list of common word shape tokens which is used to remove the tokens that are insufficiently specific to represent the content of the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content identification",
"sec_num": "3"
},
{
"text": "Given a sequence of word shape tokens, the system removes the specific character shape codes '-', ',', '.', ':', and '!' that do not contribute important linguistic information to the words to which they adhere, but that change the shape of the tokens. Otherwise, word shape would vary according to position and punctuation, which would interfere with token distribution analysis downstream. We ignore possible sentence initial word shape alteration by capitalization simply because it is almost impossible to presume the original shape. In this paper, capitalized words are counted differently from their uncapitalized counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning input sequence",
"sec_num": "3.1"
},
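A minimal sketch of this cleaning step, assuming tokens are plain strings over the 13 shape codes:

```python
# The five punctuation shape codes named above; stripping them keeps a
# token's shape stable across position and punctuation.
PUNCT_CODES = set("-,.:!")

def clean_token(token):
    return "".join(c for c in token if c not in PUNCT_CODES)
```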
{
"text": "Our cleaning process concatenates word shape tokens before and after the hyphen at the end of line. The process also deletes intended hyphens (e.g., AxxxA-xixAxA [broad-minded] --> AxxxAxixAxA). Eliminating hyphens reduces the variation of word shape tokens. We measured this effect using the aforementioned frequent 10,000 words. Forty-two words of 10,000 are hyphenated. In character shape code representation, 10,000 words map into 3,679 word shape tokens (figure 2). When hyphens are eliminated, the 10,000 words fall into 3,670 word shape tokens. This small reduction implies that eliminating hyphens does not practically affect the following process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning input sequence",
"sec_num": "3.1"
},
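One way to sketch the hyphen handling, assuming the input arrives as a list of lines, each a list of word shape tokens; the function name and data layout are assumptions:

```python
def dehyphenate(lines):
    # Join a token that ends in the hyphen code at a line break with the
    # first token of the next line; word-internal (intended) hyphens are
    # removed later by clean_token above.
    tokens, carry = [], ""
    for line in lines:
        if carry and line:
            line = [carry + line[0]] + line[1:]
            carry = ""
        if line and line[-1].endswith("-"):
            carry = line[-1][:-1]
            line = line[:-1]
        tokens.extend(line)
    if carry:
        tokens.append(carry)
    return tokens
```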
{
"text": "After cleaning is done, the system analyzes word shape token distribution. Word shape tokens are counted on the hypothesis that frequent ones correspond to words that represent content; however, tokens that correspond to function words are also very frequent. One problem awaiting solution is that of developing a technique to separate these two classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introducing a word shape token stop list",
"sec_num": "3.2"
},
{
"text": "In tiffs paper, we define function words as the set of {prepositions, determiners, conjunctions, pronouns, modals, be and have surface forms }, and content words as the set of {nouns, verbs (excluding modals and be and have surface forms), adjectives }. Words that belong in both categories are defined as function words. We exclude adverbs from both, because they sometimes behave as function words and sometimes as content words. Words that can be adverbs but also can be either a function or a context word are not counted as adverbs. In English, function words tend to be short whereas content words tend to be long. For the purpose of investigating characteristics of function and content words in character shape code representation, we compiled a lexicon of 71,372 distinct word shape token entries from an ASCII-represented lexicon of 245,085 word entries which was provided by Xerox PARC and was modified in our laboratory. 254 word shape token entries of the lexicon correspond to 515 function words, 63,356 entries correspond to 226,648 content words, and 209 entries correspond to both function and content words. Finally, 8,921 word shape token entries correspond to 17,922 adverbs. Figure 3 shows the distribution of word shape token length. Frequency of occurrence of word shape tokens was not taken into account; that is, we simply counted the length of each entry and computed the population ratio. The distribution of content words is apparendy different from that of function words. In the figure, we also record the distribution of word shape tokens corresponding to the 100 most frequent words (75 function words, 16 content words, and 9 adverbs) from the source (Carroll et al., 1971) . It illustrates that very common words are short. The length of word shape token A stop list of the most common function word shape tokens was constructed so that they could be removed from sequences of word shape tokens. It is important to select the right word shape tokens for this list, which must selectively remove more function words than content words. In general, the larger the list, the more it removes both function and content words. Thinking back to our goal of finding frequent content words, we don't need to try to remove all function words. We need only to remove the function words that are nsually more frequent than content-representing frequent words in the text on the assumption that the frequency of individual function words is almost independent of topic. Infrequent function words that remain after using the word shape token stop list are distinguishable from frequent content words by comparing their frequencies.",
"cite_spans": [
{
"start": 1684,
"end": 1706,
"text": "(Carroll et al., 1971)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1196,
"end": 1204,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Introducing a word shape token stop list",
"sec_num": "3.2"
},
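The construction described here can be sketched in a few lines, assuming `freq` maps each function word to its relative frequency in a corpus such as Carroll's (the data format is an assumption):

```python
def build_stop_list(freq, threshold=0.0005):
    # Convert every function word at or above the frequency threshold
    # (0.05% is the cutoff the paper settles on) into its shape token.
    return {word_shape_token(w) for w, f in freq.items() if f >= threshold}
```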
{
"text": "We generated a word shape token stop list using Carroll's list of frequent function words. We selected several sets of the most freq~ent function words, by limiting the minimum frequency of words in the set to 1%, 0.5%, 0.1%, 0.09%, 0.08% ..... 0%, then converted them into word shape tokens. We tested these word shape tokens on the aforementioned lexicon to count the number of matching entries. Table 2 gives part of the results, where Freq.FW stands for frequencies of the selected function words, # FW for the number of them, # stop-tokens for the number of word shape tokens derived from them, FW.Match for a ratio of the number of matching function words to the total number of function words in the lexicon (515), and CW.Match for a ratio of the number of matching content words to the total number of content words (226,648). A word shape token stop list, for instance, from function words whose frequencies are more than 0.5% removes 0.4% of content words and 18% of function words from the lexicon; a word shape token stop list from function words with frequencies more than 0.01% removes 4.2% of content and 56% of function words; and a word shape token stop list from all function words in the lexicon removes 9.5% of content words. Function words (frequency > 0.05%) the of and a to in is you that it he for was on are as with his they at be this from I have or by one had but what all were when we there can an your which their ff will each about up out them she many some so these would other into has her like him could no than been its who now my over down only may after where most through before our me any same around another must because such off every between should under us along while might next below something both few those We also tested these word shape token stop lists on ASCII encoded documents, and discovered that good results are obtained with the lists derived from function words with frequencies of more than 0.05%. This list identifies all words that occur more than 5 times per 10,000 in the document. Figure 4 shows the selected function words and the corresponding word shape token stop list. The number of stop tokens is 57 for 101 ftmelion words. Table 2 shows that the list removes 2.9% of content words and 44% of function words from the lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 405,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 2044,
"end": 2052,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 2193,
"end": 2200,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introducing a word shape token stop list",
"sec_num": "3.2"
},
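A sketch of how the FW.Match and CW.Match columns of Table 2 can be computed against the lexicon, with hypothetical argument names:

```python
def match_ratios(stop_tokens, function_words, content_words):
    # Count lexicon entries whose shape token falls in the stop list.
    fw = sum(word_shape_token(w) in stop_tokens for w in function_words)
    cw = sum(word_shape_token(w) in stop_tokens for w in content_words)
    return fw / len(function_words), cw / len(content_words)
```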
{
"text": "In our character classification, all numeric characters are represented by the character shape code A (Table 1) . Therefore, after cleaning is done, all numerals in a text fall into word shape tokens A*, where * means zero or more repetitions of A. This sometimes makes the frequency of A* unreasonably high though numerals are often of little importance in content identification.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 111,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Augmentation of the word shape token stop list",
"sec_num": "3.3"
},
{
"text": "A* matches all numerals, but since it matches few content words except for acronyms in capital letters, we decided to add A* to the word shape token stop list. 3.4 Testing the word shape token stop list",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation of the word shape token stop list",
"sec_num": "3.3"
},
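Since A* is a pattern rather than a single token, the stop test needs a small extension; a sketch (the function name is an assumption):

```python
import re

def is_stopped(token, stop_tokens):
    # A non-empty run of A's covers numerals (and all-capital acronyms)
    # once cleaning has removed the punctuation codes.
    return token in stop_tokens or re.fullmatch(r"A+", token) is not None
```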
{
"text": "Our word shape token stop list was tested on 20 ASCII encoded sample documents, ranged in length from 571 to 13,756 words, from a variety of sources including business reports, travel guides, advertisements, techni-cal reports, and private letters. First, we generated word shape tokens, and cleaned as described earlier. Next, we removed tokens that were on the word shape token stop list. Table 3 shows the number of content and function words which the documents consist of before and after using the list. In the table, CW.1 and CW.2 stand for the number of distinct content words in the original document and the number after using the word shape token stop list, respectively. CW.R stands for a ratio of (CW.1 -CW.2) to CW.1. Similarly, FW.1 and FW.2 stand for the number of distinct function words before and after using the list, and FW.R is a ratio of (FW.1 -FW.2) to FW.1. FW.R is much larger than CW.R in all sample documents, which shows the good performance of the word shape token stop list. We should note that the values of CW.R are larger than the 2.9% that we get from testing the list on the lexicon. This is because the lexicon includes many uncommon words and these tend to be longer than the function words selected to make the word shape token stop list. This implies that our list removes more ordinary content words than uncommon ones. We believe that removing such words affects content identification little since ordinary content words in many topics usually don't The} { to, be, Fr, In, As, On, An, (e} { 1988 , 1989 , 2000 , 1990 , 1987 , +5%), +28%, +27%, +18%, +14%} {of, at} {and, out, not, end, act} {in, is} { for, due, ten, low, For, Rnz } {some, over, were, more, same, rose, ease } {6%, 5%, At, 9%, 7%, 4%, 3%, 1%, 8%, 2%, 11, 10, +6} Ibuilding, Building} {4, 9, 8, 6, 3, 2, 1, 0, R, A, 5} { work, real, cost, such, much, most } {90%, 83%, 847, 80%, 5%) Figure 5 : Most frequent word shape tokens and corresponding words specify the content of the document well (Salton et al., 1975) . Likewise, the values of FW.R are larger Chart 44% for the same reason. After using the word shape token stop list, we counted the remaining tokens to obtain content-representing words in their frequency order. All samples successfully indicated their content by appropriate words.",
"cite_spans": [
{
"start": 2000,
"end": 2021,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1892,
"end": 1900,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Augmentation of the word shape token stop list",
"sec_num": "3.3"
},
{
"text": "Data for a sample document reporting the growth of the building industry in Switzerland in 1988 and its outlook for 1989, consisting of 1013 words, are shown in figure 5. It shows top frequent word shape token ranking of the original document and the new ranking after using the word shape token stop list. The number of removed tokens was 544. Most of them represented common words and numerals. The top ranking after using the word shape token stop list consists of content words and represents the content of the document much better than the ranking before using it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation of the word shape token stop list",
"sec_num": "3.3"
},
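The final ranking step amounts to a frequency count over the cleaned, stop-filtered tokens; a sketch reusing the hypothetical `is_stopped` above:

```python
from collections import Counter

def rank_tokens(tokens, stop_tokens):
    # Rank the surviving word shape tokens by frequency; because each
    # token maps one-to-one back to its word images, OCR can then be
    # run on just the few top-ranked images.
    counts = Counter(t for t in tokens if not is_stopped(t, stop_tokens))
    return counts.most_common()
```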
{
"text": "Figure 5 also suggests that we can inexpensively locate key words by performing OCR on the few frequent word shape tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation of the word shape token stop list",
"sec_num": "3.3"
},
{
"text": "Generating word shape tokens from images is inexpensive and robust for real-world documents. Word shape tokens do not carry alphabetical information, but they are potentially usable for content identification by locating content-representing word images. Our method uses a word shape token stop list and analyzes the dislribution of tokens. This technique depends on the observation that, in English, the characteristics of word shape differ between function and content words, and between frequent and infrequent words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and further directions",
"sec_num": "4"
},
{
"text": "We expect to be able to extend the technique to many other European languages that have similar characteristics. For example, German function words tend to be shorter than nouns, which are always capitalized. In addition, by drawing on our language determination technique, which uses the same word shape tokens (Nakayama and Spitz, 1993; Sibun and Spitz, this volume) , we could enhance the technique described here for multilingual sources.",
"cite_spans": [
{
"start": 312,
"end": 338,
"text": "(Nakayama and Spitz, 1993;",
"ref_id": "BIBREF6"
},
{
"start": 339,
"end": 368,
"text": "Sibun and Spitz, this volume)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and further directions",
"sec_num": "4"
},
{
"text": "Other future work involves examining automatic document categorization in which an input document image is assigned to some pre-existing subject category (Cavnar and Trenlde, 1994) . With reliable training data, we feel we can identify the configuration of word shape tokens across subjects. Using a statistical method to compute the distance between input and configurations of categories would be a good approach. This might be useful for document sorting service for fax machines.",
"cite_spans": [
{
"start": 154,
"end": 180,
"text": "(Cavnar and Trenlde, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and further directions",
"sec_num": "4"
}
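The paper does not fix a distance measure; as one plausible instantiation, a cosine distance between a document's token-frequency profile and a per-category profile could be used (all names here are hypothetical):

```python
import math

def cosine_distance(doc_counts, category_counts):
    # Both arguments map word shape tokens to counts; smaller distance
    # means the document is closer to the category profile.
    keys = set(doc_counts) | set(category_counts)
    dot = sum(doc_counts.get(k, 0) * category_counts.get(k, 0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in doc_counts.values()))
            * math.sqrt(sum(v * v for v in category_counts.values())))
    return 1.0 - dot / norm if norm else 1.0
```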
],
"back_matter": [
{
"text": "The author gratefully acknowledges helpful suggestions by Larry Spitz and Penni Sibun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The American Heritage word frequency book",
"authors": [
{
"first": "John",
"middle": [
"B"
],
"last": "Carroll",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Davies",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Richman",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John B. Carroll, Peter Davies, and Barry Richman, The American Heritage word frequency book, Boston, Houghton-Mifflin, 1971.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "N-Gram-Based Text Categorization",
"authors": [
{
"first": "William",
"middle": [
"B"
],
"last": "Cavnar",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Trenkle",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Third Annual Symposium on Document Analysis and Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Cavnar and John M. Trenkle, N-Gram-Based Text Categorization, Proceedings of the Third Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, U.S.A., 1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Constant Interaction-Tune Scatter/Gather Browsing of Very Large Document Collections",
"authors": [
{
"first": "Douglass",
"middle": [
"R"
],
"last": "Cutting",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Karger",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "Pedersen",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 16th Annual International ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglass R. Cutting, David R. Karger, and Jan O. Pedersen, Constant Interaction-Tune Scatter/Gather Browsing of Very Large Document Collections, Proceedings of the 16th Annual International ACM SIGIR Conference, Pittsburgh, U.S.A., 1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joining Statistics with NLP for Text Categorization",
"authors": [
{
"first": "Paul",
"middle": [
"S"
],
"last": "Jacob",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Third Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul S. Jacob, Joining Statistics with NLP for Text Categori- zation, Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, 1992.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Historical Review of OCR Research and Development, Proceedings of IEEE",
"authors": [
{
"first": "Shunji",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Ching",
"middle": [
"Y"
],
"last": "Suen",
"suffix": ""
},
{
"first": "Kazuhiko",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "80",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shunji Mori, Ching Y. Suen, and Kazuhiko Yamamoto, His- torical Review of OCR Research and Development, Pro- ceedings of IEEE, Vol. 80, No. 7, 1992.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "At the Frontiers of OCR",
"authors": [
{
"first": "George",
"middle": [],
"last": "Nagy",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of IEEE",
"volume": "80",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Nagy, At the Frontiers of OCR, Proceedings of IEEE, Vol. 80, No. 7, 1992.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "European Language Determination from Image",
"authors": [
{
"first": "Takehiro",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "A",
"middle": [
"Lawrence"
],
"last": "Spitz",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Second International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takehiro Nakayama and A. Lawrence Spitz, European Lan- guage Determination from Image, Proceedings of the Second International Conference on Document Analysis and Recognition, Tsukuba Science City, Japan, 1993.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Document Image Analysis Systems",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "O'Gorman",
"suffix": ""
},
{
"first": "Rangachar",
"middle": [],
"last": "Kasturi",
"suffix": ""
}
],
"year": 1992,
"venue": "Computer July",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence O'Gorman and Rangachar Kasturi, Document Image Analysis Systems, Computer July 1992 Vol. 25.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Evaluation of OCR Accuracy, Information Science Research Institute",
"authors": [
{
"first": "Stephen",
"middle": [
"V"
],
"last": "Rice",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Kanai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"A"
],
"last": "Nartker",
"suffix": ""
}
],
"year": 1993,
"venue": "Annual Research Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen V. Rice, Junichi Kanai and Thomas A. Nartker, An Evaluation of OCR Accuracy, Information Science Research Institute 1993 Annual Research Report, Univer- sity of Nevada, Las Vegas.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Developments in Automatic Text Retrieval",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "253",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton, Developments in Automatic Text Retrieval, Science Vol. 253, No. 5023, Aug. 30, 1991.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Vector Space Model for Automatic Indexing",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "C",
"middle": [
"S"
],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Communications of the ACM November",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton, A. Wong, and C. S. Young, A Vector Space Model for Automatic Indexing, Communications of the ACM November 1975 Vol. 18.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Content Characterization Using Word Shape Tokens",
"authors": [
{
"first": "Penelope",
"middle": [],
"last": "Sibun",
"suffix": ""
},
{
"first": "David",
"middle": [
"S"
],
"last": "Farrar",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penelope Sibun and David S. Farrar, Content Characterization Using Word Shape Tokens, Proceedings of the 15th Inter- national Conference on Computational Linguistics, Kyoto, Japan, 1994.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Language Determination: Natural Language Processing from Scanned Document Images",
"authors": [
{
"first": "Penelope",
"middle": [],
"last": "Sibun",
"suffix": ""
},
{
"first": "A",
"middle": [
"Lawrence"
],
"last": "Spitz",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penelope Sibun and A. Lawrence Spitz, Language Determina- tion: Natural Language Processing from Scanned Docu- ment Images, this volume.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generalized Line Word and Character Finding",
"authors": [
{
"first": "A",
"middle": [
"Lawrence"
],
"last": "Spitz",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the International Conference on Image Analysis and Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Lawrence Spitz, Generalized Line Word and Character Finding, Proceedings of the International Conference on Image Analysis and Processing, Bad, Italy, 1993.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using Character Shape Codes for Word Spotting in Document Images",
"authors": [
{
"first": "A",
"middle": [
"Lawrence"
],
"last": "Spitz",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Third International Workshop on Syntactic and Structural Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Lawrence Spitz, Using Character Shape Codes for Word Spotting in Document Images, Proceedings of the Third International Workshop on Syntactic and Structural Pattern Recognition, Haifa, Israel, 1994.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "ASCII and character shape code"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Word shape token length distribution"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Selected function words and word shape token stop list"
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "Shape class code membership",
"html": null,
"content": "<table><tr><td>character shape code</td><td>members</td></tr><tr><td>A</td><td>A-Z lxlfllkltB 0-9 +#$&amp;0/&lt;&gt;[]@{ }1</td></tr><tr><td>x</td><td>acemnorsuvwxz</td></tr><tr><td>g</td><td>gPqY~</td></tr><tr><td>J</td><td>j</td></tr><tr><td>U</td><td>ii~auO0</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Application of word shape token stop list to lexicon FW.",
"html": null,
"content": "<table><tr><td>CW.</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Testing the word shape token stop list on sample documents",
"html": null,
"content": "<table><tr><td/><td>CW.1</td><td>CW.R</td><td>FW.I</td><td>FW.R</td></tr><tr><td>sample</td><td>--' CW.2</td><td>(%)</td><td>--, FW.2</td><td>(%)</td></tr><tr><td>doe.1</td><td>347 -* 321</td><td>7.5</td><td>74 --~ 18</td><td>76</td></tr><tr><td>doe.2</td><td>246 --* 221</td><td>10</td><td>63 --~ 7</td><td>89</td></tr><tr><td>doe.3</td><td>245 --* 225</td><td>8.2</td><td>61 --* 10</td><td>85</td></tr><tr><td>doe.4</td><td>292 --* 272</td><td>6.8</td><td>61 --* 7</td><td>89</td></tr><tr><td>doe.5</td><td>279 -* 265</td><td>5.0</td><td>71 --* 16</td><td>78</td></tr><tr><td>doe.6</td><td>255 ~ 236</td><td>7.5</td><td>56 --* 12</td><td>79</td></tr><tr><td>doe.7</td><td>177 --* 164</td><td>7.3</td><td>53 --* 14</td><td>74</td></tr><tr><td>doe.8</td><td>253 --* 231</td><td>8.7</td><td>71 -~ 17</td><td>76</td></tr><tr><td>doe.9</td><td>227 --* 214</td><td>5.7</td><td>64 --* 11</td><td>83</td></tr><tr><td>doe.10</td><td>239 --* 218</td><td>8.8</td><td>63 --* 14</td><td>78</td></tr><tr><td>doe.ll</td><td>233 ~ 212</td><td>9.0</td><td>62 --* 10</td><td>84</td></tr><tr><td>doe.12</td><td>294 -* 265</td><td>9.9</td><td>58 --* 12</td><td>79</td></tr><tr><td>doe.13</td><td>233-~ 212</td><td>9.0</td><td>57 ~ 12</td><td>79</td></tr><tr><td>doe.14</td><td>271 --* 248</td><td>8.5</td><td>59 --* 13</td><td>78</td></tr><tr><td>doe.15</td><td>130 -* 115</td><td>12</td><td>42 --* 5</td><td>88</td></tr><tr><td>doe.16</td><td>1582--* 1513</td><td>4.4</td><td>150--* 45</td><td>70</td></tr><tr><td>doe.17</td><td>453 --* 409</td><td>9.7</td><td>99 --* 17</td><td>83</td></tr><tr><td>doe.18</td><td>292 --~ 249</td><td>15</td><td>75 --* 8</td><td>89</td></tr><tr><td>doe.19</td><td>1189 --* 1046</td><td>12</td><td>157-~ 35</td><td>78</td></tr><tr><td>doc.20</td><td>309 -* 286</td><td>7.4</td><td>73 -~ 6</td><td>92</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">, 49%, 29%, 27%, 26%, 23%, 21%, 19%, 175, 14%, 13%}</td></tr><tr><td/><td>la, s}</td><td/><td/></tr><tr><td/><td>{ year, pace, grew }</td><td/><td/></tr><tr><td/><td>{by, By}</td><td/><td/></tr><tr><td/><td>! was, are, new, can, saw, own, one }</td><td/><td/></tr><tr><td/><td>{ construction }</td><td/><td/></tr><tr><td/><td>{ expanded, expected, reported }</td><td/><td/></tr><tr><td colspan=\"2\">{on, as22 AxiAAixg {building, Building}</td><td colspan=\"2\">token ranking and corresponding words 6 Aixxx {firms, Since}</td></tr><tr><td colspan=\"2\">12 xxxxAxxxAixx {construction}</td><td>6 AixxA</td><td>{first, fixed}</td></tr><tr><td>10 xxgxxAxA</td><td>{expanded, expected, reported}</td><td>5 ixxxxxxx</td><td>{increase}</td></tr><tr><td>8 xxxAxx</td><td>{ sector, number }</td><td>5 AxxxxA</td><td>{demand, traced, lowest}</td></tr><tr><td>7 xxgixxxxixg</td><td>{ engineering }</td><td>5 Axxxixg</td><td>{housing}</td></tr><tr><td>7 xxAxxx</td><td>{ volume, orders, return }</td><td>5 AiAAixx</td><td>{billion}</td></tr><tr><td>7 ixAxxAxixA</td><td>{ industrial }</td><td>4 xxxAxxxAx</td><td>{contracts}</td></tr><tr><td>7 gxxxAA</td><td>{growth}</td><td>4 xxAx</td><td>{rate }</td></tr><tr><td>7 grdxxx</td><td>{prices}</td><td>4 ixAxxixx</td><td>{ interior }</td></tr><tr><td>7 AxAxAxx</td><td>{ October }</td><td>4 gxxxxAixg</td><td>{preceding}</td></tr><tr><td>6 xxxxix~</td><td>learnings}</td><td>4 gxixA</td><td>{point}</td></tr><tr><td>6 AxxAx</td><td>{ trade }</td><td>4 AxxxxAxx</td><td>{branches}</td></tr><tr><td>6 AxxAAxg</td><td>{ backlog }</td><td>4 Axixx</td><td>{ Swiss }</td></tr></table>"
}
}
}
}