|
{ |
|
"paper_id": "W06-0109", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:04:13.741254Z" |
|
}, |
|
"title": "The Role of Lexical Resources in CJK Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The role of lexical resources is often understated in NLP research. The complexity of Chinese, Japanese and Korean (CJK) poses special challenges to developers of NLP tools, especially in the area of word segmentation (WS), information retrieval (IR), named entity extraction (NER), and machine translation (MT). These difficulties are exacerbated by the lack of comprehensive lexical resources, especially for proper nouns, and the lack of a standardized orthography, especially in Japanese. This paper summarizes some of the major linguistic issues in the development NLP applications that are dependent on lexical resources, and discusses the central role such resources should play in enhancing the accuracy of NLP tools.", |
|
"pdf_parse": { |
|
"paper_id": "W06-0109", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The role of lexical resources is often understated in NLP research. The complexity of Chinese, Japanese and Korean (CJK) poses special challenges to developers of NLP tools, especially in the area of word segmentation (WS), information retrieval (IR), named entity extraction (NER), and machine translation (MT). These difficulties are exacerbated by the lack of comprehensive lexical resources, especially for proper nouns, and the lack of a standardized orthography, especially in Japanese. This paper summarizes some of the major linguistic issues in the development NLP applications that are dependent on lexical resources, and discusses the central role such resources should play in enhancing the accuracy of NLP tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Developers of CJK NLP tools face various challenges, some of the major ones being:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. Identifying and processing the large number of orthographic variants in Japanese, and alternate character forms in CJK languages. 2. The lack of easily available comprehensive lexical resources, especially lexical databases, comparable to the major European languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "and Traditional Chinese (Halpern and Kerman 1999) . 4. The morphological complexity of Japanese and Korean. 5. Accurate word segmentation (Emerson 2000 and and disambiguating ambiguous segmentations strings (ASS) (Zhou and Yu 1994) . 6. The difficulty of lexeme-based retrieval and CJK CLIR (Goto et al. 2001 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 49, |
|
"text": "(Halpern and Kerman 1999)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 151, |
|
"text": "(Emerson 2000", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "(Zhou and Yu 1994)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 308, |
|
"text": "(Goto et al. 2001", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The accurate conversion between Simplified", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "7. Chinese and Japanese proper nouns, which are very numerous, are difficult to detect without a lexicon. 8. Automatic recognition of terms and their variants (Jacquemin 2001 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 174, |
|
"text": "(Jacquemin 2001", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The accurate conversion between Simplified", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The various attempts to tackle these tasks by statistical and algorithmic methods (Kwok 1997 ) have had only limited success. An important motivation for such methodology has been the poor availability and high cost of acquiring and maintaining large-scale lexical databases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 92, |
|
"text": "(Kwok 1997", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The accurate conversion between Simplified", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "This paper discusses how a lexicon-driven approach exploiting large-scale lexical databases can offer reliable solutions to some of the principal issues, based on over a decade of experience in building such databases for NLP applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The accurate conversion between Simplified", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Named Entity Recognition (NER) is useful in NLP applications such as question answering, machine translation and information extraction. A major difficulty in NER, and a strong motivation for using tools based on probabilistic methods, is that the compilation and maintenance of large entity databases is time consuming and expensive. The number of personal names and their variants (e.g. over a hundred ways to spell Mohammed) is probably in the billions. The number of place names is also large, though they are relatively stable compared with the names of organizations and products, which change frequently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A small number of organizations, including The CJK Dictionary Institute (CJKI), maintain databases of millions of proper nouns, but even such comprehensive databases cannot be kept fully up-to-date as countless new names are created daily. Various techniques have been used to automatically detect entities, one being the use of keywords or syntactic structures that co-occur with proper nouns, which we refer to as named entity contextual clues (NECC). Table 1 shows NECCs for Japanese proper nouns, which when used in conjunction with entity lexicons like the one shown in Table 2 below achieve high precision in entity recognition. Of course for NER there is no need for such lexicons to be multilingual, though it is obviously essential for MT. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 461, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 582, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entity Extraction", |
|
"sec_num": "2" |
|
}, |
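{

"text": "As a sketch of how this can work in practice, the following minimal example (hypothetical data and function names, not CJKI's actual system) combines NECC suffixes like those in Table 1 with an entity lexicon: a clue suffix triggers a longest-match check against the lexicon.\n\n# NECC-driven lookup: a clue suffix triggers a lexicon check (illustrative only)\nNECC_SUFFIXES = {'\u30bb\u30f3\u30bf\u30fc': 'facility', '\u30db\u30c6\u30eb': 'hotel', '\u99c5': 'station', '\u5354\u4f1a': 'organization'}\nENTITY_LEXICON = {'\u56fd\u6c11\u751f\u6d3b\u30bb\u30f3\u30bf\u30fc', '\u30db\u30c6\u30eb\u30b7\u30aa\u30ce', '\u671d\u971e\u99c5', '\u65e5\u672c\u30e6\u30cb\u30bb\u30d5\u5354\u4f1a'}\n\ndef find_entities(text):\n    # return (entity, type) pairs whose NECC suffix and full form both match\n    found = []\n    for suffix, etype in NECC_SUFFIXES.items():\n        start = 0\n        while (i := text.find(suffix, start)) != -1:\n            end = i + len(suffix)\n            # scan leftward for the longest candidate ending in the clue\n            for j in range(max(0, i - 10), i):\n                if text[j:end] in ENTITY_LEXICON:\n                    found.append((text[j:end], etype))\n                    break\n            start = end\n    return found\n\nprint(find_entities('\u6628\u65e5\u3001\u56fd\u6c11\u751f\u6d3b\u30bb\u30f3\u30bf\u30fc\u306b\u884c\u3063\u305f\u3002'))  # [('\u56fd\u6c11\u751f\u6d3b\u30bb\u30f3\u30bf\u30fc', 'facility')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entity Extraction",

"sec_num": "2"

},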
|
{ |
|
"text": "Azerbaijan \u30a2\u30bc\u30eb\u30d0\u30a4\u30b8\u30e3\u30f3 \u963f\u585e\u62dc\u7586 L \u4e9e\u585e\u62dc\u7136 \uc544\uc81c\ub974\ubc14\uc774\uc794 Caracas \u30ab\u30e9\u30ab\u30b9 \u52a0\u62c9\u52a0\u65af L \u5361\u62c9\u5361\u65af \uce74\ub77c\uce74\uc2a4 Cairo \u30ab\u30a4\u30ed \u5f00\u7f57 O \u958b\u7f85 \uce74\uc774\ub85c Chad \u30c1\u30e3\u30c9 \u4e4d\u5f97 L \u67e5\u5fb7 \ucc28\ub4dc New Zealand \u30cb\u30e5\u30fc\u30b8\u30fc\u30e9\u30f3\u30c9 \u65b0\u897f\u5170 L \u7d10\u897f\u862d \ub274\uc9c8\ub79c\ub4dc Seoul \u30bd\u30a6\u30eb \u9996\u5c14 O \u9996\u723e \uc11c\uc6b8 Seoul \u30bd\u30a6\u30eb \u6c49\u57ce O \u6f22\u57ce \uc11c\uc6b8 Yemen \u30a4\u30a8\u30e1\u30f3 \u4e5f\u95e8 L \u8449\u9580 \uc608\uba58", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Note how the lexemic pairs (\"L\" in the LO column) in Table 2 above are not merely simplified and traditional orthographic (\"O\") versions of each other, but independent lexemes equivalent to American truck and British lorry.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 60, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entity Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "NER, especially of personal names and place names, is an area in which lexicon-driven methods have a clear advantage over probabilistic methods and in which the role of lexical resources should be a central one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A major issue for Chinese segmentors is how to treat compound words and multiword lexical units (MWU), which are often decomposed into their components rather than treated as single units. For example, \u5f55\u50cf\u5e26 l\u00f9xi\u00e0ngd\u00e0i 'video cassette' and \u673a\u5668\u7ffb\u8bd1 j\u012bqif\u0101ny\u00ec 'machine translation' are not tagged as segments in Chinese Gigaword, the largest tagged Chinese corpus in existence, processed by the CKIP morphological analyzer (Ma 2003 This last point is important enough to merit elaboration. A user searching for \u4e2d \u56fd \u4eba zh\u014dnggu\u00f3r\u00e9n 'Chinese (person)' is not interested in \u4e2d\u56fd 'China', and vice-versa. A search for \u4e2d \u56fd should not retrieve \u4e2d\u56fd\u4eba as an instance of \u4e2d\u56fd. Exactly the same logic should apply to \u673a \u5668\u7ffb\u8bd1, so that a search for that keyword should only retrieve documents containing that string in its entirety. Yet performing a Google search on \u673a\u5668\u7ffb\u8bd1 in normal mode gave some 2.3 million hits, hundreds of thousands of which had zero occurrences of \u673a \u5668 \u7ffb \u8bd1 but numerous occurrences of unrelated words like \u673a\u5668\u4eba 'robot', which the user is not interested in.", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 424, |
|
"text": "(Ma 2003", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This is equivalent to saying that headwaiter should not be considered an instance of waiter, which is indeed how Google behaves. More to the point, English space-delimited lexemes like high school are not instances of the adjective high. As shown in Halpern (2000b) , \"the degree of solidity often has nothing to do with the status of a string as a lexeme. School bus is just as legitimate a lexeme as is headwaiter or wordprocessor. The presence or absence of spaces or hyphens, that is, the orthography, does not determine the lexemic status of a string.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 265, |
|
"text": "Halpern (2000b)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In a similar manner, it is perfectly legitimate to consider Chinese MWUs like those shown below as indivisible units for most applications, especially information retrieval and machine translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u4e1d\u7ef8\u4e4b\u8def s\u012bch\u00f3uzh\u012bl\u00f9 silk road \u673a\u5668\u7ffb\u8bd1 j\u012bqif\u0101ny\u00ec machine translation \u7231\u56fd\u4e3b\u4e49 \u00e0igu\u00f3zh\u01d4y\u00ec patriotism \u5f55\u50cf\u5e26 l\u00f9xi\u00e0ngd\u00e0i video cassette \u65b0\u897f\u5170 X\u012bnx\u012bl\u00e1n New Zealand \u4e34\u9635\u78e8\u67aa l\u00ednzh\u00e8nm\u00f3qi\u0101ng start to prepare at the last moment One could argue that \u673a\u5668\u7ffb\u8bd1 is compositional and therefore should be considered \"two words.\" Whether we count it as one or two \"words\" is not really relevant -what matters is that it is one lexeme (smallest distinctive units associating meaning with form). On the other extreme, it is clear that idiomatic expressions like \u4e34\u9635\u78e8\u67aa, literally \"sharpen one's spear before going to battle,\" meaning 'start to prepare at the last moment,' are indivisible units.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Predicting compositionality is not trivial and often impossible. For many purposes, the only practical solution is to consider all lexemes as indivisible. Nonetheless, currently even the most advanced segmentors fail to identify such lexemes and missegment them into their constituents, no doubt because they are not registered in the lexicon. This is an area in which expanded lexical resources can significantly improve segmentation accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
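{

"text": "A minimal sketch of the lexicon-driven alternative: a greedy longest-match segmenter (illustrative only, not a production algorithm) keeps an MWU intact precisely when it is registered in the lexicon. The lexicon below mixes MWUs from this section with a few ordinary words (\u6211\u4eec, \u7814\u7a76) added for the example.\n\n# forward maximum matching: prefer the longest lexicon entry at each position\nLEXICON = {'\u673a\u5668\u7ffb\u8bd1', '\u673a\u5668', '\u7ffb\u8bd1', '\u7814\u7a76', '\u6211\u4eec', '\u4e1d\u7ef8\u4e4b\u8def', '\u7231\u56fd\u4e3b\u4e49'}\n\ndef segment(text, lexicon, max_len=6):\n    tokens, i = [], 0\n    while i < len(text):\n        for length in range(min(max_len, len(text) - i), 0, -1):\n            cand = text[i:i + length]\n            if length == 1 or cand in lexicon:\n                tokens.append(cand)\n                i += length\n                break\n    return tokens\n\nprint(segment('\u6211\u4eec\u7814\u7a76\u673a\u5668\u7ffb\u8bd1', LEXICON))\n# ['\u6211\u4eec', '\u7814\u7a76', '\u673a\u5668\u7ffb\u8bd1'] -- with \u673a\u5668\u7ffb\u8bd1 registered it stays whole;\n# without that entry the same call yields ['\u6211\u4eec', '\u7814\u7a76', '\u673a\u5668', '\u7ffb\u8bd1']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Processing Multiword Units",

"sec_num": "3.1"

},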
|
{ |
|
"text": "In conclusion, lexical items like \u673a\u5668\u7ffb\u8bd1 'machine translation' represent stand-alone, welldefined concepts and should be treated as single units. The fact that in English machineless is spelled solid and machine translation is not is an historical accident of orthography unrelated to the fundamental fact that both are full-fledged lexemes each of which represents an indivisible, independent concept. The same logic applies to \u673a\u5668\u7ffb\u8bd1,which is a full-fledged lexeme that should not be decomposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Multiword Units", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Chinese MWUs can consist of nested components that can be segmented in different ways for different levels to satisfy the requirements of different segmentation standards. The example below shows how \uf963\u4eac\u65e5\u672c \u4eba\u5b66 \u6821 B\u011bij\u012bng R\u00ecb\u011bnr\u00e9n Xu\u00e9xi\u00e0o 'Beijing School for Japanese (nationals)' can be segmented on five different levels. For some applications, such as MT and NER, the multiword lexemic level is most appropriate (the level most commonly used in CJKI's dictionaries). For others, such as embedded speech technology where dictionary size matters, the lexemic level is best. A more advanced and expensive solution is to store presegmented MWUs in the lexicon, or even to store nesting delimiters as shown above, making it possible to select the desired segmentation level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilevel Segmentation", |
|
"sec_num": "3.2" |
|
}, |
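{

"text": "A minimal sketch of level selection from stored nesting delimiters, assuming a simple bracket notation (the notation, function names, and two-level nesting here are assumptions, not the paper's own format):\n\ndef parse(s):\n    # parse bracket notation into a nested list of strings\n    stack = [[]]\n    buf = ''\n    for ch in s:\n        if ch in '[]' and buf:\n            stack[-1].append(buf)\n            buf = ''\n        if ch == '[':\n            stack.append([])\n        elif ch == ']':\n            node = stack.pop()\n            stack[-1].append(node)\n        else:\n            buf += ch\n    return stack[0][0]\n\ndef flatten(node):\n    return node if isinstance(node, str) else ''.join(flatten(c) for c in node)\n\ndef segments(node, level):\n    # level 0 returns the whole MWU; deeper levels split nested components\n    if isinstance(node, str) or level == 0:\n        return [flatten(node)]\n    out = []\n    for child in node:\n        out += segments(child, level - 1)\n    return out\n\ntree = parse('[[\u5317\u4eac][[\u65e5\u672c\u4eba][\u5b66\u6821]]]')\nprint(segments(tree, 0))  # ['\u5317\u4eac\u65e5\u672c\u4eba\u5b66\u6821']\nprint(segments(tree, 1))  # ['\u5317\u4eac', '\u65e5\u672c\u4eba\u5b66\u6821']\nprint(segments(tree, 2))  # ['\u5317\u4eac', '\u65e5\u672c\u4eba', '\u5b66\u6821']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilevel Segmentation",

"sec_num": "3.2"

},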
|
{ |
|
"text": "The problem of incorrect segmentation is especially obvious in the case of neologisms. Of course no lexical database can expect to keep up with the latest neologisms, and even the first edition of Chinese Gigaword does not yet have \u535a\u5ba2 b\u00f3k\u00e8 'blog'. Here are some examples of MWU neologisms, some of which are not (at least bilingually), compositional but fully qualify as lexemes. \u7535\u8111\u8ff7 di\u00e0nn\u01ceom\u00ed cyberphile \u7535\u5b50\u5546\u52a1 di\u00e0nz\u01d0sh\u0101ngw\u00f9 e-commerce \u8ffd\u8f66\u65cf zhu\u012bch\u0113z\u00fa auto fan", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilevel Segmentation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Numerous Chinese characters underwent drastic simplifications in the postwar period. Chinese written in these simplified forms is called Simplified Chinese (SC). Taiwan, Hong Kong, and most overseas Chinese continue to use the old, complex forms, referred to as Traditional Chinese (TC). Contrary to popular perception, the process of accurately converting SC to/from TC is full of complexities and pitfalls. The linguistic issues are discussed in Halpern and Kerman (1999) , while technical issues are described in Lunde (1999) . The conversion can be implemented on three levels in increasing order of sophistication:", |
|
"cite_spans": [ |
|
{ |
|
"start": 448, |
|
"end": 473, |
|
"text": "Halpern and Kerman (1999)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 528, |
|
"text": "Lunde (1999)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese-to-Chinese Conversion (C2C)", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "1. Code Conversion. The easiest, but most unreliable, way to perform C2C is to transcode by using a one-to-one mapping table. Because of the numerous one-to-many ambiguities, as shown below, the rate of conversion failure is unacceptably high. As can be seen, the ambiguities inherent in code conversion are resolved by using orthographic mapping tables, which avoids false conversions such as shown in the Incorrect column. Because of segmentation ambiguities, such conversion must be done with a segmentor that can break the text stream into meaningful units (Emerson 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 561, |
|
"end": 575, |
|
"text": "(Emerson 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese-to-Chinese Conversion (C2C)", |
|
"sec_num": "3.3" |
|
}, |
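{

"text": "The difference between levels 1 and 2 can be sketched as follows (toy tables with a handful of entries; real orthographic mapping tables contain many thousands): naive codepoint transcoding picks the wrong member of a one-to-many mapping, while a word-level orthographic table applied to segmented text picks the right one.\n\nCODE_TABLE = {'\u53d1': '\u767c', '\u5934': '\u982d', '\u5e72': '\u4e7e'}  # one char -> one char\nORTH_TABLE = {'\u5934\u53d1': '\u982d\u9aee', '\u53d1\u5c55': '\u767c\u5c55', '\u5e72\u71e5': '\u4e7e\u71e5', '\u5e72\u90e8': '\u5e79\u90e8'}\n\ndef code_convert(sc):\n    return ''.join(CODE_TABLE.get(ch, ch) for ch in sc)\n\ndef orth_convert(segments):\n    # assumes the text was first segmented into meaningful units\n    return ''.join(ORTH_TABLE.get(w, code_convert(w)) for w in segments)\n\nprint(code_convert('\u5934\u53d1'))            # \u982d\u767c -- wrong: \u53d1 'hair' must be \u9aee\nprint(orth_convert(['\u5934\u53d1', '\u5e72\u90e8']))  # \u982d\u9aee\u5e79\u90e8 -- correct at the word level",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Chinese-to-Chinese Conversion (C2C)",

"sec_num": "3.3"

},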
|
{ |
|
"text": "An extra complication, among various others, is that some lexemes have one-to-many orthographic mappings, all of which are correct. For example, SC \u9634\u5e72 correctly maps to both TC \u9670 \u4e7e 'dry in the shade' and TC \u9670\u5e72 'the five even numbers'. Well designed orthographic mapping tables must take such anomalies into account. 3. Lexemic Conversion. The most sophisticated form of C2C conversion is called lexemic conversion, which maps SC and TC lexemes that are semantically, not orthographically, equivalent. For example, SC \u4fe1\u606f x\u00ecnx\u012b 'information' is converted into the semantically equivalent TC \u8cc7\u8a0a z\u012bx\u00f9n. This is similar to the difference between British pavement and American sidewalk. Tsou (2000) has demonstrated that there are numerous lexemic differences between SC and TC, especially in technical terms and proper nouns, e.g. there are more than 10 variants for Osama bin Laden. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 681, |
|
"end": 692, |
|
"text": "Tsou (2000)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese-to-Chinese Conversion (C2C)", |
|
"sec_num": "3.3" |
|
}, |
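{

"text": "A lexemic conversion step can then sit on top of the orthographic one, as in this sketch (entries taken from the examples in this section and the table above; function names are assumptions):\n\nLEXEMIC_SC_TO_TW = {\n    '\u4fe1\u606f': '\u8cc7\u8a0a',          # information\n    '\u8f6f\u4ef6': '\u8edf\u9ad4',          # software\n    '\u51fa\u79df\u6c7d\u8f66': '\u8a08\u7a0b\u8eca',  # taxi (Taiwan); Hong Kong uses \u7684\u58eb\n}\n\ndef lexemic_convert(segments, table):\n    # fall back to the orthographic form when no lexemic mapping exists\n    return [table.get(w, w) for w in segments]\n\nprint(lexemic_convert(['\u4fe1\u606f', '\u8f6f\u4ef6'], LEXEMIC_SC_TO_TW))  # ['\u8cc7\u8a0a', '\u8edf\u9ad4']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Chinese-to-Chinese Conversion (C2C)",

"sec_num": "3.3"

},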
|
{ |
|
"text": "Traditional Chinese has numerous variant character forms, leading to much confusion. Disambiguating these variants can be done by using mapping tables such as the one shown below. If such a table is carefully constructed by limiting it to cases of 100% semantic interchangeability for polysemes, it is easy to normalize a TC text by trivially replacing variants by their standardized forms. For this to work, all relevant components, such as MT dictionaries, search engine indexes and the related documents should be normalized. An extra complication is that Taiwanese and Hong Kong variants are sometimes different (Tsou 2000) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 627, |
|
"text": "(Tsou 2000)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Traditional Chinese Variants", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The Japanese orthography is highly irregular, significantly more so than any other major language, including Chinese. A major factor is the complex interaction of the four scripts used to write Japanese, e.g. kanji, hiragana, katakana, and the Latin alphabet, resulting in countless words that can be written in a variety of often unpredictable ways, and the lack of a standardized orthography. For example, toriatsukai 'handling' can be written in six ways: \u53d6\u308a\u6271\u3044, \u53d6 \u6271\u3044, \u53d6\u6271, \u3068\u308a\u6271\u3044, \u53d6\u308a\u3042\u3064\u304b\u3044, \u3068\u308a \u3042\u3064\u304b\u3044.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Highly Irregular Orthography", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "An example of how difficult Japanese IR can be is the proverbial 'A hen that lays golden eggs.' The \"standard\" orthography would be \u91d1\u306e\u5375\u3092 \u7523\u3080\u9d8f Kin no tamago o umu niwatori. In reality, tamago 'egg' has four variants (\u5375, \u7389\u5b50, \u305f \u307e\u3054, \u30bf\u30de\u30b4), niwatori 'chicken' three (\u9d8f, \u306b \u308f\u3068\u308a, \u30cb\u30ef\u30c8\u30ea) and umu 'to lay' two (\u7523\u3080, \u751f\u3080), which expands to 24 permutations like \u91d1 \u306e\u5375\u3092\u751f\u3080\u30cb\u30ef\u30c8\u30ea, \u91d1\u306e\u7389\u5b50\u3092\u7523\u3080\u9d8f etc. As can be easily verified by searching the web, these variants occur frequently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Highly Irregular Orthography", |
|
"sec_num": "4.1" |
|
}, |
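{

"text": "The 24 permutations are simply the cross product of the variant sets, as this quick check shows (variant lists copied from the paragraph above):\n\nfrom itertools import product\n\ntamago = ['\u5375', '\u7389\u5b50', '\u305f\u307e\u3054', '\u30bf\u30de\u30b4']  # egg\numu = ['\u7523\u3080', '\u751f\u3080']                  # to lay\nniwatori = ['\u9d8f', '\u306b\u308f\u3068\u308a', '\u30cb\u30ef\u30c8\u30ea']    # chicken\n\nvariants = ['\u91d1\u306e' + t + '\u3092' + u + n for t, u, n in product(tamago, umu, niwatori)]\nprint(len(variants))  # 4 * 2 * 3 = 24\nprint(variants[0])    # \u91d1\u306e\u5375\u3092\u7523\u3080\u9d8f",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Highly Irregular Orthography",

"sec_num": "4.1"

},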
|
{ |
|
"text": "Linguistic tools that perform segmentation, MT, entity extraction and the like must identify and/or normalize such variants to perform dictionary lookup. Below is a brief discussion of what kind of variation occurs and how such normalization can be achieved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Highly Irregular Orthography", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "One of the most common types of orthographic variation in Japanese occurs in kana endings, called okurigana, that are attached to a kanji stem. For example, okonau 'perform' can be written \u884c\u3046 or \u884c\u306a\u3046, whereas toriatsukai can be written in the six ways shown above. Okurigana variants are numerous and unpredictable. Identifying them must play a major role in Japanese orthographic normalization. Although it is possible to create a dictionary of okurigana variants algorithmically, the resulting lexicon would be huge and may create numerous false positives not semantically interchangeable. The most effective solution is to use a lexicon of okurigana variants, such as the one shown below: Since Japanese is highly agglutinative and verbs can have numerous inflected forms, a lexicon such as the above must be used in conjunction with a morphological analyzer that can do accurate stemming, i.e. be capable of recognizing that \u66f8\u304d\u8457\u3057\u307e\u305b\u3093\u3067\u3057\u305f is the polite form of the canonical form \u66f8\u304d\u8457\u3059.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Okurigana Variants", |
|
"sec_num": "4.2" |
|
}, |
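{

"text": "A sketch of that combination: a toy variant table built from the \u66f8\u304d\u8457\u3059 example together with a deliberately crude stemmer (a real system would use a full morphological analyzer; the inflection table here is an assumption for illustration).\n\nOKURIGANA_VARIANTS = {\n    '\u66f8\u304d\u8457\u3059': '\u66f8\u304d\u8457\u3059',\n    '\u66f8\u304d\u8457\u308f\u3059': '\u66f8\u304d\u8457\u3059',\n    '\u66f8\u8457\u3059': '\u66f8\u304d\u8457\u3059',\n    '\u66f8\u8457\u308f\u3059': '\u66f8\u304d\u8457\u3059',\n}\n\n# longer endings first, so the polite negative past is caught before plain past\nINFLECTIONS = {'\u3057\u307e\u305b\u3093\u3067\u3057\u305f': '\u3059', '\u3057\u307e\u3059': '\u3059', '\u3057\u305f': '\u3059'}\n\ndef normalize(form):\n    for ending, canonical in INFLECTIONS.items():\n        if form.endswith(ending):\n            form = form[:-len(ending)] + canonical\n            break\n    return OKURIGANA_VARIANTS.get(form, form)\n\nprint(normalize('\u66f8\u304d\u8457\u3057\u307e\u305b\u3093\u3067\u3057\u305f'))  # \u66f8\u304d\u8457\u3059\nprint(normalize('\u66f8\u8457\u308f\u3059'))  # \u66f8\u304d\u8457\u3059",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Okurigana Variants",

"sec_num": "4.2"

},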
|
{ |
|
"text": "Variation across the four scripts in Japanese is common and unpredictable, so that the same word can be written in any of several scripts, or even as a hybrid of multiple scripts, as shown below: Cross-script variation can have major consequences for recall, as can be seen from the table below. Using the ID above to represent the number of Google hits, this gives a total of A\uff0bB\uff0bC\uff0b\u03b1 123 = 191,700. \u03b1 is a coincidental occurrence factor, such as in '100 \u4eba\u53c2\u52a0, in which '\u4eba\u53c2' is unrelated to the 'carrot' sense. The formulae for calculating the above are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-Script Orthographic Variation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "123 \u03b1 + + + C B A C \uff1d 58\uff0c000 191\uff0c700 (\u224830%)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Normalized recall:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "123 \u03b1 + + + + + C B A C B A \uff1d 191\uff0c700 191\uff0c700 (\u2248100\uff05)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unnormalized precision:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3 \u03b1 + C C \uff1d 58\uff0c000 58\uff0c000 (\u2248100\uff05)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Normalized precision:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "123 \u03b1 + + + C B A C \uff1d 191\uff0c700 191\uff0c700 (\u2248100\uff05)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u4eba\u53c2 'carrot' illustrates how serious a problem cross-orthographic variants can be. If orthographic normalization is not implemented to ensure that all variants are indexed on a standardized form like \u4eba\u53c2, recall is only 30%; if it is, there is a dramatic improvement and recall goes up to nearly 100%, without any loss in precision, which hovers at 100%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unnormalized recall:", |
|
"sec_num": null |
|
}, |
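{

"text": "The arithmetic can be checked directly from the hit counts in the table above (A = 67,500, B = 66,200, C = 58,000), treating the coincidental factor \u03b1 as negligible, as the totals above do:\n\nhits = {'\u4eba\u53c2': 67_500, '\u306b\u3093\u3058\u3093': 66_200, '\u30cb\u30f3\u30b8\u30f3': 58_000}\ntotal = sum(hits.values())  # A + B + C = 191,700\n\nunnormalized_recall = hits['\u30cb\u30f3\u30b8\u30f3'] / total  # only one surface form indexed\nnormalized_recall = total / total  # all variants indexed on \u4eba\u53c2\n\nprint(f'{unnormalized_recall:.0%}')  # 30%\nprint(f'{normalized_recall:.0%}')    # 100%",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unnormalized recall:",

"sec_num": null

},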
|
{ |
|
"text": "A sharp increase in the use of katakana in recent years is a major annoyance to NLP applications because katakana orthography is often irregular; it is quite common for the same word to be written in multiple, unpredictable ways. Although hiragana orthography is generally regular, a small number of irregularities persist. Some of the major types of kana variation are shown in the table below. The above is only a brief introduction to the most important types of kana variation. Though attempts at algorithmic solutions have been made by some NLP research laboratories (Brill 2001) , the most practical solution is to use a katakana normalization table, such as the one shown below, as is being done by Yahoo! Japan and other major portals. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 572, |
|
"end": 584, |
|
"text": "(Brill 2001)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Kana Variants", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "There are various other types of orthographic variants in Japanese, described in Halpern (2000a) . To mention some, kanji even in contemporary Japanese sometimes have variants, such as \u624d for \u6b73 and \u5dfe for \u5e45, and traditional forms such as \u767c for \u767a. In addition, many kun homophones and their variable orthography are often close or even identical in meaning, i.e., noboru means 'go up' when written \u4e0a\u308b but 'climb' when written \u767b\u308b, so that great care must be taken in the normalization process so as to assure semantic interchangeability for all senses of polysemes; that is, to ensure that such forms are excluded from the normalization table.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 96, |
|
"text": "Halpern (2000a)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Miscellaneous Variants", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Leaving statistical methods aside, lexicondriven normalization of Japanese orthographic variants can be achieved by using an orthographic mapping table such as the one shown below, using various techniques such as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon-driven Normalization", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "1. Convert variants to a standardized form for indexing. 2. Normalize queries for dictionary lookup. 3. Normalize all source documents. 4. Identify forms as members of a variant group. Other possibilities for normalization include advanced applications such as domain-specific synonym expansion, requiring Japanese thesauri based on domain ontologies, as is done by a select number of companies like Wand and Convera who build sophisticated Japanese IR systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon-driven Normalization", |
|
"sec_num": "4.6" |
|
}, |
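{

"text": "A minimal sketch of techniques 1, 2 and 4, using a few rows of the \u7a7a\u304d\u7f36 'empty can' mapping table referred to above (function names are assumptions):\n\nMAPPING = {\n    '\u7a7a\u304d\u7f36': '\u7a7a\u304d\u7f36', '\u7a7a\u7f36': '\u7a7a\u304d\u7f36', '\u660e\u304d\u7f50': '\u7a7a\u304d\u7f36',\n    '\u3042\u304d\u7f36': '\u7a7a\u304d\u7f36', '\u7a7a\u304d\u304b\u3093': '\u7a7a\u304d\u7f36', '\u7a7a\u304d\u30ab\u30f3': '\u7a7a\u304d\u7f36',\n}\n\ndef normalize(term):\n    # techniques 1 and 2: map a surface form to its standardized indexing form\n    return MAPPING.get(term, term)\n\ndef variant_group(term):\n    # technique 4: all surface forms sharing the term's normalized form\n    norm = normalize(term)\n    return [w for w, n in MAPPING.items() if n == norm]\n\nprint(normalize('\u7a7a\u7f36'))  # \u7a7a\u304d\u7f36\nprint(variant_group('\u3042\u304d\u7f36'))  # all six surface forms listed above",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexicon-driven Normalization",

"sec_num": "4.6"

},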
|
{ |
|
"text": "Modern Korean has is a significant amount of orthographic variation, though far less than in Japanese. Combined with the morphological complexity of the language, this poses various challenges to developers of NLP tools. The issues are similar to Japanese in principle but differ in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Orthographic Variation in Korean", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Briefly, Korean has variant hangul spellings in the writing of loanwords, such as \ucf00\uc774\ud06c keikeu and \ucf00\uc78c keik for 'cake', and in the writing of non-Korean personal names, such as \ud074\ub9b0\ud134 keulrinteon and \ud074\ub9b0\ud1a4 keulrinton for 'Clinton'. In addition, similar to Japanese but on a smaller scale, Korean is written in a mixture of hangul, Chinese characters and the Latin alphabet. For example, 'shirt' can be written \uc640\uc774\uc154\uce20 wai-syeacheu or Y \uc154\uce20 wai-syeacheu, whereas 'one o'clock' hanzi can written as \ud55c\uc2dc, 1 \uc2dc or \u4e00\u6642. Another issue is the differences between South and North Korea spellings, such as N.K. \uc624\uc0ac\uae4c osakka vs. S.K. \uc624\uc0ac\uce74 osaka for 'Osaka', and the old (pre-1988) orthography versus the new, i.e. modern \uc77c\uad70 'worker' (ilgun) used to be written \uc77c\uafbc (ilkkun).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Orthographic Variation in Korean", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Lexical databases, such as normalization tables similar to the ones shown above for Japanese, are the only practical solution to identifying such variants, as they are in principle unpredictable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Orthographic Variation in Korean", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Because of the irregular orthography of CJK languages, procedures such as orthographic normalization cannot be based on statistical and probabilistic methods (e.g. bigramming) alone, not to speak of pure algorithmic methods. Many attempts have been made along these lines, as for example Brill (2001) and Goto et al. (2001) , with some claiming performance equivalent to lexicon-driven methods, while Kwok (1997) reports good results with only a small lexicon and simple segmentor. Emerson (2000) and others have reported that a robust morphological analyzer capable of processing lexemes, rather than bigrams or ngrams, must be supported by a large-scale computational lexicon. This experience is shared by many of the world's major portals and MT developers, who make extensive use of lexical databases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 300, |
|
"text": "Brill (2001)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 323, |
|
"text": "Goto et al. (2001)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 412, |
|
"text": "Kwok (1997)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 496, |
|
"text": "Emerson (2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role of Lexical Databases", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Unlike in the past, disk storage is no longer a major issue. Many researchers and developers, such as Prof. Franz Guenthner of the University of Munich, have come to realize that \"language is in the data,\" and \"the data is in the dictionary,\" even to the point of compiling full-form dictionaries with millions of entries rather than rely on statistical methods, such as Meaningful Machines who use a full form dictionary containing millions of entries in developing a human quality Spanish-to-English MT system. CJKI, which specializes in CJK and Arabic computational lexicography, is engaged in an ongoing research and development effort to compile CJK and Arabic lexical databases (currently about seven million entries), with special emphasis on proper nouns, orthographic normalization, and C2C. These resources are being subjected to heavy industrial use under realworld conditions, and the feedback thereof is being used to further expand these databases and to enhance the effectiveness of the NLP tools based on them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role of Lexical Databases", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Performing such tasks as orthographic normalization and named entity extraction accurately is beyond the ability of statistical methods alone, not to speak of C2C conversion and morphological analysis. However, the small-scale lexical resources currently used by many NLP tools are inadequate to these tasks. Because of the irregular orthography of the CJK writing systems, lexical databases fine-tuned to the needs of NLP applications are required. The building of large-scale lexicons based on corpora consisting of even billions of words has come of age. Since lexicon-driven techniques have proven their effectiveness, there is no need to overly rely on probabilistic methods. Comprehensive, up-todate lexical resources are the key to achieving major enhancements in NLP technology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs. Microsoft Research", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kacmarick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Brocket", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of the Sixth Natural Language Processing Pacific Rim Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, E. and Kacmarick, G. and Brocket, C. (2001) Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs. Microsoft Research, Proc. of the Sixth Natural Language Processing Pacific Rim Symposium, Tokyo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "New Approaches to Chinese Word Formation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Packard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jerome", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Packard, L. Jerome (1998) \"New Approaches to Chinese Word Formation\", Mouton Degruyter, Berlin and New York.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Segmenting Chinese in Unicode", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Emerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proc. of the 16th International Unicode Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emerson, T. (2000) Segmenting Chinese in Unicode. Proc. of the 16th International Unicode Confer- ence, Amsterdam", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Cross-Language Information Retrieval of Proper Nouns using Context Information. NHK Science and Technical Research Laboratories", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Goto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Uratani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ehara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of the Sixth Natural Language Processing Pacific Rim Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goto, I., Uratani, N. and Ehara T. (2001) Cross- Language Information Retrieval of Proper Nouns using Context Information. NHK Science and Technical Research Laboratories. Proc. of the Sixth Natural Language Processing Pacific Rim Symposium, Tokyo, Japan", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Phrase Structure, Lexical Integrity, and Chinese Compounds", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Journal of the Chinese Teachers Language Association", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "53--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, James C. (1984) Phrase Structure, Lexical Integrity, and Chinese Compounds, Journal of the Chinese Teachers Language Association, 19.2: 53- 78", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Spotting and Discovering Terms through Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Jacquemin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacquemin, C. (2001) Spotting and Discovering Terms through Natural Language Processing. The MIT Press, Cambridge, MA", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The Pitfalls and Complexities of Chinese to Chinese Conversion", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Halpern, J. and Kerman J. (1999) The Pitfalls and Complexities of Chinese to Chinese Conversion.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Proc. of the Fourteenth International Unicode Conference in", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Proc. of the Fourteenth International Unicode Con- ference in Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Working paper (www.cjk.org/cjk/joa/joapaper.htm), The CJK Dictionary Institute", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Halpern, J. (2000a) The Challenges of Intelligent Japanese Searching. Working paper (www.cjk.org/cjk/joa/joapaper.htm), The CJK Dictionary Institute, Saitama, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Working paper, (www.cjk.org/cjk/reference/engmorph.htm) The CJK Dictionary Institute", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Halpern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Halpern, J. (2000b) Is English Segmentation Trivial?. Working paper, (www.cjk.org/cjk/reference/engmorph.htm) The CJK Dictionary Institute, Saitama, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Lexicon Effects on Chinese Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Kwok", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of 2nd Conf. on Empirical Methods in NLP. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kwok, K.L. (1997) Lexicon Effects on Chinese In- formation Retrieval. Proc. of 2nd Conf. on Em- pirical Methods in NLP. ACL. pp.141-8.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "CJKV Information Processing. O'Reilly & Associates", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Lunde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lunde, Ken (1999) CJKV Information Processing. O'Reilly & Associates, Sebastopol, CA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "New Progress of the Grammatical Knowledgebase of Contemporary Chinese", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shiwen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Chinese Information Processing", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu, Shiwen, Zhu, Xue-feng and Wang, Hui (2000) New Progress of the Grammatical Knowledge- base of Contemporary Chinese. Journal of Chinese Information Processing, Institute of Computational Linguistics, Peking University, Vol.15 No.1.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Keh-Jiann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processingpp", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma, Wei-yun and Chen, Keh-Jiann (2003) Introduc- tion to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmen- tation Bakeoff, Proceedings of the Second SIGHAN Workshop on Chinese Language Proc- essingpp. 168-171 Sapporo, Japan", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "New Progress of the Grammatical Knowledgebase of Contemporary Chinese", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shiwen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Chinese Information Processing", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu, Shiwen, Zhu, Xue-feng and Wang, Hui (2000) New Progress of the Grammatical Knowledge- base of Contemporary Chinese. Journal of Chinese Information Processing, Institute of Computational Linguistics, Peking University, Vol.15 No.1.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LIVAC, a Chinese synchronous corpus, and some applications", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tsoi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"B Y" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [ |
|
"S W K" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "International Conference on Chinese Language Comput-ingICCLC2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsou, B.K., Tsoi, W.F., Lai, T.B.Y. Hu, J., and Chan S.W.K. (2000) LIVAC, a Chinese synchronous corpus, and some applications. In \"2000 Interna- tional Conference on Chinese Language Comput- ingICCLC2000\", Chicago.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Blending Segmentation with Tagging in Chinese Language Corpus Processing", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiwen", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "15th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou, Qiang. and Yu, Shiwen (1994) Blending Seg- mentation with Tagging in Chinese Language Corpus Processing, 15th International Conference on Computational Linguistics (COLING 1994)", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"content": "<table><tr><td>\u30bb\u30f3\u30bf\u30fc</td><td>\u305b\u3093\u305f\u30fc</td><td>\u56fd\u6c11\u751f\u6d3b\u30bb\u30f3\u30bf\u30fc</td></tr><tr><td>\u30db\u30c6\u30eb</td><td>\u307b\u3066\u308b</td><td>\u30db\u30c6\u30eb\u30b7\u30aa\u30ce</td></tr><tr><td>\u99c5</td><td>\u3048\u304d</td><td>\u671d\u971e\u99c5</td></tr><tr><td>\u5354\u4f1a</td><td colspan=\"2\">\u304d\u3087\u3046\u304b\u3044 \u65e5\u672c\u30e6\u30cb\u30bb\u30d5\u5354\u4f1a</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td/><td/><td colspan=\"3\">Code Conversion</td></tr><tr><td colspan=\"4\">SC TC1 TC2 TC3 TC4</td><td>Remarks</td></tr><tr><td>\u95e8 \u5011</td><td/><td/><td/><td>one-to-one</td></tr><tr><td>\u6c64 \u6e6f</td><td/><td/><td/><td>one-to-one</td></tr><tr><td>\u53d1 \u767c</td><td>\u9aee</td><td/><td/><td>one-to-many</td></tr><tr><td>\u6697 \u6697</td><td>\u95c7</td><td/><td/><td>one-to-many</td></tr><tr><td>\u5e72 \u5e79</td><td>\u4e7e</td><td>\u5e72</td><td>\u69a6</td><td>one-to-many</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td colspan=\"2\">Telephone \u7535\u8bdd</td><td>\u96fb\u8a71</td><td/></tr><tr><td>Dry</td><td>\u5e72\u71e5</td><td>\u4e7e\u71e5</td><td>\u5e72\u71e5 \u5e79\u71e5 \u69a6\u71e5</td></tr><tr><td/><td>\u9634\u5e72</td><td>\u9670\u4e7e \u9670\u5e72</td><td/></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>English</td><td>SC</td><td colspan=\"3\">Taiwan TC HK TC Incorrect</td></tr><tr><td/><td/><td/><td/><td>TC</td></tr><tr><td colspan=\"2\">Software \u8f6f\u4ef6</td><td>\u8edf\u9ad4</td><td>\u8edf\u4ef6</td><td>\u8edf\u4ef6</td></tr><tr><td>Taxi</td><td colspan=\"2\">\u51fa\u79df\u6c7d\u8f66 \u8a08\u7a0b\u8eca</td><td>\u7684\u58eb</td><td>\u51fa\u79df\u6c7d\u8eca</td></tr><tr><td>Osama</td><td>\u5965\u8428\u9a6c</td><td>\u5967\u85a9\u746a\u8cd3</td><td>\u5967\u85a9\u746a</td><td>\u5967\u85a9\u99ac\u672c</td></tr><tr><td>Bin Laden</td><td>\u672c\u62c9\u767b</td><td>\u62c9\u767b</td><td>\u8cd3\u62c9\u4e39</td><td>\u62c9\u767b</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td>\u88cf</td><td>\u88e1</td><td>Inside</td><td>100% interchangeable</td></tr><tr><td>\u8457</td><td>\u7740</td><td>Particle</td><td>variant 2 not in Big5</td></tr><tr><td>\u6c89</td><td>\u6c88</td><td colspan=\"2\">sink; surname partially interchangeable</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td>\u66f8\u304d\u8457\u3059</td><td>\u304b\u304d\u3042\u3089\u308f\u3059 \u66f8\u304d\u8457\u3059</td></tr><tr><td>\u66f8\u304d\u8457\u308f\u3059</td><td>\u304b\u304d\u3042\u3089\u308f\u3059 \u66f8\u304d\u8457\u3059</td></tr><tr><td>\u66f8\u8457\u3059</td><td>\u304b\u304d\u3042\u3089\u308f\u3059 \u66f8\u304d\u8457\u3059</td></tr><tr><td>\u66f8\u8457\u308f\u3059</td><td>\u304b\u304d\u3042\u3089\u308f\u3059 \u66f8\u304d\u8457\u3059</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td colspan=\"2\">\u4eba\u53c2 \u306b\u3093\u3058\u3093 \u30cb\u30f3\u30b8\u30f3</td><td/><td>carrot</td></tr><tr><td/><td>\u30aa\u30fc\u30d7\u30f3</td><td>OPEN</td><td>open</td></tr><tr><td>\u786b\u9ec4</td><td>\u30a4\u30aa\u30a6</td><td/><td>sulfur</td></tr><tr><td/><td>\u30ef\u30a4\u30b7\u30e3\u30c4</td><td colspan=\"2\">Y \u30b7\u30e3\u30c4 shirt</td></tr><tr><td>\u76ae\u819a</td><td>\u30d2\u30d5</td><td>\u76ae\u30d5</td><td>skin</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF10": { |
|
"content": "<table><tr><td colspan=\"2\">ID Keyword</td><td>Normal-</td><td>Google</td></tr><tr><td/><td/><td>ized</td><td>Hits</td></tr><tr><td>A</td><td>\u4eba\u53c2</td><td>\u4eba\u53c2</td><td>67,500</td></tr><tr><td>B</td><td>\u306b\u3093\u3058\u3093</td><td>\u4eba\u53c2</td><td>66,200</td></tr><tr><td>C</td><td>\u30cb\u30f3\u30b8\u30f3</td><td>\u4eba\u53c2</td><td>58,000</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF11": { |
|
"content": "<table><tr><td>Macron</td><td colspan=\"3\">computer \u30b3\u30f3\u30d4\u30e5\u30fc\u30bf \u30b3\u30f3\u30d4\u30e5\u30fc\u30bf\u30fc</td></tr><tr><td colspan=\"2\">Long vowels maid</td><td>\u30e1\u30fc\u30c9</td><td>\u30e1\u30a4\u30c9</td></tr><tr><td colspan=\"2\">Multiple kana team</td><td>\u30c1\u30fc\u30e0</td><td>\u30c6\u30a3\u30fc\u30e0</td></tr><tr><td>Traditional</td><td>big</td><td>\u304a\u304a\u304d\u3044</td><td>\u304a\u3046\u304d\u3044</td></tr><tr><td>\u3065 vs. \u305a</td><td colspan=\"2\">continue \u3064\u3065\u304f</td><td>\u3064\u305a\u304f</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF12": { |
|
"content": "<table><tr><td>HEADWORD</td><td>NORMALIZED</td><td>English</td></tr><tr><td>\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3</td><td colspan=\"2\">\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u30fc Architecture</td></tr><tr><td colspan=\"3\">\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u30fc \u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u30fc Architecture</td></tr><tr><td colspan=\"3\">\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e5\u30a2 \u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u30fc Architecture</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF13": { |
|
"content": "<table><tr><td>HEADWORD</td><td>READING</td><td>NORMALIZED</td></tr><tr><td>\u7a7a\u304d\u7f36</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u7f36</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u660e\u304d\u7f50</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u3042\u304d\u7f36</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u3042\u304d\u7f50</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u304d\u304b\u3093</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u304d\u30ab\u30f3</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u304d\u7f50</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u7f50</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u304d\u9475</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr><tr><td>\u7a7a\u9475</td><td>\u3042\u304d\u304b\u3093</td><td>\u7a7a\u304d\u7f36</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |