|
{ |
|
"paper_id": "D07-1034", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:18:30.881135Z" |
|
}, |
|
"title": "Extending a Thesaurus in the Pan-Chinese Context", |
|
"authors": [ |
|
{ |
|
"first": "Oi", |
|
"middle": [ |
|
"Yee" |
|
], |
|
"last": "Kwong", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hong Kong", |
|
"location": { |
|
"addrLine": "Tat Chee Avenue", |
|
"settlement": "Kowloon, Hong Kong" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hong Kong", |
|
"location": { |
|
"addrLine": "Tat Chee Avenue", |
|
"settlement": "Kowloon, Hong Kong" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we address a unique problem in Chinese language processing and report on our study on extending a Chinese thesaurus with region-specific words, mostly from the financial domain, from various Chinese speech communities. With the larger goal of automatically constructing a Pan-Chinese lexical resource, this work aims at taking an existing semantic classificatory structure as leverage and incorporating new words into it. In particular, it is important to see if the classification could accommodate new words from heterogeneous data sources, and whether simple similarity measures and clustering methods could cope with such variation. We use the cosine function for similarity and test it on automatically classifying 120 target words from four regions, using different datasets for the extraction of feature vectors. The automatic classification results were evaluated against human judgement, and the performance was encouraging, with accuracy reaching over 85% in some cases. Thus while human judgement is not straightforward and it is difficult to create a Pan-Chinese lexicon manually, it is observed that combining simple clustering methods with the appropriate data sources appears to be a promising approach toward its automatic construction.", |
|
"pdf_parse": { |
|
"paper_id": "D07-1034", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we address a unique problem in Chinese language processing and report on our study on extending a Chinese thesaurus with region-specific words, mostly from the financial domain, from various Chinese speech communities. With the larger goal of automatically constructing a Pan-Chinese lexical resource, this work aims at taking an existing semantic classificatory structure as leverage and incorporating new words into it. In particular, it is important to see if the classification could accommodate new words from heterogeneous data sources, and whether simple similarity measures and clustering methods could cope with such variation. We use the cosine function for similarity and test it on automatically classifying 120 target words from four regions, using different datasets for the extraction of feature vectors. The automatic classification results were evaluated against human judgement, and the performance was encouraging, with accuracy reaching over 85% in some cases. Thus while human judgement is not straightforward and it is difficult to create a Pan-Chinese lexicon manually, it is observed that combining simple clustering methods with the appropriate data sources appears to be a promising approach toward its automatic construction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Large-scale semantic lexicons are important resources for many natural language processing (NLP) tasks. For a significant world language such as Chinese, it is especially critical to capture the substantial regional variation as an important part of the lexical knowledge, which will be useful for many NLP applications, including natural language understanding, information retrieval, and machine translation. Existing Chinese lexical resources, however, are often based on language use in one particular region and thus lack the desired comprehensiveness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Toward this end, Tsou and Kwong (2006) proposed a comprehensive Pan-Chinese lexical resource, based on a large and unique synchronous Chinese corpus as an authentic source for lexical acquisition and analysis across various Chinese speech communities. To allow maximum versatility and portability, it is expected to document the core and universal substances of the language on the one hand, and also the more subtle variations found in different communities on the other. Different Chinese speech communities might share lexical items in the same form but with different meanings. For instance, the word \u5c45\u5c4b refers to general housing in Mainland China but specifically to housing under the Home Ownership Scheme in Hong Kong; and while the word \u4f4f\u623f is similar to \u5c45\u5c4b to mean general housing in Mainland China, it is rarely seen in the Hong Kong context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 38, |
|
"text": "Tsou and Kwong (2006)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Hence, the current study aims at taking an existing Chinese thesaurus, namely the Tongyici Cilin \u540c\u7fa9\u8a5e\u8a5e\u6797, as leverage and extending it with lexical items specific to individual Chinese speech communities. In particular, the feasibility depends on the following issues: (1) Can lexical items from various Chinese speech communities, that is, from such heterogeneous sources, be classified as effectively with methods shown to work for clustering closely related words from presumably the same, or homogenous, source? (2) Could existing semantic classificatory structures accommodate concepts and expressions specific to individual Chinese speech communities?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Measuring similarity will make sense only if the feature vectors of the two words under comparison are directly comparable. There is usually no problem if both words and their contextual features are from the same data source. Since Tongyici Cilin (or simply Cilin hereafter) is based on the vocabulary used in Mainland China, it is not clear how often these words will be found in data from other places, and even if they are found, how well the feature vectors extracted could reflect the expected usage or sense. Our hypothesis is that it will be more effective to classify new words from Mainland China with respect to Cilin categories, than to do the same on new words from regions outside Mainland China. Furthermore, if this hypothesis holds, one would need to consider separate mechanisms to cluster heterogeneous regionspecific words in the Pan-Chinese context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Thus in the current study we sampled 30 target words specific to each of Beijing, Hong Kong, Singapore, and Taipei, from the financial domain; and used the cosine similarity function to classify them into one or more of the semantic categories in Cilin. The automatic classification results were compared with a simple baseline method, against human judgement as the gold standard. In general, an accuracy of up to 85% could be reached with the top 15 candidates considered. It turns out that our hypothesis is supported by the Taipei test data, whereas the data heterogeneity effect is less obvious in Hong Kong and Singapore test data, though the effect on individual test items varies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Section 2, we will briefly review related work and highlight the innovations of the current study. In Sections 3 and 4, we will describe the materials used and the experimental setup respectively. Results will be presented and discussed with future directions in Section 5, followed by a conclusion in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To build a semantic lexicon, one has to identify the relation between words within a semantic hierarchy, and to group similar words together into a class. Previous work on automatic methods for building semantic lexicons could be divided into two main groups. One is automatic thesaurus acquisition, that is, to identify synonyms or topically related words from corpora based on various measures of similarity (e.g. Riloff and Shepherd, 1997; Thelen and Riloff, 2002) . For instance, Lin (1998) used dependency relation as word features to compute word similarities from large corpora, and compared the thesaurus created in such a way with WordNet and Roget classes. Caraballo (1999) selected head nouns from conjunctions and appositives in noun phrases, and used the cosine similarity measure with a bottomup clustering technique to construct a noun hierarchy from text. Curran and Moens (2002) explored a new similarity measure for automatic thesaurus extraction which better compromises with the speed/performance tradeoff. You and Chen (2006) used a feature clustering method to create a thesaurus from a Chinese newspaper corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 442, |
|
"text": "Riloff and Shepherd, 1997;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 467, |
|
"text": "Thelen and Riloff, 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 494, |
|
"text": "Lin (1998)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 683, |
|
"text": "Caraballo (1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 872, |
|
"end": 895, |
|
"text": "Curran and Moens (2002)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1027, |
|
"end": 1046, |
|
"text": "You and Chen (2006)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another line of research, which is more closely related with the current study, is to extend existing thesauri by classifying new words with respect to their given structures (e.g. Tokunaga et al., 1997; Pekar, 2004 ). An early effort along this line is Hearst (1992) , who attempted to identify hyponyms from large text corpora, based on a set of lexico-syntactic patterns, to augment and critique the content of WordNet. Ciaramita (2002) compared several models in classifying nouns with respect to a simplified version of WordNet and signified the gain in performance with morphological features. For Chinese, Tseng (2003) proposed a method based on morphological similarity to assign a Cilin category to unknown words from the Sinica corpus which were not in the Chinese Electronic Dictionary and Cilin; but somehow the test data were taken from Cilin, and therefore could not really demonstrate the effectiveness with unknown words found in the Sinica corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 203, |
|
"text": "Tokunaga et al., 1997;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 215, |
|
"text": "Pekar, 2004", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 267, |
|
"text": "Hearst (1992)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 439, |
|
"text": "Ciaramita (2002)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 625, |
|
"text": "Tseng (2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The current work attempts to classify new words with an existing thesaural classificatory structure. However, the usual practice in past studies is to test with a portion of data from the thesaurus itself and evaluate the results against the original classification of those words. This study is thus different in the following ways: (1) The test data (i.e. the target words to be classified) were not taken from the thesaurus, but extracted from corpora and these words were unknown to the thesaurus. (2) The target words were not limited to nouns. (3) Automatic classification results were compared with a baseline method and with the manual judgement of several linguistics students constituting the gold standard. (4) In view of the heterogeneous nature of the Pan-Chinese context, we experimented with extracting feature vectors from different datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Tongyici Cilin (\u540c\u7fa9\u8a5e\u8a5e\u6797) (Mei et al., 1984) is a Chinese synonym dictionary, or more often known as a Chinese thesaurus in the tradition of the Roget's Thesaurus for English. The Roget's Thesaurus has about 1,000 numbered semantic heads, more generally grouped under higher level semantic classes and subclasses, and more specifically differentiated into paragraphs and semicolon-separated word groups. Similarly, some 70,000 Chinese lexical items are organized into a hierarchy of broad conceptual categories in Cilin. Its classification consists of 12 top-level semantic classes, 94 subclasses, 1,428 semantic heads and 3,925 paragraphs. It was first published in the 1980s and was based on lexical usages mostly of post-1949 Mainland China. The Appendix shows some example subclasses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 45, |
|
"text": "(Mei et al., 1984)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Tongyici Cilin", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the following discussion, we will mainly refer to the subclass level and semantic head level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Tongyici Cilin", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "LIVAC (http://www.livac.org) stands for Linguistic Variation in Chinese Speech Communities. It is a synchronous corpus developed and dynamically maintained by the Language Information Sciences Research Centre of the City University of Hong Kong since 1995 (Tsou and Lai, 2003) . The corpus consists of newspaper articles collected regularly and synchronously from six Chinese speech communities, namely Hong Kong, Beijing, Taipei, Singapore, Shanghai, and Macau. Texts collected cover a variety of domains, including front page news stories, local news, international news, editorials, sports news, entertainment news, and financial news. Up to December 2006, the corpus has already accumulated over 200 million character tokens which, upon automatic word segmentation and manual verification, amount to over 1.2 million word types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 276, |
|
"text": "(Tsou and Lai, 2003)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The LIVAC Synchronous Corpus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the present study, we made use of the subcorpora collected over the 9-year period 1995-2004 from Beijing (BJ), Hong Kong (HK), Singapore (SG), and Taipei (TW). In particular, we made use of the financial news sections in these subcorpora, from which we extracted feature vectors for comparing similarity between a given target word and a thesaurus class, which is further explained in Section 4.3. Table 1 shows the sizes of the subcorpora.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 409, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The LIVAC Synchronous Corpus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Instead of using a portion of Cilin as the test data, we extracted unique lexical items from the various subcorpora above, and classified them with respect to the Cilin classification. Kwong and Tsou (2006) observed that among the unique lexical items found from the individual subcorpora, only about 30-40% are covered by Cilin, but not necessarily in the expected senses. In other words, Cilin could in fact be enriched with over 60% of the unique items from various regions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 206, |
|
"text": "Kwong and Tsou (2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the current study, we sampled the most frequent 30 words from each of these unique item lists for testing. Classification was based on their similarity with each of the Cilin subclasses, compared by the cosine measure, as discussed in Section 4.3. Three undergraduate linguistics students and one research student on computational linguistics from the City University of Hong Kong were asked to do the task. The undergraduate students were raised in Hong Kong and the research student in Mainland China. They were asked to assign what they consider the most appropriate Cilin category (up to the semantic head level, i.e. third level in the Cilin structure) to each of the 120 target words. The inter-annotator agreement was measured by the Kappa statistic (Siegel and Castellan, 1988) , at both the subclass and semantic head levels. Results on the human judgement are discussed in Section 5.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 760, |
|
"end": 788, |
|
"text": "(Siegel and Castellan, 1988)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Data", |
|
"sec_num": "3.3" |
|
}, |
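
{

"text": "As an illustration of how such multi-rater agreement can be computed, the following is a minimal sketch of a Fleiss-style kappa over the judges' category assignments. It is not the authors' implementation; the function name and the input format (one list of judge decisions per target word) are assumptions, and the Siegel and Castellan (1988) formulation actually used in the paper may differ in detail.\n\nfrom collections import Counter\n\ndef fleiss_kappa(assignments):\n    # assignments: one list per target word, holding the category chosen by\n    # each judge for that word, e.g. [[catA, catA, catB, catA], ...]\n    n_raters = len(assignments[0])\n    n_items = len(assignments)\n    category_totals = Counter()\n    p_bar = 0.0\n    for item in assignments:\n        counts = Counter(item)\n        category_totals.update(counts)\n        # per-item agreement: judge pairs that agree, out of all possible pairs\n        p_bar += sum(c * (c - 1) for c in counts.values()) / (n_raters * (n_raters - 1))\n    p_bar /= n_items\n    total = n_items * n_raters\n    p_e = sum((category_totals[c] / total) ** 2 for c in category_totals)\n    return (p_bar - p_e) / (1 - p_e)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Test Data",

"sec_num": "3.3"

},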
|
{ |
|
"text": "The \"gold standard\" was set at both the subclass level and semantic head level. For each level, we formed a \"strict\" standard for which we considered all categories assigned by at least two judges to a word; and a \"loose\" standard for which we considered all categories assigned by one or more judges. For evaluating the automatic classification in this study, however, we only experimented with the loose standard at the subclass level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Gold Standard", |
|
"sec_num": "4.2" |
|
}, |
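
{

"text": "A small sketch of how the two standards could be derived from the judges' assignments; the input format (one chosen category per judge per word) and the function name are assumptions for illustration only.\n\nfrom collections import Counter\n\ndef gold_standards(assignments):\n    # assignments: per target word, the list of categories chosen by the judges\n    strict, loose = {}, {}\n    for word, cats in assignments.items():\n        counts = Counter(cats)\n        strict[word] = {c for c, n in counts.items() if n >= 2}  # at least two judges\n        loose[word] = set(counts)                                 # one or more judges\n    return strict, loose",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Creating Gold Standard",

"sec_num": "4.2"

},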
|
{ |
|
"text": "Each target word was automatically classified with respect to the Cilin subclasses based on the similarity between the target word and each subclass. We compute the similarity by the cosine between the two corresponding feature vectors. The feature vector of a given target word contains all its co-occurring content words in the corpus within a window of \u00b15 words (excluding many general adjectives and adverbs, and numbers and proper names were all ignored). The feature vector of a Cilin subclass is based on the union of the features (i.e. co-occurring words in the corpus) from all individual members in the subclass.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Classification", |
|
"sec_num": "4.3" |
|
}, |
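
{

"text": "To make the vector construction concrete, the following is a minimal sketch (not the authors' code) of how such context vectors could be collected from a word-segmented corpus. The function names, the excluded-word set standing in for the ignored adjectives, adverbs, numbers and proper names, and the sentence format are assumptions for illustration only.\n\nfrom collections import Counter\n\ndef context_vector(target, sentences, excluded, window=5):\n    # count content words co-occurring within +/-5 positions of the target word\n    vec = Counter()\n    for sent in sentences:  # each sentence is a list of segmented words\n        for i, tok in enumerate(sent):\n            if tok != target:\n                continue\n            for j in range(max(0, i - window), min(len(sent), i + window + 1)):\n                if j != i and sent[j] not in excluded:\n                    vec[sent[j]] += 1\n    return vec\n\ndef subclass_vector(member_words, sentences, excluded, window=5):\n    # the subclass vector is the union (here: sum) of its members' context features\n    total = Counter()\n    for w in member_words:\n        total += context_vector(w, sentences, excluded, window)\n    return total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automatic Classification",

"sec_num": "4.3"

},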
|
{ |
|
"text": "The cosine of two feature vectors is computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "w v w v w v v v v v v v \u22c5 = ) , cos(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In view of the difference in the feature space of a target word and a whole class of words, and thus the potential difference in the number of occurrence of individual features, we experimented with two versions of the cosine measurement, namely binary vectors and real-valued vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Classification", |
|
"sec_num": "4.3" |
|
}, |
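
{

"text": "The two variants can be sketched as follows. This is an illustrative implementation of the cosine defined above, not the authors' code; the dictionary-based vector representation and the ranking helper are assumptions.\n\nimport math\n\ndef cosine(v, w):\n    # real-valued variant: v and w map feature words to co-occurrence counts\n    dot = sum(v[f] * w[f] for f in set(v) & set(w))\n    norm = math.sqrt(sum(x * x for x in v.values())) * math.sqrt(sum(x * x for x in w.values()))\n    return dot / norm if norm else 0.0\n\ndef cosine_binary(v, w):\n    # binary variant: only the presence or absence of a feature matters\n    a, b = set(v), set(w)\n    denom = math.sqrt(len(a) * len(b))\n    return len(a & b) / denom if denom else 0.0\n\ndef rank_subclasses(word_vec, subclass_vecs, sim=cosine):\n    # rank the Cilin subclasses by their similarity to the target word\n    return sorted(subclass_vecs, key=lambda c: sim(word_vec, subclass_vecs[c]), reverse=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automatic Classification",

"sec_num": "4.3"

},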
|
{ |
|
"text": "In addition, as mentioned in previous sections, we also experimented with the following conditions: whether feature vectors for the Cilin subclasses were extracted from the subcorpus where a given target word originates, or from the Beijing subcorpus which is assumed to be representative of language use in Mainland China. All automatic classification results were evaluated against the gold standard based on human judgement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To evaluate the effectiveness of the automatic classification, we adopted a simple baseline measure by ranking the 94 subclasses in descending order of the number of words they cover. In other words, assuming the bigger the subclass size, the more likely it covers a new term, thus we compared the top-ranking subclasses with the classifications obtained from the automatic method using the cosine measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "All human judges reported difficulties in various degrees in assigning Cilin categories to the target words. The major problem comes from the regional specificity and thus the unfamiliarity of the judges with the respective lexical items and contexts. For instance, students grown up in Hong Kong were most familiar with the Hong Kong data, and slightly less so with the Beijing data, but more often had the least ideas for the Taipei and Singapore data. The research student from Mainland China had no problem with Beijing data and the lexical items in Cilin, but had a hard time figuring out the meaning for words from Hong Kong, Taipei and Singapore. For example, all judges reported problem with the term \u81ea\u64ae, one of the target words from Singapore referring to \u81ea\u64ae\u80a1\u5e02 (CLOB in the Singaporean stock market), which is really specific to Singapore.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Response from Human Judges", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The demand on cross-cultural knowledge thus poses a challenge for building a Pan-Chinese lexical resource manually. Cilin, for instance, is quite biased in language use in Mainland China, and it requires experts with knowledge of a wide variety of Chinese terms to be able to manually classify lexical items specific to other Chinese speech communities. It is therefore even more important to devise robust ways for automatic acquisition of such a resource.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Response from Human Judges", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Notwithstanding the difficulty, the interannotator agreement was quite satisfactory. At the subclass level, we found K=0.6870. At the semantic head level, we found K=0.5971. Both figures are statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Response from Human Judges", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As mentioned, we set up a loose standard and a strict standard at both the subclass and semantic head level. In general, the judges managed to reach some consensus in all cases, except for two words from Singapore. For these two cases, we considered all categories assigned by any of the judges for both standards.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The gold standards were verified by the authors. Although in several cases the judges did not reach complete agreement with one another, we found that their decisions reflected various possible perspectives to classify a given word with respect to the Cilin classification; and the judges' assignments, albeit varied, were nevertheless reasonable in one way or another.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In the following discussion, we will refer to the various testing conditions for each group of target words with labels in the form of Cos-<Vector Type>-<Target Words>-<Cilin Feature Source>. Thus the label Cos-Bin-hk-hk means testing on Hong Kong target words with binary vectors and extracting features for the Cilin words from the Hong Kong subcorpus; and the label Cos-RV-sg-bj means testing on Singapore target words with realvalued vectors and extracting features for the Cilin words from the Beijing subcorpus. For each target word, we evaluated the automatic classification (and the baseline ranking) by matching the human decisions with the top N candidates. If any of the categories suggested by the human judges is covered, the automatic classification is considered accurate. The results are shown in Figure 1 for test data from individual regions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 813, |
|
"end": 821, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Automatic Classification", |
|
"sec_num": "5.3" |
|
}, |
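
{

"text": "A minimal sketch of this top-N scoring, with hypothetical input structures: a candidate ranking per target word, and per word the set of subclasses suggested by the judges.\n\ndef top_n_accuracy(rankings, gold, n):\n    # a word counts as correct if any judge-assigned subclass appears among its top n candidates\n    correct = sum(1 for w, ranking in rankings.items() if set(ranking[:n]) & set(gold[w]))\n    return correct / len(rankings)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluating Automatic Classification",

"sec_num": "5.3"

},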
|
{ |
|
"text": "Overall speaking, the results are very encouraging, especially in view of the number of categories (over 90) we have at the subclass level. An accuracy of 80% or more is obtained in general if the top 15 candidates were considered, which is much higher than the baseline result in all cases. Table 2 shows some examples with appropriate classification within the Top 3 candidates. The two-letter codes in the \"Top 3\" column in Table 2 refer to the subclass labels, and the code in bold is the one matching human judgement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 299, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 434, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Automatic Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In terms of the difference between binary vectors and real-valued vectors in the similarity measurement, the latter almost always gave better re-sults. This was not surprising as we expected by using real-valued vectors we could be less affected by the potential huge difference in the feature space and the number of occurrence of the features for a Cilin subclass and a target word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Automatic Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "As for extracting features for Cilin subclasses from the Beijing subcorpus or other subcorpora, the difference is more obvious for the Singapore and Taipei target words. We will discuss the results for each group of target words in detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Automatic Classification", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Target words from Beijing were expected to have a relatively higher accuracy because they are homogenous with the Cilin content. It turned out, however, the accuracy only reached 73% with top 15 candidates and 83% with top 20 candidates even under the Cos-RV-bj-bj condition. Words like \u975e\u5178 (SARS), \u7bc0\u6c34 (save water), \u7522\u696d\u5316 (industrialize / industrialization), \u5408\u683c\u7387 (passing rate) and \u50b3\u92b7 (multi-level marketing) could not be successfully classified. Results were surprisingly good for target words from the Hong Kong subcorpus. Under the Cos-RV-hk-hk condition, the accuracy was 87% with top 15 candidates and even over 95% with top 20 candidates considered. Apart from this high accuracy, another unexpected observation is the lack of significant difference between Cos-RV-hk-hk and Cos-RV-hk-bj. One possible reason is that the relatively larger size of the Hong Kong subcorpus might have allowed enough features to be extracted even for the Cilin words. Nevertheless, the similar results from the two conditions might also suggest that the context in which Cilin words are used might be relatively similar in the Hong Kong subcorpus and the Beijing subcorpus, as compared with other communities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance on Individual Sources", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Similar trends were observed from the Singapore target words. Looking at Cos-RV-sg-sg and Cos-RV-sg-bj, it appears that extracting feature vectors for the Cilin words from the Singapore subcorpus leads to better performance than extracting them from the Beijing subcorpus. It suggests that although the Singapore subcorpus shares those words in Cilin, the context in which they are used might be slightly different from their use in Mainland China. Thus extracting their contextual features from the Singapore subcorpus might better reflect their usage and makes it more comparable with the unique target words from Singapore. Such possible difference in contextual features with shared lexical items between different Chinese speech communities would require further investigation, and will form part of our future work as discussed below. Despite the above observation from the accuracy figures, the actual effect, however, seems to vary on individual lexical items. Table 3 shows some examples of target words which received similar (with white cells) and very different (with shaded cells) classification respectively under the two conditions. It appears that the region-specific but common concepts like \u5beb\u5b57\u6a13 (office), \u7d44 \u5c4b (apartment), \u79c1 \u5b85 (private residence), which relate to building or housing, were affected most. Taipei data, on the contrary, seems to be more affected by the different testing conditions. Cos-Bin-tw-bj and Cos-RV-tw-bj produced similar results, and both conditions showed better results than Cos-RV-tw-tw. This supports our hypothesis that the effect of data heterogeneity is so apparent that it is much harder to classify target words unique to Taipei with respect to the Cilin categories. In addition, as Kwong and Tsou (2006) observed, Beijing and Taipei data share the least number of lexical items, among the four regions under investigation. Hence, words in Cilin might not have the appropriate contextual feature vectors extracted from the Taipei subcorpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1734, |
|
"end": 1755, |
|
"text": "Kwong and Tsou (2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 969, |
|
"end": 976, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance on Individual Sources", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The different results for individual regions might be partly due to the endocentric and exocentric nature of influence in lexical innovation (e.g. Tsou, 2001) especially with respect to the financial domain and the history of capitalism in individual regions. This factor is worth further investigation. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 158, |
|
"text": "Tsou, 2001)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance on Individual Sources", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "As mentioned in a previous section, the test data in this study were not taken from the thesaurus itself, but were unknown words to the thesaurus. They were extracted from corpora, and were not limited to nouns. We found in this study that the simple cosine measure, which used to be applied for clustering contextually similar words from homogenous sources, performs quite well in general for classifying these unseen words with respect to the Cilin subclasses. The automatic classification results were compared with the manual judgement of several linguistics students. In addition to providing a gold standard for evaluating the automatic classification results in this study, the human 1 English gloss: 1-restoring agricultural lands for afforestation, 2-material, 3-coal mine, 4-to seize (an opportunity), 5-unemployed, 6-sales performance, 7-broadband, 8-red chip, 9-interest rate, 10-property stocks, 11financial year, 12-sell short, 13-proposal, 14-sell, 15brigadier general, 16-financial holdings, 17-individual stocks, 18-property market, 19-cash card, 20-stub. judgement on the one hand proves that the Cilin classificatory structure could accommodate regionspecific lexical items; but on the other hand also suggests how difficult it would be to construct such a Pan-Chinese lexicon manually as rich cultural and linguistic knowledge would be required. Moreover, we started with Cilin as the established semantic classification and attempted to classify words specific to Beijing, Hong Kong, Singapore, and Taipei respectively. The heterogeneity of sources did not seem to hamper the similarity measure on the whole, provided appropriate datasets are used for feature extraction, although the actual effect seemed to vary on individual lexical items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Discussions and Future Work", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "Cos-RV-hk-bj, etc. Despite the encouraging results with the top 15 candidates in the current study, it is desirable to improve the accuracy for the system to be useful in 2 English gloss: 1-sales performance, 2-broadband, 3red chip, 4-add (supply to market), 5-low level, 6-office, 7-financial year, 8-sell short, 9-rights issue, 10apartment, 11-holding space rate, 12-private residence, 13-stub, 14-individual stocks, 15-financial holdings, 16investment trust, 17-growth rate, 18-cash card.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No. Source Word Ranking of 1st appropriate class", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "practice. Hence our next step is to expand the test data size and to explore alternative methods such as using a nearest neighbour approach. In addition, we plan to further the investigation in the following directions. First, we will experiment with the automatic classification at the Cilin semantic head level, which is much more fine-grained than the subclasses. The fine-grainedness might make the task more difficult, but at the same time the more specialized grouping might pose less ambiguity for classification. Second, we will further experiment with classifying words from other special domains like sports, as well as the general domain. Third, we will study the classification in terms of the partof-speech of the target words, and their respective requirements on the kinds of features which give best classification performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No. Source Word Ranking of 1st appropriate class", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The current study only dealt with presumably Modern Standard Chinese in different communities, and it could potentially be expanded to handle various dialects within a common resource, eventually benefiting speech lexicons and applications at large.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No. Source Word Ranking of 1st appropriate class", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we have reported our study on a unique problem in Chinese language processing, namely extending a Chinese thesaurus with new words from various Chinese speech communities, including Beijing, Hong Kong, Singapore and Taipei. The critical issues include whether the existing classificatory structure could accommodate concepts and expressions specific to various Chinese speech communities, and whether the difference in textual sources might pose difficulty in using conventional similarity measures for the automatic classification. Our experiments, using the cosine function to measure similarity and testing with various sources for extracting contextual vectors, suggest that the classification performance might depend on the compatibility between the words in the thesaurus and the sources from which the target words are drawn. Evaluated against human judgement, an accuracy of over 85% was reached in some cases, which were much higher than the baseline and were very encouraging in general. While human judgement is not straightforward and it is difficult to create a Pan-Chinese lexicon manually, combining simple classification methods with the appropriate data sources seems to be a promising approach toward its automatic construction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 1317/03H).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following table shows some examples of the Cilin subclasses:Class Subclasses ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Appendix", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatic construction of a hypernym-labeled noun hierarchy from text", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Caraballo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caraballo, S.A. (1999) Automatic construction of a hypernym-labeled noun hierarchy from text. In Pro- ceedings of the 37th Annual Meeting of the Associa- tion for Computational Linguistics (ACL'99), Mary- land, USA, pp.120-126.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Boosting automatic lexical acquisition with morphological information", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL'02 Workshop on Unsupervised Lexical Acquisition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ciaramita, M. (2002) Boosting automatic lexical acqui- sition with morphological information. In Proceed- ings of the ACL'02 Workshop on Unsupervised Lexi- cal Acquisition, Philadelphia, USA, pp.17-25.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Improvements in Automatic Thesaurus Extraction", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL'02 Workshop on Unsupervised Lexical Acquisition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "59--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Curran, J.R. and Moens, M. (2002) Improvements in Automatic Thesaurus Extraction. In Proceedings of the ACL'02 Workshop on Unsupervised Lexical Ac- quisition, Philadelphia, USA, pp.59-66.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatic Acquisition of Hyponyms from Large Text Corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING-92)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "539--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hearst, M. (1992) Automatic Acquisition of Hyponyms from Large Text Corpora. In Proceedings of the 14th International Conference on Computational Linguis- tics (COLING-92), Nantes, France, pp.539-545.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Feasibility of Enriching a Chinese Synonym Dictionary with a Synchronous Chinese Corpus", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Kwong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Natural Language Processing: Proceedings of Fin-TAL", |
|
"volume": "4139", |
|
"issue": "", |
|
"pages": "322--332", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kwong, O.Y. and Tsou, B.K. (2006) Feasibility of En- riching a Chinese Synonym Dictionary with a Syn- chronous Chinese Corpus. In T. Salakoski, F. Ginter, S. Pyysalo and T. Pahikkala (Eds.), Advances in Natural Language Processing: Proceedings of Fin- TAL 2006. Lecture Notes in Artificial Intelligence, Vol.4139, pp.322-332, Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatic Retrieval and Clustering of Similar Words", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL'98)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "768--774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, D. (1998) Automatic Retrieval and Clustering of Similar Words. In Proceedings of the 36th Annual Meeting of the Association for Computational Lin- guistics and 17th International Conference on Com- putational Linguistics (COLING-ACL'98), Montreal, Canada, pp.768-774.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Linguistic Preprocessing for Distributional Classification of Words", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Pekar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the COLING2004 Workshop on Enhancing and Using Electronic Dictionaries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pekar, V. (2004) Linguistic Preprocessing for Distribu- tional Classification of Words. In Proceedings of the COLING2004 Workshop on Enhancing and Using Electronic Dictionaries, Geneva.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A corpus-based approach for building semantic lexicons", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Shepherd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "117--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Riloff, E. and Shepherd, J. (1997) A corpus-based ap- proach for building semantic lexicons. In Proceed- ings of the Second Conference on Empirical Methods in Natural Language Processing, Providence, Rhode Island, pp.117-124.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Nonparametric Statistics for the Behavioral Sciences", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Siegel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Castellan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siegel, S. and Castellan, N.J. (1988) Nonparametric Statistics for the Behavioral Sciences (2nd Ed.). McGraw-Hill.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Bootstrapping Method for Learning Semantic Lexicons using Extraction Pattern Contexts", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Thelen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thelen, M. and Riloff, E. (2002) A Bootstrapping Method for Learning Semantic Lexicons using Ex- traction Pattern Contexts. In Proceedings of the 2002 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2002), Philadelphia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Extending a thesaurus by classifying words", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Tokunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fujii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Iwayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Sakurai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the ACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tokunaga, T., Fujii, A., Iwayama, M., Sakurai, N. and Tanaka, H. (1997) Extending a thesaurus by classi- fying words. In Proceedings of the ACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, Madrid, pp.16-21.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semantic Classification of Chinese Unknown Words", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Tseng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "the Proceedings of the ACL-2003 Student Research Workshop, Companion Volume to the Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tseng, H. (2003) Semantic Classification of Chinese Unknown Words. In the Proceedings of the ACL- 2003 Student Research Workshop, Companion Vol- ume to the Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sap- poro, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Language Contact and Lexical Innovation", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "New Terms for New Ideas: Western Knowledge and Lexical Change in Late Imperial China", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsou, B.K. (2001) Language Contact and Lexical Inno- vation. In M. Lackner, I. Amelung and J. Kurtz (Eds.), New Terms for New Ideas: Western Knowl- edge and Lexical Change in Late Imperial China. Berlin: Brill.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Toward a Pan-Chinese Thesaurus", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Kwong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsou, B.K. and Kwong, O.Y. (2006) Toward a Pan- Chinese Thesaurus. In Proceedings of the Fifth In- ternational Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "\u300a\u4e2d\u6587\u4fe1\u606f\u8655 \u7406\u82e5\u5e72 \u91cd\u8981\u554f \u984c\u300b (Issues in Chinese Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"B Y" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--165", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsou, B.K. and Lai, T.B.Y. \u9112\u5609\u5f65\u3001\u9ece\u90a6\u6d0b (2003) \u6f22 \u8a9e\u5171\u6642\u8a9e\u6599\u5eab\u8207\u4fe1\u606f\u958b\u767c. In B. Xu, M. Sun and G. Jin \u5f90\u6ce2\u3001\u5b6b\u8302\u677e\u3001\u9773\u5149\u747e (Eds.), \u300a\u4e2d\u6587\u4fe1\u606f\u8655 \u7406\u82e5\u5e72 \u91cd\u8981\u554f \u984c\u300b (Issues in Chinese Language Processing). \u5317\u4eac\uff1a\u79d1\u5b78\u51fa\u7248\u793e, pp.147-165", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving Context Vector Models by Feature Clustering for Automatic Thesaurus Construction", |
|
"authors": [ |
|
{ |
|
"first": "J-M", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K-J", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "You, J-M. and Chen, K-J. (2006) Improving Context Vector Models by Feature Clustering for Automatic Thesaurus Construction. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, COLING-ACL 2006, Sydney, Australia, pp.1-8.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"text": "Classification Results with Top N Candidates", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Different Impact on Individual Items 2", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |