|
{ |
|
"paper_id": "O03-1009", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:02:04.054312Z" |
|
}, |
|
"title": "Auto-Discovery of NVEF Word-Pairs in Chinese", |
|
"authors": [ |
|
{ |
|
"first": "Jia-Lin", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Academia Sinica Nankang", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan, R.O.C" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Gladys", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Academia Sinica Nankang", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan, R.O.C" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Wen-Lian", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Academia Sinica Nankang", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan, R.O.C" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A meaningful noun-verb word-pair in a sentence is called a noun-verb event-frame (NVFE). Previously, we have developed an NVEF word-pair identifier to demonstrate that NVEF knowledge can be used effectively to resolve the Chinese word-sense disambiguation (WSD) problem (with 93.7% accuracy) and the Chinese syllable-to-word (STW) conversion problem (with 99.66% accuracy) on the NVEF related portion. In this paper, we propose a method for automatically acquiring a large scale NVEF knowledge without human intervention. The automatic discovery of NVEF knowledge includes four major processes: (1) segmentation check; (2) Initial Part-of-speech (POS) sequence generation; (3) NV knowledge generation and (4) automatic NVEF knowledge confirmation. Our experimental results show that the precision of the automatically acquired NVEF knowledge reaches 98.52% for the test sentences. In fact, it has automatically discovered more than three hundred thousand NVEF word-pairs from the 2001 United Daily News (2001 UDN) corpus. The acquired NVEF knowledge covers 48% NV-sentences in Academia Sinica Balanced Corpus (ASBC), where an NV-sentence is one including at least a noun and a verb. In the future, we will expand the size of NVEF knowledge to cover more than 75% of NV-sentences in ASBC. We will also apply the acquired NVEF knowledge to support other NLP researches, in particular, shallow parsing, syllable/speech understanding and text indexing.", |
|
"pdf_parse": { |
|
"paper_id": "O03-1009", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A meaningful noun-verb word-pair in a sentence is called a noun-verb event-frame (NVFE). Previously, we have developed an NVEF word-pair identifier to demonstrate that NVEF knowledge can be used effectively to resolve the Chinese word-sense disambiguation (WSD) problem (with 93.7% accuracy) and the Chinese syllable-to-word (STW) conversion problem (with 99.66% accuracy) on the NVEF related portion. In this paper, we propose a method for automatically acquiring a large scale NVEF knowledge without human intervention. The automatic discovery of NVEF knowledge includes four major processes: (1) segmentation check; (2) Initial Part-of-speech (POS) sequence generation; (3) NV knowledge generation and (4) automatic NVEF knowledge confirmation. Our experimental results show that the precision of the automatically acquired NVEF knowledge reaches 98.52% for the test sentences. In fact, it has automatically discovered more than three hundred thousand NVEF word-pairs from the 2001 United Daily News (2001 UDN) corpus. The acquired NVEF knowledge covers 48% NV-sentences in Academia Sinica Balanced Corpus (ASBC), where an NV-sentence is one including at least a noun and a verb. In the future, we will expand the size of NVEF knowledge to cover more than 75% of NV-sentences in ASBC. We will also apply the acquired NVEF knowledge to support other NLP researches, in particular, shallow parsing, syllable/speech understanding and text indexing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The most challenging problem in NLP is to program computers to understand natural languages. For a human being, efficient syllable-to-word (STW) conversion and word sense disambiguation (WSD) arise naturally while a sentence is understood. Therefore, in designing a natural language understanding (NLD) system, two basic problems are to derive methods and knowledge for effectively performing the tasks of STW and WSD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "For most languages, a sentence is a grammatical organization of words expressing a complete thought [Chu 1982 , Fromkin et al. 1998 ]. Since a word is usually encoded with ploy-senses, to understand language, efficient word sense disambiguation (WSD) becomes a critical problem for any NLD system. According to a study in cognitive science [Choueka et al. 1983] , people often disambiguate word sense using only a few other words in a given context (frequently only one additional word). Thus, the relationships between one word and others can be effectively used to resolve ambiguity. Furthermore, from [Small et al. 1988 , Krovetz et al. 1992 , Resnik et al. 2000 , most ambiguities occur with nouns and verbs, and the object-event (i.e. noun-verb) distinction is a major ontological division for humans [Carey 1992 ]. have shown that the knowledge of noun-verb event frame (NVEF) sense/word-pairs can be used effectively to achieve a WSD accuracy of 93.7% for the NVEF related portion in Chinese, which supports the above claim of [Choueka et al. 1983] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 109, |
|
"text": "[Chu 1982", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 131, |
|
"text": ", Fromkin et al. 1998", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 361, |
|
"text": "[Choueka et al. 1983]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 622, |
|
"text": "[Small et al. 1988", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 644, |
|
"text": ", Krovetz et al. 1992", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 665, |
|
"text": ", Resnik et al. 2000", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 817, |
|
"text": "[Carey 1992", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1034, |
|
"end": 1055, |
|
"text": "[Choueka et al. 1983]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The most common relationships between nouns and verbs are subject-predicate (SP) and verb-object (VO) [\u80e1\u88d5\u6a39 et al. 1995 [\u80e1\u88d5\u6a39 et al. , Fromkin et al. 1998 ]. In Chinese, such NV relationships could be found in various language units: compounds, phrases or sentences [Li et al. 1997] . As our observation, the major NV relationships in compounds/phrases are SP, VO, MH (modifier-head) and VC (verb-complement) constructions; the major NV relationships in sentences are SP and VO constructions. Consider the Chinese sentence: \u9019\u8f1b\u8eca\u884c\u99db\u9806\u66a2(This car moves well). There are two possible NV word-pairs, \"\u8eca-\u884c\u99db(car, move)\" and \"\u8eca\u884c-\u99db(auto shop, move).\" It is clear that the permissible (or meaningful) NV word-pair is \"\u8eca-\u884c\u99db(car, move)\" and it is a SP construction. We call such a permissible NV word-pair a noun-verb event frame (NVEF) word-pair. And, the collection of the NV word-pair \u8eca-\u884c\u99db and its sense-pair Land-Vehicle|\u8eca-VehicleGo|\u99db is called a permissible NVEF knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 118, |
|
"text": "[\u80e1\u88d5\u6a39 et al. 1995", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 152, |
|
"text": "[\u80e1\u88d5\u6a39 et al. , Fromkin et al. 1998", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 280, |
|
"text": "[Li et al. 1997]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The most popular input method for Chinese is syllable-based. Since the average number of characters sharing the same syllable is 17, efficient STW conversion becomes an indispensable tool. have shown that the NVEF knowledge can be used to achieve a STW accuracy rate of 99.66% for converting NVEF related words. Since the creation of NVEF knowledge bears no particular application in mind, and still it can be used to effectively resolve the WSD and STW problems, the NVEF knowledge is potentially application independent for NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We shall further investigate the effectiveness of NVEF knowledge in other NLP applications, such as syllable/speech understanding and full/shallow parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We have reported a semi-automatic generation of NVEF knowledge in .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This method uses the N-V frequencies in sentences groups to generate NVEF candidates to be filtered by human editors. However, it is quite laborious to create a large scale NVEF knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, we propose a new method to discover NVEF knowledge automatically from running texts, and construct a large scale NVEF knowledge efficiently. This paper is arranged as follows. In Section 2, we present the details of auto-discovery of NVEF knowledge. Experimental results and analyses are described in Section 3. Conclusion and directions for future researches will be discussed in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "To develop an auto-discovery system for NVEF knowledge (AUTO-NVEF), we use Hownet 1.0 [Dong] as a system dictionary. This system dictionary provides knowledge of the Chinese word (58,541 words), parts-of-speech (POS) and word senses, in which there are 33,264 nouns, 16,723 verbs and 16,469 senses (including 10,011 noun-senses and 4,462 verb-senses).", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 92, |
|
"text": "[Dong]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development of Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The sense of a word is defined as its DEF (concept definition) in Hownet. Table 1 lists three different senses of the Chinese word \"\u8eca(Che/car/turn).\" In Hownet, the DEF of a word consists of its main feature and secondary features. For example, in the DEF \"character|\u6587 \u5b57,surname|\u59d3,human|\u4eba,ProperName|\u5c08\" of the word \"\u8eca(Che),\" the first item \"character|\u6587 \u5b57\" is the main feature, and the remaining three items, \"surname|\u59d3,\" \"human|\u4eba,\" and \"Prop-erName|\u5c08,\" are its secondary features. The main feature in Hownet can inherit features in the hypernym-hyponym hierarchy. There are approximately 1,500 features in Hownet. Each of these features is called a sememe, which refers to the smallest semantic unit that cannot be further reduced. As we mentioned, a permissible (or meaningful) NV word-pair is a noun-verb event-frame word-pair (NVEF word-pair), such as \u8eca-\u884c\u99db(Che/car/turn, move). From Table 2 , the only permissible NVEF sense-pair for \u8eca-\u884c\u99db(car, move) is LandVehicle|\u8eca-VehicleGo|\u99db. Such an NVEF sense-pair and its corresponding NVEF word-pairs is called NVEF knowledge. Here, the combination of the NVEF sense-pair LandVehicle| \u8eca -VehicleGo| \u99db and the NVEF word-pair \u8eca-\u884c\u99db constructs a collection of NVEF knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 81, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 893, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Definition of the NVEF Knowledge", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "To effectively represent the NVEF knowledge, we have proposed an NVEF knowledge representation tree (NVEF KR-tree) to store and display the collected NVEF knowledge. The details of the NVEF KR-tree are described below .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition of the NVEF Knowledge", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A knowledge representation tree (KR-tree) of NVEF sense-pairs is shown in Fig.1 . Figure 1 . An illustration of the KR-tree using \"\u4eba\u5de5\u7269(artifact)\" as an example noun-sense subclass. (The English words in parentheses are provided for explanatory purposes only.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 79, |
|
"text": "Fig.1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 90, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "There are two types of nodes in the KR-tree, namely, function nodes and concept nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Concept nodes refer to words and features in Hownet. Function nodes are used to define the relationships between the parent and children concept nodes. We omit the function node \"subclass\" so that if a concept node B is the child of another concept node A, then B is a subclass of A. We can classify the noun-sense class (\u540d\u8a5e\u8a5e\u7fa9\u5206\u985e) into 15 subclasses according to their main features. These are \"\u5fae\u751f\u7269(bacteria),\" \"\u52d5\u7269\u985e(animal),\" \"\u4eba\u7269\u985e(human),\" \"\u690d\u7269\u985e (plant),\" \"\u4eba\u5de5\u7269(artifact),\" \"\u5929\u7136\u7269(natural),\" \"\u4e8b\u4ef6\u985e(event),\" \"\u7cbe\u795e\u985e(mental),\" \"\u73fe\u8c61\u985e (phenomena),\" \"\u7269\u5f62\u985e(shape),\" \"\u5730\u9ede\u985e(place),\" \"\u4f4d\u7f6e\u985e(location),\" \"\u6642\u9593\u985e(time),\" \"\u62bd\u8c61 \u985e(abstract)\" and \"\u6578\u91cf\u985e(quantity).\" Appendix A provides a sample table of the 15 main features of nouns in each noun-sense subclass.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The three function nodes used in the KR-tree are shown in Figure 1: (1) Major-Event (\u4e3b\u8981\u4e8b\u4ef6): The content of its parent node represents a noun-sense subclass, and the content of its child node represents a verb-sense subclass. A noun-sense subclass and a verb-sense subclass linked by a Major-Event function node is an NVEF subclass sense-pair, such as \"&LandVehicle|\u8eca\" and \"=VehcileGo|\u99db\" in Figure 1 . To describe various relationships between noun-sense and verb-sense subclasses, we design three subclass sense-symbols, in which \"=\" means \"exact,\" \"&\" means \"like,\" and \"%\" means \"inclusive.\" An example using these symbols is provided below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 67, |
|
"text": "Figure 1:", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 398, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Provided that there are three senses S 1 , S 2, and S 3 as well as their corresponding words W 1 , W 2, and W 3 . Let S 1 = LandVehicle|\u8eca,*transport|\u904b\u9001,#human|\u4eba,#die|\u6b7b W 1 =\"\u9748\u8eca(hearse)\" S 2 = LandVehicle|\u8eca,*transport|\u904b\u9001,#human|\u4eba W 2 =\"\u5ba2\u8eca(bus)\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "S 3 = LandVehicle|\u8eca,police|\u8b66 W 3 =\"\u8b66\u8eca(police car)\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Then, we have that sense/word S 3 /W 3 is in the \"=LandVehicle|\u8eca,police|\u8b66\" exact-subclass; senses/words S 1 /W 1 and S 2 /W 2 are in the \"&LandVehicle|\u8eca,*transport|\u904b \u9001\" like-subclass; and senses/words S 1 /W 1 , S 2 /W 2 , and S 3 /W 3 are in the \"%LandVehi-cle|\u8eca\" inclusive-subclass.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(2) Word-Instance (\u5be6\u4f8b): The content of its children are the words belonging to the sense subclass of its parent node. These words are self-learned by the NVEF sense-pair identifier according to the sentences under the Test-Sentence nodes. (3) Test-Sentence (\u6e2c\u8a66\u984c): The content of its children is several selected test sentences in support of its corresponding NVEF subclass sense-pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge Representation Tree of NVEF Sense-Pairs and Word-Pairs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The task of AUTO-NVEF is to automatically find out meaningful NVEF sense/word-pairs (NVEF knowledge) from Chinese sentences. Figure 1 is the flow chart of AUTO-NVEF. There are four major processes in AUTO-NVEF. The details of these major processes are described as follows (see Figure 2 and Table 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 133, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 298, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Process 1. Segmentation check: In this stage, the Chinese sentence will be segmented by two strategies: right-to-left longest word first (RL-LWF), and left-to-right longest word first (LR-LWF). If both RL-LWF and LR-LWF segmentations are equal (in short form, RL-LWF=LR-LWF) and the word number of the segmentation is greater than one, this segmen-tation result will be sent to process 2; otherwise, a NULL segmentation will be sent. Table 3 is a comparison of word-segmentation accuracies for RL-LWF, LR-LWF and RL-LWF=LR-LWF strategies with CKIP lexicon [CKIP 1995] . The word-segmentation accuracy is the ratio of fully correct segmented sentences to all sentences of Academia Sinica Balancing Corpus (ASBC) [CKIP 1995] . A fully correct segmented sentence means the segmented result exactly matches its corresponding segmentation ASBC. Table 3 shows that the technique of RL-LWF=LR-LWF achieves the best word-segmentation accuracy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 556, |
|
"end": 567, |
|
"text": "[CKIP 1995]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 711, |
|
"end": 722, |
|
"text": "[CKIP 1995]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 434, |
|
"end": 441, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 847, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(1) Segmentation check", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(2) Initial POS sequence genreation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(3) NV knowledge generation Table 2 . An illustration of AUTO-NVEF for the Chinese sentence \"\u97f3\u6a02\u6703\u73fe\u5834\u6e67\u5165\u8a31\u591a\u89c0\u773e (There are many audiences entering the locale of concert).\" (The English words in parentheses are included for explanatory purpose only.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 35, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Process Output", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(1) \u97f3\u6a02\u6703(concert)/\u73fe\u5834(locale)/\u6e67\u5165(enter)/\u8a31\u591a(many)/\u89c0\u773e(audience) (3) NV_1 = \"\u73fe\u5834/place|\u5730\u65b9,#fact|\u4e8b\u60c5/N\" -\"\u6e67\u5165(yong3 ru4)/GoInto|\u9032\u5165/V\" NV_2 = \"\u89c0\u773e/human|\u4eba,*look|\u770b,#entertainment|\u85dd,#sport|\u9ad4\u80b2,*recreation|\u5a1b\u6a02/N\" Process 2. Initial POS sequence generation: If the output of process 1 is not a NULL segmentation, this process will be triggered. This stage is comprised of the following steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(2) N 1 N 2 V 3 ADJ 4 N 5 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "-\"\u6e67\u5165(yong3 ru4)/GoInto|\u9032\u5165/V\"", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "1) For the segmentation result w 1 /w 2 /\u2026/w n-1 /w n from process 1, our algorithm compute the POS of w i , where i = 2 to n, as follows. It first computes the following two sets: a) the following POS/frequency set of w i-1 by ASBC tagging corpus and b) the Hownet POS set of w i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Then, it computes the POS intersection of the two sets. Finally, it selects the POS with the largest frequency in the POS intersection to be the POS of w i . If there are more than one POS with the largest frequency, the POS of w i will be set to NULL POS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "2) Similarly, the POS of w 1 will be determined by the POS with the largest frequency in the POS intersection of the preceding POS/frequency set of w 2 and the Hownet POS set of w 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3) By combining the determined POSs of w i , where i =1 to n, the initial POS sequence (IPOS) will be generated. Take the Chinese segmentation \u751f/\u4e86 as an example. NULL POS, this process will be triggered. The steps of this process are given as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "1) Compute the final POS sequence (FPOS). For the portion of contiguous noun sequence (such as N 1 N 2 ) of the IPOS, the last noun (such as N 2 ) will be kept and the other nouns will be dropped from the IPOS. This is because the last noun of a contiguous noun sequence (such as \u822a\u7a7a/\u516c\u53f8) in Chinese is usually the head of such a sequence. This step translates an IPOS into a FPOS. Take the Chinese sentence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u97f3\u6a02\u6703(N 1 )\u73fe\u5834(N 2 )\u6e67\u5165(V 3 )\u8a31\u591a(ADJ 4 ) \u89c0 \u773e (N 5 ) as an example. Its IPOS (N 1 N 2 V 3 ADJ 4 N 5 ) will be translated into FPOS (N 1 V 2 ADJ 3 N 4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "2) According to the FPOS, the NV word-pairs will be generated. In this case, since the auto-generated NV word-pairs for the FPOS N 1 V 2 ADJ 3 N 4 are N 1 V 2 and N 4 V 2 , the NV word-pairs \u73fe\u5834(N)\u6e67\u5165(V) and \u6e67\u5165(V)\u89c0\u773e(N) will be generated. Appendix. B lists three sample mappings of the FPOSs and their corresponding NV word-pairs. In this study, we create about one hundred mappings of FPOSs and their corresponding NV word-pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3) According to Hownet, it computes all NV sense-pairs for the generated NV word-pairs. For the above case, we have two collections of NV knowledge (see Table 2 ): NV_1 = \"\u73fe\u5834(locale)/place|\u5730\u65b9,#fact|\u4e8b\u60c5/N\" -\"\u6e67\u5165(enter)/GoInto|\u9032\u5165/V\", and NV_2 = \"\u89c0\u773e(audience)/human|\u4eba,*look|\u770b,#entertainment|\u85dd,#sport|\u9ad4\u80b2,*recreation|\u5a1b \u6a02/N\" -\"\u6e67\u5165(enter)/GoInto|\u9032\u5165/V\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 160, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Process 4. NVEF knowledge auto-confirmation: In this stage, it automatically confirms whether the generated NV knowledge is NVEF knowledge. The two auto-confirmation procedures are given as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(a) General keeping (GK) condition check: Each GK condition is constructed by a noun-sense class defined in (see Appendix A) and a verb main DEF in Hownet 1.0 [Dong] . For example, the pair of noun-sense class \"\u4eba\u7269\u985e(human)\" and verb main DEF \"GoInto|\u9032\u5165\" is a GK condition. In , we created 5,680 GK conditions from the manually confirmed NVEF knowledge. If the noun-sense class and the verb main DEF of the generated NV knowledge fits one of GK conditions, it will be automatically confirmed as a collection of NVEF knowledge and sent to NVEF KR-tree. Appendix. C gives ten GK conditions used in this study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 165, |
|
"text": "[Dong]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(b) NVEF enclosed-word template (NVEF-EW template) check: If the generated NV knowledge cannot be auto-confirmed as NVEF knowledge in procedure (a), this procedure will be triggered. A NVEF-EW template is composed of all left words and right words of a NVEF word-pair in a Chinese sentence. For example, the NVEF-EW template of the NVEF word-pair \"\u6c7d\u8eca-\u884c\u99db(car, move)\" in the Chinese sentence \u9019(this)/\u6c7d\u8eca (car)/\u4f3c\u4e4e(seem)/\u884c\u99db(move)/\u9806\u66a2(well) is \u9019 N \u4f3c\u4e4e V \u9806\u66a2. In this study, all the NVEF-EW templates are generated from the following resources: i) the collection of manually confirmed NVEF knowledge in , ii) the automatically confirmed NVEF knowledge and iii) the NVEF-EW templates provided by human editor. In this procedure, if the NVEF-EW template of the generated NV word-pair for the Chinese sentence input matches one of the NVEF-EW templates, it will be automatically confirmed as a col-lection of NVEF knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Auto-Discovery of NVEF Knowledge", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To evaluate the performance of the proposed auto-discovery of NVEF knowledge, we define the NVEF accuracy and NVEF-identified sentence coverage by Equations (1) and (2):NVEF accuracy = # of permissible NVEF knowledge / # of total generated NVEF knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "NVEF-identified sentence coverage = # of NVEF-identified sentences / # of total NV sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In Equation 1, a permissible NVEF knowledge means the generated NVEF knowledge is manually confirmed as a collection of NVEF knowledge. In Equation 2, if the Chinese sentence contains greater or equal to one NVEF word-pair on our NVEF KR-tree by the NVEF word-pair identifier Figure 3 . The confirmation UI of NVEF knowledge taking the generated NVEF knowledge for the Chinese sentence \u9ad8\u5ea6\u58d3\u529b\u4f7f\u6709\u4e9b\u4eba\u98df\u91cf\u6e1b\u5c11 (High pressure makes some people that their eating-capacity decreased as an example. (The English words in parentheses, symbols [] used to mark a noun and <> used to mark a verb are there for explanatory purposes only)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 284, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "An evaluation UI for the generated NVEF knowledge is developed as shown in Figure 3 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 83, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Interface (UI) for Manually Confirming NVEF Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "An auto-generated NVEF knowledge should be confirmed as a collection of permissible NVEF knowledge if it fits all three principles below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmation Principles of permissible NVEF Knowledge", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Principle 1. Do the NV word-pair make correct POS tags for the given Chinese sentence?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmation Principles of permissible NVEF Knowledge", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Principle 2. Do the NV sense-pair and the NV word-pair make sense?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmation Principles of permissible NVEF Knowledge", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Principle 3. Do most NV word-pair instances for the NV sense-pair satisfy Principles 1 and 2?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Confirmation Principles of permissible NVEF Knowledge", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To evaluate the acquired NVEF knowledge, we divide the 2001 United Daily News corpus into training and testing sub-corpora. (3) Test sentence set. From the testing corpus, we randomly select the sentences of three days (October 27, 2001, November 23, 2001 and December 17, 2001) to be our test sentence set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 185, |
|
"text": "(October 27, 2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 205, |
|
"text": ", November 23, 2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 227, |
|
"text": "and December 17, 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "All of the NVEF knowledge acquired by AUTO-NVEF from the test sentences is manually confirmed by three evaluators. Table 5 presents the experimental results of AUTO-NVEF. From Table 5 , it can be seen that AUTO-NVEF achieves an NVEF accuracy of 98.52%. When we apply AUTO-NVEF to the entire 2001 UDN corpus, it auto-generates 167,203", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 121, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 178, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "NVEF sense-pairs (8.6M) and 317,820 NVEF word-pairs (10.1M) on the NVEF KR-tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Within this data, 47% is generated through the general keeping-condition check and the other 53% through the NVEF-enclosed word-template check.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "According to the noun and verb positions of NVEF word-pairs in Chinese sentences, the NVEF knowledge can be classified into four types: N:V, N-V, V:N, and V-N, where the symbol \":\" stands for \"next to\" and \"-\" stands for \"nearby.\" Table 6 shows examples and the coverage of the four types of NVEF knowledge, in which the ratios (coverage) of the N:V, N-V, V:N and V-N collections are 12.41%, 43.83%, 19.61% and 24.15%, respectively, obtained by applying AUTO-NVEF to the 2001 UDN corpus. It seems that the percentage of SP (subject-predicate) constructions is a little higher than that of VO (verb-object) constructions in the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 240, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coverage for the Four Types of NVEF Knowledge", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "One hundred instances of the generated non-permissible NVEF (NP-NVEF) knowledge are analyzed. We classify these into eleven error types as shown in Table 7 , which lists the NP-NVEF confirmation principles and the ratios of the eleven error types. The first three types account for 52% of the cases, namely those that do not satisfy NVEF confirmation principles 1, 2 and 3 in Section 3.2. The fourth type is rare, with 1% of the cases. Types 5 to 7 account for 11% of the cases and are caused by errors in the Hownet lexicon, such as the incorrect word sense exist|\u5b58\u5728 for the Chinese word \u76c8\u76c8 (an adjective, normally used to describe a beauty's smile). Types 8 to 11 are referred to as the four NLP errors (36% of NP-NVEF cases): type 8 is the problem of different word senses used in Ancient and Modern Chinese; type 9 is caused by errors in WSD;", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 157, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis -The Non-Permissible NVEF Knowledge Generated by AUTO-NVEF", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "type 10 is caused by the unknown word problem; and type 11 is caused by incorrect word segmentation. Table 7 . The eleven error types and the confirmation principles of non-permissible NVEF knowledge generated by AUTO-NVEF. Type / Confirming principle of non-permissible NVEF knowledge / Percentage: 1 * The NV word-pair cannot produce a reasonable and legitimate POS tagging for the Chinese sentence. 33% (33/100) 2 * The NV sense-pair (DEF) and the NV word-pair do not make sense for each other. 17% (17/100) 3 * In this NV pair, one of the word senses cannot inherit its parent category. 2% (2/100) 4 ** The NV pair cannot be a proper combination in the sentence although it fits principles (a), (b), and (c).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis -The Non-Permissible NVEF Knowledge Generated by AUTO-NVEF", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Incorrect word POS in Hownet 1% (1/100) 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1% (1/100) 5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Incorrect word sense in Hownet 3% (3/100) 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1% (1/100) 5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "No proper definition in Hownet. Ex: \u66ab\u5c45 (temporary residence) has two meanings: one is <reside|\u4f4f\u4e0b> (\u7dca\u6025\u66ab\u5c45\u670d\u52d9 (emergency temporary residence service)) and the other is <situated|\u8655,Timeshort|\u66ab> (SARS \u5e36\u4f86\u66ab\u6642\u6027\u7684\u7d93\u6fdf\u9707\u76ea (SARS will produce only a temporary economic shock)). 7% (7/100) * Types 1 to 3 violate the confirmation principles of permissible NVEF knowledge given in Section 3.2, respectively. ** Type 4 satisfies principles (a), (b), and (c) in Section 3.2, but there is no proper combination in that sentence. Table 8 gives examples of the eleven types of NP-NVEF knowledge. From Tables 8 and 9, 11% of the NP-NVEF cases can be resolved by correcting the erroneous entries in the original Hownet. The four NLP error types could be improved with the support of other techniques such as WSD [Resnik et al. 2000, Yang et al. 2002], unknown word identification [Chang et al. 1997, Lai et al. 2000, Chen et al. 2002, Sun et al. 2002 and Tsai et al. 2003] and word segmentation [Sproat et al. 1996, Teahan et al. 2000].", |
|
"cite_spans": [ |
|
{ |
|
"start": 798, |
|
"end": 817, |
|
"text": "[Resnik et al. 2000", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 836, |
|
"text": ", Yang et al. 2002", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 887, |
|
"text": "([Chang et al. 1997", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 888, |
|
"end": 905, |
|
"text": ", Lai et al. 2000", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 906, |
|
"end": 924, |
|
"text": ", Chen et al. 2002", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 925, |
|
"end": 942, |
|
"text": ", Sun et al. 2002", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 963, |
|
"text": "and Tsai et al. 2003", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 988, |
|
"end": 1008, |
|
"text": "([Sproat et al. 1996", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1029, |
|
"text": ", Teahan et al. 2000", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 519, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 595, |
|
"text": "Tables 8", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "1% (1/100) 5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we present an auto-discovery system for NVEF knowledge that can automatically generate large-scale NVEF knowledge for NLP. The experimental results show that AUTO-NVEF achieves an NVEF accuracy of 98.52%. By applying AUTO-NVEF to the 2001 UDN corpus, we create 167,203 NVEF sense-pairs (8.6M) and 317,820 NVEF word-pairs (10.1M) on the NVEF KR-tree. Using this collection of NVEF knowledge, we have designed an NVEF word-pair identifier that achieves a WSD accuracy of 93.7% and an STW accuracy of 99.66% for the NVEF-related portion of Chinese sentences. The acquired NVEF knowledge covers 48% and 50% of the NV-sentences in ASBC and in the 2001 UDN corpus, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Directions for Future Research", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Our database of NVEF knowledge is not yet complete. Currently, 66.34% (6,641/10,011) of the noun-senses in Hownet have been considered in the NVEF knowledge construction. The remaining 33.66% of the noun-senses in Hownet have not been dealt with because of two problems: (1) words with poly-noun-senses or poly-verb-senses, which are difficult to resolve by WSD, especially single-character words; and (2) corpus sparseness. We will continue expanding our NVEF knowledge through other corpora. The mechanism of AUTO-NVEF will be extended to auto-generate other meaningful co-occurrence semantic restrictions, in particular, noun-noun association frame (NNAF) pairs, noun-adjective grammar frame (NAGF) pairs and verb-adverb grammar frame (VDGF) pairs. To the best of our knowledge, the NVEF/NNAF/NAGF/VDGF pairs are the four most important co-occurrence semantic restrictions for language understanding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Directions for Future Research", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Since the NVEF knowledge was created with no particular application in mind, yet can be used effectively to resolve the WSD and STW problems, it is potentially application-independent for NLP. We shall further investigate the effectiveness of NVEF knowledge in other NLP applications, such as syllable/speech understanding and full/shallow parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Directions for Future Research", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We are grateful to our colleagues in the Intelligent Agent Systems Lab. (IASL), Li-Yeng Chiu, Mark Shia, Gladys Hsieh, Masia Yu, Yi-Fan Chang, Jeng-Woei Su and Win-wei Mai, who helped us create and verify all the necessary NVEF knowledge and tools for this study. We would also like to thank Prof. Zhen-Dong Dong for providing us with the Hownet dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "5." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The origin and evolution of everyday concepts", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Carey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Cognitive Models of Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carey, S., \"The origin and evolution of everyday concepts (In R. N. Giere, ed.),\" Cognitive Models of Science, Minneapolis: University of Minnesota Press, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An Unsupervised Iterative Method for Chinese New Lexicon Extraction", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "; Y", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Lusignan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "International Journal of Computational Linguistics & Chinese language Processing", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "89--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang, J. S. and K. Y. Su, \"An Unsupervised Iterative Method for Chinese New Lexicon Extraction,\" International Journal of Computational Linguistics & Chinese Language Processing, 1997; Choueka, Y. and S. Lusignan, \"A Connectionist Scheme for Modeling Word Sense Disambiguation,\" Cognition and Brain Theory, 6 (1), 1983, pp.89-120.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unknown Word Extraction for Chinese Documents", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of 19 th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, K. J. and W. Y. Ma, \"Unknown Word Extraction for Chinese Documents,\" Proceedings of 19 th COLING 2002, Taipei, pp.169-175", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Chinese Grammar and English Grammar: a Comparative Study", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"C R" |
|
], |
|
"last": "Chu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu, S. C. R., Chinese Grammar and English Grammar: a Comparative Study, The Commerical Press, Ltd. The Republic of China, 1982", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "-02, the content and illustration of Sinica corpus of Academia Sinica", |
|
"authors": [], |
|
"year": 1995, |
|
"venue": "CKIP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CKIP. Technical Report no. 95-02, the content and illustration of Sinica corpus of Academia Sinica. Institute of Information Science, Academia Sinica, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "An Introduction to Language", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Fromkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fromkin, V. and R. Rodman, An Introduction to Language, Sixth Edition, Holt, Rinehart and Winston, 1998", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Lexical Ambiguity and Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Krovetz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "115--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krovetz, R. and W. B. Croft, \"Lexical Ambiguity and Information Retrieval,\" ACM Transactions on Information Systems, 10 (2), 1992, pp.115-141.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-based Likelihood Ratio", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "International Journal of Computer Processing Oriental Language", |
|
"volume": "13", |
|
"issue": "1", |
|
"pages": "83--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lai, Y. S. and C. H. Wu, \"Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-based Likelihood Ratio,\" International Journal of Computer Processing Oriental Language, 13(1), pp.83-95.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mandarin Chinese: a Functional Reference Grammar", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, N. C. and S. A. Thompson, Mandarin Chinese: a Functional Reference Grammar, The Crane Publishing Co., Ltd. Taipei, Taiwan, 1997", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Distinguishing Systems and Distinguishing Senses: New Evaluation Methods for Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Natural Language Engineering", |
|
"volume": "5", |
|
"issue": "3", |
|
"pages": "113--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Resnik, P. and D. Yarowsky, \"Distinguishing Systems and Distinguishing Senses: New Evaluation Methods for Word Sense Disambiguation,\" Natural Language Engineering, 5 (3), 2000, pp.113-133.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Lexical Ambiguity Resolution", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Small", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Cottrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Tannenhaus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Small, S., G. Cottrell, and M. E. Tannenhaus, Lexical Ambiguity Resolution, Morgan Kaufmann, Palo Alto, Calif., 1988.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Chinese Named Entity Identification Using Class-based Language Model", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of 19 th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "967--973", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sun, J., J. Gao, L. Zhang, M. Zhou and C. Huang, \"Chinese Named Entity Identification Using Class-based Language Model,\" Proceedings of 19 th COLING 2002, Taipei, pp.967-973", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A Stochastic Finite-State Word-Segmentation Algorithm for Chinese", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Shih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "3", |
|
"pages": "377--404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sproat, R. and C. Shih, \"A Stochastic Finite-State Word-Segmentation Algorithm for Chinese,\" Computational Linguistics, 22(3), 1996, pp.377-404", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A compression-based algorithm for Chinese word segmentation", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Teahan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mcnab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "375--393", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teahan, W. J., Y. Wen, R. J. McNab and I. H. Witten, \"A Compression-based Algorithm for Chinese Word Segmentation,\" Computational Linguistics, 26, 2000, pp.375-393.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Word sense disambiguation and sense-based NV event-frame identifier", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Su", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computational Linguistics and Chinese Language Processing", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "29--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsai, J. L., W. L. Hsu and J. W. Su, \"Word sense disambiguation and sense-based NV event-frame identifier,\" Computational Linguistics and Chinese Language Processing, Vol. 7, No. 1, February 2002, pp.29-46.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Applying NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hsu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of 19 th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1016--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsai, J. L. and W. L. Hsu, \"Applying NVEF Word-Pair Identifier to the Chinese Syllable-to-Word Conversion Problem,\" Proceedings of 19 th COLING 2002, Taipei, pp.1016-1022.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Chinese Word Auto-Confirming Agent", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Sung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hsu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceeding of ROCLING XV", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsai, J. L., C. L. Sung and W. L. Hsu, \"Chinese Word Auto-Confirming Agent,\" Proceedings of ROCLING XV, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A study of Semantic Disambiguation Based on HowNet", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics and Chinese Language Processing", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang, X. and T. Li, \"A study of Semantic Disambiguation Based on HowNet,\" Computational Linguistics and Chinese Language Processing, Vol. 7, No. 1, February 2002, pp.47-78; Chen, K. J. and W. M. Hung, \"An Analysis of the V-N Predicate-Object Construction and the V-N Modifier-Head Construction in Chinese,\" Communication of COLIPS, 6(2), 1996, pp.73-79; Hu, Y. S. and X. Fan, Verb Studies (in Chinese), Henan University Press, 1995.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "The flow chart of AUTO-NVEF", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "The 2001 UDN corpus is divided into two distinct sub-corpora. (The 2001 UDN corpus contains 4,539,624 Chinese sentences that were extracted from the United Daily News Web site [On-Line United Daily News] from January 17, 2001 to December 30, 2001.) (1) Training corpus. This is the collection of Chinese sentences extracted from the 2001 UDN corpus from January 17, 2001 to September 30, 2001. From the training corpus, we create thirty thousand manually confirmed NVEF word-pairs, which are used to derive the 5,680 general keeping conditions. (2) Testing corpus. This is the collection of Chinese sentences extracted from the 2001 UDN corpus from October 1, 2001 to December 31, 2001.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">C.Word a E.Word a Part-of-speech Sense (i.e. DEF in Hownet)</td></tr><tr><td>\u8eca</td><td>Che</td><td>Noun</td><td>character|\u6587\u5b57,surname|\u59d3,human|\u4eba,ProperName|\u5c08</td></tr><tr><td>\u8eca</td><td>car</td><td>Noun</td><td>LandVehicle|\u8eca</td></tr><tr><td>\u8eca</td><td>turn</td><td>Verb</td><td>cut|\u5207\u524a</td></tr><tr><td colspan=\"4\">a C.Word refers to a Chinese word; E.Word refers to an English word</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Three different senses of the Chinese word \"\u8eca(Che/car/turn)\"" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "NV_1 is NVEF knowledge by keeping-condition; learned NVEF template is[\u97f3\u6a02\u6703 NV \u8a31\u591a]" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>RL-LWF</td><td>LR-LWF</td><td>RL-LWF = LR-LWF</td></tr><tr><td>Accuracy</td><td>82.5%</td><td>81.7%</td><td>86.86%</td></tr><tr><td>Recall</td><td>100%</td><td>100%</td><td>89.33%</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "A comparison of word-segmentation accuracies for RL-LWF, LR-LWF and RL-LWF = LR-LWF strategies (the test sentences are ASBC and the dictionary is CKIP lexicon)" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>Noun</td><td>Verb</td><td colspan=\"2\">Adjective Adverb</td><td colspan=\"2\">Preposition Conjunction</td><td colspan=\"2\">Expletive Structural Particle</td></tr><tr><td>CKIP</td><td>N</td><td>V</td><td>A</td><td>D</td><td>P</td><td>C</td><td>T</td><td>De</td></tr><tr><td>Hownet</td><td>N</td><td>V</td><td>ADJ</td><td>ADV</td><td>PP</td><td>CONJ</td><td>ECHO</td><td>STRU</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "A mapping list of CKIP POS tag and Hownet POS tag" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "With this UI, evaluators (native Chinese speakers) can review the generated NVEF knowledge and determine whether it is permissible NVEF knowledge. Take the Chinese sentence \u9ad8\u5ea6\u58d3\u529b\u4f7f\u6709\u4e9b\u4eba\u98df\u91cf\u6e1b\u5c11 (High pressure causes some people's eating capacity to decrease) as an example. For this case, AUTO-NVEF generates a collection of NVEF knowledge including the NVEF sense-pair [attribute|\u5c6c\u6027,ability|\u80fd\u529b,&eat|\u5403]-[subtract|\u524a\u6e1b] and the NVEF word-pair [\u98df\u91cf (eating capacity)]-[\u6e1b\u5c11 (decrease)]. According to the confirmation principles of permissible NVEF knowledge, evaluators confirm this generated NVEF knowledge as permissible. The confirmation principles of permissible NVEF knowledge are given as follows." |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"content": "<table><tr><td>Date of test news</td><td>NVEF accuracy</td><td>Evaluator</td></tr><tr><td>October 27, 2001</td><td>99.10% (1,095/1,105)</td><td>A</td></tr><tr><td>November 23, 2001</td><td>97.76% (1,090/1,115)</td><td>B</td></tr><tr><td>December 17, 2001</td><td>98.63% (2,156/2,186)</td><td>C</td></tr><tr><td>Total Average</td><td>98.52% (4,341/4,406)</td><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Experimental results of AUTO-NVEF" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">NV pair Sentence</td><td>Noun</td><td>Verb</td><td>Coverage</td></tr><tr><td>Type</td><td/><td>/ DEF</td><td>/ DEF</td><td/></tr><tr><td>N:V</td><td>[\u5de5\u7a0b]<\u5b8c\u6210></td><td>\u5de5\u7a0b (construction)</td><td>\u5b8c\u6210 (complete)</td><td>24.15%</td></tr><tr><td/><td>(The construction is now completed)</td><td>affairs|\u4e8b\u52d9,industrial|\u5de5</td><td>fulfil|\u5be6\u73fe</td><td/></tr><tr><td>N-V</td><td>\u5168\u90e8[\u5de5\u7a0b]\u9810\u5b9a\u5e74\u5e95<\u5b8c\u6210></td><td>\u5de5\u7a0b (construction)</td><td>\u5b8c\u6210 (complete)</td><td>43.83%</td></tr><tr><td/><td>(All of constructions will be completed by</td><td>affairs|\u4e8b\u52d9,industrial|\u5de5</td><td>fulfil|\u5be6\u73fe</td><td/></tr><tr><td/><td>the end of year)</td><td/><td/><td/></tr><tr><td>V:N</td><td><\u5b8c\u6210>[\u5de5\u7a0b]</td><td>\u5de5\u7a0b (construction)</td><td>\u5b8c\u6210 (complete)</td><td>19.61%</td></tr><tr><td/><td>(to complete a construction)</td><td>affairs|\u4e8b\u52d9,industrial|\u5de5</td><td>fulfil|\u5be6\u73fe</td><td/></tr><tr><td>V-N</td><td>\u5efa\u5546\u627f\u8afe\u5728\u5e74\u5e95\u524d<\u5b8c\u6210>\u9435\u8def[\u5de5\u7a0b]</td><td>\u5de5\u7a0b (construction)</td><td>\u5b8c\u6210 (complete)</td><td>12.41%</td></tr><tr><td/><td>(The building contractor promise to complete</td><td>affairs|\u4e8b\u52d9,industrial|\u5de5</td><td>fulfil|\u5be6\u73fe</td><td/></tr><tr><td/><td>railway construction before the end of this year)</td><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "An illustration of four types of NVEF knowledge and their coverage (The English words in parentheses, symbols [] and <> are there for explanatory purposes only)" |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"content": "<table><tr><td>NP</td><td>Sentence</td><td>Noun (English explanation)</td><td>Verb (English explanation)</td></tr><tr><td>type</td><td>(English explanation)</td><td>DEF</td><td>DEF</td></tr><tr><td/><td>\u8b66\u65b9\u7dad\u8b77\u5730\u65b9[\u6cbb\u5b89]<\u8f9b\u52de></td><td>\u6cbb\u5b89 (public security)</td><td>\u8f9b\u52de (work hard)</td></tr><tr><td>1</td><td>(Police work hard to safeguard</td><td>attribute|\u5c6c\u6027,circumstances|\u5883\u6cc1,safe|</td><td>endeavour|\u8ce3\u529b</td></tr><tr><td/><td>the locality security.)</td><td>\u5b89,politics|\u653f,&organization|\u7d44\u7e54</td><td/></tr><tr><td/><td><\u6a21\u7cca>\u7684[\u767d\u5bae]\u666f\u8c61</td><td>\u767d\u5bae (White House)</td><td>\u6a21\u7cca (vague)</td></tr><tr><td>2</td><td>(White House looked vague in</td><td>house|\u623f\u5c4b,institution|\u6a5f\u69cb,#politics|</td><td>PolysemousWord|\u591a\u7fa9</td></tr><tr><td/><td>the heavy fog.)</td><td>\u653f,(US|\u7f8e\u570b)</td><td>\u8a5e,CauseToDo|\u4f7f\u52d5,mix|\u6df7\u5408</td></tr><tr><td/><td><\u751f\u6d3b>\u689d\u4ef6[\u4e0d\u8db3]</td><td>\u4e0d\u8db3 (insufficiency)</td><td>\u751f\u6d3b (life)</td></tr><tr><td>3</td><td>(Lack of living conditions)</td><td>attribute|\u5c6c\u6027,fullness|\u7a7a\u6eff,incomplete|</td><td>alive|\u6d3b\u8457</td></tr><tr><td/><td/><td>\u7f3a,&entity|\u5be6\u9ad4</td><td/></tr><tr><td/><td>\u7db2\u8def\u5e36\u7d66[\u4f01\u696d]\u8a31\u591a<\u4fbf\u5229></td><td>\u4f01\u696d (Industry)</td><td>\u4fbf\u5229 (benefit)</td></tr><tr><td>4</td><td>(Internet brings numerous bene-</td><td>InstitutePlace|\u5834\u6240,*produce|\u88fd\u9020,*sell|</td><td>benefit|\u4fbf\u5229</td></tr><tr><td/><td>fits to industries.)</td><td>\u8ce3,industrial|\u5de5,commercial|\u5546</td><td/></tr><tr><td/><td><\u76c8\u76c8>[\u7b11\u9768]</td><td>\u7b11\u9768 (a smiling face)</td><td>\u76c8\u76c8 (an adjective, normally to</td></tr><tr><td>5</td><td>(smile radiantly)</td><td>part|\u90e8\u4ef6,%human|\u4eba,skin|\u76ae</td><td>describe a beauty's smile)</td></tr><tr><td/><td/><td/><td>exist|\u5b58\u5728</td></tr><tr><td/><td>\u4fdd\u8cbb\u8f03\u8cb4\u7684<\u58fd\u96aa>[\u4fdd\u55ae]</td><td>\u4fdd\u55ae (insurance policy)</td><td>\u58fd\u96aa (life insurance)</td></tr><tr><td>6</td><td>(higher fare life insurance policy)</td><td>bill|\u7968\u64da,*guarantee|\u4fdd\u8b49</td><td>guarantee|\u4fdd\u8b49,scope=die|\u6b7b,</td></tr><tr><td/><td/><td/><td>commercial|\u5546</td></tr><tr><td/><td>\u50b5\u5238\u578b\u57fa\u91d1\u5438\u91d1[\u5b58\u6b3e]<\u5931\u8840></td><td>\u5b58\u6b3e (bank savings)</td><td>\u5931\u8840 (bleed or loss(only use in</td></tr><tr><td>7</td><td>Bond foundation makes profit</td><td>money|\u8ca8\u5e63,$SetAside|\u7559\u5b58</td><td>finance diction))</td></tr><tr><td/><td>but savings is loss</td><td/><td>bleed|\u51fa\u8840</td></tr><tr><td/><td>\u83ef\u5357[\u9280\u884c] \u4e2d\u5c71<\u5206\u884c></td><td>\u9280\u884c (bank)</td><td>\u5206\u884c (branch)</td></tr><tr><td>8</td><td>(Hwa-Nan Bank Jung-San Branch)</td><td>Aside|\u7559\u5b58,@TakeBack|\u53d6\u56de,@lend|\u501f InstitutePlace|\u5834\u6240,@Set</td><td>separate|\u5206\u96e2</td></tr><tr><td/><td/><td>\u51fa,#wealth|\u9322\u8ca1,commercial|\u5546</td><td/></tr><tr><td>9</td><td>[\u6839\u64da]<\u8abf\u67e5> (according to the investigation)</td><td>\u6839\u64da (evidence) information|\u4fe1\u606f</td><td>\u8abf\u67e5 (investigate) investigate|\u8abf\u67e5</td></tr><tr><td>10</td><td><\u96f6\u552e>[\u901a\u8def] (retail sell routes)</td><td>\u901a\u8def (route) facilities|\u8a2d\u65bd,route|\u8def</td><td>\u96f6\u552e (retail sell) sell|\u8ce3</td></tr><tr><td>11</td><td>\u5f9e\u4eca\u65e5<\u8d77\u5230> 5[\u6708\u5e95] (from today to the end of May)</td><td>\u6708\u5e95 (the end of month) time|\u6642\u9593,ending|\u672b,month|\u6708</td><td>\u8d77\u5230 (to elaborate) do|\u505a</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Examples of the eleven types of non-permissible NVEF knowledge. (The English words in parentheses, symbols [] and <> are there for explanatory purposes only.)" |
|
} |
|
} |
|
} |
|
} |