{
"paper_id": "O05-3007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:58:17.491539Z"
},
"title": "A Chinese Term Clustering Mechanism for Generating Semantic Concepts of a News Ontology",
"authors": [
{
"first": "Chang-Shing",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chang Jung Christian University",
"location": {
"settlement": "Tainan",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Yau-Hwang",
"middle": [],
"last": "Kuo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chia-Hsin",
"middle": [],
"last": "Liao",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhi-Wei",
"middle": [],
"last": "Jian",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In order to efficiently manage and use knowledge, ontology technologies are widely applied to various kinds of domain knowledge. This paper proposes a Chinese term clustering mechanism for generating semantic concepts of a news ontology. We utilize the parallel fuzzy inference mechanism to infer the conceptual resonance strength of a Chinese term pair. There are four input fuzzy variables, consisting of a Part-of-Speech (POS) fuzzy variable, Term Vocabulary (TV) fuzzy variable, Term Association (TA) fuzzy variable, and Common Term Association (CTA) fuzzy variable, and one output fuzzy variable, the Conceptual Resonance Strength (CRS), in the mechanism. In addition, the CKIP tool is used in Chinese natural language processing tasks, including POS tagging, refining tagging, and stop word filtering. The fuzzy compatibility relation approach to the semantic concept clustering is also proposed. Simulation results show that our approach can effectively cluster Chinese terms to generate the semantic concepts of a news ontology.",
"pdf_parse": {
"paper_id": "O05-3007",
"_pdf_hash": "",
"abstract": [
{
"text": "In order to efficiently manage and use knowledge, ontology technologies are widely applied to various kinds of domain knowledge. This paper proposes a Chinese term clustering mechanism for generating semantic concepts of a news ontology. We utilize the parallel fuzzy inference mechanism to infer the conceptual resonance strength of a Chinese term pair. There are four input fuzzy variables, consisting of a Part-of-Speech (POS) fuzzy variable, Term Vocabulary (TV) fuzzy variable, Term Association (TA) fuzzy variable, and Common Term Association (CTA) fuzzy variable, and one output fuzzy variable, the Conceptual Resonance Strength (CRS), in the mechanism. In addition, the CKIP tool is used in Chinese natural language processing tasks, including POS tagging, refining tagging, and stop word filtering. The fuzzy compatibility relation approach to the semantic concept clustering is also proposed. Simulation results show that our approach can effectively cluster Chinese terms to generate the semantic concepts of a news ontology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An ontology is an explicit, machine-readable specification of a shared conceptualization [Studer et al. 1998 ]. It is an essential element in many applications, including agent systems, knowledge management systems, and e-commerce platforms. It can help generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts [Gomez-Perez et al. 2002] [Fensel 2002 ] [Schreiber et al. 2001] . Soo et al. [2001] considered an ontology to be a collection of key concepts and their inter-relationships, collectively providing an abstract view of an application domain. With the support of an ontology, a user and a system can communicate with each other through their shared and common understanding of a domain. M. MissiKoff et al. [2002] proposed an integrated approach to web ontology learning and engineering that can build and access a domain ontology for intelligent information integration within a virtual user community. The proposed approach involves automatic concept learning, machine-supported concept validation, and management. Embley et al. [1998] presented a method of extracting information from unstructured documents based on an ontology. Alani et al. [2003] proposed the Artequakt, which automatically extracts knowledge about artists from the Web based on a domain ontology. It can generate biographies that are tailored to a user's interests and requirements. Navigli et al. [2003] proposed OntoLearn with ontology learning capability to extract relevant domain terms from a corpus of text. OntoSeek [Guarino et al. 1999 ] is a system designed for content-based information retrieval. It combines an ontology-driven content-matching mechanism with moderately expressive representation formalism. Lee et al. [2004] proposed an ontology-based fuzzy event extraction agent for Chinese news summarization. The summarization agent can generate a sentence set for each piece of Chinese news.",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "[Studer et al. 1998",
"ref_id": "BIBREF19"
},
{
"start": 398,
"end": 423,
"text": "[Gomez-Perez et al. 2002]",
"ref_id": "BIBREF6"
},
{
"start": 424,
"end": 436,
"text": "[Fensel 2002",
"ref_id": "BIBREF4"
},
{
"start": 439,
"end": 462,
"text": "[Schreiber et al. 2001]",
"ref_id": "BIBREF17"
},
{
"start": 465,
"end": 482,
"text": "Soo et al. [2001]",
"ref_id": "BIBREF18"
},
{
"start": 785,
"end": 808,
"text": "MissiKoff et al. [2002]",
"ref_id": "BIBREF15"
},
{
"start": 1112,
"end": 1132,
"text": "Embley et al. [1998]",
"ref_id": "BIBREF3"
},
{
"start": 1228,
"end": 1247,
"text": "Alani et al. [2003]",
"ref_id": "BIBREF0"
},
{
"start": 1452,
"end": 1473,
"text": "Navigli et al. [2003]",
"ref_id": "BIBREF16"
},
{
"start": 1592,
"end": 1612,
"text": "[Guarino et al. 1999",
"ref_id": "BIBREF7"
},
{
"start": 1788,
"end": 1805,
"text": "Lee et al. [2004]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we propose a Chinese term clustering mechanism for generating the semantic concepts of a news ontology. The parallel fuzzy inference mechanism is adopted to infer the conceptual resonance strength for any two Chinese terms. The CKIP tool [Academia Sinica 1993] is used in Chinese natural language processing, including POS tagging, refining tagging, and stop word filtering. The remainder of this paper is structured as follows. Section 2 introduces the structure of the Chinese term clustering mechanism. Semantic concept analysis for Chinese term clustering is presented in Section 3. Section 4 introduces the parallel fuzzy inference mechanism for semantic concept generation. Section 5 presents experimental results. Finally, some conclusions are drawn in Section 6.",
"cite_spans": [
{
"start": 253,
"end": 275,
"text": "[Academia Sinica 1993]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "An ontology is defined as a set of representational terms called concepts. The inter-relationships among these concepts describe a target world. Here, we will briefly describe the structure of the object-oriented ontology [Lee et al. 2003 ]. An object-oriented ontology consists of several basic components: (1) Domain: The top layer of the ontology is the name of the domain knowledge. In this study, an ontology was constructed for Chinese news, so its domain name is Chinese news. (2) Category: The second layer contains the categories of the domain ontology. Each category is composed of some concepts with various inter-relationships. There are seven categories for our Chinese news ontology. They are \"Political\" (\u653f\u6cbb\u7126\u9ede), \"International\" (\u570b\u969b\u8981\u805e), \"Finance\" (\u80a1\u5e02\u8ca1\u7d93), \"Cross-Strait\" (\uf978 \u5cb8\u98a8\u96f2), \"Societal\" (\u793e\u6703\u5730\u65b9), \"Entertainment\" (\u904b\u52d5\u5a1b\uf914), and \"Life\" (\u751f\u6d3b\u65b0\u77e5). (3)",
"cite_spans": [
{
"start": 222,
"end": 238,
"text": "[Lee et al. 2003",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Structure of the Chinese Term Clustering Mechanism",
"sec_num": "2."
},
{
"text": "Concept Set: The Concept Set is composed of various concepts and relations. We treat each concept in the ontology as a class, so the structure of the Concept Set can be treated as a class diagram. Figure 1 shows an example for our Chinese Political news domain ontology [Lee et al. 2003 ]. ",
"cite_spans": [
{
"start": 270,
"end": 286,
"text": "[Lee et al. 2003",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 197,
"end": 205,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "In this section, we will propose a Chinese term clustering mechanism for generating the semantic concepts of a news ontology. Figure 2 shows the structure of the Chinese term clustering mechanism. Natural language processing technologies were utilized to deal with the Chinese news that we gathered from the China Times website (http://www.chinatimes.com.tw). Several technologies, including a part-of-speech tagger, refining tagger, stop word filter, and term analyzer, were adopted for document pre-processing. Chinese language processing tools, such as CKIP [Academia Sinica 1993] , the Academia Sinica Balanced Corpus, Segmentation Standard Dictionary [Academia Sinica 1998] , and Chinese Electronic Dictionary [Academia Sinica 1993] provided by Academia Sinica, were used to deal with the Chinese news. In addition, the data mining technique and the concept clustering approach based on the fuzzy compatibility relation were employed. We will briefly describe these technologies in the following.",
"cite_spans": [
{
"start": 561,
"end": 583,
"text": "[Academia Sinica 1993]",
"ref_id": null
},
{
"start": 656,
"end": 678,
"text": "[Academia Sinica 1998]",
"ref_id": null
},
{
"start": 715,
"end": 737,
"text": "[Academia Sinica 1993]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 2. The structure of the Chinese term clustering mechanism",
"sec_num": null
},
{
"text": "First, the CKIP is used to tag each word with its POS tag for the Chinese news. The refining tagger then refers to the Academia Sinica Balanced Corpus and Chinese Electronic Dictionary to refine the POS tags. With the aid of the corpus and the dictionary, we have sufficient Chinese POS knowledge to analyze the features of the terms for semantic concept clustering. The stop word filter is used to select terms with useful POS tags as candidate features. Table 1 shows unmeaning tags as stop words. Then, the term analyzer analyzes the term frequency of the news to select the important terms from a specific class of news. For example, the terms with the POS tags Na (\u666e\u901a\u540d\u8a5e), Nb (\u5c08\u6709\u540d\u8a5e), Nc (\u5730\u65b9\u540d\u8a5e), and Nd (\u6642\u9593\u540d\u8a5e) are preserved and sent to the Parallel Fuzzy Inference Mechanism for further processing. The Data Mining mechanism adopts the Apriori Algorithm to generate association rules, which are used in the Parallel Fuzzy Inference Mechanism. The Apriori Algorithm [Jacobes 1993 ] is described as follows. Apriori Algorithm: Find frequent itemsets using an iterative level-wise approach based on candidate generation.",
"cite_spans": [
{
"start": 968,
"end": 981,
"text": "[Jacobes 1993",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 456,
"end": 463,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Figure 2. The structure of the Chinese term clustering mechanism",
"sec_num": null
},
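The POS-based stop word filtering step described above can be sketched as follows. This is an illustrative sketch only: the candidate-tag set follows the four noun tags named in the text, while the non-noun example tags (DE, Dfa) are assumptions rather than entries from the paper's Table 1.

```python
# Sketch of the POS-based candidate-term filter (assumed tag set; the paper's
# full stop-tag list lives in its Table 1 and is not reproduced here).
CANDIDATE_TAGS = {"Na", "Nb", "Nc", "Nd"}  # common, proper, place, and time nouns

def filter_terms(tagged_terms):
    """Drop stop words: keep only terms whose POS tag is a candidate-feature tag."""
    return [(term, tag) for term, tag in tagged_terms if tag in CANDIDATE_TAGS]

tagged = [("總統", "Na"), ("的", "DE"), ("陳水扁", "Nb"), ("昨天", "Nd"), ("很", "Dfa")]
print(filter_terms(tagged))  # only the three noun terms remain
```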
{
"text": "Database D of transactions and minimum support threshold min_sup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "L, frequent itemsets in D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1: L 1 =find_frequent_1-itemsets(D,min_sup); //find_frequent_1-itemsets denotes to find frequent 1-itemsets in D Step 2: For (k=2; L k-1 \u2260\u03c8; k++) {",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 2.1: C k =apriori_gen(L k-1 ,min_sup);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 2.2: For each transaction t\u2208D { //scan D for counts C t =subset(C k ,t); //get the subsets of t that are candidates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 2.3: For each candidate c\u2208C t c.count++; }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "L k ={c\u2208C k |c.count\u2267min_sup} } Step 4: Return L=\u222a k L k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 5: End.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method:",
"sec_num": null
},
{
"text": "Step 1: Get C 1 from D // C 1 denotes candidate 1-itemsets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 2: For each transaction t\u2208D { //scan D for counts C t =subset(C k ,t); //get the subsets of t that are candidates } Step 3: For each candidate c\u2208C 1 c.count++; }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "L 1 ={c\u2208C k |c.count\u2267min_sup}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 5: Return L 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 6: End. Procedure apriori_gen(L k-1 ; min_sup)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1: For each itemset l 1 \u2208L k-1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "For each itemset l 2 \u2208L k-1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1.2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "If (l 1 [1]= l 2 [1]) \u2227 (l 1 [2]= l 2 [2]) \u2227 \u2026 \u2227 (l 1 [k-1]< l 2 [k-1]) then { c= l 1 \u221el 2 ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "//join step: generate candidates Step 1.3: If Has_infrequent_subset(c, L k-1 ) then delete c; //prune step: remove unfruitful candidate Else add c to C k }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 2: Return C k ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 3: End. Procedure Has_infrequent_subset (c; L k-1 ) //use priori knowledge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "For each (k-1)-subset s of c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1.1: If s\u2209L k-1 then return TRUE;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 1.2: Else return FALSE;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
{
"text": "Step 2: End.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure find_frequent_1-itemsets(D,min_sup)",
"sec_num": null
},
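The Apriori pseudocode above can be condensed into a short runnable sketch. Set-based itemsets replace the sorted-array representation that the join step assumes, but the level-wise generate-and-prune logic is the same; the toy transactions are hypothetical.

```python
from itertools import combinations

def apriori(transactions, min_sup):
    """Level-wise Apriori: join frequent (k-1)-itemsets into candidate
    k-itemsets, prune candidates with an infrequent subset, then count
    supports with a scan over the transactions."""
    transactions = [frozenset(t) for t in transactions]
    # find_frequent_1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    level = {c for c, n in counts.items() if n >= min_sup}
    frequent = set(level)
    k = 2
    while level:
        # apriori_gen: join step + prune step
        candidates = set()
        for l1 in level:
            for l2 in level:
                c = l1 | l2
                if len(c) == k and all(
                    frozenset(s) in level for s in combinations(c, k - 1)
                ):
                    candidates.add(c)
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c for c, n in counts.items() if n >= min_sup}
        frequent |= level
        k += 1
    return frequent

freq = apriori([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}], min_sup=2)
print(sorted(sorted(s) for s in freq))  # [['a'], ['a', 'b'], ['a', 'c'], ['b'], ['c']]
```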
{
"text": "In this paper, we propose the Conceptual Resonance Strength (CRS) fuzzy variable for Chinese term clustering. The CRS is the similar degree for any term pair in the same concept. Hence, any Chinese term pair with a strong CRS will be classified as the same concept. We use four fuzzy variables, the resonance strength in Part-of-Speech (POS), resonance strength in Term Vocabulary (TV), resonance strength in Term Association (TA), and resonance strength in Common Term Association (CTA), to compute the CRS of the Chinese term pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concept Analysis for Chinese Term Clustering",
"sec_num": "3."
},
{
"text": "We will describe these variables in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concept Analysis for Chinese Term Clustering",
"sec_num": "3."
},
{
"text": "The first fuzzy variable for CRS is the resonance strength in Part-of-Speech (POS). Figure 3 shows the structure of the tagging tree that is used to compute the resonance strength of POS for any Chinese term pair. Table 2 shows the refining POS tags of Chinese noun terms. Nf \uf97e\u8a5e Nfa \u500b\u9ad4\uf97e\u8a5e \u4e00\"\u5f35\"\u684c\u5b50\u3001\u4e00\"\u500b\"\u676f\u5b50 Nfb \u8ddf\u8ff0\u8cd3\u5f0f\u5408\u7528\u7684\uf97e\u8a5e \u5beb\u4e00\"\u624b\"\u597d\u5b57\u3001\u4e0b\u4e00\"\u76e4\"\u68cb Nfc \u7fa4\u9ad4\uf97e\u8a5e \u4e00\"\u96d9\"\u7b77\u5b50\u3001\u4e00\"\u526f\"\u8033\u74b0 Nfd \u90e8\u5206\uf97e\u8a5e \u4e00\"\u7bc0\"\u7518\u8517\u3001\u4e00\"\u6bb5\"\u6587\u7ae0",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Figure 3",
"ref_id": null
},
{
"start": 214,
"end": 221,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "A. Resonance Strength in Part-of-Speech (POS)",
"sec_num": null
},
{
"text": "Nfe \u5bb9\u5668\uf97e\u8a5e \u4e00\"\u7bb1\"\u66f8\u3001\u4e00\"\u7897\"\u98ef Nff \u66ab\u6642\uf97e\u8a5e \u4e00\"\u982d\"\u79c0\u9aee\u3001\u4e00\"\u5730\"\uf918\uf96e Nfg \u6a19\u6e96\uf97e\u8a5e \u516c\u65a4\u3001\u6cd5\u90ce Nfh \u6e96\uf97e\u8a5e \u570b\u3001\u9762 Nfi \u8ff0\u8a5e\u7528\uf97e\u8a5e \u770b\u4e00\"\u904d\"\u3001\u6478\u4e00\"\u4e0b\" Nfzz \uf9b2\uf97e\u8a5e \" \u4e09\u842c\"\u4eba\u53e3 Ng \u65b9\u4f4d\u8a5e \u63a5\"\u4e0a\"\u3001\u5c4b\"\u5f8c\"\u3001\u7761\u89ba\"\u4e4b\u524d\" Nh \u4ee3\u540d\u8a5e Nha \u4eba\u7a31\u4ee3\u540d\u8a5e \u4f60\u3001\u6211\u3001\u4ed6\u3001\u81ea\u5df1 Nhb \u7591\u554f\u4ee3\u540d\u8a5e \u8ab0\u3001\uf9fd\u9ebc Nhc \u6cdb\u6307\u4ee3\u540d\u8a5e \u4e4b\u3001\u5176 N Na Nb Nad Nac Nae Nab Nca Nc Nbc Nba Ncd Nce Ncc Ncb Naa Naeb Naea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "The resonance will be strong when the path distance of any Chinese term pair is short. For example, the two terms \"\u96fb\u8166(computer)\" and \"\u8edf\u9ad4(software)\" with their POS are \"\u96fb\u8166 (computer) (Nab)\" and \"\u8edf\u9ad4(software) (Nac),\" respectively. Hence, the path distance of the term pair (\"\u96fb\u8166(computer)\", \"\u8edf\u9ad4(software)\") is 2 (Nab -> Na -> Nac).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3. The structure of the tagging tree derived using CKIP",
"sec_num": null
},
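A minimal sketch of this path-distance computation over the tagging tree. The parent links below cover only the fragment of the Figure 3 tree needed for the example and are otherwise assumptions.

```python
# Fragment of the Figure 3 tagging tree (parent links are assumptions).
PARENT = {"Na": "N", "Nb": "N", "Nc": "N", "Nab": "Na", "Nac": "Na"}

def path_to_root(tag):
    """Return the chain of tags from `tag` up to the root of the tree."""
    path = [tag]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def pos_distance(tag1, tag2):
    """Number of edges on the path between two tags in the tagging tree."""
    p1, p2 = path_to_root(tag1), path_to_root(tag2)
    depth_in_p1 = {t: i for i, t in enumerate(p1)}
    for j, t in enumerate(p2):
        if t in depth_in_p1:  # lowest common ancestor found
            return depth_in_p1[t] + j
    raise ValueError("tags are in disjoint trees")

print(pos_distance("Nab", "Nac"))  # 2, matching the computer/software example
```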
{
"text": "From the viewpoint of Chinese language characteristics, any term pair with common words will be similar in semantic meaning. For example, the Chinese terms in the term set {\u6c11\u9032\u9ee8, \u6c11\u9032\u9ee8\u5718, \u6c11\u4e3b\u9032\u6b65\u9ee8} are similar in semantic meaning since they are composed of the common words \"\u6c11\", \"\u9032\", and \"\u9ee8\". We also consider another characteristic of Chinese terms with respect to term vocabulary. This assumes that terms having the same starting or ending word will share some common linguistic properties [Yang et al. 1994] [Gao et al. 2001] . Good examples of starting and ending words are as follows: {\u661f\u671f\u4e00 (Monday), \u661f\u671f\uf9d1 (Saturday), \u661f\u671f\u65e5 (Sunday)} and {\u6628\u5929 (yesterday), \u660e\u5929 (tomorrow), \u4eca\u5929 (today), \u6bcf \u5929 (everyday)}. The first term set has the same starting word \"\u661f,\" and the second term set has the same ending word \"\u5929.\" The algorithm for computing resonance strength in TV [Lee et al. 2003 ] is shown below.",
"cite_spans": [
{
"start": 485,
"end": 503,
"text": "[Yang et al. 1994]",
"ref_id": "BIBREF21"
},
{
"start": 504,
"end": 521,
"text": "[Gao et al. 2001]",
"ref_id": "BIBREF5"
},
{
"start": 851,
"end": 867,
"text": "[Lee et al. 2003",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B. Resonance Strength in Term Vocabulary (TV)",
"sec_num": null
},
{
"text": "Algorithm for computing the resonance strength in TV Input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "All terms Step4: End.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "For example, the two terms \"\u6c11\u9032\u9ee8\u5718\" and \"\u6c11\u4e3b\u9032\u6b65\u9ee8\" have three common words, \"\u6c11,\" \"\u9032,\" and \"\u9ee8,\" and the same starting word, \"\u6c11,\" so the total strength is 3.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
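The TV computation implied by this example can be sketched as follows. Since the algorithm's step list is not recoverable from the source, the 0.5 bonus for a shared starting (or ending) word is an assumed weight, chosen so the sketch reproduces the 3.5 of the example.

```python
def tv_strength(t1, t2):
    """Resonance strength in TV: number of shared characters, plus an assumed
    0.5 bonus for a shared starting word and 0.5 for a shared ending word."""
    strength = float(len(set(t1) & set(t2)))
    if t1[0] == t2[0]:
        strength += 0.5
    if t1[-1] == t2[-1]:
        strength += 0.5
    return strength

print(tv_strength("民進黨團", "民主進步黨"))  # 3 common words + same starting word = 3.5
```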
{
"text": "A large amount of previous research has focused on how to best cluster similar terms together. The proposed methods can be roughly grouped into two categories: knowledge-based clustering and data-driven clustering [Gao et al. 2001] . However, the obtained term knowledge is not sufficient for concept clustering, because a term pair is sometimes similar in meaning but lacks common properties of knowledge. Therefore, the confidence value derived using the Apriori Algorithm for the term pair can be applied to decide the strength of term relation. A term pair with a high confidence value consists of two terms that have a strong relationship and can be classified as the same concept. For example, the term set {\u7e3d\u7d71 (President) (Nab), \u7e3d\u7d71\u5e9c(The Office of the President) (Nca), \u9673\u6c34\u6241(President Chen) (Nb)} represents similar concepts, so they will be clustered into the same concept. But from the viewpoint of term knowledge, only the two terms \"\u7e3d\u7d71(President)\" and \"\u7e3d\u7d71\u5e9c(The Office of the President)\" will be clustered into the same concept. The term \"\u9673\u6c34\u6241(President Chen)\" will not be clustered into the concept {\u7e3d\u7d71(President), \u7e3d\u7d71\u5e9c(The Office of the President)}. Therefore, the resonance strength in TA is necessary for concept clustering. In addition, the resonance strength is decided by the confidence value of the two terms, so we adopt the average of the two confidence values as the resonance strength in TA. For example, the term pair { \u7e3d \u7d71 (Nab), \u9673 \u6c34 \u6241 (Nb)} for the \"Political( \u653f \u6cbb \u7126 \u9ede )\" category (http://www.chinatimes.com.tw) with (\u7e3d\u7d71 -> \u9673\u6c34\u6241) has a confidence value of 0.84, and the confidence value of (\u9673\u6c34\u6241 -> \u7e3d\u7d71) is 0.80, so the resonance strength in TA is 0.82 ((0.84+0.80)/2 ).",
"cite_spans": [
{
"start": 214,
"end": 231,
"text": "[Gao et al. 2001]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C. Resonance Strength in Term Association (TA)",
"sec_num": null
},
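A sketch of the TA computation. `confidence` shows the standard association-rule confidence the values come from (support counts here are hypothetical inputs), and the numbers reproduce the example above.

```python
def confidence(support_both, support_antecedent):
    """Association-rule confidence: conf(A -> B) = support(A, B) / support(A)."""
    return support_both / support_antecedent

def ta_strength(conf_ab, conf_ba):
    """Resonance strength in TA: average of the two directional confidences."""
    return (conf_ab + conf_ba) / 2

print(round(ta_strength(0.84, 0.80), 2))  # 0.82, as in the example above
```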
{
"text": "Any two Chinese terms with the same common words or starting/ending words may not have the similar meaning. For example, consider the three Chinese terms \"\u7f8e\u570b(U.S.A.),\" \"\u7f8e\u65b9 (U.S.A.),\" and \"\u8b66\u65b9(police)\". The Chinese terms \"\u7f8e\u570b(U.S.A.)\" and \"\u7f8e\u65b9(U.S.A.)\" have the common starting word \"\u7f8e\"; meanwhile, the Chinese terms \"\u7f8e\u65b9(U.S.A.)\" and \"\u8b66\u65b9 (police)\" have the common ending word \"\u65b9\". But the common terms with a specific threshold of confidence for \"\u7f8e\u570b,\" \"\u7f8e\u65b9,\" and \"\u8b66\u65b9\" are as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D. Resonance Strength in Common Term Association (CTA)",
"sec_num": null
},
{
"text": "\u7f8e\u570b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D. Resonance Strength in Common Term Association (CTA)",
"sec_num": null
},
{
"text": "We adopt the parallel fuzzy inference mechanism for semantic concept clustering. The fuzzy variables for computing the CRS of any Chinese term pair are adopted in the mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parallel Fuzzy Inference Mechanism for Semantic Concept Generating",
"sec_num": "4."
},
{
"text": "In this subsection, we will explain how four input fuzzy variables can be aggregated into one output fuzzy variable to compute the CRS of each Chinese term pair. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregate Term Resonance with the Parallel Fuzzy Inference Mechanism",
"sec_num": "4.1"
},
{
"text": "Having described the fuzzy variables used to compute the CRS of a Chinese term pair, we will next explain how the parallel fuzzy inference mechanism proposed by Kuo et al. [1998] and Lin [1991] is used to perform semantic concept clustering. Figure 9 shows the structure of the parallel fuzzy inference mechanism. It is a three-layered network which can be constructed by directly mapping from a set of specific fuzzy rules, or can be learned incrementally from a set of training patterns. In our approach, CRS the rules are defined by the domain expert. The structure consists of a premise layer, rule layer, and conclusion layer. There are two kinds of nodes, fuzzy linguistic nodes and rule nodes, in this model. A fuzzy linguistic node represents a fuzzy variable and manipulates the information related to that linguistic variable. A rule node represents a rule and determines the final firing strength of that rule during the inferring process. The premise layer performs the first inference step to compute matching degrees. The conclusion layer is responsible for drawing conclusions and defuzzification. We will describe each layer in the following.",
"cite_spans": [
{
"start": 161,
"end": 178,
"text": "Kuo et al. [1998]",
"ref_id": "BIBREF9"
},
{
"start": 183,
"end": 193,
"text": "Lin [1991]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 9",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Figure 8. The membership functions of the CRS fuzzy variable",
"sec_num": null
},
{
"text": "As shown in Figure 9 , the first layer is called the premise layer and is used to represent the premise part of the fuzzy system. Each fuzzy variable appearing in the premise part is represented with a condition node. Each of the outputs of the condition node is connected to some nodes in the second layer to constitute a condition specified in some rules. Note that the output links must be emitted from proper linguistic terms as specified in the fuzzy rules. In other words, a linguistic node is a polymorphic object that can be viewed differently by different fuzzy rules. Figure 10 shows the fuzzy linguistic node for the TA fuzzy variable. ",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 9",
"ref_id": "FIGREF4"
},
{
"start": 578,
"end": 587,
"text": "Figure 10",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "A. Premise layer:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u23a7 \u23aa \u2212 \u2212 \u23aa \u23aa = \u23a8 \u23aa \u2212 \u2212 \u23aa \u23aa \u23a9 x a a x b b x c c x d x d < \u2264 \u2264 \u2264 \u2264 \u2264 < \u2265 , (2) 1 triangular ij trapezoidal f f f \u23a7 \u23aa = \u23a8 \u23aa \u23a9 1 or 1 or j n j n \u2260 = ,",
"eq_num": "(3)"
}
],
"section": "A. Premise layer:",
"sec_num": null
},
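The two membership-function shapes used in the premise layer can be sketched directly. Parameter names follow the usual trapezoid corners (a, b, c, d); the exact corner values per linguistic term are not given in the source, so the usage numbers are assumptions.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with corners a <= b <= c <= d (Eq. 2)."""
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)  # rising edge
    if x <= c:
        return 1.0                # plateau
    return (d - x) / (d - c)      # falling edge

def triangle(x, a, b, c):
    """Triangular membership function: a trapezoid whose plateau collapses (b == c)."""
    return trapezoid(x, a, b, b, c)

print(trapezoid(0.125, 0.0, 0.25, 0.75, 1.0))  # 0.5, on the rising edge
```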
{
"text": "where n is the number of linguistic terms for the i-th linguistic node. Therefore,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A. Premise layer:",
"sec_num": null
},
{
"text": ") ( 1 1 x f ij ij = \u00b5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A. Premise layer:",
"sec_num": null
},
{
"text": "The second layer is called the rule layer. In it, each node is a rule node and is used to represent a fuzzy rule. The links in this layer are used to perform precondition matching of fuzzy logic rules. The output of a rule node in the rule layer is linked to associated linguistic nodes in the third layer. In our model, the rules are previously defined by domain experts. Table 3 shows the fuzzy inference rules for the parallel fuzzy inference mechanism. Figure 11 shows the structure of the rule node. ",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 380,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 457,
"end": 466,
"text": "Figure 11",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "B. Rule layer:",
"sec_num": null
},
{
"text": "\u00b5 1 \u00b5 2 \u00b5 3 \u00b5 4 w 4 w 2 w 3 w 1 f r (\u2027) S (\u2027) \u00b5 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
{
"text": "TV TA CTA CRS 1 L L L L VL 19 H L L L VL 2 L L L M VL 20 H L L M VL 3 L L L H L 21 H L L H L 4 L L M L L 22 H L M L L 5 L L M M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
{
"text": "The r f function in Figure 11 provides the net input for this node and is defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 29,
"text": "Figure 11",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
{
"text": "1 p r i i i f w \u00b5 = = \u2211 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
{
"text": "The S function is used to normalize the r f function and is defined in Eq. 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
{
"text": "2 2 0 , 2( ) , 2 ( : , ) 1 2( ) , 2 1 , x a x a a b a x b a S x a b x b a b x b b a x b < \u23a7 \u23aa \u2212 + \u23aa \u2264 \u2264 \u23aa \u2212 = \u23a8 \u2212 + \u23aa \u2212 \u2264 < \u23aa \u2212 \u23aa \u2265 \u23a9 . (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
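A rule node's computation, the weighted sum of matching degrees followed by the S-shaped normalization, can be sketched as follows; the equal weights in the usage line are an assumption for illustration.

```python
def f_r(mus, weights):
    """Net input of a rule node: the weighted sum of its matching degrees."""
    return sum(w * m for w, m in zip(weights, mus))

def s_norm(x, a, b):
    """S-shaped normalization of the net input (Eq. 4)."""
    if x < a:
        return 0.0
    mid = (a + b) / 2
    if x <= mid:
        return 2 * ((x - a) / (b - a)) ** 2
    if x < b:
        return 1 - 2 * ((x - b) / (b - a)) ** 2
    return 1.0

# Firing strength of a rule with four inputs in [0, 1] (equal weights assumed):
strength = s_norm(f_r([0.9, 0.8, 0.7, 0.6], [0.25, 0.25, 0.25, 0.25]), 0.0, 1.0)
```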
{
"text": "In our case, the rule node has four inputs, and each input value is between 0 and 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule node",
"sec_num": null
},
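The rule-node computation above, the net input f_r followed by the S-function normalization of Eq. (4), can be sketched as follows. This is a minimal illustration, not the authors' implementation; the weights, the bounds a and b, and the four input membership values are illustrative assumptions.

```python
# Sketch of one rule node in the parallel fuzzy inference mechanism.

def f_r(mu, w):
    """Net input of a rule node: f_r = sum_{i=1}^{p} w_i * mu_i."""
    return sum(wi * mi for wi, mi in zip(w, mu))

def S(x, a, b):
    """S-function of Eq. (4), mapping the net input into [0, 1]."""
    if x < a:
        return 0.0
    if x <= (a + b) / 2:
        return 2 * ((x - a) / (b - a)) ** 2
    if x < b:
        return 1 - 2 * ((x - b) / (b - a)) ** 2
    return 1.0

# Four inputs in [0, 1], as in our model (POS, TV, TA, CTA memberships).
mu = [0.8, 0.6, 0.9, 0.7]
w = [1.0, 1.0, 1.0, 1.0]   # equal weights: an illustrative assumption
firing = S(f_r(mu, w), a=0.0, b=4.0)
print(round(firing, 4))    # -> 0.875
```

With a = 0 and b = p (the number of inputs), the S-function squashes the weighted sum into a firing strength between 0 and 1.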
{
"text": "The third layer is called the conclusion layer. This layer is also composed of a set of fuzzy linguistic nodes. A fuzzy linguistic node can also operate in a reverse mode, called a conclusion node. In the reverse mode, fuzzy linguistic nodes are responsible for drawing conclusions and defuzzification. Figure 12 shows the structure of a linguistic node operating in the reverse mode; such a node also serves as an output node.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 312,
"text": "Figure 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "C. Conclusion layer:",
"sec_num": null
},
{
"text": "In our model, the final output y is the crisp value that is produced by combining all the inference results with their firing strengths. The defuzzification process is defined in Eq. 5: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 12. The structure of a fuzzy linguistic node for the conclusion layer",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CrispOutput = y = \\frac{\\sum_{i=1}^{r} \\sum_{j=1}^{c} y_{ij}^{k} w_{ij}^{k} V_{ij}}{\\sum_{i=1}^{r} \\sum_{j=1}^{c} y_{ij}^{k} w_{ij}^{k}} ,",
"eq_num": "(5)"
}
],
"section": "Figure 12. The structure of a fuzzy linguistic node for the conclusion layer",
"sec_num": null
},
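The defuzzification of Eq. (5), a firing-strength-weighted average of the representative values of the output fuzzy sets, can be sketched as below. The sketch collapses the double sum to one firing strength per output linguistic term; the strengths and the centers of the CRS membership functions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of weighted-average defuzzification (cf. Eq. (5)).

def defuzzify(strengths, centers):
    """CrispOutput = sum(strength * center) / sum(strength)."""
    num = sum(s * c for s, c in zip(strengths, centers))
    den = sum(strengths)
    return num / den if den else 0.0

# Firing strengths for the five CRS terms (Very_Low .. Very_High) and
# assumed centers of their membership functions on [0, 1].
strengths = [0.0, 0.1, 0.6, 0.3, 0.0]
centers   = [0.0, 0.25, 0.5, 0.75, 1.0]
print(round(defuzzify(strengths, centers), 3))   # -> 0.55
```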
{
"text": "The conceptual resonance of a term pair can be treated as a fuzzy compatibility relation, because it satisfies the properties of reflexivity and symmetry. Therefore, the problem of concept clustering is that of finding all the classes of maximal \u03b1-compatibles with respect to the fuzzy compatibility relation. In this model, the value of \u03b1 represents a specified membership degree of the fuzzy compatibility relation. The semantic concept clustering algorithm based on the fuzzy compatibility relation approach is described as follows. Input: a term set X with n terms for the specific news category, and its corresponding fuzzy conceptual resonance matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fuzzy Compatibility Relation Approach to Semantic Concept Clustering",
"sec_num": "4.2"
},
{
"text": "A = [\\alpha_{ij}]_{n \\times n} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concept Clustering Algorithm based on the Fuzzy Compatibility Relation",
"sec_num": null
},
{
"text": "The Final_Concept_Set, which is the set of Domain Ontology Concepts. Method:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1: For",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "i \u2190 1 to n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.1: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Set_i \u2190 \u03a6 /* Set_i denotes the \u03b1-compatible term set of Term[i] */",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.2: S_i \u2190 0 /* S_i denotes the cardinality of Set_i */ Step 1.3: Set_i \u2190 Set_i \u222a {Term[i]}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Temp_Set \u2190 \u03a6 /* Temp_Set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "denotes the set of existing concept subsets */ Step 1.5: For j \u2190 i to n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.5.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "If",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "\u03b1_{ij} \u2265 \u03b1 Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.5.1.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Set_i \u2190 Set_i \u222a {Term[j]}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.5.1.2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "S_i \u2190 S_i + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.6: Determine the power set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "p_k of Set_i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.6.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "S_{p_k} \u2190 |p_k|, where k = 1, ..., 2^{S_i} /* S_{p_k} denotes the cardinality of p_k */",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "For k \u2190 1 to 2^{S_i}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "Step 1.7.1: If",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output:",
"sec_num": null
},
{
"text": "p_k \u2208 Temp_Set Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temp p k",
"sec_num": null
},
{
"text": "Step 1.7.2: Continue",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Continue",
"sec_num": null
},
{
"text": "Step 1.7.3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "For l \u2190 1 to S_{p_k} - 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Step 1.7.3.1: For",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "m \u2190 l + 1 to S_{p_k}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Step 1.7.3.1.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "n \u2190 Index of p_k[l] in X; q \u2190 Index of p_k[m] in X. Step 1.7.4.1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Final_Concept_Set \u2190 Final_Concept_Set \u222a {p_k}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Step 1.7.4.2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Temp_Set \u2190 Temp_Set \u222a {P(p_k) - p_k - \u03a6} /* P(p_k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "denotes the power set of p_k */",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
{
"text": "Step 2: End.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 flag",
"sec_num": "0"
},
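The clustering idea above, threshold the conceptual resonance matrix at \u03b1 and keep the maximal classes of pairwise \u03b1-compatible terms, can be sketched compactly in Python. This is a simplified reimplementation of the idea, not the paper's step-by-step procedure (it greedily prunes incompatible members instead of enumerating power sets); the toy terms and matrix are illustrative assumptions.

```python
# Simplified semantic concept clustering via the fuzzy compatibility relation.
from itertools import combinations

def alpha_compatibles(terms, A, alpha):
    n = len(terms)
    candidates = []
    for i in range(n):
        # candidate class: all terms alpha-compatible with term i
        members = [j for j in range(n) if A[i][j] >= alpha]
        # greedily drop members until the class is pairwise compatible
        while any(A[p][q] < alpha for p, q in combinations(members, 2)):
            members.remove(min(members, key=lambda j: sum(A[j][k] for k in members)))
        candidates.append(frozenset(members))
    # keep only maximal classes (not proper subsets of another class)
    maximal = [c for c in set(candidates) if not any(c < d for d in candidates)]
    return [{terms[j] for j in c} for c in maximal]

terms = ["A", "B", "C"]
A = [[1.0, 0.9, 0.2],
     [0.9, 1.0, 0.3],
     [0.2, 0.3, 1.0]]
print(alpha_compatibles(terms, A, alpha=0.8))
```

At \u03b1 = 0.8 the toy matrix yields the two concepts {A, B} and {C}; raising \u03b1 above 0.9 splits every term into its own concept, mirroring the \u03b1-dependence discussed in the text.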
{
"text": "The method used to determine \u03b1 is very important for semantic concept clustering, because it influences the number of concepts and the degree of compatibility of the Chinese terms. The value of \u03b1 may vary for different domain documents, because their properties may be different. We now use an example to illustrate concept clustering under different values of \u03b1 . In Figure 13 , the terms are clustered based on a specific value of \u03b1 , and they point to the same concept if their CRS values are greater than \u03b1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 388,
"text": "Figure 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "\u9673\u6c34\u6241, \u7e3d\u7d71, \u7e3d\u7d71\u5e9c, \u6c11\u4e3b\u9032\u6b65\u9ee8, \u6c11\u9032\u9ee8, \u57f7\u653f\u9ee8, \u4e2d\u592e\u653f\u5e9c, \u653f\u9ee8, \u526f\u7e3d\u7d71, \uf980\u79c0\uf999, \u89aa\u6c11\u9ee8, \u5b8b\u695a\u745c, \u570b\u6c11\u9ee8, \u5728\u91ce\u9ee8, \u53f0\uf997, \u524d\u7e3d\u7d71, \uf9e1\u767b\u8f1d, \ufa08\u653f\u9662, \ufa08\u653f\u9662\u9577, \u95a3\u63c6, \u5916\u4ea4\u90e8, \u5167\u653f\u90e8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "If we reduce the value of \u03b1 , then the terms will be clustered with high compatibility. Figure 14 shows the concepts clustered based on a lower value of \u03b1 than that used in Figure 13 .",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 97,
"text": "Figure 14",
"ref_id": null
},
{
"start": 172,
"end": 183,
"text": "Figure 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 13. The concepts clustered based on a specific value of \u03b1",
"sec_num": null
},
{
"text": "\u9673\u6c34\u6241, \u7e3d\u7d71, \u7e3d\u7d71\u5e9c, \u6c11\u4e3b\u9032\u6b65\u9ee8, \u6c11\u9032\u9ee8, \u57f7\u653f\u9ee8, \u4e2d\u592e\u653f\u5e9c, \u653f\u9ee8, \u526f\u7e3d\u7d71, \uf980\u79c0\uf999, \u89aa\u6c11\u9ee8, \u5b8b\u695a\u745c, \u570b\u6c11\u9ee8, \u5728\u91ce\u9ee8, \u53f0\uf997, \u524d\u7e3d\u7d71, \uf9e1\u767b\u8f1d, \ufa08\u653f\u9662, \ufa08\u653f\u9662\u9577, \u95a3\u63c6, \u5916\u4ea4\u90e8, \u5167\u653f\u90e8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13. The concepts clustered based on a specific value of \u03b1",
"sec_num": null
},
{
"text": "A lower \u03b1 value will result in the formation of more concepts and strengthen the compatibility degree of the terms for a specific concept. The \u03b1 decision algorithm for semantic concept clustering is described below. The prune-and-search strategy is applied to solve this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 14. The concepts clustered based on a lower value of \u03b1",
"sec_num": null
},
{
"text": "CRS of terms for a specific news category ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "R \u2190 the set of CRS values,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "where n is the number of values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "Step 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "Sort all the elements of R so that R[i] \u2264 R[j] for 0 \u2264 i < j \u2264 n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "Step 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "p \u2190 n/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "Step 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The \u03b1 Decision Algorithm for semantic concept clustering Input:",
"sec_num": null
},
{
"text": "Step 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 count",
"sec_num": "1"
},
{
"text": "\u03b1 \u2190 R[p]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 count",
"sec_num": "1"
},
{
"text": ", and let the number of classes of maximal \u03b1-compatibles be c .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 count",
"sec_num": "1"
},
{
"text": "Step 5.1: Step 6: End.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 count",
"sec_num": "1"
},
{
"text": "count \u2190 count + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2190 count",
"sec_num": "1"
},
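The prune-and-search idea behind the \u03b1 decision, sort the CRS values and repeatedly probe the midpoint, amounts to a binary search. The sketch below is a hedged illustration: `cluster_count` is a stand-in assumption for running the concept clustering algorithm at a given \u03b1 and counting the resulting classes, and the toy values assume the count grows with \u03b1.

```python
# Binary-search (prune-and-search) sketch of the alpha decision algorithm.

def decide_alpha(R, cluster_count, target):
    R = sorted(R)                      # Step 2: R[i] <= R[j] for i < j
    lo, hi = 0, len(R) - 1
    while lo < hi:
        p = (lo + hi) // 2             # Step 3: probe the midpoint
        if cluster_count(R[p]) < target:
            lo = p + 1                 # prune the lower half
        else:
            hi = p                     # prune the upper half
    return R[lo]

# Toy stand-in: cluster counts at each candidate alpha (assumed values).
counts = {0.2: 1, 0.4: 2, 0.6: 3, 0.8: 5}
alpha = decide_alpha(list(counts), lambda a: counts[a], target=3)
print(alpha)   # -> 0.6
```

Each probe halves the candidate set, so the \u03b1 decision needs only O(log n) runs of the clustering step rather than one per candidate value.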
{
"text": "In this section, we present some experimental results obtained using the proposed approach. The news corpus was gathered between May 2001 and March 2002 from the ChinaTimes website (http://www.chinatimes.com.tw). Seven categories of news, consisting of \"Political\" (\u653f\u6cbb\u7126\u9ede), \"International\" (\u570b\u969b\u8981\u805e), \"Finance\" (\u80a1\u5e02\u8ca1\u7d93), \"Cross-Strait\" (\uf978\u5cb8\u98a8\u96f2), \"Societal\" (\u793e\u6703\u5730\u65b9), \"Entertainment\" (\u904b\u52d5\u5a1b\uf914) and \"Life\" (\u751f\u6d3b\u65b0\u77e5), were used in the experiments. Table 4 lists the number of documents for each news category, the Chinese terms produced by the refining tagger, the remaining terms produced by the stop word filter, and the filtering percentages for the Chinese terms and remaining terms. Next, we analyze the CRS results for the Chinese term pairs. Table 5 shows the partial CRS results with the highest values for the \"Political\" (\u653f\u6cbb\u7126\u9ede) category. Notice that each term pair exhibits not only strong similarity in term knowledge for the POS and TV fuzzy variables but also high strength for the TA and CTA fuzzy variables. The term pairs marked with asterisks (*) exhibited strong TA and CTA but weak POS and TV. The next experiment was conducted to obtain semantic concept clustering results under various \u03b1 values. In this experiment, the number of concepts varied between 500 and 1,000 for each news category. Table 6 shows that different values of \u03b1 produced various numbers of concepts containing different terms. The experimental results show that the semantic concept clustering results were influenced by the values of \u03b1. Table 7 shows the concept clustering results under various values of \u03b1 for the \"Life\" (\u751f\u6d3b\u65b0\u77e5) category. Table 8 shows a partial listing of the concepts, including concept names, attributes and operations, in the gold-standard ontology of the \"Life\" (\u751f\u6d3b\u65b0\u77e5) category. Notice that the concepts with higher values of \u03b1 are subsets of the concepts with lower values of \u03b1; that is, a lower value of \u03b1 generated a concept with more Chinese terms. In the final experiment, we tested the performance measures Precision and Recall. We chose four students who were working toward M.S. degrees in Computer Science and Information Management and asked them to evaluate the values obtained using Eq. (6) and Eq. (7) for precision and recall. Figures 15-20 show the average precision and recall curves based on the evaluations performed by these four experts. Table 8 shows an example of the gold-standard concepts for the \"Life\" (\u751f\u6d3b\u65b0\u77e5) category. The Precision and Recall measure formulas used in this study are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 4",
"ref_id": "TABREF9"
},
{
"start": 751,
"end": 758,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 1346,
"end": 1353,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 1567,
"end": 1574,
"text": "Table 7",
"ref_id": null
},
{
"start": 1670,
"end": 1677,
"text": "Table 8",
"ref_id": null
},
{
"start": 2305,
"end": 2314,
"text": "Figure 15",
"ref_id": null
},
{
"start": 2420,
"end": 2427,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5."
},
{
"text": "Precision = (the number of relevant common terms in the gold-standard concept and the automatically generated semantic concept) / (the number of terms in the automatically generated semantic concept), (6) Recall = (the number of relevant common terms in the gold-standard concept and the automatically generated semantic concept) / (the number of terms in the gold-standard concept). (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
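The Precision and Recall measures can be computed directly on term sets. The sketch below evaluates one automatically generated concept against a gold-standard concept; the term sets themselves are illustrative assumptions.

```python
# Precision/Recall for one generated concept vs. its gold-standard concept.

def precision_recall(generated, gold):
    common = generated & gold   # relevant common terms
    precision = len(common) / len(generated) if generated else 0.0
    recall = len(common) / len(gold) if gold else 0.0
    return precision, recall

generated = {"\u7e3d\u7d71", "\u7e3d\u7d71\u5e9c", "\u526f\u7e3d\u7d71", "\ufa08\u653f\u9662"}
gold      = {"\u7e3d\u7d71", "\u7e3d\u7d71\u5e9c", "\u526f\u7e3d\u7d71", "\u524d\u7e3d\u7d71", "\u57f7\u653f\u9ee8"}
p, r = precision_recall(generated, gold)
print(p, r)   # 3 common terms: precision 3/4, recall 3/5
```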
{
"text": "Figure 15-17 show the average precision results obtained based on the evaluations performed by the four domain experts for various \u03b1 values. Figure 18 -20 show the average recall results obtained based on the evaluations performed by the four domain experts for various \u03b1 values. ",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "Figure 18",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Concepts of a News Ontology",
"sec_num": null
},
{
"text": "This paper has presented a Chinese term clustering mechanism for generating semantic concepts of a news ontology. We utilize the parallel fuzzy inference mechanism to infer the conceptual resonance strength of any two Chinese terms. In addition, the CKIP tool is used in Chinese natural language processing, including part-of-speech tagging, Chinese term analysis, and Chinese term feature selection. A fuzzy compatibility relation approach to semantic concept clustering has also been proposed. Simulation results show that our approach can effectively cluster Chinese terms to generate the semantic concepts of a news ontology. In the future, we will extend our approach to help construct domain ontologies more efficiently. Moreover, we will adopt a genetic learning mechanism to learn the membership functions of the fuzzy inference rules for the parallel fuzzy inference mechanism. Finally, mixed Chinese/English documents will also be employed to construct a more complex domain ontology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "The authors would like to express their gratitude to the anonymous reviewers for their comments, which improved the quality of this paper. This work was partially supported by the Ministry of Economic Affairs in Taiwan under grant 93-EC-17-A-02-S1-029 and partially sponsored by the National Science Council of Taiwan (R. O. C.) under grant NSC-93-2213-E-309-003.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledge",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic Ontology-Based Knowledge Extraction from Web Documents",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "D",
"middle": [
"E"
],
"last": "Millard",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Weal",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "P",
"middle": [
"H"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "N",
"middle": [
"R"
],
"last": "Shadbolt",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Intelligent Systems",
"volume": "18",
"issue": "1",
"pages": "14--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alani, H., S. Kim, D. E. Millard, M. J. Weal, W. Hall, P. H. Lewis and N. R. Shadbolt, \"Automatic Ontology-Based Knowledge Extraction from Web Documents,\" IEEE Intelligent Systems, 18 (1) 2003, pp. 14-21.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Academia Sinica Balanced Corpus",
"authors": [],
"year": 1998,
"venue": "Academia Sinica",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CKIP, \"Academia Sinica Balanced Corpus,\" Technical Report, No. 95-02/98-04, Academia Sinica, Taiwan, 1998.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese Electronic Dictionary",
"authors": [],
"year": 1993,
"venue": "Academia Sinica",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CKIP, \"Chinese Electronic Dictionary,\" Technical Report, No. 93-05, Academia Sinica, Taiwan, 1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ontology-based extraction and structuring of information from data-rich unstructured documents",
"authors": [
{
"first": "D",
"middle": [
"W"
],
"last": "Embley",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "R",
"middle": [
"D"
],
"last": "Smith",
"suffix": ""
},
{
"first": "S",
"middle": [
"W"
],
"last": "Liddles",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceeding Of ACM Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Embley, D. W., D. M. Campbell, R. D. Smith and S. W. Liddles, \"Ontology-based extraction and structuring of information from data-rich unstructured documents,\" Proceeding Of ACM Conference on Information and Knowledge Management, USA, 1998, pp. 52-59.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ontology-based Knowledge Management",
"authors": [
{
"first": "D",
"middle": [],
"last": "Fensel",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Computer",
"volume": "35",
"issue": "11",
"pages": "56--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fensel, D., \"Ontology-based Knowledge Management,\" IEEE Computer, 35 (11) 2002, pp. 56-59.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Use of Clustering Techniques for Language Modeling -Application to Asian Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "6",
"issue": "",
"pages": "27--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, J., J. T. Goodman and J. Miao, \"The Use of Clustering Techniques for Language Modeling -Application to Asian Language,\" Computational Linguistics and Chinese Language Processing, 6 (1) 2001, pp. 27-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ontology languages for the semantic web",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gomez-Perez",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Corcho",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Intelligent Systems",
"volume": "17",
"issue": "1",
"pages": "54--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gomez-Perez, A and O. Corcho, \"Ontology languages for the semantic web,\" IEEE Intelligent Systems, 17 (1), 2002, pp. 54-60.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "OntoSeek: Content-based access to the web",
"authors": [
{
"first": "N",
"middle": [],
"last": "Guarino",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Masolo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vetere",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE Intelligent Systems",
"volume": "14",
"issue": "3",
"pages": "70--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guarino, N., C. Masolo and G. Vetere, \"OntoSeek: Content-based access to the web,\" IEEE Intelligent Systems, 14 (3) 1999, pp. 70-80.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using Statistical Methods to Improve Knowledge-Based News Categorization",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Jacobes",
"suffix": ""
}
],
"year": 1993,
"venue": "IEEE Expert",
"volume": "8",
"issue": "2",
"pages": "13--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacobes, P. S., \"Using Statistical Methods to Improve Knowledge-Based News Categorization,\" IEEE Expert, 8 (2) 1993, pp. 13-23.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Parallel Fuzzy Inference Model with Distributed Prediction Scheme for Reinforcement Learning",
"authors": [
{
"first": "Y",
"middle": [
"H"
],
"last": "Kuo",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Hsu",
"suffix": ""
},
{
"first": "C",
"middle": [
"W"
],
"last": "Wang",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. on Systems, Man, and Cybernetics",
"volume": "28",
"issue": "2",
"pages": "160--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuo, Y. H., J. P. Hsu and C. W. Wang, \"A Parallel Fuzzy Inference Model with Distributed Prediction Scheme for Reinforcement Learning,\" IEEE Trans. on Systems, Man, and Cybernetics, 28 (2) 1998, pp. 160-172.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building and maintaining ontologies: a set of algorithm",
"authors": [
{
"first": "N",
"middle": [],
"last": "Lammari",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Metais",
"suffix": ""
}
],
"year": 2004,
"venue": "Data & Knowledge Engineering",
"volume": "48",
"issue": "2",
"pages": "155--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lammari, N. and E. Metais, \"Building and maintaining ontologies: a set of algorithm,\" Data & Knowledge Engineering, 48 (2) 2004, pp. 155-176.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ontology-based Fuzzy Event Extraction Agent for Chinese e-news Summarization",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Y",
"middle": [
"J"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [
"W"
],
"last": "Jian",
"suffix": ""
}
],
"year": 2003,
"venue": "Expert Systems with Applications",
"volume": "25",
"issue": "3",
"pages": "431--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, C. S., Y. J. Chen and Z. W. Jian, \"Ontology-based Fuzzy Event Extraction Agent for Chinese e-news Summarization,\" Expert Systems with Applications, 25 (3) 2003, pp. 431-447",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Weighted Fuzzy Ontology for Chinese e-News Summarization",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Guo",
"suffix": ""
},
{
"first": "Z",
"middle": [
"W"
],
"last": "Jian",
"suffix": ""
}
],
"year": 2004,
"venue": "2004 IEEE International Conference on Fuzzy Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, C. S., S. M. Guo and Z. W. Jian, \"Weighted Fuzzy Ontology for Chinese e-News Summarization,\" 2004 IEEE International Conference on Fuzzy Systems, USA, 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Introduction to the Design and Analysis of Algorithms",
"authors": [
{
"first": "R",
"middle": [
"C T"
],
"last": "Lee",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
},
{
"first": "S",
"middle": [
"S"
],
"last": "Tseng",
"suffix": ""
},
{
"first": "Y",
"middle": [
"T"
],
"last": "Tsai",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, R. C. T., R. C. Chang, S. S. Tseng and Y. T. Tsai, \"Introduction to the Design and Analysis of Algorithms,\" Unalis co., Taipei, 1999.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural-Network-Based Fuzzy Logic Control and Decision System",
"authors": [
{
"first": "C",
"middle": [
"T"
],
"last": "Lin",
"suffix": ""
},
{
"first": "C",
"middle": [
"S G"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Trans. Computers",
"volume": "40",
"issue": "12",
"pages": "1320--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C. T. and C. S. G. Lee, \"Neural-Network-Based Fuzzy Logic Control and Decision System,\" IEEE Trans. Computers, 40 (12) 1991, pp. 1320-1336.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Integrated approach to web ontology learning and engineering",
"authors": [
{
"first": "M",
"middle": [],
"last": "Missikoff",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Computer",
"volume": "35",
"issue": "11",
"pages": "60--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Missikoff, M., R. Navigli and P. Velardi, \"Integrated approach to web ontology learning and engineering,\" IEEE Computer, 35 (11) 2002, pp. 60-63.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ontology learning and its application to automated terminology translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Intelligent Systems",
"volume": "18",
"issue": "1",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navigli, R. and P. Velardi, \"Ontology learning and its application to automated terminology translation,\" IEEE Intelligent Systems, 18 (1) 2003, pp. 22-31.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ontology-based photo annotation",
"authors": [
{
"first": "A",
"middle": [
"T"
],
"last": "Schreiber",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dubbeldam",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wielemaker",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wielinga",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Intelligent Systems",
"volume": "16",
"issue": "3",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schreiber, A.T., B. Dubbeldam, J. Wielemaker and B. Wielinga, \"Ontology-based photo annotation,\" IEEE Intelligent Systems, 16 (3) 2001, pp. 66-74.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ontology-based information retrieval in a multi-agent system for digital library",
"authors": [
{
"first": "V",
"middle": [
"W"
],
"last": "Soo",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceeding Of the sixth conference on artificial intelligence and applications",
"volume": "",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo, V. W. and C. Y. Lin, \"Ontology-based information retrieval in a multi-agent system for digital library,\" Proceeding Of the sixth conference on artificial intelligence and applications, Taiwan, 2001, pp. 241-246.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Knowledge engineering: principles and methods",
"authors": [
{
"first": "R",
"middle": [],
"last": "Studer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Benjamins",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fensel",
"suffix": ""
}
],
"year": 1998,
"venue": "Data and Knowledge Engineering",
"volume": "25",
"issue": "1",
"pages": "161--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Studer, R., R. Benjamins and D. Fensel, \"Knowledge engineering: principles and methods,\" Data and Knowledge Engineering, 25 (1) 1998, pp. 161-197.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bottom-Up Construction Ontologies",
"authors": [
{
"first": "",
"middle": [],
"last": "Van Der",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Vet",
"suffix": ""
},
{
"first": "N",
"middle": [
"J I"
],
"last": "Mars",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. on Knowledge and data Engineering",
"volume": "10",
"issue": "4",
"pages": "513--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Der Vet, P.E. and N.J.I. Mars, \"Bottom-Up Construction Ontologies,\" IEEE Trans. on Knowledge and data Engineering, 10 (4) 1998, pp. 513-526.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An intelligent and efficient word-class-based Chinese language model for Mandarin speech recognition with very large vocabulary",
"authors": [
{
"first": "Y",
"middle": [
"J"
],
"last": "Yang",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Lin",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Chien",
"suffix": ""
},
{
"first": "K",
"middle": [
"J"
],
"last": "Chen",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceeding of ICSLP-94",
"volume": "",
"issue": "",
"pages": "1371--1374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Y. J., S. C. Lin, L. F. Chien, K. J. Chen and L. S. Lee, \"An intelligent and efficient word-class-based Chinese language model for Mandarin speech recognition with very large vocabulary,\" Proceeding of ICSLP-94, Yokohama, Japan, 1994, pp. 1371-1374.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "If the starting word of two Chinese terms is the same then TV ( , ) If the ending word of two Chinese terms is the same then TV ( , ) a b t t = TV ( , ) a b t t + 0.5. Step2: TV Max = maximum TV ( , ) i j t t value of all term pairs. Step3: TV Min = minimum TV ( , ) i j t t value of all term pairs.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "(U.S.A.) -> {\u767d\u5bae(White House),\u5e03\u5e0c(Bush),\uf9cf\u7d04(New York)}; \u7f8e\u65b9(U.S.A.) -> {\u767d\u5bae(White House),\u5e03\u5e0c(Bush),\u4e94\u89d2\u5927\u5ec8(Pentagon)}; \u8b66\u65b9(police) -> {\u8b66\u54e1(policeman),\u5211\u4e8b\u7d44(criminal investigation),\u5206\u5c40(police station)}. Hence, the common term set for {\u7f8e\u570b(U.S.A.), \u7f8e\u65b9(U.S.A.)} is {\u767d\u5bae(White House), \u5e03\u5e0c (Bush)}, and for {\u8b66\u65b9(police), \u7f8e\u65b9(U.S.A.)} is Null. Therefore, the term pair {\u7f8e\u570b (U.S.A.), \u7f8e\u65b9(U.S.A.)} has stronger resonance in CTA than the term pair {\u8b66\u65b9(police), \u7f8e \u65b9(U.S.A.)} does.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "The membership functions of the POS fuzzy variableTwo linguistic terms, TV_Low and TV_High, are defined in the TV fuzzy variable.Figure 5 shows the membership functions of the fuzzy sets {TV_Low, TV_High} for the fuzzy variable TV similarity, The membership functions of the TV fuzzy variable Three linguistic terms, consisting of TA_Low, TA_Medium, and TA_High, are defined in the TA fuzzy variable. The membership functions of the fuzzy sets {TA_Low, TA_Medium, TA_High} for the fuzzy variable TA strength are shown in Figure 6The membership functions of the TA fuzzy variable Three linguistic terms, consisting of CTA_Low, CTA_Medium, and CTA_High, are defined in the CTA fuzzy variable. Figure 7 shows the membership functions of the fuzzy sets {CTA_Low, CTA_Medium, CTA_High} for the fuzzy variable CTA strength, The membership functions of the CTA fuzzy variable Five linguistic terms, consisting of CRS_Very_Low, CRS_Low, CRS_Medium, CRS_High, and CRS_Very_High, are defined in the CRS fuzzy variable. Figure 8 shows the membership functions of the fuzzy sets {CRS_Very_Low, CRS_Low, CRS_Medium, CRS_High, CRS_Very_High} for the fuzzy variable CRS strength,",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "The structure of the parallel fuzzy inference mechanism for semantic concept clustering",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "The structure of the fuzzy linguistic node for the TA fuzzy variableThe premise layer performs the first inference step to compute matching degrees. the j-th linguistic term in the i-th condition node. In our approach, the triangular function and trapezoidal function are adopted as the membership functions for the linguistic terms. Equation 1 and 2 show the triangular and trapezoidal membership functions, respectively:",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "The structure of the rule node",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "is the center of gravity, r is the number of corresponding rule nodes, c is the number of linguistic terms of the output node, n is the number of fuzzy variables in the premise layer, and k represents the k-th layer. The values of r, c, n, and k adopted here are 36, 5, 4 and 2, respectively.",
"num": null,
"type_str": "figure"
},
"FIGREF11": {
"uris": null,
"text": "Figure 15. The average precision results for different\u03b1 values (Concept name)",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Ca</td><td>\u4e26\uf99c\uf99a\u63a5\u8a5e</td><td>\u548c\u3001\u6216\u8005</td></tr><tr><td>Cb</td><td>\u95dc\uf997\uf99a\u63a5\u8a5e</td><td>\u96d6\u7136\u3001\uf967\u4f46</td></tr><tr><td>Da</td><td>\uf969\uf97e\u526f\u8a5e</td><td>\u4e00\u5171\u3001\u6070\u597d</td></tr><tr><td>Dba</td><td>\u6cd5\u76f8\u526f\u8a5e</td><td>\u4e00\u5b9a\u3001\u4e5f\u8a31</td></tr><tr><td>Dbb,Dbc</td><td>\u8a55\u50f9\u526f\u8a5e</td><td>\u5c45\u7136\u3001\u679c\u7136</td></tr><tr><td>Dc</td><td>\u5426\u5b9a\u526f\u8a5e</td><td>\u6c92\u6709\u3001\u672a</td></tr><tr><td>Dd</td><td>\u6642\u9593\u526f\u8a5e</td><td>\u96a8\u5373\u3001\u7a0d\u5f8c</td></tr><tr><td>Df</td><td>\u7a0b\ufa01\u526f\u8a5e</td><td>\u975e\u5e38\u3001\uf901</td></tr><tr><td>Dg</td><td>\u5730\u65b9\u526f\u8a5e</td><td>\u5230\u8655\u3001\u904d\u5730</td></tr><tr><td>Dh</td><td>\u65b9\u5f0f\u526f\u8a5e</td><td>\u5982\u6b64\u3001\u5f9e\u4e2d</td></tr><tr><td>Di</td><td>\u6a19\u8a8c\u526f\u8a5e</td><td>\u904e\u3001\u8d77\uf92d</td></tr><tr><td>Dj</td><td>\u7591\u554f\u526f\u8a5e</td><td>\u70ba\u4f55\u3001\u4f55\u6545</td></tr><tr><td>Dk</td><td>\uf906\u526f\u8a5e</td><td>\u64da\u5831\u3001\u64da\uf9ba\u89e3</td></tr><tr><td>I</td><td>\u611f\u6b4e\u8a5e</td><td>\u54e6\u3001\u54c7</td></tr><tr><td>P</td><td>\u4ecb\u8a5e</td><td>\u7d93\u904e\u3001\u906d\u53d7</td></tr><tr><td>T</td><td>\u8a9e\u52a9\u8a5e</td><td>\uf9ba\u3001\u7684</td></tr></table>"
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>\u7c97\u8a5e\uf9d0 \u7d30\u8a5e\uf9d0</td><td>Meaning</td><td>Examples</td></tr><tr><td>Na</td><td>\u666e\u901a\u540d\u8a5e</td><td/></tr><tr><td>Naa</td><td>\u7269\u8cea\u540d\u8a5e</td><td>\uf9e3\u571f\u3001\u6c34</td></tr><tr><td>Nab</td><td>\u500b\u9ad4\u540d\u8a5e</td><td>\u684c\u5b50\u3001\u676f\u5b50</td></tr><tr><td>Nac</td><td>\u53ef\uf969\u62bd\u8c61\u540d\u8a5e</td><td>\u5922\u3001\u7b26\u865f</td></tr><tr><td>Nad</td><td>\u62bd\u8c61\u540d\u8a5e</td><td>\u98a8\ufa01\u3001\u9999\u6c23</td></tr><tr><td>Nae</td><td>\u96c6\u5408\u540d\u8a5e</td><td>\uf902\u8f1b\u3001\u8239\u96bb</td></tr><tr><td>Nb</td><td>\u5c08\u6709\u540d\u8a5e</td><td/></tr><tr><td>Nba</td><td>\u6b63\u5f0f\u5c08\u6709\u540d\u8a5e</td><td>\u96d9\u9b5a\u5ea7\u3001\u4f59\u5149\u4e2d</td></tr><tr><td>Nbc</td><td>\u59d3\u6c0f</td><td>\u5f35\u3001\u738b</td></tr><tr><td>Nc</td><td>\u5730\u65b9\u540d\u8a5e</td><td/></tr><tr><td>Nca</td><td>\u5c08\u6709\u5730\u65b9\u540d\u8a5e</td><td>\u897f\u73ed\u7259\u3001\u53f0\uf963</td></tr><tr><td>Ncb</td><td>\u666e\u901a\u5730\u65b9\u540d\u8a5e</td><td>\u90f5\u5c40\u3001\u5e02\u5834</td></tr><tr><td>Ncc</td><td>\u540d\u65b9\u5f0f\u5730\u65b9\u540d\u8a5e</td><td>\u6d77\u5916\u3001\u8eab\u4e0a</td></tr><tr><td>Ncd</td><td>\u8868\u4e8b\u7269\u76f8\u5c0d\u4f4d\u7f6e\u7684\u5730\u65b9\u8a5e</td><td>\u4e0a\u982d\u3001\u4e2d\u9593</td></tr><tr><td>Nce</td><td>\u5b9a\u540d\u5f0f\u5730\u65b9\u540d\u8a5e</td><td>\u56db\u6d77\u3001\u7576\u5730</td></tr><tr><td>Nd</td><td>\u6642\u9593\u540d\u8a5e</td><td/></tr><tr><td>Nda</td><td>\u6642\u9593\u540d\u8a5e(\uf98c\u53f2\u6027\u3001\u5faa\u74b0\u91cd\u8907)</td><td>\u5510\u671d\u3001\u6625\u3001\u590f\u3001\u79cb\u3001\u51ac</td></tr><tr><td>Ndc</td><td>\u540d\u65b9\u5f0f\u6642\u9593\u540d\u8a5e</td><td>\uf98e\u5e95\u3001\u9031\u672b</td></tr><tr><td>Ndd</td><td>\u526f\u8a5e\u6027\u6642\u9593\u540d\u8a5e</td><td>\u73fe\u5728\u3001\u7576\u4eca</td></tr><tr><td>Ne</td><td>\u5b9a\u8a5e</td><td>\u
9019\u3001\u54ea\u3001\u5c11\u8a31</td></tr></table>"
},
"TABREF4": {
"text": "POS_Low and POS_High, are defined in the POS fuzzy variables. Figure 4 shows the membership functions of the fuzzy sets {POS_Low, POS_High} for the fuzzy variable POS similarity, where max min 100",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"11\">Semantic Concepts of a News Ontology</td><td/><td/><td/><td/></tr><tr><td>Membership</td><td colspan=\"4\">POS_High</td><td/><td/><td/><td/><td colspan=\"5\">POS_Low</td><td/></tr><tr><td>Degree</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0</td><td>min</td><td>POS</td><td>min</td><td>POS</td><td>+</td><td>5</td><td>p</td><td>max</td><td>POS</td><td>\u2212</td><td>5</td><td>p</td><td>max</td><td>POS</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>p</td><td>=</td><td/><td colspan=\"3\">POS</td><td>\u2212</td><td>POS</td><td>.</td></tr></table>"
},
"TABREF5": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF9": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>News</td><td>\u653f\u6cbb\u7126\u9ede</td><td>\u570b\u969b\u8981\u805e</td><td>\u80a1\u5e02\u8ca1\u7d93</td><td>\uf978\u5cb8\u98a8\u96f2</td><td>\u793e\u6703\u5730\u65b9</td><td>\u904b\u52d5\u5a1b\uf914</td><td>\u751f\u6d3b\u65b0\u77e5</td></tr><tr><td>Category</td><td>(Political)</td><td>(International)</td><td>(Finance)</td><td>(Cross-Strait)</td><td>(Societal)</td><td>(Entertainment)</td><td>(Life)</td></tr><tr><td>Number of Doc.</td><td>11277</td><td>13542</td><td>22756</td><td>6040</td><td>13441</td><td>5974</td><td>9279</td></tr><tr><td>Chinese Terms</td><td>25448</td><td>25484</td><td>18960</td><td>22856</td><td>35846</td><td>24178</td><td>35932</td></tr><tr><td>Remaining Terms</td><td>17091</td><td>15367</td><td>11346</td><td>15085</td><td>24813</td><td>16543</td><td>24287</td></tr><tr><td>Filter Percent</td><td>32.84%</td><td>39.70%</td><td>40.16%</td><td>34.00%</td><td>30.78%</td><td>31.58%</td><td>32.41%</td></tr></table>"
},
"TABREF10": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Chinese Term Pair</td><td>CRS</td><td/><td>Chinese Term Pair</td><td>CRS</td></tr><tr><td>(\u6c11\u4e3b\u9032\u6b65\u9ee8,\u6c11\u9032\u9ee8)</td><td>0.595481543</td><td>*</td><td>(\u570b\u6c11\u9ee8\u4e3b\u5e2d,\uf99a\u6230)</td><td>0.534303235</td></tr><tr><td>(\u89aa\u6c11\u9ee8\u4e3b\u5e2d,\u89aa\u6c11\u9ee8)</td><td>0.594101448</td><td/><td>(\u570b\u6c11\u9ee8,\u89aa\u6c11\u9ee8)</td><td>0.534003082</td></tr><tr><td>(\u570b\u6c11\u9ee8\u4e3b\u5e2d,\u570b\u6c11\u9ee8)</td><td>0.587348993</td><td/><td>(\u6c11\u9032\u9ee8\u7c4d,\u6c11\u9032\u9ee8\u5718)</td><td>0.5338768</td></tr><tr><td>(\u6c11\u9032\u9ee8\u4e3b\u5e2d,\u6c11\u9032\u9ee8)</td><td>0.581987023</td><td/><td>(\u526f\u79d8\u66f8\u9577,\u79d8\u66f8\u9577)</td><td>0.531714804</td></tr><tr><td>(\u89aa\u6c11\u9ee8\u4e3b\u5e2d,\u570b\u6c11\u9ee8\u4e3b\u5e2d)</td><td>0.577182427</td><td>*</td><td>(\u89aa\u6c11\u9ee8\u4e3b\u5e2d,\u5b8b\u695a\u745c)</td><td>0.530113977</td></tr><tr><td>(\ufa08\u653f\u9662\u9577,\ufa08\u653f\u9662)</td><td>0.571720293</td><td/><td>(\uf9f7\u59d4,\uf9f7\u6cd5\u59d4\u54e1)</td><td>0.52974897</td></tr><tr><td>(\u6c11\u9032\u9ee8\u4e3b\u5e2d,\u6c11\u4e3b\u9032\u6b65\u9ee8)</td><td>0.571574542</td><td/><td>(\u570b\u9632\u90e8,\u570b\u9632)</td><td>0.529134478</td></tr><tr><td>(\uf9f7\u9662,\uf9f7\u6cd5\u9662)</td><td>0.56774317</td><td/><td>(\u99ac\u82f1\u4e5d,\u53f0\uf963\u5e02\u9577)</td><td>0.522650369</td></tr><tr><td>(\uf9f7\u6cd5\u9662,\uf9f7\u6cd5\u9662\u9577)</td><td>0.558938037</td><td/><td>(\u89aa\u6c11\u9ee8,\u6c11\u9032\u9ee8)</td><td>0.522529795</td></tr><tr><td>(\u6c11\u9032\u9ee8\u5718,\u6c11\u9032\u9ee8)</td><td>0.558824035</td><td/><td>(\u59d4\u54e1\u6703,\u59d4\u54e1)</td><td>0.522146828</td></tr><tr><td>(\u53f0\uf963\u5e02,\u53f0\uf963\u5e02\u9577)</td><td>0.557260479</td><td/><td>(\u7e23\u5e02\u9577,\u7e23\u5e02)</td><td>0.520918138</td></tr><tr><td>(\u6c11\u9032\u9ee8\u7c4d,\u6c11\u9032\u9ee8)</td><td>0.555527542</td><td/><td>(\u653f\u5
e9c,\u4e2d\u592e\u653f\u5e9c)</td><td>0.518259497</td></tr><tr><td>(\u6c11\u9032\u9ee8\u4e3b\u5e2d,\u570b\u6c11\u9ee8\u4e3b\u5e2d)</td><td>0.554741061</td><td>*</td><td>(\uf980\u79c0\uf999,\u526f\u7e3d\u7d71)</td><td>0.51637767</td></tr><tr><td>(\u9ad8\u96c4\u5e02,\u9ad8\u96c4)</td><td>0.553642597</td><td/><td>(\u6c11\u4e3b\u9032\u6b65\u9ee8,\u6c11\u9032\u9ee8\u5718)</td><td>0.516346569</td></tr><tr><td>(\u53f0\uf997,\u53f0\u7063\u5718\u7d50\uf997\u76df)</td><td>0.553602148</td><td>*</td><td>(\uf9e1\u767b\u8f1d,\u524d\u7e3d\u7d71)</td><td>0.515308468</td></tr><tr><td>(\u570b\u6c11\u9ee8,\u6c11\u9032\u9ee8)</td><td>0.551060166</td><td>*</td><td>(\u9673\u6c34\u6241,\u7e3d\u7d71)</td><td>0.513920884</td></tr><tr><td>(\u570b\u6c11\u9ee8,\u570b\u6c11\u9ee8\u7c4d)</td><td>0.547553789</td><td/><td>(\u526f\u9662\u9577,\uf9f7\u6cd5\u9662\u9577)</td><td>0.512207578</td></tr><tr><td>(\u89aa\u6c11\u9ee8\u4e3b\u5e2d,\u6c11\u9032\u9ee8\u4e3b\u5e2d)</td><td>0.54062093</td><td/><td>(\u6c11\u4e3b\u9032\u6b65\u9ee8,\u570b\u6c11\u9ee8)</td><td>0.511493131</td></tr><tr><td>(\u6c11\u9032\u9ee8\u7c4d,\u570b\u6c11\u9ee8\u7c4d)</td><td>0.53928488</td><td/><td>(\u7e3d\u7d71\u5e9c,\u7e3d\u7d71)</td><td>0.507264737</td></tr><tr><td>(\u6c11\u9032\u9ee8\u4e3b\u5e2d,\u9ee8\u4e3b\u5e2d)</td><td>0.538797148</td><td>*</td><td>(\u738b\uf90a\u5e73,\uf9f7\u6cd5\u9662\u9577)</td><td>0.504385767</td></tr><tr><td>(a)</td><td/><td/><td>(b)</td><td/></tr></table>"
},
"TABREF11": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>News</td><td>\u653f\u6cbb\u7126\u9ede</td><td>\u570b\u969b\u8981\u805e</td><td>\u80a1\u5e02\u8ca1\u7d93</td><td>\uf978\u5cb8\u98a8\u96f2</td><td>\u793e\u6703\u5730\u65b9</td><td>\u904b\u52d5\u5a1b\uf914</td><td>\u751f\u6d3b\u65b0\u77e5</td></tr><tr><td>Category</td><td>(Political)</td><td>(International)</td><td>(Finance)</td><td>(Cross-Strait)</td><td>(Societal)</td><td>(Entertainment)</td><td>(Life)</td></tr><tr><td>\u03b1</td><td>0.40</td><td>0.40</td><td>0.42</td><td>0.41</td><td>0.40</td><td>0.39</td><td>0.40</td></tr><tr><td>Number of Concepts</td><td>971</td><td>948</td><td>543</td><td>791</td><td>783</td><td>640</td><td>880</td></tr><tr><td>Number of</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Average Terms per</td><td>3.45</td><td>3.56</td><td>3.13</td><td>3.39</td><td>3.08</td><td>3.65</td><td>3.64</td></tr><tr><td>Concept</td><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF12": {
"text": "Concept No. 1 \u6559\u80b2 \u6559\u5e2b \u6559\u6388 \u6559\u80b2\u90e8\u9577 Concept No. 2 \u5b78\u6821 \u5b78\u751f \u5b78\u8853 \u5927\u5b78 \u53f0\u7063\u5927\u5b78 \u03b1=0.44 Concept No. 3 \u5b78\u8005 \u5b78\u8853 \u5927\u5b78 \u6559\u6388 Concept No. 1 \u6559\u80b2 \u6559\u5e2b \u6559\u6388 \u6559\u80b2\u90e8\u9577 \u5b78\u751f \u5b78\u6821 \u6559\u80b2\u90e8 Concept No. 2 \u5b78\u6821 \u5b78\u751f \u5b78\u8853 \u5927\u5b78 \u53f0\u7063\u5927\u5b78 \u6821\u9577 Concept No. 3 \u5b78\u8005 \u5b78\u8853 \u5927\u5b78 \u6559\u6388 \u7814\u7a76 Concept No. 4 \u5b78\u6821 \u5bb6\u9577 \uf934\u5e2b \u5b78\u751f Concept No. 5 \u79d1\u5b78 \u5b78\u8853 \u5927\u5b78 \u03b1=0.42 Concept No. 6 \u5b78\u8005 \u79d1\u5b78\u5bb6 \u5c08\u5bb6 Concept No. 1 \u6559\u80b2 \u6559\u5e2b \u6559\u6388 \u6559\u80b2\u90e8\u9577 \u5b78\u751f \u5b78\u6821 \u6559\u80b2\u90e8 \u5927\u5b78 \u8ab2\u7a0b \u8cc7\u6e90 Concept No. 2 \u5b78\u6821 \u5b78\u751f \u5b78\u8853 \u5927\u5b78 \u53f0\u7063\u5927\u5b78 \u6821\u9577 \u9662\u9577 \u6559\u6388 Concept No. 3 \u5b78\u8005 \u5b78\u8853 \u5927\u5b78 \u6559\u6388 \u7814\u7a76 \u6210\u679c \uf9b4\u57df Concept No. 4 \u5b78\u6821 \u5bb6\u9577 \uf934\u5e2b \u5b78\u751f \u6559\u80b2\u90e8\u9577 \u6559\u80b2\u90e8 \u9ad8\u4e2d \u5927\u5b78 Concept No. 5 \u79d1\u5b78 \u5b78\u8853 \u5927\u5b78 \u7814\u7a76\u6240 \u7814\u7a76 Concept No. 6 \u5b78\u8005 \u5c08\u5bb6 \u79d1\u5b78\u5bb6 \u79d1\u5b78 Concept No. 7 \u5b78\u8005 \u79d1\u5b78 \u5b78\u751f \u751f\u7269 \uf9b4\u57df \u5b78\u8853 \u5927\u5b78 Concept No. 8 \u6210\u679c \u7814\u7a76 \u79d1\u5b78 \uf9b4\u57df \u5b78\u8853 \u03b1=0.40 Concept No. 9 \u6280\u8853 \u7814\u7a76 \uf9b4\u57df \u61c9\u7528 \u7522\u696d Semantic Concepts of a News Ontology",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Concept name</td><td>Attribute</td><td>Operation</td></tr><tr><td/><td>\u6559\u80b2\u90e8\u9577\uff1aString</td><td>Null</td></tr><tr><td>\u6559\u80b2\u3001\u6559\u80b2\u90e8</td><td>\u6559\u5e2b\u3001\u6559\u6388\u3001\u6559\u5e2b\uff1aString</td><td/></tr><tr><td/><td>\u5b78\u751f\uff1aString</td><td/></tr><tr><td/><td>\u5c0f\u5b78\uff1aString</td><td>Null</td></tr><tr><td/><td>\u4e2d\u5b78\uff1aString=\u570b\u4e2d\u3001\u9ad8\u4e2d</td><td/></tr><tr><td>\u5b78\u6821\u3001\u6821\u5712\u3001\u6821\u65b9</td><td>\u5927\u5b78\u3001\u5b78\u9662\uff1aString=\u53f0\u7063\u5927\u5b78</td><td/></tr><tr><td/><td>\u7814\u7a76\u6240\uff1aString</td><td/></tr><tr><td/><td>\u8ab2\u7a0b\uff1aString=\u8003\u8a66\u3001\u6691\u5047\u3001\u5bd2\u5047</td><td/></tr><tr><td/><td>\u5b78\u8005\uff1aString=\u6559\u6388\u3001\u79d1\u5b78\u5bb6\u3001\u5c08\u5bb6</td><td>\u767c\u5c55\u3001\u7814\u767c\u3001\u7814\u7a76\u3001\u5be6\u9a57</td></tr><tr><td>\u5b78\u8853\u3001\u7814\u7a76</td><td>\uf9b4\u57df\uff1aString=\u751f\u7269\u3001\u91ab\u5b78\u3001\u96fb\u8166\u79d1\u5b78</td><td/></tr><tr><td/><td>\u7814\u8a0e\u6703\uff1aString</td><td/></tr><tr><td/><td>\uf9b4\u57df\uff1aString=\u96fb\u6a5f\u3001\u96fb\u5b50\u3001\u8cc7\u8a0a\u3001</td><td>Null</td></tr><tr><td>\u79d1\u6280\u3001\u79d1\u5b78</td><td>\u901a\u8a0a\u3001\u534a\u5c0e\u9ad4\u3001\u5149\u96fb\u3001\u7db2\uf937 
\u7522\u54c1\uff1aString=\u624b\u6a5f\u3001\u786c\u9ad4\u3001\u8edf\u9ad4\u3001</td><td/></tr><tr><td/><td>\u96fb\u8166\u3001\u8655\uf9e4\u5668\u3001\u6db2\u6676\u87a2\u5e55</td><td/></tr><tr><td>\u91ab\u9662\u3001\u91ab\u754c</td><td>\u91ab\u5e2b\u3001\u91ab\u751f\uff1aString</td><td>\u624b\u8853\u3001\u6aa2\u9a57</td></tr><tr><td>\u885b\u751f\u5c40\u3001\u885b\u751f\u7f72</td><td>\u5065\u4fdd\uff1aString=\u5168\u6c11\u5065\u4fdd\u3001\u5065\u4fdd\u5361</td><td>Null</td></tr><tr><td/><td>\u96e8\uf97e\u3001\u96e8\u52e2\u3001\u96e8\uff1aString=\u5927\u96e8\u3001\u8c6a\u96e8\u3001</td><td>\u8b8a\u5316</td></tr><tr><td/><td>\uf949\u96e8\u3001\u9663\u96e8\u3001\uf949\u9663\u96e8</td><td/></tr><tr><td>\u6c23\u8c61\u3001\u5929\u6c23\u3001\u6c23\u5019</td><td>\u92d2\u9762\uff1aString</td><td/></tr><tr><td/><td>\u6c23\u6eab\uff1aString</td><td/></tr><tr><td/><td>\u5b63\u98a8\uff1aString=\u6771\uf963\u5b63\u98a8</td><td/></tr></table>"
}
}
}
}