{
"paper_id": "C96-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:51:34.150211Z"
},
"title": "Concept clustering and knowledge integration from a children's dict ionary",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Barri",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Knowledge structures called Concel)t (?lustering Knowledge (]raphs (CCKGs) are introduced along with a process for their construction from a machine readable dictionary. C(3K(]s contain multiple concepts interrelated through multil)le semantic relations together forming a semantic duster represented by a conceptual al graph. '1'he knowledge acquisition is performed on a children's first dictionary. The concepts inw)lved are general and typical of a daily lid conw'a'salion. A collection of conceptual clusters together can lbrm the basis of a lexical knowledge base, where each C'(,'l((.~ contains a limited nnmber of highly connected words giving usefid information about a particular domain or situation.",
"pdf_parse": {
"paper_id": "C96-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Knowledge structures called Concel)t (?lustering Knowledge (]raphs (CCKGs) are introduced along with a process for their construction from a machine readable dictionary. C(3K(]s contain multiple concepts interrelated through multil)le semantic relations together forming a semantic duster represented by a conceptual al graph. '1'he knowledge acquisition is performed on a children's first dictionary. The concepts inw)lved are general and typical of a daily lid conw'a'salion. A collection of conceptual clusters together can lbrm the basis of a lexical knowledge base, where each C'(,'l((.~ contains a limited nnmber of highly connected words giving usefid information about a particular domain or situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When constructing a l,exieal Knowledge Ilase (1,KB) useful for Natural l,anguage Processing, the source of information from which knowledge is acquired and the structuring of this information within the LKB are two key issues. Machine Readable Dictionaries (MIH)s) are a good sour(:e of lexical information and have been shown to be al)plical)le to the task of I,KII COllStruction (l)ola.n ct al., 1993; Calzolari, t992; Copestake, [990; Wilks et al., 1989; Byrd et al., 1987) . Often though, a localist approaeh is adopted whereby the words are kept in alphabetical order with some representation of their definitions in the form of a template or feature structure. F, flbrt in findlug cormections between words is seen in work on automatic extraction of sem~mtic relations Dora MRI)s (Ahlswede and Evens, 1988; Alshawi, 1989; Montemagrfi and Vandorwende, 19!32) . Additionally, effort in finding words that are close semantically is seen by the current interest in statistical techniques for word clustering, looking at (-ooccurrences of words in text corpora or dictionaries (Church and IIanks, 1989; Wilks et al., 1989; Brown et al., 11992; l'ereira et al., 11995) .",
"cite_spans": [
{
"start": 393,
"end": 403,
"text": "al., 1993;",
"ref_id": "BIBREF10"
},
{
"start": 404,
"end": 420,
"text": "Calzolari, t992;",
"ref_id": null
},
{
"start": 421,
"end": 437,
"text": "Copestake, [990;",
"ref_id": null
},
{
"start": 438,
"end": 457,
"text": "Wilks et al., 1989;",
"ref_id": "BIBREF18"
},
{
"start": 458,
"end": 476,
"text": "Byrd et al., 1987)",
"ref_id": "BIBREF5"
},
{
"start": 786,
"end": 812,
"text": "(Ahlswede and Evens, 1988;",
"ref_id": "BIBREF0"
},
{
"start": 813,
"end": 827,
"text": "Alshawi, 1989;",
"ref_id": null
},
{
"start": 828,
"end": 863,
"text": "Montemagrfi and Vandorwende, 19!32)",
"ref_id": null
},
{
"start": 1078,
"end": 1103,
"text": "(Church and IIanks, 1989;",
"ref_id": null
},
{
"start": 1104,
"end": 1123,
"text": "Wilks et al., 1989;",
"ref_id": "BIBREF18"
},
{
"start": 1124,
"end": 1144,
"text": "Brown et al., 11992;",
"ref_id": null
},
{
"start": 1145,
"end": 1168,
"text": "l'ereira et al., 11995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "it Introduction",
"sec_num": null
},
{
"text": "Inspired by research in the. areas of semantic relations, semantic distance, concept clustering, and using (,once I tual (Ji a l hs (Sowa, 1984) as our knowledge representation, we introduce (;oncept (?lustering I{nowledge Graphs (CCKGs). Each (JCKG will start as a Conceptual Graph representation of a trigger word and will expaud following a search algorit, hm to incorporate related words and ibrm a C'oncept Cn,s(,er. The concept chlstcr in itself is interesting for tasks such as word disambiguation, but the C(~K(] will give more to that cluster. It will give the relations between the words, making the graph in some aspects similar to a script (Schank and Abelson, 11975) . llowever, a CCK(I is generated automaticMly and does not rely on prin,itives but on an unlimited number of concel)ts , showing objects, persons, and actions interacting with each other. This interaction will be set, within a lmrtieular domain, and the trigger word should be a key word of the domain to represent. 11' that process would be done for the whole dictionary, we would obtain an l,l( II divided into multiple clusters of words, each represented by a CCK(]. Then during text processing fin: example, a portion of text could be analyzed using the appropriate CCK(] to lind implicit relations and hell) understanding the text.",
"cite_spans": [
{
"start": 132,
"end": 144,
"text": "(Sowa, 1984)",
"ref_id": "BIBREF17"
},
{
"start": 652,
"end": 679,
"text": "(Schank and Abelson, 11975)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "it Introduction",
"sec_num": null
},
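To make the CCKG idea concrete, here is a minimal sketch (not the authors' implementation) of a CCKG stored as a set of relation triples alongside its concept cluster; the class name, its fields, and the sample edges loosely taken from the paper's letter example are illustrative assumptions.

```python
# Minimal sketch, assuming a conceptual graph can be stored as
# (source_concept, relation, target_concept) triples; not the paper's code.
from dataclasses import dataclass, field

@dataclass
class CCKG:
    trigger: str                                # trigger word, e.g. "letter"
    cluster: set = field(default_factory=set)   # the concept cluster (words)
    triples: set = field(default_factory=set)   # conceptual-graph edges

    def add_edge(self, src, rel, dst):
        """Record one relation of the graph and keep the cluster in sync."""
        self.triples.add((src, rel, dst))
        self.cluster.update({src, dst})

# Hypothetical usage, loosely following the paper's "letter" example.
g = CCKG(trigger="letter", cluster={"letter"})
g.add_edge("write", "obj", "message")
g.add_edge("send", "through", "mail")
```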
{
"text": "Our source of knowledge is the Americ~m iteritage First I)ictionary t which contains 1800 entries aml is designed for children of age six to eight. lit is made for yom~g l)eople learning the structure and the basic w)cabulary of their language. In comparison, an adult's dictiouary is more of a ref erence tool which assumes knowledge of a large basic vocabulary, while a learner's dictionary assumes at limited vocabulary but still some very sophisticated concepts. Using a children's dictionary allows us to restrict our vocabulary, but still work on general knowledge about day to day (:Oil-cel)tS and actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "it Introduction",
"sec_num": null
},
{
"text": "In the folk)wing sections, we first present the l Copyright @1994 by [Ioughton Miftlin Company. Reproduced by permission h'om TIlE AMERICAN ItERITAGI'; FIRST DIC'I?IONAIlY. transformation steps from the definitions into conceptual graphs, then we elaborate on the integration process, and finally, we close with a discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "it Introduction",
"sec_num": null
},
{
"text": "Our definitions may contain up to three general types of information, as shown in the examples in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Transforming definitions",
"sec_num": null
},
{
"text": "This contains genus/differentia information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "Such information is frequently used for noun taxonomy construction (Byrd et al., 1987; Klavans et al., 1990 ; Barri~re and Popowich, To appear August 1996). The information given by the description and general knowledge will be used to perform the knowledge integration proposed in section 3. The specific examples are excluded as they tend to involve specific concepts not always deeply related to the word defined.",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "(Byrd et al., 1987;",
"ref_id": "BIBREF5"
},
{
"start": 87,
"end": 107,
"text": "Klavans et al., 1990",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "Our processing of the definitions results in the construction of a special type of conceptual graph which we call a temporary graph. The set of relations used in temporary graphs come from three sources. Table 1 shows some examples for each type.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "1. the set of closed class words, ex: of, to, in, and; 2. relations extracted via defining formulas ex: partof, made-of, instrument; defining formulas correspond to phrasal patterns that occur often through the dictionary suggesting particular semantic relations (ix. A is a part of B) (Ahlswede and Evens, 1988; Dolan et al., 1993) .",
"cite_spans": [
{
"start": 286,
"end": 312,
"text": "(Ahlswede and Evens, 1988;",
"ref_id": "BIBREF0"
},
{
"start": 313,
"end": 332,
"text": "Dolan et al., 1993)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "3. the relations that are extracted from the syntactic structure of a sentence, ex: subject, object, goal, attribute, modifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "As some relations are defined using the closed class words, and many of those words are ambiguous, the resulting graph will itself be ambiguous. This is the main reason for calling our graphs temporary as we assume a conceptual graph, the ultimate goal of our translation process, should contain a restricted set of well-defined and nonambiguous semantic relations. For example, by can be a relation of manner (by chewing), time (by noon) or place (by the door). By keeping the preposition itself within the temporary graph, we delay the ambiguity resolution process until we have gathered more information and we even hopefully avoid the decision process as the ambiguity might later be resolved by the integration process itself. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
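A minimal sketch of how a temporary graph could keep ambiguous closed-class words as relation labels and defer their resolution; the function and variable names below are assumptions for illustration, not the paper's code.

```python
# Sketch only: store prepositions such as "by" or "with" verbatim as relations
# and flag them, so a later integration step may resolve them (e.g. "with" ->
# instrument), or leave them untouched if no further evidence turns up.
AMBIGUOUS_CLOSED_CLASS = {"by", "with", "of", "to", "in", "on", "and"}

def add_relation(graph, src, rel, dst):
    graph.append({"src": src, "rel": rel, "dst": dst,
                  "ambiguous": rel in AMBIGUOUS_CLOSED_CLASS})

temporary_graph = []
add_relation(temporary_graph, "make", "sub", "John")
add_relation(temporary_graph, "make", "with", "pen")   # instrument? possession? decide later
```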
{
"text": "[B]-> (instrument )-> [A] [A]-> (part-of)-> [B] [Bl->(loc)->[A] temporary graph [B]-> (agent)-> [h] [eat]-> (agent)-> [John] [A]-> (goal)-> [B] [e at]-> ( goal)-> [grow]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 description:",
"sec_num": null
},
{
"text": "This section describes how given a trigger word, we perform a series of forward and backward searches in the dictionary to build a CCKG containing useful information pertaining to the trigger word and to closely related words. The primary building blocks for the CCKG are the temporary graphs built from the dictionary definitions of those words using our transformation process mentioned in the previous section. Those temporary graphs express similar or related ideas in different ways and with different levels of detail. As we will try to put all this information together into one large graph, we must first find what information the various temporary graphs have in common and then join them around this common knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge integration",
"sec_num": null
},
{
"text": "To help us build this CCKG and perform our integration process, we assume two main knowledge structures are available, a concept hierarchy and a relation hierarchy, and we assume the existance of some graph operations. The concept hierarchy concentrates on nouns and verbs as they account for three quarters of the dictionary definitions. It has been constructed automatically according to the techniques described in (Barri~re and Popowich, To appear August 1996). The relation hierarchy was constructed manually. A rich hierarchical structure between the set of relations is essential to the graph matching operations we use for the integration phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge integration",
"sec_num": null
},
{
"text": "As we are using the conceptual graph formalism to represent our definitions, we can use the graph matching operations defined in (Sowa, 1984) . The t, wo operations we will need are the maximal common subgraph algorithm and the maximal join algorithm.",
"cite_spans": [
{
"start": 129,
"end": 141,
"text": "(Sowa, 1984)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge integration",
"sec_num": null
},
{
"text": "The maximal common subgraph between two graphs consists of finding a subgraph of tile first graph that is isomorphic to a subgraph of the seeond graph. In our case, we cannot often expect to find two graphs that contain an identical subgral)h with the exact same relations and concepts. Ideas cart be expressed in many ways and we therefore need a more relaxed matching schema. We describe a few elements of this \"relaxation\" process and illustrate them by an example in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 479,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "(1) John makes a nice drawing on a piece of paper with the pen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "[make]->(sub)->[John] ->(obj)->[drawing]->(nit)->[nice] ->(on)->[piece]->(or)->[paper] ->(with)->[pen]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "(2) John uses the big crayon to draw rapidly on the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "[(haw]->(sub)->[John l ->(on)->[paper] ->(inst ........ t)-> [crayon] ->(manner)->[rapidly]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "MAXIMAl, COMMON SUBGRAPn: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "[make(draw)]->(sub)->[John] ->(obj)->[drawing] ->(on)->[piece]->(of)->[paper] ->(instrument)->[label-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "[make(draw)]->(sub)->[John] ->(obj)-> [drawing]->(art)->[nice] ->(o.)-> [piece]->(of)->[paperl ->(inst ...... t)->[l~b\u00a2l-1] ->( ......... )->[rapidly]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "Figure 2: Example of \"relaxed\" maximal common sui)graph and maximal join algorithms Semantic distance between concepts. In the maximal common subgraph algorithm proposed by (Sow% :1984), two concepts (C1,CY) could be matched if one snbsumed the other in the concept hierarchy. We can relax that criteria to match two concepts when a third concept C which subsumes C1 and C2 has a high enough degree of informativeness (Resnik, 1995) . The concept hierarchy can be useful in many cases, but it is generated from the dictionary and might not be complete enough to find all similar concepts. In the example of Figure 2 , when using tile concept hierarchy to establish the similarity between pen and crayon, we find that; one is a subclass of lool and the other of wax, both then are substoned by the general concept something. We have reached the root of the noun tree in the concept hierarchy and this would give a similarity of 0 based on the informativeness notion.",
"cite_spans": [
{
"start": 418,
"end": 432,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 607,
"end": 615,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
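The following sketch illustrates the relaxed concept match just described: two concepts are allowed to match when their lowest common subsumer in the concept hierarchy is informative enough, in the spirit of Resnik (1995). The tiny hierarchy, the counts, and the threshold value are made-up illustrations, not the paper's data or code.

```python
import math

# Toy concept hierarchy and occurrence counts (illustrative assumptions).
PARENT = {"pen": "tool", "crayon": "wax", "tool": "something", "wax": "something"}
COUNT = {"pen": 2, "crayon": 2, "tool": 10, "wax": 4, "something": 1000}
TOTAL = sum(COUNT.values())

def ancestors(concept):
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def informativeness(concept):
    # Information content, roughly as in Resnik (1995): -log p(concept).
    return -math.log(COUNT[concept] / TOTAL)

def concepts_match(c1, c2, threshold=1.0):
    common = [a for a in ancestors(c1) if a in set(ancestors(c2))]
    # The first common ancestor found is the lowest common subsumer.
    return bool(common) and informativeness(common[0]) >= threshold

# pen and crayon only share the root "something", whose informativeness is
# near 0, so the purely taxonomic match fails, as discussed in the text.
print(concepts_match("pen", "crayon"))   # False
```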
{
"text": "We extend the subsumption notion to the graphs. Iustead of finding a concept that subsulnes two concepts, we will try finding a common subgraph that subsumes the graph representation of both concepts. In our example, pen and crayon have a common subgraph [write]->(inst)->~. The notion of semantic distance can be seen as the informativeness of the subsuming graph. The resuiting maximal comlnon snbgraph as shown in Figure 2 contains the concept label-1. This label is associated to a covert category ~s presented in (Barri~re and Popowich, To appear August 1996). We carl update tile concept hierarchy and add this label-1 as a subclass of something and a superclass of pen and crayon. It expresses a concept of \"writing instrument\". t Relation subsmnption.",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "Since we have a relation hierarchy in addition to our concept hierarchy, we can similarly use subsumption to match two relations. In i,'igure 2, with is subsumed by instrument, and by lnapping them, we disantbiguate wilh from corresponding to another semantic relation, such as possession or accompaniment. This is a case where an arnbiguons preposition left in the temporary graph is resolved by the integration process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
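A small sketch of relation matching through a relation hierarchy, where an ambiguous preposition can be matched to, and thereby resolved as, one of the semantic relations it may stand for; the tiny hierarchy and the function name are illustrative assumptions.

```python
# Illustrative fragment of a relation hierarchy: each ambiguous closed-class
# relation lists the semantic relations that can subsume it.
SUBSUMED_BY = {
    "with": {"instrument", "accompaniment", "possession"},
    "by":   {"manner", "time", "place"},
}

def relations_match(r1, r2):
    """True when the relations are identical or one subsumes the other."""
    return (r1 == r2
            or r2 in SUBSUMED_BY.get(r1, set())
            or r1 in SUBSUMED_BY.get(r2, set()))

assert relations_match("with", "instrument")   # "with" resolved as instrument
assert not relations_match("with", "manner")   # not a licensed reading of "with"
```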
{
"text": "Predictable meaning shift. A set of lexical implication rules were developed by (Ostler and Atkins, 1992) for relating word senses. Based on them, we are developing a set of graph matching rules. Figure 2 exemplifies one of theln where two graphs containing the same word (or morphologically related), here draw and drawing, used as different parts of speech can be related.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Ostler and Atkins, 1992)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
{
"text": "Relation transitivity. Some relations, like part-of, in, from can be transitive. For example, we can map a graph that contains a concept A in a certain relation to concept B onto another graph where concept A is in the same relation with a part or a piece of B as exemplified in Figure 2 . Transitivity in relations is in itself a challenging area of study (Cruse, 1986) and we have only begun to explore it.",
"cite_spans": [
{
"start": 357,
"end": 370,
"text": "(Cruse, 1986)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 279,
"end": 287,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal common subgraph",
"sec_num": "3.1."
},
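As a rough sketch of the transitivity idea under stated assumptions (triples encoded as (src, rel, dst), an explicit set of part-of pairs), an edge A-rel->B can be matched with A-rel->B' when B' is a part of B and rel is one of the transitive relations; this is only one simplified reading of the rule, not the paper's implementation.

```python
TRANSITIVE_RELATIONS = {"part-of", "in", "from"}

def transitive_match(edge1, edge2, part_of_pairs):
    """edge = (src, rel, dst); part_of_pairs = set of (part, whole) tuples.
    Match A-rel->B against A-rel->B_part when rel is transitive and one
    destination is a known part (or piece) of the other."""
    (a1, r1, b1), (a2, r2, b2) = edge1, edge2
    return (a1 == a2 and r1 == r2 and r1 in TRANSITIVE_RELATIONS
            and ((b1, b2) in part_of_pairs or (b2, b1) in part_of_pairs))

# Hypothetical example: drawing in a book vs. drawing in one of its pages.
print(transitive_match(("draw", "in", "book"),
                       ("draw", "in", "page"),
                       {("page", "book")}))   # True
```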
{
"text": "The basic operation for the integration of temporary graphs is the maximal join operation where a union of two graphs is formed around their maximal common subgraph using the most specific concepts of each. We just saw how to relax the maximal common subgraph operation and we will perform the join around that \"relaxed\" subgraph. Figure 2 shows the result of the maximal join. The join operation allows us to bring new conccpts into a graph by finding relations with ex-isting concepts, as well as bringing new relations between existing concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 339,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximal join",
"sec_num": "3.2"
},
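Below is a much simplified sketch of the join step: once the relaxed common subgraph has produced a mapping between concepts of the two graphs, the graphs are merged by renaming the matched concepts to a shared, most specific label and taking the union of the edges. The triple encoding and the names are assumptions, not the paper's implementation of Sowa's maximal join.

```python
def maximal_join(g1, g2, mapping):
    """g1, g2: sets of (src, rel, dst) triples.
    mapping: concept of g2 -> the concept (possibly a new covert label such as
    'label-1') it was matched with during the common-subgraph search."""
    renamed = {(mapping.get(s, s), r, mapping.get(d, d)) for (s, r, d) in g2}
    return g1 | renamed

# Toy fragments loosely based on the Figure 2 example (illustrative only).
g1 = {("make(draw)", "sub", "John"), ("make(draw)", "instrument", "label-1")}
g2 = {("draw", "sub", "John"), ("draw", "manner", "rapidly"),
      ("draw", "instrument", "crayon")}
joined = maximal_join(g1, g2, {"draw": "make(draw)", "crayon": "label-1"})
```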
{
"text": "Given the concept hierarchy, relation hierarchy and graph matching operations, we now describe the two major steps required to integrate all the temporary graphs into a CCKG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration process",
"sec_num": "3.3"
},
{
"text": "Start with a central word, a keyword for the subject of interest that becomes the trigger word. The temporary graph built from the trigger word forms the initial CCKG. To expand its meaning, we want to look at the important concepts involved and use their respective temporary graphs to extend our initial graph. We deem words in the definition to be important if they have a large semantic weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
{
"text": "2.'he semantic weight of a word or its informativeness can be related to its frequency (l~esnik, 1995) . Itere, we calculate the number of occurrence of each word within the definitions of nouns and verbs in our dictionary. The most frequent word \"a\" occurs 2600 times among a total of 38000 word occurrences. Only 1% of the words occur more than 130 times, 5% occur more than 30 times but over 60% occur less than 5 times.",
"cite_spans": [
{
"start": 87,
"end": 102,
"text": "(l~esnik, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
{
"text": "Ordering the dictionary words in terms of decreasing number of occurrences, the top 10% of these words account for 75% of word occurrences. For our current investigation, we propose this as the division between semantically significant words, and semantically insignificant ones. So a word from the dictionary is deemed to be semantically significant if it occurs less than 17 times. Note that constraining the number of semantically significant words is important in limiting the exploration process tbr constructing the concept cluster, as we shall soon see.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
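A minimal sketch of this frequency-based notion of semantic weight: count word occurrences over the noun and verb definitions and treat a word as semantically significant when it occurs fewer than 17 times, the paper's empirical cut-off. The toy definitions and helper names are assumptions for illustration.

```python
from collections import Counter

def significant_words(definition_texts, cutoff=17):
    """Return the set of semantically significant words, i.e. those occurring
    fewer than `cutoff` times across all definitions, plus the raw counts."""
    counts = Counter(w for text in definition_texts for w in text.lower().split())
    return {w for w, n in counts.items() if n < cutoff}, counts

# Hypothetical toy input; in the paper the counts come from the full
# 1800-entry dictionary (about 38000 word occurrences in total).
definitions = ["a letter is a message you write on paper",
               "a message is a group of words that is sent from one person to another"]
significant, counts = significant_words(definitions)
```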
{
"text": "Trigger forward: Find the semantically significant words fi'om the CCKG, and join their respective temporary graphs to the initial CCKG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
{
"text": "Trigger backward: Find all the words in the dictionary that use the trigger word in their definition and join their respective temporary graphs to the CCKG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
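The trigger phase just described can be sketched as the following control flow; the helpers temporary_graph, maximal_join, and significant are assumed to exist (for instance along the lines sketched earlier, with maximal_join running the common-subgraph search internally), and the dictionary is a word-to-definition mapping. This is an illustration, not the authors' code.

```python
def concepts_of(graph):
    """All concept labels occurring in a triple-set conceptual graph."""
    return {c for (src, _, dst) in graph for c in (src, dst)}

def trigger_phase(trigger, dictionary, temporary_graph, maximal_join, significant):
    cckg = set(temporary_graph(trigger))     # initial CCKG = trigger's graph
    cluster = {trigger}
    # Trigger forward: significant words appearing in the CCKG.
    for word in sorted(w for w in concepts_of(cckg) if significant(w)):
        cckg = maximal_join(cckg, temporary_graph(word))
        cluster.add(word)
    # Trigger backward: words whose definition mentions the trigger word.
    for word, definition in dictionary.items():
        if trigger in definition.split() and word not in cluster:
            cckg = maximal_join(cckg, temporary_graph(word))
            cluster.add(word)
    return cckg, cluster
```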
{
"text": "Instead of a single trigger word, we now have a cluster of words that are related through the CCKG. Those words ,form the concept cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRIGGER. PHASE.",
"sec_num": null
},
{
"text": "We try finding words in the dictionary containing many concepts identical to the ones already present in the CCKG but perhaps interacting through different relations allowing us to create additional links within the set of concepts present in the CCKG. Our goal is to create a more interconnected graph rather than sprouting from a particular concept. For this reason, we establish a graph matching threshold to decide whether we will join a new graph to the CCKG being built. We set this threshold empirically: the maximal common subgraph between the CCKG and the new temporary graph must contain at least three concepts connected through two relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPANSION PHASE.",
"sec_num": null
},
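The empirical graph matching threshold can be expressed as a small check on the relaxed maximal common subgraph, for example as below (the triple encoding is assumed as in the earlier sketches).

```python
def passes_threshold(common_subgraph, min_concepts=3, min_relations=2):
    """common_subgraph: set of (src, rel, dst) triples returned by the
    (relaxed) maximal common subgraph step. The CCKG and a candidate graph
    are only joined if at least three concepts are linked by two relations."""
    concepts = {c for (src, _, dst) in common_subgraph for c in (src, dst)}
    return len(concepts) >= min_concepts and len(common_subgraph) >= min_relations

assert passes_threshold({("write", "obj", "letter"), ("write", "sub", "person")})
assert not passes_threshold({("write", "obj", "letter")})
```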
{
"text": "Expansion forward: For each semantically significant word in the CCKG, not already part of the concept cluster, find the maximal common subgraph between its temporary graph and the CCKG. If matching surpasses the graph matching threshold, perform integration (maximal join operation) and add the word in the concept cluster. Continue forward until no changes are made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPANSION PHASE.",
"sec_num": null
},
{
"text": "Find words in the dictionary whose definitions contain the semantically significant words from the concept cluster. For each possible new word, find the maximal common subgraph between its temporary graph and the CCKG. Again, if matching is over the graph matching threshold, perform integration and add the word to the concept cluster. Continue until no changes are made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expansion backward:",
"sec_num": null
},
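Putting the pieces together, the expansion phase can be sketched as the following loop (forward direction shown; the backward direction scans definitions that mention words of the cluster instead). All helper names are assumptions carried over from the earlier sketches, not the authors' implementation.

```python
def expand_forward(cckg, cluster, temporary_graph, maximal_common_subgraph,
                   maximal_join, significant, passes_threshold, concepts_of):
    """Repeatedly try to fold in the temporary graphs of semantically
    significant words already mentioned in the CCKG; stop when a full pass
    adds nothing, i.e. no candidate exceeds the matching threshold."""
    changed = True
    while changed:
        changed = False
        candidates = [w for w in concepts_of(cckg)
                      if significant(w) and w not in cluster]
        for word in candidates:
            candidate_graph = temporary_graph(word)
            common = maximal_common_subgraph(cckg, candidate_graph)
            if passes_threshold(common):
                cckg = maximal_join(cckg, candidate_graph)
                cluster.add(word)
                changed = True
    return cckg, cluster
```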
{
"text": "We can set a limit to the number of steps in the expansion phase to ensure its termination. Ilowever in practice, M'ter two or three steps forward or backward, the maximal common subgraphs between the new graphs and CCKG do not exceed the graph matching threshold and thus are not added to the cluster, terminating the expansion. Figure 3 shows the starting point of an integration process with the trigger word (TW) lelter, its definition, its temporary graph (TG), the concept cluster (CC) containing only the trigger word, and the CCKG being the same as the temporary graph. Then we show the trigger forward phase. The number of occurences (NOte) of each word present in the definition of letter is given. Using the criteria described in the previous section, only the word message is a semantically significant word (SSW). We then see the definition of message, the new concept cluster and the resulting CCKG.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Expansion backward:",
"sec_num": null
},
{
"text": "The trigger backward phase, would incorporate the temporary graphs for address, mail, post office and stamp. The expansion forward phase would further add the temporary graphs for the semantically significant words: {send, package} during the first step and then would terminate with the second step as no more semantically significant words not yet explored have a maximal common subgraph with the CCKG that exceeds the graph matching threshold. The expansion backward would finally add the temporary graphs for card and note, again terminating after two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "The resulting cluster is: {letter, message, address, mail, post office, stamp, send, package, card, note}. The resulting CCKG shows the interaction between those concepts which smnmarizes general knowledge about lnow we use those concepts together in a da.ily conversation: we go to the post office to mail letters, or packages; we write letters, notes and cards to send to peoI)le through the mail, etc. Ilaving such clusters and such knowledge of the relationship between words as part of our lexical knowledge base can be useflfl to understand or even generate a text containing the concepts involved in the cluster. Discussion 'l'lu:ough this paper, we showed the multiple steps leading us to tile building of Concept Clustering Knowledge Graphs (CCKGs). Those knowledge structm:es arc built within the Lexical Knowledge Base (LKB), integrating lnultiple parts of the I,Kt~ around a particular concept to form a clus.ter and express the multiple relations among the words in that cluster. The CCKGs could be either permanent or temporary structures depending on the. applicatkm using the LKB. For example, for a text understanding tusk, we can build before hand the CCKGs corresponding to one or multiple keywords from the text. Once built, the CCKGs will help us in our comprehension and disambiguation of the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "By using the American lh;ritage First l)ictionary a~s our source of lexical information, we were able to restrict our vocabulary to result ill a project of reasonable size, dealing with general knowledge about (lay to day concepts and actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "The ideas explored using this dictionary can be extended to other dictionaries as well, but the task might becorne more complex as the defilfitions in adult's dictionaries are not as clear and usage oriented. In fact, an LKB lmilt fl'om a children's dictionary could be seen as a starting point from which we could extend our acquisition of knowledge using text corpora or other dictionaries. Certainly, if we euvisage applications trying to understand children's stories or help in child education, a corpora of texts for children would be a good source of information to extend our LKB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "The graph operations (maximM commou subgraph and maximal join) defined on conceptual graphs, anti adapted here, play an important role in our integration process toward a final CCKG. Graph matching was also suggested as an alternatiw; to taxonomic search when trying to establish semantic similarity between concepts. As well, by putting a threshohl on the graph matching process, we were able to limit the expansion of our clustering, as we can decide and justify the incorporation of a new concept into a particular cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "Many aspects of the concept clustering and knowledge integration processes have already been implemented and it will soon be possible to test the techniques on different trigger words using different thresholds to see how they effect the quality of the clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
},
{
"text": "(~lustering is often seen as a statistical operation that puts together words \"somehow\" related. ltere, we give a meaning to their clustering, we tint[ and show the connections between concepts, and by doing so, we build more than a cluster oF words. We build a knowledge graph where the concepts interact with each other giving impel taut implicit information that will be useful for Natural Language Processing tusks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example of integration",
"sec_num": "3.4"
}
],
"back_matter": [
{
"text": "Acknowledgments i['his research was supported by the Institute for Robotics and Intelligent Systems. The autlnors would like to thank the anonymous referees for their comments and suggestions, and Petr Kubon for his many comments on the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating a relational lexicon from a machine-readable dictionary",
"authors": [
{
"first": "T",
"middle": [],
"last": "Ahlswede",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Evens",
"suffix": ""
}
],
"year": 1988,
"venue": "International JowrnM of Lexicography",
"volume": "1",
"issue": "3",
"pages": "214--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Ahlswede and M. Evens. 1988. Generating a relational lexicon from a machine-readable dic- tionary. International JowrnM of Lexicography, 1l(3):214 237.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analysing tile dictionary definil.ions",
"authors": [
{
"first": "",
"middle": [],
"last": "Ii",
"suffix": ""
}
],
"year": 1989,
"venue": "Compulalional Lexicography for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "153--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "II. A]shawi. 1989. Analysing tile dictionary def- inil.ions. In 1~. Boguraev and T. llriscoe, ed- itors, Compulalional Lexicography for Natural Language Processing, chapter 7, pages 153-170.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Building a noun taxonomy from a children's dictionary",
"authors": [
{
"first": "C",
"middle": [],
"last": "Barri6re",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Popowich",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of Euralex'96",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Barri6re and F. Popowich. To apl)ear, August 1996. Building a noun taxonomy from a chil- dren's dictionary. In Proceedings of Euralex'96, GSteborg, Sweden.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Class-based ngrain models of natural language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "I{",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, V.J. Della Pietra, P.V. deSouza, J.C. Lai, and I{.L. Mercer. 1992. Class-based n- grain models of natural language. Computa- tional Linguistics, 18(4):467-480.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tools and methods for computational lexieology",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Byrd",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Calzolari",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neff",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rizk",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "3-4",
"pages": "219--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.J. Byrd, N. Calzolari, M. Chodorow, J. Kla- vans, M. Neff, and O. Rizk. 1987. Tools and methods for computational lexieology. Compu- tational Linguistics, 13(3-4):219-240.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Acquiring and representing semantic information in a lexical knowledge base",
"authors": [
{
"first": "N",
"middle": [],
"last": "Calzolari",
"suffix": ""
}
],
"year": 1992,
"venue": "Lexical Semantics and Knowledge Representation : First SIGLEX Workshop, chapter 16",
"volume": "",
"issue": "",
"pages": "235--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Calzolari. 1992. Acquiring and representing se- mantic information in a lexical knowledge base. In J. Pustejovsky and S. Bergler, editors, Lex- ical Semantics and Knowledge Representation : First SIGLEX Workshop, chapter 16, pages 235-244. Springer-Verlag.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word association norms, mutual information and lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27lh Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Church and P. Hanks. 1989. Word associa- tion norms, mutual information and lexicogra- phy. In Proceedings of the 27lh Annual meeting of the Association for Computational Linguis- tics, pages 76-83, Vancouver, BC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An approach to building the hierarchical element of a lexical knowledge base from a machine readable dictionary",
"authors": [
{
"first": "A",
"middle": [
"A"
],
"last": "Copestake",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Workshop on Inheritance in Natural Language Processing, 7'ilburg",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.A. Copestake. 1990. An approach to building the hierarchical element of a lexical knowledge base from a machine readable dictionary, in Proceedings of the Workshop on Inheritance in Natural Language Processing, 7'ilburg.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lexical Semantics",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Cruse",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.A. Cruse. 1986. Lexical Semantics. Cambridge University Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatically deriving structured knowledge bases from on-line dictionaries",
"authors": [
{
"first": "W",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "S",
"middle": [
"D"
],
"last": "Richardson",
"suffix": ""
}
],
"year": 1993,
"venue": "The First Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Dolan, L. Vanderwende, and S. D. Richard- son. 1993. Automatically deriving structured knowledge bases from on-line dictionaries. In The First Conference of the Pacific Associa- tion for Computational Linguistics, pages 5-14, IIarbour Center, Campus of SFU, Vancouver, April.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "From dictionary to knowledge base via taxonomy",
"authors": [
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Chodorow",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Wacholder",
"suffix": ""
}
],
"year": 1990,
"venue": "P~vceedings of the 6th Annual Conference of the UW Centre for the New OED: Electronic Text Research",
"volume": "",
"issue": "",
"pages": "110--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Klavans, M. S. Chodorow, and N. Wacholder. 1990. From dictionary to knowledge base via taxonomy. In P~vceedings of the 6th Annual Conference of the UW Centre for the New OED: Electronic Text Research, pages 110-132.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Structural patterns vs. string patterns for extracting semantic information from dictionaries",
"authors": [
{
"first": "S",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of the 14 o~ COLING",
"volume": "",
"issue": "",
"pages": "546--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Montemagni and L. Vanderwende. 1992. Struc- tural patterns vs. string patterns for extract- ing semantic information from dictionaries. In Proc. of the 14 o~ COLING, pages 546-552, Nantes, France.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predictable meaning shift: Some linguistic properties of lexical implication rules",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ostler",
"suffix": ""
},
{
"first": "B",
"middle": [
"T S"
],
"last": "Atkins",
"suffix": ""
}
],
"year": 1992,
"venue": "Lexical Semantics and Knowledge Representation : First S[GLEX Workshop",
"volume": "",
"issue": "",
"pages": "87--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Ostler and B.T.S. Atkins. 1992. Predictable meaning shift: Some linguistic properties of lexical implication rules. In J. Pustejovsky and S. Bergler, editors, Lexical Semantics and Knowledge Representation : First S[GLEX Workshop, chapter 7, pages 87-100. Springer- Verlag.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributional clustering of english words",
"authors": [
{
"first": "N",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of the 33 th A CL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, N. Tishby, and L. Lee. 1995. Distri- butional clustering of english words. In Proc. of the 33 th A CL, Cambridge,MA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of the 14 th IJCAL",
"volume": "1",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proc. of the 14 th IJCAL volume 1, pages 448- 453, Montreal, Canada.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Scripts, plans and knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Schank",
"suffix": ""
},
{
"first": "F",
"middle": [
"L"
],
"last": "Abelson",
"suffix": ""
}
],
"year": 1975,
"venue": "Advance papers 4th Intl. Joint Conf. Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Schank and FL. Abelson. 1975. Scripts, plans and knowledge. In Advance papers 4th Intl. Joint Conf. Artificial Intelligence.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conceptual Structures in Mind and Machines",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sowa",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Sowa. 1984. Conceptual Structures in Mind and Machines. Addison-Wesley.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A tractable machine dictionary as a resource for computational semantics",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Fass",
"suffix": ""
},
{
"first": "G-M",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Plate",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Slator",
"suffix": ""
}
],
"year": 1989,
"venue": "Computational Lexicography for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wilks, D. Fass, G-M Guo, J. McDonald, T. Plate, and B. Slator. 1989. A tractable ma- chine dictionary as a resource for computational semantics. In Bran Boguraev and Ted Briseoe, editors, Computational Lexicography for Natu- ral Language Processing, chapter 9, pages 193- 231. Longman Group UK Limited.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Example of definitions",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "write]-> (obj)-> [message(let ter)] -> (sub j)-> [per .... :you] '.ssage is a group of words that is sent l'l'Olll ()lie person to ;~.Ilothel'. Many people send nmssages through the mail. (;C: {letter, message} CCKG: [word:group(message(letter))] <@~bj) <-[write]-> (sub)-> [person:you] -> (o,)-> [l,~per] <-(obj)<-[ ..... 11->(.~.bj)->[pe~\" ............ y] -> (frol/l)-> [pe ........... ] -> (to)-> [I .............. ther] -> (through)->[mail I Fignre 3: iDigger forward from letter.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "4",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table"
}
}
}
}