|
{ |
|
"paper_id": "1992", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:45:20.304941Z" |
|
}, |
|
"title": "Natural Language Analysis Using a Network Model -Modification Deciding Network", |
|
"authors": [ |
|
{ |
|
"first": "Masahiko", |
|
"middle": [], |
|
"last": "Ishikawa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Matsushita Electric Industrial Co.,Ltd. 1006", |
|
"location": { |
|
"addrLine": "Kadoma-shi", |
|
"postCode": "571", |
|
"settlement": "Kadoma, Osaka", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ryoichi", |
|
"middle": [], |
|
"last": "Sugimura", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Matsushita Electric Industrial Co.,Ltd. 1006", |
|
"location": { |
|
"addrLine": "Kadoma-shi", |
|
"postCode": "571", |
|
"settlement": "Kadoma, Osaka", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We have developed an original analyzing method using the network structure called the MDN. The network is similar to that of a neural network. In the MDN, all of the modification candidates can be compared in parallel, and it can decide the most appropriate interpretation effectively. It allows high quality of natural language analysis, and high analyzing speed. In this paper we will describe Japanese sentence analysis using the MDN, and then describe discussions about the MDN, comparing with sequential analysis, neural networks, and interactive analysis.", |
|
"pdf_parse": { |
|
"paper_id": "1992", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We have developed an original analyzing method using the network structure called the MDN. The network is similar to that of a neural network. In the MDN, all of the modification candidates can be compared in parallel, and it can decide the most appropriate interpretation effectively. It allows high quality of natural language analysis, and high analyzing speed. In this paper we will describe Japanese sentence analysis using the MDN, and then describe discussions about the MDN, comparing with sequential analysis, neural networks, and interactive analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As an application of natural language processing technology, a machine translation system from Japanese into English is under development. Our machine translation system is based on the transfer method that includes a Japanese sentence analysis module, a transfer module, and a generation module. The system has an interactive module to disambiguate sentence meaning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our main concern in Japanese sentence analysis is how to integrate syntactic constraints, semantic constraints, pragmatic constraints and user knowledge for sentence analysis which are traditionally applied in a sequential manner [Tsujii 84 ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 240, |
|
"text": "[Tsujii 84", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As a solution to this problem, we have developed an original method of analysis using a network structure. This network structure is called the MDN (Modification Deciding Network) . In the MDN, because all of the modification candidates can be compared in parallel, it becomes possible to ensure the correct analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 179, |
|
"text": "(Modification Deciding Network)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Significant characteristics of MDN are shown below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Because all of the modification candidates can be compared in parallel, it becomes possible to ensure the correct analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 MDN does not need learning. If we use a neural network method which needs learning, we have to get a vast amount of pre-analyzed example sentences. However, it is almost impossible to get a vast amount of data. Therefore, the neural network method Is not suited to a large-scale practical system. Whereas, because MDN does not need learning, it is suitable for a large-scale practical NLP system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Because the number of nodes in the MDN is reduced to the minimum necessary, MDN gives the improvement of efficiency in memory space, and a high speed of analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The modification candidates in the MDN are selected gradually using four rounds of activation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This method is useful for avoiding a local minimum problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we describe Japanese sentence analysis using the MDN, and then discuss the MDN, comparing with sequential analysis, neural networks, and so on. Though we describe Japanese analysis, the basic idea of the MDN is also useful for other natural languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Modification Analysis of Japanese \"Modification Analysis\" of Japanese is the analysis of what kinds of phrase modify a verb phrase and the relationships that hold between phrases in terms of meaning. In this paper, a modifying phrase is postpositions which relate to the modification of a verb).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One commonly used method of modification analysis is case analysis [Fillmore 68, Tsujii 84] . In case analysis, modification relations are decided by expressing the relationship between a verb and the phrases that modify it in terms of deep case (time, location, etc) and then by matching the semantic information of the modifying phrases with the case frame of the verb (which describes what semantics or what deep case a noun must have to be compatible with a verb).", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 80, |
|
"text": "[Fillmore 68,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 91, |
|
"text": "Tsujii 84]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This system also uses the case analysis method. With approx. 18,000 frames and over 3,000 semantic categories, our system embodies detailed case frames. However, because of the great vagueness caused by the structural and semantic freedom of Japanese, using case analysis alone, the system cannot resolve semantic ambiguities of the input sentence with the result that many interpretations are left. As a solution, the application of grammar rules based on heuristics has been considered , but the sequence of application of rules has presented a problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a solution to this problem, we have developed the MDN. In this method, all of the modification candidates are first compiled as nodes of the network structure. The network has a structure similar to that of a neural network (a connectionist model) [Rumelhart 86, Selman 89] . Each node has a numerical value called the activation level and is connected to the other nodes by a cooperative link or by an exclusive link. Each link has a weight. However, there is no stratified structure, it is a mutually connected network like a Hopfield network [Hopfield 85] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 265, |
|
"text": "[Rumelhart 86,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 276, |
|
"text": "Selman 89]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 561, |
|
"text": "[Hopfield 85]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A node updates its activation level with each time step. The new activation level is influenced by the sum of the weighted outputs from other nodes connected to it. This is called spreading activation [Collins 75, Rumelhart 86, Selman 89] . If the node A is connected to node B by a cooperative link, an increase in the activation level of node A will increase the activation level of node B. If nodes are connected by an exclusive link, an increase in the activation level of node A will decrease that of node B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 213, |
|
"text": "[Collins 75,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 227, |
|
"text": "Rumelhart 86,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 238, |
|
"text": "Selman 89]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the MDN, the activation level is not only influenced by the weight of the links, but also by a value called the activation level parameter. The activation level parameter is set on each node. It represents the syntactic and semantic priority of the node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The weight of the links and the activation level parameter are determined by a set of grammar rules called control rules. Also, by carrying out spreading activation in a manner similar to a neural network, the MDN can judge that the candidates that enter the activated state (whose activation level is higher than the threshold level) are correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Consider the following sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Example sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Kono kamera de kodomo ga asondeiru bamen wo satsueishita. This camera child playing scene filmed \"The child who was playing was filmed with this camera.\" Figure 1 shows a part of the network for this sentence. This sentence has many ambiguities. (a1, b1, etc. represent modification candidates in Figure 1 ) a1,b2: suitable \"The child who was playing was filmed with this camera.\" a2,b2: unsuitable \"The child who was playing with this camera was filmed.\" a1,b1: unsuitable \"(Someone) who was playing was filmed with this camera by the child.\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 162, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 305, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "By means of spreading activation, activation energy spreads over the network. Eventually, the node a1 (\"kono kamera de\" modifies \"satsueishita\") and the node b2 ( \"kodomo ga\" modifies \"asondeiru\") are activated. Then the first interpretation (\"The child who was playing was filmed with this camera.\") is chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese Analysis Using MDN", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The process of modification analysis as carried out in the MDN is shown below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Process of Modification Analysis with MDN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. The sentence to be analyzed is input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Process of Modification Analysis with MDN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. The input sentence is morphologically analyzed (word segmentation, supplementation of dictionary information etc).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Process of Modification Analysis with MDN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3. Syntactic analysis is carried out. Verbs and any modifying phrases are recognized in this process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Process of Modification Analysis with MDN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "4. Case Analysis is carried out and all possible modification candidates are extracted. 6. The link control rules are applied to the network. As a result, each node is connected with others by either a cooperative or exclusive link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Process of Modification Analysis with MDN", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The level control rules are applied to the network. As a result, an activation level parameter is set on each node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "8. Spreading activation is carried out on the network (refer to Section 7) and only those modification candidates that are in the activated state remain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "9. If modification candidates that are in a complete exclusive relation (eg. Rule1 and Rule3 in Section6.1) remain, the process is repeated from stage 8. At such a time, the present activation level is halved and the process continued, the nodes whose activation level is below the threshold level are deleted and spreading activation is repeated. If no modification candidates in a complete exclusive relation remain, the modification candidates that are in the activated state at that time are used as the correct interpretation. Then the analysis tree is built.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The possible number of repetitions of stages 8 and 9 is restricted to four. If it is the case that, even after spreading activation has been carried out four times, there are still modification candidate pairs with complete exclusive relations, then the average value of the activation level up to that point is compared and the higher score is the one used. For example, the modification candidate [8] of Figure 2 represents the modification relation in which the modifying phrase \"kamera de\" modifies the 13th slot (which indicates the deep case, here \"instrument\") of the first frame (the verb meaning, here \"play progressive\") of the verb phrase (\"asondeiru\").", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
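
{

"text": "As noted in the next paragraph, each modification candidate node is implemented as a structure in the C language. The following is a minimal, hypothetical sketch of such a node, not the system's actual definition; the field names simply follow the terms of Figure 2 and Section 7 (advp, vp, frame, slot, activation level, activation level parameter, link weights).\n\n/* Hypothetical sketch of a modification candidate node (cf. Figure 2). */\n#define MDN_MAX_LINKS 64\n\nstruct mdn_node {\n    int    advp;      /* index of the modifying phrase                   */\n    int    vp;        /* index of the verb phrase it modifies            */\n    int    frame;     /* case frame number (the meaning of the verb)     */\n    int    slot;      /* slot number in the frame, i.e. the deep case    */\n    double level;     /* current activation level A_j(t)                 */\n    double act;       /* activation level parameter (priority)           */\n    int    n_links;   /* number of links to other candidate nodes        */\n    int    link_to[MDN_MAX_LINKS];      /* indices of linked nodes       */\n    double link_weight[MDN_MAX_LINKS];  /* > 0 cooperative, < 0 exclusive */\n};\n\nUnder this sketch, the candidate [8] described above would have advp pointing at \"kamera de\", vp at \"asondeiru\", frame = 1, and slot = 13.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "7.",

"sec_num": null

},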
|
{ |
|
"text": "In this system, the modification candidate node is compiled as a structure of C language. Figure 3 shows a partial MDN for example sentence (1). The meaning of each term in Figure 3 is shown below. Examples of link control rules are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 98, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 181, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rule 1: A single verb cannot simultaneously express two meanings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When comparing two modification candidates, if the vp number is the same, but the frame, number (the number corresponding to the meaning of the verb) is different, these are connected by an exclusive link. These are in a complete exclusive relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Watashi wa 1nensei no eigo wo matte iru. I freshmen English take charge of \"I teach English to the freshmen.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The verb 'motte' has several meanings (have, take charge of, etc.) in general, but in this sentence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) the word has only one meaning (that is 'take charge of').", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this case, the node (vp = 'motte', meaning = 'have') and the node (vp='motte', meaning = 'take care of') are connected with an exclusive link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rule 2: When a single modifying phrase simultaneously modifies a number of verb phrases, there are many cases when the phrase's deep case is identical.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When comparing two modification candidates, if their advp number is the same, their vp number is different and their deep case is the same, then they are connected by a cooperative link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Kare wa hon wo kai, sore wo yonda.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"He bought a book and read it.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "He book buy it read", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this sentence, the noun 'kare' simultaneously modifies the verb 'kai' and the verb 'yonda'. The deep case of 'kare' and 'kai' is 'subject', and the deep case of 'kare' and 'yonda' is also 'subject'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "He book buy it read", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In most cases, they are identical.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "He book buy it read", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Pairs of modification candidates that break the non-intersection condition are connected by an exclusive link. These are in a complete exclusive relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule 3: Modification relationships must not cross each other (The Non-Intersection Condition).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Watashi wa kare ga kinou sakkyokushita kyoku wo kiita. I he yesterday composed song listened \"I listened to the song he composed yesterday.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The node ('kare' modifies 'kiita') and the node ('watashi' modifies 'sakkyokushita') break the nonintersection condition (their modification relationships cross each other). They are connected by an exclusive link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentence:", |
|
"sec_num": null |
|
}, |
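
{

"text": "Taken together, the link control rules illustrated above (Rules 1-3) could be sketched roughly as follows. This is a hypothetical illustration, not the system's actual code: the weight constants are invented, advp and vp are taken to be linear positions in the sentence, and deep cases are compared as integer codes.\n\n/* Hypothetical sketch of the link control rules for one pair of candidates. */\n#define W_COOP   0.3   /* example weight of a cooperative link */\n#define W_EXCL  -0.5   /* example weight of an exclusive link  */\n\n/* Rule 3 helper: two modification relations cross if one modifier lies\n   between the other relation's modifier and its verb while the two verbs\n   lie on opposite sides (advp and vp are word positions). */\nstatic int crosses(int advp_a, int vp_a, int advp_b, int vp_b)\n{\n    return (advp_a < advp_b && advp_b < vp_a && vp_a < vp_b) ||\n           (advp_b < advp_a && advp_a < vp_b && vp_b < vp_a);\n}\n\n/* Returns the weight of the link between candidates a and b (0 = no link).\n   frame = verb meaning number, case = deep case code of the slot. */\ndouble link_weight(int advp_a, int vp_a, int frame_a, int case_a,\n                   int advp_b, int vp_b, int frame_b, int case_b)\n{\n    /* Rule 1: same verb phrase but different verb meaning: exclusive. */\n    if (vp_a == vp_b && frame_a != frame_b)\n        return W_EXCL;\n\n    /* Rule 3: crossing modification relations: exclusive. */\n    if (crosses(advp_a, vp_a, advp_b, vp_b))\n        return W_EXCL;\n\n    /* Rule 2: same modifier, different verbs, same deep case: cooperative. */\n    if (advp_a == advp_b && vp_a != vp_b && case_a == case_b)\n        return W_COOP;\n\n    return 0.0;\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example sentence:",

"sec_num": null

},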
|
{ |
|
"text": "An activation level parameter is set on each node according to level control rules. In the case that the priority of a node is to be increased, the activation level parameter is increased. In the case that the priority of a node is to be lowered, the activation level parameter is decreased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Examples of level control rules are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Rule 4: Obligatory case is given precedence over optional case and free case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The activation level parameter is changed according to the degree of precedence of the deep cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Rule 5: Precedence is given to the modification candidate whose modifying phrases are closer in distance to verbs than others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The activation level parameter is changed according to the degree of precedence of distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Rule 6: It is rare that a modifying phrase with a touten punctuation mark (\"_\") modifies an embedded verb phrase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "When checking a modification candidate, if the modifying phrase contains a touten mark and the verb phrase it modifies is embedded, then the activation level parameter is lowered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Rule 7: Precedence is given to a modification candidate in which the modifying phrase and the verb phrase have strong connection in terms of semantics (for example, \"camera\" and \"filmed\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Level Control Rules", |
|
"sec_num": "6.2" |
|
}, |
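
{

"text": "The level control rules could likewise be sketched as a single adjustment of a node's activation level parameter. The function below is a hypothetical illustration of Rules 4-6 only (Rule 7, the strength of the semantic connection, would raise the parameter in the same way); all constants are invented for the sketch.\n\n/* Hypothetical sketch: level control rules adjust the activation level\n   parameter (act) of a single candidate node. */\nenum case_kind { OBLIGATORY_CASE, OPTIONAL_CASE, FREE_CASE };\n\nvoid apply_level_rules(double *act,          /* activation level parameter     */\n                       enum case_kind kind,  /* Rule 4: kind of deep case      */\n                       int distance,         /* Rule 5: advp-vp distance (>=1) */\n                       int has_touten,       /* Rule 6: phrase has a touten    */\n                       int vp_is_embedded)   /* Rule 6: verb phrase embedded   */\n{\n    /* Rule 4: obligatory case is given precedence over optional and free case. */\n    if (kind == OBLIGATORY_CASE)\n        *act += 0.2;\n    else if (kind == OPTIONAL_CASE)\n        *act += 0.1;\n\n    /* Rule 5: the closer the modifying phrase is to the verb, the higher the\n       priority. */\n    *act += 0.1 / (double)distance;\n\n    /* Rule 6: a modifying phrase with a touten rarely modifies an embedded\n       verb phrase. */\n    if (has_touten && vp_is_embedded)\n        *act -= 0.2;\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Examples of Level Control Rules",

"sec_num": "6.2"

},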
|
{ |
|
"text": "Spreading activation is performed as shown below (refer to [Williams 86, Collins 75] ). In the general spreading activation algorithm, the new activation level is a function of its previous activation level and the sum of the weighted outputs from other nodes connected to it. However, in the MDN, the activation level parameter is also added at each time step. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 72, |
|
"text": "[Williams 86,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 73, |
|
"end": 84, |
|
"text": "Collins 75]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spreading Activation", |
|
"sec_num": "7" |
|
}, |
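
{

"text": "The exact update formula appears in the original figure and is not reproduced here; the following is a minimal sketch of one round of spreading activation, consistent with the description above and with the symbols defined for this section (A_j(t), act, E_ij, I_ij, J_DELTA1, J_DELTA2, J_MAX_LEVEL, 20 time steps per round). The coefficient values and the clamping of the level to the range 0 to J_MAX_LEVEL are assumptions of the sketch.\n\n#include <stdlib.h>\n\n#define J_MAX_LEVEL 1.0   /* maximum value of the activation level (assumed)   */\n#define J_DELTA1    0.1   /* delta coefficient for activation level parameters */\n#define J_DELTA2    0.1   /* delta coefficient for link weights (assumed)      */\n#define T_MAX       20    /* time steps per round, as stated in this section   */\n\n/* level[j]      : A_j(t), activation level of node j\n   act[j]        : activation level parameter of node j\n   weight[i*n+j] : E_ij (>= 0, cooperative) or I_ij (< 0, exclusive); 0 if unlinked */\nvoid spread_activation(size_t n, double *level, const double *act,\n                       const double *weight)\n{\n    double *next = malloc(n * sizeof *next);\n    if (next == NULL)\n        return;\n    for (int t = 0; t < T_MAX; t++) {\n        for (size_t j = 0; j < n; j++) {\n            double input = 0.0;\n            for (size_t i = 0; i < n; i++)\n                input += weight[i * n + j] * level[i];\n            double v = level[j] + J_DELTA2 * input + J_DELTA1 * act[j];\n            if (v > J_MAX_LEVEL) v = J_MAX_LEVEL;\n            if (v < 0.0)         v = 0.0;\n            next[j] = v;\n        }\n        for (size_t j = 0; j < n; j++)\n            level[j] = next[j];\n    }\n    free(next);\n}\n\nA node whose final level exceeds the threshold is taken to be in the activated state; as described in Section 4, deactivated candidates are then removed and the procedure may be repeated for up to four rounds.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Spreading Activation",

"sec_num": "7"

},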
|
{ |
|
"text": "Most conventional natural language processing systems use a sequential analysis method. Syntactic and semantic processing are separated in these systems. However, this separation is not good in many cases, and the sequence of applying grammar rules may cause a problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDN vs Sequential Analysis", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "Consider the following sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDN vs Sequential Analysis", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "Watashi wa kono kamera de kodomo ga asondeiru bamen wo satsueishita. I this camera child playing scene filmed \"I filmed with this camera the child who was playing\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Watashi wa kodomo ga kono kamera de asondeiru bamen wo satsueishita. I child this camera playing scene filmed \"I filmed the child who was playing with this camera.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Both sentences have the following ambiguities. modification candidate 1: \"kamera de\" modifies \"asondeiru\" modification candidate 2: \"kamera de\" modifies \"satsueishita\" Candidate 1 has precedence for distance (the modifying phrase is closer to the verb phrase; refer to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rule 5 in Section 6.2). Candidate 2 has precedence for strong connection (\"kamera\" and \"satsueishita\" have a strong connection; refer to Rule 7 in Section 6.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In sentence (5), candidate 2 is preferable to candidate 1 (Rule 7 is stronger than Rule 5). However, in sentence (6), candidate 1 is preferable (Rule 5 is stronger than Rule 7). Therefore, the order of applying rules should be changed according to the situation. Sequential analysis systems are at a disadvantage in this problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Syntactic, semantic, and pragmatic rules should be processed in a highly integrated manner. The MDN is suitable for modeling highly integrated forms of processing. In the MDN, because all of the possible modification candidates can be compared in parallel, compared to systems that use sequential analysis, more accurate analyses can be obtained. been previously reduced by case analysis, the number of nodes in the network is reduced to the minimum necessary, and efficiency of processing and use of memory space are improved. Therefore, the MDN achieves a high speed of analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example sentences:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In a neural network, the weight of a link is determined by using a single learning rule (such as a back-propagation rule [Rumelhart & Hinton 86] ). Whereas in the MDN, it is determined by a number of control rules, and so the network can be controlled much more precisely.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 144, |
|
"text": "[Rumelhart & Hinton 86]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDN vs Neural Network", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "Because natural languages, such as Japanese and English, have a lot of ambiguities, it is difficult to analyze all input sentences perfectly. So interaction with users can be useful [Maruyama 90 ]. However,", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 194, |
|
"text": "[Maruyama 90", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interactive analysis and MDN", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "if the system asks the user about all modification ambiguities, they may become tired of the interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interactive analysis and MDN", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "A system using an MDN can make use of only a little help from the user. For example, take the case where the user teaches the system: \"This modification ('kamera' modifies 'asondeiru') is wrong.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interactive analysis and MDN", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "I'm not sure about the other modifications.\" In the MDN, the initial activation level of the wrong modification candidate is set at zero, and then spreading activation is carried out again. As a result, the new interpretation which satisfies the intention of the user will be obtained. Therefore, the MDN is suitable for interactive analysis. Indeed, our MT system has an interactive function using the MDN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interactive analysis and MDN", |
|
"sec_num": "8.3" |
|
}, |
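
{

"text": "A minimal sketch of this feedback step, reusing the hypothetical spread_activation routine sketched in Section 7: the rejected candidate's initial activation level is set to zero and activation is spread again. The source states only that the level is zeroed; a full implementation might also exclude the node outright, which is left out here.\n\n#include <stddef.h>\n\n/* Hypothetical spreading activation routine from the Section 7 sketch. */\nvoid spread_activation(size_t n, double *level, const double *act,\n                       const double *weight);\n\n/* The user rejects modification candidate k: clear its activation level\n   and spread activation over the network again. */\nvoid reanalyze_after_feedback(size_t n, double *level, const double *act,\n                              const double *weight, size_t k)\n{\n    level[k] = 0.0;                            /* wrong candidate starts at zero */\n    spread_activation(n, level, act, weight);  /* re-spread the activation       */\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Interactive analysis and MDN",

"sec_num": "8.3"

},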
|
{ |
|
"text": "\u2022 In the MDN, the modification is determined using four rounds of activation rather than one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other characteristics of MDN", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "Because deactivated modification candidates are removed after each round of spreading activation, the candidates are selected gradually using this method. As a result, a more precise interpretation can be obtained. This is useful for avoiding a local minimum problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other characteristics of MDN", |
|
"sec_num": "8.4" |
|
}, |
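
{

"text": "The gradual selection described in this item can be sketched as an outer loop around the spreading activation routine from the Section 7 sketch. Everything here is hypothetical: the threshold value, the halving step, and the remains_conflict predicate, which stands for the check for complete exclusive relations in step 9 of Section 4.\n\n#include <stddef.h>\n\n/* Hypothetical spreading activation routine from the Section 7 sketch. */\nvoid spread_activation(size_t n, double *level, const double *act,\n                       const double *weight);\n\n#define THRESHOLD  0.5   /* activated-state threshold (value assumed) */\n#define MAX_ROUNDS 4     /* at most four rounds (Section 4)           */\n\nvoid select_gradually(size_t n, double *level, double *act, const double *weight,\n                      int (*remains_conflict)(const double *level, size_t n))\n{\n    for (int round = 0; round < MAX_ROUNDS; round++) {\n        spread_activation(n, level, act, weight);\n        for (size_t j = 0; j < n; j++) {\n            if (level[j] < THRESHOLD) {\n                level[j] = 0.0;   /* deactivated candidate is removed    */\n                act[j]   = 0.0;   /* assumed: it also loses its priority */\n            }\n        }\n        if (!remains_conflict(level, n))\n            return;               /* the activated candidates give the interpretation */\n        for (size_t j = 0; j < n; j++)\n            level[j] *= 0.5;      /* halve the present levels and repeat (step 9)     */\n    }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Other characteristics of MDN",

"sec_num": "8.4"

},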
|
{ |
|
"text": "\u2022 The MDN can embed different type of rules easily. We have implemented link control rules and level control rules. If you want to handle rules concerning mutual relations between modification candidates, you use link control rules. In the case of rules concerning the precedence level of the modification candidates, you use level control rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other characteristics of MDN", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "\u2022 Because the reliability of the grammar rules can be expressed by the weight of a link and the activation level parameter, the influence of each grammar rule on the network varies with the reliability of the rule. Therefore, even heuristic rules whose reliability is not very high are sufficient for use in the system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other characteristics of MDN", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "We have discussed the basic idea of the MDN and compared natural language analysis approaches using a sequential process, a neural network and an interactive process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We believe that syntactic, semantic, and pragmatic rules should be processed in an integrated manner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We wanted to handle rules of a different type including heuristic rules in a highly integrated manner. A neural network or a connectionist model is suitable for that purpose. However, it needs a vast amount of example sentences for learning, and it takes much time to compute. So most natural language processing systems with a neural network can only handle a small number of input sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We have therefore developed the MDN. The network structure of the MDN is similar to that of a neural network, but the MDN does not need learning. Therefore, there is no need to get a lot of example sentences for learning. Also, the number of nodes in the MDN is reduced to the minimum necessary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Consequently, the MDN can handle a large number of sentences at high speed and is suitable for a large-scale natural language processing system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We can also think of natural language analysis as a constraint satisfaction problem (a combinatorial optimization problem). There are a lot of constraints, such as syntactic, semantic and heuristic constraints. Hopfield proposed a mutually connected network called the Hopfield network as a solution to this problem [Hopfield 85 ]. The MDN is a mutually connected network. In the MDN, constraints are", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 328, |
|
"text": "[Hopfield 85", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "represented as control rules. The spreading activation mechanism plays an important role in deciding the most appropriate interpretation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We tried to improve the disadvantages (speed, scale-problem) of a neural network, and apply its advantages (parallelism, suitability for a constraint satisfaction problem) to natural language processing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We have now developed many grammatical rules. The MDN is a method which amalgamates a neural approach and a rule-base approach. So our approach is a middle ground approach between empiricist and rationalist.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Timothy Cornish and Tatsuro Kyoden for useful comments and discussions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Passing Markers: A Theory of Contextual Influence in Language Comprehension", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Cognitive Science", |
|
"volume": "83", |
|
"issue": "", |
|
"pages": "171--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charniak 83] Charniak,E.: \"Passing Markers: A Theory of Contextual Influence in Language Compre- hension\", Cognitive Science 7, pp.171-190, 1983", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A spreading-activation theory of semantic processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Loftus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Psychological Review", |
|
"volume": "82", |
|
"issue": "", |
|
"pages": "407--429", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Collins 75] Collins,A.M. and Loftus,E.F.: \"A spreading-activation theory of semantic processing\", Psy- chological Review 82, pp.407-429, 1975", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Design of a Hybrid Deterministic Parser", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Faisal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Kwasny", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Coling-90", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Faisal 90] Faisal,K.A. and Kwasny,S.C.: \"Design of a Hybrid Deterministic Parser\", Coling-90, pp.11-16, 1990", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Case for Case", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "Universals in Linguistic Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fillmore,C.J. :\"The Case for Case\", In Universals in Linguistic Theory, Bach,E. and Har ms, R.T.(eds), Holt, Rinehart and Winston, 1968", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural computation of decisions in optimization problems", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hopfield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Biol. Cybernetics", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "141--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Hopfield 85] Hopfield,J.J. and Tank,D.W.: \" Neural computation of decisions in optimization problems\", Biol. Cybernetics 52, pp.141-152, 1985", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An Interactive Japanese Parser for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Maruyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Coling-90", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--262", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Maruyama 90] Maruyama,H., and Watanabe, H.: \"An Interactive Japanese Parser for Machine Trans- lation\", Coling-90, pp.257-262, 1990", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Kawamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Parallel Distributed Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "272--325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[McClelland 86] McClelland,J.L. and Kawamoto,A.H.: \"Mechanisms of Sentence Processing: Assign- ing Roles to Constituents of Sentences\", In Parallel Distributed Processing, Rumelhart, D.E. and McClelland, J.L, MIT Press, Volume2, pp.272-325, 1986", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Parallel Distributed Processing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Rumelhart 86] Rumelhart, D.E and McClelland, J.L : Parallel Distributed Processing, MIT Press, 1986", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning Internal Representation by Error Propagation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Parallel Distributed Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "318--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Rumelhart & Hinton 86] Rumelhart, D.E., Hinton, G.E. and Williams, R.J.: \" Learning Internal Represen- tation by Error Propagation\", In Parallel Distributed Processing, Rumelhart,D.E. and McClel- landJ.L., MIT Press, Volume 1, pp.318-362, 1986", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Connectionist systems for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Selman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "23--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Selman 89] Selman, B.: \"Connectionist systems for natural language understanding\", Artificial Intelli- gence Review 3, pp.23-31, 1989", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Analysis Grammar of Japanese in the Mu-Project", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nakamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "267--274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Tsujii 84] Tsujii,J.,Nakamura,J., and Nagao, M.: \"Analysis Grammar of Japanese in the Mu-Project\", Coling-84, pp.267-274, 1984.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Waltz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Pollack", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "51--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Waltz, D.L. and Pollack, J.B.: \"Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation\", Cognitive Science 9, pp.51-74, 1985", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The Logic of Activation Functions", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Parallel Distributed Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Williams 86] Williams, R.J.: \"The Logic of Activation Functions\", In Parallel Distributed Processing, Rumelhart, D.E. and McClelland, J.L., MIT Press, pp.423-443, 1986", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Modification candidates are compiled as nodes of the network structure." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "MDN diagrams for example sentence(l)" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Data Structure of MDN Using case analysis, all the candidates with possible modification relations are extracted. Some of the modification candidates for example sentence (1) are shown in Figure 2." |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Modification candidatesInFigure 2, advp represents a modifying phrase, and vp a verb phrase." |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "example of MDN (a part) 6 Control Rules Control rules come in the form of link control rules and level control rules. In the current version, there are 22 control rules.6.1 Examples of Link Control RulesNodes are connected by either cooperative or exclusive links according to the link control rules. In a mutually cooperative relationship, partner nodes are connected with a cooperative link, and partner nodes in an exclusive relationship are connected with an exclusive link. The weight of a link is changed according to the degree of cooperativeness or exclusiveness. The weight of a cooperative link is a positive value, while the weight of an exclusive link is a negative value." |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Aj(t) : the activation level of the jth MDN node at time t act : the activation level parameter Eij(\u2265 0) : the weight of the cooperative link from i node to j node Iij(< 0) : the weight of the exclusive link from i node to j node n : the total number of nodes J_-DELTA1 : the delta coefficient for activation level parameters J_DELTA2 : the delta coefficient for link control rules J_MAX_LEVEL : the maximum value of activation level In the current version, the number of repetitions of spreading activation (the maximum value of t) is set at 20. As mentioned above, this is repeated for a maximum of 4 rounds (refer to Section 4). 8 Discussions about MDN Let us discuss characteristics of MDN comparing sequential analysis, neural networks, etc." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Using a neural network method, the weight of a link is determined using learning[McClelland 86]. If we want to develop a practical MT system, we need a vast amount of sentence data whose analyzed results are already known. They are to be used as learning data. However, it is almost impossible to get such a vast amount of analyzed sentences. Consequently, most natural language processing systems using a neural network method can handle only a small number of input sentences. In the MDN, because the weight of a link and activation level parameter are determined by the control rule, there is no need to get example sentences for learning. Therefore, the MDN is useful for a practical (i.e. large-scale) natural language processing system. Using a neural network method, it is not clear what sort of operation each of the nodes is performing or how many nodes are needed. Consequently, not all of the nodes are functioning effectively. Using the MDN, however, because the nodes correspond to the modification candidates, whose number has" |
|
} |
|
} |
|
} |
|
} |